DevOps Factory

This article gives a short overview of my DevOps Factory virtualization project and the launch of the Acando.tech platform at Acando GmbH.

 

Introduction

Around Christmas 2016 I kicked off a private project to further educate myself in Linux and server virtualization. Since I did not want to operate multiple servers in a classic full rack, and since technology had moved on, I went ahead and built my own server configuration from individual components based on the 2011/2012 Intel architecture.

 

Today, what started as a private side project runs a platform for 100 employees at a top IT consulting company in the German market.

Building the Server

Building an enterprise-grade server system with limited resources in my spare time was quite the challenge. I had prior experience from building consumer-grade PCs; enterprise-grade servers, however, add some new challenges to the spectrum. The final build contains the following components:

 

Phanteks Enthoo Luxe E-ATX tower (SSI-EEB form factor)
5x Phanteks PWM fans
2x Arctic Freezer CPU coolers
2x Noctua 140 mm industrial PWM fans

ASRock EP2C602-4L/D16 mainboard
2x Intel Xeon E5 v2 CPUs, 12 cores @ 2.5 GHz
16x 16 GB ECC RDIMM DDR3-1600 RAM
2x 120 GB WD Green SATA3 SSD
2x 1 TB Samsung 970 Pro M.2 NVMe SSD
2x 525 GB Crucial MX SATA3 SSD
2x 4 TB WD Red 5400 rpm SATA3 HDD
1x Inateck USB 3.0 PCIe adapter

The Hardships of Hardware

Building and testing components, as well as ordering, packaging, and returning goods, can be tough in a time frame of only Saturdays and Sundays at home. So the first challenge was coordinating shipments and testing components.

 

 

Secondly, heat was a major issue, as servers produce rather unpleasant amounts of excess heat. At one point this even had me improvising cooling for a stability test while travelling and checking the progress remotely. To prevent damage to the components, I had to come up with a remotely accessible hardware toggle that could cut the current in case the hardware overheated.
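The idea behind that toggle can be sketched in a few lines of shell. The temperature source is the standard Linux sysfs path; the actual power cut (`power_off` below) is a hypothetical stand-in for whatever remotely switchable outlet is in use:

```shell
#!/bin/sh
# Sketch of the overheat kill-switch logic. 'power_off' is a hypothetical
# stand-in for the remotely switchable outlet that actually cut the current.
MAX_TEMP=85   # degrees Celsius

overheated() {
    # succeeds when the given temperature exceeds the limit
    [ "$1" -gt "$MAX_TEMP" ]
}

read_temp() {
    # Linux reports CPU temperature in millidegrees Celsius under sysfs;
    # fall back to 0 on systems without a thermal zone
    awk '{ print int($1 / 1000) }' /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo 0
}

if overheated "$(read_temp)"; then
    echo "CPU overheating, cutting power"
    # power_off    # e.g. toggle a smart PDU outlet here
fi
```

Run from cron every minute, a check like this is enough to keep an unattended stability test from cooking the hardware.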

 

One of the more famous pictures was taken on a Monday morning before travelling:

 

 

Ultimately I ended up with a better-ventilated case and improved cooling through industrial fans. Additionally, I had to test all the PCI connectors and ran into issues with some newer hardware components, which set the project back a little.

 

 

In between, a rack-mount solution was also tested, but I reverted to a tower case, as no data-center housing was planned yet and drive capacity was allotted differently.

 

 

With everything tested thoroughly, the final test runs took place in the new office setup, and preparations to ship the server to the data center began.

Moving on to Software

With the hardware intact and working as intended, it was time to move on to the extensive software setup.
In this particular case, we are using Proxmox VE, a Debian-based open-source hypervisor for Linux virtualization.

 

 

Here we leverage LXC and KVM virtualization to create a completely internal data center, so that internal traffic never leaves the machine.
Our main driver for storage is ZFS, and I also put in place a disaster recovery plan with external online storage; the following figure illustrates this in more detail.
I am currently preparing an upgrade to newer NVMe flash storage to improve overall capacity and speed.
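To illustrate the internal-only networking, the sketch below creates an unprivileged LXC container attached to a bridge with no physical uplink. The container ID, template name, and bridge `vmbr1` are assumptions rather than the actual production values, and the wrapper only echoes the commands unless `DOIT=1` is set, since `pct` exists only on a Proxmox VE host:

```shell
#!/bin/sh
# Dry-run wrapper: echo each command unless DOIT=1 (pct exists only on Proxmox VE).
run() {
    if [ "${DOIT:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

CTID=101                  # hypothetical container ID
BRIDGE=vmbr1              # internal bridge with no physical uplink
TEMPLATE=local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst

# Unprivileged container whose only NIC sits on the internal bridge,
# so its traffic never leaves the host.
run pct create "$CTID" "$TEMPLATE" \
    --hostname build-agent \
    --unprivileged 1 \
    --net0 "name=eth0,bridge=$BRIDGE,ip=10.10.10.$CTID/24"
run pct start "$CTID"
```

Containers that do need to reach the outside world get a second interface on the uplinked bridge instead; everything else stays on the internal one.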

 

 

 

 
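The disaster-recovery side, ZFS snapshots replicated to external online storage, can be sketched the same way. Pool, dataset, and the offsite host below are made-up placeholders, and the commands are echoed rather than executed unless `DOIT=1` is set:

```shell
#!/bin/sh
# Dry-run wrapper, as the zfs tooling is only present on the server itself.
run() {
    if [ "${DOIT:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

DATASET=tank/artifacts                 # hypothetical pool/dataset
TODAY=$(date +%Y-%m-%d)
SNAP="$DATASET@backup-$TODAY"

# Take a point-in-time snapshot and stream it to the offsite box over SSH.
run zfs snapshot "$SNAP"
run sh -c "zfs send '$SNAP' | ssh backup@offsite.example.com zfs receive -F backup/artifacts"
```

A real job would use `zfs send -i` against the previous snapshot for incremental transfers; the full send is only needed for the initial seed.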

The main goal was to create a platform based on the Atlassian stack and software production tools, providing reliable, centralized artefact delivery for software projects.

 

Selling the Platform

Around this time the project was finalized and I sold the hardware and platform to Acando GmbH. We came in at around 25% of competitors' initial investment, with more than double the average computing power and lower operational costs than any cloud service.

 

Counting only the hardware investment and operations, the ROI reached break-even after six months compared to hosting at service providers. As outsourced operations also require individual application management and setup, I can safely say that running your own hardware is more cost-efficient when you need the flexibility to create your own platforms. Labour, however, is the biggest item on the bill, and that holds regardless of the hosting model: even with traditional compute outsourcing you still need to set up and fully monitor your applications.

 

As of the time of writing, I am administrating the data-center rack remotely, without a noisy server in my office, and managing delivery and production artefacts for Acando GmbH in Germany.

 

Remote administration via IPsec tunnel: improving the environment and documenting the rollout presentation
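For context, a site-to-site IPsec setup in the spirit of the one used here could look like the following strongSwan `ipsec.conf` fragment. All names, subnets, and addresses are made-up placeholders, not the production configuration:

```
# /etc/ipsec.conf (strongSwan) -- hypothetical example, not the production config
# left = admin/office side, right = data-center gateway (documentation addresses)
conn office-to-datacenter
    keyexchange=ikev2
    left=%defaultroute
    leftsubnet=192.168.10.0/24
    right=203.0.113.10
    rightsubnet=10.10.10.0/24
    authby=secret
    auto=start
```

With the tunnel up, the internal management interfaces are reachable as if the rack were on the local network, without exposing them to the public internet.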

 

Conclusion

This project taught me a lot about setting up a modern business infrastructure. I was able to test and improve my skills in various domains such as hardware, networking, application management, Linux systems, storage systems, error analysis, architecture design, and administration, among others. Nowadays I also use the same software stack to maintain plunaris, a fully virtual collaboration environment for professional software production.

 

Daniel Gensert
IT Consultant @ Acando GmbH
Founder of gensert.tech