The infrastructure that protects your data

Heinlein Support, the company behind mailbox.org, has been operating its own technical infrastructure with dedicated servers for more than 25 years. Because all devices and wiring are the company’s property, operations remain entirely under our control and nobody else has access. With this in mind, we are confident in saying that the e-mails, files, and other sensitive data of all our customers, whether private or business, enjoy the best possible security.

Our server racks are distributed across a number of professionally equipped data centres in and around Berlin, Germany, where we rent our own spaces. To give you some insight into how operations run at this technical level, we accompanied Stefan Wagner, the administrator responsible for our server infrastructure, on a visit to one of our data centre locations.

 

The data centre

We are at a business park in the middle of Berlin. The building itself has multiple storeys, is purpose-built, and makes a rather unassuming impression. Looking around, however, we can see that this is a high-security environment: access requires a series of security checks, and anyone allowed to enter must swipe a personal, biometrically coded key card and enter access codes to open the doors.

There is a strict time limit for passing through any of the doors, to make sure that nobody can simply follow an authorised person into the building. As a result, the admins get a fair bit of physical exercise at times, hurrying along with 50 lb (around 23 kg) of new hardware on their backs. All rooms, corridors, and stairways are monitored by CCTV.

 

Not for the faint-hearted

On each floor of the building there are huge halls where the servers are located. Before going in, Stefan grabs some hearing protection from a box by the door. He recommends we do the same, as things can get noisy in there!

Upon entering the hall where the mailbox.org servers are located, we feel a stream of warm air on our faces and hear humming and rattling to the left and right. You hear the servers before you actually see them – there are thousands of machines in this hall, and each server cabinet is locked within an additional cross-barred case. All this equipment must weigh many tonnes, which explains why the floors and walls of the building are so sturdy and massive. Some of the server cabinets are not just sitting within a secure case like the others, but are locked behind yet another barrier and protected by an alarm system.

All these machines make a lot of noise – more than 80 decibels where we are standing, directly next to them. Now we are glad to have brought the hearing protection along. At least the climatic conditions in this place are relatively pleasant: dry and warm. Time for a picnic? No – eating and drinking are not allowed, in order to prevent accidents and protect the sensitive hardware.

The individual server cabinets are neatly lined up, and the cooling system regulates the air flow in the hall to make sure all systems are properly ventilated and heat does not build up anywhere. Looking at the ceiling, we see huge amounts of wiring – these, we learn, are the power lines and network connections.

Our colleague Stefan Wagner visits all the data centre locations at least twice a week. Cordula Velten, who has a communications role at mailbox.org, has joined Stefan on one of his trips today and uses the opportunity to ask him a number of questions about his work, which we hope our customers might find interesting:

Stefan, what exactly are your responsibilities?

Running our own mail servers involves a great deal of work, such as performing maintenance on all devices and making sure there are regular backups of the data. I am responsible for all infrastructure-related operations, which means setting up new devices, making sure we always have sufficient spare parts in stock, and planning the maintenance shifts.

An important aspect of my work is replacing defective hard drives that store e-mails and other data. If a drive fails, it should be replaced soon: even though we use redundant storage, the load on the remaining disks in the storage array increases. You can imagine this like a chain of lights – as soon as one or two lights fail, the remaining lights have to soak up the additional power on the line, which increases the overall risk of failure. Our policy is that as soon as a failing device is detected, we go in and replace it immediately.
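To illustrate how such a failure might be detected automatically, here is a minimal sketch in Python that reads the Linux software-RAID status file /proc/mdstat and reports degraded arrays. It is only an illustration of the principle, not a description of the monitoring tools actually used at mailbox.org.

import re

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return the names of md arrays whose status shows a failed member.

    In /proc/mdstat a healthy two-disk mirror reports "[UU]"; a missing
    or failed disk appears as an underscore, for example "[U_]".
    """
    degraded = []
    current = None
    with open(mdstat_path) as f:
        for line in f:
            match = re.match(r"^(md\d+)\s*:", line)
            if match:
                current = match.group(1)
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded.append(current)
                current = None
    return degraded

if __name__ == "__main__":
    for name in degraded_arrays():
        print(f"Array {name} is degraded - schedule a disk replacement.")

In practice, a check like this would feed into a central monitoring system rather than print to the console, but the underlying information is the same.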

Another part of my work involves testing all the wiring and replacing defective or worn-out cables. In addition to common copper cables we also use heavy-duty fibre-optic cables, which are physically less robust and prone to fracturing. These we need to check particularly often.

Finally, there is the installation of the actual servers. These can be servers that belong to mailbox.org or dedicated servers for our hosting customers. All machines have their software pre-installed before they are transported here. We then secure the servers in their cases before connecting them.

Apart from yourself, are there any other people administering the servers?

Yes. All our administrators have been specially trained to work with the servers. We have a staff rota for on-call duty, and all colleagues involved are qualified to carry out any maintenance work required, which might involve the occasional trip to a data centre. Normally, however, the admins control operations from their offices. This way, we can make sure that an administrator is on duty 24/7 to monitor the entire infrastructure. Our monitoring system automatically flags any faults with connected devices and issues alarms via text message and e-mail. The colleague on duty can then act immediately to resolve any issues that arise.
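The basic idea behind such alerting can be shown with a small sketch: periodically check whether each host responds, and send an e-mail to the on-call address if one does not. The host names, SMTP relay, and addresses below are made up for the example, and a real deployment would rely on a dedicated monitoring system rather than a script like this.

import smtplib
import subprocess
from email.message import EmailMessage

HOSTS = ["mail-01.example.net", "storage-02.example.net"]  # hypothetical hosts
SMTP_RELAY = "smtp.example.net"                            # hypothetical mail relay
ALERT_RECIPIENT = "oncall@example.net"                     # hypothetical on-call address

def host_is_up(host):
    """Return True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def send_alert(host):
    """Send a plain-text alert e-mail for an unreachable host."""
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {host} is not responding"
    msg["From"] = "monitoring@example.net"
    msg["To"] = ALERT_RECIPIENT
    msg.set_content(f"The host {host} did not answer a ping check.")
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    for host in HOSTS:
        if not host_is_up(host):
            send_alert(host)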

What aspects of your work do you find most challenging?

Some of our maintenance rounds take place at night, when we must finish the work within a strict time window. We do this to keep service downtime short and the possible impact on our customers to a minimum. Then there is the physical aspect: some servers can be heavy – we frequently handle devices weighing between 10 and 60 kilograms, and if a big one needs to go into the top shelf of a rack, that’s quite an effort. For this reason, we often work in pairs when setting up new servers at the data centre.

Of course there are also more trivial issues, for instance when we set up custom servers that need some convincing to fit into the standardised data centre cases because they are older or bigger than modern servers. It can be a bit of an annoyance sometimes, but so far we have always found a solution for whatever was thrown at us.

What was the biggest challenge you can remember?

That was clearly the setup of our second data centre location. A few years ago, we decided to migrate our mission-critical systems to another site, one with a better network connection and access to a faster Internet node. In practice, this meant we had to disconnect server cabinets at different locations and move them to the new data centre. We managed to move about ten servers per night, which may not sound like much, except it is! Consider what is involved: the physical weight of the machines, disconnection and disassembly, wrapping everything up securely, the transport to the new location, unpacking, connecting, and making sure everything is running smoothly before the maintenance window closes…

There are so many cables – don’t you get confused sometimes?

Without care this could easily happen, I guess, but we work very methodically to avoid confusion and prevent mistakes. For example, we use a colour-coding system in which certain colours are reserved for specific types of connections or functions. Furthermore, every cable is labelled at both ends so that we can quickly identify the connected ports.

Do you operate any other devices in the data centre that are not servers?

In addition to the servers, there are other devices that form part of the overall infrastructure, for instance network switches and routers, which interconnect the servers and handle the transmission of data over the Internet.

Wouldn’t it be easier to just place the servers in the basement of our building, rather than in a data centre?

The basement (or any big room) of an ordinary building would not be suitable for housing and operating server technology reliably. Fluctuations in humidity and temperature, which can arise from insufficient ventilation, would have a negative impact on the service life of the components.

On the network side, office locations tend not to have an uplink to a high-bandwidth Internet node, which the servers need in order to operate and transmit data quickly enough. Power would also be a concern: the many devices we operate consume so much energy that the electrical installation of an ordinary building could not cope, and fuses would blow right away. Professional data centres also have emergency generators that can keep the servers running in the event of a power cut. By the way, 100% of the energy we use in our data centres comes from renewable sources.

Stefan, thank you very much for giving us some insight into the technical aspects of how our services are run.