Dispatches from Class: Software Engineering and Data Center Design

This continues my Dispatches from Class series.

We web developers love our cloud platforms: AWS, Heroku, and others help us focus on code rather than on servers. And while it's nice not to have to worry about infrastructure, there is a lot of interesting innovation happening at data centers worldwide. Many of these leaps forward mirror best practices we use daily while writing software: modularity, reusability, and profiling, to name a few.

Modularity
When you think of a data center, you probably imagine rows of server racks in a white, sterile room. While that’s still the case for many companies, modular data center components are gaining popularity due to their ease of installation and efficiency.

[Image: Sun modular data center]

Unlike a brick-and-mortar data center that can take years to build and bring online, modular components are built entirely inside standard shipping containers. They contain servers, cooling equipment, and networking infrastructure; they are essentially a microcosm of a standard data center. Once they arrive onsite, engineers need only connect them to a power source and hook up the networking cables, and they can be fully functional the same day.

A key benefit is that modular components help a company handle spikes and lulls in demand without altering its main data center facilities. Think of it as EC2 for physical data centers: you can easily spin resources up and down as the situation warrants.

Reusability
As software developers, we know never to repeat ourselves. Code should be written once and referenced anywhere else it's needed, rather than duplicated every time you need its functionality. Modern data centers think the same way. When hot air is discharged from a server rack, why let it go to waste by dumping it outside? A number of data centers are capturing this hot air and using it to heat office space, homes, and even, in IBM Switzerland's case, a swimming pool. Another notable example is found in Indiana, where the University of Notre Dame uses the heat generated by its high-performance servers to warm its greenhouse and botanical gardens.

Free cooling, the process of using already-cold outside air to cool servers, is another popular way to reuse available resources. In cold climates, data centers can avoid spending electricity to chill the air pumped into server racks, since cold air already exists all around the facility. Google took this concept one step further at its Hamina, Finland data center, where it uses nearby frigid seawater to cool its servers.

Profiling
The first step in speeding up any application is identifying what's making it slow. Web developers rely on a browser's dev console and its profiling tools to do this, along with static analysis tools like JSHint. Data centers are similarly obsessed with performance and profiling. The main metric is PUE, or Power Usage Effectiveness. The calculation is simple: PUE = (total facility energy) / (IT equipment energy). In other words, PUE measures how much total energy the facility consumes for every unit of energy that actually reaches the IT equipment, which serves the essential purpose of a data center: providing computational power. Everything else, like lighting and cooling, is overhead, and its electrical usage should be minimized. With PUE, lower numbers are better. The best possible (theoretical) PUE is 1.0, meaning 100% of all power consumed goes toward computation.
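The formula above is easy to play with directly. Here is a minimal sketch (the function names and example kWh figures are my own, chosen for illustration) that computes PUE and the overhead fraction it implies:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def overhead_fraction(pue_value: float) -> float:
    """Fraction of total facility energy spent on non-IT overhead
    (cooling, lighting, power conversion losses, ...)."""
    return 1 - 1 / pue_value

# Hypothetical facility: 1,650 kWh drawn to deliver 1,000 kWh to the servers
print(pue(1650, 1000))          # 1.65
print(overhead_fraction(1.65))  # ~0.39, i.e. 39% of power never reaches a server
print(overhead_fraction(1.05))  # ~0.05, i.e. under 5% overhead
```

Note how nonlinear the payoff is: dropping PUE from 1.65 to 1.05 cuts overhead not by a third but by roughly a factor of eight.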

A 2013 survey showed that the average PUE for data centers around the world is 1.65. Compare that with Facebook’s impressive 1.05 PUE, and you can see that most data centers lag far behind industry leaders like Facebook and Google.

In fact, Facebook is so obsessed with profiling that you can watch the PUE of its Prineville data center update in real time. This push for transparency in profiling has created an arms race in data center efficiency, which is a boon to the companies themselves (since they end up saving on energy costs) as well as to the environment.

It turns out that software developers and data center engineers rely on many of the same concepts to produce quality, efficient work. Modularity, reusability, and profiling are essential to managing efficiency in both software engineering and data center operations.