Looking back on 10 years of building world-class data centers

This resulted in a highly efficient electrical system with fewer power conversions, along with the idea that the servers themselves could switch between AC and DC relatively easily and quickly. Once that piece of the puzzle was in place, it laid the foundation for the planning and construction of our very first data center in Prineville.

Building on that strategy of limiting electrical conversions in the system, we looked for the most efficient way to dissipate the heat generated during the conversions that remained. That meant thinking about making the servers a bit taller than usual, allowing for larger heat sinks, and ensuring efficient airflow through the data center itself.

We knew we wanted to avoid large-scale mechanical cooling (e.g., air- or water-cooled chillers), as these are very energy-intensive and would have significantly reduced the overall electrical efficiency of the data center. One idea was to route outside air through the data center and use it as the cooling medium. Instead of a conventional air conditioning system, we would use outside air and direct evaporative cooling to cool the servers and carry the heat they generate completely out of the building.

Today we also use indirect cooling systems in locations with less-than-ideal ambient conditions (e.g., extreme humidity or high levels of dust) that could impair direct cooling. These indirect cooling systems not only protect our servers and equipment, but are also more energy- and water-efficient than conventional air conditioning systems or water chillers. Strategies like this have allowed us to build data centers that use at least 50 percent less water than typical data centers.
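To make the choice between direct and indirect cooling concrete, here is a minimal sketch in Python. The humidity and dust thresholds are hypothetical placeholders for illustration, not the climate criteria we actually apply when siting a facility.

```python
from dataclasses import dataclass
from enum import Enum


class CoolingMode(Enum):
    DIRECT_EVAPORATIVE = "outside air + direct evaporative cooling"
    INDIRECT = "indirect cooling (heat exchange, no outside air in the data hall)"


@dataclass
class AmbientConditions:
    relative_humidity_pct: float  # from local climate data / on-site sensors
    dust_ug_per_m3: float         # particulate load of the outside air


# Hypothetical thresholds for illustration only; real siting decisions rest
# on detailed multi-year climate studies, not two fixed numbers.
MAX_HUMIDITY_FOR_DIRECT = 80.0
MAX_DUST_FOR_DIRECT = 150.0


def select_cooling_mode(ambient: AmbientConditions) -> CoolingMode:
    """Prefer direct outside-air cooling where the climate allows it,
    and fall back to indirect cooling in humid or dusty locations."""
    if (ambient.relative_humidity_pct > MAX_HUMIDITY_FOR_DIRECT
            or ambient.dust_ug_per_m3 > MAX_DUST_FOR_DIRECT):
        return CoolingMode.INDIRECT
    return CoolingMode.DIRECT_EVAPORATIVE


if __name__ == "__main__":
    humid_site = AmbientConditions(relative_humidity_pct=92.0, dust_ug_per_m3=40.0)
    print(select_cooling_mode(humid_site))  # CoolingMode.INDIRECT
```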

Optimization and sustainability

In the 10 years since we built our first data center in Prineville, the basic concepts of our original design have stayed the same. But we are constantly making improvements. Most importantly, we've added power and cooling capacity to meet our growing network needs.

In 2018, for example, we introduced our StatePoint Liquid Cooling (SPLC) system in our data centers. SPLC is a unique liquid cooling system that is energy- and water-efficient and allows us to build new data centers in areas where direct cooling is not a viable solution. This is likely the most significant change to our original design, and it will influence future data center designs as well.

The original focus on minimizing electrical conversions and finding the best approach to cooling remains a core attribute of our data centers. Because of this, Facebook's facilities are some of the most efficient in the world. On average, our data centers use 32 percent less energy and 80 percent less water than the industry standard.

Software also plays an important role in all of this. As mentioned earlier, we knew from the start that the resilience of the software would play a major role in the efficiency of our data centers. Take my word for it when I say that in 2009 software couldn't do what it can today. The advances we've made in performance and resilience on the software side are incredible. For example, today we employ a number of software tools that help our engineers detect, diagnose, correct, and repair Peripheral Component Interconnect Express (PCIe) hardware faults in our data centers.
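To give a flavor of what this kind of tooling does, here is a minimal sketch, assuming a Linux host where PCIe problems surface as AER (Advanced Error Reporting) lines in the kernel log. The error threshold and the notion of a repair ticket are simplifications for illustration, not our actual detection pipeline.

```python
import re
import subprocess
from collections import Counter

# Look for AER lines in the kernel log and count errors per PCIe device
# address (bus:device.function). The threshold below is purely hypothetical.
AER_LINE = re.compile(r"AER:.*error", re.IGNORECASE)
DEVICE_ADDR = re.compile(r"\b[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]\b")
ERROR_THRESHOLD = 10


def scan_pcie_errors() -> Counter:
    """Count AER error lines per PCIe device address."""
    # Reading the kernel ring buffer may require elevated privileges.
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    counts: Counter = Counter()
    for line in dmesg.splitlines():
        if AER_LINE.search(line):
            for addr in DEVICE_ADDR.findall(line):
                counts[addr] += 1
    return counts


def flag_for_repair(counts: Counter) -> list:
    """Return device addresses whose error count warrants a closer look."""
    return [addr for addr, n in counts.items() if n >= ERROR_THRESHOLD]


if __name__ == "__main__":
    errors = scan_pcie_errors()
    for addr in flag_for_repair(errors):
        print(f"PCIe device {addr}: {errors[addr]} AER errors; open a repair ticket")
```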

If I were to characterize the difference between the way we think about our data center program and the way traditional industries do, I think we were much earlier in making calculated trade-offs between risk and gains in efficiency. And that risk can be reduced with more resilient software. Software optimizations enable us, for example, to relocate server load from one data center to another in an emergency without interrupting our services.
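As a purely illustrative sketch of what relocating load looks like, the snippet below gradually shifts traffic weights away from one region until it is fully drained. The region names and the publish_weights() hook are made up, and a real drain also has to account for capacity headroom, cache state, and stateful services.

```python
import time
from typing import Dict


def publish_weights(weights: Dict[str, float]) -> None:
    """Placeholder: push new routing weights to the edge load balancers."""
    print({region: round(w, 3) for region, w in weights.items()})


def drain_region(weights: Dict[str, float], region: str,
                 steps: int = 5, pause_s: float = 60.0) -> Dict[str, float]:
    """Shift `region`'s share of traffic to the remaining regions in steps,
    so user-facing services keep running while that data center is drained."""
    weights = dict(weights)
    healthy = [r for r in weights if r != region]
    step_share = weights[region] / steps
    for _ in range(steps):
        weights[region] -= step_share
        for r in healthy:
            weights[r] += step_share / len(healthy)
        publish_weights(weights)  # hypothetical hook into traffic management
        time.sleep(pause_s)       # let connections drain and caches warm up
    weights[region] = 0.0         # clear any floating-point residue
    return weights


if __name__ == "__main__":
    # Example: drain region_a while two other regions absorb its traffic.
    initial = {"region_a": 0.34, "region_b": 0.33, "region_c": 0.33}
    drain_region(initial, "region_a", steps=3, pause_s=0.0)
```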
