Rough Sketch of Genius

Jay Park began by studying civil engineering in college, but he soon switched to chemical engineering and then finally moved on to electrical engineering. “Controlling power is what I love,” he says.

From a simple napkin doodle rose Jay Park’s concept for Facebook’s new, naturally cooled data centers, and the company is now sharing the idea for the betterment of all.

Dedicated in loving memory to Jay Park’s son, Jason Park, who wanted to become an engineer and make the world a better place to live

Facebook data center napkin sketch
Engineer Jay Park drew the DC UPS power-system design for Facebook’s new data centers on a napkin during a 2 a.m. fit of inspiration. It was so innovative that a patent for the design was just approved this year. The napkin now hangs framed in Facebook’s Menlo Park, CA, headquarters.

On September 23, 2012, the New York Times ran a page-one article under the headline “Power, Pollution and the Internet.” Detailing the wasteful use of energy and water in data centers—especially those run by the largest and most recognized Internet-based companies—the roughly 4,300-word report ruffled many feathers in the digital world. But, its criticisms had (and continue to have) some validity.

The use of energy and water (mostly for cooling, for which hot servers create considerable demand) has been a tertiary concern in the ongoing hypergrowth era of the information revolution. E-commerce is an inventive and evolving industry on steroids, after all, where everyone from entrepreneurs to programmers to engineers is working 24–7 to devise smarter ways of doing what they do. Matters of resource use have fallen by the wayside because being the fastest, most robust, and most fail-safe is what has proven to be the real difference between a $104 billion IPO (for Facebook) and a $545 million loss (which Rupert Murdoch ate on MySpace).

The reality of energy and water overuse in data centers is still real in many quarters, but it’s now a bit behind the times with regard to the work of Facebook—particularly that of Jay Park, the company’s vice president of data-center design, construction, and facility operations. In earlier stages of its explosive growth, the company operated under service-level agreements with third-party data centers (known as colocation companies), but when Park joined in 2009, he was the first person on staff whose mission was to build the company its own, more efficient data center. And, he had to do it at a pace that matched the growth of the social networking behemoth, which went from 350 million users in 2009 to 1 billion by the end of 2012.

He has since fixed the problem—and then some. Not only have he and his engineers devised new ways of configuring, locating, and building data centers; they’ve actually gone open-source with what they’ve learned, sharing it with the world. The real marvel, though, is that all their plans and designs can be traced back to a 2 a.m. sketch Park made while working out the efficiency problem for himself.

“It was in my mind,” Park says, recalling the night in 2009 when he had an epiphany and instinctively reached for the nearest available piece of paper: a dinner napkin. “I was constantly thinking and dreaming about it. I had to get rid of two components: inefficiency and the problems that can result when you make radical changes. Those problems included harmonics and short-circuit current.” In simplest terms, harmonics are distortions in voltage and current produced by nonlinear loads, such as servers, that lead to power-quality issues and inefficiencies. Park wanted to do away with these.

The following morning, he took the napkin sketch—which still exists, framed on a wall at Facebook’s headquarters in Menlo Park, California—to the company’s chief hardware engineer, who examined it and declared Jay Park’s DC UPS system feasible. The world of data centers effectively changed that day; the two engineers were eventually proven right, and the system has since been patented. Power-usage effectiveness (PUE) and water-usage effectiveness are dramatically improved with Park’s configuration. And, it actually helped Facebook in a relatively short period of time. “Smart engineers try to utilize what’s already developed,” Park says. “We didn’t want to create something that was entirely new. We didn’t have three to five years to build.”

Facebook data center
A ductless air-distribution system draws cold air in from the outdoors and pipes it into the server room of each of Facebook’s data centers, negating the need for chiller plants, cooling towers, and associated pipes and pumps.

1.09 PUE

the power-usage effectiveness rating of Facebook’s Prineville, OR, data center (the industry target is 1.5, and a perfect score is 1.0)

27%

of Facebook’s Prineville data center is made from recycled materials

350,000 sq. ft.

the size of each Facebook data center (currently there are three, and a fourth will open in 2015)

530 tons

of construction waste was recycled during the building of the Prineville data center

100%

outside economization is achieved by Facebook’s Prineville data center, eliminating the need for a chiller plant or cooling towers

Facebook’s servers now reside in 350,000-square-foot data centers, the functionality of which depends largely on air handling, outdoor temperatures, and humidity, making the location of each new center a primary concern. The first was sited in Prineville, Oregon, in the state’s high-desert climate, where cooler, drier air is a predominant local feature. “Nothing is cheaper than bringing in cool air from the outside,” Park says.

The system inside each data center is capable of 100 percent outside economization and air-evaporative cooling and humidification, which means a chiller plant is not necessary nor are cooling towers or associated piping, pumps, or controls. Instead, ductless air distribution brings cooler air in via a built-up penthouse and sends it down into the data center through drywall airshafts. (The outdoor air is treated with direct evaporative cooling and humidification if necessary.) The cooler air enters through the front of the servers in the “cold aisles” and is exhausted into the “hot aisles,” which draw the hot air up through the ceiling plenum. In summer, the air is eventually expelled back outside, but in winter, the hot air is mixed with more outdoor air to achieve an optimal supply temperature to cool the servers. Then, waste heat from the servers is used to warm the centers’ office areas.
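The seasonal logic described above amounts to a simple control decision: use outside air directly (with evaporative cooling when it runs warm), or, in winter, recirculate a fraction of the hot-aisle exhaust to warm the incoming air to a target supply temperature. The sketch below illustrates that decision; the temperature setpoints and function names are assumptions for illustration only, not Facebook’s actual control parameters.

```python
# Illustrative sketch of an outside-air economizer control loop.
# Setpoints are assumed values, not Facebook's real parameters.

SUPPLY_TARGET_F = 68.0  # assumed target supply-air temperature
SUPPLY_MAX_F = 80.0     # assumed bound above which evaporative cooling engages

def economizer_mode(outdoor_f: float, server_exhaust_f: float) -> dict:
    """Decide how to condition supply air using only outside air,
    direct evaporative cooling, and recirculated hot-aisle exhaust."""
    if outdoor_f >= SUPPLY_TARGET_F:
        # Warm weather: feed outside air straight in, adding
        # evaporative cooling if it exceeds the acceptable band,
        # and expel the server exhaust back outdoors.
        return {
            "recirculate_exhaust": False,
            "evaporative_cooling": outdoor_f > SUPPLY_MAX_F,
            "exhaust_to": "outdoors",
        }
    # Cold weather: mix hot-aisle exhaust with outdoor air to hit
    # the target supply temperature (simple linear mixing ratio).
    fraction = (SUPPLY_TARGET_F - outdoor_f) / (server_exhaust_f - outdoor_f)
    return {
        "recirculate_exhaust": True,
        "exhaust_fraction": round(fraction, 2),
        "evaporative_cooling": False,
        "exhaust_to": "mixing plenum",
    }
```

For example, on a 30°F winter day with 95°F hot-aisle exhaust, the linear mix works out to roughly 58 percent recirculated exhaust.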

Facebook data centers also feature a new power-distribution configuration (the DC UPS system that Park sketched on his famous napkin) that influences everything from the incoming utility power to the server power-supply level. Park’s team, which now comprises 100 people, sought to reduce the 21–27 percent efficiency loss that is typical in data centers by eliminating the need for a centralized uninterruptible power supply (UPS) system and 480V and 208/120V power-distribution units. Instead, 480/277V power is supplied directly to a custom-designed power supply that also takes in 48VDC from a local battery backup system.
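Because conversion losses multiply through each stage, removing the centralized UPS and the 480V-to-208/120V transformation steps has an outsized effect. The back-of-the-envelope sketch below uses assumed per-stage efficiencies chosen only to illustrate the arithmetic; the article’s own figure is the 21–27 percent loss of a typical chain.

```python
# Back-of-the-envelope comparison of power-delivery chains.
# Per-stage efficiencies below are rough assumed values.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a series of conversion stages:
    the product of each stage's individual efficiency."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Conventional chain: centralized double-conversion UPS, then
# 480V -> 208/120V PDU transformers, then the server power supply.
conventional = chain_efficiency([0.92, 0.96, 0.88])  # assumed stages

# Park's design: 480/277V fed directly to a custom server power
# supply, with the 48VDC battery backup idle in normal operation
# rather than converting power continuously.
direct = chain_efficiency([0.945])  # assumed single stage

print(f"conventional loss: {1 - conventional:.1%}")
print(f"direct-feed loss:  {1 - direct:.1%}")
```

With these assumed stages, the conventional chain loses about 22 percent end to end, squarely inside the 21–27 percent range cited above, while the direct-feed design loses only a few percent.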

The result is a set of buildings that consumes 38 percent less energy than comparably sized data centers. The Prineville location even achieved a PUE of 1.09 in 2013. (The industry target is 1.5, with 1.0 being the perfect score.) Additionally, the facility earned a LEED Gold certification and a Best of the Best citation from Engineering News-Record in 2011. Outside of air handling, its other LEED points came mainly from its recycled and locally sourced materials (which make up 27 and 30 percent of the building, respectively), its use of FSC-certified wood (which accounts for 91 percent of the wood in the building), and its recycling of 530 tons of its construction waste.
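PUE itself is a simple ratio: total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.09 means only 0.09 units of cooling and distribution overhead for every unit of computing. A minimal illustration (the kilowatt-hour figures are invented to make the arithmetic visible):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power-usage effectiveness: total facility energy divided by
    the energy delivered to IT equipment. 1.0 is a perfect score."""
    return total_facility_kwh / it_equipment_kwh

# At Prineville's 2013 figure, 1,000 kWh of computing costs only
# about 90 kWh of overhead, versus 500 kWh at the industry target.
print(pue(1090.0, 1000.0))  # 1.09
print(pue(1500.0, 1000.0))  # 1.5
```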

“LEED Gold is the standard of all our data centers,” Park says. There are currently two others—one in Forest City, North Carolina, and the other in Lulea, Sweden—and a fourth is scheduled to open in Altoona, Iowa, by 2015, powered entirely by a local wind farm. The Prineville center is run by a 100-kilowatt solar array, and clean, renewable hydroelectric energy runs the entire Lulea facility.

Jay Park, Facebook
“Nothing is cheaper than bringing in cool air from the outside.”
Jay Park, VP of Data-Center Design, Construction, and Facility Operations
(Photo: Sheila Barabad)

If anyone was meant to devise these breakthroughs, it was Jay Park. His educational path toward better server management took a circuitous route and began long before the term “social media” entered our lexicon. “I loved math as a child,” he says. “But when I started to study civil engineering in college, I knew mixing concrete was not for me.” Not that there’s anything wrong with it—after all, he acknowledges, the rapid deployment of Facebook’s facilities in remote locations took some smart and skilled civil engineers and construction people—but he decided instead to switch to chemical engineering, and after that he got into electrical engineering. “Controlling power is what I love,” he says.

Through the better part of the 1990s, Park worked in the semiconductor industry and was even responsible for building a semiconductor plant. He observed that the industry was moving offshore, though—China surpassed the United States in semiconductor manufacturing in 2013, with Japan, Taiwan, and Korea already controlling well over half the global market—so by 1999 he had transferred his skills to data centers.

The decision turned out to be a fruitful one for him personally, but his entire industry should be grateful that it happened, too, for his experience helped him to fundamentally alter the physical management of data. “We learn from everywhere, from everything we do,” he says. “We apply it everywhere as well.”    

Facebook's Prineville, OR, data center
Facebook sited its first data center in the high-altitude desert region of Oregon because the area’s cool, dry air is ideal for reducing heat loads inside the facility. (Photo: Alan Brandt)

Social Media Data Centers + Open Source = a Better World

While no one can claim that Facebook founder Mark Zuckerberg actually held the napkin inscribed with Jay Park’s middle-of-the-night design, “he certainly heard about it,” Park says. But, as crucial to the business as the innovation was, the social media entrepreneur had no intention of keeping it confidential and proprietary. In at least two ways, Facebook shares how its resource-stingy data centers are configured and constructed and how they are performing.

First, the company established the Open Compute Project (OCP), applying open-source software models to hardware. Through a dedicated website (opencompute.org), the organization offers CAD files on eight key components of the new generation of data centers: the server, storage, data-center design, networking, hardware management, certification, open rack, and solution providers. Second, the company provides a dashboard (via a Facebook page, of course) that shares real-time readings of the centers’ power-usage effectiveness and water-usage effectiveness, broken down by the minute, day, week, month, quarter, and year.

Why all this openness? “Mark says it’s better to share this with the world,” Park says. “Data centers are our core business, and we take energy savings very seriously, which is why he founded the OCP—so that everyone can save energy and help make our planet more green.”

Facebook's Prineville, OR, data center
Facebook sought to reduce the 21–27% efficiency loss that plagues typical data centers. It did so by eliminating the need for a centralized uninterruptible power-supply (UPS) system, and it now uses 38% less energy than its counterparts. (Photo: Alan Brandt)
Facebook's Lulea, Sweden, data center
In the summer, the air-distribution system at each of Facebook’s data centers expels hot air back outside. In the winter, it mixes the hot air with more cold air to achieve an optimal cooling temperature.