Not content with connecting 1 billion-plus people, Facebook wants to connect millions of computer servers, too. The idea is to create technologies to handle data loads that are tens, even hundreds of times greater than what most companies have to handle today.
Last week, Facebook released as an open source project its design for a networking device that it says coordinates the actions of hundreds of thousands of servers in Facebook’s data centres. The machine, which relies primarily on off-the-shelf components and sophisticated software, has a design that Facebook says will add speed and efficiency to most commercial computer centres. It also adds new, and free, competition for companies like Cisco Systems, Juniper Networks and Arista Networks, which are among the more notable makers of commercial networking equipment.
Facebook, which has the world’s largest repository of photographs and can personalise social pages for millions of people at once, hopes open sourcing its hardware will attract people outside the company to innovate on Facebook’s behalf. According to the company, last year 1,000 non-Facebook people contributed to products that Facebook had open sourced.
Facebook’s case for the product is not so much price as the ability to quickly readjust a computing system. That will become important to lots of companies, a Facebook official said, as Facebook-level data loads become commonplace for many businesses. “There are efficiencies in terms of people and money, and total cost of ownership models,” said Jay Parikh, vice president of engineering at Facebook. “Most of our big bets are based around huge changes in flexibility though.” He added, “If we want the industry to rethink its practices, we have to publish” technical advances like the networking switch.
The company already has one of the most automated computing systems around. While most enterprises staff their data centres at a ratio of one person for every 250 to 500 computers, Facebook has 25,000 machines for every person, Parikh said. Facebook’s entire network, he said, is overseen by just one person at any given time.
The new switch, called “6-pack,” builds on a smaller-scale switching product that Facebook open sourced last June. That switch, called Wedge, was designed to sit at the top of a rack of servers, coordinating the activity among them. 6-pack packages 12 of the computer boards used in Wedge into a single machine: eight boards that handle traffic among multiple racks, plus data moving in and out of the data centre, and four boards that coordinate activity among those eight outward-facing boards.
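The board arrangement reported above can be sketched as a simple count check. This is only an illustration of the arithmetic as described in the article (the names `LINE_BOARDS` and `FABRIC_BOARDS` are hypothetical labels, not Facebook's terminology):

```python
# Hypothetical sketch of the 6-pack layout as reported:
# 12 Wedge-style boards in total, split into two roles.
LINE_BOARDS = 8    # outward-facing: traffic among racks and data-centre uplinks
FABRIC_BOARDS = 4  # inward-facing: coordinate the eight line boards

def total_boards() -> int:
    """Total Wedge-derived boards packaged into one 6-pack chassis."""
    return LINE_BOARDS + FABRIC_BOARDS

print(total_boards())  # 12
```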
“This will easily scale to 100 gigabits per second, 200 gigs, and 400 gigs, as time goes on,” said Najam Ahmad, vice president of network engineering at Facebook. One hundred gigabits is currently considered cutting edge, and isn’t expected to be widely deployed for another year.
The modular design of 6-pack is part of a larger effort inside the company to standardise and automate as many parts of Facebook’s technology as possible. Ahmad said the crucial element was “software that detects and mitigates problems, without using people. Our slogan internally is ‘we want robots to fix problems, and people to build the robots.’”