Facebook is contributing four updated server designs to the Open Compute Project to bring their benefits to the wider ecosystem.

March 8, 2017

Facebook today announced a complete refresh of its servers, with four different types of hardware. Vijay Rao, Facebook's director of technology strategy, also announced that the company is contributing the server designs to the Open Compute Project so it can bring these benefits to the wider ecosystem.

Facebook continues to innovate on its servers to accommodate the growth of its apps and services, including the volume of photos and videos being shared. To put this into perspective: people watch 100 million hours of video on Facebook every day; more than 95 million photos and videos are posted to Instagram every day; and 400 million people now use voice and video chat on Messenger every month.

Today's announcements from Facebook include:

  • Bryce Canyon is a storage server used primarily for high-density storage of photos and videos. Designed with more powerful processors and increased memory, it provides increased efficiency and performance: 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.

  • Yosemite v2 is a compute server that provides the flexibility and power efficiency needed for scale-out data centers. Its power design supports hot service: servers don't need to be powered down for components to be serviced when a sled is pulled out of the chassis; they can continue to operate.

  • Tioga Pass is a compute server with a dual-socket motherboard and more I/O bandwidth (i.e., more bandwidth to flash, network cards, and GPUs) than its predecessor, Leopard. The design enables larger memory configurations and shortens compute time.

  • Big Basin is a server used to train neural networks, a technology that can carry out a range of research tasks, including learning to identify images by examining enormous numbers of them. With Big Basin, Facebook can train machine learning models that are 30% larger than on its predecessor, Big Sur, thanks to greater arithmetic throughput and an increase in memory from 12GB to 16GB. In tests with the image classification model ResNet-50, Facebook reached almost a 100% improvement in throughput compared to Big Sur. A minimal sketch of this kind of training workload follows the list.
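
For a concrete sense of the workload Big Basin is built for, here is a minimal sketch of one training step of an image classifier, written in PyTorch (a Facebook-backed framework). This is an illustration, not Facebook's actual training code: the ResNet-50 architecture matches the benchmark mentioned above, but the batch size, learning rate, and synthetic input data are assumptions made for the example.

```python
# Minimal sketch of one ResNet-50 training step (illustrative only;
# hyperparameters and synthetic data are assumptions, not Facebook's setup).
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(num_classes=1000)   # the ResNet-50 architecture from the benchmark
if torch.cuda.is_available():               # Big Basin-class training runs on GPUs
    model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Synthetic stand-in batch; in practice this comes from an image data loader.
images = torch.randn(32, 3, 224, 224)       # 32 RGB images, 224x224 pixels
labels = torch.randint(0, 1000, (32,))      # one class label per image
if torch.cuda.is_available():
    images, labels = images.cuda(), labels.cuda()

optimizer.zero_grad()
loss = criterion(model(images), labels)     # forward pass
loss.backward()                             # backward pass
optimizer.step()                            # weight update
```

The forward and backward passes dominate GPU time in a loop like this, which is where the arithmetic throughput and larger memory cited for Big Basin translate into bigger models and higher training throughput.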

