Facebook shares some secrets on making MySQL scale

When you’re storing every transaction for 800 million users and handling more than 60 million queries per second, your database environment had better be something special. Many readers might see these numbers and think NoSQL, but Facebook held a Tech Talk on Monday night explaining how it built a MySQL environment capable of handling everything the company needs in terms of scale, performance and availability.

Over the summer, I reported on Michael Stonebraker’s stance that Facebook is trapped in a MySQL “fate worse than death” because of its reliance on an outdated database paired with a complex sharding and caching strategy (read the comments and this follow-up post for a bevy of opinions on the validity of Stonebraker’s stance on SQL). Facebook declined an official comment at the time, but Monday night’s talk proved to me that Stonebraker (and I) might have been wrong.

Keeping up with performance

Kicking off the event, Facebook’s Domas Mituzas shared some stats that illustrate the importance of its MySQL user database:

  • MySQL handles pretty much every user interaction: likes, shares, status updates, alerts, requests, etc.
  • Facebook has 800 million users; 500 million of them visit the site daily.
  • 350 million mobile users are constantly pushing and pulling status updates.
  • 7 million applications and web sites are integrated into the Facebook platform.
  • User data sets are made even larger by taking into account both scope and time.

And, as Mituzas pointed out, everything on Facebook is social, so every action has a ripple effect that spreads beyond that specific user. “It’s not just about me accessing some object,” he said. “It’s also about analyzing and ranking data that includes all my friends’ activities.” The result (although Mituzas noted these numbers are somewhat outdated) is 60 million queries per second, and nearly 4 million row changes per second.
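To make that ripple effect concrete, here is a hedged Python sketch of fan-out-on-read: building one user’s feed means pulling and ranking activity from every friend, so a single page view turns into hundreds of cache or database lookups. The helper functions are illustrative assumptions, not Facebook’s code.

```python
# Illustrative fan-out-on-read: rendering one user's feed triggers one lookup per
# friend, which is why a single page view multiplies into many cache/MySQL reads.
# get_friend_ids() and get_recent_actions() are hypothetical stand-ins for those
# lookups; they are not Facebook's actual API.

def render_feed(user_id, get_friend_ids, get_recent_actions, limit=50):
    actions = []
    for friend_id in get_friend_ids(user_id):           # e.g. hundreds of friends
        actions.extend(get_recent_actions(friend_id))   # one lookup per friend
    # Merge and rank; a simple newest-first sort here, real ranking is far richer.
    actions.sort(key=lambda a: a["timestamp"], reverse=True)
    return actions[:limit]

# Dummy helpers to show the fan-out: 300 friends -> 300 lookups for one page view.
if __name__ == "__main__":
    friends = lambda uid: range(300)
    recent = lambda fid: [{"actor": fid, "timestamp": fid}]
    print(len(render_feed(1, friends, recent)))          # prints 50
```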

Facebook shards, or splits its database into numerous distinct sections, because of the sheer volume of the data it stores (a number it doesn’t share), but it caches extensively in order to serve all these transactions in a hurry. In fact, most queries (more than 90 percent) never hit the database at all but only touch the cache layer. Facebook relies heavily on the open-source memcached caching system, as well as its custom-built Flashcache module for caching data on solid-state drives.
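For readers unfamiliar with that split, here is a minimal cache-aside and sharding sketch in Python, using the real pymemcache and mysql-connector-python clients; the shard count, host names, table schema and modulo shard mapping are illustrative assumptions rather than Facebook’s implementation.

```python
# Minimal cache-aside sketch: reads try memcached first and fall back to the MySQL
# shard that owns the row. The shard count, host names, table layout and the
# modulo-based shard mapping are illustrative assumptions, not Facebook's scheme.
import json
import mysql.connector
from pymemcache.client.base import Client

NUM_SHARDS = 4                        # hypothetical shard count
cache = Client(("localhost", 11211))
_shard_conns = {}

def shard_for(user_id):
    """Return a connection to the shard owning user_id (simple modulo mapping)."""
    shard_id = user_id % NUM_SHARDS
    if shard_id not in _shard_conns:
        _shard_conns[shard_id] = mysql.connector.connect(
            host=f"mysql-shard-{shard_id}", user="app",
            password="secret", database="users")
    return _shard_conns[shard_id]

def get_user(user_id):
    """Cache-aside read: more than 90 percent of lookups should stop at memcached."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    # Cache miss: query the owning MySQL shard, then populate the cache.
    cur = shard_for(user_id).cursor(dictionary=True)
    cur.execute("SELECT id, name, status FROM users WHERE id = %s", (user_id,))
    row = cur.fetchone()
    cur.close()
    if row is not None:
        cache.set(key, json.dumps(row), expire=300)
    return row
```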

Keeping up with scale

Speaking of drives, and hardware generally, Facebook’s Mark Konetchy took the stage after Mituzas to share some data points on the growth of Facebook’s MySQL infrastructure. Although he made sure to point out that the “buzzkills at legal” won’t let him share actual numbers, he was able to point to 3x server growth across all data centers over the past two years, 7x growth in raw user data, and 20x growth in all user data (which includes replicated data). The median data-set size per physical host has increased almost 5x since Jan. 2010, and maximum data-set size per host has increased 10x.

Konetchy credits the ability to store so much more data per host to software-performance improvements made by Facebook’s MySQL team, as well as to better server technology. Facebook’s MySQL user database is composed of approximately 60 percent hard disk drives, 20 percent SSDs and 10 percent hybrid HDD-plus-SSD servers running Flashcache.

However, Facebook wants to buy fewer servers while still improving MySQL performance. Looking forward, Konetchy said some primary objectives are to automate the splitting of large data sets onto underutilized hardware, to improve MySQL compression and to move more data to the Hadoop-based HBase data store when appropriate. NoSQL databases such as HBase (which powers Facebook Messages) weren’t really around when Facebook built its MySQL environment, so there is likely unstructured or semistructured data currently in MySQL that is better suited to HBase.

With all this growth, why MySQL?

The logical question when one sees rampant growth and performance requirements like this is “Why stick with MySQL?” As Stonebraker pointed out over the summer, both NoSQL and NewSQL are arguably better suited to large-scale web applications than is MySQL. Perhaps, but Facebook begs to differ.

Facebook’s Mark Callaghan, who spent eight years as a “principal member of the technical staff” at Oracle, explained that using open-source software lets Facebook operate with “orders of magnitude” more machines than people, which means lots of money saved on software licenses and lots of time put into working on new features (many of which, including the rather cool Online Schema Change, are discussed in the talk).

Additionally, he said, the patch and update cycles at companies like Oracle are far slower than what Facebook can get by working on issues internally and with an open-source community. The same holds true for general support issues, which Facebook can resolve itself in hours instead of waiting days for commercial support.

On the performance front, Callaghan noted, Facebook might find some interesting things if large vendors allowed it to benchmark their products. But they won’t, and they won’t let Facebook publish the results, so MySQL it is. Plus, he said, you actually can tune MySQL to perform very fast per node if you know what you’re doing — and Facebook has the best MySQL team around. That also helps keep costs down because it requires fewer servers.

Callaghan was more open to using NoSQL databases, but said they’re still not quite ready for primetime, especially for mission-critical workloads such as Facebook’s user database. The implementations just aren’t as mature, he said, and there are no published cases of NoSQL databases operating at the scale of Facebook’s MySQL database. And, Callaghan noted, the HBase engineering team at Facebook is quite a bit larger than the MySQL engineering team, suggesting that tuning HBase to meet Facebook’s needs is a more resource-intensive process than tuning MySQL at this point.

The whole debate about Facebook and MySQL was never really about whether the company should be using MySQL, but rather about how much work it has put into MySQL to make it work at Facebook scale. The answer, clearly, is a lot, but Facebook seems to have it down to an art at this point, and everyone appears pretty content with what they have in place and how they plan to improve it. It doesn’t seem like a fate worse than death, and if it had to start from scratch, I don’t get the impression Facebook would do too much differently, even with the new database offerings available today.


Feature image courtesy of Flickr user Carolyn Coles.
