Facebook unveils Presto engine for querying 250 PB data warehouse

At a developer conference at Facebook headquarters on Thursday, engineers at the social networking giant revealed that the company is using a new homegrown query engine called Presto to do fast interactive analysis on its already enormous, 250-petabyte-and-growing data warehouse.

More than 850 Facebook employees use Presto daily, scanning a combined 320 TB each day, engineer Martin Traverso said.

“Historically, our data scientists and analysts have relied on Hive for data analysis,” Traverso said. “The problem with Hive is it’s designed for batch processing. We have other tools that are faster than Hive, but they’re either too limited in functionality or too simple to operate against our huge data warehouse. Over the past few months, we’ve been working on Presto to basically fill this gap.”

Facebook created Hive several years ago to give Hadoop data warehouse and SQL-like capabilities, but Hive is showing its age in terms of speed because it relies on MapReduce. Scanning an entire dataset can take anywhere from several minutes to hours, which isn't ideal if you're trying to ask and answer questions in a hurry.

With Presto, however, simple queries can run in a few hundred milliseconds, while more complex ones finish in a few minutes, Traverso said. The engine runs in memory and never writes to disk.

Traverso explains the architecture of Facebook's new Presto engine. Source: Jordan Novet

Think of Presto as Facebook's answer to Cloudera's Impala SQL query engine or to what Hortonworks is building with Stinger, but custom-fit for fast performance at Facebook scale. Presto isn't competing with commercial products today, although it could well shake up the big data world soon: Facebook plans to release Presto as open source this fall, Traverso said.

The data warehouse is growing faster than the site's user base, said Ravi Murthy, a Facebook engineering manager; it is 4,000 times bigger than it was four years ago. “As we project out this growth, over the next few years, it’s quite clear to us that at some point soon we will reach one exabyte,” Murthy said. “Looking at this exabyte scale, we have to rethink a lot of different things.”

Presto is one of those things. Alongside enabling fast queries, the engine is up to seven times more efficient on the CPU than Hive, Traverso said.

Another ongoing project is cutting down on the amount of storage that analytics data consumes in Facebook's data centers. Sambavi Muthukrishnan, an engineering manager, described how Facebook has been working to maintain high availability of data while reducing the number of replicas it keeps, an approach that works especially well for colder, less frequently accessed data.

As a preeminent webscale company, Facebook just keeps on innovating on hardware and software, from switches to servers to social graphs. These topics are sure to come up when my colleague Stacey Higginbotham sits down with Jay Parikh, Facebook’s vice president of infrastructure engineering, at GigaOM’s Structure conference in San Francisco on June 19. Presto is just the latest reason that conversation should be fascinating.
