Google’s flu snafu and the reliability of web data

The web is full of data, much of it meaningful, but there's real question as to how much we should rely on it. The latest evidence comes at Google's expense, with researchers questioning the validity of Google's Flu Trends algorithm. They say the service, which estimates the number of flu cases around the world by analyzing trends on Google's search engine, vastly overestimated the severity of this year's flu season in the United States compared with more-traditional methods of measuring flu cases.

But this snafu is just a microcosm of a broader debate over how much stock we should put in web and social media data, and in which cases that data is most valid. It's hard to know how much we should value speed and scale over quality. Millions of (presumably) younger people proactively searching or tweeting about a topic provide a huge and theoretically unbiased dataset, while traditional methods such as phone calls or focus groups reach a smaller number of (presumably) older people who know they're being observed, but who are also answering questions directly relevant to the research at hand.

Who’s more accurate: Google, Twitter or your neighbors?

The exact details of the discrepancy are explained in a Nature article published on Wednesday, but it appears to be a case of a lot of data that didn’t mean what Google thought it meant. Google’s search data covers almost the entirety of the web-surfing world and, in theory, can see outbreaks coming before they hit because it can watch the flu-related searches intensify in volume in real time. The Centers for Disease Control and Prevention says Google Flu Trends usually tracks very closely with its own data and can deliver results days faster, Nature writer Declan Butler reported.
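
Google hasn't detailed the current version of the model, but the original Flu Trends paper (Ginsberg et al., Nature, 2009) describes fitting a linear relationship between the logit-transformed share of flu-related queries and the logit-transformed ILI (influenza-like illness) rate. Here is a minimal sketch of that idea; all of the weekly numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical weekly data, invented for illustration: the fraction of
# all searches matching flu-related queries, and the CDC's reported
# percentage of doctor visits for influenza-like illness (ILI) in the
# same weeks. The real system trains on years of history.
query_fraction = np.array([0.010, 0.014, 0.021, 0.034, 0.046, 0.038, 0.025])
cdc_ili_pct = np.array([1.2, 1.8, 2.9, 4.4, 6.0, 5.1, 3.3])

def logit(p):
    return np.log(p / (1 - p))

# Fit logit(ILI rate) = b0 + b1 * logit(query share), the linear form
# described in the original Flu Trends paper.
b1, b0 = np.polyfit(logit(query_fraction), logit(cdc_ili_pct / 100), 1)

# "Nowcast" the current week from search volume alone, days before
# official surveillance numbers arrive.
this_week = 0.041
estimate = 1 / (1 + np.exp(-(b0 + b1 * logit(this_week))))
print(f"Estimated ILI rate this week: {estimate:.1%}")
```

The fragility is visible right in the setup: if media coverage inflates the query share without a matching rise in actual illness, the fitted mapping overshoots, which is one plausible reading of this season's miss.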

Researchers think this year's discrepancy might have something to do with hyped-up media reports driving a volume of flu-related web searches that was disproportionate to the actual number of cases (nearly double, nationwide). The CDC says about 6 percent of the U.S. population had flu-like symptoms during the peak period.

[Chart: Google estimated that more than 10 percent of the U.S. population had flu-like symptoms.]

On the other hand, one project called Flu Near You, which relies on volunteers to report cases of flu among their friends and family, estimated a number closer to (albeit lower than) the CDC's official statistics, perhaps because its data is based on clinical definitions of "influenza" and relies on people expressly reporting known cases. However, Flu Near You claims fewer than 45,000 participants and, according to Nature, covers only 70,000 people.

[Chart: Flu Near You's peak estimate was about 4.5 percent of the population.]

A Google spokesperson responded to my inquiry about the discrepancy with a statement.

While Google's predictions might be prone to the undue influence of a fear-mongering media environment, CDC researcher Lyn Finelli told Nature she's even more skeptical of efforts to track flu outbreaks using Twitter data. She cites a low signal-to-noise ratio and a population of largely young-adult users that doesn't align with the country's overall demographic makeup.

Johns Hopkins University computer scientist Michael Paul, however, told Nature that he's a big believer in Twitter data, especially because it generates a large dataset that's less susceptible to sampling error than smaller-scale projects such as Flu Near You. He claims to have developed a model that can accurately track the flu using Twitter, something a handful of other projects are already working on.
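
The Nature piece doesn't describe Paul's model in reproducible detail, but the signal-to-noise problem Finelli raises is easy to illustrate: most flu tweets are talk about the flu, not reports of having it. The keyword rules below are a hypothetical stand-in for the trained classifiers such research actually uses, just to make the distinction concrete:

```python
import re

# Toy rules separating "I am sick" tweets from general flu chatter.
# Real systems train statistical classifiers; these hand-written
# patterns are only illustrative.
INFECTION = re.compile(r"\b(i have|i've got|home sick with|caught) (the )?flu\b", re.I)
CHATTER = re.compile(r"\b(flu shot|flu season|flu news|flu vaccine)\b", re.I)

tweets = [
    "Ugh, I have the flu and can't get out of bed",
    "Scary flu season this year, says every news channel",
    "Get your flu shot, people!",
    "home sick with flu, send soup",
]

cases = [t for t in tweets if INFECTION.search(t) and not CHATTER.search(t)]
print(f"{len(cases)} of {len(tweets)} flu tweets look like actual cases")
```

Only two of the four tweets survive the filter; the other two are exactly the media-driven chatter that can swamp the signal during a hyped season.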

Pollsters struggle with the web, too

But flu statistics aside, questions over the validity of Twitter, Google and other websites as data sources are nothing new. Last year, for example, I profiled a company called the Dachis Group that has devised a method for tracking companies' presence, buzz and sentiment on social media. It claims its algorithms for ranking the buzz around Super Bowl XLVI advertisers were far more accurate (or at least yielded drastically different results) than USA Today's traditional AdMeter rankings of Super Bowl ads based on phone-based polling.

Although people appear generally willing to do away with phone surveys and other marketing-based polling efforts, there's a lot more skepticism when it comes to using the web to predict political elections and gauge response to culturally popular events such as presidential debates or the Olympics. I covered both sides of the debate in October, as pre-election fever was in full force and many people were atwitter about Twitter's tweets-per-minute counts during the presidential debates. Which side experts fall on seems to depend on how much they trust the demographics, the subjects themselves, the sample size, and how well anyone can actually analyze sentiment in text.

Even on Google, politics has proven that interest doesn't necessarily signify intent. Leading up to the presidential election in November, Mitt Romney was trending quite a bit higher than Barack Obama in search volume. Election night, however, was a different story, with Obama winning decisively.

[Chart: Google search volume for Mitt Romney vs. Barack Obama ahead of the election.]

Perhaps the best advice on how to deal with web data comes from Harvard epidemiologist John Brownstein, who told Nature, “You need to be constantly adapting these models, they don’t work in a vacuum. You need to recalibrate them every year.”

As web usage and users change along with the world around them, there’s really no guarantee that a single data point means the same thing or has the same effect from year to year. Even search is under attack by companies trying to proactively surface content for consumers before they know to look for it.
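
In code terms, Brownstein's advice amounts to a training loop that never stops: refit the mapping on each season's fresh surveillance data rather than trusting coefficients learned years earlier. A toy sketch, with entirely invented numbers standing in for real seasons:

```python
import numpy as np

# (search signal, observed CDC ILI rate) per season, invented numbers.
# In the 2012 season the search signal runs hot relative to illness,
# standing in for the media-driven surge described above.
seasons = {
    2010: (np.array([1.0, 2.1, 3.2]), np.array([1.1, 2.0, 3.3])),
    2011: (np.array([1.2, 2.4, 3.9]), np.array([1.0, 2.2, 3.5])),
    2012: (np.array([1.9, 3.8, 6.1]), np.array([1.1, 2.3, 3.6])),
}

model = None
for year in sorted(seasons):
    searches, ili = seasons[year]
    if model is not None:
        b1, b0 = model
        stale = b0 + b1 * searches[-1]
        print(f"{year}: last year's model predicts {stale:.1f}% ILI, "
              f"observed {ili[-1]:.1f}%")
    # Recalibrate: refit y = b0 + b1*x on this season's actual data.
    model = np.polyfit(searches, ili, 1)
```

In the hyped 2012 season the stale model overshoots badly (about 5.6 percent predicted against 3.6 percent observed); refitting each year pulls the coefficients back toward reality, which is exactly the recalibration Brownstein is describing.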

When accuracy is paramount, no single source, whether Twitter, Google, the telephone or the wisdom of crowds, is the holy grail; they'll all have to play a role.

Feature image courtesy of the Centers for Disease Control and Prevention.
