Dodgy data: the iceberg to science’s Titanic

There’s an epidemic going on in science, and it’s not of the H7N9 bird flu variety. The groundbreaking, novel results that scientists are incentivized to publish (and which journalists are then compelled to cover) are increasingly riddled with cracks: experiments that no one can reproduce, studies that have to be retracted, and, looming beneath them, an iceberg of an integrity crisis, both for scientists personally and for those who rely — medically, financially, professionally — on the data they produce.

As a recent PhD, I can attest that many researchers first experience the iceberg as no more than an ice-cube-sized annoyance impeding their work towards a Nobel Prize. Maybe the experimental instructions from a rival lab don’t quite seem to work. Maybe the data you’re trying to replicate for your homework assignment don’t add up, as UMass graduate student Thomas Herndon found. If a spreadsheet error that underpins sweeping global economic policy doesn’t convince you, here is some more evidence that we are already scraping the iceberg of a research and data reliability crisis.

Newton, we have a problem

All those life-saving cancer drugs? Drug makers Amgen and Bayer found that the majority of research behind them couldn’t be reproduced.

What about neutral reporting of scientific results? A number of pervasive sources of bias in the scientific literature have been documented, such as selective reporting of positive results, data suppression, and the academic competition and pressure to publish. The hunger for ever more novel and high-impact results that could lead to that coveted paper in a top-tier journal like Nature or Science is not dissimilar to the clickbait headlines and obsession with pageviews we see in modern journalism. (The long-form investigative story is experiencing a bit of a digital renaissance, so maybe that bodes well for the appreciation of “slow science.”)

Readers of the New York Times Magazine will recall last month’s profile of Diederik Stapel, prolific data falsifier and holder of the number-two spot in the list of authors with the most scientific paper retractions. ScienceIsBroken.org is a frank compendium of first-hand anecdotes from the lab frontlines (the tags #MyLabIsSoPoor and #OverlyHonestMethods reflect the dire straits of research funding and the corner-cutting that can result) and factoids that drive home just how tenuous many published scientific results are (“Only 0.1% of published exploratory research is estimated to be reproducible”). The provocative title of a 2005 essay by Stanford professor John Ioannidis sums it up: “Why Most Published Research Findings Are False.”

Interestingly, unlike in science, where many results are interdependent and on the record, accuracy is not as much of an issue in big data. As my colleague Derrick Harris points out, for big data scientists the ability to churn through huge amounts of data very quickly is actually more important than complete accuracy. One reason for this is that they’re not dealing with, say, life-saving drug treatments, but with things like targeted advertising, where you don’t have to be 100 percent accurate. Big data scientists would rather be pointed in the right general direction faster, and course-correct as they go, than have to wait to be pointed in exactly the right direction. This kind of error tolerance has insidiously crept into science, too.

Lies, damn lies, and statistics

Fraud (the principal cause of retractions, which are up roughly tenfold since 1975) is not a new phenomenon, but digital manipulation and distribution tools have increased the spread and impact of science, both faulty and legitimate, beyond the confines of the ivory tower. Patients now look for new clinical trial data online in search of cures, and studies that are ultimately retracted (after a delay of 12 years, in the case of the infamous MMR vaccine study published in The Lancet) can persist in their effects on public health and public opinion.

Even in fields that don’t end up in the news or have direct medical impact, the effects of a retraction can be broad and disconcerting. When a lab at the Scripps Research Institute had to retract five papers because of a software error, hundreds of other researchers who based their work on those findings, and who used the same software, were affected. When the proverbial giants on whose shoulders scientists stand turn out to be a house of cards, years of effort can go down the drain.

For those providing the funding, like foundations and the federal government, events like this present a further justification to tighten the purse strings, and for the general public, they serve to deepen distrust of science and increase reluctance to support audacious (and economically and medically important) projects. This is not a PR crisis born of the actions of greedy charlatans or “nutty professors”; they are but a highly visible minority. The real problems for research reproducibility have to do with how we handle data, and they are much more benign, and much more controllable.

Openness to the rescue?

As I reported last week, blind trust in black-box scientific software is one part of the problem. Users of such software may not fully understand how the scientific sausage they end up with is made. Om has written about the deterministic influence big social data can have on people and businesses. As the velocity and volume of data have increased, the practices of science have struggled to keep pace. (I will discuss some potential solutions to the data-credibility problem in a separate post tomorrow.)

The question of whether the retraction and irreproducibility epidemic is spreading is akin to the debate over whether spiking autism or ADHD rates, for example, are real or the result of better diagnosis. The recent insight that retractions are correlated with impact factor (a measure of journal prestige derived from the number of citations a journal’s papers receive) seems to suggest that much of the most valued and publicized science could be less than trustworthy. (Retractions can result from innocuous omissions or malfunctioning equipment as well as from more severe and willful acts like plagiarism or fraud.)
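
For readers who have not encountered the metric, a journal’s standard impact factor is a two-year citation average. Below is a minimal sketch of the arithmetic in Python; the journal and every number in it are hypothetical, chosen purely for illustration.

    # Minimal sketch of the standard two-year journal impact factor.
    # All figures below are hypothetical and purely illustrative.
    def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
        """Citations received this year to papers published in the previous two
        years, divided by the number of citable items published in those years."""
        return citations_this_year / citable_items_prior_two_years

    # A hypothetical journal whose 700 papers from 2010-2011 drew
    # 21,000 citations in 2012 would have a 2012 impact factor of 30.
    print(two_year_impact_factor(21000, 700))  # 30.0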

Some see the phenomenon not as an epidemic but as a rash, a sign that the research ecosystem is getting healthier and more transparent. Openness is indeed a much-touted solution to the woes of science, with even the White House mandating an open data policy for government agencies; the National Institutes of Health (the major biomedical research funder in the U.S.) and many foundations already require research they fund to be publicly accessible.

Efforts to combat seedy science are popping up at an almost viral pace. A new Center for Open Science has launched at the University of Virginia, and the twitterati of the ScienceOnline un-conference spearhead a number of initiatives to improve the practice, publishing and communication of science.

Motivation through tastier carrots

Culling the bird flu epidemic. Getty Images.

Many in these communities point to a broken incentive structure as the source of compromised science. Novel, “breakthrough” results are rewarded, even though the foundation of science rests on real physical phenomena that can be experimentally replicated. If the best research (the real McCoys in a sea of trendy one-hit wonders) can be identified and rewarded, science, industry and the public all stand to benefit. The challenge is finding predictors of high-quality research beyond the traditional impact factor metric.

Just as scientists can rapidly sequence the genomes of new influenza strains like H7N9, they are starting to identify the systemic frailties, attitudes and entrenched practices that compromise research, and to take measures to inoculate science against them. Stay tuned for tomorrow’s post, which will cover three remedies to tame the epidemic and revitalize reproducible research.
