Over a period of three years, a psychologist from the University of Virginia and 269 of his peers attempted to reproduce the findings from 100 prominent psychology papers. This initiative – the Reproducibility Project – was the first large-scale systematic endeavour to answer questions that had been plaguing psychologists and the wider scientific community for years, if not centuries: what share of scientific papers are repeatable, and by extension, reliable?
The results of the project betrayed a fundamentally broken system. The team succeeded in only 39 of their 100 attempts; in other words, 61 per cent of the sampled studies could not be replicated. To the research community, these results didn’t come as much of a surprise.
Perhaps the most serious – and certainly the most infamous – case of falsified research was the MMR vaccine controversy: a study in The Lancet linking the combined measles, mumps, and rubella (MMR) vaccine to colitis and autism spectrum disorders. While the paper was eventually retracted following the revelation of multiple undeclared conflicts of interest as well as manipulated evidence, the damage had already been done; its quackery had been widely reported and had lodged in the minds of the general public. A small but noteworthy section of society continues to cling to it as truth.
Admittedly, this is an extreme example, but it demonstrates a valuable lesson: the scientific ecosystem is extraordinarily slow to redress errors. Flawed literature can and does slip through the editorial and peer review process. By the time bad papers are disproved, they’ve already metastasised into new research, causing irreparable damage. The world’s ten most ‘popular’ retracted papers have been cited on over 7,500 occasions, and that’s a conservative estimate. Research, including bad research, spreads like wildfire.
The phenomenon is partly inherent to science itself. The scientific ecosystem is fragile – built painstakingly upon the results of older science. It’s a wholly a posteriori network. Citations proliferate. That level of interdependence is eminently useful, but it also makes the system vulnerable. Researchers can’t cross-check each and every hypothesis that leads to their own, given the time pressure to publish.
But science’s inter-reliant makeup is also an opportunity. It’s a system ripe for disruption – highly suitable for a technology that has only been around for about a decade: blockchain.
In simple terms, blockchain is a decentralised database which is visible to everyone. It has the potential to reshape various business models in the majority of industry verticals and is most commonly referenced in relation to financial services and the supply chain space. But it’s academic research where blockchain has some of its greatest potential, specifically as a solution to the problem of trust.
Scientific knowledge is arguably the ultimate decentralised system, particularly as we have transitioned from analogue into digital. It isn’t controlled by a central authority, and, by its very nature, it demands public scrutiny and constant challenge.
But things don’t work that way at the moment. Today, getting a study published rests firmly on the peer-review process. A handful of experts will quickly read a study, offer advice, and recommend whether it should be published. And as poor reproducibility statistics have gone to show, that’s a highly fallible process. Reviewers are under considerable time pressure. Reliability is difficult to gauge. The problem of bias is intractable.
With blockchain, every aspect of that process could be made transparent, opening up the possibility of genuine public scrutiny. Since blockchain is, at its core, an immutable ledger, it would offer users a rich, open paper trail of reviews, arguments, and hypotheses around certain scientific problems. With papers side-by-side on databases, and with the assistance of artificial intelligence, reliability becomes easier to establish. This could all instil much greater trust in the system.
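The core mechanism behind that paper trail can be sketched in a few lines of Python: an append-only chain in which each entry’s hash commits to everything recorded before it, so no earlier review can be quietly altered. This is a minimal illustration of hash-chaining, not a description of any particular blockchain platform, and the paper identifiers and review texts are hypothetical.

```python
import hashlib
import json

def entry_hash(prev_hash, payload):
    """Hash this entry's payload together with the previous entry's hash."""
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

class ReviewLedger:
    """Append-only log of peer reviews: each entry commits to all prior entries."""

    def __init__(self):
        self.entries = []  # list of (hash, payload) pairs

    def append(self, payload):
        prev = self.entries[-1][0] if self.entries else "genesis"
        self.entries.append((entry_hash(prev, payload), payload))

    def verify(self):
        """Recompute every hash; any tampering with history breaks the chain."""
        prev = "genesis"
        for h, payload in self.entries:
            if h != entry_hash(prev, payload):
                return False
            prev = h
        return True

# Hypothetical reviews of a hypothetical paper.
ledger = ReviewLedger()
ledger.append({"paper": "doi:10.0000/example", "review": "methods sound"})
ledger.append({"paper": "doi:10.0000/example", "review": "replication failed"})
assert ledger.verify()

# Rewriting an earlier review without recomputing the chain is detectable.
ledger.entries[0] = (ledger.entries[0][0],
                     {"paper": "doi:10.0000/example", "review": "all fine"})
assert not ledger.verify()
```

A real blockchain adds distribution and consensus on top of this structure, but the tamper-evidence that makes the review trail trustworthy comes from exactly this chaining of hashes.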
Blockchain also provides a useful way to validate knowledge dynamically, subsequent to publication. Science is mercurial: new information constantly arises, casting doubt on older items of research, but existing publishing practices lack the proper tools to accommodate these changes in the literature – retractions, replications, new findings, and so on. Blockchain, with its innate traceability, would enable peer review to become a continuous, open process.
Finally, blockchain offers the potential to build entirely new economic models by issuing tokens, or digital currency, tied to the value of what the community is building – the most precious thing we humans possess: knowledge. With the token incentives common to blockchain models, accurate peer reviewing, accurate research papers, and the many other activities vital to maintaining the community’s knowledge could all be rewarded. The influence of human bias would abate through the sheer volume of users and reviewers, with individual biases cancelling out in aggregate.
An open, scalable, decentralised platform, underpinned by blockchain, thus offers an optimal way to fix the distortions that beget inaccuracies and fuel distrust in scientific knowledge generation and dissemination worldwide. A community-run engine capable of checking the underlying factual base of a given input text would give us a unique opportunity to de-bias our entire knowledge base through a new prism built to the highest standards of transparency and accountability.
By authenticating and certifying published research data using the blockchain, the scientific community could reduce errors and regain the public’s confidence – promoting reliable research, and sorting fact from fiction. And this is, fundamentally, what science is all about.