What you know about science may be wrong.
That’s the premise of the newly founded Metaknowledge Network, a consortium of 25 scientists and scholars, funded largely by the John Templeton Foundation, who hail from fields including evolutionary biology, physics, history, sociology, medicine and computer science.
These scientists believe that our growing ability to harness technology and computational power to analyze research will expose unexamined rules and assumptions, yielding new insights into how we succeed and fail at the scientific process – and how we can improve it.
The fact that science can be deeply flawed may come as no surprise to those familiar with the work of Dr. John Ioannidis, a member of the Metaknowledge Network who published a landmark study in 2005 arguing that many published research findings are false.
At fault, he said, were poorly designed studies, financial and professional conflicts of interest, and the drumbeat of bias that can develop in a “hot” scientific field. These and other factors can lead to misleading associations or the non-disclosure of critical but contradictory findings.
A better model for pursuing science?
Science may be imperfect, says James Evans, the network’s director and a sociologist at the University of Chicago, and the network “starts with that as an obvious” rather than the main point. “Can we turn it around and produce something that’s more interesting or efficient or whatever it is we want our science to do?”
The goal of the Metaknowledge Network, which is based at the University of Chicago and the Computation Institute, is to develop tools for designing better scientific inquiries – ones that anticipate the biases, echo chambers and institutional practices that undermine research.
Unraveling those knots, says Evans, means applying technology like natural language processing, machine learning, statistical meta-analysis and computational modeling to the vast amount of data contained in published research findings.
Natural language processing, for example, can cull endless strings of text to extract claims and compare them to others contained in hundreds of thousands of papers. Computerized content analysis can also root out a “ghost theory,” a pervasive assumption that influences research, by picking up common keywords across disparate studies.
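For a rough sense of how that kind of keyword analysis might work, here is a minimal sketch in Python (a toy illustration with invented abstracts, not the network’s actual tools) that flags terms recurring across much of a set of papers as candidates for a shared, unexamined assumption.

```python
# Toy illustration, not the Metaknowledge Network's pipeline: flag terms that
# recur across many abstracts, a crude proxy for a pervasive background
# assumption (a "ghost theory") running through otherwise disparate studies.
import re
from collections import Counter

# Hypothetical abstracts from different fields; a real analysis would run over
# the full text of hundreds of thousands of papers.
abstracts = [
    "We model the network as a set of rational agents optimizing fitness.",
    "Gene regulation is framed as an optimization problem over fitness costs.",
    "Market behavior follows from agents optimizing expected utility.",
    "Protein folding is treated as energy optimization on a rugged landscape.",
]

STOPWORDS = {"the", "a", "an", "of", "as", "is", "we", "over", "on", "from", "and"}

def keywords(text):
    """Lower-case word tokens with stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

# Document frequency: in how many abstracts does each term appear?
doc_freq = Counter(term for doc in abstracts for term in keywords(doc))

# Terms shared by at least half of the documents are candidates for a
# pervasive assumption cutting across fields.
threshold = 0.5 * len(abstracts)
shared = [term for term, n in doc_freq.items() if n >= threshold]
print(shared)  # e.g. 'agents', 'optimizing', 'optimization', 'fitness'
```

A real system would add stemming, phrase detection and claim extraction on top of this, but the underlying move is the same: look for the vocabulary that quietly spans studies which otherwise have little in common.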
Research may be less fruitful or objective than it could be, Evans says, because scientists are not asking the most efficient questions.
As an example, he points to the exhaustive studies done on the genome. Scientists and consumers alike have been hopeful about the promise of personalized medicine based on this research. While there have been many interesting findings, the field has not yet fulfilled its promise.
“Why has that been as unproductive as it has been?” Evans says.
Asking better questions
He thinks it may partly be the result of studying either individual gene-disease associations or every gene-disease association in small populations; in many cases, most known associations will not show up as statistically significant given the limited number of subjects.
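The statistical point can be made concrete with a small simulation. The sketch below, with entirely invented carrier frequencies, sample sizes and significance thresholds, shows how a variant that genuinely raises disease risk is almost never detected in a small case-control study once the threshold is corrected for testing thousands of genes, yet is picked up reliably in a much larger one.

```python
# Toy power simulation with made-up numbers, not an analysis from the network:
# a risk variant is carried by 36% of cases and 30% of controls, and we count
# how often a study of a given size detects the association at a significance
# threshold corrected for testing ~20,000 genes.
import math
import random

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a difference in carrier frequencies (z-test)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def power(n_cases, n_controls, f_cases=0.36, f_controls=0.30,
          alpha=0.05 / 20_000, trials=1_000):
    """Fraction of simulated studies in which the true association is detected."""
    hits = 0
    for _ in range(trials):
        x_cases = sum(random.random() < f_cases for _ in range(n_cases))
        x_controls = sum(random.random() < f_controls for _ in range(n_controls))
        if two_proportion_p(x_cases, n_cases, x_controls, n_controls) < alpha:
            hits += 1
    return hits / trials

print(power(200, 200))      # close to zero: the real effect stays invisible
print(power(5_000, 5_000))  # the same effect is detected most of the time
```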
Instead, the Metaknowledge Network approach would be to create an algorithm that tests a series of hypotheses and identifies more efficient questions to ask at the outset of a study. It’s a method that would also test scientists’ boundaries.
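What such an algorithm might look like is easiest to see in miniature. The sketch below is an illustration of the general idea rather than the network’s published method: each candidate hypothesis is scored by the expected information a study would yield per unit of cost, and the highest-scoring questions are asked first. The priors, error rates and costs are invented.

```python
# Illustration of the general idea, not the network's published algorithm:
# rank candidate hypotheses by how much a study is expected to teach us
# (entropy reduction, in bits) per unit of cost. All numbers are invented.
import math

def entropy(p):
    """Binary entropy in bits: uncertainty about whether a hypothesis is true."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_gain(prior, sensitivity=0.8, specificity=0.95):
    """Expected entropy reduction from one study with the given error rates."""
    p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
    post_pos = prior * sensitivity / p_pos
    post_neg = prior * (1 - sensitivity) / (1 - p_pos)
    expected_posterior = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - expected_posterior

# Hypothetical candidate questions: (label, prior belief it is true, cost in subjects).
candidates = [
    ("gene A drives disease X", 0.50, 2_000),
    ("gene B drives disease X", 0.05, 500),
    ("gene C modifies drug response", 0.30, 1_000),
]

# Ask the questions that buy the most information per subject enrolled.
ranked = sorted(candidates, key=lambda c: expected_gain(c[1]) / c[2], reverse=True)
for label, prior, cost in ranked:
    print(f"{label}: {expected_gain(prior) / cost:.2e} bits per subject")
```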
“That’s hard for people who are committed to developing a set of findings that give them a brand and get them tenure,” Evans says. Nevertheless, he hopes that such techniques will become part of research efforts, and that, for example, a big genomic center might devote part of its time to developing the best questions.
Network members will take on 10 major initiatives of their own, including one project on genetics and how different theories are connected. That research will inform the members’ efforts to improve predictions about interactions between genes and drugs in large-scale human tissue sample studies.
Jacob Foster, a statistical physicist by training and a research assistant professor at the University of Chicago, will help look at fields that study big questions that cannot be tested with experiments, like “What is the origin of the universe?” and “What is the fundamental nature of physical reality?”
The idea, Foster says, is to understand why one theory stands out above others. Perhaps it’s the prestige of those who defend it or the phenomenon of researchers herding around the most popular explanation. It might even have to do with aesthetics – if a so-called “elegant” explanation of the universe prevails, it might be because it’s easier to understand.
Network members will use technology like machine learning or natural language processing to isolate these factors and then create hypotheses about how they operate.
“It’s not a debunking exercise. It’s not intended to unmask the frailties of science,” Foster says of the network’s research. “We’re building on decades of this deep work on science and trying to connect it to this computational moment…to get a quantitative understanding of why we have the knowledge we have.”