Isaac Newton, age 46 (portrait by Godfrey Kneller, public domain)

Last week, several commentators felt I was worshipping at the altar of science, so now I feel honor-bound to express some of my own misgivings. The scientific process, as I see it, is the best we’ve got, but it’s far from perfect.

Honest scientists have known this for as long as science has been around, of course, recognizing the scientific method as a way of moving tentatively towards the truth, while never, ever reaching it. Or at least, we lack the ability to know when we get there, which is the same thing. (Brains are finite, and they evolved to outsmart saber-toothed tigers, not figure out what dark matter is.)

Isaac Newton displayed scientific honesty when, having figured out a simple equation for gravity (the force between two bodies is proportional to the product of their masses divided by the square of the distance between them), he admitted that his solution was hardly the last word: “That gravity should be innate, inherent, and essential to matter, so that one body may act upon another, at a distance through a vacuum … is to me … an absurdity.” He was right to hedge: his equation was eventually superseded by Einstein’s general relativity, and Einsteinian gravity will surely in turn be superseded by a yet more accurate formulation.
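In modern notation, with G for the gravitational constant (a later addition; Newton himself framed the law as a proportionality), the equation is:

$$ F = \frac{G\, m_1 m_2}{r^2} $$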

The current questioning of scientific results really began in earnest in 2005, when Stanford University epidemiologist John Ioannidis published what would become the most-downloaded paper ever to appear in the journal PLoS Medicine, provocatively titled “Why Most Published Research Findings Are False.” In a related paper in JAMA (the Journal of the American Medical Association), Ioannidis reviewed the 49 most-cited clinical-research studies published between 1990 and 2003.

* 45 of the 49 were positive (positive-results bias), i.e., the intervention under investigation succeeded. Which is a bit suspicious right off the bat, but this is a known issue. Medical journals are no different in this respect from all media: “something worked” is far sexier than a null result. (“Nothing happened in Blue Lake today!” doesn’t usually make headlines.) You probably won’t win a Nobel by showing that Vitamin E is ineffective against prostate cancer; the prize comes when you show what does work.

* Of these 45 papers, subsequent studies with larger sample sizes indicated that seven were contradicted outright, while the results of another seven were less robust than in the original papers. That is, 14 of the 45, nearly a third of these highly regarded, much-quoted papers, didn’t hold up. (The arithmetic behind that provocative “most findings are false” title gets a sketch below.)
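That title rests on simple base-rate arithmetic: if most hypotheses a field tests are false to begin with, and studies are underpowered, flukes can outnumber genuine discoveries among the “positive” results. Here’s a back-of-envelope sketch in Python; the specific numbers are my illustrative assumptions, not Ioannidis’s figures.

```python
# Back-of-envelope version of the "most published findings are false" argument.
# All numbers are illustrative assumptions, not Ioannidis's exact figures.
hypotheses = 1000
prior_true = 0.10   # suppose 1 in 10 tested hypotheses is actually true
alpha = 0.05        # conventional false-positive rate (p < .05)
power = 0.20        # statistical power; many studies really are this weak

true_hits = hypotheses * prior_true * power          # real effects detected
false_hits = hypotheses * (1 - prior_true) * alpha   # flukes that pass the test

ppv = true_hits / (true_hits + false_hits)
print(f"positive findings that are actually true: {ppv:.0%}")  # ~31%
```

Under those assumptions, 45 of the 65 positive results are false. Nudge the prior or the power and the fraction moves, which is the heart of the argument: fields that test long-shot hypotheses with small samples will publish mostly flukes.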

Now Ioannidis is, as I said, an epidemiologist, so his review focused on medical treatment. However, it’s pretty easy to see why published reports throughout the world of science and technology are liable to the same sorts of problems as his set of 49.

The correlation/causation problem: I used the Vitamin E-prostate cancer example because it’s one that psychologist Richard Nisbett uses in his Crusade Against Multiple Regression Analysis. Apologies to all statisticians out there, but his complaint is, at bottom, the old warning that correlation isn’t causation. Nisbett points out that you can usually find a correlation between taking Vitamin E and low incidence of prostate cancer because of “healthy user bias”: the guy taking Vitamin E is also the guy who is “watching his weight and his cholesterol, gets plenty of exercise, drinks alcohol in moderation, doesn’t smoke, has a high level of education, and a high income. All of these things are likely to make you live longer … it’s going to look like Vitamin E is terrific because it’s dragging all these other good things along with it.”
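A toy simulation makes the trap concrete. In this sketch (my own illustration with made-up numbers, not Nisbett’s), a hidden “health-consciousness” trait drives both Vitamin E use and lower cancer risk, while the vitamin itself does nothing; the raw comparison still makes it look protective.

```python
import random

random.seed(42)

# Toy model of "healthy user bias": a hidden trait drives both Vitamin E
# use and lower cancer risk; Vitamin E itself is causally inert.
N = 100_000
takers, taker_cases = 0, 0
skippers, skipper_cases = 0, 0

for _ in range(N):
    health_conscious = random.random()              # hidden confounder, 0..1
    takes_vitamin_e = random.random() < health_conscious
    # Cancer risk falls with health-consciousness; the vitamin has no effect.
    gets_cancer = random.random() < 0.10 * (1.5 - health_conscious)
    if takes_vitamin_e:
        takers += 1
        taker_cases += gets_cancer
    else:
        skippers += 1
        skipper_cases += gets_cancer

print(f"cancer rate, Vitamin E takers: {taker_cases / takers:.3f}")
print(f"cancer rate, non-takers:       {skipper_cases / skippers:.3f}")
# Takers show a visibly lower rate, though the vitamin does nothing:
# it's dragging the hidden healthy habits along with it.
```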

So those are two of the ways science can get bent: (1) positive-results bias (papers accepted for publication skew toward positive results), and (2) the correlation/causation problem (untangling the one from the other can be insanely difficult). One more:

(3) Regression to the mean. Unless you’re a hypochondriac, you go to the doctor (or acupuncturist, or faith healer) when you feel really, really bad — perhaps at your peak badness. And whatever he or she does, you naturally move away from the peak and start feeling better — ergo, the treatment worked. Except you would have gotten better anyway.
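Here, too, a toy simulation (my own, with made-up numbers) shows the effect: give everyone a stable symptom baseline plus day-to-day noise, have them “see the doctor” on their worst day of the week, administer a treatment that does nothing, and measure again.

```python
import random

random.seed(7)

# Toy model: daily symptom severity = personal baseline + random noise.
# Patients "see the doctor" on their worst recent day, get a sham
# treatment, and are measured again the next day.
improvements = []
for _ in range(10_000):
    baseline = random.uniform(3, 7)                  # chronic severity, 0-10 scale
    week = [baseline + random.gauss(0, 1.5) for _ in range(7)]
    worst = max(week)                                # the day you call the doctor
    next_day = baseline + random.gauss(0, 1.5)       # sham treatment does nothing
    improvements.append(worst - next_day)

avg = sum(improvements) / len(improvements)
print(f"average 'improvement' after a do-nothing treatment: {avg:.2f} points")
# Positive on average: selecting people at peak badness guarantees
# apparent recovery, treatment or no treatment.
```

On this toy scale the sham treatment reliably “works” by about two points, purely because we only measured people at their peak badness.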

Now expand that example to a researcher studying, for example, whether test subjects are more or less likely to recognize a face after they’ve previously used words to describe it. Surprisingly, the experiment showed that the more we verbally describe a face, the less likely we are to remember it. The researcher (actually, a guy by the name of Jonathan Schooler) called the phenomenon “verbal overshadowing.” But subsequently, when he and other researchers tried to replicate the experiment, the effect got smaller and smaller, i.e. it regressed towards the mean of “just a little bit.” (For a very readable account of the story, click here.)

###

The problems I’ve mentioned above (and many I haven’t, including confirmation bias, selection bias, the sunk-cost fallacy and the clustering illusion) are actually examples of why science works … in the long run. We know about bad results because they can’t be replicated, or a researcher’s unconscious bias shows up in subsequent experiments, or publication brings a skeptical review by one’s peers. (Scientists tend toward ruthless skepticism when examining a colleague’s results!)

So in the long run, with hundreds or thousands of scientists working on the same problem and almost always getting the same results — be it the costs and benefits of GMOs or the reality of climate change or the (non-existent) link between the MMR vaccine and autism — we make progress. It ain’t perfect, but it’s the best we’ve got.

###

Barry Evans gave the best years of his life to civil engineering, and what thanks did he get? In his dotage, he travels, kayaks, meditates and writes for the Journal and the Humboldt Historian. He sucks at 8 Ball. Buy his Field Notes anthologies at any local bookstore. Please.