John Ioannidis is a professor at a university in Greece, where he leads a team of Ph.D.s and doctoral candidates specializing in a field called meta-research: the study of the quality of scientific research itself.
His papers are among the most highly cited in the world, and he is invited to speak at prestigious organizations and scientific gatherings throughout the year.
His work draws surprisingly little criticism, which is interesting considering his contention that as much as 90 percent of the published medical information that doctors rely on is flawed.
Ioannidis and his team have shown repeatedly that most of what the medical community tells the world – and each other – about what to eat, which conditions warrant surgery, and which medications are safe and effective is wrong.
He attributes this to researchers’ need to publish in prestigious, selective medical journals in order to secure funding and sustain their careers.
As he said in an article published by The Atlantic, “I realized even our gold-standard research had a lot of problems.”
The Atlantic article elaborated on this: “…before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.
“To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news?”
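The publication filter The Atlantic describes can be made concrete with a quick back-of-the-envelope simulation. The sketch below is my own illustration, not anything from Ioannidis or the article, and every number in it (the share of hypotheses that are actually true, the statistical power, the false-positive rate) is an assumption chosen only to show the mechanism: when journals mostly see the “significant” results, a surprising share of published findings turn out to be false.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions only -- not figures from Ioannidis or The Atlantic.
n_hypotheses = 100_000   # hypotheses tested across a research field
p_true       = 0.10      # fraction of tested hypotheses that are actually true
power        = 0.80      # chance a real effect produces a "significant" result
alpha        = 0.05      # chance a non-existent effect does so anyway (a fluke)

truth = rng.random(n_hypotheses) < p_true            # which hypotheses are really true
p_positive = np.where(truth, power, alpha)           # each study's chance of a positive finding
positive = rng.random(n_hypotheses) < p_positive     # studies that report a "significant" result

# Journals -- and the evening news -- mostly see the positives.
false_share = np.mean(~truth[positive])
print(f"Published 'discoveries' that are actually false: {false_share:.0%}")
```

With these made-up but not unreasonable numbers, roughly a third of the published “discoveries” are false, and the share climbs quickly if sloppy analysis or selective reporting pushes the effective false-positive rate above 5 percent.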
He eventually established predictable error rates in published research, which vary with the type of study: a full 80% of non-randomized studies, 25% of randomized trials, and 10% of the most respected variety, large-scale randomized trials, turn out to be flawed.
This observation wasn’t applied merely to obscure, poorly publicized studies pulled out of minor journals. Ioannidis assessed 49 of the most highly regarded medical studies published over the preceding 13 years.
From the Atlantic article: “These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.”
What is one to do when most of the data given to us for the purpose of guiding scientifically valid decisions is flawed? Ioannidis suggests simply ignoring all of it.
“For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.”
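The “flukes in a big database” point is the classic multiple-comparisons problem, and it’s easy to reproduce for yourself. The sketch below is my own toy example, not anything from the article: it invents 200 completely random “nutrient” variables and one completely random “health outcome” for 500 people, then tests every nutrient at the usual p < 0.05 threshold. Even though nothing here has any real effect on anything, a handful of “significant” associations show up by chance alone.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Purely synthetic data: no nutrient below has any real effect on the outcome.
n_people    = 500
n_nutrients = 200
nutrients = rng.normal(size=(n_people, n_nutrients))   # fake intake measurements
outcome   = rng.normal(size=n_people)                  # fake health marker

# Test every nutrient against the outcome at the usual p < 0.05 threshold.
hits = []
for j in range(n_nutrients):
    r, p = pearsonr(nutrients[:, j], outcome)
    if p < 0.05:
        hits.append(j)

print(f"'Significant' associations found in pure noise: {len(hits)} of {n_nutrients}")
```

Roughly 5 percent of 200 tests, about ten spurious “findings”, is exactly what you’d expect, which is the point: a big observational database will always cough up a few eye-catching correlations whether or not they mean anything.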
That’s not the most encouraging news for people invested in their own health and fitness levels. It’s a bit harder to know where one is going when most of the maps and road signs are wrong to begin with.
Ioannidis is right, though, and since none of us wants to bumble blindly down this path, I suggest that when it comes to the more fundamental aspects of nutrition, strength and fat loss, we look to the people who are serving as human lab rats and draw some conclusions based on their results.
If a fairly large group of people who want to get extremely strong spend several years training at Westside Barbell and, as it turns out, become really strong, it’s a safe assumption that whatever they are doing at Westside works.
The best predictor of results, as I see it, is actual results. Want to see how well a nutrition style works? Go talk to a hundred people who have followed it for a long time and see how they’re doing.
Low-calorie, mail-order processed-food programs of the kind one commonly sees on infomercials come with fantastic claims, celebrity spokesmen and usually a handful of really inspiring before-and-after photos. But how many people do you know who have followed one of those plans, achieved the health and physique they desired, and maintained those results for years afterward? Right… probably close to zero.
Same thing with most of the nutritional supplements out there. Many of them make astounding health claims that can ostensibly be validated by a couple of published studies. But how many people do you actually know who changed their lives by drinking acai berry juice or taking deer antler pills?
Personally, I consider supplements a somewhat frivolous self-indulgence that I nonetheless partake in. I know that many of them do nothing, some provide a little benefit, and none of them help a lot, but I’m occasionally a sucker for good marketing and I like to stick with things like fish oil and minerals. At certain times in my life I’ve taken so many supplements that the combined effect should have been at least mild super-powers. I’ve also gone around a year at a time without taking anything. The difference is marginal at best.
The USDA food pyramid is backed by a mountain of scientists, government agencies and a multi-million-dollar ad campaign. But have you ever noticed that the children raised on those principles are the fattest, most medication-dependent ones on the planet?
Now, how many fat people do you know who consistently follow a paleo style diet, or something like Precision Nutrition? Not many, I’d guess.
Want to know if a trainer’s methodologies work? If he’s got access to a Google machine and PubMed, he can almost certainly provide you with a pile of studies to show the validity of whatever it is that he’s up to all day. But what happens when you observe the progress of his clients over the course of several years? Are they consistently improving and uninjured? That’s the data that matters.
P.S. The full Atlantic article about Dr. Ioannidis is available online. I tried to quote it as sparingly as I could here for the sake of brevity, but it’s such a well-written piece that it really deserves to be read in its entirety. I highly recommend it.