Most major newspapers and periodicals allocate space to report on the latest health research published in scientific and medical journals. From this coverage, we shape our beliefs and opinions about health and medicine. I know that I am repeatedly asked, “Where are the scientific studies that support alternative medicine?” But what if the so-called scientific studies that form our views, that place modern medicine on a pedestal, and that constantly disparage alternative health come from sources as biased as paid political campaign ads? What if media coverage of health and medicine can’t exactly be trusted — if the health news that we receive comes closer to propaganda than to real science?
In fact, according to an article in PLoS Medicine, scientific journals are not the most reliable source for finding out about the true state of research science. Like all publications, scientific journals exist to appeal to an audience (medical doctors, other researchers, etc.). This means that what gets published tends to be the most appealing, or titillating, or exploitable news — news that will sell the journal so that it can make money, or at least survive. According to the article, contributors to journals typically have an agenda: perhaps the desire to attract venture capital to underwrite product development, to gain exposure that might lead to approval of a new pharmaceutical, or to bolster the writer’s professional reputation. In other words, the chances are minimal that someone will contribute an article showing negative results for a medical procedure or drug, or one that doesn’t somehow lead to a professional or marketing advantage.
In fact, for every study that gets space in a major research publication, hundreds or even thousands of studies go unreported. This reflects what PLoS Medicine calls “the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication.” As in the rest of the publishing world, rising costs have forced medical journals to cut back on what gets printed; only a tiny percentage of relevant news makes it into print. Authors scramble to get someone — anyone — to publish their papers, and even then they may stand a 10 percent chance of publication or less, particularly in the leading journals.
At best, the less “sexy” studies might get reported in small, minor publications — publications that the popular press will never pick up on. Maybe that’s why out of thousands upon thousands of articles published annually by journals affiliated with BioMed Central — an online information service that distributes articles from 187 medical and scientific publications — only 73 articles were accessed more than 10,000 times in 2007. In other words, the audience for what gets published in the smaller journals is quite limited.
When only the most dramatic results make it into the public arena, readers can’t possibly get a grip on the entire scope of data that might pertain to a particular issue. For instance, according to a recent article in the New England Journal of Medicine, published information about the effectiveness of antidepressants is downright misleading. Researchers reviewed the results of 74 antidepressant studies reported to the FDA. The results of 21 of those studies never got published, and the researchers concluded that another 11 of the original 74 studies presented their results in a false positive light. Of the unpublished studies, nearly all showed that antidepressants didn’t work well. Of the published studies, nearly all showed a positive result — that antidepressants supplied the happy cure that depressed patients needed. As the NEJM reports, “According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that only 51% were positive.”
That’s a mighty wide margin of error, involving a lot of smoke and mirrors. What’s truly bothersome is that even medical professionals might not get the full scoop unless they happen to attend a conference where negative results get presented. Otherwise, they read their journal articles and, based on those articles, design treatment plans and make decisions about prescriptions. As patients, we swallow the Zoloft or whatever pill gets assigned to us, not realizing that there’s roughly a 50 percent chance of failure (and that’s assuming you believe that the 42 positive studies weren’t agenda driven), along with side effects galore.
Unfortunately, the failure to spread the word about negative study results extends far beyond antidepressants. A 2005 article published in the Journal of the American Medical Association (JAMA) reported that of the 49 most-cited papers about medical interventions, one-third had already been contradicted within a few years of publication, although the contradictory research largely went unreported. Plus, as the article reports, five out of six “nonrandomized studies had been contradicted or had found stronger [initial] effects [than in follow-up studies]” within a few years. This means that initial positive results commonly turn out to be exaggerated or wrong once follow-up studies are completed, but there may be a significant time lag before the contradictory results come to light. Also, and perhaps even more worrisome, as PLoS reports, once studies appear to be heading in a negative direction — not substantiating claims that a given treatment works — they tend to be abandoned, so their results never get reported.
So beware, beware, beware! Remember that only 15% of all medical treatments are even backed by studies in the first place — and now we learn that even the 15% that are backed are questionable at best. In other words, take the results of any medical study you see in the media with a grain of salt. Just because such studies are backed by scientists doesn’t make them true. And maybe be a little more suspicious of any “scientific” study that challenges beta carotene, nutrition, Echinacea, vitamin E — take your pick. As we’ve seen, at best, the odds that it’s correct are substantially less than 50/50.