Professor Bainbridge obviously didn't read the paper
Professor Bainbridge, normally a pretty reasonable guy, unfortunately ran counter to his usual form the other day when he made an ignorant and ill-informed statement regarding the recent JAMA article showing that one third of major clinical studies are later found either to have been incorrect in finding a treatment effect for the intervention studied or to have found a treatment effect significantly stronger than what is found in later studies:

If it sometimes seems like health sciences professionals are constantly changing their minds about what's good or bad for you, maybe it's because some of them are just lousy scientists.
That statement is just so wrong on so many levels and exactly the sort of misinterpretation of the study that I feared when I first read it. The overall competence of most investigators (or lack thereof) has little or nothing to do with it, as Professor Bainbridge would probably have realized if he had read the actual article rather than just a CNN news report on it. As Dr. Ioannidis, the study's author, wrote (in a passage I quoted in a lengthy piece I posted on this very study the other day):

We should acknowledge that there is no proof that the subsequent studies and meta-analyses were necessarily correct. A perfect gold standard is not possible in clinical research, so we can only interpret results of studies relative to other studies. Whenever new research fails to replicate early claims for efficacy or suggests that efficacy is more limited than previously thought, it is not necessary that the original studies were totally wrong and the newer ones are correct simply because they are larger or better controlled.
This phenomenon of early clinical studies being contradicted by later studies is not due to lots of scientists being lousy researchers, as Bainbridge seems to be implying. No, I'm not arguing that there aren't some lousy researchers out there, only that this phenomenon is not primarily a result of lousy researchers. It's simply the nature of the beast when it comes to clinical studies. Indeed, if you got rid of every "lousy researcher" out there, this phenomenon would almost certainly largely persist, mainly because lousy researchers don't often manage to pass peer review at journals like The New England Journal of Medicine, Lancet, or JAMA.* This phenomenon is mainly a function of how difficult clinical research is and how many confounding variables have to be controlled for, which often leads to later studies disagreeing with initial studies.

Most doctors understand this; indeed, learning how to deal with conflicting literature is something I wish medical schools would teach more effectively. It is also the reason that, when looking for the answer to a clinical problem, one has to look at the literature and studies in their totality, as the Cochrane Collaboration tries to do. In any case, if Professor Bainbridge had read the actual study (which, unlike many medical studies, is quite understandable by the educated lay person), I doubt he would have made such an ill-informed comment. If he can't get access to the original study, given that my institution has a subscription to JAMA, I'd be happy to e-mail him a PDF of the original article.
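Speaking of the nature of the beast: for the statistically inclined, here's a little toy simulation of what I mean. To be clear, this is my own sketch, not anything from the JAMA article, and every number in it (trial sizes, the made-up distribution of true treatment effects, the significance cutoff) is an assumption chosen purely for illustration. It models perfectly honest, competent researchers running small early trials, with only the "statistically significant" positive results getting noticed, and then a larger follow-up trial of each of those same interventions:

```python
# Toy simulation (my own illustration, not from the Ioannidis paper) of why
# early positive trials tend to be contradicted or attenuated later, even
# when every simulated "researcher" is honest and competent.
import numpy as np

rng = np.random.default_rng(42)

n_trials = 10_000   # hypothetical early trials (assumed number)
n_early = 50        # patients per arm in each early trial (assumed)
n_late = 500        # patients per arm in the larger follow-up (assumed)
sigma = 1.0         # outcome standard deviation (assumed)

# Assumed world: most interventions have true effects near zero.
true_effects = rng.normal(loc=0.0, scale=0.15, size=n_trials)

def observed_effect(true_effect, n_per_arm):
    """Observed treatment-vs-control difference in means, with sampling noise."""
    se = sigma * np.sqrt(2.0 / n_per_arm)
    return true_effect + rng.normal(0.0, se, size=true_effect.shape)

early = observed_effect(true_effects, n_early)
late = observed_effect(true_effects, n_late)

# "Publication": only early trials with a nominally significant positive
# result (z > 1.96) become highly cited initial findings.
se_early = sigma * np.sqrt(2.0 / n_early)
published = early / se_early > 1.96

print(f"Published early trials:          {published.sum()}")
print(f"Mean early effect (published):   {early[published].mean():.3f}")
print(f"Mean later effect (same drugs):  {late[published].mean():.3f}")
print(f"Fraction attenuated or reversed: "
      f"{(late[published] < 0.5 * early[published]).mean():.2f}")
```

Run it and you'll see that the average effect in the larger follow-up trials is only a fraction of the average effect the published early trials reported for the very same interventions, and a sizable share of the early "positive" findings shrink by half or more or reverse outright. No incompetence, fraud, or lousy research required; just small samples plus selective attention to significant results.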
Even worse, oddly enough, another blogger who sent him a TrackBack somehow managed to relate this article to the Terri Schiavo case. I hate to tell Hennessy this, but this article says nothing about the Terri Schiavo case, nor does it make the point he seems to want it to make, namely that experts are often "wrong in individual cases." In fact, it doesn't address the issue of individual cases at all (particularly the diagnosis of something like a persistent vegetative state, which is no doubt what the blogger was implying the experts got wrong). It addresses only clinical studies of interventions that were later found to be either not efficacious or not nearly as efficacious as the early study suggested! But never let a few facts stop you from making your ideological point, eh?
*Yes, I'll concede that the Wakefield study that claimed to find a link between the MMR vaccine and autism is an example of a lousy investigator managing to get a lousy study published in a high-visibility journal. Fortunately, this doesn't happen that often, and when it does, it often causes an uproar like the one that erupted in the wake (sorry for the choice of words) of that study.
Your response is appreciated, but I don't think I misinterpreted what you wrote at all. I was referring mainly to your July 15 post, in which you said:
ReplyDelete"During the Terri Schiavo debate, I wrote a piece about how experts are often wrong when it comes to specific cases. Today, Professor Bainbridge links to a CNN story about just how often health care experts are: a full 1/3 of the time.
"I told you so."
Can you understand why I concluded that you were linking the JAMA article to your piece on Schiavo and to the Schiavo case in general? If you didn't intend to make such a link, then why on earth did you write what you wrote?
You make an excellent point about the need for multiple studies using different methodologies to examine multiple confounding variables so that we can arrive at scientifically valid conclusions. However, most physicians are not well trained in experimental design, statistical analysis, or research ethics, resulting in poorly designed or executed studies. Additionally, peer review is conducted by individuals with similar training and experience. It is therefore not inconceivable that poorly conducted medical research gets published, only to be refuted later by better studies.
Just my two cents…