Frontier science versus textbook science
During my usual weekly perusal of the New York Times, I was surprised to come across this rather perceptive article by Nicholas Wade in which he discusses the difference between "frontier" science and "textbook" science. No, I wasn't surprised because Nicholas Wade wrote a perceptive article, but rather because it was published in the New York Times. In it, he asks:
How then can the fraudulent claims by Dr. Hwang Woo Suk have been accepted by Science, a leading journal that rejects most papers submitted to it? How can the community of stem-cell scientists have allowed a very visible claim to have stood unchallenged in their field for 20 months? Little wonder that Richard Doerflinger, an official of the United States Conference of Catholic Bishops, ridiculed the dreams of therapeutic cloning in a statement last week, scoffing that scientists were chasing miracle cures "in pursuit of this mirage."
Dr. Hwang Woo Suk, as you may recall, is the Korean scientist who has been disgraced for publishing what are now widely believed to be fabricated results indicating that he had created a line of patient-specific stem cells, as well as for committing numerous ethical lapses, such as using eggs from women who worked for him and who were therefore potentially susceptible to pressure from him. He has blamed the fabricated results on subordinates, but at the very least he is clearly guilty of extreme sloppiness and at worst of outright fraud. The whole scandal has been a major black eye for Korea's previously lauded efforts in stem cell research and has provoked many attacks on the peer review process that allowed Dr. Hwang's papers to be published in the journal Science, one of the most prestigious and difficult-to-crack scientific journals in the world.
Mr. Wade makes the point that research of the kind Dr. Hwang was doing is what he characterizes as "frontier" science, that is, science at the very edge of what is known or possible, and he warns against overreacting:
The contrast between the fallibility of Dr. Hwang's claims and the general solidity of scientific knowledge arises from the existence of two kinds of science - a distinction that is often blurred when new advances are reported first by scientific journals and then by the news media. There is textbook science and frontier science, and the two types carry quite different expiration dates.
Textbook science is material that has stood the test of time and can be largely relied upon. It may include findings made just a few years ago, but which have been reasonably well confirmed by other laboratories.
Science from the frontiers of knowledge, on the other hand, is wild, untamed and often either wrong or irrelevant to future research. A few years after they are published, most scientific papers are never cited again.
Scientific journals try to impose order on the turbulent flow of new claims by having expert reviewers assess their merit. But even at the best journals, reviewers provide only a rough screen. Many papers slip through that later turn out to be innocently wrong. A few, like Dr. Hwang's, are found to be fraudulent.
This rough screening serves a purpose. Tightening it up, in a vain attempt to produce instant textbook science, could retard the pace of scientific advance.
I'm not sure I'd be quite so blithe about the failure of peer review in this particular case, but Mr. Wade does make a good point. Much of science at the very frontiers turns out not to be correct. However, all too often the press reports it as though it were. We in science understand the difference between settled textbook science and the sort of frontier science that makes it into journals like Science. Indeed, we often lament that the very highest-tier journals, such as Nature, Science, and Cell, tend to be too enamored of publishing "sexy science": exciting or counterintuitive results that really grab the attention of scientists; in other words, "cutting edge" or frontier science. Such journals seem to pride themselves on publishing primarily such work (which is one reason why they are so widely read and cited), while more solid, less "sexy" results tend to end up in second-tier journals.
This leads to a paradox. The science published in the highest-profile, most prestigious journals is almost by definition the most tentative science. Given that, it is surprising how much of what is published in such journals actually does stand the test of time, but it should not be surprising that much of it does not. However, the very prestige of such journals gives such research seemingly more authority than research published in less prestigious journals. It is often said that one Nature, Science, or Cell paper is worth five or even ten papers in more pedestrian, middle-of-the-road journals as far as improving a scientist's CV (and chances of a good job or promotion) goes, perhaps because publication in such journals is viewed as an indication that a scientist's work is on the cutting edge. That perception, built up over time, is likely the major reason that it is very, very difficult to get a paper accepted and published in Science, Nature, or Cell. The vast majority of submissions are rejected, many without even being sent out for peer review because an editorial decision is made that they are not "interesting" enough (something that happened to me once). Scientists, however, understand that papers published in the most cutting-edge journals are tentative. They're interested in such papers because the work is the most likely to advance the frontiers of science, but they also know that the papers have a higher-than-average probability of being wrong, in part or in whole, or of being a dead end. Wade nails it when he writes:
But the roughness of the proceedings is not prominently advertised by journal editors, except when cases of blatant fraud are detected, whereupon they proclaim that peer review cannot reasonably be expected to detect fraud. They do not protest so much when newspapers report their journals' claims as if they were certifiably true. Because of Science's authority, Dr. Hwang's claims to have cloned human embryonic cells were prominently reported and presented to the public as if they were important breakthroughs.
I would also point out that, because of the imprimatur of Science, many scientists and physicians, myself included, considered Dr. Hwang's results to be major breakthroughs. Of course, part of this could be due to wish fulfillment, given the promise of fantastic new treatments for a variety of diseases that Dr. Hwang's results and new technique seemed to offer, but that's exactly the sort of situation in which we as scientists should be the most skeptical.
Many ideas for reforming peer review have been floated, but in reality I doubt that any of them would catch a determined fraud. Science and peer review inherently depend upon trust that the investigator presenting his data for publication has not fabricated it. The only real way to detect fraud would be to place such an onerous burden on peer reviewers that it would become difficult to find qualified scientists willing to serve as reviewers unless they were paid. It would require examining the raw data, and anyone who has done research knows just how hard it is to go through another scientist's laboratory notebook to evaluate the raw data. One proposal for reforming publication procedures and peer review that might actually help somewhat, however, is the change that Dr. Donald Kennedy, the editor-in-chief of Science, is considering:
But last week Dr. Kennedy announced he was considering revising the journal's publication procedures, though not with any great hope of preventing future cases of fraud. He suggested that authors would be required to state in writing their specific contributions to a report, a reform perhaps aimed at Dr. Gerald Schatten of the University of Pittsburgh. Dr. Schatten accepted senior authorship of - and thus responsibility for - one of Dr. Hwang's papers, even though Dr. Schatten had performed none of the experiments and was not in a position to vouch for them. All the work was done in Seoul.
A second proposed change is to have all authors state that they agree with an article's conclusions.
Both procedures may seem to include a certain potential for generating strife. Each author could overstate his or her contribution, arousing the wrath of all the others. Some authors may think a conclusion too timid, while others consider it an overstatement.
Medical journals, including JAMA and several surgical journals, have been doing just this for a while now, with no undue burden or generation of strife. It may not prevent fraud, but it definitely makes one feel accountable as an author. I can say from personal experience that, when I sign off on one of those statements for a paper on which I am a co-author, I want to make damned sure that I have carefully read the manuscript in its entirety and that I do indeed agree with it, at least in general.
As Wade points out:
Tightening up the reviewing system may remove some faults but will not erase the inescapable gap between textbook science and frontier science. A more effective protection against being surprised by the likes of Dr. Hwang might be for journalists to recognize that journals like Science and Nature do not, and cannot, publish scientific truths. They publish roughly screened scientific claims, which may or may not turn out to be true.
Indeed. That is the very nature of science. What is published the first time is considered tentative. It may or may not be correct. If other scientists can replicate the results or, even better, replicate them and use them as a foundation upon which to build and make new discoveries, only then does the work start to become something more than frontier science. And if the results are replicated enough times and by enough people, and used as a basis for further discoveries, to the point where they are considered settled, only then can they become "textbook" science. What, alas, the public often doesn't understand is that science is a process, not a bunch of facts, and that at its cutting edge it is often quite uncertain and controversial among scientists. To a lot of scientists, Dr. Hwang's work seemed fishy, but seeing it in Science allayed many suspicions, at least until other groups could try to replicate the research. In this case, it turned out that the skeptics were right.
"However, the very prestige of such journals gives such research seemingly more authority than research published in less prestigious journals."
The phrase "Seek truth before authority, not authority for truth" comes to mind.
"The contrast between the fallibility of Dr. Hwang's claims and the general solidity of scientific knowledge arises from the existence of two kinds of science - a distinction that is often blurred when new advances are reported first by scientific journals and then by the news media."
You, and Wade, do a good job of explaining how the non-scientific press that reports on these stories perpetuates the problems inherent in the reversal of that phrase.
This is a characteristic of humans in general, one that probably stems from the sheer enormity of the information available to us. Since we can't be experts on everything, we must rely on the expertise of some authorities. Your story points out the necessity of keeping up our awareness of just what makes those authorities worthy of that title.
Frontier science is a blast to read about! "Wow! They can fix the unfixable now!!!" The exhilaration of that kind of hope has got to be tempered by the mundanity of repetition, demonstration, and replication, though; i.e., practice, practice, practice!
It does occur to me that maybe I didn't emphasize that another reason frontier science gets more coverage is that it's a hell of a lot more fun and interesting to read about than more solid and settled science.
Undoubtedly the high-frequency reporting of 'frontier' science by the press (and science journals) is related to the 24/7 environment we find ourselves in. That these new 'discoveries' also feed into the opportunity stream of the 24/7 investment community seems to result in a precarious positive feedback loop.
Dr. Fombonne said something in passing during his first presentation at the MIND Institute. It was something like this: journals (all journals) tend to want to publish positive findings ("we found that this was the case...") and not to want to publish negative findings ("we looked and we couldn't find it...").
So, for instance, Wakefield's "I found measles in the guts of blah blah blah" is more likely to get published than the work of those who replicated his research and found no measles; for that matter, if he hadn't found measles in the first place, that paper wouldn't have been published...
Would you say that this is the case? It seems like it would be, but I don't submit to journals and don't read entire journals, just the articles that interest me.
Very eloquently written article; I will be pointing it out to various people. I came your way via the Medgadget Blog awards. Interesting reading.