Voltaire's Candide (it's hard to believe it was written more than 250 years ago) has a relentlessly optimistic character called Dr Pangloss who stoutly maintains that we live in the best of all possible worlds. Even when confronted with such random horror as the earthquake and tsunami that destroyed Lisbon on All Saints' Day (!) 1755, Pangloss comes up with a rational explanation of why such an event is a Good Thing.
That's a very WHITE view of the world. Winston Churchill had a more nuanced view of his bailiwick: "Democracy is the worst form of government, except for all those other forms that have been tried from time to time."
What about science? In the same way as Americans are complacent about censorship, scientists are often unthinkingly smug about science and how it is done. Even if they're not card-carrying Panglossians, and acknowledge that there are faults in the process, they probably row in behind Churchill in thinking that it's the least-worst way we have of finding out how the world ticks. And in biology, we're fond of erecting a straw man out of the theist idea of Intelligent Design and laughing it to scorn to beef up our belief (sic) in evolution.
Actually I reckon that not many scientists think much about the process of science; it's quite enough for a week's work to think about our own data and try to get it over the line for publication. There are two pregnant phrases in that last sentence.
"a week's work" implies that most of us treat science as a 40- (or 60-) hour week and balance other aspects of our lives against the science. Sure, we've all had a key insight while taking a slice off the Sunday roast, but most of the work happens M-F 9-5 - or M-F 8-8 if you're in the last year of your PhD thesis.
"get it over the line" is a very worrying metaphor, which we use all too often. We've got some funding; we've designed an experiment which is a little under-powered (because the money is never enough); we've hired a graduate student who is bright, personable and dedicated; they've worked their socks off - even putting in a few Saturdays at the bench; they've stripped the sample for all the data it's possible to extract; and they've got a significant result. So we write it up and try to pitch it at a journal with a reasonably high impact factor. We need the publication to tick a box on the next 6-month report due to the funders. The student needs the pub to launch their career in science and form the scaffolding for a chapter of her/his thesis. If we haven't got tenure, we need the pub for our own career. So we don't chide ourselves for doing anything wrong in getting it over the line.
At the end of last year Brian Nosek and his student Matt Motyl found themselves in just such a situation. They'd carried out a nifty experiment in the psychology of perception which had a chunky sample size (N~2000 - nearly twice as big as Gallup or MORI will poll to predict the result of the next election) and discovered something that had the ring of truth. Nosek & Motyl asked their participants to associate (with a mouse-click) words with a shade along a spectrum of greys from white to black. They found that people at either extreme of the political spectrum tended to see things, literally (the words) and figuratively (their politics), in more black-and-white terms than moderates, who chose shades of grey. It was statistically significant and they wrote it up. But as they were doing this they decided to run the experiment again to see if they could verify/replicate their result. The new sample showed that political moderates and political extremists picked word-shades more or less at random, and there was no significant difference between the two groups. So no publication. Dang! (Alerted to the story, and more commentary, on Metafilter.)
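How does a "significant" result simply evaporate on replication? A toy simulation makes the point (this is not Nosek & Motyl's actual analysis - just an illustrative sketch, assuming two groups with no real difference between them and a conventional p < .05 threshold): even when there is no true effect at all, about one study in twenty comes out "significant" by chance, and that fluke almost never survives a re-run.

```python
import math
import random

def simulate_experiment(n, effect=0.0, rng=random):
    """Simulate one two-group study with n participants per group.

    Scores are unit-normal noise; `effect` shifts group B's mean.
    Returns True if a two-sided z-test on the difference of means
    comes out 'significant' at p < .05.
    """
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(2.0 / n)  # standard error of the mean difference
    z = (sum(b) / n - sum(a) / n) / se
    # Two-sided p-value from the normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return p < 0.05

rng = random.Random(42)
trials = 2000
# No true effect (effect=0.0), yet some studies are still "significant"
false_positives = sum(simulate_experiment(100, 0.0, rng) for _ in range(trials))
rate = false_positives / trials
print(f"false-positive rate: {rate:.3f}")  # hovers around 0.05
```

The 5% is the textbook Type I error rate; the real-world rate is worse once under-powered designs, flexible analysis choices and publication pressure pile on top.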
But it wasn't a total bust, because they wrote up the story and expanded it into a potent analysis of a deeply flawed system of scientific publication: Scientific Utopia: Restructuring Incentives and Practices to Promote Truth Over Publishability, which you can/should/MUST read in full here. They cite two recent(ish) papers asserting that a large proportion of published papers are wrong in their findings and/or irreproducible:
Begley, C. G. & Ellis, L. M. (2012). Drug development: raise standards for preclinical cancer research. Nature, 483, 531-533.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124.
I was up in The Smoke last week and mentioned this story to a palomino - Tony Kavanagh, a meticulously careful and thoughtful molecular geneticist. He singled out journals as a major contributor to the publication of crap: their impact, and their very existence, depend to a large extent on the publication of novel and exciting papers. So they are forked by the truth-or-publish dilemma that Nosek et al identified. He cited the particularly egregious example of Science (the premier US general science journal) publishing a paper in 2010 claiming the existence of a bacterium that replaced phosphorus with arsenic in the very backbone of its DNA molecule (arsenic, element 33, sits one step below phosphorus, element 15, in the periodic table). A simple test of actual DNA purified from the microbe would have revealed as much phosphorus there as every living thing has, but, in the rush to be sexy, the editors (and their independent referees) didn't insist on this test. As Tony said - extraordinary claims require extraordinary levels of proof, and in this case not even ordinary levels of proof had been demanded. Eeee, he were quite cross, our Tony!
I've been bigging up Begley & Ellis and Ioannidis since they came across my radar, but I'm sure that for most of the people whose white-coats I have pulled about the matter, the reaction was: tsk - terrible things are clearly happening out there; without much reflection on whoa - I wonder if my lab is part of the problem?