I was slagging off The Edge at the beginning of the year for launching a rather tired question for 2016. But I've long been on their e-list and they send me links to essays and symposia on an irregular basis - most of them seem not as interesting as the participants think they are. This week I was hooked by an interview, The Crusade Against Multiple Regression Analysis, because MultRegr was the cornerstone of my multivariate statistical analysis module in graduate school 30+ years ago. It's not really about multiple regression but is a welcome tirade about the poorly designed nonsense that passes for much of the work published under cognitive psychology. No better man to talk about it than Richard E Nisbett, Theodore M. Newcomb Distinguished Professor of Social Psychology at U Mich, Ann Arbor. A number of issues are exposed here which come under the heading "studies that seemed interesting and important but failed to replicate". I've aired this sort of stuff on The Blob before: Brian Nosek on reproducibility - underpowered studies - pathetic levels of evidence - underpowered = waste of money - stem cells not all they're cracked up to be - and Prof Nisbett exposes some of the nonsense that has aggravated him from his own field.
One of the points Nisbett makes is that, in any field, interest in particular aspects is a bit tidal: such a topic is all hot-and-heavy for a decade and then it goes off the boil and another angle is deemed to be sexy. In cognitive psychology, it used to be that personality dictated outcomes - extroverts would behave such-a-way. Hans "IQ Test" Eysenck was famous for simplifying the complexity of human personality to two axes (at right angles to each other because they were supposed to be independent): introvert vs extrovert and neurotic vs stable. Sounds like it might be informative because we all know people who are ebullient and bouncy but a tower of strength, as well as the cliché 'strong silent man'; and others who are quiet and wash their hands rather more often than strictly necessary. Nowadays behaviour is all about context and framing: no matter where you are on the neurosis spectrum, you'll react differently according to slight changes in the form of words. Even life-and-death decisions can be swayed by how questions are asked.
One of the hazards of the scientific method, as normally framed, is that we carry out reductionist experiments: holding all variables except one steady so we can document the effect of one thing at a time. Nice in the lab, often irrelevant in the real world, where lots of variables impinge on the system and interact in unexpected ways. Nisbett takes prostate cancer [again?!], for example: if you conduct controlled experiments on very large samples, it turns out that vitamin-E slightly increases the onset and progression of prostate cancer. But if you approach the problem with an epidemiological survey, the opposite effect becomes statistically significant. Drilling down in the data reveals that men who pop vit-E [a minority, surely?] tend to be fitter, richer, healthier, drink less and have way better diets than 'normal' blokes. These general good-health correlates reduce all morbidities, including prostate problems: it's called healthy user bias.
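You can watch healthy user bias materialise in a few lines of code. This is a toy Python sketch with made-up numbers (the probabilities, sample size and effect sizes are all my invention, not Nisbett's data): being health-conscious both makes a chap more likely to take vitamin-E and independently lowers his disease risk, while the vitamin itself slightly *raises* risk, as the controlled trials found. The raw survey-style comparison still makes the pill look protective.

```python
import random

random.seed(42)

# Hypothetical simulation of healthy user bias. All numbers invented.
N = 100_000
cases_by_group = {"vitE": [0, 0], "no_vitE": [0, 0]}  # [disease cases, total men]

for _ in range(N):
    health_conscious = random.random() < 0.3
    # The health-conscious minority is far more likely to take the supplement
    takes_vit_e = random.random() < (0.5 if health_conscious else 0.05)
    # Confounder: health-conscious men have a lower baseline risk anyway
    base_risk = 0.05 if health_conscious else 0.15
    # The supplement's TRUE effect is slightly harmful (x1.1), as in the trials
    risk = base_risk * (1.1 if takes_vit_e else 1.0)
    disease = random.random() < risk
    group = cases_by_group["vitE" if takes_vit_e else "no_vitE"]
    group[0] += disease
    group[1] += 1

for name, (cases, total) in cases_by_group.items():
    print(name, round(cases / total, 3))
# The raw (observational) comparison shows a LOWER disease rate among
# vitamin-E takers, even though the pill's true effect here is harmful.
```

The survey sees the fit-rich-teetotal lifestyle, not the pill; only the randomised trial, which breaks the link between lifestyle and supplement-taking, recovers the true (slightly harmful) effect.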
Similarly, in the USA, the death-rate per million driver-miles is far higher for pick-up trucks than for Volvo estates. Which is great propaganda for selling Volvos but little to do with any real or supposed intrinsic safety features. Because it fits the standard safety picture, we are sorely tempted to accept the finding: critically evaluating data is just so tiring. The confounding variable is The Driver. Very few hopped-up young men act the tear-away in Volvo estates; very few middle-aged middle-class mothers pick up their kids from school in a truck. Obvious with hindsight, of course, but we could all do with training our minds so that they work better in foresight.
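The pick-up vs Volvo puzzle is the same trick, and the standard fix is to stratify by the confounder. Here's a toy Python sketch with invented numbers (driver mix, fatality rates and fleet sizes are all assumptions, not real road statistics) in which the two vehicles are equally safe and only the driver matters: the raw rates slander the pick-up, but within each driver type the gap vanishes.

```python
import random

random.seed(1)

# Hypothetical numbers: crash risk depends on the DRIVER, not the vehicle.
RISK = {"risky": 0.020, "cautious": 0.004}  # invented per-driver fatality rates
records = []
for _ in range(50_000):
    style = "risky" if random.random() < 0.5 else "cautious"
    # Hopped-up young men mostly pick trucks; careful drivers mostly pick Volvos
    p_pickup = 0.9 if style == "risky" else 0.1
    car = "pickup" if random.random() < p_pickup else "volvo"
    died = random.random() < RISK[style]  # note: the car plays no part
    records.append((style, car, died))

def rate(rows):
    return sum(died for _, _, died in rows) / len(rows)

# Raw comparison: the pick-up looks far more lethal
by_car = {c: rate([r for r in records if r[1] == c]) for c in ("pickup", "volvo")}
print("raw rates:", by_car)

# Stratified by driver type: the vehicles are indistinguishable
strata_rates = {}
for style in ("risky", "cautious"):
    strata_rates[style] = {
        c: rate([r for r in records if r[0] == style and r[1] == c])
        for c in ("pickup", "volvo")
    }
    print(style, strata_rates[style])
```

Stratifying (or, in MultRegr terms, adding The Driver as a covariate) is exactly the adjustment the raw death-rate-per-vehicle figure skips.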
You should really read the Nisbett piece; he describes some very nice studies showing that we contribute a lot more if we think we are playing than if we are paid to do it. If we get money for doing something (teaching in an Institute of Technology, hmmmmm?) we deliver less. What happens is that our intrinsic motivation gets undermined by reward - the external motivation. That's all very sad and shows that monetising everything, as happens in the West, makes work and life seem like shit because our intrinsic reward system has been eviscerated. It also drives all the fun out of contributing as part of the Voluntariat.