I suppose I should stop being so amazed at the tendency of health researchers and reporters (particularly, but not exclusively, the anti-tobacco activists among them) to assume that the rest of the human population are complete and utter morons. Could you imagine someone reporting on science in other areas claiming “new study shows crops grow better during droughts” or “involuntary unemployment shown to make people happier”? Authors (both journal and newspaper) in other fields are rather more hesitant to pronounce, based on one little odd study, universal truths that contradict what tens of millions of people know from experience.
Yet that is just what today’s New York Times (and the authors they cite from Addiction: Hajek, Taylor, McRobbie 2010) did. The claim is that smoking offers no relief from stress. How can they not see the absurdity of their claim? Set aside that the study that was the basis for this pronouncement used an unusual population (people who had been hospitalized for heart disease) and had glaringly obvious, pretty-much-fatal problems of confounding (the comparison was between reported stress levels of people who remained abstinent from smoking after their hospitalization and those who resumed smoking), as well as some other suspect characteristics. (Hmm, y’think maybe the people who stayed abstinent were the ones inclined to feel relieved of the stress of their recent life-altering disease by knowing they did something about it by quitting, while the ones who resumed were those getting more relief from their increased stress by continuing to smoke? Nah, how could that be?) We can even set aside that maybe the stress relief advantage from smoking for many is more the ability to regulate the timing of relief than lower overall average stress. We just need to consider the fact that lots of people are quite sure, from personal experience, that they get great stress relief from smoking (or otherwise using nicotine), including those who have excellent crossover data (i.e., they smoke only intermittently, re-start after years to relieve stress, etc., and so are not trapped in a naive state of not knowing how they would feel if abstinent).
There are a few errors that people consistently make. There are optical illusions and well-known cognitive illusions about probabilities, in particular. And for factoids that do not directly affect them, there is a willingness to believe what they hear. But for the most part we sensibly assume that what millions of people conclude from personal experience about matters of deep practical importance is probably right unless we have a really good reason to doubt them. One little study is not a really good reason, whatever the pronouncements of the authors that they have disproved what everyone previously thought they knew.
Of course, it is much worse in the anti-tobacco arena, where most authors start with the assumption that there are no benefits, and follow wherever that leads them. It leads them to conclude that smokers are idiots and unworthy of consideration; what used to be standard boilerplate about ill-informed smokers, the supposed beneficiaries of anti-tobacco policies, has largely vanished as anti-tobacco has solidified as just that — anti-tobacco, rather than the pro-people effort it once was. Compare the amount of press received by the very similar recent study in Neuropsychopharmacology that claimed that caffeine does not actually promote alertness. Remember that? No, probably not, because caffeine is an accepted drug, even among the nanny-class, and so absurd claims about it based on junk science do not get much traction.
Note that the junk science nature of these studies does not come from the study that they actually did and the data it produced, which might be informative for something, but from the attempt to generate headlines or political points by making claims that do not follow from the results (just as I wrote last week).
The bottom line that readers should understand is this: As any real scientist knows, it is very easy to design an experiment that fails to detect a real phenomenon. Indeed, it is generally much easier to design something that fails than something that does what it is supposed to do. If we are pretty sure that a phenomenon is real (smoking relieves stress, caffeine promotes alertness, rain makes the crops grow) and a particular study does not confirm this, the first assumption by anyone who cares about good science (rather than getting on their local television news, say) is that the study was not properly designed to detect it. Perhaps if the particular result were solid (i.e., not easily explained by obvious confounding, etc.) someone might claim “under these particular circumstances, the phenomenon that is accepted as true on average appears to not apply.” I have to wonder whether the people who make universal pronouncements that one little study contradicts all other evidence treat their lost keys the same way: They look for their keys where they think they left them, but if they are not there they conclude that they have just definitively demonstrated that, contrary to their previous belief that they had keys, no keys exist, and so they stop looking, call the locksmith, and go out to buy a new car. At least that would stimulate the economy and keep them busy so that they did not produce any further junk science that day.
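The point about how easy it is to build a study that fails to detect a real effect is, in statistical terms, a point about power. A minimal simulation makes it concrete: below, a "stress reduction" effect exists by construction, yet a small study usually misses it while a larger one reliably finds it. All numbers here (effect size, sample sizes, the rough |t| > 2 cutoff standing in for p < 0.05) are hypothetical illustrations, not figures from any study discussed above.

```python
import random
import statistics

random.seed(42)

def two_sample_t(a, b):
    # Welch's t statistic; thresholding |t| > 2 is a rough
    # stand-in for "statistically significant at p < 0.05".
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def simulated_study(n, effect=0.5):
    # The effect (a 0.5 standard-deviation difference in "stress
    # scores") is real by construction; the only question is
    # whether a study of size n per group detects it.
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    return abs(two_sample_t(treated, control)) > 2.0

def detection_rate(n, trials=2000):
    # Fraction of simulated studies that detect the (real) effect.
    return sum(simulated_study(n) for _ in range(trials)) / trials

small = detection_rate(n=15)   # one little study
large = detection_rate(n=200)  # an adequately powered design

print(f"small study detects the real effect {small:.0%} of the time")
print(f"large study detects the real effect {large:.0%} of the time")
```

With these hypothetical numbers, the small study misses the real effect most of the time, which is exactly why a single null result against a well-established phenomenon should first raise questions about the study's design rather than about the phenomenon.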
A report on the object permanence of keys, I.G. Norance et al., American Journal of Public Health 2010
Abstract: Background: It is widely believed that my keys are around here somewhere. Methods: We designed a search that consisted of checking the basket by the front door and the pocket of the pants I wore yesterday. Results: No evidence of keys was found (p<.05). Conclusions: Contrary to widespread belief, just because you drove home yesterday does not mean your keys actually exist. Billions of dollars should be spent.
Also note that this approach is excellent for proving that evolution by natural selection does not really occur.
– Carl V. Phillips