Science Lesson: on “anecdotes”
Posted on 9 January 2015 by Carl V Phillips
by Carl V Phillips
Following a Twitter conversation between some vapers and some others who mistakenly think they understand science, I decided to write a quick lesson for those who might want more than a bumper-sticker answer about this. The issue at hand is whether you can learn anything about the world from individual testimonials about people’s experiences or other one-off “anecdotes”. The answer is obviously yes. It is safe to say that more than 99% of what each of us knows comes from such evidence. So why do so many people not understand this obvious fact, and habitually denigrate such evidence as “anecdotes”? It seems to be because they are stuck in a grade-school understanding of science.
I have written entire papers that are substantially about this point, but here is my quick explanation of how to better think about this. (Oh, and as an aside, anyone who thinks that this exposition is more correct because I have also published versions of it in journals understands scientific evidence even less well, but that is another story that readers will know I cover elsewhere.)
A good basic lesson in science for a seven-year-old is the following: Just because one reasonably likely event followed another reasonably likely event does not mean that the first caused the second. We all know that intuitively in most cases, but it is also easy to get tricked under particular circumstances. This is basically what superstition is — institutionalizing such unfounded observations in a desperate search for explanations for “random” events. Moreover, we can be intentionally tricked into believing otherwise by people who profit from the claim of cause and effect. The most notable example is the medical profession, whose leading method throughout time (and to a substantial extent still today) was to wave their hands over someone and then take credit for the recovery that was going to happen anyway.
Moving on to about age eleven, with a bit of math available, another useful lesson is that unlikely events happen and seeking an explanation for any one of them is likely to result in the formation of superstition. My favorite example is starting a lesson with: “wow, you would never believe what happened! I was walking to class and saw a car with license plate number B2X 113…” (or whatever would fit the pattern for local license plates) “…What is the chance of that happening?! I wonder what caused it?” Of course, the chance of that exact event happening is impressively small. But the chance of seeing a car with some license plate number is about 100%, and since you could tell that story no matter what the number is, there is nothing interesting here.
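For readers who like to see the arithmetic spelled out, here is a minimal sketch of the license-plate point. The plate format and the counts are assumptions invented for illustration, not real data.

```python
# Back-of-the-envelope sketch of the license-plate lesson.
# Assumes a hypothetical plate format: letter, digit, letter, then three
# digits (e.g. "B2X 113"). The format is invented purely for illustration.

n_plates = 26 * 10 * 26 * 10 * 10 * 10   # 6,760,000 possible plates

p_exact_plate = 1 / n_plates  # chance the next car shows that *specific* plate
p_some_plate = 1.0            # chance the next car shows *some* plate

print(f"P(that exact plate) ~ {p_exact_plate:.1e}")  # ~1.5e-07: impressively small
print(f"P(some plate)       = {p_some_plate}")       # 1.0: nothing to explain
```

The first number looks astonishing; the second is the one that actually describes what happened.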
With those lessons, we immunize grade-schoolers against being tricked by coincidence, teaching them not to over-conclude from one observation. We also teach young children “do not get into a car with a stranger”, which is good simple advice at the appropriate level for them. It guards against the worst possible mistake. But anyone who does not eventually learn that life is not so simplistic will have a rather hard time getting from the airport to their hotel.
Unfortunately, far too many people only get to the grade-school level in their understanding of scientific inference, but nevertheless think they are experts. People working in serious sciences get past that and learn the value of single observations (imagine: “The orbit of Mercury is not what Newtonian physics predicts. But who cares? That is just one anecdote. Ignore it.”). But a large portion of those pontificating about health science, particularly physicians, do not understand scientific inference any better than a fifth grader. The worst problem is that they do not realize that.
Learning exactly how to tease the informational nuances out of any data, whether it be columns of numbers or a single personal testimonial, requires a lifetime of study and some decent intuition. But it is possible to teach a simple lesson to the “that is just an anecdote!!” crowd to make it obvious that their “never get in a car with a stranger”-level understanding is overly simplistic.
Consider a man who at 3:00 has no major injuries; shortly thereafter he gets into a car crash (and nothing else noteworthy occurs); upon talking to first-responders at 4:00 he says “I am basically ok, but I really hurt my wrist in the crash”; at 6:00 that evening, an x-ray shows he has a broken wrist. He tells us this anecdote, leading with the causal claim, “I broke my wrist in that car crash.” Do we believe his assertion of cause and effect? And thus also conclude that (sometimes) car crashes cause broken wrists? It was, after all, just one story.
Of course we believe him. We are not idiots.
Would we say “I am only going to believe that car crashes can cause broken wrists after I see a statistical study in a journal that shows that to be the case”? Again, no, because we are not idiots.
I figure it is a safe bet that no one has ever done that study, comparing the hourly incidence rate of wrist fracture proximate to car crashes to the rate among those not experiencing car crashes. Again, because no one with the basic literacy skills to conduct a study is that much of an idiot (though if they did, they could undoubtedly get it published in a peer-reviewed journal). Now if the question were how often car crashes cause wrist fractures, that would require gathering statistics, and that has been done. But it is obviously not necessary for assessing the basic point of whether the phenomenon ever happens. Moreover, the nice, clean, controlled, artificial experiment with crash test dummies would tell us even less about the basic question of whether this phenomenon actually ever happens in the real world.
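If someone were foolish enough to run that study anyway, the arithmetic would look roughly like the sketch below. Every count is invented solely to show the shape of the comparison; it is not a claim about real incidence rates.

```python
# Sketch of the (unnecessary) study described above: compare the incidence
# rate of wrist fracture in the hour following a car crash with the rate
# during ordinary hours. All counts are invented for illustration only.

crash_fractures, crash_person_hours = 200, 1_000                    # post-crash hours
background_fractures, background_person_hours = 100, 100_000_000    # ordinary hours

rate_crash = crash_fractures / crash_person_hours          # fractures per person-hour
rate_background = background_fractures / background_person_hours

rate_ratio = rate_crash / rate_background
print(f"Incidence rate ratio: {rate_ratio:,.0f}")  # enormous; the answer was never in doubt
```

The point is not the particular number; it is that no such number was needed to know that crashes can break wrists.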
The only thing that would cause us to seriously doubt his claim is if we somehow thought he was lying about the events. Otherwise, if those events happened, we have evidence of cause-and-effect as convincing as ever exists in the real world. There is never any actual technical proof of causation in the real world, but this is evidence at the level that we loosely call proof. Sure, it is theoretically possible that hidden space aliens shot his wrist with a ray gun at just about the same time as the crash, or perhaps listening to Aerosmith on the car radio caused his wrist to fail (this is why there is never any proof), but it seems safe to assume otherwise.
So what is the difference between this “anecdote”, in terms of providing scientific evidence, and the grade-school teaching example (say “he had a cough; he took some homeopathic medicine; the cough went away; therefore the medicine cured the cough”)? There are a few key features, and recognizing them is the difference between thinking like a scientist and just repeating imperfect rules of thumb you learned in grade school. First, there are good reasons to believe that a car crash could cause a broken wrist, unlike singing along with “Walk This Way”. I trust I do not have to explain those. Second, the incidence rate of spontaneous wrist fracture, unassociated with any dramatic event, is trivially small during any given afternoon, in contrast with the rate of an extant cough going away sometime over a several-day period. Thus it is safe to rule out the “it would have happened anyway” alternative. Third, the event and the outcome are both well-defined and definitively occurred, unlike many superstitious cause-and-effect claims (e.g., “I prayed yesterday that I would have a good day today, and, sure enough, I found $100 in the sofa cushion” — something similar to the license plate example).
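To put rough numbers on the second feature, the contrast looks something like this; both background probabilities are assumptions chosen only to illustrate the logic, not measured values.

```python
# Rough numerical contrast for the "it would have happened anyway" check.
# Both probabilities are assumptions for illustration, not measurements.

p_fracture_anyway = 1e-6   # spontaneous wrist fracture in a given afternoon
p_cough_gone_anyway = 0.5  # an existing cough fading over a several-day period

# The smaller the chance the outcome would have occurred on its own, the
# more a single before-and-after story tells us about causation.
print(f"Wrist fracture happening anyway: {p_fracture_anyway}")
print(f"Cough clearing up anyway:        {p_cough_gone_anyway}")
```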
Of course, these are not the only characteristics that might cause you to draw scientific conclusions from a single observation. (They do not describe the case of drawing conclusions from Mercury’s orbit, for example.) There are no simplistic rules like that, which is why scientific inference requires thinking, not recipes. But they are the useful ones in this case.
In what other case might these observations about drawing a conclusion from a single observation be useful? Obviously the case of people quitting smoking thanks to e-cigarettes or some other low-risk substitute. There are obvious good reasons to believe that finding a better substitute for a behavior can cause an end to that behavior. The rate of spontaneous, unexplained smoking cessation in a given week, or even month, is very small. The ostensible cause and effect are very clearly defined and definitively observable. Thus, when a single individual says “I smoked for years; I tried to quit a few times but failed; but then I tried switching to e-cigarettes and that finally let me quit smoking”, you either have to claim she is lying about the basic facts or recognize that it is as close to proof as can exist that e-cigarettes caused her to quit smoking. From that you can further conclude that — at least sometimes — e-cigarettes cause smoking cessation. No further study is necessary. Of course, if you want to figure out how many times this has happened or how often the attempted switch is successful, then you need to gather the right statistics; but those are different questions.
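Those “different questions” are the ones that do require systematic data. As a minimal sketch, with made-up counts used only to show the arithmetic, estimating how often an attempted switch succeeds is just a proportion with some statistical uncertainty attached:

```python
# Estimating how often an attempted switch to e-cigarettes ends in quitting
# smoking. This is the question that needs systematic data rather than one
# story; the counts below are hypothetical, used only to show the arithmetic.
import math

attempted_switches = 500   # hypothetical number of smokers who tried switching
successful_quits = 90      # hypothetical number who ended up quitting smoking

p_hat = successful_quits / attempted_switches
se = math.sqrt(p_hat * (1 - p_hat) / attempted_switches)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se   # rough 95% interval

print(f"Estimated success proportion: {p_hat:.2f} (roughly {low:.2f} to {high:.2f})")
```

None of that is needed to establish that the phenomenon happens at all; it only tells you how often.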
So, dear reader, congratulations. If you read this and understood it (or already understood the points, of course), you understand science at better than the “never get in a car with a stranger” level. It is not, by itself, everything you need to know about drawing scientific inference from non-systematic observations, but it is a definite step up from the naive, simplistic, and grossly incorrect claims that “anecdotes cannot prove anything” or “anecdotes are the lowest form of evidence” (whatever that means).
source: http://antithrlies.com/2015/01/09/science-lesson-on-anecdotes/