Tuesday, August 03, 2004

causality bites, follow-up

Ann links to my morning cup o' public sociology regarding the NYT mention of a study showing an association between television viewing and teen smoking. Ann speculates that there may be studies showing positive effects of children viewing large amounts of television. She could be right, but my general reaction is: if only social science were so lucky. When there are contradictory findings, it's clear that something is up. Commonly, however, in social science, you get considerable uniformity in findings, but from studies that all share the same basic deficiency for making a confident determination about causality. Given what we know about the background characteristics of families in which children watch the most television, you would expect all kinds of negative outcomes to be correlated with watching oodles of TV. You would expect to observe these in sample after sample, regardless of whether television had any specific negative consequences of its own. There are some social scientists who recognize the problems of making causal inferences from a single correlational study, but who seem to believe that, through some mysterious feat of epistemological alchemy, these problems go away if you have multiple correlational studies that say the same thing.
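To make the point concrete, here is a minimal simulation sketch (my own toy illustration, not anything from the study; all the names and numbers are made up): a single unmeasured family "background" factor drives both heavy TV viewing and the bad outcome, so sample after sample shows the same positive correlation even though TV itself has exactly zero causal effect.

```python
# Hypothetical sketch: confounding produces a stable spurious correlation.
import numpy as np

rng = np.random.default_rng(0)

def simulate_sample(n=2000, tv_effect=0.0):
    background = rng.normal(size=n)                        # unmeasured family characteristics
    tv_hours = 2 + 1.5 * background + rng.normal(size=n)   # background -> more TV
    smoking = 0.8 * background + tv_effect * tv_hours + rng.normal(size=n)
    return np.corrcoef(tv_hours, smoking)[0, 1]

# "Study after study" finds roughly the same association, with tv_effect = 0 throughout.
print([round(simulate_sample(), 2) for _ in range(5)])
```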

This does not mean that television does not have negative consequences, perhaps even severe negative consequences, but rather that, barring the ability to experimentally assign children to different television-watching regimens for years of their lives, it's very difficult to assess the actual causal influence of television.

(Quantitative social scientists who read my first paragraph might note that there exists an elaborate set of statistical techniques for dealing with ["controlling for"] the confounding influence of "background characteristics" and similar sources of spuriousness. My belief is that most sociologists either overestimate or wildly-dramatically-radically-maniacally-overestimate how well these methods can actually be expected to work given the typical quality and number of background measures that social scientists have available for analysis. I've met more than one sociologist who I felt was willfully naive about this, because our jobs are made much easier if we all pretend that the methods work better than we have ever had any reason to believe they actually do.)
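To illustrate the worry in the parenthetical, here is a hedged continuation of the toy simulation above (again my own made-up example, not anyone's actual analysis): if the background characteristic is measured with enough error, regression-adjusting for it removes only part of the confounding, and a nonexistent TV effect still shows up as a healthy positive coefficient.

```python
# Hypothetical sketch: "controlling for" background only works as well as the measure of it.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
background = rng.normal(size=n)
tv_hours = 2 + 1.5 * background + rng.normal(size=n)
smoking = 0.8 * background + 0.0 * tv_hours + rng.normal(size=n)   # true TV effect = 0

def tv_coefficient(control):
    # OLS of smoking on an intercept, TV hours, and the available control variable.
    X = np.column_stack([np.ones(n), tv_hours, control])
    beta, *_ = np.linalg.lstsq(X, smoking, rcond=None)
    return beta[1]

perfect_measure = background                                    # the confounder itself
noisy_measure = background + rng.normal(scale=2.0, size=n)      # a poor-quality survey item

print(round(tv_coefficient(perfect_measure), 3))   # about 0.00: the control does its job
print(round(tv_coefficient(noisy_measure), 3))     # clearly positive: residual confounding remains
```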
