[
Note: This actually turned into a mostly serious rumination/lamentation about quantitative sociology that I needed to get out of my system, and I'm posting it here even though there isn't any reason for you to read it. Part of it may end up working its way into a methods lecture or something else sometime.
Really, skip it and move to the next post.]
Shelly B was indulgent of me the other day when I launched into this disquisition about how I became much better at figuring out magic tricks. The epiphany I had sometime in my teens was that magic tricks evince a certain kind of optimality. Everything supporting the desired illusion that the magician can let you see, you see. The corollary is that whatever you didn't see that would have supported the illusion, had you been shown it, must be something the magician couldn't let you see. In other words, you should first think about what would have made the trick even better, and then you should think about why the magician didn't do the trick that way.
Anyway, since I don't regularly attend magic shows and since televised magic has devolved into all kinds of video fakery (expect a rant on here about David Blaine sometime, by the way), one would think that this little insight into solving magic tricks would not be all that handy to my life now. Instead, you would be surprised at how often analogous logic comes into play when my work involves reading published quantitative research by others. For example, a while back a colleague came by to bring me a copy of a paper we had discussed. Since it was one of those papers whose storyline you can follow from the tables, he flipped to the first table of regression results.
I saw that first table And I Knew. "Wait, why would they do the table like that? Given the point of their theory, they should first present the results from a reduced model and then the model whose results they do show, so that you can see how the coefficients change from the first model to the second." [Okay, what I actually said was much simpler, and I'm scumbling some details to conceal the identity of the particular paper I'm talking about.]
My colleague replied that while that would have provided the most straightforward test of their theory, the paper was instead presented as an elaboration of the theory, so the analysis I'm talking about wasn't really its focus. For the paper as it was presented, only the results from the full model were relevant.
The problem with this explanation is that it presumes, optimistically, that the analyses began with the same focus in mind as the focus presented in the eventual published paper. I am increasingly of the opinion that one should instead presume, until convinced otherwise, that papers take on the particular focus they do because of the way a set of statistical analyses conducted with who-knows-what initial focus happened to turn out. Now, if the problem were just that it is a trope in sociology (and likely elsewhere) to present results that were inductively generated as if they had been deductively generated in an effort to pit competing hypotheses fairly against one another, that would be lamentable enough. The thing is, however, that the analyses are also presented in a way that hides evidence that would call the conclusion of the paper into question.
This comes up when you read a paper and can think of simple things omitted from the analyses that would have strengthened the author's arguments had they been presented. In this case, a comparison of coefficients from the reduced and full models [that is, seeing how the coefficients change when the other regressors are added]. The author's presentation would have been much more compelling had the two models been shown and the reader seen that the key coefficients changed substantially. So what I suddenly knew looking at the first table in this paper is that the coefficients probably didn't change much at all, because if they had, they would have been there. It so happened that I had the dataset used in the paper at hand, and I was quickly able to do the analyses that were omitted from the paper. Sure enough, the coefficients were practically the same across the reduced and full models, and in a few cases the direction of the small change was actually the opposite of what the larger theory advocated by the author would predict. So the results are presented as supporting a small implication of a theory when, presented completely, they would have undermined a much larger implication of the same theory.
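[For readers who don't spend their days staring at regression tables: the omitted comparison really is as mundane as the sketch below. The variable names are made up, since I'm keeping the paper vague, but the logic is just to fit the model with the key regressor alone, fit it again with the other regressors added, and see whether the key coefficient moves.]

```python
# A minimal sketch of the reduced-vs-full comparison, with invented names:
# "outcome" is the dependent variable, "focal_x" the key regressor, and
# "educ" and "income" stand in for the additional regressors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical stand-in for the paper's dataset

reduced = smf.ols("outcome ~ focal_x", data=df).fit()
full = smf.ols("outcome ~ focal_x + educ + income", data=df).fit()

# The whole question is whether the key coefficient changes between models.
print("reduced:", round(reduced.params["focal_x"], 3))
print("full:   ", round(full.params["focal_x"], 3))
```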
Another example of this was a paper that I was once given to review, where the author developed hypotheses that were tested using specific survey items as dependent variables. Supporting results were reported for the various tests, in tables with little stars-of-statistical-significance in all the right places. The thing was, I was familiar with the particular survey the author used, and I knew it asked a much broader array of items than were included in the tests [again, I'm being purposefully vague]. Indeed, I knew that for some of the items the author used, an alternative version of the item intended to tap precisely the same concept was also included (and was on the same page of the codebook!), but no mention of tests using this alternative item was made anywhere in the paper. Since the paper's illusion would have been stronger if all the items had been used, I surmised that the missing items couldn't be shown because otherwise they would have undermined it. Again, I had the data at hand, and, lo, analyses of the excluded variables provided absolutely no evidence for any of the author's hypotheses.
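[The robustness check here is equally mundane, again sketched with invented names: re-run the same model with the alternative item as the dependent variable and see whether the focal coefficient holds up. Nothing about it requires the author's code, just the codebook and the data.]

```python
# A minimal sketch, assuming hypothetical items "item_a" (the one the author
# analyzed) and "item_b" (the alternative wording of the same concept), plus a
# made-up right-hand side of "focal_x" and two controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical stand-in for the survey data

for item in ["item_a", "item_b"]:
    fit = smf.ols(f"{item} ~ focal_x + educ + income", data=df).fit()
    print(item, round(fit.params["focal_x"], 3), round(fit.pvalues["focal_x"], 3))
```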
In academia, there are strong norms against suggesting that someone has been dishonest. And, of course, you can't prove that they have been dishonest; maybe they are merely incompetent or sloppy or working-too-quickly and did not realize that they should have conducted these additional analyses. However, sometimes it does seem like truly bountiful providence must have led the author to make a combination of arbitrary analytic decisions that, upon further inspection, also happen to be exactly the circumstances under which the results come out strongest in favor of their preferred position. I suppose that, when one is in the thrall of a particular theory, especially one's own theory that is helping one toward disciplinary fame, it becomes very easy to convince oneself that all of these arbitrary decisions that nudge the results in the right direction are actually substantively well-justified.
Anyway, I have not been in the academic game all that long, especially on the quantitative side of things, and I feel like I have already seen way too much of this. If I were being completely honest, I would admit that I've also felt the lure of it myself and have had to mentally militate against it. Sociology does itself no favors as a discipline by producing research that, when given a careful and informed reading, gives the impression of thumbs laid heavily on the holy scales of multiple regression. Maybe the situation would be improved if critical replication were a more valued enterprise in our discipline, so that people had more reason to worry about being called out for analyses that were not reflective and open about their shortcomings. Relatedly, another big contributor to the problem may be that sociology is spread so thinly across areas that, especially when findings are relatively bland but consistent with the general party line, there is little reason to worry that anyone will do anything but parrot the reported upshot of your results.
Quantitative sociologists complain regularly that their research is not taken seriously enough in the formation of public policy. The sneaking feeling that I voice only in the nether paragraphs of protracted weblog entries is that the common inattention to sociological studies might just be well-justified. Don't get me wrong--there is responsible, careful, thorough, competent, and honest quantitative research conducted in sociology. However, I am not convinced that sociologists provide any good means for the people involved in determining policy to find that research amidst the rest.
[Here, I become increasingly drowsy and cognizant of all that I need to do yet tonight, and stop abruptly even though much more could be said.]