Saturday, March 25, 2006

mama, he's crazy

I am slated to give a presentation at the sociology methodology miniconference at UConn next month, in which I will be articulating an argument so radical that I worry that the audience will judge me to be insane. I would sketch it here, but your esteem is important to me, and I am not emotionally ready to risk just yet having you decide that I am barking mad.

The exchange discussed in this blog post by an economist is relevant to it, however, and it also has some good examples of fun academic pissiness.


Mike Shanahan said...

Oh come now, you can't do this to your faithful readers. Say more!

Tom Bozzo said...

I had The Lowest Deep on my blogroll for about two seconds before I figured out it was a seemingly fallen econoblog at the time, but I don't think Adam O'Neill had a very good take on the Hoxby-Rothstein conflict. In particular, the near-non-replicability of much quantitative research in economics (and, I can only assume, the other social sciences as well) is a really big problem.

I should note that I've found most of Hoxby's research that I've read to be evidence of the shine that a Harvard pedigree will put on straw man arguments you'd laugh at if it came out of (say) a Scaife-funded think (sic) tank, but that's perhaps another story.

Mike Shanahan said...

In behavioral genetics, they sometimes have workshops where everyone gets the same dataset and is told ahead of time to model a specific dependent variable, and then everyone presents their results at the workshop, discussing assumptions, analytic decisions, etc. A model worth attention in the social sciences, perhaps.

Tom Volscho said...

I've come across about seven non-replicable results in the past three years, and I am not going to write anything about them. On the other hand, I've been able to replicate some scholars' past findings with new data, thereby increasing my confidence in the findings from those studies.

One of my favorite books by economists is Card and Krueger's (1995) "Myth and Measurement"--an analysis of whether or not minimum wage increases generate disemployment effects.

In addition to promoting quasi-experimental methods with their own original data on fast-food establishments, they re-analyze the data from nearly every major study on the topic and provide a tour-de-force example of exactly how research on a given topic should be approached: begin, if possible, by re-analyzing how researcher decisions and dominant theoretical paradigms in past studies impacted the results of those studies, or how a dominant theoretical paradigm compelled a previous researcher to make a particular analytical decision, or a series of analytical decisions, that produced results consistent with that paradigm. Thus, Card and Krueger (1995) find that prior analysts made a series of measurement and coding decisions that allowed the econometric analysis to show that minimum wages produce disemployment effects, consistent with the dominant theoretical predictions of neo-classical economics.

Tom Volscho said...

This seems pretty helpful:

Kieran said...

Is it the replicability angle that you'll be talking about, or the possible benefits of competition between scholars? If the latter, there's a passage in Elster's _Sour Grapes_ (107-108) that's of interest. He's talking about problems of explanation in Veblen and Bourdieu and then, almost in passing, says there is a fallacy that often afflicts scholars,

"... when one is led to tolerate errors or imperfections in one's own work because one knows they sometimes prove useful or fertile. In particular, many will have come across the brand of scientist who excuses the one-sidedness of his work by the need for fertile disagreement in science. ... this attitude goes together with a form of self-monitoring whose corrosive effects I have been concerned to bring out."

I think he discusses this elsewhere, too, but I can't remember where. He's concerned to expose as a fallacy the idea that science can be globally rational (trending towards progressively better answers etc) even when (or precisely because) individual scientists are prepared to let their own biases override good individual practice. In other words, it's an attack on faith in "the marketplace of ideas," where this is taken to mean real faith in the collective ability to sort out good from bad stuff efficiently.

jeremy said...

That's a great quote from Elster. My focus is "the replicability angle."