Sunday, April 03, 2005

what does it mean to be the top ranked sociology department, anyway?

Or, at least, how does US News & World Report generate the ranking that has Wisconsin #1 and Berkeley #2?

As enthusiastic as I am to see Wisconsin at the top of the rankings, I would be lying if I said I thought the US News ranking methodology was particularly sound. It's plainly inferior, for example, to the methods news services use to rank college sports teams.

Here's how it works: the chairs and directors of graduate studies of active Ph.D. granting programs (averaging at least one Ph.D. a year over the past five years) are given a survey. The ultimate response rates for the survey are low--about 50% of the sociology surveys were returned, which was actually the highest rate among all the Social Sciences & Humanities (less than a quarter of the psychology surveys were returned). I don't regard the low response rates as a problem, as my guess would be that there is a substantial correlation between not sending back the survey and being relatively uninformed about other departments and graduate programs. (That said, it seems very much an open question whether anybody is really in a great position to know that much about very many other graduate programs, and so whether 'expert polling' can ultimately produce rankings with that much validity.)

The survey basically lists all the departments with Ph.D. programs in alphabetical order and asks respondents to rate each program on a scale of 1-5 (5=outstanding, 4=strong, 3=good, 2=adequate, 1=marginal). To reduce the capacity of a single rogue voter to game the rankings, US News throws out the two highest and two lowest scores for each school and then takes the average. The rankings are just the ordering of these average scores, with the additional weird twist that US News regards two schools as tied if their averages are equal when rounded to the tenths digit. (In other words, two schools with averages 4.86 and 4.84 are not considered tied, while two schools with averages 4.84 and 4.76 are.)
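
If it helps to see the mechanics, here's a toy version of that scoring rule in Python. The ballot counts are invented; I'm only illustrating the trim-and-round arithmetic.

```python
# A toy version of the US News scoring rule: throw out the two highest and two
# lowest ratings, average what's left, and round to the tenths digit for the
# purpose of declaring ties. The ballot counts below are invented.

def usnews_score(ratings):
    trimmed = sorted(ratings)[2:-2]        # drop two lowest and two highest
    return sum(trimmed) / len(trimmed)

wisconsin = [5] * 40 + [4] * 5 + [3] * 1   # hypothetical ballots
berkeley  = [5] * 37 + [4] * 8 + [3] * 1

for name, votes in [("Wisconsin", wisconsin), ("Berkeley", berkeley)]:
    raw = usnews_score(votes)
    print(f"{name}: raw average {raw:.3f}, reported as {round(raw, 1)}")
# Two schools count as tied only if the rounded-to-tenths figures match.
```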

The problem with using this scheme to make distinctions among top-ranked departments is that only a minority of respondents end up casting the votes that make the ultimate difference in which department is top-ranked. Wisconsin is reported as having a 4.9 average, while Berkeley's average is 4.8. If we presume that no more than two people would be so ridiculous as to give either department a 3 (so any such votes get trimmed, leaving only 4s and 5s), what this implies is that somewhere between 5% and 15% of people gave Wisconsin a 4, while somewhere between 15% and 25% of people gave Berkeley a 4. The majority of respondents (somewhere between 60% and 90%) gave Wisconsin and Berkeley the same rating.
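
To spell out the arithmetic behind those percentages (under the same assumption that the trimming disposes of any stray 3s, so the remaining votes are all 4s and 5s):

```python
# If the trimmed votes are all 4s and 5s, the average is just 5 minus the
# share of 4s, so a reported average (rounded to tenths) pins that share down
# to a ten-point range. Purely illustrative.

def share_of_fours(reported_average):
    lowest, highest = reported_average - 0.05, reported_average + 0.05
    return 5 - highest, 5 - lowest         # share of 4s = 5 - true average

for school, avg in [("Wisconsin", 4.9), ("Berkeley", 4.8)]:
    lo, hi = share_of_fours(avg)
    print(f"{school}: between {lo:.0%} and {hi:.0%} of raters gave a 4")
# Wisconsin: between 5% and 15%; Berkeley: between 15% and 25%.
```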

It could be, then, that if you had people specifically rank the top departments, a majority of respondents would have put Berkeley ahead of Madison. Indeed, if you had respondents rank departments and then determined The Number One Department using a system like the Instant Run-Off Voting system advocated by the Greens, it's mathematically possible that any of the departments with ratings of 4.3 or above (Wisconsin, Berkeley, Michigan, Chicago, North Carolina, Princeton, Stanford, Harvard, UCLA) could be the winner, although the scenarios become increasingly implausible as you move down the list. The larger point, though, is that the relative difference between Wisconsin and Berkeley in the rankings is generated by the 10% to 40% who regard the difference between the two departments as enough to give one a 5 and the other a 4, and not at all by the 60% to 90% who thought the departments deserved the same rating on a 1-5 scale but who could still have definite opinions on which program is better.
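
For the curious, here's roughly what an instant-runoff count does. The ballots below are entirely made up and aren't meant to reflect anyone's actual preferences; the point is just that the mechanism uses information the averaging scheme throws away.

```python
# A bare-bones instant-runoff count: eliminate the department with the fewest
# first-choice votes and redistribute its ballots until someone has a majority.
from collections import Counter

def instant_runoff(ballots):
    candidates = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in candidates) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        candidates.discard(min(tally, key=tally.get))

ballots = (
    [["Berkeley", "Wisconsin", "Michigan"]] * 18 +
    [["Wisconsin", "Michigan", "Berkeley"]] * 16 +
    [["Michigan", "Wisconsin", "Berkeley"]] * 12
)
print(instant_runoff(ballots))
# -> Wisconsin (Michigan is eliminated and its ballots break to Wisconsin, 28-18)
```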

This isn't to say Wisconsin wouldn't be #1 under an alternative and better set of rankings--I have no way of knowing--it's just to say that Wisconsin's ranking shouldn't be interpreted as meaning something more or different than it does.

Incidentally, the US News specialty rankings are done entirely differently. Respondents are asked merely to list (but not rank) up to ten departments that they regard as distinguished in that specialty. US News counts up how many respondents list each school, and these counts provide the basis for the ranking. I think US News must do it this way because they recognize the limited knowledge chairs and DGSes must have of specialties outside their own. Anyway, this means that the specialty rankings are essentially a gauge of the overall name recognition that a school has for a particular area. Presumably, in terms of which departments are ranked first vs. second in a specialty, the rankings are entirely a measure of the percentage of respondents who didn't think to include a school on their list, and don't at all reflect whatever relative opinions about the two programs are held by the vast majority of respondents who included both on their lists. (Also, my understanding is that there is no equivalent of throwing out the two lowest scores for the specialty rankings, so they are more vulnerable to being gamed by respondents leaving peer departments in a specialty off their lists.)
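
In other words, the specialty tally amounts to nothing more than counting mentions, something like this (the lists here are invented):

```python
# The specialty rankings are essentially a name-recognition tally: count how
# many respondents listed a school at all, then sort by that count.
from collections import Counter

lists = [
    ["Princeton", "Berkeley", "Northwestern"],
    ["Berkeley", "Princeton", "Harvard"],
    ["Princeton", "Yale", "Berkeley"],
]
mentions = Counter(school for respondent in lists for school in respondent)
for rank, (school, n) in enumerate(mentions.most_common(), start=1):
    print(rank, school, n)
# Princeton and Berkeley get 3 mentions apiece; everyone else gets 1.
```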

11 comments:

Anonymous said...

The specialty rankings are a bit silly -- e.g., I can only imagine that the University of Washington makes the top 10 for Social Psychology because a substantial number of respondents think Richard Emerson is still there.

On the other hand, I think the whole survey is susceptible to these lag-effects. Harvard should be doing better than it is, for example, and I imagine Princeton is still riding a bit on the reputation of people who aren't there anymore. And of course Chicago's position in the list is absurd.

Brady said...

So, just out of curiosity, where is Bucky on the Sociology of Culture list this time?

jeremy said...

Unranked. The sociology of culture list has 12 schools ranked.

1. Princeton University (NJ)
2. University of California–Berkeley
3. Northwestern University (IL)
4. Harvard University (MA)
5. Yale University (CT)
6. University of California–San Diego
University of California–Santa Barbara
University of Chicago
9. University of California–Los Angeles
10. New York University
11. Rutgers State University–New Brunswick (NJ)
University of Virginia

Anonymous said...

All this time I thought they were just dividing the number of faculty by 20 to get the score.

Brady said...

Wow. Neither my former nor my current institution is on the culture list.

But, yeah, um, rankings are bollocks.

And it's warm here.

I keed, I keed. Actually, we made a really swell hire for next year, so between our new hire, P & N, and Barry G., I think we've got a pretty strong soc of culture program going here at the ol' U of Spoiled Children.

(Oh yeah, and I started a grad student brownbag out here this year and have been running it, so now I know your pain, SPAM-meister. Nothing like trying to fill that first slot when nobody wants to go and you've got a whole week to schedule a speaker, to say nothing of the whole "Will the projector work with your laptop" issue that has taken up far too much time this semester. They should give you, like, a prize or something.)

dorotha said...

jeremy complained that i don't comment enough on his blog. just to be clear, it isn't that i think that his posts aren't interesting, it is just that i could never hope that my comments would measure up to either the insight or wit that jeremy demonstrates on JFW. i would hate to drag his blog down with silly comments like:

"oh, jeremy, you have once again illustrated the farce that is our lives in academia! bravo!"

or "with a political eye like yours, i wonder why you don't run for a state senate seat? you'd have my vote!"

or "i never knew what humor was until i met you. jeremy, you have changed my life!"

or simply "you are my god."

so, you see, jeremy, i just don't want to waste people's time with my overwhelming lameness when you are so awesomely awesome to behold.

and you are soooo smart.

and you look taller than you actually are.

and you are the cleverest, really.

jeremy said...

Further evidence of the abuse I suffer at the hands of our graduate students.

Anonymous said...

beware halo effects -- especially for the ivies (sex and gender subfield notwithstanding). a recent poll of "experts" had Princeton's law school ranked 7th nationally. the only hitch: Princeton has no law school!

Anonymous said...

My back-of-the-envelope math says that with 50 respondents (I don't know the actual number) and a uniformity of opinion, such as there must be for schools with a 4.8 average, a simple 95% confidence interval would be like +/- .12, which would mean we can't really tell which of the top three schools is the best, based on this survey. For departments where uniformity of opinion does not reign, the 95% confidence interval is more like +/- .25, which means even more ties.
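
Here's the rough calculation, for what it's worth; the n of 50 and the splits of opinion are just my assumptions, and it lands in about the same ballpark as the numbers above.

```python
# Back-of-the-envelope standard error for a mean rating with n = 50 respondents
# (an assumed number). With near-uniform opinion -- say 80% fives and 20% fours --
# the sd is about 0.4; with more disagreement, call it 0.9.
import math

def ci_halfwidth(sd, n=50):
    return 1.96 * sd / math.sqrt(n)

print(round(ci_halfwidth(math.sqrt(0.8 * 0.2)), 2))  # about +/- 0.11
print(round(ci_halfwidth(0.9), 2))                   # about +/- 0.25
```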

Avery et al. propose a fun twist on rankings in their article, "A Revealed Preference Ranking of U.S. Colleges and Universities." They use the choices of admitted students to rank colleges. As they put it, "Our statistical model extends models used for ranking players in tournaments, such as chess or tennis. When a student makes his matriculation decision among colleges that have admitted him, he chooses which college "wins" in head-to-head competition." Their results are very similar to the US News ones. I can't imagine such a survey would be tough to conduct among graduate students. The only major data collection problem is that you need to make sure you get several respondents from each program.
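
For concreteness, here's a toy version of the sort of paired-comparison ("tournament") model they have in mind, fit Bradley-Terry style; the matchup counts below are invented, not from their data.

```python
# A toy Bradley-Terry-style fit: wins[(a, b)] is the number of admitted
# students choosing school a over school b. The counts are invented.

wins = {
    ("Harvard", "Yale"): 60, ("Yale", "Harvard"): 40,
    ("Harvard", "Stanford"): 55, ("Stanford", "Harvard"): 45,
    ("Stanford", "Yale"): 52, ("Yale", "Stanford"): 48,
}
schools = {s for pair in wins for s in pair}
strength = {s: 1.0 for s in schools}

for _ in range(200):                       # simple fixed-point iteration
    new = {}
    for i in schools:
        total_wins = sum(w for (a, _), w in wins.items() if a == i)
        denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (strength[i] + strength[j])
                    for j in schools if j != i)
        new[i] = total_wins / denom if denom else strength[i]
    total = sum(new.values())
    strength = {s: v / total for s, v in new.items()}

for s in sorted(schools, key=strength.get, reverse=True):
    print(f"{s}: {strength[s]:.3f}")       # the school winning most head-to-heads tops the list
```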

Neal

Article link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=601105

jeremy said...

Neal: Your math seems right, but I'm not sure I buy the relevance of confidence intervals given that we're talking about an attempted census. An alternative issue is how much reliability there is in ratings, especially for schools in the middle of the distribution.

The problem with using student choices to do rankings would seem to be that geographic preferences play such a big role in choices.

Tom Bozzo said...

Jeremy, you wrote:

The problem with using student choices to do rankings would seem to be that geographic preferences play such a big role in choices.

Avery &c. are actually trying to extract the preference ("latent desirability") ranking underlying undergraduate matriculation choices, of which geography is pretty clearly an element, so on their terms this isn't a shortcoming of their method.

If one were to take the position that the U.S. News undergraduate rankings were a pure measure of educational quality (not obviously a defensible position), then Avery &c. would be subtly changing the terms of the ranking -- quality of education is only one dimension (more important for graduate programs, I'd argue) of a school's offerings to its students.

A more fundamental issue is that they're just assuming, as economists are wont to do, that there are exogenously given preferences to reveal in the first place.

(I also blogged some irritation at that paper's straw man motivation late last year.)