Saturday, November 04, 2006

sociologist, rank thyself!

So, apparently some people have repeated the exercise of providing rankings of the research productivity of sociology departments based on publications in the American Sociological Review, American Journal of Sociology, and Social Forces (via Chris). The authors appear to be from Notre Dame, which by the authors' methodology (e.g., the selected time window and counting affiliations by where authors are now as opposed to where they were when the articles were written), comes out #5, far above its usual reputational ranking and well above where it was the last time productivity rankings were done, although then it was by a group at (my beloved) Iowa rather than at Notre Dame. I think Notre Dame has a number of great people, has made some extremely good recent hires, and is underrated by reputational rankings, but one of the curious things about these rankings is that they have a way of appearing just at the time and using just the criteria that manage to favor the department doing the rankings. When the Iowa group did the earlier rankings, they published a way of calculating them by which Iowa came out #1.

I'm all for departments promoting themselves, and I can understand where departments that feel like they have productive people and are collectively underappreciated would want to get the word out. But I don't much like the process of dressing it up like one is engaged in a dispassionate enterprise that just happens to produce results favoring one's home department.

Anyway, there are many obvious criticisms of using this as a general metric of departmental prestige or even departmental article-producing productivity, which the authors acknowledge (even if they plow ahead nonetheless). I want to offer an additional criticism the authors don't acknowledge: I want to see a defense of the concept that there is presently a "Big 3" of sociology journals. I think there is a "Big 2": ASR and AJS. Nobody in sociology confuses the prestige of a Social Forces with the prestige of an ASR or AJS. But the bigger problem with including Social Forces is that it's not obvious to me that Social Forces is the "next best home" for articles that don't make it into ASR or AJS. I think that much of sociology right now is conducted in subareas for which the top journal in that subarea is the most prestigious outlet after ASR and AJS, and Social Forces comes somewhere after that. If that's true, then it really makes no sense to include it in rankings like this, since doing so induces all kinds of bias depending on whether work happens to be in a subarea for which SF is the top outlet after ASR/AJS (e.g., ever noticed how many papers on religion appear in SF?).

I don't really mean to diss Social Forces--especially since, um, they could be getting a submission from me in the next few months--it's just that using them in these rankings raises two irresistible ironies. One is that the way the Notre Dame group motivates its article is by the idea that reputational rankings of departments are fuzzy and history-laden, so we should look to some hard-numbers criterion. Fine, then: I want to see a hard-numbers criterion applied to establishing the prestige of the journals included, because otherwise the authors are just using the same kind of fuzzy, history-laden reputational reasoning for journals that they see as problematic for departments. The other is that, even in the best-case scenario where Social Forces is the clear third of the "Big 3," it publishes more articles than either AJS or ASR, so it actually ends up counting the most in these rankings.

For the record, I'm not much into prestige rankings for either departments or journals. Indeed, it's for that reason that I think when rankings are offered, they warrant some scrutiny.

37 comments:

Anonymous said...

Call me a snob, but I look at Social Forces as a repository of papers that (a) got rejected from ASR or AJS and/or (b) are written by UNC graduate students or alums. There are some good papers that meet one or both criteria, of course, but on average SF papers are no better than the papers in, say, Social Science Research.

The quality and prestige of SF has not been helped by its editors' decision to start publishing about 400 papers per issue.

Anonymous said...

Go ahead, heap scorn on Social Forces. Give in to your hate (no, wait, folks from Iowa don't hate much, do they?).

I want to love "my department's" journal (I have been assured that, despite its affiliation with SSS, UNC can really do whatever it wants with SF), but it is hard. For years they couldn't turn an article around, and then some say J. Blau ran it into the ground with the public sociology debacle, and now it seems like, to make up for these first two, they'll publish 80 articles an issue with a healthy, heapin' dose of definitely non-public sociology.

Then there is the charge that SF is an outlet for UNC folks and kin in a way that AJS isn't. In a completely superficial, unscientific way this seems true. I've thought about being less superficial and more scientific, but, well, evidently I got this dissertation to write that is getting in the way.

We've debated this new ranking a little around here. If I've understood it, the ranking is, basically, the number of faculty-articles in a three-year span. What about faculty size? That would have been easy to adjust for, no? How about tenure status? It might be that junior faculty publish more in these outlets than senior faculty, or vice versa.
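
That size adjustment would be trivial to compute from the published counts and public faculty rosters. A rough sketch of what I mean (the numbers here are placeholders, not the study's actual figures):

    # Per-capita version of the article-count ranking.
    # Article counts and faculty sizes are made-up placeholders.
    departments = {
        "Dept A": {"articles": 18, "faculty": 40},
        "Dept B": {"articles": 12, "faculty": 17},
        "Dept C": {"articles": 10, "faculty": 22},
    }

    per_capita = {name: d["articles"] / d["faculty"] for name, d in departments.items()}

    for name, rate in sorted(per_capita.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {rate:.2f} articles per faculty member")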

Of course, we came out looking pretty good, so I guess I don't care (too much).

Unknown said...

Lars: I think we should adjust for weight and height, as in: Department score = E(articles)/{E(lbs)/E(inches)}.

Why? Because I'd guess that pound for pound, Cornell is one of the top 5 sociology departments in the country. (Of course, it was better when we had Grusky, S. Szelenyi, and Brinton, whose combined weight -- soaking wet -- barely exceeds 300 pounds.)

You have to adjust for height as well as weight because otherwise departments with a disproportionate number of women would be rated higher, all else being equal, and we all know that's not accurate.

Anonymous said...

Well, the flaws of Jeremy's post are abundant.

What he doesn't mention is that people actually have done objective measurements of the soc journals. According to the Web of Science, in a typical year, Social Forces comes in with something close to the third-highest impact factor. Sure, every few years one journal might sneak in front of Social Forces - but this is usually a journal with as much of a non-sociologist presence as a sociologist presence (e.g. Administrative Science Quarterly). Even better than Web of Science, Michael Allen did a network centrality analysis and found that Social Forces was indeed the third most central journal in our discipline. Sure, Social Forces fluctuates, but over the past 10-15 years, SF is routinely at or very near the top 3. These sorts of results have appeared in Footnotes, but oddly Jeremy seems to have neglected them.

What is even more problematic about Jeremy's post is the way he rhetorically casts ASR and AJS as clearly above any problems or free of any biases/advantages.

For example, when Camic/Wilson edited ASR, there was clearly some advantage for Wisconsin people or Wisconsin students, etc. Are we to believe that every time a Wisconsin person publishes in ASR it is just on merit? AJS obviously advantages Chicago people. Why are there so many Chicago alums that have never published anything *except* that solo AJS piece based on their diss? If you want to throw stones at Blau's management of SF, why does no one complain about the obscenely slow turnaround at AJS or the clearly biased intervention of Abbott?

Are we to believe that Jeremy has never benefited from any privileges or connections and those "nepotists" at SF are unusual?

Last, but not least, there are areas of sociology that do not have an obvious specialty journal. While education scholars and health scholars have clear outlets with ASA, many others do not, and SF becomes the obvious next-best place for their work.

No, I don't work for SF and have never had any connection to that journal or UNC.

Anonymous said...

Well, that was about as smug and pedantic as it gets. More evidence in support of my dislike of academics and their style of argument.

Anonymous said...

Kim:

I think your suggestion of controlling for BMI is important and useful. We might have to throw Cornell out as an outlier. However, I would also add that we will need to control for climate, as we know that, ceteris paribus, those in the north are heavier, especially in the winter, in order to keep warm. (Of course, we will overlook the extra 35 lbs. I have gained since I moved from Chicago to N.C.)

Which, I will point out to Jeremy here, makes a strongly compelling case for why sociologists DO need to account for climate as much as genetics.

Anonymous said...

I forgot to add the alternative hypothesis, that people in the north will have lower BMIs, as they expend more energy keeping warm, at which point we would need to keep Cornell in the analysis.

Anonymous said...

"If you want to throw stones at Blau's management of SF, why does no one complain about the obscenely slow turnaround at AJS or the clearly biased intervention of Abbott?"

I think both Blau and Abbott deserve a small stoning.
-Georg Simmel

Anonymous said...

When you have so much uncertainty in the system and clear bias for those from UNC and Chicago, it seems clear that these sorts of exercises have little to do with any real rankings. Books matter, and articles in other journals matter. What matters most of all is perceived leadership in the discipline. Because Notre Dame and Iowa rank so low on this last dimension, and yet do manage to produce decent articles, they obviously feel the need to do this sort of self-promotion.

Anonymous said...

Given how poorly sociology is doing relative to other social science disciplines, being highly ranked for the 'normal science' that ASR/AJS/SF embraces may not be the best measure of quality and long-run impact. It could be something totally different -- like books that are cited as cutting edge, journal articles cited out of the discipline, and so on. Getting an index of that may well be impossible until we look back in 20 years and see where the truly influential people were. I suspect they are fairly evenly spread out, and thus that there is a great deal more parity in the field than is recognized in any of the standard sorts of rankings.

jeremy said...

Anon 11:14am: I don't see where in my post I was criticizing the editorship of any of the journals in question. I don't see where I made accusations of "nepotism." I think editors have a hard job and are subject to continual complaints about the work they do. I have my own complaints, although I try to temper them by recognition that they are in a difficult spot, and in any case they are irrelevant (and thus were unvoiced) in my post. What other people say in response to my posts is not under my control, and if you are going to be so insufferable in your counterarguments, you might at least take the trouble to distinguish between what I say and what commenters say.

My argument is that AJS and ASR are much more highly regarded than SF as a publication outlet, and that it's unclear where SF fits in relative to top subarea journals. For this reason, if someone is really interested in "which departments are producing the most research in top journals," including SF is a bad social science decision, especially since it is going to contribute the most to the outcome because it publishes the most articles. You sound from your post like you are familiar with sociology, so I cannot believe that you are suffering under any delusion that SF is comparable to AJS and ASR in terms of, say, what having an article in ASR/AJS vs. SF does for a job candidate's career prospects or a faculty member's merit-pay case. So your claim is that it is a clearer number 3 than I think it is. If you post a link to the abundant evidence in Footnotes to this end, I'll be pleased to look it over.

Incidentally, be aware that regardless of the merits of your arguments, if you are a professional sociologist, then using the luxury of commenter anonymity to make assertions about the publication of a specific person's (i.e., my) articles as being the result of nepotism makes you, in fact, a quite unprofessional sociologist.

Anonymous said...

Rankings are a rich source of sociological outrage. The funny thing is that the U.S. News & World Report ranking - arguably the worst hackwork - is the de facto reference ranking for most people in the discipline. This includes nearly everybody at Jeremy's place of employ, the venerable UW Madison, long ranked #1. This is not to deny that Madison is a fantastic department. But the #1 ranking occasions a certain amount of smugness among its students in excess of what's defensible. Consider this: what is the #1 department in the country in terms of placing its PhDs in the top (say, 10 or 25) departments after graduation? Wouldn't that be the most important ranking for grad students to consider? Yet, neither relatively, nor - shocking! - in absolute terms would the answer be UW Madison.

jeremy said...

Anon 2:50pm: My place of employ is actually Harvard. I agree that students should look at placement when contemplating schools, and I think Wisconsin could be doing better about placement in a number of distinct ways. Nonetheless, I think the idea that placement provides the decisive criterion for ranking graduate programs vastly overestimates the incremental causal effect of programs on students. I think most of the variance between programs in placement is due to selection--although that's a hypothesis for which I think it would be great if someone could put together data and test.

Also, NRC is coming out with sociology rankings, which will be accorded more esteem than the US News rankings. I posted at length about flaws in the US News ranking system when the sociology rankings came out.

Unknown said...

On the purported homefield advantage in getting into journals: representation in a journal is a function of submission rates as well as acceptance rates. It's quite possible for a journal editor to be unbiased (setting aside, for a second, whether correlated preferences=bias) in evaluating submissions, but still end up with a disproportionate number of articles written by a given type of author (e.g., Chicago students/recent PhDs, UNC students/recent PhDs).

Most of us don't have access to the data one would need to support or refute the bias charge, but journal editors do. Seems like they could do the discipline a huge favor by calculating and publishing acceptance rates by carefully chosen subgroups (e.g., acceptance rates for Chicago students/recent PhDs and acceptance rates for Berkeley, Wisconsin, Michigan, or Harvard students/recent PhDs). This would either put the burden of proof back on the critics or, at worst, show that internal review practices need to be reevaluated. (Abbott mentioned in his annual AJS report this year that there wasn't a bias toward Chicago students, but he didn't elaborate, at least that I remember.)
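
For what it's worth, the calculation itself would be trivial once an editorial office exported its submission log. A sketch of the sort of table I have in mind (the records below are invented, not any journal's actual data):

    # Acceptance rates by PhD-origin subgroup, from a hypothetical submission log.
    from collections import defaultdict

    submissions = [
        {"origin": "Chicago", "accepted": True},
        {"origin": "Chicago", "accepted": False},
        {"origin": "Chicago", "accepted": False},
        {"origin": "Other", "accepted": True},
        {"origin": "Other", "accepted": False},
        {"origin": "Other", "accepted": False},
        {"origin": "Other", "accepted": False},
    ]

    totals, accepts = defaultdict(int), defaultdict(int)
    for s in submissions:
        totals[s["origin"]] += 1
        accepts[s["origin"]] += s["accepted"]

    # Representation in print confounds submission volume with acceptance rates,
    # so the rate, not the raw count of published articles, is the relevant
    # number for the bias charge.
    for origin in totals:
        print(f"{origin}: {accepts[origin]} of {totals[origin]} accepted "
              f"({accepts[origin] / totals[origin]:.0%})")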

Anonymous said...

I was talking to a friend of mine about this today. I think that placement is higher among private schools, and that might be a matter of selection, but it is probably a matter of funding too. If you have to worry about getting funding every semester, as Wisconsin and Berkeley students often do, it is hard to focus on publishing. Besides, Wisconsin is a very impersonal department for many of us; you have to figure out on your own (and with other graduate students) how the whole process works, unless you are one of the few "chosen ones."

Unknown said...
This comment has been removed by a blog administrator.
jeremy said...

Anon 7:48pm: This may well be a problem for Madison, but I'm not sure it's a problem for Berkeley. If it is, given Berkeley's strong placement record the last 10 years, it would suggest an even stronger role for selection or an even stronger contribution of the program itself. I guess my own suspicion about the effect of funding is more the effect of teaching vs. not teaching versus the effect of uncertainty vs. certainty.

Kim: I'll look forward to seeing your post.

Anonymous said...

Anon., 2:50 p.m.: Not sure I buy this whole "Wisconsin grad students have excess, unwarranted smugness" argument. Every Wisconsin grad student I've ever known is a nervous wreck about publishing and the job market. I've actually never heard a grad student at Wisconsin seem at all assured about anything regarding her/his career. Of course, this is all anecdotal. But I'm reasonably assured you didn't do a study using the Smugness Index, either.

Anonymous said...

I agree totally with the previous post; we are usually not smug, though there is a smug one among us once in a while (and they tend to be smug in general). Also, given our Darwinistically oriented system of selection, some people think it pays to "distinguish themselves from the masses" (i.e., the other "mortal" grad students at UW) by being smug.

If you want to see how paranoid Wisconsin students are about the job market, just look at previous comments to Jeremy's posts (especially the job-market-related ones).

Anonymous said...

At Cornell, we have much more fun ranking departments by criteria that we find amusing -- our favorite being best-looking, as it allows us to think about the ugliest faculty we know. We are looking for the equivalent of the early 1980s Boston Celtics (high quality players, but so ugly you wouldn't want to sit across from them during a meal).

In all seriousness (and I don't know if the prior revelation counts us as smug or not, though it is clearly a sign of pettiness), in my view the only way to really rate research impact of a department's faculty is to rank the faculty, not thin slices of their CVs that place them in journal articles in whatever big-3 one picks. For example, Harvard should get credit for having Mary Waters, Orlando Patterson, Christopher Jencks, and William Julius Wilson, none of whom have published in the big 3 much at all. But, they write great articles and great books that people read and people cite.

This could be done by a committee of CV readers. In fact, I know of a department chairman who ranked all faculty at top 30 departments with 1, 2, and 3 for quality of recent work, based on the last 8 or so years of their CVs. He then calculated the means and took them to his dean to say: fortunately, we don't have many 3s, but we have too few 1s. He also told me about it over a meal, and it was just about the best ranking I have seen/heard, as it seemed to factor in more data by using the fair judgment of a critic who was ultimately motivated to convince his dean to give him more resources but who did not want to undermine his case by making his department look so mediocre as not to deserve any investment. And he had a great descriptor of being in the top 10, something like 5+ number 1s and a faculty that was lower than 40% 3s.
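
His scheme is easy enough to mock up, for what that's worth. The ratings below are invented, and I'm only guessing at his exact cutoffs:

    # The chairman's scheme (as I understood it): rate each faculty member
    # 1 (strong), 2, or 3 (weak) on recent work, then summarize by department.
    dept_ratings = {
        "Dept A": [1, 1, 1, 1, 1, 2, 2, 2, 3, 3],
        "Dept B": [1, 2, 2, 2, 3, 3, 3, 3, 3, 3],
    }

    for name, ratings in dept_ratings.items():
        ones = ratings.count(1)
        share_threes = ratings.count(3) / len(ratings)
        mean = sum(ratings) / len(ratings)
        # His rough top-10 descriptor: 5+ number 1s and fewer than 40% 3s.
        top_10 = ones >= 5 and share_threes < 0.40
        print(f"{name}: mean {mean:.2f}, {ones} ones, "
              f"{share_threes:.0%} threes, top-10 profile: {top_10}")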

jeremy said...

Steve M: Why does the % of 3s under that system matter? It seems to make an assumption that the weaker people in a department actually make it worse. This may betray my belief in the value of department size, which of course also happens to be self-serving.

I've thought the best way of doing a faculty quality ranking (not the same as a grad program ranking) would be to make an ordered list of the eight "best" people in each department--by whatever criteria motivate the enterprise--and then just base the comparison on the 5th through 7th best persons. I waver between this and the more parsimonious thinking that you should just base rankings on the 6th best person in each department versus the broader comparison of 4th through 8th.
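
Mechanically it would look something like this (the scores are arbitrary stand-ins for whatever criteria one settles on):

    # Rank departments by the depth of their faculty: average the 5th through
    # 7th "best" people (or just take the 6th), given some per-person score.
    faculty_scores = {
        "Dept A": [9.1, 8.7, 8.5, 8.0, 7.6, 7.2, 6.9, 6.1, 5.5],
        "Dept B": [9.8, 9.5, 7.0, 6.8, 6.5, 6.0, 5.8, 5.2],
    }

    def depth_score(scores, lo=5, hi=7):
        """Average of the lo-th through hi-th best people (1-indexed)."""
        ranked = sorted(scores, reverse=True)
        return sum(ranked[lo - 1:hi]) / (hi - lo + 1)

    for name, scores in faculty_scores.items():
        print(f"{name}: 5th-7th best average = {depth_score(scores):.2f}")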

Anonymous said...

I asked this same question, and his response was that, although the usual rankings are based on the "size + eminence" calculation, he was unconvinced by this. Although it was hard for me to figure out exactly how one got a 3 rather than a 2 (since he would not give examples!), I believe his point was that an abundance of 3s hurts for a variety of reasons. First, they are a leading indicator of the direction of a department when the 3s are younger than 45 or so. The presence of 3s in the over-45 category then indicates that a department is out of step with the leading edge of the discipline (and may be a sign that a department has the wrong mix of incentives and norms about productivity, possibly weighing teaching and service too highly). Remember, this was a ranking of research excellence, not overall departmental strength defined as broadly as one might choose.

Finally, I got the sense from talking to him that he also felt that 3s tend to pollute the environment with their mediocrity by pushing for hiring, etc., that is suboptimal. I don't know if I agree with this last bit at all, but maybe that is because I am a 3! Or maybe I haven't been in the business long enough to observe this.

In my view, it seems clear that trying to rank all departments together is a bit silly, especially across the public and private sectors. Does it really make sense to try to compare Wisconsin to Princeton? I don't think so. Should Wisconsin be penalized because it does not have the money to hire Marta Tienda or Paul DiMaggio? I don't think so. Should Princeton be penalized because they don't have enough undergraduates to need more than 14 faculty members, or whatever? I don't think so. It's apples and oranges for the most part.

All that being said, in my view Iowa hurt itself reputationally by trying this 8 years ago, and Notre Dame will likely suffer the same fate. Such rankings can serve a purpose when shared with a Dean, but they just make a department look weak and paranoid when shared with the discipline as a whole.

Anonymous said...

I must admit that I don't really understand the benefit of rankings to grad students or junior faculty. Isn't productivity in colleagues a bit overrated? Trickle-down funding through the department, reflected glory at conferences? Not to discount any of these, but, first, I care about my own productivity (job, tenure!), and second, I care about how much my environment will help my productivity. That's a function of the curiosity, style, and character of my colleagues, more than of their productivity. What good are stars if they don't talk to students/junior colleagues on principle or for lack of time?

Anonymous said...

"Well, the flaws of Jeremy's post are abundant...What he doesn't mention is that people actually have done objective measurements of the soc journals."

Jeremy - I've got to say that methinks this ill-tempered anony hits the nail on the head. All the DATA suggest that SF is indeed third. Yes, third is...third...but it still is, um, third. See, there's this little thingy called "impact factor" and so on and so forth, but you must surely know this, so your post does kinda leave me scratching my head.

jeremy said...

Anon 8:49am: There's a basic point about measurement here that apparently some portion of my readers are just not going to understand. Note that I have never proposed an alternative journal as #3. Note that in saying there is not a Big 3, I have not said there is no #3. I have said that #3 is not big in the sense that ASR/AJS is big, that its relationship to the top journal in a subarea depends on the subarea and so makes its prestige contingent on area, and that the fact that it publishes more articles than ASR or AJS exacerbates the conceptual problem of including it in the measure because it means that it has the largest effect. It's all right if you disagree with these points or do not understand them, but do not presume that I'm proceeding forth in ignorance of the existence of journal impact factors.

Anonymous said...

"It's all right if you disagree with these points or do not understand them, but do not presume that I'm proceeding forth in ignorance of the existence of journal impact factors"

8:49 here again. You're getting a little testy about this, are you not? I understand your point - and took pains to point out that you surely understood that of earlier critics ("there is a minor literature on this, but you appear to be shorting it") - so I don't get the vitriol. I agree that SF is a VERY DISTANT "third," but the point is that the "big three" do wind up at the core of all citation analyses of the discipline. The idea of multiple Lake Wobegons in which we're all above average in our precious "specialty areas" is simply not supported by any analysis of the field. Once again, you are surely aware of this, so I wonder why you are throwing red meat to the masses on this point? Jus' askin'.

jeremy said...

Sorry for being testy. I'm still feeling pissy from the anon commenter from earlier in this thread. Anyway, I don't have any problem with its being characterized as a very distant third, as long as it's understood that this means it's bad to include in rankings of prestigious publications if it's going to be the main determinant of the outcome because it publishes the most papers.

Anonymous said...

Where's the ranking without Social Forces? Wasn't someone going to run and/or share those numbers?

I'd be interested to see where the movement is between the rankings based on the Big 3 and the Top 2.

Anonymous said...

Anon 2:45:

See here

Anonymous said...

CHECK YOUR OWN BIASES!

Actually, it's not "curious" at all that a department that looks good is going to publish the study. Who do you think is going to do it? A department that looks bad in the ranking?

Furthermore, the rankings use standard methods that have been used before by other rankers, so it may not be "a dispassionate enterprise" (I challenge you to name some research that is!), but it uses a common method for ranking (which is why Social Forces is included) and the article freely admits that this is only one way of doing it and only one measure of productivity. If you don't like the way it was done, do it your way and publish that.

As expected, however, the critiques of the study will emanate from departments that didn't fare so well. Wisconsin, Freese's department, suffered both on this study and others like it, relative to its perennial number one ranking. Undoubtedly, then, the study MUST be flawed....

Furthermore, it seems to me that if ND were seeking merely to maximize its position in the rankings, it could have done a lot better by adjusting for faculty size (as Markovsky and Jones et al. did). The last time I looked, the ND department and some others on the list that did well (Stanford, Duke) are about half the size of the Wisconsin giant, not to mention the usual winner of this contest, Ohio State.

Anonymous said...

ND Responds

Well, I guess Notre Dame has to weigh in here. I'm Dan Myers, currently chair of the ND department.

Let me say that Jeremy's issues with this ranking study are well founded. But at the same time, there are problems with any ranking study and if you calculate different things different ways, the results are going to be different. (That doesn't apply just to ranking studies, of course--what is the right operationalization of ANYTHING?). We can't deny that we could have calculated things differently and ended up ranking ND lower. Of course, we could have calculated things differently and ended up ranking ND higher too. Instead of picking one of those methods, we just did what other people have done in the past. Maybe it's not the best, maybe there is nothing that is the best, but it does offer some sense of a standard approach that has at least something to offer in terms of over-time comparability.

I'll also just repeat what was said by the authors of the article. This is just one important thing among many others that departments do. It's not a comprehensive rank of quality or any such thing. Obviously, Berkeley does poorly on this score because they are much more of a "book" department. My response to that is: yep. And other dimensions of quality in terms of grant getting, specialty journals, graduate training, etc. etc. are not considered. But that's not what the article is trying to do. Should we not know this information because it isn't complete? Some may argue "yes, because it distorts." OK, true enough, but if we all do that, we'll never get anywhere.

Reputational studies have awfully big problems--and many people have noted an inertia in them that almost seems insurmountable. I don't think it is unfair to say that they don't reflect the current reality of many departments too well. That was the motivation for this study. The NRC study is about to happen and its reputational ranks tend to have an awfully long tail of influence. We thought it would be nice to have that rank informed by some reality. It's true that our department has made progress as Jeremy pointed out, and it isn't unreasonable for us to point that out using some actual data.

At the same time, one can take accusations of bias too far. The timing of the study was done to get the latest possible information into the study such that it could still be printed in time to inform those who do reputational rankings for the NRC. The truth of the matter is that based on what is in the pipelines of these journals, ND probably would have been better off to wait a bit, but then the study couldn't have informed the NRC process. Not that it will have much of an effect, but that was the point (whether you think that is legitimate or not is another question, I suppose).

On the Social Forces question. I think most people in the discipline think of AJS and ASR as in a virtual tie for the top general journal and that SF is somewhat lower. But, we still do look to it as a strong general outlet and publication in it can be very important in terms of the visibility of research and the career trajectories of its authors. Furthermore, even though the three journals are not all tied for the lead, it doesn't mean they shouldn't be considered. In political science there are also three top journals, but one is clearly the top dog, one is clearly second, and another is clearly third. Should we eliminate one or both of the lower two? The answer is that you could, but then you are providing different information. That's fine. Provide it if you wish. We provided one kind of information that we thought people might generally appreciate as useful.

If you ask me, the sensible thing would be to weight journals by something that reflects their relative position. Impact factor is a decent candidate. Allen's core influence factor is another. And this would undoubtedly move things around some on the rank. But that also opens up other questions, like: why not include Social Problems, top specialty journals like JHSB, or even all sociology journals, and just weight them by their impact factor? But then what about journals in nearby fields--like education, social psychology, political science, etc.? What about people who publish in those? Etc. etc.
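
To be concrete about what weighting would involve, here is a minimal sketch; the weights and counts below are invented, not actual impact factors or anyone's real publication record:

    # Weight each journal's articles by some prestige measure (impact factor,
    # Allen's centrality score, etc.). All numbers are illustrative only.
    journal_weight = {"AJS": 3.0, "ASR": 2.9, "SF": 1.4}
    dept_articles = {
        "Dept A": {"AJS": 3, "ASR": 2, "SF": 6},
        "Dept B": {"AJS": 1, "ASR": 4, "SF": 2},
    }

    for dept, counts in dept_articles.items():
        raw = sum(counts.values())
        weighted = sum(journal_weight[j] * n for j, n in counts.items())
        print(f"{dept}: {raw} articles, weighted score {weighted:.1f}")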

You can go on all day thinking of permutations of these kinds of studies. I say, if you are motivated to do one, do it. The more the merrier.

Apologies to the authors for sounding like I represent them--I don't. Subsequent barbs about the above should be directed at me, not them. Just give me a sec to get this kevlar vest on!

Dan

Anonymous said...

Geez... I'm reviewing my post and seeing this: "if you calculate different things different ways, the results are going to be different."

Wow. That is brilliant! I hope that shows up in an intro methodology book. Can you actually lose tenure for saying something that dumb? :)

-D

Anonymous said...

I find it amazing that Dan Myers would publicly admit to his desire to influence the NRC rankings by supporting this 'study'. This admission is almost as damaging to the ND reputation as the study itself.

Anonymous said...

The purpose of "influencing" them is to make them more accurate, not to distort them. I hope that makes a difference to you. It should be clear that I welcome all kinds of information that could inform how people rank departments in any reputational study.

Anonymous said...

Speaking as someone out of the discipline (and academics in general), who honestly does not know the answer, I wonder: Just how influential are these rankings? What does one "receive" from them, literally or figuratively? What does a prestige score actually "get" you? I'd be interested to hear What Jeremy, Dan, and everyone else thinks.

Anonymous said...

That's a good question. People get pretty riled up about the whole thing, so it must have some kind of consequences.

My perception though, is that studies like the Footnotes one tend to produce a short conversation buzz and then are promptly forgotten about. There are bunches of them out there that you can find here and there in more or less reputable outlets. They assess departments and even individual people and publications on various criteria (citations, publishing houses, journal articles, grad student placements, books versus articles, etc.). I don't really think people pay much attention to them in the long haul.

The bigger deal ones are the NRC study and the US News behemoth. These, I think, do have a bigger impact/payoff because they are equated with prestige in many people's minds (and clearly they are at least correlated with prestige). That turns into the department being more attractive for graduate students, for prospective faculty, and so on. Some people also think you get a golden glow when it comes to publishing and grant proposals from being associated with higher ranked departments. Others believe that coming from a higher ranked department--even if all other things are equal--will help you do better on the job market.

I think the biggest thing, though, is internal resources. For example, departments are always fighting with Deans and Provosts to get lines to hire faculty and to get funding for graduate students (or to keep lines when someone leaves or retires). Higher ranks make that job a lot easier and more likely to be successful.

Really, they shouldn't matter as much as they do--just like publications in ASR, AJS, and SF probably shouldn't matter so much in tenure and hiring decisions--but external stamps of prestige do turn into real resources, which is why this is such a hot topic!

Dan

Anonymous said...

i am curious why Social Problems is neglected here. it has a higher impact score than Social Forces! what do others think about Social Problems compared to Social Forces? no, i am not affiliated with either but i have published in both...