For each category, you are asked to select up to four criteria you think are "important" and then the two you think are most important. I will italicize the two I selected as most important, but restrain myself from further discussion. Assessing graduate program quality is a matter about which I have many wheelbarrows full of thoughts, but I have way too much to do, so I don't even want to get into that here.
Anyway, I'd be interested in how you would have answered these questions (or, if you're also faculty and have done the survey, how you did answer them). One of the things I kept thinking about as I did the survey is how I would have answered the questions if I'd done this survey as a graduate student vs. how I answered them now.

Regarding the quality of the program's faculty:
a. Number of publications (books, articles, etc.) per faculty member
b. Number of citations per faculty member
c. Receipt of extramural grants for research
d. Involvement in interdisciplinary work
e. Racial/ethnic diversity of the program faculty
f. Gender diversity of the program faculty
g. Reception by peers of a faculty member's work as measured by honors and awards
Regarding the quality of the program's students:
a. Median GRE scores of entering students
b. Percentage of students receiving full financial support
c. Percentage of students with portable fellowships
d. Number of student publications and presentations
e. Racial/ethnic diversity of the student population
f. Gender diversity of the student population
g. A high percentage of international students
Regarding the program:
a. Average number of Ph.D.s granted over the last five years
b. Percentage of entering students who complete a doctoral degree
c. Time to degree
d. Placement of students after graduation
e. Percentage of students with individual work space
f. Percentage of health insurance premiums covered by the institution or program
g. Number of student support activities provided at either the institutional or program level
BTW, I was surprised by the absence of faculty size as a criterion. I feel like small departments must have won some political/rhetorical battle for faculty size not even to be available as an option that someone doing the ratings could choose.
BTW-BTW, if NRC conducted a survey of graduate student satisfaction/happiness, I would have selected this as a criterion that should be used. I have thought that ASA should organize an online survey on that as a service to the future of the profession. My experience is that when sociology faculty see unhappy graduate students in their midst, their response is first to remove their sociology thinking caps and then to say "there are unhappy graduate students everywhere," a statement that is no doubt true but glosses over the relevant and important possibility that the proportion of unhappy students varies from place to place, and that some of this variation may have to do with the details of their programs.
25 comments:
As ever, I am impressed by your powers of boy detection. When the NRC survey was announced in my (relatively) small department, the fact that faculty size was no longer an available criterion was hailed by administrators and senior faculty as a "victory for small departments." I have no idea who the specific protagonists were, but it certainly sounded like there was deliberate and sustained mobilization behind the change and some great, cheerful anticipation about its potential consequences. We shall see...
I believe these rankings (seen as an alternative to NRC, and discussed at length in a recent Chronicle of Higher Ed article) control for faculty size:
http://chronicle.com/stats/productivity/page.php?institution=45&byinst=Go
I was surprised by your choice of "extramural grants" as one of the two most important in response to the first question. My own choice would be "number of citations." I think of grants as a means to the end of producing research rather than an end in themselves. We should then measure the research produced and its influence, rather than money, to get at department influence or quality. I also would say this tilts the scales in favor of departments that have many faculty working in areas in which there is a lot of grant money available, like health and aging, while departments strong in political sociology or ethnography would necessarily be lower-rated. Want to explain your rationale a bit more?
"I think of grants as a means to the end of producing research rather than an end in themselves."
hear, hear!!
I think of measures as indicators of a latent construct rather than an end in themselves. The question is not grants vs. citations in some abstract ether of quality, but their practical realities as measures used by NRC. More later.
I'm depressed.
Okay, so, rather than offering a spirited defense of why grants should count more than citations--especially since I do believe that citations are vastly overrated as a measure of either individual or department quality for many reasons--I'm just going to confess that if my vote determined the criteria, I would rank citations over grants as well. My vote is just one among many, though, and so instead I voted in that case for what I thought my colleagues would be most likely to underrate vs. overrate in the vote totals. Rest assured counting citations will get enough votes.
I wonder how many respondents will vote strategically like Jeremy. There are several items on the lists that I find it hard to believe anyone would rank as "two most important" (as compared to merely desirable) absent strategic voting.
I have some issue with the *number* of publications. There's some value in it, especially in combination with # of citations, but the quality of publication makes a big difference. I hate when departments focus on quantity over quality, which pushes faculty to write the same article repeatedly or write crap just to have publications. And of course it doesn't take into account the relative time to produce an article versus a book. So, it's not entirely irrelevant but is a seriously flawed measure of faculty quality.
I also have the same qualms with extramural funding, for the reasons already mentioned, though I don't think it's a useless measure either.
I'd rather see faculty-student ratio, or something like that, rather than faculty size.
--andrea
I think I'd also rank number of student support activities as one of the most important (maybe instead of # of PhD's granted), because I take it also as a measure of internal opportunities for professional development.
-andrea
Andrea: My understanding is that many Ph.D. sociology programs, especially outside the conventional Top 20-30, would actually like to have more graduate students but are limited by what they are able to recruit. A measure of faculty:student ratio would reward those departments. (Although maybe I overestimate the extent to which there are departments whose graduate programs are constricted due to lack of demand.)
It is curious that the first question didn't include reputational rankings from surveys like those used by USNWR and (I think?) the NRC itself to rank departments. Although also somewhat flawed, I tend to believe that reputational measures are probably a slightly better indicator to rank departments than citation counts, publication counts, etc.
NRC will be doing a separate reputational survey. Part of this survey asks if you are willing to participate in that one, but I don't know how they specifically choose the sample among willing participants.
I think I gave identical rankings on the first two categories, but intentionally chose "percentage of entering students who complete a doctoral degree" over "average number of PhDs granted over the last 5 years." The absolute measure presumably benefits departments with a large number of faculty and/or that admit large cohorts of (relatively) poorly funded graduate students, and neither of these attributes strike me as prima facie evidence of quality.
Percentage of doctoral students who complete is to me a bad measure of quality because it counts early attritions the same as late attritions. (Actually, depending on how one does it, early attritions often end up counting more heavily against a department.) In my world, early attritions aren't even necessarily a strike against a department, while late attritions are bad things.
You are right that # of Ph.D.'s gives an obvious advantage to departments with more students. I selected that because of the dearth of other ways of giving credit for faculty size.
Huh, I never thought of grad programs not being able to recruit as many students as they would like. Nor have I heard of that (though I'm not saying it's not possible).
But what's the inherent benefit of faculty size? Maybe I just don't see it because that doesn't seem desirable to me personally, either as a student or faculty member.
-andrea
At this website you can "play with" your own values and see how it all works out:
http://www.phds.org/rankings/sociology/
It uses NRC data from 1994. So it's old. But it's still interesting to see as you weight stuff.
"c. Receipt of extramural grants for research"
what? my faith in your to-date-flawless judgment has been fatally challenged. even as a bit of strategery, i don't get it. Your 7:30 explanation only adds to the mess.
I can't help but think the aversion to grants as a criterion strongly reflects an aesthetic/idealistic reaction, rather than much consideration of its actual implications as a measure.
Extramural grants are competitive. The subareas they favor are also the subareas that more consistently have jobs. The subareas they disfavor still often have quality faculty who find ways to get grants. Sociology grad programs need money to run. Grants offer opportunities for apprenticeship. Grants often involve assembling a team from the same university, which shows that constructive collaboration is happening there; they involve explicit peer assessment of the investigator and of the research environment of the place being evaluated. Grants show the department has some commitment to doing "relevant" research that someone else finds credible enough to pay for. And so on.
Sociologists at places that have grants revel in the benefits of overhead -- I could provide examples from grad students at Wisconsin talking about free printing as though it were some inalienable student right -- but then they want to take this stance of disdain toward grants, like they are irrelevant contaminants of this pristine thing that is true "quality".
"I can't help but think the aversion to grants as a criterion strongly reflects an aesthetic/idealistic reaction..."
I don't have an "aversion," but I do think it is a pretty silly "top two" criterion. The problems form a long list, much longer than those associated with citation counts (for instance). I also think you confuse the audience. Most basic science is, almost by definition, not "'relevant' research that someone else finds credible enough to pay for." And, strangely enough, most of us are interested in "basic science." When someone mentions "relevant" and "sociology" in the same breath, I instinctively reach for my NoDoz.
Free printing isn't an inalienable student right? Shhh! Don't tell my department chair! We are about to lose all grad student workspace, so don't give 'em an excuse to take away the rest of our toys, too.
"Most basic science is, almost by definition, not 'relevant' research that someone else finds credible enough to pay for. And, strangely enough, most of us are interested in 'basic science.'"
If the first sentence is supposed to be a generalization about science in general, including the natural sciences, you're being silly if you think that basic science research is being conducted without significant amounts of funding. Those sciences have made a compelling case that their basic science is worth paying lots of money for (largely by promising that downstream it will either help us cure disease or win wars).
Meanwhile, if your use of "us" in the second sentence refers to "sociology," I doubt you are correct about that. I'm generally on the side of not caring so much about the "relevance" of sociological work, and I feel like that is a minority view, especially as it is a view that many sociologists even disdain.
I think anon 7:37 means that "basic" sociological research doesn't require large infusions of funds and that $$ is not a great measure of "program quality" (anon?). I am confused by your (JF's) response: so "science" does not equal "sociology," it is not "us?"
I'm not going to get into the intro-class issue of sociology's status as a science or what exactly "basic science" in sociology is here. Suffice it to say I am on the side that would personally greatly prefer that more sociologists took a genuinely scientific stance toward their work.
I started to write a comment, but it got way too involved. If anyone is so motivated, you can read the whole deal here. Also, I have a poll up so you can vote your own opinion about these three questions.