Or, at least, how is it that US News & World Report generates the ranking that has Wisconsin #1 and Berkeley #2?
As enthusiastic as I am to see Wisconsin at the top of the rankings, I would be lying if I said I thought the US News ranking methodology was particularly sound. It's plainly inferior, for example, to the methods news services use to rank college sports teams.
Here's how it works: the chairs and directors of graduate studies of active Ph.D.-granting programs (averaging at least one Ph.D. a year over the past five years) are given a survey. The ultimate response rates for the survey are low--about 50% of the sociology surveys were returned, which was actually the highest rate among all the Social Sciences & Humanities (less than a quarter of the psychology surveys were returned). I don't regard the low response rates as a problem, as my guess would be that there is a substantial correlation between not sending back the survey and being relatively uninformed about other departments and graduate programs. (That said, it seems very much an open question whether anybody is really in a great position to know that much about very many other graduate programs, and so whether 'expert polling' can ultimately produce rankings with that much validity.)
The survey basically lists all the departments with Ph.D. programs in alphabetical order and asks respondents to rate each program on a scale of 1-5 (5=outstanding, 4=strong, 3=good, 2=adequate, 1=marginal). To reduce the capacity of a single rogue voter to game the rankings, US News throws out the two highest and two lowest scores for each school, and then takes the average. The rankings are just the ordering of these average scores, with the additional weird twist that US News regards two schools as tied if their averages are equal when rounded to the tenths digit. (In other words, two schools with averages 4.86 and 4.84 are not considered tied, because those round to 4.9 and 4.8, while two schools with averages 4.84 and 4.76 are considered tied, because both round to 4.8.)
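For the concretely minded, here's a minimal Python sketch of the scoring rule as I've just described it. The ballots are invented for illustration, of course; the actual survey data aren't public.

```python
def usnews_score(ratings):
    """Drop the two highest and two lowest ratings, average what's left,
    and round to the tenths digit (the precision US News reports)."""
    trimmed = sorted(ratings)[2:-2]
    return round(sum(trimmed) / len(trimmed), 1)

# Hypothetical ballots: 100 respondents for each school.
wisconsin = [5] * 90 + [4] * 10
berkeley = [5] * 80 + [4] * 19 + [3] * 1

print(usnews_score(wisconsin))  # 4.9
print(usnews_score(berkeley))   # 4.8

# Two schools count as tied only if these *rounded* scores are equal:
# raw averages of 4.86 and 4.84 are not a tie, but 4.84 and 4.76 are.
```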
The problem with using this scheme to make distinctions among top-ranked departments is that only a minority of respondents end up casting the votes that make the ultimate difference in which department is top-ranked. Wisconsin is reported as having a 4.9 average, while Berkeley's average is 4.8. Since reported averages are rounded to the tenths digit, a 4.9 means a true average somewhere between 4.85 and 4.95. If we presume that no more than two people would be so ridiculous as to give either department a 3 (so that any such votes get discarded with the two lowest scores), then every vote that counts is a 4 or a 5, and the share of 4s is just 5 minus the true average. What this implies is that somewhere between 5% and 15% of people gave Wisconsin a 4, while somewhere between 15% and 25% of people gave Berkeley a 4. The majority of respondents (somewhere between 60% and 90%) gave Wisconsin and Berkeley the same rating.
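If you want to check that arithmetic, here's a quick sketch; the only assumption is the one above, that all the surviving ratings are 4s and 5s.

```python
def implied_share_of_fours(reported_avg):
    """Range of the share of 4s consistent with an average that has been
    rounded to the tenths digit, assuming all ratings are 4s and 5s."""
    lo_avg, hi_avg = reported_avg - 0.05, reported_avg + 0.05
    return 5 - hi_avg, 5 - lo_avg  # (minimum share, maximum share)

for school, avg in [("Wisconsin", 4.9), ("Berkeley", 4.8)]:
    lo, hi = implied_share_of_fours(avg)
    print(f"{school}: {lo:.0%} to {hi:.0%} gave a 4")
# Wisconsin: 5% to 15% gave a 4
# Berkeley: 15% to 25% gave a 4
```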
It could be, then, that if you had people specifically rank the top departments, a majority of respondents would have put Berkeley ahead of Madison. Indeed, if you had respondents rank departments and then determined The Number One Department using a system like the Instant Run-Off Voting System advocated by the Greens, it's mathematically possible that any of the departments with ratings of 4.3 or above (Wisconsin, Berkeley, Michigan, Chicago, North Carolina, Princeton, Stanford, Harvard, UCLA) could be the winner, although the scenarios become increasingly implausible as you move down the list. The larger point, though, is that the relative difference between Wisconsin and Berkeley in the rankings is generated by the 10-40% who regard the difference between the two departments as enough to give one a 5 and the other a 4, and not at all by the 60-90% who thought the departments deserved the same rating on a 1-5 scale but who could still have definite opinions on which program is better.
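To make the instant-runoff point concrete, here is a toy version of the count. The ballots are entirely hypothetical; the point is just that ranked ballots aggregate different information than averaged ratings do.

```python
from collections import Counter

def instant_runoff(ballots):
    """Ballots are lists of candidates in preference order. Repeatedly
    eliminate the candidate with the fewest first-place votes until
    someone has a majority of the non-exhausted ballots."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        firsts = Counter(ballot[0] for ballot in ballots if ballot)
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > sum(firsts.values()):
            return leader
        loser = min(candidates, key=lambda c: firsts[c])
        candidates.discard(loser)
        ballots = [[c for c in ballot if c != loser] for ballot in ballots]

# Hypothetical: most respondents rated both programs 5, but when forced
# to rank them, a narrow majority puts Berkeley first.
ballots = [["Berkeley", "Wisconsin"]] * 52 + [["Wisconsin", "Berkeley"]] * 48
print(instant_runoff(ballots))  # Berkeley
```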
This isn't to say Wisconsin wouldn't be #1 under an alternative and better set of rankings--I have no way of knowing--it's just to say that Wisconsin's ranking shouldn't be interpreted as meaning something more or different than it does.
Incidentally, the US News specialty rankings are done entirely differently. Respondents are asked merely to list (but not rank) up to ten departments that they regard as distinguished in that specialty. US News counts up how many respondents list each school, and these counts provide the basis for the ranking. I think US News must do it this way because they recognize the limited knowledge chairs and DGSes must have of specialties outside their own. Anyway, this means that the specialty rankings are essentially a gauge of the overall name recognition that a school has for a particular area. Presumably, in terms of which departments are ranked first vs. second in a specialty, the rankings are entirely a measure of the % of respondents who didn't think to include a school on their list, and don't at all reflect whatever relative opinions about the two programs are held by the vast majority of respondents who included both on their lists. (Also, my understanding is that there is no equivalent of throwing out the two lowest scores for the specialty rankings, so they are more vulnerable to being gamed by respondents leaving peer departments in a specialty off their lists.)
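The tally itself is about as simple as it sounds; here is a sketch, with made-up respondent lists.

```python
from collections import Counter

# Each respondent lists up to ten departments they regard as
# distinguished in the specialty (order doesn't matter).
responses = [
    ["Wisconsin", "Berkeley", "Michigan"],
    ["Berkeley", "Wisconsin", "Chicago"],
    ["Wisconsin", "North Carolina"],
]

# Rank by raw mention counts. Note there's no trimming of extreme
# "scores" here, unlike the overall rankings, so a respondent can push
# a peer department down simply by leaving it off the list.
mentions = Counter(dept for listed in responses for dept in listed)
print(mentions.most_common())
# [('Wisconsin', 3), ('Berkeley', 2), ('Michigan', 1), ...]
```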