A statistical analysis I would like to see would compare the distribution of NFL won-loss records at the end of recent seasons with the distribution of MLB records after 16 games. Basically, I'm wondering whether the NFL's movement toward parity has made the outcomes of its individual games seem as stochastic as baseball's. That is, are 13-3 or 3-13 records no more common in the NFL than in baseball's first 16 games, so that the only reason more teams win 70% of their NFL games than win 70% of their baseball games is that baseball plays ten times as many?
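One way to make the question concrete is a toy simulation: give every team a true win probability drawn from some distribution, play 16 independent games, and see how often extreme records like 13-3 or 3-13 show up as you widen the spread of team quality. Here's a minimal sketch; the `strength_sd` values are invented knobs, not estimates fitted to either league:

```python
import random
from collections import Counter

def record_distribution(strength_sd, n_teams=32, n_games=16,
                        n_seasons=2000, seed=0):
    """Distribution of win totals over n_games when each team's true
    win probability is drawn from Normal(0.5, strength_sd), clipped.
    strength_sd is a made-up knob, not an estimate for either league."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_seasons):
        for _ in range(n_teams):
            p = min(max(rng.gauss(0.5, strength_sd), 0.05), 0.95)
            wins = sum(rng.random() < p for _ in range(n_games))
            counts[wins] += 1
    total = n_seasons * n_teams
    return {w: counts[w] / total for w in sorted(counts)}

# Hypothetical quality spreads: narrow for a high-parity league,
# wide for a league where quality differences dominate.
narrow = record_distribution(strength_sd=0.05)
wide = record_distribution(strength_sd=0.15)

def extreme(dist):
    """Share of team-seasons finishing 13-3 or better, or 3-13 or worse."""
    return sum(v for w, v in dist.items() if w >= 13 or w <= 3)
```

The empirical version of the analysis would then be a matter of comparing the real NFL and 16-game MLB record histograms against each other the same way `narrow` and `wide` are compared here.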
A flaw with this kind of analysis is that baseball teams play the first sixteen games of their season against roughly five opponents, whereas football teams play their sixteen games against, what, thirteen opponents? (Ten? How many teams do football teams play twice? [Update: a reader from Ithaca, NY tells me that it's thirteen opponents, with three played twice.]) That should make baseball appear less stochastic through 16 games than it would if those games were spread across as diverse a group of opponents as an NFL schedule provides.
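That clumping effect can be quantified with a bit of arithmetic (my formalization, not anything from the leagues): if m_j is the number of games against opponent j and opponent qualities are independent with variance sigma^2, the schedule draw contributes sigma^2 times the sum of m_j^2 to the variance of a team's expected win total. A 16-game slate against five opponents scores much higher on that factor than an NFL-style slate:

```python
# Variance contributed by the opponent draw: Var(sum m_j * d_j) = sigma^2 * sum m_j^2,
# where m_j is games played against opponent j and d_j are iid quality deviations.
mlb_like = [3, 3, 3, 3, 4]     # 16 games spread over ~5 opponents (assumed split)
nfl_like = [1] * 10 + [2] * 3  # 16 games: thirteen opponents, three played twice

def schedule_factor(games_per_opponent):
    return sum(g * g for g in games_per_opponent)

print(schedule_factor(mlb_like), schedule_factor(nfl_like))  # 52 vs 22
```

So a five-opponent slate carries more than twice the opponent-draw variance (52 sigma^2 vs. 22 sigma^2), spreading early-season baseball records out beyond what team quality alone would produce.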
Who cares? Why is this interesting? It intrigues me because baseball has a whole ethic built around the idea that you can't draw overly strong conclusions from one game, hence a 162-game regular season and best-of-7 series in the postseason, while football's ethic is built much more around the idea that merit shows itself in Big Games, so much so that it's hard even to imagine someone proposing a best-of-3 Super Bowl. Sports analysts certainly seem far more confident in their predictions for a Super Bowl than for Game 1 of a World Series. So it's something to imagine that the underlying randomness in the two games' outcomes may ultimately be about the same. I suppose one could probe the same question by comparing the most lopsided betting odds offered on a single football game versus a single baseball game.