It's an open secret in the world of college basketball that the AP poll is little more than a publicity tool.
It gets fans talking. It gives coaches a bit of recruiting-trail fodder. But as a gauge for competitive outcomes, it's not all that helpful.
The NCAA selection committee doesn't give two hoots about what the voters think, nor should it. After all, the writers have proven stunningly obtuse when it comes to poll rankings, regularly favoring teams that haven't lost recently over teams with more complete resumes.
And that's not to mention the voters' fawning affection for blueblood teams in power conferences, which can border on the absurd.
But for all the just criticism directed at the AP poll, there is a sense that it gives us some basic understanding of which teams might make deep tournament runs.
One wonders though, in a year like this one—with all sorts of instability at the top—is a preliminary exercise in team ranking at all useful? If the voters can't decide on a set of top teams, does the AP poll lose what little predictive value it has?
Before we dig into that question, we have to ask: Just how topsy-turvy has the 2012-13 campaign been compared with seasons past?
Through the first 16 weeks of polls, 22 different teams have appeared in the AP Top 10. Thirteen of those 22 have also appeared in the AP Top Five.
I went back through the last 20 years of polling data and found only two other years where more than 13 teams appeared in the Top Five over the course of an entire season: 2001-02 (14 teams) and 2003-04 (18 teams). With a bit more commotion, the 2012-13 season has a chance to be historically tumultuous.
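For what it's worth, the tally itself is easy to reproduce once you have a season's weekly poll data. Here's a minimal Python sketch; the team lists are made-up stand-ins, not actual poll results:

```python
# Count how many distinct teams appeared in the Top 10 (and Top Five)
# across a season's weekly AP polls. Each inner list is one week's
# Top 10 in rank order; these names are invented placeholders.
weekly_polls = [
    ["Duke", "Michigan", "Indiana", "Louisville", "Kansas",
     "Florida", "Syracuse", "Arizona", "Gonzaga", "Butler"],
    ["Michigan", "Duke", "Kansas", "Florida", "Indiana",
     "Gonzaga", "Miami", "Syracuse", "Louisville", "Kansas State"],
    # ... one list of ten teams per poll week ...
]

# A set collapses repeat appearances, leaving only distinct teams.
top_ten_teams = {team for poll in weekly_polls for team in poll}
top_five_teams = {team for poll in weekly_polls for team in poll[:5]}

print(len(top_ten_teams), "different teams in the Top 10")
print(len(top_five_teams), "different teams in the Top Five")
```

Feed it 16 weeks of real polls and the two set sizes are the 22 and 13 cited above.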
The 2001-02 season also holds the 20-year high for most Top 10 teams at 24. It's not hard to imagine a few more teams from the 2012-13 season sneaking into the Top 10 and eclipsing that mark (Georgetown? Oklahoma State?).
So, 2012-13 really has been as crazy as we all thought.
But does mid-February madness translate to March?
To test that question, I looked at the final poll data from the last 20 years and compared that to the quartet of teams that appeared in each year's Final Four.
My theory here is that if a season produces lots of poll instability, its final poll should be a bad gauge of tournament outcomes. In other words, a crazy season should yield a crazy Final Four.
Let's crunch the numbers.
In the chart below, I grouped the polls by the number of Top 10 teams from each year that played in the Final Four. For example, there were four years in which the Final Four was composed entirely of teams that finished in the Top 10. If you take the average number of total Top 10 teams in those four years, it comes to 20 per year.
[Table: number of Top 10 teams in the Final Four vs. average number of total Top 10 teams in those years]
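The grouping described above boils down to a few lines of code. A minimal Python sketch, using invented numbers in place of the real 20 seasons of poll data:

```python
from collections import defaultdict

# Each tuple pairs a season's count of Final Four teams that finished
# in the final AP Top 10 with that season's total number of distinct
# Top 10 teams. These values are placeholders, not the actual data.
seasons = [
    (4, 18), (4, 22), (3, 16), (3, 20), (2, 19), (1, 21),
]

# Bucket seasons by how many Final Four teams were Top 10 finishers.
groups = defaultdict(list)
for final_four_count, total_top_tens in seasons:
    groups[final_four_count].append(total_top_tens)

# Average the total Top 10 counts within each bucket.
for count in sorted(groups, reverse=True):
    avg = sum(groups[count]) / len(groups[count])
    print(f"{count} Top 10 teams in Final Four: "
          f"avg {avg:.1f} total Top 10 teams that season")
```

If poll chaos predicted tournament chaos, the averages would climb as the Final Four count drops; as the chart shows, they don't.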
Welp, not much there.
Obviously, we're working with small sample sizes. There were 11 years where three Top 10 teams reached the Final Four, four years where all four finalists were Top 10 teams, three years where only one Top 10 team made it and two years where exactly two did.
I suppose there's a small pattern if we exclude the two years where exactly two Top 10 teams made the Final Four. But even then, it isn't much.
Now, let's run the same test with Top Five teams.
[Table: number of Top Five teams in the Final Four vs. average number of total Top Five teams in those years]
And another whiff.
Maybe if we expanded the sample by another 20 years, something definitive would emerge. But for now, it's hard to see how poll stability correlates with postseason stability.
In a season as wild as this one, the AP poll is just as likely to help us predict the eventual Final Four as it is in a year where everything goes according to script.
So, has the AP college basketball poll ever meant less?
Only if it meant something to you in the first place.