In yet another controversial BCS, the voters have spoken. Yet again, the consensus #2 of the coaches and media panelists - by the slimmest of margins - won't be playing for a championship. This time the controversy has come a week earlier than usual, and a victory by Oklahoma in the Big 12 Championship Game will likely move them up to #2 in the polls. But let's get this straight up front - horrible a tiebreaker as it is, the Coaches and Harris voters named Texas Big 12 South champions. The BCS computers overrode that decision.
There will likely be some controversy over this, more so if Texas wins the Fiesta Bowl while Oklahoma gets bombed in the BCS Championship. Given OU's 0-4 BCS record post-Mike Stoops, I don't see how that could possibly happen.
Standardize Conference Championships
A fun fact is that if the Big 12 used the divisional tiebreaker rules of the ACC, CUSA, MAC, or SEC (that is to say, literally any other league broken into two divisions), Texas would be the Big 12 South champion. So not only should Longhorn fans be cursing the BCS computers, they should also be cursing the Big 12 for not following everyone else's format.
http://collegefootball.rivals.com/content.asp?CID=883480
But actually, it's not as if the other four 12-team conferences have some agreed-upon tiebreaker system. In the SEC, the lowest-ranked team is dropped from discussion and the top two are separated via head-to-head. In the ACC, the opponents' combined record is the relevant tiebreaker. In CUSA and the MAC, it's the record against the top cross-division teams.
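To make the divergence concrete, here's a quick sketch in Python. The rules are simplified paraphrases of the descriptions above, not the official tiebreaker text, and I'm assuming the "lowest team" in the SEC rule means lowest in the BCS standings:

```python
# Hypothetical sketch - simplified paraphrases of the rules above, not
# the official procedures. 2008 Big 12 South: Texas, Oklahoma, and
# Texas Tech finished in a perfect head-to-head cycle.

head_to_head = {("Texas", "Oklahoma"): "Texas",
                ("Oklahoma", "Texas Tech"): "Oklahoma",
                ("Texas Tech", "Texas"): "Texas Tech"}

# Final regular-season BCS standings of the tied teams.
bcs_rank = {"Oklahoma": 2, "Texas": 3, "Texas Tech": 7}

def h2h_winner(a, b):
    """Head-to-head winner between two teams."""
    return head_to_head.get((a, b)) or head_to_head.get((b, a))

def sec_style(tied):
    """SEC-style: drop the lowest-ranked team, settle the rest head-to-head."""
    top_two = sorted(tied, key=bcs_rank.get)[:2]
    return h2h_winner(*top_two)

def big12_style(tied):
    """Big 12-style: highest BCS standing among the tied teams wins outright."""
    return min(tied, key=bcs_rank.get)

tied = ["Texas", "Oklahoma", "Texas Tech"]
print(sec_style(tied))    # Texas
print(big12_style(tied))  # Oklahoma
```

Same three teams, same results on the field - which school plays for the title depends entirely on which rulebook you open.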
I'm not sure which of these systems is necessarily best - though you can expect a post on this later - but the fact that inconsistent tiebreaker rules are determining divisional champions is ridiculous.
Ideally, every conference would eventually either have 10 teams that play a full round-robin (like the Pac 10) or 12 teams split into divisions with a championship game (like the ACC, Big 12, and SEC). However, since this relies upon conferences adding or dropping teams, the immediate solution is simply to standardize the way winners are chosen in conferences that are laid out the same way.
Minimize the Computer Weight
For all the hypothetical pros and cons of computer polls, what has their actual effect on the BCS been?
* Put 2000 Florida State into NC game over Miami despite head-to-head.
* Put 2001 Nebraska into NC game over Oregon.
* Put 2003 Oklahoma into NC game over USC.
* Put 2004 Texas as an at-large over Cal.
* Put 2008 Oklahoma in the Big 12 Championship over Texas despite head-to-head.
All other BCS results featured the computers and voters roughly agreeing on rankings - at least close enough that it did not affect who was in the championship game or an automatic at-large selection.
In seasons when the computers might have actually done some good (2005 Oregon/Notre Dame for at-large), other BCS rules prevented them from doing so.
It's fairly safe to say that in 2000 and 2003, the BCS computers flat-out got it wrong, to the point of perhaps leaving the best team in the nation out of the championship. 2001 was also a very questionable call. Only 2004 serves as vindication for the computers over the voters. That still means messing up three national championship games and only correcting a non-championship BCS bowl in return. This season now marks the second time the computers have trumped the voters while ignoring a head-to-head result.
I'm going to backpedal a little and say that there is room for the computer polls in the BCS. But clearly their influence must be minimized so that they cannot so easily overturn the voters' judgment. The problem is that in any season when the teams are close, the margin in the voter polls is likely to be smaller than in the computer polls unless the latter are tied. With only four polls being counted for each team, a change of one spot in a single computer poll is worth 0.25 spots in the computers' average ranking - which may be greater than the entire difference in the Coaches or Harris poll. An ideal solution might be convincing the AP Poll to rejoin the BCS (which it might be agreeable to, given the changes made in 2005 plus these scoring changes) so that each voter poll counts 25% and the computer average also counts 25%. Otherwise, perhaps the Coaches and Harris polls should each count double the computer average.
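For a feel of the arithmetic, here's a sketch of the weighting math. Every poll percentage below is invented to illustrate the mechanics, not pulled from actual ballots:

```python
# Sketch of the BCS averaging math. All poll numbers here are made up
# to show the mechanics, not taken from any real 2008 ballot.

def bcs_score(harris, coaches, computers, weights=(1/3, 1/3, 1/3)):
    """Weighted average of the three component percentages."""
    return sum(w * x for w, x in zip(weights, (harris, coaches, computers)))

# Two teams nearly even, with the voters and computers split:
team_a = dict(harris=0.960, coaches=0.960, computers=0.920)  # voters' #2
team_b = dict(harris=0.940, coaches=0.940, computers=0.990)  # computers' #2

# Current equal thirds: the computer gap overturns the voters.
print(bcs_score(**team_a), bcs_score(**team_b))  # 0.9467 < 0.9567

# Alternative: each human poll counts double the computer average.
halved = (0.4, 0.4, 0.2)
print(bcs_score(**team_a, weights=halved),  # 0.9520
      bcs_score(**team_b, weights=halved))  # 0.9500 - the voters hold
```

The point isn't these particular numbers; it's that under equal thirds a modest computer gap can erase a clear human verdict, while halving the computer weight raises the bar for an override.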
Count All Six Computer Polls
A team's "computer average" is actually the truncated average of their middle 4 computer poll rankings. Not only is this producing a loss of data, but it also means that team A and team B might be graded by entirely different sets of rating systems. For example, Texas's computer score this season threw out the results of Billingsley's and Colley's. Oklahoma's ranking kept both of their scores in these two, throwing out Anderson's and one of (Massey/Sagarin/Wolfe). Texas's and Oklahoma's computer scorecards, which determined who went to the Big 12 Championship game, used only 50% of the same judges. (Allow me to clarify that in this specific example, use of all 6 polls would have produced the same result.)
Allow Margin of Victory in Computer Rankings
Some computers are designed to use MoV (Massey and Sagarin, for example); others have always been designed to work without it (Anderson and Colley). Forcing Sagarin to submit incomplete rankings based on partial data is as silly as trying to force Colley to take scores into account.
This season, removing MoV did not (that I can tell) cause any computers to flip Oklahoma and Texas. Sagarin has OU 1, Texas 2 both in his complete rankings and in the elo-chess portion. Massey has OU 1, Texas 2 in the rankings that include MoV, but OU 1, Texas Tech 2, Texas 3 in the rankings that do not. Indeed, the only computers which had Texas above Oklahoma were the ones designed from the beginning to use only win/loss - Anderson's and Colley's.
Removing MoV from the computer polls was a knee-jerk reaction by the BCS to previous mistakes. As I feared back in 2005, it has led to the computers producing less sensible results. A computer ranking Texas Tech over Texas is clearly not taking into account how extremely close their game in Lubbock was, nor how badly the Red Raiders were dominated by Oklahoma.
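To see exactly what was lost, take the round-robin among the Big 12 South's top three and the actual 2008 scores. This is a toy calculation, not any of the BCS computers' methods:

```python
# Toy illustration, not any actual BCS computer's method: inside the
# 2008 Texas/Oklahoma/Texas Tech triangle, win/loss alone is a perfect
# cycle, but scoring margin separates the three teams clearly.

games = [("Texas", 45, "Oklahoma", 35),      # Red River
         ("Texas Tech", 39, "Texas", 33),    # the close game in Lubbock
         ("Oklahoma", 65, "Texas Tech", 21)] # the blowout

wins, margin = {}, {}
for winner, ws, loser, ls in games:
    wins[winner] = wins.get(winner, 0) + 1
    margin[winner] = margin.get(winner, 0) + (ws - ls)
    margin[loser] = margin.get(loser, 0) - (ws - ls)

print(wins)    # one win each - every team went 1-1 inside the triangle
print(margin)  # Oklahoma +34, Texas +4, Texas Tech -38
```

Within the triangle, win/loss alone is a dead-even cycle - any separation has to come from games elsewhere - while scoring margin cleanly says Oklahoma, then Texas, then Texas Tech.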
Force Computers to Rank and Include All Teams
Certain computers (Colley's, for example) simply throw out games against FCS opposition and below. This is a big factor in determining strength of schedule: a team scheduling lowly FBS teams will be punished, but a team scheduling FCS teams won't be. Additionally, losing to an FCS team really needs to be taken into account in the rare cases when it happens.
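A toy example of the skew (records invented; "SOS" here is just opponents' winning percentage, far simpler than anything the actual computers do):

```python
# Invented records; "SOS" here is plain opponents' winning percentage.
# The FCS opponent is flagged by name purely for this toy example.

schedules = {
    "Team A": [("Lowly FBS U", 2, 10), ("State", 8, 4), ("Tech", 9, 3)],
    "Team B": [("FCS Directional", 3, 8), ("State", 8, 4), ("Tech", 9, 3)],
}

def sos(schedule, include_fcs=True):
    """Opponents' combined winning percentage, optionally dropping FCS teams."""
    games = [(w, l) for name, w, l in schedule
             if include_fcs or "FCS" not in name]
    wins = sum(w for w, _ in games)
    return wins / sum(w + l for w, l in games)

# Team A is stuck with its weak FBS opponent on the books (~0.528),
# while Team B's equally weak FCS opponent simply vanishes (~0.708).
print(sos(schedules["Team A"]))
print(sos(schedules["Team B"], include_fcs=False))
```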
Revisit Billingsley and Colley Rankings
There are two issues with Billingsley's.
1) It uses preseason rankings based on the previous season's final results. This means LSU started 2008 ranked #1, Kansas #2, USC #3, and so on...
http://www.cfrc.com/Ratings_2008/PS_2008.htm
All of Billingsley's results are built upon these preseason rankings. Four of the six polls had Oklahoma and Texas within one spot of each other. Of the two that didn't, one was Massey's, which is an incomplete rating (his full rating had OU 1, Texas 2); the other was Billingsley's. Could the fact that Oklahoma started out ahead of Texas in Billingsley's have made a difference? I can't say for certain. It could also be that if Oklahoma had not started out so many spots ahead of Florida, the final rankings would have had OU 3, Texas 4 - also a one-spot differential. The fact that it's even a possibility should be enough to convince the BCS that the computer polls need to start with all teams seeded equally.
2) It appears to use a single-iteration stepwise process. What this means is that the week 1 rankings are FINAL and are used to determine week 2 explicitly; then the week 2 rankings are FINAL and are used to determine week 3, and so on. So when Colorado beat West Virginia in week 4, for example, they got credit for beating the #13 team in the country, and that credit was never modified. West Virginia finished the season ranked #30.
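The difference is easy to sketch - this is purely illustrative, since I don't know Billingsley's internals beyond the behavior described above:

```python
# Illustrative only, not Billingsley's actual algorithm. A single-pass
# system books win credit at the opponent's rank on game day and never
# revisits it; recomputing with final ranks gives a different answer.

# Colorado beat West Virginia in week 4, when WVU was ranked #13.
# WVU finished the season ranked #30.
RANK_AT_KICKOFF = 13
FINAL_RANK = 30

def win_credit(opponent_rank, pool=120):
    """Toy credit function: beating a higher-ranked opponent is worth more."""
    return (pool - opponent_rank) / pool

print(win_credit(RANK_AT_KICKOFF))  # ~0.89, locked in after week 4
print(win_credit(FINAL_RANK))       # ~0.75, what hindsight would say
```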
I've poked at Colley's a lot, but truthfully Billingsley is the most questionable poll. I'd go as far as to say that if these issues are not addressed, the poll should be replaced with something else or dropped entirely.
Colley's, on the other hand, just seems to produce the most outliers. Colley has Texas #1 and has had them there all season, even when Texas Tech was still unbeaten and had just defeated Texas. Colley was also the lone pollster to rank Florida #1 at the end of the 2006 regular season, a result that looks great in retrospect but certainly cannot be justified by the results of the 2006 regular season alone. Colley's was also one of the polls dropped from both LSU's and Ohio State's averages in 2007, while giving Georgia their highest ranking.
Much as I rip on the poll, I'm not saying that for certain it needs to be changed or dropped. I am saying that it appears to show less concordance with the other polls, and should be looked at for that reason.
Sunday, November 30
Fixing the System
Posted by James at 6:20 PM
Labels: bcs, college football