Image Credit: Ohio State Athletics
Since its inception in 2014, the College Football Playoff has had one job – put the four best teams in college football in a playoff to decide a national champion. Gone are the days of the flawed BCS system.
Where the BCS used computers and equations, the CFP uses people and film study to determine its rankings. Unfortunately, there are some misconceptions about the CFP system that have left many fans unhappy with the rankings. It's important we understand the committee's decisions on where they rank teams. To truly understand this, we first need to get a better grasp of the CFP ranking process itself. And to do that, we need to delve into the metrics the committee uses and why they use them.
One of the most important metrics the committee weighs is a team's Strength of Schedule (SOS). SOS is a measurement that combines a team's opponents' records and their opponents' opponents' records into a single rating. Simply put, it shows who has played the most difficult schedule thus far in the season. At the time of writing, Alabama has the most difficult schedule of the top four teams (29th). Ohio State sits at 77th, which is the lowest of the top four.
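The committee's exact inputs are not public, but one common SOS formulation (the NCAA-style weighting) blends opponents' winning percentage with opponents' opponents' winning percentage at a 2-to-1 ratio. Here's a minimal sketch of that version, with hypothetical records:

```python
def win_pct(wins, losses):
    """Winning percentage, guarding against an empty record."""
    games = wins + losses
    return wins / games if games else 0.0

def strength_of_schedule(opp_records, opp_opp_records):
    """SOS = (2 * OW% + OOW%) / 3, where OW% is the combined winning
    percentage of a team's opponents and OOW% is that of the
    opponents' opponents."""
    ow = win_pct(sum(w for w, _ in opp_records),
                 sum(l for _, l in opp_records))
    oow = win_pct(sum(w for w, _ in opp_opp_records),
                  sum(l for _, l in opp_opp_records))
    return (2 * ow + oow) / 3

# Hypothetical (wins, losses) records, purely for illustration:
opponents = [(7, 2), (5, 4), (8, 1)]
opponents_opponents = [(6, 3), (4, 5), (7, 2), (5, 4)]
print(round(strength_of_schedule(opponents, opponents_opponents), 3))
```

The double weight on direct opponents is why a single marquee win (or a cupcake game) moves SOS more than anything an opponent's opponent does.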
If you want a perfect example of SOS and its use, look no further than the differences between Texas A&M and UGA versus BYU and Cincinnati. Both A&M and UGA already have losses under their belts, yet they are both ranked above undefeated BYU and Cincy. An outsider might wonder why that's the case. If you look at the teams' respective SOS rankings, you can see why the committee has the one-loss SEC teams over the undefeated G5 teams. UGA has the most difficult schedule of all ranked teams at third in the nation. Texas A&M has the next-toughest SOS among ranked teams at eighth. Meanwhile, Cincinnati sits at 70th in SOS, and BYU is further behind the mark at 87th.
A similar metric to SOS is a team's Strength of Record (SOR) rating. SOR is built off a simple question – "What if someone else played that same schedule?" The computers take that question and create an equation that generates a ranking. A team's SOR quantifies how difficult it would be for an average Top 25 team to have the same record with the same schedule as the team in question. The higher the rating, the more difficult it would be. This is the most important metric the committee evaluates, as it's reflected in the actual rankings.
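The real SOR model is proprietary, but the idea behind it can be sketched with a simple simulation: assign a hypothetical average Top 25 team a win probability for each game on the schedule, then estimate how often that team would match or beat the record in question. Everything below – the probabilities, the schedule, the function name – is an illustrative assumption, not the actual SOR formula:

```python
import random

def record_or_better_prob(win_probs, wins_needed, trials=100_000, seed=42):
    """Monte Carlo estimate of the chance a generic team wins at least
    `wins_needed` games against a schedule with the given per-game
    win probabilities. Smaller result = more impressive record."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        wins = sum(rng.random() < p for p in win_probs)
        if wins >= wins_needed:
            hits += 1
    return hits / trials

# Hypothetical per-game win chances for an average Top 25 team
# against a tough nine-game slate:
schedule = [0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35]
# How often would such a team go at least 8-1 on that schedule?
print(record_or_better_prob(schedule, 8))
```

The rarer the simulated record, the stronger the SOR – which is why an 8-1 run through a brutal schedule can outrank an unbeaten run through a soft one.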
Currently, Alabama has the highest-rated SOR in the nation, followed by Notre Dame, then Texas A&M. All three of these teams are in the Top 5 of the latest CFP rankings. An outlier in the Top 4 is Clemson, which currently has the tenth-highest SOR, yet sits third in the rankings.
One metric that may not be used as heavily as SOS and SOR is Game Control (GC). GC measures how dominant a team is within their schedule. If a team is skating by opponents, they'll have a low GC rating. This can lead to some real questions about a team's resume. The most significant time it was used was back in the inaugural season. FSU was coming off a national championship season in 2013 and was following it up with another run. The Seminoles were on a 28-game win streak going into the playoffs, yet they were only the third seed. With a game control rating of 26th, they had the second-lowest rating in the Top 15.
This season, GC has not had a significant role, but the best example can be seen in BYU's rank. BYU has a game control rating of sixth, higher than that of seven teams ranked above them. However, they still have yet to crack the Top 10. Alone, GC shows you how explosive a team may be. When paired with SOS and SOR, it shows a team's true caliber and ability. This leads us to the most important factor the CFP Committee utilizes.
The Human Factor
The biggest difference between the previous format (BCS) and now is the human aspect. A computer does not watch film and doesn't know who is playing in the games. A computer cannot see the weather of the games. A committee of real people can.
The committee can look at a team like Clemson and see more than a one-loss team with the tenth-rated SOR. They recognize that the Tigers were down seven starters against Notre Dame and that the Tigers control their own destiny. Computer rankings may over-penalize Clemson for that loss because they cannot account for those mitigating circumstances. The Tigers are still a favorite in the championship hunt.
One of my favorite things about the CFP rankings is their release schedule: the first rankings don't come out until midseason. Instead of letting "preseason hype" and poll inertia affect their rankings, the committee can look at the numbers and game film alone. Past seasons don't matter. With all the hype and theatrics out of the way, the committee gets the clearest look at the field.
The rankings tell us who and what the committee has seen thus far. The rankings are not predictive, but reactive – "If the playoffs started tomorrow, who would be in?" Unfortunately, that goes unsaid and is misunderstood. Another misunderstanding is the week-to-week positioning. Every time the rankings are released, they are something completely new. Just because a team was ranked #10 last week does not mean they should be there again the next week. A lot can happen to affect the new rankings, so the committee starts from scratch every week.
The longer the CFP exists, the more we will understand about the committee and its decisions. Until that time, it's important that we as fans educate ourselves on the process. I get a great deal of enjoyment from knowing what the committee sees. I do not have to watch a game of a top-ranked team to see how it can affect the next batch of rankings. Now, I can watch anyone anywhere and still get a full grasp of the CFP landscape.