The initial College Football Playoff rankings will be released Tuesday night, and I can say I’ve been in the room where those rankings will be assembled.
It’s a fourth-floor conference room in the mammoth Gaylord Texan Resort hotel outside of Dallas. The 12 members of the CFP selection committee sit at four long tables arranged in a rectangle with large flat-screen televisions planted in the middle.
I know this because about a month ago, the CFP staff invited 12 media members to go through a mock selection to get a better understanding of how the real committee does its work. The group included respected writers and broadcasters from around the country, and also me. With one semifinal being played at the Georgia Dome on New Year’s Eve, it was an immersive way to learn more about how the teams that will play for the national championship (including the two bound for Atlanta) will be selected.
I’ll say upfront that I’m not a conspiracy theorist by nature. I don’t believe, for example, that Charles Woodson won the 1997 Heisman Trophy over Peyton Manning because ESPN pushed for it (full disclosure: I went to Michigan, and also my wife loves Manning’s commercials). I tend to doubt Ray Lewis’ theory that the NFL caused the Super Bowl blackout in 2013.
I also find merit in the way the playoff field is decided: by a 12-person committee made up largely of former coaches and college athletics administrators. But there’s always distrust and dissent over just about any change to policy or rules in college football, and perhaps reasonably so. That was part of the reason the CFP invited media to take part in the mock selection exercise, plying us with a Tex-Mex dinner (which, admittedly, was really good) and granting access to the very room where the playoff field and bowl matchups will be set.
Our mission
We were given the challenge of retroactively ranking the top 25 and deciding which four teams would have been picked to play in a hypothetical playoff in 2010, which was the year that Auburn and Cam Newton beat Oregon for the national title. You may remember that TCU also finished the regular season undefeated, while Ohio State, Stanford and Wisconsin were all one-loss teams before the bowl season. By dint of record and statistical data, Auburn and Oregon didn’t take much work. The next two slots, to say nothing of the remaining 21 in the top 25, were much more challenging.
As we went through the exercise, guided by CFP staff and given ample time to ask questions about the process, I gained a better understanding of how the committee does its work and why the rankings work the way they do. For instance, in 2014, the playoff’s first year, Florida State dropped from No. 2 to No. 4 during the regular season (returning to No. 3 after the conference championships) even though the defending national champion was the only undefeated team at the end of the regular season.
While some corners of fandom (particularly the corners colored in garnet and gold) raised a considerable hue and cry on the grounds that “they’re the champs until proven otherwise,” I learned that argument doesn’t carry weight with the committee. One principle guiding the discussion is that there is no carryover from the previous season; committee members rank only the top 25 teams of that season in that particular week.
CFP executive director Bill Hancock shared his recollections of former committee chair Jeff Long waving off comments when committee members brought up a game from a previous season or even mentioned an upcoming game.
“Every year is a new year,” said Kirby Hocutt, Texas Tech’s athletic director and the committee chair, who sat in on our panel and provided direction.
Guiding principles
Further, and perhaps more to the point in this case, there are specific principles by which committee members are to rank teams. Strength of schedule, conference championships, head-to-head results, comparative outcomes against common opponents and other factors such as injuries are among the criteria for distinguishing between comparable teams.
To go back to 2014, when comparing undefeated FSU against the likes of one-loss Alabama, Oregon and TCU, I could see how people who had watched most or all of those teams’ games and pored over the data could conclude that the Seminoles were, by narrow margins, not quite their equal.
Further, teams can move up and down week to week because the CFP rankings are not static or locked in (you win, you stay) the way the polls often are. The rankings start over fresh each week. The committee first ranks the top three teams, then Nos. 4-6, 7-9, 10-13 and so on. By private ballot, cast on laptops, members first vote to select a pool of teams to consider for each three- or four-team slice of the ranking, then discuss and debate.
After discussion, the members vote to rank each three- or four-team set, with room for more debate if necessary before moving on. For example, as we ranked the 2010 teams, we reconsidered spots 3-7 even after we had finished our top 25, which caused the Nos. 6 and 7 teams (Stanford and Boise State) to swap. At another point, Florida State and South Carolina were tied. Using a platform designed by Atlanta-based SportSource Analytics, we could study the teams across dozens of statistical categories. One that jumped out in the FSU-South Carolina comparison was common opponents.
Both had played Clemson and Florida within a two-week span. Both beat Florida easily, but South Carolina also easily handled Clemson, while FSU escaped by a field goal. Upon further review, we voted the Gamecocks ahead of the Seminoles. (You may remember that the teams later met in the Chick-fil-A Bowl, won by FSU.)
(Worth noting: the actual committee members watch 15 to 20 games each week and have far more knowledge of this season’s teams than we did of the top 25 teams from 2010, plus more time to debate them. So two common-opponent scores probably would not have had the same impact on them that they did on us.)
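For those who prefer to see the process in more concrete terms, here is a minimal sketch, in code, of that slice-by-slice routine as I understood it. Everything specific in it, the nomination rule, the points tally, the slice sizes beyond 10-13, is my own stand-in for illustration, not the committee’s actual rules or software.

```python
from collections import Counter

# Illustrative sketch only: `member_preferences` stands in for each member's
# judgment as a full ordering of every team under consideration. The nomination
# rule, the points tally and the slice sizes past 10-13 are assumptions.
def build_top_25(member_preferences, slice_sizes=(3, 3, 3, 4, 4, 4, 4)):
    rankings = []
    ranked_so_far = set()

    for slots in slice_sizes:
        # Step 1: by private ballot, each member nominates a handful of
        # still-unranked teams to consider for this slice of the ranking.
        nominations = Counter()
        for prefs in member_preferences:
            unranked = [t for t in prefs if t not in ranked_so_far]
            for team in unranked[:slots + 2]:
                nominations[team] += 1
        pool = [team for team, _ in nominations.most_common(slots + 3)]

        # Step 2: after discussion and debate, members rank the pool; a simple
        # points tally (higher placement on a ballot = more points) sets the slice.
        points = Counter()
        for prefs in member_preferences:
            ballot = [t for t in prefs if t in pool]
            for position, team in enumerate(ballot):
                points[team] += len(ballot) - position
        slice_order = sorted(pool, key=lambda t: -points[t])[:slots]

        rankings.extend(slice_order)
        ranked_so_far.update(slice_order)

    return rankings[:25]
```

The details of the tally matter far less than the structure: every slice gets its own nominations, its own debate and its own vote, and the whole thing starts over the following week.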
Fluid system
This method helps ensure that the rankings are well thought out. Whoever is No. 1 this week, for instance, won’t remain No. 1 next week just because it won. Another weekend of games will provide another chance to learn more about that team and others comparable to it, both by watching the teams play and by comparing statistical data. Each decision, 1 vs. 2, 2 vs. 3, 3 vs. 4 and so on, gets thorough discussion from well-informed and impartial committee members.
So back to those who contend there’s a conspiracy on behalf of Team X or Conference Y or a certain four-lettered cable network.
In this environment of lengthy discussion, it would be difficult, maybe impossible, to carry an agenda for a team or conference (or network) through weeks of rankings. Given that 21 vs. 22 gets as much scrutiny as 1 vs. 2, a bias would be hard to hide, and harder to push very far, in a room of invested and informed individuals. Further, members are recused when teams with which they have affiliations are discussed. And to bring home the point, just outside the conference room there are hats hung on a rack, each with a member’s name stitched on it, a reminder to “leave your hat at the door.”
The same week I went through my mock ranking, Peach Bowl CEO Gary Stokan went through the same exercise with conference and bowl officials.
“It was very comprehensive, very inclusive and very fair,” he said.
I’d have to agree.
On the other hand, those hats at the door were made by Nike…