How TeamRankings Makes College Basketball Preseason Rankings
October 24, 2021 – by David Hess
John Calipari knows that past program success matters (Photo by Scott Winters/Icon Sportswire)
This post describes our methodology and process for creating college basketball preseason rankings for all 358 teams competing in Division I men’s basketball this season.
As one would expect from TeamRankings, our college basketball preseason rankings are driven almost entirely by stats and modeling, rather than more qualitative approaches like film study or reviewing media scouting reports.
Before we dive into the details of our approach, let’s cover a few basics.
What Our College Basketball Preseason Rankings Represent
First, it’s important to know that our preseason rankings are simply the rank order of the preseason predictive ratings that we generate for every Division I college basketball team.
So to create our preseason rankings, the first thing we do is calculate preseason ratings for every team.
Predictive Rating Definition
In simple terms, a team’s predictive rating is a number that represents the margin of victory we expect when that team plays a “perfectly average” Division I team on a neutral court.
This rating can be a positive or negative number; the higher the rating, the better the team. A rating of 0.0 indicates a perfectly average team.
How Ratings Translate To Predictions
Because our predictive rating is measured in points, the difference in rating between any two teams indicates the projected winner and margin of victory in a neutral-site game between them.
For example, our system would expect Gonzaga, which has a 2022 preseason rating of +22.0, to beat an average Division I team (with a 0.0 rating) by about 22 points on a neutral court.
It would expect Gonzaga to beat Delaware State, which has a -16.7 rating, by about 39 points. And Delaware State would be expected to lose to an average team by about 17 points.
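To make the arithmetic concrete, here’s a minimal sketch in Python using the ratings cited above. The function name and dictionary are ours, purely for illustration:

```python
# Minimal sketch: projecting a neutral-court margin from two predictive ratings.
# Ratings are points above/below a "perfectly average" Division I team.
ratings = {
    "Gonzaga": 22.0,
    "Texas": 15.8,
    "Kansas": 15.4,
    "Delaware State": -16.7,
    "Average D-I team": 0.0,
}

def projected_margin(team_a: str, team_b: str) -> float:
    """Expected margin of victory for team_a over team_b on a neutral court."""
    return ratings[team_a] - ratings[team_b]

print(projected_margin("Gonzaga", "Average D-I team"))  # ~22 points
print(projected_margin("Gonzaga", "Delaware State"))    # ~39 points
print(projected_margin("Texas", "Kansas"))              # 0.4 -> essentially a toss-up
```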
Ratings Are More Precise Than Rankings
Understanding the nature of predictive ratings is critical, because they are a more precise metric than a simple ranking.
For example, Kansas fans may not like that Texas is ranked ahead of them in our 2022 preseason rankings. But the two are only separated by 0.4 points, +15.8 for Texas and +15.4 for Kansas.
So yes, if you put a gun to our head and forced us to rank order every team, we’d say Texas is going to be better than Kansas this season. But the difference is so small that it’s practically meaningless. Based on our preseason ratings, Texas vs. Kansas projects as a toss-up game on a neutral court.
So don’t place too much stock in a team’s ranking. Ratings tell the more refined story.
When and Why We Make College Basketball Preseason Ratings
Once the college basketball season starts, our predictive ratings go on autopilot. Every morning, our system automatically adjusts team ratings (and the resulting rankings) based on the game results from the day before.
Teams that win by more than our ratings had predicted see their ratings increase. Teams that suffer worse than expected losses see their ratings drop. Software code controls all of the adjustments and no manual intervention is required.
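To illustrate the idea only (this post doesn’t spell out our actual update rule), a simplified prediction-error adjustment might look like the sketch below, with the home-court edge and the learning rate k as hypothetical constants:

```python
# Illustrative sketch only -- not TeamRankings' actual in-season update rule.
# The idea: nudge each team's rating toward what the game result implies.
def update_ratings(ratings, home, away, home_margin, home_court_edge=3.0, k=0.05):
    """Adjust two teams' ratings after a game, based on prediction error.

    home_margin: actual home-team margin of victory (negative if the home team lost).
    home_court_edge, k: hypothetical constants chosen for illustration.
    """
    predicted = ratings[home] - ratings[away] + home_court_edge
    error = home_margin - predicted   # positive if the home team outperformed the prediction
    ratings[home] += k * error        # outperforming the prediction raises the rating
    ratings[away] -= k * error        # the opponent's rating drops by the same amount
    return ratings
```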
Generating preseason ratings, however, involves a more labor-intensive process that we go through before every new season starts. What we are trying to do, in basic terms, is to pre-calibrate our predictive ratings system. We want to give it a smarter starting point than simply having every team start out with a 0.0 rating.
Put another way, our preseason ratings are our first prediction of what we think every Division I men’s college basketball team’s predictive rating will be at the end of the upcoming season. And we need to make that prediction before any regular season games are actually played.
Although generating preseason ratings is a substantial challenge from a data perspective, our approach is still mostly data-driven and objective. However, it does involve some judgment calls, which we’ll explain below.
Why We Make Preseason Ratings
Before we get into the details, a brief history may help explain how and why our current preseason ratings process evolved:
- In the way old days (early 2000s), every team would start the season with a 0.0 rating, and we’d put a note on the site not to trust our ratings until late December. Before then, with such a tiny sample size of games, big surprises or lopsided results could produce some really funky ratings.
- In the semi old days (mid to late 2000s), we started having each team begin the season with its end-of-season rating from the prior year. The impact of the prior year rating would gradually decay to zero, and by midseason we’d only consider current season results. Better, but still not the best.
- Starting in 2011, we implemented the framework we use today. We looked at years of historical data and built a customized model to generate preseason ratings for college basketball. This approach is completely divorced from our automated in-season ratings updates.
Why we took that final step is simple. Generating preseason team ratings using a customized model significantly improved the in-season game predictions made by our ratings — and not only in early season games, where one would logically expect to see the biggest improvement.
In fact, still giving the preseason ratings some weight at the very end of the season even improved our NCAA tournament prediction performance.
Objective Performance Measurement Shows The Value
The payoff of this approach has been clear. For example, according to college basketball ratings analysis by Mark Moog, using data from the Massey College Basketball Rankings Composite, our rankings (“TRP” in Mark’s chart) have finished in first place for full-season predictive accuracy out of all systems tracked for the past two seasons running.
The group of systems tracked by Mark includes many other leading data-driven prognosticators such as Ken Pomeroy, Bart Torvik, and Jeff Sagarin.
When We Make Preseason Ratings
During every college basketball offseason, we first put in work to improve our preseason ratings methodology. We investigate new potential data sources, and refit our preseason ratings model using an additional year of data.
After implementing any refinements to our process and model, we then gather the necessary data from various sources, and generate our preseason ratings for the upcoming season. We typically complete the process a week or so before the regular season starts.
How We Make College Basketball Preseason Ratings
Now let’s get to the meat. By analyzing years of historical college basketball data — our current training data set includes team profiles going back to the 2007-08 season — we’ve identified a short list of descriptive factors that have correlated strongly with end-of-season power ratings.
We use a two-stage regression model to determine each factor’s weight in our preseason ratings:
- The first stage uses predictive ratings from the past few years, player stats from the most recent season, and recruiting info to make an initial rating for a team.
- The second stage adds transfer info into the mix. We found that our model performs better when transfer value is a function of the initial predicted team rating. Basically, the better a team is expected to be, the less additional bonus they can get from incoming transfers. (In case you were wondering, we found that treating incoming freshman recruits this way did not improve the model. Top recruits seem to improve already-loaded teams more than top transfers do.)
Using a regression model helps ensure that the relative importance of each factor in our ratings is based on its demonstrated level of predictive power, rather than arbitrary weights that just “feel right” to us.
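As a rough illustration of that two-stage structure (not our production model; the feature names, synthetic data, and plain least-squares fit below are stand-ins), a sketch in Python might look like this:

```python
# Hedged sketch of a two-stage regression in the spirit described above.
# All inputs here are synthetic stand-ins; in practice they come from historical data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_teams = 358

last_year = rng.normal(0, 10, n_teams)      # prior end-of-season rating
program = rng.normal(0, 8, n_teams)         # multi-year program strength
returning_off = rng.normal(0, 2, n_teams)   # returning offense vs. baseline
returning_def = rng.normal(0, 2, n_teams)   # returning defense vs. baseline
recruits = rng.normal(0, 2, n_teams)        # recruiting class value
transfers = rng.normal(0, 2, n_teams)       # incoming transfer value
target = last_year + 0.3 * program + rng.normal(0, 4, n_teams)  # fake "next season" ratings

# Stage 1: ratings history, returning-player stats, and recruiting -> initial rating.
X1 = np.column_stack([last_year, program, returning_off, returning_def, recruits])
stage1 = LinearRegression().fit(X1, target)
initial = stage1.predict(X1)

# Stage 2: fold in transfers, interacted with the initial prediction so the
# same transfer class is worth less to a team already projected to be good.
X2 = np.column_stack([initial, transfers, transfers * initial])
stage2 = LinearRegression().fit(X2, target)
preseason_rating = stage2.predict(X2)
```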
Finally, we group the impact of some variables into single components to help us interpret and talk about the model. Here are the components, which we’ll discuss in more detail below:
- LAST YEAR: How good a team was last season
- PROGRAM: Recent historical performance, excluding last season
- RETURNING OFFENSE: Returning offensive production, compared to typical
- RETURNING DEFENSE: Returning defensive production, compared to typical
- RECRUIT: Value of incoming freshman recruiting class
- TRANSFER: Value of incoming Division I transfers (JUCO transfers ignored)
- COACH: Recent coaching changes expected to have positive or negative impact
LAST YEAR
How good a team was in the most recent season — as measured by end-of-season predictive rating and not win-loss record — is the single best objective measure of how good that team will be in the upcoming season.
The year-to-year correlation coefficient for our predictive rating is +0.84. That’s very strong. The correlation of our preseason predicted ratings to end of season ratings is +0.90, so using last year’s rating gets us most of the way there.
In non-stat geek terms: Duke is not going to turn into Florida A&M overnight. Even “terrible” years for elite programs are good seasons in the overall college basketball landscape.
That said, other factors do contribute meaningfully to the final preseason ratings.
PROGRAM
This factor measures how good a team has been in recent history, not including the previous season.
College basketball programs aren’t forged anew from the molten earth each season. They are continuations of the past. What happened 2, 3 or 4 years ago is relevant to this season for a number of reasons.
Some of the players are still around. Oftentimes the coaching staff is largely the same. The facilities usually don’t change much, and neither does the fan support. Geographic advantages and disadvantages don’t change. Looking at longer-term performance trends measures the “brand value” of a program, so to speak.
We think most fans intuitively understand the importance of program history. If all you know about two teams is:
- Both finished in last year’s AP top 10
- Team A hadn’t finished in the top 25 in the previous 3 seasons
- Team B has finished in the top 10 four years in a row
Which team do you think is likely to be better this year? (We’re going with Team B, in case it wasn’t clear.)
This is borne out by the numbers. The correlation between final predictive ratings in a given year and those from two seasons earlier is +0.76. (Remember, the correlation with the immediately previous season is +0.84.) The correlation with ratings from three seasons earlier is still +0.72, and from four seasons earlier it is +0.70.
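For readers who want to see what these lagged correlations mean mechanically, here is a small sketch (our own illustration; ratings_by_year is a hypothetical mapping from season to each team’s end-of-season rating):

```python
# Sketch: measuring year-to-year stability of end-of-season predictive ratings.
# ratings_by_year[year] is assumed to map each team name to its end-of-season rating.
import numpy as np

def year_to_year_corr(ratings_by_year, year, lag=1):
    """Correlation between teams' ratings in `year` and in `year - lag`."""
    prev, curr = ratings_by_year[year - lag], ratings_by_year[year]
    teams = sorted(set(prev) & set(curr))  # teams rated in both seasons
    return np.corrcoef([prev[t] for t in teams], [curr[t] for t in teams])[0, 1]
```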
RETURNING OFFENSE
The returning offense component tells us how much additional improvement or decline we can expect based on the total offensive production (which we’ll explain shortly) of a team’s returning players, compared to a baseline expectation for a team of that quality.
The “additional” and “for a team of that quality” parts of that definition are important! A lot of the value of the returning players is already accounted for by the LAST YEAR component. In a way, you can think of that component as assuming that every team is returning an exactly average amount of their production from the previous season (so, about 50-55%).
If a team is returning less offensive production than that, it’s going to get docked some in the RETURNING OFFENSE component, even though the returning players might be very good. For example, Texas Tech in 2019 returned only 29% of its offensive production, so it had a negative RETURNING OFFENSE value. Alcorn State returned 76% of its production, so it had a positive RETURNING OFFENSE value. The returning players on Texas Tech were probably better than those on Alcorn State! But as a group their production was less than the “expected” returning value for a team as good as Texas Tech. Meanwhile, the returning Alcorn State players produced more than you’d typically expect for a team of Alcorn State’s quality.
In addition to simply looking at the percent of returning production, we make two additional small adjustments:
- We penalize losing high draft picks. Those players tend to be more valuable to their team than the raw statistics show, and losing them is a bigger hit.
- We give bonuses to teams returning a lot of offensive production from freshmen, as those players tend to improve more than older players.
Again, we’re not doing these on a whim. These adjustments improve the accuracy of the model.
So, what do we mean by “offensive production”?
We calculate a player’s offensive production in 4 steps:
1. Calculate a player’s offensive rating, as defined by Dean Oliver in his book Basketball On Paper.
2. Find the difference between that value and a “replacement-level” baseline that’s roughly equal to the offensive efficiency of the worst Division I teams, to get a player’s marginal efficiency per player possession used.
3. Multiply that by a player’s usage rate to find their marginal value per team possession.
4. Multiply that by the percent of minutes a player played to get their total value for the season.
We sum the value for all players in order to find the total team offensive production. We can then look at the value of only the returning players to find the percent of returning production.
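Here is a sketch of that four-step calculation and the returning-production percentage. The replacement-level constant and the roster numbers below are hypothetical stand-ins, and each player’s offensive rating (step 1) is assumed to be precomputed:

```python
# Sketch of the four-step "offensive production" calculation described above.
# The replacement-level baseline and example roster are hypothetical stand-ins.
REPLACEMENT_ORTG = 88.0  # rough offensive efficiency of the worst D-I teams (hypothetical)

def offensive_production(ortg: float, usage_pct: float, minutes_pct: float) -> float:
    """Season-long offensive value in marginal points per 100 team possessions.

    ortg:        Dean Oliver offensive rating (step 1, assumed precomputed)
    usage_pct:   share of team possessions the player used while on the floor (0-1)
    minutes_pct: share of team minutes the player played (0-1)
    """
    marginal_per_poss_used = ortg - REPLACEMENT_ORTG           # step 2
    marginal_per_team_poss = marginal_per_poss_used * usage_pct  # step 3
    return marginal_per_team_poss * minutes_pct                 # step 4

# Team totals and the returning share:
roster = [
    # (player, ortg, usage, minutes, returning?)
    ("A", 115.0, 0.28, 0.85, False),
    ("B", 108.0, 0.22, 0.75, True),
    ("C", 101.0, 0.18, 0.60, True),
]
total = sum(offensive_production(o, u, m) for _, o, u, m, _ in roster)
returning = sum(offensive_production(o, u, m) for _, o, u, m, r in roster if r)
pct_returning = returning / total
```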
RETURNING DEFENSE
The returning defense component is very similar to the returning offense one. Like the offense, it’s the amount of additional improvement or decline expected based on the amount of returning defensive production, compared to a baseline for a team of that quality.
We calculate “defensive production” for each player based on the Dean Oliver definition of defensive rating, similar to the way we calculate “offensive production.” We then sum the production of all players, and calculate the percent returning.
And, again like returning offense, we make some additional adjustments beyond simply looking at the percent of returning defensive production:
- The amount of credit for the percentage of returning defense depends on how good a team was the past season. For offense, returning a lot of production on a bad team is still a good sign. With defense, that’s less true, and the bonuses for returning a lot of players on a bad defense can be small or even negative.
- Returning a very low amount of defensive production results in an additional penalty. Basically, the data seems to show that starting over from scratch on offense is easier than doing the same thing on defense.
RECRUIT
The recruiting component represents the projected value of the last two recruiting classes. Most of the value (about 75%) comes from this season’s entering class, but there is still a bit of value in having a good class the previous year. Presumably this is because those highly-ranked players are likely to improve more this season than other non-elite recruits are.
In order to make our recruiting class rankings, we use RSCI consensus recruiting data. Based on their average rank across the various recruiting sites, each player is assigned a score that represents their expected value to a team. These scores are based on analysis of past data, mapping recruiting rankings to team rating improvements.
We then sum the value of all recruits to get a team’s overall class recruiting value.
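As a sketch of that summation (the rank-to-value curve below is a made-up placeholder, not our fitted mapping from recruiting rankings to team rating improvements):

```python
# Sketch of summing a recruiting class's value from RSCI-style consensus ranks.
# The rank-to-points mapping is a hypothetical placeholder for illustration only.
def recruit_value(consensus_rank: int) -> float:
    """Hypothetical expected value of a recruit, decaying with consensus rank."""
    return max(0.0, 3.0 * 0.97 ** (consensus_rank - 1))

def class_value(consensus_ranks: list[int]) -> float:
    return sum(recruit_value(r) for r in consensus_ranks)

print(class_value([2, 11, 35, 88]))  # e.g., a class with four ranked recruits
```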
TRANSFER
Transfer value is calculated very similarly to returning player value. We calculate offensive and defensive production, and total those up to get the overall value for a player.
However, there’s a wrinkle here. In addition to the value calculation using a replacement-level baseline, we also calculate an overall production value using a higher baseline closer to the Division I average efficiency. This results in a second — and lower — player overall production value.
We blend those two values based on the initial predicted rating of the player’s new team from the “first stage” regression mentioned above. The better the team is the more weight we give to the second value.
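A hedged sketch of that blend is below; the weighting function is a guess for illustration, not the fitted relationship:

```python
# Sketch of the transfer-value blend: two production numbers (replacement-level
# baseline vs. a higher, near-average baseline), weighted by how good the new
# team is projected to be. The weight mapping here is illustrative only.
def blended_transfer_value(value_vs_replacement: float,
                           value_vs_average: float,
                           initial_team_rating: float) -> float:
    # Map the stage-one team rating to a weight in [0, 1]; better teams lean
    # more on the stricter (lower) near-average baseline. The scaling is a guess.
    weight_on_average = min(1.0, max(0.0, (initial_team_rating + 10.0) / 30.0))
    return (1 - weight_on_average) * value_vs_replacement + weight_on_average * value_vs_average
```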
In effect, this means that the same player has more value when transferring to a bad team than when going to a good one. This makes some sense. First, the worse team will likely have more minutes available for him. Second, worse teams tend to be in worse conferences, and play worse schedules, so the player is more likely to be facing easier competition, which ought to be better for his production.
It also means that the same player has more value when returning to a good team than when transferring to a different good team. We’re OK with that — players transfer for a reason, and this could reflect that transferring players tend to have hidden issues that aren’t evident from the efficiency stats. Or, it could simply reflect that it takes some time to learn a new system and fit into a new team, and there is some chance the “fit” won’t be as good as before.
COACH
The coaching component is less rigorous than the others. In fact, it’s a manual adjustment much like the market adjustment that we’ll discuss below.
For teams with new coaches, we review the coaching history for both the old and new coach. This includes inspecting how each school performed (in terms of final season ratings, win-loss record, and NCAA tournament seeding and results) before, during, and after the coach’s tenure there.
When the new coach appears to be better or worse than the old coach, based on their past coaching resume, we make an adjustment.
Step 2: Review & Refine The Initial Results
After our model generates its data-driven preseason ratings for college basketball, we then compare those ratings (and the resulting team rankings) to the betting markets and human polls.
If our assessment of a specific team seems way out of whack in comparison to those benchmarks, we’ll investigate more. Primarily, we’re looking to identify some factor not taken into account by our model (e.g. an injury in the previous season, or a coaching change 2 or 3 seasons ago) that is likely to impact the expected performance level of a team.
In some of those cases, we end up adjusting our rating to be closer to the consensus. As a result, this final part of the process does inject some subjective judgment calls into our process.
Why Adjust College Basketball Ratings Manually?
We’re data guys, so it typically takes a lot of convincing for us to incorporate some level of subjectivity into our predictions.
There’s a very high statistical bar to reach in order to anoint a particular stat as generally predictive of future performance. Consequently, very few stats pass the test.
That’s a good thing. One of the biggest challenges of predictive modeling is separating the signal from the noise, and “false positives” based on small sample sizes can ruin the future accuracy of a model.
At the same time, lots of different factors are still likely to impact the future performance of a particular team in some significant way. But until we have a large enough sample size of similar events to analyze, it would be very risky to incorporate them into our model.
Especially in more outlier-type cases, our best solution for the foreseeable future may be to make manual adjustments to incorporate the opinion of the betting markets.
Of course, now that we’ve been making these market adjustments for several years, we’ve evaluated them, and … they do improve our overall accuracy. So we’ll continue to use them.
Conclusion
There are many different ways to make college basketball preseason rankings. The approaches can vary greatly, from media power rankings to “expert” analysis, from building complex statistical models to making inferences from futures odds in the betting markets.
And speaking frankly, there’s plenty of crap out there. At the same time, there’s also no Holy Grail.
Within ten seconds of looking over our preseason college basketball rankings, you’ll probably find several rankings you disagree with, or that differ from what most other “experts” or ranking systems think. That’s to be expected.
When the dust settles at the end of the season, our college basketball preseason ratings, and the various projections we generate using them, will almost certainly be significantly off for at least several teams. As happens every year, some teams simply defy expectations thanks to surprise breakout performances, while other teams are impacted by injuries, suspensions and other unanticipated events.
Nonetheless, the primary goal of our preseason analysis is to provide a baseline rating for each team (or “prior” in statistical terms) that makes our system better overall at predicting game results. To stress, we’re most concerned about the overall accuracy of the system — that is, how good it is at predicting where every college basketball team’s predictive rating will end up at the end of the upcoming season.
For that purpose, we’ve settled on an almost entirely data-driven (but still subjectively adjusted in a handful of cases) approach to preseason team ratings. And so far, this approach has delivered very good results.
If you liked this post, please share it. Thank you!