
Monday, December 19, 2016

How rankings are determined

Just a few notes on where the rankings come from, as we prepare for the first rankings of 2016-17.

First and foremost, the rankings are only as good as the data available.  Some conferences (Shore South, Big North, GMC) have great resources for finding results.  For many, I rely on what’s reported to nj.com, which is generally accurate but always incomplete.  Some conferences report very little to nothing.  It’ll get better (especially in Shore North-B, where I wait for the standings sheet to be posted in late December).  As of December 19, I have at least some average data on 224 teams out of about 240 (boys) and 140 of about 160 (girls), and I think I have all the genuine contenders covered.

The main sorting tool is what I refer to as “game average”.  It reduces all games bowled in competition, dual matches or tournaments, to the average game per bowler.  It puts 4-bowler conferences on equal footing, and gives us a number that means something to any bowling follower. As of December 19, there are six boys’ teams with game averages over 200, 28 at 190 or better, and 65 over 180.  These numbers are significant, useful, and predictive.
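For anyone who wants the arithmetic spelled out, here's a minimal sketch of how a game average falls out of raw results.  The function name and data layout are my own illustration, not any official formula; the real inputs are whatever match and tournament results get reported.

```python
# Minimal sketch: "game average" = total pinfall / total individual games bowled.
# The data layout here (one pinfall total plus a game count per event) is
# illustrative only.

def game_average(events):
    """events: list of (total_pinfall, individual_games_bowled) tuples."""
    total_pins = sum(pins for pins, games in events)
    total_games = sum(games for pins, games in events)
    return total_pins / total_games

# Example: a 5-bowler dual match (15 individual games) plus a tournament
# where a 6-bowler squad bowls 3 games each (18 individual games).
season = [(2850, 15), (3350, 18)]
print(round(game_average(season), 1))  # 187.9
```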

But not perfect.  We start with game average, but adjust for a few things.  Differences in conditions are subtle and hard to quantify, but it’s unavoidable that some houses score bigger than others.  I wish I had more data on this, but it’s something to keep in mind.  More explicitly, the rules differ across the state.  It’s easier to post a big score when you bowl six and count five (GMC, Skyland) than with a straight 5-bowler lineup.  Four-bowler teams are even trickier.  We judge teams on their strength as a five-bowler squad (because that’s how states is decided), so we have to account for the “missing” score.  Teams that have bowled in tournaments, or that have used at least five different bowlers during the season, are easier to judge.  Unless there is obviously available strength at that fifth spot, we should adjust the GA of these teams down a little; a toy example of these format differences follows.
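To make the format gap concrete, here's a toy comparison with invented scores.  The drop-one math follows the six-count-five rule described above; the penciled-in fifth game for a 4-bowler team is purely my own placeholder for the kind of downward adjustment involved, not a fixed number I actually use.

```python
# Toy example of the scoring-format gap.  All scores invented.
one_game = [212, 198, 187, 176, 158, 141]   # six bowlers, one game each

# Six-count-five (GMC, Skyland): the low game is dropped.
six_count_five = sum(sorted(one_game)[1:])      # 931

# A straight five-bowler lineup keeps whatever its five shoot, bad game included.
straight_five = sum([212, 198, 187, 176, 141])  # 914

# A four-bowler team judged as a five-bowler squad: pencil in the "missing"
# fifth game.  The 165 here is purely an assumption about unproven depth.
four_bowler = [205, 192, 184, 171]
adjusted_five = sum(four_bowler) + 165          # 917
```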

Tournament results are our best window into comparative strength.  Yes, anybody can have a bad day or be missing a key player, but a chance to see direct competition under the same conditions is very valuable.

Team records, by themselves, are not especially useful.  The difference in quality from conference to conference is self-evidently massive.  An 8-7 team can sit in the top ten while a 35-0 team is outside the top 25, and that's completely normal and justified.  Head-to-head results, however, are useful when comparing teams with similar overall stats.

Each week, we’ll rank the top 20 boys & girls teams.  After that will be 5 that “just missed”, listed by game average.  Also, there will be 5 teams to “keep an eye on”.  These are not necessarily the next 5; they may be teams with one outlier great result that might be a sign of things to come, or a team with a couple of special bowlers that could be dangerous if they improve down-lineup.  Early in the season it will often be a team with good stats but very few data points.  It’s usually unfair to rank a team off only one good performance.

As always, if there’s something I’ve missed or a result I haven’t considered, please let me know, and share with whoever may be interested.

updated January 9, 2017 to include the following:

Team performances can vary from week to week and match to match, for many reasons.  Performance levels will always fluctuate, of course, and there are injuries, illnesses, and bowlers unavailable for other reasons.  That will affect game average, and it's fair for a team's actual performance to be reflected in the actual numbers.

But there are two more problems with using game average as the basic ratings tool, in addition to those listed above (6/5 vs 5 vs 4, for example).  The first problem concerns depth and coaching philosophy.  A few teams run the same 5 (or 4, or 6) bowlers out there for every single match.  Some sub only when forced to by circumstance, or when the match is a complete blowout.  Other coaches sub more liberally, either because the dropoff isn't huge, or several bowlers are in the same range and they're playing the hot hand, or because they have a system whereby all bowlers get some varsity experience.  To be clear, I think that's great and very good for the sport overall.  But it's hell on the stats.  A coach subbing out a 180 bowler to bring in a 150 bowler for a few games over the course of the season doesn't make much of a dent, but there are plenty of examples where the effect is more extreme and the stats notice.  This is unfortunate, because winning 2500 to 1700 instead of 2900 to 1700 has no real-world effect and doesn't change the actual strength of the team at all.
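To put a rough number on it, here's a back-of-the-envelope sketch (every figure invented) of how liberal subbing dents a game average even though the team's true strength hasn't changed:

```python
# Back-of-the-envelope: how subbing dents game average.  All numbers invented.
# Suppose a team's best five average 190, so its "true strength" GA is 190.
# Over a 15-match season (3 games x 5 bowlers per match = 225 individual
# games), the coach hands 30 of those games to bench bowlers averaging 155.

TRUE_GA, BENCH_AVG = 190, 155
total_games, bench_games = 225, 30

season_ga = (TRUE_GA * (total_games - bench_games)
             + BENCH_AVG * bench_games) / total_games
print(round(season_ga, 1))  # 185.3 -- nearly five pins below true strength
```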

The other problem gets to the heart of what the rankings are supposed to be: are we evaluating how well the team has bowled so far this season, or assessing their strength heading into the postseason?  Ideally, we're trying to balance the two.  So to that end, I came up with another tool, the T5 rating, which is an attempt to approximate the strength of the team based on the averages of their top five bowlers (in most cases, their most likely tournament lineup).
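For the curious, here's a minimal sketch of the T5 idea, assuming individual averages are available.  Taking the straight mean of the top five is my reading of "the averages of their top five bowlers", so treat the details as illustrative rather than the exact formula.

```python
# Minimal T5 sketch: rate a team by the mean of its top five individual
# averages (their most likely tournament lineup).  Roster averages invented.

def t5_rating(individual_averages):
    top_five = sorted(individual_averages, reverse=True)[:5]
    return sum(top_five) / len(top_five)

roster = [214, 203, 188, 181, 174, 162, 149]  # seven bowlers' season averages
print(round(t5_rating(roster), 1))  # 192.0
```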

Unfortunately, the T5 is hell to figure because of the handful of conferences that don't have individual stats and nj.com's basic statistical unfriendliness to bowling.  So I did it once (in advance of the 1-9-17 rankings), and I may do it again next month in advance of sectionals.  The variance between game average and T5 rating can be minuscule or massive; it was actually pretty interesting to a stat nerd like me.  Teams that have missed a top bowler for significant time, and those that have a lot of firepower but also sub frequently, were most likely to have a much higher T5 than GA, but it was really all over the map.  Some 4-man teams went down a lot due to a lack of proven depth; some teams actually went up because they have five or six good bowlers but rarely let their top 4 bowl all 12 games.  The T5 ratings of 6/5 teams, relative to their GA, really show who's benefiting from the "drop one" rule.

Anyway, this is all very interesting to pretty much just me, but it seemed like if I was gonna start using a new stat, I should have an explanation someplace, so here it is.  If I didn't make sense at some point and you want something clarified, just let me know (nobody will ever read this far).
