Over the past two decades, single-value basketball metrics have been popping up like weeds. The first composite stat to really take hold was John Hollinger’s Player Efficiency Rating (PER), which has been followed by further advancements on the box score along with metrics that harness plus-minus data, like ESPN’s Real Plus-Minus (RPM). Besides feeling like alphabet soup, what do these all-in-one stats really measure, and how valuable are they for player analysis?

To investigate this, I grabbed the following metrics for every player-season since 1997 (unless otherwise noted) in an attempt to glean value and a deeper understanding of these numbers:

PPG: Points per game, the original player ranker.
BBR BPM: Basketball-Reference’s Box Plus-Minus.
BP BPM: My own version of Box Plus-Minus.
APM: Regularized Adjusted Plus-Minus - often called RAPM.
AuPM: Augmented Plus-Minus, designed to mimic APM without play-by-play data.
iSRS: individual SRS, a dummy metric that multiplies a player’s minutes by his team’s SRS (adjusted point differential).

I started by looking at a metric’s stability from year-to-year. In other words, if LeBron James averaged 27 points per game last year, should we expect him to average 27 per game this year? And what should we expect if he changes teams? It’s helpful to understand how team circumstances can influence a statistic, so I compared each metric’s variability when a player changes teams to when he remains on the same team with similar teammates - in this case, defined as having 85 percent of the same minutes return from the previous season, per Basketball-Reference’s roster continuity calculator. Since different metrics are on different scales, I’ve charted variability in standard deviations, ordered by the most stable metrics when playing with similar teammates:

Unsurprisingly, points per game is the most consistent of these numbers from year-to-year when teams remain largely unchanged. It’s also the most consistent when players move to new teams, which makes sense given that most 15-point per game scorers don’t suddenly morph into 9- or 21-point per game players (the equivalent of a one standard deviation change). PER, which is heavily influenced by volume scoring, clocks in as the second-most change-immune metric. On the opposite end of the spectrum is the dummy statistic, iSRS, which is incredibly sensitive to new team changes. This is by design: iSRS treats all starters as roughly the same, regardless of how they actually played.

There’s nothing inherently good or bad about more stability. For instance, imagine a volatile metric that says Isaiah Thomas was incredibly valuable in Boston but not so valuable in Phoenix. This could be accurately capturing how differing circumstances or roles dictate value, or it might be inaccurately tied to his teammates’ performance, much like iSRS. On the other hand, a metric that is insensitive to team change might be accurately measuring a player’s overall “goodness” (irrespective of circumstantial value), or it might be rigidly measuring something else that is consistent, but inaccurate.

What’s noteworthy here is the ratio between new team and “same” team variability. Wins Produced (WP) and Win Shares (WS) have a nearly 2:1 ratio, whereas stats like PIPM and APM are fairly similar in variability, regardless of new teammate circumstance. This implies that WP and WS are either capturing the real-world volatility of player value - think of a scale bouncing around yet still accurately measuring weight - or inaccurately allocating credit based on how well a player’s teammates play. To figure out what’s going on here, I extended an old Neil Paine study (who himself expanded on similar studies) by using prior season values to predict teams in the following year.
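The new-team vs. same-team comparison described above can be sketched in a few lines of Python. This is only an illustration under assumed names - a list of player-season records with a precomputed year-over-year change in the metric, a team-change flag, and a roster-continuity share - not the actual code behind the study:

```python
from statistics import stdev

def stability_ratio(seasons, metric):
    """Standard deviation of a metric's year-over-year change for
    team-changers, divided by the same figure for players whose teams
    returned at least 85 percent of their minutes. A ratio near 1.0
    means the metric barely reacts to a change of teams."""
    changed = [s[metric + "_delta"] for s in seasons if s["changed_teams"]]
    stayed = [s[metric + "_delta"] for s in seasons
              if not s["changed_teams"] and s["continuity"] >= 0.85]
    return stdev(changed) / stdev(stayed)

# Toy records (made-up numbers, just to show the shape of the data):
seasons = [
    {"wp_delta": 2.0,  "changed_teams": True,  "continuity": 0.10},
    {"wp_delta": 4.0,  "changed_teams": True,  "continuity": 0.20},
    {"wp_delta": -3.0, "changed_teams": False, "continuity": 0.90},
    {"wp_delta": 1.0,  "changed_teams": False, "continuity": 0.95},
]
print(stability_ratio(seasons, "wp"))  # 0.5 on this toy data
```

A metric like WP or WS would show a ratio near 2.0 in this framing, while PIPM or APM would sit much closer to 1.0.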
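To make concrete why iSRS is so sensitive to team changes, here is a minimal sketch of the dummy metric defined above. The per-1,000-minutes scaling is my own assumption for readability; the source only specifies minutes times team SRS:

```python
def isrs(player_minutes, team_srs):
    """Dummy metric: a player's minutes times his team's SRS (adjusted
    point differential), scaled per 1,000 minutes (scaling assumed)."""
    return player_minutes * team_srs / 1000.0

# The same workload grades out completely differently depending on the
# team's point differential, regardless of how the player himself played:
print(isrs(2500, 5.0))   # 12.5 on a strong (+5.0 SRS) team
print(isrs(2500, -5.0))  # -12.5 on a weak (-5.0 SRS) team
```

Two starters logging identical minutes on the same team receive identical scores, which is exactly why the metric swings so wildly when a player switches teams.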