The following data set in regard to Super Bowl winners was presented on OverTheCap.com in January:
Now, I don't want to rip on the author too much, because he's pointing out something that looks interesting more than he's drawing any firm conclusions. But conclusions have been drawn. To wit, the history seems to indicate that paying a quarterback -- such as Russell Wilson -- too much of the cap -- such as $20M+ per year, which is 14% -- is likely to eliminate a team from Super Bowl contention.
And that's just dumb.
It's An Apples to Oranges Comparison
The chart shows only the quarterback's one-year cap hits, without regard to variations caused by cap growth, uneven contracts, or extensions.
Superficially, paying Russell Wilson $22M a year (the same as Aaron Rodgers) would eat up 15% of the $143M salary cap. But if Wilson gets exactly that on an extension, and the cap grows at its current $10M/year rate, his cap hit will never be 15%. Assuming a four-year $88M extension ($22M per year) with $35M paid as a bonus:
[EDIT: Numbers corrected, thanks to Jason_OTC]
2015: $3M salary + $7M bonus = $10M / $143M = 7.0%
2016: $13.25M salary + $7M bonus = $20.25M / $153M = 13.2%
2017: $13.25M salary + $7M bonus = $20.25M / $163M = 12.4%
2018: $13.25M salary + $7M bonus = $20.25M / $173M = 11.7%
2019: $13.25M salary + $7M bonus = $20.25M / $183M = 11.1%
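Under those assumptions, the year-by-year math is easy to check. Here's a minimal sketch (the $7M/year bonus proration and $10M/year cap growth are the figures assumed above, not actual contract terms):

```python
# Hypothetical extension: $35M signing bonus prorated evenly over five
# seasons ($7M/year), base salaries as listed above, and the salary cap
# growing $10M/year from a $143M baseline.

BONUS_PRORATION = 35.0 / 5          # $7M/year
salaries = {2015: 3.0, 2016: 13.25, 2017: 13.25, 2018: 13.25, 2019: 13.25}
cap = {year: 143.0 + 10.0 * (year - 2015) for year in salaries}

for year, base in salaries.items():
    hit = base + BONUS_PRORATION
    print(f"{year}: ${hit:.2f}M / ${cap[year]:.0f}M = {hit / cap[year]:.1%}")
```

The cap hit peaks at 13.2% in 2016 and shrinks every year after, never touching the headline 15%.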
Now, you can debate those specific numbers, but that's not the point. The point is that these one-off cap hits don't really tell us what Super Bowl winning teams paid for their quarterback. NFL teams are constantly juggling their cap charges, and if a quarterback cashes in one season, it likely means that another player is taking a small cap hit relative to his own contract. A season or two later, the quarterback's cap percentage shrinks and that other player takes the bigger chunk.
(Note that Tom Brady's 2010 contract made him the highest-paid player in the NFL. That didn't stop the Patriots from winning 14 games that season and leading the league in DVOA, nor did it prevent Brady from going to two Super Bowls in the next five years and winning the Lombardi -- when his cap hit was listed at about 10%.)
More Like Apples to Pork Chops
Recklessly omitted from OTC's chart is an NFL team's companion to the salary resource, namely, draft capital. Every team gets seven picks per year, with players coming on for four years at a significant discount. In effect, each team has about 28 discount tokens available at any given time for stocking their rosters.
The expenditure of these tokens is not counted on the chart, making it (again) severely flawed. That much is obvious.
(Less obvious is another limited resource, namely, time under center. The direct cost of Wilson's discount token was a third-round draft pick, but the cost of finding a discount quarterback is markedly higher. Whether it's an undervalued veteran like Drew Brees, an unproven young player like Tom Brady or Kurt Warner, or a backup quarterback like Aaron Rodgers, teams have to burn wasted seasons, draft picks, and free-agent money to identify their man.)
Or Maybe Apples to Yellow Cake Uranium
The NFL & NFLPA began a new CBA in 2011. OMG. Did you know that? Did you forget?
The new deal includes a rookie salary cap. The #6 overall pick in 2010, Russell Okung, signed a rookie contract for $8M/year. The #6 overall pick in 2011, Julio Jones, signed a rookie contract for $4M/year.
That difference of four million dollars is a lot of money. It's even more for higher draft picks, and extends (somewhat less spectacularly) through all 7 rounds. This means that, in 2011, not just more money overall but a higher percentage of the cap was available for paying veteran players.
The percentage of cap space going to veterans would increase every season under the new CBA, as more players rotate in on cheaper deals. It would not reach its peak until four rookie classes were in place. Ergo, when considering future veteran contracts, the historical list of NFL seasons that are valid for comparison looks something like this:
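To see why the veteran share keeps climbing for four seasons, here's a toy model -- every number in it is hypothetical, chosen only to illustrate the phase-in, not taken from actual rookie pools:

```python
# Toy model (all numbers hypothetical): suppose each pre-2011 rookie class
# cost a team $16M/year and each post-2011 class costs $8M/year. Players
# stay on rookie deals for four seasons, so old classes cycle out one at a
# time and the savings don't peak until four new classes are in place.

OLD_CLASS, NEW_CLASS = 16.0, 8.0    # $M per class per year (assumed)

for season in range(2011, 2016):
    new_classes = min(season - 2010, 4)    # post-2011 classes on the roster
    old_classes = 4 - new_classes          # leftover pre-2011 classes
    rookie_cost = old_classes * OLD_CLASS + new_classes * NEW_CLASS
    print(f"{season}: rookie-deal cost ${rookie_cost:.0f}M "
          f"(freed for veterans: ${4 * OLD_CLASS - rookie_cost:.0f}M)")
```

In this sketch the freed-up money grows each year and plateaus in 2014, the first season with four post-CBA classes on the roster -- which is exactly the point about which historical seasons are comparable.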
The problem with the math is that there is none. Simply looking for patterns in the numbers is akin to staring at the sky and observing that sheep-shaped clouds like to go left and the shredded cotton balls like to go right.
Some specific flaws:
1. The causal explanations are unsupported. If success comes from paying a quarterback less, the distribution should be bottom-heavy, dominated by quarterbacks earning less than 5% with very few outliers in double-digits.
Alternatively, it may be said that there is a "proven model" of paying quarterbacks in the 6-10% range, and therefore 15% won't work. But there are winning quarterbacks at 1.6% or less. How can a 50% increase on the "proven model" be considered an excessive divergence when the bottom end of that proven model is 300% higher than four historical winners?
2. The source population is ignored. Seahawk fans will remember a year ago when Seattle was dismissed as a contender based on the historical fact that such teams were unlikely to return to the Super Bowl. The fallacy in that reasoning is that the Seahawks' chances were being measured against 31 other teams (or 15 others in the Conference). So while it's always true that Defending Champ vs The Field is a bad bet, the Defending Champ is a much better bet than any single other team.
We are not given the league-wide distribution of quarterback cap hits over the last 20 years. If such is dominated by quarterbacks in the 6-10% range, with very few outliers over 11%, then the alleged "proven model" may actually be the worst for winning the Super Bowl, with highly-paid quarterbacks having the best chance. Math, folks.
3. Trying to infer causal relationships exclusively from post-hoc data of unique events is basically superstition. If you don't follow that, let me give an example:
[Table: Super Bowl winners -- Season, Winning Quarterback, Team, Points]
It appears that teams have the best chance of winning a Super Bowl when they score points in the upper 300's or very low 400's. It's definitely better to drop below 350 (four winners) than go over the dreaded 500 mark (just one).
Of course, scoring frequency changes with the rules, so we can also look at scoring rank. The proven model for success is to be between the 6th- and 14th-most prolific scoring team. Stay out of the top five! The 9th-, 10th- and 14th-ranked teams are twice as likely to win the Super Bowl as the #1 overall team. And if you're closing in on the end of the season in position #3, you better hope for a Sunday night game so you can tweak your gameplan to avoid the curse position.
Even more telling is the latitudinal sweet spot -- situating your home stadium between 39 and 43 degrees North is obviously a smart move (11 out of 15 winners). Worse, teams starting the season with a home stadium north of 44.5 degrees latitude have just a 1/88 chance (1.1%) of winning the Super Bowl. Seahawk fans can take some consolation in the fact that we won't have to go all the way to California to get back into contention -- a move to Medford, Oregon would put us at 42.3 degrees North, a mere 7-hour drive according to Google Maps.
A Real Comparison
We know that including rookie contracts and doing a post-hoc analysis are worthless. If we want to look for trends, then, we should look at veteran quarterbacks and their salaries across all teams and compare team success. The only salary data readily available came from 2013 and 2014, but that's fine because previous years are skewed by higher rookie contracts preceding the current CBA.
Adjusted Salary is just a small tweak to make the numbers for 2013 and 2014 (which have different caps) similar (I could have used cap percentage, but the math was more fun this way).
DVOA (Defense-adjusted value over average) is a good omnibus number for measuring a team's performance across all facets (offense, defense, special teams). Only quarterbacks with at least 10 starts are included.
ANY/A is adjusted net yards per pass attempt (counting sacks, interceptions, and touchdowns). We'll get to that and the "expected ANY/A" further down.
[Table: 2013-2014 veteran quarterbacks -- Season, Player, Team, Adjusted Salary, Team DVOA, Playoff results, ANY/A, and salary-expected ANY/A]
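For reference, ANY/A follows the standard formula used by sites like Pro Football Reference: a 20-yard bonus per passing touchdown, a 45-yard penalty per interception, and sacks counted as pass plays. A minimal implementation:

```python
def any_per_attempt(pass_yds, pass_td, ints, sacks, sack_yds, attempts):
    """Adjusted Net Yards per Attempt: credits 20 yards per passing TD,
    debits 45 per interception, and counts sacks and sack yardage."""
    return (pass_yds + 20 * pass_td - 45 * ints - sack_yds) / (attempts + sacks)

# Hypothetical stat line (not a real player's season):
# 4000 yards, 30 TD, 10 INT, 30 sacks for 200 yards, 550 attempts
print(round(any_per_attempt(4000, 30, 10, 30, 200, 550), 2))  # ≈ 6.81
```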
Some quick observations:
Among teams with a DVOA of at least +12%, 8 out of 10 paid their quarterback at least $13.6M (adjusted).
Among teams with a positive DVOA, 14 out of 17 paid their quarterback at least $9.2M. Two of the remaining three-- Carson Palmer and Alex Smith-- were on low-cap years of much more lucrative contracts ($16.5M/yr and $17M/yr).
The lowest cap hits among quarterbacks who won a playoff game were Tony Romo's 2014 (adjusted) hit of $10.8M (contract average $18M/yr) and Joe Flacco's 2014 hit of $13.6M (contract average $22M/yr).
Overall, a higher QB salary has a positive correlation both with team success (DVOA) and playoff games:
Some Reasonable Exclusions
I'll make one concession: if you pay your quarterback like an elite player and he performs below league average, you aren't in good shape to win a Championship.
To determine which quarterbacks are underperforming (and by how much), we start with ANY/A. This would not be an accurate statistic for quarterbacks who contribute a lot of rushing, but as the veterans table does not include Wilson, Luck, Newton, Griffin, or Kaepernick, it should work just fine for the nonce.
Now, a quarterback who records 10.0 adjusted net yards/attempt is not worth three times as much as a quarterback at 3.3 adjusted net yards/attempt. He's worth more like 20 times as much, so the salary ranges are considerably wider than the ANY/A range. To line those up, I computed an average ANY/A among all 2013-2014 QB's with at least 300 attempts (6.16) and a standard deviation (1.56). I did the same for the veteran salaries ($11.9M average, $6.0M standard deviation). It's then a simple calculation to show that Joe Flacco's 2014 adjusted salary of $13.62M (+0.28 standard deviations) produces an expected ANY/A of 6.597.
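That mapping is just a z-score translation between the two scales. Here's a short sketch using the stated means and standard deviations (note that exact arithmetic gives roughly 6.61 for Flacco; the 6.597 above reflects rounding the z-score to +0.28):

```python
# Convert a quarterback's adjusted salary to a z-score, then project it
# onto the ANY/A scale. Means and standard deviations are the 2013-2014
# figures stated above.

ANYA_MEAN, ANYA_SD = 6.16, 1.56   # ANY/A, QBs with 300+ attempts
SAL_MEAN, SAL_SD = 11.9, 6.0      # adjusted salary, $M

def expected_anya(adjusted_salary_m):
    z = (adjusted_salary_m - SAL_MEAN) / SAL_SD
    return ANYA_MEAN + z * ANYA_SD

print(round(expected_anya(13.62), 2))  # Flacco 2014: ≈ 6.61
```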
Now let's re-examine the data and leave out those quarterbacks who performed a full standard deviation below expectations. Note, first, that this is not a sweeping arbitrary exclusion-- all three quarterbacks who are removed (Cutler '14, E.Manning '13, and Stafford '13) still show up for their other season; and all lower-paid quarterbacks who over-perform are still included.
Note secondly, and most importantly, that the original premise under discussion is the best strategy for winning a Super Bowl. Spreading $20M in cap among four players gives you the best chance of winning a minimum of 6 games, but putting all your eggs in one basket ($20M to a worthwhile quarterback) is better for a championship. It's far more likely for a single player to outperform expectations in a single season than for four different players to do so simultaneously.
And with just those three exclusions, the correlations look like this:
For normal statistics with typical random sampling, you'd want a correlation coefficient of .7 to show significance at these sample sizes. But in the chaotic world of NFL football, .46 and .49 are pretty strong.
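For clarity, those figures are Pearson correlation coefficients between quarterback salary and the team-success measures. Here's a self-contained sketch of the calculation -- the sample data below is purely hypothetical, since the full table isn't reproduced here:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical illustration only -- not the article's actual data:
salaries = [5.0, 9.2, 10.8, 13.6, 17.0, 20.0]   # adjusted $M
dvoa     = [-8.0, 2.0, 6.0, 5.0, 14.0, 12.0]    # team DVOA %
print(round(pearson_r(salaries, dvoa), 2))
```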
Finally, it bears repeating that all NFL teams have the same salary cap. So any measure of team success versus spending across all positions would necessarily produce a flat correlation line. Therefore, the data show that spending money on a quarterback is a better strategy than spending it elsewhere.