When a team signs a player to a $40 million extension, the conversation on every fan timeline goes the same way. Half the replies say it is a steal. The other half say it is a disaster. Within an hour someone has photoshopped the GM into a meme. None of it is grounded in anything you could check.
The FanVerdicts Contract Value Index — CVI for short, but we will use the full name throughout — is the grade we put on every one of those contracts. It is a letter, A+ down to F, and it is the same kind of grade you got in school. A B+ contract is a quality contract. An A+ contract is one of the best on the books. An F contract is one a team would unwind tomorrow if it could.
This article explains what goes into that grade in plain English. It does not give you the coefficients or the cutoffs — that is the part we keep proprietary, the same way Moody's keeps the inner workings of its credit ratings proprietary — but it gives you the honest version of what we are measuring and why we believe it.
The question we are answering
Fans, reporters, and front-office people all evaluate contracts the same way intuitively. They look at four things. They look at how good the player is. They look at how much the player is making. They look at what comparable players in the same position are making. And they look at where the player is on their career arc — going up, holding steady, or trending down.
Every analyst on television does this in their head. The problem is that everyone does it differently, with different reference points, and nobody publishes their methodology. One analyst calls a contract a steal because they happen to weight the player's recent peak heavily. Another calls the same contract a disaster because they are weighting age more heavily. Both are reasonable. Neither is auditable.
The Contract Value Index is the same comparison, done the same way every time, on every contract, with the result published and tracked over time so we can see how well the grade held up.
The four ingredients
Every Contract Value Index grade combines four inputs. The recipe is the same across NFL, NBA, and MLB, with sport-specific calibration on top.
Performance. This is the player's on-field production, normalized so a quarterback and a left tackle can be compared on the same scale even though their stat lines look nothing alike. We use a separate FanVerdicts Performance Score formula for this, sport by sport, calibrated against career-award outcomes — All-Pro, All-NBA, All-Star teams. Performance is the most heavily weighted ingredient in the index. A great contract for a great player is great mostly because the player is great.
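One standard way to put players at different positions on a single scale is a within-position z-score: each player is measured against the mean and spread of their own position group, so a quarterback stat and a tackle stat become comparable units of "how far above positional average." This is a minimal sketch of that general idea, not the actual Performance Score formula; the player names, the stats, and the choice of z-scores are all assumptions for illustration.

```python
# Sketch of cross-position normalization via within-position z-scores.
# All names and numbers below are invented for illustration.
from statistics import mean, stdev

def position_zscores(raw: dict[str, float]) -> dict[str, float]:
    """Score each player relative to their own position group."""
    mu, sigma = mean(raw.values()), stdev(raw.values())
    return {name: (value - mu) / sigma for name, value in raw.items()}

qb_epa  = {"QB1": 0.21, "QB2": 0.05, "QB3": -0.08}  # invented per-play values
lt_wins = {"LT1": 0.93, "LT2": 0.88, "LT3": 0.80}   # invented block-win rates

# After normalization, QB1 and LT1 sit on the same scale even though
# the raw stats they came from look nothing alike.
z = {**position_zscores(qb_epa), **position_zscores(lt_wins)}
print({name: round(score, 2) for name, score in z.items()})
```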
Salary percentile. This is what the player is making, expressed as a percentile against everyone else at the same position in the same league. We use percentile rank instead of raw dollars on purpose. The salary cap moves up roughly seven percent a year. A $20 million quarterback in 2018 and a $20 million quarterback in 2026 are not the same caliber of player, because the cap that defined "expensive" has moved. Percentile rank handles that automatically.
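The percentile-rank idea fits in a few lines. Everything here, from the function name to the sample salaries, is invented for illustration; the real index draws on full positional salary data for each league.

```python
# Sketch of percentile-rank salary normalization. Sample salaries
# below are invented, in millions of dollars.

def salary_percentile(player_salary: float, peer_salaries: list[float]) -> float:
    """Percent of same-position peers earning no more than this salary."""
    if not peer_salaries:
        raise ValueError("need at least one peer salary")
    at_or_below = sum(1 for s in peer_salaries if s <= player_salary)
    return 100.0 * at_or_below / len(peer_salaries)

# Because the comparison is rank-based, the same function works in any
# salary-cap era: the peer list shifts, the percentile logic does not.
qb_salaries = [2.0, 4.5, 10.0, 20.0, 25.0, 35.0, 45.0, 52.0]
print(salary_percentile(20.0, qb_salaries))  # 50.0
```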
Career contract history. Has this player honored their previous contracts? Have they outperformed them, underperformed them, gotten cut, or gotten extended early? Contracts compound. A player on their fourth NFL deal is being graded partly against the trajectory the first three deals established.
Recent trend. A 28-year-old wide receiver coming off two All-Pro seasons and a 28-year-old wide receiver coming off a major injury are two different bets, even at the same age and the same Performance Score average over their career. The trend matters. We capture it as a one-to-three-season window of recent performance relative to expectations.
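One plausible reading of "recent performance relative to expectations" can be sketched as the average of the last few seasons minus the player's career average, with the career average standing in for expectations. The window length and the scoring scale here are assumptions, not the production formula.

```python
# Sketch of a recent-trend signal. Window length and the 0-100
# season scores are invented for illustration.

def recent_trend(season_scores: list[float], window: int = 3) -> float:
    """Positive: trending above career norm; negative: trending below."""
    if len(season_scores) < window + 1:
        return 0.0  # not enough history to separate trend from baseline
    recent_avg = sum(season_scores[-window:]) / window
    career_avg = sum(season_scores) / len(season_scores)
    return recent_avg - career_avg

# Two players with the same career average can diverge sharply here.
print(recent_trend([60, 70, 80, 85, 90]))  # 8.0, trending up
```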
The structural shape
At its core, the Contract Value Index is a ratio. The top half is what the player is producing. The bottom half is what the market is paying them. A grade above the population average means the production is exceeding the price. A grade below it means the price is exceeding the production.
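The production-over-price shape can be sketched directly. The weights, scales, and the linear blend below are all invented for illustration; the real coefficients are proprietary, and the only thing this sketch borrows from the article is the structure and the note that performance is the most heavily weighted ingredient.

```python
# Illustrative-only sketch of the production-over-price ratio.
# Weights and scales are invented; real coefficients are proprietary.

def value_ratio(performance: float, salary_pct: float,
                history: float, trend: float) -> float:
    """Above 1.0: production exceeds price. Below 1.0: price exceeds production.

    performance, history, trend: 0-100 scales (invented for this sketch).
    salary_pct: salary percentile within position, 0-100.
    """
    # Blend the production-side ingredients; performance dominates,
    # mirroring the "most heavily weighted" note above.
    production = 0.6 * performance + 0.2 * history + 0.2 * trend
    # Floor the denominator so minimum-salary players do not divide by zero.
    price = max(salary_pct, 1.0)
    return production / price

# Invented example: a strong producer on a mid-market deal.
print(round(value_ratio(performance=85, salary_pct=55, history=70, trend=80), 2))  # 1.47
```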
That structure is borrowed directly from the executive-compensation literature in corporate finance — work by economists like Jensen, Murphy, Bebchuk, and Edmans on whether CEO pay tracks the value CEOs create. The same comparative logic — observed pay versus expected pay conditional on observable performance — has been applied in sports labor economics by Berri, Schmidt, and Brook in their book The Wages of Wins, and by Massey and Thaler in their work on the rookie-wage-scale surplus. The Contract Value Index is the applied, multi-sport, contract-level version of that academic work.
We did not invent the structural idea. What we did was operationalize it for every NFL, NBA, and MLB contract on the books, calibrate it empirically against real outcomes, and publish a grade.
How we know it works
A grading system that nobody validates is just an opinion column with a font change. The Contract Value Index is empirically validated three ways.
The first is correlation against industry-consensus contract evaluations — the qualitative grades published by sources like Spotrac and Over The Cap, summarized into A/B/C/D/F buckets. On a 468-contract NFL sample the Contract Value Index correlates at +0.48 with industry consensus. On a 62-contract NBA sample it correlates at +0.54. Those numbers clear the publishability bar in the academic literature on grading systems.
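The mechanics of that check are simple: code the letter buckets as numbers and compute a correlation coefficient between the two sets of grades. The grade coding and the toy sample below are invented; the +0.48 and +0.54 figures come from the actual FanVerdicts samples, not from this sketch.

```python
# Sketch of the letter-bucket correlation check. Grade coding and
# sample data are invented for illustration.
from math import sqrt

GRADE_SCALE = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}  # hypothetical coding

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cvi       = ["A", "B", "B", "C", "D", "F", "A", "C"]  # invented index grades
consensus = ["A", "A", "C", "C", "D", "D", "B", "B"]  # invented industry grades
r = pearson([GRADE_SCALE[g] for g in cvi], [GRADE_SCALE[g] for g in consensus])
print(round(r, 2))  # 0.81 on this invented sample
```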
The second test is the harder one, because it is non-circular. We took NFL contracts signed between 2009 and 2018, computed their Contract Value Index using only the information available at signing — the player's pre-signing performance, age, and contract terms — and then measured what the player actually produced in the years after signing. A contract grade is only as useful as its forward-looking content, and a forward-looking grade is only worth anything if it predicts outcomes the grade did not see at the time. On a 971-contract longitudinal sample, the Contract Value Index meaningfully predicts post-signing career production. The full results are documented in our Historical Validation paper.
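The discipline that makes that test non-circular can be sketched as a hard split at the signing date: only seasons completed before signing feed the grade, and only seasons after signing score it. The field names and sample career are invented for illustration.

```python
# Sketch of the at-signing information split behind the non-circular
# test. Field names and the sample career are invented.
from dataclasses import dataclass

@dataclass
class Season:
    year: int
    performance: float  # invented 0-100 performance score

def split_at_signing(seasons: list[Season], signing_year: int):
    """Pre-signing seasons feed the grade; post-signing seasons score it."""
    pre = [s for s in seasons if s.year < signing_year]
    post = [s for s in seasons if s.year >= signing_year]
    return pre, post

career = [Season(2014, 60), Season(2015, 72), Season(2016, 81),
          Season(2017, 78), Season(2018, 55), Season(2019, 49)]
pre, post = split_at_signing(career, signing_year=2017)
print([s.year for s in pre], [s.year for s in post])
# [2014, 2015, 2016] [2017, 2018, 2019]
```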
The third test is even more direct. Contracts get terminated early. Some of those terminations are good news for the player — the team extends or renegotiates because the player has outperformed. Some of them are bad news — the team cuts the player because the contract has become unworkable. The Over The Cap database flags which is which. We checked whether Contract Value Index grades line up with the right side of those outcomes. They do.
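The shape of that check is a group comparison: contracts that ended in an extension should carry higher grades, on average, than contracts that ended in a release. The grade coding and the toy outcomes below are invented; the real check runs against the Over The Cap termination flags.

```python
# Sketch of the termination-alignment check. Grade coding and the
# sample outcomes are invented for illustration.
GRADE_NUM = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}  # hypothetical coding

# (grade at signing, how the contract ended) -- invented sample
outcomes = [("A", "extended"), ("B", "extended"), ("B", "cut"),
            ("C", "extended"), ("D", "cut"), ("F", "cut")]

def mean_grade(ended_by: str) -> float:
    """Average numeric grade among contracts with the given ending."""
    vals = [GRADE_NUM[g] for g, how in outcomes if how == ended_by]
    return sum(vals) / len(vals)

# Alignment holds when the "good news" endings sit above the "bad news" ones.
print(mean_grade("extended") > mean_grade("cut"))  # True on this toy sample
```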
What the grade does not tell you
A few things to be clear about, because grading systems get over-claimed all the time.
The Contract Value Index is a contract-level grade, not a player-level grade. A player can be a great player on a bad contract, or a mediocre player on a great contract. We grade players separately, with a separate Performance grade. The two are designed to be read together.
The Contract Value Index does not predict injuries. It does not predict trades. It does not predict locker-room dynamics. It is a quantitative grade on the contract structure given the player's profile at signing. Front offices use a hundred inputs we cannot quantify — medical history, character, scheme fit. The grade is a useful baseline, not a verdict.
And the Contract Value Index is calibrated separately for each sport. NBA contracts have a max-salary structure that does not exist in the NFL. MLB contracts run through arbitration tiers that do not exist in either of the other two leagues. Each sport has its own version of the formula, calibrated on its own data, because pretending the leagues are the same is the fastest way to produce an inaccurate grade.
What you see on the site
On every player page and every transaction page, the Contract Value Index shows up as a single letter, with a tier label like "high-end starter" or "rotational backup" depending on the grade band. We do not show you the underlying score. We do not show you the coefficient values. We show you the letter, the tier, and an analysis paragraph that explains why the letter came out where it did, written from the actual data inputs — not from training-data guesses about what we think the contract looks like.
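Mechanically, mapping an internal score to a letter and tier label is a banded lookup. The cutoffs and most of the tier names below are invented ("high-end starter" and "rotational backup" are the two labels mentioned above), and plus/minus refinements are omitted; the real bands are proprietary.

```python
# Hypothetical sketch of score-to-grade banding. Cutoffs and tier
# names are invented; plus/minus grades are omitted for brevity.
from bisect import bisect_right

# (exclusive upper bound of band, letter, tier label)
BANDS = [
    (40,  "F", "liability"),
    (55,  "D", "rotational backup"),
    (70,  "C", "solid starter"),
    (85,  "B", "high-end starter"),
    (101, "A", "franchise cornerstone"),
]
UPPER_BOUNDS = [b[0] for b in BANDS]

def grade(score: float) -> tuple[str, str]:
    """Return the (letter, tier label) for an internal 0-100 score."""
    i = min(bisect_right(UPPER_BOUNDS, score), len(BANDS) - 1)
    _, letter, tier = BANDS[i]
    return letter, tier

print(grade(78))  # ('B', 'high-end starter')
```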
That paragraph, the letter, and the methodology behind them are what we mean when we say a grade is grounded.