Why four grades, not one?
Most sports-grading sites collapse every judgment into a single letter, which forces opinions about “performance,” “value,” and “reaction” to fight each other inside one number. A running back on a winning team is not the same as a running back on a rebuilding team, and a thirty-four-million-dollar contract is not the same as a veteran minimum — yet a single-grade system treats them as comparable.
FanVerdicts splits the question into four independent dimensions, each of which is calibrated separately and published separately. A player can earn an A on the field and a C on their contract at the same time. Fans can cheer a trade the algorithm pans (or vice versa), and the Fan Verdict grade captures that disagreement honestly instead of hiding it.
The four grades are Performance, Sentiment, Contract Value Index (CVI), and Fan Verdict. Every transaction page, player profile, team page, and GM page shows all four side by side.
1. Performance Grade
Performance measures a player's on-field production relative to their peers at the same position. We use per-game box score inputs, advanced statistics where the sport publishes them (EPA, DVOA, PIPM, wRC+, WAR, etc.), and era-adjusted benchmarks so a 2015 season and a 2025 season remain directly comparable. For rookies we grade on per-game production within their first season rather than career benchmarks, so the grade reflects what they actually did, not what scouts predicted.
Each sport has its own performance formula because the positional value landscape is different: NFL grades are position-stratified against a positional win-rate baseline; NBA grades weight offensive and defensive impact scaled against era-adjusted league averages; MLB grades separate pitchers from hitters and use percentile-calibrated normalization so the grade distribution matches the panel's target bell curve.
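The percentile-calibrated step can be sketched in a few lines. This is an illustration only: the cutoffs, the letter scale, and the peer-comparison logic below are invented placeholders, not FanVerdicts' internal values.

```python
# Minimal sketch of percentile-calibrated grading: rank a player against
# positional peers, then map the percentile to a letter using a target
# bell curve. All cutoffs here are hypothetical.

# Hypothetical cumulative percentile ceilings per letter (bell-shaped).
GRADE_CUTOFFS = [(0.05, "F"), (0.15, "D"), (0.40, "C"), (0.75, "B"), (0.95, "A")]

def percentile_rank(value, peer_values):
    """Fraction of same-position peers this player's metric beats."""
    if not peer_values:
        return 0.5  # no peers: assume league average
    return sum(v < value for v in peer_values) / len(peer_values)

def letter_grade(percentile):
    """Map a peer percentile onto the target grade distribution."""
    for ceiling, letter in GRADE_CUTOFFS:
        if percentile < ceiling:
            return letter
    return "A+"
```

Calibrating against percentiles rather than raw metric values is what keeps a 2015 season and a 2025 season comparable: each is graded against its own era's peer distribution.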
2. Sentiment Grade
Sentiment captures how the transaction was received — by reporters, by national coverage, and by fans. We ingest qualifying headlines and article snippets from RSS feeds of major outlets, score each mention on a positive-to-negative axis, and combine the result with significance-weighted fan reactions. A single critical column from a national columnist carries more weight than a hundred echo-chamber tweets, but both are present in the signal.
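The significance-weighted blend can be sketched as a weighted mean. The weights below are invented for the example; the real weighting model is not public.

```python
# Illustrative sketch of significance-weighted sentiment blending.
# Scores live on a [-1, 1] positive-to-negative axis.

FAN_WEIGHT = 0.005  # hypothetical: one major column outweighs ~100 fan posts

def sentiment_score(media_mentions, fan_reactions):
    """
    media_mentions: list of (score, significance) pairs, e.g. a national
                    column might carry significance 1.0, a local blurb 0.2.
    fan_reactions:  list of scores, each given the same small fixed weight.
    Returns the weighted mean sentiment on the same [-1, 1] axis.
    """
    total = sum(s * w for s, w in media_mentions)
    total += sum(s * FAN_WEIGHT for s in fan_reactions)
    weight = sum(w for _, w in media_mentions) + FAN_WEIGHT * len(fan_reactions)
    return total / weight if weight else 0.0
```

With these placeholder weights, one scathing national column (score -1.0, significance 1.0) drags the blend negative even against a hundred maximally positive fan reactions — the behavior the paragraph above describes.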
Sentiment measures how a move is perceived, not whether it was a good move. A deal can be panned by the media and still earn an A on CVI if the contract is objectively team-friendly. We publish both grades because readers deserve to see when public opinion and the contract math disagree.
3. Contract Value Index (CVI)
The Contract Value Index answers the question every fan asks after a signing: is this deal worth it? CVI compares the structure of the contract — guaranteed money, average annual value, age at signing, years of control — against the player's most recent performance output, with sport-specific adjustments for position scarcity, age curves, and the local salary cap environment.
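Since the exact formula is internal, here is only a toy illustration of the *kind* of comparison CVI makes: value produced per cap dollar spent, discounted along a crude age curve. Every coefficient below is invented.

```python
# Hypothetical CVI-style comparison. Not FanVerdicts' formula; the age
# curve, peak age, and ratio structure are all placeholder assumptions.

def cvi_ratio(performance_value, aav, salary_cap, age, peak_age=27):
    """
    performance_value: estimated on-field value expressed in cap dollars
                       (e.g. derived from WAR or a win-rate model).
    aav:               average annual value of the contract.
    Returns value produced per cap dollar spent; > 1.0 reads team-friendly.
    """
    cap_share = aav / salary_cap  # how much of the cap the deal consumes
    # Crude age discount: ~3% per year past a hypothetical peak, floored.
    age_discount = max(0.5, 1 - 0.03 * max(0, age - peak_age))
    value_share = (performance_value * age_discount) / salary_cap
    return value_share / cap_share if cap_share else float("inf")
```

A 25-year-old producing $30M of estimated value on a $15M AAV deal would score 2.0 here — the shape of result that lets a media-panned signing still grade as team-friendly.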
CVI is the most research-heavy of the four grades. It incorporates findings from a panel of PhD economists and statisticians who review our empirical calibration each off-season, and it is checked against bias-correction gates built from prior-season back-tests. We keep the exact formula internal because it is our core IP, but we publish letter grades and qualitative commentary on every move so the reasoning is transparent even when the coefficients are not.
4. Fan Verdict
Every transaction page, player profile, team page, and GM page includes a Fan Verdict vote. This is not an aggregate of the three algorithmic grades — it is an independent grade produced by readers. Fan Verdict is what the community actually thinks, with no smoothing, no weighting, and no editorial override.
When Fan Verdict disagrees with the algorithmic grades, that disagreement is itself interesting. It often flags a story the data misses (an off-field concern, a locker-room fit question, a hometown bias), and it is one of the inputs we watch when we decide whether a grade formula needs recalibration.
How the grades are kept current
Every grade on FanVerdicts is regenerated on a rolling schedule. Transaction grades are produced within minutes of the news hitting our ingest feeds. Player grades are re-computed nightly so a big performance immediately moves the letter. Team and GM grades are aggregated from transaction and player grades each morning, which keeps front-office report cards fresh without requiring the whole system to recompute every time a player has a big game.
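The morning roll-up can be sketched as a GPA-style average over the grades beneath it. The point scale and nearest-letter mapping below are assumptions for illustration, not the production aggregation.

```python
# Sketch of aggregating team/GM report cards from transaction grades:
# convert letters to points, average, map back to the nearest letter.
# The point values are a hypothetical scale.

GRADE_POINTS = {"A+": 4.3, "A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def team_grade(transaction_grades):
    """Average letter grades on a GPA scale, then snap to the closest letter."""
    if not transaction_grades:
        return None  # no graded moves yet: nothing to report
    gpa = sum(GRADE_POINTS[g] for g in transaction_grades) / len(transaction_grades)
    return min(GRADE_POINTS, key=lambda letter: abs(GRADE_POINTS[letter] - gpa))
```

Because this layer only re-reads already-computed transaction and player grades, the morning aggregation stays cheap: one player's big game changes his own nightly grade, and the team roll-up picks it up the next morning without a full recompute.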
We publish a Grade History log on every page, so anyone can see how a grade moved and when it moved. A grade is a statement about the best available evidence today; if the evidence changes, the grade should change, and the history should make the change visible.
Our research panel
FanVerdicts grades are reviewed by an anonymous standing panel of academics and practitioners with published research in sports economics, labor markets, performance analytics, and survey methodology. No formula change ships without an empirical test against real outcomes from prior seasons, and no sport-specific coefficient ships without panel sign-off.
Our two-test framework requires every proposed formula change to either (a) improve predictive accuracy against held-out seasons by a non-trivial margin or (b) correct a documented bias in the prior formula. Changes that pass neither test are shelved. The point is to keep letter grades defensible years after they were published, not just the week they hit the homepage.
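The shipping gate reduces to a simple predicate. The threshold name and value below are placeholders for whatever "non-trivial margin" the panel sets.

```python
# Sketch of the two-test shipping gate: a formula change ships only if it
# (a) improves held-out predictive accuracy by a non-trivial margin, or
# (b) corrects a documented bias in the prior formula.

MIN_ACCURACY_GAIN = 0.01  # hypothetical non-trivial margin

def may_ship(new_accuracy, old_accuracy, corrects_documented_bias):
    """Return True if the proposed change passes at least one of the two tests."""
    improves = (new_accuracy - old_accuracy) >= MIN_ACCURACY_GAIN
    return improves or corrects_documented_bias
```

A change that neither moves held-out accuracy nor fixes a documented bias fails both branches and is shelved, exactly as the paragraph above describes.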
What the grades do not do
FanVerdicts is an editorial product, not financial advice, fantasy advice, or betting advice. A player with an A Performance grade can have a bad game tomorrow. A contract with an A on CVI can look foolish if the sport's economics shift. A GM with an A report card can get fired for reasons that have nothing to do with transactions. Our job is to make the grading process transparent and defensible; what readers do with those grades is up to them.
We do not aggregate grades across the four systems into a single overall grade. That would undo the entire point of publishing them independently. If you see a site citing a “FanVerdicts overall grade” on social media, that is someone's hand-averaging, not our output.
Questions about a specific grade?
Every grade page includes written commentary generated from the same empirical inputs that produced the letter. If a grade still looks wrong, we want to hear it — the best feedback is the kind that points at a specific player, team, or transaction and explains why the letter feels off.