About the Metrics

FantasyMeter helps you understand how each hero contributes to results across tournaments. Our metrics blend hero performance (scoring, participation, consistency) with player choices (deck building, entry selection, timing). The sections below tell you what a metric represents, why it matters, and how to read it. Click Formula (for the curious) if you’d like the math.

Initial Value (Vi)

What it is
Your cost basis per card (ETH). You can use the default or set your purchase price.
Why it matters
Vi anchors any return you see. Two cards with the same rewards can have very different ROI if you paid different amounts.
How to read it
Use a custom Vi when you know what you paid. A higher Vi makes ROI harder to achieve; a lower Vi improves it. Tip: If Vi is 0, ROI is not shown (to avoid division by zero).
Formula (for the curious)
Overall: ROI = Σ(Hero ETH-equiv) / Vi
Per tournament: ROIₜ = Hero ETH-equiv / Vi
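As a minimal sketch of the two formulas above (the function names are illustrative, not the app's API):

```python
# Sketch of the ROI formulas; names are illustrative, not the app's API.
def roi_overall(hero_eth_equivs, vi):
    """Overall ROI = sum of hero ETH-equivalents across tournaments, over Vi."""
    if vi == 0:
        return None  # ROI is not shown when Vi is 0
    return sum(hero_eth_equivs) / vi

def roi_tournament(hero_eth_equiv, vi):
    """Per-tournament ROIt = hero ETH-equivalent for one event, over Vi."""
    return None if vi == 0 else hero_eth_equiv / vi
```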

Fragment Rate

What it is
A conversion rate to turn Fragments into ETH-equivalent. Default is 0.00006 ETH/Frag (configurable).
Why it matters
Rewards can be paid in ETH and Frags. Converting to a single unit lets you evaluate total value and ROI cleanly.
How to read it
Use the rate that reflects current pricing or your own assumption. Changing it updates ETH-equivalent and every ROI metric.

Deck Score & Hero Score

What it is
Each tournament yields a DeckScore (team output) and HeroScore (this card’s contribution).
Normalized scores remove rarity multipliers; Raw scores include them and align with reward splits.
Why it matters
They’re the backbone for participation and thus how much reward the hero earns from the deck.
How to read it
Use Normalized when comparing pure performance across rarities; use Raw for a view that matches how rewards are split.
Normal leagues: HeroRaw = HeroNorm × rarityMult. Reverse leagues: HeroRaw = HeroNorm ÷ rarityMult (e.g., Legendary 2.5, Epic 2.0, Rare 1.5, Common 1.0).
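The normalized↔raw relationship can be sketched as follows (multiplier table taken from the example above; the function name is illustrative):

```python
# Rarity multipliers quoted in the text above.
RARITY_MULT = {"Legendary": 2.5, "Epic": 2.0, "Rare": 1.5, "Common": 1.0}

def hero_raw(hero_norm, rarity, reverse_league=False):
    """Normal leagues multiply by the rarity multiplier; Reverse leagues divide."""
    mult = RARITY_MULT[rarity]
    return hero_norm / mult if reverse_league else hero_norm * mult
```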

Hero Participation

What it is
The hero’s share of the deck’s output in a tournament (their “slice of the pie”).
Why it matters
Participation is how we apportion rewards down to the hero level.
How to read it
Higher participation → larger reward share. If the lineup is degenerate (not enough valid scores), participation is treated as 0.
Normal leagues use rarity-weighted scores; Reverse leagues use normalized scores with a softmax-of-negatives so lower scores receive a larger share smoothly.
Formula (for the curious)
Normal: hp = HeroScoreRaw / DeckScoreRaw
Reverse (softmax of negatives): hpi = exp(−λ · HeroNormi) / Σ exp(−λ · HeroNormj) (λ ≈ 0.1 by default)
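A sketch of both participation formulas, assuming the λ ≈ 0.1 default stated above (function names are illustrative):

```python
import math

# Sketch of the two participation formulas; lambda defaults to ~0.1 per the text.
def hp_normal(hero_raw, deck_raw):
    """Normal leagues: direct share of the deck's raw score (0 if degenerate)."""
    return 0.0 if deck_raw == 0 else hero_raw / deck_raw

def hp_reverse(hero_norms, lam=0.1):
    """Reverse leagues: softmax of negatives — lower scores get larger shares."""
    weights = [math.exp(-lam * s) for s in hero_norms]
    total = sum(weights)
    return [w / total for w in weights]
```

Note how the softmax shares always sum to 1, and the lowest-scoring hero receives the largest slice.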

Deck Rewards → Hero Share

What it is
We split deck rewards (ETH + Frags) by a hero’s participation to get HeroETH and HeroFrags.
Why it matters
This fairly attributes team rewards to individual contributors.
How to read it
Bigger slice (hp) → bigger ETH + Frags. Reverse leagues are handled automatically by the participation model above.
Formula (for the curious)
HeroETH = DeckETH × hp
HeroFrags = DeckFrags × hp

ETH-equivalent Reward

What it is
A single ETH-based value that combines ETH and Frags using your fragment rate.
Why it matters
Lets you compare payouts consistently and feed them into ROI and break-even.
How to read it
If your view of frag pricing changes, adjust the rate. ETH-equiv will update everywhere.
Formula (for the curious)
DeckETH-equiv = DeckETH + DeckFrags × rate
HeroETH-equiv = HeroETH + HeroFrags × rate
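A minimal sketch chaining the two steps above (participation split, then fragment conversion); names and the default rate are taken from the text:

```python
DEFAULT_RATE = 0.00006  # default ETH per Frag from the text; configurable

def hero_eth_equiv(deck_eth, deck_frags, hp, rate=DEFAULT_RATE):
    """Split deck rewards by participation, then fold Frags into ETH-equivalent."""
    hero_eth = deck_eth * hp
    hero_frags = deck_frags * hp
    return hero_eth + hero_frags * rate
```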

Per-Tournament ROIₜ

What it is
The hero’s return for a single tournament relative to Vi.
Why it matters
Shows event-by-event efficiency and feeds consistency measures like σROI.
How to read it
Compare average ROIₜ across heroes/leagues. Use σROI alongside it to judge consistency.
Formula (for the curious)
ROIₜ = HeroETH-equiv / Vi

Overall ROI

What it is
The hero’s cumulative return relative to Vi across the full history you’re viewing.
Why it matters
The cleanest single number for long-run performance.
How to read it
Useful for “did this pay off?” judgement. Pair with league breakdowns to see where the ROI actually came from.
Formula (for the curious)
ROI = Σ(HeroETH-equiv) / Vi

σROI (Volatility)

What it is
The standard deviation of ROIₜ across tournaments.
Why it matters
Lower σ means more predictable outcomes; higher σ means swingy results.
How to read it
Two heroes can share the same avg ROIₜ; the one with lower σ is usually safer to rely on.

Momentum-Level (from gliding_24h / gliding_7d)

🔒 Turning on Momentum-Level metrics in the card panel costs 5 Runes.

Momentum (24h vs 7d)

Snapshot of current “heat” versus the weekly baseline.

  • > 0 → trending up; < 0 → cooling versus the 7-day glide.
  • Great for spotting fresh moves or post-spike cool-downs.

Momentum (24h vs 3d)

  • > 0 → short-term outpacing the 3-day glide; < 0 → losing pace.

Momentum (24h vs 21d)

  • Helps separate noise from real regime shifts.

Momentum (3d vs 21d)

  • Confirms sustained improvement or deterioration.

Alignment score

Count of positive momentum reads (0–4).

  • 3–4 → broad strength; 0–1 → broad weakness; 2 → mixed.

Velocity (rank)

  • > 0 → rank improving (smaller rank numbers); < 0 → slipping.

Acceleration (rank)

  • > 0 → improvement is speeding up; < 0 → fading.

Velocity (score)

  • Pairs with rank velocity to confirm/deny the move.

Acceleration (score)

  • Positive and rising → building power; negative → stalling.

Volatility (24h σ)

How choppy the last day has been.

Lower = steadier; higher = whipsaw.

Volatility (7d σ)

Baseline choppiness over the week.

Use as a context anchor for 24h moves.

Δσ (24h − 7d)

  • < 0 → calming down (often constructive); > 0 → heating up.

CV (7d)

Scale-free stability: volatility relative to the average.

Lower CV = more dependable prints.

Z (24h vs 21d)

Where the latest 24h score sits versus the 21-day average.

  • ~0 = typical; > +1 strong; < −1 weak.

% from 21d high

  • Closer to 0% → pressing highs; larger negatives → pulled back.

% from 21d low

  • Closer to 0% → retesting lows; higher → above the floor.

RSI (14)

Overbought/oversold style momentum read on score changes.

  • > ~55 constructive; < ~45 weak; extremes near 70/30 can mean exhaustion.

Percentile (21d)

How today’s print ranks within the last 21 days.

  • 80% → top quintile; 20% → bottom quintile.

Star Efficiency (live)

“Points per current star” snapshot. If stars go up, this can dip even as score improves.

ΔSE (24h − 7d)

  • > 0 → converting better than the weekly norm; < 0 → softer conversion.

SE z(21d)

  • Positive → unusually efficient; negative → below usual conversion.

Divergence (score vs rank)

Score and rank disagree.

  • Score↑ + Rank↓ → stealth strength; Score↓ + Rank↑ → hollow momentum.

Composite momentum

Single view blending momentum, rank velocity, context, and stability.

  • > +0.20 often bullish; < −0.20 often bearish; in-between = mixed/indecisive.

Tournament-Level (from tournament_histories + usage)

These compare tournament N vs N−1 and incorporate deck participation. “LA” means league-aware: in Reverse leagues, lower hero score is better and all deltas that measure “better vs worse” flip sign accordingly.

ΔHero (norm, LA)

What it is
Change in normalized (rarity-neutral) hero score vs the previous event, league-aware.
Why it matters
Shows whether the card’s underlying (rarity-neutral) performance improved.
How to read it
Positive = improved contribution; negative = slipped. In Reverse, improvement at the score level (lower is better) is reflected by flipping the sign.
Formula (for the curious)
ΔHeronormₙ = HeroNormₙ − HeroNormₙ₋₁
ΔHeronorm,LAₙ = signFlip(ΔHeronormₙ)
signFlip(x) = x in Normal leagues; −x in Reverse leagues.
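As a sketch of the league-aware delta (signFlip as defined above; function names are illustrative):

```python
# Sketch of the league-aware delta; signFlip as defined in the text.
def sign_flip(x, reverse_league):
    return -x if reverse_league else x

def delta_hero_norm_la(norm_now, norm_prev, reverse_league=False):
    """League-aware change in normalized hero score vs the previous tournament."""
    return sign_flip(norm_now - norm_prev, reverse_league)
```

In a Reverse league, a score that drops from 50 to 30 is an improvement, so the delta flips to +20.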

ΔHero (raw, LA)

What it is
Same concept with raw scores (includes rarity multipliers; aligns with rewards), league-aware.
Why it matters
Useful when you care about reward-relevant movement, not just pure performance.
How to read it
Use alongside normalized to separate skill vs. multiplier effects.
Formula (for the curious)
ΔHerorawₙ = HeroRawₙ − HeroRawₙ₋₁
ΔHeroraw,LAₙ = signFlip(ΔHerorawₙ)

TPD (Tournament Performance Delta, LA)

What it is
Change in the history stream’s fantasy_score between tournaments, league-aware.
Why it matters
A simple “up or down” performance gauge that respects Reverse semantics.
How to read it
Positive = improved; negative = regressed. In Reverse, lower score is better, so TPD flips sign. Pair with HP and RAL to see why.
Formula (for the curious)
TPDrawₙ = Scoreₙ − Scoreₙ₋₁
TPDₙ = signFlip(TPDrawₙ)

HP (Hero Participation)

What it is
Hero’s share of deck output in the tournament (lineup-aware).
Why it matters
Determines how much of the deck reward is attributed to this hero.
How to read it
Normal leagues use a direct ratio; Reverse uses a reverse-favoring split on normalized bases (zero-first with lineup awareness). Higher share → bigger rewards.
Formula (for the curious)
Normal (raw bases): hp = HeroRaw / DeckRaw
Reverse (normalized bases): hp = soft reverse split on lineup bases
Implementation detail: Reverse uses a zero-first, lineup-aware weighting on normalized bases; if any teammate has ~0 base, only those zeros share equally.

DPI (Deck Push Index)

What it is
RAW-basis participation surprise: did the hero carry more or less than their own historical baseline?
Why it matters
Separates “deck did well” from “this hero actually pushed the deck,” independent of Reverse normalization.
How to read it
> 0 pushed more than usual; < 0 dragged vs baseline. Baseline currently averages all past tournaments (any league).
Formula (for the curious)
hpraw = HeroRaw / DeckRaw
ExpectedHPraw = mean(past hpraw)
DPIₙ = hprawₙ − ExpectedHPraw

Note: A Reverse “great” score (often near 0) can still yield a negative DPI if this event’s hpraw is below your own historical average. A per-league baseline may be added in the future.
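The DPI steps above can be sketched directly (the baseline is the mean of past raw participation, per the text; names are illustrative):

```python
# Sketch of DPI: raw-basis participation surprise vs the hero's own history.
def dpi(hero_raw, deck_raw, past_hp_raw):
    hp_raw = hero_raw / deck_raw                      # this event's raw share
    expected = sum(past_hp_raw) / len(past_hp_raw)    # historical baseline
    return hp_raw - expected
```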

RAL (Reward Attribution Lift, ETH-eq)

What it is
Extra ETH-equivalent this event vs what we’d expect at your league-aware participation baseline.
Why it matters
Answers: “Did this hero add value beyond their usual share?”
How to read it
> 0 added more value than expected; < 0 underdelivered. Uses the same per-event fragment overrides for ETH-equivalent.
Formula (for the curious)
ExpectedHP = mean(past hp) (league-aware hp as used for reward splits)
DeckRewardₙethEq = ETHₙ + Fragsₙ × rateₙ
HeroRewardₙethEq = DeckRewardₙethEq × hpₙ
Expected = DeckRewardₙethEq × ExpectedHP
RALₙ = HeroRewardₙethEq − Expected
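The four steps above can be sketched as one function (rate is the per-event fragment rate; names are illustrative):

```python
# Sketch of RAL following the steps above; rate is the per-event fragment rate.
def ral(deck_eth, deck_frags, rate, hp_now, past_hp):
    deck_eth_eq = deck_eth + deck_frags * rate          # deck reward in ETH-equiv
    expected_hp = sum(past_hp) / len(past_hp)           # participation baseline
    hero_eth_eq = deck_eth_eq * hp_now                  # actual hero slice
    return hero_eth_eq - deck_eth_eq * expected_hp      # lift vs expectation
```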

Consistency (σ, CV)

What it is
Steadiness of the hero’s tournament scores.
Why it matters
Lower variability makes planning easier and reduces lineup risk.
How to read it
σ is absolute choppiness; CV scales σ by the mean to make comparisons fair across levels.
Formula (for the curious)
σ = stdev(last m tournament scores)
CV = σ / mean
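A sketch of the consistency pair; whether the app uses sample or population standard deviation isn't specified here, so sample stdev is an assumption:

```python
import statistics

# Sketch of sigma/CV over the last m tournament scores.
# Assumption: sample stdev; the app's exact estimator isn't documented here.
def consistency(scores):
    """Returns (sigma, cv); cv is None when the mean is zero."""
    sigma = statistics.stdev(scores) if len(scores) > 1 else 0.0
    mean = statistics.mean(scores)
    cv = sigma / mean if mean else None
    return sigma, cv
```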

Practical reading patterns

  • Reverse quick read: lower score is better; big positive TPD/ΔHero (LA) after a zero means improvement.
  • DPI vs Reverse scores: zero score can still show DPI < 0 if your hpraw this event is below your own historical average; this is expected.
  • Which ΔHero to trust? Use normalized for rarity-neutral performance; use raw for reward relevance.
  • Baseline scope: DPI baseline currently spans all leagues; per-league baseline may be introduced later.
  • Star Efficiency (live): uses current star count for both 24h/7d; per-tournament star history is not available.

Extended Momentum & Stability Metrics

Z(7d vs 3w)

Z-score comparing the 7-day average to the 3-tournament (≈21-day) mean.
> +1 = strong short-term uptrend; < −1 = short-term weakness vs baseline.

Warmth (3d)

Alias of Z(3d vs 3w). Measures how “hot” the last 3 days were compared to the 3-tournament baseline.
High positive → building momentum; negative → cooling down.

Z(max 3d vs 3w)

Z-score of the peak 24 h in the last 3 days versus the 3-week mean.
Highlights spikes: > +2 = viral moment or breakout; values near 0 = typical range.

Spike Breadth (3d)

Fraction of 3-day samples ≥ 80 % of the max.
0–0.2 = one-off burst; 0.4+ = sustained heat.

Z(24h vs 28d)

Where the latest 24 h score sits relative to the 4-tournament (~28 d) mean.
> +1 = breaking higher vs long baseline; < −1 = lagging long term.

σ (28d)

Standard deviation of last 4 tournament scores.
High σ → volatile performer; low σ → stable and predictable.

CV (28d)

Coefficient of variation over 4 tournaments (σ / mean).
< 0.10 = excellent steadiness; > 0.30 = erratic results.

σ Ratio (7d / 28d)

Compares short-term to long-term volatility.
> 1 = short-term turbulence; < 1 = calming trend.

Signal Quality (7d)

Reliability of recent momentum, blending z7_vs_3w and σ7d.
> 0.6 = clean, reliable trend; < 0.3 = noisy/uncertain.

Confirmations (short + mid)

Count of confirming conditions among {z7_vs_3w > 0, ema7 > ema14, mom21_28 > 0}.
3 = strong multi-timeframe alignment; 0 = no agreement.

Composite Momentum Score

Weighted summary of short + mid momentum vs anchors, used for labeling.
> +0.20 = bullish; < −0.20 = bearish; near 0 = neutral.

EMA 7 / 14 & Cross-Age

Short and medium exponential averages of the 24 h series. Cross-Age = time since last bullish 7 > 14 crossover.
A long Cross-Age → persistent trend; a very short one → fresh reversal.

Max Drawdown (14d)

Largest peak-to-trough loss over 14 days.
Shallower than −10 % = healthy pullback; deeper than −25 % = deep correction.

Ulcer Index (14d / 28d)

Root-mean-square drawdown — penalizes sustained stress more than brief drops.
< 5 % = smooth; > 15 % = rough ride.

Hurst Exponent (7–14d)

Trend persistence index.
> 0.55 = sustained trend; < 0.45 = mean-reverting chop.

Anchor Sharpe (8 w)

Reward-to-risk measure using 8-week tournament anchors.
Higher = more efficient long-term performance.

Recovery Rate (14d)

Speed of rebound from the last trough relative to its depth.
> 0.5 = strong comeback; < 0.2 = slow recovery.

Ulcer Performance Ratio

Anchor Sharpe / Ulcer Index — combines return and pain into one score.
> 2 = excellent efficiency; < 1 = inefficient or volatile.

Consensus Momentum

Robust composite blending all timeframe momentum readings.
> +0.5 = broad strength; < −0.5 = broad weakness.

Average Score (last K tournaments)

Rolling mean of last 4 / 8 / 12 tournament scores.
Use as steady anchors to judge regime shifts and normalize newer prints.

DDIᴺ, DDIᴿ & Rarity Premium

What it is
DDI (Deck Dependency Index) estimates how much of the deck’s score comes from this hero.
Why it matters
High DDI heroes are central to results; low DDI heroes are more supporting.
How to read it
Use DDIᴺ for rarity-neutral comparisons across cards. Use DDIᴿ when you care about reward-weighted impact.

DDIᴺ — rarity-neutral

Uses normalized scores (no rarity multipliers).

Formula (for the curious)
DDIᴺ (weighted) = Σ(HeroNorm) / Σ(DeckNorm)
DDIᴺ (avg) = mean(HeroNorm / DeckNorm)

DDIᴿ — rarity-weighted

Uses raw scores (with rarity multipliers) — aligns with reward splits.

Formula (for the curious)
DDIᴿ (weighted) = Σ(HeroRaw) / Σ(DeckRaw)
DDIᴿ (avg) = mean(HeroRaw / DeckRaw)
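The "weighted" vs "avg" distinction applies the same way on either score basis, so one sketch covers both (function names are illustrative):

```python
# Sketch: "weighted" = ratio of totals; "avg" = mean of per-tournament ratios.
# Pass normalized scores for DDI-N, raw scores for DDI-R.
def ddi_weighted(hero_scores, deck_scores):
    return sum(hero_scores) / sum(deck_scores)

def ddi_avg(hero_scores, deck_scores):
    ratios = [h / d for h, d in zip(hero_scores, deck_scores)]
    return sum(ratios) / len(ratios)
```

The two can differ: with hero scores [10, 40] against deck scores [100, 200], the weighted form gives 50/300 ≈ 0.167 while the average of ratios gives 0.15, because the weighted form lets the higher-scoring tournament count more.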

Interpretation bands (rules of thumb):

  • < 0.15 — Low dependency (filler role).
  • 0.15–0.30 — Normal share.
  • > 0.30 — High dependency (carry).

Rarity Premium

What it is
The % change in contribution once rarity multipliers are applied (DDIᴿ vs DDIᴺ).
Why it matters
Tells you if multipliers (or Reverse exposure) are helping or hurting this card’s share.
How to read it
> 0% → multipliers help; ≈ 0% → little effect; < 0% → multipliers hurt (often teammates have higher rarity in non-Reverse).
Formula (for the curious)
Rarity Premium = (DDIᴿ / DDIᴺ) − 1

Hero Win (per tournament)

What it is
ETH-equivalent earned by this hero in a specific tournament after apportioning.
Why it matters
A fair, apples-to-apples way to compare across lineups and leagues.
How to read it
Use to rank your heroes on actual value created per event.
Formula (for the curious)
Hero Win = HeroETH-equiv = (DeckETH + DeckFrags × rate) × hp

Rounding & Display Rules

  • ETH values typically show 5 decimals.
  • Fragments are rounded in the UI; calculations use exact fractional values.
  • Percentages display to 2 decimals.
  • When deck rewards are zero and Vi > 0, ROIₜ displays 0.00% (or is intentionally muted).

Break-even (by league)

What it is
Approximate number of tournaments to recover Vi in a given league if future performance matches the historical average.
Why it matters
Useful for planning: “If things stay like they’ve been, how long to get my cost back?”
How to read it
Works best when avg ROIₜ is stable and positive. If avg(ROIₜ) ≤ 0 or Vi = 0, break-even is not shown.
Formula (for the curious)
Break-even ≈ 1 / avg(ROIₜ)
Where ROIₜ = HeroETH-equiv / Vi. Fragment rate affects ETH-equivalent and thus ROI.
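A sketch of the estimate, with None standing in for the "not shown" cases (names are illustrative):

```python
# Sketch of the break-even estimate; None models the "not shown" cases.
def break_even(avg_roi_t):
    if avg_roi_t is None or avg_roi_t <= 0:
        return None  # not shown when avg(ROI_t) <= 0 (or Vi = 0 upstream)
    return 1.0 / avg_roi_t
```

For example, an average ROIₜ of 5% implies roughly 20 tournaments to recover Vi.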

League breakdown table

What it is
An aggregate view by league showing usage, ROI, volatility, DDI, and estimated break-even.
Why it matters
Helps you place cards where they historically worked best.
How to read it
Look for leagues with healthy ROI, lower σROIₜ, and reasonable break-even. Use DDI to understand role in those leagues.

Notes: Participation and rewards are Reverse-aware; fragment rate affects ETH-equivalent and thus ROI.

Zero Rewards & Missing Data

  • If DeckScore = 0, we set hp = 0 for that tournament.
  • If rewards are missing, we treat them as zero for ROI and label the row accordingly.
  • If Vi = 0, ROI metrics are not shown, to avoid division by zero.

Worked Examples

Example A: Deck rewards 0.003 ETH and 50 Frags; rate= 0.00006; hp=30%.

  • DeckETH-equiv = 0.003 + 50×0.00006 = 0.006 ETH
  • HeroETH-equiv = 0.006 × 0.30 = 0.0018 ETH
  • If Vi = 0.04 → ROIₜ = 0.0018 / 0.04 = 4.50%

Example B: Deck rewards 0 ETH and 20 Frags; hp≈19%; Vi=0.03810; rate=0.00006.

  • DeckETH-equiv = 20×0.00006 = 0.00120 ETH
  • HeroFrags = 20×0.19 = 3.8 → HeroETH-equiv ≈ 3.8×0.00006 = 0.000228 ETH
  • ROIₜ ≈ 0.000228 / 0.03810 ≈ 0.60%
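Both worked examples can be reproduced directly; the numbers below come from the text:

```python
# Reproducing Examples A and B above; all inputs are from the text.
RATE = 0.00006  # ETH per Frag

# Example A: 0.003 ETH + 50 Frags, hp = 30%, Vi = 0.04
deck_eq_a = 0.003 + 50 * RATE        # 0.006 ETH
hero_eq_a = deck_eq_a * 0.30         # 0.0018 ETH
roi_a = hero_eq_a / 0.04             # 0.045 -> 4.50%

# Example B: 0 ETH + 20 Frags, hp = 19%, Vi = 0.03810
hero_frags_b = 20 * 0.19             # 3.8 (exact fractional Frags)
hero_eq_b = hero_frags_b * RATE      # ~0.000228 ETH
roi_b = hero_eq_b / 0.03810          # ~0.60%
```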

FAQ

Why are some tournaments missing?

If you flipped a card (sold and rebought it), or if you've been playing since FantasyTop was on the Blast chain, you need to create a canonical token using HRI mode to link the cards. Check "How to" to learn how to create a canonical token.

Why can “Frags: 4 (≈ 0.00023 ETH)” be less than 4 × rate?

UI rounds Frags for readability; calculations use exact fractional Frags (e.g., 3.80), so ETH-equivalent reflects the precise number.

What’s the difference between ROI and ROIₜ?

ROI is overall performance: Σ(hero ETH-equiv)/Vi. ROIₜ is the per-tournament value, which we average and use for σROI.

How are DDIᴺ and DDIᴿ different? What do “weighted” vs “avg” mean?

DDIᴺ uses normalized (rarity-neutral) scores. DDIᴿ uses raw (rarity-weighted) scores that align with reward splits. “Weighted” uses totals (ΣHero/ΣDeck) so high-scoring tournaments count more; “avg” is the simple mean of per-tournament ratios (Hero/Deck).

Can I change the fragment rate?

Yes—most views expose a Fragment Rate control. Changing it updates ETH-equivalent and all ROI metrics. You can also change the ETH per frag rate for each tournament individually. Check "How to" to find out how.

Need help interpreting a specific card? Head back to Home, search a wallet, and open a card.
Pro Tips
  • Use a custom Vi to reflect your true cost basis.
  • Compare DDI and σROI to spot consistent contributors vs. high-variance heroes.
  • Hero Win is already apportioned by participation—use it to compare lineups fairly.
  • Watch Momentum and Velocity during the window; negative momentum right after a spike is normal.