Reward Variance in Markov Chains: A Calculational Approach
Tom Verhoeff
Eindhoven University of Technology
2004

Abstract


This paper introduces a new method for calculating the variance of rewards accumulated until absorption in a Markov chain. Traditional approaches derive variance from the second moment (the expectation of the squared reward), which risks catastrophic cancellation when the second moment and the squared first moment are nearly equal. The proposed technique instead establishes a direct system of linear equations for the variance at each state, requiring only the first moment (the reward expectation). By conditioning on the first transition step, recursive relationships express a state's variance in terms of the variances and expectations of its successor states, avoiding explicit calculation of second moments.

The derivation leverages properties of expectations over walks until absorption in a subset of states. The resulting variance equations can be solved efficiently, e.g., by backward substitution for absorbing chains without cycles. Extensions handle covariances between different reward structures. Applying this to a large Yahtzee model demonstrates practical value by computing variances and covariances of final scores for different category scoring choices. Overall, this offers a numerically robust, computationally efficient approach for analyzing stochastic processes via absorbing Markov chains.
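To make the idea concrete, here is a minimal sketch of the direct variance equations on a small hypothetical acyclic absorbing chain. The transition matrix `P`, the per-state rewards `r`, and the state ordering are invented for illustration and are not taken from the paper; the recursion itself is the law-of-total-variance decomposition the abstract describes, solved by backward substitution since the chain has no cycles.

```python
import numpy as np

# Hypothetical chain: states 0-2 are transient, state 3 is absorbing.
# P[s][t] is the probability of moving from s to t; r[s] is the reward
# earned when leaving state s (0 in the absorbing state).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],  # absorbing: stays put, earns nothing
])
r = np.array([1.0, 2.0, 3.0, 0.0])

E = np.zeros(4)  # expected total reward until absorption, per state
V = np.zeros(4)  # variance of total reward until absorption, per state

# Backward substitution in reverse topological order of the transient states.
for s in [2, 1, 0]:
    # First moment, by conditioning on the first transition:
    #   E[s] = r[s] + sum_t P[s,t] * E[t]
    E[s] = r[s] + P[s] @ E
    # Direct variance equation (law of total variance), using only
    # first moments of successor states -- no second moments of the
    # accumulated reward appear:
    #   V[s] = sum_t P[s,t] * V[t]                (expected cond. variance)
    #        + sum_t P[s,t] * E[t]**2 - m**2      (variance of cond. expectation)
    m = P[s] @ E
    V[s] = P[s] @ V + P[s] @ (E ** 2) - m ** 2

print(E[0], V[0])  # expectation and variance of total reward from state 0
```

For this toy chain the three possible walks from state 0 earn 6, 3, and 4 with probabilities 0.25, 0.25, and 0.5, so the printed values can be checked by hand against the enumeration. In a cyclic chain the same equations still hold but must be solved as a simultaneous linear system rather than by substitution.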
