Markov chain

From Citizendium
Revision as of 18:32, 8 October 2007 by Brian Napoletano (Basic introduction to Markov chains)

A Markov chain is a Markov process with a discrete time parameter [1]. The Markov chain is a useful way to model systems with no long-term memory of previous states. That is, the state of the system at time $t + 1$ is solely a function of its state at time $t$, and not of any earlier states [2].

A Formal Model

The influence of the values of $X^{(0)}, X^{(1)}, \ldots, X^{(n)}$ on the distribution of $X^{(n+1)}$ can be formally modelled as:

$$P\left(X^{(n+1)} = y \mid X^{(0)} = x_0,\, X^{(1)} = x_1,\, \ldots,\, X^{(n)} = x_n\right) = P\left(X^{(n+1)} = y \mid X^{(n)} = x_n\right)$$ Eq. 1

In this model, the superscripts $0, 1, \ldots, n$ index the variables of the series. These indexes commonly represent the time component, and the set of values that the $X^{(n)}$ can take is the Markov chain's state space [1].
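Eq. 1 implies that a simulation needs only the current state to draw the next one. The following sketch illustrates this in Python; the two-state "weather" chain and its transition probabilities are hypothetical examples chosen for illustration, not taken from the text.

```python
import random

# Hypothetical two-state chain: TRANSITIONS[x][y] = p(y | x).
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state, rng):
    """Draw X^(n+1) given only X^(n); no earlier history is consulted."""
    r = rng.random()
    cumulative = 0.0
    for next_state, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point rounding

def simulate(initial, n_steps, seed=0):
    """Generate the sequence X^(0), X^(1), ..., X^(n_steps)."""
    rng = random.Random(seed)
    chain = [initial]
    for _ in range(n_steps):
        chain.append(step(chain[-1], rng))
    return chain
```

Because of the Markov property, `step` receives only the current state; a model with longer memory would need the whole prefix of the chain.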

Probability Density

The Markov chain can also be specified using a series of probabilities. If the probability of the chain being in state $x$ at time $n$ is $p_{(n)}(x)$, then the probability of state $y$ occurring at time $n + 1$ can be expressed as:

$$p_{(n+1)}\left(y\right) = \sum_{x} p_{(n)}\left(x\right) p\left(y \mid x\right)$$ Eq. 2

In words, this states that the probability of the system entering state $y$ at time $n + 1$ is the sum, over all states $x$, of the probability of being in state $x$ at time $n$ multiplied by the transition probability $p\left(y \mid x\right)$ of state $y$ given state $x$ [2].
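Eq. 2 can be carried out directly as a sum over states. A minimal sketch in Python, again using a hypothetical two-state chain (the states and probabilities are illustrative assumptions, not from the text):

```python
# Hypothetical two-state chain: TRANSITIONS[x][y] = p(y | x).
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def propagate(p_n, transition):
    """One step of Eq. 2: p_(n+1)(y) = sum_x p_(n)(x) * p(y | x).

    p_n: dict mapping each state x to its probability at time n.
    transition: dict mapping each state x to a dict {y: p(y | x)}.
    """
    p_next = {y: 0.0 for y in p_n}
    for x, p_x in p_n.items():
        for y, p_y_given_x in transition[x].items():
            p_next[y] += p_x * p_y_given_x
    return p_next

# Starting with certainty in one state, one application of Eq. 2
# spreads the probability according to the transition probabilities.
p0 = {"sunny": 1.0, "rainy": 0.0}
p1 = propagate(p0, TRANSITIONS)
```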

Invariant Distributions

In many cases, the density $p_{(n)}\left(y\right)$ will approach a limit $p\left(y\right)$ that is uniquely determined by the transition probabilities $p\left(y \mid x\right)$, and not by the initial distribution $p_{(0)}\left(x\right)$. This limiting distribution is referred to as the invariant (or stationary) distribution over the states of the Markov chain. When such a distribution is reached, it persists forever, since it satisfies $p\left(y\right) = \sum_{x} p\left(x\right) p\left(y \mid x\right)$.
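One way to approximate the invariant distribution is simply to iterate Eq. 2 until the distribution stops changing. A sketch under the same hypothetical two-state chain used above (states and probabilities are illustrative assumptions):

```python
# Hypothetical two-state chain: TRANSITIONS[x][y] = p(y | x).
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def invariant_distribution(transition, tol=1e-12, max_iter=10_000):
    """Iterate Eq. 2 from a uniform start until a fixed point is reached."""
    states = list(transition)
    p = {s: 1.0 / len(states) for s in states}  # arbitrary starting density
    for _ in range(max_iter):
        p_next = {y: 0.0 for y in states}
        for x, p_x in p.items():
            for y, p_y_given_x in transition[x].items():
                p_next[y] += p_x * p_y_given_x
        if max(abs(p_next[s] - p[s]) for s in states) < tol:
            return p_next  # p is (numerically) invariant under Eq. 2
        p = p_next
    return p
```

For this particular chain the fixed point can be checked by hand: invariance requires $p(\text{sunny}) = 0.9\, p(\text{sunny}) + 0.5\, p(\text{rainy})$, giving $p(\text{sunny}) = 5/6$ and $p(\text{rainy}) = 1/6$.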

References

  1. Neal, R.M. (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report TR-931, Department of Computer Science, University of Toronto. http://www.cs.toronto.edu/~radford/review.abstract.html
  2. Lee, P.M. (2004) Bayesian Statistics: An Introduction. New York: Hodder Arnold. 368 p.