# Markov Chains
A Markov chain (or Markov process) is a mathematical object that describes a stochastic process: a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
## Three Key Concepts of the Markov Chain
>[!note] 1. State Space
>The state space defines the possible outcomes of the process under discussion. For example, in a process involving a coin toss, the state space is heads and tails $\rightarrow \{H, T\}$.
>[!note] 2. Markov Assumption
>What happens in the current state depends only on the previous state. For example, the weather today depends only on the weather of yesterday:
>$P(w_t|w_{t-1}, w_{t-2}, ...) = P(w_t|w_{t-1})$
> Everything that will happen in the future depends only on what is happening right now.
>[!note] 3. Transition Matrix
>A compact form that records the probabilities of transitioning from one state to another.
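To tie the three concepts together, here is a minimal sketch in Python (the two-state sunny/rainy weather model and its probabilities are made up for illustration): the state space is the list of labels, the transition matrix holds the probabilities, and the Markov assumption shows up in the fact that the next state is sampled using only the current one.

```python
import numpy as np

# Hypothetical two-state weather model; the probabilities are made up.
states = ["sunny", "rainy"]              # 1. state space
P = np.array([[0.8, 0.2],                # 3. transition matrix:
              [0.4, 0.6]])               #    rows = current state, columns = next state
assert np.allclose(P.sum(axis=1), 1.0)   # each row sums to 1

rng = np.random.default_rng(seed=0)

def step(current: int) -> int:
    """2. Markov assumption: the next state depends only on the current state."""
    return rng.choice(len(states), p=P[current])

# Simulate a short trajectory starting from "sunny".
state = 0
trajectory = [states[state]]
for _ in range(10):
    state = step(state)
    trajectory.append(states[state])
print(" -> ".join(trajectory))

# A distribution over states is pushed forward one step by right-multiplying by P.
p0 = np.array([1.0, 0.0])                # start: certainly sunny
print(p0 @ P)                            # distribution over {sunny, rainy} after one step
```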
### Example
Suppose there are two players, A and B, and a six-sided die is rolled repeatedly. Player A bets that *two consecutive rolls that sum to 10* will occur first. Player B bets that *two consecutive 6s* will occur first. The players keep rolling the die until one of them wins.
What is the probability that A will win?
In order for player A to win, a pair of consecutive rolls must be an element of the following set:
$\{(5, 5), (4, 6), (6, 4)\}$
For player B to win, the only option is to roll two $6$s in a row.
Here is a Markov chain representing the game between player A and player B:
$\text{Red} \rightarrow \text{Player }A \text{ Wins}$
$\text{Green} \rightarrow \text{Player }B \text{ Wins}$
![[die_rolling_mkv_chain.jpeg]]
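Before setting up any algebra, the answer can be estimated empirically. Below is a minimal Monte Carlo sketch (assuming a fair six-sided die and the winning conditions above); it replays the game many times and counts how often A wins, so it only approximates the true probability.

```python
import random

def play_game(rng: random.Random) -> str:
    """Roll a fair die until A (two consecutive rolls summing to 10) or B (two consecutive 6s) wins."""
    prev = rng.randint(1, 6)
    while True:
        curr = rng.randint(1, 6)
        if prev + curr == 10:
            return "A"
        if prev == 6 and curr == 6:
            return "B"
        prev = curr

rng = random.Random(0)
n_games = 100_000
a_wins = sum(play_game(rng) == "A" for _ in range(n_games))
print(f"Estimated P(A wins) ≈ {a_wins / n_games:.4f}")
```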
## The Transition [[Matrix]]
The transition matrix simply represents the Markov process in an [[Algebra]]ic form. It is a form of input/output matrix: rows represent the input (current) state and columns the output (next) state. The transition matrix for the problem above is as follows:
![[transition_mtx.jpeg]]
>[!note] Properties of the Transition Matrix
>1. Square matrix
>2. Discrete states
>3. Constant probabilities
>4. Each row of the matrix must add up to 1
States that stay the same with probability 1 (the sinks of the chain) are called *absorbing*: once you enter such a state, you cannot leave it.
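The win probability asked for in the example can now be computed exactly from the transition matrix using standard absorbing-chain algebra: split the matrix into the transient-to-transient block $Q$ and the transient-to-absorbing block $R$; the absorption probabilities are then $(I - Q)^{-1} R$. The sketch below uses my own encoding of the transient states (previous roll was 1-3, 4, 5, or 6), which may be laid out differently from the matrix in the image above.

```python
import numpy as np

# Transient states, indexed by what the previous roll was:
#   0: previous roll in {1, 2, 3}, 1: previous roll = 4, 2: previous roll = 5, 3: previous roll = 6
# Absorbing states: 0: "A wins", 1: "B wins"
Q = np.array([
    [3/6, 1/6, 1/6, 1/6],   # from {1,2,3}: no roll can end the game
    [3/6, 1/6, 1/6, 0/6],   # from 4: rolling a 6 ends the game (sum 10), so no move to "prev = 6"
    [3/6, 1/6, 0/6, 1/6],   # from 5: rolling a 5 ends the game (sum 10), so no move to "prev = 5"
    [3/6, 0/6, 1/6, 0/6],   # from 6: rolling a 4 (sum 10) or a 6 (two 6s) ends the game
])
R = np.array([
    [0/6, 0/6],             # from {1,2,3}: nobody can win on this roll
    [1/6, 0/6],             # from 4: a 6 gives A the win
    [1/6, 0/6],             # from 5: a 5 gives A the win
    [1/6, 1/6],             # from 6: a 4 gives A the win, a 6 gives B the win
])

# Rows of the full transition matrix [Q | R] must sum to 1.
assert np.allclose(np.hstack([Q, R]).sum(axis=1), 1.0)

# absorb[i, j] = probability of ending in absorbing state j starting from transient state i.
absorb = np.linalg.inv(np.eye(4) - Q) @ R

# The first roll lands in "prev in {1,2,3}" with probability 1/2 and in each other transient state with 1/6.
start = np.array([3/6, 1/6, 1/6, 1/6])
print("P(A wins), P(B wins):", start @ absorb)
```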
### Other Example - [[Moose Hunting Probability Game]]
This seems to be a tricky one to solve.
## Other Relations
[[Probability Theory]]