Implementing Markov Chain in Python

Keywords: Markov Chain, Python, probability, data analysis, data science

Markov Chain

A Markov chain is a probabilistic model that describes a sequence of observations in which each observation depends statistically only on the one immediately before it. This article walks through implementing a Markov chain in Python.

The Markov chain itself is described in one of the earlier posts. For a better understanding of the concept, review that post before proceeding further.
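In short, the Markov property states that the probability of the next state depends only on the current state and not on the earlier history of the chain:

P(X_{t+1} = s | X_t, X_{t-1}, ..., X_0) = P(X_{t+1} = s | X_t)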

We will model a car’s behavior using the same transition matrix and starting probabilities described in the earlier post (see Figure 1). The matrix defines the probabilities of transitioning between the different states: accelerating, maintaining a constant speed, idling, and braking.

Figure 1: Modeling a car’s behavior using a Markov chain model

The starting probabilities indicate that the car begins in the brake state with probability 1, i.e., it is initially stopped and not moving.

Python implementation

Here’s the sample code in Python that implements the above model:

import random

# Transition matrix for the Markov chain: P(next state | current state)
transition_matrix = {
    'accelerate':     {'accelerate': 0.3, 'constant speed': 0.2,  'idling': 0,   'brake': 0.5},
    'constant speed': {'accelerate': 0.1, 'constant speed': 0.4,  'idling': 0,   'brake': 0.5},
    'idling':         {'accelerate': 0.8, 'constant speed': 0,    'idling': 0.2, 'brake': 0},
    'brake':          {'accelerate': 0.4, 'constant speed': 0.05, 'idling': 0.5, 'brake': 0.05},
}

# Starting probabilities for each state (the car starts in the brake state)
starting_probabilities = {'accelerate': 0, 'constant speed': 0, 'idling': 0, 'brake': 1}

# Choose the starting state randomly based on the starting probabilities
current_state = random.choices(
    population=list(starting_probabilities.keys()),
    weights=list(starting_probabilities.values())
)[0]

# Generate a sequence of states using the transition matrix
num_iterations = 10
for i in range(num_iterations):
    print(current_state)
    next_state = random.choices(
        population=list(transition_matrix[current_state].keys()),
        weights=list(transition_matrix[current_state].values())
    )[0]
    current_state = next_state

In this example, we use the random.choices() function to pick the starting state at random according to the starting probabilities; it draws one item from the given population using the given weights. We then generate a sequence of 10 states using the transition matrix and print each state as it is generated.
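If random.choices() is unfamiliar, here is a minimal, standalone illustration (not part of the original script) of how it performs weighted sampling:

import random

# 'brake' should be drawn roughly four times as often as 'idling'
samples = random.choices(population=['idling', 'brake'], weights=[0.2, 0.8], k=5)
print(samples)  # e.g. ['brake', 'brake', 'idling', 'brake', 'brake']

A sample output of the program is given below.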

>>> exec(open('markov_chain.py').read())  # Python 3
brake
idling
accelerate
brake
accelerate
brake
accelerate
constant speed
brake
accelerate

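As a possible extension (not shown in the original post), you can run the chain for many more steps and count how often each state is visited; the relative frequencies approximate the long-run (stationary) distribution of the chain. Below is a minimal sketch, reusing the same transition matrix and starting probabilities as above and adding a sanity check that each row of the matrix sums to 1:

import random
from collections import Counter

# Same transition matrix and starting probabilities as in the script above
transition_matrix = {
    'accelerate':     {'accelerate': 0.3, 'constant speed': 0.2,  'idling': 0,   'brake': 0.5},
    'constant speed': {'accelerate': 0.1, 'constant speed': 0.4,  'idling': 0,   'brake': 0.5},
    'idling':         {'accelerate': 0.8, 'constant speed': 0,    'idling': 0.2, 'brake': 0},
    'brake':          {'accelerate': 0.4, 'constant speed': 0.05, 'idling': 0.5, 'brake': 0.05},
}
starting_probabilities = {'accelerate': 0, 'constant speed': 0, 'idling': 0, 'brake': 1}

# Sanity check: every row of the transition matrix must sum to 1
for state, row in transition_matrix.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9, f"probabilities for {state} do not sum to 1"

# Simulate a long run of the chain and count how often each state is visited
num_steps = 100_000
state = random.choices(
    population=list(starting_probabilities.keys()),
    weights=list(starting_probabilities.values())
)[0]
counts = Counter()
for _ in range(num_steps):
    counts[state] += 1
    state = random.choices(
        population=list(transition_matrix[state].keys()),
        weights=list(transition_matrix[state].values())
    )[0]

# Relative visit frequencies approximate the stationary distribution
for s, c in counts.most_common():
    print(f"{s}: {c / num_steps:.3f}")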
Post your valuable comments!