Scikit learn Hidden Markov Model

In this Python tutorial, we will learn how to create a scikit learn Markov model in Python, and we will also cover several examples related to the Markov model. We will cover these topics:

  • What is scikit learn Markov model?
  • What made scikit learn Markov model hidden
  • Scikit learn hidden Markov model example

What is scikit learn Markov model?

In this section, we will learn about the scikit learn Markov model in Python and how it works.

A Markov model is a stochastic process in which the probability of the future state depends only on the current state of the process.
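
In symbols, the Markov property says that the next state depends only on the present state and not on the earlier history:

P(X(t+1) | X(t), X(t-1), ..., X(0)) = P(X(t+1) | X(t))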

Example:

Suppose we want to model the future probability that our cat is in one of three states, given its current state.

We assume our cat is very lazy. We define its states as sleeping, eating, and pooping, and set the initial probabilities to 45%, 35%, and 20% respectively.

Code:

import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plot
%matplotlib inline

states = ['sleeping', 'eating', 'pooping']
pi = [0.45, 0.35, 0.2]
state_space = pd.Series(pi, index=states, name='states')
print(state_space)
print(state_space.sum())

After running the above code, we get the following output, in which we can see the probability assigned to each state of the cat.

scikit learn Markov model prediction

The next step is to define the probability of staying in the same state or moving to a different state, given the current state.

q_df = pd.DataFrame(columns=states, index=states)
q_df.loc[states[0]] = [0.4, 0.2, 0.4]
q_df.loc[states[1]] = [0.40, 0.40, 0.2]
q_df.loc[states[2]] = [0.40, 0.20, .4]

print(q_df)

q_f = q_df.values
print('\n', q_f, q_f.shape, '\n')
print(q_df.sum(axis=1))
scikit learn Markov model transition process
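
With the initial probabilities and the transition matrix in place, the chain can already be simulated. The following short sketch is not part of the original tutorial; it simply draws each next state using only the current state's row of q_df:

rng = np.random.default_rng(42)
current = 'sleeping'
for _ in range(5):
    # the next state depends only on the current state's row of probabilities
    row = q_df.loc[current].values.astype(float)
    current = rng.choice(states, p=row)
    print(current)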

After completing the above steps, we have the initial and transition probabilities, and now we create a Markov diagram using the NetworkX package.

from pprint import pprint 


def _get_markov_edges(Q):
    edge = {}
    for column in Q.columns:
        for index in Q.index:
            edge[(index,column)] = Q.loc[index,column]
    return edge

edge_wt = _get_markov_edges(q_df)
pprint(edge_wt)
scikit learn Markov model using networkx

Now we can create a scikit learn Markov model graph.

  • Graph.add_nodes_from(states) is used to create nodes corresponding to the states.
  • Graph.add_edge(tmp_origin, tmp_destination, weight=v, label=v) is used to add edges that represent the transition probabilities.
Graph = nx.MultiDiGraph()


Graph.add_nodes_from(states)
print(f'Nodes:\n{Graph.nodes()}\n')


for k, v in edge_wt.items():
    tmp_origin, tmp_destination = k[0], k[1]
    Graph.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
print(f'Edges:')
pprint(Graph.edges(data=True))    

position = nx.drawing.nx_pydot.graphviz_layout(Graph, prog='dot')
nx.draw_networkx(Graph, position)


edge_labels = {(n1,n2):d['label'] for n1,n2,d in Graph.edges(data=True)}
nx.draw_networkx_edge_labels(Graph , position, edge_labels=edge_labels)
nx.drawing.nx_pydot.write_dot(Graph, 'pet_dog_markov.dot')
Scikit learn Markov model graph

Read: Scikit-learn logistic regression

What made scikit learn Markov model hidden

In this section, we will learn about the scikit learn hidden Markov model and what makes a Markov model hidden.

Consider that our cat is acting strangely and we want to find out why: is its behaviour due to sickness, or is it simply acting that way? We can only observe the cat's behaviour, not its health, which is what makes these states hidden.

Code:

First, we create the hidden state space, in which the cat is either healthy or sick.

hidden_state = ['healthy', 'sick']
pi = [0.55, 0.45]
state_space = pd.Series(pi, index=hidden_state, name='states')
print(state_space)
print('\n', state_space.sum())
scikit learn Markov hidden state space

In the next step, we create a transition matrix for the hidden states.

a1_df = pd.DataFrame(columns=hidden_state, index=hidden_state)
a1_df.loc[hidden_state[0]] = [0.7, 0.3]
a1_df.loc[hidden_state[1]] = [0.4, 0.6]

print(a1_df)

a1 = a1_df.values
print('\n', a1, a1.shape, '\n')
print(a1_df.sum(axis=1))
scikit learn transition matrix hidden state

In this step, we create the emission (observation) matrix. The matrix has size a x b, where a is the number of hidden states and b is the number of observable states.

observable_state = states

b1_df = pd.DataFrame(columns=observable_state, index=hidden_state)
b1_df.loc[hidden_state[0]] = [0.3, 0.5, 0.2]
b1_df.loc[hidden_state[1]] = [0.3, 0.3, 0.4]

print(b1_df)

b1 = b1_df.values
print('\n', b1, b1.shape, '\n')
print(b1_df.sum(axis=1))
scikit learn emission and observable matrix
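
To read the emission matrix: each row is a hidden state and each column is an observable behaviour, so each entry is the probability of seeing that behaviour given that hidden state. For example, using the values defined above:

print(b1_df.loc['healthy', 'eating'])   # 0.5: a healthy cat is seen eating half the time
print(b1_df.loc['sick', 'pooping'])     # 0.4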

In this step, we create the graph edges and the graph object, from which we can build the complete graph.

hide_edges_wt = _get_markov_edges(a1_df)
pprint(hide_edges_wt)

emit_edges_wt = _get_markov_edges(b1_df)
pprint(emit_edges_wt)
scikit learn hidden state graph

Here we can plot the complete graph of the hidden Markov model.

G = nx.MultiDiGraph()


G.add_nodes_from(hidden_state)
print(f'Nodes:\n{G.nodes()}\n')

for k, v in hide_edges_wt.items():
    tmp_origin, tmp_destination = k[0], k[1]
    G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)


for k, v in emit_edges_wt.items():
    tmp_origin, tmp_destination = k[0], k[1]
    G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
    
print(f'Edges:')
pprint(G.edges(data=True))    

pos = nx.drawing.nx_pydot.graphviz_layout(G, prog='neato')
nx.draw_networkx(G, pos)
scikit learn Markov hidden model
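
The tutorial stops at drawing the graph, but the usual next step with a hidden Markov model is to decode the most likely sequence of hidden states from a sequence of observed behaviours. The following minimal Viterbi sketch is not part of the original tutorial; the observation sequence is an assumed example, and it reuses states, hidden_state, a1_df, and b1_df from above.

# Minimal Viterbi decoding sketch (assumed example)
obs_map = {s: i for i, s in enumerate(states)}
observations = ['sleeping', 'eating', 'pooping', 'pooping', 'eating']  # assumed sequence
obs = np.array([obs_map[o] for o in observations])

pi_hidden = np.array([0.55, 0.45])      # initial hidden-state probabilities (pi above)
A = a1_df.values.astype(float)          # hidden-state transition matrix
B = b1_df.values.astype(float)          # emission (observation) matrix

T, n = len(obs), A.shape[0]
delta = np.zeros((T, n))                # best path probability ending in each state
psi = np.zeros((T, n), dtype=int)       # back-pointers

delta[0] = pi_hidden * B[:, obs[0]]
for t in range(1, T):
    for j in range(n):
        trans = delta[t - 1] * A[:, j]
        psi[t, j] = np.argmax(trans)
        delta[t, j] = trans[psi[t, j]] * B[j, obs[t]]

# Backtrack to recover the most likely hidden-state path
path = np.zeros(T, dtype=int)
path[-1] = np.argmax(delta[-1])
for t in range(T - 2, -1, -1):
    path[t] = psi[t + 1, path[t + 1]]

print([hidden_state[i] for i in path])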

Read: Scikit learn Decision Tree

Scikit learn hidden Markov model example

In this section, we will work through a scikit learn hidden Markov model example in Python.

A hidden Markov model is a process in which the future probability depends only on the current state, while the states themselves are not directly observed.

Code:

In the following code, we import some libraries with which we create the Markov model.

  • state_space = pd.Series(pi, index=states, name='states') is used to create the state space and the initial state probabilities.
  • edge[(index,column)] = Q.loc[index,column] is used inside a function that maps the transition probability dataframe to graph edges.
  • Graph.add_nodes_from(states) is used to add the nodes corresponding to the states.
  • Graph.add_edge(tmp_origin, tmp_destination, weight=v, label=v) is used to add edges that represent the transition probabilities.

import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plot
%matplotlib inline

states = ['sleeping', 'eating', 'pooping']
pi = [0.45, 0.35, 0.2]
state_space = pd.Series(pi, index=states, name='states')
print(state_space)
print(state_space.sum())
q_df = pd.DataFrame(columns=states, index=states)
q_df.loc[states[0]] = [0.4, 0.2, 0.4]
q_df.loc[states[1]] = [0.40, 0.40, 0.2]
q_df.loc[states[2]] = [0.40, 0.20, .4]

print(q_df)

q_f = q_df.values
print('\n', q_f, q_f.shape, '\n')
print(q_df.sum(axis=1))
from pprint import pprint 

def _get_markov_edges(Q):
    edge = {}
    for column in Q.columns:
        for index in Q.index:
            edge[(index,column)] = Q.loc[index,column]
    return edge

edge_wt = _get_markov_edges(q_df)
pprint(edge_wt)
Graph = nx.MultiDiGraph()


Graph.add_nodes_from(states)
print(f'Nodes:\n{Graph.nodes()}\n')


for k, v in edge_wt.items():
    tmp_origin, tmp_destination = k[0], k[1]
    Graph.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
print(f'Edges:')
pprint(Graph.edges(data=True))    

position = nx.drawing.nx_pydot.graphviz_layout(Graph, prog='dot')
nx.draw_networkx(Graph, position)


edge_labels = {(n1,n2):d['label'] for n1,n2,d in Graph.edges(data=True)}
nx.draw_networkx_edge_labels(Graph , position, edge_labels=edge_labels)
nx.drawing.nx_pydot.write_dot(Graph, 'pet_dog_markov.dot')

Output:

After running the above code, we get the following output in which we can see that the Markov model is plotted on the screen.

scikit learn hidden Markov model


So, in this tutorial, we discussed the scikit learn hidden Markov model and covered different examples related to its implementation. Here is the list of topics we have covered:

  • What is scikit learn Markov model?
  • What made scikit learn Markov model hidden
  • Scikit learn hidden Markov model example