The Hidden Markov Model demystified: Part 3

Blog / Elodie Thilliez / June 27, 2018

Hidden Markov models (HMMs) are often used as black boxes to perform a wide range of classification and clustering tasks. Refer to Part 1 of this series for a detailed introduction to HMMs and to Part 2 for the application of the Forward algorithm. In this final part, we will look at another application of HMMs, the Viterbi Algorithm, and examine how to employ the algorithm to solve an example problem in R.

Overview of the previous post

In Part 1 of this series, we defined our HMM with two possible states, Hungry (H) and Thirsty (T), and three possible observations: Burger (B), Ice-cream (I) and Soda (S).

The corresponding initial probability vector, IN, is:

| Hungry | Thirsty |
|--------|---------|
| 0.4    | 0.6     |

With a transition probability matrix, TR:

| at t=0 \ at t=1 | Hungry | Thirsty |
|-----------------|--------|---------|
| Hungry          | 0.3    | 0.7     |
| Thirsty         | 0.8    | 0.2     |

And an emission probability matrix, EM:

|         | Burger | Ice-cream | Soda |
|---------|--------|-----------|------|
| Hungry  | 0.6    | 0.3       | 0.1  |
| Thirsty | 0.05   | 0.4       | 0.55 |
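For reference in the snippets further down, these three sets of parameters can be written directly in R (a small sketch; the names IN, TR and EM are simply borrowed from the text above):

    # Initial state probabilities, IN
    IN <- c(Hungry = 0.4, Thirsty = 0.6)

    # Transition probabilities, TR: rows = state at t=0, columns = state at t=1
    TR <- matrix(c(0.3, 0.7,
                   0.8, 0.2), nrow = 2, byrow = TRUE,
                 dimnames = list(c("Hungry", "Thirsty"), c("Hungry", "Thirsty")))

    # Emission probabilities, EM: rows = states, columns = observations
    EM <- matrix(c(0.60, 0.30, 0.10,
                   0.05, 0.40, 0.55), nrow = 2, byrow = TRUE,
                 dimnames = list(c("Hungry", "Thirsty"),
                                 c("Burger", "Ice-cream", "Soda")))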

Given the model we defined and a sequence of observations BIS, we are now aiming to answer the following question: after consuming a Burger, Ice-cream and a Soda, what is the most likely state at the end of the meal: Hungry or Thirsty? Here we wish to derive the final hidden state from the BIS sequence.

Decoding the states: the Viterbi algorithm

The good news is that we already completed 50% of the work in Part 2! Given that we have estimated the probability of each observation of the sequence given both possible states, we can decode the hidden state at the end of lunch by isolating the most probable sequence of hidden states. This is exactly what the Viterbi Algorithm does.
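Before walking through the example, it helps to state the recursion in general terms (our notation, following the standard textbook presentation). The algorithm keeps, for each state s and each step t, the probability delta_t(s) of the single most likely path ending in state s:

$$\begin{aligned}
\delta_1(s) & = P_{init}(s) \times P(o_1|s) \\
\delta_t(s) & = \max_{s'} \left[ \delta_{t-1}(s') \times P(s|s') \right] \times P(o_t|s)
\end{aligned}$$

where o_t is the observation at step t. A backpointer records which previous state s' achieved each maximum, so the best sequence can be traced back from the final step. The calculations below are exactly this recursion, written out by hand.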

A) Estimating all possible paths

Entrée: Burger

Let’s start by estimating the probability of having a Burger given the two possible states. I could have this Burger because I am Hungry:

$$\begin{aligned}
P_1 (B|H_1) & = P_{init}(H) \times P(B|H) \\
& = 0.4 \times 0.6 \\
& = 0.24
\end{aligned}$$

Or because I am Thirsty:

$$\begin{aligned}
P_1 (B|T_1) & = P_{init}(T) \times P(B|T) \\
& = 0.6 \times 0.05 \\
& = 0.03
\end{aligned}$$
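In R, this initialisation step is a single elementwise product (reusing the IN and EM objects sketched earlier):

    # t = 1: probability of starting in each state and then observing the Burger
    delta1 <- IN * EM[, "Burger"]
    delta1
    #  Hungry Thirsty
    #    0.24    0.03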

Figure 1 represents these two possible paths.

Fig.1 – The two paths leading to the two possible hidden states during the consumption of the Burger.

Main: Ice-cream

Now let’s move on to the second item on the menu: my Ice-cream. Given my two possible states while I had the Burger, there are now four paths that lead to the two possible states and to my Ice-cream order (see Figure 2).

Let’s start by assuming that my state while having the Burger was Hungry. I then ordered the Ice-cream for one of two possible reasons:

  i. the Burger made me Thirsty and I was hoping the Ice-cream would quench my Thirst;
  ii. the Burger did not fill me up and I was hoping an Ice-cream would stop my Hunger.

Given the above reasons, the following probabilities can be calculated. The probability of being Thirsty while ordering the Ice-cream after having a Burger out of Hunger (case i) is therefore:

$$\begin{aligned}
P_2(T|I, H_1) & = P_1(B|H_1) \times P(T|H) \times P(I|T) \\
& = 0.24 \times 0.7 \times 0.4 \\
& = 0.067
\end{aligned}$$

The probability of still being Hungry while ordering the Ice-cream after having a Burger out of Hunger (case ii) is:

$$\begin{aligned}
P_2(H|I, H_1) & = P_1(B|H_1) \times P(H|H) \times P(I|H) \\
& = 0.24 \times 0.3 \times 0.3 \\
& = 0.022
\end{aligned}$$

Now, let’s assume instead that I had a Burger because I was Thirsty. The probability of being Thirsty now while ordering the Ice-cream after having a Burger out of Thirst (case iii) is therefore:

$$\begin{aligned}
P_2(T|I, T_1) & = P_1(B|T_1) \times P(T|T) \times P(I|T) \\
& = 0.03 \times 0.2 \times 0.4 \\
& = 0.002
\end{aligned}$$

The probability of being Hungry while ordering the Ice-cream after having a Burger out of Thirst (case iv) is:

$$\begin{aligned}
P_2(H|I, T_1) & = P_1(B|T_1) \times P(H|T) \times P(I|H) \\
& = 0.03 \times 0.8 \times 0.3 \\
& = 0.007
\end{aligned}$$

Fig.2 – The four possible paths leading to the two possible hidden states during the consumption of the Ice-cream.

We then consider the most likely path that arrives at each state. Of the two paths that lead to being Hungry, the most likely is:

$$P_2(H|I, H_1) = 0.022$$

So we define:

$$P_2(H_2) = 0.022$$

Similarly, of the two paths that lead to being Thirsty, the most likely is:

$$P_2(T|I, H_1) = 0.067$$

So we define:

$$P_2(T_2) = 0.067$$
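In R, the four path probabilities and the two maxima take only a couple of lines (reusing TR, EM and delta1 from earlier; paths2 and delta2 are our own names, and delta2 holds exactly P2(H2) and P2(T2)):

    # t = 2: every path probability; rows = state at t=1, columns = state at t=2.
    # delta1 * TR scales each row by the path so far, then sweep() weights each
    # column by the matching Ice-cream emission probability
    paths2 <- sweep(delta1 * TR, 2, EM[, "Ice-cream"], "*")
    paths2
    #         Hungry Thirsty
    # Hungry  0.0216  0.0672
    # Thirsty 0.0072  0.0024

    # Keep only the best path into each state
    delta2 <- apply(paths2, 2, max)   # Hungry 0.0216, Thirsty 0.0672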

Dessert: Soda

Again, there are four possible paths resulting in me ordering a Soda as dessert (see Figure 3).

Fig.3 – The four possible paths leading to the two possible hidden states during the consumption of the Soda.

Firstly, let’s assume that I had an Ice-cream because I was Hungry, and then ordered a Soda. What are the probabilities of being Thirsty or Hungry at this stage? The Thirsty probability is given by:

$$\begin{aligned}
P_3(T|S,H_2) & = P_2(H_2) \times P(T|H) \times P(S|T) \\
& = 0.022 \times 0.7 \times 0.55 \\
& = 0.008
\end{aligned}$$

while the Hungry probability is:

$$\begin{aligned}
P_3(H|S,H_2) & = P_2(H_2) \times P(H|H) \times P(S|H) \\
& = 0.022 \times 0.3 \times 0.1 \\
& = 0.0007
\end{aligned}$$

Now, assuming that I was Thirsty when having an Ice-cream instead, and then decided to get a Soda, the probability of still being Thirsty after my Soda is:

$$\begin{aligned}
P_3(T|S,T_2) & = P_2(T_2) \times P(T|T) \times P(S|T) \\
& = 0.067 \times 0.2 \times 0.55 \\
& = 0.007
\end{aligned}$$

Finally, the probability of being Hungry while ordering the Soda is:

$$\begin{aligned}
P_3(H|S,T_2) & = P_2(T_2) \times P(H|T) \times P(S|H) \\
& = 0.067 \times 0.8 \times 0.1 \\
& = 0.005
\end{aligned}$$
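The last step of the recursion looks exactly the same in R (again reusing TR, EM and delta2; paths3 and delta3 are our names):

    # t = 3: same recursion, now weighting each column by the Soda emission
    paths3 <- sweep(delta2 * TR, 2, EM[, "Soda"], "*")
    paths3
    #           Hungry  Thirsty
    # Hungry  0.000648 0.008316
    # Thirsty 0.005376 0.007392

    delta3 <- apply(paths3, 2, max)   # Hungry 0.005376, Thirsty 0.008316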

B) The most probable path

Now that we know all the possible paths to both states for every item of the meal, we can trace back the most probable sequence of hidden states. Let’s start from the end, with the Soda. Of the four possible paths leading to my Soda, the highest probability was 0.008, which is associated with:

$$P_3(T|S,H_2)$$

Therefore, my most likely state when I ordered my Soda was Thirsty.

This probability:

$$P_3(T|S,H_2)$$

is also associated with previously getting the Ice-cream because of a Hungry state, H2.

Now looking at the two possible paths leading to the Hungry state H2, the path with the highest probability was associated with:

$$P_2(H|I, H_1)$$

with a probability of 0.022. This probability is in turn associated with getting my Burger because I was initially Hungry, H1.

Putting all the pieces of the puzzle together, we can derive the most probable hidden state sequence which, as illustrated in Figure 4, is: Hungry (H1) -> Hungry (H2) -> Thirsty (T3).

Fig.4 – The Viterbi algorithm is applied to estimate the most likely path of hidden states given the set of observations.
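Before turning to the library implementation, here is a compact hand-rolled version of the whole procedure (a sketch built on the IN, TR and EM objects defined earlier; the function name viterbi_by_hand and its backpointer bookkeeping are ours, not part of any package):

    viterbi_by_hand <- function(obs, IN, TR, EM) {
      states <- rownames(TR)
      delta <- IN * EM[, obs[1]]                    # initialisation, t = 1
      back <- matrix("", nrow = length(obs), ncol = length(states),
                     dimnames = list(NULL, states))
      for (t in 2:length(obs)) {                    # recursion, t = 2, 3, ...
        paths <- sweep(delta * TR, 2, EM[, obs[t]], "*")
        back[t, ] <- states[apply(paths, 2, which.max)]
        delta <- apply(paths, 2, max)
      }
      path <- character(length(obs))                # trace-back from the end
      path[length(obs)] <- names(which.max(delta))
      for (t in length(obs):2) path[t - 1] <- back[t, path[t]]
      path
    }

    viterbi_by_hand(c("Burger", "Ice-cream", "Soda"), IN, TR, EM)
    # [1] "Hungry"  "Hungry"  "Thirsty"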

C) Implementation in R

Using the HMM R library [2], here is a snippet to implement our example and solve it with the Viterbi algorithm:

Input

    library(HMM)

    # Initialise the HMM: states, observation symbols, and the three
    # probability sets defined in the previous posts
    hmm <- initHMM(c("Hungry", "Thirsty"), c("Burger", "Ice-cream", "Soda"),
                   startProbs = c(0.4, 0.6),
                   transProbs = matrix(c(0.3, 0.8, 0.7, 0.2), 2),
                   emissionProbs = matrix(c(0.6, 0.05, 0.3, 0.4, 0.1, 0.55), 2, 3))

    # Sequence of observations
    observations <- c("Burger", "Ice-cream", "Soda")

    # Viterbi algorithm
    viterbi_state <- viterbi(hmm, observations)
    cat("Hidden states sequence is:", viterbi_state, "\n")
    cat("Final Hidden state is:", viterbi_state[3], "\n")

Output

    Hidden states sequence is: Hungry Hungry Thirsty
    Final Hidden state is: Thirsty
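One design note on the snippet above: R's matrix() fills values column by column by default, which is why the transition and emission probabilities are listed column-first. An equivalent, arguably more readable construction (same values, just a sketch) passes byrow = TRUE and writes each state's row in reading order:

    # Same model, with each state's row written out in reading order
    hmm <- initHMM(c("Hungry", "Thirsty"), c("Burger", "Ice-cream", "Soda"),
                   startProbs = c(0.4, 0.6),
                   transProbs = matrix(c(0.3, 0.7,
                                         0.8, 0.2), nrow = 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.60, 0.30, 0.10,
                                            0.05, 0.40, 0.55), nrow = 2, byrow = TRUE))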

Congratulations, this wraps up our blog post series on demystifying the Hidden Markov Model. We hope you have not only learnt the basics of Hidden Markov Models but also gained a deeper understanding of their applications. Are you Hungry or Thirsty? You know what to do!

References

  1. http://idiom.ucsd.edu/~rlevy/teaching/winter2009/ligncse256/lectures/hmm_viterbi_mini_example.pdf
  2. https://cran.r-project.org/web/packages/HMM/HMM.pdf