Practice Exercise 6.D
Probability and Time


1: Background Reading

2: Learning Goals

  • Understand Markov chains as a type of belief network.
  • Use hidden Markov models to form estimates from measurements and predictions.

3: Directed Questions

  1. How does the time complexity of exact inference in a Markov chain depend on the number of time steps? [solution]
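As a warm-up for this question, here is a minimal sketch of forward prediction in a Markov chain. The three-state transition matrix is made up for illustration; the point is that each time step costs one matrix-vector product, so the total work grows linearly in the number of time steps (and quadratically in the number of states).

```python
import numpy as np

# Hypothetical 3-state Markov chain; the transition matrix is made up
# for illustration (each row sums to 1).
T_mat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
prior = np.array([1.0, 0.0, 0.0])  # start in state 0

def predict(belief, T_mat, steps):
    """Propagate the state distribution forward `steps` time steps.

    One matrix-vector product per step: cost is linear in `steps`."""
    for _ in range(steps):
        belief = belief @ T_mat
    return belief

print(predict(prior, T_mat, 10))
```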

4: Exercise: Localization

  1. Open the localization applet and read the description of the model. Draw a time-slice of the hidden Markov model in the Belief network tool. [solution]

  2. Write the conditional probability tables for the sensing model. Also provide the state probability distribution according to the dynamics model, given that the previous state was position 0 and the previous action was to move right. [solution]
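The actual tables depend on the applet's model, but their general shape can be sketched with made-up numbers. Everything below is an illustrative assumption, not the applet's real values: a sensing entry P(observe door | door at location) and a dynamics distribution P(S_t | S_{t-1} = 0, action = right) with some chance of slipping.

```python
# Illustrative (made-up) sensing model: P(observe door | door present).
p_sense_door = {True: 0.8,   # true-positive rate (assumed)
                False: 0.1}  # false-positive rate (assumed)

# Illustrative (made-up) dynamics: from position 0, moving right, the
# robot usually advances one cell but may stay put or overshoot.
# Keys are resulting positions; values are P(S_t = pos | S_{t-1}=0, a=right).
p_dynamics_right_from_0 = {0: 0.1, 1: 0.8, 2: 0.1}

# Each conditional distribution must sum to 1.
assert abs(sum(p_dynamics_right_from_0.values()) - 1.0) < 1e-9
```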

  3. Click "Know Location" to set the prior distribution to a point mass at a random location. Next, click "Observe Door" and then "No Light". What happens? Give an intuitive reason for your observation. How might this be a problem in real applications, and how might we solve it? [solution]
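To build intuition for this question, consider how an observation update interacts with a point-mass prior. The likelihood values below are made up; the mechanics of the update are the point.

```python
import numpy as np

# Point-mass prior: we are "certain" the robot is at location 2.
belief = np.array([0.0, 0.0, 1.0, 0.0])

# Hypothetical likelihoods of the observation at each location; suppose
# the observation is very UNLIKELY at location 2.
likelihood = np.array([0.9, 0.9, 0.05, 0.9])

# Bayesian observation update: multiply and renormalize.
posterior = belief * likelihood
posterior /= posterior.sum()
print(posterior)  # still a point mass at location 2
```

Because every other location has exactly zero prior probability, no amount of contradicting evidence can move the belief: zero times anything is zero.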

  4. Still using the "Know Location" prior, move the robot right twice, then left once. Which locations have a belief probability above the smallest value in the distribution? Why? [solution]
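The spread of belief under repeated actions can be simulated with a toy version of the dynamics. The slip probabilities and world size below are assumptions for illustration, not the applet's model.

```python
import numpy as np

# Illustrative 1-D world with made-up slip dynamics: an action moves
# the robot one cell in the intended direction with probability 0.8
# and leaves it in place with probability 0.2.
n = 8

def step(belief, direction):
    """One dynamics update; direction is +1 (right) or -1 (left)."""
    new = 0.2 * belief.copy()                  # robot stays put
    for i, p in enumerate(belief):
        j = min(max(i + direction, 0), n - 1)  # walls clamp motion
        new[j] += 0.8 * p                      # robot moves one cell
    return new

belief = np.zeros(n)
belief[3] = 1.0                          # point-mass prior at cell 3
for d in (+1, +1, -1):                   # right, right, left
    belief = step(belief, d)
print(np.nonzero(belief)[0])             # prints [2 3 4 5]
```

Every sequence of stay/move outcomes that has nonzero probability contributes a reachable cell, so after three noisy actions the belief covers a whole range of positions rather than a single cell.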

  5. Suppose the robot is randomly placed into the world, so that it can be at any location with equal probability (that is, we set a uniform prior). If we now observe a door, what is the probability that we really are at a location with a door? Try solving this both by hand and with the applet, then compare the results. [solution]
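The by-hand side of this question is one application of Bayes' rule. The door count and sensor rates below are made-up placeholders; substitute the applet's actual numbers.

```python
# All numbers are illustrative assumptions, not the applet's values.
n_locations = 16
n_doors = 5

p_door = n_doors / n_locations     # uniform prior: P(at a door)
p_obs_given_door = 0.8             # true-positive rate (assumed)
p_obs_given_no_door = 0.1          # false-positive rate (assumed)

# Total probability of a "door" reading, by the law of total probability.
p_obs = (p_obs_given_door * p_door
         + p_obs_given_no_door * (1 - p_door))

# Bayes' rule: P(at a door | observed door).
p_door_given_obs = p_obs_given_door * p_door / p_obs
print(round(p_door_given_obs, 4))  # prints 0.7843
```

Note that the posterior is well below the 0.8 true-positive rate here: the uniform prior puts most mass on doorless locations, and their false positives dilute the evidence.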

5: Learning Goals Revisited

  • Understand Markov chains as a type of belief network.
  • Use hidden Markov models to form estimates from measurements and predictions.
