1: Background Reading
2: Learning Goals
 Compare and contrast stochastic single-stage (one-off) decisions vs. multi-stage decisions.
 Define a utility function on possible worlds.
 Define and compute an optimal one-off decision (maximum expected utility).
 Represent one-off decisions as a single-stage decision network and compute optimal decisions using variable elimination.
3: Directed Questions
 What is meant by a one-off decision? How can this be applied in the delivery robot example? [solution]
 Define utility in a decision problem. [solution]
 How do we calculate the expected utility of a decision? [solution]
 How do we compute an optimal one-off decision? [solution]
 What are the three types of nodes in a single-stage decision network? [solution]
 What is a policy for a single-stage decision network? What is an optimal policy? [solution]
 Describe the variable elimination steps for finding an optimal policy for a single-stage decision network. [solution]
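The expected-utility questions above can be made concrete with a small sketch. The code below computes EU(d) = Σ_outcomes P(outcome | d) · U(outcome, d) for each decision and picks the maximizer; the umbrella decision and all of its numbers are hypothetical, chosen only to illustrate the calculation.

```python
# Sketch: expected utility of a one-off decision, and the optimal
# decision as the one maximizing expected utility.

def expected_utility(outcome_probs, utilities):
    """EU(d) = sum over outcomes of P(outcome | d) * U(outcome, d)."""
    return sum(p * u for p, u in zip(outcome_probs, utilities))

# Hypothetical example: take an umbrella or not, with P(rain) = 0.3.
# Utilities are ordered as [rain, no rain].
eu = {
    "take":  expected_utility([0.3, 0.7], [70, 80]),   # 77.0
    "leave": expected_utility([0.3, 0.7], [20, 100]),  # 76.0
}
best = max(eu, key=eu.get)  # the optimal one-off decision
```

Here the optimal decision is "take", since 0.3·70 + 0.7·80 = 77 exceeds 0.3·20 + 0.7·100 = 76.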
4: Exercise: Bike Ride
You are preparing to go for a bike ride and are trying to decide whether to use your thin road
tires or your thicker, knobbier tires. You know from previous experience that your road tires are
more likely to go flat during a ride. There's a 40% chance your road tires will go flat but only a
10% chance that the thicker tires will go flat.
Because of the risk of a flat, you also have to decide whether or not to bring your tools along on
the ride (a pump, tire levers and a puncture kit). These tools will weigh you down.
The advantage of the thin road tires is that you can ride much faster. The table below gives the
utilities for these variables:
 bringTools   flatTire   bringRoadTires   Satisfaction
 T            T          T                 50.0
 T            T          F                 40.0
 T            F          T                 75.0
 T            F          F                 65.0
 F            T          T                  0.0
 F            T          F                  0.0
 F            F          T                100.0
 F            F          F                 75.0
 Create the decision network representing this problem, using the belief and decision networks tool.
 Use variable elimination to find the optimal policy.
 What are the initial factors? [solution]
 Specify your elimination ordering and give each step of the VE algorithm. [solution]
 What is the optimal policy? What is the expected value of the optimal policy? [solution]
 Try changing the utilities and the probabilities in this problem, and identify which changes
result in a different optimal policy. [solution]
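After working through the variable elimination steps by hand, you can check your answer with the sketch below. It mirrors the VE computation for this network: sum out the chance variable flatTire against its conditional probability, leaving a factor on the two decision variables, then maximize over that factor. The probabilities and utilities are taken directly from the problem statement and table above.

```python
# Sketch of the VE computation for the bike ride exercise.
from itertools import product

# P(flatTire = True | bringRoadTires)
p_flat = {True: 0.4, False: 0.1}

# Utility(bringTools, flatTire, bringRoadTires), from the table above.
U = {
    (True,  True,  True):  50.0, (True,  True,  False): 40.0,
    (True,  False, True):  75.0, (True,  False, False): 65.0,
    (False, True,  True):   0.0, (False, True,  False):  0.0,
    (False, False, True): 100.0, (False, False, False): 75.0,
}

# Step 1: eliminate flatTire by summing it out, producing a factor
# on the decision variables (bringTools, bringRoadTires) only.
eu = {}
for tools, road in product([True, False], repeat=2):
    p = p_flat[road]
    eu[(tools, road)] = p * U[(tools, True, road)] + (1 - p) * U[(tools, False, road)]

# Step 2: maximize over the remaining decision variables to get the
# optimal policy and its expected value.
best_policy = max(eu, key=eu.get)
best_value = eu[best_policy]
```

Running this enumerates the expected utility of all four decision combinations, so compare its output against your own elimination steps rather than reading it first.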
5: Learning Goals Revisited
 Compare and contrast stochastic single-stage (one-off) decisions vs. multi-stage decisions.
 Define a utility function on possible worlds.
 Define and compute an optimal one-off decision (maximum expected utility).
 Represent one-off decisions as a single-stage decision network and compute optimal decisions using variable elimination.
