# Practice Exercise 9.A: Single Decisions

## 2: Learning Goals

• Compare and contrast stochastic single-stage (one-off) decisions vs. multistage decisions.
• Define a utility function on possible worlds.
• Define and compute an optimal one-off decision (one that maximizes expected utility).
• Represent one-off decisions as a single-stage decision network and compute optimal decisions using variable elimination.

## 3: Directed Questions

1. What is meant by a one-off decision? How can this be applied in the delivery robot example? [solution]

2. Define utility in a decision problem. [solution]

3. How do we calculate the expected utility of a decision? [solution]

4. How do we compute an optimal one-off decision? [solution]

5. What are the three types of nodes in a single-stage decision network? [solution]

6. What is a policy for a single-stage decision network? What is an optimal policy? [solution]

7. Describe the variable elimination steps for finding an optimal policy for a single-stage decision network. [solution]
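The idea behind questions 3 and 4 can be sketched in a few lines. This is a generic illustration, not part of the exercise; the function and the toy model below are made up for the sketch:

```python
# Sketch: an optimal one-off decision is the decision with maximum
# expected utility (generic illustration; all names here are made up).

def optimal_decision(decisions, worlds, prob, utility):
    """Return (best_decision, expected_utility_of_best).

    prob(w, d)    -- P(world w | decision d)
    utility(w, d) -- utility of world w when decision d is taken
    """
    def eu(d):  # expected utility of decision d
        return sum(prob(w, d) * utility(w, d) for w in worlds)
    return max(((d, eu(d)) for d in decisions), key=lambda pair: pair[1])

# Toy model: decision "a" makes the good world more likely.
prob = lambda w, d: {"a": {"ok": 0.75, "bad": 0.25},
                     "b": {"ok": 0.5,  "bad": 0.5}}[d][w]
utility = lambda w, d: 10.0 if w == "ok" else 0.0
print(optimal_decision(["a", "b"], ["ok", "bad"], prob, utility))  # ('a', 7.5)
```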

## 4: Exercise: Bike Ride

You are preparing to go for a bike ride and are trying to decide whether to use your thin road tires or your thicker, knobbier tires. You know from previous experience that your road tires are more likely to go flat during a ride: there is a 40% chance your road tires will go flat, but only a 10% chance that the thicker tires will go flat.

Because of the risk of a flat, you also have to decide whether to bring your tools along on the ride (a pump, tire levers, and a puncture kit). These tools will weigh you down.

The advantage of the thin road tires is that you can ride much faster. The table below gives the utility for each combination of Bring Tools, Flat Tire, and Road Tires:

| Bring Tools | Flat Tire | Road Tires | Utility |
|:-----------:|:---------:|:----------:|--------:|
| T           | T         | T          |    50.0 |
| T           | T         | F          |    40.0 |
| T           | F         | T          |    75.0 |
| T           | F         | F          |    65.0 |
| F           | T         | T          |     0.0 |
| F           | T         | F          |     0.0 |
| F           | F         | T          |   100.0 |
| F           | F         | F          |    75.0 |
1. Create the decision network representing this problem, using the Belief and Decision Networks tool.

2. Use variable elimination to find the optimal policy.

• What are the initial factors? [solution]

• Specify your elimination ordering and give each step of the VE algorithm. [solution]

3. What is the optimal policy? What is the expected value of the optimal policy? [solution]

4. Try changing the utilities and the probabilities in this problem, and identify which changes result in a different optimal policy. [solution]

## 5: Learning Goals Revisited

• Compare and contrast stochastic single-stage (one-off) decisions vs. multistage decisions.
• Define a utility function on possible worlds.
• Define and compute an optimal one-off decision (one that maximizes expected utility).
• Represent one-off decisions as a single-stage decision network and compute optimal decisions using variable elimination.