**Planning in Probabilistic Domains**

**Example of a classical planning domain**

**A classical planning domain**

**Probabilistic planning?**

**Example of a probabilistic planning domain**

**Markov Decision Process**
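Since only the slide titles survive in this outline, here is the standard definition this slide presumably introduces (notation assumed, as none is given in the source):

```latex
\text{MDP} = \langle S, A, T, R \rangle, \qquad
T(s, a, s') = \Pr(s_{t+1} = s' \mid s_t = s,\; a_t = a), \qquad
R : S \times A \to \mathbb{R}
```

Here $S$ is a finite state set, $A$ a finite action set, $T$ the stochastic transition function, and $R$ the reward function.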

**GOAL of the talk!**

**Observability and Policy**

**Infinite horizon and discount factor**
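A hedged reconstruction of the quantity this slide presumably defines: the expected discounted return of a policy $\pi$ over an infinite horizon,

```latex
V^{\pi}(s) = \mathbb{E}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, \pi(s_t)) \;\middle|\; s_0 = s\right],
\qquad 0 \le \gamma < 1
```

Discounting with $\gamma < 1$ keeps the infinite sum finite: the value is bounded by $R_{\max}/(1-\gamma)$.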

**Recap: Problem**

**Computing the optimal policy: Value Iteration**

**Value Iteration (contd)**

**Calculating policy from value iteration**
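Only the titles of these slides survive, so here is a minimal sketch of what they presumably cover: synchronous value iteration followed by greedy policy extraction. The function names, array layout, and tolerance are illustrative, not taken from the talk.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Repeat Bellman backups until successive value functions agree.

    P: transition tensor, shape (A, S, S); P[a, s, s'] = Pr(s' | s, a)
    R: reward matrix, shape (A, S); R[a, s] = reward for taking a in s
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R(s, a) + gamma * sum_s' P(s' | s, a) * V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # greedy backup over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def greedy_policy(P, R, V, gamma=0.9):
    """Extract the policy that is greedy with respect to V."""
    Q = R + gamma * (P @ V)
    return Q.argmax(axis=0)            # best action per state
```

For a toy 2-state MDP where action 1 swaps states and only state 1 yields reward, the computed policy moves to state 1 and stays there.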

**Problem with Value Iteration**

**END OF PART 1 ☺**

**Binary Decision Diagrams**

**BDD example**

**BDD operations: Reduce**
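The Reduce operation named on this slide can be sketched in a few lines. This is an illustrative sketch, not any BDD library's API: nodes are tuples `(var, low, high)` with terminals `0` and `1`, and the two classic reduction rules are applied bottom-up.

```python
def reduce_bdd(node, unique=None):
    """Bottom-up reduction of a binary decision tree into a BDD.

    Rule 1: if both branches of a node are equal, drop the test.
    Rule 2: share isomorphic subgraphs via a unique table.
    """
    if unique is None:
        unique = {}
    if node in (0, 1):                  # terminal node
        return node
    var, low, high = node
    low = reduce_bdd(low, unique)
    high = reduce_bdd(high, unique)
    if low == high:                     # rule 1: redundant test
        return low
    key = (var, low, high)
    return unique.setdefault(key, key)  # rule 2: hash-consing
```

For example, a decision tree for f(x1, x2) = x1, where x2 is tested but irrelevant, collapses to the single node `(1, 0, 1)`.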


**BDD operations: Apply**

**Algebraic Decision Diagrams**

**Advantages of ADD/BDD**

**Coffee domain**

**Factored MDP**

**Transition function in Factored MDP**

**Transition function (contd)**
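These slides presumably show the factored transition function as a product of per-variable conditional probabilities (a two-slice dynamic Bayesian network per action). A minimal sketch, with variable names inspired by the coffee domain slide but otherwise hypothetical:

```python
def factored_transition_prob(s_next, s, cpts):
    """P(s' | s, a) as a product over state variables.

    s, s_next: dicts mapping variable name -> 0/1
    cpts: dict mapping variable name -> function(state) giving
          Pr(variable' = 1 | its parents in state); in a full model
          there would be one such dict per action.
    """
    p = 1.0
    for var, cpt in cpts.items():
        p1 = cpt(s)                              # Pr(var' = 1 | parents)
        p *= p1 if s_next[var] else 1.0 - p1     # match the asked-for value
    return p
```

Because each variable's probability depends only on its parents, the full 2^n x 2^n transition matrix never has to be built explicitly.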

**Value iteration using ADDs**

**State reachability analysis: LAO***

**Symbolic LAO***

**END ☺**