Lecture Notes For All: Dynamic Programming and Stochastic Control


Sunday, March 28, 2010

Dynamic Programming and Stochastic Control



[Figure: Label correcting methods for shortest paths, shown as a diagram in which nodes can be inserted into or removed from a list of active nodes. See lecture 4 for more information. (Figure by MIT OpenCourseWare, adapted from course notes by Prof. Dimitri Bertsekas.)]
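
As a rough illustration of the idea in the figure, here is a minimal sketch of a generic label correcting method in Python. It is not code from the lecture notes: the graph format, the FIFO removal policy, and the names (label_correcting, open_list, upper) are illustrative assumptions; lecture 4 covers the method and its variants properly.

from math import inf
from collections import deque

def label_correcting(graph, source, target):
    """Generic label correcting shortest-path sketch (illustrative only).

    graph: dict mapping each node to a list of (neighbor, arc_length) pairs,
    with nonnegative arc lengths.
    """
    # Collect every node that appears, including pure sinks such as the target.
    nodes = set(graph)
    for arcs in graph.values():
        nodes.update(j for j, _ in arcs)

    d = {node: inf for node in nodes}      # labels: best path length found so far
    parent = {node: None for node in nodes}
    d[source] = 0
    upper = inf                            # best known path length to the target
    open_list = deque([source])            # the list of active nodes from the figure

    while open_list:
        i = open_list.popleft()            # removal policy; FIFO gives a Bellman-Ford flavor
        for j, length in graph.get(i, []):
            # Correct j's label only if going through i is strictly better and
            # could still beat the best complete path to the target found so far.
            if d[i] + length < min(d[j], upper):
                d[j] = d[i] + length
                parent[j] = i
                if j == target:
                    upper = d[j]           # tighten the bound on the optimal cost
                else:
                    open_list.append(j)    # j becomes (or remains) an active node

    return upper, parent

# Tiny usage example: the shortest s -> t path has length 3 (s -> a -> b -> t).
g = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t", 5)], "b": [("t", 1)]}
print(label_correcting(g, "s", "t")[0])    # prints 3

Different rules for choosing which active node to remove recover familiar special cases (FIFO, best-first, and so on), which is the theme of lecture 4.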

Course Highlights

This course features a complete set of lecture notes, as well as assignments and exams with solutions.

Course Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
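
Roughly, in the notation of the course textbook (notation varies slightly by edition), the finite-horizon DP recursion developed in the first sessions, with dynamics x_{k+1} = f_k(x_k, u_k, w_k), stage costs g_k, and terminal cost g_N, reads:

\[
J_N(x_N) = g_N(x_N), \qquad
J_k(x_k) = \min_{u_k \in U_k(x_k)} \, \mathbb{E}_{w_k}\big[\, g_k(x_k, u_k, w_k) + J_{k+1}\big(f_k(x_k, u_k, w_k)\big) \big], \quad k = N-1, \ldots, 0.
\]

Here J_0(x_0) is the optimal expected cost from initial state x_0, and a minimizing u_k at each stage yields an optimal policy; the infinite-horizon lectures study fixed-point versions of the same recursion.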

Lecture Notes

This section includes the complete lecture notes from Fall 2008, based on the third edition of the course textbook, both as one file and broken down by session. For reference, it also includes the complete lecture notes from Fall 2003, based on the second edition of the textbook.

Complete Lecture Slides

Complete slides, Fall 2008 (PDF - 2.4 MB)
Complete slides, Fall 2003 (PDF - 1.9 MB)

Lecture Slides by Session

SES #   TOPICS
1       Introduction to dynamic programming; examples and formulation (PDF)
2       The dynamic programming algorithm (PDF)
3       Deterministic systems and the shortest path problem (PDF)
4       Shortest path algorithms (PDF)
5       Deterministic continuous-time optimal control (PDF)
6       Stopping and scheduling problems (PDF)
7       Linear systems with quadratic costs and inventory control (PDF)
8       Problems with imperfect state information (PDF)
9       Sufficient statistics (PDF)
10      Suboptimal control (PDF)
11      Rollout algorithms (PDF)
12      More on suboptimal control (PDF)
13      Infinite horizon I: stochastic shortest path problems (PDF)
14      Infinite horizon II: discounted problems (PDF)
15      Infinite horizon III: average cost problems (PDF)
16      Semi-Markov problems (PDF)
17      Infinite horizon: discounted problems I (PDF)
18      Infinite horizon: discounted problems II (PDF)

Midterm

19      Stochastic shortest path problems (PDF)
20      Overview of main approaches in approximate dynamic programming (PDF)
        Detailed outline for approximate dynamic programming, lectures 20-25 (PDF)
21      Cost approximation: discounted cost (PDF)
22      Projected equation methods (PDF)
23      More on projected equations: Q-learning (PDF)
24      Extensions to stochastic shortest path and average cost (PDF)
25      Gradient methods for approximation in policy space (PDF)
26      Project presentations I
27      Project presentations II
