Required text: Christopher M. Bishop, *Pattern Recognition and Machine Learning*.
In addition to the required text, some classes assign readings from other references (Kearns & Vazirani, Thrun et al., Ballard, Morris, and Bishop 1995), as noted in the schedule below:
| On | we will cover | which means that after class you will understand | if before class you have read |
|---|---|---|---|
| 1/18 | Probability Theory | independence, Bayes' rule | Bishop 1.2 |
| 1/23 | Information Theory | entropy, KL divergence, coding | Bishop 1.6 |
| 1/25 | Probabilistic Inference | priors on continuous variables | Bishop 2.2 |
| 1/30 | Probabilistic Inference | priors: Bayesian reasoning, MAP | Bishop 3.4 |
| 2/1 | Perceptron, Backpropagation | the chain rule | Bishop 5.1, 5.2, 5.3 |
| 2/6 | Support Vectors | the kernel trick | Bishop 6.1, 6.2 |
| 2/8 | Support Vectors | the Wolfe dual | Bishop 7.1 |
| 2/13 | Probabilistic Inference | polytrees | Bishop 8.4 |
| 2/15 | Expectation Maximization | latent variable clustering | Bishop ch. 9 |
| 2/20 | Sampling | Markov chain Monte Carlo | Bishop 11.2 |
| 2/22 | Sampling | Gibbs sampling | Bishop 11.3 |
| 2/27 | Eigenvectors | least squares, PCA | Bishop 12.1 |
| 3/1 | Learning Theory | probably approximately correct | Kearns & Vazirani ch. 1 |
| 3/6 | Learning Theory | VC dimension | Kearns & Vazirani ch. 2, 3 |
| 3/8 | Independent Component Analysis | source separation | Bishop 12.4.1 |
| 3/20 | Review | | |
| 3/22 | Midterm | | |
| 3/27 | Midterm Solutions | | |
| 3/29 | Logistic Regression | maximum entropy | Bishop 4.3 |
| 4/3 | Optimization | conjugate gradient | Bishop (1995), pp. 274–282 |
| 4/5 | Hidden Markov Models | forward-backward | Bishop 13.2 |
| 4/10 | Particle Filters | SLAM | Thrun et al. ch. 4 |
| 4/12 | Reinforcement Learning | Q-learning | Ballard ch. 11 |
| 4/17 | Reinforcement Learning | partial observability | Ballard ch. 11 |
| 4/19 | Games | Nash equilibrium | Morris, pp. 115–131 |
| 4/24 | Games | learning to cooperate | |
| 4/26 | Something Fun | | |
| 5/1 | Review | come to class with questions! | |