Course People
Matthew Stone
Rong Zhang
Schedule
Class Tuesday/Thursday 4:30-5:50, Hill 254
Announcements
- Dec 16
Reminder: the exam is at either 9am or 4pm on December 18 in Hill
254.
Caveat: there is a typo in the justification of the ANOVA in
Cohen, on page 193. The between-group variance should include an
additional factor of Nj, the number of data points in group j.
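In standard one-way ANOVA notation (ours, not necessarily Cohen's:
J groups, group j containing Nj data points with mean \bar{x}_j, and
grand mean \bar{x}), the corrected between-group mean square reads

    MS_{between} = \frac{1}{J - 1} \sum_{j=1}^{J} N_j (\bar{x}_j - \bar{x})^2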
Good luck.
- Nov 27
Short Written Exercises
Due December 6.
- Nov 13
Final paper - a short proposal
Due December 11
(Plenty of time to think about it.)
Reminder: no office hours Nov 14; guest lecture by Dimitris Metaxas Nov 15.
- Nov 8
Written Exercises
Due November 20
Updated and synchronized syllabus on this page.
- Oct 4
Correction to Homework One
Small typo fixed in the mystery algorithm of extra credit A.
- Oct 3
Written Exercises
Due October 18
- Sep 27
Updated and synchronized syllabus on this page.
- Sep 25
Homework One
Due October 11
- Aug 29
Who should take this class?
Lecture Schedule, AI Events, Notes
- Sep 4
What is AI?
- Module 1: A Prototypical Case Study in AI
Reading: Agents in the Real World
Homework: Decision Analysis
Lectures: Sep 6-Sep 20
- Sep 6
Decision analysis as a computational model (1)
Actions, observations, outcomes; probability and utility.
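A minimal illustration of the computation in Python (the actions,
probabilities, and utilities are made-up, not from the homework):
expected utility weights each outcome's utility by its probability,
and the agent takes the maximizing action.

    # Choose the action with the highest expected utility.
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(p * u for p, u in outcomes)

    actions = {
        "act":  [(0.7, 100.0), (0.3, -40.0)],   # risky: succeed or fail
        "wait": [(1.0, 10.0)],                  # safe, certain payoff
    }

    print(max(actions, key=lambda a: expected_utility(actions[a])))
    # "act": 0.7 * 100 - 0.3 * 40 = 58 > 10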
- Sep 11
- Sep 13
Decision analysis as a computational model (2)
Interpreting models; solving for and carrying out policies in
agents.
- Sep 18
Decision analysis as a computational model (3)
Design and evaluation; sensitivity analysis, statistical
hypotheses about running agents.
- Sep 20
Decision analysis as a computational model (4)
Computational complexity; training data; model estimation and
model induction; generalization, model selection, data sparsity.
- Module 2: Perception and Bayesian Inference
Reading: Chris Bishop, Neural Networks for Pattern Recognition, Chapters 1-3
Homework: Written Exercises
Lectures: Sep 25-Oct 16.
- Sep 25
Bayesian analysis and models for classification (1)
Motivation for a Bayesian approach. (Ch 1.1-1.3; 1.8)
- Sep 27
Bayesian analysis and models for classification (2)
Naive Bayes inference (discrete case).
Text classification. (Ch 3.1.4)
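A minimal sketch of the idea in Python (the toy documents and the
add-one smoothing are our own illustrative choices, not Bishop's text):

    import math
    from collections import Counter, defaultdict

    # Toy labeled documents.
    train = [("cheap pills buy now".split(), "spam"),
             ("meeting agenda attached".split(), "ham"),
             ("buy cheap meds".split(), "spam"),
             ("lunch meeting tomorrow".split(), "ham")]

    class_counts = Counter(label for _, label in train)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in train:
        word_counts[label].update(words)
        vocab.update(words)

    def classify(words):
        """argmax_c log P(c) + sum_w log P(w|c), add-one smoothed."""
        def score(c):
            total = sum(word_counts[c].values())
            s = math.log(class_counts[c] / len(train))
            for w in words:
                s += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            return s
        return max(class_counts, key=score)

    print(classify("buy cheap pills".split()))   # spam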
- Oct 2
Bayesian analysis and models for classification (3)
Continuous variables, normal distributions, linear
classifiers. (Ch 1.8; Ch 3.1; Ch 3.2)
- Oct 4
Bayesian analysis and models for classification (4)
Learning from training data. Maximum likelihood.
(Ch 1.9 and 2.1-2.4)
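For the simplest continuous case, the maximum-likelihood Gaussian fit
has a closed form; a sketch in Python (our own toy numbers):

    # ML estimates for a 1-D Gaussian. Note the variance divides by n,
    # not n - 1, so it is biased low on small samples.
    def gaussian_mle(xs):
        n = len(xs)
        mu = sum(xs) / n
        var = sum((x - mu) ** 2 for x in xs) / n
        return mu, var

    print(gaussian_mle([2.0, 4.0, 6.0]))   # (4.0, 2.666...)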
- Oct 9
Guest Lecture: Haym
Hirsh
Current research in text classification.
- Oct 11
Bayesian analysis and models for classification (5)
General density estimation: nearest neighbor classification.
(Ch 2.5)
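A minimal nearest-neighbor classifier in Python (toy 1-D data of our own):

    # 1-nearest-neighbor: label a point by its closest training example.
    def nn_classify(x, train):
        """train: list of (feature, label) pairs."""
        return min(train, key=lambda fl: abs(fl[0] - x))[1]

    train = [(0.1, "a"), (0.3, "a"), (2.0, "b"), (2.4, "b")]
    print(nn_classify(1.9, train))   # "b"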
- Oct 16
Bayesian analysis and models for classification (6)
Clustering; k-means; expectation maximization.
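A minimal k-means sketch in Python (1-D toy data; EM for mixture models
replaces the hard assignment below with posterior responsibilities):

    import random

    def kmeans(points, k, iters=20):
        """Lloyd's algorithm; returns the final cluster centers."""
        centers = random.sample(points, k)
        for _ in range(iters):
            # Assignment step: each point joins its nearest center.
            clusters = [[] for _ in range(k)]
            for x in points:
                j = min(range(k), key=lambda j: (x - centers[j]) ** 2)
                clusters[j].append(x)
            # Update step: each center moves to its cluster's mean.
            centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]
        return centers

    data = ([random.gauss(0, 1) for _ in range(50)] +
            [random.gauss(5, 1) for _ in range(50)])
    print(sorted(kmeans(data, 2)))   # roughly [0, 5]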
- Module 3: Time
Reading: Eugene Charniak, Statistical Language Learning, Chapters 1-7
Maybeck, "Introduction", from Stochastic Models, Estimation and Control
Russell and Norvig, Chapter 15, "Probabilistic Inference", from
Artificial Intelligence: A Modern Approach
Homework: Written Exercises
Lectures: Oct 18-Nov 15.
- Oct 18
Models of time (1)
Markov models and hidden Markov models.
Charniak Chapter 2 and 3.1-3.2.
- Oct 23
Models of time (2)
HMM decoding. Part-of-speech tagging.
Charniak Chapter 3.3 and Chapter 4.1.
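A minimal Viterbi decoder in Python (the two-tag model and the
probabilities are our own toy invention, not Charniak's):

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Most likely state sequence for obs under an HMM. Probabilities,
        not logs, for brevity; a real tagger would work in log space."""
        V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
        for t in range(1, len(obs)):
            V.append({s: max((V[t-1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                             for r in states)
                      for s in states})
        state = max(states, key=lambda s: V[-1][s][0])   # best final state
        path = [state]
        for t in range(len(obs) - 1, 0, -1):             # trace back pointers
            state = V[t][state][1]
            path.append(state)
        return list(reversed(path))

    states = ("N", "V")
    start = {"N": 0.6, "V": 0.4}
    trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
    emit = {"N": {"flies": 0.4, "like": 0.1, "bananas": 0.5},
            "V": {"flies": 0.3, "like": 0.6, "bananas": 0.1}}
    print(viterbi(("flies", "like", "bananas"), states, start, trans, emit))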
- Oct 25
Midterm.
- Oct 30
Models of time (3)
HMM inference. Forward/backward.
Charniak Chapter 4.2
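The forward pass is the same recurrence with sum in place of max; a
sketch (same toy-HMM conventions as the Viterbi sketch above):

    def forward(obs, states, start_p, trans_p, emit_p):
        """P(obs) under the HMM, by dynamic programming over prefixes."""
        alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
        for o in obs[1:]:
            alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s]
                                           for r in states)
                     for s in states}
        return sum(alpha.values())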
- Nov 1
HMM inference: Training. Speech and gesture
recognition.
Graphical representations of probabilistic models.
Charniak Chapter 4.3; Russell and Norvig Chapter 15.
- Nov 6
Models of time (4)
The Kalman filter. Tracking and learning with
Gaussian priors.
Maybeck, Introduction.
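A minimal scalar Kalman filter in Python (random-walk state model; the
noise variances below are our own toy choices, not Maybeck's):

    def kalman_1d(zs, x0, p0, q, r):
        """Filter measurements zs for x_t = x_{t-1} + process noise (var q),
        z_t = x_t + measurement noise (var r). Returns the state estimates."""
        x, p, out = x0, p0, []
        for z in zs:
            p = p + q                 # predict: uncertainty grows
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)       # update: blend prediction, measurement
            p = (1 - k) * p
            out.append(x)
        return out

    print(kalman_1d([1.1, 0.9, 1.2, 1.0], x0=0.0, p0=1.0, q=0.01, r=0.1))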
- Nov 8
Models of hierarchical structure. Trees, CFGs and PCFGs.
Charniak Chapter 5.
- Nov 13
Algorithms for PCFGs.
Charniak Chapter 6.
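A minimal probabilistic CKY sketch in Python (Viterbi-inside scores for
a PCFG in Chomsky normal form; the tiny grammar is our own toy example,
not Charniak's):

    from collections import defaultdict

    def pcky(words, lexicon, rules):
        """Probability of the best parse. lexicon: {(A, word): p} for
        A -> word; rules: {(A, B, C): p} for A -> B C."""
        n = len(words)
        best = defaultdict(float)          # best[(i, j, A)] over span i..j
        for i, w in enumerate(words):
            for (A, word), p in lexicon.items():
                if word == w:
                    best[(i, i + 1, A)] = p
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):          # split point
                    for (A, B, C), p in rules.items():
                        cand = p * best[(i, k, B)] * best[(k, j, C)]
                        best[(i, j, A)] = max(best[(i, j, A)], cand)
        return best[(0, n, "S")]

    print(pcky("people fish".split(),
               {("N", "people"): 0.5, ("V", "fish"): 0.5},
               {("S", "N", "V"): 1.0}))            # 0.25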
- Nov 15
Guest Lecture:
Dimitris Metaxas.
Current research in visual tracking and recognition.
- Module 4: Planning
Reading: Russell and Norvig, Chapters 16, 17 and 20, from
Artificial Intelligence: A Modern Approach
- Nov 20
General probabilistic inference: influence diagrams.
- Nov 27
Markov decision processes: Value iteration.
- Nov 29
Markov decision processes: Policy iteration.
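A minimal value-iteration sketch in Python (the two-state MDP is our
own toy example); policy iteration differs as noted after the code.

    def value_iteration(states, actions, trans, reward, gamma=0.9, eps=1e-6):
        """trans[s][a]: list of (prob, next_state); reward[s]: reward in s.
        Returns optimal values and the greedy policy they induce."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                new_v = max(sum(p * (reward[s] + gamma * V[s2])
                                for p, s2 in trans[s][a]) for a in actions)
                delta = max(delta, abs(new_v - V[s]))
                V[s] = new_v
            if delta < eps:
                return V, {s: max(actions, key=lambda a, s=s: sum(
                               p * V[s2] for p, s2 in trans[s][a]))
                           for s in states}

    trans = {"s0": {"stay": [(1.0, "s0")], "go": [(1.0, "s1")]},
             "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]}}
    V, pi = value_iteration(["s0", "s1"], ["stay", "go"], trans,
                            {"s0": 0.0, "s1": 1.0})
    print(pi)   # {'s0': 'go', 's1': 'stay'}

Policy iteration instead alternates evaluating a fixed policy (a linear
system in V) with the same greedy improvement step, usually converging
in fewer, more expensive passes.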
- Module 5: Evaluation
Reading: Cohen, Chapters 3 and 6, from
Empirical Methods for Artificial Intelligence
- Dec 4
Evaluation (1).
Pitfalls and methodology.
- Dec 6
Evaluation (2).
Performance metrics.
- Dec 11
Evaluation (3).
Training and test data. Reliability. Cross-validation.
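A minimal k-fold cross-validation harness in Python (train_fn and
error_fn are our own placeholder names for whatever learner and error
measure you are evaluating):

    import random

    def k_fold_cv(data, k, train_fn, error_fn):
        """Average held-out error over k folds. train_fn(train) -> model;
        error_fn(model, test) -> error on the held-out fold."""
        data = data[:]
        random.shuffle(data)
        folds = [data[i::k] for i in range(k)]
        errors = []
        for i in range(k):
            held_out = folds[i]
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            errors.append(error_fn(train_fn(train), held_out))
        return sum(errors) / k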
- Dec 18
4-7 pm, Hill 254: Final.
Materials
- Texts
Neural Networks for Pattern Recognition. Christopher Bishop.
Oxford, 1995.
Statistical Language Learning. Eugene Charniak. MIT, 1993.
- On Reserve in the Math Library
Artificial Intelligence: A Modern Approach. Stuart Russell
and Peter Norvig, Prentice Hall, 1995. Chapters 15, 16, 17 and
20.
Empirical Methods for Artificial Intelligence. Paul Cohen,
MIT, 1995. Chapters 3 and 6.
- Notes
Agents in the Real World: Computational Models in Artificial Intelligence
and Cognitive Science. Matthew Stone. A revised version of
this chapter will appear in Zenon Pylyshyn and Ernie Lepore, eds.,
What Is Cognitive Science? (second edition), Blackwell, 2002.
Peter Maybeck's introduction to the Kalman filter, a reference for
the material covered in class November 6. Source: Maybeck,
Stochastic Models, Estimation and Control (1979), Chapter 1,
"Introduction", pp. 1-15. Part of a general web resource on the
Kalman filter.
- Research Articles (following up class material, for people
who are curious)
The interactive museum tour-guide robot. Burgard et al., AAAI
(National Conference on Artificial Intelligence), 1998. An
overview of a state-of-the-art agent.
Hierarchically
classifying documents using very few words. Koller and Sahami,
ICML (International Conference on Machine Learning) 1997. A
description of a principled and effective probabilistic text
classifier.
Real-Time American Sign Language recognition from video using hidden
Markov models. Thad Starner and Alex Pentland. An illustration of
the breadth of HMM techniques for AI: recognizing a visual
language.
CONDENSATION - conditional density propagation for visual tracking.
Michael Isard and Andrew Blake, International Journal of Computer
Vision, 1998. This paper uses sampling to represent
probability density for a computer vision application that has to
deal with visual ambiguity.
Statistical parsing with a context-free grammar and word statistics.
Eugene Charniak, AAAI (National Conference on Artificial Intelligence),
1997. An important update to the book that shows how to build
a PCFG grammar for English that disambiguates reliably.
Packet Routing in Dynamically Changing Networks: A Reinforcement
Learning Approach. Justin Boyan and Michael Littman, NIPS (Neural
Information Processing Systems Conference), 1994.
An
application of reinforcement learning to dialogue strategy
selection in a spoken dialogue system for email. Marilyn
A. Walker. Journal of Artificial Intelligence Research, Volume
12, pages 387-416, 2000. An illustration of connections between
MDPs, reinforcement learning, and performance evaluation in agents.
Links
- General AI References
- Cool AI Systems
- Robots and the Media
- More AI related Rutgers stuff