University of Massachusetts | Artificial Intelligence | Computer Science Department
CMPSCI 683 | Fall 2008
INTRODUCTION
Lecture 1: Introduction to AI [Wed 9/03]
Guest Lecturer: Huzaifa Zafar. Course introduction. What is AI? Goals of AI. Importance and Practicality of AI. Issues with AI. AI and Uncertainty. Agents. Combinatorial Auctions.
Lecture 2: Introduction to Search [Mon 9/08]
Search and AI. Reading: Russell and Norvig, Chapters 1 and 2.
SEARCH
Lecture 3: Introduction to Search Strategies [Wed 9/10]
Abstraction. Problem Solving by Search. Knowledge and Problem Types. Search Trees. Search Algorithms - Breadth-First Search, Depth-First Search, Iterative Deepening Search, and Bidirectional Search. Introduction to heuristic search. Reading: Russell and Norvig, Chapters 3.1-3.7 and 4.1-4.4.
Lecture 4: Heuristic search [Mon 9/15]
Introduction to heuristic search. Best-first search. Greedy Search. A*. Admissible evaluation functions. Monotone evaluation functions. Relationships among search algorithms. Meta-Level Reasoning. Reading: Russell and Norvig, Chapters 4.1-4.4. Homework 1 assigned.
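As a concrete companion to the A* material in this lecture, here is a minimal sketch of best-first search ordered by f(n) = g(n) + h(n); the toy graph, node names, and heuristic values are invented for illustration, not taken from the course.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Best-first search ordered by f(n) = g(n) + h(n).
    With an admissible heuristic, the first time the goal is popped the path is optimal."""
    frontier = [(heuristic(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):       # found a cheaper way to succ
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + heuristic(succ), g2, succ, path + [succ]))
    return None, float('inf')

# Invented toy problem: shortest route from A to D; h is an admissible lower bound.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star('A', 'D', lambda n: graph[n], lambda n: h[n]))   # (['A', 'B', 'C', 'D'], 4)
```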
Lecture 5: Resource-Bounded Search Techniques [Wed 9/17]
Iterative Deepening A*. Recursive Best-First Search (RBFS). Simplified Memory-Bounded A* (SMA*). Memory-bounded heuristic search. Real-time problem solving. Satisficing and Optimizing. Introduction to Anytime A*. Reading: Richard E. Korf, Real-Time Heuristic Search, Artificial Intelligence 42 (1990), pp. 189-211.
Lecture 6: Time and space variations of A* [Mon 9/22]
Anytime A*. Real-Time A*. Hierarchical Search. Hierarchical A*. Readings:
Eric A. Hansen, Shlomo Zilberstein, and Victor A. Danilchenko, Anytime Heuristic Search: First Results, CS Technical Report 97-50, UMass.
Hierarchical A*: R.C. Holte, M.B. Perez, R.M. Zimmer, and A.J. Macdonald, Hierarchical A*: Searching Abstraction Hierarchies Efficiently, AAAI/IAAI, Vol. 1, pp. 530-535, 1996.
Other examples of hierarchical problem solving: Craig A. Knoblock, Abstracting the Tower of Hanoi, In Proceedings of the Workshop on Automatic Generation of Approximations and Abstractions, pages 13-23, Boston, MA, 1990.
Lecture 7: Local Search [Wed 9/24]
Homework 1 Clarifications. Continuation of Hierarchical A*. Advantages of Local Search. Iterated Improvement. Hill Climbing. Simulated Annealing. Beam Search.
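To make the hill-climbing vs. simulated-annealing contrast concrete, here is a minimal annealing sketch on an invented one-dimensional objective; the objective, step size, and 1/k cooling schedule are illustrative assumptions, not from the lecture.

```python
import math, random

def simulated_annealing(objective, x0, steps=10000, t0=1.0):
    """Hill climbing that also accepts downhill moves with probability
    exp(delta / T), where T decreases over time (cooling schedule)."""
    x, best = x0, x0
    for k in range(1, steps + 1):
        T = t0 / k                                  # simple 1/k cooling (an assumption)
        x_new = x + random.uniform(-0.1, 0.1)       # random local move
        delta = objective(x_new) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = x_new                               # accept uphill, or downhill with prob. exp(delta/T)
        if objective(x) > objective(best):
            best = x
    return best

# Invented multimodal objective; annealing can accept downhill moves early on,
# which lets it leave local maxima that pure hill climbing would get stuck in.
f = lambda x: math.sin(5 * x) - 0.1 * x * x
x = simulated_annealing(f, x0=3.0)
print(round(x, 2), round(f(x), 2))
```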
Lecture 8: Solving CSPs using Heuristic Search [Mon 9/29]
Evolutionary Computation. Genetic Search. Introduction to CSPs. Constraint Optimization. Heuristic Repair for CSPs. Satisfiability Problem. Readings:
Russell and Norvig, Chapters 4.4-4.5.
Bart Selman, Hector Levesque, and David Mitchell, A New Method for Solving Hard Satisfiability Problems, Proceedings AAAI-92.
Steven Minton, Andy Philips, Mark D. Johnston, and Philip Laird, Minimizing Conflicts: A Heuristic Repair Method for Constraint-Satisfaction and Scheduling Problems, Journal of Artificial Intelligence Research 1 (1993), 1-15.
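A minimal sketch of the min-conflicts heuristic-repair idea from the Minton et al. reading, applied to n-queens; the board size, step limit, and lack of restarts are illustrative choices rather than the paper's exact setup.

```python
import random

def conflicts(rows, col, row):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10000):
    """Heuristic repair: start from a complete (conflicting) assignment, then
    repeatedly reassign a conflicted variable to its minimum-conflict value."""
    rows = [random.randrange(n) for _ in range(n)]   # rows[c] = row of the queen in column c
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                              # no conflicts: a solution
        c = random.choice(conflicted)
        rows[c] = min(range(n), key=lambda r: conflicts(rows, c, r))
    return None                                      # give up (a real solver might restart)

print(min_conflicts(8))
```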
Lecture 9: Speeding up CSP Algorithms [Wed 10/01]
Systematic Search. Backtracking Search. Informed Backtracking. Constraint Propagation. Arc Consistency. K-consistency. Problem Textures. Variable and Value Ordering.
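For the arc-consistency topic above, here is a minimal AC-3 style sketch; the three-variable "X < Y < Z" CSP and the constraint encoding are invented for illustration.

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency: remove any value with no supporting value in a neighbor.
    constraints[(X, Y)] is a predicate over a pair of values (one per variable)."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in list(domains[x]):
            if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
                domains[x].remove(vx)            # vx has no support in y's domain
                revised = True
        if revised:
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))         # re-check arcs pointing into x
    return domains

# Invented toy CSP: X < Y and Y < Z over {1, 2, 3}.
doms = {v: {1, 2, 3} for v in 'XYZ'}
cons = {('X', 'Y'): lambda a, b: a < b, ('Y', 'X'): lambda a, b: a > b,
        ('Y', 'Z'): lambda a, b: a < b, ('Z', 'Y'): lambda a, b: a > b}
print(ac3(doms, cons))   # leaves X:{1}, Y:{2}, Z:{3}
```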
Lecture 10: Blackboard Systems as an Architecture for Interpretation [Mon 10/06]
CSP Heuristics. Informed Backtracking. Advanced Backtracking. Tree-Structured CSPs. Subgoal/Subproblem Interactions. Nearly Decomposable Problems. Introduction to Blackboard Architectures. Blackboard Problem Solving. Blackboard and Search. Cooperating Experts. Blackboard Applications. Reading: Erman, L.D., Hayes-Roth, F., Lesser, V.R., and Reddy, D.R., The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty, Computing Surveys 12(2), 213-253, 1980. Additional (optional) reading: Carver, N. and Lesser, V., The Evolution of Blackboard Control Architectures, Computer Science Technical Report 92-71, University of Massachusetts, Amherst. (This is a revised and extended version of the paper with the same title in Expert Systems with Applications: Special Issue on the Blackboard Paradigm and Its Applications.)
Lecture 11: Hearsay-II - An Example Blackboard System [Wed 10/08]
Blackboard Model. Hearsay-II Architecture. Blackboard Control. Blackboard Nodes. Knowledge Source Structure. Trace of Hearsay-II. Advantages and Disadvantages of Blackboard Systems. Reading: Russell and Norvig, Chapters 17.1-17.4. Homework 2 assigned.
NO LECTURE [Tues 10/14]
Discussion of Homework 2.
Lecture 12: Planning as Search [Wed 10/15]
Guest Lecturer: Huzaifa Zafar. Classical Planning. Representations of operators and plans. Planning problem. Blocks World. Classical and Set-theoretic representation. Planning as state-space search. Forward Search. Backward Search. Lifting. STRIPS. Domain-Specific Algorithms. Plan-Space Planning. Reading: Russell and Norvig, Chapters 11.1-11.4.
MARKOV PROCESSES
Lecture 13: Markov Decision Processes [Mon 10/20]
Search with Uncertainty. Introduction to Markov Decision Processes. Goals and Rewards. Performance Criteria. Bellman Equation. Value Iteration. Policy Iteration. Value Determination.
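A minimal value-iteration sketch for the Bellman-equation material above; the tiny two-state MDP, its rewards, the discount factor, and the tolerance are invented illustrative numbers.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Repeat the Bellman optimality backup
    V(s) <- max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))
    until the value function stops changing."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(sum(p * (R[(s, a, s2)] + gamma * V[s2])
                            for s2, p in P[(s, a)].items())
                        for a in actions(s))
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Invented 2-state MDP: in 'poor' you can 'save' (chance of becoming 'rich') or 'spend' (small reward now).
states = ['poor', 'rich']
P = {('poor', 'save'): {'rich': 0.5, 'poor': 0.5}, ('poor', 'spend'): {'poor': 1.0},
     ('rich', 'save'): {'rich': 1.0}, ('rich', 'spend'): {'poor': 1.0}}
R = {('poor', 'save', 'rich'): 0, ('poor', 'save', 'poor'): 0, ('poor', 'spend', 'poor'): 1,
     ('rich', 'save', 'rich'): 2, ('rich', 'spend', 'poor'): 3}
print(value_iteration(states, lambda s: ['save', 'spend'], P, R))
```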
Lecture 14: Partially Observable Markov Decision Processes [Wed 10/22]
Greedy Policy vs. Optimal Policy. Policy Iteration. Value Determination. Introduction to SMDPs. Introduction to POMDPs. Bayesian Policy Representation. Finite-Memory Policies. Introduction to Hidden Markov Models. Probabilistic Inference in HMMs. Representation for Paths. Viterbi Algorithm. Reading: Russell and Norvig, Chapter 14.
MIDTERM [Mon 10/27]
MARKOV PROCESSES (Cont'd)
Lecture 15: Hidden Markov Models [Wed 10/29]
Midterm Solutions. HMM Formalism. The POMDP Model. Path Representation. Dynamic Programming. Viterbi Algorithm. HARPY. Network Search Algorithm. Beam Search. Reading: L.R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Sections I-III.
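A minimal Viterbi sketch matching the dynamic-programming treatment above; the two-state weather HMM and its probabilities are a textbook-style toy example invented here, not from the lecture.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Dynamic programming over paths: delta[t][s] is the probability of the most
    likely state sequence that ends in state s after the first t+1 observations."""
    delta = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        delta.append({})
        back.append({})
        for s in states:
            prob, prev = max((delta[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            delta[t][s], back[t][s] = prob, prev
    # Recover the best path by following back-pointers from the best final state.
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path, delta[-1][last]

# Invented toy HMM: hidden weather states, observed activities.
states = ['Rainy', 'Sunny']
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3}, 'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
print(viterbi(['walk', 'shop', 'clean'], states, start, trans, emit))
```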
REASONING UNDER UNCERTAINTY
Lecture 16: Uncertainty in Intelligent Systems [Mon 11/03]
Ubiquity of Uncertainty. Sources of Uncertainty. Reasoning under Uncertainty. Acting under Uncertainty. Uncertainty in First-Order Logic. Nonmonotonicity. Belief and Evidence. Probability vs. Causality. MYCIN's Certainty Factors. Probability Theory. Bayesian Reasoning. Abductive Uncertainty. Reading: Russell and Norvig, Chapters 13.1-13.6.
Lecture 17: Introduction to Probabilistic Reasoning with Belief Networks [Wed 11/05]
Review of Lecture 16. Introduction to Belief (or Bayesian) Networks. Conditional Independence in BNs. Semantics of BNs. Inference in BNs. Reasoning in BNs. Reading: Russell and Norvig, Chapters 14.1-14.7.
Lecture 18: More on Probabilistic Reasoning with Belief Networks [Mon 11/10]
Representation of Conditional Probability Tables. Benefits of BNs. Constructing BNs. d-Separation. Inference in BNs. Reasoning in BNs. Variable Elimination. Homework 3 assigned. Makeup exam for the midterm assigned.
Lecture 19: Approximate Inference for BNs [Mon 11/17]
Variable Elimination in Chains. Elimination in Chains with Evidence. Incremental Updating of BNs. Belief Propagation in Trees. Inference in Multiply Connected BNs. Clustering Methods. Cutset Conditioning. Reading: Russell and Norvig, Chapters 14.4-14.7.
Lecture 20: Decision Theory [Wed 11/19]
False Positives and Negatives. Inference in Multiply Connected BNs. Stochastic Simulation. Likelihood Weighting. Markov Chain Monte Carlo. Markov Blanket. Introduction to Utility Theory. Axioms of Utility Theory. Utility Scales and Utility Assessment. Value of Information. Reading: Russell and Norvig, Chapter 16.
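A minimal likelihood-weighting sketch for the approximate-inference topic above, on a two-node network; the network, its probabilities, and the query are invented purely for illustration.

```python
import random

# Invented tiny Bayesian network: Rain -> WetGrass.
P_rain = 0.2
P_wet_given = {True: 0.9, False: 0.1}    # P(WetGrass = true | Rain)

def likelihood_weighting(evidence_wet, n=100000):
    """Approximate P(Rain = true | WetGrass = evidence_wet).
    Non-evidence variables are sampled from their conditionals; evidence variables
    are fixed and contribute P(evidence | parents) to the sample's weight."""
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < P_rain                 # sample Rain from its prior
        p_e = P_wet_given[rain]
        w = p_e if evidence_wet else (1.0 - p_e)        # weight by likelihood of the evidence
        num += w * rain
        den += w
    return num / den

# Exact answer for comparison: 0.9*0.2 / (0.9*0.2 + 0.1*0.8) = 0.18 / 0.26 ~ 0.692
print(round(likelihood_weighting(True), 3))
```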
Lecture 21: Decision Networks [Mon 11/24]
Value of Information Examples. Value of Perfect Information. Properties of the Value of Information. Decision Trees. Reading: Russell and Norvig, Chapter 18.3.
Lecture 22: More on Decision Networks [Wed 11/26]
Introduction to Decision Networks. Nodes in a Decision Network. Knowledge in a Decision Network. Topology of Decision Networks. Evaluating Decision Networks. Evaluation by Graph Reduction. Shachter's Algorithm. Dempster-Shafer Theory. Fuzzy Set Theory/Logic. Truth Maintenance Systems. Reading: Russell and Norvig, Chapters 18.1, 18.2, 18.4, and 18.5.
LEARNING
Lecture 23: Introduction to Learning [Mon 12/01]
Definition of Learning. Types of Learned Knowledge. Characterizing Learning Systems. Model of Learning Agents. Dimensions of Learning. Supervised Learning. Ockham's Razor. Decision Trees and Learning. Homework 4a assigned.
Lecture 24: Decision Tree Learning [Wed 12/03]
Decision Tree Algorithm for Learning. Choosing the Best Attribute Based on Information Theory. Splitting Examples. Decision Tree Learning. Performance Measurements. Inductive Bias. Overfitting. Missing Data. Multi-Valued Attributes. Continuous-Valued Attributes. Intro to Neural Networks. Connectionist Computation. Take-home Midterm Exam Review. Reading: Russell and Norvig, Chapters 19.1-19.5, 20.8. Homework 4b assigned.
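A minimal sketch of choosing the best attribute by information gain, as in the decision-tree material above; the tiny yes/no "play" dataset and its attributes are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """H = -sum p * log2 p over the class distribution of the labels."""
    counts, total = Counter(labels), len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attr, target='play'):
    """Entropy of the whole set minus the weighted entropy after splitting on attr."""
    base = entropy([e[target] for e in examples])
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e[target] for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder

# Invented toy data: decide 'play' from 'outlook' and 'windy'; the attribute with
# the highest gain would be chosen as the root of the decision tree.
data = [{'outlook': 'sunny',    'windy': False, 'play': 'no'},
        {'outlook': 'sunny',    'windy': True,  'play': 'no'},
        {'outlook': 'rain',     'windy': False, 'play': 'yes'},
        {'outlook': 'rain',     'windy': True,  'play': 'no'},
        {'outlook': 'overcast', 'windy': False, 'play': 'yes'}]
for a in ('outlook', 'windy'):
    print(a, round(information_gain(data, a), 3))
```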
Lecture 25: Neural Networks [Mon 12/08]
Artificial Neural Networks. Neural Network Learning. Multi-Layer Networks. Perceptron. Perceptron Learning. Gradient Descent. Delta Rule. Approximation to Gradient Descent. Backpropagation. Overfitting. Convergence. Reading: Russell and Norvig, Chapters 20.1-20.6.
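A minimal perceptron-learning sketch for the threshold-unit update rule above; the AND dataset, learning rate, and epoch count are illustrative choices, not from the lecture.

```python
def perceptron_train(examples, lr=0.1, epochs=20):
    """Perceptron rule: w <- w + lr * (target - output) * x, applied per example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0   # threshold unit
            err = target - output
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

# Learn logical AND, which is linearly separable (XOR would not be).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
print(w, b, [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data])
```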
Lecture 26: Reinforcement Learning [Wed 12/10]
Problems with Supervised Learning. Intro to Reinforcement Learning. Markov Decision Processes. Key Features of RL. Utility Function. Action-Value Function. Passive versus Active Learning. Learning Utility Functions. Direct Utility Estimation. Adaptive Dynamic Programming. Temporal Difference Learning. Tic-Tac-Toe. Simple Monte Carlo. Limitations. Q-Learning for Deterministic Worlds. Non-Deterministic Q-Learning.
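A minimal tabular Q-learning sketch for the deterministic case discussed above; the corridor world, learning rate, and epsilon-greedy exploration policy are invented for illustration.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    World: a corridor of states 0..n-1; actions move left (-1) or right (+1);
    reaching the right end yields reward 1 and ends the episode."""
    actions = [-1, +1]
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    def greedy(s):
        best = max(Q[(s, b)] for b in actions)
        return random.choice([b for b in actions if Q[(s, b)] == best])   # break ties randomly

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = random.choice(actions) if random.random() < epsilon else greedy(s)
            s2 = min(max(s + a, 0), n_states - 1)        # deterministic transition, walls clamp
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
print([max([-1, +1], key=lambda b: Q[(s, b)]) for s in range(4)])   # learned policy: move right everywhere
```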
FINAL EXAM [Wed 12/17, 1:30 - 3:30, CMPSCI 142]