University of Massachusetts, Computer Science Department
CMPSCI 683: Artificial Intelligence, Fall 2004

Schedule of Lectures
The slides from two years ago are also available here

INTRODUCTION
Lecture 1: Introduction [Thurs 9/09]
Course information. What is Artificial Intelligence? A brief history of AI, its goals and achievements. Computer systems as intelligent agents. Types of environments, agents, and performance measures. Reflex agents, agents that keep track of the world, goal-based agents, and utility-based agents.
Reading: AIMA Chapters 1 & 2.
PROBLEM SOLVING USING SEARCH
Lecture 2: Overview of Issues in Heuristic Search [Tues 9/14]
Key concepts in heuristic search, the state-space search paradigm and its complexities, local vs. non-local control, satisficing and bounded rationality, data, solution, and control uncertainty, multi-level search, meta-level control, open vs. closed world assumption, resource bounds.
Reading: Russell and Norvig, Chapter 3.
Lecture 3: Heuristic search [Thurs 9/16]
Algorithms for guiding search using heuristic information. The nature and origin of heuristics. Best-first search, A* and IDA*. Admissible evaluation functions. The effect of heuristic error. K-Best-First Search.
Reading: Sections 4.1-4.4.
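For illustration only (not part of the assigned reading), a minimal Python sketch of A*; the graph interface (a neighbors function returning (successor, step_cost) pairs and a caller-supplied heuristic h) is an assumption of the sketch, not something from the course materials:

    import heapq, itertools

    def a_star(start, goal, neighbors, h):
        # A* search: order the frontier by f(n) = g(n) + h(n).
        # neighbors(n) yields (successor, step_cost); h(n) is an admissible estimate of cost to goal.
        tie = itertools.count()                      # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path                          # with an admissible h, the first goal popped is optimal
            for succ, cost in neighbors(node):
                g2 = g + cost
                if g2 < best_g.get(succ, float('inf')):
                    best_g[succ] = g2
                    heapq.heappush(frontier, (g2 + h(succ), next(tie), g2, succ, path + [succ]))
        return None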
Lecture 4 (guest lecturer Jiaying Shen): Time and space variations of A* [Tues 9/21]
IDA*, RBFS, SMA*, Memory-bounded heuristic search, RTA*
Reading: Richard E. Korf, Real-Time Heuristic Search, Artificial Intelligence 42 (1990), pp. 189-211.
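As a companion to the A* sketch above, a minimal (assumed, not from the readings) Python version of IDA*; goal_test, neighbors, and h are caller-supplied, as before:

    def ida_star(start, goal_test, neighbors, h):
        # Iterative-deepening A*: depth-first search bounded by an f = g + h threshold,
        # raising the threshold to the smallest exceeded f-value after each failed pass.
        def dfs(node, g, bound, path):
            f = g + h(node)
            if f > bound:
                return f, None
            if goal_test(node):
                return f, path
            minimum = float('inf')
            for succ, cost in neighbors(node):
                if succ in path:                     # avoid trivial cycles along the current path
                    continue
                t, found = dfs(succ, g + cost, bound, path + [succ])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
            return minimum, None

        bound = h(start)
        while True:
            bound, solution = dfs(start, 0, bound, [start])
            if solution is not None or bound == float('inf'):
                return solution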
Lecture 5: Search complexity and applications [Thurs 9/23]
Anytime A*, Hierarchical A*, Other Examples of Hierarchical Problem Solving
Readings: Eric A. Hansen, Shlomo Zilberstein, and Victor A. Danilchenko, Anytime Heuristic Search: First Results, CS Technical Report 97-50, University of Massachusetts Amherst.
Hierarchical A*: R.C. Holte, M.B. Perez, R.M. Zimmer, and A.J. Macdonald, Hierarchical A*: Searching Abstraction Hierarchies Efficiently, AAAI/IAAI, Vol. 1, pp. 530-535, 1996.
Other examples of hierarchical problem solving: Craig A. Knoblock, Abstracting the Tower of Hanoi, Proceedings of the Workshop on Automatic Generation of Approximations and Abstractions, pages 13-23, Boston, MA, 1990.
Lecture 6: Local Search [Tues 9/28]
Beam search, Hill Climbing, Genetic Algorithms, Simulated Annealing, Iterated Improvement, Stochastic Search.
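A minimal sketch of simulated annealing (illustrative only; the value, neighbor, and cooling-schedule functions below are choices made for this example):

    import math, random

    def simulated_annealing(state, value, neighbor, schedule, steps=10000):
        # Hill climbing that sometimes accepts downhill moves: a worse neighbor is
        # accepted with probability exp(delta / T), where the temperature T decays over time.
        for t in range(steps):
            T = schedule(t)
            if T <= 0:
                break
            succ = neighbor(state)
            delta = value(succ) - value(state)
            if delta > 0 or random.random() < math.exp(delta / T):
                state = succ
        return state

    # Toy usage: maximize -(x - 3)^2 over the integers with a geometric cooling schedule.
    best = simulated_annealing(state=0,
                               value=lambda x: -(x - 3) ** 2,
                               neighbor=lambda x: x + random.choice([-1, 1]),
                               schedule=lambda t: 10 * (0.99 ** t))
    print(best)    # converges to (or very near) 3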

Lecture 7: CSPs: Heuristics for CSPs [Thurs 9/30]
Heuristic Repair for CSPs, Texture Measures, Solving CSPs using Systematic Search, Relationship of Problem structure to complexity.
Reading: Sections 4.4-4.5; Bart Selman, Hector Levesque, and David Mitchell, A New Method for Solving Hard Satisfiability Problems, Proceedings AAAI-92; Steven Minton, Andy Philips, Mark D. Johnston, and Philip Laird, Minimizing Conflicts: A Heuristic Repair Method for Constraint-Satisfaction and Scheduling Problems, Journal of Artificial Intelligence Research 1 (1993) 1-15.
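A small Python sketch of the min-conflicts heuristic repair idea from the Minton et al. reading, applied to n-queens (the n-queens formulation and step limit are choices made for this illustration):

    import random

    def min_conflicts_nqueens(n, max_steps=100000):
        # Heuristic repair: start from a complete (random) assignment and repeatedly
        # move a conflicted queen to the column that minimizes its conflicts.
        cols = [random.randrange(n) for _ in range(n)]          # cols[row] = column of the queen in that row

        def conflicts(row, col):
            return sum(1 for r in range(n) if r != row and
                       (cols[r] == col or abs(cols[r] - col) == abs(r - row)))

        for _ in range(max_steps):
            conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
            if not conflicted:
                return cols                                     # no attacks left: a solution
            row = random.choice(conflicted)
            cols[row] = min(range(n), key=lambda c: conflicts(row, c))
        return None

    print(min_conflicts_nqueens(8))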
Lecture 8: CSPs, Interaction of Subproblems, Multi-Level Search [Tues 10/05]
Backtracking, K-consistency. Problem instance hardness, necessity of multi-level search, begin blackboard system discussion.
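For reference, a bare-bones chronological backtracking sketch for CSPs (the map-coloring example and the consistent() interface are assumptions for illustration):

    def backtrack(assignment, variables, domains, consistent):
        # Extend a partial assignment one variable at a time; undo a choice
        # when no consistent value remains further down the search.
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment, variables, domains, consistent)
                if result is not None:
                    return result
                del assignment[var]
        return None

    # Toy usage: 3-color a three-region map where neighboring regions must differ.
    adjacent = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
    ok = lambda var, val, asg: all(asg.get(nb) != val for nb in adjacent[var])
    print(backtrack({}, list(adjacent), {v: ['R', 'G', 'B'] for v in adjacent}, ok))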
REASONING UNDER UNCERTAINTY
Lecture 9: Blackboard Systems as an Architecture for Interpretation [Thurs 10/07]
Basic concepts of blackboard systems, separating control and domain problem solving, knowledge sources, multi-level search space.
Reading: Erman, L.D., Hayes-Roth, F., Lesser, V.R., and Reddy, D.R., The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty, Computing Surveys 12(2), 213-253, 1980.
Additional (optional) reading: Carver, N. and Lesser, V. The Evolution of Blackboard Control Architectures. Computer Science Technical Report 92-71, University of Massachusetts, Amherst. (This is a revised and extended version of paper with same title in Expert Systems with Applications: Special Issue on the Blackboard Paradigm and Its Applications.)
Lecture 10: Uncertainty [Tues 10/12]
Sources of uncertainty, representing uncertainty, Bayesian reasoning. Bayes' rule and its use.
Reading: Chapter 13.
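A small worked instance of Bayes' rule (the numbers are invented for illustration):

    # P(Disease | positive test) = P(positive | Disease) P(Disease) / P(positive)
    prior, sensitivity, false_pos = 0.01, 0.95, 0.06                  # hypothetical values
    p_positive = sensitivity * prior + false_pos * (1 - prior)        # total probability of a positive test
    posterior = sensitivity * prior / p_positive
    print(round(posterior, 3))    # ~0.138: even after a positive test, the disease remains unlikely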
Lecture 11: Probabilistic reasoning with belief networks [Thurs 10/14]
More uses of Bayes' rule. Introduction to graphical models, specifically Bayesian belief networks, d-separation, noisy-OR.
Reading: Chapter 14.
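A minimal noisy-OR calculation (the causes and inhibition probabilities below are illustrative, in the spirit of the fever example in Chapter 14):

    # Noisy-OR: each present cause independently fails to produce the effect with its
    # inhibition probability q_i, so P(effect | present causes) = 1 - product of the q_i.
    inhibit = {'cold': 0.6, 'flu': 0.2, 'malaria': 0.1}

    def noisy_or(present_causes):
        q = 1.0
        for cause in present_causes:
            q *= inhibit[cause]
        return 1.0 - q

    print(round(noisy_or(['cold', 'flu']), 2))      # 1 - 0.6 * 0.2 = 0.88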
Lecture 12: Probabilistic reasoning with belief networks 2 [Tues 10/19]
Network construction. Inference in BNs: automated belief propagation in polytrees, exact inference in tree-structured networks, inference in multiply connected BNs.
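Exact inference by enumeration on a toy three-node chain (network and numbers invented for illustration): to answer P(Cloudy | WetGrass = true), sum the joint over the hidden variable Rain and normalize.

    P_C = {True: 0.5, False: 0.5}
    P_R = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}   # P(Rain | Cloudy)
    P_W = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}   # P(WetGrass | Rain)

    def p_cloudy_given_wet():
        score = {}
        for c in (True, False):
            # Sum out the hidden variable Rain for each value of the query variable.
            score[c] = sum(P_C[c] * P_R[c][r] * P_W[r][True] for r in (True, False))
        z = sum(score.values())
        return {c: score[c] / z for c in score}

    print(round(p_cloudy_given_wet()[True], 3))     # ~0.738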
Lecture 13: Approximate inference for BNs; Alternative Approaches to Uncertainty [Thurs 10/21]
Inference in multiply connected belief networks. Clustering methods, cutset conditioning, and stochastic simulation. Alternative approaches to uncertain reasoning.
Reading: Sections 14.4-14.7.
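A stochastic-simulation counterpart to the enumeration example above: rejection sampling estimates P(Rain | WetGrass = true) on the same toy chain by sampling from the prior and discarding samples that contradict the evidence.

    import random

    def rejection_sample_rain_given_wet(n=100000):
        accepted = rainy = 0
        for _ in range(n):
            c = random.random() < 0.5                    # Cloudy ~ P(C)
            r = random.random() < (0.8 if c else 0.1)    # Rain ~ P(R | C)
            w = random.random() < (0.9 if r else 0.2)    # WetGrass ~ P(W | R)
            if w:                                        # keep only samples matching the evidence
                accepted += 1
                rainy += r
        return rainy / accepted

    print(rejection_sample_rain_given_wet())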
Lecture 14: Decision Theory [Tues 10/26]
Making optimal decisions by maximizing expected utility. The axioms of decision theory. Utility scales and utility assessment.
Reading: Chapter 16.
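A minimal maximum-expected-utility calculation (the umbrella scenario and numbers are made up for illustration):

    # EU(action) = sum over outcomes of P(outcome) * U(action, outcome)
    p_rain = 0.3
    utility = {('umbrella', 'rain'): 70, ('umbrella', 'dry'): 80,
               ('no_umbrella', 'rain'): 0, ('no_umbrella', 'dry'): 100}

    def expected_utility(action):
        return p_rain * utility[(action, 'rain')] + (1 - p_rain) * utility[(action, 'dry')]

    best = max(['umbrella', 'no_umbrella'], key=expected_utility)
    print(best, round(expected_utility('umbrella'), 1),
          round(expected_utility('no_umbrella'), 1))     # umbrella: 77.0 vs. 70.0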
Lecture 15: Value of Information, Intro to MDPs [Thurs 10/28]
The value of information. Introduction to Markov Decision Processes.
Readings: Sections 16.6, 17.1-17.4; Ross Shachter, Evaluating Influence Diagrams, Operations Research 34:871-882, 1986.
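Continuing the made-up umbrella example from Lecture 14, a minimal calculation of the expected value of perfect information (what a clairvoyant weather report would be worth):

    p_rain = 0.3
    utility = {('umbrella', 'rain'): 70, ('umbrella', 'dry'): 80,
               ('no_umbrella', 'rain'): 0, ('no_umbrella', 'dry'): 100}
    actions = ('umbrella', 'no_umbrella')

    eu = lambda a: p_rain * utility[(a, 'rain')] + (1 - p_rain) * utility[(a, 'dry')]
    best_now = max(eu(a) for a in actions)                                      # decide before observing: 77
    best_informed = (p_rain * max(utility[(a, 'rain')] for a in actions) +
                     (1 - p_rain) * max(utility[(a, 'dry')] for a in actions))  # decide after observing: 91
    print(round(best_informed - best_now, 2))                                   # EVPI = 14.0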
MIDTERM [Tues 11/02]
midterm 1996
midterm 1998
midterm 2000, solutions 2000
midterm 2002
LEARNING
Lecture 16: Decision Networks [Thurs 11/04]
Decision tree methods. The semantics of decision networks. Evaluating decision networks.
Reading: Section 18.3.
Lecture 17: Learning from observations [Tues 11/09]
How to get intelligent systems to learn from their experience. Inducing rules from data. Learning decision trees. Learning general logical descriptions.
Reading: Sections 18.1, 18.2, 18.4, and 18.5.
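A short information-gain calculation in the style of the decision-tree material (the split counts below echo the AIMA restaurant example but are used here purely as illustration):

    import math

    def entropy(pos, neg):
        # Entropy (in bits) of a set with pos positive and neg negative examples.
        total, h = pos + neg, 0.0
        for k in (pos, neg):
            if k:
                p = k / total
                h -= p * math.log2(p)
        return h

    def information_gain(pos, neg, splits):
        # Gain = entropy before splitting minus the weighted entropy of the subsets;
        # splits is a list of (pos, neg) pairs, one per attribute value.
        total = pos + neg
        remainder = sum((p + n) / total * entropy(p, n) for p, n in splits)
        return entropy(pos, neg) - remainder

    # Splitting 6+/6- examples on Patrons into None (0+,2-), Some (4+,0-), Full (2+,4-).
    print(round(information_gain(6, 6, [(0, 2), (4, 0), (2, 4)]), 3))   # ~0.541 bits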
No Class: Veterans Day [Thurs 11/11]
Lecture 18: Learning from observations (continued) [Tues 11/16]
Decision tree issues. Current-best-hypothesis search. Version Space. Neural network introduction.
Lecture 19: Neural networks [Thurs 11/18]
Network structure, perceptrons, Hopfield networks, associative memory, multi-layer feed-forward networks, applications.
Reading: Sections 19.1-19.5, 20.8.
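A minimal perceptron-learning sketch (the AND-function example and learning rate are choices made for this illustration):

    def train_perceptron(examples, epochs=25, alpha=0.1):
        # Perceptron rule: on each mistake, nudge the weights toward the correct
        # side of the decision boundary. examples = list of (inputs, label in {0, 1}).
        n = len(examples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, y in examples:
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - out
                w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
                b += alpha * err
        return w, b

    # Learns the linearly separable AND function.
    w, b = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
    print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])    # [0, 0, 0, 1]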
Lecture 20: Markov decision processes and Reinforcement learning [Mon 11/22 (NOTE: Special day! Monday follows a Thursday class schedule.)]
Formulating planning problems using Markov decision processes. Generating optimal action selection policies using value iteration and policy iteration. Solving Markov decision problems using heuristic search, temporal difference learning.
Reading: Sections 17.1-17.3; Hansen and Zilberstein, A Heuristic Search Algorithm for Markov Decision Problems, Bar-Ilan Symposium on the Foundations of Artificial Intelligence, 1999.
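A compact value-iteration sketch (the MDP interface and the two-state example are assumptions made for illustration, not the formulation used in the readings):

    def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
        # Repeatedly apply the Bellman backup
        #   V(s) <- max_a sum_s' T(s, a, s') * (R(s, a, s') + gamma * V(s'))
        # until the values stop changing (within eps). Every state needs at least one action.
        V = {s: 0.0 for s in states}
        while True:
            V_new = {s: max(sum(p * (R(s, a, s2) + gamma * V[s2])
                                for s2, p in T(s, a).items())
                            for a in actions(s))
                     for s in states}
            if max(abs(V_new[s] - V[s]) for s in states) < eps:
                return V_new
            V = V_new

    # Toy MDP: 'move' flips between the two states, 'stay' keeps the state; being in 'good' pays 1.
    S = ['good', 'bad']
    acts = lambda s: ['stay', 'move']
    T = lambda s, a: {s: 1.0} if a == 'stay' else {('bad' if s == 'good' else 'good'): 1.0}
    R = lambda s, a, s2: 1.0 if s == 'good' else 0.0
    print(value_iteration(S, acts, T, R))     # roughly {'good': 10.0, 'bad': 9.0}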
Lecture 21: Reinforcement Learning [Tues 11/23]
Exploration versus exploitation, Q-learning, degrees of abstraction.
Reading: Sections 20.1-20.6.
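A minimal sketch of the Q-learning update and an epsilon-greedy action choice (the tabular representation and parameter values are assumptions for illustration; the environment loop is omitted):

    import random
    from collections import defaultdict

    Q = defaultdict(float)      # Q-values default to 0 for unseen (state, action) pairs

    def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
        # Temporal-difference update toward the one-step target r + gamma * max_a' Q(s', a').
        target = r + gamma * max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    def epsilon_greedy(Q, s, actions, epsilon=0.1):
        # Exploration vs. exploitation: usually exploit the current Q, occasionally act at random.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])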
No Class: Thanksgiving Holiday [Thurs 11/25]
Lecture 22: Analytical Learning and Planning [Tues 11/30]
Analytical learning (explanation-based learning); an overview of planning.
INTELLIGENT SYSTEMS
Lecture 23: Data Mining [Thurs 12/02]
Lecture 24: Resource-bounded reasoning systems [Tues 12/07]
The problem of real-time decision making. Approaches to reasoning with limited computational resources: composition of anytime algorithms, design-to-time, progressive reasoning. Run-time monitoring. Applications.
Lecture 25: Summary [Thurs 12/09]
Course review and summary. The multiple goals of AI. Current research directions.
FINAL EXAM [When: 12/17, 8am; Where: Goessmann 51]
final 1998
final 2000
