DCS 440 -- Undergraduate Introduction to AI

Project Part 1 Report Key

Fall 2000


This page describes some common pitfalls of preliminary project reports.

Potential problems are indexed by numbers. On projects, I write W when you need to worry about a potential problem and G when your project checks out with respect to a potential problem. A star means that I discuss some directions for responding to a worry in my written comments to you; I hope I've done a good job of addressing specific points for particular projects.

No problem is a fatal flaw for a project. In most cases, you can deal with a potential problem just by paying careful attention to avoid it in the next stage of doing the project. However, anyone who wants to is welcome to come to my office hours to discuss the problem and strategies for avoiding it further. In addition, you are welcome to turn in a revised description of your domain by Wednesday, October 10. What you turn in will supersede your initial draft and will receive credit towards the overall project as though you had handed it in originally on September 27. (And of course it will help you to make definite progress on an interesting and feasible project.)

The next stage of the project will be a programming mockup, where you design data structures to use in decision-making and test out those data structures on sample decisions for your project domain. Most of the pitfalls described here will crop up in this programming mockup and can be dispatched easily there.

So you can start thinking about it, let me briefly explain what a mockup is, using the path-describing agent from class. A mockup would describe a data structure for paths and give a fragment of a Prolog program that would suffice to prove a query that a specific path connects two places. This mockup would show that you've represented enough features of the domain to answer the questions you need to, and that you could use those answers to act reasonably in your domain. What's missing from the mockup is any consideration of search and other programming techniques that would allow your agent to actually compute these data structures in an effective way. And for the mockup you also do not need to have a full environment to visualize the actions of your agent or to interact with your agent.
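
To make that concrete, here is a rough sketch of the kind of fragment such a mockup might contain. The predicate and place names are just made up for illustration; your own mockup does not need to use them.

  % A path is represented as a list of places, e.g. [dorm, quad, library].

  % Facts describing which places are directly connected.
  edge(dorm, quad).
  edge(quad, library).
  edge(quad, gym).

  % path(Path, From, To) succeeds when Path is a list of places that
  % starts at From, ends at To, and follows edges step by step.
  path([Place], Place, Place).
  path([From, Next | Rest], From, To) :-
      edge(From, Next),
      path([Next | Rest], Next, To).

  % A sample query the mockup should be able to prove:
  % ?- path([dorm, quad, library], dorm, library).
  % true.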


Page Contents
1. Abstraction
2. Interface
3. Variability
4. Specificity
5. Productivity

1. Abstraction

You need to be careful, clear, and consistent about how you are going to think of the percepts and actions of the agent. Percepts and actions can come at varying levels of abstraction.

Think of an agent that can move around. At one extreme, this agent might only perceive the locations of features around it; the agent would have to reason to associate those features with landmarks in the agent's environment and then triangulate to discover what location the agent was actually in. The agent might only be able to move by specifying the amount of torque that each of its motors would apply to its wheels. At the other extreme, the agent might perceive its location in a map directly. Its actions might be to go directly to some other specified location in its map.

There's nothing right or wrong about the level of abstraction you choose. You can find good AI problems at any level. But you need to make sure that you choose a reasonable level of abstraction and carve out a substantial but not overwhelming AI problem -- involving features such as explanation, planning, diagnosis, or prediction -- that applies when your agent's percepts and actions are represented at this level. For example, if the only action that your agent can represent is to torque its wheels, it would be unreasonable to attempt to program this agent to learn to dance English country line dances by observing and interacting with a crowd of robot partners. Assuming your agent could automatically execute specific dance steps would be more helpful. On the other hand, if the sole task of your agent is to move from one place to another, it will pretty much knock any intelligence out of your project if you assume that your agent has a primitive action that automatically moves it from one place to another. In this case, you would need to assume some lower-level control mechanism so that your program still has to tackle tasks such as planning and prediction. Remember, of course, that the reasoning your agent does and the knowledge about the world your agent has must be expressed at this same level as well.
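
As a rough illustration of what "the same level" means, compare the following two encodings of a single navigation step. The predicate names are invented for this sketch and nothing about them is required for your project.

  % High level: percepts, actions, and knowledge all talk about places.
  connected(hall, lab).
  can_do(goto(From, To)) :-
      connected(From, To).
  goal(at(lab)).

  % Low level: the same move would have to be expressed as motor
  % commands, and the agent's knowledge would have to talk about
  % torques rather than places, e.g.
  %   primitive_action(torque(left_wheel, 0.4)).
  %   primitive_action(torque(right_wheel, 0.4)).
  % A goal stated as at(lab) is not even expressible here; it would
  % have to become a condition on wheel rotations or sensor readings.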

The potential problem of abstraction poses the biggest danger when your agent will be working in a simulated environment. If the actions of your agent are fixed in advance because of the plan you have for interacting with it or for using your agent's output, you know where you stand. But if you get to decide everything, you have to be careful.


2. Interface

For people working in a simulated environment, interface issues are also important. How will your program get its information about the world? How will its actions be registered?

You should expect to find out that the task or environment you're working in actually turns out to be larger, more interesting, and more subtle than you expected. In some sense, this has been the main result of AI research for the past fifty years. It's very difficult to find this out just by reading an abstract description of the inputs and outputs of a program, however. You typically need to appraise the results of the program in real-world terms; that might mean, for example, seeing what the program is doing, or listening to what the program is saying.

Concretely, be careful that your project doesn't depend on the following kind of situation to be interesting:

  • The agent issues actions that you intend to correspond to physical actions in the real world, but that you're faking because you don't have a killer space robot -- or because of some other technical limitation.
  • Those actions have unpredictable results which have to be fed back into the program in order for the robot to make its further decisions.
  • You don't have a way of simulating these effects automatically. (You might have such a way because you have a randomized model of the world that handles the unpredictability -- see the sketch below -- or because the unpredictability comes from another agent that inhabits the same simulated environment as yours.)
That means somebody has to "think up" all the unpredictable things that might happen in response to the robot's actions and type them in explicitly. This requires a lot of extra creativity. Worse, it requires a lot of moral strength to keep challenging your agent in realistic ways and to keep developing it in the face of the extra effort it requires on your part. It will be too easy to slack off and just end up with something lame.
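
For the third bullet above, a randomized world model does not have to be elaborate. Here is a rough sketch of one way to fake unpredictable outcomes automatically; it assumes SWI-Prolog's library(random), and the action and outcome names are invented for the example.

  :- use_module(library(random)).

  % outcome(Action, Result, Probability): your guesses about what can
  % happen when the agent acts.
  outcome(push(door), open(door),  0.7).
  outcome(push(door), stuck(door), 0.3).

  % simulate(+Action, -Result): pick one result at random, weighted by
  % the listed probabilities, so nobody has to type outcomes in by hand.
  simulate(Action, Result) :-
      findall(R-P, outcome(Action, R, P), Pairs),
      random(X),
      pick(Pairs, X, Result).

  pick([R-P | _], X, R) :- X =< P, !.
  pick([_-P | Rest], X, R) :- Y is X - P, pick(Rest, Y, R).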


3. Variability

There should be some variability in the situations in which your agent has to act. The thing to guard against is a project where the agent has fixed goals and a small range of input and knowledge to deal with. In such a case, it would be feasible to just write a brute-force list of all the situations that the agent could encounter and what the agent should do in each case. Indeed, that might be the best way to program an agent in this case. Well, nothing you learn in this class will apply to writing a little algorithm like this, and you won't find the program very rewarding or interesting (I pass over the inevitable grade-related repercussions).

But even small domains can have complexity. Uncertainty is one reason for this. Even if the agent might only be in a few situations and might only have a few actions to choose from, substantial knowledge and lookahead may be required to pick the best thing to do if the agent doesn't exactly know which situation it's currently in and has only a rough idea about how its action could affect the situation. (That's the essence of playing games like bridge, of course, and look how interesting and complicated those can be!)
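
To see how that can play out in Prolog, here is a rough sketch of an agent that does not know which of two card situations it is in, and so prefers an action that wins in every state it considers possible. All the names are invented for the sketch.

  % The states the agent cannot distinguish.
  possible_state(card_with_opponent).
  possible_state(card_with_partner).

  % result(State, Action, Outcome): a small model of how actions play out.
  result(card_with_opponent, play_trump, win_trick).
  result(card_with_partner,  play_trump, win_trick).
  result(card_with_opponent, play_low,   lose_trick).
  result(card_with_partner,  play_low,   win_trick).

  % safe(Action, Goal): Action achieves Goal no matter which possible
  % state the agent is actually in.
  safe(Action, Goal) :-
      \+ ( possible_state(State), \+ result(State, Action, Goal) ).

  % ?- safe(play_trump, win_trick).   succeeds
  % ?- safe(play_low, win_trick).     fails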

So when you continue your project, you just need to be clear about where the complexity and variability in your domain are coming from.


4. Specificity

You need to identify a number of concrete, specific sample situations in order to describe your project, mock up your project, focus on key issues in your implementation, and show that your implementation is successful. Be specific! If your storyboard only talked about the kind of behavior you'd like to see from your agent in vague or general terms, you should plan things out in more detail. You will have to -- the mockup will ask you to formalize things like the specific state of the world you're working in, the specific percepts and overall goals of an agent in that state, and the specific rationale for action that the agent should use in that state, as part of a preliminary Prolog program!
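
Here is a rough sketch of the level of specificity the mockup will ask for, using an invented delivery-robot scenario; your own predicates and scenario will of course differ.

  :- dynamic delivered/1.

  % The specific state of the world in one sample situation.
  at(robot, mailroom).
  at(package7, mailroom).
  destination(package7, office12).

  % The specific percepts the agent gets in that state.
  sees(robot, package7).

  % The agent's overall goal in that state.
  goal(delivered(package7)).

  % The specific rationale for action in that state: pick up a package
  % you can see that still needs to go somewhere.
  should_do(pickup(Package)) :-
      sees(robot, Package),
      destination(Package, _),
      \+ delivered(Package).

  % ?- should_do(Action).
  % Action = pickup(package7).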


5. Productivity

In Artificial Intelligence and Cognitive Science, productivity refers to the ability to handle a wide range of situations using a small body of knowledge. For example, linguistic rules are productive because speakers of a language such as English can use their knowledge to understand countless sentences that they have never heard before.

Although you are working with only a small number of sample situations, you should strive to make all the rules and representations you develop for your project as productive as possible. Don't attack the problems or scenarios that your program will encounter one by one, each on its own terms. Try to develop a common underlying approach that relies on general principles about describing actions, goals, effects, plans, and agents, and that's parameterized by the particular real-world knowledge required in a particular situation.
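
As a rough sketch of the difference, compare an unproductive case-by-case encoding with a single general rule parameterized by situation-specific knowledge; all the predicates here are invented for the example.

  % Unproductive: one fact per scenario, each on its own terms.
  %   response(hungry_at_home, cook).
  %   response(hungry_at_work, buy_lunch).
  %   ... and so on, one entry per case ...

  % Productive: one general principle about goals, effects, and
  % preconditions ...
  choose(Situation, Goal, Action) :-
      achieves(Action, Goal),
      precondition(Action, Condition),
      holds(Condition, Situation).

  % ... parameterized by particular knowledge about the world.
  achieves(cook, fed).
  achieves(buy_lunch, fed).
  precondition(cook, has_kitchen).
  precondition(buy_lunch, has_cafeteria).
  holds(has_kitchen, at_home).
  holds(has_cafeteria, at_work).

  % ?- choose(at_work, fed, Action).
  % Action = buy_lunch.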