Karen Mulder's AI Project
ASK ME - Autonomous Symbolic Knowledge & Memory Engine

Contents
Domain
Illustration
Methods
Conclusions
Try Me!

Domain

ASK ME is an interactive, twenty-questions-like game with multiple modes. In Game 1, the agent asks the user to think of an object, and then asks a series of questions in an attempt to guess the user's object. In Game 2, the agent chooses an object and offers clues while the user tries to correctly identify the object. The user can also view the current state of the knowledge base and statistics about it.


Illustration

Feel free to try the online version (web interface courtesy of Gary Mulder). Note: you must be connected through Rutgers, not an outside ISP. Alternatively, it works through lynx on romulus.

Here is a link to a sample session.

Below is a shorter version, edited for space:

Here is the knowledge base before a session of Game 1:

Objects consist of the object name, the list of positive attributes, 
the list of negative attributes, and the object's rank.

Objects:
-> object(cat,[furry],[],-2)
-> object(dog,[friendly],[],-1)
-> object(giraffe,[tall,furry],[],0)
-> object(sweater,[warm],[furry,friendly],-1)
-> object(pencil,[useful],[friendly,furry],0)
done!

Attributes consist of the attribute name, its rank in Game 1, and its
rank in Game 2.

Attributes:
-> attribute(tall,3,0)
-> attribute(warm,3,0)
-> attribute(furry,4.5,0)
-> attribute(friendly,3.8333333333333335,0)
-> attribute(useful,3,0)
done!

Then Game 1 is played:

Think of an object, and I will try to guess it!
I will ask you questions about it.  Please let me know if the
answer is Yes (y), No (n), or that you don't know (d).
Remember to follow all answers with a period!
Are you ready?  (enter 'y'):  y.

Is it furry?  (y/n/d):  n.
Is it friendly?  (y/n/d):  n.
Is it warm?  (y/n/d):  n.
Is the object you're thinking of a pencil?  (y/n):  n.
I give up!  What are you thinking of?  A:  iceberg.
What is an attribute that describes it?  A iceberg is:  big.
Thank you, I didn't know that!

Here is the updated knowledge base:

Objects:
-> object(cat,[furry],[],-2)
-> object(dog,[friendly],[],-1)
-> object(giraffe,[tall,furry],[],0)
-> object(sweater,[warm],[furry,friendly],-1)
-> object(pencil,[useful],[friendly,furry],-1)
-> object(iceberg,[big],[warm,friendly,furry],0)
done!

Attributes consist of the attribute name, its rank in Game 1, and its
rank in Game 2.

Attributes:
-> attribute(tall,3,0)
-> attribute(useful,3,0)
-> attribute(furry,4.9,0)
-> attribute(friendly,4.166666666666667,0)
-> attribute(warm,3.5,0)
-> attribute(big,3,0)
done!

Game 1 is then run again:

Think of an object, and I will try to guess it!
I will ask you questions about it.  Please let me know if the
answer is Yes (y), No (n), or that you don't know (d).
Remember to follow all answers with a period!
Are you ready?  (enter 'y'):  y.

Is it furry?  (y/n/d):  n.
Is it friendly?  (y/n/d):  n.
Is it warm?  (y/n/d):  n.
Is it useful?  (y/n/d):  y.
Is it big?  (y/n/d):  n.
Is the object you're thinking of a pencil?  (y/n):  y.

The resulting knowledge base:

Objects consist of the object name, the list of positive attributes, 
the list of negative attributes, and the object's rank.

Objects:
-> object(cat,[furry],[],-2)
-> object(dog,[friendly],[],-1)
-> object(giraffe,[tall,furry],[],0)
-> object(sweater,[warm],[furry,friendly],-1)
-> object(iceberg,[big],[warm,friendly,furry],0)
-> object(pencil,[useful],[big,warm,friendly,furry],0)
done!

Attributes consist of the attribute name, its rank in Game 1, and its
rank in Game 2.

Attributes:
-> attribute(tall,3,0)
-> attribute(furry,5.233333333333333,0)
-> attribute(friendly,4.416666666666667,0)
-> attribute(warm,3.8333333333333335,0)
-> attribute(useful,3.0,0)
-> attribute(big,3.5,0)
done!


Below is the knowledge base at a later point, before Game 2 is played:

Objects consist of the object name, the list of positive attributes, 
the list of negative attributes, and the object's rank.

Objects:
-> object(dog,[friendly],[],-1)
-> object(giraffe,[tall,furry],[],0)
-> object(sweater,[warm],[furry,friendly],-1)
-> object(iceberg,[big],[warm,friendly,furry],0)
-> object(pencil,[useful],[big,warm,friendly,furry],0)
-> object(cat,[furry],[],-2)
done!

Attributes consist of the attribute name, its rank in Game 1, and its
rank in Game 2.

Attributes:
-> attribute(tall,3,0)
-> attribute(furry,5.233333333333333,0)
-> attribute(warm,3.8333333333333335,0)
-> attribute(useful,3.0,0)
-> attribute(big,3.5,0)
-> attribute(friendly,4.416666666666667,1)
done!

Game 2 is then played:

I'm thinking of an object...
I will give you clues, and you can try to guess it.
Remember to follow each guess with a period.
Here is your first clue:  The object I am thinking about is not friendly.
What is your guess?  It is a:  bat.

Sorry, it's not a bat.
The object I am thinking about is big.
What is your guess?  It is a:  house.

Sorry, it's not a house.
The object I am thinking about is not warm.
What is your guess?  It is a:  mountain.

Sorry, it's not a mountain.
The object I am thinking about is not furry.
What is your guess?  It is a:  elephant.

Sorry, it's not a elephant.
The object I am thinking of has 7 letters.
What is your guess?  It is a:  octopus.

Sorry, it's not a octopus.
The next letter is i
What is your guess?  It is a:  iceberg.

Congratulations, you guessed it in 6 tries!

The resulting update to the knowledge base:

Objects consist of the object name, the list of positive attributes, 
the list of negative attributes, and the object's rank.

Objects:
-> object(dog,[friendly],[],-1)
-> object(giraffe,[tall,furry],[],0)
-> object(sweater,[warm],[furry,friendly],-1)
-> object(iceberg,[big],[warm,friendly,furry],0)
-> object(pencil,[useful],[big,warm,friendly,furry],0)
-> object(cat,[furry],[],-2)
-> object(bat,[],[friendly],0)
-> object(house,[big],[friendly],0)
-> object(mountain,[big],[warm,friendly],0)
-> object(elephant,[big],[furry,warm,friendly],0)
-> object(octopus,[big],[furry,warm,friendly],0)
done!

Attributes consist of the attribute name, its rank in Game 1, and its
rank in Game 2.

Attributes:
-> attribute(tall,3,0)
-> attribute(useful,3.0,0)
-> attribute(furry,5.233333333333333,2)
-> attribute(warm,3.8333333333333335,3)
-> attribute(big,3.5,4)
-> attribute(friendly,4.416666666666667,6)
done!

Methods

ASK ME builds its knowledge base entirely from scratch, based solely on the user's clues and guesses. It constructs a database of objects and, for each, lists of attributes the object is known to have or not have. It uses this knowledge, along with several ranking heuristics, to determine which questions to ask, which objects to guess, and which clues to offer.

Game 1 decides whether to ask a question or try to guess the object, as well as which question to ask or which object to guess, based on a ranking system. An object's rank is increased every time it turns out to be what the user was thinking of (so, for example, the agent might learn that a user is much more likely to be thinking about a cat than about an aardvark). By always asking the highest-ranked question, the agent tries to shrink the problem domain by the greatest amount.
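The selection step can be sketched as follows. This is a minimal sketch in Python rather than the project's own code; the tuple layout mirrors the attribute(Name, Rank1, Rank2) facts in the knowledge-base dumps above, but the function name is illustrative:

```python
# Attributes are stored as (name, game1_rank, game2_rank), mirroring the
# attribute(Name, Rank1, Rank2) facts in the knowledge-base dumps.
attributes = [
    ("tall", 3, 0),
    ("warm", 3, 0),
    ("furry", 4.5, 0),
    ("friendly", 3.8333333333333335, 0),
    ("useful", 3, 0),
]

def next_question(attributes, asked):
    """Pick the highest-ranked attribute (by Game 1 rank) not yet asked."""
    candidates = [a for a in attributes if a[0] not in asked]
    if not candidates:
        return None          # no attributes left: time to guess an object
    return max(candidates, key=lambda a: a[1])[0]

print(next_question(attributes, set()))      # furry has the top rank
print(next_question(attributes, {"furry"}))  # then friendly
```

This matches the first sample session above, where the agent leads with "Is it furry?" and then "Is it friendly?".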

Attributes of objects are also weighted, based on their effectiveness as questions to the user: how many possible objects have they eliminated in the past? By ranking both attributes and objects in this way, the agent not only learns which strategies are more effective in the game, but can also evolve over time: if initially trained on a user who tends to think of animals, the agent might weight certain animals highly, along with attributes such as being furry. However, if a new user comes along who is more prone to thinking about inanimate objects, the agent can adapt its strategy to better deal with the conditions of its changing world, gradually weighting objects such as cars or furniture higher than animals, and attributes such as being plastic or wooden higher than being furry. (All the information it has collected about animals remains, however, should a user similar to the first one come along - there is no catastrophic forgetting.)

Additionally, the rank of objects may be temporarily increased (for the duration of the game, without affecting the value in the knowledge base), depending on the outcome of questions. For example, if the user says the object they are thinking of is furry, all objects known to be non-furry are removed from the list of possible objects, leaving the agent with objects known to be furry and objects whose furriness it knows nothing about. The agent will therefore increase the temporary rank of the known furry objects, increasing its belief that one of them might be the correct object. (The effect compounds the more often this happens within a game - the furry, small, four-legged mammal object is more likely to be an object known to have all of these qualities, such as a cat, than an object whose entry makes no reference to these qualities, and thus receives an additional four points in rank.)
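The filtering and temporary-rank step can be sketched like this (again a Python sketch, with tuples mirroring the object(Name, Pos, Neg, Rank) facts; the function and variable names are illustrative):

```python
# Objects mirror the object(Name, Pos, Neg, Rank) facts: name, attributes
# known to hold, attributes known not to hold, and a persistent rank.
objects = [
    ("cat", ["furry"], [], -2),
    ("dog", ["friendly"], [], -1),
    ("giraffe", ["tall", "furry"], [], 0),
    ("sweater", ["warm"], ["furry", "friendly"], -1),
    ("pencil", ["useful"], ["friendly", "furry"], 0),
]

def apply_answer(candidates, temp_rank, attr, answer):
    """Update candidate objects and their temporary ranks after one answer.

    A 'yes' removes objects known to lack the attribute and bumps the
    temporary rank of objects known to have it; 'no' does the mirror image.
    Objects that mention the attribute in neither list stay, unbumped.
    """
    kept = []
    for name, pos, neg, rank in candidates:
        if answer == "y":
            if attr in neg:
                continue            # known not to have it: eliminate
            if attr in pos:
                temp_rank[name] = temp_rank.get(name, 0) + 1
        else:  # answer == "n"
            if attr in pos:
                continue            # known to have it: eliminate
            if attr in neg:
                temp_rank[name] = temp_rank.get(name, 0) + 1
        kept.append((name, pos, neg, rank))
    return kept

temp = {}
remaining = apply_answer(objects, temp, "furry", "n")
print([o[0] for o in remaining])   # cat and giraffe eliminated
print(temp)                        # sweater and pencil get a temporary bump
```

This reproduces the first sample game: after "Is it furry? n.", the cat and giraffe drop out, and the sweater and pencil (both known to be non-furry) gain belief.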

Upon completion of the game, the agent has much new information to add to its knowledge base: an object (if the user's object was not already in the database), attributes that correspond to that object, an increase to that object's rank, and changes to the ranks of various attributes it used to help find the answer.
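That update can be sketched as follows; the merge logic is inferred from the knowledge-base dumps above, and the exact rank arithmetic is an assumption:

```python
def learn_object(kb, name, yes_attrs, no_attrs):
    """Fold one finished round of Game 1 back into the knowledge base.

    If the object is new, it is added with the answered attributes; if it
    is already known, its rank is bumped and newly learned attributes are
    merged in. (The rank increment here is a guess, not the real formula.)
    """
    for i, (obj, pos, neg, rank) in enumerate(kb):
        if obj == name:
            pos = sorted(set(pos) | set(yes_attrs))
            neg = sorted(set(neg) | set(no_attrs))
            kb[i] = (obj, pos, neg, rank + 1)
            return kb
    kb.append((name, list(yes_attrs), list(no_attrs), 0))
    return kb

kb = [("pencil", ["useful"], ["friendly", "furry"], 0)]
learn_object(kb, "iceberg", ["big"], ["warm", "friendly", "furry"])
print(kb[-1])   # the new object enters with rank 0, as in the dump above
```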

In Game 2, the agent chooses an object from its knowledge base at random, and offers the user clues to determine which object it is. In the event that the agent runs out of attributes to use as clues, it gives the length of the object's name, and then spells it out, one letter at a time, thereby allowing most users to eventually guess the object.
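The clue sequence can be sketched as a generator; the ordering rule (attribute clues in descending Game 2 rank, then the length, then letters) is inferred from the sample session above:

```python
def clue_stream(name, pos, neg, rank2):
    """Yield Game 2 clues for the chosen object: attribute clues in
    descending Game 2 rank, then the name length, then letters one by one."""
    attrs = [(a, True) for a in pos] + [(a, False) for a in neg]
    attrs.sort(key=lambda t: rank2.get(t[0], 0), reverse=True)
    for a, holds in attrs:
        prefix = "" if holds else "not "
        yield f"The object I am thinking about is {prefix}{a}."
    yield f"The object I am thinking of has {len(name)} letters."
    for ch in name:                # fall back to spelling it out
        yield f"The next letter is {ch}"

rank2 = {"friendly": 1, "big": 0, "warm": 0, "furry": 0}
clues = list(clue_stream("iceberg", ["big"], ["warm", "friendly", "furry"], rank2))
print(clues[0])   # friendly has the highest Game 2 rank, so it leads
```

With these (assumed) ranks, the stream reproduces the sample session: "not friendly", "big", "not warm", "not furry", then "has 7 letters", then "i", "c", "e", ...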

The heuristics in Game 2 are simpler than those in Game 1. As in Game 1, the agent ranks the attributes as to their value as clues - in this case, the longer an attribute used as a clue prolongs the game, the more valuable it is.

The most vital function of Game 2, therefore, is its knowledge collection. In an average game, it can collect much more information to add to the knowledge base than Game 1 can, especially in terms of objects (Game 1 can collect at most one new object per game).
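This harvesting is visible in the dump above, where every wrong guess (bat, house, mountain, elephant, octopus) becomes a new object carrying the clues given before it. A sketch of that step, with an illustrative function name:

```python
def harvest_guesses(known_names, guesses, clue_history):
    """Turn each wrong Game 2 guess into a new object that inherits every
    attribute clue given before that guess (so, as in the dump above,
    'mountain' enters with positive [big] and negative [warm, friendly])."""
    new_objects = []
    for guess, clues_seen in zip(guesses, clue_history):
        if guess in known_names:
            continue                    # already known; nothing new to add
        pos = [a for a, holds in clues_seen if holds]
        neg = [a for a, holds in clues_seen if not holds]
        new_objects.append((guess, pos, neg, 0))
    return new_objects

# Cumulative clues before each guess, as (attribute, holds?) pairs.
history = [
    [("friendly", False)],
    [("friendly", False), ("big", True)],
    [("friendly", False), ("big", True), ("warm", False)],
]
added = harvest_guesses({"iceberg"}, ["bat", "house", "mountain"], history)
print(added[-1])   # mountain enters with positive [big]
```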


Conclusions

ASK ME relies on the assumption that all users will be consistent when entering data - that you and I will always agree on whether a car is big, whether a platypus is a mammal, or whether a cat is friendly. This is entirely unrealistic. The one concession ASK ME makes is that a user doesn't have to answer yes or no; they can say they don't know. It would, however, be more representative of the world, and of users' conceptions of it, to devise a framework that allows for disagreement.

I'm also not fully satisfied with the heuristics; it's not until you get them working and start playing around with them that you can see what's effective, what needs slight alteration, and how better to optimize the heuristic representation.

One feature I had hoped to complete in time was having the agent make generalizations about its world, e.g., all furry objects (in its world) are mammals; small things appear not to be heavy; etc. Aside from being a cute factoid in the Statistics section, ASK ME could have used these implications to further aid in guessing correct objects. For example, if it determines that the object the user is thinking about is not heavy, and in its experience 90% of small things are not heavy and no small things are known to be heavy, it would then raise the temporary rank of small things whose heaviness is not known, but leave alone non-small objects whose heaviness is not known. In this manner it could take advantage of observations it had made about its world, without running the risk of corrupting its knowledge base or incorrectly eliminating an appropriate object. The effect would be more helpful when multiple generalizations came into play. I have some inductive generalization code implemented, but not yet working.