Magat Aguirre and Pete Talley
FAKE, Fraudulent Arena Killing Environment

Contents
Domain
Illustration
Methods
Conclusions

Domain

Our domain is an N x N playing field. Each square represents a piece of the map that can hold agents and weapons. Agents move around the map searching for weapons and for other agents to attack. An agent chooses an action based on its goodness value; after each action, the agent reassesses the goodness of the actions available in its new situation.
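A minimal sketch of such a playing field, written in Python for illustration (the project itself runs in Prolog). The size N, the dictionary layout, and all names here are assumptions, not the report's actual representation:

```python
N = 5  # assumed field size; the report leaves N unspecified

# Each square of the N x N field may hold agents and/or weapons.
grid = {(x, y): {"agents": [], "weapons": []}
        for x in range(1, N + 1) for y in range(1, N + 1)}

# Place one agent and one weapon (illustrative positions).
grid[(1, 2)]["agents"].append("player1")
grid[(3, 4)]["weapons"].append("sword")

def squares_with_weapons(grid):
    """Return coordinates of squares currently holding a weapon."""
    return [pos for pos, sq in grid.items() if sq["weapons"]]

print(squares_with_weapons(grid))  # → [(3, 4)]
```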


Illustration

| ?- loop(init).
move(player1,north)
I am at 1,2
do(move(player1,north),init)

Go to next state?y.
_561
I am at 1,3
do(move(player1,north),do(move(player1,north),init))

Go to next state?y.
_908
I am at 1,2
do(move(player1,south),do(move(player1,north),do(move(player1,north),init)))

Go to next state?g.
I am at 2,2
do(move(player1,east),do(move(player1,north),do(move(player1,north),init)))

Go to next state?r.
I am at 1,2
do(move(player1,west),do(move(player1,north),do(move(player1,north),init)))

Go to next state?e.
I am at 1,3
do(move(player2,north),do(move(player1,north),do(move(player1,north),init)))

Go to next state?f.
I am at 1,2
do(move(player2,south),do(move(player1,north),do(move(player1,north),init)))

Go to next state?d.
I am at 2,2
do(move(player2,east),do(move(player1,north),do(move(player1,north),init)))

Go to next state?
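The nested do(...) terms in the transcript are situation-calculus histories: each state is represented by the sequence of actions that produced it, applied to init. A hypothetical Python sketch of how such terms accumulate and print (the Prolog program builds these natively; the names do and show are illustrative):

```python
def do(action, situation):
    """Wrap a situation term: the new state is 'action applied to situation'."""
    return ("do", action, situation)

def show(s):
    """Render a situation term in the Prolog-style notation of the transcript."""
    if s == "init":
        return "init"
    _, (functor, *args), rest = s
    return f"do({functor}({','.join(args)}),{show(rest)})"

s0 = "init"
s1 = do(("move", "player1", "north"), s0)
s2 = do(("move", "player1", "north"), s1)

print(show(s2))  # → do(move(player1,north),do(move(player1,north),init))
```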


Methods

Each agent in the environment can move, attack, pick up items, and retreat. The agent chooses an action by assigning a goodness value to each available action, performs the action with the highest value, and then the process starts again. The goodness of a move depends on the agent's current health and on the contents of the destination square: another agent, a weapon, or nothing. The goodness of an attack depends on current health and on whether there is another agent to attack. Pickup checks whether a weapon is on the current square and, if so, picks it up. Retreat moves the agent away from an attacker when its health becomes low.
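A rough Python sketch of this choose-by-goodness loop. All the scores and field names below are invented for illustration; the report does not give the actual goodness formulas:

```python
def goodness(action, agent):
    """Assign a goodness value to an action (weights are assumptions)."""
    if action == "attack":
        # Attacking is only worthwhile when healthy and an enemy is in range.
        return agent["health"] if agent["enemy_adjacent"] else 0
    if action == "pickup":
        return 50 if agent["weapon_here"] else 0
    if action == "retreat":
        # Retreating grows more attractive as health drops.
        return 100 - agent["health"]
    if action == "move":
        return 10  # baseline value for exploring
    return 0

def choose_action(agent):
    """Pick the action with the highest goodness value."""
    actions = ["move", "attack", "pickup", "retreat"]
    return max(actions, key=lambda a: goodness(a, agent))

wounded = {"health": 20, "enemy_adjacent": True, "weapon_here": False}
print(choose_action(wounded))  # → retreat
```

With low health, retreat (goodness 80) outscores attack (20) and move (10), matching the report's description of retreat kicking in when health becomes low.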


Conclusions

AI is infinitely easier to discuss than to implement! Still, it was fun to actually run the program and see results. We tried to make the agents 'think' several steps ahead when deciding what to do, but that did not work properly, so the agents only think one step ahead: an agent at a given location examines each adjacent direction to decide what to do next, but does not then look beyond those squares to consider what it would do after moving there.