Your task is to implement the Robot class’s getAction method such that the robot reaches the target location. The environment size, performance threshold, and number of allowable time steps will vary across tests. For example, it would be impossible to reach the target location in a 100×100 room within 20 time steps.
The getAction method returns a single Action object from the Action.java file. The robot can DO_NOTHING or MOVE in one of four directions.
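As a minimal illustration of that contract, the stub below returns DO_NOTHING every time step. The Action values and the Robot shape here are local stand-ins written for this sketch; the real definitions live in the provided Action.java and Robot files and may differ.

```java
// Local stand-ins for the provided Action.java; the real enum values may differ.
enum Action { DO_NOTHING, MOVE_UP, MOVE_DOWN, MOVE_LEFT, MOVE_RIGHT }

class Robot {
    // getAction is called once per time step and must return exactly one Action.
    public Action getAction() {
        return Action.DO_NOTHING;  // placeholder: this robot never moves
    }
}

public class GetActionShape {
    public static void main(String[] args) {
        System.out.println(new Robot().getAction());  // prints DO_NOTHING
    }
}
```

A robot built this way trivially survives every time step but never reaches the target; it only demonstrates the return shape the simulator expects.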
Note that, depending on your implementation, the agent’s decision-making algorithm may take a long time. For grading purposes, tests will automatically fail after 180 seconds (3 minutes) to prevent simple brute forcing.
Throughout the room, there are impassable walls blocking the agent’s traversal. There are also some modifications to the Environment class detailed in the demo Robot’s getAction method.
You are free to implement the Robot object however you like – though it is STRONGLY encouraged that you create some form of search tree for decisions. Consider using inner classes for your implementation.
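One common way to realize the encouraged search is breadth-first search over the room’s grid, returning the first move along a shortest wall-avoiding path each time step. The sketch below is self-contained for illustration: it uses a hypothetical nested Action enum and a plain boolean wall grid instead of the provided Environment API, so the names are assumptions, not the assignment’s real interfaces.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class BfsSketch {
    // Hypothetical stand-in for the provided Action.java; names are assumptions.
    enum Action { DO_NOTHING, MOVE_UP, MOVE_DOWN, MOVE_LEFT, MOVE_RIGHT }

    // Returns the first action along a shortest path from (sr, sc) to (tr, tc).
    // Cells where wall[r][c] is true are impassable; DO_NOTHING if unreachable.
    static Action firstMove(boolean[][] wall, int sr, int sc, int tr, int tc) {
        if (sr == tr && sc == tc) return Action.DO_NOTHING;  // already there
        int rows = wall.length, cols = wall[0].length;
        int[][] prev = new int[rows][cols];   // predecessor cell, encoded r * cols + c
        for (int[] row : prev) Arrays.fill(row, -1);
        int start = sr * cols + sc;
        prev[sr][sc] = start;                 // marks the start as visited
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[] { sr, sc });
        int[] dr = { -1, 1, 0, 0 }, dc = { 0, 0, -1, 1 };
        Action[] moves = { Action.MOVE_UP, Action.MOVE_DOWN,
                           Action.MOVE_LEFT, Action.MOVE_RIGHT };
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == tr && cur[1] == tc) break;  // shortest path found
            for (int d = 0; d < 4; d++) {
                int nr = cur[0] + dr[d], nc = cur[1] + dc[d];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (wall[nr][nc] || prev[nr][nc] != -1) continue;
                prev[nr][nc] = cur[0] * cols + cur[1];
                queue.add(new int[] { nr, nc });
            }
        }
        if (prev[tr][tc] == -1) return Action.DO_NOTHING;  // target unreachable
        // Walk back from the target until the predecessor is the start cell;
        // (r, c) then holds the first step of the path.
        int r = tr, c = tc;
        while (prev[r][c] != start) {
            int p = prev[r][c];
            r = p / cols;
            c = p % cols;
        }
        for (int d = 0; d < 4; d++)
            if (sr + dr[d] == r && sc + dc[d] == c) return moves[d];
        return Action.DO_NOTHING;  // defensive fallback
    }

    public static void main(String[] args) {
        boolean[][] wall = new boolean[3][3];
        wall[0][1] = true;  // block the direct route along the top row
        System.out.println(firstMove(wall, 0, 0, 0, 2));  // prints MOVE_DOWN
    }
}
```

In an actual submission, the wall grid and coordinates would come from the Environment class, and the BFS bookkeeping (queue node, visited map) is a natural fit for the inner classes mentioned above. Recomputing the path each getAction call keeps the agent robust and stays far under the 180-second limit for reasonable room sizes.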
Two simulation files have been included: VisualizeSimulation and RunSimulation.
VisualizeSimulation generates a visualization of your agent’s interactions with the environment, updating every 200 milliseconds. This will help you observe your agent’s behavior in case it does not act the way you expect.
RunSimulation, on the other hand, is how we will grade your submissions. In lieu of a visualization, this file runs through the simulation quickly and outputs its end results.
