Final Project

Progress report due on 4/22/2013. Worth 15% of the final project grade. This is a one-page description of your progress. To receive credit, you must show some kind of progress toward the project goal.

The final project is due on the last day of the exam period (5/9/2013). Only students who have signed up for the 5XX-level version of this class need to complete a project.

The project should explore an application of an idea from one of the topic areas covered in class to a specific robotics problem. The main part of the project should be a simulation of the idea in a robotics scenario. You may implement the simulation in any programming language, including Matlab.

Students may work individually or in pairs. The expectation for a joint project is higher than for an individual project. The expectation for an individual project is the equivalent of two homework assignments' worth of work.

The project should be documented by a project report between three and six pages long. The report should include the following sections: 1) problem description; 2) background; 3) approach; 4) simulations; 5) conclusions.

If you are working individually, you can choose one of the topics described below and do not need to submit a proposal. Alternatively, you can submit a one-page proposal for a project on a different topic. If you are working as a two-person team, you must submit a one-page project proposal describing a project that is significantly more difficult than what is expected for an individual project. All project proposals are due by 4/4/2013. Please email project proposals to Dr. Platt and Suchismit Mahapatra (the course TA).

Possible project topics (for individual projects only):

1. Compare Kalman filtering with particle filtering as ways to localize and track a robot in a known planar environment. The state is the position and orientation of the robot. The only observation available to the robot is a single laser beam that measures the range to the obstacle directly in front of it.
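For the particle filter side of this topic, the core ingredients are a beam model (the expected range reading given a hypothesized pose) and a measurement likelihood for weighting particles. A minimal sketch, assuming a hypothetical square room and a Gaussian sensor noise model (the room size and noise parameters are illustrative, not part of the assignment):

```python
import math

ROOM = 10.0  # hypothetical square room side length

def expected_range(x, y, theta):
    """Distance along heading theta from (x, y) to the nearest wall
    of an axis-aligned ROOM x ROOM box -- the ideal beam reading."""
    ts = []
    c, s = math.cos(theta), math.sin(theta)
    if c > 1e-9:
        ts.append((ROOM - x) / c)
    if c < -1e-9:
        ts.append(-x / c)
    if s > 1e-9:
        ts.append((ROOM - y) / s)
    if s < -1e-9:
        ts.append(-y / s)
    return min(ts) if ts else float("inf")

def weight(particle, z, sigma=0.2):
    """Gaussian measurement likelihood p(z | particle) for one beam."""
    x, y, theta = particle
    d = z - expected_range(x, y, theta)
    return math.exp(-0.5 * (d / sigma) ** 2)
```

In the full filter you would weight every particle with `weight`, normalize, and resample; for the Kalman filter comparison, the same `expected_range` function (linearized about the current estimate) plays the role of the measurement model.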

2. Use reinforcement learning to solve the torque-limited pendulum problem. Assume that the parameters of the pendulum are partially or completely unknown. Learn through experience to balance the pendulum. For example, you might use Q-learning, SARSA, or LSTD by discretizing the two-dimensional theta/theta_dot state space. (You will need to simulate the true pendulum in order to give the learning system something with which to gain experience.) You might also use policy gradient to solve the problem.
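One way to set this topic up is a tabular Q-learning loop over a discretized theta/theta_dot grid, with the true pendulum simulated separately from the learner. A minimal sketch, assuming hypothetical pendulum parameters, a three-torque action set, and a cos(theta) reward (all illustrative choices, not requirements):

```python
import numpy as np

# Hypothetical pendulum parameters -- in the project these would be
# (partially) unknown to the learner and used only by the simulator.
g, m, l, dt = 9.81, 1.0, 1.0, 0.05
torques = [-2.0, 0.0, 2.0]   # torque-limited action set (assumed)
N = 31                       # bins per state dimension

def step(theta, thdot, u):
    """One Euler step of the true pendulum (theta = 0 is upright)."""
    thddot = (g / l) * np.sin(theta) + u / (m * l ** 2)
    thdot = np.clip(thdot + thddot * dt, -8.0, 8.0)
    theta = ((theta + thdot * dt + np.pi) % (2 * np.pi)) - np.pi
    return theta, thdot

def disc(theta, thdot):
    """Map a continuous state onto the N x N grid."""
    i = int((theta + np.pi) / (2 * np.pi) * (N - 1))
    j = int((thdot + 8.0) / 16.0 * (N - 1))
    return i, j

Q = np.zeros((N, N, len(torques)))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

for ep in range(200):
    theta, thdot = rng.uniform(-0.1, 0.1), 0.0   # start near upright
    for t in range(200):
        s = disc(theta, thdot)
        a = rng.integers(len(torques)) if rng.random() < eps \
            else int(np.argmax(Q[s]))
        theta, thdot = step(theta, thdot, torques[a])
        s2 = disc(theta, thdot)
        r = np.cos(theta)    # reward: +1 upright, -1 hanging down
        Q[s + (a,)] += alpha * (r + gamma * np.max(Q[s2]) - Q[s + (a,)])
```

SARSA differs only in bootstrapping from the action actually taken next rather than the greedy `max`; LSTD and policy gradient would replace the table entirely.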

3. Experimentally compare RRT with RRT* for a three-link planar manipulator. Compare average distance (cost) of solutions found using both methods as a function of iterations of the algorithm.

4. Compare KLD-sampling particle filtering with regular particle filtering for a mobile robot that must localize itself in two dimensions in a simulated office environment. Compare the performance and efficiency of the two versions and comment on the differences.
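The key difference from the fixed-size filter is that KLD-sampling adapts the particle count each update using a chi-square bound on the KL divergence between the sample-based estimate and the true posterior. A sketch of that bound, where `k` is the number of histogram bins currently occupied by at least one particle and `z` is a standard-normal quantile (the default `epsilon` and `z` values here are illustrative):

```python
import math

def kld_sample_bound(k, epsilon=0.05, z=1.96):
    """Particles needed so that, with confidence set by z, the KL
    divergence between the particle-based estimate and the true
    posterior stays below epsilon (Wilson-Hilferty chi-square
    approximation). k = number of non-empty histogram bins."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z) ** 3
    return int(math.ceil(n))
```

During each update, the filter keeps drawing particles until the count reaches `kld_sample_bound(k)` for the current `k`, so a concentrated belief (few occupied bins) uses far fewer particles than a spread-out one.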

5. Experimentally compare RRT with A* for a three-link planar manipulator. Compare average distance (cost) of solutions found using both methods as a function of iterations of the algorithm.

More on the way...