R250 Imitation Learning

Imitation learning was initially proposed in robotics as a way to train robots more effectively (Schaal, 1999). The connecting theme is to combine the reward signal received at the end of an action sequence with expert demonstrations of the task at hand. Since then it has been applied to a range of tasks that can be modelled as a sequence of actions taken by an agent, including video game agents, camera control for tracking players, and structured prediction in natural language processing.
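To make the setting concrete, the sketch below (an illustration, not part of the course material; names such as expert_policy and collect_demonstrations are hypothetical) shows the simplest instance of this idea, behavioural cloning: rather than relying on a sparse end-of-sequence reward, the agent fits a policy directly to expert state-action demonstrations by supervised regression.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

# Hypothetical expert: a fixed linear mapping from states to actions.
W_expert = rng.normal(size=(STATE_DIM, ACTION_DIM))

def expert_policy(states):
    # Expert actions for a batch of states.
    return states @ W_expert

def collect_demonstrations(n_steps=500):
    # Roll out the expert and record (state, action) pairs.
    states = rng.normal(size=(n_steps, STATE_DIM))
    actions = expert_policy(states)
    return states, actions

# Behavioural cloning: ordinary least-squares regression from states
# to expert actions, i.e. plain supervised learning on demonstrations.
states, actions = collect_demonstrations()
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy should closely match the expert on unseen states.
test_states = rng.normal(size=(10, STATE_DIM))
error = np.abs(test_states @ W_learned - expert_policy(test_states)).max()
print(f"max action error of cloned policy: {error:.2e}")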

Over the years a number of algorithms have been proposed in the literature, but without the connections between the various approaches necessarily being made clear. The initial lecture will set out the criteria used to examine the algorithms.

Each student will present a paper and corresponding algorithm from the list of papers below and may write a report testing it on a dataset of their choice.