R250 Imitation Learning
Imitation learning was initially proposed in robotics as a way to build better robots (Schaal, 1999). The connecting theme is to combine the reward function at the end of the action sequence with demonstrations of the task at hand by an expert. Since then it has been applied to a number of tasks that can be modelled as a sequence of actions taken by an agent, including video game agents, moving cameras to track players, and structured prediction for various tasks in natural language processing.
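As a concrete anchor for the discussion, the simplest instantiation of this idea is behavioural cloning: treat the expert demonstrations as a supervised dataset of state-action pairs and fit a policy to them. Below is a minimal illustrative sketch, not taken from any of the papers listed later; the linear softmax policy, the toy training loop, and all names are assumptions:

```python
import numpy as np

def train_behavioural_cloning(states, actions, lr=0.1, epochs=200):
    """Fit a linear softmax policy pi(a|s) to expert state-action pairs.

    states:  (N, d) array of observed expert states (hypothetical dataset)
    actions: (N,)   array of integer expert actions
    """
    n_actions = int(actions.max()) + 1
    W = np.zeros((n_actions, states.shape[1]))
    one_hot = np.eye(n_actions)[actions]
    for _ in range(epochs):
        logits = states @ W.T
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient step on the cross-entropy between policy and expert actions
        W -= lr * (probs - one_hot).T @ states / len(actions)
    return W

def policy(W, state):
    """Greedy action of the cloned policy for a single state."""
    return int(np.argmax(W @ state))
```

Several of the papers below can be read as responses to the weaknesses of exactly this setup, for example the compounding errors that arise when the learned policy drifts away from the states seen in the demonstrations.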
Over the years a number of algorithms have been proposed in the literature, but without the connections between the various approaches necessarily being made clear. The initial lecture will set out the criteria used to examine the algorithms.
The papers presented in the 2021 version of the topic were:
- Sequence Level Training with Recurrent Neural Networks. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba. International Conference on Learning Representations (ICLR), 2016.
- Hierarchical Imitation and Reinforcement Learning. Hoang M. Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, Hal Daumé III. International Conference on Machine Learning (ICML), 2018.
- Generative Adversarial Imitation Learning. Jonathan Ho, Stefano Ermon. Conference on Neural Information Processing Systems (NeurIPS), 2016.
- Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer. Conference on Neural Information Processing Systems (NeurIPS), 2015. (Its core training trick is sketched below.)
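The central mechanism of the last paper fits in a few lines: during training, at each decoding step the model is fed either the ground-truth token or its own previous prediction, with the probability of using the ground truth decaying as training progresses. A minimal sketch using the inverse-sigmoid schedule, which is one of the decay schedules proposed in the paper; the function and parameter names are assumptions:

```python
import math
import random

def choose_decoder_input(gold_token, model_token, step, k=1000.0):
    """Pick the next decoder input during training (hypothetical helper).

    With probability epsilon, feed the ground-truth token (teacher forcing);
    otherwise feed the model's own previous prediction. epsilon follows an
    inverse-sigmoid decay: it starts near 1 and falls towards 0 with step.
    """
    epsilon = k / (k + math.exp(step / k))
    return gold_token if random.random() < epsilon else model_token
```

Exposing the model to its own predictions during training is intended to narrow the gap between training (where the gold history is always available) and inference (where it never is), a theme that recurs across the other readings.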