Program

Overview

Morning Session (AM)

8:30-8:40 Introduction
8:40-9:10 Sridhar Mahadevan – Rethinking RL: From Optimization to Equilibration
9:10-9:35 Brian Ziebart – Predictive Inverse Optimal Control and Scaling It Up
9:35-10:00 Aldo Faisal – Human vs Machine Learning of Nonlinear Optimal Control

10:00-10:30 Coffee Break

10:30-11:00 Emma Brunskill – RL for People
11:00-11:30 David Silver – Deep Reinforcement Learning
from 11:30 Poster Session 1

Afternoon Session (PM)

3:00-3:25 Gerhard Neumann – Learning Modular Control Policies for Robotics
3:25-3:50 Vicenç Gomez – Applications of Linearly Solvable Optimal Control
3:50-4:20 Naftali Tishby – On the Role of Information in Reinforcement Learning and Control

from 4:20 Poster Session 2

4:30-5:00 Coffee Break

5:00-5:30 Daniel Lee – Noether and Bellman: Learning and planning in high-dimensional continuous state spaces
5:30-6:00 Pieter Abbeel – Reinforcement Learning Neural Net Policies for Robotic Control with Guided Policy Search
6:00-6:30 Discussion & Closing Remarks

Posters

J. Audiffren, M. Valko, A. Lazaric, M. Ghavamzadeh. Maximum entropy semi-supervised inverse reinforcement learning.

P. Prasanna, A. Sarath Chandar, B. Ravindran. iBayes: A Thompson sampling approach to reinforcement learning with instructions.

J. Perez, G. Bouchard. Customer care dialog management, an inverse reinforcement learning approach.

A. Barreto, B. Balle, J. Pineau, D. Precup. Starting to uncover the relationship between stochastic factorization and HMMs.

V. Kovecses, D. Precup. Using information theoretic criteria to discover useful options.

P. Englert, M. Toussaint. Inverse KKT motion optimization.
