Opinions

Shie Mannor
Technion, Israel
Main Application Domain: Power Systems and Cognitive Networks
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Energy (electricity) management of various types, especially what is known as asset management and operational management.
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? Dealing with model misspecification and off-policy optimization. Efficient parallel computation.
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? Advances in large scale optimization (practical and theoretical), high-dimensional statistics.

Doina Precup
McGill University, Canada
Main Application Domain: Power plant control and medical applications
RL has made tremendous strides in application domains ranging from robotics to game playing and operations-research problems, and it is the standard model used in neuroscience to describe decision making in the brain. So first, as a community, we should acknowledge and celebrate this (instead of just complaining about all its problems). Second, to go beyond the current state of the art, we need to tackle representation learning (without which solving general AI problems with RL is not possible). We have bits and pieces (e.g., some good adaptive function approximators). The efforts and tremendous progress of the deep learning community show a way in which new algorithms, previously thought impossible, can be designed with a general AI goal in mind, while still achieving success in many practical tasks.

Kee-Eung Kim
KAIST, South Korea
Main Application Domain: Reinforcement Learning
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Virtually every area we can think of. I see an intelligent machine as multiple stacked tiers of components, from low-level signal processing at the bottom up to decision making/reinforcement learning at the top. We have made a lot of progress in the lower tiers, to the point where off-the-shelf software can be readily deployed in the field. I think this progress towards practicality will eventually reach the top tier, resulting in a holistic approach to implementing the whole system.
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? It must be sample efficient, and in this sense, I’d like to vote for Bayesian reinforcement learning.
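As a minimal illustration of the Bayesian idea (an editorial sketch, not part of Kim's answer), the snippet below runs posterior (Thompson) sampling on a toy Bernoulli bandit: a Beta posterior over each arm's payoff drives exploration by uncertainty, which is one reason Bayesian RL can be sample efficient. The arm payoffs and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])        # unknown arm payoffs (illustrative)
alpha = np.ones(3)                        # Beta(1, 1) priors over each arm
beta = np.ones(3)

for _ in range(1000):
    samples = rng.beta(alpha, beta)       # sample one plausible model from the posterior
    arm = int(np.argmax(samples))         # act greedily with respect to that sample
    reward = rng.binomial(1, true_p[arm])
    alpha[arm] += reward                  # conjugate posterior update
    beta[arm] += 1 - reward

print("posterior means:", np.round(alpha / (alpha + beta), 2))
```

Because exploration is driven by posterior uncertainty rather than undirected randomness, the agent concentrates its trials on plausibly optimal arms, which is the sample-efficiency argument in miniature.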

Christos Dimitrakakis
Chalmers University of Technology, Sweden
Main Application Domain: reinforcement learning
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? I suppose everything that involves interactive learning from indirect feedback can benefit, in principle. Automated experiment design for domains where the experimental process itself can be largely automated seems a particularly promising area. I also think that a lot of applications involving control engineering might finally benefit from an RL point of view.
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? I don’t have a specific application area in mind, but in general it would be a reduction in algorithmic complexity and in the need to tune hyper-parameters.
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? Advances in statistics/ML, particularly for dealing with high-dimensional, high-volume problems as well as multi-task problems, seem particularly relevant for RL. Deep architectures have had some impact, but I am not sure how well they transfer to the RL setting, where data is actively sampled (experience replay seems to alleviate this problem). I also think that Approximate Bayesian Computation (ABC) is a promising methodology for dealing with arbitrary model sets in RL.
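To make the ABC suggestion concrete, here is a minimal editorial sketch (not Dimitrakakis's algorithm) of ABC rejection sampling for inferring an environment parameter from interaction data: candidate models are drawn from a prior, simulated, and kept only when a summary statistic of the simulated data is close to the observed one. The toy environment and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, n_steps=50):
    # Empirical mean reward of a toy environment whose per-step success
    # probability is theta (a stand-in for a real simulator).
    return rng.binomial(1, theta, size=n_steps).mean()

observed = rollout(0.7)                       # pretend this came from the real system
eps = 0.05                                    # tolerance on the summary statistic
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 1.0)             # draw a candidate model from the prior
    if abs(rollout(theta) - observed) < eps:  # compare summary statistics
        accepted.append(theta)                # keep as an approximate posterior sample

print(f"approx. posterior mean: {np.mean(accepted):.2f} from {len(accepted)} samples")
```

The appeal for RL is that no likelihood needs to be written down for the model class: only the ability to simulate trajectories and compare summary statistics is required.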

Marc Deisenroth
Imperial College, UK
Main Application Domain: Robotics
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Robotics, computer games
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? Scaling and fast learning. The former can probably be addressed by large-scale computing architectures and by giving the agent the ability to collect a lot of data. The latter requires careful extraction of information from the available data, especially in the early stages of learning, when not many data points have been collected.
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? Distributed computing. But we should also look at informative priors: sometimes we can encode a bit of useful (general) information, which will allow us to solve problems much faster. RL needs many more compelling applications.

Pieter Abbeel
UC Berkeley, USA
Main Application Domain: Robotics
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Robotics (esp. bridging perception and control), Dialogue, Marketing
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? Ways to leverage large-scale data and computing, which has already happened quite successfully in machine translation, vision, and speech, and has given a huge boost to performance there.
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? (i) Representation/feature learning as achieved by deep learning in computer vision and speech. (ii) Distributed optimization / stochastic gradient descent (e.g., HogWild). (iii) Benchmarks with great appeal for RL (as ImageNet has done for computer vision).
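As a minimal editorial sketch of point (ii), the snippet below runs HogWild-style lock-free parallel SGD on a toy least-squares problem (not an RL task, and not Abbeel's implementation); the data, learning rate, and all names are invented for the example.

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.sharedctypes import RawArray

def worker(shared_w, X, y, n_steps, lr):
    # Each worker writes into the shared parameter vector without any locking,
    # which is the essence of the HogWild-style update.
    w = np.frombuffer(shared_w)               # writable view onto shared memory
    rng = np.random.default_rng()
    for _ in range(n_steps):
        i = rng.integers(len(X))              # pick one example at random
        grad = (X[i] @ w - y[i]) * X[i]       # gradient of 0.5 * (x.w - y)^2
        w -= lr * grad                        # unsynchronized SGD step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 2000, 10
    X = rng.normal(size=(n, d))
    true_w = rng.normal(size=d)
    y = X @ true_w                            # noiseless targets for the toy problem
    shared_w = RawArray("d", d)               # shared, lock-free parameters
    workers = [Process(target=worker, args=(shared_w, X, y, 5000, 0.01))
               for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    err = np.linalg.norm(np.frombuffer(shared_w) - true_w)
    print(f"parameter error after lock-free SGD: {err:.3f}")
```

When gradients are sparse or only mildly conflicting, skipping synchronization in this way costs little accuracy while allowing near-linear parallel speed-ups, which is the property that made such schemes attractive for large-scale learning.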

Hiroyuki Nakahara
RIKEN Brain Science Institute, Japan
Main Application Domain: computational neuroscience
Reinforcement learning (RL) has had a great impact on the field of neuroscience, in the study of decision-making and the associated learning. It improves our understanding of the relationship between behavior and neural systems by linking them through key computations and variables of RL. I am sure it will continue to play an important role in the future.

In the future, I expect progress in our field in relation to two critical aspects of the RL framework: state and time, i.e., how to treat them when using the RL framework to link behavior and neural systems. First, I hope to see RL further integrated with representation learning (e.g., Nakahara and Hikosaka 2012; Nakahara 2014), or with the relationship of states to feature representations (e.g., Sutton et al 2009; Maei et al 2009), and this advance should be connected with progress in model-based RL. Second, the time of the neural valuation process, i.e., the iterations over states (the valuation time), does not necessarily coincide with the physical passage of time as we measure it, i.e., clock time (Nakahara and Kaveri 2010); furthermore, valuation time should be connected to the variety of processes in the brain, to other “times” in the brain, or to brain time as a whole. I hope to see a future RL framework provide solid theoretical ground for handling the relation of brain time to clock time. For both state and time, I believe that theoretical advances in the RL framework, e.g., in the machine learning field, will be critical and will help our own field progress.

Peter Dayan
University College London, UK
Main Application Domain: Computational Neuroscience
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Meta-control – the prospect of using RL ideas and methods for controlling the processes of planning and action inference.
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? We need to have a richer and more sophisticated understanding of the space and natural statistics of decision-making tasks. This will underpin meta-control, but also provide a justifiable foundation for hierarchical RL, etc.
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? [Given the previous answer,] we need better representations for tasks that can underpin model-based and model-free control and meta-control.

Daniel Polani
University of Hertfordshire, UK
Main Application Domain: Agent and cognitive modelling, Game play
1. What is an application area that can greatly benefit from reinforcement learning in the next decade? Walking, game play, multiagent coordination
2. What improvement of current RL technology could have a significant impact in making RL more widely used in practice? Scaling up, automated combination of building blocks, hierarchy
3. Which (recent) advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)? Setting RL in relation to cognitive-load limitations, e.g., using information-theoretic constraints, and possibly others.
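One concrete way (among several) to formalize such information-theoretic constraints is KL- or entropy-regularized control. The snippet below is a minimal editorial sketch, not Polani's formulation: soft value iteration on a toy chain MDP, where the parameter beta prices deviation from a uniform prior policy. The MDP, the parameter values, and all names are invented for the example.

```python
import numpy as np

# Toy chain MDP: action 0 moves right, action 1 moves left; stepping into the
# last state from its left neighbour pays reward 1. Everything is illustrative.
n_states, n_actions, gamma, beta = 5, 2, 0.9, 1.0
P = np.array([[min(s + 1, n_states - 1), max(s - 1, 0)] for s in range(n_states)])
R = np.zeros((n_states, n_actions))
R[n_states - 2, 0] = 1.0

V = np.zeros(n_states)
for _ in range(200):
    Q = R + gamma * V[P]                                   # Q[s, a] = r(s, a) + gamma * V(s')
    V = np.log(np.mean(np.exp(beta * Q), axis=1)) / beta   # soft (KL-penalized) max over actions

Q = R + gamma * V[P]                                       # recompute Q from the converged V
policy = np.exp(beta * (Q - V[:, None])) / n_actions       # Boltzmann policy around a uniform prior
print(np.round(policy, 2))
```

Lowering beta makes deviating from the prior more expensive and the policy more random; raising it recovers the usual greedy controller, so beta acts as a knob for the kind of cognitive-load limitation the answer refers to.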

Submission Form

Please let us know your opinion about the future of RL. For instance, your opinion could address one or more of the following questions:

  1. What is an application area that can greatly benefit from reinforcement learning in the next decade?
  2. What improvement of current RL technology could have the largest impact in making RL more widely used in your application area?
  3. Which advances in other fields are most relevant for current and future work in reinforcement learning (applied or theoretical)?

All selected opinions will be published on this website, and we will use them as background material for the workshop. To submit your opinion, please use the form below, or, if you want to be very brief, write a tweet with the hashtag #NIPSTrendsRL.
