Please distribute:

------------------------------------------------------------------------
Call for Papers
ECML/PKDD 2013 Workshop on
Reinforcement Learning with Generalized Feedback: Beyond Numeric Rewards
------------------------------------------------------------------------

<http://www.ke.tu-darmstadt.de/events/PBRL-13/pbrl-13.html>

This workshop will be held on Monday, September 23, 2013, as part
of the ECML/PKDD 2013 <http://www.ecmlpkdd2013.org/> conference.


BACKGROUND

Reinforcement learning is traditionally formalized within the /Markov
Decision Process/ (MDP) framework: by taking actions in a stochastic
and possibly unknown environment, an agent moves between the states of
this environment and, after each action, receives a numeric, possibly
delayed reward signal. The agent's learning task is then to act
optimally, that is, to find a policy (a mapping from states to actions)
that maximizes its long-term (cumulative) reward.
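
For concreteness, the standard objective can be stated in the usual
textbook form (added here for illustration; the notation is ours, not
taken from the workshop material): given a policy \pi and a discount
factor \gamma \in [0,1), the value of a state s is

    V^\pi(s) = E[ \sum_{t=0}^\infty \gamma^t r_{t+1} | s_0 = s, \pi ],

and the agent seeks an optimal policy \pi^* satisfying V^{\pi^*}(s) >=
V^\pi(s) for all states s and all policies \pi.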

In recent years, various generalizations of this standard setting have
emerged; in particular, several attempts have been made to relax the
rather restrictive requirement of numeric feedback and to learn from
more flexible types of training information. Examples of such
generalized settings include apprenticeship learning, inverse
reinforcement learning, multi-objective reinforcement learning, and
preference-based reinforcement learning. Learning in these generalized
frameworks can be considerably harder than learning in MDPs because
rewards cannot easily be aggregated over different states.
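
To make the difference in feedback concrete, the following minimal
Python sketch (purely illustrative; all names are hypothetical and not
taken from the workshop material) contrasts preference-based feedback
with numeric rewards: the learner can only query pairwise comparisons
between trajectories, generated here from a hidden utility that would
be unobservable in a real preference-based setting.

    # Minimal sketch of preference-based feedback (illustrative only).
    from typing import List, Tuple

    State = int
    Action = int
    Trajectory = List[Tuple[State, Action]]

    def preference_oracle(t1: Trajectory, t2: Trajectory) -> int:
        """Return +1 if t1 is preferred to t2, and -1 otherwise.

        The preference is simulated from a hidden utility; in an actual
        preference-based RL problem, no such numeric signal is exposed
        to the learner.
        """
        def hidden_utility(t: Trajectory) -> int:
            # Unknown to the learner; used only to simulate the oracle.
            return sum(s for s, _ in t)
        return 1 if hidden_utility(t1) > hidden_utility(t2) else -1

    # The learner only observes qualitative comparisons such as:
    t_a = [(3, 0), (5, 1)]  # two toy trajectories of (state, action) pairs
    t_b = [(1, 0), (2, 1)]
    print(preference_oracle(t_a, t_b))  # -> 1, i.e. t_a is preferred

From such comparisons alone, a policy must be inferred without
per-state numeric rewards that could be summed along a trajectory.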


GOALS AND OBJECTIVES

The most important goal of this workshop is to help unify and
streamline research on generalizations of standard reinforcement
learning, which, for the time being, seems to be pursued in a rather
disconnected manner. Indeed, many of the extensions and generalizations
mentioned above still lack a sound theoretical foundation, let alone a
generally accepted underlying framework comparable to Markov Decision
Processes for conventional reinforcement learning. Moreover, many of
the commonalities shared by these generalizations have apparently not
been recognized or explored so far. A formalization in terms of
preferences may provide such a theoretical underpinning. Ideally, the
workshop will help participants identify common ground in their work,
thereby moving the field toward a theoretical foundation for
reinforcement learning with generalized feedback.

Apart from fostering theoretical developments of this kind, we are also
interested in identifying and exchanging applications and problems that
may serve as benchmarks for qualitative or preference-based
reinforcement learning, in the way that cart-pole balancing or the
mountain car serve classical reinforcement learning.


TOPICS OF INTEREST

Topics of interest include, but are not limited to:

  * novel frameworks for reinforcement learning beyond MDPs
  * algorithms for learning from preferences and non-numeric,
    qualitative, or structured feedback
  * theoretical results on the learnability of optimal policies,
    convergence of algorithms in qualitative settings, etc.
  * applications and benchmark problems for reinforcement learning in
    non-standard settings.


SUBMISSIONS

Please e-mail submissions in Springer LNCS format to both workshop
chairs. There is no strict page limit, but we encourage authors to stay
within the page limit of the main conference (16 pages). We
particularly encourage short papers (8 pages or fewer).

Should there be a sufficient number of high-quality submissions, we
will also consider a post-workshop publication, such as a special issue
or an edited book. We would like to emphasize, however, that the
ambition of the workshop is not to collect mature work ready for
publication, but to provide a forum for exchange among researchers,
with the possibility to discuss ongoing developments and work in
progress.


IMPORTANT DATES

Paper deadline: /June 28, 2013/
Notifications:  /July 19, 2013/
Final versions: /August 2, 2013/
Workshop date:  /September 23, 2013/


WORKSHOP CHAIRS

  * Johannes Fürnkranz <mailto:ju...@ke.informatik.tu-darmstadt.de>
    (TU Darmstadt)
  * Eyke Hüllermeier <mailto:e...@mathematik.uni-marburg.de>
    (Universität Marburg) 


