(Apologies for multiple postings)
Call for Papers: Workshop on Rich Representations for Reinforcement Learning
Date: August 7th, 2005 in conjunction with ICML'05, Bonn, Germany
Web Site: http://www.cs.waikato.ac.nz/~kurtd/rrfrl/
Overview:
Reinforcement learning (RL) has developed into a primary approach to learning control strategies for autonomous agents. The majority of RL work has focused on propositional or attribute-value representations of states and actions, simple temporal models of action, and memoryless policy representations. Many problem domains, however, are not easily represented under these assumptions.
This has led to recent work that studies the use of richer representations in RL to overcome some of these traditional limitations. These include, for example: relational reinforcement learning, where states, actions, and learned policies have relational representations; richer temporal models of action, such as options; richer policy representations that incorporate internal state, such as MAXQ hierarchies; and the recently introduced predictive state representations, in which the state of a system is represented in terms of predictions of future observations.
The main topic of the workshop will be the application of these (and possibly other) rich representational formats, the relationships among them, and their benefits (or drawbacks) for reinforcement learning.
There have been a number of previous workshops focusing on the individual representational approaches noted above. The goal of this workshop is to promote interaction between researchers working on these various representational aspects of RL. There is great diversity among rich representations and possible approaches, many of which may mutually benefit one another. This workshop will give researchers the chance to explore such benefits and to highlight some of the key challenges that remain.
Given the co-location of ICML with ILP this year, we expect attendees from both conferences to participate in the workshop, as the topic intersects with the interests of both, in particular the incorporation of relational and logical representations into RL.
Some example topics/issues that could be addressed include:
* New algorithms for exploiting rich representations to the fullest. When is it possible to design algorithms for rich representations by reduction to traditional techniques?
* When and how does reinforcement learning benefit from rich representations? Specific real-world successes and failures are of particular interest.
* What is the influence of rich representations on the (re-)usability of reinforcement learning results, or transfer learning (for example through goal parameterization)?
* Should the introduction of rich representations in reinforcement learning be accompanied by different learning goals (such as policy-optimality) to keep the learning problems feasible?
* How should we evaluate new algorithms for rich representations? Specific benchmarks that exhibit the weaknesses and benefits of various representational features are of particular interest.
* How can RL benefit from/contribute to existing models and techniques used for (decision-theoretic) planning and agents that already use richer representations, but lack learning?
Submissions Format:
Potential speakers should submit a paper of at most 6 pages in the ICML paper format. We also encourage smaller contributions: summaries of ongoing work, one-page abstracts, and position papers on topics relevant to the workshop.
To supply the panel planned for the end of the workshop with discussion topics, we ask each potential presenter and participant to propose, in advance, a provocative question or claim, with the emphasis on provocative. We will use the resulting pool of questions, possibly anonymously, to stimulate discussion as needed. Papers and provocative questions or claims should be sent by email to [EMAIL PROTECTED]. We will assume that your questions can be attributed to you unless you request anonymity.
Important Dates:
April 1: Paper submission deadline
April 22: Notification of acceptance
May 13: Final paper deadline
August 7: Workshop date
Organizing Committee:
Kurt Driessens: University of Waikato, Hamilton, New Zealand
Alan Fern: Oregon State University, Corvallis, U.S.A.
Martijn van Otterlo: University of Twente, The Netherlands
Program Committee:
Robert Givan: Purdue University, U.S.A.
Roni Khardon: Tufts University, U.S.A.
Ron Parr: Duke University, U.S.A.
Sridhar Mahadevan: University of Massachusetts, U.S.A.
Satinder Singh: University of Michigan, U.S.A.
Prasad Tadepalli: Oregon State University, U.S.A.