CALL FOR PAPERS
==============================================
# Optimizing the Optimizers
## NIPS 2016 Workshop
Barcelona, Spain, December 9 or 10, 2016
http://www.probabilistic-numerics.org/meetings/NIPS2016/
==============================================
### Invited speakers
Stephen J. Wright (U of Wisconsin)
Mark Schmidt (UBC)
David Duvenaud (Harvard -> Toronto)
Misha Denil (DeepMind)
Samantha Hansen (Spotify)
### Topics of interest
• Parameter adaptation for optimization algorithms
• Stochastic optimization methods
• Optimization methods adapted for specific applications
• Batch selection methods
• Convergence diagnostics for optimization algorithms
### Workshop overview
Optimization problems in machine learning have aspects that make them more
challenging than traditional settings: stochasticity, and parameters with
side effects (e.g., the batch size and structure). The field has invented
many different approaches to deal with these demands. Unfortunately (and
intriguingly), this extra functionality seems invariably to necessitate the
introduction of tuning parameters: step sizes, decay rates, cycle lengths,
batch sampling distributions, and so on. Such parameters are not present, or
at least not as prominent, in classic optimization methods. But getting them
right is frequently crucial, and necessitates inconvenient human
“babysitting”.
Recent work has increasingly tried to eliminate such fiddle factors,
typically by statistical estimation (a toy sketch follows below). This also
includes automatic selection of external parameters like the batch size or
batch structure, which have not traditionally been treated as part of the
optimization task. Several different strategies have now been proposed, but
they are not always compatible with each other, and they lack a common
framework that would foster both conceptual and algorithmic
interoperability. This workshop aims to provide a forum for the nascent
community studying the automation of parameter tuning in optimization
routines.
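As a concrete, hedged illustration of what “eliminating a tuning parameter
by statistical estimation” can look like, here is a minimal Python sketch.
It contrasts SGD with a hand-tuned step size against a variant whose
per-coordinate step is estimated on the fly from running gradient moments,
loosely in the spirit of variance-based step-size rules. The quadratic
objective, the noise model, and the update rule are illustrative
assumptions, not a specific method from the literature.

```python
import numpy as np

# Toy stochastic problem: minimize f(w) = 0.5 * ||w||^2 from noisy gradients.
rng = np.random.default_rng(0)

def noisy_grad(w, noise=1.0):
    """Stochastic gradient of 0.5 * ||w||^2; the true gradient is w itself."""
    return w + noise * rng.standard_normal(w.shape)

def sgd_fixed(w, steps=1000, lr=0.1):
    """Classic SGD: the step size `lr` is a hand-tuned fiddle factor."""
    for _ in range(steps):
        w = w - lr * noisy_grad(w)
    return w

def sgd_adaptive(w, steps=1000, beta=0.9, eps=1e-8):
    """Illustrative SGD variant: the per-coordinate step is an estimate of
    the gradient's signal-to-noise ratio, built from running moments, so no
    step size needs to be hand-tuned."""
    m = np.zeros_like(w)  # running mean of gradients
    v = np.zeros_like(w)  # running mean of squared gradients
    for t in range(1, steps + 1):
        g = noisy_grad(w)
        m = beta * m + (1 - beta) * g
        v = beta * v + (1 - beta) * g ** 2
        mh = m / (1 - beta ** t)  # bias-corrected mean
        vh = v / (1 - beta ** t)  # bias-corrected second moment
        # Step shrinks automatically where noise dominates the signal.
        w = w - (mh ** 2 / (vh + eps)) * mh
    return w

w0 = rng.standard_normal(5)
print("fixed step size :", np.linalg.norm(sgd_fixed(w0.copy())))
print("adaptive        :", np.linalg.norm(sgd_adaptive(w0.copy())))
```

Rules of this kind trade the hand-tuned step size for statistical
assumptions about the gradient noise; whether and when such trades pay off
is precisely the sort of question the workshop addresses.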
#### Among the questions to be addressed by the workshop are:
• Is the prominence of tuning parameters a fundamental feature of
stochastic optimization problems? Why do classic optimization methods manage to
do well with virtually no free parameters?
• In which precise sense can the “optimization of optimization
algorithms” be phrased as an inference / learning problem?
• Should, and can, parameters be inferred at design time (by a human),
at compile time (by an external compiler with access to a meta-description
of the problem), or at run time (by the algorithm itself)?
• What are generic ways to learn the parameters of algorithms, and what
are the inherent difficulties in doing so? Is the goal to specialize to a
particular problem, or to generalize across many problems?
### Submission instructions
Contributed papers addressing a question relevant to the workshop’s topic
are invited. Submissions should be in the (new!) NIPS 2016 format, with a
maximum of 4 pages (excluding references). Accepted papers will be made
available online at the workshop website and presented in a spotlight talk
at the workshop itself; the workshop proceedings can be considered
non-archival. Shorter versions of relevant papers submitted elsewhere are
explicitly encouraged. Submissions need not be anonymous. Please send your
submission to Maren Mahsereci <[email protected]>.
### Important dates
Submission deadline: 18:00 GMT, 30 September 2016
Notification of acceptance: 7 November 2016
### Organizers
Maren Mahsereci (MPI Tübingen)
Alex Davies (Google)
Philipp Hennig (MPI Tübingen)