Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux:
> Hi list,

Hi ;)

> does anybody have, or know where I can find, some N-dimensional
> dichotomy optimization code in Python (BSD licensed, or equivalent)?

I don't know of any code, but it shouldn't be too difficult going
through a KdTree.

> Worst case, it does not look too bad to code, but I am interested in any
> advice. I haven't done my reading yet, and I don't know how ill-posed a
> problem it is. I had in mind starting from a set of points, iterating the
> computation of the objective function's value at the barycenters of these
> points, and updating this list of points. This does raise a few questions
> about what the best possible updates are.

In this case, you may want to check the Nelder-Mead algorithm (also known
as downhill simplex or polytope), which is available in
scikits.optimization, but there are other implementations out there.
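
For illustration, a minimal sketch with SciPy's stock Nelder-Mead
(scipy.optimize.fmin); the quadratic objective here is only a
hypothetical stand-in for a real cost function:

import numpy as np
from scipy.optimize import fmin

def objective(x):
    # hypothetical stand-in for the real objective: a smooth bowl
    return np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2)

x0 = np.zeros(3)
xmin = fmin(objective, x0, xtol=1e-4, ftol=1e-4, disp=False)
print(xmin)  # lands near [1, -2, 0.5]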

Cheers ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Gael Varoquaux
On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
> Hi ;)

Hi bro

> > does anybody have, or know where I can find, some N-dimensional
> > dichotomy optimization code in Python (BSD licensed, or equivalent)?

> I don't know of any code, but it shouldn't be too difficult going
> through a KdTree.

I am not in terribly high-dimensional spaces, so I don't really need to
use a KdTree (but we do happen to have a nice BallTree available in the
scikit-learn, so I could use it just to play :>).

> > Worst case, it does not look too bad to code, but I am interested in
> > any advice. I haven't done my reading yet, and I don't know how
> > ill-posed a problem it is. I had in mind starting from a set of
> > points, iterating the computation of the objective function's value
> > at the barycenters of these points, and updating this list of points.
> > This does raise a few questions about what the best possible updates
> > are.

> In this case, you may want to check the Nelder-Mead algorithm (also known
> as downhill simplex or polytope), which is available in
> scikits.optimization, but there are other implementations out there.

Interesting reference. I had never looked at the Nelder-Mead algorithm.
I am wondering if it does what I want, though.

The reason I am looking at dichotomy optimization is that the objective
function that I want to optimize has local roughness, but is globally
pretty much a bell-shaped curve. Dichotomy looks like it will get quite
close to the top of the curve (and I have been told that it works well on
such problems). One thing that is nice about dichotomy for my needs is
that it is not based on a gradient, and it works within a convex subset
of the parameter space.
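
For concreteness, here is a rough sketch of the kind of box-shrinking
dichotomy I have in mind (purely illustrative, and assuming a roughly
unimodal objective):

import numpy as np

def dichotomy_maximize(f, lower, upper, n_iter=30):
    # Crude N-dimensional dichotomy: evaluate f at the box center and
    # at the midpoints towards each face, then halve the box around
    # the best point.  Assumes f is roughly unimodal on the box.
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    for _ in range(n_iter):
        center = 0.5 * (lower + upper)
        half = 0.25 * (upper - lower)
        candidates = [center]
        for i in range(len(center)):
            for sign in (-1.0, 1.0):
                point = center.copy()
                point[i] += sign * half[i]
                candidates.append(point)
        best = max(candidates, key=f)
        lower, upper = best - half, best + half
    return 0.5 * (lower + upper)

# Example: maximize a bell-shaped surface on [-5, 5]^2.
f = lambda x: np.exp(-np.sum((x - np.array([1.0, 2.0])) ** 2))
print(dichotomy_maximize(f, [-5, -5], [5, 5]))  # close to [1, 2]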

Will the Nelder-Mead display such properties? It seems so to me, but I
don't trust my quick read through of Wikipedia.

I realize that maybe I should rephrase my question to try and draw more
out of the common wealth of knowledge on this mailing list: what do
people suggest to tackle this problem? Guided by Matthieu's suggestion, I
have started looking at Powell's algorithm, and given the introduction of
www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
whether I should not investigate it. Can people provide any insights on
these problems?

Many thanks,

Gael

PS: The reason I am looking at this optimization problem is that I got
tired of watching grid searches optimize the cross-validation score on
my 3-parameter estimator (not in the scikit-learn, because it is way too
specific to my problems).


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux:
> On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
>> Hi ;)
>
> Hi bro
>
>> > does anybody have, or know where I can find, some N-dimensional
>> > dichotomy optimization code in Python (BSD licensed, or equivalent)?
>
>> I don't know of any code, but it shouldn't be too difficult going
>> through a KdTree.
>
> I am not in terribly high-dimensional spaces, so I don't really need to
> use a KdTree (but we do happen to have a nice BallTree available in the
> scikit-learn, so I could use it just to  play :>).

:D

>> In this case, you may want to check the Nelder-Mead algorithm (also known
>> as downhill simplex or polytope), which is available in
>> scikits.optimization, but there are other implementations out there.
>
> Interesting reference. I had never looked at the Nelder-Mead algorithm.
> I am wondering if it does what I want, thought.
>
> The reason I am looking at dichotomy optimization is that the objective
> function that I want to optimize has local roughness, but is globally
> pretty much a bell-shaped curve. Dichotomy looks like it will get quite
> close to the top of the curve (and I have been told that it works well on
> such problems). One thing that is nice about dichotomy for my needs is
> that it is not based on a gradient, and it works within a convex subset
> of the parameter space.

It seems that a simplex is what you need. It uses the barycenter (more
or less) to find a new point in the simplex. And it works well only on
convex functions (but in fact almost all functions have an issue with
this :D).

> Will the Nelder-Mead display such properties? It seems so to me, but I
> don't trust my quick read through of Wikipedia.

Yes: it does not need a gradient, and if the function is convex, it
works within a convex subset of the parameter space.

> I realize that maybe I should rephrase my question to try and draw more
> out of the common wealth of knowledge on this mailing list: what do
> people suggest to tackle this problem? Guided by Matthieu's suggestion, I
> have started looking at Powell's algorithm, and given the introduction of
> www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
> whether I should not investigate it. Can people provide any insights on
> these problems?

Indeed, Powell's method may also be a solution. A simplex is just the
closest thing to what you hinted at as an optimization algorithm.

> Many thanks,

You're welcome ;)

> Gael
>
> PS: The reason I am looking at this optimization problem is that I got
> tired of watching grid searches optimize the cross-validation score on
> my 3-parameter estimator (not in the scikit-learn, because it is way too
> specific to my problems).

Perhaps you may want to combine it with genetic algorithms. We also
combine a grid search with a simplex-based optimizer and simulated
annealing in some of our global optimization problems, and I think
I'll try at some point to introduce genetic algorithms instead of the
grid search. Your problem is simpler, though, if it displays some
convexity.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Gael Varoquaux
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
> It seems that a simplex is what you need.

Ha! I am learning new fancy words. Now I can start looking clever.

> > I realize that maybe I should rephrase my question to try and draw more
> > out of the common wealth of knowledge on this mailing list: what do
> > people suggest to tackle this problem? Guided by Matthieu's suggestion, I
> > have started looking at Powell's algorithm, and given the introduction of
> > www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
> > whether I should not investigate it. Can people provide any insights on
> > these problems?

> Indeed, Powell's method may also be a solution. A simplex is just the
> closest thing to what you hinted at as an optimization algorithm.

I'll do a bit more reading.

> > PS: The reason I am looking at this optimization problem is that I got
> > tired of watching grid searches optimize the cross-validation score on
> > my 3-parameter estimator (not in the scikit-learn, because it is way too
> > specific to my problems).

> Perhaps you may want to combine it with genetic algorithms. We also
> combine a grid search with a simplex-based optimizer and simulated
> annealing in some of our global optimization problems, and I think
> I'll try at some point to introduce genetic algorithms instead of the
> grid search.

Well, in the scikit, in the long run (it will take a little while) I'd
like to expose other optimization methods than the GridSearchCV, so if
you have code or advice to give us, we'd certainly be interested.

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux:
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need.
>
> Ha! I am learning new fancy words. Now I can start looking clever.
>
>> > I realize that maybe I should rephrase my question to try and draw more
>> > out of the common wealth of knowledge on this mailing list: what do
>> > people suggest to tackle this problem? Guided by Matthieu's suggestion, I
>> > have started looking at Powell's algorithm, and given the introduction of
>> > www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
>> > whether I should not investigate it. Can people provide any insights on
>> > these problems?
>
>> Indeed, Powell's method may also be a solution. A simplex is just the
>> closest thing to what you hinted at as an optimization algorithm.
>
> I'll do a bit more reading.
>
>> > PS: The reason I am looking at this optimization problem is that I got
>> > tired of watching grid searches optimize the cross-validation score on
>> > my 3-parameter estimator (not in the scikit-learn, because it is way too
>> > specific to my problems).
>
>> Perhaps you may want to combine it with genetic algorithms. We also
>> combine a grid search with a simplex-based optimizer and simulated
>> annealing in some of our global optimization problems, and I think
>> I'll try at some point to introduce genetic algorithms instead of the
>> grid search.
>
> Well, in the scikit, in the long run (it will take a little while) I'd
> like to expose other optimization methods than the GridSearchCV, so if
> you have code or advice to give us, we'd certainly be interested.
>
> Gael

There is scikits.optimization, partly in the externals :D But I don't
think they should be in scikits.learn directly. Of course, the scikit
may need access to some global optimization methods, but the most used
one is already there (the grid search). Then, for genetic algorithms,
pyevolve is pretty much all you want (I still have to check the
multiprocessing part).

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Gael Varoquaux
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
> It seems that a simplex is what you need. It uses the barycenter (more
> or less) to find a new point in the simplex. And it works well only on
> convex functions (but in fact almost all functions have an issue with
> this :D).

One last question, now that I know that what I am looking for is a
simplex algorithm (it indeed corresponds to what I was after), is there a
reason not to use optimize.fmin? It implements a Nelder-Mead. I must
admit that I don't see how I can use it to specify the convex hull of the
parameters in which it operates, or restrict it to work only on integers,
which are two things that I may want to do.

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread josef.pktd
On Mon, Nov 22, 2010 at 5:27 PM, Matthieu Brucher wrote:
> 2010/11/22 Gael Varoquaux :
>> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>>> It seems that a simplex is what you need.
>>
>> Ha! I am learning new fancy words. Now I can start looking clever.
>>
>>> > I realize that maybe I should rephrase my question to try and draw more
>>> > out of the common wealth of knowledge on this mailing list: what do
>>> > people suggest to tackle this problem? Guided by Matthieu's suggestion, I
>>> > have started looking at Powell's algorithm, and given the introduction of
>>> > www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
>>> > whether I should not investigate it. Can people provide any insights on
>>> > these problems?
>>
>>> Indeed, Powell's method may also be a solution. A simplex is just the
>>> closest thing to what you hinted at as an optimization algorithm.
>>
>> I'll do a bit more reading.
>>
>>> > PS: The reason I am looking at this optimization problem is that I got
>>> > tired of watching grid searches optimize the cross-validation score on
>>> > my 3-parameter estimator (not in the scikit-learn, because it is way too
>>> > specific to my problems).
>>
>>> Perhaps you may want to combine it with genetic algorithms. We also
>>> combine a grid search with a simplex-based optimizer and simulated
>>> annealing in some of our global optimization problems, and I think
>>> I'll try at some point to introduce genetic algorithms instead of the
>>> grid search.
>>
>> Well, in the scikit, in the long run (it will take a little while) I'd
>> like to expose other optimization methods than the GridSearchCV, so if
>> you have code or advice to give us, we'd certainly be interested.
>>
>> Gael
>
> There is scikits.optimization, partly in the externals :D But I don't
> think they should be in scikits.learn directly. Of course, the scikit
> may need access to some global optimization methods, but the most used
> one is already there (the grid search). Then, for genetic algorithms,
> pyevolve is pretty much all you want (I still have to check the
> multiprocessing part).

Is that license http://pyevolve.sourceforge.net/license.html BSD compatible?

Josef



Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux:
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need. It uses the barycenter (more
>> or less) to find a new point in the simplex. And it works well only on
>> convex functions (but in fact almost all functions have an issue with
>> this :D).
>
> One last question, now that I know that what I am looking for is a
> simplex algorithm (it indeed corresponds to what I was after), is there a
> reason not to use optimize.fmin? It implements a Nelder-Mead. I must
> admit that I don't see how I can use it to specify the convex hull of the
> parameters in which it operates, or restrict it to work only on integers,
> which are two things that I may want to do.

optimize.fmin may be enough; I don't know it well enough. Nelder-Mead
is not a constrained optimization algorithm, so you can't specify an
outer hull. As for the integer part, I don't know if optimize.fmin is
type consistent; I don't know if scikits.optimization is either, but I
can check. It should be, as there is nothing impeding it.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 08:21:55AM +0100, Matthieu Brucher wrote:
> optimize.fmin may be enough; I don't know it well enough. Nelder-Mead
> is not a constrained optimization algorithm, so you can't specify an
> outer hull.

I saw that, after a bit more reading.

> As for the integer part, I don't know if optimize.fmin is
> type consistent,

That's not a problem: I wrap my function in a small object to ensure
memoization and input-argument casting.
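
Something like this sketch (hypothetical, just to illustrate the
memoization-plus-casting idea):

import numpy as np

class IntegerObjective(object):
    # Round the incoming parameters to integers and cache the results,
    # so probing the same integer point twice is free.
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, x):
        key = tuple(int(round(xi)) for xi in np.atleast_1d(x))
        if key not in self.cache:
            self.cache[key] = self.func(np.array(key))
        return self.cache[key]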

The problem is that I can't tell the Nelder-Mead that the smallest jump
it should attempt is .5. I can set xtol to .5, but it still attempts
jumps of .001 in its initial steps. Of course, optimization on integers
is fairly ill-posed, so I am asking for trouble.

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
> The problem is that I can't tell the Nelder-Mead that the smallest jump
> it should attempt is .5. I can set xtol to .5, but it still attempts
> jumps of .001 in its initial steps.

This is strange. It should not, if the initial points are set
adequately. You may want to check whether the initial conditions make
the optimization start at the correct locations.

> Of course optimization on integers is
> fairly ill-posed, so I am asking for trouble.

Indeed :D
That's why GA can be a good solution as well.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
> > The problem is that I can't tell the Nelder-Mead that the smallest jump
> > it should attempt is .5. I can set xtol to .5, but it still attempts
> > jumps of .001 in its initial steps.

> This is strange. It should not, if the initial points are set
> adequately. You may want to check whether the initial conditions make
> the optimization start at the correct locations.

Yes, that's exactly the problem. And it is easy to see why: in
scipy.optimize.fmin, around line 186, the initial points are chosen at a
relative distance of 0.00025 from the initial guess that is given. That's
not what I want in the case of integers :).
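
For what it's worth, a workaround sketch, assuming a recent SciPy
(>= 1.0), where the initial simplex can be passed explicitly instead
of being derived from that hard-coded perturbation:

import numpy as np
from scipy.optimize import minimize

def integer_simplex(x0, step=1.0):
    # simplex with edges of length `step` instead of fmin's 0.00025
    # relative perturbation
    x0 = np.asarray(x0, dtype=float)
    return np.vstack([x0] + [x0 + step * e for e in np.eye(len(x0))])

x0 = [4.0, 10.0]
res = minimize(lambda x: np.sum((x - np.array([1.0, 3.0])) ** 2), x0,
               method='Nelder-Mead',
               options={'initial_simplex': integer_simplex(x0)})
print(res.x)  # close to [1, 3]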

> > Of course optimization on integers is
> > fairly ill-posed, so I am asking for trouble.

> Indeed :D That's why GA can be a good solution as well.

It's suboptimal if I know that my function is bell-shaped.

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
Hello Gael,

On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
>> > The problem is that I can't tell the Nelder-Mead that the smallest jump
>> > it should attempt is .5. I can set xtol to .5, but it still attempts
>> > jumps of .001 in its initial steps.
>
>> This is strange. It should not, if the initial points are set
>> adequately. You may want to check whether the initial conditions make
>> the optimization start at the correct locations.
>
> Yes, that's exactly the problem. And it is easy to see why: in
> scipy.optimize.fmin, around line 186, the initial points are chosen at a
> relative distance of 0.00025 from the initial guess that is given. That's
> not what I want in the case of integers :).

I'm not familiar with dichotomy optimization.
Several techniques have been proposed to solve the problem: genetic
algorithms, simulated annealing, Nelder-Mead and Powell.
To be honest, I find it quite confusing that these algorithms are
named in the same breath.
Do you have a continuous or a discrete problem?

Is your problem of the following form?

min_x  f(x)
s.t.   lo <= Ax + b <= up
       0  =  g(x)
       0  <= h(x)

And if yes, in which space does x live?

cheers,
Sebastian

>
>> > Of course optimization on integers is
>> > fairly ill-posed, so I am asking for trouble.
>
>> Indeed :D That's why GA can be a good solution as well.
>
> It's suboptimal if I know that my function is bell-shaped.
>
> Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
> I'm not familiar with dichotomy optimization.
> Several techniques have been proposed to solve the problem: genetic
> algorithms, simulated annealing, Nelder-Mead and Powell.
> To be honest, I find it quite confusing that these algorithms are
> named in the same breath.

I am confused too. But that stems from my lack of knowledge in
optimization.

> Do you have a continuous or a discrete problem?

Both.

> Is your problem of the following form?

> min_x  f(x)
> s.t.   lo <= Ax + b <= up
>        0  =  g(x)
>        0  <= h(x)

No constraints.

> And if yes, in which space does x live?

Either in R^n, in the set of integers (unidimensional), or in the set of
positive integers.

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
>> I'm not familiar with dichotomy optimization.
>> Several techniques have been proposed to solve the problem: genetic
>> algorithms, simulated annealing, Nelder-Mead and Powell.
>> To be honest, I find it quite confusing that these algorithms are
>> named in the same breath.
>
> I am confused too. But that stems from my lack of knowledge in
> optimization.
>
>> Do you have a continuous or a discrete problem?
>
> Both.
>
>> Is your problem of the following form?
>
>> min_x  f(x)
>> s.t.   lo <= Ax + b <= up
>>        0  =  g(x)
>>        0  <= h(x)
>
> No constraints.

didn't you say that you operate only in some convex hull?

>
>> And if yes, in which space does x live?
>
> Either in R^n, in the set of integers (unidimensional), or in the set of
> positive integers.
According to http://openopt.org/Problems,
this is a mixed-integer nonlinear program (http://openopt.org/MINLP). I
don't have experience with the solver, though; it may take a long time
to run since it uses branch-and-bound.
In my field of work we typically relax the integers to real numbers,
perform the optimization, and then round to the nearest integer.
This is often sufficiently close to a good solution.
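
A minimal sketch of that relax-then-round strategy (illustrative only;
it checks all neighbouring integer corners rather than just the
nearest point):

import numpy as np
from scipy.optimize import fmin

def relax_and_round(f, x0):
    # optimize over the reals, then keep the best of the 2^n integer
    # corners surrounding the relaxed optimum
    x_real = fmin(f, x0, disp=False)
    floor = np.floor(x_real)
    corners = [floor + offset
               for offset in np.ndindex(*([2] * len(x_real)))]
    return min(corners, key=f)

f = lambda x: np.sum((x - np.array([1.3, 2.7])) ** 2)
print(relax_and_round(f, [0.0, 0.0]))  # -> [1., 3.]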

>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
> >> min_x  f(x)
> >> s.t.   lo <= Ax + b <= up
> >>        0  =  g(x)
> >>        0  <= h(x)

> > No constraints.

> didn't you say that you operate only in some convex hull?

No. I have an initial guess that allows me to specify a convex hull in
which the minimum should probably lie, but it's not a constraint: nothing
bad happens if I leave that convex hull.

> > Either in R^n, in the set of integers (unidimensional), or in the set of
> > positive integers.
> According to http://openopt.org/Problems,
> this is a mixed-integer nonlinear program (http://openopt.org/MINLP).

It is indeed the name I know for it; however, I have an additional
hypothesis (namely that f is roughly convex) which makes it much easier.

> I don't have experience with the solver, though; it may take a long
> time to run since it uses branch-and-bound.

Yes, this is too brutal: it is meant for non-convex optimization.
Dichotomy seems well-suited for finding an optimum on the set of
integers.

> In my field of work we typically relax the integers to real numbers,
> perform the optimization, and then round to the nearest integer.
> This is often sufficiently close to a good solution.

This is pretty much what I am doing, but you have to be careful: if the
algorithm takes jumps smaller than 1, it sees a zero difference between
those points. If you are not careful, this can badly confuse the
algorithm and trick it into not converging.

Thanks for your advice,

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
>> >> min_x  f(x)
>> >> s.t.   lo <= Ax + b <= up
>> >>        0  =  g(x)
>> >>        0  <= h(x)
>
>> > No constraints.
>
>> didn't you say that you operate only in some convex hull?
>
> No. I have an initial guess that allows me to specify a convex hull in
> which the minimum should probably lie, but it's not a constraint: nothing
> bad happens if I leave that convex hull.
>
>> > Either in R^n, in the set of integers (unidimensional), or in the set of
>> > positive integers.
>> According to http://openopt.org/Problems,
>> this is a mixed-integer nonlinear program (http://openopt.org/MINLP).
>
> It is indeed the name I know for it; however, I have an additional
> hypothesis (namely that f is roughly convex) which makes it much easier.
>
>> I don't have experience with the solver, though; it may take a long
>> time to run since it uses branch-and-bound.
>
> Yes, this is too brutal: it is meant for non-convex optimization.
> Dichotomy seems well-suited for finding an optimum on the set of
> integers.
>
>> In my field of work we typically relax the integers to real numbers,
>> perform the optimization, and then round to the nearest integer.
>> This is often sufficiently close to a good solution.
>
> This is pretty much what I am doing, but you have to be careful: if the
> algorithm takes jumps smaller than 1, it sees a zero difference between
> those points. If you are not careful, this can badly confuse the
> algorithm and trick it into not converging.
>

Ah, that clears things up a lot.

Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt: wouldn't it be better to
change the model than to modify the optimization algorithm?
It sounds as if the resulting objective function is piecewise
constant. AFAIK most optimization algorithms for continuous problems
require at least Lipschitz-continuous functions to work 'acceptably
well'. Not sure if this is also true for Nelder-Mead.


> Thanks for your advice,
>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
> Well, I don't know what the best method is to solve your problem, so
> take the following with a grain of salt: wouldn't it be better to
> change the model than to modify the optimization algorithm?

In this case, that's not possible. You can think of this parameter as the
number of components in a PCA (it's actually a more complex dictionary
learning framework), so it's a parameter that is discrete, and I can't do
anything about it :).

> It sounds as if the resulting objective function is piecewise
> constant.

> AFAIK most optimization algorithms for continuous problems require at
> least Lipschitz-continuous functions to work 'acceptably well'. Not
> sure if this is also true for Nelder-Mead.

Yes, correct. We do have a problem.

I have a Nelder-Mead that seems to be working quite well on a few toy
problems.

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread josef . pktd
On Tue, Nov 23, 2010 at 8:50 AM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
>> Well, I don't know what the best method is to solve your problem, so
>> take the following with a grain of salt: wouldn't it be better to
>> change the model than to modify the optimization algorithm?
>
> In this case, that's not possible. You can think of this parameter as the
> number of components in a PCA (it's actually a more complex dictionary
> learning framework), so it's a parameter that is discrete, and I can't do
> anything about it :).
>
>> It sounds as if the resulting objective function is piecewise
>> constant.
>
>> AFAIK most optimization algorithms for continuous problems require at
>> least Lipschitz-continuous functions to work 'acceptably well'. Not
>> sure if this is also true for Nelder-Mead.
>
> Yes, correct. We do have a problem.
>
> I have a Nelder-Mead that seems to be working quite well on a few toy
> problems.

Assuming your function is well behaved, one possible idea is to try
replacing the integer objective function with a continuous
interpolation. Or maybe fit a bell-shaped curve to a few grid points.
It might get you into the right neighborhood faster, where you can do
an exact search.

(There are similar methods that use a surrogate objective function
when the true objective is very expensive or impossible to calculate,
but I never looked closely at those cases.)
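
For instance, in one dimension, something along these lines (a sketch;
the `score_at` function is a made-up stand-in for the expensive
cross-validation evaluation):

import numpy as np
from scipy.optimize import curve_fit

def bell(x, a, mu, sigma, c):
    # bell-shaped surrogate for the cross-validation score
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

# stand-in for the real, expensive score (peaked around k = 20)
score_at = lambda k: np.exp(-((k - 20.0) / 12.0) ** 2)

grid = np.arange(1.0, 50.0, 8.0)                # coarse parameter grid
scores = np.array([score_at(k) for k in grid])  # a few expensive evaluations

p0 = (scores.ptp(), grid[np.argmax(scores)], 10.0, scores.min())
params, _ = curve_fit(bell, grid, scores, p0=p0)
k_guess = int(round(params[1]))  # fitted peak: start the exact search here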

Josef

>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 2:50 PM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
>> Well, I don't know what the best method is to solve your problem, so
>> take the following with a grain of salt: wouldn't it be better to
>> change the model than to modify the optimization algorithm?
>
> In this case, that's not possible. You can think of this parameter as the
> number of components in a PCA (it's actually a more complex dictionary
> learning framework), so it's a parameter that is discrete, and I can't do
> anything about it :).

In optimum experimental design one encounters MINLPs where integers
define the number of rows of a matrix.
At first glance it looks as if a relaxation is simply not possible:
either there are additional rows or not.
But with some technical transformations it is possible to reformulate
the problem into a form that allows the relaxation of the integer
constraint in a natural way.

Maybe this is also possible in your case?
Otherwise, well, let me know if you find a working solution ;)


>
>> It sounds as if the resulting objective function is piecewise
>> constant.
>
>> AFAIK most optimization algorithms for continuous problems require at
>> least Lipschitz-continuous functions to work 'acceptably well'. Not
>> sure if this is also true for Nelder-Mead.
>
> Yes, correct. We do have a problem.
>
> I have a Nelder-Mead that seems to be working quite well on a few toy
> problems.
>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 10:19:06AM -0500, josef.p...@gmail.com wrote:
> > I have a Nelder-Mead that seems to be working quite well on a few toy
> > problems.

> Assuming your function is well behaved, one possible idea is to try
> replacing the integer objective function with a continuous
> interpolation. Or maybe fit a bell-shaped curve to a few grid points.
> It might get you into the right neighborhood faster, where you can do
> an exact search.

I've actually been wondering if Gaussian process regression (a.k.a.
Kriging) would not be useful here :). It's fairly good at fitting
processes for which there is very irregular information.

Now I am perfectly aware that any move in this direction would require
significant work, so this is more day dreaming than project planning.
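
Still, a bare-bones sketch of the idea (plain NumPy, RBF kernel;
day-dream quality, not production code):

import numpy as np

def gp_posterior_mean(X, y, X_new, length=5.0, noise=1e-6):
    # posterior mean of a GP with an RBF kernel: a smooth surrogate
    # through irregularly sampled (parameter, score) observations
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = kern(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return np.dot(kern(X_new, X), alpha)

X = np.array([[2.0], [5.0], [9.0], [20.0]])  # parameters tried so far
y = np.array([0.60, 0.72, 0.80, 0.55])       # their cross-validation scores
X_new = np.arange(1.0, 25.0)[:, None]
mean = gp_posterior_mean(X, y, X_new)
k_next = int(X_new[np.argmax(mean), 0])      # most promising next try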

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
> At first glance it looks as if a relaxation is simply not possible:
> either there are additional rows or not.
> But with some technical transformations it is possible to reformulate
> the problem into a form that allows the relaxation of the integer
> constraint in a natural way.

> Maybe this is also possible in your case?

Well, given that it is a cross-validation score that I am optimizing,
there is no simple algorithm giving this score, so it's not obvious at
all that there is a possible relaxation. A road to follow would be to
find an oracle giving the empirical risk after estimation of the
penalized problem, and try to relax this oracle. That's two steps
further than I am. (I apologize if the above paragraph is
incomprehensible; I am getting too deep into the technicalities of my
problem.)

> Otherwise, well, let me know if you find a working solution ;)

Nelder-Mead seems to be working fine, so far. It will take a few weeks
(or more) to have a real insight on what works and what doesn't.

Thanks for your input,

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Zachary Pincus

On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:

> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>> At first glance it looks as if a relaxation is simply not possible:
>> either there are additional rows or not.
>> But with some technical transformations it is possible to reformulate
>> the problem into a form that allows the relaxation of the integer
>> constraint in a natural way.
>
>> Maybe this is also possible in your case?
>
> Well, given that it is a cross-validation score that I am optimizing,
> there is no simple algorithm giving this score, so it's not obvious at
> all that there is a possible relaxation. A road to follow would be to
> find an oracle giving the empirical risk after estimation of the
> penalized problem, and try to relax this oracle. That's two steps
> further than I am. (I apologize if the above paragraph is
> incomprehensible; I am getting too deep into the technicalities of my
> problem.)
>
>> Otherwise, well, let me know if you find a working solution ;)
>
> Nelder-Mead seems to be working fine, so far. It will take a few weeks
> (or more) to have a real insight on what works and what doesn't.

Jumping in a little late, but it seems that simulated annealing might
be a decent method here: take random steps (drawing from a
distribution of integer step sizes), reject steps that fall outside
the fitting range, and accept steps according to the standard
annealing formula.

Something with a global optimum but spikes along the way is pretty
well-suited to SA in general, and it's also an easy algorithm to make
work on a lattice. If you're in high dimensions, there are also
bolt-on methods for biasing the steps toward "good directions" as
opposed to just taking isotropic random steps. Again, pretty easy to
think of discrete implementations of this...
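
A sketch of such a lattice annealer (illustrative only; the cooling
schedule and step distribution would need tuning):

import numpy as np

def anneal_integers(f, x0, lower, upper, n_steps=2000, t0=1.0):
    # simulated annealing on the integer lattice: unit steps,
    # geometric cooling, Metropolis acceptance; out-of-range steps
    # are rejected
    x = np.array(x0, dtype=int)
    fx = f(x)
    best, fbest = x.copy(), fx
    for i in range(n_steps):
        t = t0 * 0.999 ** i
        cand = x + np.random.randint(-1, 2, size=x.shape)
        if np.any(cand < lower) or np.any(cand > upper):
            continue  # reject steps outside the fitting range
        fc = f(cand)
        if fc < fx or np.random.rand() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best

# Example: a bowl with spikes over {0, ..., 20}^2.
f = lambda x: np.sum((x - 7) ** 2) + 0.5 * np.sin(10.0 * np.sum(x))
print(anneal_integers(f, [0, 0], 0, 20))  # close to [7, 7]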

Zach


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/23 Zachary Pincus:
>
> On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
>
>> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>>> At first glance it looks as if a relaxation is simply not possible:
>>> either there are additional rows or not.
>>> But with some technical transformations it is possible to reformulate
>>> the problem into a form that allows the relaxation of the integer
>>> constraint in a natural way.
>>
>>> Maybe this is also possible in your case?
>>
>> Well, given that it is a cross-validation score that I am optimizing,
>> there is no simple algorithm giving this score, so it's not obvious at
>> all that there is a possible relaxation. A road to follow would be to
>> find an oracle giving the empirical risk after estimation of the
>> penalized problem, and try to relax this oracle. That's two steps
>> further than I am. (I apologize if the above paragraph is
>> incomprehensible; I am getting too deep into the technicalities of my
>> problem.)
>>
>>> Otherwise, well, let me know if you find a working solution ;)
>>
>> Nelder-Mead seems to be working fine, so far. It will take a few weeks
>> (or more) to have a real insight on what works and what doesn't.
>
> Jumping in a little late, but it seems that simulated annealing might
> be a decent method here: take random steps (drawing from a
> distribution of integer step sizes), reject steps that fall outside
> the fitting range, and accept steps according to the standard
> annealing formula.

There is also a simulated-annealing modification of Nelder-Mead that
can be of use.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
> > Jumping in a little late, but it seems that simulated annealing might
> > be a decent method here: take random steps (drawing from a
> > distribution of integer step sizes), reject steps that fall outside
> > the fitting range, and accept steps according to the standard
> > annealing formula.

> There is also a simulated-annealing modification of Nelder-Mead that
> can be of use.

Sounds interesting. Any reference?

G


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/24 Gael Varoquaux:
> On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
>> > Jumping in a little late, but it seems that simulated annealing might
>> > be a decent method here: take random steps (drawing from a
>> > distribution of integer step sizes), reject steps that fall outside
>> > the fitting range, and accept steps according to the standard
>> > annealing formula.
>
>> There is also a simulated-annealing modification of Nelder-Mead that
>> can be of use.
>
> Sounds interesting. Any reference?

Not right away; I have to check. The main difference is the possible
acceptance of a contraction that doesn't lower the cost, and this is
done with a temperature, as in simulated annealing.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-24 Thread Charles R Harris
On Tue, Nov 23, 2010 at 3:37 AM, Sebastian Walter
<sebastian.wal...@gmail.com> wrote:

> On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux wrote:
> > On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
> >> I'm not familiar with dichotomy optimization.
> >> Several techniques have been proposed to solve the problem: genetic
> >> algorithms, simulated annealing, Nelder-Mead and Powell.
> >> To be honest, I find it quite confusing that these algorithms are
> >> named in the same breath.
> >
> > I am confused too. But that stems from my lack of knowledge in
> > optimization.
> >
> >> Do you have a continuous or a discrete problem?
> >
> > Both.
> >
> >> Is your problem of the following form?
> >
> >> min_x  f(x)
> >> s.t.   lo <= Ax + b <= up
> >>        0  =  g(x)
> >>        0  <= h(x)
> >
> > No constraints.
>
> didn't you say that you operate only in some convex hull?
>
> >
> >> And if yes, in which space does x live?
> >
> > Either in R^n, in the set of integers (unidimensional), or in the set of
> > positive integers.
> According to http://openopt.org/Problems,
> this is a mixed-integer nonlinear program (http://openopt.org/MINLP). I
> don't have experience with the solver, though; it may take a long time
> to run since it uses branch-and-bound.
> In my field of work we typically relax the integers to real numbers,
> perform the optimization, and then round to the nearest integer.
> This is often sufficiently close to a good solution.
>
>
I've sometimes applied a genetic algorithm to the rounding, which can
improve things a bit if needed.

Chuck


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-28 Thread Paul Anton Letnes

> On Tue, Nov 23, 2010 at 3:37 AM, Sebastian Walter wrote:
> > On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux wrote:
> > On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
> >> I'm not familiar with dichotomy optimization.
> >> Several techniques have been proposed to solve the problem: genetic
> >> algorithms, simulated annealing, Nelder-Mead and Powell.
> >> To be honest, I find it quite confusing that these algorithms are
> >> named in the same breath.
> >
> > I am confused too. But that stems from my lack of knowledge in
> > optimization.
> >
> >> Do you have a continuous or a discrete problem?
> >
> > Both.

I would like to advertise a bit for genetic algorithms. In my
experience, they seem to be the most powerful of the optimization
techniques mentioned here. In particular, they are good at getting out
of local minima, and they don't really care whether you are talking
about integer or continuous problems. As long as you can think of a
good representation and good genetic operators, you should be good! I
have just a little experience with pyevolve myself, but it is really
easy to test GAs with pyevolve, as you only have to make a few settings
to get going.

One word of warning: GA performance is very sensitive to the actual
parameters you choose! In particular, you should think about mutation
rates, crossover rates, selection protocols, and the number of
crossover points. (This list came off the top of my head...)
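
To make those knobs concrete, here is a toy integer GA in plain NumPy
(a sketch, not pyevolve code) with exactly those parameters exposed:
population size, mutation rate, one-point crossover, and tournament
selection:

import numpy as np

def tournament(pop, fitness):
    # the better of two random individuals (lower fitness wins)
    i, j = np.random.randint(len(pop), size=2)
    return pop[i] if fitness[i] < fitness[j] else pop[j]

def ga_minimize(f, n_params, lo, hi, pop_size=40, n_gen=100, p_mut=0.1):
    pop = np.random.randint(lo, hi + 1, size=(pop_size, n_params))
    for _ in range(n_gen):
        fitness = np.array([f(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            a, b = tournament(pop, fitness), tournament(pop, fitness)
            # one-point crossover
            cut = np.random.randint(1, n_params) if n_params > 1 else 0
            child = np.concatenate([a[:cut], b[cut:]])
            # mutation: re-draw each gene with probability p_mut
            mask = np.random.rand(n_params) < p_mut
            child[mask] = np.random.randint(lo, hi + 1, size=mask.sum())
            children.append(child)
        pop = np.array(children)
    return min(pop, key=f)

# Example: recover the integer point (3, 7, 1) in {0, ..., 10}^3.
f = lambda x: np.sum((x - np.array([3, 7, 1])) ** 2)
print(ga_minimize(f, 3, 0, 10))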

If you have any GA questions, ask, and perhaps I can come up with an answer.

Paul.