Re: [Numpy-discussion] categorical distributions

2010-11-23 Thread Hagen Fürstenau
> Can you compare the speed of your cython solution with the version of Chuck

For multiple samples of the same distribution, it would do more or less
the same as the "searchsorted" method, so I don't expect any improvement
(except for being easier to find).

For multiple samples of different distributions, my version is 4-5x
faster than "searchsorted(random())". This is without normalizing the
probability vector, which means that you typically don't have to sum up
the whole vector (and store all the intermediate sums).
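[For readers following the thread: the cumulative-sum plus searchsorted method under discussion can be sketched as below. This is a minimal NumPy illustration, not Hagen's Cython code; the function name and the normalization step are my own.]

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_categorical(p, size):
    """Draw `size` samples from one categorical distribution `p`
    via the cumulative-sum + searchsorted method discussed above."""
    cdf = np.cumsum(p)
    cdf /= cdf[-1]              # normalize in case p doesn't sum to 1
    u = rng.random(size)
    return np.searchsorted(cdf, u, side='right')

counts = np.bincount(sample_categorical([0.2, 0.5, 0.3], 100_000), minlength=3)
print(counts / 100_000)  # roughly [0.2, 0.5, 0.3]
```

Note that this vectorizes over samples of one fixed distribution; the per-sample-different-distribution case Hagen describes is exactly where this approach loses its advantage.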

- Hagen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 08:21:55AM +0100, Matthieu Brucher wrote:
> optimize.fmin can be enough, I don't know it well enough. Nelder-Mead
> is not a constrained optimization algorithm, so you can't specify an
> outer hull.

I saw that, after a bit more reading.

> As for the integer part, I don't know if optimize.fmin is
> type consistent,

That's not a problem: I wrap my function in a small object to ensure
memoization and input-argument casting.

The problem is that I can't tell the Nelder-Mead that the smallest jump
it should attempt is .5. I can set xtol to .5, but it still attempts jumps
of .001 in its initial jumps. Of course optimization on integers is
fairly ill-posed, so I am asking for trouble.
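[Editor's note: in the SciPy of that era the initial simplex offset was hard-coded, but newer SciPy versions accept an `initial_simplex` option that addresses exactly this complaint. A sketch on an invented toy quadratic:]

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # toy bell-shaped objective with minimum at (3, -1)
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

x0 = np.array([0.0, 0.0])
n = len(x0)
# Build a starting simplex whose vertices are 0.5 apart, so the first
# moves are of the desired size rather than the tiny default offsets.
simplex = np.vstack([x0] + [x0 + 0.5 * e for e in np.eye(n)])
res = minimize(f, x0, method='Nelder-Mead',
               options={'initial_simplex': simplex, 'xatol': 0.5})
print(np.round(res.x))
```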

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
> The problem is that I can't tell the Nelder-Mead that the smallest jump
> it should attempt is .5. I can set xtol to .5, but it still attempts jumps
> of .001 in its initial jumps.

This is strange. It should not if the initial points are set
adequately. You may want to check whether the initial conditions make the
optimization start at correct locations.

> Of course optimization on integers is
> fairly ill-posed, so I am asking for trouble.

Indeed :D
That's why GA can be a good solution as well.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
> > The problem is that I can't tell the Nelder-Mead that the smallest jump
> > it should attempt is .5. I can set xtol to .5, but it still attempts jumps
> > of .001 in its initial jumps.

> This is strange. It should not if the initial points are set
> adequately. You may want to check whether the initial conditions make the
> optimization start at correct locations.

Yes, that's exactly the problem. And it is easy to see why: in
scipy.optimize.fmin, around line 186, the initial points are chosen with
a relative distance of 0.00025 to the initial guess that is given. That's
not what I want in the case of integers :).

> > Of course optimization on integers is
> > fairly ill-posed, so I am asking for trouble.

> Indeed :D That's why GA can be a good solution as well.

It's suboptimal if I know that my function is bell-shaped.

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
Hello Gael,

On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux
 wrote:
> On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
>> > The problem is that I can't tell the Nelder-Mead that the smallest jump
>> > it should attempt is .5. I can set xtol to .5, but it still attempts jumps
>> > of .001 in its initial jumps.
>
>> This is strange. It should not if the initial points are set
>> adequately. You may want to check whether the initial conditions make the
>> optimization start at correct locations.
>
> Yes, that's exactly the problem. And it is easy to see why: in
> scipy.optimize.fmin, around line 186, the initial points are chosen with
> a relative distance of 0.00025 to the initial guess that is given. That's
> not what I want in the case of integers :).

I'm not familiar with dichotomy optimization.
Several techniques have been proposed to solve the problem: genetic
algorithms, simulated annealing, Nelder-Mead and Powell.
To be honest, I find it quite confusing that these algorithms are
named in the same breath.
Do you have a continuous or a discrete problem?

Is your problem of the following form?

min_x f(x)
s.t.   lo <= Ax + b <= up
   0 = g(x)
   0 <= h(x)

And if yes, in which space does x live?

cheers,
Sebastian

>
>> > Of course optimization on integers is
>> > fairly ill-posed, so I am asking for trouble.
>
>> Indeed :D That's why GA can be a good solution as well.
>
> It's suboptimal if I know that my function is bell-shaped.
>
> Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
> I'm not familiar with dichotomy optimization.
> Several techniques have been proposed to solve the problem: genetic
> algorithms, simulated annealing, Nelder-Mead and Powell.
> To be honest, I find it quite confusing that these algorithms are
> named in the same breath.

I am confused too. But that stems from my lack of knowledge in
optimization.

> Do you have a continuous or a discrete problem?

Both.

> Is your problem of the following form?

> min_x f(x)
> s.t.   lo <= Ax + b <= up
>0 = g(x)
>0 <= h(x)

No constraints.

> And if yes, in which space does x live?

Either in R^n, in the set of integers (unidimensional), or in the set of
positive integers.

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux
 wrote:
> On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
>> I'm not familiar with dichotomy optimization.
>> Several techniques have been proposed to solve the problem: genetic
>> algorithms, simulated annealing, Nelder-Mead and Powell.
>> To be honest, I find it quite confusing that these algorithms are
>> named in the same breath.
>
> I am confused too. But that stems from my lack of knowledge in
> optimization.
>
>> Do you have a continuous or a discrete problem?
>
> Both.
>
>> Is your problem of the following form?
>
>> min_x f(x)
>> s.t.   lo <= Ax + b <= up
>>            0 = g(x)
>>            0 <= h(x)
>
> No constraints.

didn't you say that you operate only in some convex hull?

>
>> And if yes, in which space does x live?
>
> Either in R^n, in the set of integers (unidimensional), or in the set of
> positive integers.
According to http://openopt.org/Problems
this is a mixed integer nonlinear program: http://openopt.org/MINLP . I
don't have experience with the solver, but it may take a long time to
run it since it uses branch-and-bound.
In my field of work we typically relax the integers to real numbers,
perform the optimization and then round to the nearest integer.
This is often sufficiently close to a good solution.
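[The relax-then-round recipe Sebastian describes might look like the following on a toy one-dimensional problem. The quadratic stands in for a real objective; checking both neighbouring integers guards against rounding to the wrong side.]

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(n):
    # pretend this is the score at (possibly non-integer) n;
    # a smooth surrogate, so relaxing the integer constraint makes sense
    return (n - 7.3) ** 2

# 1) relax: treat n as a real number and optimize
res = minimize_scalar(f, bounds=(1, 50), method='bounded')
# 2) round, and compare the two nearest integers to be safe
candidates = [int(np.floor(res.x)), int(np.ceil(res.x))]
best = min(candidates, key=f)
print(best)  # 7
```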

>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
> >> min_x f(x)
> >> s.t.   lo <= Ax + b <= up
> >>            0 = g(x)
> >>            0 <= h(x)

> > No constraints.

> didn't you say that you operate only in some convex hull?

No. I have an initial guess that allows me to specify a convex hull in
which the minimum should probably lie, but it's not a constraint: nothing
bad happens if I leave that convex hull.

> > Either in R^n, in the set of integers (unidimensional), or in the set of
> > positive integers.
> According to  http://openopt.org/Problems
> this is a mixed integer nonlinear program http://openopt.org/MINLP . 

It is indeed the name I know for it; however, I have an additional hypothesis
(namely that f is roughly convex) which makes it much easier.

> I don't have experience with the solver, but it may take a long
> time to run it since it uses branch-and-bound.

Yes, this is too brutal: it is for non-convex optimization.
Dichotomy seems well-suited for finding an optimum on the set of
integers.

> In my field of work we typically relax the integers to real numbers,
> perform the optimization and then round to the nearest integer.
> This is often sufficiently close to a good solution.

This is pretty much what I am doing, but you have to be careful: if the
algorithm takes jumps smaller than 1, it sees a zero difference
between those jumps. If you are not careful, this can badly confuse the
algorithm and keep it from converging.

Thanks for your advice,

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux
 wrote:
> On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
>> >> min_x f(x)
>> >> s.t.   lo <= Ax + b <= up
>> >>            0 = g(x)
>> >>            0 <= h(x)
>
>> > No constraints.
>
>> didn't you say that you operate only in some convex hull?
>
> No. I have an initial guess that allows me to specify a convex hull in
> which the minimum should probably lie, but it's not a constraint: nothing
> bad happens if I leave that convex hull.
>
>> > Either in R^n, in the set of integers (unidimensional), or in the set of
>> > positive integers.
>> According to  http://openopt.org/Problems
>> this is a mixed integer nonlinear program http://openopt.org/MINLP .
>
> It is indeed the name I know for it; however, I have an additional hypothesis
> (namely that f is roughly convex) which makes it much easier.
>
>> I don't have experience with the solver, but it may take a long
>> time to run it since it uses branch-and-bound.
>
> Yes, this is too brutal: it is for non-convex optimization.
> Dichotomy seems well-suited for finding an optimum on the set of
> integers.
>
>> In my field of work we typically relax the integers to real numbers,
>> perform the optimization and then round to the nearest integer.
>> This is often sufficiently close to a good solution.
>
> This is pretty much what I am doing, but you have to be careful: if the
> algorithm takes jumps smaller than 1, it sees a zero difference
> between those jumps. If you are not careful, this can badly confuse the
> algorithm and keep it from converging.
>

ah, that clears things up a lot.

Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt:
Wouldn't it be better to change the model than to modify the
optimization algorithm?
It sounds as if the resulting objective function is piecewise
constant. AFAIK most optimization algorithms for continuous problems
require at least Lipschitz-continuous functions to work ''acceptably
well''. Not sure if this is also true for Nelder-Mead.
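[Sebastian's point is easy to demonstrate: if the objective is only defined on integers and the optimizer probes it at sub-integer spacing, every small step sees a perfectly flat function. A toy construction of my own:]

```python
def f(x):
    # an objective that only depends on the rounded (integer) argument:
    # the function the optimizer actually sees is piecewise constant
    n = int(round(x))
    return (n - 4) ** 2

# steps smaller than 0.5 cannot change the rounded argument,
# so a finite-difference slope at that scale is exactly zero
h = 1e-3
grad_estimate = (f(2.0 + h) - f(2.0)) / h
print(grad_estimate)  # 0.0 -- the optimizer sees a flat landscape
```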


> Thanks for your advice,
>
> Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
> Well, I don't know what the best method is to solve your problem, so
> take the following with a grain of salt:
> Wouldn't it be better to change the model than to modify the
> optimization algorithm?

In this case, that's not possible. You can think of this parameter as the
number of components in a PCA (it's actually a more complex dictionary
learning framework), so it's a parameter that is discrete, and I can't do
anything about it :).

> It sounds as if the resulting objective function is piecewise
> constant.

> AFAIK most optimization algorithms for continuous problems require at
> least Lipschitz-continuous functions to work ''acceptably well''. Not
> sure if this is also true for Nelder-Mead.

Yes correct. We do have a problem.

I have a Nelder-Mead that seems to be working quite well on a few toy
problems.

Gaël


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread josef . pktd
On Tue, Nov 23, 2010 at 8:50 AM, Gael Varoquaux
 wrote:
> On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
>> Well, I don't know what the best method is to solve your problem, so
>> take the following with a grain of salt:
>> Wouldn't it be better to change the model than to modify the
>> optimization algorithm?
>
> In this case, that's not possible. You can think of this parameter as the
> number of components in a PCA (it's actually a more complex dictionary
> learning framework), so it's a parameter that is discrete, and I can't do
> anything about it :).
>
>> It sounds as if the resulting objective function is piecewise
>> constant.
>
>> AFAIK most optimization algorithms for continuous problems require at
>> least Lipschitz-continuous functions to work ''acceptably well''. Not
>> sure if this is also true for Nelder-Mead.
>
> Yes correct. We do have a problem.
>
> I have a Nelder-Mead that seems to be working quite well on a few toy
> problems.

Assuming your function is well behaved, one possible idea is to
replace the integer objective function with a continuous
interpolation, or to fit a bell-shaped curve to a few grid points. That
might get you into the right neighborhood faster, where you can then
do an exact search.

(There are some similar methods that use a surrogate objective
function when the real objective is very expensive or impossible to
calculate, but I never looked closely at those cases.)
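[One way Josef's suggestion could look: fit a smooth curve through a coarse grid of scores, then search exactly near the fitted minimum. The quadratic `cv_score` is a made-up stand-in for a real cross-validation error.]

```python
import numpy as np

def cv_score(n):
    # hypothetical integer objective (stand-in for a CV error curve)
    return (n - 12) ** 2 / 50.0

grid = np.array([2, 6, 10, 14, 18, 22])
scores = np.array([cv_score(n) for n in grid])

# fit a parabola through the coarse grid and use its vertex
# as the center of a fine, exact search over nearby integers
a, b, c = np.polyfit(grid, scores, 2)
center = int(round(-b / (2 * a)))
fine = np.arange(max(center - 2, grid[0]), center + 3)
best = fine[np.argmin([cv_score(n) for n in fine])]
print(best)  # 12
```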

Josef

>
> Gaël


Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Stéfan van der Walt
On Tue, Nov 23, 2010 at 9:28 AM, Nils Wagner
 wrote:
> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/npyio.py",
> line 66, in seek_gzip_factory
>     g.name = f.name
> AttributeError: GzipFile instance has no attribute 'name'

This one is mine -- the change was made to avoid a DeprecationWarning.
Which version of Python are you using?

Regards
Stéfan


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 2:50 PM, Gael Varoquaux
 wrote:
> On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
>> Well, I don't know what the best method is to solve your problem, so
>> take the following with a grain of salt:
>> Wouldn't it be better to change the model than to modify the
>> optimization algorithm?
>
> In this case, that's not possible. You can think of this parameter as the
> number of components in a PCA (it's actually a more complex dictionary
> learning framework), so it's a parameter that is discrete, and I can't do
> anything about it :).

In optimum experimental design one encounters MINLPs where integers
define the number of rows of a matrix.
At first glance it looks as if a relaxation is simply not possible:
either there are additional rows or not.
But with some technical transformations it is possible to reformulate
the problem into a form that allows the relaxation of the integer
constraint in a natural way.

Maybe this is also possible in your case?
Otherwise, well, let me know if you find a working solution ;)


>
>> It sounds as if the resulting objective function is piecewise
>> constant.
>
>> AFAIK most optimization algorithms for continuous problems require at
>> least Lipschitz-continuous functions to work ''acceptably well''. Not
>> sure if this is also true for Nelder-Mead.
>
> Yes correct. We do have a problem.
>
> I have a Nelder-Mead that seems to be working quite well on a few toy
> problems.
>
> Gaël


Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Gerrit Holl
2010/11/23 Stéfan van der Walt :
> On Tue, Nov 23, 2010 at 9:28 AM, Nils Wagner
>  wrote:
>> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/npyio.py",
>> line 66, in seek_gzip_factory
>>     g.name = f.name
>> AttributeError: GzipFile instance has no attribute 'name'
>
> This one is mine -- the change was made to avoid a DeprecationWarning.
> Which version of Python are you using?

I hope 2.5, as his site-packages directory is in lib/python2.5 :)

Gerrit.


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 10:19:06AM -0500, josef.p...@gmail.com wrote:
> > I have a Nelder-Mead that seems to be working quite well on a few toy
> > problems.

> Assuming your function is well behaved, one possible idea is to try
> replacing the integer objective function with a continuous
> interpolation. Or maybe fit a bellshaped curve to a few gridpoints. It
> might get you faster into the right neighborhood to do an exact
> search.

I've actually been wondering whether Gaussian process regression (a.k.a.
kriging) would be useful here :). It's fairly good at fitting processes
for which the available information is very irregular.

Now I am perfectly aware that any move in this direction would require
significant work, so this is more daydreaming than project planning.
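[For the curious, the kriging idea can be sketched in a few lines of NumPy: a toy noise-free GP posterior mean over a handful of observed scores. The kernel, length scale, and the stand-in score curve are all invented for illustration.]

```python
import numpy as np

def rbf(a, b, length=3.0):
    # squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# scores observed at a few irregularly spaced settings
X = np.array([3.0, 6.0, 9.0, 12.0, 15.0])
y = (X - 9.0) ** 2 / 40.0          # stand-in for a CV error curve

# GP posterior mean on a dense grid (noise-free observations,
# zero prior mean, small jitter for numerical stability)
grid = np.linspace(3, 15, 121)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
mean = rbf(grid, X) @ np.linalg.solve(K, y)

# the minimizer of the posterior mean suggests the next point to evaluate
guess = grid[np.argmin(mean)]
print(round(guess))
```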

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
> At first glance it looks as if a relaxation is simply not possible:
> either there are additional rows or not.
> But with some technical transformations it is possible to reformulate
> the problem into a form that allows the relaxation of the integer
> constraint in a natural way.

> Maybe this is also possible in your case?

Well, given that it is a cross-validation score that I am optimizing,
there is no simple algorithm giving this score, so it's not obvious at
all that there is a possible relaxation. A road to follow would be to
find an oracle giving the empirical risk after estimation of the
penalized problem, and try to relax this oracle. That's two steps
further than I am. (I apologize if the above paragraph is
incomprehensible; I am getting too deep into the technicalities of my
problem.)

> Otherwise, well, let me know if you find a working solution ;)

Nelder-Mead seems to be working fine, so far. It will take a few weeks
(or more) to have a real insight on what works and what doesn't.

Thanks for your input,

Gael


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Zachary Pincus

On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:

> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>> At first glance it looks as if a relaxation is simply not possible:
>> either there are additional rows or not.
>> But with some technical transformations it is possible to reformulate
>> the problem into a form that allows the relaxation of the integer
>> constraint in a natural way.
>
>> Maybe this is also possible in your case?
>
> Well, given that it is a cross-validation score that I am optimizing,
> there is no simple algorithm giving this score, so it's not obvious at
> all that there is a possible relaxation. A road to follow would be to
> find an oracle giving the empirical risk after estimation of the
> penalized problem, and try to relax this oracle. That's two steps
> further than I am. (I apologize if the above paragraph is
> incomprehensible; I am getting too deep into the technicalities of my
> problem.)
>
>> Otherwise, well, let me know if you find a working solution ;)
>
> Nelder-Mead seems to be working fine, so far. It will take a few weeks
> (or more) to have a real insight on what works and what doesn't.

Jumping in a little late, but it seems that simulated annealing might
be a decent method here: take random steps (drawing from a
distribution of integer step sizes), reject steps that fall outside
the fitting range, and accept steps according to the standard
annealing formula.

Something with a global optimum but spikes along the way is pretty
well-suited to SA in general, and it's also an easy algorithm to make
work on a lattice. If you're in high dimensions, there are also
bolt-on methods for biasing the steps toward "good directions" as
opposed to just taking isotropic random steps. Again, it's pretty easy
to think of discrete implementations of this...
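[Zach's recipe, sketched on a one-dimensional integer lattice. The quadratic objective, the range, the step distribution, and the cooling schedule are all placeholder choices for illustration.]

```python
import math
import random

random.seed(2)

def f(n):
    # toy objective on the integers, minimum at n = 17
    return (n - 17) ** 2

lo, hi = 0, 50
x, fx = 25, f(25)
best, fbest = x, fx
T = 10.0
for step in range(2000):
    # integer-valued proposal; reject moves outside the fitting range
    cand = x + random.choice([-3, -2, -1, 1, 2, 3])
    if not lo <= cand <= hi:
        continue
    fc = f(cand)
    # standard Metropolis acceptance rule
    if fc < fx or random.random() < math.exp((fx - fc) / T):
        x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    T *= 0.995   # geometric cooling
print(best)
```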

Zach


Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Nils Wagner
On Tue, 23 Nov 2010 16:39:13 +0100
  Gerrit Holl  wrote:
> 2010/11/23 Stéfan van der Walt :
>> On Tue, Nov 23, 2010 at 9:28 AM, Nils Wagner
>>  wrote:
>>> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/npyio.py",
>>> line 66, in seek_gzip_factory
>>>     g.name = f.name
>>> AttributeError: GzipFile instance has no attribute 'name'
>>
>> This one is mine -- the change was made to avoid a DeprecationWarning.
>> Which version of Python are you using?
>
> I hope 2.5, as his site-packages directory is in lib/python2.5 :)
> 

Exactly.
 Nils


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/23 Zachary Pincus :
>
> On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
>
>> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>>> At first glance it looks as if a relaxation is simply not possible:
>>> either there are additional rows or not.
>>> But with some technical transformations it is possible to reformulate
>>> the problem into a form that allows the relaxation of the integer
>>> constraint in a natural way.
>>
>>> Maybe this is also possible in your case?
>>
>> Well, given that it is a cross-validation score that I am optimizing,
>> there is not simple algorithm giving this score, so it's not obvious
>> at
>> all that there is a possible relaxation. A road to follow would be to
>> find an oracle giving empirical risk after estimation of the penalized
>> problem, and try to relax this oracle. That's two steps further than
>> I am
>> (I apologize if the above paragraph is incomprehensible, I am
>> getting too
>> much in the technivalities of my problem.
>>
>>> Otherwise, well, let me know if you find a working solution ;)
>>
>> Nelder-Mead seems to be working fine, so far. It will take a few weeks
>> (or more) to have a real insight on what works and what doesn't.
>
> Jumping in a little late, but it seems that simulated annealing might
> be a decent method here: take random steps (drawing from a
> distribution of integer step sizes), reject steps that fall outside
> the fitting range, and accept steps according to the standard
> annealing formula.

There is also a simulated-annealing modification of Nelder-Mead that
can be of use.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


[Numpy-discussion] Theano 0.3 Released

2010-11-23 Thread Frédéric Bastien
==
 Announcing Theano 0.3
==

This is an important release. The upgrade is recommended for everybody
using Theano 0.1. For those using the bleeding edge version in the
mercurial repository, we encourage you to update to the `0.3` tag.

This is the first major release of Theano since 0.1. Version 0.2
development started internally but it was never advertised as a
release.

What's New
--

There have been too many changes since 0.1 to keep track of them all.
Below is a *partial* list of changes since 0.1.

 * GPU code using NVIDIA's CUDA framework is now generated for many Ops.
 * Some interface changes since 0.1:
    * A new "shared variable" system which allows for reusing memory
      space between Theano functions.
        * A new memory contract has been formally written for Theano,
          for people who want to minimize memory copies.
    * The old module system has been deprecated.
    * By default, inputs to a Theano function will not be silently
      downcasted (e.g. from float64 to float32).
    * An error is now raised when using the result of a logical operation of
      a Theano variable in an 'if' (i.e. an implicit call to __nonzeros__).
    * An error is now raised when we receive a non-aligned ndarray as
      input to a function (this is not supported).
    * An error is raised when the list of dimensions passed to
      dimshuffle() contains duplicates or is otherwise not sensible.
    * Call NumPy BLAS bindings for gemv operations in addition to the
      already supported gemm.
    * If gcc is unavailable at import time, Theano now falls back to a
      Python-based emulation mode after raising a warning.
    * An error is now raised when tensor.grad is called on a non-scalar
      Theano variable (in the past we would implicitly do a sum on the
      tensor to make it a scalar).
    * Added support for "erf" and "erfc" functions.
 * The current default value of the parameter axis of theano.{max,min,
  argmax,argmin,max_and_argmax} is deprecated. We now use the default NumPy
  behavior of operating on the entire tensor.
 * Theano is now available from PyPI and installable through "easy_install" or
  "pip".

You can download Theano from http://pypi.python.org/pypi/Theano.


Description
---

Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving
multi-dimensional arrays. It is built on top of NumPy. Theano
features:

 * tight integration with NumPy: a similar interface to NumPy's.
  numpy.ndarrays are also used internally in Theano-compiled functions.
 * transparent use of a GPU: perform data-intensive computations up to
  140x faster than on a CPU (support for float32 only).
 * efficient symbolic differentiation: Theano can compute derivatives
  for functions of one or many inputs.
 * speed and stability optimizations: avoid nasty bugs when computing
  expressions such as log(1 + exp(x)) for large values of x.
 * dynamic C code generation: evaluate expressions faster.
 * extensive unit-testing and self-verification: includes tools for
  detecting and diagnosing bugs and/or potential problems.

Theano has been powering large-scale computationally intensive
scientific research since 2007, but it is also approachable
enough to be used in the classroom (IFT6266 at the University of Montreal).

Resources
-

About Theano:

http://deeplearning.net/software/theano/

About NumPy:

http://numpy.scipy.org/

About Scipy:

http://www.scipy.org/

Acknowledgments
---

I would like to thank all contributors of Theano. For this particular
release, the people who have helped resolve many outstanding issues:
(in alphabetical order) Frederic Bastien, James Bergstra,
Guillaume Desjardins, David-Warde Farley, Ian Goodfellow, Pascal Lamblin,
Razvan Pascanu and Josh Bleecher Snyder.

Also, thank you to all NumPy and Scipy developers, as Theano builds on
their strength.

All questions/comments are always welcome on the Theano
mailing-lists ( http://deeplearning.net/software/theano/ )


Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Stéfan van der Walt
2010/11/23 Stéfan van der Walt :
> On Tue, Nov 23, 2010 at 9:28 AM, Nils Wagner
>  wrote:
>> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/npyio.py",
>> line 66, in seek_gzip_factory
>>     g.name = f.name
>> AttributeError: GzipFile instance has no attribute 'name'
>
> This one is mine -- the change was made to avoid a DeprecationWarning.
> Which version of Python are you using?

OK, should be fixed.  Let me know if it is working.

Cheers
Stéfan


Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Pauli Virtanen
On Tue, 23 Nov 2010 23:24:25 +0200, Stéfan van der Walt wrote:
> 2010/11/23 Stéfan van der Walt :
>> On Tue, Nov 23, 2010 at 9:28 AM, Nils Wagner
>>  wrote:
>>> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/npyio.py",
>>> line 66, in seek_gzip_factory
>>>     g.name = f.name
>>> AttributeError: GzipFile instance has no attribute 'name'
>>
>> This one is mine -- the change was made to avoid a DeprecationWarning.
>> Which version of Python are you using?
> 
> OK, should be fixed.  Let me know if it is working.

There's this on Python 3.2:

==
ERROR: test_io.test_gzip_load
--
Traceback (most recent call last):
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/case.py", line 
177, in runTest
self.test(*self.arg)
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/tests/test_io.py",
 line 1255, in test_gzip_load
assert_array_equal(np.load(f), a)
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/npyio.py",
 line 332, in load
fid = seek_gzip_factory(file)
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/npyio.py",
 line 73, in seek_gzip_factory
f = GzipFile(fileobj=f.fileobj, filename=name)
  File "/usr/local/stow/python-3.2a4/lib/python3.2/gzip.py", line 162, in 
__init__
if hasattr(fileobj, 'mode'): mode = fileobj.mode
  File "/usr/local/stow/python-3.2a4/lib/python3.2/gzip.py", line 101, in 
__getattr__
return getattr(name, self.file)
TypeError: getattr(): attribute name must be string


and also


Python 3.2a4 (r32a4:86446, Nov 20 2010, 17:59:19) 
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.core.multiarray.__file__
'.../numpy/core/multiarray.cpython-32m.so'

which leads to

==
ERROR: Failure: OSError 
(/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/core/multiarray.pyd:
 cannot open shared object file: No such file or directory)
--
Traceback (most recent call last):
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/failure.py", line 
27, in runTest
reraise(self.exc_class, self.exc_val, self.tb)
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/_3.py", line 7, in 
reraise
raise exc_class(exc_val).with_traceback(tb)
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/loader.py", line 
372, in loadTestsFromName
addr.filename, addr.module)
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/importer.py", line 
39, in importFromPath
return self.importFromDir(dir_path, fqname)
  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/importer.py", line 
84, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/tests/test_ctypeslib.py",
 line 8, in <module>
cdll = load_library('multiarray', np.core.multiarray.__file__)
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/ctypeslib.py",
 line 122, in load_library
raise exc
  File 
"/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/ctypeslib.py",
 line 119, in load_library
return ctypes.cdll[libpath]
  File "/usr/local/stow/python-3.2a4/lib/python3.2/ctypes/__init__.py", line 
415, in __getitem__
return getattr(self, name)
  File "/usr/local/stow/python-3.2a4/lib/python3.2/ctypes/__init__.py", line 
410, in __getattr__
dll = self._dlltype(name)
  File "/usr/local/stow/python-3.2a4/lib/python3.2/ctypes/__init__.py", line 
340, in __init__
self._handle = _dlopen(self._name, mode)
OSError: 
/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/core/multiarray.pyd:
 cannot open shared object file: No such file or directory



Re: [Numpy-discussion] numpy.test() errors

2010-11-23 Thread Neil Muller
On 23 November 2010 23:44, Pauli Virtanen  wrote:
> On Tue, 23 Nov 2010 23:24:25 +0200, Stéfan van der Walt wrote:
> There's this on Python 3.2:
>
> ==
> ERROR: test_io.test_gzip_load
> --
> Traceback (most recent call last):
>  File "/var/lib/buildslave/numpy-real/b15/../../local3/nose/case.py", line 
> 177, in runTest
>    self.test(*self.arg)
>  File 
> "/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/tests/test_io.py",
>  line 1255, in test_gzip_load
>    assert_array_equal(np.load(f), a)
>  File 
> "/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/npyio.py",
>  line 332, in load
>    fid = seek_gzip_factory(file)
>  File 
> "/var/lib/buildslave/numpy-real/b15/numpy-install-3.2/lib/python3.2/site-packages/numpy/lib/npyio.py",
>  line 73, in seek_gzip_factory
>    f = GzipFile(fileobj=f.fileobj, filename=name)
>  File "/usr/local/stow/python-3.2a4/lib/python3.2/gzip.py", line 162, in 
> __init__
>    if hasattr(fileobj, 'mode'): mode = fileobj.mode
>  File "/usr/local/stow/python-3.2a4/lib/python3.2/gzip.py", line 101, in 
> __getattr__
>    return getattr(name, self.file)
> TypeError: getattr(): attribute name must be string

This was filed and fixed during the python bug weekend
(http://bugs.python.org/issue10465), so it shouldn't be a problem with
a current 3.2 checkout.
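
The attribute at fault in both reports is GzipFile's ``name`` (absent on Python 2.5, mis-proxied by the buggy 3.2a4 ``__getattr__``). A defensive lookup along these lines (my own illustration, not the actual numpy fix) treats the name as optional:

```python
import gzip
import io

def safe_gzip_name(f):
    # Return f.name only if it is a real, non-empty string; GzipFile
    # objects wrapping anonymous buffers expose no usable name.
    name = getattr(f, "name", None)
    if isinstance(name, str) and name:
        return name
    return None

# A GzipFile wrapping an in-memory buffer has no meaningful name:
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as g:
    print(safe_gzip_name(g))  # None
```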

-- 
Neil Muller
drnlmul...@gmail.com

I've got a gmail account. Why haven't I become cool?


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Gael Varoquaux
On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
> > Jumping in a little late, but it seems that simulated annealing might
> > be a decent method here: take random steps (drawing from a
> > distribution of integer step sizes), reject steps that fall outside
> > the fitting range, and accept steps according to the standard
> > annealing formula.

> There is also a simulated-annealing modification of Nelder Mead that
> can be of use.

Sounds interesting. Any reference?

G


[Numpy-discussion] question for numpy import error

2010-11-23 Thread 명광민
I receive the following error when I try to import numpy


===
[node0]:/home/koojy/KMM> python2.6
Python 2.6.4 (r264:75706, Jul 26 2010, 16:55:18) 
[GCC 3.4.6 20060404 (Red Hat 3.4.6-10)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/site-packages/numpy/__init__.py", line 132, in <module>
    import add_newdocs
  File "/usr/local/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "/usr/local/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/usr/local/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 17, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/usr/local/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 47, in <module>
    from linalg import *
  File "/usr/local/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 22, in <module>
    from numpy.linalg import lapack_lite
ImportError: libgfortran.so.1: cannot open shared object file: No such file or directory
>>> 



OS info : Linux node0 2.6.9-78.ELsmp #1 SMP Thu Jul 24 23:54:48 EDT 2008 x86_64 
x86_64 x86_64 GNU/Linux

numpy install : following the "Building by hand" section of the site below.
  
http://www.scipy.org/Installing_SciPy/Linux#head-40c26a5b93b9afc7e3241e1d7fd84fe9326402e7

and, to build with gfortran:   python setup.py build --fcompiler=gnu95

Any help on this error would be appreciated.
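
A first diagnostic for this class of error is to check whether the dynamic loader can find libgfortran at all, and to extend its search path if the library lives somewhere non-standard (a sketch; /usr/lib64 is only an example location):

```shell
# Look for the Fortran runtime in the usual library directories
# (locations are examples; adjust for your system).
find /usr/lib /usr/lib64 /usr/local/lib -name 'libgfortran.so*' 2>/dev/null || true

# If it is installed somewhere the loader does not search, export
# that directory before starting Python:
export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
# python2.6 -c "import numpy"   # then retry the import
```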


K.M. Myung

==
Korea Environmental Science & Technology Institute, inc.
phone : 82-2-2113-0705  
direct : 82-70-7098-2644
fax : 82-2-2113-0706
e-mail : kmmy...@kesti.co.kr
==


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/24 Gael Varoquaux :
> On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
>> > Jumping in a little late, but it seems that simulated annealing might
>> > be a decent method here: take random steps (drawing from a
>> > distribution of integer step sizes), reject steps that fall outside
>> > the fitting range, and accept steps according to the standard
>> > annealing formula.
>
>> There is also a simulated-annealing modification of Nelder Mead that
>> can be of use.
>
> Sounds interesting. Any reference?

Not right away, I have to check. The main difference is the possible
acceptance of a contraction that doesn't lower the cost, and this is
done with a temperature like simulated annealing.
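
The acceptance rule described above can be sketched with a generic Metropolis-style criterion (an illustration of the idea, not Matthieu's exact algorithm):

```python
import math
import random

def accept_move(delta, temperature, rng=random):
    # Always accept an improvement; accept a worsening step (e.g. a
    # contraction that does not lower the cost) with probability
    # exp(-delta / T). High temperatures tolerate bad moves; as T -> 0
    # this reduces to plain Nelder-Mead acceptance.
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

# At a very low temperature a costly move is essentially never taken:
print(accept_move(5.0, 1e-9))   # False
print(accept_move(-1.0, 1e-9))  # True
```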

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher