[Numpy-discussion] Announcing Theano 0.7

2015-03-27 Thread Pascal Lamblin

===========================
 Announcing Theano 0.7
===========================

This is a major release, with lots of new features, bug fixes, and some
interface changes (deprecated or potentially misleading features were
removed).

Upgrading to Theano 0.7 is recommended for everyone, but you should
first make sure that your code does not raise deprecation warnings with
the version you are currently using.

For those using the bleeding edge version in the git repository, we
encourage you to update to the `rel-0.7` tag.

What's New
----------

Highlights:
 * Integration of CuDNN for 2D convolutions and pooling on supported GPUs
 * Too many optimizations and new features to count
 * Various fixes and improvements to scan
 * Better support for GPU on Windows
 * On Mac OS X, clang is used by default
 * Many crash fixes
 * Some bug fixes as well

Description
-----------
Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving
multi-dimensional arrays. It is built on top of NumPy. Theano
features:

 * tight integration with NumPy: a similar interface to NumPy's.
   numpy.ndarrays are also used internally in Theano-compiled functions.
 * transparent use of a GPU: perform data-intensive computations up to
   140x faster than on a CPU (support for float32 only).
 * efficient symbolic differentiation: Theano can compute derivatives
   for functions of one or many inputs.
 * speed and stability optimizations: avoid nasty bugs when computing
   expressions such as log(1 + exp(x)) for large values of x (a short
   sketch follows this list).
 * dynamic C code generation: evaluate expressions faster.
 * extensive unit-testing and self-verification: includes tools for
   detecting and diagnosing bugs and/or potential problems.
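As a taste of what this looks like in practice, here is a minimal sketch (my
own illustration, not part of the announcement) of the workflow: define a
symbolic expression, let Theano optimize and compile it, then evaluate it on
NumPy data. It assumes the log(1 + exp(x)) stabilization mentioned above is
applied by the optimizer.

import numpy as np
import theano
import theano.tensor as T

x = T.dvector('x')                      # symbolic float64 vector
y = T.log(1 + T.exp(x))                 # rewritten to a numerically stable softplus
f = theano.function([x], y)             # compiled to C (and to GPU code, if configured)

print(f(np.array([0.0, 1.0, 1000.0])))  # finite results even where exp(x) overflows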

Theano has been powering large-scale computationally intensive
scientific research since 2007, but it is also approachable
enough to be used in the classroom (IFT6266 at the University of Montreal).

Resources
---------

About Theano:
http://deeplearning.net/software/theano/

Related projects:
http://github.com/Theano/Theano/wiki/Related-projects

About NumPy:
http://numpy.scipy.org/

About SciPy:
http://www.scipy.org/

Machine Learning Tutorial with Theano on Deep Architectures:
http://deeplearning.net/tutorial/

Acknowledgments
---------------

I would like to thank all contributors of Theano. For this particular
release, many people have helped, and to list them all would be
impractical.

I would also like to thank users who submitted bug reports.

Also, thank you to all NumPy and SciPy developers, as Theano builds on
their strengths.

All questions/comments are always welcome on the Theano
mailing-lists ( http://deeplearning.net/software/theano/#community )

-- 
Pascal


[Numpy-discussion] Announcing Theano 0.5

2012-02-23 Thread Pascal Lamblin
===========================
 Announcing Theano 0.5
===========================

This is a major version, with lots of new features, bug fixes, and some
interface changes (deprecated or potentially misleading features were
removed).

Upgrading to Theano 0.5 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
Otherwise, in one case the results can change. In other cases, the warnings are
turned into errors (see below for details).

For those using the bleeding edge version in the
git repository, we encourage you to update to the `0.5` tag.

If you have updated to 0.5rc1 or 0.5rc2, you are highly encouraged to
update to 0.5, as some bugs introduced in those versions have now been
fixed; see the items marked with '#' in the lists below.


What's New
----------

Highlights:
 * Moved to GitHub: http://github.com/Theano/Theano/
 * Old Trac tickets moved to Assembla tickets:
   http://www.assembla.com/spaces/theano/tickets
 * Theano vision:
   http://deeplearning.net/software/theano/introduction.html#theano-vision
   (many people)
 * Theano with GPU now works in some cases on Windows. Still experimental.
   (Sebastian Urban)
 * Faster dot() call: new/better direct calls to CPU and GPU ger, gemv, gemm
   and dot(vector, vector). (James, Frédéric, Pascal)
 * C implementation of Alloc. (James, Pascal)
 * theano.grad() now also works with sparse variables. (Arnaud)
 * Macro to implement the Jacobian/Hessian with
   theano.tensor.{jacobian,hessian} (see the sketch after this list). (Razvan)
 * See the Interface changes.
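A small usage sketch of the new jacobian helper mentioned above (my own
illustration with made-up inputs, not part of the release notes):

import theano
import theano.tensor as T

x = T.dvector('x')
y = x ** 2                            # elementwise square
J = theano.tensor.jacobian(y, x)      # symbolic matrix of d y_i / d x_j
f = theano.function([x], J)

print(f([1.0, 2.0, 3.0]))             # diagonal matrix with 2, 4, 6 on the diagonal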


Interface Behavior Changes:
 * The default value of the axis parameter of
   theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
   numpy's: None, i.e., operate on all dimensions of the tensor.
   (Frédéric Bastien, Olivier Delalleau) (This was deprecated and generated
   a warning since Theano 0.3, released Nov. 23rd, 2010.)
 * The output dtype of sum with an input dtype of [u]int* is now always
   [u]int64. You can specify the output dtype with a new dtype parameter
   to sum; the output dtype is the one used for the summation. Previous
   Theano versions gave no warning about this. The consequence is that the
   sum is done in a dtype with more precision than before, so it can be
   slower but is more resistant to overflow. This new behavior is the same
   as numpy's (a short sketch follows this list). (Olivier, Pascal)
 # When using a GPU, detect faulty nvidia drivers. This was detected when
   running the Theano tests, and is now always tested. Faulty drivers
   result in wrong results for reduce operations. (Frederic B.)
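A minimal sketch (my own illustration, assuming the new dtype parameter
described above) of controlling the accumulator dtype of a sum on integer
inputs:

import numpy as np
import theano
import theano.tensor as T

x = T.ivector('x')                  # int32 input
s_default = T.sum(x)                # now accumulates and returns int64, as numpy does
s_narrow = T.sum(x, dtype='int32')  # explicitly request the old, narrower accumulator

f = theano.function([x], [s_default, s_narrow])
print(f(np.array([1, 2, 3], dtype='int32')))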


Interface Features Removed (most were deprecated):
 * The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted. They
   were accepted only by theano.function().
   Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
 * tensor.grad(cost, wrt) now always returns an object of the "same type" as wrt
   (list/tuple/TensorVariable). (Ian Goodfellow, Olivier)
 * The few remaining uses of tag.shape and Join.vec_length have been removed.
   (Frederic)
 * The .value attribute of shared variables is removed, use shared.set_value()
   or shared.get_value() instead. (Frederic)
 * The Theano config option "home" is not used anymore, as it was redundant
   with "base_compiledir". If you use it, Theano will now raise an error.
   (Olivier D.)
 * scan interface changes: (Razvan Pascanu)
   * The use of `return_steps` for specifying how many entries of the output
     to return has been removed. Instead, apply a subtensor to the output
     returned by scan to select a certain slice.
   * The inner function (that scan receives) should return its outputs and
     updates following this order: [outputs], [updates], [condition].
     One can skip any of the three if not used, but the order has to stay
     unchanged.

Interface bug fix:
 * Rop should in some cases have returned a list of one Theano variable,
   but returned the variable itself. (Razvan)

New deprecations (these will be removed in Theano 0.6; a warning is generated
if you use them):
 * tensor.shared() renamed to tensor._shared(). You probably want to
   call theano.shared() instead! (Olivier D.)


Bug fixes (incorrect results):
 * On CPU, if the convolution had received explicit shape information, it
   was not checked at runtime. This caused wrong results if the input shape
   was not the one expected. (Frederic, reported by Sander Dieleman)
 * Theoretical bug: in some cases GPUSum could return a bad value. We were
   not able to reproduce this problem. Patterns affected ({0,1}*nb dim,
   0: no reduction on this dim, 1: reduction on this dim):
   01, 011, 0111, 010, 10, 001, 0011, 0101 (Frederic)
 * Division by zero in verify_grad. This hid a bug in the grad of
   Images2Neibs. (James)
 * theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value.
   The grad is now disabled and returns an error. (Frederic)
 * An expression o

Re: [Numpy-discussion] Upgrade to 1.6.x: frompyfunc() ufunc casting issue

2012-01-20 Thread Pascal Lamblin
Hi everyone,

A long time ago, Aditya Sethi wrote:
> I am facing an issue upgrading numpy from 1.5.1 to 1.6.1.
> In numPy 1.6, the casting behaviour for ufunc has changed and has become
> stricter.
> 
> Can someone advise how to implement the below simple example which worked in
> 1.5.1 but fails in 1.6.1?
> 
> >>> import numpy as np
> >>> def add(a,b):
> ...    return (a+b)
> >>> uadd = np.frompyfunc(add,2,1)
> >>> uadd
> <ufunc 'add (vectorized)'>
> >>> uadd.accumulate([1,2,3])
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: could not find a matching type for add (vectorized).accumulate,
> requested type has type code 'l'

Here's the workaround I found to that problem:

>>> uadd.accumulate([1,2,3], dtype='object')
array([1, 3, 6], dtype=object)

It seems like "accumulate" infers that 'l' is the required output dtype,
but does not have the appropriate implementation:
>>> uadd.types
['OO->O']

Forcing the output dtype to be 'object' (the only supported dtype) seems
to do the trick.
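For completeness, here is a small self-contained recap of that workaround (my
own sketch; the cast back to an integer dtype at the end is an optional extra,
not part of the original answer):

import numpy as np

def add(a, b):
    return a + b

uadd = np.frompyfunc(add, 2, 1)

# The only loop registered for a frompyfunc ufunc is 'OO->O', so accumulate
# has to be told explicitly to work with the object dtype.
acc = uadd.accumulate(np.array([1, 2, 3], dtype=object), dtype=object)
print(acc)                      # array([1, 3, 6], dtype=object)

# Cast back to a numeric dtype if needed.
print(acc.astype(np.int64))     # array([1, 3, 6])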

Hope this helps,
-- 
Pascal


Re: [Numpy-discussion] Any idea to run the dot-product on many arrays

2011-01-13 Thread Pascal
On 01/13/2011 05:04 PM, EMMEL Thomas wrote:
> Hi,
>
> I need to rotate many vectors (x,y,z) with a given rotation matrix (3x3).
> I can always do
>
> for v in vectors:
>     tv += np.dot(mat, v)
>
> where mat is my fixed matrix (or array of arrays) and v is a single array.
> Is there any efficient way to use an array of vectors to do the
> transformation for all of these vectors at once?

numpy.dot(rotationmatrix, coordinates.T).T

where coordinates is an n*3 array of n vectors stacked in rows. It also works
with vectors stacked in columns, without the two transposes.

It's even possible to apply a symmetry operation to a bunch of second 
rank tensors in one go.
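A short self-contained sketch of both suggestions (my own illustration; the
array names and the example rotation are made up):

import numpy as np

rotationmatrix = np.array([[0.0, -1.0, 0.0],
                           [1.0,  0.0, 0.0],
                           [0.0,  0.0, 1.0]])      # 90-degree rotation about z
coordinates = np.random.rand(1000, 3)              # n stacked (x, y, z) row vectors

rotated = np.dot(rotationmatrix, coordinates.T).T  # rotate all n vectors at once
# Equivalent form without the transposes:
rotated_alt = np.dot(coordinates, rotationmatrix.T)

# A batch of second-rank tensors T_i (shape (n, 3, 3)) can be transformed as
# R T_i R^T in one call with einsum:
tensors = np.random.rand(1000, 3, 3)
transformed = np.einsum('ij,njk,lk->nil', rotationmatrix, tensors, rotationmatrix)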

Pascal



Re: [Numpy-discussion] speed of numpy.ndarray compared to Numeric.array

2011-01-10 Thread Pascal
Hi,

On 01/10/2011 09:09 AM, EMMEL Thomas wrote:
>
> No I didn't, due to the fact that these values are coordinates in 3D (x,y,z).
> In fact I work with a list/array/tuple of arrays with 10 to 1M of 
> elements or more.
> What I need to do is to calculate the distance of each of these elements 
> (coordinates)
> to a given coordinate and filter for the nearest.
> The brute force method would look like this:
>
>
> #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> def bruteForceSearch(points, point):
>
>     minpt = min([(vec2Norm(pt, point), pt, i)
>                  for i, pt in enumerate(points)], key=itemgetter(0))
>     return sqrt(minpt[0]), minpt[1], minpt[2]
>
> #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> def vec2Norm(pt1, pt2):
>     xDis = pt1[0]-pt2[0]
>     yDis = pt1[1]-pt2[1]
>     zDis = pt1[2]-pt2[2]
>     return xDis*xDis+yDis*yDis+zDis*zDis
>

I am not sure I understood the problem properly, but here is what I would
use to calculate the distances from horizontally stacked vectors (big):

import numpy

ref = numpy.array([0.1, 0.2, 0.3])
big = numpy.random.randn(100, 3)          # stacked (x, y, z) rows

big = numpy.add(big, -ref)                # subtract the reference point (broadcasting)
distsquared = numpy.sum(big**2, axis=1)   # squared distance of each row to ref
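To actually filter for the nearest point, as asked above, one could finish
with an argmin (my own addition, not part of the original reply):

import numpy

ref = numpy.array([0.1, 0.2, 0.3])
points = numpy.random.randn(100, 3)

distsquared = numpy.sum((points - ref)**2, axis=1)
i = numpy.argmin(distsquared)             # index of the nearest point
nearest = points[i]
dist = numpy.sqrt(distsquared[i])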

Pascal


Re: [Numpy-discussion] Fourier transform

2010-03-30 Thread Pascal
On Mon, 29 Mar 2010 16:12:56 -0600,
Charles R Harris wrote:

> On Mon, Mar 29, 2010 at 3:00 PM, Pascal wrote:
> 
> > Hi,
> >
> > Does anyone have an idea how fft functions are implemented? Is it
> > pure python? based on BLAS/LAPACK? or is it using fftw?
> >
> > I successfully used numpy.fft in 3D. I would like to know if I can
> > calculate a specific plane using numpy.fft.
> >
> > I have in 3D:
> > r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
> >              \exp(-2\pi i (hx/N + ky/M + lz/O))
> >
> > So for the plane, z is no longer independent.
> > I need to solve the system:
> > ax+by+cz+d=0
> > r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
> >              \exp(-2\pi i (hx/N + ky/M + lz/O))
> >
> > Do you think it's possible to use numpy.fft for this?
> >
> >
> I'm not clear on what you want to do here, but note that the term in
> the exponent is of the form <k, x>, i.e., the inner product of the
> vectors k and x. So if you rotate x by O so that the plane is defined
> by z = 0, then <k, Ox> = <O^T k, x>. That is, you can apply the
> transpose of the rotation to the result of the fft.

In other words, z is no longer independent but depends on x and y.

Apparently, nobody is calculating the exact plane but they are making a
slice in the 3D grid and doing some interpolation.
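A quick numerical check of that rotation argument (my own sketch, restricted
to rotations that map the sampling grid onto itself, such as axis
permutations, so no interpolation is needed):

import numpy as np

a = np.random.rand(4, 6, 8)                   # small random 3D volume

# "Rotate" the data by permuting its axes, then take the FFT ...
perm = (1, 2, 0)
lhs = np.fft.fftn(np.transpose(a, perm))

# ... which matches taking the FFT first and applying the same permutation
# to the frequency axes (the transposed rotation acting in Fourier space).
rhs = np.transpose(np.fft.fftn(a), perm)

print(np.allclose(lhs, rhs))                  # True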

However, your answer really helped me with something completely different :)

Thanks,
Pascal


[Numpy-discussion] Fourier transform

2010-03-29 Thread Pascal
Hi,

Does anyone have an idea how fft functions are implemented? Is it pure
python? based on BLAS/LAPACK? or is it using fftw?

I successfully used numpy.fft in 3D. I would like to know if I can
calculate a specific plane using numpy.fft.

I have in 3D:
r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
             \exp(-2\pi i (hx/N + ky/M + lz/O))

So for the plane, z is no longer independent.
I need to solve the system:
ax+by+cz+d=0
r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
             \exp(-2\pi i (hx/N + ky/M + lz/O))

Do you think it's possible to use numpy.fft for this?

Regards,
Pascal