Re: [Numpy-discussion] iteration slowing, no increase in memory

2009-09-10 Thread Chad Netzer
On Thu, Sep 10, 2009 at 10:03 AM, John [H2O] wrote: > It runs very well for the first few iterations, but then slows tremendously > - there is nothing significantly different about the files or directory in > which it slows. I've monitored the memory use, and it is not increasing. The memory use

Re: [Numpy-discussion] iteration slowing, no increase in memory

2009-09-10 Thread Robert Kern
On Thu, Sep 10, 2009 at 12:03, John [H2O] wrote: > > Hello, > > I have a routine that is iterating through a series of directories, loading > files, plotting, then moving on... > > It runs very well for the first few iterations, but then slows tremendously > - there is nothing significantly differe

[Numpy-discussion] iteration slowing, no increase in memory

2009-09-10 Thread John [H2O]
Hello, I have a routine that is iterating through a series of directories, loading files, plotting, then moving on... It runs very well for the first few iterations, but then slows tremendously - there is nothing significantly different about the files or directories in which it slows. I've monito
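
One cause that often fits these symptoms (an assumption here, since the thread is truncated) is matplotlib figures accumulating across iterations: every figure stays registered with pyplot until it is explicitly closed, so each pass through the loop gets slower. A minimal sketch of such a loop with that fixed; the directory layout and file name are made up:

    import os
    import numpy as np
    import matplotlib.pyplot as plt

    for dirname in sorted(d for d in os.listdir('.') if os.path.isdir(d)):
        data = np.loadtxt(os.path.join(dirname, 'data.txt'))  # hypothetical file
        fig = plt.figure()
        fig.add_subplot(111).plot(data)
        fig.savefig(dirname + '.png')
        plt.close(fig)  # releases the figure; omitting this slows later iterations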

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> Yes. However, it is worth making the distinction between > embarrassingly parallel problems and SIMD problems. Not all > embarrassingly parallel problems are SIMD-capable. GPUs do SIMD, not > general embarrassingly parallel problems. GPUs exploit both dimensions of parallelism, both SIMD (aka vectorizati

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Robert Kern
On Thu, Sep 10, 2009 at 07:28, Francesc Alted wrote: > On Thursday 10 September 2009 11:37:24 Gael Varoquaux wrote: > >> On Thu, Sep 10, 2009 at 11:29:49AM +0200, Francesc Alted wrote: > >> > The point is: are GPUs prepared to compete with general-purpose CPUs > >> > in all-round operations, lik

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> I think whatever is supported by the underlying CPU, whether it is extended > double precision (12 bytes) or quad precision (16 bytes). Classic 64-bit CPUs support neither.

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 15:51:15 Rohit Garg wrote: > Apart from float and double, which floating point formats are > supported by numpy? I think whatever is supported by the underlying CPU, whether it is extended double precision (12 bytes) or quad precision (16 bytes). -- Francesc Alted
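
For reference, a quick way to check which float widths a particular NumPy build actually exposes (a sketch; np.longdouble maps to whatever extended type the platform provides, so the reported size varies between the 12- and 16-byte layouts mentioned above):

    import numpy as np

    for t in (np.float32, np.float64, np.longdouble):
        info = np.finfo(t)
        print('%s: %d bits, eps = %s' % (t.__name__, info.bits, info.eps))
    # On x86 the longdouble is typically the 80-bit extended format padded to
    # 12 or 16 bytes; true IEEE quad precision is not provided by the CPU.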

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
Apart from float and double, which floating point formats are supported by numpy? On Thu, Sep 10, 2009 at 7:09 PM, Bruce Southey wrote: > On 09/10/2009 07:40 AM, Francesc Alted wrote: > > On Thursday 10 September 2009 14:36:16 Rohit Garg wrote: > >> > That's nice to see. I think I'll change my

Re: [Numpy-discussion] Behavior from a change in dtype?

2009-09-10 Thread Skipper Seabold
On Tue, Sep 8, 2009 at 12:53 PM, Christopher Barker wrote: > Skipper Seabold wrote: >> Hmm, okay, well I came across this in trying to create a recarray like >> data2 below, so I guess I should just combine the two questions. > > key to understanding this is to understand what is going on under th
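
The under-the-hood point being made (paraphrased, since the message is cut off): reassigning a dtype, or calling .view(), never converts any data, it only reinterprets the same memory buffer, whereas .astype() converts the values. A minimal sketch:

    import numpy as np

    a = np.array([1, 2], dtype=np.int32)
    print(a.view(np.int16))    # same 8 bytes reread as four int16 values
    print(a.astype(np.int16))  # converts the values: [1 2]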

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Ruben Salvador
Well...you are right, sorry, I just thought the 'np.shape(offspr)' result would be enough. Obviously not! offspr wasn't actually a numpy array, but a Python list. I'm sorry for the inconvenience, but I didn't realize. I'm just changing my code so that I use numpy arrays only, and forgot to change of

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Bruce Southey
On 09/10/2009 07:40 AM, Francesc Alted wrote: On Thursday 10 September 2009 14:36:16 Rohit Garg wrote: > > That's nice to see. I think I'll change my mind if someone could perform > > a vector-vector multiplication (an operation that is typically > > memory-bound) > > You mean a dot pr

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 14:36:16 Rohit Garg wrote: > > That's nice to see. I think I'll change my mind if someone could perform > > a vector-vector multiplication (an operation that is typically > > memory-bound) > > You mean a dot product? Whatever, dot product or element-wise product.
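
A sketch of the kind of measurement being requested, timing the two memory-bound candidates (element-wise product and vector-vector dot) on the CPU as a baseline; the sizes and repeat counts are arbitrary:

    import timeit
    import numpy as np

    a = np.random.rand(10**7)
    b = np.random.rand(10**7)

    t_mul = min(timeit.repeat(lambda: a * b, number=10, repeat=3))
    t_dot = min(timeit.repeat(lambda: np.dot(a, b), number=10, repeat=3))
    print('elementwise: %.4fs  dot: %.4fs' % (t_mul / 10, t_dot / 10))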

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> That's nice to see. I think I'll change my mind if someone could perform a > vector-vector multiplication (an operation that is typically memory-bound) You mean a dot product? -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics Indian Institute of Technolog

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> a = np.cos(b) > > where b is a 1x1 matrix is *very* embarrassing (in the parallel > meaning of the term ;-) On this operation, GPUs will eat up CPUs like a pack of piranhas. :) -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics Indian Institute of

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 14:22:57 Dag Sverre Seljebotn wrote: > > > (Also a guard in timeit against CPU frequency scaling errors would be > > > great :-) Like simply outputting a warning if frequency scaling is > > > detected). > > Sorry, I don't get this one. > I had

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 11:37:24 Gael Varoquaux wrote: > On Thu, Sep 10, 2009 at 11:29:49AM +0200, Francesc Alted wrote: > >The point is: are GPUs prepared to compete with general-purpose CPUs > > in all-round operations, like evaluating transcendental functions, > > conditionals all o

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 11:40:48 Sturla Molden wrote: > Francesc Alted wrote: > > Numexpr already uses the Python parser, instead of building a new one. > > However the bytecode emitted after the compilation process is > > different, of course. > > > > Also, I don't see the point in requiring

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Dag Sverre Seljebotn
Francesc Alted wrote: > On Thursday 10 September 2009 13:45:10 Dag Sverre Seljebotn wrote: > > Do you see any issues with this approach: add a flag to timeit to provide > > two modes: > > a) Do an initial run which is never included in timings (in fact, > > as it gets "min"
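
The proposed behaviour can be approximated with today's timeit directly (a sketch; the discarded warmup call plays the role of the "initial run" in mode a):

    import timeit
    import numpy as np

    a = np.random.rand(10**6)

    timer = timeit.Timer(lambda: a + 1)
    timer.timeit(number=1)  # warmup run, deliberately not included in timings
    best = min(timer.repeat(repeat=5, number=100))
    print('best of 5 repeats of 100 calls: %g s' % best)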

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 13:45:10 Dag Sverre Seljebotn wrote: > Francesc Alted wrote: > > On Wednesday 09 September 2009 20:17:20 Dag Sverre Seljebotn wrote: > > > Ruben Salvador wrote: > > > > Your results are what I expected...but. This code is called from my > > > > main

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Dag Sverre Seljebotn
Francesc Alted wrote: > On Wednesday 09 September 2009 20:17:20 Dag Sverre Seljebotn wrote: > > Ruben Salvador wrote: > > > Your results are what I expected...but. This code is called from my main > > > program, and what I have in there (output array already created for both >

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 11:43:44 Ruben Salvador wrote: > OK. Thanks everybody :D > But...what is happening now? When executing this code: > > print ' . object parameters mutation .' > print 'np.shape(offspr)', np.shape(offspr) > print 'np.shape(offspr[0])', np.shape(offspr[0]) > pr

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> The point is: are GPUs prepared to compete with general-purpose CPUs in > all-round operations, like evaluating transcendental functions, conditionals, > all of this with a rich set of data types? Yup. -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics India

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Ruben Salvador
OK. Thanks everybody :D But...what is happening now? When executing this code: print ' . object parameters mutation .' print 'np.shape(offspr)', np.shape(offspr) print 'np.shape(offspr[0])', np.shape(offspr[0]) print "np.shape(r)", np.shape(r) print "np.shape(offspr_sigma)", np.shape(offs

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Matthieu Brucher
> Sure. Especially because NumPy is all about embarrassingly parallel problems > (after all, this is how a ufunc works, doing operations > element-by-element). > > The point is: are GPUs prepared to compete with general-purpose CPUs in > all-round operations, like evaluating transcendental function

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Sturla Molden
Francesc Alted wrote: > > Numexpr already uses the Python parser, instead of building a new one. > However the bytecode emitted after the compilation process is > different, of course. > > Also, I don't see the point in requiring immutable buffers. Could you > elaborate on this? > If you do lazy

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Gael Varoquaux
On Thu, Sep 10, 2009 at 11:29:49AM +0200, Francesc Alted wrote: >The point is: are GPUs prepared to compete with general-purpose CPUs in >all-round operations, like evaluating transcendental functions, >conditionals, all of this with a rich set of data types? I would like to >believ

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 11:20:21 Gael Varoquaux wrote: > On Thu, Sep 10, 2009 at 10:36:27AM +0200, Francesc Alted wrote: > >Where are you getting this info from? IMO the memory technology in > >graphics boards cannot be so different from that in commercial > > motherboards. It could b

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 10:58:13 Rohit Garg wrote: > > Where are you getting this info from? IMO the memory technology in > > graphics boards cannot be so different from that in commercial motherboards. > > It could be a *bit* faster (at the expense of packing less of it), but > > I'd say no

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Gael Varoquaux
On Thu, Sep 10, 2009 at 10:36:27AM +0200, Francesc Alted wrote: >Where are you getting this info from? IMO the memory technology in >graphics boards cannot be so different from that in commercial motherboards. It >could be a *bit* faster (at the expense of packing less of it), but I'd >

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 11:11:22 Sturla Molden wrote: > Citi, Luca wrote: > > That is exactly why numexpr is faster in these cases. > > I hope one day numpy will be able to perform such > > optimizations. > > I think it is going to require lazy evaluation. Whenever possible, an > operator w

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Ruben Salvador
OK. I get the idea, but I can't see it. In both cases, as the print statement shows, offspr is already created. I need light :S On Wed, Sep 9, 2009 at 8:17 PM, Dag Sverre Seljebotn < da...@student.matnat.uio.no> wrote: > Ruben Salvador wrote: > > Your results are what I expected...but. This code

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Sturla Molden
Rohit Garg wrote: > gtx280 --> 141 GB/s --> has 1 GB > ati4870 --> 115 GB/s --> has 1 GB > ati5870 --> 153 GB/s (launches Sept 22, 2009) --> 2 GB models will be there too > That is going to help if buffers are kept in graphics memory. But the problem is that graphics memory is a scarce resource. S.M.

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Sturla Molden
Citi, Luca wrote: > That is exactly why numexpr is faster in these cases. > I hope one day numpy will be able to perform such > optimizations. > I think it is going to require lazy evaluation. Whenever possible, an operator would just return a symbolic representation of the operation. This wou
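
A toy sketch of that idea (illustrative only, not how numpy actually works): operators build a small expression tree instead of computing, and nothing runs until evaluation is forced. A real implementation would fuse the whole tree into a single loop rather than recursing node by node as this one does:

    import numpy as np

    class Lazy:
        # Deferred expression node: records an operation and its operands.
        def __init__(self, func, *args):
            self.func, self.args = func, args
        def __add__(self, other):
            return Lazy(np.add, self, other)
        def __mul__(self, other):
            return Lazy(np.multiply, self, other)
        def evaluate(self):
            args = [x.evaluate() if isinstance(x, Lazy) else x for x in self.args]
            return self.func(*args)

    def lazy(arr):
        return Lazy(lambda x: x, arr)

    a, b, c = (np.ones(5) for _ in range(3))
    expr = lazy(a) * lazy(b) + lazy(c)  # symbolic: nothing computed yet
    print(expr.evaluate())              # evaluated once here: [ 2. 2. 2. 2. 2.]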

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> Where are you getting this info from? IMO the memory technology in > graphics boards cannot be so different from that in commercial motherboards. It > could be a *bit* faster (at the expense of packing less of it), but I'd say > not as much as 4x faster (100 GB/s vs 25 GB/s of Intel i7 in sequenti

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Francesc Alted
On Wednesday 09 September 2009 20:17:20 Dag Sverre Seljebotn wrote: > Ruben Salvador wrote: > > Your results are what I expected...but. This code is called from my main > > program, and what I have in there (output array already created for both > > cases) is: > > > > print "lambd", lambd > > pri

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-10 Thread Citi, Luca
Hi Ruben, > In both cases, as the print > statement shows, offspr is already created. >>> offspr[...] = r + a[:, None] means "fill the existing object pointed to by offspr with r + a[:, None]" while >>> offspr = r + a[:, None] means "create a new array and assign it to the variable offspr (after dec
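
The same difference in runnable form (a small sketch with made-up shapes):

    import numpy as np

    r = np.arange(6.).reshape(2, 3)
    a = np.array([10., 20.])
    offspr = np.empty((2, 3))

    offspr[...] = r + a[:, None]  # fills the existing array in place
    offspr = r + a[:, None]       # rebinds the name to a freshly allocated array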

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Citi, Luca
Hi Sturla, > The proper way to speed up "dot(a*b+c*sqrt(d), e)" is to get rid of > temporary intermediates. I implemented a patch (http://projects.scipy.org/numpy/ticket/1153) that reduces the number of temporary intermediates; in your example, from 4 to 2. There is a big improvement in terms of me
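
For comparison, numexpr sidesteps the element-wise temporaries by compiling the whole expression into one pass over the operands (a sketch; the dot itself still goes through BLAS):

    import numpy as np
    import numexpr as ne

    n = 10**6
    a, b, c, d, e = (np.random.rand(n) for _ in range(5))

    tmp = ne.evaluate('a * b + c * sqrt(d)')  # single pass, no per-op temporaries
    result = np.dot(tmp, e)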

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Francesc Alted
On Thursday 10 September 2009 09:45:29 Rohit Garg wrote: > > You do realize that the throughput from onboard (video) RAM is going > > to be much higher, right? It's not just the parallelization but the > > memory bandwidth. And as James pointed out, if you can keep most of > > your intermediate c

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Rohit Garg
> You do realize that the throughput from onboard (video) RAM is going > to be much higher, right? It's not just the parallelization but the > memory bandwidth. And as James pointed out, if you can keep most of > your intermediate computation on-card, you stand to benefit immensely, > even if doing

Re: [Numpy-discussion] Huge arrays

2009-09-10 Thread David Cournapeau
Kim Hansen wrote: > > On 9-Sep-09, at 4:48 AM, Francesc Alted wrote: > > > Yes, the latter is supported in PyTables as long as the underlying > > filesystem > > supports files > 2 GB, which is very common in modern operating > > systems. > > I think the OP said he was on Win3

Re: [Numpy-discussion] Huge arrays

2009-09-10 Thread Kim Hansen
> > On 9-Sep-09, at 4:48 AM, Francesc Alted wrote: > > > Yes, the latter is supported in PyTables as long as the underlying > > filesystem > > supports files > 2 GB, which is very common in modern operating > > systems. > > I think the OP said he was on Win32, in which case it should be noted: > FAT
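
For the record, a sketch of the PyTables approach to arrays larger than RAM (the snake_case API of current PyTables is assumed; note that on a FAT32 volume individual files are capped at 4 GB regardless):

    import numpy as np
    import tables

    # Create an on-disk array that is extendible along its first axis,
    # and append chunks so the whole thing never sits in memory at once.
    with tables.open_file('huge.h5', mode='w') as f:
        arr = f.create_earray(f.root, 'data', atom=tables.Float64Atom(),
                              shape=(0, 1000))
        for _ in range(100):
            arr.append(np.random.rand(1000, 1000))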

[Numpy-discussion] error: comma at end of enumerator list

2009-09-10 Thread Mads Ipsen
Hey, when I try to compile a SWIG-based interface to NumPy, I get the error: lib/python2.6/site-packages/numpy/core/include/numpy/npy_common.h:11: error: comma at end of enumerator list In npy_common.h, changing /* enums for detected endianness */ enum { NPY_CPU_UNKNOWN_ENDIAN, NPY_C