Re: [Numpy-discussion] Huge arrays

2009-09-09 Thread Francesc Alted
On Wednesday 09 September 2009 07:22:33 David Cournapeau wrote: On Wed, Sep 9, 2009 at 2:10 PM, Sebastian Haase <seb.ha...@gmail.com> wrote: Hi, you can probably use PyTables for this. Even though it's meant to save/load data to/from disk (in HDF5 format), as far as I understand, it can be
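
For readers unfamiliar with the suggestion, a minimal sketch of the kind of on-disk, chunked array PyTables provides (modern PyTables API; file name, shapes and block sizes are invented for illustration, not taken from the thread):

    import numpy as np
    import tables

    # An extendable array stored on disk: only the blocks being read or
    # written need to fit in RAM, so the total size can exceed physical memory.
    f = tables.open_file('huge.h5', mode='w')
    arr = f.create_earray(f.root, 'data', tables.Float64Atom(), shape=(0, 1000))

    for _ in range(10):                       # append blocks; repeat as needed
        arr.append(np.random.rand(10000, 1000))

    block = arr[50000:50010, :]               # read back a small slice from disk
    f.close()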

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Francesc Alted
On Tuesday 08 September 2009 21:19:05 George Dahl wrote: Sturla Molden sturla at molden.no writes: Erik Tollerud wrote: NumPy arrays on the GPU memory is an easy task. But then I would have to write the computation in OpenCL's dialect of C99? This is true to some extent, but also
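
To illustrate what "OpenCL's dialect of C99" means in practice, here is a hedged sketch using the PyOpenCL package (not something proposed in the thread; names and the trivial kernel are illustrative only):

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel itself is written in OpenCL's C99-like dialect.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *c)
    {
        int gid = get_global_id(0);
        c[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, dest_buf)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, dest_buf)   # copy the result back to host memory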

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Francesc Alted
On Tuesday 08 September 2009 23:21:53 Christopher Barker wrote: Also, perhaps a GPU-aware numexpr could be helpful, which I think is the kind of thing that Sturla was referring to when she wrote: Incidentally, this will also make it easier to leverage on modern GPUs. Numexpr mainly supports

Re: [Numpy-discussion] Row-wise dot product?

2009-09-09 Thread Chris Colbert
the way I do my rotations is this: tmat = rotation matrix, vec = stack of row vectors, rotated_vecs = np.dot(tmat, vec.T).T On Mon, Sep 7, 2009 at 6:53 PM, T J <tjhn...@gmail.com> wrote: On Mon, Sep 7, 2009 at 3:43 PM, T J <tjhn...@gmail.com> wrote: Or perhaps I am just being dense. Yes. I just
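
A self-contained sketch of that idiom (a 2-D rotation and the sizes are chosen arbitrarily for illustration):

    import numpy as np

    theta = np.deg2rad(30.0)                            # arbitrary example angle
    tmat = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # 2x2 rotation matrix
    vec = np.random.rand(5, 2)                          # stack of 5 row vectors
    rotated_vecs = np.dot(tmat, vec.T).T                # rotate all rows in one call

    # Each output row equals tmat applied to the corresponding input row.
    assert np.allclose(rotated_vecs[0], np.dot(tmat, vec[0]))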

[Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Ruben Salvador
Hi there! I'm sure I'm missing something, but I am not able to do a simple sum of two arrays with different dimensions. I have a 2D array and a 1D array: np.shape(a) (8, 26), np.shape(b) (8,), and I want to sum each *row* of 'a' with the equivalent *row* of 'b' (that is, summing each 1D row

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Citi, Luca
Hi Ruben One dimensional arrays can be thought of as rows. If you want a column, you need to append a dimension. d = a + b[:,None] which is equivalent to d = a + b[:,np.newaxis] Best, Luca
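
A small sketch of the suggested broadcasting fix, with the shapes from the original question:

    import numpy as np

    a = np.random.rand(8, 26)      # 2-D array
    b = np.random.rand(8)          # one value per row of a
    d = a + b[:, None]             # b[:, None] has shape (8, 1) and broadcasts
                                   # across the 26 columns of a

    assert d.shape == (8, 26)
    assert np.allclose(d[3], a[3] + b[3])   # row 3 got b[3] added element-wise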

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Ruben Salvador
Perfect! Thank you very much :D It's not obvious, though...I think I should read more deeply into Python/NumPy...but for the use I'm giving to it... Anyway, I thought the pythonic way would be faster, but after trying with a size 8 instead of 8...the for loop is faster! Pythonic time ==

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Pauli Virtanen
Wed, 09 Sep 2009 13:08:22 +0200, Ruben Salvador wrote: Perfect! Thank you very much :D It's not obvious, though...I think I should read more deeply into Python/NumPy...but for the use I'm giving to it... Anyway, I thought the pythonic way would be faster, but after trying with a size

[Numpy-discussion] reloading f2py modules in ipython

2009-09-09 Thread John [H2O]
Hello, I've started to rely more and more on f2py to create simple modules utilizing Fortran for efficiency. This is a great tool to have within Python! A problem, however, is that unlike python modules, the reload() function does not seem to update the f2py modules within ipython (which I use
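
For readers unfamiliar with the workflow being described, a hedged sketch (the module name flib and source file are hypothetical):

    # Built beforehand on the shell with something like:
    #   f2py -c -m flib flib.f90
    import flib                     # the compiled extension imports like any module

    # reload() re-executes Python source, but it cannot re-link a compiled
    # C/Fortran extension, so a rebuilt flib is not picked up this way;
    # restarting the interpreter (or the IPython session) is the usual workaround.
    reload(flib)                    # Python 2 builtin; has no effect on the binary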

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Citi, Luca
I am sorry but it doesn't make much sense. How do you measure the performance? Are you sure you include the creation of the c output array in the time spent (which is outside the for loop but should be considered anyway)? Here are my results... In [84]: a = np.random.rand(8,26) In [85]: b =
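
A self-contained version of that kind of comparison (timings will vary by machine; this is an illustrative sketch, not the exact benchmark from the thread):

    import numpy as np
    from timeit import timeit

    a = np.random.rand(8, 26)
    b = np.random.rand(8)

    def broadcast_add():
        return a + b[:, None]

    def loop_add():
        c = np.empty_like(a)            # include the output allocation in the cost
        for i in range(a.shape[0]):
            c[i] = a[i] + b[i]
        return c

    print(timeit(broadcast_add, number=100000))
    print(timeit(loop_add, number=100000))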

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Darren Dale
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote: Hi David, I already gave my own opinion on py3k, which can be summarized as:  - it is a huge effort, and no core numpy/scipy developer has expressed

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Ruben Salvador
Your results are what I expected...but. This code is called from my main program, and what I have in there (output array already created for both cases) is: print 'lambd', lambd print 'np.shape(a)', np.shape(a) print 'np.shape(r)', np.shape(r) print 'np.shape(offspr)', np.shape(offspr) t = clock() for i

Re: [Numpy-discussion] Adding a 2D with a 1D array...

2009-09-09 Thread Ruben Salvador
I forgot...just in case: rsalva...@cactus:~$ python --version Python 2.5.2 python-scipy: version 0.6.0 On Wed, Sep 9, 2009 at 2:36 PM, Ruben Salvador <rsalvador...@gmail.com> wrote: Your results are what I expected...but. This code is called from my main program, and what I have in there

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Lev Givon
Received from Francesc Alted on Wed, Sep 09, 2009 at 05:18:48AM EDT: (snip) The point here is that matrix-matrix multiplications (or, in general, functions with a large operation/element ratio) are a *tiny* part of all the possible operations between arrays that NumPy supports. This is why

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Francesc Alted
On Wednesday 09 September 2009 11:26:06 Francesc Alted wrote: On Tuesday 08 September 2009 23:21:53 Christopher Barker wrote: Also, perhaps a GPU-aware numexpr could be helpful, which I think is the kind of thing that Sturla was referring to when she wrote: Incidentally, this will

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Robert Kern
On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote: Another topic concerning documentation is API compatibility. The python devs have requested projects not use the 2-3 transition as an excuse to change their APIs, but numpy is maybe a special case. I'm thinking about PEP3118.

Re: [Numpy-discussion] reloading f2py modules in ipython

2009-09-09 Thread Robert Kern
On Wed, Sep 9, 2009 at 06:25, John [H2O] <washa...@gmail.com> wrote: Hello, I've started to rely more and more on f2py to create simple modules utilizing Fortran for efficiency. This is a great tool to have within Python! A problem, however, is that unlike python modules, the reload()

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Charles R Harris
On Wed, Sep 9, 2009 at 7:15 AM, Darren Dale <dsdal...@gmail.com> wrote: On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote: Hi David, I already gave my own opinion on py3k, which can be summarized

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread James Bergstra
On Wed, Sep 9, 2009 at 10:41 AM, Francesc Alted <fal...@pytables.org> wrote: Numexpr mainly supports functions that are meant to be used element-wise, so the operation/element ratio is normally 1 (or close to 1). These are the scenarios where improved memory access is much more important than CPU

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Dag Sverre Seljebotn
Christopher Barker wrote: George Dahl wrote: Sturla Molden sturla at molden.no writes: Teraflops peak performance of modern GPUs is impressive. But NumPy cannot easily benefit from that. I know that for my work, I can get on the order of a 50-fold speedup over numpy using a python

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Dag Sverre Seljebotn
Darren Dale wrote: On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote: Hi David, I already gave my own opinion on py3k, which can be summarized as: - it is a huge effort, and no core numpy/scipy

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Darren Dale
On Wed, Sep 9, 2009 at 11:25 AM, Charles R Harris <charlesr.har...@gmail.com> wrote: On Wed, Sep 9, 2009 at 7:15 AM, Darren Dale <dsdal...@gmail.com> wrote: On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com>

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Dag Sverre Seljebotn
Dag Sverre Seljebotn wrote: Darren Dale wrote: On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote: Hi David, I already gave my own opinion on py3k, which can be summarized as: - it is a huge effort,

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Christopher Barker
Robert Kern wrote: On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote: We aren't supposed to break APIs that aren't related to the 2-3 transition. PEP3118 is related to the 2-3 transition. Since I'm that somebody that always pipes up about this topic, I'm pretty sure it hasn't

Re: [Numpy-discussion] question about future support for python-3

2009-09-09 Thread Robert Kern
On Wed, Sep 9, 2009 at 11:40, Christopher Barker <chris.bar...@noaa.gov> wrote: Robert Kern wrote: On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote: We aren't supposed to break APIs that aren't related to the 2-3 transition. PEP3118 is related to the 2-3 transition. Since I'm

Re: [Numpy-discussion] Huge arrays

2009-09-09 Thread David Warde-Farley
On 9-Sep-09, at 4:48 AM, Francesc Alted wrote: Yes, this latter is supported in PyTables as long as the underlying filesystem supports files > 2 GB, which is very usual in modern operating systems. I think the OP said he was on Win32, in which case it should be noted: FAT32 has its upper

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Sturla Molden
George Dahl wrote: I know that for my work, I can get on the order of a 50-fold speedup over numpy using a python wrapper for a simple GPU matrix class. So I might be dealing with a lot of matrix products where I multiply a fixed 512 by 784 matrix by a 784 by 256 matrix that changes
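
For reference, the CPU-side equivalent of that workload (shapes taken from the message; data is random for illustration):

    import numpy as np

    W = np.random.rand(512, 784)   # fixed weight matrix
    X = np.random.rand(784, 256)   # batch matrix that changes every call
    Y = np.dot(W, X)               # result is (512, 256); dispatched to BLAS on the CPU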

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Sturla Molden
James Bergstra wrote: Suppose you want to evaluate dot(a*b+c*sqrt(d), e). The GPU is great for doing dot(), The CPU is equally great (or better?) for doing dot(). In both cases: - memory access scales O(n) for dot products. - computation scales O(n) for dot products. - memory is slow - computation
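
One way to read that split, as a hedged sketch (shapes and sizes invented): hand the element-wise part to numexpr, which makes one blocked pass over memory, and leave the matrix product to BLAS via np.dot:

    import numpy as np
    import numexpr as ne

    n = 500
    a, b, c, d = (np.random.rand(n, n) for _ in range(4))
    e = np.random.rand(n, n)

    # Element-wise part: operation/element ratio ~1, so memory bandwidth
    # dominates; numexpr evaluates it blockwise without large temporaries.
    tmp = ne.evaluate('a * b + c * sqrt(d)')

    # Matrix product: high operation/element ratio, handled by BLAS.
    result = np.dot(tmp, e)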

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread David Warde-Farley
On 10-Sep-09, at 12:47 AM, Sturla Molden wrote: The CPU is equally great (or better?) for doing dot(). In both cases: - memory access scales O(n) for dot products. - computation scales O(n) for dot products. - memory is slow - computation is fast (faster for GPU) You do realize that the

Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-09 Thread Fernando Perez
On Wed, Sep 9, 2009 at 9:47 PM, Sturla Molden <stu...@molden.no> wrote: James Bergstra wrote: Suppose you want to evaluate dot(a*b+c*sqrt(d), e).  The GPU is great for doing dot(), The CPU is equally great (or better?) for doing dot(). In both cases: - memory access scales O(n) for dot products.