On Wednesday 09 September 2009 07:22:33 David Cournapeau wrote:
On Wed, Sep 9, 2009 at 2:10 PM, Sebastian Haase <seb.ha...@gmail.com> wrote:
Hi,
you can probably use PyTables for this. Even though it's meant to
save/load data to/from disk (in HDF5 format) as far as I understand,
it can be
On Tuesday 08 September 2009 21:19:05 George Dahl wrote:
Sturla Molden sturla at molden.no writes:
Erik Tollerud skrev:
NumPy arrays on the GPU memory is an easy task. But then I would have
to write the computation in OpenCL's dialect of C99?
This is true to some extent, but also
On Tuesday 08 September 2009 23:21:53 Christopher Barker wrote:
Also, perhaps a GPU-aware numexpr could be helpful which I think is the
kind of thing that Sturla was referring to when she wrote:
Incidentally, this will also make it easier to leverage on modern GPUs.
Numexpr mainly supports
the way I do my rotations is this:
# tmat: the rotation matrix
# vec: the stack of row vectors
rotated_vecs = np.dot(tmat, vec.T).T
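A self-contained sketch of that rotation pattern (the specific 2-D rotation matrix and vectors here are illustrative assumptions, not from the thread):

```python
import numpy as np

# Hypothetical example: rotate a stack of 2-D row vectors by 90 degrees.
theta = np.pi / 2
tmat = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])  # rotation matrix

vec = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])  # stack of row vectors, shape (3, 2)

# Rotate every row at once: transpose to columns, multiply, transpose back.
rotated_vecs = np.dot(tmat, vec.T).T
```

Row i of the result equals `tmat` applied to row i of `vec` individually, so one matrix product replaces a Python-level loop over vectors.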
On Mon, Sep 7, 2009 at 6:53 PM, T J <tjhn...@gmail.com> wrote:
On Mon, Sep 7, 2009 at 3:43 PM, T J <tjhn...@gmail.com> wrote:
Or perhaps I am just being dense.
Yes. I just
Hi there!
I'm sure I'm missing something, but I am not able of doing a simple sum of
two arrays with different dimensions. I have a 2D array and a 1D array
>>> np.shape(a)
(8, 26)
>>> np.shape(b)
(8,)
and I want to sum each *row* of 'a' with the equivalent *row* of 'b' (this
is, summing each 1D row
Hi Ruben
One-dimensional arrays can be thought of as rows. If you want a column, you
need to append a dimension.
d = a + b[:,None]
which is equivalent to
d = a + b[:,np.newaxis]
Best,
Luca
NumPy-Discussion mailing list
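A quick check of the broadcasting fix suggested above, using the shapes from the question (the array contents are made up for illustration):

```python
import numpy as np

a = np.arange(8 * 26, dtype=float).reshape(8, 26)  # shape (8, 26)
b = np.arange(8, dtype=float)                      # shape (8,)

# b[:, None] has shape (8, 1), so broadcasting stretches it across the
# 26 columns: b[i] is added to every element of row i of a.
d = a + b[:, None]

# Equivalent spelling (None is an alias for np.newaxis):
d2 = a + b[:, np.newaxis]
```

Without the added axis, `a + b` fails, because broadcasting aligns trailing dimensions and 26 does not match 8.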
Perfect! Thank you very much :D
It's not obvious, though...I think I should read more deeply into
Python/NumPy...but for the use I'm giving to it...
Anyway, I thought the pythonic way would be faster, but after trying with a
size 8 instead of 8...the for loop is faster!
Pythonic time ==
Wed, 09 Sep 2009 13:08:22 +0200, Ruben Salvador wrote:
Perfect! Thank you very much :D
It's not obvious, though...I think I should read more deeply into
Python/NumPy...but for the use I'm giving to it...
Anyway, I thought the pythonic way would be faster, but after trying
with a size
Hello,
I've started to rely more and more on f2py to create simple modules
utilizing Fortran for efficiency. This is a great tool to have within
Python!
A problem, however, is that unlike python modules, the reload() function
does not seem to update the f2py modules within ipython (which I use
I am sorry, but it doesn't make much sense.
How do you measure the performance?
Are you sure you include the creation of the c output array in the time spent
(which is outside the for loop but should be considered anyway)?
Here are my results...
In [84]: a = np.random.rand(8,26)
In [85]: b =
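To make that comparison concrete, here is a hedged sketch (written for modern Python with `timeit`; the setup follows the thread, but the loop body is an assumption) that times the explicit loop against the broadcast sum, counting the creation of the output array in both cases as suggested above:

```python
import timeit
import numpy as np

a = np.random.rand(8, 26)
b = np.random.rand(8)

def loop_version():
    c = np.empty_like(a)          # output allocation counted in the timing
    for i in range(a.shape[0]):
        c[i] = a[i] + b[i]
    return c

def broadcast_version():
    return a + b[:, None]         # output allocated inside the expression

t_loop = timeit.timeit(loop_version, number=5000)
t_bcast = timeit.timeit(broadcast_version, number=5000)
print("loop: %.4fs  broadcast: %.4fs" % (t_loop, t_bcast))
```

For arrays this small, per-call Python overhead dominates either way, so the loop can plausibly come out ahead, which would be consistent with the surprise reported in the thread; the broadcast form wins as the arrays grow.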
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote:
Hi David,
I already gave my own opinion on py3k, which can be summarized as:
- it is a huge effort, and no core numpy/scipy developer has expressed
Your results are what I expected...but. This code is called from my main
program, and what I have in there (output array already created for both
cases) is:
print 'lambd', lambd
print 'np.shape(a)', np.shape(a)
print 'np.shape(r)', np.shape(r)
print 'np.shape(offspr)', np.shape(offspr)
t = clock()
for i
I forgot...just in case:
rsalva...@cactus:~$ python --version
Python 2.5.2
python-scipy: version 0.6.0
On Wed, Sep 9, 2009 at 2:36 PM, Ruben Salvador <rsalvador...@gmail.com> wrote:
Your results are what I expected...but. This code is called from my main
program, and what I have in there
Received from Francesc Alted on Wed, Sep 09, 2009 at 05:18:48AM EDT:
(snip)
The point here is that matrix-matrix multiplications (or, in general,
functions with a large operation/element ratio) are a *tiny* part of all the
possible operations between arrays that NumPy supports. This is why
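The operation/element ratio Francesc invokes can be counted out directly: for an n×n matrix product the ratio grows with n, while for an element-wise operation it is constant. A back-of-the-envelope sketch (a counting exercise, not a benchmark):

```python
def matmul_ratio(n):
    """Flops per element moved for an n x n matrix-matrix product."""
    flops = 2 * n**3          # n^2 output elements, 2n ops each (mul + add)
    elements = 3 * n**2       # two input matrices plus one output
    return flops / elements   # grows like 2n/3

def elementwise_ratio(n):
    """Flops per element moved for an element-wise sum of two n-vectors."""
    flops = n                 # one add per output element
    elements = 3 * n          # two inputs plus one output
    return flops / elements   # constant 1/3, independent of n

print(matmul_ratio(512), elementwise_ratio(512))
```

This is why matrix products can hide memory latency behind arithmetic while element-wise kernels, the common case in NumPy, are bound by memory bandwidth.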
On Wednesday 09 September 2009 11:26:06 Francesc Alted wrote:
On Tuesday 08 September 2009 23:21:53 Christopher Barker wrote:
Also, perhaps a GPU-aware numexpr could be helpful which I think is the
kind of thing that Sturla was referring to when she wrote:
Incidentally, this will
On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote:
Another topic concerning documentation is API compatibility. The
python devs have requested projects not use the 2-3 transition as an
excuse to change their APIs, but numpy is maybe a special case. I'm
thinking about PEP3118.
On Wed, Sep 9, 2009 at 06:25, John [H2O] <washa...@gmail.com> wrote:
Hello,
I've started to rely more and more on f2py to create simple modules
utilizing Fortran for efficiency. This is a great tool to have within
Python!
A problem, however, is that unlike python modules, the reload()
On Wed, Sep 9, 2009 at 7:15 AM, Darren Dale <dsdal...@gmail.com> wrote:
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote:
Hi David,
I already gave my own opinion on py3k, which can be summarized
On Wed, Sep 9, 2009 at 10:41 AM, Francesc Alted <fal...@pytables.org> wrote:
Numexpr mainly supports functions that are meant to be used element-wise,
so the operation/element ratio is normally 1 (or close to 1). These are the
scenarios where improved memory access is much more important than CPU
Christopher Barker wrote:
George Dahl wrote:
Sturla Molden sturla at molden.no writes:
Teraflops peak performance of modern GPUs is impressive. But NumPy
cannot easily benefit from that.
I know that for my work, I can get around a 50-fold speedup over
numpy using a python
Darren Dale wrote:
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote:
Hi David,
I already gave my own opinion on py3k, which can be summarized as:
- it is a huge effort, and no core numpy/scipy
On Wed, Sep 9, 2009 at 11:25 AM, Charles R Harris <charlesr.har...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 7:15 AM, Darren Dale <dsdal...@gmail.com> wrote:
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com>
Dag Sverre Seljebotn wrote:
Darren Dale wrote:
On Tue, Sep 8, 2009 at 9:02 PM, David Cournapeau <courn...@gmail.com> wrote:
On Wed, Sep 9, 2009 at 9:37 AM, Darren Dale <dsdal...@gmail.com> wrote:
Hi David,
I already gave my own opinion on py3k, which can be summarized as:
- it is a huge effort,
Robert Kern wrote:
On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote:
We aren't supposed to break APIs that aren't related to the 2-3
transition. PEP3118 is related to the 2-3 transition. Since I'm that
somebody that always pipes up about this topic, I'm pretty sure it
hasn't
On Wed, Sep 9, 2009 at 11:40, Christopher Barker <chris.bar...@noaa.gov> wrote:
Robert Kern wrote:
On Wed, Sep 9, 2009 at 07:15, Darren Dale <dsdal...@gmail.com> wrote:
We aren't supposed to break APIs that aren't related to the 2-3
transition. PEP3118 is related to the 2-3 transition. Since I'm
On 9-Sep-09, at 4:48 AM, Francesc Alted wrote:
Yes, this latter is supported in PyTables as long as the underlying
filesystem supports files > 2 GB, which is very usual in modern operating
systems.
I think the OP said he was on Win32, in which case it should be noted:
FAT32 has its upper
George Dahl wrote:
I know that for my work, I can get around a 50-fold speedup over
numpy using a python wrapper for a simple GPU matrix class. So I
might be
dealing with a lot of matrix products where I multiply a fixed 512 by
784 matrix
by a 784 by 256 matrix that changes
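The shapes George describes can be reproduced in plain NumPy (random data stands in for the real matrices, and the loop count is arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.rand(512, 784)        # the fixed matrix, reused across products

# A stream of matrices that change between calls, each 784 x 256.
for _ in range(3):
    X = rng.rand(784, 256)
    Y = np.dot(W, X)          # result is 512 x 256
```

At these sizes each product does roughly 2 * 512 * 784 * 256 flops over only ~10^6 input elements, which is exactly the high operation/element regime where a GPU (or a good BLAS) can shine.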
James Bergstra wrote:
Suppose you want to evaluate dot(a*b+c*sqrt(d), e). The GPU is
great for doing dot(),
The CPU is equally great (or better?) for doing dot(). In both cases:
- memory access scales O(n) for dot products.
- computation scales O(n) for dot products.
- memory is low
- computation
On 10-Sep-09, at 12:47 AM, Sturla Molden wrote:
The CPU is equally great (or better?) for doing dot(). In both cases:
- memory access scales O(n) for dot products.
- computation scales O(n) for dot products.
- memory is low
- computation is fast (faster for GPU)
You do realize that the
On Wed, Sep 9, 2009 at 9:47 PM, Sturla Molden <stu...@molden.no> wrote:
James Bergstra wrote:
Suppose you want to evaluate dot(a*b+c*sqrt(d), e). The GPU is
great for doing dot(),
The CPU is equally great (or better?) for doing dot(). In both cases:
- memory access scales O(n) for dot products.
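Sturla's O(n) point can be counted out explicitly: a length-n vector dot product reads 2n values and performs about 2n floating-point operations, so its flops-per-element ratio stays near 1 no matter how large n gets, and faster memory helps more than a faster ALU. A small counting sketch (not a benchmark):

```python
def dot_intensity(n):
    """Approximate flops per element moved for a length-n dot product."""
    flops = 2 * n - 1     # n multiplications and n - 1 additions
    elements = 2 * n      # both input vectors are read once each
    return flops / elements

# The ratio approaches 1 from below and never grows with n.
print(dot_intensity(10), dot_intensity(10**6))
```

Contrast this with the matrix-matrix case discussed earlier in the thread, where the ratio grows with the matrix dimension and the GPU's arithmetic throughput can actually be exploited.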