On 20.07.2011 08:49, Carlos Becker wrote:
>
> The main difference is that Matlab is able to take into account a
> pre-allocated array/matrix, probably avoiding the creation of a
> temporary and writing the results directly in the pre-allocated array.
>
> I think this is essential to speed up numpy.
On Wednesday, July 20, 2011, Carlos Becker wrote:
> Those are very interesting examples. I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation is
> done: it takes 3-4x longer than with pre-allocation. The main difference is
> that Matlab is
Hi all. Thanks for the feedback.
My point is not to start a war on matlab/numpy. This comes out of my wish to
switch from Matlab to something more appealing.
I like numpy and Python, which is a proper language (unlike Matlab scripts,
whose syntax gets patched and mangled as new versions come out).
I
On 19.07.2011 11:05, Carlos Becker wrote:
m = rand(2000,2000);
N = 100;
tic;
for I=1:N
    k = m - 0.5;
end
toc / N
Here, Matlab's JIT compiler can probably hoist the invariant out of the
loop, and just do
I=N
k = m - 0.5
Try this
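On the numpy side there is no JIT to hoist the expression out of a loop, so a plain timeit run measures the actual subtraction each repetition. A minimal sketch (array size matches the Matlab snippet above; the timing call itself is my addition, not from the thread):

```python
import timeit
import numpy as np

m = np.random.rand(2000, 2000)

# CPython has no JIT, so nothing is hoisted: each repetition really
# performs the subtraction (and allocates a fresh result array)
t = timeit.timeit('k = m - 0.5', globals={'m': m}, number=100) / 100
print(f'{t * 1e3:.2f} ms per subtraction')
```

This makes the numpy number directly comparable to the Matlab loop, without the invariant-hoisting caveat.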
On 20.07.2011 09:35, Carlos Becker wrote:
>
> In my case, sometimes it is required to process 1k images or more, and
> a 2x speed improvement in this case means 2 hours of processing vs 4.
Can you demonstrate that Matlab is faster than NumPy for this task?
Sturla
Tue, 19 Jul 2011 21:55:28 +0200, Ralf Gommers wrote:
> On Sun, Jul 17, 2011 at 11:48 PM, Darren Dale
> wrote:
>> In numpy.distutils.system_info:
>>
>> default_x11_lib_dirs = libpaths(['/usr/X11R6/lib', '/usr/X11/lib',
>> '/usr/lib'], platform_bits)
>> defau
Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
> Those are very interesting examples. I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation
> is done: it takes 3-4x longer than with pre-allocation. The main
> difference is that Matlab i
Wed, 20 Jul 2011 09:04:09 +, Pauli Virtanen wrote:
> Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
>> Those are very interesting examples. I think that pre-allocation is
>> very important, and something similar happens in Matlab if no
>> pre-allocation is done: it takes 3-4x longer tha
Hi,
On Wed, Jul 20, 2011 at 2:42 AM, Chad Netzer wrote:
> On Tue, Jul 19, 2011 at 6:10 PM, Pauli Virtanen wrote:
>
> >k = m - 0.5
> >
> > does here the same thing as
> >
> >k = np.empty_like(m)
> >np.subtract(m, 0.5, out=k)
> >
> > The memory allocation (empty_like and t
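The equivalence Pauli states can be checked directly; a minimal sketch (the array size is arbitrary, chosen to match the thread's examples):

```python
import numpy as np

m = np.random.rand(2000, 2000)

# one-step version: numpy allocates a fresh result array
k1 = m - 0.5

# two-step version: the allocation (empty_like) and the computation
# (subtract with out=) are explicit and separable
k2 = np.empty_like(m)
np.subtract(m, 0.5, out=k2)

assert np.array_equal(k1, k2)
```

The `out=` form is what lets you reuse a pre-allocated array across many calls, which is the behavior attributed to Matlab earlier in the thread.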
On Tue, Jul 19, 2011 at 11:49 PM, Carlos Becker wrote:
> Those are very interesting examples.
Cool.
> I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation is
> done: it takes 3-4x longer than with pre-allocation.
Can you provide a simple
I will be away from my computer for a week, but what I could try today
shows that the Matlab JIT is doing some tricks, so the results I showed
previously for Matlab are likely to be wrong.
In that sense, timings seem to be similar between numpy and Matlab when
JIT tricks are avoided
Wed, 20 Jul 2011 09:04:09 +, Pauli Virtanen wrote:
> Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
>> Those are very interesting examples. I think that pre-allocation is
>> very important, and something similar happens in Matlab if no
>> pre-allocation is done: it takes 3-4x longer than
On Wed, Jul 20, 2011 at 3:57 AM, eat wrote:
> Perhaps slightly OT, but there is something very odd going on. I would expect
> the performance to be in a totally different ballpark.
>>
>> >>> t = timeit.Timer('m =- 0.5',
>> ...                  setup='import numpy as np; m = np.ones([8092, 8092], float)')
>> >>> np.mean(t.
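One likely explanation for the odd numbers (an assumption on my part, since the snippet is cut off): the timed statement `m =- 0.5` is parsed by Python as `m = -0.5`, so it merely rebinds the name to a scalar and never touches the array at all:

```python
import numpy as np

m = np.ones((4, 4))
m =- 0.5           # parsed as m = -0.5: rebinds the name to a float scalar
print(type(m))     # <class 'float'>

m = np.ones((4, 4))
m -= 0.5           # in-place subtraction, what was probably intended
print(m[0, 0])     # 0.5
```

Rebinding a scalar costs essentially nothing, which would put the measured time in a "totally different ballpark" from a real 8092x8092 subtraction.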
> with "gcc -O3 -ffast-math -march=native -mfpmath=sse" optimizations
> for the C code (involving SSE2 vectorization and whatnot, looking at
> the assembler output). Numpy is already going essentially at the maximum
> speed.
As a related side question that I've been wondering about myself for some time
On Wed, Jul 20, 2011 at 03:58, Pauli Virtanen wrote:
> Tue, 19 Jul 2011 21:55:28 +0200, Ralf Gommers wrote:
>> On Sun, Jul 17, 2011 at 11:48 PM, Darren Dale
>> wrote:
>>> In numpy.distutils.system_info:
>>>
>>> default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib',
>>>
Dear NumPy gurus,
I don't get the difference between frompyfunc and vectorize. What are
their respective use cases?
Thanks!
== Olivier
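For what it's worth, the practical difference shows up in the result dtypes; a small sketch (the function `f` is just an illustration, not from the thread):

```python
import numpy as np

def f(x):
    return x + 1

# frompyfunc builds a real ufunc object, but it always returns
# arrays of dtype object
uf = np.frompyfunc(f, 1, 1)
print(uf(np.arange(3)).dtype)   # object

# vectorize wraps the same looping idea but handles output dtypes
# (inferred from a trial call, or given explicitly via otypes)
vf = np.vectorize(f, otypes=[np.int64])
print(vf(np.arange(3)).dtype)   # int64
```

Note that both loop over elements in Python, so neither gives the speed of a compiled ufunc; the choice is mostly about dtype handling and broadcasting conveniences.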
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Wed, 20 Jul 2011 11:31:41 +, Pauli Virtanen wrote:
[clip]
> There is a sharp order-of-magnitude change of speed in malloc+memset of
> an array, which is not present in memset itself. (This is then also
> reflected in the Numpy performance -- floating point operations probably
> don't cost much
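The allocation effect is easy to probe from Python; a rough sketch (the exact crossover size is platform- and allocator-dependent, so the sizes below are only indicative):

```python
import timeit
import numpy as np

# np.zeros is roughly malloc + clear; timing it at different sizes can
# expose the allocator switching to freshly mapped pages for large blocks
for n in (10_000, 10_000_000):
    t = timeit.timeit(lambda n=n: np.zeros(n), number=50) / 50
    print(f'np.zeros({n}): {t * 1e6:.1f} us')
```

On many systems the large allocation is served by mmap and returned to the OS on free, which is one mechanism behind the sharp order-of-magnitude change described above.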
On Tue, Jul 19, 2011 at 11:08 AM, Robert Kern wrote:
> On Tue, Jul 19, 2011 at 07:38, Andrea Cimatoribus
> wrote:
>> Dear all,
>> I would like to avoid the use of a boolean array (mask) in the following
>> statement:
>>
>> mask = (A != 0.)
>> B = A[mask]
>>
>> in order to be able to move th
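Depending on what "avoid" means here (the question is cut off), integer indices are one mask-free alternative; a sketch with made-up data:

```python
import numpy as np

A = np.array([0.0, 1.5, 0.0, -2.0, 3.0])

# the mask-based version from the question
mask = (A != 0.)
B = A[mask]

# equivalent using integer indices instead of a boolean array;
# the index array can also be reused on other arrays of the same shape
idx = np.flatnonzero(A)
B2 = A[idx]

assert np.array_equal(B, B2)
```

Whether this actually helps depends on the use case: the boolean mask costs one byte per element, while the index array costs one integer per nonzero element.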
The 'm' seems to be the math library on Linux, removing it breaks the build
for me. I've put this patch, minus removing the 'm', in a pull request along
with hopefully a fix for http://projects.scipy.org/numpy/ticket/1909.
https://github.com/numpy/numpy/pull/118
-Mark
On Tue, Jul 19, 2011 at 3:1
Hello,
I'm struggling to create openmp subroutines. I've simplified the problem
down to the subroutine below.
-- play.f90 --
subroutine step(soln,n)
implicit none
integer n,i
real*8 soln(n)
!f2py intent(in) n
!f2py intent(out) soln
!f2py depend(n) soln
!$OMP PARALLEL DO
do i=1,n
I'm not at my Mac to check the exact paths, but see if pointing one of
the environment variables
LD_LIBRARY_PATH
or
DYLD_LIBRARY_PATH
at a directory where the gfortran OpenMP libraries can be found helps -
this will depend on where you got gfortran from and the version, but you
should be able to find it b
>> I think this is essential to speed up numpy. Maybe numexpr could handle this
>> in the future? Right now the general use of numexpr is result =
>> numexpr.evaluate("whatever"), so the same problem seems to be there.
>>
>> With this I am not saying that numpy is not worth it, just that for many
On Wed, Jul 20, 2011 at 5:52 PM, srean wrote:
> >> I think this is essential to speed up numpy. Maybe numexpr could handle
> this in the future? Right now the general use of numexpr is result =
> numexpr.evaluate("whatever"), so the same problem seems to be there.
> >>
> >> With this I am not say