Hi,

> You didn't use it right. You have to manually hoist out the loop
> invariants. Obviously, declaring two local variables _kwargs and
> _ndarray doesn't help if you do it for each array you construct.
>
> Say you want to construct 1000 similar arrays:

Right - but here you are just caching the parameter dictionary in the
hope that the size, dtype and/or data are the same from one iteration
to the next.  Again, it is only a bit faster even in the best possible
case (exactly the same array over and over):

In [2]: timeit a = sturlabench.make_local_1d_array(n, data, dt)
100000 loops, best of 3: 2.19 us per loop

In [3]: timeit a = py_make_1d_array(n, data, dt)
100000 loops, best of 3: 2.95 us per loop

but still quite a bit slower than the C version:

In [4]: timeit a = arraymaker2.make_1d_array(n, data, dt)
1000000 loops, best of 3: 773 ns per loop

In my case I can have many, many arrays being created, and they are
unlikely to have the same data, dtype or size from one iteration to
the next, so a cache of this kind would miss on almost every call.
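
Just to be concrete, here is a rough sketch of the kind of kwargs
caching I understand the Python version to be doing (the names and
details are my own reconstruction, not the actual py_make_1d_array):

import numpy as np

# Rough sketch of a kwargs-caching constructor (illustrative only):
# the keyword dict is rebuilt only when size or dtype change
# between calls.
_last_sig = None
_last_kwargs = None

def cached_make_1d_array(n, data, dt):
    global _last_sig, _last_kwargs
    sig = (n, dt)
    if sig != _last_sig:          # cache miss: rebuild the parameter dict
        _last_sig = sig
        _last_kwargs = dict(shape=n, dtype=dt)
    a = np.empty(**_last_kwargs)  # cheap when the kwargs are reused
    a[:] = data                   # copy the payload in
    return a

When n, data and dt differ on every call, the branch is always taken,
so the caching buys nothing over rebuilding the dict each time.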

> I'd also like to point out this: Even with the non-optimized "Python" version 
> (py_make_1d_array), creating 1000 arrays will only take 3.5 ms with your 
> timings. "Premature optimization is the root of all evil in computer 
> programming..." (C.A.R. Hoare, according to D. Knuth).

Ah yes, but I believe the Knuth quote is "We should forget about
small efficiencies, say about 97% of the time: premature optimization
is the root of all evil" [1], and I guess that we Cython users feel
ourselves to be in that remaining 3% at least some of the time.

Cheers,

Matthew

[1] http://en.wikipedia.org/wiki/C._A._R._Hoare#Quotes