Re: [Numpy-discussion] no ordinary Bessel functions?

2009-12-14 Thread Robert Kern
On Mon, Dec 14, 2009 at 22:30, Dr. Phillip M. Feldman
 wrote:
>
> When I issue the command
>
> np.lookfor('bessel')
>
> I get the following:
>
> Search results for 'bessel'
> ---
> numpy.i0
>    Modified Bessel function of the first kind, order 0.
> numpy.kaiser
>    Return the Kaiser window.
> numpy.random.vonmises
>    Draw samples from a von Mises distribution.
>
> I assume that there is an ordinary (unmodified) Bessel function in NumPy,

Nope. i0() is only in numpy to support the kaiser() window. Our policy
on special functions is to include those that are exposed by C99, with
a few exceptions for those needed to support other functions in numpy.
scipy.special is the place to go for general special-function needs.
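For reference, the ordinary (unmodified) Bessel functions of the first kind live in scipy.special as j0, j1, and the general-order jv, and they accept scalars and arrays alike. A quick sketch:

```python
import numpy as np
from scipy.special import j0, jv  # ordinary Bessel functions of the first kind

print(j0(1.0))                           # scalar input -> scalar output
print(jv(1, np.array([0.5, 1.0, 2.0])))  # order 1, array input -> array output
```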

> but have not been able to figure out how to access it. Also, I need to
> operate sometimes on scalars, and sometimes on arrays. For operations on
> scalars, are the NumPy Bessel functions significantly slower than the SciPy
> Bessel functions?

I recommend using the %timeit magic in IPython to test such things:


In [1]: from scipy import special
In [2]: %timeit numpy.i0(1.0)
1000 loops, best of 3: 921 µs per loop

In [3]: %timeit special.i0(1.0)
10 loops, best of 3: 5.6 µs per loop

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] no ordinary Bessel functions?

2009-12-14 Thread Dr. Phillip M. Feldman

When I issue the command

np.lookfor('bessel')

I get the following:

Search results for 'bessel'
---
numpy.i0
Modified Bessel function of the first kind, order 0.
numpy.kaiser
Return the Kaiser window.
numpy.random.vonmises
Draw samples from a von Mises distribution.

I assume that there is an ordinary (unmodified) Bessel function in NumPy,
but have not been able to figure out how to access it. Also, I need to
operate sometimes on scalars, and sometimes on arrays. For operations on
scalars, are the NumPy Bessel functions significantly slower than the SciPy
Bessel functions?
-- 
View this message in context: 
http://old.nabble.com/no-ordinary-Bessel-functions--tp26789343p26789343.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] Problem with set_fill_value for masked structured array

2009-12-14 Thread Pierre GM
On Dec 14, 2009, at 4:28 PM, Thomas Robitaille wrote:
> Pierre GM-2 wrote:
>> 
>> Well, that's a problem indeed, and I'd put that as a bug.
>> However, you can use that syntax instead:
>> t.fill_value['a']=10
>> or set all the fields at once:
>> t.fill_value=(10,99)
>> 
> 
> Thanks for your reply - should I submit a bug report on the numpy trac site?


Always best to do so...
Thx in advance !!!
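For readers hitting the same issue, a minimal sketch of the tuple-based workaround; the two-field dtype here is an assumption matching the (10,99) example above:

```python
import numpy as np

# Hypothetical two-field structured masked array
t = np.ma.masked_array([(1, 2.0), (3, 4.0)],
                       dtype=[('a', int), ('b', float)],
                       mask=[(False, True), (True, False)])
t.fill_value = (10, 99.0)   # set the fill value for all fields at once
print(t.fill_value)         # per-field fill values
print(t.filled())           # masked slots replaced by the matching fill value
```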


Re: [Numpy-discussion] Problem with set_fill_value for masked structured array

2009-12-14 Thread Thomas Robitaille


Pierre GM-2 wrote:
> 
> Well, that's a problem indeed, and I'd put that as a bug.
> However, you can use that syntax instead:
> t.fill_value['a']=10
> or set all the fields at once:
> t.fill_value=(10,99)
> 

Thanks for your reply - should I submit a bug report on the numpy trac site?

Thomas




Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Charles R Harris
On Mon, Dec 14, 2009 at 10:27 AM, Jasper van de Gronde <th.v.d.gro...@hccnet.nl> wrote:

> Bruce Southey wrote:
> >> So far this is the fastest code I've got:
> >> 
> >> import numpy as np
> >>
> >> nmax = 100
> >>
> >> def minover(Xi,S):
> >>     P,N = Xi.shape
> >>     SXi = Xi.copy()
> >>     for i in xrange(0,P):
> >>         SXi[i] *= S[i]
> >>     SXi2 = np.dot(SXi,SXi.T)
> >>     SXiSXi2divN = np.concatenate((SXi,SXi2),axis=1)/N
> >>     w = np.random.standard_normal((N))
> >>     E = np.dot(SXi,w)
> >>     wE = np.concatenate((w,E))
> >>     for s in xrange(0,nmax*P):
> >>         mu = wE[N:].argmin()
> >>         wE += SXiSXi2divN[mu]
> >>         # E' = dot(SXi,w')
> >>         #    = dot(SXi,w + SXi[mu,:]/N)
> >>         #    = dot(SXi,w) + dot(SXi,SXi[mu,:])/N
> >>         #    = E + dot(SXi,SXi.T)[:,mu]/N
> >>         #    = E + dot(SXi,SXi.T)[mu,:]/N
> >>     return wE[:N]
> >> 
> >>
> >> I am particularly interested in cleaning up the initialization part, but
> >> any suggestions for improving the overall performance are of course
> >> appreciated.
> >>
> >>
> > What is Xi and S?
> > I think that your SXi is just:
> > SXi=Xi*S
>
> Sort of, it's actually (Xi.T*S).T, now that I think of it... I'll see if
> that is any faster. And if there is a neater way of doing it I'd love to
> hear about it.
>
>
Xi*S[:,newaxis]
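A quick check that the broadcast form matches the transpose dance (shapes assumed: Xi is P×N, S has length P):

```python
import numpy as np

Xi = np.arange(6.0).reshape(3, 2)   # P=3, N=2 (illustrative sizes)
S = np.array([1.0, -1.0, 2.0])

a = (Xi.T * S).T              # scale row i of Xi by S[i], via transposes
b = Xi * S[:, np.newaxis]     # same scaling, via broadcasting
assert np.allclose(a, b)
```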

Chuck


[Numpy-discussion] ANN: job opening at Tradelink

2009-12-14 Thread John Hunter
We are looking to hire a quantitative researcher to help research and
develop trading ideas, and to develop and support infrastructure to
put these trading strategies into production.  We are looking for
someone who is bright and curious with a quantitative background and a
strong interest in writing good code and building systems that work.
Experience with probability, statistics and time series is required,
and experience working with real world data is a definite plus.  We do
not require a financial background, but are looking for someone with
an enthusiasm to dive into this industry and learn a lot.  We do most
of our data modeling and production software in python and R.  We have
a lot of ideas to test and hopefully put into production, and you'll
be working with a fast paced and friendly small team of traders,
programmers and quantitative researchers.


Applying:

  Please submit a resume and cover letter to qsj...@trdlnk.com.  In
  your cover letter, please address how your background, experience
  and skills will fit into the position described above.  We are
  looking for a full-time, on-site candidate only.

About Us:


  TradeLink Holdings LLC is a diversified alternative investment,
  trading and software firm. Headquartered in Chicago, TradeLink
  Holdings LLC includes a number of closely related entities. Since
  its organization in 1979, TradeLink has been actively engaged in the
  securities, futures, options, and commodities trading
  industries. Engaged in the option arbitrage business since 1983,
  TradeLink has a floor trading and/or electronic trading interface in
  commodity options, financial futures and options, and currency
  futures and options at all major U.S. exchanges. TradeLink is
  involved in various market-making programs in many different
  exchanges around the world, including over-the-counter derivatives
  markets. http://www.tradelinkllc.com


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread josef . pktd
On Mon, Dec 14, 2009 at 12:51 PM, Francesc Alted  wrote:
> On Monday 14 December 2009 18:20:32, Jasper van de Gronde wrote:
>> Francesc Alted wrote:
>> > On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
>> >> Things seem to be worse than 1.6x slower for numpy, as matlab
>> >> orders arrays by column, while numpy's order is by row.  So, if we want to
>> >> compare apples with apples:
>> >>
>> >> For Python 600x200:
>> >>    Add a row: 0.113243 (1.132425e-05 per iter)
>> >> For Matlab 600x200:
>> >>    Add a column: 0.021325 (2.132527e-006 per iter)
>> >
>> > Mmh, I've repeated this benchmark on my machine and got:
>> >
>> > In [59]: timeit E + Xi2[P/2]
>> > 10 loops, best of 3: 2.8 µs per loop
>> >
>> > that is, very similar to matlab's 2.1 µs and quite far from the 11 µs you
>> > are getting for numpy in your machine...  I'm using a Core2 @ 3 GHz.
>>
>> I'm using Python 2.6 and numpy 1.4.0rc1 on a Core2 @ 1.33 GHz
>> (notebook). I'll have a look later to see if upgrading Python to 2.6.4
>> makes a difference.
>
> I don't think so.  Your machine is slow by today's standards, so the 5x
> slowness is likely due to python/numpy overhead, but unfortunately that's
> nothing that can be solved magically by using a newer python/numpy version.

dot is slow on a single CPU (an older notebook with an older ATLAS build and
little memory; dot cannot multi-process), and it looks like adding a row is
almost entirely overhead:

for 600x200
>>> print "Dot product: %f" % dotProduct.timeit(N)
Dot product: 3.124008
>>> print "Add a row: %f" % additionRow.timeit(N)
Add a row: 0.080612
>>> print "Add a column: %f" % additionCol.timeit(N)
Add a column: 0.113229

for 60x20
>>> print "Dot product: %f" % dotProduct.timeit(N)
Dot product: 0.070933
>>> print "Add a row: %f" % additionRow.timeit(N)
Add a row: 0.058492
>>> print "Add a column: %f" % additionCol.timeit(N)
Add a column: 0.061401

for 600x2000 (dot may induce swapping to disk)
>>> print "Dot product: %f" % dotProduct.timeit(N)
Dot product: 43.114585
>>> print "Add a row: %f" % additionRow.timeit(N)
Add a row: 0.085261
>>> print "Add a column: %f" % additionCol.timeit(N)
Add a column: 0.122754
>>> print "Dot product: %f" % dotProduct.timeit(N)
Dot product: 35.232084
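For context, the benchmark script itself was not posted; the timers presumably look something like this sketch (names match the output above, array sizes and contents are assumptions):

```python
import timeit

# Hypothetical reconstruction of the three timers used in the thread
setup = """
import numpy as np
Xi2 = np.random.rand(600, 200)
w = np.random.rand(200)
Erow = np.random.rand(200)   # added to a row of Xi2
Ecol = np.random.rand(600)   # added to a column of Xi2
"""
dotProduct  = timeit.Timer("np.dot(Xi2, w)", setup)
additionRow = timeit.Timer("Erow + Xi2[300]", setup)
additionCol = timeit.Timer("Ecol + Xi2[:, 100]", setup)

N = 1000
print("Dot product: %f" % dotProduct.timeit(N))
print("Add a row: %f" % additionRow.timeit(N))
print("Add a column: %f" % additionCol.timeit(N))
```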

Josef

> --
> Francesc Alted


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Francesc Alted
On Monday 14 December 2009 18:20:32, Jasper van de Gronde wrote:
> Francesc Alted wrote:
> > On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
> >> Things seem to be worse than 1.6x slower for numpy, as matlab
> >> orders arrays by column, while numpy's order is by row.  So, if we want to
> >> compare apples with apples:
> >>
> >> For Python 600x200:
> >>Add a row: 0.113243 (1.132425e-05 per iter)
> >> For Matlab 600x200:
> >>Add a column: 0.021325 (2.132527e-006 per iter)
> >
> > Mmh, I've repeated this benchmark on my machine and got:
> >
> > In [59]: timeit E + Xi2[P/2]
> > 10 loops, best of 3: 2.8 µs per loop
> >
> > that is, very similar to matlab's 2.1 µs and quite far from the 11 µs you
> > are getting for numpy in your machine...  I'm using a Core2 @ 3 GHz.
> 
> I'm using Python 2.6 and numpy 1.4.0rc1 on a Core2 @ 1.33 GHz
> (notebook). I'll have a look later to see if upgrading Python to 2.6.4
> makes a difference.

I don't think so.  Your machine is slow by today's standards, so the 5x
slowness is likely due to python/numpy overhead, but unfortunately that's
nothing that can be solved magically by using a newer python/numpy version.

-- 
Francesc Alted


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Jasper van de Gronde
Bruce Southey wrote:
>> So far this is the fastest code I've got:
>> 
>> import numpy as np
>>
>> nmax = 100
>>
>> def minover(Xi,S):
>>     P,N = Xi.shape
>>     SXi = Xi.copy()
>>     for i in xrange(0,P):
>>         SXi[i] *= S[i]
>>     SXi2 = np.dot(SXi,SXi.T)
>>     SXiSXi2divN = np.concatenate((SXi,SXi2),axis=1)/N
>>     w = np.random.standard_normal((N))
>>     E = np.dot(SXi,w)
>>     wE = np.concatenate((w,E))
>>     for s in xrange(0,nmax*P):
>>         mu = wE[N:].argmin()
>>         wE += SXiSXi2divN[mu]
>>         # E' = dot(SXi,w')
>>         #    = dot(SXi,w + SXi[mu,:]/N)
>>         #    = dot(SXi,w) + dot(SXi,SXi[mu,:])/N
>>         #    = E + dot(SXi,SXi.T)[:,mu]/N
>>         #    = E + dot(SXi,SXi.T)[mu,:]/N
>>     return wE[:N]
>> 
>>
>> I am particularly interested in cleaning up the initialization part, but
>> any suggestions for improving the overall performance are of course
>> appreciated.
>>
>>
> What is Xi and S?
> I think that your SXi is just:
> SXi=Xi*S

Sort of, it's actually (Xi.T*S).T, now that I think of it... I'll see if 
that is any faster. And if there is a neater way of doing it I'd love to 
hear about it.

> But really I do not understand what you are actually trying to do. As 
> previously indicated, some times simplifying an algorithm can make it 
> computationally slower.

It was hardly simplified; this was the original function body:
     P,N = Xi.shape
     SXi = Xi.copy()
     for i in xrange(0,P):
         SXi[i] *= S[i]
     w = np.random.standard_normal((N))
     for s in xrange(0,nmax*P):
         E = np.dot(SXi,w)
         mu = E.argmin()
         w += SXi[mu]/N
     return w

As you can see, it's mostly basic linear algebra (which reduces the time
complexity from about O(n^3) to O(n^2)), plus some less elegant tweaks to
avoid the high Python overhead.
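The key tweak is the incremental update of E; a minimal check of the identity worked out in the code comments (shapes here are illustrative):

```python
import numpy as np

P, N = 6, 4
SXi = np.random.rand(P, N)
w = np.random.rand(N)
mu = 2

# Updating w by SXi[mu]/N updates E = dot(SXi, w) by row mu of dot(SXi, SXi.T)/N,
# so the O(P*N) dot per iteration becomes an O(P) vector addition.
E_direct = np.dot(SXi, w + SXi[mu] / N)
E_incr = np.dot(SXi, w) + np.dot(SXi, SXi.T)[mu] / N
assert np.allclose(E_direct, E_incr)
```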


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Jasper van de Gronde
Francesc Alted wrote:
> On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
>> Things seem to be worse than 1.6x slower for numpy, as matlab
>> orders arrays by column, while numpy's order is by row.  So, if we want to
>> compare apples with apples:
>>
>> For Python 600x200:
>>Add a row: 0.113243 (1.132425e-05 per iter)
>> For Matlab 600x200:
>>Add a column: 0.021325 (2.132527e-006 per iter)
> 
> Mmh, I've repeated this benchmark on my machine and got:
> 
> In [59]: timeit E + Xi2[P/2]
> 10 loops, best of 3: 2.8 µs per loop
> 
> that is, very similar to matlab's 2.1 µs and quite far from the 11 µs you are 
> getting for numpy in your machine...  I'm using a Core2 @ 3 GHz.

I'm using Python 2.6 and numpy 1.4.0rc1 on a Core2 @ 1.33 GHz 
(notebook). I'll have a look later to see if upgrading Python to 2.6.4 
makes a difference.



Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Francesc Alted
On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
> Things seem to be worse than 1.6x slower for numpy, as matlab
> orders arrays by column, while numpy's order is by row.  So, if we want to
> compare apples with apples:
> 
> For Python 600x200:
>Add a row: 0.113243 (1.132425e-05 per iter)
> For Matlab 600x200:
>Add a column: 0.021325 (2.132527e-006 per iter)

Mmh, I've repeated this benchmark on my machine and got:

In [59]: timeit E + Xi2[P/2]
10 loops, best of 3: 2.8 µs per loop

that is, very similar to matlab's 2.1 µs and quite far from the 11 µs you are 
getting for numpy in your machine...  I'm using a Core2 @ 3 GHz.

-- 
Francesc Alted


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Francesc Alted
On Saturday 12 December 2009 12:59:16, Jasper van de Gronde wrote:
> Francesc Alted wrote:
> > ...
> > Yeah, I think taking slices here is taking quite a lot of time:
> >
> > In [58]: timeit E + Xi2[P/2,:]
> > 10 loops, best of 3: 3.95 µs per loop
> >
> > In [59]: timeit E + Xi2[P/2]
> > 10 loops, best of 3: 2.17 µs per loop
> >
> > don't know why the additional ',:' in the slice is taking so much time,
> > but my guess is that passing & analyzing the second argument
> > (slice(None,None,None)) could be responsible for the slowdown (though that
> > seems like too much time).  Mmh, perhaps it would be worth studying this
> > more carefully so that an optimization could be done in NumPy.
> 
> This is indeed interesting! And very nice that this actually works the
> way you'd expect it to. I guess I've just worked too long with Matlab :)
> 
> >> I think the lesson mostly should be that with so little data,
> >> benchmarking becomes a very difficult art.
> >
> > Well, I think it is not difficult, it is just that you are perhaps
> > benchmarking Python/NumPy machinery instead ;-)  I'm curious whether
> > Matlab can do slicing much more faster than NumPy.  Jasper?
> 
> I had a look, these are the timings for Python for 60x20:
>Dot product: 0.051165 (5.116467e-06 per iter)
>Add a row: 0.092849 (9.284860e-06 per iter)
>Add a column: 0.082523 (8.252348e-06 per iter)
> For Matlab 60x20:
>Dot product: 0.029927 (2.992664e-006 per iter)
>Add a row: 0.019664 (1.966444e-006 per iter)
>Add a column: 0.008384 (8.384376e-007 per iter)
> For Python 600x200:
>Dot product: 1.917235 (1.917235e-04 per iter)
>Add a row: 0.113243 (1.132425e-05 per iter)
>Add a column: 0.162740 (1.627397e-05 per iter)
> For Matlab 600x200:
>Dot product: 1.282778 (1.282778e-004 per iter)
>Add a row: 0.107252 (1.072525e-005 per iter)
>Add a column: 0.021325 (2.132527e-006 per iter)
> 
> If I fit a line through these two data points (60 and 600 rows), I get
> the following equations:
>Python, AR: 3.8e-5 * n + 0.091
>Matlab, AC: 2.4e-5 * n + 0.0069
> This would suggest that Matlab performs the vector addition about 1.6
> times faster and has a 13 times smaller constant cost!

Things seem to be worse than 1.6x slower for numpy, as matlab
orders arrays by column, while numpy's order is by row.  So, if we want to
compare apples with apples:

For Python 600x200:
   Add a row: 0.113243 (1.132425e-05 per iter)
For Matlab 600x200:
   Add a column: 0.021325 (2.132527e-006 per iter)

which makes numpy 5x slower than matlab.  Hmm, I definitely think that numpy 
could do better here :-/
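The row/column asymmetry follows from memory layout: numpy arrays are row-major (C order) by default, so a row is a contiguous block while a column is strided. A quick illustration:

```python
import numpy as np

Xi2 = np.random.rand(600, 200)   # C (row-major) order by default
row = Xi2[300]                   # view over 200 adjacent float64s
col = Xi2[:, 100]                # view that jumps a whole row per element

assert row.flags['C_CONTIGUOUS']
assert not col.flags['C_CONTIGUOUS']
assert Xi2.strides == (200 * 8, 8)   # bytes between rows vs. between columns
```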

However, caveat emptor: when you do timings, you normally put your code
snippets in loops, and after the first iteration the dataset (if small
enough, as in your examples above) lives in the CPU caches.  But this is not
*usually* the case, because you first have to transmit your data to the CPU.
This transmission process is normally the main bottleneck when doing BLAS-1
level operations (i.e. vector-vector).  This is to say that, in real-life
calculations, your numpy code will work almost as fast as matlab.  So, my
advice is: don't be too worried about small-dataset speed in small loops, and
concentrate your optimization efforts on making your *real* code faster.

-- 
Francesc Alted


Re: [Numpy-discussion] Slicing slower than matrix multiplication?

2009-12-14 Thread Bruce Southey
On 12/13/2009 05:13 AM, Jasper van de Gronde wrote:
> Bruce Southey wrote:
>
>> Really I would suggest asking the list for the real problem because it
>> is often amazing what solutions have been given.
>>  
> So far this is the fastest code I've got:
> 
> import numpy as np
>
> nmax = 100
>
> def minover(Xi,S):
>     P,N = Xi.shape
>     SXi = Xi.copy()
>     for i in xrange(0,P):
>         SXi[i] *= S[i]
>     SXi2 = np.dot(SXi,SXi.T)
>     SXiSXi2divN = np.concatenate((SXi,SXi2),axis=1)/N
>     w = np.random.standard_normal((N))
>     E = np.dot(SXi,w)
>     wE = np.concatenate((w,E))
>     for s in xrange(0,nmax*P):
>         mu = wE[N:].argmin()
>         wE += SXiSXi2divN[mu]
>         # E' = dot(SXi,w')
>         #    = dot(SXi,w + SXi[mu,:]/N)
>         #    = dot(SXi,w) + dot(SXi,SXi[mu,:])/N
>         #    = E + dot(SXi,SXi.T)[:,mu]/N
>         #    = E + dot(SXi,SXi.T)[mu,:]/N
>     return wE[:N]
> 
>
> I am particularly interested in cleaning up the initialization part, but
> any suggestions for improving the overall performance are of course
> appreciated.
>
>
What is Xi and S?
I think that your SXi is just:
SXi=Xi*S

But really I do not understand what you are actually trying to do. As 
previously indicated, some times simplifying an algorithm can make it 
computationally slower.

Bruce






Re: [Numpy-discussion] Import error in builds of 7726

2009-12-14 Thread Chris
David Cournapeau  gmail.com> writes:

> 
> Could you give us the generated config.h (somewhere in
> build/src.*/numpy/core/), just in case ?
> 

Here it is:

http://files.me.com/fonnesbeck/d9eyxi

Thanks again.
cf






Re: [Numpy-discussion] Does Numpy support CGI-scripting?`

2009-12-14 Thread David Warde-Farley
On 14-Dec-09, at 2:31 AM, yogesh karpate wrote:

> Does Numpy Support CGI scripting? DO scipy and matplotlib also  
> support?

I'm not sure what you're asking exactly.

If the question is "can you create CGI scripts that use NumPy/SciPy/ 
matplotlib" then the answer is yes. You just need to look up how to  
create CGI scripts in Python, and then import the relevant modules  
from your script. Provided that numpy/scipy/matplotlib are installed  
on the machine executing the CGI script, it should work just fine.

There is an entry in the matplotlib FAQ that is relevant: 
http://tinyurl.com/ya6wule
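A minimal sketch of such a script in modern Python (the query parameter `n` and the computation are purely illustrative; matplotlib is left out to keep it short):

```python
#!/usr/bin/env python
# Minimal CGI script using numpy: parse the query string, compute, reply.
import os
from urllib.parse import parse_qs

import numpy as np

def response(n):
    # Illustrative computation: mean of the first n squares
    return "mean=%g" % np.mean(np.arange(n, dtype=float) ** 2)

if __name__ == "__main__":
    qs = parse_qs(os.environ.get("QUERY_STRING", ""))
    n = int(qs.get("n", ["10"])[0])
    # A CGI response is headers, a blank line, then the body
    print("Content-Type: text/plain")
    print()
    print(response(n))
```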

David