Re: [Numpy-discussion] building NumPy with Intel CC MKL (solved!)
On 25/01/07, David Cournapeau [EMAIL PROTECTED] wrote:
> rex wrote:
>> I think it should do much better. A few minutes ago I compiled a C math benchmark with: icc -O3 -parallel -xT and it ran 2.8x as fast as it did when compiled with gcc -O3. In fact, it ran at a little over a gigaflop, which is a higher speed than anyone has reported for this benchmark.
>
> Without seeing the benchmark, it would be quite hard to know what's happening. Also, when you are using numpy, you are using python, and for

Perhaps compiling python itself with icc might give a useful speedup. Apparently somebody managed this for python 2.3 in 2003:
http://mail.python.org/pipermail/c++-sig/2003-October/005824.html

--George Nurser.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] setmember1d memory leak?
On 1/24/07, Charles R Harris [EMAIL PROTECTED] wrote:
> On 1/24/07, Robert Cimrman [EMAIL PROTECTED] wrote:
>> Robert Kern wrote:
>>> Robert Cimrman wrote:
>>>> Or you could just call unique1d prior to your call to setmember1d - it was meant to be used that way... you would not lose much speed that way, IMHO.
>>> But that doesn't do what they want. They want a function that gives the mask against their original array of the elements that are in the other array. The result of setmember1d(unique1d(ar1), unique1d(ar2)) <snip>

For instance

    In [7]: def countmembers(a1, a2) :
       ...:     a = sort(a2)
       ...:     il = a.searchsorted(a1, side='l')
       ...:     ir = a.searchsorted(a1, side='r')
       ...:     return ir - il
       ...:

    In [8]: a2 = random.randint(0,10,(100,))
    In [9]: a1 = arange(11)
    In [11]: a2 = random.randint(0,5,(100,))
    In [12]: a1 = arange(10)
    In [13]: countmembers(a1,a2)
    Out[13]: array([16, 28, 16, 25, 15, 0, 0, 0, 0, 0])

The subtraction can be replaced by != to get a boolean mask.

Chuck
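A self-contained sketch of the boolean-mask variant Chuck mentions (the name ismember and the demo arrays are mine, not from the thread); later NumPy releases ship essentially this idea as np.isin:

```python
import numpy as np

def ismember(a1, a2):
    # Membership mask via two binary searches on a sorted copy of a2:
    # an element of a1 occurs in a2 iff its left and right insertion
    # points differ (the "ir - il" count from the thread, compared != 0).
    a = np.sort(a2)
    il = a.searchsorted(a1, side='left')
    ir = a.searchsorted(a1, side='right')
    return ir != il

a1 = np.array([1, 1, 3, 5])        # note the duplicate in a1
a2 = np.array([2, 3, 3])
print(ismember(a1, a2))            # [False False  True False]
print(np.isin(a1, a2))             # same mask with the modern built-in
```

Because each element of a1 is looked up independently, this approach is indifferent to duplicates in a1 - the case that trips up setmember1d elsewhere in this thread.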
Re: [Numpy-discussion] setmember1d memory leak?
Charles R Harris wrote:
> On 1/24/07, Robert Cimrman [EMAIL PROTECTED] wrote:
>> Robert Kern wrote:
>>> Robert Cimrman wrote:
>>>> Or you could just call unique1d prior to your call to setmember1d - it was meant to be used that way... you would not lose much speed that way, IMHO.
>>> But that doesn't do what they want. They want a function that gives the mask against their original array of the elements that are in the other array. The result of setmember1d(unique1d(ar1), unique1d(ar2)) <snip>
>
> For instance
>
>     In [7]: def countmembers(a1, a2) :
>        ...:     a = sort(a2)
>        ...:     il = a.searchsorted(a1, side='l')
>        ...:     ir = a.searchsorted(a1, side='r')
>        ...:     return ir - il
>        ...:
>
>     In [8]: a2 = random.randint(0,10,(100,))
>     In [9]: a1 = arange(11)
>     In [11]: a2 = random.randint(0,5,(100,))
>     In [12]: a1 = arange(10)
>     In [13]: countmembers(a1,a2)
>     Out[13]: array([16, 28, 16, 25, 15, 0, 0, 0, 0, 0])
>
> The subtraction can be replaced by != to get a boolean mask.

It looks good! Isn't it faster than setmember1d for unique input arrays? I do not like setmember1d much (it is long, unlike the other functions in arraysetops, and looks clumsy to me now, and I do not understand it anymore...), so feel free to replace it.

BTW. setmember1d gives me the same mask as countmembers for several non-unique inputs I tried...

r.
[Numpy-discussion] A bug in scipy.linalg.lu_factor?
Hello,

I am trying to write some unit tests for my new Automatic matrix code and I think I bumped into a bug in scipy.linalg.lu_factor. If you give a matrix to it, it doesn't honor the overwrite_a option:

    In [1]: import numpy as num
    In [2]: M = num.mat(num.random.rand(2,2))
    In [3]: print M
    [[ 0.33267781  0.2100424 ]
     [ 0.61852696  0.32244386]]
    In [4]: import scipy.linalg as la
    In [5]: LU, P = la.lu_factor(M, overwrite_a=0); print M
    [[ 0.33267781  0.63136882]
     [ 0.61852696 -0.06807478]]
    In [6]: LU, P = la.lu_factor(M, overwrite_a=0); print M
    [[ 0.63136882  0.52691517]
     [-0.06807478  0.6543966 ]]

As you can see, the matrix is changed by calling the function. Can anyone confirm this? I am running numpy 1.0.1 and scipy 0.5.2.

Best,
Paulo
Re: [Numpy-discussion] A bug in scipy.linalg.lu_factor?
On Thu, 25 Jan 2007 16:06:23 -0200 Paulo J. S. Silva [EMAIL PROTECTED] wrote:
> Hello,
>
> I am trying to write some unit tests for my new Automatic matrix code and I think I bumped into a bug in scipy.linalg.lu_factor. If you give a matrix to it, it doesn't honor the overwrite_a option:
>
>     In [1]: import numpy as num
>     In [2]: M = num.mat(num.random.rand(2,2))
>     In [3]: print M
>     [[ 0.33267781  0.2100424 ]
>      [ 0.61852696  0.32244386]]
>     In [4]: import scipy.linalg as la
>     In [5]: LU, P = la.lu_factor(M, overwrite_a=0); print M
>     [[ 0.33267781  0.63136882]
>      [ 0.61852696 -0.06807478]]
>     In [6]: LU, P = la.lu_factor(M, overwrite_a=0); print M
>     [[ 0.63136882  0.52691517]
>      [-0.06807478  0.6543966 ]]
>
> As you can see, the matrix is changed by calling the function. Can anyone confirm this? I am running numpy 1.0.1 and scipy 0.5.2.
>
> Best,
> Paulo

Hi Paulo,

I can confirm this bug.

1.0.2.dev3511
0.5.3.dev2560

Nils
Re: [Numpy-discussion] A bug in scipy.linalg.lu_factor?
On Thu, 2007-01-25 at 19:46 +0100, Nils Wagner wrote:
> It works if you use M=num.random.rand(2,2)
>
> Nils

Yes, it works for arrays but not for matrices. I thought that scipy.linalg functions were supposed to work with matrices.

Paulo
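Until the bug itself is fixed, the defensive pattern is to hand the routine a throwaway copy rather than the original. A numpy-only sketch (an in-place zeroing stands in for the factorization here, so the example runs without scipy; in practice you would call la.lu_factor(num.array(M, copy=True))):

```python
import numpy as np

M = np.array([[0.33267781, 0.2100424],
              [0.61852696, 0.32244386]])

# Give the (possibly overwriting) routine an explicit copy instead of M.
work = np.array(M, copy=True)
work[:] = 0.0                      # stand-in for an in-place factorization

# The original survives whatever the callee did with its overwrite flag.
print(np.allclose(M, [[0.33267781, 0.2100424],
                      [0.61852696, 0.32244386]]))   # True
```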
Re: [Numpy-discussion] setmember1d memory leak?
> > For instance
> >
> >     In [7]: def countmembers(a1, a2) :
> >        ...:     a = sort(a2)
> >        ...:     il = a.searchsorted(a1, side='l')
> >        ...:     ir = a.searchsorted(a1, side='r')
> >        ...:     return ir - il
> >        ...:
> >
> >     In [8]: a2 = random.randint(0,10,(100,))
> >     In [9]: a1 = arange(11)
> >     In [11]: a2 = random.randint(0,5,(100,))
> >     In [12]: a1 = arange(10)
> >     In [13]: countmembers(a1,a2)
> >     Out[13]: array([16, 28, 16, 25, 15, 0, 0, 0, 0, 0])
> >
> > The subtraction can be replaced by != to get a boolean mask.
>
> It looks good! Isn't it faster than setmember1d for unique input arrays? I do not like setmember1d much (it is long, unlike the other functions in arraysetops, and looks clumsy to me now, and I do not understand it anymore...), so feel free to replace it. BTW. setmember1d gives me the same mask as countmembers for several non-unique inputs I tried...
>
> r.

Try this, then:

    >>> countmembers(N.array([1,1]), N.array([2]))
    array([0, 0])
    >>> N.setmember1d(N.array([1,1]), N.array([2]))
    array([ True, False], dtype=bool)

setmember1d really needs the first array to be unique. I thought about it quite a bit and tried to understand the code (which is no small feat, and I don't claim I have succeeded). As far as I can tell, setmember1d gets it right for the duplicate element with the highest index; all other duplicates are found to be in the second array, independent of whether or not that's actually true. I found it easier to state my problem in terms of unique arrays rather than trying to figure out a general solution, but countmembers sure is nice.

Jan

-- Forwarded message --
From: Robert Cimrman [EMAIL PROTECTED]
To: Discussion of Numerical Python numpy-discussion@scipy.org
Date: Thu, 25 Jan 2007 12:35:10 +0100
Subject: Re: [Numpy-discussion] setmember1d memory leak?

Robert Cimrman wrote:
> Charles R Harris wrote:
>>     In [7]: def countmembers(a1, a2) :
>>        ...:     a = sort(a2)
>>        ...:     il = a.searchsorted(a1, side='l')
>>        ...:     ir = a.searchsorted(a1, side='r')
>>        ...:     return ir - il
>>        ...:
>>
>> The subtraction can be replaced by != to get a boolean mask.
>
> It looks good! Isn't it faster than setmember1d for unique input arrays? I do not like setmember1d much (it is long, unlike the other functions in arraysetops, and looks clumsy to me now, and I do not understand it anymore...), so feel free to replace it. BTW. setmember1d gives me the same mask as countmembers for several non-unique inputs I tried...

But still a function like 'findsorted' returning a bool mask would be handy - one searchsorted-like call could be saved in setmember1d.

cheers,
r.

-- Forwarded message --
From: rex [EMAIL PROTECTED]
To: Discussion of Numerical Python numpy-discussion@scipy.org
Date: Thu, 25 Jan 2007 03:50:24 -0800
Subject: [Numpy-discussion] Compiling Python with icc

George Nurser [EMAIL PROTECTED] [2007-01-25 02:05]:
> Perhaps compiling python itself with icc might give a useful speedup. Apparently somebody managed this for python 2.3 in 2003:
> http://mail.python.org/pipermail/c++-sig/2003-October/005824.html

Hello George,

I saw that post yesterday, and just got around to trying it. It works.

    ./configure CC=icc --prefix=/usr/local

In addition to commenting out

    #BASECFLAGS= -OPT:Olimit=0

I added -xT -parallel to the OPT= line for my Core 2 Duo CPU. The usual make, make install worked, and pybench now runs in 3.15 seconds vs 4.7 seconds with Python 2.5 compiled with gcc. That's a 49% speed increase.
http://svn.python.org/projects/external/pybench-2.0/

And, if psyco is used, pybench runs in 1.6 seconds for one iteration and then crashes. Psyco + icc results in a ~300% speed increase. Pybench needs to be updated for 1+ gigaflop systems.
http://psyco.sourceforge.net/

-rex
--
I have always wished that my computer would be as easy to use as my telephone. My wish has come true. I no longer know how to use my telephone. --Bjarne Stroustrup (originator of the C++ programming language)
Re: [Numpy-discussion] A bug in scipy.linalg.lu_factor?
On Thu, 25 Jan 2007 16:56:54 -0200 Paulo J. S. Silva [EMAIL PROTECTED] wrote:
> On Thu, 2007-01-25 at 19:46 +0100, Nils Wagner wrote:
>> It works if you use M=num.random.rand(2,2)
>>
>> Nils
>
> Yes, it works for arrays but not for matrices. I thought that scipy.linalg functions were supposed to work with matrices.
>
> Paulo

linalg.solve works with matrix input

    In [1]: import numpy as num
    In [2]: import scipy.linalg as la
    In [3]: M = num.mat(num.random.rand(2,2))
    In [4]: b = num.random.rand(2)
    In [5]: x = la.solve(M,b)
    In [6]: print M*x
    [[ 0.29508067  0.17152755]]
    In [7]: b
    Out[7]: array([ 0.29508067,  0.17152755])

Nils
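For comparison, the same round-trip check with plain arrays and numpy's own solver - a sketch, with the seeded generator being my addition rather than anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)     # seeded so the sketch is reproducible
M = rng.random((2, 2))
b = rng.random(2)

x = np.linalg.solve(M, b)          # solve M @ x = b
print(np.allclose(M @ x, b))       # True: residual vanishes to rounding
```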
Re: [Numpy-discussion] Build numpy without support of 'long double' on OS-X
On 1/25/07, Robert Kern [EMAIL PROTECTED] wrote:
> Sebastian Haase wrote:
>> Hi!
>> When I try running my code on panther (10.3) with a numpy that was built on tiger (10.4) it can't load numpy because of missing symbols in numpy/core/umath.so. The symbols are
>>     _acoshl$LDBL128
>>     _acosl$LDBL128
>>     _asinhl$LDBL128
>> (see my post from 5 oct 2006: http://permalink.gmane.org/gmane.comp.python.numeric.general/8521 )
>> I traced the problem to the libmx system library. Since I really don't need long double (128 bit) operations - I was wondering if there is a flag to just turn them off?
>
> Generally speaking, you need to build binaries on the lowest-versioned OS X that you intend to run on.

The problem with building on 10.3 is that it generally comes only with gcc 3.3. I remember that some things require gcc4 - right?

I just found this
http://developer.apple.com/documentation/DeveloperTools/Conceptual/CppRuntimeEnv/Articles/LibCPPDeployment.html
which states: Support for the 128-bit long double type was not introduced until Mac OS X 10.4.

The easiest would be to be able to disable the long double functions.

-Sebastian
Re: [Numpy-discussion] Build numpy without support of 'long double' on OS-X
>> Generally speaking, you need to build binaries on the lowest-versioned OS X that you intend to run on.
>
> The problem with building on 10.3 is that it generally comes only with gcc 3.3. I remember that some things require gcc4 - right?

I think that might only bite you if you want to compile universal binaries; though I'm not sure if there are any other problems with gcc 3.3, I'm pretty sure that's the big one. Of course, that really shouldn't matter if you're just compiling it for yourself for just that cpu.

-steve
Re: [Numpy-discussion] random permutation
BTW, this test doesn't work on python 2.3 because sorted does not exist there.

Ted

On Jan 13, 2007, at 15:15, Stefan van der Walt wrote:
> On Sat, Jan 13, 2007 at 10:01:59AM -0800, Keith Goodman wrote:
>> On 1/11/07, Robert Kern [EMAIL PROTECTED] wrote:
>>> Keith Goodman wrote:
>>>> Why is the first element of the permutation always the same? Am I using random.permutation in the right way?
>>>>     M.__version__
>>>>     '1.0rc1'
>>> This has been fixed in more recent versions.
>>> http://projects.scipy.org/scipy/numpy/ticket/374
>> I don't see any unit tests for numpy.random. I guess randomness is hard to test.
>
> Every time we fix a bug, we add a corresponding test to make sure that it doesn't pop up again. In this case, take a look in numpy/core/tests/test_regression.py:
>
>     def check_random_shuffle(self, level=rlevel):
>         """Ticket #374"""
>         a = N.arange(5).reshape((5,1))
>         b = a.copy()
>         N.random.shuffle(b)
>         assert_equal(sorted(b), a)
>
> Cheers
> Stéfan
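The usual compatibility shim for the missing builtin looks like this - a sketch of the general pattern, not the fix that actually went into numpy:

```python
# Python 2.3 lacked the sorted() builtin (added in 2.4); code that needs
# it can fall back to an explicit copy-and-sort, which is what sorted()
# does: it returns a new sorted list and leaves the input untouched.
try:
    sorted
except NameError:
    def sorted(iterable):
        result = list(iterable)   # copy so the caller's sequence survives
        result.sort()
        return result

print(sorted([3, 1, 2]))          # [1, 2, 3]
```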
Re: [Numpy-discussion] random permutation
Ted Horst wrote:
> BTW, this test doesn't work on python 2.3 because sorted does not exist there.

Fixed, thank you.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] Build numpy without support of 'long double' on OS-X
Sebastian Haase wrote:
>> The problem with building on 10.3 is that it generally comes only with gcc 3.3.
> I remember that some things require gcc4 - right?

No, you're right. I thought this might have been available with 10.3.9 (the only version in the 10.3 series that can run Universal binaries anyways), but this is not true.

> I just found this
> http://developer.apple.com/documentation/DeveloperTools/Conceptual/CppRuntimeEnv/Articles/LibCPPDeployment.html
> which states: Support for the 128-bit long double type was not introduced until Mac OS X 10.4.
>
> The easiest would be to be able to disable the long double functions.

I'll accept a patch if you can provide one.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] Build numpy without support of 'long double' on OS-X
>> Of course, that really shouldn't matter if you're just compiling it for yourself for just that cpu.
>
> On the contrary! I'm trying to provide a precompiled build of numpy together with a couple of handy functions and classes that I made myself, to establish Python as a development platform in multi-dimensional image analysis. I call it Priithon and it is built around wxPython, PyOpenGL, SWIG and (of course) numpy [numarray until recently]. I have been working on this project for a couple years as part of my PhD:
> http://www.ucsf.edu/sedat/Priithon/PriithonHandbook.html
> And I'm trying to have all of Linux, Windows, OS-X (PPC) and OS-X (intel) supported.

Wow... cool! Kudos to you for that. It seems like I led you astray with my advice/suggestion anyhow. Good luck with it.

-steve
[Numpy-discussion] inconsistent behaviour of mean, average and median
Hi,

I noticed the following behaviour for empty lists:

    In [4]: N.median([])
    ---------------------------------------------------------------------------
    exceptions.IndexError                     Traceback (most recent call last)
    /home/stefan/<ipython console>
    /home/stefan/lib/python2.4/site-packages/numpy/lib/function_base.py in median(m)
       1081         return sorted[index]
       1082     else:
    -> 1083         return (sorted[index-1]+sorted[index])/2.0
       1084
       1085 def trapz(y, x=None, dx=1.0, axis=-1):
    IndexError: index out of bounds

    In [5]: N.mean([])
    Out[5]: nan

    In [6]: N.average([])
    ---------------------------------------------------------------------------
    exceptions.ZeroDivisionError              Traceback (most recent call last)
    /home/stefan/<ipython console>
    /home/stefan/lib/python2.4/site-packages/numpy/lib/function_base.py in average(a, axis, weights, returned)
        294     if not isinstance(d, ndarray):
        295         if d == 0.0:
    --> 296             raise ZeroDivisionError, 'zero denominator in average()'
        297     if returned:
        298         return n/d, d
    ZeroDivisionError: zero denominator in average()

Which is the ideal response -- NaN or an exception, and if an exception, of which kind?

Cheers
Stéfan
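Whatever the library settles on, callers can get consistent behaviour today with a small wrapper that short-circuits the empty case before dispatching. A sketch - the name safe_median and the NaN default are my choices, not numpy's:

```python
import numpy as np

def safe_median(a, default=float('nan')):
    # Return `default` for empty input instead of letting the underlying
    # routine raise or warn; otherwise defer to np.median.
    a = np.asarray(a, dtype=float)
    if a.size == 0:
        return default
    return float(np.median(a))

print(safe_median([]))          # nan
print(safe_median([1, 2, 3]))   # 2.0
```

The same guard works verbatim around np.mean or np.average, which is exactly the kind of uniformity the three functions currently lack.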