Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
> Matthew Brett wrote:
> > Hi,
> >
> > We are using numpy.distutils, and have run into this odd behavior in
> > windows:
>
> Short answer:
>
> I am afraid it cannot work as you want. Basically, when you pass an
> option to build_ext, it does not affect other distutils commands, which
> are run before build_ext, and need the compiler (config in this case I
> think). So you need to pass the -c option to every command affected by
> the compiler (build_ext, build_clib and config IIRC).
>
> cheers,
>
> David

I'm having the same problems! Running Windows XP, Python 2.5.4 (r254:67916,
Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)].

In my distutils.cfg I've got:

[build]
compiler = mingw32

[config]
compiler = mingw32

and previously a python setup.py bdist_wininst would create an .exe
installer; now I get the following error message:

error: Python was built with Visual Studio 2003;
extensions must be built with a compiler that can generate compatible
binaries. Visual Studio 2003 was not found on this system. If you have
Cygwin installed, you can try compiling with MingW32, by passing
"-c mingw32" to setup.py.

python setup.py build build_ext --compiler=mingw32 appeared to work (barring
a warning: numpy\core\setup_common.py:81: MismatchCAPIWarning), but then how
do I create a .exe installer afterwards? python setup.py bdist_wininst fails
with the same error message as before, and python setup.py bdist_wininst
--compiler=mingw32 fails with the message:

error: option --compiler not recognized

Is it still possible to create a .exe installer on Windows, and if so, what
are the commands we need to make it work? Thanks in advance for any
help/workarounds; it would be much appreciated!

Regards,
Dave

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
Hi Chuck,

Charles R Harris wrote:
>> To make purely computational code available to third parties, two things
>> are needed:
>>
>>  1. the code itself needs to make the split explicit.
>>  2. there needs to be support so that reusing those functionalities is as
>>     painless as possible, from a build point of view (Note: this is
>>     almost done in the upcoming numpy 1.4.0 as long as static linking
>>     is OK).
>
> Ah, it itches. This is certainly a worthy goal, but are there third
> parties who have expressed an interest in this? I mean, besides trying
> to avoid duplicate bits of code in Scipy.

Actually, I think that's what interests people around the Nipy project the
most. In particular, they need to reuse lapack and random quite a bit, and
for now, they just duplicate the code, with all the problems it brings
(duplication, lack of reliability as far as cross platform is concerned,
etc...).

>> Splitting the code
>> ------------------
>>
>> The amount of work is directly proportional to the amount of functions
>> to be made available. The most obvious candidates are:
>>
>>  1. C99 math functions: a lot of this has already been done. In
>>     particular math constants, and special values support is already
>>     implemented. Almost every real function in numpy has a portable
>>     npy_ implementation in C.
>>  2. C99-like complex support: this naturally extends the previous
>>     series. The main difficulty is to support platforms without C99
>>     complex support, and the corresponding C99 complex functions.
>>  3. FFT code: there is no support to reuse FFT at the C level at the
>>     moment.
>>  4. Random: there is no support either
>>  5. Linalg: idem.
>
> This is good. I think it should go along with code reorganization. The
> files are now broken up but I am not convinced that everything is yet
> where it should be.

Oh, definitely agreed. Another thing I would like in that spirit is to
split the numpy headers like in Python itself: ndarrayobject.h would still
pull in everything (for backward compatibility reasons), but people could
include only a few headers if they want to. The rationale for me is when I
work on numpy itself: it is kind of stupid that every time I change the
iterator structures, the whole numpy core has to be rebuilt. That's quite
wasteful and frustrating. Another rationale is to be able to compile and
test a very minimal core numpy (the array object + a few things). I don't
see the py3k port being possible in a foreseeable future without this.

> The complex support could be a major effort in its own right if we need
> to rewrite all the current functions. That said, it would be nice if the
> complex support was separated out like the current real support. Tests
> to go along with it would be helpful. This also ties in with having
> build support for many platforms.

Pauli has worked on this a little, and I have actually worked quite a bit
myself because I need minimal support for windows 64 bits (to fake
libgfortran). I have already implemented around 10 core complex functions
(cabs, cangle, creal, cimag, cexp, cpow, csqrt, clog, ccos, csin, ctan), in
such a way that native C99 complex are used on platforms which support it,
and there is a quite thorough test suite which tests every special value
condition (negative zero, inf, nan) as specified in the C99 standard. It
still lacks actual values (!), FPU exception and branch cut tests, and
thorough tests on major platforms. And quite a few other functions would be
useful (hyperbolic trigo).

>> Build support
>> -------------
>>
>> Once the code itself is split, there needs to be some support so that
>> the code can be reused by third parties. The following issues need to
>> be solved:
>>
>>  1. Compile the computational code into shared or static library
>>  2. Once built, making the libraries available to third parties
>>     (distutils issues). Ideally, it should work in installed, in-place
>>     builds, etc... situations.
>>  3. Versioning, ABI/API compatibility issues
>
> Trying to break out the build support itself might be useful.

What do you mean by break out exactly? I have documented the already
implemented support:

http://docs.scipy.org/doc/numpy/reference/distutils.html#building-installable-c-libraries

> I think this needs some thought. This would essentially be a c library
> of iterator code. C++ is probably an easier language for such things as
> it handles the classes and inlining automatically. Which is to say if I
> had to deal with a lot of iterators I might choose a different language
> for implementation.

C++ is not an option for numpy (and if I had to choose another language
compared to C, I would rather take D, or one language which outputs C in
the spirit of vala :) ). I think handling iterators in C is OK: sure, it is
a bit messy, because of the lack of namespaces.
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Dave wrote:
> David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
>> So you need to pass the -c option to every command affected by
>> the compiler (build_ext, build_clib and config IIRC).
>
> I'm having the same problems! Running Windows XP, Python 2.5.4
> (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)].
>
> In my distutils.cfg I've got:
>
> [build]
> compiler = mingw32
>
> [config]
> compiler = mingw32

Yes, config files are an alternative I did not mention. I never use them
because I prefer controlling the build on a per-package basis, and the
interaction between command line and config files is not always clear.

> python setup.py build build_ext --compiler=mingw32 appeared to work
> (barring a warning: numpy\core\setup_common.py:81: MismatchCAPIWarning)

The warning is harmless: it is just a reminder that before releasing numpy
1.4.0, we will need to raise the C API version (to avoid problems we had in
the past with mismatched numpy versions). There is no point updating it
during dev time, I think.

> but then how do I create a .exe installer afterwards? python setup.py
> bdist_wininst fails with the same error message as before, and python
> setup.py bdist_wininst --compiler=mingw32 fails with the message:
> error: option --compiler not recognized

You need to do as follows, if you want to control from the command line:

python setup.py build -c mingw32 bdist_wininst

That's how I build the official binaries.

cheers,

David
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
> You need to do as follows, if you want to control from the command line:
>
> python setup.py build -c mingw32 bdist_wininst
>
> That's how I build the official binaries.
>
> cheers,
>
> David

Running the command:

C:\dev\src\numpy>python setup.py build -c mingw32 bdist_wininst > build.txt

still gives me the error:

error: Python was built with Visual Studio 2003;
extensions must be built with a compiler that can generate compatible
binaries. Visual Studio 2003 was not found on this system. If you have
Cygwin installed, you can try compiling with MingW32, by passing
"-c mingw32" to setup.py.

I tried without a distutils.cfg file and deleted the build directory both
times.

In case it helps the build log should be available from
http://pastebin.com/m607992ba

Am I doing something wrong?

-Dave
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Dave wrote:
> David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
>> You need to do as follows, if you want to control from the command line:
>>
>> python setup.py build -c mingw32 bdist_wininst
>>
>> That's how I build the official binaries.
>
> Running the command:
>
> C:\dev\src\numpy>python setup.py build -c mingw32 bdist_wininst > build.txt
>
> still gives me the error:
>
> error: Python was built with Visual Studio 2003;
> extensions must be built with a compiler that can generate compatible
> binaries. Visual Studio 2003 was not found on this system. If you have
> Cygwin installed, you can try compiling with MingW32, by passing
> "-c mingw32" to setup.py.
>
> I tried without a distutils.cfg file and deleted the build directory both
> times.
>
> In case it helps the build log should be available from
> http://pastebin.com/m607992ba
>
> Am I doing something wrong?

No, I think you and Matthew actually found a bug in recent changes I have
done in distutils. I will fix it right away,

cheers,

David
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
On Tue, Aug 4, 2009 at 5:28 PM, David Cournapeau wrote:
>
> No, I think you and Matthew actually found a bug in recent changes I
> have done in distutils. I will fix it right away,

Ok, not right away, but could you check that r7280 fixed it for you?

cheers,

David
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
David Cournapeau gmail.com> writes:
>
> On Tue, Aug 4, 2009 at 5:28 PM, David Cournapeau ar.media.kyoto-u.ac.jp> wrote:
>
> > No, I think you and Matthew actually found a bug in recent changes I
> > have done in distutils. I will fix it right away,
>
> Ok, not right away, but could you check that r7280 fixed it for you ?
>
> cheers,
>
> David

Works for me.

adding 'SCRIPTS\f2py.py'
creating dist
removing 'build\bdist.win32\wininst' (and everything under it)

Thanks for the quick fix!

-Dave
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Dave gmail.com> writes:
>
> Works for me.
>
> -Dave

Except now when trying to compile the latest scipy I get the following error:

C:\dev\src\scipy>svn up
Fetching external item into 'doc\sphinxext'
External at revision 7280.
At revision 5890.

C:\dev\src\scipy>python setup.py bdist_wininst
Traceback (most recent call last):
  File "setup.py", line 160, in <module>
    setup_package()
  File "setup.py", line 152, in setup_package
    configuration=configuration )
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\core.py", line 152, in setup
    config = configuration()
  File "setup.py", line 118, in configuration
    config.add_subpackage('scipy')
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 890, in add_subpackage
    caller_level = 2)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 859, in get_subpackage
    caller_level = caller_level + 1)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 796, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\setup.py", line 20, in configuration
    config.add_subpackage('special')
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 890, in add_subpackage
    caller_level = 2)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 859, in get_subpackage
    caller_level = caller_level + 1)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 796, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\special\setup.py", line 45, in configuration
    extra_info=get_info("npymath")
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 1954, in get_info
    pkg_info = get_pkg_info(pkgname, dirs)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\misc_util.py", line 1921, in get_pkg_info
    return read_config(pkgname, dirs)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\npy_pkg_config.py", line 235, in read_config
    v = _read_config_imp(pkg_to_filename(pkgname), dirs)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\npy_pkg_config.py", line 221, in _read_config_imp
    meta, vars, sections, reqs = _read_config(filenames)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\npy_pkg_config.py", line 205, in _read_config
    meta, vars, sections, reqs = parse_config(f, dirs)
  File "C:\dev\bin\Python25\Lib\site-packages\numpy\distutils\npy_pkg_config.py", line 177, in parse_config
    raise PkgNotFound("Could not find file(s) %s" % str(filenames))
numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s)
['C:\\dev\\bin\\Python25\\lib\\site-packages\\numpy\\core\\lib\\npy-pkg-config\\npymath.ini']

In the numpy\core\lib directory there is no npy-pkg-config sub-directory,
only a single file - libnpymath.a

Is this expected - has scipy not yet caught up with the numpy changes or is
this a numpy issue?

-Dave
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Dave wrote:
> Dave gmail.com> writes:
>
>> Works for me.
>>
>> -Dave
>
> Except now when trying to compile the latest scipy I get the following
> error:

Was numpy installed from a bdist_wininst installer, or did you use the
install method directly?

David
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
> Was numpy installed from a bdist_wininst installer, or did you use the
> install method directly ?
>
> David

Numpy was installed with the bdist_wininst installer.

In case it's relevant the installer seemed to create 2 egg-info files:

numpy-1.4.0.dev7277-py2.5.egg-info
numpy-1.4.0.dev7280-py2.5.egg-info

Deleting the numpy directory and the egg-info files and re-installing from
the bdist_wininst installer gave the same result (with the above 2 egg-info
files).

Installing numpy with python setup.py install seemed to work (at least the
npymath.ini file was now in the numpy\core\lib\npy-pkg-config folder).

Compiling scipy got much further now, but still failed with the below error
message:

C:\dev\src\scipy>python setup.py bdist_wininst > build.txt
Warning: No configuration returned, assuming unavailable.
C:\dev\bin\Python25\lib\site-packages\numpy\distutils\command\config.py:394:
DeprecationWarning:
+ Usage of get_output is deprecated: please do not use it anymore, and
avoid configuration checks involving running executable on the target
machine.
+ DeprecationWarning)
C:\dev\bin\Python25\lib\site-packages\numpy\distutils\system_info.py:452:
UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in
    the numpy/distutils/site.cfg file (section [umfpack]) or by setting the
    UMFPACK environment variable.
  warnings.warn(self.notfounderror.__doc__)
error: Command "C:\dev\bin\mingw\bin\g77.exe -g -Wall -mno-cygwin -g -Wall
-mno-cygwin -shared build\temp.win32-2.5\Release\scipy\special\_cephesmodule.o
build\temp.win32-2.5\Release\scipy\special\amos_wrappers.o
build\temp.win32-2.5\Release\scipy\special\specfun_wrappers.o
build\temp.win32-2.5\Release\scipy\special\toms_wrappers.o
build\temp.win32-2.5\Release\scipy\special\cdf_wrappers.o
build\temp.win32-2.5\Release\scipy\special\ufunc_extras.o
-LC:\dein\Python25\Lib\site-packages -LC:\dev\bin\mingw\lib
-LC:\dev\bin\mingw\lib\gcc\mingw32\3.4.5 -LC:\dev\bin\Python25\libs
-LC:\dev\bin\Python25\PCBuild -Lbuild\temp.win32-2.5 -lsc_amos -lsc_toms
-lsc_c_misc -lsc_cephes -lsc_mach -lsc_cdf -lsc_specfun -lnpymath
-lpython25 -lg2c -o build\lib.win32-2.5\scipy\special\_cephes.pyd" failed
with exit status 1

The output of the build is available from http://pastebin.com/d3efe5650

Note the strange character on line 4600. In my terminal window this is
displayed as:

compile options: '-D_USE_MATH_DEFINES -D_USE_MATH_DEFINES
-IC:\dein\Python25\Lib\site-packages
-IC:\dev\bin\Python25\lib\site-packages\numpy\core\include
-IC:\dev\bin\Python25\include -IC:\dev\bin\Python25\PC -c'

HTH,
Dave
[Numpy-discussion] speed of atleast_1d and friends
Hello all,

I am making a lot of use of atleast_1d and atleast_2d in my routines.
Does anybody know whether this will slow down my code significantly?

Thanks,

Mark
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Dave wrote:
> David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
>> Was numpy installed from a bdist_wininst installer, or did you use the
>> install method directly ?
>
> Numpy was installed with the bdist_wininst installer.
>
> In case it's relevant the installer seemed to create 2 egg-info files:
> numpy-1.4.0.dev7277-py2.5.egg-info
> numpy-1.4.0.dev7280-py2.5.egg-info
>
> Deleting the numpy directory and the egg-info files and re-installing
> from the bdist_wininst installer gave the same result (with the above 2
> egg-info files)
>
> Installing numpy with python setup.py install seemed to work (at least
> the npymath.ini file was now in the numpy\core\lib\npy-pkg-config folder)

I think I understand the problem. Unfortunately, that looks tricky to
solve... I hate distutils.

David
Re: [Numpy-discussion] speed of atleast_1d and friends
Hello,

> I am making a lot of use of atleast_1d and atleast_2d in my routines.
> Does anybody know whether this will slow down my code significantly?

If there is no need to make copies (i.e. if you take arrays as parameters
(?)), calls to atleast_1d and atleast_2d should be extremely fast: it's
just a question of creating a different view, I think. Did you profile your
code to check?

Cheers,

Emmanuelle
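Emmanuelle's point can be checked directly. A minimal sketch (assuming a modern NumPy where np.shares_memory is available): for an input that is already an array, atleast_1d does not copy any data, so the cost is constant regardless of array size.

```python
import numpy as np

# If the input is already a 1-d (or higher) ndarray, atleast_1d does not
# copy the data: the result shares memory with the input.
a = np.arange(5.0)
v = np.atleast_1d(a)
print(np.shares_memory(a, v))  # True

# Only 0-d inputs need any work at all: they get wrapped as 1-element arrays.
s = np.atleast_1d(np.float64(7.0))
print(s.shape)  # (1,)
```

Since no buffer is allocated for array inputs, the per-call cost is just the view bookkeeping, independent of the number of elements.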
Re: [Numpy-discussion] strange sin/cos performance
Bruce Southey wrote:
> Hi,
> Can you try these from the command line:
>
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)"
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32); b=np.sin(a)"
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32); np.sin(a)"
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)" "np.sin(a)"
>
> The first should be similar for different dtypes because it is just
> array creation. The second extends that by storing the sin into another
> array. I am not sure how to interpret the third but in the Python prompt
> it would print it to screen. The last causes Python to handle two
> arguments which is slow using float32 but not for float64 and float128,
> suggesting a compiler issue such as not using SSE or similar.

Results:

$ python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)"
100 loops, best of 3: 0.0811 usec per loop
$ python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32); b=np.sin(a)"
100 loops, best of 3: 0.11 usec per loop
$ python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32); np.sin(a)"
100 loops, best of 3: 0.11 usec per loop
$ python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)" "np.sin(a)"
100 loops, best of 3: 112 msec per loop
$ python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)" "np.sin(a)"
100 loops, best of 3: 13.2 msec per loop

I think the second and third are effectively the same; both create an array
containing the result. The second assigns that array to a name, while the
third does not, so it should get garbage collected. The fourth one is the
only one that actually runs the sin in the timing loop.

I don't understand what you mean by causing Python to handle two arguments?

The fifth run I added uses float64 to compare (and reproduces the problem).

Andrew
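Andrew's reading of the commands can be restated with the timeit module directly (a sketch, not the original command lines): the -s argument is setup code that runs once outside the timed loop, so a np.sin placed there is never measured; only the statement argument is timed. That is why the first three runs report sub-microsecond times while the fourth is in milliseconds.

```python
import timeit

# Shared setup: build the float32 test array once, outside the timed loop.
setup = ("import numpy as np; "
         "a = np.arange(0.0, 1000, 2 * 3.14159 / 1000, dtype=np.float32)")

# np.sin in the *setup* string: it runs once and is never measured,
# so the timed statement here is effectively empty.
t_setup_only = timeit.timeit("pass", setup=setup + "; b = np.sin(a)", number=100)

# np.sin in the *statement* string: this variant actually times the
# sin evaluation, 100 times over the whole array.
t_timed = timeit.timeit("np.sin(a)", setup=setup, number=100)

print(t_setup_only, t_timed)
```

The second timing dwarfs the first, matching the pattern in the command-line results above.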
Re: [Numpy-discussion] speed of atleast_1d and friends
On Tue, Aug 4, 2009 at 4:23 AM, Mark Bakker wrote:
> Hello all,
> I am making a lot of use of atleast_1d and atleast_2d in my routines.
> Does anybody know whether this will slow down my code significantly?
> Thanks,
> Mark

Here's atleast_1d:

def atleast_1d(*arys):
    res = []
    for ary in arys:
        res.append(array(ary, copy=False, subok=True, ndmin=1))
    if len(res) == 1:
        return res[0]
    else:
        return res

If you only pass in one array at a time, that reduces to:

def myatleast_1d(ary):
    return array(ary, copy=False, subok=True, ndmin=1)

That might save some time.

I'm always amazed at the solutions people come up with on this list. So if
you send an example, someone might be able to get rid of the need for
atleast_1d.
Re: [Numpy-discussion] speed of atleast_1d and friends
On Tue, Aug 04, 2009 at 07:37:03AM -0700, Keith Goodman wrote:
> I'm always amazed at the solutions people come up with on this list.
> So if you send an example, someone might be able to get rid of the
> need for atleast_1d.

On the other hand, it costs almost no time, and makes your API more robust
(for instance it can be used with numbers as well as arrays). I am all for
abusive use of np.atleast_1d.

Gaël
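As a sketch of that robustness argument (the normalize helper below is hypothetical, not from the thread): a function that starts with atleast_1d works identically for plain numbers, lists, and arrays, with no scalar special-casing.

```python
import numpy as np

def normalize(x):
    # Accept a plain number, a list, or an array; atleast_1d makes the
    # rest of the function uniform without special-casing scalars.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return x / x.sum()

print(normalize(5.0))        # a 1-element array: [1.]
print(normalize([1, 1, 2]))  # proportions summing to 1
```

Without the atleast_1d call, indexing or iterating over the result would fail for scalar inputs, which is exactly the kind of edge case it papers over for free.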
Re: [Numpy-discussion] strange sin/cos performance
Charles R Harris wrote:
> On Mon, Aug 3, 2009 at 11:51 AM, Andrew Friedley wrote:
>
>> Charles R Harris wrote:
>>> What compiler versions are folks using? In the slow cases, what is the
>>> timing for converting to double, computing the sin, then casting back
>>> to single?
>>
>> I did this, is this the right way to do that?
>>
>> t = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)",
>>                  "import numpy\n"
>>                  "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float64)")
>> print "sin converted float 32/64", min(t.repeat(3, 10))
>>
>> Timings on my opteron system (2-socket 2-core 2GHz):
>>
>> sin float32 1.13407707214
>> sin float64 0.133460998535
>> sin converted float 32/64 0.18202996254
>>
>> Not too surprising I guess.
>>
>> gcc --version shows:
>>
>> gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)
>>
>> My compile flags for my Python 2.6.1/NumPy 1.3.0 builds:
>>
>> -Os -fomit-frame-pointer -pipe -s -march=k8 -m64
>
> That looks right. When numpy doesn't find a *f version it basically does
> that conversion. This is beginning to look like a hardware/software
> implementation problem, maybe compiler related. That is, I suspect the
> fast times come from using a hardware implementation. What happens if
> you use -O2 instead of -Os?

Do you know where this conversion is, in the code? The impression I got
from my quick look at the code was that a wrapper sinf was defined that
just calls sin. I guess the typecast to float in there will do the
conversion, is that what you are referring to, or something at a higher
level?

I recompiled the same versions of Python/NumPy, using the same flags except
-O2 instead of -Os, and the behavior is still the same.

Andrew
Re: [Numpy-discussion] strange sin/cos performance
On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote:
> Do you know where this conversion is, in the code? The impression I got
> from my quick look at the code was that a wrapper sinf was defined that
> just calls sin. I guess the typecast to float in there will do the
> conversion

Exact. Given your CPU, compared to my macbook, it looks like the float32 is
the problem (i.e. the float64 is not particularly fast). I really can't see
what could cause such a slowdown: the range over which you evaluate sin
should not cause denormal numbers. Just to be sure, could you try the same
benchmark but using a simple array of constant values (say
numpy.ones(1000))?

Also, you may want to check what happens if you force raising errors in
case of FPU exceptions (numpy.seterr(all='raise')).

cheers,

David
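David's two suggested checks can be sketched together as follows (a minimal version; note that np.seterr takes keyword arguments such as all='raise', and the benchmark numbers will of course differ per machine):

```python
import numpy as np

# A benchmark input that cannot contain denormals: constant ones.
a32 = np.ones(1000, dtype=np.float32)
a64 = np.ones(1000, dtype=np.float64)

# Turn every FPU exception (divide, overflow, underflow, invalid) into a
# raised FloatingPointError while the benchmark runs, then restore the
# previous error settings.
old = np.seterr(all='raise')
try:
    r32 = np.sin(a32)
    r64 = np.sin(a64)
finally:
    np.seterr(**old)

print(r32[0], r64[0])  # both approximately 0.841471, i.e. sin(1)
```

If the float32 path silently hits an FPU exception, this version raises instead of limping along, which separates a numerical problem from a plain performance one.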
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
On Tue, Aug 4, 2009 at 8:13 PM, David Cournapeau wrote:
> I think I understand the problem. Unfortunately, that looks tricky to
> solve... I hate distutils.

Ok - should be fixed in r7281.

David
[Numpy-discussion] Why NaN?
Hello,

I know this has to have a very simple answer, but I am stuck at this very
moment and can't get a meaningful result out of np.mean()

In [121]: a = array([NaN, 4, NaN, 12])

In [122]: b = array([NaN, 2, NaN, 3])

In [123]: c = a/b

In [124]: mean(c)
Out[124]: nan

In [125]: mean(a)
Out[125]: nan

Further when I tried:

In [138]: c
Out[138]: array([ NaN,   2.,  NaN,   4.])

In [139]: np.where(c==NaN)
Out[139]: (array([], dtype=int32),)

In [141]: mask = [c != NaN]

In [142]: mask
Out[142]: [array([ True,  True,  True,  True], dtype=bool)]

Any ideas?

--
Gökhan
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
David Cournapeau gmail.com> writes:
>
> On Tue, Aug 4, 2009 at 8:13 PM, David Cournapeau ar.media.kyoto-u.ac.jp> wrote:
>
> > I think I understand the problem. Unfortunately, that looks tricky to
> > solve... I hate distutils.
>
> Ok - should be fixed in r7281.
>
> David

Well, that seemed to fix the bdist_wininst issue. The problem compiling
scipy remains, but I assume that's probably something I should take up on
the scipy list?

FWIW running the full numpy test suite (verbose=10) I get 7 failures. The
results are available from http://pastebin.com/m5505d4b5 The "errors" seem
to be related to the NaN handling.

Thanks for the help today!

-Dave
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 11:46, Gökhan Sever wrote:
> Hello,
>
> I know this has to have a very simple answer, but stuck at this very
> moment and can't get a meaningful result out of np.mean()
>
> In [121]: a = array([NaN, 4, NaN, 12])
>
> In [122]: b = array([NaN, 2, NaN, 3])
>
> In [123]: c = a/b
>
> In [124]: mean(c)
> Out[124]: nan
>
> In [139]: np.where(c==NaN)
> Out[139]: (array([], dtype=int32),)
>
> In [141]: mask = [c != NaN]
>
> In [142]: mask
> Out[142]: [array([ True,  True,  True,  True], dtype=bool)]

Yeah, NaN != NaN. It's a feature, not a bug.

Use np.ma.masked_invalid(c).mean().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
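Spelled out as a session (a minimal sketch of Robert's suggestion): masked_invalid masks NaN and inf entries, and mean() is then taken over the remaining valid values only.

```python
import numpy as np

a = np.array([np.nan, 4.0, np.nan, 12.0])
b = np.array([np.nan, 2.0, np.nan, 3.0])
c = a / b  # array([nan, 2., nan, 4.]); nan/nan stays nan

# Mask NaN/inf entries, then average only the valid ones: (2 + 4) / 2 == 3.
m = np.ma.masked_invalid(c).mean()
print(m)  # 3.0
```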
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 9:46 AM, Gökhan Sever wrote:
> Hello,
>
> I know this has to have a very simple answer, but stuck at this very
> moment and can't get a meaningful result out of np.mean()
>
> In [121]: a = array([NaN, 4, NaN, 12])
>
> In [122]: b = array([NaN, 2, NaN, 3])
>
> In [123]: c = a/b
>
> In [124]: mean(c)
> Out[124]: nan
>
> Any ideas?

>> a = array([NaN, 4, NaN, 12])
>> b = array([NaN, 2, NaN, 3])
>> c = a/b
>> from scipy import stats
>> stats.nan [tab]
stats.nanmean    stats.nanmedian    stats.nanstd
>> stats.nanmean(c)
3.0
>> stats.nanmean(a)
8.0
>> c[isnan(c)]
array([ NaN,  NaN])
Re: [Numpy-discussion] Why NaN?
Note that NaN generally contaminates sums and other net results (as it
should). You should filter them out (there is more than one way to do
that). But also note that the IEEE standard for floating point numbers
requires NaN != NaN. Thus any attempt to find NaNs that way is destined to
fail. Use the function isnan() instead to generate a mask.

Perry

On Aug 4, 2009, at 12:46 PM, Gökhan Sever wrote:
> Hello,
>
> I know this has to have a very simple answer, but stuck at this very
> moment and can't get a meaningful result out of np.mean()
>
> In [124]: mean(c)
> Out[124]: nan
>
> In [139]: np.where(c==NaN)
> Out[139]: (array([], dtype=int32),)
>
> Any ideas?
>
> --
> Gökhan
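Perry's point, concretely (a short sketch): comparing against NaN never matches because IEEE 754 defines NaN as unequal to everything, including itself, while isnan() builds the mask the comparison was meant to produce.

```python
import numpy as np

c = np.array([np.nan, 2.0, np.nan, 4.0])

# Comparing against NaN never matches: NaN != NaN by IEEE 754 rules,
# so this is False even for the NaN entries.
print((c == np.nan).any())  # False

# isnan() gives the mask the comparison was meant to produce.
mask = np.isnan(c)
print(c[~mask].mean())  # 3.0
```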
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 9:54 AM, Keith Goodman wrote: > On Tue, Aug 4, 2009 at 9:46 AM, Gökhan Sever wrote: >> Hello, >> >> I know this has to have a very simple answer, but stuck at this very moment >> and can't get a meaningful result out of np.mean() >> >> >> In [121]: a = array([NaN, 4, NaN, 12]) >> >> In [122]: b = array([NaN, 2, NaN, 3]) >> >> In [123]: c = a/b >> >> In [124]: mean(c) >> Out[124]: nan >> >> In [125]: mean a >> > mean(a) >> Out[125]: nan >> >> Further when I tried: >> >> In [138]: c >> Out[138]: array([ NaN, 2., NaN, 4.]) >> >> In [139]: np.where(c==NaN) >> Out[139]: (array([], dtype=int32),) >> >> >> In [141]: mask = [c != NaN] >> >> In [142]: mask >> Out[142]: [array([ True, True, True, True], dtype=bool)] >> >> >> Any ideas? > >>> a = array([NaN, 4, NaN, 12]) >>> b = array([NaN, 2, NaN, 3]) >>> c = a/b >>> from scipy import stats >>> stats.nan [tab] > stats.nanmean stats.nanmedian stats.nanstd >>> stats.nanmean(c) > 3.0 >>> stats.nanmean(a) > 8.0 >>> c[isnan(c)] > array([ NaN, NaN]) One more: >> c[isfinite(c)].mean() 3.0 ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 12:59 PM, Keith Goodman wrote: > On Tue, Aug 4, 2009 at 9:54 AM, Keith Goodman wrote: >> On Tue, Aug 4, 2009 at 9:46 AM, Gökhan Sever wrote: >>> Hello, >>> >>> I know this has to have a very simple answer, but stuck at this very moment >>> and can't get a meaningful result out of np.mean() >>> >>> >>> In [121]: a = array([NaN, 4, NaN, 12]) >>> >>> In [122]: b = array([NaN, 2, NaN, 3]) >>> >>> In [123]: c = a/b >>> >>> In [124]: mean(c) >>> Out[124]: nan >>> >>> In [125]: mean a >>> > mean(a) >>> Out[125]: nan >>> >>> Further when I tried: >>> >>> In [138]: c >>> Out[138]: array([ NaN, 2., NaN, 4.]) >>> >>> In [139]: np.where(c==NaN) >>> Out[139]: (array([], dtype=int32),) >>> >>> >>> In [141]: mask = [c != NaN] >>> >>> In [142]: mask >>> Out[142]: [array([ True, True, True, True], dtype=bool)] >>> >>> >>> Any ideas? >> a = array([NaN, 4, NaN, 12]) b = array([NaN, 2, NaN, 3]) c = a/b from scipy import stats stats.nan [tab] >> stats.nanmean stats.nanmedian stats.nanstd stats.nanmean(c) >> 3.0 stats.nanmean(a) >> 8.0 c[isnan(c)] >> array([ NaN, NaN]) > > One more: > >>> c[isfinite(c)].mean() > 3.0 > ___ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > What's going on with the response time here? I cannot even finish reading the question and start python. Josef ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 12:05, wrote: > What's going on with the response time here? > > I cannot even finish reading the question and start python. Practice. :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 10:05 AM, wrote: > What's going on with the response time here? > > I cannot even finish reading the question and start python. The trick is to not read the entire question. I usually reply after reading the subj line. Or just auto-reply with "x.sort() returns None" which seems to be the most common question.
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
On Tue, Aug 4, 2009 at 10:51 AM, Dave wrote: > David Cournapeau gmail.com> writes: > > > > > On Tue, Aug 4, 2009 at 8:13 PM, David > > Cournapeau ar.media.kyoto-u.ac.jp> wrote: > > > > > I think I understand the problem. Unfortunately, that's looks tricky to > > > solve... I hate distutils. > > > > Ok - should be fixed in r7281. > > > > David > > > > Well, that seemed to fix the bdist_wininst issue. > > The problem compiling scipy remains, but I assume that's probably something > I > should take up on the scipy list? > > FWIW running the full numpy test suite (verbose=10) I get 7 failures. The > results are available from http://pastebin.com/m5505d4b5 > > The "errors" seem to be be related to the NaN handling. > The nan problems come from these tests: # atan2(+-infinity, -infinity) returns +-3*pi/4. yield assert_almost_equal, ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi yield assert_almost_equal, ncu.arctan2(-np.inf, -np.inf), -0.75 * np.pi # atan2(+-infinity, +infinity) returns +-pi/4. yield assert_almost_equal, ncu.arctan2( np.inf, np.inf), 0.25 * np.pi yield assert_almost_equal, ncu.arctan2(-np.inf, np.inf), -0.25 * np.pi So the problem seems to be with the inf handling. Windows arctan2 is known to be wtf-buggy and I suspect that is what is being tested. Chuck ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
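[The IEEE 754 / C99 special-case values being tested can be checked directly; on a platform with a conforming math library (which the buggy Windows arctan2 evidently was not), this sketch passes.]

```python
import numpy as np

# C99 Annex F specifies atan2 for infinite arguments:
#   atan2(+-inf, -inf) = +-3*pi/4,  atan2(+-inf, +inf) = +-pi/4
print(np.arctan2(np.inf, -np.inf))   # 3*pi/4 on a conforming platform
print(np.arctan2(-np.inf, np.inf))   # -pi/4
```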
Re: [Numpy-discussion] strange sin/cos performance
David Cournapeau wrote: > On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote: > >> Do you know where this conversion is, in the code? The impression I got >> from my quick look at the code was that a wrapper sinf was defined that >> just calls sin. I guess the typecast to float in there will do the >> conversion > > Exact. Given your CPU, compared to my macbook, it looks like the > float32 is the problem (i.e. the float64 is not particularly fast). I > really can't see what could cause such a slowdown: the range over > which you evaluate sin should not cause denormal numbers - just to be > sure, could you try the same benchmark but using a simple array of > constant values (say numpy.ones(1000)) ? Also, you may want to check > what happens if you force raising errors in case of FPU exceptions > (numpy.seterr(raise="all")). OK, have some interesting results. First is my array creation was not doing what I thought it was. This (what I've been doing) creates an array of 159161 elements: numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32) Which isn't what I was after (1000 elements ranging from 0 to 2PI). So the values in that array climb up to 999.999. Running with numpy.ones() gives a much different timing (I did numpy.ones(159161) to keep the array lengths the same): sin float32 0.078202009201 sin float64 0.0767619609833 cos float32 0.0750858783722 cos float64 0.088515996933 Much better, but still a little strange, float32 should be relatively faster yet. I tried with 1000 elements and got similar results. So the performance has something to do with the input values. This is believable, but I don't think it explains why float32 would behave that way and not float64, unless there's something else I don't understand. Also I assume you meant seterr(all='raise'). This didn't seem to do anything, I don't have any exceptions thrown or other output. 
Andrew
Re: [Numpy-discussion] strange sin/cos performance
On Tue, Aug 4, 2009 at 12:19, Andrew Friedley wrote: > OK, have some interesting results. First is my array creation was not > doing what I thought it was. This (what I've been doing) creates an > array of 159161 elements: > > numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32) > > Which isn't what I was after (1000 elements ranging from 0 to 2PI). So > the values in that array climb up to 999.999. One uses arange() like so: numpy.arange(start, stop, step), just like the builtin range(). You want numpy.linspace(0.0, 2*numpy.pi, 1000). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
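[The distinction is easy to see side by side: arange() takes a *step*, so its third argument controls spacing, not length, while linspace() takes the number of points directly.]

```python
import numpy as np

# arange(start, stop, step): element count is roughly (stop - start) / step,
# so a tiny step with stop=1000 yields far more than 1000 points,
# with values climbing toward 1000.
a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
print(a.size)          # well over 150000 elements

# linspace(start, stop, num): exactly num points spanning [start, stop].
b = np.linspace(0.0, 2 * np.pi, 1000)
print(b.size)          # 1000
print(b[0], b[-1])     # 0.0 ... 2*pi
```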
Re: [Numpy-discussion] Why NaN?
Actually, Robert's really a robot (indeed, the Kernel of all robot minds) - no way a biologic is going to beat him. ;-) DG --- On Tue, 8/4/09, Robert Kern wrote: > From: Robert Kern > Subject: Re: [Numpy-discussion] Why NaN? > To: "Discussion of Numerical Python" > Date: Tuesday, August 4, 2009, 10:08 AM > On Tue, Aug 4, 2009 at 12:05, > wrote: > > > What's going on with the response time here? > > > > I cannot even finish reading the question and start > python. > > Practice. :-) > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, > a harmless > enigma that is made terrible by our own mad attempt to > interpret it as > though it had an underlying truth." > -- Umberto Eco > ___ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] strange sin/cos performance
On Tue, Aug 4, 2009 at 11:19 AM, Andrew Friedley wrote: > David Cournapeau wrote: > > On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley > wrote: > > > >> Do you know where this conversion is, in the code? The impression I got > >> from my quick look at the code was that a wrapper sinf was defined that > >> just calls sin. I guess the typecast to float in there will do the > >> conversion > > > > Exact. Given your CPU, compared to my macbook, it looks like the > > float32 is the problem (i.e. the float64 is not particularly fast). I > > really can't see what could cause such a slowdown: the range over > > which you evaluate sin should not cause denormal numbers - just to be > > sure, could you try the same benchmark but using a simple array of > > constant values (say numpy.ones(1000)) ? Also, you may want to check > > what happens if you force raising errors in case of FPU exceptions > > (numpy.seterr(raise="all")). > > OK, have some interesting results. First is my array creation was not > doing what I thought it was. This (what I've been doing) creates an > array of 159161 elements: > > numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32) > > Which isn't what I was after (1000 elements ranging from 0 to 2PI). So > the values in that array climb up to 999.999. > > Running with numpy.ones() gives a much different timing (I did > numpy.ones(159161) to keep the array lengths the same): > > sin float32 0.078202009201 > sin float64 0.0767619609833 > cos float32 0.0750858783722 > cos float64 0.088515996933 > > Much better, but still a little strange, float32 should be relatively > faster yet. I tried with 1000 elements and got similar results. > Depends on the CPU, FPU and the compiler flags. The computations could very well be done using double precision internally with conversions on load/store. Chuck ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] strange sin/cos performance
Charles R Harris wrote: > Depends on the CPU, FPU and the compiler flags. The computations could very > well be done using double precision internally with conversions on > load/store. Sure, but if this is the case, why is the performance blowing up on larger input values for float32 but not float64? Both should blow up, not just one or the other. In other words I think they are using different implementations :) Am I missing something? Andrew ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 1:45 PM, David Goldsmith wrote: > > Actually, Robert's really a robot (indeed, the Kernel of all robot minds) - > no way a biologic is going to beat him. ;-) So, what is the conclusion, do we need more practice, or can we sit back and let Robert take care of things? Josef > > DG > > --- On Tue, 8/4/09, Robert Kern wrote: > >> From: Robert Kern >> Subject: Re: [Numpy-discussion] Why NaN? >> To: "Discussion of Numerical Python" >> Date: Tuesday, August 4, 2009, 10:08 AM >> On Tue, Aug 4, 2009 at 12:05, >> wrote: >> >> > What's going on with the response time here? >> > >> > I cannot even finish reading the question and start >> python. >> >> Practice. :-) >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, >> a harmless >> enigma that is made terrible by our own mad attempt to >> interpret it as >> though it had an underlying truth." >> -- Umberto Eco >> ___ >> NumPy-Discussion mailing list >> NumPy-Discussion@scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > > ___ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
Uh-oh, if my joke is going to promote wide-spread complacency, I take it back, I take it back! DG --- On Tue, 8/4/09, josef.p...@gmail.com wrote: > From: josef.p...@gmail.com > Subject: Re: [Numpy-discussion] Why NaN? > To: "Discussion of Numerical Python" > Date: Tuesday, August 4, 2009, 11:11 AM > On Tue, Aug 4, 2009 at 1:45 PM, David > Goldsmith > wrote: > > > > Actually, Robert's really a robot (indeed, the Kernel > of all robot minds) - no way a biologic is going to beat > him. ;-) > > So, what is the conclusion, do we need more practice, or > can we sit > back and let Robert take care of things? > > Josef > > > > > > DG > > > > --- On Tue, 8/4/09, Robert Kern > wrote: > > > >> From: Robert Kern > >> Subject: Re: [Numpy-discussion] Why NaN? > >> To: "Discussion of Numerical Python" > >> Date: Tuesday, August 4, 2009, 10:08 AM > >> On Tue, Aug 4, 2009 at 12:05, > >> wrote: > >> > >> > What's going on with the response time here? > >> > > >> > I cannot even finish reading the question and > start > >> python. > >> > >> Practice. :-) > >> > >> -- > >> Robert Kern > >> > >> "I have come to believe that the whole world is an > enigma, > >> a harmless > >> enigma that is made terrible by our own mad > attempt to > >> interpret it as > >> though it had an underlying truth." > >> -- Umberto Eco > >> ___ > >> NumPy-Discussion mailing list > >> NumPy-Discussion@scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> > > > > > > > > ___ > > NumPy-Discussion mailing list > > NumPy-Discussion@scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > ___ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 04, 2009 at 02:11:57PM -0400, josef.p...@gmail.com wrote: > On Tue, Aug 4, 2009 at 1:45 PM, David Goldsmith > wrote: > > Actually, Robert's really a robot (indeed, the Kernel of all robot minds) - > > no way a biologic is going to beat him. ;-) > So, what is the conclusion, do we need more practice, or can we sit > back and let Robert take care of things? No, we need to get the master schematics of Robert and replicate him! Robert, please? Gaël ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
This is the loveliest of all solutions: c[isfinite(c)].mean() You are all very helpful and funny. I am sure most of you spend more than 16 hours a day in front of or by your screens :) On Tue, Aug 4, 2009 at 11:46 AM, Gökhan Sever wrote: > Hello, > > I know this has to have a very simple answer, but stuck at this very moment > and can't get a meaningful result out of np.mean() > > > In [121]: a = array([NaN, 4, NaN, 12]) > > In [122]: b = array([NaN, 2, NaN, 3]) > > In [123]: c = a/b > > In [124]: mean(c) > Out[124]: nan > > In [125]: mean a > > mean(a) > Out[125]: nan > > Further when I tried: > > In [138]: c > Out[138]: array([ NaN, 2., NaN, 4.]) > > In [139]: np.where(c==NaN) > Out[139]: (array([], dtype=int32),) > > > In [141]: mask = [c != NaN] > > In [142]: mask > Out[142]: [array([ True, True, True, True], dtype=bool)] > > > Any ideas? > > -- > Gökhan > -- Gökhan ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 13:40, Gökhan Sever wrote: > This is the loveliest of all solutions: > > c[isfinite(c)].mean() I kind of like c[c == c].mean(), but only because it's a bit mind-blowing. :-) > You are all very helpful and funny. I am sure most of you spend more than 16 > hours a day in front of or by your screens :) Hey! I resemble that remark! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] strange sin/cos performance
On Tuesday 04 August 2009 19:19:22 Andrew Friedley wrote: > OK, have some interesting results. First is my array creation was not > doing what I thought it was. This (what I've been doing) creates an > array of 159161 elements: > > numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32) Ah. And I wondered why taking the sin/cos of 1000 elements took so long... ;-) (actually, I would've used larger arrays for benchmarking to begin with) Indeed, the value range fixes stuff here (Linux, GCC/amd64, Xeon X5450 @ 3.00GHz, NumPy 1.3.0), too: Before: float64 10 loops, best of 3: 54.2 ms per loop float32 10 loops, best of 3: 7.62 ms per loop After: float64 10 loops, best of 3: 6.03 ms per loop float32 10 loops, best of 3: 3.81 ms per loop Best, Hans ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
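[A minimal way to reproduce this kind of comparison yourself; the array sizes mirror the accidental arange() array from the thread, and absolute timings will of course vary with machine, compiler flags, and NumPy build.]

```python
import timeit
import numpy as np

n = 159161  # same length as the accidental arange() array above

for dtype in (np.float32, np.float64):
    small = np.ones(n, dtype=dtype)       # values in the well-behaved range
    large = np.arange(n, dtype=dtype)     # values climbing to ~159160
    for name, arr in (("ones", small), ("large", large)):
        t = timeit.timeit(lambda: np.sin(arr), number=10)
        print(dtype.__name__, name, "%.4f s" % t)
```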
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 04, 2009 at 01:43:54PM -0500, Robert Kern wrote: > I kind of like c[c == c].mean(), but only because it's a bit mind-blowing. :-) > > You are all very helpful and funny. I am sure most of you spend more than 16 > > hours a day in front of or by your screens :) > Hey! I resemble that remark! Out of these 16 hours, 14 are spent staring at two terminals: one with IPython on one side, and another with vim on the other. Yeah baby! Gaël
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 1:48 PM, Gael Varoquaux < gael.varoqu...@normalesup.org> wrote: > On Tue, Aug 04, 2009 at 01:43:54PM -0500, Robert Kern wrote: > > I kind of like c[c == c].mean(), but only because it's a bit > mind-blowing. :-) > > > > You are all very helpful and funny. I am sure most of you spend more > than 16 > > > hours a day in front of or by your screens :) > > > Hey! I resemble that remark! > > Out of these 16 hours, 14 are spent staring at two terminals: one with > IPython on one side, and another with vim on the other. > > Yeah baby! > > Gaël > I see that you should have a browser embedding plugin for Ipyhon which you don't want to share with us :) And do you only fix Mayavi issues in that not-included 2 hours? -- Gökhan ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Aug 4, 2009, at 2:43 PM, Robert Kern wrote: > On Tue, Aug 4, 2009 at 13:40, Gökhan Sever > wrote: >> This is the loveliest of all solutions: >> >> c[isfinite(c)].mean() > > I kind of like c[c == c].mean(), but only because it's a bit mind- > blowing. :-) But it doesn't give the same result as the previous one when there's an inf... ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 14:29, Pierre GM wrote: > > On Aug 4, 2009, at 2:43 PM, Robert Kern wrote: > >> On Tue, Aug 4, 2009 at 13:40, Gökhan Sever >> wrote: >>> This is the loveliest of all solutions: >>> >>> c[isfinite(c)].mean() >> >> I kind of like c[c == c].mean(), but only because it's a bit mind- >> blowing. :-) > > But it doesn't give the same result as the previous one when there's > an inf... NaNs might be markers of missing data, but I see infs as data. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
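[The difference Pierre and Robert are discussing is easy to demonstrate: `c == c` is False only where c is NaN, so infinities survive that mask, while isfinite() rejects both.]

```python
import numpy as np

c = np.array([np.nan, 2.0, np.inf, 4.0])

# c == c is False only at NaN positions (NaN != NaN), so inf survives:
print(c[c == c])                  # values 2., inf, 4.

# isfinite() drops both NaN and inf:
print(c[np.isfinite(c)])          # values 2., 4.

# Hence the two "means" differ:
print(c[c == c].mean())           # inf
print(c[np.isfinite(c)].mean())   # 3.0
```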
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 04, 2009 at 01:54:49PM -0500, Gökhan Sever wrote: >I see that you should have a browser embedding plugin for Ipyhon which you >don't want to share with us :) No, I answer e-mail using vim. >And do you only fix Mayavi issues in that not-included 2 hours? No, during the other hours, using IPython and vim, what else? Gaël ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 12:59 PM, Gael Varoquaux wrote: > On Tue, Aug 04, 2009 at 01:54:49PM -0500, Gökhan Sever wrote: >> I see that you should have a browser embedding plugin for Ipyhon which you >> don't want to share with us :) > > No, I answer e-mail using vim. Yeah, I'm trying that right now. :wq :q! :dammit
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
Hi, On Tue, Aug 4, 2009 at 9:31 AM, David Cournapeau wrote: > On Tue, Aug 4, 2009 at 8:13 PM, David > Cournapeau wrote: > >> I think I understand the problem. Unfortunately, that's looks tricky to >> solve... I hate distutils. > > Ok - should be fixed in r7281. Just to clarify - it's still true I guess that this: python setup.py build_ext --compiler=mingw32 --inplace just can't work - because the --compiler flag does not get passed to the build step? I noticed, when I was trying to be fancy: python setup.py build build_ext --inplace this error: File "/home/mb312/usr/local/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 74, in run self.library_dirs.append(build_clib.build_clib) UnboundLocalError: local variable 'build_clib' referenced before assignment because of the check for inplace builds above that, leaving build_clib undefined. I'm afraid I wasn't quite sure what the right thing to do was. Thanks a lot, Matthew ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Is this a bug in numpy.distutils ?
On Tue, Aug 4, 2009 at 15:09, Matthew Brett wrote: > File > "/home/mb312/usr/local/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", > line 74, in run > self.library_dirs.append(build_clib.build_clib) > UnboundLocalError: local variable 'build_clib' referenced before assignment > > because of the check for inplace builds above that, leaving build_clib > undefined. I'm afraid I wasn't quite sure what the right thing to do > was. Probably just build_clib = self.distribution.get_command_obj('build_clib') after the log.warn(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Features in SciPy That are Absent in NumPy
What features does SciPy have that are absent in NumPy?
Re: [Numpy-discussion] Features in SciPy That are Absent in NumPy
On 2009-08-04 14:36 , Nanime Puloski wrote: > What features does SciPy have that are absent in NumPy? Many. SciPy includes algorithms for optimization, solving differential equations, numerical integration among many others. NumPy primarily provides a useful n-dimensional array container. While there are some basic scientific features such as FFTs in NumPy, these appear in more detail in SciPy. If you can give more specifics on what features you would be interested in, we can offer more help about which package contains those features. -Neil ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 1:40 PM, Gökhan Sever wrote: > This is the loveliest of all solutions: > > c[isfinite(c)].mean() This handling of nonfinite elements has come up before. Please remember that this works only for a 1d or flattened array, so it does not work in general, especially along an axis. Bruce
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 1:53 PM, Bruce Southey wrote: > On Tue, Aug 4, 2009 at 1:40 PM, Gökhan Sever wrote: >> This is the loveliest of all solutions: >> >> c[isfinite(c)].mean() > > This handling of nonfinite elements has come up before. > Please remember that this only for 1d or flatten array so it not work > in general especially along an axis. If you don't want to use nanmean from scipy.stats you could use: np.nansum(c, axis=0) / (~np.isnan(c)).sum(axis=0) or np.nansum(c, axis=0) / (c == c).sum(axis=0) But if c contains ints then you'll run into trouble with the division, so you'll need to protect against that. ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
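[Keith's per-axis recipe, spelled out on a small 2-d array (scipy.stats.nanmean wraps essentially this pattern): divide the NaN-ignoring sum by the count of non-NaN entries along the same axis.]

```python
import numpy as np

x = np.array([[np.nan, 1.0, 4.0],
              [3.0, np.nan, 8.0]])

# Count non-NaN entries per column, then divide the NaN-ignoring column sums.
counts = (~np.isnan(x)).sum(axis=0)
col_mean = np.nansum(x, axis=0) / counts
print(col_mean)   # [3. 1. 6.]
```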
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
On Mon, Aug 3, 2009 at 9:42 PM, David Cournapeau wrote: > Hi All, > > I (David Cournapeau) and the people at Berkeley (Jarrod Millman, > Fernando Perez, Matthew Brett) have been in discussion so that I could > do some funded work on NumPy/SciPy. Although they are obviously > interested in improvements that help their own projects, they are > willing to make sure the work will impact numpy/scipy as a whole. As > such we would like to get some feedback about the proposal. > > There are several areas we discussed about, but the main 'vision' is to > make more of the C code in numpy reusable to 3rd parties, in particular > purely computational (fft, linear algebra, etc...) code. A first draft > of the proposal is pasted below. > > Comments, request for details, objections are welcomed, > > Thank you for your attention, > > The Berkeley team, Gael Varoquaux and David Cournapeau > [snip] Almost a year ago Travis sent an email: 'Report from SciPy'? http://mail.scipy.org/pipermail/numpy-discussion/2008-August/036909.html Of importance was that " * NumPy 2.0 will be a library and will not automagically import numpy.fft * We will suggest that other libraries use from numpy import fft instead of import numpy as np; np.fft " I sort of see that the proposed work could help make numpy a library as a whole, but it is not clear that the work is heading towards that goal. So if numpy 2.0 is still planned as a library then I would like to see a clearer statement towards that goal. Not really understanding the problems of C99, but I know that trying to cover all the little details can be very time consuming when more effort could be spent on other things. So if 'C99-like' is going to be the near-term future, is there any point in supporting non-C99 environments with this work? That is, is the limitation in the compiler, operating system, processor or some combination of these? Anyhow, these are only my thoughts and pale in comparison to the work you are doing, so feel free to ignore them.
Thanks Bruce
Re: [Numpy-discussion] Features in SciPy That are Absent in NumPy
--- On Tue, 8/4/09, Neil Martinsen-Burrell wrote: > > What features does SciPy have that are absent in > NumPy? > > Many. And that's an understatement! DG > SciPy includes algorithms for optimization, > solving differential > equations, numerical integration among many others. > NumPy primarily > provides a useful n-dimensional array container. > While there are some > basic scientific features such as FFTs in NumPy, these > appear in more > detail in SciPy. If you can give more specifics on > what features you > would be interested in, we can offer more help about which > package > contains those features. > > -Neil > ___ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
On Tue, Aug 4, 2009 at 3:24 PM, Bruce Southey wrote: > On Mon, Aug 3, 2009 at 9:42 PM, David > Cournapeau wrote: > > Hi All, > > > >I (David Cournapeau) and the people at Berkeley (Jarrod Millman, > > Fernando Perez, Matthew Brett) have been in discussion so that I could > > do some funded work on NumPy/SciPy. Although they are obviously > > interested in improvements that help their own projects, they are > > willing to make sure the work will impact numpy/scipy as a whole. As > > such we would like to get some feedback about the proposal. > > > > There are several areas we discussed about, but the main 'vision' is to > > make more of the C code in numpy reusable to 3rd parties, in particular > > purely computational (fft, linear algebra, etc...) code. A first draft > > of the proposal is pasted below. > > > > Comments, request for details, objections are welcomed, > > > > Thank you for your attention, > > > > The Berkeley team, Gael Varoquaux and David Cournapeau > > > > [snip] > > > Almost a year ago Travis send an email : > 'Report from SciPy'? > http://mail.scipy.org/pipermail/numpy-discussion/2008-August/036909.html > > Of importance was that > " * NumPy 2.0 will be a library and will not automagically import numpy.fft > * We will suggest that other libraries use from numpy import fft > instead of import numpy as np; np.fft > " > > I sort of see that the proposed work could help make numpy a library > as a whole but it is not clear that the work is heading towards that > goal. So if numpy 2.0 is still planned as a library then I would like > to see a clearer statement towards that goal. > > Not really understanding the problems of C99, but I know that trying > to cover all the little details can be very time consuming when more > effort could be spent on things. > So if 'C99-like' is going to be the near term future, is there any > point in supporting non-C99 environments with this work? Windows? 
I don't know the status of the most recent MSVC compilers, but they haven't been c99 compliant in the past and compliance doesn't seem to be a priority. Other compilers are a mixed bag. This is the git conundrum: support isn't sufficiently widespread on all platforms to make the transition so we are stuck with the lowest common denominator. Chuck ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
--- On Tue, 8/4/09, Bruce Southey wrote: > [snip] > > Almost a year ago Travis sent an email, 'Report from SciPy': > http://mail.scipy.org/pipermail/numpy-discussion/2008-August/036909.html > > Of importance was that > " * NumPy 2.0 will be a library and will not automagically > import numpy.fft As someone who tends to think of "modules" as "libraries" (renamed for Python for "branding" purposes), what's the difference? DG > [snip]
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
On Tue, Aug 4, 2009 at 16:49, David Goldsmith wrote: > > --- On Tue, 8/4/09, Bruce Southey wrote: > >> [snip] >> >> Almost a year ago Travis sent an email, 'Report from SciPy': >> http://mail.scipy.org/pipermail/numpy-discussion/2008-August/036909.html >> >> Of importance was that >> " * NumPy 2.0 will be a library and will not automagically >> import numpy.fft > > As someone who tends to think of "modules" as "libraries" (renamed for Python for "branding" purposes), what's the difference? Poor phrasing. I believe Travis meant something along the lines of "NumPy 2.0 will be a [well-behaved] library and will not automagically import numpy.fft." The informative part is the latter point. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] Funded work on Numpy: proposed improvements and request for feedback
Gotchya, thanks! DG --- On Tue, 8/4/09, Robert Kern wrote: > [snip]
[Numpy-discussion] scipy.stats.poisson.ppf raises "OverflowError: cannot convert float infinity to long"
Hello everybody, I'm using the following versions of scipy and numpy: >>> scipy.__version__ '0.6.0' >>> import numpy >>> numpy.__version__ '1.1.1' Would anybody happen to know why I get an exception when calling the scipy.stats.poisson.ppf function: >>> from scipy.stats import * >>> poisson.ppf(0., 4) Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python2.5/site-packages/scipy/stats/distributions.py", line 3601, in ppf place(output,cond2,self.b) File "/usr/lib64/python2.5/site-packages/numpy/lib/function_base.py", line 957, in place return _insert(arr, mask, vals) OverflowError: cannot convert float infinity to long >>> Thanks a lot in advance, Masha liu...@usc.edu
Re: [Numpy-discussion] scipy.stats.poisson.ppf raises "OverflowError: cannot convert float infinity to long"
On Tue, Aug 4, 2009 at 6:36 PM, Maria Liukis wrote: > [snip] > OverflowError: cannot convert float infinity to long >>> stats.poisson.ppf(0., 4) 13.0 >>> stats.poisson.cdf(13, 4) 0.2367158465667 should be fixed since scipy 0.7.0 Josef
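For anyone curious what actually went wrong: the traceback above shows the discrete ppf code placing the distribution's upper bound (infinity, for the Poisson) into an integer-typed output for the q = 0 edge case. A pure-Python sketch of what a discrete ppf computes — an illustration only, not scipy's actual implementation, and `poisson_cdf`/`poisson_ppf` are made-up names — makes the needed special case visible:

```python
import math

def poisson_cdf(k, mu):
    # P(X <= k) for a Poisson(mu) variable, by direct summation.
    return sum(math.exp(-mu) * mu**i / math.factorial(i)
               for i in range(int(k) + 1))

def poisson_ppf(q, mu, kmax=10000):
    # Smallest integer k with poisson_cdf(k, mu) >= q.
    if not 0.0 <= q <= 1.0:
        raise ValueError("q must be in [0, 1]")
    if q == 0.0:
        # Special-case the lower edge instead of reaching for the
        # (infinite) upper bound, which is what blew up in the
        # traceback above.  Recent scipy versions appear to return
        # one below the support here, i.e. -1 for the Poisson.
        return -1
    for k in range(kmax + 1):
        if poisson_cdf(k, mu) >= q:
            return k
    raise OverflowError("q too close to 1 for kmax=%d" % kmax)
```

With these definitions, `poisson_ppf(0.5, 4)` gives 4 (the Poisson(4) median); returning a sentinel at q == 0 means the infinite upper bound is never touched.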
Re: [Numpy-discussion] scipy.stats.poisson.ppf raises "OverflowError: cannot convert float infinity to long"
Josef, Thanks a bunch! Masha liu...@usc.edu On Aug 4, 2009, at 4:01 PM, josef.p...@gmail.com wrote: [snip]
Re: [Numpy-discussion] Why NaN?
On 4-Aug-09, at 2:54 PM, Gökhan Sever wrote: > I see that you should have a browser embedding plugin for IPython > which you > don't want to share with us :) Ondrej's well on his way to fixing that: http://pythonnb.appspot.com/ David
Re: [Numpy-discussion] scipy.stats.poisson.ppf raises "OverflowError: cannot convert float infinity to long"
On Tue, Aug 4, 2009 at 7:03 PM, Maria Liukis wrote: > Josef, > Thanks a bunch! > Masha You're welcome. Josef
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 4, 2009 at 6:49 PM, David Warde-Farley wrote: > On 4-Aug-09, at 2:54 PM, Gökhan Sever wrote: > > > I see that you should have a browser embedding plugin for IPython > > which you > > don't want to share with us :) > > Ondrej's well on his way to fixing that: http://pythonnb.appspot.com/ > > David Hehe :) I would not be surprised if someone brings a real python snake to the conference then :) -- Gökhan
Re: [Numpy-discussion] strange sin/cos performance
Hi all, I see something similar on my system. OK I've just done a test. System is Ubuntu 9.04 AMD64 there seems to be a regression for float32 with high values: In [47]: a=np.random.rand(1).astype(np.float32) In [48]: b=np.random.rand(1).astype(np.float64) In [49]: c=1000*np.random.rand(1).astype(np.float32) In [50]: d=1000*np.random.rand(1000).astype(np.float64) In [51]: %timeit -n 10 np.sin(a) 10 loops, best of 3: 251 µs per loop In [52]: %timeit -n 10 np.sin(b) 10 loops, best of 3: 395 µs per loop In [53]: %timeit -n 10 np.sin(c) 10 loops, best of 3: 5.65 ms per loop In [54]: %timeit -n 10 np.sin(d) 10 loops, best of 3: 87.7 µs per loop In [55]: %timeit -n 10 np.sin(c.astype(np.float64)).astype(np.float32) 10 loops, best of 3: 891 µs per loop Cheers Jochen On Mon, 3 Aug 2009 15:45:56 +0200 Emmanuelle Gouillart wrote: > Hi Andrew, > > %timeit is an Ipython magic command that uses the timeit > module, see > http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit > for more information about how to use it. So you were right to suppose > that it is not a "normal Python". > > However, I was not able to reproduce your observations. > > >>> import numpy as np > >>> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32) > >>> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64) > >>> %timeit -n 10 np.sin(a) > 10 loops, best of 3: 8.67 ms per loop > >>> %timeit -n 10 np.sin(b) > 10 loops, best of 3: 9.29 ms per loop > > Emmanuelle > > On Mon, Aug 03, 2009 at 09:32:57AM -0400, Andrew Friedley wrote: > > While working on GSoC stuff I came across this weird performance > > behavior for sine and cosine -- using float32 is way slower than > > float64. 
On a 2 GHz Opteron: > > > > sin float32 1.12447786331 > > sin float64 0.133481025696 > > cos float32 1.14155912399 > > cos float64 0.131420135498 > > > > The times are in seconds, and are best of three runs of ten > > iterations of numpy.{sin,cos} over a 1000-element array (script > > attached). I've produced similar results on a PS3 system also. > > The Opteron is running Python 2.6.1 and NumPy 1.3.0, while the PS3 > > has Python 2.5.1 and NumPy 1.1.1. > > > > I haven't jumped into the code yet, but does anyone know why > > sin/cos are ~8.5x slower for 32-bit floats compared to 64-bit > > doubles? > > > > Side question: I see people in emails writing things like 'timeit > > foo(x)' and having it run some sort of standard benchmark; how > > exactly do I do that? Is that some environment other than a normal > > Python? > > > > Thanks, > > > > Andrew > > > import timeit > > > t = timeit.Timer("numpy.sin(a)", > > "import numpy\n" > > "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, > > dtype=numpy.float32)") print "sin float32", min(t.repeat(3, 10)) > > > t = timeit.Timer("numpy.sin(a)", > > "import numpy\n" > > "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, > > dtype=numpy.float64)") print "sin float64", min(t.repeat(3, 10)) > > > t = timeit.Timer("numpy.cos(a)", > > "import numpy\n" > > "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, > > dtype=numpy.float32)") print "cos float32", min(t.repeat(3, 10)) > > > t = timeit.Timer("numpy.cos(a)", > > "import numpy\n" > > "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, > > dtype=numpy.float64)") print "cos float64", min(t.repeat(3, 10))
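Jochen's In [55] timing points at the practical stopgap: promote the float32 input to float64, evaluate the sine there, and round back to single precision. A stdlib-only sketch of that round trip (no numpy, so float32 rounding is emulated with struct; `to_f32` and `sin32_via_double` are illustrative names, not library functions):

```python
import math
import struct

def to_f32(x):
    # Round a Python float (an IEEE double) to the nearest float32 value
    # by packing it into the 4-byte 'f' format and unpacking it again.
    return struct.unpack('f', struct.pack('f', x))[0]

def sin32_via_double(xs):
    # The workaround from the thread, spelled out: compute each sine in
    # double precision, then round the result back down to float32.
    # In numpy terms this is np.sin(c.astype(np.float64)).astype(np.float32).
    return [to_f32(math.sin(x)) for x in xs]
```

This trades an extra cast for skipping whatever slow float32 path the C library takes on large arguments; whether it actually wins depends on the platform's libm, which is presumably why the follow-up asks who is and isn't on Ubuntu.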
Re: [Numpy-discussion] strange sin/cos performance
On Tue, Aug 4, 2009 at 7:18 PM, Jochen wrote: > [snip] Is anyone with this problem *not* running ubuntu? Chuck
Re: [Numpy-discussion] Why NaN?
On Tue, Aug 04, 2009 at 07:03:43PM -0500, Gökhan Sever wrote: > I would not be surprised if someone brings a real python snake into the > conference then :) http://picasaweb.google.com/ziade.tarek/PyconFR#slideshow/5342502528927090354
[Numpy-discussion] [ANN] IPython 0.10 is out.
Hi all, on behalf of the IPython development team, I'm happy to announce that we've just put out IPython 0.10 final. Many thanks to all those who contributed ideas, bug reports and code. You can download it from the usual location: - http://ipython.scipy.org/moin/Download: direct links to various formats - http://ipython.scipy.org/dist: all files are stored here. The official documentation for this release can be found at: - http://ipython.scipy.org/doc/rel-0.10/html: as HTML pages. - http://ipython.scipy.org/doc/rel-0.10/ipython.pdf: as a single PDF. In brief, this release gathers all recent work and in a sense closes a cycle of the current useful-but-internally-messy structure of the IPython code. We are now well into the work of a major internal cleanup that will inevitably change some APIs and will likely take some time to stabilize, so the 0.10 release should be used for a while until the dust settles on the development branch. The 0.10 release fixes many bugs, including some very problematic ones (a major memory leak with repeated %run is closed), and also brings a number of new features, stability improvements and improved documentation. Some highlights: - Improved WX-based ipythonx and ipython-wx tools, suitable for embedding into other applications and standalone use. - Better interactive demos with the IPython.demo module. - Refactored ipcluster with support for local execution, MPI, PBS and systems with SSH key access preconfigured. - Integration with the TextMate editor in the %edit command. The full release notes are available here with all the details: http://ipython.scipy.org/doc/rel-0.10/html/changes.html#release-0-10 We hope you enjoy it, please report any problems as usual either on the mailing list, or by filing a bug report at our Launchpad tracker: https://bugs.launchpad.net/ipython Cheers, The IPython team.