Re: [Numpy-discussion] loadtxt ndmin option
On 4. mai 2011, at 20.33, Benjamin Root wrote:

> On Wed, May 4, 2011 at 7:54 PM, Derek Homeier wrote:
>> On 05.05.2011, at 2:40AM, Paul Anton Letnes wrote:
>>>
>>> But: Isn't the numpy.atleast_2d and numpy.atleast_1d functions written for
>>> this? Shouldn't we reuse them? Perhaps it's overkill, and perhaps it will
>>> reintroduce the 'transposed' problem?
>>
>> Yes, good point, one could replace the
>>     X.shape = (X.size, )
>> with
>>     X = np.atleast_1d(X),
>> but for the ndmin=2 case, we'd need to replace
>>     X.shape = (X.size, 1)
>> with
>>     X = np.atleast_2d(X).T -
>> not sure which solution is more efficient in terms of memory access etc...
>>
>> Cheers, Derek
>
> I can confirm that the current behavior is not sufficient for all of the
> original corner cases that ndmin was supposed to address. Keep in mind that
> np.loadtxt takes a one-column data file and a one-row data file down to the
> same shape. I don't see how the current code is able to produce the correct
> array shape when ndmin=2. Do we have some sort of counter in loadtxt for
> counting the number of rows and columns read? Could we use those to help
> guide the ndmin=2 case?
>
> I think that using atleast_1d(X) might be a bit overkill, but it would be
> very clear as to the code's intent. I don't think we have to worry about
> memory usage if we limit its use to only situations where ndmin is greater
> than the number of dimensions of the array. In those cases, the array is
> either an empty result, a scalar value (in which memory access is trivial),
> or 1-d (in which a transpose is cheap).

What if one does things the other way around - avoid calling squeeze until
_after_ doing the atleast_Nd() magic? That way the row/column information
should be conserved, right? Also, we avoid transposing, memory use, ...

Oh, and someone could conceivably have a _looong_ 1D file, but would want it
read as a 2D array.
Paul

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
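Paul's "squeeze last" ordering can be sketched as follows (a minimal
illustration only, not the actual loadtxt code; `finalize` is a hypothetical
helper standing in for loadtxt's post-processing):

```python
import numpy as np

def finalize(X, ndmin=0):
    # Proposed order: pad the dimensions first, squeeze afterwards,
    # so the row/column orientation from the parser is preserved
    # and no transpose is needed.
    if ndmin == 1:
        X = np.atleast_1d(X)
    elif ndmin == 2:
        X = np.atleast_2d(X)
    if X.ndim > ndmin:
        X = X.squeeze()
    return X

col = np.ones((3, 1))   # as parsed from a one-column file
row = np.ones((1, 3))   # as parsed from a one-row file

finalize(col, ndmin=2).shape   # (3, 1) -- stays a column
finalize(row, ndmin=2).shape   # (1, 3) -- stays a row
finalize(col, ndmin=0).shape   # (3,)   -- squeezed, the old default
```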
Re: [Numpy-discussion] loadtxt ndmin option
On Wed, May 4, 2011 at 7:54 PM, Derek Homeier <
de...@astro.physik.uni-goettingen.de> wrote:

> On 05.05.2011, at 2:40AM, Paul Anton Letnes wrote:
>>
>> But: Isn't the numpy.atleast_2d and numpy.atleast_1d functions written
>> for this? Shouldn't we reuse them? Perhaps it's overkill, and perhaps it
>> will reintroduce the 'transposed' problem?
>
> Yes, good point, one could replace the
>     X.shape = (X.size, )
> with
>     X = np.atleast_1d(X),
> but for the ndmin=2 case, we'd need to replace
>     X.shape = (X.size, 1)
> with
>     X = np.atleast_2d(X).T -
> not sure which solution is more efficient in terms of memory access etc...
>
> Cheers, Derek

I can confirm that the current behavior is not sufficient for all of the
original corner cases that ndmin was supposed to address. Keep in mind that
np.loadtxt takes a one-column data file and a one-row data file down to the
same shape. I don't see how the current code is able to produce the correct
array shape when ndmin=2. Do we have some sort of counter in loadtxt for
counting the number of rows and columns read? Could we use those to help
guide the ndmin=2 case?

I think that using atleast_1d(X) might be a bit overkill, but it would be
very clear as to the code's intent. I don't think we have to worry about
memory usage if we limit its use to only situations where ndmin is greater
than the number of dimensions of the array. In those cases, the array is
either an empty result, a scalar value (in which memory access is trivial),
or 1-d (in which a transpose is cheap).

Ben Root
Re: [Numpy-discussion] loadtxt ndmin option
On 05.05.2011, at 2:40AM, Paul Anton Letnes wrote:

> But: Isn't the numpy.atleast_2d and numpy.atleast_1d functions written for
> this? Shouldn't we reuse them? Perhaps it's overkill, and perhaps it will
> reintroduce the 'transposed' problem?

Yes, good point, one could replace the
    X.shape = (X.size, )
with
    X = np.atleast_1d(X),
but for the ndmin=2 case, we'd need to replace
    X.shape = (X.size, 1)
with
    X = np.atleast_2d(X).T -
not sure which solution is more efficient in terms of memory access etc...

Cheers,
Derek
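The two replacements Derek describes can be checked quickly (a sketch,
assuming the parser hands back a flattened three-element result):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])   # flattened single-row/single-column data

a = np.atleast_1d(X)            # candidate for X.shape = (X.size, )
b = np.atleast_2d(X).T          # candidate for X.shape = (X.size, 1)

a.shape   # (3,)
b.shape   # (3, 1) -- atleast_2d gives (1, 3); the .T turns it into a column
```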
Re: [Numpy-discussion] loadtxt ndmin option
On 4. mai 2011, at 17.34, Derek Homeier wrote:

> Hi Paul,
>
> I've got back to your suggestion re. the ndmin flag for loadtxt from a few
> weeks ago...
>
> On 27.03.2011, at 12:09PM, Paul Anton Letnes wrote:
>
>>>> 1562:
>>>> I attach a possible patch. This could also be the default behavior to my
>>>> mind, since the function caller can simply call numpy.squeeze if needed.
>>>> Changing default behavior would probably break old code, however.
>>>
>>> See comments on Trac as well.
>>
>> Your patch is better, but there is one thing I disagree with.
>> 808    if X.ndim < ndmin:
>> 809        if ndmin == 1:
>> 810            X.shape = (X.size, )
>> 811        elif ndmin == 2:
>> 812            X.shape = (X.size, 1)
>> The last line should be:
>> 812            X.shape = (1, X.size)
>> If someone wants a 2D array out, they would most likely expect a one-row
>> file to come out as a one-row array, not the other way around. IMHO.
>
> I think you are completely right for the test case with one row. More
> generally though, since a file of N rows and M columns is read into an
> array of shape (N, M), ndmin=2 should enforce X.shape = (1, X.size) for
> single-row input, and X.shape = (X.size, 1) for single-column input.
> I thought this would be handled automatically by preserving the original 2
> dimensions, but apparently with single-row/multi-column input an extra
> dimension 1 is prepended when the array is returned from the parser. I've
> put up a fix for this at
>
> https://github.com/dhomeier/numpy/compare/master...ndmin-cols
>
> and also tested the patch against 1.6.0.rc2.
>
> Cheers, Derek

Looks sensible to me at least!

But: Isn't the numpy.atleast_2d and numpy.atleast_1d functions written for
this? Shouldn't we reuse them? Perhaps it's overkill, and perhaps it will
reintroduce the 'transposed' problem?

Paul
Re: [Numpy-discussion] loadtxt ndmin option
Hi Paul,

I've got back to your suggestion re. the ndmin flag for loadtxt from a few
weeks ago...

On 27.03.2011, at 12:09PM, Paul Anton Letnes wrote:

>>> 1562:
>>> I attach a possible patch. This could also be the default
>>> behavior to my mind, since the function caller can simply call
>>> numpy.squeeze if needed. Changing default behavior would probably
>>> break old code, however.
>>
>> See comments on Trac as well.
>
> Your patch is better, but there is one thing I disagree with.
> 808    if X.ndim < ndmin:
> 809        if ndmin == 1:
> 810            X.shape = (X.size, )
> 811        elif ndmin == 2:
> 812            X.shape = (X.size, 1)
> The last line should be:
> 812            X.shape = (1, X.size)
> If someone wants a 2D array out, they would most likely expect a one-row file
> to come out as a one-row array, not the other way around. IMHO.

I think you are completely right for the test case with one row. More
generally though, since a file of N rows and M columns is read into an array
of shape (N, M), ndmin=2 should enforce X.shape = (1, X.size) for single-row
input, and X.shape = (X.size, 1) for single-column input.
I thought this would be handled automatically by preserving the original 2
dimensions, but apparently with single-row/multi-column input an extra
dimension 1 is prepended when the array is returned from the parser. I've put
up a fix for this at

https://github.com/dhomeier/numpy/compare/master...ndmin-cols

and also tested the patch against 1.6.0.rc2.

Cheers,
Derek
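The disputed line 812 comes down to which way a flattened 1-D result is
promoted under ndmin=2. A minimal sketch of the two candidates (illustration
only, not the patch itself):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])   # flattened result the parser hands back

# Patch as posted (line 812): always promote to a column vector.
col = X.reshape(X.size, 1)      # shape (3, 1) -- right for one-column files

# Paul's correction: promote to a row vector instead.
row = X.reshape(1, X.size)      # shape (1, 3) -- right for one-row files
```

Derek's conclusion is that both are needed: which one applies depends on
whether the input file had a single row or a single column, which is why the
fix tracks the original two dimensions rather than picking one promotion.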
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
On Tue, May 3, 2011 at 1:18 PM, Ralf Gommers wrote:

> Hi,
>
> I am pleased to announce the availability of the second release
> candidate of NumPy 1.6.0.
>
> Compared to the first release candidate, one segfault on (32-bit
> Windows + MSVC) and several memory leaks were fixed. If no new
> problems are reported, the final release will be in one week.
>
> Sources and binaries can be found at
> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/
> For (preliminary) release notes see below.
>
> Enjoy,
> Ralf

Minor issue I just noticed on my recently installed Ubuntu 11.04 machine.
The setup script is making a call to 'svnversion'. Doesn't impact the build
or anything, but I only noticed it because svn hasn't been installed yet on
that machine. Don't know if it is something that ought to be cleaned up or
not.

Otherwise, all tests on my machines (Fedora 13 x86 python 2.6, and Ubuntu
11.04 x86, python 2.7) work just fine. I will be testing some of the changes
to file-loading later tonight.

Good release!

Ben Root
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
On 04.05.2011, at 8:42PM, Ralf Gommers wrote: > == > FAIL: test_return_character.TestF90ReturnCharacter.test_all > -- > Traceback (most recent call last): > File "/sw/lib/python3.2/site-packages/nose/case.py", line 188, in runTest >self.test(*self.arg) > File > "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", > line 136, in test_all >self.check_function(getattr(self.module.f90_return_char, name)) > File > "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", > line 12, in check_function >r = t(array('ab'));assert_( r==asbytes('a'),repr(r)) > File "/sw/lib/python3.2/site-packages/numpy/testing/utils.py", line 34, in > assert_ >raise AssertionError(msg) > AssertionError: b' ' > > All the previous failures on ppc are gone now, and with Python2.[4567] all > tests are passing! > On Intel MacOS X (10.5/i386 and 10.6/x86_64) everything is OK for Python > 2.5-3.2 as well. > > > Looks like Fortran. Maybe we should lst the Fortran compiler when the full > test suite is run. > > lst = list? That would be useful. > It seems to be listed in case of an f2py error, so maybe this could be done for failures as well... The compiler versions used to compile numpy would also be useful, I suppose. Anyway, the failure above turns up with numpy built with either of the system gcc (4.0.1 or 4.2.1) and gfortran from the fink-installed gcc 4.2.4 or 4.4.4. Unfortunately I cannot build numpy itself with the fink gcc versions, since they don't support the '-faltivec' flag. > Is this failure a problem or can it wait? Definitely not a show-stopper for 1.6.0, afaiac. I noticed another strange behaviour of f2py under OS X 10.5 - ppc and i386: When I have g77 installed, it seems to no longer find the (still present) gfortran, causing other test errors (maybe not surprising, since it is g77 3.4.3, but I'd still think gfortran should be given preference, if both are installed). 
Cheers, Derek ERROR: test_return_real.TestF77ReturnReal.test_all -- Traceback (most recent call last): File "/sw/lib/python2.7/site-packages/nose/case.py", line 371, in setUp try_run(self.inst, ('setup', 'setUp')) File "/sw/lib/python2.7/site-packages/nose/util.py", line 478, in try_run return func() File "/sw/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 347, in setUp module_name=self.module_name) File "/sw/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper memo[key] = func(*a, **kw) File "/sw/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 162, in build_code module_name=module_name) File "/sw/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper memo[key] = func(*a, **kw) File "/sw/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 134, in build_module % (cmd[4:], asstr(out))) RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403', '/scratch/derek/tmp/tmp6cjHzz/tmpZt9fgZ.f'] running build [...] build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Found executable /sw/bin/f77 absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Found executable /sw/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using build_ext building '_test_ext_module_5403' extension compiling C sources C compiler: gcc -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes creating /scratch/derek/tmp/tmpeRWXKV/scratch creating 
/scratch/derek/tmp/tmpeRWXKV/scratch/derek creating /scratch/derek/tmp/tmpeRWXKV/scratch/derek/tmp creating /scratch/derek/tmp/tmpeRWXKV/scratch/derek/tmp/tmpeRWXKV creating /scratch/derek/tmp/tmpeRWXKV/scratch/derek/tmp/tmpeRWXKV/src.macosx-10.5-ppc-2.7 compile options: '-I/scratch/derek/tmp/tmpeRWXKV/src.macosx-10.5-ppc-2.7 -I/sw/lib/python2.7/site-packages/numpy/core/include -I/sw/include/python2.7 -c' gcc: /scratch/derek/tmp/tmpeRWXKV/src.macosx-10.5-ppc-2.7/_test_ext_module_5403module.c gcc: /scratch/derek/tmp/tmpeRWXKV/src.macosx-10.5-ppc-2.7/fortranobject.c compiling Fortran sources Fortran f77 compiler: /sw/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating /scratch/derek/tmp/tmpeRWXKV/scratch/derek/t
Re: [Numpy-discussion] numpy easy_install fails for python 3.2
Hi, On Wed, May 4, 2011 at 1:23 PM, Ralf Gommers wrote: > > > On Wed, May 4, 2011 at 6:53 PM, Matthew Brett > wrote: >> >> Hi, >> >> I can imagine that this is low-priority, but I have just been enjoying >> pytox for automated virtualenv testing: >> >> http://codespeak.net/tox/index.html >> >> which revealed that numpy download-build-install via easy_install >> (distribute) fails with the appended traceback ending in "ValueError: >> 'build/py3k/numpy' is not a directory". > > I think it would be good to just say "wontfix" immediately, rather than just > leaving a ticket open and not do anything (like we did with > http://projects.scipy.org/numpy/ticket/860). Ouch - yes - I see what you mean. > It seems tox can also use pip (which works with py3k now), does that work > for you? I think current tox 0.9 uses virtualenv5 for python3.2 and has to use distribute, I believe. Current tip of pytox appears to use virtualenv 1.6.1 for python 3.2, and does use pip, but generates the same error in the end. I've appended the result of a fresh python3.2 virtualenv and a "pip install numpy". 
Sorry - I know these are not fun problems, See you, Matthew RefactoringTool: /home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/core/defchararray.py Running from numpy source directory.Traceback (most recent call last): File "", line 14, in File "/home/mb312/.virtualenvs/bare-32/build/numpy/setup.py", line 211, in setup_package() File "/home/mb312/.virtualenvs/bare-32/build/numpy/setup.py", line 204, in setup_package configuration=configuration ) File "/home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/distutils/core.py", line 152, in setup config = configuration() File "/home/mb312/.virtualenvs/bare-32/build/numpy/setup.py", line 151, in configuration config.add_subpackage('numpy') File "/home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/distutils/misc_util.py", line 972, in add_subpackage caller_level = 2) File "/home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/distutils/misc_util.py", line 941, in get_subpackage caller_level = caller_level + 1) File "/home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/distutils/misc_util.py", line 878, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/setup.py", line 5, in configuration config = Configuration('numpy',parent_package,top_path) File "/home/mb312/.virtualenvs/bare-32/build/numpy/build/py3k/numpy/distutils/misc_util.py", line 713, in __init__ raise ValueError("%r is not a directory" % (package_path,)) ValueError: 'build/py3k/numpy' is not a directory ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
On Tue, May 3, 2011 at 11:22 PM, Ilan Schnell wrote: > I'm seeing these three failures on Solaris 5.10 (x86_64, using Python > 2.7.1): > > == > FAIL: Test basic arithmetic function errors > -- > Traceback (most recent call last): > File > "/home/demo/master/lib/python2.7/site-packages/numpy/testing/decorators.py", > line 215, in knownfailer >return f(*args, **kwargs) > File > "/home/demo/master/lib/python2.7/site-packages/numpy/core/tests/test_numeric.py", > line 321, in test_floating_exceptions >lambda a,b:a/b, ft_tiny, ft_max) > File > "/home/demo/master/lib/python2.7/site-packages/numpy/core/tests/test_numeric.py", > line 271, in assert_raises_fpe >"Type %s did not raise fpe error '%s'." % (ftype, fpeerr)) > File > "/home/demo/master/lib/python2.7/site-packages/numpy/testing/utils.py", > line 34, in assert_ >raise AssertionError(msg) > AssertionError: Type did not raise fpe error > 'underflow'. > This is #1755, it's just not marked as knownfail on Solaris. I'll do that. > > == > FAIL: test_zero_nzero (test_umath.TestArctan2SpecialValues) > -- > Traceback (most recent call last): > File > "/home/demo/master/lib/python2.7/site-packages/numpy/core/tests/test_umath.py", > line 322, in test_zero_nzero >assert_almost_equal(ncu.arctan2(np.PZERO, np.NZERO), np.pi) > File > "/home/demo/master/lib/python2.7/site-packages/numpy/testing/utils.py", > line 468, in assert_almost_equal >raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: 0.0 > DESIRED: 3.141592653589793 > > == > FAIL: test_zero_pzero (test_umath.TestArctan2SpecialValues) > -- > Traceback (most recent call last): > File > "/home/demo/master/lib/python2.7/site-packages/numpy/core/tests/test_umath.py", > line 328, in test_zero_pzero >assert_arctan2_isnzero(np.NZERO, np.PZERO) > File > "/home/demo/master/lib/python2.7/site-packages/numpy/core/tests/test_umath.py", > line 310, in assert_arctan2_isnzero >assert (ncu.arctan2(x, y) == 0 and np.signbit(ncu.arctan2(x, y))), > 
> "arctan(%s, %s) is %s, not -0" % (x, y, ncu.arctan2(x, y))
> AssertionError: arctan(-0.0, 0.0) is 0.0, not -0

These are really corner cases. Can you open a ticket?

> I'm not sure what the state of Solaris support is these days, but
> I remember being able to run all tests without any failures.

Pretty good apparently.

Cheers,
Ralf
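The values these tests expect are fixed by C99 Annex F / IEEE 754, and
CPython's math.atan2 follows the same rules as numpy's arctan2 for signed
zeros, so the corner cases can be illustrated without numpy (a sketch):

```python
import math

# atan2 special values exercised by TestArctan2SpecialValues:
math.atan2(0.0, -0.0)     # pi:   atan2(+0, -0) is defined as +pi
r = math.atan2(-0.0, 0.0) # -0.0: atan2(-0, +0) is defined as negative zero
math.copysign(1.0, r)     # -1.0: confirms r carries the minus sign,
                          # since -0.0 == 0.0 compares equal
```

The failing Solaris results (0.0 instead of pi, and +0.0 instead of -0.0)
suggest the platform's libm does not honor the sign of zero here.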
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
On Wed, May 4, 2011 at 2:29 AM, Charles R Harris wrote: > > > On Tue, May 3, 2011 at 4:58 PM, Derek Homeier < > de...@astro.physik.uni-goettingen.de> wrote: > >> Hi Ralf, >> >> > I am pleased to announce the availability of the second release >> > candidate of NumPy 1.6.0. >> > >> > Compared to the first release candidate, one segfault on (32-bit >> > Windows + MSVC) and several memory leaks were fixed. If no new >> > problems are reported, the final release will be in one week. >> >> I found a problem apparently related to string handling on MacOS X >> 10.5/ppc with Python3 - not a new one though, at least it seemed to >> be present with 1.6.0b2: >> >> >>> numpy.test('full') >> Running unit tests for numpy >> NumPy version 1.6.0rc2 >> NumPy is installed in /sw/lib/python3.2/site-packages/numpy >> Python version 3.2 (r32:88445, Mar 1 2011, 18:28:16) [GCC 4.0.1 (Apple >> Inc. build 5493)] >> nose version 1.0.0 >> ... >> == >> FAIL: test_return_character.TestF77ReturnCharacter.test_all >> -- >> Traceback (most recent call last): >> File "/sw/lib/python3.2/site-packages/nose/case.py", line 188, in >> runTest >>self.test(*self.arg) >> File >> "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", >> line 78, in test_all >>self.check_function(getattr(self.module, name)) >> File >> "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", >> line 12, in check_function >>r = t(array('ab'));assert_( r==asbytes('a'),repr(r)) >> File "/sw/lib/python3.2/site-packages/numpy/testing/utils.py", line 34, >> in assert_ >>raise AssertionError(msg) >> AssertionError: b' ' >> >> == >> FAIL: test_return_character.TestF90ReturnCharacter.test_all >> -- >> Traceback (most recent call last): >> File "/sw/lib/python3.2/site-packages/nose/case.py", line 188, in >> runTest >>self.test(*self.arg) >> File >> "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", >> line 136, in test_all 
>>self.check_function(getattr(self.module.f90_return_char, name)) >> File >> "/sw/lib/python3.2/site-packages/numpy/f2py/tests/test_return_character.py", >> line 12, in check_function >>r = t(array('ab'));assert_( r==asbytes('a'),repr(r)) >> File "/sw/lib/python3.2/site-packages/numpy/testing/utils.py", line 34, >> in assert_ >>raise AssertionError(msg) >> AssertionError: b' ' >> >> All the previous failures on ppc are gone now, and with Python2.[4567] all >> tests are passing! >> On Intel MacOS X (10.5/i386 and 10.6/x86_64) everything is OK for Python >> 2.5-3.2 as well. >> >> > Looks like Fortran. Maybe we should lst the Fortran compiler when the full > test suite is run. > > lst = list? That would be useful. Is this failure a problem or can it wait? Ralf ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
On Tue, May 3, 2011 at 11:24 PM, wrote:

> On Tue, May 3, 2011 at 5:06 PM, Ralf Gommers wrote:
>> On Tue, May 3, 2011 at 10:35 PM, Christoph Gohlke wrote:
>>>
>>> On 5/3/2011 11:18 AM, Ralf Gommers wrote:
>>>> Hi,
>>>>
>>>> I am pleased to announce the availability of the second release
>>>> candidate of NumPy 1.6.0.
>>>>
>>>> Compared to the first release candidate, one segfault on (32-bit
>>>> Windows + MSVC) and several memory leaks were fixed. If no new
>>>> problems are reported, the final release will be in one week.
>>>>
>>>> Sources and binaries can be found at
>>>> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/
>>>> For (preliminary) release notes see below.
>>>>
>>>> Enjoy,
>>>> Ralf
>>>
>>> Looks good. The msvc9/MKL builds now pass all tests on win32 and
>>> win-amd64, python 2.6, 2.7, 3.1, and 3.2.
>>
>> Good, thanks for testing.
>>
>>> One scipy test failure reported earlier remains, but that is probably no
>>> release blocker.
>>> <http://mail.scipy.org/pipermail/numpy-discussion/2011-April/055877.html>
>>
>> That's a problem in scipy.stats, that only showed up recently because
>> of a bug fix in numpy.testing.
>
> Sorry, I don't have 1.6 to test, but what are assertions like
>
> np.testing.assert_array_less(2, np.inf)
> np.testing.assert_array_less(np.array([ 0.911, 1.065, 1.325, 1.587]), np.inf)
>
> supposed to be with numpy 1.6 ?

The same as what they were before ideally, but I get an AssertionError. No
tests for assert_array_less, so I missed this.

The easiest way to fix this I can think of is to add a switch to
assert_array_compare that determines whether or not to special-case nan/inf.

What do you think about these ones:

>>> assert_array_less([1, np.inf], [2, np.inf])
>>> assert_array_less([1, np.nan], [2, np.nan])

They both pass now. The first one did not with 1.5, the second one did (I
think). It seems to me that there's no obvious answer, it's just not very
well-defined.
Ralf
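The ambiguity Ralf describes reduces to how "<" behaves elementwise for inf
and nan. Plain Python floats follow the same IEEE 754 rules, so the corner
cases can be illustrated without choosing a numpy version (a sketch):

```python
inf = float("inf")
nan = float("nan")

# assert_array_less is built on elementwise "<":
2.0 < inf     # True:  any finite value is less than inf
inf < inf     # False: so [1, inf] < [2, inf] cannot hold at every position
nan < nan     # False: nan compares False against everything, itself included
```

Whether the inf/nan positions should be special-cased (treated as "equal
placeholders") or compared strictly is exactly the design choice that the
proposed switch in assert_array_compare would expose.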
Re: [Numpy-discussion] numpy easy_install fails for python 3.2
On Wed, May 4, 2011 at 6:53 PM, Matthew Brett wrote:

> Hi,
>
> I can imagine that this is low-priority, but I have just been enjoying
> pytox for automated virtualenv testing:
>
> http://codespeak.net/tox/index.html
>
> which revealed that numpy download-build-install via easy_install
> (distribute) fails with the appended traceback ending in "ValueError:
> 'build/py3k/numpy' is not a directory".

I think it would be good to just say "wontfix" immediately, rather than just
leaving a ticket open and not do anything (like we did with
http://projects.scipy.org/numpy/ticket/860).

It seems tox can also use pip (which works with py3k now), does that work
for you?

Ralf

> easy_install for pythons 2.5 - 2.7 work fine.
>
> Best,
>
> Matthew
[Numpy-discussion] numpy easy_install fails for python 3.2
Hi, I can imagine that this is low-priority, but I have just been enjoying pytox for automated virtualenv testing: http://codespeak.net/tox/index.html which revealed that numpy download-build-install via easy_install (distribute) fails with the appended traceback ending in "ValueError: 'build/py3k/numpy' is not a directory". easy_install for pythons 2.5 - 2.7 work fine. Best, Matthew RefactoringTool: /tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/compat/py3k.py Running from numpy source directory.Converting to Python3 via 2to3... Traceback (most recent call last): File "../bin/easy_install", line 9, in load_entry_point('distribute==0.6.14', 'console_scripts', 'easy_install')() File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 1855, in main with_ei_usage(lambda: File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 1836, in with_ei_usage return f() File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 1859, in distclass=DistributionWithoutHelpCommands, **kw File "/usr/lib/python3.2/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python3.2/distutils/dist.py", line 919, in run_commands self.run_command(cmd) File "/usr/lib/python3.2/distutils/dist.py", line 938, in run_command cmd_obj.run() File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 342, in run self.easy_install(spec, not self.no_deps) File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 582, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File 
"/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 612, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 802, in install_eggs return self.build_and_install(setup_script, setup_base) File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 1079, in build_and_install self.run_setup(setup_script, setup_base, args) File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/command/easy_install.py", line 1068, in run_setup run_setup(setup_script, args) File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/sandbox.py", line 30, in run_setup lambda: exec(compile(open( File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/sandbox.py", line 71, in run return func() File "/home/mb312/dev_trees/nibabel/.tox/py32/lib/python3.2/site-packages/distribute-0.6.14-py3.2.egg/setuptools/sandbox.py", line 33, in {'__file__':setup_script, '__name__':'__main__'}) File "setup.py", line 211, in File "setup.py", line 204, in setup_package File "/tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/distutils/core.py", line 152, in setup File "setup.py", line 151, in configuration File "/tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/distutils/misc_util.py", line 972, in add_subpackage File "/tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/distutils/misc_util.py", line 941, in get_subpackage File "/tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/distutils/misc_util.py", line 878, in _get_configuration_from_setup_py File "numpy/setup.py", line 5, in configuration File 
"/tmp/easy_install-xr2px3/numpy-1.5.1/build/py3k/numpy/distutils/misc_util.py", line 713, in __init__ ValueError: 'build/py3k/numpy' is not a directory ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Numpy steering group?
On Wed, May 4, 2011 at 11:14, Matthew Brett wrote:

> Hi,
>
> On Tue, May 3, 2011 at 7:58 PM, Robert Kern wrote:
>> I can't speak for the rest of the group, but as for myself, if you
>> would like to draft such a letter, I'm sure I will agree with its
>> contents.
>
> Thank you - sadly I am not confident in deserving your confidence, but
> I will do my best to say something sensible. Any objections to a
> public google doc?

Even better!

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] Numpy steering group?
Hi, On Tue, May 3, 2011 at 7:58 PM, Robert Kern wrote: > On Tue, May 3, 2011 at 12:07, Matthew Brett wrote: >> Hi, >> >> On Sat, Apr 30, 2011 at 5:21 AM, Ralf Gommers >> wrote: >>> On Wed, Apr 27, 2011 at 8:52 PM, Matthew Brett >>> wrote: Hi, This is just to follow up on a dead thread of mine a little while back. I was asking about letters for Clint Whaley's tenure case, from numpy, but I realized that I don't know who 'numpy' is :) Is there in fact a numpy steering group? Who is best to write letters representing the 'numpy community'? >>> >>> At http://scipy.org/Developer_Zone there's a list of people under a >>> big header "steering committee". It seems to me that writing such a >>> letter representing the community is one of the purposes that >>> committee could serve. >> >> Ah - yes - thanks for the reply. >> >> In the interests of general transparency - and given that no-one from >> that group has replied to this email - how should the group best be >> addressed? By personal email? That seems to break the open-source >> matra of everything on-list: >> >> http://producingoss.com/en/setting-tone.html#avoid-private-discussions > > Having project-relevant *discussions* on-list doesn't preclude getting > someone's *attention* off-list. Yes, that's true. My worry was that, having put the question on the list, and not had an answer, it might send a bad signal if it was obvious that I had only got a reply because I'd asked for one off-list. > I can't speak for the rest of the group, but as for myself, if you > would like to draft such a letter, I'm sure I will agree with its > contents. Thank you - sadly I am not confident in deserving your confidence, but I will do my best to say something sensible. Any objections to a public google doc? See you, Matthew ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] optimizing ndarray.__setitem__
On Wed, May 4, 2011 at 6:19 AM, Christoph Groth wrote:
> Dear numpy experts,
>
> I have noticed that with Numpy 1.5.1 the operation
>
> m[::2] += 1.0
>
> takes twice as long as
>
> t = m[::2]
> t += 1.0
>
> where "m" is some large matrix. This is of course because the first
> snippet is equivalent to
>
> t = m[::2]
> t += 1.0
> m[::2] = t
>
> I wonder whether it would not be a good idea to optimize
> ndarray.__setitem__ to not execute an assignment of a slice onto itself.
> Is there any good reason why this is not being done already?
>
> best, Christoph

You'd better time this in 1.6 too. ;)

https://github.com/numpy/numpy/commit/f60797ba64ccf33597225d23b893b6eb11149860

The case of boolean mask indexing can't benefit so easily from this
optimization, but I think it could see a big performance benefit if
combined __index__ + __i__ operators were added to Python. Something
to consider, anyway.

-Mark
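[Editor's note: the boolean-mask case Mark mentions goes through the same
syntactic expansion, but with a copy rather than a view; a small sketch
(array contents and names are illustrative, not from the thread):]

```python
import numpy as np

m = np.arange(6, dtype=float)
mask = m > 2.0

# m[mask] += 1.0 expands to the same three steps as the slice case:
t = m[mask]      # boolean (fancy) indexing returns a *copy*, not a view
t += 1.0         # in-place add operates on the copy only
m[mask] = t      # the write-back does real work here and can't be elided

assert np.array_equal(m, [0.0, 1.0, 2.0, 4.0, 5.0, 6.0])
```

Because `t` is a copy, skipping the final `__setitem__` would silently
discard the update, which is why this case needs language-level support
(Mark's combined `__index__` + in-place operators) rather than a check
inside numpy.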
Re: [Numpy-discussion] optimizing ndarray.__setitem__
On Wed, May 4, 2011 at 08:19, Christoph Groth wrote:
> Dear numpy experts,
>
> I have noticed that with Numpy 1.5.1 the operation
>
> m[::2] += 1.0
>
> takes twice as long as
>
> t = m[::2]
> t += 1.0
>
> where "m" is some large matrix. This is of course because the first
> snippet is equivalent to
>
> t = m[::2]
> t += 1.0
> m[::2] = t
>
> I wonder whether it would not be a good idea to optimize
> ndarray.__setitem__ to not execute an assignment of a slice onto itself.
> Is there any good reason why this is not being done already?

We didn't think of it. If you can write up a patch that works safely
and shows a performance improvement, it's probably worth putting in.
It's probably not *that* common of a bottleneck, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
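[Editor's note: anyone wanting to verify the performance claim before
patching can reproduce the comparison with `timeit`; a sketch (array
size and repeat count are illustrative, and the exact ratio depends on
the NumPy version):]

```python
import timeit

setup = "import numpy as np; m = np.zeros((2000, 2000))"

# augmented assignment on the slice: getitem + iadd + setitem
t_slow = timeit.timeit("m[::2] += 1.0", setup=setup, number=50)

# explicit view: getitem + iadd only, no write-back
t_fast = timeit.timeit("t = m[::2]; t += 1.0", setup=setup, number=50)

# On NumPy 1.5.x the first form was roughly 2x slower because of the
# redundant write-back; later versions optimize the slice case away.
print(t_slow, t_fast)
```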
[Numpy-discussion] optimizing ndarray.__setitem__
Dear numpy experts,

I have noticed that with Numpy 1.5.1 the operation

m[::2] += 1.0

takes twice as long as

t = m[::2]
t += 1.0

where "m" is some large matrix. This is of course because the first
snippet is equivalent to

t = m[::2]
t += 1.0
m[::2] = t

I wonder whether it would not be a good idea to optimize
ndarray.__setitem__ to not execute an assignment of a slice onto itself.
Is there any good reason why this is not being done already?

best,
Christoph
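[Editor's note: the expansion Christoph describes can be checked
directly; a minimal sketch (array shape is illustrative):]

```python
import numpy as np

m = np.zeros((1000, 1000))

# The augmented assignment on a slice...
m[::2] += 1.0

# ...is equivalent to this three-step sequence:
m2 = np.zeros((1000, 1000))
t = m2[::2]        # __getitem__: basic slicing returns a view, no copy
t += 1.0           # in-place add through the view
m2[::2] = t        # __setitem__: copies the view back onto itself

# Both paths give the same result; the final copy is redundant work,
# which is the optimization opportunity being proposed.
assert np.array_equal(m, m2)
```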
[Numpy-discussion] [ANN] EuroScipy 2011 - deadline approaching
= EuroScipy 2011 - Deadline Approaching =

Beware: the talk submission deadline is approaching. You can submit
your contribution until Sunday May 8.

- The 4th European meeting on Python in Science -
**Paris, Ecole Normale Supérieure, August 25-28 2011**

We are happy to announce the 4th EuroScipy meeting, in Paris, August
2011. The EuroSciPy meeting is a cross-disciplinary gathering focused
on the use and development of the Python language in scientific
research. This event strives to bring together both users and
developers of scientific tools, as well as academic research and
state-of-the-art industry.

Main topics
===========

- Presentations of scientific tools and libraries using the Python
  language, including but not limited to:
  - vector and array manipulation
  - parallel computing
  - scientific visualization
  - scientific data flow and persistence
  - algorithms implemented or exposed in Python
  - web applications and portals for science and engineering.
- Reports on the use of Python in scientific achievements or ongoing
  projects.
- General-purpose Python tools that can be of special interest to the
  scientific community.

Tutorials
=========

There will be two tutorial tracks at the conference: an introductory
one, to bring attendees up to speed with the Python language as a
scientific tool, and an advanced track, during which experts of the
field will lecture on specific advanced topics such as advanced use of
numpy, scientific visualization, software engineering...

Keynote Speaker: Fernando Perez
===============================

We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills
Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is
the original author of the enhanced interactive python shell IPython
and a very active contributor to the Python for Science ecosystem.
Important dates
===============

Talk submission deadline: Sunday May 8
Program announced:        Sunday May 29
Tutorials tracks:         Thursday August 25 - Friday August 26
Conference track:         Saturday August 27 - Sunday August 28

Call for papers
===============

We are soliciting talks that discuss topics related to scientific
computing using Python. These include applications, teaching, future
development directions, and research. We welcome contributions from
industry as well as the academic world. Indeed, industrial research
and development, as well as academic research, face the challenge of
mastering IT tools for exploration, modeling and analysis. We look
forward to hearing about your recent breakthroughs using Python!

Submission guidelines
=====================

- We solicit talk proposals in the form of a one-page abstract.
- Submissions whose main purpose is to promote a commercial product or
  service will be refused.
- All accepted proposals must be presented at the EuroSciPy conference
  by at least one author.

The one-page abstracts are for conference planning and selection
purposes only. We will later select papers for publication of
post-proceedings in a peer-reviewed journal.
How to submit an abstract
=========================

To submit a talk to the EuroScipy conference follow the instructions
here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers

Organizers
==========

Chairs:

- Gaël Varoquaux (INSERM, Unicog team, and INRIA, Parietal team)
- Nicolas Chauvat (Logilab)

Local organization committee:

- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Jean-Philippe Chauvat (Logilab)

Tutorial chair:

- Valentin Haenel (MKP, Technische Universität Berlin)

Program committee:

- Chair: Tiziano Zito (MKP, Technische Universität Berlin)
- Romain Brette (ENS Paris, DEC)
- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Eric Lebigot (Laboratoire Kastler Brossel, Université Pierre et Marie Curie)
- Konrad Hinsen (Soleil Synchrotron, CNRS)
- Hans Petter Langtangen (Simula laboratories)
- Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute)
- Mike Müller (Python Academy)
- Didrik Pinte (Enthought Inc)
- Marc Poinot (ONERA)
- Christophe Pradal (CIRAD/INRIA, Virtual Plantes team)
- Andreas Schreiber (DLR)
- Stéfan van der Walt (University of Stellenbosch)

Website
=======

http://www.euroscipy.org/conference/euroscipy_2011