Re: [Numpy-discussion] unittest for np.random.poisson with lambda=0
Sebastian Haase wrote:
> Thanks Robert,
> Just for the archive: I hope the problem I was thinking of is what is
> referred to here:
> http://mail.scipy.org/pipermail/numpy-discussion/2005-July/004992.html
> i.e. it should have been fixed as bug #1123145 in Numeric.
>
> (see also here:
> https://nanohub.org/infrastructure/rappture-runtime/browser/trunk/Numeric-24.2/changes.txt?rev=42
> )
>
> I still haven't learned how to write unittests (sorry)

That's very easy. Assuming you have a new random generator foo which should
always return one, it may be as easy as writing:

from numpy.random import foo

def test_foo():
    assert foo() == 1

in numpy/random/tests/test_random.py. You can follow the nose docs (or the
existing unit tests) for more complex examples. If you are unsure about how
to write something, just open a ticket on the numpy Trac with a patch, and
regularly bother us until we review it.

cheers,

David

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
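For the specific case discussed in this thread, a minimal regression test in that style might look like the following (a sketch only; the test-function name is invented, and it relies on poisson with lam=0 deterministically returning zeros):

```python
import numpy as np

def test_poisson_zero_lambda():
    # Regression check: poisson with lam=0 should always return 0.
    samples = np.random.poisson(0, size=1000)
    assert (samples == 0).all()

test_poisson_zero_lambda()
```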
Re: [Numpy-discussion] Import error in builds of 7726
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
> Did you make sure to build from scratch and clean the working directory
> first ?
>
> I don't see the error on my macbook: umath.so has npy_cexp* functions
> defined.
>
> David
>

Just now tried deleting everything and pulling down numpy again from SVN.
Same result.
Re: [Numpy-discussion] Import error in builds of 7726
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
> Did you make sure to build from scratch and clean the working directory
> first ?
>
> I don't see the error on my macbook: umath.so has npy_cexp* functions
> defined.
>
> David
>

Yeah, here is my build script -- it removes the build directory entirely:

#!/bin/sh
export MACOSX_DEPLOYMENT_TARGET=10.6
export CFLAGS="-arch i386 -arch x86_64"
export FFLAGS="-arch i386 -arch x86_64"
export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch i386 -arch x86_64"
rm -rf build
python setupegg.py config_fc --fcompiler gfortran config \
    -L/Developer/src/freetype-2.3.5 -L/Developer/src/libpng-1.2.24 \
    build bdist_egg
Re: [Numpy-discussion] unittest for np.random.poisson with lambda=0
On Wed, Nov 11, 2009 at 15:47, Sebastian Haase wrote:
> Thanks Robert,
> Just for the archive: I hope the problem I was thinking of is what is
> referred to here:
> http://mail.scipy.org/pipermail/numpy-discussion/2005-July/004992.html
> i.e. it should have been fixed as bug #1123145 in Numeric.
>
> (see also here:
> https://nanohub.org/infrastructure/rappture-runtime/browser/trunk/Numeric-24.2/changes.txt?rev=42
> )

The PRNG is completely different in numpy.

> I still haven't learned how to write unittests (sorry)

If this issue is important to you, now might be a good time to start.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
Re: [Numpy-discussion] unittest for np.random.poisson with lambda=0
Thanks Robert,
Just for the archive: I hope the problem I was thinking of is what is
referred to here:
http://mail.scipy.org/pipermail/numpy-discussion/2005-July/004992.html
i.e. it should have been fixed as bug #1123145 in Numeric.

(see also here:
https://nanohub.org/infrastructure/rappture-runtime/browser/trunk/Numeric-24.2/changes.txt?rev=42
)

I still haven't learned how to write unittests (sorry)

- Sebastian Haase

On Wed, Nov 11, 2009 at 8:53 PM, Robert Kern wrote:
> On Wed, Nov 11, 2009 at 02:30, Sebastian Haase wrote:
>> Hi,
>> maybe this is an obsolete concern,
>> but at some point in the past
>> N.random.poisson would not always return 0 for lambda being zero.
>> (My post at the time should be in the (numarray) list archive)
>>
>> So the question is, is this still a valid concern for the current numpy?
>
> No.
>
>> Could there be a unittest added that does something like this:
>> >>> a=N.random.poisson(0, 1)
>> >>> N.alltrue(a==0)
>> True
>
> Write up a patch.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
[Numpy-discussion] Solaris atan2 differences
There are a set of three unit test failures on Solaris, all related to
differences in the atan2 implementation, reported as bugs 1201, 1202 and
1203. They boil down to the following three differences between Solaris
(C compiler 5.3) and "other" platforms -- when I say "other" I mean at
least Linux x86 and x86_64, and Mac OS X x86.

import numpy as np
import numpy.core.umath as ncu

ncu.arctan2(np.PZERO, np.NZERO)
    0.0                  (Solaris)
    3.1415926535897931   (other)

ncu.arctan2(np.NZERO, np.NZERO)
    0.0                  (Solaris)
    -3.1415926535897931  (other)

np.signbit(ncu.arctan2(np.NZERO, np.PZERO))
    False  (Solaris)
    True   (other)

The easy fix seems to be to force it to use the npy_math version of atan2
on Solaris (as we already do for MS Windows). (Patch attached.) Indeed
this fixes the unit tests. Does that seem right?

Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA

Index: numpy/core/src/private/npy_config.h
===
--- numpy/core/src/private/npy_config.h (revision 7726)
+++ numpy/core/src/private/npy_config.h (working copy)
@@ -9,4 +9,9 @@
 #undef HAVE_HYPOT
 #endif
 
+/* Disable broken Solaris math functions */
+#ifdef __SUNPRO_C
+#undef HAVE_ATAN2
+#endif
+
 #endif
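The "other"-platform results above are the C99/IEEE-754 signed-zero semantics for atan2. They can be checked directly (a sketch; uses the literals 0.0 and -0.0 for positive and negative zero, and assumes a conforming libm or numpy's npy_math fallback):

```python
import numpy as np

# atan2(+0, -0) is pi, atan2(-0, -0) is -pi, and atan2(-0, +0) is -0,
# so its sign bit is set -- the "other"-platform behaviour above.
print(np.arctan2(0.0, -0.0))               # pi
print(np.arctan2(-0.0, -0.0))              # -pi
print(np.signbit(np.arctan2(-0.0, 0.0)))   # True
```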
Re: [Numpy-discussion] unittest for np.random.poisson with lambda=0
On Wed, Nov 11, 2009 at 02:30, Sebastian Haase wrote:
> Hi,
> maybe this is an obsolete concern,
> but at some point in the past
> N.random.poisson would not always return 0 for lambda being zero.
> (My post at the time should be in the (numarray) list archive)
>
> So the question is, is this still a valid concern for the current numpy?

No.

> Could there be a unittest added that does something like this:
> >>> a=N.random.poisson(0, 1)
> >>> N.alltrue(a==0)
> True

Write up a patch.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
Re: [Numpy-discussion] parsing tab separated files into dictionaries - alternative to genfromtxt?
On Wed, Nov 11, 2009 at 11:53 AM, per freem wrote:
> hi all,
>
> i've been using genfromtxt to parse tab separated files for plotting
> purposes in matplotlib. the problem is that genfromtxt seems to give
> only two ways to access the contents of the file: one is by column,
> where you can use:
>
> d = genfromtxt(...)
>
> and then do d['header_name1'] to access the column named by
> 'header_name1', d['header_name2'] to access the column named by
> 'header_name2', etc. or it will allow you to traverse the file line
> by line, and then access each field by number, i.e.
>
> for line in d:
>     field1 = line[0]
>     field2 = line[1]
>     # etc.
>
> the problem is that the second method relies on knowing the order of
> the fields rather than just their name, and the first method does not
> allow line by line iteration.
>
> ideally what i would like is to be able to traverse each line of the
> parsed file, and then refer to each of its fields by header name, so
> that if the column order in the file changes my program will be
> unaffected:
>
> for line in d:
>     field1 = line['header_name1']
>     field2 = line['header_name2']
>
> is there a way to do this using standard matplotlib/numpy/scipy
> utilities? i could write my own code to do this but it seems like
> something somebody probably already thought of a good representation
> for and has implemented a more optimized version than i could write on
> my own. does such a thing exist?
>
> thanks very much

I have a constructor class to read space-delimited ASCII files.

class NasaFile(object):

    def __init__(self, filename):
        ...
        # Reading data
        _data = np.loadtxt(filename, dtype='float', skiprows=self.NLHEAD).T
        # Read using data['Time'] syntax
        self.data = dict(zip(self.VDESC, _data))
        ...

There is a meta-header in this type of data, and NLHEAD is the variable
telling me how many lines to skip to reach the actual data. VDESC tells
me what each column is (starting with a Time variable and many other
different measurement results).

There is no column dependence in this case, and it generically reads
specifically formatted data of any length. For instance:

from nasafile import NasaFile
c = NasaFile("mydata")
c.data['Time']

gets me the whole Time column as an ndarray. Why do you think
dictionaries are not sufficient for your case? I was using locals() to
create automatic names, but that was not a very wise approach.

--
Gökhan
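The loadtxt-plus-dict pattern described above can be sketched end to end with a small in-memory stand-in for such a file (the column names and values here are invented for illustration; NLHEAD corresponds to the header line consumed before loadtxt runs, and VDESC to the list of column names):

```python
import numpy as np
from io import StringIO

# Hypothetical whitespace-delimited data: one header line of column
# names (playing the role of VDESC), then numeric rows.
raw = StringIO("""Time Temp Pressure
0.0 21.5 1013.2
1.0 21.7 1012.9
2.0 21.6 1012.5
""")

names = raw.readline().split()   # column names
_data = np.loadtxt(raw).T        # transpose so each row is one column
data = dict(zip(names, _data))   # name -> column ndarray

print(data['Time'])              # [0. 1. 2.]
```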
[Numpy-discussion] parsing tab separated files into dictionaries - alternative to genfromtxt?
hi all,

i've been using genfromtxt to parse tab separated files for plotting
purposes in matplotlib. the problem is that genfromtxt seems to give
only two ways to access the contents of the file: one is by column,
where you can use:

d = genfromtxt(...)

and then do d['header_name1'] to access the column named by
'header_name1', d['header_name2'] to access the column named by
'header_name2', etc. or it will allow you to traverse the file line
by line, and then access each field by number, i.e.

for line in d:
    field1 = line[0]
    field2 = line[1]
    # etc.

the problem is that the second method relies on knowing the order of
the fields rather than just their name, and the first method does not
allow line by line iteration.

ideally what i would like is to be able to traverse each line of the
parsed file, and then refer to each of its fields by header name, so
that if the column order in the file changes my program will be
unaffected:

for line in d:
    field1 = line['header_name1']
    field2 = line['header_name2']

is there a way to do this using standard matplotlib/numpy/scipy
utilities? i could write my own code to do this but it seems like
something somebody probably already thought of a good representation
for and has implemented a more optimized version than i could write on
my own. does such a thing exist?

thanks very much
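For what it's worth, genfromtxt with names=True returns a structured array, and each row of a structured array can itself be indexed by field name, which gives exactly the line-by-line access described above. A sketch with invented data (the header names and values are placeholders):

```python
import numpy as np
from io import StringIO

# A stand-in for a tab-separated file with a header row.
raw = StringIO("header_name1\theader_name2\n1.0\t4.0\n2.0\t5.0\n")

d = np.genfromtxt(raw, delimiter='\t', names=True)

# Column access by name:
print(d['header_name1'])        # [1. 2.]

# Line-by-line iteration, fields accessed by name:
for line in d:
    field1 = line['header_name1']
    field2 = line['header_2'] if False else line['header_name2']
    print(field1, field2)
```

(The same structured array thus supports both access patterns; the row order of fields no longer matters.)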
Re: [Numpy-discussion] Solaris Sparc build broken
On Thu, Nov 12, 2009 at 12:33 AM, Michael Droettboom wrote:
> Thanks. Behind that, however, it runs into this compiler shortcoming:
>
> cc: build/src.solaris-2.8-sun4u-2.5/numpy/core/src/npymath/npy_math.c
> "numpy/core/src/npymath/npy_math_private.h", line 229: invalid type for
> bit-field: manh
> "numpy/core/src/npymath/npy_math_private.h", line 230: invalid type for
> bit-field: manl
>
> It seems that Sun compilers (before Sun Studio 12, C compiler 5.9)
> don't support bit fields larger than 32 bits. Welcome to my world :)

Damn...

> The best solution I've been able to dream up is to use a macro to get at
> the manh bitfield on these older compilers. This fix does seem to pass
> the "test_umath.py" unit tests. It's not as clean as the bit-field
> method, but it should be more portable.

I think it is the only solution, actually. I wanted to use a bit mask at
first, because I thought bitfields were a C99 feature not supported by MS
compilers (although they actually are).

> Can you think of a better solution? If this is *the* solution, we may
> want to add getters/setters for all the members (sign, exp, manl) and
> fix nextafterl in terms of those, just for consistency.

I will implement this, but I would prefer using this method everywhere,
for every compiler; it would be more robust.

David
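The shift-and-mask access being discussed (in place of >32-bit bitfields) can be illustrated in Python. This is an illustration only, assuming the big-endian IEEE quad (binary128) layout named in the thread: 1 sign bit, 15 exponent bits, then a 48-bit high and 64-bit low mantissa half:

```python
def quad_fields(b):
    """Split 16 big-endian bytes of an IEEE binary128 value into the
    sign/exp/manh/manl fields the C union models."""
    v = int.from_bytes(b, 'big')
    sign = v >> 127                      # 1 sign bit
    exp = (v >> 112) & 0x7fff            # 15 exponent bits
    manh = (v >> 64) & 0xffffffffffff    # high 48 bits of the mantissa
    manl = v & 0xffffffffffffffff        # low 64 bits of the mantissa
    return sign, exp, manh, manl

# 1.0 in binary128: sign=0, biased exponent 0x3fff, zero mantissa.
print(quad_fields(bytes([0x3f, 0xff] + [0] * 14)))  # (0, 16383, 0, 0)
```

The C macros in the patch do the same thing with pointer casts and 64-bit masks instead of relying on the compiler's bitfield support.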
Re: [Numpy-discussion] Solaris Sparc build broken
Forgot to attach the patch.

Mike

Michael Droettboom wrote:
> Thanks. Behind that, however, it runs into this compiler shortcoming:
>
> cc: build/src.solaris-2.8-sun4u-2.5/numpy/core/src/npymath/npy_math.c
> "numpy/core/src/npymath/npy_math_private.h", line 229: invalid type for
> bit-field: manh
> "numpy/core/src/npymath/npy_math_private.h", line 230: invalid type for
> bit-field: manl
>
> It seems that Sun compilers (before Sun Studio 12, C compiler 5.9)
> don't support bit fields larger than 32 bits. Welcome to my world :)
>
> You can see a discussion of this here:
> http://www.mail-archive.com/tools-disc...@opensolaris.org/msg02634.html
> and the release notes for Sun C compiler 5.9 here:
> http://docs.sun.com/source/820-4155/c.html
>
> The best solution I've been able to dream up is to use a macro to get at
> the manh bitfield on these older compilers. This fix does seem to pass
> the "test_umath.py" unit tests. It's not as clean as the bit-field
> method, but it should be more portable. You can see what I'm getting at
> in the attached (rough) patch.
>
> Can you think of a better solution? If this is *the* solution, we may
> want to add getters/setters for all the members (sign, exp, manl) and
> fix nextafterl in terms of those, just for consistency.
>
> Cheers,
> Mike
>
> David Cournapeau wrote:
>> On Wed, Nov 11, 2009 at 6:18 AM, Michael Droettboom wrote:
>>> I don't know if your 'long double' detection code is complete yet, but I
>>> thought I'd share the current build output on one of our Solaris
>>> machines. It looks like it may just be a typo difference between
>>> 'IEEE_QUAD_BE' in long_double_representation() and 'IEEE_QUAD_16B_BE' in
>>> setup.py, but I wanted to confirm.
>>
>> Yes, you're right - I don't have any machine with a 128 bits long
>> double type, so I let the typo slip.
>>
>> Could you try last revision (r7726) ?
>> David

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA

Index: numpy/core/src/npymath/npy_math_private.h
===
--- numpy/core/src/npymath/npy_math_private.h   (revision 7726)
+++ numpy/core/src/npymath/npy_math_private.h   (working copy)
@@ -174,6 +174,9 @@
 /*
  * Long double support
  */
+#define LDBL_GET_MANH(x) ((x).bits.manh)
+#define LDBL_SET_MANH(x, v) ((x).bits.manh = (v))
+
 #if defined(HAVE_LDOUBLE_INTEL_EXTENDED_12_BYTES_LE)
 /* 80 bits Intel format aligned on 12 bytes */
 union IEEEl2bits {
@@ -221,15 +224,32 @@
 #define mask_nbit_l(u) ((void)0)
 #elif defined(HAVE_LDOUBLE_IEEE_QUAD_BE)
 /* Quad precision IEEE format */
-union IEEEl2bits {
-    npy_longdouble e;
-    struct {
-        npy_uint32 sign:1;
-        npy_uint32 exp :15;
-        npy_uint64 manh:48;
-        npy_uint64 manl:64;
-    } bits;
-};
+#if defined(__SUNPRO_C) && (__SUNPRO_C < 0x590)
+union IEEEl2bits {
+    npy_longdouble e;
+    struct {
+        npy_uint32 sign:1;
+        npy_uint32 exp :15;
+        /* implicit padding */
+        npy_uint64 manl;
+    } bits;
+};
+#undef LDBL_GET_MANH
+#define LDBL_GET_MANH(x) ((*(npy_uint64 *)&((x).e)) & 0xL)
+#undef LDBL_SET_MANH
+#define LDBL_SET_MANH(x, v) ((*(npy_uint64 *)&((x).e)) |= ((v) & 0xL))
+#else
+union IEEEl2bits {
+    npy_longdouble e;
+    struct {
+        npy_uint32 sign:1;
+        npy_uint32 exp :15;
+        npy_uint64 manh:48;
+        npy_uint64 manl:64;
+    } bits;
+};
+#endif
+#define LDBL_NBIT 0
 #define mask_nbit_l(u) ((void)0)
 #elif defined(HAVE_LDOUBLE_IEEE_DOUBLE_LE)
 union IEEEl2bits {

Index: numpy/core/src/npymath/ieee754.c.src
===
--- numpy/core/src/npymath/ieee754.c.src        (revision 7726)
+++ numpy/core/src/npymath/ieee754.c.src        (working copy)
@@ -165,16 +165,16 @@
     uy.e = y;
 
     if ((ux.bits.exp == 0x7fff &&
-         ((ux.bits.manh & ~LDBL_NBIT) | ux.bits.manl) != 0) ||
+         ((LDBL_GET_MANH(ux) & ~LDBL_NBIT) | ux.bits.manl) != 0) ||
         (uy.bits.exp == 0x7fff &&
-         ((uy.bits.manh & ~LDBL_NBIT) | uy.bits.manl) != 0)) {
+         ((LDBL_GET_MANH(uy) & ~LDBL_NBIT) | uy.bits.manl) != 0)) {
         return x + y;   /* x or y is nan */
     }
     if (x == y) {
         return y;       /* x=y, return y */
     }
     if (x == 0.0) {
-        ux.bits.manh = 0;       /* return +-minsubnormal */
+        LDBL_SET_MANH(ux, 0);   /* return +-minsubnormal */
         ux.bits.manl = 1;
         ux.bits.sign =
Re: [Numpy-discussion] Solaris Sparc build broken
Thanks. Behind that, however, it runs into this compiler shortcoming:

cc: build/src.solaris-2.8-sun4u-2.5/numpy/core/src/npymath/npy_math.c
"numpy/core/src/npymath/npy_math_private.h", line 229: invalid type for
bit-field: manh
"numpy/core/src/npymath/npy_math_private.h", line 230: invalid type for
bit-field: manl

It seems that Sun compilers (before Sun Studio 12, C compiler 5.9)
don't support bit fields larger than 32 bits. Welcome to my world :)

You can see a discussion of this here:
http://www.mail-archive.com/tools-disc...@opensolaris.org/msg02634.html
and the release notes for Sun C compiler 5.9 here:
http://docs.sun.com/source/820-4155/c.html

The best solution I've been able to dream up is to use a macro to get at
the manh bitfield on these older compilers. This fix does seem to pass
the "test_umath.py" unit tests. It's not as clean as the bit-field
method, but it should be more portable. You can see what I'm getting at
in the attached (rough) patch.

Can you think of a better solution? If this is *the* solution, we may
want to add getters/setters for all the members (sign, exp, manl) and
fix nextafterl in terms of those, just for consistency.

Cheers,
Mike

David Cournapeau wrote:
> On Wed, Nov 11, 2009 at 6:18 AM, Michael Droettboom wrote:
>> I don't know if your 'long double' detection code is complete yet, but I
>> thought I'd share the current build output on one of our Solaris
>> machines. It looks like it may just be a typo difference between
>> 'IEEE_QUAD_BE' in long_double_representation() and 'IEEE_QUAD_16B_BE' in
>> setup.py, but I wanted to confirm.
>
> Yes, you're right - I don't have any machine with a 128 bits long
> double type, so I let the typo slip.
>
> Could you try last revision (r7726) ?
>
> David

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Re: [Numpy-discussion] finding close together points.
On Tue, Nov 10, 2009 at 11:15 PM, David Cournapeau <
da...@ar.media.kyoto-u.ac.jp> wrote:

> Charles R Harris wrote:
>>
>> I think Python lists are basically just expanding arrays and pointers
>> are cheap. Where you might lose is in creating python objects to put
>> in the list and not having ufuncs and the rest of the numpy machinery.
>> If you don't need the machinery, lists are probably not a bad way to go.
>
> I am not familiar enough with the kdtree code: if you need to manipulate
> core datatypes (float, etc...), I think python lists are a significant
> cost, because of pointer chasing in particular. It may or may not apply,
> but Stefan used some tricks to avoid using python lists and use basic C
> structures in some custom code (written in cython); maybe he has
> something to say.
>
> I would love having a core C library of containers - there are a few
> candidates we could take code from:

I've got a cythonized structure for union-find (relations), and we could
put together a heap pretty easily from heap sort. I've also got one for
the problem at hand, but it is for several thousand points (stars)
scattered over an image and uses C++ lists.

Chuck
Re: [Numpy-discussion] finding close together points.
----- Original Message -----
From: Christopher Barker
To: Discussion of Numerical Python
Sent: Tue, November 10, 2009 7:07:32 PM
Subject: [Numpy-discussion] finding close together points.

> Hi all,
>
> I have a bunch of points in 2-d space, and I need to find out which
> pairs of points are within a certain distance of one-another (regular
> old Euclidean norm). scipy.spatial.KDTree.query_ball_tree() seems like
> it's built for this.

Chris,

Maybe I'm missing something simple, but if your array of 2D points is
static, a KD tree for 2D nearest neighbor seems like overkill. You might
want to try the simple approach of using boxes of points to narrow things
down by sorting on the first component. If your distances are, say, 10%
of the variance, then you'll *roughly* decrease the remaining points to
search by a factor of 10.

This can get more sophisticated and is useful in higher dimensions (see:
Sameer A. Nene and Shree K. Nayar, "A Simple Algorithm for Nearest
Neighbor Search in High Dimensions", IEEE Transactions on Pattern
Analysis and Machine Intelligence 19 (9), 989 (1997)), where for a static
data set it can match KD trees in speed, but is SO much easier to program.

-- Lou Pecora, my views are my own.
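The sort-on-first-component idea above can be sketched in pure numpy. This is a rough illustration, not an optimized implementation (the function name and test points are invented): after sorting on x, only points whose x-coordinates differ by at most r can possibly be within distance r, so the inner scan stops early.

```python
import numpy as np

def close_pairs(points, r):
    """Return index pairs of 2-D points within Euclidean distance r,
    pruning candidates by sorting on the first coordinate."""
    pts = np.asarray(points, dtype=float)
    order = np.argsort(pts[:, 0])
    pts = pts[order]
    pairs = []
    n = len(pts)
    for i in range(n):
        j = i + 1
        # only points whose x-coordinate is within r can be neighbours
        while j < n and pts[j, 0] - pts[i, 0] <= r:
            if np.hypot(*(pts[j] - pts[i])) <= r:
                pairs.append((int(order[i]), int(order[j])))
            j += 1
    return pairs

pts = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (1.02, 1.01)]
print(close_pairs(pts, 0.1))   # [(0, 1), (2, 3)]
```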
[Numpy-discussion] unittest for np.random.poisson with lambda=0
Hi,
maybe this is an obsolete concern, but at some point in the past
N.random.poisson would not always return 0 for lambda being zero.
(My post at the time should be in the (numarray) list archive)

So the question is, is this still a valid concern for the current numpy?
Could there be a unittest added that does something like this:

>>> a=N.random.poisson(0, 1)
>>> N.alltrue(a==0)
True

Regards,
Sebastian Haase

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion