Re: [Numpy-discussion] FeatureRequest: support for array construction from iterators

2015-12-11 Thread Juan Nunez-Iglesias
Nathaniel,

> IMO this is better than making np.array(iter) internally call list(iter) or equivalent

Yeah but that's not the only option:

from itertools import chain

def fromiter_awesome_edition(iterable):
    elem = next(iterable)
    dtype = whatever_numpy_does_to_infer_dtypes_from_lists(elem)
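The snippet above is cut off by the archive. A hypothetical completion of the same idea — peek at one element to pick the dtype, push it back with itertools.chain, and let np.fromiter make a single pass — might look like this (the helper name is mine, and np.result_type is only a crude stand-in for whatever inference NumPy really does on lists):

```python
import itertools
import numpy as np

def fromiter_sketch(iterable):
    # Peek at the first element to choose a dtype (np.result_type is a
    # rough stand-in for NumPy's list-based inference), then chain it back
    # in front and let np.fromiter consume everything in a single pass.
    # Like the original sketch, this raises StopIteration on an empty input.
    it = iter(iterable)
    first = next(it)
    dtype = np.result_type(first)
    return np.fromiter(itertools.chain([first], it), dtype=dtype)
```

This keeps the single-pass property Nathaniel asks for below: nothing is ever materialized as a Python list.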

Re: [Numpy-discussion] FeatureRequest: support for array construction from iterators

2015-12-11 Thread Nathaniel Smith
Constructing an array from an iterator is fundamentally different from constructing an array from an in-memory data structure like a list, because in the iterator case it's necessary to either use a single-pass algorithm or else create extra temporary buffers that cause much higher memory overhead.

Re: [Numpy-discussion] FeatureRequest: support for array construction from iterators

2015-12-11 Thread Stephan Sahm
numpy.fromiter is neither numpy.array, nor does it work like numpy.array(list(...)), since the dtype argument is mandatory. Is there a reason why np.array(...) should not work on iterators? I have the feeling that such requests get (repeatedly) dismissed, but until now I haven't found a compellin
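To make the distinction concrete: np.fromiter needs the dtype handed to it up front, while np.array infers it — but only from an in-memory sequence (a minimal illustration):

```python
import numpy as np

# np.fromiter consumes a generator lazily, but there is no inference
# step: the dtype must be supplied explicitly.
a = np.fromiter((x * x for x in range(5)), dtype=np.int64)

# np.array infers the dtype, but only after the generator has been
# materialized into a list, i.e. fully buffered in memory.
b = np.array(list(x * x for x in range(5)))
```

Both produce the same values; the difference is who pays for the dtype inference and the intermediate storage.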

Re: [Numpy-discussion] Memory mapping and NPZ files

2015-12-11 Thread Erik Bray
On Wed, Dec 9, 2015 at 9:51 AM, Mathieu Dubois wrote:
> Dear all,
>
> If I am correct, using mmap_mode with Npz files has no effect i.e.:
> f = np.load("data.npz", mmap_mode="r")
> X = f['X']
> will load all the data in memory.
>
> Can somebody confirm that?
>
> If I'm correct, the mmap_mode argum

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Eric Moore
I have a mostly complete wrapping of the double-double type from the QD library (http://crd-legacy.lbl.gov/~dhbailey/mpdist/) into a numpy dtype. The real problem is, as David pointed out, user dtypes aren't quite full equivalents of the builtin dtypes. I can post the code if there is interest. S
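For readers wondering what a double-double does under the hood: the core building block is an error-free transformation such as Knuth's two-sum, which recovers the rounding error of a float64 addition exactly, so a value can be carried as an unevaluated (hi, lo) pair with ~32 significant digits. A toy sketch of that one primitive (not the QD wrapping described above):

```python
def two_sum(a, b):
    # Knuth's error-free addition: returns (s, err) such that
    # s + err == a + b exactly, where s is the rounded float64 sum
    # and err is the rounding error that the addition discarded.
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

# The error term captures bits that plain float64 addition throws away:
s, err = two_sum(1.0, 1e-20)   # s == 1.0, err recovers the 1e-20
```

A full double-double type layers two-sum and its multiplication analogue (two-product) into arithmetic on (hi, lo) pairs; that is essentially what the QD library implements in C.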

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Charles R Harris
On Fri, Dec 11, 2015 at 10:45 AM, Nathaniel Smith wrote:
> On Dec 11, 2015 7:46 AM, "Charles R Harris" wrote:
> >
> > On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
> >>
> >> From time to time it is asked on forums how to extend precision of
> >> computation on Numpy array. Th

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Nathaniel Smith
On Dec 11, 2015 7:46 AM, "Charles R Harris" wrote:
>
> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
>>
>> From time to time it is asked on forums how to extend precision of
>> computation on Numpy array. The most common answer
>> given to this question is: use the dtype=object with so

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread David Cournapeau
On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald wrote:
> Actually, GCC implements 128-bit floats in software and provides them as
> __float128; there are also quad-precision versions of the usual functions.
> The Intel compiler provides this as well, I think, but I don't think
> Microsoft compile

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread josef.pktd
On Fri, Dec 11, 2015 at 11:22 AM, Anne Archibald wrote:
> Actually, GCC implements 128-bit floats in software and provides them as
> __float128; there are also quad-precision versions of the usual functions.
> The Intel compiler provides this as well, I think, but I don't think
> Microsoft compile

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Anne Archibald
Actually, GCC implements 128-bit floats in software and provides them as __float128; there are also quad-precision versions of the usual functions. The Intel compiler provides this as well, I think, but I don't think Microsoft compilers do. A portable quad-precision library might be less painful.
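From the NumPy side, the closest existing type is np.longdouble, and it is exactly the portability problem Anne describes: on most x86 Linux builds it is 80-bit x87 extended precision (~18 decimal digits), on MSVC it is just float64, and it is a true IEEE binary128 (~33 digits) only on a few platforms. A quick, platform-dependent check:

```python
import numpy as np

# What longdouble actually is varies by platform and compiler:
# itemsize 16 with precision 18 -> 80-bit x87 padded to 16 bytes (typical Linux/x86),
# itemsize 8 with precision 15  -> plain float64 (MSVC),
# itemsize 16 with precision 33 -> a true IEEE binary128 (rare).
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize, info.precision)
```

So code that needs a guaranteed ~32 significant digits cannot rely on longdouble; hence the interest in a software quad or double-double dtype.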

[Numpy-discussion] ANN: pyMIC v0.7 Released

2015-12-11 Thread Klemm, Michael
Announcement: pyMIC v0.7

I'm happy to announce the release of pyMIC v0.7. pyMIC is a Python module to offload computation in a Python program to the Intel Xeon Phi coprocessor. It contains offloadable arrays and device management functions. It supports invocation of

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Chris Barker - NOAA Federal
> There has also been some talk of adding a user type for ieee 128 bit doubles.
> I've looked once for relevant code for the latter and, IIRC, the available
> packages were GPL :(.

This looks like it's BSD-ish: http://www.jhauser.us/arithmetic/SoftFloat.html

Don't know if it's any good C

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Charles R Harris
On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
> From time to time it is asked on forums how to extend precision of
> computation on Numpy array. The most common answer
> given to this question is: use the dtype=object with some arbitrary
> precision module like mpmath or gmpy.
> See
> h

[Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread Thomas Baruchel
From time to time it is asked on forums how to extend precision of computation on Numpy array. The most common answer given to this question is: use the dtype=object with some arbitrary precision module like mpmath or gmpy. See http://stackoverflow.com/questions/6876377/numpy-arbitrary-precisi
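The dtype=object pattern looks like the following — shown here with the stdlib's decimal module standing in for mpmath/gmpy, since the mechanism is identical: the array just holds Python objects and arithmetic dispatches to them element by element, trading all of NumPy's vectorized speed for whatever precision the objects provide.

```python
from decimal import Decimal, getcontext
import numpy as np

getcontext().prec = 32  # ~32 significant decimal digits

# dtype=object stores arbitrary Python objects; each arithmetic op is a
# Python-level call into the object, so this is flexible but slow.
a = np.array([Decimal(1), Decimal(2)], dtype=object)
b = a / Decimal(3)
print(b[0])  # 32-digit quotient, far beyond float64's ~16 digits
```

The per-element Python dispatch is exactly why the thread below looks for a real fixed-width extended-precision dtype instead.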

Re: [Numpy-discussion] Numpy intermittent seg fault

2015-12-11 Thread Antoine Pitrou
Hi,

On Fri, 11 Dec 2015 10:05:59 +1000 Jacopo Sabbatini wrote:
>
> I'm experiencing random segmentation faults from numpy. I have generated a
> core dump and extracted a stack trace, the following:
>
> #0 0x7f3a8d921d5d in getenv () from /lib64/libc.so.6
> #1 0x7f3a843bde21 in blas

Re: [Numpy-discussion] Memory mapping and NPZ files

2015-12-11 Thread Sturla Molden
Mathieu Dubois wrote:
> The point is precisely that, you can't do memory mapping with Npz files
> (while it works with Npy files).

The operating system can memory map any file. But as npz-files are compressed, you will need to uncompress the contents in your memory mapping to make sense of it.
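The difference is easy to demonstrate (a small sketch; file names and the temp directory are arbitrary): mmap_mode takes effect for a plain .npy file but is silently ignored for members of a compressed .npz archive, which are decompressed into ordinary in-memory arrays.

```python
import os
import tempfile
import numpy as np

tmp = tempfile.mkdtemp()
npy = os.path.join(tmp, "data.npy")
npz = os.path.join(tmp, "data.npz")

x = np.arange(1000.0)
np.save(npy, x)
np.savez_compressed(npz, X=x)

# .npy: mmap_mode="r" returns a np.memmap backed by the file on disk.
m = np.load(npy, mmap_mode="r")

# .npz: the member must be decompressed, so mmap_mode is effectively
# ignored and f["X"] comes back as an ordinary in-memory ndarray.
f = np.load(npz, mmap_mode="r")
X = f["X"]

print(type(m).__name__, type(X).__name__)
```

This matches Sturla's point: the OS can map the zip file itself, but the compressed bytes inside it are useless until decompressed into fresh memory.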