Nathaniel,
> IMO this is better than making np.array(iter) internally call list(iter)
or equivalent
Yeah but that's not the only option:
import numpy as np
from itertools import chain

def fromiter_awesome_edition(iterable):
    elem = next(iterable)
    dtype = whatever_numpy_does_to_infer_dtypes_from_lists(elem)
    # chain the first element back on, then build the array in one pass
    return np.fromiter(chain([elem], iterable), dtype)
Constructing an array from an iterator is fundamentally different from
constructing an array from an in-memory data structure like a list,
because in the iterator case it's necessary to either use a
single-pass algorithm or else create extra temporary buffers that
cause much higher memory overhead.
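For illustration, a rough sketch of the two approaches (the values and sizes here are made up, and the data is assumed to fit a plain float64):

    import numpy as np

    gen = (i * 0.5 for i in range(10**6))

    # Buffered approach: the whole temporary list lives in memory
    # alongside the finished array until the list is freed.
    a = np.array(list(gen))

    # Single-pass approach: np.fromiter fills the output directly,
    # but the dtype (and ideally count) must be known up front.
    gen = (i * 0.5 for i in range(10**6))
    b = np.fromiter(gen, dtype=np.float64, count=10**6)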
numpy.fromiter is not numpy.array, nor does it behave like
numpy.array(list(...)), since its dtype argument is mandatory.
Is there a reason why np.array(...) should not work on iterators? I have the
feeling that such requests get (repeatedly) dismissed, but so far I haven't
found a compelling one.
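As a concrete illustration of the current behaviour (not a proposal, just what numpy does today):

    import numpy as np

    gen = (x * x for x in range(5))

    np.array(gen)                     # 0-d object array wrapping the generator itself
    np.fromiter(gen, dtype=np.int64)  # array([ 0,  1,  4,  9, 16]); dtype must be given
    np.array(list(x * x for x in range(5)))   # the usual workaround: build a list first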
On Wed, Dec 9, 2015 at 9:51 AM, Mathieu Dubois
wrote:
> Dear all,
>
> If I am correct, using mmap_mode with Npz files has no effect i.e.:
> f = np.load("data.npz", mmap_mode="r")
> X = f['X']
> will load all the data in memory.
>
> Can somebody confirm that?
>
> If I'm correct, the mmap_mode argum
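A quick way to check (hypothetical file name, tiny array just for the test):

    import numpy as np

    np.savez("data.npz", X=np.arange(10))
    f = np.load("data.npz", mmap_mode="r")
    type(f["X"])    # numpy.ndarray, not numpy.memmap -> the data was read into memory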
I have a mostly complete wrapping of the double-double type from the QD
library (http://crd-legacy.lbl.gov/~dhbailey/mpdist/) into a numpy dtype.
The real problem is, as david pointed out, user dtypes aren't quite full
equivalents of the builtin dtypes. I can post the code if there is
interest.
S
On Fri, Dec 11, 2015 at 10:45 AM, Nathaniel Smith wrote:
> On Dec 11, 2015 7:46 AM, "Charles R Harris"
> wrote:
> >
> >
> >
> > On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
> >>
> >> From time to time it is asked on forums how to extend precision of
> >> computation on Numpy array. Th
On Dec 11, 2015 7:46 AM, "Charles R Harris"
wrote:
>
>
>
> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
>>
>> From time to time it is asked on forums how to extend precision of
>> computation on Numpy array. The most common answer
>> given to this question is: use the dtype=object with so
On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald wrote:
> Actually, GCC implements 128-bit floats in software and provides them as
> __float128; there are also quad-precision versions of the usual functions.
> The Intel compiler provides this as well, I think, but I don't think
> Microsoft compile
Actually, GCC implements 128-bit floats in software and provides them as
__float128; there are also quad-precision versions of the usual functions.
The Intel compiler provides this as well, I think, but I don't think
Microsoft compilers do. A portable quad-precision library might be less
painful.
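For reference, the extended precision numpy itself exposes today is only the platform's long double, which on most x86 builds is the 80-bit extended format rather than IEEE binary128. A quick check (assuming a typical Linux/x86-64 build):

    import numpy as np

    np.finfo(np.float64).eps      # ~2.2e-16
    np.finfo(np.longdouble).eps   # ~1.1e-19 (80-bit extended), not the ~1.9e-34 of true quad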
Announcement: pyMIC v0.7
========================
I'm happy to announce the release of pyMIC v0.7.
pyMIC is a Python module to offload computation in a Python program to the
Intel Xeon Phi coprocessor. It contains offloadable arrays and device
management functions. It supports invocation of
> There has also been some talk of adding a user type for IEEE 128-bit doubles.
> I've looked once for relevant code for the latter and, IIRC, the available
> packages were GPL :(.
This looks like it's BSD-ish:
http://www.jhauser.us/arithmetic/SoftFloat.html
Don't know if it's any good.
C
On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
> From time to time it is asked on forums how to extend precision of
> computation on Numpy array. The most common answer
> given to this question is: use the dtype=object with some arbitrary
> precision module like mpmath or gmpy.
> See
> h
From time to time it is asked on forums how to extend precision of computation
on Numpy array. The most common answer
given to this question is: use the dtype=object with some arbitrary precision
module like mpmath or gmpy.
See
http://stackoverflow.com/questions/6876377/numpy-arbitrary-precisi
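A minimal sketch of that commonly suggested dtype=object route (assuming mpmath is installed):

    import numpy as np
    import mpmath

    mpmath.mp.dps = 50                                    # 50 decimal digits
    a = np.array([mpmath.mpf("0.1")] * 3, dtype=object)
    a.sum() - mpmath.mpf("0.3")                           # error ~1e-50 or smaller, vs ~1e-17 for float64

The obvious drawback, and the motivation for this thread, is that every element is a boxed Python object, so the arithmetic is no longer vectorized.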
Hi,
On Fri, 11 Dec 2015 10:05:59 +1000
Jacopo Sabbatini wrote:
>
> I'm experiencing random segmentation faults from numpy. I have generated a
> core dump and extracted the following stack trace:
>
> #0 0x7f3a8d921d5d in getenv () from /lib64/libc.so.6
> #1 0x7f3a843bde21 in blas
Mathieu Dubois wrote:
> The point is precisely that, you can't do memory mapping with Npz files
> (while it works with Npy files).
The operating system can memory-map any file. But because npz files are
compressed, you would have to decompress the mapped contents to make sense
of them.
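A small sketch of the practical consequence (hypothetical file names): if memory mapping is the goal, save the large arrays as uncompressed .npy files instead.

    import numpy as np

    X = np.zeros((1000, 1000))

    # .npy: a single uncompressed array, so mmap_mode works as expected
    np.save("X.npy", X)
    Xmm = np.load("X.npy", mmap_mode="r")            # numpy.memmap, data stays on disk

    # .npz: a zip archive; its members are decompressed/read into memory
    np.savez_compressed("data.npz", X=X)
    Xz = np.load("data.npz", mmap_mode="r")["X"]     # plain ndarray, fully loaded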