On Tue, Mar 14, 2017 at 8:16 AM, Ryan May wrote:
> Is https://docs.scipy.org/ being down a known issue?
>
It is. It's being worked on; the tracking issue is
https://github.com/numpy/numpy/issues/8779
Thanks for reporting,
Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
Is https://docs.scipy.org/ being down a known issue?
Ryan
--
Ryan May
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
On Mon, Mar 13, 2017 at 12:21 PM Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:
> Should it be agreed that caching is worthwhile, I would propose a very
> simple implementation. We only really need to cache a small handful of
> array data pointers for the fast allocate/deallocate cycle.
On Mon, Mar 13, 2017 at 12:57 PM Eric Wieser
wrote:
> `float(repr(a)) == a` is guaranteed for Python `float`
And `np.float16(repr(a)) == a` is guaranteed for `np.float16` (and the same
is true up to `float128`, which can be platform-dependent). Your code
doesn't work because you're deserializing to a higher precision format than
you serialized to.
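A small sketch of the distinction (this assumes numpy's shortest-repr printing, available since numpy 1.14):

```python
import numpy as np

x = np.float16(0.1)
s = str(x)          # shortest decimal string that round-trips at float16: "0.1"

# Deserializing at the SAME precision recovers the value exactly.
print(np.float16(s) == x)   # True

# Deserializing at HIGHER precision does not: "0.1" parsed as float64
# is much closer to 1/10 than the float16 value is.
print(np.float64(s) == x)   # False
print(float(x))             # the exact float16 value: 0.0999755859375
```

The string is only guaranteed to round-trip through the *same* dtype it was printed from; parsing it into a wider type resolves it to a different (closer-to-decimal) value.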
Hi,
numpy often allocates large arrays, and one factor in its performance is
faulting memory from the kernel into the process. This has a cost that is
relatively significant; for example, in this operation on large arrays it
accounts for 10-15% of the runtime:
import numpy as np
a = np.ones(1000000)
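The faulting cost can be observed with a rough timing sketch (numbers vary by machine; the point is that the first write to fresh pages includes the kernel's page-fault work, while a second write to the same pages does not):

```python
import time
import numpy as np

n = 10_000_000

t0 = time.perf_counter()
a = np.ones(n)        # fresh pages: the kernel faults them in on first write
t1 = time.perf_counter()
a[:] = 1.0            # same pages again: already mapped, so no fault cost
t2 = time.perf_counter()

print(f"first touch: {t1 - t0:.4f}s, rewrite: {t2 - t1:.4f}s")
```

The gap between the two timings is roughly the per-allocation overhead a pointer cache could avoid for repeated allocate/free cycles of same-sized arrays.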