Hi,
We understand that it can be a burden; of course a larger binary is bad,
but that bad usually also comes with good, like better performance or
more features.
How much of a burden is it, and where is the line between "I need twice
as long to download it", which is just annoying, and "I cannot use it
anymore"?
Hi everyone,
We are happy to announce that Sebastian Berg has joined the team at
BIDS, and will be working full-time on NumPy for the next two years.
Sebastian has been a core contributor to the project for seven years
already, and we are delighted that he will now be able to spend even
more of his time on it.
Hi Stephan,
> Hameer, it's great that you are exploring these problems with a fresh
> approach! I'm excited to see how dispatching problems could be solved without
> the constraint of compatibility with NumPy's legacy approaches.
>
> When you have a prototype and/or design documents ready for review…
On Fri, Apr 26, 2019 at 3:10 AM Hameer Abbasi
wrote:
> Here’s how `uarray` solves each of these issues:
>
> 1. Backends… There is no default implementation.
> 2. This is handled by (thread-safe) context managers, which make
> switching easy.
> 3. There’s one coercion function per type…
On Fri, Apr 26, 2019 at 1:24 AM Ralf Gommers wrote:
> Thanks, this helped clarify what's going on here. This example is clear.
> The problem seems to be that there are two separate discussions in this
> thread:
> 1. Your original proposal, __numpy_implementation__: it does not have the
> problem of…
Hi,
Obviously this is a trade-off; if we can increase the binary size, we can
add more optimizations, for more platforms, and development will be a
little faster, because we would not have to spend time optimizing for
binary size.
If people on slow internet connections had to download numpy multiple
times…
On Friday, 26 April 2019 at 12:49:39 SAST, Ilhan Polat wrote:
Hi Ilhan,
That's an interesting link, but they provide the average, which is not a very
good indicator. I myself have a 100 Mb/s link where I live, which means that,
since Akamai ranks my country at an average speed of 6.7 Mb/s, a lot…
Here is a baseline:
https://en.wikipedia.org/wiki/List_of_countries_by_Internet_connection_speeds
Throttling those values to 60% of the nominal bandwidth probably gives a
crude estimate of the average delay an extra 1 MB would cause worldwide.
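To put a rough number on that rule of thumb, here is a small illustrative
calculation. The 6.7 Mb/s figure is the average quoted above and the 16 MB
wheel size is the one Ralf mentions elsewhere in the thread; the 60%
utilization and the helper name are only assumptions:

    def seconds_per_megabyte(link_mbit_per_s, utilization=0.6):
        # Assume only ~60% of the nominal link speed is usable in practice.
        effective_mbit_per_s = link_mbit_per_s * utilization
        return 8.0 / effective_mbit_per_s  # 1 MB = 8 Mbit

    per_mb = seconds_per_megabyte(6.7)  # ~2.0 s per MB at the 6.7 Mb/s average
    print("%.1f s per MB, %.0f s for a 16 MB wheel" % (per_mb, per_mb * 16))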
On Fri, Apr 26, 2019 at 11:49 AM Éric Depagne wrote:
> On Friday…
Here’s my take on it: the goal is basically “separation of interface from
implementation”; the NumPy reference becomes just one (reference)
implementation (kind of like CPython is today). The idea is that
unumpy/NumPy drive the interface, while there can be many implementations.
To make duck-arrays…
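As a very rough sketch of what that separation could look like (illustrative
names only, not the actual unumpy API): the interface is just a set of
signatures, and the reference implementation is one class among potentially
many that satisfy it.

    from typing import Protocol, Sequence

    class ArrayModule(Protocol):
        # The "interface": signatures only, no implementation.
        def arange(self, n: int) -> Sequence[float]: ...
        def sum(self, x: Sequence[float]) -> float: ...

    class ReferenceImplementation:
        # Plays the role of today's NumPy: one implementation among many.
        def arange(self, n: int) -> Sequence[float]:
            return [float(i) for i in range(n)]

        def sum(self, x: Sequence[float]) -> float:
            result = 0.0
            for value in x:
                result += value
            return result

    def mean(xp: ArrayModule, n: int) -> float:
        # Code written against the interface works with any implementation.
        return xp.sum(xp.arange(n)) / n

    print(mean(ReferenceImplementation(), 5))  # -> 2.0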
On Friday, 26 April 2019 at 11:10:56 SAST, Ralf Gommers wrote:
Hi Ralf,
>
> Right now a wheel is 16 MB. If we increase that by 10%/50%/100% - are we
> causing a real problem for someone?
Access to large bandwidth is not universal at all, and in many countries (I'd
even say in most of the countries…
Hi all,
In https://github.com/numpy/numpy/pull/13207 a discussion started about the
tradeoff between performance gain for one function vs increasing the size
of a NumPy build by a couple of percent. We also discussed that in the
community call on Wednesday and concluded that it may be useful to ask…
On Fri, Apr 26, 2019 at 1:02 AM Stephan Hoyer wrote:
> On Thu, Apr 25, 2019 at 3:39 PM Ralf Gommers
> wrote:
>
>>
>> On Fri, Apr 26, 2019 at 12:04 AM Stephan Hoyer wrote:
>>
>>> I do like the look of this, but keep in mind that there is a downside to
>>> exposing the implementation of NumPy functions…