Hi all, and thank you for all your hard work with this. 

I wanted to provide more of an "end user" perspective than I think has been 
present in this discussion so far. Over the past month, I've quickly skimmed 
some emails on this thread and skipped others altogether. I am far from a NumPy 
novice, but essentially *all* of the discussion went over my head. For a while 
my attitude was "Oh well, far smarter people than me are dealing with this, 
I'll let them figure it out." Looking at the participants in the thread, I 
worry that this is the attitude almost everyone has taken, and that the 
proposed solution will end up too hard to use for any meaningful adoption. 
Certainly with `__array_function__` I only took interest when our 
tests broke with 1.17rc1.

Today I was particularly interested because I'm working to improve scikit-image 
support for pyopencl.Array inputs. I went back and read the original NEP and 
the latest iteration. Thank you again for the discussion, because the latest is 
indeed a vast improvement over the original.

I think the very motivation has the wrong focus. I would summarise it as "we've 
been coming up with all kinds of ways to do multiple dispatch for array-likes, 
and we've found that we need more ways, so let's come up with the One True 
Way." I think the focus should be on the users and community. Something along 
the lines of: "New implementations of array computing are cropping up left, 
right, and centre in Python (not to speak of other languages!). There are good 
reasons for this (GPUs, distributed computing, sparse data, etc), but it leaves 
users and library authors in a pickle: how can they ensure that their 
functions, written with NumPy array inputs and outputs in mind, work well in 
this ecosystem?"

With this new motivation in mind, I think that the user story below is (a) the 
best part of the NEP, but (b) underdeveloped. The NEP is all about "if I want 
my array implementation to work with this fancy dispatch system, what do I need 
to do?". But there should be more of "in user situations X, Y, and Z, what is 
the desired behaviour?"

> The way we propose the overrides will be used by end users is::
> 
>  # On the library side
> 
>  import numpy.overridable as unp
> 
>  def library_function(array):
>      array = unp.asarray(array)
>      # Code using unumpy as usual
>      return array
> 
>  # On the user side:
> 
>  import numpy.overridable as unp
>  import uarray as ua
>  import dask.array as da
> 
>  ua.register_backend(da)
> 
>  library_function(dask_array) # works and returns dask_array
> 
>  with unp.set_backend(da):
>      library_function([1, 2, 3, 4]) # actually returns a Dask array.
> 
> Here, ``backend`` can be any compatible object defined either by NumPy or an
> external library, such as Dask or CuPy. Ideally, it should be the module
> ``dask.array`` or ``cupy`` itself.

Some questions about the above:

- What happens if I call `library_function(dask_array)` without registering 
`da` as a backend first? Will `unp.asarray` try to instantiate a potentially 
100GB array? This seems bad.
- To get `library_function`, I presumably have to do `from fancy_array_library 
import library_function`. Can the code in `fancy_array_library` itself register 
backends, and if so, should/would fancy array libraries that want to maximise 
compatibility pre-register a bunch of backends so that users don't have to? 
(See the sketch just below.)
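
To make that second question concrete, here's a rough sketch of what I 
imagine pre-registration could look like, assuming `ua.register_backend` 
works as in the quoted example (the loop and the idea of putting this in 
`fancy_array_library/__init__.py` are my invention, not anything the NEP 
specifies):

import importlib
import uarray as ua  # assuming the uarray API from the NEP's example

# Hypothetical fancy_array_library/__init__.py: pre-register every
# backend we can import, so users don't have to do it themselves.
for _name in ('dask.array', 'cupy'):
    try:
        _backend = importlib.import_module(_name)
    except ImportError:
        continue  # backend not installed; skip it
    ua.register_backend(_backend)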

Here are a couple of code snippets that I would *want* to "just work". Maybe 
it's unreasonable, but imho the NEP should spell these out as use cases: 
specifically, how library_function should be written so that they work, what 
dask.array and pytorch would need to do so that they work, or, failing that, 
why the NEP doesn't solve them. I've sketched my own guesses after each 
snippet.

1.
from dask import array as da
from fancy_array_library import library_function # hopefully skimage one day ;)

data = da.from_zarr('myfile.zarr')
# result should still be dask, all things being equal
result = library_function(data)
result.to_zarr('output.zarr')
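
My best guess, reading the quoted example, is that library_function would be 
written something like this (a sketch only; `unp.sum` stands in for real 
scikit-image-style work, and whether `unp.asarray` passes a dask array 
through unchanged is exactly my first question above):

import numpy.overridable as unp  # per the NEP; doesn't exist yet

def library_function(array):
    array = unp.asarray(array)  # must *not* materialise a dask array
    # stand-in for real work; any unumpy function should dispatch
    return unp.sum(array, axis=0)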

2.
from dask import array as da
from magic_library import pytorch_predict

data = da.from_zarr('myfile.zarr')
# normally here I would use e.g. data.map_overlap, but could this be done magically?
result = pytorch_predict(data)
result.to_zarr('output.zarr')
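
For contrast, here's roughly how I'd write the non-magical version of 
pytorch_predict today, handling dask explicitly inside magic_library 
(`model` and `_predict_chunk` are hypothetical; `map_overlap` is real 
dask API):

import dask.array as da
import numpy as np
import torch

def _predict_chunk(chunk):
    # hypothetical: `model` is a torch module loaded elsewhere
    with torch.no_grad():
        return model(torch.from_numpy(chunk)).numpy()

def pytorch_predict(data):
    # the dask-specific branch I'd love the NEP to make unnecessary
    if isinstance(data, da.Array):
        return data.map_overlap(_predict_chunk, depth=1, boundary='reflect')
    return _predict_chunk(np.asarray(data))

The NEP text should say whether (and how) pytorch_predict could shed that 
isinstance check.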


There's probably a whole bunch of other "user stories" one can concoct, and no 
doubt many from the authors of the NEP themselves, but they don't come through 
in the NEP text. My apologies that I haven't read *all* the references: I 
understand that it is frustrating if the above are addressed there, but I think 
it's important to have this kind of context in the NEP itself.

Thank you again, and I hope the above is helpful rather than feels like more 
unnecessary churn.

Juan.