Thanks for the write-up, Marten.

Nathaniel:

> Output shape feels very similar to output dtype to me, so maybe the
> general way to handle this would be to make the first callback take the
> input shapes+dtypes and return the desired output shapes+dtypes?

This hits on an interesting alternative to frozen dimensions: np.cross
could just become a regular ufunc on subarray dtypes, with signature
np.dtype((np.float64, 3)), np.dtype((np.float64, 3)) -> np.dtype((np.float64, 3)).
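For reference, subarray dtypes already exist in plain NumPy today, though
an array "absorbs" them into its shape on creation; a minimal sketch
(nothing here is from the NEP):

    import numpy as np

    vec3 = np.dtype((np.float64, 3))   # subarray dtype: "3 float64s"

    a = np.zeros(5, dtype=vec3)
    print(a.shape, a.dtype)            # (5, 3) float64 - the subarray
                                       # is folded into the array shape

    # A ufunc that understood such dtypes could treat each length-3
    # subarray as a single element, which is the alternative above.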

> Furthermore, the expansion quickly becomes cumbersome. For instance, for
> the all_equal signature of (n|1),(n|1)->() …

I think this is only a good argument when used in conjunction with the
broadcasting syntax. I don’t think it’s a reason for matmul not to have
multiple signatures. Having to write multiple signatures is a disincentive
to introducing too many overloads of the same function, which seems like a
good thing to me.

Summarizing my overall opinions:

   - I’m +0.5 on frozen dimensions. The use-cases seem reasonable, and it
   seems like an easy-ish way to get them. Allowing ufuncs to natively support
   subarray types might be a tidier solution, but that could come down the
   road.
   - I’m -1 on optional dimensions: they seem to legitimize creating many
   overloads of gufuncs. I’m already not a fan of how matmul has special cases
   for lower dimensions that don’t generalize well. To me, the best way to
   handle matmul would be to use the proposed __array_function__ to handle
   the shape-based special-case dispatching, either by:
      - Inserting dimensions and calling the true gufunc
      np.linalg.matmul_2d (which is a function I’d like direct access to
      anyway); a rough sketch of this follows below the list.
      - Dispatching to one of four ufuncs.
   - Broadcasting dimensions:
      - I know you’re not suggesting this, but: enabling broadcasting
      unconditionally for all gufuncs would be a bad idea, masking linalg bugs
      (although einsum does support broadcasting…).
      - Does it really need a per-dimension flag, rather than a global one?
      Can you give a case where that’s useful?
      - If we’d already made all_equal a gufunc, I’d be +1 on adding
      broadcasting support to it.
      - I’m -0.5 on the all_equal path in the first place. I think we
      should either have a more generic approach to combined ufuncs, or just
      declare them numba’s job.
      - Can you come up with a broadcasting use-case that isn’t just
      chaining a reduction with a broadcasting ufunc? (The pattern I mean is
      sketched after this list.)
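To make the matmul point concrete, here is a rough sketch of the
insert-dimensions option; np.linalg.matmul_2d is the hypothetical
(n,k),(k,m)->(n,m) gufunc named above, not an existing function:

    import numpy as np

    def matmul(a, b):
        # Shape-based special casing done once, at the Python level,
        # on top of a single true gufunc.
        a, b = np.asarray(a), np.asarray(b)
        a_vec, b_vec = a.ndim == 1, b.ndim == 1
        if a_vec:
            a = a[np.newaxis, :]   # vector -> 1-row matrix
        if b_vec:
            b = b[:, np.newaxis]   # vector -> 1-column matrix
        res = np.linalg.matmul_2d(a, b)  # hypothetical (n,k),(k,m)->(n,m)
        if a_vec:
            res = res[..., 0, :]   # drop the inserted row dimension
        if b_vec:
            res = res[..., 0]      # drop the inserted column dimension
        return res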
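And for the last bullet, the chaining I mean is just:

    import numpy as np

    a = np.arange(6).reshape(2, 3)
    b = np.arange(3)               # broadcasts against a

    # A broadcasting all_equal is a broadcasting ufunc followed by a
    # reduction over the core dimension:
    result = np.logical_and.reduce(np.equal(a, b), axis=-1)
    print(result)                  # [ True False]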

Eric

On Sun, 10 Jun 2018 at 16:02 Eric Wieser <wieser.eric+nu...@gmail.com>
wrote:

> Rendered here:
> https://github.com/mhvk/numpy/blob/nep-gufunc-signature-enhancement/doc/neps/nep-0020-gufunc-signature-enhancement.rst
>
>
> Eric
>
> On Sun, 10 Jun 2018 at 09:37 Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> OK, I spent my Sunday morning writing a NEP. I hope this can lead to some
>> closure...
>> See https://github.com/numpy/numpy/pull/11297
>> -- Marten
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
