Re: [Numpy-discussion] Converting np.sinc into a ufunc

2019-05-22 Thread Joshua Wilson
Re Ralf's question:

> Can you quantify the precision improvement (approximately)?

On one level you'll get a large decrease in relative error around the
zeros of the sinc function because argument reduction is being done by
a number which is exactly representable in double precision (i.e. the
number 2) versus an irrational number. For example, consider:

>>> import numpy as np
>>> import mpmath
>>> x = 1 + 1e-8
>>> (np.sinc(x) - mpmath.sincpi(x))/mpmath.sincpi(x)

On master that will give you

mpf('-5.1753363184721223e-9')

versus

mpf('1.6543612517040003e-16')

on the new branch.

*But* there are some caveats to that answer: since we are close to the
zeros of the sinc function the condition number is large, so real
world data that has already been rounded before even calling sinc will
incur the same mathematically unavoidable loss of precision.
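
To make the source of that improvement concrete, here is a self-contained
comparison of multiply-then-reduce versus reduce-then-multiply near the zero
at x = 1 (illustrative only; this is not the code in the branch):

```
import numpy as np

x = 1 + 1e-8

# Multiply first: pi*x is rounded, and the reduction inside sin works against
# the irrational pi, so the result is only good to ~8 digits near the zero.
naive = np.sin(np.pi * x) / (np.pi * x)

# Reduce first: x - 1 is computed exactly, so sin(pi*(x - 1)) is accurate to
# a few ulps and the relative error of the quotient stays near machine epsilon.
reduced = -np.sin(np.pi * (x - 1.0)) / (np.pi * x)

print(naive, reduced)  # the two values differ around the 9th significant digit
```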

On Wed, May 22, 2019 at 6:24 PM Charles R Harris wrote:
>
>
>
> On Wed, May 22, 2019 at 7:14 PM Marten van Kerkwijk wrote:
>>
>> On a more general note, if we change to a ufunc, it will get us stuck with 
>> sinc being the normalized version, where the units of the input have to be 
>> in the half-cycles preferred by signal-processing people rather than the 
>> radians preferred by mathematicians.
>>
>> In this respect, note that there is an outstanding issue about whether to 
>> allow one to choose between the two: 
>> https://github.com/numpy/numpy/issues/13457 (which itself was raised 
>> following an inconclusive PR that tried to add a keyword argument for it).
>>
>> Adding a keyword argument is much easier for a general function than for a 
>> ufunc.
>>
>
> I'd be tempted to have two sinc functions with the different normalizations. 
> Of course, one could say the same about trig functions in both radians and 
> degrees. If I had to pick one, I'd choose sinc in radians, but I think that 
> ship has sailed.
>
> Chuck


Re: [Numpy-discussion] Put type annotations in NumPy proper?

2020-03-24 Thread Joshua Wilson
> That is, is this an all-or-nothing thing where as soon as we start, 
> numpy-stubs becomes unusable?

Until NumPy is made PEP 561 compatible by adding a `py.typed` file,
type checkers will ignore the types in the repo, so in theory you can
avoid the all or nothing. In practice it's maybe trickier because
currently people can use the stubs, but they won't be able to use the
types in the repo until the PEP 561 switch is flipped. So e.g.
currently SciPy pulls the stubs from `numpy-stubs` master, allowing
for a short

find place where NumPy stubs are lacking -> improve stubs -> improve SciPy types

loop. If all development moves into the main repo then SciPy is
blocked on it becoming PEP 561 compatible before moving forward. But
you could complain that I put the cart before the horse by introducing
typing in the SciPy repo before the NumPy types were more settled, and
that's probably a fair complaint.
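
For reference, a minimal sketch of what flipping that PEP 561 switch involves,
assuming a plain setuptools-style build (hypothetical; NumPy's actual packaging
is more involved):

```
# Hypothetical packaging sketch, not NumPy's actual configuration. PEP 561
# says a package opts in to distributing types by shipping an empty
# "py.typed" marker file next to its modules (plus any .pyi stubs).
from setuptools import setup, find_packages

setup(
    name="numpy",
    packages=find_packages(),
    package_data={"numpy": ["py.typed", "*.pyi"]},  # ship the marker and stubs
    zip_safe=False,  # PEP 561 packages should not be installed zipped
)
```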

> Anyone interested in taking the lead on this?

Not that I am a core developer or anything, but I am interested in
helping to improve typing in NumPy.

On Tue, Mar 24, 2020 at 11:15 AM Eric Wieser wrote:
>
> >  Putting
> > aside ndarray, as more challenging, even annotations for numpy functions
> > and method parameters with built-in types would help, as a start.
>
> This is a good idea in principle, but one thing concerns me.
>
> If we add type annotations to numpy, does it become an error to have 
> numpy-stubs installed?
> That is, is this an all-or-nothing thing where as soon as we start, 
> numpy-stubs becomes unusable?
>
> Eric
>
> On Tue, 24 Mar 2020 at 17:28, Roman Yurchak  wrote:
>>
>> Thanks for re-starting this discussion, Stephan! I think there is
>> definitely significant interest in this topic:
>> https://github.com/numpy/numpy/issues/7370 is the issue with the largest
>> number of user likes in the issue tracker (FWIW).
>>
>> Having them in numpy, as opposed to a separate numpy-stubs repository
>> would indeed be ideal from a user perspective. When looking into it in
>> the past, I was never sure how well in sync numpy-stubs was. Putting
>> aside ndarray, as more challenging, even annotations for numpy functions
>> and method parameters with built-in types would help, as a start.
>>
>> To add to the previously listed projects that would benefit from this,
>> we are currently considering starting to use some (minimal) type
>> annotations in scikit-learn.
>>
>> --
>> Roman Yurchak
>>
>> On 24/03/2020 18:00, Stephan Hoyer wrote:
>> > When we started numpy-stubs [1] a few years ago, putting type
>> > annotations in NumPy itself seemed premature. We still supported Python
>> > 2, which meant that we would need to use awkward comments for type
>> > annotations.
>> >
>> > Over the past few years, using type annotations has become increasingly
>> > popular, even in the scientific Python stack. For example, off-hand I
>> > know that at least SciPy, pandas and xarray have at least part of their
>> > APIs type annotated. Even without annotations for shapes or dtypes, it
>> > would be valuable to have near complete annotations for NumPy, the
>> > project at the bottom of the scientific stack.
>> >
>> > Unfortunately, numpy-stubs never really took off. I can think of a few
>> > reasons for that:
>> > 1. Missing high level guidance on how to write type annotations,
>> > particularly for how (or if) to annotate particularly dynamic parts of
>> > NumPy (e.g., consider __array_function__), and whether we should
>> > prioritize strictness or faithfulness [2].
>> > 2. We didn't have a good experience for new contributors. Due to the
>> > relatively low level of interest in the project, when a contributor
>> > would occasionally drop in, I often didn't even notice their PR for a
>> > few weeks.
>> > 3. Developing type annotations separately from the main codebase makes
>> > them a little harder to keep in sync. This means that type annotations
>> > couldn't serve their typical purpose of self-documenting code. Part of
>> > this may be necessary for NumPy (due to our use of C extensions), but
>> > large parts of NumPy's user facing APIs are written in Python. We no
>> > longer support Python 2, so at least we no longer need to worry about
>> > putting annotations in comments.
>> >
>> > We eventually could probably use a formal NEP (or several) on how we
>> > want to use type annotations in NumPy, but I think a good first step
>> > would be to think about how to start moving the annotations from
>> > numpy-stubs into numpy proper.
>> >
>> > Any thoughts? Anyone interested in taking the lead on this?
>> >
>> > Cheers,
>> > Stephan
>> >
>> > [1] https://github.com/numpy/numpy-stubs
>> > [2] https://github.com/numpy/numpy-stubs/issues/12
>> >

[Numpy-discussion] Using scalar constructors to produce arrays

2020-04-19 Thread Joshua Wilson
Over in the NumPy stubs there's an issue

https://github.com/numpy/numpy-stubs/issues/41

which points out that you can in fact do something like

```
np.float32([1.0, 0.0, 0.0])
```

to construct an ndarray of float32. It seems to me that though you can
do that, it is not a best practice, and one should instead do

```
np.array([1.0, 0.0, 0.0], dtype=np.float32)
```

Do people agree with that assessment of what the best practice is? If
so, it seems to make the most sense to continue banning constructs
like `np.float32([1.0, 0.0, 0.0])` in the type stubs (since the stubs
should promote writing easy-to-understand, scalable NumPy code).
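
For what it's worth, the two spellings produce equivalent arrays, which is why
this is a question of style rather than behavior; a quick check (nothing
version-specific assumed):

```
>>> import numpy as np
>>> a = np.float32([1.0, 0.0, 0.0])                  # works, but reads like a scalar cast
>>> b = np.array([1.0, 0.0, 0.0], dtype=np.float32)  # explicit about making an array
>>> type(a), a.dtype
(<class 'numpy.ndarray'>, dtype('float32'))
>>> np.array_equal(a, b) and a.dtype == b.dtype
True
```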

- Josh


[Numpy-discussion] Feelings about type aliases in NumPy

2020-04-24 Thread Joshua Wilson
Hey everyone,

Over in numpy-stubs we've been working on typing "array like":

https://github.com/numpy/numpy-stubs/pull/66

It would be nice if the type were public so that downstream projects
could use it (e.g. it would be very helpful in SciPy). Originally the
plan was to make it publicly available only at typing time and not at
runtime, which would mean that no changes to NumPy are necessary; see

https://github.com/numpy/numpy-stubs/pull/66#issuecomment-618784833

for more information on how that works.

But, Stephan pointed out that it might be confusing to users for
objects to only exist at typing time, so we came around to the
question of whether people are open to the idea of including the type
aliases in NumPy itself. Ralf's concrete proposal was to make a module
numpy.types (or maybe numpy.typing) to hold the aliases so that they
don't pollute the top-level namespace. The module would initially
contain the types

- ArrayLike
- DtypeLike
- (maybe) ShapeLike

Note that we would not need to make changes to NumPy right away;
instead it would probably be done when numpy-stubs is merged into
NumPy itself.
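
For concreteness, a rough sketch of what the aliases might look like
(illustrative only; the actual definitions being worked out in the numpy-stubs
PR are more careful about nested sequences and dtype specifiers):

```
# Hypothetical sketch of the aliases, not the final definitions.
from typing import Any, Protocol, Sequence, Union  # Protocol needs Python >= 3.8

import numpy as np

class _SupportsArray(Protocol):
    """Anything that can be coerced to an array via __array__."""
    def __array__(self) -> np.ndarray: ...

ArrayLike = Union[bool, int, float, complex, Sequence[Any], np.ndarray, _SupportsArray]
DtypeLike = Union[None, type, str, np.dtype]
ShapeLike = Union[int, Sequence[int]]
```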

What do people think?

- Josh


Re: [Numpy-discussion] Feelings about type aliases in NumPy

2020-04-26 Thread Joshua Wilson
To try and add some more data points to the conversation:

> Maybe we can go for a bit more distant name like "numpy.annotations" or 
> whatever.

Interestingly this was proposed independently here:

https://github.com/numpy/numpy-stubs/pull/66#issuecomment-619131274

Related to that, Ralf was opposed to numpy.typing because it would
shadow a stdlib module name:

https://github.com/numpy/numpy-stubs/pull/66#issuecomment-619123629

But, types is _also_ a stdlib module name. Maybe the above points give
some extra weight to "numpy.annotations"?

> Unless we anticipate adding a long list of type aliases (more than the three 
> suggested so far)

While working on some types in SciPy here:

https://github.com/scipy/scipy/pull/11936#discussion_r415280894

we ran into the issue of typing things that are "integer types" or
"floating types". For the time being we just inlined a definition like
Union[float, np.floating], but conceivably we would want to unify
those definitions somewhere instead of redefining them in every
project. (Note that existing types like SupportsInt etc. were not what
we wanted.) This perhaps suggests that the ultimate number of type
aliases might be larger than we initially thought.
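
Concretely, the kind of alias we inlined looks roughly like this (the names
here are placeholders, not a proposal for the eventual spelling):

```
# Illustrative aliases of the sort we inlined in the SciPy PR; the names and
# exact membership are placeholders, not an agreed-upon API.
from typing import Union

import numpy as np

IntLike = Union[int, np.integer]
FloatLike = Union[float, np.floating]

def clip_probability(p: FloatLike) -> float:
    # accepts Python floats and NumPy floating scalars alike
    return float(min(max(p, 0.0), 1.0))
```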

On Sun, Apr 26, 2020 at 6:25 AM Ilhan Polat  wrote:
>
> I agree that parking all these in a secondary namespace sounds a better 
> option, can't say that I feel for the word "typing" though. There are already 
> too many type, dtype, ctypeslib etc. Maybe we can go for a bit more distant 
> name like "numpy.annotations" or whatever.
>
> On Sat, Apr 25, 2020 at 8:51 AM Kevin Sheppard wrote:
>>
>> Typing is for library developers more than end users. I would also worry 
>> that putting it into the top level might discourage other typing classes 
>> since it is more difficult to add to the top level than to a lower level 
>> module. np.typing seems very clear to me.
>>
>> On Sat, Apr 25, 2020, 07:41 Stephan Hoyer  wrote:
>>>
>>>
>>>
>>> On Fri, Apr 24, 2020 at 11:31 AM Sebastian Berg wrote:
>>>>
>>>> On Fri, 2020-04-24 at 11:10 -0700, Stefan van der Walt wrote:
>>>> > On Fri, Apr 24, 2020, at 08:45, Joshua Wilson wrote:
>>>> > > But, Stephan pointed out that it might be confusing to users for
>>>> > > objects to only exist at typing time, so we came around to the
>>>> > > question of whether people are open to the idea of including the
>>>> > > type
>>>> > > aliases in NumPy itself. Ralf's concrete proposal was to make a
>>>> > > module
>>>> > > numpy.types (or maybe numpy.typing) to hold the aliases so that
>>>> > > they
>>>> > > don't pollute the top-level namespace. The module would initially
>>>> > > contain the types
>>>> >
>>>> > That sounds very sensible.  Having types available with NumPy should
>>>> > also encourage their use, especially if we can add some documentation
>>>> > around it.
>>>>
>>>> I agree. I might have a small tendency for `numpy.types`; if we ever
>>>> find any usage other than direct typing, that may be the better name?
>>>
>>>
>>> Unless we anticipate adding a long list of type aliases (more than the 
>>> three suggested so far), I would lean towards adding ArrayLike to the top 
>>> level NumPy namespace as np.ArrayLike.
>>>
>>> Type annotations are becoming an increasingly core part of modern Python 
>>> code. We should make it easy to appropriately type check functions that act 
>>> on NumPy arrays, and a top level np.ArrayLike is definitely more convenient 
>>> than np.types.ArrayLike.
>>>
>>>> Out of curiosity, I guess `ArrayLike` would be an ABC that a
>>>> downstream project can register with?
>>>
>>>
>>> ArrayLike will be a typing Protocol, automatically recognizing attributes 
>>> like __array__ to indicate that something can be cast to an array.
>>>
>>>>
>>>>
>>>> - Sebastian
>>>>
>>>>
>>>> >
>>>> > Stéfan


Re: [Numpy-discussion] Feelings about type aliases in NumPy

2020-05-09 Thread Joshua Wilson
Following some additional discussion on the PR (see comments after
https://github.com/numpy/numpy-stubs/pull/66#issuecomment-620139434),
the proposed path forward is:

- Add the module `numpy.typing` to the type stubs only for now
- Initially it will contain types for ArrayLike and DtypeLike
- When the stubs are merged into NumPy, add the `numpy.typing` module
to NumPy itself.

Any further objections?

On Mon, Apr 27, 2020 at 10:50 AM Ilhan Polat  wrote:
>
> > Interestingly this was proposed independently here:
>
> Wow, apologies for missing the entire thread about it, and for the noise.
>
>
> On Sun, Apr 26, 2020 at 11:19 PM Joshua Wilson wrote:
>>
>> To try and add some more data points to the conversation:
>>
>> > Maybe we can go for a bit more distant name like "numpy.annotations" or 
>> > whatever.
>>
>> Interestingly this was proposed independently here:
>>
>> https://github.com/numpy/numpy-stubs/pull/66#issuecomment-619131274
>>
>> Related to that, Ralf was opposed to numpy.typing because it would
>> shadow a stdlib module name:
>>
>> https://github.com/numpy/numpy-stubs/pull/66#issuecomment-619123629
>>
>> But, types is _also_ a stdlib module name. Maybe the above points give
>> some extra weight to "numpy.annotations"?
>>
>> > Unless we anticipate adding a long list of type aliases (more than the 
>> > three suggested so far)
>>
>> While working on some types in SciPy here:
>>
>> https://github.com/scipy/scipy/pull/11936#discussion_r415280894
>>
>> we ran into the issue of typing things that are "integer types" or
>> "floating types". For the time being we just inlined a definition like
>> Union[float, np.floating], but conceivably we would want to unify
>> those definitions somewhere instead of redefining them in every
>> project. (Note that existing types like SupportsInt etc. were not what
>> we wanted.) This perhaps suggests that the ultimate number of type
>> aliases might be larger than we initially thought.
>>
>> On Sun, Apr 26, 2020 at 6:25 AM Ilhan Polat  wrote:
>> >
>> > I agree that parking all these in a secondary namespace sounds a better 
>> > option, can't say that I feel for the word "typing" though. There are 
>> > already too many type, dtype, ctypeslib etc. Maybe we can go for a bit 
>> > more distant name like "numpy.annotations" or whatever.
>> >
>> > On Sat, Apr 25, 2020 at 8:51 AM Kevin Sheppard wrote:
>> >>
>> >> Typing is for library developers more than end users. I would also worry 
>> >> that putting it into the top level might discourage other typing classes 
>> >> since it is more difficult to add to the top level than to a lower level 
>> >> module. np.typing seems very clear to me.
>> >>
>> >> On Sat, Apr 25, 2020, 07:41 Stephan Hoyer  wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Fri, Apr 24, 2020 at 11:31 AM Sebastian Berg wrote:
>> >>>>
>> >>>> On Fri, 2020-04-24 at 11:10 -0700, Stefan van der Walt wrote:
>> >>>> > On Fri, Apr 24, 2020, at 08:45, Joshua Wilson wrote:
>> >>>> > > But, Stephan pointed out that it might be confusing to users for
>> >>>> > > objects to only exist at typing time, so we came around to the
>> >>>> > > question of whether people are open to the idea of including the
>> >>>> > > type
>> >>>> > > aliases in NumPy itself. Ralf's concrete proposal was to make a
>> >>>> > > module
>> >>>> > > numpy.types (or maybe numpy.typing) to hold the aliases so that
>> >>>> > > they
>> >>>> > > don't pollute the top-level namespace. The module would initially
>> >>>> > > contain the types
>> >>>> >
>> >>>> > That sounds very sensible.  Having types available with NumPy should
>> >>>> > also encourage their use, especially if we can add some documentation
>> >>>> > around it.
>> >>>>
>> >>>> I agree. I might have a small tendency for `numpy.types`; if we ever
>> >>>> find any usage other than direct typing, that may be the better name?
>> >>>
>> >>>
>> >>> Unless we anticipate adding a long list of type aliases (more than the
>> >>> three suggested so far), I would lean towards adding ArrayLike to the top
>> >>> level NumPy namespace as np.ArrayLike.

Re: [Numpy-discussion] Ndarray static typing: Order of generic types

2020-11-01 Thread Joshua Wilson
> Just to speak for myself, I don't think the precise choice matters very much. 
> There are arguments for consistency both ways.

I agree with this. In the absence of strong theoretical considerations
I'd fall back to a practical one: we can make ndarray generic over
dtype _right now_, while for shape we will need to wait a year or more
for the variadic type variable PEP to settle. To me that suggests:

- Do ndarray[DType] now
- When the shape stuff is ready, do ndarray[DType, ShapeStuff] (or
however ShapeStuff ends up being spelled)
- Write a mypy plugin that rewrites ndarray[DType] to ndarray[DType,
AnyShape] (or whatever) for backwards compatibility
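
As a rough illustration of what that first step would already buy downstream
annotations (the subscription syntax below is a sketch of the idea, not the
final spelling):

```
# Sketch only: illustrates "ndarray generic over dtype"; the parameterization
# NumPy ends up with may be spelled differently.
from typing import TypeVar

import numpy as np

D = TypeVar("D", bound=np.generic)

def as_column(x: "np.ndarray[D]") -> "np.ndarray[D]":
    # a checker that knows ndarray[DType] can verify the dtype is preserved
    return x.reshape(-1, 1)
```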

On Thu, Oct 29, 2020 at 1:37 PM Stephan Hoyer  wrote:
>
> On Wed, Oct 28, 2020 at 2:44 PM bas van beek  wrote:
>>
>> Hey all,
>>
>>
>>
>> With the recent merging of numpy/numpy#16759 we’re at the point where 
>> `ndarray` can be made generic w.r.t. its dtype and shape.
>>
>> An open question which remains is the order in which these two parameters
>> should appear (numpy/numpy#16547):
>>
>> - `ndarray[Dtype, Shape]`
>>
>> - `ndarray[Shape, Dtype]`
>
>
> Hi Bas,
>
> Thanks for driving this forward!
>
> Just to speak for myself, I don't think the precise choice matters very much. 
> There are arguments for consistency both ways. In the end Dtype and Shape are 
> different enough that I doubt it will be a point of confusion.
>
> Also, I would guess many users will define their own type aliases, so they can 
> write something more succinct like Float64[shape] rather than 
> ndarray[float64, shape].  We might even consider including some of these in 
> numpy.typing.
>
> Cheers,
> Stephan
>
>
>>
>>
>>
>> There has been some discussion about this question in issue 16547, but a
>> consensus has not yet been reached.
>>
>> Most people seem to slightly prefer one option over the other.
>>
>>
>>
>> Are there any further thoughts on this subject?
>>
>>
>>
>> Regards,
>>
>> Bas van Beek
>>
>>
>>


Re: [Numpy-discussion] sinpi/cospi trigonometric functions

2021-07-14 Thread Joshua Wilson
I'll note that SciPy actually does have `sinpi` and `cospi`; they just
happen to be private:

https://github.com/scipy/scipy/blob/master/scipy/special/functions.json#L58
https://github.com/scipy/scipy/blob/master/scipy/special/functions.json#L12

They are used extensively inside the module, though, as helpers for other
special functions, and they have extensive tests of their own:

https://github.com/scipy/scipy/blob/master/scipy/special/tests/test_mpmath.py#L533
https://github.com/scipy/scipy/blob/master/scipy/special/tests/test_mpmath.py#L547
https://github.com/scipy/scipy/blob/master/scipy/special/tests/test_mpmath.py#L960
https://github.com/scipy/scipy/blob/master/scipy/special/tests/test_mpmath.py#L1741

I have no objections to making them public; the PR is as simple as
removing the underscores and adding a docstring.
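
For anyone curious, the reduce-then-multiply idea discussed in the quoted
reply below boils down to something like this minimal pure-Python sketch (the
actual scipy.special implementations are in Cython/C and handle edge cases
much more carefully):

```
import numpy as np

def sinpi(x):
    # Minimal sketch of sin(pi*x) with reduce-then-multiply: subtract the
    # nearest integer first (exact in floating point for moderate |x|), then
    # multiply the small remainder by pi.
    x = np.asarray(x, dtype=float)
    n = np.round(x)
    r = x - n
    s = np.sin(np.pi * r)
    return np.where(n % 2 == 0, s, -s)

print(sinpi(1.0), np.sin(np.pi * 1.0))  # an exact zero vs ~1.2e-16
```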

On Wed, Jul 14, 2021 at 9:17 AM Robert Kern  wrote:
>
> On Wed, Jul 14, 2021 at 11:39 AM Tom Programming wrote:
>>
>> Hi all,
>>
>> (I am very new to this mail list so please cut me some slack)
>>
>> trigonometric functions like sin(x) are usually implemented as:
>>
>> 1. Some very complicated function that does bit twiddling and basically
>> computes the remainder of x by pi/2. An example is
>> http://www.netlib.org/fdlibm/e_rem_pio2.c (which calls
>> http://www.netlib.org/fdlibm/k_rem_pio2.c ), i.e. ~500 lines of branching C
>> code. The complexity arises in part because for big values of x the
>> subtraction becomes more and more ill-defined: x is represented in binary,
>> an irrational number has to be subtracted from it, and consecutive
>> floating point values are spaced farther and farther apart at larger
>> absolute values.
>> 2. A Taylor series for small values of x,
>> 3. Plus some manipulation to get the correct branch, deal with subnormal
>> numbers, deal with -0, etc...
>>
>> If we used a function like sinpi(x) = sin(pi*x), part (1) could be greatly
>> simplified, since it becomes trivial to separate out the remainder of the
>> division by pi/2. There are gains in both accuracy and performance.
>>
>> In large parts of code there is a pi inside the argument of sin anyway,
>> since it is common to compute something like sin(2*pi*f*t). So I wonder
>> if it is feasible to implement those functions in numpy.
>>
>> To strengthen my argument I'll note that the IEEE standard, too, defines ( 
>> https://irem.univ-reunion.fr/IMG/pdf/ieee-754-2008.pdf ) the functions 
>> sinPi, cosPi, tanPi, atanPi, atan2Pi. And there are existing 
>> implementations, for example, in Julia ( 
>> https://github.com/JuliaLang/julia/blob/6aaedecc447e3d8226d5027fb13d0c3cbfbfea2a/base/special/trig.jl#L741-L745
>>  ) and the Boost C++ Math library ( 
>> https://www.boost.org/doc/libs/1_54_0/boost/math/special_functions/sin_pi.hpp
>>  )
>>
>> And issues caused by these apparently inexact calculations have been raised
>> in the past in various forums (
>> https://stackoverflow.com/questions/20903384/numpy-sinpi-returns-negative-value
>>  
>> https://stackoverflow.com/questions/51425732/how-to-make-sinpi-and-cospi-2-zero
>>  
>> https://www.reddit.com/r/Python/comments/2g99wa/why_does_python_not_make_sinpi_0_just_really/
>>  ... )
>>
>> PS: to be nitpicky, I see that most implementations compute sinpi as
>> sin(pi*x) for small values of x, i.e. they multiply x by pi and then use the
>> same Taylor series coefficients as the canonical sin. A multiply
>> instruction could be spared, in my opinion, by storing different Taylor
>> expansion coefficients tailored to the sinpi function. It is not clear to me
>> whether this is not done because the performance gain is small, because
>> I am wrong about something, or because those 6 constants of the Taylor
>> expansion have a "sacred aura" about them and nobody wants to dig deeply
>> into this.
>
>
> The main value of the sinpi(x) formulation is that you can do the reduction 
> on x more accurately than on pi*x (reduce-then-multiply rather than 
> multiply-then-reduce) for people who particularly care about the special 
> locations of half-integer x. sin() and cos() are often not implemented in 
> software, but by CPU instructions, so you don't want to reimplement them. 
> There is likely not a large accuracy win by removing the final multiplication.
>
> We do have sindg(), cosdg(), and tandg() in scipy.special that work similarly 
> for inputs in degrees rather than radians. They also follow the 
> reduce-then-multiply strategy. scipy.special would be a good place for 
> sinpi() and friends.
>
> --
> Robert Kern


[Numpy-discussion] Re: Automatic formatters for only changed lines

2022-05-08 Thread Joshua Wilson
As someone who worked on a project that used to use yapf, a word of caution:
regardless of the quality of its formatting versus black, it generally has not
had the resources to keep up with new releases of Python. See

https://github.com/google/yapf/issues/772
https://github.com/google/yapf/issues/993
https://github.com/google/yapf/issues/1001

for example. We eventually had to switch to black because of the lack of
3.8 support. (Though with 3.8 in particular you'd be fine if you banned
using the walrus operator.)

On Sun, May 8, 2022 at 5:59 AM Evgeni Burovski wrote:

> Mind sharing your yapf config Juan?
>
>
> On Sun, May 8, 2022 at 08:40 Juan Nunez-Iglesias wrote:
>
>> With the caveat that I am just an interested bystander, I would like to
>> point back to yapf as an alternative. I agree with what has already been
>> echoed by the majority of the community, that setting *some* auto-formatter
>> is a huge benefit for both review and writing sanity. I very much disagree
>> with some of the other choices that black makes, while in contrast I find
>> that I can get yapf to write code that I find very close to what I would
>> naturally write as an old-fashioned Python style pedant.
>>
>> I might have already pointed to these lines of code that black produced for
>> napari that I find abhorrent:
>>
>> (
>> custom_colormap,
>> label_color_index,
>> ) = color_dict_to_colormap(self.color)
>>
>> I can confirm that this hurts readability because recently I and another
>> experienced developer stared just a couple of lines downstream for a hot
>> minute wondering where the hell "label_color_index" was defined. It is also
>> more lines of code than
>>
>> custom_colormap, label_color_index = (
>> color_dict_to_colormap(self.color)
>> )
>>
>> (Personally I really like the pep8 recommendation on hanging indents,
>> "further indentation should be used to clearly distinguish itself as a
>> continuation line," which black ignores.)
>>
>> Anyway, didn't mean to start a flame war, just to express my preference
>> for yapf and that it would therefore make me very happy to see NumPy adopt
>> it as a counteracting force to the monopoly that black is developing. I do
>> strongly agree that black is preferable to no formatter. In short my
>> recommendation order would be:
>>
>> 1. Use yapf;
>> 2. Use black;
>> 3. Don't use a formatter.
>>
>> Juan.
>>
>> On Sat, 7 May 2022, at 3:20 PM, Peter Cock wrote:
>>
>> On Fri, May 6, 2022 at 4:59 PM Charles R Harris  wrote:
>>
>>
>> Heh. I think automatic formatting is the future and was happy to see the
>> PR. The only question is if black is the way we want to go. IIRC, the main
>> sticking points were
>>
>> - Line length (<= 79).
>> - Quotation mark changes (noisy).
>> - Formatting of '*', '**', and '/'
>>
>> Even if the result isn't perfect by our current standards, I suspect we
>> will get used to it and just not worry anymore.
>>
>>
>> On that note, in black v22.1.0 (the first non-beta release), one of the
>> changes was to the ** operator to better suit mathematical conventions:
>>
>>  "Remove spaces around power operators if both operands are simple"
>> https://github.com/psf/black/pull/2726
>>
>> Either way I agree with you, most people seem to get used to black and
>> stop worrying about it.
>>
>> Peter
>>