On 2019-09-07 15:33, Ralf Gommers wrote:
On Sat, Sep 7, 2019 at 1:07 PM Sebastian Berg
wrote:
On Fri, 2019-09-06 at 14:45 -0700, Ralf Gommers wrote:
That's part of it. The concrete problems it's solving are
threefold:
Array creation functions can be overridden.
Array coerc
t looks like).
I had for a long time hoped that the HPy drive would solve this, but
there is no reason to wait for it.
In any case, contributions to this effect are very much welcome, I have
been hoping they would come for a long time, but I am not excited about
just removing the "static".
..` clause somewhere, but that
isn't a nice design).
- Sebastian
[1] In principle you are right: sorting unicode is complicated! In
practice, that is your problem as a user though. If you want to deal
with weirder things, you have to normalize the unicode first, etc.
>
> Cheers
rely on full sorting for now, which can
be slow, which I can live with personally.)
- Sebastian
[1] There are different styles of weights and for some method that
clearly matters. Thus, if we ever expand the definition, it may be
that `weights` has to be mapped to one of these, or that the
gen
it painful, please let us know if
it is too painful for some reason.
But OTOH it has been a recurring surprise and is a common reason for
software written on Linux to not run on Windows.
- Sebastian
On Thu, 2023-11-02 at 19:37 +0100, Michael Siebert wrote:
> Hi Sebastian,
>
> great news! Does that mean that Windows Numpy 64 bit default integers
> are coming before Numpy 2.0, like in Numpy 1.27? Will there be
> another release before 2.0?
NumPy 2 of course. Way too big chang
mory layout for speed.
Such code should normally fail gracefully, but fail it will.
Also, as Aaron said, a lot of these places might not enforce it but
still be speed impacted.
So yes, it would be expected to break a lot of C-interfacing code that has
Python wrappers around it to normalize input.
-
hough).
I suppose the machinery isn't quite set up to do both side-by-side.
- Sebastian
>
> Marten
>
> Martin Ling writes:
>
> > Hi folks,
> >
> > I don't follow numpy development in much detail these days but I
> > see
> > that there is
On Sat, 2023-12-23 at 09:56 -0500, Marten van Kerkwijk wrote:
> Hi Sebastian,
>
> > That looks nice, I don't have a clear feeling on the order of
> > items, if
> > we think of it in terms of `(start, stop)` there was also the idea
> > voiced to simply add a
to hope that this is in a way that pandas will not be
affected and, honestly, without deep integration testing we won't make
progress in figuring out whether there is some change needed or not.
Thanks for the great work!
- Sebastian
>
> https://numpy.org/neps/nep-0055-string_dtype.
You shouldn't
really see much of a difference on up-to-date NumPy versions.
- Sebastian
uld rather be grown to strengthen the
argument instead?
(Of course there are true exceptions, IIRC scikit-learn chooses to have
much longer support windows.)
- Sebastian
>
>
> Matti
>
ling users.
(Note that skimage users will hit cython, so should get a relatively
clear printout that includes a "please downgrade NumPy" suggestion.)
- Sebastian
>
> > A library that requires a manual version intervention is not
> > broken, it’s just irritating. A lib
at raises a detailed/informative error message
at runtime.
I.e. "work around" pip by telling users exactly what they should do.
- Sebastian
>
> Gaël
support 3.9 and NumPy 2 in a release. And trying to avoid that was
part of why the discussion started I think.)
- Sebastian
>
> If you drop 3.9 from the metadata, I don't think there's any need to
> secretly keep support. It's too hard to actually use it, and it
nities) to realize that this is how Python
defines things.
Python modulo is not identical to IEEE modulo, as described in the docs.
- Sebastian
>
> The actual implementation is a bit scattered. I think it would be
> nice
> if we could have an "explain" decorator to ufuncs that
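For concreteness on the modulo point above, a quick comparison of the
two conventions (plain Python/NumPy, nothing hypothetical):

    import math
    import numpy as np

    # Python's % follows the sign of the divisor (floored division):
    print(-7 % 3)             # 2
    print(7 % -3)             # -2

    # IEEE/C-style fmod follows the sign of the dividend instead:
    print(math.fmod(-7, 3))   # -1.0
    print(math.fmod(7, -3))   # 1.0

    # np.mod/np.remainder match Python; np.fmod matches IEEE/C:
    print(np.mod(-7, 3), np.fmod(-7, 3))   # 2 -1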
The most probable change seems to me to be that NumPy now includes
`complex.h`. But I am not sure that is the right direction or why it
would lead to cryptic errors.
- Sebastian
On Wed, 2024-07-03 at 10:30 +0200, PIERRE AUGIER wrote:
> Hi,
>
> We have a strange issue with building pyFFTW with
most all of the
things that have come up about ufunc core dimension flexibility (might
be nice to check briefly, but even if not I suspect the hook here is
the right choice).
- Sebastian
> I have proposed a change in https://github.com/numpy/numpy/pull/26908
> that makes both these features
On Fri, 2024-07-12 at 09:56 -0400, Warren Weckesser wrote:
> On Fri, Jul 12, 2024 at 7:47 AM Sebastian Berg
> wrote:
> >
>
> > (You won't be able to know these relations from reading the
> > signature,
> > but I doubt it's worth worrying about
for all the contributions!
Cheers,
Sebastian
on the
dtype.
(`.info.min` seems tricky, because I am not sure it is clear whether
inf or the minimum finite value is "min".)
A (potentially very short) NEP would probably help to get momentum on
making a decision. I certainly would like to see this being worked on!
- Sebastian
> --
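For context on the `.info.min` question above, the existing finfo/iinfo
introspection already draws exactly this distinction:

    import numpy as np

    # finfo's `min` is the most negative *finite* value, not -inf:
    print(np.finfo(np.float64).min)            # -1.7976931348623157e+308
    print(np.finfo(np.float64).min > -np.inf)  # True
    print(np.iinfo(np.int64).min)              # -9223372036854775808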
y have a couple of hundred
elements or so for each row, the math is probably the problem and most
of that might be the `exp`.
You can get rid of the `row` loop though, in case an individual row is
a pretty small array.
To be honest, I am a bit surprised tha
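A minimal sketch of the loop removal suggested above (the array here is
a made-up stand-in; the original data is not shown in the thread):

    import numpy as np

    data = np.random.rand(1000, 200)   # stand-in for the real rows

    # Loop version: one small ufunc call per row.
    out_loop = np.empty_like(data)
    for i, row in enumerate(data):
        out_loop[i] = np.exp(row)

    # Vectorized version: a single ufunc call over the whole 2-D array.
    out_vec = np.exp(data)

    assert np.allclose(out_loop, out_vec)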
ro out the output for the case of
`matrix_multiply(np.ones((10, 0)), np.ones((0, 10)))`. So this could
turn code that errored out for weird reasons into wrong results in rare
cases.
- Sebastian
in enumerate(meshB):
# possibly insert np.newaxis/None or a reshape in [??]
A[:, j] = self.basisfunction[j](meshA[??] - col)
- Sebastian
>
> Best,
> Florian
>
> Am 25.03.2017 um 22:31 schrieb Sebastian Berg:
> > On Sat, 2017-03-25 at 18:46 +0100, Florian Lindner wro
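To illustrate the `np.newaxis` suggestion in the `[??]` spot above with
a self-contained example (the meshes here are made up):

    import numpy as np

    meshA = np.linspace(0.0, 1.0, 5)   # shape (5,)
    meshB = np.linspace(0.0, 1.0, 3)   # shape (3,)

    # meshA[:, np.newaxis] has shape (5, 1), so subtracting a (3,) array
    # broadcasts to all (5, 3) pairwise differences in one step:
    diff = meshA[:, np.newaxis] - meshB
    print(diff.shape)   # (5, 3)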
time and place as far as I am concerned :).
- Sebastian
>
> Ralf
>
>
implementing UTF-8 and ASCII-surrogateescape
> > first
> > as they seem to knock off more use cases straight away.
> >
>
>
> Please list the use cases in the context of numpy usage. hdf5 is the
> most obvious, but how exactly would hdf5 use a utf8 array in the
>
s, complex)
> >>> y = np.zeros(s)
> >>> y += abs(x * 2.0)**2
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> TypeError: Cannot cast ufunc add output from dtype('complex128') to
> dtype('float64') with casting rule &
ave a BLAS compatible type). You might also want to check out
np.einsum, which is pretty slick and can handle these kinds of
operations as well. Note that `np.dot` calls into BLAS so that it is in
general much faster than np.einsum.
- Sebastian
> The routine that I want to implement lo
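A small runnable comparison of the two routes mentioned above (shapes
chosen arbitrarily):

    import numpy as np

    a = np.random.rand(200, 300)
    b = np.random.rand(300, 100)

    c1 = a.dot(b)                        # dispatches to BLAS
    c2 = np.einsum('ij,jk->ik', a, b)    # same contraction, spelled out

    assert np.allclose(c1, c2)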
ly thing I can see that might be good is putting "community
work" or something like it specifically as part of the job description,
and that's up to Nathaniel probably.
Some things like not merging large changes by two people sitting in
the same office should be obvious (and even if it
se the individual ufunc things (type resolving and
1d loop) but not all the outer loop nditer setup which is ndarray
specific in any case (honestly, I am not sure it is entirely possible
it is already exposed).
- Sebastian
> Chuck
.
>
Sure, I would say there is nothing wrong with reverting for now (and it
simply is the safe and easy way).
Though it would be good to address the issue of what should happen in
the future with diff (and possibly the subtract deprecation itself)
with your own stuff, you have to make sure to point
out that parts are LGPL of course (just like there is a reason you get
the GPL printed out with some devices), and if you modify it, provide
these modifications, etc.
Of course you cannot include it into the scipy codebase itself, but
th
ency
(i.e. `import package`).
- Sebastian
> Carl
>
> 2017-06-24 22:07 GMT+02:00 Sebastian Berg >:
> > On Sat, 2017-06-24 at 15:47 -0400, josef.p...@gmail.com wrote:
> > >
> > >
> > > On Sat, Jun 24, 2017 at 3:16 PM, Nathaniel Smith
> > > w
For subtract, I don't
remember really, but I don't think there was any huge argument for it.
Probably it was mostly that many feel that:
`False - True == -1` as is the case in python while we have:
`np.False_ - np.True_ == np.True_`.
And going to a deprecation would open up that possibi
> --{--cut here--
> make -k
> python3 shortestPathABC.py
> d0= <0> d1= <1> d2= 3.0 d3= 6.0
> type(d0)= ShortestNull
> d4= 3.0
> d5= 9.0
> d6= <0>
> d7= 3.0
> d8= <0>
> d9= 3.0
> a=
> [[ 12.0]
> [12.0 <0>
> I think it is a very useful function in general and it has
> well defined behaviour. It has usage not only for graphics,
> but actually any data copying-pasting between arrays.
>
> So I am looking forward to comments on this proposal.
>
First, the slice object provides some hel
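For example, slice.indices resolves a slice against a concrete length
(standard Python API, shown here as one such helper):

    s = slice(1, None, 2)

    start, stop, step = s.indices(10)
    print(start, stop, step)                  # 1 10 2
    print(list(range(start, stop, step)))     # [1, 3, 5, 7, 9]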
ains.
The question is, do you really see a big advantage in fixing a
gazillion tests at once over doing a small part of the fixes one after
another? The "big step" thing did not work too well for Python 3.
- Sebastian
> On 30 Jun 2017, 6:42 AM +1000, Marten van Kerkwijk
re want from
it or do a few people who know numpy/scipy already plan to come? Two
years ago, we did not have much of a plan, so it was mostly giving
three people or so a bit of a tutorial on how numpy worked internally,
leading to some bug fixes.
One quick idea that might be nice and dives a bit int
bout with reshape is the
order. NumPy reshape defaults to C-order, while other packages may use
Fortran order for reshaping. You can actually change the order you want
to use (though it is in general a good idea to prefer C-order in numpy
probably).
- Sebastian
> Regards,
> -eat
> > Th
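A quick demonstration of the two reshape orders discussed above:

    import numpy as np

    a = np.arange(6)

    print(a.reshape(2, 3))             # C order (default): fills rows first
    # [[0 1 2]
    #  [3 4 5]]

    print(a.reshape(2, 3, order='F'))  # Fortran order: fills columns first
    # [[0 2 4]
    #  [1 3 5]]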
without nose/pytest if they worked before without it I think.
My guess is that all your options do that, so I think we should take
the one that gives the nicest maintainable code :). Though can't say I
looked enough into it to really make a well-educated decision; that
probably means your
a,None)
>
> gives the desired result, but feels a bit clumsy.
>
Yes, I guess one's bug is someone else's feature :(, if it is very bad,
we could delay the deprecation probably. For a solutions, maybe we
could add a ufunc for elementwise `is` on object arrays (dunno about
the name, maybe `
On Wed, 2017-07-19 at 08:31 +, martin.gfel...@swisscom.com wrote:
> Thank you for your help!
>
> Sebastian, I couldn't agree more with someone's bug being someone
> else's feature! A fast identity ufunc would be useful, though.
>
An `object_identity` ufunc sh
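Pending such a ufunc (the `object_identity` name above is only a
proposal, it does not exist in NumPy), a stand-in can be built from
np.frompyfunc, as mentioned earlier in the thread:

    import operator
    import numpy as np

    # Elementwise `is` over object arrays via frompyfunc:
    object_identity = np.frompyfunc(operator.is_, 2, 1)

    x = object()
    a = np.array([x, None, x], dtype=object)
    print(object_identity(a, x))   # [True False True] (object dtype)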
ed a single time when we
donate a bit of numpy money to the mingwpy effort).
I am not sure if we had it, but we could put in (up to changes of
course), a rough number of people we aim to have on it. Just so we
don't forget to discuss that there should be a bit of flux. And I am all
for some flux,
On Fri, 2017-07-21 at 12:59 -0700, Nathaniel Smith wrote:
> On Jul 21, 2017 9:36 AM, "Sebastian Berg" > wrote:
> On Fri, 2017-07-21 at 16:58 +0200, Julian Taylor wrote:
> > On 21.07.2017 08:52, Ralf Gommers wrote:
> Also FWIW, the jupyter steering council i
y nice in these regards, you could use
np.frompyfunc/np.vectorize together with `operator.getitem` to avoid
the loop. It probably will not be much faster though.
- Sebastian
> s = np.zeros(4)
> for i in np.arange(4):
> s[i] = a[i][0]
>
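A sketch of the frompyfunc/operator.getitem idea for the quoted loop
(the array `a` here is a made-up object array of sequences):

    import operator
    import numpy as np

    a = np.empty(4, dtype=object)
    for i, seq in enumerate([[1, 2], [3, 4], [5, 6], [7, 8]]):
        a[i] = seq

    # Elementwise getitem replaces the explicit Python loop:
    getter = np.frompyfunc(operator.getitem, 2, 1)
    s = getter(a, 0).astype(float)
    print(s)   # [1. 3. 5. 7.]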
the
governance docs say).
Regards,
Sebastian
[1] Two of whom may be appointed with some delay due to the one year
rule. We may have to hash out details here.
On Fri, 2017-07-21 at 22:18 +0200, Sebastian Berg wrote:
> On Fri, 2017-07-21 at 12:59 -0700, Nathaniel Smith wrote:
> >
and then keep using it, you can get
segfaults of course; I am not sure what you can do about it. Maybe
python can try to warn you when you exit the context/close a file
pointer, but I suppose: Python does memory management for you, it makes
doing IO management easy, but you need to manage the IO
ackagers have to make use of that or I
fear it is actually less available than a standalone python module.
- Sebastian
> The same questions apply with respect to TCL.
> > > TCL uses the Transpose-Transpose-GEMM-Transpose approach where
> > > all tensors are flattened into matrices (via H
honestly are less interesting to me.
Probably just me, was just wondering if anyone knew a setting or so?
- Sebastian
really
miss it? (right now I have those in mail -- which I like -- and
on the website, which I don't care too much about).
Probably I can set it up to get everything as mail, and set the website
to still only give notifications for 2., which would be OK. Maybe I am
just change resista
ncy new stuff likely also wants other fancy new
stuff so will soon have to use python 3 anyway. Which means, if we
think the extra burden of an "LTS" is lower than the current hassle,
let's do it :).
Also downstream seems only half a reason to me, since downstream
normally supports much
ick, I will refer to the documentation for how it
works, except that it is basically:
R[x,y] = D[I1[x, y], I2[x, y]]
R = D[np.arange(I.shape[0])[:, np.newaxis], I]
- Sebastian
> Thanks.
>
>
> ZHUO QL (KDr2) http://kdr2.com
>
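A runnable version of the indexing pattern above, with small made-up
arrays:

    import numpy as np

    D = np.arange(12).reshape(3, 4)
    I = np.array([[0, 2],
                  [1, 3],
                  [0, 1]])     # column picks, one row of picks per row of D

    # Row indices broadcast against I, giving R[x, y] = D[x, I[x, y]]:
    R = D[np.arange(I.shape[0])[:, np.newaxis], I]
    print(R)
    # [[0 2]
    #  [5 7]
    #  [8 9]]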
). And all of that
should be covered in the docs?
- Sebastian
>
> Am 12.12.2017 09:09 schrieb Nathaniel Smith:
> > On Tue, Dec 12, 2017 at 12:02 AM, Joe wrote:
> > > Hi,
> > >
> > > question says it all. I looked through the basic and advanced
> >
like an `np.newaxis`.
It all makes perfect sense if you think of it as a 0-d array
picking
The same thing is true for example for lists of booleans.
- Sebastian
> On Thu, Dec 14, 2017, 04:27 Joe wrote:
> > Hello,
> > thanks for you feedback.
> >
> > Sorry, if thie
rs around those.
- Sebastian
>
> Let's say I have an (n,m) array and I want to AND along the first
> axis, such that I get a (1,m) (or just (m,) dimensional array back. I
> looked at the documentation for np.logical_and and friends but
> couldn't find an axis keyword on the logical_
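For reference, the axis keyword lives on the ufunc's reduce method
rather than on np.logical_and itself (np.all is equivalent):

    import numpy as np

    arr = np.array([[True, False, True],
                    [True, True,  True]])

    print(np.logical_and.reduce(arr, axis=0))   # [ True False  True]
    print(arr.all(axis=0))                      # same reduction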
e this year."
> >
> > I think that's still the case, so I won't be mentoring or
> > organizing. In case anyone is interested to do one of those things,
> > please speak up!
> >
> >
>
> Sounds realistic. I thought some of the ideas la
On Tue, 2018-01-09 at 12:27 +, martin.gfel...@swisscom.com wrote:
> Hi Derek
>
> I have a related question:
>
> Given:
>
> a = numpy.array([[0,1,2],[3,4]])
> assert a.ndim == 1
> b = numpy.array([[0,1,2],[3,4,5]])
> assert b.ndim == 2
>
> Is there an elegant way to
ce of reversing them?
>
Without knowing the change, there is always a chance of (temporary)
reversal, and for unexpected complications it's probably the safest
default if there is no agreement anyway.
- Sebastian
> Cheers,
>
> Matthew
Great news, as always, thanks for your relentless effort Chuck!
- Sebastian
On Tue, 2018-02-20 at 18:21 -0700, Charles R Harris wrote:
> Hi All,
>
> On behalf of the NumPy team, I am pleased to announce NumPy
> 1.14.1. This is a bugfix release for some problems reported following
to add a "step" argument to linspace, but didn't in the
end, largely because it basically enforced in a very convoluted way
that the step fit exactly to a number of steps (up to floating point
precision) and nobody was quite sure it was a good idea, since it would
just be useful for a
e do not need to define the minimal
reference. In practice we do as soon as we use it for numpy functions.
- Sebastian
>
> Because there is such a gradation of "duck array" types, I agree with
> Marten that we should not deprecate NDArrayOperatorsMixin. It's
> useful for types lik
This NEP draft has some more hints/explanations if you are interested:
https://github.com/seberg/numpy/blob/5becd12914d0402967205579d6f59a98151e0d98/doc/neps/indexing.rst#examples
Plus, it tries to avoid the word "subspace" hehehe.
- Sebastian
On Thu, 2018-03-22 at 10:41 +0
nice. However, "start" seems a bit like solving
a different issue in any case.
Anyway, mostly noise. I really like adding this, the only thing worth
discussing a bit is the name :).
- Sebastian
On Mon, 2018-03-26 at 05:57 -0400, Hameer Abbasi wrote:
> It calls it `initializer
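For reference, the keyword under discussion here eventually landed in
NumPy 1.15 under the name `initial`:

    import numpy as np

    # An empty minimum-reduction has no identity; `initial` supplies one:
    print(np.minimum.reduce(np.array([]), initial=np.inf))   # inf

    # With data, `initial` is folded in as an extra element:
    print(np.minimum.reduce(np.array([5.0, 3.0, 8.0]), initial=4.0))  # 3.0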
least get rid of that
annoying thing with object ufuncs (which currently have a default, but
not really an identity/initializer).
Best,
Sebastian
On Mon, 2018-03-26 at 08:20 -0400, Hameer Abbasi wrote:
> Actually, the behavior right now isn’t that of `default` but that of
> `initializer` or
with
it, since it is not a default, but an initializer. Initializing to NaN
would just make all results NaN.
- Sebastian
> On 26/03/2018 at 17:35, Benjamin wrote: Hmm, this is neat.
> I imagine it would finally give some people a choice on what
> np.nansum([np.nan]) should return? It caus
mal.Decimal` to work most of the time, while here it would give
silently bad results.
- Sebastian
> On 26/03/2018 at 17:45, Sebastian wrote: On Mon, 2018-03-26 at
> 11:39 -0400, Hameer Abbasi wrote: That is the idea, but NaN functions
> are in a separate branch for another PR to be dis
On Mon, 2018-03-26 at 18:48 +0200, Sebastian Berg wrote:
> On Mon, 2018-03-26 at 11:53 -0400, Hameer Abbasi wrote:
> > It'll need to be thought out for object arrays and subclasses. But
> > for
> > Regular numeric stuff, Numpy uses fmin and this would have the
> > d
e initializer is passed in. Yes, this will require holding on to
some extra information since you will have to know/remember whether the
"identity" was passed in or defined otherwise.
I did not check the code, but I would hope that it is not awfully
tricky to do that.
- Sebastian
PS: A sid
ld make real sense (plus initial probably disallows
default...), but I got some feeling that the "default" meaning may be
even more useful to simplify special casing the empty case.
Anyway, still just pointing out that it gives me some headaches to
see such a special case for objects :(
d the object case important; if someone seriously
argues the opposite I might possibly be swayed.
- Sebastian
>
> Hameer
>
> On Mon, Mar 26, 2018 at 8:09 PM, Sebastian Berg ns.net> wrote:
> > On Mon, 2018-03-26 at 17:40 +, Eric Wieser wrote:
> > > The difficulty i
g about how to help make NumPy contributors more
> productive, we laid out these tasks:
>
Welcome also from me :), I am looking forward to seeing how things
develop!
- Sebastian
> - triage open issues and pull requests, picking up some of the long-
> standing issues and trying to
hange. So I think we should probably just stick
with the python/Guido van Rossum ideals, or did those change?
- Sebastian
> The change is trivial, and allows shuffling a new array in one line
> instead of two:
>
> x = np.random.shuffle(np.array(some_junk))
>
> I
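On the convention itself: in-place mutators return None (like
list.sort), and np.random.permutation already covers the one-line use
case by returning a shuffled copy:

    import numpy as np

    x = np.array([3, 1, 2])
    print(np.random.shuffle(x))            # None; x was shuffled in place

    y = np.random.permutation([3, 1, 2])   # shuffled copy, one line
    print(y)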
I seem to recall that there was a discussion on this and it was a lot
trickier than expected.
I think statsmodels might have options in this direction.
- Sebastian
On Thu, 2018-04-26 at 15:44 +, Corin Hoad wrote:
> Hello,
>
> Would it be possible to add the fweights and aweight
there is likely no real
performance hit compared to a non-pure python version.
- Sebastian
>
> You can find more information about this on the ufunc doc page. I
> don’t think it’s worth it to break this machinery for any and all, as
> it has numerous other advantages (such as being ab
On Thu, 2018-04-26 at 19:26 +0200, Sebastian Berg wrote:
> On Thu, 2018-04-26 at 09:51 -0700, Hameer Abbasi wrote:
> > Hi Nathan,
> >
> > np.any and np.all call np.or.reduce and np.and.reduce respectively,
> > and unfortunately the underlying function (ufunc.reduce)
milar (cool but unclear how
complex/long term) or `__array_ufunc__` based (relatively simple, will
get rid of the nastier hacks that are currently needed).
Or even both, just on different time scales?
My first gut feeling about the proposal is: I love the idea to get rid
of it... but let's no
On Wed, 2018-05-23 at 23:48 +0200, Sebastian Berg wrote:
> On Wed, 2018-05-23 at 17:33 -0400, Allan Haldane wrote:
>
> If we do not plan to replace it within numpy, we need to discuss a
> bit
> how it might affect infrastructure (multiple implementations).
>
A more general question is actually whether we should rather focus on
solving the same problem more generally.
For example if `numexpr` would implement all/any reductions, it may be
able to pretty simply get the identical tradeoffs with even more
flexibility! (I have to admit, it may get tricky
l work
always. If you want to have a no-copy behaviour in case your original
index is not an advanced indexing operation, you should replace the
np.array(0) with just 0.
- Sebastian
> ---
>
> Michael
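A small demonstration of the copy-vs-view distinction described above:

    import numpy as np

    a = np.arange(6).reshape(2, 3)

    # Basic indexing with a Python int returns a view:
    print(np.shares_memory(a, a[0]))             # True

    # A 0-d integer array triggers advanced indexing, which copies:
    print(np.shares_memory(a, a[np.array(0)]))   # False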
ankly am not sure right now if the vindex proposal was with a
forced copy or not, probably it was.
- Sebastian
>
> Michael
ed).
Might be good to do a quick deprecation anyway though, mostly out of
principle.
- Sebastian
> Any thoughts or objections?
> Matti
users are used to
different results.
Otherwise, there is mostly one case which would get annoying, and that
is `arr[:, rr, cc]` since `arr.vindex[:, rr, cc]` would not be exactly
the same. Because, yes, in some cases the current logic is convenient,
just incredibly surprising as well.
- Sebastian
>
On Tue, 2018-06-26 at 01:21 -0700, Robert Kern wrote:
> On Tue, Jun 26, 2018 at 12:58 AM Sebastian Berg
> wrote:
> >
> > Yes, that is true, but I doubt you will find a lot of code path
> > that
> > need the current indexing as opposed to vindex here,
>
all up inside of
`oindex`. But with fancy indexing, mixing boolean + integer seems
currently pretty much useless (and thus the same is true for `vindex`,
in `oindex` things make sense).
Now you could invent some new logic for such a mixing case in `vindex`,
but it seems easier to just ignore it for t
On Tue, 2018-06-26 at 02:27 -0700, Robert Kern wrote:
> On Tue, Jun 26, 2018 at 1:36 AM Sebastian Berg s.net> wrote:
> > On Tue, 2018-06-26 at 01:21 -0700, Robert Kern wrote:
> > > On Tue, Jun 26, 2018 at 12:58 AM Sebastian Berg
> > > wrote:
> >
> >
expose some of
the internals, or maybe even provide funcs to map e.g. oindex to vindex
or vindex to plain indexing, etc. but it would be helpful to know what
downstream actually might need. For all I know the things that you are
thinking of may not even exist...
- Sebastian
>
> Best R
care really what the
warnings themselves say for now (Deprecation or not); larger packages
will have to avoid them in any case though.
But I guess we have a consensus on a certain amount of warnings (probably
will have to see how much they actually appear) and then can revisit in
a longer while.
-
you want a multi-dimensional
index or not.
- Sebastian
> /home/nbecker/.local/lib/python3.6/site-
> packages/scipy/fftpack/basic.py:160: FutureWarning: Using a non-tuple
> sequence for multidimensional indexing is deprecated; use
> `arr[tuple(seq)]` instead of `arr[seq]`. In the future
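A quick illustration of the warned-about pattern and its fix:

    import numpy as np

    arr = np.arange(12).reshape(3, 4)
    seq = [slice(0, 2), slice(1, 3)]   # index built up programmatically

    # arr[seq] used to be interpreted as arr[seq[0], seq[1]] and now
    # warns/errors; a tuple makes the multidimensional intent explicit:
    print(arr[tuple(seq)])   # rows 0-1, columns 1-2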
is now live on http://www.numpy.org/neps/. Thanks all!
>
Great, I hope we can check off some of them soon! :)
- Sebastian
> Cheers,
> Ralf
>
t seems to me that this may be the actual discussion behind many of
those other discussions. Not so much the wording, but how exactly
lines were drawn in practice.
Sure, probably we set a bit of bias with the list, but I doubt it is
enough to fight over. And hopefully we can avoid a huge discussio
//scipy.org/scipylib/mailing-lists.html, and give Nathan and
> others who are interested the permissions they need.
>
Yeah, the gitter seems pretty inactive as well. But I guess it doesn't
hurt to mention them.
- Sebastian
> I think our official recommendation for usage questions i
On Tue, 2018-08-07 at 22:07 -0700, Ralf Gommers wrote:
>
>
> On Tue, Aug 7, 2018 at 4:34 AM, Sebastian Berg s.net> wrote:
> > On Mon, 2018-08-06 at 21:52 -0700, Ralf Gommers wrote:
> > >
> > >
> > > On Mon, Aug 6, 2018 at 7:15 PM, Natha
On Wed, 2018-08-08 at 08:55 -0700, Ralf Gommers wrote:
>
>
> On Wed, Aug 8, 2018 at 1:23 AM, Sebastian Berg s.net> wrote:
> > On Tue, 2018-08-07 at 22:07 -0700, Ralf Gommers wrote:
> > >
> > >
> > > On Tue, Aug 7, 2018 at 4:34 AM, Sebastian Be
ave a vague memory of asking
if we are sure this is what we want and at least Ralf agreeing. Also I
don't know how consistent it is overall.
- Sebastian
> Chuck
On Sat, 2018-08-11 at 11:11 -0700, Ralf Gommers wrote:
>
>
> On Sat, Aug 11, 2018 at 1:22 AM, Sebastian Berg ns.net> wrote:
> > On Fri, 2018-08-10 at 16:05 -0600, Charles R Harris wrote:
> > > Hi All,
> > >
> > > Do we have a policy for th
.
> >
>
> This edit to the SciPy CoC has now been merged.
>
> It looks to me like we're good to go here and take over the SciPy
> CoC.
Sounds good, so +1.
I am happy with the committee as well, and I guess most/all are, but we
might want to discuss it separately
mean adding the NpyIter and possibly fast paths
(not sure about the state of count nonzero), but should not be very
difficult.
- Sebastian
> > > > np.better_count_nonzero([[10, 11], [0, 3]], axis=1)
>
> array([2, 1])
>
> It would be much more useful
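For reference, axis support did land in np.count_nonzero (NumPy 1.12),
matching the example quoted above:

    import numpy as np

    print(np.count_nonzero([[10, 11], [0, 3]], axis=1))   # [2 1]
    print(np.count_nonzero([[10, 11], [0, 3]], axis=0))   # [1 2]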
e buffering machinery, so the cast is only
done in small chunks. But the operation itself should be performed
using the given dtype.
- Sebastian
> We ran into this issue in pydata/sparse#191 when trying to match the
> two where the only thing differing is the number of zeros for sum,
> w
e should be working on.
>
> Any objections or thoughts?
>
Sounds like a plan, especially since having practically meaningless tags
right now is no help. Most of them are historical and personally I have
only been using the milestones to tag things as high p