In the following session a numpy array is created from a stdlib array:
In [1]: import array
In [2]: base = array.array('i', [1, 2])
In [3]: a = np.asarray(base)
In [4]: a.base
Out[4]:
In [5]: a.base.obj
Out[5]: array('i', [1, 2])
In [6]: a.base.obj is base
Out[6]: True
Why can't a.base be
On Mon, Oct 19, 2015 at 4:12 PM, Stephan Hoyer wrote:
> On Mon, Oct 19, 2015 at 12:34 PM, Chris Barker
> wrote:
>
>> Also -- I think we are at phase one of a (at least) two step process:
>>
>> 1) clean up datetime64 just enough that it is useful, and less
>> error-prone -- i.e. have it not prete
On Mon, Oct 19, 2015 at 4:12 PM, Stephan Hoyer wrote:
> Alexander -- by "mst" I think Chris meant "most".
Good because in context it could be "Moscow Standard Time" or "Mean Solar
Time". :-)
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On Mon, Oct 19, 2015 at 3:34 PM, Chris Barker wrote:
>
> > In Python 3.6, datetime.now() will return different values in the first
> and the second repeated hour in the "fall-back fold." If you allow
> datetime.datetime to numpy.datetime64 conversion, you should decide what
> you do with that d
On Mon, Oct 19, 2015 at 3:34 PM, Chris Barker wrote:
> DST is a civil construct -- and mst (all?) implementations use the
> convention of having repeated times.
What is "mst"?
On Sat, Oct 17, 2015 at 6:59 PM, Chris Barker wrote:
> If anyone decides to actually get around to leap seconds support in numpy
> datetime, s/he can decide ...
This attitude is the reason why we will probably never have bug free
software when it comes to civil time reckoning. Even though ANS
On Sat, Oct 17, 2015 at 6:59 PM, Chris Barker wrote:
> Off the top of my head, I think allowing a 60th second makes more sense --
> just like we do leap years.
Yet we don't implement DST by allowing the 24th hour. Even the countries
that adjust the clocks at midnight don't do that.
In some se
On Tue, Oct 13, 2015 at 6:48 PM, Chris Barker wrote:
> And because we probably want fast absolute delta computation, when we add
> timezones, we'll probably want to store the datetime in UTC, and apply the
> timezone on I/O.
>
> Alexander: Am I right that we don't need the "fold" bit in this case
On Mon, Oct 12, 2015 at 3:10 AM, Stephan Hoyer wrote:
> The tentative consensus from last year's discussion was that we should
> make datetime64 timezone naive, like the standard library's
> datetime.datetime
If you are going to make datetime64 more like datetime.datetime, please
consider addi
On Wed, Jun 3, 2015 at 5:12 PM, Charles R Harris
wrote:
> but is as good as dot right now except it doesn't handle object arrays.
This is a fairly low standard. :-(
On Sat, May 30, 2015 at 6:23 PM, Charles R Harris wrote:
> The problem arises when multiplying a stack of matrices times a vector.
> PEP465 defines this as appending a '1' to the dimensions of the vector and
> doing the defined stacked matrix multiply, then removing the last dimension
> from the
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> For higher dimension inputs like (i, j, n, m) it acts like any other
> gufunc (e.g., everything in np.linalg)
Unfortunately, not everything in linalg acts the same way. For example,
matrix_rank and lstsq don't.
On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote:
>
> .. there's been some discussion of the possibility of
> adding specialized gufuncs for broadcasted vector-vector,
> vector-matrix, matrix-vector multiplication, which wouldn't do the
> magic vector promotion that dot and @ do.
This woul
1. Is there a simple expression using existing numpy functions that
implements PEP 465 semantics for @?
2. Suppose I have a function that takes two vectors x and y, and a matrix M
and returns x.dot(M.dot(y)). I would like to "vectorize" this function so
that it works with x and y of any ndim >= 1
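One possible answer to question 1, sketched with pre-matmul building blocks (my illustration, not an answer from the thread): promote 1-d operands the way PEP 465 specifies, multiply with einsum, then strip the added axes.

```python
import numpy as np

def pep465_matmul(a, b):
    """PEP 465 semantics via einsum: prepend a 1 to a 1-d left operand,
    append a 1 to a 1-d right operand, then remove the added axes."""
    a, b = np.asarray(a), np.asarray(b)
    a1, b1 = a.ndim == 1, b.ndim == 1
    if a1:
        a = a[np.newaxis, :]
    if b1:
        b = b[:, np.newaxis]
    out = np.einsum('...ij,...jk->...ik', a, b)
    if b1:
        out = out[..., 0]                            # drop the appended axis
    if a1:
        out = out[..., 0] if b1 else out[..., 0, :]  # drop the prepended axis
    return out

M = np.arange(6.).reshape(2, 3)
x = np.arange(2.)
y = np.arange(3.)
print(pep465_matmul(x, M))   # row-vector promotion, result shape (3,)
print(pep465_matmul(M, y))   # column-vector promotion, result shape (2,)
```

Because einsum broadcasts the leading '...' dimensions, the same sketch also handles stacked matrices, which goes some way toward question 2.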
On Mon, Apr 27, 2015 at 7:14 PM, Nathaniel Smith wrote:
> There's no way to access the ast reliably at runtime in python -- it gets
> thrown away during compilation.
The "meta" package supports bytecode to ast translation. See <
http://meta.readthedocs.org/en/latest/api/decompile.html>.
On Mon, Jan 26, 2015 at 6:06 AM, Dieter Van Eessen <
dieter.van.ees...@gmail.com> wrote:
> I've read that numpy.array isn't arranged according to the
> 'right-hand-rule' (right-hand-rule => thumb = +x; index finger = +y, bend
> middle finder = +z). This is also confirmed by an old message I dug up
On Mon, Jan 12, 2015 at 8:48 PM, Charles R Harris
wrote:
>
> That is to say, in this case C long has the same precision as C long
long. That varies depending on the platform, which is one reason the
precision nomenclature came in. It can be confusing, and I've often
fantasized getting rid of the l
Consider this (on a 64-bit platform):
>>> numpy.dtype('q') == numpy.dtype('l')
True
but
>>> numpy.dtype('q').char == numpy.dtype('l').char
False
Is that intended? Shouldn't dtype constructor "normalize" 'l' to 'q' (or
'i')?
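The platform dependence can be inspected directly; a small sketch comparing NumPy's type codes against the C types they wrap:

```python
import ctypes
import numpy as np

# 'i' -> C int, 'l' -> C long, 'q' -> C long long.  On typical 64-bit
# Unix platforms, long and long long are both 8 bytes, which is why the
# two dtypes compare equal while keeping distinct type characters.
for code, ctype in [('i', ctypes.c_int),
                    ('l', ctypes.c_long),
                    ('q', ctypes.c_longlong)]:
    print(code, np.dtype(code).itemsize, ctypes.sizeof(ctype))
```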
On Tue, Jan 6, 2015 at 8:20 PM, Nathaniel Smith wrote:
> > Since matrices are now part of some high school curricula, I urge that
> they
> > be treated appropriately in Numpy. Further, I suggest that
> consideration be
> > given to establishing V and VT sub-classes, to cover vectors and
> transp
A discussion [1] is currently underway at GitHub which will benefit from a
larger forum.
In version 1.9, the diagonal() method was changed to return a read-only
(non-contiguous) view into the original array instead of a plain copy.
Also, it has been announced [2] that in 1.10 the view will become
On Tue, Dec 30, 2014 at 2:49 PM, Benjamin Root wrote:
> Where does it say that operations on masked arrays should not produce NaNs?
Masked arrays were invented with the specific goal to avoid carrying NaNs
in computations. Back in the days, NaNs were not available on some
platforms and had sig
On Tue, Dec 30, 2014 at 1:45 PM, Benjamin Root wrote:
> What do you mean that the mean function doesn't take care of the case
> where the array is empty? In the example you provided, they both end up
> being NaN, which is exactly correct.
Operations on masked arrays should not produce NaNs. Th
I probably miss something very basic, but how given two arrays a and b, can
I find positions in a where elements of b are located? If a were sorted, I
could use searchsorted, but I don't want to get valid positions for
elements that are not in a. In my case, a has unique elements, but in the
gene
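One common recipe for this lookup problem (my sketch, not the thread's conclusion): sort a once, searchsorted into the sorted copy, then map back through the sorting permutation and mask out elements of b that are absent from a.

```python
import numpy as np

def find_positions(a, b):
    """Return, for each element of b, its index in a, or -1 if absent.
    Assumes the elements of a are unique, as stated in the question."""
    order = np.argsort(a)
    sorted_a = a[order]
    idx = np.searchsorted(sorted_a, b)
    idx[idx == len(a)] = 0            # clamp out-of-range hits
    pos = order[idx]
    valid = a[pos] == b               # keep only true matches
    return np.where(valid, pos, -1)   # -1 marks "not found"

a = np.array([10, 30, 20, 40])
b = np.array([20, 40, 50])
print(find_positions(a, b))  # -> [ 2  3 -1]
```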
On Sun, Nov 2, 2014 at 2:32 PM, Warren Weckesser wrote:
>
>> Still, the case of dtype=None, name=None is problematic. Suppose I want
>> genfromtxt() to detect the column names from the 1-st row and data types
>> from the 3-rd. How would you do that?
>>
>>
>
> This may sound like a cop out, bu
Sorry, I meant names=True, not name=None.
On Sun, Nov 2, 2014 at 2:18 PM, Alexander Belopolsky
wrote:
>
> On Sun, Nov 2, 2014 at 1:56 PM, Warren Weckesser <
> warren.weckes...@gmail.com> wrote:
>
>> Or you could just call genfromtxt() once with `max_rows=1` to skip a
>
On Sun, Nov 2, 2014 at 1:56 PM, Warren Weckesser wrote:
> Or you could just call genfromtxt() once with `max_rows=1` to skip a row.
> (I'm assuming that the first argument to genfromtxt is the open file
> object--or some other iterator--and not the filename.)
That's hackish. If I have to resor
On Sat, Nov 1, 2014 at 3:15 PM, Warren Weckesser wrote:
> Is there wider interest in such an argument to `genfromtxt`? For my
> use-cases, `max_rows` is sufficient. I can't recall ever needing the full
> generality of a slice for pulling apart a text file. Does anyone have
> compelling use-cas
On Wed, Oct 29, 2014 at 5:39 AM, Andrew Nelson wrote:
> I have a 4D array, A, that has the shape (NX, NY, 2, 2). I wish to
> perform matrix multiplication of the 'NY' 2x2 matrices, resulting in the
> matrix B. B would have shape (NX, 2, 2).
What you are looking for is dot.reduce and NumPy doe
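NumPy has no dot.reduce; one workable substitute (my sketch, not necessarily the thread's answer) folds the NY axis together with repeated matmul:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
NX, NY = 4, 3
A = rng.standard_normal((NX, NY, 2, 2))

# Multiply the NY 2x2 matrices together for each of the NX entries;
# matmul broadcasts over the leading NX axis at every step.
B = reduce(np.matmul, (A[:, k] for k in range(NY)))
print(B.shape)  # -> (4, 2, 2)
```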
On Tue, Oct 28, 2014 at 10:11 PM, Nathaniel Smith wrote:
> .diagonal has no magic, it just turns out that the diagonal of any strided
> array is also expressible as a strided array. (Specifically, new_strides =
> (sum(old_strides),).)
This is genius! Once you mentioned this, it is obvious how
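The observation can be sketched with as_strided (an illustration, not code from the thread):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# A diagonal view is just a 1-d strided array whose stride is the sum
# of the original strides.
x = np.arange(12).reshape(3, 4)
n = min(x.shape)
diag = as_strided(x, shape=(n,), strides=(sum(x.strides),))
print(diag)  # -> [ 0  5 10]
```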
On Tue, Oct 28, 2014 at 10:11 PM, Nathaniel Smith wrote:
> > I don't think so - I think all the heavy lifting is already done in
> flatiter. The missing parts are mostly trivial things like .size or .shape
> or can be fudged by coercing to true ndarray using existing
> flatiter.__array__ method.
On Tue, Oct 28, 2014 at 9:23 PM, Nathaniel Smith wrote:
> OTOH trying to make .flat into a full duck-compatible ndarray-like
> type is a non-starter; it would take a tremendous amount of work for
> no clear gain.
>
I don't think so - I think all the heavy lifting is already done in
flatiter. Th
On Tue, Oct 28, 2014 at 1:42 PM, Stephan Hoyer wrote:
> .flat lets you iterate over all elements of a N-dimensional array as if it
> was 1D, without ever needing to make a copy of the array. In contrast,
> ravel() and reshape(-1) cannot always avoid a copy, because they need to
> return another n
On Mon, Oct 27, 2014 at 9:41 PM, Yuxiang Wang wrote:
> In my opinion - because they don't do the same thing, especially when
> you think in terms in lower-level.
>
> ndarray.flat returns an iterator; ndarray.flatten() returns a copy;
> ndarray.ravel() only makes copies when necessary; ndarray.res
Given an n-dim array x, I can do
1. x.flat
2. x.flatten()
3. x.ravel()
4. x.reshape(-1)
Each of these expressions returns a flat version of x with some
variations. Why does NumPy implement four different ways to do essentially
the same thing?
Also, the use of strings will confuse most syntax highlighters. Compare
the two options in this screenshot:
[image: Inline image 2]
On Fri, Jul 11, 2014 at 4:30 PM, Daniel da Silva
wrote:
> If leading a presentation on scientific computing in Python to beginners,
> which would look better on a bullet in a slide?
>
>    - np.build('.2 .7 .1; .3 .5 .2; .1 .1 .9')
>
>    - np.array([[.2, .7, .1], [.3, .5, .2], [.1
On Sat, Jul 12, 2014 at 8:02 PM, Nathaniel Smith wrote:
> I feel like for most purposes, what we *really* want is a variable length
> string dtype (I.e., where each element can be a different length.).
I've been toying with the idea of creating an array type for interned
strings. In many appl
On Sun, Jul 6, 2014 at 10:59 PM, Eric Firing wrote:
> > I would suggest calling it something like np.array_simple or
> > np.array_from_string, but the best choice IMO, would be
> > np.ndarray.from_string (a static constructor method).
>
>
> I think the problem is that this defeats the point: mini
On Sun, Jul 6, 2014 at 6:06 PM, Eric Firing wrote:
> (I'm not entirely convinced
> np.arr() is a good idea at all; but if it is, it must be kept simple.)
>
If you are going to introduce this functionality, please don't call it
np.arr.
Right now, np.a presents you with a whopping 53 completion
On Mon, Jun 2, 2014 at 12:25 PM, Charles R Harris wrote:
> I think the masked array code is also due a cleanup/rationalization. Any
> comments you have along that line are welcome.
Here are a few thoughts:
1. Please avoid another major rewrite.
2. Stop pretending that instances of ma.MaskedArr
On Mon, Jun 2, 2014 at 11:48 AM, Charles R Harris wrote:
> Masked arrays have no maintainer, and haven't for several years, nor do I
> see anyone coming along to take it up.
I was effectively a maintainer of ma after Numeric -> numpy transition and
before it was rewritten to use inheritance fro
>
> It seems that there is not a percentile function for masked array in numpy
> or scipy?
>
Percentile is not the only function missing in ma. See for example
https://github.com/numpy/numpy/issues/4356
https://github.com/numpy/numpy/issues/4355
It seems to me that ma was treated on par with
On Mon, Apr 7, 2014 at 11:12 AM, Björn Dahlgren wrote:
> I think the code needed for the general n dimensional case with m number
> of arrays
> is non-trivial enough for it to be useful to provide such a function in
> numpy
>
As of version 1.8.1, I count 571 public names in numpy namespace:
>>>
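The truncated count can be reproduced with a one-liner (the exact figure varies by version; 571 was the 1.8.1 count):

```python
import numpy as np

# Count names in the numpy namespace that don't start with an underscore.
public = [name for name in dir(np) if not name.startswith('_')]
print(len(public))  # 571 on 1.8.1; differs on other versions
```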
On Wed, Apr 16, 2014 at 3:50 PM, Fernando Perez wrote:
> Does argument clinic work with python2 or is it python3 only?
>
> http://legacy.python.org/dev/peps/pep-0436/
>
It is python3 only, but it should not be hard to adapt it to generate 2/3
compatible code.
On Sat, Apr 12, 2014 at 5:03 PM, Sebastian Berg
wrote:
> > As a simple example, suppose for array `a` I want
> > np.flatnonzero(a>0) and np.flatnonzero(a<=0).
> > Can I get them both in one go?
> >
>
> Might be missing something, but I don't think there is a way to do it in
> one go. The result is
On Sat, Apr 12, 2014 at 4:47 PM, Alan G Isaac wrote:
> As a simple example, suppose for array `a` I want
> np.flatnonzero(a>0) and np.flatnonzero(a<=0).
> Can I get them both in one go?
>
I don't think you can do better than
x = a > 0
p, q = np.flatnonzero(x), np.flatnonzero(~x)
On Sat, Apr 12, 2014 at 10:02 AM, Alan G Isaac wrote:
> Are there any considerations besides convenience in choosing
> between:
>
> a&b      a*b       logical_and(a,b)
> a|b      a+b       logical_or(a,b)
> ~a       True-a    logical_not(a)
>
Boolean "-" is being deprecated:
On Fri, Apr 11, 2014 at 7:58 PM, Stephan Hoyer wrote:
> print datetime(2010, 1, 1) < np.datetime64('2011-01-01') # raises exception
This is somewhat consistent with
>>> from datetime import *
>>> datetime(2010, 1, 1) < date(2010, 1, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Benjamin Peterson has posted a complete patch implementing the @ operator
for Python 3.5:
http://bugs.python.org/file34762/mat-mult5.patch
Now we should implement matmul in numpy:
https://github.com/numpy/numpy/issues/4464
I took liberty and reposted this as an "ENH" issue on the Python bug
tracker.
http://bugs.python.org/issue21176
On Mon, Apr 7, 2014 at 7:23 PM, Nathaniel Smith wrote:
> Guido just formally accepted PEP 465:
> https://mail.python.org/pipermail/python-dev/2014-April/133819.html
> http://lega
On Tue, Apr 1, 2014 at 1:12 PM, Nathaniel Smith wrote:
> In [6]: a[0] = "garbage"
> ValueError: could not convert string to float: garbage
>
> (Cf, "Errors should never pass silently".) Any reason why datetime64
> should be different?
>
datetime64 is different because it has NaT support from the
On Tue, Apr 1, 2014 at 12:10 PM, Chris Barker wrote:
> It seems this committee of two has come to a consensus on naive -- and
> you're probably right, raise an exception if there is a time zone specifier.
Count me as +1 on naive, but consider converting garbage (including strings
with trailing
On Tue, Apr 1, 2014 at 12:10 PM, Chris Barker wrote:
> "For a naive object, the %z and %Z format codes are replaced by empty
> strings."
>
> though I'm not entirely sure what that means -- probably only for writing.
>
That's right:
>>> from datetime import *
>>> datetime.now().strftime('%z')
''
On Mon, Mar 24, 2014 at 11:32 AM, Alan G Isaac wrote:
> I'm wondering if `sort` intentionally does not accept a `key`
> or if this is just a missing feature?
>
It would be very inefficient to call a key function on every element
compared during the sort. See np.argsort and np.lexsort for faste
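The idiom hinted at here: compute the key once per element and sort with argsort (np.lexsort generalizes this to several keys):

```python
import numpy as np

x = np.array([-3, 1, -2, 5])
# The key is evaluated once per element, not once per comparison.
order = np.argsort(np.abs(x))
print(x[order].tolist())  # -> [1, -2, -3, 5]
```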
On Sat, Mar 22, 2014 at 10:35 PM, Sturla Molden wrote:
> On the other hand, this
>
> vec.T @ Mat @ Mat
>
> would not need parentheses for optimisation when the associativity is
left.
>
>
Nor does it require .T if vec is 1d.
>
> By the way, the * operator for np.matrix and Matlab matrices are
On Sat, Mar 22, 2014 at 2:13 PM, Nathaniel Smith wrote:
> If you think of some other arguments in favor of left-associativity,
> then please share!
>
I argued on python-ideas [1] that given the display properties of python
lists and numpy arrays, vec @ Mat is more natural than Mat @ vec. The
la
On Fri, Mar 21, 2014 at 5:31 PM, Chris Barker wrote:
> But this brings up a good point -- having time zone handling fully
> compatible ith datetime.datetime would have its advantages.
I don't know if everyone is aware of this, but Python stdlib has support
for fixed-offset timezones since versi
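Concretely (datetime.timezone was added in Python 3.2):

```python
from datetime import datetime, timedelta, timezone

# A fixed-offset timezone straight from the stdlib, no third-party library.
est = timezone(timedelta(hours=-5), 'EST')
t = datetime(2014, 3, 21, 17, 31, tzinfo=est)
print(t.isoformat())  # -> 2014-03-21T17:31:00-05:00
```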
On Thu, Mar 20, 2014 at 7:27 PM, Chris Barker wrote:
> On Thu, Mar 20, 2014 at 4:16 AM, Nathaniel Smith wrote:
>
>> Your NEP suggests making all datetime64s be in UTC, and treating string
>> representations from unknown timezones as UTC. How does this differ from,
>> and why is it superior to, m
On Thu, Mar 20, 2014 at 9:39 AM, Sankarshan Mudkavi
wrote:
> A naive datetime64 would be unable to handle this, and would either have
> to ignore the tzinfo or would have to throw up an exception.
This is not true. Python's own datetime has no problem handling this:
>>> t1 = datetime(2000,1,1,
On Thu, Mar 20, 2014 at 7:16 AM, Nathaniel Smith wrote:
> Your NEP suggests making all datetime64s be in UTC, and treating string
> representations from unknown timezones as UTC.
I recall that it was at some point suggested that epoch be part of dtype.
I was not able to find the reasons for a
On Thu, Mar 20, 2014 at 9:10 AM, Andrew Dalke wrote:
> In DSL space, that means @ could be used as the inverse of ** by those
> who want to discard any ties to its use in numerics. Considering it
> now, I agree this would indeed open up some design space.
>
> I don't see anything disastrously wron
On Mon, Mar 17, 2014 at 8:54 PM, Nathaniel Smith wrote:
>
> Currently Python has 3 different kinds of ops: left-associative (most
> of them), right-associative (**), and "chaining". Chaining is used for
> comparison ops. Example:
>
>a < b < c
>
> gets parsed to something like
>
>do_compari
On Mon, Mar 17, 2014 at 6:33 PM, Christophe Bal wrote:
>
> Defining *-product to have stronger priority than the @-product, and this
> last having stronger priority than +, will make the changes in the grammar
> easier.
>
The easiest is to give @ the same precedence as *. This will only requir
On Mon, Mar 17, 2014 at 2:55 PM, wrote:
> I'm again in favor of "left", because it's the simplest to understand
> A.dot(B).dot(C)
>
+1
Note that for many years to come the best option for repeated matrix
product will be A.dot(B).dot(C) ...
People who convert their dot(dot(dot('s to more readab
On Mon, Mar 17, 2014 at 12:13 PM, Nathaniel Smith wrote:
> In practice all
> well-behaved classes have to make sure that they implement __special__
> methods in such a way that all the different variations work, no
> matter which class ends up actually handling the operation.
>
"Well-behaved cla
On Mon, Mar 17, 2014 at 11:48 AM, Nathaniel Smith wrote:
> > One more question that I think should be answered by the PEP and may
> > influence the associativity decision is what happens if in an A @ B @ C
> > expression, each operand has its own type that defines __matmul__ and
> > __rmatmul__?
On Sat, Mar 15, 2014 at 4:00 PM, Charles R Harris wrote:
> These days they are usually written as v*w.T, i.e., the outer product of
> two vectors and are a fairly common occurrence in matrix expressions. For
> instance, covariance matrices are defined as E(v * v.T)
With the current numpy, we c
On Sat, Mar 15, 2014 at 3:29 PM, Nathaniel Smith wrote:
> > It would be nice if u@v@None, or some such, would evaluate as a dyad.
> Or else we will still need the concept of row and column 1-D matrices. I
> still think v.T should set a flag so that one can distinguish u@v.T(dyad)
> from u.T@v(in
On Sat, Mar 15, 2014 at 2:25 PM, Alexander Belopolsky wrote:
> On Fri, Mar 14, 2014 at 11:41 PM, Nathaniel Smith wrote:
>
>> Here's the main blocker for adding a matrix multiply operator '@' to
>> Python: we need to decide what we think its precedence and assoc
On Fri, Mar 14, 2014 at 11:41 PM, Nathaniel Smith wrote:
> Here's the main blocker for adding a matrix multiply operator '@' to
> Python: we need to decide what we think its precedence and associativity
> should be.
I am not ready to form my own opinion, but I hope the following will help
shapi
On Fri, Feb 28, 2014 at 10:34 AM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:
>
>
> Whatever happened to duck typing?
>
http://legacy.python.org/dev/peps/pep-3119/#abcs-vs-duck-typing
On Tue, Feb 25, 2014 at 11:29 AM, Benjamin Root wrote:
> I seem to recall reading somewhere that pickles are not intended to be
> long-term archives as there is no guarantee that a pickle made in one
> version of python would work in another version, much less between
> different versions of the
I would like to invite numpy community to weigh in on the idea that is
getting momentum at
https://mail.python.org/pipermail/python-ideas/2014-February/025437.html
The main motivation is to provide syntactic alternative to proliferation of
default value options, so that
x = getattr(u, 'answer',
On Fri, Feb 14, 2014 at 4:51 PM, Charles G. Waldman wrote:
> >>> d = numpy.dtype(int)
> >>> if d: print "OK"
> ... else: print "I'm surprised"
>
> I'm surprised
I think this is an artifact of regular dtypes having "length" of zero:
>>> len(arra
On Mon, Feb 10, 2014 at 11:31 AM, Nathaniel Smith wrote:
> And in the long run, I
> think the goal is to move people away from inheriting from np.ndarray.
>
This is music to my ears, but what is the future of numpy.ma? I understand
that numpy.oldnumeric.ma (the older version written without inh
On Sun, Feb 9, 2014 at 4:59 PM, alex wrote:
> On the other hand, it really needs to be deprecated.
While numpy.matrix may have its problems, a NEP should list a better
rationale than the above to gain acceptance.
Personally, I decided not to use numpy.matrix in production code about 10
years a
On Sun, Feb 2, 2014 at 2:58 PM, Mads Ipsen wrote:
> Since atoms [1,2,3,7,8] have been
> deleted, the remaining atoms with indices larger than the deleted atoms
> must be decremented.
>
Let
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
and
>>> i = [1, 0, 2]
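A vectorized renumbering recipe for the question above (my sketch; the thread's answer is cut off): each surviving index decreases by the number of deleted indices below it, which searchsorted counts directly.

```python
import numpy as np

# Indices [1, 2, 3, 7, 8] were deleted, as in the question.
deleted = np.array([1, 2, 3, 7, 8])
survivors = np.array([0, 4, 5, 6, 9, 10])

# searchsorted(deleted, s) == number of deleted indices below s.
new_index = survivors - np.searchsorted(deleted, survivors)
print(new_index.tolist())  # -> [0, 1, 2, 3, 4, 5]
```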
On Sat, Dec 14, 2013 at 2:59 PM, David Cournapeau wrote:
> There is indeed no support in NumPy for this. Unfortunately, fixing this
> would be a significant amount of work, as buffer management is not really
> abstracted in NumPy ATM.
While providing a full support for indirect buffers as a stor
PEP 3118 [1] allows exposing multi-dimensional data that is organized as
array of pointers. It appears, however that NumPy cannot consume such
memory views.
Looking at _array_from_buffer_3118() function [2], I don't see any attempt
to process suboffsets. The documentation [3] is also silent on
On Fri, Dec 6, 2013 at 1:46 PM, Alan G Isaac wrote:
> On 12/6/2013 1:35 PM, josef.p...@gmail.com wrote:
> > unary versus binary minus
>
> Oh right; I consider binary `-` broken for
> Boolean arrays. (Sorry Alexander; I did not
> see your entire issue.)
>
>
> > I'd rather write ~ than unary - if t
On Fri, Dec 6, 2013 at 11:13 AM, Alan G Isaac wrote:
> On 12/5/2013 11:14 PM, Alexander Belopolsky wrote:
> > did you find minus to be as useful?
>
>
> It is also a correct usage.
>
>
Can you provide a reference?
> I think a good approach to this is to first rea
On Thu, Dec 5, 2013 at 11:05 PM, Alan G Isaac wrote:
> For + and * (and thus `dot`), this will "fix" something that is not broken.
+ and * are not broken - just redundant given | and &.
What is really broken is -, both unary and binary:
>>> int(np.bool_(0) - np.bool_(1))
1
>>> int(-np.bool_(0
On Thu, Dec 5, 2013 at 10:35 PM, wrote:
> what about np.dot, np.dot(mask, x) which is the same as (mask *
> x).sum(0) ?
I am not sure which way your argument goes, but I don't think you would
find the following natural:
>>> x = array([True, True])
>>> dot(x,x)
True
>>> (x*x).sum()
2
>>> (x*
On Thu, Dec 5, 2013 at 5:37 PM, Sebastian Berg
wrote:
> there was a discussion that for numpy booleans math operators +,-,* (and
> the unary -), while defined, are not very helpful.
It has been suggested at the Github that there is an area where it is
useful to have linear algebra operations like
On Thu, Dec 5, 2013 at 5:37 PM, Sebastian Berg
wrote:
> For the moment I saw one "annoying" change in
> numpy, and that is `abs(x - y)` being used for allclose and working
> nicely currently.
>
It would probably be an improvement if allclose returned all(x == y) unless
one of the arguments is ine