04.11.2011 19:59, Pauli Virtanen wrote:
[clip]
This makes inline binary ops
behave like Nn. Reductions are N. (Assignment: dC, reductions: N, binary
ops: PX, unary ops: PC, inline binary ops: Nn).
Sorry, inline binary ops are also PdX, not Nn.
--
Pauli Virtanen
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen p...@iki.fi wrote:
I have a feeling that if you don't start by mathematically defining the
scalar operations first, and only after that generalize them to arrays,
some conceptual problems may follow.
Yes. I was going to mention this point as
NAN and NA apparently fall into the PdS class.
Here is where I think we need to be a bit more careful. It is true that we want
NAN and MISSING to propagate, but then we additionally want to ignore them
sometimes. This is precisely why we have functions like nansum. Although
people are
On Fri, Nov 4, 2011 at 1:03 PM, Gary Strangman
str...@nmr.mgh.harvard.edu wrote:
To push this forward a bit, can I propose that IGNORE behave as: PnC
x = np.array([1, 2, 3])
y = np.array([10, 20, 30])
ignore(x[2])
x
[1, IGNORED(2), 3]
x + 2
[3, IGNORED(4), 5]
x + y
[11,
On Fri, Nov 4, 2011 at 1:03 PM, Gary Strangman str...@nmr.mgh.harvard.edu
wrote:
To push this forward a bit, can I propose that IGNORE behave as: PnC
x = np.array([1, 2, 3])
y = np.array([10, 20, 30])
ignore(x[2])
x
[1, IGNORED(2), 3]
x +
On Fri, Nov 4, 2011 at 1:59 PM, Pauli Virtanen p...@iki.fi wrote:
For shorthand, we can refer to the above choices with the nomenclature
shorthand ::= propagation destructivity payload_type
propagation ::= P | N
destructivity ::= d | n | s
payload_type ::= S | E | C
I really
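For concreteness, the shorthand grammar above can be encoded as a quick validity check. This is a hypothetical sketch (the regex and name are mine, not from the thread); note the thread also informally uses two-letter forms like "Nn" and a wildcard "X", which this strict reading of the grammar rejects:

```python
import re

# shorthand  ::= propagation destructivity payload_type
# propagation  ::= P | N
# destructivity ::= d | n | s
# payload_type  ::= S | E | C
SHORTHAND = re.compile(r"^[PN][dns][SEC]$")

print(bool(SHORTHAND.match("PnC")))  # True: propagating, non-destructive, C payload
print(bool(SHORTHAND.match("PdS")))  # True
print(bool(SHORTHAND.match("Nn")))   # False: payload_type missing
```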
On Fri, Nov 4, 2011 at 1:22 PM, T J tjhn...@gmail.com wrote:
I agree that it would be ideal if the default were to skip IGNORED values,
but that behavior seems inconsistent with its propagation properties (such
as when adding arrays with IGNORED values). To illustrate, when we did
x+2, we
04.11.2011 20:49, T J wrote:
[clip]
To push this forward a bit, can I propose that IGNORE behave as: PnC
The *n* classes can be a bit confusing in Python:
### PnC
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
ignore(y[1])
z = x + y
z
np.array([5, IGNORE(7), 9])
x += y
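For comparison only (this is today's np.ma, not the PnC proposal), NumPy's existing masked arrays show exactly the in-place wrinkle under discussion: a plain binary op returns a new array with the mask propagated, while `+=` writes the mask destructively into `x`:

```python
import numpy as np

x = np.ma.array([1, 2, 3])
y = np.ma.array([4, 5, 6], mask=[False, True, False])  # y[1] is "ignored"

z = x + y                        # new array: the mask propagates to the result
print(z.mask.tolist())           # [False, True, False]

x += y                           # in-place: x itself now carries a mask at index 1
print(bool(np.ma.getmaskarray(x)[1]))  # True
```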
On Fri, Nov 4, 2011 at 2:41 PM, Pauli Virtanen p...@iki.fi wrote:
04.11.2011 20:49, T J wrote:
[clip]
To push this forward a bit, can I propose that IGNORE behave as: PnC
The *n* classes can be a bit confusing in Python:
### PnC
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen p...@iki.fi wrote:
I have a feeling that if you don't start by mathematically defining the
scalar operations first, and only after that generalize them to arrays,
some conceptual problems may follow.
On the other hand, I should note that
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 1:22 PM, T J tjhn...@gmail.com wrote:
I agree that it would be ideal if the default were to skip IGNORED values,
but that behavior seems inconsistent with its propagation properties (such
as when
On Fri, Nov 4, 2011 at 3:04 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen p...@iki.fi wrote:
If classified this way, behaviour of items in np.ma arrays is different
in different operations, but seems roughly PdX, where X stands for
returning a masked
04.11.2011 22:57, T J wrote:
[clip]
(m) mark-ignored
a := SPECIAL_1
# -> a == SPECIAL_a ; the payload of the RHS is neglected,
#    the assigned value has the original LHS
#    as the payload
[clip]
Does this behave as expected for x + y
On Fri, Nov 4, 2011 at 3:08 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith n...@pobox.com wrote:
Continuing my theme of looking for consensus first... there are
obviously a ton of ugly corners in here. But my impression is that at
least for some simple cases,
04.11.2011 23:04, Nathaniel Smith wrote:
[clip]
Assuming that, I believe that what people want for IGNORED values is
unop(SPECIAL_1) == SPECIAL_1
which doesn't seem to be an option in your taxonomy.
Well, you can always add a new branch for rules on what to do with unary
ops.
[clip]
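As a data point, np.ma already has this unary-op branch built in: the mask (the "special" marking) passes through a unary ufunc unchanged, which is the unop(SPECIAL_1) == SPECIAL_1 behaviour being asked for (shown only as a comparison, not as the proposal):

```python
import numpy as np

a = np.ma.array([1.0, 4.0], mask=[False, True])
b = np.negative(a)           # unary ufunc: the masked entry stays masked
print(b.mask.tolist())       # [False, True]
print(b[0])                  # -1.0
```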
On Fri, Nov 4, 2011 at 3:38 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 3:08 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith n...@pobox.com wrote:
Continuing my theme of looking for consensus first... there are
obviously a ton of ugly
Hi,
I noticed this:
(Intel Mac):
In [2]: np.int32(np.float32(2**31))
Out[2]: -2147483648
(PPC):
In [3]: np.int32(np.float32(2**31))
Out[3]: 2147483647
I assume what is happening is that the casting is handed off to the C
library, and that the behavior of the C library differs on these
04.11.2011 23:29, Pauli Virtanen wrote:
[clip]
As the definition concerns only what happens on assignment, it does not
have problems with commutativity.
This is of course then not really true in a wider sense, as an example
from T J shows:
a = 1
a += IGNORE(3)
# -> a := a + IGNORE(3)
# -> a
On Fri, Nov 4, 2011 at 4:29 PM, Pauli Virtanen p...@iki.fi wrote:
04.11.2011 23:29, Pauli Virtanen wrote:
[clip]
As the definition concerns only what happens on assignment, it does not
have problems with commutativity.
This is of course then not really true in a wider sense, as an
04.11.2011 22:29, Nathaniel Smith wrote:
[clip]
Continuing my theme of looking for consensus first... there are
obviously a ton of ugly corners in here. But my impression is that at
least for some simple cases, it's clear what users want:
a = [1, IGNORED(2), 3]
#
Also, how does something like this get handled?
a = [1, 2, IGNORED(3), NaN]
If I were to say, "What is the mean of 'a'?", then I think most of the time
people would want 1.5.
I would want NaN! But that's because the only way I get NaNs is when
I do dumb things like compute log(0), and
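With current NumPy one can already get both answers for the NaN element of this example (IGNORED does not exist yet, so this sketch covers only the NaN part):

```python
import numpy as np

a = np.array([1.0, 2.0, np.nan])

print(np.mean(a))                    # nan: NaN propagates through the reduction
valid = ~np.isnan(a)
print(np.nansum(a) / valid.sum())    # 1.5: NaN skipped, mean over valid entries
```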
05.11.2011 00:14, T J wrote:
[clip]
a = 1
a += 2
a += IGNORE
b = 1 + 2 + IGNORE
I think having a == b is essential. If they can be different, that will
only lead to confusion. On this point alone, does anyone think it is
acceptable to have a != b?
It seems to me
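For what it's worth, np.ma's masked constant does satisfy this consistency requirement today: any arithmetic touching np.ma.masked yields masked, so the incremental and one-shot forms agree (again only a comparison with existing behaviour, not the proposal):

```python
import numpy as np

a = np.ma.array(1)
a = a + 2
a = a + np.ma.masked               # incremental form
b = 1 + 2 + np.ma.masked           # one-shot form

# both end up masked, so a and b are indistinguishable
print(np.ma.is_masked(a), np.ma.is_masked(b))
```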
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
05.11.2011 00:14, T J wrote:
[clip]
a = 1
a += 2
a += IGNORE
b = 1 + 2 + IGNORE
I think having a == b is essential. If they can be different, that will
only lead to confusion. On this point
On Fri, Nov 4, 2011 at 7:43 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
An acid test for proposed rules: given two arrays `a` and `b`,
a = [1, 2, IGNORED(3), IGNORED(4)]
b = [10, IGNORED(20), 30, IGNORED(40)]
[...]
(A1)
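As a concrete baseline for the acid test, np.ma today answers it with mask propagation on binary ops and skipping in reductions (shown here only so the proposals can be compared against existing behaviour):

```python
import numpy as np

a = np.ma.array([1, 2, 3, 4], mask=[0, 0, 1, 1])      # IGNORED(3), IGNORED(4)
b = np.ma.array([10, 20, 30, 40], mask=[0, 1, 0, 1])  # IGNORED(20), IGNORED(40)

s = a + b
print(s.mask.tolist())          # [False, True, True, True]: either mask propagates
print(s.compressed().tolist())  # [11]: the only fully unmasked pair
print(int(a.sum()))             # 3: reductions skip masked entries
```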
On Fri, Nov 4, 2011 at 8:03 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 7:43 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
An acid test for proposed rules: given two arrays `a` and `b`,
a = [1, 2,
On Fri, Nov 4, 2011 at 10:33 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 8:03 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 7:43 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
An acid test for proposed
On Thu, Nov 3, 2011 at 7:54 PM, Gary Strangman
str...@nmr.mgh.harvard.edu wrote:
For the non-destructive+propagating case, do I understand correctly that
this would mean I (as a user) could temporarily decide to IGNORE certain
portions of my data, perform a series of computation on that data,
On Friday, November 4, 2011, Nathaniel Smith n...@pobox.com wrote:
On Thu, Nov 3, 2011 at 7:54 PM, Gary Strangman
str...@nmr.mgh.harvard.edu wrote:
For the non-destructive+propagating case, do I understand correctly that
this would mean I (as a user) could temporarily decide to IGNORE certain
non-destructive+propagating -- it really depends on exactly what
computations you want to perform, and how you expect them to work. The
main difference is how reduction operations are treated. I kind of
feel like the non-propagating version makes more sense overall, but I
don't know if
Gary Strangman writes:
For the non-destructive+propagating case, do I understand correctly that
this would mean I (as a user) could temporarily decide to IGNORE certain
portions of my data, perform a series of computation on that data, and the
IGNORED flag (or however it is implemented)
On Fri, 4 Nov 2011, Benjamin Root wrote:
On Friday, November 4, 2011, Gary Strangman str...@nmr.mgh.harvard.edu
wrote:
non-destructive+propagating -- it really depends on exactly what
computations you want to perform, and how you expect them to work. The
main difference is how reduction
Gary Strangman writes:
[...]
Given I'm still fuzzy on all the distinctions, perhaps someone could try to
help me (and others?) to define all /4/ logical possibilities ... some may be
obvious dead-ends. I'll take a stab at them, but these should definitely get
edited by others:
On Fri, Nov 4, 2011 at 5:26 AM, Pierre GM pgmdevl...@gmail.com wrote:
On Nov 03, 2011, at 23:07 , Joe Kington wrote:
I'm not sure if this is exactly a bug, per se, but it's a very confusing
consequence of the current design of masked arrays…
I would just add an "I think" between the "but" and
On Fri, Nov 4, 2011 at 11:08 AM, Lluís xscr...@gmx.net wrote:
Gary Strangman writes:
[...]
destructive + non-propagating = the data point is truly missing, this is the
nature of that data point, such missingness should be replicated in
elementwise operations, but such missingness
destructive + propagating = the data point is truly missing (satellite fell
into the ocean; dog ate my source datasheet, or whatever), this is the nature
of that data point, such missingness should be replicated in elementwise
operations, and the missingness SHOULD interfere with
On Fri, Nov 4, 2011 at 11:08 AM, Lluís xscr...@gmx.net wrote:
Gary Strangman writes:
[...]
destructive + non-propagating = the data point is truly missing, this is
the nature of that data point, such missingness should be replicated in
elementwise
Benjamin Root writes:
On Fri, Nov 4, 2011 at 11:08 AM, Lluís xscr...@gmx.net wrote:
Gary Strangman writes:
[...]
destructive + non-propagating = the data point is truly missing, this is the
nature of that data point, such missingness should be replicated in
elementwise
For np.gradient(), one can specify a sample distance for each axis to apply
to the gradient. But all this does is divide the gradient by the sample
distance. I could easily do that myself with the output from gradient.
Wouldn't it be more valuable to be able to specify the width of the
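On the first point: passing the sample distance to np.gradient is indeed equivalent to dividing the unscaled gradient afterwards, as the sketch below checks; the variable-width idea would need a different interface:

```python
import numpy as np

y = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
dx = 0.5

g1 = np.gradient(y, dx)       # spacing handled by gradient itself
g2 = np.gradient(y) / dx      # manual division of the unit-spacing gradient
print(np.allclose(g1, g2))    # True
```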
04.11.2011 17:31, Gary Strangman wrote:
[clip]
The question does still remain what to do when performing operations like
those above in IGNORE cases. Perform the operation underneath? Or not?
I have a feeling that if you don't start by mathematically defining the
scalar operations first,