Re: [Numpy-discussion] Zero Division not handled correctly?

2009-12-08 Thread David Cournapeau
On Mon, Dec 7, 2009 at 6:16 AM, Skipper Seabold jsseab...@gmail.com wrote:
 I believe this is known, but I am surprised that division by integer
 zero results in the following.

 In [1]: import numpy as np

 In [2]: np.__version__
 Out[2]: '1.4.0.dev7539'

 In [3]: 0**-1 # or 0**-1/-1
 ---
 ZeroDivisionError                         Traceback (most recent call last)

/home/skipper/school/Data/ascii/numpy/<ipython console> in <module>()

 ZeroDivisionError: 0.0 cannot be raised to a negative power

 In [4]: np.array([0.])**-1
 Out[4]: array([ Inf])

 In [5]: np.array([0.])**-1/-1
 Out[5]: array([-Inf])

 In [6]: np.array([0])**-1.
 Out[6]: array([ Inf])

 In [7]: np.array([0])**-1./-1
 Out[7]: array([-Inf])

 In [8]: np.array([0])**-1
 Out[8]: array([-9223372036854775808])

 In [9]: np.array([0])**-1/-1
 Floating point exception

 This last command crashes the interpreter.

 There have been some threads about similar issues over the years, but
 I'm wondering if this is still intended/known or if this should raise
 an exception or return inf or -inf.  I expected a -inf, though maybe
 this is incorrect on my part.

The crash is fixed, but you still get some potentially spurious
warnings. A thorough solution will require some more work.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Build error

2009-12-08 Thread Gael Varoquaux
I just did an SVN up, and I am getting a build error:

$ python setup.py build_ext --inplace
...
building extension numpy.linalg.lapack_lite sources
  adding 'numpy/linalg/lapack_litemodule.c' to sources.
  adding 'numpy/linalg/python_xerbla.c' to sources.
building extension numpy.random.mtrand sources
error: _numpyconfig.h not found in numpy include dirs
['numpy/core/include', 'numpy/core/include/numpy']

It's probably something minor, and I will figure it out, but I wanted to
make sure the devs were aware of the potential hitch.

Gaël


Re: [Numpy-discussion] Build error

2009-12-08 Thread Gael Varoquaux
On Tue, Dec 08, 2009 at 09:25:22AM +0100, Gael Varoquaux wrote:
 I just did an SVN up, and I am getting a build error:

Please ignore this. Brain fart. I should refrain from doing anything
before coffee.

Gaël


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 2:31 AM, David Cournapeau wrote:
 Pierre GM wrote:
 A bit of background first;
 In the first implementations of numpy.core.ma, the approach was to get rid 
 of the data that could cause problems beforehand by replacing them with safe 
 values. Turned out that finding these data is not always obvious (cf the 
 problem of defining a domain when dealing with exponents that we discussed a 
 while back on this list), and that all in all, it is faster to compute first 
 and deal with the problems afterwards. Of course, the user doesn't need the 
 warnings if something goes wrong, so disabling them globally looked like the 
 way to go. I thought the disabling would be only for numpy.ma, though...
 
 I don't think there is an easy (or any) way to disable this at module
 level. Please be sure to always do so in a try/finally (like you would
 handle a file object to guarantee it is always closed, for example),
 because otherwise, test failures and SIGINT (ctrl+C) will pollute the
 user environment.
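The save/restore pattern David recommends can be sketched as follows (a minimal sketch using np.seterr; try/finally guarantees the restore even when an exception or Ctrl-C interrupts the computation):

```python
import numpy as np

# Save the previous error state while silencing warnings for a block of
# computation, and restore it in a finally: clause so that exceptions
# (test failures, Ctrl-C) cannot leave the global state polluted.
old = np.seterr(all='ignore')
try:
    result = np.array([1.0]) / 0.0   # would otherwise trigger a divide warning
finally:
    np.seterr(**old)                 # always restored, whatever happened above
```

On Python 2.5 and later the same pattern is available as `with np.errstate(all='ignore'): ...`, but try/finally keeps it compatible with older interpreters, which matters given the 2.4 discussion below.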

Will try. We're still supporting 2.3, right ?

 Setting/unsetting the FPU state definitely has a cost. I don't know how
 significant it would be for your precise case, though: is the cost
 because of setting/unsetting the state in the test themselves or ? We
 may be able to improve the situation later on once we have better numbers.

I can't tell you, unfortunately... And I'm afraid I won't have too much time to 
write some benchmarks. 

 I have already committed the removal of the global np.seterr in the
 trunk. I feel like backporting this one to 1.4.x is a good idea (because
 it could be considered as a regression), but maybe someone has a strong
 case against it.


Well, there's another cosmetic issue that nags me: when using np functions 
instead of their ma equivalents on masked arrays, a warning might be raised. I 
could probably find a workaround with __array_prepare__, but it may take some 
time (and it probably won't be very pretty). So couldn't we keep things the way 
they are for 1.4.0, and get the fixes for 1.5 only ?



Re: [Numpy-discussion] Zero Division not handled correctly?

2009-12-08 Thread David Goldsmith
Thanks, Dave!

DG

On Tue, Dec 8, 2009 at 12:04 AM, David Cournapeau courn...@gmail.comwrote:

 On Mon, Dec 7, 2009 at 6:16 AM, Skipper Seabold jsseab...@gmail.com
 wrote:
  I believe this is known, but I am surprised that division by integer
  zero results in the following.
 
  In [1]: import numpy as np
 
  In [2]: np.__version__
  Out[2]: '1.4.0.dev7539'
 
  In [3]: 0**-1 # or 0**-1/-1
 
 ---
  ZeroDivisionError Traceback (most recent call
 last)
 
  /home/skipper/school/Data/ascii/numpy/<ipython console> in <module>()
 
  ZeroDivisionError: 0.0 cannot be raised to a negative power
 
  In [4]: np.array([0.])**-1
  Out[4]: array([ Inf])
 
  In [5]: np.array([0.])**-1/-1
  Out[5]: array([-Inf])
 
  In [6]: np.array([0])**-1.
  Out[6]: array([ Inf])
 
  In [7]: np.array([0])**-1./-1
  Out[7]: array([-Inf])
 
  In [8]: np.array([0])**-1
  Out[8]: array([-9223372036854775808])
 
  In [9]: np.array([0])**-1/-1
  Floating point exception
 
  This last command crashes the interpreter.
 
  There have been some threads about similar issues over the years, but
  I'm wondering if this is still intended/known or if this should raise
  an exception or return inf or -inf.  I expected a -inf, though maybe
  this is incorrect on my part.

 The crash is fixed, but you still get some potentially spurious
 warnings. A thorough solution will require some more work.

 David



Re: [Numpy-discussion] Py3 merge

2009-12-08 Thread David Cournapeau
On Tue, Dec 8, 2009 at 8:03 PM, Francesc Alted fal...@pytables.org wrote:


 That's true, but at least this can be attributed to a poor programming
 practice.  The same happens with:

 array([1]).dtype == 'int32'  # in 32-bit systems
 array([1]).dtype == 'int64'  # in 64-bit systems

 and my impression is that the int32/int64 duality for the int default would
 hit many more NumPy people than the U/S for string defaults.

I have not followed this discussion much so far, but I was wondering
whether it would make sense to write our own fixer for 2to3 to handle
some of those issues ? Encoding issues cannot be fixed in 2to3, but
things like changing dtype name should be doable, no ?

I have not looked at the 2to3 code, so I don't know how much work this would be.

cheers,

David


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread David Cournapeau
Pierre GM wrote:

 Will try. We're still supporting 2.3, right ?
   

We stopped supporting 2.3 starting at 1.3, I think. We require 2.4.

cheers,

David


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Ryan May
 At a minimum, this inconsistency needs to be cleared up.  My
 preference
 would be that the programmer should have to explicitly downcast from
 complex to float, and that if he/she fails to do this, that an
 exception be
 triggered.

 That would most likely break a *lot* of deployed code that depends on
 the implicit downcast behaviour. A less harmful solution (if a
 solution is warranted, which is for the Council of the Elders to
 decide) would be to treat the Python complex type as a special case,
 so that the .real attribute is accessed instead of trying to cast to
 float.

Except that the exception raised on downcast is the behavior we really
want.  We don't need python complex types introducing subtle bugs as
well.

I understand why we have the silent downcast from complex to float,
but I consider it a wart, not a feature.  I've lost hours tracking
down bugs where I'm putting complex data from some routine into a new
array (without specifying a dtype) ends up with the complex downcast
silently to float64. The only reason you even notice it is because at
the end you have incorrect answers. I know to look for it now, but for
inexperienced users, it's a pain.
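The pitfall Ryan describes can be reproduced in a few lines (a sketch; the exact behaviour depends on the numpy version -- later releases at least emit a ComplexWarning at the assignment):

```python
import warnings
import numpy as np

complex_data = np.array([1+2j, 3-1j])   # output of some routine
out = np.empty(2)                       # float64, since no dtype was given
with warnings.catch_warnings():
    warnings.simplefilter('ignore')     # newer numpy warns here; older is silent
    out[:] = complex_data               # imaginary parts are discarded
print(out)                              # real parts only -- no error raised
```

Nothing fails at assignment time; the wrong answers only surface downstream, which is exactly the debugging cost described above.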

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Charles R Harris
On Tue, Dec 8, 2009 at 12:31 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Pierre GM wrote:
  A bit of background first;
 In the first implementations of numpy.core.ma, the approach was to get
 rid of the data that could cause problems beforehand by replacing them with
 safe values. Turned out that finding these data is not always obvious (cf
 the problem of defining a domain when dealing with exponents that we
 discussed a while back on this list), and that all in all, it is faster to
 compute first and deal with the problems afterwards. Of course, the user
 doesn't need the warnings if something goes wrong, so disabling them
 globally looked like the way to go. I thought the disabling would be only
 for numpy.ma, though...

 I don't think there is an easy (or any) way to disable this at module
 level. Please be sure to always do so in a try/finally (like you would
 handle a file object to guarantee it is always closed, for example),
 because otherwise, test failures and SIGINT (ctrl+C) will pollute the
 user environment.

 
  Anyhow, running numpy.ma.test() on 1.5.x, I get 36 warnings by getting
 rid of the np.seterr(all='ignore') in numpy.ma.core. I can go down to 2 by
 saving the seterr state before the computations in
 _Masked/DomainUnary/BinaryOperation and restoring it after the computation.
 I'm going to try to find where the 2 missing warnings come from.
 The 281 tests + 36 warnings take 4.087s to run, the 2 warning version
 5.95s (but I didn't try to go too much into timing details...)

 Setting/unsetting the FPU state definitely has a cost. I don't know how
 significant it would be for your precise case, though: is the cost
 because of setting/unsetting the state in the test themselves or ? We
 may be able to improve the situation later on once we have better numbers.

 A proper solution to this FPU exception may require hard work (because
 of the inherent asynchronous nature of signals, because signals behave
 very differently on different platforms, and because I don't think we
 can afford spending too many cycles on it).

 
  So, what do you want me to do guys ? Commit the fixes to the trunk ?
 Backporting them to 1.4.x ?

 I have already committed the removal of the global np.seterr in the
 trunk. I feel like backporting this one to 1.4.x is a good idea (because
 it could be considered as a regression), but maybe someone has a strong
 case against it.


At this point it isn't a regression, it is a tradition. I think it best to
leave the fix out of 1.4 and make the change for 1.5 because it is likely to
break user code.

Chuck


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Bruce Southey

On 12/08/2009 08:36 AM, Charles R Harris wrote:



On Tue, Dec 8, 2009 at 12:31 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp mailto:da...@ar.media.kyoto-u.ac.jp 
wrote:


Pierre GM wrote:
 A bit of background first;
 In the first implementations of numpy.core.ma, the approach was to get
rid of the data that could cause problems beforehand by replacing them
with safe values. Turned out that finding these data is not always
obvious (cf the problem of defining a domain when dealing with exponents
that we discussed a while back on this list), and that all in all, it is
faster to compute first and deal with the problems afterwards. Of
course, the user doesn't need the warnings if something goes wrong, so
disabling them globally looked like the way to go. I thought the
disabling would be only for numpy.ma, though...

I don't think there is an easy (or any) way to disable this at module
level. Please be sure to always do so in a try/finally (like you would
handle a file object to guarantee it is always closed, for example),
because otherwise, test failures and SIGINT (ctrl+C) will pollute the
user environment.


 Anyhow, running numpy.ma.test() on 1.5.x, I get 36 warnings by
getting rid of the np.seterr(all='ignore') in numpy.ma.core. I can
go down to 2 by saving the seterr state before the computations in
_Masked/DomainUnary/BinaryOperation and restoring it after the
computation. I'm going to try to find where the 2 missing warnings come
from.
 The 281 tests + 36 warnings take 4.087s to run, the 2 warning
version 5.95s (but I didn't try to go too much into timing details...)

Setting/unsetting the FPU state definitely has a cost. I don't
know how
significant it would be for your precise case, though: is the cost
because of setting/unsetting the state in the test themselves or ? We
may be able to improve the situation later on once we have better
numbers.

A proper solution to this FPU exception may require hard work (because
of the inherent asynchronous nature of signals, because signals behave
very differently on different platforms, and because I don't think we
can afford spending too many cycles on it).


 So, what do you want me to do guys ? Commit the fixes to the
trunk ? Backporting them to 1.4.x ?

I have already committed the removal of the global np.seterr in the
trunk. I feel like backporting this one to 1.4.x is a good idea
(because
it could be considered as a regression), but maybe someone has a
strong
case against it.


At this point it isn't a regression, it is a tradition. I think it 
best to leave the fix out of 1.4 and make the change for 1.5 because 
it is likely to break user code.


Chuck




I understand the reason for the masked arrays behavior but changing the 
seterr default will be a problem until ma is changed. With Python 2.6 
and numpy '1.4.0.dev7750', the current default works but fails when 
changing the seterr default.

>>> a = np.ma.masked_array([-1, 0, 1, 2, 3], mask=[0, 0, 0, 0, 1])
>>> np.sqrt(a)
masked_array(data = [-- 0.0 1.0 1.41421356237 --],
 mask = [ True False False False  True],
   fill_value = 99)
>>> np.seterr(all='raise')
{'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 
'ignore'}

>>> np.sqrt(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: invalid value encountered in sqrt

Furthermore, with np.seterr(all='raise'), scipy.test() stops with a 
numpy error (output below), and scipy also appears to have some test 
errors. Should I submit a bug report for that error?


Also, with seterr(all='raise') there are 25 test errors in numpy 1.3.0 
and 25 test errors in numpy 1.4.0.dev7750 (only 10 are the same). So 
these tests would need to be resolved.


So I agree with Chuck that we need to hold off the change in default 
until after 1.4 because there may be other code that has similar 
behavior to ma and scipy. Also we need both numpy and scipy to pass all 
the tests with this default.


Bruce


>>> sp.__version__
'0.8.0.dev6119'
>>> sp.test()
Running unit tests for scipy
NumPy version 1.4.0.dev7750
NumPy is installed in /usr/lib64/python2.6/site-packages/numpy
SciPy version 0.8.0.dev6119
SciPy is installed in /usr/lib64/python2.6/site-packages/scipy
Python version 2.6 (r26:66714, Jun  8 2009, 16:07:29) [GCC 4.4.0 
20090506 (Red Hat 4.4.0-4)]

nose version 0.10.4

Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread josef . pktd
On Tue, Dec 8, 2009 at 10:21 AM, David Cournapeau courn...@gmail.com wrote:
 On Wed, Dec 9, 2009 at 12:10 AM, Bruce Southey bsout...@gmail.com wrote:


 I understand the reason for the masked arrays behavior but changing the
 seterr default will be a problem until ma is changed. With Python 2.6 and
 numpy '1.4.0.dev7750', the current default works but fails when changing the
 seterr default.

 The default was warn, and not raise. Raise as a default does not make
 much sense.

There are no problems with 'warn'; running scipy.stats.test() just
prints a lot of underflow, zero-division, ... warnings but ends with
zero failures, zero errors. It only adds a bit of noise to the test
output.

Josef


 David



Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Pauli Virtanen
ke, 2009-12-09 kello 00:21 +0900, David Cournapeau kirjoitti:
 On Wed, Dec 9, 2009 at 12:10 AM, Bruce Southey bsout...@gmail.com wrote:
 
 
  I understand the reason for the masked arrays behavior but changing the
  seterr default will be a problem until ma is changed. With Python 2.6 and
  numpy '1.4.0.dev7750', the current default works but fails when changing the
  seterr default.
 
 The default was warn, and not raise. Raise as a default does not make
 much sense.

There are drawbacks with the all='print' default:

It prints to C stdio stderr, which is unsanitary. Perhaps it should
print to Python's sys.stderr instead?

Also, some code that worked OK before would now start to spit out extra
warnings, which is not so nice.

-- 
Pauli Virtanen




Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread David Cournapeau
On Wed, Dec 9, 2009 at 12:57 AM, Pauli Virtanen p...@iki.fi wrote:


 Also, some code that worked OK before would now start to spit out extra
 warnings, which is not so nice.

Hm, there are several things mixed up in this discussion, I feel like
we are not talking about exactly the same thing:
 - I am talking about setting the default back to what it was before,
which was 'warn' and not 'print' AFAIK. This means that things like
np.log(0) will raise a proper warning, which can be filtered globally
if wanted. Same for np.array([1]) / 0, etc... no stderr is involved
AFAICS for simple examples.
 - Because warnings are involved, they will only appear once per
exception type and origin.
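Assuming the 'warn' default David describes, the behaviour can be sketched with the existing seterr machinery: np.log(0) then produces an ordinary Python warning that the standard warnings module can record or filter:

```python
import warnings
import numpy as np

old = np.seterr(all='warn', under='ignore')   # the proposed default
try:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')       # record every occurrence
        np.log(np.array([0.0]))               # divide-by-zero inside log
finally:
    np.seterr(**old)                          # leave global state untouched

for w in caught:
    print(w.category.__name__, w.message)     # RuntimeWarning, filterable
```

Because these go through the warnings machinery, a user who wants silence can apply warnings.filterwarnings globally instead of touching seterr.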

Once this is understood, if you still think it should not be the
default for 1.4.0, I will not cherry-pick it for 1.4.x.

David


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 10:12, David Cournapeau courn...@gmail.com wrote:
 On Wed, Dec 9, 2009 at 12:57 AM, Pauli Virtanen p...@iki.fi wrote:


 Also, some code that worked OK before would now start to spit out extra
 warnings, which is not so nice.

 Hm, there are several things mixed up in this discussion, I feel like
 we are not talking about exactly the same thing:
  - I am talking about setting the default back as before, which was
 warn and not print AFAIK. This means that things like np.log(0) will
 raise a proper warning, which can be filtered globally if wanted.
 Same for np.array([1]) / 0, etc... no stderr is involved AFAICS for
 simple examples
  - Because warns are involved, they will only appear once per
 exception type and origin

The default has always been 'print', not 'warn' (except for underflow,
which was 'ignore'). However, 'warn' is better for the reasons you
state (except for underflow, which should remain 'ignore').

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Pauli Virtanen
Sun, 06 Dec 2009 14:53:58 +0100, Gael Varoquaux wrote:
 I have a lot of code that has stopped working with my latest SVN pull to
 numpy.
 
 * Some compiled code yields an error looking like (from memory):
 
 incorrect type 'numpy.ndarray'

This, by the way, also affects the 1.4.x branch. Because of the datetime 
branch merge, a new field was added to ArrayDescr -- and this breaks 
previously compiled Cython modules.

I guess this should be mentioned in the release notes. We'll probably be 
doing it again in 1.5.0...

-- 
Pauli Virtanen



Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Pauli Virtanen
ti, 2009-12-08 kello 12:12 -0500, Darren Dale kirjoitti:
 On Tue, Dec 8, 2009 at 12:02 PM, Pauli Virtanen pav...@iki.fi wrote:
  Sun, 06 Dec 2009 14:53:58 +0100, Gael Varoquaux wrote:
  I have a lot of code that has stopped working with my latest SVN pull to
  numpy.
 
  * Some compiled code yields an error looking like (from memory):
 
  incorrect type 'numpy.ndarray'
 
  This, by the way, also affects the 1.4.x branch. Because of the datetime
  branch merge, a new field was added to ArrayDescr -- and this breaks
  previously compiled Cython modules.
 
 Will the datetime changes affect other previously-compiled python
 modules, like PyQwt?

Only for those modules that explicitly check the sizeof of various C
structures.

For other modules, I'd expect there be no consequences, as the new
fields were added to the end of the struct.
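Pauli's point -- appending a field leaves every existing offset alone, but grows the total size that strict checks compare -- can be illustrated with toy ctypes structs (the names are illustrative only, not the real PyArray_Descr layout):

```python
import ctypes

class OldDescr(ctypes.Structure):            # toy layout before the merge
    _fields_ = [('kind', ctypes.c_char),
                ('elsize', ctypes.c_int)]

class NewDescr(ctypes.Structure):            # same fields, one appended
    _fields_ = [('kind', ctypes.c_char),
                ('elsize', ctypes.c_int),
                ('metadata', ctypes.c_void_p)]

# Pre-existing members keep their offsets, so code compiled against the
# old layout still finds its fields...
assert OldDescr.elsize.offset == NewDescr.elsize.offset
# ...but the total size grows, which a strict sizeof check (like
# Cython's) rejects even though nothing the old code used has moved.
assert ctypes.sizeof(NewDescr) > ctypes.sizeof(OldDescr)
```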

-- 
Pauli Virtanen




Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread David Cournapeau
On Wed, Dec 9, 2009 at 2:02 AM, Pauli Virtanen pav...@iki.fi wrote:
 Sun, 06 Dec 2009 14:53:58 +0100, Gael Varoquaux wrote:
 I have a lot of code that has stopped working with my latest SVN pull to
 numpy.

 * Some compiled code yields an error looking like (from memory):

     incorrect type 'numpy.ndarray'

 This, by the way, also affects the 1.4.x branch. Because of the datetime
 branch merge, a new field was added to ArrayDescr -- and this breaks
 previously compiled Cython modules.

It seems that it is partly a cython problem. If py3k can be done for
numpy 1.5, I wonder if we should focus on making an incompatible numpy
1.6 (or 2.0 :) ), with an emphasis on making the C api more robust
against those changes, using opaque pointers, functions, etc...
Basically, implementing something like PEP 384, but for numpy.

As numpy becomes more and more used as a basis for so much software,
I feel like the current situation is hurting numpy users quite badly.
Maybe I am overestimating the problem, though?

cheers,

David


Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Gael Varoquaux
On Wed, Dec 09, 2009 at 02:28:46AM +0900, David Cournapeau wrote:
 As numpy becomes more and more used as a basic for so many softwares,
 I feel like the current situation is hurting numpy users quite badly.
 Maybe I am overestimate the problem, though ?

I think you are right. It is going to hurt us pretty badly, in my
institute, where we have several Pythons deployed (the system one, the
NFS one, and often some locally-built packages), and users not fully
aware of the situation.

In addition, it is going to make deploying software harder, and lead to
impossible situations, where the binaries for module A work with one
version of numpy, and the binaries for module B work with another. And
recompiling is not a good option for end users.

Of course, I am not knowledgeable enough to say technically what the
way out, or the best compromise, is. I am just saying that the current
situation will hurt.

My 2 cents,

Gaël


Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 11:28, David Cournapeau courn...@gmail.com wrote:
 On Wed, Dec 9, 2009 at 2:02 AM, Pauli Virtanen pav...@iki.fi wrote:
 Sun, 06 Dec 2009 14:53:58 +0100, Gael Varoquaux wrote:
 I have a lot of code that has stopped working with my latest SVN pull to
 numpy.

 * Some compiled code yields an error looking like (from memory):

     incorrect type 'numpy.ndarray'

 This, by the way, also affects the 1.4.x branch. Because of the datetime
 branch merge, a new field was added to ArrayDescr -- and this breaks
 previously compiled Cython modules.

 It seems that it is partly a cython problem.

It's *entirely* a Cython problem.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread David Cournapeau
On Wed, Dec 9, 2009 at 1:21 AM, Robert Kern robert.k...@gmail.com wrote:

 The default has always been print, not warn (except for underflow,
 which was ignore).

Ah, ok, that explains part of the misunderstanding, sorry for the confusion.

 However, warn is better for the reasons you
 state (except for underflow, which should remain ignore).

do you have an opinion on whether we should keep the current behavior
vs replacing with warn for everything but underflow for 1.4.x ?

David


Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread David Cournapeau
On Wed, Dec 9, 2009 at 2:37 AM, Pauli Virtanen p...@iki.fi wrote:
 ke, 2009-12-09 kello 02:28 +0900, David Cournapeau kirjoitti:
 [clip]
 It seems that it is partly a cython problem. If py3k can be done for
 numpy 1.5, I wonder if we should focus on making an incompatible numpy
 1.6 (or 2.0 :) ), with an emphasis on making the C api more robust
 against those changes, using opaque pointers, functions, etc...
 Basically, implementing something like PEP 384, but for numpy.

 As numpy becomes more and more used as a basis for so much software,
 I feel like the current situation is hurting numpy users quite badly.
 Maybe I am overestimating the problem, though?

 If we add an unused

        void *private

 both to ArrayDescr and ArrayObject for 1.4.x, we can stuff private data
 there and don't need to break the ABI again for 1.5 just because of
 possible changes to implementation details. (And if it turns we can do
 without them in 1.5.x, then we have some leeway for future changes.)

What I had in mind was more thorough: all the structs become as opaque
as possible, and we remove most macros, replacing them with accessors
(or marking the macros as unsafe as far as the ABI goes). This may be
unrealistic, at least in some cases, for speed reasons, though.

Of course, this does not prevent applying your suggested change -
I don't understand why you want to add it to 1.4.0, though. 1.4.0 does
not break the ABI compared to 1.3.0. Or is it just to keep the
cython issue from reappearing in 1.5.0 ?

cheers,

David


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 11:41, David Cournapeau courn...@gmail.com wrote:
 On Wed, Dec 9, 2009 at 1:21 AM, Robert Kern robert.k...@gmail.com wrote:

 The default has always been print, not warn (except for underflow,
 which was ignore).

 Ah, ok, that explains part of the misunderstanding, sorry for the confusion.

 However, warn is better for the reasons you
 state (except for underflow, which should remain ignore).

 do you have an opinion on whether we should keep the current behavior
 vs replacing with warn for everything but underflow for 1.4.x ?

As far as I can tell, the faulty global seterr() has been in place
since 1.1.0, so fixing it at all should be considered a feature
change. It's not likely to actually *break* things except for doctests
and documentation. I think I fall in with Chuck in suggesting that it
should be changed in 1.5.0. I would add that it would be okay to use
the preferable 'warn' option instead of 'print' at that time since it
really isn't a fix anymore, just a new feature.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] Complex zerodivision/negative powers not handled correctly

2009-12-08 Thread Jörgen Stenarson
Hi,

I have observed a problem with complex zero division and negative powers.
With a complex zero the result is either zero or NaN NaNj; the first is 
clearly wrong, and for the other I don't know what is most reasonable: 
some kind of inf, or a NaN.

This problem has been reported in the tracker as #1271.

In [1]: import numpy

In [2]: numpy.__version__
Out[2]: '1.4.0rc1'

In [3]: from numpy import *

In [4]: array([-0., 0])**-1
Out[4]: array([-Inf,  Inf])

In [5]: array([-0., 0])**-2
Out[5]: array([ Inf,  Inf])

In [6]: array([-0.-0j, -0.+0j, 0-0j, 0+0j])**-2
Out[6]: array([ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j])

In [7]: array([-0.-0j, -0.+0j, 0-0j, 0+0j])**-1
Out[7]: array([ NaN NaNj,  NaN NaNj,  NaN NaNj,  NaN NaNj])

In [8]: 1/array([-0.-0j, -0.+0j, 0-0j, 0+0j])
Out[8]: array([ NaN NaNj,  NaN NaNj,  NaN NaNj,  NaN NaNj])
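For comparison, Python's own complex scalar raises on division by zero, while numpy arrays quietly produce non-finite values instead (a sketch; warnings are silenced and the error state restored around the array case):

```python
import numpy as np

# Python scalar complex: division by zero raises immediately.
try:
    (1+0j) / (0+0j)
except ZeroDivisionError as exc:
    print('scalar:', exc)

# numpy complex array: the same operation yields non-finite components
# (NaN NaNj in the version shown above) rather than an exception.
old = np.seterr(all='ignore')        # silence the invalid-value warning
try:
    res = 1 / np.array([0+0j])
finally:
    np.seterr(**old)
print('array:', res)
```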


Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 12:08, Pauli Virtanen p...@iki.fi wrote:
 ke, 2009-12-09 kello 02:47 +0900, David Cournapeau kirjoitti:
 [clip]
 Of course, this does not prevent from applying your suggested change -
 I don't understand why you want to add it to 1.4.0, though. 1.4.0 does
 not break the ABI compared to 1.3.0. Or is it just to avoid the
 cython issue to reappear for 1.5.0 ?

 Yes, it's to avoid having to deal with the Cython issue again in 1.5.0.

Do we have any features on deck that would add a struct member? I
think it's pretty rare for us to do so, as it should be.

 Although it's not strictly speaking an ABI break, it seems this is a bit
 of a nuisance for some people, so if we can work around it cheaply, I
 think we should do it.

Breaking compatibility via a major reorganization of our structs is not cheap!

 We should maybe convince the Cython people to disable this check, at
 least for Numpy.

They appear to be. See the latest messages in the thread Checking
extension type sizes.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Dr. Phillip M. Feldman



David Warde-Farley-2 wrote:
 
 
 A less harmful solution (if a solution is warranted, which is for the
 Council of the Elders to  
 decide) would be to treat the Python complex type as a special case, so
 that the .real attribute is accessed instead of trying to cast to float.
 
 

There are two even less harmful solutions: (1) Raise an exception.  (2)
Provide the user with a top-level flag to control whether the attempt to
downcast a NumPy complex to a float should be handled by raising an
exception, by throwing away the imaginary part, or by taking the magnitude.

P.S. As things stand now, I do not regard NumPy as a reliable platform for
scientific computing.
-- 
View this message in context: 
http://old.nabble.com/Assigning-complex-values-to-a-real-array-tp22383353p26698253.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 12:34, Christopher Barker chris.bar...@noaa.gov wrote:
 Hi folks,

 There was just a question on the wxPython list about how to optimize
 some drawing of data in numpy arrays. Currently, wxPython uses
 PySequenceGetItem to iterate through an array, so you can imagine there
 is a fair bit of overhead in that.

 But what to use?

 We don't want to require numpy, so using the numpy API directly is out.

 Using the buffer interface makes it too hard to catch user errors.

 The array interface was made for this sort of thing, but is deprecated:

 http://docs.scipy.org/doc/numpy/reference/arrays.interface.html

 Is the new PEP 3118 protocol now (as of version 1.4) supported by numpy,
 at least for export? At the moment, a one-way street is OK for this
 application.

I think the wording is overly strong. I don't think that we actually
decided to deprecate the interface. PEP 3118 is not yet implemented by
numpy, and the PEP 3118 API won't be available in Python 2.6
(Cython's workarounds notwithstanding).

Pauli, did we discuss this before you wrote that warning and I'm just
not remembering it?

-- 
Robert Kern



[Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]

I see record arrays don't have a masked_where method. How can I achieve the
following for a record array:

cd.masked_where(cd.co == -.)

Or something like this.

Thanks!

-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26700380.html


[Numpy-discussion] doctest improvements patch (and possible regressions)

2009-12-08 Thread Paul Ivanov
Hi Numpy-devs,

I'm a long time listener, first time caller. I grabbed 1.4.0rc1 and was
happy that all the tests passed. But then I tried:

import numpy as np
np.test(doctests=True)
   ...
   Ran 1696 tests in 22.027s

   FAILED (failures=113, errors=24)

I looked at some of the failures, and they looked like trivial typos. So
I said to myself: Self, wouldn't it be cool if all the doctests worked?

Well, I didn't quite get there spelunking and snorkeling in the source
code a few evenings during the past week, but I got close.  With the
attached patch (which patches the 1.4.0rc1 tarball), I now get:

import numpy as np
np.test(doctests=True)
   ...
   Ran 1696 tests in 20.937s

   FAILED (failures=33, errors=25)


I marked up suspicious differences with XXX, since I don't know if
they're significant. In particular:
 - shortening a defchararray by strip does not change its dtype to a
shorter one (apparently it used to?)
 - the docstring for seterr says that np.seterr() should reset all
errors to defaults, but clearly doesn't do that
 - there's a regression in recfunctions which may be related to #1299
and may have been fixed
 - recfunctions.find_duplicates ignoremask flag has no effect.

There are a few other things, but they're minor (e.g. I added a note
about how missing values are filled with usemask=False in
recfunctions.merge_arrays).

I think the only code I added was to testing/noseclasses.py. There, if a
test fails, I give a few more chances to pass by normalizing the
endianness of both desired and actual output, as well as default int
size for 32 and 64 bit machines. This is done just using replace() on
the strings.
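The replace()-based normalization can be sketched as follows; the substitution table here is illustrative, not the one from the actual patch:

```python
def normalize_doctest_output(text):
    """Normalize platform-dependent spellings before comparing doctest output.

    A toy sketch of the approach described above; the real code lives in
    numpy/testing/noseclasses.py, and the exact substitutions there differ.
    """
    for platform_specific, canonical in (
        ("'<i4'", "'i4'"),               # little-endian byte-order marker
        ("'>i4'", "'i4'"),               # big-endian byte-order marker
        ("dtype=int64", "dtype=int32"),  # default int on 64- vs 32-bit builds
    ):
        text = text.replace(platform_specific, canonical)
    return text

a = normalize_doctest_output("array([1, 2], dtype=int64)")
b = normalize_doctest_output("array([1, 2], dtype=int32)")
```

With both desired and actual output run through the same normalization, a 64-bit machine's repr compares equal to the 32-bit one recorded in the docstring.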

Everything else is docstring stuff, so I was hoping to sneak this into
1.4.0, since it would make it that much more polished.

Does that sound crazy?

best,
Paul Ivanov


better-doctests.patch.gz
Description: GNU Zip compressed data


Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 14:52, Dag Sverre Seljebotn
da...@student.matnat.uio.no wrote:
 Robert Kern wrote:
 On Tue, Dec 8, 2009 at 12:38, Pauli Virtanen p...@iki.fi wrote:

 - We need to cache the buffer protocol format string somewhere,
  if we do not want to regenerate it on each buffer acquisition.

 My suspicion is that YAGNI. I would wait until it is actually in use
 and we see whether it takes up a significant amount of time in actual
 code.

 The slight problem with that is that if somebody discover that this is a
 bottleneck in the code, the turnaround time for waiting for a new NumPy
 release could be quite a while. Not that I think it will ever be a problem.

That's true of anything we might do. I'm just skeptical that
regeneration takes so much time that it will significantly affect real
applications. How often are buffers going to be converted, really?
Particularly since one of the points of this interface is to get the
buffer once and read/write into it many times and avoid copying
anything. Adding a struct member or even using the envisioned dynamic
slots is pretty costly, and is not something that we should do for a
cache until there is some profiling done on real applications.
Premature optimization, root of all evil, and all that.

-- 
Robert Kern



Re: [Numpy-discussion] Cython issues w/ 1.4.0

2009-12-08 Thread Pauli Virtanen
On Tue, 2009-12-08 at 21:52 +0100, Dag Sverre Seljebotn wrote:
[clip] 
 How about this:
   - Cache/store the format string in a bytes object in a global 
 WeakRefKeyDict (?), keyed by dtype
   - The array holds a ref to the dtype, and the Py_buffer holds a ref to 
 the array (through the obj field).

Yep, storage in a static variable is the second alternative. We can even
handle allocation and deallocation manually.

I think I'll make it this way then.

 Alternatively, create a new Python object and stick it in the obj in 
 the Py_buffer, I don't think obj has to point to the actual object the 
 buffer was acquired from, as long as it keeps alive a reference to it 
 somehow (though I didn't find any docs for the obj field, it was added 
 as an afterthought by the implementors after the PEP...). But the only 
 advantage is not using weak references (if that is a problem), and it is 
 probably slower and doesn't cache the string.

The current implementation of MemoryView in Python assumes that you can
call PyObject_GetBuffer(view->obj).

But this can be changed; IIRC, the memoryview also has a ->base member
containing the same info.

  - We need to cache the buffer protocol format string somewhere,
   if we do not want to regenerate it on each buffer acquisition.
  
  My suspicion is that YAGNI. I would wait until it is actually in use
  and we see whether it takes up a significant amount of time in actual
  code.

Sure, it's likely that it won't be a real problem performance-wise, as
it's simple C code.

The point is that the format string needs to be stored somewhere for
later deallocation, and to work around bugs in Python, we cannot put it
in Py_buffer where it would naturally belong to.

But anyway, it may really be best to not pollute object structs because
of a need for workarounds -- I suppose if I submit patches to Python
soon, they may make it in releases before Numpy 1.5.0 rolls out. For
backward compatibility, we'll just make do with static variables.

Ok, the reserved-for-future pointers in structs may not then be needed
after all, at least for this purpose.
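Dag's WeakRefKeyDict idea can be sketched with the standard library's weakref.WeakKeyDictionary. `DummyDtype` and `build_format` below are stand-ins, not numpy APIs, and real dtype objects would have to be weak-referenceable for this to work as written:

```python
import weakref

# Cache the generated buffer-format string in a global weak-keyed mapping,
# so each cached string lives exactly as long as its key object does.
_format_cache = weakref.WeakKeyDictionary()

class DummyDtype:
    """Stand-in for a dtype; instances are weak-referenceable."""
    def __init__(self, code):
        self.code = code

def build_format(dtype):
    # Stand-in for the real format-string generation (native order + code).
    return ("=" + dtype.code).encode("ascii")

def get_format(dtype):
    try:
        return _format_cache[dtype]
    except KeyError:
        fmt = _format_cache[dtype] = build_format(dtype)
        return fmt

dt = DummyDtype("d")
first = get_format(dt)    # built and cached
second = get_format(dt)   # returned from the cache
```

Because the cache holds the only strong reference through the key's lifetime, the entry disappears automatically once the dtype object is garbage collected.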

-- 
Pauli Virtanen





[Numpy-discussion] numpy distutils breaks scipy install on mac

2009-12-08 Thread Mark Sienkiewicz
When I compile scipy on a mac, the build fails with:

...
gfortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f
f951: error: unrecognized command line option -arch
f951: error: unrecognized command line option -arch
f951: error: unrecognized command line option -arch
f951: error: unrecognized command line option -arch
error: Command /sw/bin/gfortran -Wall -ffixed-form 
-fno-second-underscore -arch i686 -arch x86_64 -fPIC -O3 -funroll-loops 
-I/usr/stsci/pyssgdev/2.5.4/numpy/core/include -c -c 
scipy/fftpack/src/dfftpack/dcosqb.f -o 
build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/dfftpack/dcosqb.o 
failed with exit status 1


I have

% gfortran --version
GNU Fortran (GCC) 4.3.0
Copyright (C) 2008 Free Software Foundation, Inc.

% which gfortran
/sw/bin/gfortran


(This /sw/bin apparently means it was installed by fink.  My IT 
department did this.  This is not the recommended compiler from AT&T, 
but it seems a likely configuration to encounter in the wild, and I 
didn't expect a problem. )

I traced the problem to numpy/distutils/fcompiler/gnu.py in the class 
Gnu94FCompiler.  The function _universal_flags() tries to detect which 
processor types are recognized by the compiler, presumably in an attempt 
to make a macintosh universal binary.  It adds -arch whatever for each 
architecture that it thinks it detected.  Since gfortran does not 
recognize -arch, the compile fails.

( Presumably, some other version of gfortan does accept -arch, or this 
code wouldn't be here, right? )

The function _can_target() attempts to recognize what architectures the 
compiler is capable of by passing in -arch parameters with various known 
values, but gfortran does not properly indicate a problem in a way that 
_can_target() can detect:

% gfortran -arch i386 hello_world.f
f951: error: unrecognized command line option -arch
% gfortran -arch i386 -v
Using built-in specs.
Target: i686-apple-darwin9
Configured with: ../gcc-4.3.0/configure --prefix=/sw 
--prefix=/sw/lib/gcc4.3 --mandir=/sw/share/man --infodir=/sw/share/info 
--enable-languages=c,c++,fortran,objc,java --with-arch=nocona 
--with-tune=generic --build=i686-apple-darwin9 --with-gmp=/sw 
--with-libiconv-prefix=/sw --with-system-zlib 
--x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib 
--disable-libjava-multilib
Thread model: posix
gcc version 4.3.0 (GCC)
% echo $status
0
%

That is, when you say -v, it gives no indication that it doesn't 
understand the -arch flag.

I didn't ask for a universal binary and I don't need one, so I'm 
surprised that it is trying to make one for me.  I think the correct 
solution is that _universal_flag() should not add -arch flags unless the 
user specifically requests one.  Unfortunately, I can't write a patch, 
because I don't have the time it would take to reverse engineer 
distutils well enough to know how to do it.
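A fix along the lines Mark describes could probe the compiler and scan its output instead of trusting the exit status alone. A sketch, assuming rejected flags always produce the "unrecognized command line option" text; `flag_rejected` and `can_target_arch` are hypothetical helpers, not the actual numpy.distutils functions:

```python
import subprocess

def flag_rejected(output):
    # This gfortran build reports rejected flags on stderr but can still
    # exit 0, so scan the text instead of relying on the return code.
    return "unrecognized command line option" in output

def can_target_arch(fcompiler, arch):
    """Probe whether `fcompiler` accepts '-arch <arch>' (hypothetical)."""
    p = subprocess.Popen([fcompiler, "-arch", arch, "-v"],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    text = (out + err).decode("utf-8", "replace")
    return p.returncode == 0 and not flag_rejected(text)
```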

As is usual when a setup.py auto-detects the wrong compiler flags, the 
easiest solution is to create a shell script that looks like the 
compiler, but add/removes flags as necessary:

% cat > /eng/ssb/auto/prog/binhacks/scipy.osx/gfortran
#!/bin/sh
# Strip each "-arch <value>" pair and pass everything else
# through to the real gfortran.
args=""
skip=0

for x in "$@"
do
    if [ $skip -eq 1 ]
    then
        skip=0
        continue
    fi
    case "$x"
    in
    -arch)
        skip=1
        ;;
    *)
        args="$args $x"
        ;;
    esac
done
/sw/bin/gfortran $args


Mark S.



Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 15:25, Pierre GM pgmdevl...@gmail.com wrote:
 On Dec 8, 2009, at 12:54 PM, Robert Kern wrote:

 As far as I can tell, the faulty global seterr() has been in place
 since 1.1.0, so fixing it at all should be considered a feature
 change. It's not likely to actually *break* things except for doctests
 and documentation. I think I fall in with Chuck in suggesting that it
 should be changed in 1.5.0.

 OK. I'll work on fixing the remaining issues when a np function is applied on 
 a masked array.

 FYI. most of the warnings can be fixed in _MaskedUnaryOperation and consorts 
 with:

        err_status_ini = np.geterr()
        np.seterr(divide='ignore', invalid='ignore')
        result = self.f(da, db, *args, **kwargs)
        np.seterr(**err_status_ini)

 Is this kind of fix acceptable ?

  olderr = np.seterr(divide='ignore', invalid='ignore')
  try:
result = self.f(da, db, *args, **kwargs)
  finally:
np.seterr(**olderr)
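numpy's np.errstate context manager captures this same save/restore pattern. A toy sketch of the idea, with a plain dict standing in for numpy's real floating-point error state:

```python
from contextlib import contextmanager

# Stand-in for numpy's error state; np.seterr really does return the old
# settings, which is what makes the try/finally restore idiom work.
_state = {"divide": "warn", "invalid": "warn"}

def seterr(**kwargs):
    old = dict(_state)
    _state.update(kwargs)
    return old

@contextmanager
def errstate(**kwargs):
    olderr = seterr(**kwargs)
    try:
        yield
    finally:
        seterr(**olderr)

with errstate(divide="ignore", invalid="ignore"):
    inside = dict(_state)   # settings are overridden here
after = dict(_state)        # and restored here, even on exceptions
```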

-- 
Robert Kern



Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 3:42 PM, John [H2O] wrote:
 I see record arrays don't have a masked_where method. How can I achieve the
 following for a record array:
 
 cd.masked_where(cd.co == -.)
 
 Or something like this.


masked_where is a function that requires 2 arguments.
If you try to mask a whole record, you can try something like
 x = ma.array([('a',1),('b',2)],dtype=[('','|S1'),('',float)])
 x[x['f0']=='a'] = ma.masked
For an individual field, try something like
x['f1'][x['f1']=='b'] = ma.masked

Otherwise, ma.masked_where doesn't work with structured arrays (yet; that's a 
bug I just found out)
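Pierre's two recipes, spelled out as a self-contained sketch (the explicit field names 'f0' and 'f1' here are just illustrative labels):

```python
import numpy.ma as ma

# A small structured masked array
x = ma.array([('a', 1.0), ('b', 2.0)],
             dtype=[('f0', 'S1'), ('f1', float)])

# Mask an individual field value: only record 1's 'f1' becomes masked
x['f1'][x['f1'] == 2.0] = ma.masked

# Mask a whole record: every field of record 0 becomes masked
x[x['f0'] == b'a'] = ma.masked
```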



Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Christopher Barker
Robert Kern wrote:
 The array interface was made for this sort of thing, but is deprecated:

 http://docs.scipy.org/doc/numpy/reference/arrays.interface.html

 I think the wording is overly strong. I don't think that we actually
 decided to deprecate the interface. PEP 3118 is not yet implemented by
 numpy.

That settles it then -- the array interface is the only option if you 
want to do any type checking.
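A consumer that wants this kind of type checking without importing numpy can probe __array_interface__ directly. A minimal sketch; `array_interface_info` and `FakeArray` are hypothetical names for illustration:

```python
def array_interface_info(obj):
    """Inspect an object via the array interface, without importing numpy."""
    iface = getattr(obj, "__array_interface__", None)
    if not isinstance(iface, dict):
        raise TypeError("object does not expose __array_interface__")
    return iface["shape"], iface["typestr"]

class FakeArray:
    # Stands in for a numpy array; numpy fills these fields in for real.
    __array_interface__ = {"version": 3, "shape": (2, 3),
                           "typestr": "<f8", "data": (0, True)}

shape, typestr = array_interface_info(FakeArray())
```

The 'shape' and 'typestr' fields are enough to reject wrong sizes and dtypes up front, before touching the raw memory.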

I'm a bit surprised that PEP 3118 hasn't been implemented yet in numpy 
-- after all, it was designed very much with numpy in mind. Oh well, I'm 
not writing the code.

thanks,
   -Chris





-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Release blockers for 1.4.0 ?

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 4:36 PM, Robert Kern wrote:
 
err_status_ini = np.geterr()
np.seterr(divide='ignore', invalid='ignore')
result = self.f(da, db, *args, **kwargs)
np.seterr(**err_status_ini)
 
 Is this kind of fix acceptable ?
 
  olderr = np.seterr(divide='ignore', invalid='ignore')
  try:
result = self.f(da, db, *args, **kwargs)
  finally:
np.seterr(**olderr)

Neat ! I didn't know about np.seterr returning the old settings.
 Thanks a million Robert.



Re: [Numpy-discussion] numpy distutils breaks scipy install on mac

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 15:36, Mark Sienkiewicz sienk...@stsci.edu wrote:
 When I compile scipy on a mac, the build fails with:

 ...
 gfortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f
 f951: error: unrecognized command line option -arch
 f951: error: unrecognized command line option -arch
 f951: error: unrecognized command line option -arch
 f951: error: unrecognized command line option -arch
 error: Command /sw/bin/gfortran -Wall -ffixed-form
 -fno-second-underscore -arch i686 -arch x86_64 -fPIC -O3 -funroll-loops
 -I/usr/stsci/pyssgdev/2.5.4/numpy/core/include -c -c
 scipy/fftpack/src/dfftpack/dcosqb.f -o
 build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/dfftpack/dcosqb.o
 failed with exit status 1


 I have

 % gfortran --version
 GNU Fortran (GCC) 4.3.0
 Copyright (C) 2008 Free Software Foundation, Inc.

 % which gfortran
 /sw/bin/gfortran


 (This /sw/bin apparently means it was installed by fink.  My IT
 department did this.  This is not the recommended compiler from AT&T,
 but it seems a likely configuration to encounter in the wild, and I
 didn't expect a problem. )

 I traced the problem to numpy/distutils/fcompiler/gnu.py in the class
 Gnu94FCompiler.  The function _universal_flags() tries to detect which
 processor types are recognized by the compiler, presumably in an attempt
 to make a macintosh universal binary.  It adds -arch whatever for each
 architecture that it thinks it detected.  Since gfortran does not
 recognize -arch, the compile fails.

 ( Presumably, some other version of gfortan does accept -arch, or this
 code wouldn't be here, right? )

Right. The -arch flag was added by Apple to GCC and their patch really
should be applied to all builds of GCC compilers for the Mac. It is
deeply disappointing that Fink ignored this. The only Mac gfortran
build that I can recommend is here:

  http://r.research.att.com/tools/

_can_target() should be fixed to be more accurate, though, so if you
find a patch that works for you, please let us know.

-- 
Robert Kern



Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 15:44, Christopher Barker chris.bar...@noaa.gov wrote:
 Robert Kern wrote:
 The array interface was made for this sort of thing, but is deprecated:

 http://docs.scipy.org/doc/numpy/reference/arrays.interface.html

 I think the wording is overly strong. I don't think that we actually
 decided to deprecate the interface. PEP 3118 is not yet implemented by
 numpy.

 That settles it then -- the array interface is the only option if you
 want to do any type checking.

 I'm a bit surprised that PEP 3118 hasn't been implemented yet in numpy
 -- after all, it was designed very much with numpy in mind.

Travis's time commitments very suddenly changed late in the PEP's life.

-- 
Robert Kern



Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]

This is what I get:

In [74]: type(cd)
Out[74]: <class 'numpy.core.records.recarray'>

In [75]: type(cd.co)
Out[75]: <type 'numpy.ndarray'>

In [76]: cd[cd['co']==-.] = np.ma.masked
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/jfb/Research/arctic_co/co_plot.py in <module>()

ValueError: tried to set void-array with object members using buffer.



-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26701484.html


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 4:53 PM, John [H2O] wrote:
 This is what I get:
 
 In [74]: type(cd)
 Out[74]: <class 'numpy.core.records.recarray'>
 
 In [75]: type(cd.co)
 Out[75]: <type 'numpy.ndarray'>
 
 In [76]: cd[cd['co']==-.] = np.ma.masked
 ---------------------------------------------------------------------------
 ValueError                                Traceback (most recent call last)
 
 /home/jfb/Research/arctic_co/co_plot.py in <module>()
 
 ValueError: tried to set void-array with object members using buffer.

John,
* Could you post self contained example next time ?
* cd should be a MaskedRecords or MaskedArray before you can mask it



Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Pauli Virtanen
On Tue, 2009-12-08 at 14:23 -0600, Robert Kern wrote:
[clip]
 I think the wording is overly strong. I don't think that we actually
 decided to deprecate the interface. PEP 3118 is not yet implemented by
 numpy, and the PEP 3118 API won't be available in Python 2.6
 (Cython's workarounds notwithstanding).
 
 Pauli, did we discuss this before you wrote that warning and I'm just
 not remembering it?

I think this came about as a result of some discussion. This, I believe:
http://thread.gmane.org/gmane.comp.python.numeric.general/27413

Yes, the warning is strongly worded -- especially as the support
for PEP 3118 will not arrive before Numpy 1.5.0, and I don't see any
reason why we would be removing support for the array interface.

Perhaps Py3 is a different ball game, but even there, there is no real
reason to remove the support, so I don't think we should do it.

-- 
Pauli Virtanen





Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]


Pierre GM-2 wrote:
 
 
 
 masked_where is a function that requires 2 arguments.
 If you try to mask a whole record, you can try something like
 x = ma.array([('a',1),('b',2)],dtype=[('','|S1'),('',float)])
 x[x['f0']=='a'] = ma.masked
 For an individual field, try something like
x['f1'][x['f1']=='b'] = ma.masked
 
 


Just some more detail, here's what I'm working on:

def mk_COarray(rD,datetimevec):
    """rD is a previous record array, but I add the datetime vector"""
    codata = np.column_stack((np.array(datetimevec),rD.lon,rD.lat,rD.elv,rD.co))
    codata = np.ma.array(codata)
    codata_masked = np.ma.masked_where(codata==-.,codata)
    codata = np.rec.fromrecords(codata_masked,names='datetime,lon,lat,elv,co')
    return codata, codata_masked


Plotting the arrays out of this:
In [128]: cd,cdm = mk_COarray(rD,datetimevec)
In [129]: plt.plot(cd.datetime,cd.co,label='raw');
plt.plot(cdm[:,0],cdm[:,4],label='masked')

I get the following image, where you can see that the codata which is
created from the codata_masked seems to not be masked

http://old.nabble.com/file/p26702019/example.png 
-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26702019.html


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 5:32 PM, John [H2O] wrote:
 Pierre GM-2 wrote:
 
 
 
 masked_where is a function that requires 2 arguments.
 If you try to mask a whole record, you can try something like
 x = ma.array([('a',1),('b',2)],dtype=[('','|S1'),('',float)])
 x[x['f0']=='a'] = ma.masked
 For an individual field, try something like
 x['f1'][x['f1']=='b'] = ma.masked
 
 
 
 
 Just some more detail, here's what I'm working on:

Did you check scikits.timeseries ? Might be a solution if you have data indexed 
in time

 
 def mk_COarray(rD,datetimevec):
     """rD is a previous record array, but I add the datetime vector"""
     codata = np.column_stack((np.array(datetimevec),rD.lon,rD.lat,rD.elv,rD.co))
     codata = np.ma.array(codata)
     codata_masked = np.ma.masked_where(codata==-.,codata)
     codata = np.rec.fromrecords(codata_masked,names='datetime,lon,lat,elv,co')
     return codata, codata_masked

OK, I gonna have to guess again:
codata is a regular ndarray, not structured ? Then you don't have to transform 
it into a masked array
codata=...
codata_masked = np.ma.masked_values(codata,-.)

Then you transform codata into a np.recarray... But why not transforming 
codata_masked ?

It is hard to help you, because I don't know the actual structure you use. Once 
again, please give a self contained example. The first two entries of codata 
would be enough.


 Plotting the arrays out of this:
 In [128]: cd,cdm = mk_COarray(rD,datetimevec)
 In [129]: plt.plot(cd.datetime,cd.co,label='raw');
 plt.plot(cdm[:,0],cdm[:,4],label='masked')
 
 I get the following image, where you can see that the codata which is
 created from the codata_masked seems to not be masked


Er... You can check whether codata_masked is masked by checking if some entries 
of its mask are True (codata_masked.mask.any()). 
Given your graph, I'd say yes, codata_masked is actually masked: see how you 
have a gap in your green curve when the blue one plummets into negative? That's 
likely where your elevation was -., I'd say.  


Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Christopher Barker
Pauli Virtanen wrote:
 I think the wording is overly strong.

not just too strong, but actually wrong -- you can't target PEP 3118 -- 
numpy doesn't support it at all yet!


Current wording:


Warning

This page describes the old, deprecated array interface. Everything 
still works as described as of numpy 1.2 and on into the foreseeable 
future, but new development should target PEP 3118 – The Revised Buffer 
Protocol. PEP 3118 was incorporated into Python 2.6 and 3.0
...


My suggested new wording:


Warning

This page describes the current array interface. Everything still works 
as described as of numpy 1.4 and on into the foreseeable future. 
However, future versions of numpy will target PEP 3118 – The Revised 
Buffer Protocol. PEP 3118 was incorporated into Python 2.6 and 3.0, and 
we hope to incorporate it into numpy 1.5
...



-Chris



-- 
Christopher Barker, Ph.D.


Re: [Numpy-discussion] numpy distutils breaks scipy install on mac

2009-12-08 Thread David Cournapeau
On Wed, Dec 9, 2009 at 6:47 AM, Robert Kern robert.k...@gmail.com wrote:

 Right. The -arch flag was added by Apple to GCC and their patch really
 should be applied to all builds of GCC compilers for the Mac. It is
 deeply disappointing that Fink ignored this. The only Mac gfortran
 build that I can recommend is here:

  http://r.research.att.com/tools/

 _can_target() should be fixed to be more accurate, though, so if you
 find a patch that works for you, please let us know.

Damn, I thought I fixed all remaining issues by testing with a
custom-built gfortran, but it seems that every gfortran variation
likes to behave differently...

I will install fink and fix _can_target accordingly

David


Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Robert Kern
On Tue, Dec 8, 2009 at 17:41, Christopher Barker chris.bar...@noaa.gov wrote:
 Pauli Virtanen wrote:
 I think the wording is overly strong.

 not just too string, but actually wrong -- you can't target PEP 3118 --
 numpy doesn't support it at all yet!


 Current wording:

 
 Warning

 This page describes the old, deprecated array interface. Everything
 still works as described as of numpy 1.2 and on into the foreseeable
 future, but new development should target PEP 3118 – The Revised Buffer
 Protocol. PEP 3118 was incorporated into Python 2.6 and 3.0
 ...
 

 My suggested new wording:

 
 Warning

 This page describes the current array interface. Everything still works
 as described as of numpy 1.4 and on into the foreseeable future.
 However, future versions of numpy will target PEP 3118 – The Revised
 Buffer Protocol. PEP 3118 was incorporated into Python 2.6 and 3.0, and
 we hope to incorporate it into numpy 1.5
 ...

 

The Cython information is still nice. Also, it should not be a
warning, just a note, since there is no impending deprecation to warn
about.

-- 
Robert Kern



Re: [Numpy-discussion] What protocol to use now?

2009-12-08 Thread Christopher Barker
Robert Kern wrote:
 On Tue, Dec 8, 2009 at 17:41, Christopher Barker chris.bar...@noaa.gov 
 wrote:
 My suggested new wording:

 
 Warning

 This page describes the current array interface. Everything still works
 as described as of numpy 1.4 and on into the foreseeable future.
 However, future versions of numpy will target PEP 3118 – The Revised
 Buffer Protocol. PEP 3118 was incorporated into Python 2.6 and 3.0, and
 we hope to incorporate it into numpy 1.5
 ...

 
 
 The Cython information is still nice. 

sure -- that's the ... there ;-)

Also, it should not be a
 warning, just a note, since there is no impending deprecation to warn
 about.

good point.

-Chris



-- 
Christopher Barker, Ph.D.


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]


Pierre GM-2 wrote:
 
 
 Did you check scikits.timeseries ? Might be a solution if you have data
 indexed in time
 
 
 np.rec.fromrecords(codata_masked,names='datetime,lon,lat,elv,co')
return codata, codata_masked
 
 OK, I gonna have to guess again:
 codata is a regular ndarray, not structured ? Then you don't have to
 transform it into a masked array
 codata=...
 codata_masked = np.ma.masked_values(codata,-.)
 
 Then you transform codata into a np.recarray... But why not transforming
 codata_masked ?
 
 It is hard to help you, because I don't know the actual structure you use.
 Once again, please give a self contained example. The first two entries of
 codata would be enough.
 
 
 
 
 
 Er... You can check whether codata_masked is masked by checking if some
 entries of its mask are True (codata_masked.mask.any()). 
 Given your graph, I'd say yes, codata_masked is actually masked: see how
 you have a gap in your green curve when the blue one plummets into
 negative? That's likely where your elevation was -., I'd say.  
 ___
 
 

My apologies for adding confusion. In answer to your first question: yes, at
one point I tried playing with scikits.timeseries... there were some issues
at the time that prevented me from working with it, maybe I should revisit.
But on to this problem...

First off, let me say, my biggest learning curve with numpy has been dealing
with different types of data structures, and I find that I'm always starting
with something new. For instance, I am now somewhat comfortable with arrays,
but not yet masked arrays, and it seems record arrays are the most
'advanced' and quite practical, but the handling of them is quite
different from standard arrays. So I'm learning!

As best I can I'll provide a full example.

Here's what I have:

def mk_COarray(rD, dtvector, mask=None):
    codata = np.column_stack((np.array(dtvector), rD.lon, rD.lat, rD.elv, rD.co))
    print type(codata)

    if mask:
        codata_masked = np.ma.masked_where(codata == mask, codata, copy=False)
        # Create record array from codata_masked
    else:
        codata_masked = codata
    codata = np.rec.fromrecords(codata_masked, names='datetime,lon,lat,elv,co')
    # Note: the above is just for debugging; below I return masked and unmasked arrays
    return codata, codata_masked


In [162]: codata,codata_masked =mk_COarray(rD,dtvec,mask=-.)
In [163]: type(codata); type(codata_masked)
Out[163]: class 'numpy.core.records.recarray'
Out[163]: class 'numpy.ma.core.MaskedArray'
In [164]: codata[0]
Out[164]: (datetime.datetime(2008, 4, 6, 11, 38, 37, 76),
20.3271002, 67.8215, 442.62, -.0)

In [165]: codata_masked[0]
Out[165]: 
masked_array(data = [2008-04-06 11:38:37.76 20.3271 67.8215 442.6 --],
 mask = [False False False False  True],
   fill_value = ?)


So then, the plot above will be the same. codata is the blue line
(codata_masked converted into a rec array),  whereas for debugging, I also
return codata_masked (and it is plotted green).

In my prior post I used the variables cd and cdm which refer to codata and
codata_masked.

I know this isn't terribly clear, but hopefully enough so to let me know how
to create a masked record array ;)

-john


-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26703152.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]

Maybe I should add, I'm looking at this thread:
http://old.nabble.com/masked-record-arrays-td26237612.html

And, I guess I'm in the same situation as the OP there. It's not clear to
me, but as best I can tell I am working with structured arrays (that's what
np.rec.fromrecords creates, no?).

Anyway, perhaps the simplest thing someone could do to help is to show how
to create a masked structured array.

Thanks!

-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26703314.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Flattening an array

2009-12-08 Thread Jake VanderPlas
Hello,
I have a function -- call it f() -- which takes a length-N 1D numpy
array as an argument, and returns a length-N 1D array.
I want to pass it the data in an N-D array, and obtain the N-D array
of the result.
I've thought about wrapping it as such:

#python code:
from my_module import f   # takes a 1D array, raises an exception otherwise
def f_wrap(A):
A_1D = A.ravel()
B = f(A_1D)
return B.reshape(A.shape)
#end code

I expect A to be contiguous in memory, but I don't know if it will be
C_CONTIGUOUS or F_CONTIGUOUS.  Is there a way to implement this such
that
  1) the data in the arrays A_1D and B are not copied (memory issues)
  2) the function f is only called once (speed issues)?
The above implementation appears to copy data if A is fortran-ordered.
 Thanks for the help
   -Jake
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Can't set an element of a subset of an array...

2009-12-08 Thread alan
Okay, I'm stuck. Why doesn't this work?

In [226]: mask
Out[226]: array([False, False, False, ..., False, False, False], dtype=bool)

In [227]: mask[data['horizon']==i]
Out[227]:
array([ True,  True, False, False,  True, False, False, False, False,
True, False, False, False,  True, False, False, False, False,
   False,  True, False, False, False, False, False, False,  True,
   False, False, False, False, False, False,  True, False, False,
   False, False, False, False, False, False, False, False, False,
   False,  True, False, False, False, False, False, False, False,
   False, False, False, False, False,  True, False,  True, False,
   False, False, False, False, False, False, False, False,  True,
   False, False, False, False, False, False, False, False,  True,
True, False, False, False,  True, False,  True, False, False,
True,  True, False, False, False, False, False, False, False,
   False, False,  True, False, False, False,  True, False, False,
   False, False, False, False, False, False, False, False, False,
   False, False, False, False, False,  True, False, False, False,
   False, False,  True, False, False, False, False, False,  True,
   False, False, False,  True, False,  True, False, False, False,
   False, False,  True, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False, False,
True, False, False, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False,  True,
True, False, False, False, False, False, False, False, False,
True, False, False, False, False, False, False, False, False,
   False,  True, False, False,  True, False, False, False, False,
   False, False, False, False, False,  True, False, False, False,
   False, False, False, False, False, False,  True, False, False,
   False, False, False, False, False, False, False, False, False,
   False,  True, False, False, False, False, False, False, False,
   False, False, False, False, False, False, False, False, False,
   False, False, False, False, False, False], dtype=bool)

In [228]: mask[data['horizon']==i][2]
Out[228]: False

In [229]: mask[data['horizon']==i][2] = True

In [230]: mask[data['horizon']==i][2]
Out[230]: False



-- 
---
| Alan K. Jackson| To see a World in a Grain of Sand  |
| a...@ajackson.org  | And a Heaven in a Wild Flower, |
| www.ajackson.org   | Hold Infinity in the palm of your hand |
| Houston, Texas | And Eternity in an hour. - Blake   |
---
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Can't set an element of a subset of an array...

2009-12-08 Thread Robert Kern
2009/12/8  a...@ajackson.org:
 Okay, I'm stuck. Why doesn't this work?

 In [226]: mask
 Out[226]: array([False, False, False, ..., False, False, False], dtype=bool)
 In [229]: mask[data['horizon']==i][2] = True

mask[data['horizon']==i] creates a copy. mask[data['horizon']==i][2]
assigns to the copy, which then gets thrown away.
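For readers hitting the same wall, one sketch of a workaround (with made-up data, since the original `data` array isn't shown) is to convert the boolean selection into integer positions with np.nonzero and assign through those, so the write lands in the original array:

```python
import numpy as np

# Hypothetical stand-in for the poster's data: only 'horizon' matters here
data = np.zeros(10, dtype=[('horizon', int)])
data['horizon'][[1, 4, 7]] = 3
mask = np.zeros(10, dtype=bool)

i = 3
idx = np.nonzero(data['horizon'] == i)[0]  # integer positions of the matches
mask[idx[2]] = True                        # single-element assignment: no copy
print(mask.nonzero()[0])                   # -> [7]
```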

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Can't set an element of a subset of an array...

2009-12-08 Thread alan
2009/12/8  a...@ajackson.org:
 Okay, I'm stuck. Why doesn't this work?

 In [226]: mask
 Out[226]: array([False, False, False, ..., False, False, False], dtype=bool)
 In [229]: mask[data['horizon']==i][2] = True

mask[data['horizon']==i] creates a copy. mask[data['horizon']==i][2]
assigns to the copy, which then gets thrown away.


Bummer. That was such a nice way to reach inside the data structure.


-- 
---
| Alan K. Jackson| To see a World in a Grain of Sand  |
| a...@ajackson.org  | And a Heaven in a Wild Flower, |
| www.ajackson.org   | Hold Infinity in the palm of your hand |
| Houston, Texas | And Eternity in an hour. - Blake   |
---
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 7:27 PM, John [H2O] wrote:
 Maybe I should add, I'm looking at this thread:
 http://old.nabble.com/masked-record-arrays-td26237612.html
 
 And, I guess I'm in the same situation as the OP there. It's not clear to
 me, but as best I can tell I am working with structured arrays (that's from
 np.rec.fromrecords creates, no?).
 
 Anyway, perhaps the simplest thing someone could do to help is to show how
 to create a masked structured array.
 
 Thanks!

(Note to self: one of us is gonna have to write some doc about that...)

A structured array is an ndarray with named fields. Like a standard ndarray, 
each item has a given size defined by the dtype. Unlike a standard ndarray, 
each item is composed of different sub-items whose types don't have to be 
homogeneous. Each item is a special numpy scalar called a numpy.void.
For example:
>>> x = np.array([('a',1),('b',2)], dtype=[('F0','|S1'),('F1',float)])

The first item, x[0], is composed of two fields, 'F0' and 'F1'. The first field 
is a single character, the second a float. 
Fields can be accessed for each item (like x[0]['F0']) or globally (like 
x['F0']). Note that this syntax is analogous to getting an item.

A recarray is just a structured ndarray with some overwritten methods, where 
the fields can also be accessed as attributes. Because it uses overwritten 
__getattr__ and __setattr__, recarrays tend to be less efficient than standard 
structured ndarrays, but that's the price of convenience. To create a 
recarray, you can use the construction functions in np.records, or simply take 
a view of your structured array as a np.recarray. So, when you use 
np.rec.fromrecords, you get a recarray, which is a subclass of structured 
arrays. Each item of a np.recarray is a special object (np.record), which is a 
regular np.void that allows attribute-like access to fields.
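As a small runnable illustration of the two access styles (the field names here are arbitrary examples, not from the original post):

```python
import numpy as np

# A structured array: named fields, heterogeneous item types
x = np.array([('a', 1.0), ('b', 2.0)], dtype=[('F0', 'S1'), ('F1', float)])

x[0]['F0']   # per-item field access       -> b'a'
x['F1']      # whole-column field access   -> array([1., 2.])

# The same data viewed as a recarray gains attribute access
r = x.view(np.recarray)
r.F1         # equivalent to x['F1']
```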

Masked arrays are ndarrays that have a special mask attribute. Since 1.3, 
masked arrays support flexible dtypes (aka structured dtypes), and you can mask 
individual fields:
>>> x = ma.array([('a',1), ('b',2)], dtype=[('F0','|S1'),('F1',float)])
>>> x['F0'][0] = ma.masked
>>> x
masked_array(data = [(--, 1.0) ('b', 2.0)],
             mask = [(True, False) (False, False)],
       fill_value = ('N', 1e+20),
            dtype = [('F0', '|S1'), ('F1', 'f8')])

Here you have a structured masked array, where fields can be accessed like 
items, but not like attributes. If you need the attribute-like access, take a 
view as a np.ma.mrecords.MaskedRecords.
Note that we just used the regular ma.array or ma.masked_array function to 
create this masked structured array. We could also have defined a structured 
ndarray, and then taken a view as a np.ma.MaskedArray...

Unless you have a compelling reason to use np.recarrays or 
np.ma.mrecords.mrecarrays (like a long-time addiction to attribute access), 
then stick to structured arrays (masked or not)...

HIH
P.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread Pierre GM
On Dec 8, 2009, at 7:11 PM, John [H2O] wrote:
 My apologies for adding confusing. In answer to your first question. Yes at
 one point I tried playing with scikits.timeseries... there were some issues
 at the time that prevented me from working with it, maybe I should revisit.

What kind of issues ?

 
 As best I can I'll provide a full example.

Except that you forgot to give a sample of the input arrays ;)

 
 Here's what I have:
 
 def mk_COarray(rD,dtvector,mask=None):
codata =
 np.column_stack((np.array(dtvector),rD.lon,rD.lat,rD.elv,rD.co))
print type(codata)
 
if mask:
codata_masked = np.ma.masked_where(codata==mask,codata,copy=False)
# Create record array from codata_masked
else:
codata_masked = codata
codata =
 np.rec.fromrecords(codata_masked,names='datetime,lon,lat,elv,co')
#Note the above is just for debugging, and below I return masked and
 unmasked arrays
return codata, codata_masked
 
 
 In [162]: codata,codata_masked =mk_COarray(rD,dtvec,mask=-.)
 In [163]: type(codata); type(codata_masked)
 Out[163]: class 'numpy.core.records.recarray'
 Out[163]: class 'numpy.ma.core.MaskedArray'
 In [164]: codata[0]
 Out[164]: (datetime.datetime(2008, 4, 6, 11, 38, 37, 76),
 20.3271002, 67.8215, 442.62, -.0)
 
 In [165]: codata_masked[0]
 Out[165]: 
 masked_array(data = [2008-04-06 11:38:37.76 20.3271 67.8215 442.6 --],
 mask = [False False False False  True],
   fill_value = ?)
 
 
 So then, the plot above will be the same. codata is the blue line
 (codata_masked converted into a rec array),  whereas for debugging, I also
 return codata_masked (and it is plotted green).
 
 In my prior post I used the variables cd and cdm which refer to codata and
 codata_masked.
 
 I know this isn't terribly clear, but hopefully enough so to let me know how
 to create a masked record array ;)
 

Your structured ndarray:
>>> x = np.array([('aaa',1,2,30),('bbb',2,4,40)],
...              dtype=[('f0',np.object),('f1',int),('f2',int),('f3',float)])
Make a MaskedRecords:
>>> x = x.view(np.ma.mrecords.mrecarray)
Mask the whole records where field 'f3' > 30:
>>> x[x['f3'] > 30] = ma.masked
Mask the 'f1' field where it is equal to 1:
>>> x['f1'][x['f1'] == 1] = ma.masked
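A self-contained variant of the same operations (field names follow the example above; it skips the mrecords view and uses a fixed-width string field instead of an object field, so it runs as-is on current numpy):

```python
import numpy as np
import numpy.ma as ma

# A masked structured array with four named fields
x = ma.array([('aaa', 1, 2, 30.0), ('bbb', 2, 4, 40.0)],
             dtype=[('f0', 'S3'), ('f1', int), ('f2', int), ('f3', float)])

x['f1'][x['f1'] == 1] = ma.masked   # mask one field where it equals 1
x[x['f3'] > 30] = ma.masked         # mask whole records where f3 > 30
print(x.mask['f1'])                 # -> [ True  True]
```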




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] more recfunctions, structured array help

2009-12-08 Thread John [H2O]



Pierre GM-2 wrote:
 
 On Dec 8, 2009, at 7:27 PM, John [H2O] wrote:
 Maybe I should add, I'm looking at this thread:
 http://old.nabble.com/masked-record-arrays-td26237612.html
 
 And, I guess I'm in the same situation as the OP there. It's not clear to
 me, but as best I can tell I am working with structured arrays (that's
 from
 np.rec.fromrecords creates, no?).
 
 Anyway, perhaps the simplest thing someone could do to help is to show
 how
 to create a masked structured array.
 
 Thanks!
 
 (Note to self: one of us all's gonna have to write some doc about that...)
 
 

Pierre,

Do you have access to the docs? For now, this is indeed very helpful. Thanks
for the description. I would recommend adding this, at least as a note, to
the page:

http://docs.scipy.org/doc/numpy/user/basics.rec.html

Just a thought. Now I'm going to experiment and see if I can figure out
how to pick and choose one data structure to work with! ;)
-- 
View this message in context: 
http://old.nabble.com/more-recfunctions%2C-structured-array-help-tp26700380p26704233.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread David Warde-Farley
On Tue, Dec 08, 2009 at 10:17:20AM -0800, Dr. Phillip M. Feldman wrote:
 
 
 
 David Warde-Farley-2 wrote:
  
  
  A less harmful solution (if a solution is warranted, which is for the
  Council of the Elders to  
  decide) would be to treat the Python complex type as a special case, so
  that the .real attribute is accessed instead of trying to cast to float.
  
  
 
 There are two even less harmful solutions: (1) Raise an exception.

This is not less harmful, since as I mentioned there is likely a lot of
deployed code that is not expecting such exceptions. If such a change were to
happen, it would have to take place over several versions, with warnings
issued for a while (probably at least one stable release) before the
feature is removed. Esoteric handling of ambiguous assignments may not
speed adoption of NumPy, but monumental shifts in basic behaviour without any
warning will win us even fewer friends.

The best thing to do is probably to file an enhancement ticket on the
bugtracker so that the issue doesn't get lost/forgotten.

 (2) Provide the user with a top-level flag to control whether the attempt to
 downcast a NumPy complex to a float should be handled by raising an
 exception, by throwing away the imaginary part, or by taking the magnitude.

I'm not so sure that introducing more global state is looked fondly upon, but
it'd be worth including this proposal in the ticket.

 P.S. As things stand now, I do not regard NumPy as a reliable platform for
 scientific computing.

One man's bug is another's feature, I guess. I rarely use complex numbers and
when I do I simply avoid this situation. 

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Jochen Schroeder
On 12/08/09 02:32, David Warde-Farley wrote:
 On 7-Dec-09, at 11:13 PM, Dr. Phillip M. Feldman wrote:
 
  Example #1:
  IPython 0.10   [on Py 2.5.4]
  [~]|1 z= zeros(3)
  [~]|2 z[0]= 1+1J
 
  TypeError: can't convert complex to float; use abs(z)
 
 The problem is that you're using Python's built-in complex type, and  
 it responds to type coercion differently than NumPy types do. Calling  
 float() on a Python complex will raise the exception. Calling float()  
 on (for example) a numpy.complex64 will not. Notice what happens here:
 
 In [14]: z = zeros(3)
 
 In [15]: z[0] = complex64(1+1j)
 
 In [16]: z[0]
 Out[16]: 1.0
 
  Example #2:
 
  ### START OF CODE ###
  from numpy import *
  q = ones(2,dtype=complex)*(1 + 1J)
  r = zeros(2,dtype=float)
  r[:] = q
  print 'q = ',q
  print 'r = ',r
  ### END OF CODE ###
 
 Here, both operands are NumPy arrays. NumPy is in complete control of  
 the situation, and it's well documented what it will do.
 
 I do agree that the behaviour in example #1 is mildly inconsistent,  
 but such is the way with NumPy vs. Python scalars. They are mostly  
 transparently intermingled, except when they're not.
 
  At a minimum, this inconsistency needs to be cleared up.  My  
  preference
  would be that the programmer should have to explicitly downcast from
  complex to float, and that if he/she fails to do this, that an  
  exception be
  triggered.
 
 That would most likely break a *lot* of deployed code that depends on  
 the implicit downcast behaviour. A less harmful solution (if a  
 solution is warranted, which is for the Council of the Elders to  
 decide) would be to treat the Python complex type as a special case,  
 so that the .real attribute is accessed instead of trying to cast to  
 float.

I'm not sure how much code actually relies on the implicit downcast, but
I'd argue that it's bad programming anyway. It is really difficult to spot if
you're reviewing someone else's code. As others mentioned, it's also a bitch to
track down a bug that has been accidentally introduced by this behaviour. 
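One way to keep the intent visible, using the arrays from the earlier example, is to spell out which real quantity you mean before assigning, rather than relying on the implicit downcast:

```python
import numpy as np

q = np.ones(2, dtype=complex) * (1 + 1j)
r = np.zeros(2, dtype=float)

# Explicit is better than implicit: say which real value you want
r[:] = q.real        # keep the real part on purpose
# r[:] = np.abs(q)   # ...or the magnitude, if that is what you mean
```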

Jochen
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] slices of structured arrays?

2009-12-08 Thread josef . pktd
2009/12/8 Ernest Adrogué eadro...@gmx.net:
 Hello,

 Here's a structured array with fields 'a','b' and 'c':

 s=[(i,int) for i in 'abc']
 t=np.zeros(1,s)

 It has the form: array([(0, 0, 0)]

 I was wondering if such an array can be accessed like an
 ordinary array (e.g., with a slice) in order to set multiple
 values at once.

 t[0] does not access the first element of the array, instead
 it returns the whole array.

 In [329]: t[0]
 Out[329]: (1, 0, 0)

 But this array is type np.void and does not support slices.
 t[0][0] returns the first element, but t[0][:2] fails with
 IndexError: invalid index.

 Any suggestion?

as long as all numbers are of the same type, you can create a view
that behaves (mostly) like a regular array

>>> t0 = np.arange(12).reshape(-1,3)
>>> t0
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
>>> t0.dtype = s
>>> t0
array([[(0, 1, 2)],
       [(3, 4, 5)],
       [(6, 7, 8)],
       [(9, 10, 11)]],
      dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<i4')])
>>> t0.view(int)
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
>>> t0.view(int)[3]
array([ 9, 10, 11])
>>> t0.view(int)[3,1:]
array([10, 11])


structured arrays treat all parts of the dtype as a single array
element; your t[0] returns the first row/element corresponding to s

>>> t0.shape
(4, 1)

>>> t1 = np.arange(12)
>>> t1.dtype = s
>>> t1
array([(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)],
      dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<i4')])
>>> t1.shape
(4,)
>>> t1.view(int)
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
>>> t1.view(int).reshape(-1,3)
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
>>> t1.view(int).reshape(-1,3)[2,2:]
array([8])
>>> t1.view(int).reshape(-1,3)[2,1:]
array([7, 8])

as long as there is no indexing that makes a copy, you can still
change the original array by changing the view

>>> t1.view(int).reshape(-1,3)[2,1:] = 0
>>> t1
array([(0, 1, 2), (3, 4, 5), (6, 0, 0), (9, 10, 11)],
      dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<i4')])

If the numbers are of different types, e.g. some columns int some
float, it gets difficult.
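For the homogeneous case in the original question, a compact runnable sketch of both ways to set several values at once:

```python
import numpy as np

s = [(name, int) for name in 'abc']
t = np.zeros(1, dtype=s)

# A whole record can be set at once with a tuple...
t[0] = (1, 2, 3)

# ...and, since all fields share one type, a view supports real slicing
v = t.view(int).reshape(-1, 3)
v[0, :2] = (7, 8)            # writes through to t
print(t)                     # -> [(7, 8, 3)]
```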


I often need some trial and error, but there should be quite a few
examples on the mailing list.

Josef



 Ernest
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Flattening an array

2009-12-08 Thread josef . pktd
On Tue, Dec 8, 2009 at 7:29 PM, Jake VanderPlas jake...@gmail.com wrote:
 Hello,
 I have a function -- call it f() -- which takes a length-N 1D numpy
 array as an argument, and returns a length-N 1D array.
 I want to pass it the data in an N-D array, and obtain the N-D array
 of the result.
 I've thought about wrapping it as such:

 #python code:
 from my_module import f   # takes a 1D array, raises an exception otherwise
 def f_wrap(A):
    A_1D = A.ravel()
    B = f(A_1D)
    return B.reshape(A.shape)
 #end code

 I expect A to be contiguous in memory, but I don't know if it will be
 C_CONTIGUOUS or F_CONTIGUOUS.  Is there a way to implement this such
 that
  1) the data in the arrays A and B_1D are not copied (memory issues)
  2) the function f is only called once (speed issues)?
 The above implementation appears to copy data if A is fortran-ordered.

maybe one way is to check the flags and, depending on C or F order, pass
the corresponding order to numpy.ravel(a, order=...)?

if A.flags.c_contiguous:
...
elif A.flags.f_contiguous
...

not tried, and I don't know what the right condition for `is_c_contiguous` is
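Building on that suggestion, a runnable sketch (f here is a stand-in for the poster's real length-N-to-length-N function):

```python
import numpy as np

def f(a):
    # Stand-in for the poster's 1-D function
    return a * 2

def f_wrap(A):
    # Pick the order matching A's memory layout so ravel() returns a view
    order = 'F' if (A.flags.f_contiguous and not A.flags.c_contiguous) else 'C'
    A_1D = A.ravel(order=order)              # no copy for either layout
    return f(A_1D).reshape(A.shape, order=order)

A = np.asfortranarray(np.arange(6.0).reshape(2, 3))
assert np.shares_memory(A, A.ravel('F'))     # the view really is copy-free
print(np.array_equal(f_wrap(A), A * 2))      # -> True
```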

Josef

  Thanks for the help
   -Jake
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Dr. Phillip M. Feldman


Stéfan van der Walt wrote:
 
 
 Would it be possible to, optionally, throw an exception?
 
 S.
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 

I'm certain that it is possible.  And, I believe that if this option is
selected via a Python flag, the run-time performance implications should be
nil.  I wonder if there is some way of taking a vote to see how many people
would like such an option.
-- 
View this message in context: 
http://old.nabble.com/Assigning-complex-values-to-a-real-array-tp22383353p26705632.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Dr. Phillip M. Feldman


Darren Dale wrote:
 
 On Sat, Mar 7, 2009 at 5:18 AM, Robert Kern robert.k...@gmail.com wrote:
 
 On Sat, Mar 7, 2009 at 04:10, Stéfan van der Walt ste...@sun.ac.za
 wrote:
  2009/3/7 Robert Kern robert.k...@gmail.com:
  In [5]: z = zeros(3, int)
 
  In [6]: z[1] = 1.5
 
  In [7]: z
  Out[7]: array([0, 1, 0])
 
  Blind moment, sorry.  So, what is your take -- should this kind of
  thing pass silently?

 Downcasting data is a necessary operation sometimes. We explicitly
 made a choice a long time ago to allow this.
 
 In that case, do you know why this raises an exception:
 
 np.int64(10+20j)
 
 Darren
 
 

I think that you have a good point, Darren, and that Robert is oversimplifying
the situation. NumPy and Python are somewhat out of step. The NumPy approach
is stricter and more likely to catch errors than Python's. Python tends to be
somewhat laissez-faire about numerical errors and the correctness of results.
Unfortunately, NumPy seems to be a sort of step-child of Python, tolerated, but
not fully accepted. There are a number of people who continue to use Matlab,
despite all of its deficiencies, because it can at least be counted on to
produce correct answers most of the time.

Dr. Phillip M. Feldman
-- 
View this message in context: 
http://old.nabble.com/Assigning-complex-values-to-a-real-array-tp22383353p26705737.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion