Re: [Numpy-discussion] numpy.distutils

2010-10-11 Thread Charles Doutriaux
 Hi David,

The behaviour is there in regular distutils; it is apparently a known
bug. I'm copy/pasting their answer here for information.

thanks,

C.


Answer (from Tarek Ziade):

This is a regression I introduced to fix the fact that the .so files are not
rebuilt when you make subtle changes in your project.

We ended up reverting this, and the changes will be done in
Distutils2. See http://bugs.python.org/issue8688 for the details.

On 10/7/10 9:39 PM, David Cournapeau wrote:
 On Fri, Oct 8, 2010 at 7:40 AM, Charles Doutriaux doutria...@llnl.gov wrote:

 Did anybody else notice this? Does anybody know what changed? (The fact that
 it started with Python 2.7 makes me think it might be pure distutils related.)
 Could you check whether you still see the issue without using
 numpy.distutils? I would actually be surprised if it were a distutils
 issue proper, as somebody would almost certainly have noticed it already.

 Any insight on how distutils and numpy.distutils are related would be
 great too!
 numpy.distutils uses distutils, but in an unconventional way, because it
 needs to hook itself into distutils internals. Hence, any non-trivial change (and
 even a trivial one, sometimes) can break numpy.distutils.
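
(For illustration, a minimal sketch of that relationship, with hypothetical module and file names: numpy.distutils provides its own setup() and Extension that wrap the plain distutils ones, which is why changes deep in distutils can break it.)

# Hypothetical minimal setup.py built on numpy.distutils; names are made up.
import numpy
from numpy.distutils.core import setup, Extension  # wrappers around distutils' own

ext = Extension(
    'mymodule',                          # illustrative extension name
    sources=['mymodule.c'],              # illustrative C source
    include_dirs=[numpy.get_include()],  # so the C code can find the numpy headers
)

if __name__ == '__main__':
    setup(name='mymodule', version='0.1', ext_modules=[ext])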

 cheers,

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.distutils

2010-10-07 Thread Charles Doutriaux
 Hi,

I'm not sure if this is a numpy.distutils or a regular distutils issue
so please excuse me if it doesn't belong here.

I'm using numpy 1.4.1 and I have a C extension (using numpy arrays)
that I built with numpy.

When I'm debugging I frequently have to rebuild.

It used to rebuild only the C files that had been touched.

Since Python 2.7 (it seems to be the trigger) it now ALWAYS rebuilds
EVERY C file, whether or not it needs to.

Did anybody else notice this? Does anybody know what changed? (The fact that
it started with Python 2.7 makes me think it might be pure distutils related.)

Any insight on how distutils and numpy.distutils are related would be
great too!

Thanks,

C.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Fwd: distutils

2010-09-09 Thread Charles Doutriaux



 Original Message 
Subject:[Numpy-discussion] distutils
Date:   Tue, 7 Sep 2010 12:12:58 -0700
From:   Charles Doutriaux doutria...@llnl.gov
Reply-To:   Discussion of Numerical Python numpy-discussion@scipy.org
To: Discussion of Numerical Python numpy-discussion@scipy.org



 Hi,

I'm using distutils to build extensions written in C.

I noticed that lately (it seems to be Python 2.7 related), whenever I
touch one C file, ALL the C files are rebuilt.
Since I have a lot of C code, this wastes a lot of time.

Any idea why this is happening?

Do I need to set something new in my setup.py ?

Thx,

C.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] distutils

2010-09-07 Thread Charles Doutriaux
 Hi,

I'm using distutils to build extensions written in C.

I noticed that lately (it seems to be Python 2.7 related), whenever I
touch one C file, ALL the C files are rebuilt.
Since I have a lot of C code, this wastes a lot of time.

Any idea why this is happening?

Do I need to set something new in my setup.py ?

Thx,

C.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.2.0rc2 tagged! --PLEASE TEST--

2008-09-16 Thread Charles Doutriaux

On this system, both 1.2rc1 and rc2 hang on mtrand.c

Linux thunder0 2.6.9-74chaos #1 SMP Wed Oct 24 08:41:12 PDT 2007 ia64 
ia64 ia64 GNU/Linux


attached is the log.




numpy.LOG.bz2
Description: application/bzip
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.2.0rc2 tagged! --PLEASE TEST--

2008-09-16 Thread Charles Doutriaux
Thanks Jarrod, it's coming back now.

I thought they had updated the system... but no luck...

OK, that's the issue; I'm using:
gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-8)

Thanks again,

C.

Jarrod Millman wrote:
 Which version of gcc are you using?  Do I remember correctly that you
 had the same failure with numpy 1.1.1 when using gcc 3.3, and that the
 problem went away when you used gcc 4.1?  If you are using 3.3, could
 you try with 4.1 and let us know if you run into the same problem?

 Thanks,

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] f2py related question

2008-09-11 Thread Charles Doutriaux
Hello,

I have a quick question that I'm hoping the f2py developers will be able to
answer quickly.

I have some C code whose input type can be either int or long,
and I'm trying to write a Fortran interface to it.

My understanding is that
integer in Fortran corresponds to int and
integer(kind=2) matches short,
but what would match long?

Thanks,

c.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py related question

2008-09-11 Thread Charles Doutriaux
Thanks robert,

That's exactly the information I needed.

Thanks for the link too.

C.

Robert Kern wrote:
 On Thu, Sep 11, 2008 at 14:12, Charles Doutriaux [EMAIL PROTECTED] wrote:
   
 Hello,

 I have a quick question that I'm hoping f2py developpers will be able to
 quickly answer

 I have some C code, input type can be either int or long
 I'm trying to write some fortran interface to it

 my understanding is that
 integer in fortran corresponds to int
 integer(kind=2) matches short
 but what would match long ?
 

 Unfortunately, it depends on your system. A C long is defined to be at
 least as large as an int. Incidentally, an int is defined to be at
 least as long as a short and a short is defined to be at least as long
 as a char. So you *could* have char == short == int == long == 1 byte,
 but no such system exists (I hope!).

 On pretty much all 32-bit systems I am aware of, int == long == 4
 bytes. On 64-bit systems, things get a little more fragmented. Most
 UNIX-type 64-bit systems, long == 8 bytes. On Win64, however, long ==
 4 bytes. This section of the Wikipedia article on 64-bit computing
 gives more detail:

  http://en.wikipedia.org/wiki/64-bit#64-bit_data_models

 Furthermore, it's not just the OS that determines the data model, but
 also the options that are given to your compiler when building your
 program *and also all of the libraries and system libraries* that it
 uses.

 Fortunately, there's an easy way for you to tell: just ask numpy.
 Python ints are C longs, and the default numpy int dtype matches
 Python ints. On my 32-bit system:

   
 >>> import numpy
 >>> numpy.dtype(int).itemsize
 4
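
(As a slightly longer sketch of the same check, the fixed-width C aliases can be compared directly; numpy.short and numpy.intc are the numpy aliases for C short and C int, and the default int dtype matched C long on the numpy of that era, as noted above.)

# Sketch: byte sizes of the numpy types that map onto C integer types.
import numpy

print(numpy.dtype(numpy.short).itemsize)  # C short
print(numpy.dtype(numpy.intc).itemsize)   # C int
print(numpy.dtype(int).itemsize)          # default int dtype (C long at the time, per the post)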

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] small bug in numpy.ma.minimum.outer

2008-09-09 Thread Charles Doutriaux
>>> a = numpy.ones((2,2))
>>> numpy.minimum.outer(a,a)
array([[[[ 1.,  1.],
         [ 1.,  1.]],

        [[ 1.,  1.],
         [ 1.,  1.]]],


       [[[ 1.,  1.],
         [ 1.,  1.]],

        [[ 1.,  1.],
         [ 1.,  1.]]]])
>>> numpy.ma.minimum.outer(a,a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/lgm/cdat/latest/lib/python2.5/site-packages/numpy/ma/core.py", line 3317, in outer
    result._mask = m
AttributeError: 'numpy.ndarray' object has no attribute '_mask'

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] compressed and sqrt in numpy.ma

2008-09-09 Thread Charles Doutriaux
The following is causing our code to crash; shouldn't .data be just ones?

>>> a = numpy.ma.ones((2,2))
>>> b = numpy.ma.sqrt(a)
>>> b.compressed().data
<read-write buffer for 0x9154460, size 32, offset 0 at 0xb764e500>
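
(For what it's worth, a minimal sketch assuming current numpy.ma behaviour: compressed() returns a plain ndarray, and ndarray.data is the raw memory buffer by design, so the values are better read from the array itself.)

# Sketch: look at the unmasked values directly instead of through .data.
import numpy

a = numpy.ma.ones((2, 2))
b = numpy.ma.sqrt(a)

c = b.compressed()                              # plain ndarray of the unmasked values
print(c)                                        # -> [ 1.  1.  1.  1.]
print(type(c.data))                             # .data is the raw buffer object
print(numpy.frombuffer(c.data, dtype=c.dtype))  # -> [ 1.  1.  1.  1.]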

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] small bug in numpy.ma.minimum.outer

2008-09-09 Thread Charles Doutriaux
Thx Pierre

Pierre GM wrote:
 On Tuesday 09 September 2008 14:03:04 Charles Doutriaux wrote:
   
  a=numpy.ones((2,2))

   numpy.minimum.outer(a,a)
   numpy.ma.minimum.outer(a,a)
 

 Fixed in SVN r5800.
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion


   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy 1.1.1 fails because of missing md5

2008-09-03 Thread Charles Doutriaux

Hi Robert,

The first email got intercepted because the attachment was too big
(it awaits the moderator), so I compressed the log and am resending this email.


I'm attaching my Python build log; can you spot anything? It seems
like md5 is built; I get a very similar log on my machine and I have a
working import md5.


I'm not sure of what's going on. Usually the build is fine.

C.


Robert Kern wrote:

On Tue, Sep 2, 2008 at 16:40, Charles Doutriaux [EMAIL PROTECTED] wrote:
  

Joseph,

Ok all failed because numpy couldn't build... It's looking for md5



That's part of the standard library. Please check your Python installation.

  




Python.LOG.bz2
Description: application/bzip
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy 1.1.1 fails because of missing md5

2008-09-03 Thread Charles Doutriaux
Thanks for spotting the origin; I'll pass this along to our user. Maybe
they'll be able to figure out how to build Python without OpenSSL.

C.

Robert Kern wrote:
 On Wed, Sep 3, 2008 at 10:39, Charles Doutriaux [EMAIL PROTECTED] wrote:
   
 Hi Robert,

 The first email got intercepted because the attachment was too big (awaits
 moderator), so i compressed the log and I resend this email.

 I'm attaching my Python build log, can you spot anything? It seems like
 md5 is built, i get a very similar log on my machine and i have a working
 import md5.
 

 md5.py gets installed, but it just (eventually) imports from one of
 the extension modules _md5 or _hashlib, neither of which is getting
 built. The errors following the line

   building '_hashlib' extension

 are relevant. OpenSSL gets used for its hash function implementations
 if it is available. The configuration thinks you want it to use
 OpenSSL, so it tries to build _hashlib, which fails. If the
 configuration thought you didn't want to use OpenSSL, it would try to
 build _md5.
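
 (A small, hypothetical diagnostic along the lines Robert describes, to see which of the hash modules mentioned above actually import on a given build.)

 # Sketch: report which hash-related modules import on this Python build.
 for mod in ("_hashlib", "_md5", "hashlib", "md5"):
     try:
         __import__(mod)
         print(mod, "imports fine")
     except ImportError as exc:
         print(mod, "missing:", exc)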

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy 1.1.1 fails because of missing md5

2008-09-02 Thread Charles Doutriaux

Joseph,

OK, everything failed because numpy couldn't build... it's looking for md5.

On my machine I type:
whereis md5
md5: /usr/include/md5.h /usr/share/man/man1/md5.1ssl.gz 
/usr/share/man/man3/md5.3ssl.gz


I'm ccing the numpy discussion list on this. The numpy we're trying to 
build is 1.1.1; I'm attaching the numpy log.


Is it fixed in 1.2 ?

Joseph, in the meantime you might want to try to install md5 on your system.

C.


Joseph Hargitai wrote:

here are the logs -

OS:
 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64 x86_64 x86_64 
GNU/Linux

Rocks 4.3 with Redhat installation cluster. 



  


rm -rf logs/numpy.LOG;
cd build;
cp ../exsrc/src/numpy* .;
gunzip numpy*gz;
tar xf numpy-1.1.1.tar;
rm  numpy-1.1.1.tar;
cd numpy*;
LDFLAGS=-L/state/partition1/cdat/Externals/lib   -L/usr/X11R6/lib64   -L/usr/local/lib  -Wl,-R/state/partition1/cdat/Externals/lib -L/state/partition1/cdat/Externals/HDF5/lib -L/state/partition1/cdat/Externals/NetCDF/lib; export LDFLAGS ; CPPFLAGS=-I/state/partition1/cdat/Externals/include   -I/state/partition1/cdat/Externals/HDF5/include -I/state/partition1/cdat/Externals/HDF5/include -I/state/partition1/cdat/Externals/NetCDF/include ; export CPPFLAGS ; CC=gcc ; export CC ; FC=g77 ; export FC ; FCFLAGS=-g -O2 ; export FCFLAGS ;FCLIBS= -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/lib/../lib64 -L/usr/lib/../lib64 -lfrtbegin -lg2c -lm ; export FCLIBS ; F77=g77 ; export F77;FFLAGS=-g -O2 ; export FFLAGS;FLIBS= -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/lib/../lib64 -L/usr/lib/../lib64 -lfrtbegin -lg2c -lm ; export FLIBS ;CPP=gcc -E; export CPP;CXX=g++; export CXX ; EXTERNALS=/state/partition1/cdat/Externals; export EXTERNALS ; PKG_CONFIG_PATH=/state/partition1/cdat/Externals/lib/pkgconfig: ; export PKG_CONFIG_PATH ;PATH=/state/partition1/cdat/Externals/bin:/state/partition1/cdat/bin:/usr/kerberos/bin:/usr/java/jdk1.5.0_10/bin:/usr/lpp/mmfs/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/ganglia/bin:/opt/ganglia/sbin:/usr/local/matlab/R2007a/bin:/usr/sbin:/opt/openmpi/bin/:/opt/maui/bin:/opt/torque/bin:/opt/torque/sbin:/opt/rocks/bin:/opt/rocks/sbin:/usr/local/toolworks/totalview.8.4.1-5/bin:/usr/local/toolworks/totalview.8.4.1-5/man:/home/hargitai/bin:/opt/torque/bin:/opt/intel/fce/10.0.023/bin:/opt/intel/cce/10.0.023/bin:/usr/mpi/intel/openmpi-1.2.2-1/bin:/usr/local/toolworks/totalview.8.4.1-5/bin ; export PATH;
cp /state/partition1/cdat5/site.cfg . ;
unset LDFLAGS;;
/state/partition1/cdat/bin/python setup.py build  --force install --prefix=/state/partition1/cdat;
Running from numpy source directory.
F2PY Version 2_5585
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /state/partition1/cdat/Externals/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /state/partition1/cdat/Externals/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /state/partition1/cdat/Externals/lib
  NOT AVAILABLE

/state/partition1/cdat5/build/numpy-1.1.1/numpy/distutils/system_info.py:1340: UserWarning: 
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /state/partition1/cdat/Externals/lib
  NOT AVAILABLE

/state/partition1/cdat5/build/numpy-1.1.1/numpy/distutils/system_info.py:1349: UserWarning: 
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

/state/partition1/cdat5/build/numpy-1.1.1/numpy/distutils/system_info.py:1352: UserWarning: 
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /state/partition1/cdat/Externals/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /state/partition1/cdat/Externals/lib
  libraries lapack_atlas not found in /state/partition1/cdat/Externals/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /state/partition1/cdat/Externals/lib
  libraries 

[Numpy-discussion] numpy build fail under cygwin (Vista)

2008-08-20 Thread Charles Doutriaux
Hi, both trunk and 1.1.1 fail to build under Cygwin on Vista (latest version).

I'm copy/pasting the end of the log

C.

copying numpy/doc/reference/__init__.py - 
build/lib.cygwin-1.5.25-i686-2.5/numpy/doc/reference
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using build_ext
building 'numpy.core.multiarray' extension
compiling C sources
C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall 
-Wstrict-prototypes

creating build/temp.cygwin-1.5.25-i686-2.5
creating build/temp.cygwin-1.5.25-i686-2.5/numpy
creating build/temp.cygwin-1.5.25-i686-2.5/numpy/core
creating build/temp.cygwin-1.5.25-i686-2.5/numpy/core/src
compile options: '-g -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/src 
-Inumpy/core/include 
-Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy 
-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 
-I/usr/include/python2.5 -c'
gcc: numpy/core/src/multiarraymodule.c
gcc -shared -Wl,--enable-auto-image-base -g 
build/temp.cygwin-1.5.25-i686-2.5/numpy/core/src/multiarraymodule.o 
-L/usr/lib/python2.5/config -lpython2.5 -o 
build/lib.cygwin-1.5.25-i686-2.5/numpy/core/multiarray.dll
building 'numpy.core.umath' extension
compiling C sources
C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall 
-Wstrict-prototypes

creating build/temp.cygwin-1.5.25-i686-2.5/build
creating build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src
compile options: '-g -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/src 
-Inumpy/core/include 
-Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy 
-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 
-I/usr/include/python2.5 -c'
gcc: build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/umathmodule.c
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s: Assembler 
messages:
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s:72843: Error: 
suffix or operands invalid for `fnstsw'
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s:73098: Error: 
suffix or operands invalid for `fnstsw'
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s: Assembler 
messages:
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s:72843: Error: 
suffix or operands invalid for `fnstsw'
/cygdrive/c/Users/CHARLE~1/AppData/Local/Temp/ccib5Stl.s:73098: Error: 
suffix or operands invalid for `fnstsw'
error: Command gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall 
-Wstrict-prototypes -g -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/src 
-Inumpy/core/include 
-Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy 
-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 
-I/usr/include/python2.5 -c 
build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/umathmodule.c -o 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/umathmodule.o
 
failed with exit status 1

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy build fail under cygwin (Vista)

2008-08-20 Thread Charles Doutriaux
Thx David,

Are there any plans to apply the suggested fix in numpy?

C.
David Cournapeau wrote:
 On Wed, Aug 20, 2008 at 8:04 AM, Charles Doutriaux [EMAIL PROTECTED] wrote:
   
 Hi both trunk and 1.1.1 fail to build under cygwin vista (latest version)

 I'm copy/pasting the end of the log

 

 It is a cygwin bug:

 http://www.mail-archive.com/numpy-discussion@scipy.org/msg10051.html

 cheers,

 David
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion


   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] masked_equal not commutative?

2008-08-12 Thread Charles Doutriaux
Hi, I'm using 1.1.1
and found that numpy.ma.masked_equal is not commutative!
I would expect it to be in this case, or to raise an error for incompatible
shapes in the first case, no?

>>> a = numpy.ma.arange(100)
>>> a.shape = (10,10)
>>> b = numpy.ma.masked_equal(1,a)
>>> b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/lgm/cdat/latest/lib/python2.5/site-packages/numpy/ma/core.py", line 1691, in __repr__
    'data': str(self),
  File "/lgm/cdat/latest/lib/python2.5/site-packages/numpy/ma/core.py", line 1665, in __str__
    res[m] = f
IndexError: 0-d arrays can't be indexed.
>>> b = numpy.ma.masked_equal(a,1)
>>> b
masked_array(data =
 [[0 -- 2 3 4 5 6 7 8 9]
 [10 11 12 13 14 15 16 17 18 19]
 [20 21 22 23 24 25 26 27 28 29]
 [30 31 32 33 34 35 36 37 38 39]
 [40 41 42 43 44 45 46 47 48 49]
 [50 51 52 53 54 55 56 57 58 59]
 [60 61 62 63 64 65 66 67 68 69]
 [70 71 72 73 74 75 76 77 78 79]
 [80 81 82 83 84 85 86 87 88 89]
 [90 91 92 93 94 95 96 97 98 99]],
  mask =
 [[False  True False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]
 [False False False False False False False False False False]],
  fill_value=99)

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] masked_equal not commutative

2008-08-12 Thread Charles Doutriaux
As always, as soon as I clicked send I realized my error:
it is indeed not commutative, and that makes sense.
But I'm not sure the case:
numpy.ma.masked_equal(1,a)

should have worked, since we don't really know how to do this
comparison; the only thing that could make sense would be to commute the
arguments, but I think it's probably dangerous to assume what the user wants to do.
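
(For the record, a minimal sketch of the documented argument order, masked_equal(x, value): the array comes first and the value to mask second.)

# Sketch: masked_equal(x, value) masks the elements of x equal to value.
import numpy

a = numpy.ma.arange(100).reshape(10, 10)
b = numpy.ma.masked_equal(a, 1)   # masks the single element equal to 1
print(b.mask.sum())               # -> 1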

C.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Hello, I'm running into some strange error on a 64-bit machine.
I tracked it down to this line returning a NULL pointer; any idea why this
is?
I tried both numpy 1.1.1 and what is in trunk; numpy.test() passes for both.

Ok here's the uname of the machine and the offending line:

Linux quartic.txcorp.com 2.6.20-1.2320.fc5 #1 SMP Tue Jun 12 18:50:49 
EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

  array = (PyArrayObject *)PyArray_SimpleNew(d, dims, self->type);

where d is 3 and dims is 120,46,72;
self->type is 11.

It seems to pass with d=1, but I'm not 100% positive.

Any idea on why it could be failing?

Thanks,

C.

PS: just in case here are numpy.test results:
  import numpy
  numpy.test()
Numpy is installed in 
/home/facets/doutriaux1/cdat/latest/lib/python2.5/site-packages/numpy
Numpy version 1.1.1
Python version 2.5.2 (r252:60911, Aug  4 2008, 13:47:12) [GCC 4.1.1 
20070105 (Red Hat 4.1.1-51)]
  Found 3/3 tests for numpy.core.tests.test_memmap
  Found 145/145 tests for numpy.core.tests.test_regression
  Found 12/12 tests for numpy.core.tests.test_records
  Found 3/3 tests for numpy.core.tests.test_errstate
  Found 72/72 tests for numpy.core.tests.test_numeric
  Found 36/36 tests for numpy.core.tests.test_numerictypes
  Found 290/290 tests for numpy.core.tests.test_multiarray
  Found 18/18 tests for numpy.core.tests.test_defmatrix
  Found 63/63 tests for numpy.core.tests.test_unicode
  Found 16/16 tests for numpy.core.tests.test_umath
  Found 7/7 tests for numpy.core.tests.test_scalarmath
  Found 2/2 tests for numpy.core.tests.test_ufunc
  Found 5/5 tests for numpy.distutils.tests.test_misc_util
  Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
  Found 2/2 tests for numpy.fft.tests.test_fftpack
  Found 3/3 tests for numpy.fft.tests.test_helper
  Found 15/15 tests for numpy.lib.tests.test_twodim_base
  Found 1/1 tests for numpy.lib.tests.test_machar
  Found 1/1 tests for numpy.lib.tests.test_regression
  Found 43/43 tests for numpy.lib.tests.test_type_check
  Found 1/1 tests for numpy.lib.tests.test_ufunclike
  Found 15/15 tests for numpy.lib.tests.test_io
  Found 25/25 tests for numpy.lib.tests.test__datasource
  Found 10/10 tests for numpy.lib.tests.test_arraysetops
  Found 1/1 tests for numpy.lib.tests.test_financial
  Found 4/4 tests for numpy.lib.tests.test_polynomial
  Found 6/6 tests for numpy.lib.tests.test_index_tricks
  Found 49/49 tests for numpy.lib.tests.test_shape_base
  Found 55/55 tests for numpy.lib.tests.test_function_base
  Found 5/5 tests for numpy.lib.tests.test_getlimits
  Found 3/3 tests for numpy.linalg.tests.test_regression
  Found 89/89 tests for numpy.linalg.tests.test_linalg
  Found 96/96 tests for numpy.ma.tests.test_core
  Found 19/19 tests for numpy.ma.tests.test_mrecords
  Found 15/15 tests for numpy.ma.tests.test_extras
  Found 4/4 tests for numpy.ma.tests.test_subclassing
  Found 36/36 tests for numpy.ma.tests.test_old_ma
  Found 1/1 tests for numpy.oldnumeric.tests.test_oldnumeric
  Found 7/7 tests for numpy.tests.test_random
  Found 16/16 tests for numpy.testing.tests.test_utils
  Found 6/6 tests for numpy.tests.test_ctypeslib
..
 
..
--
Ran 1292 tests in 1.237s

OK
unittest._TextTestResult run=1292 errors=0 failures=0


and:
  import numpy
  numpy.test()
Running unit tests for numpy
NumPy version 1.2.0.dev5611
NumPy is installed in 
/home/facets/doutriaux1/cdat/latest/lib/python2.5/site-packages/numpy
Python version 2.5.2 (r252:60911, Aug  4 2008, 13:47:12) [GCC 4.1.1 
20070105 (Red Hat 4.1.1-51)]
nose version 0.10.3

Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Hi Chuck, it works great on 32-bit.

  int *dims;
  dims = (int *)malloc(self->nd*sizeof(int));

and self->nd is 3

C.

Charles R Harris wrote:


 On Tue, Aug 5, 2008 at 12:20 PM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello I'm running into some strange error on a 64bit machine,
 I tracked it down to this line returning a NULL pointer, any idea
 why is
 this?
 I tried both numpy1.1.1 and what in trunk, numpy.test() passes for
 both

 Ok here's the uname of the machine and the offending line:

 Linux quartic.txcorp.com http://quartic.txcorp.com/
 2.6.20-1.2320.fc5 #1 SMP Tue Jun 12 18:50:49
 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

  array = (PyArrayObject *)PyArray_SimpleNew(d, dims, self->type);

 where d is 3 and dims: 120,46,72
 self->type is 11

 it seems to pass with d=1, but i'm not 100% positive.

 Any idea on why it could be failing?

  
 What is the type of dims? Is there also a problem with a 32 bit OS?
  
 Chuck
 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Wow! That did it!

Are there other little tricks like this one I should watch for?

Thanks a lot! It would have taken me days to track this one down!

C.


Charles R Harris wrote:


 On Tue, Aug 5, 2008 at 1:14 PM, Charles Doutriaux [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 Hi chuck, works great on 32bit

  int *dims;
  dims = (int *)malloc(self->nd*sizeof(int));

 and self->nd is 3

  
 Should be
  
 npy_intp *dims;
  
 npy_intp will be 32 bits/ 64 bits depending on the architecture, ints 
 tend to always be 32 bits.
  
 Chuck
 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Cdat-discussion] Arrays containing NaNs

2008-07-25 Thread Charles Doutriaux
Hi Stephane,

This is a good suggestion; I'm ccing the numpy list on this, because I'm
wondering whether it wouldn't be a better fit to do it directly at the
numpy.ma level.

I'm sure they already thought about this (and 'inf' values as well), and
if they don't do it, there's probably some good reason we haven't thought
of yet.
So before I go ahead and do it in MV2 I'd like to know the reasons why
it's not in numpy.ma; they probably apply to MVs too.

C.
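
(For context, a minimal sketch of the explicit, non-automatic route that comes up later in the thread, using numpy.ma.masked_invalid.)

# Sketch: mask NaN (and Inf) entries explicitly when building the masked array.
import numpy

data = numpy.array([1.0, numpy.nan, 3.0])
masked = numpy.ma.masked_invalid(data)
print(masked)  # -> [1.0 -- 3.0]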

Stephane Raynaud wrote:
 Hi,

 how about automatically (or at least optionally) masking all NaN 
 values when creating a MV array?

 On Thu, Jul 24, 2008 at 11:43 PM, Arthur M. Greene 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Yup, this works. Thanks!

 I guess it's time for me to dig deeper into numpy syntax and
 functions, now that CDAT is using the numpy core for array
 management...

 Best,

 Arthur


 Charles Doutriaux wrote:

 Seems right to me,

 Except that the syntax might scare a bit the new users :)

 C.

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hi,

 I'm not sure if what I am about to suggest is a good idea
 or not, perhaps Charles will correct me if this is a bad
 idea for any reason.

 Lets say you have a cdms variable called U with NaNs as
 the missing
  value. First we can replace the NaNs with 1e20:

 U.data[numpy.where(numpy.isnan(U.data))] = 1e20

 And remember to set the missing value of the variable
 appropriately:

 U.setMissing(1e20)

 I hope that helps, Andrew



 Hi Arthur,

 If i remember correctly the way i used to do it was:
 a= MV2.greater(data,1.) b=MV2.less_equal(data,1)
 c=MV2.logical_and(a,b) # Nan are the only one left
 data=MV2.masked_where(c,data)

 BUT I believe numpy now has way to deal with nan I
 believe it is numpy.nan_to_num But it replaces with 0
 so it may not be what you
  want

 C.


 Arthur M. Greene wrote:

 A typical netcdf file is opened, and the single
 variable extracted:


 fpr=cdms.open('prTS2p1_SEA_allmos.cdf')
 pr0=fpr('prcp') type(pr0)

 class 'cdms2.tvariable.TransientVariable'

 Masked values (indicating ocean in this case) show
 up here as NaNs.


 pr0[0,-15:-5,0]

 prcp array([NaN NaN NaN NaN NaN NaN 0.37745094
 0.3460784 0.21960783 0.19117641])

 So far this is all consistent. A map of the first
 time step shows the proper land-ocean boundaries,
 reasonable-looking values, and so on. But there
 doesn't seem to be any way to mask
  this array, so, e.g., an 'xy' average can be
 computed (it
 comes out all nans). NaN is not equal to anything
 -- even
 itself -- so there does not seem to be any
 condition, among the
  MV.masked_xxx options, that can be applied as a
 test. Also, it
  does not seem possible to compute seasonal averages,
 anomalies, etc. -- they also produce just NaNs.

 The workaround I've come up with -- for now -- is
 to first generate a new array of identical shape,
 filled with 1.0E+20. One test I've found that can
 detect NaNs is numpy.isnan:


 isnan(pr0[0,0,0])

 True

 So it is _possible_ to tediously loop through
 every value in the old array, testing with isnan,
 then copying to the new array if the test fails.
 Then the axes have to be reset...

 isnan does not accept array arguments, so one
 cannot do, e.g.,

 prmasked=MV.masked_where(isnan(pr0),pr0)

 The element-by-element conversion is quite slow.
 (I'm still waiting for it to complete, in fact).
 Any suggestions for dealing with NaN-infested data
 objects?

 Thanks!

 AMG

 P.S. This is 5.0.0.beta, RHEL4.


 *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*
 Arthur M. Greene, Ph.D.
 The International Research Institute for Climate and Society
 The Earth Institute, Columbia

Re: [Numpy-discussion] [Cdat-discussion] Arrays containing NaNs

2008-07-25 Thread Charles Doutriaux
Hi All,

I'm sending a copy of this reply here because I think we could get some
good answers.

Basically it was suggested to automatically mask NaN (and Inf?) when
creating a masked array.

I'm sure you already thought of this on this list, and I was curious to
know why you decided not to do it.

Just so I can relay it to our list (sending to both lists came back
flagged as spam...).

C.


Hi Stephane,

This is a good suggestion; I'm ccing the numpy list on this, because I'm
wondering whether it wouldn't be a better fit to do it directly at the
numpy.ma level.

I'm sure they already thought about this (and 'inf' values as well), and
if they don't do it, there's probably some good reason we haven't thought
of yet.
So before I go ahead and do it in MV2 I'd like to know the reasons why
it's not in numpy.ma; they probably apply to MVs too.

C.

Stephane Raynaud wrote:
 Hi,

 how about automatically (or at least optionally) masking all NaN 
 values when creating a MV array?

 On Thu, Jul 24, 2008 at 11:43 PM, Arthur M. Greene 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Yup, this works. Thanks!

 I guess it's time for me to dig deeper into numpy syntax and
 functions, now that CDAT is using the numpy core for array
 management...

 Best,

 Arthur


 Charles Doutriaux wrote:

 Seems right to me,

 Except that the syntax might scare a bit the new users :)

 C.

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hi,

 I'm not sure if what I am about to suggest is a good idea
 or not, perhaps Charles will correct me if this is a bad
 idea for any reason.

 Lets say you have a cdms variable called U with NaNs as
 the missing
  value. First we can replace the NaNs with 1e20:

 U.data[numpy.where(numpy.isnan(U.data))] = 1e20

 And remember to set the missing value of the variable
 appropriately:

 U.setMissing(1e20)

 I hope that helps, Andrew



 Hi Arthur,

 If i remember correctly the way i used to do it was:
 a= MV2.greater(data,1.) b=MV2.less_equal(data,1)
 c=MV2.logical_and(a,b) # Nan are the only one left
 data=MV2.masked_where(c,data)

 BUT I believe numpy now has way to deal with nan I
 believe it is numpy.nan_to_num But it replaces with 0
 so it may not be what you
  want

 C.


 Arthur M. Greene wrote:

 A typical netcdf file is opened, and the single
 variable extracted:


 fpr=cdms.open('prTS2p1_SEA_allmos.cdf')
 pr0=fpr('prcp') type(pr0)

 class 'cdms2.tvariable.TransientVariable'

 Masked values (indicating ocean in this case) show
 up here as NaNs.


 pr0[0,-15:-5,0]

 prcp array([NaN NaN NaN NaN NaN NaN 0.37745094
 0.3460784 0.21960783 0.19117641])

 So far this is all consistent. A map of the first
 time step shows the proper land-ocean boundaries,
 reasonable-looking values, and so on. But there
 doesn't seem to be any way to mask
  this array, so, e.g., an 'xy' average can be
 computed (it
 comes out all nans). NaN is not equal to anything
 -- even
 itself -- so there does not seem to be any
 condition, among the
  MV.masked_xxx options, that can be applied as a
 test. Also, it
  does not seem possible to compute seasonal averages,
 anomalies, etc. -- they also produce just NaNs.

 The workaround I've come up with -- for now -- is
 to first generate a new array of identical shape,
 filled with 1.0E+20. One test I've found that can
 detect NaNs is numpy.isnan:


 isnan(pr0[0,0,0])

 True

 So it is _possible_ to tediously loop through
 every value in the old array, testing with isnan,
 then copying to the new array if the test fails.
 Then the axes have to be reset...

 isnan does not accept array arguments, so one
 cannot do, e.g.,

 prmasked=MV.masked_where(isnan(pr0),pr0)

 The element-by-element conversion is quite slow.
 (I'm still waiting for it to complete, in fact).
 Any suggestions

Re: [Numpy-discussion] [Cdat-discussion] Arrays containing NaNs

2008-07-25 Thread Charles Doutriaux
Hi Bruce,

Thx for the reply; we're aware of this. Basically the question was: why
not mask NaN automatically when creating a numpy.ma array?

C.

Bruce Southey wrote:
 Charles Doutriaux wrote:
   
 Hi Stephane,

 This is a good suggestion, I'm ccing the numpy list on this. Because I'm 
 wondering if it wouldn't be a better fit to do it directly at the 
 numpy.ma level.

 I'm sure they already thought about this (and 'inf' values as well) and 
 if they don't do it , there's probably some good reason we didn't think 
 of yet.
 So before i go ahead and do it in MV2 I'd like to know the reason why 
 it's not in numpy.ma, they are probably valid for MVs too.

 C.

 Stephane Raynaud wrote:
   
 
 Hi,

 how about automatically (or at least optionally) masking all NaN 
 values when creating a MV array?

 On Thu, Jul 24, 2008 at 11:43 PM, Arthur M. Greene 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Yup, this works. Thanks!

 I guess it's time for me to dig deeper into numpy syntax and
 functions, now that CDAT is using the numpy core for array
 management...

 Best,

 Arthur


 Charles Doutriaux wrote:

 Seems right to me,

 Except that the syntax might scare a bit the new users :)

 C.

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hi,

 I'm not sure if what I am about to suggest is a good idea
 or not, perhaps Charles will correct me if this is a bad
 idea for any reason.

 Lets say you have a cdms variable called U with NaNs as
 the missing
  value. First we can replace the NaNs with 1e20:

 U.data[numpy.where(numpy.isnan(U.data))] = 1e20

 And remember to set the missing value of the variable
 appropriately:

 U.setMissing(1e20)

 I hope that helps, Andrew



 Hi Arthur,

 If i remember correctly the way i used to do it was:
 a= MV2.greater(data,1.) b=MV2.less_equal(data,1)
 c=MV2.logical_and(a,b) # Nan are the only one left
 data=MV2.masked_where(c,data)

 BUT I believe numpy now has way to deal with nan I
 believe it is numpy.nan_to_num But it replaces with 0
 so it may not be what you
  want

 C.


 Arthur M. Greene wrote:

 A typical netcdf file is opened, and the single
 variable extracted:


 fpr=cdms.open('prTS2p1_SEA_allmos.cdf')
 pr0=fpr('prcp') type(pr0)

 class 'cdms2.tvariable.TransientVariable'

 Masked values (indicating ocean in this case) show
 up here as NaNs.


 pr0[0,-15:-5,0]

 prcp array([NaN NaN NaN NaN NaN NaN 0.37745094
 0.3460784 0.21960783 0.19117641])

 So far this is all consistent. A map of the first
 time step shows the proper land-ocean boundaries,
 reasonable-looking values, and so on. But there
 doesn't seem to be any way to mask
  this array, so, e.g., an 'xy' average can be
 computed (it
 comes out all nans). NaN is not equal to anything
 -- even
 itself -- so there does not seem to be any
 condition, among the
  MV.masked_xxx options, that can be applied as a
 test. Also, it
  does not seem possible to compute seasonal averages,
 anomalies, etc. -- they also produce just NaNs.

 The workaround I've come up with -- for now -- is
 to first generate a new array of identical shape,
 filled with 1.0E+20. One test I've found that can
 detect NaNs is numpy.isnan:


 isnan(pr0[0,0,0])

 True

 So it is _possible_ to tediously loop through
 every value in the old array, testing with isnan,
 then copying to the new array if the test fails.
 Then the axes have to be reset...

 isnan does not accept array arguments, so one
 cannot do, e.g.,

 prmasked=MV.masked_where(isnan(pr0),pr0)

 The element-by-element conversion is quite slow.
 (I'm still waiting for it to complete, in fact).
 Any suggestions for dealing with NaN-infested data
 objects?

 Thanks!

 AMG

 P.S. This is 5.0.0.beta

Re: [Numpy-discussion] [Cdat-discussion] Arrays containing NaNs

2008-07-25 Thread Charles Doutriaux
I mean not having to do it myself:
data is a numpy array with NaN in it, and
masked_data = numpy.ma.array(data)
would return a masked array with a mask where the NaNs were in data.

C.

Bruce Southey wrote:
 Charles Doutriaux wrote:
   
 Hi Bruce,

 Thx for the reply, we're aware of this, basically the question was why 
 not mask NaN automatically when creating a nump.ma array?

 C.

 Bruce Southey wrote:
   
 
 Charles Doutriaux wrote:
   
 
   
 Hi Stephane,

 This is a good suggestion, I'm ccing the numpy list on this. Because I'm 
 wondering if it wouldn't be a better fit to do it directly at the 
 numpy.ma level.

 I'm sure they already thought about this (and 'inf' values as well) and 
 if they don't do it , there's probably some good reason we didn't think 
 of yet.
 So before i go ahead and do it in MV2 I'd like to know the reason why 
 it's not in numpy.ma, they are probably valid for MVs too.

 C.

 Stephane Raynaud wrote:
   
 
   
 
 Hi,

 how about automatically (or at least optionally) masking all NaN 
 values when creating a MV array?

 On Thu, Jul 24, 2008 at 11:43 PM, Arthur M. Greene 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Yup, this works. Thanks!

 I guess it's time for me to dig deeper into numpy syntax and
 functions, now that CDAT is using the numpy core for array
 management...

 Best,

 Arthur


 Charles Doutriaux wrote:

 Seems right to me,

 Except that the syntax might scare a bit the new users :)

 C.

 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hi,

 I'm not sure if what I am about to suggest is a good idea
 or not, perhaps Charles will correct me if this is a bad
 idea for any reason.

 Lets say you have a cdms variable called U with NaNs as
 the missing
  value. First we can replace the NaNs with 1e20:

 U.data[numpy.where(numpy.isnan(U.data))] = 1e20

 And remember to set the missing value of the variable
 appropriately:

 U.setMissing(1e20)

 I hope that helps, Andrew



 Hi Arthur,

 If i remember correctly the way i used to do it was:
 a= MV2.greater(data,1.) b=MV2.less_equal(data,1)
 c=MV2.logical_and(a,b) # Nan are the only one left
 data=MV2.masked_where(c,data)

 BUT I believe numpy now has way to deal with nan I
 believe it is numpy.nan_to_num But it replaces with 0
 so it may not be what you
  want

 C.


 Arthur M. Greene wrote:

 A typical netcdf file is opened, and the single
 variable extracted:


 fpr=cdms.open('prTS2p1_SEA_allmos.cdf')
 pr0=fpr('prcp') type(pr0)

 class 'cdms2.tvariable.TransientVariable'

 Masked values (indicating ocean in this case) show
 up here as NaNs.


 pr0[0,-15:-5,0]

 prcp array([NaN NaN NaN NaN NaN NaN 0.37745094
 0.3460784 0.21960783 0.19117641])

 So far this is all consistent. A map of the first
 time step shows the proper land-ocean boundaries,
 reasonable-looking values, and so on. But there
 doesn't seem to be any way to mask
  this array, so, e.g., an 'xy' average can be
 computed (it
 comes out all nans). NaN is not equal to anything
 -- even
 itself -- so there does not seem to be any
 condition, among the
  MV.masked_xxx options, that can be applied as a
 test. Also, it
  does not seem possible to compute seasonal averages,
 anomalies, etc. -- they also produce just NaNs.

 The workaround I've come up with -- for now -- is
 to first generate a new array of identical shape,
 filled with 1.0E+20. One test I've found that can
 detect NaNs is numpy.isnan:


 isnan(pr0[0,0,0])

 True

 So it is _possible_ to tediously loop through
 every value in the old array, testing with isnan,
 then copying to the new array if the test fails.
 Then the axes have to be reset...

 isnan does not accept array arguments, so one
 cannot do, e.g.,

 prmasked=MV.masked_where(isnan(pr0),pr0)

 The element-by-element conversion is quite slow

Re: [Numpy-discussion] [Cdat-discussion] Arrays containing NaNs

2008-07-25 Thread Charles Doutriaux
Hi Pierre,

Thanks for the answer; I'm ccing cdat's discussion list.

It makes sense. That's also the way we develop things here: NEVER assume
what the user is going to do with the data, BUT give the user the
necessary tools to do what you're assuming he/she wants to do (as simply
as possible).

Thanks again for the answer.

C.


Pierre GM wrote:
 Oh, I guess this one's for me...

 On Thursday 01 January 1970 04:21:03 Charles Doutriaux wrote:

   
 Basically it was suggested to automarically mask NaN (and Inf ?) when
 creating ma.
 I'm sure you already thought of this on this list and was curious to
 know why you decided not to do it.
 

 Because it's always best to let the user decide what to do with his/her data 
 and not impose anything ?

 Masking a point doesn't necessarily mean that the point is invalid (in the 
 sense of NaNs/Infs), just that it doesn't satisfy some particular condition. 
 In that sense, masks act as selecting tools.

 By forcing invalid data to be masked at the creation of an array, you run the
 risk of tampering with the (potential) physical meaning of the mask you have
 given as input, and/or of missing the fact that some data are actually invalid when
 you don't expect them to be.

 Let's take an example: 
 I want to analyze sea surface temperatures at the world scale. The data comes 
 as a regular 2D ndarray, with NaNs for missing or invalid data. In a first 
 step, I create a masked array of this data, filtering out the land masses by 
 a predefined geographical mask. The remaining NaNs in the masked array 
 indicate areas where the sensor failed... That's important information I
 would probably have missed by masking all the NaNs up front...


 As Eric F. suggested, you can use numpy.ma.masked_invalid to create a masked 
 array with NaNs/Infs filtered out:

   
 >>> import numpy as np, numpy.ma as ma
 >>> x = np.array([1,2,None,4], dtype=float)
 >>> x
 array([  1.,   2.,  NaN,   4.])
 >>> mx = ma.masked_invalid(x)
 >>> mx
 masked_array(data = [1.0 2.0 -- 4.0],
       mask = [False False  True False],
       fill_value=1e+20)

 Note that the underlying data still has NaNs/Infs:
 >>> mx._data
 array([  1.,   2.,  NaN,   4.])

 You can also use the ma.fix_invalid function: it creates a mask where the data
 is not finite (NaNs/Infs), and sets the corresponding points to fill_value.
 >>> mx = ma.fix_invalid(x, fill_value=999)
 >>> mx
 masked_array(data = [1.0 2.0 -- 4.0],
       mask = [False False  True False],
       fill_value=1e+20)
 >>> mx._data
 array([   1.,    2.,  999.,    4.])


 The advantage of the second approach is that you no longer have NaNs/Infs in 
 the underlying data, which speeds things up during computation. The obvious 
 disadvantage is that you no longer know where the data was invalid...

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bug in ma ?

2008-07-21 Thread Charles Doutriaux
Hello,

I think I found a bug in numpy.ma.

I tried it with both the trunk and the 1.1 version:

import numpy
a= numpy.ma.arange(256)
a.shape=(128,2)

b=numpy.reshape(a,(64,2,2))


Traceback (most recent call last):
  File "quick_test_reshape.py", line 7, in <module>
    b=numpy.reshape(a,(64,2,2))
  File "/lgm/cdat/latest/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 116, in reshape
    return reshape(newshape, order=order)
TypeError: reshape() got an unexpected keyword argument 'order'
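
(A hedged workaround sketch, assuming the masked-array reshape method and numpy.ma.reshape behave as in current numpy; it simply avoids the numpy.reshape code path that raises the error above.)

# Sketch: reshape through numpy.ma instead of the top-level numpy.reshape.
import numpy

a = numpy.ma.arange(256)
a.shape = (128, 2)

b = a.reshape((64, 2, 2))             # MaskedArray method
c = numpy.ma.reshape(a, (64, 2, 2))   # numpy.ma-level function
print(b.shape, c.shape)               # -> (64, 2, 2) (64, 2, 2)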

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] kinds

2008-07-16 Thread Charles Doutriaux
Hello,

A long, long time ago there used to be a module named kinds.

It's totally outdated nowadays, but it had one nice piece of functionality, and I
was wondering if you knew how to reproduce it.

it was:
maxexp=kinds.default_float_kind.MAX_10_EXP
minexp=kinds.default_float_kind.MIN_10_EXP

and a bunch of similar flags that would basically tell you the limits on 
the machine you're running (or at least compiled on)

Any idea on how to reproduce this?

While we're at it, does anybody know of a way in Python to find out how much memory
is available on your system (similar to the free call under Linux)? I'm
looking for something that works across platforms (well, we can forget
Windows for now; I could live without it).

Thanks,

C.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] kinds

2008-07-16 Thread Charles Doutriaux
Thx Pierre,

That's exactly what I was looking for.

C.

Pierre GM wrote:
 On Wednesday 16 July 2008 15:08:59 Charles Doutriaux wrote:

   
 and a bunch of similar flags that would basically tell you the limits on
 the machine you're running (or at least compiled on)

 Any idea on how to reproduce this?
 

 Charles, have you tried numpy.finfo? That should give you information for
 floating points.
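
 (A small sketch of the numpy.finfo route Pierre suggests; note that finfo's maxexp/minexp are base-2 exponent limits rather than the base-10 MAX_10_EXP/MIN_10_EXP the old kinds module exposed.)

 # Sketch: machine limits for a floating-point type via numpy.finfo.
 import numpy

 info = numpy.finfo(numpy.float64)
 print(info.max, info.tiny, info.eps)   # largest, smallest-normal and epsilon values
 print(info.maxexp, info.minexp)        # base-2 exponent limits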


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] svd

2008-07-16 Thread Charles Doutriaux
doh...

Thanks Charles... I guess I've been staring at this code for too long 
now...

C.

Charles R Harris wrote:


 On Wed, Jul 16, 2008 at 3:58 PM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello,

 I'm using 1.1.0 and I have a bizarre thing happening

 it seems as if:
 doing:
 import numpy
 SVD = numpy.linalg.svd

 is different from doing
 import numpy.oldnumeric.linear_algebra
 SVD = numpy.oldnumeric.linear_algebra.singular_value_decomposition

 In the first case passing an array (204,1484) returns arrays of shape:
 svd: (204, 204) (204,) (1484, 1484)

 in the second case I get (what i expected actually):
 svd: (204, 204) (204,) (204, 1484)

 But looking at the code, it seems like
 numpy.oldnumeric.linear_algebra.singular_value_decomposition
 is basically numpy.linalg.svd

 Any idea on what's happening here?


 There is a full_matrices flag that determines whether you get the full
 orthogonal matrices or the minimum size needed, i.e.

 In [12]: l,d,r = linalg.svd(x, full_matrices=0)

 In [13]: shape(r)
 Out[13]: (2, 4)

 In [14]: x = zeros((2,4))

 In [15]: l,d,r = linalg.svd(x)

 In [16]: shape(r)
 Out[16]: (4, 4)

 In [17]: l,d,r = linalg.svd(x, full_matrices=0)

 In [18]: shape(r)
 Out[18]: (2, 4)


 Chuck



  


 Thx,

 C.


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org mailto:Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] quick question about numpy deprecation warnings

2008-07-15 Thread Charles Doutriaux
Hi, I have a quick question and I hope somebody can answer it (I admit I
should really check the numpy doc first).

I have been porting old Numeric-based C code to numpy/numpy.ma for the
last couple of weeks.

Just as I thought I was done, this morning I updated the numpy trunk and I
now get the following deprecation warnings:

 DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew.
DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use 
PyArray_NewFromDescr.

My quick question is:
can I simply replace the function names or is the arg list different?

Thanks,

Charles.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] _md5 module?

2008-06-24 Thread Charles Doutriaux
Hello, on Red Hat Enterprise 5 I get this with numpy 1.1 and Python 2.5.

Any idea if we can get around this, short of rebuilding Python with md5 support,
I guess?
Shouldn't numpy catch that before building?

C.


/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/__init__.py,
 line 42, in module
import add_newdocs
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/add_newdocs.py,
 line 5, in module
from lib import add_newdoc
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/__init__.py,
 line 18, in module
from io import *
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/io.py,
 line 16, in module
from _datasource import DataSource
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/_datasource.py,
 line 42, in module
from urllib2 import urlopen, URLError
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/urllib2.py,
line 91, in module
import hashlib
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/hashlib.py,
line 133, in module
md5 = __get_builtin_constructor('md5')
  File
/usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/hashlib.py,
line 60, in __get_builtin_constructor
import _md5
ImportError: No module named _md5

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] _md5 module?

2008-06-24 Thread Charles Doutriaux
Hi Charles,

Yes... unfortunately we have to be able to build out of the box; we can't
rely on much of anything (except compiler and X) being there already...

It would be much easier for CDAT to simply say: get Python 2.5 and numpy
1.1 and all the externals, and if you ever get it together come back and see
us :)

Unfortunately I don't think we'd have much success that way :)

I'll keep looking... I don't understand why Python didn't build with
md5. It has never happened before...

I guess it's more of a Python issue than a numpy one... But as you said it's
deprecated, so you might as well want to replace it.

C.

Charles R Harris wrote:


 On Tue, Jun 24, 2008 at 10:07 AM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello on redhat enterprise 5 i get this with numpy 1.1 and python 2.5

 Any idea if we can get around tihs? short of rebuilding python
 with md5 support i guess.
 Shouldn't numpy catch that before building?

 C.


 
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/__init__.py,
 line 42, in module
import add_newdocs
  File
 
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/add_newdocs.py,
 line 5, in module
from lib import add_newdoc
  File
 
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/__init__.py,
 line 18, in module
from io import *
  File
 
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/io.py,
 line 16, in module
from _datasource import DataSource
  File
 
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/site-packages/numpy/lib/_datasource.py,
 line 42, in module
from urllib2 import urlopen, URLError
  File
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/urllib2.py,
 line 91, in module
import hashlib
  File
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/hashlib.py,
 line 133, in module
md5 = __get_builtin_constructor('md5')
  File
 /usr/local/cdat/release/5.0e/5.0.0.alpha7/lib/python2.5/hashlib.py,
 line 60, in __get_builtin_constructor
import _md5
 ImportError: No module named _md5


 Python2.5 should come with md5, at least it does on fedora 8: 
 /usr/lib/python2.5/lib-dynload/_md5module.so. It is also deprecated as 
 of 2.5, so we might need some modifications anyway.

 Are you using the python built by CDAT? I found the easiest way to get 
 CDAT 4.3 working was to copy over the relevant site-packages from the 
 CDAT built python to the system python (of the same version). You 
 could maybe go the other way also.

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] building 1.1.0 on 64bit, seems to be stuck on infinite loop or something

2008-06-18 Thread Charles Doutriaux
Well, this has nothing to do with CDAT 5.0 yet. At this point it has just
built Python from sources and is putting numpy 1.1.0 into it.

I'll check the compiler bug option next... What if it is indeed a
compiler bug? What should I do next?

C.

Charles R Harris wrote:


 On Tue, Jun 17, 2008 at 3:40 PM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello,

 I'm trying to build 1.1.0 on a:
 Linux thunder0 2.6.9-74chaos #1 SMP Wed Oct 24 08:41:12 PDT 2007 ia64
 ia64 ia64 GNU/Linux

 using python 2.5 (latest stable)

 It seems to be stuck forever at the end of that output:

 Has anybody run into this?

 ~/CDAT/5.0.0.beta1/bin/python setup.py install
 Running from numpy source directory.
 F2PY Version 2_5237
 blas_opt_info:
 blas_mkl_info:
  libraries mkl,vml,guide not found in
 /g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE


 I've had mixed success trying to set up CDAT5.0.0.beta1 and wasn't 
 motivated to pursue it. I'll give it another shot, but probably not 
 until the weekend.

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] building 1.1.0 on 64bit, seems to be stuck on infinite loop or something

2008-06-18 Thread Charles Doutriaux
Hi there,

Looks like it; moving from gcc 3.3.4 to gcc 4.1.2 seems to have fixed it.

Thx,

C.

David Cournapeau wrote:
 Charles Doutriaux wrote:
   
 Hello,

 I'm trying to build 1.1.0 on a:
 Linux thunder0 2.6.9-74chaos #1 SMP Wed Oct 24 08:41:12 PDT 2007 ia64 
 ia64 ia64 GNU/Linux

 using python 2.5 (latest stable)

 It seems to be stuck forever at the end of that output:
   
 

 Is it stuck at 100% CPU usage? It may well be a compiler bug. Did you 
 check the gcc bug list for your gcc version?

 cheers,

 David
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] building 1.1.0 on 64bit, seems to be stuck on infinite loop or something

2008-06-17 Thread Charles Doutriaux
Hello,

I'm trying to build 1.1.0 on a:
Linux thunder0 2.6.9-74chaos #1 SMP Wed Oct 24 08:41:12 PDT 2007 ia64 
ia64 ia64 GNU/Linux

using python 2.5 (latest stable)

It seems to be stuck forever at the end of that output:

Has anybody run into this?

~/CDAT/5.0.0.beta1/bin/python setup.py install
Running from numpy source directory.
F2PY Version 2_5237
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
/g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in 
/g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1340:
 
UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1349:
 
UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1352:
 
UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
/g/g90/cdoutrix/CDAT/Externals/lib
  libraries lapack_atlas not found in /g/g90/cdoutrix/CDAT/Externals/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in 
/g/g90/cdoutrix/CDAT/Externals/lib
  libraries lapack_atlas not found in /g/g90/cdoutrix/CDAT/Externals/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1247:
 
UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
lapack_info:
  libraries lapack not found in /g/g90/cdoutrix/CDAT/Externals/lib
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1258:
 
UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
  warnings.warn(LapackNotFoundError.__doc__)
lapack_src_info:
  NOT AVAILABLE

/g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1261:
 
UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
  warnings.warn(LapackSrcNotFoundError.__doc__)
  NOT AVAILABLE

running install
running build
running scons
customize UnixCCompiler
Found executable /usr/local/bin/gcc
customize GnuFCompiler
Found executable /usr/local/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize UnixCCompiler
customize UnixCCompiler using scons
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands 
--compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands 
--fcompiler options
running build_src
building py_modules sources
building extension numpy.core.multiarray sources
  adding 'build/src.linux-ia64-2.5/numpy/core/config.h' to sources.
  adding 'build/src.linux-ia64-2.5/numpy/core/numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_array_api.py
  adding 'build/src.linux-ia64-2.5/numpy/core/__multiarray_api.h' to 
sources.
  adding 

Re: [Numpy-discussion] building 1.1.0 on 64bit, seems to be stuck on infinite loop or something

2008-06-17 Thread Charles Doutriaux
Just in case it helps debugging, when I kill it I get:
_exec_command_posix failed (status=2)
C.


Charles Doutriaux wrote:
 Hello,

 I'm trying to build 1.1.0 on a:
 Linux thunder0 2.6.9-74chaos #1 SMP Wed Oct 24 08:41:12 PDT 2007 ia64 
 ia64 ia64 GNU/Linux

 using python 2.5 (latest stable)

 It seems to be stuck forever at the end of that output:

 Has anybody run into this?

 ~/CDAT/5.0.0.beta1/bin/python setup.py install
 Running from numpy source directory.
 F2PY Version 2_5237
 blas_opt_info:
 blas_mkl_info:
   libraries mkl,vml,guide not found in /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

 atlas_blas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in 
 /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

 atlas_blas_info:
   libraries f77blas,cblas,atlas not found in 
 /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1340:
  
 UserWarning:
 Atlas (http://math-atlas.sourceforge.net/) libraries not found.
 Directories to search for the libraries can be specified in the
 numpy/distutils/site.cfg file (section [atlas]) or by setting
 the ATLAS environment variable.
   warnings.warn(AtlasNotFoundError.__doc__)
 blas_info:
   libraries blas not found in /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1349:
  
 UserWarning:
 Blas (http://www.netlib.org/blas/) libraries not found.
 Directories to search for the libraries can be specified in the
 numpy/distutils/site.cfg file (section [blas]) or by setting
 the BLAS environment variable.
   warnings.warn(BlasNotFoundError.__doc__)
 blas_src_info:
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1352:
  
 UserWarning:
 Blas (http://www.netlib.org/blas/) sources not found.
 Directories to search for the sources can be specified in the
 numpy/distutils/site.cfg file (section [blas_src]) or by setting
 the BLAS_SRC environment variable.
   warnings.warn(BlasSrcNotFoundError.__doc__)
   NOT AVAILABLE

 lapack_opt_info:
 lapack_mkl_info:
 mkl_info:
   libraries mkl,vml,guide not found in /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

   NOT AVAILABLE

 atlas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in 
 /g/g90/cdoutrix/CDAT/Externals/lib
   libraries lapack_atlas not found in /g/g90/cdoutrix/CDAT/Externals/lib
 numpy.distutils.system_info.atlas_threads_info
   NOT AVAILABLE

 atlas_info:
   libraries f77blas,cblas,atlas not found in 
 /g/g90/cdoutrix/CDAT/Externals/lib
   libraries lapack_atlas not found in /g/g90/cdoutrix/CDAT/Externals/lib
 numpy.distutils.system_info.atlas_info
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1247:
  
 UserWarning:
 Atlas (http://math-atlas.sourceforge.net/) libraries not found.
 Directories to search for the libraries can be specified in the
 numpy/distutils/site.cfg file (section [atlas]) or by setting
 the ATLAS environment variable.
   warnings.warn(AtlasNotFoundError.__doc__)
 lapack_info:
   libraries lapack not found in /g/g90/cdoutrix/CDAT/Externals/lib
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1258:
  
 UserWarning:
 Lapack (http://www.netlib.org/lapack/) libraries not found.
 Directories to search for the libraries can be specified in the
 numpy/distutils/site.cfg file (section [lapack]) or by setting
 the LAPACK environment variable.
   warnings.warn(LapackNotFoundError.__doc__)
 lapack_src_info:
   NOT AVAILABLE

 /g/g90/cdoutrix/svn/cdat/branches/devel/build/numpy-1.1.0/numpy/distutils/system_info.py:1261:
  
 UserWarning:
 Lapack (http://www.netlib.org/lapack/) sources not found.
 Directories to search for the sources can be specified in the
 numpy/distutils/site.cfg file (section [lapack_src]) or by setting
 the LAPACK_SRC environment variable.
   warnings.warn(LapackSrcNotFoundError.__doc__)
   NOT AVAILABLE

 running install
 running build
 running scons
 customize UnixCCompiler
 Found executable /usr/local/bin/gcc
 customize GnuFCompiler
 Found executable /usr/local/bin/g77
 gnu: no Fortran 90 compiler found
 gnu: no Fortran 90 compiler found
 customize GnuFCompiler
 gnu: no Fortran 90 compiler found
 gnu: no Fortran 90 compiler found
 customize UnixCCompiler
 customize UnixCCompiler using scons
 running config_cc
 unifing config_cc, config, build_clib, build_ext, build commands 
 --compiler options
 running config_fc
 unifing config_fc, config, build_clib, build_ext, build commands 
 --fcompiler options
 running build_src
 building py_modules sources
 building extension numpy.core.multiarray sources
   adding 'build/src.linux-ia64-2.5/numpy/core/config.h

Re: [Numpy-discussion] One more wrinkle in going to 5.0

2008-06-05 Thread Charles Doutriaux
Arthur,
I'm forwarding your question to the numpy list; I'm hoping somebody 
there will be able to help you with that.

C.

Arthur M. Greene wrote:
 Hi All,

 This does not involve the CDAT-5 code, but rather files pickled under 
 earlier versions of CDAT. These files store the variable type along 
 with the data, but some types from 4.x are absent in 5.0, making the 
 pickled files unreadable. Example:

 $ cdat2
 Executing /home/amg/usr/local/cdat/5.0.0.beta1/bin/python
 Python 2.5.2 (r252:60911, May 30 2008, 11:00:23)
 [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] on linux2
 Type help, copyright, credits or license for more information.
  import cPickle as pickle
  f=open('pickled/prTS2p1_Ind.p')
  pr=pickle.load(f)
 Traceback (most recent call last):
   File stdin, line 1, in module
 ImportError: No module named cdms.tvariable
 

 $ grep cdms pickled/prTS2p1_Ind.p
 (ccdms.tvariable
 (icdms.grid
 (icdms.axis
 (icdms.axis
 ((icdms.axis

 If I replace cdms with cdms2 (using sed) and try again, a new 
 error comes up:

  f=open('prTS2p1_Ind1.p')
  pr=pickle.load(f)
 Traceback (most recent call last):
   File stdin, line 1, in module
 ImportError: No module named Numeric

 $ grep -B 3 -A 3 Numeric prTS2p1_Ind1.p
 (dp22
 S'gridtype'
 p23
 cNumeric
 array_constructor
 p24
 ((I1

 Numpy does not have an array_constructor attribute, so I can't simply 
 replace Numeric with numpy in this case. If I instead replace it with 
 numpy.oldnumeric, trying to unpickle produces what appears to be a 
 screen dump (in the python console) of the pickled file as text, but 
 there is no assignment. So I'm presently at a loss as to how to read 
 such files. I have hundreds of files that were pickled under 4.x, so 
 this could be a big headache...
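
 One way to cope with such pickles, sketched below and not tested against
 real CDAT 4.x files, is a custom Unpickler that rewrites the old module
 names at load time. The mapping is an assumption based on the grep output
 above, and the 'Numeric' entry assumes numpy.oldnumeric still exposes
 Numeric's array_constructor for unpickling; if it does not, that entry
 would have to point at a small shim instead.

 import pickle  # the pure-Python Unpickler, which can be subclassed

 # Hypothetical mapping from module names stored in the old pickles
 # to their new homes; adjust to the actual cdms2/numpy layout.
 MODULE_MAP = {
     'cdms': 'cdms2',
     'cdms.tvariable': 'cdms2.tvariable',
     'cdms.grid': 'cdms2.grid',
     'cdms.axis': 'cdms2.axis',
     'Numeric': 'numpy.oldnumeric',  # assumes array_constructor lives here
 }

 class RenamingUnpickler(pickle.Unpickler):
     def find_class(self, module, name):
         # Redirect lookups for renamed modules before resolving the object.
         module = MODULE_MAP.get(module, module)
         return pickle.Unpickler.find_class(self, module, name)

 def load_old(path):
     f = open(path, 'rb')
     try:
         return RenamingUnpickler(f).load()
     finally:
         f.close()

 This avoids editing the pickle files themselves, so the sed step becomes
 unnecessary.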

 Thx for any ideas,

 Arthur


 *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*
 Arthur M. Greene, Ph.D.
 The International Research Institute for Climate and Society
 The Earth Institute, Columbia University, Lamont Campus
 amg -at- iri -dot- columbia -dot- edu
 *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*



 -- 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Status Report for NumPy 1.1.0

2008-05-08 Thread Charles Doutriaux
I don't think it is reasonable to say the trunk is in good shape when 
the power function does not work...

Just my thoughts...

C.

Charles R Harris wrote:
 Hi Jarrod,

 On Tue, May 6, 2008 at 2:40 AM, Jarrod Millman [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 Hey,

 The trunk is in pretty good shape and it is about time that I put out
 an official release.  So tomorrow (in a little over twelve hours) I am
 going to create a 1.1.x branch and the trunk will be officially open
 for 1.2 development.  If there are no major issues that show up at the
 last minute, I will tag 1.1.0 twenty-four hours after I branch.  As
 soon as I tag the release I will ask the David and Chris to create the
 official Windows and Mac binaries.  If nothing goes awry, you can
 expect the official release announcement by Monday, May 12th.

 In order to help me with the final touches, would everyone look over
 the release notes one last time:
 http://projects.scipy.org/scipy/numpy/milestone/1.1.0
 Please let me know if there are any important omissions or errors
 ASAP.


 Scalar indexing of matrices has changed, 1D arrays are now returned 
 instead of matrices. This has to be documented in the release notes.

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.ma question

2008-04-30 Thread Charles Doutriaux
Hello, I have a quick question about MA and scalars.

The following works:
import numpy.ma as MA
a0= MA.array(0)/1
a1= MA.array((0,0))/1

but not:
import numpy.ma as MA
a0= MA.array(0)/0
a1= MA.array((0,0))/0

Is that a bug ?

I'm using numpy 1.0.5.dev4958 and also whats in trunk right now 
(1.1.0.dev5113)

C.


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] oldnumeric.MA behaviour broken ?

2008-04-30 Thread Charles Doutriaux
HI,

import numpy.oldnumeric.ma as MA
a0= MA.array(0)/0
sh0=list(a0.shape)
sh0.insert(0,1)
b0=MA.resize(a0,sh0)

This does not work anymore; I believe it used to.

It does work using numpy.ma (but I can't subclass it yet...)

C.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.ma question

2008-04-30 Thread Charles Doutriaux
That's exactly my understanding; thanks for confirming.

C.

Pierre GM wrote:
 Charles,

   
 but not:
 import numpy.ma as MA
 a0= MA.array(0)/0
 a1= MA.array((0,0))/0

 Is that a bug ?
 

 That a0 is MA.masked is not a bug. That a1 should be a (2,) array masked 
 everywhere should work, but does not: that's the bug. Thanks for reporting.
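
For reference, a small sketch of the behaviour described as correct above
(the (2,)-shaped case is the one reported as broken at the time):

import numpy.ma as MA

# Scalar case: dividing a 0-d masked array by zero yields the masked
# constant; per the explanation above, this is the intended behaviour.
a0 = MA.array(0) / 0
print(a0 is MA.masked)

# Array case: this should give a (2,) array that is masked everywhere;
# that it did not at the time was the reported bug.
a1 = MA.array((0, 0)) / 0
print(a1.mask)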

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy release

2008-04-25 Thread Charles Doutriaux
Anne Archibald wrote:
 Yes, well, it really looks unlikely we will be able to agree on what
 the correct solution is before 1.1, so I would like to have something
 non-broken for that release.
   

+1 on that!
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bz2

2008-04-17 Thread Charles Doutriaux
Hi,

Just so you know, yesterday I was doing some tests on a fresh Ubuntu 
installation, trying to figure out the minimum external dependencies for 
our system.

I built (from sources) Python (2.5.2), then numpy.

All seemed OK, but when importing numpy I got an error trying to import 
the bz2 module.

It turns out Python hadn't been built with bz2 because the bz2 devel package 
wasn't installed.

I'm just reporting this to you guys, as you may want to add a test for 
bz2 in your setup.py before going ahead and building the whole thing.
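
Such a check near the top of setup.py could look like the sketch below
(just an illustration; the error message is made up):

# Sketch of an early sanity check in setup.py: fail fast with a clear
# message if the host Python was built without the bz2 module.
try:
    import bz2  # we only care that the import works
except ImportError:
    raise SystemExit(
        "This Python was built without the bz2 module; install the bz2 "
        "development package (e.g. libbz2-dev on Ubuntu) and rebuild "
        "Python before building numpy.")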

Hope this helps,

C.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.ma dtype.char set to ?

2008-04-09 Thread Charles Doutriaux
Hi, I've tracked down a bug in our code back to numpy.ma:

s2=numpy.array([[10,60],[65,45]])
s3=numpy.ma.masked_greater(s2,50.)
s3.mask.dtype.char
returns: '?'

In the Numeric-to-numpy.ma table (page 35) the '?' is not listed. Is it an 
omission in numpy.ma, or is it a valid char type?
s3.dtype does say it is of type bool, which I guess should be 'B' or 
'b', no?

Thanks,

C.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.ma dtype.char set to ?

2008-04-09 Thread Charles Doutriaux
Sorry, I can answer my own question (page 22 of the numpy book): booleans 
are supposed to be '?'. My mistake.
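
For the record, a quick interactive check of this:

import numpy

# The dtype character code for booleans really is '?'.
print(numpy.dtype(bool).char)           # '?'
print(numpy.dtype('?') == numpy.bool_)  # True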

C.

Charles Doutriaux wrote:
 Hi, I've tracked down a bug in our code back to numpy.ma

 s2=numpy.array([[10,60],[65,45]])
 s3=numpy.ma.masked_greater(s2,50.)
 s3.mask.dtype.char
 returns: '?'

 In the Numeric - numpy.ma (page 35) the ? is not listed. Is it an 
 omission in numpy.ma ? Or is it a valid char type ?
 s3.dtype does say it is of type bool  which I guess should be 'B' or 
 'b' no ?

 Thanks,

 C.



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-04-03 Thread Charles Doutriaux
Travis, Pierre,

Good news: what's in trunk right now seems to work great with our stuff. 
I only had to replace numpy.core.ma.take with numpy.ma.take 
(numpy.oldnumeric.ma.take didn't seem to work).
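
For anyone doing the same conversion, a small self-contained sketch of
that replacement (the array and indices here are made up):

import numpy
import numpy.ma

ta = numpy.ma.masked_equal(numpy.arange(6).reshape(2, 3), 4)
indices = [0, 2]

# Old spelling: numpy.core.ma.take(ta, indices, axis=1)
# New spelling:
maresult = numpy.ma.take(ta, indices, axis=1)
print(maresult.shape)  # (2, 2)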

So as soon as 1.0.5 is out, I'll start working toward using numpy.ma only.

Also, I have one little request: I remember you said oldnumeric would be 
gone in the next version. But our users will have hundreds (thousands?) 
of scripts converted with import numpy.oldnumeric.ma as MA or import 
numpy.oldnumeric as Numeric.
And although we're insisting on using numpy and numpy.ma right away, 
I'm sure 99% of them will ignore this. So would it be horrible to leave 
a shortcut (with a warning while loading, probably; actually, could we 
have the warning already in numpy 1.0.5?) from numpy.oldnumeric back to 
numpy? Same for oldnumeric.ma pointing back to numpy.ma. I realize it's 
not really clean... At least it would be great to have a warning raised 
in 1.0.5 saying that it will disappear in 1.1. That way users might be 
more inclined to start converting cleanly.
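
For what it's worth, the kind of shortcut being asked for could be as
small as this hypothetical numpy/oldnumeric/ma.py (the warning text is
made up, purely to illustrate):

# Hypothetical compatibility shim: keep the old import path alive while
# warning users that it is going away.
import warnings

warnings.warn("numpy.oldnumeric.ma is deprecated and will be removed in "
              "numpy 1.1; please switch to numpy.ma",
              DeprecationWarning, stacklevel=2)

from numpy.ma import *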

Thanks for considering this and also thanks a LOT for all your help on 
this issue!

C.


Travis E. Oliphant wrote:
 Charles Doutriaux wrote:
   
 Hi Travis,
 Ok we're almost there, in my test suite i get:
 maresult = numpy.core.ma.take(ta, indices, axis=axis)
 AttributeError: 'module' object has no attribute 'ma'
 data = numpy.core.ma.take(ax[:], indices)
 AttributeError: 'module' object has no attribute 'ma'
   
 

 I think the problem here is that numpy.core.ma is no longer the correct 
 place.This should be

 numpy.oldnumeric.ma.take  because numpy.oldnumeric.ma is the correct 
 location.

 In my mind you shouldn't really have been using numpy.core.ma, but 
 instead numpy.ma because whether things are in core or lib could change 
 ;-) 

 -Travis

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-04-02 Thread Charles Doutriaux
Hi Travis,

Thanks,

I'll fix this and let you know (probably tomorrow, because I came down 
with some nasty real bug... the flu or something like that).

C.

Travis E. Oliphant wrote:
 Charles Doutriaux wrote:
   
 Hi Travis,
 Ok we're almost there, in my test suite i get:
 maresult = numpy.core.ma.take(ta, indices, axis=axis)
 AttributeError: 'module' object has no attribute 'ma'
 data = numpy.core.ma.take(ax[:], indices)
 AttributeError: 'module' object has no attribute 'ma'
   
 

 I think the problem here is that numpy.core.ma is no longer the correct 
 place.This should be

 numpy.oldnumeric.ma.take  because numpy.oldnumeric.ma is the correct 
 location.

 In my mind you shouldn't really have been using numpy.core.ma, but 
 instead numpy.ma because whether things are in core or lib could change 
 ;-) 

 -Travis

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-04-01 Thread Charles Doutriaux
Hi Pierre,

I'm ccing Bob on this; he's the main developer for the cdms2 package. At 
this point I think Travis's original suggestion was the best: we should 
leave it like it was for 1.0.5. There are a lot of changes to do in order 
to get the backward compatibility going, and I feel it should wait until 
1.1. I don't feel comfortable doing all these changes and releasing our 
software like this. It's a major change and it needs to be tested for a 
while. Our users hammer MA heavily and rely on it a lot. That said, I do 
see the usefulness of the new ma, and in the long term I believe it has 
major merits to be used instead of oldnumeric.ma.

Your thoughts?

C.

Pierre GM wrote:
 Charles,

   
 Any idea where that comes from ?
 

 No, not really. Seems that TransientVariable(*args) doesn't work. I guess 
 it's 
 because it has inherited a __call__method, and tries to use that method 
 instead of the __new__. Try to call TransientVariable.__new__ instead of just 
 TransientVariable in l505 of avariables.py, and see how it goes. You may want 
 to rethink what subSlice does as well. Instead of calling the class 
 constructor, you can also just create a view of your array and update the 
 attributes accordingly.

 Once again, a stripped-down version of the class and its parents would be 
 useful.

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-04-01 Thread Charles Doutriaux
Hi Travis,

I get this:

import numpy, numpy.oldnumeric.ma as MA, numpy.oldnumeric as 
Numeric, PropertiedClasses
  File 
/lgm/cdat/latest/lib/python2.5/site-packages/numpy/oldnumeric/ma.py, 
line 2204, in module
array.mean = _m(average)
NameError: name 'average' is not defined

C.

Travis E. Oliphant wrote:
 Charles Doutriaux wrote:
   
 Hi Pierre,

 Im ccing Bob on this, he's the main developper for cdms2 package.
 
 I've uploaded the original ma.py file back into oldnumeric so that 
 oldnumeric.ma should continue to work as before.  Can you verify this?

 Thanks,

 -Travis O.


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-04-01 Thread Charles Doutriaux
Hi Travis,
OK, we're almost there; in my test suite I get:
maresult = numpy.core.ma.take(ta, indices, axis=axis)
AttributeError: 'module' object has no attribute 'ma'
data = numpy.core.ma.take(ax[:], indices)
AttributeError: 'module' object has no attribute 'ma'

I don't know if it was automatically put there by the converter or if we 
put it in by hand.

If it's the former, you might want to correct it; otherwise don't 
worry, it's easy enough to fix (I think?).

C.

Travis E. Oliphant wrote:
 Charles Doutriaux wrote:
   
 Hi Travis,

 I get this:

 import numpy, numpy.oldnumeric.ma as MA, numpy.oldnumeric as 
 Numeric, PropertiedClasses
   File 
 /lgm/cdat/latest/lib/python2.5/site-packages/numpy/oldnumeric/ma.py, 
 line 2204, in module
 array.mean = _m(average)
 NameError: name 'average' is not defined
   
 
 Thanks,

 Can you try again?

 Best regards,

 -Travis

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-28 Thread Charles Doutriaux
Hi Pierre,

I just tested it out; I'm still missing
from numpy.oldnumeric.ma import common_fill_value, set_fill_value
which breaks the code later on.

C.

Pierre GM wrote:
 All,
 Would you mind trying the SVN (ver  4946) and let me know what I'm still 
 missing ?
 Thanks a lot in advance
 P.
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-28 Thread Charles Doutriaux
Hi Pierre,

Hmm... something is still broken. But maybe you can help me figure out 
whether something changed dramatically.
We're defining a new class, MaskedVariable, which inherits from our 
other class AbstractVariable (in which we define a reorder function for 
the objects) and from MaskedArray (it used to inherit from numpy.oldnumeric.ma.array).
Now when reading data from a file it complains that MaskedArray has no 
attribute reorder, so that probably means that somewhere something 
failed in the initialisation of our object and it returned a plain 
MaskedArray instead of a MaskedVariable... But since the only changes 
come from numpy, I wonder if the inheritance from MaskedArray is somehow 
different from the one from MA.array?

Any clue on where to start looking would be great,

Thanks,

C

Pierre GM wrote:
 Charles,

   
 I just tested it out, I'm still missing
 from numpy.oldnumeric.ma import common_fill_value , set_fill_value
 which breaks the code later
 

 Turns out I had forgotten to put the functions in numpy.ma.core.__all__: they 
 had been already coded all along... That's fixed in SVN4950
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-27 Thread Charles Doutriaux
Eric, Pierre,

I agree the new ma is probably much better and we should use it.

All I was saying is that 1.0.4 was working great with the small 
compatibility layer.
I even have a frozen version of 1.0.5 devel that works great. Then 
suddenly everything broke.

I was really happy with the layer in 1.0.4.

It is not only a matter of converting our software, I can do that. It is 
also a matter of having our user base go smoothly through the transition.

So far they were happy with the first-order conversion script and a 
little bit of editing. But we can't ask them to go through thousands of 
lines of old code and more or less rewrite it all.

When I said it shouldn't be hard to do as much of MA as possible, I 
simply meant put back the compatibility layer that was there in 1.0.4 
and early 1.0.5dev. I'm not advocating rewriting the old MA at all, 
simply keeping what was already there as far as the transition goes; 
why undo it?

I really don't mind helping you in the process if you want.

C.




Eric Firing wrote:
 Charles Doutriaux wrote:
   
 The reorder is a function we implement. By digging a bit into this, my 
 guess is that the missing functions in numpy.ma are causing a failure 
 at some point in our init and returning the wrong object type.

 But the whole idea was to keep a backward compatible layer with Numeric 
 and MA. It worked great for a while and now things are getting more and 
 more broken.
 
 There are costs as well as benefits in maintaining backward 
 compatibility, so one should not rely on it indefinitely.
   
 Correct me if I'm wrong, but it seems as if numpy.oldnumeric.ma is 
 now simply numpy.ma and it's pointing to the new MaskedArray interface, 
 losing a LOT of backward compatibility at the same time.
 

 numpy.oldnumeric.ma was a very small compatibility wrapper for 
 numpy.core.ma; now it is the same, but pointing to numpy.ma, which is 
 now Pierre's new maskedarray implementation. Maybe more compatibility 
 interfacing is needed, either there or in numpy.ma itself, but I would 
 not agree with your characterization of the degree of incompatibility.

 Whether it would be possible (and desirable) to replace oldnumeric.ma 
 with the old numpy/core/ma.py, I don't know, but maybe this, or some 
 other way of keeping core/ma.py available, should be considered.  Would 
 this meet your needs?

 Were you happy with release 1.04?

   
 I'm thinking that such changes should definitely not happen from 1.0.4 
 to 1.0.5 but rather in some major upgrade of numpy (1.1 at least, may be 
 even 2.0).
 

 No, this has been planned for quite a while, and I would strongly oppose 
 any such drastic delay.

   
 It is absolutely necessary to have the oldnumeric.ma working as much as 
 possible as MA, what's in now is incompatible with code that have been 
 successfully upgraded to numpy using your recommended method (official 
 numpy doc)

 Can you put back ALL the function from numpy.oldnumeric.ma ? It 
 shouldn't be too much work.

 Now I'm actually worried about using ma at all? What version is in? Is 
 it a completely new package or is it still the old one just a bit 
 broken? If it's a new one, we'd have to be sure it is fully tested 
 

 No, it is not broken, it has many improvements and bug fixes relative to 
 the old ma.py.  That is why it is replacing ma.py.

   
 before we can redistribute it to other people via our package, or before 
 we use it ourselves
 

 Well, the only way to get something fully tested is to put it in use. 
 It has been available for testing for a long time as a separate 
 implementation, then as a numpy branch, and now for a while in the numpy 
 svn trunk.  It works well.  It is time to release it--possibly after a 
 few more tweaks, possibly leaving the old core/ma.py accessible, but 
 definitely for 1.05. No one will force you to adopt 1.05, so if more 
 compatibility tweaks are needed after 1.05 you can identify them and 
 they can be incorporated for the next release.

 Eric

   
 Can somebody bring some light on this issue? thanks a lot,

 C.


 Pierre GM wrote:
 
 Charles,
   
   
 result = result.reorder(order).regrid(grid)
 AttributeError: 'MaskedArray' object has no attribute 'reorder'

 Should I inherit from something else?
 
 
 Mmh, .reorder is not a regular ndarray method, so that won't work. What is 
 it 
 supposed to do ? And regrid ?

   
   
 Also, I used to import some functions from numpy.oldnumeric.ma, that are
 now missing; can you point me to their new ma equivalents?
 common_fill_value, identity, indices and set_fill_value
 
 
 For set_fill_value, just use
 m.fill_value = your_fill_value

 For identity, just use the regular numpy version, and view it as a 
 MaskedArray:
 numpy.identity(...).view(MaskedArray)

 For common_fill_value: ah, tricky one, I'll have to check.

 If needed, I'll bring identity into numpy.ma. Please don't hesitate to send 
 more feedback, it's always needed.
 Sincerely,
 P

Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-27 Thread Charles Doutriaux
Hi Pierre,

No problem, let me know when you have something in. I can't be sure that what 
I mentioned is all that's missing; it's all I've got so far.
But since I can't get our end going, I can't really give you a 
comprehensive list of what exactly is missing. Hopefully this is all, and 
it will work fine after your changes.

Thanks for doing this,

C.

Pierre GM wrote:
 Charles,

   
 all i was saying is that 1.0.4 was working great with the small
 compatibility layer.
 I even have a frozen version of 1.0.5 devel that works great. Then
 suddenly everything broke.
 

 Could you be more specific ? Would you mind sending me bug reports so that I 
 can check what's going on and how to improve backwards compatibility ?

   
 I was really happy was the layer of 1.0.4.
 

 We're talking about the 15-line long numpy.oldnumeric.ma, right ? The ones 
 that redefines take as to take the averages along some indices ? The only 
 difference between versions is that 1.0.5 uses numpy.ma instead of the old 
 numpy.core.ma. 
 [...]
 OK, now I see: there were some functions in numpy.core.ma that are not in 
 numpy.ma (identity, indices). So, the pb is not in the conversion layer, but 
 on numpy.ma itself. OK, that should be relatively easy to fix. Can you gimme 
 a day or two ? 

 Sorry for the delayed comprehension.
 P.
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-27 Thread Charles Doutriaux
Hello,

OK, I'll wait for Pierre's changes and see what they do for us. If it 
still breaks here or there, then I'll do as Travis suggested (while still 
reporting to Pierre what went wrong).

Thank you all,

C.

Travis E. Oliphant wrote:
 Jarrod Millman wrote:
   
 On Thu, Mar 27, 2008 at 1:31 PM, Pierre GM [EMAIL PROTECTED] wrote:
   
 
 On Thursday 27 March 2008 16:11:22 Travis E. Oliphant wrote:
   I guess what could be done is to take the old numpy.core.ma file and
   move it into oldnumeric.ma (along with the few re-namings that are there
   now)?  Could you test that option out and see if it works for you?

  I'm currently re-introducing some functions  of numpy.core.ma in
  numpy.ma.core, fixing a couple of bugs along the way (for example, round: 
 the
  current behavior of numpy.round is inconsistent: it'll return a MaskedArray
  if the nb of decimals is 0, but a regular ndarray otherwise, so I just 
 coded
  a numpy.ma.round and the corresponding method).
  2-3 functions from numpy.core.ma (average, dot, and another one I can't
  remmbr) are already in numpy.ma.extras.
  I should update the SVN by later this afternoon.
 
   
 Excellent, I prefer this approach.

   
 
 If this works then it should be fine.  If it doesn't, however, then it 
 would not be too big a deal to just move the old implementation over.   
 In fact, I rather think it ought to be done anyway. 

 The oldnumeric directory will be disappearing in 1.1, so it doesn't 
 introduce any long-term burden and just makes 1.0.5 a bit more robust.

 -Travis


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] missing function in numpy.ma?

2008-03-26 Thread Charles Doutriaux
Hello,

I used to be able to inherit from numpy.oldnumeric.ma.array; it looks 
like you can't any longer.
I replaced it with:
numpy.ma.MaskedArray
I'm getting:
result = result.reorder(order).regrid(grid)
AttributeError: 'MaskedArray' object has no attribute 'reorder'

Should I inherit from something else?

Also, I used to import some functions from numpy.oldnumeric.ma that are 
now missing; can you point me to their new ma equivalents?
common_fill_value, identity, indices and set_fill_value

Thanks,

C.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing function in numpy.ma?

2008-03-26 Thread Charles Doutriaux
The reorder is a function we implement. By digging a bit into this, my 
guess is that the missing functions in numpy.ma are causing a failure 
at some point in our init and returning the wrong object type.

But the whole idea was to keep a backward compatible layer with Numeric 
and MA. It worked great for a while and now things are getting more and 
more broken.

Correct me if I'm wrong, but it seems as if numpy.oldnumeric.ma is 
now simply numpy.ma and it's pointing to the new MaskedArray interface, 
losing a LOT of backward compatibility at the same time.

I'm thinking that such changes should definitely not happen from 1.0.4 
to 1.0.5 but rather in some major upgrade of numpy (1.1 at least, maybe 
even 2.0).

It is absolutely necessary to have oldnumeric.ma working as much as 
possible like MA; what's in now is incompatible with code that has been 
successfully upgraded to numpy using your recommended method (the official 
numpy doc).

Can you put back ALL the functions from numpy.oldnumeric.ma? It 
shouldn't be too much work.

Now I'm actually worried about using ma at all. What version is in? Is 
it a completely new package, or is it still the old one just a bit 
broken? If it's a new one, we'd have to be sure it is fully tested 
before we can redistribute it to other people via our package, or before 
we use it ourselves.

Can somebody shed some light on this issue? Thanks a lot,

C.


Pierre GM wrote:
 Charles,
   
 result = result.reorder(order).regrid(grid)
 AttributeError: 'MaskedArray' object has no attribute 'reorder'

 Should I inherit from something else?
 

 Mmh, .reorder is not a regular ndarray method, so that won't work. What is it 
 supposed to do ? And regrid ?

   
 Also, I used to import some functions from numpy.oldnumeric.ma, that are
 now missing; can you point me to their new ma equivalents?
 common_fill_value, identity, indices and set_fill_value
 

 For set_fill_value, just use
 m.fill_value = your_fill_value

 For identity, just use the regular numpy version, and view it as a 
 MaskedArray:
 numpy.identity(...).view(MaskedArray)

 For common_fill_value: ah, tricky one, I'll have to check.

 If needed, I'll bring identity into numpy.ma. Please don't hesitate to send 
 more feedback, it's always needed.
 Sincerely,
 P.
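
Spelled out, those two replacements look like this (a sketch with made-up 
data):

import numpy
from numpy.ma import MaskedArray, masked_equal

m = masked_equal(numpy.arange(5), 3)

# Instead of set_fill_value(m, -999):
m.fill_value = -999

# Instead of MA.identity(4): build the plain array and view it as masked.
eye = numpy.identity(4).view(MaskedArray)
print(type(eye))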
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] f2py changed ?

2008-03-25 Thread Charles Doutriaux
Hello,

I have an f2py module that used to work great; now it breaks.
First of all, the setup.py extension used to have:

#   f2py_options = [--fcompiler=gfortran,],

I now need to comment this out, and hope it picks up the right compiler...

At the beginning of the script I have a line from the automatic conversion:
import numpy.oldnumeric as Numeric
when running I get:
variable = 
_gengridzmean.as_column_major_storage(Numeric.transpose(variable.astype 
(Numeric.Float32).filled(0)))
AttributeError: 'module' object has no attribute 'as_column_major_storage'

I tried to go around that by using straight numpy calls everywhere and 
using numpy.asfortranarray instead

But now it fails a bit further on:
res = ZonalMeans.compute(s)
  File 
/lgm/cdat/latest/lib/python2.5/site-packages/ZonalMeans/zmean.py, line 
397, in compute
bandlat) #,imt,jmt,kmt,nt,kmt_grid,iomax,vl)
_gengridzmean.error: failed in converting 5th argument `mask' of 
_gengridzmean.zonebasin to C/Fortran array

I checked that all my arrays are passed through asfortranarray, and even 
ascontiguousarray to be sure!

I believe this used to work after converting to numpy.
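
Roughly, the numpy-only replacement looks like this (a sketch; the masked 
array below is just a stand-in for the real cdms variable):

import numpy
import numpy.ma

# Stand-in for the real cdms transient variable.
variable = numpy.ma.masked_equal(numpy.arange(12.0).reshape(3, 4), 5.0)

# Fill the mask, cast to float32, transpose, and make the result
# Fortran-ordered before handing it to the f2py wrapper.
data = numpy.asfortranarray(
    numpy.transpose(variable.astype(numpy.float32).filled(0)))
print(numpy.isfortran(data))  # True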
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to build a series of arrays as I go?

2008-03-17 Thread Charles Doutriaux
Hi Chris,

1) You could use the concatenate function to grow an array as you go.

2) Assuming you still have your list:

b = numpy.array(data[name])
bmasked = numpy.ma.masked_equal(b, -1)
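
Putting the two suggestions together in a self-contained sketch (the -1 
sentinel mirrors the empty-cell hack in the quoted loop below):

import numpy
import numpy.ma

# Pretend this is one column read from the spreadsheet, with -1 marking
# the empty cells.
values = [1.0, -1, 2.5, -1, 3.0]

# Option 1: grow an array as you go with concatenate (simple, but it
# copies the whole array on every append).
col = numpy.zeros((0,), dtype=float)
for v in values:
    col = numpy.concatenate((col, [v]))

# Option 2 (usually preferable): keep the list, convert once, then mask.
b = numpy.array(values)
bmasked = numpy.ma.masked_equal(b, -1)
print(bmasked)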


Chris Withers wrote:
 Hi All,

 I'm using xlrd to read an excel workbook containing several columns of 
 data as follows:

 for r in range(1,sheet.nrows):
  date = \
 datetime(*xlrd.xldate_as_tuple(sheet.cell_value(r,0),book.datemode))
  if date_cut_off and date  date_cut_off:
  continue
  for c in range(len(names)):
  name = names[c]
  cell = sheet.cell(r,c)
  if cell.ctype==xlrd.XL_CELL_EMPTY:
  value = -1
  elif cell.ctype==xlrd.XL_CELL_DATE:
  value = \
 datetime(*xlrd.xldate_as_tuple(cell.value,book.datemode))
  else:
  value = cell.value
  data[name].append(value)

 Two questions:

 How can I build arrays as I go instead of lists?
 (ie: the last line of the above snippet)

 Once I've built arrays, how can I mask the empty cells?
 (the above shows my hack-so-far of turning empty cells into -1 so I can 
 use masked_where, but it would be greato build a masked array as I went, 
 for efficiencies sake)

 cheers for any help!

 Chris

 PS: Slightly pissed off at actually paying for the book only to be told 
 it'll be 2 days before I can even read the online version, especially 
 given the woefully inadequate state of the currently available free 
 documentation :-(

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy from subversion

2008-03-13 Thread Charles Doutriaux
Hi Stephan,

Does the converter from Numeric fix that? I mean, running it on an old 
Numeric script, will it now import numpy.ma, or does it still replace 
with numpy.oldnumeric.ma?

Thx,

C.

Stéfan van der Walt wrote:
 On Wed, Mar 12, 2008 at 11:39 AM, Charles Doutriaux [EMAIL PROTECTED] wrote:
   
 My mistake i was still in trunk

  but i do get:

 import numpy, numpy.oldnumeric.ma as MA, numpy.oldnumeric as
  Numeric, PropertiedClasses
   File
  /lgm/cdat/latest/lib/python2.5/site-packages/numpy/oldnumeric/ma.py,
  line 4, in module
 from numpy.core.ma import *
  ImportError: No module named ma

  How does one build ma these days?
 

 Travis fixed this in latest SVN.  Maskedarrays should now be imported
 as numpy.ma.

 Regards
 Stéfan
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.array.ma class init

2008-03-13 Thread Charles Doutriaux
Hello,

We used to have this working; the latest numpy breaks it.
  File 
/lgm/cdat/5.0.0.alpha7/lib/python2.5/site-packages/cdms2/tvariable.py, 
line 21, in module
import numpy.oldnumeric.ma as MA
class TransientVariable(AbstractVariable, MA.array):
TypeError: Error when calling the metaclass bases
function() argument 1 must be code, not str
  numpy.oldnumeric.ma
function array at 0xb7a8f48c

Any suggestion on how to fix that?
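
A guess at the fix, based on the earlier report in this thread that the 
base class was changed to numpy.ma.MaskedArray: MA.array is a factory 
function, not a class, which is what the metaclass error is complaining 
about. The AbstractVariable stand-in below is only for illustration; the 
real one lives in cdms2.

import numpy.ma

class AbstractVariable(object):
    """Stand-in for the real cdms2 base class."""

# numpy.oldnumeric.ma.array is a function, so it cannot appear in the
# bases list; inherit from the MaskedArray class instead.
class TransientVariable(AbstractVariable, numpy.ma.MaskedArray):
    pass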

Thx,

C


Charles Doutriaux wrote:
 Hi Stephan,

 Does the converter from Numeric fix that? I mean, running it on an old 
 Numeric script, will it now import numpy.ma, or does it still replace 
 with numpy.oldnumeric.ma?

 Thx,

 C.

 Stéfan van der Walt wrote:
   
 On Wed, Mar 12, 2008 at 11:39 AM, Charles Doutriaux [EMAIL PROTECTED] 
 wrote:
   
 
 My mistake i was still in trunk

  but i do get:

 import numpy, numpy.oldnumeric.ma as MA, numpy.oldnumeric as
  Numeric, PropertiedClasses
   File
  /lgm/cdat/latest/lib/python2.5/site-packages/numpy/oldnumeric/ma.py,
  line 4, in module
 from numpy.core.ma import *
  ImportError: No module named ma

  How does one build ma these days?
 
   
 Travis fixed this in latest SVN.  Maskedarrays should now be imported
 as numpy.ma.

 Regards
 Stéfan
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
 
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bug in f2py on Mac 10.5 ?

2008-03-06 Thread Charles Doutriaux
Hello,

We're trying to install a Fortran extension with f2py. It works great on 
Linux and Mac 10.4 (gfortran and g77), 
but on 10.5 it picks up g77 and then complains about the cc_dynamic library.

Apparently this lib is not part of 10.5 (Xcode); is that a known 
problem? Should we try with what's in trunk?

Thanks,

C.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion