[Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Giovanni Plantageneto
Dear all,
I have a simple question. I would like to have all the parameters of a
model written in a configuration file (text), and I would like to have
all the parameters in the file automatically defined inside a program.
I find ConfigParser a bit low level, is there any function that
automatically reads everything from a file?
Thanks.


Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Robert Kern
On Wed, Nov 30, 2011 at 11:09, Giovanni Plantageneto
g.plantagen...@gmail.com wrote:
 Dear all,
 I have a simple question. I would like to have all the parameters of a
 model written in a configuration file (text), and I would like to have
 all the parameters in the file automatically defined inside a program.
 I find ConfigParser a bit low level, is there any function that
 automatically reads everything from a file?

You may want to give something like configobj a try.

  http://pypi.python.org/pypi/configobj

It builds on ConfigParser to read all of the parameters in and creates
a hierarchical object with all of the parameters as attributes.
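
For example, a minimal sketch (the file name and key names here are
made up):

# params.ini:
#   [model]
#   nsteps = 1000
#   dt = 0.01
from configobj import ConfigObj

cfg = ConfigObj('params.ini')
nsteps = int(cfg['model']['nsteps'])   # values come back as strings
dt = float(cfg['model']['dt'])

With a validation spec, configobj can also do the type conversion for you.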

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Alan G Isaac
On 11/30/2011 6:09 AM, Giovanni Plantageneto wrote:
 I find ConfigParser a bit low level, is there any function that
 automatically reads everything from a file?


You could just use a dictionary for your params,
and import it from your configuration file.
If you insist on an ini format, ConfigParser/configparser
looks pretty good.
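
A bare-bones version of what Alan describes (file and variable names
are just illustrative):

# params.py -- the configuration file is itself Python
params = {'nsteps': 1000, 'dt': 0.01}

# in the main program
from params import params
print params['nsteps']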

fwiw,
Alan Isaac


Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Paul Anton Letnes
On 30. nov. 2011, at 12:09, Giovanni Plantageneto wrote:

 Dear all,
 I have a simple question. I would like to have all the parameters of a
 model written in a configuration file (text), and I would like to have
 all the parameters in the file automatically defined inside a program.
 I find ConfigParser a bit low level, is there any function that
 automatically reads everything from a file?
 Thanks.

I like having my input files simply be python files, of the form
bar = 'foo'
ham = ['spam', 'eggs']

Then I import them as
import imp
parameters = imp.load_source('parameters', 'myinputfile.py')

Now the object 'parameters' is a python module so I can say
print parameters.bar
and 'foo' will be printed. 

Paul



Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Benjamin Root
On Wednesday, November 30, 2011, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Nov 30, 2011 at 11:09, Giovanni Plantageneto
 g.plantagen...@gmail.com wrote:
 Dear all,
 I have a simple question. I would like to have all the parameters of a
 model written in a configuration file (text), and I would like to have
 all the parameters in the file automatically defined inside a program.
 I find ConfigParser a bit low level, is there any function that
 automatically reads everything from a file?

 You may want to give something like configobj a try.

  http://pypi.python.org/pypi/configobj

 It builds on ConfigParser to read all of the parameters in and creates
 a hierarchical object with all of the parameters as attributes.



+1 on configobj.  I use this module extensively for my simulation
configuration.  It can even do some validation of parameters and allows for
saving of comments.  Furthermore, it utilizes the dictionary idiom, which
makes it very easy to work with, especially for passing kwargs to functions.

Cheers!
Ben Root


[Numpy-discussion] what statistical module to use for python?

2011-11-30 Thread Chao YUE
Hi all,

I just want to ask broadly what statistical packages you guys are using. I
mean routine statistical functions like linear regression, GLM, ANOVA... etc.

I know there are SciKits packages like statsmodels, but are there more
general and complete ones?

thanks to all,

Chao
-- 
***
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16



Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Neal Becker
My suggestion is: don't.

It's easier to script runs if you read parameters from the command line.
I recommend argparse.
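
A minimal sketch of the command-line approach (the parameter names are
made up):

import argparse

parser = argparse.ArgumentParser(description='run the model')
parser.add_argument('--nsteps', type=int, default=1000)
parser.add_argument('--dt', type=float, default=0.01)
args = parser.parse_args()
print args.nsteps, args.dt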

Giovanni Plantageneto wrote:

 Dear all,
 I have a simple question. I would like to have all the parameters of a
 model written in a configuration file (text), and I would like to have
 all the parameters in the file automatically defined inside a program.
 I find ConfigParser a bit low level, is there any function that
 automatically reads everything from a file?
 Thanks.




Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Tony Yu
On Wed, Nov 30, 2011 at 1:49 PM, Neal Becker ndbeck...@gmail.com wrote:

 My suggestion is: don't.

 It's easier to script runs if you read parameters from the command line.
 I recommend argparse.


I think setting parameters in a config file and setting them on the
command line both have their merits. I like to combine ConfigObj with
argparse; something like:

#~~~
parser = argparse.ArgumentParser()
# ... add arguments to parser here ...

cfg = configobj.ConfigObj('params_file.cfg')
parser.set_defaults(**cfg)   # ConfigObj instances are dict-like
#~~~

Then call parser.parse_args(); parameters given on the command line will
override the defaults picked up from the config file.

Cheers,
-Tony


[Numpy-discussion] who owns the data?

2011-11-30 Thread josef . pktd
just a basic question (since I haven't looked at this in some time)

I'm creating a structured array in a function. However, I want to
return the array with just a simple dtype

uni = uni.view(dt).reshape(-1, ncols)
return uni

the returned uni has owndata=False. Who owns the data, since the
underlying, original array went out of scope?

alternatives

1)
uni = np.asarray(uni, dt).reshape(-1, ncols)
return uni

looks obvious but raises an exception

2)
uni.dtype = dt
uni.reshape(-1, ncols)
return uni

this works and uni owns the data. I'm only worried whether assigning
to dtype directly is not a dangerous thing to do.

>>> u
array([0, 0, 0, 1, 1, 0, 1, 1])
>>> u.dtype = np.dtype(float)
>>> u
array([  0.00000000e+000,   2.12199579e-314,   4.94065646e-324,
         2.12199579e-314])

adding a safety check:

for t in uni.dtype.fields.values():
    assert (t[0] == dt)


maybe I shouldn't care if nobody owns the data.

Thanks,

Josef


Re: [Numpy-discussion] who owns the data?

2011-11-30 Thread Robert Kern
On Wed, Nov 30, 2011 at 20:30,  josef.p...@gmail.com wrote:
 just a basic question (since I haven't looked at this in some time)

 I'm creating a structured array in a function. However, I want to
 return the array with just a simple dtype

 uni = uni.view(dt).reshape(-1, ncols)
 return uni

 the returned uni has owndata=False. Who owns the data, since the
 underlying, original array went out of scope?

Every time you make a view through .view(), slicing, .T, certain
restricted .reshape() calls, etc., a reference to the original object
is stored on the view. Consequently, the original object does not get
garbage collected until all of the views go away too. Making a view of
a view just adds another link in the chain. In your example, the
original object that was assigned to `uni` before that last assignment
statement was executed maintains ownership of the memory. The new
ndarray object that gets assigned to `uni` for the return statement
refers to the temporary ndarray returned by .view(), which in turn
refers to the original `uni` array, which owns the actual memory.
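
A small illustration of that chain of references:

import numpy as np

a = np.arange(8)         # owns its memory
v = a.view(np.int8)      # a view onto a's buffer
print a.flags.owndata    # True
print v.flags.owndata    # False
print v.base is a        # True: v keeps a referenced, so the buffer
                         # stays alive even if the name `a` goes away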

 2)
 uni.dtype = dt
 uni.reshape(-1, ncols)
 return uni

 this works and uni owns the data.

uni.reshape() doesn't reshape `uni` in place, though. It is possible
that your `uni` array wasn't contiguous to begin with. In all of the
cases where your first example would have owndata=False, this one
should too.

 I'm only worried whether assigning
 to dtype directly is not a dangerous thing to do.

It's no worse than .view(dt). The same kind of checking goes on in both places.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] who owns the data?

2011-11-30 Thread josef . pktd
On Wed, Nov 30, 2011 at 4:00 PM, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Nov 30, 2011 at 20:30,  josef.p...@gmail.com wrote:
 just a basic question (since I haven't looked at this in some time)

 I'm creating a structured array in a function. However, I want to
 return the array with just a simple dtype

 uni = uni.view(dt).reshape(-1, ncols)
 return uni

 the returned uni has owndata=False. Who owns the data, since the
 underlying, original array went out of scope?

 Every time you make a view through .view(), slicing, .T, certain
 restricted .reshape() calls , etc. a reference to the original object
 is stored on the view. Consequently, the original object does not get
 garbage collected until all of the views go away too. Making view of a
 view just adds another link in the chain. In your example, the
 original object that was assigned to `uni` before that last assignment
 statement was executed maintains ownership of the memory. The new
 ndarray object that gets assigned to `uni` for the return statement
 refers to the temporary ndarray returned by .view() which in turn
 refers to the original `uni` array which owns the actual memory.

Thanks for the explanation.

There were cases on the mailing list where views created problems, so
I just thought of trying to own the data, but I don't think it's
really relevant.



 2)
 uni.dtype = dt
 uni.reshape(-1, ncols)
 return uni

 this works and uni owns the data.

 uni.reshape() doesn't reshape `uni` inplace, though. It is possible
 that your `uni` array wasn't contiguous to begin with. In all of the
 cases that your first example would have owndata=False, this one
 should too.

This bug has happened to me a few times now. I found it, but had only
checked the flags before fixing it.

Since reshape again creates a view, the next step is to assign to shape

uni.shape = (uni.size//ncols, ncols)

but that starts to look like too many in-place modifications just to avoid a view

Thanks,

Josef


 I'm only worried whether assigning
 to dtype directly is not a dangerous thing to do.

 It's no worse than .view(dt). The same kind of checking goes on in both places.



[Numpy-discussion] ignore NAN in numpy.true_divide()

2011-11-30 Thread questions anon
I am trying to calculate the mean across many netcdf files. I cannot use
numpy.mean because there are too many files to concatenate and I end up
with a memory error. I have enabled the below code to do what I need but I
have a few nan values in some of my arrays. Is there a way to ignore these
somewhere in my code? I seem to face this problem often, so I would love a
command that ignores blanks in my array before I continue on to the next
processing step.
Any feedback is greatly appreciated.


netCDF_list = []
for dir in glob.glob(MainFolder + '*/01/') + glob.glob(MainFolder +
        '*/02/') + glob.glob(MainFolder + '*/12/'):
    for ncfile in glob.glob(dir + '*.nc'):
        netCDF_list.append(ncfile)

slice_counter = 0
print netCDF_list

for filename in netCDF_list:
    ncfile = netCDF4.Dataset(filename)
    TSFC = ncfile.variables['T_SFC'][:]
    fillvalue = ncfile.variables['T_SFC']._FillValue
    TSFC = MA.masked_values(TSFC, fillvalue)
    for i in xrange(0, len(TSFC) - 1, 1):
        slice_counter += 1
        #print slice_counter
        try:
            running_sum = N.add(running_sum, TSFC[i])
        except NameError:
            print "Initiating the running total of my variable..."
            running_sum = N.array(TSFC[i])

TSFC_avg = N.true_divide(running_sum, slice_counter)
N.set_printoptions(threshold='nan')
print "the TSFC_avg is:", TSFC_avg
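
One way to ignore the NaNs, as a minimal sketch (assuming the slices are
plain float arrays; the names mirror the code above):

valid = ~N.isnan(TSFC[i])
contrib = N.where(valid, TSFC[i], 0.)   # NaNs contribute zero to the sum
try:
    running_sum = N.add(running_sum, contrib)
    valid_counter = valid_counter + valid
except NameError:
    running_sum = N.array(contrib)
    valid_counter = valid.astype(int)

# after the loop, divide by the per-cell count of valid values;
# cells that were always NaN come out as 0/0 -> nan
TSFC_avg = N.true_divide(running_sum, valid_counter)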


Re: [Numpy-discussion] what statistical module to use for python?

2011-11-30 Thread josef . pktd
On Wed, Nov 30, 2011 at 1:16 PM, Chao YUE chaoyue...@gmail.com wrote:
 Hi all,

 I just want to broadly ask what statistical package are you guys using? I
 mean routine statistical function like linear regression, GLM, ANOVA... etc.

 I know there is SciKits packages like statsmodels, but are there more
 general and complete ones?

 thanks to all,

I forwarded it to the scipy-user mailing list since that is more suitable.

Josef




[Numpy-discussion] Apparently non-deterministic behaviour of complex array multiplication

2011-11-30 Thread Karl Kappler
Hello,
I am somewhat new to scipy/numpy so please point me in the right direction
if I am posting to an incorrect forum.

The experience which has prompted my post is the following:
I have a numpy array Y where the elements of Y are
type(Y[0,0])
Out[709]: <type 'numpy.complex128'>

The absolute values of the real and imaginary parts do not far exceed, say,
1e-10.  The shape of Y is (24, 49218).
When I perform the operation: C = dot(Y,Y.conj().transpose), i.e. I form
the covariance matrix by multiplying Y by its conjugate transpose, I
sometimes get NaN in the array C.

I can imagine some reasons why this may happen, but what is truly puzzling
to me is that I will be working in ipython and will execute for example:
find(isnan(C)) and will be returned a list of elements of C which are NaN,
fine, but then I recalculate C, and repeat the find(isnan(C)) command and I
get a different answer.

I type:
find(isnan(dot(Y,Y.conj().transpose)))
and an empty array is returned.  Repeated calls of the same command however
result in a non-empty array.  In fact, the sequence of arrays returned from
several consecutive calls varies. Sometimes there are tens of NaNs,
sometimes none.

I have been performing a collection of experiments for some hours and
cannot get to the bottom of this;
Some things I have tried:
1. Cast the array Y as a matrix X and calculate X*X.H --- in this case I
get the same result, in that sometimes I have NaN and sometimes I do not.
2. set A=X.H and calculate X*A --- same results*
3. set b=A.copy() and calc X*b --- same results*.
4. find(isnan(real(X*X.H))) --- same results*
5. find(isnan(real(X)*real(X.H))) - no NaN appear

*N.B. "Same results" does not mean that the same indices were going NaN,
simply that I was getting back a different result if I ran the command say
a dozen times.

So it would seem that it has something to do with the complex
multiplication.   I am wondering if there is too much dynamic range being
used in the calculation?  It absolutely amazes me that I can perform the
same complex-arithmetic operation sitting at the command line and obtain
different results each time.  In one case I ran a for loop where I
performed the multiplication 1000 times and found that 694 trials had no
NaN and 306 trials had NaN.

Saving X to file and then reloading it in a new ipython interpreter
typically resulted in no NaN.

For a fixed interpreter and instance of X or Y, the indices which go NaN
(when they do) sometimes repeat many times and sometimes they vary
apparently at random.

Also note that I have had a similar problem with much smaller arrays, say
24 x 3076

I have also tried 'upping' the numpy array to complex256, I have like 12GB
of RAM...

This happens both in ipython and when I call my function from the command
line.

Does this sound familiar to anyone?  Is my machine possessed?


Re: [Numpy-discussion] Apparently non-deterministic behaviour of complex array multiplication

2011-11-30 Thread Olivier Delalleau
I guess it's just a typo on your part, but just to make sure, you are using
.transpose(), not .transpose, correct?
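
A quick way to see the difference (just an illustration):

import numpy as np
Y = np.ones((2, 3), dtype=np.complex128)
print Y.conj().transpose          # a bound method object -- nothing happens
print Y.conj().transpose().shape  # (3, 2): the actual transposed array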

-=- Olivier

2011/11/30 Karl Kappler magnetotellur...@gmail.com

 [original message quoted in full; snipped]


[Numpy-discussion] loop through arrays and find numpy maximum

2011-11-30 Thread questions anon
I would like to calculate the max and min of many netcdf files.
I know how to create one big array and then concatenate and find the
numpy.max but when I run this on 1000's of arrays I have a memory error.
What I would prefer is to loop through the arrays and produce the maximum
without having to make a big array.
My idea goes something like:

netCDF_list = []
maxarray = []

for dir in glob.glob(MainFolder + '*/01/') + glob.glob(MainFolder +
        '*/02/') + glob.glob(MainFolder + '*/12/'):
    for ncfile in glob.glob(dir + '*.nc'):
        netCDF_list.append(ncfile)
for filename in netCDF_list:
    ncfile = netCDF4.Dataset(filename)
    TSFC = ncfile.variables['T_SFC'][:]
    fillvalue = ncfile.variables['T_SFC']._FillValue
    TSFC = MA.masked_values(TSFC, fillvalue)
    for i in TSFC:
        if i == N.max(TSFC, axis=0):
            maxarray.append(i)
        else:
            pass

print maxarray
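
For what it's worth, a running maximum can be kept without concatenating
anything; a minimal sketch reusing the names above:

running_max = None
for filename in netCDF_list:
    ncfile = netCDF4.Dataset(filename)
    TSFC = MA.masked_values(ncfile.variables['T_SFC'][:],
                            ncfile.variables['T_SFC']._FillValue)
    file_max = TSFC.max(axis=0)    # collapse this file's first axis
    if running_max is None:
        running_max = file_max
    else:
        running_max = MA.maximum(running_max, file_max)

print running_max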


[Numpy-discussion] np.dot and array order

2011-11-30 Thread josef . pktd
>>> np.__version__
'1.5.1'   (official win32 installer)

(playing with ipython for once)

I thought np.dot is BLAS based and favors fortran order, but if the
second array is fortran ordered, then dot takes twice as long. The
order of the first array seems irrelevant
(or maybe just with my shapes, in case it matters: the first array is
float64, the second is bool, and I'm low on leftover memory)

In [93]: %timeit np.dot(x.T, indi)
1 loops, best of 3: 1.33 s per loop

In [94]: %timeit np.dot(xf.T, indi)
1 loops, best of 3: 1.27 s per loop

In [95]: %timeit np.dot(xf.T, indif)
1 loops, best of 3: 3 s per loop

In [100]: %timeit np.dot(x.T, indif)
1 loops, best of 3: 3.05 s per loop


In [96]: x.flags.c_contiguous
Out[96]: True

In [97]: xf.flags.c_contiguous
Out[97]: False

In [98]: indi.flags.c_contiguous
Out[98]: True

In [99]: indif.flags.c_contiguous
Out[99]: False

In [101]: x.shape
Out[101]: (20, 20)

In [102]: indi.shape
Out[102]: (20, 500)


It's just the way it is, or does it depend on ?
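
For reference, a self-contained version of the experiment, with made-up
shapes kept small enough to run quickly:

import numpy as np
import timeit

x = np.ones((20000, 20))                   # C-ordered float64
xf = np.asfortranarray(x)
indi = np.ones((20000, 500), dtype=bool)   # C-ordered bool
indif = np.asfortranarray(indi)

for label, a, b in [('C,C', x, indi), ('F,C', xf, indi),
                    ('C,F', x, indif), ('F,F', xf, indif)]:
    t = timeit.timeit(lambda: np.dot(a.T, b), number=10)
    print label, t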

Josef


Re: [Numpy-discussion] what statistical module to use for python?

2011-11-30 Thread Chao YUE
thanks, I should have done that but I forgot

chao

2011/12/1 josef.p...@gmail.com

 On Wed, Nov 30, 2011 at 1:16 PM, Chao YUE chaoyue...@gmail.com wrote:
  Hi all,
 
  I just want to broadly ask what statistical package are you guys using? I
  mean routine statistical function like linear regression, GLM, ANOVA...
 etc.
 
  I know there is SciKits packages like statsmodels, but are there more
  general and complete ones?
 
  thanks to all,

 I forwarded it to the scipy-user mailing list since that is more suitable.

 Josef

 




-- 
***
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16
