Re: [Numpy-discussion] inversion of large matrices

2010-09-01 Thread Sebastian Walter
Is it really the covariance matrix you want to invert? Or do you want
to compute something like
x^T C^{-1} x,
where x is an array of size N and C an array of size (N,N)?

It would also be interesting to know how the covariance matrix gets computed
and what its condition number is, at least approximately.
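For what it's worth, a quadratic form like x^T C^{-1} x can be evaluated without ever forming C^{-1}. A minimal sketch, using a randomly generated symmetric positive definite matrix as a hypothetical stand-in for the covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)      # hypothetical SPD "covariance"
x = rng.standard_normal(n)

# x^T C^{-1} x via a linear solve, no explicit inverse:
q_solve = x @ np.linalg.solve(C, x)

# Equivalent route via Cholesky: C = L L^T, so x^T C^{-1} x = ||L^{-1} x||^2.
# (A dedicated triangular solver would be cheaper; np.linalg.solve is used
# here to keep the sketch numpy-only.)
L = np.linalg.cholesky(C)
z = np.linalg.solve(L, x)
q_chol = z @ z

assert np.allclose(q_solve, q_chol)
```

Both routes avoid forming and storing the explicit inverse, and a solve is typically better conditioned than multiplying by a computed inverse.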



On Wed, Sep 1, 2010 at 1:58 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Tue, Aug 31, 2010 at 4:52 PM, Dan Elliott danelliotts...@gmail.com
 wrote:

 David Warde-Farley dwf at cs.toronto.edu writes:
  On 2010-08-30, at 10:36 PM, Charles R Harris wrote:
  I think he means that if he needs both the determinant and to solve the
  system, it might be more efficient to do
  the SVD, obtain the determinant from the diagonal values, and obtain the
  solution by multiplying by U D^-1 V^T?

 Thank you, that is what I meant.  Poorly worded on my part.

 In particular, I am writing code to invert a very large covariance matrix. I
 think David has some good information in another post in this thread.
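The SVD approach mentioned above (one decomposition yielding both the determinant and the solution) can be sketched as follows; the matrix here is a hypothetical random SPD stand-in, not data from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)      # hypothetical SPD "covariance"
b = rng.standard_normal(n)

# A single SVD gives both quantities:
U, s, Vt = np.linalg.svd(C)

# log-determinant from the singular values (det > 0 for an SPD matrix)
logdet = np.sum(np.log(s))

# solution of C x = b:  x = V diag(1/s) U^T b
x = Vt.T @ ((U.T @ b) / s)

# cross-check against the direct routines
assert np.allclose(x, np.linalg.solve(C, b))
assert np.allclose(logdet, np.linalg.slogdet(C)[1])
```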


 Where did the covariance array come from? It may be the case that you can
 use a much smaller one, for instance in PCA of images.

 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




Re: [Numpy-discussion] Where is the dev version of numpydoc?

2010-09-01 Thread Ralf Gommers
On Wed, Sep 1, 2010 at 11:31 AM, John Salvatier
jsalv...@u.washington.eduwrote:

 Hello,

 I would like to update my numpydoc so it works with sphinx 1.0, but I am
 not sure where the dev version is; can someone point me in the right
 direction?

In numpy trunk, under doc/sphinxext/. That works *only* with sphinx 1.0
now, which is perhaps the reason there's no new release on pypi (but that's
just my guess).

Ralf


Re: [Numpy-discussion] Github migration?

2010-09-01 Thread Charles R Harris
On Tue, Aug 31, 2010 at 2:56 PM, Jason McCampbell jmccampb...@enthought.com
 wrote:

 Hi Chuck (and anyone else interested),

 I updated the refactoring page on the NumPy developer wiki (seems to be
 down or I'd paste in the link).  It certainly isn't complete, but there are
 a lot more details about the data structures and memory handling and an
 outline of some additional topics that need to be filled in.


Thanks, Jason. How much of the core library can be used without any
reference counting? I was originally thinking that the base ufuncs would
just be functions accepting a pointer and a descriptor and handling memory
allocations and such would be at a higher level. That is to say, the object
oriented aspects of numpy would be removed from the bottom layers where they
just get in the way.

Also, since many of the public macros expect the old type structures, what
is going to happen with them? They are really part of the API, but a
particularly troublesome part for going forward.

snip

Chuck


[Numpy-discussion] Parsing PyArrays

2010-09-01 Thread Babak Ahmadi
Hi,
I'm having a little problem that might be trivial.
I'm trying to use a numpy array within a C++ method and having some trouble.
The PyArray is not parsed correctly and PyArg_ParseTuple returns 0.
Appreciate any help.

#include <Python.h>
#include <numpy/arrayobject.h>

void fromMatrix(PyObject *args){
    _import_array();
    PyArrayObject* A;
    PyArrayObject* B;
    if (!PyArg_ParseTuple(args, "O!O!", &PyArray_Type, &A, &PyArray_Type, &B))
        return;

    int m = PyArray_DIM(A, 0);
    int n = PyArray_DIM(A, 1);

... and so on.


[Numpy-discussion] Developer Job Openings at Space Telescope Science Institute

2010-09-01 Thread Perry Greenfield
We are advertising for two different positions at the Space Telescope  
Science Institute (located on the Johns Hopkins University Campus in  
Baltimore, Md). STScI is seeking Senior Systems Software Engineers to  
develop applications to calibrate and analyze data from the Hubble and  
the James Webb Space Telescopes.



First position:

Description 

The developer will work with other members of the Science Software  
Branch and instrument scientists to develop applications to calibrate  
and analyze data from the Hubble Space Telescope and its successor,  
the James Webb Space Telescope, as well as other astronomy-related  
projects. The individual will be developing open source applications  
primarily in Python and C, using open source tools developed at STScI,  
such as PyRAF and PyFITS, and elsewhere. Some projects may involve  
developing software applications or libraries as part of a team, or  
leading a team.

Requirements

Candidates should have experience writing applications to calibrate,  
reduce, and analyze scientific data, preferably astronomical data. A  
background in astronomy and experience with astronomical data  
reduction is highly desirable. Candidates should have experience  
writing large programs in a compiled language as well as experience  
with an interpreted language such as IDL, Matlab, or Python.

Experience using array manipulation facilities such as are available
in IDL, Matlab, numpy/numarray, or APL is a plus. Experience using  
software engineering tools such as debuggers, CVS or subversion, and  
bug trackers is strongly desired. Strong analytical, problem-solving,  
planning, and organizational skills are needed, and excellent written  
and verbal communication skills are essential. Prior experience in  
developing medium or large projects sufficient to demonstrate the  
specified knowledge, skills and abilities is required.

Qualified candidates should possess a Bachelor's degree in a science-related
field such as Physics, Astronomy, or Mathematics. A Master's
or Ph.D. degree is desirable. Substitution of additional relevant
education or experience for the stated qualifications may be considered.

Apply through the following link:
https://www.ultirecruit.com/SPA1004/jobboard/JobDetails.aspx?__ID=*6D48E0EFCC47915A



Second position:

Description 

The developer will work with other members of the Science Software  
Branch to help in enhancing and maintaining our Python-based framework  
for developing astronomical data analysis and calibration  
applications. STScI has pioneered in the generation of tools for using  
Python for scientific analysis and programming through its development  
of PyRAF, numarray, PyFITS, and contributions to other Python Open  
Source projects. The individual being sought will help STScI maintain  
its leadership in this area by developing leading-edge capabilities by  
enhancing existing tools such as PyRAF and PyFITS, contributing to  
scipy, numpy, and matplotlib, and developing new libraries to meet the  
needs of future astronomical processing. Some projects may involve  
developing software tools as part of a team, or leading a team. Work  
will also require working with an external community on Open Source  
software projects.

Requirements

Candidates should be experienced with systems-level programming,  
preferably with C or C++ and familiar with variances in processor and  
operating system architectures (preferably Linux, OS X, and MS  
Windows) with regard to file systems, memory, data types and  
efficiency, as well as modern software development techniques  
including Object-Oriented design and programming. Experience with  
Python and writing C extensions for Python is highly desirable. A  
working knowledge of any of the following would be a plus: parsers,  
code generation, numerical techniques, image processing and data  
analysis, web and network protocols, or parallel processing.

Experience using software engineering tools such as debuggers, version  
control systems (e.g., subversion), and bug trackers is strongly  
desired. Strong analytical, problem-solving, planning, and  
organizational skills are needed, and excellent written and verbal  
communication skills are essential. Prior experience in developing  
medium or large projects sufficient to demonstrate the specified  
knowledge, skills and abilities is required.

Qualified candidates should possess a Bachelor's Degree in Computer  
Science, Physics, Math, or technically related field. Master's degree  
preferred. Substitution of additional relevant education or experience  
for the stated qualifications may be considered.

Apply through the following link:
https://www.ultirecruit.com/SPA1004/jobboard/JobDetails.aspx?__ID=*85D01A9E3BE42CFD


[Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Michael Gilbert
Hi,

I've been using numpy's float96 class lately, and I've run into some
strange precision errors.  See example below:

>>> import numpy
>>> numpy.version.version
'1.5.0'
>>> sys.version
'3.1.2 (release31-maint, Jul  8 2010, 01:16:48) \n[GCC 4.4.4]'
>>> x = numpy.array( [0.01] , numpy.float32 )
>>> y = numpy.array( [0.0001] , numpy.float32 )
>>> x[0]*x[0] - y[0]
0.0
>>> x = numpy.array( [0.01] , numpy.float64 )
>>> y = numpy.array( [0.0001] , numpy.float64 )
>>> x[0]*x[0] - y[0]
0.0
>>> x = numpy.array( [0.01] , numpy.float96 )
>>> y = numpy.array( [0.0001] , numpy.float96 )
>>> x[0]*x[0] - y[0]
-6.286572655403010329e-22

I would expect the float96 calculation to also produce 0.0 exactly as
found in the float32 and float64 examples.  Why isn't this the case?

Slightly off-topic: why was the float128 class dropped?

Thanks in advance for any thoughts/feedback,
Mike


Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Charles R Harris
On Wed, Sep 1, 2010 at 2:26 PM, Michael Gilbert michael.s.gilb...@gmail.com
 wrote:

 Hi,

 I've been using numpy's float96 class lately, and I've run into some
 strange precision errors.  See example below:

 >>> import numpy
 >>> numpy.version.version
 '1.5.0'
 >>> sys.version
 '3.1.2 (release31-maint, Jul  8 2010, 01:16:48) \n[GCC 4.4.4]'
 >>> x = numpy.array( [0.01] , numpy.float32 )
 >>> y = numpy.array( [0.0001] , numpy.float32 )
 >>> x[0]*x[0] - y[0]
 0.0
 >>> x = numpy.array( [0.01] , numpy.float64 )
 >>> y = numpy.array( [0.0001] , numpy.float64 )
 >>> x[0]*x[0] - y[0]
 0.0
 >>> x = numpy.array( [0.01] , numpy.float96 )
 >>> y = numpy.array( [0.0001] , numpy.float96 )
 >>> x[0]*x[0] - y[0]
 -6.286572655403010329e-22

 I would expect the float96 calculation to also produce 0.0 exactly as
 found in the float32 and float64 examples.  Why isn't this the case?


None of the numbers is exactly representable in IEEE floating point format,
so what you are seeing is rounding error. Note that the first two zeros are
only accurate to about 7 and 16 digits respectively, whereas float96 is
accurate to about 19 digits.

Slightly off-topic: why was the float128 class dropped?


It wasn't, but you won't see it on a 32-bit system because of how the gcc
compiler treats long doubles for alignment reasons. On common Intel
hardware/OS combinations, float96 and float128 are the same precision, just
stored differently. In general the long precision formats are not portable,
so watch out.
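A quick way to inspect what the extended type actually is on a given platform (a sketch; `np.longdouble` is the generic spelling that maps to float96 or float128 depending on platform and compiler):

```python
import numpy as np

info = np.finfo(np.longdouble)
# On common x86 Linux builds this reports 63 mantissa bits
# (about 19 decimal digits): the same 80-bit extended format,
# padded to 96 or 128 bits for alignment.
print(np.dtype(np.longdouble), info.nmant, info.eps)
```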

Chuck


Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Pauli Virtanen
Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
 I've been using numpy's float96 class lately, and I've run into some
 strange precision errors.
[clip]
x = numpy.array( [0.01] , numpy.float96 )
[clip]
 I would expect the float96 calculation to also produce 0.0 exactly as
 found in the float32 and float64 examples.  Why isn't this the case?

(i) It is not possible to write long double literals in Python.
float96(0.0001) means in fact float96(float64(0.0001))

(ii) It is not possible to represent numbers 10^-r, r >= 1, exactly
     in base-2 floating point.

So if you write float96(0.0001), the result is not the float96 number 
closest to 0.0001, but the 96-bit representation of the 64-bit number 
closest to 0.0001. Indeed,

>>> float96(0.0001), float96(1.0)/1000
(0.0001479, 0.0009996)
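The effect is easy to reproduce: building the same value from exact integers in long-double arithmetic gives a different (more accurate) result than converting a Python float literal. A small sketch, using `np.longdouble` as the portable name for float96/float128:

```python
import numpy as np

ld = np.longdouble

a = ld(0.0001)         # the Python literal is rounded to 53 bits first
b = ld(1) / ld(10000)  # computed and rounded at full long-double precision

# On x86 (80-bit extended) a and b differ in the low-order bits;
# on platforms where longdouble is just float64 they compare equal.
print(a == b, abs(a - b))
```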

-- 
Pauli Virtanen



Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Michael Gilbert
On Wed, 1 Sep 2010 21:15:22 + (UTC), Pauli Virtanen wrote:
 Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
  I've been using numpy's float96 class lately, and I've run into some
  strange precision errors.
 [clip]
 x = numpy.array( [0.01] , numpy.float96 )
 [clip]
  I would expect the float96 calculation to also produce 0.0 exactly as
  found in the float32 and float64 examples.  Why isn't this the case?
 
 (i) It is not possible to write long double literals in Python.
 float96(0.0001) means in fact float96(float64(0.0001))
 
 (ii) It is not possible to represent numbers 10^-r, r >= 1, exactly
  in base-2 floating point.
 
 So if you write float96(0.0001), the result is not the float96 number 
 closest to 0.0001, but the 96-bit representation of the 64-bit number 
 closest to 0.0001. Indeed,
 
 >>> float96(0.0001), float96(1.0)/1000
 (0.0001479, 0.0009996)

Interesting.  float96( '0.0001' ) also seems to evaluate to the first
result. I assume that it also does a float64( '0.0001' ) conversion
first. I understand that you can't change how python passes in floats,
but wouldn't it be better to exactly handle strings since those can be
converted exactly, which is what the user wants/expects?

Thanks so much for the quick responses.

Best wishes,
Mike


[Numpy-discussion] Array slices and number of dimensions

2010-09-01 Thread Thomas Robitaille
Hi,

I'm trying to extract sub-sections of a multidimensional array while keeping 
the number of dimensions the same. If I just select a specific element along a 
given direction, then the number of dimensions goes down by one:

>>> import numpy as np
>>> a = np.zeros((10,10,10))
>>> a.shape
(10, 10, 10)
>>> a[0,:,:].shape
(10, 10)

This makes sense to me. If I want to retain the initial number of dimensions, I 
can do

>>> a[[0],:,:].shape
(1, 10, 10)

However, if I try and do this along two directions, I do get a reduction in the 
number of dimensions:

>>> a[[0],:,[5]].shape
(1, 10)

I'm wondering if this is normal, or is a bug? In fact, I can get what I want by 
doing:

>>> a[[0],:,:][:,:,[5]].shape
(1, 10, 1)

so I can get around the issue, but just wanted to check whether the issue with 
a[[0],:,[5]] is a bug?

Thanks,

Tom




Re: [Numpy-discussion] Array slices and number of dimensions

2010-09-01 Thread Warren Weckesser
Thomas Robitaille wrote:
 Hi,

 I'm trying to extract sub-sections of a multidimensional array while keeping 
 the number of dimensions the same. If I just select a specific element along 
 a given direction, then the number of dimensions goes down by one:

   
 snip
  In fact, I can get what I want by doing:

   
 a[[0],:,:][:,:,[5]].shape
 
 (1, 10, 1)

 so I can get around the issue

You can also use trivial slices:

In [2]: a = np.zeros((10,10,10))

In [3]: a.shape
Out[3]: (10, 10, 10)

In [4]: a[0:1, :, 5:6].shape
Out[4]: (1, 10, 1)



Warren


 , but just wanted to check whether the issue with a[[0],:,[5]] is a bug?

 Thanks,

 Tom




Re: [Numpy-discussion] Array slices and number of dimensions

2010-09-01 Thread Anne Archibald
On 1 September 2010 17:54, Thomas Robitaille
thomas.robitai...@gmail.com wrote:
 Hi,

 I'm trying to extract sub-sections of a multidimensional array while keeping 
 the number of dimensions the same. If I just select a specific element along 
 a given direction, then the number of dimensions goes down by one:

 import numpy as np
 a = np.zeros((10,10,10))
 a.shape
 (10, 10, 10)
 a[0,:,:].shape
 (10, 10)

 This makes sense to me. If I want to retain the initial number of dimensions, 
 I can do

 a[[0],:,:].shape
 (1, 10, 10)

 However, if I try and do this along two directions, I do get a reduction in 
 the number of dimensions:

 a[[0],:,[5]].shape
 (1, 10)

 I'm wondering if this is normal, or is a bug? In fact, I can get what I want 
 by doing:

 a[[0],:,:][:,:,[5]].shape
 (1, 10, 1)

 so I can get around the issue, but just wanted to check whether the issue 
 with a[[0],:,[5]] is a bug?

No, it's not a bug. The key problem is that supplying lists does not
extract a slice - it uses fancy indexing. This implies, among other
things, that the data must be copied. When you supply two lists, that
means something very different in fancy indexing. When you are
supplying arrays in all index slots, what you get back has the same
shape as the arrays you put in; so if you supply one-dimensional
lists, like

A[[1,2,3],[1,4,5],[7,6,2]]

what you get is

[A[1,1,7], A[2,4,6], A[3,5,2]]

When you supply slices in some slots, what you get is complicated, and
maybe not well-defined. In particular, I think the fancy-indexing
dimensions always wind up at the front, and any slice dimensions are
left at the end.

In short, fancy indexing is not the way to go with your problem. I
generally use np.newaxis:

a[7,np.newaxis,:,8,np.newaxis]

but you can also use slices of length one:

a[7:8, :, 8:9]
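Putting the three options side by side (a quick sketch):

```python
import numpy as np

a = np.zeros((10, 10, 10))

# Fancy indexing: the two length-1 lists broadcast together into a single
# dimension, which ends up in front of the sliced axis:
assert a[[0], :, [5]].shape == (1, 10)

# Length-one slices preserve every dimension:
assert a[0:1, :, 5:6].shape == (1, 10, 1)

# Scalar indices plus np.newaxis also give the full-rank result:
assert a[7, np.newaxis, :, 8, np.newaxis].shape == (1, 10, 1)
```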

Anne


 Thanks,

 Tom




[Numpy-discussion] kron produces F-contiguous?

2010-09-01 Thread Neal Becker
It seems if I call kron with 2 C-contiguous arrays, it returns an F-
contiguous array.  Any reason for this (it's not what I wanted)?
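Which order kron hands back can vary with the numpy version, so when a C-contiguous result is required it seems safest to normalize explicitly; `np.ascontiguousarray` copies only when needed. A small sketch:

```python
import numpy as np

a = np.arange(4.0).reshape(2, 2)   # C-contiguous inputs
b = np.ones((2, 2))
k = np.kron(a, b)

# Inspect what this numpy build returned:
print(k.flags['C_CONTIGUOUS'], k.flags['F_CONTIGUOUS'])

kc = np.ascontiguousarray(k)       # no-op if k is already C-ordered
assert kc.flags['C_CONTIGUOUS']
assert np.array_equal(kc, k)
```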



Re: [Numpy-discussion] kron produces F-contiguous?

2010-09-01 Thread Lisandro Dalcin
On 1 September 2010 19:24, Neal Becker ndbeck...@gmail.com wrote:
 It seems if I call kron with 2 C-contiguous arrays, it returns an F-
 contiguous array.  Any reason for this (it's not what I wanted)?


Try numpy.linalg.inv ...


-- 
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169


Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Colin Macdonald
On 09/01/10 22:30, Michael Gilbert wrote:
 Interesting.  float96( '0.0001' ) also seems to evaluate to the first
 result. I assume that it also does a float64( '0.0001' ) conversion
 first. I understand that you can't change how python passes in floats,
 but wouldn't it be better to exactly handle strings since those can be
 converted exactly, which is what the user wants/expects?

I posted about this a week or two ago as well.  Been meaning to file
bugs, but busy :(

IIRC, the suggestion at the time was that I might be able to use something
like C's sscanf to do the string-to-float96 conversion properly.  I
haven't yet looked into it further; please let me know if you sort out
any way to do it...


cheers,
Colin


Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread Charles R Harris
On Wed, Sep 1, 2010 at 3:30 PM, Michael Gilbert michael.s.gilb...@gmail.com
 wrote:

 On Wed, 1 Sep 2010 21:15:22 + (UTC), Pauli Virtanen wrote:
  Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
   I've been using numpy's float96 class lately, and I've run into some
   strange precision errors.
  [clip]
  x = numpy.array( [0.01] , numpy.float96 )
  [clip]
   I would expect the float96 calculation to also produce 0.0 exactly as
   found in the float32 and float64 examples.  Why isn't this the case?
 
  (i) It is not possible to write long double literals in Python.
  float96(0.0001) means in fact float96(float64(0.0001))
 
  (ii) It is not possible to represent numbers 10^-r, r >= 1, exactly
   in base-2 floating point.
 
  So if you write float96(0.0001), the result is not the float96 number
  closest to 0.0001, but the 96-bit representation of the 64-bit number
  closest to 0.0001. Indeed,
 
   float96(0.0001), float96(1.0)/1000
  (0.0001479, 0.0009996)

 Interesting.  float96( '0.0001' ) also seems to evaluate to the first
 result. I assume that it also does a float64( '0.0001' ) conversion
 first. I understand that you can't change how python passes in floats,
 but wouldn't it be better to exactly handle strings since those can be
 converted exactly, which is what the user wants/expects?


Well, yes. But then we would need to write our own routines for the
conversions...

Chuck


Re: [Numpy-discussion] Unexpected float96 precision loss

2010-09-01 Thread David Cournapeau
On Thu, Sep 2, 2010 at 10:03 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Wed, Sep 1, 2010 at 3:30 PM, Michael Gilbert
 michael.s.gilb...@gmail.com wrote:

 On Wed, 1 Sep 2010 21:15:22 + (UTC), Pauli Virtanen wrote:
  Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
   I've been using numpy's float96 class lately, and I've run into some
   strange precision errors.
  [clip]
      x = numpy.array( [0.01] , numpy.float96 )
  [clip]
   I would expect the float96 calculation to also produce 0.0 exactly as
   found in the float32 and float64 examples.  Why isn't this the case?
 
  (i) It is not possible to write long double literals in Python.
      float96(0.0001) means in fact float96(float64(0.0001))
 
  (ii) It is not possible to represent numbers 10^-r, r >= 1, exactly
       in base-2 floating point.
 
  So if you write float96(0.0001), the result is not the float96 number
  closest to 0.0001, but the 96-bit representation of the 64-bit number
  closest to 0.0001. Indeed,
 
   float96(0.0001), float96(1.0)/1000
  (0.0001479, 0.0009996)

 Interesting.  float96( '0.0001' ) also seems to evaluate to the first
 result. I assume that it also does a float64( '0.0001' ) conversion
 first. I understand that you can't change how python passes in floats,
 but wouldn't it be better to exactly handle strings since those can be
 converted exactly, which is what the user wants/expects?


 Well, yes. But then we would need to write our own routines for the
 conversions...

I think that it is needed at some point, though. There are quite a few
bugs related to this kind of issue in NumPy.

cheers,

David


Re: [Numpy-discussion] Github migration?

2010-09-01 Thread Charles R Harris
Hi Jason,

On Tue, Aug 31, 2010 at 2:56 PM, Jason McCampbell jmccampb...@enthought.com
 wrote:

 Hi Chuck (and anyone else interested),

 I updated the refactoring page on the NumPy developer wiki (seems to be
 down or I'd paste in the link).  It certainly isn't complete, but there are
 a lot more details about the data structures and memory handling and an
 outline of some additional topics that need to be filled in.


I note that there are some C++ style comments in the code which will cause
errors on some platforms, so I hope you are planning on removing them at
some point. Also,

if (yes) foo;

is very bad style. There is a lot of that in the old code that still
needs to be cleaned up, but I also see some in the new code. It would be
best to get it right to start with.

snip

Chuck