Re: [Numpy-discussion] bug in deepcopy() of rank-zero arrays?

2013-04-30 Thread Richard Hattersley
+1 for getting rid of this inconsistency

We've hit this with Iris (a met/ocean analysis package - see github), and
have had to add several workarounds.


On 19 April 2013 16:55, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:

 Hi folks,

 In [264]: np.__version__
 Out[264]: '1.7.0'

 I just noticed that deep copying a rank-zero array yields a scalar --
 probably not what we want.

 In [242]: a1 = np.array(3)

 In [243]: type(a1), a1
 Out[243]: (numpy.ndarray, array(3))

 In [244]: a2 = copy.deepcopy(a1)

 In [245]: type(a2), a2
 Out[245]: (numpy.int32, 3)

 regular copy.copy() seems to work fine:

 In [246]: a3 = copy.copy(a1)

 In [247]: type(a3), a3
 Out[247]: (numpy.ndarray, array(3))

 Higher-rank arrays seem to work fine:

 In [253]: a1 = np.array((3,4))

 In [254]: type(a1), a1
 Out[254]: (numpy.ndarray, array([3, 4]))

 In [255]: a2 = copy.deepcopy(a1)

 In [256]: type(a2), a2
 Out[256]: (numpy.ndarray, array([3, 4]))

 Array scalars seem to work fine as well:

 In [257]: s1 = np.float32(3)

 In [258]: s2 = copy.deepcopy(s1)

 In [261]: type(s1), s1
 Out[261]: (numpy.float32, 3.0)

 In [262]: type(s2), s2
 Out[262]: (numpy.float32, 3.0)

 There are other ways to copy arrays, but in this case, I had a dict
 with a bunch of arrays in it, and needed a deepcopy of the dict. I was
 surprised to find that my rank-0 array got turned into a scalar.

 -Chris

 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR             (206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115       (206) 526-6317   main reception

 chris.bar...@noaa.gov

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
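[Editorial note: a minimal workaround sketch for the demotion reported above. Nothing here is special API; it just relies on ndarray.copy() always returning an ndarray, so copying the arrays explicitly sidesteps deepcopy() on affected versions:]

```python
import numpy as np

# a dict of arrays, including a rank-zero one, as in the report above
d = {"scalar_like": np.array(3), "vector": np.array([3, 4])}

# ndarray.copy() always returns an ndarray, so copying each value
# explicitly avoids the deepcopy() demotion to a scalar
d2 = {k: v.copy() for k, v in d.items()}

assert type(d2["scalar_like"]) is np.ndarray
assert d2["scalar_like"].shape == ()
```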


Re: [Numpy-discussion] bug in deepcopy() of rank-zero arrays?

2013-04-30 Thread Chris Barker - NOAA Federal
hmm -- I suppose one of us should post an issue on github -- then ask for
it to be fixed before 1.8  ;-)

I'll try to get to the issue if no one beats me to it -- got to run now...

-Chris



On Tue, Apr 30, 2013 at 5:35 AM, Richard Hattersley
rhatters...@gmail.com wrote:

 +1 for getting rid of this inconsistency

 We've hit this with Iris (a met/ocean analysis package - see github), and
 have had to add several workarounds.


 On 19 April 2013 16:55, Chris Barker - NOAA Federal chris.bar...@noaa.gov
  wrote:

  [clip]






[Numpy-discussion] GSoC proposal -- NumPy & SciPy

2013-04-30 Thread Blake Griffith
Hello, I'm writing a GSoC proposal, mostly concerning SciPy, but it
involves a few changes to NumPy.
The proposal is titled "Improvements to the sparse package of Scipy:
support for bool dtype and better interaction with NumPy"
and can be found on my GitHub:
https://github.com/cowlicks/GSoC-proposal/blob/master/proposal.markdown#numpy-interactionsjuly-8th-to-august-26th-7-weeks

Basically, I want to change the ufunc class to be aware of SciPy's sparse
matrices. So that when a ufunc is passed a sparse matrix as an argument, it
will dispatch to a function in the sparse matrix package, which will then
decide what to do. I just wanted to ping NumPy to make sure this is
reasonable, and I'm not totally off track. Suggestions, feedback
and criticism welcome.

Thanks!


Re: [Numpy-discussion] GSoC proposal -- NumPy & SciPy

2013-04-30 Thread Nathaniel Smith
On Tue, Apr 30, 2013 at 3:19 PM, Blake Griffith
blake.a.griff...@gmail.com wrote:
 Hello, I'm writing a GSoC proposal, mostly concerning SciPy, but it involves
 a few changes to NumPy.
 The proposal is titled: Improvements to the sparse package of Scipy: support
 for bool dtype and better interaction with NumPy
 and can be found on my GitHub:
 https://github.com/cowlicks/GSoC-proposal/blob/master/proposal.markdown#numpy-interactionsjuly-8th-to-august-26th-7-weeks

 Basically, I want to change the ufunc class to be aware of SciPy's sparse
 matrices. So that when a ufunc is passed a sparse matrix as an argument, it
 will dispatch to a function in the sparse matrix package, which will then
 decide what to do. I just wanted to ping NumPy to make sure this is
 reasonable, and I'm not totally off track. Suggestions, feedback and
 criticism welcome.

How do you plan to go about this? The obvious option of just calling
scipy.sparse.issparse() on ufunc entry raises some problems, since
numpy can't depend on or even import scipy, and we might be reluctant
to add such a special case for what's a rather more general problem.
OTOH it might be possible to solve the problem in general, e.g., see
the prototyped _ufunc_override_ special method in:
  https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py
but I don't know if you want to get into such a debate within the
scope of your GSoC. What were you thinking?

-n


Re: [Numpy-discussion] GSoC proposal -- NumPy & SciPy

2013-04-30 Thread Charles R Harris
On Tue, Apr 30, 2013 at 1:37 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Apr 30, 2013 at 3:19 PM, Blake Griffith
 blake.a.griff...@gmail.com wrote:
  [clip]

 How do you plan to go about this? The obvious option of just calling
 scipy.sparse.issparse() on ufunc entry raises some problems, since
 numpy can't depend on or even import scipy, and we might be reluctant
 to add such a special case for what's a rather more general problem.
 OTOH it might be possible to solve the problem in general, e.g., see
 the prototyped _ufunc_override_ special method in:
   https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py
 but I don't know if you want to get into such a debate within the
 scope of your GSoC. What were you thinking?


ISTR that Mark Wiebe also had thoughts on that functionality. There was a
thread on the topic, but I don't recall when.

Chuck


Re: [Numpy-discussion] GSoC proposal -- NumPy & SciPy

2013-04-30 Thread Pauli Virtanen
30.04.2013 22:37, Nathaniel Smith kirjoitti:
[clip]
 How do you plan to go about this? The obvious option of just calling
 scipy.sparse.issparse() on ufunc entry raises some problems, since
 numpy can't depend on or even import scipy, and we might be reluctant
 to add such a special case for what's a rather more general problem.
 OTOH it might be possible to solve the problem in general, e.g., see
 the prototyped _ufunc_override_ special method in:

   https://github.com/njsmith/numpyNEP/blob/master/numpyNEP.py

 but I don't know if you want to get into such a debate within the
 scope of your GSoC. What were you thinking?

To me it seems that the right thing to do here is the general solution.

Do you see immediate problems in e.g. just enabling something like your
_ufunc_override_?

The easy thing is that there are no backward compatibility problems
here, since if the magic is missing, the old logic is used. Currently,
numpy's dot() and ufuncs mostly do nothing sensible with sparse matrix
inputs, even though in some cases they do return values. That makes
writing generic sparse/dense code more painful than just __mul__ being
matrix multiplication.

IIRC, the quantities package also had some issues with operations
involving ndarrays, to which being able to override this could be a
solution.

-- 
Pauli Virtanen

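[Editorial note: a sketch of what such a hook buys. The spelling used below, `__array_ufunc__`, is what this idea eventually became in later NumPy releases (NEP 13); at the time of this thread only the `_ufunc_override_` prototype existed. The container class is made up for illustration:]

```python
import numpy as np

class Wrapped:
    # toy container -- not sparse, but it shows the dispatch shape the
    # thread is discussing: numpy hands the ufunc call to the operand
    # instead of coercing it to an ndarray
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # unwrap our containers, run the real ufunc, rewrap the result
        raw = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        return Wrapped(getattr(ufunc, method)(*raw, **kwargs))

w = Wrapped([1.0, 2.0])
res = np.add(w, 1.0)   # numpy defers to Wrapped.__array_ufunc__
assert isinstance(res, Wrapped)
```

A sparse package could do the same thing, dispatching to its own sparse-aware implementation inside the hook instead of densifying.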


Re: [Numpy-discussion] Raveling, reshape order keyword unnecessarily confuses index and memory ordering

2013-04-30 Thread Matthew Brett
Hi,

On Sat, Apr 6, 2013 at 3:15 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 On Sat, Apr 6, 2013 at 1:35 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:



 On Sat, Apr 6, 2013 at 7:22 PM, Matthew Brett matthew.br...@gmail.com
 wrote:

 Hi,

 On Sat, Apr 6, 2013 at 1:51 AM, Ralf Gommers ralf.gomm...@gmail.com
 wrote:
 
 
 
  On Sat, Apr 6, 2013 at 4:47 AM, Matthew Brett matthew.br...@gmail.com
  wrote:
 
  Hi,
 
  On Fri, Apr 5, 2013 at 7:39 PM,  josef.p...@gmail.com wrote:
  
   It's not *any* cost, this goes deep and wide, it's one of the basic
   concepts of numpy that you want to rename.
 
  The proposal I last made was to change the default name to 'layout'
  after some period to be agreed - say - P - with suitable warning in
  the docstring up until that time, and after, and leave 'order' as an
  alias forever.
 
 
  The above paragraph is simply incorrect. Your last proposal also
  included
  deprecation warnings and a future backwards compatibility break by
  removing
  'order'.
 
  If you now say you're not proposing steps 3 and 4 anymore, then you're
  back
  to what I called option (2) - duplicate keywords forever. Which for me
  is
  undesirable, for reasons I already mentioned.

 You might not have read my follow-up proposing to drop steps 3 and 4
 if you felt they were unacceptable.

  P.S. being called short-sighted and damaging numpy by responding to a
  proposal you now say you didn't make is pretty damn annoying.

 No, I did make that proposal, and in the spirit of negotiation and
 consensus, I subsequently modified my proposal, as I hope you'd expect
 in this situation.


 You have had clear NOs to the various incarnations of your proposal from 3
 active developers of this community, not once but two or three times from
 each of those developers. Furthermore you have got only a couple of +0.5s;
 after 90 emails no one else seems to feel that this is a change we really
 have to have. Therefore I don't expect another modification of your
 proposal; I expect you to drop it.

 OK - I think I have a better understanding of the 'model' now.

 As another poster said, this thread has run its course. The technical issues
 are clear, and apparently we're going to have to agree to disagree about the
 seriousness of the confusion. Please please go and fix the docs in the way
 you deem best, and leave it at that. And triple please not another
 governance thread.

https://github.com/numpy/numpy/pull/3294

Cheers,

Matthew


Re: [Numpy-discussion] GSoC proposal -- NumPy & SciPy

2013-04-30 Thread Nathaniel Smith
On Tue, Apr 30, 2013 at 4:02 PM, Pauli Virtanen p...@iki.fi wrote:
 30.04.2013 22:37, Nathaniel Smith kirjoitti:
 [clip]

 To me it seems that the right thing to do here is the general solution.

 Do you see immediate problems in e.g. just enabling something like your
 _ufunc_override_?

Just that we might want to think a bit about the design space before
implementing something. E.g., apparently doing Python attribute lookup
is very expensive -- we recently had a patch to skip
__array_interface__ checks whenever possible -- is adding another such
per-operation overhead ok? I guess we could use similar checks (skip
checking for known types like int/float/ndarray), or only check for
_ufunc_override_ on the class (not the instance) and cache the result
per-class?

 The easy thing is that there are no backward compatibility problems
 here, since if the magic is missing, the old logic is used. Currently,
 numpy's dot() and ufuncs mostly do nothing sensible with sparse matrix
 inputs, even though in some cases they do return values. That makes
 writing generic sparse/dense code more painful than just __mul__ being
 matrix multiplication.

I agree, but, if the main target is 'dot' then the current
_ufunc_override_ design alone won't do it, since 'dot' is not a
ufunc...

-n


[Numpy-discussion] nanmean(), nanstd() and other missing functions for 1.8

2013-04-30 Thread Benjamin Root
Currently, I am in the process of migrating some co-workers from Matlab and
IDL, and the number one complaint I get is that numpy has nansum() but no
nanmean() and nanstd().  While we do have an alternative in the form of
masked arrays, most of these people are busy enough trying to port their
existing code over to python that this sort of stumbling block becomes
difficult to explain away.

Given how relatively simple these functions are, I can not think of any
reason not to include these functions in v1.8.  Of course, the
documentation for these functions should certainly include mention of
masked arrays.

There is one other non-trivial function that has been discussed before:
np.minmax().  My thinking is that it would return a 2xN array (where N is
whatever the size of the result would be if just np.min() were used).
This would allow one to do min, max = np.minmax(X).

Are there any other functions that others feel are missing from numpy and
would like to see for v1.8?  Let's discuss them here.

Cheers!
Ben Root
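[Editorial note: for concreteness, the proposed functions are indeed small. A rough sketch of nanmean along the lines described above -- just the idea, not what eventually shipped in NumPy:]

```python
import numpy as np

def nanmean_sketch(a, axis=None):
    # mean over non-NaN entries: zero out NaNs for the sum, then
    # divide by the count of valid entries along the same axis
    a = np.asarray(a, dtype=float)
    valid = ~np.isnan(a)
    total = np.where(valid, a, 0.0).sum(axis=axis)
    return total / valid.sum(axis=axis)

x = np.array([[1.0, np.nan], [3.0, 4.0]])
print(nanmean_sketch(x))           # mean of the three valid entries
print(nanmean_sketch(x, axis=0))   # column means ignoring NaN
```

nanstd() follows the same masking pattern, subtracting the nan-aware mean before squaring.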


[Numpy-discussion] GSoC : Performance parity between numpy arrays and Python scalars

2013-04-30 Thread Arink Verma
Hi all!
I have written my application [1] for "Performance parity between numpy
arrays and Python scalars" [2]. It would be a great help if you could
review it. Does it look achievable and deliverable as a project?

[1]
http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/arinkverma/40001#
[2] http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas


-- 
Arink
Computer Science and Engineering
Indian Institute of Technology Ropar
www.arinkverma.in


Re: [Numpy-discussion] nanmean(), nanstd() and other missing functions for 1.8

2013-04-30 Thread Daπid
On 1 May 2013 03:36, Benjamin Root ben.r...@ou.edu wrote:
 There is one other non-trivial function that has been discussed before:
 np.minmax().  My thinking is that it would return a 2xN array (where N is
 whatever the size of the result would be if just np.min() were used).
 This would allow one to do min, max = np.minmax(X).

I had been looking for this function in the past; I think it is a good
and necessary addition. It should also come with its companion,
np.argminmax.


David.
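[Editorial note: a rough sketch of the proposed interface. The payoff of a real np.minmax would be a single pass over the data in C; this pure-Python stand-in only pins down the API shape:]

```python
import numpy as np

def minmax_sketch(a, axis=None):
    # returns a 2xN result: row 0 is the min, row 1 is the max,
    # each shaped as np.min()/np.max() would shape them
    return np.stack([np.min(a, axis=axis), np.max(a, axis=axis)])

lo, hi = minmax_sketch(np.array([3, 1, 4, 1, 5]))
assert lo == 1 and hi == 5
```

Because the result's first axis has length 2, tuple unpacking (`min, max = ...`) works either way, which also answers the tuple-vs-array question raised later in the thread.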


Re: [Numpy-discussion] could anyone check on a 32bit system?

2013-04-30 Thread Matthew Brett
Hi,

On Tue, Apr 30, 2013 at 8:08 PM, Yaroslav Halchenko
li...@onerussian.com wrote:
 could anyone on a 32bit system with fresh numpy (1.7.1) test the following:

 wget -nc http://www.onerussian.com/tmp/data.npy ; python -c 'import numpy as
 np; data1 = np.load("/tmp/data.npy"); print np.sum(data1[1,:,0,1]) -
 np.sum(data1, axis=1)[1,0,1]'

 0.0

 because unfortunately it seems on fresh ubuntu raring (in 32bit build only,
 seems ok in 64 bit... also never ran into it on older numpy releases):

 python -c 'import numpy as np; data1 = np.load("/tmp/data.npy"); print
 np.sum(data1[1,:,0,1]) - np.sum(data1, axis=1)[1,0,1]'
 -1.11022302463e-16

 PS detected by failed tests of pymvpa

Reduced case on numpy 1.7.1, 32-bit Ubuntu 12.04.2

In [64]: data = np.array([[ 0.49505185,  0.47212842],
                          [ 0.53529587,  0.04366172],
                          [-0.13461665, -0.01664215]])

In [65]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
Out[65]: 1.1102230246251565e-16

No difference for single vector:

In [4]: data1 = data[:, 0:1]

In [5]: np.sum(data1[:, 0]) - np.sum(data1, axis=0)[0]
Out[5]: 0.0

Puzzling to me...

Cheers,

Matthew


Re: [Numpy-discussion] nanmean(), nanstd() and other missing functions for 1.8

2013-04-30 Thread Chris Barker - NOAA Federal
On Apr 30, 2013, at 6:37 PM, Benjamin Root ben.r...@ou.edu wrote:
 I can not think of any reason not to include these functions in v1.8.

+1


 There is one other non-trivial function that has been discussed before:
 np.minmax().  My thinking is that it would return a 2xN array

How about a tuple: (min, max)?

-Chris


Re: [Numpy-discussion] could anyone check on a 32bit system?

2013-04-30 Thread Matthew Brett
Hi,

On Tue, Apr 30, 2013 at 9:16 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 [clip]

Also true on current numpy trunk:

In [2]: import numpy as np

In [3]: np.__version__
Out[3]: '1.8.0.dev-a8805f6'

In [4]: data = np.array([[ 0.49505185,  0.47212842],
   ...:                  [ 0.53529587,  0.04366172],
   ...:                  [-0.13461665, -0.01664215]])

In [5]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
Out[5]: 1.1102230246251565e-16

Not true on numpy 1.6.1:

In [2]: np.__version__
Out[2]: '1.6.1'

In [3]: data = np.array([[ 0.49505185,  0.47212842],
   ...:                  [ 0.53529587,  0.04366172],
   ...:                  [-0.13461665, -0.01664215]])

In [4]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
Out[4]: 0.0

Cheers,

Matthew


Re: [Numpy-discussion] could anyone check on a 32bit system?

2013-04-30 Thread Bradley M. Froehle
On Tue, Apr 30, 2013 at 8:08 PM, Yaroslav Halchenko li...@onerussian.com wrote:

 [clip]


Perhaps on the 32-bit system one call is using the 80-bit extended
precision register for the summation and the other one is not?

-Brad
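[Editorial note: that explanation is consistent with a quick check. IEEE double addition is not associative, so any two code paths that group the terms differently -- or carry different intermediate precision, as with 80-bit x87 versus 64-bit SSE -- can disagree in the last bit:]

```python
# float addition is not associative, so two summation orders can
# differ by about one ulp -- the size of the discrepancy reported
# above (~1.1e-16); using the values from Matthew's reduced case
vals = [0.49505185, 0.53529587, -0.13461665]
fwd = (vals[0] + vals[1]) + vals[2]
rev = (vals[2] + vals[1]) + vals[0]
print(fwd - rev)   # zero or ~1e-16, depending on rounding
```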