Re: [Numpy-discussion] Quaternion dtype for NumPy - initial implementation available

2011-07-27 Thread Robert Love
To use quaternions I find I often need conversion to/from matrices and to/from 
Euler angles.  Will you add that functionality?  Will you handle the left 
versor and right versor versions?

I have a set of pure python code I've sketched out for my needs (aerospace) but 
would be happy to have an intrinsic Numpy solution.
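
As a concrete illustration, here is a minimal pure-NumPy sketch of the conversions in question. The helper names and the conventions (w-first component order, right-versor/active rotation, ZYX Euler order) are assumptions for illustration only, not anything from the package:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion q = (w, x, y, z),
    right-versor (active rotation) convention."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def matrix_to_euler_zyx(R):
    """Yaw, pitch, roll (ZYX order) from a rotation matrix."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Example: a 90-degree rotation about the z axis
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
R = quat_to_matrix(q)
yaw, pitch, roll = matrix_to_euler_zyx(q is not None and R)
```

A left-versor convention would transpose the matrix; any intrinsic NumPy support would need to document which convention it uses.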


On Jul 16, 2011, at 9:50 AM, Martin Ling wrote:

> Hi all,
> 
> I have just pushed a package to GitHub which adds a quaternion dtype to
> NumPy: https://github.com/martinling/numpy_quaternion
> 
> Some backstory: on Wednesday I gave a talk at SciPy 2011 about an
> inertial sensing simulation package I have been working on
> (http://www.imusim.org/). One component I suggested might be reusable
> from that code was the quaternion math implementation, written in
> Cython. One of its features is a wrapper class for Nx4 NumPy arrays that
> supports efficient operations using arrays of quaternion values.
> 
> Travis Oliphant suggested that a quaternion dtype would be a better
> solution, and got me talking to Mark Wiebe about this. With Mark's help
> I completed this initial version at yesterday's sprint session.
> 
> Incidentally, how to do something like this isn't well documented and I
> would have had little hope without both Mark's in-person help and his
> previous code (for adding a half-precision float dtype) to refer to. I
> don't know what the consensus is about whether people writing custom
> dtypes is a desirable thing, but if it is then the process needs to be
> made a lot easier. That said, the fact this is doable without patching
> the numpy core at all is really, really nice.
> 
> Example usage:
> 
> >>> import numpy as np
> >>> import quaternion
> >>> np.quaternion(1,0,0,0)
> quaternion(1, 0, 0, 0)
> >>> q1 = np.quaternion(1,2,3,4)
> >>> q2 = np.quaternion(5,6,7,8)
> >>> q1 * q2
> quaternion(-60, 12, 30, 24)
> >>> a = np.array([q1, q2])
> >>> a
> array([quaternion(1, 2, 3, 4), quaternion(5, 6, 7, 8)],
>       dtype=quaternion)
> >>> exp(a)
> array([quaternion(1.69392, -0.78956, -1.18434, -1.57912),
>        quaternion(138.909, -25.6861, -29.9671, -34.2481)],
>       dtype=quaternion)
> 
> The following ufuncs are implemented:
> add, subtract, multiply, divide, log, exp, power, negative, conjugate,
> copysign, equal, not_equal, less, less_equal, isnan, isinf, isfinite,
> absolute
> 
> Quaternion components are stored as doubles. The package could be extended
> to support e.g. qfloat, qdouble, qlongdouble.
> 
> Comparison operations follow the same lexicographic ordering as tuples.
> 
> The unary tests isnan, isinf and isfinite return true if they would
> return true for any individual component.
> 
> Real types may be cast to quaternions, giving quaternions with zero for
> all three imaginary components. Complex types may also be cast to
> quaternions, with their single imaginary component becoming the first
> imaginary component of the quaternion. Quaternions may not be cast to
> real or complex types.
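
As a cross-check of the exp output shown in the example session above, the quaternion exponential can be sketched in pure Python. This is a hypothetical helper using the standard formula exp(w + v) = e^w (cos|v| + sin|v| v/|v|), not the package's own implementation:

```python
import math

def qexp(q):
    """Quaternion exponential: exp(w + v) = e^w * (cos|v| + sin|v| * v/|v|),
    where v = (x, y, z) is the imaginary part.  Sketch only."""
    w, x, y, z = q
    vnorm = math.sqrt(x * x + y * y + z * z)
    ew = math.exp(w)
    if vnorm == 0.0:
        return (ew, 0.0, 0.0, 0.0)  # pure real: reduces to the scalar exp
    s = ew * math.sin(vnorm) / vnorm
    return (ew * math.cos(vnorm), s * x, s * y, s * z)

w, x, y, z = qexp((1.0, 2.0, 3.0, 4.0))
# agrees with the session above: quaternion(1.69392, -0.78956, -1.18434, -1.57912)
```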
> 
> Comments very welcome. This is my first attempt at NumPy hacking :-)
> 
> 
> Martin
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Regression in choose()

2011-07-27 Thread Ed Schofield
Hi Travis, hi Olivier,

Thanks for your replies last month about the choose() issue.

I did some further investigation into this. I ran out of time in that
project to come up with a patch, but here's what I found, which may be of
interest:

The compile-time constant NPY_MAXARGS is indeed limiting choose(), but only
in recent versions. In NumPy version 1.2.1 this constant was set to the same
value of 32, but choose() was not limited in the same way. This code
succeeds on NumPy 1.2.1:



import numpy as np

choices = [[0, 1, 2, 3], [10, 11, 12, 13],
  [20, 21, 22, 23], [30, 31, 32, 33]]

morechoices = choices * 2**22
np.choose([2, 0, 1, 0], morechoices)



where the list contains 16.7 million items. So this is a real regression ... for
heavy-duty users of choose().
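
For reference, one workaround sketch while the limitation stands: choose() with an index array is equivalent to fancy indexing on a stacked choices array, which only broadcasts two index arrays and so should avoid the NPY_MAXARGS-limited path. The names below are illustrative:

```python
import numpy as np

choices = [[0, 1, 2, 3], [10, 11, 12, 13],
           [20, 21, 22, 23], [30, 31, 32, 33]]
index = np.array([2, 0, 1, 0])

# np.choose(index, choices) picks choices[index[i]][i] for each position i.
# The same result via fancy indexing on a stacked array, with no
# 32-argument ceiling on the number of choice rows:
stacked = np.array(choices * 8)                 # 32 choice rows
result = stacked[index, np.arange(len(index))]
# result == array([20, 1, 12, 3]), matching np.choose(index, choices)
```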

Thanks again for your thoughts!

Best wishes,
Ed


On Fri, Jun 17, 2011 at 3:05 AM, Travis Oliphant wrote:

> Hi Ed,
>
> I'm pretty sure that this "bug" is due to the compile-time constant
> NPY_MAXARGS defined in include/numpy/ndarraytypes.h. I suspect that the
> versions you are trying it on where it succeeds have a different
> compile-time value for that constant.
>
> NumPy uses a multi-iterator object (PyArrayMultiIterObject defined in
> ndarraytypes.h as well) to broadcast arguments together for ufuncs and for
> functions like choose.  The data-structure that it uses to do this has a
> static array of Iterator objects with room for NPY_MAXARGS iterators. I
> think in some versions this compile time constant has been 40 or higher.
>  Re-compiling NumPy by bumping up that constant will of course require
> re-compilation of most extension modules that use the NumPy API.
>
> Numeric did not use this approach to broadcast the arguments to choose
> together and so likely does not have the same limitation.   It would also
> not be that difficult to modify the NumPy code to dynamically allocate the
> iters array when needed to remove the NPY_MAXARGS limitation.   In fact, I
> would not mind seeing all the NPY_MAXDIMS and NPY_MAXARGS limitations
> removed.   To do it well you would probably want to have some minimum
> storage-space pre-allocated (say NPY_MAXDIMS as 7 and NPY_MAXARGS as 10 to
> avoid the memory allocation in common cases) and just increase that space as
> needed dynamically.
>
> This would be a nice project for someone wanting to learn the NumPy code
> base.
>
> -Travis
>
>
>
>
>
> On Jun 16, 2011, at 1:56 AM, Ed Schofield wrote:
>
> Hi all,
>
> I have been investigating the limitation of the choose() method (and
> function) to 32 elements. This is a regression in recent versions of NumPy.
> I have tested choose() in the following NumPy versions:
>
> 1.0.4: fine
> 1.1.1: bug
> 1.2.1: fine
> 1.3.0: bug
> 1.4.x: bug
> 1.5.x: bug
> 1.6.x: bug
> Numeric 24.3: fine
>
> (To run the tests on versions of NumPy prior to 1.4.x I used Python 2.4.3.
> For the other tests I used Python 2.7.)
>
> Here 'bug' means the choose() function has the 32-element limitation. I
> have been helping an organization to port a large old Numeric-using codebase
> to NumPy, and the choose() limitation in recent NumPy versions is throwing a
> spanner in the works. The codebase is currently using both NumPy and Numeric
> side-by-side, with Numeric only being used for its choose() function, with a
> few dozen lines like this:
>
> a = numpy.array(Numeric.choose(b, c))
>
> Here is a simple example that triggers the bug. It is a simple extension of
> the example from the choose() docstring:
>
> 
>
> import numpy as np
>
> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
>   [20, 21, 22, 23], [30, 31, 32, 33]]
>
> np.choose([2, 3, 1, 0], choices * 8)
>
> 
>
> A side note: the exception message (defined in
> core/src/multiarray/iterators.c) is also slightly inconsistent with the
> actual behaviour:
>
> Traceback (most recent call last):
>   File "chooser.py", line 6, in <module>
> np.choose([2, 3, 1, 0], choices * 8)
>   File "/usr/lib64/python2.7/site-packages/numpy/core/fromnumeric.py", line
> 277, in choose
> return _wrapit(a, 'choose', choices, out=out, mode=mode)
>   File "/usr/lib64/python2.7/site-packages/numpy/core/fromnumeric.py", line
> 37, in _wrapit
> result = getattr(asarray(obj),method)(*args, **kwds)
> ValueError: Need between 2 and (32) array objects (inclusive).
>
> The actual behaviour is that choose() passes with 31 objects but fails with
> 32 objects, so this should read "exclusive" rather than "inclusive". (And
> why the parentheses around 32?)
>
> Does anyone know what changed between 1.2.1 and 1.3.0 that introduced the
> 32-element limitation to choose(), and whether we might be able to lift this
> limitation again for future NumPy versions? I have a couple of days to work
> on a patch ... if someone can advise me how to approach this.
>
> Best wishes,
> Ed
>
>
> --
> Dr. Edward Schofield
> Python Charmers
> +61 (0)405 676 229
> http://pythoncharmers.com
>

Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Gael Varoquaux
On Wed, Jul 27, 2011 at 05:25:20PM -0600, Charles R Harris wrote:
>Well, doc tests are just a losing proposition, no one should be using them
>for writing tests. It's not like this is a new discovery, doc tests have
>been known to be unstable for years.

Untested documentation is broken in my experience. This is why I do rely
a lot on doctests.
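
One middle ground, for what it's worth: keep the doctests but have them compare values rather than reprs, so they survive repr changes between NumPy versions. A sketch (the function is a made-up example):

```python
import doctest
import numpy as np

def positive_mask(a):
    """Boolean mask of the strictly positive entries of a.

    Testing values and dtype rather than the array repr keeps the
    doctest stable across NumPy versions whose reprs differ:

    >>> m = positive_mask(np.array([-1.0, 0.0, 2.0]))
    >>> m.tolist()
    [False, False, True]
    >>> m.dtype == np.bool_
    True
    """
    return a > 0

# Run the doctests in this module; failures is 0 if they all pass.
failures, _tests = doctest.testmod()
```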

>As to numpy being a settled project, I beg to differ. Without moving
>forward and routine maintenance numpy will quickly bitrot. I think it
>would only take a year or two before the decay began to show.

I agree; it's a question of finding the right tradeoff. It is hard to
support different versions of numpy because of its evolution, but we do
like that evolution; it's what makes numpy better. However, sometimes a
bit of compromise on cosmetic changes, for the old farts like Matthew and
me who have legacy codebases to support, is appreciated.

Gaël


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Charles R Harris
On Wed, Jul 27, 2011 at 4:07 PM, Gael Varoquaux <
gael.varoqu...@normalesup.org> wrote:

> On Wed, Jul 27, 2011 at 04:59:17PM -0500, Mark Wiebe wrote:
> >but ultimately NumPy needs the ability to change its repr and other
> >details like it in order to progress as a software project.
>
> You have to understand that numpy is the core layer on which people have
> built pretty huge scientific codebases with fairly little money flowing
> in to support them. Any minor change to numpy causes turmoil in labs and
> is dealt with by people (students or researchers) in their spare time. I
> am not saying that there should not be any changes to numpy, just that
> the costs and benefits of these changes need to be weighed carefully.
> Numpy is not a young and agile project; it's a foundation library.
>
>
Well, doc tests are just a losing proposition, no one should be using them
for writing tests. It's not like this is a new discovery, doc tests have
been known to be unstable for years.

As to numpy being a settled project, I beg to differ. Without moving forward
and routine maintenance numpy will quickly bitrot. I think it would only
take a year or two before the decay began to show.

Chuck


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 5:47 PM, Matthew Brett wrote:

> Hi,
>
> On Wed, Jul 27, 2011 at 3:23 PM, Mark Wiebe  wrote:
> > On Wed, Jul 27, 2011 at 5:07 PM, Gael Varoquaux
> >  wrote:
> >>
> >> On Wed, Jul 27, 2011 at 04:59:17PM -0500, Mark Wiebe wrote:
> >> >but ultimately NumPy needs the ability to change its repr and other
> >> >details like it in order to progress as a software project.
> >>
> >> You have to understand that numpy is the core layer on which people have
> >> built pretty huge scientific codebases with fairly little money flowing
> >> in to support them. Any minor change to numpy causes turmoil in labs and
> >> is dealt with by people (students or researchers) in their spare time. I
> >> am not saying that there should not be any changes to numpy, just that
> >> the costs and benefits of these changes need to be weighed carefully.
> >> Numpy is not a young and agile project; it's a foundation library.
> >
> > That's absolutely true. In my opinion, the biggest consequence of this is
> > that great caution needs to be taken during the release process,
> something
> > that Ralf has done a commendable job on.
>
> You seem to be saying that if - say - you - put in some backwards
> incompatibility during development then you are expecting:
>
> a) Not to do anything about this until release time and
> b) That Ralf can clear all that up even though you made the changes.
>

Not at all. What I tried to do is explain the rationale for the change, and
why I believe third party code should not depend on this aspect of the
system. You are free to argue why you believe this point is incorrect, or
why even though it is correct, there are pragmatic reasons why a compromise
solution should be found. Then we can discuss who should do what and figure
out a time frame. That is after all the purpose of the mailing list.

The role Ralf is playing in managing the release process does not involve
doing all the code fixes, it's a group effort. He gets the credit for making
sure everything goes smoothly during the beta and release candidate period,
and that all the loose ends are tied up appropriately.


> I am sure that most people, myself included, are very glad that you
> are trying to improve the numpy internals, and know that that is hard,
> and will cause breakage, from time to time.
>
> On the other hand, if we tell you about breakages or
> incompatibilities, and you tell us 'go fix it yourself', or 'Ralf can
> fix it later' then that can
>
> a) cause bad feeling and
> b) reduce community ownership of the code and
> c) make us anxious about stability.
>

By working with master, you're participating in the development of NumPy.
I'm volunteering as part of this community as much as you are, and I'm doing
things to try and increase participation by giving pointers and suggestions
in bug reports and pull requests. NumPy has a *very* small development team
and a very large user base.

I'm a human being, as are all of us in the NumPy community, and not
everything I do will be perfect. I am, however, for the moment writing a lot
of NumPy code, and there already have been two releases, 1.6.0 and 1.6.1,
with changes authored by me which cut deeper in NumPy than I suspect most
people realize. I would kindly ask that people be patient with and
participate positively in the development process; it is not a
straightforward journey from start to finish.

-Mark


>
> Cheers,
>
> Matthew


Re: [Numpy-discussion] C-API: multidimensional array indexing?

2011-07-27 Thread Johann Bauer
Thanks, Mark! Problem solved.

Johann



Re: [Numpy-discussion] Change in behavior of PyArray_BYTES(

2011-07-27 Thread Matthew Brett
Hi,

On Wed, Jul 27, 2011 at 3:40 PM, Mark Wiebe  wrote:
> On Wed, Jul 27, 2011 at 5:35 PM, Matthew Brett 
> wrote:
>>
>> Hi,
>>
>> I was trying to compile matplotlib against current trunk, and hit an
>> error with this line:
>>
>>            char* row0 = PyArray_BYTES(matrix);
>>
>>
>> https://github.com/matplotlib/matplotlib/blob/master/src/agg_py_transforms.cpp
>>
>> The error is:
>>
>> src/agg_py_transforms.cpp:30:26: error: invalid conversion from
>> ‘void*’ to ‘char*’
>>
>> It turned out that the output type of PyArray_BYTES has changed
>> between 1.5.1 and current trunk.
>>
>> In 1.5.1, ndarraytypes.h:
>>
>> #define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)
>> (resulting in a char *, from the char * bytes member of PyArrayObject)
>>
>> In current trunk we have this:
>>
>> #define PyArray_BYTES(arr) PyArray_DATA(arr)
>>
>> ifndef NPY_NO_DEPRECATED_API  then this results in:
>>
>> #define PyArray_DATA(obj) ((void *)(((PyArrayObject_fieldaccess
>> *)(obj))->data))
>>
>> giving a void *
>>
>> ifdef NPY_NO_DEPRECATED_API then:
>>
>> static NPY_INLINE char *
>> PyArray_DATA(PyArrayObject *arr)
>> {
>>    return ((PyArrayObject_fieldaccess *)arr)->data;
>> }
>>
>> resulting in a char * (for both PyArray_DATA and PyArray_BYTES).
>>
>> It seems to me that it would be safer to add back this line:
>>
>> #define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)
>>
>> to ndarraytypes.h , within the ifndef NPY_NO_DEPRECATED_API block, to
>> maintain compatibility.
>>
>> Do y'all agree?
>
> Yes, this was an error. Michael Droettboom's pull request to fix it is
> already merged, so if you update against master it should work.
> -Mark

Ah - yes - thanks,

Matthew


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Matthew Brett
Hi,

On Wed, Jul 27, 2011 at 3:23 PM, Mark Wiebe  wrote:
> On Wed, Jul 27, 2011 at 5:07 PM, Gael Varoquaux
>  wrote:
>>
>> On Wed, Jul 27, 2011 at 04:59:17PM -0500, Mark Wiebe wrote:
>> >    but ultimately NumPy needs the ability to change its repr and other
>> >    details like it in order to progress as a software project.
>>
>> You have to understand that numpy is the core layer on which people have
>> built pretty huge scientific codebases with fairly little money flowing
>> in to support them. Any minor change to numpy causes turmoil in labs and
>> is dealt with by people (students or researchers) in their spare time. I
>> am not saying that there should not be any changes to numpy, just that
>> the costs and benefits of these changes need to be weighed carefully.
>> Numpy is not a young and agile project; it's a foundation library.
>
> That's absolutely true. In my opinion, the biggest consequence of this is
> that great caution needs to be taken during the release process, something
> that Ralf has done a commendable job on.

You seem to be saying that if - say - you - put in some backwards
incompatibility during development then you are expecting:

a) Not to do anything about this until release time and
b) That Ralf can clear all that up even though you made the changes.

I am sure that most people, myself included, are very glad that you
are trying to improve the numpy internals, and know that that is hard,
and will cause breakage, from time to time.

On the other hand, if we tell you about breakages or
incompatibilities, and you tell us 'go fix it yourself', or 'Ralf can
fix it later' then that can

a) cause bad feeling and
b) reduce community ownership of the code and
c) make us anxious about stability.

Cheers,

Matthew


Re: [Numpy-discussion] Change in behavior of PyArray_BYTES(

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 5:35 PM, Matthew Brett wrote:

> Hi,
>
> I was trying to compile matplotlib against current trunk, and hit an
> error with this line:
>
>char* row0 = PyArray_BYTES(matrix);
>
>
> https://github.com/matplotlib/matplotlib/blob/master/src/agg_py_transforms.cpp
>
> The error is:
>
> src/agg_py_transforms.cpp:30:26: error: invalid conversion from
> ‘void*’ to ‘char*’
>
> It turned out that the output type of PyArray_BYTES has changed
> between 1.5.1 and current trunk.
>
> In 1.5.1, ndarraytypes.h:
>
> #define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)
> (resulting in a char *, from the char * bytes member of PyArrayObject)
>
> In current trunk we have this:
>
> #define PyArray_BYTES(arr) PyArray_DATA(arr)
>
> ifndef NPY_NO_DEPRECATED_API  then this results in:
>
> #define PyArray_DATA(obj) ((void *)(((PyArrayObject_fieldaccess
> *)(obj))->data))
>
> giving a void *
>
> ifdef NPY_NO_DEPRECATED_API then:
>
> static NPY_INLINE char *
> PyArray_DATA(PyArrayObject *arr)
> {
>return ((PyArrayObject_fieldaccess *)arr)->data;
> }
>
> resulting in a char * (for both PyArray_DATA and PyArray_BYTES).
>
> It seems to me that it would be safer to add back this line:
>
> #define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)
>
> to ndarraytypes.h , within the ifndef NPY_NO_DEPRECATED_API block, to
> maintain compatibility.
>
> Do y'all agree?
>

Yes, this was an error. Michael Droettboom's pull request to fix it is
already merged, so if you update against master it should work.

-Mark


> Best,
>
> Matthew


[Numpy-discussion] Change in behavior of PyArray_BYTES(

2011-07-27 Thread Matthew Brett
Hi,

I was trying to compile matplotlib against current trunk, and hit an
error with this line:

char* row0 = PyArray_BYTES(matrix);

https://github.com/matplotlib/matplotlib/blob/master/src/agg_py_transforms.cpp

The error is:

src/agg_py_transforms.cpp:30:26: error: invalid conversion from
‘void*’ to ‘char*’

It turned out that the output type of PyArray_BYTES has changed
between 1.5.1 and current trunk.

In 1.5.1, ndarraytypes.h:

#define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)
(resulting in a char *, from the char * bytes member of PyArrayObject)

In current trunk we have this:

#define PyArray_BYTES(arr) PyArray_DATA(arr)

ifndef NPY_NO_DEPRECATED_API  then this results in:

#define PyArray_DATA(obj) ((void *)(((PyArrayObject_fieldaccess *)(obj))->data))

giving a void *

ifdef NPY_NO_DEPRECATED_API then:

static NPY_INLINE char *
PyArray_DATA(PyArrayObject *arr)
{
return ((PyArrayObject_fieldaccess *)arr)->data;
}

resulting in a char * (for both PyArray_DATA and PyArray_BYTES).

It seems to me that it would be safer to add back this line:

#define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data)

to ndarraytypes.h , within the ifndef NPY_NO_DEPRECATED_API block, to
maintain compatibility.

Do y'all agree?

Best,

Matthew


Re: [Numpy-discussion] C-API: multidimensional array indexing?

2011-07-27 Thread Mark Wiebe
Probably the easiest way is to emulate what Python is doing in M[i,:] and
M[:,i]. You can create the : with PySlice_New(NULL, NULL, NULL), and the i
with PyInt_FromLong. Then create a tuple with Py_BuildValue and use
PyObject_GetItem to do the slicing.

It is possible to do the same thing directly in C, but as you've found there
aren't convenient APIs for this yet.

Cheers,
Mark


On Wed, Jul 27, 2011 at 4:37 PM, Johann Bauer  wrote:

> Dear experts,
>
> is there a C-API function for numpy which implements Python's
> multidimensional indexing? Say, I have a 2d-array
>
>   PyArrayObject * M;
>
> and an index
>
>   int i;
>
> how do I extract the i-th row or column M[i,:] respectively M[:,i]?
>
> I am looking for a function which gives again a PyArrayObject * and
> which is a view to M (no copied data; the result should be another
> PyArrayObject whose data and strides points to the correct memory
> portion of M).
>
> I searched the API documentation, Google and mailing lists for quite a
> long time but didn't find anything. Can you help me?
>
> Thanks, Johann


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 5:07 PM, Gael Varoquaux <
gael.varoqu...@normalesup.org> wrote:

> On Wed, Jul 27, 2011 at 04:59:17PM -0500, Mark Wiebe wrote:
> >but ultimately NumPy needs the ability to change its repr and other
> >details like it in order to progress as a software project.
>
> You have to understand that numpy is the core layer on which people have
> built pretty huge scientific codebases with fairly little money flowing
> in to support them. Any minor change to numpy causes turmoil in labs and
> is dealt with by people (students or researchers) in their spare time. I
> am not saying that there should not be any changes to numpy, just that
> the costs and benefits of these changes need to be weighed carefully.
> Numpy is not a young and agile project; it's a foundation library.
>

That's absolutely true. In my opinion, the biggest consequence of this is
that great caution needs to be taken during the release process, something
that Ralf has done a commendable job on.

This shouldn't rule out progress and improvements in the library. On the
contrary, changes to such a core library can have just as many positive
ripple effects on its dependencies as they can cause turmoil. The
development trunk will, out of necessity, cause more turmoil than release
versions; otherwise there's no way to see whether a change affects a lot
of code out there or whether its effects will be relatively minor.
I appreciate everyone out there that's running the code in master, producing
bug reports and in some cases pull requests based on their testing.

NumPy isn't a young and agile project, that's correct. It does however have
the potential to be agile. Many of the changes I've done deeper in the core
are with the aim of allowing NumPy to evolve more quickly without having to
change the exposed ABI and API.

Cheers,
Mark


>
> My two cents,
>
> Gaël


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Gael Varoquaux
On Wed, Jul 27, 2011 at 04:59:17PM -0500, Mark Wiebe wrote:
>but ultimately NumPy needs the ability to change its repr and other
>details like it in order to progress as a software project.

You have to understand that numpy is the core layer on which people have
built pretty huge scientific codebases with fairly little money flowing
in to support them. Any minor change to numpy causes turmoil in labs and
is dealt with by people (students or researchers) in their spare time. I
am not saying that there should not be any changes to numpy, just that
the costs and benefits of these changes need to be weighed carefully.
Numpy is not a young and agile project; it's a foundation library.

My two cents, 

Gaël


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 4:32 PM, Matthew Brett wrote:

> Hi,
>
> On Wed, Jul 27, 2011 at 1:12 PM, Mark Wiebe  wrote:
> > On Wed, Jul 27, 2011 at 3:09 PM, Robert Kern 
> wrote:
> >>
> >> On Wed, Jul 27, 2011 at 14:47, Mark Wiebe  wrote:
> >> > On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett <
> matthew.br...@gmail.com>
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe 
> wrote:
> >> >> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett
> >> >> > 
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe 
> >> >> >> wrote:
> >> >> >> > This was the most consistent way to deal with the parameterized
> >> >> >> > dtype
> >> >> >> > in
> >> >> >> > the
> >> >> >> > repr, making it more future-proof at the same time. It was
> >> >> >> > producing
> >> >> >> > reprs
> >> >> >> > like "array(['2011-01-01'], dtype=datetime64[D])", which is
> >> >> >> > clearly
> >> >> >> > wrong,
> >> >> >> > and putting quotes around it makes it work in general for all
> >> >> >> > possible
> >> >> >> > dtypes, present and future.
> >> >> >>
> >> >> >> I don't know about you, but I find maintaining doctests across
> >> >> >> versions changes rather tricky.  For our projects, doctests are
> >> >> >> important as part of the automated tests.  At the moment this
> means
> >> >> >> that many doctests will break between 1.5.1 and 2.0.  What do you
> >> >> >> think the best way round this problem?
> >> >> >
> >> >> > I'm not sure what the best approach is. I think the primary use of
> >> >> > doctests
> >> >> > should be to validate that the documentation matches the
> >> >> > implementation,
> >> >> > and
> >> >> > anything confirming aspects of a software system should be regular
> >> >> > tests.
> >> >> >  In NumPy, there are platform-dependent differences in 32 vs 64 bit
> >> >> > and
> >> >> > big
> >> >> > vs little endian, so the part of the system that changed already
> >> >> > couldn't be
> >> >> > relied on consistently. I prefer systems where the code output in
> the
> >> >> > documentation is generated as part of the documentation build
> process
> >> >> > instead of being included in the documentation source files.
> >> >>
> >> >> Would it be fair to summarize your reply as 'just deal with it'?
> >> >
> >> > I'm not sure what else I can do to help you, since I think this aspect
> >> > of
> >> > the system should be subject to arbitrary improvement. My
> recommendation
> >> > is
> >> > in general not to use doctests as if they were regular tests. I'd
> rather
> >> > not
> >> > back out the improvements to repr, if that's what you're suggesting
> >> > should
> >> > happen. Do you have any other ideas?
> >>
> >> In general, I tend to agree that doctests are not always appropriate.
> >> They tend to "overtest" and express things that the tester did not
> >> intend. It's just the nature of doctests that you have to accept if
> >> you want to use them. In this case, the tester wanted to test that the
> >> contents of the array were particular values and that it was a boolean
> >> array. Instead, it tested the precise bytes of the repr of the array.
> >> The repr of ndarrays are not a stable API, and we don't make
> >> guarantees about the precise details of its behavior from version to
> >> version. doctests work better to test simpler types and methods that
> >> do not have such complicated reprs. Yes, even as part of an automated
> >> test suite for functionality, not just to ensure the compliance of
> >> documentation examples.
> >>
> >> That said, you could only quote the dtypes that require the extra
> >> [syntax] and leave the current, simpler dtypes alone. That's a
> >> pragmatic compromise to the reality of the situation, which is that
> >> people do have extensive doctest suites already around, without
> >> removing your ability to innovate with the representations of the new
> >> dtypes.
> >
> > That sounds reasonable to me, and I'm happy to review pull requests from
> > anyone who has time to do this change.
>
> Forgive me, but this seems almost ostentatiously unhelpful.
>

I was offering to help, I think you're reading between the lines too much.
The kind of response I was trying to invite is more along the lines of "I'd
like to help, but I'm not sure where to start. Can you give me some
pointers?"

I understand you have little sympathy for the problem, but, just as a
> social courtesy, some pointers as to where to look would have been
> useful.
>

I do have sympathy for the problem, dealing with bad design decisions made
early on in software projects is pretty common. In this case what Robert
proposed is a good temporary solution, but ultimately NumPy needs the
ability to change its repr and other details like it in order to progress as
a software project.

If I recall correctly the relevant functions are in Python and called
array_repr and array2string, and they're in some of the files in numpy/core.
I don't remember the fil

Re: [Numpy-discussion] inconsistent semantics for double-slicing

2011-07-27 Thread Wes McKinney
On Wed, Jul 27, 2011 at 5:36 PM, Alex Flint  wrote:
> When applying two different slicing operations in succession (e.g. select a
> sub-range, then select using a binary mask) it seems that numpy arrays can
> be inconsistent with respect to assignment:
> For example, in this case an array is modified:
> In [6]: A = np.arange(5)
> In [8]: A[:][A>2] = 0
> In [10]: A
> Out[10]: array([0, 1, 2, 0, 0])
> Whereas here the original array remains unchanged
> In [11]: A = np.arange(5)
> In [12]: A[[0,1,2,3,4]][A>2] = 0
> In [13]: A
> Out[13]: array([0, 1, 2, 3, 4])
> This arose in a less contrived situation in which I was trying to copy a
> small image into a large image, modulo a mask on the small image.
> Is this meant to be like this?
> Alex
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>

When you do this:

A[[0,1,2,3,4]][A>2] = 0

what is happening is:

A.__getitem__([0,1,2,3,4]).__setitem__(A > 2, 0)

Whenever you do getitem with "fancy" indexing (i.e. A[[0,1,2,3,4]]),
it produces a new object. In the first case, slicing A[:] produces a
view on the same data.
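To make that concrete (a sketch; `idx` is just an illustrative name for the combined-index approach):

```python
import numpy as np

A = np.arange(5)
A[:][A > 2] = 0                  # A[:] is a view; the assignment writes into A
assert A.tolist() == [0, 1, 2, 0, 0]

B = np.arange(5)
B[[0, 1, 2, 3, 4]][B > 2] = 0    # fancy indexing returns a copy; B is untouched
assert B.tolist() == [0, 1, 2, 3, 4]

# One way to combine a fancy index with a mask and still modify B:
idx = np.array([0, 1, 2, 3, 4])
B[idx[B > 2]] = 0                # index B directly with the selected positions
assert B.tolist() == [0, 1, 2, 0, 0]
```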

- Wes
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Christopher Jordan-Squire
On Wed, Jul 27, 2011 at 3:09 PM, Robert Kern  wrote:

> On Wed, Jul 27, 2011 at 14:47, Mark Wiebe  wrote:
> > On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett 
> > wrote:
> >>
> >> Hi,
> >>
> >> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
> >> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett <
> matthew.br...@gmail.com>
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe 
> wrote:
> >> >> > This was the most consistent way to deal with the parameterized
> >> >> > dtype in the repr, making it more future-proof at the same time.
> >> >> > It was producing reprs like "array(['2011-01-01'],
> >> >> > dtype=datetime64[D])", which is clearly wrong, and putting quotes
> >> >> > around it makes it work in general for all possible dtypes,
> >> >> > present and future.
> >> >>
> >> >> I don't know about you, but I find maintaining doctests across
> >> >> version changes rather tricky.  For our projects, doctests are
> >> >> important as part of the automated tests.  At the moment this means
> >> >> that many doctests will break between 1.5.1 and 2.0.  What do you
> >> >> think is the best way round this problem?
> >> >
> >> > I'm not sure what the best approach is. I think the primary use of
> >> > doctests should be to validate that the documentation matches the
> >> > implementation, and anything confirming aspects of a software system
> >> > should be regular tests. In NumPy, there are platform-dependent
> >> > differences in 32 vs 64 bit and big vs little endian, so the part of
> >> > the system that changed already couldn't be relied on consistently.
> >> > I prefer systems where the code output in the documentation is
> >> > generated as part of the documentation build process instead of being
> >> > included in the documentation source files.
> >>
> >> Would it be fair to summarize your reply as 'just deal with it'?
> >
> > I'm not sure what else I can do to help you, since I think this aspect
> > of the system should be subject to arbitrary improvement. My
> > recommendation is in general not to use doctests as if they were regular
> > tests. I'd rather not back out the improvements to repr, if that's what
> > you're suggesting should happen. Do you have any other ideas?
>
> In general, I tend to agree that doctests are not always appropriate.
> They tend to "overtest" and express things that the tester did not
> intend. It's just the nature of doctests that you have to accept if
> you want to use them. In this case, the tester wanted to test that the
> contents of the array were particular values and that it was a boolean
> array. Instead, it tested the precise bytes of the repr of the array.
> The repr of ndarrays are not a stable API, and we don't make
> guarantees about the precise details of its behavior from version to
> version. doctests work better to test simpler types and methods that
> do not have such complicated reprs. Yes, even as part of an automated
> test suite for functionality, not just to ensure the compliance of
> documentation examples.
>
> That said, you could only quote the dtypes that require the extra
> [syntax] and leave the current, simpler dtypes alone. That's a
> pragmatic compromise to the reality of the situation, which is that
> people do have extensive doctest suites already around, without
> removing your ability to innovate with the representations of the new
> dtypes.
>
>
+1

-Chris JS



> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>


[Numpy-discussion] C-API: multidimensional array indexing?

2011-07-27 Thread Johann Bauer
Dear experts,

is there a C-API function for numpy which implements Python's 
multidimensional indexing? Say, I have a 2d-array

   PyArrayObject * M;

and an index

   int i;

how do I extract the i-th row or column M[i,:] respectively M[:,i]?

I am looking for a function which gives again a PyArrayObject * and 
which is a view to M (no copied data; the result should be another 
PyArrayObject whose data and strides points to the correct memory 
portion of M).

I searched the API documentation, Google and mailing lists for quite a 
long time but didn't find anything. Can you help me?

Thanks, Johann


[Numpy-discussion] inconsistent semantics for double-slicing

2011-07-27 Thread Alex Flint
When applying two different slicing operations in succession (e.g. select a
sub-range, then select using a binary mask) it seems that numpy arrays can
be inconsistent with respect to assignment:

For example, in this case an array is modified:
In [6]: A = np.arange(5)
In [8]: A[:][A>2] = 0
In [10]: A
Out[10]: array([0, 1, 2, 0, 0])

Whereas here the original array remains unchanged:
In [11]: A = np.arange(5)
In [12]: A[[0,1,2,3,4]][A>2] = 0
In [13]: A
Out[13]: array([0, 1, 2, 3, 4])

This arose in a less contrived situation in which I was trying to copy a
small image into a large image, modulo a mask on the small image.

Is this meant to be like this?

Alex


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Matthew Brett
Hi,

On Wed, Jul 27, 2011 at 1:12 PM, Mark Wiebe  wrote:
> On Wed, Jul 27, 2011 at 3:09 PM, Robert Kern  wrote:
>>
>> On Wed, Jul 27, 2011 at 14:47, Mark Wiebe  wrote:
>> > On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
>> >> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett
>> >> > 
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe 
>> >> >> wrote:
>> >> >> > This was the most consistent way to deal with the parameterized
>> >> >> > dtype
>> >> >> > in
>> >> >> > the
>> >> >> > repr, making it more future-proof at the same time. It was
>> >> >> > producing
>> >> >> > reprs
>> >> >> > like "array(['2011-01-01'], dtype=datetime64[D])", which is
>> >> >> > clearly
>> >> >> > wrong,
>> >> >> > and putting quotes around it makes it work in general for all
>> >> >> > possible
>> >> >> > dtypes, present and future.
>> >> >>
>> >> >> I don't know about you, but I find maintaining doctests across
>> >> >> version changes rather tricky.  For our projects, doctests are
>> >> >> important as part of the automated tests.  At the moment this means
>> >> >> that many doctests will break between 1.5.1 and 2.0.  What do you
>> >> >> think is the best way round this problem?
>> >> >
>> >> > I'm not sure what the best approach is. I think the primary use of
>> >> > doctests
>> >> > should be to validate that the documentation matches the
>> >> > implementation,
>> >> > and
>> >> > anything confirming aspects of a software system should be regular
>> >> > tests.
>> >> >  In NumPy, there are platform-dependent differences in 32 vs 64 bit
>> >> > and
>> >> > big
>> >> > vs little endian, so the part of the system that changed already
>> >> > couldn't be
>> >> > relied on consistently. I prefer systems where the code output in the
>> >> > documentation is generated as part of the documentation build process
>> >> > instead of being included in the documentation source files.
>> >>
>> >> Would it be fair to summarize your reply as 'just deal with it'?
>> >
>> > I'm not sure what else I can do to help you, since I think this aspect
>> > of
>> > the system should be subject to arbitrary improvement. My recommendation
>> > is
>> > in general not to use doctests as if they were regular tests. I'd rather
>> > not
>> > back out the improvements to repr, if that's what you're suggesting
>> > should
>> > happen. Do you have any other ideas?
>>
>> In general, I tend to agree that doctests are not always appropriate.
>> They tend to "overtest" and express things that the tester did not
>> intend. It's just the nature of doctests that you have to accept if
>> you want to use them. In this case, the tester wanted to test that the
>> contents of the array were particular values and that it was a boolean
>> array. Instead, it tested the precise bytes of the repr of the array.
>> The repr of ndarrays are not a stable API, and we don't make
>> guarantees about the precise details of its behavior from version to
>> version. doctests work better to test simpler types and methods that
>> do not have such complicated reprs. Yes, even as part of an automated
>> test suite for functionality, not just to ensure the compliance of
>> documentation examples.
>>
>> That said, you could only quote the dtypes that require the extra
>> [syntax] and leave the current, simpler dtypes alone. That's a
>> pragmatic compromise to the reality of the situation, which is that
>> people do have extensive doctest suites already around, without
>> removing your ability to innovate with the representations of the new
>> dtypes.
>
> That sounds reasonable to me, and I'm happy to review pull requests from
> anyone who has time to do this change.

Forgive me, but this seems almost ostentatiously unhelpful.

I understand you have little sympathy for the problem, but, just as a
social courtesy, some pointers as to where to look would have been
useful.

See you,

Matthew


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ilan Schnell
> Please don't distribute a different numpy binary for each version of
> MacOS X.
+1

Maybe I should mention that I just finished testing all Python
packages in EPD under 10.7, and everything (except numpy.sqrt
for weird complex values such as inf/nan) works fine!
In particular building C and Fortran extensions with the new LLVM
based gcc and importing them into Python (both 32 and 64-bit).
There are two MacOS builds of EPD (one 32-bit, one 64-bit); they
are compiled on 10.5 using gcc 4.0.1 and then tested on 10.5, 10.6
and 10.7.

- Ilan


On Wed, Jul 27, 2011 at 3:23 PM, Christopher Barker
 wrote:
> On 7/27/11 12:35 PM, Ralf Gommers wrote:
>>     Please don't distribute a different numpy binary for each version of
>>     MacOS X.
>
> +1 !
>
>> If 10.6-built binaries are going to work without problems on 10.7 - also
>> for scipy - then two versions is enough. I'm not yet confident this will
>> be the case though.
>
> Unless Apple has really broken things (and they usually don't in this
> way), that should be fine.
>
> However, a potential problem can arise when folks want to build their own extensions
> against the python.org (and numpy) binaries.
>
> As I understand it, you can not build extensions to the 32 bit 10.3
> binary on Lion, because Apple has not distributed the 10.4 sdk with
> XCode (nor does it support PPC compilation)
>
> But I think the 10.6+ binaries are fine.
>
> (I wish we had 10.5+ Intel-only binaries, as I still need to support
> 10.5, but there are reasons that wasn't done)
>
> No Lion here just yet, so I can't test -- hopefully soon.
>
>> Do the tests for the current 10.6 scipy installer
>> pass on 10.7? And do the 10.3-and-up Python 2.7 and 3.2 binaries work on
>> 10.7? Those are explicitly listed as 10.3-10.6 (not 10.7 ...).
>
> they'll work (the 10.6, not 10.7 is because 10.7 didn't exist yet) --
> with the exception of the above, which is, unfortunately, a common
> numpy/scipy use case.
>
>
> -Chris
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Christopher Barker
On 7/27/11 12:35 PM, Ralf Gommers wrote:
> Please don't distribute a different numpy binary for each version of
> MacOS X.

+1 !

> If 10.6-built binaries are going to work without problems on 10.7 - also
> for scipy - then two versions is enough. I'm not yet confident this will
> be the case though.

Unless Apple has really broken things (and they usually don't in this 
way), that should be fine.

However, a potential problem can arise when folks want to build their own extensions 
against the python.org (and numpy) binaries.

As I understand it, you can not build extensions to the 32 bit 10.3 
binary on Lion, because Apple has not distributed the 10.4 sdk with 
XCode (nor does it support PPC compilation)

But I think the 10.6+ binaries are fine.

(I wish we had 10.5+ Intel-only binaries, as I still need to support 
10.5, but there are reasons that wasn't done)

No Lion here just yet, so I can't test -- hopefully soon.

> Do the tests for the current 10.6 scipy installer
> pass on 10.7? And do the 10.3-and-up Python 2.7 and 3.2 binaries work on
> 10.7? Those are explicitly listed as 10.3-10.6 (not 10.7 ...).

they'll work (the 10.6, not 10.7 is because 10.7 didn't exist yet) -- 
with the exception of the above, which is, unfortunately, a common 
numpy/scipy use case.


-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 3:09 PM, Robert Kern  wrote:

> On Wed, Jul 27, 2011 at 14:47, Mark Wiebe  wrote:
> > On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett 
> > wrote:
> >>
> >> Hi,
> >>
> >> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
> >> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett <
> matthew.br...@gmail.com>
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe 
> wrote:
> >> >> > This was the most consistent way to deal with the parameterized
> >> >> > dtype in the repr, making it more future-proof at the same time.
> >> >> > It was producing reprs like "array(['2011-01-01'],
> >> >> > dtype=datetime64[D])", which is clearly wrong, and putting quotes
> >> >> > around it makes it work in general for all possible dtypes,
> >> >> > present and future.
> >> >>
> >> >> I don't know about you, but I find maintaining doctests across
> >> >> version changes rather tricky.  For our projects, doctests are
> >> >> important as part of the automated tests.  At the moment this means
> >> >> that many doctests will break between 1.5.1 and 2.0.  What do you
> >> >> think is the best way round this problem?
> >> >
> >> > I'm not sure what the best approach is. I think the primary use of
> >> > doctests should be to validate that the documentation matches the
> >> > implementation, and anything confirming aspects of a software system
> >> > should be regular tests. In NumPy, there are platform-dependent
> >> > differences in 32 vs 64 bit and big vs little endian, so the part of
> >> > the system that changed already couldn't be relied on consistently.
> >> > I prefer systems where the code output in the documentation is
> >> > generated as part of the documentation build process instead of being
> >> > included in the documentation source files.
> >>
> >> Would it be fair to summarize your reply as 'just deal with it'?
> >
> > I'm not sure what else I can do to help you, since I think this aspect
> > of the system should be subject to arbitrary improvement. My
> > recommendation is in general not to use doctests as if they were regular
> > tests. I'd rather not back out the improvements to repr, if that's what
> > you're suggesting should happen. Do you have any other ideas?
>
> In general, I tend to agree that doctests are not always appropriate.
> They tend to "overtest" and express things that the tester did not
> intend. It's just the nature of doctests that you have to accept if
> you want to use them. In this case, the tester wanted to test that the
> contents of the array were particular values and that it was a boolean
> array. Instead, it tested the precise bytes of the repr of the array.
> The repr of ndarrays are not a stable API, and we don't make
> guarantees about the precise details of its behavior from version to
> version. doctests work better to test simpler types and methods that
> do not have such complicated reprs. Yes, even as part of an automated
> test suite for functionality, not just to ensure the compliance of
> documentation examples.
>
> That said, you could only quote the dtypes that require the extra
> [syntax] and leave the current, simpler dtypes alone. That's a
> pragmatic compromise to the reality of the situation, which is that
> people do have extensive doctest suites already around, without
> removing your ability to innovate with the representations of the new
> dtypes.
>

That sounds reasonable to me, and I'm happy to review pull requests from
anyone who has time to do this change.

-Mark


>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Robert Kern
On Wed, Jul 27, 2011 at 14:47, Mark Wiebe  wrote:
> On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett 
> wrote:
>>
>> Hi,
>>
>> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
>> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe  wrote:
>> >> > This was the most consistent way to deal with the parameterized dtype
>> >> > in
>> >> > the
>> >> > repr, making it more future-proof at the same time. It was producing
>> >> > reprs
>> >> > like "array(['2011-01-01'], dtype=datetime64[D])", which is clearly
>> >> > wrong,
>> >> > and putting quotes around it makes it work in general for all
>> >> > possible
>> >> > dtypes, present and future.
>> >>
>> >> I don't know about you, but I find maintaining doctests across
>> >> version changes rather tricky.  For our projects, doctests are
>> >> important as part of the automated tests.  At the moment this means
>> >> that many doctests will break between 1.5.1 and 2.0.  What do you
>> >> think is the best way round this problem?
>> >
>> > I'm not sure what the best approach is. I think the primary use of
>> > doctests
>> > should be to validate that the documentation matches the implementation,
>> > and
>> > anything confirming aspects of a software system should be regular
>> > tests.
>> >  In NumPy, there are platform-dependent differences in 32 vs 64 bit and
>> > big
>> > vs little endian, so the part of the system that changed already
>> > couldn't be
>> > relied on consistently. I prefer systems where the code output in the
>> > documentation is generated as part of the documentation build process
>> > instead of being included in the documentation source files.
>>
>> Would it be fair to summarize your reply as 'just deal with it'?
>
> I'm not sure what else I can do to help you, since I think this aspect of
> the system should be subject to arbitrary improvement. My recommendation is
> in general not to use doctests as if they were regular tests. I'd rather not
> back out the improvements to repr, if that's what you're suggesting should
> happen. Do you have any other ideas?

In general, I tend to agree that doctests are not always appropriate.
They tend to "overtest" and express things that the tester did not
intend. It's just the nature of doctests that you have to accept if
you want to use them. In this case, the tester wanted to test that the
contents of the array were particular values and that it was a boolean
array. Instead, it tested the precise bytes of the repr of the array.
The repr of ndarrays are not a stable API, and we don't make
guarantees about the precise details of its behavior from version to
version. doctests work better to test simpler types and methods that
do not have such complicated reprs. Yes, even as part of an automated
test suite for functionality, not just to ensure the compliance of
documentation examples.
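To make the distinction concrete, here is the same check written as a regular test instead of a doctest (a sketch):

```python
import numpy as np

# A doctest would pin the exact repr bytes:
#     >>> np.arange(5) > 2
#     array([False, False, False,  True,  True])
# A regular test pins only the properties that actually matter,
# the dtype and the contents, and survives repr changes:
a = np.arange(5) > 2
assert a.dtype == np.bool_
assert a.tolist() == [False, False, False, True, True]
```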

That said, you could only quote the dtypes that require the extra
[syntax] and leave the current, simpler dtypes alone. That's a
pragmatic compromise to the reality of the situation, which is that
people do have extensive doctest suites already around, without
removing your ability to innovate with the representations of the new
dtypes.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 2:44 PM, Matthew Brett wrote:

> Hi,
>
> On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
> > On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett 
> > wrote:
> >>
> >> Hi,
> >>
> >> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe  wrote:
> >> >> > This was the most consistent way to deal with the parameterized
> >> >> > dtype in the repr, making it more future-proof at the same time.
> >> >> > It was producing reprs like "array(['2011-01-01'],
> >> >> > dtype=datetime64[D])", which is clearly wrong, and putting quotes
> >> >> > around it makes it work in general for all possible dtypes,
> >> >> > present and future.
> >>
> >> I don't know about you, but I find maintaining doctests across
> >> version changes rather tricky.  For our projects, doctests are
> >> important as part of the automated tests.  At the moment this means
> >> that many doctests will break between 1.5.1 and 2.0.  What do you
> >> think is the best way round this problem?
> >
> > I'm not sure what the best approach is. I think the primary use of
> > doctests should be to validate that the documentation matches the
> > implementation, and anything confirming aspects of a software system
> > should be regular tests. In NumPy, there are platform-dependent
> > differences in 32 vs 64 bit and big vs little endian, so the part of
> > the system that changed already couldn't be relied on consistently.
> > I prefer systems where the code output in the documentation is
> > generated as part of the documentation build process instead of being
> > included in the documentation source files.
>
> Would it be fair to summarize your reply as 'just deal with it'?
>

I'm not sure what else I can do to help you, since I think this aspect of
the system should be subject to arbitrary improvement. My recommendation is
in general not to use doctests as if they were regular tests. I'd rather not
back out the improvements to repr, if that's what you're suggesting should
happen. Do you have any other ideas?

-Mark


>
> See you,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Matthew Brett
Hi,

On Wed, Jul 27, 2011 at 12:25 PM, Mark Wiebe  wrote:
> On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett 
> wrote:
>>
>> Hi,
>>
>> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe  wrote:
>> > This was the most consistent way to deal with the parameterized dtype in
>> > the
>> > repr, making it more future-proof at the same time. It was producing
>> > reprs
>> > like "array(['2011-01-01'], dtype=datetime64[D])", which is clearly
>> > wrong,
>> > and putting quotes around it makes it work in general for all possible
>> > dtypes, present and future.
>>
>> I don't know about you, but I find maintaining doctests across
>> version changes rather tricky.  For our projects, doctests are
>> important as part of the automated tests.  At the moment this means
>> that many doctests will break between 1.5.1 and 2.0.  What do you
>> think is the best way round this problem?
>
> I'm not sure what the best approach is. I think the primary use of doctests
> should be to validate that the documentation matches the implementation, and
> anything confirming aspects of a software system should be regular tests.
>  In NumPy, there are platform-dependent differences in 32 vs 64 bit and big
> vs little endian, so the part of the system that changed already couldn't be
> relied on consistently. I prefer systems where the code output in the
> documentation is generated as part of the documentation build process
> instead of being included in the documentation source files.

Would it be fair to summarize your reply as 'just deal with it'?

See you,

Matthew


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ralf Gommers
On Wed, Jul 27, 2011 at 9:00 PM, Russell E. Owen  wrote:

> In article
> ,
>  Ralf Gommers  wrote:
>
> > > On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell  wrote:
> >
> > > MacOS Lion:
> > > >>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
> > > array([ nan+infj])
> > >
> > > On all other systems:
> > > array([ inf+infj])
> > >
> > > This causes a few numpy tests to fail on Lion.  The numpy
> > > was not compiled using the new LLVM based gcc, it is the
> > > same numpy binary I used on other MacOS systems, which
> > > was compiled using gcc-4.0.1.  However on Lion it is linked
> > > to Lions LLVM based gcc runtime, which apparently has some
> > > different behavior when it comes to such strange complex
> > > values.
> > >
> > These types of complex corner cases fail on several other platforms;
> > there they are marked as skipped. I propose not to start changing this
> > yet - the compiler change is causing problems with scipy
> > (http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear
> > what the recommended build setup on Lion should be.
> >
> > Regarding binaries, it may be better to distribute separate ones for
> > each version of OS X from numpy 1.7 / 2.0 (we already do for python
> > 2.7). In that case this particular failure will not occur.
>
> Please don't distribute a different numpy binary for each version of
> MacOS X. That makes it very difficult to distribute bundled applications.
>
> The current situation is very reasonable, in my opinion: numpy has two
> Mac binary distributions for Python 2.7: 32-bit 10.3-and-up and 64-bit
> 10.6-and-up. These match the python.org python distributions. I can't
> see wanting any more than one per python.org Mac binary.
>

If 10.6-built binaries are going to work without problems on 10.7 - also for
scipy - then two versions is enough. I'm not yet confident this will be the
case though. Do the tests for the current 10.6 scipy installer pass on 10.7?
And do the 10.3-and-up Python 2.7 and 3.2 binaries work on 10.7? Those are
explicitly listed as 10.3-10.6 (not 10.7 ...).


> Note that the numpy Mac binaries are not listed next to each other on
> the numpy sourceforge download page, so some folks are installing the
> wrong one.


That unfortunately can't be changed, unless I re-upload everything in the
desired order. The SF interface has been rewritten, but it's not much of an
improvement.

Ralf



> If you add even more os-specific flavors the problem is
> likely to get worse.
>
>


Re: [Numpy-discussion] nanmin() fails with 'TypeError: cannot reduce a scalar'. Numpy 1.6.0 regression?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 2:20 PM, Charles R Harris  wrote:

>
>
> On Wed, Jul 27, 2011 at 6:58 AM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Wed, Jul 27, 2011 at 2:49 AM, Mark Dickinson  wrote:
>>
>>> In NumPy 1.6.0, I get the following behaviour:
>>>
>>>
>>> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
>>> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
>>> Type "packages", "demo" or "enthought" for more information.
>>> >>> import numpy
>>> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
>>> Traceback (most recent call last):
>>>  File "", line 1, in 
>>>  File
>>> "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/lib/function_base.py",
>>> line 1507, in nanmin
>>>return np.fmin.reduce(a.flat)
>>> TypeError: cannot reduce on a scalar
>>> >>> numpy.__version__
>>> '1.6.0'
>>>
>>>
>>> In NumPy version 1.5.1:
>>>
>>> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
>>> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
>>> Type "packages", "demo" or "enthought" for more information.
>>> >>> import numpy
>>> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
>>> 1
>>> >>> numpy.__version__
>>> '1.5.1'
>>>
>>>
>>> Was this change intentional?
>>>
>>>
>> No, it comes from this
>>
>> In [2]: a = numpy.ma.masked_array([1,2,3,4])
>>
>> In [3]: array(a.flat)
>> Out[3]: array(<numpy.ma.core.MaskedIterator object at 0x...>,
>> dtype='object')
>>
>> i.e., the a.flat iterator is turned into an object array with one
>> element.  I'm not sure what the correct fix for this would be. Please open a
>> ticket.
>>
>>
> In fact, array no longer recognizes iterators, but a.flat works, so I
> assume the __array__ attribute of the array iterator is at work. I think
> nanmin needs to be fixed, because it used a.flat for speed, but it looks
> like something closer to 'asflat' is needed. In addition, array probably
> needs to be fixed to accept iterators, I think it used to.
>

I'd guess this slipped through with something I changed when I was in the
array construction part of the system, because the test suite doesn't
exercise this. Any NumPy behavior we want preserved needs tests!
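A minimal regression test along these lines (a sketch of the idea, not the actual test added to NumPy; the function name is mine) could look like:

```python
import numpy as np

def test_nanmin_masked_array():
    # Regression test sketch for the breakage discussed above: nanmin on a
    # masked array returned 1 in NumPy 1.5.1 and should keep doing so.
    a = np.ma.masked_array([1, 2, 3, 4])
    assert np.nanmin(a) == 1

test_nanmin_masked_array()
```

Had a test like this been in the suite, the 1.6.0 change to array construction would have failed it immediately.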

-Mark


>
> How did nanmin interact with the mask of masked arrays in earlier versions?
>
> Chuck
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
On Wed, Jul 27, 2011 at 1:01 PM, Matthew Brett wrote:

> Hi,
>
> On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe  wrote:
> > This was the most consistent way to deal with the parameterized dtype in
> the
> > repr, making it more future-proof at the same time. It was producing
> reprs
> > like "array(['2011-01-01'], dtype=datetime64[D])", which is clearly
> wrong,
> > and putting quotes around it makes it work in general for all possible
> > dtypes, present and future.
>
> I don't know about you, but I find maintaining doctests across
> version changes rather tricky.  For our projects, doctests are
> important as part of the automated tests.  At the moment this means
> that many doctests will break between 1.5.1 and 2.0.  What do you
> think is the best way around this problem?
>

I'm not sure what the best approach is. I think the primary use of doctests
should be to validate that the documentation matches the implementation;
anything confirming behavior of a software system should be regular tests.
In NumPy, there are platform-dependent differences between 32- and 64-bit
builds and big- vs little-endian machines, so the part of the output that
changed couldn't be relied on consistently anyway. I prefer systems where
the code output in the documentation is generated as part of the
documentation build process instead of being included in the documentation
source files.
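One way to cushion doctests against repr changes like this (a sketch; the regex and class name are my own, not NumPy or doctest API) is a doctest output checker that normalizes both spellings of the dtype before comparing:

```python
import doctest
import re

# Normalize "dtype='bool'" and "dtype=bool" to a single spelling so the
# same doctest passes under both the old and new repr conventions.
_DTYPE_RE = re.compile(r"dtype='?([^')\s,]+)'?")

class DtypeNormalizingChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        want = _DTYPE_RE.sub(r"dtype=\1", want)
        got = _DTYPE_RE.sub(r"dtype=\1", got)
        return doctest.OutputChecker.check_output(self, want, got, optionflags)
```

Such a checker can be passed to `doctest.DocTestRunner(checker=...)` so a project's doctests run unchanged against either NumPy version.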

Cheers,
Mark


>
> See you,
>
> Matthew


Re: [Numpy-discussion] nanmin() fails with 'TypeError: cannot reduce a scalar'. Numpy 1.6.0 regression?

2011-07-27 Thread Charles R Harris
On Wed, Jul 27, 2011 at 6:58 AM, Charles R Harris  wrote:

>
>
> On Wed, Jul 27, 2011 at 2:49 AM, Mark Dickinson 
> wrote:
>
>> In NumPy 1.6.0, I get the following behaviour:
>>
>>
>> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
>> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
>> Type "packages", "demo" or "enthought" for more information.
>> >>> import numpy
>> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
>> Traceback (most recent call last):
>>  File "", line 1, in 
>>  File
>> "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/lib/function_base.py",
>> line 1507, in nanmin
>>return np.fmin.reduce(a.flat)
>> TypeError: cannot reduce on a scalar
>> >>> numpy.__version__
>> '1.6.0'
>>
>>
>> In NumPy version 1.5.1:
>>
>> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
>> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
>> Type "packages", "demo" or "enthought" for more information.
>> >>> import numpy
>> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
>> 1
>> >>> numpy.__version__
>> '1.5.1'
>>
>>
>> Was this change intentional?
>>
>>
> No, it comes from this
>
> In [2]: a = numpy.ma.masked_array([1,2,3,4])
>
> In [3]: array(a.flat)
> Out[3]: array(<numpy.ma.core.MaskedIterator object at 0x...>,
> dtype=object)
>
> i.e., the a.flat iterator is turned into an object array with one element.
> I'm not sure what the correct fix for this would be. Please open a ticket.
>
>
In fact, array no longer recognizes iterators, but a.flat works, so I assume
the __array__ attribute of the array iterator is at work. I think nanmin
needs to be fixed, because it used a.flat for speed, but it looks like
something closer to 'asflat' is needed. In addition, array probably needs to
be fixed to accept iterators, I think it used to.
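A possible fix along these lines (a sketch of the idea only, not the actual patch; note `asflat` does not exist, so this uses `ravel()` instead) is to reduce over a real flattened array rather than the `.flat` iterator:

```python
import numpy as np

def nanmin_sketch(a):
    # ravel() returns a real (possibly masked) 1-d array, so fmin.reduce
    # never sees the .flat iterator that np.array() in 1.6.0 wraps into a
    # 0-d object array. asanyarray preserves masked-array subclasses.
    return np.fmin.reduce(np.asanyarray(a).ravel())
```

This keeps the speed benefit of a flat reduction while working for subclasses like masked arrays; fmin itself already ignores NaNs during the reduction.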

How did nanmin interact with the mask of masked arrays in earlier versions?

Chuck


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Russell E. Owen
In article 
,
 Ralf Gommers  wrote:

> On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell wrote:
> 
> > MacOS Lion:
> > >>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
> > array([ nan+infj])
> >
> > On all other systems:
> > array([ inf+infj])
> >
> > This causes a few numpy tests to fail on Lion.  The numpy
> > was not compiled using the new LLVM-based gcc; it is the
> > same numpy binary I used on other MacOS systems, which
> > was compiled using gcc-4.0.1.  However, on Lion it is linked
> > to Lion's LLVM-based gcc runtime, which apparently has some
> > different behavior when it comes to such strange complex
> > values.
> >
> These types of complex corner cases fail on several other platforms, where
> they are marked as skipped. I propose not to start changing this yet - the
> compiler change is causing problems with scipy (
> http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear what the
> recommended build setup on Lion should be.
> 
> Regarding binaries, it may be better to distribute separate ones for each
> version of OS X from numpy 1.7 / 2.0 (we already do for python 2.7). In that
> case this particular failure will not occur.

Please don't distribute a different numpy binary for each version of 
MacOS X. That makes it very difficult to distribute bundled applications.

The current situation is very reasonable, in my opinion: numpy has two 
Mac binary distributions for Python 2.7: 32-bit 10.3-and-up and 64-bit 
10.6-and-up. These match the python.org python distributions. I can't 
see wanting any more than one per python.org Mac binary.

Note that the numpy Mac binaries are not listed next to each other on 
the numpy sourceforge download page, so some folks are installing the 
wrong one. If you add even more os-specific flavors the problem is 
likely to get worse.

-- Russell



Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ralf Gommers
On Wed, Jul 27, 2011 at 8:18 PM, Ilan Schnell wrote:

> Thanks for your quick response, Ralf.  Regarding binaries, we are
> trying to avoid different EPD binaries for different versions of OSX,
> as maintaining/distributing/testing more binaries is quite expensive.
>
Agreed, it can be expensive. However, for numpy the main time sink is
maintaining 10.5/10.6/10.7 systems and build environments on them, which
probably can't be avoided anyway. Building and uploading binaries is quick.
And more reliable if build_OS_version == usage_OS_version.

I can imagine for EPD the situation is different because it's so large.

Ralf


>
> On Wed, Jul 27, 2011 at 12:58 PM, Ralf Gommers
>  wrote:
> >
> >
> > On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell 
> > wrote:
> >>
> >> MacOS Lion:
> >> >>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
> >> array([ nan+infj])
> >>
> >> On all other systems:
> >> array([ inf+infj])
> >>
> >> This causes a few numpy tests to fail on Lion.  The numpy
> >> was not compiled using the new LLVM-based gcc; it is the
> >> same numpy binary I used on other MacOS systems, which
> >> was compiled using gcc-4.0.1.  However, on Lion it is linked
> >> to Lion's LLVM-based gcc runtime, which apparently has some
> >> different behavior when it comes to such strange complex
> >> values.
> >>
> > These types of complex corner cases fail on several other platforms, where
> > they are marked as skipped. I propose not to start changing this yet -
> the
> > compiler change is causing problems with scipy
> > (http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear
> what
> > the recommended build setup on Lion should be.
> >
> > Regarding binaries, it may be better to distribute separate ones for each
> > version of OS X from numpy 1.7 / 2.0 (we already do for python 2.7). In
> that
> > case this particular failure will not occur.
> >
> > Cheers,
> > Ralf
> >
> >


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ilan Schnell
Thanks for your quick response, Ralf.  Regarding binaries, we are
trying to avoid different EPD binaries for different versions of OSX,
as maintaining/distributing/testing more binaries is quite expensive.

- Ilan


On Wed, Jul 27, 2011 at 12:58 PM, Ralf Gommers
 wrote:
>
>
> On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell 
> wrote:
>>
>> MacOS Lion:
>> >>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
>> array([ nan+infj])
>>
>> On all other systems:
>> array([ inf+infj])
>>
>> This causes a few numpy tests to fail on Lion.  The numpy
>> was not compiled using the new LLVM-based gcc; it is the
>> same numpy binary I used on other MacOS systems, which
>> was compiled using gcc-4.0.1.  However, on Lion it is linked
>> to Lion's LLVM-based gcc runtime, which apparently has some
>> different behavior when it comes to such strange complex
>> values.
>>
> These types of complex corner cases fail on several other platforms, where
> they are marked as skipped. I propose not to start changing this yet - the
> compiler change is causing problems with scipy
> (http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear what
> the recommended build setup on Lion should be.
>
> Regarding binaries, it may be better to distribute separate ones for each
> version of OS X from numpy 1.7 / 2.0 (we already do for python 2.7). In that
> case this particular failure will not occur.
>
> Cheers,
> Ralf
>
>


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Matthew Brett
Hi,

On Wed, Jul 27, 2011 at 6:54 PM, Mark Wiebe  wrote:
> This was the most consistent way to deal with the parameterized dtype in the
> repr, making it more future-proof at the same time. It was producing reprs
> like "array(['2011-01-01'], dtype=datetime64[D])", which is clearly wrong,
> and putting quotes around it makes it work in general for all possible
> dtypes, present and future.

I don't know about you, but I find maintaining doctests across
version changes rather tricky.  For our projects, doctests are
important as part of the automated tests.  At the moment this means
that many doctests will break between 1.5.1 and 2.0.  What do you
think is the best way around this problem?

See you,

Matthew


Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ralf Gommers
On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell wrote:

> MacOS Lion:
> >>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
> array([ nan+infj])
>
> On all other systems:
> array([ inf+infj])
>
> This causes a few numpy tests to fail on Lion.  The numpy
> was not compiled using the new LLVM-based gcc; it is the
> same numpy binary I used on other MacOS systems, which
> was compiled using gcc-4.0.1.  However, on Lion it is linked
> to Lion's LLVM-based gcc runtime, which apparently has some
> different behavior when it comes to such strange complex
> values.
>
These types of complex corner cases fail on several other platforms, where
they are marked as skipped. I propose not to start changing this yet - the
compiler change is causing problems with scipy (
http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear what the
recommended build setup on Lion should be.

Regarding binaries, it may be better to distribute separate ones for each
version of OS X from numpy 1.7 / 2.0 (we already do for python 2.7). In that
case this particular failure will not occur.

Cheers,
Ralf


Re: [Numpy-discussion] dtype repr change?

2011-07-27 Thread Mark Wiebe
This was the most consistent way to deal with the parameterized dtype in the
repr, making it more future-proof at the same time. It was producing reprs
like "array(['2011-01-01'], dtype=datetime64[D])", which is clearly wrong,
and putting quotes around it makes it work in general for all possible
dtypes, present and future.

-Mark

On Wed, Jul 27, 2011 at 12:50 PM, Matthew Brett wrote:

> Hi,
>
> I see that (current trunk):
>
> In [9]: np.ones((1,), dtype=bool)
> Out[9]: array([ True], dtype='bool')
>
> - whereas (1.5.1):
>
> In [2]: np.ones((1,), dtype=bool)
> Out[2]: array([ True], dtype=bool)
>
> That is breaking quite a few doctests.   What is the reason for the
> change?  Something to do with more planned dtypes?
>
> Thanks a lot,
>
> Matthew


[Numpy-discussion] dtype repr change?

2011-07-27 Thread Matthew Brett
Hi,

I see that (current trunk):

In [9]: np.ones((1,), dtype=bool)
Out[9]: array([ True], dtype='bool')

- whereas (1.5.1):

In [2]: np.ones((1,), dtype=bool)
Out[2]: array([ True], dtype=bool)

That is breaking quite a few doctests.   What is the reason for the
change?  Something to do with more planned dtypes?

Thanks a lot,

Matthew


[Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion

2011-07-27 Thread Ilan Schnell
MacOS Lion:
>>> numpy.sqrt([complex(numpy.nan, numpy.inf)])
array([ nan+infj])

On all other systems:
array([ inf+infj])

This causes a few numpy tests to fail on Lion.  The numpy
was not compiled using the new LLVM-based gcc; it is the
same numpy binary I used on other MacOS systems, which
was compiled using gcc-4.0.1.  However, on Lion it is linked
to Lion's LLVM-based gcc runtime, which apparently has some
different behavior when it comes to such strange complex
values.

- Ilan
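For reference, corner cases like this are usually marked as skipped on the affected platform rather than patched around in numpy itself. A sketch of such a test (the platform condition and test name are illustrative, not the actual numpy test code):

```python
import sys
import unittest
import numpy as np

class TestComplexSqrtCorner(unittest.TestCase):
    # C99 Annex G says csqrt(x + inf*i) is inf + inf*i for any x, even NaN;
    # skip where the platform's C runtime disagrees (condition illustrative).
    @unittest.skipIf(sys.platform == "darwin",
                     "Lion's runtime returns nan+infj for sqrt(nan+infj)")
    def test_sqrt_nan_plus_infj(self):
        res = np.sqrt(np.array([complex(np.nan, np.inf)]))
        self.assertEqual(res[0], complex(np.inf, np.inf))
```

Skipping keeps the suite green on the odd platform while still exercising the C99-conforming behavior everywhere else.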


Re: [Numpy-discussion] Numpy master breaking matplotlib build

2011-07-27 Thread Mark Wiebe
Looks good. It might be good to change it back to (void *) for the
PyArray_DATA inline function as well; I changed that during lots of tweaking
to get things to build properly.

-Mark

On Wed, Jul 27, 2011 at 11:46 AM, Michael Droettboom wrote:

> The return type of PyArray_BYTES in the old API compatibility code seems
> to have changed recently to (void *) which breaks matplotlib builds.
> This pull request changes it back.  Is this correct?
>
> https://github.com/numpy/numpy/pull/121
>
> Mike


[Numpy-discussion] Numpy master breaking matplotlib build

2011-07-27 Thread Michael Droettboom
The return type of PyArray_BYTES in the old API compatibility code seems 
to have changed recently to (void *) which breaks matplotlib builds.  
This pull request changes it back.  Is this correct?

https://github.com/numpy/numpy/pull/121

Mike


Re: [Numpy-discussion] nanmin() fails with 'TypeError: cannot reduce a scalar'. Numpy 1.6.0 regression?

2011-07-27 Thread Charles R Harris
On Wed, Jul 27, 2011 at 2:49 AM, Mark Dickinson wrote:

> In NumPy 1.6.0, I get the following behaviour:
>
>
> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
> Type "packages", "demo" or "enthought" for more information.
> >>> import numpy
> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
> Traceback (most recent call last):
>  File "", line 1, in 
>  File
> "/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/lib/function_base.py",
> line 1507, in nanmin
>return np.fmin.reduce(a.flat)
> TypeError: cannot reduce on a scalar
> >>> numpy.__version__
> '1.6.0'
>
>
> In NumPy version 1.5.1:
>
> Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
> [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
> Type "packages", "demo" or "enthought" for more information.
> >>> import numpy
> >>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
> 1
> >>> numpy.__version__
> '1.5.1'
>
>
> Was this change intentional?
>
>
No, it comes from this

In [2]: a = numpy.ma.masked_array([1,2,3,4])

In [3]: array(a.flat)
Out[3]: array(<numpy.ma.core.MaskedIterator object at 0x...>,
dtype=object)

i.e., the a.flat iterator is turned into an object array with one element.
I'm not sure what the correct fix for this would be. Please open a ticket.

Chuck


[Numpy-discussion] nanmin() fails with 'TypeError: cannot reduce a scalar'. Numpy 1.6.0 regression?

2011-07-27 Thread Mark Dickinson
In NumPy 1.6.0, I get the following behaviour:


Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "packages", "demo" or "enthought" for more information.
>>> import numpy
>>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/lib/function_base.py",
line 1507, in nanmin
return np.fmin.reduce(a.flat)
TypeError: cannot reduce on a scalar
>>> numpy.__version__
'1.6.0'


In NumPy version 1.5.1:

Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "packages", "demo" or "enthought" for more information.
>>> import numpy
>>> numpy.nanmin(numpy.ma.masked_array([1,2,3,4]))
1
>>> numpy.__version__
'1.5.1'


Was this change intentional?

-- 
Mark