Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Pauli Virtanen
ti, 2008-05-20 kello 18:04 -0500, Robert Kern kirjoitti:
 On Tue, May 20, 2008 at 5:55 PM, Jonathan Wright [EMAIL PROTECTED] wrote:
  Stéfan van der Walt wrote:
As for internationali(s/z)ation, we'll see who writes the most
docstrings.
 
  Indeed. There are some notes on the OLPC wiki at
 
  http://wiki.laptop.org/go/Python_i18n
 
  It seems to be just a question of adding at the top of add_newdocs.py
 
  from gettext import gettext as _
 
  ... and putting the docstrings in a _() function call, although perhaps
  I miss something important, like a performance hit?

 Possibly a significant one. This could affect startup times, which I
 am hesitant to make worse.
 
  This would catch
  everything in add_newdocs at least. It seems like a relatively minor
  change if you are overhauling anyway?
 
 add_newdocs() could do that, but the usual function docstrings can't.
 The rule is that if the first statement in a function is a literal
 string, then the compiler will assign it func.__doc__. Expressions are
 just treated as expressions in the function body and have no effect on
 func.__doc__.

I think it would be quite straightforward to write a function that
crawled over the numpy namespace and dynamically replaced __doc__ with
gettextized versions. The user could call this function to switch the
language and the reference manual authors could call it to produce
localized versions of the manual. Moreover, we already have tools to
extract the English docstrings from numpy, and producing .pot files for
gettext from them should be doable. I think i18n of numpy (or any other
Python module) is technically not as far out as it initially seems!

(This is assuming that there are no objects there that don't allow
changing their __doc__.)

But I believe that in practice we really ought to concentrate on
improving the English documentation before thinking about spending
much effort on i18n or l10n.
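The crawler Pauli describes can be sketched in a few lines. This is only an illustration, not code from the thread: `localize_docstrings` and its `translate` argument are made-up names, and real code would bind `translate` to a gettext catalog for a numpy translation domain.

```python
def localize_docstrings(module, translate):
    """Replace __doc__ on a module's public members with translated
    versions.  `translate` would typically be gettext.gettext bound to
    a numpy message catalog; here it is any str -> str callable."""
    for name in dir(module):
        if name.startswith('_'):
            continue
        obj = getattr(module, name)
        doc = getattr(obj, '__doc__', None)
        if not doc:
            continue
        try:
            obj.__doc__ = translate(doc)
        except (AttributeError, TypeError):
            # Some C-level objects do not allow assigning __doc__;
            # skip them, as Pauli's parenthetical anticipates.
            pass
```

Calling this once per language switch would also serve the manual build, which could run it before extracting docstrings.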

-- 
Pauli Virtanen


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 1:27 AM, Pauli Virtanen [EMAIL PROTECTED] wrote:

 I think it would be quite straightforward to write a function that
 crawled over the numpy namespace and dynamically replaced __doc__ with
 gettextized versions. The user could call this function to switch the
 language and the reference manual authors could call it to produce
 localized versions of the manual. Moreover, we already have tools to
 extract English docstrings from numpy and producing .pot files for
 gettext could be done. I think i18n of numpy (or any other Python
 module) is technically not as far out as it initially seems!

Yes, that sounds (technically) feasible.

 (This is assuming that there are no objects there that don't allow
 changing their __doc__.)

 But I believe that in practice we really ought to concentrate on
 improving the English documentation before thinking about spending
 much effort on i18n or l10n.

Yup. Sounds about right.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Vincent Schut
Christopher Barker wrote:
 
 Vincent Schut wrote:
 Let's say I have an rgb image of arbitrary size, as a normal ndarray 
 (that's what my image reading lib gives me). Thus shape is 
 (3,ysize,xsize), dtype = int8. How would I convert/view this as a 
 recarray of shape (ysize, xsize) with the first dimension split up into 
 'r', 'g', 'b' fields? No need for 'x' and 'y' fields.
 
 Take a look in this list for a thread entitled "recarray fun" about a 
 month ago -- you'll find some more discussion of approaches.

Well, actually that thread was my inspiration to take a closer look into 
recarrays...
 
 Also, if your image data is rgb, usually that's a (width, height, 3) 
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height) 
 array, then that's rrr... Some image libs 
 may give you that, I'm not sure.

My data is. In fact, this is a simplification of my situation; I'm 
processing satellite data, which usually has more (and other) bands than 
just rgb. But the data is definitely in shape (bands, y, x).
 
 Also, you probably want a uint8 dtype, giving you 0-255 for each byte.

Same story. In fact, in this case it's int16, but can actually be any 
data type, even floats, even complex.
But thanks for the thoughts :-)
 
 -Chris
 
 
 



Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 1:48 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Christopher Barker wrote:

 Also, if your image data is rgb, usually that's a (width, height, 3)
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
 array, then that's rrr... Some image libs
 may give you that, I'm not sure.

 My data is. In fact, this is a simplification of my situation; I'm
 processing satellite data, which usually has more (and other) bands than
 just rgb. But the data is definitely in shape (bands, y, x).

I don't think record arrays will help you much, then. Individual
records need to be contiguous (bar padding). You can't interleave
them.
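Robert's point about contiguity can be seen directly: with pixel-interleaved (ny, nx, 3) data each pixel's r, g, b bytes are adjacent in memory, so a record view needs no copy, while band-sequential (bands, y, x) data keeps one pixel's fields far apart. A small sketch (not code from the thread):

```python
import numpy as np

# Pixel-interleaved data, shape (ny, nx, 3): a record view works as a
# true view, sharing memory with the original array.
img = np.zeros((2, 2, 3), dtype=np.uint8)
dt = np.dtype([('r', np.uint8), ('g', np.uint8), ('b', np.uint8)])
rec = img.view(dt).reshape(img.shape[:2])

rec['r'][0, 0] = 255          # writes through to img
assert img[0, 0, 0] == 255

# A band-sequential (3, ny, nx) array stores all r values first, then
# all g, then all b; the same view is impossible there without first
# copying the data (e.g. via transpose + reshape, which copies).
```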

-- 
Robert Kern



Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Vincent Schut
Robert Kern wrote:
 On Wed, May 21, 2008 at 1:48 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Christopher Barker wrote:
 
 Also, if your image data is rgb, usually that's a (width, height, 3)
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
 array, then that's rrr... Some image libs
 may give you that, I'm not sure.
 My data is. In fact, this is a simplification of my situation; I'm
 processing satellite data, which usually has more (and other) bands than
 just rgb. But the data is definitely in shape (bands, y, x).
 
 I don't think record arrays will help you much, then. Individual
 records need to be contiguous (bar padding). You can't interleave
 them.
 
Hmm, that was just what I was wondering about when reading Stefan's 
reply. So in fact, recarrays aren't just another way to view some data, 
no matter what shape it is in.

So his solution, 
x.T.reshape((-1,x.shape[0])).view(dt).reshape(x.shape[1:]).T, won't work 
then. Or, at least, it won't give me a view on my original data, but 
would give me a recarray with a copy of my data.

I guess I was misled by this text on the recarray wiki page:

We would like to represent a small colour image. The image is two 
pixels high and two pixels wide. Each pixel has a red, green and blue 
colour component, which is represented by a 32-bit floating point number 
between 0 and 1.

Intuitively, we could represent the image as a 3x2x2 array, where the 
first dimension represents the color, and the last two the pixel 
positions, i.e. 

Note the 3x2x2, which suggested (imho) that this would work with an 
image of (bands,y,x) shape, not (x,y,bands) shape. But I understand now 
that it's not the shape but the internal representation in memory 
(contiguous or not, C/Fortran order, etc.) that matters?

I know I can change the wiki text, but I'm afraid I still don't feel 
confident on this matter...



Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 2:03 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Robert Kern wrote:
 On Wed, May 21, 2008 at 1:48 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Christopher Barker wrote:

 Also, if your image data is rgb, usually that's a (width, height, 3)
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
 array, then that's rrr... Some image libs
 may give you that, I'm not sure.
 My data is. In fact, this is a simplification of my situation; I'm
 processing satellite data, which usually has more (and other) bands than
 just rgb. But the data is definitely in shape (bands, y, x).

 I don't think record arrays will help you much, then. Individual
 records need to be contiguous (bar padding). You can't interleave
 them.

 Hmm, that was just what I was wondering about when reading Stefan's
 reply. So in fact, recarrays aren't just another way to view some data,
 no matter what shape it is in.

 So his solution,
 x.T.reshape((-1,x.shape[0])).view(dt).reshape(x.shape[1:]).T, won't work
 then. Or, at least, it won't give me a view on my original data, but
 would give me a recarray with a copy of my data.

Right.

 I guess I was misled by this text on the recarray wiki page:

 We would like to represent a small colour image. The image is two
 pixels high and two pixels wide. Each pixel has a red, green and blue
 colour component, which is represented by a 32-bit floating point number
 between 0 and 1.

 Intuitively, we could represent the image as a 3x2x2 array, where the
 first dimension represents the color, and the last two the pixel
 positions, i.e. 

 Note the 3x2x2, which suggested imho that this would work with an
 image with (bands,y,x) shape, not with (x,y,bands) shape.

Yes, the tutorial goes on to use record arrays as a view onto an
(x,y,bands) array, and also to make a (bands,x,y) view from that.
That is, in fact, quite a confusing presentation of the subject.

Now, there is a way to use record arrays here; it's a bit ugly but can
be quite useful when parsing data formats. Each item in the record can
also be an array. So let's pretend we have a (3,nx,ny) RGB array.

import numpy

nbands, nx, ny = a.shape
dtype = numpy.dtype([
  ('r', a.dtype, [nx, ny]),
  ('g', a.dtype, [nx, ny]),
  ('b', a.dtype, [nx, ny]),
])

# The flatten() is necessary to pre-empt numpy from
# trying to do too much interpretation of a's shape.
rec = a.flatten().view(dtype)
print rec['r']
print rec['g']
print rec['b']

-- 
Robert Kern



Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Rob Hetland

Should we add a general discussion section to the Wiki?  I would just 
do this, but it seems like a fundamental enough addition that I thought 
I would suggest it first.  The rationale is that there are some 
stylistic questions that are not covered in the example.  For 
instance, I think that the See Also section should have a format like 
this:


See Also
--------
function : One line description from function's docstring
    Longer description here over potentially many lines. Lorem ipsum
    dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
    incididunt ut labore et dolore magna aliqua. Ut enim ad minim
    veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip
    ex ea commodo consequat.
next_function : One line description from next_function's docstring
    Longer description here over potentially many lines. Lorem ipsum
    dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
    incididunt ut labore et dolore magna aliqua. Ut enim ad minim
    veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip
    ex ea commodo consequat.


This will parse better (the line with the colon is bold, the following 
lines are not).  Also, would it be possible to put function and 
next_function in double back-ticks, so that they are referenced like 
modules?  That way they might be clickable in an HTML version of 
the documentation.

-Rob


[Numpy-discussion] Outputting arrays.

2008-05-21 Thread Alexandra Geddes
Hi. 

1. Is there a module or other code to write arrays to databases (they want 
Access databases)?

2. How can I write 2D arrays to text files with labels on the rows and columns?

thanks!
alex.
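[For question 2, numpy itself had no single call for labelled text output at the time; a minimal hand-rolled sketch follows. The function name and tab-separated layout are illustrative, not an existing API. For question 1, the usual routes were exporting such a text/CSV file and importing it into Access, or going through ODBC.]

```python
import numpy as np

def save_labelled(fname, arr, row_labels, col_labels):
    # Tab-separated text: a header row of column labels, then one row
    # label followed by that row's values on each subsequent line.
    with open(fname, 'w') as f:
        f.write('\t'.join([''] + list(col_labels)) + '\n')
        for label, row in zip(row_labels, arr):
            f.write('\t'.join([label] + ['%g' % v for v in row]) + '\n')
```

The result opens directly in a spreadsheet, which also makes it easy to paste into Access.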


  


Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Vincent Schut
Robert Kern wrote:
 On Wed, May 21, 2008 at 2:03 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Robert Kern wrote:
 On Wed, May 21, 2008 at 1:48 AM, Vincent Schut [EMAIL PROTECTED] wrote:
 Christopher Barker wrote:
 Also, if your image data is rgb, usually that's a (width, height, 3)
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
 array, then that's rrr... Some image libs
 may give you that, I'm not sure.
 My data is. In fact, this is a simplification of my situation; I'm
 processing satellite data, which usually has more (and other) bands than
 just rgb. But the data is definitely in shape (bands, y, x).
 I don't think record arrays will help you much, then. Individual
 records need to be contiguous (bar padding). You can't interleave
 them.

 Hmm, that was just what I was wondering about when reading Stefan's
 reply. So in fact, recarrays aren't just another way to view some data,
 no matter what shape it is in.

 So his solution,
 x.T.reshape((-1,x.shape[0])).view(dt).reshape(x.shape[1:]).T, won't work
 then. Or, at least, it won't give me a view on my original data, but
 would give me a recarray with a copy of my data.
 
 Right.
 
 I guess I was misled by this text on the recarray wiki page:

 We would like to represent a small colour image. The image is two
 pixels high and two pixels wide. Each pixel has a red, green and blue
 colour component, which is represented by a 32-bit floating point number
 between 0 and 1.

 Intuitively, we could represent the image as a 3x2x2 array, where the
 first dimension represents the color, and the last two the pixel
 positions, i.e. 

 Note the 3x2x2, which suggested imho that this would work with an
 image with (bands,y,x) shape, not with (x,y,bands) shape.
 
 Yes, the tutorial goes on to use record arrays as a view onto an
 (x,y,bands) array and also make a (bands,x,y) view from that, too.
 That is, in fact, quite a confusing presentation of the subject.
 
 Now, there is a way to use record arrays here; it's a bit ugly but can
 be quite useful when parsing data formats. Each item in the record can
 also be an array. So let's pretend we have a (3,nx,ny) RGB array.
 
 nbands, nx, ny = a.shape
 dtype = numpy.dtype([
   ('r', a.dtype, [nx, ny]),
   ('g', a.dtype, [nx, ny]),
   ('b', a.dtype, [nx, ny]),
 ])
 
 # The flatten() is necessary to pre-empt numpy from
 # trying to do too much interpretation of a's shape.
 rec = a.flatten().view(dtype)
 print rec['r']
 print rec['g']
 print rec['b']
 

Ah, now that is clarifying! Thanks a lot. I'll do some experiments to 
see whether this way of viewing my data is useful to me (in the sense 
that making my code more readable is already very useful).

Cheers,
Vincent.



Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 2:27 AM, Rob Hetland [EMAIL PROTECTED] wrote:

 Should we add a general discussion section to the Wiki?

Discussion should happen here on the mailing list instead of the wiki.
But please, let's not rehash discussions which have already happened
(like this one). They simply have no way of coming to a conclusion and
cannot improve matters enough to justify the effort expended on the
discussion.

-- 
Robert Kern



Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Stéfan van der Walt
Hi Rob

2008/5/21 Rob Hetland [EMAIL PROTECTED]:
 See Also
 --------
 function : One line description from function's docstring
     Longer description here over potentially many lines. Lorem ipsum
     dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
     incididunt ut labore et dolore magna aliqua. Ut enim ad minim
     veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip
     ex ea commodo consequat.
 next_function : One line description from next_function's docstring
     Longer description here over potentially many lines. Lorem ipsum
     dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
     incididunt ut labore et dolore magna aliqua. Ut enim ad minim
     veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip
     ex ea commodo consequat.


 This will parse better (the line with the colon is bold, the following
 lines are not).  Also, would it be possible to put function and
 next_function in double back-ticks, so that they are referenced like
 modules?  That way they might be clickable in an HTML version of
 the documentation.

When generating the reference guide, I parse all the numpy docstrings
and re-generate a document enhanced with Sphinx markup.  In this
document, functions in the See Also clause are clickable.  I have
support for two formats:

See Also
--------
function_a, function_b, function_c
function_d : relation to current function

Don't worry if it doesn't look perfect on the wiki; the reference
guide will be rendered correctly.

Regards
Stéfan


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Rob Hetland

On May 21, 2008, at 9:40 AM, Robert Kern wrote:

 But please, let's not rehash discussions which have already happened
 (like this one).

I didn't mean to suggest rehashing the documentation format.  I agree  
that this has been discussed enough.

Rather, sometimes it's not clear to me how to apply the existing 
standard.  'See Also' was a case where the style guidelines seemed 
sparse.  My suggestion, I guess, was more to clarify than to change.

-Rob


Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331





Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread George Nurser
2008/5/21 Jarrod Millman [EMAIL PROTECTED]:
 On Mon, May 19, 2008 at 12:39 PM, Christopher Burns [EMAIL PROTECTED] wrote:
 I've built a Mac binary for the 1.1 release candidate.  Mac users,
 please test it from:

 https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg

 This is for the MacPython installed from python.org.

 Hello,

 Please test the Mac binaries.  I can't tag the release until I know
 that our binary installers work on a wide variety of Mac machines.

Works for me. Intel, MBP, 10.5.2.
>>> import numpy
>>> numpy.__version__
'1.1.0rc1'
>>> numpy.test(10)
Ran 1005 tests in 2.290s

OK
<unittest._TextTestResult run=1005 errors=0 failures=0>

George Nurser.


[Numpy-discussion] Ticket #798: `piecewise` exposes raw memory

2008-05-21 Thread Stéfan van der Walt
Referring to
http://scipy.org/scipy/numpy/ticket/798

`piecewise` uses `empty` to allocate output memory.  If the conditions
do not sufficiently cover the output, then raw memory is returned,
e.g.,

{{{
import numpy as np
np.piecewise([0,1,2],[True,False,False],[1])
}}}

A patch which addresses the issue is available here for review:

http://codereview.appspot.com/1105

Documentation is being updated on the wiki.
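In the meantime, callers can avoid the exposure by giving `piecewise` an explicit default: when `funclist` has one more entry than `condlist`, the extra entry fills the elements no condition covers, instead of leaving whatever was in the uninitialized output. A sketch of that behaviour:

```python
import numpy as np

x = np.array([0, 1, 2])
cond = [np.array([True, False, False])]   # covers only the first element

# The extra entry (0) in funclist is applied wherever no condition is
# True, so no uninitialized memory reaches the result.
out = np.piecewise(x, cond, [1, 0])
assert out.tolist() == [1, 0, 0]
```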


Re: [Numpy-discussion] Ticket #798: `piecewise` exposes raw memory

2008-05-21 Thread Ondrej Certik
On Wed, May 21, 2008 at 11:30 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Referring to
 http://scipy.org/scipy/numpy/ticket/798

 `piecewise` uses `empty` to allocate output memory.  If the conditions
 do not sufficiently cover the output, then raw memory is returned,
 e.g.,

 {{{
 import numpy as np
 np.piecewise([0,1,2],[True,False,False],[1])
 }}}

 A patch which addresses the issue is available here for review:

 http://codereview.appspot.com/1105

 Documentation is being updated on the wiki.

I'd like to invite everyone to take part in the review. It's fun, it's
just talking, no coding. :)

Ondrej


Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it

2008-05-21 Thread Alan G Isaac
On Tue, 20 May 2008, joep apparently wrote:
 missed with option all=True:

Yes, I also see this with numpy.test(all=True).
(Same old machine (no SSE2) running Win 2000.)

Alan



==
ERROR: Test creation by view
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
51, in test_byview
assert_equal_records(mbase._data, base._data.view(recarray))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 74, in 
assert_equal_records
assert_equal(getattr(a,f), getattr(b,f))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 103, in 
assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test filling the array
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
258, in test_filled
assert_equal(mrecfilled['c'], np.array(('one','two','N/A'), dtype='|S8'))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 103, in 
assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Tests fields retrieval
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
62, in test_get
assert_equal(getattr(mbase,field), mbase[field])
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 104, in 
assert_equal
desired.tolist(),
  File C:\Python25\Lib\site-packages\numpy\ma\core.py, line 2552, in tolist
result = self.filled().tolist()
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test pickling
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
243, in test_pickling
assert_equal_records(mrec_._data, mrec._data)
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 74, in 
assert_equal_records
assert_equal(getattr(a,f), getattr(b,f))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 103, in 
assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: test_set_elements (numpy.ma.tests.test_mrecords.TestMRecords)
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 185, in test_set_elements
assert_equal(mbase._fieldmask.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Tests setting fields.
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
104, in test_set_fields

assert_equal(mbase._fieldmask.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: test_set_mask (numpy.ma.tests.test_mrecords.TestMRecords)
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
142, in test_set_mask
assert_equal(mbase._fieldmask.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test tolist.
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
269, in test_tolist
assert_equal(mrec.tolist(),
  File C:\Python25\Lib\site-packages\numpy\ma\mrecords.py, line 474, in tolist
result = narray(self.filled().tolist(), dtype=object)
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test construction from records.
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 312, in test_fromrecords

Re: [Numpy-discussion] 1.1.0rc1, Win32 Installer: please test it (test errors)

2008-05-21 Thread Matt Knox
 installed fine and all tests ran successfully on my machine.

I spoke too soon. I didn't know about the all parameter in numpy.test and
just ran it with the default before. When I specify all=True, I get 12 errors,
most of which seem to be related to a problem with calling the tolist
method.  See output below...

 numpy.test(all=True)
Numpy is installed in C:\Python25\lib\site-packages\numpy
Numpy version 1.1.0rc1
Python version 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
(Intel)]
  Found 18/18 tests for numpy.core.tests.test_defmatrix
  Found 3/3 tests for numpy.core.tests.test_errstate
  Found 3/3 tests for numpy.core.tests.test_memmap
  Found 283/283 tests for numpy.core.tests.test_multiarray
  Found 70/70 tests for numpy.core.tests.test_numeric
  Found 36/36 tests for numpy.core.tests.test_numerictypes
  Found 12/12 tests for numpy.core.tests.test_records
  Found 140/140 tests for numpy.core.tests.test_regression
  Found 7/7 tests for numpy.core.tests.test_scalarmath
  Found 2/2 tests for numpy.core.tests.test_ufunc
  Found 16/16 tests for numpy.core.tests.test_umath
  Found 63/63 tests for numpy.core.tests.test_unicode
  Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
  Found 5/5 tests for numpy.distutils.tests.test_misc_util
  Found 2/2 tests for numpy.fft.tests.test_fftpack
  Found 3/3 tests for numpy.fft.tests.test_helper
  Found 10/10 tests for numpy.lib.tests.test_arraysetops
  Found 1/1 tests for numpy.lib.tests.test_financial
  Found 53/53 tests for numpy.lib.tests.test_function_base
  Found 5/5 tests for numpy.lib.tests.test_getlimits
  Found 6/6 tests for numpy.lib.tests.test_index_tricks
  Found 15/15 tests for numpy.lib.tests.test_io
  Found 1/1 tests for numpy.lib.tests.test_machar
  Found 4/4 tests for numpy.lib.tests.test_polynomial
  Found 1/1 tests for numpy.lib.tests.test_regression
  Found 49/49 tests for numpy.lib.tests.test_shape_base
  Found 15/15 tests for numpy.lib.tests.test_twodim_base
  Found 43/43 tests for numpy.lib.tests.test_type_check
  Found 1/1 tests for numpy.lib.tests.test_ufunclike
  Found 24/24 tests for numpy.lib.tests.test__datasource
  Found 89/89 tests for numpy.linalg.tests.test_linalg
  Found 3/3 tests for numpy.linalg.tests.test_regression
  Found 94/94 tests for numpy.ma.tests.test_core
  Found 15/15 tests for numpy.ma.tests.test_extras
  Found 17/17 tests for numpy.ma.tests.test_mrecords
  Found 36/36 tests for numpy.ma.tests.test_old_ma
  Found 4/4 tests for numpy.ma.tests.test_subclassing
  Found 7/7 tests for numpy.tests.test_random
  Found 16/16 tests for numpy.testing.tests.test_utils
  Found 5/5 tests for numpy.tests.test_ctypeslib
..
..
..
..
..
..
Ignoring "Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible
binaries. Visual Studio 2003 was not found on this system. If you have Cygwin
installed, you can try compiling with MingW32, by passing -c mingw32 to
setup.py." (one should fix me in fcompiler/compaq.py)
..
..
..
..F...
EEE....E..EE..
...EE..
==
ERROR: Test creation by view
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line
51, in test_byview
assert_equal_records(mbase._data, base._data.view(recarray))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 74, in
assert_equal_records
assert_equal(getattr(a,f), getattr(b,f))
  File C:\Python25\Lib\site-packages\numpy\ma\testutils.py, line 103, in
assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test filling the array
--
Traceback (most recent call last):
  File C:\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line
258, in test_filled
assert_equal(mrecfilled['c'], np.array(('one','two','N/A'), 

[Numpy-discussion] distance_matrix: how to speed up?

2008-05-21 Thread Emanuele Olivetti
Dear all,

I need to speed up this function (a little example follows):
--
import numpy as N
def distance_matrix(data1, data2, weights):
    rows = data1.shape[0]
    columns = data2.shape[0]
    dm = N.zeros((rows, columns))
    for i in range(rows):
        for j in range(columns):
            dm[i, j] = ((data1[i, :] - data2[j, :])**2 * weights).sum()
    return dm

size1 = 4
size2 = 3
dimensions = 2
data1 = N.random.rand(size1,dimensions)
data2 = N.random.rand(size2,dimensions)
weights = N.random.rand(dimensions)
dm = distance_matrix(data1,data2,weights)
print dm
--
The distance_matrix function computes the weighted (squared) euclidean
distances between each pair of vectors from two sets (data1, data2).
The previous naive algorithm is extremely slow for my standard use,
i.e., when size1 and size2 are in the order of 1000 or more. It can be
improved using N.subtract.outer:

def distance_matrix_faster(data1, data2, weights):
    rows = data1.shape[0]
    columns = data2.shape[0]
    dm = N.zeros((rows, columns))
    for i in range(data1.shape[1]):
        dm += N.subtract.outer(data1[:, i], data2[:, i])**2 * weights[i]
    return dm

This algorithm becomes slow when the number of dimensions (i.e.,
data1.shape[1]) is large (e.g., 1000), due to the Python loop. In order
to speed it up, I guess that N.subtract.outer could be used on the full
matrices instead of one column at a time. But then there is a memory
issue: 'outer' allocates too much memory, since it stores all possible
combinations along all dimensions. This is clearly unnecessary.

Is there a NumPy way to avoid all Python loops without wasting
too much memory? As a comparison I coded the same algorithm in
C through weave (inline): it is _much_ faster and requires just
the memory to store the result. But I'd prefer not to use C or weave
if possible.

Thanks in advance for any help,


Emanuele
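[For the record, the double loop above can be vectorized with the identity
sum_k w_k (x_k - y_k)^2 = sum_k w_k x_k^2 + sum_k w_k y_k^2 - 2 sum_k w_k x_k y_k,
which allocates little beyond the result matrix. A sketch, not necessarily the version later referenced on the list:]

```python
import numpy as N

def distance_matrix_fast(data1, data2, weights):
    # Fold the weights into data1 once, then use a single matrix
    # product for the cross term.  Only the (rows, columns) result
    # plus two 1-D vectors of squared norms are allocated.
    d1w = data1 * weights                        # w_k * x_ik
    sq1 = (d1w * data1).sum(axis=1)              # sum_k w_k x_ik^2
    sq2 = (data2 * data2 * weights).sum(axis=1)  # sum_k w_k y_jk^2
    return sq1[:, None] + sq2[None, :] - 2.0 * N.dot(d1w, data2.T)
```

Note that floating-point cancellation can yield tiny negative entries where distances are near zero; clip at 0 if that matters.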



Re: [Numpy-discussion] distance_matrix: how to speed up?

2008-05-21 Thread Matthieu Brucher
Hi,

Bill Baxter proposed a solution to this problem some months ago on this ML. I
use it regularly and it is fast enough for me.

Matthieu

2008/5/21 Emanuele Olivetti [EMAIL PROTECTED]:

 Dear all,

 I need to speed up this function (a little example follows):
 --
 import numpy as N
 def distance_matrix(data1, data2, weights):
     rows = data1.shape[0]
     columns = data2.shape[0]
     dm = N.zeros((rows, columns))
     for i in range(rows):
         for j in range(columns):
             dm[i, j] = ((data1[i, :] - data2[j, :])**2 * weights).sum()
     return dm

 size1 = 4
 size2 = 3
 dimensions = 2
 data1 = N.random.rand(size1,dimensions)
 data2 = N.random.rand(size2,dimensions)
 weights = N.random.rand(dimensions)
 dm = distance_matrix(data1,data2,weights)
 print dm
 --
 The distance_matrix function computes the weighted (squared) euclidean
 distances between each pair of vectors from two sets (data1, data2).
 The previous naive algorithm is extremely slow for my standard use,
 i.e., when size1 and size2 are in the order of 1000 or more. It can be
 improved using N.subtract.outer:

 def distance_matrix_faster(data1,data2,weights):
rows = data1.shape[0]
columns = data2.shape[0]
dm = N.zeros((rows,columns))
for i in range(data1.shape[1]):
dm += N.subtract.outer(data1[:,i],data2[:,i])**2*weights[i]
pass
return dm

 This algorithm becomes slow when dimensions (i.e., data1.shape[1]) is
 big (i.e., 1000), due to the Python loop. In order to speed it up, I guess
 that N.subtract.outer could be used on the full matrices instead of one
 column at a time. But then there is a memory issue: 'outer' allocates
 too much memory since it stores all possible combinations along all
 dimensions. This is clearly unnecessary.

 Is there a NumPy way to avoid all Python loops and without wasting
 too much memory? As a comparison I coded the same algorithm in
 C through weave (inline): it is _much_ faster and requires just
 the memory to store the result. But I'd prefer not using C or weave
 if possible.

 Thanks in advance for any help,


 Emanuele

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] distance_matrix: how to speed up?

2008-05-21 Thread Rob Hetland

I think you want something like this:

x1 = x1 * weights[np.newaxis,:]
x2 = x2 * weights[np.newaxis,:]

x1 = x1[np.newaxis, :, :]
x2 = x2[:, np.newaxis, :]
distance = np.sqrt( ((x1 - x2)**2).sum(axis=-1) )

x1 and x2 are arrays with size of (npoints, ndimensions), and npoints  
can be different for each array.  I'm not sure I did your weights  
right, but that part shouldn't be so difficult.
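A self-contained version of the recipe above, with the weights caveat Rob
flags addressed: multiplying *both* inputs by the weights would weight each
dimension by weights**2, so here the weights enter once, inside the sum (the
function name is mine; following the recipe's axis placement, the result has
shape (npoints2, npoints1)):

```python
import numpy as np

def weighted_sq_distances(x1, x2, weights):
    # (1, n1, d) - (n2, 1, d) broadcasts to (n2, n1, d).
    diff = x1[np.newaxis, :, :] - x2[:, np.newaxis, :]
    # Weight each dimension once, then sum over the last axis.
    return (diff**2 * weights).sum(axis=-1)
```

Note this allocates the full (n2, n1, d) temporary, which is exactly the
memory issue raised in the original question.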


On May 21, 2008, at 2:39 PM, Emanuele Olivetti wrote:

 Dear all,

 I need to speed up this function (a little example follows):
 [...]

Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331





Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Mac Installer: please test it

2008-05-21 Thread Tommy Grav
Doing the same with the Mac installer also returns 3 failures and 12
errors with all=True.
Installer works fine though :)

[skathi:~] tgrav% python
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on
Python 2.5.1 (r251:54863, May  1 2007, 17:40:00)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test(all=True)
Numpy is installed in /Library/Frameworks/Python.framework/Versions/ 
2.5/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5.1 (r251:54863, May  1 2007, 17:40:00) [GCC 4.0.1  
(Apple Computer, Inc. build 5250)]

./Library/ 
Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/ 
numpy/core/ma.py:609: UserWarning: Cannot automatically convert masked  
array to numeric because data
 is masked in one or more locations.
   warnings.warn("Cannot automatically convert masked array to \
F... 
 
 
...F 
F..EEE....E..EE.EE..
==
ERROR: Test creation by view
--
Traceback (most recent call last):
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/tests/test_mrecords.py, line 51, in  
test_byview
 assert_equal_records(mbase._data, base._data.view(recarray))
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/testutils.py, line 74, in  
assert_equal_records
 assert_equal(getattr(a,f), getattr(b,f))
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/testutils.py, line 103, in  
assert_equal
 return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test filling the array
--
Traceback (most recent call last):
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/tests/test_mrecords.py, line 258, in  
test_filled
 assert_equal(mrecfilled['c'], np.array(('one','two','N/A'),  
dtype='|S8'))
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/testutils.py, line 103, in  
assert_equal
 return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Tests fields retrieval
--
Traceback (most recent call last):
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/tests/test_mrecords.py, line 62, in  
test_get
 assert_equal(getattr(mbase,field), mbase[field])
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/testutils.py, line 104, in  
assert_equal
 desired.tolist(),
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/core.py, line 2552, in tolist
 result = self.filled().tolist()
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test pickling
--
Traceback (most recent call last):
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/ 
python2.5/site-packages/numpy/ma/tests/test_mrecords.py, line 243, in  
test_pickling
 

Re: [Numpy-discussion] distance_matrix: how to speed up?

2008-05-21 Thread Emanuele Olivetti
Matthieu Brucher wrote:
 Hi,

 Bill Baxter proposed a version of this problem some months ago on this
 ML. I use it regularly and it is fast enough for me.


Excellent. Exactly what I was looking for.

Thanks,

Emanuele



Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Stéfan van der Walt
Hi Bruce

2008/5/21 Bruce Southey [EMAIL PROTECTED]:
 I would like to throw out the following idea with no obligations:
 IF people have the time and energy while writing the documentation, can
 they also test that the function is doing what it is expected?
 Also related to this is developing appropriate tests if these are not
 covered or at least provide a file of code used in evaluating the
 functionality.

We are adding examples (read: doctests) to every function, which serve
as unit tests at the same time.  In writing these, we do come across
bugs (like http://projects.scipy.org/scipy/numpy/ticket/798), for
which tickets are filed.  This is a documentation drive, though, so
the examples are illustrative; we don't aim to write exhaustive unit
tests that cover all corner cases.

That said, any person who wishes to contribute unit tests is most
welcome to do so.  I can guarantee that your patches will be applied
speedily :)

Regards
Stéfan


Re: [Numpy-discussion] distance_matrix: how to speed up?

2008-05-21 Thread Emanuele Olivetti
Rob Hetland wrote:
 I think you want something like this:

 x1 = x1 * weights[np.newaxis,:]
 x2 = x2 * weights[np.newaxis,:]

 x1 = x1[np.newaxis, :, :]
 x2 = x2[:, np.newaxis, :]
 distance = np.sqrt( ((x1 - x2)**2).sum(axis=-1) )

 x1 and x2 are arrays with size of (npoints, ndimensions), and npoints  
 can be different for each array.  I'm not sure I did your weights  
 right, but that part shouldn't be so difficult.

   

Weights seem not right but anyway here is the solution adapted from
Bill Baxter's :

def distance_matrix_final(data1,data2,weights):
    data1w = data1*weights
    dm = (data1w*data1).sum(1)[:,None] - 2*N.dot(data1w,data2.T) \
         + (data2*data2*weights).sum(1)
    dm[dm<0] = 0
    return dm

This solution is super-fast, stable, and uses little memory.
It is based on the fact that:
(x-y)^2*w = x*x*w - 2*x*y*w + y*y*w
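The identity can be checked numerically against the direct broadcast form
(a sketch; array sizes are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.rand(200, 10)
y = rng.rand(150, 10)
w = rng.rand(10)

# Expanded form: sum_k w_k*(x_k - y_k)^2 = sum w*x*x - 2*sum w*x*y + sum w*y*y
xw = x * w
dm = (xw * x).sum(1)[:, None] - 2 * np.dot(xw, y.T) + (y * y * w).sum(1)
dm[dm < 0] = 0  # clip tiny negatives caused by floating-point cancellation

# Direct form, for comparison (allocates the full (200, 150, 10) cube).
direct = ((x[:, None, :] - y[None, :, :])**2 * w).sum(-1)
assert np.allclose(dm, direct)
```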

For size1=size2=dimensions=1000 it requires ~0.6 sec. to compute
on my dual core duo. It is 2 orders of magnitude faster than my
previous solution, but 1-2 orders of magnitude slower than using
C with weave.inline.

Definitely good enough for me.


Emanuele



[Numpy-discussion] [Fwd: Re: [NumPy] #770: numpy.core.tests.test_multiarray.TestView failures on big-endian machines]

2008-05-21 Thread Christopher Hanley
Just forwarding this to the main list since the Trac mailer still seems 
to be broken.


Chris

--
Christopher Hanley
Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338
---BeginMessage---
#770: numpy.core.tests.test_multiarray.TestView failures on big-endian machines
+---
 Reporter:  chanley |Owner:  somebody
 Type:  defect  |   Status:  reopened
 Priority:  normal  |Milestone:  1.1.0   
Component:  numpy.core  |  Version:  devel   
 Severity:  normal  |   Resolution:  
 Keywords:  big-endian  |  
+---
Changes (by chanley):

  * status:  closed => reopened
  * resolution:  fixed =>

Comment:

 This has not been fixed as of 1.1.0dev5211 on either our Sun running
 Solaris 10 or our PPC MAC running 10.4.11.  Here is the output from the
 seltest:

 {{{
 Numpy version 1.1.0.dev5211
 Python version 2.5.1 (r251:54863, Jan 21 2008, 11:03:00) [C]
   Found 18/18 tests for numpy.core.defmatrix
   Found 3/3 tests for numpy.core.memmap
   Found 283/283 tests for numpy.core.multiarray
   Found 70/70 tests for numpy.core.numeric
   Found 36/36 tests for numpy.core.numerictypes
   Found 12/12 tests for numpy.core.records
   Found 7/7 tests for numpy.core.scalarmath
   Found 16/16 tests for numpy.core.umath
   Found 5/5 tests for numpy.distutils.misc_util
   Found 2/2 tests for numpy.fft.fftpack
   Found 3/3 tests for numpy.fft.helper
   Found 24/24 tests for numpy.lib._datasource
   Found 10/10 tests for numpy.lib.arraysetops
   Found 1/1 tests for numpy.lib.financial
   Found 0/0 tests for numpy.lib.format
   Found 53/53 tests for numpy.lib.function_base
   Found 6/6 tests for numpy.lib.getlimits
   Found 6/6 tests for numpy.lib.index_tricks
   Found 15/15 tests for numpy.lib.io
   Found 1/1 tests for numpy.lib.machar
   Found 4/4 tests for numpy.lib.polynomial
   Found 49/49 tests for numpy.lib.shape_base
   Found 15/15 tests for numpy.lib.twodim_base
   Found 43/43 tests for numpy.lib.type_check
   Found 1/1 tests for numpy.lib.ufunclike
   Found 89/89 tests for numpy.linalg
   Found 94/94 tests for numpy.ma.core
   Found 15/15 tests for numpy.ma.extras
   Found 7/7 tests for numpy.random
   Found 16/16 tests for numpy.testing.utils
   Found 0/0 tests for __main__
 
.FF.
 ==
 FAIL: test_basic (numpy.core.tests.test_multiarray.TestView)
 --
 Traceback (most recent call last):
   File /usr/ra/pyssg/2.5.1/numpy/core/tests/test_multiarray.py, line
 843, in test_basic
 assert_array_equal(y,z)
   File /usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py, line 248, in
 assert_array_equal
 verbose=verbose, header='Arrays are not equal')
   File /usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py, line 240, in
 assert_array_compare
 assert cond, msg
 AssertionError:
 Arrays are not equal

 (mismatch 100.0%)
  x: array([ 67305985, 134678021])
  y: array([16909060, 84281096])

 ==
 FAIL: test_keywords (numpy.core.tests.test_multiarray.TestView)
 --
 Traceback (most recent call last):
   File /usr/ra/pyssg/2.5.1/numpy/core/tests/test_multiarray.py, line
 857, in test_keywords
 assert_equal(y.dtype,np.int16)
   File /usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py, line 145, in
 assert_equal
 assert desired == actual, msg
 AssertionError:
 Items are not equal:
   ACTUAL: dtype('>i2')
   DESIRED: <type 'numpy.int16'>

 --
 Ran 1000 tests in 21.171s

 FAILED (failures=2)
 errors:
 failures:
 (numpy.core.tests.test_multiarray.TestView
 

Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Mac Installer: please test it

2008-05-21 Thread George Nurser
2008/5/21 Pierre GM [EMAIL PROTECTED]:
 Mmh, wait a minute:
 * There shouldn't be any mstats.py nor morestats.py in numpy.ma any longer: I
 moved the packages to scipy.stats along their respective unittests.

Right. I hadn't deleted the previous
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy

After doing that, I get just 12 errors; no failures.
The errors are related to
* ma/tests/test_mrecords.py
* ma/tests/test_old_ma.py
* ma/tests/test_subclassing.py

-George.

Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) [GCC 4.0.1
(Apple Computer, Inc. build 5363)]
...
...EEE....E..EE.EE..
==
ERROR: Test creation by view
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 51, in test_byview
assert_equal_records(mbase._data, base._data.view(recarray))
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 74, in assert_equal_records
assert_equal(getattr(a,f), getattr(b,f))
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 103, in assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test filling the array
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 258, in test_filled
assert_equal(mrecfilled['c'], np.array(('one','two','N/A'), dtype='|S8'))
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 103, in assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Tests fields retrieval
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 62, in test_get
assert_equal(getattr(mbase,field), mbase[field])
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 104, in assert_equal
desired.tolist(),
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/core.py,
line 2552, in tolist
result = self.filled().tolist()
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Test pickling
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 243, in test_pickling
assert_equal_records(mrec_._data, mrec._data)
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 74, in assert_equal_records
assert_equal(getattr(a,f), getattr(b,f))
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/testutils.py,
line 103, in assert_equal
return _assert_equal_on_sequences(actual.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: test_set_elements (numpy.ma.tests.test_mrecords.TestMRecords)
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 185, in test_set_elements
assert_equal(mbase._fieldmask.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==
ERROR: Tests setting fields.
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
line 104, in test_set_fields
assert_equal(mbase._fieldmask.tolist(),
RuntimeError: array_item not returning smaller-dimensional array

==

Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it

2008-05-21 Thread Paul Moore
Jarrod Millman wrote:
 Please test the Windows binaries.  So far I have only seen two
 testers.  I can't tag the release until I know that our binary
 installers work on a wide variety of Windows machines.

For what it's worth, I got this:

System information for \\GANDALF:
Uptime:0 days 0 hours 55 minutes 8 seconds
Kernel version:Microsoft Windows XP, Multiprocessor Free
Product type:  Professional
Product version:   5.1
Service pack:  3
Kernel build number:   2600
Registered organization:
Registered owner:  Gustav
Install date:  05/12/2006, 14:22:51
Activation status: Error reading status
IE version:7.
System root:   C:\WINDOWS
Processors:2
Processor speed:   2.4 GHz
Processor type:AMD Athlon(tm) 64 X2 Dual Core Processor 4600+
Physical memory:   2046 MB
Video driver:  NVIDIA GeForce 7950 GT

The installer installed the SSE3 version.

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit 
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test(all=True)
Numpy is installed in C:\Apps\Python\lib\site-packages\numpy
Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 
bit (Intel)]
   Found 18/18 tests for numpy.core.tests.test_defmatrix
   Found 3/3 tests for numpy.core.tests.test_errstate
   Found 3/3 tests for numpy.core.tests.test_memmap
   Found 283/283 tests for numpy.core.tests.test_multiarray
   Found 70/70 tests for numpy.core.tests.test_numeric
   Found 36/36 tests for numpy.core.tests.test_numerictypes
   Found 12/12 tests for numpy.core.tests.test_records
   Found 140/140 tests for numpy.core.tests.test_regression
   Found 7/7 tests for numpy.core.tests.test_scalarmath
   Found 2/2 tests for numpy.core.tests.test_ufunc
   Found 16/16 tests for numpy.core.tests.test_umath
   Found 63/63 tests for numpy.core.tests.test_unicode
   Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
   Found 5/5 tests for numpy.distutils.tests.test_misc_util
   Found 2/2 tests for numpy.fft.tests.test_fftpack
   Found 3/3 tests for numpy.fft.tests.test_helper
   Found 10/10 tests for numpy.lib.tests.test_arraysetops
   Found 1/1 tests for numpy.lib.tests.test_financial
   Found 53/53 tests for numpy.lib.tests.test_function_base
   Found 5/5 tests for numpy.lib.tests.test_getlimits
   Found 6/6 tests for numpy.lib.tests.test_index_tricks
   Found 15/15 tests for numpy.lib.tests.test_io
   Found 1/1 tests for numpy.lib.tests.test_machar
   Found 4/4 tests for numpy.lib.tests.test_polynomial
   Found 1/1 tests for numpy.lib.tests.test_regression
   Found 49/49 tests for numpy.lib.tests.test_shape_base
   Found 15/15 tests for numpy.lib.tests.test_twodim_base
   Found 43/43 tests for numpy.lib.tests.test_type_check
   Found 1/1 tests for numpy.lib.tests.test_ufunclike
   Found 24/24 tests for numpy.lib.tests.test__datasource
   Found 89/89 tests for numpy.linalg.tests.test_linalg
   Found 3/3 tests for numpy.linalg.tests.test_regression
   Found 94/94 tests for numpy.ma.tests.test_core
   Found 15/15 tests for numpy.ma.tests.test_extras
   Found 17/17 tests for numpy.ma.tests.test_mrecords
   Found 36/36 tests for numpy.ma.tests.test_old_ma
   Found 4/4 tests for numpy.ma.tests.test_subclassing
   Found 7/7 tests for numpy.tests.test_random
   Found 16/16 tests for numpy.testing.tests.test_utils
   Found 5/5 tests for numpy.tests.test_ctypeslib
Ignoring "Python was built with Visual Studio version 7.1, and extensions
need to be built with the same version of the compiler, but it isn't
installed." (one should fix me in fcompiler/compaq.py)

[Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Pierre GM
All,
Most of the errors that are reported in 1.1.0rc1 are related to the .tolist() 
method in numpy.ma, such as :

ERROR: Tests fields retrieval
--
Traceback (most recent call last):
   File 
C:\Apps\Python\Lib\site-packages\numpy\ma\tests\test_mrecords.py, line 
62, in test_get
     assert_equal(getattr(mbase,field), mbase[field])
   File C:\Apps\Python\Lib\site-packages\numpy\ma\testutils.py, line 
104, in assert_equal
     desired.tolist(),
   File C:\Apps\Python\Lib\site-packages\numpy\ma\core.py, line 2552, 
in tolist
     result = self.filled().tolist()
RuntimeError: array_item not returning smaller-dimensional array
#

Note that the method still seems to work: for example, the following command
gives the proper output, without a RuntimeError:

python -c "import numpy as np, numpy.ma as ma; x=ma.array(np.random.rand(5),
mask=[1,0,0,0,0]); print x.tolist()"

The problem looks quite recent, and not related to numpy.ma itself: what
changed recently in the .tolist() method of ndarrays? Why do we get these
RuntimeErrors?



Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Alan McIntyre
On Wed, May 21, 2008 at 11:35 AM, Charles R Harris
[EMAIL PROTECTED] wrote:
 On Wed, May 21, 2008 at 9:28 AM, Pierre GM [EMAIL PROTECTED] wrote:
 The problem looks quite recent, and not related to numpy.ma itself: what
 changed recently in the .tolist() method of ndarrays ? Why do we get these
 RuntimeErrors ?

 I expect it comes from the matrix churn.  The tolist method was one of those
 with a  workaround for the failure to reduce dimensions. I'll look at it
 later if someone doesn't beat me to it.

There's some commentary and a patch on NumPy ticket 793 on this issue:

http://scipy.org/scipy/numpy/ticket/793

Hope it's helpful.
Alan


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Pierre GM
On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
 There's some commentary and a patch on NumPy ticket 793 on this issue:

 http://scipy.org/scipy/numpy/ticket/793

OK, thanks a lot ! That's a C problem, then...


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Alan McIntyre
On Wed, May 21, 2008 at 11:56 AM, Pierre GM [EMAIL PROTECTED] wrote:
 On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
 There's some commentary and a patch on NumPy ticket 793 on this issue:

 http://scipy.org/scipy/numpy/ticket/793

 OK, thanks a lot ! That's a C problem, then...

It's probably worth mentioning that I'm not that familiar with all the
innards of NumPy yet, so take my comments and patch on that issue with
a (fairly large) grain of salt. ;)


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Bruce Southey
Stéfan van der Walt wrote:
 Hi Bruce

 We are adding examples (read: doctests) to every function, which serve
 as unit tests at the same time.
 [...]
Hi
Excellent as I did not see any mention of this on the web page.

Bruce


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 3:26 AM, Rob Hetland [EMAIL PROTECTED] wrote:

 On May 21, 2008, at 9:40 AM, Robert Kern wrote:

 But please, let's not rehash discussions which have already happened
 (like this one).

 I didn't mean to suggest rehashing the documentation format.  I agree
 that this has been discussed enough.

 Rather, sometimes it's not clear to me how to apply the existing
 standard.  'See Also' was a case where the style guidelines seem
 sparse.  My suggestion, I guess, was more to clarify than to change.

Okey-dokey.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread Christopher Barker
Jarrod Millman wrote:
 please test it from:

 https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg
 Please test the Mac binaries.  I can't tag the release until I know
 that our binary installers work on a wide variety of Mac machines.

Has there been a new build since the endian bug in the tests was fixed?

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR   (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-21 Thread Christopher Barker
James Snyder wrote:
 b = np.zeros((1,30)) # allocates new memory and disconnects the view

This is really about how python works, not how numpy works:

np.zeros() -- creates a new array with all zeros in it -- that's the 
whole point.

b = Something -- binds the name b to the Something object. Name 
binding will never, ever, change the object the name used to be bound 
to. This has nothing to do with whether the object formerly known as b
is referencing the data from another array.

This is a nice write up of the concept of name binding in Python:

http://python.net/crew/mwh/hacks/objectthink.html
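A minimal illustration of the distinction (variable names are arbitrary):

```python
import numpy as np

a = np.zeros((1, 30))
b = a[0]             # b is a view sharing a's memory
b[:] = 1             # in-place assignment through the view mutates a
assert a[0, 5] == 1

b = np.zeros(30)     # name binding: b now names a brand-new array;
assert a[0, 5] == 1  # the array formerly known as b is untouched
```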


-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR   (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 11:51 AM, Robert Kern [EMAIL PROTECTED] wrote:

 On Wed, May 21, 2008 at 12:49 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
  Failed importing
  /usr/lib/python2.5/site-packages/numpy/ma/tests/test_morestats.py: No
 module
  named scipy.stats.distributions.

 This has been removed in SVN.


Not in 1.1.0.dev5211, which is the latest. Where are people making these
fixes? Note that ticket 770 has also been reopened.

Chuck


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Jarrod Millman
On Wed, May 21, 2008 at 11:28 AM, Charles R Harris
[EMAIL PROTECTED] wrote:
 Not in 1.1.0.dev5211, which is the latest.

It isn't in the trunk.  Maybe you have the old file still installed.
Please try removing the installed files and reinstall.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 12:28 PM, Charles R Harris 
[EMAIL PROTECTED] wrote:



 [...]


 Not in 1.1.0.dev5211, which is the latest. Where are people making these
 fixes? Note that ticket 770 has also been reopened.


OK, I had to delete numpy from the site-packages and reinstall. Can we make
the install do this? Otherwise we will end up with bogus error reports.

Chuck


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Pierre GM
On Wednesday 21 May 2008 14:28:34 Charles R Harris wrote:
 On Wed, May 21, 2008 at 11:51 AM, Robert Kern [EMAIL PROTECTED] wrote:
  On Wed, May 21, 2008 at 12:49 PM, Charles R Harris
 
  [EMAIL PROTECTED] wrote:
   Failed importing
   /usr/lib/python2.5/site-packages/numpy/ma/tests/test_morestats.py: No
 
  module
 
   named scipy.stats.distributions.
 
  This has been removed in SVN.

 Not in 1.1.0.dev5211, which is the latest. 
???
http://scipy.org/scipy/numpy/changeset/5078





Re: [Numpy-discussion] first recarray steps

2008-05-21 Thread Anne Archibald
2008/5/21 Vincent Schut [EMAIL PROTECTED]:
 Christopher Barker wrote:

 Also, if your image data is rgb, usually, that's a (width, height, 3)
 array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
 array, then that's rrr... Some image libs
 may give you that, I'm not sure.

 My data is. In fact, this is a simplification of my situation; I'm
 processing satellite data, which usually has more (and other) bands than
 just rgb. But the data is definitely in shape (bands, y, x).

You may find your life becomes easier if you transpose the data in
memory. This can make a big difference to efficiency. Years ago I was
working with enormous (by the standards of the day) MATLAB files on
disk, storing complex data. The way (that version of) MATLAB
represented complex data was the way you describe: matrix of real
parts, matrix of imaginary parts. This meant that to draw a single
pixel, the disk needed to seek twice... depending on what sort of
operations you're doing, transposing your data so that each pixel is
all in one place may improve cache coherency as well as making the use
of record arrays possible.
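A minimal sketch of that transposition (band count, sizes, dtype, and field names are all made up for illustration):

```python
import numpy as np

bands, ny, nx = 3, 4, 5                      # made-up sizes
data = np.zeros((bands, ny, nx), dtype=np.float32)

# Physically reorder the memory so each pixel's bands are adjacent.
pixels = np.ascontiguousarray(data.transpose(1, 2, 0))

# Each (y, x) location is now one contiguous 3-float record, so a
# structured (record) view can name the bands.
rec = pixels.view(dtype=[('b1', np.float32),
                         ('b2', np.float32),
                         ('b3', np.float32)]).reshape(ny, nx)
```

After this, `rec['b2']` is a (ny, nx) view of the second band, and all three values for a pixel sit together in memory.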

Anne


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 12:37 PM, Pierre GM [EMAIL PROTECTED] wrote:

 On Wednesday 21 May 2008 14:28:34 Charles R Harris wrote:
  On Wed, May 21, 2008 at 11:51 AM, Robert Kern [EMAIL PROTECTED]
 wrote:
   On Wed, May 21, 2008 at 12:49 PM, Charles R Harris
  
   [EMAIL PROTECTED] wrote:
Failed importing
/usr/lib/python2.5/site-packages/numpy/ma/tests/test_morestats.py: No
  
   module
  
named scipy.stats.distributions.
  
   This has been removed in SVN.
 
  Not in 1.1.0.dev5211, which is the latest.


And why is this in ma/tests/test_core.py?

set_local_path()
from test_old_ma import *
restore_path()

and test_core then proceeds to shadow all the test classes in test_old_ma.
That's just silly.

Chuck


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread Christopher Burns
On Wed, May 21, 2008 at 10:44 AM, Christopher Barker
[EMAIL PROTECTED] wrote:
 Has there been a new build since the endian bug in the tests was fixed?

 -Chris

Nope.  I figured that would be included in the 1.1.1 release.


-- 
Christopher Burns
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread Christopher Barker
Christopher Burns wrote:
 Nope.  I figured that would be included in the 1.1.1 release.

It seems a few bugs have been found and fixed. It would be nice to put 
out another release candidate with those fixes at some point. Anyway:

OS-X 10.4.11 Dual g5 PPC:

FAILED (failures=3, errors=12)
unittest._TextTestResult run=1315 errors=12 failures=3

I think these are all the ma errors and endian errors already 
identified, but here are the details:

>>> numpy.test(all=True)
Numpy is installed in 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 
(Apple Computer, Inc. build 5367)]
   Found 18/18 tests for numpy.core.tests.test_defmatrix
   Found 3/3 tests for numpy.core.tests.test_errstate
   Found 36/36 tests for numpy.core.tests.test_ma
   Found 3/3 tests for numpy.core.tests.test_memmap
   Found 283/283 tests for numpy.core.tests.test_multiarray
   Found 70/70 tests for numpy.core.tests.test_numeric
   Found 36/36 tests for numpy.core.tests.test_numerictypes
   Found 12/12 tests for numpy.core.tests.test_records
   Found 140/140 tests for numpy.core.tests.test_regression
   Found 2/2 tests for numpy.core.tests.test_reshape
   Found 7/7 tests for numpy.core.tests.test_scalarmath
   Found 2/2 tests for numpy.core.tests.test_ufunc
   Found 16/16 tests for numpy.core.tests.test_umath
   Found 63/63 tests for numpy.core.tests.test_unicode
   Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
   Found 5/5 tests for numpy.distutils.tests.test_misc_util
   Found 2/2 tests for numpy.fft.tests.test_fftpack
   Found 3/3 tests for numpy.fft.tests.test_helper
   Found 24/24 tests for numpy.lib.tests.test__datasource
   Found 10/10 tests for numpy.lib.tests.test_arraysetops
   Found 1/1 tests for numpy.lib.tests.test_financial
   Found 53/53 tests for numpy.lib.tests.test_function_base
   Found 5/5 tests for numpy.lib.tests.test_getlimits
   Found 6/6 tests for numpy.lib.tests.test_index_tricks
   Found 15/15 tests for numpy.lib.tests.test_io
   Found 1/1 tests for numpy.lib.tests.test_machar
   Found 4/4 tests for numpy.lib.tests.test_polynomial
   Found 1/1 tests for numpy.lib.tests.test_regression
   Found 49/49 tests for numpy.lib.tests.test_shape_base
   Found 15/15 tests for numpy.lib.tests.test_twodim_base
   Found 43/43 tests for numpy.lib.tests.test_type_check
   Found 1/1 tests for numpy.lib.tests.test_ufunclike
   Found 89/89 tests for numpy.linalg.tests.test_linalg
   Found 3/3 tests for numpy.linalg.tests.test_regression
   Found 94/94 tests for numpy.ma.tests.test_core
   Found 15/15 tests for numpy.ma.tests.test_extras
   Found 17/17 tests for numpy.ma.tests.test_mrecords
   Found 36/36 tests for numpy.ma.tests.test_old_ma
   Found 4/4 tests for numpy.ma.tests.test_subclassing
   Found 7/7 tests for numpy.tests.test_random
   Found 16/16 tests for numpy.testing.tests.test_utils
   Found 5/5 tests for numpy.tests.test_ctypeslib
./Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/ma.py:608:
 
UserWarning: Cannot automatically convert masked array to numeric
because data is masked in one or more locations.
   warnings.warn("Cannot automatically convert masked array to \
F..FF.
...EEE....E..EE.EE..
==
ERROR: Test creation by view
--
Traceback (most recent call last):
   File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ma/tests/test_mrecords.py,
 
line 51, in test_byview
 

Re: [Numpy-discussion] Outputting arrays.

2008-05-21 Thread Christopher Barker
Alexandra Geddes wrote:
 1. Is there a module or other code to write arrays to databases (they want 
 access databases)?

I don't think there is any way to write an Access database file from 
Python except by using COM on Windows.

 2. How can i write 2D arrays to textfiles with labels on the rows and columns?

I'm not sure if there is an out-of-the-box way, but it's easy to write 
by hand: just loop through the array.
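For instance, a minimal hand-rolled version (the file name, labels, and data are made up for illustration):

```python
import numpy as np

# Hypothetical data and labels, purely for illustration.
data = np.arange(6).reshape(2, 3)
row_labels = ["r0", "r1"]
col_labels = ["c0", "c1", "c2"]

with open("table.txt", "w") as f:
    # Header row: an empty corner cell, then the column labels.
    f.write("\t" + "\t".join(col_labels) + "\n")
    # One line per row: the row label, then the row's values.
    for label, row in zip(row_labels, data):
        f.write(label + "\t" + "\t".join(str(v) for v in row) + "\n")
```

Tab-separated output like this also loads straight into a spreadsheet.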

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 1:38 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 OK, I had to delete numpy from the site-packages and reinstall. Can we make
 the install do this? Otherwise we will end up with bogus error reports.

That's not really feasible in distutils, no.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread LB
This is really great news, and it seems very promising judging from
the first pages of the wiki that I've seen.

It's perhaps not the right place to say this, but I was wondering what
you would think about adding labels or categories to the description
of each function? I think it would really help newcomers searching
among the 400 functions of numpy if they could use some labels to
refine their searches.

Until now, the most efficient way of finding the numpy function that
fits my needs was to search for some words in the numpy example
list page. This was not always very fast, and labels like "array
creation", "shape manipulation", "index operation",
"arithmetic", etc., could simplify this kind of search.

 --
LB





Re: [Numpy-discussion] Outputting arrays.

2008-05-21 Thread Timothy Hochberg
On Wed, May 21, 2008 at 12:32 AM, Alexandra Geddes [EMAIL PROTECTED]
wrote:

 Hi.

 1. Is there a module or other code to write arrays to databases (they want
 access databases)?


If you have $$, I think you can use mxODBC. Otherwise, I believe that you
have to use COM as Chris suggested.



 2. How can i write 2D arrays to textfiles with labels on the rows and
 columns?


I'm not sure it will do what you want, but you might want to look at the csv
module.





 thanks!
 alex.







-- 
. __
. |-\
.
. [EMAIL PROTECTED]


Re: [Numpy-discussion] scipy dependency in numpy?

2008-05-21 Thread Jarrod Millman
On Wed, May 21, 2008 at 12:03 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 And why is this in ma/tests/test_core.py?

 set_local_path()
 from test_old_ma import *
 restore_path()

 and test_core then proceeds to shadow all the test classes in test_old_ma.
 That's just silly.

Let's just leave this for the moment.  As soon as we get 1.1.0 out, we
are going to switch to the nose testing framework in the 1.2
development series.  We can clean up the tests there.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Anne Archibald
2008/5/21 LB [EMAIL PROTECTED]:
 This is really great news, and it seems very promising judging from
 the first pages of the wiki that I've seen.

 It's perhaps not the right place to say this, but I was wondering what
 you would think about adding labels or categories to the description
 of each function? I think it would really help newcomers searching
 among the 400 functions of numpy if they could use some labels to
 refine their searches.

You're right that some kind of categorization would be very valuable.
I think that this is planned, but the doc marathon organizers can
presumably give a more detailed answer.

 Until now, the most efficient way of finding the numpy function that
 fits my needs was to search for some words in the numpy example
 list page. This was not always very fast, and labels like "array
 creation", "shape manipulation", "index operation",
 "arithmetic", etc., could simplify this kind of search.

In the short term, there's the numpy functions by category page on the wiki:
http://www.scipy.org/Numpy_Functions_by_Category
Of course, this page is already incomplete, and nobody is
systematically updating it. Really the right solution is what you
propose, annotating each function and then automatically generating
such a list.

Anne


[Numpy-discussion] NumPy Documentation Marathon progress

2008-05-21 Thread Stéfan van der Walt
Hi all,

I'd like to thank everyone who has responded so positively to the
NumPy Documentation Marathon.  We now have more than 10 contributors
enlisted, including (in order of signing up):

- Pauli Virtanen
- Emmanuelle Gouillart
- Joe Harrington
- Johann Cohen-Tanugi
- Alan Jackson
- Tim Cera
- Anne Archibald
- David Cournapeau
- Neil Martinsen-Burrell
- Rob Hetland

We have already updated more than 50 docstrings (not all of them major
rewrites, but we're working on it).  I'd like to invite any interested
parties to join our efforts on

http://sd-2116.dedibox.fr/doc

It's easy: create an account on the wiki, familiarise yourself with
the docstring standard, and start editing.

When the source trunk re-opens for commits after the 1.1 release, I
plan to upload the changes from the wiki.  Thereafter, I will
synchronise them on a weekly or twice-weekly basis.

The reference guide is also shaping up nicely.  To bring you up to
speed: I have written a parser for the current docstring standard,
which is used to convert the docstrings from the source tree to a
stand-alone document.  This document is rendered using Sphinx, the
application developed to generate the Python 2.6 documentation.  You
can see the progress on the reference guide here, in HTML or PDF:

http://mentat.za.net/numpy/refguide
http://mentat.za.net/numpy/refguide/NumPy.pdf

- Mathematical formulas now render correctly (using mathml in the
.xhtml files, and LaTeX in the PDF).  If you can't read the mathml
(see for example the `bartlett` function) you may need to install
additional fonts (e.g.,
http://www.cs.unb.ca/~bremner/docs/mathml-debian-firefox/).  For
internet explorer, a separate plugin (mathplayer) is required.

- I'm aware that the HTML-search is broken -- we'll fix that soon.

So, that's where we are now.  A lot of the organisation has been done,
and there is an editing framework in place (thanks Pauli,
Emmanuelle!).  Now, we just need to write some content!

Thanks again for all your contributions, and here's to many more!

Regards
Stéfan


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-21 Thread Stéfan van der Walt
Hi LB

2008/5/21 LB [EMAIL PROTECTED]:
 This is really great news, and it seems very promising judging from
 the first pages of the wiki that I've seen.

 It's perhaps not the right place to say this, but I was wondering what
 you would think about adding labels or categories to the description
 of each function? I think it would really help newcomers searching
 among the 400 functions of numpy if they could use some labels to
 refine their searches.

If you take a look at the new docstring standard, we've introduced a
new `index` tag, that can be used as follows:

.. index::
   :refguide: trigonometry, ufunc

We still need to decide on which categories to use, but the markup is there.

 Until now, the most efficient way of finding the numpy function that
 fits my needs was to search for some words in the numpy example
 list page. This was not always very fast, and labels like "array
 creation", "shape manipulation", "index operation",
 "arithmetic", etc., could simplify this kind of search.

Watch this space!

Cheers
Stéfan


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread Christopher Burns
You're right, I'll put out a new rc.  Sorry, I didn't see the other
emails this morning and assumed the only errors were the endian issues
in the test code.   Apparently these are still an issue though, so
I'll look into that.

Chris

On Wed, May 21, 2008 at 12:21 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
 Christopher Burns wrote:
 Nope.  I figured that would be included in the 1.1.1 release.

 It seems a few bugs have been found and fixed. It would be nice to put
 out another release candidate with those fixes at some point. Anyway:



Re: [Numpy-discussion] Outputting arrays.

2008-05-21 Thread Alan G Isaac
On Wed, 21 May 2008, Alexandra Geddes apparently wrote:
 2. How can i write 2D arrays to textfiles with labels on the rows and 
 columns? 

http://code.google.com/p/econpy/source/browse/trunk/utilities/text.py

hth,
Alan Isaac





Re: [Numpy-discussion] Ticket #798: `piecewise` exposes raw memory

2008-05-21 Thread Ondrej Certik
On Wed, May 21, 2008 at 11:53 AM, Ondrej Certik [EMAIL PROTECTED] wrote:
 On Wed, May 21, 2008 at 11:30 AM, Stéfan van der Walt [EMAIL PROTECTED] 
 wrote:
 Referring to
 http://scipy.org/scipy/numpy/ticket/798

 `piecewise` uses `empty` to allocate output memory.  If the conditions
 do not sufficiently cover the output, then raw memory is returned,
 e.g.,

 {{{
 import numpy as np
 np.piecewise([0,1,2],[True,False,False],[1])
 }}}
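For comparison (this is not the patch under review): `numpy.select` expresses the same kind of selection with an explicit `default`, so entries that no condition covers get a defined value instead of raw memory:

```python
import numpy as np

# Same inputs as the snippet above; entries where no condition is True
# fall back to `default` (0 here) rather than uninitialized memory.
out = np.select([np.array([True, False, False])],
                [np.ones(3, int)],
                default=0)
```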

 A patch which addresses the issue is available here for review:

 http://codereview.appspot.com/1105

 Documentation is being updated on the wiki.

 I'd like to invite everyone to take part in the review. It's fun, it's
 just talking, no coding. :)

Thanks Robert for taking part in the review:

http://codereview.appspot.com/1105/diff/22/122#newcode566

That's the way to go.

Ondrej


[Numpy-discussion] PY_ARRAY_UNIQUE_SYMBOL

2008-05-21 Thread Bill Spotz
I am running into a problem with a numpy-compatible extension module  
that I develop, and I believe it has to do with PY_ARRAY_UNIQUE_SYMBOL.

I set PY_ARRAY_UNIQUE_SYMBOL to PyTrilinos.  On my machine (Mac OS  
X), the module loads and works properly.  Another user, however (on  
Ubuntu), gets the following:

 ImportError: Failure linking new module: /usr/local/lib/python2.4/ 
site-
 packages/PyTrilinos/_Epetra.so: Symbol not found: _PyTrilinos
 Referenced from: /usr/local/lib/libpytrilinos.dylib
 Expected in: dynamic lookup

On my machine, I see:

 $ nm libpytrilinos.dylib | grep PyTrilinos
  U _PyTrilinos
  ...

and I'm not sure where the symbol actually IS defined.  I don't  
understand enough about how PY_ARRAY_UNIQUE_SYMBOL is actually used to  
hunt this down, especially through another user on a machine I don't  
have access to.

Any ideas on what I should be looking for?

Thanks

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: [EMAIL PROTECTED] **








Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 10:07 AM, Alan McIntyre [EMAIL PROTECTED]
wrote:

 On Wed, May 21, 2008 at 11:56 AM, Pierre GM [EMAIL PROTECTED] wrote:
  On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
  There's some commentary and a patch on NumPy ticket 793 on this issue:
 
  http://scipy.org/scipy/numpy/ticket/793
 
  OK, thanks a lot ! That's a C problem, then...

 It's probably worth mentioning that I'm not that familiar with all the
 innards of NumPy yet, so take my comments and patch on that issue with
 a (fairly large) grain of salt. ;)


This was introduced by Travis in r5138 as part of the matrix changes.

Chuck


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 2:39 PM, Charles R Harris [EMAIL PROTECTED]
wrote:



 On Wed, May 21, 2008 at 10:07 AM, Alan McIntyre [EMAIL PROTECTED]
 wrote:

 On Wed, May 21, 2008 at 11:56 AM, Pierre GM [EMAIL PROTECTED] wrote:
  On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
  There's some commentary and a patch on NumPy ticket 793 on this issue:
 
  http://scipy.org/scipy/numpy/ticket/793
 
  OK, thanks a lot ! That's a C problem, then...

 It's probably worth mentioning that I'm not that familiar with all the
 innards of NumPy yet, so take my comments and patch on that issue with
 a (fairly large) grain of salt. ;)


 This was introduced by Travis in r5138 as part of the matrix changes.


Alan's change looks a bit iffy to me because the looser check would pass
things like matrices and there would be no check for decreasing dimensions
to throw an error. So I think the safest thing for 1.1 is to back out
Travis' change and rethink this for 1.2.

Chuck


 Chuck





Re: [Numpy-discussion] NumPy Documentation Marathon progress

2008-05-21 Thread Travis E. Oliphant
Stéfan van der Walt wrote:
 Hi all,

 I'd like to thank everyone who has responded so positively to the
 NumPy Documentation Marathon.  We now have more than 10 contributors
 enlisted, including (in order of signing up):

 - Pauli Virtanen
 - Emmanuelle Gouillart
 - Joe Harrington
 - Johann Cohen-Tanugi
 - Alan Jackson
 - Tim Cera
 - Anne Archibald
 - David Cournapeau
 - Neil Martinsen-Burrell
 - Rob Hetland


 http://mentat.za.net/numpy/refguide
 http://mentat.za.net/numpy/refguide/NumPy.pdf
   
This is looking great!  Hearty thanks to all those who are helping. 

-Travis




Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 3:10 PM, Charles R Harris [EMAIL PROTECTED]
wrote:



 On Wed, May 21, 2008 at 2:39 PM, Charles R Harris 
 [EMAIL PROTECTED] wrote:



 On Wed, May 21, 2008 at 10:07 AM, Alan McIntyre [EMAIL PROTECTED]
 wrote:

 On Wed, May 21, 2008 at 11:56 AM, Pierre GM [EMAIL PROTECTED]
 wrote:
  On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
  There's some commentary and a patch on NumPy ticket 793 on this issue:
 
  http://scipy.org/scipy/numpy/ticket/793
 
  OK, thanks a lot ! That's a C problem, then...

 It's probably worth mentioning that I'm not that familiar with all the
 innards of NumPy yet, so take my comments and patch on that issue with
 a (fairly large) grain of salt. ;)


 This was introduced by Travis in r5138 as part of the matrix changes.


 Alan's change looks a bit iffy to me because the looser check would pass
 things like matrices and there would be no check for decreasing dimensions
 to throw an error. So I think the safest thing for 1.1 is to back out
 Travis' change and rethink this for 1.2.

 Chuck


I backed out r5138 and, with a few test fixes, everything passes. However,
there is now a warning exposed in the masked array tests:

/usr/lib/python2.5/site-packages/numpy/ma/core.py:1357: UserWarning:
MaskedArray.__setitem__ on fields: The mask is NOT affected!
  warnings.warn("MaskedArray.__setitem__ on fields: 

Pierre?

Chuck


[Numpy-discussion] matrices and __radd__

2008-05-21 Thread Keith Goodman
I have a class that stores some of its data in a matrix. I can't
figure out how to do right adds with a matrix. Here's a toy example:

class Myclass(object):

    def __init__(self, x, a):
        self.x = x  # numpy matrix
        self.a = a  # some attribute, say, an integer

    def __add__(self, other):
        # Assume other is a numpy matrix
        return Myclass(self.x + other, self.a + 1)

    def __radd__(self, other):
        print other

>>> from myclass import Myclass
>>> import numpy.matlib as mp
>>> m = Myclass(mp.zeros((2,2)), 1)
>>> x = mp.asmatrix(range(4)).reshape(2,2)
>>> radd = x + m
0
1
2
3

The matrix.__add__ sends one element at a time. That sounds slow. Do I
have to grab the corresponding element of self.x and add it to the
element passed in by matrix.__add__? Or is there a better way?


Re: [Numpy-discussion] matrices and __radd__

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 5:28 PM, Keith Goodman [EMAIL PROTECTED] wrote:
 I have a class that stores some of its data in a matrix. I can't
 figure out how to do right adds with a matrix. Here's a toy example:

 class Myclass(object):

     def __init__(self, x, a):
         self.x = x  # numpy matrix
         self.a = a  # some attribute, say, an integer

     def __add__(self, other):
         # Assume other is a numpy matrix
         return Myclass(self.x + other, self.a + 1)

     def __radd__(self, other):
         print other

 >>> from myclass import Myclass
 >>> import numpy.matlib as mp
 >>> m = Myclass(mp.zeros((2,2)), 1)
 >>> x = mp.asmatrix(range(4)).reshape(2,2)
 >>> radd = x + m
 0
 1
 2
 3

 The matrix.__add__ sends one element at a time. That sounds slow.

Well, what's actually going on here is this: ndarray.__add__() looks
at m and decides that it doesn't look like anything it can make an
array from. However, it does have an __add__() method, so it assumes
that it is intended to be a scalar. It uses broadcasting to treat it
as if it were an object array of the shape of x with each element
identical. Then it adds together the two arrays element-wise. Each
element-wise addition triggers the MyClass.__radd__() call.
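A minimal sketch of that behaviour (the class name is made up, and a plain array stands in for the matrix):

```python
import numpy as np

class Elementish(object):
    # No __array_priority__, so numpy treats the instance as an opaque
    # scalar: it is broadcast to x's shape as an object array and added
    # element-wise, triggering __radd__ once per element.
    def __radd__(self, other):
        return ('radd', other)

x = np.arange(4).reshape(2, 2)
res = x + Elementish()   # object array of per-element __radd__ results
```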

 Do I
 have to grab the corresponding element of self.x and add it to the
 element passed in by matrix.__add__? Or is there a better way?

There probably is, but not being familiar with your actual use case,
I'm not sure what it would be.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] PY_ARRAY_UNIQUE_SYMBOL

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 3:34 PM, Bill Spotz [EMAIL PROTECTED] wrote:
 I am running into a problem with a numpy-compatible extension module
 that I develop, and I believe it has to do with PY_ARRAY_UNIQUE_SYMBOL.

 I set PY_ARRAY_UNIQUE_SYMBOL to PyTrilinos.

Why? My understanding is also limited, but it does not appear to me to
be something one should change.

 On my machine (Mac OS
 X), the module loads and works properly.  Another user, however (on
 Ubuntu), gets the following:

 ImportError: Failure linking new module: /usr/local/lib/python2.4/
 site-
 packages/PyTrilinos/_Epetra.so: Symbol not found: _PyTrilinos
 Referenced from: /usr/local/lib/libpytrilinos.dylib
 Expected in: dynamic lookup

??? How did an Ubuntu user get a hold of a .dylib?

 On my machine, I see:

 $ nm libpytrilinos.dylib | grep PyTrilinos
  U _PyTrilinos
  ...

 and I'm not sure where the symbol actually IS defined.

It gets generated into __multiarray_api.h:

#if defined(PY_ARRAY_UNIQUE_SYMBOL)
#define PyArray_API PY_ARRAY_UNIQUE_SYMBOL
#endif

#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY)
extern void **PyArray_API;
#else
#if defined(PY_ARRAY_UNIQUE_SYMBOL)
void **PyArray_API;
#else
static void **PyArray_API=NULL;
#endif
#endif


How did this symbol end up in a .dylib rather than a Python extension
module? I think it might be difficult to arrange for a non-extension
dynamic library to call into numpy.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] matrices and __radd__

2008-05-21 Thread Keith Goodman
On Wed, May 21, 2008 at 4:07 PM, Robert Kern [EMAIL PROTECTED] wrote:
 On Wed, May 21, 2008 at 5:28 PM, Keith Goodman [EMAIL PROTECTED] wrote:
 I have a class that stores some of its data in a matrix. I can't
 figure out how to do right adds with a matrix. Here's a toy example:

 class Myclass(object):

     def __init__(self, x, a):
         self.x = x  # numpy matrix
         self.a = a  # some attribute, say, an integer

     def __add__(self, other):
         # Assume other is a numpy matrix
         return Myclass(self.x + other, self.a + 1)

     def __radd__(self, other):
         print other

 >>> from myclass import Myclass
 >>> import numpy.matlib as mp
 >>> m = Myclass(mp.zeros((2,2)), 1)
 >>> x = mp.asmatrix(range(4)).reshape(2,2)
 >>> radd = x + m
 0
 1
 2
 3

 The matrix.__add__ sends one element at a time. That sounds slow.

 Well, what's actually going on here is this: ndarray.__add__() looks
 at m and decides that it doesn't look like anything it can make an
 array from. However, it does have an __add__() method, so it assumes
 that it is intended to be a scalar. It uses broadcasting to treat it
 as if it were an object array of the shape of x with each element
 identical. Then it adds together the two arrays element-wise. Each
 element-wise addition triggers the MyClass.__radd__() call.

Oh, broadcasting. OK that makes sense.

 Do I
 have to grab the corresponding element of self.x and add it to the
 element passed in by matrix.__add__? Or is there a better way?

 There probably is, but not being familiar with your actual use case,
 I'm not sure what it would be.

From

http://projects.scipy.org/pipermail/numpy-discussion/2006-December/025075.html

I see that the trick is to add

__array_priority__ = 10

to my class.

class Myclass(object):

    __array_priority__ = 10

    def __init__(self, x, a):
        self.x = x  # numpy matrix
        self.a = a  # some attribute, say, an integer

    def __add__(self, other):
        # Assume other is a numpy matrix
        return Myclass(self.x + other, 2*self.a)

    __radd__ = __add__

>>> from myclass import Myclass
>>> import numpy.matlib as mp
>>> m = Myclass(mp.zeros((2,2)), 1)
>>> x = mp.asmatrix(range(4)).reshape(2,2)
>>> radd = x + m
>>> radd.a
2
>>> radd.x
matrix([[ 0.,  1.],
        [ 2.,  3.]])
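The same trick as a self-contained sketch (no separate myclass module; the class name and priority value here are arbitrary, and plain arrays stand in for numpy.matlib matrices):

```python
import numpy as np

class Wrapped(object):
    # Any value above ndarray's default priority (0.0) makes
    # ndarray.__add__ return NotImplemented, so Python falls back to
    # Wrapped.__radd__ with the *whole* array, not element by element.
    __array_priority__ = 15.0

    def __init__(self, x, a):
        self.x = x  # a numpy array
        self.a = a  # some attribute, say, an integer

    def __add__(self, other):
        return Wrapped(self.x + other, 2 * self.a)

    __radd__ = __add__

m = Wrapped(np.zeros((2, 2)), 1)
x = np.arange(4).reshape(2, 2).astype(float)
radd = x + m   # one call to Wrapped.__radd__, with the full array
```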


Re: [Numpy-discussion] PY_ARRAY_UNIQUE_SYMBOL

2008-05-21 Thread Bill Spotz
On May 21, 2008, at 5:15 PM, Robert Kern wrote:

 On Wed, May 21, 2008 at 3:34 PM, Bill Spotz [EMAIL PROTECTED]  
 wrote:
 I am running into a problem with a numpy-compatible extension module
 that I develop, and I believe it has to do with  
 PY_ARRAY_UNIQUE_SYMBOL.

 I set PY_ARRAY_UNIQUE_SYMBOL to PyTrilinos.

 Why? My understanding is also limited, but it does not appear to me to
 be something one should change.

My understanding is that if your extension module requires more than  
one source file, then this macro needs to be set to something  
reasonably unique to your project.  And that NO_IMPORT_ARRAY needs to  
be set in those files that do not call import_array().

 On my machine (Mac OS
 X), the module loads and works properly.  Another user, however (on
 Ubuntu), gets the following:

ImportError: Failure linking new module: /usr/local/lib/python2.4/
 site-
packages/PyTrilinos/_Epetra.so: Symbol not found: _PyTrilinos
Referenced from: /usr/local/lib/libpytrilinos.dylib
Expected in: dynamic lookup

 ??? How did an Ubuntu user get a hold of a .dylib?

I could have this wrong.  He is asking me questions about Mac OS X and  
Ubuntu simultaneously.  But going back through his emails, I thought I  
had this right.

 On my machine, I see:

$ nm libpytrilinos.dylib | grep PyTrilinos
 U _PyTrilinos
 ...

 and I'm not sure where the symbol actually IS defined.

 It gets generated into __multiarray_api.h:

 #if defined(PY_ARRAY_UNIQUE_SYMBOL)
 #define PyArray_API PY_ARRAY_UNIQUE_SYMBOL
 #endif

 #if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY)
 extern void **PyArray_API;
 #else
 #if defined(PY_ARRAY_UNIQUE_SYMBOL)
 void **PyArray_API;
 #else
 static void **PyArray_API=NULL;
 #endif
 #endif


 How did this symbol end up in a .dylib rather than a Python extension
 module? I think it might be difficult to arrange for a non-extension
 dynamic library to call into numpy.

It is designed so that the extension modules (plural -- there are many  
of them) link to the dylib that has the symbol.  The code in this  
dylib is only ever accessed through those extension modules.

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: [EMAIL PROTECTED] **








Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-21 Thread Thomas Hrabe
iBook G4 osX 10.5.2

hope this helps!

Numpy is installed in 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) [GCC 4.0.1 (Apple 
Computer, Inc. build 5363)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
.FF.
==
FAIL: test_basic (numpy.core.tests.test_multiarray.TestView)
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_multiarray.py,
 line 843, in test_basic
assert_array_equal(y, [67305985, 134678021])
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py,
 line 248, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py,
 line 240, in assert_array_compare
assert cond, msg
AssertionError: 
Arrays are not equal

(mismatch 100.0%)
 x: array([16909060, 84281096])
 y: array([ 67305985, 134678021])

==
FAIL: test_keywords (numpy.core.tests.test_multiarray.TestView)
--
Traceback (most recent call last):
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_multiarray.py,
 line 852, in test_keywords
assert_array_equal(y,[[513]])
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py,
 line 248, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
  File 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py,
 line 240, in assert_array_compare
assert cond, msg
AssertionError: 
Arrays are not equal

(mismatch 100.0%)
 x: array([[258]], dtype=int16)
 y: array([[513]])

--
Ran 1004 tests in 6.159s

FAILED (failures=2)
unittest._TextTestResult run=1004 errors=0 failures=2



Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 5:56 PM, Pierre GM [EMAIL PROTECTED] wrote:

 On Wednesday 21 May 2008 17:57:30 Charles R Harris wrote:
  On Wed, May 21, 2008 at 3:10 PM, Charles R Harris
  [EMAIL PROTECTED]
 
  wrote:
   On Wed, May 21, 2008 at 2:39 PM, Charles R Harris 
  
   [EMAIL PROTECTED] wrote:
   On Wed, May 21, 2008 at 10:07 AM, Alan McIntyre
   [EMAIL PROTECTED]
  
   wrote:
   On Wed, May 21, 2008 at 11:56 AM, Pierre GM [EMAIL PROTECTED]
  
   wrote:
On Wednesday 21 May 2008 11:39:32 Alan McIntyre wrote:
There's some commentary and a patch on NumPy ticket 793 on this
issue:
   
http://scipy.org/scipy/numpy/ticket/793
   
OK, thanks a lot ! That's a C problem, then...
  
   It's probably worth mentioning that I'm not that familiar with all
 the
   innards of NumPy yet, so take my comments and patch on that issue
 with
   a (fairly large) grain of salt. ;)
  
   This was introduced by Travis in r5138 as part of the matrix changes.
  
   Alan's change looks a bit iffy to me because the looser check would
 pass
   things like matrices and there would be no check for decreasing
   dimensions to throw an error. So I think the safest thing for 1.1 is to
   back out Travis' change and rethink this for 1.2.
  
   Chuck
 
  I backed out r5138 and, with a few test fixes, everything passes.
 However,
  there is now a warning exposed in the masked array tests:
 
  /usr/lib/python2.5/site-packages/numpy/ma/core.py:1357: UserWarning:
  MaskedArray.__setitem__ on fields: The mask is NOT affected!
warnings.warn(MaskedArray.__setitem__ on fields: 
 
  Pierre?

 Well, that's a warning all right, not an error.

 With exotic dtypes (that is, not int, bool, float, or complex, but named
 fields), setting a field doesn't affect the mask, hence the warning.

 The reason behind this behavior is that the mask of a MaskedArray is only a
 boolean array: you have (at most) one boolean per element/record. Therefore,
 you can mask/unmask a full record, but not a specific field. Masking
 particular fields is possible with MaskedRecords, but with an overhead
 that wasn't worth putting into MaskedArray.
 Because I've been bitten a couple of times by this mechanism, I figured that
 a warning would be the easiest way to remember that MaskedArrays don't
 handle records very well.

 So, nothing to worry about.


Should we disable the warning for the tests? It's a bit unnerving and likely
to generate mail calling attention to it.

Chuck
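A small sketch of the record-masking behaviour Pierre describes (the names here are illustrative; the exact layout of the mask has changed across numpy versions, but whole-record masking works the same way):

```python
import numpy as np
import numpy.ma as ma

# A structured ("exotic") dtype: each element is a record of named fields.
dt = np.dtype([('x', int), ('y', float)])
rec = ma.masked_array([(1, 2.0), (3, 4.0)], dtype=dt)

# You can mask a whole record...
rec[1] = ma.masked

# ...and both fields of that record are now masked.
print(ma.getmaskarray(rec['x']))   # second element masked
print(ma.getmaskarray(rec['y']))   # second element masked
```

Per-field masking, as noted in the thread, is the province of MaskedRecords (numpy.ma.mrecords), not plain MaskedArray.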


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 7:12 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 Should we disable the warning for the tests? It's a bit unnerving and likely
 to generate mail calling attention to it.

Yes. Known warnings should be explicitly silenced in tests.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Jarrod Millman
On Wed, May 21, 2008 at 5:12 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 Should we disable the warning for the tests? It's a bit unnerving and likely
 to generate mail calling attention to it.

+1
I tend to agree that it is disconcerting to have warnings pop up in the tests.
-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Pierre GM

 Should we disable the warning for the tests? It's a bit unnerving and
 likely to generate mail calling attention to it.

I don't have a problem with that. Is there some kind of trapping mechanism we 
can use ? A la fail_unless_raise ?


Re: [Numpy-discussion] 1.1.0rc1 RuntimeErrors

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 7:23 PM, Pierre GM [EMAIL PROTECTED] wrote:

 Should we disable the warning for the tests? It's a bit unnerving and
 likely to generate mail calling attention to it.

 I don't have a problem with that. Is there some kind of trapping mechanism we
 can use ?

You can filter warnings using warnings.filterwarnings() or
warnings.simplefilter().

  http://docs.python.org/dev/library/warnings

 A la fail_unless_raise ?

I'm not sure I see the connection. Do you need to test that the
warning is issued? If so, then make a filter with the action 'error'
and use self.assertRaises().
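Both approaches Robert mentions can be sketched with the stdlib warnings module alone (the warning text is just the one from ma/core.py, reproduced for illustration; catch_warnings is a later addition than the Python 2.5 of this thread, but it is the idiomatic way to scope a filter to a single test):

```python
import unittest
import warnings

def touch_field():
    # Stand-in for the MaskedArray.__setitem__-on-fields code path.
    warnings.warn("MaskedArray.__setitem__ on fields: The mask is NOT affected!",
                  UserWarning)

class TestFieldWarning(unittest.TestCase):
    def test_silenced(self):
        # Explicitly silence the known warning so it doesn't clutter test runs.
        with warnings.catch_warnings():
            warnings.simplefilter("ignore", UserWarning)
            touch_field()   # nothing is printed

    def test_is_issued(self):
        # Or promote it to an error and assert that it is actually raised.
        with warnings.catch_warnings():
            warnings.simplefilter("error", UserWarning)
            self.assertRaises(UserWarning, touch_field)
```

Run with `python -m unittest <module>`; the filters are restored when each `with` block exits, so other tests are unaffected.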

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


[Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Charles R Harris
David,

I'm not sure that fix is completely correct. The out keyword is funny and
I'm not sure what the specs are supposed to be, but generally the output is
cast rather than an error raised. We need an official spec here because the
documentation of this feature is essentially random. Note that the shapes
don't really have to match, either.

In [1]: x = ones(5)

In [2]: out = ones(5, dtype=int8)

In [3]: cumsum(x, out=out)
Out[3]: array([1, 2, 3, 4, 5], dtype=int8)

In [4]: out = empty((5,1))

In [5]: cumsum(x, out=out)
Out[5]:
array([[ 1.],
   [ 2.],
   [ 3.],
   [ 4.],
   [ 5.]])

OTOH, out = empty((1,5)) doesn't work but doesn't raise an error. Confused?
Me too.

Chuck


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 8:48 PM, Charles R Harris [EMAIL PROTECTED]
wrote:

 David,

 I'm not sure that fix is completely correct. The out keyword is funny and
 I'm not sure what the specs are supposed to be, but generally the output is
 cast rather than an error raised. We need an official spec here because the
 documentation of this feature is essentially random. Note that the shapes
 don't really have to match, either.

 In [1]: x = ones(5)

 In [2]: out = ones(5, dtype=int8)

 In [3]: cumsum(x, out=out)
 Out[3]: array([1, 2, 3, 4, 5], dtype=int8)

 In [4]: out = empty((5,1))

 In [5]: cumsum(x, out=out)
 Out[5]:
 array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.]])

 OTOH, out = empty((1,5)) doesn't work but doesn't raise an error.
 Confused? Me too.


And I'm not sure self-desc needs its reference count decremented;
PyArray_FromArray is one of those vicious, nasty functions with side effects,
and it might decrement the count itself. Note that the reference count is not
decremented elsewhere. It's a capital offense to write such functions, but
there it is.

On a lesser note, there is trailing whitespace on every new line except one.

Chuck


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread David Cournapeau
Charles R Harris wrote:
 David,

 I'm not sure that fix is completely correct. The out keyword is funny 
 and I'm not sure what the specs are supposed to be, but generally the 
 output is cast rather than an error raised. 

I think the out argument is one of those things that is rather a mess 
right now in numpy. The functions that accept it do not always mention 
it, and as you say, there are various different behaviours.

What are the uses of the out argument? The obvious one is saving 
memory, but are there others? Automatic casting would break the memory 
saving.

 We need an official spec here because the documentation of this 
 feature is essentially random. Note that the shapes don't really have 
 to match, either.

 In [1]: x = ones(5)

 In [2]: out = ones(5, dtype=int8)

 In [3]: cumsum(x, out=out)
 Out[3]: array([1, 2, 3, 4, 5], dtype=int8)

I don't feel like this should work (because of the dtype).


 In [4]: out = empty((5,1))

 In [5]: cumsum(x, out=out)
 Out[5]:
 array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.]])

 OTOH, out = empty((1,5)) doesn't work but doesn't raise an error. 

Not working and no error is obviously wrong. Note that after playing a 
bit with this, I got segfaults when exiting python (I am working on 
reproducing it).

cheers,

David


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread David Cournapeau
Charles R Harris wrote:


 And I'm not sure self-desc needs its reference count decremented, 
 PyArray_FromArray is one of those vicious, nasty functions with side 
 effects and might decrement the count itself.

Might? What do you mean by might decrement? If the call to 
PyArray_FromArray fails, no references are stolen, right?

 Note that the reference count is not decremented elsewhere. It's a 
 capital offense to write such functions, but there it is.

 On a lesser note, there is trailing whitespace on every new line 
 except one.

This is easier to fix :)

cheers,

David


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Travis E. Oliphant
David Cournapeau wrote:
 Charles R Harris wrote:
   
 And I'm not sure self-desc needs its reference count decremented, 
 PyArray_FromArray is one of those vicious, nasty functions with side 
 effects and might decrement the count itself.
 

 Might? What do you mean by might decrement? If the call to 
 PyArray_FromArray fails, no references are stolen, right?
   

No, that's not right.   The reference is stolen if it fails as well.   
This is true of all descriptor data-types.  Perhaps it is weird, but it 
was a lot easier to retro-fit Numeric PyArray_Descr as a Python object 
that way.

-Travis



Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 9:03 PM, David Cournapeau 
[EMAIL PROTECTED] wrote:

 Charles R Harris wrote:
 
 
  And I'm not sure self-desc needs its reference count decremented,
  PyArray_FromArray is one of those vicious, nasty functions with side
  effects and might decrement the count itself.

 Might? What do you mean by might decrement? If the call to
 PyArray_FromArray fails, no references are stolen, right?


Maybe, maybe not. We need to look at the function and document it. I think
these sorts of functions are an invitation to reference-counting bugs, of
which valgrind says we have several. Really, all the increments and
decrements should be inside PyArray_FromArray, but calls to this function
are scattered all over.


  Note that the reference count is not decremented elsewhere. It's a
  capital offense to write such functions, but there it is.
 
  On a lesser note, there is trailing whitespace on every new line
  except one.

 This is easier to fix :)


Not that I think we are in any worse shape after the fix than before, but I
do think we should pretty much leave things alone until 1.1 is out the door
and leave further fixes to 1.1.1.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Travis E. Oliphant
Charles R Harris wrote:
 Really, all the increments and decrements should be inside 
 PyArray_FromArray, but calls to this function are scattered all over.
I don't understand what you mean by this statement. All functions 
that return an object and take a PyArray_Descr object steal a reference 
to the descriptor (even if it fails). That's the rule.

-Travis



Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 9:32 PM, Travis E. Oliphant [EMAIL PROTECTED]
wrote:

 Charles R Harris wrote:
  Really, all the increments and decrements should be inside
  PyArray_FromArray, but calls to this function are scattered all over.
 I don't understand what you mean by this statement. All functions
 that return an object and take a PyArray_Descr object steal a reference
 to the descriptor (even if it fails). That's the rule.


Why should it not increment the reference itself? Note that calls to this
function are normally preceded by incrementing the reference, probably
because one wants to keep it around. I think it would be clearer to have the
rule: you increment it, you decrement it. That way everything is in one
obvious place and you don't have to concern yourself with what happens
inside PyArray_FromArray. Functions with side effects are almost always a
bad idea and lead to bugs in practice.

Chuck


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread David Cournapeau
Travis E. Oliphant wrote:
 No, that's not right.   The reference is stolen if it fails as well.   
 This is true of all descriptor data-types.  Perhaps it is weird, but it 
 was a lot easier to retro-fit Numeric PyArray_Descr as a Python object 
 that way.
   

Thanks for the clarification. I fixed the code accordingly,

cheers,

David


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread David Cournapeau
David Cournapeau wrote:

 Thanks for the clarification. I fixed the code accordingly,
   

Ok, you beat me :)

cheers,

David


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Charles R Harris
On Wed, May 21, 2008 at 8:56 PM, David Cournapeau 
[EMAIL PROTECTED] wrote:

 Charles R Harris wrote:
  David,
 
  I'm not sure that fix is completely correct. The out keyword is funny
  and I'm not sure what the specs are supposed to be, but generally the
  output is cast rather than an error raised.

 I think the out argument is one of those things that is rather a mess
 right now in numpy. The functions that accept it do not always mention
 it, and as you say, there are various different behaviours.

 What are the uses of the out argument? The obvious one is saving
 memory, but are there others? Automatic casting would break the memory
 saving.


I've been contemplating this and I think you are right. The out parameter is
for those special bits that need efficiency and should be as simple as
possible. That means: same type and shape as the normal output, no appeal.
This is especially so as a reference to the output is what the function
returns when out is present, and the return type should not depend on the
type of the out parameter.
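Under that rule, a well-formed use of out supplies a buffer with the result's exact shape and dtype, and the call hands that same buffer back (a sketch with current numpy names):

```python
import numpy as np

x = np.ones(5)

# Preallocate a buffer with the same shape and dtype as cumsum's
# natural result; the call fills it in place instead of allocating.
out = np.empty(5, dtype=x.dtype)
res = np.cumsum(x, out=out)

# The function returns a reference to the buffer we supplied.
assert res is out
print(res)   # [1. 2. 3. 4. 5.]
```

Because the return value is the supplied buffer itself, any implicit cast of out would make the return type depend on the caller's buffer, which is exactly the ambiguity the thread is arguing against.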

Chuck


[Numpy-discussion] Closing some tickets.

2008-05-21 Thread Charles R Harris
All,

Can we close ticket #117  and add Pearu's comment to the FAQ?
http://projects.scipy.org/scipy/numpy/ticket/117

Can someone with MSVC 2005 check if we can close ticket #164?
http://projects.scipy.org/scipy/numpy/ticket/164

Chuck


Re: [Numpy-discussion] Closing some tickets.

2008-05-21 Thread Robert Kern
On Wed, May 21, 2008 at 11:42 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 All,

 Can we close ticket #117  and add Pearu's comment to the FAQ?
 http://projects.scipy.org/scipy/numpy/ticket/117

Yes.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Travis E. Oliphant
Charles R Harris wrote:


 On Wed, May 21, 2008 at 9:32 PM, Travis E. Oliphant 
 [EMAIL PROTECTED] wrote:

 Charles R Harris wrote:
  Really, all the increments and decrements should be inside
  PyArray_FromArray, but calls to this function are scattered all
 over.
 I don't understand what you mean by this statement. All functions
 that return an object and take a PyArray_Descr object steal a
 reference
 to the descriptor (even if it fails). That's the rule.


 Why should it not increment the reference itself? Note that calls to 
 this function are normally preceded by incrementing the reference, 
 probably because one wants to keep it around.
I wouldn't say normally.   I would say sometimes.   

Normally you create a reference to the data-type and want 
PyArray_FromAny and friends to steal it (i.e. PyArray_DescrFromType). 

I wouldn't call stealing a reference count a side effect, but even if 
you want to call it that, it can't really change without a *huge* 
re-working effort for almost zero gain.   You would also have to re-work 
all the macros that take type numbers and construct data-type objects 
for passing to these functions.   I don't see the benefit at all.
 I think it would be clearer to have the rule: you increment it, you 
 decrement it.
Maybe, but Python's own C-API doesn't always follow that rule, there are 
functions that steal references.   Remember, PyArray_Descr was 
retrofitted as a Python Object.  It didn't use to be one.   This steal 
rule was the cleanest I could come up with --- i.e. it wasn't an idle 
decision.

It actually makes some sense, because the returned array is going to 
own the reference count to the data-type object (just as PyList_SetItem 
steals a reference when setting an item in a list).

-Travis



Re: [Numpy-discussion] Fix to #789 maybe not right.

2008-05-21 Thread Travis E. Oliphant
Charles R Harris wrote:

 I  agree with all that, which is why I'm not advocating a change. But 
 it does raise the bar for working with the C code and I think the 
 current case is an example of that.

Yes it does.  I also agree that reference counting is the hardest part 
of coding with the Python C-API. 


  I think it would be clearer to have the rule: you increment it, you
  decrement it.
 Maybe, but Python's own C-API doesn't always follow that rule,
 there are
 functions that steal references.   Remember, PyArray_Descr was
 retrofitted as a Python Object.  It didn't use to be one.   This steal
 rule was the cleanest I could come up with --- i.e. it wasn't an idle
 decision.


 I realize that Python does this too, I also note that getting 
 reference counting right is one of the more difficult aspects of 
 writing Python extension code. The more programmers have to know and 
 keep in mind, the more likely they are to make mistakes.

Agreed.

-teo

