Re: [Numpy-discussion] Numpy on AIX 5.3

2008-07-09 Thread David Cournapeau
On Tue, Jul 8, 2008 at 5:08 PM, Marek Wojciechowski
[EMAIL PROTECTED] wrote:

 cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] + cxx.linker_so[2:]

 in line 303 of ccompiler.py in distutils.


Should be fixed in r5368. I will merge the change into 1.1.1 as well.

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] A couple of testing issues

2008-07-09 Thread Alan McIntyre
Hi all,

I wanted to point out a couple of things about the new test framework
that you should keep in mind if you're writing tests:

- Don't use NumpyTestCase any more, just use TestCase (which is
available if you do from numpy.testing import *).  Using NumpyTestCase
now causes a deprecation warning.
- Test functions and methods will only be picked up based on name if
they begin with "test"; check_* will no longer be seen as a test
function (a minimal example of the new style follows below).
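
Here is a rough sketch of a test in the new style (the class and test
names are just illustrative):

from numpy.testing import TestCase, assert_array_equal
import numpy as np

class TestRavel(TestCase):
    def test_ravel_is_flat(self):      # picked up: name starts with "test"
        a = np.arange(6).reshape(2, 3)
        assert_array_equal(a.ravel(), np.arange(6))

    def check_old_style(self):         # no longer collected as a test
        pass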

I figured I should mention these since there probably hasn't been a
general announcement about the testing changes.

Thanks,
Alan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] alterdot and restoredot

2008-07-09 Thread Pauli Virtanen

Tue, 08 Jul 2008 23:03:52 -0500, Robert Kern wrote:
 On Tue, Jul 8, 2008 at 14:01, Keith Goodman [EMAIL PROTECTED] wrote:
 I don't know what to write for a doc string for alterdot and
 restoredot.
 
 Then maybe you're the best one to figure it out. What details do you
 think are missing from the current docstrings? What questions do they
 leave you with?

I have the following for starters:

- Are these meant as user-visible functions?

- Should the user call them? When? What is the advantage?

- Are BLAS routines used by default? (And if not, why not?)

- Which operations do the functions exactly affect?
  It seems that alterdot sets the dot function slot to a BLAS
  version, but what operations does this affect?

Pauli

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Documentation: topical docs and reviewing our work

2008-07-09 Thread Stéfan van der Walt
Hi all,

A `numpy.doc` sub-module has been added, which contains documentation
for topics such as indexing, broadcasting, array operations etc.
These can be edited from the documentation wiki:

http://sd-2116.dedibox.fr/pydocweb/doc/numpy.doc/

If you'd like to document a topic that is not there, let me know and
I'll add it.

Further, we have documented a large number of functions, and the list
is growing by the day.  If you go to the docstring summary page:

http://sd-2116.dedibox.fr/pydocweb/doc/

the ones ready for review are marked in pink, right at the top.

Please log in and leave comments on those.  Your input would be much
appreciated!

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Another reference count leak: ticket #848

2008-07-09 Thread Michael Abbott
There are three separate patches in this message plus some remarks on 
stealing reference counts at the bottom.


On Tue, 8 Jul 2008, Travis E. Oliphant wrote:

 Michael Abbott wrote:
  On Tue, 8 Jul 2008, Travis E. Oliphant wrote:
  The first part of this patch is good.  The second is not needed.   
  I don't see that.
 Don't forget that PyArray_FromAny consumes the reference even if it 
 returns with an error.

Oh dear.  That's not good.  

Well then, I need to redo my patch.  Here's the new patch for 
..._arrtype_new:


commit 431d99f40ca200201ba59c74a88b0bd972022ff0
Author: Michael Abbott [EMAIL PROTECTED]
Date:   Tue Jul 8 10:10:59 2008 +0100

Another reference leak using PyArray_DescrFromType

This change fixes two issues: a spurious ADDREF on a typecode returned
from PyArray_DescrFromType and an awkward interaction with PyArray_FromAny.

diff --git a/numpy/core/src/scalartypes.inc.src 
b/numpy/core/src/scalartypes.inc.src
index 3feefc0..7d3e562 100644
--- a/numpy/core/src/scalartypes.inc.src
+++ b/numpy/core/src/scalartypes.inc.src
@@ -1886,7 +1886,6 @@ static PyObject *
 if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL;
 
 typecode = PyArray_DescrFromType(PyArray_@TYPE@);
-Py_INCREF(typecode);
 if (obj == NULL) {
 #if @default@ == 0
 char *mem;
@@ -1903,8 +1902,12 @@ static PyObject *
 goto finish;
 }
 
+Py_XINCREF(typecode);
 arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
-if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr;
+if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) {
+Py_XDECREF(typecode);
+return arr;
+}
 robj = PyArray_Return((PyArrayObject *)arr);
 
 finish:


I don't think we can dispense with the extra INCREF and DECREF.


Looking at the uses of PyArray_FromAny I can see the motivation for this 
design: core/include/numpy/ndarrayobject.h has a lot of calls which take a 
value returned by PyArray_DescrFromType as argument.  This has prompted me 
to take a trawl through the code to see what else is going on, and I note 
a couple more issues with patches below.


In the patch below the problem being fixed is that the first call to 
PyArray_FromAny can result in the erasure of dtype *before* Py_INCREF is 
called.  Perhaps you can argue that this only occurs when NULL is 
returned...


diff --git a/numpy/core/blasdot/_dotblas.c b/numpy/core/blasdot/_dotblas.c
index e2619b6..0b34ec7 100644
--- a/numpy/core/blasdot/_dotblas.c
+++ b/numpy/core/blasdot/_dotblas.c
@@ -234,9 +234,9 @@ dotblas_matrixproduct(PyObject *dummy, PyObject *args)
 }
 
 dtype = PyArray_DescrFromType(typenum);
-ap1 = (PyArrayObject *)PyArray_FromAny(op1, dtype, 0, 0, ALIGNED, NULL);
-if (ap1 == NULL) return NULL;
 Py_INCREF(dtype);
+ap1 = (PyArrayObject *)PyArray_FromAny(op1, dtype, 0, 0, ALIGNED, NULL);
+if (ap1 == NULL) { Py_DECREF(dtype); return NULL; }
 ap2 = (PyArrayObject *)PyArray_FromAny(op2, dtype, 0, 0, ALIGNED, NULL);
 if (ap2 == NULL) goto fail;
 

The next patch deals with an interestingly subtle memory leak in
_strings_richcompare where, if casting to a common type fails, a
reference count will be leaked.  Actually this one has nothing to do with
PyArray_FromAny, but I spotted it in passing.


diff --git a/numpy/core/src/arrayobject.c b/numpy/core/src/arrayobject.c
index ee4e945..2294b8d 100644
--- a/numpy/core/src/arrayobject.c
+++ b/numpy/core/src/arrayobject.c
@@ -4715,7 +4715,6 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject 
*other, int cmp_op,
 PyObject *new;
 if (self->descr->type_num == PyArray_STRING && \
 other->descr->type_num == PyArray_UNICODE) {
-Py_INCREF(other);
 Py_INCREF(other->descr);
 new = PyArray_FromAny((PyObject *)self, other->descr,
   0, 0, 0, NULL);
@@ -4723,16 +4722,17 @@ _strings_richcompare(PyArrayObject *self, PyArrayObject 
*other, int cmp_op,
 return NULL;
 }
 self = (PyArrayObject *)new;
+Py_INCREF(other);
 }
 else if (self->descr->type_num == PyArray_UNICODE && \
  other->descr->type_num == PyArray_STRING) {
-Py_INCREF(self);
 Py_INCREF(self->descr);
 new = PyArray_FromAny((PyObject *)other, self->descr,
   0, 0, 0, NULL);
 if (new == NULL) {
 return NULL;
 }
+Py_INCREF(self);
 other = (PyArrayObject *)new;
 }
 else {


I really don't think that this design of reference count handling in 
PyArray_FromAny (and consequently PyArray_CheckFromAny) is a good idea.  
Unfortunately these seem to be part of the published API, so presumably 
it's too late to change this?  (Otherwise I might see how the 
corresponding patch comes out.)

Not only is this not a good idea, it's not documented in the API 
documentation 

Re: [Numpy-discussion] Documentation: topical docs and reviewing our work

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 03:28, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Please log in and leave comments on those.  Your input would be much
 appreciated!

Each docstring page could use a Next link to move to the next
docstring with the same review status (actually, the same review
status that it was when the user first came to the page; not sure how
hard that makes the implementation).

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Another reference count leak: ticket #848

2008-07-09 Thread Michael Abbott
On Wed, 9 Jul 2008, Michael Abbott wrote:
 Well then, I need to redo my patch.  Here's the new patch for 
 ..._arrtype_new:

I'm sorry about this, I posted too early.  Here is the final patch (and 
I'll update the ticket accordingly).



commit a1ff570cbd3ca6c28f87c55cebf2675b395c6fa0
Author: Michael Abbott [EMAIL PROTECTED]
Date:   Tue Jul 8 10:10:59 2008 +0100

Another reference leak using PyArray_DescrFromType

This change fixes the following issues resulting in reference count
leaks: a spurious ADDREF on a typecode returned from
PyArray_DescrFromType, an awkward interaction with PyArray_FromAny, and
a couple of early returns which need DECREFs.

diff --git a/numpy/core/src/scalartypes.inc.src 
b/numpy/core/src/scalartypes.inc.src
index 3feefc0..d54ae1b 100644
--- a/numpy/core/src/scalartypes.inc.src
+++ b/numpy/core/src/scalartypes.inc.src
@@ -1886,7 +1886,6 @@ static PyObject *
 if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL;
 
 typecode = PyArray_DescrFromType(PyArray_@TYPE@);
-Py_INCREF(typecode);
 if (obj == NULL) {
 #if @default@ == 0
 char *mem;
@@ -1903,19 +1902,30 @@ static PyObject *
 goto finish;
 }
 
+Py_XINCREF(typecode);
 arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
-if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr;
+if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) {
+Py_XDECREF(typecode);
+return arr;
+}
 robj = PyArray_Return((PyArrayObject *)arr);
 
 finish:
-if ((robj==NULL) || (robj->ob_type == type)) return robj;
+if ((robj==NULL) || (robj->ob_type == type)) {
+Py_XDECREF(typecode);
+return robj;
+}
 /* Need to allocate new type and copy data-area over */
 if (type->tp_itemsize) {
 itemsize = PyString_GET_SIZE(robj);
 }
 else itemsize = 0;
 obj = type->tp_alloc(type, itemsize);
-if (obj == NULL) {Py_DECREF(robj); return NULL;}
+if (obj == NULL) {
+Py_XDECREF(typecode);
+Py_DECREF(robj);
+return NULL;
+}
 if (typecode==NULL)
 typecode = PyArray_DescrFromType(PyArray_@TYPE@);
 dest = scalar_value(obj, typecode);


The corresponding test case is (sorry it's crude):

import sys
from numpy import float32

refs = 0
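# note: sys.gettotalrefcount() only exists in debug (--with-pydebug) builds of Python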
refs = sys.gettotalrefcount()
float32()
print sys.gettotalrefcount() - refs

I'm afraid I haven't tested all the possible paths through this routine.  
I need to get back to chasing my other leaks.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] alterdot and restoredot

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 02:53, Pauli Virtanen [EMAIL PROTECTED] wrote:

 Tue, 08 Jul 2008 23:03:52 -0500, Robert Kern wrote:
 On Tue, Jul 8, 2008 at 14:01, Keith Goodman [EMAIL PROTECTED] wrote:
 I don't know what to write for a doc string for alterdot and
 restoredot.

 Then maybe you're the best one to figure it out. What details do you
 think are missing from the current docstrings? What questions do they
 leave you with?

 I have the following for starters:

 - Are these meant as user-visible functions?

Yes, with the caveats below.

 - Should the user call them? When? What is the advantage?

Typically, one would only want to call them when trying to
troubleshoot an installation problem, benchmark their installation, or
otherwise need complete control over what code is used. Most users
will never need to touch them.

 - Are BLAS routines used by default? (And if not, why not?)

If numpy.core._dotblas was built and imports, then yes.

 - Which operations do the functions exactly affect?
  It seems that alterdot sets the dot function slot to a BLAS
  version, but what operations does this affect?

dot(), vdot(), and innerproduct() on C-contiguous arrays which are
Matrix-Matrix, Matrix-Vector or Vector-Vector products.
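
A minimal benchmarking sketch along these lines (array sizes and the
helper function are just illustrative):

import time
import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

def bench(n=10):
    t0 = time.time()
    for i in range(n):
        np.dot(a, b)
    return time.time() - t0

np.alterdot()               # install the BLAS-backed dot, if _dotblas imports
print "optimized dot: ", bench()

np.restoredot()             # fall back to the default C implementation
print "default dot:   ", bench()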

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Detecting phase windings

2008-07-09 Thread Gary Ruben
I had a chance to look at Anne's suggestion from this thread
http://www.mail-archive.com/numpy-discussion@scipy.org/msg10091.html
and I thought I should post my phase winding finder solution, which is 
slightly modified from her idea. Thanks Anne. This is a vast improvement 
over my original slow code, and is useful to me now, but I will probably 
have to rewrite it in C, weave or Cython when I start generating large 
data sets.

import numpy as np
from pyvtk import *
import sys   # used below when writing the VTK output filename

def find_vortices(x, axis=0):
    xx = np.rollaxis(x, axis)
    r = np.empty_like(xx).astype(np.bool)
    for i in range(xx.shape[0]):
        print i,
        xxx = xx[i,...]
        # phases around the four corners of each grid plaquette, with the
        # starting corner repeated so the loop closes
        loop = np.concatenate(([xxx],
                               [np.roll(xxx,1,0)],
                               [np.roll(np.roll(xxx,1,0),1,1)],
                               [np.roll(xxx,1,1)],
                               [xxx]), axis=0)
        loop = np.unwrap(loop, axis=0)
        # a winding is present where the unwrapped phase fails to return
        # to its starting value
        r[i,...] = np.abs(loop[0,...] - loop[-1,...]) > np.pi/2

    return np.rollaxis(r, 0, axis+1)[1:-1,1:-1,1:-1]

and call it like so on the 3D phaseField array, which is a float32 array 
containing the phase angle at each point:

# Detect the nodal lines
b0 = find_vortices(phaseField, axis=0)
b0 |= find_vortices(phaseField, axis=1)
b0 |= find_vortices(phaseField, axis=2)

# output vortices to vtk
indices = np.transpose(np.nonzero(b0)).tolist()
vtk = VtkData(UnstructuredGrid(indices))
vtk.tofile('%s_vol'%sys.argv[0][:-3],'binary')
del vtk

--
Gary R.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Another reference count leak: ticket #848

2008-07-09 Thread David Cournapeau
 I really don't think that this design of reference count handling in
 PyArray_FromAny (and consequently PyArray_CheckFromAny) is a good idea.
 Unfortunately these seem to be part of the published API, so presumably
 it's too late to change this?  (Otherwise I might see how the
 corresponding patch comes out.)

Changing it would break almost all code using the numpy C API ... Don't
forget that numpy was built on numarray and Numeric, with an almost
backward-compatible C API.

cheers

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] alterdot and restoredot

2008-07-09 Thread Anne Archibald
2008/7/9 Robert Kern [EMAIL PROTECTED]:

 - Which operations do the functions exactly affect?
  It seems that alterdot sets the dot function slot to a BLAS
  version, but what operations does this affect?

 dot(), vdot(), and innerproduct() on C-contiguous arrays which are
 Matrix-Matrix, Matrix-Vector or Vector-Vector products.

Really? Not, say, tensordot()?

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Another reference count leak: ticket #848

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 2:36 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 There are three separate patches in this message plus some remarks on
 stealing reference counts at the bottom.

snip


 I really don't think that this design of reference count handling in
 PyArray_FromAny (and consequently PyArray_CheckFromAny) is a good idea.
 Unfortunately these seem to be part of the published API, so presumably
 it's too late to change this?  (Otherwise I might see how the
 corresponding patch comes out.)


There was a previous discussion along those lines initiated by, ahem,
myself. But changing things at this point would be too much churn. Better, I
think, to get these things documented and the code cleaned up.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] element-wise logical operations on numpy arrays

2008-07-09 Thread Catherine Moroney
Hello,

I have a question about performing element-wise logical operations
on numpy arrays.

If a, b and c are numpy arrays of the same size, does the following
syntax work?

mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))

It seems to be performing correctly, but the documentation that I've
read indicates that & and | are for bitwise operations, not
element-by-element operations in arrays.

I'm trying to avoid using logical_and and logical_or because they
make the code more cumbersome and difficult to read.  Are & and |
acceptable substitutes for numpy arrays?

Thanks,

Catherine
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] element-wise logical operations on numpy arrays

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 10:21 AM, Catherine Moroney 
[EMAIL PROTECTED] wrote:

 Hello,

 I have a question about performing element-wise logical operations
 on numpy arrays.

 If a, b and c are numpy arrays of the same size, does the
 following
 syntax work?

 mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))

 It seems to be performing correctly, but the documentation that I've
 read
 indicates that & and | are for bitwise operations, not element-by-
 element operations in arrays.


They perform bitwise operations element by element. They only work for
integer/bool arrays and you should avoid mixing signed/unsigned types
because of the type promotion rules. Other than that, things should work
fine.


 I'm trying to avoid using logical_and and logical_or because they
 make the code more cumbersome and difficult to read.  Are & and |
 acceptable substitutes for numpy arrays?


Generally, yes, but they are more restrictive in the types they accept.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] element-wise logical operations on numpy arrays

2008-07-09 Thread Anne Archibald
2008/7/9 Catherine Moroney [EMAIL PROTECTED]:

 I have a question about performing element-wise logical operations
 on numpy arrays.

 If a, b and c are numpy arrays of the same size, does the
 following
 syntax work?

 mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))

 It seems to be performing correctly, but the documentation that I've
 read
 indicates that & and | are for bitwise operations, not element-by-
 element operations in arrays.

 I'm trying to avoid using logical_and and logical_or because they
 make the code more cumbersome and difficult to read.  Are & and |
 acceptable substitutes for numpy arrays?

Yes. Unfortunately it is impossible to make python's usual logical
operators, and, or, etcetera, behave correctly on numpy arrays. So
the decision was made to use the bitwise operators to express logical
operations on boolean arrays. If you like, you can think of boolean
arrays as containing single bits, so that the bitwise operators *are*
the logical operators.

Confusing, but I'm afraid there really isn't anything the numpy
developers can do about it, besides write good documentation.
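
A short illustration of the point (the arrays here are made up):

import numpy as np

a = np.array([0.5, 1.5, 2.5])
b = np.array([4.0, 2.0, 0.5])

print (a > 1.0) & (b > 1.0)      # element-wise AND -> [False  True False]

try:
    (a > 1.0) and (b > 1.0)      # Python's "and" needs a single truth value
except ValueError, e:
    print e                      # "The truth value of an array ... is ambiguous"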

Good luck,
Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] REMINDER: SciPy 2008 Early Registration ends in 2 days

2008-07-09 Thread Jarrod Millman
Hello,

This is a reminder that early registration for SciPy 2008 ends in two
days, on Friday, July 11th.  To register, please see:
 http://conference.scipy.org/to_register

This year's conference has two days of tutorials, two days of
presentations, and ends with a two-day coding sprint.  If you want to
learn more, see my blog post:
http://jarrodmillman.blogspot.com/2008/07/scipy-2008-conference-program-posted.html

Cheers,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-discussion Digest, Vol 22, Issue 32

2008-07-09 Thread Catherine Moroney

On Jul 9, 2008, at 10:00 AM, [EMAIL PROTECTED] wrote:

 Send Numpy-discussion mailing list submissions to
   numpy-discussion@scipy.org

 To subscribe or unsubscribe via the World Wide Web, visit
   http://projects.scipy.org/mailman/listinfo/numpy-discussion
 or, via email, send a message with subject or body 'help' to
   [EMAIL PROTECTED]

 You can reach the person managing the list at
   [EMAIL PROTECTED]

 When replying, please edit your Subject line so it is more specific
 than Re: Contents of Numpy-discussion digest...
 Today's Topics:

 1. Re: element-wise logical operations on numpy arrays
   (Anne Archibald)

 From: Anne Archibald [EMAIL PROTECTED]
 Date: July 9, 2008 9:35:20 AM PDT
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Subject: Re: [Numpy-discussion] element-wise logical operations on  
 numpy arrays
 Reply-To: Discussion of Numerical Python numpy-discussion@scipy.org


 2008/7/9 Catherine Moroney [EMAIL PROTECTED]:

 I have a question about performing element-wise logical operations
 on numpy arrays.

 If a, b and c are numpy arrays of the same size, does the
 following syntax work?

  mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))

 It seems to be performing correctly, but the documentation that I've
 read indicates that & and | are for bitwise operations, not
 element-by-
 element operations in arrays.

 I'm trying to avoid using logical_and and logical_or because they
 make the code more cumbersome and difficult to read.  Are & and |
 acceptable substitutes for numpy arrays?

 Yes. Unfortunately it is impossible to make python's usual logical
 operators, and, or, etcetera, behave correctly on numpy arrays. So
 the decision was made to use the bitwise operators to express logical
 operations on boolean arrays. If you like, you can think of boolean
 arrays as containing single bits, so that the bitwise operators *are*
 the logical operators.

 Confusing, but I'm afraid there really isn't anything the numpy
 developers can do about it, besides write good documentation.

Do & and | work on all types of numpy arrays (i.e. floats and
16 and 32-bit integers), or only on arrays of booleans?  The short
tests I've done seem to indicate that it does, but I'd like to have
some confirmation.

 Good luck,
 Anne

Catherine


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-discussion Digest, Vol 22, Issue 32

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 11:11 AM, Catherine Moroney 
[EMAIL PROTECTED] wrote:


 On Jul 9, 2008, at 10:00 AM, [EMAIL PROTECTED] wrote:

  Send Numpy-discussion mailing list submissions to
numpy-discussion@scipy.org
 
  To subscribe or unsubscribe via the World Wide Web, visit
http://projects.scipy.org/mailman/listinfo/numpy-discussion
  or, via email, send a message with subject or body 'help' to
[EMAIL PROTECTED]
 
  You can reach the person managing the list at
[EMAIL PROTECTED]
 
  When replying, please edit your Subject line so it is more specific
  than Re: Contents of Numpy-discussion digest...
  Today's Topics:
 
  1. Re: element-wise logical operations on numpy arrays
(Anne Archibald)
 
  From: Anne Archibald [EMAIL PROTECTED]
  Date: July 9, 2008 9:35:20 AM PDT
  To: Discussion of Numerical Python numpy-discussion@scipy.org
  Subject: Re: [Numpy-discussion] element-wise logical operations on
  numpy arrays
  Reply-To: Discussion of Numerical Python numpy-discussion@scipy.org
 
 
  2008/7/9 Catherine Moroney [EMAIL PROTECTED]:
 
  I have a question about performing element-wise logical operations
  on numpy arrays.
 
  If a, b and c are numpy arrays of the same size, does the
  following syntax work?
 
  mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))
 
  It seems to be performing correctly, but the documentation that I've
  read indicates that & and | are for bitwise operations, not
  element-by-
  element operations in arrays.
 
  I'm trying to avoid using logical_and and logical_or because they
  make the code more cumbersome and difficult to read.  Are & and |
  acceptable substitutes for numpy arrays?
 
  Yes. Unfortunately it is impossible to make python's usual logical
  operators, and, or, etcetera, behave correctly on numpy arrays. So
  the decision was made to use the bitwise operators to express logical
  operations on boolean arrays. If you like, you can think of boolean
  arrays as containing single bits, so that the bitwise operators *are*
  the logical operators.
 
  Confusing, but I'm afraid there really isn't anything the numpy
  developers can do about it, besides write good documentation.
 
 Do & and | work on all types of numpy arrays (i.e. floats and
 16 and 32-bit integers), or only on arrays of booleans?  The short
 tests I've done seem to indicate that it does, but I'd like to have
 some confirmation.


They work for all integer types but not for float or complex types:

In [1]: x = ones(3)

In [2]: x | x
---
TypeError Traceback (most recent call last)

/home/charris/<ipython console> in <module>()

TypeError: unsupported operand type(s) for |: 'float' and 'float'


Comparisons always return boolean arrays, so you don't have to worry about
that.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Marlin Rowley
All:
 
I'm trying to take a constant vector:
 
 v = (0.122169, 0.61516, 0.262671)
 
and multiply those values by every 3 components in an array of length N:
 
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ]
 
So what I want is: 
 
v[0]*A[0]
v[1]*A[1]
v[2]*A[2]
v[0]*A[3]
v[1]*A[4]
v[2]*A[5]
v[0]*A[6]
 
...
 
How do I do this with one command in numPy?
 
-M
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A couple of testing issues

2008-07-09 Thread Alan McIntyre
On Wed, Jul 9, 2008 at 9:26 AM, Anne Archibald
[EMAIL PROTECTED] wrote:
 - Test functions and methods will only be picked up based on name if
 they begin with test; check_* will no longer be seen as a test
 function.

 Is it possible to induce nose to pick these up and, if not actually
 run them, warn about them? It's not so good to have some tests
 silently not being run...

Having nose pick up check_ functions as tests may interfere with
SciPy testing; it looks like there are a couple dozen
functions/methods named that way in the SciPy tree.  I didn't look at
all of them, though; it could be that some are tests that still need
renaming.

Since I'm looking at coverage (including test code coverage), any
tests that don't get run will be found, at least while I'm working on
tests.  Still, it might not hurt to have something automated looking
for potentially missed tests for 1.2.  That would also help with
third-party code that depends on NumPy for testing, since they
probably don't have the luxury of someone able to spend all their time
worrying over test coverage.

I can make a pass through all the test_* modules in the source tree
under test and post a warning if "def check_" is found in them before
handing things over to nose.  Anyone else have thoughts on this?
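
A rough sketch of that kind of scan (the root path and the warning text
here are illustrative):

import os

def warn_about_check_functions(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.startswith('test_') and name.endswith('.py'):
                path = os.path.join(dirpath, name)
                for i, line in enumerate(open(path)):
                    if line.lstrip().startswith('def check_'):
                        print '%s:%d: possibly unrenamed test' % (path, i + 1)

warn_about_check_functions('numpy')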
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 1:16 PM, Marlin Rowley [EMAIL PROTECTED]
wrote:

 All:

 I'm trying to take a constant vector:

  v = (0.122169, 0.61516, 0.262671)

 and multiply those values by every 3 components in an array of length N:

 A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ]

 So what I want is:

 v[0]*A[0]
 v[1]*A[1]
 v[2]*A[2]
 v[0]*A[3]
 v[1]*A[4]
 v[2]*A[5]
 v[0]*A[6]

 ...

 How do I do this with one command in numPy?



If the length of A is divisible by 3:

A.reshape((-1,3))*v

You might want to reshape the result to 1-D.
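
A worked check of that line with small illustrative values:

import numpy as np
v = np.array([0.122169, 0.61516, 0.262671])
A = np.arange(6, dtype=float)
print A.reshape((-1, 3)) * v
# roughly: [[ 0.        0.61516   0.525342]
#           [ 0.366507  2.46064   1.313355]]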

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A couple of testing issues

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 14:19, Alan McIntyre [EMAIL PROTECTED] wrote:

 I can make a pass through all the test_* modules in the source tree
 under test and post a warning if def check_ is found in them before
 handing things over to nose.Anyone else have thoughts on this?

I don't think it's worth automating on every run. People can see for
themselves if they have any such check_methods() and make the
conversion once:

  nosetests -v --include check_.* --exclude test_.*

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A couple of testing issues

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 14:26, Robert Kern [EMAIL PROTECTED] wrote:
 On Wed, Jul 9, 2008 at 14:19, Alan McIntyre [EMAIL PROTECTED] wrote:

 I can make a pass through all the test_* modules in the source tree
 under test and post a warning if def check_ is found in them before
 handing things over to nose.Anyone else have thoughts on this?

 I don't think it's worth automating on every run. People can see for
 themselves if they have any such check_methods() and make the
 conversion once:

  nosetests -v --include check_.* --exclude test_.*

Hmm, could be wrong about that. Let me find the right incantation.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A couple of testing issues

2008-07-09 Thread Alan McIntyre
On Wed, Jul 9, 2008 at 3:26 PM, Robert Kern [EMAIL PROTECTED] wrote:
 I don't think it's worth automating on every run. People can see for
 themselves if they have any such check_methods() and make the
 conversion once:

Does this fall into the "how in the world should I have known to do
that" category?  As long as there's a prominent note in the release
notes, either containing such suggestions or links to a page that
does, I don't have any problem just including this in the list of
things that people should do if they're planning on upgrading to 1.2.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-discussion Digest, Vol 22, Issue 33

2008-07-09 Thread Catherine Moroney

  2008/7/9 Catherine Moroney [EMAIL PROTECTED]:
 
  I have a question about performing element-wise logical operations
  on numpy arrays.
 
  If a, b and c are numpy arrays of the same size, does the
  following syntax work?
 
  mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))
 
  It seems to be performing correctly, but the documentation that  
 I've
  read indicates that & and | are for bitwise operations, not
  element-by-
  element operations in arrays.
 
  I'm trying to avoid using logical_and and logical_or because  
 they
  make the code more cumbersome and difficult to read.  Are & and |
  acceptable substitutes for numpy arrays?
 
  Yes. Unfortunately it is impossible to make python's usual logical
  operators, and, or, etcetera, behave correctly on numpy  
 arrays. So
  the decision was made to use the bitwise operators to express  
 logical
  operations on boolean arrays. If you like, you can think of boolean
  arrays as containing single bits, so that the bitwise operators  
 *are*
  the logical operators.
 
  Confusing, but I'm afraid there really isn't anything the numpy
  developers can do about it, besides write good documentation.
 
 Do & and | work on all types of numpy arrays (i.e. floats and
 16 and 32-bit integers), or only on arrays of booleans?  The short
 tests I've done seem to indicate that it does, but I'd like to have
 some confirmation.

 They work for all integer types but not for float or complex types:

 In [1]: x = ones(3)

 In [2]: x | x
 ---
 TypeError Traceback (most recent call last)

 /home/charris/<ipython console> in <module>()

 TypeError: unsupported operand type(s) for |: 'float' and 'float'


 Comparisons always return boolean arrays, so you don't have to  
 worry about that.

 Chuck

I've attached a short test program for numpy arrays of floats for which
& and | seem to work.  If, as you say, & and | don't work for
floats, why does this program work?

from numpy import *

a = array([(1.1, 2.1),(3.1, 4.1)],'float')
b = a + 1
c = b + 1

print "a = ",a
print "b = ",b
print "c = ",c

mask = (a < 4.5) & (b < 4.5) & (c < 4.5)
print "mask = ",mask

print "masked a = ",a[mask]
print "masked b = ",b[mask]
print "masked c = ",c[mask]



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-discussion Digest, Vol 22, Issue 33

2008-07-09 Thread Keith Goodman
On Wed, Jul 9, 2008 at 12:43 PM, Catherine Moroney
[EMAIL PROTECTED] wrote:

  2008/7/9 Catherine Moroney [EMAIL PROTECTED]:
 
  I have a question about performing element-wise logical operations
  on numpy arrays.
 
  If a, b and c are numpy arrays of the same size, does the
  following syntax work?
 
  mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))
 
  It seems to be performing correctly, but the documentation that
 I've
  read indicates that & and | are for bitwise operations, not
  element-by-
  element operations in arrays.
 
  I'm trying to avoid using logical_and and logical_or because
 they
  make the code more cumbersome and difficult to read.  Are & and |
  acceptable substitutes for numpy arrays?
 
  Yes. Unfortunately it is impossible to make python's usual logical
  operators, and, or, etcetera, behave correctly on numpy
 arrays. So
  the decision was made to use the bitwise operators to express
 logical
  operations on boolean arrays. If you like, you can think of boolean
  arrays as containing single bits, so that the bitwise operators
 *are*
  the logical operators.
 
  Confusing, but I'm afraid there really isn't anything the numpy
  developers can do about it, besides write good documentation.
 
 Do & and | work on all types of numpy arrays (i.e. floats and
 16 and 32-bit integers), or only on arrays of booleans?  The short
 tests I've done seem to indicate that it does, but I'd like to have
 some confirmation.

 They work for all integer types but not for float or complex types:

 In [1]: x = ones(3)

 In [2]: x | x
 ---
 TypeError Traceback (most recent call last)

 /home/charris/<ipython console> in <module>()

 TypeError: unsupported operand type(s) for |: 'float' and 'float'


 Comparisons always return boolean arrays, so you don't have to
 worry about that.

 Chuck

 I've attached a short test program for numpy arrays of floats for which
 & and | seem to work.  If, as you say, & and | don't work for
 floats, why does this program work?

 from numpy import *

 a = array([(1.1, 2.1),(3.1, 4.1)],'float')
 b = a + 1
 c = b + 1

 print "a = ",a
 print "b = ",b
 print "c = ",c

 mask = (a < 4.5) & (b < 4.5) & (c < 4.5)
 print "mask = ",mask

 print "masked a = ",a[mask]
 print "masked b = ",b[mask]
 print "masked c = ",c[mask]

a contains floats. But a < 4.5 doesn't:

>>> a = np.array([(1.1, 2.1),(3.1, 4.1)],'float')
>>> a
array([[ 1.1,  2.1],
       [ 3.1,  4.1]])

>>> a < 4.5
array([[ True,  True],
       [ True,  True]], dtype=bool)

>>> a | a
---------------------------------------------------------------------------
TypeError: unsupported operand type(s) for |: 'float' and 'float'
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A couple of testing issues

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 14:35, Alan McIntyre [EMAIL PROTECTED] wrote:
 On Wed, Jul 9, 2008 at 3:26 PM, Robert Kern [EMAIL PROTECTED] wrote:
 I don't think it's worth automating on every run. People can see for
 themselves if they have any such check_methods() and make the
 conversion once:

 Does this fall into the how in the world should I have known to do
 that category?

Doesn't matter. It doesn't work anyways; those arguments are for
matching classes and module-level functions, not TestCase methods.

Fortunately, grep works just as well.

 As long as there's a prominent note in the release
 notes, either containing such suggestions or links to a page that
 does, I don't have any problem just including this in the list of
 things that people should do if they're planning on upgrading to 1.2.

By all means. This should be documented, of course, but one-time
conversion tasks amenable to grep are not worth checking for on every
run.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] alterdot and restoredot

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 06:36, Anne Archibald [EMAIL PROTECTED] wrote:
 2008/7/9 Robert Kern [EMAIL PROTECTED]:

 - Which operations do the functions exactly affect?
  It seems that alterdot sets the dot function slot to a BLAS
  version, but what operations does this affect?

 dot(), vdot(), and innerproduct() on C-contiguous arrays which are
 Matrix-Matrix, Matrix-Vector or Vector-Vector products.

 Really? Not, say, tensordot()?

If the ultimate dot() call inside tensordot() is one of the above
forms, then yes. If it's a 3D-3D product, for example, or one where
the shape manipulations leave the arrays discontiguous, then no.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Unused matrix method

2008-07-09 Thread Alan McIntyre
There's a _get_truendim method on matrix that isn't referenced
anywhere in NumPy, SciPy, or matplotlib.  Should this get deprecated
or removed in 1.2?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Unused matrix method

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 15:16, Alan McIntyre [EMAIL PROTECTED] wrote:
 There's a _get_truendim method on matrix that isn't referenced
 anywhere in NumPy, SciPy, or matplotlib.  Should this get deprecated
 or removed in 1.2?

We could remove it. It's a private method.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Unused matrix method

2008-07-09 Thread Alan McIntyre
On Wed, Jul 9, 2008 at 4:23 PM, Robert Kern [EMAIL PROTECTED] wrote:
 On Wed, Jul 9, 2008 at 15:16, Alan McIntyre [EMAIL PROTECTED] wrote:
 There's a _get_truendim method on matrix that isn't referenced
 anywhere in NumPy, SciPy, or matplotlib.  Should this get deprecated
 or removed in 1.2?

 We could remove it. It's a private method.

Done.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Marlin Rowley
Thanks Chuck, but I wasn't quite clear with my question.
 
You answered exactly according to what I asked, but I failed to mention needing 
the dot product instead of just the product.
 
So, 
 
v dot A = v'
 
v'[0] = v[0]*A[0] + v[1]*A[1] + v[2]*A[2]
v'[1] = v[0]*A[3] + v[1]*A[4] + v[2]*A[5]
v'[2] = v[0]*A[6] + v[1]*A[7] + v[2]*A[8]
 
-M



Date: Wed, 9 Jul 2008 13:26:01 -0600
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

On Wed, Jul 9, 2008 at 1:16 PM, Marlin Rowley [EMAIL PROTECTED] wrote:

 All: I'm trying to take a constant vector: v = (0.122169, 0.61516, 0.262671)
 and multiply those values by every 3 components in an array of length N:
 A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ]. So what I want is:
 v[0]*A[0], v[1]*A[1], v[2]*A[2], v[0]*A[3], v[1]*A[4], v[2]*A[5], v[0]*A[6], ...
 How do I do this with one command in numPy?

If the length of A is divisible by 3: A.reshape((-1,3))*v
You might want to reshape the result to 1-D.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] chararray constructor change

2008-07-09 Thread Alan McIntyre
I'd like to make the following change to the chararray constructor.
This is motivated by some of chararray's methods constructing new
chararrays with NumPy integer arguments to itemsize, and it just
seemed easier to fix this in the constructor.

Index: numpy/numpy/core/defchararray.py
===
--- numpy/numpy/core/defchararray.py(revision 5378)
+++ numpy/numpy/core/defchararray.py(working copy)
@@ -25,6 +25,11 @@
 else:
 dtype = string_

+# force itemsize to be a Python long, since using Numpy integer
+# types results in itemsize.itemsize being used as the size of
+# strings in the new array.
+itemsize = long(itemsize)
+
 _globalvar = 1
 if buffer is None:
 self = ndarray.__new__(subtype, shape, (dtype, itemsize),
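
For illustration, this is the sort of call the cast protects (a sketch;
"n" is a made-up variable, and before the change a NumPy integer itemsize
could end up contributing its own .itemsize as the string length, per the
comment in the patch):

import numpy as np

n = np.int32(5)                           # e.g. derived from another array
a = np.chararray((2,), itemsize=int(n))   # explicit cast shown here; the patch
                                          # applies long() inside the constructor
print a.itemsize                          # 5, as intended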
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 2:34 PM, Marlin Rowley [EMAIL PROTECTED]
wrote:

 Thanks Chuck, but I wasn't quit clear with my question.

 You answered exactly according to what I asked, but I failed to mention
 needing the dot product instead of just the product.

 So,

 v dot A = v'

 v'[0] = v[0]*A[0] + v[1]*A[1] + v[2]*A[2]
 v'[1] = v[0]*A[3] + v[1]*A[4] + v[2]*A[5]
 v'[2] = v[0]*A[6] + v[1]*A[7] + v[2]*A[8]



There is no built in method for this specific problem (stacks of vectors and
matrices), but you can make things work:

sum(A.reshape((-1,3))*v, axis=1)

You can do lots of interesting things using such manipulations and newaxis.
For instance, multiplying stacks of matrices by stacks of matrices etc. I
put up a post of such things once if you are interested.
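
A small sketch of that idea (shapes are illustrative):

import numpy as np

M = np.random.rand(10, 3, 3)                   # a stack of ten 3x3 matrices
V = np.random.rand(10, 3)                      # a stack of ten 3-vectors

MV = np.sum(M * V[:, np.newaxis, :], axis=2)   # matrix-vector product per item
print np.allclose(MV[0], np.dot(M[0], V[0]))   # True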

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Anne Archibald
2008/7/9 Charles R Harris [EMAIL PROTECTED]:

 On Wed, Jul 9, 2008 at 2:34 PM, Marlin Rowley [EMAIL PROTECTED]
 wrote:

 Thanks Chuck, but I wasn't quit clear with my question.

 You answered exactly according to what I asked, but I failed to mention
 needing the dot product instead of just the product.

 So,

 v dot A = v'

 v'[0] = v[0]*A[0] + v[1]*A[1] + v[2]*A[2]
 v'[1] = v[0]*A[3] + v[1]*A[4] + v[2]*A[5]
 v'[2] = v[0]*A[6] + v[1]*A[7] + v[2]*A[8]


 There is no built in method for this specific problem (stacks of vectors and
 matrices), but you can make things work:

 sum(A.reshape((-1,3))*v, axis=1)

 You can do lots of interesting things using such manipulations and newaxis.
 For instance, multiplying stacks of matrices by stacks of matrices etc. I
 put up a post of such things once if you are interested.

This particular instance can be viewed as a matrix multiplication
(np.dot(A.reshape((-1,3)), v), I think).
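
A quick check that the two suggestions agree (v and A here are small
illustrative values):

import numpy as np

v = np.array([0.122169, 0.61516, 0.262671])
A = np.arange(9, dtype=float)

by_sum = np.sum(A.reshape((-1, 3)) * v, axis=1)
by_dot = np.dot(A.reshape((-1, 3)), v)
print np.allclose(by_sum, by_dot)    # True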

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Multiplying every 3 elements by a vector?

2008-07-09 Thread Charles R Harris
On Wed, Jul 9, 2008 at 3:26 PM, Anne Archibald [EMAIL PROTECTED]
wrote:

 2008/7/9 Charles R Harris [EMAIL PROTECTED]:
 
  On Wed, Jul 9, 2008 at 2:34 PM, Marlin Rowley [EMAIL PROTECTED]
 
  wrote:
 
  Thanks Chuck, but I wasn't quit clear with my question.
 
  You answered exactly according to what I asked, but I failed to mention
  needing the dot product instead of just the product.
 
  So,
 
  v dot A = v'
 
  v'[0] = v[0]*A[0] + v[1]*A[1] + v[2]*A[2]
  v'[1] = v[0]*A[3] + v[1]*A[4] + v[2]*A[5]
  v'[2] = v[0]*A[6] + v[1]*A[7] + v[2]*A[8]
 
 
  There is no built in method for this specific problem (stacks of vectors
 and
  matrices), but you can make things work:
 
  sum(A.reshape((-1,3))*v, axis=1)
 
  You can do lots of interesting things using such manipulations and
 newaxis.
  For instance, multiplying stacks of matrices by stacks of matrices etc. I
  put up a post of such things once if you are interested.

 This particular instance can be viewed as a matrix multiplication
 (np.dot(A.reshape((-1,3)),v) I think).


Yep, that should work.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] expected a single-segment buffer object

2008-07-09 Thread Anne Archibald
Hi,

When trying to construct an ndarray, I sometimes run into the
more-or-less mystifying error "expected a single-segment buffer
object":

Out[54]: (0, 16, 8)
In [55]: A=np.zeros(2); A=A[np.newaxis,...];
np.ndarray(strides=A.strides,shape=A.shape,buffer=A,dtype=A.dtype)
---
<type 'exceptions.TypeError'> Traceback (most recent call last)

/home/peridot/<ipython console> in <module>()

<type 'exceptions.TypeError'>: expected a single-segment buffer object

In [56]: A.strides
Out[56]: (0, 8)

That is, when I try to construct an ndarray based on an array with a
zero stride, I get this mystifying error. Zero-strided arrays appear
naturally when one uses newaxis, but they are valuable in their own
right (for example for broadcasting purposes). So it's a bit awkward
to have this error appearing when one tries to feed them to
ndarray.__new__ as a buffer. I can, I think, work around it by
removing all axes with stride zero:

def bufferize(A):
    idx = []
    for v in A.strides:
        if v==0:
            idx.append(0)
        else:
            idx.append(slice(None,None,None))
    return A[tuple(idx)]

Is there any reason for this restriction?

Thanks,
Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] expected a single-segment buffer object

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 18:55, Anne Archibald [EMAIL PROTECTED] wrote:
 Hi,

 When trying to construct an ndarray, I sometimes run into the
 more-or-less mystifying error "expected a single-segment buffer
 object":

 Out[54]: (0, 16, 8)
 In [55]: A=np.zeros(2); A=A[np.newaxis,...];
 np.ndarray(strides=A.strides,shape=A.shape,buffer=A,dtype=A.dtype)
 ---
 <type 'exceptions.TypeError'> Traceback (most recent call last)

 /home/peridot/<ipython console> in <module>()

 <type 'exceptions.TypeError'>: expected a single-segment buffer object

 In [56]: A.strides
 Out[56]: (0, 8)

 That is, when I try to construct an ndarray based on an array with a
 zero stride, I get this mystifying error. Zero-strided arrays appear
 naturally when one uses newaxis, but they are valuable in their own
 right (for example for broadcasting purposes). So it's a bit awkward
 to have this error appearing when one tries to feed them to
 ndarray.__new__ as a buffer. I can, I think, work around it by
 removing all axes with stride zero:

 def bufferize(A):
     idx = []
     for v in A.strides:
         if v==0:
             idx.append(0)
         else:
             idx.append(slice(None,None,None))
     return A[tuple(idx)]

 Is there any reason for this restriction?

Yes, the buffer interface, at least the subset that ndarray()
consumes, requires that all of the data be contiguous in memory.
array_as_buffer() checks for that using PyArray_ISONESEGMENT(), which
looks like this:

#define PyArray_ISONESEGMENT(m) (PyArray_NDIM(m) == 0 ||  \
 PyArray_CHKFLAGS(m, NPY_CONTIGUOUS) ||   \
 PyArray_CHKFLAGS(m, NPY_FORTRAN))

Trying to get a buffer object from anything that is neither C- or
Fortran-contiguous will fail. E.g.

In [1]: from numpy import *

In [2]: A = arange(10)

In [3]: B = A[::2]

In [4]: ndarray(strides=B.strides, shape=B.shape, buffer=B, dtype=B.dtype)
---
TypeError Traceback (most recent call last)

/Users/rkern/today/<ipython console> in <module>()

TypeError: expected a single-segment buffer object


What is the use case, here? One rarely has to use the ndarray
constructor by itself. For example, the result you seem to want from
the call you make above can be done just fine with .view().

In [8]: C = B.view(ndarray)

In [9]: C
Out[9]: array([0, 2, 4, 6, 8])

In [10]: B
Out[10]: array([0, 2, 4, 6, 8])

In [11]: C is B
Out[11]: False

In [12]: B[0] = 10

In [13]: C
Out[13]: array([10,  2,  4,  6,  8])

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] expected a single-segment buffer object

2008-07-09 Thread Anne Archibald
2008/7/9 Robert Kern [EMAIL PROTECTED]:

 Yes, the buffer interface, at least the subset that ndarray()
 consumes, requires that all of the data be contiguous in memory.
 array_as_buffer() checks for that using PyArray_ISONE_SEGMENT(), which
 looks like this:

 #define PyArray_ISONESEGMENT(m) (PyArray_NDIM(m) == 0 ||  \
  PyArray_CHKFLAGS(m, NPY_CONTIGUOUS) ||   \
  PyArray_CHKFLAGS(m, NPY_FORTRAN))

 Trying to get a buffer object from anything that is neither C- or
 Fortran-contiguous will fail. E.g.

 In [1]: from numpy import *

 In [2]: A = arange(10)

 In [3]: B = A[::2]

 In [4]: ndarray(strides=B.strides, shape=B.shape, buffer=B, dtype=B.dtype)
 ---
 TypeError Traceback (most recent call last)

 /Users/rkern/today/ipython console in module()

 TypeError: expected a single-segment buffer object

Is this really necessary? What does making this restriction gain? It
certainly means that many arrays whose storage is a contiguous block
of memory can still not be used (just permute the axes of a 3d array,
say; it may even be possible for an array to be in C contiguous order
but for the flag not to be set), but how is one to construct exotic
slices of an array that is strided in memory? (The real part of a
complex array, say.)

I suppose one could follow the linked list of .bases up to the
original ndarray, which should normally be C- or Fortran-contiguous,
then work out the offset, but even this may not always work: what if
the original array was constructed with non-C-contiguous strides from
some preexisting buffer?

If the concern is that this allows users to shoot themselves in the
foot, it's worth noting that even with the current setup you can
easily fabricate strides and shapes that go outside the allocated part
of memory.

 What is the use case, here? One rarely has to use the ndarray
 constructor by itself. For example, the result you seem to want from
 the call you make above can be done just fine with .view().

I was presenting a simple example. I was actually trying to use
zero-strided arrays to implement broadcasting.  The code was rather
long, but essentially what it was meant to do was generate a view of
an array in which an axis of length one had been replaced by an axis
of length m with stride zero. (The point of all this was to create a
class like vectorize that was suitable for use on, for example,
np.linalg.inv().) But I also ran into this problem while writing
segmentaxis.py, the code to produce a matrix of sliding windows.
(See http://www.scipy.org/Cookbook/SegmentAxis .) There I caught the
exception and copied the array (unnecessarily) if this came up.

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] expected a single-segment buffer object

2008-07-09 Thread Robert Kern
On Wed, Jul 9, 2008 at 21:29, Anne Archibald [EMAIL PROTECTED] wrote:
 2008/7/9 Robert Kern [EMAIL PROTECTED]:

 Yes, the buffer interface, at least the subset that ndarray()
 consumes, requires that all of the data be contiguous in memory.
 array_as_buffer() checks for that using PyArray_ISONE_SEGMENT(), which
 looks like this:

 #define PyArray_ISONESEGMENT(m) (PyArray_NDIM(m) == 0 ||  \
  PyArray_CHKFLAGS(m, NPY_CONTIGUOUS) ||   \
  PyArray_CHKFLAGS(m, NPY_FORTRAN))

 Trying to get a buffer object from anything that is neither C- or
 Fortran-contiguous will fail. E.g.

 In [1]: from numpy import *

 In [2]: A = arange(10)

 In [3]: B = A[::2]

 In [4]: ndarray(strides=B.strides, shape=B.shape, buffer=B, dtype=B.dtype)
 ---
 TypeError Traceback (most recent call last)

 /Users/rkern/today/ipython console in module()

 TypeError: expected a single-segment buffer object

 Is this really necessary? What does making this restriction gain? It
 certainly means that many arrays whose storage is a contiguous block
 of memory can still not be used (just permute the axes of a 3d array,
 say; it may even be possible for an array to be in C contiguous order
 but for the flag not to be set), but how is one to construct exotic
 slices of an array that is strided in memory? (The real part of a
 complex array, say.)

Because that's just what a buffer= argument *is*. It is not a place
for presenting the starting pointer to exotically-strided memory. Use
__array_interface__s to describe the full range of representable
memory. See below.

 I suppose one could follow the linked list of .bases up to the
 original ndarray, which should normally be C- or Fortran-contiguous,
 then work out the offset, but even this may not always work: what if
 the original array was constructed with non-C-contiguous strides from
 some preexisting buffer?

 If the concern is that this allows users to shoot themselves in the
 foot, it's worth noting that even with the current setup you can
 easily fabricate strides and shapes that go outside the allocated part
 of memory.

 What is the use case, here? One rarely has to use the ndarray
 constructor by itself. For example, the result you seem to want from
 the call you make above can be done just fine with .view().

 I was presenting a simple example. I was actually trying to use
 zero-strided arrays to implement broadcasting.  The code was rather
 long, but essentially what it was meant to do was generate a view of
 an array in which an axis of length one had been replaced by an axis
 of length m with stride zero. (The point of all this was to create a
 class like vectorize that was suitable for use on, for example,
 np.linalg.inv().) But I also ran into this problem while writing
 segmentaxis.py, the code to produce a matrix of sliding windows.
 (See http://www.scipy.org/Cookbook/SegmentAxis .) There I caught the
 exception and copied the array (unnecessarily) if this came up.

I was about a week ahead of you. See numpy/lib/stride_tricks.py in the trunk.
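
A minimal sketch of the zero-stride broadcasting trick using that module
(as_strided is the helper defined there; the shapes below are illustrative):

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(3.0)                                    # shape (3,)
b = as_strided(a, shape=(4, 3), strides=(0, a.strides[0]))
print b.shape            # (4, 3): a new length-4 axis with stride 0, no copy
print (b == a).all()     # True: every row views the same three values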

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion