Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Sebastian Walter
On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
peridot.face...@gmail.com wrote:
 2009/10/19 Sebastian Walter sebastian.wal...@gmail.com:

 I'm all for generic (u)funcs since they might come in handy for me, as
 I'm doing lots of operations on arrays of polynomials.

 Just as a side note, if you don't mind my asking, what sorts of
 operations do you do on arrays of polynomials? In a thread on
 scipy-dev we're discussing improving scipy's polynomial support, and
 we'd be happy to get some more feedback on what they need to be able
 to do.

I've been reading (and commenting) that thread ;)
 I'm doing algorithmic differentiation by computing on truncated
Taylor polynomials in the power basis,
 i.e. always truncating all operations at degree D:
z(t) = \sum_{d=0}^{D-1} z_d t^d = x(t) * y(t) + O(t^D),
where z_d = \sum_{k=0}^{d} x_k y_{d-k}

Using other bases does not make sense in my case, since truncating
all terms of degree higher than t^D has, as far as I know, no good
counterpart in bases like Chebyshev.
On the other hand, I need to be generic in the coefficients, e.g.
z_d from above could be a tensor of any shape, e.g. a matrix.

Typical workcase when I need to perform operations on arrays of
polynomials is best explained in a talk I gave earlier this year:
http://github.com/b45ch1/pyadolc/raw/master/doc/walter_talk_algorithmic_differentiation_in_python_with_pyadolc_pycppad_algopy.pdf
on slide 7 and 8. (the class adouble is a Taylor polynomial).
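In code, the truncated product defined above can be sketched like this (scalar coefficients only; the function name is mine, not pyadolc's):

```python
import numpy as np

def truncated_mul(x, y):
    """Product of two truncated Taylor polynomials in the power basis.

    x[d], y[d] hold the coefficients of t**d for d = 0..D-1; every term
    of degree >= D is dropped, as described above."""
    D = len(x)
    z = np.zeros(D)
    for d in range(D):
        # z_d = sum_{k=0}^{d} x_k * y_{d-k}
        z[d] = sum(x[k] * y[d - k] for k in range(d + 1))
    return z
```

For example, (1 + 2t + 3t^2)(4 + 5t + 6t^2) truncated at D = 3 gives 4 + 13t + 28t^2.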


 Thanks!
 Anne
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



[Numpy-discussion] why doesn't binary_repr support arrays

2009-10-20 Thread markus . proeller
Hello,

I'm always wondering why binary_repr doesn't allow arrays as input values. 
I always have to use a work around like:

import numpy as np

def binary_repr(arr, width=None):
    binary_list = map(lambda foo: np.binary_repr(foo, width), arr.flatten())
    str_len_max = len(np.binary_repr(arr.max(), width=width))
    str_len_min = len(np.binary_repr(arr.min(), width=width))
    if str_len_max > str_len_min:
        str_len = str_len_max
    else:
        str_len = str_len_min
    binary_array = np.fromiter(binary_list, dtype='|S' + str(str_len))
    return binary_array.reshape(arr.shape)

Is there a reason why arrays are not supported or is there another 
function that does support arrays?
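For what it's worth, np.vectorize can shorten the workaround, at the cost of returning an object array instead of a fixed-width string array (a sketch; the function name is mine):

```python
import numpy as np

def binary_repr_array(arr, width=None):
    # np.vectorize broadcasts np.binary_repr over every element and
    # preserves the input shape; otypes='O' keeps the strings as objects.
    f = np.vectorize(lambda v: np.binary_repr(v, width), otypes='O')
    return f(arr)
```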

Thanks,

Markus


Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Sebastian Walter
I'm not very familiar with the underlying C-API of numpy, so this has
to be taken with a grain of salt.

The reason why I'm curious about the genericity is that it would be
awesome to have:
1) ufuncs like sin, cos, exp... to work on arrays of any object (this
works already)
2) funcs like dot, eig, etc. to work on arrays of objects (works for
dot already, but not for eig)
3) ufuncs and funcs to work on any objects

examples that would be nice to work are among others:
* arrays of polynomials, i.e. arrays of objects
* polynomials with tensor coefficients, object with underlying array structure

I thought that the most elegant way to implement that would be to have
all numpy functions try to call either
1) the class function with the same name as the numpy function,
2) or, if the class function is not implemented, the member function
with the same name as the numpy function,
3) or, if neither exists, raise an exception.

E.g.

1)
if isinstance(x, Foo),
then numpy.sin(x)
would call Foo.sin(x) if numpy.sin doesn't know how to handle Foo

2)
similarly, for arrays of objects of type Foo:
 x = np.array([Foo(1), Foo(2)])

Then numpy.sin(x)
should try to return np.array([Foo.sin(xi) for xi in x])
or, in case Foo.sin is not implemented as a class function,
return np.array([xi.sin() for xi in x])
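A rough pure-Python sketch of that lookup order (Foo and generic_sin are illustrative names, not numpy API, and the "does numpy know how to handle it" check is simplified to looking for a sin attribute on the class):

```python
import numpy as np

class Foo:
    """Toy object that knows its own sin (illustration only)."""
    def __init__(self, v):
        self.v = v
    def sin(self):
        return Foo(np.sin(self.v))

def generic_sin(x):
    # Sketch of the dispatch rule: if the object's class provides a
    # function named like the ufunc, call that; otherwise fall back to
    # the plain numpy ufunc; otherwise raise.
    if hasattr(type(x), 'sin'):
        return x.sin()
    try:
        return np.sin(x)
    except TypeError:
        raise TypeError('sin is not defined for %s' % type(x).__name__)
```

So generic_sin(Foo(0.0)) dispatches to Foo.sin, while generic_sin(1.0) uses np.sin directly.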

Therefore, I somehow expected something like this:
Quantity would derive from numpy.ndarray.
Calling Quantity.__new__(cls) would create the member functions
__add__, __imul__, sin, exp, ...,
where each function has a preprocessing part and a postprocessing part.
After the preprocessing, the original ufuncs would be called on the
base class object, e.g. __add__.



Sebastian



On Mon, Oct 19, 2009 at 1:55 PM, Darren Dale dsdal...@gmail.com wrote:
 On Mon, Oct 19, 2009 at 3:10 AM, Sebastian Walter
 sebastian.wal...@gmail.com wrote:
 On Sat, Oct 17, 2009 at 2:49 PM, Darren Dale dsdal...@gmail.com wrote:
 numpy's functions, especially ufuncs, have had some ability to support
 subclasses through the ndarray.__array_wrap__ method, which provides
 masked arrays or quantities (for example) with an opportunity to set
 the class and metadata of the output array at the end of an operation.
 An example is

 q1 = Quantity(1, 'meter')
 q2 = Quantity(2, 'meters')
 numpy.add(q1, q2) # yields Quantity(3, 'meters')

 At SciPy2009 we committed a change to the numpy trunk that provides a
 chance to determine the class and some metadata of the output *before*
 the ufunc performs its calculation, but after output array has been
 established (and its data is still uninitialized). Consider:

 q1 = Quantity(1, 'meter')
 q2 = Quantity(2, 'J')
 numpy.add(q1, q2, q1)
 # or equivalently:
 # q1 += q2

 With only __array_wrap__, the attempt to propagate the units happens
 after q1's data has been updated in place, too late to raise an error;
 the data is now corrupted. __array_prepare__ solves that problem: an
 exception can be raised in time.

 Now I'd like to suggest one more improvement to numpy to make its
 functions more generic. Consider one more example:

 q1 = Quantity(1, 'meter')
 q2 = Quantity(2, 'feet')
 numpy.add(q1, q2)

 In this case, I'd like an opportunity to operate on the input arrays
 on the way in to the ufunc, to rescale the second input to meters. I
 think it would be a hack to try to stuff this capability into
 __array_prepare__. One form of this particular example is already
 supported in quantities, q1 + q2, by overriding the __add__ method
 to rescale the second input, but there are ufuncs that do not have an
 associated special method. So I'd like to look into adding another
 check for a special method, perhaps called __input_prepare__. My time
 is really tight for the next month, so I'd rather not start if there
 are strong objections, but otherwise, I'd like to try to get it
 in in time for numpy-1.4. (Has a timeline been established?)

 I think it will be not too difficult to document this overall scheme:

 When calling numpy functions:

 1) __input_prepare__ provides an opportunity to operate on the inputs
 to yield versions that are compatible with the operation (they should
 obviously not be modified in place)

 2) the output array is established

 3) __array_prepare__ is used to determine the class of the output
 array, as well as any metadata that needs to be established before the
 operation proceeds

 4) the ufunc performs its operations

 5) __array_wrap__ provides an opportunity to update the output array
 based on the results of the computation

 Comments, criticisms? If PEP 3124^ were already a part of the standard
 library, that could serve as the basis for generalizing numpy's
 functions. But I think the PEP will not be approved in its current
 form, and it is unclear when and if the author will revisit the
 proposal. The scheme I'm imagining might be sufficient for our
 purposes.

 I'm all for generic (u)funcs since they might come in handy for me, as
 I'm doing lots of operations on arrays of polynomials.
  I don't quite get the 

Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Darren Dale
On Tue, Oct 20, 2009 at 5:24 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
 I'm not very familiar with the underlying C-API of numpy, so this has
 to be taken with a grain of salt.

 The reason why I'm curious about the genericity is that it would be
 awesome to have:
 1) ufuncs like sin, cos, exp... to work on arrays of any object (this
 works already)
 2) funcs like dot, eig, etc. to work on arrays of objects (works for
 dot already, but not for eig)
 3) ufuncs and funcs to work on any objects

I think if you want to work on any object, you need something like the
PEP I mentioned earlier. What I am proposing is to use the existing
mechanism in numpy, check __array_priority__ to determine which
input's __input_prepare__ to call.

 examples that would be nice to work are among others:
 * arrays of polynomials, i.e. arrays of objects
 * polynomials with tensor coefficients, object with underlying array structure

 I thought that the most elegant way to implement that would be to have
 all numpy functions try to call either
 1) the class function with the same name as the numpy function,
 2) or, if the class function is not implemented, the member function
 with the same name as the numpy function,
 3) or, if neither exists, raise an exception.

 E.g.

 1)
 if isinstance(x, Foo),
 then numpy.sin(x)
 would call Foo.sin(x) if numpy.sin doesn't know how to handle Foo

How does numpy.sin know whether it can handle Foo? numpy.sin
will happily process the data of subclasses of ndarray, but if you
give it a quantity with units of degrees it is going to return garbage
and not care.

 2)
 similarly, for arrays of objects of type Foo:
  x = np.array([Foo(1), Foo(2)])

 Then numpy.sin(x)
 should try to return np.array([Foo.sin(xi) for xi in x])
 or, in case Foo.sin is not implemented as a class function,
 return np.array([xi.sin() for xi in x])

I'm not going to comment on this, except to say that it is outside the
scope of my proposal.

 Therefore, I somehow expected something like this:
 Quantity would derive from numpy.ndarray.
 Calling Quantity.__new__(cls) would create the member functions
 __add__, __imul__, sin, exp, ...,
 where each function has a preprocessing part and a postprocessing part.
 After the preprocessing, the original ufuncs would be called on the
 base class object, e.g. __add__.

It is more complicated than that. Ufuncs don't call array methods; it's
the other way around. ndarray.__add__ calls numpy.add. If you have a
custom operation to perform on numpy arrays, you write a ufunc, not a
subclass. What you are proposing is a very significant change to
numpy.

Darren


Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Darren Dale
Hi Travis,

On Mon, Oct 19, 2009 at 6:29 PM, Travis Oliphant oliph...@enthought.com wrote:

 On Oct 17, 2009, at 7:49 AM, Darren Dale wrote:
[...]
 When calling numpy functions:

 1) __input_prepare__ provides an opportunity to operate on the inputs
 to yield versions that are compatible with the operation (they should
 obviously not be modified in place)

 2) the output array is established

 3) __array_prepare__ is used to determine the class of the output
 array, as well as any metadata that needs to be established before the
 operation proceeds

 4) the ufunc performs its operations

 5) __array_wrap__ provides an opportunity to update the output array
 based on the results of the computation

 Comments, criticisms? If PEP 3124^ were already a part of the standard
 library, that could serve as the basis for generalizing numpy's
 functions. But I think the PEP will not be approved in its current
 form, and it is unclear when and if the author will revisit the
 proposal. The scheme I'm imagining might be sufficient for our
 purposes.

 This seems like it could work.  So, basically ufuncs will take any
 object as input and call its __input_prepare__ method?  This should
 return a sub-class of an ndarray?

ufuncs would call __input_prepare__ on the input declaring the highest
__array_priority__, just like ufuncs do with __array_wrap__, passing a
tuple of inputs and the ufunc itself (provided for context).
__input_prepare__ would return a tuple of inputs that the ufunc would
use for computation. I'm not sure if these need to be arrays or not; I
think I can give a better answer once I start the implementation (in
the next few days, I think).
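To make that concrete, a Quantity implementation under the proposal might look roughly like the sketch below. Note that __input_prepare__ is only proposed here, it is not an existing numpy hook, and the unit table and conversion factor are stand-ins for illustration:

```python
import numpy as np

class Quantity(np.ndarray):
    """Minimal stand-in for the quantities class: data plus a unit string."""
    def __new__(cls, data, units):
        obj = np.asarray(data, dtype=float).view(cls)
        obj.units = units
        return obj

    def __input_prepare__(self, inputs, ufunc):
        # Hypothetical hook: rescale other inputs to this quantity's
        # units before the ufunc touches any data.  Only feet -> meter
        # is implemented, purely for illustration.
        scale = {('feet', 'meter'): 0.3048}
        prepared = []
        for x in inputs:
            if isinstance(x, Quantity) and x.units != self.units:
                factor = scale[(x.units, self.units)]
                prepared.append(Quantity(np.asarray(x) * factor, self.units))
            else:
                prepared.append(x)
        return tuple(prepared)
```

With this, the ufunc could first rewrite its inputs to a common unit, and an unknown unit pair would raise an error before any data is modified.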

Darren


Re: [Numpy-discussion] object array alignment issues

2009-10-20 Thread Michael Droettboom
I've resubmitted the patch without whitespace-only changes.

For what it's worth, I had followed the directions here:

http://projects.scipy.org/numpy/wiki/EmacsSetup

which say to perform untabify and whitespace-cleanup.  Are those not 
current?  I had added these to my pre-save hooks under my numpy tree.

Cheers,
Mike

Charles R Harris wrote:


 On Mon, Oct 19, 2009 at 4:36 PM, Robert Kern robert.k...@gmail.com 
 mailto:robert.k...@gmail.com wrote:

 On Mon, Oct 19, 2009 at 17:28, Charles R Harris
 charlesr.har...@gmail.com mailto:charlesr.har...@gmail.com wrote:
 
  On Mon, Oct 19, 2009 at 3:55 PM, Travis Oliphant
 oliph...@enthought.com mailto:oliph...@enthought.com
  wrote:

  Right now, though, the patch has too many white-space only
 changes in
  it.  Could you submit a new patch that removes those changes?
 
  The old whitespace is hard tabs and needs to be replaced anyway.
 The new
  whitespace doesn't always get the indentation right, however.
 That file
  needs a style/whitespace cleanup.

 That's fine, but whitespace cleanup needs to be done in commits that
 are separate from the functional changes.


 I agree, but it can be tricky to preserve hard tabs when your editor 
 uses spaces and has hard tabs set to 8 spaces. That file is on my 
 cleanup list anyway, I'll try to get to it this weekend.

 Chuck

 


-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA



Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Anne Archibald
2009/10/20 Sebastian Walter sebastian.wal...@gmail.com:
 On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
 peridot.face...@gmail.com wrote:
 2009/10/19 Sebastian Walter sebastian.wal...@gmail.com:

 I'm all for generic (u)funcs since they might come in handy for me, as
 I'm doing lots of operations on arrays of polynomials.

 Just as a side note, if you don't mind my asking, what sorts of
 operations do you do on arrays of polynomials? In a thread on
 scipy-dev we're discussing improving scipy's polynomial support, and
 we'd be happy to get some more feedback on what they need to be able
 to do.

 I've been reading (and commenting) that thread ;)
  I'm doing algorithmic differentiation by computing on truncated
 Taylor polynomials in the power basis,
  i.e. always truncating all operations at degree D:
 z(t) = \sum_{d=0}^{D-1} z_d t^d = x(t) * y(t) + O(t^D),
 where z_d = \sum_{k=0}^{d} x_k y_{d-k}

 Using other bases does not make sense in my case, since truncating
 all terms of degree higher than t^D has, as far as I know, no good
 counterpart in bases like Chebyshev.
 On the other hand, I need to be generic in the coefficients, e.g.
 z_d from above could be a tensor of any shape, e.g. a matrix.

In fact, truncating at degree D for Chebyshev polynomials works
exactly the same way as it does for power polynomials, and if what you
care about is function approximation, it has much nicer behaviour. But
if what you care about is really truncated Taylor polynomials, there's
no beating the power basis.
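The mechanics can be checked with numpy.polynomial.chebyshev (a package added to numpy after this thread): in either basis, truncating at degree D is just dropping the trailing coefficients of the product.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Product of two degree-2 Chebyshev series: a degree-4 series.
full = C.chebmul([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])

# Truncate at degree 2 by dropping the T_3 and T_4 coefficients,
# exactly as one drops t**3 and t**4 in the power basis.
truncated = full[:3]
```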

I realize that arrays of polynomial objects are handy from a
bookkeeping point of view, but how does an array of polynomials, say
of shape (N,), differ from a single polynomial with coefficients of
shape (N,)? I think we need to provide the latter in our polynomial
classes, but as you point out, getting ufunc support for the former is
nontrivial.

Anne


Re: [Numpy-discussion] object array alignment issues

2009-10-20 Thread Charles R Harris
On Tue, Oct 20, 2009 at 7:02 AM, Michael Droettboom md...@stsci.edu wrote:

 I've resubmitted the patch without whitespace-only changes.

 For what it's worth, I had followed the directions here:

 http://projects.scipy.org/numpy/wiki/EmacsSetup

 which say to perform untabify and whitespace-cleanup.  Are those not
 current?  I had added these to my pre-save hooks under my numpy tree.


The problem is that hard tabs have crept into the file. The strict approach
in this case is to make two patches: the first cleans up the hard tabs, the
second fixes the problems.

How about I fix up the hard tabs and then you can make another patch?

snip

Chuck


Re: [Numpy-discussion] Optimized sum of squares

2009-10-20 Thread josef . pktd
On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
 Hi Gaël,

 If you've got a 1D array/vector called a, I think the normal idiom is

 np.dot(a,a)

 For the more general case, I think
 np.tensordot(a, a, axes=something_else)
 should do it, where you should be able to figure out something_else for
 your particular case.

Is it really possible to get the same as np.sum(a*a, axis) with
tensordot if a.ndim >= 2?
Any way I try the something_else, I get extra terms, as in np.dot(a.T, a).

Josef






 Gary R.

 Gael Varoquaux wrote:
 On Sat, Oct 17, 2009 at 07:27:55PM -0400, josef.p...@gmail.com wrote:
 Why aren't you using logaddexp ufunc from numpy?

 Maybe because it is difficult to find, it doesn't have its own docs entry.

 Speaking of which...

 I thought that there was a readily-written, optimized function (or ufunc)
 in numpy or scipy that calculated the sum of squares for an array
 (possibly along an axis). However, I cannot find it.

 Is there something similar? If not, it is not the end of the world, the
 operation is trivial to write.

 Cheers,

 Gaël


Re: [Numpy-discussion] Optimized sum of squares

2009-10-20 Thread Anne Archibald
2009/10/20  josef.p...@gmail.com:
 On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
 Hi Gaël,

 If you've got a 1D array/vector called a, I think the normal idiom is

 np.dot(a,a)

 For the more general case, I think
 np.tensordot(a, a, axes=something_else)
 should do it, where you should be able to figure out something_else for
 your particular case.

 Is it really possible to get the same as np.sum(a*a, axis) with
 tensordot if a.ndim >= 2?
 Any way I try the something_else, I get extra terms, as in np.dot(a.T, a).

It seems like this would be a good place to apply numpy's
higher-dimensional ufuncs: what you want seems to just be the vector
inner product, broadcast over all other dimensions. In fact I believe
this is implemented in numpy as a demo: numpy.umath_tests.inner1d
should do the job.
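The same broadcast inner product can also be written with np.einsum (an einsum function was added to numpy in a later release), which makes the contraction explicit:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
b = a + 1.0

# Contract the last axis, broadcast over the rest: same as (a*b).sum(-1).
rowwise = np.einsum('ij,ij->i', a, b)

# The ellipsis form handles any number of leading axes.
any_dim = np.einsum('...i,...i->...', a, b)
```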


Anne


Re: [Numpy-discussion] Optimized sum of squares

2009-10-20 Thread josef . pktd
On Tue, Oct 20, 2009 at 3:09 PM, Anne Archibald
peridot.face...@gmail.com wrote:
 2009/10/20  josef.p...@gmail.com:
 On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
 Hi Gaël,

 If you've got a 1D array/vector called a, I think the normal idiom is

 np.dot(a,a)

 For the more general case, I think
 np.tensordot(a, a, axes=something_else)
 should do it, where you should be able to figure out something_else for
 your particular case.

 Is it really possible to get the same as np.sum(a*a, axis) with
 tensordot if a.ndim >= 2?
 Any way I try the something_else, I get extra terms, as in np.dot(a.T, a).

 It seems like this would be a good place to apply numpy's
 higher-dimensional ufuncs: what you want seems to just be the vector
 inner product, broadcast over all other dimensions. In fact I believe
 this is implemented in numpy as a demo: numpy.umath_tests.inner1d
 should do the job.

Thanks, this works well; it needs "core" in the name.
(I might have to learn how to swap or roll axes to use this for more than 2d.)

>>> np.core.umath_tests.inner1d(a.T, b.T)
array([12,  8, 16])
>>> (a*b).sum(0)
array([12,  8, 16])
>>> np.core.umath_tests.inner1d(a.T, b.T)
array([12,  8, 16])
>>> (a*a).sum(0)
array([126, 166, 214])
>>> np.core.umath_tests.inner1d(a.T, a.T)
array([126, 166, 214])


What's the status on these functions? They don't show up in the docs
or help, except for
a brief mention in the c-api:

http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.generalized-ufuncs.rst/

Are they for public consumption and should go into the docs?
Or do they remain a hidden secret, to force users to read the mailing lists?

Josef




 Anne


Re: [Numpy-discussion] object array alignment issues

2009-10-20 Thread Charles R Harris
On Tue, Oct 20, 2009 at 10:13 AM, Charles R Harris 
charlesr.har...@gmail.com wrote:



 On Tue, Oct 20, 2009 at 7:02 AM, Michael Droettboom md...@stsci.eduwrote:

 I've resubmitted the patch without whitespace-only changes.

 For what it's worth, I had followed the directions here:

 http://projects.scipy.org/numpy/wiki/EmacsSetup

 which say to perform untabify and whitespace-cleanup.  Are those not
 current?  I had added these to my pre-save hooks under my numpy tree.


 The problem is that hard tabs have crept into the file. The strict approach
 in this case is to make two patches: the first cleans up the hard tabs, the
 second fixes the problems.

 How about I fix up the hard tabs and then you can make another patch?


I applied the patch. Can you test it?

Chuck


Re: [Numpy-discussion] object array alignment issues

2009-10-20 Thread Michael Droettboom
Thanks.  It's passing the related unit tests on Sparc SunOS 5 and Linux
x86.

Cheers,
Mike

Charles R Harris wrote:


 On Tue, Oct 20, 2009 at 10:13 AM, Charles R Harris 
 charlesr.har...@gmail.com mailto:charlesr.har...@gmail.com wrote:



 On Tue, Oct 20, 2009 at 7:02 AM, Michael Droettboom
 md...@stsci.edu mailto:md...@stsci.edu wrote:

 I've resubmitted the patch without whitespace-only changes.

 For what it's worth, I had followed the directions here:

 http://projects.scipy.org/numpy/wiki/EmacsSetup

 which say to perform untabify and whitespace-cleanup.  Are
 those not
 current?  I had added these to my pre-save hooks under my
 numpy tree.


 The problem is that hard tabs have crept into the file. The strict
 approach in this case is to make two patches: the first cleans up
 the hard tabs, the second fixes the problems.

 How about I fix up the hard tabs and then you can make another patch?


 I applied the patch. Can you test it?

 Chuck

 


-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA



Re: [Numpy-discussion] Optimized sum of squares

2009-10-20 Thread Anne Archibald
2009/10/20  josef.p...@gmail.com:
 On Tue, Oct 20, 2009 at 3:09 PM, Anne Archibald
 peridot.face...@gmail.com wrote:
 2009/10/20  josef.p...@gmail.com:
 On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
 Hi Gaël,

 If you've got a 1D array/vector called a, I think the normal idiom is

 np.dot(a,a)

 For the more general case, I think
 np.tensordot(a, a, axes=something_else)
 should do it, where you should be able to figure out something_else for
 your particular case.

 Is it really possible to get the same as np.sum(a*a, axis) with
 tensordot if a.ndim >= 2?
 Any way I try the something_else, I get extra terms, as in np.dot(a.T, a).

 It seems like this would be a good place to apply numpy's
 higher-dimensional ufuncs: what you want seems to just be the vector
 inner product, broadcast over all other dimensions. In fact I believe
 this is implemented in numpy as a demo: numpy.umath_tests.inner1d
 should do the job.

 Thanks, this works well; it needs "core" in the name.
 (I might have to learn how to swap or roll axes to use this for more than 2d.)

 >>> np.core.umath_tests.inner1d(a.T, b.T)
 array([12,  8, 16])
 >>> (a*b).sum(0)
 array([12,  8, 16])
 >>> np.core.umath_tests.inner1d(a.T, b.T)
 array([12,  8, 16])
 >>> (a*a).sum(0)
 array([126, 166, 214])
 >>> np.core.umath_tests.inner1d(a.T, a.T)
 array([126, 166, 214])


 What's the status on these functions? They don't show up in the docs
 or help, except for
 a brief mention in the c-api:

 http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.generalized-ufuncs.rst/

 Are they for public consumption and should go into the docs?
 Or do they remain a hidden secret, to force users to read the mailing lists?

I think the long-term goal is to have a completely ufuncized linear
algebra library, and I think these functions are just tests of the
gufunc features. In principle, at least, it wouldn't actually be too
hard to fill out a full linear algebra library, since the
per-element linear algebra operations already exist. Unfortunately the
code should exist for many data types, and the code generator scheme
currently used to do this for ordinary ufuncs is a barrier to
contributions. It might be worth banging out a doubles-only generic
ufunc linear algebra library (in addition to
numpy.linalg/scipy.linalg), just as a proof of concept.

Anne


[Numpy-discussion] Objected-oriented SIMD API for Numpy

2009-10-20 Thread Mathieu Blondel
Hello,

About one year ago, a high-level, object-oriented SIMD API was added
to Mono. For example, there is a class Vector4f for vectors of 4
floats and this class implements methods such as basic operators,
bitwise operators, comparison operators, min, max, sqrt, shuffle
directly using SIMD operations. You can have a look at the following
pages for further details:

http://tirania.org/blog/archive/2008/Nov-03.html (blog post)
http://go-mono.com/docs/index.aspx?tlin...@n%3amono.simd (API reference)

It seems to me that such an API would possibly be a great fit in Numpy
too. It would also be possible to add classes that don't directly map
to SIMD types. For example, Vector8f can easily be implemented in
terms of 2 Vector4f. In addition to vectors, additional API may be
added to support operations on matrices of fixed width or height.
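To make the interface concrete, here is a rough mock-up of what Vector4f usage could look like from Python (the class is hypothetical, and this is plain numpy with no actual SIMD code generation):

```python
import numpy as np

class Vector4f:
    """Interface sketch only: a real implementation would back this with
    aligned memory and emit SSE instructions instead of numpy calls."""
    __slots__ = ('data',)

    def __init__(self, x, y, z, w):
        self.data = np.array([x, y, z, w], dtype=np.float32)

    def __add__(self, other):
        out = Vector4f.__new__(Vector4f)
        out.data = self.data + other.data
        return out

    def min(self, other):
        out = Vector4f.__new__(Vector4f)
        out.data = np.minimum(self.data, other.data)
        return out
```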

I search the archives for similar discussions but I only found a
discussion about memory-alignment so I hope I am not restarting an
existing discussion here. Memory-alignment is an import related issue
since non-aligned movs can tank the performance.

Any thoughts? I don't know the Numpy code base yet but I'm willing to
help if such an effort is started.

Thanks,
Mathieu