Re: [Numpy-discussion] Scipy 2011 Convore thread now open

2011-07-12 Thread Long Duong
Does anybody know if there are there videos of the conference this year?


Best regards,

Long Duong
UC Irvine Biomedical Engineering
l...@studioart.org



On Tue, Jul 12, 2011 at 1:00 PM, Peter Wang  wrote:

> Hi folks,
>
> I have gone ahead and created a Convore group for the SciPy 2011
> conference:
>
> https://convore.com/scipy-2011/
>
> I have already created threads for each of the tutorial topics, and
> once the conference is underway, we'll create threads for each talk,
> so that audience can interact and post questions.  Everyone is welcome
> to create topics of their own, in addition to the "official"
> conference topics.
>
> For those who are unfamiliar with Convore, it is a cross between a
> mailing list and a very souped-up IRC.  It's usable for asynchronous
> discussion, but great for realtime, topical chats.  Those of you who
> were at PyCon this year probably saw what a wonderful tool Convore
> proved to be for a tech conference.  People used it for everything
> from BoF planning to dinner coordination to good-natured heckling of
> lightning talk speakers.  I'm hoping that it will be used to similarly
> good effect for the SciPy conference.
>
>
> Cheers,
> Peter
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] named ndarray axes

2011-07-12 Thread Craig Yoshioka
I brought up a while ago how it would be nice if numpy arrays could have 
their axes 'labeled'. I got an implementation that works pretty well for 
me, and in the process learned quite a few things. I was hoping to foster some 
more discussion on this topic, as I think I have found a simple/flexible 
solution to support this at the numpy level.

Here are *some* example code snippets from my unit tests on 'Array':

a = Array((4,5,6))

# you can assign data to all axes by attribute:
a.Axes.Label = {0:'z',1:'y',2:'x'}

# or add metadata to each individually:
a.Axes[1].Vector = [0,1,0]
a.Axes[2].Vector = [0,0,1]

# simple case int indexing
b = a[0]
assert b.shape == (5,6)
assert b.Axes.Label == {0:'y',1:'x'}
assert b.Axes.Vector == {0:[0,1,0],1:[0,0,1]}

# indexing with slices
b = a[:,0,:]
assert b.shape == (4,6)
assert b.Axes.Label == {0:'z',1:'x'}
assert b.Axes.Vector == {1:[0,0,1]}

# indexing with ellipsis
b = a[...,0]
assert b.shape == (4,5)
assert b.Axes.Label == {0:'z',1:'y'}

# indexing with ellipsis, newaxis, etc.
b = a[newaxis,...,2,newaxis]
assert b.shape == (1,4,5,1)
assert b.Axes.Label == {1:'z',2:'y'}

# indexing with lists
b = a[[1,2],:,[1,2]]
assert b.shape == (2,5)
assert b.Axes.Label == {0:'z',1:'y'}

# most interesting examples: indexing with axes labels
# I was a bit confused about how to handle indexing with mixed
# axes/non-axes indexes.
# i.e.: what does a['x',2:4] mean? On what axis is the 2:4 slice being
# applied, the first? the first after 'x'?
#   One option is to disallow mixing (simpler to implement, understand?)
#   Instead I chose to treat the axis indexing as a forced assignment
#   of an axis to a position.

# axis indexing that transposes the first two dimensions, but otherwise
# does nothing
b = a['y','z']
assert b.shape == (5,4,6)
assert b.Axes.Label == {0:'y',1:'z',2:'x'}

# abusing slices to allow specifying indexes for axes 
b = a['y':0,'z']
assert b.shape == (4,6)
assert b.Axes.Label == {0:'z',1:'x'}

# unfortunately that means a slice index on an axis must be written like so:
b = a['y':slice(0,2),'x','z']
assert b.shape == (2,6,4)
assert b.Axes.Label == {0:'y',1:'x',2:'z'}

b = a['y':[1,2,3],'x','z':slice(0,1)]
# or due to the forced transposition, this is the same as:
c = a['y','x','z'][[1,2,3],:,0:1]

assert b.shape == (3,6,1)
assert b.Axes.Label == {0:'y',1:'x',2:'z'}
assert b.shape == c.shape
assert b.Axes == c.Axes


#


To do all this I essentially had to recreate the effects of numpy indexing on 
axes. This is not ideal, but so far I seem to have addressed most of the 
indexing I use, at least. Here is what __getitem__ looks like:

def __getitem__(self, idxs):
    filtered_idxs, transposed_axes, kept_axes = self.idx_axes(idxs)
    array = self.view(ndarray).transpose(transposed_axes)
    array = array[filtered_idxs]
    if isinstance(array, ndarray):
        array = array.view(Array)
        array.Axes = self.Axes.keep(kept_axes)
    return array

As you can see, idx_axes() essentially recreates a lot of ndarray indexing 
behavior so that its effects on the axes can be handled explicitly.

Having done all this, I think the best way for numpy to support 'labeled' axes 
in the future is for numpy itself to keep track of a very simple tuple 
attribute, like shape, and leave more complex axis naming/labeling to 
subclasses on the python side.  For example, upon creating a new dimension in 
an array, numpy would assign that dimension a semi-unique id, and this tuple 
of ids could then be consulted in __array_finalize__.

For example my __array_finalize__ could look like:

def __array_finalize__(self, obj):
    if hasattr(obj, 'axesdata'):
        for axesid in self.axes:
            if axesid in obj.axes:
                self.axesdata[axesid] = obj.axesdata[axesid]


This would cover a lot more situations and lead to much simpler code since the 
work required on the C side would be minimal, but still allow robust and custom 
tracking and propagation of axes information.
Subclasses that tap into this data would react to the result of numpy 
operations instead of having to predict/anticipate them.

For example, my __getitem__, relying on the __array_finalize__ above, could 
look like:
 
def __getitem__(self, idxs):
    filtered_idxs, transposed_axes = self.idx_axes(idxs)
    array = self.transpose(transposed_axes)
    return array[filtered_idxs]

Not shown is how much simpler and more robust the code for idx_axes would then 
be. I estimate it would go from 130 loc to < 20 loc.
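
To make the gap concrete, here is a stripped-down toy of the subclass side 
that runs on today's numpy (all names here are mine, not a real API). Without 
per-axis ids, the best __array_finalize__ can do is copy labels when the 
number of dimensions matches. That silently keeps a stale order across 
transposes, which is exactly the failure the id tuple would fix:

import numpy as np

class Labeled(np.ndarray):
    """Toy subclass carrying per-axis labels (illustrative only)."""

    def __new__(cls, shape, labels=()):
        obj = np.zeros(shape).view(cls)
        obj.labels = tuple(labels)
        return obj

    def __array_finalize__(self, obj):
        # No axis ids available, so just copy the labels when ndim matches.
        labels = getattr(obj, 'labels', ())
        self.labels = labels if len(labels) == self.ndim else ()

a = Labeled((4, 5, 6), labels=('z', 'y', 'x'))
assert a[0].labels == ()                               # ndim changed: labels dropped
assert a.transpose(2, 0, 1).labels == ('z', 'y', 'x')  # ndim kept: stale order!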

Sorry for the extra long e-mail,
-Craig


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

[Numpy-discussion] Scipy 2011 Convore thread now open

2011-07-12 Thread Peter Wang
Hi folks,

I have gone ahead and created a Convore group for the SciPy 2011 conference:

https://convore.com/scipy-2011/

I have already created threads for each of the tutorial topics, and
once the conference is underway, we'll create threads for each talk,
so that audience can interact and post questions.  Everyone is welcome
to create topics of their own, in addition to the "official"
conference topics.

For those who are unfamiliar with Convore, it is a cross between a
mailing list and a very souped-up IRC.  It's usable for asynchronous
discussion, but great for realtime, topical chats.  Those of you who
were at PyCon this year probably saw what a wonderful tool Convore
proved to be for a tech conference.  People used it for everything
from BoF planning to dinner coordination to good-natured heckling of
lightning talk speakers.  I'm hoping that it will be used to similarly
good effect for the SciPy conference.


Cheers,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] object scalars

2011-07-12 Thread Johann Hibschman
Olivier Delalleau  writes:

> 2011/7/12 Johann Hibschman 
>
> Is there any way to wrap a sequence (in particular a python list) as a
> numpy object scalar, without it being promoted to an object array?

> I found a workaround but it's a bit ugly:
> def some_call(x):
>   rval = numpy.array(None, dtype='object')
>   rval.fill(x)
>   return rval

Thanks, that works for me, as does "rval[()] = x" instead of
"rval.fill(x)".

Regards,
Johann

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New arrays in 1.6 not always C-contiguous

2011-07-12 Thread Frédéric Bastien
Hi,

On Tue, Jul 12, 2011 at 12:48 PM, Mark Wiebe  wrote:
[...]
>
> This change only added to the C-API; the pre-existing API remained the same
> for API/ABI compatibility reasons. C code already had to deal with the
> possibility of differing memory layouts, for example if someone passes in
> carr.T, something in Fortran order.
> This change primarily affected the output layout of ufuncs; newly created
> ndarrays continue to default to 'C' order.

In that case, there won't be a problem with our software (Theano). We
already handle C order correctly for all inputs, but we frequently
allocate memory for outputs ourselves, and I'm not sure that in every case
we check the strides rather than assuming C contiguity.
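
For reference, the Python-level check is cheap (and I believe
PyArray_ISCONTIGUOUS is the C-side counterpart we would use):

import numpy as np

a = np.ones((3, 4))          # explicit constructors still give C order
assert a.flags['C_CONTIGUOUS']

b = a.T                      # a Fortran-ordered view, e.g. from a transpose
assert not b.flags['C_CONTIGUOUS'] and b.flags['F_CONTIGUOUS']

c = np.ascontiguousarray(b)  # force C order before assuming C strides
assert c.flags['C_CONTIGUOUS']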

thanks

Frédéric Bastien
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New arrays in 1.6 not always C-contiguous

2011-07-12 Thread Mark Wiebe
2011/7/12 Frédéric Bastien 

> Hi,
>
> We depend highly on numpy, but don't have the time to follow all the
> mailing lists regularly of all our tools. Having some information on
> the release note about this would have been useful to many people I
> think.
>

You're absolutely right, not including information on this was an oversight
on our part. I apologize for that.

> Also, did this affect the C-API? Did the default layout of newly
> created ndarrays in C change?
>

This change only added to the C-API; the pre-existing API remained the same
for API/ABI compatibility reasons. C code already had to deal with the
possibility of differing memory layouts, for example if someone passes in
carr.T, something in Fortran order.

This change primarily affected the output layout of ufuncs; newly created
ndarrays continue to default to 'C' order.
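
A quick sketch of the distinction (the flags shown are what I get on 1.6;
1.5.x gave C order for the ufunc output as well):

import numpy as np

f = np.asfortranarray(np.ones((100, 100)))

out = f + f                          # ufunc output may now match the inputs
print out.flags['F_CONTIGUOUS']      # True on 1.6 (False on 1.5.x)

z = np.zeros((100, 100))             # explicit creation is unchanged
print z.flags['C_CONTIGUOUS']        # True on both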


>
> Thanks for the great work.
>

Thanks,
Mark


>
> Frédéric Bastien
>
> On Fri, Jul 8, 2011 at 12:50 PM, Sturla Molden  wrote:
> > On 07.07.2011 14:10, Jens Jørgen Mortensen wrote:
> >> So, this means I can't count on new arrays being C-contiguous any more.
> >> I guess there is a good reason for this.
> >
> > Work with linear algebra (LAPACK) caused excessive and redundant array
> > transpositions. Arrays would be transposed from C to Fortran order
> > before they were passed to LAPACK, and returned arrays were transposed
> > from Fortran to C order when used in Python. Signal and image processing
> > in SciPy (FFTPACK) suffered from the same issue, as did certain
> > optimization (MINPACK). Computer graphics with OpenGL was similarly
> > impaired. The OpenGL library has a C frontend, but requires that all
> > buffers and matrices are stored in Fortran order.
> >
> > The old behaviour of NumPy was very annoying. Now we can rely on NumPy
> > to always use the most efficient memory layout, unless we request one in
> > particular.
> >
> > Yeah, and it also made NumPy look bad compared to Matlab, which always
> > uses Fortran order for this reason ;-)
> >
> > Sturla
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New arrays in 1.6 not always C-contiguous

2011-07-12 Thread Frédéric Bastien
Hi,

We depend highly on numpy, but don't have the time to follow all the
mailing lists regularly of all our tools. Having some information on
the release note about this would have been useful to many people I
think.

Also, did this affect the C-API? Did the default layout of newly
created ndarrays in C change?

Thanks for the great work.

Frédéric Bastien

On Fri, Jul 8, 2011 at 12:50 PM, Sturla Molden  wrote:
> On 07.07.2011 14:10, Jens Jørgen Mortensen wrote:
>> So, this means I can't count on new arrays being C-contiguous any more.
>> I guess there is a good reason for this.
>
> Work with linear algebra (LAPACK) caused excessive and redundant array
> transpositions. Arrays would be transposed from C to Fortran order
> before they were passed to LAPACK, and returned arrays were transposed
> from Fortran to C order when used in Python. Signal and image processing
> in SciPy (FFTPACK) suffered from the same issue, as did certain
> optimization (MINPACK). Computer graphics with OpenGL was similarly
> impaired. The OpenGL library has a C frontend, but requires that all
> buffers and matrices are stored in Fortran order.
>
> The old behaviour of NumPy was very annoying. Now we can rely on NumPy
> to always use the most efficient memory layout, unless we request one in
> particular.
>
> Yeah, and it also made NumPy look bad compared to Matlab, which always
> uses Fortran order for this reason ;-)
>
> Sturla
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Daniel Wheeler
On Tue, Jul 12, 2011 at 11:19 AM, Sturla Molden  wrote:
> On 11.07.2011 23:01, Daniel Wheeler wrote:
> To make the loop over N matrices fast, there is nothing that beats a
> loop in C or Fortran (or Cython) if you have a 3D array. And that brings
> us to the second issue, which is that it would be nice if the solvers in
> numpy.linalg (and scipy.linalg) were vectorized for 3D arrays.

Amen.

> Another question is if you really need to compute the inverse. A matrix
> inversion and subsequent matrix multiplication can be replaced by
> solving a linear system, which only takes half the amount of computation.

Good idea. One possibility for avoiding cython for the inverse is to
populate a sparse matrix in pysparse (or scipy) with the small
matrices and then do a single linear solve, since I don't need the explicit
inverse. However, that doesn't help with the eigenvalues.
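
Something like this sketch is what I have in mind with scipy (the sizes and
the random test system here are made up):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

N, M = 1000, 3
A = np.random.rand(N, M, M) + 3 * np.eye(M)   # N small, well-conditioned systems
b = np.random.rand(N, M)

# Scatter the N small matrices onto the diagonal of one big sparse system,
# so a single spsolve replaces N calls to np.linalg.solve/inv.
rows = np.repeat(np.arange(N * M), M)
cols = (np.arange(N)[:, None] * M + np.tile(np.arange(M), M)).ravel()
big = sparse.coo_matrix((A.ravel(), (rows, cols)), shape=(N * M, N * M)).tocsr()

x = spsolve(big, b.ravel()).reshape(N, M)
assert np.allclose(x[0], np.linalg.solve(A[0], b[0]))   # spot-check one block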

-- 
Daniel Wheeler
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Gael Varoquaux
On Tue, Jul 12, 2011 at 12:16:29PM -0400, greg whittier wrote:
> Gael, your code addresses inverses, but I take it something similar for
> eigenvalues of a matrix bigger than 5x5 doesn't exist, since a
> closed-form solution doesn't exist for finding polynomial roots for
> order > 5?

I guess so :).

> The ones I'm looking at now happen to be 3x3, so I was thinking of
> using 
> http://en.wikipedia.org/wiki/Eigenvalue_algorithm#Eigenvalues_of_a_Symmetric_3x3_Matrix
> but I might have anywhere from 2 to 10 at some point. 

I am afraid that this is beyond my skills. Sorry ;$.

G
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread greg whittier
On Tue, Jul 12, 2011 at 11:30 AM, Gael Varoquaux
 wrote:
> On Mon, Jul 11, 2011 at 05:01:07PM -0400, Daniel Wheeler wrote:
>> Hi, I am trying to find the eigenvalues and eigenvectors as well as
>> the inverse for a large number of small matrices. The matrix size
>> (MxM) will typically range from 2x2 to 8x8 at most.
>
> If you really care about speed, for matrices of this size you shouldn't
> call a linear algebra pack, but simply precompute the closed-form
> solution. Here is sympy code that I used a while ago to generate fast
> code to do inverse of SPD matrices.
>
> G

This has been a very timely discussion for me, since I'm looking to do
the same thing in a different application.  My interest is in
diagonalizing lots of small matrices (not inverting them).  I believe the
OP was also interested in eigenvalues.  Gael, your code addresses
inverses, but I take it something similar for eigenvalues of a matrix
bigger than 5x5 doesn't exist, since a closed-form solution doesn't
exist for finding polynomial roots for order > 5?

The ones I'm looking at now happen to be 3x3, so I was thinking of
using 
http://en.wikipedia.org/wiki/Eigenvalue_algorithm#Eigenvalues_of_a_Symmetric_3x3_Matrix
but I might have anywhere from 2 to 10 at some point.  (To add another
spin to this, I recently acquired an NVIDIA Tesla card and am thinking
of using it for this problem.)
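
In case it helps anyone else, here is my sketch of that 3x3 formula
vectorized over a whole symmetric (N, 3, 3) stack, spot-checked against
np.linalg at the end:

import numpy as np

def eigvals_sym3(A):
    """Eigenvalues of a stack of symmetric 3x3 matrices, A.shape == (N, 3, 3).

    Vectorized trigonometric formula from the page linked above; returns
    an (N, 3) array, ascending per row.
    """
    q = (A[:, 0, 0] + A[:, 1, 1] + A[:, 2, 2]) / 3.0
    B = A - q[:, None, None] * np.eye(3)
    p = np.sqrt((B * B).sum(axis=2).sum(axis=1) / 6.0)
    safe = np.where(p > 0, p, 1.0)        # p == 0 means A is already q*I
    C = B / safe[:, None, None]
    detC = (C[:, 0, 0] * (C[:, 1, 1] * C[:, 2, 2] - C[:, 1, 2] * C[:, 2, 1])
            - C[:, 0, 1] * (C[:, 1, 0] * C[:, 2, 2] - C[:, 1, 2] * C[:, 2, 0])
            + C[:, 0, 2] * (C[:, 1, 0] * C[:, 2, 1] - C[:, 1, 1] * C[:, 2, 0]))
    phi = np.arccos(np.clip(detC / 2.0, -1.0, 1.0)) / 3.0
    e1 = q + 2 * p * np.cos(phi)
    e3 = q + 2 * p * np.cos(phi + 2 * np.pi / 3)
    return np.sort(np.column_stack([e1, 3 * q - e1 - e3, e3]), axis=1)

M = np.random.rand(5, 3, 3)
M = M + M.transpose(0, 2, 1)              # symmetrize
ref = np.array([np.linalg.eigvalsh(m) for m in M])
assert np.allclose(eigvals_sym3(M), ref)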

Thanks,
Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Daniel Wheeler
On Tue, Jul 12, 2011 at 10:52 AM, Dag Sverre Seljebotn
 wrote:
> On 07/12/2011 04:10 PM, Daniel Wheeler wrote:
>> On Tue, Jul 12, 2011 at 3:51 AM, Dag Sverre Seljebotn
>> Thanks for the heads up. Looks like an option. Presumably, it would
>> still have to use "map" even with more direct access to BLAS (still
>> going C<->  python for every matrix)? Also, adding extra non-standard
>> dependencies is a problem as this code is part of a production code
>> that's passed onto others.
>>
>
> I was thinking you'd use it as a starting point, and actually write
> low-level for-loops indexing the buffer data pointers in Cython.

I realized that as soon as I'd hit the send button. Thanks.


-- 
Daniel Wheeler
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Gael Varoquaux
On Mon, Jul 11, 2011 at 05:01:07PM -0400, Daniel Wheeler wrote:
> Hi, I am trying to find the eigenvalues and eigenvectors as well as
> the inverse for a large number of small matrices. The matrix size
> (MxM) will typically range from 2x2 to 8x8 at most. 

If you really care about speed, for matrices of this size you shouldn't
call a linear algebra pack, but simply precompute the closed-form
solution. Here is sympy code that I used a while ago to generate fast
code to do inverse of SPD matrices.

G
"""
Generate the code for fast_inv
"""
from itertools import chain
from sympy import Symbol, cse, Matrix
from sympy.printing.str import StrPrinter, precedence, S

# First create a special printer to print squares and multiplications, rather
# than pows, as cython is not able to optimize them
class MyPrinter(StrPrinter):

    def _print_Pow(self, expr):
        PREC = precedence(expr)
        if expr.exp is S.NegativeOne:
            return '1/%s' % (self.parenthesize(expr.base, PREC))
        elif expr.exp == 2:
            return '%s*%s' % (self.parenthesize(expr.base, PREC),
                              self.parenthesize(expr.base, PREC))
        else:
            return '%s**%s' % (self.parenthesize(expr.base, PREC),
                               self.parenthesize(expr.exp, PREC))

my_printer = MyPrinter()


def code(n):
    """ Generate the code for inverting a C matrix of rank n.
    """
    l = list()
    for i in range(n):
        ll = list()
        for j in range(n):
            if j < i:
                ll.append(l[j][i])
            else:
                ll.append(Symbol('a[%i, %i]' % (i, j)))
        l.append(ll)
    m = Matrix(l)
    a = m.inverse_LU()
    vars, expr = cse([poly  # .radsimp()
                      for poly in chain(*a.tolist())])

    var_str = '\n'.join(['    cdef double %s = %s' %
                         (name, my_printer.doprint(v)) for name, v in vars])
    out_str = """

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.profile(False)
@cython.cdivision(True)
@cython.cdivision_warnings(False)
cdef _inv_%i(np.ndarray[np.float_t, ndim=2, mode="c"] a):
    # Intermediate variable definition
%s
    # Final matrix
    return np.array(%s).reshape((%i, %i))
""" % (n, var_str, my_printer.doprint(expr), n, n)
    return out_str


out_file = file('fast_inv.pyx', 'w')

out_file.write("""
'''
Code to do a fast inverse on small symmetric matrices
'''
cimport cython

from scipy import linalg
import numpy as np
cimport numpy as np
from scipy.linalg import inv

""")

N = 7


for n in range(2, N+1):
    out_file.write(code(n))
    out_file.flush()
    print 'Done order %i' % n

out_file.write('''
def sym_inv(a):
    """ Return the inverse of the symmetric matrix 'a'.

    This code is optimized for small symmetric matrices.
    """
    cdef int n = a.shape[0]
    if n == 1:
        return 1./a
%s
    else:
        return inv(a)
''' % ''.join(["""
    elif n == %i:
        return _inv_%i(np.ascontiguousarray(a))"""
               % (i, i) for i in range(2, N+1)]))

out_file.close()
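
For completeness, this is how I then use the result (the build incantation
is illustrative; any standard Cython build of fast_inv.pyx works):

# after: cython fast_inv.pyx, then compile the .c file (or use a setup.py)
import numpy as np
from fast_inv import sym_inv

a = np.random.rand(4, 4)
a = np.dot(a, a.T) + 4 * np.eye(4)    # a small SPD test matrix
assert np.allclose(np.dot(sym_inv(a), a), np.eye(4))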

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] object scalars

2011-07-12 Thread Olivier Delalleau
I found a workaround but it's a bit ugly:
def some_call(x):
  rval = numpy.array(None, dtype='object')
  rval.fill(x)
  return rval

-=- Olivier

2011/7/12 Johann Hibschman 

> Is there any way to wrap a sequence (in particular a python list) as a
> numpy object scalar, without it being promoted to an object array?
>
> In particular,
>
>  np.object_([1, 2]).shape == (2,)
>  np.array([1,2], dtype='O').shape == (2,)
>
> while I want
>
>  some_call([1,2]).shape == ()
>
> Thanks,
> Johann
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Sturla Molden
On 11.07.2011 23:01, Daniel Wheeler wrote:
> The above uses "map" to fake a vector solution, but this is heinously
> slow. Are there any better ways to do this without resorting to cython
> or weave (would it even be faster (or possible) to use "np.linalg.eig"
> and "np.linalg.inv" within cython)?

I see two problems here:

def inv(A):
    """
    Inverts N MxM matrices, A.shape = (M, M, N), inv(A).shape = (M, M, N).
    """
    return np.array(map(np.linalg.inv, A.transpose(2, 0, 1))).transpose(1, 2, 0)


At least get rid of the transpositions and non-contiguous memory access.

Use shape (N,M,M) in C order or (M,M,N) in Fortran order to make memory 
access more contiguous and cache friendly. Statements like 
A.transpose(2,0,1) are evil: they just burn the CPU and flood the memory 
bus; and cache, prefetching, etc. will do no good as everything is out 
of order.

LAPACK is written in Fortran, so at least with scipy.linalg I would use 
shape (M,M,N) in Fortran order to avoid redundant transpositions by 
f2py. I am not sure what NumPy's lapack_lite does, though.
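
That is, with shape (N,M,M) in C order the obvious loop already walks
contiguous memory (a sketch; sizes made up):

import numpy as np

N, M = 100000, 4
A = np.random.rand(N, M, M) + M * np.eye(M)  # each A[i] is one contiguous block
out = np.empty_like(A)
for i in xrange(N):
    out[i] = np.linalg.inv(A[i])   # no transposition, cache-friendly access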

You will have a lot of Python overhead by using np.linalg.inv on each of 
the N matrices. Therefore, you don't gain much by using map instead of a 
for loop. Using map will save a few attribute lookups per loop, but 
there are dozens of them.

To make the loop over N matrices fast, there is nothing that beats a 
loop in C or Fortran (or Cython) if you have a 3D array. And that brings 
us to the second issue, which is that it would be nice if the solvers in 
numpy.linalg (and scipy.linalg) were vectorized for 3D arrays.

Calling np.linalg.inv from Cython will not help though, as you incur the 
same overhead as calling np.linalg.inv with map from Python.

Another question is whether you really need to compute the inverse at all. A 
matrix inversion followed by a matrix multiplication can be replaced by 
solving a linear system, which takes only half the amount of computation.
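
For example (a sketch):

import numpy as np

A = np.random.rand(4, 4) + 4 * np.eye(4)
b = np.random.rand(4)

x1 = np.dot(np.linalg.inv(A), b)   # explicit inverse, then multiply
x2 = np.linalg.solve(A, b)         # one LU solve: about half the work
assert np.allclose(x1, x2)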

Sturla

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] object scalars

2011-07-12 Thread Johann Hibschman
Is there any way to wrap a sequence (in particular a python list) as a
numpy object scalar, without it being promoted to an object array?

In particular,

  np.object_([1, 2]).shape == (2,)
  np.array([1,2], dtype='O').shape == (2,)

while I want

  some_call([1,2]).shape == ()

Thanks,
Johann

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Dag Sverre Seljebotn
On 07/12/2011 04:10 PM, Daniel Wheeler wrote:
> On Tue, Jul 12, 2011 at 3:51 AM, Dag Sverre Seljebotn
>   wrote:
>> On 07/11/2011 11:01 PM, Daniel Wheeler wrote:
>>> Hi, I am trying to find the eigenvalues and eigenvectors as well as
>>> the inverse for a large number of small matrices. The matrix size
>
>> If you want to go the Cython route, here's a start:
>>
>> http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/
>
> Thanks for the heads up. Looks like an option. Presumably, it would
> still have to use "map" even with more direct access to BLAS (still
> going C <-> python for every matrix)? Also, adding extra non-standard
> dependencies is a problem as this code is part of a production code
> that's passed onto others.
>

I was thinking you'd use it as a starting point, and actually write 
low-level for-loops indexing the buffer data pointers in Cython.

If you make sure the array has passed through np.ascontiguousarray or 
np.asfortranarray, you can do

cimport numpy as np
np.import_array()

...

def func(np.ndarray[double, ndim=3, mode='fortran'] arr):
    cdef double *buf = <double *> np.PyArray_DATA(arr)
    # low-level C-like code to get slices of buf and pass to BLAS

Dag Sverre
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Daniel Wheeler
On Tue, Jul 12, 2011 at 3:51 AM, Dag Sverre Seljebotn
 wrote:
> On 07/11/2011 11:01 PM, Daniel Wheeler wrote:
>> Hi, I am trying to find the eigenvalues and eigenvectors as well as
>> the inverse for a large number of small matrices. The matrix size

> If you want to go the Cython route, here's a start:
>
> http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/

Thanks for the heads up. Looks like an option. Presumably, it would
still have to use "map" even with more direct access to BLAS (still
going C <-> python for every matrix)? Also, adding extra non-standard
dependencies is a problem as this code is part of a production code
that's passed onto others.


-- 
Daniel Wheeler
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Are .M and .H removed in NumPy 1.6?

2011-07-12 Thread Sturla Molden

On 12.07.2011 15:55, Charles R Harris wrote:
> On Tue, Jul 12, 2011 at 7:36 AM, Sturla Molden wrote:
>
>> After upgrading EPD, I just discovered that my ndarrays no longer have
>> .M and .H attributes.
>>
>> Were they deprecated, or is my NumPy not working correctly?
>
> I thought they were long gone:
> http://mail.scipy.org/pipermail/numpy-discussion/2006-July/009247.html


Yes, thanks. Sorry for my confusion.

.H and .A are obviously attributes of np.matrix, but there are no .H and 
.M for np.ndarray.
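
For completeness, the plain-ndarray spellings of the same things (a quick
sketch):

import numpy as np

a = np.array([[1 + 2j, 3], [4j, 5]])
H = a.conj().T         # conjugate transpose, what .H gives on np.matrix
m = np.asmatrix(a)     # a matrix view, the role .M played
assert (m.H == H).all()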


Sturla

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Are .M and .H removed in NumPy 1.6?

2011-07-12 Thread Charles R Harris
On Tue, Jul 12, 2011 at 7:36 AM, Sturla Molden  wrote:

> After upgrading EPD, I just discovered that my ndarrays no longer have
> .M and .H attributes.
>
> Were they deprecated, or is my NumPy not working correctly?
>
>
I thought they were long gone:
http://mail.scipy.org/pipermail/numpy-discussion/2006-July/009247.html

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Are .M and .H removed in NumPy 1.6?

2011-07-12 Thread Sturla Molden
After upgrading EPD, I just discovered that my ndarrays no longer have 
.M and .H attributes.

Were they deprecated, or is my NumPy not working correctly?


Sturla
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] inverting and calculating eigenvalues for many small matrices

2011-07-12 Thread Dag Sverre Seljebotn
On 07/11/2011 11:01 PM, Daniel Wheeler wrote:
> Hi, I am trying to find the eigenvalues and eigenvectors as well as
> the inverse for a large number of small matrices. The matrix size
> (MxM) will typically range from 2x2 to 8x8 at most. The number of
> matrices (N) can be from 100 up to a million or more. My current
> solution is to define "eig" and "inv" to be,
>
> def inv(A):
>     """
>     Inverts N MxM matrices, A.shape = (M, M, N), inv(A).shape = (M, M, N).
>     """
>     return np.array(map(np.linalg.inv, A.transpose(2, 0, 1))).transpose(1, 2, 0)
>
> def eig(A):
>     """
>     Calculate the eigenvalues and eigenvectors of N MxM matrices,
>     A.shape = (M, M, N), eig(A)[0].shape = (M, N), eig(A)[1].shape = (M, M, N)
>     """
>     tmp = zip(*map(np.linalg.eig, A.transpose(2, 0, 1)))
>     return (np.array(tmp[0]).swapaxes(0,1), np.array(tmp[1]).transpose(1,2,0))
>
> The above uses "map" to fake a vector solution, but this is heinously
> slow. Are there any better ways to do this without resorting to cython
> or weave (would it even be faster (or possible) to use "np.linalg.eig"
> and "np.linalg.inv" within cython)? I could write specialized versions

If you want to go the Cython route, here's a start:

http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/

Dag Sverre
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion