On Mon, Dec 27, 2010 at 6:20 AM, Enzo Michelangeli <enzom...@gmail.com> wrote:
> Many thanks to Josef and Justin for their replies.
>
> Josef's hint sounds like a good way of reducing peak memory allocation
> especially when the row size is large, which makes the "for" overhead for
> each iteration comparatively lower. However, time is still spent in
> back-and-forth conversions between numpy arrays and the native BLAS data
> structures, and in copying data between the temporary array holding the
> intermediate results and the tableau.
>
> Regarding Justin's suggestion, before trying Cython (which, according to
> http://wiki.cython.org/tutorials/numpy , seems to require a bit of work to
> handle numpy arrays properly)
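
For reference, Josef's row-wise hint can be sketched in plain NumPy before
reaching for Cython — a minimal illustration (the function name and the
in-place update order are my choices, not from the earlier messages):

```python
import numpy as np

def pivot_inplace(tableau, locat, cand):
    # Normalize the pivot row in place (a view, no full-size temporary).
    tableau[locat, :] /= tableau[locat, cand]
    pivot = tableau[locat, :]
    # Update the other rows one at a time, so the only temporary
    # allocated per step is a single row, not a copy of the tableau.
    # Since pivot[cand] == 1, column cand zeroes itself for i != locat.
    for i in range(tableau.shape[0]):
        if i != locat:
            tableau[i, :] -= tableau[i, cand] * pivot
    return tableau
```

Peak extra memory is one row rather than one full tableau, at the cost of a
Python-level loop over rows — which is why this pays off mainly when rows are
large.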

Cython doesn't have to be that complicated. For your example, you just
have to unroll the vectorization (and account for the fact that the
result is mutated in place, which was your original goal).


cimport numpy

def do_it(numpy.ndarray[double, ndim=2] tableau, int locat, int cand,
          bint vectorize=True):
    cdef numpy.ndarray[double, ndim=1] pivot
    # Declare the loop indices as C integers; without this the loop
    # runs through Python objects and Cython buys you nothing.
    cdef Py_ssize_t i, j
    # The division allocates a new array, so pivot stays valid while
    # tableau is overwritten in place below.
    pivot = tableau[locat, :] / tableau[locat, cand]
    if vectorize:
        tableau -= tableau[:, cand:cand+1] * pivot
    else:
        for i in range(tableau.shape[0]):
            for j in range(tableau.shape[1]):
                if j != cand:
                    tableau[i, j] -= tableau[i, cand] * pivot[j]
    tableau[:, cand] = 0
    tableau[locat, :] = pivot
    return tableau
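
To compile it, a minimal setup.py does the job — a sketch, assuming the
function above is saved as pivot.pyx (the filename is my choice):

```python
# setup.py -- minimal build script for the Cython module above
from distutils.core import setup
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize("pivot.pyx"),
    include_dirs=[numpy.get_include()],  # headers needed by "cimport numpy"
)
```

Build with "python setup.py build_ext --inplace", then use it as
"from pivot import do_it".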


- Robert
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
