Hi Charles,

Thank you for your response!

I do think np.einsum() is really great. I am not clear on how it ties
to my question, though, because I was thinking more along the lines of
wrapping and reshaping the output (#1) and improving the documentation
(#2), for cases where the correct result is already being computed.
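
To make #1 concrete, here is roughly the kind of wrapper I had in mind
(just a sketch; the example shapes are arbitrary, and np.multiply.outer
and np.einsum are only used below to cross-check the reshaped np.outer
result):

import numpy as np

def tensorprod(a, b):
    # t[i1, ..., iN, j1, ..., jM] = a[i1, ..., iN] * b[j1, ..., jM]
    a = np.asarray(a)
    b = np.asarray(b)
    return np.outer(a, b).reshape(a.shape + b.shape)

a = np.arange(6.0).reshape(2, 3)    # shape (2, 3)
b = np.arange(20.0).reshape(4, 5)   # shape (4, 5)
t = tensorprod(a, b)                # shape (2, 3, 4, 5)
assert t.shape == a.shape + b.shape
assert np.allclose(t, np.multiply.outer(a, b))
assert np.allclose(t, np.einsum('ij,kl->ijkl', a, b))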

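And for #2, this is the equivalence I would like the docstring to spell
out when axes is an integer-like scalar (my reading of the current
behaviour, not wording taken from the docs): the last `axes` axes of a
are summed against the first `axes` axes of b, in order.

import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(3, 4, 5)

N = 2  # integer-like scalar axes
implicit = np.tensordot(a, b, axes=N)
explicit = np.tensordot(a, b, axes=(list(range(-N, 0)), list(range(0, N))))
assert implicit.shape == (2, 5)
assert np.allclose(implicit, explicit)
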

-Shawn

On Tue, Nov 25, 2014 at 11:12 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
> Take a look at einsum; it is quite good for such things.
>
> Chuck
>
> On Tue, Nov 25, 2014 at 9:06 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
>>
>> Dear all,
>>
>> I have been doing tensor algebra recently (for continuum mechanics)
>> and was looking into two common operations: tensor product & tensor
>> contraction.
>>
>> 1. Tensor product
>>
>> One common usage is:
>> a[i1, i2, i3, ..., iN, j1, j2, j3, ..., jM] = b[i1, i2, i3, ..., iN] *
>> c[j1, j2, j3, ..., jM]
>>
>> I looked into the current np.outer(), and the only difference is that
>> it always flattens its input arrays. So the function for the tensor
>> product is simply
>>
>> np.outer(a, b, out=out).reshape(a.shape + b.shape)  <-- I think I got
>> this right, but please do correct me if I am wrong
>>
>> Would anyone think it helpful or harmful to add such a function,
>> np.tensorprod()? It would simply be something like
>>
>> def tensorprod(a, b, out=None):
>>     return np.outer(a, b, out=out).reshape(a.shape + b.shape)
>>
>>
>> 2. Tensor contraction
>>
>> It is currently np.tensordot(a, b), which defaults to np.tensordot(a,
>> b, axes=2). I think this is all great, but it would be even better if
>> the doc:
>>     i) said explicitly that the default is the double-dot, i.e.
>> double-contraction, operator, and
>>     ii) explained, when axes is an integer-like scalar, which axes are
>> selected from the two arrays and in what order. For example: if axes
>> is an integer-like scalar, it is the number of axes to sum over,
>> equivalent to axes=(list(range(-axes, 0)), list(range(0, axes)))   (or
>> something like this)
>>
>>
>> It'd be great to hear what you think about this.
>>
>> Shawn
>>
>>
>> --
>> Yuxiang "Shawn" Wang
>> Gerling Research Lab
>> University of Virginia
>> yw...@virginia.edu
>> +1 (434) 284-0836
>> https://sites.google.com/a/virginia.edu/yw5aj/



-- 
Yuxiang "Shawn" Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/