I agree with Stephan: I can never remember how np.dot works for
multidimensional arrays, and I rarely need its behaviour. Einsum, on
the other hand, is both intuitive to me and more general.
Anyway, yes: if y has a leading singleton dimension, then its transpose
will have shape (28, 28, 1), which leads to that unexpected trailing
singleton dimension. If you look at how the shape changes at each step
(first the transpose, then np.dot) you can see that everything is doing
what it should (i.e. what you tell it to do).
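For example, tracing the shapes with arrays of the sizes from your question (the values don't matter here, so zeros suffice):

```python
import numpy as np

X = np.zeros((100, 28, 28))
y = np.zeros((1, 28, 28))

print(y.T.shape)  # (28, 28, 1): .T reverses all axes, so the leading
                  # singleton becomes a trailing one

z = np.dot(X, y.T)
# np.dot pairs the last axis of X (length 28) with the second-to-last
# axis of y.T (length 28) and concatenates the remaining axes:
# (100, 28) + (28,) + (1,)
print(z.shape)    # (100, 28, 28, 1)
```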
With np.einsum you'd have to consider that you want to pair the last
axis of X with the first axis of y.T, i.e. the last axis of y
(assuming the latter has only two axes, so it doesn't have that
leading singleton). This would correspond to the rule 'abc,dc->abd',
or if you want to allow arbitrary leading dimensions on y,
'abc,...c->ab...':
>>> X = np.arange(3*4*5).reshape(3,4,5)
... y1 = np.arange(6*5).reshape(6,5)
... y2 = y1[:,None]  # shape (6, 1, 5): add a singleton axis
... print(np.einsum('abc,dc->abd', X, y1).shape)
... print(np.einsum('abc,...c->ab...', X, y2).shape)
(3, 4, 6)
(3, 4, 6, 1)
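And if the goal is a plain three-axis result with no stray singleton, here is a sketch (random data, shapes from the original question) of two ways to get there; they give the same numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 28, 28))
y = rng.standard_normal((1, 28, 28))

# Drop the leading singleton before contracting:
z1 = np.einsum('abc,dc->abd', X, y[0].T)  # y[0] has shape (28, 28)

# Or keep np.dot and squeeze the stray trailing axis afterwards:
z2 = np.dot(X, y.T).squeeze(-1)

print(z1.shape, z2.shape)   # (100, 28, 28) (100, 28, 28)
print(np.allclose(z1, z2))  # True
```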

AndrĂ¡s

On Sat, Apr 20, 2019 at 1:06 AM Stephan Hoyer <sho...@gmail.com> wrote:
>
> You may find np.einsum() more intuitive than np.dot() for aligning axes -- 
> it's certainly more explicit.
>
> On Fri, Apr 19, 2019 at 3:59 PM C W <tmrs...@gmail.com> wrote:
>>
>> Thanks, you are right. I overlooked that it's for addition.
>>
>> The original problem was that I have matrix X (RGB image, 3 layers), and 
>> vector y.
>>
>> I wanted to do np.dot(X, y.T).
>> >>> X.shape   # 100 of 28 x 28 matrix
>> (100, 28, 28)
>> >>> y.shape   # Just one 28 x 28 matrix
>> (1, 28, 28)
>>
>> But np.dot() gives me four axes, shown below:
>> >>> z = np.dot(X, y.T)
>> >>> z.shape
>> (100, 28, 28, 1)
>>
>> The fourth axis is unexpected. Should y.shape be (28, 28), not (1, 28, 28)?
>>
>> Thanks again!
>>
>> On Fri, Apr 19, 2019 at 6:39 PM Andras Deak <deak.and...@gmail.com> wrote:
>>>
>>> On Sat, Apr 20, 2019 at 12:24 AM C W <tmrs...@gmail.com> wrote:
>>> >
>>> > Am I misreading something? Thank you in advance!
>>>
>>> Hey,
>>>
>>> You are missing that the broadcasting rules typically apply to
>>> arithmetic operations and methods that are specified explicitly to
>>> broadcast. There is no mention of broadcasting in the docs of np.dot
>>> [1], and its behaviour is a bit more complicated.
>>> Specifically for multidimensional arrays (which you have), the doc says
>>>
>>> If a is an N-D array and b is an M-D array (where M>=2), it is a sum
>>> product over the last axis of a and the second-to-last axis of b:
>>> dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
>>>
>>> So your (3,4,5) @ (4,5) would want to collapse the 5-length last axis
>>> of `a` with the 4-length second-to-last axis of `b`; these don't
>>> match, so this won't work. If you want
>>> elementwise multiplication according to the broadcasting rules, just
>>> use `a * b`:
>>>
>>> >>> a = np.arange(3*4*5).reshape(3,4,5)
>>> ... b = np.arange(4*5).reshape(4,5)
>>> ... (a * b).shape
>>> (3, 4, 5)
>>>
>>>
>>> [1]: https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html
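To make the contrast above concrete with the same shapes, a small sketch:

```python
import numpy as np

a = np.arange(3*4*5).reshape(3, 4, 5)
b = np.arange(4*5).reshape(4, 5)

# Broadcasting aligns trailing axes, so (4, 5) stretches to (3, 4, 5):
print((a * b).shape)  # (3, 4, 5)

# np.dot instead tries to pair a's 5-length last axis with b's
# 4-length second-to-last axis, so it refuses:
try:
    np.dot(a, b)
except ValueError as exc:
    print('np.dot failed:', exc)
```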
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@python.org
>>> https://mail.python.org/mailman/listinfo/numpy-discussion
>>
>
