Actually, the second version I wrote is inaccurate, because `y.T` will
permute the remaining axes in the result, but the '...' in einsum
won't do this.
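A quick sketch of the difference described above (shapes are illustrative, not from the thread):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# '...' in einsum carries the leading axes through in their original order;
# only the named axes (here i and j) are permuted:
r1 = np.einsum('...ij->...ji', a)
print(r1.shape)  # (2, 4, 3)

# .T reverses *all* axes, so the leading axes get permuted too:
r2 = a.T
print(r2.shape)  # (4, 3, 2)
```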
On Sat, Apr 20, 2019 at 1:24 AM Andras Deak wrote:
>
> I agree with Stephan, I can never remember how np.dot works for
> multidimensional arrays,
I agree with Stephan, I can never remember how np.dot works for
multidimensional arrays, and I rarely need its behaviour. Einsum, on
the other hand, is both intuitive to me and more general.
Anyway, yes, if y has a leading singleton dimension then its transpose
will have shape (28, 28, 1), which leads …

You may find np.einsum() more intuitive than np.dot() for aligning axes --
it's certainly more explicit.
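A sketch of what the explicit axis labels buy you, using the shapes from the thread (the variable names `X` and `y` follow the original post):

```python
import numpy as np

X = np.random.rand(100, 28, 28)  # a stack of 100 images
y = np.random.rand(1, 28, 28)    # one 28x28 matrix with a leading singleton axis

# Matrix-multiply each image by y[0], spelling out exactly which axes
# contract (j) and which are carried through (n, i, k):
out = np.einsum('nij,jk->nik', X, y[0])
print(out.shape)  # (100, 28, 28)

# The @ operator does the same thing and broadcasts over the leading axis:
out2 = X @ y
print(np.allclose(out, out2))  # True
```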
On Fri, Apr 19, 2019 at 3:59 PM C W wrote:
> Thanks, you are right. I overlooked that it's for addition.
>
> The original problem was that I have matrix X (RGB image, 3 layers), and
> vector y.
>
Thanks, you are right. I overlooked that it's for addition.
The original problem was that I have matrix X (RGB image, 3 layers), and
vector y.
I wanted to do np.dot(X, y.T).
>>> X.shape # 100 of 28 x 28 matrix
(100, 28, 28)
>>> y.shape # Just one 28 x 28 matrix
(1, 28, 28)
But, np.dot() gives me four …
On Sat, Apr 20, 2019 at 12:24 AM C W wrote:
>
> Am I misreading something? Thank you in advance!
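The four-dimensional result follows from np.dot's rule for N-d arrays: it contracts the last axis of the first argument with the second-to-last axis of the second, then concatenates the remaining axes instead of broadcasting them. A minimal reproduction with the shapes from the thread:

```python
import numpy as np

X = np.ones((100, 28, 28))
y = np.ones((1, 28, 28))

# y.T reverses all axes: (1, 28, 28) -> (28, 28, 1)
print(y.T.shape)  # (28, 28, 1)

# np.dot: result shape = X.shape[:-1] + y.T.shape[:-2] + y.T.shape[-1:]
#                      = (100, 28)    + (28,)          + (1,)
result = np.dot(X, y.T)
print(result.shape)  # (100, 28, 28, 1) -- four dimensions
```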
Hey,
You are missing that the broadcasting rules typically apply to
arithmetic operations and methods that are specified explicitly to
broadcast. There is no mention of broadcasting in the docs of …
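A small sketch of the distinction being made here, reusing the shapes from the thread: elementwise arithmetic broadcasts, np.dot does not.

```python
import numpy as np

X = np.ones((100, 28, 28))
y = np.ones((1, 28, 28))

# Elementwise arithmetic follows the broadcasting rules, so y's leading
# singleton axis is stretched to 100:
print((X * y).shape)  # (100, 28, 28)
print((X + y).shape)  # (100, 28, 28)

# np.dot is not a broadcasting operation; it contracts and concatenates
# axes, producing a 4-d result here:
print(np.dot(X, y.T).shape)  # (100, 28, 28, 1)
```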
Hello all,
Can an m x n x k matrix be multiplied with an n x k matrix? Looking at the
Numpy doc page 46 (
https://docs.scipy.org/doc/numpy-1.11.0/numpy-user-1.11.0.pdf), it should
work.
It says the following:
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
But, th…
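The example from the doc can be checked directly. A minimal sketch (note that `np.broadcast_shapes` requires NumPy >= 1.20; the addition works on any version):

```python
import numpy as np

A = np.ones((15, 3, 5))  # 3-d array
B = np.ones((3, 5))      # 2-d array

# Elementwise operations broadcast B across A's leading axis:
print((A + B).shape)  # (15, 3, 5)

# broadcast_shapes confirms the rule without allocating anything:
print(np.broadcast_shapes((15, 3, 5), (3, 5)))  # (15, 3, 5)
```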
On Fri, Apr 19, 2019 at 4:54 AM Kevin Sheppard
wrote:
> > Finally, why do we expose the np.random.gen object? I thought part of the
> idea with the new API was to avoid global mutable state.
>
> Module level functions are essential for quick experiments and should be
> provided. The only differe…
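For context, a sketch of the trade-off being discussed, using the API as it eventually shipped (this thread predates the final design, so the names here are the released ones, not necessarily what was on the table at the time):

```python
import numpy as np

# Module-level functions draw from hidden global state -- convenient for
# quick experiments, but the state is shared by everything in the process:
np.random.seed(0)
a = np.random.random(3)

# The new-style API makes the state explicit and local:
rng = np.random.default_rng(0)
b = rng.random(3)

# Reseeding reproduces the stream in both styles:
np.random.seed(0)
print(np.allclose(a, np.random.random(3)))  # True
```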
On Fri, Apr 19, 2019 at 5:16 AM Neal Becker wrote:
> The boost_random c++ library uses the terms 'generators' and
> 'distributions'. Distributions are applied to generators.
>
"distributions" is a little confusing in the context of
scipy.stats.distributions, which a distribution corresponds to
The boost_random c++ library uses the terms 'generators' and
'distributions'. Distributions are applied to generators.
On Fri, Apr 19, 2019 at 7:54 AM Kevin Sheppard
wrote:
>
> Rather than "base RNG", what about calling these classes a "random source"
> or "random stream"? In particular, I would suggest defining two Python
> classes:
> - np.random.Generator as a less redundant name for what is currently
>   called RandomGenerator
> - np.random.Source or np.random.Stream as an …
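For reference, the names that eventually shipped: the "source/stream" role became the BitGenerator subclasses (e.g. PCG64), and np.random.Generator wraps one to supply the distribution methods. A minimal sketch of that final split:

```python
import numpy as np

# The bit-level source of randomness (the "stream" in this discussion):
bitgen = np.random.PCG64(seed=42)

# Generator wraps a bit generator and exposes the distributions:
rng = np.random.Generator(bitgen)

samples = rng.standard_normal(2)
print(samples.shape)  # (2,)
```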