On Thursday, 12 November 2015 22:57:21 UTC+1, Robert Kern  wrote:
> On 2015-11-12 15:57, PythonDude wrote:
> > Hi all,
> >
> > I've come across a webpage with a Python tutorial/description for computing 
> > something, and I'm trying to work through this:
> >
> > R = p^T w
> >
> > where R is the resulting scalar and p^T is the transpose of a column vector.
> >
> > ...
> > p is a Nx1 column vector, so p^T turns into a 1xN row vector which can be
> > multiplied with the Nx1 weight (column) vector w to give a scalar result.
> > This is equivalent to the dot product used in the code. Keep in mind that
> > Python has a reversed definition of rows and columns and the accurate
> > NumPy version of the previous equation would be R = w * p.T
> > ...
> >
> > (source: http://blog.quantopian.com/markowitz-portfolio-optimization-2/ )
> >
> > I don't understand this: "Keep in mind that Python has a reversed
> > definition of rows and columns and the accurate NumPy version of the
> > previous equation would be R = w * p.T"
> >
> > That's not true for NumPy, is it? This page: 
> > http://mathesaurus.sourceforge.net/matlab-numpy.html suggests that Python 
> > and MATLAB look quite similar...
> >
> > Could anyone please explain or elaborate on exactly this (quote): "Keep in 
> > mind that Python has a reversed definition of rows and columns"?
> 
> He's wrong, simply put. There is no "reversed definition of rows and 
> columns". 

Great, thanks...
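
Just to convince myself, I tried the equation as written with explicit Nx1
column vectors in NumPy (made-up numbers of my own, not the blog's data), and
it does exactly what the linear-algebra notation says:

import numpy as np

# Made-up example values, purely for illustration.
p = np.array([[0.1], [0.2], [0.3]])   # N x 1 column vector (expected returns)
w = np.array([[0.5], [0.3], [0.2]])   # N x 1 column vector (portfolio weights)

# R = p^T w : a (1 x N) times (N x 1) product, giving a 1 x 1 array
R = np.dot(p.T, w)
print(R)          # [[ 0.17]]
print(R.item())   # ~0.17, the scalar result

So nothing about rows and columns is "reversed" here.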

> He simply instantiated the two vectors as row-vectors instead of 
> column-vectors (which he could just as easily have done), so he had to flip 
> the matrix expression.
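
That clears it up. Rebuilding the same toy example with 1xN row vectors
instead (again my own made-up numbers and plain ndarrays, not the code from
the blog) shows why the expression has to be flipped to get the same scalar:

import numpy as np

# Same made-up values as above, stored as 1 x N row vectors this time.
p = np.array([[0.1, 0.2, 0.3]])   # 1 x N row vector (expected returns)
w = np.array([[0.5, 0.3, 0.2]])   # 1 x N row vector (portfolio weights)

# With row vectors the transpose moves to the other operand: R = w p^T
R = np.dot(w, p.T)
print(R.item())   # ~0.17, same scalar as the column-vector version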

Thank you very much Robert - I just had to be sure about it :-)