Xiangrui Meng created SPARK-5905:
------------------------------------

             Summary: Improve RowMatrix user guide and doc.
                 Key: SPARK-5905
                 URL: https://issues.apache.org/jira/browse/SPARK-5905
             Project: Spark
          Issue Type: Improvement
          Components: Documentation, MLlib
    Affects Versions: 1.3.0
            Reporter: Xiangrui Meng
            Priority: Minor


From mbofb's comment in PR https://github.com/apache/spark/pull/4680:

{code}
The description of RowMatrix.computeSVD and mllib-dimensionality-reduction.html 
should be more precise/explicit about the m x n matrix. From the current 
description I would conclude that n refers to the rows. According to 
http://math.stackexchange.com/questions/191711/how-many-rows-and-columns-are-in-an-m-x-n-matrix
this way of describing a matrix is only used in particular domains. As a reader 
interested in applying SVD, I would prefer the more common m x n convention of 
rows x columns (e.g. http://en.wikipedia.org/wiki/Matrix_%28mathematics%29 ), 
which is also used in http://en.wikipedia.org/wiki/Latent_semantic_analysis 
(and also in the ARPACK manual:
“
N Integer. (INPUT) - Dimension of the eigenproblem. 
NEV Integer. (INPUT) - Number of eigenvalues of OP to be computed. 0 < NEV < N. 
NCV Integer. (INPUT) - Number of columns of the matrix V (less than or equal to 
N).
“
).

Description of RowMatrix.computeSVD and mllib-dimensionality-reduction.html: 
"We assume n is smaller than m." Is this just a recommendation or a hard 
requirement? The condition does not seem to be checked and does not cause an 
IllegalArgumentException; the processing finishes even though the vectors have 
a higher dimension than the number of vectors.

Description of RowMatrix.computePrincipalComponents, or RowMatrix in general: 
I got an exception:
java.lang.IllegalArgumentException: Argument with more than 65535 cols: 7949273
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.checkNumColumns(RowMatrix.scala:131)
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeCovariance(RowMatrix.scala:318)
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.computePrincipalComponents(RowMatrix.scala:373)
It would be nice if this 65535-column restriction were mentioned in the doc 
(if it still applies in 1.3).
{code}
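
To make the first point concrete, here is a minimal sketch (assuming the Scala 
API and a local SparkContext; the app name and example values are illustrative) 
of how the m x n = rows x columns convention plays out in RowMatrix.computeSVD. 
The dimensions of U, s, and V make the convention explicit:

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val sc = new SparkContext(new SparkConf().setAppName("svd-dims").setMaster("local[2]"))

// A tall-and-skinny m x n matrix: m = 4 rows, n = 2 columns (so n < m).
val rows = sc.parallelize(Seq(
  Vectors.dense(1.0, 2.0),
  Vectors.dense(3.0, 4.0),
  Vectors.dense(5.0, 6.0),
  Vectors.dense(7.0, 8.0)))
val mat = new RowMatrix(rows)
println(s"m = ${mat.numRows()}, n = ${mat.numCols()}")  // m = 4, n = 2

val k = 2
val svd = mat.computeSVD(k, computeU = true)
println(s"U: ${svd.U.numRows()} x ${svd.U.numCols()}")  // m x k = 4 x 2
println(s"s: ${svd.s.size} singular values")            // k = 2
println(s"V: ${svd.V.numRows} x ${svd.V.numCols}")      // n x k = 2 x 2
{code}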

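On the second point ("We assume n is smaller than m"), a hedged sketch 
continuing from the definitions above. Per the report, a wide matrix (n > m) is 
not rejected, so the doc should state whether the condition is a hard 
requirement or only the case the implementation targets:

{code}
// A wide 2 x 3 matrix: m = 2 rows, n = 3 columns, so n > m.
val wide = new RowMatrix(sc.parallelize(Seq(
  Vectors.dense(1.0, 0.0, 2.0),
  Vectors.dense(0.0, 3.0, 1.0))))

// Per the report above, this finishes without an IllegalArgumentException,
// even though the documented assumption "n is smaller than m" is violated.
val wideSvd = wide.computeSVD(2, computeU = true)
println(s"singular values: ${wideSvd.s}")
{code}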

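On the 65535-column limit, a caller-side guard sketch. The require pre-check 
below is an illustrative pattern, not Spark API; the cap itself is enforced by 
RowMatrix.checkNumColumns (see the stack trace above), presumably because 
computeCovariance materializes an n x n Gramian locally:

{code}
// Illustrative guard before computePrincipalComponents: checkNumColumns
// (per the stack trace above) rejects matrices wider than 65535 columns.
val n = mat.numCols()
require(n <= 65535,
  s"computePrincipalComponents supports at most 65535 columns, got $n")

val pc = mat.computePrincipalComponents(k)  // local n x k matrix
println(s"principal components: ${pc.numRows} x ${pc.numCols}")
{code}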
