Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22784#discussion_r228724667
  
    --- Diff: mllib/src/test/scala/org/apache/spark/mllib/feature/PCASuite.scala ---
    @@ -54,4 +55,14 @@ class PCASuite extends SparkFunSuite with MLlibTestSparkContext {
         // check overflowing
         assert(PCAUtil.memoryCost(40000, 60000) > Int.MaxValue)
       }
    +
    +  test("number of features more than 65500") {
    +    val rows = 10
    +    val columns = 100000
    +    val k = 5
    +    val randomRDD = RandomRDDs.normalVectorRDD(sc, rows, columns, 0, 0)
    +    val pca = new PCA(k).fit(randomRDD)
    +    assert(pca.explainedVariance.size === 5)
    +    assert(pca.pc.numRows === 100000 && pca.pc.numCols === 5)
    --- End diff ---
    
    Is there an easy dummy test case we can write where we know what the first
    PC should be? For example, if you generate a bunch of vectors like
    (a +/- epsilon, a +/- epsilon, ...) for many values of a, the first principal
    component should be nearly (1, 1, 1, ...), right? Is that easy enough to add
    as a trivial test of the actual analysis? I think that would really prove it,
    though your manual test suggests it's working.
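
    A minimal sketch of what such a test could look like (hypothetical, not part
    of this PR; it assumes the suite's existing PCA and Vectors imports and the
    sc provided by MLlibTestSparkContext):

        // Vectors of the form (a + eps, a + eps, ...) vary almost entirely along
        // the all-ones direction, so the first PC should be ~ (1,1,...,1)/sqrt(n).
        test("first principal component of near-constant vectors") {
          val n = 10
          val rng = new scala.util.Random(42)
          val data = sc.parallelize((0 until 100).map { _ =>
            val a = rng.nextDouble() * 10
            Vectors.dense(Array.fill(n)(a + (rng.nextDouble() - 0.5) * 1e-3))
          })
          val model = new PCA(1).fit(data)
          // pc is an n x 1 matrix; up to sign, every entry should be ~ 1/sqrt(n)
          val expected = 1.0 / math.sqrt(n)
          model.pc.toArray.foreach { v =>
            assert(math.abs(math.abs(v) - expected) < 1e-3)
          }
        }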


---
