Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23126#discussion_r237520303
  
    --- Diff: 
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala 
---
    @@ -128,6 +128,82 @@ class RowMatrix @Since("1.0.0") (
         RowMatrix.triuToFull(n, GU.data)
       }
     
    +  private def computeDenseVectorCovariance(mean: Vector, n: Int, m: Long): 
Matrix = {
    +
    +    val bc = rows.context.broadcast(mean)
    +
    +    // Computes n*(n+1)/2, avoiding overflow in the multiplication.
    +    // This succeeds when n <= 65535, which is checked above
    +    val nt = if (n % 2 == 0) ((n / 2) * (n + 1)) else (n * ((n + 1) / 2))
    +
    +    val MU = rows.treeAggregate(new BDV[Double](nt))(
    +      seqOp = (U, v) => {
    +
    +        val n = v.size
    +        val na = Array.ofDim[Double](n)
    +        val means = bc.value
    +        if (v.isInstanceOf[DenseVector]) {
    +          v.foreachActive{(index, value) =>
    --- End diff --
    
Nit (this might fail scalastyle): add spaces around the braces here. But I 
don't think we need `foreachActive` here; every element of a dense vector is 
'active', and every one of them needs the mean subtracted. Even for a sparse 
vector you would have to do this. Do you really need a separate case here? 
We're assuming the vectors are (mostly) dense in this method.
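
    A rough sketch of what I mean (plain arrays standing in for mllib 
    vectors, and `centered` is a hypothetical helper, not code from this PR): 
    subtracting the mean touches every index regardless of sparsity, so one 
    plain loop covers both the dense and sparse cases without branching on 
    the vector type.

    ```scala
    // Sketch: center a vector by subtracting the per-column means.
    // Every index is visited, which is why foreachActive buys nothing here:
    // a zero entry of a sparse vector still becomes -mean(i) after centering.
    def centered(values: Array[Double], means: Array[Double]): Array[Double] = {
      require(values.length == means.length)
      val na = new Array[Double](values.length)
      var i = 0
      while (i < na.length) {
        na(i) = values(i) - means(i)
        i += 1
      }
      na
    }
    ```

    For a real mllib `Vector`, `v.toArray` would give the dense copy to feed 
    such a loop; materializing the zeros is fine since this method assumes 
    mostly-dense rows anyway.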


---
