Github user akopich commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18924#discussion_r140180799
  
    --- Diff: 
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
    @@ -503,17 +518,15 @@ final class OnlineLDAOptimizer extends LDAOptimizer {
       }
     
       /**
    -   * Update alpha based on `gammat`, the inferred topic distributions for 
documents in the
    -   * current mini-batch. Uses Newton-Rhapson method.
    +   * Update alpha based on `logphat`.
    --- End diff --
    
    @WeichenXu123, you are right. So should we add `stats.count()`, or should we 
rather embed the counting in the aggregation phase so that we avoid a second 
pass?
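
    To illustrate the second option: the idea is to carry the document count 
inside the aggregation accumulator so one traversal produces both the summed 
statistics and the count. The sketch below is hypothetical and simplified — in 
the PR, `stats` is a Spark RDD and the fold would be a `treeAggregate`, whereas 
here `stats` is a plain `Seq[Array[Double]]` and `aggregateWithCount` is an 
invented helper name, not code from `LDAOptimizer.scala`.

```scala
// Hypothetical sketch: fold the per-document statistics once, accumulating
// both the element-wise sum and the number of documents, so no separate
// stats.count() pass is needed afterwards.
object OnePassCount {
  def aggregateWithCount(stats: Seq[Array[Double]], k: Int): (Array[Double], Long) = {
    // Accumulator: (running element-wise sum of length k, running document count).
    stats.foldLeft((Array.fill(k)(0.0), 0L)) { case ((sum, n), doc) =>
      var i = 0
      while (i < k) { sum(i) += doc(i); i += 1 }
      (sum, n + 1)
    }
  }

  def main(args: Array[String]): Unit = {
    val stats = Seq(Array(1.0, 2.0), Array(3.0, 4.0))
    val (sum, n) = aggregateWithCount(stats, 2)
    println(sum.mkString(",") + " count=" + n)
  }
}
```

    With an RDD the same accumulator shape would go into `treeAggregate`'s 
`seqOp`/`combOp`, at the cost of shipping one extra `Long` per partition — 
cheaper than a second pass over the data.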


---
