Repository: spark
Updated Branches:
  refs/heads/master 7bdc92197 -> a94671a02


[SPARK-11506][MLLIB] Removed redundant operation in Online LDA implementation

In file LDAOptimizer.scala:

line 441: since "idx" was never used, replaced the unnecessary zipWithIndex.foreach
with a plain foreach.

-      nonEmptyDocs.zipWithIndex.foreach { case ((_, termCounts: Vector), idx: Int) =>
+      nonEmptyDocs.foreach { case (_, termCounts: Vector) =>

Author: a1singh <a1si...@ucsd.edu>

Closes #9456 from a1singh/master.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a94671a0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a94671a0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a94671a0

Branch: refs/heads/master
Commit: a94671a027c29bacea37f56b95eccb115638abff
Parents: 7bdc921
Author: a1singh <a1si...@ucsd.edu>
Authored: Thu Nov 5 12:51:10 2015 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Nov 5 12:51:10 2015 +0000

----------------------------------------------------------------------
 .../scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala     | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a94671a0/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala b/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala
index 38486e9..17c0609 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala
@@ -438,7 +438,7 @@ final class OnlineLDAOptimizer extends LDAOptimizer {
 
       val stat = BDM.zeros[Double](k, vocabSize)
       var gammaPart = List[BDV[Double]]()
-      nonEmptyDocs.zipWithIndex.foreach { case ((_, termCounts: Vector), idx: Int) =>
+      nonEmptyDocs.foreach { case (_, termCounts: Vector) =>
         val ids: List[Int] = termCounts match {
           case v: DenseVector => (0 until v.size).toList
           case v: SparseVector => v.indices.toList
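
For context, here is a minimal, self-contained sketch of the pattern the patch
removes. The collection and element types below are illustrative stand-ins using
plain Scala collections, not the actual nonEmptyDocs iterator or the MLlib Vector
type from LDAOptimizer.scala:

    // Hypothetical stand-in for nonEmptyDocs: (docId, termCounts) pairs.
    val docs = Seq((0L, Vector(1.0, 2.0)), (1L, Vector(0.0, 3.0)))

    // Before: zipWithIndex wraps every element in an extra tuple just to
    // carry an index that the loop body never reads.
    docs.zipWithIndex.foreach { case ((_, termCounts), idx) =>
      println(termCounts.sum)
    }

    // After: a plain foreach does the same work without allocating the
    // per-element (element, index) tuple.
    docs.foreach { case (_, termCounts) =>
      println(termCounts.sum)
    }

Since the index was never read, dropping zipWithIndex changes no behavior; it only
avoids the extra per-element tuple allocation inside the optimizer's document loop.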

