Repository: spark
Updated Branches:
  refs/heads/master 71a138cd0 -> c0e9ff158


[SPARK-9800] Adds docs for GradientDescent$.runMiniBatchSGD alias

* Adds doc for the alias of runMiniBatchSGD, documenting the default value for 
convergenceTol
* Cleans up a note in the code

Author: Feynman Liang <fli...@databricks.com>

Closes #8425 from feynmanliang/SPARK-9800.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c0e9ff15
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c0e9ff15
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c0e9ff15

Branch: refs/heads/master
Commit: c0e9ff1588b4d9313cc6ec6e00e5c7663eb67910
Parents: 71a138c
Author: Feynman Liang <fli...@databricks.com>
Authored: Tue Aug 25 13:21:05 2015 -0700
Committer: Joseph K. Bradley <jos...@databricks.com>
Committed: Tue Aug 25 13:21:05 2015 -0700

----------------------------------------------------------------------
 .../org/apache/spark/mllib/optimization/GradientDescent.scala   | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/c0e9ff15/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala b/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
index 8f0d1e4..3b663b5 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
@@ -235,7 +235,7 @@ object GradientDescent extends Logging {
 
       if (miniBatchSize > 0) {
         /**
-         * NOTE(Xinghao): lossSum is computed using the weights from the previous iteration
+         * lossSum is computed using the weights from the previous iteration
          * and regVal is the regularization value computed in the previous iteration as well.
          */
         stochasticLossHistory.append(lossSum / miniBatchSize + regVal)
@@ -264,6 +264,9 @@ object GradientDescent extends Logging {
 
   }
 
+  /**
+   * Alias of [[runMiniBatchSGD]] with convergenceTol set to default value of 0.001.
+   */
   def runMiniBatchSGD(
       data: RDD[(Double, Vector)],
       gradient: Gradient,

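For readers following along, the alias pattern this commit documents can be sketched in plain Scala. This is a minimal, self-contained illustration, not the actual Spark API: the real runMiniBatchSGD takes an RDD, a Gradient, an Updater, and more; the demo object and its parameters here are hypothetical stand-ins.

```scala
// Sketch of the overload-alias pattern used by GradientDescent.runMiniBatchSGD:
// a shorter overload delegates to the full one with convergenceTol fixed at 0.001.
object MiniBatchDemo {
  // Full version: the caller supplies convergenceTol explicitly.
  def run(stepSize: Double, numIterations: Int, convergenceTol: Double): String =
    s"stepSize=$stepSize iters=$numIterations tol=$convergenceTol"

  // Alias: same name, fewer parameters, convergenceTol defaulted to 0.001,
  // mirroring the Scaladoc added in this commit.
  def run(stepSize: Double, numIterations: Int): String =
    run(stepSize, numIterations, convergenceTol = 0.001)
}
```

Calling the shorter overload is then equivalent to passing convergenceTol = 0.001 explicitly, which is what the added Scaladoc records.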

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
