Repository: spark
Updated Branches:
  refs/heads/branch-1.5 5a32ed75c -> 95e44b4df


[SPARK-9800] Adds docs for GradientDescent$.runMiniBatchSGD alias

* Adds doc for the alias of runMiniBatchSGD, documenting the default value for
convergenceTol
* Cleans up a note in the code

Author: Feynman Liang <fli...@databricks.com>

Closes #8425 from feynmanliang/SPARK-9800.

(cherry picked from commit c0e9ff1588b4d9313cc6ec6e00e5c7663eb67910)
Signed-off-by: Joseph K. Bradley <jos...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/95e44b4d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/95e44b4d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/95e44b4d

Branch: refs/heads/branch-1.5
Commit: 95e44b4df81b09803be2fde8c4e2566be0c8fdbc
Parents: 5a32ed7
Author: Feynman Liang <fli...@databricks.com>
Authored: Tue Aug 25 13:21:05 2015 -0700
Committer: Joseph K. Bradley <jos...@databricks.com>
Committed: Tue Aug 25 13:21:16 2015 -0700

----------------------------------------------------------------------
 .../org/apache/spark/mllib/optimization/GradientDescent.scala   | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/95e44b4d/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala b/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
index 8f0d1e4..3b663b5 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
@@ -235,7 +235,7 @@ object GradientDescent extends Logging {
 
       if (miniBatchSize > 0) {
         /**
-         * NOTE(Xinghao): lossSum is computed using the weights from the previous iteration
+         * lossSum is computed using the weights from the previous iteration
          * and regVal is the regularization value computed in the previous iteration as well.
          */
         stochasticLossHistory.append(lossSum / miniBatchSize + regVal)
@@ -264,6 +264,9 @@ object GradientDescent extends Logging {
 
   }
 
+  /**
+   * Alias of [[runMiniBatchSGD]] with convergenceTol set to default value of 0.001.
+   */
   def runMiniBatchSGD(
       data: RDD[(Double, Vector)],
       gradient: Gradient,
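
For context, a minimal usage sketch of the alias this commit documents. Only the
first two parameters (data, gradient) are visible in the diff above; the
remaining parameter names and order, the LogisticGradient/SquaredL2Updater
choices, and the (weights, lossHistory) return value are assumptions based on
the Spark 1.5 MLlib API, not shown in this patch.

// A minimal sketch, not part of this patch. Assumes the Spark 1.5 MLlib API;
// `trainData` is a hypothetical RDD of (label, features) pairs.
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
import org.apache.spark.rdd.RDD

def train(trainData: RDD[(Double, Vector)]): (Vector, Array[Double]) = {
  // The alias documented by this commit: same as the full overload,
  // but with convergenceTol left at its default of 0.001.
  GradientDescent.runMiniBatchSGD(
    trainData,               // RDD[(label, features)]
    new LogisticGradient(),  // gradient of the logistic loss
    new SquaredL2Updater(),  // L2-regularized weight updater
    1.0,                     // stepSize
    100,                     // numIterations
    0.01,                    // regParam
    1.0,                     // miniBatchFraction: 1.0 = full batch each step
    Vectors.zeros(3))        // initialWeights (3 features assumed)
}

Calling the longer overload with convergenceTol = 0.001 explicitly would behave
identically; the alias exists so callers can omit that argument.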

