http://git-wip-us.apache.org/repos/asf/spark-website/blob/d2bcf185/site/docs/2.1.0/mllib-linear-methods.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.0/mllib-linear-methods.html 
b/site/docs/2.1.0/mllib-linear-methods.html
index 46a1a25..428d778 100644
--- a/site/docs/2.1.0/mllib-linear-methods.html
+++ b/site/docs/2.1.0/mllib-linear-methods.html
@@ -307,23 +307,23 @@
                     
 
                     <ul id="markdown-toc">
-  <li><a href="#mathematical-formulation" 
id="markdown-toc-mathematical-formulation">Mathematical formulation</a>    <ul>
-      <li><a href="#loss-functions" id="markdown-toc-loss-functions">Loss 
functions</a></li>
-      <li><a href="#regularizers" 
id="markdown-toc-regularizers">Regularizers</a></li>
-      <li><a href="#optimization" 
id="markdown-toc-optimization">Optimization</a></li>
+  <li><a href="#mathematical-formulation">Mathematical formulation</a>    <ul>
+      <li><a href="#loss-functions">Loss functions</a></li>
+      <li><a href="#regularizers">Regularizers</a></li>
+      <li><a href="#optimization">Optimization</a></li>
     </ul>
   </li>
-  <li><a href="#classification" 
id="markdown-toc-classification">Classification</a>    <ul>
-      <li><a href="#linear-support-vector-machines-svms" 
id="markdown-toc-linear-support-vector-machines-svms">Linear Support Vector 
Machines (SVMs)</a></li>
-      <li><a href="#logistic-regression" 
id="markdown-toc-logistic-regression">Logistic regression</a></li>
+  <li><a href="#classification">Classification</a>    <ul>
+      <li><a href="#linear-support-vector-machines-svms">Linear Support Vector 
Machines (SVMs)</a></li>
+      <li><a href="#logistic-regression">Logistic regression</a></li>
     </ul>
   </li>
-  <li><a href="#regression" id="markdown-toc-regression">Regression</a>    <ul>
-      <li><a href="#linear-least-squares-lasso-and-ridge-regression" 
id="markdown-toc-linear-least-squares-lasso-and-ridge-regression">Linear least 
squares, Lasso, and ridge regression</a></li>
-      <li><a href="#streaming-linear-regression" 
id="markdown-toc-streaming-linear-regression">Streaming linear 
regression</a></li>
+  <li><a href="#regression">Regression</a>    <ul>
+      <li><a href="#linear-least-squares-lasso-and-ridge-regression">Linear 
least squares, Lasso, and ridge regression</a></li>
+      <li><a href="#streaming-linear-regression">Streaming linear 
regression</a></li>
     </ul>
   </li>
-  <li><a href="#implementation-developer" 
id="markdown-toc-implementation-developer">Implementation (developer)</a></li>
+  <li><a href="#implementation-developer">Implementation (developer)</a></li>
 </ul>
 
 <p><code>\[
@@ -489,7 +489,7 @@ error.</p>
 
     <p>Refer to the <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.SVMWithSGD"><code>SVMWithSGD</code>
 Scala docs</a> and <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.SVMModel"><code>SVMModel</code>
 Scala docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">SVMModel</span><span class="o">,</span> 
<span class="nc">SVMWithSGD</span><span class="o">}</span>
+    <div class="highlight"><pre><span></span><span class="k">import</span> 
<span class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">SVMModel</span><span class="o">,</span> 
<span class="nc">SVMWithSGD</span><span class="o">}</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.util.MLUtils</span>
 
@@ -534,14 +534,14 @@ this way as well. For example, the following code 
produces an L1 regularized
 variant of SVMs with regularization parameter set to 0.1, and runs the training
 algorithm for 200 iterations.</p>
 
-    <div class="highlight"><pre><code class="language-scala" 
data-lang="scala"><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.optimization.L1Updater</span>
+    <figure class="highlight"><pre><code class="language-scala" 
data-lang="scala"><span></span><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.optimization.L1Updater</span>
 
 <span class="k">val</span> <span class="n">svmAlg</span> <span 
class="k">=</span> <span class="k">new</span> <span 
class="nc">SVMWithSGD</span><span class="o">()</span>
 <span class="n">svmAlg</span><span class="o">.</span><span 
class="n">optimizer</span>
   <span class="o">.</span><span class="n">setNumIterations</span><span 
class="o">(</span><span class="mi">200</span><span class="o">)</span>
   <span class="o">.</span><span class="n">setRegParam</span><span 
class="o">(</span><span class="mf">0.1</span><span class="o">)</span>
   <span class="o">.</span><span class="n">setUpdater</span><span 
class="o">(</span><span class="k">new</span> <span 
class="n">L1Updater</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">modelL1</span> <span 
class="k">=</span> <span class="n">svmAlg</span><span class="o">.</span><span 
class="n">run</span><span class="o">(</span><span 
class="n">training</span><span class="o">)</span></code></pre></div>
+<span class="k">val</span> <span class="n">modelL1</span> <span 
class="k">=</span> <span class="n">svmAlg</span><span class="o">.</span><span 
class="n">run</span><span class="o">(</span><span 
class="n">training</span><span class="o">)</span></code></pre></figure>
 
   </div>
 
@@ -554,7 +554,7 @@ that is equivalent to the provided example in Scala is 
given below:</p>
 
     <p>Refer to the <a 
href="api/java/org/apache/spark/mllib/classification/SVMWithSGD.html"><code>SVMWithSGD</code>
 Java docs</a> and <a 
href="api/java/org/apache/spark/mllib/classification/SVMModel.html"><code>SVMModel</code>
 Java docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">scala.Tuple2</span><span class="o">;</span>
+    <div class="highlight"><pre><span></span><span class="kn">import</span> 
<span class="nn">scala.Tuple2</span><span class="o">;</span>
 
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.JavaRDD</span><span class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.function.Function</span><span 
class="o">;</span>
@@ -591,7 +591,7 @@ that is equivalent to the provided example in Scala is 
given below:</p>
 
 <span class="c1">// Get evaluation metrics.</span>
 <span class="n">BinaryClassificationMetrics</span> <span 
class="n">metrics</span> <span class="o">=</span>
-  <span class="k">new</span> <span 
class="nf">BinaryClassificationMetrics</span><span class="o">(</span><span 
class="n">JavaRDD</span><span class="o">.</span><span 
class="na">toRDD</span><span class="o">(</span><span 
class="n">scoreAndLabels</span><span class="o">));</span>
+  <span class="k">new</span> <span 
class="n">BinaryClassificationMetrics</span><span class="o">(</span><span 
class="n">JavaRDD</span><span class="o">.</span><span 
class="na">toRDD</span><span class="o">(</span><span 
class="n">scoreAndLabels</span><span class="o">));</span>
 <span class="kt">double</span> <span class="n">auROC</span> <span 
class="o">=</span> <span class="n">metrics</span><span class="o">.</span><span 
class="na">areaUnderROC</span><span class="o">();</span>
 
 <span class="n">System</span><span class="o">.</span><span 
class="na">out</span><span class="o">.</span><span 
class="na">println</span><span class="o">(</span><span class="s">&quot;Area 
under ROC = &quot;</span> <span class="o">+</span> <span 
class="n">auROC</span><span class="o">);</span>
@@ -610,14 +610,14 @@ this way as well. For example, the following code 
produces an L1 regularized
 variant of SVMs with regularization parameter set to 0.1, and runs the training
 algorithm for 200 iterations.</p>
 
-    <div class="highlight"><pre><code class="language-java" 
data-lang="java"><span class="kn">import</span> <span 
class="nn">org.apache.spark.mllib.optimization.L1Updater</span><span 
class="o">;</span>
+    <figure class="highlight"><pre><code class="language-java" 
data-lang="java"><span></span><span class="kn">import</span> <span 
class="nn">org.apache.spark.mllib.optimization.L1Updater</span><span 
class="o">;</span>
 
-<span class="n">SVMWithSGD</span> <span class="n">svmAlg</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="nf">SVMWithSGD</span><span class="o">();</span>
+<span class="n">SVMWithSGD</span> <span class="n">svmAlg</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="n">SVMWithSGD</span><span class="o">();</span>
 <span class="n">svmAlg</span><span class="o">.</span><span 
class="na">optimizer</span><span class="o">()</span>
   <span class="o">.</span><span class="na">setNumIterations</span><span 
class="o">(</span><span class="mi">200</span><span class="o">)</span>
   <span class="o">.</span><span class="na">setRegParam</span><span 
class="o">(</span><span class="mf">0.1</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">setUpdater</span><span 
class="o">(</span><span class="k">new</span> <span 
class="nf">L1Updater</span><span class="o">());</span>
-<span class="kd">final</span> <span class="n">SVMModel</span> <span 
class="n">modelL1</span> <span class="o">=</span> <span 
class="n">svmAlg</span><span class="o">.</span><span class="na">run</span><span 
class="o">(</span><span class="n">training</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span></code></pre></div>
+  <span class="o">.</span><span class="na">setUpdater</span><span 
class="o">(</span><span class="k">new</span> <span 
class="n">L1Updater</span><span class="o">());</span>
+<span class="kd">final</span> <span class="n">SVMModel</span> <span 
class="n">modelL1</span> <span class="o">=</span> <span 
class="n">svmAlg</span><span class="o">.</span><span class="na">run</span><span 
class="o">(</span><span class="n">training</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span></code></pre></figure>
 
     <p>In order to run the above application, follow the instructions
 provided in the <a 
href="quick-start.html#self-contained-applications">Self-Contained
@@ -632,28 +632,28 @@ and make predictions with the resulting model to compute 
the training error.</p>
 
     <p>Refer to the <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.SVMWithSGD"><code>SVMWithSGD</code>
 Python docs</a> and <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.SVMModel"><code>SVMModel</code>
 Python docs</a> for more details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">from</span> <span 
class="nn">pyspark.mllib.classification</span> <span class="kn">import</span> 
<span class="n">SVMWithSGD</span><span class="p">,</span> <span 
class="n">SVMModel</span>
+    <div class="highlight"><pre><span></span><span class="kn">from</span> 
<span class="nn">pyspark.mllib.classification</span> <span 
class="kn">import</span> <span class="n">SVMWithSGD</span><span 
class="p">,</span> <span class="n">SVMModel</span>
 <span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> 
<span class="kn">import</span> <span class="n">LabeledPoint</span>
 
-<span class="c"># Load and parse the data</span>
+<span class="c1"># Load and parse the data</span>
 <span class="k">def</span> <span class="nf">parsePoint</span><span 
class="p">(</span><span class="n">line</span><span class="p">):</span>
-    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">split</span><span class="p">(</span><span 
class="s">&#39; &#39;</span><span class="p">)]</span>
+    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">split</span><span class="p">(</span><span 
class="s1">&#39; &#39;</span><span class="p">)]</span>
     <span class="k">return</span> <span class="n">LabeledPoint</span><span 
class="p">(</span><span class="n">values</span><span class="p">[</span><span 
class="mi">0</span><span class="p">],</span> <span class="n">values</span><span 
class="p">[</span><span class="mi">1</span><span class="p">:])</span>
 
-<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s">&quot;data/mllib/sample_svm_data.txt&quot;</span><span 
class="p">)</span>
+<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s2">&quot;data/mllib/sample_svm_data.txt&quot;</span><span 
class="p">)</span>
 <span class="n">parsedData</span> <span class="o">=</span> <span 
class="n">data</span><span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="n">parsePoint</span><span class="p">)</span>
 
-<span class="c"># Build the model</span>
+<span class="c1"># Build the model</span>
 <span class="n">model</span> <span class="o">=</span> <span 
class="n">SVMWithSGD</span><span class="o">.</span><span 
class="n">train</span><span class="p">(</span><span 
class="n">parsedData</span><span class="p">,</span> <span 
class="n">iterations</span><span class="o">=</span><span 
class="mi">100</span><span class="p">)</span>
 
-<span class="c"># Evaluating the model on training data</span>
+<span class="c1"># Evaluating the model on training data</span>
 <span class="n">labelsAndPreds</span> <span class="o">=</span> <span 
class="n">parsedData</span><span class="o">.</span><span 
class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span 
class="n">p</span><span class="p">:</span> <span class="p">(</span><span 
class="n">p</span><span class="o">.</span><span class="n">label</span><span 
class="p">,</span> <span class="n">model</span><span class="o">.</span><span 
class="n">predict</span><span class="p">(</span><span class="n">p</span><span 
class="o">.</span><span class="n">features</span><span class="p">)))</span>
 <span class="n">trainErr</span> <span class="o">=</span> <span 
class="n">labelsAndPreds</span><span class="o">.</span><span 
class="n">filter</span><span class="p">(</span><span class="k">lambda</span> 
<span class="p">(</span><span class="n">v</span><span class="p">,</span> <span 
class="n">p</span><span class="p">):</span> <span class="n">v</span> <span 
class="o">!=</span> <span class="n">p</span><span class="p">)</span><span 
class="o">.</span><span class="n">count</span><span class="p">()</span> <span 
class="o">/</span> <span class="nb">float</span><span class="p">(</span><span 
class="n">parsedData</span><span class="o">.</span><span 
class="n">count</span><span class="p">())</span>
-<span class="k">print</span><span class="p">(</span><span 
class="s">&quot;Training Error = &quot;</span> <span class="o">+</span> <span 
class="nb">str</span><span class="p">(</span><span 
class="n">trainErr</span><span class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span 
class="s2">&quot;Training Error = &quot;</span> <span class="o">+</span> <span 
class="nb">str</span><span class="p">(</span><span 
class="n">trainErr</span><span class="p">))</span>
 
-<span class="c"># Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s">&quot;target/tmp/pythonSVMWithSGDModel&quot;</span><span 
class="p">)</span>
-<span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">SVMModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s">&quot;target/tmp/pythonSVMWithSGDModel&quot;</span><span 
class="p">)</span>
+<span class="c1"># Save and load model</span>
+<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s2">&quot;target/tmp/pythonSVMWithSGDModel&quot;</span><span 
class="p">)</span>
+<span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">SVMModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s2">&quot;target/tmp/pythonSVMWithSGDModel&quot;</span><span 
class="p">)</span>
 </pre></div>
     <div><small>Find full example code at 
"examples/src/main/python/mllib/svm_with_sgd_example.py" in the Spark 
repo.</small></div>
   </div>
@@ -713,7 +713,7 @@ Then the model is evaluated against the test dataset and 
saved to disk.</p>
 
     <p>Refer to the <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS"><code>LogisticRegressionWithLBFGS</code>
 Scala docs</a> and <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionModel"><code>LogisticRegressionModel</code>
 Scala docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">LogisticRegressionModel</span><span 
class="o">,</span> <span class="nc">LogisticRegressionWithLBFGS</span><span 
class="o">}</span>
+    <div class="highlight"><pre><span></span><span class="k">import</span> 
<span class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">LogisticRegressionModel</span><span 
class="o">,</span> <span class="nc">LogisticRegressionWithLBFGS</span><span 
class="o">}</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.evaluation.MulticlassMetrics</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.LabeledPoint</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.util.MLUtils</span>
@@ -740,7 +740,7 @@ Then the model is evaluated against the test dataset and 
saved to disk.</p>
 <span class="c1">// Get evaluation metrics.</span>
 <span class="k">val</span> <span class="n">metrics</span> <span 
class="k">=</span> <span class="k">new</span> <span 
class="nc">MulticlassMetrics</span><span class="o">(</span><span 
class="n">predictionAndLabels</span><span class="o">)</span>
 <span class="k">val</span> <span class="n">accuracy</span> <span 
class="k">=</span> <span class="n">metrics</span><span class="o">.</span><span 
class="n">accuracy</span>
-<span class="n">println</span><span class="o">(</span><span 
class="n">s</span><span class="s">&quot;Accuracy = $accuracy&quot;</span><span 
class="o">)</span>
+<span class="n">println</span><span class="o">(</span><span 
class="s">s&quot;Accuracy = </span><span class="si">$accuracy</span><span 
class="s">&quot;</span><span class="o">)</span>
 
 <span class="c1">// Save and load model</span>
 <span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="o">(</span><span class="n">sc</span><span 
class="o">,</span> <span 
class="s">&quot;target/tmp/scalaLogisticRegressionWithLBFGSModel&quot;</span><span
 class="o">)</span>
@@ -760,7 +760,7 @@ Then the model is evaluated against the test dataset and 
saved to disk.</p>
 
     <p>Refer to the <a 
href="api/java/org/apache/spark/mllib/classification/LogisticRegressionWithLBFGS.html"><code>LogisticRegressionWithLBFGS</code>
 Java docs</a> and <a 
href="api/java/org/apache/spark/mllib/classification/LogisticRegressionModel.html"><code>LogisticRegressionModel</code>
 Java docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">scala.Tuple2</span><span class="o">;</span>
+    <div class="highlight"><pre><span></span><span class="kn">import</span> 
<span class="nn">scala.Tuple2</span><span class="o">;</span>
 
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.JavaRDD</span><span class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.function.Function</span><span 
class="o">;</span>
@@ -779,7 +779,7 @@ Then the model is evaluated against the test dataset and 
saved to disk.</p>
 <span class="n">JavaRDD</span><span class="o">&lt;</span><span 
class="n">LabeledPoint</span><span class="o">&gt;</span> <span 
class="n">test</span> <span class="o">=</span> <span 
class="n">splits</span><span class="o">[</span><span class="mi">1</span><span 
class="o">];</span>
 
 <span class="c1">// Run training algorithm to build the model.</span>
-<span class="kd">final</span> <span class="n">LogisticRegressionModel</span> 
<span class="n">model</span> <span class="o">=</span> <span 
class="k">new</span> <span class="nf">LogisticRegressionWithLBFGS</span><span 
class="o">()</span>
+<span class="kd">final</span> <span class="n">LogisticRegressionModel</span> 
<span class="n">model</span> <span class="o">=</span> <span 
class="k">new</span> <span class="n">LogisticRegressionWithLBFGS</span><span 
class="o">()</span>
   <span class="o">.</span><span class="na">setNumClasses</span><span 
class="o">(</span><span class="mi">10</span><span class="o">)</span>
   <span class="o">.</span><span class="na">run</span><span 
class="o">(</span><span class="n">training</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span>
 
@@ -794,7 +794,7 @@ Then the model is evaluated against the test dataset and 
saved to disk.</p>
 <span class="o">);</span>
 
 <span class="c1">// Get evaluation metrics.</span>
-<span class="n">MulticlassMetrics</span> <span class="n">metrics</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="nf">MulticlassMetrics</span><span class="o">(</span><span 
class="n">predictionAndLabels</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span>
+<span class="n">MulticlassMetrics</span> <span class="n">metrics</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="n">MulticlassMetrics</span><span class="o">(</span><span 
class="n">predictionAndLabels</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span>
 <span class="kt">double</span> <span class="n">accuracy</span> <span 
class="o">=</span> <span class="n">metrics</span><span class="o">.</span><span 
class="na">accuracy</span><span class="o">();</span>
 <span class="n">System</span><span class="o">.</span><span 
class="na">out</span><span class="o">.</span><span 
class="na">println</span><span class="o">(</span><span class="s">&quot;Accuracy 
= &quot;</span> <span class="o">+</span> <span class="n">accuracy</span><span 
class="o">);</span>
 
@@ -815,29 +815,29 @@ will in the future.</p>
 
     <p>Refer to the <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionWithLBFGS"><code>LogisticRegressionWithLBFGS</code>
 Python docs</a> and <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionModel"><code>LogisticRegressionModel</code>
 Python docs</a> for more details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">from</span> <span 
class="nn">pyspark.mllib.classification</span> <span class="kn">import</span> 
<span class="n">LogisticRegressionWithLBFGS</span><span class="p">,</span> 
<span class="n">LogisticRegressionModel</span>
+    <div class="highlight"><pre><span></span><span class="kn">from</span> 
<span class="nn">pyspark.mllib.classification</span> <span 
class="kn">import</span> <span 
class="n">LogisticRegressionWithLBFGS</span><span class="p">,</span> <span 
class="n">LogisticRegressionModel</span>
 <span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> 
<span class="kn">import</span> <span class="n">LabeledPoint</span>
 
-<span class="c"># Load and parse the data</span>
+<span class="c1"># Load and parse the data</span>
 <span class="k">def</span> <span class="nf">parsePoint</span><span 
class="p">(</span><span class="n">line</span><span class="p">):</span>
-    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">split</span><span class="p">(</span><span 
class="s">&#39; &#39;</span><span class="p">)]</span>
+    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">split</span><span class="p">(</span><span 
class="s1">&#39; &#39;</span><span class="p">)]</span>
     <span class="k">return</span> <span class="n">LabeledPoint</span><span 
class="p">(</span><span class="n">values</span><span class="p">[</span><span 
class="mi">0</span><span class="p">],</span> <span class="n">values</span><span 
class="p">[</span><span class="mi">1</span><span class="p">:])</span>
 
-<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s">&quot;data/mllib/sample_svm_data.txt&quot;</span><span 
class="p">)</span>
+<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s2">&quot;data/mllib/sample_svm_data.txt&quot;</span><span 
class="p">)</span>
 <span class="n">parsedData</span> <span class="o">=</span> <span 
class="n">data</span><span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="n">parsePoint</span><span class="p">)</span>
 
-<span class="c"># Build the model</span>
+<span class="c1"># Build the model</span>
 <span class="n">model</span> <span class="o">=</span> <span 
class="n">LogisticRegressionWithLBFGS</span><span class="o">.</span><span 
class="n">train</span><span class="p">(</span><span 
class="n">parsedData</span><span class="p">)</span>
 
-<span class="c"># Evaluating the model on training data</span>
+<span class="c1"># Evaluating the model on training data</span>
 <span class="n">labelsAndPreds</span> <span class="o">=</span> <span 
class="n">parsedData</span><span class="o">.</span><span 
class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span 
class="n">p</span><span class="p">:</span> <span class="p">(</span><span 
class="n">p</span><span class="o">.</span><span class="n">label</span><span 
class="p">,</span> <span class="n">model</span><span class="o">.</span><span 
class="n">predict</span><span class="p">(</span><span class="n">p</span><span 
class="o">.</span><span class="n">features</span><span class="p">)))</span>
 <span class="n">trainErr</span> <span class="o">=</span> <span 
class="n">labelsAndPreds</span><span class="o">.</span><span 
class="n">filter</span><span class="p">(</span><span class="k">lambda</span> 
<span class="p">(</span><span class="n">v</span><span class="p">,</span> <span 
class="n">p</span><span class="p">):</span> <span class="n">v</span> <span 
class="o">!=</span> <span class="n">p</span><span class="p">)</span><span 
class="o">.</span><span class="n">count</span><span class="p">()</span> <span 
class="o">/</span> <span class="nb">float</span><span class="p">(</span><span 
class="n">parsedData</span><span class="o">.</span><span 
class="n">count</span><span class="p">())</span>
-<span class="k">print</span><span class="p">(</span><span 
class="s">&quot;Training Error = &quot;</span> <span class="o">+</span> <span 
class="nb">str</span><span class="p">(</span><span 
class="n">trainErr</span><span class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span 
class="s2">&quot;Training Error = &quot;</span> <span class="o">+</span> <span 
class="nb">str</span><span class="p">(</span><span 
class="n">trainErr</span><span class="p">))</span>
 
-<span class="c"># Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s">&quot;target/tmp/pythonLogisticRegressionWithLBFGSModel&quot;</span><span
 class="p">)</span>
+<span class="c1"># Save and load model</span>
+<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s2">&quot;target/tmp/pythonLogisticRegressionWithLBFGSModel&quot;</span><span
 class="p">)</span>
 <span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">LogisticRegressionModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span>
-                                         <span 
class="s">&quot;target/tmp/pythonLogisticRegressionWithLBFGSModel&quot;</span><span
 class="p">)</span>
+                                         <span 
class="s2">&quot;target/tmp/pythonLogisticRegressionWithLBFGSModel&quot;</span><span
 class="p">)</span>
 </pre></div>
     <div><small>Find full example code at 
"examples/src/main/python/mllib/logistic_regression_with_lbfgs_example.py" in 
the Spark repo.</small></div>
   </div>
@@ -874,7 +874,7 @@ values. We compute the mean squared error at the end to 
evaluate
 
     <p>Refer to the <a 
href="api/scala/index.html#org.apache.spark.mllib.regression.LinearRegressionWithSGD"><code>LinearRegressionWithSGD</code>
 Scala docs</a> and <a 
href="api/scala/index.html#org.apache.spark.mllib.regression.LinearRegressionModel"><code>LinearRegressionModel</code>
 Scala docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.linalg.Vectors</span>
+    <div class="highlight"><pre><span></span><span class="k">import</span> 
<span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.LabeledPoint</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.LinearRegressionModel</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.LinearRegressionWithSGD</span>
@@ -919,7 +919,7 @@ the Scala snippet provided, is presented below:</p>
 
     <p>Refer to the <a 
href="api/java/org/apache/spark/mllib/regression/LinearRegressionWithSGD.html"><code>LinearRegressionWithSGD</code>
 Java docs</a> and <a 
href="api/java/org/apache/spark/mllib/regression/LinearRegressionModel.html"><code>LinearRegressionModel</code>
 Java docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">scala.Tuple2</span><span class="o">;</span>
+    <div class="highlight"><pre><span></span><span class="kn">import</span> 
<span class="nn">scala.Tuple2</span><span class="o">;</span>
 
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.JavaDoubleRDD</span><span 
class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.JavaRDD</span><span class="o">;</span>
@@ -941,7 +941,7 @@ the Scala snippet provided, is presented below:</p>
       <span class="k">for</span> <span class="o">(</span><span 
class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span 
class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span 
class="o">&lt;</span> <span class="n">features</span><span 
class="o">.</span><span class="na">length</span> <span class="o">-</span> <span 
class="mi">1</span><span class="o">;</span> <span class="n">i</span><span 
class="o">++)</span> <span class="o">{</span>
         <span class="n">v</span><span class="o">[</span><span 
class="n">i</span><span class="o">]</span> <span class="o">=</span> <span 
class="n">Double</span><span class="o">.</span><span 
class="na">parseDouble</span><span class="o">(</span><span 
class="n">features</span><span class="o">[</span><span class="n">i</span><span 
class="o">]);</span>
       <span class="o">}</span>
-      <span class="k">return</span> <span class="k">new</span> <span 
class="nf">LabeledPoint</span><span class="o">(</span><span 
class="n">Double</span><span class="o">.</span><span 
class="na">parseDouble</span><span class="o">(</span><span 
class="n">parts</span><span class="o">[</span><span class="mi">0</span><span 
class="o">]),</span> <span class="n">Vectors</span><span 
class="o">.</span><span class="na">dense</span><span class="o">(</span><span 
class="n">v</span><span class="o">));</span>
+      <span class="k">return</span> <span class="k">new</span> <span 
class="n">LabeledPoint</span><span class="o">(</span><span 
class="n">Double</span><span class="o">.</span><span 
class="na">parseDouble</span><span class="o">(</span><span 
class="n">parts</span><span class="o">[</span><span class="mi">0</span><span 
class="o">]),</span> <span class="n">Vectors</span><span 
class="o">.</span><span class="na">dense</span><span class="o">(</span><span 
class="n">v</span><span class="o">));</span>
     <span class="o">}</span>
   <span class="o">}</span>
 <span class="o">);</span>
@@ -962,7 +962,7 @@ the Scala snippet provided, is presented below:</p>
     <span class="o">}</span>
   <span class="o">}</span>
 <span class="o">);</span>
-<span class="kt">double</span> <span class="n">MSE</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="nf">JavaDoubleRDD</span><span class="o">(</span><span 
class="n">valuesAndPreds</span><span class="o">.</span><span 
class="na">map</span><span class="o">(</span>
+<span class="kt">double</span> <span class="n">MSE</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="n">JavaDoubleRDD</span><span class="o">(</span><span 
class="n">valuesAndPreds</span><span class="o">.</span><span 
class="na">map</span><span class="o">(</span>
   <span class="k">new</span> <span class="n">Function</span><span 
class="o">&lt;</span><span class="n">Tuple2</span><span 
class="o">&lt;</span><span class="n">Double</span><span class="o">,</span> 
<span class="n">Double</span><span class="o">&gt;,</span> <span 
class="n">Object</span><span class="o">&gt;()</span> <span class="o">{</span>
     <span class="kd">public</span> <span class="n">Object</span> <span 
class="nf">call</span><span class="o">(</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">Double</span><span class="o">,</span> <span 
class="n">Double</span><span class="o">&gt;</span> <span 
class="n">pair</span><span class="o">)</span> <span class="o">{</span>
       <span class="k">return</span> <span class="n">Math</span><span 
class="o">.</span><span class="na">pow</span><span class="o">(</span><span 
class="n">pair</span><span class="o">.</span><span class="na">_1</span><span 
class="o">()</span> <span class="o">-</span> <span class="n">pair</span><span 
class="o">.</span><span class="na">_2</span><span class="o">(),</span> <span 
class="mf">2.0</span><span class="o">);</span>
@@ -989,29 +989,29 @@ values. We compute the mean squared error at the end to 
evaluate
 
     <p>Refer to the <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.regression.LinearRegressionWithSGD"><code>LinearRegressionWithSGD</code>
 Python docs</a> and <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.regression.LinearRegressionModel"><code>LinearRegressionModel</code>
 Python docs</a> for more details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">from</span> <span 
class="nn">pyspark.mllib.regression</span> <span class="kn">import</span> <span 
class="n">LabeledPoint</span><span class="p">,</span> <span 
class="n">LinearRegressionWithSGD</span><span class="p">,</span> <span 
class="n">LinearRegressionModel</span>
+    <div class="highlight"><pre><span></span><span class="kn">from</span> 
<span class="nn">pyspark.mllib.regression</span> <span class="kn">import</span> 
<span class="n">LabeledPoint</span><span class="p">,</span> <span 
class="n">LinearRegressionWithSGD</span><span class="p">,</span> <span 
class="n">LinearRegressionModel</span>
 
-<span class="c"># Load and parse the data</span>
+<span class="c1"># Load and parse the data</span>
 <span class="k">def</span> <span class="nf">parsePoint</span><span 
class="p">(</span><span class="n">line</span><span class="p">):</span>
-    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">replace</span><span class="p">(</span><span 
class="s">&#39;,&#39;</span><span class="p">,</span> <span class="s">&#39; 
&#39;</span><span class="p">)</span><span class="o">.</span><span 
class="n">split</span><span class="p">(</span><span class="s">&#39; 
&#39;</span><span class="p">)]</span>
+    <span class="n">values</span> <span class="o">=</span> <span 
class="p">[</span><span class="nb">float</span><span class="p">(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">for</span> <span 
class="n">x</span> <span class="ow">in</span> <span class="n">line</span><span 
class="o">.</span><span class="n">replace</span><span class="p">(</span><span 
class="s1">&#39;,&#39;</span><span class="p">,</span> <span class="s1">&#39; 
&#39;</span><span class="p">)</span><span class="o">.</span><span 
class="n">split</span><span class="p">(</span><span class="s1">&#39; 
&#39;</span><span class="p">)]</span>
     <span class="k">return</span> <span class="n">LabeledPoint</span><span 
class="p">(</span><span class="n">values</span><span class="p">[</span><span 
class="mi">0</span><span class="p">],</span> <span class="n">values</span><span 
class="p">[</span><span class="mi">1</span><span class="p">:])</span>
 
-<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s">&quot;data/mllib/ridge-data/lpsa.data&quot;</span><span 
class="p">)</span>
+<span class="n">data</span> <span class="o">=</span> <span 
class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span 
class="p">(</span><span 
class="s2">&quot;data/mllib/ridge-data/lpsa.data&quot;</span><span 
class="p">)</span>
 <span class="n">parsedData</span> <span class="o">=</span> <span 
class="n">data</span><span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="n">parsePoint</span><span class="p">)</span>
 
-<span class="c"># Build the model</span>
+<span class="c1"># Build the model</span>
 <span class="n">model</span> <span class="o">=</span> <span 
class="n">LinearRegressionWithSGD</span><span class="o">.</span><span 
class="n">train</span><span class="p">(</span><span 
class="n">parsedData</span><span class="p">,</span> <span 
class="n">iterations</span><span class="o">=</span><span 
class="mi">100</span><span class="p">,</span> <span class="n">step</span><span 
class="o">=</span><span class="mf">0.00000001</span><span class="p">)</span>
 
-<span class="c"># Evaluate the model on training data</span>
+<span class="c1"># Evaluate the model on training data</span>
 <span class="n">valuesAndPreds</span> <span class="o">=</span> <span 
class="n">parsedData</span><span class="o">.</span><span 
class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span 
class="n">p</span><span class="p">:</span> <span class="p">(</span><span 
class="n">p</span><span class="o">.</span><span class="n">label</span><span 
class="p">,</span> <span class="n">model</span><span class="o">.</span><span 
class="n">predict</span><span class="p">(</span><span class="n">p</span><span 
class="o">.</span><span class="n">features</span><span class="p">)))</span>
 <span class="n">MSE</span> <span class="o">=</span> <span 
class="n">valuesAndPreds</span> \
     <span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="k">lambda</span> <span class="p">(</span><span 
class="n">v</span><span class="p">,</span> <span class="n">p</span><span 
class="p">):</span> <span class="p">(</span><span class="n">v</span> <span 
class="o">-</span> <span class="n">p</span><span class="p">)</span><span 
class="o">**</span><span class="mi">2</span><span class="p">)</span> \
     <span class="o">.</span><span class="n">reduce</span><span 
class="p">(</span><span class="k">lambda</span> <span class="n">x</span><span 
class="p">,</span> <span class="n">y</span><span class="p">:</span> <span 
class="n">x</span> <span class="o">+</span> <span class="n">y</span><span 
class="p">)</span> <span class="o">/</span> <span 
class="n">valuesAndPreds</span><span class="o">.</span><span 
class="n">count</span><span class="p">()</span>
-<span class="k">print</span><span class="p">(</span><span class="s">&quot;Mean 
Squared Error = &quot;</span> <span class="o">+</span> <span 
class="nb">str</span><span class="p">(</span><span class="n">MSE</span><span 
class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span 
class="s2">&quot;Mean Squared Error = &quot;</span> <span class="o">+</span> 
<span class="nb">str</span><span class="p">(</span><span 
class="n">MSE</span><span class="p">))</span>
 
-<span class="c"># Save and load model</span>
-<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s">&quot;target/tmp/pythonLinearRegressionWithSGDModel&quot;</span><span 
class="p">)</span>
-<span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">LinearRegressionModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s">&quot;target/tmp/pythonLinearRegressionWithSGDModel&quot;</span><span 
class="p">)</span>
+<span class="c1"># Save and load model</span>
+<span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s2">&quot;target/tmp/pythonLinearRegressionWithSGDModel&quot;</span><span
 class="p">)</span>
+<span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">LinearRegressionModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span 
class="s2">&quot;target/tmp/pythonLinearRegressionWithSGDModel&quot;</span><span
 class="p">)</span>
 </pre></div>
     <div><small>Find full example code at 
"examples/src/main/python/mllib/linear_regression_with_sgd_example.py" in the 
Spark repo.</small></div>
   </div>
@@ -1059,25 +1059,24 @@ the model will update. Anytime a text file is placed in 
<code>args(1)</code> you
 As you feed more data to the training directory, the predictions
 will get better!</p>
 
-    <p>Here is a complete example:</p>
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.linalg.Vectors</span>
+    <p>Here is a complete example:
+&lt;div class="highlight"&gt;&lt;pre&gt;<span></span><span 
class="k">import</span> <span 
class="nn">org.apache.spark.mllib.linalg.Vectors</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.LabeledPoint</span>
-<span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD</span>
+<span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD</span></p>
 
-<span class="k">val</span> <span class="n">trainingData</span> <span 
class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="o">(</span><span 
class="n">args</span><span class="o">(</span><span class="mi">0</span><span 
class="o">)).</span><span class="n">map</span><span class="o">(</span><span 
class="nc">LabeledPoint</span><span class="o">.</span><span 
class="n">parse</span><span class="o">).</span><span 
class="n">cache</span><span class="o">()</span>
-<span class="k">val</span> <span class="n">testData</span> <span 
class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="o">(</span><span 
class="n">args</span><span class="o">(</span><span class="mi">1</span><span 
class="o">)).</span><span class="n">map</span><span class="o">(</span><span 
class="nc">LabeledPoint</span><span class="o">.</span><span 
class="n">parse</span><span class="o">)</span>
+    <p><span class="k">val</span> <span class="n">trainingData</span> <span 
class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="o">(</span><span 
class="n">args</span><span class="o">(</span><span class="mi">0</span><span 
class="o">)).</span><span class="n">map</span><span class="o">(</span><span 
class="nc">LabeledPoint</span><span class="o">.</span><span 
class="n">parse</span><span class="o">).</span><span 
class="n">cache</span><span class="o">()</span>
+<span class="k">val</span> <span class="n">testData</span> <span 
class="k">=</span> <span class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="o">(</span><span 
class="n">args</span><span class="o">(</span><span class="mi">1</span><span 
class="o">)).</span><span class="n">map</span><span class="o">(</span><span 
class="nc">LabeledPoint</span><span class="o">.</span><span 
class="n">parse</span><span class="o">)</span></p>
 
-<span class="k">val</span> <span class="n">numFeatures</span> <span 
class="k">=</span> <span class="mi">3</span>
+    <p><span class="k">val</span> <span class="n">numFeatures</span> <span 
class="k">=</span> <span class="mi">3</span>
 <span class="k">val</span> <span class="n">model</span> <span 
class="k">=</span> <span class="k">new</span> <span 
class="nc">StreamingLinearRegressionWithSGD</span><span class="o">()</span>
-  <span class="o">.</span><span class="n">setInitialWeights</span><span 
class="o">(</span><span class="nc">Vectors</span><span class="o">.</span><span 
class="n">zeros</span><span class="o">(</span><span 
class="n">numFeatures</span><span class="o">))</span>
+  <span class="o">.</span><span class="n">setInitialWeights</span><span 
class="o">(</span><span class="nc">Vectors</span><span class="o">.</span><span 
class="n">zeros</span><span class="o">(</span><span 
class="n">numFeatures</span><span class="o">))</span></p>
 
-<span class="n">model</span><span class="o">.</span><span 
class="n">trainOn</span><span class="o">(</span><span 
class="n">trainingData</span><span class="o">)</span>
-<span class="n">model</span><span class="o">.</span><span 
class="n">predictOnValues</span><span class="o">(</span><span 
class="n">testData</span><span class="o">.</span><span 
class="n">map</span><span class="o">(</span><span class="n">lp</span> <span 
class="k">=&gt;</span> <span class="o">(</span><span class="n">lp</span><span 
class="o">.</span><span class="n">label</span><span class="o">,</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">features</span><span 
class="o">))).</span><span class="n">print</span><span class="o">()</span>
+    <p><span class="n">model</span><span class="o">.</span><span 
class="n">trainOn</span><span class="o">(</span><span 
class="n">trainingData</span><span class="o">)</span>
+<span class="n">model</span><span class="o">.</span><span 
class="n">predictOnValues</span><span class="o">(</span><span 
class="n">testData</span><span class="o">.</span><span 
class="n">map</span><span class="o">(</span><span class="n">lp</span> <span 
class="k">=&gt;</span> <span class="o">(</span><span class="n">lp</span><span 
class="o">.</span><span class="n">label</span><span class="o">,</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">features</span><span 
class="o">))).</span><span class="n">print</span><span class="o">()</span></p>
 
-<span class="n">ssc</span><span class="o">.</span><span 
class="n">start</span><span class="o">()</span>
+    <p><span class="n">ssc</span><span class="o">.</span><span 
class="n">start</span><span class="o">()</span>
 <span class="n">ssc</span><span class="o">.</span><span 
class="n">awaitTermination</span><span class="o">()</span>
-</pre></div>
-    <div><small>Find full example code at 
"examples/src/main/scala/org/apache/spark/examples/mllib/StreamingLinearRegressionExample.scala"
 in the Spark repo.</small></div>
+&lt;/pre&gt;&lt;/div&gt;&lt;div&gt;<small>Find full example code at 
&#8220;examples/src/main/scala/org/apache/spark/examples/mllib/StreamingLinearRegressionExample.scala&#8221;
 in the Spark repo.</small>&lt;/div&gt;</p>
 
   </div>
 
@@ -1101,32 +1100,31 @@ the model will update. Anytime a text file is placed in 
<code>sys.argv[2]</code>
 As you feed more data to the training directory, the predictions
 will get better!</p>
 
-    <p>Here a complete example:</p>
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">sys</span>
+    <p>Here a complete example:
+&lt;div class="highlight"&gt;&lt;pre&gt;<span></span><span 
class="kn">import</span> <span class="nn">sys</span></p>
 
-<span class="kn">from</span> <span class="nn">pyspark.mllib.linalg</span> 
<span class="kn">import</span> <span class="n">Vectors</span>
+    <p><span class="kn">from</span> <span 
class="nn">pyspark.mllib.linalg</span> <span class="kn">import</span> <span 
class="n">Vectors</span>
 <span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> 
<span class="kn">import</span> <span class="n">LabeledPoint</span>
-<span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> 
<span class="kn">import</span> <span 
class="n">StreamingLinearRegressionWithSGD</span>
+<span class="kn">from</span> <span class="nn">pyspark.mllib.regression</span> 
<span class="kn">import</span> <span 
class="n">StreamingLinearRegressionWithSGD</span></p>
 
-<span class="k">def</span> <span class="nf">parse</span><span 
class="p">(</span><span class="n">lp</span><span class="p">):</span>
-    <span class="n">label</span> <span class="o">=</span> <span 
class="nb">float</span><span class="p">(</span><span class="n">lp</span><span 
class="p">[</span><span class="n">lp</span><span class="o">.</span><span 
class="n">find</span><span class="p">(</span><span 
class="s">&#39;(&#39;</span><span class="p">)</span> <span class="o">+</span> 
<span class="mi">1</span><span class="p">:</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">find</span><span 
class="p">(</span><span class="s">&#39;,&#39;</span><span class="p">)])</span>
-    <span class="n">vec</span> <span class="o">=</span> <span 
class="n">Vectors</span><span class="o">.</span><span 
class="n">dense</span><span class="p">(</span><span class="n">lp</span><span 
class="p">[</span><span class="n">lp</span><span class="o">.</span><span 
class="n">find</span><span class="p">(</span><span 
class="s">&#39;[&#39;</span><span class="p">)</span> <span class="o">+</span> 
<span class="mi">1</span><span class="p">:</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">find</span><span 
class="p">(</span><span class="s">&#39;]&#39;</span><span 
class="p">)]</span><span class="o">.</span><span class="n">split</span><span 
class="p">(</span><span class="s">&#39;,&#39;</span><span class="p">))</span>
-    <span class="k">return</span> <span class="n">LabeledPoint</span><span 
class="p">(</span><span class="n">label</span><span class="p">,</span> <span 
class="n">vec</span><span class="p">)</span>
+    <p><span class="k">def</span> <span class="nf">parse</span><span 
class="p">(</span><span class="n">lp</span><span class="p">):</span>
+    <span class="n">label</span> <span class="o">=</span> <span 
class="nb">float</span><span class="p">(</span><span class="n">lp</span><span 
class="p">[</span><span class="n">lp</span><span class="o">.</span><span 
class="n">find</span><span class="p">(</span><span 
class="s1">&#39;(&#39;</span><span class="p">)</span> <span class="o">+</span> 
<span class="mi">1</span><span class="p">:</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">find</span><span 
class="p">(</span><span class="s1">&#39;,&#39;</span><span class="p">)])</span>
+    <span class="n">vec</span> <span class="o">=</span> <span 
class="n">Vectors</span><span class="o">.</span><span 
class="n">dense</span><span class="p">(</span><span class="n">lp</span><span 
class="p">[</span><span class="n">lp</span><span class="o">.</span><span 
class="n">find</span><span class="p">(</span><span 
class="s1">&#39;[&#39;</span><span class="p">)</span> <span class="o">+</span> 
<span class="mi">1</span><span class="p">:</span> <span 
class="n">lp</span><span class="o">.</span><span class="n">find</span><span 
class="p">(</span><span class="s1">&#39;]&#39;</span><span 
class="p">)]</span><span class="o">.</span><span class="n">split</span><span 
class="p">(</span><span class="s1">&#39;,&#39;</span><span class="p">))</span>
+    <span class="k">return</span> <span class="n">LabeledPoint</span><span 
class="p">(</span><span class="n">label</span><span class="p">,</span> <span 
class="n">vec</span><span class="p">)</span></p>
 
-<span class="n">trainingData</span> <span class="o">=</span> <span 
class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="p">(</span><span 
class="n">sys</span><span class="o">.</span><span class="n">argv</span><span 
class="p">[</span><span class="mi">1</span><span class="p">])</span><span 
class="o">.</span><span class="n">map</span><span class="p">(</span><span 
class="n">parse</span><span class="p">)</span><span class="o">.</span><span 
class="n">cache</span><span class="p">()</span>
-<span class="n">testData</span> <span class="o">=</span> <span 
class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="p">(</span><span 
class="n">sys</span><span class="o">.</span><span class="n">argv</span><span 
class="p">[</span><span class="mi">2</span><span class="p">])</span><span 
class="o">.</span><span class="n">map</span><span class="p">(</span><span 
class="n">parse</span><span class="p">)</span>
+    <p><span class="n">trainingData</span> <span class="o">=</span> <span 
class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="p">(</span><span 
class="n">sys</span><span class="o">.</span><span class="n">argv</span><span 
class="p">[</span><span class="mi">1</span><span class="p">])</span><span 
class="o">.</span><span class="n">map</span><span class="p">(</span><span 
class="n">parse</span><span class="p">)</span><span class="o">.</span><span 
class="n">cache</span><span class="p">()</span>
+<span class="n">testData</span> <span class="o">=</span> <span 
class="n">ssc</span><span class="o">.</span><span 
class="n">textFileStream</span><span class="p">(</span><span 
class="n">sys</span><span class="o">.</span><span class="n">argv</span><span 
class="p">[</span><span class="mi">2</span><span class="p">])</span><span 
class="o">.</span><span class="n">map</span><span class="p">(</span><span 
class="n">parse</span><span class="p">)</span></p>
 
-<span class="n">numFeatures</span> <span class="o">=</span> <span 
class="mi">3</span>
+    <p><span class="n">numFeatures</span> <span class="o">=</span> <span 
class="mi">3</span>
 <span class="n">model</span> <span class="o">=</span> <span 
class="n">StreamingLinearRegressionWithSGD</span><span class="p">()</span>
-<span class="n">model</span><span class="o">.</span><span 
class="n">setInitialWeights</span><span class="p">([</span><span 
class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span 
class="p">,</span> <span class="mf">0.0</span><span class="p">])</span>
+<span class="n">model</span><span class="o">.</span><span 
class="n">setInitialWeights</span><span class="p">([</span><span 
class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span 
class="p">,</span> <span class="mf">0.0</span><span class="p">])</span></p>
 
-<span class="n">model</span><span class="o">.</span><span 
class="n">trainOn</span><span class="p">(</span><span 
class="n">trainingData</span><span class="p">)</span>
-<span class="k">print</span><span class="p">(</span><span 
class="n">model</span><span class="o">.</span><span 
class="n">predictOnValues</span><span class="p">(</span><span 
class="n">testData</span><span class="o">.</span><span 
class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span 
class="n">lp</span><span class="p">:</span> <span class="p">(</span><span 
class="n">lp</span><span class="o">.</span><span class="n">label</span><span 
class="p">,</span> <span class="n">lp</span><span class="o">.</span><span 
class="n">features</span><span class="p">))))</span>
+    <p><span class="n">model</span><span class="o">.</span><span 
class="n">trainOn</span><span class="p">(</span><span 
class="n">trainingData</span><span class="p">)</span>
+<span class="k">print</span><span class="p">(</span><span 
class="n">model</span><span class="o">.</span><span 
class="n">predictOnValues</span><span class="p">(</span><span 
class="n">testData</span><span class="o">.</span><span 
class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span 
class="n">lp</span><span class="p">:</span> <span class="p">(</span><span 
class="n">lp</span><span class="o">.</span><span class="n">label</span><span 
class="p">,</span> <span class="n">lp</span><span class="o">.</span><span 
class="n">features</span><span class="p">))))</span></p>
 
-<span class="n">ssc</span><span class="o">.</span><span 
class="n">start</span><span class="p">()</span>
+    <p><span class="n">ssc</span><span class="o">.</span><span 
class="n">start</span><span class="p">()</span>
 <span class="n">ssc</span><span class="o">.</span><span 
class="n">awaitTermination</span><span class="p">()</span>
-</pre></div>
-    <div><small>Find full example code at 
"examples/src/main/python/mllib/streaming_linear_regression_example.py" in the 
Spark repo.</small></div>
+&lt;/pre&gt;&lt;/div&gt;&lt;div&gt;<small>Find full example code at 
&#8220;examples/src/main/python/mllib/streaming_linear_regression_example.py&#8221;
 in the Spark repo.</small>&lt;/div&gt;</p>
 
   </div>
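The streaming linear regression hunk above is hard to follow once the Pygments spans are wrapped for email. For orientation only, here is a minimal Scala sketch of the same workflow; it assumes an existing StreamingContext `ssc`, three-dimensional features, and training/test directories passed as the first two program arguments (`args`), and it is a sketch rather than the exact example shipped in the Spark repo.

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}

    // Parse labeled points from the text streams; LabeledPoint.parse handles the
    // "(label,[f1,f2,f3])" format that the Python parse() helper above decodes by hand.
    val trainingData = ssc.textFileStream(args(0)).map(LabeledPoint.parse).cache()
    val testData = ssc.textFileStream(args(1)).map(LabeledPoint.parse)

    // Start from zero weights for the three features, train on one stream,
    // and print predictions on the other.
    val numFeatures = 3
    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.zeros(numFeatures))

    model.trainOn(trainingData)
    model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()

    ssc.start()
    ssc.awaitTermination()
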
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d2bcf185/site/docs/2.1.0/mllib-naive-bayes.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.0/mllib-naive-bayes.html 
b/site/docs/2.1.0/mllib-naive-bayes.html
index c21dd83..d843987 100644
--- a/site/docs/2.1.0/mllib-naive-bayes.html
+++ b/site/docs/2.1.0/mllib-naive-bayes.html
@@ -342,7 +342,7 @@ can be used for evaluation and prediction.</p>
 
     <p>Refer to the <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayes"><code>NaiveBayes</code>
 Scala docs</a> and <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.NaiveBayesModel"><code>NaiveBayesModel</code>
 Scala docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">NaiveBayes</span><span class="o">,</span> 
<span class="nc">NaiveBayesModel</span><span class="o">}</span>
+    <div class="highlight"><pre><span></span><span class="k">import</span> 
<span class="nn">org.apache.spark.mllib.classification.</span><span 
class="o">{</span><span class="nc">NaiveBayes</span><span class="o">,</span> 
<span class="nc">NaiveBayesModel</span><span class="o">}</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.util.MLUtils</span>
 
 <span class="c1">// Load and parse the data file.</span>
@@ -373,7 +373,7 @@ can be used for evaluation and prediction.</p>
 
     <p>Refer to the <a 
href="api/java/org/apache/spark/mllib/classification/NaiveBayes.html"><code>NaiveBayes</code>
 Java docs</a> and <a 
href="api/java/org/apache/spark/mllib/classification/NaiveBayesModel.html"><code>NaiveBayesModel</code>
 Java docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">scala.Tuple2</span><span class="o">;</span>
+    <div class="highlight"><pre><span></span><span class="kn">import</span> 
<span class="nn">scala.Tuple2</span><span class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.function.Function</span><span 
class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.function.PairFunction</span><span 
class="o">;</span>
 <span class="kn">import</span> <span 
class="nn">org.apache.spark.api.java.JavaPairRDD</span><span class="o">;</span>
@@ -423,33 +423,33 @@ used for evaluation and prediction.</p>
 
     <p>Refer to the <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.NaiveBayes"><code>NaiveBayes</code>
 Python docs</a> and <a 
href="api/python/pyspark.mllib.html#pyspark.mllib.classification.NaiveBayesModel"><code>NaiveBayesModel</code>
 Python docs</a> for more details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">from</span> <span 
class="nn">pyspark.mllib.classification</span> <span class="kn">import</span> 
<span class="n">NaiveBayes</span><span class="p">,</span> <span 
class="n">NaiveBayesModel</span>
+    <div class="highlight"><pre><span></span><span class="kn">from</span> 
<span class="nn">pyspark.mllib.classification</span> <span 
class="kn">import</span> <span class="n">NaiveBayes</span><span 
class="p">,</span> <span class="n">NaiveBayesModel</span>
 <span class="kn">from</span> <span class="nn">pyspark.mllib.util</span> <span 
class="kn">import</span> <span class="n">MLUtils</span>
 
 
 
-<span class="c"># Load and parse the data file.</span>
-<span class="n">data</span> <span class="o">=</span> <span 
class="n">MLUtils</span><span class="o">.</span><span 
class="n">loadLibSVMFile</span><span class="p">(</span><span 
class="n">sc</span><span class="p">,</span> <span 
class="s">&quot;data/mllib/sample_libsvm_data.txt&quot;</span><span 
class="p">)</span>
+<span class="c1"># Load and parse the data file.</span>
+<span class="n">data</span> <span class="o">=</span> <span 
class="n">MLUtils</span><span class="o">.</span><span 
class="n">loadLibSVMFile</span><span class="p">(</span><span 
class="n">sc</span><span class="p">,</span> <span 
class="s2">&quot;data/mllib/sample_libsvm_data.txt&quot;</span><span 
class="p">)</span>
 
-<span class="c"># Split data approximately into training (60%) and test 
(40%)</span>
+<span class="c1"># Split data approximately into training (60%) and test 
(40%)</span>
 <span class="n">training</span><span class="p">,</span> <span 
class="n">test</span> <span class="o">=</span> <span class="n">data</span><span 
class="o">.</span><span class="n">randomSplit</span><span 
class="p">([</span><span class="mf">0.6</span><span class="p">,</span> <span 
class="mf">0.4</span><span class="p">])</span>
 
-<span class="c"># Train a naive Bayes model.</span>
+<span class="c1"># Train a naive Bayes model.</span>
 <span class="n">model</span> <span class="o">=</span> <span 
class="n">NaiveBayes</span><span class="o">.</span><span 
class="n">train</span><span class="p">(</span><span 
class="n">training</span><span class="p">,</span> <span 
class="mf">1.0</span><span class="p">)</span>
 
-<span class="c"># Make prediction and test accuracy.</span>
+<span class="c1"># Make prediction and test accuracy.</span>
 <span class="n">predictionAndLabel</span> <span class="o">=</span> <span 
class="n">test</span><span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="k">lambda</span> <span class="n">p</span><span 
class="p">:</span> <span class="p">(</span><span class="n">model</span><span 
class="o">.</span><span class="n">predict</span><span class="p">(</span><span 
class="n">p</span><span class="o">.</span><span class="n">features</span><span 
class="p">),</span> <span class="n">p</span><span class="o">.</span><span 
class="n">label</span><span class="p">))</span>
 <span class="n">accuracy</span> <span class="o">=</span> <span 
class="mf">1.0</span> <span class="o">*</span> <span 
class="n">predictionAndLabel</span><span class="o">.</span><span 
class="n">filter</span><span class="p">(</span><span class="k">lambda</span> 
<span class="p">(</span><span class="n">x</span><span class="p">,</span> <span 
class="n">v</span><span class="p">):</span> <span class="n">x</span> <span 
class="o">==</span> <span class="n">v</span><span class="p">)</span><span 
class="o">.</span><span class="n">count</span><span class="p">()</span> <span 
class="o">/</span> <span class="n">test</span><span class="o">.</span><span 
class="n">count</span><span class="p">()</span>
-<span class="k">print</span><span class="p">(</span><span class="s">&#39;model 
accuracy {}&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span 
class="n">accuracy</span><span class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span 
class="s1">&#39;model accuracy {}&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span 
class="n">accuracy</span><span class="p">))</span>
 
-<span class="c"># Save and load model</span>
-<span class="n">output_dir</span> <span class="o">=</span> <span 
class="s">&#39;target/tmp/myNaiveBayesModel&#39;</span>
+<span class="c1"># Save and load model</span>
+<span class="n">output_dir</span> <span class="o">=</span> <span 
class="s1">&#39;target/tmp/myNaiveBayesModel&#39;</span>
 <span class="n">shutil</span><span class="o">.</span><span 
class="n">rmtree</span><span class="p">(</span><span 
class="n">output_dir</span><span class="p">,</span> <span 
class="n">ignore_errors</span><span class="o">=</span><span 
class="bp">True</span><span class="p">)</span>
 <span class="n">model</span><span class="o">.</span><span 
class="n">save</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span class="n">output_dir</span><span class="p">)</span>
 <span class="n">sameModel</span> <span class="o">=</span> <span 
class="n">NaiveBayesModel</span><span class="o">.</span><span 
class="n">load</span><span class="p">(</span><span class="n">sc</span><span 
class="p">,</span> <span class="n">output_dir</span><span class="p">)</span>
 <span class="n">predictionAndLabel</span> <span class="o">=</span> <span 
class="n">test</span><span class="o">.</span><span class="n">map</span><span 
class="p">(</span><span class="k">lambda</span> <span class="n">p</span><span 
class="p">:</span> <span class="p">(</span><span 
class="n">sameModel</span><span class="o">.</span><span 
class="n">predict</span><span class="p">(</span><span class="n">p</span><span 
class="o">.</span><span class="n">features</span><span class="p">),</span> 
<span class="n">p</span><span class="o">.</span><span 
class="n">label</span><span class="p">))</span>
 <span class="n">accuracy</span> <span class="o">=</span> <span 
class="mf">1.0</span> <span class="o">*</span> <span 
class="n">predictionAndLabel</span><span class="o">.</span><span 
class="n">filter</span><span class="p">(</span><span class="k">lambda</span> 
<span class="p">(</span><span class="n">x</span><span class="p">,</span> <span 
class="n">v</span><span class="p">):</span> <span class="n">x</span> <span 
class="o">==</span> <span class="n">v</span><span class="p">)</span><span 
class="o">.</span><span class="n">count</span><span class="p">()</span> <span 
class="o">/</span> <span class="n">test</span><span class="o">.</span><span 
class="n">count</span><span class="p">()</span>
-<span class="k">print</span><span class="p">(</span><span 
class="s">&#39;sameModel accuracy {}&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span 
class="n">accuracy</span><span class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span 
class="s1">&#39;sameModel accuracy {}&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span 
class="n">accuracy</span><span class="p">))</span>
 </pre></div>
     <div><small>Find full example code at 
"examples/src/main/python/mllib/naive_bayes_example.py" in the Spark 
repo.</small></div>
   </div>
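The Python hunk above changes only Pygments token classes; the naive Bayes workflow it renders is unchanged. For reference, a minimal Scala sketch of the same train/evaluate/save round trip, assuming a SparkContext `sc` and the bundled sample_libsvm_data.txt path used throughout these docs:

    import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel}
    import org.apache.spark.mllib.util.MLUtils

    // Load LibSVM data and split it roughly into training (60%) and test (40%).
    val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
    val Array(training, test) = data.randomSplit(Array(0.6, 0.4))

    // Train a naive Bayes model with additive smoothing lambda = 1.0.
    val model = NaiveBayes.train(training, lambda = 1.0)

    // Fraction of test points whose predicted label matches the true label.
    val predictionAndLabel = test.map(p => (model.predict(p.features), p.label))
    val accuracy =
      predictionAndLabel.filter { case (x, v) => x == v }.count().toDouble / test.count()

    // Save the model and load it back.
    val outputDir = "target/tmp/myNaiveBayesModel"
    model.save(sc, outputDir)
    val sameModel = NaiveBayesModel.load(sc, outputDir)
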

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d2bcf185/site/docs/2.1.0/mllib-optimization.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.0/mllib-optimization.html 
b/site/docs/2.1.0/mllib-optimization.html
index 0c32f6e..74dbeba 100644
--- a/site/docs/2.1.0/mllib-optimization.html
+++ b/site/docs/2.1.0/mllib-optimization.html
@@ -331,20 +331,20 @@
                     
 
                     <ul id="markdown-toc">
-  <li><a href="#mathematical-description" 
id="markdown-toc-mathematical-description">Mathematical description</a>    <ul>
-      <li><a href="#gradient-descent" 
id="markdown-toc-gradient-descent">Gradient descent</a></li>
-      <li><a href="#stochastic-gradient-descent-sgd" 
id="markdown-toc-stochastic-gradient-descent-sgd">Stochastic gradient descent 
(SGD)</a></li>
-      <li><a href="#update-schemes-for-distributed-sgd" 
id="markdown-toc-update-schemes-for-distributed-sgd">Update schemes for 
distributed SGD</a></li>
-      <li><a href="#limited-memory-bfgs-l-bfgs" 
id="markdown-toc-limited-memory-bfgs-l-bfgs">Limited-memory BFGS 
(L-BFGS)</a></li>
-      <li><a href="#choosing-an-optimization-method" 
id="markdown-toc-choosing-an-optimization-method">Choosing an Optimization 
Method</a></li>
+  <li><a href="#mathematical-description">Mathematical description</a>    <ul>
+      <li><a href="#gradient-descent">Gradient descent</a></li>
+      <li><a href="#stochastic-gradient-descent-sgd">Stochastic gradient 
descent (SGD)</a></li>
+      <li><a href="#update-schemes-for-distributed-sgd">Update schemes for 
distributed SGD</a></li>
+      <li><a href="#limited-memory-bfgs-l-bfgs">Limited-memory BFGS 
(L-BFGS)</a></li>
+      <li><a href="#choosing-an-optimization-method">Choosing an Optimization 
Method</a></li>
     </ul>
   </li>
-  <li><a href="#implementation-in-mllib" 
id="markdown-toc-implementation-in-mllib">Implementation in MLlib</a>    <ul>
-      <li><a href="#gradient-descent-and-stochastic-gradient-descent" 
id="markdown-toc-gradient-descent-and-stochastic-gradient-descent">Gradient 
descent and stochastic gradient descent</a></li>
-      <li><a href="#l-bfgs" id="markdown-toc-l-bfgs">L-BFGS</a></li>
+  <li><a href="#implementation-in-mllib">Implementation in MLlib</a>    <ul>
+      <li><a href="#gradient-descent-and-stochastic-gradient-descent">Gradient 
descent and stochastic gradient descent</a></li>
+      <li><a href="#l-bfgs">L-BFGS</a></li>
     </ul>
   </li>
-  <li><a href="#developers-notes" 
id="markdown-toc-developers-notes">Developer&#8217;s notes</a></li>
+  <li><a href="#developers-notes">Developer&#8217;s notes</a></li>
 </ul>
 
 <p><code>\[
@@ -471,7 +471,7 @@ quadratic without evaluating the second partial derivatives 
of the objective fun
 Hessian matrix. The Hessian matrix is approximated by previous gradient 
evaluations, so there is no 
 vertical scalability issue (the number of training features) when computing 
the Hessian matrix 
 explicitly in Newton&#8217;s method. As a result, L-BFGS often achieves 
rapider convergence compared with 
-other first-order optimization.</p>
+other first-order optimization. </p>
 
 <h3 id="choosing-an-optimization-method">Choosing an Optimization Method</h3>
 
@@ -497,7 +497,7 @@ sets the following parameters:</p>
 being optimized, i.e., with respect to a single training example, at the
 current parameter value. MLlib includes gradient classes for common loss
 functions, e.g., hinge, logistic, least-squares.  The gradient class takes as
-input a training example, its label, and the current parameter value.</li>
+input a training example, its label, and the current parameter value. </li>
   <li><code>Updater</code> is a class that performs the actual gradient 
descent step, i.e. 
 updating the weights in each iteration, for a given gradient of the loss part.
 The updater is also responsible to perform the update from the regularization 
@@ -505,7 +505,7 @@ part. MLlib includes updaters for cases without 
regularization, as well as
 L1 and L2 regularizers.</li>
   <li><code>stepSize</code> is a scalar value denoting the initial step size 
for gradient
 descent. All updaters in MLlib use a step size at the t-th step equal to
-<code>stepSize $/ \sqrt{t}$</code>.</li>
+<code>stepSize $/ \sqrt{t}$</code>. </li>
   <li><code>numIterations</code> is the number of iterations to run.</li>
   <li><code>regParam</code> is the regularization parameter when using L1 or 
L2 regularization.</li>
   <li><code>miniBatchFraction</code> is the fraction of the total data that is 
sampled in 
@@ -521,7 +521,7 @@ each iteration, to compute the gradient direction.
 ML algorithms such as Linear Regression, and Logistic Regression, you have to 
pass the gradient of objective
 function, and updater into optimizer yourself instead of using the training 
APIs like 
 <a 
href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithSGD">LogisticRegressionWithSGD</a>.
-See the example below. It will be addressed in the next release.</p>
+See the example below. It will be addressed in the next release. </p>
 
 <p>The L1 regularization by using 
 <a 
href="api/scala/index.html#org.apache.spark.mllib.optimization.L1Updater">L1Updater</a>
 will not work since the 
@@ -536,10 +536,10 @@ has the following parameters:</p>
 being optimized, i.e., with respect to a single training example, at the
 current parameter value. MLlib includes gradient classes for common loss
 functions, e.g., hinge, logistic, least-squares.  The gradient class takes as
-input a training example, its label, and the current parameter value.</li>
+input a training example, its label, and the current parameter value. </li>
   <li><code>Updater</code> is a class that computes the gradient and loss of 
objective function 
 of the regularization part for L-BFGS. MLlib includes updaters for cases 
without 
-regularization, as well as L2 regularizer.</li>
+regularization, as well as L2 regularizer. </li>
   <li><code>numCorrections</code> is the number of corrections used in the 
L-BFGS update. 10 is 
 recommended.</li>
   <li><code>maxNumIterations</code> is the maximal number of iterations that 
L-BFGS can be run.</li>
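Taken together, these parameters correspond to the arguments of LBFGS.runLBFGS, as the Scala and Java examples further below show in full. A minimal Scala sketch of the call, assuming `training` is an RDD[(Double, Vector)] of (label, features-with-intercept) pairs and `numFeatures` is the feature dimension set up as in the full example:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}

    // `training` and `numFeatures` are assumed to come from the surrounding example.
    val numCorrections = 10
    val convergenceTol = 1e-4
    val maxNumIterations = 20
    val regParam = 0.1
    val initialWeightsWithIntercept = Vectors.dense(new Array[Double](numFeatures + 1))

    // Returns the final weights and the loss recorded at each iteration.
    val (weightsWithIntercept, loss) = LBFGS.runLBFGS(
      training,
      new LogisticGradient(),
      new SquaredL2Updater(),
      numCorrections,
      convergenceTol,
      maxNumIterations,
      regParam,
      initialWeightsWithIntercept)
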
@@ -555,14 +555,14 @@ containing weights for every feature, and the second 
element is an array contain
 the loss computed for every iteration.</p>
 
 <p>Here is an example to train binary logistic regression with L2 
regularization using
-L-BFGS optimizer.</p>
+L-BFGS optimizer. </p>
 
 <div class="codetabs">
 
 <div data-lang="scala">
     <p>Refer to the <a 
href="api/scala/index.html#org.apache.spark.mllib.optimization.LBFGS"><code>LBFGS</code>
 Scala docs</a> and <a 
href="api/scala/index.html#org.apache.spark.mllib.optimization.SquaredL2Updater"><code>SquaredL2Updater</code>
 Scala docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span>
+    <div class="highlight"><pre><span></span><span class="k">import</span> 
<span 
class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.linalg.Vectors</span>
 <span class="k">import</span> <span 
class="nn">org.apache.spark.mllib.optimization.</span><span 
class="o">{</span><span class="nc">LBFGS</span><span class="o">,</span> <span 
class="nc">LogisticGradient</span><span class="o">,</span> <span 
class="nc">SquaredL2Updater</span><span class="o">}</span>
@@ -623,7 +623,7 @@ L-BFGS optimizer.</p>
 <div data-lang="java">
     <p>Refer to the <a 
href="api/java/org/apache/spark/mllib/optimization/LBFGS.html"><code>LBFGS</code>
 Java docs</a> and <a 
href="api/java/org/apache/spark/mllib/optimization/SquaredL2Updater.html"><code>SquaredL2Updater</code>
 Java docs</a> for details on the API.</p>
 
-    <div class="highlight"><pre><span class="kn">import</span> <span 
class="nn">java.util.Arrays</span><span class="o">;</span>
+    <div class="highlight"><pre><span></span><span class="kn">import</span> 
<span class="nn">java.util.Arrays</span><span class="o">;</span>
 
 <span class="kn">import</span> <span class="nn">scala.Tuple2</span><span 
class="o">;</span>
 
@@ -658,15 +658,15 @@ L-BFGS optimizer.</p>
 
 <span class="c1">// Run training algorithm to build the model.</span>
 <span class="kt">int</span> <span class="n">numCorrections</span> <span 
class="o">=</span> <span class="mi">10</span><span class="o">;</span>
-<span class="kt">double</span> <span class="n">convergenceTol</span> <span 
class="o">=</span> <span class="mi">1</span><span class="n">e</span><span 
class="o">-</span><span class="mi">4</span><span class="o">;</span>
+<span class="kt">double</span> <span class="n">convergenceTol</span> <span 
class="o">=</span> <span class="mf">1e-4</span><span class="o">;</span>
 <span class="kt">int</span> <span class="n">maxNumIterations</span> <span 
class="o">=</span> <span class="mi">20</span><span class="o">;</span>
 <span class="kt">double</span> <span class="n">regParam</span> <span 
class="o">=</span> <span class="mf">0.1</span><span class="o">;</span>
 <span class="n">Vector</span> <span 
class="n">initialWeightsWithIntercept</span> <span class="o">=</span> <span 
class="n">Vectors</span><span class="o">.</span><span 
class="na">dense</span><span class="o">(</span><span class="k">new</span> <span 
class="kt">double</span><span class="o">[</span><span 
class="n">numFeatures</span> <span class="o">+</span> <span 
class="mi">1</span><span class="o">]);</span>
 
 <span class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">Vector</span><span class="o">,</span> <span 
class="kt">double</span><span class="o">[]&gt;</span> <span 
class="n">result</span> <span class="o">=</span> <span 
class="n">LBFGS</span><span class="o">.</span><span 
class="na">runLBFGS</span><span class="o">(</span>
   <span class="n">training</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">(),</span>
-  <span class="k">new</span> <span class="nf">LogisticGradient</span><span 
class="o">(),</span>
-  <span class="k">new</span> <span class="nf">SquaredL2Updater</span><span 
class="o">(),</span>
+  <span class="k">new</span> <span class="n">LogisticGradient</span><span 
class="o">(),</span>
+  <span class="k">new</span> <span class="n">SquaredL2Updater</span><span 
class="o">(),</span>
   <span class="n">numCorrections</span><span class="o">,</span>
   <span class="n">convergenceTol</span><span class="o">,</span>
   <span class="n">maxNumIterations</span><span class="o">,</span>
@@ -675,7 +675,7 @@ L-BFGS optimizer.</p>
 <span class="n">Vector</span> <span class="n">weightsWithIntercept</span> 
<span class="o">=</span> <span class="n">result</span><span 
class="o">.</span><span class="na">_1</span><span class="o">();</span>
 <span class="kt">double</span><span class="o">[]</span> <span 
class="n">loss</span> <span class="o">=</span> <span 
class="n">result</span><span class="o">.</span><span class="na">_2</span><span 
class="o">();</span>
 
-<span class="kd">final</span> <span class="n">LogisticRegressionModel</span> 
<span class="n">model</span> <span class="o">=</span> <span 
class="k">new</span> <span class="nf">LogisticRegressionModel</span><span 
class="o">(</span>
+<span class="kd">final</span> <span class="n">LogisticRegressionModel</span> 
<span class="n">model</span> <span class="o">=</span> <span 
class="k">new</span> <span class="n">LogisticRegressionModel</span><span 
class="o">(</span>
   <span class="n">Vectors</span><span class="o">.</span><span 
class="na">dense</span><span class="o">(</span><span 
class="n">Arrays</span><span class="o">.</span><span 
class="na">copyOf</span><span class="o">(</span><span 
class="n">weightsWithIntercept</span><span class="o">.</span><span 
class="na">toArray</span><span class="o">(),</span> <span 
class="n">weightsWithIntercept</span><span class="o">.</span><span 
class="na">size</span><span class="o">()</span> <span class="o">-</span> <span 
class="mi">1</span><span class="o">)),</span>
   <span class="o">(</span><span class="n">weightsWithIntercept</span><span 
class="o">.</span><span class="na">toArray</span><span 
class="o">())[</span><span class="n">weightsWithIntercept</span><span 
class="o">.</span><span class="na">size</span><span class="o">()</span> <span 
class="o">-</span> <span class="mi">1</span><span class="o">]);</span>
 
@@ -693,7 +693,7 @@ L-BFGS optimizer.</p>
 
 <span class="c1">// Get evaluation metrics.</span>
 <span class="n">BinaryClassificationMetrics</span> <span 
class="n">metrics</span> <span class="o">=</span>
-  <span class="k">new</span> <span 
class="nf">BinaryClassificationMetrics</span><span class="o">(</span><span 
class="n">scoreAndLabels</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span>
+  <span class="k">new</span> <span 
class="n">BinaryClassificationMetrics</span><span class="o">(</span><span 
class="n">scoreAndLabels</span><span class="o">.</span><span 
class="na">rdd</span><span class="o">());</span>
 <span class="kt">double</span> <span class="n">auROC</span> <span 
class="o">=</span> <span class="n">metrics</span><span class="o">.</span><span 
class="na">areaUnderROC</span><span class="o">();</span>
 
 <span class="n">System</span><span class="o">.</span><span 
class="na">out</span><span class="o">.</span><span 
class="na">println</span><span class="o">(</span><span class="s">&quot;Loss of 
each step in training process&quot;</span><span class="o">);</span>
@@ -717,7 +717,7 @@ the actual gradient descent step. However, we&#8217;re able 
to take the gradient
 loss of objective function of regularization for L-BFGS by ignoring the part 
of logic
 only for gradient decent such as adaptive step size stuff. We will refactorize
 this into regularizer to replace updater to separate the logic between 
-regularization and step update later.</p>
+regularization and step update later. </p>
 
 
                 </div>

