Repository: spark-website
Updated Branches:
  refs/heads/asf-site ecf94f284 -> d2bcf1854


http://git-wip-us.apache.org/repos/asf/spark-website/blob/d2bcf185/site/docs/2.1.0/submitting-applications.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.0/submitting-applications.html b/site/docs/2.1.0/submitting-applications.html
index fc18fa9..0c91739 100644
--- a/site/docs/2.1.0/submitting-applications.html
+++ b/site/docs/2.1.0/submitting-applications.html
@@ -151,14 +151,14 @@ packaging them into a <code>.zip</code> or <code>.egg</code>.</p>
 This script takes care of setting up the classpath with Spark and its
 dependencies, and can support different cluster managers and deploy modes that Spark supports:</p>
 
-<div class="highlight"><pre><code class="language-bash" 
data-lang="bash">./bin/spark-submit <span class="se">\</span>
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span></span>./bin/spark-submit <span class="se">\</span>
   --class &lt;main-class&gt; <span class="se">\</span>
   --master &lt;master-url&gt; <span class="se">\</span>
   --deploy-mode &lt;deploy-mode&gt; <span class="se">\</span>
   --conf &lt;key&gt;<span class="o">=</span>&lt;value&gt; <span class="se">\</span>
-  ... <span class="c"># other options</span>
+  ... <span class="c1"># other options</span>
   &lt;application-jar&gt; <span class="se">\</span>
-  <span class="o">[</span>application-arguments<span 
class="o">]</span></code></pre></div>
+  <span class="o">[</span>application-arguments<span 
class="o">]</span></code></pre></figure>
 
 <p>Some of the commonly used options are:</p>
 
@@ -194,23 +194,23 @@ you can also specify <code>--supervise</code> to make sure that the driver is au
 fails with non-zero exit code. To enumerate all such options available to <code>spark-submit</code>,
 run it with <code>--help</code>. Here are a few examples of common options:</p>
 
-<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span 
class="c"># Run application locally on 8 cores</span>
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span></span><span class="c1"># Run application locally on 8 
cores</span>
 ./bin/spark-submit <span class="se">\</span>
   --class org.apache.spark.examples.SparkPi <span class="se">\</span>
-  --master <span class="nb">local</span><span class="o">[</span>8<span class="o">]</span> <span class="se">\</span>
+  --master local<span class="o">[</span><span class="m">8</span><span class="o">]</span> <span class="se">\</span>
   /path/to/examples.jar <span class="se">\</span>
-  100
+  <span class="m">100</span>
 
-<span class="c"># Run on a Spark standalone cluster in client deploy 
mode</span>
+<span class="c1"># Run on a Spark standalone cluster in client deploy 
mode</span>
 ./bin/spark-submit <span class="se">\</span>
   --class org.apache.spark.examples.SparkPi <span class="se">\</span>
   --master spark://207.184.161.138:7077 <span class="se">\</span>
   --executor-memory 20G <span class="se">\</span>
   --total-executor-cores <span class="m">100</span> <span class="se">\</span>
   /path/to/examples.jar <span class="se">\</span>
-  1000
+  <span class="m">1000</span>
 
-<span class="c"># Run on a Spark standalone cluster in cluster deploy mode 
with supervise</span>
+<span class="c1"># Run on a Spark standalone cluster in cluster deploy mode 
with supervise</span>
 ./bin/spark-submit <span class="se">\</span>
   --class org.apache.spark.examples.SparkPi <span class="se">\</span>
   --master spark://207.184.161.138:7077 <span class="se">\</span>
@@ -219,26 +219,26 @@ run it with <code>--help</code>. Here are a few examples of common options:</p>
   --executor-memory 20G <span class="se">\</span>
   --total-executor-cores <span class="m">100</span> <span class="se">\</span>
   /path/to/examples.jar <span class="se">\</span>
-  1000
+  <span class="m">1000</span>
 
-<span class="c"># Run on a YARN cluster</span>
-<span class="nb">export </span><span class="nv">HADOOP_CONF_DIR</span><span 
class="o">=</span>XXX
+<span class="c1"># Run on a YARN cluster</span>
+<span class="nb">export</span> <span class="nv">HADOOP_CONF_DIR</span><span 
class="o">=</span>XXX
 ./bin/spark-submit <span class="se">\</span>
   --class org.apache.spark.examples.SparkPi <span class="se">\</span>
   --master yarn <span class="se">\</span>
-  --deploy-mode cluster <span class="se">\ </span> <span class="c"># can be client for client mode</span>
+  --deploy-mode cluster <span class="se">\ </span> <span class="c1"># can be client for client mode</span>
   --executor-memory 20G <span class="se">\</span>
   --num-executors <span class="m">50</span> <span class="se">\</span>
   /path/to/examples.jar <span class="se">\</span>
-  1000
+  <span class="m">1000</span>
 
-<span class="c"># Run a Python application on a Spark standalone cluster</span>
+<span class="c1"># Run a Python application on a Spark standalone 
cluster</span>
 ./bin/spark-submit <span class="se">\</span>
   --master spark://207.184.161.138:7077 <span class="se">\</span>
   examples/src/main/python/pi.py <span class="se">\</span>
-  1000
+  <span class="m">1000</span>
 
-<span class="c"># Run on a Mesos cluster in cluster deploy mode with 
supervise</span>
+<span class="c1"># Run on a Mesos cluster in cluster deploy mode with 
supervise</span>
 ./bin/spark-submit <span class="se">\</span>
   --class org.apache.spark.examples.SparkPi <span class="se">\</span>
   --master mesos://207.184.161.138:7077 <span class="se">\</span>
@@ -247,7 +247,7 @@ run it with <code>--help</code>. Here are a few examples of common options:</p>
   --executor-memory 20G <span class="se">\</span>
   --total-executor-cores <span class="m">100</span> <span class="se">\</span>
   http://path/to/examples.jar <span class="se">\</span>
-  1000</code></pre></div>
+  <span class="m">1000</span></code></pre></figure>
 
 <h1 id="master-urls">Master URLs</h1>
 
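For readers decoding the escaped markup in the hunks above: the template shows spark-submit
launching the entry point named by --class. A minimal Scala sketch of such an entry point
(the object name MyApp, the app name, and the jar path in the usage line below are
illustrative placeholders, not content from the docs page):

    import org.apache.spark.{SparkConf, SparkContext}

    object MyApp {
      def main(args: Array[String]): Unit = {
        // The master URL is deliberately not set here; spark-submit injects it
        // via --master, as in the template above.
        val conf = new SparkConf().setAppName("MyApp")
        val sc = new SparkContext(conf)
        // ... application logic using sc ...
        sc.stop()
      }
    }

Packaged into a jar, this would be launched along the lines of:

    ./bin/spark-submit --class MyApp --master local[8] /path/to/myapp.jar
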

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d2bcf185/site/docs/2.1.0/tuning.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.0/tuning.html b/site/docs/2.1.0/tuning.html
index ca4ad9f..33a6316 100644
--- a/site/docs/2.1.0/tuning.html
+++ b/site/docs/2.1.0/tuning.html
@@ -129,23 +129,23 @@
                     
 
                     <ul id="markdown-toc">
-  <li><a href="#data-serialization" id="markdown-toc-data-serialization">Data 
Serialization</a></li>
-  <li><a href="#memory-tuning" id="markdown-toc-memory-tuning">Memory 
Tuning</a>    <ul>
-      <li><a href="#memory-management-overview" 
id="markdown-toc-memory-management-overview">Memory Management Overview</a></li>
-      <li><a href="#determining-memory-consumption" 
id="markdown-toc-determining-memory-consumption">Determining Memory 
Consumption</a></li>
-      <li><a href="#tuning-data-structures" 
id="markdown-toc-tuning-data-structures">Tuning Data Structures</a></li>
-      <li><a href="#serialized-rdd-storage" 
id="markdown-toc-serialized-rdd-storage">Serialized RDD Storage</a></li>
-      <li><a href="#garbage-collection-tuning" 
id="markdown-toc-garbage-collection-tuning">Garbage Collection Tuning</a></li>
+  <li><a href="#data-serialization">Data Serialization</a></li>
+  <li><a href="#memory-tuning">Memory Tuning</a>    <ul>
+      <li><a href="#memory-management-overview">Memory Management 
Overview</a></li>
+      <li><a href="#determining-memory-consumption">Determining Memory 
Consumption</a></li>
+      <li><a href="#tuning-data-structures">Tuning Data Structures</a></li>
+      <li><a href="#serialized-rdd-storage">Serialized RDD Storage</a></li>
+      <li><a href="#garbage-collection-tuning">Garbage Collection 
Tuning</a></li>
     </ul>
   </li>
-  <li><a href="#other-considerations" 
id="markdown-toc-other-considerations">Other Considerations</a>    <ul>
-      <li><a href="#level-of-parallelism" 
id="markdown-toc-level-of-parallelism">Level of Parallelism</a></li>
-      <li><a href="#memory-usage-of-reduce-tasks" 
id="markdown-toc-memory-usage-of-reduce-tasks">Memory Usage of Reduce 
Tasks</a></li>
-      <li><a href="#broadcasting-large-variables" 
id="markdown-toc-broadcasting-large-variables">Broadcasting Large 
Variables</a></li>
-      <li><a href="#data-locality" id="markdown-toc-data-locality">Data 
Locality</a></li>
+  <li><a href="#other-considerations">Other Considerations</a>    <ul>
+      <li><a href="#level-of-parallelism">Level of Parallelism</a></li>
+      <li><a href="#memory-usage-of-reduce-tasks">Memory Usage of Reduce 
Tasks</a></li>
+      <li><a href="#broadcasting-large-variables">Broadcasting Large 
Variables</a></li>
+      <li><a href="#data-locality">Data Locality</a></li>
     </ul>
   </li>
-  <li><a href="#summary" id="markdown-toc-summary">Summary</a></li>
+  <li><a href="#summary">Summary</a></li>
 </ul>
 
 <p>Because of the in-memory nature of most Spark computations, Spark programs can be bottlenecked
@@ -194,9 +194,9 @@ in the AllScalaRegistrar from the <a href="https://github.com/twitter/chill">Twi
 
 <p>To register your own custom classes with Kryo, use the <code>registerKryoClasses</code> method.</p>
 
-<div class="highlight"><pre><code class="language-scala" 
data-lang="scala"><span class="k">val</span> <span class="n">conf</span> <span 
class="k">=</span> <span class="k">new</span> <span 
class="nc">SparkConf</span><span class="o">().</span><span 
class="n">setMaster</span><span class="o">(...).</span><span 
class="n">setAppName</span><span class="o">(...)</span>
+<figure class="highlight"><pre><code class="language-scala" 
data-lang="scala"><span></span><span class="k">val</span> <span 
class="n">conf</span> <span class="k">=</span> <span class="k">new</span> <span 
class="nc">SparkConf</span><span class="o">().</span><span 
class="n">setMaster</span><span class="o">(...).</span><span 
class="n">setAppName</span><span class="o">(...)</span>
 <span class="n">conf</span><span class="o">.</span><span 
class="n">registerKryoClasses</span><span class="o">(</span><span 
class="nc">Array</span><span class="o">(</span><span 
class="n">classOf</span><span class="o">[</span><span 
class="kt">MyClass1</span><span class="o">],</span> <span 
class="n">classOf</span><span class="o">[</span><span 
class="kt">MyClass2</span><span class="o">]))</span>
-<span class="k">val</span> <span class="n">sc</span> <span class="k">=</span> 
<span class="k">new</span> <span class="nc">SparkContext</span><span 
class="o">(</span><span class="n">conf</span><span 
class="o">)</span></code></pre></div>
+<span class="k">val</span> <span class="n">sc</span> <span class="k">=</span> 
<span class="k">new</span> <span class="nc">SparkContext</span><span 
class="o">(</span><span class="n">conf</span><span 
class="o">)</span></code></pre></figure>
 
 <p>The <a href="https://github.com/EsotericSoftware/kryo">Kryo documentation</a> describes more advanced
 registration options, such as adding custom serialization code.</p>
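
Decoded from the highlight spans above, the registration snippet is three lines of Scala.
A self-contained sketch for reference (MyClass1/MyClass2 are the placeholder classes from
the snippet itself; the local[2] master, the app name, and the explicit spark.serializer
line are assumptions added here so the example runs standalone; the tuning guide selects
Kryo via that property, but the line is not part of the diffed snippet):

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder application classes, as in the docs snippet.
    class MyClass1
    class MyClass2

    object KryoRegistration {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("KryoRegistration")
        // Assumption: Kryo must be selected as the serializer for the
        // registrations below to take effect.
        conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        // Registered classes are written with compact identifiers instead of
        // their full class names, which shrinks serialized output.
        conf.registerKryoClasses(Array(classOf[MyClass1], classOf[MyClass2]))
        val sc = new SparkContext(conf)
        sc.stop()
      }
    }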

