This is an automated email from the ASF dual-hosted git repository.

tgraves pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
     new a5d0eaf  [SPARK-595][DOCS] Add local-cluster mode option in Documentation
a5d0eaf is described below

commit a5d0eafa324279e4516bd4c6b544b0cc7dbbd4e3
Author: Yuto Akutsu <yuto.aku...@jp.nttdata.com>
AuthorDate: Fri Aug 6 09:26:13 2021 -0500

    [SPARK-595][DOCS] Add local-cluster mode option in Documentation
    
    ### What changes were proposed in this pull request?
    
    Add local-cluster mode option to submitting-applications.md
    
    ### Why are the changes needed?
    
    Helps users find and use this option for unit tests.
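
    For reference, a minimal sketch (not part of this patch) of trying the new option from a built Spark checkout; the `[2,1,1024]` values (2 workers, 1 core per worker, 1024 MiB of memory per worker) are illustrative:

    ```sh
    # Assumes a built Spark distribution; the worker/core/memory values are illustrative.
    ./bin/spark-shell --master "local-cluster[2,1,1024]"
    ```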
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, docs changed.
    
    ### How was this patch tested?
    
    `SKIP_API=1 bundle exec jekyll build`
    <img width="460" alt="docchange" src="https://user-images.githubusercontent.com/87687356/127125380-6beb4601-7cf4-4876-b2c6-459454ce2a02.png">
    
    Closes #33537 from yutoacts/SPARK-595.
    
    Lead-authored-by: Yuto Akutsu <yuto.aku...@jp.nttdata.com>
    Co-authored-by: Yuto Akutsu <yuto.aku...@nttdata.com>
    Co-authored-by: Yuto Akutsu <87687356+yutoa...@users.noreply.github.com>
    Signed-off-by: Thomas Graves <tgra...@apache.org>
    (cherry picked from commit 41b011e416286374e2e8e8dea36ba79f4c403040)
    Signed-off-by: Thomas Graves <tgra...@apache.org>
---
 docs/submitting-applications.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/submitting-applications.md b/docs/submitting-applications.md
index 0319859..402dd06 100644
--- a/docs/submitting-applications.md
+++ b/docs/submitting-applications.md
@@ -162,9 +162,10 @@ The master URL passed to Spark can be in one of the following formats:
 <tr><th>Master URL</th><th>Meaning</th></tr>
 <tr><td> <code>local</code> </td><td> Run Spark locally with one worker thread (i.e. no parallelism at all). </td></tr>
 <tr><td> <code>local[K]</code> </td><td> Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine). </td></tr>
-<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker threads and F maxFailures (see <a href="configuration.html#scheduling">spark.task.maxFailures</a> for an explanation of this variable) </td></tr>
+<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker threads and F maxFailures (see <a href="configuration.html#scheduling">spark.task.maxFailures</a> for an explanation of this variable). </td></tr>
 <tr><td> <code>local[*]</code> </td><td> Run Spark locally with as many worker threads as logical cores on your machine.</td></tr>
 <tr><td> <code>local[*,F]</code> </td><td> Run Spark locally with as many worker threads as logical cores on your machine and F maxFailures.</td></tr>
+<tr><td> <code>local-cluster[N,C,M]</code> </td><td> Local-cluster mode is only for unit tests. It emulates a distributed cluster in a single JVM with N workers, C cores per worker, and M MiB of memory per worker.</td></tr>
 <tr><td> <code>spark://HOST:PORT</code> </td><td> Connect to the given <a href="spark-standalone.html">Spark standalone
         cluster</a> master. The port must be whichever one your master is configured to use, which is 7077 by default.
 </td></tr>
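
For reference, a hedged sketch of exercising two of the documented master URLs with spark-submit; the SparkPi example class ships with Spark, while the jar path matches a 3.2.x distribution and is illustrative:

```sh
# Local mode with 4 worker threads.
./bin/spark-submit --master "local[4]" \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.12-3.2.0.jar 100

# Standalone cluster master on the default port (7077).
./bin/spark-submit --master "spark://localhost:7077" \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.12-3.2.0.jar 100
```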

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
