This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 57b207774382 [SPARK-48240][DOCS] Replace `Local[..]` with `"Local[...]"` in the docs
57b207774382 is described below

commit 57b207774382e3a35345518ede5cfc028885f90b
Author: panbingkun <panbing...@baidu.com>
AuthorDate: Sat May 11 21:41:14 2024 +0900

    [SPARK-48240][DOCS] Replace `Local[..]` with `"Local[...]"` in the docs
    
    ### What changes were proposed in this pull request?
    This PR aims to replace `local[...]` with `"local[...]"` in the docs.
    
    ### Why are the changes needed?
    1. When I recently switched from `bash` to `zsh` and ran `./bin/spark-shell --master local[8]` locally, the following error was printed:
    <img width="570" alt="image" 
src="https://github.com/apache/spark/assets/15246973/d6ad0113-942a-4370-904e-70cb2780f818";>
    
    2. Some descriptions in the existing docs are already written as `--master "local[n]"`, e.g.:
    
https://github.com/apache/spark/blob/f699f556d8a09bb755e9c8558661a36fbdb42e73/docs/index.md?plain=1#L49
    
    3. The root cause is:
https://blog.peiyingchi.com/2017/03/20/spark-zsh-no-matches-found-local/
    <img width="942" alt="image" 
src="https://github.com/apache/spark/assets/15246973/11ff03b1-bc60-48e3-b55c-984cbc053cef";>
    
    ### Does this PR introduce _any_ user-facing change?
    Yes. With `zsh` becoming the mainstream shell, this avoids confusing Spark users when they submit apps with `./bin/spark-shell --master "local[n]" ...`, `./bin/spark-sql --master "local[n]" ...`, etc.
    
    ### How was this patch tested?
    Manually tested.
    Whether the user uses `bash` or `zsh`, the above `--master "local[n]"` command runs successfully as expected.
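
    As an additional illustration (commands assumed, not part of the patch), both shells pass the quoted value through unchanged, while `zsh` rejects the unquoted form:

    ```sh
    # Unquoted: zsh glob-expands the brackets and fails before running anything
    $ zsh -c 'echo ./bin/spark-shell --master local[8]'
    zsh: no matches found: local[8]

    # Quoted: both shells strip the quotes and pass the value through as-is
    $ zsh  -c 'echo ./bin/spark-shell --master "local[8]"'
    ./bin/spark-shell --master local[8]
    $ bash -c 'echo ./bin/spark-shell --master "local[8]"'
    ./bin/spark-shell --master local[8]
    ```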
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #46535 from panbingkun/SPARK-48240.
    
    Authored-by: panbingkun <panbing...@baidu.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 docs/configuration.md           |  4 ++--
 docs/quick-start.md             |  6 +++---
 docs/rdd-programming-guide.md   | 12 ++++++------
 docs/submitting-applications.md |  2 +-
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index c018b9f1fb7c..7884a2af60b2 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -91,7 +91,7 @@ Then, you can supply configuration values at runtime:
 ```sh
 ./bin/spark-submit \
   --name "My app" \
-  --master local[4] \
+  --master "local[4]" \
   --conf spark.eventLog.enabled=false \
   --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps" \
   myApp.jar
@@ -3750,7 +3750,7 @@ Also, you can modify or add configurations at runtime:
 {% highlight bash %}
 ./bin/spark-submit \
   --name "My app" \
-  --master local[4] \
+  --master "local[4]" \
   --conf spark.eventLog.enabled=false \
   --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps" \
   --conf spark.hadoop.abc.def=xyz \
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 366970cf66c7..5a03af98cd83 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -286,7 +286,7 @@ We can run this application using the `bin/spark-submit` script:
 {% highlight bash %}
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
-  --master local[4] \
+  --master "local[4]" \
   SimpleApp.py
 ...
 Lines with a: 46, Lines with b: 23
@@ -371,7 +371,7 @@ $ sbt package
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
   --class "SimpleApp" \
-  --master local[4] \
+  --master "local[4]" \
   
target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar
 ...
 Lines with a: 46, Lines with b: 23
@@ -452,7 +452,7 @@ $ mvn package
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
   --class "SimpleApp" \
-  --master local[4] \
+  --master "local[4]" \
   target/simple-project-1.0.jar
 ...
 Lines with a: 46, Lines with b: 23
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index f75bda0ffafb..cbbce4c08206 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -214,13 +214,13 @@ can be passed to the `--repositories` argument. For example, to run
 `bin/pyspark` on exactly four cores, use:
 
 {% highlight bash %}
-$ ./bin/pyspark --master local[4]
+$ ./bin/pyspark --master "local[4]"
 {% endhighlight %}
 
 Or, to also add `code.py` to the search path (in order to later be able to 
`import code`), use:
 
 {% highlight bash %}
-$ ./bin/pyspark --master local[4] --py-files code.py
+$ ./bin/pyspark --master "local[4]" --py-files code.py
 {% endhighlight %}
 
 For a complete list of options, run `pyspark --help`. Behind the scenes,
@@ -260,19 +260,19 @@ can be passed to the `--repositories` argument. For example, to run `bin/spark-s
 four cores, use:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4]
+$ ./bin/spark-shell --master "local[4]"
 {% endhighlight %}
 
 Or, to also add `code.jar` to its classpath, use:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4] --jars code.jar
+$ ./bin/spark-shell --master "local[4]" --jars code.jar
 {% endhighlight %}
 
 To include a dependency using Maven coordinates:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4] --packages "org.example:example:0.1"
+$ ./bin/spark-shell --master "local[4]" --packages "org.example:example:0.1"
 {% endhighlight %}
 
 For a complete list of options, run `spark-shell --help`. Behind the scenes,
@@ -781,7 +781,7 @@ One of the harder things about Spark is understanding the scope and life cycle o
 
 #### Example
 
-Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = local[n]`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
+Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = "local[n]"`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
 
 <div class="codetabs">
 
diff --git a/docs/submitting-applications.md b/docs/submitting-applications.md
index bf02ec137e20..3a99151768a1 100644
--- a/docs/submitting-applications.md
+++ b/docs/submitting-applications.md
@@ -91,7 +91,7 @@ run it with `--help`. Here are a few examples of common options:
 # Run application locally on 8 cores
 ./bin/spark-submit \
   --class org.apache.spark.examples.SparkPi \
-  --master local[8] \
+  --master "local[8]" \
   /path/to/examples.jar \
   100
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
