This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 51e8ca3635d [SPARK-40699][DOCS] Supplement undocumented yarn configurations in documentation
51e8ca3635d is described below

commit 51e8ca3635d62d470721e7ce0f7e868b6b57334c
Author: Qian.Sun <qian.sun2...@gmail.com>
AuthorDate: Sun Oct 9 10:10:06 2022 -0500

    [SPARK-40699][DOCS] Supplement undocumented yarn configurations in documentation
    
    ### What changes were proposed in this pull request?
    
    This PR supplements undocumented YARN configurations in the documentation.
    
    ### Why are the changes needed?
    
    Helps users look up YARN configurations in the documentation instead of reading the code.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, more configurations in documentation.
    
    ### How was this patch tested?
    
    Passes the existing GA (GitHub Actions) checks.
    
    Closes #38150 from dcoliversun/SPARK-40699.
    
    Authored-by: Qian.Sun <qian.sun2...@gmail.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 docs/running-on-yarn.md | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index ea117f31357..4112c71cdf9 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -486,6 +486,20 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>3.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.yarn.am.tokenConfRegex</code></td>
+  <td>(none)</td>
+  <td>
+    This config is only supported when the Hadoop version is 2.9+ or 3.x (e.g., when using the Hadoop 3.x profile).
+    The value of this config is a regular expression used to select config entries from the job's configuration file (e.g., hdfs-site.xml)
+    and send them to the RM, which uses them when renewing delegation tokens. A typical use case of this feature is to support delegation
+    tokens in an environment where a YARN cluster needs to talk to multiple downstream HDFS clusters, where the YARN RM may not have the configs
+    (e.g., dfs.nameservices, dfs.ha.namenodes.*, dfs.namenode.rpc-address.*) needed to connect to these clusters.
+    In this scenario, Spark users can specify the config value to be <code>^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$</code> to parse
+    these HDFS configs from the job's local configuration files. This config is very similar to <code>mapreduce.job.send-token-conf</code>. Please check YARN-5910 for more details.
+  </td>
+  <td>3.3.0</td>
+</tr>
 <tr>
   <td><code>spark.yarn.executor.failuresValidityInterval</code></td>
   <td>(none)</td>
@@ -632,6 +646,33 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>0.9.0</td>
 </tr>
+<tr>
+  <td><code>spark.yarn.clientLaunchMonitorInterval</code></td>
+  <td><code>1s</code></td>
+  <td>
+    Interval between requests for the status of the client mode AM when starting the app.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.yarn.includeDriverLogsLink</code></td>
+  <td><code>false</code></td>
+  <td>
+    In cluster mode, whether the client application report includes links to the driver
+    container's logs. This requires polling the ResourceManager's REST API, so it
+    places some additional load on the RM.
+  </td>
+  <td>3.1.0</td>
+</tr>
+<tr>
+  <td><code>spark.yarn.unmanagedAM.enabled</code></td>
+  <td><code>false</code></td>
+  <td>
+    In client mode, whether to launch the Application Master service as part of the client,
+    using an unmanaged AM.
+  </td>
+  <td>3.0.0</td>
+</tr>
 </table>
 
 #### Available patterns for SHS custom executor log URL
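
Not part of the commit, but as a quick illustration of what the example pattern from the new `spark.yarn.am.tokenConfRegex` entry actually selects, the sketch below applies that regex to a few hypothetical `hdfs-site.xml` keys (the key list is invented for illustration; only the pattern comes from the docs above):

```python
import re

# Example pattern quoted in the spark.yarn.am.tokenConfRegex documentation:
# it selects the HDFS nameservice/HA entries the YARN RM needs in order to
# renew delegation tokens against downstream HDFS clusters.
pattern = re.compile(r"^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$")

# Hypothetical config keys as they might appear in a job's hdfs-site.xml.
conf_keys = [
    "dfs.nameservices",
    "dfs.ha.namenodes.cluster1",
    "dfs.namenode.rpc-address.cluster1.nn1",
    "dfs.replication",   # unrelated to token renewal: not matched
    "dfs.blocksize",     # unrelated to token renewal: not matched
]

# Keep only the entries that would be sent to the RM.
matched = [k for k in conf_keys if pattern.match(k)]
print(matched)
# → ['dfs.nameservices', 'dfs.ha.namenodes.cluster1', 'dfs.namenode.rpc-address.cluster1.nn1']
```

Note that the `.` characters in the pattern are regex wildcards rather than escaped literal dots; in practice they still match the literal dots in the config keys.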

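For reference, the four settings documented by this commit could be set together in `spark-defaults.conf` along these lines (a hedged sketch; the values here are illustrative, not recommendations, and only the config names come from the diff above):

```properties
# Select HDFS token-related configs to forward to the RM (Hadoop 2.9+/3.x only)
spark.yarn.am.tokenConfRegex           ^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$
# Poll the client mode AM status every 2 seconds during app startup
spark.yarn.clientLaunchMonitorInterval 2s
# Include driver container log links in the client application report (cluster mode)
spark.yarn.includeDriverLogsLink       true
# Run the AM inside the client process (client mode)
spark.yarn.unmanagedAM.enabled         true
```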

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
