This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.5
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.5 by this push:
     new 7d1e77c3072e [MINOR][DOCS] Make the link of spark properties with YARN more accurate
7d1e77c3072e is described below

commit 7d1e77c3072e278d2552a57746bf3ab7abc58c41
Author: beliefer <belie...@163.com>
AuthorDate: Wed Apr 10 20:33:43 2024 -0700

    [MINOR][DOCS] Make the link of spark properties with YARN more accurate
    
    ### What changes were proposed in this pull request?
    This PR proposes to make the link to the YARN Spark properties more accurate.
    
    ### Why are the changes needed?
    Currently, the `YARN Spark Properties` link points only to the `running-on-yarn.html` page.
    We should link to the specific anchor instead.
    
    ### Does this PR introduce _any_ user-facing change?
    'Yes'.
    The anchored link is more convenient for readers to follow.
    
    ### How was this patch tested?
    N/A
    
    ### Was this patch authored or co-authored using generative AI tooling?
    'No'.
    
    Closes #45994 from beliefer/accurate-yarn-link.
    
    Authored-by: beliefer <belie...@163.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
    (cherry picked from commit aca3d1025e2d85c02737456bfb01163c87ca3394)
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 docs/job-scheduling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 0875bd5558e5..8f10d0788e63 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -57,7 +57,7 @@ Resource allocation can be configured as follows, based on the cluster type:
   on the cluster (`spark.executor.instances` as configuration property), while `--executor-memory`
   (`spark.executor.memory` configuration property) and `--executor-cores` (`spark.executor.cores` configuration
   property) control the resources per executor. For more information, see the
-  [YARN Spark Properties](running-on-yarn.html).
+  [YARN Spark Properties](running-on-yarn.html#spark-properties).
 
 A second option available on Mesos is _dynamic sharing_ of CPU cores. In this mode, each Spark application
 still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the

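For reference, the executor settings described in the patched paragraph map onto a spark-submit invocation roughly like the sketch below; the executor count, memory size, core count, application class, and jar name are illustrative assumptions, not part of this commit:

    # Run on YARN with 10 executors (spark.executor.instances), each with
    # 4g of memory (spark.executor.memory) and 2 cores (spark.executor.cores).
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 10 \
      --executor-memory 4g \
      --executor-cores 2 \
      --class org.example.MyApp my-app.jar   # hypothetical class and jar

The same values can also be passed as configuration properties, e.g. --conf spark.executor.instances=10; those properties are documented under the #spark-properties anchor of running-on-yarn.html, which is the anchor this commit links to.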

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
