dongjoon-hyun commented on code in PR #45982:
URL: https://github.com/apache/spark/pull/45982#discussion_r1559997742
##########
docs/job-scheduling.md:
##########
@@ -53,7 +53,11 @@ Resource allocation can be configured as follows, based on the cluster type:
     on the cluster (`spark.executor.instances` as configuration property), while `--executor-memory`
     (`spark.executor.memory` configuration property) and `--executor-cores` (`spark.executor.cores`
     configuration property) control the resources per executor. For more information, see the
-    [YARN Spark Properties](running-on-yarn.html).
+    [YARN Spark Properties](running-on-yarn.html#spark-properties).

Review Comment:
   Yes, please handle this separately, @beliefer . As you know, I'm the release manager of Apache Spark 3.4.3 (due next Monday). I can backport this fix to branch-3.4 as part of Apache Spark 3.4.3.
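For context on the fix itself: the Spark docs are built with Jekyll/kramdown, which auto-generates heading IDs by lowercasing the heading text and replacing spaces with hyphens. The diff adds the `#spark-properties` fragment so the link jumps directly to the "Spark Properties" section of the YARN page instead of landing at the top. A minimal sketch of the mechanism (heading text and anchor assumed to match the actual `running-on-yarn.md` page):

```markdown
<!-- In running-on-yarn.md, this heading is rendered with id="spark-properties" -->
## Spark Properties

<!-- So a cross-page link can target that section directly: -->
[YARN Spark Properties](running-on-yarn.html#spark-properties)
```

Without the fragment, the old link still resolved but required the reader to scroll to find the relevant section.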