Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3607#discussion_r22063984

    --- Diff: docs/running-on-yarn.md ---
    @@ -92,6 +100,13 @@ Most of the configs are the same for Spark on YARN as for other deployment modes
     </td>
     </tr>
     <tr>
    +  <td><code>spark.yarn.am.memoryOverhead</code></td>
    +  <td>AM memory * 0.07, with minimum of 384 </td>
    --- End diff --

    I find it more confusing. It's immediately intuitive that `spark.yarn.am.memoryOverhead` is the overhead applied on top of `spark.yarn.am.memory`, since they share the same prefix, but it's not at all clear that `spark.driver.memoryOverhead` is related (assuming the user does not have expertise in how the different deploy modes are architected).
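    For context, a minimal sketch of what the documented default means and how the new property would be set: the AM overhead defaults to 7% of `spark.yarn.am.memory`, floored at 384 MB, and the `spark.yarn.am.*` settings only matter in yarn-client mode, where the AM runs separately from the driver. The memory values below are illustrative assumptions, not Spark defaults.

        import org.apache.spark.SparkConf

        // Sketch of the documented default: 7% of the AM memory, with a 384 MB floor.
        // amMemoryMb is an example value, not the Spark default.
        val amMemoryMb = 1024
        val defaultOverheadMb = math.max((amMemoryMb * 0.07).toInt, 384)  // 384 in this case

        // Overriding the default explicitly in yarn-client mode.
        // The 512 MB override is only an example figure.
        val conf = new SparkConf()
          .setMaster("yarn-client")
          .set("spark.yarn.am.memory", "1g")
          .set("spark.yarn.am.memoryOverhead", "512")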