Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15563#discussion_r86805324
  
    --- Diff: docs/running-on-yarn.md ---
    @@ -495,6 +495,15 @@ To use a custom metrics.properties for the application master and executors, upd
       name matches both the include and the exclude pattern, this file will be excluded eventually.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.hadoop.callerContext</code></td>
    --- End diff --
    
    This doesn't match the actual config above, `spark.upstreamApp.callerContext`. How about `spark.log.callerContext`?
    
    If I'm running Spark in standalone mode with master/worker and reading from HDFS, the caller context would still work on the HDFS side, right? So this isn't just a Spark-on-YARN config; it should move to the general configuration section, with a note that it applies to YARN/HDFS.
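    For illustration, a minimal sketch of how a user might set the config under the suggested name. This assumes the rename to `spark.log.callerContext` is adopted; the application name and jar path are placeholders, not from this PR:
    
    ```shell
    # Hypothetical usage, assuming the suggested config name spark.log.callerContext.
    # The value would be recorded in the HDFS audit log (and YARN RM log) as the
    # caller context for operations issued by this application.
    ./bin/spark-submit \
      --master yarn \
      --conf spark.log.callerContext=myUpstreamApp \
      --class com.example.MyApp \
      path/to/my-app.jar
    ```
    
    The same `--conf` would apply unchanged in standalone mode when the job reads from HDFS, which is the point about moving it out of the YARN-only docs.
    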

