Github user mridulm commented on the pull request:

    https://github.com/apache/spark/pull/148#issuecomment-39653130
  
    Just to be clear, the current status of the patch seems to be:
    a) If the user specified a logging config, use that.
    b) If not, use a Spark config built into the jar.
    
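    The (a)/(b) behavior described above amounts to a simple fallback. A minimal sketch in Scala, assuming hypothetical names (`resolveLog4jConfig` and the bundled resource path are illustrative, not the actual Spark YARN code):

```scala
import java.io.File

// Hypothetical sketch of the fallback described above:
// (a) use the user-specified log4j config if it points at a real file,
// (b) otherwise fall back to a default bundled in the Spark jar.
def resolveLog4jConfig(userConfig: Option[String]): String =
  userConfig
    .filter(path => new File(path).isFile)                    // (a) user config, if valid
    .getOrElse("org/apache/spark/log4j-defaults.properties") // (b) jar resource (illustrative path)

// No user config supplied -> bundled default is used
println(resolveLog4jConfig(None))
```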
    (b) seems to be different from the original intent of the PR, but I guess 
if we can't merge the container logging config with ours, it can't be helped.
    What about simply copying the existing Hadoop log4j container config into 
(b) and expanding the variables, as was done in the first PR, when (a) is 
missing? (That would have the nice property of logs going to syslog in that 
case, no? Or won't that work?)
    Also, is this distinct from what happens in the master, or does it apply 
there too?
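    For concreteness, the "expanding the variables" idea could look something like the following sketch, which substitutes `${var}` placeholders in a copied container-log4j template. The property names and values here are illustrative assumptions, not taken from the PR:

```scala
import scala.util.matching.Regex

// Hypothetical template line copied from a Hadoop container-log4j.properties
val template = "log4j.appender.CLA.File=${yarn.app.container.log.dir}/syslog"

// Illustrative variable bindings that the launcher would know at submit time
val vars = Map("yarn.app.container.log.dir" -> "/tmp/logs/container_01")

// Replace each ${name} with its bound value; leave unknown names untouched
val expanded = "\\$\\{([^}]+)\\}".r.replaceAllIn(
  template,
  m => Regex.quoteReplacement(vars.getOrElse(m.group(1), m.matched)))

println(expanded)
```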

