Re: Who manages the log4j appender while running Spark on YARN?

2014-12-22 Thread WangTaoTheTonic
After some discussions with the Hadoop folks, I now understand how the mechanism
works. If we don't add -Dlog4j.configuration to the Java options of the container
(AM or executors), it will use the log4j.properties file (if any) found on the
container's classpath (extraClasspath plus yarn.application.classpath).
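
For illustration, here is a minimal log4j.properties of the kind a container
would pick up from its classpath (a sketch, not necessarily Spark's exact
default file; the console appender targeting System.err is what makes the log
output land in the container's stderr file):

    # Send all log output to a console appender writing to stderr
    log4j.rootCategory=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n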

If we want to customize our log4j configuration, we should add
spark.executor.extraJavaOptions=-Dlog4j.configuration=/path/to/log4j.properties
or
spark.yarn.am.extraJavaOptions=-Dlog4j.configuration=/path/to/log4j.properties
to the spark-defaults.conf file.
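
Note that log4j.configuration is interpreted as a URL, so when pointing at a
local file it is safer to spell out the file: scheme explicitly, and the file
must exist at that path on every node. A sketch, with /etc/spark/log4j.properties
as a placeholder path:

    # spark-defaults.conf (the path below is a placeholder)
    spark.executor.extraJavaOptions  -Dlog4j.configuration=file:/etc/spark/log4j.properties
    spark.yarn.am.extraJavaOptions   -Dlog4j.configuration=file:/etc/spark/log4j.properties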






Re: Who manages the log4j appender while running Spark on YARN?

2014-12-22 Thread Marcelo Vanzin
If you don't specify your own log4j.properties, Spark will load the
default one (from
core/src/main/resources/org/apache/spark/log4j-defaults.properties,
which ends up being packaged with the Spark assembly).

You can easily override the config file if you want to, though; check
the Debugging section of the Running on YARN docs.
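
For example, those docs describe shipping a custom config via spark-submit's
--files option; the file lands in each container's working directory, which is
on the classpath, so log4j finds it by name. A sketch (the class and jar names
are placeholders):

    # Distribute a custom log4j.properties to every container
    spark-submit --master yarn-cluster \
        --files /local/path/to/log4j.properties \
        --class com.example.MyApp \
        myapp.jar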

On Fri, Dec 19, 2014 at 12:37 AM, WangTaoTheTonic
barneystin...@aliyun.com wrote:
 Hi guys,

 I recently ran Spark on YARN and found that Spark didn't set any log4j
 properties file in configuration or code, and the log4j output was being
 written to the stderr file under
 ${yarn.nodemanager.log-dirs}/application_${appid}.

 I want to know which side (Spark or Hadoop) controls the appender. I found a
 related discussion here:
 http://apache-spark-user-list.1001560.n3.nabble.com/Spark-logging-strategy-on-YARN-td8751.html,
 but I think the Spark code has changed a lot since then.

 Could anyone offer some guidance? Thanks.









-- 
Marcelo




Who manages the log4j appender while running Spark on YARN?

2014-12-19 Thread WangTaoTheTonic
Hi guys, 

I recently ran Spark on YARN and found that Spark didn't set any log4j
properties file in configuration or code, and the log4j output was being
written to the stderr file under
${yarn.nodemanager.log-dirs}/application_${appid}.

I want to know which side (Spark or Hadoop) controls the appender. I found a
related discussion here:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-logging-strategy-on-YARN-td8751.html,
but I think the Spark code has changed a lot since then.

Could anyone offer some guidance? Thanks.




