If you don't specify your own log4j.properties, Spark will load the
default one (from
core/src/main/resources/org/apache/spark/log4j-defaults.properties,
which ends up being packaged with the Spark assembly).
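
For reference, that defaults file looks roughly like this (a sketch, not the
exact contents; note the console appender targets System.err, which is why the
output ends up in the container's stderr file under the YARN log dir):

    # approximate contents of log4j-defaults.properties
    log4j.rootCategory=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n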

You can easily override the config file if you want to, though; check
the "Debugging" section of the "Running on YARN" docs.
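
Something along these lines should work (app class, jar, and paths are
placeholders; the exact incantation is in that Debugging section):

    # ship a custom log4j.properties and point both JVMs at it
    spark-submit --master yarn-cluster \
      --files /path/to/log4j.properties \
      --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
      --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
      --class com.example.MyApp myapp.jar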

On Fri, Dec 19, 2014 at 12:37 AM, WangTaoTheTonic
<barneystin...@aliyun.com> wrote:
> Hi guys,
>
> I recently ran Spark on YARN and found that Spark didn't set any log4j properties
> file in its configuration or code, and the log4j output was being written to the stderr
> file under ${yarn.nodemanager.log-dirs}/application_${appid}.
>
> I'd like to know which side (Spark or Hadoop) controls the appender. I found a
> related discussion here:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-logging-strategy-on-YARN-td8751.html,
> but I think the Spark code has changed a lot since then.
>
> Could anyone offer some guidance? Thanks.
>



-- 
Marcelo

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
