---------- Forwarded message ----------
From: Tim Chen <t...@mesosphere.io>
Date: Thu, May 28, 2015 at 10:49 AM
Subject: Re: [Streaming] Configure executor logging on Mesos
To: Gerard Maas <gerard.m...@gmail.com>


Hi Gerard,

The log line you referred to is not Spark logging but Mesos' own logging,
which uses glog.

Our own executor logs should contain only a few lines, though.

Most of the log lines you'll see come from Spark, and they can be controlled
by specifying a log4j.properties file to be downloaded with your Mesos task.
Alternatively, if you are downloading the Spark executor via spark.executor.uri,
you can include log4j.properties in that tarball.
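For illustration (not from the original thread), a minimal log4j.properties modeled on Spark's bundled log4j-defaults.properties could raise the threshold to WARN to keep the stderr dump small:

```
# Quiet executor logging: emit WARN and above to stderr
# (log4j 1.x syntax, as used by Spark of this era)
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

Assuming the file ends up in the executor's sandbox working directory, one documented way to make the executor JVM pick it up is to submit with
--conf spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties.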

I think we probably need some more configuration options for the Spark
scheduler to pick up extra files to be downloaded into the sandbox.

Tim

On Thu, May 28, 2015 at 6:46 AM, Gerard Maas <gerard.m...@gmail.com> wrote:

> Hi,
>
> I'm trying to control the verbosity of the logs on the Mesos executors
> with no luck so far. The default behaviour is INFO-level logging dumped to
> stderr, with unbounded growth that gets too big at some point.
>
> I noticed that when the executor is instantiated, it locates a default log
> configuration in the spark assembly:
>
> I0528 13:36:22.958067 26890 exec.cpp:206] Executor registered on slave
> 20150528-063307-780930314-5050-8152-S5
> Spark assembly has been built with Hive, including Datanucleus jars on
> classpath
> Using Spark's default log4j profile:
> org/apache/spark/log4j-defaults.properties
>
> So, nothing I provide in my job jar files (I also tried
> spark.executor.extraClassPath=log4j.properties) takes effect in the
> executor's configuration.
>
> How should I configure the log on the executors?
>
> thanks, Gerard.
>
