Are you looking at the driver log (e.g., Shark's)? I see a ton of
information in the INFO category on what query is being started, what
stage is starting, and which executors work is sent to. So I'm not sure
if you're saying you see all that and you need more, or that you're
not seeing this type of information at all.
One thing we ran into was that there was another log4j.properties earlier
in the classpath. For us, it was in our MapR/Hadoop conf.
If that is the case, something like the following could help you track it
down. The only thing to watch out for is that you might have to walk up the
classloader hierarchy.
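Something along these lines, in Scala; the resource name log4j.properties
is an assumption (use log4j.xml if that is what you ship):

    import scala.collection.JavaConverters._

    // Print every log4j.properties visible from each classloader in the
    // chain; the first URL listed is usually the one log4j actually loads.
    var cl: ClassLoader = Thread.currentThread().getContextClassLoader
    while (cl != null) {
      println(cl)
      cl.getResources("log4j.properties").asScala.foreach(url => println("  " + url))
      cl = cl.getParent // walk up the classloader hierarchy
    }

Run from the driver, this prints which jars or conf directories provide
the file, in lookup order, so a Hadoop conf directory shadowing Spark's
own file shows up immediately.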
We changed the log level to DEBUG by replacing every INFO with DEBUG in
/root/ephemeral-hdfs/conf/log4j.properties and propagating it to the
cluster. There is some DEBUG output visible in both master and worker, but
nothing really interesting regarding stages or scheduling. Since we
expected a little more than that, we are wondering what we are missing.
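For reference, in Spark's stock log4j.properties template the change boils
down to the root logger line; the console appender below is roughly that
template's default and may differ from what ships in the ephemeral-hdfs conf:

    # was: log4j.rootCategory=INFO, console
    log4j.rootCategory=DEBUG, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

Alternatively, if only scheduling is of interest, leaving the root at INFO
and adding log4j.logger.org.apache.spark.scheduler=DEBUG keeps the rest of
the output quieter.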
If you're using the spark-ec2 scripts, you may have to change
/root/ephemeral-hdfs/conf/log4j.properties or something like that, as that
is added to the classpath before Spark's own conf.
On Wed, Jun 25, 2014 at 6:10 PM, Tobias Pfeiffer wrote:
I have a log4j.xml in src/main/resources with
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
[...]
and that is included in the jar I package with `sbt assembly`. That
works fine for me, at least on the driver.
Tobias
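For anyone reading along, a complete log4j.xml of that shape could look
roughly like this (the console appender and WARN level are made-up
placeholders, not the configuration elided above):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <!-- illustrative sketch only; appender and level are assumptions -->
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
      <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
          <param name="ConversionPattern" value="%d{HH:mm:ss} %p %c{1}: %m%n"/>
        </layout>
      </appender>
      <root>
        <priority value="WARN"/>
        <appender-ref ref="console"/>
      </root>
    </log4j:configuration>

Since everything under src/main/resources lands at the root of the
assembled jar, log4j picks the file up from the classpath when the driver
starts.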
On Wed, Jun 25, 2014 at 2:25 PM, Philip Limbeck wrote:
Hi!
According to
https://spark.apache.org/docs/0.9.0/configuration.html#configuring-logging,
changing the log level is just a matter of creating a log4j.properties (which
is in the classpath of Spark) and changing the log level there for the root
logger. I did these steps on every node in the cluster (master and workers).