Hi,
I figured out that in standalone mode these configurations need to be set on
the worker process, for example by adding the following line to
spark-env.sh:
SPARK_WORKER_OPTS="-Dspark.executor.logs.rolling.strategy=time -Dspark.executor.logs.rolling.time.interval=daily"
(Note the quotes: the value contains a space, so an unquoted assignment would break.)
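For completeness, a worker-side entry can also cap how many rolled log files are retained via spark.executor.logs.rolling.maxRetainedFiles (a real Spark property; the retention count of 7 below is just an illustrative value):

```shell
# spark-env.sh on each standalone worker: roll executor stderr/stdout
# daily and keep at most 7 rolled files per executor (value illustrative).
SPARK_WORKER_OPTS="-Dspark.executor.logs.rolling.strategy=time \
-Dspark.executor.logs.rolling.time.interval=daily \
-Dspark.executor.logs.rolling.maxRetainedFiles=7"
```

Restart the worker after editing spark-env.sh so the new JVM options take effect.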
Hi,
I'm using Spark Streaming 1.1, and I have the following log file that keeps growing:
/opt/spark-1.1.0-bin-cdh4/work/app-20141029175309-0005/2/stderr
I think it is an executor log, so I set the following option in
spark-defaults.conf:
spark.executor.logs.rolling.strategy time
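For reference, the time-based strategy is normally paired with an interval setting; a sketch of the relevant spark-defaults.conf lines (property names from Spark's configuration documentation, values illustrative) would be:

```shell
# spark-defaults.conf — executor log rolling by time
spark.executor.logs.rolling.strategy       time
spark.executor.logs.rolling.time.interval  daily
```

As the reply above notes, in standalone mode these settings only take effect if the worker JVM sees them, so putting them in spark-defaults.conf alone may not stop the stderr file from growing.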