Hi,

I figured out that in standalone mode these configurations should be
added to the worker process's config, e.g. by adding the following
line to spark-env.sh:

SPARK_WORKER_OPTS="-Dspark.executor.logs.rolling.strategy=time
-Dspark.executor.logs.rolling.time.interval=daily
-Dspark.executor.logs.rolling.maxRetainedFiles=3"
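
The same mechanism should also support size-based rolling; a sketch I
haven't tested myself, assuming the 1.1-era property name
spark.executor.logs.rolling.size.maxBytes (value in bytes) and an
arbitrary 128 MB threshold:

# spark-env.sh: roll executor logs by size instead of by time,
# keeping only the 3 most recent rolled files
SPARK_WORKER_OPTS="-Dspark.executor.logs.rolling.strategy=size \
  -Dspark.executor.logs.rolling.size.maxBytes=134217728 \
  -Dspark.executor.logs.rolling.maxRetainedFiles=3"

Either way the workers need a restart to pick up spark-env.sh, and the
rolled files should then appear next to stderr in the same app work
directory.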

Maybe in YARN mode spark-defaults.conf alone would be sufficient, but
I haven't tested that.
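
If you'd rather not change the global defaults, the same properties
can also be passed per application at submit time; a sketch (the class
name and jar are placeholders):

spark-submit \
  --conf spark.executor.logs.rolling.strategy=time \
  --conf spark.executor.logs.rolling.time.interval=daily \
  --conf spark.executor.logs.rolling.maxRetainedFiles=3 \
  --class com.example.MyStreamingApp myapp.jar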

On Tue, Nov 4, 2014 at 12:24 PM, Ji ZHANG <zhangj...@gmail.com> wrote:
> Hi,
>
> I'm using Spark Streaming 1.1, and the following log keeps growing:
>
> /opt/spark-1.1.0-bin-cdh4/work/app-20141029175309-0005/2/stderr
>
> I think it is the executor log, so I set up the following options in
> spark-defaults.conf:
>
> spark.executor.logs.rolling.strategy time
> spark.executor.logs.rolling.time.interval daily
> spark.executor.logs.rolling.maxRetainedFiles 10
>
> I can see these options on the Web UI, so I suppose they are in effect.
> However, the stderr file is still not rotated.
>
> Am I doing something wrong?
>
> Thanks.
>
> --
> Jerry



-- 
Jerry
