These settings don't control what happens to stderr, right? What goes to
stderr is up to the process that invoked the driver. You may wish to
configure log4j to log to files instead.
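
For example, a minimal log4j.properties along these lines routes log output
to a size-bounded rolling file instead of the console/stderr (the path and
sizes below are only illustrative):

  # send everything to a rolling file instead of the console/stderr
  log4j.rootCategory=INFO, rolling
  log4j.appender.rolling=org.apache.log4j.RollingFileAppender
  # example path -- point this wherever your worker nodes have space
  log4j.appender.rolling.File=/var/log/spark/executor.log
  log4j.appender.rolling.MaxFileSize=10MB
  log4j.appender.rolling.MaxBackupIndex=5
  log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
  log4j.appender.rolling.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

(Exactly how that file reaches the executors depends on your deployment;
dropping it into each node's Spark conf directory is usually simplest.)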

On Wed, Nov 12, 2014 at 8:15 PM, Nguyen, Duc <duc.ngu...@pearson.com> wrote:
> I've also tried setting the aforementioned properties using
> System.setProperty() as well as on the command line while submitting the job
> with --conf key=value (a sketch of the invocation is below), all to no avail.
> When I go to the Spark UI, click on that particular streaming job, and then
> the "Environment" tab, I can see the properties are correctly set. But
> regardless of what I've tried, the "stderr" log file on the worker nodes does
> not roll and continues to grow, eventually filling the disk to 100% and
> crashing the cluster. Has anyone else encountered this? Anyone?
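>
> For reference, a sketch of the submit invocation (the main class and
> application jar below are placeholders, not the real ones):
>
>   spark-submit --class com.example.MyStreamingApp \
>     --conf spark.executor.logs.rolling.strategy=size \
>     --conf spark.executor.logs.rolling.size.maxBytes=1024 \
>     --conf spark.executor.logs.rolling.maxRetainedFiles=3 \
>     my-streaming-app.jar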
>
>
>
> On Fri, Nov 7, 2014 at 3:35 PM, Nguyen, Duc <duc.ngu...@pearson.com> wrote:
>>
>> We are running Spark Streaming jobs (version 1.1.0). After a sufficient
>> amount of time, the stderr file grows until the disk is 100% full and the
>> cluster crashes. I've read this
>>
>> https://github.com/apache/spark/pull/895
>>
>> and also read this
>>
>> http://spark.apache.org/docs/latest/configuration.html#spark-streaming
>>
>>
>> So I've tried the following in an attempt to get the stderr log file
>> to roll:
>>
>> sparkConf.set("spark.executor.logs.rolling.strategy", "size")
>>             .set("spark.executor.logs.rolling.size.maxBytes", "1024")
>>             .set("spark.executor.logs.rolling.maxRetainedFiles", "3")
>>
>>
>> Yet the file does not roll and continues to grow. Am I missing something
>> obvious?
>>
>>
>> thanks,
>> Duc
>>
>
>
