If YARN log aggregation is enabled, the logs are moved to HDFS once the
application finishes. You can use yarn logs -applicationId <applicationId> to
view them.
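
For example, to pull the aggregated logs for one of the applications listed
below (redirecting to a file is just an illustration):

  yarn logs -applicationId application_1467068598418_0209 > app_0209.log

Note that the yarn logs command only returns output after the application has
finished and its logs have been aggregated.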

On Wed, Jul 20, 2016 at 8:58 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> What are the values of yarn.log-aggregation.retain-seconds
> and yarn.log-aggregation-enable?
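
Both properties are set in yarn-site.xml. A minimal sketch with illustrative
values (604800 seconds keeps aggregated logs for seven days):

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>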
>
> Which Hadoop release are you using?
>
> Thanks
>
> On Tue, Jul 19, 2016 at 3:23 PM, Rachana Srivastava <
> rachana.srivast...@markmonitor.com> wrote:
>
>> I am trying to find the root cause of a recent Spark application failure in
>> production. While a Spark application is running, I can check the directory
>> named by the NodeManager's yarn.nodemanager.log-dir property to find the
>> Spark executor container logs.
>>
>> That log directory contains entries for both of the running Spark
>> applications. Here is the listing:
>>
>> drwx--x--- 3 yarn yarn  51 Jul 19 09:04 application_1467068598418_0209
>> drwx--x--- 5 yarn yarn 141 Jul 19 09:04 application_1467068598418_0210
>>
>> But when an application is killed, both application log directories are
>> deleted automatically. I have set all of the log retention settings in YARN
>> to very large values, yet these logs are still deleted as soon as the Spark
>> applications crash.
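
One setting worth checking when aggregation is disabled: the NodeManager's
DeletionService removes local container log directories shortly after an
application exits. A minimal yarn-site.xml sketch that delays that cleanup
(the 3600-second value is only an illustration):

  <property>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>3600</value>
  </property>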
>>
>> Question: How can we retain these Spark application logs in YARN for
>> debugging when a Spark application crashes for some reason?
>>
>
>


-- 
Best Regards,
Ayan Guha
