Hi Fawze,
Thank you for the link, but that is exactly what I am doing.
I think this is related to the
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
setting. When disk utilization on a node exceeds this threshold, the
node is marked unhealthy.
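For reference, this is the property as it would appear in yarn-site.xml (90.0 is YARN's default; the value shown is illustrative, not something we have tuned):

```xml
<!-- yarn-site.xml: disk health checker threshold (default 90.0).
     When usage on a local-dir or log-dir disk exceeds this percentage,
     the NodeManager marks that disk as bad; if enough dirs are bad,
     the whole node is reported unhealthy. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>90.0</value>
</property>
```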

Other than increasing the default 90%, is there anything else I can do?
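For what it's worth, one thing I am also looking at (not yet verified on our cluster) is Spark's built-in executor log rolling, which should cap how much the container log dirs grow while the app keeps running. These are standard Spark properties; the sizes below are just illustrative:

```properties
# spark-defaults.conf -- roll and prune executor logs while the app runs.
# Values are illustrative, not recommendations.
spark.executor.logs.rolling.strategy          size
spark.executor.logs.rolling.maxSize           134217728
spark.executor.logs.rolling.maxRetainedFiles  5
```

With a size strategy, each executor log is rolled once it reaches maxSize bytes, and only the newest maxRetainedFiles rolled files are kept, so old logs are deleted without restarting the streaming job.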

Thanks
-Shyla



On Tue, Dec 25, 2018 at 7:26 PM Fawze Abujaber <fawz...@gmail.com> wrote:

> http://shzhangji.com/blog/2015/05/31/spark-streaming-logging-configuration/
>
> On Wed, Dec 26, 2018 at 1:05 AM shyla deshpande <deshpandesh...@gmail.com>
> wrote:
>
>> Please point me to any documentation if available. Thanks
>>
>> On Tue, Dec 18, 2018 at 11:10 AM shyla deshpande <
>> deshpandesh...@gmail.com> wrote:
>>
>>> Is there a way to do this without stopping the streaming application in
>>> yarn cluster mode?
>>>
>>> On Mon, Dec 17, 2018 at 4:42 PM shyla deshpande <
>>> deshpandesh...@gmail.com> wrote:
>>>
>>>> I get the ERROR
>>>> 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad:
>>>> /var/log/hadoop-yarn/containers
>>>>
>>>> Is there a way to clean up these directories while the spark streaming
>>>> application is running?
>>>>
>>>> Thanks
>>>>
>>>
>
> --
> Take Care
> Fawze Abujaber
>
