Hi Fawze,
Thank you for the link, but that is exactly what I am doing. I think this is
related to the
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
setting. When the disk utilization on a node exceeds this threshold, the node
is marked unhealthy.
Other than increasing the default
http://shzhangji.com/blog/2015/05/31/spark-streaming-logging-configuration/
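For reference, that threshold is set in yarn-site.xml. A minimal sketch, assuming the stock default of 90.0; the 95.0 shown is purely an illustrative override, and raising it only postpones the problem if the logs keep growing:

```xml
<!-- yarn-site.xml: NodeManager disk health checker.
     A local-dir/log-dir is marked bad when its disk utilization exceeds
     this percentage; once enough dirs are bad, the node is reported
     unhealthy. The default is 90.0; 95.0 here is just an example. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>95.0</value>
</property>
```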
On Wed, Dec 26, 2018 at 1:05 AM shyla deshpande wrote:

> Please point me to any documentation if available. Thanks
Please point me to any documentation if available. Thanks
On Tue, Dec 18, 2018 at 11:10 AM shyla deshpande wrote:

> Is there a way to do this without stopping the streaming application in
> yarn cluster mode?
Is there a way to do this without stopping the streaming application in
yarn cluster mode?
On Mon, Dec 17, 2018 at 4:42 PM shyla deshpande wrote:

> I get the ERROR
> 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad:
> /var/log/hadoop-yarn/containers
I get the ERROR
1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad:
/var/log/hadoop-yarn/containers
Is there a way to clean up these directories while the Spark Streaming
application is running?
Thanks
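One way to sidestep the cleanup question entirely is to stop the container logs from growing without bound, along the lines of the log4j approach in the blog post linked above. A minimal sketch, assuming log4j 1.x on YARN; the appender name and file sizes are illustrative choices, not required values:

```properties
# log4j.properties shipped to the driver/executors; caps each container's
# log at roughly 5 x 100 MB instead of growing until the disk fills.
# The appender name "rolling" and the sizes are illustrative.
log4j.rootLogger=INFO, rolling
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.rolling.maxFileSize=100MB
log4j.appender.rolling.maxBackupIndex=5
log4j.appender.rolling.file=${spark.yarn.app.container.log.dir}/spark.log
```

It would then be distributed with something like:
spark-submit --files log4j.properties --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties ...
(and the same -D option on spark.driver.extraJavaOptions for the driver). This is a sketch of the pattern, not a tested recipe for your cluster.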