I would suggest you install a monitoring service: a 'no space left'
condition would affect other services, not just Spark.
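
As a starting point, a simple free-space check run from cron on every node
already catches this. Below is a minimal sketch in Scala; the directories,
the 5 GB threshold, and the println-based alert are assumptions you would
replace with your own spark.local.dir / log locations and alerting hook:

    // Minimal free-space check. Paths and threshold are placeholders, not
    // taken from your cluster; wire the warning into whatever alerting
    // (Nagios, Ganglia, email) you already use.
    import java.io.File

    object DiskSpaceCheck {
      // Directories Spark commonly writes to (shuffle spill, logs); adjust as needed.
      val dirsToWatch = Seq("/tmp", "/var/log")
      val minFreeBytes = 5L * 1024 * 1024 * 1024 // warn below ~5 GB free

      def main(args: Array[String]): Unit = {
        dirsToWatch.foreach { path =>
          val free = new File(path).getUsableSpace
          if (free < minFreeBytes) {
            println(s"WARNING: only ${free / (1024 * 1024)} MB free on $path")
          }
        }
      }
    }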

For the second part, Spark experts may have an answer for you.
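That said, one direction worth exploring (a rough sketch only, not verified
against your job): register a SparkListener on the SparkContext and stop
the application once tasks start failing with the disk-full error. The
error-message match and the failure threshold below are assumptions:

    import java.util.concurrent.atomic.AtomicInteger
    import org.apache.spark.{SparkContext, TaskFailedReason}
    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // Cancels all jobs and stops the SparkContext after too many tasks fail
    // with "No space left on device". The threshold of 8 is arbitrary.
    class DiskFullKiller(sc: SparkContext, maxFailures: Int = 8) extends SparkListener {
      private val failures = new AtomicInteger(0)

      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        taskEnd.reason match {
          case r: TaskFailedReason if r.toErrorString.contains("No space left on device") =>
            if (failures.incrementAndGet() >= maxFailures) {
              sc.cancelAllJobs()
              sc.stop()
            }
          case _ => // ignore successes and unrelated failures
        }
      }
    }

    // In the driver, after creating the SparkContext:
    //   sc.addSparkListener(new DiskFullKiller(sc))

Whether the "No space left" text actually shows up in the task failure
reason depends on your Spark release, so treat this as a direction to test
rather than a drop-in fix.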

On Mon, Oct 12, 2015 at 11:09 AM, Saurav Sinha <sauravsinh...@gmail.com>
wrote:

> Hi Ted,
>
> *Do you have monitoring in place to detect a 'no space left' scenario?*
>
> No, I don't have any monitoring in place.
>
> *By 'way to kill job', do you mean an automatic kill?*
>
> Yes, I need some way for my job to detect this failure and kill
> itself.
>
> Thanks,
> Saurav
>
> On Mon, Oct 12, 2015 at 10:46 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Do you have monitoring in place to detect a 'no space left' scenario?
>>
>> By 'way to kill job', do you mean an automatic kill?
>>
>> Please include the Spark release and the 'spark-submit' command line in
>> your reply.
>>
>> Thanks
>>
>> On Mon, Oct 12, 2015 at 10:07 AM, Saurav Sinha <sauravsinh...@gmail.com>
>> wrote:
>>
>>> Hi Experts,
>>>
>>> I am facing an issue in which a Spark job runs infinitely.
>>>
>>> When I start the Spark job on a 4-node cluster and one machine has no
>>> space left, the job keeps running forever.
>>>
>>> Has anyone come across such an issue? Is there any way to kill the job
>>> when this happens?
>>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Saurav Sinha
>>>
>>> Contact: 9742879062
>>>
>>
>>
>
>
> --
> Thanks and Regards,
>
> Saurav Sinha
>
> Contact: 9742879062
>
