>>>> master logs: http://pastebin.com/v3NCzm0u
>>>> worker's logs: http://pastebin.com/Ninkscnx
>>>>
>>>> It seems that some of the executors can create the directories, but as
>>>> some others are repeatedly failing, the job ends up failing. Shouldn't
>>>> Spark manage to keep working with a smaller number of executors instead
>>>> of failing?
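
For context, the directory-creation errors presumably come from the executors' local scratch space (spark.local.dir), which Spark needs to be writable on every worker so block-manager subdirectories can be created there. Below is a minimal sketch of pinning those directories explicitly and running a small shuffle job that exercises them; the paths, app name, and parallelism are only placeholders, not taken from the logs above:

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder paths: each entry must exist and be writable by the Spark
    // user on every worker node, otherwise executors on that node cannot
    // create their local block-manager subdirectories.
    val conf = new SparkConf()
      .setAppName("local-dir-check")  // placeholder name
      .set("spark.local.dir", "/mnt/spark-tmp,/data/spark-tmp")
      // A task that fails this many times causes the stage (and job) to be
      // aborted; raising it does not help if a worker's disk stays unwritable.
      .set("spark.task.maxFailures", "4")

    val sc = new SparkContext(conf)

    // Small shuffle job that forces files to be written under spark.local.dir
    // on every executor that gets a task.
    val counts = sc.parallelize(1 to 1000, numSlices = 8)
      .map(i => (i % 10, 1))
      .reduceByKey(_ + _)
      .collect()

    counts.foreach(println)
    sc.stop()

As far as I know, Spark retries a failed task up to spark.task.maxFailures times, possibly on other executors; only when the same task keeps failing is the whole job aborted, which may be what the worker logs show here.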
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Directory-creation-failed-leads-to-job-fail-should-it-tp23531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.