Hi Tien,
There is no retry at the job level; we expect the user to retry, and as
you mention, task retries are already tolerated.
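For reference, the task-level retry behavior mentioned here is controlled by
spark.task.maxFailures (this is a standard Spark setting; the master URL and
application file below are placeholders):

```shell
# Sketch: raise the per-task failure tolerance when submitting to Mesos.
# spark.task.maxFailures counts attempts per task before the job is aborted
# (default is 4). The Mesos master URL and app name are illustrative.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.task.maxFailures=8 \
  my_app.py
```

This only retries individual tasks; once the limit is exceeded the whole job
fails, which is why a job-level retry has to happen outside Spark.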
There is no request/limit style resource configuration like the one you
described for Mesos (yet).
So for 2), that's not possible at the moment.
Tim
On Fri, Jul 6, 2018 at
Dear all,
We are running Spark with Mesos as the resource manager. We are interested
in a few aspects, such as:
1. Is it possible to configure a specific job with a maximum number of
retries?
What I mean here is retry at the job level, NOT /spark.task.maxFailures/,
which applies to tasks with a