Hi Tien,

There is no retry at the job level, as we expect the user to retry, and as
you mention we already tolerate task retries.
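
For what it's worth, the usual pattern is to wrap the job in your own retry
loop on the driver side. Below is a minimal sketch in Scala; the retry count,
app name, and input path are placeholders, not Spark settings:

import scala.util.{Failure, Success, Try}
import org.apache.spark.sql.SparkSession

object JobWithRetries {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("job-with-retries").getOrCreate()
    val maxJobRetries = 3  // assumed value; there is no built-in Spark config for this

    // Placeholder "job": read an input and count its lines.
    def runOnce(): Long =
      spark.read.textFile("hdfs:///some/input").count()

    var attempt = 1
    var result: Option[Long] = None
    while (result.isEmpty && attempt <= maxJobRetries) {
      Try(runOnce()) match {
        case Success(n) => result = Some(n)
        case Failure(e) =>
          println(s"Attempt $attempt failed: ${e.getMessage}")
          attempt += 1
      }
    }

    result match {
      case Some(n) => println(s"Job succeeded with count $n")
      case None    => sys.error(s"Job failed after $maxJobRetries attempts")
    }
    spark.stop()
  }
}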

There is no request/limit-style resource configuration of the kind you
described for Spark on Mesos (yet).

So for 2) that’s not possible at the moment.
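
What you can do today is set fixed values rather than a range, e.g. cap the
total cores with spark.cores.max and fix the per-executor memory. A rough
sketch (the Mesos master URL and the numbers are placeholders taken from your
example):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("fixed-resources-sketch")
  .master("mesos://zk://host1:2181,host2:2181/mesos")  // placeholder Mesos master URL
  .config("spark.cores.max", "30")        // upper cap on total cores; no lower bound exists
  .config("spark.executor.memory", "4g")  // fixed per-executor memory, not a range
  .getOrCreate()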

Tim


On Fri, Jul 6, 2018 at 11:42 PM Tien Dat <tphan....@gmail.com> wrote:

> Dear all,
>
> We are running Spark with Mesos as the resource manager. We are interested
> in the following:
>
> 1, Is it possible to configure a specific job with a maximum number of
> retries?
> I mean retry at the job level here, NOT /spark.task.maxFailures/, which
> applies to the tasks within a job.
>
> 2, Is it possible to set a job with a range of resources, such as at least
> 20 and at most 30 CPU cores, and at least 20GB and at most 40GB of memory?
>
> Thank you in advance.
>
> Best
> Tien Dat
>
>
>
