Dear all,

We are running Spark with Mesos as the resource manager. We are interested
in a few aspects, such as:

1. Is it possible to configure a maximum number of retries for a specific
job?
I mean retries at the job level, NOT spark.task.maxFailures, which applies
to the tasks within a job.

2. Is it possible to assign a job a range of resources, for example: at
least 20 and at most 30 CPU cores, and at least 20 GB and at most 40 GB of
memory?
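As far as we know, we can only express upper bounds today, roughly as in
the sketch below (values are only examples); we have not found a way to
also declare a minimum such as "at least 20 cores":

  import org.apache.spark.SparkConf

  // Upper bounds we know how to set when running on Mesos in coarse-grained
  // mode (values are only examples). We have not found a corresponding way
  // to declare a minimum, e.g. "at least 20 CPU cores" or "at least 20 GB".
  val conf = new SparkConf()
    .setAppName("example-job")
    .set("spark.cores.max", "30")        // cap on the total CPU cores the job may use
    .set("spark.executor.memory", "4g")  // memory per executor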

Thank you in advance.

Best 
Tien Dat


