Hi,

I am also trying to use spark.mesos.constraints, but it gives me the
same error: the job has not been accepted by any resources.
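
For reference, this is roughly the shape of the invocation I am
attempting; the master URL and the attribute name/value are only
placeholders for my environment (I am assuming the attribute has to
match something the Mesos agents actually advertise), so please read
it as a sketch rather than a known-working command:

  spark-submit \
    --master mesos://zk://zk1:2181,zk2:2181/mesos \
    --conf spark.mesos.coarse=true \
    --conf spark.mesos.constraints="rack:rack-1" \
    my_job.py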

I suspect that I also need to start some additional service, such as
./sbin/start-mesos-shuffle-service.sh. Is that correct?

Thanks,
Jia

On Tue, Dec 1, 2015 at 5:14 PM, rarediel <bryce.ag...@gettyimages.com>
wrote:

> I am trying to add Mesos constraints to the spark-submit command in my
> Marathon file, where I am also setting spark.mesos.coarse=true.
>
> Here is an example of a constraint I am trying to set.
>
>  --conf spark.mesos.constraint=cpus:2
>
> I want to use the constraints to control the number of executors that
> are created, so I can control the total memory of my Spark job.
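>
> To make the sizing side concrete, this is roughly the combination I
> have been experimenting with alongside the constraint; the values
> here are placeholders rather than settings I know to work:
>
>   --conf spark.mesos.coarse=true \
>   --conf spark.executor.memory=4g \
>   --conf spark.cores.max=8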
>
> I've tried many variations of resource constraints, but no matter
> which resource I use, or what number, range, etc. I specify, I always
> get the error "Initial job has not accepted any resources; check your
> cluster UI...". My cluster has the resources available. Are there any
> examples I can look at where people use resource constraints?
>
