Have you tried setting "spark.cores.max" in your SparkConf? Check
http://spark.apache.org/docs/1.6.1/running-on-mesos.html :

> You can cap the maximum number of cores using conf.set("spark.cores.max",
> "10") (for example).


On Thu, Apr 14, 2016 at 12:53 AM, Andreas Tsarida <
andreas.tsar...@teralytics.ch> wrote:

>
> Hello,
>
> I’m trying to figure out a solution for dynamic resource allocation in
> mesos within the same framework ( spark ).
>
> Scenario :
> 1 - run a Spark job in coarse mode
> 2 - run a second job in coarse mode
>
> The second job will not start until the first job finishes, which is not
> what I want. The problem is small when the running job doesn't take too
> long, but when it does, nobody else can work on the cluster.
>
> The best scenario would be to have Mesos revoke resources from the first
> job and allocate them to the second job.
>
> Is there anybody else who has solved this issue in another way?
>
> Thanks
>