You can look at http://spark.apache.org/docs/1.2.0/job-scheduling.html
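
In standalone mode the first application grabs all available cores by default, so the usual fix is to cap each application with spark.cores.max and leave room for the others. A minimal sketch (the app name and core count are just placeholders):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("app-1")              // placeholder application name
  .set("spark.cores.max", "4")      // cap this app at 4 cores so the remaining apps can be scheduled
val sc = new SparkContext(conf)

You can also pass the same setting with --conf spark.cores.max=4 on spark-submit.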

I would go with Mesos:
http://spark.apache.org/docs/1.2.0/running-on-mesos.html
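
If you do go down the Mesos route, something like the following should get a context talking to the Mesos master (the master host and the executor URI below are only placeholders, assuming you host the Spark 1.2.0 package somewhere the slaves can fetch it):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://mesos-master:5050")                    // placeholder Mesos master URL
  .setAppName("app-on-mesos")
  .set("spark.executor.uri", "hdfs:///tmp/spark-1.2.0.tgz")  // placeholder location of the Spark package
val sc = new SparkContext(conf)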

Thanks
Best Regards

On Tue, Feb 10, 2015 at 2:59 PM, matha.harika <matha.har...@gmail.com>
wrote:

> Hi,
>
> I have a cluster setup with three slaves, 4 cores each (12 cores in total).
> When I try to run multiple applications, each requesting 4 cores, only the
> first application runs (with 2, 1, and 1 cores used on the corresponding
> slaves). Every other application goes into the WAIT state. Following the
> solution provided
> here
> <http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Worker-Core-Allocation-td7188.html>
> I set the parameter spark.deploy.spreadOut to false, but the problem is not
> solved.
>
> Any suggestion in this regard is welcome.
>
> Thanks in advance
>
> Harika
