That answers it, thanks!

On Fri, Dec 11, 2015 at 6:37 AM Iulian Dragoș <iulian.dra...@typesafe.com>
wrote:

> Hi Charles,
>
> I am not sure I totally understand your issues, but the spark.task.cpus
> limit is imposed at a higher level, for all cluster managers. The code is
> in TaskSchedulerImpl
> <https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L250>
> .
>
> There is a pending PR to implement spark.executor.cores (and launching
> multiple executors on a single worker), but it hasn't been merged yet:
> https://github.com/apache/spark/pull/4027
>
> iulian
>
> On Wed, Dec 9, 2015 at 7:23 PM, Charles Allen <
> charles.al...@metamarkets.com> wrote:
>
>> I have a Spark app in development that requires relatively strict CPU/memory
>> ratios. As such, I cannot arbitrarily add CPUs to a limited memory size.
>>
>> A regular Spark cluster behaves as expected, launching tasks with the
>> specified memory/CPU ratio, but the Mesos scheduler seems to ignore this.
>>
>> Specifically, I cannot find where in the code the limit on the number of
>> tasks per executor ("spark.executor.cores" / "spark.task.cpus") is enforced
>> in the MesosBackendScheduler.
>>
>> The Spark app in question does some JVM-heap-heavy work inside an
>> RDD.mapPartitionsWithIndex, so running more tasks against a limited JVM
>> memory pool is harmful. The planned workaround is to limit the number of
>> tasks per JVM, which does not seem possible in Mesos mode: the scheduler
>> seems to just keep stacking CPUs onto an executor as tasks come in, without
>> adjusting any memory constraints or checking for a limit on tasks per
>> executor.
>>
>> How can I limit the tasks per executor (or per memory pool) in the Mesos
>> backend scheduler?
>>
>> Thanks,
>> Charles Allen
>>
>
>
>
> --
>
> --
> Iulian Dragos
>
> ------
> Reactive Apps on the JVM
> www.typesafe.com
>
>
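
For readers following along: the enforcement Iulian points to in TaskSchedulerImpl amounts to only launching a task on an executor while that executor still has at least spark.task.cpus free CPUs. Below is a minimal, self-contained sketch of that check, not Spark's actual code; the names `Offer` and `scheduleTasks` are hypothetical.

```scala
// Hypothetical sketch of spark.task.cpus-style enforcement, in the spirit of
// TaskSchedulerImpl's resource-offer loop. Not the real Spark scheduler code.

// One resource offer: an executor and how many CPUs it currently has free.
case class Offer(executorId: String, availableCpus: Int)

// `cpusPerTask` plays the role of spark.task.cpus.
// Returns how many tasks were assigned to each executor.
def scheduleTasks(offers: Seq[Offer],
                  pendingTasks: Int,
                  cpusPerTask: Int): Map[String, Int] = {
  var remaining = pendingTasks
  val assigned = scala.collection.mutable.Map[String, Int]().withDefaultValue(0)
  for (offer <- offers) {
    var cpus = offer.availableCpus
    // A task is launched only while the executor still has >= cpusPerTask
    // CPUs free; this is what caps tasks per executor.
    while (remaining > 0 && cpus >= cpusPerTask) {
      assigned(offer.executorId) += 1
      cpus -= cpusPerTask
      remaining -= 1
    }
  }
  assigned.toMap
}
```

With two offers of 4 and 3 CPUs and cpusPerTask = 2, ten pending tasks yield two tasks on the first executor and one on the second; the rest stay pending until more resources are offered.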
