Have you run with debug logging? There are some hints in the debug logs:
https://github.com/apache/spark/blob/branch-2.1/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L316
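If not, enabling DEBUG for the Mesos scheduler backend package should surface why offers are being declined. A minimal sketch for conf/log4j.properties (Spark 2.1 ships log4j 1.x; merge these lines into your existing file rather than replacing it):

```properties
# Keep the usual root level, but turn on DEBUG for the Mesos scheduler backend
log4j.rootCategory=INFO, console
log4j.logger.org.apache.spark.scheduler.cluster.mesos=DEBUG
```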

On Mon, Apr 24, 2017 at 4:53 AM, Pavel Plotnikov <
pavel.plotni...@team.wrike.com> wrote:

> Hi, everyone! I run Spark 2.1.0 jobs on top of a Mesos cluster in
> coarse-grained mode with dynamic resource allocation. Sometimes the Spark
> Mesos scheduler declines Mesos offers even though not all available
> resources are in use (I have fewer workers than the possible maximum),
> the maximum threshold in the Spark configuration has not been reached,
> and the queue has a lot of pending tasks.
>
> Maybe I have a wrong Spark or Mesos configuration? Does anyone have the
> same problem?
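For comparison, the settings that bound how many offers the coarse-grained backend will accept look roughly like this in spark-defaults.conf (values below are illustrative, not a recommendation):

```properties
spark.dynamicAllocation.enabled       true
spark.shuffle.service.enabled         true
# Hard cap on total cores acquired across all executors
spark.cores.max                       64
spark.executor.cores                  4
spark.dynamicAllocation.maxExecutors  16
```

If spark.cores.max or maxExecutors is already satisfied, further offers will be declined regardless of queued tasks, so it is worth double-checking those values against what you expect.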
>



-- 
Michael Gummelt
Software Engineer
Mesosphere
