Hi Thodoris,

Maybe setting "spark.scheduler.minRegisteredResourcesRatio" to > 0 would
help? Default value is 0 with Mesos.

"The minimum ratio of registered resources (registered resources / total
expected resources) (resources are executors in yarn mode and Kubernetes
mode, CPU cores in standalone mode and Mesos coarse-grained mode
['spark.cores.max' value is total expected resources for Mesos
coarse-grained mode] ) to wait for before scheduling begins. Specified as a
double between 0.0 and 1.0. Regardless of whether the minimum ratio of
resources has been reached, the maximum amount of time it will wait before
scheduling begins is controlled by config
spark.scheduler.maxRegisteredResourcesWaitingTime." -
https://spark.apache.org/docs/latest/configuration.html
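
For example, a minimal sketch of applying it when building the SparkSession
(the app name, the 0.8 ratio, and the 30s wait below are illustrative
values, not recommendations):

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("min-resources-example") // hypothetical name
    // wait until at least 80% of the expected resources have registered
    .config("spark.scheduler.minRegisteredResourcesRatio", "0.8")
    // upper bound on how long to wait before scheduling starts anyway
    .config("spark.scheduler.maxRegisteredResourcesWaitingTime", "30s")
    .getOrCreate()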

Susan

On Wed, Jul 11, 2018 at 7:22 AM, Pavel Plotnikov <
pavel.plotni...@team.wrike.com> wrote:

> Oh, sorry, I missed that you use Spark without dynamic allocation. Anyway,
> I don't know whether these parameters work without dynamic allocation.
>
> On Wed, Jul 11, 2018 at 5:11 PM Thodoris Zois <z...@ics.forth.gr> wrote:
>
>> Hello,
>>
>> Yeah, you are right, but I think that works only if you use Spark dynamic
>> allocation. Am I wrong?
>>
>> -Thodoris
>>
>> On 11 Jul 2018, at 17:09, Pavel Plotnikov <pavel.plotni...@team.wrike.com>
>> wrote:
>>
>> Hi Thodoris,
>> You can configure resources per executor and manipulate the number of
>> executors instead of using spark.cores.max. I think the
>> spark.dynamicAllocation.minExecutors and
>> spark.dynamicAllocation.maxExecutors configuration values can help you.
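>>
>> For example, a minimal sketch (the executor counts are illustrative, and
>> note that dynamic allocation also requires the external shuffle service):
>>
>>   import org.apache.spark.SparkConf
>>
>>   val conf = new SparkConf()
>>     .set("spark.dynamicAllocation.enabled", "true")
>>     .set("spark.shuffle.service.enabled", "true") // needed by dynamic allocation
>>     .set("spark.dynamicAllocation.minExecutors", "3")  // lower bound
>>     .set("spark.dynamicAllocation.maxExecutors", "10") // upper bound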
>>
>> On Tue, Jul 10, 2018 at 5:07 PM Thodoris Zois <z...@ics.forth.gr> wrote:
>>
>>> Actually, after some experiments we figured out that spark.cores.max /
>>> spark.executor.cores is the upper bound on the number of executors. Spark
>>> apps will run even if only one executor can be launched.
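>>>
>>> (For example, with spark.cores.max=30 and spark.executor.cores=10 we get
>>> at most 30 / 10 = 3 executors.)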
>>>
>>> Is there any way to also specify the lower bound? It is a bit annoying
>>> that it seems we can't control the resource usage of an application. By
>>> the way, we are not using dynamic allocation.
>>>
>>> - Thodoris
>>>
>>>
>>> On 10 Jul 2018, at 14:35, Pavel Plotnikov <pavel.plotnikov@team.wrike.com>
>>> wrote:
>>>
>>> Hello Thodoris!
>>> Have you checked the following:
>>>  - Does the Mesos cluster have available resources?
>>>  - Does Spark have tasks waiting in the queue for longer than the
>>> spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
>>>  - And have you checked that Mesos sends offers to the Spark app's Mesos
>>> framework with at least 10 cores and 2GB RAM?
>>>
>>> If Mesos has no available offers with 10 cores, but does have offers with
>>> 8 or 9 cores, you can use smaller executors (e.g. 4 cores and 1 GB RAM
>>> each) to better fit the available resources on the nodes.
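>>>
>>> For example, a minimal sketch with smaller executors (the numbers are
>>> illustrative only):
>>>
>>>   import org.apache.spark.SparkConf
>>>
>>>   val conf = new SparkConf()
>>>     .set("spark.executor.cores", "4")   // smaller executors fit smaller offers
>>>     .set("spark.executor.memory", "1g")
>>>     .set("spark.cores.max", "30")       // overall cap unchanged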
>>>
>>> Cheers,
>>> Pavel
>>>
>>> On Mon, Jul 9, 2018 at 9:05 PM Thodoris Zois <z...@ics.forth.gr> wrote:
>>>
>>>> Hello list,
>>>>
>>>> We are running Apache Spark on a Mesos cluster and we are seeing weird
>>>> behavior from the executors. When we submit an app with e.g. 10 cores and
>>>> 2GB of memory per executor and max cores 30, we expect to see 3 executors
>>>> running on the cluster. However, sometimes there are only 2... Spark
>>>> applications are not the only ones that run on the cluster. I guess that
>>>> Spark starts executors on whatever offers are available, even if they do
>>>> not satisfy our needs. Is there any configuration we can use to prevent
>>>> Spark from starting when there are no resource offers for the total
>>>> number of executors?
>>>>
>>>> Thank you
>>>> - Thodoris
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>>
>>>>
>>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com
