Github user MartinWeindel commented on the pull request:

    https://github.com/apache/spark/pull/5597#issuecomment-94562061
  
    The value 5 seconds is Mesos's own default, which is used if the
    parameter is not set or an invalid value is given. So at least with
    current versions of Mesos, nothing changes in the behavior.
    
    The parameter refuse_seconds configures how long Mesos should wait
    before offering resources again after the framework (here, the Spark
    scheduler backend) has refused them.
    Setting it to 0 means that Mesos will offer these resources again
    with the very next allocation (by default after 1 second), which
    causes slightly higher traffic between the scheduler backend and the
    Mesos master.
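    To make the tradeoff concrete, here is a toy timing model of the
    behavior described above (the function name and the assumption of
    perfectly regular allocation ticks are mine for illustration, not
    Mesos internals): with refuse_seconds=0 the declined resources come
    back on the next allocation cycle, while with refuse_seconds=5 they
    stay filtered for the full five seconds.

```python
import math

def next_offer_time(decline_time, refuse_seconds, allocation_interval=1.0):
    """Toy model of when the Mesos master re-offers declined resources.

    The decline filter suppresses offers for `refuse_seconds`, but the
    master only hands out offers on its allocation cycle (default
    interval: 1 second), so the re-offer lands on the first allocation
    tick at or after the filter expires.  Assumes ticks at exact
    multiples of `allocation_interval` after `decline_time`; the real
    allocator is not this regular.
    """
    ticks = max(1, math.ceil(refuse_seconds / allocation_interval))
    return decline_time + ticks * allocation_interval
```

    For example, next_offer_time(0.0, 0.0) gives 1.0 (next allocation
    cycle), while next_offer_time(0.0, 5.0) gives 5.0 (the filter
    duration dominates).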
    Alternatively, this parameter could be made configurable by Spark, but I 
    am not sure if it is really worth the effort.
    In coarse-grained mode, resources are allocated at the start. Are
    there any circumstances, other than a lost executor, where refused
    resources would be used?
    
    On 20.04.2015 at 19:03, Sean Owen wrote:
    >
    > Sounds reasonable, since the value is reported to be invalid. The 
    > intent seemed to be to set this to "unset" or something. 5 seems to do 
    > something different as it sets it to a concrete value. Knowing nothing 
    > about this, is there maybe a closer equivalent value like 0? or is it 
    > really best to set this to a fixed value?
    >
    > —
    > Reply to this email directly or view it on GitHub 
    > <https://github.com/apache/spark/pull/5597#issuecomment-94510001>.
    >
    

