GitHub user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/900#discussion_r13709414
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -48,6 +48,10 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A
       var totalCoreCount = new AtomicInteger(0)
       val conf = scheduler.sc.conf
       private val timeout = AkkaUtils.askTimeout(conf)
    +  val minRegisteredNum = conf.getDouble("spark.executor.minRegisteredNum", 0)
    --- End diff ---
    
    Ultimately I would prefer the end-user config for YARN to be a percentage. That makes it easier to set a decent default across the entire cluster (other than just off). I guess if we had to, we could add another YARN config that gets applied to the number of executors requested and sets this conf for the user, but that is just one more config.
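
    To make the percentage idea concrete, here is a minimal sketch of how a ratio-based gate could work. The config name `spark.executor.minRegisteredRatio`, the `expectedExecutors` parameter, and the use of a plain Map in place of SparkConf are all hypothetical stand-ins for illustration, not part of the patch under review.

        import java.util.concurrent.atomic.AtomicInteger

        // Sketch only: the config name and expectedExecutors are assumptions,
        // not names from the patch under review.
        class RegistrationGate(conf: Map[String, String], expectedExecutors: Int) {
          // Fraction of the requested executors that must register before the
          // scheduler starts offering resources; 0 keeps the current behavior.
          private val minRegisteredRatio =
            conf.getOrElse("spark.executor.minRegisteredRatio", "0").toDouble

          // Derive the absolute threshold once from the cluster-wide ratio, so
          // one default works no matter how many executors a given job asks for.
          private val minRegisteredNum =
            math.ceil(minRegisteredRatio * expectedExecutors).toInt

          private val registered = new AtomicInteger(0)

          def executorRegistered(): Unit = registered.incrementAndGet()

          // The backend would check this before handing tasks out.
          def sufficientExecutorsRegistered(): Boolean =
            registered.get() >= minRegisteredNum
        }

    With the ratio set to 0.8 and 10 executors requested, the gate waits for ceil(0.8 * 10) = 8 registrations, which is the kind of cluster-wide default described above.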

