Repository: spark
Updated Branches:
  refs/heads/master 1a88f20de -> fe3740c4c


[SPARK-5636] Ramp up faster in dynamic allocation

A recent patch #4051 made the initial number of executors default to 0. With 
this change, any Spark application using dynamic allocation's default settings 
will ramp up very slowly. Since we never request more executors than are needed 
to saturate the pending tasks, it is safe to ramp up quickly. The current 
default of 60 seconds may be too slow.
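The ramp-up behavior this timeout governs can be sketched as follows. This is a hypothetical, simplified Python model (illustrative names, not Spark's actual Scala implementation): each sustained backlog interval doubles the number of executors added, but the running total is capped at what the pending tasks can actually use.

```python
# Hypothetical sketch of the exponential ramp-up policy described above.
# Names and structure are illustrative, not Spark's implementation.

def ramp_up(pending_tasks, tasks_per_executor, max_executors, intervals):
    """Simulate executor requests over successive backlog intervals.

    Each interval with a sustained backlog doubles the number of
    executors added, but the total never exceeds the number needed
    to saturate the pending tasks (or max_executors).
    """
    # Executors needed to run every pending task at once (ceiling division).
    needed = -(-pending_tasks // tasks_per_executor)
    target = min(needed, max_executors)

    total, to_add, history = 0, 1, []
    for _ in range(intervals):
        if total >= target:
            break
        total = min(total + to_add, target)
        history.append(total)
        to_add *= 2  # exponential ramp-up

    return history


# With 100 pending tasks and 4 tasks per executor, the target is 25
# executors, reached after 5 intervals:
print(ramp_up(100, 4, max_executors=1000, intervals=10))
# -> [1, 3, 7, 15, 25]
```

A shorter backlog timeout means these doubling steps start sooner, which is why lowering the default from 60 to 5 seconds makes the ramp-up much faster without over-requesting.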

Author: Andrew Or <and...@databricks.com>

Closes #4409 from andrewor14/dynamic-allocation-interval and squashes the 
following commits:

d3cc485 [Andrew Or] Lower request interval


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fe3740c4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fe3740c4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fe3740c4

Branch: refs/heads/master
Commit: fe3740c4c859d087b714c666741a29061bba5f58
Parents: 1a88f20
Author: Andrew Or <and...@databricks.com>
Authored: Fri Feb 6 10:54:23 2015 -0800
Committer: Andrew Or <and...@databricks.com>
Committed: Fri Feb 6 10:55:13 2015 -0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/ExecutorAllocationManager.scala     | 6 +++---
 docs/configuration.md                                          | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/fe3740c4/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
index 5d5288b..8b38366 100644
--- a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
+++ b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
@@ -76,15 +76,15 @@ private[spark] class ExecutorAllocationManager(
   private val maxNumExecutors = conf.getInt("spark.dynamicAllocation.maxExecutors",
     Integer.MAX_VALUE)
 
-  // How long there must be backlogged tasks for before an addition is triggered
+  // How long there must be backlogged tasks for before an addition is triggered (seconds)
   private val schedulerBacklogTimeout = conf.getLong(
-    "spark.dynamicAllocation.schedulerBacklogTimeout", 60)
+    "spark.dynamicAllocation.schedulerBacklogTimeout", 5)
 
   // Same as above, but used only after `schedulerBacklogTimeout` is exceeded
   private val sustainedSchedulerBacklogTimeout = conf.getLong(
    "spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", schedulerBacklogTimeout)
 
-  // How long an executor must be idle for before it is removed
+  // How long an executor must be idle for before it is removed (seconds)
   private val executorIdleTimeout = conf.getLong(
     "spark.dynamicAllocation.executorIdleTimeout", 600)
 

http://git-wip-us.apache.org/repos/asf/spark/blob/fe3740c4/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index 4c86cb7..00e973c 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1140,7 +1140,7 @@ Apart from these, the following properties are also available, and may be useful
 </tr>
 <tr>
   <td><code>spark.dynamicAllocation.schedulerBacklogTimeout</code></td>
-  <td>60</td>
+  <td>5</td>
   <td>
    If dynamic allocation is enabled and there have been pending tasks backlogged for more than
    this duration (in seconds), new executors will be requested. For more detail, see this

