Hi,

Running Spark 1.3 with a secured Hadoop cluster.

spark-shell in YARN client mode runs without issue when Dynamic
Allocation is not enabled.

When Dynamic Allocation is turned on, the shell comes up, but running the
same SQL (and similar jobs) causes it to loop while requesting executors.
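
For reference, the shell is simply started in yarn-client mode (the
dynamic-allocation properties below are assumed to live in
spark-defaults.conf, though passing them with --conf should behave the same):

spark-shell --master yarn-client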

spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.initialExecutors=1
spark.dynamicAllocation.maxExecutors=10
# Set IdleTime low for testing
spark.dynamicAllocation.executorIdleTimeout=60
spark.shuffle.service.enabled=true
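
In case it is relevant: with spark.shuffle.service.enabled=true, the external
shuffle service also has to run as an auxiliary service on every NodeManager.
A sketch of the yarn-site.xml entries, per the Spark on YARN docs (the
existing aux-services list and the exact jar name depend on the cluster and
build):

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

plus spark-<version>-yarn-shuffle.jar on the NodeManager classpath, followed
by a NodeManager restart.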

The following is the start of the log output; after this it keeps looping
with "Requesting 0 new executor(s)":

15/03/20 22:52:42 INFO storage.BlockManagerMaster: Updated info of block
broadcast_1_piece0
15/03/20 22:52:42 INFO spark.SparkContext: Created broadcast 1 from
broadcast at DAGScheduler.scala:839
15/03/20 22:52:42 INFO scheduler.DAGScheduler: Submitting 1 missing tasks
from Stage 0 (MapPartitionsRDD[3] at mapPartitions at Exchange.scala:100)
15/03/20 22:52:42 INFO cluster.YarnScheduler: Adding task set 0.0 with 1
tasks
15/03/20 22:52:47 INFO spark.ExecutorAllocationManager: Requesting 1 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:52:52 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:52:57 WARN cluster.YarnScheduler: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered
and have sufficient resources
15/03/20 22:52:57 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:02 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:07 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:12 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:12 WARN cluster.YarnScheduler: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered
and have sufficient resources
15/03/20 22:53:17 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:22 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
15/03/20 22:53:27 INFO spark.ExecutorAllocationManager: Requesting 0 new
executor(s) because tasks are backlogged (new desired total will be 1)
