[ https://issues.apache.org/jira/browse/SPARK-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14561484#comment-14561484 ]

Ryan Williams commented on SPARK-7901:
--------------------------------------

Looks like a dupe of [SPARK-6954|https://issues.apache.org/jira/browse/SPARK-6954]…

> Attempt to request negative number of executors with dynamic allocation
> -----------------------------------------------------------------------
>
>                 Key: SPARK-7901
>                 URL: https://issues.apache.org/jira/browse/SPARK-7901
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.1
>            Reporter: Ryan Williams
>
> I ran a {{spark-shell}} on YARN with dynamic allocation enabled; the relevant params:
> {code}
>   --conf spark.dynamicAllocation.enabled=true \
>   --conf spark.dynamicAllocation.minExecutors=5 \
>   --conf spark.dynamicAllocation.maxExecutors=300 \
>   --conf spark.dynamicAllocation.schedulerBacklogTimeout=3 \
>   --conf spark.dynamicAllocation.executorIdleTimeout=300 \
> {code}
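> For reference, a minimal sketch of the same settings applied programmatically via {{SparkConf}} (the app name is a placeholder; the shuffle-service line reflects the usual requirement for dynamic allocation on YARN):
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Programmatic equivalent of the --conf flags above; the master is supplied
> // by spark-submit, and the app name is made up.
> val conf = new SparkConf()
>   .setAppName("dynamic-allocation-repro")                       // placeholder
>   .set("spark.dynamicAllocation.enabled", "true")
>   .set("spark.dynamicAllocation.minExecutors", "5")
>   .set("spark.dynamicAllocation.maxExecutors", "300")
>   .set("spark.dynamicAllocation.schedulerBacklogTimeout", "3")
>   .set("spark.dynamicAllocation.executorIdleTimeout", "300")
>   // Dynamic allocation on YARN also needs the external shuffle service.
>   .set("spark.shuffle.service.enabled", "true")
> val sc = new SparkContext(conf)
> {code}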
> It started out with executors, went up to 300 when I ran a job, and then killed them back down to 5 executors after 5 minutes of idle time; all working as intended.
> When I ran another job, it tried to request -187 executors:
> {code}
> 15/05/27 17:41:12 ERROR util.Utils: Uncaught exception in thread spark-dynamic-executor-allocation-0
> java.lang.IllegalArgumentException: Attempted to request a negative number of executor(s) -187 from the cluster manager. Please specify a positive number!
>       at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:338)
>       at org.apache.spark.SparkContext.requestTotalExecutors(SparkContext.scala:1137)
>       at org.apache.spark.ExecutorAllocationManager.addExecutors(ExecutorAllocationManager.scala:294)
>       at org.apache.spark.ExecutorAllocationManager.addOrCancelExecutorRequests(ExecutorAllocationManager.scala:263)
>       at org.apache.spark.ExecutorAllocationManager.org$apache$spark$ExecutorAllocationManager$$schedule(ExecutorAllocationManager.scala:230)
>       at org.apache.spark.ExecutorAllocationManager$$anon$1$$anonfun$run$1.apply$mcV$sp(ExecutorAllocationManager.scala:189)
>       at org.apache.spark.ExecutorAllocationManager$$anon$1$$anonfun$run$1.apply(ExecutorAllocationManager.scala:189)
>       at org.apache.spark.ExecutorAllocationManager$$anon$1$$anonfun$run$1.apply(ExecutorAllocationManager.scala:189)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1618)
>       at org.apache.spark.ExecutorAllocationManager$$anon$1.run(ExecutorAllocationManager.scala:189)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
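> The arithmetic looks consistent with a stale internal target: the allocation manager's target was apparently left at 300 after the idle executors were killed, so the next request computed a negative delta. A simplified sketch of that failure mode (made-up names and numbers, not the actual Spark source):
> {code}
> // If the internal target is never lowered when idle executors are killed,
> // the next backlog computes a negative number of executors to add.
> def executorsToAdd(maxNeeded: Int, staleTarget: Int, maxExecutors: Int): Int = {
>   val newTarget = math.min(maxNeeded, maxExecutors)
>   newTarget - staleTarget // e.g. min(113, 300) - 300 = -187 (illustrative)
> }
> {code}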
> Now it seems like I'm stuck with 5 executors in this application, as some internal state is corrupt.
> [This dropbox folder|https://www.dropbox.com/sh/36slqgyll8nwxrk/AACPMc9UbKRY7SieR9bCXPJCa?dl=0] has the stdout from my console, including the -187 error above, as well as the eventlog for this application.
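> If the {{@DeveloperApi}} executor-request methods are available in this build, a rough stopgap sketch from the shell (numbers illustrative) might look like:
> {code}
> // Ask the cluster manager for additional executors directly, sidestepping the
> // wedged ExecutorAllocationManager; returns false if the backend doesn't
> // support it.
> val ok = sc.requestExecutors(100) // 100 is an illustrative number
> println(s"request acknowledged: $ok")
> {code}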


