I have a task that would benefit from more cores, but the standalone scheduler launches it when only a subset are available. I'd rather use all of the cluster's cores for this task.
Is there a way to tell the scheduler to finish everything else before allocating resources to a task? Something like "finish everything else, then launch this." Put another way, the DAG would serve this job better if it completed all other paths before executing the task, or waited until more cores were available. Perhaps there is a way to hint that a task is "fat." Any ideas?
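For context, the closest knobs I've found are the resource-registration settings, which delay the application's first scheduling until enough executor cores have registered. They don't reorder stages inside a running job, so they only help at startup; the master URL and jar name below are placeholders:

```shell
# Wait until 100% of requested cores have registered (or up to 120s)
# before the application starts scheduling tasks. This does NOT make the
# scheduler drain other stages mid-job -- it only delays the first launch.
spark-submit \
  --master spark://master:7077 \
  --conf spark.scheduler.minRegisteredResourcesRatio=1.0 \
  --conf spark.scheduler.maxRegisteredResourcesWaitingTime=120s \
  my_job.jar
```

What I'm really after is the same "wait for resources" behavior applied per stage, or a per-task fatness hint.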