[ https://issues.apache.org/jira/browse/SPARK-22683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291141#comment-16291141 ]

Julien Cuquemelle commented on SPARK-22683:
-------------------------------------------

[~tgraves], thanks a lot for your remarks; I've updated the description and 
also included a summary of the various results and comments I got.

Answers to your other questions: 

"The fact you are asking for 5+cores per executor will naturally waste more 
resources when the executor isn't being used"
In fact, resource usage will be similar with fewer cores per executor: if I 
set 1 core per executor, dynamic allocation will simply request 5 times more 
executors.
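
For illustration (assumed numbers): a job with 1000 pending tasks and 
spark.task.cpus=1 needs 1000 taskSlots either way, i.e. 200 executors at 5 
cores each or 1000 executors at 1 core each; the target is 1000 vcores in 
both cases, so shrinking the executors does not shrink the allocation.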

"But if we can find something that by defaults works better for the majority of 
workloads that it makes sense to improve"
I'm pretty sure 2 tasks per slot would work for a very large set of 
workloads, especially short jobs.

"As with any config though, how do I know what to set the tasksPerSlot as? it 
requires configuration and it could affect performance."
I agree; what I'm trying to show in my argument is that:
- I don't have any parameter today that does what I want without tuning each 
job, which is not feasible in my use case
- this parameter is much less sensitive to tuning than the alternatives 
(sweet-spot values stay valid over a far broader range of jobs than 
maxExecutors or backlogTimeout do)
- it seems to me some settings are quite simple to understand: to minimize 
latency, keep the default value; to save some resources, use a value of 2; to 
really minimize resource consumption, do a per-workload analysis or aim at 
maximizing a time budget (see the sketch just below)
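
To make the last point concrete, here is what a submission could look like 
with the proposed knob (spark.dynamicAllocation.tasksPerExecutorSlot is the 
property name from the attached PR, not a released Spark option, and 
my-job.jar is a placeholder):

    spark-submit \
      --master yarn \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.executor.cores=5 \
      --conf spark.dynamicAllocation.executorIdleTimeout=30s \
      --conf spark.dynamicAllocation.tasksPerExecutorSlot=2 \
      my-job.jar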

About dynamic allocation: 
with the default backlogTimeout of 1s, the exponential ramp-up is in practice 
very similar to an upfront request, given the duration of our jobs. I think 
an upfront allocation could be used instead of the exponential one, but this 
wouldn't change the issue, which is related to the target number of 
executors. I don't think asking upfront vs. exponentially has any effect on 
how YARN grants containers.
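
To illustrate with idealized numbers: with backlogTimeout=1s the requested 
target roughly doubles every second (1, 2, 4, 8, ...), so even a target of 
1000 executors is reached in about log2(1000) ≈ 10 seconds, which is 
negligible against jobs lasting 400 to 9000 seconds. The ramp-up shape is 
not the problem; the final target is.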

"Above you say "When running with 6 tasks per executor slot, our Spark jobs 
consume in average 30% less vcorehours than the MR jobs, this setting being 
valid for different workload sizes." Was this with this patch applied or 
without?"
The patch was applied; without it you cannot set the number of tasks per 
taskSlot. (I wrote "executor slot", which is incorrect; I was referring to 
taskSlot.)

"the WallTimeGain wrt MR (%) , does this mean positive numbers ran faster then 
MR? "
Positive numbers mean Spark was faster.

"why is running with 6 or 8 slower? is it shuffle issues or mistuning with gc, 
or just unknown overhead?"
Running with 6 tasks per taskSlot means that 6 tasks are processed 
sequentially by 6 times fewer task slots, so the slowdown comes from the 
reduced parallelism itself rather than from shuffle, GC, or hidden overhead.
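
As a hypothetical illustration: a stage of 600 tasks with 5 slots per 
executor targets 120 executors by default, but only 20 executors at 6 tasks 
per slot; each slot then runs about 6 tasks back to back, and wall clock time 
grows with the length of these sequential chains.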

> DynamicAllocation wastes resources by allocating containers that will barely 
> be used
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-22683
>                 URL: https://issues.apache.org/jira/browse/SPARK-22683
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.0, 2.2.0
>            Reporter: Julien Cuquemelle
>              Labels: pull-request-available
>
> While migrating a series of jobs from MR to Spark using dynamicAllocation, 
> I've noticed more than a doubling (+114% exactly) of the resource 
> consumption of Spark w.r.t. MR, for a wall clock time gain of 43%.
> About the context: 
> - resource usage stands for vcore-hours allocation for the whole job, as seen 
> by YARN
> - I'm talking about a series of jobs because we provide our users with a way 
> to define experiments (via UI / DSL) that automatically get translated to 
> Spark / MR jobs and submitted on the cluster
> - we submit around 500 of such jobs each day
> - these jobs are usually one-shot, and the amount of processing can vary a 
> lot between jobs; finding an efficient number of executors for each job is 
> therefore difficult to get right, which is the reason I took the path of 
> dynamic allocation.  
> - Some of the tests have been scheduled on an idle queue, some on a full 
> queue.
> - experiments have been conducted with spark.executor.cores = 5 and 10; 
> only results for 5 cores are reported because efficiency was overall better 
> than with 10 cores
> - the figures I give are averaged over a representative sample of those 
> jobs (about 600 jobs), ranging from tens to thousands of splits in the data 
> partitioning and between 400 and 9000 seconds of wall clock time.
> - executor idle timeout is set to 30s.
>  
> Definition: 
> - let's say an executor has spark.executor.cores / spark.task.cpus 
> taskSlots, which represent the maximum number of tasks an executor will 
> process in parallel.
> - the current behaviour of dynamic allocation is to allocate enough 
> containers to have one taskSlot per pending task, which minimizes latency 
> but wastes resources when tasks are small relative to the executor 
> allocation and idling overhead (see the sketch below). 
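>
> A minimal sketch in Scala of the computation being discussed (simplified 
> from the allocation logic the PR touches; names and numbers are 
> illustrative, not the actual Spark code):
>
>     // Slots available per executor, as defined above.
>     val executorCores    = 5   // spark.executor.cores
>     val taskCpus         = 1   // spark.task.cpus
>     val slotsPerExecutor = executorCores / taskCpus
>
>     // Default behaviour is tasksPerSlot = 1: one taskSlot per pending task.
>     // With tasksPerSlot = N, each slot is expected to run N tasks
>     // sequentially, which divides the executor target by N.
>     def maxExecutorsNeeded(pendingAndRunningTasks: Int, tasksPerSlot: Int): Int =
>       math.ceil(pendingAndRunningTasks.toDouble /
>         (slotsPerExecutor * tasksPerSlot)).toInt
>
>     maxExecutorsNeeded(600, 1)  // 120 executors: current behaviour
>     maxExecutorsNeeded(600, 6)  // 20 executors: proposed knob set to 6
>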
> The results using the proposal (described below) over the job sample (600 
> jobs):
> - by using 2 tasks per taskSlot, we get a 5% reduction in resource usage 
> (against a 114% increase without the patch), for a 37% (against 43%) 
> reduction in wall clock time for Spark w.r.t. MR
> - by trying to minimize the average resource consumption, I ended up with 6 
> tasks per taskSlot, with a 30% resource usage reduction, for a similar wall 
> clock time w.r.t. MR
> What did I try to mitigate this (summing up a few points mentioned in the 
> comments)?
> - change dynamicAllocation.maxExecutors: this would need to be adapted for 
> each job (tens to thousands of splits can occur), and essentially removes 
> the point of using dynamic allocation.
> - use dynamicAllocation.backlogTimeout: 
>     - setting this parameter right to avoid creating unused executors is 
> very dependent on wall clock time: one basically needs to solve the 
> exponential ramp-up for the target time (a worked example follows this 
> list). So this is not an option for my use case, where I don't want 
> per-job tuning. 
>     - I've still run a series of experiments; details are in the comments. 
> The result is that after manual tuning, the best I could get was a similar 
> resource consumption at the expense of 20% more wall clock time, or a 
> similar wall clock time at the expense of 60% more resource consumption 
> than what I got using my proposal @ 6 tasks per slot (this value being 
> optimized over a much larger range of jobs, as already stated)
>     - as mentioned in another comment, slowing down the exponential ramp-up 
> might yield task imbalance, and old executors could then become contention 
> points for other executors trying to remotely access blocks held by those 
> old executors (not witnessed in the jobs discussed here, but we did see 
> this behavior in other jobs)
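>
> To spell out the "solve the exponential ramp-up" point above (an idealized 
> model, assuming the executor target doubles every backlogTimeout interval 
> t): after k intervals the target is about 2^k, so a target of E executors 
> is reached after roughly t * log2(E) seconds. Keeping a job of duration D 
> below E executors therefore requires t > D / log2(E), and since D ranges 
> from 400 to 9000 seconds across our jobs, t cannot be set once for all of 
> them.
>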
> Proposal: 
> Simply add a tasksPerExecutorSlot parameter, which makes it possible to 
> specify how many tasks a single taskSlot should ideally execute to mitigate 
> the overhead of executor allocation.
> PR: https://github.com/apache/spark/pull/19881


