[ 
https://issues.apache.org/jira/browse/FLINK-35035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835434#comment-17835434
 ] 

Etienne Chauchot commented on FLINK-35035:
------------------------------------------

The FLINK-21883 cooldown period was mainly designed to avoid overly frequent rescales.
Here is how it works when new slots are available:
 - Flink rescales immediately only if the last rescale was done more than 
scaling-interval.min (default 30s) ago.
 - Otherwise, it schedules a rescale at the (now + scaling-interval.min) point 
in time.
The rescale itself works like this (a simplified sketch follows the list):
 - if the minimum scaling requirements are met (AdaptiveScheduler#shouldRescale, 
which defaults to a minimum of 1 slot added), the job is restarted with the new 
parallelism
 - if the minimum scaling requirements are not met
 -- if the last rescale was done more than scaling-interval.max ago (disabled by 
default), a rescale is forced
 -- otherwise, a forced rescale is scheduled to run after scaling-interval.max
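
To make that concrete, here is a minimal, simplified sketch of the decision 
flow described above. All names are illustrative placeholders, not the actual 
AdaptiveScheduler code:

import java.time.Duration;
import java.time.Instant;

// Simplified model of the cooldown behaviour; hypothetical names, not the
// real AdaptiveScheduler implementation.
public class CooldownSketch {

    static final Duration SCALING_INTERVAL_MIN = Duration.ofSeconds(30); // default 30s
    static final Duration SCALING_INTERVAL_MAX = Duration.ofMinutes(5);  // disabled by default

    // Called when new slots become available.
    static void onNewResourcesAvailable(Instant now, Instant lastRescale) {
        if (Duration.between(lastRescale, now).compareTo(SCALING_INTERVAL_MIN) > 0) {
            rescale(now, lastRescale);                         // cooldown elapsed: rescale now
        } else {
            scheduleRescaleAt(now.plus(SCALING_INTERVAL_MIN)); // still in cooldown: defer
        }
    }

    // The rescale itself.
    static void rescale(Instant now, Instant lastRescale) {
        if (minimumScalingRequirementsMet()) {                 // AdaptiveScheduler#shouldRescale
            restartWithNewParallelism();
        } else if (Duration.between(lastRescale, now).compareTo(SCALING_INTERVAL_MAX) > 0) {
            forceRescale();
        } else {
            scheduleForcedRescaleAt(now.plus(SCALING_INTERVAL_MAX));
        }
    }

    // Placeholders standing in for the scheduler's real checks and actions.
    static boolean minimumScalingRequirementsMet() { return false; }
    static void restartWithNewParallelism() {}
    static void forceRescale() {}
    static void scheduleRescaleAt(Instant when) {}
    static void scheduleForcedRescaleAt(Instant when) {}
}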

So in your case, where slots arrive gradually during the resource stabilization 
timeout and the job rescales with only a portion of the ideal number of slots, 
I see two options:
1. increase the stabilization timeout, hoping you will get all the slots during 
that time
2. set min-parallelism-increase to 5 instead of the default 1 and set 
scaling-interval.max. That way the first slot additions will not trigger a 
rescale; the rescale will be issued only when the 5th slot arrives, and you 
will still get a safety forced rescale scheduled no matter what (as long as 
the parallelism has changed) after scaling-interval.max. A configuration 
sketch for both options follows.
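
Here is a minimal configuration sketch covering both options. It assumes the 
Flink 1.19 adaptive-scheduler option keys; the values are examples only, and 
the same keys can go into flink-conf.yaml instead of being set programmatically:

import org.apache.flink.configuration.Configuration;

public class RescaleTuningSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Option 1: give the scheduler more time to collect the new slots
        // before rescaling (example value).
        conf.setString("jobmanager.adaptive-scheduler.resource-stabilization-timeout", "1 min");
        // Option 2: only rescale once the parallelism can grow by at least 5 ...
        conf.setString("jobmanager.adaptive-scheduler.min-parallelism-increase", "5");
        // ... but force a rescale after this delay even if fewer slots arrived,
        // as long as the parallelism would change at all.
        conf.setString("jobmanager.adaptive-scheduler.scaling-interval.max", "1 min");
    }
}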

> Reduce job pause time when cluster resources are expanded in adaptive mode
> --------------------------------------------------------------------------
>
>                 Key: FLINK-35035
>                 URL: https://issues.apache.org/jira/browse/FLINK-35035
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Task
>    Affects Versions: 1.19.0
>            Reporter: yuanfenghu
>            Priority: Minor
>
> When 'jobmanager.scheduler = adaptive', job graph changes triggered by 
> cluster expansion cause long pauses in the running tasks. We should reduce 
> this impact.
> As an example:
> I have a job graph: [v1 (maxp=10, minp=1)] -> [v2 (maxp=10, minp=1)]
> When my cluster has 5 slots, the job will be executed as [v1 p5]->[v2 p5]
> When I add slots, the job triggers job graph changes via
> org.apache.flink.runtime.scheduler.adaptive.ResourceListener#onNewResourcesAvailable.
> However, the five new slots I added are not discovered at the same time (for 
> convenience, I assume that a taskmanager has one slot), because no matter 
> what environment we add them in, we cannot guarantee that the new slots 
> arrive all at once, so onNewResourcesAvailable is triggered repeatedly.
> If each new slot arrives after a certain interval, the job graph keeps 
> changing during this period. What I would like is a configurable stable time 
> for cluster resources, so that job graph changes are only triggered after the 
> number of cluster slots has been stable for a certain period of time, to 
> avoid this situation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
