[ https://issues.apache.org/jira/browse/FLINK-35035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836462#comment-17836462 ]

Etienne Chauchot commented on FLINK-35035:
------------------------------------------

With the adaptive scheduler, the JobMaster declares the resources it needs with a
min and a max. The only difference with reactive mode is that the max is +INF.
Here we are talking about declaring the min resources needed, so unless I missed
something, I'm not sure reactive mode is relevant here.

If I understand correctly, what you want in the end is to use whatever new slots
arrive in the cluster with a minimal waiting period. So why not just leave the
default min-parallelism-increase=1, leave the default scaling-interval.max unset,
and change the default scaling-interval.min from 30s to 0s?
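Roughly, that corresponds to the following in flink-conf.yaml (just a sketch; I'm
assuming the full jobmanager.adaptive-scheduler.* option keys from the docs):

    jobmanager.scheduler: adaptive
    # default, unchanged: rescale as soon as one additional slot can be used
    jobmanager.adaptive-scheduler.min-parallelism-increase: 1
    # changed from the 30s default: no cooldown between successive rescales
    jobmanager.adaptive-scheduler.scaling-interval.min: 0s
    # left unset (default): no forced rescale after a fixed interval
    # jobmanager.adaptive-scheduler.scaling-interval.max: ...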

The only thing is that you will get more frequent rescales (potentially one each
time a slot is added to the cluster), except for slots that arrive during the
stabilization period, which do not lead to an extra rescale.

> Reduce job pause time when cluster resources are expanded in adaptive mode
> --------------------------------------------------------------------------
>
>                 Key: FLINK-35035
>                 URL: https://issues.apache.org/jira/browse/FLINK-35035
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Task
>    Affects Versions: 1.19.0
>            Reporter: yuanfenghu
>            Priority: Minor
>
> When 'jobmanager.scheduler = adaptive', job graph changes triggered by cluster 
> expansion cause long pauses in the running tasks. We should reduce this impact.
> As an example:
> I have a job graph: [v1 (maxp=10, minp=1)] -> [v2 (maxp=10, minp=1)]
> When my cluster has 5 slots, the job is executed as [v1 p5] -> [v2 p5].
> When I add slots, the job graph change is triggered via
> org.apache.flink.runtime.scheduler.adaptive.ResourceListener#onNewResourcesAvailable.
> However, the five new slots I added are not discovered at the same time (for 
> simplicity, assume each TaskManager has one slot). Whatever environment we run 
> in, we cannot guarantee that the new slots are all added at once, so 
> onNewResourcesAvailable is triggered repeatedly.
> If each new slot arrives after some interval, the job graph keeps changing 
> during this period. What I would like is a configurable stabilization time for 
> cluster resources: trigger the job graph change only after the number of 
> cluster slots has been stable for a certain period, to avoid this situation.


