[ https://issues.apache.org/jira/browse/FLINK-30198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646961#comment-17646961 ]

Zhu Zhu edited comment on FLINK-30198 at 2/9/23 9:41 AM:
---------------------------------------------------------

In our experience, map tasks can also be heavy, depending on the 
implementation, e.g. a complex Calc.
Allowing users to specify the expected data size to consume at the vertex level 
would be flexible. However, it might be too complex for users to tune this for 
each vertex. Also, we will keep improving the adaptive batch scheduler in 
upcoming versions. Therefore, I prefer not to add this feature to core at the 
moment. 
We may reconsider this once the adaptive batch scheduler becomes stable. 
Alternatively, in the future we may make VertexParallelismDecider pluggable, so 
that users can customize their own strategy, including a vertex-wise 
configurable data size to consume.
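To illustrate the pluggable-decider idea, here is a minimal sketch of a custom parallelism decider with per-vertex-type target sizes. The class name, method signature, and fields are all hypothetical, assumed only for illustration; they are not Flink's actual VertexParallelismDecider API.

```java
/**
 * Hypothetical sketch of a pluggable parallelism decider with a separate
 * per-task data-size target for reducer-style vertices. Names and signatures
 * are illustrative only, not Flink's internal API.
 */
public class PerVertexSizeDecider {

    private final long defaultBytesPerTask;  // target size for map-style vertices
    private final long reducerBytesPerTask;  // smaller target for heavy reducer vertices
    private final int maxParallelism;        // upper bound on decided parallelism

    public PerVertexSizeDecider(long defaultBytesPerTask,
                                long reducerBytesPerTask,
                                int maxParallelism) {
        this.defaultBytesPerTask = defaultBytesPerTask;
        this.reducerBytesPerTask = reducerBytesPerTask;
        this.maxParallelism = maxParallelism;
    }

    /**
     * Decide parallelism so each task consumes roughly the target data size.
     * A vertex with a hash (keyed) input edge is treated as a reducer and
     * gets the smaller per-task target, i.e. a higher parallelism.
     */
    public int decideParallelism(long totalConsumedBytes, boolean hasHashInput) {
        long target = hasHashInput ? reducerBytesPerTask : defaultBytesPerTask;
        long p = (totalConsumedBytes + target - 1) / target; // ceiling division
        return (int) Math.max(1, Math.min(p, maxParallelism));
    }
}
```

With a 1 GiB default target and a 256 MiB reducer target, a vertex consuming 10 GiB would get parallelism 10 as a mapper but 40 as a reducer.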


> Support AdaptiveBatchScheduler to set per-task size for reducer task 
> ---------------------------------------------------------------------
>
>                 Key: FLINK-30198
>                 URL: https://issues.apache.org/jira/browse/FLINK-30198
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Coordination
>            Reporter: Aitozi
>            Priority: Major
>
> When we use the AdaptiveBatchScheduler in our case, we found that it works 
> well in most cases, but there is a limitation: there is only one global 
> parameter for the per-task data size, 
> {{jobmanager.adaptive-batch-scheduler.avg-data-volume-per-task}}. 
> However, in a map-reduce architecture, the reducer tasks usually have 
> more complex computation logic, such as aggregate/sort/join operators. So I 
> think it would be nicer if we could set the reducer and mapper tasks' 
> per-task data sizes individually.
> Then, how do we distinguish the reducer tasks?
> IMO, we can let the parallelism decider know whether the vertex has hash 
> edge inputs. If yes, it should be a reducer task.
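For context, the existing knob is a single global target applied to every vertex; the proposal amounts to splitting it by vertex type. The second key below is a hypothetical illustration of the proposed option, not an existing Flink configuration option.

```yaml
# Existing global option: average data volume each task should consume.
jobmanager.adaptive-batch-scheduler.avg-data-volume-per-task: 1g

# Hypothetical per-type option sketched in this issue (NOT a real Flink key):
# a smaller target for vertices with hash-edge inputs, i.e. reducer tasks.
# jobmanager.adaptive-batch-scheduler.avg-data-volume-per-reducer-task: 256m
```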



--
This message was sent by Atlassian Jira
(v8.20.10#820010)