[ 
https://issues.apache.org/jira/browse/FLINK-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang closed FLINK-11037.
----------------------------
    Resolution: Later

I guess we will not have much time to work further on this for now, as there is 
no evidence of problems ATM. I will close it for cleanup until we find it 
necessary to reopen in the future.

> Introduce another greedy mechanism for distributing floating buffers
> --------------------------------------------------------------------
>
>                 Key: FLINK-11037
>                 URL: https://issues.apache.org/jira/browse/FLINK-11037
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Network
>    Affects Versions: 1.8.0
>            Reporter: Zhijiang
>            Assignee: Zhijiang
>            Priority: Minor
>
> The current mechanism for distributing floating buffers is fair to all the 
> listeners: each input channel can request only one floating buffer at a time, 
> even though the channel may actually need more. The channel therefore has to 
> loop, requesting one buffer per call, until its requirement is satisfied or 
> the pool is exhausted (see the first sketch below).
> Generally speaking, this seems fair to all the concurrent channels driven by 
> the netty NIO thread. But every request from the LocalBufferPool has to take 
> a synchronized lock, and it is hard to say which way of distributing the 
> available floating buffers behaves better in real scenarios.
> Therefore we propose an alternative greedy mechanism that requests more 
> floating buffers at a time. In the extreme case, a channel could request all 
> of its required buffers in one call, or a configurable fraction of them. On 
> the other side, the LocalBufferPool can also decide how many floating buffers 
> should be assigned based on factors such as the total number of channels and 
> the total number of floating buffers (see the second sketch below).
> The motivation is to make better use of the floating buffer resources; extra 
> metrics may be needed to adjust the mechanism dynamically.
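
To make the contrast concrete, here is a minimal sketch of the current fair, 
one-buffer-per-request loop. The names (Buffer, FloatingBufferPool, 
requestFloatingBuffers) are illustrative stand-ins under assumed semantics, not 
Flink's actual network-stack API.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Sketch of the current fair distribution: one buffer per request,
    // one lock acquisition per buffer.
    public class FairDistributionSketch {

        static final class Buffer { }

        // A pool whose single-buffer request is synchronized, so concurrent
        // channels contend on the same lock once per buffer.
        static final class FloatingBufferPool {
            private final Queue<Buffer> available = new ArrayDeque<>();

            FloatingBufferPool(int size) {
                for (int i = 0; i < size; i++) {
                    available.add(new Buffer());
                }
            }

            // One buffer per call; null when the pool is exhausted.
            synchronized Buffer requestBuffer() {
                return available.poll();
            }
        }

        // The per-channel loop described above: keep asking for single
        // buffers until the requirement is satisfied or the pool is empty.
        static int requestFloatingBuffers(FloatingBufferPool pool, int required) {
            int obtained = 0;
            while (obtained < required && pool.requestBuffer() != null) {
                obtained++;
            }
            return obtained;
        }

        public static void main(String[] args) {
            FloatingBufferPool pool = new FloatingBufferPool(8);
            // A channel needing 5 buffers pays 5 lock acquisitions.
            System.out.println("obtained: " + requestFloatingBuffers(pool, 5));
        }
    }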
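
And a sketch of the proposed greedy variant: the channel asks for a batch of 
buffers under a single lock acquisition, and the pool may cap the grant. The 
even-share capping policy below is a made-up example of the "based on some 
factors" idea from the ticket, not a concrete proposal; batching trades strict 
per-buffer fairness for fewer lock acquisitions, and the pool-side cap is what 
keeps one hungry channel from draining the pool.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    // Sketch of the proposed greedy distribution: a batch of buffers
    // per request, one lock acquisition per batch.
    public class GreedyDistributionSketch {

        static final class Buffer { }

        static final class FloatingBufferPool {
            private final Queue<Buffer> available = new ArrayDeque<>();
            private final int totalChannels;

            FloatingBufferPool(int size, int totalChannels) {
                this.totalChannels = totalChannels;
                for (int i = 0; i < size; i++) {
                    available.add(new Buffer());
                }
            }

            // Hand out up to 'required' buffers in one synchronized call.
            // The pool caps the batch at an even share per channel (an
            // illustrative policy, chosen here for simplicity).
            synchronized List<Buffer> requestBuffers(int required) {
                int evenShare = Math.max(1, available.size() / totalChannels);
                int grant = Math.min(required, evenShare);
                List<Buffer> granted = new ArrayList<>(grant);
                for (int i = 0; i < grant && !available.isEmpty(); i++) {
                    granted.add(available.poll());
                }
                return granted;
            }
        }

        public static void main(String[] args) {
            FloatingBufferPool pool = new FloatingBufferPool(8, 2);
            // One lock acquisition covers the whole batch instead of
            // one per buffer.
            System.out.println("granted: " + pool.requestBuffers(5).size());
        }
    }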



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
