[ 
https://issues.apache.org/jira/browse/FLINK-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-13531.
------------------------------------
    Fix Version/s: 1.10.0
       Resolution: Fixed

master: f9b7467c12d6f7f17d198fc4c9ee454c29739216

> Do not print log and call 'release' if no requests should be evicted from the 
> shared slot
> -----------------------------------------------------------------------------------------
>
>                 Key: FLINK-13531
>                 URL: https://issues.apache.org/jira/browse/FLINK-13531
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.9.0
>            Reporter: Yun Gao
>            Assignee: Yun Gao
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 1.10.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> After adding the logic to bookkeep the resources used in shared slots, 
> resource requests are recorded inside the MultiTaskSlot. When the underlying 
> slot is allocated, all recorded requests are checked for over-subscription; 
> if over-subscription is detected, some requests will be failed.
> In the current implementation, the code does not check how many requests 
> need to be failed before printing the over-allocation debug log and 
> attempting to fail them. This should not cause actual errors, but it will 
>  # Print a debug log saying that some requests will be failed even when 
> there are none to fail.
>  # If the total number of requests is 0 (this is possible if there is 
> already an AllocatedSlot before the first request), the _release_ method 
> will be called. Although this does nothing in the current implementation 
> (the slot is still being created and has not been added to any other data 
> structure), it may cause errors if the release logic changes in the future.
> To fix this issue, we should add an explicit check on the number of 
> requests to fail.
>  
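The fix described in the issue amounts to guarding the debug log and the release call behind a size check. A minimal sketch of that guard, assuming illustrative names only (this is not Flink's actual MultiTaskSlot API):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: only log and evict when requests actually exceed the slot's capacity. */
class OverAllocationCheck {

    /**
     * Removes and returns the requests that exceed {@code capacity}.
     * If nothing needs to be evicted, neither the debug log nor any
     * release-style logic is triggered (the explicit check from the fix).
     */
    static List<String> evictOverAllocated(
            List<String> pendingRequests, int capacity, List<String> debugLog) {
        int numToFail = pendingRequests.size() - capacity;
        // Explicit guard: skip logging and eviction when there is nothing to fail.
        if (numToFail <= 0) {
            return new ArrayList<>();
        }
        debugLog.add("Over-allocated: failing " + numToFail + " request(s)");
        List<String> overflow =
                pendingRequests.subList(capacity, pendingRequests.size());
        List<String> failed = new ArrayList<>(overflow);
        overflow.clear();
        return failed;
    }
}
```

With the guard in place, an empty (or under-subscribed) request set produces no log line and touches no state, matching the behavior the issue asks for.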



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
