[ https://issues.apache.org/jira/browse/YARN-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522972#comment-16522972 ]
Wangda Tan commented on YARN-8459:
----------------------------------

Attached ver.1 patch to run Jenkins. I feel it might not be straightforward to add tests, since we would need a lot of mocking. I'm thinking of adding a chaos-monkey-like UT that just randomly starts/stops nodes/apps; we should be able to get some interesting results from that (a rough sketch is at the end of this message). Will update ver.2 patch with tests.

cc: [~sunil.gov...@gmail.com], [~Tao Yang], [~cheersyang].

> Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-8459
>                 URL: https://issues.apache.org/jira/browse/YARN-8459
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler
>    Affects Versions: 3.1.0
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>            Priority: Blocker
>      Attachments: YARN-8459.001.patch
>
>
> Thanks [~gopalv] for reporting this issue.
> In async mode, the Capacity Scheduler can allocate/reserve containers on a node/app while that node/app is being removed ({{doneApplicationAttempt}}/{{removeNode}}). This can cause issues, for example:
> a. A container for app_1 is reserved on node_x.
> b. At the same time, app_1 is being removed.
> c. The reserve-on-node operation finishes after app_1 has been removed ({{doneApplicationAttempt}}).
> For all future runs, node_x is completely blocked by the invalid reservation: it keeps reporting "Trying to schedule for a finished app, please double check" for node_x.
> We need a fix to make sure this won't happen.
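To make the needed fix concrete, below is a minimal sketch of one possible re-check: before committing a proposal produced by an async scheduling thread, verify under the scheduler write lock that both the app attempt and the node are still known to the scheduler. This is an illustration only, not the attached patch; every name in it is a hypothetical placeholder for the corresponding CapacityScheduler state.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of the re-check idea, not the actual patch.
 * liveApps/liveNodes stand in for the scheduler's app/node maps.
 */
public class AllocationCommitGuard {
  private final Map<String, Object> liveApps = new ConcurrentHashMap<>();
  private final Map<String, Object> liveNodes = new ConcurrentHashMap<>();
  private final Object schedulerWriteLock = new Object();

  /** Returns true only if the proposal was actually committed. */
  public boolean tryCommit(String appAttemptId, String nodeId) {
    synchronized (schedulerWriteLock) {
      // doneApplicationAttempt()/removeNode() would run under the same
      // lock, so a proposal computed against stale state is rejected
      // here instead of leaving a permanent invalid reservation on the
      // node.
      if (!liveApps.containsKey(appAttemptId)
          || !liveNodes.containsKey(nodeId)) {
        return false; // app finished or node removed since the proposal
      }
      // ... apply the reservation/allocation to the app and the node ...
      return true;
    }
  }
}
{code}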
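And the rough sketch of the chaos-monkey-like UT mentioned above: randomly start/stop nodes/apps and check scheduler invariants after every step. The in-memory lists and helpers below are placeholders; a real test would drive MockRM/MockNM instead.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Hypothetical chaos-test skeleton; all names are placeholders. */
public class SchedulerChaosSketch {
  private final Random random = new Random(42); // fixed seed so failures replay
  private final List<String> nodes = new ArrayList<>();
  private final List<String> apps = new ArrayList<>();

  public void run(int steps) {
    for (int i = 0; i < steps; i++) {
      switch (random.nextInt(4)) {
        case 0: // start a node
          nodes.add("node_" + i); break;
        case 1: // stop a random live node
          if (!nodes.isEmpty()) nodes.remove(random.nextInt(nodes.size())); break;
        case 2: // submit an app
          apps.add("app_" + i); break;
        case 3: // kill a random running app
          if (!apps.isEmpty()) apps.remove(random.nextInt(apps.size())); break;
      }
      verifyInvariants();
    }
  }

  private void verifyInvariants() {
    // In the real test: assert no container is reserved for a finished
    // app and no allocation targets a removed node.
  }
}
{code}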