[ https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Kanter updated YARN-5566:
--------------------------------
    Attachment: YARN-5566.002.patch

My test was doing something wrong. After I fixed that, the 001 patch stopped helping (which makes more sense, because that code never actually handled the DECOMMISSIONING --> UNHEALTHY transition). I put back the code that YARN-4676 removed, which you mentioned, but tweaked it a little and moved it above the {{getIsNodeHealthy}} call so that a node can transition to DECOMMISSIONED even if it is currently UNHEALTHY.

I temporarily added a bunch more log statements to help investigate, and saw that sometimes {{handleContainerStatus}} (when called from {{StatusUpdateWhenHealthyTransition}}) would add an Application to {{runningApplications}}, but then nothing ever removed it. This happened far more frequently for DECOMMISSIONING nodes, though I did see it happen once on a normal node. There's a piece of code there that adds an Application to {{runningApplications}} whenever it sees a Container whose Application is not in {{runningApplications}}. I changed that code to call {{handleRunningAppOnNode}} instead of simply adding the Application, which basically makes it check that the Application still exists first. I'm not exactly sure why this is happening, but from what I can tell the issue depends on the timing of events, and somehow DECOMMISSIONING makes it more likely to happen.

I've attached a 002 patch with the new changes. I ran my test over 150 times with the 002 patch and it passed every time. When I ran my test without the patch (or with the 001 patch, or with just the code removed by YARN-4676 added back), it would fail on the first run, except for one time where it failed on the second. [~djp], please take a look.
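For reviewers skimming the comment above, here is a minimal standalone sketch of the two behaviors described: (1) on a container status for an untracked Application, verify the app still exists (the {{handleRunningAppOnNode}} change) instead of blindly re-adding it, and (2) check the decommissioning-complete condition before the health check so a DECOMMISSIONING node can still reach DECOMMISSIONED while unhealthy. This is not the real {{RMNodeImpl}} code; the classes and fields here are simplified stand-ins whose names merely echo the ones discussed.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified model of the two fixes; not the actual RMNodeImpl classes.
public class DecomSketch {
    enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED, UNHEALTHY }

    static class Node {
        NodeState state = NodeState.DECOMMISSIONING;
        final Set<String> runningApplications = new HashSet<>();
        final Set<String> liveClusterApps; // apps the RM still knows about

        Node(Set<String> liveClusterApps) { this.liveClusterApps = liveClusterApps; }

        // Fix 1: a container status for an untracked app no longer adds the
        // app directly; it goes through a check that the app still exists.
        void handleContainerStatus(String appId) {
            if (!runningApplications.contains(appId)) {
                handleRunningAppOnNode(appId);
            }
        }

        void handleRunningAppOnNode(String appId) {
            if (liveClusterApps.contains(appId)) {
                runningApplications.add(appId); // app is live: track it
            }
            // otherwise the app already finished; don't resurrect it, which
            // would leave a phantom entry that blocks decommissioning
        }

        // Fix 2: the decommissioning check runs BEFORE the health check, so
        // an idle DECOMMISSIONING node decommissions even while unhealthy.
        NodeState statusUpdate(boolean healthy) {
            if (state == NodeState.DECOMMISSIONING && runningApplications.isEmpty()) {
                state = NodeState.DECOMMISSIONED;
                return state;
            }
            if (!healthy) {
                state = NodeState.UNHEALTHY;
            }
            return state;
        }
    }

    public static void main(String[] args) {
        Node node = new Node(new HashSet<>()); // no live apps in the cluster

        // A status report for an already-finished app no longer re-adds it.
        node.handleContainerStatus("app_001");
        System.out.println("tracked=" + node.runningApplications.size());

        // With no running apps, the node decommissions even if unhealthy.
        System.out.println("state=" + node.statusUpdate(false));
    }
}
```

Running this prints a tracked-app count of 0 and a final state of DECOMMISSIONED, mirroring the expected behavior after the 002 patch: a finished app can no longer linger in {{runningApplications}} and pin the node in DECOMMISSIONING until the timeout.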
> client-side NM graceful decom doesn't trigger when jobs finish
> --------------------------------------------------------------
>
>                 Key: YARN-5566
>                 URL: https://issues.apache.org/jira/browse/YARN-5566
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager
>    Affects Versions: 2.8.0
>            Reporter: Robert Kanter
>            Assignee: Robert Kanter
>         Attachments: YARN-5566.001.patch, YARN-5566.002.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it
> was always waiting for the timeout, even if all jobs running on that node (or
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions NodeA
> NodeA should have decommissioned at 6:00am.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)