[ https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16573232#comment-16573232 ]
Jason Lowe commented on YARN-8609:
----------------------------------

This JIRA does mention all those things, and it now points to YARN-3998 as the fix (I just linked the two JIRAs). If we resolve it as fixed with a patch that only truncates individual diagnostic messages, that will not prevent an OOM when something adds a large number of separate diagnostic messages to a container. It would be a partial fix for the OOM, while YARN-3998 is a complete fix.

> NM OOM because of large container statuses
> ------------------------------------------
>
>                 Key: YARN-8609
>                 URL: https://issues.apache.org/jira/browse/YARN-8609
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>            Reporter: Xianghao Lu
>            Priority: Major
>         Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
> Sometimes the NodeManager sends very large container statuses to the ResourceManager when it starts up with recovery; as a result, the NodeManager fails to start because of an OOM.
> In my case, the container statuses totaled 135 MB across 11 container statuses, and I found that the diagnostics of 5 of those containers were very large (27 MB), so I truncate the container diagnostics in the attached patch.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
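The truncation approach described in the report can be sketched roughly as below. This is a hypothetical illustration only: the class name, method name, limit value, and truncation marker are assumptions for the sketch, not the actual contents of YARN-8609.001.patch.

```java
// Hypothetical sketch: cap a container's diagnostics string at a fixed
// length before it is included in the container status reported to the
// ResourceManager, so a handful of huge diagnostics cannot blow up the
// NodeManager's heap on recovery.
public class DiagnosticsTruncator {
    // Illustrative cap; a real patch would likely make this configurable.
    static final int MAX_DIAGNOSTICS_LENGTH = 10_000;

    static final String TRUNCATION_MARKER = "...[truncated]";

    // Returns the diagnostics unchanged if they fit within the cap,
    // otherwise returns a prefix of the diagnostics plus a marker so the
    // result is exactly MAX_DIAGNOSTICS_LENGTH characters long.
    static String truncate(String diagnostics) {
        if (diagnostics == null
                || diagnostics.length() <= MAX_DIAGNOSTICS_LENGTH) {
            return diagnostics;
        }
        int keep = MAX_DIAGNOSTICS_LENGTH - TRUNCATION_MARKER.length();
        return diagnostics.substring(0, keep) + TRUNCATION_MARKER;
    }
}
```

As the comment notes, per-message truncation like this bounds each individual diagnostics string but not the number of diagnostic messages, which is why it is only a partial fix relative to YARN-3998.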