[ https://issues.apache.org/jira/browse/MAPREDUCE-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416193#comment-13416193 ]

Jason Lowe commented on MAPREDUCE-4448:
---------------------------------------

{{stopContainer}} and {{stopApp}} are simply called in response to receiving 
the corresponding events from other subsystems within the nodemanager.  Those 
subsystems are not (and should not be) aware of whether log aggregation 
initialized successfully.  They just fire off the event to inform the log 
aggregation service and move on.
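
A minimal sketch of that fire-and-forget pattern, using hypothetical names 
({{LogEvent}}, {{AppLogAggregatorStub}}, etc.) rather than the actual 
nodemanager classes: the event handler looks up the per-app aggregator and, 
when initialization never happened, logs and returns instead of dereferencing 
null and taking down the NM.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified sketch of the event-driven pattern described
// above; not the actual YARN classes.
public class LogAggregationSketch {

    enum LogEventType { CONTAINER_FINISHED, APPLICATION_FINISHED }

    record LogEvent(LogEventType type, String appId, String containerId) {}

    static class AppLogAggregatorStub {
        void startContainerLogAggregation(String containerId) {
            System.out.println("aggregating logs for " + containerId);
        }
        void finishLogAggregation() {
            System.out.println("finishing app-level aggregation");
        }
    }

    // One aggregator per application; the entry is absent if init failed.
    private final Map<String, AppLogAggregatorStub> appAggregators =
            new ConcurrentHashMap<>();

    // Other subsystems fire events here and move on; they never check
    // whether log aggregation initialized successfully.
    public void handle(LogEvent event) {
        switch (event.type()) {
            case CONTAINER_FINISHED -> stopContainer(event);
            case APPLICATION_FINISHED -> stopApp(event);
        }
    }

    private void stopContainer(LogEvent event) {
        AppLogAggregatorStub aggregator = appAggregators.get(event.appId());
        if (aggregator == null) {
            // Aggregation never started for this app; warn and return
            // instead of crashing the nodemanager.
            System.err.println("WARN: log aggregation not started for app "
                    + event.appId() + "; skipping container "
                    + event.containerId());
            return;
        }
        aggregator.startContainerLogAggregation(event.containerId());
    }

    private void stopApp(LogEvent event) {
        AppLogAggregatorStub aggregator = appAggregators.remove(event.appId());
        if (aggregator == null) {
            System.err.println("WARN: log aggregation not started for app "
                    + event.appId() + "; nothing to finish");
            return;
        }
        aggregator.finishLogAggregation();
    }
}
{code}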

As for debug vs. info, I think it's pretty important to log when the 
aggregation service isn't doing its job properly, as it would help explain why 
logs are missing.  This isn't a failure we expect to happen frequently, so I 
don't think it's going to clutter the logs.  And if log aggregation did 
initialize successfully but the aggregation instance was somehow "lost" before 
the app completed, that's a real problem we'd want to know about.
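
One illustrative way to surface that distinction, again with hypothetical 
names: track which apps successfully initialized aggregation, so a missing 
aggregator can be logged as "init never happened" (expected, but worth a 
visible line since the app's logs will be missing) versus "lost after init" 
(a real bug).

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: distinguishing "init failed" from "aggregator lost
// after successful init" at lookup time. Names are illustrative only.
public class AggregatorLookup {
    private final Map<String, Object> aggregators = new ConcurrentHashMap<>();
    private final Set<String> initializedApps = ConcurrentHashMap.newKeySet();

    void onInitSuccess(String appId, Object aggregator) {
        aggregators.put(appId, aggregator);
        initializedApps.add(appId);
    }

    Object lookup(String appId) {
        Object aggregator = aggregators.get(appId);
        if (aggregator == null) {
            if (initializedApps.contains(appId)) {
                // Initialized but gone before the app completed: a real
                // problem, so log loudly rather than at debug level.
                System.err.println("ERROR: aggregator for app " + appId
                        + " was lost after successful init");
            } else {
                // Init never succeeded; a visible warning explains why
                // this app's logs are missing.
                System.err.println("WARN: log aggregation never started "
                        + "for app " + appId);
            }
        }
        return aggregator;
    }
}
{code}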
                
> Nodemanager crashes upon application cleanup if aggregation failed to start
> ---------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4448
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4448
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2, nodemanager
>    Affects Versions: 0.23.3, 2.0.1-alpha
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>            Priority: Critical
>         Attachments: MAPREDUCE-4448.patch
>
>
> When log aggregation is enabled, the nodemanager can crash if log aggregation 
> for an application failed to start.
