[ https://issues.apache.org/jira/browse/YARN-903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704069#comment-13704069 ]

Omkar Vinit Joshi commented on YARN-903:
----------------------------------------

Yes, it looks like the exception is misleading. At the moment the node manager 
does not remember containers that ran on it in the past, so whenever it gets a 
stopContainer request for a container it does not find running, it logs an 
exception. Also, on the node manager side we cannot tell whether the 
stop-container call came from the resource manager or from the application 
master.
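
For illustration, here is a minimal sketch of that check in plain Java (the 
class and map names are stand-ins, not the actual ContainerManagerImpl code; 
the real path is authorizeGetAndStopContainerRequest, as the stack traces in 
the attached log show):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class StopContainerSketch {
  // stand-in for the NM's map of currently running containers; entries are
  // removed when a container completes, and no history is kept
  private final Map<String, Object> runningContainers =
      new ConcurrentHashMap<>();

  void stopContainer(String containerId) throws Exception {
    if (!runningContainers.containsKey(containerId)) {
      // a duplicate stop for an already-finished container is
      // indistinguishable from a stop for a container that never existed
      throw new Exception("Container " + containerId
          + " is not handled by this NodeManager");
    }
    // ...otherwise deliver the kill event to the container...
  }
}
{code}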
 
When an application finishes, the normal containers as well as the container 
in which the AM was running exit successfully on the node manager side. Before 
exiting, the AM informs the resource manager that the application finished 
successfully; the resource manager in turn tries to stop all of the 
application's containers running on the various node managers. On the node 
manager side, after successful authentication / authorization, it checks 
whether each container is still running. If a container is not running then, 
because we do not store past containers on the node manager, the request is 
logged as an invalid attempt to stop an unknown container. We log this 
exception (in nm.logs and in the NMAuditLogs) for security reasons.
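
As a hedged sketch of that double logging (the helper below and its signature 
are illustrative, not the real NMAuditLogger API; the key=value layout mirrors 
the audit lines in the attached nodemanager log):

{code:java}
class AuditLogSketch {
  // hypothetical helper: the real NM writes to nm.logs via its normal logger
  // and to the audit log via NMAuditLogger
  static void logUnknownStopAttempt(String user, String ip, String appId,
      String containerId) {
    // nm.logs entry (surfaces as the YarnException in the log below)
    System.err.println("Container " + containerId
        + " is not handled by this NodeManager");
    // audit entry, in the same key=value layout the NMAuditLogger uses
    System.out.println("USER=" + user + "\tIP=" + ip
        + "\tOPERATION=Stop Container Request"
        + "\tTARGET=ContainerManagerImpl\tRESULT=FAILURE"
        + "\tAPPID=" + appId + "\tCONTAINERID=" + containerId);
  }
}
{code}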

Once YARN-62 goes in, the node manager will be able to remember recently 
completed containers for some time. That will put us in a position to ignore 
these logically valid stop attempts for that duration.
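
For example, something along these lines (all names and the retention window 
are assumptions about what the YARN-62 follow-up could enable, not the actual 
patch):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class RecentlyCompletedContainers {
  private static final long RETENTION_MS = 10 * 60 * 1000L; // assumed window

  private final Map<String, Long> finishTime = new ConcurrentHashMap<>();

  void onContainerFinished(String containerId) {
    finishTime.put(containerId, System.currentTimeMillis());
  }

  // a stop request for a container that finished inside the window is a
  // logically valid duplicate and can be acknowledged without an exception
  boolean shouldIgnoreStop(String containerId) {
    Long t = finishTime.get(containerId);
    return t != null && System.currentTimeMillis() - t < RETENTION_MS;
  }
}
{code}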
 
Any thoughts?
                
> DistributedShell throwing errors in logs after successful completion
> ---------------------------------------------------------------------
>
>                 Key: YARN-903
>                 URL: https://issues.apache.org/jira/browse/YARN-903
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: applications/distributed-shell
>    Affects Versions: 2.0.4-alpha
>         Environment: Ubuntu 11.10
>            Reporter: Abhishek Kapoor
>            Assignee: Omkar Vinit Joshi
>         Attachments: AppMaster.stderr, 
> yarn-sunny-nodemanager-sunny-Inspiron.log
>
>
> I have tried running DistributedShell and also used its ApplicationMaster 
> for my test.
> The application runs successfully, though it logs some errors that would be 
> useful to fix.
> Below are the logs from the NodeManager and the ApplicationMaster.
> Log Snippet for NodeManager
> =============================
> 2013-07-07 13:39:18,787 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Connecting 
> to ResourceManager at localhost/127.0.0.1:9990. current no. of attempts is 1
> 2013-07-07 13:39:19,050 INFO 
> org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager:
>  Rolling master-key for container-tokens, got key with id -325382586
> 2013-07-07 13:39:19,052 INFO 
> org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: 
> Rolling master-key for nm-tokens, got key with id :1005046570
> 2013-07-07 13:39:19,053 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered 
> with ResourceManager as sunny-Inspiron:9993 with total resource of 
> <memory:10240, vCores:8>
> 2013-07-07 13:39:19,053 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying 
> ContainerManager to unblock new container-requests
> 2013-07-07 13:39:35,256 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
> 2013-07-07 13:39:35,492 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1373184544832_0001_01_000001 by user sunny
> 2013-07-07 13:39:35,507 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Creating a new application reference for app application_1373184544832_0001
> 2013-07-07 13:39:35,511 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny      
> IP=127.0.0.1    OPERATION=Start Container Request       
> TARGET=ContainerManageImpl      RESULT=SUCCESS  
> APPID=application_1373184544832_0001    
> CONTAINERID=container_1373184544832_0001_01_000001
> 2013-07-07 13:39:35,511 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Application application_1373184544832_0001 transitioned from NEW to INITING
> 2013-07-07 13:39:35,512 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Adding container_1373184544832_0001_01_000001 to application 
> application_1373184544832_0001
> 2013-07-07 13:39:35,518 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Application application_1373184544832_0001 transitioned from INITING to 
> RUNNING
> 2013-07-07 13:39:35,528 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000001 transitioned from NEW to 
> LOCALIZING
> 2013-07-07 13:39:35,540 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
>  Resource hdfs://localhost:9000/application/test.jar transitioned from INIT 
> to DOWNLOADING
> 2013-07-07 13:39:35,540 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Created localizer for container_1373184544832_0001_01_000001
> 2013-07-07 13:39:35,675 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/nmPrivate/container_1373184544832_0001_01_000001.tokens.
>  Credentials list: 
> 2013-07-07 13:39:35,694 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Initializing user sunny
> 2013-07-07 13:39:35,803 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying 
> from 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/nmPrivate/container_1373184544832_0001_01_000001.tokens
>  to 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000001.tokens
> 2013-07-07 13:39:35,803 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set 
> to 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001
>  = 
> file:/home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001
> 2013-07-07 13:39:36,136 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:36,406 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
>  Resource hdfs://localhost:9000/application/test.jar transitioned from 
> DOWNLOADING to LOCALIZED
> 2013-07-07 13:39:36,409 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000001 transitioned from 
> LOCALIZING to LOCALIZED
> 2013-07-07 13:39:36,524 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000001 transitioned from LOCALIZED 
> to RUNNING
> 2013-07-07 13:39:36,692 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, -c, 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000001/default_container_executor.sh]
> 2013-07-07 13:39:37,144 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:38,147 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:39,151 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:39,209 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1373184544832_0001_01_000001
> 2013-07-07 13:39:39,259 WARN 
> org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Unexpected: procfs stat 
> file is not in the expected format for process with pid 11552
> 2013-07-07 13:39:39,264 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 29524 for container-id 
> container_1373184544832_0001_01_000001: 79.9 MB of 1 GB physical memory used; 
> 2.2 GB of 2.1 GB virtual memory used
> 2013-07-07 13:39:39,645 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
> 2013-07-07 13:39:39,651 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1373184544832_0001_01_000002 by user sunny
> 2013-07-07 13:39:39,651 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny      
> IP=127.0.0.1    OPERATION=Start Container Request       
> TARGET=ContainerManageImpl      RESULT=SUCCESS  
> APPID=application_1373184544832_0001    
> CONTAINERID=container_1373184544832_0001_01_000002
> 2013-07-07 13:39:39,651 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Adding container_1373184544832_0001_01_000002 to application 
> application_1373184544832_0001
> 2013-07-07 13:39:39,652 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000002 transitioned from NEW to 
> LOCALIZED
> 2013-07-07 13:39:39,660 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_1373184544832_0001_01_000002
> 2013-07-07 13:39:39,661 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Returning container_id {, app_attempt_id {, application_id {, id: 1, 
> cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 2, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:39,728 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000002 transitioned from LOCALIZED 
> to RUNNING
> 2013-07-07 13:39:39,873 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, -c, 
> /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000002/default_container_executor.sh]
> 2013-07-07 13:39:39,898 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Container container_1373184544832_0001_01_000002 succeeded 
> 2013-07-07 13:39:39,899 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000002 transitioned from RUNNING 
> to EXITED_WITH_SUCCESS
> 2013-07-07 13:39:39,900 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1373184544832_0001_01_000002
> 2013-07-07 13:39:39,942 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny      
> OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1373184544832_0001    
> CONTAINERID=container_1373184544832_0001_01_000002
> 2013-07-07 13:39:39,943 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000002 transitioned from 
> EXITED_WITH_SUCCESS to DONE
> 2013-07-07 13:39:39,944 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Removing container_1373184544832_0001_01_000002 from application 
> application_1373184544832_0001
> 2013-07-07 13:39:40,155 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:40,157 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 2, }, state: 
> C_COMPLETE, diagnostics: "", exit_status: 0, 
> 2013-07-07 13:39:40,158 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
> completed container container_1373184544832_0001_01_000002
> 2013-07-07 13:39:40,683 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_1373184544832_0001_01_000002
> 2013-07-07 13:39:40,686 ERROR 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:appattempt_1373184544832_0001_000001 (auth:TOKEN) 
> cause:org.apache.hadoop.yarn.exceptions.YarnException: Container 
> container_1373184544832_0001_01_000002 is not handled by this NodeManager
> 2013-07-07 13:39:40,687 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 9993, call 
> org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainer from 
> 127.0.0.1:51085: error: org.apache.hadoop.yarn.exceptions.YarnException: 
> Container container_1373184544832_0001_01_000002 is not handled by this 
> NodeManager
> org.apache.hadoop.yarn.exceptions.YarnException: Container 
> container_1373184544832_0001_01_000002 is not handled by this NodeManager
>       at 
> org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:45)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeGetAndStopContainerRequest(ContainerManagerImpl.java:614)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stopContainer(ContainerManagerImpl.java:538)
>       at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.stopContainer(ContainerManagementProtocolPBServiceImpl.java:88)
>       at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:85)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1868)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1864)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1862)
> 2013-07-07 13:39:41,162 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_RUNNING, diagnostics: "", exit_status: -1000, 
> 2013-07-07 13:39:41,691 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Container container_1373184544832_0001_01_000001 succeeded 
> 2013-07-07 13:39:41,692 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000001 transitioned from RUNNING 
> to EXITED_WITH_SUCCESS
> 2013-07-07 13:39:41,692 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1373184544832_0001_01_000001
> 2013-07-07 13:39:41,714 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny      
> OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1373184544832_0001    
> CONTAINERID=container_1373184544832_0001_01_000001
> 2013-07-07 13:39:41,714 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Container container_1373184544832_0001_01_000001 transitioned from 
> EXITED_WITH_SUCCESS to DONE
> 2013-07-07 13:39:41,714 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Removing container_1373184544832_0001_01_000001 from application 
> application_1373184544832_0001
> 2013-07-07 13:39:42,166 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 
> status for container: container_id {, app_attempt_id {, application_id {, id: 
> 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: 
> C_COMPLETE, diagnostics: "", exit_status: 0, 
> 2013-07-07 13:39:42,166 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
> completed container container_1373184544832_0001_01_000001
> 2013-07-07 13:39:42,191 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
> 2013-07-07 13:39:42,195 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_1373184544832_0001_01_000001
> 2013-07-07 13:39:42,196 ERROR 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:appattempt_1373184544832_0001_000001 (auth:TOKEN) 
> cause:org.apache.hadoop.yarn.exceptions.YarnException: Container 
> container_1373184544832_0001_01_000001 is not handled by this NodeManager
> 2013-07-07 13:39:42,196 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 5 on 9993, call 
> org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainer from 
> 127.0.0.1:51086: error: org.apache.hadoop.yarn.exceptions.YarnException: 
> Container container_1373184544832_0001_01_000001 is not handled by this 
> NodeManager
> org.apache.hadoop.yarn.exceptions.YarnException: Container 
> container_1373184544832_0001_01_000001 is not handled by this NodeManager
>       at 
> org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:45)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeGetAndStopContainerRequest(ContainerManagerImpl.java:614)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stopContainer(ContainerManagerImpl.java:538)
>       at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.stopContainer(ContainerManagementProtocolPBServiceImpl.java:88)
>       at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:85)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1868)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1864)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1862)
> 2013-07-07 13:39:42,264 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1373184544832_0001_01_000002
> 2013-07-07 13:39:42,265 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1373184544832_0001_01_000002
> 2013-07-07 13:39:42,265 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1373184544832_0001_01_000001
> 2013-07-07 13:39:43,173 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Application application_1373184544832_0001 transitioned from RUNNING to 
> APPLICATION_RESOURCES_CLEANINGUP
> 2013-07-07 13:39:43,174 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1373184544832_0001
> 2013-07-07 13:39:43,180 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>  Application application_1373184544832_0001 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2013-07-07 13:39:43,180 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
>  Scheduling Log Deletion for application: application_1373184544832_0001, 
> with delay of 10800 seconds
> Log Snippet for ApplicationMaster
> ==================================
> 13/07/07 13:39:36 INFO client.SimpleApplicationMaster: Initializing 
> ApplicationMaster
> 13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Application master for 
> app, appId=1, clustertimestamp=1373184544832, attemptId=1
> 13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Starting 
> ApplicationMaster
> 13/07/07 13:39:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 13/07/07 13:39:37 INFO impl.NMClientAsyncImpl: Upper bound of the thread pool 
> size is 500
> 13/07/07 13:39:37 INFO impl.ContainerManagementProtocolProxy: 
> yarn.client.max-nodemanagers-proxies : 500
> 13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Max mem capabililty of 
> resources in this cluster 8192
> 13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Requested container 
> ask: Capability[<memory:100, vCores:0>]Priority[0]ContainerCount[1]
> 13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Got response from RM 
> for container ask, allocatedCnt=1
> 13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Launching shell 
> command on a new container., 
> containerId=container_1373184544832_0001_01_000002, 
> containerNode=sunny-Inspiron:9993, containerNodeURI=sunny-Inspiron:8042, 
> containerResourceMemory1024
> 13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Setting up container 
> launch container for containerid=container_1373184544832_0001_01_000002
> 13/07/07 13:39:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
> START_CONTAINER for Container container_1373184544832_0001_01_000002
> 13/07/07 13:39:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> sunny-Inspiron:9993
> 13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Succeeded to start 
> Container container_1373184544832_0001_01_000002
> 13/07/07 13:39:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
> QUERY_CONTAINER for Container container_1373184544832_0001_01_000002
> 13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Got response from RM 
> for container ask, completedCnt=1
> 13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Got container status 
> for containerID=container_1373184544832_0001_01_000002, state=COMPLETE, 
> exitStatus=0, diagnostics=
> 13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Container completed 
> successfully., containerId=container_1373184544832_0001_01_000002
> 13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Application completed. 
> Stopping running containers
> 13/07/07 13:39:40 ERROR impl.NMClientImpl: Failed to stop Container 
> container_1373184544832_0001_01_000002when stopping NMClientImpl
> 13/07/07 13:39:40 INFO impl.ContainerManagementProtocolProxy: Closing proxy : 
> sunny-Inspiron:9993
> 13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Application completed. 
> Signalling finish to RM
> 13/07/07 13:39:41 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting 
> for queue
> java.lang.InterruptedException
>       at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
>       at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
>       at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
>       at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:281)
> 13/07/07 13:39:41 INFO client.SimpleApplicationMaster: Application Master 
> completed successfully. exiting

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
