I am setting up a small YARN/Spark cluster. The Hadoop/YARN version is 2.7.3, and I can run the WordCount MapReduce example on YARN correctly. I am using spark-2.0.1-bin-hadoop2.7 and submit the SparkPi example with this command:

~/spark-2.0.1-bin-hadoop2.7$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client examples/jars/spark-examples_2.11-2.0.1.jar 10000

It fails, and the first error is:

16/10/20 18:12:03 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.161.219.189, 39161)
16/10/20 18:12:03 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@76ad6715{/metrics/json,null,AVAILABLE}
16/10/20 18:12:12 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/10/20 18:12:12 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ai-hz1-spark1, PROXY_URI_BASES -> http://ai-hz1-spark1:8088/proxy/application_1476957324184_0002), /proxy/application_1476957324184_0002
16/10/20 18:12:12 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/10/20 18:12:12 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/10/20 18:12:12 WARN spark.SparkContext: Use an existing SparkContext, some configuration may not take effect.
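One side note on the command above: in Spark 2.x the all-in-one master URL yarn-client is deprecated in favor of specifying the master and the deploy mode separately. The same submission in the non-deprecated form would be (same class, jar, and argument as above; this is only a sketch of the equivalent invocation, not a claimed fix for the failure):

```shell
# Equivalent spark-submit invocation without the deprecated "yarn-client"
# master URL: use --master yarn plus an explicit client deploy mode.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  examples/jars/spark-examples_2.11-2.0.1.jar 10000
```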
16/10/20 18:12:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@489091bd{/SQL,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1de9b505{/SQL/json,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@378f002a{/SQL/execution,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2cc75074{/SQL/execution/json,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d64160c{/static/sql,null,AVAILABLE}
16/10/20 18:12:12 INFO internal.SharedState: Warehouse path is '/home/hadoop/spark-2.0.1-bin-hadoop2.7/spark-warehouse'.
16/10/20 18:12:13 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10000 output partitions
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Missing parents: List()
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
16/10/20 18:12:13 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1832.0 B, free 366.3 MB)
16/10/20 18:12:13 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1169.0 B, free 366.3 MB)
16/10/20 18:12:13 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.161.219.189:39161 (size: 1169.0 B, free: 366.3 MB)
16/10/20 18:12:13 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Submitting 10000 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34)
16/10/20 18:12:13 INFO cluster.YarnScheduler: Adding task set 0.0 with 10000 tasks
16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
16/10/20 18:12:14 INFO server.ServerConnector: Stopped ServerConnector@389adf1d{HTTP/1.1}{0.0.0.0:4040}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@841e575{/stages/stage/kill,null,UNAVAILABLE}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@66629f63{/api,null,UNAVAILABLE}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2b62442c{/,null,UNAVAILABLE}
I also used yarn logs to retrieve the application logs from YARN (the full log is very lengthy; it is in the attachment):

16/10/20 18:12:03 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT -> http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 2 --hostname ai-hz1-spark3 --cores 1 --app-id application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark5:55857
16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark3:51061
16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status: UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
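For reference, the aggregated per-container logs below were retrieved with the standard YARN log CLI (this assumes log aggregation is enabled on the cluster; the application ID is the one shown above):

```shell
# Fetch the aggregated logs for the failed application from YARN.
yarn logs -applicationId application_1476957324184_0002
```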
Container: container_1476957324184_0002_01_000001 on ai-hz1-spark2_36299
==========================================================================
LogType:stderr
Log Upload Time:Thu Oct 20 18:12:13 +0800 2016
LogLength:8737
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/yarndata/local-dir/usercache/hadoop/filecache/12/__spark_libs__1768433196339904263.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/20 18:12:00 INFO util.SignalUtils: Registered signal handler for TERM
16/10/20 18:12:00 INFO util.SignalUtils: Registered signal handler for HUP
16/10/20 18:12:00 INFO util.SignalUtils: Registered signal handler for INT
16/10/20 18:12:01 INFO yarn.ApplicationMaster: Preparing Local resources
16/10/20 18:12:02 INFO yarn.ApplicationMaster: Prepared Local resources Map(__spark_libs__ -> resource { scheme: "hdfs" host: "ai-hz1-spark1" port: -1 file: "/user/hadoop/.sparkStaging/application_1476957324184_0002/__spark_libs__1768433196339904263.zip" } size: 192507295 timestamp: 1476958317234 type: ARCHIVE visibility: PRIVATE, __spark_conf__ -> resource { scheme: "hdfs" host: "ai-hz1-spark1" port: -1 file: "/user/hadoop/.sparkStaging/application_1476957324184_0002/__spark_conf__.zip" } size: 86556 timestamp: 1476958317387 type: ARCHIVE visibility: PRIVATE)
16/10/20 18:12:02 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1476957324184_0002_000001
16/10/20 18:12:02 INFO spark.SecurityManager: Changing view acls to: hadoop
16/10/20 18:12:02 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/10/20 18:12:02 INFO spark.SecurityManager: Changing view acls groups to:
16/10/20 18:12:02 INFO spark.SecurityManager: Changing modify acls groups to:
16/10/20 18:12:02 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
16/10/20 18:12:02 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
16/10/20 18:12:02 INFO yarn.ApplicationMaster: Driver now available: 10.161.219.189:60657
16/10/20 18:12:02 INFO client.TransportClientFactory: Successfully created connection to /10.161.219.189:60657 after 83 ms (0 ms spent in bootstraps)
16/10/20 18:12:02 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ai-hz1-spark1, PROXY_URI_BASES -> http://ai-hz1-spark1:8088/proxy/application_1476957324184_0002),/proxy/application_1476957324184_0002)
16/10/20 18:12:02 INFO client.RMProxy: Connecting to ResourceManager at ai-hz1-spark1/10.161.219.189:8030
16/10/20 18:12:02 INFO yarn.YarnRMClient: Registering the ApplicationMaster
16/10/20 18:12:03 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/10/20 18:12:03 INFO yarn.YarnAllocator: Canceled 0 container requests (locality no longer needed)
16/10/20 18:12:03 INFO yarn.YarnAllocator: Submitted container request (host: Any, capability: <memory:1408, vCores:1>)
16/10/20 18:12:03 INFO yarn.YarnAllocator: Submitted container request (host: Any, capability: <memory:1408, vCores:1>)
16/10/20 18:12:03 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/10/20 18:12:03 INFO impl.AMRMClientImpl: Received new token for : ai-hz1-spark5:55857
16/10/20 18:12:03 INFO impl.AMRMClientImpl: Received new token for : ai-hz1-spark3:51061
16/10/20 18:12:03 INFO yarn.YarnAllocator: Launching container container_1476957324184_0002_01_000002 for on host ai-hz1-spark5
16/10/20 18:12:03 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@10.161.219.189:60657, executorHostname: ai-hz1-spark5
16/10/20 18:12:03 INFO yarn.YarnAllocator: Launching container container_1476957324184_0002_01_000003 for on host ai-hz1-spark3
16/10/20 18:12:03 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@10.161.219.189:60657, executorHostname: ai-hz1-spark3
16/10/20 18:12:03 INFO yarn.YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
16/10/20 18:12:03 INFO yarn.ExecutorRunnable: Starting Executor Container
16/10/20 18:12:03 INFO yarn.ExecutorRunnable: Starting Executor Container
16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/10/20 18:12:03 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/10/20 18:12:03 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/10/20 18:12:03 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://ai-hz1-spark5:8042/node/containerlogs/container_1476957324184_0002_01_000002/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT -> http://ai-hz1-spark5:8042/node/containerlogs/container_1476957324184_0002_01_000002/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 1 --hostname ai-hz1-spark5 --cores 1 --app-id application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:03 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT -> http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 2 --hostname ai-hz1-spark3 --cores 1 --app-id application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark5:55857
16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark3:51061
16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status: UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
End of LogType:stderr

LogType:stdout
Log Upload Time:Thu Oct 20 18:12:13 +0800 2016
LogLength:0
Log Contents:
End of LogType:stdout

Container: container_1476957324184_0002_02_000001 on ai-hz1-spark4_41806
==========================================================================
LogType:stderr
Log Upload Time:Thu Oct 20 18:12:14 +0800 2016
LogLength:9155
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/yarndata/local-dir/usercache/hadoop/filecache/12/__spark_libs__1768433196339904263.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/20 18:12:09 INFO util.SignalUtils: Registered signal handler for TERM
16/10/20 18:12:09 INFO util.SignalUtils: Registered signal handler for HUP
16/10/20 18:12:09 INFO util.SignalUtils: Registered signal handler for INT
16/10/20 18:12:10 INFO yarn.ApplicationMaster: Preparing Local resources
16/10/20 18:12:11 INFO yarn.ApplicationMaster: Prepared Local resources Map(__spark_libs__ -> resource { scheme: "hdfs" host: "ai-hz1-spark1" port: -1 file: "/user/hadoop/.sparkStaging/application_1476957324184_0002/__spark_libs__1768433196339904263.zip" } size: 192507295 timestamp: 1476958317234 type: ARCHIVE visibility: PRIVATE, __spark_conf__ -> resource { scheme: "hdfs" host: "ai-hz1-spark1" port: -1 file: "/user/hadoop/.sparkStaging/application_1476957324184_0002/__spark_conf__.zip" } size: 86556 timestamp: 1476958317387 type: ARCHIVE visibility: PRIVATE)
16/10/20 18:12:11 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1476957324184_0002_000002
16/10/20 18:12:11 INFO spark.SecurityManager: Changing view acls to: hadoop
16/10/20 18:12:11 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/10/20 18:12:11 INFO spark.SecurityManager: Changing view acls groups to:
16/10/20 18:12:11 INFO spark.SecurityManager: Changing modify acls groups to:
16/10/20 18:12:11 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
16/10/20 18:12:12 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
16/10/20 18:12:12 INFO yarn.ApplicationMaster: Driver now available: 10.161.219.189:60657
16/10/20 18:12:12 INFO client.TransportClientFactory: Successfully created connection to /10.161.219.189:60657 after 71 ms (0 ms spent in bootstraps)
16/10/20 18:12:12 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ai-hz1-spark1, PROXY_URI_BASES -> http://ai-hz1-spark1:8088/proxy/application_1476957324184_0002),/proxy/application_1476957324184_0002)
16/10/20 18:12:12 INFO client.RMProxy: Connecting to ResourceManager at ai-hz1-spark1/10.161.219.189:8030
16/10/20 18:12:12 INFO yarn.YarnRMClient: Registering the ApplicationMaster
16/10/20 18:12:12 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/10/20 18:12:12 INFO yarn.YarnAllocator: Canceled 0 container requests (locality no longer needed)
16/10/20 18:12:12 INFO yarn.YarnAllocator: Submitted container request (host: Any, capability: <memory:1408, vCores:1>)
16/10/20 18:12:12 INFO yarn.YarnAllocator: Submitted container request (host: Any, capability: <memory:1408, vCores:1>)
16/10/20 18:12:12 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/10/20 18:12:12 INFO impl.AMRMClientImpl: Received new token for : ai-hz1-spark4:41806
16/10/20 18:12:12 INFO impl.AMRMClientImpl: Received new token for : ai-hz1-spark5:55857
16/10/20 18:12:12 INFO yarn.YarnAllocator: Launching container container_1476957324184_0002_02_000002 for on host ai-hz1-spark4
16/10/20 18:12:12 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@10.161.219.189:60657, executorHostname: ai-hz1-spark4
16/10/20 18:12:12 INFO yarn.YarnAllocator: Launching container container_1476957324184_0002_02_000003 for on host ai-hz1-spark5
16/10/20 18:12:12 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@10.161.219.189:60657, executorHostname: ai-hz1-spark5
16/10/20 18:12:12 INFO yarn.YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
16/10/20 18:12:12 INFO yarn.ExecutorRunnable: Starting Executor Container
16/10/20 18:12:12 INFO yarn.ExecutorRunnable: Starting Executor Container
16/10/20 18:12:12 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/10/20 18:12:12 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/10/20 18:12:12 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/10/20 18:12:12 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/10/20 18:12:12 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://ai-hz1-spark4:8042/node/containerlogs/container_1476957324184_0002_02_000002/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT -> http://ai-hz1-spark4:8042/node/containerlogs/container_1476957324184_0002_02_000002/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 1 --hostname ai-hz1-spark4 --cores 1 --app-id application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:12 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://ai-hz1-spark5:8042/node/containerlogs/container_1476957324184_0002_02_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT -> http://ai-hz1-spark5:8042/node/containerlogs/container_1476957324184_0002_02_000003/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 2 --hostname ai-hz1-spark5 --cores 1 --app-id application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:12 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark4:41806
16/10/20 18:12:12 INFO impl.ContainerManagementProtocolProxy: Opening proxy : ai-hz1-spark5:55857
16/10/20 18:12:13 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:13 INFO yarn.ApplicationMaster: Final app status: UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
16/10/20 18:12:13 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with UNDEFINED (diag message: Shutdown hook called before final status was reported.)
16/10/20 18:12:13 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
16/10/20 18:12:13 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
16/10/20 18:12:13 INFO util.ShutdownHookManager: Shutdown hook called
End of LogType:stderr

LogType:stdout
Log Upload Time:Thu Oct 20 18:12:14 +0800 2016
LogLength:0
Log Contents:
End of LogType:stdout

Container: container_1476957324184_0002_02_000002 on ai-hz1-spark4_41806
==========================================================================
LogType:stderr
Log Upload Time:Thu Oct 20 18:12:14 +0800 2016
LogLength:1559
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/yarndata/local-dir/usercache/hadoop/filecache/12/__spark_libs__1768433196339904263.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/20 18:12:13 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 14725@ai-hz1-spark4
16/10/20 18:12:13 INFO util.SignalUtils: Registered signal handler for TERM
16/10/20 18:12:13 INFO util.SignalUtils: Registered signal handler for HUP
16/10/20 18:12:13 INFO util.SignalUtils: Registered signal handler for INT
16/10/20 18:12:14 INFO spark.SecurityManager: Changing view acls to: hadoop
16/10/20 18:12:14 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/10/20 18:12:14 INFO spark.SecurityManager: Changing view acls groups to:
16/10/20 18:12:14 INFO spark.SecurityManager: Changing modify acls groups to:
16/10/20 18:12:14 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
16/10/20 18:12:14 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
End of LogType:stderr

LogType:stdout
Log Upload Time:Thu Oct 20 18:12:14 +0800 2016
LogLength:0
Log Contents:
End of LogType:stdout