[ https://issues.apache.org/jira/browse/SPARK-15329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294653#comment-15294653 ]

Saisai Shao commented on SPARK-15329:
-------------------------------------

{code}
2016-05-15 00:06:08,368 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Process tree for container: container_1463267120616_0001_01_000001 has 
processes older than 1 iteration running over the configured limit. 
Limit=2254857728, current usage = 2331357184
2016-05-15 00:06:08,374 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Container [pid=10000,containerID=container_1463267120616_0001_01_000001] is 
running beyond virtual memory limits. Current usage: 264.2 MB of 1 GB physical 
memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1463267120616_0001_01_000001 :
{code}

Please check the NodeManager log: the container was killed by the NM because it 
exceeded the virtual memory limit. Please increase the pmem/vmem ratio 
(yarn.nodemanager.vmem-pmem-ratio) or turn off the vmem check 
(yarn.nodemanager.vmem-check-enabled).
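For reference, a sketch of the corresponding yarn-site.xml settings on the NodeManager hosts (the ratio value 4 here is only an illustrative choice; the YARN default is 2.1 — either raise the ratio or disable the check, not necessarily both):

{code}
<!-- Allow more virtual memory per unit of physical memory -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<!-- Or disable the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
{code}

A NodeManager restart is needed for these to take effect.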

If you run into a problem with Spark that is not a bug, please send mail to the 
user mailing list first. JIRA is not used for Q&A.

>  When start spark with yarn: spark.SparkContext: Error initializing 
> SparkContext. 
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-15329
>                 URL: https://issues.apache.org/jira/browse/SPARK-15329
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>            Reporter: Jon
>
> Hi, I'm trying to start Spark with yarn-client, like this: "spark-shell 
> --master yarn-client", but I'm getting the error below.
> If I start Spark with just "spark-shell", everything works fine.
> I have a single-node machine where all the Hadoop processes are running, plus 
> a Hive metastore server.
> I have already tried more than 30 different configurations, but nothing is 
> working. The config I have now is this:
> core-site.xml:
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://masternode:9000</value>
> </property>
> </configuration>
> hdfs-site.xml:
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> </configuration>
> yarn-site.xml:
> <configuration>
> <property>
> <name>yarn.resourcemanager.resource-tracker.address</name>
> <value>masternode:8031</value>
> </property>
> <property>
> <name>yarn.resourcemanager.address</name>
> <value>masternode:8032</value>
> </property>
> <property>
> <name>yarn.resourcemanager.scheduler.address</name>
> <value>masternode:8030</value>
> </property>
> <property>
> <name>yarn.resourcemanager.admin.address</name>
> <value>masternode:8033</value>
> </property>
> <property>
> <name>yarn.resourcemanager.webapp.address</name>
> <value>masternode:8088</value>
> </property>
> </configuration>
> About spark confs:
> spark-env.sh:
> HADOOP_CONF_DIR=/usr/local/hadoop-2.7.1/hadoop
> SPARK_MASTER_IP=masternode
> spark-defaults.conf
> spark.master spark://masternode:7077
> spark.serializer org.apache.spark.serializer.KryoSerializer
> Do you know why this is happening?
> hadoopadmin@mn:~$ spark-shell --master yarn-client
> 16/05/14 23:21:07 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/05/14 23:21:07 INFO spark.SecurityManager: Changing view acls to: 
> hadoopadmin
> 16/05/14 23:21:07 INFO spark.SecurityManager: Changing modify acls to: 
> hadoopadmin
> 16/05/14 23:21:07 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(hadoopadmin); 
> users with modify permissions: Set(hadoopadmin)
> 16/05/14 23:21:08 INFO spark.HttpServer: Starting HTTP Server
> 16/05/14 23:21:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 16/05/14 23:21:08 INFO server.AbstractConnector: Started 
> SocketConnector@0.0.0.0:36979
> 16/05/14 23:21:08 INFO util.Utils: Successfully started service 'HTTP class 
> server' on port 36979.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
>       /_/
> Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 16/05/14 23:21:12 INFO spark.SparkContext: Running Spark version 1.6.1
> 16/05/14 23:21:12 INFO spark.SecurityManager: Changing view acls to: 
> hadoopadmin
> 16/05/14 23:21:12 INFO spark.SecurityManager: Changing modify acls to: 
> hadoopadmin
> 16/05/14 23:21:12 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(hadoopadmin); 
> users with modify permissions: Set(hadoopadmin)
> 16/05/14 23:21:12 INFO util.Utils: Successfully started service 'sparkDriver' 
> on port 33128.
> 16/05/14 23:21:13 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 16/05/14 23:21:13 INFO Remoting: Starting remoting
> 16/05/14 23:21:13 INFO Remoting: Remoting started; listening on addresses 
> :[akka.tcp://sparkDriverActorSystem@10.15.0.11:34382]
> 16/05/14 23:21:13 INFO util.Utils: Successfully started service 
> 'sparkDriverActorSystem' on port 34382.
> 16/05/14 23:21:13 INFO spark.SparkEnv: Registering MapOutputTracker
> 16/05/14 23:21:13 INFO spark.SparkEnv: Registering BlockManagerMaster
> 16/05/14 23:21:13 INFO storage.DiskBlockManager: Created local directory at 
> /tmp/blockmgr-a0048199-bf2f-404b-9cd2-b5988367783f
> 16/05/14 23:21:13 INFO storage.MemoryStore: MemoryStore started with capacity 
> 511.1 MB
> 16/05/14 23:21:13 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 16/05/14 23:21:13 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 16/05/14 23:21:13 INFO server.AbstractConnector: Started 
> SelectChannelConnector@0.0.0.0:4040
> 16/05/14 23:21:13 INFO util.Utils: Successfully started service 'SparkUI' on 
> port 4040.
> 16/05/14 23:21:13 INFO ui.SparkUI: Started SparkUI at http://10.15.0.11:4040
> 16/05/14 23:21:14 INFO client.RMProxy: Connecting to ResourceManager at 
> localhost/127.0.0.1:8032
> 16/05/14 23:21:14 INFO yarn.Client: Requesting a new application from cluster 
> with 1 NodeManagers
> 16/05/14 23:21:14 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (8192 MB per 
> container)
> 16/05/14 23:21:14 INFO yarn.Client: Will allocate AM container, with 896 MB 
> memory including 384 MB overhead
> 16/05/14 23:21:14 INFO yarn.Client: Setting up container launch context for 
> our AM
> 16/05/14 23:21:14 INFO yarn.Client: Setting up the launch environment for our 
> AM container
> 16/05/14 23:21:14 INFO yarn.Client: Preparing resources for our AM container
> 16/05/14 23:21:15 INFO yarn.Client: Uploading resource 
> file:/usr/local/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
>  -> 
> hdfs://localhost:9000/user/hadoopadmin/.sparkStaging/application_1463264445515_0001/spark-assembly-1.6.1-hadoop2.6.0.jar
> 16/05/14 23:21:17 INFO yarn.Client: Uploading resource 
> file:/tmp/spark-3df9a858-4bdb-4c3f-87cb-8768fb2987e7/__spark_conf__6806563942591505644.zip
>  -> 
> hdfs://localhost:9000/user/hadoopadmin/.sparkStaging/application_1463264445515_0001/__spark_conf__6806563942591505644.zip
> 16/05/14 23:21:17 INFO spark.SecurityManager: Changing view acls to: 
> hadoopadmin
> 16/05/14 23:21:17 INFO spark.SecurityManager: Changing modify acls to: 
> hadoopadmin
> 16/05/14 23:21:17 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(hadoopadmin); 
> users with modify permissions: Set(hadoopadmin)
> 16/05/14 23:21:17 INFO yarn.Client: Submitting application 1 to 
> ResourceManager
> 16/05/14 23:21:17 INFO impl.YarnClientImpl: Submitted application 
> application_1463264445515_0001
> 16/05/14 23:21:19 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:19 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: N/A
>          ApplicationMaster RPC port: -1
>          queue: default
>          start time: 1463264477898
>          final status: UNDEFINED
>          tracking URL: 
> http://masternode:8088/proxy/application_1463264445515_0001/
>          user: hadoopadmin
> 16/05/14 23:21:20 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:21 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:22 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:23 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:24 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: ACCEPTED)
> 16/05/14 23:21:24 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: 
> ApplicationMaster registered as NettyRpcEndpointRef(null)
> 16/05/14 23:21:24 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS 
> -> masternode, PROXY_URI_BASES -> 
> http://masternode:8088/proxy/application_1463264445515_0001), 
> /proxy/application_1463264445515_0001
> 16/05/14 23:21:24 INFO ui.JettyUtils: Adding filter: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 16/05/14 23:21:25 INFO yarn.Client: Application report for 
> application_1463264445515_0001 (state: RUNNING)
> 16/05/14 23:21:25 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: 10.15.0.11
>          ApplicationMaster RPC port: 0
>          queue: default
>          start time: 1463264477898
>          final status: UNDEFINED
>          tracking URL: 
> http://masternode:8088/proxy/application_1463264445515_0001/
>          user: hadoopadmin
> 16/05/14 23:21:25 INFO cluster.YarnClientSchedulerBackend: Application 
> application_1463264445515_0001 has started running.
> 16/05/14 23:21:25 INFO util.Utils: Successfully started service 
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45282.
> 16/05/14 23:21:25 INFO netty.NettyBlockTransferService: Server created on 
> 45282
> 16/05/14 23:21:25 INFO storage.BlockManagerMaster: Trying to register 
> BlockManager
> 16/05/14 23:21:25 INFO storage.BlockManagerMasterEndpoint: Registering block 
> manager 10.15.0.11:45282 with 511.1 MB RAM, BlockManagerId(driver, 
> 10.15.0.11, 45282)
> 16/05/14 23:21:25 INFO storage.BlockManagerMaster: Registered BlockManager
> 16/05/14 23:21:31 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: 
> ApplicationMaster registered as NettyRpcEndpointRef(null)
> 16/05/14 23:21:31 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS 
> -> masternode, PROXY_URI_BASES -> 
> http://masternode:8088/proxy/application_1463264445515_0001), 
> /proxy/application_1463264445515_0001
> 16/05/14 23:21:31 INFO ui.JettyUtils: Adding filter: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 16/05/14 23:21:34 ERROR cluster.YarnClientSchedulerBackend: Yarn application 
> has already exited with state FINISHED!
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/metrics/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/api,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/static,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/executors/threadDump,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/executors/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/executors,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/environment/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/environment,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/storage/rdd,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/storage/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/storage,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/pool/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/pool,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/stage/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/stage,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/stages,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/jobs/job/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/jobs/job,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/jobs/json,null}
> 16/05/14 23:21:34 INFO handler.ContextHandler: stopped 
> o.s.j.s.ServletContextHandler{/jobs,null}
> 16/05/14 23:21:34 INFO ui.SparkUI: Stopped Spark web UI at 
> http://10.15.0.11:4040
> 16/05/14 23:21:34 INFO cluster.YarnClientSchedulerBackend: Shutting down all 
> executors
> 16/05/14 23:21:34 INFO cluster.YarnClientSchedulerBackend: Asking each 
> executor to shut down
> 16/05/14 23:21:34 INFO cluster.YarnClientSchedulerBackend: Stopped
> 16/05/14 23:21:34 INFO spark.MapOutputTrackerMasterEndpoint: 
> MapOutputTrackerMasterEndpoint stopped!
> 16/05/14 23:21:34 INFO storage.MemoryStore: MemoryStore cleared
> 16/05/14 23:21:34 INFO storage.BlockManager: BlockManager stopped
> 16/05/14 23:21:34 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
> 16/05/14 23:21:34 INFO 
> scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: 
> OutputCommitCoordinator stopped!
> 16/05/14 23:21:34 INFO remote.RemoteActorRefProvider$RemotingTerminator: 
> Shutting down remote daemon.
> 16/05/14 23:21:34 INFO spark.SparkContext: Successfully stopped SparkContext
> 16/05/14 23:21:34 INFO remote.RemoteActorRefProvider$RemotingTerminator: 
> Remote daemon shut down; proceeding with flushing remote transports.
> 16/05/14 23:21:34 INFO remote.RemoteActorRefProvider$RemotingTerminator: 
> Remoting shut down.
> 16/05/14 23:21:44 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend 
> is ready for scheduling beginning after waiting 
> maxRegisteredResourcesWaitingTime: 30000(ms)
> 16/05/14 23:21:44 ERROR spark.SparkContext: Error initializing SparkContext.
> java.lang.NullPointerException
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:584)
>         at 
> org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
>         at $line3.$read$$iwC$$iwC.<init>(<console>:15)
>         at $line3.$read$$iwC.<init>(<console>:24)
>         at $line3.$read.<init>(<console>:26)
>         at $line3.$read$.<init>(<console>:30)
>         at $line3.$read$.<clinit>(<console>)
>         at $line3.$eval$.<init>(<console>:7)
>         at $line3.$eval$.<clinit>(<console>)
>         at $line3.$eval.$print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>         at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>         at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>         at 
> org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>         at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 16/05/14 23:21:44 INFO spark.SparkContext: SparkContext already stopped.
> java.lang.NullPointerException
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:584)
>         at 
> org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
>         at $iwC$$iwC.<init>(<console>:15)
>         at $iwC.<init>(<console>:24)
>         at <init>(<console>:26)
>         at .<init>(<console>:30)
>         at .<clinit>(<console>)
>         at .<init>(<console>:7)
>         at .<clinit>(<console>)
>         at $print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>         at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>         at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>         at 
> org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>         at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> java.lang.NullPointerException
>         at 
> org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
>         at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
>         at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at 
> org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>         at $iwC$$iwC.<init>(<console>:15)
>         at $iwC.<init>(<console>:24)
>         at <init>(<console>:26)
>         at .<init>(<console>:30)
>         at .<clinit>(<console>)
>         at .<init>(<console>:7)
>         at .<clinit>(<console>)
>         at $print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>         at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
>         at 
> org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>         at 
> org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>         at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>         at 
> org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>         at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> <console>:16: error: not found: value sqlContext
>          import sqlContext.implicits._
>                 ^
> <console>:16: error: not found: value sqlContext
>          import sqlContext.sql
> Versions:
> spark-1.6.1-bin-hadoop2.6.tgz and hadoop-2.7.1
> Yarn NodeManager logs:
> 2016-05-15 00:06:03,188 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000001 transitioned from 
> LOCALIZING to LOCALIZED
> 2016-05-15 00:06:03,234 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000001 transitioned from LOCALIZED 
> to RUNNING
> 2016-05-15 00:06:03,243 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001/default_container_executor.sh]
> 2016-05-15 00:06:05,144 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1463267120616_0001_01_000001
> 2016-05-15 00:06:05,271 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10000 for container-id 
> container_1463267120616_0001_01_000001: 125.3 MB of 1 GB physical memory 
> used; 2.1 GB of 2.1 GB virtual memory used
> 2016-05-15 00:06:07,045 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000001 (auth:SIMPLE)
> 2016-05-15 00:06:07,063 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1463267120616_0001_01_000002 by user hadoopadmin
> 2016-05-15 00:06:07,064 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1463267120616_0001_01_000002 to application 
> application_1463267120616_0001
> 2016-05-15 00:06:07,065 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from NEW to 
> LOCALIZING
> 2016-05-15 00:06:07,065 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1463267120616_0001
> 2016-05-15 00:06:07,065 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from 
> LOCALIZING to LOCALIZED
> 2016-05-15 00:06:07,064 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Start Container Request      
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000002
> 2016-05-15 00:06:07,192 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from LOCALIZED 
> to RUNNING
> 2016-05-15 00:06:07,213 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000002/default_container_executor.sh]
> 2016-05-15 00:06:07,972 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000001 (auth:SIMPLE)
> 2016-05-15 00:06:07,987 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1463267120616_0001_01_000003 by user hadoopadmin
> 2016-05-15 00:06:07,988 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Start Container Request      
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000003
> 2016-05-15 00:06:07,988 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1463267120616_0001_01_000003 to application 
> application_1463267120616_0001
> 2016-05-15 00:06:07,988 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from NEW to 
> LOCALIZING
> 2016-05-15 00:06:07,989 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1463267120616_0001
> 2016-05-15 00:06:07,989 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from 
> LOCALIZING to LOCALIZED
> 2016-05-15 00:06:08,099 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from LOCALIZED 
> to RUNNING
> 2016-05-15 00:06:08,117 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000003/default_container_executor.sh]
> 2016-05-15 00:06:08,271 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1463267120616_0001_01_000002
> 2016-05-15 00:06:08,272 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1463267120616_0001_01_000003
> 2016-05-15 00:06:08,368 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10000 for container-id 
> container_1463267120616_0001_01_000001: 264.2 MB of 1 GB physical memory 
> used; 2.2 GB of 2.1 GB virtual memory used
> 2016-05-15 00:06:08,368 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Process tree for container: container_1463267120616_0001_01_000001 has 
> processes older than 1 iteration running over the configured limit. 
> Limit=2254857728, current usage = 2331357184
> 2016-05-15 00:06:08,374 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Container [pid=10000,containerID=container_1463267120616_0001_01_000001] is 
> running beyond virtual memory limits. Current usage: 264.2 MB of 1 GB 
> physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1463267120616_0001_01_000001 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 10000 9998 10000 10000 (bash) 0 0 17043456 309 /bin/bash -c 
> /usr/lib/jvm/java-8-oracle/bin/java -server -Xmx512m 
> -Djava.io.tmpdir=/tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001/tmp
>  
> -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_01_000001
>  org.apache.spark.deploy.yarn.ExecutorLauncher --arg '10.15.0.11:49099' 
> --executor-memory 1024m --executor-cores 1 --properties-file 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001/__spark_conf__/__spark_conf__.properties
>  1> 
> /usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_01_000001/stdout
>  2> 
> /usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_01_000001/stderr
>         |- 10004 10000 10000 10000 (java) 639 27 2314313728 67323 
> /usr/lib/jvm/java-8-oracle/bin/java -server -Xmx512m 
> -Djava.io.tmpdir=/tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001/tmp
>  
> -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_01_000001
>  org.apache.spark.deploy.yarn.ExecutorLauncher --arg 10.15.0.11:49099 
> --executor-memory 1024m --executor-cores 1 --properties-file 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001/__spark_conf__/__spark_conf__.properties
> 2016-05-15 00:06:08,382 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000001 transitioned from RUNNING 
> to KILLING
> 2016-05-15 00:06:08,382 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1463267120616_0001_01_000001
> 2016-05-15 00:06:08,383 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Removed ProcessTree with root 10000
> 2016-05-15 00:06:08,457 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
> from container container_1463267120616_0001_01_000001 is : 143
> 2016-05-15 00:06:08,516 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10043 for container-id 
> container_1463267120616_0001_01_000002: 83.0 MB of 2 GB physical memory used; 
> 2.6 GB of 4.2 GB virtual memory used
> 2016-05-15 00:06:08,562 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000001 transitioned from KILLING 
> to CONTAINER_CLEANEDUP_AFTER_KILL
> 2016-05-15 00:06:08,582 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10067 for container-id 
> container_1463267120616_0001_01_000003: 43.2 MB of 2 GB physical memory used; 
> 2.6 GB of 4.2 GB virtual memory used
> 2016-05-15 00:06:08,583 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000001
> 2016-05-15 00:06:08,585 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> OPERATION=Container Finished - Killed   TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000001
> 2016-05-15 00:06:08,593 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000001 transitioned from 
> CONTAINER_CLEANEDUP_AFTER_KILL to DONE
> 2016-05-15 00:06:08,593 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Removing container_1463267120616_0001_01_000001 from application 
> application_1463267120616_0001
> 2016-05-15 00:06:08,593 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:09,574 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000001 (auth:SIMPLE)
> 2016-05-15 00:06:09,601 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Stopping container with container Id: container_1463267120616_0001_01_000001
> 2016-05-15 00:06:09,601 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Stop Container Request       
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000001
> 2016-05-15 00:06:09,608 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
> completed containers from NM context: [container_1463267120616_0001_01_000001]
> 2016-05-15 00:06:09,609 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from RUNNING 
> to KILLING
> 2016-05-15 00:06:09,609 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from RUNNING 
> to KILLING
> 2016-05-15 00:06:09,609 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1463267120616_0001_01_000002
> 2016-05-15 00:06:09,661 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000002 (auth:SIMPLE)
> 2016-05-15 00:06:09,689 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
> from container container_1463267120616_0001_01_000002 is : 143
> 2016-05-15 00:06:09,710 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1463267120616_0001_02_000001 by user hadoopadmin
> 2016-05-15 00:06:09,710 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Start Container Request      
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_02_000001
> 2016-05-15 00:06:09,734 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1463267120616_0001_01_000003
> 2016-05-15 00:06:09,767 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
> from container container_1463267120616_0001_01_000003 is : 143
> 2016-05-15 00:06:09,796 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from KILLING 
> to CONTAINER_CLEANEDUP_AFTER_KILL
> 2016-05-15 00:06:09,796 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1463267120616_0001_02_000001 to application 
> application_1463267120616_0001
> 2016-05-15 00:06:09,796 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from KILLING 
> to CONTAINER_CLEANEDUP_AFTER_KILL
> 2016-05-15 00:06:09,796 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000002
> 2016-05-15 00:06:09,797 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from NEW to 
> LOCALIZING
> 2016-05-15 00:06:09,797 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> OPERATION=Container Finished - Killed   TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000002
> 2016-05-15 00:06:09,797 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000002 transitioned from 
> CONTAINER_CLEANEDUP_AFTER_KILL to DONE
> 2016-05-15 00:06:09,797 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1463267120616_0001
> 2016-05-15 00:06:09,797 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_01_000003
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> OPERATION=Container Finished - Killed   TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_01_000003
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_01_000003 transitioned from 
> CONTAINER_CLEANEDUP_AFTER_KILL to DONE
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Removing container_1463267120616_0001_01_000002 from application 
> application_1463267120616_0001
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from 
> LOCALIZING to LOCALIZED
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Removing container_1463267120616_0001_01_000003 from application 
> application_1463267120616_0001
> 2016-05-15 00:06:09,798 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:09,821 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from LOCALIZED 
> to RUNNING
> 2016-05-15 00:06:09,827 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001/default_container_executor.sh]
> 2016-05-15 00:06:11,583 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1463267120616_0001_02_000001
> 2016-05-15 00:06:11,583 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1463267120616_0001_01_000001
> 2016-05-15 00:06:11,583 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1463267120616_0001_01_000002
> 2016-05-15 00:06:11,583 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1463267120616_0001_01_000003
> 2016-05-15 00:06:11,668 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10121 for container-id 
> container_1463267120616_0001_02_000001: 121.8 MB of 1 GB physical memory 
> used; 2.1 GB of 2.1 GB virtual memory used
> 2016-05-15 00:06:13,645 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
> completed containers from NM context: 
> [container_1463267120616_0001_01_000002, 
> container_1463267120616_0001_01_000003]
> 2016-05-15 00:06:14,567 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000002 (auth:SIMPLE)
> 2016-05-15 00:06:14,571 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1463267120616_0001_02_000002 by user hadoopadmin
> 2016-05-15 00:06:14,572 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Start Container Request      
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_02_000002
> 2016-05-15 00:06:14,572 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Adding container_1463267120616_0001_02_000002 to application 
> application_1463267120616_0001
> 2016-05-15 00:06:14,572 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from NEW to 
> LOCALIZING
> 2016-05-15 00:06:14,572 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_INIT for appId application_1463267120616_0001
> 2016-05-15 00:06:14,573 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from 
> LOCALIZING to LOCALIZED
> 2016-05-15 00:06:14,594 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from LOCALIZED 
> to RUNNING
> 2016-05-15 00:06:14,597 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> launchContainer: [bash, 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000002/default_container_executor.sh]
> 2016-05-15 00:06:14,668 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Starting resource-monitoring for container_1463267120616_0001_02_000002
> 2016-05-15 00:06:14,700 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10159 for container-id 
> container_1463267120616_0001_02_000002: 23.0 MB of 2 GB physical memory used; 
> 2.6 GB of 4.2 GB virtual memory used
> 2016-05-15 00:06:14,722 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 10121 for container-id 
> container_1463267120616_0001_02_000001: 222.1 MB of 1 GB physical memory 
> used; 2.1 GB of 2.1 GB virtual memory used
> 2016-05-15 00:06:14,722 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Process tree for container: container_1463267120616_0001_02_000001 has 
> processes older than 1 iteration running over the configured limit. 
> Limit=2254857728, current usage = 2285281280
> 2016-05-15 00:06:14,723 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Container [pid=10121,containerID=container_1463267120616_0001_02_000001] is 
> running beyond virtual memory limits. Current usage: 222.1 MB of 1 GB 
> physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1463267120616_0001_02_000001 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 10121 10119 10121 10121 (bash) 0 0 17043456 308 /bin/bash -c 
> /usr/lib/jvm/java-8-oracle/bin/java -server -Xmx512m 
> -Djava.io.tmpdir=/tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001/tmp
>  
> -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_02_000001
>  org.apache.spark.deploy.yarn.ExecutorLauncher --arg '10.15.0.11:49099' 
> --executor-memory 1024m --executor-cores 1 --properties-file 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001/__spark_conf__/__spark_conf__.properties
>  1> 
> /usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_02_000001/stdout
>  2> 
> /usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_02_000001/stderr
>         |- 10125 10121 10121 10121 (java) 524 28 2268237824 56548 
> /usr/lib/jvm/java-8-oracle/bin/java -server -Xmx512m 
> -Djava.io.tmpdir=/tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001/tmp
>  
> -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001/container_1463267120616_0001_02_000001
>  org.apache.spark.deploy.yarn.ExecutorLauncher --arg 10.15.0.11:49099 
> --executor-memory 1024m --executor-cores 1 --properties-file 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001/__spark_conf__/__spark_conf__.properties
> 2016-05-15 00:06:14,723 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from RUNNING 
> to KILLING
> 2016-05-15 00:06:14,723 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1463267120616_0001_02_000001
> 2016-05-15 00:06:14,724 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Removed ProcessTree with root 10121
> 2016-05-15 00:06:14,762 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
> from container container_1463267120616_0001_02_000001 is : 143
> 2016-05-15 00:06:14,784 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from KILLING 
> to CONTAINER_CLEANEDUP_AFTER_KILL
> 2016-05-15 00:06:14,785 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000001
> 2016-05-15 00:06:14,791 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> OPERATION=Container Finished - Killed   TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_02_000001
> 2016-05-15 00:06:14,791 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000001 transitioned from 
> CONTAINER_CLEANEDUP_AFTER_KILL to DONE
> 2016-05-15 00:06:14,791 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Removing container_1463267120616_0001_02_000001 from application 
> application_1463267120616_0001
> 2016-05-15 00:06:14,792 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:15,685 INFO SecurityLogger.org.apache.hadoop.ipc.Server: 
> Auth successful for appattempt_1463267120616_0001_000002 (auth:SIMPLE)
> 2016-05-15 00:06:15,716 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Stopping container with container Id: container_1463267120616_0001_02_000001
> 2016-05-15 00:06:15,717 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> IP=10.15.0.11   OPERATION=Stop Container Request       
> TARGET=ContainerManageImpl       RESULT=SUCCESS  
> APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_02_000001
> 2016-05-15 00:06:15,720 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
> completed containers from NM context: [container_1463267120616_0001_02_000001]
> 2016-05-15 00:06:15,724 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1463267120616_0001 transitioned from RUNNING to 
> FINISHING_CONTAINERS_WAIT
> 2016-05-15 00:06:15,724 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from RUNNING 
> to KILLING
> 2016-05-15 00:06:15,724 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Cleaning up container container_1463267120616_0001_02_000002
> 2016-05-15 00:06:15,759 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
> from container container_1463267120616_0001_02_000002 is : 143
> 2016-05-15 00:06:15,776 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from KILLING 
> to CONTAINER_CLEANEDUP_AFTER_KILL
> 2016-05-15 00:06:15,777 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001/container_1463267120616_0001_02_000002
> 2016-05-15 00:06:15,778 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoopadmin  
> OPERATION=Container Finished - Killed   TARGET=ContainerImpl    
> RESULT=SUCCESS  APPID=application_1463267120616_0001    
> CONTAINERID=container_1463267120616_0001_02_000002
> 2016-05-15 00:06:15,778 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
>  Container container_1463267120616_0001_02_000002 transitioned from 
> CONTAINER_CLEANEDUP_AFTER_KILL to DONE
> 2016-05-15 00:06:15,778 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Removing container_1463267120616_0001_02_000002 from application 
> application_1463267120616_0001
> 2016-05-15 00:06:15,778 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1463267120616_0001 transitioned from 
> FINISHING_CONTAINERS_WAIT to APPLICATION_RESOURCES_CLEANINGUP
> 2016-05-15 00:06:15,778 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:15,779 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : 
> /tmp/hadoop-hadoopadmin/nm-local-dir/usercache/hadoopadmin/appcache/application_1463267120616_0001
> 2016-05-15 00:06:15,779 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1463267120616_0001
> 2016-05-15 00:06:15,779 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl:
>  Application application_1463267120616_0001 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2016-05-15 00:06:15,779 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
>  Scheduling Log Deletion for application: application_1463267120616_0001, 
> with delay of 10800 seconds
> 2016-05-15 00:06:16,726 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Event EventType: KILL_CONTAINER sent to absent container 
> container_1463267120616_0001_02_000002
> 2016-05-15 00:06:17,724 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1463267120616_0001_02_000001
> 2016-05-15 00:06:17,725 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Stopping resource-monitoring for container_1463267120616_0001_02_000002
> 2016-05-15 03:06:15,785 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> path : /usr/local/hadoop-2.7.1/logs/userlogs/application_1463267120616_0001
> 2016-05-16 00:05:20,714 INFO 
> org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager:
>  Rolling master-key for container-tokens, got key with id -22032173
> 2016-05-16 00:05:20,714 INFO 
> org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretMan
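
The kill at 00:06:08 (and again at 00:06:14 for the second attempt) follows the same pattern each time: the ExecutorLauncher JVM's virtual memory (2.2 GB) exceeds the 2.1 GB limit that YARN derives from 1 GB of physical memory times the default {{yarn.nodemanager.vmem-pmem-ratio}} of 2.1. As a sketch of the workaround described above (the values shown are illustrative, not tuned), either disable the vmem check or raise the ratio in yarn-site.xml on each NodeManager, then restart the NodeManagers:

{code}
<!-- yarn-site.xml: relax the NodeManager's virtual-memory enforcement -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- turn off the vmem check entirely -->
</property>

<!-- ...or keep the check but allow more vmem per unit of pmem -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value> <!-- default is 2.1; 4 is an illustrative value -->
</property>
{code}

Disabling the check is the simpler option on a single-node test setup; raising the ratio keeps some protection in place on shared clusters.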



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
