Hi all,
   I am running Spark 2.0.2 with Hadoop 2.7.2.
   My code looks like this:
   
   String c1 = "/bin/sh";
   String c2 = "-c";
   // Shell command: cd into the Spark bin directory, then launch spark-submit
   String cmd = "cd /home/hadoop/dmp/spark-2.0.2-bin-hadoop2.7/bin; "
           + "spark-submit --class com.hua.spark.dataload.DataLoadFromBase64JSON "
           + "--master yarn --deploy-mode client "
           + "/home/hadoop/dmp/dataload-1.0-SNAPSHOT-jar-with-dependencies.jar";
   Process pro = Runtime.getRuntime().exec(new String[]{c1, c2, cmd});
   pro.waitFor();
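
   One thing I am not sure about (just a sketch, not necessarily the cause of the error below): the code above never reads the child's stdout/stderr, and spark-submit in client mode prints a lot of driver output, so the subprocess can stall once the pipe buffer fills. A minimal variant that drains the output, using ProcessBuilder instead of Runtime.exec so stderr can be merged into stdout, with the same command string as above:

   import java.io.BufferedReader;
   import java.io.InputStreamReader;

   // Same shell command as above, built as a single string named cmd
   ProcessBuilder pb = new ProcessBuilder("/bin/sh", "-c", cmd);
   pb.redirectErrorStream(true);          // merge stderr into stdout
   Process pro = pb.start();

   // Drain the output so the child never blocks on a full pipe buffer;
   // this also makes the spark-submit log visible from the Java side.
   try (BufferedReader reader = new BufferedReader(
           new InputStreamReader(pro.getInputStream()))) {
       String line;
       while ((line = reader.readLine()) != null) {
           System.out.println(line);
       }
   }
   int exitCode = pro.waitFor();
   System.out.println("spark-submit exited with " + exitCode);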

  On the same node I can run this command successfully from a terminal, but when I launch it from Java I get this error:


17/01/20 06:39:05 ERROR TransportChannelHandler: Connection to /192.168.0.136:51197 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong.
17/01/20 06:39:05 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /192.168.0.136:51197 is closed
17/01/20 06:39:05 WARN NettyRpcEnv: Ignored failure: java.io.IOException: Connection from /192.168.0.136:51197 closed
17/01/20 06:39:05 ERROR CoarseGrainedExecutorBackend: Cannot register with driver: spark://CoarseGrainedScheduler@192.168.0.136:51197
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
 at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
 at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
 at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
 at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
 at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
 at scala.util.Try$.apply(Try.scala:192)
 at scala.util.Failure.recover(Try.scala:216)
 at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
 at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
 at org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
 at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
 at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
 at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
 at scala.concurrent.Promise$class.complete(Promise.scala:55)
 at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
 at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
 at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
 at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
 at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
 at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
 at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
 at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
 at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
 at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
 at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
 at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
 at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
 at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
 at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
 at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
 at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
 at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
 ... 8 more
17/01/20 06:39:05 ERROR CoarseGrainedExecutorBackend: Driver 192.168.0.136:51197 disassociated! Shutting down.
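
  The log itself points at spark.network.timeout, and the 120-second reply timeout is controlled by spark.rpc.askTimeout. Only as a sketch (I don't know that this fixes anything), those timeouts could be raised for testing by adding --conf options to the same command string; 600s below is an arbitrary illustrative value:

   // Hedged variant of the command above with longer RPC/network timeouts,
   // as the log message suggests trying; 600s is only an example value.
   String cmd = "cd /home/hadoop/dmp/spark-2.0.2-bin-hadoop2.7/bin; "
           + "spark-submit --class com.hua.spark.dataload.DataLoadFromBase64JSON "
           + "--master yarn --deploy-mode client "
           + "--conf spark.network.timeout=600s "
           + "--conf spark.rpc.askTimeout=600s "
           + "/home/hadoop/dmp/dataload-1.0-SNAPSHOT-jar-with-dependencies.jar";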

   

2017-01-20


lk_spark 
