>>> 15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources

That means your application has not been allocated any resources. Please
check your Hadoop web UI to see how much memory and how many vcores are
actually free.
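
For example, from the command line you can ask YARN what it thinks is
available (a rough sketch; the ResourceManager host and the default port
8088 below may differ in your setup):

    # list all NodeManagers with their state and number of running containers
    yarn node -list -all

    # list applications currently accepted or running
    yarn application -list

    # or open the ResourceManager web UI, by default at
    #   http://<resourcemanager-host>:8088/cluster

If the free memory per node is smaller than what spark-submit requests,
lowering --driver-memory / --executor-memory usually lets YARN schedule
the job.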

On Wed, Dec 16, 2015 at 10:32 AM, zml张明磊 <mingleizh...@ctrip.com> wrote:

> Last night I ran the jar in pseudo-distributed mode without any WARN or
> ERROR. Today, however, I am getting the WARN below, which leads directly to
> the ERROR. My machine has 8 GB of memory, so I don't think the problem is
> what the WARN describes. What's wrong? Neither the code nor the environment
> has changed. So strange. Can anybody help me? Why…
>
>
>
> Thanks.
>
> Minglei.
>
>
>
> Here is the job submission script
>
>
>
> /bin/spark-submit --master local[*] --driver-memory 8g --executor-memory
> 8g  --class com.ctrip.ml.client.Client
>  /root/di-ml-tool/target/di-ml-tool-1.0-SNAPSHOT.jar
>
>
>
> Error below
>
> 15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
> 15/12/16 10:22:04 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
> ApplicationMaster has disassociated: 10.32.3.21:48311
>
> 15/12/16 10:22:04 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
> ApplicationMaster has disassociated: 10.32.3.21:48311
>
> 15/12/16 10:22:04 WARN remote.ReliableDeliverySupervisor: Association with
> remote system [akka.tcp://sparkYarnAM@10.32.3.21:48311] has failed,
> address is now gated for [5000] ms. Reason is: [Disassociated].
>
> 15/12/16 10:22:04 ERROR cluster.YarnClientSchedulerBackend: Yarn
> application has already exited with state FINISHED!
>
>
>
> Exception in thread "main" 15/12/16 10:22:04 INFO
> cluster.YarnClientSchedulerBackend: Shutting down all executors
>
> Exception in thread "Yarn application state monitor"
> org.apache.spark.SparkException: Error asking standalone scheduler to shut
> down executors
>
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:261)
>
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:266)
>
> at
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:158)
>
> at
> org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
>
> at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1411)
>
> at org.apache.spark.SparkContext.stop(SparkContext.scala:1644)
>
> at
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$$anon$1.run(YarnClientSchedulerBackend.scala:139)
>
> Caused by: java.lang.InterruptedException
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
>
> at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
>
> at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>
> at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>
> at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>
> at
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>
> at scala.concurrent.Await$.result(package.scala:107)
>
> at
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>
> at
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
>
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:257)
>
>
>



-- 
Best Regards

Jeff Zhang
