As far as I know from running Spark on Mesos, this is a Running state rather than a Pending one. What you see is normal, but if I am wrong, somebody please correct me.

The Spark driver starts up normally (Running state), but when it comes time to launch the executors, it cannot allocate resources for them and hangs.
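One way to confirm this is to look at the executor pods rather than the driver. A rough diagnostic sketch, assuming a standard Spark-on-k8s submission; the namespace, pod name, and resource values below are placeholders, not taken from your setup:

```shell
# List executor pods; Spark on k8s labels them with spark-role=executor.
# (Replace the namespace with the one you submit into.)
kubectl get pods -n default -l spark-role=executor

# Inspect a stuck executor pod; FailedScheduling events with
# "Insufficient cpu" or "Insufficient memory" confirm the resource problem.
kubectl describe pod <executor-pod-name> -n default

# If so, resubmitting with smaller executor requests may let it schedule:
spark-submit \
  --master k8s://https://<api-server> \
  --deploy-mode cluster \
  --conf spark.executor.instances=2 \
  --conf spark.executor.cores=1 \
  --conf spark.executor.memory=1g \
  ...
```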

- Thodoris

> On 8 Jun 2018, at 21:24, purna pradeep <purna2prad...@gmail.com> wrote:
> 
> Hello,
> When I run spark-submit on a k8s cluster, I see the driver pod stuck in the 
> Running state, and when I pull the driver pod logs I see the log below.
> 
> I understand that this warning is probably due to a lack of CPU/memory, but 
> I would expect the driver pod to be in the "Pending" state rather than the 
> "Running" state, since it is not actually running.
> 
> So I had to kill the driver pod and resubmit the job.
> 
> Please suggest a fix here!
> 
> 2018-06-08 14:38:01 WARN TaskSchedulerImpl:66 - Initial job has not accepted 
> any resources; check your cluster UI to ensure that workers are registered 
> and have sufficient resources
> 
> (the same warning repeats every 15 seconds, through 14:39:01)
