Yes, it looks like that's because there aren't enough resources to run the
executor pods. Have you seen pending executor pods?
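
For example, you can check whether executor pods are stuck in Pending and
why the scheduler is not placing them (a rough sketch; spark-role is the
label the Spark-on-Kubernetes backend puts on its pods, and "default" is
just an assumed namespace):

    # list executor pods and their current states
    kubectl get pods -l spark-role=executor -n default

    # for a Pending pod, the Events section usually names the reason,
    # e.g. "Insufficient cpu" or "Insufficient memory"
    kubectl describe pod <executor-pod-name> -n default

If the events point at CPU or memory, freeing up nodes or lowering the
executor requests should unblock it.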

On Fri, Jun 8, 2018, 11:49 AM Thodoris Zois <z...@ics.forth.gr> wrote:

> As far as I know from running Spark on Mesos, that is a Running state and
> not a Pending one. What you see is normal, but if I am wrong somebody
> please correct me.
>
> The Spark driver starts up normally (Running state), but when it comes to
> launching the executors it cannot allocate resources for them, so it hangs.
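>
> If that is the case, asking for less per executor can help. A rough
> sketch (the values are only illustrative, tune them to your cluster):
>
>     spark-submit \
>       --conf spark.executor.instances=2 \
>       --conf spark.executor.cores=1 \
>       --conf spark.executor.memory=1g \
>       ... (rest of the submit command as before)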
>
> - Thodoris
>
> On 8 Jun 2018, at 21:24, purna pradeep <purna2prad...@gmail.com> wrote:
>
> Hello,
>
> When I run spark-submit on a k8s cluster, the driver pod gets stuck in
> the Running state, and when I pull the driver pod logs I see the warnings
> below.
>
> I understand that this warning is probably due to a lack of CPU/memory,
> but I would expect the driver pod to be in the "Pending" state rather
> than the "Running" state, since it is not actually running.
>
> So I had to kill the driver pod and resubmit the job.
>
> Please suggest!
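>
> For reference, the submit command looks roughly like this (placeholders
> for our actual values):
>
>     spark-submit \
>       --master k8s://https://<api-server-host>:<port> \
>       --deploy-mode cluster \
>       --conf spark.kubernetes.container.image=<image> \
>       ... (application jar and arguments)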
>
> 2018-06-08 14:38:01 WARN TaskSchedulerImpl:66 - Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
> 2018-06-08 14:38:16 WARN TaskSchedulerImpl:66 - Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
> 2018-06-08 14:38:31 WARN TaskSchedulerImpl:66 - Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
> 2018-06-08 14:38:46 WARN TaskSchedulerImpl:66 - Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
> 2018-06-08 14:39:01 WARN TaskSchedulerImpl:66 - Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
>
