Hi Manish,

Are you seeing this issue consistently or sporadically? And when you say the executors are not launched, do you mean not even a single executor pod is created for that driver pod?
On Tue, Oct 1, 2019 at 1:43 AM manish gupta <tomanishgupt...@gmail.com> wrote:

> Hi Team
>
> I am trying to create a Spark cluster on Kubernetes with RBAC enabled,
> using a spark-submit job. I am using Spark 2.4.1.
> spark-submit is able to launch the driver pod by contacting the Kubernetes
> API server, but the executor pods are not getting launched. I can see the
> below warning message in the driver pod logs:
>
> 19/09/27 10:16:01 INFO TaskSchedulerImpl: Adding task set 0.0 with 3 tasks
> 19/09/27 10:16:16 WARN TaskSchedulerImpl: Initial job has not accepted
> any resources; check your cluster UI to ensure that workers are registered
> and have sufficient resources
>
> I have faced this issue in standalone Spark clusters and resolved it, but
> I am not sure how to resolve it on Kubernetes. I have not set any
> ResourceQuota configuration in the Kubernetes RBAC YAML file, and there is
> ample memory and CPU available for any new pod/container to be launched.
>
> Any leads/pointers to resolve this issue would be of great help.
>
> Thanks and Regards
> Manish Gupta

-- 
Thanks,
Prudhvi Chennuru.
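In case it helps while debugging: on an RBAC-enabled cluster, one common cause of this symptom is the driver pod running under a service account that lacks permission to create executor pods. A minimal sketch of the service-account setup described in the Spark "Running on Kubernetes" docs follows; the names (`spark`, `default` namespace) and the image/jar paths are placeholders to adjust for your cluster, not values taken from this thread:

```shell
# Create a service account for the driver and bind it to the built-in
# "edit" ClusterRole, which allows creating/watching/deleting pods
# in the namespace.
kubectl create serviceaccount spark --namespace=default
kubectl create clusterrolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=default:spark \
  --namespace=default

# Run the driver under that service account so it is authorized to ask
# the API server to launch executor pods.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.1.jar
```

If the service account is missing pod-create permission, the driver log usually contains Forbidden errors from the Kubernetes API in addition to the "Initial job has not accepted any resources" warning, so `kubectl logs <driver-pod>` and `kubectl describe pod` on any pending executor pods are good places to look next.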