33186571-driver-svc.fractal-segmentation.svc
>
> -z
> ________
> From: Prudhvi Chennuru (CONT)
> Sent: Friday, April 10, 2020 2:44
> To: user
> Subject: Driver pods stuck in running state indefinitely
>
Hi,
*We are running Spark batch jobs on K8s.*
*Kubernetes version:* 1.11.5
*Spark version:* 2.3.2
*Docker version:* 19.3.8
*Issue:* A few driver pods are stuck in the running state indefinitely with the
error:
```
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
```
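A minimal diagnostic sketch for this symptom; the `spark-jobs` namespace and pod names below are placeholders rather than details from the thread, and it assumes the default `spark-role` labels that Spark on K8s puts on its pods:

```
# Were any executor pods ever created for the stuck driver?
kubectl get pods -n spark-jobs -l spark-role=executor

# If executors exist but stay Pending, the events usually show why
# (Insufficient cpu/memory, image pull failures, service-account errors).
kubectl describe pod <executor-pod-name> -n spark-jobs

# The driver log shows whether it keeps requesting executors without success.
kubectl logs <driver-pod-name> -n spark-jobs | tail -n 50
```

If no executor pod is ever created at all, the driver's service-account permissions are a usual suspect (see the later messages in this thread).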
Hi,
I am running Spark batch jobs on a Kubernetes cluster and intermittently
I am seeing MultiObjectDeleteException.
spark version: 2.3.0
kubernetes version: 1.11.5
aws-java-sdk: 1.7.4.jar
hadoop-aws: 2.7.3.jar
I even added the *spark.hadoop.fs.s3a.multiobjectdelete.enable=false* property
to
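For reference, that property can be passed at submit time with `--conf`; the sketch below uses placeholder image, class, and jar names (none of them come from this thread) and assumes the hadoop-aws and aws-java-sdk jars mentioned above are already on the classpath:

```
spark-submit \
  --master k8s://https://<api-server>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  --conf spark.hadoop.fs.s3a.multiobjectdelete.enable=false \
  --class com.example.BatchJob \
  local:///opt/spark/jars/batch-job.jar
```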
On Tue, Oct 1, 2019 at 8:01 PM Prudhvi Chennuru (CONT) <prudhvi.chenn...@capitalone.com> wrote:
>
>> By default, executors use the default service account in the namespace in
>> which you are creating the driver and executors, so I am guessing that the
>> executors don't have access to ru
> ng message that I
> have provided above.
> Not even a single executor pod is getting launched.
>
> Regards
> Manish Gupta
>
> On Tue, Oct 1, 2019 at 6:31 PM Prudhvi Chennuru (CONT) <
> prudhvi.chenn...@capitalone.com> wrote:
Hi Manish,
Are you seeing this issue consistently or sporadically? And when you say
executors are not launched, do you mean not even a single executor was created
for that driver pod?
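On the service-account point quoted earlier: the Spark-on-Kubernetes docs suggest creating a dedicated service account with `edit` rights and pointing the driver at it. A rough sketch, where the `spark-jobs` namespace, image, class, and jar are placeholders rather than details from this thread:

```
# Service account allowed to create/delete executor pods in its namespace.
kubectl create serviceaccount spark -n spark-jobs
kubectl create clusterrolebinding spark-role --clusterrole=edit \
  --serviceaccount=spark-jobs:spark

# Run the driver under that account instead of "default".
spark-submit \
  --master k8s://https://<api-server>:443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.namespace=spark-jobs \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --class com.example.BatchJob \
  local:///opt/spark/jars/batch-job.jar
```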
On Tue, Oct 1, 2019 at 1:43 AM manish gupta wrote:
> Hi Team
>
> I am trying to create a spark cluster on
> 2. Disable negative DNS caching at the JVM level, in the entrypoint.sh
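A sketch of what point 2 could look like in the image's entrypoint.sh; the java.security path varies with the base JDK, so the path here is an assumption:

```
# Disable negative DNS caching so a failed lookup of the driver service
# is retried instead of being cached by the JVM (default negative TTL is 10s).
JAVA_SECURITY="${JAVA_HOME}/jre/lib/security/java.security"  # JDK 8 layout; newer JDKs use lib/security
if grep -q '^networkaddress.cache.negative.ttl' "$JAVA_SECURITY"; then
  sed -i 's/^networkaddress.cache.negative.ttl=.*/networkaddress.cache.negative.ttl=0/' "$JAVA_SECURITY"
else
  echo 'networkaddress.cache.negative.ttl=0' >> "$JAVA_SECURITY"
fi
```

The same effect can also be had per job via `spark.driver.extraJavaOptions=-Dsun.net.inetaddr.negative.ttl=0` (and the executor equivalent), without modifying the image.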
>
> JL
>
> *From:* Olivier Girardot
> *Date:* Tuesday 18 June 2019 at 10:06
> *To:* "Prudhvi Chennuru
Hey Olivier,
I am also facing the same issue on my Kubernetes cluster (v1.11.5) on AWS
with Spark version 2.3.3. Any luck in figuring out the root cause?
On Fri, May 3, 2019 at 5:37 AM Olivier Girardot <
o.girar...@lateral-thoughts.com> wrote:
> Hi,
> I did not try on
Hi,
I am using Kubernetes *v1.11.5* and Spark *v2.3.0*, with *calico (daemonset)*
as the overlay network plugin and the Kubernetes *cluster autoscaler* feature
to autoscale the cluster if needed. When the cluster is autoscaling, calico
pods are scheduled on the new nodes but they are not ready for 40 to 50