One thing to note is that you may need to have the S3 credentials in the
init-container unless you use a publicly accessible URL. If this is the
case, you can create a Kubernetes secret and use the Spark config
option for mounting secrets (secrets will be mounted into the
init-container as well).
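For example, a rough sketch (the secret name aws-creds and the mount path
/etc/aws-creds are just placeholders, not anything Spark requires):

  kubectl create secret generic aws-creds \
    --from-literal=access-key=<AWS_ACCESS_KEY_ID> \
    --from-literal=secret-key=<AWS_SECRET_ACCESS_KEY>

  spark-submit ... \
    --conf spark.kubernetes.driver.secrets.aws-creds=/etc/aws-creds \
    --conf spark.kubernetes.executor.secrets.aws-creds=/etc/aws-creds \
    ...

The secret contents then show up as files under /etc/aws-creds in the driver
and executor pods (and in the init-container). How you turn those files into
S3 credentials, e.g. wiring them into fs.s3a.access.key / fs.s3a.secret.key,
depends on your image and entrypoint.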
You don't need to create the init-container yourself; it's an implementation
detail. If you provide a remote URI and
specify spark.kubernetes.container.image, Spark *internally*
will add the init container to the pod spec for you.
*If* for some reason you want to customize the init-container image, you
can point spark.kubernetes.initContainer.image at your own image (in Spark
2.3 it defaults to the value of spark.kubernetes.container.image).
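A rough sketch of what that looks like (API server address, image name, class
name and jar location are placeholders; s3a:// URIs also assume the
hadoop-aws and AWS SDK jars are available in the image):

  spark-submit \
    --master k8s://https://<api-server>:<port> \
    --deploy-mode cluster \
    --class com.example.MainApp \
    --conf spark.kubernetes.container.image=<registry>/spark:2.3.0 \
    s3a://<bucket>/jars/mainapplication.jar

Because the main resource is a remote URI, Spark injects the init container
into the driver and executor pod specs to fetch it. Only if you want a
different image for that step would you also add something like:

  --conf spark.kubernetes.initContainer.image=<registry>/my-spark-init:2.3.0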
I'm trying to run spark-submit against a Kubernetes cluster with the Spark 2.3
Docker container image.
The challenge I'm facing is that the application has a mainapplication.jar and
other dependency files & jars which are located in a remote location like AWS
S3, but as per the Spark 2.3 documentation there is something called an
init-container that is responsible for downloading such remote dependencies.
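Roughly, the submission being attempted is the one sketched in the replies
above, extended with the dependency jars and files (bucket and file names are
placeholders):

  --jars s3a://<bucket>/libs/dep1.jar,s3a://<bucket>/libs/dep2.jar \
  --files s3a://<bucket>/conf/app.conf \
  s3a://<bucket>/jars/mainapplication.jar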