Hi Mich

IMO, it's done to provide the most flexibility. For example, some users may
want a limited/restricted version of the image for one role, or an executor
image with additional software that is needed during processing.

So, in your case you only need to provide the first one,
spark.kubernetes.container.image, since the other two configs default to its
value.
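
For example (just a sketch; the image names are illustrative), the minimal
submit sets only the shared property and lets the driver and executor images
default to it:

spark-submit --verbose \
           --conf spark.kubernetes.container.image=${IMAGEGCP} \
           ...

and if, say, the executors need extra processing libraries baked into their
image, you override just that one property (the -extras tag below is made up):

spark-submit --verbose \
           --conf spark.kubernetes.container.image=${IMAGEGCP} \
           --conf spark.kubernetes.executor.container.image=${IMAGEGCP}-extras \
           ...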

Regards
Khalid

On Wed, 8 Dec 2021, 10:41 Mich Talebzadeh, <mich.talebza...@gmail.com>
wrote:

> Just a correction: the Spark 3.2 documentation states
> <https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration>
> the following:
>
> Property Name: spark.kubernetes.container.image
> Default: (none)
> Meaning: Container image to use for the Spark application. This is usually
> of the form example.com/repo/spark:v1.0.0. This configuration is required
> and must be provided by the user, unless explicit images are provided for
> each different container type.
> Since Version: 2.3.0
>
> Property Name: spark.kubernetes.driver.container.image
> Default: (value of spark.kubernetes.container.image)
> Meaning: Custom container image to use for the driver.
> Since Version: 2.3.0
>
> Property Name: spark.kubernetes.executor.container.image
> Default: (value of spark.kubernetes.container.image)
> Meaning: Custom container image to use for executors.
>
> So both the driver and executor images default to the container image. In
> my opinion they are redundant and will potentially add confusion, so should
> they be removed?
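>
> For illustration, if only the base property is set, e.g.
>
>    --conf spark.kubernetes.container.image=example.com/repo/spark:v1.0.0
>
> then the driver and executor images both resolve to
> example.com/repo/spark:v1.0.0, precisely because their defaults point back
> to it.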
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 8 Dec 2021 at 10:15, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
>> Hi,
>>
>> We have three conf parameters for specifying the Docker image with
>> spark-submit in a Kubernetes cluster.
>>
>> These are
>>
>> spark-submit --verbose \
>>            --conf spark.kubernetes.driver.docker.image=${IMAGEGCP} \
>>            --conf spark.kubernetes.executor.docker.image=${IMAGEGCP} \
>>            --conf spark.kubernetes.container.image=${IMAGEGCP} \
>>
>> when the above is run, it shows
>>
>> (spark.kubernetes.driver.docker.image,
>> eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages
>> )
>> (spark.kubernetes.executor.docker.image,
>> eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages
>> )
>> (spark.kubernetes.container.image,
>> eu.gcr.io/axial-glow-224522/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-addedpackages
>> )
>>
>> You will notice that I am using the same Docker image for the driver,
>> executor and container. In Spark 3.2 (actually in recent Spark versions), I
>> cannot see any reference to the driver or executor image properties. Are
>> these deprecated? Spark still appears to accept them.
>>
>> Thanks
>>
