Eric created SPARK-41006:
----------------------------

             Summary: ConfigMap has the same name when launching two pods on 
the same namespace
                 Key: SPARK-41006
                 URL: https://issues.apache.org/jira/browse/SPARK-41006
             Project: Spark
          Issue Type: Bug
          Components: Kubernetes
    Affects Versions: 3.3.0, 3.2.0, 3.1.0
            Reporter: Eric


We use the Spark launcher ({{InProcessLauncher}}) to launch our Spark apps on Kubernetes:
{code:java}
val sparkLauncher = new InProcessLauncher()
  .setMaster(k8sMaster)
  .setDeployMode(deployMode)
  .setAppName(appName)
  .setVerbose(true)

sparkLauncher.startApplication(new SparkAppHandle.Listener {
  override def stateChanged(handle: SparkAppHandle): Unit = { /* ... */ }
  override def infoChanged(handle: SparkAppHandle): Unit = { /* ... */ }
}){code}
We hit an issue when we launch a second Spark driver in a namespace where another Spark app is already running:
{code:java}
kp -n qa-topfive-python-spark-2-15d42ac3b9
NAME                                                READY   STATUS        RESTARTS   AGE
data-io-c590a7843d47e206-driver                     1/1     Terminating   0          2s
qa-top-five-python-1667475391655-exec-1             1/1     Running       0          94s
qa-topfive-python-spark-2-462c5d843d46e38b-driver   1/1     Running       0          119s {code}
The error is (JSON log entry decoded for readability):
{code:java}
2022-10-24T15:08:50.239Z WARN o.a.s.l.InProcessAppHandle [spark-app-44: 'data-io'] Application failed with exception.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://kubernetes.default/api/v1/namespaces/qa-topfive-python-spark-2-edf723f942/configmaps/spark-drv-34c4e3840a0466c2-conf-map. Message: ConfigMap "spark-drv-34c4e3840a0466c2-conf-map" is invalid: data: Forbidden: field is immutable when `immutable` is set. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=data, message=Forbidden: field is immutable when `immutable` is set, reason=FieldValueForbidden, additionalProperties={})], group=null, kind=ConfigMap, name=spark-drv-34c4e3840a0466c2-conf-map, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=ConfigMap "spark-drv-34c4e3840a0466c2-conf-map" is invalid: data: Forbidden: field is immutable when `immutable` is set, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:612)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:342)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:322)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleUpdate(BaseOperation.java:649)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$replace$1(HasMetadataOperation.java:195)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation$$Lambda$5663/000000000000000000.apply(Unknown Source)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:200)
	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:141)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation$$Lambda$5183/000000000000000000.apply(Unknown Source)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.replace(CreateOrReplaceHelper.java:69)
	at io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.createOrReplace(CreateOrReplaceHelper.java:61)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:318)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.createOrReplace(BaseOperation.java:83)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableImpl.java:105)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.lambda$createOrReplace$7(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:174)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl$$Lambda$5578/000000000000000000.apply(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
	at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
	at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:176)
	at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:54)
	at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:175)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5(KubernetesClientApplication.scala:248)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5$adapted(KubernetesClientApplication.scala:242)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$Lambda$5452/000000000000000000.apply(Unknown Source)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2764)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:242)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:214)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
	at org.apache.spark.deploy.InProcessSparkSubmit$.main(SparkSubmit.scala:987)
	at org.apache.spark.deploy.InProcessSparkSubmit.main(SparkSubmit.scala)
	at jdk.internal.reflect.GeneratedMethodAccessor442.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at org.apache.spark.launcher.InProcessAppHandle.lambda$start$0(InProcessAppHandle.java:72)
	at org.apache.spark.launcher.InProcessAppHandle$$Lambda$5223/000000000000000000.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source) {code}
When the second application (data-io) is launched in the same namespace, its driver ConfigMap gets the same name as the one created for the application that is already running (spark-drv-34c4e3840a0466c2-conf-map), and since that ConfigMap is immutable, the PUT fails.
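The root cause can be sketched with a minimal, self-contained example (the object and member names here are illustrative stand-ins, not Spark's actual code): a Scala singleton object lives for the whole JVM, so a {{val}} member is evaluated exactly once and cached, and every in-process submission from the same JVM reuses the same suffix, while a {{def}} is re-evaluated on each call:
{code:java}
import java.util.UUID

// Hypothetical stand-in for KubernetesClientUtils. Because a singleton
// object is initialized once per JVM, `cachedName` is computed a single
// time, while `freshName` generates a new suffix on every invocation.
object ConfigMapNames {
  val cachedName: String = s"spark-drv-${UUID.randomUUID().toString.take(8)}-conf-map"
  def freshName: String  = s"spark-drv-${UUID.randomUUID().toString.take(8)}-conf-map"
}

object Demo extends App {
  // Two "submissions" from the same JVM, as with InProcessLauncher:
  println(ConfigMapNames.cachedName == ConfigMapNames.cachedName) // same name twice -> name collision
  println(ConfigMapNames.freshName == ConfigMapNames.freshName)   // different names -> no collision
}{code}
This is exactly why only in-process launches are affected: with a regular {{spark-submit}}, each submission runs in a fresh JVM and the {{val}} is re-initialized anyway.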

The fix is straightforward: change this line in KubernetesClientUtils:
[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientUtils.scala#L46]



Change the {{val}} to a {{def}}, so a fresh uniqueID is generated on every call to {{configMapNameDriver}}:
{code:java}
def configMapNameDriver: String =
  configMapName(s"spark-drv-${KubernetesUtils.uniqueID()}") {code}
We have tested this change and it works in our case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
