Wenjun Ruan created SPARK-54553:
-----------------------------------

             Summary: Do not overwrite the PodGroup name from podGroupTemplateFile when using Volcano
                 Key: SPARK-54553
                 URL: https://issues.apache.org/jira/browse/SPARK-54553
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 4.0.1
            Reporter: Wenjun Ruan


I submit Spark jobs to Kubernetes using Volcano as the scheduler.

According to the documentation:
[https://spark.apache.org/docs/latest/running-on-kubernetes.html#volcano-feature-step]

I found that the actual PodGroup name is not the one I specified in the 
{{podGroupTemplateFile}}. Instead, it is overwritten by {{VolcanoFeatureStep}}.

I hope Spark does not overwrite the PodGroup name when it is already defined in 
the {{podGroupTemplateFile}}.

Here is the relevant code:

```scala
override def getAdditionalPreKubernetesResources(): Seq[HasMetadata] = {
  if (kubernetesConf.isInstanceOf[KubernetesExecutorConf]) {
    logWarning(
      "VolcanoFeatureStep#getAdditionalPreKubernetesResources() is not supported for executor.")
    return Seq.empty
  }

  lazy val client = new DefaultVolcanoClient
  val template = kubernetesConf.getOption(POD_GROUP_TEMPLATE_FILE_KEY)
  val pg = template.map(client.podGroups.load(_).item).getOrElse(new PodGroup())

  var metadata = pg.getMetadata
  if (metadata == null) metadata = new ObjectMeta
  // The name loaded from podGroupTemplateFile (if any) is unconditionally
  // overwritten here with Spark's generated podGroupName.
  metadata.setName(podGroupName)
  metadata.setNamespace(namespace)
  pg.setMetadata(metadata)

  var spec = pg.getSpec
  if (spec == null) spec = new PodGroupSpec
  pg.setSpec(spec)

  Seq(pg)
}
```

I can submit a PR to make this behavior configurable or to preserve the 
PodGroup name if it is already provided in {{podGroupTemplateFile}}.
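
For illustration, one possible shape of the non-configurable variant (a rough 
sketch only, reusing the names from the snippet above; it is not the actual 
patch):

```scala
var metadata = pg.getMetadata
if (metadata == null) metadata = new ObjectMeta
// Only fall back to Spark's generated podGroupName when the template
// did not already define a name.
if (metadata.getName == null || metadata.getName.isEmpty) {
  metadata.setName(podGroupName)
}
metadata.setNamespace(namespace)
pg.setMetadata(metadata)
```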


