[ https://issues.apache.org/jira/browse/YUNIKORN-966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523044#comment-17523044 ]

ted edited comment on YUNIKORN-966 at 4/16/22 8:45 AM:
-------------------------------------------------------

Hi [~yuchaoran2011],

I would like some clarification, thank you.

The original method already seems to be able to pick up the username from every pod's labels.

With this method, the Spark pods run smoothly.

This can be seen in the shim log:
{code:java}
utils/utils.go:225    Found user name from pod labels.    {"userLabel": "yunikorn.apache.org/username", "user": "ted"} {code}
 * Install spark-on-k8s-operator: enable batchScheduler and the webhook in values.yaml, then run helm install.
 * Install yunikorn: 

 1. In deployments/image/configmap/Dockerfile, change the OPERATOR_PLUGINS line to:
{code:java}
ENV OPERATOR_PLUGINS "general,spark-k8s-operator"{code}

 2. The queue configuration in the YuniKorn configmap:
{code:java}
partitions:
  - name: default
    placementrules:
      - name: tag
        value: namespace
        create: true
    queues:
      - name: root
        submitacl: 'ted'{code}
 *   SparkApplication:

{code:java}
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
  labels:
    yunikorn.apache.org/username: "ted"
    queue: "root.parent"
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  batchScheduler: "yunikorn"
  sparkVersion: "3.1.1"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 3.1.1
    serviceAccount: chart-1650093659-spark
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.1.1
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"{code}
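
Since this ticket proposes reading the same label from the SparkApp CRD instead of the pods, here is a rough sketch of what that lookup could look like when the SparkApplication is fetched as an unstructured object (e.g. via a dynamic client). The helper and the sample object are hypothetical; only the label key, apiVersion/kind and label values come from the manifest above:
{code:java}
package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const userLabel = "yunikorn.apache.org/username"

// usernameFromSparkApp is a hypothetical helper: it reads the username
// label from a SparkApplication object retrieved as unstructured data.
func usernameFromSparkApp(app *unstructured.Unstructured) (string, bool) {
    user, ok := app.GetLabels()[userLabel]
    return user, ok && user != ""
}

func main() {
    // Mirror the SparkApplication manifest shown above.
    app := &unstructured.Unstructured{Object: map[string]interface{}{}}
    app.SetAPIVersion("sparkoperator.k8s.io/v1beta2")
    app.SetKind("SparkApplication")
    app.SetName("spark-pi")
    app.SetNamespace("default")
    app.SetLabels(map[string]string{
        userLabel: "ted",
        "queue":   "root.parent",
    })

    if user, ok := usernameFromSparkApp(app); ok {
        fmt.Println("user from SparkApplication labels:", user) // ted
    }
}{code}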


> Retrieve the username from the SparkApp CRD
> -------------------------------------------
>
>                 Key: YUNIKORN-966
>                 URL: https://issues.apache.org/jira/browse/YUNIKORN-966
>             Project: Apache YuniKorn
>          Issue Type: Sub-task
>          Components: shim - kubernetes
>            Reporter: Chaoran Yu
>            Assignee: ted
>            Priority: Minor
>
> Currently the shim only looks at the pods to get the value of the label 
> yunikorn.apache.org/username. When the Spark operator plugin is enabled, we 
> should look at the SparkApp CRD for the label.


