Re: Recap on current status of "SPIP: Support Customized Kubernetes Schedulers"

2022-02-24 Thread Yikun Jiang
@dongjoon-hyun @yangwwei Thanks!

@Mich Thanks for testing it. I'm not very familiar with GKE, so I'm not
sure how it differs from upstream K8S in configuration, internal
networking, or the scheduler implementation itself. As far as I know,
different K8S vendors also maintain their own optimizations in their
downstream products.

But you can see some basic integration test results based on upstream K8S
on x86/arm64:
- x86: https://github.com/apache/spark/pull/35422#issuecomment-1035901775
- Arm64: https://github.com/apache/spark/pull/35422#issuecomment-1037039764

As the results show, for a single job there is no significant difference
between the default scheduler and Volcano.

Also, custom schedulers such as Volcano and Yunikorn are aimed more at the
overall picture across multiple jobs and at the utilization of the entire
K8S cluster.
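
For reference, here is a rough sketch of what the multi-job side looks like
with Volcano (assuming Volcano has been installed on the cluster from its
installer manifest; the queue name and limits below are only examples). The
queue is what spark.kubernetes.job.queue points at, and it is the shared
queue capacity across all jobs submitted into it that the custom scheduler
manages:

# One-off install of Volcano (cluster admin); manifest as published in the
# Volcano repo.
kubectl apply -f https://raw.githubusercontent.com/volcano-sh/volcano/master/installer/volcano-development.yaml

# A queue that caps the total CPU/memory available to the jobs submitted
# into it (example values only).
cat <<EOF | kubectl apply -f -
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: queue1
spec:
  weight: 1
  capability:
    cpu: "8"
    memory: 32Gi
EOF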


Re: Recap on current status of "SPIP: Support Customized Kubernetes Schedulers"

2022-02-24 Thread Mich Talebzadeh
I did some preliminary tests with and without the Volcano additions to
spark-submit.


*setup*


The K8s cluster used was a Google Kubernetes Engine standard cluster with
three nodes, autoscaling up to 6 nodes. It runs *Spark 3.1.1*, with spark-py
docker images also built on *Spark 3.1.1 with Java 8*. In every run, the job
creates a million rows of random data and inserts them from a Spark
DataFrame into a Google BigQuery table. Spark 3.1.1 and Java 8 were chosen
for compatibility between the Spark API and the BigQuery connector.


To keep the setup identical, I used the same cluster in both cases; the only
difference was the additional Volcano-specific lines in spark-submit, shown below.



NEXEC=2
MEMORY="8192m"
VCORES=3
FEATURES="org.apache.spark.deploy.k8s.features.VolcanoFeatureStep"

gcloud config set compute/zone $ZONE
export PROJECT=$(gcloud info --format='value(config.project)')
gcloud container clusters get-credentials ${CLUSTER_NAME} --zone $ZONE
export KUBERNETES_MASTER_IP=$(gcloud container clusters list --filter name=${CLUSTER_NAME} --format='value(MASTER_IP)')

spark-submit --verbose \
   --properties-file ${property_file} \
   --master k8s://https://$KUBERNETES_MASTER_IP:443 \
   --deploy-mode cluster \
   --name sparkBQ \
   *--conf spark.kubernetes.scheduler=volcano \*
   *--conf spark.kubernetes.driver.pod.featureSteps=$FEATURES \*
   *--conf spark.kubernetes.executor.pod.featureSteps=$FEATURES \*
   *--conf spark.kubernetes.job.queue=queue1 \*
   --py-files $CODE_DIRECTORY_CLOUD/spark_on_gke.zip \
   --conf spark.kubernetes.namespace=$NAMESPACE \
   --conf spark.executor.instances=$NEXEC \
   --conf spark.driver.cores=$VCORES \
   --conf spark.executor.cores=$VCORES \
   --conf spark.driver.memory=$MEMORY \
   --conf spark.executor.memory=$MEMORY \
   --conf spark.network.timeout=300 \
   --conf spark.kubernetes.allocation.batch.size=3 \
   --conf spark.kubernetes.allocation.batch.delay=1 \
   --conf spark.dynamicAllocation.enabled=true \
   --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
   --conf spark.kubernetes.driver.container.image=${IMAGEDRIVER} \
   --conf spark.kubernetes.executor.container.image=${IMAGEDRIVER} \
   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-bq \
   --conf spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
   --conf spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
   --conf spark.kubernetes.authenticate.caCertFile=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
   --conf spark.kubernetes.authenticate.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token \
   $CODE_DIRECTORY_CLOUD/${APPLICATION}



In contrast, the standard spark-submit does not have those 4 Volcano-specific
lines (shown in bold). This is the output from *spark-submit --verbose*:


Spark properties used, including those specified through --conf and those
from the properties file
/home/hduser/dba/bin/python/spark_on_gke/deployment/src/scripts/properties:

  (spark.kubernetes.executor.secrets.spark-sa,*(redacted))
  (spark.dynamicAllocation.shuffleTracking.enabled,true)
  (spark.kubernetes.allocation.batch.delay,1)
  (spark.kubernetes.driverEnv.GOOGLE_APPLICATION_CREDENTIALS,*(redacted))
  *(spark.kubernetes.executor.pod.featureSteps,"org.apache.spark.deploy.k8s.features.VolcanoFeatureStep")*
  (spark.driver.memory,8192m)
  (spark.network.timeout,300)
  (spark.executor.memory,8192m)
  (spark.executor.instances,2)
  (spark.hadoop.fs.gs.project.id,xxx)
  (spark.kubernetes.allocation.batch.size,3)
  (spark.hadoop.google.cloud.auth.service.account.json.keyfile,*(redacted))
  *(spark.kubernetes.scheduler,volcano)*
  (spark.kubernetes.namespace,spark)
  (spark.kubernetes.authenticate.driver.serviceAccountName,spark-bq)
  (spark.kubernetes.executor.container.image,eu.gcr.io/xxx/spark-py:3.1.1-scala_2.12-8-jre-slim-buster-java8PlusPackages)
  (spark.driver.cores,3)
  (spark.kubernetes.driverEnv.GCS_PROJECT_ID,xxx)
  (spark.executor.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true)
  (spark.executorEnv.GCS_PROJECT_ID,xxx)
  (spark.hadoop.google.cloud.auth.service.account.enable,true)
  (spark.driver.extraJavaOptions,-Dio.netty.tryReflectionSetAccessible=true)
  *(spark.kubernetes.job.queue,queue1)*
  (spark.kubernetes.authenticate.caCertFile,*(redacted))
  (spark.kubernetes.driver.secrets.spark-sa,*(redacted))
  (spark.executorEnv.GOOGLE_APPLICATION_CREDENTIALS,*(redacted))
  (spark.kubernetes.authenticate.oauthTokenFile,*(redacted))
  (spark.dynamicAllocation.enabled,true)
  (spark.kubernetes.driver.container.image,
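
As a quick sanity check that the Volcano settings actually took effect, the
pods that Spark creates can be inspected directly. A rough sketch, assuming
the $NAMESPACE variable from the submit script above and that the Volcano
CRDs are installed on the cluster:

# the driver (and executor) pods should report volcano as their scheduler
kubectl -n $NAMESPACE get pods -l spark-role=driver \
  -o jsonpath='{.items[*].spec.schedulerName}'

# the VolcanoFeatureStep should also have created a PodGroup for the app
kubectl -n $NAMESPACE get podgroups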

Re: Recap on current status of "SPIP: Support Customized Kubernetes Schedulers"

2022-02-24 Thread Mich Talebzadeh
Hi,

What do you expect the performance gain to be from using Volcano versus the
standard scheduler?

Just to be sure, there are two aspects here:


   1. Procuring the Kubernetes cluster
   2. Running the job through spark-submit


Item 1 is left untouched, and we should see improvements in item 2 with
Volcano.
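
For reference, item 1 here is just a standard GKE cluster. A rough sketch of
how such a cluster can be procured (cluster name, zone and machine type are
placeholders, matching a 3-node cluster that autoscales to 6 nodes):

gcloud container clusters create ${CLUSTER_NAME} \
  --zone ${ZONE} \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 3 --max-nodes 6 \
  --machine-type e2-standard-4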

Thanks







On Thu, 24 Feb 2022 at 03:35, Yikun Jiang  wrote:

> First, many thanks for all your help (Spark/Volcano/Yunikorn community) in
> making this SPIP happen!
>
> Especially, @dongjoon-hyun @holdenk @william-wang @attilapiros @HyukjinKwon
> @martin-g @yangwwei @tgravescs
>
> The SPIP is nearing its final stage; it can be considered beta quality at
> the basic level.
>
> I have also drafted a short slide deck to show how to use it and to help
> you understand what we have done:
>
> https://docs.google.com/presentation/d/1XDsTWPcsBe4PQ-1MlBwd9pRl8mySdziE_dJE6iATNw8
>
> Below is a recap to help you understand the current implementation and the
> next steps for the SPIP:
>
> *# Existing work*
> *## Basic part:*
> - SPARK-36059 *New configuration:* ability to specify "schedulerName" in
> driver/executor for Spark on K8S
> - SPARK-37331 *New workflow:* ability to create pre-populated resources
> before the driver pod for Spark on K8S
> - SPARK-37145 *New developer API:* support user feature steps with
> configuration for Spark on K8S
> - *(reviewing)* *New Job Configurations* for Spark on K8S (see the sketch
> below):
>   - SPARK-38188: spark.kubernetes.job.queue
>   - SPARK-38187: spark.kubernetes.job.[minCPU|minMemory]
>   - SPARK-38189: spark.kubernetes.job.priorityClassName
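>
> As a rough sketch of how these would fit together on submission (the exact
> config names and value formats are still under review; the queue name,
> priority class and values below are only placeholders):
>
> spark-submit \
>   --conf spark.kubernetes.scheduler=volcano \
>   --conf spark.kubernetes.job.queue=queue1 \
>   --conf spark.kubernetes.job.minCPU=4 \
>   --conf spark.kubernetes.job.minMemory=8192m \
>   --conf spark.kubernetes.job.priorityClassName=high-priority \
>   ...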
>
> *## Volcano Part:*
> - SPARK-37258 *New Volcano extension* in kubernetes-client
> (fabric8io/kubernetes-client#3579)
> - SPARK-36061 *New profile:* -Pvolcano
> - SPARK-36061 *New Feature Step:* VolcanoFeatureStep
> - SPARK-36061 *New integration tests:*
>   *- Passed on x86 and Arm64 (Linux on Huawei Kunpeng 920 and macOS on
> Apple Silicon M1).*
>   - Test the basic Volcano workflow
>   - Run all existing tests on top of Volcano.
>
> *## Yunikorn Part:*
> @yangwwei will also start work on the Yunikorn feature step module this
> week.
> I will help to complete the Yunikorn integration based on previous
> experience.
>
> *# Next Plan*
> There are 4 main tasks to be completed before the v3.3 code freeze (3
> under review, 1 WIP):
> 1. (reviewing) SPARK-38188: Support queue scheduling configuration
> https://github.com/apache/spark/pull/35553
> 2. (reviewing) SPARK-38187: Support resource reservation (minCPU/minMemory
> configuration)
> https://github.com/apache/spark/pull/35640
> 3. (reviewing) SPARK-38189: Support priority scheduling (priorityClass
> configuration)
> https://issues.apache.org/jira/browse/SPARK-38189
> https://github.com/apache/spark/pull/35639
> 4. (WIP) SPARK-37809: Yunikorn integration
>
> Several miscellaneous tasks will also be completed before 3.3:
> 1. Integrate the Volcano deployment into the integration tests (x86 and arm)
> - Add it to the Spark Kubernetes integration tests once cross-compile
> support lands: https://github.com/volcano-sh/volcano/pull/1571
> 2. Complete the documentation and test guidelines.
>
> Please feel free to contact me if you have any other concerns! Thanks!
>
> [1] https://issues.apache.org/jira/browse/SPARK-36057
>