[GitHub] spark pull request #21511: [SPARK-24491][Kubernetes] Configuration support f...
Github user alexmilowski commented on a diff in the pull request: https://github.com/apache/spark/pull/21511#discussion_r198591324

--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala ---
@@ -172,7 +184,7 @@ private[spark] class BasicExecutorFeatureStep(
       .addToImagePullSecrets(kubernetesConf.imagePullSecrets(): _*)
       .endSpec()
       .build()
-    SparkPod(executorPod, containerWithLimitCores)
+    SparkPod(executorPod, containerWithLimitGpus)
--- End diff --

Yes ... I'm not in love with the way this is currently structured.

---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
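[Editor's note] The diff above swaps the container passed into the returned SparkPod from `containerWithLimitCores` to `containerWithLimitGpus`. As a sketch of what that container-building step could look like (not the actual patch), here is how a GPU limit might be attached with the fabric8 Kubernetes client builders that BasicExecutorFeatureStep already uses; the resource key `nvidia.com/gpu` and the helper name are assumptions:

```scala
// Hypothetical helper, not code from the PR. Assumes the NVIDIA device
// plugin's extended resource name "nvidia.com/gpu"; AMD clusters would use
// "amd.com/gpu" instead.
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Quantity}

def withGpuLimit(container: Container, gpus: Option[Int]): Container =
  gpus.fold(container) { n =>
    new ContainerBuilder(container)
      .editOrNewResources()
        // GPU resources are requested via the "limits" section only;
        // Kubernetes does not allow GPU requests to differ from limits.
        .addToLimits("nvidia.com/gpu", new Quantity(n.toString))
      .endResources()
      .build()
  }
```

The design point being debated in the comment is where this logic should live: as yet another special-cased container transformation inside the feature step, or as part of a generic pod-template mechanism (see the SPARK-24434 discussion below).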
[GitHub] spark pull request #21511: [SPARK-24491][Kubernetes] Configuration support f...
Github user alexmilowski commented on a diff in the pull request: https://github.com/apache/spark/pull/21511#discussion_r198591146

--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala ---
@@ -104,6 +104,20 @@ private[spark] object Config extends Logging {
       .stringConf
       .createOptional

+  val KUBERNETES_EXECUTOR_LIMIT_GPUS =
--- End diff --

Would drivers need GPU acceleration? My assumption was that the executor code is where all of the possible acceleration would be needed.
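[Editor's note] The diff excerpt cuts off after the `val` declaration. A plausible sketch of the full entry, following the ConfigBuilder pattern used by the surrounding options in Config.scala (e.g. the `.stringConf.createOptional` chain visible in the context lines); the exact config key and doc text are assumptions:

```scala
// Sketch only; the real key and documentation string are not shown in the
// excerpt above.
val KUBERNETES_EXECUTOR_LIMIT_GPUS =
  ConfigBuilder("spark.kubernetes.executor.limit.gpus")
    .doc("Number of GPUs to request as a resource limit for each executor pod.")
    .intConf
    .createOptional
```

Making the option optional (`createOptional`) means the pod spec would omit the GPU limit entirely when the setting is absent, so jobs on non-GPU clusters are unaffected.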
[GitHub] spark issue #21511: [SPARK-24491][Kubernetes] Configuration support for requ...
Github user alexmilowski commented on the issue: https://github.com/apache/spark/pull/21511

Hello all, I've been thinking about trying to make this more generic, given that I just ran into a similar hostPath/volume issue for executors. I took a look at SPARK-24434, and that seems likely to be the right path. In the end, GPUs, volumes, etc. are all just aspects that need to go into the pod description so that the scheduler can choose the right node and associate the right resources. I will try to contribute to SPARK-24434, and this pull request may not be necessary afterwards.
[GitHub] spark pull request #21511: [SPARK-24491][Kubernetes] Configuration support f...
GitHub user alexmilowski opened a pull request: https://github.com/apache/spark/pull/21511

[SPARK-24491][Kubernetes] Configuration support for requesting GPUs on k8s

## What changes were proposed in this pull request?

Configuration support for generating the GPU requests in the limits section of the executor pods.

## How was this patch tested?

The patch has been tested on a local on-premise cluster with mixed nodes (some with GPUs and some without). There are currently no contributed tests for the patch. :(

Legal: I (Alex Miłowski) developed and tested this patch. It is my original work, which I license to the project under the project's open source license.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/alexmilowski/spark k8s-gpu

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/21511.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #21511

commit 583928ed2f280ca90c77fc12bf49817f6792db66
Author: alex.milowski
Date: 2018-06-07T23:05:14Z

    Configuration support for requesting GPUs on k8s
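[Editor's note] An illustrative submission using the proposed feature. The config key `spark.kubernetes.executor.limit.gpus` is inferred from the `KUBERNETES_EXECUTOR_LIMIT_GPUS` constant in the diff and is not confirmed by the excerpt; the cluster address and image name are placeholders:

    # Illustrative only; spark.kubernetes.container.image is a standard
    # Spark-on-k8s option, the GPU option name is an assumption.
    $ spark-submit \
        --master k8s://https://<k8s-apiserver>:6443 \
        --deploy-mode cluster \
        --conf spark.kubernetes.container.image=<spark-image> \
        --conf spark.kubernetes.executor.limit.gpus=1 \
        ...

With such a limit in place, the Kubernetes scheduler would only place executor pods on nodes advertising GPU resources (via a device plugin), which matches the mixed GPU/non-GPU cluster the patch was tested on.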