dongjoon-hyun commented on code in PR #38943:
URL: https://github.com/apache/spark/pull/38943#discussion_r1041345486


##########
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##########
@@ -398,6 +410,10 @@ class ExecutorPodsAllocator(
     // Check reusable PVCs for this executor allocation batch
     val reusablePVCs = getReusablePVCs(applicationId, pvcsInUse)
     for ( _ <- 0 until numExecutorsToAllocate) {
+      if (reusablePVCs.isEmpty && reusePVC && maxPVCs <= PVC_COUNTER.get()) {

Review Comment:
   Thank you for review.
   
   Theoretically, `reusablePVCs` are all driver-owned PVCs whose age is greater 
than `podAllocationDelay` at allocation time. So, their count can exceed 
`maxPVCs` if there is other PVC creation logic (for example, a Spark driver plugin).
   
   
https://github.com/apache/spark/blob/89b2ee27d258dec8fe265fa862846e800a374d8e/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala#L364-L382
   
   Also, previously, Spark created new pods and PVCs when some executors 
died. In that case, a few more PVCs could be created.
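   As a side note, the age-based reuse check described above can be sketched roughly as 
follows. This is a simplified, self-contained illustration (the `Pvc` case class and 
`reusablePvcs` helper are hypothetical names, not Spark's actual API; the real logic in 
`ExecutorPodsAllocator.getReusablePVCs` also checks ownership and the in-use set against 
the Kubernetes client):

```scala
import java.time.Instant

// Hypothetical simplified model of a driver-owned PVC.
case class Pvc(name: String, creationTime: Instant)

object PvcReuseSketch {
  // A PVC is considered reusable only when it is not currently in use
  // and it is older than podAllocationDelay, mirroring the check the
  // comment describes. Names and signature are illustrative only.
  def reusablePvcs(
      created: Seq[Pvc],
      inUse: Set[String],
      podAllocationDelayMs: Long,
      now: Instant): Seq[Pvc] = {
    created
      .filterNot(pvc => inUse.contains(pvc.name))
      .filter { pvc =>
        now.toEpochMilli - pvc.creationTime.toEpochMilli > podAllocationDelayMs
      }
  }
}
```

   Because any PVCs created outside this path (e.g. by a plugin) would also pass 
the age filter, the resulting list is not bounded by `maxPVCs` on its own, which 
is why the extra `PVC_COUNTER` guard in the diff is useful.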



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

