tgravescs commented on a change in pull request #24374: [SPARK-27366][CORE] Support GPU Resources in Spark job scheduling
URL: https://github.com/apache/spark/pull/24374#discussion_r288588353
 
 

 ##########
 File path: 
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ##########
 @@ -263,7 +272,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
         val workOffers = activeExecutors.map {
           case (id, executorData) =>
             new WorkerOffer(id, executorData.executorHost, executorData.freeCores,
-              Some(executorData.executorAddress.hostPort))
+              Some(executorData.executorAddress.hostPort), executorData.availableResources.toMap)
 
 Review comment:
   We are passing in an immutable map here, which we should be. But I think you are then relying on the TaskSetManager to acquire and mark addresses as reserved, which assignAddresses below is supposed to account for. That isn't going to work, since what we pass in here is a new map: reserved addresses will never show up in executorData.availableResources at this point. Other than that, I think it works as far as idle and assigned addresses go.
   In the current usage I don't think we need reservedAddresses. This doesn't really change our original comments about where the accounting is done (TaskSchedulerImpl vs. TaskSetManager), so let's just add a comment there.
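
   To illustrate the concern, here is a minimal, self-contained sketch (the map contents and names are hypothetical stand-ins, not the actual `ExecutorData` fields): calling `.toMap` hands the offer an immutable snapshot, so any reservation recorded afterwards on the executor's mutable state is invisible to that snapshot, and vice versa.

```scala
import scala.collection.mutable

object SnapshotDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for executorData.availableResources
    val availableResources = mutable.Map("gpu" -> Seq("0", "1"))

    // .toMap takes an immutable snapshot, as in the WorkerOffer above
    val offered = availableResources.toMap

    // A later "reservation" mutates the executor-side map...
    availableResources("gpu") = Seq("1")

    // ...but the snapshot handed to the offer is unchanged
    println(offered("gpu"))            // still the original Seq("0", "1")
    println(availableResources("gpu")) // only Seq("1") remains
  }
}
```

   This is why reservation accounting done against the snapshot cannot be reflected back into `executorData.availableResources`, and the reserved state would have to be tracked on the mutable side instead.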
