This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new b41ea9162f4c [SPARK-45549][CORE] Remove unused `numExistingExecutors` in `CoarseGrainedSchedulerBackend`
b41ea9162f4c is described below

commit b41ea9162f4c8fbc4d04d28d6ab5cc0342b88cb0
Author: huangxiaoping <1754789...@qq.com>
AuthorDate: Wed Oct 18 08:49:46 2023 -0700

    [SPARK-45549][CORE] Remove unused `numExistingExecutors` in `CoarseGrainedSchedulerBackend`
    
    ### What changes were proposed in this pull request?
    
    Remove the unused private method `numExistingExecutors` from `CoarseGrainedSchedulerBackend`.
    
    ### Why are the changes needed?
    Removing unused code keeps the Spark codebase cleaner.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    No tests are needed; this change only removes unused code.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No
    
    Closes #43383 from huangxiaopingRD/SPARK-45549.
    
    Authored-by: huangxiaoping <1754789...@qq.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala      | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala b/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
index b55dfb39d445..c770e5c9950a 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
@@ -731,11 +731,6 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
     false
   }
 
-  /**
-   * Return the number of executors currently registered with this backend.
-   */
-  private def numExistingExecutors: Int = synchronized { executorDataMap.size }
-
   override def getExecutorIds(): Seq[String] = synchronized {
     executorDataMap.keySet.toSeq
   }
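
For readers who still need that figure after this removal, the executor count remains derivable from the retained public accessor `getExecutorIds()` shown above, or from application code via `SparkContext.statusTracker`. A minimal sketch in Scala (the helper name `approximateExecutorCount` and the variable `sc` are illustrative, not part of this change):

    import org.apache.spark.SparkContext

    // Illustrative helper, not part of this commit: counts the executors
    // currently registered with the application. getExecutorInfos also
    // reports the driver, so subtract one for an executor-only count.
    def approximateExecutorCount(sc: SparkContext): Int =
      sc.statusTracker.getExecutorInfos.length - 1

    // Inside CoarseGrainedSchedulerBackend itself, the retained accessor
    // yields the same number: getExecutorIds().size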


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
