[ https://issues.apache.org/jira/browse/SPARK-27220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808011#comment-16808011 ]

Jacek Lewandowski commented on SPARK-27220:
-------------------------------------------

Thank you [~irashid]. 

It is not actually a custom fork, because SparkContext allows you to plug in a 
custom cluster manager without modifying Spark itself. I'll try to make the fix 
as small as possible.
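
For illustration, a minimal sketch of that plug-in point, assuming the ServiceLoader-based {{ExternalClusterManager}} SPI; {{CustomClusterManager}} and the {{custom://}} URL scheme are made-up names, not part of Spark:

{code:scala}
// Sketch only: ExternalClusterManager is private[spark], so a plug-in
// implementation has to live under the org.apache.spark package. Spark
// discovers it via java.util.ServiceLoader, so the class name also goes
// into META-INF/services/org.apache.spark.scheduler.ExternalClusterManager.
package org.apache.spark.scheduler

import org.apache.spark.SparkContext

private[spark] class CustomClusterManager extends ExternalClusterManager {

  // Claim master URLs of the (hypothetical) form "custom://..."
  override def canCreate(masterURL: String): Boolean =
    masterURL.startsWith("custom://")

  override def createTaskScheduler(
      sc: SparkContext,
      masterURL: String): TaskScheduler =
    new TaskSchedulerImpl(sc)

  // Reuse CoarseGrainedSchedulerBackend directly -- which is exactly why
  // Yarn-specific leftovers in that class matter for external managers.
  override def createSchedulerBackend(
      sc: SparkContext,
      masterURL: String,
      scheduler: TaskScheduler): SchedulerBackend =
    new CoarseGrainedSchedulerBackend(
      scheduler.asInstanceOf[TaskSchedulerImpl], sc.env.rpcEnv)

  override def initialize(
      scheduler: TaskScheduler,
      backend: SchedulerBackend): Unit =
    scheduler.asInstanceOf[TaskSchedulerImpl].initialize(backend)
}
{code}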


> Remove Yarn specific leftover from CoarseGrainedSchedulerBackend
> ----------------------------------------------------------------
>
>                 Key: SPARK-27220
>                 URL: https://issues.apache.org/jira/browse/SPARK-27220
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core, YARN
>    Affects Versions: 2.0.2, 2.1.3, 2.2.3, 2.3.3, 2.4.0
>            Reporter: Jacek Lewandowski
>            Priority: Minor
>
> {{CoarseGrainedSchedulerBackend}} has the following field:
> {code:scala}
>   // The num of current max ExecutorId used to re-register appMaster
>   @volatile protected var currentExecutorIdCounter = 0
> {code}
> which is then updated:
> {code:scala}
>       case RegisterExecutor(executorId, executorRef, hostname, cores, 
> logUrls) =>
> ...
>           // This must be synchronized because variables mutated
>           // in this block are read when requesting executors
>           CoarseGrainedSchedulerBackend.this.synchronized {
>             executorDataMap.put(executorId, data)
>             if (currentExecutorIdCounter < executorId.toInt) {
>               currentExecutorIdCounter = executorId.toInt
>             }
> ...
> {code}
> However it is never really used in {{CoarseGrainedSchedulerBackend}}. Its 
> only usage is in Yarn-specific code. It should be moved to Yarn then because 
> {{executorId}} is a {{String}} and there are really no guarantees that it is 
> always an integer. It was introduced in SPARK-12864


