GitHub user harishreedharan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4688#discussion_r25547769
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -71,6 +72,16 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val actorSyste
       // Executors we have requested the cluster manager to kill that have not died yet
       private val executorsPendingToRemove = new HashSet[String]
     
    +  /**
    +   * Send new credentials to executors. This is the method that is called when the scheduled
    +   * login completes, so the new credentials can be sent to the executors.
    +   * @param credentials
    +   */
    +  def sendNewCredentialsToExecutors(credentials: SerializableBuffer): Unit = {
    +    // We don't care about the reply, so going to deadLetters is fine.
    --- End diff --
    
    No, that is not necessarily true. The initial startup happens with the credentials obtained at launch (as it does today), with the tokens set up by YARN. YARN handles the NM tokens but not the HDFS tokens, and it is the HDFS tokens that need to be replaced; that is what this patch handles. So it is fine to start running, since the initial tokens will last a while.
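    To make the flow concrete: a scheduled relogin obtains fresh tokens and pushes them to executors without waiting for a reply. The sketch below is a minimal, self-contained illustration of that fire-and-forget pattern in plain Scala; `obtainFreshTokens` and `pushToExecutors` are hypothetical stand-ins, not Spark or Hadoop APIs, and the 100 ms delay only keeps the sketch fast (a real interval would track token lifetime).

    ```scala
    import java.nio.ByteBuffer
    import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}

    object CredentialRefreshSketch {
      // Last credential buffer handed to executors (for observation only).
      @volatile var lastSent: Option[ByteBuffer] = None
      private val delivered = new CountDownLatch(1)

      // Hypothetical: in reality this would be fresh HDFS delegation tokens
      // produced by the scheduled keytab relogin.
      def obtainFreshTokens(): ByteBuffer =
        ByteBuffer.wrap("fresh-hdfs-delegation-tokens".getBytes("UTF-8"))

      // Fire-and-forget push: no reply is expected from the executors,
      // mirroring the "going to deadLetters is fine" comment in the diff.
      def pushToExecutors(creds: ByteBuffer): Unit = {
        lastSent = Some(creds)
        delivered.countDown()
      }

      def main(args: Array[String]): Unit = {
        val scheduler = Executors.newSingleThreadScheduledExecutor()
        // Schedule one refresh; the real code would reschedule based on
        // the token renewal interval.
        scheduler.schedule(new Runnable {
          def run(): Unit = pushToExecutors(obtainFreshTokens())
        }, 100, TimeUnit.MILLISECONDS)
        delivered.await(5, TimeUnit.SECONDS)
        scheduler.shutdown()
      }
    }
    ```

    The point of the one-way push is that executors simply swap in the new tokens on receipt; the driver never blocks on acknowledgements.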


