GitHub user ArtRand commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19272#discussion_r149797247
  
    --- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
    @@ -213,6 +216,14 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
          sc.conf.getOption("spark.mesos.driver.frameworkId").map(_ + suffix)
        )
    
    +    // check that the credentials are defined, even though it's likely that auth
    +    // would have failed already if you've made it this far, then start the token renewer
    +    if (hadoopDelegationTokens.isDefined) {
    --- End diff ---
    
    Check out the patch now. `hadoopDelegationTokens` is now a function value that references `initializeHadoopDelegationTokens` (renamed to `fetchHadoopDelegationTokens`) by name:
    ```scala
      private val hadoopDelegationTokens: () => Option[Array[Byte]] = fetchHadoopDelegationTokens
    ```
    This has the effect of generating the first set of delegation tokens only once the first `RetrieveSparkAppConfig` message is received. By that point everything has been initialized, because the renewer (renamed `MesosHadoopDelegationTokenManager`) is evaluated lazily with the correct `driverEndpoint`.
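    
    For illustration, here's a minimal, self-contained sketch of that deferral pattern (the names `DeferredEvalSketch`, `fetchTokens`, and `renewer` are hypothetical stand-ins, not the actual Spark members): assigning a method to a `() => ...` typed val stores a function value without invoking the method, and a `lazy val` body only runs on first access:
    ```scala
    object DeferredEvalSketch {
      // Hypothetical stand-in for fetchHadoopDelegationTokens.
      def fetchTokens(): Option[Array[Byte]] = {
        println("fetching tokens...")
        Some(Array[Byte](1, 2, 3))
      }
    
      // Eta-expansion: stores a function value; fetchTokens() is NOT run here.
      private val tokens: () => Option[Array[Byte]] = fetchTokens
    
      // Hypothetical stand-in for the token manager: constructed on first
      // access, so it can safely capture state that is initialized late.
      lazy val renewer: String = {
        println("constructing renewer...")
        "renewer"
      }
    
      def main(args: Array[String]): Unit = {
        println("nothing fetched or constructed yet")
        println(tokens().map(_.length)) // first call: triggers the fetch
        println(renewer)                // first access: builds the renewer
      }
    }
    ```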
    
    That said, it's a bit confusing just to avoid an extra conditional. WDYT?

