GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4688#discussion_r25913541
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -234,9 +236,14 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val actorSyste
             properties += ((key, value))
           }
         }
    +
         // TODO (prashant) send conf instead of properties
         driverActor = actorSystem.actorOf(
           Props(new DriverActor(properties)), name = CoarseGrainedSchedulerBackend.ACTOR_NAME)
    +
    +    // If a principal and keytab have been set, use that to create new credentials for executors
    +    // periodically
    +    SparkHadoopUtil.get.scheduleLoginFromKeytab()
    --- End diff --
    
    Ah, I see why you exposed that method that way. Probably ok, but it does feel a little weird; I'd expect the caller to know that it needs to schedule this thing, and if not running on Yarn, things should blow up.
    
    (But I guess they already blow up because the command line options prevent that?)
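    
    For reference, a minimal, illustrative sketch of the pattern being discussed, assuming a no-op in the base SparkHadoopUtil that a YARN-specific subclass overrides; the class names are real, but the method bodies below are placeholders rather than the PR's actual code:
    
        // Illustrative only: the base class exposes a no-op so callers can invoke it
        // unconditionally; only the YARN-aware subclass actually schedules renewal.
        class SparkHadoopUtil {
          def scheduleLoginFromKeytab(): Unit = { /* no-op outside of YARN */ }
        }
    
        class YarnSparkHadoopUtil extends SparkHadoopUtil {
          override def scheduleLoginFromKeytab(): Unit = {
            // Periodically re-login from the configured principal/keytab so that
            // fresh credentials can be distributed to executors (details omitted).
          }
        }
    
    The alternative hinted at above would be for the caller (CoarseGrainedSchedulerBackend) to check explicitly whether a principal/keytab was configured and fail fast when not running on YARN.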

