Github user ifilonenko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21669#discussion_r200401350
  
    --- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala
 ---
    @@ -81,4 +83,35 @@ private[spark] object Constants {
       val KUBERNETES_MASTER_INTERNAL_URL = "https://kubernetes.default.svc";
       val DRIVER_CONTAINER_NAME = "spark-kubernetes-driver"
       val MEMORY_OVERHEAD_MIN_MIB = 384L
    +
    +  // Hadoop Configuration
    +  val HADOOP_FILE_VOLUME = "hadoop-properties"
    +  val HADOOP_CONF_DIR_PATH = "/etc/hadoop/conf"
    +  val ENV_HADOOP_CONF_DIR = "HADOOP_CONF_DIR"
    +  val HADOOP_CONF_DIR_LOC = "spark.kubernetes.hadoop.conf.dir"
    +  val HADOOP_CONFIG_MAP_SPARK_CONF_NAME =
    +    "spark.kubernetes.hadoop.executor.hadoopConfigMapName"
    +
    +  // Kerberos Configuration
    +  val KERBEROS_DELEGEGATION_TOKEN_SECRET_NAME =
    +    "spark.kubernetes.kerberos.delegation-token-secret-name"
    +  val KERBEROS_KEYTAB_SECRET_NAME =
    +    "spark.kubernetes.kerberos.key-tab-secret-name"
    +  val KERBEROS_KEYTAB_SECRET_KEY =
    +    "spark.kubernetes.kerberos.key-tab-secret-key"
    +  val KERBEROS_SPARK_USER_NAME =
    +    "spark.kubernetes.kerberos.spark-user-name"
    +  val KERBEROS_SECRET_LABEL_PREFIX =
    +    "hadoop-tokens"
    +  val SPARK_HADOOP_PREFIX = "spark.hadoop."
    +  val HADOOP_SECURITY_AUTHENTICATION =
    +    SPARK_HADOOP_PREFIX + "hadoop.security.authentication"
    +
    +  // Kerberos Token-Refresh Server
    +  val KERBEROS_REFRESH_LABEL_KEY = "refresh-hadoop-tokens"
    --- End diff --
    
    Because our original architecture assumed the renewal service would run 
as a separate micro-service pod, that option could be handled by the renewal 
service itself. We used this label so the renewal service could detect that 
this specific secret was to be renewed. But if we wished to integrate with 
some existing renewal service instead, we might be able to just grab an 
Array[Byte] from a DTManager that may exist in their external Hadoop 
clusters, and store it in a secret. Thank you for this note in the design doc.
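
    For concreteness, a rough sketch of that idea (this is not the PR's 
actual implementation; the object and method names are hypothetical, and a 
real version would use a Kubernetes client rather than emitting YAML): token 
bytes fetched from an external DTManager get base64-encoded into a Secret 
manifest that carries the `refresh-hadoop-tokens` label, so a watching 
renewal service can pick it up.

    ```scala
    import java.util.Base64

    // Hypothetical sketch only. Packages delegation-token bytes (e.g. from an
    // external DTManager) into a Kubernetes Secret manifest labeled so that a
    // token-refresh service can find it.
    object DelegationTokenSecretSketch {
      // Label key quoted from the diff above (KERBEROS_REFRESH_LABEL_KEY).
      val KerberosRefreshLabelKey = "refresh-hadoop-tokens"

      def secretManifest(secretName: String, tokenBytes: Array[Byte]): String = {
        // Secret `data` values must be base64-encoded per the Kubernetes API.
        val encoded = Base64.getEncoder.encodeToString(tokenBytes)
        s"""apiVersion: v1
           |kind: Secret
           |metadata:
           |  name: $secretName
           |  labels:
           |    $KerberosRefreshLabelKey: "yes"
           |data:
           |  hadoop-tokens: $encoded
           |""".stripMargin
      }
    }
    ```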


---
