Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19468#discussion_r146425100
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala ---
    @@ -0,0 +1,103 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.spark.deploy.k8s
    +
    +import java.io.File
    +
    +import com.google.common.base.Charsets
    +import com.google.common.io.Files
    +import io.fabric8.kubernetes.client.{Config, ConfigBuilder, DefaultKubernetesClient, KubernetesClient}
    +import io.fabric8.kubernetes.client.utils.HttpClientUtils
    +import okhttp3.Dispatcher
    +
    +import org.apache.spark.SparkConf
    +import org.apache.spark.deploy.k8s.config._
    +import org.apache.spark.util.ThreadUtils
    +
    +/**
    + * Spark-opinionated builder for Kubernetes clients. It uses a prefix plus common suffixes to
    + * parse configuration keys, similar to the manner in which Spark's SecurityManager parses SSL
    + * options for different components.
    + */
    +private[spark] object SparkKubernetesClientFactory {
    +
    +  def createKubernetesClient(
    +      master: String,
    +      namespace: Option[String],
    +      kubernetesAuthConfPrefix: String,
    +      sparkConf: SparkConf,
    +      maybeServiceAccountToken: Option[File],
    +      maybeServiceAccountCaCert: Option[File]): KubernetesClient = {
    +    val oauthTokenFileConf = s"$kubernetesAuthConfPrefix.$OAUTH_TOKEN_FILE_CONF_SUFFIX"
    +    val oauthTokenConf = s"$kubernetesAuthConfPrefix.$OAUTH_TOKEN_CONF_SUFFIX"
    +    val oauthTokenFile = sparkConf.getOption(oauthTokenFileConf)
    --- End diff --
    
    This lacks context from the `spark-submit` implementation, which is not in this PR.
    
    We intend to have two different sets of authentication options for the Kubernetes API. The first is the set of credentials `spark-submit` uses to create the driver pod and all the other Kubernetes resources the application requires outside of executor pods. The second is the set of credentials the driver uses to create executor pods. These options will share suffixes in their configuration keys but have different prefixes.
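    
    To make the prefix/suffix scheme concrete, here is a minimal sketch of how two callers could derive distinct configuration keys from the same suffix. The prefixes and the suffix value shown are illustrative placeholders, not necessarily the names this PR will settle on:
    
    ```scala
    // Illustrative only: the real suffix constants are defined in this PR's config object,
    // and the real prefixes are supplied by the spark-submit and driver call sites.
    val OAUTH_TOKEN_FILE_CONF_SUFFIX = "oauthTokenFile"
    
    // spark-submit resolves its credentials under one prefix...
    val submissionPrefix = "spark.kubernetes.authenticate.submission"
    // ...while the driver resolves its credentials under another.
    val driverPrefix = "spark.kubernetes.authenticate.driver"
    
    val submissionTokenFileKey = s"$submissionPrefix.$OAUTH_TOKEN_FILE_CONF_SUFFIX"
    // "spark.kubernetes.authenticate.submission.oauthTokenFile"
    val driverTokenFileKey = s"$driverPrefix.$OAUTH_TOKEN_FILE_CONF_SUFFIX"
    // "spark.kubernetes.authenticate.driver.oauthTokenFile"
    ```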
    
    The reasoning for two sets of credentials is twofold:
    
    - The driver needs strictly fewer privileges than `spark-submit`: the driver only creates and deletes pods, while `spark-submit` needs to create pods and other Kubernetes resources. Two sets of credentials allow the driver to be granted an appropriately limited scope of API access.
    - Part of the credentials includes TLS certificates for accessing the Kubernetes API over HTTPS. A common environment has the Kubernetes API server reachable from outside the cluster through a proxy, while the driver accesses the API server from inside the cluster. The front door to the API server typically requires a different certificate than the one presented when accessing the API server internally.
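    
    As a hypothetical illustration of how this would play out against `createKubernetesClient` above (the argument values are made up, and the real call sites live in the spark-submit and driver code rather than in this diff):
    
    ```scala
    import java.io.File
    import org.apache.spark.SparkConf
    
    val sparkConf = new SparkConf()
    
    // Outside the cluster: spark-submit reaches the API server through its front door,
    // resolving credentials under the submission-scoped prefix.
    val submissionClient = SparkKubernetesClientFactory.createKubernetesClient(
      master = "https://k8s-apiserver.example.com:6443",
      namespace = Some("spark"),
      kubernetesAuthConfPrefix = "spark.kubernetes.authenticate.submission",
      sparkConf = sparkConf,
      maybeServiceAccountToken = None,
      maybeServiceAccountCaCert = None)
    
    // Inside the cluster: the driver reaches the API server directly, resolving credentials
    // under the driver-scoped prefix and supplying the pod's mounted service account files.
    val driverClient = SparkKubernetesClientFactory.createKubernetesClient(
      master = "https://kubernetes.default.svc",
      namespace = Some("spark"),
      kubernetesAuthConfPrefix = "spark.kubernetes.authenticate.driver",
      sparkConf = sparkConf,
      maybeServiceAccountToken =
        Some(new File("/var/run/secrets/kubernetes.io/serviceaccount/token")),
      maybeServiceAccountCaCert =
        Some(new File("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")))
    ```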

