Github user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21669#discussion_r221823354
  
    --- Diff: docs/security.md ---
    @@ -722,6 +722,67 @@ with encryption, at least.
     The Kerberos login will be periodically renewed using the provided credentials, and new delegation
     tokens for supported services will be created.
     
    +## Secure Interaction with Kubernetes
    +
    +When talking to Hadoop-based services secured by Kerberos, Spark needs to obtain delegation tokens
    +so that non-local processes can authenticate. In Kubernetes, these delegation tokens are stored in
    +Secrets that are shared by the Driver and its Executors. As such, there are three ways of
    +submitting a Kerberos job: 
    +
    +In all cases you must define the environment variable: `HADOOP_CONF_DIR`.
    +It is also important to note that the KDC needs to be reachable from inside the containers if the
    +user uses a local krb5 file. 
    +
    +If a user wishes to use a remote HADOOP_CONF directory that contains the Hadoop configuration files, or 
    +a remote krb5 file, this can be achieved by mounting a pre-defined ConfigMap as a volume at the
    +desired location, which you can then point to via the appropriate configs. This method is useful for
    +users who do not wish to rebuild their Docker images, but would rather point to a ConfigMap that they
    +can modify. This strategy is supported
    --- End diff --
    
    Why not allow users to specify the ConfigMap storing the `krb5.conf` like 
what you do with the DT secret?
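
    For context, the suggestion could look roughly like the sketch below at submission time: the krb5 ConfigMap is named directly, mirroring how the delegation-token Secret is specified. The config key names and resource names here are assumptions for illustration, not a confirmed API.

    ```shell
    # Hypothetical sketch: reference a user-managed ConfigMap holding krb5.conf the
    # same way the delegation-token Secret is referenced. All config keys and
    # resource names below are assumed for illustration.
    spark-submit \
      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
      --deploy-mode cluster \
      --name spark-hdfs-kerberos \
      --conf spark.kubernetes.kerberos.krb5.configMapName=krb5-conf-map \
      --conf spark.kubernetes.kerberos.tokenSecret.name=spark-dt-secret \
      --conf spark.kubernetes.kerberos.tokenSecret.itemKey=hadoop-token \
      local:///opt/spark/examples/jars/spark-examples.jar
    ```

    With this shape, a user can update `krb5.conf` by editing the ConfigMap alone, without rebuilding the Docker image or remounting volumes manually.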


---
