[ 
https://issues.apache.org/jira/browse/HDDS-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16869763#comment-16869763
 ] 

Eric Yang commented on HDDS-1567:
---------------------------------

{quote}Give me more examples, please. krb5.conf and jaas.config can be 
configured in a platform-dependent way (for Kubernetes with a ConfigMap, for 
on-prem by creating the files)
{quote}
Sorry, I don't understand the ask here. I did not say anything about 
platform-dependent configuration; I only mentioned site-dependent 
configuration. For example, krb5.conf is usually managed as part of the 
infrastructure via FreeIPA. It could get really complicated really fast if we 
try to manage our own copy. Here is the krb5.conf from my test environment:
{code:java}
# Other applications require this directory to perform krb5 configuration.
includedir /etc/krb5.conf.d/

[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  EXAMPLE.COM = {
    admin_server = eyang-1
    kdc = eyang-1
    auth_to_local = RULE:[3:$3](b)/s/^.*$/guest/
  }

  EXAMPLE2.COM = {
    admin_server = eyang-3
    kdc = eyang-3
  }
{code}
As you can see, multiple realms are defined in krb5.conf, and this file is 
usually auto-generated by infrastructure tools. Mounting krb5.conf from the 
host is usually the better way to handle this, to avoid oversimplifying the OS 
configuration. Hadoop took a shortcut by parsing auth_to_local rules from 
Hadoop's own configuration rather than from krb5.conf. This creates extra work 
for system admins, who must ensure that Hadoop's auth_to_local rules and the 
system-level krb5.conf stay aligned.
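To illustrate the duplication: the same rule ends up expressed twice, once in 
krb5.conf (above) and once in Hadoop's core-site.xml. The property name is 
real; the value here simply mirrors the test krb5.conf as a sketch of what an 
admin has to keep in sync:
{code:xml}
<!-- core-site.xml: must be kept in sync manually with the
     auth_to_local rules in /etc/krb5.conf -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[3:$3](b)/s/^.*$/guest/
    DEFAULT
  </value>
</property>
{code}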

Here is a JAAS configuration, specifically for YARN to access the ZooKeeper 
service:
{code:java}
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/eyang-2.localdom...@example.com";
};
com.sun.security.jgss.krb5.initiate {
  com.sun.security.auth.module.Krb5LoginModule required
  renewTGT=false
  doNotPrompt=true
  useKeyTab=true
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/eyang-2.localdom...@example.com"
  storeKey=true
  useTicketCache=false;
};
{code}
There are common properties, like the principal and the location of the 
keytab, which can be reused from ozone-site.xml. It is probably cheaper to 
create an extension in envtoconf.py to manage the JAAS configuration. 
Something like krb5.conf is better left alone, without modification.
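As a rough sketch of such an extension, the function below renders a JAAS 
section from environment variables. The env-var naming convention and function 
names here are hypothetical, not the actual envtoconf.py interface:
{code:python}
# Hypothetical envtoconf.py-style helper: variables such as
# JAAS_Client_PRINCIPAL / JAAS_Client_KEYTAB become a "Client" JAAS section.
JAAS_TEMPLATE = """{section} {{
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="{keytab}"
  principal="{principal}";
}};
"""

def render_jaas(env, prefix="JAAS_"):
    """Render JAAS sections from a dict of environment variables."""
    sections = {}
    for key, value in env.items():
        if not key.startswith(prefix):
            continue
        # "JAAS_Client_PRINCIPAL" -> section "Client", attribute "principal"
        _, section, attr = key.split("_", 2)
        sections.setdefault(section, {})[attr.lower()] = value
    return "\n".join(
        JAAS_TEMPLATE.format(section=name,
                             keytab=attrs["keytab"],
                             principal=attrs["principal"])
        for name, attrs in sorted(sections.items()))
{code}
The generated file could then be written out at container startup, while 
krb5.conf stays mounted from the host unchanged.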

{quote}
I am fine to handle all the other files (krb5.conf, jaas.config, spark.conf) 
with envtoconf.py, and in this case the configuration of these files should be 
handled manually in on-prem.{quote}

Sounds good.

> Define a set of environment variables to configure Ozone docker image
> ---------------------------------------------------------------------
>
>                 Key: HDDS-1567
>                 URL: https://issues.apache.org/jira/browse/HDDS-1567
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Priority: Major
>
> For a developer trying to set up the docker image by hand for testing purposes, 
> it would be nice to predefine a set of environment variables that can be passed 
> to the Ozone docker image to configure the minimum set of settings needed to 
> start Ozone containers.  There is a python script that converts environment 
> variables to config, but the documentation does not show what settings can be 
> passed to configure the system.  This task would be a good starting point to 
> document the available configuration knobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
