[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905573#comment-16905573
 ] 

Eric Yang commented on HDFS-14375:
----------------------------------

{quote}I think the main issue is that the DataNode only authorizes its own realm, even if the realms have cross-realm trust configured.
To solve this issue, clientPrincipal should be checked against multiple cross-realm trusted realms in the authorize method.
{quote}
The authorize method looks into [krbInfo to extract the hostname from the service principal and find a match|https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java#L109]. If a client accesses a DataNode and passes authentication negotiation, the client's ticket cache will contain a service ticket for the DataNode's hostname. The Hadoop code does not inspect the realm part of the principal name in the authorize method; it merely validates that the client's ticket cache contains the hostname of the DataNode. One way to verify that cross-realm authentication is working is to look at the klist output and make sure that:
{code:java}
klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-d...@example.com

Valid starting       Expires              Service principal
08/12/2019 19:28:17  08/13/2019 19:28:17  krbtgt/example....@example.com
        renew until 08/19/2019 19:28:17
08/12/2019 20:37:49  08/13/2019 19:28:17  HTTP/datanode.example2....@example2.com
        renew until 08/19/2019 19:28:17
{code}
In this example, the ticket cache contains the user's own krbtgt and also a service ticket granted for a host in a different realm.
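
For illustration, here is a minimal, self-contained sketch of the realm-agnostic comparison idea behind the attached patch: parse both principals as service/hostname@REALM and compare only the service name and hostname, ignoring the realm. This is not the actual Hadoop code or the patch itself; the hostnames below are made up.
{code:java}
// Hypothetical sketch: compare two Kerberos principals of the form
// service/hostname@REALM while ignoring the realm component.
public class RealmAgnosticPrincipalMatch {

  /** Splits "service/hostname@REALM" into { service, hostname, realm }. */
  private static String[] parse(String principal) {
    String withoutRealm = principal;
    String realm = "";
    int at = principal.lastIndexOf('@');
    if (at >= 0) {
      withoutRealm = principal.substring(0, at);
      realm = principal.substring(at + 1);
    }
    int slash = withoutRealm.indexOf('/');
    String service = slash >= 0 ? withoutRealm.substring(0, slash) : withoutRealm;
    String host = slash >= 0 ? withoutRealm.substring(slash + 1) : "";
    return new String[] { service, host, realm };
  }

  /** True if service name and hostname match, regardless of realm. */
  static boolean matchesIgnoringRealm(String clientPrincipal, String expectedPrincipal) {
    String[] c = parse(clientPrincipal);
    String[] e = parse(expectedPrincipal);
    return c[0].equals(e[0]) && c[1].equalsIgnoreCase(e[1]);
  }

  public static void main(String[] args) {
    // DataNode principal issued by TEST2.COM vs. the expected principal in TEST1.COM
    System.out.println(matchesIgnoringRealm(
        "dn/datanode.example.com@TEST2.COM",
        "dn/datanode.example.com@TEST1.COM"));   // true: same service and host
    System.out.println(matchesIgnoringRealm(
        "dn/other-host.example.com@TEST2.COM",
        "dn/datanode.example.com@TEST1.COM"));   // false: hostname differs
  }
}
{code}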

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14375
>                 URL: https://issues.apache.org/jira/browse/HDFS-14375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 3.1.1
>            Reporter: Jihyun Cho
>            Assignee: Jihyun Cho
>            Priority: Major
>         Attachments: authorize.patch
>
>
> Let me first describe the environment.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>    |                                         |
> NameNode1                                 NameNode2
>    |                                         |
>    ---------- DataNodes (federated) ----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But the DataNodes could not connect to NameNode1; they failed with the error below.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test....@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test....@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}}, ignoring the {{realm}}.
> I think this is not a problem, because if the realms are different and no 
> cross-realm trust is configured, they cannot communicate with each other anyway. 
> If you are worried about this, please let me know.
> In the long run, it would be better if multiple trusted realms could be 
> configured for authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}
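>
> A rough, hypothetical sketch of the proposed behaviour (the property value format and class below are assumptions for illustration, not existing Hadoop code): accept a client principal only if its realm appears in the configured trusted-realm list.
> {code:java}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.Set;
>
> // Illustrative only: checks the realm of "service/host@REALM" against a
> // comma-separated whitelist such as "TEST1.COM,TEST2.COM".
> public class TrustedRealmCheck {
>
>   private final Set<String> trustedRealms;
>
>   TrustedRealmCheck(String trustRealmsProperty) {
>     trustedRealms = new HashSet<>(Arrays.asList(trustRealmsProperty.split("\\s*,\\s*")));
>   }
>
>   /** Extracts the realm from "service/host@REALM" and checks the whitelist. */
>   boolean isTrusted(String clientPrincipal) {
>     int at = clientPrincipal.lastIndexOf('@');
>     if (at < 0) {
>       return false;               // no realm present at all
>     }
>     return trustedRealms.contains(clientPrincipal.substring(at + 1));
>   }
>
>   public static void main(String[] args) {
>     TrustedRealmCheck check = new TrustedRealmCheck("TEST1.COM,TEST2.COM");
>     System.out.println(check.isTrusted("dn/datanode.example.com@TEST2.COM")); // true
>     System.out.println(check.isTrusted("dn/datanode.example.com@OTHER.COM")); // false
>   }
> }
> {code}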


