[
https://issues.apache.org/jira/browse/HADOOP-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13726541#comment-13726541
]
Kihwal Lee commented on HADOOP-9789:
------------------------------------
1. Don't we need to support a per-name-service SPN pattern for namenode RPC? If a
client is talking to multiple name services, it may need a separate pattern for
each name service.
For viewfs and HA configs, the server side uses NameNode.initializeGenericKeys()
to set up a conf, while the client side uses DFSUtil or HAUtil to extract certain
keys. To support a per-name-service SPN pattern, the client code needs to do
something equivalent to what the server side does, or obtain the value and
explicitly set this variable before creating an RPC proxy. In either case, the
key needs to be in NameNode.NAMESERVICE_SPECIFIC_KEYS or
NameNode.NAMENODE_SPECIFIC_KEYS. If there were only the HA client, we might do it
in the failover proxy implementations, but to support viewfs, a more generic
solution would be better.
If you agree, please file an HDFS jira to address this.
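The per-name-service lookup described above can be sketched roughly as follows. This is a hypothetical illustration of the key-suffixing scheme only; the key name and the plain Map are stand-ins for what NameNode.initializeGenericKeys() and DFSUtil do against a real Hadoop Configuration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch; real code would use org.apache.hadoop.conf.Configuration.
public class NameServiceKeys {
    /** Return the per-name-service value if present, else the generic one. */
    static String resolve(Map<String, String> conf, String key, String nsId) {
        String specific = conf.get(key + "." + nsId);
        return specific != null ? specific : conf.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Hypothetical key name, shown only to illustrate the suffixing scheme.
        String key = "dfs.namenode.kerberos.principal.pattern";
        conf.put(key, "nn/*@EXAMPLE.COM");          // cluster-wide default
        conf.put(key + ".ns2", "nn/*@OTHER.REALM"); // override for name service ns2

        System.out.println(resolve(conf, key, "ns1")); // falls back to the default
        System.out.println(resolve(conf, key, "ns2")); // uses the ns2 override
    }
}
```

The point is simply that either the client resolves the suffixed key itself, or the generic-keys machinery does it before the RPC proxy is created.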
2. I am sure this is the case, but I want to double-check because it is critical
for security. When the conf for the SPN contains "_HOST" and no pattern is
configured, the comparison is done against the same SPN the client would have
used pre-patch. serverAddr comes from the ConnectionId used to create the
Connection instance, and it originates from the conf, not from anything that can
be dynamically updated by external services (e.g. Connection.server). Since this
address is used for the "_HOST" substitution, I think it is safe.
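To make the safety argument concrete, here is a minimal sketch of the substitution (the real client goes through Hadoop's SecurityUtil rather than raw string replacement; the principal template and host below are illustrative):

```java
public class HostSubstitution {
    /** Replace the _HOST placeholder with the address the client connected to. */
    static String substituteHost(String principalConf, String serverHost) {
        return principalConf.replace("_HOST", serverHost);
    }

    public static void main(String[] args) {
        // With no pattern configured, the advertised principal must equal exactly
        // what the client would have constructed pre-patch from its own conf.
        String expected = substituteHost("nn/_HOST@EXAMPLE.COM", "nn1.example.com");
        System.out.println(expected); // nn/nn1.example.com@EXAMPLE.COM
    }
}
```

Because serverHost is taken from the client's own configuration-derived address, a server cannot steer the substitution toward a principal the client did not intend.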
+1 pending your confirmation that 2) is true.
> Support server advertised kerberos principals
> ---------------------------------------------
>
> Key: HADOOP-9789
> URL: https://issues.apache.org/jira/browse/HADOOP-9789
> Project: Hadoop Common
> Issue Type: New Feature
> Components: ipc, security
> Affects Versions: 2.0.0-alpha, 3.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Critical
> Attachments: HADOOP-9789.patch, HADOOP-9789.patch
>
>
> The RPC client currently constructs the kerberos principal based on a
> config value, usually with a _HOST substitution. This means the service
> principal must match the hostname the client is using to connect. This
> causes problems:
> * Prevents using HA with IP failover when the servers have distinct
> principals from the failover hostname
> * Prevents clients from being able to access a service bound to multiple
> interfaces. Only the interface that matches the server's principal may be
> used.
> The client should be able to use the SASL advertised principal (HADOOP-9698),
> with appropriate safeguards, to acquire the correct service ticket.
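One such safeguard could be validating the advertised principal against a client-configured pattern before using it. The following is a hypothetical sketch only; the glob syntax, names, and pattern value are illustrative, not the patch's actual implementation:

```java
import java.util.regex.Pattern;

public class PrincipalCheck {
    /** Match an advertised principal against a simple '*' glob pattern. */
    static boolean matches(String glob, String advertised) {
        // Quote the literal parts and let '*' match any run of characters.
        String regex = Pattern.quote(glob).replace("*", "\\E.*\\Q");
        return advertised.matches(regex);
    }

    public static void main(String[] args) {
        String pattern = "nn/*@EXAMPLE.COM"; // hypothetical client-side pattern
        System.out.println(matches(pattern, "nn/nn1.example.com@EXAMPLE.COM")); // true
        System.out.println(matches(pattern, "hdfs/rogue@OTHER.REALM"));         // false
    }
}
```

A check along these lines lets the client accept a legitimate alternate principal (IP failover, multi-homed servers) while still rejecting principals outside the administrator's expected realm and service name.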