[ https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873708#comment-16873708 ]
Eric Yang edited comment on HDFS-14609 at 6/26/19 11:40 PM:
------------------------------------------------------------

[~crh]

h3. Answer for TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal issue:

In my local test, the test case continues to fail even when HADOOP-16314 and HADOOP-16354 are reverted. An interesting discovery in [AbstractService.java|https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/AbstractService.java#L170]: it catches IOException, so the IOException the test case expects never reaches the caller. A second issue with the test is that it assumes dfs.web.authentication.kerberos.principal is the principal used for the HDFS SPNEGO endpoint. However, Hadoop's [KerberosAuthenticationHandler.java|https://github.com/apache/hadoop/blob/d43af8b3db4743b4b240751b6f29de6c20cfd6e5/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java#L133] uses the hadoop.http.authentication.kerberos.principal configuration to obtain the SPNEGO principal. Therefore, no bad configuration was actually detected.

This is a case where too many configuration keys represent the same thing, and the code may not always work as expected. I am in favor of using hadoop.http.authentication.kerberos.principal to reduce redundant configuration keys, as the code is currently written. Here is the unit test log indicating that the HTTP principal is used:

{code:java}
2019-06-26 19:25:25,933 [main] INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1620)) - Starting web server as: HTTP/localh...@example.com
2019-06-26 19:25:26,215 [main] INFO server.KerberosAuthenticationHandler (KerberosAuthenticationHandler.java:init(164)) - Using keytab /home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab, for principal HTTP/localh...@example.com
{code}
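To make the first point concrete, here is a minimal, self-contained Java sketch (toy classes and names of my own, not the actual Hadoop code) of the pattern described above: when a service's init() catches IOException internally and only records it, a test asserting that init() throws IOException can never pass.

```java
import java.io.IOException;

// Toy stand-in (NOT the real AbstractService): init() swallows IOException.
class ToyService {
    private Throwable failureCause;

    public void init() {
        try {
            serviceInit();
        } catch (IOException e) {
            // The exception is recorded, not rethrown, so a test doing
            // assertThrows(IOException.class, service::init) can never pass.
            failureCause = e;
        }
    }

    // Simulates init work that fails, e.g. a bad SPNEGO principal.
    protected void serviceInit() throws IOException {
        throw new IOException("missing SPNEGO principal");
    }

    public Throwable getFailureCause() {
        return failureCause;
    }
}

public class Main {
    public static void main(String[] args) {
        ToyService service = new ToyService();
        service.init(); // returns normally despite the internal IOException
        System.out.println("init() completed; recorded cause: "
                + service.getFailureCause());
    }
}
```

A test that wants to catch this kind of misconfiguration would have to inspect the recorded failure cause (or the service state) rather than expect the exception to propagate.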
h3. Answer for TestRouterHttpDelegationToken issue:

The test's NoAuthFilter extends AuthenticationFilter, which does not issue delegation tokens; DelegationTokenAuthenticationFilter is the filter that issues them. Hence, the test case failed to find a delegation token. I do not know the reason for writing the test case like this; it seems counterintuitive to test a security feature on a non-secured server. It looks like RouterHttpServer extends HttpServer2, so all the core initialization of security configuration is automatically inherited from Hadoop. You may need to pay attention to the filter initializer to ensure the security filter is initialized with the one that is expected (configured), and not hard coded into RouterHttpServer. RouterHttpServer and the two test cases need to be refined to make sense.

> RBF: Security should use common AuthenticationFilter
> ----------------------------------------------------
>
>                 Key: HDFS-14609
>                 URL: https://issues.apache.org/jira/browse/HDFS-14609
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: CR Hota
>            Assignee: CR Hota
>            Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept
> it compatible with the way the namenode works. However, with HADOOP-16314 and
> HADOOP-16354 in trunk, the auth filters seem to have changed, causing tests to
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)