[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs
[ https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936381#comment-16936381 ]

Akira Ajisaka edited comment on HDFS-14845 at 9/24/19 4:29 AM:
---
Thanks [~Prabhu Joseph] for the patch and thanks [~eyang] for the review. I tested the 004 patch in a dev cluster and it worked as expected. Would you add the deprecated properties to DeprecatedProperties.md?

was (Author: ajisakaa):
Thanks [~Prabhu Joseph] for the patch and thanks [~eyang] for the review. I tested the 004 patch and it worked as expected. Would you add the deprecated properties to DeprecatedProperties.md?

> Request is a replay (34) error in httpfs
>
>                 Key: HDFS-14845
>                 URL: https://issues.apache.org/jira/browse/HDFS-14845
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: httpfs
>    Affects Versions: 3.3.0
>       Environment: Kerberos and ZKDelegationTokenSecretManager enabled in HttpFS
>          Reporter: Akira Ajisaka
>          Assignee: Prabhu Joseph
>          Priority: Critical
>      Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, HDFS-14845-003.patch, HDFS-14845-004.patch
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
>
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
>
> Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs
[ https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16934589#comment-16934589 ]

Eric Yang edited comment on HDFS-14845 at 9/20/19 5:17 PM:
---
[~Prabhu Joseph] Thank you for the patch. I tested with these two sets of configuration, and both work as long as I define hadoop.http.authentication.signature.secret.file.

{code}
hadoop.http.authentication.type                  kerberos
hadoop.http.authentication.kerberos.principal    HTTP/host1.example@example.com
hadoop.http.authentication.kerberos.keytab       /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.signature.secret.file ${httpfs.config.dir}/httpfs-signature.secret
hadoop.http.filter.initializers                  org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer,org.apache.hadoop.security.HttpCrossOriginFilterInitializer
hadoop.authentication.type                       kerberos
httpfs.hadoop.authentication.type                kerberos
httpfs.hadoop.authentication.kerberos.principal  nn/host1.example@example.com
httpfs.hadoop.authentication.kerberos.keytab     /etc/security/keytabs/hdfs.service.keytab
{code}

The backward-compatible config also works:

{code}
hadoop.http.authentication.type                  kerberos
httpfs.authentication.signature.secret.file      ${httpfs.config.dir}/httpfs-signature.secret
hadoop.http.filter.initializers                  org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer,org.apache.hadoop.security.HttpCrossOriginFilterInitializer
httpfs.authentication.type                       kerberos
httpfs.hadoop.authentication.type                kerberos
httpfs.authentication.kerberos.principal         HTTP/host-1.example@example.com
httpfs.authentication.kerberos.keytab            /etc/security/keytabs/spnego.service.keytab
httpfs.hadoop.authentication.kerberos.principal  nn/host-1.example@example.com
httpfs.hadoop.authentication.kerberos.keytab     /etc/security/keytabs/hdfs.service.keytab
{code}

When httpfs.authentication.signature.secret.file is undefined in httpfs-site.xml, the httpfs server doesn't start:

{code}
Exception in thread "main" java.io.IOException: Unable to initialize WebAppContext
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1198)
	at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start(HttpFSServerWebServer.java:154)
	at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main(HttpFSServerWebServer.java:187)
Caused by: java.lang.RuntimeException: Undefined property: signature.secret.file
	at org.apache.hadoop.fs.http.server.HttpFSAuthenticationFilter.getConfiguration(HttpFSAuthenticationFilter.java:95)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:160)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:180)
	at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:881)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
	at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
	at org.eclipse.jetty.server.Server.start(Server.java:427)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
	at org.eclipse.jetty.server.Server.doStart(Server
{code}
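The "Undefined property: signature.secret.file" failure above points at the one property that must be defined before the server will start. A minimal httpfs-site.xml sketch, using the property name and ${httpfs.config.dir} placeholder from the configs above (the file path is only an example, not a value from this thread):

{code}
<configuration>
  <!-- Required: file holding the signature secret for the
       HttpFS authentication filter. The referenced file must
       exist and be readable by the HttpFS process user. -->
  <property>
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>${httpfs.config.dir}/httpfs-signature.secret</value>
  </property>
</configuration>
{code}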
[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs
[ https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929120#comment-16929120 ]

Prabhu Joseph edited comment on HDFS-14845 at 9/13/19 11:25 AM:
---
Thanks for sharing the details. That helped to repro the issue.

The issue happens because {{AuthenticationFilter}} (the Kerberos handler) is called twice:

1. HttpFSAuthenticationFilter (httpfs.authentication.type=kerberos)
2. AuthenticationFilterInitializer added in hadoop.http.filter.initializers (hadoop.http.authentication.type=kerberos)

The default {{HttpFSAuthenticationFilter}} is itself a combination of Kerberos + delegation token + proxy-user support, and hence {{AuthenticationFilterInitializer}} and {{ProxyUserAuthenticationFilterInitializer}} are not required for {{HttpFSServerWebServer}}. The workaround is to remove them from hadoop.http.filter.initializers in core-site.xml. I have prepared a patch to do the same if this is configured. cc [~eyang].

was (Author: prabhu joseph):
Thanks for sharing the details. That helped to repro the issue.

The issue happens because {{AuthenticationFilter}} (the Kerberos handler) is called twice:

1. HttpFSAuthenticationFilter (httpfs.authentication.type=kerberos)
2. AuthenticationFilterInitializer added in hadoop.http.filter.initializers (hadoop.http.authentication.type=kerberos)

The default {{HttpFSAuthenticationFilter}} is itself a combination of Kerberos + delegation token + proxy-user support, and hence {{AuthenticationFilterInitializer}} and {{ProxyUserAuthenticationFilterInitializer}} are not required for {{HttpFSServerWebServer}}. The workaround is to remove them from hadoop.http.filter.initializers in core-site.xml. I have prepared a patch to do the same if this is configured.
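The workaround described above can be sketched as the following core-site.xml fragment on the HttpFS host. The property name and the remaining HttpCrossOriginFilterInitializer value come from the configs quoted in this thread; whether any other initializers belong in the value depends on the deployment, so treat this as an illustrative sketch:

{code}
<!-- Drop AuthenticationFilterInitializer and
     ProxyUserAuthenticationFilterInitializer so that
     HttpFSAuthenticationFilter is the only Kerberos handler
     running in HttpFS (avoids the double SPNEGO exchange). -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
</property>
{code}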
[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs
[ https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928304#comment-16928304 ]

Akira Ajisaka edited comment on HDFS-14845 at 9/12/19 7:21 AM:
---
Our settings related to AuthFilter are as follows:

* hadoop.http.authentication.type: org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
* httpfs.authentication.zk-dt-secret-manager.enable: true
* httpfs.authentication.type: kerberos

After HADOOP-16314, JWTRedirectAuthenticationHandler is enabled for httpfs in addition to KerberosDelegationTokenAuthenticationHandler, which is set by HttpFSAuthenticationFilter. For now our workaround is to set "hadoop.http.authentication.type" to "simple" to discard the common filter (JWTRedirectAuthenticationHandler) in httpfs.

was (Author: ajisakaa):
Our settings related to AuthFilter are as follows:

* hadoop.http.authentication.type: org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler
* httpfs.authentication.zk-dt-secret-manager.enable: true
* httpfs.authentication.type: kerberos

After HADOOP-16366, JWTRedirectAuthenticationHandler is enabled for httpfs in addition to KerberosDelegationTokenAuthenticationHandler, which is set by HttpFSAuthenticationFilter. For now our workaround is to set "hadoop.http.authentication.type" to "simple" to discard the common filter (JWTRedirectAuthenticationHandler) in httpfs.
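The reporter's workaround of setting "hadoop.http.authentication.type" to "simple" so that only HttpFSAuthenticationFilter performs Kerberos can be sketched as the following fragment. The property names and values come from this thread; this is a hedged illustration, not the reporter's verbatim configuration:

{code}
<!-- core-site.xml on the HttpFS host: downgrade the common
     HTTP auth filter so it no longer runs the JWT/Kerberos
     handler in front of HttpFS -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
</property>

<!-- httpfs-site.xml: HttpFSAuthenticationFilter keeps doing
     Kerberos (plus delegation tokens) on its own -->
<property>
  <name>httpfs.authentication.type</name>
  <value>kerberos</value>
</property>
{code}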