[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931795#comment-16931795
 ] 

Eric Yang commented on HDFS-14845:
----------------------------------

[~Prabhu Joseph] Thank you for patch 002.

{quote}But most of the test cases related to HttpFSServerWebServer (e.g. 
TestHttpFSServer) require more changes, as they do not use HttpServer2 and so 
the filter initializers are not called; instead they use a test Jetty server 
with HttpFSServerWebApp, which fails because the filter won't have any configs.

Please let me know if we can handle this in a separate improvement Jira.{quote}

All HttpFS unit tests are passing on my system.  Which tests require a separate 
ticket?

{quote}Have changed the HttpFSAuthenticationFilter$getConfiguration to honor 
the hadoop.http.authentication configs which will be overridden by 
httpfs.authentication configs.{quote}

Patch 002 works with this configuration:

{code}
    <property>
      <name>hadoop.http.authentication.type</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>hadoop.http.authentication.kerberos.principal</name>
      <value>HTTP/host-1.example....@example.com</value>
    </property>

    <property>
      <name>hadoop.http.authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>hadoop.http.filter.initializers</name>
      
<value>org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer,org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
    </property>

    <property>
      <name>httpfs.authentication.type</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>hadoop.authentication.type</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>httpfs.hadoop.authentication.type</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>httpfs.authentication.kerberos.principal</name>
      <value>HTTP/host-1.example....@example.com</value>
    </property>

    <property>
      <name>httpfs.authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>httpfs.hadoop.authentication.kerberos.principal</name>
      <value>nn/host-1.example....@example.com</value>
    </property>

    <property>
      <name>httpfs.hadoop.authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/hdfs.service.keytab</value>
    </property>
{code}

It doesn't work when the configuration omits httpfs.hadoop.authentication.type, 
httpfs.authentication.kerberos.keytab and 
httpfs.hadoop.authentication.kerberos.principal.  The HttpFS server doesn't 
start when these configs are missing.  I think some logic to map the 
configuration is missing in patch 002.
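To illustrate the kind of mapping I mean, here is a minimal, hypothetical sketch 
(using a plain java.util.Map rather than Hadoop's Configuration, and not the 
actual HttpFSAuthenticationFilter code): an httpfs.authentication.* property 
should win when present, and otherwise fall back to the corresponding 
hadoop.http.authentication.* default, so the server can still start when only 
the hadoop-wide keys are set.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the per-property fallback that
// HttpFSAuthenticationFilter#getConfiguration would need to perform.
public class AuthConfigFallback {
    static final String HTTPFS_PREFIX = "httpfs.authentication.";
    static final String HADOOP_PREFIX = "hadoop.http.authentication.";

    // Resolve a property suffix such as "kerberos.keytab": prefer the
    // httpfs-prefixed value, otherwise fall back to the hadoop-prefixed one.
    static String resolve(Map<String, String> conf, String suffix) {
        String value = conf.get(HTTPFS_PREFIX + suffix);
        return (value != null) ? value : conf.get(HADOOP_PREFIX + suffix);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hadoop.http.authentication.type", "kerberos");
        conf.put("hadoop.http.authentication.kerberos.keytab",
                 "/etc/security/keytabs/spnego.service.keytab");
        // An httpfs-prefixed key overrides the hadoop-wide default.
        conf.put("httpfs.authentication.type", "simple");

        System.out.println(resolve(conf, "type"));
        System.out.println(resolve(conf, "kerberos.keytab"));
    }
}
```

With this shape of lookup, omitting httpfs.authentication.kerberos.keytab would 
no longer prevent startup as long as hadoop.http.authentication.kerberos.keytab 
is set.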

> Request is a replay (34) error in httpfs
> ----------------------------------------
>
>                 Key: HDFS-14845
>                 URL: https://issues.apache.org/jira/browse/HDFS-14845
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: httpfs
>    Affects Versions: 3.3.0
>         Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>            Reporter: Akira Ajisaka
>            Assignee: Prabhu Joseph
>            Priority: Critical
>         Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://<host>:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> <html>
> <head>
> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
> <title>Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))</title>
> </head>
> <body><h2>HTTP ERROR 403</h2>
> <p>Problem accessing /webhdfs/v1/. Reason:
> <pre>    GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))</pre></p>
> </body>
> </html>
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
