[ https://issues.apache.org/jira/browse/HDFS-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Kanter updated HDFS-4841:
--------------------------------

    Attachment: HDFS-4841.patch

The problem was basically that, in the shutdown hook, we'd try to get the 
FileSystem, which would try to add the shutdown hook again.  The patch simply 
adds a check so that we don't add the shutdown hook if we're already shutting 
down.
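For reference, here's a minimal, self-contained sketch of the shape of that check, using the public ShutdownHookManager API (the class name and hook body below are made up for illustration; the actual patch puts the guard inside FileSystem.Cache.getInternal(), not in a standalone class):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.util.ShutdownHookManager;

// Illustrative sketch only -- the real change is a guard in
// FileSystem.Cache.getInternal(), but it boils down to this check.
public class ShutdownGuardSketch {
  public static void main(String[] args) {
    Runnable clientFinalizerLikeHook = new Runnable() {
      @Override
      public void run() {
        // In the real ClientFinalizer this would close the cached filesystems.
      }
    };
    // Only register the hook if the JVM isn't already shutting down;
    // ShutdownHookManager.addShutdownHook() throws IllegalStateException once
    // shutdown has begun, which is the warning reported below.
    if (!ShutdownHookManager.get().isShutdownInProgress()) {
      ShutdownHookManager.get().addShutdownHook(clientFinalizerLikeHook,
          FileSystem.SHUTDOWN_HOOK_PRIORITY);
    }
  }
}
{code}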
                
> FsShell commands using secure webhdfs fail ClientFinalizer shutdown hook
> ------------------------------------------------------------------------
>
>                 Key: HDFS-4841
>                 URL: https://issues.apache.org/jira/browse/HDFS-4841
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: security, webhdfs
>    Affects Versions: 3.0.0
>            Reporter: Stephen Chu
>            Assignee: Robert Kanter
>         Attachments: core-site.xml, 
> hadoop-root-namenode-hdfs-upgrade-pseudo.ent.cloudera.com.out, 
> HDFS-4841.patch, hdfs-site.xml, jsvc.out
>
>
> Hadoop version:
> {code}
> bash-4.1$ $HADOOP_HOME/bin/hadoop version
> Hadoop 3.0.0-SNAPSHOT
> Subversion git://github.com/apache/hadoop-common.git -r d5373b9c550a355d4e91330ba7cc8f4c7c3aac51
> Compiled by root on 2013-05-22T08:06Z
> From source with checksum 8c4cc9b1e8d6e8361431e00f64483f
> This command was run using /var/lib/hadoop-hdfs/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT.jar
> {code}
> I'm seeing a problem when issuing FsShell commands using the webhdfs:// URI 
> when security is enabled. The command completes but leaves a warning that 
> ShutdownHook 'ClientFinalizer' failed.
> {code}
> bash-4.1$ hadoop-3.0.0-SNAPSHOT/bin/hadoop fs -ls webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/
> 2013-05-22 09:46:55,710 INFO  [main] util.Shell (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
> Found 3 items
> drwxr-xr-x   - hbase supergroup          0 2013-05-22 09:46 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/hbase
> drwxr-xr-x   - hdfs  supergroup          0 2013-05-22 09:46 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/tmp
> drwxr-xr-x   - hdfs  supergroup          0 2013-05-22 09:46 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/user
> 2013-05-22 09:46:58,660 WARN  [Thread-3] util.ShutdownHookManager (ShutdownHookManager.java:run(56)) - ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot add a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot add a shutdownHook
>       at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:152)
>       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2400)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2372)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$DtRenewer.getWebHdfs(WebHdfsFileSystem.java:1001)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$DtRenewer.cancel(WebHdfsFileSystem.java:1013)
>       at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
>       at org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:152)
>       at org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
>       at org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:241)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:822)
>       at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2446)
>       at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2463)
>       at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> {code}
> I've checked that FsShell + hdfs:// commands and WebHDFS operations through 
> curl work successfully:
> {code}
> bash-4.1$ hadoop-3.0.0-SNAPSHOT/bin/hadoop fs -ls /
> 2013-05-22 09:46:43,663 INFO  [main] util.Shell (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
> Found 3 items
> drwxr-xr-x   - hbase supergroup          0 2013-05-22 09:46 /hbase
> drwxr-xr-x   - hdfs  supergroup          0 2013-05-22 09:46 /tmp
> drwxr-xr-x   - hdfs  supergroup          0 2013-05-22 09:46 /user
> bash-4.1$ curl -i --negotiate -u : "http://hdfs-upgrade-pseudo.ent.cloudera.com:50070/webhdfs/v1/?op=GETHOMEDIRECTORY";
> HTTP/1.1 401 
> Cache-Control: must-revalidate,no-cache,no-store
> Date: Wed, 22 May 2013 16:47:14 GMT
> Pragma: no-cache
> Date: Wed, 22 May 2013 16:47:14 GMT
> Pragma: no-cache
> Content-Type: text/html; charset=iso-8859-1
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT
> Content-Length: 1358
> Server: Jetty(6.1.26)
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Thu, 01-Jan-1970 00:00:00 GMT
> Date: Wed, 22 May 2013 16:47:14 GMT
> Pragma: no-cache
> Date: Wed, 22 May 2013 16:47:14 GMT
> Pragma: no-cache
> Content-Type: application/json
> Set-Cookie: hadoop.auth="u=hdfs&p=hdfs/hdfs-upgrade-pseudo.ent.cloudera....@ent.cloudera.com&t=kerberos&e=1369277234852&s=m3vJ7/pV831tBLkpOBb0Naa5N+g=";Path=/
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
> {"Path":"/user/hdfs"}bash-4.1$ 
> {code}
> When I disable security, the warning goes away.
> I'll attach my core-site.xml, hdfs-site.xml, NN and DN output logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
