[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079625#comment-14079625 ] Brandon Li commented on HDFS-5804:
----------------------------------

{quote}... the first time I went through the instructions and had trouble concentrating =) {quote}
Sorry to hear that. In a few places we tried to explain the reasons for the configuration/setup by adding extra notes and so on. Clearly, though, we didn't do a good job there. :-(
Root privilege is usually required by Linux (MacOS doesn't require it) to mount an export, regardless of whether the NFS gateway is in secure or non-secure mode. I've modified the doc to add a description based on what you suggested above. Please take a look at the patch in HDFS-6717 named 'HDFS-6717.morechange.patch' and let me know if it looks OK to you :-)

> HDFS NFS Gateway fails to mount and proxy when using Kerberos
> -------------------------------------------------------------
>
>                 Key: HDFS-5804
>                 URL: https://issues.apache.org/jira/browse/HDFS-5804
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: nfs
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Abin Shahab
>            Assignee: Abin Shahab
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: HDFS-5804-documentation.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, exception-as-root.log, javadoc-after-patch.log, javadoc-before-patch.log
>
>
> When using the HDFS NFS gateway with secure Hadoop (hadoop.security.authentication: kerberos), mounting HDFS fails. Additionally, there is no mechanism to support a proxy user (NFS needs to proxy as the user invoking commands on the HDFS mount).
> Steps to reproduce:
> 1) Start a Hadoop cluster with Kerberos enabled.
> 2) sudo su -l nfsserver and start an NFS server. This 'nfsserver' account has an account in Kerberos.
> 3) Get the keytab for nfsserver, and issue the following mount command: mount -t nfs -o vers=3,proto=tcp,nolock $server:/ $mount_point
> 4) You'll see in the nfsserver logs that Kerberos is complaining about not having a TGT for root.
> This is the stacktrace:
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "my-nfs-server-host.com/10.252.4.197"; destination host is: "my-namenode-host.com":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:664)
>         at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1713)
>         at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
>         at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
>         at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1643)
>         at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1891)
>         at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:143)
>         at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>         at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
>         at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
>         at org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:132)
>         at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>         at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
>         at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
>         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>         at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
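For reference, the reproduction steps in the description boil down to roughly the following commands on the gateway host. This is a sketch, not a verified recipe: the keytab path, principal name, and realm are placeholders, and the exact way to start the gateway varies by Hadoop version and distribution.

```shell
# Step 2: become the gateway user and start the NFS3 service
# (some setups use hadoop-daemon.sh instead of invoking hdfs directly)
sudo su -l nfsserver
hdfs nfs3 &

# Step 3: obtain Kerberos credentials for the gateway principal
# (keytab path and principal are assumptions for illustration)
kinit -kt /etc/security/keytabs/nfsserver.keytab nfsserver/my-nfs-server-host.com@EXAMPLE.COM

# Step 4: mount the export; this usually needs root on Linux, and it is
# the step that surfaces the "no TGT for root" failure described above
sudo mount -t nfs -o vers=3,proto=tcp,nolock my-nfs-server-host.com:/ /hdfs_mount
```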
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077791#comment-14077791 ] Jeff Hansen commented on HDFS-5804:
-----------------------------------

I would probably recommend adding a comment to line 77 of http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm?view=markup&pathrev=1614125
Specifically:
> The above is the only configuration required for the NFS gateway in non-secure mode. However, note that in most non-secure installations you will need to include "root" in the list of users provided under `hadoop.proxyuser.nfsserver.groups`, as root will generally be the user that initially executes the mount.
Thanks Brandon! By the way, I'd like to concede that I may have commented (in my Stack Overflow response) about the lack of certain details in the documentation that were always there -- as I recall, I was VERY tired and distracted the first time I went through the instructions and had trouble concentrating =) When I re-read it, I thought, that's funny, many of the things I complained about not being there were in fact there...
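The proxyuser advice in the comment above can be sketched as a core-site.xml fragment. `hadoop.proxyuser.nfsserver.groups` is the property the comment names, and `hadoop.proxyuser.nfsserver.hosts` is its companion setting; the values shown are placeholders, not a verified configuration:

```xml
<!-- core-site.xml on the NameNode and gateway host. Replace "nfsserver"
     with the user that runs the gateway; values are placeholders. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <!-- Include root's group (or use '*') so the root user performing
       the initial mount can be proxied, per the comment above. -->
  <value>root,users-group1</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-client-host1.com</value>
</property>
```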
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077786#comment-14077786 ] Hudson commented on HDFS-5804:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1846 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1846/])
HDFS-6717. JIRA HDFS-5804 breaks default nfs-gateway behavior for unsecured config. Contributed by Brandon Li (brandonli: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1614125)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077714#comment-14077714 ] Hudson commented on HDFS-5804:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #1819 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1819/])
HDFS-6717. JIRA HDFS-5804 breaks default nfs-gateway behavior for unsecured config. Contributed by Brandon Li (brandonli: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1614125)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077625#comment-14077625 ] Hudson commented on HDFS-5804:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #627 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/627/])
HDFS-6717. JIRA HDFS-5804 breaks default nfs-gateway behavior for unsecured config. Contributed by Brandon Li (brandonli: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1614125)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076681#comment-14076681 ] Hudson commented on HDFS-5804:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #5979 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5979/])
HDFS-6717. JIRA HDFS-5804 breaks default nfs-gateway behavior for unsecured config. Contributed by Brandon Li (brandonli: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1614125)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071054#comment-14071054 ] Brandon Li commented on HDFS-5804:
----------------------------------

Hi [~dscheffy], thanks for pointing out the inconsistent description in the user guide. I've created HDFS-6732 to track the doc fix.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069440#comment-14069440 ] Jeff Hansen commented on HDFS-5804:
---
As you point out, this is an incompatible change that breaks the default non-secure behavior. The documentation was updated with details on what is necessary to run this on a secure cluster (I can't testify to how well those instructions work), but the instructions do not work when trying to set up a basic gateway with no security. I wasn't the first to have trouble with this -- http://stackoverflow.com/questions/24134012/hdfs-nfs-gateway-configuration-getting-exception-for-nfs3/24875747#24875747
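For readers hitting the same problem: the proxyuser settings that this change makes mandatory (even on a non-secure cluster) look roughly like the following in the NameNode's core-site.xml. This is an illustrative sketch, not the official docs -- the user name 'nfsserver' is an assumption standing in for whatever account runs the gateway, and the wildcard values should be narrowed in real deployments:

```xml
<!-- core-site.xml on the NameNode: allow the gateway account (assumed here
     to be 'nfsserver') to proxy requests on behalf of NFS client users. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
</property>
```

Without these entries the gateway's impersonation calls are rejected by the NameNode, which matches the symptom described in the StackOverflow thread above.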
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893703#comment-13893703 ] Aaron T. Myers commented on HDFS-5804:
--
Good point Brandon. Thanks.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893696#comment-13893696 ] Brandon Li commented on HDFS-5804:
--
Let's also mark this JIRA as an incompatible change.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893668#comment-13893668 ] Jing Zhao commented on HDFS-5804:
-
bq. I recommend we file a new JIRA to address both of the above issues ASAP.
Thanks for the comments [~atm]. I just created HDFS-5898 for this. [~ashahab], feel free to assign that JIRA to yourself.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893637#comment-13893637 ] Aaron T. Myers commented on HDFS-5804:
--
bq. On #1, the NFS gateway logs in as a manual hdfs client. By manual, I mean it acts right now as a human user. The human user has to first get the TGT for the appropriate account, and then issue the hdfs commands. The current NFS gateway does the same.
bq. If I understand you correctly, the NFS gateway should be able to get its own TGTs, and renew them (just like the namenode and other hadoop nodes can). We plan to add that functionality soon.
Yes, you understand my point correctly. Without this functionality the patch is not very robust. In a production environment the NFS gateway will typically be started at boot by init scripts, so there is no opportunity to run `kinit' beforehand. Also, with a login based on a local FS ticket cache, the ticket needs to be renewed every few hours, so the user would have to write a script or something similar to run `kinit' periodically. This approach also has issues because ticket renewal via a local FS ticket cache is not atomic, so a busy NFS gateway will have problems during renewal.
bq. On #2, I completely agree. We should update the HdfsNfsGateway.apt.vm. I will post a patch soon.
Thanks. I also strongly suspect that in most deployments the NFS gateway will be running as the same user as the NN, which will therefore make it the HDFS superuser. I think we should also seriously consider making the HDFS superuser capable of proxying all users by default, which would mean that most deployments would not need to manually configure the NFS gateway user as a proxyuser.
I recommend we file a new JIRA to address both of the above issues ASAP. I'd be happy to review it.
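The keytab-based login Aaron asks for amounts to configuration along these lines on the gateway host. The property names shown follow the follow-up work tracked in HDFS-5898 and may differ by Hadoop release; the keytab path and principal are placeholders, so treat this as a sketch rather than the authoritative settings:

```xml
<!-- hdfs-site.xml on the gateway host (illustrative): let the gateway log
     in from a keytab and re-acquire its own Kerberos credentials, instead
     of depending on a manually maintained local ticket cache. -->
<property>
  <name>dfs.nfs.keytab.file</name>
  <value>/etc/security/keytabs/nfsserver.keytab</value>
</property>
<property>
  <name>dfs.nfs.kerberos.principal</name>
  <value>nfsserver/_HOST@EXAMPLE.COM</value>
</property>
```

With keytab login in place, ticket renewal is handled inside the gateway process, which sidesteps the non-atomic ticket-cache renewal problem described above.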
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893512#comment-13893512 ] Abin Shahab commented on HDFS-5804:
---
Hi Aaron,
Thanks for the feedback.
On #2, I completely agree. We should update the HdfsNfsGateway.apt.vm. I will post a patch soon.
On #1, the NFS gateway logs in as a manual hdfs client. By manual, I mean it currently acts as a human user: the human user has to first get the TGT for the appropriate account, and then issue the hdfs commands. The current NFS gateway does the same.
If I understand you correctly, the NFS gateway should be able to get its own TGTs and renew them (just like the namenode and other hadoop nodes can). We plan to add that functionality soon.
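The "manual client" flow described above boils down to something like the following transcript on the gateway host. It only works inside a Kerberized environment, and the keytab path, principal name, and realm are placeholders, so this is purely an illustrative sketch:

```
# Illustrative only: obtain a TGT for the (assumed) 'nfsserver' principal
# from its keytab, then start the gateway under that ticket cache.
kinit -kt /etc/security/keytabs/nfsserver.keytab nfsserver/$(hostname -f)@EXAMPLE.COM
hdfs nfs3 &
# The TGT expires after the ticket lifetime, so the kinit step has to be
# repeated periodically -- the fragility Aaron points out above.
```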
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892909#comment-13892909 ] Aaron T. Myers commented on HDFS-5804:
--
Hey folks, sorry I'm coming into this late. Two quick questions:
# Unless I'm missing something, shouldn't the NFS gateway be logging in via a keytab so that it actually has Kerberos credentials to authenticate to the secure cluster? Or, more generally, how is the NFS gateway supposed to get credentials to authenticate to the secure cluster?
# After this patch, it seems that we now _must_ configure the NFS gateway user as a proxy user on the cluster regardless of whether or not we're using Kerberos. If that's correct, I think we should have updated the HdfsNfsGateway.apt.vm Configuration section of the docs to explicitly say this.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888592#comment-13888592 ] Hudson commented on HDFS-5804:
--
SUCCESS: Integrated in Hadoop-Hdfs-trunk #1660 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1660/])
HDFS-5804. HDFS NFS Gateway fails to mount and proxy when using Kerberos. Contributed by Abin Shahab. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563323)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestReaddir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888578#comment-13888578 ] Hudson commented on HDFS-5804:
--
FAILURE: Integrated in Hadoop-Mapreduce-trunk #1685 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1685/])
HDFS-5804. HDFS NFS Gateway fails to mount and proxy when using Kerberos. Contributed by Abin Shahab. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563323)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestReaddir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888528#comment-13888528 ] Hudson commented on HDFS-5804:
--
FAILURE: Integrated in Hadoop-Yarn-trunk #468 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/468/])
HDFS-5804. HDFS NFS Gateway fails to mount and proxy when using Kerberos. Contributed by Abin Shahab. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563323)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestReaddir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888277#comment-13888277 ] Hudson commented on HDFS-5804:
--
SUCCESS: Integrated in Hadoop-trunk-Commit #5087 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5087/])
HDFS-5804. HDFS NFS Gateway fails to mount and proxy when using Kerberos. Contributed by Abin Shahab. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1563323)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestReaddir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888262#comment-13888262 ] Jing Zhao commented on HDFS-5804:
--
+1 for the latest patch. I will commit it shortly.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886240#comment-13886240 ] Abin Shahab commented on HDFS-5804:
--
Hi Daryn, Would you be able to merge the patch?
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884892#comment-13884892 ] Hadoop QA commented on HDFS-5804:
--
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12625685/HDFS-5804.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5966//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5966//console
This message is automatically generated.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884827#comment-13884827 ] Daryn Sharp commented on HDFS-5804:

Looks good!  Just fix the javadoc and audit warnings.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884715#comment-13884715 ] Hadoop QA commented on HDFS-5804:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12625663/HDFS-5804.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated -14 warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5963//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/5963//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5963//console

This message is automatically generated.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884455#comment-13884455 ] Daryn Sharp commented on HDFS-5804:

Are the other {{isSecurityEnabled}} checks still required?
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13883750#comment-13883750 ] Hadoop QA commented on HDFS-5804:

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12625518/HDFS-5804.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5959//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5959//console

This message is automatically generated.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13883548#comment-13883548 ] Hadoop QA commented on HDFS-5804:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12625470/HDFS-5804.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs:
  org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
  org.apache.hadoop.hdfs.nfs.TestReaddir
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5956//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5956//console

This message is automatically generated.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882853#comment-13882853 ] Daryn Sharp commented on HDFS-5804:

bq. BTW, I have a patch that gets rid off even checking whether we are in secure mode, but I'm not sure if it's the right thing to submit that patch. That patch would require the nfs-gateway user(nfsserver in our case) be allowed to proxy root, even in non-secure mode. That's a big change.

I think it's the right thing to do, and it's not a large change. We ideally need to move away from all the {{isSecurityEnabled}} checks. They introduce additional code paths that lack coverage and sufficient testing.

When you create a proxy user, it does not confer the privileges of the real user (e.g. root/nfsserver) to the effective user. The real user is simply used to authenticate the connection on behalf of the effective user; after that, all permission checking uses the effective user. Even with security off, I'm pretty sure proxy users need to be configured for components like oozie to work.
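The proxy-user setup Daryn refers to lives in core-site.xml on the NameNode. A minimal sketch, assuming the gateway runs as the 'nfsserver' account from the reproduction steps (the wildcard values are illustrative only; a real deployment should restrict them to the actual groups and the gateway host):

```xml
<!-- core-site.xml: allow the 'nfsserver' user to impersonate other users.
     The effective user is still the one checked for HDFS permissions;
     'nfsserver' only authenticates the connection. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value> <!-- illustrative; restrict to real groups in production -->
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value> <!-- illustrative; restrict to the NFS gateway host -->
</property>
```

As the comment notes, this configuration is needed even in non-secure mode: impersonation authorization is checked independently of Kerberos authentication.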
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881707#comment-13881707 ] Hadoop QA commented on HDFS-5804:

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12625100/HDFS-5804.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5940//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5940//console

This message is automatically generated.
> Additionally, there is no mechanism to support proxy user (nfs needs to proxy as the user invoking commands on the hdfs mount).
> Steps to reproduce:
> 1) start a hadoop cluster with kerberos enabled.
> 2) sudo su -l nfsserver and start an nfs server. This 'nfsserver' account has an account in kerberos.
> 3) Get the keytab for nfsserver, and issue the following mount command: mount -t nfs -o vers=3,proto=tcp,nolock $server:/ $mount_point
> 4) You'll see in the nfsserver logs that Kerberos is complaining about not having a TGT for root.
> This is the stacktrace:
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "my-nfs-server-host.com/10.252.4.197"; destination host is: "my-namenode-host.com":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:664)
> at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1713)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
> at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1643)
> at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1891)
> at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:143)
> at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUps
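For readers hitting the same "no TGT" error: in releases that include this patch, the gateway can log in from a keytab configured in hdfs-site.xml. A minimal sketch; the keytab path and principal are placeholders, and the exact property names vary by release (the 2.4-era docs use the dfs.nfs.* prefix, later releases use nfs.*), so check the HdfsNfsGateway documentation for your version:

```xml
<!-- hdfs-site.xml on the gateway host: let the NFS gateway authenticate
     to HDFS from a keytab in a Kerberized cluster. Values are examples. -->
<property>
  <name>dfs.nfs.keytab.file</name>
  <value>/etc/security/keytabs/nfsserver.keytab</value>
</property>
<property>
  <name>dfs.nfs.kerberos.principal</name>
  <value>nfsserver/_HOST@EXAMPLE.COM</value>
</property>
```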
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881375#comment-13881375 ] Abin Shahab commented on HDFS-5804: --- Jing, let me know if you have any feedback on my patch.
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880577#comment-13880577 ] Abin Shahab commented on HDFS-5804: --- BTW, I have a patch that gets rid of even checking whether we are in secure mode, but I'm not sure it's the right thing to submit. That patch would require the nfs-gateway user (nfsserver in our case) to be allowed to proxy root, even in non-secure mode. That's a big change.
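The "allowed to proxy root" requirement above maps onto Hadoop's standard proxy-user settings in core-site.xml on the NameNode. A minimal sketch, assuming the gateway runs as 'nfsserver'; the group and host values are illustrative and should be narrowed to your environment:

```xml
<!-- core-site.xml on the NameNode: authorize the 'nfsserver' user to
     impersonate NFS clients. Example values only. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>users</value>
  <description>Groups whose members the gateway may impersonate.</description>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-gateway-host.example.com</value>
  <description>Hosts from which the gateway may impersonate.</description>
</property>
```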
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880572#comment-13880572 ] Jing Zhao commented on HDFS-5804: - Sure, I will post what I have to HDFS-5086. In general, I was just trying to merge the GSS authentication part from [~brocknoland]'s NFS4 implementation (https://github.com/cloudera/hdfs-nfs-proxy) into the current NFS3-based implementation. You can also check [~brocknoland]'s implementation directly.
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880550#comment-13880550 ] Abin Shahab commented on HDFS-5804: --- May I take a look at your patch? I was planning to mimic how org.apache.hadoop.ipc.Client does the authentication. Also, I don't have access to assign issues to myself; I would definitely like to assign this one to myself.
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880508#comment-13880508 ] Jing Zhao commented on HDFS-5804: - bq. this still allows any user in the proxied group to authenticate WITHOUT having a kerberos ticket. Yeah, currently the nfs-gateway can only do simple AUTH_UNIX authentication, so we need to finish HDFS-5086 so that the nfs-gateway can authenticate clients with Kerberos. I have an in-progress patch from a while ago; I will see if I can finish it soon. Also, feel free to assign that jira to yourself if you want to work on it.
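To make the AUTH_UNIX (AUTH_SYS) point concrete: the credential the gateway currently accepts is just an unverified uid/gid assertion on the wire. A small standalone sketch of its XDR layout per RFC 5531; this is illustrative only, not code from the gateway:

```python
import struct

AUTH_SYS = 1  # flavor number for AUTH_UNIX/AUTH_SYS in RFC 5531

def xdr_opaque(data: bytes) -> bytes:
    """XDR variable-length opaque: 4-byte big-endian length, then the
    data zero-padded to a 4-byte boundary."""
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def auth_sys_cred(stamp: int, machine: bytes, uid: int, gid: int, gids: list) -> bytes:
    """Build an opaque_auth structure carrying AUTH_SYS credentials.
    Nothing here is cryptographically verified: the client simply
    asserts its uid/gid, which is why RPCSEC_GSS/Kerberos is needed."""
    body = struct.pack(">I", stamp)
    body += xdr_opaque(machine)            # machinename
    body += struct.pack(">II", uid, gid)   # asserted uid and primary gid
    body += struct.pack(">I", len(gids))   # auxiliary gid count
    body += b"".join(struct.pack(">I", g) for g in gids)
    return struct.pack(">I", AUTH_SYS) + xdr_opaque(body)

# Any client can claim uid 0 (root) this way -- the circumvention
# concern raised in this thread.
cred = auth_sys_cred(stamp=0, machine=b"host", uid=0, gid=0, gids=[0])
```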
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880491#comment-13880491 ] Abin Shahab commented on HDFS-5804: --- Ah! I see your point. I think I can allow nfsserver to proxy root, and that would allow this patch to work properly (I've removed the root check condition). BTW, this still allows any user in the proxied group to authenticate WITHOUT having a kerberos ticket. Do you have any advice on implementing kerberos authentication on the nfs-gateway? We are kerberizing our clusters, and it seems like nfs allows users to circumvent kerberos authentication.
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880459#comment-13880459 ] Jing Zhao commented on HDFS-5804: - Abin, I see your issue now. So from the nfs-gateway point of view, I think it should simply impersonate any user who has passed its own authentication, and thus should not have a special case for root. In HDFS, why do you want to disable the proxy setting for root? HDFS does not treat root as a special user.
[jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880395#comment-13880395 ] Abin Shahab commented on HDFS-5804:

Jing, thanks a lot for looking at the issue. I think you've captured what I'm trying to do very well! Yes, we specifically do not want nfsserver (the user running the nfs-gateway) to be able to impersonate root. We need root for one thing, and only one thing: to mount the filesystem. After that, root is irrelevant and should not have access to do anything. Regrettably, the mount performs an FSINFO call as part of mounting.
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880379#comment-13880379 ] Jing Zhao commented on HDFS-5804:

So I guess the idea here is that the nfs gateway acts as a service and authenticates itself to Hadoop/HDFS through Kerberos. Then, for the clients of nfs, if a client can authenticate itself to the NFS gateway (currently we only support AUTH_UNIX, and we plan to support GSS in HDFS-5539), the nfs gateway will create a proxy user for the client and use that proxy user to communicate with HDFS. Back to the exception: I have not tested this myself, but have you added the proxy user settings to your HDFS configuration? I ask because the exception message is "User: nfsserver/krb-nfs-desktop.my.company@krb.altiscale.com is not allowed to impersonate root".
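The proxy-user settings Jing refers to live in core-site.xml on the NameNode. A minimal sketch, assuming the gateway runs as the short user name nfsserver; the host and group values below are illustrative placeholders, not taken from this issue:

```xml
<!-- core-site.xml on the NameNode: hadoop.proxyuser.<user>.groups and
     hadoop.proxyuser.<user>.hosts control which users nfsserver may
     impersonate and from which hosts. Placeholder values; tighten both
     for a real deployment. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-gateway-host.example.com</value>
</property>
```

Without these properties, the NameNode rejects impersonation attempts with exactly the "is not allowed to impersonate" message quoted above.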
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13879932#comment-13879932 ] Daryn Sharp commented on HDFS-5804:

I'm not familiar enough with the nfs code to fully level-set these comments. My initial feeling is that the conditional logic is less than desirable. Relative to the provided patch, I think there's a clean way to avoid the explicit root check. The check seems suspect, in that there shouldn't be a pre-condition that the fuse daemon run as "root". My basic understanding is that fuse runs as root to access user ticket caches. However, there's no reason I couldn't map a different username to uid 0, allow a non-privileged user to access the ticket caches based on group perms, use SELinux capabilities to grant an fsuid of root to the fuse daemon, etc. Anyway, back to the patch. A better way may be to check the given username against the current user: create a proxy user if they are different, else return the current user. No isSecurityEnabled or root comparison is needed. Or, better yet, just always create a proxy user. A proxy will work with or without security, and a proxy of the same user should also work. I'm unclear how this patch solves the issue of root being unable to stat /. A proxy is only created when the user isn't root, so how does this fix the issue?
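Daryn's "always create a proxy user" suggestion would, in Hadoop terms, amount to calling UserGroupInformation.createProxyUser(remoteUser, UserGroupInformation.getCurrentUser()) unconditionally. A minimal, self-contained sketch of that decision logic; the class below is a stand-in for UserGroupInformation so the sketch runs on its own, and is not the committed patch:

```java
// Stand-in for org.apache.hadoop.security.UserGroupInformation; in the real
// gateway this logic would call
// UserGroupInformation.createProxyUser(remoteUser, getCurrentUser()).
final class ProxyUserSketch {
    final String userName; // the user the RPC acts as
    final String realUser; // authenticated gateway principal, or null for a login user

    private ProxyUserSketch(String userName, String realUser) {
        this.userName = userName;
        this.realUser = realUser;
    }

    // The gateway's own Kerberos login identity (e.g. from its keytab).
    static ProxyUserSketch loginUser(String principal) {
        return new ProxyUserSketch(principal, null);
    }

    // Always wrap the NFS client's user in a proxy user: this works with or
    // without security, and proxying the gateway user as itself is harmless,
    // so no isSecurityEnabled check and no root special case are needed.
    static ProxyUserSketch forNfsRequest(String remoteUser, ProxyUserSketch gateway) {
        return new ProxyUserSketch(remoteUser, gateway.userName);
    }
}
```

With this shape the gateway's authenticated identity always rides along as the real user, and HDFS's proxy-user configuration (not client-side branching) decides whether the impersonation is allowed.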
[ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878036#comment-13878036 ] Brandon Li commented on HDFS-5804:

[~ashahab], HDFS-5539 was filed to track the security enhancement; currently the NFS gateway can't work with a secure cluster. I'm moving this JIRA under HDFS-5539 to track the effort.