[ https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107935#comment-14107935 ]

Haohui Mai commented on HDFS-6776:
----------------------------------

bq. Returning NullToken (notice here we are throwing an NullToken exception 
rather than returning null Token) or dummyToken does not seem to make much 
difference here.

The key difference is whether the user is aware that there are potential 
security issues.

Note that users know some information about the remote cluster, for example, 
whether the remote cluster is secure or not. If the user is connecting to an 
insecure cluster, he / she has to give explicit consent, because there could be 
security concerns in this use case. Any solution has to meet this requirement.

Just for reference, if you look at how distcp works when copying between 
secure and insecure clusters using RPC (i.e. {{hdfs://}}), there is a 
configuration to control whether connecting to insecure clusters is allowed 
(see the sketch below). Users can reject these connections for maximum 
security. If distcp over webhdfs / swebhdfs between secure and insecure 
clusters is to work, then the user should be able to specify the same 
security requirement.
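
For reference, the RPC-side knob in question is likely 
{{ipc.client.fallback-to-simple-auth-allowed}}; the minimal sketch below 
assumes that property name and its default, so please verify against your 
Hadoop version:

{code}
// Minimal sketch, assuming the RPC-side knob is
// ipc.client.fallback-to-simple-auth-allowed (verify for your Hadoop version).
import org.apache.hadoop.conf.Configuration;

public class FallbackConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Allow a Kerberos client to fall back to simple auth when the remote
    // cluster is insecure; assumed default is false (reject such connections).
    conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
    System.out.println(conf.getBoolean(
        "ipc.client.fallback-to-simple-auth-allowed", false));
  }
}
{code}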

There are many ways to approach this requirement. For example, maybe you can 
add a parameter to distcp, or add a new configuration so that the remote 
cluster can return a dummy token (a rough sketch follows)? I think this 
requirement has to be addressed before this patch can go in.
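
A rough, hypothetical sketch of the second option; the property name and 
helper below are invented for illustration only, not an existing Hadoop or 
distcp API:

{code}
// Hypothetical sketch: require explicit consent before proceeding without a
// real delegation token from an insecure remote cluster.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class InsecureRemoteGuard {
  // Invented property name, for illustration only.
  static final String ALLOW_INSECURE_REMOTE =
      "distcp.webhdfs.allow.insecure.remote";

  static void checkConsent(Configuration conf) throws IOException {
    if (!conf.getBoolean(ALLOW_INSECURE_REMOTE, false)) {
      throw new IOException("Remote cluster is insecure and "
          + ALLOW_INSECURE_REMOTE + " is not set; refusing to proceed.");
    }
    // Otherwise fall through and use the dummy-token path.
  }
}
{code}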

> distcp from insecure cluster (source) to secure cluster (destination) doesn't 
> work
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-6776
>                 URL: https://issues.apache.org/jira/browse/HDFS-6776
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.3.0, 2.5.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-6776.001.patch, HDFS-6776.002.patch, 
> HDFS-6776.003.patch, HDFS-6776.004.patch, HDFS-6776.004.patch, 
> HDFS-6776.005.patch, HDFS-6776.006.NullToken.patch, 
> HDFS-6776.006.NullToken.patch, HDFS-6776.007.patch, HDFS-6776.008.patch
>
>
> Issuing the distcp command on the secure cluster side, trying to copy data 
> from the insecure cluster to the secure cluster, we see the following problem:
> {code}
> [hadoopuser@yjc5u-1 ~]$ hadoop distcp webhdfs://<insecure-cluster>:<port>/tmp 
> hdfs://<secure-cluster>:8020/tmp/tmptgt
> 14/07/30 20:06:19 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[webhdfs://<insecure-cluster>:<port>/tmp], 
> targetPath=hdfs://<secure-cluster>:8020/tmp/tmptgt, targetPathExists=true}
> 14/07/30 20:06:19 INFO client.RMProxy: Connecting to ResourceManager at 
> <secure-cluster>:8032
> 14/07/30 20:06:20 WARN ssl.FileBasedKeyStoresFactory: The property 
> 'ssl.client.truststore.location' has not been set, no TrustStore will be 
> loaded
> 14/07/30 20:06:20 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Failed to get the token for hadoopuser, 
> user=hadoopuser
> 14/07/30 20:06:20 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Failed to get the token for hadoopuser, 
> user=hadoopuser
> 14/07/30 20:06:20 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Failed to get the token for hadoopuser, user=hadoopuser
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>       at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>       at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:365)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:84)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:618)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:584)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1132)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:218)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:403)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:424)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:640)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:565)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:781)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:796)
>       at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>       at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>       at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>       at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>       at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>       at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>       at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>       at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>       at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed 
> to get the token for hadoopuser, user=hadoopuser
>       at 
> org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:334)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:84)
>       at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:570)
>       ... 30 more
> [hadoopuser@yjc5u-1 ~]$ 
> {code}


