[ https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongjun Zhang updated HDFS-13100:
---------------------------------
    Description: 
HDFS-12386 added a getserverdefaults call to webhdfs, and expects clusters that 
don't support it to throw UnsupportedOperationException. However, we are seeing:

{code}
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY

...
18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
        at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
        at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
        at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
        at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
        at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
        at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
        at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
        at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
        at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
{code}

We either need to make the server throw UnsupportedOperationException, or make 
the client handle IllegalArgumentException. For backward compatibility and 
easier operation in the field, the latter is preferred.
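
Below is a minimal sketch of what the client-side handling could look like. It is 
not the actual patch: the helper class and method names (LegacySafeServerDefaults, 
fetchServerDefaults) are hypothetical, and only WebHdfsFileSystem, FsServerDefaults, 
Path and RemoteException are real Hadoop types taken from the stack trace above.

{code}
// Hedged sketch of the client-side option, NOT the committed fix.
import java.io.IOException;

import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
import org.apache.hadoop.ipc.RemoteException;

public class LegacySafeServerDefaults {

  /**
   * Returns the server defaults, or null when the remote NameNode predates
   * HDFS-12386 and rejects the unknown GETSERVERDEFAULTS op.
   */
  public static FsServerDefaults fetchServerDefaults(WebHdfsFileSystem fs)
      throws IOException {
    try {
      return fs.getServerDefaults(new Path("/"));
    } catch (UnsupportedOperationException e) {
      // Servers that recognize the op but explicitly do not support it.
      return null;
    } catch (RemoteException e) {
      // Older servers fail to parse the "op" parameter and wrap an
      // IllegalArgumentException, as shown in the stack trace above.
      if (IllegalArgumentException.class.getName().equals(e.getClassName())) {
        return null;
      }
      throw e;
    }
  }
}
{code}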

But we should also understand why IllegalArgumentException is thrown instead of 
UnsupportedOperationException.

The correct behavior would be: check whether the operation is supported, and 
throw UnsupportedOperationException if it is not; then check whether the 
parameter is legal, and throw IllegalArgumentException if it is not. We can do 
that fix as a follow-up of this jira.
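
For illustration only, here is a rough sketch of that server-side ordering. The Op 
enum and the IMPLEMENTED set below are made up for the example; this is not the 
actual GetOpParam/NamenodeWebHdfsMethods code.

{code}
// Illustrative only: separate "unknown parameter value" from "known but
// unsupported operation", so each case throws the right exception type.
import java.util.EnumSet;

public class OpValidationSketch {

  // Made-up op enum standing in for GetOpParam.Op.
  enum Op { OPEN, GETFILESTATUS, GETSERVERDEFAULTS }

  // Hypothetical: the subset of ops this (older) server build implements.
  private static final EnumSet<Op> IMPLEMENTED =
      EnumSet.of(Op.OPEN, Op.GETFILESTATUS);

  static Op resolve(String opName) {
    final Op op;
    try {
      op = Op.valueOf(opName.toUpperCase());
    } catch (IllegalArgumentException e) {
      // Genuinely malformed parameter value.
      throw new IllegalArgumentException(
          "Invalid value for webhdfs parameter \"op\": " + opName, e);
    }
    if (!IMPLEMENTED.contains(op)) {
      // Recognized op that this server does not implement.
      throw new UnsupportedOperationException(op + " is not supported");
    }
    return op;
  }
}
{code}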



  was:
HDFS-12386 added a getserverdefaults call to webhdfs, and expects clusters that 
don't support it to throw UnsupportedOperationException. However, we are seeing:

{code}
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY

...
18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
        at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
        at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
        at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
        at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
        at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
        at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
        at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
        at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
        at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
{code}

In that case, the following handler is called, and IllegalArgumentException is 
thrown instead of UnsupportedOperationException; the client side fails to handle 
the IllegalArgumentException.


We either need to make the server throw UnsupportedOperationException, or make 
the client handle IllegalArgumentException. For backward compatibility and 
easier operation in the field, the latter is preferred.



> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented 
> in webhdfs.
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-13100
>                 URL: https://issues.apache.org/jira/browse/HDFS-13100
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs, webhdfs
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Major
>
> HDFS-12386 added a getserverdefaults call to webhdfs, and expects clusters 
> that don't support it to throw UnsupportedOperationException. However, we are 
> seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>       at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>       at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>       at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>       at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>       at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>       at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>       at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>       at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>       at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>       at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>       at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward compatibility 
> and easier operation in the field, the latter is preferred.
> But we should also understand why IllegalArgumentException is thrown instead 
> of UnsupportedOperationException.
> The correct behavior would be: check whether the operation is supported, and 
> throw UnsupportedOperationException if it is not; then check whether the 
> parameter is legal, and throw IllegalArgumentException if it is not. We can 
> do that fix as a follow-up of this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
