[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-09-24 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16626528#comment-16626528
 ] 

Kihwal Lee commented on HDFS-13100:
-----------------------------------

Is the patch working properly? We have a user running a 2.7.3 server, and the 
webhdfs client in the latest 2.8, which includes this patch, is showing the same problem.

> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in 
> webhdfs.
> -----------------------------------------------------------------------------
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-13100.001.patch, HDFS-13100.002.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs (this method is used by 
> HDFS-12396), and expects clusters that don't support it to throw 
> UnsupportedOperationException.  However, we are seeing
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
> -update -skipcrccheck webhdfs://:/fileX 
> hdfs://:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>   at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward 
> compatibility and easier operation in the field, the latter is preferred.
> But we'd better understand why IllegalArgumentException is thrown instead of 
> UnsupportedOperationException.
> The correct way to do this is: check whether the operation is supported, and 
> throw UnsupportedOperationException if not; then check whether the parameter 
> is legal, and throw IllegalArgumentException if it is not. We can do that 
> fix as a follow-up of this jira.
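
As a rough illustration of the client-side handling the description prefers: treat a 
RemoteException that unwraps to IllegalArgumentException from an old server the same 
as the expected UnsupportedOperationException, and fall back to "no server defaults". 
This is a minimal sketch under those assumptions, not the committed patch; WebHdfsLike 
and fetchServerDefaults are hypothetical stand-ins for WebHdfsFileSystem and its 
GETSERVERDEFAULTS call.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.ipc.RemoteException;

public class ServerDefaultsFallback {

  /** Hypothetical stand-in for WebHdfsFileSystem, for this sketch only. */
  interface WebHdfsLike {
    FsServerDefaults fetchServerDefaults() throws IOException;
  }

  /**
   * Returns the server defaults, or null when the remote cluster predates
   * GETSERVERDEFAULTS. Old servers reject the unknown "op" value with a
   * RemoteException wrapping IllegalArgumentException, so treat that the
   * same as the documented UnsupportedOperationException.
   */
  static FsServerDefaults getServerDefaultsOrNull(WebHdfsLike fs)
      throws IOException {
    try {
      return fs.fetchServerDefaults();
    } catch (RemoteException e) {
      String cls = e.getClassName();
      if (IllegalArgumentException.class.getName().equals(cls)
          || UnsupportedOperationException.class.getName().equals(cls)) {
        return null; // caller then skips the key-provider/token logic
      }
      throw e;
    }
  }
}
{code}

And for the server-side follow-up the description proposes, a self-contained sketch 
of the check ordering: decide "is this operation supported?" before "is this 
parameter legal?", so a well-formed but unimplemented op yields 
UnsupportedOperationException rather than the enum-parsing IllegalArgumentException 
seen in the trace above. The Op enum here is a stand-in for GetOpParam.Op on a 
server that predates GETSERVERDEFAULTS; none of this is the actual NameNode code.

{code}
public class OpParamCheck {
  /** Stand-in for org.apache.hadoop.hdfs.web.resources.GetOpParam.Op. */
  enum Op { OPEN, GETFILESTATUS, LISTSTATUS, GETDELEGATIONTOKEN }

  static Op parseOp(String value) {
    for (Op op : Op.values()) {
      if (op.name().equalsIgnoreCase(value)) {
        return op; // known to this server
      }
    }
    if (value != null && value.matches("[A-Z_]+")) {
      // A well-formed op name this server simply does not implement.
      throw new UnsupportedOperationException(
          "Operation " + value + " is not supported");
    }
    // A malformed parameter value is a genuine argument error.
    throw new IllegalArgumentException(
        "Invalid value for webhdfs parameter \"op\": " + value);
  }

  public static void main(String[] args) {
    System.out.println(parseOp("liststatus"));  // parses fine
    parseOp("GETSERVERDEFAULTS");               // UnsupportedOperationException
  }
}
{code}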



[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16352079#comment-16352079
 ] 

Xiao Chen commented on HDFS-13100:
----------------------------------

Cherry-picked the branch-3.0.1 patch to branch-3.0. Thanks all.




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-03 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351325#comment-16351325
 ] 

Yongjun Zhang commented on HDFS-13100:
--------------------------------------

Committed to trunk, branch-3.0.1, branch-2, branch-2.9, branch-2.8.

Thanks much [~kihwal] for the review, and [~atm] for the contribution!






[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351296#comment-16351296
 ] 

Hudson commented on HDFS-13100:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13612 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13612/])
HDFS-13100. Handle IllegalArgumentException when GETSERVERDEFAULTS is (yzhang: 
rev 4e9a59ce16e81b4bd6fae443a997ef24d588a6e8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java

