[ https://issues.apache.org/jira/browse/HADOOP-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran resolved HADOOP-18826.
-------------------------------------
    Fix Version/s: 3.4.0, 3.3.9
       Resolution: Fixed

abfs getFileStatus(/) fails with "Value for one of the query parameters specified in the request URI is invalid.", 400
----------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-18826
                 URL: https://issues.apache.org/jira/browse/HADOOP-18826
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/azure
    Affects Versions: 3.3.1, 3.3.2, 3.3.3, 3.3.4, 3.3.5, 3.3.6
            Reporter: Sergey Shabalov
            Assignee: Anuj Modi
            Priority: Major
             Fix For: 3.4.0, 3.3.9
         Attachments: test_hadoop-azure-3_3_1-FileSystem_getFileStatus - Copy.zip

I am using hadoop-azure-3.3.0.jar and have written this code:

{code:java}
static final String ROOT_DIR =
    "abfs://ssh-test...@sshadlsgen2.dfs.core.windows.net";

Configuration config = new Configuration();
config.set("fs.defaultFS", ROOT_DIR);
config.set("fs.adl.oauth2.access.token.provider.type", "ClientCredential");
config.set("fs.adl.oauth2.client.id", "");
config.set("fs.adl.oauth2.credential", "");
config.set("fs.adl.oauth2.refresh.url", "");
config.set("fs.azure.account.key.sshadlsgen2.dfs.core.windows.net", ACCESS_TOKEN);
config.set("fs.azure.skipUserGroupMetadataDuringInitialization", "true");

FileSystem fs = FileSystem.get(config);
System.out.println("\nfs:'" + fs + "'");
FileStatus status = fs.getFileStatus(new Path(ROOT_DIR)); // !!! Exception in 3.3.1+
System.out.println("\nstatus:'" + status + "'");
{code}

It worked properly up to and including 3.3.0.
But since 3.3.1 it fails with this exception:

{code:java}
Caused by: Operation failed: "Value for one of the query parameters specified in the request URI is invalid.", 400, HEAD, https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:218)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:181)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:179)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:942)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:924)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:846)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:507)
{code}

I performed some research and found the following.

In hadoop-azure-3.3.0.jar we see:

{code:java}
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore {
    ...
    public FileStatus getFileStatus(final Path path) throws IOException {
        ...
        // Line 604:
        op = client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + AbfsHttpConstants.ROOT_PATH);
        ...
    }
    ...
}
{code}

and this code produces the REST request:

{code:java}
https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs//?upn=false&action=getAccessControl&timeout=90
{code}

There is a trailing slash in the path part: "...ssh-test-fs//?upn=false...". This request works properly.
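For context on where that slash goes missing: an abfs root URI has an authority (container@account) but no path component, so plain java.net.URI (which Hadoop's Path wraps) returns an empty string for its path, while an explicit trailing slash yields "/". A minimal, self-contained sketch, reusing the container and account names from this report purely for illustration:

```java
import java.net.URI;

public class RootPathDemo {
    public static void main(String[] args) {
        // Authority only, no path component: getPath() yields the empty string.
        // This mirrors what path.toUri().getPath() returns for the root Path.
        URI root = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net");
        System.out.println("path='" + root.getPath() + "'");      // path=''

        // With an explicit trailing slash the path is "/", which is what the
        // getAclStatus request needs in order to target the filesystem root.
        URI rootSlash = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net/");
        System.out.println("path='" + rootSlash.getPath() + "'"); // path='/'
    }
}
```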
But from hadoop-azure-3.3.1.jar up to the latest hadoop-azure-3.3.6.jar we see:

{code:java}
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore {
    ...
    public FileStatus getFileStatus(final Path path) throws IOException {
        ...
        perfInfo.registerCallee("getAclStatus");
        // Line 846:
        op = client.getAclStatus(getRelativePath(path));
        ...
    }
    ...
    // Line 1492:
    private String getRelativePath(final Path path) {
        ...
        return path.toUri().getPath();
    }
}
{code}

and this code produces the REST request:

{code:java}
https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90
{code}

There is no trailing slash in the path part: "...ssh-test-fs?upn=false...". This happens because the new code, "path.toUri().getPath()", produces an empty string for the root path. The request fails with:

{code:java}
Caused by: Operation failed: "Value for one of the query parameters specified in the request URI is invalid.", 400, HEAD, https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:218)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:181)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:179)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:942)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:924)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:846)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:507)
{code}

Since this is the case for every hadoop-azure-3.3.*.jar version that uses log4j 2.* rather than 1.2.17, we cannot work around it by changing the version we use.

I attach a sample Maven project to try: test_hadoop-azure-3_3_1-FileSystem_getFileStatus - Copy.zip

--
This message was sent by Atlassian Jira
(v8.20.10#820010)