[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=640761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640761 ]

ASF GitHub Bot logged work on HDFS-6874:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Aug/21 16:31
            Start Date: 23/Aug/21 16:31
    Worklog Time Spent: 10m 
      Work Description: amahussein commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694125842



##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
##########
@@ -1857,18 +1859,57 @@ public synchronized void cancelDelegationToken(final Token<?> token
   }
 
   @Override
-  public BlockLocation[] getFileBlockLocations(final Path p,
-      final long offset, final long length) throws IOException {
+  public BlockLocation[] getFileBlockLocations(final Path p, final long offset,
+      final long length) throws IOException {
     statistics.incrementReadOps(1);
     storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
+    BlockLocation[] locations = null;
+    try {
+      if (isServerHCFSCompatible) {
+        locations = getFileBlockLocations(GetOpParam.Op.GETFILEBLOCKLOCATIONS,
+            p, offset, length);
+      } else {
+        locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS, p,
+            offset, length);
+      }
+    } catch (RemoteException e) {
+      if (isGetFileBlockLocationsException(e)) {

Review comment:
   ```suggestion
      // parsing the exception is needed only if the client thinks the service is compatible
      if (isServerHCFSCompatible && isGetFileBlockLocationsException(e)) {
   ```
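
For context, a rough sketch of how the whole try/catch might read with this guard applied. The fallback body, clearing the cached flag and retrying with the old operation, is an assumption here, since the diff is truncated at this point:

```java
// Sketch only; the fallback body below is assumed, not quoted from the PR.
try {
  if (isServerHCFSCompatible) {
    locations = getFileBlockLocations(GetOpParam.Op.GETFILEBLOCKLOCATIONS,
        p, offset, length);
  } else {
    locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS,
        p, offset, length);
  }
} catch (RemoteException e) {
  // Parsing the exception is needed only while the client still believes
  // the server is HCFS-compatible (the suggested guard above).
  if (isServerHCFSCompatible && isGetFileBlockLocationsException(e)) {
    // Assumed: remember the incompatibility and retry the old operation.
    isServerHCFSCompatible = false;
    locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS,
        p, offset, length);
  } else {
    throw e;
  }
}
```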

##########
File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##########
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
         () -> HttpFSUtils.jsonParse(conn));
     conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+    createHttpFSServer(false, false);
+    // Create a test directory
+    String pathStr = "/tmp/tmp-snap-diff-test";
+    createDirWithHttp(pathStr, "700", null);
+
+    Path path = new Path(pathStr);
+    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+        .get(path.toUri(), TestHdfsHelper.getHdfsConf());
+    // Enable snapshot
+    dfs.allowSnapshot(path);
+    Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+    // Create a test file
+    String file1 = pathStr + "/file1";
+    createWithHttp(file1, null);
+    HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+        "GETFILEBLOCKLOCATIONS", "length=10&offset10");
+    Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+    BlockLocation[] locations1 =
+        dfs.getFileBlockLocations(new Path(file1), 0, 1);
+    Assert.assertNotNull(locations1);
+
+    HttpURLConnection conn1 = sendRequestToHttpFSServer(file1,
+        "GET_BLOCK_LOCATIONS", "length=10&offset10");
+    Assert.assertEquals(HttpURLConnection.HTTP_OK, conn1.getResponseCode());
+    BlockLocation[] locations2 =
+        dfs.getFileBlockLocations(new Path(file1), 0, 1);
+    Assert.assertNotNull(locations2);
+  }

Review comment:
   Falling back from `GETFILEBLOCKLOCATIONS` to `GET_BLOCK_LOCATIONS` and caching the boolean flag is not tested. Maybe we need another unit test that assumes the operation is not supported and verifies the fallback to the old one.
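
To illustrate the kind of coverage being asked for, here is a self-contained JUnit 4 sketch of the fallback-and-cache pattern. `FakeClient` is a hypothetical stand-in for the client-side logic, not the actual `WebHdfsFileSystem`; a real test would drive the production client against a server (or stub) that rejects `GETFILEBLOCKLOCATIONS`.

```java
// Sketch only: FakeClient is a hypothetical stand-in, not WebHdfsFileSystem.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TestBlockLocationsFallbackSketch {

  /** Minimal model of the fallback-and-cache behaviour under review. */
  static class FakeClient {
    private boolean serverHCFSCompatible = true; // cached compatibility flag
    private final boolean serverSupportsNewOp;
    int newOpCalls = 0;
    int oldOpCalls = 0;

    FakeClient(boolean serverSupportsNewOp) {
      this.serverSupportsNewOp = serverSupportsNewOp;
    }

    String getFileBlockLocations() {
      try {
        if (serverHCFSCompatible) {
          return callNewOp();
        }
        return callOldOp();
      } catch (UnsupportedOperationException e) {
        // Fall back and remember that the server cannot serve the new op.
        serverHCFSCompatible = false;
        return callOldOp();
      }
    }

    private String callNewOp() {
      newOpCalls++;
      if (!serverSupportsNewOp) {
        throw new UnsupportedOperationException(
            "GETFILEBLOCKLOCATIONS not supported");
      }
      return "locations-from-new-op";
    }

    private String callOldOp() {
      oldOpCalls++;
      return "locations-from-old-op";
    }
  }

  @Test
  public void testFallsBackAndCachesFlag() {
    FakeClient client = new FakeClient(false);

    // First call: the new operation is tried once, fails, and falls back.
    assertEquals("locations-from-old-op", client.getFileBlockLocations());
    assertEquals(1, client.newOpCalls);
    assertEquals(1, client.oldOpCalls);

    // Second call: the cached flag skips the unsupported operation entirely.
    assertEquals("locations-from-old-op", client.getFileBlockLocations());
    assertEquals(1, client.newOpCalls); // unchanged: new op not retried
    assertEquals(2, client.oldOpCalls);
  }
}
```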






Issue Time Tracking
-------------------

    Worklog Id:     (was: 640761)
    Time Spent: 0.5h  (was: 20m)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> ---------------------------------------------
>
>                 Key: HDFS-6874
>                 URL: https://issues.apache.org/jira/browse/HDFS-6874
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 2.4.1, 2.7.3
>            Reporter: Gao Zhong Liang
>            Assignee: Weiwei Yang
>            Priority: Major
>              Labels: BB2015-05-TBR, pull-request-available
>         Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, although it is
> already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request,
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> .......
>       case GETFILEBLOCKLOCATIONS: {
>         response = Response.status(Response.Status.BAD_REQUEST).build();
>         break;
>       }
> ........
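
For illustration, a rough sketch of how the server side could return real data instead of BAD_REQUEST: call `FileSystem#getFileBlockLocations` and serialize the result to JSON. The class below, and the JSON field names it emits, are illustrative assumptions, not the actual HttpFS/WebHDFS wire format.

```java
// Illustrative sketch only; class and field names are assumptions, not HttpFS code.
import java.io.IOException;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.fs.BlockLocation;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

public final class BlockLocationJsonSketch {

  private BlockLocationJsonSketch() {
  }

  /** Converts a BlockLocation[] into a nested JSON object for an HTTP response. */
  @SuppressWarnings("unchecked")
  public static JSONObject toJson(BlockLocation[] locations) throws IOException {
    JSONArray array = new JSONArray();
    for (BlockLocation loc : locations) {
      Map<String, Object> entry = new LinkedHashMap<>();
      entry.put("offset", loc.getOffset());
      entry.put("length", loc.getLength());
      entry.put("hosts", Arrays.asList(loc.getHosts()));
      entry.put("names", Arrays.asList(loc.getNames()));
      entry.put("corrupt", loc.isCorrupt());
      array.add(new JSONObject(entry));
    }
    JSONObject inner = new JSONObject();
    inner.put("BlockLocation", array);
    JSONObject outer = new JSONObject();
    outer.put("BlockLocations", inner);
    return outer;
  }

  // Example usage (path, offset, and length are placeholders):
  //   BlockLocation[] locs = fs.getFileBlockLocations(new Path("/tmp/file1"), 0, 10);
  //   String json = toJson(locs).toJSONString();
}
```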


