[ https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17721184#comment-17721184 ]

ASF GitHub Bot commented on HDFS-17001:
---------------------------------------

ayushtkn commented on code in PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#discussion_r1189349033


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java:
##########
@@ -725,4 +726,21 @@ public static Map<String, Object> toJsonMap(BlockLocation[] locations)
     m.put(BlockLocation.class.getSimpleName(), blockLocations);
     return m;
   }
+
+  public static String toJsonString(FsStatus status)
+      throws IOException {
+    return toJsonString(FsStatus.class, toJsonMap(status));
+  }
+
+  public static Map<String, Object> toJsonMap(FsStatus status)
+      throws IOException {

Review Comment:
   This doesn't throw IOException, so it can be removed; once you remove it from here, I think the ``throws IOException`` can be removed from the method above as well.
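   A minimal sketch of how the two helpers might look once the throws clause is dropped (the body of ``toJsonMap`` below is an illustrative guess based on the documented JSON fields, not the PR's actual implementation):

       public static String toJsonString(FsStatus status) {
         return toJsonString(FsStatus.class, toJsonMap(status));
       }

       public static Map<String, Object> toJsonMap(FsStatus status) {
         // No I/O happens here, so neither method needs to declare IOException.
         final Map<String, Object> m = new TreeMap<String, Object>();
         m.put("used", status.getUsed());
         m.put("remaining", status.getRemaining());
         m.put("capacity", status.getCapacity());
         return m;
       }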



##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:
##########
@@ -2178,6 +2179,19 @@ HdfsFileStatus decodeResponse(Map<?, ?> json) {
     return status.makeQualified(getUri(), f);
   }
 
+  @Override
+  public FsStatus getStatus(Path f) throws IOException {

Review Comment:
   nit:
   change ``f`` to ``path``



##########
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md:
##########
@@ -1190,6 +1191,28 @@ See also: [FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).getLinkTa
 
 See also: [FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).getFileLinkInfo
 
+### Get Status
+
+* Submit a HTTP GET request.
+
+        curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETSTATUS"
+
+  The client receives a response with a [`FsStatus` JSON object](#FsStatus_JSON_Schema):
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+        Transfer-Encoding: chunked
+
+        {
+            "FsStatus": {
+                "used": 0,
+                "remaining": 0,
+                "capacity":0
+            }

Review Comment:
   Can you try it on an actual cluster and get a better example, rather than having all zeros?



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java:
##########
@@ -2255,6 +2256,35 @@ public void testFileLinkStatus() throws Exception {
     }
   }
 
+  @Test
+  public void testFsStatus() throws Exception {
+    final Configuration conf = WebHdfsTestUtil.createConf();
+    try {
+      cluster = new MiniDFSCluster.Builder(conf)
+          .numDataNodes(1)

Review Comment:
   Datanodes default to 1, so this line isn't required.
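   A minimal sketch of the simplified setup, relying on the builder's default of one datanode:

       cluster = new MiniDFSCluster.Builder(conf).build();
       cluster.waitActive();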



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java:
##########
@@ -1535,6 +1543,10 @@ public Void run() throws IOException {
     };
   }
 
+  private long getStateAtIndex(long[] states, int index) {
+    return states.length > index ? states[index] : -1;
+  }

Review Comment:
   The same method is defined in DFSClient; can we make the definition there ``public static`` and use it here as well, rather than defining it twice?
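   A rough sketch of the suggested de-duplication, assuming the helper in DFSClient has the same shape as the copy quoted above (the call site shown is illustrative):

       // In org.apache.hadoop.hdfs.DFSClient: widen the existing helper so other callers can reuse it.
       public static long getStateAtIndex(long[] states, int index) {
         return states.length > index ? states[index] : -1;
       }

       // In NamenodeWebHdfsMethods: drop the private copy and delegate instead,
       // keeping the same index arguments as the current code.
       long used = DFSClient.getStateAtIndex(states, index);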



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java:
##########
@@ -2255,6 +2256,35 @@ public void testFileLinkStatus() throws Exception {
     }
   }
 
+  @Test
+  public void testFsStatus() throws Exception {
+    final Configuration conf = WebHdfsTestUtil.createConf();
+    try {
+      cluster = new MiniDFSCluster.Builder(conf)
+          .numDataNodes(1)
+          .build();
+      cluster.waitActive();
+
+      final WebHdfsFileSystem webHdfs =
+          WebHdfsTestUtil.getWebHdfsFileSystem(conf,
+              WebHdfsConstants.WEBHDFS_SCHEME);
+
+      final String path = "/foo";
+      OutputStream os = webHdfs.create(new Path(path));
+      os.write(new byte[1024]);
+
+      FsStatus fsStatus = webHdfs.getStatus(new Path("/"));
+      Assert.assertNotNull(fsStatus);
+
+      //used, free and capacity are non-negative longs
+      Assert.assertTrue(fsStatus.getUsed() >= 0);
+      Assert.assertTrue(fsStatus.getRemaining() >= 0);
+      Assert.assertTrue(fsStatus.getCapacity() >= 0);

Review Comment:
   There is already a static import for these, so the ``Assert.`` prefix isn't needed.
   
   Rather than just asserting the values are non-negative, can you get them from DistributedFileSystem and validate that they are the same?
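   A rough sketch of how that validation might look, using the existing static imports and assuming ``DistributedFileSystem`` is already available in the test (the exact assertions are illustrative):

       FsStatus webHdfsStatus = webHdfs.getStatus(new Path("/"));
       assertNotNull(webHdfsStatus);

       // Compare against the values reported directly by DistributedFileSystem.
       DistributedFileSystem dfs = cluster.getFileSystem();
       FsStatus dfsStatus = dfs.getStatus(new Path("/"));
       assertEquals(dfsStatus.getCapacity(), webHdfsStatus.getCapacity());
       assertEquals(dfsStatus.getUsed(), webHdfsStatus.getUsed());
       assertEquals(dfsStatus.getRemaining(), webHdfsStatus.getRemaining());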





> Support getStatus API in WebHDFS
> --------------------------------
>
>                 Key: HDFS-17001
>                 URL: https://issues.apache.org/jira/browse/HDFS-17001
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: webhdfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2023-05-08-14-34-51-873.png
>
>
> WebHDFS should support getStatus:
> !image-2023-05-08-14-34-51-873.png!


