[ https://issues.apache.org/jira/browse/HDFS-16521?focusedWorklogId=750012&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-750012 ]

ASF GitHub Bot logged work on HDFS-16521:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Mar/22 11:08
            Start Date: 30/Mar/22 11:08
    Worklog Time Spent: 10m 
      Work Description: iwasakims commented on pull request #4107:
URL: https://github.com/apache/hadoop/pull/4107#issuecomment-1082994545


   I agree with @ayushtkn that modifying ClientProtocol is overkill for this use case. @virajjasani
   
   > While I agree that JMX metric for slownode is already available, not every 
downstreamer might have access to it directly, for instance in K8S managed 
clusters, unless port forward is enabled (not so common case in prod), HDFS 
downstreamer would not be able to access JMX metrics.
   
   Thanks to 
[JMXJsonServlet](https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java),
 we can get the metrics in JSON format via the NameNode's HTTP/HTTPS port 
without additional configuration. JSON over HTTP is usually easier to access 
from outside/downstream than Protobuf over RPC.
   
   ```
   $ curl 'namenode:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
   {
     "beans" : [ {
       "name" : "Hadoop:service=NameNode,name=NameNodeStatus",
       "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
       "NNRole" : "NameNode",
       "HostAndPort" : "localhost:8020",
       "SecurityEnabled" : false,
       "LastHATransitionTime" : 0,
       "BytesWithFutureGenerationStamps" : 0,
       "SlowPeersReport" : "[]",
       "SlowDisksReport" : null,
       "State" : "active"
     } ]
   }
   ```
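   
   If the report needs to be consumed programmatically (e.g. by the 
automation described in this issue), a minimal sketch like the following 
should do, assuming the NameNode web address shown above and using 
java.net.http plus Jackson; the `SlowPeersFetcher` class name is only 
illustrative:
   
   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   
   import com.fasterxml.jackson.databind.JsonNode;
   import com.fasterxml.jackson.databind.ObjectMapper;
   
   public class SlowPeersFetcher {
     public static void main(String[] args) throws Exception {
       // NameNode web UI address; switch to https and the https port if SSL is enabled.
       String jmxUrl =
           "http://namenode:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus";
   
       HttpResponse<String> response = HttpClient.newHttpClient().send(
           HttpRequest.newBuilder(URI.create(jmxUrl)).GET().build(),
           HttpResponse.BodyHandlers.ofString());
   
       // JMXJsonServlet returns {"beans":[{...}]}; SlowPeersReport is a JSON
       // string inside the bean ("[]" when no slow peers are currently tracked).
       JsonNode bean = new ObjectMapper().readTree(response.body()).get("beans").get(0);
       System.out.println("SlowPeersReport: " + bean.get("SlowPeersReport").asText());
     }
   }
   ```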
   
   How about enhancing the metrics if the current information in 
SlowPeersReport is insufficient?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 750012)
    Time Spent: 1h 10m  (was: 1h)

> DFS API to retrieve slow datanodes
> ----------------------------------
>
>                 Key: HDFS-16521
>                 URL: https://issues.apache.org/jira/browse/HDFS-16521
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In order to build automation around slow datanodes that regularly show up 
> in the slow peer tracking report (e.g. decommission such nodes, queue them 
> up for external processing, and add them back to the cluster once the 
> underlying issues are fixed), we should expose a DFS API to retrieve all 
> slow nodes at a given time.
> Providing such an API would also help add an option to "dfsadmin -report" 
> that lists slow datanode info for operators to review, a particularly 
> useful filter on larger clusters.


