[ 
https://issues.apache.org/jira/browse/HDFS-16203?focusedWorklogId=650316&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650316
 ]

ASF GitHub Bot logged work on HDFS-16203:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Sep/21 01:40
            Start Date: 14/Sep/21 01:40
    Worklog Time Spent: 10m 
      Work Description: tomscut commented on a change in pull request #3366:
URL: https://github.com/apache/hadoop/pull/3366#discussion_r707841118



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReport.java
##########
@@ -48,6 +49,8 @@ public StorageReport(DatanodeStorage storage, boolean failed, 
long capacity,
     this.nonDfsUsed = nonDfsUsed;
     this.remaining = remaining;
     this.blockPoolUsed = bpUsed;
+    this.blockPoolUsagePercent = capacity == 0 ? 0.0f :

Review comment:
       Thanks @tasanuma for your review. This can prevent some anomalies, such 
as a division by zero when capacity is 0. I will update it soon. 
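The diff hunk above ends mid-expression, but the visible `capacity == 0 ? 0.0f :` guard suggests the intent: avoid NaN/infinity when a storage reports zero capacity. A minimal standalone sketch of that guard (method and class names here are illustrative, not the actual HDFS code):

```java
// Hypothetical sketch of the guarded percentage calculation discussed in the
// review; the field and parameter names are assumptions based on the diff.
public class UsagePercent {

    // Block pool usage as a percentage of capacity; returns 0.0f when
    // capacity is 0 instead of dividing by zero (which would yield NaN).
    static float blockPoolUsagePercent(long bpUsed, long capacity) {
        return capacity == 0 ? 0.0f : (float) (bpUsed * 100.0 / capacity);
    }

    public static void main(String[] args) {
        System.out.println(blockPoolUsagePercent(50L, 200L)); // 25.0
        System.out.println(blockPoolUsagePercent(0L, 0L));    // 0.0, not NaN
    }
}
```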




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 650316)
    Time Spent: 4h  (was: 3h 50m)

> Discover datanodes with unbalanced block pool usage by the standard deviation
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-16203
>                 URL: https://issues.apache.org/jira/browse/HDFS-16203
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: tomscut
>            Assignee: tomscut
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2021-09-01-19-16-27-172.png
>
>          Time Spent: 4h
>  Remaining Estimate: 0h
>
> *Discover datanodes with unbalanced volume usage by the standard deviation.*
> *Several scenarios can lead to unbalanced datanode disk usage:*
>  1. A damaged disk is repaired and brought back online.
>  2. Disks are added to some datanodes.
>  3. Some disks are damaged, resulting in slow data writing.
>  4. Custom volume-choosing policies are in use.
> When disk usage is unbalanced, a sudden increase in datanode write traffic 
> can leave some disks busy with I/O while their volume usage stays low, 
> decreasing throughput across datanodes.
> We need to find these nodes in time to run the disk balancer or take other 
> action. Based on the volume usage of each datanode, we can calculate the 
> standard deviation of the volume usage: the more unbalanced the volumes, the 
> higher the standard deviation.
> *We can display the result on the NameNode web UI and sort it directly to 
> find the nodes whose volume usage is unbalanced.*
> *{color:#172b4d}This interface is only used to obtain metrics and does not 
> adversely affect NameNode performance.{color}*
>  
> {color:#172b4d}!image-2021-09-01-19-16-27-172.png|width=581,height=216!{color}
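The description proposes computing the standard deviation of per-volume usage to flag unbalanced datanodes. A small illustrative sketch of that calculation (not the actual HDFS implementation; the population standard deviation is assumed here):

```java
import java.util.Arrays;

// Illustrative sketch: population standard deviation of per-volume usage
// percentages for one datanode. A higher value means more unbalanced volumes.
public class VolumeUsageStdDev {

    static double stdDev(double[] usagePercents) {
        double mean = Arrays.stream(usagePercents).average().orElse(0.0);
        double variance = Arrays.stream(usagePercents)
                .map(u -> (u - mean) * (u - mean))
                .average().orElse(0.0);
        return Math.sqrt(variance);
    }

    public static void main(String[] args) {
        // Balanced node: small deviation; unbalanced node: large deviation.
        System.out.println(stdDev(new double[]{50, 52, 48, 50})); // ~1.41
        System.out.println(stdDev(new double[]{90, 10, 85, 15})); // ~37.58
    }
}
```

Sorting datanodes by this value on the NameNode web UI, as the description suggests, surfaces the candidates for disk balancing without scanning each node manually.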



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
