[ https://issues.apache.org/jira/browse/HDFS-17360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813045#comment-17813045 ]
ASF GitHub Bot commented on HDFS-17360:
---------------------------------------

slfan1989 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1920558332

> Hi @slfan1989, is the IOException mentioned here a fault on the DataNode (DN)?
> If so, there are exception-handling mechanisms in place to ensure that the
> entries stored in the map stay bounded, namely:
>
> 1. The readBlock method increments the counter for a blockId before reading
>    data, and decrements it when the read completes normally or throws an
>    exception.
> 2. The maximum number of concurrent read threads on a DN is bounded by the
>    xceiver thread configuration, so even when block reads fail, the total
>    count in the map cannot exceed the number of resident xceiver threads.
> 3. When there are no read requests, the map is empty.
>
> In addition, the ReadBlockIdCounts metric is meant to be used together with
> the xceiver thread metric: when a sudden increase in xceiver threads is
> detected and lasts for one or two minutes, the map can be used to locate the
> blocks being read abnormally.

Thank you for your response. My question is: if the IOWait on the DataNode is
high, the reads of many blocks may be blocked. In that case, would the map
store a large amount of block information, leading to very large JMX output?

> Record the number of times a block is read during a certain time period.
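The increment/decrement scheme described in points 1-3 above could be sketched roughly as follows. This is not the actual Hadoop code from PR #6505; the class and method names (`BlockReadCounter`, `readBlock`, `inFlightBlockCount`) are hypothetical, and the sketch ignores some races a production implementation would need to handle:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the counting scheme: increment a per-block counter
// when a read starts, decrement when it finishes or throws, so the map only
// holds blocks with in-flight reads and is empty when no reads are active.
public class BlockReadCounter {
    private final ConcurrentMap<Long, LongAdder> inFlightReads =
        new ConcurrentHashMap<>();

    public void readBlock(long blockId, Runnable doRead) {
        inFlightReads.computeIfAbsent(blockId, k -> new LongAdder()).increment();
        try {
            doRead.run(); // the actual block transfer would happen here
        } finally {
            // Decrement on both the normal and the exceptional path, and drop
            // the entry once no reads of this block are in flight, keeping the
            // map (and hence the exported JMX view) small.
            LongAdder counter = inFlightReads.get(blockId);
            counter.decrement();
            if (counter.sum() == 0) {
                inFlightReads.remove(blockId, counter);
            }
        }
    }

    public int inFlightBlockCount() {
        return inFlightReads.size();
    }
}
```

Under this scheme the map's size is bounded by the number of concurrent reads (i.e., the xceiver thread count), which is the bound point 2 above relies on; the open question in this thread is how large that bound becomes when high IOWait leaves many reads blocked at once.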
> ------------------------------------------------------------------------
>
>                 Key: HDFS-17360
>                 URL: https://issues.apache.org/jira/browse/HDFS-17360
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: huangzhaobo
>            Assignee: huangzhaobo
>            Priority: Major
>              Labels: pull-request-available

--
This message was sent by Atlassian Jira
(v8.20.10#820010)