aajisaka commented on a change in pull request #3148:
URL: https://github.com/apache/hadoop/pull/3148#discussion_r660477088
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##########

```
@@ -340,8 +343,7 @@ public static InetSocketAddress createSocketAddr(String target) {
   private DataNodePeerMetrics peerMetrics;
   private DataNodeDiskMetrics diskMetrics;
   private InetSocketAddress streamingAddr;
-
-  // See the note below in incrDatanodeNetworkErrors re: concurrency.
+
   private LoadingCache<String, Map<String, Long>> datanodeNetworkCounts;
```

Review comment:
I think `HashMap<String, LongAdder>` is more efficient than `ConcurrentHashMap<String, Long>`, because the `LongAdder` instance in the map is never replaced in this case.

Review comment (follow-up on the same hunk):
Oh, I found the interface:
```
@Override // DataNodeMXBean
public Map<String, Map<String, Long>> getDatanodeNetworkCounts() {
  return datanodeNetworkCounts.asMap();
}
```
Since the interface cannot be changed, it's okay to use `ConcurrentHashMap`. +1.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
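For illustration only, here is a minimal sketch (none of it from the PR) of the trade-off discussed above: a per-host error counter whose `LongAdder` is created once and then only incremented, contrasted with the boxed `Map<String, Long>` that the MXBean-style getter forces on read. The `NetworkErrorCounter` class and its method names are hypothetical, and the sketch uses `ConcurrentHashMap<String, LongAdder>` rather than the plain `HashMap` mentioned in the comment so that the first insertion of each key is also thread-safe without external locking.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch; not the actual DataNode code.
public class NetworkErrorCounter {

  // Each LongAdder is created once per host and never replaced afterwards,
  // so increments never have to write to the map itself.
  private final ConcurrentHashMap<String, LongAdder> errorsPerHost =
      new ConcurrentHashMap<>();

  public void incrError(String host) {
    // computeIfAbsent creates the adder only on first use; after that the
    // entry is effectively read-only and LongAdder absorbs the contention.
    errorsPerHost.computeIfAbsent(host, h -> new LongAdder()).increment();

    // Baseline alternative with ConcurrentHashMap<String, Long>: every
    // increment replaces the boxed Long under a compare-and-swap retry, e.g.
    //   counts.merge(host, 1L, Long::sum);
  }

  // A getter constrained to Map<String, Long> (as DataNodeMXBean is) forces
  // a conversion from LongAdder to boxed Long on every read.
  public Map<String, Long> snapshot() {
    Map<String, Long> out = new HashMap<>();
    errorsPerHost.forEach((host, adder) -> out.put(host, adder.sum()));
    return out;
  }
}
```

The per-read copy in `snapshot()` is the price of the `LongAdder` design when the return type must stay `Map<String, Long>`; keeping `ConcurrentHashMap<String, Long>` avoids that conversion entirely, which appears to be why the reviewer concluded it is acceptable here.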