[ https://issues.apache.org/jira/browse/HDFS-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14197449#comment-14197449 ]

Aaron T. Myers commented on HDFS-7331:
--------------------------------------

bq. The size of the map needs to be bounded. A cachemap can do the job.

Why does the size of the map need to be bounded? To take an extreme case, in a 
5,000-node cluster we'd be storing maybe an extra 30 bytes for each hostname, 
an extra 4 bytes for each IP address, and an extra 8 bytes for each long 
counter. Add in maybe another 32 bytes of object overhead per map entry, and 
that's roughly 74 bytes per entry, for a total of about 370KB (5,000 x 74 
bytes) on each DN. That hardly seems like something to worry about, 
considering most DN heaps are multiple GBs in size.
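For concreteness, if a bound is still wanted, a size-limited map is easy to
get from Guava, which is already on the Hadoop classpath. A minimal sketch
using a Guava LoadingCache with maximumSize; the class and method names here
(PerHostNetworkCounts, incrNetworkErrors, MAX_HOSTS) are illustrative, not
taken from the patch:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

/** Sketch of a size-bounded per-host counter map (names are illustrative). */
public class PerHostNetworkCounts {
  // Hypothetical bound; at ~74 bytes per entry, even 5,000 entries is ~370KB.
  private static final long MAX_HOSTS = 5000;

  // Guava evicts entries once maximumSize is exceeded, so the map
  // cannot grow without limit no matter how many hosts connect.
  private final LoadingCache<String, Map<String, Long>> counts =
      CacheBuilder.newBuilder()
          .maximumSize(MAX_HOSTS)
          .build(new CacheLoader<String, Map<String, Long>>() {
            @Override
            public Map<String, Long> load(String host) {
              return new ConcurrentHashMap<String, Long>();
            }
          });

  /** Bump the networkErrors count for the given remote host. */
  public void incrNetworkErrors(String host) {
    // getUnchecked is safe here: the loader cannot throw.
    counts.getUnchecked(host).merge("networkErrors", 1L, Long::sum);
  }

  /** Live view of the counts, suitable for serializing onto the jmx page. */
  public Map<String, Map<String, Long>> snapshot() {
    return counts.asMap();
  }
}
{code}

Either way the memory story is the same order of magnitude; the maximumSize
bound just caps the worst case if the set of hostnames turns out to be
unbounded (e.g. misbehaving DNS).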

> Add Datanode network counts to datanode jmx page
> ------------------------------------------------
>
>                 Key: HDFS-7331
>                 URL: https://issues.apache.org/jira/browse/HDFS-7331
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Charles Lamb
>            Assignee: Charles Lamb
>            Priority: Minor
>         Attachments: HDFS-7331.001.patch
>
>
> Add per-datanode counts to the datanode jmx page. For example, networkErrors 
> could be exposed like this:
> {noformat}
>   }, {
> ...
>     "DatanodeNetworkCounts" : "{\"dn1\":{\"networkErrors\":1}}",
> ...
>     "NamenodeAddresses" : 
> "{\"localhost\":\"BP-1103235125-127.0.0.1-1415057084497\"}",
>     "VolumeInfo" : 
> "{\"/tmp/hadoop-cwl/dfs/data/current\":{\"freeSpace\":3092725760,\"usedSpace\":28672,\"reservedSpace\":0}}",
>     "ClusterId" : "CID-4b38f2ae-5e58-4e15-b3cf-3ba3f46e724e"
>   }, {
> {noformat}
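A note on how the attribute above would likely surface: the datanode's jmx
page renders string-valued bean attributes such as NamenodeAddresses and
VolumeInfo from MXBean getters, so DatanodeNetworkCounts presumably gets a
matching getter. A hedged sketch follows; the getter name and the use of
Jetty's JSON helper are assumptions inferred from the sample output, not
copied from the patch:

{code:java}
import java.util.Map;

// Jetty's JSON helper, the same one DataNode.getVolumeInfo() uses.
import org.mortbay.util.ajax.JSON;

/** Sketch only; getter name and shape are inferred, not from the patch. */
class NetworkCountsBeanSketch {
  private final Map<String, Map<String, Long>> counts;

  NetworkCountsBeanSketch(Map<String, Map<String, Long>> counts) {
    this.counts = counts;
  }

  /** e.g. {"dn1":{"networkErrors":1}}, as in the sample output above. */
  public String getDatanodeNetworkCounts() {
    return JSON.toString(counts);
  }
}
{code}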



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
