[
https://issues.apache.org/jira/browse/HDFS-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693857#comment-14693857
]
Tsz Wo Nicholas Sze commented on HDFS-8859:
-------------------------------------------
The idea sounds good. Some comments:
- Both LightWeightGSet and the new LightWeightHashGSet use hash functions, so
LightWeightHashGSet does not seem like a good name. How about calling it
LightWeightResizableGSet?
- From your calculation, the patch reduces each block replica's object size by
about 45%. The JIRA summary is misleading: it seems to claim that the overall
DataNode memory footprint is improved by about 45%. For 10m replicas, the
original overall map entry object size is ~900 MB and the new size is ~500 MB.
Is that correct?
- Why add LightWeightGSet.putElement? The subclass can call super.put(..)
instead; see the sketch after these comments.
- There is a rewrite of LightWeightGSet.remove(..). Why? The old code is well
tested; please do not change it if possible.
- I took a quick look at the tests. I think we need some long-running tests to
verify correctness; see TestGSet.runMultipleTestGSet().
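To illustrate the putElement point, here is a minimal sketch (the class names
and the stand-in storage are illustrative, not the actual LightWeightGSet code):
the resizable subclass overrides put(..), grows its internal storage if needed,
and then simply delegates to super.put(..), so no separate putElement hook is
required in the base class.
{code}
import java.util.HashMap;
import java.util.Map;

// Stand-in for LightWeightGSet; only the put(..) path matters for this point.
class BaseGSet<K, E> {
  private final Map<K, E> table = new HashMap<>();  // simplified storage
  public E put(K key, E element) {
    return table.put(key, element);
  }
}

// The proposed resizable subclass: override put(..) and call super.put(..).
class ResizableGSet<K, E> extends BaseGSet<K, E> {
  @Override
  public E put(K key, E element) {
    expandIfNecessary();             // grow the internal storage when needed
    return super.put(key, element);  // reuse the existing, well-tested insert path
  }

  private void expandIfNecessary() {
    // resize logic intentionally omitted in this sketch
  }
}
{code}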
> Improve DataNode (ReplicaMap) memory footprint to save about 45%
> ----------------------------------------------------------------
>
> Key: HDFS-8859
> URL: https://issues.apache.org/jira/browse/HDFS-8859
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Critical
> Attachments: HDFS-8859.001.patch, HDFS-8859.002.patch,
> HDFS-8859.003.patch
>
>
> By using the following approach we can save about *45%* of the memory
> footprint for each block replica in DataNode memory (this JIRA only talks
> about the *ReplicaMap* in the DataNode). The details are:
> In ReplicaMap,
> In ReplicaMap,
> {code}
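> // block pool id -> (block id -> ReplicaInfo)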
> private final Map<String, Map<Long, ReplicaInfo>> map =
>     new HashMap<String, Map<Long, ReplicaInfo>>();
> {code}
> Currently we use a HashMap {{Map<Long, ReplicaInfo>}} to store the replicas
> in memory. The key is the block id of the block replica, which is already
> included in {{ReplicaInfo}}, so this memory can be saved. Also, each HashMap
> Entry carries an object overhead. We can implement a lightweight set similar
> to {{LightWeightGSet}}, but not of fixed size ({{LightWeightGSet}} uses a
> fixed-size entries array, usually a big value; an example is {{BlocksMap}};
> this avoids full GC since there is no need to resize). We should also be able
> to get an element by its key.
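> A minimal sketch of the kind of resizable, intrusive set described above (the
> interface and class names are illustrative only, not the actual patch code;
> elements are assumed to expose their key and a next reference):
> {code}
> import java.util.Objects;
>
> // Elements embed the bucket chain themselves, so no HashMap.Entry object is needed.
> interface Linked<K> {
>   K getKey();              // e.g. the block id already stored in ReplicaInfo
>   Linked<K> getNext();     // next element in the same bucket
>   void setNext(Linked<K> next);
> }
>
> class ResizableLightWeightSet<K, E extends Linked<K>> {
>   private Linked<K>[] buckets = new Linked[16];   // starts small, grows on demand
>   private int size;
>
>   @SuppressWarnings("unchecked")
>   public E get(K key) {
>     for (Linked<K> e = buckets[index(key, buckets.length)]; e != null; e = e.getNext()) {
>       if (Objects.equals(e.getKey(), key)) {
>         return (E) e;
>       }
>     }
>     return null;
>   }
>
>   public void put(E element) {
>     // Grow the bucket array instead of pre-allocating a huge fixed one.
>     if (size >= buckets.length * 3 / 4) {
>       resize(buckets.length * 2);
>     }
>     int i = index(element.getKey(), buckets.length);
>     element.setNext(buckets[i]);   // intrusive chaining: the element itself is the entry
>     buckets[i] = element;
>     size++;
>     // Replacing an existing key and remove(..) are omitted in this sketch.
>   }
>
>   private void resize(int newLength) {
>     Linked<K>[] old = buckets;
>     buckets = new Linked[newLength];
>     for (Linked<K> head : old) {
>       Linked<K> e = head;
>       while (e != null) {          // rehash each chain into the new array
>         Linked<K> next = e.getNext();
>         int i = index(e.getKey(), newLength);
>         e.setNext(buckets[i]);
>         buckets[i] = e;
>         e = next;
>       }
>     }
>   }
>
>   private int index(K key, int length) {
>     return (key.hashCode() & 0x7fffffff) % length;
>   }
> }
> {code}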
> Following is a comparison of the memory footprint if we implement a
> lightweight set as described.
> We can save:
> {noformat}
> SIZE (bytes)   ITEM
> 20             the key: a Long object (12 bytes object overhead + 8 bytes long)
> 12             HashMap Entry object overhead
> 4              reference to the key in the Entry
> 4              reference to the value in the Entry
> 4              hash field in the Entry
> {noformat}
> Total: -44 bytes
> We need to add:
> {noformat}
> SIZE (bytes)   ITEM
> 4              a reference to the next element, added in ReplicaInfo
> {noformat}
> Total: +4 bytes
> So in total we can save 44 - 4 = 40 bytes for each block replica.
> Currently one finalized replica needs around 46 bytes (note: we ignore memory
> alignment here).
> So we can save about 1 - (4 + 46) / (44 + 46) = *45%* of the memory for each
> block replica in the DataNode.
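> Spelling out the arithmetic per replica (ignoring alignment):
> {noformat}
> current cost:   44 (HashMap entry + boxed Long key) + 46 (finalized replica) = 90 bytes
> proposed cost:   4 (embedded next reference)        + 46 (finalized replica) = 50 bytes
> savings:        1 - 50/90 = ~44.4%, i.e. about 45%
> {noformat}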
>