[
https://issues.apache.org/jira/browse/HBASE-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jonathan Gray resolved HBASE-1590.
----------------------------------
Resolution: Won't Fix
Assignee: Jonathan Gray
All testing on 0.20 shows we are more than okay w.r.t. our heap sizing. We
will open a new issue against 0.21 if we need any further improvements.
> Extend TestHeapSize and ClassSize to do "deep" sizing of Objects
> ----------------------------------------------------------------
>
> Key: HBASE-1590
> URL: https://issues.apache.org/jira/browse/HBASE-1590
> Project: Hadoop HBase
> Issue Type: Improvement
> Affects Versions: 0.20.0
> Reporter: Jonathan Gray
> Assignee: Jonathan Gray
> Fix For: 0.20.1
>
>
> As discussed in HBASE-1554, there is a disconnect between how ClassSize
> calculates heap size and how our implementations need to report it.
> For example, the LRU block cache can be sized via ClassSize, but only
> shallowly: the backing ConcurrentHashMap is its largest memory consumer, yet
> ClassSize counts that map as a single reference. In our heapSize() reporting,
> we want to include *everything* reachable from that Object (see the sketch
> after this description).
> This issue is to resolve that dissonance. We may need to add a
> ClassSize.estimateDeep(), we may need to rethink our HeapSize interface, or
> we may leave it as is. The two primary goals of all this testing are to
> 1) ensure that if something is changed and the sizing is not updated, our
> tests fail, and 2) ensure our sizing is as accurate as possible.
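
To illustrate the shallow-vs-deep gap described above, here is a minimal
sketch. The layout constants, the CHM_ENTRY overhead, and the
DeepSizingSketch/deepSize() names are assumptions invented for this example;
they are not HBase's actual ClassSize API.

    // Illustrative sketch only -- constants and helpers here are assumptions
    // for this example, not HBase's actual ClassSize implementation.
    import java.util.concurrent.ConcurrentHashMap;

    public class DeepSizingSketch {
      // Assumed 64-bit JVM layout constants; real code would detect these.
      static final long REFERENCE = 8;       // one object reference
      static final long OBJECT_HEADER = 16;  // per-object header overhead
      static final long LONG_FIELD = 8;

      /** Values that can report their own retained heap, as with HeapSize. */
      interface HeapSize { long heapSize(); }

      // Assumed per-entry overhead of a ConcurrentHashMap entry:
      // entry header + key/value/next references + int hash.
      static final long CHM_ENTRY = OBJECT_HEADER + 3 * REFERENCE + 4;

      /** Shallow sizing: the backing map counts as a single reference. */
      static long shallowSize() {
        return OBJECT_HEADER
            + REFERENCE    // reference to the backing ConcurrentHashMap
            + LONG_FIELD;  // e.g. a maxSize field
      }

      /** Deep sizing: also charge each map entry and each cached value. */
      static long deepSize(ConcurrentHashMap<String, ? extends HeapSize> map) {
        long size = shallowSize();
        for (HeapSize value : map.values()) {
          // Keys consume heap too; a fuller sketch would charge them as well.
          size += CHM_ENTRY + value.heapSize();
        }
        return size;
      }
    }

The shallow number stays constant as the cache fills, while the deep number
grows with the entry count; a test asserting heapSize() accuracy would want
to catch exactly that difference.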
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.