[ https://issues.apache.org/jira/browse/OAK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484257#comment-15484257 ]

Michael Dürig commented on OAK-4635:
------------------------------------

To address the OOME issue we have seen while evaluating this approach on the 
OAK-4635-6 branch, I suggest we:
* Proactively remove mappings of old generations from the cache. I will do 
this in a follow-up commit.
* Reduce the memory footprint of the keys (stable ids of node states) of the 
cache. I will file a separate issue for this.
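To illustrate the first point, here is a minimal sketch of generation-aware eviction: each mapping records the generation it was created in, and mappings from older generations are dropped eagerly instead of waiting for capacity-based eviction. The class and method names (`GenerationCache`, `purgeGenerationsOlderThan`) are illustrative, not Oak's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of proactive, generation-based cache eviction.
// Names and types are assumptions for illustration, not Oak's NodeCache.
public class GenerationCache {

    private static final class Entry {
        final String value;
        final int generation;

        Entry(String value, int generation) {
            this.value = value;
            this.generation = generation;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public void put(String stableId, String value, int generation) {
        cache.put(stableId, new Entry(value, generation));
    }

    public String get(String stableId) {
        Entry e = cache.get(stableId);
        return e == null ? null : e.value;
    }

    // Proactively remove all mappings that belong to generations older
    // than the current one, instead of waiting for a capacity overflow.
    public void purgeGenerationsOlderThan(int currentGeneration) {
        cache.entrySet().removeIf(e -> e.getValue().generation < currentGeneration);
    }
}
```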

Beyond this we could try coming up with a more memory-efficient way to 
structure the cache. Since Java doesn't have memory-efficient structs, we 
suffer quite a bit of memory overhead from the extra instances per mapping 
(keys and entries). However, I'm reluctant to invest here, as effort, 
complexity and risk would be quite high and would block progress in other 
areas. In the end the extra memory spent here is a trade-off of our 
technology choice. 
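As a back-of-envelope illustration of that per-mapping overhead: each hash-map entry carries an object header and several references on top of the payload, and the key is yet another object with its own header. The layout numbers below (12-byte header, 4-byte compressed references, 8-byte alignment) are assumptions typical of a 64-bit JVM with compressed oops, not measurements of Oak's cache.

```java
// Hedged estimate of the per-mapping overhead of a HashMap-style cache
// on a 64-bit JVM with compressed oops. All sizes are assumptions.
public class EntryOverhead {

    // Round up to the JVM's 8-byte object alignment.
    private static int align8(int n) {
        return (n + 7) & ~7;
    }

    public static int perMappingBytes() {
        int objectHeader = 12; // mark word + compressed class pointer
        int hashField = 4;     // cached int hash
        int keyRef = 4;        // compressed reference to the key
        int valueRef = 4;      // compressed reference to the value
        int nextRef = 4;       // compressed reference to the next entry

        // The entry object itself.
        int entry = align8(objectHeader + hashField + keyRef + valueRef + nextRef);

        // A minimal separate key object (e.g. a stable-id wrapper with
        // one int field) adds at least another header plus its field.
        int keyObject = align8(objectHeader + 4);

        return entry + keyObject;
    }
}
```

Under these assumptions each mapping costs dozens of bytes before the payload itself is counted, which is why reducing the key footprint pays off at millions of entries.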

> Improve cache eviction policy of the node deduplication cache
> -------------------------------------------------------------
>
>                 Key: OAK-4635
>                 URL: https://issues.apache.org/jira/browse/OAK-4635
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>            Reporter: Michael Dürig
>            Assignee: Michael Dürig
>              Labels: performance
>             Fix For: Segment Tar 0.0.12
>
>         Attachments: OAK-4635.m, OAK-4635.pdf
>
>
> {{NodeCache}} uses one stripe per depth (of the nodes in the tree). Once its 
> overall capacity (default 1000000 nodes) is exceeded, it clears all nodes 
> from the stripe with the greatest depth. This can be problematic when the 
> stripe with the greatest depth contains most of the nodes, as clearing it 
> results in an almost empty cache. 
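The eviction policy described in the issue can be sketched as follows: one map ("stripe") per node depth, and a wholesale clear of the deepest non-empty stripe on overflow. This is an illustrative toy, not Oak's actual {{NodeCache}} code, but it reproduces the failure mode: if most nodes sit in the deepest stripe, one eviction empties almost the whole cache.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a per-depth striped cache with clear-deepest-stripe
// eviction. Illustrative only; not Oak's actual NodeCache.
public class StripedNodeCache {

    private final List<Map<String, String>> stripes = new ArrayList<>();
    private final int capacity;
    private int size;

    public StripedNodeCache(int capacity) {
        this.capacity = capacity;
    }

    public void put(String stableId, String node, int depth) {
        while (stripes.size() <= depth) {
            stripes.add(new HashMap<>());
        }
        if (size >= capacity) {
            evictDeepestStripe();
        }
        if (stripes.get(depth).put(stableId, node) == null) {
            size++;
        }
    }

    // Clears the non-empty stripe with the greatest depth. When most
    // nodes live at that depth, this empties almost the entire cache.
    private void evictDeepestStripe() {
        for (int d = stripes.size() - 1; d >= 0; d--) {
            Map<String, String> stripe = stripes.get(d);
            if (!stripe.isEmpty()) {
                size -= stripe.size();
                stripe.clear();
                return;
            }
        }
    }

    public int size() {
        return size;
    }
}
```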



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)