[ https://issues.apache.org/jira/browse/JCR-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefan Guggisberg updated JCR-2442:
-----------------------------------

    Description: 
currently there are two configuration parameters that affect the performance of
client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server-roundtrips triggered by traversing 
the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case. in the
case of relatively deep tree structures a request for e.g. '/foo' can easily
cause a cache overflow, and root nodes might get evicted from the cache. a
subsequent request for '/foo' can then no longer be served from the cache but
will trigger yet another deep fetch, despite the fact that most of the tree
structure is still cached.
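
for illustration, here's a minimal sketch (plain java, not the actual jcr2spi
item cache) of this failure mode: in a size-bounded, insertion/LRU-ordered map
the entries cached first during a single deep fetch are typically the root of
the fetched subtree and its nearest descendants, so they are also the first to
be evicted once the result set exceeds the cache size. the figures match the
10k/11k scenario described further down.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class EvictionDemo {
        public static void main(String[] args) {
            final int cacheSize = 10_000;
            // size-bounded cache; with no intervening reads the LRU order
            // equals the insertion order
            Map<String, Object> cache =
                    new LinkedHashMap<String, Object>(cacheSize, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                    return size() > cacheSize;
                }
            };

            // deep fetch of '/foo' returns ~11k items; '/foo' itself arrives first
            cache.put("/foo", new Object());
            for (int i = 0; i < 11_000; i++) {
                cache.put("/foo/child-" + i, new Object());
            }

            // '/foo' has been evicted although most of its subtree is still cached
            System.out.println("'/foo' still cached: " + cache.containsKey("/foo")); // false
            System.out.println("cached items: " + cache.size());                     // 10000
        }
    }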

increasing the cache size OTOH bears the risk of OOM errors since the memory
footprint of the cached state seems to be quite large. i tried several
combinations of fetch depth and cache size, to no avail: i either ran into OOM
errors or performance was unacceptably slow due to an excessive number of
server roundtrips.

i further noticed that sync'ing existing cached state with the results of a
deep fetch is rather slow. e.g. an initial request to '/foo' returns 11k items
while the cache size is 10k, i.e. the cache cannot accommodate the entire
result set. assuming that /foo has been evicted, the following request to
'/foo' will trigger another deep fetch which this time takes considerably more
time since the result set needs to be sync'ed with the existing cached state.

using an LRU eviction policy and touching every node along the parent
hierarchy when requesting an item might be a solution.
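
a minimal sketch of that idea, assuming a path-keyed LRU map (class and method
names below are placeholders, not the existing jcr2spi item cache interfaces):
every lookup and insert first touches the ancestors of the given path, so the
chain from the root down to recently used items stays at the MRU end of the
cache and is the last to be evicted when a deep fetch overflows it.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class HierarchyAwareCache<V> {

        private final int maxSize;
        private final LinkedHashMap<String, V> entries;

        public HierarchyAwareCache(int maxSize) {
            this.maxSize = maxSize;
            // accessOrder = true: get() and put() move an entry to the MRU position
            this.entries = new LinkedHashMap<String, V>(maxSize, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                    return size() > HierarchyAwareCache.this.maxSize;
                }
            };
        }

        public V get(String path) {
            touchAncestors(path);
            return entries.get(path);
        }

        public void put(String path, V value) {
            touchAncestors(path);
            entries.put(path, value);
        }

        // touch every node along the parent hierarchy so that ancestors of
        // recently used items are never the eldest entries
        private void touchAncestors(String path) {
            for (int i = path.lastIndexOf('/'); i > 0; i = path.lastIndexOf('/', i - 1)) {
                entries.get(path.substring(0, i)); // a get() refreshes the LRU position
            }
            entries.get("/"); // root, if cached
        }
    }

with this ordering, the deep fetch for '/foo' in the earlier sketch would evict
leaves of older, unrelated subtrees first, and a repeated request for '/foo'
could still be answered from the cached hierarchy instead of triggering another
deep fetch.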

  was:
currently there are two configuration parameters that affect the performance of
client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server-roundtrips triggered by traversing
the node hierarchy in the client.

the current eviction policy doesn't seem to be ideal for this use case. in the 
case of relatively deep tree structures
a request for e.g. '/foo' can easily cause a cache overflow and root nodes 
might get evicted from the cache.
a following request to '/foo' cannot be served from cache but will cause a deep
fetch again, despite the fact that the major part of the tree structure is
still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory 
footprint of the cached state seems to be quite large.

using an LRU eviction policy and touching every node along the parent
hierarchy when requesting an item might be a solution.


> make internal item cache hierarchy-aware
> ----------------------------------------
>
>                 Key: JCR-2442
>                 URL: https://issues.apache.org/jira/browse/JCR-2442
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-jcr2spi
>            Reporter: Stefan Guggisberg
>            Assignee: Michael Dürig
>
