[ https://issues.apache.org/jira/browse/OAK-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604808#comment-15604808 ]

Thomas Mueller commented on OAK-4882:
-------------------------------------

Yes, I think in theory we don't need to support updating entries (or better: 
don't need to _guarantee_ that updating entries always works).

The current code might look a bit more complex, but that complexity is not 
strictly needed. Right now the invalidate methods are called from 
DocumentNodeStore.invalidateNodeCache, for example, but only from within tests. 
And entries are updated, that's true (DocumentNodeStore.getChildren: "not 
enough children loaded - load more, and put that in the cache"). So if the 
updated entry is stored, that's better, but if it's not _always_ stored, 
that's not a problem. The alternative is to use different keys, but that 
also has drawbacks.
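
To illustrate that getChildren pattern (just a rough sketch with simplified 
names, not the actual DocumentNodeStore code): the updated entry is written 
back on a best-effort basis, and correctness doesn't depend on that write 
succeeding, because a later read can always fall back to the document store.

{code:java}
// Hypothetical sketch, not the real Oak code: the cache write after loading
// more children is best-effort. If the async cache drops it, a later reader
// simply falls back to the document store again.
interface ChildrenCache {
    Children getIfPresent(String parentPath);
    // May silently drop the entry, e.g. when the async queue is full.
    void putBestEffort(String parentPath, Children children);
}

class Children {
    final java.util.List<String> names;
    final boolean hasMore;
    Children(java.util.List<String> names, boolean hasMore) {
        this.names = names;
        this.hasMore = hasMore;
    }
}

class ChildrenLoader {
    private final ChildrenCache cache;
    ChildrenLoader(ChildrenCache cache) { this.cache = cache; }

    Children getChildren(String parentPath, int limit) {
        Children cached = cache.getIfPresent(parentPath);
        if (cached != null && (cached.names.size() >= limit || !cached.hasMore)) {
            return cached;               // enough children already cached
        }
        Children loaded = readChildrenFromStore(parentPath, limit);
        // Not enough children loaded before: load more and put the updated
        // entry back. Losing this write only means a cache miss later on.
        cache.putBestEffort(parentPath, loaded);
        return loaded;
    }

    private Children readChildrenFromStore(String parentPath, int limit) {
        // placeholder for the real document store read
        return new Children(java.util.Collections.emptyList(), false);
    }
}
{code}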

> PersistenceCache is a general mechanism

Yes, but I think we can limit what features we support and what guarantees we 
give. It's more important that the "async" part is always async than to 
guarantee that updates to existing entries always go through.


> Bottleneck in the asynchronous persistent cache
> -----------------------------------------------
>
>                 Key: OAK-4882
>                 URL: https://issues.apache.org/jira/browse/OAK-4882
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: cache, documentmk
>    Affects Versions: 1.5.10, 1.4.8
>            Reporter: Tomek Rękawek
>            Assignee: Tomek Rękawek
>             Fix For: 1.6
>
>
> The class responsible for accepting new cache operations that will be 
> handled asynchronously is 
> [CacheActionDispatcher|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java].
>  In case of high load, when the queue is full (=1024 entries), the 
> [add()|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L86]
>  method removes the oldest 256 entries. However, we can't afford to lose the 
> updates (as that may leave stale entries in the cache), so all the removed 
> entries are compacted into one big invalidate action.
> The compaction 
> ([CacheActionDispatcher#cleanTheQueue|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/persistentCache/async/CacheActionDispatcher.java#L97])
>  still runs while holding the lock taken in the add() method, so threads that 
> try to add something to the queue have to wait until cleanTheQueue() ends.
> Maybe we can optimise the CacheActionDispatcher#add -> cleanTheQueue part so 
> that it doesn't hold the lock for the whole time.
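
One possible direction for that optimisation, as a rough sketch only (the names 
below are made up and this is not the committed fix): hold the lock just long 
enough to drain the oldest actions and build the combined invalidate action 
outside the critical section, so other writers don't block during compaction.

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of the idea only; class and method names are assumptions.
class Dispatcher {
    private static final int MAX_SIZE = 1024;
    private static final int CLEANUP_COUNT = 256;

    private final Deque<CacheAction> queue = new ArrayDeque<>();
    private final Object lock = new Object();

    void add(CacheAction action) {
        List<CacheAction> removed = null;
        synchronized (lock) {
            if (queue.size() >= MAX_SIZE) {
                // Hold the lock only while draining the oldest entries...
                removed = new ArrayList<>(CLEANUP_COUNT);
                for (int i = 0; i < CLEANUP_COUNT && !queue.isEmpty(); i++) {
                    removed.add(queue.removeFirst());
                }
            }
            queue.addLast(action);
        }
        if (removed != null) {
            // ...and compact them into a single invalidate outside the lock.
            CacheAction invalidateAll = compactToInvalidate(removed);
            synchronized (lock) {
                queue.addFirst(invalidateAll);
            }
        }
    }

    private CacheAction compactToInvalidate(List<CacheAction> actions) {
        // placeholder: collect the affected keys into one invalidate action
        return new CacheAction() {};
    }

    interface CacheAction {}
}
{code}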



