That's my experience too. So let's go for the ConcurrentHashMap impl
(patch on JIRA) and then see how we do the invalidation part in a
2.1?
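
Just to make the invalidation idea concrete (a rough sketch only, nothing
decided - InvalidationBroadcaster/InvalidationTransport are placeholder names,
and the transport itself is whatever we pick, UDP/TCP or something else): a
JCache CacheEntryUpdatedListener/CacheEntryRemovedListener that broadcasts the
touched keys so the other nodes can drop them from their local map.

import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryRemovedListener;
import javax.cache.event.CacheEntryUpdatedListener;

// sketch only: broadcast updates/removals so other nodes can evict the key locally
public class InvalidationBroadcaster<K, V>
        implements CacheEntryUpdatedListener<K, V>, CacheEntryRemovedListener<K, V> {

    private final String cacheName;
    private final InvalidationTransport transport; // hypothetical UDP/TCP sender, to be defined

    public InvalidationBroadcaster(final String cacheName, final InvalidationTransport transport) {
        this.cacheName = cacheName;
        this.transport = transport;
    }

    @Override
    public void onUpdated(final Iterable<CacheEntryEvent<? extends K, ? extends V>> events)
            throws CacheEntryListenerException {
        broadcast(events);
    }

    @Override
    public void onRemoved(final Iterable<CacheEntryEvent<? extends K, ? extends V>> events)
            throws CacheEntryListenerException {
        broadcast(events);
    }

    private void broadcast(final Iterable<CacheEntryEvent<? extends K, ? extends V>> events) {
        for (final CacheEntryEvent<? extends K, ? extends V> event : events) {
            transport.send(cacheName, event.getKey()); // receivers do a local remove(key)
        }
    }

    public interface InvalidationTransport { // placeholder until we pick the wire protocol
        void send(String cacheName, Object key);
    }
}

Registering it would just be a standard MutableCacheEntryListenerConfiguration
on the cache configuration, so it stays plain JCache.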


Romain Manni-Bucau
Twitter: @rmannibucau
Blog: http://rmannibucau.wordpress.com/
LinkedIn: http://fr.linkedin.com/in/rmannibucau
Github: https://github.com/rmannibucau


2014-05-06 19:54 GMT+02:00 Mark Struberg <strub...@yahoo.de>:
> Well my personal experience only:
>
>
> 1.) I barely use distributed caches. I use Ehcache in most of my projects as 
> of today, but do not use the distribution feature much. Way too complicated.
>
> 2.) What actually IS useful is distributed cache invalidation. The caching 
> side is fine just selecting the values from my DB if they are not yet cached. 
> But if I change those values, then I really need some way to get rid of the 
> values in all the caches on all my cluster nodes.
>
> So from a purely personal point of view I would favour a mode which is really 
> fast as a local cache but has some way to distribute the invalidation of a 
> cache to all other nodes.
>
> Not sure how this fits into JCS - I don't know the codebase well enough to 
> judge.
>
> LieGrue,
> strub
>
>
> On Tuesday, 6 May 2014, 13:29, Romain Manni-Bucau <rmannibu...@gmail.com> 
> wrote:
>
> Here are some pseudo-code details about my first mail:
>>
>>New internals:
>>* NetworkTopology
>>* EntryRepartitor: computes which node(s) own a given key (primary + backups)
>>* Node (LocalNode which is the current impl and RemoteNode which is just a
>>remote facade relying on the network)
>>
>>NetworkTopology { // impl using udp/tcp or whatever
>>     Node[] findAll() // used by a background thread to check if there
>>                      // is a new node and, if so, rebalance the cluster
>>}
>>
>>Node { // remote and local API
>>     get(k), put(k, v) ... (Cache<K, V> primitive methods)
>>     Statistics getStatistics() // used by a background thread to
>>                                // aggregate stats on each node
>>}
>>
>>
>>EntryRepartitor {
>>     Node[] nodeAndBackups(key)
>>}
>>
>>
>>get(key) { // symmetrical for put of course
>>    Node[] nodes = entryRepartitor.nodeAndBackups(key);
>>    for (final Node node : nodes) {
>>         try {
>>             return node.get(key);
>>         } catch (final RemoteCacheException rce) { // API exception
>>             throw rce.getJCacheException();
>>         } catch (final Exception e) { // network exception, try the next node
>>             // continue
>>         }
>>    }
>>    throw new CacheException("no node reachable for key " + key); // all candidates failed
>>}
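>>
>>For EntryRepartitor, a naive first cut could simply hash the key over the
>>node list and take the following node(s) in the ring as backups (rough
>>sketch only, class and field names are placeholders, nothing decided; Node
>>is the interface sketched above):
>>
>>class HashEntryRepartitor {
>>    private final Node[] nodes;    // snapshot from NetworkTopology.findAll()
>>    private final int backupCount; // how many extra copies we want
>>
>>    HashEntryRepartitor(final Node[] nodes, final int backupCount) {
>>        this.nodes = nodes;
>>        this.backupCount = backupCount;
>>    }
>>
>>    Node[] nodeAndBackups(final Object key) {
>>        final int primary = (key.hashCode() & 0x7fffffff) % nodes.length;
>>        final Node[] owners = new Node[Math.min(backupCount + 1, nodes.length)];
>>        for (int i = 0; i < owners.length; i++) {
>>            owners[i] = nodes[(primary + i) % nodes.length]; // next nodes act as backups
>>        }
>>        return owners;
>>    }
>>}
>>
>>Consistent hashing would be nicer when the topology changes (less
>>rebalancing) but modulo is enough to validate the design.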
>>
>>Of course we'll get LocalNode implementing Node with the current impl
>>(ConcurrentHashMap) and RemoteNode will be a client view of the
>>LocalNode over the network.
>>
>>To keep it testable we need to be able to exercise a RemoteNode ->
>>LocalNode connection in the same JVM by manually creating two
>>CachingProviders.
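>>
>>The LocalNode side is then little more than the map plus some counters
>>(sketch only, assuming we keep ConcurrentHashMap as the store; how
>>Statistics wraps the counters is left open):
>>
>>import java.util.concurrent.ConcurrentHashMap;
>>import java.util.concurrent.atomic.AtomicLong;
>>
>>class LocalNode<K, V> { // would implement Node
>>    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<K, V>();
>>    private final AtomicLong hits = new AtomicLong();
>>    private final AtomicLong misses = new AtomicLong();
>>
>>    V get(final K key) {
>>        final V value = store.get(key);
>>        if (value == null) {
>>            misses.incrementAndGet();
>>        } else {
>>            hits.incrementAndGet();
>>        }
>>        return value;
>>    }
>>
>>    void put(final K key, final V value) {
>>        store.put(key, value);
>>    }
>>
>>    long getHits() {   // getStatistics() would expose these for the aggregation thread
>>        return hits.get();
>>    }
>>
>>    long getMisses() {
>>        return misses.get();
>>    }
>>}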
>>
>>wdyt?
>>
>>
>>Romain Manni-Bucau
>>Twitter: @rmannibucau
>>Blog: http://rmannibucau.wordpress.com/
>>LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>Github: https://github.com/rmannibucau
>>
>>
>>
>>2014-05-06 12:50 GMT+02:00 Romain Manni-Bucau <rmannibu...@gmail.com>:
>>> FYI I attached a patch using a ConcurrentHashMap here
>>> https://issues.apache.org/jira/browse/JCS-127
>>>
>>> It is pretty fast compared to the previous impl.
>>>
>>>
>>> Romain Manni-Bucau
>>> Twitter: @rmannibucau
>>> Blog: http://rmannibucau.wordpress.com/
>>> LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>> Github: https://github.com/rmannibucau
>>>
>>>
>>> 2014-05-06 8:31 GMT+02:00 Romain Manni-Bucau <rmannibu...@gmail.com>:
>>>> Hi guys,
>>>>
>>>> A few questions about JCS:
>>>>
>>>> 1) I played a bit with the remote cache server etc. and didn't find a lot
>>>> of use cases; do we keep it this way (linked to 4))?
>>>> 2) API: do we use JCache as the main API or do we keep the core one?
>>>> 3) Reviewing the JCache module I really wonder if we shouldn't use a
>>>> ConcurrentHashMap instead of the currently backing CompositeCache
>>>> and add two things on top of this "locally optimized" implementation:
>>>> a) eviction (a thread regularly iterating over local items to check
>>>> expiry would be enough - see the sketch after this list),
>>>> b) distribution (see 4))
>>>> 4) distributed mode: I wonder if we shouldn't rethink it and
>>>> potentially add Cache<K, V> listeners usable in JCache to know if
>>>> another node did something (useful to get consistent stats at least -
>>>> basically we need a way to aggregate each node's stats) + use a key
>>>> for each node to keep data on a single node + potentially add a backup
>>>> on another node.
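>>>>
>>>> To make 3a concrete, a sweeper along these lines would do (illustration
>>>> only, not a patch; the Entry shape and the 1s period are just assumptions):
>>>>
>>>> import java.util.Iterator;
>>>> import java.util.Map;
>>>> import java.util.concurrent.ConcurrentHashMap;
>>>> import java.util.concurrent.Executors;
>>>> import java.util.concurrent.ScheduledExecutorService;
>>>> import java.util.concurrent.TimeUnit;
>>>>
>>>> class ExpirySweeper<K, V> {
>>>>     static class Entry<V> {
>>>>         final V value;
>>>>         final long expiryTimestamp; // absolute ms, Long.MAX_VALUE = eternal
>>>>
>>>>         Entry(final V value, final long expiryTimestamp) {
>>>>             this.value = value;
>>>>             this.expiryTimestamp = expiryTimestamp;
>>>>         }
>>>>     }
>>>>
>>>>     private final ConcurrentHashMap<K, Entry<V>> store;
>>>>     private final ScheduledExecutorService scheduler =
>>>>             Executors.newSingleThreadScheduledExecutor();
>>>>
>>>>     ExpirySweeper(final ConcurrentHashMap<K, Entry<V>> store) {
>>>>         this.store = store;
>>>>         scheduler.scheduleWithFixedDelay(new Runnable() {
>>>>             @Override
>>>>             public void run() {
>>>>                 sweep();
>>>>             }
>>>>         }, 1, 1, TimeUnit.SECONDS);
>>>>     }
>>>>
>>>>     private void sweep() { // drop expired entries; weakly consistent iteration is fine
>>>>         final long now = System.currentTimeMillis();
>>>>         final Iterator<Map.Entry<K, Entry<V>>> it = store.entrySet().iterator();
>>>>         while (it.hasNext()) {
>>>>             if (it.next().getValue().expiryTimestamp <= now) {
>>>>                 it.remove();
>>>>             }
>>>>         }
>>>>     }
>>>> }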
>>>>
>>>> wdyt?
>>>>
>>>> I don't know how much JCS is used ATM and whether we can break the API
>>>> that much, but since that would be a 2.0 I think now is the moment.
>>>>
>>>>
>>>> Romain Manni-Bucau
>>>> Twitter: @rmannibucau
>>>> Blog: http://rmannibucau.wordpress.com/
>>>> LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>>> Github: https://github.com/rmannibucau
>>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
