Re: [infinispan-dev] XSite Performance
On 5/18/13 3:25 AM, Erik Salter wrote:

Hi all, I've spent quite a bit of time with the existing XSite implementation, getting my solution to run in multiple data centers. I've been talking with Mircea and Bela on and off over the past few weeks, but since this affects the community and potentially commercial customers, I wanted to share with a wider audience. The problems I see are as follows:

1. A bridge end will see all the traffic – sending and receiving – for all nodes within a cluster.
2. The bridge end of a site will apply each change with the bridge end as the transaction originator.

The reasons for this were:
- WAN link between site masters (high latency, possibly low bandwidth); send the data only once across that link
- Minimize the number of site masters; simplifies cross-site cluster configuration (not yet implemented)
- Minimize the number of site masters; simplifies firewall management (not yet done)

In my deployment, this can be three sites backing up their data to the other two.

For stage 1, we said we would not support this unless the data was disjoint, i.e. SFO owning and backing up keys A-K to LON and LON owning and backing up keys L-Z to SFO. If both sites had the same or overlapping key set, then we'd support it if the sites were modifying the key set at *different times*. Are you accessing (changing) the same keys in different sites at the same time? This would mean we've progressed to the second stage already :-)

So for 3 sites of 12 nodes each, a single bridge end will see all 36 nodes' worth of traffic.

Wouldn't it be 24? We also have to differentiate between incoming and outgoing traffic, and between internal (site-local) and external traffic (between sites). A bridge end (= site master) sees internal traffic, but we could exclude the site master from getting internal traffic by installing a consistent hash function which excludes the site master from storing any data, so handling internal traffic doesn't slow the site master down.
Has this been done yet? A site master does get external traffic (from the other 2 sites), but I don't think it should be overwhelmed by it because:

#1 Only the result of a successful transaction is sent to the backup sites. Intermediate modifications/prepares are *not* seen by the site master (if we exclude it from storing data, as mentioned above). I would say that the traffic from a transaction is usually a fraction of that leading up to the transaction.

#1a The above refers to ASYNC replication between sites. We don't recommend SYNC replication; the reasons are detailed in the I-RAC wiki [1].

#2 Infinispan caches are designed for high read / low write access, which naturally keeps traffic down. Am I smelling an abuse of this design pattern here? :-)

This breaks linear scalability. In my QA's testing, a 3 DC cluster of 6 nodes has about 1/10 the throughput of a single cluster.

Is this with high reads and async transactional xsite replication excluding the site master?

I think I-RAC solves some of the problem, like the reliable sending of data, but it doesn't really help with performance in high-throughput cases. (Note: FWIW, my apps do about an 8-12:1 read/write ratio.)

Hmm, so you are indeed using the cache in the right way... The traffic going between the site masters should then be minimal, even if we get traffic from 2 sites...

So I've prototyped the following:

1. Load-balanced applying of the changes from a remote SiteMaster across all local nodes in a cluster. The basics are that there is still a single SiteMaster (thereby not breaking the existing JGroups model). This is okay, since it's the same bandwidth pipe, and as long as there is no unmarshalling, it's a simple buffer copy. The difference is that the messages are now forwarded to other nodes in the local cluster and delivered to the ISPN layer there for unmarshalling and data application. Note that this does NOT break XSite synchronous replication, as I'm still preserving the originating site.
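The "exclude the site master from storing data" idea discussed above can be sketched roughly as follows. This is a hypothetical illustration only; the class and method names are not Infinispan's consistent-hash SPI:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: compute key ownership over the member list with the
// site master filtered out, so the bridge end never stores data and only
// relays cross-site traffic. Not the real Infinispan ConsistentHashFactory.
public class ExcludeSiteMasterExample {

    public static List<String> ownershipCandidates(List<String> members, String siteMaster) {
        List<String> candidates = new ArrayList<>(members);
        candidates.remove(siteMaster); // the site master relays; it owns no segments
        // degenerate case: a one-node cluster must still store its data somewhere
        return candidates.isEmpty() ? members : candidates;
    }

    public static void main(String[] args) {
        // with 12 nodes per site, 11 would share the data and 1 would relay
        List<String> members = List.of("SM", "B", "C");
        System.out.println(ownershipCandidates(members, "SM")); // [B, C]
    }
}
```

A real implementation would plug this filtering into whatever component computes segment ownership, so rebalancing still works if the site master changes.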
Do you forward traffic to random nodes in the site? This is what we discussed in our last call, and I'm curious to see what the numbers are. Have you given excluding the site master from storing data a try too? It should be relatively simple to install a consistent hash which excludes the SM. Note that I don't think SYNC xsite replication is feasible, for the reasons listed in [1]!

2. I also needed more intelligent application of the data that is replicated. My local cluster will save data to 8-9 caches that need to be replicated. Instead of replicating data on cache boundaries, I consolidated the data to only replicate an aggregate object. In turn, I have a custom BackupReceiver implementation that takes this object and expands it into the requisite data for the 8-9 caches. Since these caches are a mixture of optimistic and pessimistic modes, I made liberal use of the Distributed
Re: [infinispan-dev] configuring fetchInMemoryState for topology caches
On May 16, 2013, at 12:00 PM, Mircea Markus mmar...@redhat.com wrote: Hi Galder, Whilst reviewing Tristan's pull request for ISPN-3008 [1] I saw that we allow configuring fetchInMemoryState for topology caches and wondered why we do that? ^ Not sure exactly what you mean… When a node joins, it needs to receive the topology from other nodes to be able to provide topology headers with the correct info to the clients. That's why the topology caches can be configured to fetch in-memory state. Alternatively, since the cache is replicated, a similar effect can be achieved with a cluster cache loader. This was introduced a while back as a result of a bug with state transfer (I think it was something related to https://issues.jboss.org/browse/EDG-44) Shouldn't it be enabled by default/enforced? ^ Either that, or the cluster cache loader is used, both of which serve the same purpose. [1] https://github.com/infinispan/infinispan/pull/1802 Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) -- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] configuring fetchInMemoryState for topology caches
On 05/21/2013 08:58 AM, Galder Zamarreño wrote: Shouldn't it be enabled by default/enforced? ^ Either that, or the cluster cache loader is used, both of which serve the same purpose. I think what Mircea is getting at is that there is an intention to deprecate / remove the CCL. I think that we can do that in 6.0 (with the CacheStore redesign) and remove all potential users of CCL (including the lazy topology transfer). Tristan ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
[infinispan-dev] NPE with Cache.replace()
Can someone investigate why CacheImpl.replaceInternal() throws an NPE ? I can reproduce this every time. Using the latest JDG. See the attached stack trace for details. -- Bela Ban, JGroups lead (http://www.jgroups.org) 11:39:36,342 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Incoming-2,shared=tcp) ISPN71: Caught exception when handling command SingleRpcCommand{cacheName='default', command=ReplaceCommand{key=conf, oldValue=org.infinispan.remoting.MIMECacheEntry@e9e0403b, newValue=org.infinispan.remoting.MIMECacheEntry@e9e0403b, lifespanMillis=-1000, maxIdleTimeMillis=-1000, flags=null, successful=true, ignorePreviousValue=false}}: java.lang.NullPointerException at org.infinispan.CacheImpl.replaceInternal(CacheImpl.java:915) at org.infinispan.CacheImpl.replace(CacheImpl.java:894) at org.infinispan.DecoratedCache.replace(DecoratedCache.java:206) at org.infinispan.xsite.BackupReceiverImpl$BackupCacheUpdater.visitReplaceCommand(BackupReceiverImpl.java:116) at org.infinispan.commands.write.ReplaceCommand.acceptVisitor(ReplaceCommand.java:70) at org.infinispan.xsite.BackupReceiverImpl.handleRemoteCommand(BackupReceiverImpl.java:75) at org.infinispan.xsite.BackupReceiverRepositoryImpl.handleRemoteCommand(BackupReceiverRepositoryImpl.java:87) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromRemoteSite(CommandAwareRpcDispatcher.java:255) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:230) at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:665) 
[jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.JChannel.up(JChannel.java:719) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1008) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:607) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:507) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:482) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:463) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.relay.Relayer$Bridge.receive(Relayer.java:302) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.JChannel.up(JChannel.java:749) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1012) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.RSVP.up(RSVP.java:209) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.FRAG2.up(FRAG2.java:192) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.FlowControl.up(FlowControl.java:461) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:300) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.UNICAST2.removeAndPassUp(UNICAST2.java:920) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.UNICAST2.handleBatchReceived(UNICAST2.java:856) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:481) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at org.jgroups.protocols.FD.up(FD.java:274) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] at 
org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.3.0.CR2.jar:3.3.0.CR2] ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] NPE with Cache.replace()
[Mircea] Might be a problem in xsite replication when the keys that are updated are not present. This happens all the time as xsite state transfer is not yet implemented: a new site comes online, no state transfer, and an xsite replication update will not be able to replace non-existing keys. I suggest using a straight put() for updates, or a new internal replaceIfPresentOrPutIfNotPresent()... On 5/21/13 11:43 AM, Bela Ban wrote: Can someone investigate why CacheImpl.replaceInternal() throws an NPE ? I can reproduce this every time. Using the latest JDG. -- Bela Ban, JGroups lead (http://www.jgroups.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
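The failure mode Mircea describes can be seen with plain ConcurrentMap semantics, which the Cache API follows: replace() is a no-op on a missing key, while a straight put() applies the update either way. A sketch of the difference (not the actual BackupReceiver code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReplaceVsPut {
    public static void main(String[] args) {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

        // replace() on an absent key (the new-site, no-state-transfer case)
        // does nothing and returns null
        String previous = cache.replace("conf", "v1");
        System.out.println(previous);                  // null
        System.out.println(cache.containsKey("conf")); // false

        // a straight put() behaves like the suggested
        // replaceIfPresentOrPutIfNotPresent(): the update always lands
        cache.put("conf", "v1");
        System.out.println(cache.get("conf"));         // v1
    }
}
```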
Re: [infinispan-dev] ISPN-1797 MongoDB cachestore - pending question
On 20 May 2013, at 22:12, Guillaume SCHEIBEL guillaume.schei...@gmail.com wrote: Hi Sanne, You probably missed the notification but there is still one pending question I asked you on the pull request: @Sanne, I would like the MongoDBCacheStoreConfig constructor to throw an exception if the port is not properly set (between 1 and 65535), but how do I handle it in the caller? The port validation is not mandatory, as the user would notice the problem when trying to connect to the mongodb instance, i.e. at startup time vs configuration time (if you throw the exception). If anything, I think the validation should be moved into the MongoDBCacheStoreConfigurationBuilder.validate() method (overridden from AbstractStoreConfigurationBuilder), which would throw a (runtime) ConfigurationException. So I can't rethrow it from adapt()? Thanks Guillaume Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
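Mircea's suggestion, moving the range check into the builder's validate() override, might look like the sketch below. The class name and exception are stand-ins written from the thread (IllegalArgumentException in place of Infinispan's ConfigurationException), not the actual MongoDB store code:

```java
// Sketch: validate the port at configuration time rather than connect time.
public class MongoDBStoreBuilderSketch {
    private int port = 27017; // mongod default

    public MongoDBStoreBuilderSketch port(int port) {
        this.port = port;
        return this;
    }

    // in Infinispan this would override AbstractStoreConfigurationBuilder.validate(),
    // which the configuration framework calls before the configuration is built
    public void validate() {
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port must be between 1 and 65535: " + port);
        }
    }

    public static void main(String[] args) {
        new MongoDBStoreBuilderSketch().port(27017).validate(); // passes silently
        try {
            new MongoDBStoreBuilderSketch().port(70000).validate();
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected at configuration time: " + expected.getMessage());
        }
    }
}
```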
Re: [infinispan-dev] Supporting notifications for entries expired while in the cache store - ISPN-694
On May 6, 2013, at 2:20 PM, Mircea Markus mmar...@redhat.com wrote: On 3 May 2013, at 20:15, Paul Ferraro wrote: Is it essential? No - but it would simplify things on my end. If Infinispan can't implement expiration notifications, then I am forced to use immortal cache entries and perform expiration myself. To do this, I have to store meta information about the cache entry along with my actual cache values, which normally I would get for free via mortal cache entries. In the scope of 5.2, what Galder suggested was to fully support notifications for the entries in memory. In order to fully support your use case you'd need to add some code to trigger notifications in the cache store as well - I think that shouldn't be too difficult. What cache store implementation are you using anyway? ^ Personally, I'd do in-memory entry expiration notifications for 5.2, and I'd leave cache store based entry expiration for 6.0, when we'll revisit the cache store API and can address cache store based entry expiration notification properly. Agree everyone? So, it would be nice to have. If I have to wait for 6.0 for this, that's ok. On Thu, 2013-05-02 at 17:03 +0200, Galder Zamarreño wrote: Hi, Re: https://issues.jboss.org/browse/ISPN-694 We've got a little problem here. Paul requires that, when entries that expired while in the cache store are loaded, we send expiration notifications for them. The problem is that expiration checking is currently done in the actual cache store implementations, which makes supporting this (even outside the purgeExpired business) specific to each cache store. Not ideal. The alternative would be for CacheLoaderInterceptor to load, do the checks and then remove the entries accordingly. The big problem here is that you're imposing a way to deal with expiration handling for all cache store implementations, and some might be able to do these checks and removals in a more efficient way if they were left to do it themselves.
For example, having to load all entries and then decide which are to expire might require a lot of work, instead of potentially communicating directly with the cache store (imagine a remote cache store…) and asking it to return only those entries whose expiry has not passed. However, even if a cache store can do that, it would lead to loading only those entries not expired, but then how do you send the notifications if those expired entries have been filtered out? You probably need multiple load methods here... @Paul, do you really need this for your use case? The simplest thing to do might be to go for option 1, and let each cache store send notifications for expired entries for the moment, and then in 6.0 revise not only the API for purgeExpired, but also the API for load/loadAll() to find a way that, if any expiry listeners are in place, a different method can be called on the cache store that signals it to return all entries, both expired and non-expired, and then let the CacheLoaderInterceptor send notifications from a central location. Thoughts? Cheers, -- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
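The centralized alternative Galder describes (the store returns everything, and the interceptor checks and notifies in one place) can be sketched with simplified stand-in types; none of this is the real cache store SPI:

```java
import java.util.ArrayList;
import java.util.List;

public class CentralExpiryCheck {

    // simplified stand-in for a cache store entry carrying expiry metadata
    public static class StoredEntry {
        public final Object key;
        public final long expiryTime; // -1 means immortal

        public StoredEntry(Object key, long expiryTime) {
            this.key = key;
            this.expiryTime = expiryTime;
        }

        public boolean isExpired(long now) {
            return expiryTime >= 0 && expiryTime <= now;
        }
    }

    public interface ExpiryListener {
        void expired(Object key);
    }

    // the store hands back everything, expired entries included; filtering and
    // notification happen here, centrally, identically for every store type
    public static List<StoredEntry> loadAndNotify(List<StoredEntry> loaded, long now,
                                                  ExpiryListener listener) {
        List<StoredEntry> live = new ArrayList<>();
        for (StoredEntry e : loaded) {
            if (e.isExpired(now)) {
                listener.expired(e.key); // central notification point
            } else {
                live.add(e);
            }
        }
        return live;
    }
}
```

The cost Galder mentions is visible here: the store must return expired entries too, otherwise there is nothing left to notify about.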
Re: [infinispan-dev] configuring fetchInMemoryState for topology caches
I wouldn't want to deprecate CCL; I think it definitely has a purpose, at least in invalidation mode. Even in replication mode, having a lazy alternative to state transfer may be useful. Maybe not for the topology cache, but it might make sense for large caches. On Tue, May 21, 2013 at 4:36 PM, Mircea Markus mmar...@redhat.com wrote: On 21 May 2013, at 08:30, Tristan Tarrant ttarr...@redhat.com wrote: On 05/21/2013 08:58 AM, Galder Zamarreño wrote: Shouldn't it be enabled by default/enforced? ^ Either that, or the cluster cache loader is used, both of which serve the same purpose. I think what Mircea is getting at is that there is an intention to deprecate / remove the CCL. I think that we can do that in 6.0 (with the CacheStore redesign) and remove all potential users of CCL (including the lazy topology transfer). Mind reader :-) Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] How to get Grouper&lt;T&gt;#computeGroup(key) return value to map to physical Node?
Hmmm, one hacky way might be to hold on to the grouper instance passed in via configuration, and once the cache manager has been started, set it in the grouper and use it to query either the address, or the physical address (via EmbeddedCacheManager.getTransport…)? On May 14, 2013, at 6:34 PM, cotton-ben ben.cot...@alumni.rutgers.edu wrote: I am playing with the Infinispan 5.3 quick-start package to exercise my usage of the Grouping API. As we know, the quick-start package is made up of AbstractNode.java, Node0.java, Node1.java and Node2.java (plus a util/listener). My ambition is to demonstrate 1. that any Cache&lt;K,V&gt;.put("DIMENSION.xxx", v) will flow through my Grouper and pin that key in the Cache at @Node=0, and 2. that any Cache&lt;K,V&gt;.put("POSITION.xxx", v) will flow through my Grouper and pin that key in the Cache at either @Node=1 or @Node=2. Here is my AbstractNode#createCacheManagerProgramatically() config:

private static EmbeddedCacheManager createCacheManagerProgramatically() {
    return new DefaultCacheManager(
        GlobalConfigurationBuilder.defaultClusteredBuilder()
            .transport().addProperty("configurationFile", "jgroups.xml")
            .build(),
        new org.infinispan.configuration.cache.ConfigurationBuilder()
            .clustering()
            .cacheMode(CacheMode.DIST_SYNC)
            .hash().numOwners(1).groups().enabled(Boolean.TRUE)
            .addGrouper(new com.jpmorgan.ct.lri.cs.ae.test.DimensionGrouper&lt;String&gt;())
            .build()
    );
}

And here is my Grouper&lt;T&gt; implementation:

public class DimensionGrouper&lt;T&gt; implements Grouper&lt;String&gt; {
    public String computeGroup(String key, String group) {
        if (key.indexOf("DIMENSION.") == 0) {
            String groupPinned = "0";
            System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]"); // node = exactly 0
            return groupPinned;
        } else if (key.indexOf("POSITION.") == 0) {
            String groupPinned = "" + (1 + (int) (Math.random() * 2));
            System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]"); // node = {1,2}
            return groupPinned;
        } else {
            return null;
        }
    }
    public Class&lt;String&gt; getKeyType() { return String.class; }
}

The logic is working correctly
... i.e. when from Node2.java I call

for (int i = 0; i &lt; 10; i++) {
    cacheDP.put("DIMENSION." + i, "DimensionValue." + i);
    cacheDP.put("POSITION." + i, "PositionValue." + i);
}

my DimensionGrouper is returning "0" from computeGroup(). My question is: how in Infinispan can I map the computeGroup() return value to a physical Node? I.e. how can I make it so that when computeGroup() returns "0", I will *only* add that K,V entry to the Cache @Node 0? -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-How-to-get-Grouper-T-computeGroup-key-return-value-to-map-to-physical-Node-tp4027134.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. -- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
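For context on why returning "0" cannot target Node0 directly: the group string replaces the key as the input to the consistent hash, so keys sharing a group are guaranteed to co-locate, but which node owns that group falls out of the hash. An illustrative stand-in, not Infinispan's actual hash function:

```java
public class GroupToOwner {

    // stand-in for a consistent hash over groups: deterministic, so all keys
    // sharing a group land on the same owner, but the owner index is opaque
    public static int ownerIndex(String group, int numNodes) {
        return Math.floorMod(group.hashCode(), numNodes);
    }

    public static void main(String[] args) {
        // the group "0" always maps to the same node in a stable topology,
        // but that node is whatever the hash picks, not necessarily Node0
        System.out.println(ownerIndex("0", 3));
    }
}
```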
Re: [infinispan-dev] How to get Grouper&lt;T&gt;#computeGroup(key) return value to map to physical Node?
I guess the grouper could use a KeyAffinityService (or something similar) to generate a key local to each node and return that instead of "0" or "1". However, you won't have any guarantee that the keys will stay on the same node if the cache topology changes (e.g. another node joins). It used to be that the actual address of the node would (almost) always be mapped to that node, but that was just an implementation detail, and it never worked 100%. Ben, do you think being able to pin a key permanently to a node would be useful? Cheers Dan On Tue, May 21, 2013 at 6:31 PM, Galder Zamarreño gal...@redhat.com wrote: Hmmm, one hacky way might be to hold on to the grouper instance passed in via configuration, and once the cache manager has been started, set it in the grouper and use it to query either the address, or the physical address (via EmbeddedCacheManager.getTransport…)? On May 14, 2013, at 6:34 PM, cotton-ben ben.cot...@alumni.rutgers.edu wrote: I am playing with the Infinispan 5.3 quick-start package to exercise my usage of the Grouping API. As we know, the quick-start package is made up of AbstractNode.java, Node0.java, Node1.java and Node2.java (plus a util/listener). My ambition is to demonstrate 1. that any Cache&lt;K,V&gt;.put("DIMENSION.xxx", v) will flow through my Grouper and pin that key in the Cache at @Node=0, and 2. that any Cache&lt;K,V&gt;.put("POSITION.xxx", v) will flow through my Grouper and pin that key in the Cache at either @Node=1 or @Node=2.
Here is my AbstractNode#createCacheManagerProgramatically() config:

private static EmbeddedCacheManager createCacheManagerProgramatically() {
    return new DefaultCacheManager(
        GlobalConfigurationBuilder.defaultClusteredBuilder()
            .transport().addProperty("configurationFile", "jgroups.xml")
            .build(),
        new org.infinispan.configuration.cache.ConfigurationBuilder()
            .clustering()
            .cacheMode(CacheMode.DIST_SYNC)
            .hash().numOwners(1).groups().enabled(Boolean.TRUE)
            .addGrouper(new com.jpmorgan.ct.lri.cs.ae.test.DimensionGrouper&lt;String&gt;())
            .build()
    );
}

And here is my Grouper&lt;T&gt; implementation:

public class DimensionGrouper&lt;T&gt; implements Grouper&lt;String&gt; {
    public String computeGroup(String key, String group) {
        if (key.indexOf("DIMENSION.") == 0) {
            String groupPinned = "0";
            System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]"); // node = exactly 0
            return groupPinned;
        } else if (key.indexOf("POSITION.") == 0) {
            String groupPinned = "" + (1 + (int) (Math.random() * 2));
            System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]"); // node = {1,2}
            return groupPinned;
        } else {
            return null;
        }
    }
    public Class&lt;String&gt; getKeyType() { return String.class; }
}

The logic is working correctly ... i.e. when from Node2.java I call

for (int i = 0; i &lt; 10; i++) {
    cacheDP.put("DIMENSION." + i, "DimensionValue." + i);
    cacheDP.put("POSITION." + i, "PositionValue." + i);
}

my DimensionGrouper is returning "0" from computeGroup(). My question is: how in Infinispan can I map the computeGroup() return value to a physical Node? I.e. how can I make it so that when computeGroup() returns "0", I will *only* add that K,V entry to the Cache @Node 0? -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-How-to-get-Grouper-T-computeGroup-key-return-value-to-map-to-physical-Node-tp4027134.html Sent from the Infinispan Developer List mailing list archive at Nabble.com.
-- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] How to get Grouper&lt;T&gt;#computeGroup(key) return value to map to physical Node?
Thanks for the response, Galder. Interesting. I have been counseled by Mircea to use the KeyAffinityService API to do my physical key pinning @ specific node participants. However, the KeyAffinityService brings the risk of not being able to allow my pinned keys to survive topology changes (i.e. the physical key affinity may be lost). The Grouper#computeGroup(key) technique *does* survive grid topology changes (but does not offer a mechanism for physically pinning @ a specific node identity -- i.e. the Grouper pins keys to nodes anonymously). The ideal compromise (of course!) is to empower the user with a non-anonymous Grouper#computeGroup(key) API capability ... where I can have my key pinned @node=ID_CHOSEN_BY_ME *and* able to survive a topology change. Still musing on these considerations. :-) -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-How-to-get-Grouper-T-computeGroup-key-return-value-to-map-to-physical-Node-tp4027134p4027185.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] How to get Grouper&lt;T&gt;#computeGroup(key) return value to map to physical Node?
Ben, do you think being able to pin a key permanently to a node would be useful? Indeed I do. The ideal mechanism would be to merge both the ambitions of the Grouper#computeGroup(key) API and KeyAffinityService API into a capability that would allow me to render non-anonymous grouping that could (in its implementation) pin a specific key at a specific node identity (permanently). -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-How-to-get-Grouper-T-computeGroup-key-return-value-to-map-to-physical-Node-tp4027134p4027186.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] Supporting notifications for entries expired while in the cache store - ISPN-694
On Tue, 2013-05-21 at 17:07 +0200, Galder Zamarreño wrote: On May 6, 2013, at 2:20 PM, Mircea Markus mmar...@redhat.com wrote: On 3 May 2013, at 20:15, Paul Ferraro wrote: Is it essential? No - but it would simplify things on my end. If Infinispan can't implement expiration notifications, then I am forced to use immortal cache entries and perform expiration myself. To do this, I have to store meta information about the cache entry along with my actual cache values, which normally I would get for free via mortal cache entries. In the scope of 5.2, what Galder suggested was to fully support notifications for the entries in memory. In order to fully support your use case you'd need to add some code to trigger notifications in the cache store as well - I think that shouldn't be too difficult. What cache store implementation are you using anyway? ^ Personally, I'd do in-memory entry expiration notifications for 5.2, and I'd leave cache store based entry expiration for 6.0, when we'll revisit the cache store API and can address cache store based entry expiration notification properly. Agree everyone? That's fine. Just to clarify, the end result is that an expiration notification would only ever be emitted on 1 node per cache entry, correct? That is to say, for a given expired cache entry, the corresponding isOriginLocal() would only ever return true on one node, yes? I just want to make sure that each node won't emit a notification for the same cache entry that was discovered to have expired. So, it would be nice to have. If I have to wait for 6.0 for this, that's ok. On Thu, 2013-05-02 at 17:03 +0200, Galder Zamarreño wrote: Hi, Re: https://issues.jboss.org/browse/ISPN-694 We've got a little problem here. Paul requires that, when entries that expired while in the cache store are loaded, we send expiration notifications for them.
The problem is that expiration checking is currently done in the actual cache store implementations, which makes supporting this (even outside the purgeExpired business) specific to each cache store. Not ideal. The alternative would be for CacheLoaderInterceptor to load, do the checks and then remove the entries accordingly. The big problem here is that you're imposing a way to deal with expiration handling for all cache store implementations, and some might be able to do these checks and removals in a more efficient way if they were left to do it themselves. For example, having to load all entries and then decide which are to expire might require a lot of work, instead of potentially communicating directly with the cache store (imagine a remote cache store…) and asking it to return all the entries filtered by those whose expiry has not expired. However, even if a cache store can do that, it would lead to loading only those entries not expired, but then how do you send the notifications if those expired entries have been filtered out? You probably need multiple load methods here... @Paul, do you really need this for your use case? The simplest thing to do might be to go for option 1, and let each cache store send notifications for expired entries for the moment, and then in 6.0 revise not only the API for purgeExpired, but also the API for load/loadAll() to find a way that, if any expiry listeners are in place, a different method can be called on the cache store that signals it to return all entries: both expired and non-expired, and then let the CacheLoaderInterceptor send notifications from a central location. Thoughts? 
Cheers, -- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
[infinispan-dev] mongodb cache store added in Infinispan 5.3 - courtesy of Guillaume Scheibel
Thanks to Guillaume Scheibel, Infinispan now has a MongoDB cache store that will be shipped as part of 5.3.0.CR1. The tests for the MongoDB cache store are not run by default. In order to be able to run them you need to: - install mongodb locally - run the mongodb profile The cache store was added to the CI build on all 5.3 configs (together with a running instance of mongodb). Guillaume, would you mind adding a blog entry describing this new functionality? (I've invited you to be a member of the infinispan.blogspot.com team.) Also, can you please update the user doc: https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) ___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
Re: [infinispan-dev] Supporting notifications for entries expired while in the cache store - ISPN-694
Sent from my iPhone

On 21 May 2013, at 16:07, Galder Zamarreño gal...@redhat.com wrote:

On May 6, 2013, at 2:20 PM, Mircea Markus mmar...@redhat.com wrote:

On 3 May 2013, at 20:15, Paul Ferraro wrote:

Is it essential? No - but it would simplify things on my end. If Infinispan can't implement expiration notifications, then I am forced to use immortal cache entries and perform expiration myself. To do this, I have to store meta information about the cache entry along with my actual cache values, which normally I would get for free via mortal cache entries.

In the scope of 5.2, what Galder suggested was to fully support notifications for the entries in memory. To fully support your use case you'd need to add some code to trigger notifications in the cache store as well - I think that shouldn't be too difficult. What cache store implementation are you using, anyway?

^ Personally, I'd do in-memory entry expiration notifications for 5.2, and I'd leave cache-store-based entry expiration for 6.0, when we'll revisit the cache store API and can address cache-store-based expiration notifications properly. Agreed, everyone? Paul?

So, it would be nice to have. If I have to wait for 6.0 for this, that's ok.

On Thu, 2013-05-02 at 17:03 +0200, Galder Zamarreño wrote:

Hi, Re: https://issues.jboss.org/browse/ISPN-694

We've got a little problem here. Paul requires that for entries that might have expired while in the cache store, we send expiration notifications when they are loaded. The problem is that expiration checking is currently done in the actual cache store implementations, which makes supporting this (even outside the purgeExpired business) specific to each cache store. Not ideal. The alternative would be for CacheLoaderInterceptor to load the entries, do the checks, and then remove them accordingly.
The big problem here is that you'd be imposing one way of handling expiration on all cache store implementations, and some might be able to do these checks and removals more efficiently if left to do it themselves. For example, having to load all entries and then decide which have expired might require a lot of work, instead of potentially communicating directly with the cache store (imagine a remote cache store…) and asking it to return only the entries whose expiry has not yet passed. However, even if a cache store can do that, it would load only the non-expired entries, and then how do you send notifications for the expired entries that were filtered out? You'd probably need multiple load methods here...

@Paul, do you really need this for your use case? The simplest thing might be to go for option 1 and let each cache store send notifications for expired entries for the moment, and then in 6.0 revise not only the purgeExpired API but also the load/loadAll() API, so that if any expiry listeners are in place, a different method can be called on the cache store signalling it to return all entries, both expired and non-expired, and let CacheLoaderInterceptor send the notifications from a central location. Thoughts?

Cheers, -- Galder Zamarreño gal...@redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org

Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org)

___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
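Paul's interim workaround, as described in this thread, is to store entries as immortal and keep the expiry metadata next to the value himself. A minimal sketch of that wrapper might look like the following; the class name and methods are hypothetical, not anything from Infinispan's API:

```java
// Hypothetical sketch of the workaround: emulate a mortal entry on top of an
// immortal one by carrying the expiry deadline alongside the value.
public class ExpiringValue<V> {
    final V value;
    final long expiryTimeMillis; // absolute deadline

    public ExpiringValue(V value, long lifespanMillis, long nowMillis) {
        this.value = value;
        this.expiryTimeMillis = nowMillis + lifespanMillis;
    }

    /** True once the entry's lifespan has elapsed. */
    public boolean isExpired(long nowMillis) {
        return nowMillis >= expiryTimeMillis;
    }

    /** The value, or null if expired (the caller then removes the entry). */
    public V getIfAlive(long nowMillis) {
        return isExpired(nowMillis) ? null : value;
    }
}
```

This illustrates the cost Paul mentions: the metadata, the expiry check on every read, and the eventual removal all have to be done by the application, where mortal entries would provide them for free.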
Re: [infinispan-dev] mongodb cache store added in Infinispan 5.3 - courtesy of Guillaume Scheibel
There is a way to download (via Maven) and run MongoDB locally from within Java, via Flapdoodle's Embedded MongoDB: https://github.com/flapdoodle-oss/embedmongo.flapdoodle.de

ModeShape uses this in our builds in support of our storage of binary values inside MongoDB. The relevant Maven POM parts and JUnit test case are:
https://github.com/ModeShape/modeshape/blob/master/modeshape-jcr/pom.xml#L147
https://github.com/ModeShape/modeshape/blob/master/modeshape-jcr/src/test/java/org/modeshape/jcr/value/binary/MongodbBinaryStoreTest.java

___ infinispan-dev mailing list infinispan-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
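Pulling Flapdoodle's Embedded MongoDB into a build is a test-scoped dependency along these lines. The coordinates below are what the project published around that time; treat the version as a placeholder and check the flapdoodle project for the current one:

```xml
<!-- Test-scoped Embedded MongoDB; version is a placeholder. -->
<dependency>
  <groupId>de.flapdoodle.embed</groupId>
  <artifactId>de.flapdoodle.embed.mongo</artifactId>
  <version>1.34</version>
  <scope>test</scope>
</dependency>
```

This would remove the "install mongodb locally" prerequisite from the store's test setup, since the mongod binary is downloaded and started by the test itself, as in the linked ModeShape test case.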