[jboss-user] [JBossCache] - Re: Buddy Replication on specific Region
Buddy replication cannot be configured on a per-region basis. You can use several separate cache instances to achieve this, each acting as one region (the caches that replicate can share a common transport/multiplexer). View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4126643#4126643 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4126643 ___ jboss-user mailing list jboss-user@lists.jboss.org https://lists.jboss.org/mailman/listinfo/jboss-user
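As a rough sketch of that two-instance workaround: each cache gets its own config (one with buddy replication, one without), and both point at the same multiplexer stack so they share a transport. Element and attribute names below follow the JBoss Cache 2.x sample configs but are illustrative; check them against your release.

```xml
<!-- Cache A: the "region" that uses buddy replication -->
<attribute name="MultiplexerStack">udp</attribute>
<attribute name="BuddyReplicationConfig">
   <config>
      <buddyReplicationEnabled>true</buddyReplicationEnabled>
   </config>
</attribute>

<!-- Cache B: the "region" that replicates normally; the same
     multiplexer stack name means both caches share one channel -->
<attribute name="MultiplexerStack">udp</attribute>
```

Each cache is started independently; the shared stack keeps the JGroups channel count down.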
[jboss-user] [JBossCache] - Re: Buddy Replication
Thanks for the info. This looks like a JGroups issue. Until it proves to be something different, I suggest you follow/join the other thread, and we'll track the problem from there. I know the JGroups guys are aware of that thread. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4076866#4076866 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4076866
[jboss-user] [JBossCache] - Re: Buddy Replication
JBoss Cache: 2.0.0 GA JGroups: 2.5.0 GA Your description sounds exactly like what is happening here. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4076626#4076626 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4076626
[jboss-user] [JBossCache] - Re: Buddy Replication
What release of JBC and JGroups? This sounds very similar to the situation discussed at http://www.jboss.com/index.html?module=bb&op=viewtopic&t=116104 and http://jira.jboss.com/jira/browse/JBAS-4608. The application gets a view change callback before the protocols lower down in the channel are aware of the new view. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4076470#4076470 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4076470
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
anonymous wrote : | 1) Does BuddyReplication only work in a clustered environment? So if I have individual nodes which are not part of a cluster, can I not use BuddyReplication? | You don't necessarily need a "cluster" as far as WL is concerned. They could be standalone servers. You will, however, need some form of load balancing and session affinity, perhaps provided by an external load balancer (hardware- or software-based). anonymous wrote : | 2) With the same configuration, when I don't use BuddyReplication in my cache config, the contents still get replicated, i.e. Application-1 and Application-2 were able to see the cache contents at any point in time. (The applications are deployed on two different AdminServers (not part of a cluster) but on a single machine.) Can you please tell us if that is right, how it replicated the contents when we did not use BuddyReplication, and why it is not replicating the contents when we use BuddyReplication? | WL "clustering" has no effect on JBoss Cache, even if you have 2 separate Admin Servers. Even with BR enabled, replication still happens - it is just that state is replicated to a backup region and not a primary region. This is why you do not see it. anonymous wrote : | Is it mandatory to use sticky sessions for BuddyReplication to work? | If it is to be of any benefit, then yes. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4042858#4042858 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4042858
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
Manik, thanks for the reply. The two nodes are not clustered (sticky sessions not configured) and I used the same configuration. I have a few questions: 1) Does BuddyReplication only work in a clustered environment? So if I have individual nodes which are not part of a cluster, can I not use BuddyReplication? 2) With the same configuration, when I don't use BuddyReplication in my cache config, the contents still get replicated, i.e. Application-1 and Application-2 were able to see the cache contents at any point in time. (The applications are deployed on two different AdminServers (not part of a cluster) but on a single machine.) Can you please tell us if that is right, how it replicated the contents when we did not use BuddyReplication, and why it is not replicating the contents when we use BuddyReplication? Is it mandatory to use sticky sessions for BuddyReplication to work? View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4040952#4040952 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4040952
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
anonymous wrote : | But still the contents are not replicated, as in if application-1 puts some value into cache then it is not visible when accessed from application-2. | Buddy Replication requires session affinity. You won't see the contents on app-2. But they will be on app-2 as a backup, so if app-1 crashes you will then be able to see them on app-2. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4040931#4040931 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4040931
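The "it's on app-2, just as a backup" point can be shown with a toy model: a put on the owner lands in the owner's primary tree and in the buddy's /_BUDDY_BACKUP_/ subtree, so a plain read on the buddy misses until failover promotes the backup. Names here are illustrative only, not the JBoss Cache API.

```java
import java.util.HashMap;
import java.util.Map;

class BuddyBackupSketch {
    // each instance has a primary tree and a backup subtree,
    // both modelled as flat Fqn-string -> value maps
    final Map<String, String> primary = new HashMap<>();
    final Map<String, String> backup = new HashMap<>();

    // a put on the owner replicates only into the buddy's backup subtree
    static void put(BuddyBackupSketch owner, BuddyBackupSketch buddy,
                    String ownerName, String fqn, String value) {
        owner.primary.put(fqn, value);
        buddy.backup.put("/_BUDDY_BACKUP_/" + ownerName + fqn, value);
    }

    // a plain read consults only the primary tree, so the buddy "misses"
    String get(String fqn) {
        return primary.get(fqn);
    }

    // on failover the buddy promotes the dead owner's backup data
    void failover(String deadOwner) {
        String prefix = "/_BUDDY_BACKUP_/" + deadOwner;
        for (Map.Entry<String, String> e : new HashMap<>(backup).entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                primary.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
    }
}
```

With session affinity, clients of the owner never notice; without it, reads on the buddy miss until the owner dies.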
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
Thanks for the reply; the error related to the transaction lookup is now resolved. But the contents are still not replicated: if application-1 puts some value into the cache, it is not visible when accessed from application-2. When I remove the BuddyReplication configuration from the cache config, everything works fine. Can you please advise on this, or give some more information on how to tell from the logs whether a buddy is found and the cache contents are accessed, or whether there was no cache hit? I have changed the logging to TRACE. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4040084#4040084 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4040084
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
Change org.jboss.cache.DummyTransactionManagerLookup to org.jboss.cache.GenericTransactionManagerLookup View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4040042#4040042 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4040042
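If it helps, in the 1.4.x TreeCache XML this is set via the TransactionManagerLookupClass attribute; a minimal fragment (assuming the MBean/XML config format of that release):

```xml
<attribute name="TransactionManagerLookupClass">
   org.jboss.cache.GenericTransactionManagerLookup
</attribute>
```

The generic lookup probes a list of well-known JNDI locations for the JTA TransactionManager of various application servers, which is why it works under WebLogic where the dummy lookup does not.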
[jboss-user] [JBossCache] - Re: Buddy Replication on Weblogic
I am using JBossCache-1.4.1.SP3 and WebLogic 9.2 on a Windows machine. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4039737#4039737 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4039737
[jboss-user] [JBossCache] - Re: Buddy Replication and Replication Queue
This is because by the time the replication queue fires, the buddy group membership may have changed. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4024451#4024451 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4024451
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Not immutable. +1 on making it immutable though. Been reading Goetz's book, have we? ;) View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006451#4006451 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006451
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Sure, that should do it. +1 on the convenience method -- give us freedom to change how things work later without breaking callers. Is the BuddyGroup immutable? Should be if we're exposing it. (Sorry, I'm being lazy.) View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006441#4006441 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006441
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Would exposing the BuddyGroup in the RuntimeConfig suffice? It would have a reference to the data owner as well as the list of buddies. The primary buddy is the first buddy in the list; perhaps a convenience method | Address getPrimaryBuddy() | { | return buddies.get(0); | } | could be added... View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006424#4006424 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006424
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Added http://jira.jboss.com/jira/browse/JBCACHE-950 for exposing this. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006421#4006421 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006421
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Hmm, I don't see anything in the BuddyManager class that allows a caller to find out this kind of information either. That's a flaw. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006415#4006415 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006415
[jboss-user] [JBossCache] - Re: Buddy replication behavior
You are right Brian, I forgot to post the configuration. | <attribute name="BuddyReplicationConfig"> | <config> | <buddyReplicationEnabled>true</buddyReplicationEnabled> | <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass> | <buddyCommunicationTimeout>5</buddyCommunicationTimeout> | <buddyLocatorProperties> | numBuddies = 1 | ignoreColocatedBuddies = true | </buddyLocatorProperties> | <dataGravitationRemoveOnFind>true</dataGravitationRemoveOnFind> | <dataGravitationSearchBackupTrees>true</dataGravitationSearchBackupTrees> | <autoDataGravitation>false</autoDataGravitation> | </config> | </attribute> | This workaround should work fine only in the case of numBuddies = 1. I have no idea how to discriminate the first buddy to force the gravitation only on it. regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006412#4006412 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006412
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Hi, I wrote a workaround in the form of a TreeCacheListener. Please let me know what you think about it. | class Listener implements TreeCacheListener | { | private TreeCache cache; | private View view; | | private static final Fqn backupFqn = new Fqn(BuddyManager.BUDDY_BACKUP_SUBTREE); | private static final Option option = new Option(); | | static | { | option.setForceDataGravitation(true); | } | | private Vector getMembersLeft(View old_view, View new_view) | { | final Vector result = new Vector(); | final Vector members = old_view.getMembers(); | final Vector new_members = new_view.getMembers(); | | for(int i=0; i < members.size(); i++) | { | final Object mbr = members.elementAt(i); | | if(!new_members.contains(mbr)) | { | result.addElement(mbr); | } | } | | return(result); | } | | private void check(Vector membersLeft) | { | for(int i=0, n=membersLeft.size(); i < n; i++) | { | ... | } | } | ... | } | View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006387#4006387 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006387
[jboss-user] [JBossCache] - Re: Buddy replication behavior
I read this really quickly, so forgive me if I'm wrong, but it looks like *each* buddy of the node that left will try to do the gravitation, rather than a "primary" buddy. That will very likely lead to problems. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006394#4006394 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006394
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Yes, this will solve the problem. Any idea about when this feature will be available? Thanks and regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4006289#4006289 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4006289
[jboss-user] [JBossCache] - Re: Buddy replication behavior
IMO, this is an area where configurable alternatives would be useful, since the "network storm" problem is a real issue, as is the lower QoS that results if the primary buddy doesn't automatically take over the data. A couple of config options come to mind (option names I just made up with very little thought): 1) a "primary-backup-take-ownership" boolean flag -- if true, the behavior Gianluca is looking for occurs. 2) minBuddies -- indicates the minimum number of buddies a node has to have; if a node fails, the affected nodes check this to decide whether they have to elect a new buddy and/or take ownership (if they are the primary). View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4004620#4004620 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4004620
[jboss-user] [JBossCache] - Re: Buddy replication behavior
I used the word 'gravitation' also for backup data, maybe improperly. anonymous wrote : | cache[1] also now sees /_BUDDY_BACKUP_/192.168.0.4_33270/three since cache[2] realises that it doesn't have a backup anywhere anymore, and hence assigns cache[1] as its new backup node with its state. | This movement of the primary data that is without a backup can cause a "network storm", but this is inevitable. Why isn't there an equivalent movement for the backup data that is without a primary (e.g. /_BUDDY_BACKUP_/192.168.0.4_33266/one on cache[1])? My major concern is not the "network storm" but the fact that in case of multiple faults the cluster loses information. For example: suppose that first cache[0] dies and then, after one minute, cache[1] also dies. In this case the data stored in the node /one is lost forever. Thanks and regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003705#4003705 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003705
[jboss-user] [JBossCache] - Re: Buddy replication behavior
There is no automatic gravitation done for the backup. In your example, cache[1] always had /_BUDDY_BACKUP_/192.168.0.4_33266/one. No gravitation necessary. cache[1] also now sees /_BUDDY_BACKUP_/192.168.0.4_33270/three since cache[2] realises that it doesn't have a backup anywhere anymore, and hence assigns cache[1] as its new backup node with its state. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003693#4003693 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003693
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Hello again. Point 1) is clear, but I have some doubts about point 2). Let me divide the data stored in a cache into primary and backup. When a node dies, to avoid a network storm, the primary data is not automatically gravitated. If so, why is automatic gravitation done for the backup? In fact, in the example that I posted, the primary data contained in node C / cache 2 (the node /three) is automatically copied into the backup data of node B (cache 1). And then, when a node dies, the cluster is not 'homogeneous' anymore: some data has a backup and some doesn't. In this situation, without specific application code that forces the gravitation, another fault can cause information loss. thanks and regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003415#4003415 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003415
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Correct, this is what is expected. You need to: 1) Enable gravitation explicitly if you want to pull data out of a potential backup scenario. This is necessary as a separate option to prevent expensive calls trying to gravitate data that does not exist. E.g., if I try | String[] s = {"/one", "/two", "/three", "/four", "/five", | "/six", "/seven", "/eight", "/nine", "/ten"}; | | for (String fqn : s) cache[1].get(fqn); | I don't want expensive network calls (esp. if the cluster is big) to go out when looking for nodes four to ten. This is why, when you know about a view-change event (perhaps by using a listener), you can execute gravitate calls. 2) Gravitation should not happen automatically - only when a gravitation call occurs, and even then, only for the node being called. This is to prevent a "network storm" when a node dies. Let's assume each node has 1GB of data. If a node dies, I don't want 1GB of data being gravitated across the network, since this may then kill other nodes or make the network unresponsive. This is why this happens lazily, when a node is requested. Hope this helps, Manik View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003302#4003302 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003302
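The lazy-gravitation behavior Manik describes can be sketched as a toy model: nothing moves when a node dies; data is pulled over only when some cache actually asks for that Fqn, and (as with dataGravitationRemoveOnFind) the found backup copy is removed at the source. Illustrative names only, not the JBoss Cache API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class GravitationSketch {
    // each live cache instance holds its primary data plus the backup
    // copies it hosts for other members
    static class Instance {
        final Map<String, String> primary = new HashMap<>();
        final Map<String, String> backups = new HashMap<>();
    }

    final List<Instance> cluster = new ArrayList<>();
    int bytesMoved = 0; // crude stand-in for network traffic

    // get with gravitation enabled: check locally, else search other
    // members' backup trees and move only the single node requested
    String get(Instance local, String fqn) {
        String v = local.primary.get(fqn);
        if (v != null) return v;
        for (Instance other : cluster) {
            String found = other.backups.remove(fqn); // "remove on find"
            if (found != null) {
                local.primary.put(fqn, found);
                bytesMoved += found.length();
                return found;
            }
        }
        return null; // a miss costs a search, but moves no data
    }
}
```

Note that a node death by itself leaves bytesMoved at zero; traffic is incurred per requested node, which is exactly the point of doing it lazily.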
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Manik, this is a test case that cover this use case: | public class BuddyReplicationFailoverTest extends BuddyReplicationTestsBase | { | ... | | public void testReplication() throws Exception | { | caches = createCaches(3, false, false, false); | | final String[] fqns = { "/one", "/two", "/three" }; | final String[] backupFqns = { | "/" + BuddyManager.BUDDY_BACKUP_SUBTREE + "/" | + BuddyManager.getGroupNameFromAddress(caches[0].getLocalAddress()) + fqns[0], | | "/" + BuddyManager.BUDDY_BACKUP_SUBTREE + "/" | + BuddyManager.getGroupNameFromAddress(caches[1].getLocalAddress()) + fqns[1], | | "/" + BuddyManager.BUDDY_BACKUP_SUBTREE + "/" | + BuddyManager.getGroupNameFromAddress(caches[2].getLocalAddress()) + fqns[2], | }; | | caches[0].put(fqns[0], key, value); | caches[1].put(fqns[1], key, value); | caches[2].put(fqns[2], key, value); | | dumpCacheContents(caches); | | caches[0].stopService(); | caches[0] = null; | TestingUtil.sleepThread(500); | | dumpCacheContents(caches); | | assertTrue("caches[1] should contain \"one\" and \"two\"", |caches[1].exists(fqns[0]) && caches[1].exists(fqns[2])); | assertTrue("caches[1] should contain the \"three\" backup", caches[1].exists(backupFqns[2])); | | assertTrue("caches[2] should contain \"three\"", caches[2].exists(fqns[2])); | assertTrue("caches[2] should contain the \"one\" and \"two\" backups", |caches[2].exists(backupFqns[0]) && caches[2].exists(backupFqns[1])); | } | ... | } | The first assertion fails. This is what is printed before the first cache kill: | START: Cache Contents | ** Cache 0 is 192.168.0.4:33266 | | /one | /_BUDDY_BACKUP_ | /192.168.0.4_33270 | /three | | ** Cache 1 is 192.168.0.4:33268 | | /_BUDDY_BACKUP_ | /192.168.0.4_33266 | /one | /two | | ** Cache 2 is 192.168.0.4:33270 | | /three | /_BUDDY_BACKUP_ | /192.168.0.4_33268 | /two | | END: Cache Contents | While this is what is printed after the kill: | START: Cache Contents | ** Cache 0 is null! 
| ** Cache 1 is 192.168.0.4:33268 | | /_BUDDY_BACKUP_ | /192.168.0.4_33270 | /three | /192.168.0.4_33266 | /one | /two | | ** Cache 2 is 192.168.0.4:33270 | | /three | /_BUDDY_BACKUP_ | /192.168.0.4_33268 | /two | | END: Cache Contents | The same as my original post. Thanks and regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4002950#4002950 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4002950
[jboss-user] [JBossCache] - Re: Buddy replication behavior
Hi Manik, unfortunately I don't have a unit test, but I will try to write one. I had a look at the testGravitationKillOwner() method. In it, after the kill (stopService), the get method is explicitly called and the data is then gravitated to another node. Maybe I'm wrong, but the behavior that I expect should happen just after the owner is killed, without invoking any methods. And it should happen even if gravitation is completely disabled. thanks and regards gianluca -- Gianluca Puggelli skype:pugg1138 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4002920#4002920 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4002920
[jboss-user] [JBossCache] - Re: Buddy replication behavior
You are correct in what you expect; that is what should happen. This is proved by the unit test BuddyReplicationFailoverTest.testGravitationKillOwner(). I just tried this and the test works fine. Do you have a unit test that shows the problem? View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4002859#4002859 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4002859
[jboss-user] [JBossCache] - Re: Buddy Replication and data consistency
Thread for issue just mentioned: http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3994764#3994764 View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3994765#3994765 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3994765
[jboss-user] [JBossCache] - Re: Buddy Replication and data consistency
If you do a put with a local option, it won't replicate to anyone, so the node that did the put will be out of sync with the buddies. As to multiple nodes simultaneously doing a put on the same node, here's what happens. I'm assuming the node already exists. Assume no tx is running. The data in question is stored on server0 and its buddy group. 1) You do a put() on server1. Simultaneously, a put() on server2. 2) DataGravitatorInterceptor.1 and DataGravitatorInterceptor.2 both see that the node doesn't exist locally and fetch the node's data from across the cluster. 3) DataGravitatorInterceptor.1 and .2 take the data and do a put (not local). This replicates the data to their buddies. No tx, so no lock is held on the node. At this point there are three copies of the data -- the server0 group's, the server1 group's and the server2 group's. 4) DataGravitatorInterceptor.1 and .2 send a cleanup call to the cluster. Any copy of the data not associated with the sending server's buddy group is removed. 5) The original puts go through. The end result here will very much depend on how things get interleaved. With REPL_SYNC you could end up with a TimeoutException in step 4 as server1 and server2 tell each other to remove the data and deadlock. Or server1 completes steps 3-5 and then server2 executes steps 3-5, in which case server2's change wins. Or both complete step 3, then server1 completes step 4 (so the server0 and server2 copies are gone), then server2 completes step 4 (so the server1 copy is gone). Then they both complete step 5, resulting in 2 sets of data, each of which only has the key/value pair included in the put. Now, if there is a tx in place: the put() in step 3 is done in a tx, so a write lock will be held on the node on each server until the tx commits. The put will not replicate until the tx commits. The removes in step 4 will also not be broadcast until the tx commits. The put in step 5 will not be replicated until the tx commits. 
The fact that the write lock from step 3 is held should make steps 3-5 atomic. If it's REPL_SYNC, you have two servers trying to write to the same node, so it's possible that when the tx tries to commit you'll get a TimeoutException due to a lock conflict. With REPL_ASYNC, the later tx will win; the step 5 put from the earlier tx will be lost. But... while writing this I'm pretty sure I've spotted a bug in the tx case: the step 4 cleanup call gets bundled together with the other tx changes and therefore only gets replicated to the server's buddies, not to the whole cluster. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3994763#3994763 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3994763
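The third non-tx interleaving described above (both servers complete step 3, then each broadcasts its cleanup, then both do the step 5 put) can be replayed in a toy model to see the divergent end state: two surviving copies, each holding only its own key/value pair. This is illustrative only, not the DataGravitatorInterceptor code.

```java
import java.util.HashMap;
import java.util.Map;

class InterleavingSketch {
    // one map per buddy group's copy of the node, keyed by owning server
    final Map<String, Map<String, String>> copies = new HashMap<>();

    // step 3: gravitate -- take a full copy of another group's data
    void gravitate(String server, String from) {
        copies.put(server, new HashMap<>(copies.get(from)));
    }

    // step 4: cleanup -- remove every copy not owned by the sender
    void cleanup(String keeper) {
        copies.keySet().removeIf(owner -> !owner.equals(keeper));
    }

    // step 5: the original put finally goes through on the caller
    void putLocal(String server, String k, String v) {
        copies.computeIfAbsent(server, s -> new HashMap<>()).put(k, v);
    }
}
```

Running the interleaving (both gravitate, server1 cleanup, server2 cleanup, both put) wipes every gravitated copy before the puts land, so the pre-existing data is gone from both surviving copies.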
[jboss-user] [JBossCache] - Re: Buddy Replication and data consistency
Is it for efficiency or correctness reasons? I can imagine a put with the auto-gravitation option on as an atomic operation consisting of: a get() resulting in gravitation of the data, then a put() performed 'locally'. Then, according to my understanding, the get() has to remove the node from other servers when dataGravitationRemoveOnFind is set to true - that's where the question about the difference between INVAL and REPL came from. Yet what if I invoke get() concurrently on 2 or more servers? Do I have any guarantees that at the end I will have only one main copy of the node? -- cheers, mj View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3994395#3994395 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3994395
[jboss-user] [JBossCache] - Re: Buddy Replication and data consistency
Buddy replication should not be used in a situation where you're expecting multiple servers to be concurrently modifying the same node. It's meant for use cases where one server owns the data. Buddy replication combined with INVALIDATION doesn't make sense. Invalidation means, "I have the latest data; you may be out of date, so throw away your data." Sending such a message to a limited subset of the cluster doesn't make sense. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3994367#3994367 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3994367
[jboss-user] [JBossCache] - Re: Buddy Replication
Well, at least I don't have to find it out the hard way now... I guess I will start looking into other ways of propagating the state information needed, through other channels, be it a second cache or something else. Cheers View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3979274#3979274 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3979274
[jboss-user] [JBossCache] - Re: Buddy Replication
Sadly, mixing configurations for different nodes or regions of nodes is not supported at this time, although it is on the task list for some point in the future. For now, perhaps two different cache instances may do the trick? One set up with BR and the other not. Not sure if this helps your use case any, though. View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3979079#3979079 Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3979079