Indeed.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4153963#4153963
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4153963
___
jboss-user mailing list
jboss-user@lists.jboss.org
Hum... You'll pardon me for thinking that the listener/latch solution feels a
bit like black magic, eh? :-)
But if it works, I won't knock it.
However, it doesn't seem to work for me. Without transactions we're still
failing 15-20% of all runs (I increased the repetitions to 5 or 10 to check this).
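For reference, here's the listener/latch idea as I understand it, as a minimal sketch (class and method names are mine, not from JBC; a real version would call countDown() from a cache listener callback when the gravitated node shows up):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a reader thread blocks on a latch until a (simulated)
// cache listener signals that the node has arrived locally.
public class GravitationLatch {
    private final CountDownLatch arrived = new CountDownLatch(1);

    // Would be invoked from a @NodeCreated-style cache listener callback.
    public void onNodeCreated() {
        arrived.countDown();
    }

    // Reader side: wait until the node exists locally, or give up.
    public boolean awaitNode(long timeoutMillis) throws InterruptedException {
        return arrived.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        GravitationLatch latch = new GravitationLatch();
        new Thread(latch::onNodeCreated).start(); // simulate the listener firing
        System.out.println("node arrived: " + latch.awaitNode(1000));
    }
}
```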
Mmm. A few questions.
1) The lock contention occurs on data gravitation only, correct? As we're only
updating attributes on nodes, the only time we need to contend for a lock (the /
lock, or the buddy backup lock) is when adding something to a cache, i.e. on
data gravitation?
2) If #1, is there
The first timeout is not configurable (at least not in JBC 2.2.0.BETA1). It is
hard-coded as a sequence of 400, 800, and 1600 milliseconds in
BuddyManager.java:859 (again in JBC 2.2.0.BETA1).
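The escalating-timeout pattern in question looks roughly like this (the 400/800/1600 ms sequence matches what BuddyManager hard-codes in 2.2.0.BETA1; the class and method names here are illustrative, not the actual JBC source):

```java
import java.util.function.LongPredicate;

// Sketch of the hard-coded escalating retry: try the acquisition once per
// timeout in the sequence, giving each attempt progressively longer to succeed.
public class EscalatingRetry {
    static final long[] LOCK_TIMEOUTS_MS = {400, 800, 1600};

    // Returns true on the first successful attempt, false once every
    // attempt in the sequence has timed out.
    public static boolean acquireWithEscalation(LongPredicate tryAcquire) {
        for (long timeoutMs : LOCK_TIMEOUTS_MS) {
            if (tryAcquire.test(timeoutMs)) {
                return true;
            }
        }
        return false;
    }
}
```

Since the sequence is a hard-coded array, there is no configuration attribute to raise the first timeout without patching the source.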
Setting higher timeout values in the sequence does not help. Also, synchronous
commits and rollbacks only produced other exceptions.
What I fail to understand is where the lock contention comes from. Given that
each thread accesses separate nodes and mutates only attributes, why is there
contention on state transfer?
I.e. given thread T1 accessing node N1, state transfer for N1 should not
commence unless T1 accesses the data
I've testng'd this now.
Here: http://www.cubeia.com/misc/statetransfer/src2.zip
This uses the available buddy replication config in the test resources, so it
tests REPL_SYNC with buddy replication with and without user transactions.
I can also confirm that it fails on all JBC versions I have
Cleaned up, testng'd and uploaded.
Here: http://www.cubeia.com/misc/replqueue/src.zip
The test includes a config file; it is documented and... fails. You should be
able to drop this straight into the 2.1.1.GA tag and run.
Cheers
/Lars
Cool.
Our real scenario is this: We have two caches, one for hard binary data
(dataCache) and one for metadata (metaCache) consisting mainly of strings. The
server processing updates the dataCache within a transaction (configurable with
a JTA manager) which defaults to a jboss dummy
Hi,
I seem to have stumbled on a regression: the combination of REPL_ASYNC with a
user transaction and UseReplQueue=true seems to fail in 2.1.1.GA (but not in
e.g. 2.1.0.GA). State transfer seems to work, but there's no indication the
replication queue is ever flushed, i.e. updated values does
Oh, and: This was with JBoss Cache 2.1.1.GA on an Intel dual core/Linux 2.6.22.
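For comparison, this is roughly the replication-queue part of the configuration under discussion (the interval and element counts below are example values, not our exact settings):

```xml
<attribute name="CacheMode">REPL_ASYNC</attribute>
<attribute name="UseReplQueue">true</attribute>
<!-- flush the queue every 100 ms, or earlier when 1000 elements have accumulated -->
<attribute name="ReplQueueInterval">100</attribute>
<attribute name="ReplQueueMaxElements">1000</attribute>
```

If the flusher never runs, queued modifications would simply sit in the queue, which would match the symptom of updated values never reaching the other node.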
[EMAIL PROTECTED] wrote : The only bottleneck I can think of is on the
transport layer (JGroups) when replicating to the same buddy node.
That's correct, and we're indeed seeing contention in the JGroups layer. As
we're load testing at a couple of thousand accesses/replications per second,
they
JGroups 2.6.2 with TCP transport (we had NAK/ACK issues with UDP).
That's correct. Also, the lock in question which prompted this post was on
sending (as we're using REPL_ASYNC), and we're going down the stack using one
thread.
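A toy model of that single-sender setup, just to make the contention point concrete (names are ours, not JGroups/JBC internals): with REPL_ASYNC all replication messages funnel through one queue drained by one thread, so the only lock the producer threads can fight over is the queue itself.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Illustrative model: many application threads enqueue replication messages,
// a single daemon thread drains them down to the transport one at a time.
public class SingleThreadSender {
    private final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

    public SingleThreadSender(Consumer<String> transport) {
        Thread sender = new Thread(() -> {
            try {
                while (true) {
                    transport.accept(outbound.take()); // one message at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down quietly
            }
        });
        sender.setDaemon(true);
        sender.start();
    }

    // Called by any number of application threads; never blocks on the transport.
    public void replicate(String msg) {
        outbound.add(msg);
    }
}
```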
Well... That could possibly work. After all, I doubt setting up TestNG
will take a significant amount of time for us compared to writing the damn stuff
in the first place :-)
Hi,
We have an issue regarding state transfer within our system. We basically
assume that, given two clustered caches A and B, cache A can be used and
accessed safely while cache B is starting, without any risk of data loss once
cache B is successfully started and accessed. This, however, does
Hi, I thought I'd offer an idea that struck me this morning regarding buddy
backup. Consider the following start of a tree:
/root/
| /A
| /B
| /C
In our scenario, changes on nodes A-C (including optional sub-trees) may be
performed concurrently per node but the system
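To make that concurrency rule concrete, here is a small model of it (illustrative only; real JBC node locking works differently): one lock per top-level node, so updates under /root/A, /root/B and /root/C never block each other, while two updates under the same node serialize.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// One lock per top-level node path; writers on different sub-trees proceed
// in parallel, writers on the same sub-tree take turns.
public class PerNodeLocks {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void withNodeLock(String path, Runnable update) {
        ReentrantLock lock = locks.computeIfAbsent(path, p -> new ReentrantLock());
        lock.lock();
        try {
            update.run();
        } finally {
            lock.unlock();
        }
    }
}
```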
Great. Fredrik happens to be on vacation this week. But if you have any
questions about the test or our problems I'll be happy to answer if I can.
Regards
/Lars J. Nilsson
http://www.cubeia.com
Brilliant. Thanks for the attention. I'll see if I'll have time later this week
to give HEAD a whirl.
Hi,
Two issues/questions regarding OrderedSynchronizationHandler.
1) The instances member is not synchronized
OrderedSynchronizationHandler seems to have an unguarded static HashMap member
called instances. This is a potential crash issue. On one of our instances
(in a four member cluster)
The title was truncated, but it went on to say that this is an issue in
2.1.0.CR3 as well.
We're using version 2.0.0 GA (with JGroups 2.5). There's no eviction policy
configured.
The UnitTest models our real application. It goes like this: it is an
application processing events for units (areas), where the area is an object
stored in the cache and session affinity is enforced by a message bus.
Sometimes during the lifetime of the application new areas will be added by a
* bump *
It would be nice to know if anyone has been able to duplicate this problem.
This is a major blocker for our development at the moment.
Unit test can be downloaded here:
http://www.cubeia.com/files/cache-lock-test.tar.gz
Regards
/Lars J. Nilsson
www.cubeia.com
Hi,
We have isolated what we think is a synchronization issue during data
gravitation over multiple nodes using buddy replication. We have a unit test
demonstrating the issue which I can send to anyone interested.
What appears to happen is this: When two nodes are involved in a data