Wow, that was fast =)
The issue is indeed fixed for the standalone test case. We will probably wait
for the CR3 release before we test with our real application.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4117525#4117525
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4117525
___
We do apply session affinity.
Look at it this way:
1. Cache A starts
2. Cache A adds 10 nodes to the cache
3. Cache B starts
4. Cache B 'gets' the 10 nodes, thus causing data gravitation
After #4 in the sequence we end up with the weird buddy rep settings as
discussed above (see the sketch below). This is exactly what ...
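In code, that sequence is roughly the following (an untested sketch against the
JBoss Cache 2.x API; "buddy-replication.xml" is a made-up config file name,
assumed to have buddy replication and auto data gravitation enabled):

import org.jboss.cache.Cache;
import org.jboss.cache.DefaultCacheFactory;
import org.jboss.cache.Fqn;

public class GravitationSequence
{
   public static void main(String[] args)
   {
      // 1. Cache A starts
      Cache<String, String> cacheA =
            new DefaultCacheFactory<String, String>().createCache("buddy-replication.xml");

      // 2. Cache A adds 10 nodes to the cache
      for (int i = 0; i < 10; i++)
         cacheA.put(Fqn.fromString("/" + i), "key", "value" + i);

      // 3. Cache B starts and joins the same cluster
      Cache<String, String> cacheB =
            new DefaultCacheFactory<String, String>().createCache("buddy-replication.xml");

      // 4. Cache B 'gets' the 10 nodes; each read misses locally and
      //    gravitates the node from cache A over to cache B
      for (int i = 0; i < 10; i++)
         cacheB.get(Fqn.fromString("/" + i), "key");

      cacheA.stop();
      cacheB.stop();
   }
}

It is after that last loop that the buddy backup trees end up looking wrong.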
___
Sorry, I didn't quite understand your use case in the beginning.
If you are adding data to .5 while at the same time reading the same data from
the other nodes (causing data gravitation), this will naturally cause some
timeouts since there is contention on a buddy backup subtree.
In general ...
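If the contention itself cannot be avoided, the timeouts it runs into can at
least be tuned. A rough sketch against the 2.x programmatic Configuration (the
values are illustrative only, not recommendations):

import org.jboss.cache.config.Configuration;

public class TimeoutTuning
{
   // Widens the window that a gravitating 'get' and a concurrent 'put'
   // on the same buddy backup subtree are racing against.
   public static Configuration tunedConfig()
   {
      Configuration cfg = new Configuration();
      cfg.setCacheMode(Configuration.CacheMode.REPL_SYNC);
      cfg.setLockAcquisitionTimeout(20000); // ms, illustrative
      cfg.setSyncReplTimeout(30000);        // ms, illustrative
      return cfg;
   }
}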
___
FredrikJ wrote :
| But we also see the master having a buddy backup for itself (with no data):
|
| MASTER:
| | null {}
| | /_BUDDY_BACKUP_ {}
| | /192.168.1.135_51469 {}
| | /192.168.1.135_51470 {}
| | /1 {1=c6m0p888dfvz}
|
| The backup ...
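For reference, a dump like the one quoted can be produced with a small
recursive printer over the core Node API (a sketch; the output format only
approximates the quoted one):

import org.jboss.cache.Cache;
import org.jboss.cache.Node;

public class TreeDumper
{
   // Prints each node's Fqn and data map, indented by depth.
   public static void dump(Node<?, ?> node, int depth)
   {
      StringBuilder indent = new StringBuilder();
      for (int i = 0; i < depth; i++) indent.append("  ");
      System.out.println(indent + node.getFqn().toString() + " " + node.getData());
      for (Node<?, ?> child : node.getChildren())
         dump(child, depth + 1);
   }

   public static void dump(Cache<?, ?> cache)
   {
      dump(cache.getRoot(), 0);
   }
}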
___
I tried a snapshot build from SVN (revision 4982) but the result is still the
same.
Actually I was not entirely accurate in the sequence description. It should be
like this:
1. Cache A starts
2. Cache A adds 10 nodes to the cache
3. Cache B starts
4. Cache B 'gets' 9 nodes, thus causing a ...
___
This is a regression. Thanks for this - JBCACHE-1256
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4117150#4117150
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4117150
___
Hi - I have a fix in svn trunk if you feel like trying it out. Note that trunk
is unstable and Hudson still hasn't finished thoroughly checking my fix for
further regressions. Details are in JIRA.
View the original post : ...
___
I have now tried to reproduce the issue in a standalone unit test and have
succeeded, at least to some extent =)
I am now running two caches locally where one is producing data and the other
one is inspecting the cache - causing data to gravitate to the second cache.
The issue is replicated in ...
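The skeleton of the test looks roughly like this (a sketch using JUnit 4; the
config file name and node contents are placeholders, not the actual test code):

import org.jboss.cache.Cache;
import org.jboss.cache.DefaultCacheFactory;
import org.jboss.cache.Fqn;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GravitationReproTest
{
   @Test
   public void readGravitatesDataToSecondCache()
   {
      // Two caches in the same JVM, joined into one cluster;
      // "buddy-replication.xml" stands in for a buddy-replication-enabled config.
      Cache<String, String> producer =
            new DefaultCacheFactory<String, String>().createCache("buddy-replication.xml");
      Cache<String, String> inspector =
            new DefaultCacheFactory<String, String>().createCache("buddy-replication.xml");

      // The producer writes; the inspector's read should gravitate the node over.
      producer.put(Fqn.fromString("/1"), "1", "c6m0p888dfvz");
      assertEquals("c6m0p888dfvz", inspector.get(Fqn.fromString("/1"), "1"));

      producer.stop();
      inspector.stop();
   }
}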
___
FredrikJ wrote : I am currently using cache 2.1.0 GA
I presume you mean 2.1.0.CR2.
No such behaviour should have changed between 2.0.0 and 2.1.0. Some of the
internal code has changed, but that is more along the lines of refactoring and
performance enhancements, not logical behaviour in any ...
___
2.1.0 CR2 is correct.
We are not using anything extra apart from turning on buddy replication and
using data gravitation. I tried to recreate it today as well in a separate
unit test, but with no success so far. I will give it another shot tomorrow,
since I think the underlying use case scenario ...
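For reference, "buddy replication plus data gravitation and nothing else"
corresponds roughly to this programmatic setup (a sketch; treat the exact
setter names as assumptions against the 2.1.0 config API):

import org.jboss.cache.Cache;
import org.jboss.cache.DefaultCacheFactory;
import org.jboss.cache.config.BuddyReplicationConfig;
import org.jboss.cache.config.Configuration;

public class BuddySetup
{
   public static Cache<String, String> start()
   {
      Configuration cfg = new Configuration();
      cfg.setCacheMode(Configuration.CacheMode.REPL_SYNC);

      BuddyReplicationConfig brc = new BuddyReplicationConfig();
      brc.setEnabled(true);             // turn on buddy replication
      brc.setAutoDataGravitation(true); // gravitate on local cache misses
      cfg.setBuddyReplicationConfig(brc);

      return new DefaultCacheFactory<String, String>().createCache(cfg);
   }
}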
___
(Cont.)
Further, we see that all the locks that fail with a timeout are from .6, which
has .5 as its buddy backup.
So, my question is: has buddy replication changed between 2.0.0 and 2.1.0?
In any case the behaviour has changed, since this worked with 2.0.0 and no
longer does. Why does the .5 ...