I wrote a small article about a project I did some time ago where we
clustered our load servers together using JBoss Cache to create a load-server
farm. It is not a complex solution by any means, but it is something of a
success story and outlines a simple distribution-of-work solution using J
Posted here: http://www.jboss.com/index.html?module=bb&op=viewtopic&t=135172
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4157799#4157799
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4157799
_
I'm pretty sure that upgrading to JBC 2.1.0 GA and JGroups 2.6.2 did resolve
the "not in started state" errors for us. Unfortunately we ran into other
issues when using sync replication on versions > 2.0.0 so we're still running
2.0.0.
Our workaround consisted of 'stopping the world', i.e. halting everything in
the system while the new node is connecting and transferring state data, then
starting execution again once we can guarantee that JBC has stabilized.
Unfortunately we have still not been able to have a JBoss Cache node j
Did you ever find something out regarding this?
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4151744#4151744
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4151744
___
I have done some small-scale tests with FetchInMemoryState set to false and it
seems to be working; it does not interfere with the expected behavior (as long
as we are using buddy replication).
We will run larger-scale tests for this when 2.1.1.GA is released.
Cheers. The JIRA issue is currently due for 2.2.0; is this correct, or do you
think it might be included in 2.1.1? If the fix is likely to be included in
2.1.1 then we won't bother with implementing a workaround but rather wait for
2.1.1 =)
The map returned from the node seems to be an instance of
java.util.Collections$UnmodifiableMap. The javadoc for unmodifiableMap
states:
anonymous wrote : unmodifiableMap
|
| public static Map unmodifiableMap(Map m)
|
| Returns an unmodifiable view of the specified map. Thi
We have seen ConcurrentModificationExceptions in our system when handling data
maps returned from a node. The JBC javadoc specifies that Node.getData()
should return an immutable map.
anonymous wrote : Returns:
| a Map containing the data in this Node. If there
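A defensive copy sidesteps the exception described above: iterate over a snapshot you own instead of the live map. A minimal sketch, using plain java.util types as a stand-in for the map returned by Node.getData():

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: take a defensive copy of a shared map before iterating, so a
// concurrent writer cannot trigger ConcurrentModificationException.
// 'shared' stands in for the map handed back by the cache node.
public class DefensiveCopy {
    public static void main(String[] args) {
        Map<String, Object> shared = new HashMap<>();
        shared.put("k", "v");
        // Snapshot under our control; iteration is now safe even if
        // 'shared' is modified afterwards.
        Map<String, Object> snapshot = new HashMap<>(shared);
        shared.put("k2", "v2"); // a concurrent writer would do this
        for (Map.Entry<String, Object> e : snapshot.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}
```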
Hi.
We have been experiencing an issue in our system when enabling buddy
replication. The issue manifests itself in such a way that replication seems
to be completely missing. We can turn the issue on and off by
enabling/disabling buddy replication, so I have focused on isolating the
problem in a stand-
I just experienced what might be the same problem.
I started up 6 nodes and one node was using up all CPU without any real load on
it. Dumping all stacks shows that all threads are parked except one which is
stuck on map.get in OrderedSynchronizationHandler.
"ReceivingGameEventDaemon-1" prio=1
We use REPEATABLE_READ and READ_UNCOMMITTED (different caches). We use
READ_UNCOMMITTED only to ensure a reader is never blocked. If READ_COMMITTED
always allowed reads, we would be happy using that instead.
Are you using 2.0.0 GA?
We have encountered this problem with 2.0.0 GA and never solved it properly.
Currently we have a smelly workaround for it. I have a simple test project for
replicating this (for v2.0.0), but it's really easy to replicate: set up one
node that is making changes to a couple of
Why don't you just use separate regions in the cache?
I.e.:
root {}
| /data_a
| /... // first data set here
| /data_b
| /... // second data set here
That way you use one cache for your data.
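The layout above can be illustrated with plain nested maps standing in for the cache tree (purely a sketch; real code would address the subtrees via org.jboss.cache Fqns such as /data_a and /data_b):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: two independent subtrees under one root, modeling the
// suggested /data_a and /data_b regions inside a single cache.
public class RegionLayout {
    public static void main(String[] args) {
        Map<String, Map<String, Object>> root = new HashMap<>();
        root.put("/data_a", new HashMap<>()); // first data set lives here
        root.put("/data_b", new HashMap<>()); // second data set lives here
        root.get("/data_a").put("item1", "first data set");
        root.get("/data_b").put("item1", "second data set");
        // One cache, two disjoint regions:
        System.out.println(root.size());
    }
}
```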
Wow, that was fast =)
The issue is indeed fixed for the standalone test case. We will probably wait
for the CR3 release before we test with our real application.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4117525#4117525
I tried a snapshot build from SVN (revision 4982) but the result is still the
same .
Actually, I was not entirely accurate in the sequence description. It should
be like this:
1. Cache A starts
2. Cache A adds 10 nodes to the cache
3. Cache B starts
4. Cache B 'gets' 9 nodes thus causing a g
We do apply session affinity.
Look at it this way:
1. Cache A starts
2. Cache A adds 10 nodes to the cache
3. Cache B starts
4. Cache B 'gets' the 10 nodes thus causing a gravitation
After #4 in the sequence we end up with the weird buddy rep settings as
discussed above. This is exactly what w
I have now tried to reproduce the issue in a standalone unit test and have
succeeded, at least to some extent =)
I am now running two caches locally where one is producing data and the other
one is inspecting the cache - causing data to gravitate to the second cache.
The issue is replicated in t
2.1.0 CR2 is correct.
We are not using anything extra apart from turning on buddy replication and
using data gravitation. I tried to recreate it today as well in a separate
unit test, but with no success so far. I will give it another shot tomorrow
since I think the underlying use case scenario
(Cont.)
Further, we see that all locks that fail because of a timeout are from .6,
which has .5 as its buddy backup.
So, my question is whether buddy replication has changed between 2.0.0 and
2.1.0? In any case the behavior has changed, since this worked with 2.0.0 and
does not anymore. Why does the .5
Hi.
I am currently using cache 2.1.0 GA and jgroups 2.6.1 with buddy replication.
Buddy rep is configured to use one buddy only.
The setup is four nodes with ip addresses like:
172.16.0.5
| 172.16.0.6
| 172.16.0.7
| 172.16.0.8
|
| They are all started in the stated order so that .5 is
Hi.
We are currently running a setup with jboss cache 2.1.0 CR2 and JGroups 2.6.1.
When disconnecting one node from a cluster of two nodes, the first node will
often catch exception below.
Replication exception : org.jboss.cache.ReplicationException:
rsp=sender=192.168.1.112:32904, retval=null,
Hi.
We are currently testing a scenario of starting up an additional cache to a
cluster where replication is running. We have been using 2.0.0 GA with Jgroups
2.5.0, but have encountered problems with this (mainly 'cache not in started
state'). So we decided to try 2.1.0 CR2.
It seems that by
Hi.
We have been hunting hot spots in our application, which relies heavily on
JBoss Cache. One hot spot that showed up was in the actual serialization of
the objects. We got rid of the hot spot by using JBoss Serialization instead
of regular Java serialization.
In short we changed the
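For context, the standard java.io round trip that this kind of profiling points at looks like the sketch below (JBoss Serialization was swapped in as a drop-in replacement for it; only the plain-JDK side is shown, with a hypothetical payload):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Sketch: the standard Java serialization round trip that showed up
// as a hot spot. The payload here is a stand-in for the replicated state.
public class SerializationBaseline {
    public static void main(String[] args) throws Exception {
        String payload = "some replicated state";
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(payload); // the call the profiler flagged
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            System.out.println(ois.readObject());
        }
    }
}
```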
Great! In what release(s) do you think this will be included?
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4100925#4100925
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4100925
_
We have been load testing our system lately and have noticed many threads
waiting to lock the readOwnerList in the LockMap class.
The readOwnerList_ is a CopyOnWriteArraySet and (as the source code comment
above it clearly states) is not the most efficient implementation to use here.
So I
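For illustration, CopyOnWriteArraySet copies its entire backing array on every mutation, which is what makes it expensive under the write churn described above; its iterators see a fixed snapshot taken at creation time. A minimal demonstration:

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArraySet;

// Sketch of CopyOnWriteArraySet semantics: every add() copies the
// backing array, and an iterator sees only the snapshot that existed
// when it was created.
public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArraySet<String> readers = new CopyOnWriteArraySet<>();
        readers.add("reader-1");
        Iterator<String> it = readers.iterator(); // snapshot of one element
        readers.add("reader-2");                  // triggers a full array copy
        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        System.out.println(seen);                 // snapshot still has 1 entry
    }
}
```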
We are currently running 4 nodes with synchronous replication of rather large
data objects. (The replication is done within a transaction.)
The replicated objects are from 1 kB up to about 8 kB in size. These 4 nodes
currently handle up to 5000 events/second, which would translate into 1250
repl
I have been investigating the issue further and it seems that we actually
access the node right after creating it. This causes concurrent access from
both the server that currently owns the data and the server that is assigned
to the node, and this is what is triggering the lock situation describ
Using 2.1 beta does not solve the problem. I still get timeout exceptions from
unsuccessful lock acquisition when enabling buddy replication.
JBoss Cache version: JBossCache 'Alegrias' 2.1.0.BETA1[ $Id: Version.java 4592
2007-10-10 16:44:36Z [EMAIL PROTECTED] $]
|
| JGroups version: 2.6
I have also seen this behaviour in our application. Buddy replication does not
seem to work under concurrent load and/or usage.
Just from starting up (no load applied) I get this in the logs:
| 2007-10-23 14:25:52,664 Incoming Thread,TableSpace,172.16.0.5:7500 ERROR
jboss.cache.interceptors.
Works ok now.
Cheers
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4078269#4078269
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4078269
___
jboss-user mailing list
The jboss cache jar at:
http://repository.jboss.com/maven2/jboss/jboss-cache/2.0.0.GA/
It looks like it has not been built correctly (4 kB?). The JBC dependency from
Maven does not work anyway.
I'm getting the feeling that you guys are not using maven... ;)
JBoss Cache: 2.0.0 GA
JGroups: 2.5.0 GA
Your description sounds exactly like what is happening here.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4076626#4076626
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4076626
_
We get the following stack trace when starting up the cache with buddy
replication enabled and can't really understand why.
| 2007-08-21 21:27:49,849 Incoming Thread,TableSpace,172.16.0.5:8786 INFO
space.jboss.ExtendedCache.TableSpace - viewAccepted(): [172.16.0.5:8786|1]
[172.16.0.5:8786,
Hi,
I was about to update to 2.0 GA and I noticed that in the maven pom file the
dependency on jgroups still uses the 2.5.0-BETA2 version of JGroups. Is there
any reason for this, or has it just been forgotten?
Great stuff! Keep up the good work =)
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4073295#4073295
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4073295
___
True. I dismissed optimistic locking earlier for reasons I can't remember
right now. I might take another plunge into using optimistic locking for one
of our cache applications.
I solved the blocked readers in one instance by putting a local cache in front
of JBoss Cache. Not the prettiest
I just upgraded to CR3 and found out that the CacheListener interface has been
removed in favor of the annotation-based listener.
Some gripes:
1. Wouldn't this be considered a major API change(?) and as such should not go
in between two CR releases?
2. I understand that the benefit is that you
That would probably qualify as a double-checked lock, which is unfortunately
not thread safe (see http://www.ibm.com/developerworks/library/j-dcl.html).
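For reference, the classic fix under the Java 5 memory model is to declare the field volatile; here is a minimal sketch of the corrected pattern (the general idiom, not code taken from the cache):

```java
// Double-checked locking is safe since Java 5 only when the field is
// volatile: the volatile write guarantees the object is fully constructed
// before other threads can see the reference.
public class LazyInit {
    private static volatile LazyInit instance;

    public static LazyInit getInstance() {
        LazyInit local = instance;
        if (local == null) {                  // first, unsynchronized check
            synchronized (LazyInit.class) {
                local = instance;
                if (local == null) {          // second check under the lock
                    instance = local = new LazyInit();
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        // Same instance is returned on every call:
        System.out.println(getInstance() == getInstance());
    }
}
```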
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4063575#4063575
Using the testbeds from above on CR3 shows a significant improvement over CR2.
The source for the tests can be found here:
http://www.robotsociety.com/cache/cr3/src.rar
NB: These tests are micro-benchmarks, i.e. not a real-life scenario.
Parallel tests
Reading threads access all available nodes
Will do. I'll post the results here when done.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4063366#4063366
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4063366
___
I omitted the getMBeanServer() since it is environment dependent. A one-liner
approach to getting the default server looks like this:
| private MBeanServer getMBeanServer() {
|   return ManagementFactory.getPlatformMBeanServer();
| }
If you are using the regular cache you can follow this thread:
http://www.jboss.com/index.html?module=bb&op=viewtopic&t=106118
If you are using pojocache you can use a similar pattern:
| MBeanServer mbs = getMBeanServer();
| mbs.registerMBean(new PojoCacheJmxWrapper(cache), pojoCacheName);
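As a self-contained illustration of the same registration pattern (PojoCacheJmxWrapper replaced by a trivial standard MBean, and the ObjectName here is hypothetical):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: registering an object with the platform MBean server and
// reading an attribute back. Demo/DemoMBean follow the standard MBean
// naming convention (implementation class + "MBean" interface).
public class JmxRegisterSketch {
    public interface DemoMBean { int getValue(); }

    public static class Demo implements DemoMBean {
        public int getValue() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Demo"); // hypothetical name
        mbs.registerMBean(new Demo(), name);
        System.out.println(mbs.getAttribute(name, "Value"));
    }
}
```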
What version are you using?
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4060114#4060114
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4060114
___
I can't seem to find any mention of context classloaders in the PojoCache
documentation. Will regional context classloaders work in the same way for
PojoCache as for the regular cache?
I.e., can I get a region of the pojocache's underlying cache and register a
context classloader? There will be diff
Of course. You can get the source code here:
http://www.robotsociety.com/cache/src.zip
There are two tests: one is concurrent readers on all nodes, the other is
concurrent readers on fixed nodes. Both tests are for profiling, so they don't
assert any functionality. In fact there is no reason
We are currently testing 2.0 CR2 in a load-intensive environment. I started
to notice a lot of threads being blocked by the cache.
In order to investigate further I constructed a simple load test on a cache.
The cache uses the standard replAsync-service.xml
This is what I do:
1.
Great stuff!
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4053461#4053461
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4053461
___
What is the status of this?
Should Option.setForceWriteLock() write-lock all nodes all the way up from the
designated node or not?
If this is in fact correct behaviour, is there any recommended way of solving
the leaf-only locking scenario described in this thread?
You're right, the old '-all' was cached on my side.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4050962#4050962
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4050962
___
jbo
I am no maven expert or anything, but I think you want to change this
dependency:
| <dependency>
|   <groupId>jgroups</groupId>
|   <artifactId>jgroups-all</artifactId>
|   <version>2.5.0-BETA2</version>
|   <exclusions>
|     <exclusion>
|       <groupId>bsh</groupId>
|       <artifactId>bsh</artifactId>
|     </exclusion>
|   </exclusions>
| </dependency>
To:
| <dependency>
|   <groupId>jgroups</groupId>
|   <artifactId>jgroups</artifactId>
|
The parent works now, i.e. jgroups:jboss-parent -> jboss:jboss-parent.
Still, it seems that the jgroups dependency is pointing to 'jgroups-all',
whereas in the maven repo only 'jgroups' exists.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4050908#4050908
Downloading the JGroups 2.5.0 beta 2 jar and installing it manually as
jgroups-all in the local repository led to this problem:
| Unable to resolve artifact: Unable to get dependency information: Unable to
read the metadata file for artifact 'jgroups:jgroups-all:jar': Cannot find
parent: jgroups:jboss-parent for p
My first feedback on CR2: I can't get CR2 from maven. =)
This works:
|
This does not work:
Error:
Unable to resolve artifact: Missing:
| --
| 1) jgroups:jgroups-all:jar:2.5.0-BETA2
I can always patch the libs manually, but...
Using REPEATABLE_READ as the transaction isolation will not stop concurrent
read access to the node, if I'm not mistaken.
My interpretation is that atijms wants to get an exclusive lock on '/a/b/n1'
regardless of read or write operation. If this is correct then REPEATABLE_READ
would not solve the iss
As a note, I finally got it working by adding the arguments:
-javaagent:lib/jboss-aop-jdk50.jar
-Djboss.aop.path=src/conf/META-INF/pojocache-aop.xml
and copying the pojocache-aop.xml from the distribution's /resources
directory.
I must say, I do find the documentation for this rather... confusing. It is not
Did you find out what the problem was? I'm fiddling around with pojocache
myself and got the same error when running the examples from the distribution.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4048797#4048797
We are currently using the 2.0 beta 2 release, and have some problems with the
state transfer.
We are using regional based classloaders with an example cache like:
/a/ -> ClassLoader A
/b/ -> ClassLoader B
The startup sequence of the cache looks like:
1. Create cache
2. Register classloaders
3
I'm using 2.0 in a standalone application. When using 1.4 I got the JBoss Cache
registered to JMX by default (I think). Now when we are testing out 2.0, it
doesn't seem to get registered to JMX anymore.
However, registering the cache manually works perfectly fine:
| CacheImpl cache = createC
If it helps, I extended DefaultCacheFactory and added my own little factory
method that accepts an input stream.
The method looks like:
| public Cache createCache(InputStream is, boolean start)
|     throws ConfigurationException {
|   XmlConfigurationParser parser = new XmlConfigurationParser();
Cheers!
I'll look into the PessimisticLockInterceptor
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4032778#4032778
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4032778
___
I'm using the 2.0 Habanero cache and I want to access my cache as a CacheSPI.
The reason is that I want to access nodes as NodeSPI so I can inspect (and
possibly upgrade) the node lock.
Now,
I started to check the Javadoc on how to get the CacheSPI instead of the
regular Cache interface, but
I was wondering if there is any support built in or a preferred way of setting
up a replication gateway for having a treecache running in two geographically
separated locations.
The setup is that I want to run a treecache at site A and a treecache at site
B. I want them to share the same data,
Well, at least I don't have to find it out the hard way now...
I guess I will start looking into other ways of propagating the state
information needed through other channels, be it a second cache or something
else.
Cheers
I am currently developing a system using the regular TreeCache. In the cache
we store stateful objects. The nodes in the system all have a designated set
of objects that they manipulate. In essence, the cache is mostly for fail-over
reasons. Therefore I find that using buddy replication together