After looking closer at the logs, the exceptions were being logged every 10 
seconds, not in a tight loop as I had originally thought. It certainly felt 
like a tight loop while watching two production servers throw new errors ;)

So it appears that a single session got corrupted and the 
ContainerBackgroundProcessor was trying to clean up the same bad session on 
two of the three servers in the cluster. It's unclear how the third did not 
get caught up in the problem as well.

Finding whatever invalid thing is being put into our sessions is a bit of a 
needle in a haystack; it's a fairly large app. We've had session replication 
(with failover / deserialization of replicated sessions) working well for 
several months. It was only when we turned on passivation that this error 
started occurring.
 

I would think that if a session serialized without errors (granted, errors 
might be happening and going uncaught during serialization), it should 
deserialize without errors too. To me this implies the session was corrupted 
or changed while in its serialized form.
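One way to narrow the haystack is to round-trip each session attribute through Java serialization in-process, before passivation ever touches it: any attribute that fails here is a candidate for the corruption. A minimal sketch, assuming nothing about JBoss internals (the class and method names below are hypothetical; in a webapp you would iterate the real HttpSession attribute names instead of a plain Map):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical diagnostic helper, not a JBoss API: round-trips each
// attribute value through serialize + deserialize and reports the ones
// that fail, so a bad attribute can be found before passivation hits it.
public class SessionAttributeCheck {

    // Serialize and immediately deserialize one value; throws if either step fails.
    static Object roundTrip(Object value) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    // Map of attribute name -> the exception its value produced, for values
    // that do not survive a round trip.
    static Map<String, Exception> findBadAttributes(Map<String, Object> attrs) {
        Map<String, Exception> bad = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : attrs.entrySet()) {
            try {
                roundTrip(e.getValue());
            } catch (Exception ex) {
                bad.put(e.getKey(), ex);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("userName", "alice");       // serializable, passes
        attrs.put("handle", new Object());    // not Serializable, fails
        System.out.println(findBadAttributes(attrs).keySet()); // prints [handle]
    }
}
```

Note this only catches failures reproducible in-process; it would not catch the case suspected here, where the bytes are altered after serialization, but it rules the attributes themselves in or out.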

Is it possible that using the org.jboss.cache.loader.FileCacheLoader could 
cause such corruption of a session? The docs do mention using caution with it 
in high-load environments. I'm currently working on testing the JDBC-based 
loader for passivation.
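For reference, swapping the file-based loader for the JDBC one is a cache-loader configuration change in the JBoss Cache config. A rough sketch of the relevant fragment, assuming a JBoss Cache 1.x/2.x-era configuration (the JDBC property values below are placeholders for your own driver, URL, and credentials):

```xml
<attribute name="CacheLoaderConfiguration">
  <config>
    <!-- passivation: evicted sessions are moved to the store, not duplicated -->
    <passivation>true</passivation>
    <shared>false</shared>
    <cacheloader>
      <!-- JDBC-backed store instead of org.jboss.cache.loader.FileCacheLoader -->
      <class>org.jboss.cache.loader.JDBCCacheLoader</class>
      <properties>
        cache.jdbc.table.name=jbosscache
        cache.jdbc.table.create=true
        cache.jdbc.driver=org.hsqldb.jdbcDriver
        cache.jdbc.url=jdbc:hsqldb:mem:jbosscache
        cache.jdbc.user=sa
        cache.jdbc.password=
      </properties>
    </cacheloader>
  </config>
</attribute>
```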

Thanks for writing up those two JIRA issues; they directly describe what we 
ran into.



View the original post : 
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4261616#4261616

_______________________________________________
jboss-user mailing list
jboss-user@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/jboss-user
