Re: tomcat 5.0.19 cluster problem

2004-02-23 Thread Antonio Fiol Bonnín
Filip Hanik (lists) wrote:

In any case could a cluster node that ran out of memory destroy the
entire cluster?
   

it shouldn't. It can temporarily slow the cluster down if the node that is
down is still accepting connections and broadcasting its membership.
I'm running a load test right now with the latest version to make sure that
I am not BSing you here :)
Filip

 

Hi,

If you use in-memory replication, and the source of your 
OutOfMemoryError is that you have too many objects stored in sessions, 
or those objects are too big, or whatever, I think this could bring down 
your entire cluster. What do you think, Filip?
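Antonio's point can be made concrete: with DeltaManager, a session attribute is serialized and shipped to every other node, so its serialized size is effectively multiplied by the cluster size. A rough sketch of gauging that cost (SessionSizeEstimate is an illustrative name, not a Tomcat class):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class SessionSizeEstimate {

    // Serialize the object the same general way session replication does
    // before shipping it to the other nodes, and report the byte count.
    static int serializedSize(Serializable obj) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return buf.size();
    }

    public static void main(String[] args) {
        // A hypothetical session attribute of roughly 1 MB.
        ArrayList<byte[]> attribute = new ArrayList<byte[]>();
        for (int i = 0; i < 100; i++) {
            attribute.add(new byte[10000]);
        }
        // In a 3-node cluster this attribute ends up held on every node,
        // so each such session costs roughly 3x this much cluster-wide.
        System.out.println("serialized bytes: " + serializedSize(attribute));
    }
}
```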

Antonio




RE: tomcat 5.0.19 cluster problem

2004-02-23 Thread Filip Hanik (lists)
Yes: three servers in a cluster means three times the amount of memory used
for session data.
Checking your -Xmx setting might be a good idea.
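For reference, one common way to raise the heap (the values below are only illustrative, not recommendations) is to export JAVA_OPTS before starting Tomcat, since catalina.sh hands it to the JVM:

```shell
# Illustrative heap sizes only -- tune to your application. With DeltaManager
# replication every node holds every session, so -Xmx must cover roughly
# (number of nodes) x (one node's own session data).
JAVA_OPTS="-Xms256m -Xmx512m"
export JAVA_OPTS
# catalina.sh passes JAVA_OPTS to the JVM on startup, e.g.:
#   $CATALINA_HOME/bin/catalina.sh start
echo "$JAVA_OPTS"
```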

-Original Message-
From: Antonio Fiol Bonnín [mailto:[EMAIL PROTECTED]
Sent: Monday, February 23, 2004 11:33 AM
To: Tomcat Users List
Subject: Re: tomcat 5.0.19 cluster problem


Filip Hanik (lists) wrote:

In any case could a cluster node that ran out of memory destroy the
entire cluster?



it shouldn't. It can temporarily slow the cluster down if the node that is
down is still accepting connections and broadcasting its membership.
I'm running a load test right now with the latest version to make sure that
I am not BSing you here :)

Filip




Hi,

If you use in-memory replication, and the source of your
OutOfMemoryError is that you have too many objects stored in sessions,
or those objects are too big, or whatever, I think this could bring down
your entire cluster. What do you think, Filip?


Antonio






RE: tomcat 5.0.19 cluster problem

2004-02-22 Thread Filip Hanik (lists)
I haven't tested clustering on Solaris 9, but on Linux it works great.
There is something funky with your multicast; as you can see, members are
being added and disappearing all the time.
Try increasing your mcastDropTime; that should keep members in the cluster
longer.
Contact me at my apache.org email for help with debugging.
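For example, in the Membership element from the configuration quoted in this thread, mcastDropTime (milliseconds without a heartbeat before a member is dropped) could be raised well above mcastFrequency (the heartbeat interval). The 30000 here is only an illustrative value:

```xml
<!-- mcastDropTime raised from 3000 to an illustrative 30000 ms -->
<Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.3"
    mcastPort="45564"
    mcastFrequency="500"
    mcastDropTime="30000"/>
```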

Filip

-Original Message-
From: Ilyschenko, Vlad [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 22, 2004 5:15 PM
To: [EMAIL PROTECTED]
Subject: tomcat 5.0.19 cluster problem


Hi,



We are running three Solaris9 boxes with tomcat 5.0.19 on them. Cluster
configuration is as follows:



<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true">

    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.3"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>

    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="auto"
        tcpListenPort="4001"
        tcpSelectorTimeout="100"
        tcpThreadCount="60"/>

    <Sender
        className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
        replicationMode="pooled"/>

    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>

</Cluster>



Yesterday Tomcat on one of the servers ran out of memory, which coincided
with a clustered web application hanging across all three servers. All
Tomcat instances started exhibiting cluster problems in one shape or
another. I wonder if the 5.0.19 cluster code has memory leaks; I had not
experienced OutOfMemory problems on these boxes in over a month of running
5.0.16.



In any case could a cluster node that ran out of memory destroy the
entire cluster?





You can find log fragments from those three boxes below:



Box #1 (IP: 192.168.64.40) - the one with memory problems:



22 Feb 2004 00:26:43 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.36:4001,192.168.64.36,4001, alive=112504278]

22 Feb 2004 00:26:43 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:
4001,192.168.64.36,4001, alive=112532838]

22 Feb 2004 00:26:53 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.36:4001,192.168.64.36,4001, alive=112532838]

22 Feb 2004 00:26:53 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:
4001,192.168.64.36,4001, alive=112540488]

22 Feb 2004 00:26:58 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.36:4001,192.168.64.36,4001, alive=112540488]

22 Feb 2004 00:26:58 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:
4001,192.168.64.36,4001, alive=112548138]

22 Feb 2004 00:27:04 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.41:4001,192.168.64.41,4001, alive=113937290]

22 Feb 2004 00:27:04 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:
4001,192.168.64.41,4001, alive=113967890]

22 Feb 2004 00:27:09 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.36:4001,192.168.64.36,4001, alive=112548138]

22 Feb 2004 00:27:09 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:
4001,192.168.64.36,4001, alive=112558338]

22 Feb 2004 00:27:19 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.41:4001,192.168.64.41,4001, alive=113967890]

22 Feb 2004 00:27:19 INFO Cluster-MembershipReceiver - Replication
member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:
4001,192.168.64.41,4001, alive=113981150]

22 Feb 2004 00:27:27 ERROR TP-Processor16 - An exception or error
occurred in the container during the request processing

java.lang.OutOfMemoryError

22 Feb 2004 00:27:27 DEBUG Finalizer - result finalized

22 Feb 2004 00:27:27 INFO Cluster-MembershipReceiver - Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.
64.36:4001,192.168.64.36,4001, alive=112558338]

22 Feb 2004 00:27:27 INFO Cluster-MembershipReceiver - Replication
member

RE: tomcat 5.0.19 cluster problem

2004-02-22 Thread Filip Hanik (lists)
In any case could a cluster node that ran out of memory destroy the
entire cluster?

it shouldn't. It can temporarily slow the cluster down if the node that is
down is still accepting connections and broadcasting its membership.
I'm running a load test right now with the latest version to make sure that
I am not BSing you here :)

Filip

-Original Message-
From: Filip Hanik (lists) [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 22, 2004 5:51 PM
To: Tomcat Users List
Subject: RE: tomcat 5.0.19 cluster problem


I haven't tested clustering on Solaris 9, but on Linux it works great.
There is something funky with your multicast; as you can see, members are
being added and disappearing all the time.
Try increasing your mcastDropTime; that should keep members in the cluster
longer.
Contact me at my apache.org email for help with debugging.

Filip
