Anybody using the autofarming feature on a Tomcat 5.5 cluster?
Hey all, can anyone share any experiences with web application autofarming on a Tomcat 5.5 cluster? Is it reliable? Do you use it in production? How big is the cluster? -- Thank you in advance, Edmon Begoli http://blogs.ittoolbox.com/eai/software
Using SATA vs. SCSI disks in Tomcat 5.5.9 cluster servers
Respected friends of the list, I am specifying cluster nodes (hosts) to use with Tomcat 5.5.9 and an Apache server as dispatcher. I have doubts about the disk specification, because the new SATA disks offer great performance and speed at a comparatively lower cost than SCSI disks. Considering web applications, where the WAR files are loaded into memory, and with enough memory (2 GB) that no virtual memory is used (Windows XP), would there be any advantage to using SCSI disks? Or can we expect the same performance from 7200 rpm SATA disks as from 7200 rpm SCSI disks? Could anybody clear up these doubts for me? Regards, Acacio Furtado Costa Pesquisa e Tecnologia GIA - Magnesita S/A ((0xx31) 3368-1349 * [EMAIL PROTECTED]
RE: Using SATA vs. SCSI disks in Tomcat 5.5.9 cluster servers
From: Acácio Furtado Costa [mailto:[EMAIL PROTECTED]] Considering web applications, where the WAR files are loaded into memory, and with enough memory (2 GB) that no virtual memory is used (Windows XP), would there be any advantage to using SCSI disks, or can we expect the same performance from 7200 rpm SATA disks as from 7200 rpm SCSI disks? You will only really be able to answer this by benchmarking your application and finding out how much of its time it spends accessing the disk. SCSI disks and controllers typically have two other advantages over SATA: faster average seek times and more write cache. However, if the performance under SATA is good enough, you will not want to spend the extra for SCSI. I support a number of live applications deployed on servers using mirrored SATA disks, as they are good enough for the clients. - Peter - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
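Peter's advice to measure rather than guess can be sketched with a crude sequential-write benchmark. This is a minimal sketch, not a real benchmark tool: the temp file, 64 KB block size, and 64 MB total are arbitrary assumptions, and a realistic test should reproduce the application's actual I/O pattern.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskWriteBench {
    // Writes `blocks` 64 KB blocks to `f` and syncs to disk, so the OS
    // write cache does not hide the device speed. Returns elapsed ms.
    static long timeWrite(File f, int blocks) throws IOException {
        byte[] block = new byte[64 * 1024];
        long start = System.nanoTime();
        FileOutputStream out = new FileOutputStream(f);
        try {
            for (int i = 0; i < blocks; i++) {
                out.write(block);
            }
            out.getFD().sync(); // force data to the physical disk
        } finally {
            out.close();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("bench", ".dat");
        f.deleteOnExit();
        // 1024 blocks of 64 KB = 64 MB of sequential writes
        System.out.println("wrote 64 MB in " + timeWrite(f, 1024) + " ms");
    }
}
```

Run it on both a SATA and a SCSI machine; if both numbers are far below your latency budget, the cheaper disks will do.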
Re: Tomcat 5.5.9 Cluster Error
Hi Peter, I resolved my problem. I appended the mcastBindAddress attribute to the ClusterMembership element and set 127.0.0.1 as its value. Thanks anyway. -Toshio

Peter Rossbach wrote: Which OS are you using, and is your firewall open for UDP port 45564 and TCP port 4001? Is your network interface enabled for multicast packets? Why are you using the deployer? Peter
Re: Tomcat 5.5.9 Cluster Error
Hi Peter, I'm using Windows XP SP1 with the firewall disabled, and multicast is available. I was able to set up a Tomcat cluster with 5.0.16, but I failed with 5.5.9 and 5.0.28. With 5.0.16 it was enough to simply uncomment the Cluster and Valve elements in server.xml, but the same approach did not get the cluster working with 5.5.9 or 5.0.28. Is there anything different in how to configure each version? Thanks, --Toshio

Peter Rossbach [EMAIL PROTECTED] wrote: Which OS are you using, and is your firewall open for UDP port 45564 and TCP port 4001? Is your network interface enabled for multicast packets? Why are you using the deployer? Peter
Tomcat 5.5.9 Cluster Error
Hi all, I'm building a Tomcat cluster environment on the same box. After configuring server.xml and starting the Tomcat instance, I got the error message in the attached file. Here is my environment: Apache 2.0.54 (MPM worker), Tomcat 5.5.9, JK connector mod_jk 1.2.14.1, JDK J2SE 1.5.0_04. Also, I modified web.xml and added <distributable/>. What am I missing? I'm at a loss. Any help would be very much appreciated. Thanks __ Save the earth http://pr.mail.yahoo.co.jp/ondanka/

<!-- Example Server Configuration File -->
<!-- Note that component elements are nested corresponding to their parent-child relationships with each other -->
<!-- A Server is a singleton element that represents the entire JVM, which may contain one or more Service instances. The Server listens for a shutdown command on the indicated port. Note: A Server is not itself a Container, so you may not define subcomponents such as Valves or Loggers at this level. -->
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
  <!-- Global JNDI resources -->
  <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <!-- Define the Tomcat Stand-Alone Service -->
  <Service name="Catalina">
    <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150"
               minSpareThreads="25" maxSpareThreads="75" enableLookups="false"
               redirectPort="8443" acceptCount="100" connectionTimeout="2"
               disableUploadTimeout="true" />
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
    <!-- Define the top level container in our container hierarchy -->
    <Engine jvmRoute="lb1" name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
            xmlValidation="false" xmlNamespaceAware="false">
        <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
                 managerClassName="org.apache.catalina.cluster.session.DeltaManager"
                 expireSessionsOnShutdown="false" useDirtyFlag="true"
                 notifyListenersOnReplication="true">
          <Membership className="org.apache.catalina.cluster.mcast.McastService"
                      mcastAddr="228.0.0.4" mcastPort="45564"
                      mcastFrequency="500" mcastDropTime="3000"/>
          <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
                    tcpListenAddress="auto" tcpListenPort="4001"
                    tcpSelectorTimeout="100" tcpThreadCount="6"/>
          <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
                  replicationMode="pooled" ackTimeout="15000"/>
          <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
          <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/" watchEnabled="false"/>
        </Cluster>
      </Host>
    </Engine>
  </Service>
</Server>
Re: Tomcat 5.5.9 Cluster Error
Sorry, the attached file was not sent; I'll try again. [error message]

ERROR main org.apache.catalina.cluster.tcp.SimpleTcpCluster - Unable to start cluster.
java.net.SocketException: error setting options
        at java.net.PlainDatagramSocketImpl.join(Native Method)
        at java.net.PlainDatagramSocketImpl.join(PlainDatagramSocketImpl.java:172)
        at java.net.MulticastSocket.joinGroup(MulticastSocket.java:276)
        at org.apache.catalina.cluster.mcast.McastServiceImpl.start(McastServiceImpl.java:174)
        at org.apache.catalina.cluster.mcast.McastService.start(McastService.java:217)
        at org.apache.catalina.cluster.mcast.McastService.start(McastService.java:167)
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.start(SimpleTcpCluster.java:418)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1002)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1012)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
        at org.apache.catalina.core.StandardService.start(StandardService.java:450)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:683)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:537)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:271)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:409)
ERROR main org.apache.catalina.startup.Catalina - Catalina.start: LifecycleException: java.net.SocketException: error setting options
        at org.apache.catalina.cluster.tcp.SimpleTcpCluster.start(SimpleTcpCluster.java:434)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1002)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1012)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
        at org.apache.catalina.core.StandardService.start(StandardService.java:450)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:683)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:537)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:271)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:409)
Re: Tomcat 5.5.9 Cluster Error
Which OS are you using, and is your firewall open for UDP port 45564 and TCP port 4001? Is your network interface enabled for multicast packets? Why are you using the deployer? Peter

SUGAHARA Toshio wrote: Sorry, the attached file was not sent; I'll try again. [error message] ERROR main org.apache.catalina.cluster.tcp.SimpleTcpCluster - Unable to start cluster. java.net.SocketException: error setting options
apache/tomcat/mysql cluster info request
Good morning, I'm looking for suggestions on a large-scale Tomcat cluster for one deployed app. I currently run our app happily with all server apps smushed onto one server: Apache web server 2.0.52 / mod_jk / Tomcat 5.0.28 / MySQL 4.1.7 on Fedora Core 3. My company wants to deploy a site that would generate a huge amount of traffic: lots of huge files, lots of CPU use. I've been reading the online resources on cluster setups, but I still have some questions I'm hoping someone with a successful cluster can help with. I'd like a setup with redundant pipes, so a horizontal cluster on all tiers is important for growth and failover. I'm trying to find information about a setup that uses a hardware load balancer in front of multiple Apache web servers, which are in front of multiple Tomcat app servers, with a MySQL master/slave at the back. I'm thinking the Tomcat servers may need to be connected to the application data filesystem via a NAS. I'm not sure about the performance hit, but it is critical that the data served is shared. I've read a bunch of great articles on using the Apache web server as the load balancer for the Tomcat cluster, but I'm concerned about the web server failing. Plus, I use the web server tier for rewrite flexibility. I think I want to start with two of each tier (web server, application server, database server), with multiple Tomcat engines running on the application servers on some memory-rich hardware. From my current setup, my load tests have always shown the bottleneck to be Java memory use (JDK 1.5). Thank you for your time. -Kiarna
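For the Apache-in-front-of-multiple-Tomcats piece, the usual mod_jk approach is a load-balancer worker. A minimal sketch of workers.properties, with the worker names and hosts as placeholder assumptions (each Tomcat's Engine jvmRoute should match its worker name so sticky sessions route correctly):

```ini
# workers.properties - mod_jk load balancing across two Tomcat instances
worker.list=loadbalancer

worker.tomcat1.type=ajp13
worker.tomcat1.host=app1.example.com
worker.tomcat1.port=8009
worker.tomcat1.lbfactor=1

worker.tomcat2.type=ajp13
worker.tomcat2.host=app2.example.com
worker.tomcat2.port=8009
worker.tomcat2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat1,tomcat2
worker.loadbalancer.sticky_session=1
```

In httpd.conf you would then point requests at the lb worker, e.g. JkMount /* loadbalancer. With two Apache servers behind the hardware load balancer, each carries the same workers.properties, so either web server can fail without taking out the app tier.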
Re: How to query for number of active participants in the tomcat 5.5 cluster
You would need to write a component that queries the cluster classes (internal Tomcat components) yourself. I believe you can reach the cluster object through JMX and through the Tomcat classes (Host etc.); the interface method CatalinaCluster.getMembers() returns all members in a cluster. Filip

Edmon Begoli wrote: Hi, is it possible to query the host Tomcat for the number of active participants in the cluster it belongs to? If yes, can you please point me to the API, and possibly examples? Thank you, Edmon
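Filip's JMX suggestion can be sketched as follows. The ObjectName pattern "Catalina:type=ClusterMembership,*" is an assumption about how the 5.5 membership MBean is registered (verify the actual name with jconsole); run outside a clustered Tomcat JVM, the query simply matches nothing.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ClusterMemberCount {
    // Counts MBeans whose names match `pattern` on the given connection.
    // Run inside the Tomcat JVM, or point a JMXConnector at a remote one.
    static int countMatching(MBeanServerConnection conn, String pattern) throws Exception {
        ObjectName query = new ObjectName(pattern);
        Set<ObjectName> names = conn.queryNames(query, null);
        return names.size();
    }

    public static void main(String[] args) throws Exception {
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical pattern for the cluster membership MBean:
        int n = countMatching(conn, "Catalina:type=ClusterMembership,*");
        System.out.println("cluster membership mbeans: " + n);
    }
}
```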
Re: How to query for number of active participants in the tomcat 5.5 cluster
Hey, today I checked in a JMX object for McastService (the Membership implementation), so you can access the membership list via an MBean. :-) Every sender is also an MBean (type IDataSender). Look at the MBean list via jconsole. Peter

Filip Hanik - Dev Lists wrote: You would need to write a component that queries the cluster classes (internal Tomcat components) yourself. I believe you can reach the cluster object through JMX and through the Tomcat classes; the interface method CatalinaCluster.getMembers() returns all members in a cluster. Filip
Re: How to query for number of active participants in the tomcat 5.5 cluster
Hi, to answer my own question and for the sake of others: I have figured out how to query Tomcat for the number of active members in the cluster (that replicate sessions) using the available MBeans. To make it easily available, I put detailed, step-by-step instructions on my blog: http://blogs.ittoolbox.com/eai/software/archives/004546.asp Please feel free to comment with any additions or better ways. Thank you, Edmon http://blogs.ittoolbox.com/eai/software
How to query for number of active participants in the tomcat 5.5 cluster
Hi, is it possible to query the host Tomcat for the number of active participants in the cluster that it belongs to? If yes, can you please point me to the API, and possibly examples? Thank you, Edmon
Tomcat 5 cluster spins out of control periodically
I have two machines running Tomcat 5.0 with Java 1.4, clustered together. Every once in a while, one or the other just starts spinning out of control and burning wall-clock time. This is on HP-UX 11, on new machines with lots of resources (16 GB of RAM). Any ideas? Thanks, Micky
Re: Tomcat 5 cluster spins out of control periodically
Are you using Apache in front of Tomcat? Have you taken thread dumps in a good state vs. a 100% CPU state? Sounds like an infinite loop. -Tim

Micky Williamson wrote: I have two machines running Tomcat 5.0 with Java 1.4, clustered together. Every once in a while, one or the other just starts spinning out of control and burning wall-clock time. This is on HP-UX 11, on new machines with lots of resources (16 GB of RAM).
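On a Unix box, Tim's thread-dump suggestion amounts to sending SIGQUIT to the JVM. A small sketch; the pgrep pattern assumes Tomcat was started via the standard Bootstrap class, and on Tomcat the dump lands in catalina.out:

```shell
#!/bin/sh
# Send SIGQUIT to a running Tomcat JVM to get a full thread dump
# (captured in catalina.out). Take several dumps while the CPU is pegged
# and diff them to find the thread that never leaves the same stack frame.
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)
if [ -n "$PID" ]; then
  kill -QUIT "$PID"
  echo "sent SIGQUIT to pid $PID"
else
  echo "no tomcat process found"
fi
```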
Re: Tomcat 5 cluster spins out of control periodically
No, not using Apache as a front end. Haven't done any thread dumps... yet. I was just putting out feelers to see if there is any history of this.

Tim Funk wrote: Are you using Apache in front of Tomcat? Have you taken thread dumps in a good state vs. a 100% CPU state? Sounds like an infinite loop. -Tim
Re: Tomcat 5 cluster spins out of control periodically
It is probably the multicast receiver that freaks out. This can happen if the network cable is unplugged. I will add a sleep in the receiver so that even in this case it won't spin. So in the scenario above it's a known problem; I'm checking in a fix right now. If you do provide a dump, however, we will know more — or even better, profile it. Filip

- Original Message - From: Micky Williamson [EMAIL PROTECTED] To: Tomcat Users List tomcat-user@jakarta.apache.org Sent: Thursday, February 10, 2005 1:24 PM Subject: Re: Tomcat 5 cluster spins out of control periodically

No, not using Apache as a front end. Haven't done any thread dumps... yet. I was just putting out feelers to see if there is any history of this.
Re: Tomcat 5 cluster spins out of control periodically
Filip, thanks for the information. I did notice that on our test servers it doesn't happen as frequently. In the production server.xml we had:

<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter" replicationMode="pooled"/>

In the test one it was:

<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter" replicationMode="asynchronous"/>

Which is the better way? Filip Hanik - Dev wrote: [...]
Re: Tomcat 5 cluster spins out of control periodically
Ah, you might be having problems with the java.nio package (TCP), as the conf below that you mention uses those settings. asynchronous doesn't require an ack message, and some VMs on some platforms have problems sending data back on an NIO channel. Linux had this issue unless you set LD_ASSUME_KERNEL, for example. Filip - Original Message - From: Micky Williamson [EMAIL PROTECTED] To: Tomcat Users List tomcat-user@jakarta.apache.org Sent: Thursday, February 10, 2005 2:44 PM Subject: Re: Tomcat 5 cluster spins out of control periodically [...]
Re: Tomcat 5 cluster spins out of control periodically
Hmm, is there any difference in performance with pooled vs async? The OS is HP-UX 11. Filip Hanik - Dev wrote: [...]
Re: Tomcat 5 cluster spins out of control periodically
There have been a few articles; check onjava.com for example. asynchronous will result in a faster response time, but doesn't guarantee that the session gets replicated in time for the next request. Filip - Original Message - From: Micky Williamson [EMAIL PROTECTED] To: Tomcat Users List tomcat-user@jakarta.apache.org Sent: Thursday, February 10, 2005 3:43 PM Subject: Re: Tomcat 5 cluster spins out of control periodically [...]
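To make the trade-off discussed above concrete, here is a sketch of the two Sender variants side by side, using the same element syntax as the configurations quoted in this thread (the comments summarize Filip's description, not official documentation):

```xml
<!-- pooled: replicates over a pool of connections and waits for an
     acknowledgement, so the session change is in place before the
     response completes -->
<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
        replicationMode="pooled"/>

<!-- asynchronous: queues the change and returns immediately; faster
     responses, but no guarantee the session is replicated in time for
     the next request hitting another node -->
<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
        replicationMode="asynchronous"/>
```

With sticky sessions on the load balancer, asynchronous is usually acceptable; without stickiness, pooled is the safer choice.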
Tomcat 5.0.27 Cluster Issues
Hi all, I have some problems with my Tomcat cluster configuration. I'm doing the test with only two machines (Windows 2K, Tomcat 5.0.27 and the ISAPI filter). I can't get the cluster set up correctly. Right now I'm using the default setup on both machines. As you can see in the logs, each machine creates its own cluster, which I think is not the right behaviour. I would very much appreciate any help. We are working on a big REAL project in Spain, and this is my only remaining problem. Thank you very much in advance; I await your answer. Best regards.

Configuration, Machine 1:

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         name="KutxaNetCluster" printToScreen="true"
         expireSessionsOnShutdown="false" useDirtyFlag="true">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="239.26.102.4" mcastPort="45564"
              mcastFrequency="500" mcastDropTime="3000"/>
  <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto" tcpListenPort="4010"
            tcpSelectorTimeout="100" tcpThreadCount="6"/>
  <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
          replicationMode="pooled"/>
  <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
         debug="10" verbosity="4"/>
</Cluster>

Machine 2 has exactly the same configuration.
And the logs in the machine are: Machine 1 07-dic-2004 18:29:50 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 2188 ms 07-dic-2004 18:29:51 org.apache.catalina.core.StandardService start INFO: Arrancando servicio Catalina 07-dic-2004 18:29:51 org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/5.0.27 07-dic-2004 18:29:51 org.apache.catalina.core.StandardHost start INFO: Desactivada la validación XML 07-dic-2004 18:29:51 org.apache.catalina.cluster.tcp.SimpleTcpCluster start INFO: Cluster is about to start 07-dic-2004 18:29:51 org.apache.catalina.cluster.mcast.McastService start INFO: Sleeping for 2000 secs to establish cluster membership 07-dic-2004 18:29:53 org.apache.catalina.core.StandardHost getDeployer INFO: Create Host deployer for direct deployment ( non-jmx ) 07-dic-2004 18:29:53 org.apache.catalina.core.StandardHostDeployer install INFO: Procesando URL de archivo de configuración de Contexto file:C:\Tomcat 5.0\conf\Standalone\localhost\admin.xml 07-dic-2004 18:29:54 org.apache.struts.util.PropertyMessageResources init INFO: Initializing, config='org.apache.struts.util.LocalStrings', returnNull=true 07-dic-2004 18:29:54 org.apache.struts.util.PropertyMessageResources init INFO: Initializing, config='org.apache.struts.action.ActionResources', returnNull=true 07-dic-2004 18:29:55 org.apache.struts.util.PropertyMessageResources init INFO: Initializing, config='org.apache.webapp.admin.ApplicationResources', returnNull=true 07-dic-2004 18:29:57 org.apache.catalina.core.StandardHostDeployer install INFO: Procesando URL de archivo de configuración de Contexto file:C:\Tomcat 5.0\conf\Standalone\localhost\manager.xml 07-dic-2004 18:29:58 org.apache.catalina.core.StandardHostDeployer install INFO: Procesando URL de archivo de configuración de Contexto file:C:\Tomcat 5.0\conf\Standalone\localhost\ROOT.xml Creating ClusterManager for context using class org.apache.catalina.cluster.session.DeltaManager 
07-dic-2004 18:29:59 org.apache.catalina.cluster.session.DeltaManager start INFO: Starting clustering manager...: 07
Re: Tomcat 5.0.27 Cluster Issues
By looking at the following log entries:

INFO: Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://172.26.102.2:4010,172.26.102.2,4010, alive=232390]
07-dic-2004 18:34:46 org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://172.26.102.2:4010,172.26.102.2,4010, alive=357296]
07-dic-2004 18:34:49 org.apache.catalina.cluster.tcp.SimpleTcpCluster memberDisappeared
INFO: Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://172.26.102.2:4010,172.26.102.2,4010, alive=357296]

it appears that multicasting is not working properly in your network. Filip - Original Message - From: Pablo Carretero [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, December 07, 2004 11:44 AM Subject: Tomcat 5.0.27 Cluster Issues [...]
Tomcat in Cluster
Hello, a specific jk2 + lb question please. Say I have Apache HTTP Server + 2 Tomcat workers. Each worker is defined in the same lb group and serves the same webapp. - It works fine in the optimal case (2 Tomcats + Apache running). - It works fine when 1 Tomcat crashes. The problem is when the webapp is stopped (e.g. via the manager) on one worker. Requests are still sent to both Tomcats, even the one that has no webapp available, so the answer to the client is a nice HTTP 404 error code. IMHO this is because jk2 doesn't parse the HTTP answer. Is there a known issue or workaround, please? Regards, Arnaud
Tomcat 5 Cluster on blades with multiple network adapters
Title: Tomcat 5 Cluster on blades with multiple network adapters I am trying to get a Tomcat cluster running on blades where two network adapters are active. I figured out that the mcastBindAddr parameter has to be specified (mcastBindAddr = bind the multicast socket to a specific address). Does anyone have an example of the parameter? What does it look like?
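For what it's worth, a sketch of a Membership element with mcastBindAddr set. The address 10.0.0.5 is a placeholder for whichever adapter you want the multicast socket bound to; the remaining values are the commonly used defaults from other configurations in this thread:

```xml
<Membership className="org.apache.catalina.cluster.mcast.McastService"
            mcastAddr="228.0.0.4"
            mcastBindAddr="10.0.0.5"
            mcastPort="45564"
            mcastFrequency="500"
            mcastDropTime="3000"/>
```

All nodes must bind to adapters on the same subnet, or membership heartbeats will not reach each other.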
RE: tomcat 5.0.28 cluster with useDirtyFlag=false and NO session.setAttribute(...) within request.
Hello Filip, got it. I decided to stick with the DeltaManager and update the session programmatically. Thanks a lot for your time. Rolf -Original Message- From: Filip Hanik - Dev [mailto:[EMAIL PROTECTED] Sent: Wednesday, September 15, 2004 5:57 PM To: Tomcat Users List Subject: Re: tomcat 5.0.28 cluster with useDirtyFlag=false and NO session.setAttribute(...) within request. [...]
tomcat 5.0.28 cluster with useDirtyFlag=false and NO session.setAttribute(...) within request.
Hello all, We have the following tomcat cluster setup: - Reverse Proxy: an Apache reverse proxy which distributes every request to a different tomcat instance. Works fine. - Tomcat Cluster: cluster with 2 instances, pooled synchronization using mcast, useDirtyFlag=false. Replication as such works fine except for the following code:

A)
-- code snipped --
SessionContext sessionContext = getSessionContext(request); // session.getAttribute("session.context");
sessionContext.setSelectedParticipant(11); // previous value e.g. is 10
-- code snipped --

With the code above, we read an already and successfully replicated object (sessionContext) from the session. After fetching the object, we modify one of its values. Please note, we do not call session.setAttribute(...) in this scenario. Having useDirtyFlag=false, we expect the whole session to be replicated every time before the request has ended (pooled synchronization). However, on the next request (to the second tomcat instance) getSelectedParticipant() still returns 10. The following request to the first tomcat instance shows the changed value 11.

B) If we change our code to the following, session replication works and will show value 11 in both cases:

-- code snipped --
SessionContext sessionContext = getSessionContext(request); // session.getAttribute("session.context");
sessionContext.setSelectedParticipant(11); // previous value e.g. is 10
request.getSession().setAttribute(SessionContext.CONTEXT_KEY, sessionContext);
-- code snipped --

Please note that in scenario B) we have explicitly called session.setAttribute(...), i.e. we are replacing a session attribute. With this, the session gets replicated successfully. Did we misunderstand useDirtyFlag=false for scenario A) in combination with pooled synchronization? Any help is warmly appreciated. Best Regards, Rolf Schenk Rolf B. Schenk Equate+ IT Architecture Development CEFS EU UBS AG Buckhauserstrasse 22 8048 Zürich +41-1-2348594 [EMAIL PROTECTED]
Re: tomcat 5.0.28 cluster with useDirtyFlag=false and NO session.setAttribute(...) within request.
Hi Rolf, the use-dirty flag only works with the SimpleTcpReplicationManager, not the DeltaManager, so replace DeltaManager in server.xml with SimpleTcpReplicationManager. Note that this will replicate the entire session on each request; you are probably better off just fixing your code. Filip - Original Message - From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, September 15, 2004 3:24 AM Subject: tomcat 5.0.28 cluster with useDirtyFlag=false and NO session.setAttribute(...) within request. [...]
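The behaviour Rolf describes can be reproduced outside Tomcat with a toy session whose only change-detection hook is setAttribute(). The sketch below uses invented stand-in classes (MockSession, SessionContext), not Tomcat's own; it only illustrates why scenario A fails while scenario B works:

```java
import java.util.HashMap;
import java.util.Map;

// A manager that detects changes through setAttribute() never sees an
// in-place mutation of an attribute it already handed out.
public class DirtyFlagDemo {

    // Stand-in for a replicated session: only setAttribute() marks it dirty.
    static class MockSession {
        private final Map<String, Object> attrs = new HashMap<>();
        boolean dirty = false;

        Object getAttribute(String key) { return attrs.get(key); }

        void setAttribute(String key, Object value) {
            attrs.put(key, value);
            dirty = true; // this is the hook replication relies on
        }
    }

    // Stand-in for the SessionContext object from the thread above.
    static class SessionContext {
        private int selectedParticipant = 10;
        int getSelectedParticipant() { return selectedParticipant; }
        void setSelectedParticipant(int p) { selectedParticipant = p; }
    }

    public static void main(String[] args) {
        MockSession session = new MockSession();
        session.setAttribute("session.context", new SessionContext());
        session.dirty = false; // pretend the initial state was replicated

        // Scenario A: mutate in place -- the session never notices.
        SessionContext ctx = (SessionContext) session.getAttribute("session.context");
        ctx.setSelectedParticipant(11);
        System.out.println("after in-place mutation, dirty = " + session.dirty); // false

        // Scenario B: re-set the attribute -- now the change is visible.
        session.setAttribute("session.context", ctx);
        System.out.println("after setAttribute, dirty = " + session.dirty); // true
    }
}
```

This is why Filip's advice holds: either call setAttribute() after each mutation, or use a manager that replicates the whole session regardless of the flag.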
Tomcat 4 cluster - drops members
Hello, I have an application deployed on a tomcat 4.1.30 cluster. I retrieved a jar from http://cvs.apache.org/~fhanik/ , a tomcat 5 clustering jar that has been made portable to tomcat 4. We followed the instructions and set up the server.xml for the cluster. The application is deployed using the ant auto-deployment task.

<Context path="/test" docBase="C:\server\tomcat41\webapps\testtapp" debug="0">
  <Valve className="org.apache.catalina.session.ReplicationValve"
         filter=".*\.gif;.*\.jpg;.*\.jpeg;.*\.js" debug="0"/>
  <Manager className="org.apache.catalina.session.InMemoryReplicationManager"
           debug="10" printToScreen="true" saveOnRestart="false"
           maxActiveSessions="-1" minIdleSwap="-1" maxIdleSwap="-1"
           maxIdleBackup="-1" pathname="null" printSessionInfo="true"
           checkInterval="10" expireSessionsOnShutdown="false"
           serviceclass="org.apache.catalina.cluster.mcast.McastService"
           mcastAddr="224.0.0.0" mcastPort="7800" mcastFrequency="500"
           mcastDropTime="5000" tcpListenAddress="auto" tcpListenPort="4001"
           tcpSelectorTimeout="100" tcpThreadCount="2" tcpKeepAliveTime="-1"
           synchronousReplication="true" useDirtyFlag="true"/>
</Context>

Now when the cluster is started the group is successfully established. Suppose we have two servers A and B. When a browser hits server A, a session is created on server A and an attempt is made to replicate the session to server B. During this process server B hangs and is knocked off the cluster. Any ideas why this could be happening? Nandish Rudra
Re: Tomcat 4 cluster - drops members
First of all I would recommend moving to T5, as my bandwidth to support T4 is non-existent. Second, I would recommend doing a thread dump when your server hangs; it will tell you where the error is. Filip - Original Message - From: Nandish Rudra [EMAIL PROTECTED] To: Tomcat Users List (E-mail) [EMAIL PROTECTED] Sent: Tuesday, August 03, 2004 11:54 AM Subject: Tomcat 4 cluster - drops members [...]
Re: tomcat 5.0.19 cluster problem
Filip Hanik (lists) wrote: In any case could a cluster node that ran out of memory destroy the entire cluster? it shouldn't; it can temporarily slow it down if the node that is down is accepting connections and broadcasting its membership. I'm running a load test right now with the latest version to make sure that I am not BS:ing you here :) Filip

Hi, If you use in-memory replication, and the source of your OutOfMemoryError is that you have too many objects stored in sessions, or those objects are too big, or whatever, I think this could bring down your entire cluster. What do you think, Filip? Antonio
RE: tomcat 5.0.19 cluster problem
Yes: three servers in a cluster means three times the amount of memory used for session data. Checking your -Xmx setting might be a good idea. Filip

-----Original Message----- From: Antonio Fiol Bonnín [mailto:[EMAIL PROTECTED] Sent: Monday, February 23, 2004 11:33 AM To: Tomcat Users List Subject: Re: tomcat 5.0.19 cluster problem
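Filip's warning can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, using made-up illustration numbers (5000 sessions per node, 20 KB per session are not figures from this thread): with all-to-all in-memory replication, every node holds a copy of every session, so per-node heap grows with cluster-wide load.

```python
# Rough estimate of the heap consumed by replicated session data on
# each node. With all-to-all in-memory replication, every node holds
# a copy of every session in the cluster.

def session_heap_per_node(sessions_per_node, avg_session_bytes, nodes):
    total_sessions = sessions_per_node * nodes   # cluster-wide sessions
    return total_sessions * avg_session_bytes    # bytes held on EACH node

# Hypothetical numbers: 3 nodes, 5000 sessions each, 20 KB per session.
per_node = session_heap_per_node(5000, 20 * 1024, 3)
print(round(per_node / 2**20), "MB per node")    # → 293 MB per node
```

If that figure approaches your -Xmx, an OutOfMemoryError on one node is simply the first symptom of a cluster-wide sizing problem.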
tomcat 5.0.19 cluster problem
Hi, We are running three Solaris 9 boxes with tomcat 5.0.19 on them. The cluster configuration is as follows:

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.3" mcastPort="45564"
              mcastFrequency="500" mcastDropTime="3000"/>
  <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto" tcpListenPort="4001"
            tcpSelectorTimeout="100" tcpThreadCount="60"/>
  <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
          replicationMode="pooled"/>
  <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
</Cluster>

Yesterday tomcat on one of the servers ran out of memory, which coincided with a clustered web application hang across all three servers. All tomcat instances started exhibiting cluster problems in one shape or another. I wonder if the 5.0.19 cluster has memory leaks; I have not experienced OutOfMemory problems on those boxes running 5.0.16 for over a month. In any case, could a cluster node that ran out of memory destroy the entire cluster? You can find the log fragments from those three boxes below.

Box #1 (IP: 192.168.64.40) - the one with memory problems:

22 Feb 2004 00:26:43 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112504278]
22 Feb 2004 00:26:43 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112532838]
22 Feb 2004 00:26:53 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112532838]
22 Feb 2004 00:26:53 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112540488]
22 Feb 2004 00:26:58 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112540488]
22 Feb 2004 00:26:58 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112548138]
22 Feb 2004 00:27:04 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:4001,192.168.64.41,4001, alive=113937290]
22 Feb 2004 00:27:04 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:4001,192.168.64.41,4001, alive=113967890]
22 Feb 2004 00:27:09 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112548138]
22 Feb 2004 00:27:09 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112558338]
22 Feb 2004 00:27:19 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:4001,192.168.64.41,4001, alive=113967890]
22 Feb 2004 00:27:19 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.41:4001,192.168.64.41,4001, alive=113981150]
22 Feb 2004 00:27:27 ERROR TP-Processor16 - An exception or error occurred in the container during the request processing java.lang.OutOfMemoryError
22 Feb 2004 00:27:27 DEBUG Finalizer - result finalized
22 Feb 2004 00:27:27 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112558338]
22 Feb 2004 00:27:27 INFO Cluster-MembershipReceiver - Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112573638]
22 Feb 2004 00:27:27 INFO TP-Processor16 - Unknown message 0
22 Feb 2004 00:27:34 INFO Cluster-MembershipReceiver - Received member disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.64.36:4001,192.168.64.36,4001, alive=112573638]
22 Feb 2004 00:27:34 INFO Cluster-MembershipReceiver - Replication member
RE: tomcat 5.0.19 cluster problem
I haven't tested clustering on Solaris 9, but on Linux it works great. There is something funky with your multicast: as you can see, members are added and disappearing all the time. Try increasing your mcastDropTime; that should keep the members in the cluster for a longer time. Contact me at my apache.org email for help with debugging. Filip

-----Original Message----- From: Ilyschenko, Vlad [mailto:[EMAIL PROTECTED] Sent: Sunday, February 22, 2004 5:15 PM To: [EMAIL PROTECTED] Subject: tomcat 5.0.19 cluster problem
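Filip's mcastDropTime suggestion amounts to a one-attribute change to the Membership element of the configuration above; a sketch (the 30000 ms value is an arbitrary illustration, not a recommendation from this thread):

```xml
<!-- A member is only declared dead after 30 s without a multicast
     heartbeat, instead of 3 s; the heartbeat interval itself
     (mcastFrequency) is unchanged. -->
<Membership className="org.apache.catalina.cluster.mcast.McastService"
            mcastAddr="228.0.0.3"
            mcastPort="45564"
            mcastFrequency="500"
            mcastDropTime="30000"/>
```

A larger drop time makes the cluster tolerant of slow heartbeats (for example during garbage-collection pauses) at the cost of detecting genuinely dead nodes more slowly.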
RE: tomcat 5.0.19 cluster problem
In any case could a cluster node that ran out of memory destroy the entire cluster?

It shouldn't; it can temporarily slow it down if the node that is down is accepting connections and broadcasting its membership. I'm running a load test right now with the latest version to make sure that I am not BS:ing you here :) Filip
RE: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Have you tried http://cvs.apache.org/~fhanik/ ? Filip

-----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Monday, September 08, 2003 2:32 AM To: Tomcat Users List Subject: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hello, I still have no idea what I could do to get the apache-tomcat cluster up and running. The current problem is that the browser session gets the right worker id appended (JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01), but requests are sent to all tomcat nodes for processing using round-robin, so tomcat loses the session. In the log files, the JSESSIONID is the same but two tomcat hosts process it:

tomcat1: 2003-08-29 16:49:26 RequestDumperValve[Standalone]: cookie=JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01
tomcat2: 2003-08-29 16:50:10 RequestDumperValve[Standalone]: header=cookie=JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01;

This could work if the session were replicated on all tomcat servers, but not in the current state. So it is the same round-robin problem as before. ((( Any idea will be highly appreciated, Yefym
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hi people, I have still made no progress. I've installed the newest version of tomcat, 4.1.27, and built mod_jk from the sources. Still the same: all requests are sent to one tomcat, but if I shut it down apache redirects the work to the other one. No error messages; in mod_jk.log I see that two balanced workers were found. Lots of web links say that load balancing works, but in reality they all point to the one from Pascal Forget. I even cannot view anything added by tomcat to my session cookie; on the Apache side, one tomcat gets all the requests. There are some conf files below. Any help or suggestion would be highly appreciated. P.S. My topic is still not in the tomcat-users list. So what are the criteria to put it there? -- Yefym, developer

MOD_JK.LOG [Fri Aug 29 08:21:47 2003] [jk_uri_worker_map.c (321)]: Into jk_uri_worker_map_t::uri_worker_map_open, match rule /ping/servlet/=loadbalancer was added [Fri Aug 29 08:21:47 2003] [jk_uri_worker_map.c (299)]: Into jk_uri_worker_map_t::uri_worker_map_open, suffix rule /ping/.jsp=loadbalancer was added [Fri Aug 29 08:21:47 2003] [jk_uri_worker_map.c (299)]: Into jk_uri_worker_map_t::uri_worker_map_open, suffix rule /ping/.do=loadbalancer was added [Fri Aug 29 08:21:47 2003] [jk_uri_worker_map.c (408)]: Into jk_uri_worker_map_t::uri_worker_map_open, there are 33 rules [Fri Aug 29 08:21:47 2003] [jk_uri_worker_map.c (422)]: jk_uri_worker_map_t::uri_worker_map_open, done [Fri Aug 29 08:21:47 2003] [jk_worker.c (88)]: Into wc_open [Fri Aug 29 08:21:47 2003] [jk_worker.c (222)]: Into build_worker_map, creating 3 workers [Fri Aug 29 08:21:47 2003] [jk_worker.c (228)]: build_worker_map, creating worker tomcat-worker-01 [Fri Aug 29 08:21:47 2003] [jk_worker.c (148)]: Into wc_create_worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (162)]: wc_create_worker, about to create instance tomcat-worker-01 of ajp13 [Fri Aug 29 08:21:47 2003] [jk_ajp13_worker.c (108)]: Into ajp13_worker_factory [Fri Aug 29 08:21:47 2003] [jk_worker.c (171)]: 
wc_create_worker, about to validate and init tomcat-worker-01 [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1219)]: Into jk_worker_t::validate [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1239)]: In jk_worker_t::validate for worker tomcat-worker-01 contact is 172.31.7.20:5007 [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1267)]: Into jk_worker_t::init [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1287)]: In jk_worker_t::init, setting socket timeout to 300 [Fri Aug 29 08:21:47 2003] [jk_worker.c (187)]: wc_create_worker, done [Fri Aug 29 08:21:47 2003] [jk_worker.c (238)]: build_worker_map, removing old tomcat-worker-01 worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (228)]: build_worker_map, creating worker tomcat-worker-02 [Fri Aug 29 08:21:47 2003] [jk_worker.c (148)]: Into wc_create_worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (162)]: wc_create_worker, about to create instance tomcat-worker-02 of ajp13 [Fri Aug 29 08:21:47 2003] [jk_ajp13_worker.c (108)]: Into ajp13_worker_factory [Fri Aug 29 08:21:47 2003] [jk_worker.c (171)]: wc_create_worker, about to validate and init tomcat-worker-02 [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1219)]: Into jk_worker_t::validate [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1239)]: In jk_worker_t::validate for worker tomcat-worker-02 contact is 172.31.7.12:6007 [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1267)]: Into jk_worker_t::init [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1287)]: In jk_worker_t::init, setting socket timeout to 300 [Fri Aug 29 08:21:47 2003] [jk_worker.c (187)]: wc_create_worker, done [Fri Aug 29 08:21:47 2003] [jk_worker.c (238)]: build_worker_map, removing old tomcat-worker-02 worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (228)]: build_worker_map, creating worker loadbalancer [Fri Aug 29 08:21:47 2003] [jk_worker.c (148)]: Into wc_create_worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (162)]: wc_create_worker, about to create instance loadbalancer of lb [Fri Aug 29 08:21:47 2003] [jk_lb_worker.c 
(586)]: Into lb_worker_factory [Fri Aug 29 08:21:47 2003] [jk_worker.c (171)]: wc_create_worker, about to validate and init loadbalancer [Fri Aug 29 08:21:47 2003] [jk_lb_worker.c (420)]: Into jk_worker_t::validate [Fri Aug 29 08:21:47 2003] [jk_worker.c (148)]: Into wc_create_worker [Fri Aug 29 08:21:47 2003] [jk_worker.c (162)]: wc_create_worker, about to create instance tomcat-worker-01 of ajp13 [Fri Aug 29 08:21:47 2003] [jk_ajp13_worker.c (108)]: Into ajp13_worker_factory [Fri Aug 29 08:21:47 2003] [jk_worker.c (171)]: wc_create_worker, about to validate and init tomcat-worker-01 [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1219)]: Into jk_worker_t::validate [Fri Aug 29 08:21:47 2003] [jk_ajp_common.c (1239)]: In jk_worker_t::validate for worker tomcat-worker-01 contact is 172.31.7.20:5007 [Fri Aug 29
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
A couple of things: 1. Your workers.properties file seems needlessly complex to me. I would cut it down to match Pascal's example. 2. In your server.xml, you have jmvRoute. I don't load balance, but as far as I know it should be jvmRoute (note the spelling). 3. You only sent one server.xml; there should be two. John
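John's second point is usually the culprit; a sketch of how the pieces have to line up (the engine and host names are illustrative) — each node's Engine jvmRoute must exactly match that node's worker name in workers.properties, since mod_jk routes on the session-id suffix:

```xml
<!-- server.xml on node 1: note jvmRoute, not jmvRoute -->
<Engine name="Standalone" defaultHost="localhost" jvmRoute="tomcat-worker-01">
  <!-- Host and Context elements as before -->
</Engine>

<!-- server.xml on node 2 -->
<Engine name="Standalone" defaultHost="localhost" jvmRoute="tomcat-worker-02">
  <!-- Host and Context elements as before -->
</Engine>
```

With this in place each node appends its own worker name to JSESSIONID, and the load balancer can send follow-up requests back to the node that created the session.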
Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Lots of thanks for pointing me to the misspelled attribute. I've finally got jvmRoute added to the session cookie. The problem now is that Apache assigns the workers randomly: tomcat1 sometimes gets a session cookie carrying the tomcat2 worker, and the same happens with tomcat2.

2003-08-29 16:11:11 RequestDumperValve[Standalone]: cookie=JSESSIONID=6DA548CB187E7EB31B73F8DF8959C643.tomcat-worker-02
2003-08-29 16:11:11 RequestDumperValve[Standalone]: cookie=lifesensor01=US%7Cen%7Cc%7Cneutral%7Cnoname%7C0%7Cnull%7C-1%7Cwellcome%7C0%7C
2003-08-29 16:11:46 RequestDumperValve[Standalone]: contextPath=
2003-08-29 16:11:46 RequestDumperValve[Standalone]: cookie=JSESSIONID=10219C0C8296E9A0629C1D9C3BF7770B.tomcat-worker-01
2003-08-29 16:11:46 RequestDumperValve[Standalone]: cookie=lifesensor01=US%7Cen%7Cc%7Cneutral%7Cnoname%7C0%7Cnull%7C-1%7Cwellcome%7C0%7C

So the result is that the same session is processed by both Tomcats, even though the cookie names the right worker:

tomcat1:
2003-08-29 16:49:26 RequestDumperValve[Standalone]: cookie=JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01
tomcat2:
2003-08-29 16:50:10 RequestDumperValve[Standalone]: header=cookie=JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01;

This could work if the session were replicated on all Tomcat servers, but not in the current state. So it is the same round-robin problem as before. :((( Which configuration files should I post in addition?
What can be the problem if in workers.properties I have:

worker.tomcat-worker-02.port=6007
worker.tomcat-worker-02.host=172.31.7.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
#worker.tomcat-worker-02.cachesize=10
#worker.tomcat-worker-02.cache_timeout=600
#worker.tomcat-worker-02.socket_timeout=300
#worker.tomcat-worker-02.local_worker=1

worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=172.31.7.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
#worker.tomcat-worker-01.cachesize=10
#worker.tomcat-worker-01.cache_timeout=600
#worker.tomcat-worker-01.socket_timeout=300
#worker.tomcat-worker-01.local_worker=1

# load balancing with sticky sessions.
# Note:
# If a worker dies, the load balancer will check its state
# once in a while. Until then all work is redirected to peer
# workers.
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.loadbalancer.sticky_session =1
#worker.loadbalancer.local_worker_only=1

MOD_JK.log (workers mapped to the right IPs):
[Fri Aug 29 14:41:43 2003] [jk_worker.c (171)]: wc_create_worker, about to validate and init tomcat-worker-01
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1219)]: Into jk_worker_t::validate
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1239)]: In jk_worker_t::validate for worker tomcat-worker-01 contact is 172.31.7.20:5007
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1267)]: Into jk_worker_t::init
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1287)]: In jk_worker_t::init, setting socket timeout to 0
[Fri Aug 29 14:41:43 2003] [jk_worker.c (187)]: wc_create_worker, done
[Fri Aug 29 14:41:43 2003] [jk_worker.c (238)]: build_worker_map, removing old tomcat-worker-01 worker
[Fri Aug 29 14:41:43 2003] [jk_worker.c (228)]: build_worker_map, creating worker tomcat-worker-02
[Fri Aug 29 14:41:43 2003] [jk_worker.c (148)]: Into wc_create_worker
[Fri Aug 29 14:41:43 2003] [jk_worker.c (162)]: wc_create_worker, about to create instance tomcat-worker-02 of ajp13
[Fri Aug 29 14:41:43 2003] [jk_ajp13_worker.c (108)]: Into ajp13_worker_factory
[Fri Aug 29 14:41:43 2003] [jk_worker.c (171)]: wc_create_worker, about to validate and init tomcat-worker-02
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1219)]: Into jk_worker_t::validate
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1239)]: In jk_worker_t::validate for worker tomcat-worker-02 contact is 172.31.7.12:6007
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1267)]: Into jk_worker_t::init
[Fri Aug 29 14:41:43 2003] [jk_ajp_common.c (1287)]: In jk_worker_t::init, setting socket timeout to 0
[Fri Aug 29 14:41:43 2003] [jk_worker.c (187)]: wc_create_worker, done
[Fri Aug 29 14:41:43 2003] [jk_lb_worker.c (498)]: Balanced worker 0 has name tomcat-worker-01
[Fri Aug 29 14:41:43 2003] [jk_lb_worker.c (498)]: Balanced worker 1 has name tomcat-worker-02
[Fri Aug 29 14:50:01 2003] [jk_ajp_common.c (532)]: ajp_unmarshal_response: Header[1] [Set-Cookie] = [JSESSIONID=19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01; Path=/lifesensor]
[Fri Aug 29 14:50:12 2003] [jk_ajp_common.c (532)]: ajp_unmarshal_response: Header[2] [Set-Cookie] = [JSESSIONID=EEB6757BF4A9337FD7265AA581DE8BE3.tomcat-worker-02; Path=/lifesensor]

John Turner [EMAIL PROTECTED] 29.08.2003 14:23
Please respond to Tomcat Users List
To: Tomcat Users List [EMAIL PROTECTED]
cc:
Subject: Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster
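For anyone reading this archive later: mod_jk's sticky routing keys off the jvmRoute suffix that Tomcat appends to JSESSIONID after the last dot, which is why the worker names show up in the Set-Cookie headers above. A minimal Python sketch of that suffix-based dispatch (an illustration of the idea, not mod_jk's actual code; worker names taken from this thread):

```python
def extract_route(session_id):
    """Return the jvmRoute suffix after the last dot, or None if absent."""
    head, dot, route = session_id.rpartition(".")
    return route if dot else None

def pick_worker(session_id, balanced_workers):
    """Sticky dispatch: honor the route named in the cookie when it
    matches a known worker; otherwise fall back to the first worker."""
    route = extract_route(session_id) if session_id else None
    if route in balanced_workers:
        return route
    return balanced_workers[0]

workers = ["tomcat-worker-01", "tomcat-worker-02"]
# A cookie carrying a route sticks to that worker:
print(pick_worker("19E9FD015AF34C5181322F3FEF37B0D6.tomcat-worker-01", workers))
# A cookie without a route (the symptom reported earlier) cannot stick:
print(pick_worker("5387242C819757A9BC12B2FAF1AF2AD8", workers))
```

This is why a missing or mismatched jvmRoute degrades the balancer to plain round-robin: without a recognizable suffix there is nothing to stick to.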
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hi, here is the result of httpd -l:

Compiled-in modules:
  http_core.c
  mod_so.c
suexec: enabled; valid wrapper /usr/sbin/suexec

Vladyslav Kosulin [EMAIL PROTECTED] 27.08.2003 16:51
Please respond to Tomcat Users List
To: Tomcat Users List [EMAIL PROTECTED]
cc:
Subject: Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.

[EMAIL PROTECTED] wrote:
Hi guys, here is a detailed description: I have Apache/2.0.45 running on server A

If your Apache is running on UNIX/Linux/BSD/Mac OS X, load balancing with sticky sessions will work only if you use the worker MPM. At least this is correct for mod_jk2, and maybe the same is the cause for mod_jk. By default Apache is compiled with the prefork MPM on UNIX/Linux/BSD. Check with httpd -l. If you see prefork.c, you have to recompile Apache using ./configure --with-mpm=worker ... Hope this will help. Vlad

And two Tomcat workers are running on B and C. The problem is that I cannot get the Tomcat cluster load balanced; playing around with workers.properties on Apache gave me two different situations.
1. If I have the local_worker parameter equal to 1, then I have no lost sessions but also no load balancing. The cluster is still fail-safe: if one Tomcat dies, the other one gets all incoming requests.
2. If I have local_worker=0, then I have a simple round-robin balancer without session affinity, so my sessions get lost.
configuration example workers.properties:

worker.list=tomcat-worker-01,tomcat-worker-02,router
worker.tomcat-worker-02.port=4007
worker.tomcat-worker-02.host=xxx.xx.x.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
worker.tomcat-worker-02.cachesize=10
worker.tomcat-worker-02.cache_timeout=600
worker.tomcat-worker-02.socket_timeout=300
worker.tomcat-worker-02.local_worker=1
worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=xxx.xx.x.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
worker.tomcat-worker-01.cachesize=10
worker.tomcat-worker-01.cache_timeout=600
worker.tomcat-worker-01.socket_timeout=300
worker.tomcat-worker-01.local_worker=1
worker.router.type=lb
worker.router.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.router.sticky_session =1
worker.router.local_worker_only=1

P.S. I checked the previous discussions related to server.xml's configuration. I had there:
Engine jmvRoute=worker1 name=Standalone defaultHost=ipaddress1 debug=0
Engine jmvRoute=worker2 name=Standalone defaultHost=ipaddress2 debug=0

-- Yefym Dmukh, developer, email: [EMAIL PROTECTED]

- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hi, some additional information. Using Mozilla LiveHTTPHeaders I found that my session cookie doesn't receive the jvmRoute parameter, so Apache doesn't know anything about load balancing:

Cookie: JSESSIONID=5387242C819757A9BC12B2FAF1AF2AD8;

Does anybody have any suggestion or idea?

P.S. No errors or exceptions were found in mod_jk.log.

[EMAIL PROTECTED] 28.08.2003 09:07
Please respond to Tomcat Users List
To: Tomcat Users List [EMAIL PROTECTED]
cc:
Subject: Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.

Hi, here is the result of httpd -l:
Compiled-in modules:
  http_core.c
  mod_so.c
suexec: enabled; valid wrapper /usr/sbin/suexec

Vladyslav Kosulin [EMAIL PROTECTED] 27.08.2003 16:51
Please respond to Tomcat Users List
To: Tomcat Users List [EMAIL PROTECTED]
cc:
Subject: Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.

[EMAIL PROTECTED] wrote:
Hi guys, here is a detailed description: I have Apache/2.0.45 running on server A

If your Apache is running on UNIX/Linux/BSD/Mac OS X, load balancing with sticky sessions will work only if you use the worker MPM. At least this is correct for mod_jk2, and maybe the same is the cause for mod_jk. By default Apache is compiled with the prefork MPM on UNIX/Linux/BSD. Check with httpd -l. If you see prefork.c, you have to recompile Apache using ./configure --with-mpm=worker ... Hope this will help. Vlad

And two Tomcat workers are running on B and C. The problem is that I cannot get the Tomcat cluster load balanced; playing around with workers.properties on Apache gave me two different situations.
1. If I have the local_worker parameter equal to 1, then I have no lost sessions but also no load balancing. The cluster is still fail-safe: if one Tomcat dies, the other one gets all incoming requests.
2. If I have local_worker=0, then I have a simple round-robin balancer without session affinity, so my sessions get lost.

configuration example workers.properties:

worker.list=tomcat-worker-01,tomcat-worker-02,router
worker.tomcat-worker-02.port=4007
worker.tomcat-worker-02.host=xxx.xx.x.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
worker.tomcat-worker-02.cachesize=10
worker.tomcat-worker-02.cache_timeout=600
worker.tomcat-worker-02.socket_timeout=300
worker.tomcat-worker-02.local_worker=1
worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=xxx.xx.x.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
worker.tomcat-worker-01.cachesize=10
worker.tomcat-worker-01.cache_timeout=600
worker.tomcat-worker-01.socket_timeout=300
worker.tomcat-worker-01.local_worker=1
worker.router.type=lb
worker.router.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.router.sticky_session =1
worker.router.local_worker_only=1

P.S. I checked the previous discussions related to server.xml's configuration. I had there:
Engine jmvRoute=worker1 name=Standalone defaultHost=ipaddress1 debug=0
Engine jmvRoute=worker2 name=Standalone defaultHost=ipaddress2 debug=0

-- Yefym Dmukh, developer, email: [EMAIL PROTECTED]
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
[EMAIL PROTECTED] wrote:
Hi, some additional information. Using Mozilla LiveHTTPHeaders I found that my session cookie doesn't receive the jvmRoute parameter, so Apache doesn't know anything about load balancing. Cookie: JSESSIONID=5387242C819757A9BC12B2FAF1AF2AD8; Does anybody have any suggestion or idea?

Is your web application distributable? I don't know how Tomcat handles this, but Jetty does not attach jvmRoute for distributable applications, assuming that sticky sessions are meaningless in this case.
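Following that suggestion, the thing to check is whether the webapp's deployment descriptor declares the application distributable. A hypothetical web.xml fragment showing the element in question (the display-name is made up for illustration):

```xml
<!-- web.xml: the empty <distributable/> element marks the app as distributable -->
<web-app>
    <display-name>example-app</display-name>
    <!-- If the container skips jvmRoute for distributable apps (as the reply
         above says Jetty does), removing this element restores the suffix. -->
    <distributable/>
</web-app>
```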
Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hi guys, here is a detailed description: I have Apache/2.0.45 running on server A, and two Tomcat workers running on B and C. The problem is that I cannot get the Tomcat cluster load balanced; playing around with workers.properties on Apache gave me two different situations.
1. If I have the local_worker parameter equal to 1, then I have no lost sessions but also no load balancing. The cluster is still fail-safe: if one Tomcat dies, the other one gets all incoming requests.
2. If I have local_worker=0, then I have a simple round-robin balancer without session affinity, so my sessions get lost.

configuration example workers.properties:

worker.list=tomcat-worker-01,tomcat-worker-02,router
worker.tomcat-worker-02.port=4007
worker.tomcat-worker-02.host=xxx.xx.x.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
worker.tomcat-worker-02.cachesize=10
worker.tomcat-worker-02.cache_timeout=600
worker.tomcat-worker-02.socket_timeout=300
worker.tomcat-worker-02.local_worker=1
worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=xxx.xx.x.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
worker.tomcat-worker-01.cachesize=10
worker.tomcat-worker-01.cache_timeout=600
worker.tomcat-worker-01.socket_timeout=300
worker.tomcat-worker-01.local_worker=1
worker.router.type=lb
worker.router.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.router.sticky_session =1
worker.router.local_worker_only=1

P.S. I checked the previous discussions related to server.xml's configuration. I had there:
Engine jmvRoute=worker1 name=Standalone defaultHost=ipaddress1 debug=0
Engine jmvRoute=worker2 name=Standalone defaultHost=ipaddress2 debug=0

-- Yefym Dmukh, developer, email: [EMAIL PROTECTED]
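The lbfactor values in the config above weight how often each worker is chosen when a request has no session to stick to. As a rough illustration of weighted balancing (a generic smooth weighted round-robin sketch, assumed for illustration; it is not mod_jk's exact algorithm):

```python
def next_worker(workers, credit):
    """Smooth weighted round-robin: each worker accumulates its weight,
    the highest accumulator wins, and the winner pays back the total."""
    total = sum(lbfactor for _, lbfactor in workers)
    for name, lbfactor in workers:
        credit[name] = credit.get(name, 0) + lbfactor
    winner = max(workers, key=lambda w: credit[w[0]])[0]
    credit[winner] -= total
    return winner

# Equal lbfactor=50 for both workers, as in the thread's config:
workers = [("tomcat-worker-01", 50), ("tomcat-worker-02", 50)]
credit = {}
print([next_worker(workers, credit) for _ in range(4)])
# equal weights simply alternate between the two workers
```

With equal weights this degenerates to plain round-robin, which matches the behaviour the poster observes once session affinity is lost.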
Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
Hi guys, I'm a newbie here, so I may have duplicated this message. Sorry for that, but it would be nice if anybody replied :))) just to be sure that I have my chance to solve this problem. I have Apache/2.0.45 running on server A, and two Tomcat workers running on B and C. The problem is that I cannot get the Tomcat cluster load balanced; playing around with workers.properties on Apache gave me two different situations.
1. If I have the local_worker parameter equal to 1, then I have no lost sessions but also no load balancing. The cluster is still fail-safe: if one Tomcat dies, the other one gets all incoming requests.
2. If I have local_worker=0, then I have a simple round-robin balancer without session affinity, so my sessions get lost.

configuration example workers.properties:

worker.list=tomcat-worker-01,tomcat-worker-02,router
worker.tomcat-worker-02.port=4007
worker.tomcat-worker-02.host=xxx.xx.x.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
worker.tomcat-worker-02.cachesize=10
worker.tomcat-worker-02.cache_timeout=600
worker.tomcat-worker-02.socket_timeout=300
worker.tomcat-worker-02.local_worker=1
worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=xxx.xx.x.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
worker.tomcat-worker-01.cachesize=10
worker.tomcat-worker-01.cache_timeout=600
worker.tomcat-worker-01.socket_timeout=300
worker.tomcat-worker-01.local_worker=1
worker.router.type=lb
worker.router.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.router.sticky_session =1
worker.router.local_worker_only=1

P.S. I checked the previous discussions related to server.xml's configuration. I had there:
Engine jmvRoute=worker1 name=Standalone defaultHost=ipaddress1 debug=0
Engine jmvRoute=worker2 name=Standalone defaultHost=ipaddress2 debug=0

Best regards and thanks for your patience. :)
--
Re: Urgent !!! Problem to get TOMCAT/4.1.24 cluster running with sticky session support.
[EMAIL PROTECTED] wrote:
Hi guys, here is a detailed description: I have Apache/2.0.45 running on server A

If your Apache is running on UNIX/Linux/BSD/Mac OS X, load balancing with sticky sessions will work only if you use the worker MPM. At least this is correct for mod_jk2, and maybe the same is the cause for mod_jk. By default Apache is compiled with the prefork MPM on UNIX/Linux/BSD. Check with httpd -l. If you see prefork.c, you have to recompile Apache using ./configure --with-mpm=worker ... Hope this will help. Vlad

And two Tomcat workers are running on B and C. The problem is that I cannot get the Tomcat cluster load balanced; playing around with workers.properties on Apache gave me two different situations.
1. If I have the local_worker parameter equal to 1, then I have no lost sessions but also no load balancing. The cluster is still fail-safe: if one Tomcat dies, the other one gets all incoming requests.
2. If I have local_worker=0, then I have a simple round-robin balancer without session affinity, so my sessions get lost.

configuration example workers.properties:

worker.list=tomcat-worker-01,tomcat-worker-02,router
worker.tomcat-worker-02.port=4007
worker.tomcat-worker-02.host=xxx.xx.x.12
worker.tomcat-worker-02.type=ajp13
worker.tomcat-worker-02.lbfactor=50
worker.tomcat-worker-02.cachesize=10
worker.tomcat-worker-02.cache_timeout=600
worker.tomcat-worker-02.socket_timeout=300
worker.tomcat-worker-02.local_worker=1
worker.tomcat-worker-01.port=5007
worker.tomcat-worker-01.host=xxx.xx.x.20
worker.tomcat-worker-01.type=ajp13
worker.tomcat-worker-01.lbfactor=50
worker.tomcat-worker-01.cachesize=10
worker.tomcat-worker-01.cache_timeout=600
worker.tomcat-worker-01.socket_timeout=300
worker.tomcat-worker-01.local_worker=1
worker.router.type=lb
worker.router.balanced_workers=tomcat-worker-01,tomcat-worker-02
worker.router.sticky_session =1
worker.router.local_worker_only=1

P.S. I checked the previous discussions related to server.xml's configuration. I had there:
Engine jmvRoute=worker1 name=Standalone defaultHost=ipaddress1 debug=0
Engine jmvRoute=worker2 name=Standalone defaultHost=ipaddress2 debug=0

-- Yefym Dmukh, developer, email: [EMAIL PROTECTED]
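A note for readers of this archive: the follow-up earlier in the thread reports the fix. The Engine attribute is spelled jvmRoute (not jmvRoute), and its value must match the worker name used in workers.properties; the later cookies carry the suffixes tomcat-worker-01 and tomcat-worker-02, not worker1 and worker2. A sketch of the corrected server.xml elements under that assumption (child elements omitted):

```xml
<!-- node B: jvmRoute must equal the workers.properties worker name -->
<Engine name="Standalone" defaultHost="ipaddress1" debug="0" jvmRoute="tomcat-worker-01">
    <!-- Host and other child elements unchanged -->
</Engine>

<!-- node C -->
<Engine name="Standalone" defaultHost="ipaddress2" debug="0" jvmRoute="tomcat-worker-02">
    <!-- Host and other child elements unchanged -->
</Engine>
```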