Hello, why does one Tomcat see the apps of another Tomcat?
We have two servers, each running two installed Apache Tomcat 6.0.32 instances, plus one Apache Webserver 2.2.14:

Server A: Tomcat1 hosts App1 and App2; Tomcat2 hosts App3
Server B: Tomcat1 hosts App1 and App2; Tomcat2 hosts App3

In workers.properties we define two load balancer workers, but it seems that the Tomcats have found each other's cluster. Messages from the catalina log:

13.05.2011 14:29:15 org.apache.catalina.startup.Catalina start
INFO: Server startup in 5793 ms
13.05.2011 14:29:18 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:19 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:31 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:39 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:akademie#
13.05.2011 14:29:41 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:akademie#
13.05.2011 14:29:45 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:47 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:47 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:50 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#

The context managers for article_finder_admin, akademie, and extranet are virtual hosts in LoadbalancerA and do not exist in the conf directory of Tomcat2. We defined the Cluster element inside the Engine (note: this block was identical in every server.xml — or do we have to customize this element???):

>>>>>>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <!--
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  -->
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
<<<<<<
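(For what it's worth: since the Membership element above uses the same multicast address/port on all four instances, every Tomcat joins one shared cluster group. A common way to keep the two logical clusters separate is to give each pair its own multicast group — the second address/port below is an illustrative assumption, not a value from our setup:)

```xml
<!-- Tomcat1 instances (App1/App2 pair): existing multicast group -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4" port="45564"
            frequency="500" dropTime="3000"/>

<!-- Tomcat2 instances (App3 pair): a different group, e.g. (hypothetical values) -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.5" port="45565"
            frequency="500" dropTime="3000"/>
```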
workers.properties:

# ----------------
# First worker
# ----------------
worker.worker1.port=8010
worker.worker1.host=192.168.100.1
worker.worker1.type=ajp13
worker.worker1.lbfactor=75
worker.worker1.route=worker1
#worker.worker1.connection_pool_size=250
#worker.worker1.connection_pool_minsize=126
worker.worker1.connection_pool_timeout=600
worker.worker1.activation=active

# ----------------
# Second worker
# ----------------
worker.worker2.port=8010
worker.worker2.host=192.168.100.2
worker.worker2.type=ajp13
worker.worker2.lbfactor=100
worker.worker2.route=worker2
#worker.worker2.connection_pool_size=250
#worker.worker2.connection_pool_minsize=126
worker.worker2.connection_pool_timeout=600
worker.worker2.activation=active

# ----------------
# sixth worker
# ----------------
worker.worker6.port=8012
worker.worker6.host=192.168.100.1
worker.worker6.type=ajp13
worker.worker6.lbfactor=1
worker.worker6.activation=active

# ----------------
# seventh worker
# ----------------
worker.worker7.port=8012
worker.worker7.host=192.168.100.2
worker.worker7.type=ajp13
worker.worker7.lbfactor=1
worker.worker7.activation=active

# ----------------------
# Load Balancer worker
# ----------------------
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=true
worker.loadbalancer.sticky_session_force=false
worker.loadbalancer.method=Busyness
worker.loadbalancer.retries=3
worker.loadbalancer.secret=xxx

# ----------------------
# Load Balancer worker tc
# ----------------------
worker.loadbalancertc.type=lb
worker.loadbalancertc.balance_workers=worker6,worker7
worker.loadbalancertc.sticky_session=true
worker.loadbalancertc.sticky_session_force=false
worker.loadbalancertc.method=Busyness
worker.loadbalancertc.retries=3
worker.loadbalancertc.secret=xxx
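(For context, on the Apache side each application context is then mapped to one of the two balancer workers via JkMount; the mount paths below are hypothetical examples, not our actual httpd.conf:)

```apacheconf
# Hypothetical mount points - adjust to the real contexts
JkMount /article_finder_admin/* loadbalancer
JkMount /akademie/*             loadbalancer
JkMount /extranet/*             loadbalancer
JkMount /app3/*                 loadbalancertc
```

Note that worker1/worker2 set route=worker1/worker2 for sticky sessions, which only takes effect if the matching Engine has jvmRoute="worker1" (resp. "worker2") in its server.xml.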