ping 224.0.0.1
PING 224.0.0.1 (224.0.0.1) 56(84) bytes of data.
64 bytes from 192.168.3.254: icmp_seq=1 ttl=255 time=0.895 ms
64 bytes from 192.168.3.254: icmp_seq=2 ttl=255 time=0.693 ms
64 bytes from 192.168.3.254: icmp_seq=3 ttl=255 time=0.686 ms
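
A more direct check than pinging the all-hosts address is to join the
actual cluster group and watch for datagrams. Below is a minimal sketch
(plain JDK, class name only for illustration) that assumes the same
228.0.0.4:45564 values as the Membership element in the config quoted
further down:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class McastListen {
    public static void main(String[] args) throws Exception {
        // Same group address and port as the <Membership> element
        InetAddress group = InetAddress.getByName("228.0.0.4");
        MulticastSocket socket = new MulticastSocket(45564);
        socket.joinGroup(group);
        System.out.println("Listening on 228.0.0.4:45564 ...");
        byte[] buf = new byte[1024];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);  // blocks until a datagram arrives
            System.out.println("Received " + packet.getLength()
                    + " bytes from " + packet.getAddress().getHostAddress());
        }
    }
}

Run it while the other Tomcat instance is up (its membership heartbeats
go to that group); if nothing is ever printed, multicast traffic is not
reaching this host.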


Mitch


Filip Hanik - Dev Lists wrote:
>
> correct, your members are not discovering each other,
> and it's purely multicast related.
>
> what do you get when you do
>
> ping 224.0.0.1
>
> Filip
>
> On 07/16/2009 05:16 PM, Mitch Claborn wrote:
>> Not having much luck getting a simple cluster to work. Using nginx as
>> a front end/load balancer against two Tomcat instances on the same
>> machine (for now). SuSE Linux 11.1. I see this message in the startup
>> log, which makes me think the Tomcat instances are not talking:
>>
>> INFO: Manager [localhost#/Struts1]: skipping state transfer. No members
>> active in cluster group.
>>
>> I have a simple test page in the web app that shows the session ID and
>> the instance of Tomcat that it is hitting (by server port number), and
>> the session ID changes whenever nginx directs the request to a different
>> instance.
>>
>> As far as I can tell, multicast is enabled on eth0:
>> eth0      Link encap:Ethernet  HWaddr 00:1D:09:C4:C2:9A
>>            inet addr:192.168.3.5  Bcast:192.168.3.255  Mask:255.255.255.0
>>            inet6 addr: fe80::21d:9ff:fec4:c29a/64 Scope:Link
>>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>
>> I've added a route for the multicast address to eth0:
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 228.0.0.4       0.0.0.0         255.255.255.255 UH    0      0        0 eth0
>> 192.168.3.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
>> 127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
>> 0.0.0.0         192.168.3.254   0.0.0.0         UG    0      0        0 eth0
>>
>> localhost is mapped to the eth0 interface:
>> ping localhost
>> PING mlcx300 (192.168.3.5) 56(84) bytes of data.
>> 64 bytes from mlcx300 (192.168.3.5): icmp_seq=1 ttl=64 time=0.046 ms
>>
>>
>>
>> I've tried the simple config:
>> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
>>
>> as well as the detailed config below.  Any pointers or ideas are
>> welcome.
>>
>>        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>>                 channelSendOptions="6">
>>
>>          <Manager className="org.apache.catalina.ha.session.DeltaManager"
>>                   name="MMClusterManatger"
>>                   expireSessionsOnShutdown="false"
>>                   notifyListenersOnReplication="true"/>
>>
>>          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
>>            <Membership className="org.apache.catalina.tribes.membership.McastService"
>>                        address="228.0.0.4"
>>                        port="45564"
>>                        frequency="500"
>>                        dropTime="3000"/>
>>            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
>>                      address="auto"
>>                      port="5000"
>>                      autoBind="100"
>>                      selectorTimeout="100"
>>                      minThreads="2"
>>                      maxThreads="6"/>
>>
>>            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>>              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
>>                         poolSize="25"/>
>>            </Sender>
>>            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>>            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
>>            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
>>          </Channel>
>>
>>          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>>                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.css;.*\.txt;"
>>                 statistics="true"/>
>>
>>          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
>>                    tempDir="/tmp/war-temp/"
>>                    deployDir="/tmp/war-deploy/"
>>                    watchDir="/tmp/war-listen/"
>>                    watchEnabled="false"/>
>>
>>          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>>
>>        </Cluster>
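
The Membership element above is what does the discovery: each instance
sends a heartbeat datagram to address:port every "frequency" milliseconds
and listens on the same group, dropping a member after "dropTime"
milliseconds of silence. A matching one-shot sender (again just a sketch,
assuming the same 228.0.0.4:45564) can feed the listener shown near the
top of the thread:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class McastSend {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("228.0.0.4");
        MulticastSocket socket = new MulticastSocket();
        socket.setTimeToLive(1);  // keep the test packet on the local segment
        byte[] data = "mcast-test".getBytes();
        socket.send(new DatagramPacket(data, data.length, group, 45564));
        socket.close();
        System.out.println("Sent test datagram to 228.0.0.4:45564");
    }
}

If the listener running alongside one instance never sees a packet sent
from the other, the problem is most likely below Tomcat (routes, firewall,
or switch IGMP settings) rather than in the Cluster configuration itself.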
>>
>>
>>
>> Here are the cluster-related messages from the startup of instance 2:
>>
>> INFO: Cluster is about to start
>> Jul 16, 2009 6:03:26 PM
>> org.apache.catalina.tribes.transport.ReceiverBase bind
>> INFO: Receiver Server Socket bound to:/192.168.3.5:4001
>> Jul 16, 2009 6:03:26 PM
>> org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
>> INFO: Setting cluster mcast soTimeout to 500
>> Jul 16, 2009 6:03:26 PM
>> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>> INFO: Sleeping for 1000 milliseconds to establish cluster membership,
>> start level:4
>> Jul 16, 2009 6:03:27 PM
>> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>> INFO: Done sleeping, membership established, start level:4
>> Jul 16, 2009 6:03:27 PM
>> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>> INFO: Sleeping for 1000 milliseconds to establish cluster membership,
>> start level:8
>> Jul 16, 2009 6:03:28 PM
>> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>> INFO: Done sleeping, membership established, start level:8
>> Jul 16, 2009 6:03:29 PM org.apache.catalina.ha.session.DeltaManager
>> start
>> INFO: Register manager /Struts1 to cluster element Engine with name
>> Catalina
>> Jul 16, 2009 6:03:29 PM org.apache.catalina.ha.session.DeltaManager
>> start
>> INFO: Starting clustering manager at /Struts1
>> Jul 16, 2009 6:03:29 PM org.apache.catalina.ha.session.DeltaManager
>> getAllClusterSessions
>> INFO: Manager [localhost#/Struts1]: skipping state transfer. No members
>> active in cluster group.
>>
>>
>> Mitch
>>
>>
>
>
