Can anyone help in this regard?

Thanks in advance,
Sumedh

On Thu, Sep 3, 2009 at 8:52 AM, Sumedh Sakdeo <[email protected]> wrote:
> http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html
>
> *How it Works*
>
> 1. TomcatA starts up
> 2. TomcatB starts up (waits until TomcatA's start is complete)
> 3. TomcatA receives a request; a session S1 is created.
> 4. TomcatA crashes
> 5. TomcatB receives a request for session S1
> 6. TomcatA starts up
> 7. TomcatA receives a request; invalidate is called on the session (S1)
> 8. TomcatB receives a request for a new session (S2)
> 9. TomcatA: the session S2 expires due to inactivity.
>
> This section describes the steps, but steps 7-9 are confusing.
> Does Tomcat 6 even support such a scenario?
>
> --Sumedh
>
> On Thu, Sep 3, 2009 at 6:29 AM, Shaun Senecal <[email protected]> wrote:
>
>> After re-reading your initial post, the problems might not be as related
>> as I thought, since at no point did replication "stop" for me.
>>
>> On Thu, Sep 3, 2009 at 9:56 AM, Shaun Senecal <[email protected]> wrote:
>>
>>> We had a similar problem with Tomcat 6 using clustering. It turns out
>>> that the SSO information is only propagated while all instances are
>>> running. If Instance-A fails, several users then log in to Instance-B,
>>> and Instance-A comes back up, the SSO information for the users who
>>> logged in during the downtime is not present on Instance-A, so those
>>> users are forced to re-login once the load balancer sends them to that
>>> instance.
>>>
>>> I wrote a fix for it, which might be useful for you. However, it hasn't
>>> been fully tested and is designed to share only the SSO information at
>>> startup, not all session information. If Tomcat doesn't handle this
>>> case, then the fix I wrote should be easy to extend to handle it.
>>> Basically, when an instance comes up it broadcasts a request for all
>>> known SSO information to the cluster. It then takes the first response
>>> it gets and continues processing as normal.
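For the cluster-wide SSO behaviour Shaun describes, Tomcat 6 also ships a cluster-aware variant of the SingleSignOn valve. A minimal sketch of how it would be declared in server.xml (this is the stock valve, not Shaun's custom startup-broadcast fix, and it may not cover the restart case he mentions):

```xml
<!-- In server.xml, inside the <Host> element of each clustered node.
     Replaces the standard SingleSignOn valve with the cluster-aware
     variant so SSO entries are shared between running nodes. -->
<Valve className="org.apache.catalina.ha.authenticator.ClusterSingleSignOn" />
```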
>>> Let me know if you don't find a proper solution to the problem and I
>>> will try to dig up that fix. My intention was to post it back to the
>>> group, but I got sidetracked once we (temporarily) stopped using
>>> clustering.
>>>
>>> Shaun
>>>
>>> On Thu, Sep 3, 2009 at 3:52 AM, Sumedh Sakdeo <[email protected]> wrote:
>>>
>>>> Hi Rainer,
>>>>
>>>> I am using Tomcat session clustering and Apache HTTP Server for load
>>>> balancing (using the mod_jk module), with Tomcat 6. I have made the
>>>> appropriate changes to workers.properties and httpd.conf, as well as
>>>> to server.xml on each Tomcat.
>>>>
>>>> Thanks,
>>>> Sumedh
>>>>
>>>> On Thu, Sep 3, 2009 at 12:15 AM, Rainer Jung <[email protected]> wrote:
>>>>
>>>>> On 02.09.2009 19:57, Sumedh Sakdeo wrote:
>>>>>> Hello All,
>>>>>> I have a setup with two Tomcat instances (A and B). I have
>>>>>> configured an Apache web server 2.2 for load balancing and
>>>>>> failover. The setup looks fine as per the suggested configuration.
>>>>>> Suppose Tomcat A is handling some requests. When Tomcat instance A
>>>>>> goes down, the session is replicated to the other Tomcat
>>>>>> instance (B) successfully, and instance B then handles those
>>>>>> requests. Up to this point everything goes fine, but when I bring
>>>>>> Tomcat instance A back up and after that Tomcat instance B goes
>>>>>> down, the session is no longer replicated. What might be the issue?
>>>>>> In the status page of the Apache server I see that even when the
>>>>>> node status is OK, the session is not replicated to the failover
>>>>>> node the second time.
>>>>>
>>>>> How do you replicate? Are you using Tomcat session clustering?
>>>>> Tomcat 5.5 or Tomcat 6?
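For reference, the mod_jk load-balancer setup Sumedh describes typically looks like the following sketch. Worker names, hosts, and AJP ports here are placeholders; each worker name must match the `jvmRoute` in the corresponding node's server.xml for sticky sessions to work.

```properties
# workers.properties -- a sketch of a two-node mod_jk load balancer
worker.list=lb

worker.tomcatA.type=ajp13
worker.tomcatA.host=localhost
worker.tomcatA.port=8009

worker.tomcatB.type=ajp13
worker.tomcatB.host=localhost
worker.tomcatB.port=8010

worker.lb.type=lb
worker.lb.balance_workers=tomcatA,tomcatB
worker.lb.sticky_session=true
```

In httpd.conf, requests are then routed to the balancer with something like `JkWorkersFile conf/workers.properties` and `JkMount /myapp/* lb` (the application path is a placeholder).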
>>>>>
>>>>> Regards,
>>>>>
>>>>> Rainer
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: [email protected]
>>>>> For additional commands, e-mail: [email protected]
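For reference, the minimal session-replication configuration from the cluster-howto page linked above looks like the following sketch. The `jvmRoute` value is a placeholder; it must be unique per node and match the mod_jk worker name, and the web application must be marked `<distributable/>` in its web.xml for sessions to replicate.

```xml
<!-- server.xml on node A (a sketch): the default SimpleTcpCluster,
     which enables all-to-all session replication with DeltaManager -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
</Engine>
```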
