Thanks Rob, that makes total sense. I agree that the best option seems to be to add a --wait option.
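For what it's worth, here's a very rough sketch of how that --wait check could work, assuming stop already knows the PIDs it has signalled. None of this is actual Mongrel code: wait_for_stop, the polling interval, and the 30-second default timeout are all just made up for illustration.

    # Poll the signalled PIDs until they are all gone or the timeout expires.
    # Returns true if everything stopped, false if we gave up waiting.
    def wait_for_stop(pids, timeout = 30)
      deadline  = Time.now + timeout
      remaining = pids.dup
      loop do
        remaining.reject! do |pid|
          begin
            Process.kill(0, pid)   # signal 0 only checks that the process exists
            false                  # still running, keep waiting on it
          rescue Errno::ESRCH
            true                   # gone, drop it from the list
          end
        end
        return true  if remaining.empty?
        return false if Time.now > deadline
        sleep 0.5
      end
    end

With something like that in place, a cluster reset really would just be the proposed stop --wait followed by a plain start.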
~Wayne

On Apr 12, 2007, at 11:04, Rob Kaufman wrote:

> Hi Wayne,
>   Though that is a good idea in general, it doesn't get the job done
> in this case. The problem is that stop returns successfully as soon
> as it sends the signal to the mongrel processes. It goes out, says
> "hey, please stop what you're doing" and then returns, telling you "I
> told them", not "they have stopped". It seems to me like what we need
> is to have a --wait option. The idea would be that mongrel_rails stop
> --wait would not return until it had confirmed that all the processes
> had truly stopped what they were doing. It would be nice if --wait
> took an optional timeout argument.
>
> I see two benefits to this solution. First, it solves the problem
> we're discussing here: your cluster reset could be composed of stop
> --wait and start commands. Second, it would allow your system
> shutdown or deployments to wait for every doggy to finish up and
> gracefully return your maintenance page instead of just timing out.
>
> Rob Kaufman
>
> On 4/12/07, Wayne E. Seguin <[EMAIL PROTECTED]> wrote:
>> This may be a bit simple, but couldn't you concatenate the commands
>> in a system call using ';' (or &&)? Don't both of those require that
>> the previous command finish before the next one executes?
>>
>> What I'm thinking is that you could do something like:
>>   `mongrel_rails stop... ; mongrel_rails start`
>> to get the correct wait for the graceful stop.
>>
>> I hope I'm not way off here, as I just joined the discussion.
>>
>> ~Wayne
>>
>> On Apr 11, 2007, at 19:56, Michael A. Schoen wrote:
>>
>>> Bradley Taylor wrote:
>>>> Reviewing the code (Zed, correct me if I'm wrong), stop and
>>>> restart both call the same stop method. The graceful handling of
>>>> an in-progress request is the same.
>>>
>>> Yes, and that handling works for me. The problem is that a
>>> stop;start fails when the stop takes a bit, whereas a
>>> stop-with-restart will always be just fine.
>>>
>>> What happens now when I do a cluster restart is that some of my
>>> Mongrels end up just dead, as they actually stop (gracefully) after
>>> the start has already been called for. I could resolve this using a
>>> forced stop, but I'm looking for a more, not less, graceful process.
>>>
>>>> Restart also has some funky semantics when used in a cluster,
>>>> where it reuses the command line arguments. This means that you
>>>> can't modify the cluster configuration and apply the changes with
>>>> a restart. The standard behavior of a Linux (FreeBSD, etc.)
>>>> service is that configuration changes are reread on restart
>>>> (Apache, MySQL, etc.). So for the purposes of mongrel_cluster,
>>>> restart == stop;start. Running a single mongrel with its own
>>>> configuration file would behave as expected.
>>>
>>> Ah, so I understand why you made the change to have a cluster
>>> restart do a stop;start. We don't change the cluster configuration,
>>> so we aren't hit by that problem.
>>>
>>> But would it be possible to get an alternative command added that
>>> does do an actual restart? If not, no worries, I'll hack it in on
>>> my end.
