On Fri, 2008-02-22 at 21:31 +0000, Christopher Browne wrote:
> Craig James <[EMAIL PROTECTED]> writes:
> > Ow Mun Heng wrote:
> >> I came to work today and it seems like the slave server died. (Power
> >> trip? No, it was not connected to a UPS :-()
> >> I've not been able to determine whether the slave is really dead or
> >> otherwise, it's the weekend in Asia, and there's no one in the office
> >> till next week.
> >> As of now, the master is still trying to contact the slave (slon is
> >> still running on the master) and log_1 and log_2 are filling up.
> >> And yesterday, I had just created a job to manually force the
> >> log_switch to occur. So, right now, I'm at a loss as to what I can do.
> >
> > Just kill all of the Slony daemons. Next week when the other server
> > is back, start them again. It will figure out what it missed, and
> > will catch up with no problems.
>
> That's not quite accurate...
>
> If you kill ALL the daemons, and don't have *something* maintaining
> the creation of SYNCs (e.g. a script running the "generate_sync()"
> stored function), then there will be one really gigantic SYNC covering
> the interval from [time slon for origin died] until [time slon for
> origin restarted].
I remember reading that in the docs, but I took the advice anyway and
killed the slon daemon a few hours ago.

> a) Set up a generate_sync() cron job, and kill all slons.

Where is this generate_sync() anyway? I only saw a
generate_sync_event(interval) stored function in the cluster DB. (A
cron sketch using that function is included after this message.)

> b) Increase the various sync parms for the slon for the origin node;
> -s 60000 and -t 120000 will mean you SYNC once per minute, when things
> are busy.
>
> That reduces the work level a bit, either way.

Thanks. I've already restarted the process on the origin with

    slon -c2 -d2 -s60000 -t 120000

(actually I was already using -s60000).

I guess my concern now is that the Slony log tables are filling up and
going past the 2GB threshold (for both log_1 and log_2), which would
mean there's not going to be much hope of getting things back up to
speed when next week comes. (A query to keep an eye on their size is
sketched below.)

Thanks guys. Appreciate the comments/advice.
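For reference, while the origin slon is down, a cron entry along these
lines could keep SYNC events being generated so that next week's
catch-up isn't one gigantic SYNC. This is only a sketch built on the
generate_sync_event(interval) function mentioned above: the cluster
schema (_mycluster), database name, user, and the '30 seconds' interval
argument are placeholders/assumptions for your actual setup, so check
the function's definition before relying on it.

    # run on the origin, e.g. once a minute (m h dom mon dow command)
    * * * * *  psql -d mydb -U slony -c \
        "SELECT _mycluster.generate_sync_event('30 seconds'::interval);"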
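And on the 2GB worry, a quick way to watch how large sl_log_1 and
sl_log_2 are actually getting (assuming PostgreSQL 8.1 or later for
pg_relation_size()/pg_size_pretty(), and again with _mycluster standing
in for your cluster schema):

    SELECT pg_size_pretty(pg_relation_size('_mycluster.sl_log_1')) AS log_1,
           pg_size_pretty(pg_relation_size('_mycluster.sl_log_2')) AS log_2;

Note that pg_relation_size() reports the table itself only; indexes on
the log tables take additional space on top of that.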
