No mission-critical changes were lost. The Postgres log was full of messages saying it couldn't switch the log because a switch was already in progress, and uptime was showing load averages of 60. Checking sl_log_1, it only had 4 entries. Nuking it and re-initiating the log switch brought the load average down to between 4 and 12.
That's with 15 slaves to one master!

------Original Message------
From: Jan Wieck
To: [email protected]
Cc: [email protected]
Subject: Re: [Slony1-general] Slony replication problem - logswitch failure
Sent: Mar 26, 2011 23:52

On 3/26/2011 6:54 PM, Tim Lloyd wrote:
> Only way to stop Postgres chewing all the CPU

What did you do after this to verify that the unqualified delete from
sl_log_1 did not make you lose any changes?


Jan

>
> ------Original Message------
> From: Jan Wieck
> To: Tim Lloyd
> Cc: [email protected]
> Subject: Re: [Slony1-general] Slony replication problem - logswitch failure
> Sent: Mar 26, 2011 22:53
>
> On 3/26/2011 6:43 AM, Tim Lloyd wrote:
>> Hi Venkat
>>
>> I found a way to get Slony out of this state without rebuilding the db.
>>
>> 1) Connect to your database, e.g. psql <dbname> as user postgres
>>
>> 2) delete from _schema.sl_log_1;
>
> Looks a little bit dangerous to me. YMMV.
>
>
> Jan

--
Anyone who trades liberty for security deserves neither liberty nor
security. -- Benjamin Franklin

Sent from my BlackBerry® wireless device
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
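[For reference, the recovery steps described in this thread amount to roughly the following psql session. This is a sketch only: `_schema` stands for your actual Slony cluster schema (e.g. `_mycluster`), and the `logswitch_start()` call is an assumption about how the "re-init the log switch" step was done; check the function names against your installed Slony-I version before running anything. As Jan warns above, the unqualified DELETE discards any replication changes still queued in sl_log_1, so only do this after confirming all slaves are caught up.]

```sql
-- Connect to the master database as user postgres:
--   psql <dbname>

-- DANGEROUS: drops any changes still queued for replication in sl_log_1.
-- Verify the slaves are fully caught up before doing this.
DELETE FROM _schema.sl_log_1;

-- Re-initiate the stuck log switch. logswitch_start() is the function
-- provided by Slony-I for this in the cluster schema; name and behavior
-- may vary by version, so verify against your installation first.
SELECT _schema.logswitch_start();
```

After this, watching the Postgres log should confirm the "log switch already in progress" messages stop and the load average drops, as reported above.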
