Getting a 404 on those images.

On Tue, Aug 9, 2016 at 12:36 PM, Tory M Blue <tmb...@gmail.com> wrote:

>
>
> On Sat, Aug 6, 2016 at 6:16 AM, Jan Wieck <j...@wi3ck.info> wrote:
>
>> How are you monitoring the number of rows in the sl_log_* tables?
>>
>>
>> Jan
>>
>>
> I've got a couple of updates but first let me answer the question.
>
>     max     => "SHOW max_connections",
>     cur     => "SELECT COUNT(*) FROM pg_stat_activity",
>     log1    => "SELECT COUNT(*) FROM _cls.sl_log_1",
>     log2    => "SELECT COUNT(*) FROM _cls.sl_log_2",
>     siz1    => "SELECT (relpages*8) FROM pg_class WHERE relname='sl_log_1'",
>     siz2    => "SELECT (relpages*8) FROM pg_class WHERE relname='sl_log_2'"
>
> We have a script that runs and monitors a few things in the DB; one of them
> is the row count of sl_log_1 and sl_log_2. The values are written to an RRD
> and graphed, roughly along the lines of the sketch below.
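>
> (For illustration only, a minimal sketch of what such a collection pass
> could look like, assuming Perl with DBI and the rrdtool CLI; the DSN, user,
> and RRD path below are placeholders, not the real values:)
>
>     #!/usr/bin/perl
>     use strict;
>     use warnings;
>     use DBI;
>
>     # Queries keyed by the RRD data source they feed.
>     my %queries = (
>         log1 => "SELECT COUNT(*) FROM _cls.sl_log_1",
>         log2 => "SELECT COUNT(*) FROM _cls.sl_log_2",
>     );
>
>     # Placeholder connection info.
>     my $dbh = DBI->connect("dbi:Pg:dbname=clsdb;host=localhost",
>                            "monitor", "", { RaiseError => 1 });
>
>     my %vals;
>     for my $name (sort keys %queries) {
>         # Each query returns a single scalar (the row count).
>         ($vals{$name}) = $dbh->selectrow_array($queries{$name});
>     }
>     $dbh->disconnect;
>
>     # Push the counts into the RRD (data sources log1 and log2 assumed).
>     system("rrdtool", "update", "/var/rrd/slony_tables.rrd",
>            "N:$vals{log1}:$vals{log2}");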
>
> Now the update. The first graph is what we have been seeing: the logs grow
> until around 10:50am, when the truncate is finally allowed (backups complete
> at 4am and all heavy lifting is finished by 5am).
>
>
> [image: idb01.gc - slonyTables]
>
>
> I disabled the full backup on the standby unit last night.
> [image: idb01.gc - slonyTables]
>
>
> I still get some peaks and valleys, but the system is no longer backed up
> for 10 hours. So this tells me that the backup on the standby is creating a
> situation where slon is blocked because tables are locked on the standby
> unit. This is still not ideal, but it's a big improvement over before, when
> the sl_log_? tables would grow to 14 million rows between about 2am and
> 10:45am.
>
> So I need to figure out how to reduce the overhead of the backup. I figured
> it was better to run it on the standby, but at this point that is not
> looking like a great option.
>
> Thanks for working on this with me!
>
> Tory
>



-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info
_______________________________________________
Slony1-general mailing list
Slony1-general@lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general
