Hi

Since I posted this, nothing has really changed. I am starting to (mildly)
panic.

On the source (production) I see:

          relname           |           mode           | granted |                                substr                                |          query_start          |          age
----------------------------+--------------------------+---------+----------------------------------------------------------------------+-------------------------------+------------------------
 t_inodes_iio_idx           | RowExclusiveLock         | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559
 t_inodes_pkey              | RowExclusiveLock         | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559
                            | ExclusiveLock            | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559
 t_inodes                   | ShareUpdateExclusiveLock | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559
 t_inodes_itype_idx         | RowExclusiveLock         | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559
 t_inodes_imtime_idx        | RowExclusiveLock         | t       | autovacuum: VACUUM ANALYZE public.t_inodes (to prevent wraparound)   | 2017-06-15 10:26:18.643209-05 | 4 days 01:58:56.697559

The above does not noticeably impact production activity.
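
For reference, the lock listings in this mail come from a pg_locks /
pg_stat_activity join along these lines (a sketch, not necessarily the exact
query I ran):

    SELECT c.relname,
           l.mode,
           l.granted,
           substr(a.query, 1, 70) AS substr,
           a.query_start,
           now() - a.query_start AS age
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
      JOIN pg_stat_activity a ON a.pid = l.pid
     WHERE a.query ILIKE '%vacuum%' OR a.query ILIKE '%analyze%';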

On the test stand (built with pg_basebackup from production and then upgraded
to 9.6) I see:

              relname              |           mode           | granted |                           substr                           |          query_start          |          age
-----------------------------------+--------------------------+---------+------------------------------------------------------------+-------------------------------+------------------------
 t_inodes                          | ShareUpdateExclusiveLock | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
 t_inodes_itype_idx                | RowExclusiveLock         | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
 t_inodes_imtime_idx               | RowExclusiveLock         | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
 t_inodes_iio_idx                  | RowExclusiveLock         | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
 t_inodes_pkey                     | RowExclusiveLock         | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
                                   | ExclusiveLock            | t       | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 15:27:54.872154-05 | 5 days 20:59:22.769404
 t_inodes                          | ShareUpdateExclusiveLock | f       | ANALYZE;                                                   | 2017-06-13 15:27:59.781285-05 | 5 days 20:59:17.860273
                                   | ExclusiveLock            | t       | ANALYZE;                                                   | 2017-06-13 15:27:59.781285-05 | 5 days 20:59:17.860273


The test stand, where I was going to test the schema upgrade, is stuck
because the vacuum is blocking the ANALYZE (note granted = f above).
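
Since the test stand is on 9.6, pg_blocking_pids() is available there; a
sketch to confirm who blocks whom (it should show the ANALYZE session waiting
on the autovacuum worker's pid):

    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           query
      FROM pg_stat_activity
     WHERE wait_event_type = 'Lock';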

Production settings follow:

version 9.3.9

max_connections = 512
shared_buffers = 8192MB
temp_buffers = 1024MB
work_mem = 512MB
#maintenance_work_mem = 2048MB
maintenance_work_mem = 4096MB   # increased after 3 days of vacuum analyze running
max_stack_depth = 2MB
vacuum_cost_delay = 50ms
synchronous_commit = off
wal_buffers = 245MB
wal_writer_delay = 10s
checkpoint_segments = 64
checkpoint_completion_target = 0.9
random_page_cost = 2.0
effective_cache_size = 94GB
wal_level = hot_standby
hot_standby = on
archive_mode = on
archive_command = '/usr/loca/bin/wal_backup.sh %p %f'
max_wal_senders = 4
wal_keep_segments = 1024
max_standby_streaming_delay = 7200s
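
Since both vacuums are anti-wraparound, my understanding is that the
wraparound pressure can be gauged with the usual age() queries (a sketch;
these work on both 9.3 and 9.6):

    -- larger age() = closer to wraparound; only vacuum brings it back down
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY 2 DESC;

    SELECT relname, age(relfrozenxid) AS xid_age
      FROM pg_class
     WHERE relname = 't_inodes';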

So, the problem: I cannot make the schema change until the vacuum has
finished, and there seems to be no end in sight, which is throwing off our
software upgrade plans.
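
On the 9.6 test stand at least I can watch the vacuum move via
pg_stat_progress_vacuum (new in 9.6, so not an option on the 9.3 production
side):

    SELECT pid,
           relid::regclass AS relname,
           phase,
           heap_blks_scanned,
           heap_blks_vacuumed,
           heap_blks_total
      FROM pg_stat_progress_vacuum;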

Can anything be done here?

Thanks,
Dmitry

________________________________________
From: Andreas Kretschmer <andr...@a-kretschmer.de>
Sent: Tuesday, June 13, 2017 1:54 PM
To: pgsql-general@postgresql.org; Dmitry O Litvintsev; 
pgsql-general@postgresql.org
Subject: Re: [GENERAL] autovacuum holds exclusive lock on table preventing it
from being updated

On 13 June 2017 20:04:04 MESZ, Dmitry O Litvintsev <litvi...@fnal.gov> wrote:
>
> [wrapped lock listing from the earlier message; the recoverable row:]
> chimera | t_inodes | ShareUpdateExclusiveLock | t | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
>

It is an autovacuum to prevent wraparound; you can't stop or avoid that.

Regards, Andreas
--
This message was sent from my Android mobile phone with K-9 Mail.

