On Thu, Apr 21, 2016 at 5:16 PM, Kevin Grittner <kgri...@gmail.com> wrote:
> On Wed, Apr 20, 2016 at 8:08 PM, Ants Aasma <ants.aa...@eesti.ee> wrote:
>
>> However, while checking out if my proof of concept patch actually
>> works I hit another issue. I couldn't get my test for the feature to
>> actually work. The test script I used is attached.
>
> Could you provide enough to make that a self-contained reproducible
> test case (i.e., that I don't need to infer or re-write any steps
> or guess how to call it)?  In previous cases people have given me
> where they felt that the feature wasn't working there have
> been valid reasons for it to behave as it was (e.g., a transaction
> with a transaction ID and an xmin which prevented cleanup from
> advancing).  I'll be happy to look at your case and see whether
> it's another such case or some bug, but it seems a waste to reverse
> engineer or rewrite parts of the test case to do so.

Just to be sure I didn't have anything screwy in my build environment,
I redid the test on a freshly installed Fedora 23 VM. Steps to
reproduce:

1. Build PostgreSQL from git. I used:

    ./configure --enable-debug --enable-cassert --prefix=/home/ants/pg-master

2. Set up the database:

cat << EOF > test-settings.conf
old_snapshot_threshold = 1min

logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql.log'
log_line_prefix = '[%m] '
log_autovacuum_min_duration = 0
EOF

    pg-master/bin/initdb data/
    cat test-settings.conf >> data/postgresql.conf
    pg-master/bin/pg_ctl -D data/ start
    pg-master/bin/createdb

3. Install python-psycopg2 and get the test script from my earlier e-mail [1]
   (a rough sketch of the workload it drives follows these steps)
4. Run the test:

    python test_oldsnapshot.py "host=/tmp"

5. Observe that the table keeps growing even after the old snapshot
threshold is exceeded and autovacuum has run. The autovacuum log shows
0 tuples removed.
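
For reference, here is a rough sketch of the kind of workload the test
script drives. This is only an illustrative approximation written for this
message, not the attached test_oldsnapshot.py -- the table definition,
update pattern and thread layout are guesses, and it omits the
"interfering query" session the real script also opens:

    # Illustrative sketch only; the real script is in [1].  Assumes psycopg2
    # and a server reachable through the DSN given on the command line.
    import sys
    import time
    import threading

    import psycopg2

    DSN = sys.argv[1] if len(sys.argv) > 1 else "host=/tmp"

    def write_workload():
        # Churn rows in small autocommitted transactions so dead tuples pile up.
        conn = psycopg2.connect(DSN)
        conn.autocommit = True
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS high_throughput")
        cur.execute("CREATE TABLE high_throughput (id int PRIMARY KEY, val int)")
        cur.execute("INSERT INTO high_throughput"
                    " SELECT g, 0 FROM generate_series(1, 1000) g")
        while True:
            cur.execute("UPDATE high_throughput SET val = val + 1"
                        " WHERE id = (random() * 999)::int + 1")

    def long_unrelated_query():
        # Hold a snapshot open for a long time without touching the table.
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        cur.execute("SELECT pg_sleep(1800)")

    def monitor():
        # Report table size and time since the last autovacuum every 10 seconds.
        conn = psycopg2.connect(DSN)
        conn.autocommit = True
        cur = conn.cursor()
        start = time.time()
        while True:
            cur.execute(
                "SELECT pg_size_pretty(pg_total_relation_size('high_throughput')),"
                "       now() - last_autovacuum"
                "  FROM pg_stat_user_tables WHERE relname = 'high_throughput'")
            row = cur.fetchone()
            if row:
                print("[%4ds]  size %s, last autovacuum %s ago"
                      % (int(time.time() - start), row[0], row[1]))
            time.sleep(10)

    if __name__ == "__main__":
        for target in (write_workload, long_unrelated_query):
            t = threading.Thread(target=target)
            t.daemon = True
            t.start()
        time.sleep(1)  # give the write workload a moment to create the table
        monitor()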

Only the write workload has an xid assigned; the other two backends
only hold snapshots:

[ants@localhost ~]$ pg-master/bin/psql -c "SELECT application_name, backend_xid, backend_xmin, NOW()-xact_start AS tx_age, state FROM pg_stat_activity"
   application_name   | backend_xid | backend_xmin |     tx_age      |        state
----------------------+-------------+--------------+-----------------+---------------------
 write-workload       |       95637 |              | 00:00:00.009314 | active
 long-unrelated-query |             |         1806 | 00:04:33.914048 | active
 interfering-query    |             |         2444 | 00:04:32.910742 | idle in transaction
 psql                 |             |        95637 | 00:00:00        | active

Output from the test tool is attached. After killing the test tool and
the long-running query, autovacuum cleans things up as expected.
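
For what it's worth, a quick way to confirm that the dead rows become
removable once those sessions are gone is a manual VACUUM VERBOSE. A
minimal psycopg2 sketch (not part of the attached script; table name as
above):

    # Once the blocking sessions are gone, VACUUM VERBOSE should report
    # removable row versions for the test table.
    import psycopg2

    conn = psycopg2.connect("host=/tmp")
    conn.autocommit = True        # VACUUM cannot run inside a transaction block
    cur = conn.cursor()
    cur.execute("VACUUM VERBOSE high_throughput")
    for line in conn.notices:     # the VERBOSE output arrives as notice messages
        print(line)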

I'm too tired right now to chase this down myself. The mental toll
that two small kids can take is pretty staggering. But I might find
the time to fire up a debugger sometime tomorrow.

Regards,
Ants Aasma

[1] http://www.postgresql.org/message-id/attachment/43859/test_oldsnapshot.py
[ants@localhost ~]$ python test_oldsnapshot.py "host=/tmp"
[21:37:56]  Starting 1800s long query
[21:37:56]  old_snapshot_threshold = 1min
[21:37:56]  High throughput table size @     0s. Size    176kB Last vacuum None ago
[21:37:57]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:06]  High throughput table size @    10s. Size    952kB Last vacuum None ago
[21:38:07]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:16]  High throughput table size @    20s. Size   1768kB Last vacuum None ago
[21:38:17]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:26]  High throughput table size @    30s. Size   2640kB Last vacuum None ago
[21:38:27]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:36]  High throughput table size @    40s. Size   3488kB Last vacuum None ago
[21:38:37]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:46]  High throughput table size @    50s. Size   4328kB Last vacuum 0:00:02.339652 ago
[21:38:47]  Counted 1000 rows with max 1637 in high_throughput table
[21:38:56]  High throughput table size @    60s. Size   4832kB Last vacuum 0:00:12.342278 ago
[21:38:57]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:06]  High throughput table size @    70s. Size   4920kB Last vacuum 0:00:22.425250 ago
[21:39:07]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:16]  High throughput table size @    80s. Size   5016kB Last vacuum 0:00:32.431971 ago
[21:39:17]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:26]  High throughput table size @    90s. Size   5112kB Last vacuum 0:00:42.431730 ago
[21:39:27]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:36]  High throughput table size @   100s. Size   5200kB Last vacuum 0:00:52.448369 ago
[21:39:37]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:46]  High throughput table size @   110s. Size   5672kB Last vacuum 0:00:02.387574 ago
[21:39:47]  Counted 1000 rows with max 1637 in high_throughput table
[21:39:56]  High throughput table size @   120s. Size   5784kB Last vacuum 0:00:12.399496 ago
[21:39:57]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:06]  High throughput table size @   130s. Size   5864kB Last vacuum 0:00:22.403741 ago
[21:40:07]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:16]  High throughput table size @   140s. Size   5944kB Last vacuum 0:00:32.412108 ago
[21:40:17]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:26]  High throughput table size @   150s. Size   6048kB Last vacuum 0:00:42.534381 ago
[21:40:27]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:36]  High throughput table size @   160s. Size   6216kB Last vacuum 0:00:52.535201 ago
[21:40:37]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:46]  High throughput table size @   170s. Size   7000kB Last vacuum 0:00:02.464416 ago
[21:40:47]  Counted 1000 rows with max 1637 in high_throughput table
[21:40:56]  High throughput table size @   180s. Size   7848kB Last vacuum 0:00:12.464621 ago
[21:40:57]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:06]  High throughput table size @   190s. Size   8528kB Last vacuum 0:00:22.465198 ago
[21:41:07]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:16]  High throughput table size @   200s. Size   9288kB Last vacuum 0:00:32.466764 ago
[21:41:17]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:26]  High throughput table size @   210s. Size  10064kB Last vacuum 0:00:42.467752 ago
[21:41:27]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:36]  High throughput table size @   220s. Size  10880kB Last vacuum 0:00:52.468845 ago
[21:41:37]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:46]  High throughput table size @   230s. Size  11632kB Last vacuum 0:00:02.313827 ago
[21:41:47]  Counted 1000 rows with max 1637 in high_throughput table
[21:41:56]  High throughput table size @   240s. Size  12328kB Last vacuum 0:00:12.314350 ago
[21:41:57]  Counted 1000 rows with max 1637 in high_throughput table
[21:42:07]  High throughput table size @   250s. Size  12880kB Last vacuum 0:00:22.730924 ago
[21:42:07]  Counted 1000 rows with max 1637 in high_throughput table
[21:42:17]  High throughput table size @   260s. Size  12992kB Last vacuum 0:00:32.737007 ago
[21:42:17]  Counted 1000 rows with max 1637 in high_throughput table
[21:42:27]  High throughput table size @   270s. Size  13088kB Last vacuum 0:00:42.738136 ago
[21:42:27]  Counted 1000 rows with max 1637 in high_throughput table
[21:42:37]  High throughput table size @   280s. Size  13184kB Last vacuum 0:00:52.790580 ago
[21:42:37]  Counted 1000 rows with max 1637 in high_throughput table
[21:42:47]  High throughput table size @   290s. Size  13512kB Last vacuum 0:00:02.639072 ago
[21:42:47]  Counted 1000 rows with max 1637 in high_throughput table
