    /*
     * ...
     *
     * No need to update the visibilitymap if it had all_frozen bit set
     * before this insertion.
     */
    if (all_frozen_set && ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0))
while heap_multi_insert only does this:

    if (all_frozen_set)
    {
        ...
    }

I haven't looked at the details, but shouldn't both do the same thing?

I've done some benchmarks comparing master and the patched version on a
bunch of combinations (logged/unlogged, with no before-insert trigger,
with a trigger filtering everything, and with a trigger filtering
nothing). On master, the results are:
    group                        copy    vacuum
    -----------------------------------------------
    logged / no trigger          4672       162
    logged / trigger (all)       4914       162
    logged / trigger (none)      1219        11
    unlogged / no trigger        4132       156
    unlogged / trigger (all)     4292       156
    unlogged / trigger (none)    1275        11

and patched looks like this:

    group                        copy    vacuum
    -----------------------------------------------
    logged / no trigger          4669        12
    logged / trigger (all)       4874        12
    logged / trigger (none)      1233        11
    unlogged / no trigger        4054        11
    unlogged / trigger (all)     4185        12
    unlogged / trigger (none)    1222        10

This looks pretty nice - there are no regressions, just speedups in the
vacuum step. The SQL script used is attached.
However, I've also repeated the test counting all-frozen pages in both the main table and TOAST table, and I get this:
master
======

select count(*) from pg_visibility((select reltoastrelid from pg_class
                                     where relname = 't'));

 count
--------
 100000
(1 row)

select count(*) from pg_visibility((select reltoastrelid from pg_class
                                     where relname = 't'))
 where not all_visible;

 count
--------
 100000
(1 row)

patched
=======

select count(*) from pg_visibility((select reltoastrelid from pg_class
                                     where relname = 't'));

 count
--------
 100002
(1 row)

select count(*) from pg_visibility((select reltoastrelid from pg_class
                                     where relname = 't'))
 where not all_visible;

 count
--------
      0
(1 row)

That is, all TOAST pages are frozen (as expected, which is good). But
now there are 100002 pages, not just 100000 - we're creating 2 extra
pages for some reason. I recall Pavan reported a similar issue with
every 32768-th page not being properly filled, but I'm not sure whether
this is the same issue.
regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
test.sql
Description: application/sql
bench.sql
Description: application/sql