On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:
> On 2020-12-27 20:07, Justin Pryzby wrote:
> > On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:
> > > On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
> > > > I meant to notice if the binary format is accidentally changed again,
> > > > which was what happened here:
> > > > 7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
> > > >
> > > > I added a table to the regression tests so it's processed by pg_upgrade
> > > > tests, run like:
> > > >
> > > > | time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
> > >
> > > Per cfbot, this avoids testing ::xml (support for which may not be enabled).
> > > It also now tests oid types.
> > >
> > > I think the per-version hacks should be grouped by logical change, rather
> > > than by version, which I've started doing here.
> >
> > rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f
>
> I think these patches could use some in-place documentation of what they are
> trying to achieve and how they do it.  The required information is spread
> over a lengthy thread.  No one wants to read that.  Add commit messages to
> the patches.
The 0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.  Portions of
the first patch were independently handled by commits 52202bb39, fa744697c,
and 091866724, so this is rebased on those.  I guess updating this script
should be part of a beta checklist somewhere, since I guess nobody will want
to backpatch changes for testing older releases.

0002 allows detecting the information_schema problem that was introduced at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.

+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test

If binary compatibility is broken, I expect this will error, crash, or at
least return wrong data, and thereby fail the tests.

--
Justin

On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
> I checked that if I cherry-pick 0002 to v11, and comment out
> old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
> detects the original problem:
> pg_dump: error: Error message from server: ERROR:  invalid memory alloc
> request size 18446744073709551613
>
> I understand the buildfarm has its own cross-version-upgrade test, which I
> think would catch this on its own.
>
> These all seem to complicate use of pg_upgrade/test.sh, so 0001 is needed to
> allow testing upgrade from older releases.
>
> e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
> 40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
> fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
> c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
> da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
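For anyone skimming the thread rather than the patch, the core idiom of 0001 can be sketched as follows (statements abridged from the patch; in test.sh the accumulated string is applied against the "regression" database left behind by installcheck, here it is only echoed since this sketch has no server to talk to):

```shell
# Abridged sketch of the fix_sql idiom in patch 0001: each object that no
# longer exists (or cannot be upgraded) on the new version gets its own
# statement appended to one string, applied later with a single psql call.
fix_sql=""
fix_sql="$fix_sql DROP FUNCTION IF EXISTS public.oldstyle_length(integer, text);"
fix_sql="$fix_sql DROP FUNCTION IF EXISTS public.putenv(text);"
# In test.sh this would be:  psql -X -d regression -c "$fix_sql"
echo "$fix_sql"
```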
>From b3f829ab0fd880962d43eac0222bdaab2b8070f4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryz...@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v4 1/3] WIP: pg_upgrade/test.sh: changes needed to allow
 testing upgrade to v14dev from v9.5-v13

---
 src/bin/pg_upgrade/test.sh | 93 +++++++++++++++++++++++++++++++++++---
 1 file changed, 86 insertions(+), 7 deletions(-)

diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index ca923ba01b..b36fca4233 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -177,18 +177,97 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
 			esac
 			fix_sql="$fix_sql
 				DROP FUNCTION IF EXISTS
-					public.oldstyle_length(integer, text);	-- last in 9.6
+					public.oldstyle_length(integer, text);"	# last in 9.6 -- commit 5ded4bd21
+			fix_sql="$fix_sql
 				DROP FUNCTION IF EXISTS
-					public.putenv(text);	-- last in v13
-				DROP OPERATOR IF EXISTS	-- last in v13
-					public.#@# (pg_catalog.int8, NONE),
-					public.#%# (pg_catalog.int8, NONE),
-					public.!=- (pg_catalog.int8, NONE),
+					public.putenv(text);"	# last in v13
+			# last in v13 commit 76f412ab3
+			# public.!=- This one is only needed for v11+ ??
+			# Note, until v10, operators could only be dropped one at a time
+			fix_sql="$fix_sql
+				DROP OPERATOR IF EXISTS
+					public.#@# (pg_catalog.int8, NONE);"
+			fix_sql="$fix_sql
+				DROP OPERATOR IF EXISTS
+					public.#%# (pg_catalog.int8, NONE);"
+			fix_sql="$fix_sql
+				DROP OPERATOR IF EXISTS
+					public.!=- (pg_catalog.int8, NONE);"
+			fix_sql="$fix_sql
+				DROP OPERATOR IF EXISTS
 					public.#@%# (pg_catalog.int8, NONE);"
+
+			# commit 068503c76511cdb0080bab689662a20e86b9c845
+			case $oldpgversion in
+				10????)
+					fix_sql="$fix_sql
+						DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+					;;
+			esac
+
+			# commit db3af9feb19f39827e916145f88fa5eca3130cb2
+			case $oldpgversion in
+				10????)
+					fix_sql="$fix_sql
+						DROP FUNCTION boxarea(box);"
+					fix_sql="$fix_sql
+						DROP FUNCTION funny_dup17();"
+					;;
+			esac
+
+			# commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+			case $oldpgversion in
+				10????)
+					fix_sql="$fix_sql
+						DROP TABLE abstime_tbl;"
+					fix_sql="$fix_sql
+						DROP TABLE reltime_tbl;"
+					fix_sql="$fix_sql
+						DROP TABLE tinterval_tbl;"
+					;;
+			esac
+
+			# Various things removed for v14
+			case $oldpgversion in
+				906??|10????|11????|12????|13????)
+					fix_sql="$fix_sql
+						DROP AGGREGATE first_el_agg_any(anyelement);"
+					;;
+			esac
+			case $oldpgversion in
+				90[56]??|10????|11????|12????|13????)
+					# commit 9e38c2bb5 and 97f73a978
+					# fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+					fix_sql="$fix_sql
+						DROP AGGREGATE array_cat_accum(anyarray);"
+
+					# commit 76f412ab3
+					#fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+					fix_sql="$fix_sql
+						DROP OPERATOR @#@(NONE,bigint);"
+					;;
+			esac
+
+			# commit 578b22971: OIDS removed in v12
+			case $oldpgversion in
+				804??|9????|10????|11????)
+					fix_sql="$fix_sql
+						ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+					fix_sql="$fix_sql
+						ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+					#fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;"	# inherited
+					fix_sql="$fix_sql
+						ALTER TABLE public.emp SET WITHOUT OIDS;"
+					fix_sql="$fix_sql
+						ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+					;;
+			esac
+
 			psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
 		fi
 
-		pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+		echo "fix_sql: $oldpgversion: $fix_sql" >&2
+		pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
 
 		if [ "$newsrc" != "$oldsrc" ]; then
 			# update references to old source tree's regress.so etc
-- 
2.17.0
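The version gating above relies on $oldpgversion being a PG_VERSION_NUM-style integer, so shell glob patterns select whole release branches. A minimal, self-contained sketch of that idiom (the function name is mine, not the patch's):

```shell
# Sketch of the case-pattern version gating used in patch 0001:
# $oldpgversion is a PG_VERSION_NUM-style integer (90605 = 9.6.5,
# 110010 = 11.10), so glob patterns match release branches.
# Function name is illustrative, not from test.sh.
needs_without_oids_fix() {
	case $1 in
		804??|9????|10????|11????) return 0 ;;	# WITH OIDS removed in v12
		*) return 1 ;;
	esac
}
needs_without_oids_fix 110010 && echo "apply SET WITHOUT OIDS"
needs_without_oids_fix 120004 || echo "v12+ source: nothing to fix"
```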
>From 093a976220a6bdbca13a17e0b2c0d6256b2b74fa Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryz...@telsasoft.com>
Date: Mon, 11 Jan 2021 21:41:16 -0600
Subject: [PATCH v4 2/3] More changes needed to allow upgrade testing:

These all seem to complicate use of pg_upgrade/test.sh:

e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
---
 src/bin/pg_upgrade/test.sh | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index b36fca4233..ab45801c35 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,7 @@ standard_initdb() {
 	# To increase coverage of non-standard segment size and group access
 	# without increasing test runtime, run these tests with a custom setting.
 	# Also, specify "-A trust" explicitly to suppress initdb's warning.
-	"$1" -N --wal-segsize 1 -g -A trust
+	"$1" -N -A trust
 	if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
 	then
 		cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -108,6 +108,9 @@ export EXTRA_REGRESS_OPTS
 mkdir "$outputdir"
 mkdir "$outputdir"/testtablespace
 
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
+
 logdir=`pwd`/log
 rm -rf "$logdir"
 mkdir "$logdir"
@@ -313,23 +316,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
 # Windows hosts don't support Unix-y permissions.
 case $testhost in
 	MINGW*) ;;
-	*)	if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
+	*)
+		x=`find "$PGDATA" -type f -perm /127 -ls`
+		if [ -n "$x" ]; then
 			echo "files in PGDATA with permission != 640";
+			echo "$x" |head
 			exit 1;
 		fi ;;
 esac
 
 case $testhost in
 	MINGW*) ;;
-	*)	if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
+	*)
+		x=`find "$PGDATA" -type d -perm /027 -ls`
+		if [ "$x" ]; then
 			echo "directories in PGDATA with permission != 750";
+			echo "$x" |head
 			exit 1;
 		fi ;;
 esac
 
 pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
 
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
 pg_ctl -m fast stop
 
 if [ -n "$pg_dumpall2_status" ]; then
-- 
2.17.0
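The switch from `! -perm 640` to `-perm /127` matters because older initdb may leave files at 600 as well as 640; only the presence of a forbidden permission bit should fail the test. A sketch of the difference (GNU find assumed for the `/mode` syntax):

```shell
# Sketch of why the check changed from exact-mode to any-forbidden-bits
# matching (GNU find assumed).  640 and 600 are both acceptable file
# modes; only group-w/x, any other-access, or owner-x bits (octal 127)
# should fail the test.
dir=$(mktemp -d)
touch "$dir/f640" "$dir/f600" "$dir/f644"
chmod 640 "$dir/f640"; chmod 600 "$dir/f600"; chmod 644 "$dir/f644"

# Old check: anything not exactly 640 is flagged, so a valid 600 file fails.
old=$(cd "$dir" && find . -type f ! -perm 640 | sort | xargs)
# New check: only files with at least one forbidden bit set are flagged.
new=$(cd "$dir" && find . -type f -perm /127 | sort | xargs)
echo "old check flags: $old"
echo "new check flags: $new"
```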
>From 91fe77ad5501e0feb26067f369077715565c7ced Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryz...@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v4 3/3] pg_upgrade: test to exercise binary compatibility

Creating a table with columns of many different datatypes to notice if
the binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.

I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR:  invalid memory alloc
request size 18446744073709551613

I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
 src/test/regress/expected/sanity_check.out |  1 +
 src/test/regress/expected/type_sanity.out  | 39 ++++++++++++++++++++++
 src/test/regress/sql/type_sanity.sql       | 38 +++++++++++++++++++++
 3 files changed, 78 insertions(+)

diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d9ce961be2..f67e3853ff 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
 log_table|f
 lseg_tbl|f
 main_table|f
+manytypes|f
 mlparted|f
 mlparted1|f
 mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 0c74dc96a8..598a39ae03 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -672,3 +672,42 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
 ----------+------------+---------------
 (0 rows)
 
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ typname | typtype | typelem | typarray | typarray 
+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 4739aca84a..1df9859118 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,41 @@ WHERE pronargs != 2
 
 SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid FROM pg_range p1
 WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
-- 
2.17.0