Noah, all,

* Noah Misch (n...@leadboat.com) wrote:
> On Fri, Apr 22, 2016 at 12:31:41PM -0400, Stephen Frost wrote:
> > After looking through the code a bit, I realized that there are a lot of
> > object types which don't have ACLs at all but which exist in pg_catalog
> > and were being analyzed because the bitmask for pg_catalog included ACLs
> > and therefore was non-zero.
> >
> > Clearing that bit for object types which don't have ACLs improved the
> > performance for empty databases quite a bit (from about 3s to a bit
> > under 1s on my laptop).  That's a 42-line patch, with comment lines
> > being half of that, which I'll push once I've looked into the other
> > concerns which were brought up on this thread.
>
> That's good news.
Attached patch-set includes this change in patch #2.  Patch #1 is the fix
for the incorrect WHERE clause.

> > Much of the remaining inefficiency is how we query for column
> > information one table at a time (that appears to be around 300ms of the
> > 900ms or so total time).  I'm certainly interested in improving that but
> > that would require adding more complex data structures to pg_dump than
> > what we use currently (we'd want to grab all of the columns we care
> > about in an entire schema and store it locally and then provide a way to
> > look up that information, etc), so I'm not sure it'd be appropriate to
> > do now.
>
> I'm not sure, either; I'd need to see more to decide.  If I were you, I would
> draft a patch to the point where the community can see the complexity and the
> performance change.  That should be enough to inform the choice among moving
> forward with the proposed complexity, soliciting other designs, reverting the
> original changes, or accepting for 9.6 the slowdown as it stands at that time.
> Other people may have more definite opinions already.

I'll look at doing this once I'm done with the regression test
improvements (see below).

* Noah Misch (n...@leadboat.com) wrote:
> On Fri, Apr 22, 2016 at 08:24:53PM -0400, Stephen Frost wrote:
> > * Noah Misch (n...@leadboat.com) wrote:
> > > I wrote the attached test script to verify which types of ACLs dump/reload
> > > covers.  Based on the commit message, I expected these results:
> > >
> > > Dumped: type, class, attribute, proc, largeobject_metadata,
> > >         foreign_data_wrapper, foreign_server,
> > >         language(in-extension), namespace(in-extension)
> > > Not dumped: database, tablespace,
> > >             language(from-initdb), namespace(from-initdb)
> > >
> > > Did I interpret the commit message correctly?
> > > The script gave these results:
> > >
> > > Dumped: type, class, attribute, namespace(from-initdb)
> > > Not dumped: database, tablespace, proc, language(from-initdb)
> >
> > You interpreted the commit message correctly and in a number of cases
> > the correct results are generated, but there's an issue in the WHERE
> > clause being used for some of the object types.
>
> I'm wondering whether it would be a slightly better design to treat
> language(from-initdb) like language(in-extension).  Likewise for namespaces.
> The code apparently already exists to dump from-initdb namespace ACLs.  Is
> there a user interface design reason to want to distinguish the from-initdb
> and in-extension cases?

Reviewing the code, we do record from-initdb namespace privileges, and we
also record in-extension namespace privileges (see ExecGrant_Namespace()).
We also record from-initdb language ACLs (initdb.c:2071) and in-extension
language ACLs (see ExecGrant_Language()), so I'm not entirely sure what,
if any, issue exists here either.  For both, we also grab the ACL info
vs. pg_init_privs in pg_dump.

The issue in these cases is a bit of an interesting one- they are not
part of a namespace; this patch was focused on allowing users to modify,
specifically, the ACLs of objects in the 'pg_catalog' namespace.

For languages, I believe that means that we simply need to modify the
selectDumpableProcLang() function to use the same default we use for the
'pg_catalog' namespace- DUMP_COMPONENT_ACL, instead of
DUMP_COMPONENT_NONE.

What's not clear to me is what, if any, issue there is with namespaces.
Certainly, in my testing at least, if you do:

REVOKE CREATE ON SCHEMA public FROM public;

Then you get the appropriate commands from pg_dump to implement the
resulting ACLs on the public schema.  If you change the permissions back
to match what is there at initdb-time (or you just don't change them),
then there aren't any GRANT or REVOKE commands from pg_dump, but that's
correct, isn't it?
> > That's relatively straight-forward to fix, but I'm going to put
> > together a series of TAP tests to go with these fixes.  While I tested
> > various options and bits and pieces of the code while developing, this
> > really needs a proper test suite that runs through all of these
> > permutations with each change.
>
> Sounds great.

Thanks.

> > I'm planning to have that done by the end of the weekend.

In the attached patch-set, patch #3 adds support for "make check" in
src/bin/pg_dump, implemented using the TAP testing system.  There are a
total of 360 tests, generally covering:

Various invalid sets of command-line options.

Valid command-line options (generally independently used):
  (no options- defaults)
  --clean
  --clean --if-exists
  --data-only
  --format=c (tested with pg_restore)
  --format=d (tested with pg_restore)
  --format=t (tested with pg_restore)
  --format=d --jobs=2 (tested with pg_restore)
  --exclude-schema
  --exclude-table
  --no-privileges
  --no-owner
  --schema
  --schema-only

Note that this only includes tests for basic schemas, tables, data, and
privileges so far.  I believe it's pretty obvious that we want to include
all object types and include testing of extensions; I just haven't gotten
there yet and figured now would be a good time to solicit feedback on the
approach I've used.

I'm not sure how valuable it is to test all the different combinations of
command-line options, though clearly some should be tested (eg:
include-schema, exclude a table in that schema, and similar cases).

I'm planning to work on adding in other object types and, once that's
more-or-less complete, adding in a test for extensions and then adding
tests for GRANT/REVOKE on from-initdb and in-extension objects.

Thoughts?

Thanks!

Stephen
From 7ed5716e6f02065c4ff0b92dd75bac765ac78b82 Mon Sep 17 00:00:00 2001
From: Stephen Frost <sfr...@snowman.net>
Date: Sun, 24 Apr 2016 23:59:23 -0400
Subject: [PATCH 1/3] Correct pg_dump WHERE clause for functions/aggregates

---
 src/bin/pg_dump/pg_dump.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 396c03d..d0f4113 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4673,11 +4673,7 @@ getAggregates(Archive *fout, int *numAggs)
 						  "p.pronamespace != "
 						  "(SELECT oid FROM pg_namespace "
 						  "WHERE nspname = 'pg_catalog') OR "
-						  "EXISTS (SELECT * FROM pg_init_privs pip "
-						  "WHERE p.oid = pip.objoid AND pip.classoid = "
-						  "(SELECT oid FROM pg_class "
-						  "WHERE relname = 'pg_proc') "
-						  "AND p.proacl IS DISTINCT FROM pip.initprivs)",
+						  "p.proacl IS DISTINCT FROM pip.initprivs",
 						  username_subquery,
 						  acl_subquery->data,
 						  racl_subquery->data,
@@ -4923,11 +4919,7 @@ getFuncs(Archive *fout, int *numFuncs)
 						  "pronamespace != "
 						  "(SELECT oid FROM pg_namespace "
 						  "WHERE nspname = 'pg_catalog') OR "
-						  "EXISTS (SELECT * FROM pg_init_privs pip "
-						  "WHERE p.oid = pip.objoid AND pip.classoid = "
-						  "(SELECT oid FROM pg_class "
-						  "WHERE relname = 'pg_proc') "
-						  "AND p.proacl IS DISTINCT FROM pip.initprivs)",
+						  "p.proacl IS DISTINCT FROM pip.initprivs",
 						  acl_subquery->data,
 						  racl_subquery->data,
 						  initacl_subquery->data,
-- 
2.5.0


From c2fdc4354927466d77aae677e67ee5744adf8d92 Mon Sep 17 00:00:00 2001
From: Stephen Frost <sfr...@snowman.net>
Date: Mon, 25 Apr 2016 00:00:15 -0400
Subject: [PATCH 2/3] Performance improvement for pg_dump

Do not try to dump objects which do not have ACLs when only ACLs are
being requested.
---
 src/bin/pg_dump/pg_dump.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d0f4113..384ba9e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3729,6 +3729,9 @@ getExtensions(Archive *fout, int *numExtensions)
 
 		/* Decide whether we want to dump it */
 		selectDumpableExtension(&(extinfo[i]), dopt);
+
+		/* Extensions do not currently have ACLs. */
+		extinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -4175,6 +4178,9 @@ getOperators(Archive *fout, int *numOprs)
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(oprinfo[i].dobj), fout);
 
+		/* Operators do not currently have ACLs. */
+		oprinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
+
 		if (strlen(oprinfo[i].rolname) == 0)
 			write_msg(NULL, "WARNING: owner of operator \"%s\" appears to be invalid\n",
 					  oprinfo[i].dobj.name);
@@ -4259,6 +4265,9 @@ getCollations(Archive *fout, int *numCollations)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(collinfo[i].dobj), fout);
+
+		/* Collations do not currently have ACLs. */
+		collinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -4340,6 +4349,9 @@ getConversions(Archive *fout, int *numConversions)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(convinfo[i].dobj), fout);
+
+		/* Conversions do not currently have ACLs. */
+		convinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -4413,6 +4425,9 @@ getAccessMethods(Archive *fout, int *numAccessMethods)
 
 		/* Decide whether we want to dump it */
 		selectDumpableAccessMethod(&(aminfo[i]), fout);
+
+		/* Access methods do not currently have ACLs. */
+		aminfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -4506,6 +4521,9 @@ getOpclasses(Archive *fout, int *numOpclasses)
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(opcinfo[i].dobj), fout);
 
+		/* Op Classes do not currently have ACLs. */
+		opcinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
+
 		if (fout->remoteVersion >= 70300)
 		{
 			if (strlen(opcinfo[i].rolname) == 0)
@@ -4594,6 +4612,9 @@ getOpfamilies(Archive *fout, int *numOpfamilies)
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(opfinfo[i].dobj), fout);
 
+		/* Operator families do not currently have ACLs. */
+		opfinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
+
 		if (fout->remoteVersion >= 70300)
 		{
 			if (strlen(opfinfo[i].rolname) == 0)
@@ -6989,6 +7010,9 @@ getEventTriggers(Archive *fout, int *numEventTriggers)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(evtinfo[i].dobj), fout);
+
+		/* Event Triggers do not currently have ACLs. */
+		evtinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -7350,6 +7374,9 @@ getCasts(Archive *fout, int *numCasts)
 
 		/* Decide whether we want to dump it */
 		selectDumpableCast(&(castinfo[i]), fout);
+
+		/* Casts do not currently have ACLs. */
+		castinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -8143,6 +8170,9 @@ getTSParsers(Archive *fout, int *numTSParsers)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(prsinfo[i].dobj), fout);
+
+		/* Text Search Parsers do not currently have ACLs. */
+		prsinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -8228,6 +8258,9 @@ getTSDictionaries(Archive *fout, int *numTSDicts)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(dictinfo[i].dobj), fout);
+
+		/* Text Search Dictionaries do not currently have ACLs. */
+		dictinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -8305,6 +8338,9 @@ getTSTemplates(Archive *fout, int *numTSTemplates)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(tmplinfo[i].dobj), fout);
+
+		/* Text Search Templates do not currently have ACLs. */
+		tmplinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -8383,6 +8419,9 @@ getTSConfigurations(Archive *fout, int *numTSConfigs)
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(cfginfo[i].dobj), fout);
+
+		/* Text Search Configurations do not currently have ACLs. */
+		cfginfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
@@ -8793,6 +8832,9 @@ getDefaultACLs(Archive *fout, int *numDefaultACLs)
 
 		/* Decide whether we want to dump it */
 		selectDumpableDefaultACL(&(daclinfo[i]), dopt);
+
+		/* Default ACLs do not currently have ACLs. */
+		daclinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL;
 	}
 
 	PQclear(res);
-- 
2.5.0


From 9e1042d76dc1e20869005bb9959c46d768cebb64 Mon Sep 17 00:00:00 2001
From: Stephen Frost <sfr...@snowman.net>
Date: Mon, 25 Apr 2016 00:01:38 -0400
Subject: [PATCH 3/3] Add TAP tests for pg_dump

---
 src/bin/pg_dump/Makefile         |   3 +
 src/bin/pg_dump/t/001_basic.pl   |  42 +++
 src/bin/pg_dump/t/002_pg_dump.pl | 788 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 833 insertions(+)
 create mode 100644 src/bin/pg_dump/t/001_basic.pl
 create mode 100644 src/bin/pg_dump/t/002_pg_dump.pl

diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 9596789..260804b 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -42,6 +42,9 @@ install: all installdirs
 installdirs:
 	$(MKDIR_P) '$(DESTDIR)$(bindir)'
 
+check:
+	$(prove_check)
+
 uninstall:
 	rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
new file mode 100644
index 0000000..1411ef7
--- /dev/null
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -0,0 +1,42 @@
+use strict;
+use warnings;
+
+use Config;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 15;
+
+my $tempdir = TestLib::tempdir;
+my $tempdir_short = TestLib::tempdir_short;
+
+#########################################
+# Basic checks
+
+program_help_ok('pg_dump');
+program_version_ok('pg_dump');
+program_options_handling_ok('pg_dump');
+
+#########################################
+# Test various invalid options and disallowed combinations
+# Doesn't require a PG instance to be set up, so do this first.
+
+command_exit_is([ 'pg_dump', 'qqq', 'abc' ],
+	1, 'pg_dump: too many command-line arguments (first is "abc")');
+
+command_exit_is([ 'pg_dump', '-s', '-a' ],
+	1, 'pg_dump: options -s/--schema-only and -a/--data-only cannot be used together');
+
+command_exit_is([ 'pg_dump', '-c', '-a' ],
+	1, 'pg_dump: options -c/--clean and -a/--data-only cannot be used together');
+
+command_exit_is([ 'pg_dump', '--inserts', '-o' ],
+	1, 'pg_dump: options --inserts/--column-inserts and -o/--oids cannot be used together');
+
+command_exit_is([ 'pg_dump', '--if-exists' ],
+	1, 'pg_dump: option --if-exists requires option -c/--clean');
+
+command_exit_is([ 'pg_dump', '-j' ],
+	1, 'pg_dump: option requires an argument -- \'j\'');
+
+command_exit_is([ 'pg_dump', '-j3' ],
+	1, 'pg_dump: parallel backup only supported by the directory format');
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
new file mode 100644
index 0000000..cb458a4
--- /dev/null
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -0,0 +1,788 @@
+use strict;
+use warnings;
+
+use Config;
+use PostgresNode;
+use TestLib;
+use Test::More;
+
+my $tempdir = TestLib::tempdir;
+my $tempdir_short = TestLib::tempdir_short;
+
+my %pgdump_runs = (
+	clean => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/clean.sql",
+			'-c',
+			'postgres',
+		],
+	},
+	clean_if_exists => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/clean_if_exists.sql",
+			'-c',
+			'--if-exists',
+			'postgres',
+		],
+	},
+	data_only => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/data_only.sql",
+			'-a',
+			'postgres',
+		],
+	},
+	defaults => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/defaults.sql",
+			'postgres',
+		],
+	},
+	defaults_custom_format => {
+		test_key => 'defaults',
+		dump_cmd => [
+			'pg_dump',
+			'-Fc',
+			'-f', "$tempdir/defaults_custom_format.dump",
+			'postgres',
+		],
+		restore_cmd => [
+			'pg_restore',
+			'-Fc',
+			'-f', "$tempdir/defaults_custom_format.sql",
+			"$tempdir/defaults_custom_format.dump",
+		],
+	},
+	defaults_dir_format => {
+		test_key => 'defaults',
+		dump_cmd => [
+			'pg_dump',
+			'-Fd',
+			'-f', "$tempdir/defaults_dir_format",
+			'postgres',
+		],
+		restore_cmd => [
+			'pg_restore',
+			'-Fd',
+			'-f', "$tempdir/defaults_dir_format.sql",
+			"$tempdir/defaults_dir_format",
+		],
+	},
+	defaults_parallel => {
+		test_key => 'defaults',
+		dump_cmd => [
+			'pg_dump',
+			'-Fd',
+			'-j2',
+			'-f', "$tempdir/defaults_parallel",
+			'postgres',
+		],
+		restore_cmd => [
+			'pg_restore',
+			'-Fd',
+			'-f', "$tempdir/defaults_parallel.sql",
+			"$tempdir/defaults_parallel",
+		],
+	},
+	defaults_tar_format => {
+		test_key => 'defaults',
+		dump_cmd => [
+			'pg_dump',
+			'-Ft',
+			'-f', "$tempdir/defaults_tar_format.tar",
+			'postgres',
+		],
+		restore_cmd => [
+			'pg_restore',
+			'-Ft',
+			'-f', "$tempdir/defaults_tar_format.sql",
+			"$tempdir/defaults_tar_format.tar",
+		],
+	},
+	exclude_dump_test_schema => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/exclude_dump_test_schema.sql",
+			'-N', 'dump_test',
+			'postgres',
+		],
+	},
+	exclude_test_table => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/exclude_test_table.sql",
+			'-T', 'dump_test.test_table',
+			'postgres',
+		],
+	},
+	exclude_test_table_data => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/exclude_test_table_data.sql",
+			'--exclude-table-data=dump_test.test_table',
+			'postgres',
+		],
+	},
+	no_privs => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/no_privs.sql",
+			'-x',
+			'postgres',
+		],
+	},
+	no_owner => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/no_owner.sql",
+			'-O',
+			'postgres',
+		],
+	},
+	only_dump_test_schema => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/only_dump_test_schema.sql",
+			'-n', 'dump_test',
+			'postgres',
+		],
+	},
+	only_dump_test_table => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/only_dump_test_table.sql",
+			'-t', 'dump_test.test_table',
+			'postgres',
+		],
+	},
+	schema_only => {
+		dump_cmd => [
+			'pg_dump',
+			'-f', "$tempdir/schema_only.sql",
+			'-s',
+			'postgres',
+		],
+	},
+);
+
+my %tests = (
+	'ALTER SCHEMA dump_test OWNER TO' => {
+		regexp => qr/^ALTER SCHEMA dump_test OWNER TO .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			only_dump_test_schema => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'ALTER SCHEMA dump_test_second_schema OWNER TO' => {
+		regexp => qr/^ALTER SCHEMA dump_test_second_schema OWNER TO .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'ALTER TABLE test_table OWNER TO' => {
+		regexp => qr/^ALTER TABLE test_table OWNER TO .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+		},
+	},
+	'ALTER TABLE test_second_table OWNER TO' => {
+		regexp => qr/^ALTER TABLE test_second_table OWNER TO .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			only_dump_test_schema => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'ALTER TABLE test_third_table OWNER TO' => {
+		regexp => qr/^ALTER TABLE test_third_table OWNER TO .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'ALTER ... OWNER commands' => { # catch-all for ALTER ... OWNER
+		regexp => qr/^ALTER .* OWNER TO .*;$/m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+			no_owner => 1,
+		},
+	},
+	'COMMENT ON DATABASE postgres' => {
+		regexp => qr/^COMMENT ON DATABASE postgres IS .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'COMMENT ON EXTENSION plpgsql' => {
+		regexp => qr/^COMMENT ON EXTENSION plpgsql IS .*;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'COMMENT commands' => { # catch-all for COMMENTs
+		regexp => qr/^COMMENT ON /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+		},
+	},
+	'COPY test_table' => {
+		regexp => qr/^COPY test_table \(col1\) FROM stdin;\n(?:\d\n){9}\\\.\n$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			data_only => 1,
+			defaults => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+		},
+	},
+	'COPY test_second_table' => {
+		regexp => qr/^COPY test_second_table \(col1, col2\) FROM stdin;\n(?:\d\t\d\n){9}\\\.\n$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			data_only => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'COPY test_third_table' => {
+		regexp => qr/^COPY test_third_table \(col1\) FROM stdin;\n(?:\d\n){9}\\\.\n$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			data_only => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'COPY ... commands' => { # catch-all for COPY
+		regexp => qr/^COPY /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			schema_only => 1,
+		},
+	},
+	'CREATE EXTENSION ... plpgsql' => {
+		regexp => qr/^CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'CREATE SCHEMA dump_test' => {
+		regexp => qr/^CREATE SCHEMA dump_test;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'CREATE SCHEMA dump_test_second_schema' => {
+		regexp => qr/^CREATE SCHEMA dump_test_second_schema;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'CREATE TABLE test_table' => {
+		regexp => qr/^CREATE TABLE test_table \(\n\s+col1 integer\n\);$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+		},
+	},
+	'CREATE TABLE test_second_table' => {
+		regexp => qr/^CREATE TABLE test_second_table \(\n\s+col1 integer,\n\s+col2 text\n\);$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'CREATE TABLE test_third_table' => {
+		regexp => qr/^CREATE TABLE test_third_table \(\n\s+col1 integer\n\);$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'CREATE ... commands' => { # catch-all for CREATE
+		regexp => qr/^CREATE /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+		},
+	},
+	'DROP EXTENSION plpgsql' => {
+		regexp => qr/^DROP EXTENSION plpgsql;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP SCHEMA dump_test' => {
+		regexp => qr/^DROP SCHEMA dump_test;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP SCHEMA dump_test_second_schema' => {
+		regexp => qr/^DROP SCHEMA dump_test_second_schema;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP TABLE test_table' => {
+		regexp => qr/^DROP TABLE dump_test\.test_table;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP TABLE test_second_table' => {
+		regexp => qr/^DROP TABLE dump_test\.test_second_table;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP TABLE test_third_table' => {
+		regexp => qr/^DROP TABLE dump_test_second_schema\.test_third_table;$/m,
+		like => {
+			clean => 1,
+		},
+		unlike => {
+			clean_if_exists => 1,
+		},
+	},
+	'DROP EXTENSION IF EXISTS plpgsql' => {
+		regexp => qr/^DROP EXTENSION IF EXISTS plpgsql;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP SCHEMA IF EXISTS dump_test' => {
+		regexp => qr/^DROP SCHEMA IF EXISTS dump_test;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP SCHEMA IF EXISTS dump_test_second_schema' => {
+		regexp => qr/^DROP SCHEMA IF EXISTS dump_test_second_schema;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP TABLE IF EXISTS test_table' => {
+		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_table;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP TABLE IF EXISTS test_second_table' => {
+		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_second_table;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP TABLE IF EXISTS test_third_table' => {
+		regexp => qr/^DROP TABLE IF EXISTS dump_test_second_schema\.test_third_table;$/m,
+		like => {
+			clean_if_exists => 1,
+		},
+		unlike => {
+			clean => 1,
+		},
+	},
+	'DROP ... commands' => { # catch-all for DROP
+		regexp => qr/^DROP /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_privs => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+			schema_only => 1,
+		},
+	},
+	'GRANT USAGE ON SCHEMA dump_test_second_schema' => {
+		regexp => qr/^GRANT USAGE ON SCHEMA dump_test_second_schema TO dump_test;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_owner => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'GRANT SELECT ON TABLE test_table' => {
+		regexp => qr/^GRANT SELECT ON TABLE test_table TO dump_test;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table_data => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			only_dump_test_table => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			exclude_test_table => 1,
+		},
+	},
+	'GRANT INSERT(col1) ON TABLE test_second_table' => {
+		regexp => qr/^GRANT INSERT\(col1\) ON TABLE test_second_table TO dump_test;$/m,
+		like => {
+			clean => 1,
+			clean_if_exists => 1,
+			defaults => 1,
+			exclude_test_table => 1,
+			exclude_test_table_data => 1,
+			no_owner => 1,
+			only_dump_test_schema => 1,
+			schema_only => 1,
+		},
+		unlike => {
+			exclude_dump_test_schema => 1,
+			only_dump_test_table => 1,
+		},
+	},
+	'GRANT commands' => { # catch-all for GRANT commands
+		regexp => qr/^GRANT /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+			no_privs => 1,
+		},
+	},
+	'REVOKE commands' => { # catch-all for REVOKE commands
+		regexp => qr/^REVOKE /m,
+		like => { }, # use more-specific options above
+		unlike => {
+			data_only => 1,
+			no_privs => 1,
+		},
+	},
+);
+
+#########################################
+# Create a PG instance to test actually dumping from
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+#########################################
+# Set up schemas, tables, etc, to be dumped.
+
+$node->safe_psql('postgres',
+	'CREATE ROLE dump_test; '
+	. 'CREATE SCHEMA dump_test; '
+	. 'CREATE TABLE dump_test.test_table (col1 int); '
+	. 'INSERT INTO dump_test.test_table (col1) '
+	. 'SELECT generate_series FROM generate_series(1,9);'
+	. 'GRANT SELECT ON TABLE dump_test.test_table TO dump_test;'
+	. 'CREATE TABLE dump_test.test_second_table (col1 int, col2 text); '
+	. 'INSERT INTO dump_test.test_second_table (col1, col2) '
+	. 'SELECT generate_series, generate_series::text FROM generate_series(1,9);'
+	. 'GRANT INSERT (col1) ON TABLE dump_test.test_second_table TO dump_test;'
+	. 'CREATE SCHEMA dump_test_second_schema; '
+	. 'GRANT USAGE ON SCHEMA dump_test_second_schema TO dump_test;'
+	. 'CREATE TABLE dump_test_second_schema.test_third_table (col1 int); '
+	. 'INSERT INTO dump_test_second_schema.test_third_table (col1) '
+	. 'SELECT generate_series FROM generate_series(1,9);'
+	);
+
+my $port = $node->port;
+
+# Start with 1 because of the non-existent database test below
+my $num_tests = 1;
+
+foreach my $run (sort keys %pgdump_runs) {
+	my $test_key = $run;
+
+	# Each run of pg_dump is a test itself
+	$num_tests++;
+
+	# If there is a restore cmd, that's another test
+	if ($pgdump_runs{$run}->{restore_cmd}) {
+		$num_tests++;
+	}
+
+	if ($pgdump_runs{$run}->{test_key}) {
+		$test_key = $pgdump_runs{$run}->{test_key};
+	}
+
+	# Then count all the tests run against each run
+	foreach my $test (sort keys %tests) {
+		if ($tests{$test}->{like}->{$test_key}) {
+			$num_tests++;
+		}
+
+		if ($tests{$test}->{unlike}->{$test_key}) {
+			$num_tests++;
+		}
+	}
+}
+plan tests => $num_tests;
+
+#########################################
+# Test connecting to a non-existent database
+
+command_exit_is([ 'pg_dump', '-p', "$port", 'qqq' ],
+	1, 'pg_dump: [archiver (db)] connection to database "qqq" failed: FATAL: database "qqq" does not exist');
+
+#########################################
+# Run all runs
+
+foreach my $run (sort keys %pgdump_runs) {
+
+	my $test_key = $run;
+
+	$node->command_ok(\@{ $pgdump_runs{$run}->{dump_cmd} }, "$run: pg_dump runs");
+
+	if ($pgdump_runs{$run}->{restore_cmd}) {
+		$node->command_ok(\@{ $pgdump_runs{$run}->{restore_cmd} }, "$run: pg_restore runs");
+	}
+
+	if ($pgdump_runs{$run}->{test_key}) {
+		$test_key = $pgdump_runs{$run}->{test_key};
+	}
+
+	my $output_file = slurp_file("$tempdir/${run}.sql");
+
+	#########################################
+	# Run all tests where this run is included
+	# as either a 'like' or 'unlike' test.
+
+	foreach my $test (sort keys %tests) {
+		if ($tests{$test}->{like}->{$test_key}) {
+			like($output_file, $tests{$test}->{regexp}, "$run: dumps $test");
+		}
+
+		if ($tests{$test}->{unlike}->{$test_key}) {
+			unlike($output_file, $tests{$test}->{regexp}, "$run: does not dump $test");
+		}
+	}
+}
+
+#########################################
+# Clean up
+
+$node->safe_psql('postgres',
+	'DROP SCHEMA dump_test CASCADE; '
+	. 'DROP SCHEMA dump_test_second_schema CASCADE; '
+	. 'DROP ROLE dump_test; '
+	);
+
+$node->stop('fast');
-- 
2.5.0