Re: WIP: Avoid creation of the free space map for small tables

2019-02-23 Thread John Naylor
On Fri, Feb 22, 2019 at 3:59 AM Amit Kapila  wrote:
> The reason for not pushing much on making the test pass for
> nonstandard block sizes is that when I tried existing tests, there
> were already some failures.

FWIW, I currently see 8 failures (attached).

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


jcn-16blk-regress.diffs
Description: Binary data


Re: Why don't we have a small reserved OID range for patch revisions?

2019-02-23 Thread John Naylor
I wrote:

> Along those lines, here's a draft patch to do just that. It handles
> array type oids as well. Run it like this:
>
> perl  reformat_dat_file.pl  --map-from 9000  --map-to 2000  *.dat
>
> There is some attempt at documentation. So far it doesn't map by
> default, but that could be changed if we agreed on the convention of
> 9000 or whatever.

In case we don't want to lose track of this, I added it to the March
commitfest with a target of v13. (I didn't see a way to add it to the
July commitfest.)

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Add --include-table-data-where option to pg_dump, to export only a subset of table data

2019-02-23 Thread Surafel Temesgen
On Thu, Jan 31, 2019 at 10:22 PM Carter Thaxton 
wrote:

>
>
> Here's a rebased patch that works with the latest master branch.  Very
> straightforward.
>


You forgot to resubmit it to the next commitfest, and the error message
on an incorrect WHERE clause specification is still the same.

regards
Surafel


Re: [PATCH v20] GSSAPI encryption support

2019-02-23 Thread Peter Eisentraut
I don't know much about GSSAPI, but from what I can tell, this seems an
attractive feature, and the implementation is compact enough.  I have
done a bit of work on the internal SSL API refactoring, so I have some
thoughts on this patch.

Looking at the file structure, we would have

be-secure.c
be-secure-openssl.c
be-secure-[othersslimpl].c
be-secure-gssapi.c
be-secure-common.c

This implies a code structure that isn't really there.
be-secure-common.c is used by SSL implementations but not by the GSSAPI
implementation.

Perhaps we should rename be-secure-openssl.c to be-ssl-openssl.c and
be-secure-common.c to be-ssl-common.c.

Or maybe we avoid that, and you rename be-secure-gssapi.c to just
be-gssapi.c and also combine that with the contents of be-gssapi-common.c.

(Or maybe both.)

(And similarly in libpq.)

About pg_hba.conf: The "hostgss" keyword seems a bit confusing.  It only
applies to encrypted gss-using connections, not all of them.  Maybe
"hostgssenc" or "hostgsswrap"?

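For illustration, a hypothetical pg_hba.conf line under one of those
spellings (not from the patch as posted):

    # TYPE       DATABASE  USER  ADDRESS      METHOD
    hostgssenc   all       all   0.0.0.0/0    gss
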
I don't see any tests in the patch.  We have a Kerberos test suite at
src/test/kerberos/ and an SSL test suite at src/test/ssl/.  You can get
some ideas there.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [Bug Fix] ECPG: could not use set xxx to default statement

2019-02-23 Thread Michael Meskes
> Not seeing any motion on this, here's a draft patch to make these
> scripts complain about missing semicolons.  Against the current
> gram.y (which contains 2 such errors, as Michael noted) you
> get output like

Thanks Tom for looking into this. Are we agreed then that we want
gram.y to have semicolons? 

> '/usr/bin/perl' ./parse.pl . < ../../../backend/parser/gram.y >
> preproc.y
> unterminated rule at ./parse.pl line 370, <> line 1469.
> make: *** [preproc.y] Error 255
> make: *** Deleting file `preproc.y'
> 
> That's not *super* friendly, but it does give you the right line
> number
> to look at in gram.y.  We could adjust the script (and the Makefile)
> further so that the message would cite the gram.y filename, but I'm
> not
> sure if it's worth the trouble.  Thoughts?

IMO it's not worth it. We all know where the grammar is and that the
ecpg tools only parse that one file. Why put effort into writing it
down too?
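
For the record, the termination the scripts now insist on looks like
this (hypothetical rule, not from gram.y):

    stmt_example:
                EXAMPLE name            { $$ = $2; }
                | /* EMPTY */           { $$ = NULL; }
            ;   /* a rule missing this ';' is reported as "unterminated" */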

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Meskes at (Debian|Postgresql) dot Org
Jabber: michael at xmpp dot meskes dot org
VfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] Block level parallel vacuum

2019-02-23 Thread Haribabu Kommi
On Thu, Feb 14, 2019 at 9:17 PM Masahiko Sawada 
wrote:

> On Wed, Feb 13, 2019 at 9:32 PM Haribabu Kommi 
> wrote:
> >
> >
> > On Sat, Feb 9, 2019 at 11:47 PM Masahiko Sawada 
> wrote:
> >>
> >> On Tue, Feb 5, 2019 at 12:14 PM Haribabu Kommi <
> kommi.harib...@gmail.com> wrote:
> >> >
> >> >
> >> > On Fri, Feb 1, 2019 at 8:19 AM Masahiko Sawada 
> wrote:
> >> >>
> >> >>
> >> >> Passing stats = NULL to amvacuumcleanup and ambulkdelete means the
> >> >> first-time execution. For example, btvacuumcleanup skips cleanup if
> >> >> it's not NULL. In the normal vacuum we pass NULL to ambulkdelete or
> >> >> amvacuumcleanup on the first call, and they store the result stats
> >> >> in memory allocated locally. Therefore in the parallel vacuum I
> >> >> think that both worker and leader need to move it to the shared
> >> >> memory and mark it as updated, as a different worker could vacuum
> >> >> different indexes the next time.
> >> >
> >> >
> >> > OK, understood the point. But btbulkdelete allocates the memory
> >> > whenever the stats are NULL, so I don't see a problem with it.
> >> >
> >> > The only problem is with btvacuumcleanup: when there are no dead
> >> > tuples present in the table, btbulkdelete is not called and
> >> > btvacuumcleanup is called directly at the end of vacuum. In that
> >> > scenario there is a code-flow difference based on the stats, so why
> >> > can't we use the deadtuples number to differentiate instead of
> >> > adding another flag?
> >>
> >> I don't understand your suggestion. What do we compare deadtuples
> >> number to? Could you elaborate on that please?
> >
> >
> > The scenario where stats should be passed as NULL to the
> > btvacuumcleanup function is when there are no dead tuples. I just
> > think that we may use that deadtuples structure to find out whether
> > stats should be NULL or not, while avoiding the extra memcpy.
> >
>
> Thank you for your explanation. I understood. Maybe I'm worrying too
> much, but I'm concerned about compatibility; currently we handle
> indexes individually. So if there is an index access method whose
> ambulkdelete returns NULL at the first call but returns a palloc'd
> struct the second or later times, that wouldn't work fine.
>
> The documentation says that the passed-in 'stats' is NULL at the first
> call of ambulkdelete but doesn't say anything about the second call or
> later. Index access methods may expect that the passed-in 'stats' is
> the same as what they returned last time. So I think we should add an
> extra flag to keep compatibility.
>

I checked some of the ambulkdelete functions, and they do not return
NULL whenever they are called; the palloc'd structure just doesn't get
filled in with the details.

IMO, no extra code is needed for parallel vacuum compared to normal
vacuum.
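
For reference, here is a minimal sketch of the stats-passing convention
being discussed -- an illustration only, not the actual nbtree code:

    #include "postgres.h"
    #include "access/genam.h"

    /*
     * Sketch: 'stats' is NULL if no ambulkdelete pass preceded this
     * amvacuumcleanup call; non-NULL means ambulkdelete already ran
     * and returned a palloc'd result that can be reused.
     */
    static IndexBulkDeleteResult *
    vacuumcleanup_sketch(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
    {
        if (stats == NULL)
        {
            /* no bulk-delete pass happened; build our own stats */
            stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
            /* ... scan the index and fill in stats ... */
        }
        /* otherwise reuse (and possibly update) the earlier result */
        return stats;
    }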

Regards,
Haribabu Kommi
Fujitsu Australia


Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Magnus Hagander
On Fri, Feb 22, 2019 at 7:15 PM Robert Haas  wrote:

> On Fri, Feb 22, 2019 at 1:14 PM Tom Lane  wrote:
> > Why?  It would likely be a significant amount of effort and added
> overhead,
> > to accomplish no obviously-useful goal.
> >
> > Note that all the temp schemas are made as owned by the bootstrap
> > superuser, so there is no real argument to be made that people might
> > be expecting they should be able to delete them.
>
> Hmm, well maybe you're right.  Just seems like an odd wart.
>

Well, the way it works now is you can drop them. But if you then create
another temp table in the same session, it will get the OID of the
already-dropped schema in its relnamespace column.

That just seems plain broken.

I think we need to either prevent dropping of temp namespaces *or* we need
to create a new entry in pg_namespace in this particular case.
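
For anyone wanting to reproduce it, a sketch of the sequence (as
superuser; the pg_temp_3 name varies by backend):

    CREATE TEMP TABLE foo (a int);   -- creates pg_temp_N if needed
    DROP TABLE foo;
    DROP SCHEMA pg_temp_3 CASCADE;   -- allowed today
    CREATE TEMP TABLE bar (a int);   -- relnamespace now holds a dropped OID
    SELECT relname, relnamespace FROM pg_class WHERE relname = 'bar';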

I wonder if other "fun" things could happen if you go rename the namespace,
haven't tried that yet...

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: oddity with ALTER ROLE/USER

2019-02-23 Thread Stephen Frost
Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Joe Conway  writes:
> > I noticed that ALTER ROLE/USER succeeds even when called without any
> > options:
> 
> > postgres=# alter user foo;
> > ALTER ROLE
> > postgres=# alter role foo;
> > ALTER ROLE
> > postgres=# alter group foo;
> > ERROR:  syntax error at or near ";"
> > LINE 1: alter group foo;
> 
> > That seems odd, does nothing useful, and is inconsistent with, for
> > example, ALTER GROUP as shown above.
> 
> > Proposed patch attached.
> 
> If you want to make it act like alter group, why not make it act
> like alter group?  That is, the way to fix this is to change the
> grammar so that AlterOptRoleList doesn't permit an expansion with
> zero list elements.
> 
> Having said that, I can't get excited about changing this at all.
> Nobody will thank us for it, and someone might complain.

Is there no chance that someone's issuing such an ALTER ROLE foo;
statement and thinking it's actually doing something?  I suppose it's
pretty unlikely, but we do complain in some cases where we've been asked
to do something and we end up not actually doing anything for various
reasons ("no privileges were granted..."), though that's only a warning.
Maybe that'd be useful to have here though?  At least then we're making
it clear to the user that nothing is happening and we don't break
existing things.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: FOP warnings about id attributes in title tags

2019-02-23 Thread Andrew Dunstan

On 2/22/19 9:57 AM, Andrew Dunstan wrote:
> On 2/19/19 11:07 AM, Peter Eisentraut wrote:
>> On 2019-02-18 16:37, Peter Eisentraut wrote:
>>>> It appears that these are due to title elements having id tags. At
>>>> [...] it says:
>>>>
>>>> When adding an "id" or "xml:id" attribute, put it on the element
>>>> itself, not the "title".
>>>>
>>>> So maybe we need to fix those up?
>>> You can't just remove the ids, since some of them are referenced from
>>> elsewhere.
>> Here was a discussion on getting rid of them:
>> https://www.postgresql.org/message-id/flat/4a60dfc3-061b-01c4-2b86-279d3a612fd2%402ndquadrant.com
>>
>
>
> Yeah,
>
>
> I did some experimentation, and found that removing the id on the title
> tag, and the corresponding endterm attributes, and adding an xreflabel
> to the linkend object seemed to have the desired effect. Not yet tested
> with FOP but this looks like a good direction.
>
>
> Test case:
>
>
> diff --git a/doc/src/sgml/ref/alter_collation.sgml 
> b/doc/src/sgml/ref/alter_collation.sgml
> index b51b3a2564..432495e522 100644
> --- a/doc/src/sgml/ref/alter_collation.sgml
> +++ b/doc/src/sgml/ref/alter_collation.sgml
> @@ -93,16 +93,15 @@ ALTER COLLATION name SET SCHEMA new_sche
>  
>   
>    Update the collation's version.
> -  See <xref linkend="sql-altercollation-notes"
> -  endterm="sql-altercollation-notes-title"/> below.
> +  See <xref linkend="sql-altercollation-notes"/> below.
>   
>  
>     
>    
>   
>  
> - <refsect1 id="sql-altercollation-notes">
> -  <title id="sql-altercollation-notes-title">Notes</title>
> + <refsect1 id="sql-altercollation-notes" xreflabel="Notes">
> +  <title>Notes</title>
>  
>    
>     When using collations provided by the ICU library, the ICU-specific 
> version


This worked reasonably well in most cases, but not in the cases where
there was formatting in the title text. So I adopted a different
approach which wrapped the title text in a phrase tag and put the id on
that tag instead of on the title tag itself. The documentation seems to
suggest that supplying a place to put an id tag around a small piece of
text is largely the purpose of the phrase tag. Anyway, this worked. It
also has the upside that we're not duplicating the title text.
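
Roughly, the change looks like this (the id shown is just for
illustration):

    <!-- before: id on the title itself -->
    <title id="sql-altercollation-notes-title">Notes</title>

    <!-- after: id on a phrase wrapping the title text -->
    <title><phrase id="sql-altercollation-notes-title">Notes</phrase></title>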


At the same time I removed some apparently pointless id tags on a
handful of refname objects.


Given these two changes the PDFs build free of warnings about unresolved
ID references.


Some of these titles with id attributes are not actually referred to
anywhere, but that seems reasonably harmless.


patch attached.


cheers


andrew



-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
index b7f785069f..95ec66867f 100644
--- a/doc/src/sgml/ecpg.sgml
+++ b/doc/src/sgml/ecpg.sgml
@@ -1041,7 +1041,7 @@ struct varchar_var { int len; char arr[180]; } var;
 
 
 
- timestamp, date
+ timestamp, date
 
  
   Here is a pattern for handling timestamp variables
@@ -1206,7 +1206,7 @@ EXEC SQL END DECLARE SECTION;
 
 
 
- bytea
+ bytea
 
  
   The handling of the bytea type is also similar to
diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml
index 2413089ffe..2049e855dd 100644
--- a/doc/src/sgml/file-fdw.sgml
+++ b/doc/src/sgml/file-fdw.sgml
@@ -210,7 +210,7 @@
  
 
  
- Create a Foreign Table for PostgreSQL CSV Logs
+ Create a Foreign Table for PostgreSQL CSV Logs
 
   
One of the obvious uses for file_fdw is to make
diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml
index b51b3a2564..eac6c244fe 100644
--- a/doc/src/sgml/ref/alter_collation.sgml
+++ b/doc/src/sgml/ref/alter_collation.sgml
@@ -102,7 +102,7 @@ ALTER COLLATION name SET SCHEMA new_sche
  
 
  
-  Notes
+  Notes
 
   
When using collations provided by the ICU library, the ICU-specific version
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 0aa0f093f2..8a47e3f95e 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -1153,7 +1153,7 @@ WITH ( MODULUS numeric_literal, REM
  
 
  
-  Notes
+  Notes
 

 The key word COLUMN is noise and can be omitted.
diff --git a/doc/src/sgml/ref/clusterdb.sgml b/doc/src/sgml/ref/clusterdb.sgml
index ed343dd7da..56c93543c2 100644
--- a/doc/src/sgml/ref/clusterdb.sgml
+++ b/doc/src/sgml/ref/clusterdb.sgml
@@ -15,7 +15,7 @@ PostgreSQL documentation
  
 
  
-  clusterdb
+  clusterdb
   cluster a PostgreSQL database
  
 
diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml
index d938b65bbe..113c91b291 100644
--- a/doc/src/sgml/ref/commit_prepared.sgml
+++ b/doc/src/sgml/ref/commit_prepared.sgml
@@ -72,7 +72,7 @@ COMMIT PREPARED transaction_id
  
 
  
-  Examples
+  Examples
   
Commit the transaction identified by the transaction
identifier foobar:
diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml
index b8cd2e7af9..1fb0164805 100644
--- a/doc/src/sgml/ref/create_aggre

Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Michael Paquier
On Sat, Feb 23, 2019 at 02:48:58PM +0100, Magnus Hagander wrote:
> I think we need to either prevent dropping of temp namespaces *or* we need
> to create a new entry in pg_namespace in this particular case.

Perhaps I am missing something, but it would be just more simple to
now allow users to restrict that?

> I wonder if other "fun" things could happen if you go rename the namespace,
> haven't tried that yet...

In this case the OID remains the same, still there are some cases
where we rely on the namespace name, and one is CLUSTER.
objectaddress.c uses as well get_namespace_name_or_temp(), which would
be messed up, so it would be better to prevent a temp namespace to be
renamed.  Could ALTER SCHEMA OWNER TO also be a problem?
--
Michael


signature.asc
Description: PGP signature


Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Magnus Hagander
On Sat, Feb 23, 2019 at 12:29 AM Michael Paquier 
wrote:

> On Fri, Feb 22, 2019 at 09:45:42AM -0500, Tom Lane wrote:
> > +1 ... maybe "(dropped)", because we tend to use parens for this sort
> > of thing, I think.
>
> +1.  Using "dropped" sounds good to me in this context.  Perhaps we
> could have something more fancy like what's used for dropped columns?
> It would be nice to get a reference to a schema, like say "dropped
> temporary schema".
>

I think that's unnecessary given the context:

2019-02-23 15:43:56.375 CET [17250] LOG:  autovacuum: dropping orphan temp
table "postgres.(dropped).bar"

That said, it just moves the crash. Turns out the problem goes a lot deeper
than just the logging, it's basically the cleanup of orphaned temp tables
that's completely broken if you drop the namespace.

TRAP: FailedAssertion("!(relation->rd_backend != (-1))", File:
"relcache.c", Line: 1085)
2019-02-23 15:43:56.378 CET [17146] LOG:  server process (PID 17250) was
terminated by signal 6: Aborted

#2  0x563e0fe4bc83 in ExceptionalCondition
(conditionName=conditionName@entry=0x563e10035fb0 "!(relation->rd_backend
!= (-1))",
errorType=errorType@entry=0x563e0fe9bf9d "FailedAssertion",
fileName=fileName@entry=0x563e100357dc "relcache.c",
lineNumber=lineNumber@entry=1085) at assert.c:54
#3  0x563e0fe40e18 in RelationBuildDesc (targetRelId=24580,
insertIt=insertIt@entry=true) at relcache.c:1085
#4  0x563e0fe41a86 in RelationIdGetRelation (relationId=, relationId@entry=24580) at relcache.c:1894
#5  0x563e0fa24b4c in relation_open (relationId=relationId@entry=24580,
lockmode=lockmode@entry=8) at relation.c:59
#6  0x563e0fadcfea in heap_drop_with_catalog (relid=24580) at
heap.c:1856
#7  0x563e0fad9145 in doDeletion (flags=21, object=) at
dependency.c:1329
#8  deleteOneObject (flags=21, depRel=0x7ffd80db4808, object=) at dependency.c:1231
#9  deleteObjectsInList (targetObjects=targetObjects@entry=0x563e10640110,
depRel=depRel@entry=0x7ffd80db4808, flags=flags@entry=21)
at dependency.c:271
#10 0x563e0fad91f0 in performDeletion (object=object@entry=0x7ffd80db4944,
behavior=behavior@entry=DROP_CASCADE,
flags=flags@entry=21) at dependency.c:352
#11 0x563e0fa13532 in do_autovacuum () at autovacuum.c:2269


So basically I think it's the wrong approach to try to fix this error
message. We need to fix the underlying problem, which is the ability to
drop the temp table schemas and then create temp tables which have no
schemas.

We could try to recreate the namespace if dropped. But a quick fix around
that just moved coredumps around to a lot of other dependent places.

I think we're better off just preventing the explicit drop of a temp schema.
See attached?

We'd also want to block things like:
ALTER SCHEMA pg_temp_3 RENAME TO foobar;
DROP SCHEMA foobar;

Are there any more things beyond RENAME we need to block?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 
diff --git a/src/backend/commands/dropcmds.c b/src/backend/commands/dropcmds.c
index f26a2f4779..c9841e2dc2 100644
--- a/src/backend/commands/dropcmds.c
+++ b/src/backend/commands/dropcmds.c
@@ -101,6 +101,23 @@ RemoveObjects(DropStmt *stmt)
 		 errhint("Use DROP AGGREGATE to drop aggregate functions.")));
 		}
 
+		/*
+		 * Don't allow dropping of temp namespaces, since there can still be
+		 * references to their OIDs all over.  We have to look at the name of
+		 * the schema, since isTempNamespace() only checks whether it is *our*
+		 * temp namespace.
+		 */
+		if (stmt->removeType == OBJECT_SCHEMA)
+		{
+			char *name = get_namespace_name(address.objectId);
+
+			if (name && (strncmp(name, "pg_temp_", 8) == 0 ||
+						 strncmp(name, "pg_toast_temp_", 14) == 0))
+				ereport(ERROR,
+						(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						 errmsg("cannot drop temporary schema \"%s\"", name),
+						 errhint("Temporary schemas are dropped automatically when the session ends.")));
+		}
+
 		/* Check permissions. */
 		namespaceId = get_object_namespace(&address);
 		if (!OidIsValid(namespaceId) ||


Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Magnus Hagander
On Sat, Feb 23, 2019 at 4:18 PM Michael Paquier  wrote:

> On Sat, Feb 23, 2019 at 02:48:58PM +0100, Magnus Hagander wrote:
> > I think we need to either prevent dropping of temp namespaces *or* we
> need
> > to create a new entry in pg_namespace in this particular case.
>
> Perhaps I am missing something, but it would be just more simple to
> now allow users to restrict that?
>

I can't parse what you are saying here. Now allow users to restrict what?


> I wonder if other "fun" things could happen if you go rename the
> namespace,
> > haven't tried that yet...
>
> In this case the OID remains the same, still there are some cases
> where we rely on the namespace name, and one is CLUSTER.
> objectaddress.c uses as well get_namespace_name_or_temp(), which would
> be messed up, so it would be better to prevent a temp namespace to be
> renamed.  Could ALTER SCHEMA OWNER TO also be a problem?
>

Or possibly altering permissions on it?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Magnus Hagander
On Sat, Feb 23, 2019 at 4:28 PM Magnus Hagander  wrote:

>
>
> On Sat, Feb 23, 2019 at 12:29 AM Michael Paquier 
> wrote:
>
>> On Fri, Feb 22, 2019 at 09:45:42AM -0500, Tom Lane wrote:
>> > +1 ... maybe "(dropped)", because we tend to use parens for this sort
>> > of thing, I think.
>>
>> +1.  Using "dropped" sounds good to me in this context.  Perhaps we
>> could have something more fancy like what's used for dropped columns?
>> It would be nice to get a reference to a schema, like say "dropped
>> temporary schema".
>>
>
> I think that's unnecessary given the context:
>
> 2019-02-23 15:43:56.375 CET [17250] LOG:  autovacuum: dropping orphan temp
> table "postgres.(dropped).bar"
>
> That said, it just moves the crash. Turns out the problem goes a lot
> deeper than just the logging, it's basically the cleanup of orphaned temp
> tables that's completely broken if you drop the namespace.
>
> TRAP: FailedAssertion("!(relation->rd_backend != (-1))", File:
> "relcache.c", Line: 1085)
> 2019-02-23 15:43:56.378 CET [17146] LOG:  server process (PID 17250) was
> terminated by signal 6: Aborted
>
> #2  0x563e0fe4bc83 in ExceptionalCondition
> (conditionName=conditionName@entry=0x563e10035fb0 "!(relation->rd_backend
> != (-1))",
> errorType=errorType@entry=0x563e0fe9bf9d "FailedAssertion",
> fileName=fileName@entry=0x563e100357dc "relcache.c",
> lineNumber=lineNumber@entry=1085) at assert.c:54
> #3  0x563e0fe40e18 in RelationBuildDesc (targetRelId=24580,
> insertIt=insertIt@entry=true) at relcache.c:1085
> #4  0x563e0fe41a86 in RelationIdGetRelation (relationId= out>, relationId@entry=24580) at relcache.c:1894
> #5  0x563e0fa24b4c in relation_open (relationId=relationId@entry=24580,
> lockmode=lockmode@entry=8) at relation.c:59
> #6  0x563e0fadcfea in heap_drop_with_catalog (relid=24580) at
> heap.c:1856
> #7  0x563e0fad9145 in doDeletion (flags=21, object=) at
> dependency.c:1329
> #8  deleteOneObject (flags=21, depRel=0x7ffd80db4808, object= out>) at dependency.c:1231
> #9  deleteObjectsInList (targetObjects=targetObjects@entry=0x563e10640110,
> depRel=depRel@entry=0x7ffd80db4808, flags=flags@entry=21)
> at dependency.c:271
> #10 0x563e0fad91f0 in performDeletion (object=object@entry=0x7ffd80db4944,
> behavior=behavior@entry=DROP_CASCADE,
> flags=flags@entry=21) at dependency.c:352
> #11 0x563e0fa13532 in do_autovacuum () at autovacuum.c:2269
>
>
> So basically I think it's the wrong approach to try to fix this error
> message. We need to fix the underlying problem, which is the ability to
> drop the temp table schemas and then create temp tables which have no
> schemas.
>
> We could try to recreate the namespace if dropped. But a quick fix around
> that just moved coredumps around to a lot of other dependent places.
>
> I think we're better off just preventing the explicit drop of a temp
> schema. See attached?
>
> We'd also want to block things like:
> ALTER SCHEMA pg_temp_3 RENAME TO foobar;
> DROP SCHEMA foobar;
>
> Are there any more things beyond RENAME we need to block?
>
>
Ooh, RENAME has problems entirely unrelated to that:

ALTER SCHEMA pg_catalog RENAME TO foobar;

Yeah, that breaks things...

Renaming schemas only checks that the *target* name is not reserved.
Surely it should also check that the *source* name is not reserved
(unless allowSystemTableMods)?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Tom Lane
Magnus Hagander  writes:
> On Fri, Feb 22, 2019 at 7:15 PM Robert Haas  wrote:
>> On Fri, Feb 22, 2019 at 1:14 PM Tom Lane  wrote:
>>> Note that all the temp schemas are made as owned by the bootstrap
>>> superuser, so there is no real argument to be made that people might
>>> be expecting they should be able to delete them.

>> Hmm, well maybe you're right.  Just seems like an odd wart.

> Well, the way it works now is you can drop them. But if you then create
> another temp table in the same session, it will get an oid of the already
> dropped schema in the relnamespace column.

Only if you're superuser.

> That just seems plain broken.

There are a *lot* of ways that a superuser can break things.  I'm not
real sure that this one is special enough that we need a defense
against it.

However, if someone held a gun to my head and said fix it, I'd be inclined
to do so by having temp-namespace creation insert a "pin" dependency into
pg_depend.  Arguably, the only reason we don't create all the temp
namespaces during bootstrap is because we aren't sure how many we'd need
--- but if we did do that, they'd presumably end up pinned.
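
For illustration, the shape of such a pin row (real pin entries are
created only by initdb, and the schema name here is a stand-in):

    INSERT INTO pg_depend (classid, objid, objsubid,
                           refclassid, refobjid, refobjsubid, deptype)
    VALUES (0, 0, 0,
            'pg_namespace'::regclass, 'pg_temp_3'::regnamespace, 0, 'p');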

> I wonder if other "fun" things could happen if you go rename the namespace,
> haven't tried that yet...

I put that one on exactly a par with renaming all the "=" operators.
Yes, the system will let a superuser do it, and no, it's not a good idea.

regards, tom lane



Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Tom Lane
Magnus Hagander  writes:
> I think we're better off just preventing the explicit drop of a temp schema.
> See attached?

I think this is a poor implementation of a bad idea.  Would you like a
list of the ways a superuser can break the system?  We could start with
"DELETE FROM pg_proc;".

regards, tom lane



Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Magnus Hagander
On Sat, Feb 23, 2019 at 5:01 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > I think we're better off just preventing the explicit drop of a temp
> schema.
> > See attached?
>
> I think this is a poor implementation of a bad idea.  Would you like a
> list of the ways a superuser can break the system?  We could start with
> "DELETE FROM pg_proc;".
>

Yeah, true.

That said, I'm not sure there is much point in fixing the original problem
either. It comes down to a "don't do that", as the system just keeps
crashing even if we fix that one. Trying to fix every possible place that
breaks if there are tables with invalid data in pg_class like that is not
likely to work either.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [PATCH v20] GSSAPI encryption support

2019-02-23 Thread Stephen Frost
Greetings,

* Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote:
> I don't know much about GSSAPI, but from what I can tell, this seems an
> attractive feature, and the implementation is compact enough.  I have
> done a bit of work on the internal SSL API refactoring, so I have some
> thoughts on this patch.
> 
> Looking at the file structure, we would have
> 
> be-secure.c
> be-secure-openssl.c
> be-secure-[othersslimpl].c
> be-secure-gssapi.c
> be-secure-common.c
> 
> This implies a code structure that isn't really there.
> be-secure-common.c is used by SSL implementations but not by the GSSAPI
> implementation.

be-secure-common.c seems very clearly mis-named, I mean, look at the
comment at the top of the file:

* common implementation-independent SSL support code

Seems we've been conflating SSL and "Secure" and we should probably fix
that.

What I also really don't like is that "secure_read()" is really only
*maybe* secure.  be-secure.c is really just an IO-abstraction layer that
lets other things hook in and implement the read/write themselves.

> Perhaps we should rename be-secure-openssl.c to be-ssl-openssl.c and
> be-secure-common.c to be-ssl-common.c.

This might be overly pedantic, but what we do in other parts of the tree
is use these things called directories..

I do think we need to rename be-secure-common since it's just flat out
wrong as-is, but that's independent of the GSSAPI encryption work,
really.

> Or maybe we avoid that, and you rename be-secure-gssapi.c to just
> be-gssapi.c and also combine that with the contents of be-gssapi-common.c.

I don't know why we would need to, or want to, combine
be-secure-gssapi.c and be-gssapi-common.c, they do have different roles
in that be-gssapi-common.c is used even if you aren't doing encryption,
while be-secure-gssapi.c is specifically for handling the encryption
side of things.

Then again, be-gssapi-common.c does currently only have the one function
in it, and it's not like there's an option to compile for *just* GSS
authentication (and not encryption), or at least, I don't think that
would be a useful option to have..  Robbie?

> (And similarly in libpq.)

I do agree that we should try to keep the frontend and backend code
structures similar in this area, that seems to make sense.

> About pg_hba.conf: The "hostgss" keyword seems a bit confusing.  It only
> applies to encrypted gss-using connections, not all of them.  Maybe
> "hostgssenc" or "hostgsswrap"?

Not quite sure what you mean here, but 'hostgss' seems to be quite well
in-line with what we do for SSL...  as in, we have 'hostssl', we don't
say 'hostsslenc'.  I feel like I'm just not understanding what you mean
by "not all of them".

> I don't see any tests in the patch.  We have a Kerberos test suite at
> src/test/kerberos/ and an SSL test suite at src/test/ssl/.  You can get
> some ideas there.

Yeah, I was going to comment on that as well.  We definitely should
implement tests around this.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: proposal: variadic argument support for least, greatest function

2019-02-23 Thread Chapman Flack
On 02/23/19 01:22, Pavel Stehule wrote:
> so 23. 2. 2019 v 4:50 odesílatel Chapman Flack 
> napsal:

>> In transformMinMaxExpr:
>> The assignment of funcname doesn't look right.

... it still doesn't.

>> Two new errors are elogs. ...
>> I am not sure if there is a way for user input to trigger the first one.
>> Perhaps it can stay an elog if not. In any case, s/to
>> determinate/determine/.
>>
> 
> It is +/- internal error and usually should not be touched - so there I
> prefer elog. I fix message

... still needs s/to //.

Can the sentence added to the doc be changed to "These functions support
passing parameters as an array after VARIADIC keyword."?
That is, s/supports/support/ and s/a/an/. I've done that after a couple of
recent patches, but it seems to be on springs.

> What about using macros?

Meh ... the macros look nicer, but still rely just as much on the compiler
to hoist the tests out of the loop. I suppose it probably can.

I wouldn't have thought it necessary to change the switch statements in
FigureColnameInternal or get_rule_expr ... cases with multiple labels are
seen often enough, and it's probably hard to do better.


Taking a step back ...


All of this still begs Tom's question about whether array_greatest/
array_least would be preferable.

I understand Pavel's point:

>> I am not against these functions, but they don't resolve the
>> confusion of some users: LEAST and GREATEST look like variadic
>> functions, but they don't allow a VARIADIC parameter.

but, to advocate the other side for a moment, perhaps that could be viewed
as more of a documentation problem.

At bottom, the confusion potential that concerns Pavel exists because
these things look like variadic functions, and the manual calls them
"the GREATEST and LEAST functions", but they won't accept a VARIADIC array
parameter as a genuine variadic function would.

But that's hardly the only way these differ from normal functions.
You can't find them in pg_proc or call them through fmgr. In fact, they
share these non-function properties with all of their siblings in the
functions-conditional section of the manual. (Arguably, if GREATEST and
LEAST end up taking a VARIADIC array parameter, COALESCE will be wanting
one next. And indeed, there doesn't seem to be an existing
array_firstnonnull function for that job, either.) And these "functions"
have special, type-unifying argument rules (already documented in
typeconv-union-case), late argument evaluation in the case of COALESCE,
and so on.

In other words, any effort to make these animals act more like functions
will be necessarily incomplete, and differences will remain.

Maybe the confusion-potential could be fixed by having one more sentence
at the top of the whole functions-conditional section, saying "Some of
these resemble functions, but are better viewed as function-like
expressions, with special rules for their argument lists." Then the section
for GREATEST/LEAST could have a "see also" for array_greatest/array_least,
the COALESCE section a "see also" for array_firstnonnull, and those simple
functions could be written.

The special rules for these constructs don't really buy anything for the
array-argument flavors: you can't pass in an array with heterogeneous
types or late-evaluated values anyway, so ordinary functions would suffice.

From the outside, it would look tidy and parsimonious to just have
GREATEST(VARIADIC...)/LEAST(VARIADIC...)/COALESCE(VARIADIC...) and have
just one set of "function" names to remember, rather than separate
array_* variants. But the cost of that tidiness from the outside is an
implementation that has to frob half a dozen source files on the inside.

The approach with more parsimony indoors would be to just create a few
new ordinary functions, and add to the doc explaining why they are
different, and that would be a patch only needing to touch a couple files.
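
For concreteness, such an ordinary function could be as simple as this
sketch (name and definition are illustrative, not from any patch):

    CREATE FUNCTION array_greatest(anyarray) RETURNS anyelement
    LANGUAGE sql IMMUTABLE STRICT
    AS 'SELECT max(x) FROM unnest($1) AS t(x)';

    SELECT array_greatest(ARRAY[3, 1, 2]);  -- returns 3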

I have a feeling that the final decision on whether the indoor or outdoor
parsimony matters more will be made by Tom, or another committer; I find
myself seeing both sides, and not feeling insider-y enough myself to
pick one.

-Chap



Re: proposal: variadic argument support for least, greatest function

2019-02-23 Thread Pavel Stehule
so 23. 2. 2019 v 18:28 odesílatel Chapman Flack 
napsal:

> On 02/23/19 01:22, Pavel Stehule wrote:
> > so 23. 2. 2019 v 4:50 odesílatel Chapman Flack 
> > napsal:
>
> >> In transformMinMaxExpr:
> >> The assignment of funcname doesn't look right.
>
> ... it still doesn't.
>

fixed


> >> Two new errors are elogs. ...
> >> I am not sure if there is a way for user input to trigger the first one.
> >> Perhaps it can stay an elog if not. In any case, s/to
> >> determinate/determine/.
> >>
> >
> > It is +/- internal error and usually should not be touched - so there I
> > prefer elog. I fix message
>
> ... still needs s/to //.
>

fixed


> Can the sentence added to the doc be changed to "These functions support
> passing parameters as an array after VARIADIC keyword."?
> That is, s/supports/support/ and s/a/an/. I've done that after a couple of
> recent patches, but it seems to be on springs.
>
> > What about using macros?
>
> Meh ... the macros look nicer, but still rely just as much on the compiler
> to hoist the tests out of the loop. I suppose it probably can.
>

reverted, I use a variables

>
> I wouldn't have thought it necessary to change the switch statements in
> FigureColnameInternal or get_rule_expr ... cases with multiple labels are
> seen often enough, and it's probably hard to do better.
>
>
> Taking a step back ...
>
>
> All of this still begs Tom's question about whether array_greatest/
> array_least would be preferable.
>
> I understand Pavel's point:
>
> >> I am not against these functions, but they don't resolve the
> >> confusion of some users: LEAST and GREATEST look like variadic
> >> functions, but they don't allow a VARIADIC parameter.
>
> but, to advocate the other side for a moment, perhaps that could be viewed
> as more of a documentation problem.
>
> At bottom, the confusion potential that concerns Pavel exists because
> these things look like variadic functions, and the manual calls them
> "the GREATEST and LEAST functions", but they won't accept a VARIADIC array
> parameter as a genuine variadic function would.
>
> But that's hardly the only way these differ from normal functions.
> You can't find them in pg_proc or call them through fmgr. In fact, they
> share these non-function properties with all of their siblings in the
> functions-conditional section of the manual. (Arguably, if GREATEST and
> LEAST end up taking a VARIADIC array parameter, COALESCE will be wanting
> one next. And indeed, there doesn't seem to be an existing
> array_firstnonnull function for that job, either.) And these "functions"
> have special, type-unifying argument rules (already documented in
> typeconv-union-case), late argument evaluation in the case of COALESCE,
> and so on.
>
> In other words, any effort to make these animals act more like functions
> will be necessarily incomplete, and differences will remain.
>
> Maybe the confusion-potential could be fixed by having one more sentence
> at the top of the whole functions-conditional section, saying "Some of
> these resemble functions, but are better viewed as function-like
> expressions, with special rules for their argument lists." Then the section
> for GREATEST/LEAST could have a "see also" for array_greatest/array_least,
> the COALESCE section a "see also" for array_firstnonnull, and those simple
> functions could be written.
>
> The special rules for these constructs don't really buy anything for the
> array-argument flavors: you can't pass in an array with heterogeneous
> types or late-evaluated values anyway, so ordinary functions would suffice.
>
> From the outside, it would look tidy and parsimonious to just have
> GREATEST(VARIADIC...)/LEAST(VARIADIC...)/COALESCE(VARIADIC...) and have
> just one set of "function" names to remember, rather than separate
> array_* variants. But the cost of that tidiness from the outside is an
> implementation that has to frob half a dozen source files on the inside.
>

This is the goal of this small patch - to make life a little bit easier
and a little bit more consistent.

It is a very small patch without risks or slowdowns, so I expect its
cost to be small. It is just a small step forward for users.

I wrote the "greatest" and "least" support before I wrote variadic
function support. If I had written them in the other order, the greatest
and least functions would probably have been pg_proc based. On the other
hand, the limitations of pg_proc functions were the motivation for
variadic function support.

In my experience, I am not sure that better documentation makes things
better. For some kinds of users, some smart magic is important. They
don't want to see implementation details.

With my car, I lost hope of completely understanding the engine. I am
not sure users should have to know that we have three kinds of
functions: a) pg_proc-based functions, b) parser-based constructs (that
look like functions), c) parser-based constructs (that look like
constants).

I know it is important to understand things, but nobody can understand
everything. And then it is nice when things just work.

Re: proposal: variadic argument support for least, greatest function

2019-02-23 Thread Chapman Flack
On 02/23/19 13:35, Pavel Stehule wrote:
> please, see, attached patch

It is getting close, for my purposes. There is still this:

>> Can the sentence added to the doc be changed to "These functions support
>> passing parameters as an array after VARIADIC
>> keyword."? That is, s/supports/support/ and s/a/an/. I've done that
>> after a couple of recent patches, but it seems to be on springs.



> I know it is important to understand things, but nobody can understand
> everything. And then it is nice when things just work.
>>
>> The approach with more parsimony indoors would be to just create a few
>> new ordinary functions, and add to the doc explaining why they are
>> different, and that would be a patch only needing to touch a couple files.
>>
>> I have a feeling that the final decision on whether the indoor or outdoor
>> parsimony matters more will be made by Tom, or another committer; I find
>> myself seeing both sides, and not feeling insider-y enough myself to
>> pick one.
> 
> sure, every time it is the committer's decision.


A part of me waits to see if another voice chimes in on the high-level
want/don't-want question ... I think enough of the patch is ready for
that question to be ripe, and if the answer is going to be "don't want"
it would be ideal to know that before additional iterations of work on it.

-Chap



Re: [patch] Add schema total size to psql \dn+

2019-02-23 Thread Julien Rouhaud
On Fri, Feb 22, 2019 at 7:22 PM Tom Lane  wrote:
>
> Is there any permissions issue involved here?  I'd be a bit worried
> about whether \dn+ could fail, or deliver misleading answers, when
> run by a user without permissions on (some) tables.  Also, even if
> we allow people to get size info on tables they can't read today,
> having this feature would be a roadblock to tightening that in
> the future.

Gilles' patch is using pg_total_relation_size(), so there's no
permission check at all.  Also AFAICS this function even allows any
user to get the size of any other user backend's temporary table.



Re: FETCH FIRST clause PERCENT option

2019-02-23 Thread Tomas Vondra



On 2/23/19 8:53 AM, Surafel Temesgen wrote:
> 
> 
> On Sun, Feb 10, 2019 at 2:22 AM Tomas Vondra
> mailto:tomas.von...@2ndquadrant.com>> wrote:
>  
> 
> 
> I'm not sure I understand - are you saying every time the user does a
> FETCH, we have to run the outer plan from scratch? I don't see why would
> that be necessary? And if it is, how come there's no noticeable
> performance difference?
> 
> Can you share a patch implementing the incremental approach, and a query
> demonstrating the issue?
> 
> 
> I didn't implement it, but it's obvious that it wouldn't work the same
> way as the previous approach.
> 

Sure, but that's hardly a sufficient argument for the current approach.

> We need a different implementation. My plan was to use a tuplestore per
> call and clear it after returning a tuple, but I see that the plan will
> not go far, mainly because the last returned slot is not the last slot
> we get from outerPlan execution.
> 

I'm sorry, I still don't understand what the supposed problem is. I
don't think it's all that different from what nodeMaterial.c does, for
example.

As I explained before, having to execute the outer plan till completion
before returning any tuples is an issue. So either it needs fixing or an
explanation why it's not an issue.
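
To illustrate why that matters, consider a cursor (syntax as proposed
by the patch; big_table is illustrative):

    BEGIN;
    DECLARE c CURSOR FOR
      SELECT * FROM big_table FETCH FIRST 10 PERCENT ROWS ONLY;
    FETCH 1 FROM c;  -- ideally cheap, but with a materialize-everything
                     -- approach this already runs the outer plan fully
    CLOSE c;
    COMMIT;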

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: WIP: BRIN multi-range indexes

2019-02-23 Thread Tomas Vondra
Apparently cputube did not pick up the last version of the patches I've
submitted in December (and I don't see the message in the thread in the
archive either), so it's listed as broken.

So here we go again, hopefully this time everything will go through ...

regards

On 12/28/18 12:45 AM, Tomas Vondra wrote:
> Hi all,
> 
> Attached is an updated/rebased version of the patch series. There are no
> changes to behavior, but let me briefly summarize the current state:
> 
> 0001 and 0002
> -
> 
> The first two parts are "just" refactoring the existing code to pass all
> scankeys to the opclass at once - this is needed by the new minmax-like
> opclass, but per discussion with Alvaro it seems worthwhile even
> independently. I tend to agree with that. Similarly for the second part,
> which moves all IS NULL checks entirely to bringetbitmap().
> 
> 0003 bloom opclass
> --
> 
> The first new opclass is based on bloom filters. For each page range
> (i.e. 1MB by default) a small bloom filter is built (with hash values of
> the original values as inputs), and then used to evaluate equality
> queries. A small optimization is that initially the actual (hash) values
> are kept until reaching the bloom filter size. This improves behavior in
> low-cardinality data sets.
> 
> Picking the bloom filter parameters is the tricky part - we don't have a
> reliable source of such information (namely number of distinct values
> per range), and e.g. the false positive rate actually has to be picked
> by the user because it's a compromise between index size and accuracy.
> Essentially, false positive rate is the fraction of the table that has
> to be scanned for a random value (on average). But it also makes the
> index larger, because the per-range bloom filters will be larger.
> 
> Another reason why this needs to be defined by the user is that the
> space for index tuple is limited by one page (8kB by default), so we
> can't allow the bloom filter to be larger (we have to assume it's
> non-compressible, because in the optimal fill it's 50% 0s and 1s). But
> the BRIN index may be multi-column, and the limit applies to the whole
> tuple. And we don't know what the opclasses or parameters of other
> columns are.
> 
> So the patch simply adds two reloptions
> 
> a) n_distinct_per_range - number of distinct values per range
> b) false_positive_rate - false positive rate of the filter
> 
> There are some simple heuristics to ensure the values are reasonable
> (e.g. upper limit for number of distinct values, etc.) and perhaps we
> might consider stats from the underlying table (when not empty), but the
> patch does not do that.
> 
> 
> 0004 multi-minmax opclass
> -
> 
> The second opclass addresses a common issue for minmax indexes, where
> the table is initially nicely correlated with the index, and it works
> fine. But then deletes/updates route data into other parts of the table
> making the ranges very wide and rendering the BRIN index inefficient.
> 
> One way to improve this would be to consider the index(es) while
> routing the new tuple, i.e. looking not only for a page with enough free
> space, but for pages in already-matching ranges (or close to them).
> 
> Partitioning is a possible approach to segregate the data, but it has
> certainly much higher overhead, both in terms of maintenance and
> planning (particularly with a ~1:1 ratio of ranges to partitions).
> 
> So the new multi-minmax opclass takes a different approach, replacing
> the one minmax range with multiple ranges (64 boundary values or 32
> ranges by default). Initially individual values are stored, and after
> reaching the maximum number of values the values are merged into ranges
> by distance. This allows handling outliers very efficiently, because
> they will not be merged with the "main" range for as long as possible.
> 
> Similarly to the bloom opclass, the main challenge here is deciding the
> parameter - in this case, it's "number of values per range". Again, it's
> a compromise vs. index size and efficiency. The default (64 values) is
> fairly reasonable, but ultimately it's up to the user - there is a new
> reloption "values_per_range".
> 
> 
> 
> regards
> 

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-Pass-all-keys-to-BRIN-consistent-function-a-20190223.patch.gz
Description: application/gzip


0002-Move-IS-NOT-NULL-checks-to-bringetbitmap-20190223.patch.gz
Description: application/gzip


0003-BRIN-bloom-indexes-20190223.patch.gz
Description: application/gzip


0004-BRIN-multi-range-minmax-indexes-20190223.patch.gz
Description: application/gzip


Re: explain plans with information about (modified) gucs

2019-02-23 Thread Tomas Vondra
Hi,

attached is an updated patch, fixing and slightly tweaking the docs.


Barring objections, I'll get this committed later next week.
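
For the archives, usage looks roughly like this (plan output elided;
the Settings line is the new part):

    SET enable_seqscan = off;
    EXPLAIN (SETTINGS) SELECT * FROM t WHERE a = 1;
    -- ...
    -- Settings: enable_seqscan = 'off'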

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From ba0dc2578351e9c3c29cbd00860b31e3fd0acd6a Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Sat, 23 Feb 2019 23:38:30 +0100
Subject: [PATCH] Allow including configuration parameters in explain output

When analyzing execution plans, it's useful to know which configuration
parameters affecting the planning were modified, and how.  This commit
adds SETTINGS option for EXPLAIN command, which includes GUC parameters
with non-default value and marked with GUC_EXPLAIN flag. auto_explain is
extended with log_settings option, doing the same thing.
---
 contrib/auto_explain/auto_explain.c |  13 ++
 doc/src/sgml/auto-explain.sgml  |  18 +++
 doc/src/sgml/ref/explain.sgml   |  12 ++
 src/backend/commands/explain.c  |  71 +
 src/backend/utils/misc/guc.c| 229 +---
 src/include/commands/explain.h  |   1 +
 src/include/utils/guc.h |   2 +
 src/include/utils/guc_tables.h  |   1 +
 8 files changed, 295 insertions(+), 52 deletions(-)

diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
index 7b22927674..edc50f9368 100644
--- a/contrib/auto_explain/auto_explain.c
+++ b/contrib/auto_explain/auto_explain.c
@@ -28,6 +28,7 @@ static bool auto_explain_log_verbose = false;
 static bool auto_explain_log_buffers = false;
 static bool auto_explain_log_triggers = false;
 static bool auto_explain_log_timing = true;
+static bool auto_explain_log_settings = false;
 static int	auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
 static int	auto_explain_log_level = LOG;
 static bool auto_explain_log_nested_statements = false;
@@ -112,6 +113,17 @@ _PG_init(void)
 			 NULL,
 			 NULL);
 
+	DefineCustomBoolVariable("auto_explain.log_settings",
+			 "Log modified configuration parameters affecting query planning.",
+			 NULL,
+			 &auto_explain_log_settings,
+			 false,
+			 PGC_SUSET,
+			 0,
+			 NULL,
+			 NULL,
+			 NULL);
+
 	DefineCustomBoolVariable("auto_explain.log_verbose",
 			 "Use EXPLAIN VERBOSE for plan logging.",
 			 NULL,
@@ -356,6 +368,7 @@ explain_ExecutorEnd(QueryDesc *queryDesc)
 			es->timing = (es->analyze && auto_explain_log_timing);
 			es->summary = es->analyze;
 			es->format = auto_explain_log_format;
+			es->settings = auto_explain_log_settings;
 
 			ExplainBeginOutput(es);
 			ExplainQueryText(es, queryDesc);
diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml
index 120b168d45..296ae2de80 100644
--- a/doc/src/sgml/auto-explain.sgml
+++ b/doc/src/sgml/auto-explain.sgml
@@ -169,6 +169,24 @@ LOAD 'auto_explain';
 

 
+   
+
+ auto_explain.log_settings (boolean)
+ 
+  auto_explain.log_settings configuration parameter
+ 
+
+
+ 
+  auto_explain.log_settings controls whether information
+  about modified configuration options affecting query planning is logged
+  with the execution plan.  Only options affecting query planning with value
+  different from the built-in default value are considered.  This parameter is
+  off by default.  Only superusers can change this setting.
+ 
+
+   
+

 
  auto_explain.log_format (enum)
diff --git a/doc/src/sgml/ref/explain.sgml b/doc/src/sgml/ref/explain.sgml
index 8dc0d7038a..385d10411f 100644
--- a/doc/src/sgml/ref/explain.sgml
+++ b/doc/src/sgml/ref/explain.sgml
@@ -39,6 +39,7 @@ EXPLAIN [ ANALYZE ] [ VERBOSE ] statementboolean ]
 VERBOSE [ boolean ]
 COSTS [ boolean ]
+SETTINGS [ boolean ]
 BUFFERS [ boolean ]
 TIMING [ boolean ]
 SUMMARY [ boolean ]
@@ -152,6 +153,17 @@ ROLLBACK;
 

 
+   
+SETTINGS
+
+ 
+  Include information on configuration parameters.  Specifically, include
+  options affecting query planning with value different from the built-in
+  default value.  This parameter defaults to FALSE.
+ 
+
+   
+

 BUFFERS
 
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 1831ea81cf..8e48b94d4a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -29,6 +29,7 @@
 #include "storage/bufmgr.h"
 #include "tcop/tcopprot.h"
 #include "utils/builtins.h"
+#include "utils/guc_tables.h"
 #include "utils/json.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
@@ -162,6 +163,8 @@ ExplainQuery(ParseState *pstate, ExplainStmt *stmt, const char *queryString,
 			es->costs = defGetBoolean(opt);
 		else if (strcmp(opt->defname, "buffers") == 0)
 			es->buffers = defGetBoolean(opt);
+		else if (strcmp(opt->defname, "settings") == 0)
+			es->settings = defGetBoolean(opt);
 		else if (strcmp(opt->defname, "timing") == 0)
 		{
 

Re: Autovaccuum vs temp tables crash

2019-02-23 Thread Michael Paquier
On Sat, Feb 23, 2019 at 04:29:24PM +0100, Magnus Hagander wrote:
> On Sat, Feb 23, 2019 at 4:18 PM Michael Paquier  wrote:
>> Perhaps I am missing something, but it would be just more simple to
>> now allow users to restrict that?
>>
> 
> I can't parse what you are saying here. Now allow users to restrict
> what?

Second try after some sleep:
"Perhaps I am missing something, but it would be just more simple to
not allow users to drop it?"
--
Michael


signature.asc
Description: PGP signature


Re: some ri_triggers.c cleanup

2019-02-23 Thread Corey Huinker
On Fri, Feb 22, 2019 at 1:12 PM Corey Huinker 
wrote:

> On Fri, Feb 22, 2019 at 11:05 AM Peter Eisentraut <
> peter.eisentr...@2ndquadrant.com> wrote:
>
>> ri_triggers.c is endlessly long and repetitive.  I want to clean it up a
>> bit (more).
>>
>
> Having just been down this road, I agree that a lot of cleanup is needed
> and possible.
>
>
>> I looked into all these switch cases for the unimplemented MATCH PARTIAL
>> option.  I toyed around with how a MATCH PARTIAL implementation would
>> actually look like, and it likely wouldn't use the existing code
>> structure anyway, so let's just simplify this for now.
>>
>
> +1
>
>
>
>> Attached are some patches.
>
>
> I intend to look this over in much greater detail, but I did skim the code
> and it seems like you left the SET DEFAULT and SET NULL paths separate. In
> my attempt at statement level triggers I realized that they only differed
> by the one literal value, and parameterized the function.
>
>

I've looked it over more closely now and I think that it's a nice
improvement.

As I suspected, the code for SET NULL and SET DEFAULT is highly similar
(see .diff), the major difference being two constants, the order of some
variable declarations, and the recheck in the set-default case.

The changes were so simple that I felt remiss not adding the patch for you
(see .patch).

Passes make check.
diff --git a/set_null.c b/set_default.c
index bc323ec..b2dd91d 100644
--- a/set_null.c
+++ b/set_default.c
@@ -1,10 +1,10 @@
 /*
- * ri_setnull -
+ * ri_setdefault -
  *
- * Common code for ON DELETE SET NULL and ON UPDATE SET NULL
+ * Common code for ON DELETE SET DEFAULT and ON UPDATE SET DEFAULT
  */
 static Datum
-ri_setnull(TriggerData *trigdata)
+ri_setdefault(TriggerData *trigdata)
 {
 const RI_ConstraintInfo *riinfo;
 Relationfk_rel;
@@ -30,10 +30,10 @@ ri_setnull(TriggerData *trigdata)
 elog(ERROR, "SPI_connect failed");
 
 /*
- * Fetch or prepare a saved plan for the set null operation (it's
- * the same query for delete and update cases)
+ * Fetch or prepare a saved plan for the set default operation
+ * (it's the same query for delete and update cases)
  */
-ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETNULL_DOUPDATE);
+ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETDEFAULT_DOUPDATE);
 
 if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL)
 {
@@ -44,12 +44,12 @@ ri_setnull(TriggerData *trigdata)
 charparamname[16];
 const char *querysep;
 const char *qualsep;
-const char *fk_only;
 Oid queryoids[RI_MAX_NUMKEYS];
+const char *fk_only;
 
 /* --
  * The query string built is
- *  UPDATE [ONLY]  SET fkatt1 = NULL [, ...]
+ *  UPDATE [ONLY]  SET fkatt1 = DEFAULT [, ...]
  *  WHERE $1 = fkatt1 [AND ...]
  * The type id's for the $ parameters are those of the
  * corresponding PK attributes.
@@ -57,9 +57,9 @@ ri_setnull(TriggerData *trigdata)
  */
 initStringInfo(&querybuf);
 initStringInfo(&qualbuf);
+quoteRelationName(fkrelname, fk_rel);
 fk_only = fk_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ?
 "" : "ONLY ";
-quoteRelationName(fkrelname, fk_rel);
 appendStringInfo(&querybuf, "UPDATE %s%s SET",
  fk_only, fkrelname);
 querysep = "";
@@ -72,9 +72,10 @@ ri_setnull(TriggerData *trigdata)
 quoteOneName(attname,
  RIAttName(fk_rel, riinfo->fk_attnums[i]));
 appendStringInfo(&querybuf,
- "%s %s = NULL",
+ "%s %s = DEFAULT",
  querysep, attname);
 sprintf(paramname, "$%d", i + 1);
+sprintf(paramname, "$%d", i + 1);
 ri_GenerateQual(&qualbuf, qualsep,
 paramname, pk_type,
 riinfo->pf_eq_oprs[i],
@@ -104,5 +105,20 @@ ri_setnull(TriggerData *trigdata)
 
 table_close(fk_rel, RowExclusiveLock);
 
-return PointerGetDatum(NULL);
+/*
+ * If we just deleted or updated the PK row whose key was equal to
+ * the FK columns' default values, and a referencing row exists in
+ * the FK table, we would have updated that row to the same values
+ * it already had --- and RI_FKey_fk_upd_check_required would
+ * hence believe no check is necessary.  So we need to do another
+ * lookup now and in case a reference still exists, abort the
+ * operation.  That is already implemented in the NO ACTION
+ * trigger, so just run it.  (This recheck is only needed in the
+ * SET DEFAULT case, since CASCADE would remove such rows in case
+ * of a DELETE operation or would change the FK key values in case
+ * of an UPDATE, while SET NULL is certain to result in rows that
+ * satisfy the FK constraint.)
+ */
+return ri_restrict(trigdata, true);

Re: \describe*

2019-02-23 Thread Corey Huinker
>
> Given that this patch has been added to the last commitfest for v12, I
> think we should mark it as targeting 13, so it can be skipped over by
> people looking to get things into v12.  Even leaving fairness aside, I
> don't think it's likely to be ready quickly enough...
>

Obviously this patch is nowhere near the importance of most patches slated
for v12, but I would hope it can be considered, time permitting.

The size of the patch may look large (1036 lines), but 650+ of that is pure
documentation changes, ~50 lines are added autocomplete strings, ~140 lines
are added TailMatches calls (one per new autocomplete string), and what
remains is strncmp() calls to match those same strings, so it's pretty mild
in terms of impact.


Re: \describe*

2019-02-23 Thread Andres Freund
Hi,

On 2019-02-23 19:14:27 -0500, Corey Huinker wrote:
> >
> > Given that this patch has been added to the last commitfest for v12, I
> > think we should mark it as targeting 13, so it can be skipped over by
> > people looking to get things into v12.  Even leaving fairness aside, I
> > don't think it's likely to be ready quickly enough...
> >
> 
> Obviously this patch is nowhere near the importance of most patches slated
> for v12, but I would hope it can be considered, time permitting.
> 
> The size of the patch may look large (1036 lines), but 650+ of that is pure
> documentation changes, ~50 lines of added autocomplete strings, ~140 lines
> are added TailMatches calls (one per new autocomplete string), and what
> remains is strncmp() calls to match those same strings, so it's pretty mild
> in terms of impact.

Sure, but it was late, and we have far more patches than we can deal
with, many of them much older than this.

Greetings,

Andres Freund



POC: converting Lists into arrays

2019-02-23 Thread Tom Lane
For quite some years now there's been dissatisfaction with our List
data structure implementation.  Because it separately palloc's each
list cell, it chews up lots of memory, and it's none too cache-friendly
because the cells aren't necessarily adjacent.  Moreover, our typical
usage is to just build a list by repeated lappend's and never modify it,
so that the flexibility of having separately insertable/removable list
cells is usually wasted.

Every time this has come up, I've opined that the right fix is to jack
up the List API and drive a new implementation underneath, as we did
once before (cf commit d0b4399d81).  I thought maybe it was about time
to provide some evidence for that position, so attached is a POC patch
that changes Lists into expansible arrays, while preserving most of
their existing API.

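A minimal sketch of the layout being described here (field names are
illustrative, not necessarily what the patch settles on):

typedef struct List
{
	NodeTag		type;			/* T_List, T_IntList, or T_OidList */
	int			length;			/* number of cells currently in use */
	int			max_length;		/* allocated size of the cell array */
	ListCell   *elements;		/* cell array, re-palloc'd as it grows */
} List;
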
The big-picture performance change is that this makes list_nth()
a cheap O(1) operation, while lappend() is still pretty cheap;
on the downside, lcons() becomes O(N), as does insertion or deletion
in the middle of a list.  But we don't use lcons() very much
(and maybe a lot of the existing usages aren't really necessary?),
while insertion/deletion in long lists is a vanishingly infrequent
operation.  Meanwhile, making list_nth() cheap is a *huge* win.

The most critical thing that we lose by doing this is that when a
List is modified, all of its cells may need to move, which breaks
a lot of code that assumes it can insert or delete a cell while
hanging onto a pointer to a nearby cell.  In almost every case,
this takes the form of doing list insertions or deletions inside
a foreach() loop, and the pointer that's invalidated is the loop's
current-cell pointer.  Fortunately, with list_nth() now being so cheap,
we can replace these uses of foreach() with loops using an integer
index variable and fetching the next list element directly with
list_nth().  Most of these places were loops around list_delete_cell
calls, which I replaced with a new function list_delete_nth_cell
to go along with the emphasis on the integer index.

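In sketch form, the shift looks roughly like this (drop_it is a stand-in
for whatever per-element test the call site applies):

	/* before: pointer bookkeeping so deletion doesn't strand the loop */
	prev = NULL;
	for (cell = list_head(list); cell != NULL; cell = next)
	{
		next = lnext(cell);
		if (drop_it(lfirst(cell)))
			list = list_delete_cell(list, cell, prev);
		else
			prev = cell;
	}

	/* after: plain integer indexing, now that list_nth is cheap */
	for (i = 0; i < list_length(list); i++)
	{
		if (drop_it(list_nth(list, i)))
		{
			list = list_delete_nth_cell(list, i);
			i--;				/* compensate for the left shift */
		}
	}
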
I don't claim to have found every case where that could happen,
although I added debug support in list.c to force list contents
to move on every list modification, and this patch does pass
check-world with that support turned on.  I fear that some such
bugs remain, though.

There is one big way in which I failed to preserve the old API
syntactically: lnext() now requires a pointer to the List as
well as the current ListCell, so that it can figure out where
the end of the cell array is.  That requires touching something
like 150 places that otherwise wouldn't have had to be touched,
which is annoying, even though most of those changes are trivial.

I thought about avoiding that by requiring Lists to keep a "sentinel"
value in the cell after the end of the active array, so that lnext()
could look for the sentinel to detect list end.  However, that idea
doesn't really work, because if the list array has been moved, the
spot where the sentinel had been could have been reallocated and
filled with something else.  So this'd offer no defense against the
possibility of a stale ListCell pointer, which is something that
we absolutely need defenses for.  As the patch stands we can have
quite a strong defense, because we can check whether the presented
ListCell pointer actually points into the list's current data array.

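For concreteness, the checked lnext can look roughly like this (a sketch
against the array layout above, not the exact patch code):

static inline ListCell *
lnext(const List *l, const ListCell *c)
{
	/* defend against stale pointers: c must be inside l's current array */
	Assert(c >= &l->elements[0] && c < &l->elements[l->length]);
	c++;
	if (c < &l->elements[l->length])
		return (ListCell *) c;
	return NULL;				/* off the end: caller sees list end */
}
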
Another annoying consequence of lnext() needing a List pointer is that
the List arguments of foreach() and related macros are now evaluated
each time through the loop.  I could only find half a dozen places
where that was actually unsafe (all fixed in the draft patch), but
it's still bothersome.  I experimented with ways to avoid that, but
the only way I could get it to work was to define foreach like this:

#define foreach(cell, l) \
	for (const List *cell##__foreach = foreach_setup(l, &cell); \
		 cell != NULL; \
		 cell = lnext(cell##__foreach, cell))

static inline const List *
foreach_setup(const List *l, ListCell **cell)
{
*cell = list_head(l);
return l;
}

That works, but there are two problems.  The lesser one is that a
not-very-bright compiler might think that the "cell" variable has to
be forced into memory, because its address is taken.  The bigger one is
that this coding forces the "cell" variable to be exactly "ListCell *";
you can't add const or volatile qualifiers to it without getting
compiler warnings.  There are actually more places that'd be affected
by that than by the need to avoid multiple evaluations.  I don't think
the const markings would be a big deal to lose, and the two places in
do_autovacuum that need "volatile" (because of a nearby PG_TRY) could
be rewritten to not use foreach.  So I'm tempted to do that, but it's
not very pretty.  Maybe somebody can think of a better solution?

There's a lot of potential follow-on work that I've not touched yet:

1. This still involves at least two palloc's for every nonempty List,
because I kept the header and the cell array as separate allocations.

Re: [HACKERS] Removing [Merge]Append nodes which contain a single subpath

2019-02-23 Thread David Rowley
On Tue, 5 Feb 2019 at 12:05, David Rowley  wrote:
> Rebased again.

And again, due to a new create_append_path() call added in ab5fcf2b04f9.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


v13-0001-Forgo-generating-single-subpath-Append-and-Merge.patch
Description: Binary data


Re: Ordered Partitioned Table Scans

2019-02-23 Thread David Rowley
On Thu, 31 Jan 2019 at 16:29, David Rowley  wrote:
> I've attached a rebased patch which fixes up the recent conflicts. No
> other changes were made.

Rebased version due to a new call to create_append_path() added in
ab5fcf2b0. No other changes made.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


v8-0001-Allow-Append-to-be-used-in-place-of-MergeAppend-f.patch
Description: Binary data