Re: [HACKERS] SEGFAULT in CREATE EXTENSION related pg_init_privs

2016-04-15 Thread Pavel Stehule
2016-04-16 4:12 GMT+02:00 Stephen Frost :

> Pavel,
>
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > 2016-04-14 14:26 GMT+02:00 Stephen Frost :
> > >
> > > * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > > > I am trying to prepare orafce for PostgreSQL 9.6.
> > > >
> > > > I can successfully compile this extension, but the CREATE EXTENSION
> > > > statement fails with a segfault
> > >
> > > Just the latest off of https://github.com/orafce/orafce ?
> >
> > yes. When I commented out the REVOKE, all tests passed. When you uncomment
> > the REVOKE, CREATE EXTENSION fails.
>
> Fix pushed, please let me know if you see any further issues.
>

Orafce is working without any issues now.

Thank you

Pavel



>
> Thanks!
>
> Stephen
>


Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Stephen Frost
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
> > Alvaro,
> 
> > Will take a look at this, though I'm just about to commit a fix that's
> > probably related (and addresses Pavel's issue).  Basically, for reasons
> > unknown, I was calling systable_endscan() immediately after
> > systable_getnext(), which doesn't work when you want to use the tuple
> > you got back.
> 
> Yeah, I noticed that bug too.  It might well explain the issue, because
> the tuple is seen as 0x7f.

Fix pushed.

Thanks!

Stephen




Re: [HACKERS] SEGFAULT in CREATE EXTENSION related pg_init_privs

2016-04-15 Thread Stephen Frost
Pavel,

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2016-04-14 14:26 GMT+02:00 Stephen Frost :
> >
> > * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > > I am trying to prepare orafce for PostgreSQL 9.6.
> > >
> > > I can successfully compile this extension, but the CREATE EXTENSION
> > > statement fails with a segfault
> >
> > Just the latest off of https://github.com/orafce/orafce ?
> 
> yes. When I commented out the REVOKE, all tests passed. When you uncomment
> the REVOKE, CREATE EXTENSION fails.

Fix pushed, please let me know if you see any further issues.

Thanks!

Stephen




Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-04-15 Thread Michael Paquier
On Fri, Apr 15, 2016 at 9:46 PM, Michael Paquier
 wrote:
> On Fri, Apr 15, 2016 at 8:25 PM, Etsuro Fujita
>  wrote:
>> How about doing something similar for PQprepare/PQexecPrepared in
>> postgresExecForeignInsert, postgresExecForeignUpdate, and
>> postgresExecForeignDelete?
>
> Yes, I hesitated to touch those, but they are good candidates for this
> new interface, and it actually proved not to be complicated to plug the
> new routines into those code paths.
>
>> Also, how about doing that for PQexec in connection.c?
>
> Here I disagree; this is not a good fit. All the PQexec calls are part of
> callbacks that are triggered on failures, and we rely on such a callback
> when issuing PQcancel. do_sql_command runs commands that take a short
> amount of time, so I think it is fine as-is with PQexec.

Here is a new version. I just recalled that I forgot a PQclear() call
to clean up results.
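
For the ExecForeign* paths mentioned above, the shape of the change is roughly
as follows (a sketch of the pattern only, reusing the existing PgFdwModifyState
fields and the p_values/res locals of those callbacks; the attached diff is the
authoritative version):

    if (!PQsendQueryPrepared(fmstate->conn,
                             fmstate->p_name,
                             fmstate->p_nums,
                             p_values,
                             NULL, NULL, 0))
        pgfdw_report_error(ERROR, NULL, fmstate->conn, false, fmstate->query);

    /* Wait for the result while staying responsive to interrupts */
    res = pgfdw_get_result(fmstate->conn, fmstate->query);
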
-- 
Michael
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 189f290..33b7363 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -17,6 +17,7 @@
 #include "access/xact.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
+#include "storage/latch.h"
 #include "utils/hsearch.h"
 #include "utils/memutils.h"
 
@@ -448,6 +449,68 @@ GetPrepStmtNumber(PGconn *conn)
 }
 
 /*
+ * Execute query in non-blocking mode and fetch back result
+ */
+PGresult *
+pgfdw_exec_query(PGconn *conn, const char *query)
+{
+	if (!PQsendQuery(conn, query))
+		pgfdw_report_error(ERROR, NULL, conn, false, query);
+
+	return pgfdw_get_result(conn, query);
+}
+
+/*
+ * Scan for results of query run in non-waiting mode
+ *
+ * This should be run after executing a query on the remote server and
+ * offers quick responsiveness by checking for any interruptions. The
+ * last result is returned back to caller, and previous results are
+ * consumed. Caller is responsible for the error handling on the fetched
+ * result.
+ */
+PGresult *
+pgfdw_get_result(PGconn *conn, const char *query)
+{
+	PGresult   *last_res = NULL;
+
+	for (;;)
+	{
+		PGresult *res;
+
+		while (PQisBusy(conn))
+		{
+			int		wc;
+
+			/* Sleep until there's something to do */
+			wc = WaitLatchOrSocket(MyLatch,
+                                   WL_LATCH_SET | WL_SOCKET_READABLE,
+                                   PQsocket(conn),
+                                   -1L);
+			ResetLatch(MyLatch);
+
+			CHECK_FOR_INTERRUPTS();
+
+			/* Data available in socket */
+			if (wc & WL_SOCKET_READABLE)
+			{
+                if (!PQconsumeInput(conn))
+                    pgfdw_report_error(ERROR, NULL, conn, false, query);
+			}
+		}
+
+		res = PQgetResult(conn);
+		if (res == NULL)
+			break;
+
+		PQclear(last_res);
+		last_res = res;
+	}
+
+	return last_res;
+}
+
+/*
  * Report an error we got from the remote server.
  *
  * elevel: error level to use (typically ERROR, but might be less)
@@ -598,6 +661,32 @@ pgfdw_xact_callback(XactEvent event, void *arg)
 case XACT_EVENT_ABORT:
 	/* Assume we might have lost track of prepared statements */
 	entry->have_error = true;
+
+	/*
+	 * If a command has been submitted to the remote server
+	 * using an asynchronous execution function, the command
+	 * might not have yet completed.  Check to see if a command
+	 * is still being processed by the remote server, and if so,
+	 * request cancellation of the command; if not, abort
+	 * gracefully.
+	 */
+	if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
+	{
+		PGcancel   *cancel;
+		char		errbuf[256];
+
+		if ((cancel = PQgetCancel(entry->conn)))
+		{
+			if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+                ereport(WARNING,
+                        (errcode(ERRCODE_CONNECTION_FAILURE),
+                         errmsg("could not send cancel request: %s",
+                                errbuf)));
+			PQfreeCancel(cancel);
+		}
+		break;
+	}
+
 	/* If we're aborting, abort all remote transactions too */
 	res = PQexec(entry->conn, "ABORT TRANSACTION");
 	/* Note: can't throw ERROR, it would be infinite loop */
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index ee0220a..4566cf4 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1421,7 +1421,7 @@ postgresReScanForeignScan(ForeignScanState *node)
 	 * We don't use a PG_TRY block here, so be careful not to throw error
 	 * without releasing the PGresult.
 	 */
-	res = PQexec(fsstate->conn, sql);
+	res = pgfdw_exec_query(fsstate->conn, sql);
 	if (PQresultStatus(res) != PGRES_COMMAND_OK)
 		pgfdw_report_error(ERROR, res, fsstate->conn, true, sql);
 	PQclear(res);
@@ -1754,13 +1754,16 @@ postgresExecForeignInsert(EState *estate,
 	 * We don't use a PG_TRY block here, so be careful not to throw error
 	 * without releasing the PGresult.
 	 */
-	res = PQexecPrepared(fmstate->conn,
-		 fmstate->p_name,
-		 fmstate->p_nums,
-		 p_values,
-		 NULL,
-		 NULL,
-		 0);
+	if (!PQsendQueryPrepared(fmstate->con

Re: [HACKERS] Is a syscache tuple more like an on-disk tuple or a freshly made one?

2016-04-15 Thread Tom Lane
Chapman Flack  writes:
> On 04/15/16 18:13, Tom Lane wrote:
>> You could use heap_copy_tuple_as_datum().

> Thanks, that looks like what the doctor ordered.

> For pre-9.4, would the equivalent be basically
> heap_form_tuple applied to the results of heap_deform_tuple ?

You could do that, or you could do what heap_copy_tuple_as_datum
does, ie copy the tuple and then poke the appropriate header
field values into it.
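
For pre-9.4 branches, a minimal sketch of the first option (heap_form_tuple
over the results of heap_deform_tuple), assuming the syscache tuple and the
catalog's tuple descriptor are already in hand; variable names here are
illustrative:

    Datum      *values = (Datum *) palloc(tupdesc->natts * sizeof(Datum));
    bool       *nulls = (bool *) palloc(tupdesc->natts * sizeof(bool));
    HeapTuple   newtup;
    Datum       result;

    heap_deform_tuple(cachetup, tupdesc, values, nulls);
    newtup = heap_form_tuple(tupdesc, values, nulls);   /* fresh, Datum-safe header */
    result = HeapTupleGetDatum(newtup);
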

regards, tom lane




Re: [HACKERS] Is a syscache tuple more like an on-disk tuple or a freshly made one?

2016-04-15 Thread Chapman Flack
On 04/15/16 18:03, Alvaro Herrera wrote:

> I suppose you could create a copy of the tuple (SysCacheSearchCopy) and
> use that for HeapTupleGetDatum.  The problem with the syscache tuple is
> that it can go away as soon as you do the ReleaseSysCache -- it lives in
> shared_buffers memory, so when it's released the buffer might get
> evicted.

Sure ... I wasn't going to call ReleaseSysCache until I was all done
with it anyway, should only take microseconds ... thought I'd be
clever and avoid making a copy, and pass it to existing code expecting
a Datum, but I guess that's more trouble than it's worth.

> A "syscache tuple" is definitely an on-disk tuple.

Got it. Thanks!

On 04/15/16 18:13, Tom Lane wrote:

> You could use heap_copy_tuple_as_datum().

Thanks, that looks like what the doctor ordered.

For pre-9.4, would the equivalent be basically
heap_form_tuple applied to the results of heap_deform_tuple ?

-Chap




Re: [HACKERS] Is a syscache tuple more like an on-disk tuple or a freshly made one?

2016-04-15 Thread Tom Lane
Chapman Flack  writes:
> I am tempted to apply HeapTupleGetDatum to a tuple retrieved from
> the syscache (as I already have code for processing a tuple presented
> as a Datum).

> But I see a comment on HeapTupleHeaderGetDatum: "This must *not* get
> applied to an on-disk tuple; the tuple should be freshly made by
> heap_form_tuple or some wrapper ..."

> ... and here I confess I'm unsure whether a tuple retrieved from
> the syscache is more like an on-disk one, or a freshly-made one,
> for purposes of the warning in that comment.

A tuple from syscache is an on-disk tuple for this purpose; it has
the original catalog row's header fields, not the header fields
appropriate for a Datum.  So no, that will *not* work, even disregarding
the question of whether it'd be safe to pass a pointer into syscache
to some random function.

> Is there a conventional proper way to pass a tuple retrieved from
> syscache to code that accepts a tuple as a Datum?

You could use heap_copy_tuple_as_datum().  See SPI_returntuple()
for an example.
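
For instance, a minimal sketch along those lines (9.4 or later; the PROCOID
lookup and the heap_open() of pg_proc are illustrative, not taken from the
thread):

    HeapTuple   tup;
    Relation    rel;
    Datum       result;

    tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for function %u", funcid);

    rel = heap_open(ProcedureRelationId, AccessShareLock);
    result = heap_copy_tuple_as_datum(tup, RelationGetDescr(rel));
    heap_close(rel, AccessShareLock);
    ReleaseSysCache(tup);   /* safe: result is a palloc'd copy, not the cache entry */
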

regards, tom lane




Re: [HACKERS] Is a syscache tuple more like an on-disk tuple or a freshly made one?

2016-04-15 Thread Alvaro Herrera
Chapman Flack wrote:

> I am tempted to apply HeapTupleGetDatum to a tuple retrieved from
> the syscache (as I already have code for processing a tuple presented
> as a Datum).
> 
> But I see a comment on HeapTupleHeaderGetDatum: "This must *not* get
> applied to an on-disk tuple; the tuple should be freshly made by
> heap_form_tuple or some wrapper ..."

I suppose you could create a copy of the tuple (SysCacheSearchCopy) and
use that for HeapTupleGetDatum.  The problem with the syscache tuple is
that it can go away as soon as you do the ReleaseSysCache -- it lives in
shared_buffers memory, so when it's released the buffer might get
evicted.

heap_form_tuple returns a newly palloc'd tuple, which is what you want.

> ... and here I confess I'm unsure whether a tuple retrieved from
> the syscache is more like an on-disk one, or a freshly-made one,
> for purposes of the warning in that comment.

A "syscache tuple" is definitely an on-disk tuple.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Is a syscache tuple more like an on-disk tuple or a freshly made one?

2016-04-15 Thread Chapman Flack
Please bear with a neophyte question ...

I am tempted to apply HeapTupleGetDatum to a tuple retrieved from
the syscache (as I already have code for processing a tuple presented
as a Datum).

But I see a comment on HeapTupleHeaderGetDatum: "This must *not* get
applied to an on-disk tuple; the tuple should be freshly made by
heap_form_tuple or some wrapper ..."

... and here I confess I'm unsure whether a tuple retrieved from
the syscache is more like an on-disk one, or a freshly-made one,
for purposes of the warning in that comment.

Is there a conventional proper way to pass a tuple retrieved from
syscache to code that accepts a tuple as a Datum? Or is there some
fundamental reason a smart person wouldn't do that?

Thanks,
-Chap




Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Stephen Frost
Alvaro,

On Friday, April 15, 2016, Alvaro Herrera  wrote:

> Stephen Frost wrote:
>
> > Here's the latest message ID on the thread he started.
> >
> > cafj8prb_8wggxg1ekgfdq_g_c1p1ilc_j9m3jflfmld2vxt...@mail.gmail.com
> 
> >
> > Pretty sure it's the same issue.  Going through testing now and will
> > push the fix very shortly.
>
> May I suggest adding the extension I showed to
> src/test/modules/test_extensions.
>

Yup, already done and will be included in my commit.

Thanks!

Stephen


Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Alvaro Herrera
Stephen Frost wrote:

> Here's the latest message ID on the thread he started.
> 
> cafj8prb_8wggxg1ekgfdq_g_c1p1ilc_j9m3jflfmld2vxt...@mail.gmail.com
> 
> Pretty sure it's the same issue.  Going through testing now and will
> push the fix very shortly.

May I suggest adding the extension I showed to
src/test/modules/test_extensions.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Tom Lane
Andres Freund  writes:
> On 2016-04-15 15:26:17 -0400, Tom Lane wrote:
>> I think the bottom line is that we misdesigned the WAL representation
>> by assuming that this sort of info could always be piggybacked on a
>> transaction commit record.  It's time to fix that.

> I think we got to piggyback it onto a commit record, as long as there's
> one.

No objection to that part.  What I'm saying is that when there isn't one,
the answer is a new record type, not forcing xid assignment.  It might
look almost like a commit record, but it shouldn't imply that there
was a transaction.

regards, tom lane




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Alvaro Herrera
Andres Freund wrote:
> On 2016-04-15 15:26:17 -0400, Tom Lane wrote:
> > I think the bottom line is that we misdesigned the WAL representation
> > by assuming that this sort of info could always be piggybacked on a
> > transaction commit record.  It's time to fix that.
> 
> I think we got to piggyback it onto a commit record, as long as there's
> one. Otherwise it's going to be more complex (queuing messages when
> reading an inval record) and slower (more wal records).  I can easily
> develop a patch for that, the question is what we do on the back
> branches...

We have introduced new wal records in back branches previously --
nothing new (c.f. 8e9a16ab8f7f0e5813644975cc3f336e5b064b6e).  The user
just needs to make sure to upgrade the standbys first.  If they don't,
they would die upon replay of the first such record, which they can take
as an indication that they need to be upgraded; the standby is down for
some time, but there is no data loss or corruption.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Andres Freund
On 2016-04-15 15:26:17 -0400, Tom Lane wrote:
> I think the bottom line is that we misdesigned the WAL representation
> by assuming that this sort of info could always be piggybacked on a
> transaction commit record.  It's time to fix that.

I think we got to piggyback it onto a commit record, as long as there's
one. Otherwise it's going to be more complex (queuing messages when
reading an inval record) and slower (more wal records).  I can easily
develop a patch for that, the question is what we do on the back
branches...

Greetings,

Andres Freund




Re: [HACKERS] more parallel query documentation

2016-04-15 Thread Jim Nasby

On 4/14/16 10:02 PM, Robert Haas wrote:

As previously threatened, I have written some user documentation for
parallel query.  I put it up here:


Yay! Definitely needed to be written. :)

There should be a section that summarizes the parallel machinery. I 
think the most important points are that separate processes are spun up, 
that they're limited by max_worker_processes and max_parallel_degree, 
and that shared memory queues are used to move data, results and errors 
between a regular backend (controlling backend?) and its workers. The 
first section kind-of alludes to this, but it doesn't actually explain 
any of it. I think it's OK for the very first section to be a *brief* 
tl;dr summary on the basics of turning the feature on, but after that 
laying down groundwork knowledge will make the rest of the page much 
clearer.


I think the parts that talk about "parallel plan executed with no 
workers" are confusing... it almost sounds like the query won't be 
executed at all. It'd be better to say something like "executed single 
process" or "executed with no parallelism" or similar. Maybe the real 
issue is we need to pick a clear term for a non-parallel query and stick 
with it. I would also expand the different scenarios into bullets and 
explain why parallelism isn't used, like you did right above that. (I 
think it's great that you explained *why* parallel plans wouldn't be 
generated instead of just listing conditions.)


When describing SeqScan, it would be good to clarify whether 
effective_io_concurrency has an effect. (For that matter, does 
effective_io_concurrency interact with any of the other parallel settings?)


"Functions must be marked PARALLEL UNSAFE ..., or make persistent 
changes to settings." What would be a non-persistent change? SET LOCAL? 
(This is another case where it'd be good if we decided on specific 
terminology and referenced the definition from the page.)

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Stephen Frost
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
> > Alvaro,
> 
> > Will take a look at this, though I'm just about to commit a fix that's
> > probably related (and addresses Pavel's issue).  Basically, for reasons
> > unknown, I was calling systable_endscan() immediately after
> > systable_getnext(), which doesn't work when you want to use the tuple
> > you got back.
> 
> Yeah, I noticed that bug too.  It might well explain the issue, because
> the tuple is seen as 0x7f.
> 
> #0  heap_deform_tuple (tuple=tuple@entry=0x339f438, 
> tupleDesc=tupleDesc@entry=0x7f680f3dcde0, 
> values=values@entry=0x3390cd8, 
> isnull=isnull@entry=0x339f3c8 
> "\177\177\177\177\177~\177\177h\367\071\003")
> at /pgsql/source/master/src/backend/access/common/heaptuple.c:881
> #1  0x00479e3a in heap_modify_tuple (tuple=tuple@entry=0x339f438, 
> tupleDesc=0x7f680f3dcde0, 
> replValues=replValues@entry=0x7ffd662a3770, 
> replIsnull=replIsnull@entry=0x7ffd662a3750 "", 
> doReplace=doReplace@entry=0x7ffd662a3760 "")
> at /pgsql/source/master/src/backend/access/common/heaptuple.c:817
> #2  0x00518feb in recordExtensionInitPriv (objoid=46960, 
> classoid=2615, objsubid=0, 
> new_acl=0x339f188) at 
> /pgsql/source/master/src/backend/catalog/aclchk.c:5305
> #3  0x0051d2b5 in ExecGrant_Namespace (istmt=)
> at /pgsql/source/master/src/backend/catalog/aclchk.c:2942
> 
> Not sure what "Pavel's issue" is, since it's not listed in the open
> items page.

Here's the latest message ID on the thread he started.

cafj8prb_8wggxg1ekgfdq_g_c1p1ilc_j9m3jflfmld2vxt...@mail.gmail.com

Pretty sure it's the same issue.  Going through testing now and will
push the fix very shortly.

Thanks!

Stephen




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Tom Lane
Simon Riggs  writes:
> In my understanding we have two choices for this bug

> 1) assign an xid so it forces sending a message (message plus xid)
> 2) send a message without assigning an xid (message only)

> (1) seems like it is worse for backpatching, IMHO, though I am willing to
> hear other thoughts or options

The problem with (1) is that it creates side-effects that could be bad;
Robert's already pointed out one close-to-show-stopper consequence,
and I have little confidence that there are not others.  In general,
if we got here without assigning an xid, there's a reason.

I think the bottom line is that we misdesigned the WAL representation
by assuming that this sort of info could always be piggybacked on a
transaction commit record.  It's time to fix that.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Alvaro Herrera
Stephen Frost wrote:
> Alvaro,

> Will take a look at this, though I'm just about to commit a fix that's
> probably related (and addresses Pavel's issue).  Basically, for reasons
> unknown, I was calling systable_endscan() immediately after
> systable_getnext(), which doesn't work when you want to use the tuple
> you got back.

Yeah, I noticed that bug too.  It might well explain the issue, because
the tuple is seen as 0x7f.

#0  heap_deform_tuple (tuple=tuple@entry=0x339f438, 
tupleDesc=tupleDesc@entry=0x7f680f3dcde0, 
values=values@entry=0x3390cd8, 
isnull=isnull@entry=0x339f3c8 "\177\177\177\177\177~\177\177h\367\071\003")
at /pgsql/source/master/src/backend/access/common/heaptuple.c:881
#1  0x00479e3a in heap_modify_tuple (tuple=tuple@entry=0x339f438, 
tupleDesc=0x7f680f3dcde0, 
replValues=replValues@entry=0x7ffd662a3770, 
replIsnull=replIsnull@entry=0x7ffd662a3750 "", 
doReplace=doReplace@entry=0x7ffd662a3760 "")
at /pgsql/source/master/src/backend/access/common/heaptuple.c:817
#2  0x00518feb in recordExtensionInitPriv (objoid=46960, classoid=2615, 
objsubid=0, 
new_acl=0x339f188) at /pgsql/source/master/src/backend/catalog/aclchk.c:5305
#3  0x0051d2b5 in ExecGrant_Namespace (istmt=)
at /pgsql/source/master/src/backend/catalog/aclchk.c:2942

Not sure what "Pavel's issue" is, since it's not listed in the open
items page.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Stephen Frost
Alvaro,

* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
> > Add new catalog called pg_init_privs
> > 
> > This new catalog holds the privileges which the system was
> > initialized with at initdb time, along with any permissions set
> > by extensions at CREATE EXTENSION time.  This allows pg_dump
> > (and any other similar use-cases) to detect when the privileges
> > set on initdb-created or extension-created objects have been
> > changed from what they were set to at initdb/extension-creation
> > time and handle those changes appropriately.
> 
> If you have an extension that's marked not relocatable and drop it, its
> schema is left behind; trying to create the extension again, *crash*.

Will take a look at this, though I'm just about to commit a fix that's
probably related (and addresses Pavel's issue).  Basically, for reasons
unknown, I was calling systable_endscan() immediately after
systable_getnext(), which doesn't work when you want to use the tuple
you got back.
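
The safe ordering looks roughly like this (a sketch only; the relation, index
and scan keys are placeholders, not the actual recordExtensionInitPriv()
arguments):

    SysScanDesc scan;
    HeapTuple   tuple;

    scan = systable_beginscan(rel, indexid, true, NULL, nkeys, key);
    tuple = systable_getnext(scan);
    if (HeapTupleIsValid(tuple))
        tuple = heap_copytuple(tuple);  /* copy while the scan still pins the buffer */
    systable_endscan(scan);             /* ending the scan first leaves the tuple dangling */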

Thanks!

Stephen




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Simon Riggs
On 15 April 2016 at 20:01, Andres Freund  wrote:

> On 2016-04-15 19:59:06 +0100, Simon Riggs wrote:
> > For me, the issue is that we need to do something to catch bugs. The
> > existing code does not have any test at all to check whether we are doing
> > the wrong thing - it just lets the wrong thing happen.
>
> But sending the message, without assigning an xid, *IS* the right thing
> to do here? We shouldn't assign an xid, and we need to send the message
> out to the standbys.
>
>
> > Fixing it by forcing a new behaviour might be the right thing to do going
> > forwards, but I don't much like the idea of forcing new behaviour in back
> > branches. It might fix this bug, but can easily cause others.
>
> What's your alternative? Assigning an xid in the middle of vacuum isn't
> ok, breaking vacuum isn't either?
>

In my understanding we have two choices for this bug

1) assign an xid so it forces sending a message (message plus xid)
2) send a message without assigning an xid (message only)

(1) seems like it is worse for backpatching, IMHO, though I am willing to
hear other thoughts or options

-- 
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] [COMMITTERS] pgsql: Add new catalog called pg_init_privs

2016-04-15 Thread Alvaro Herrera
Stephen Frost wrote:
> Add new catalog called pg_init_privs
> 
> This new catalog holds the privileges which the system was
> initialized with at initdb time, along with any permissions set
> by extensions at CREATE EXTENSION time.  This allows pg_dump
> (and any other similar use-cases) to detect when the privileges
> set on initdb-created or extension-created objects have been
> changed from what they were set to at initdb/extension-creation
> time and handle those changes appropriately.

If you have an extension that's marked not relocatable and drop it, its
schema is left behind; trying to create the extension again, *crash*.

$ cat /pgsql/install/master/share/extension/crash--1.sql 
grant usage on schema @extschema@ to public;


$ cat /pgsql/install/master/share/extension/crash.control 
comment = 'crash'
default_version = '1'
relocatable = false
superuser = true
schema = 'crash'

alvherre=# create extension crash;
CREATE EXTENSION
alvherre=# drop extension crash;
DROP EXTENSION
alvherre=# \dn
  List of schemas
  Name  |  Owner   
--------+----------
 crash  | alvherre
 public | alvherre
(2 rows)

alvherre=# create extension crash;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Andres Freund
On 2016-04-15 19:59:06 +0100, Simon Riggs wrote:
> For me, the issue is that we need to do something to catch bugs. The
> existing code does not have any test at all to check whether we are doing
> the wrong thing - it just lets the wrong thing happen.

But sending the message, without assigning an xid, *IS* the right thing
to do here? We shouldn't assign an xid, and we need to send the message
out to the standbys.


> Fixing it by forcing a new behaviour might be the right thing to do going
> forwards, but I don't much like the idea of forcing new behaviour in back
> branches. It might fix this bug, but can easily cause others.

What's your alternative? Assigning an xid in the middle of vacuum isn't
ok, breaking vacuum isn't either?

Andres




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Simon Riggs
On 15 April 2016 at 18:44, Robert Haas  wrote:

> On Fri, Apr 15, 2016 at 1:12 PM, Tom Lane  wrote:
> > Robert Haas  writes:
> >> On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund 
> wrote:
> >>> The easiest way to achieve that seems to be to just assign an xid if
> >>> that's the case; while it's not necessarily safe/efficient to do so at
> >>> the point the invalidation message was queued, I think it should be
> safe
> >>> to do so at commit time. Seems less invasive to backpatch than to
> either
> >>> support commit records without xids, or a separate record just
> >>> transporting invalidation messages.
> >
> >> I agree that's better for back-patching.  I hope it won't suck
> >> performance-wise.  In master, we might think of inventing something
> >> new.
> >
> > I'm a little worried about whether this will break assumptions that
> > vacuum doesn't have an XID.  I don't immediately see how it would,
> > but it seems a bit shaky.
>
> Actually, I think there's a bigger problem.  If you manage to almost
> wrap around the XID space, and the cluster shuts down, you are
> guaranteed to be able to vacuum the whole cluster without actually
> running out of XIDs.  Forcing an XID to be assigned here would make
> that uncertain - it would depend on how many tables you have versus
> how many XIDs you have left.  That seems unacceptable to me.


I agree this is a bug, similar to the last one.

For me, the issue is that we need to do something to catch bugs. The
existing code does not have any test at all to check whether we are doing
the wrong thing - it just lets the wrong thing happen.

Fixing it by forcing a new behaviour might be the right thing to do going
forwards, but I don't much like the idea of forcing new behaviour in back
branches. It might fix this bug, but can easily cause others.

-- 
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Refactor pg_dump as a library?

2016-04-15 Thread Tom Lane
Robert Haas  writes:
> On Thu, Apr 14, 2016 at 1:01 PM, Andres Freund  wrote:
>> I'm not sure I find that convincing: The state portrayed by the syscache
>> is the state COPY/SELECT et al will be using.  I think the angle to
>> attack this is rather to allow blocking 'globally visible' DDL
>> efficiently and correctly, rather than the hack pg_dump is using right now.

> Maybe.  I think that idea of changing the pg_get_Xdef() stuff to use
> the transaction snapshot rather than the latest snapshot might be
> worth considering, too.

The problem here is the connection to syscache; changing the behavior
of that, in a general context, is very scary.  What we might be able to
do that would satisfy pg_dump's needs is to invent a mode in which you
can run a read-only transaction that uses the transaction snapshot to
populate syscache (and then flushes the syscache at the end).  It would
have to be a pretty "hard" notion of read-only, not the squishy one we
have now, but I think it would work safely.  Anything that might otherwise
break because of stale syscache entries should be prevented from having
bad side-effects by the read-only restriction.

regards, tom lane




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Andres Freund
On 2016-04-15 13:44:27 -0400, Robert Haas wrote:
> On Fri, Apr 15, 2016 at 1:12 PM, Tom Lane  wrote:
> > Robert Haas  writes:
> >> On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund  wrote:
> >>> The easiest way to achieve that seems to be to just assign an xid if
> >>> that's the case; while it's not necessarily safe/efficient to do so at
> >>> the point the invalidation message was queued, I think it should be safe
> >>> to do so at commit time. Seems less invasive to backpatch than to either
> >>> support commit records without xids, or a separate record just
> >>> transporting invalidation messages.
> >
> >> I agree that's better for back-patching.  I hope it won't suck
> >> performance-wise.  In master, we might think of inventing something
> >> new.
> >
> > I'm a little worried about whether this will break assumptions that
> > vacuum doesn't have an XID.  I don't immediately see how it would,
> > but it seems a bit shaky.
> 
> Actually, I think there's a bigger problem.  If you manage to almost
> wrap around the XID space, and the cluster shuts down, you are
> guaranteed to be able to vacuum the whole cluster without actually
> running out of XIDs.

Currently you're unfortunately not, c.f.
http://www.postgresql.org/message-id/CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvm7-...@mail.gmail.com


> Forcing an XID to be assigned here would make
> that uncertain - it would depend on how many tables you have versus
> how many XIDs you have left.  That seems unacceptable to me.

But I agree that that's a concern.  I'm kinda leaning towards
introducing an invalidation WAL record for that case atm. It's quite
possible to force an xid to be assigned in xact.c, but it's certainly
not pretty (the least ugly way is to duplicate the
xactGetCommittedInvalidationMessages() call, and force an xid to be
assigned early in CommitTransaction()).

Andres




Re: [HACKERS] Refactor pg_dump as a library?

2016-04-15 Thread Robert Haas
On Thu, Apr 14, 2016 at 1:01 PM, Andres Freund  wrote:
> On 2016-04-14 13:16:20 +0200, Andreas Karlsson wrote:
>> I am personally not a fan of the pg_get_Xdef() functions due to their heavy
>> reliance on the syscache which feels rather unsafe in combination with
>> concurrent DDL.
>
> I'm not sure I find that convincing: The state portrayed by the syscache
> is the state COPY/SELECT et al will be using.  I think the angle to
> attack this is rather to allow blocking 'globally visible' DDL
> efficiently and correctly, rather than the hack pg_dump is using right now.

Maybe.  I think that idea of changing the pg_get_Xdef() stuff to use
the transaction snapshot rather than the latest snapshot might be
worth considering, too.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [COMMITTERS] pgsql: Add trigonometric functions that work in degrees.

2016-04-15 Thread Robert Haas
On Fri, Apr 8, 2016 at 5:48 PM, Tom Lane  wrote:
> Peter Eisentraut  writes:
>> On 01/22/2016 03:46 PM, Tom Lane wrote:
>>> Add trigonometric functions that work in degrees.
>
>> I have a host here that is having regression test failures from this commit:
>
>> --- src/test/regress/expected/float8.out
>> +++ src/test/regress/results/float8.out
>> @@ -490,9 +490,9 @@
>> x   | asind | acosd | atand
>>   --+---+---+---
>>  -1 |   -90 |   180 |   -45
>> - -0.5 |   -30 |   120 |
>> + -0.5 |   |   120 |
>>   0 | 0 |90 | 0
>> -  0.5 |30 |60 |
>> +  0.5 |   |60 |
>>   1 |90 | 0 |45
>>   (5 rows)
>
> BTW ... looking closer at that, it appears to show asind(-0.5) and
> asind(0.5) returning NULL.  Which makes no sense at all, because
> there is no provision in dasind() for returning a null, regardless
> of the input value.
>
> So I'm pretty baffled.  Maybe you could step through this and figure
> out where it's going off the rails?

Peter, are you going to look into this further?  This is on the open
items list, but there seems to be nothing that can be done about it by
anyone other than, maybe, you.

If you're not going to look into it, I think we should delete the open
item.  There's no point in tracking issues that aren't actionable.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Disallow unique index on system columns

2016-04-15 Thread Tom Lane
Andres Freund  writes:
> On 2016-04-15 11:49:27 -0400, Tom Lane wrote:
>> I think what we should do with this is split it into two patches, one
>> to fix ALTER REPLICA IDENTITY's crash (which we'd back-patch to 9.4)
>> and one to forbid indexes on system columns (which IMO should be HEAD
>> only).  The first of those should be pretty uncontroversial, so I'll
>> go ahead with pushing it --- anyone have a problem with the second?

> Only in that I'm wondering whether we shouldn't be more aggressive about
> #2.

Well, if there *is* someone out there somehow making use of an index on
one of these columns, I think we shouldn't break it in a minor release.

A more likely scenario is that someone's created such an index but it's
lying around unused.  In that case, with the patch as proposed, they'd
have to drop it before they could upgrade to 9.6.  That doesn't bother
me ... but again, possibly causing problems in a minor release does.

regards, tom lane




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Robert Haas
On Fri, Apr 15, 2016 at 1:12 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund  wrote:
>>> The easiest way to achieve that seems to be to just assign an xid if
>>> that's the case; while it's not necessarily safe/efficient to do so at
>>> the point the invalidation message was queued, I think it should be safe
>>> to do so at commit time. Seems less invasive to backpatch than to either
>>> support commit records without xids, or a separate record just
>>> transporting invalidation messages.
>
>> I agree that's better for back-patching.  I hope it won't suck
>> performance-wise.  In master, we might think of inventing something
>> new.
>
> I'm a little worried about whether this will break assumptions that
> vacuum doesn't have an XID.  I don't immediately see how it would,
> but it seems a bit shaky.

Actually, I think there's a bigger problem.  If you manage to almost
wrap around the XID space, and the cluster shuts down, you are
guaranteed to be able to vacuum the whole cluster without actually
running out of XIDs.  Forcing an XID to be assigned here would make
that uncertain - it would depend on how many tables you have versus
how many XIDs you have left.  That seems unacceptable to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [BUGS] Breakage with VACUUM ANALYSE + partitions

2016-04-15 Thread Robert Haas
On Thu, Apr 14, 2016 at 1:39 AM, Abhijit Menon-Sen  wrote:
> At 2016-04-12 09:00:57 -0400, robertmh...@gmail.com wrote:
>>
>> On Mon, Apr 11, 2016 at 1:17 PM, Andres Freund  wrote:
>> >
>> > 3) Actually handle the case of the last open segment not being
>> >RELSEG_SIZE properly in _mdfd_getseg() - mdnblocks() does so.
>>
>> #3 seems like it's probably about 15 years overdue, so let's do that
>> anyway.
>
> Do I understand correctly that the case of the "last open segment"
> (i.e., the one for which mdfd_chain == NULL) not being RELSEG_SIZE
> (i.e., _mdnblocks(reln, forknum, v) < RELSEG_SIZE) should not call
> _mdfd_openseg on nextsegno if behaviour is not EXTENSION_CREATE or
> InRecovery?
>
> And that "We won't create segment if not existent" should happen, but
> doesn't only because the next segment file wasn't removed earlier, so
> we have to add an extra check for that case?
>
> In other words, is something like the following what's needed here, or
> is there more to it?

Something like that is what I was thinking about, but I notice it has
the disadvantage of adding lseeks to cater to a shouldn't-happen
condition.  Not sure if we should care.

My attempts to test things were also singularly unrewarding.  Even
after messing with the filesystem in various ways, I couldn't figure
out exactly how this makes a difference.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Tom Lane
Robert Haas  writes:
> On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund  wrote:
>> The easiest way to achieve that seems to be to just assign an xid if
>> that's the case; while it's not necessarily safe/efficient to do so at
>> the point the invalidation message was queued, I think it should be safe
>> to do so at commit time. Seems less invasive to backpatch than to either
>> support commit records without xids, or a separate record just
>> transporting invalidation messages.

> I agree that's better for back-patching.  I hope it won't suck
> performance-wise.  In master, we might think of inventing something
> new.

I'm a little worried about whether this will break assumptions that
vacuum doesn't have an XID.  I don't immediately see how it would,
but it seems a bit shaky.

I find it hard to believe that the act of assigning an XID would add
measurably to the cost of a vacuum, so Robert's performance concern
doesn't sound very exciting.  If this works, I think it's fine to
adopt as a permanent solution.

regards, tom lane




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Andres Freund
On 2016-04-15 13:07:19 -0400, Robert Haas wrote:
> On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund  wrote:
> > On 2016-04-14 11:50:58 -0400, Robert Haas wrote:
> >> On Wed, Apr 13, 2016 at 9:58 PM, Andres Freund  wrote:
> >> > We've recently discussed a very similar issue around
> >> > http://www.postgresql.org/message-id/20160227002958.peftvmcx4dxwe...@alap3.anarazel.de
> >> >
> >> > Unfortunately Simon over in that thread disagreed there about fixing
> >> > this by always emitting a commit record when nmsgs > 0 in
> >> > RecordTransactionCommit().  I think this thread is a pretty strong hint
> >> > that we actually should do so.
> >>
> >> Yes.  I'm pretty confident that you had the right idea there, and that
> >> Simon's objection was off-base.
> >
> > The easiest way to achieve that seems to be to just assign an xid if
> > that's the case; while it's not necessarily safe/efficient to do so at
> > the point the invalidation message was queued, I think it should be safe
> > to do so at commit time. Seems less invasive to backpatch than to either
> > support commit records without xids, or a separate record just
> > transporting invalidation messages.
> 
> I agree that's better for back-patching.

It's a bit ugly though, since we're at that stage pretty heavily
assuming there's no xid assigned (on the caller level)... I've toyed
with the idea of emitting a commit record that doesn't have an assigned
xid, but that'd require changes on the apply side, which would be uglier
than a new record type :(

> I hope it won't suck performance-wise.

I can't see that being the case; there aren't many places where we send
invalidation messages without an assigned xid. Running an instrumented
build with an appropriate wal_level reveals about a handful of places.

Greetings,

Andres Freund




Re: [HACKERS] Suspicious behaviour on applying XLOG_HEAP2_VISIBLE.

2016-04-15 Thread Robert Haas
On Thu, Apr 14, 2016 at 12:11 PM, Andres Freund  wrote:
> On 2016-04-14 11:50:58 -0400, Robert Haas wrote:
>> On Wed, Apr 13, 2016 at 9:58 PM, Andres Freund  wrote:
>> > We've recently discussed a very similar issue around
>> > http://www.postgresql.org/message-id/20160227002958.peftvmcx4dxwe...@alap3.anarazel.de
>> >
>> > Unfortunately Simon over in that thread disagreed there about fixing
>> > this by always emitting a commit record when nmsgs > 0 in
>> > RecordTransactionCommit().  I think this thread is a pretty strong hint
>> > that we actually should do so.
>>
>> Yes.  I'm pretty confident that you had the right idea there, and that
>> Simon's objection was off-base.
>
> The easiest way to achieve that seems to be to just assign an xid if
> that's the case; while it's not necessarily safe/efficient to do so at
> the point the invalidation message was queued, I think it should be safe
> to do so at commit time. Seems less invasive to backpatch than to either
> support commit records without xids, or a separate record just
> transporting invalidation messages.

I agree that's better for back-patching.  I hope it won't suck
performance-wise.  In master, we might think of inventing something
new.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Odd system-column handling in postgres_fdw join pushdown patch

2016-04-15 Thread Ashutosh Bapat
On Fri, Apr 15, 2016 at 10:17 PM, Robert Haas  wrote:

> On Fri, Apr 15, 2016 at 12:16 PM, Ashutosh Bapat
>  wrote:
> > The testcases had tableoid::regclass which outputs the foreign table's
> local
> > name, which won't change across runs. Isn't that so?
>
> [rhaas pgsql]$ grep 16444 ~/Downloads/postgres-fdw-syscol-zap-ab.patch
> + Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN 0
> END, CASE WHEN r1.* IS NOT NULL THEN 0 END, CASE WHEN r1.* IS NOT NULL
> THEN 0 END, CASE WHEN r1.* IS NOT NULL THEN 16444 END, CASE WHEN r2.*
> IS NOT NULL THEN 0 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END,
> r2.c1 FROM ("S 1"."T 3" r1 INNER JOIN "S 1"."T 4" r2 ON (TRUE)) WHERE
> ((r1.c1 = r2.c1)) ORDER BY r1.c1 ASC NULLS LAST
> + Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN
> 16444 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END, r2.c1 FROM ("S
> 1"."T 3" r1 LEFT JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1 ORDER BY
> r1.c1 ASC NULLS LAST, r2.c1 ASC NULLS LAST
> + Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN
> 16444 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END, r2.c1 FROM ("S
> 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1 ORDER BY
> r1.c1 ASC NULLS LAST, r2.c1 ASC NULLS LAST
>
> Where do you think 16444 and 16447 are coming from here?
>

Ah! Sorry, it's the explain output.
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Declarative partitioning

2016-04-15 Thread Robert Haas
On Fri, Apr 15, 2016 at 5:46 AM, Ashutosh Bapat
 wrote:
> Retaining the partition hierarchy would help to push-down join across
> partition hierarchy effectively.

-1.  You don't get to insert cruft into the final plan for the
convenience of the optimizer.  I think the AppendPath needs to be
annotated with sufficient information to do whatever query planning
optimizations we want, and some or all of that may need to carry over
to the Append plan to allow run-time partition pruning.  But I think
that flattening nests of Appends is a good optimization and we should
preserve it.  If that makes the additional information that any given
Append needs to carry a bit more complex, so be it.

I also think it's very good that Amit has kept the query planner
unchanged in this initial patch.  Let's leave that work to phase two.
What I suggest we do when the time comes is invent new nodes
RangePartitionMap, ListPartitionMap, HashPartitionMap.  Each contains
minimal metadata needed for tuple routing or planner transformation.
For example, RangePartitionMap can contain an array of partition
boundaries - represented as Datums - and an array of mappings, each a
Node *.  The associated value can be another PartitionMap object if
there is subpartitioning in use, or an OID.  This can be used both for
matching up partitions for join pushdown, and also for fast tuple
routing and runtime partition pruning.
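
To make that concrete, the node described above might look something like this
(purely a sketch of the proposal, not existing code):

    typedef struct RangePartitionMap
    {
        NodeTag     type;
        int         nparts;        /* number of partitions at this level */
        Datum      *boundaries;    /* nparts - 1 range bounds, sorted ascending */
        Node      **mappings;      /* per partition: a nested *PartitionMap for
                                    * subpartitioning, or a node carrying the
                                    * partition's relation OID */
    } RangePartitionMap;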

> 2. The new syntax allows CREATE TABLE to be specified as partition of an
> already partitioned table. Is it possible to do the same for CREATE FOREIGN
> TABLE? Or that's material for v2? Similarly for ATTACH PARTITION.

+1 for making CREATE FOREIGN TABLE support that also, and in version
1.  And same for ATTACH PARTITION.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Odd system-column handling in postgres_fdw join pushdown patch

2016-04-15 Thread Robert Haas
On Fri, Apr 15, 2016 at 12:16 PM, Ashutosh Bapat
 wrote:
> The testcases had tableoid::regclass which outputs the foreign table's local
> name, which won't change across runs. Isn't that so?

[rhaas pgsql]$ grep 16444 ~/Downloads/postgres-fdw-syscol-zap-ab.patch
+ Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN 0
END, CASE WHEN r1.* IS NOT NULL THEN 0 END, CASE WHEN r1.* IS NOT NULL
THEN 0 END, CASE WHEN r1.* IS NOT NULL THEN 16444 END, CASE WHEN r2.*
IS NOT NULL THEN 0 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END,
r2.c1 FROM ("S 1"."T 3" r1 INNER JOIN "S 1"."T 4" r2 ON (TRUE)) WHERE
((r1.c1 = r2.c1)) ORDER BY r1.c1 ASC NULLS LAST
+ Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN
16444 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END, r2.c1 FROM ("S
1"."T 3" r1 LEFT JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1 ORDER BY
r1.c1 ASC NULLS LAST, r2.c1 ASC NULLS LAST
+ Remote SQL: SELECT r1.c1, CASE WHEN r1.* IS NOT NULL THEN
16444 END, CASE WHEN r2.* IS NOT NULL THEN 16447 END, r2.c1 FROM ("S
1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1 ORDER BY
r1.c1 ASC NULLS LAST, r2.c1 ASC NULLS LAST

Where do you think 16444 and 16447 are coming from here?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] EXPLAIN VERBOSE with parallel Aggregate

2016-04-15 Thread Tom Lane
Robert Haas  writes:
> I definitely agree that the current output is messed up, but I'm not
> sure your proposed output is much better.  I wonder if it shouldn't
> say something like:
> Output: serialfn(transfn(args))
> for the partial aggregate and
> Output: finalfn(combinefn(deserialfn(args)))
> for the finalize aggregate step.

> Or maybe just insert the word PARTIAL before each partial aggregate
> step, like PARTIAL sum(num) for the partial step and then just
> sum(num) for the final step.

+1 for the latter, if we can do it conveniently.  I think exposing
the names of the aggregate implementation functions would be very
user-unfriendly, as nobody but us hackers knows what those are.

> I think ending up with sum(sum(num)) is
> right out.  It doesn't look so bad for that case but avg(avg(num))
> would certainly imply something that's not the actual behavior.

Agreed.

regards, tom lane




Re: [HACKERS] SET ROLE and reserved roles

2016-04-15 Thread David G. Johnston
On Fri, Apr 15, 2016 at 8:56 AM, Stephen Frost  wrote:

> Robert,
>
> * Robert Haas (robertmh...@gmail.com) wrote:
> > On Wed, Apr 13, 2016 at 6:53 PM, Stephen Frost 
> wrote:
> > > Requiring that SET ROLE be allowed will mean that many more paths must
> > > be checked and adjusted, such as in all of the CreateObject statements
> > > and potentionally many other paths that I'm not thinking of here, not
> > > the least of which are all of the *existing* checks near
> > > get_rolespec_oid/tuple which will require additional checks for the
> > > CURRENT_USER case to see if the current user is a default role.
> >
> > I don't get it.  I think that these new roles should work just like
> > any other roles, except for existing at initdb time.  I don't see why
> > they would require checking or adjusting any code-paths at all.  It
> > would presumably have made the patch you committed smaller and
> > simpler.  The only thing you'd need to do is (approximately) not dump
> > them.
>
> Perhaps it would be helpful to return to the initial goal of these
> default roles.
>
> We've identified certain privileges which are currently superuser-only
> and we would like to be able to be GRANT those to non-superuser roles.
> Where our GRANT system is able to provide the granularity desired, we
> have simply removed the superuser checks, set up appropriate default
> values and, through the "pg_dump dump catalog ACLs" patch, allowed
> administrators to make the changes to those objects via the
> 'GRANT privilege ON object TO role' system.
>
> For those cases where our GRANT system is unable to provide the
> granularity desired and it isn't sensible to extend the GRANT system to
> cover the exact granularity we wish to provide, we needed to come up
> with an alternative approach.  That approach is the use of special
> default roles, where we can allow exactly the level of granularity
> desired and give administrators a way to grant those privileges to
> users.
>

I didn't fully comprehend this before, but now I understand why additional
checks beyond simple "has inherited the necessary grant" are needed.  The
checks being performed are not for itemized granted capabilities.

> Further, there seems to be no particular use-case which is satisfied by
> allowing SET ROLE'ing to the default roles or for those roles to own
> objects; indeed, it seems more likely to simply spark confusion and
> ultimately may prevent administrators from making use of this system for
> granting privileges as they are unable to GRANT just the specific
> privilege they wish to.
>
>
And I'm having trouble imagining what kind of corner case could result from
not allowing a role to assume ownership of a role it is a member of.  While
we do have the capability to require SET ROLE, it is optional: any code
paths dealing with grants have to assume that the current role receives
permission via inheritance - and all we are doing here is ensuring that is
the situation.

> It now occurs to me that perhaps we can improve on this situation in the
> future by formally supporting a role attribute that is "do not allow SET
> ROLE to this user" and then changing the default roles to have that
> attribute set (and disallowing users from changing it).  That's just
> additional flexibility over what the current patch does, however, but
> perhaps it helps illustrate that there are cases where only the
> privileges of the role are intended to be GRANT'd and not the ability to
> SET ROLE to the role or to create objects as that role.
>

That is where my first response was heading, but it was definitely scope
creep so I didn't bring it up.  Basically, an "INHERITONLY" modifier in
addition to INHERIT and NOINHERIT.

I had stated that having a role that neither provides inheritance nor can
be changed to is pointless, but I am now unsure if that is true.  It would
depend upon whether a LOGIN role is one that is considered to have been SET
ROLE'd to or not.  If not, then a "LOGINONLY" combination would work such
that a role with that combination would only have whatever grants are given
to it and would not be allowed to be changed to by a different role - i.e.,
it could only be used by someone logging into the system specifically as
that role.  It likewise complements "NOLOGIN + LOGIN".

It wouldn't directly preclude said role from itself SET ROLE'ing to a
different role though.

And, for all that, I do realize I'm using these terms somewhat contrary to
reality, since INHERIT is done on the receiver and not the giver.  The
wording would have to change to reflect the POV of the giver.  That is,
INHERITONLY should be something done to a role that prevents it from being
SET ROLE'd to, as opposed to NOINHERIT, which forces a role to use SET ROLE
to access the permissions of all roles of which it is a member.

David J.


Re: [HACKERS] GIN data corruption bug(s) in 9.6devel

2016-04-15 Thread Teodor Sigaev

Alvaro's recommendation, to let the cleaner off the hook once it
passes the page which was the tail page at the time it started, would
prevent any process from getting pinned down indefinitely, but would

Added, see attached patch (based on v3.1).
If there are no objections I will apply it on Monday and backpatch to all
supported versions.


--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/




Re: [HACKERS] Odd system-column handling in postgres_fdw join pushdown patch

2016-04-15 Thread Ashutosh Bapat
On Fri, Apr 15, 2016 at 9:39 PM, Robert Haas  wrote:

> On Thu, Apr 14, 2016 at 7:49 AM, Ashutosh Bapat
>  wrote:
> > BTW, I noticed that we are deparsing whole-row reference as ROW(list of
> > columns from local definition of foreign table), which has the same
> problem
> > with outer joins. It won't be NULL when the rest of the row from that
> > relation is NULL in an outer join. It too needs to be encapsulated in
> CASE
> > WHEN .. END expression. PFA patch with that fix included and also some
> > testcases for system columns as well as whole-row references.
>
> Good catch.  But your test cases are no good because then we have OIDs
> hardcoded in the expected output.  That means 'make installcheck' will
> fail, or if for any other reason the OID varies it will also fail.
> Committed your version with those test cases.
>
>
The testcases had tableoid::regclass which outputs the foreign table's
local name, which won't change across runs. Isn't that so?
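
For reference, a sketch of the kind of check in question (foreign table
name is hypothetical); the regclass cast prints the local relation name,
which is stable across runs:

SELECT tableoid::regclass, * FROM ft_foo LIMIT 1;
--  tableoid | ...
--  ft_foo   | ...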

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] EXPLAIN VERBOSE with parallel Aggregate

2016-04-15 Thread Robert Haas
On Wed, Apr 13, 2016 at 11:03 PM, David Rowley
 wrote:
> There's 2 problems:
>
> 1)
>
> I recently noticed that EXPLAIN VERBOSE is a bit bogus when it comes
> to parallel aggregates with FILTER (WHERE ...) clauses.
>
> We get;
>
> Output: pg_catalog.sum((sum(num) FILTER (WHERE (num > 0)))) FILTER
> (WHERE (num > 0))
>
> Which is simply a lie, we only filter on the partial aggregate, not the 
> combine.
>
> The attached patch just nullifies the combine aggregate's aggfilter
> clause during setrefs. We cannot nullify it any sooner, as the
> aggfilter is required so that we find the correct partial Aggref in
> search_indexed_tlist_for_partial_aggref().
>
> With the attached we get;
>
>  Output: pg_catalog.sum((sum(num) FILTER (WHERE (num > 0))))
>
> The patch is very simple.
>
> 2)
>
> I'm unsure if the schema prefix on the combine aggregate is a problem
> or not. The code path which generates it is rather unfortunate and is
> down to func_get_detail() returning FUNCDETAIL_NOTFOUND in
> generate_function_name() due to not being able to find a function
> named "sum" with the transtype as its only argument. I had thought
> that maybe we should add a pg_proc entry for the aggregate with the
> transtype, if not already covered by the entry for aggfnoid.
> Aggregates where the transtype is the same as the input type work just
> fine;
>
> Output: max((max(num)))
>
> The problem with that is adding the pg_proc entry still won't get rid
> of the schema as the "p_funcid == funcid" test in
> generate_function_name() will fail causing the schema qualification to
> occur again. But at least func_get_detail() would be successful in
> finding the function.
>
> Any one have any thoughts on if this is a problem?

I definitely agree that the current output is messed up, but I'm not
sure your proposed output is much better.  I wonder if it shouldn't
say something like:

Output: serialfn(transfn(args))

for the partial aggregate and

Output: finalfn(combinefn(deserialfn(args)))

for the finalize aggregate step.

Or maybe just insert the word PARTIAL before each partial aggregate
step, like PARTIAL sum(num) for the partial step and then just
sum(num) for the final step.  I think ending up with sum(sum(num)) is
right out.  It doesn't look so bad for that case but avg(avg(num))
would certainly imply something that's not the actual behavior.
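
As a quick illustration of why that reading would be misleading (toy data;
averaging the per-worker averages is not the overall average):

SELECT avg(x) FROM (VALUES (1), (2), (3), (10)) AS t(x);   -- 4.0
-- partial groups {1,2,3} and {10} have averages 2.0 and 10.0;
-- an "avg of avgs" would give 6.0, which is not what parallel
-- aggregation actually computes.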

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Odd system-column handling in postgres_fdw join pushdown patch

2016-04-15 Thread Robert Haas
On Thu, Apr 14, 2016 at 7:49 AM, Ashutosh Bapat
 wrote:
> BTW, I noticed that we are deparsing whole-row reference as ROW(list of
> columns from local definition of foreign table), which has the same problem
> with outer joins. It won't be NULL when the rest of the row from that
> relation is NULL in an outer join. It too needs to be encapsulated in CASE
> WHEN .. END expression. PFA patch with that fix included and also some
> testcases for system columns as well as whole-row references.

Good catch.  But your test cases are no good because then we have OIDs
hardcoded in the expected output.  That means 'make installcheck' will
fail, or if for any other reason the OID varies it will also fail.
Committed your version with those test cases.
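
For readers following along, the deparsing discussed above produces remote
queries roughly of this shape (tables and columns are hypothetical), so a
whole-row reference from the nullable side of an outer join really does
come back as NULL:

-- local query: SELECT t1, t2 FROM ft1 t1 LEFT JOIN ft2 t2 ON (t1.a = t2.a)
-- deparsed remote query, approximately:
SELECT CASE WHEN r1.* IS NOT NULL THEN ROW(r1.a, r1.b) END,
       CASE WHEN r2.* IS NOT NULL THEN ROW(r2.a, r2.b) END
  FROM (public.t1 r1 LEFT JOIN public.t2 r2 ON ((r1.a = r2.a)));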

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Unreasonably generic names in matview.sql

2016-04-15 Thread Tom Lane
Kevin Grittner  writes:
> On Fri, Apr 15, 2016 at 10:52 AM, Tom Lane  wrote:
>> I've been annoyed one time too many by the fact that matview.sql
>> creates, and leaves around, tables and views with such generic
>> names as "t" and "tv".  Unless someone's really attached to that,
>> I'd like to rename those objects to begin with, say, "mvtest".

> No objection here.
> I can do that if you would rather.

It's my gripe, so I'm happy to do the legwork.

regards, tom lane




Re: [HACKERS] Unreasonably generic names in matview.sql

2016-04-15 Thread Kevin Grittner
No objection here.

I can do that if you would rather.

Kevin Grittner


On Fri, Apr 15, 2016 at 10:52 AM, Tom Lane  wrote:
> I've been annoyed one time too many by the fact that matview.sql
> creates, and leaves around, tables and views with such generic
> names as "t" and "tv".  Unless someone's really attached to that,
> I'd like to rename those objects to begin with, say, "mvtest".
>
> regards, tom lane
>
>



-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SET ROLE and reserved roles

2016-04-15 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, Apr 13, 2016 at 6:53 PM, Stephen Frost  wrote:
> > Requiring that SET ROLE be allowed will mean that many more paths must
> > be checked and adjusted, such as in all of the CreateObject statements
> > and potentially many other paths that I'm not thinking of here, not
> > the least of which are all of the *existing* checks near
> > get_rolespec_oid/tuple which will require additional checks for the
> > CURRENT_USER case to see if the current user is a default role.
> 
> I don't get it.  I think that these new roles should work just like
> any other roles, except for existing at initdb time.  I don't see why
> they would require checking or adjusting any code-paths at all.  It
> would presumably have made the patch you committed smaller and
> simpler.  The only thing you'd need to do is (approximately) not dump
> them.

Perhaps it would be helpful to return to the initial goal of these
default roles.

We've identified certain privileges which are currently superuser-only
and we would like to be able to GRANT those to non-superuser roles.
Where our GRANT system is able to provide the granularity desired, we
have simply removed the superuser checks, set up appropriate default
values and, through the "pg_dump dump catalog ACLs" patch, allowed
administrators to make the changes to those objects via the
'GRANT privilege ON object TO role' system.

For those cases where our GRANT system is unable to provide the
granularity desired and it isn't sensible to extend the GRANT system to
cover the exact granularity we wish to provide, we needed to come up
with an alternative approach.  That approach is the use of special
default roles, where we can allow exactly the level of granularity
desired and give administrators a way to grant those privileges to
users.

What this means in a nutshell is that we've collectively decided that
we'd rather support:

/* uses the 'GRANT role TO role' system */
GRANT pg_signal_backend TO test_user;

than:

/*
 * uses the 'GRANT privilege ON object TO role' system
 * seems like cluster-level is actually the right answer here, but
 * we don't support such an object type currently, so using database
 * for the example
 */
GRANT SIGNAL BACKEND ON DATABASE my_database TO test_user;

The goal being that the result of the two commands is identical (where
the second command is only hypothetical at this point, but hopefully the
meaning is conveyed).
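
To make the intent concrete, here is a sketch (user name and backend PID
are hypothetical) of what the first command is meant to convey, and only
that:

GRANT pg_signal_backend TO test_user;
-- test_user can now signal other backends:
--   SELECT pg_cancel_backend(12345);
--   SELECT pg_terminate_backend(12345);
-- the intent is that this should *not* additionally imply:
--   SET ROLE pg_signal_backend;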

However, GRANT'ing a role to a user traditionally also allows the user
to 'SET ROLE' to that user, to create objects as that user, and other
privileges.  This results in two issues, as I see it:

1) The administrator who is looking to grant the specific 'signal
   backend' privilege to a user is unlikely to intend or desire for that
   user to also be able to SET ROLE to the role, or for that user to be
   able to create objects as the default role.

2) If the default role ends up owning objects then we have much less
   flexibility in making changes to that role because we must ensure
   that those objects are preserved across upgrades, etc.

Further, there seems to be no particular use-case which is satisfied by
allowing SET ROLE'ing to the default roles or for those roles to own
objects; indeed, it seems more likely to simply spark confusion and
ultimately may prevent administrators from making use of this system for
granting privileges as they are unable to GRANT just the specific
privilege they wish to.

It now occurs to me that perhaps we can improve on this situation in the
future by formally supporting a role attribute that is "do not allow SET
ROLE to this user" and then changing the default roles to have that
attribute set (and disallowing users from changing it).  That's just
additional flexibility over what the current patch does, however, but
perhaps it helps illustrate that there are cases where only the
privileges of the role are intended to be GRANT'd and not the ability to
SET ROLE to the role or to create objects as that role.

I hope that helps.  I'll be in NY next week also, so perhaps that would
be a good opportunity to have an interactive discussion on this topic.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Choosing parallel_degree

2016-04-15 Thread Robert Haas
On Wed, Apr 13, 2016 at 2:21 PM, Julien Rouhaud
 wrote:
>> I think we should go with "Workers Planned" and "Workers Launched",
>> capitalized exactly that way, and lose "Number Of".
>
> Fixed
>
>> I would be inclined to view this as a reasonable 9.6 cleanup of
>> parallel query, but other people may wish to construe things more
>> strictly than I would.
>
> FWIW, I also see it as a reasonable cleanup.

Hearing no dissent, committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Disallow unique index on system columns

2016-04-15 Thread Andres Freund
On 2016-04-15 11:49:27 -0400, Tom Lane wrote:
> I think what we should do with this is split it into two patches, one
> to fix ALTER REPLICA IDENTITY's crash (which we'd back-patch to 9.4)
> and one to forbid indexes on system columns (which IMO should be HEAD
> only).  The first of those should be pretty uncontroversial, so I'll
> go ahead with pushing it --- anyone have a problem with the second?

Only in that I'm wondering whether we shouldn't be more aggressive about
#2.

Andres




[HACKERS] Unreasonably generic names in matview.sql

2016-04-15 Thread Tom Lane
I've been annoyed one time too many by the fact that matview.sql
creates, and leaves around, tables and views with such generic
names as "t" and "tv".  Unless someone's really attached to that,
I'd like to rename those objects to begin with, say, "mvtest".

regards, tom lane




Re: [HACKERS] Disallow unique index on system columns

2016-04-15 Thread Tom Lane
David Rowley  writes:
> On 15 April 2016 at 13:43, David Rowley  wrote:
>> The attached patch just disallows any index containing a system
>> column, apart from OID.

> Seems I only did half the job as I forgot to think to check for system
> columns that are hidden away inside expressions or predicates.
> The attached fixes that.

I think what we should do with this is split it into two patches, one
to fix ALTER REPLICA IDENTITY's crash (which we'd back-patch to 9.4)
and one to forbid indexes on system columns (which IMO should be HEAD
only).  The first of those should be pretty uncontroversial, so I'll
go ahead with pushing it --- anyone have a problem with the second?

regards, tom lane




Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-04-15 Thread Michael Paquier
On Fri, Apr 15, 2016 at 8:25 PM, Etsuro Fujita
 wrote:
> How about doing something similar for PQprepare/PQexecPrepared in
> postgresExecForeignInsert, postgresExecForeignUpdate, and
> postgresExecForeignDelete?

Yes, I hesitated to touch those, but they are good candidates for this
new interface, and actually it has proved not to be complicated to
plug in the new routines in those code paths.

> Also, how about doing that for PQexec in connection.c?

Here I disagree, this is not adapted. All the PQexec calls are part of
callbacks that are triggered on failures, and we rely on such a
callback when issuing PQcancel. do_sql_command runs commands that take
a short amount of time, so I think as well that it is fine as-is with
PQexec.
-- 
Michael
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 189f290..64a00b4 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -17,6 +17,7 @@
 #include "access/xact.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
+#include "storage/latch.h"
 #include "utils/hsearch.h"
 #include "utils/memutils.h"
 
@@ -448,6 +449,67 @@ GetPrepStmtNumber(PGconn *conn)
 }
 
 /*
+ * Execute query in non-blocking mode and fetch back result
+ */
+PGresult *
+pgfdw_exec_query(PGconn *conn, const char *query)
+{
+	if (!PQsendQuery(conn, query))
+		pgfdw_report_error(ERROR, NULL, conn, false, query);
+
+	return pgfdw_get_result(conn, query);
+}
+
+/*
+ * Scan for results of query run in non-waiting mode
+ *
+ * This should be run after executing a query on the remote server and
+ * offers quick responsiveness by checking for any interruptions. The
+ * last result is returned back to caller, and previous results are
+ * consumed. Caller is responsible for the error handling on the fetched
+ * result.
+ */
+PGresult *
+pgfdw_get_result(PGconn *conn, const char *query)
+{
+	PGresult   *last_res = NULL;
+
+	for (;;)
+	{
+		PGresult *res;
+
+		while (PQisBusy(conn))
+		{
+			int		wc;
+
+			/* Sleep until there's something to do */
+			wc = WaitLatchOrSocket(MyLatch,
+   WL_LATCH_SET | WL_SOCKET_READABLE,
+   PQsocket(conn),
+   -1L);
+			ResetLatch(MyLatch);
+
+			CHECK_FOR_INTERRUPTS();
+
+			/* Data available in socket */
+			if (wc & WL_SOCKET_READABLE)
+			{
+if (!PQconsumeInput(conn))
+	pgfdw_report_error(ERROR, NULL, conn, false, query);
+			}
+		}
+
+		res = PQgetResult(conn);
+		if (res == NULL)
+			break;
+
+		last_res = res;
+	}
+
+	return last_res;
+}
+
+/*
  * Report an error we got from the remote server.
  *
  * elevel: error level to use (typically ERROR, but might be less)
@@ -598,6 +660,32 @@ pgfdw_xact_callback(XactEvent event, void *arg)
 case XACT_EVENT_ABORT:
 	/* Assume we might have lost track of prepared statements */
 	entry->have_error = true;
+
+	/*
+	 * If a command has been submitted to the remote server
+	 * using an asynchronous execution function, the command
+	 * might not have yet completed.  Check to see if a command
+	 * is still being processed by the remote server, and if so,
+	 * request cancellation of the command; if not, abort
+	 * gracefully.
+	 */
+	if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
+	{
+		PGcancel   *cancel;
+		char		errbuf[256];
+
+		if ((cancel = PQgetCancel(entry->conn)))
+		{
+			if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ereport(WARNING,
+		(errcode(ERRCODE_CONNECTION_FAILURE),
+		 errmsg("could not send cancel request: %s",
+errbuf)));
+			PQfreeCancel(cancel);
+		}
+		break;
+	}
+
 	/* If we're aborting, abort all remote transactions too */
 	res = PQexec(entry->conn, "ABORT TRANSACTION");
 	/* Note: can't throw ERROR, it would be infinite loop */
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index ee0220a..4566cf4 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1421,7 +1421,7 @@ postgresReScanForeignScan(ForeignScanState *node)
 	 * We don't use a PG_TRY block here, so be careful not to throw error
 	 * without releasing the PGresult.
 	 */
-	res = PQexec(fsstate->conn, sql);
+	res = pgfdw_exec_query(fsstate->conn, sql);
 	if (PQresultStatus(res) != PGRES_COMMAND_OK)
 		pgfdw_report_error(ERROR, res, fsstate->conn, true, sql);
 	PQclear(res);
@@ -1754,13 +1754,16 @@ postgresExecForeignInsert(EState *estate,
 	 * We don't use a PG_TRY block here, so be careful not to throw error
 	 * without releasing the PGresult.
 	 */
-	res = PQexecPrepared(fmstate->conn,
-		 fmstate->p_name,
-		 fmstate->p_nums,
-		 p_values,
-		 NULL,
-		 NULL,
-		 0);
+	if (!PQsendQueryPrepared(fmstate->conn,
+			 fmstate->p_name,
+			 fmstate->p_nums,
+			 p_values,
+			 NULL,
+			 NULL,
+			 0))
+		pgfdw_report_error(ERROR, NULL, fmstate->conn, true, fmstate->query);
+
+	res = pgfdw

Re: [HACKERS] Parallel indicators not written by pg_get_functiondef

2016-04-15 Thread Michael Paquier
On Fri, Apr 15, 2016 at 8:48 PM, Feike Steenbergen
 wrote:
> pg_get_functiondef(oid) does not return the PARALLEL indicators.
>
> Attached a small patch to have pg_get_functiondef actually add these
> indicators, using commit 7aea8e4f2 (pg_dump.c) as a guide.
>
> The logic is the same as with the volatility: we only append non-default
> indicators.

+1 for a good catch. Your patch is correct.
-- 
Michael




Re: [HACKERS] Rework help interface of vcregress.pl

2016-04-15 Thread Michael Paquier
On Fri, Apr 15, 2016 at 5:07 PM, Magnus Hagander  wrote:
> Applied with only very small changes - you had trailing slashes on src/bin
> and contrib, but not on src/test/modules. I added it to modules, to make it
> consistent. And I removed the "driver" from ECPG, because I'm pretty sure
> that's not a driver... And I marked serial mode as the default schedule.

Thanks for the final push.
-- 
Michael




Re: [HACKERS] Postgres_fdw join pushdown - INNER - FULL OUTER join combination generating wrong result

2016-04-15 Thread Etsuro Fujita

On 2016/04/14 15:20, Ashutosh Bapat wrote:

On Thu, Apr 14, 2016 at 8:42 AM, Etsuro Fujita  wrote:



As you mentioned, we could
support FULL JOIN fully, by encapsulating a joining relation with
conditions into a subquery.  And ISTM that it is relatively easy to
do that, by borrowing some ideas from Hanada-san's original join
pushdown patch.  If it's OK, I'll create a patch for that in a few days.



In his patch, deparsing the targetlist and conditions required that the
FROM clause entries were ready with the columns from base relations and
joins re-aliased. That's no longer true. We deparse every Var node as
<relation alias>.<column name>, where the relation alias is nothing but rN,
N being the index of the RangeTblEntry. So, Hanada-san's method of deparsing
recursively cannot be applied as such now.


I think so, too.  I don't think his ideas could be applied as is.


Here's what needs to be done:
When we identify that a certain relation (base or join) needs a subquery
to be deparsed (because the join relation above it could not pull the
quals up), we remember it in the upper join relation. Before deparsing,
1. we walk the join tree and collect targetlists of all such relations,
2. associate column aliases with those targetlists (saving the column
aliases in resname?) and craft a relation alias, and
3. associate the relation's alias, column aliases and targetlists with the
base relations involved in such relations (maybe creating a list similar
to rtable).
While deparsing a Var node, we check if the corresponding base relation is
itself, or part of, a relation deparsed as a subquery. If it is, then we
look up that Var in the targetlist associated with the base relation and
use the corresponding relation and column aliases for deparsing it.
Otherwise, deparse it as <relation alias>.<column name>, as usual.


Good to know.  That is what I have in mind, except for the way of 
collecting subqueries' columns and associating those columns with 
relation and column aliases, which I think can be done more easily. 
Please find attached a WIP patch.  That patch works well at least for 
queries in your patch.  Maybe I'm missing something, though.
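
To visualize the approach, a rough sketch (hypothetical tables; the ss/c
prefixes follow the aliases used in the WIP patch) of how a FULL JOIN
whose sides carry remote conditions could be deparsed once each side is
wrapped in a subquery:

SELECT ss1.c1, ss2.c1
  FROM (SELECT r1.a FROM public.t1 r1 WHERE (r1.b > 0)) ss1(c1)
  FULL JOIN (SELECT r2.a FROM public.t2 r2) ss2(c1)
  ON ((ss1.c1 = ss2.c1));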



That looks like a big code change to go after feature freeze. So, we
will leave it for 9.7, unless RMT allows us to introduce that change.


OK

Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/deparse.c
--- b/contrib/postgres_fdw/deparse.c
***
*** 88,93  typedef struct foreign_loc_cxt
--- 88,105 
  } foreign_loc_cxt;
  
  /*
+  * Structure for information on subqueries' column aliases
+  */
+ typedef struct ColumnAliases
+ {
+ 	List	   *exprs;			/* subqueries' columns */
+ 	int			max_exprs;		/* maximum number of columns stored */
+ 	int		   *ssno;			/* table alias numbers of columns */
+ 	int		   *sscolno;		/* column alias numbers of columns */
+ 	int			next_ssno;		/* next subquery's table alias number */
+ } ColumnAliases;
+ 
+ /*
   * Context for deparseExpr
   */
  typedef struct deparse_expr_cxt
***
*** 96,101  typedef struct deparse_expr_cxt
--- 108,114 
  	RelOptInfo *foreignrel;		/* the foreign relation we are planning for */
  	StringInfo	buf;			/* output buffer to append to */
  	List	  **params_list;	/* exprs that will become remote Params */
+ 	ColumnAliases *colaliases;	/* subqueries' column aliases */
  } deparse_expr_cxt;
  
  #define REL_ALIAS_PREFIX	"r"
***
*** 103,108  typedef struct deparse_expr_cxt
--- 116,124 
  #define ADD_REL_QUALIFIER(buf, varno)	\
  		appendStringInfo((buf), "%s%d.", REL_ALIAS_PREFIX, (varno))
  
+ #define SS_ALIAS_PREFIX		"ss"
+ #define SSCOL_ALIAS_PREFIX	"c"
+ 
  /*
   * Functions to determine whether an expression can be evaluated safely on
   * remote server.
***
*** 152,164  static void printRemoteParam(int paramindex, Oid paramtype, int32 paramtypmod,
   deparse_expr_cxt *context);
  static void printRemotePlaceholder(Oid paramtype, int32 paramtypmod,
  	   deparse_expr_cxt *context);
! static void deparseSelectSql(List *tlist, List **retrieved_attrs,
   deparse_expr_cxt *context);
  static void deparseLockingClause(deparse_expr_cxt *context);
  static void appendOrderByClause(List *pathkeys, deparse_expr_cxt *context);
  static void appendConditions(List *exprs, deparse_expr_cxt *context);
  static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root,
! 	RelOptInfo *joinrel, bool use_alias, List **params_list);
  
  
  /*
--- 168,188 
   deparse_expr_cxt *context);
  static void printRemotePlaceholder(Oid paramtype, int32 paramtypmod,
  	   deparse_expr_cxt *context);
! static void deparseSelectSql(List *tlist,
!  List *remote_conds,
!  List **retrieved_attrs,
   deparse_expr_cxt *context);
  static void deparseLockingClause(deparse_expr_cxt *context);
  static void appendOrderByClause(List *pathkeys, deparse_expr_cxt *context);
  static void appendConditions(List *exprs, deparse_expr_cxt *context);
  static void deparseFromExprForRel(St

[HACKERS] Parallel indicators not written by pg_get_functiondef

2016-04-15 Thread Feike Steenbergen
pg_get_functiondef(oid) does not return the PARALLEL indicators.

Attached a small patch to have pg_get_functiondef actually add these
indicators, using commit 7aea8e4f2 (pg_dump.c) as a guide.

The logic is the same as with the volatility: we only append non-default
indicators.
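
For example (the function is made up), with the patch one would expect the
non-default indicator to show up in the reconstructed definition:

CREATE FUNCTION add_one(int) RETURNS int
  AS 'SELECT $1 + 1' LANGUAGE sql IMMUTABLE PARALLEL SAFE;
SELECT pg_get_functiondef('add_one(int)'::regprocedure);
-- should now include PARALLEL SAFE; PARALLEL UNSAFE (the default)
-- would still not be printed.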

Feike Steenbergen


add_parallel_indicators_to_get_functiondef.patch
Description: Binary data



Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-04-15 Thread Etsuro Fujita

On 2016/04/15 14:31, Michael Paquier wrote:

On Thu, Apr 14, 2016 at 10:44 AM, Etsuro Fujita
 wrote:

On 2016/04/13 21:50, Michael Paquier wrote:

On Wed, Apr 13, 2016 at 9:46 PM, Robert Haas 
wrote:

On Tue, Apr 12, 2016 at 10:24 PM, Etsuro Fujita
 wrote:


How about we encapsulate the while (PQisBusy(...)) loop into a new
function pgfdw_get_result(), which can be called after first calling
PQsendQueryParams()?  So then this code will say dmstate->result =
pgfdw_get_result(dmstate->conn).  And we can do something similar for
the other call to PQexecParams() in create_cursor().  Then let's also
add something like pgfdw_exec_query() which calls PQsendQuery() and
then pgfdw_get_result, and use that to replace all of the existing
calls to PQexec().

Then all the SQL postgres_fdw executes would be interruptible, not
just the new stuff.



I would be happy if you work on that.



OK, so I have finished with the attached.


Thank you for working on that!


One thing that I noticed in
the previous patch version is that it completely ignored cases where
multiple PGresult could be returned by server, say when multiple
queries are sent in the same string: PQexec gets always the last one,
so I think that we had better do the same here.


Seems reasonable.


so I think that we had better do the same here. I have switched all
the PQexec calls to a custom routine that combines
PQsendQuery/PQgetResult, and PQexecParams is switched to
PQsendQueryParams/PQgetResult. This structure allows all queries run
though postgres_fdw.c to be interruptible.


How about doing something similar for PQprepare/PQexecPrepared in 
postgresExecForeignInsert, postgresExecForeignUpdate, and 
postgresExecForeignDelete?  Also, how about doing that for PQexec in 
connection.c?


Best regards,
Etsuro Fujita






Re: [HACKERS] Odd system-column handling in postgres_fdw join pushdown patch

2016-04-15 Thread Etsuro Fujita

On 2016/04/14 13:04, Robert Haas wrote:

On Wed, Apr 13, 2016 at 11:21 PM, Etsuro Fujita
 wrote:

2. When a join is pushed down, deparse system columns using something
like "CASE WHEN r1.* IS NOT NULL THEN 0 END", except for the table OID
column, which gets deparsed with the table OID in place of 0.  This
delivers the correct behavior in the presence of outer joins.

I think that that would cause useless data transfer for such columns.
Why not set values locally for such columns?



Because that doesn't work properly when there are outer joins
involved.


Understood.  It looks like I overlooked Ashutosh's mail about that. 
Thanks, Robert and Ashutosh!


Best regards,
Etsuro Fujita






[HACKERS] checkpoint_flush_after documentation inconsistency

2016-04-15 Thread Magnus Hagander
The documentation says that the default value is 128kB on Linux, but the
code says it's 256kB.

Not sure which one is correct, but the other one should be updated :) I'm
guessing it's a copy/paste mistake in the docs, but since I'm not sure, I'm
not just pushing a fix.
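
For anyone wanting to check their own build while this gets sorted out
(the ALTER SYSTEM line is only an illustration):

SHOW checkpoint_flush_after;                      -- what the build defaults to
ALTER SYSTEM SET checkpoint_flush_after = '256kB';
SELECT pg_reload_conf();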

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Declarative partitioning

2016-04-15 Thread Ashutosh Bapat
With the new set of patches, I am getting following warnings but no
compilation failures. Patches apply smoothly.
partition.c:1216:21: warning: variable ‘form’ set but not used
[-Wunused-but-set-variable]
partition.c:1637:20: warning: variable ‘form’ set but not used
[-Wunused-but-set-variable]

For creating a partition, the documentation seems to suggest the syntax
create table t1_p1 partition of t1 for values {start {0} end {100}
exclusive;
but following syntax works
create table t1_p1 partition of t1 for values start (0) end (100) exclusive;





> > 5. In expand_partitioned_rtentry(), instead of finding all the leaf level
> > relations, it might be better to build the RTEs, RelOptInfo and paths for
> > intermediate relations as well. This helps in partition pruning. In your
> > introductory write-up you have mentioned this. It might be better if v1
> > includes this change, so that the partition hierarchy is visible in
> EXPLAIN
> > output as well.
> >
> > 6. Explain output of scan on a partitioned table shows Append node with
> > individual table scans as sub-plans. Maybe we should annotate the Append
> node
> > with
> > the name of the partitioned table to make EXPLAIN output more readable.
>
> I got rid of all the optimizer changes in the new version (except a line
> or two).  I did that by switching to storing parent-child relationships in
> pg_inherits so that all the existing optimizer code for inheritance sets
> works unchanged.  Also, the implementation detail that required to put
> only leaf partitions in the append list is also gone.  Previously no
> storage was allocated for partitioned tables (either root or any of the
> internal partitions), so it was being done that way.
>

With this set of patches we are still flattening all the partitions.
postgres=# create table t1 (a int, b int) partition by range(a);
CREATE TABLE
postgres=# create table t1_p1 partition of t1 for values start (0) end
(100) exclusive partition by range(b);
CREATE TABLE
postgres=# create table t1_p1_p1 partition of t1_p1 for values start (0)
end (100) exclusive;
CREATE TABLE
postgres=# explain verbose select * from t1;
   QUERY PLAN
-
 Append  (cost=0.00..32.60 rows=2262 width=8)
   ->  Seq Scan on public.t1  (cost=0.00..0.00 rows=1 width=8)
 Output: t1.a, t1.b
   ->  Seq Scan on public.t1_p1  (cost=0.00..0.00 rows=1 width=8)
 Output: t1_p1.a, t1_p1.b
   ->  Seq Scan on public.t1_p1_p1  (cost=0.00..32.60 rows=2260 width=8)
 Output: t1_p1_p1.a, t1_p1_p1.b
(7 rows)
Retaining the partition hierarchy would help to push down joins across the
partition hierarchy effectively.
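
As an illustration (using the syntax from this patch set; table names are
hypothetical), two tables partitioned the same way could then be joined
partition-by-partition instead of through a single flattened Append:

create table s1 (a int, b int) partition by range(a);
create table s1_p1 partition of s1 for values start (0) end (100) exclusive;
create table s2 (a int, c int) partition by range(a);
create table s2_p1 partition of s2 for values start (0) end (100) exclusive;
-- with the hierarchy retained, a join such as
--   select * from s1 join s2 using (a);
-- could be broken down into joins between matching partitions.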


>
> Regarding 6, it seems to me that this is because Append does not have an associated
> relid (like scan nodes have with scanrelid).  Maybe we need to either fix
> Append or create some enhanced version of Append which would also support
> dynamic pruning.
>

Right, I think Append might store the relid of the parent table as well as
partitioning information at that level along with the subplans.

Some more comments:
1. Would it be better to declare PartitionDescData as
{
int nparts;
PartitionInfo *partinfo; /* array of partition information structures. */
}

2. The new syntax allows CREATE TABLE to be specified as a partition of an
already partitioned table. Is it possible to do the same for CREATE FOREIGN
TABLE? Or that's material for v2? Similarly for ATTACH PARTITION.

3. PartitionKeyData contains KeyTypeCollInfo, whose contents can be
obtained by calling functions exprType, exprTypemod on partexprs. Why do we
need to store that information as a separate member?
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-04-15 Thread Masahiko Sawada
On Fri, Apr 15, 2016 at 3:00 PM, Kyotaro HORIGUCHI
 wrote:
> At Fri, 15 Apr 2016 08:52:56 +0530, Amit Kapila  
> wrote in 
>> On Thu, Apr 14, 2016 at 1:10 PM, Masahiko Sawada 
>> wrote:
>> >
>> > On Thu, Apr 14, 2016 at 1:11 PM, Kyotaro HORIGUCHI
>> >  wrote:
>> > > At Thu, 14 Apr 2016 12:42:06 +0900, Fujii Masao 
>> wrote in > >
>> > >> > Yes, this is what I was trying to explain to Fujii-san upthread and
>> I have
>> > >> > also verified that the same works on Windows.
>> > >>
>> > >> Oh, okay, understood. Thanks for explaining that!
>> > >>
>> > >> > I think one point which we
>> > >> > should try to ensure in this patch is whether it is good to use
>> > >> > TopMemoryContext to allocate the memory in the check or assign
>> function or
>> > >> > should we allocate some temporary context (like we do in
>> load_tzoffsets())
>> > >> > to perform parsing and then delete the same at end.
>> > >>
>> > >> Seems yes if some memories are allocated by palloc and they are not
>> > >> free'd while parsing s_s_names.
>> > >>
>> > >> Here are another comment for the patch.
>> > >>
>> > >> -SyncRepFreeConfig(SyncRepConfigData *config)
>> > >> +SyncRepFreeConfig(SyncRepConfigData *config, bool itself)
>> > >>
>> > >> SyncRepFreeConfig() was extended so that it accepts the second boolean
>> > >> argument. But it's always called with the second argument = false. So,
>> > >> I just wonder why that second argument is required.
>> > >>
>> > >> SyncRepConfigData *config =
>> > >> -(SyncRepConfigData *) palloc(sizeof(SyncRepConfigData));
>> > >> +(SyncRepConfigData *) malloc(sizeof(SyncRepConfigData));
>> > >>
>> > >> Why should we use malloc instead of palloc here?
>> > >>
>> > >> *If* we use malloc, its return value must be checked.
>> > >
>> > > Because it should live irrelevant to any memory context, as guc
>> > > values are so. guc.c provides guc_malloc for this purpose, which
>> > > is a malloc having some simple error handling, so having
>> > > walsender_malloc would be reasonable.
>> > >
>> > > I don't think it's good to use TopMemoryContext for syncrep
>> > > parser. syncrep_scanner.l uses palloc. This basically causes a
>> > > memory leak on all postgres processes.
>> > >
>> > > It might be better if the parser works on the current memory
>> > > context and the caller copies the result on the malloc'ed
>> > > memory. But some list-creation functions using palloc..
>>
>> How about if we do all the parsing stuff in temporary context and then copy
>> the results using TopMemoryContext?  I don't think it will be a leak in
>> TopMemoryContext, because next time we try to check/assign s_s_names, it
>> will free the previous result.
>
> I agree with you. A temporary context for the parser seems
> reasonable. TopMemoryContext is created very early in main() so
> palloc on it is effectively the same with malloc.
> One problem is that only the top memory block is assumed to be
> free()'d, not pfree()'d by guc_set_extra. It makes this quite
> ugly..
>
> Maybe we shouldn't use the extra for this purpose.
>
> Thoughts?
>

How about if the check_hook just parses the parameter in
CurrentMemoryContext (i.e., a T_AllocSetContext), and then the
assign_hook copies syncrep_parse_result to TopMemoryContext?
Because syncrep_parse_result is a global variable, these hooks can see it.

Here are some comments.

-SyncRepUpdateConfig(void)
+SyncRepFreeConfig(SyncRepConfigData *config, bool itself, MemoryContext cxt)

Sorry, it's my bad.  The itself variable is no longer needed because
SyncRepFreeConfig is called by only one function.

-void
-SyncRepFreeConfig(SyncRepConfigData *config)
+SyncRepConfigData *
+SyncRepCopyConfig(SyncRepConfigData *oldconfig, MemoryContext targetcxt)

I'm not sure the targetcxt argument is necessary.

Regards,

--
Masahiko Sawada




Re: [HACKERS] Rework help interface of vcregress.pl

2016-04-15 Thread Magnus Hagander
On Fri, Apr 15, 2016 at 10:03 AM, Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp> wrote:

> At Fri, 15 Apr 2016 14:45:33 +0900, Michael Paquier
> <michael.paqu...@gmail.com> wrote in
> > Hi all,
> > (Windows-only be careful)
> >
> > Horiguchi-san has mentioned yesterday
> > (
> http://www.postgresql.org/message-id/20160414.172539.34325458.horiguchi.kyot...@lab.ntt.co.jp
> )
> > that we are missing a couple of modes in vcregress.pl in its help
> > message: modulescheck, bincheck, recoverycheck.
> >
> > The help message given to users is ugly and unreadable:
> > $ perl vcregress.pl
> > Usage: vcregress.pl
> >
> 
> > [schedule]
>
> This seems to be an accumulation of gradual editing for every check
> mode.
>

Yeah. (I noticed you sent this just as I pushed the fix)



> > So I would like to suggest the attached patch that makes things easier
> > to understand:
> > $ perl vcregress.pl
> > Usage: vcregress.pl <mode> [ <schedule> ]
>
> It is an issue of this patch, but the command line is not
> vcregress.pl, but vcregress[.bat]. However nobody would care
> about the difference.
>

The actual usage is for vcregress.pl. vcregress.bat happens to be calling
that one. It's a fine line :)



> > Options for <mode>:
> >   bincheck   run tests of utilities in src/bin/
> >   check  deploy instance and run regression tests on it
> >   contribcheck   run tests of modules in contrib/
> >   ecpgcheck  run regression tests of ECPG driver
> >   installcheck   run regression tests on existing instance
> >   isolationcheck run isolation tests
> >   modulescheck   run tests of modules in src/test/modules
> >   plcheck   run tests of PL languages
> >   recoverycheck  run recovery test suite
> >   upgradecheck   run tests of pg_upgrade
> >
> > Options for <schedule>:
> >   serial serial mode
> >   parallel   parallel mode
>
> This looks good to me but since  is optional, some
> description about default behavior would be needed.
>

I had already added that one independently.

Thanks!

//Magnus


Re: [HACKERS] Rework help interface of vcregress.pl

2016-04-15 Thread Magnus Hagander
On Fri, Apr 15, 2016 at 7:45 AM, Michael Paquier 
wrote:

> Hi all,
> (Windows-only be careful)
>
> Horiguchi-san has mentioned yesterday
> (
> http://www.postgresql.org/message-id/20160414.172539.34325458.horiguchi.kyot...@lab.ntt.co.jp
> )
> that we are missing a couple of modes in vcregress.pl in its help
> message: modulescheck, bincheck, recoverycheck.
>
> The help message given to users is ugly and unreadable:
> $ perl vcregress.pl
> Usage: vcregress.pl
>
> 
> [schedule]
> So I would like to suggest the attached patch that makes things easier
> to understand:
> $ perl vcregress.pl
> Usage: vcregress.pl <mode> [ <schedule> ]
>
> Options for <mode>:
>   bincheck   run tests of utilities in src/bin/
>   check  deploy instance and run regression tests on it
>   contribcheck   run tests of modules in contrib/
>   ecpgcheck  run regression tests of ECPG driver
>   installcheck   run regression tests on existing instance
>   isolationcheck run isolation tests
>   modulescheck   run tests of modules in src/test/modules
>   plcheck   run tests of PL languages
>   recoverycheck  run recovery test suite
>   upgradecheck   run tests of pg_upgrade
>
> Options for <schedule>:
>   serial serial mode
>   parallel   parallel mode
>

Applied with only very small changes - you had trailing slashes on src/bin
and contrib, but not on src/test/modules. I added it to modules, to make it
consistent. And I removed the "driver" from ECPG, because I'm pretty sure
that's not a driver... And I marked serial mode as the default schedule.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Rework help interface of vcregress.pl

2016-04-15 Thread Kyotaro HORIGUCHI
At Fri, 15 Apr 2016 14:45:33 +0900, Michael Paquier  
wrote in 
> Hi all,
> (Windows-only be careful)
> 
> Horiguchi-san has mentioned yesterday
> (http://www.postgresql.org/message-id/20160414.172539.34325458.horiguchi.kyot...@lab.ntt.co.jp)
> that we are missing a couple of modes in vcregress.pl in its help
> message: modulescheck, bincheck, recoverycheck.
> 
> The help message given to users is ugly and unreadable:
> $ perl vcregress.pl
> Usage: vcregress.pl
> 
> [schedule]

This seems to be an accumulation of gradual editing for every check
mode.

> So I would like to suggest the attached patch that makes things easier
> to understand:
> $ perl vcregress.pl
> Usage: vcregress.pl <mode> [ <schedule> ]

It is an issue of this patch, but the command line is not
vcregress.pl, but vcregress[.bat]. However nobody would care
about the difference.

> Options for <mode>:
>   bincheck   run tests of utilities in src/bin/
>   check  deploy instance and run regression tests on it
>   contribcheck   run tests of modules in contrib/
>   ecpgcheck  run regression tests of ECPG driver
>   installcheck   run regression tests on existing instance
>   isolationcheck run isolation tests
>   modulescheck   run tests of modules in src/test/modules
>   plcheck   run tests of PL languages
>   recoverycheck  run recovery test suite
>   upgradecheck   run tests of pg_upgrade
> 
> Options for <schedule>:
>   serial serial mode
>   parallel   parallel mode

This looks good to me, but since <schedule> is optional, some
description about default behavior would be needed.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



