Re: [HACKERS] [BUGS] Bug in Physical Replication Slots (at least 9.5)?

2017-09-03 Thread Kyotaro HORIGUCHI
Hello,

Thank you for reviewing this.

At Mon, 28 Aug 2017 20:14:54 +0900, Michael Paquier  
wrote in 
> On Mon, Aug 28, 2017 at 8:02 PM, Kyotaro HORIGUCHI
>  wrote:
> > The first patch (0001-) fixes this problem, preventing the
> > problematic state of WAL segments by retarding restart LSN of a
> > physical replication slot in a certain condition.
> 
> FWIW, I have this patch marked on my list of things to look at, so you
> can count me as a reviewer. There are also some approaches that I
> would like to test because I rely on replication slots for some
> infrastructure. Still...

This test patch modifies the code for simplicity. The window for
this bug to occur opens on receiving the first record of a segment
and closes, in most cases, on receiving the second record, or
after receiving several more records. Would intentionally emitting
a record that spans two or more segments work?


> +if (oldFlushPtr != InvalidXLogRecPtr &&
> +(restartLSN == InvalidXLogRecPtr ?
> + oldFlushPtr / XLOG_SEG_SIZE != flushPtr / XLOG_SEG_SIZE :
> + restartLSN / XLOG_BLCKSZ != flushPtr / XLOG_BLCKSZ))
> I find such code patterns not readable.

Yeah, I agree. I rewrote that part, and the result in the attached
patch is far cleaner than the original blob.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From c15473998467a17ac6070ed68405a14ca0f98ae6 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Wed, 1 Feb 2017 16:07:22 +0900
Subject: [PATCH 1/2] Retard restart LSN of a slot when a segment starts with a
 contrecord.

A physical-replication standby can stop just at the boundary of WAL
segments. The restart_lsn of its slot on the master can be assumed to
point at the same location. In that case the last segment on the
master will be removed after some checkpoints. If the last record of
the last replicated segment continues into the next segment, the
continuation record exists only on the master, and the standby cannot
restart because the split record is not available from a single
source.

This patch detains restart_lsn in the last segment when the first page
of the next segment is a continuation record.
---
 src/backend/replication/walsender.c | 123 ++--
 1 file changed, 116 insertions(+), 7 deletions(-)

diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index db346e6..30e222c 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -217,6 +217,13 @@ static struct
 	WalTimeSample last_read[NUM_SYNC_REP_WAIT_MODE];
 }			LagTracker;
 
+/*
+ * This variable corresponds to restart_lsn in pg_replication_slots for a
+ * physical slot. This has a valid value only when it differs from the current
+ * flush pointer.
+ */
+static XLogRecPtr	   restartLSN = InvalidXLogRecPtr;
+
 /* Signal handlers */
 static void WalSndLastCycleHandler(SIGNAL_ARGS);
 
@@ -251,7 +258,7 @@ static void LagTrackerWrite(XLogRecPtr lsn, TimestampTz local_flush_time);
 static TimeOffset LagTrackerRead(int head, XLogRecPtr lsn, TimestampTz now);
 static bool TransactionIdInRecentPast(TransactionId xid, uint32 epoch);
 
-static void XLogRead(char *buf, XLogRecPtr startptr, Size count);
-static bool XLogRead(char *buf, XLogRecPtr startptr, Size count, bool notfoundok);
 
 
 /* Initialize walsender process before entering the main command loop */
@@ -546,6 +553,9 @@ StartReplication(StartReplicationCmd *cmd)
 			ereport(ERROR,
 	(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
	 errmsg("cannot use a logical replication slot for physical replication")));
+
+		/* Restore restartLSN from replication slot */
+		restartLSN = MyReplicationSlot->data.restart_lsn;
 	}
 
 	/*
@@ -561,6 +571,10 @@ StartReplication(StartReplicationCmd *cmd)
 	else
 		FlushPtr = GetFlushRecPtr();
 
+	/* Set InvalidXLogRecPtr if catching up */
+	if (restartLSN == FlushPtr)
+		restartLSN = InvalidXLogRecPtr;
+
 	if (cmd->timeline != 0)
 	{
 		XLogRecPtr	switchpoint;
@@ -770,7 +784,7 @@ logical_read_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr, int req
 		count = flushptr - targetPagePtr;	/* part of the page available */
 
 	/* now actually read the data, we know it's there */
-	XLogRead(cur_page, targetPagePtr, XLOG_BLCKSZ);
+	XLogRead(cur_page, targetPagePtr, XLOG_BLCKSZ, false);
 
 	return count;
 }
@@ -1738,7 +1752,7 @@ static void
 ProcessStandbyReplyMessage(void)
 {
 	XLogRecPtr	writePtr,
-flushPtr,
+flushPtr, oldFlushPtr,
 applyPtr;
 	bool		replyRequested;
 	TimeOffset	writeLag,
@@ -1798,6 +1812,7 @@ ProcessStandbyReplyMessage(void)
 		WalSnd	   *walsnd = MyWalSnd;
 
 		SpinLockAcquire(&walsnd->mutex);
+		oldFlushPtr = walsnd->flush;
 		walsnd->write = writePtr;
 		walsnd->flush = flushPtr;
 		walsnd->apply = applyPtr;
@@ -1821,7 +1836,93 @@ ProcessStandbyReplyMessage(void)
 		if (SlotIsLogical(MyReplicationSlot))
 			LogicalConfirmReceivedLocation(flushPtr);
 		else
-			PhysicalConfirmReceivedLocation(flushPtr);

Re: [HACKERS] Minor code improvement to postgresGetForeignPlan

2017-09-03 Thread Tatsuro Yamada
Hi Fujita-san,

> The patch looks good to me, so I'll mark this as Ready for Committer.

Thanks for reviewing my patch.


> (I'm not sure we should do the same thing to the function declaration in 
> other places such as fdwapi.h and the documentation for consistency, but if 
> so, I'd vote for leaving that for another patch.)

+1

Regards,
Tatsuro Yamada



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CLUSTER command progress monitor

2017-09-03 Thread Michael Paquier
On Thu, Aug 31, 2017 at 11:12 AM, Tatsuro Yamada
 wrote:
> Then I have questions.
>
>   * Should we have separate views for them?  Or should both be covered by the
> same view with some indication of which command (CLUSTER or VACUUM FULL)
> is actually running?

Using the same view for both, and indicating in the view whether it
is VACUUM or CLUSTER that is actually running, would be better IMO.
Coming up with a name more generic than pg_stat_progress_cluster may
be better though if this covers VACUUM FULL as well; the user-facing
documentation does not say that VACUUM FULL is actually CLUSTER.

> I'll add this patch to CF2017-09.
> Any comments or suggestion are welcome.

Nice to see that you are taking the time to implement patches for
upstream, Yamada-san!
-- 
Michael




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-03 Thread Thomas Munro
Thanks for the review and commits so far.  Here's a rebased, debugged
and pgindented version of the remaining patches.  I ran pgindent with
--list-of-typedefs="SharedRecordTableKey,SharedRecordTableEntry,SharedTypmodTableEntry,SharedRecordTypmodRegistry,Session"
to fix some weirdness around these new typenames.

While rebasing the 0002 patch (removal of tqueue.c's remapping logic),
I modified the interface of the newly added
ExecParallelCreateReaders() function from commit 51daa7bd because it
no longer has any reason to take a TupleDesc.

On Fri, Aug 25, 2017 at 1:46 PM, Andres Freund  wrote:
> On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
>> 2.  Andres didn't like what I did to DecrTupleDescRefCount, namely
>> allowing to run when there is no ResourceOwner.  I now see that this
>> is probably an indication of a different problem; even if there were a
>> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
>> which a worker would reset before being reused), it wouldn't be the
>> one that was active when the TupleDesc was created.  I think I have
>> failed to understand the contracts here and will think/read about it
>> some more.
>
> Maybe I'm missing something, but isn't the issue here that using
> DecrTupleDescRefCount() simply is wrong, because we're not actually
> necessarily tracking the TupleDesc via the resowner mechanism?

Yeah.  Thanks.

> If you look at the code, in the case it's a previously unknown tupledesc
> it's registered with:
>
> entDesc = CreateTupleDescCopy(tupDesc);
> ...
> /* mark it as a reference-counted tupdesc */
> entDesc->tdrefcount = 1;
> ...
> RecordCacheArray[newtypmod] = entDesc;
> ...
>
> Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
> ResourceOwnerRememberTupleDesc() managing the reference from the
> array. Nor was there one before.
>
> We have other code managing TupleDesc lifetimes similarly, and look at
> how they're freeing it:
> /* Delete tupdesc if we have it */
> if (typentry->tupDesc != NULL)
> {
> /*
>  * Release our refcount, and free the tupdesc if none 
> remain.
>  * (Can't use DecrTupleDescRefCount because this 
> reference is not
>  * logged in current resource owner.)
>  */
> Assert(typentry->tupDesc->tdrefcount > 0);
> if (--typentry->tupDesc->tdrefcount == 0)
> FreeTupleDesc(typentry->tupDesc);
> typentry->tupDesc = NULL;
> }

Right.  I have changed shared_record_typmod_registry_worker_detach()
to be more like that, with an explanation.

> This also made me think about how we're managing the lookup from the
> shared array:
>
> /*
>  * Our local array can now point 
> directly to the TupleDesc
>  * in shared memory.
>  */
> RecordCacheArray[typmod] = tupdesc;
>
> Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
> which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
> that case manipulate shared memory in a concurrency unsafe manner.

No.  See this change, in that and similar code paths:

-   IncrTupleDescRefCount(tupDesc);
+   PinTupleDesc(tupDesc);

The difference between IncrTupleDescRefCount() and PinTupleDesc() is
that the latter recognises non-refcounted tuple descriptors
(tdrefcount == -1) and does nothing.  Shared tuple descriptors are not
reference counted (see TupleDescCopy() which initialises
dst->tdrefcount to -1).  It was for foolish symmetry that I was trying
to use ReleaseTupleDesc() in shared_record_typmod_registry_detach()
before, since it also knows about non-refcounted tuple descriptors,
but that's not appropriate: it calls DecrTupleDescRefCount() which
assumes that we're using resource owners.  We're not.

To summarise the object lifetime management situation created by this
patch: shared TupleDesc objects accumulate in per-session DSM memory
until eventually the session ends and the DSM memory goes away.  A bit
like CacheMemoryContext: there is no retail cleanup of shared
TupleDesc objects.  BUT: the DSM detach callback is used to clear out
backend-local pointers to that stuff (and any non-shared reference
counted TupleDesc objects that might be found), in anticipation of
being able to reuse a worker process one day (which will involve
attaching to a new session, so we mustn't retain any traces of the
previous session in our local state).  Maybe I'm trying to be a little
too clairvoyant there...

I improved the cleanup code: now it frees RecordCacheArray and
RecordCacheHash and reinstalls NULL pointers.  Also it deals with
errors in GetSes

Re: [HACKERS] [PATCH] Assert that the correct locks are held when calling PageGetLSN()

2017-09-03 Thread Michael Paquier
On Fri, Sep 1, 2017 at 3:53 AM, Jacob Champion  wrote:
> The patch is really two pieces: add the assertion, and fix the callers
> that would trip it. The latter part is still in progress, because I'm
> running into some places where I'm not sure what the correct way
> forward is.

I looked at your patch.

> While working on checksum support for GPDB, we noticed that several
> callers of PageGetLSN() didn't follow the correct locking procedure.

Well, PageGetLSN can be used in some hot code paths, xloginsert.c
being one, so it does not seem wise to me to switch it to something
more complicated than a macro; it is also not bound to any locking
contract now. For example, pageinspect works on a copy of the buffer,
so that's one case where we just don't care. So why should it be
tightened in Postgres? That's the portion of the proposal I don't
get. You could introduce a custom API for this purpose. Isn't the
origin of your problems with checksums in GPDB that data is
replicated across many nodes, so things need to be locked uniquely?

> - This change introduces a subtle source incompatibility: whereas
> previously both Pages and PageHeaders could be passed to PageGetLSN(),
> now callers must explicitly cast to Page if they don't already have
> one.

That would produce warnings for extensions, so people would likely
catch that when compiling with a newer version, if they use -Werror...

> - Is my use of FRONTEND here correct? The documentation made it seem
> like some compilers could try to link against the
> AssertPageIsLockedForLSN function, even if PageGetLSN was never
> called.

+#if defined(USE_ASSERT_CHECKING) && !defined(FRONTEND)
+void
+AssertPageIsLockedForLSN(Page page)
+{
From a design point of view, bufmgr.c is an API interface for buffer
management on the backend side, so just using FRONTEND there looks
wrong to me (bufmgr.h is already bad enough IMO now).

> - Use of PageGetLSN() in heapam.c seems to be incorrect, but I don't
> really know what the best way is to fix it. It explicitly unlocks the
> buffer, with the comment that visibilitymap_set() needs to handle
> locking itself, but then calls PageGetLSN() to determine whether
> visibilitymap_set() needs to be called.

This bit in src/backend/access/transam/README may interest you:
Note that we must only use PageSetLSN/PageGetLSN() when we know the action
is serialised. Only Startup process may modify data blocks during recovery,
so Startup process may execute PageGetLSN() without fear of serialisation
problems. All other processes must only call PageSet/GetLSN when holding
either an exclusive buffer lock or a shared lock plus buffer header lock,
or be writing the data block directly rather than through shared buffers
while holding AccessExclusiveLock on the relation.

So I see no problem with the existing caller.
-- 
Michael




Re: [HACKERS] CLUSTER command progress monitor

2017-09-03 Thread Tatsuro Yamada

Hi Sawada-san,

Thanks for taking your time.
I'll be more careful.

Regards,
Tatsuro Yamada

On 2017/09/04 11:51, Masahiko Sawada wrote:

On Mon, Sep 4, 2017 at 11:37 AM, Tatsuro Yamada
 wrote:

Hi Sawada-san, Thomas,

Thanks for sharing the regression.diff.
I realized Thomas's comment was right.

Attached patch is fixed version.
Could you try it?



Yeah, in my environment the regression test passed. Thanks.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center









Re: [HACKERS] Parallel worker error

2017-09-03 Thread Pavan Deolasee
On Wed, Aug 30, 2017 at 8:49 PM, Robert Haas  wrote:

>
>
> But since that's an established design fl^H^Hprinciple, maybe that
> means we should go with the approach of teaching SerializeGUCState()
> to ignore role altogether and instead have ParallelWorkerMain call
> SetCurrentRoleId using information passed via the FixedParallelState
> (not sure of the precise details here).
>
>
Seems like a reasonable approach to me.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages

2017-09-03 Thread Dilip Kumar
On Thu, Aug 31, 2017 at 11:27 PM, Robert Haas  wrote:

I have repeated one of the tests after fixing the problems you
pointed out, but this time the results are not that impressive. It
seems the check below was the problem in the previous patch:

    if (tbm->nentries > tbm->maxentries / 2)
        tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;

We were lossifying only until tbm->nentries became 90% of
tbm->maxentries, but this later check was then always true, so
tbm->maxentries was doubled. That was the main reason for the huge
reduction in lossy pages: basically, we started using more work_mem
in all the cases.

I have taken one reading just to see the impact of fixing that
problem in the patch.

 Work_mem: 40 MB
(Lossy Pages count)

Query    head      patch
6        995223    733087
14       337894    206824
15       995417    798817
20       1654016   1588498

Still, we see a good reduction in the lossy page count.  I will run
the test at different work_mem settings and for different values of
TBM_FILLFACTOR and share the numbers soon.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pg_basebackup throttling doesn't throttle as promised

2017-09-03 Thread Michael Paquier
On Sat, Sep 2, 2017 at 6:42 AM, Jeff Janes  wrote:
> I'm attaching a patch for each option.  Each one independently solves the
> problem.  But I think we should do both.  There is no point in issuing
> unnecessary kill system calls, and there may also be more spurious wake-ups
> than just these ones.

I agree that 1) and 2) are sensible things to do now. At some point I
think we will want do_pg_stop_backup() to use a wait event instead of
a pg_usleep call when waiting for a segment to be archived, in which
case we could use WalSndWakeup() to set the latch and speed things
up. So it would be nice to keep track of this possibility in the
code. We could as well do that with a new API that signals only WAL
senders in backup state, though.

SpinLockAcquire(&walsnd->mutex);
latch = walsnd->latch;
+   state = walsnd->state;
SpinLockRelease(&walsnd->mutex);

-   if (latch != NULL)
+   if (latch != NULL && state != WALSNDSTATE_BACKUP)
SetLatch(latch);
This surely merits a comment.
-- 
Michael




Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-03 Thread Amit Langote
Hi Ashutosh,

On 2017/09/04 13:51, Ashutosh Bapat wrote:
> Hi,
> Thomas's application to track patches told me that this patch needs
> rebase. It also required some changes to the code. Here's the updated
> version. I have squashed those two patches together.

Thanks for the rebased patch.

Would it make sense to post them in a new thread?  When this message
popped into my inbox in a thread titled "dropping partitioned tables
without CASCADE", I wondered for a second if there is still something left
to be done about that. :)

Thanks,
Amit





Re: [HACKERS] asynchronous execution

2017-09-03 Thread Kyotaro HORIGUCHI
At Thu, 31 Aug 2017 21:52:36 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170831.215236.135328985.horiguchi.kyot...@lab.ntt.co.jp>
> At Thu, 03 Aug 2017 09:30:57 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
>  wrote in 
> <20170803.093057.261590619.horiguchi.kyot...@lab.ntt.co.jp>
> > > Unfortunately, that's probably another gigantic patch (that
> > > should probably be written by Andres).
> > 
> > Yeah, but an async executor on the current style of executor seems
> > futile work, and sitting around until the patch arrives is also a
> > waste of time. So I'm planning to include the following stuff in
> > the next PoC patch, even though I'm not sure it can land on the
> > coming Andres' patch.
> > 
> > - Tuple passing outside the call-stack. (I remember it was
> >   discussed earlier in this thread but I cannot find it now)
> > 
> >   This should be included in the Andres' patch.
> > 
> > - Give executor an ability to run from data-source (or driver)
> >   nodes to the root.
> > 
> >   I'm not sure this is included, but I suppose he is aiming this
> >   kind of thing.
> > 
> > - Rebuild asynchronous execution on the upside-down executor.
> 
> Anyway, I modified ExecProcNode into push-up form and it *seems* to
> work to some extent. But triggers and cursors are almost broken and
> several other regression tests fail. Some nodes, such as windowagg,
> are terribly difficult to change to this push-up form (using a state
> machine). And of course it is terribly inefficient.
> 
> I'm afraid all of this will turn out to be in vain. But anyway, and
> FWIW, I'll post the work here after some cleansing work on it.

So, here it is. Maybe this is really a bad way to go. Worst of all,
it is terribly hard to maintain, because the behavior of the state
machine constructed in this patch is hardly predictable and thus
easily broken. During the 'cleansing work' I hit many crashes and
infinite loops, and they were somewhat hard to diagnose. This will
soon be broken by subsequent commits.

Anyway, and again FWIW, here it is. I'll leave this alone for a while
(at least for the duration of this CF) and reconsider async execution
in a different form.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


poc_pushexecutor_20170904_4faa1dc.tar.bz2
Description: Binary data



Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-03 Thread Ashutosh Bapat
Hi,
Thomas's application to track patches told me that this patch needs
rebase. It also required some changes to the code. Here's the updated
version. I have squashed those two patches together.

On Tue, Mar 14, 2017 at 6:35 PM, Ashutosh Bapat
 wrote:
> Added this to 2017/7 commitfest to keep a track of it.
>
> On Wed, Mar 8, 2017 at 3:39 PM, Amit Langote
>  wrote:
>> On 2017/03/08 18:27, Ashutosh Bapat wrote:

 About the other statement you changed, I just realized that we should
 perhaps do one more thing.  Show the Number of partitions, even if it's 0.
  In case of inheritance, the parent table stands on its own when there are
 no child tables, but a partitioned table doesn't in the same sense.  I
 tried to implement that in attached patch 0002.  Example below:

 create table p (a int) partition by list (a);
 \d p
 
 Partition key: LIST (a)
 Number of partitions: 0

 \d+ p
 
 Partition key: LIST (a)
 Number of partitions: 0

 create table p1 partition of p for values in (1);
 \d p
 
 Partition key: LIST (a)
 Number of partitions: 1 (Use \d+ to list them.)

 \d+ p
 
 Partition key: LIST (a)
 Partitions: p1 FOR VALUES IN (1)
>>>
>>> I liked that. PFA 0002 updated. I changed one of \d output to \d+ to
>>> better test partitioned tables without partitions in verbose and
>>> non-verbose mode. Also, refactored the your code to have less number
>>> of conditions. Please let me know if it looks good.
>>
>> Thanks, looks good.
>>
>> Regards,
>> Amit
>>
>>
>
>
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From 5e5af858931f11b86e8b535331849e58d9a08281 Mon Sep 17 00:00:00 2001
From: Ashutosh Bapat 
Date: Mon, 4 Sep 2017 09:56:41 +0530
Subject: [PATCH] Improve \d+ output of a partitioned table

While displaying partitions in \d+ output of a partitioned table
annotate the partitioned partitions as "has partitions".

For a partitioned table show the number of partitions even if it's 0.

Refactored an existing statement to use ? : instead of if condition
according to the surrounding code.

Ashutosh Bapat and Amit Langote.
---
 src/bin/psql/describe.c|   41 +---
 src/test/regress/expected/create_table.out |   13 +
 src/test/regress/expected/foreign_data.out |3 ++
 src/test/regress/expected/insert.out   |   15 ++
 src/test/regress/sql/create_table.sql  |2 +-
 src/test/regress/sql/insert.sql|4 +++
 6 files changed, 62 insertions(+), 16 deletions(-)

diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f6049cc..db97de0 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -2828,7 +2828,7 @@ describeOneTableDetails(const char *schemaname,
 		/* print child tables (with additional info if partitions) */
 		if (pset.sversion >= 10)
 			printfPQExpBuffer(&buf,
-			  "SELECT c.oid::pg_catalog.regclass, pg_catalog.pg_get_expr(c.relpartbound, c.oid)"
+			  "SELECT c.oid::pg_catalog.regclass, pg_get_expr(c.relpartbound, c.oid), c.relkind"
 			  " FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i"
 			  " WHERE c.oid=i.inhrelid AND i.inhparent = '%s'"
 			  " ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;", oid);
@@ -2851,15 +2851,26 @@ describeOneTableDetails(const char *schemaname,
 		else
 			tuples = PQntuples(result);
 
-		if (!verbose)
+		/*
+		 * For a partitioned table with no partitions, always print the number
+		 * of partitions as zero, even when verbose output is expected.
+		 * Otherwise, we will not print "Partitions" section for a partitioned
+		 * table without any partitions.
+		 */
+		if (tableinfo.relkind == RELKIND_PARTITIONED_TABLE && tuples == 0)
+ 		{
+			printfPQExpBuffer(&buf, _("Number of partitions: %d"), tuples);
+			printTableAddFooter(&cont, buf.data);
+		}
+		else if (!verbose)
 		{
+			const char *ct = tableinfo.relkind != RELKIND_PARTITIONED_TABLE ?
+			_("child tables") : _("partitions");
+
 			/* print the number of child tables, if any */
 			if (tuples > 0)
 			{
-if (tableinfo.relkind != RELKIND_PARTITIONED_TABLE)
-	printfPQExpBuffer(&buf, _("Number of child tables: %d (Use \\d+ to list them.)"), tuples);
-else
-	printfPQExpBuffer(&buf, _("Number of partitions: %d (Use \\d+ to list them.)"), tuples);
+printfPQExpBuffer(&buf, _("Number of %s: %d (Use \\d+ to list them.)"), ct, tuples);
 printTableAddFooter(&cont, buf.data);
 			}
 		}
@@ -2883,12 +2894,21 @@ describeOneTableDetails(const char *schemaname,
 }
 else
 {
+	char *partitioned_note;
+
+	if (*PQgetvalue(result, i, 2) == RELKIND_PARTITIONED_TABLE)
+		partitioned_note = " has partitions";
+	else
+		partitioned_note = "";
+
 	if (i == 0)
-		printfPQExpBuffer(&buf, "%s: %s

Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-03 Thread Michael Paquier
On Sat, Sep 2, 2017 at 3:00 AM, Bossart, Nathan  wrote:
> Don't we have a similar problem with makeVacuumRelation() and list_make1()?

Yeah, indeed. I forgot about this portion.

> I went ahead and moved the RangeVar, VacuumRelation, and List into local
> variables for now, but I agree that this could be improved in a separate
> patch.  Perhaps these could be allocated in AutovacMemCxt.  I see from
> 4873c96f that autovacuum_do_vac_analyze() used to allocate the list of OIDs
> in that "long-lived" memory context.

Indeed, and this was removed in 9319fd89 by Álvaro because the API
did not need to be that complicated at the time, but now it does.

I did not consider at first that the list portion also needed to be
modified, perhaps because I am not the one coding it... So now that
it is becoming more complicated, what about just using AutovacMemCxt?
This would simplify the list of VacuumRelation entries and the
RangeVar creation as well; honestly this is ugly, and there is no
other similar pattern in the backend code:
+   MemSet(&rel_list, 0, sizeof(rel_list));
+   NodeSetTag(&rel_list, T_List);
+   rel_list.length = 1;
+   rel_list.head = &lc;
+   rel_list.tail = &lc;
+
+   MemSet(&lc, 0, sizeof(lc));
+   lfirst(rel_list.head) = &rel;
This would become far more readable by using makeRangeVar() and the
new makeVacuumRelation(). As it is partly my fault that we are in
this state, I am fine with taking this burden from you, Nathan, and
fixing it myself in a new version. But I don't want to step on your
toes either :)

>> -   $$ = (Node *)n;
>> +   $$ = (Node *) n;
>> Spurious noise. And the coding pattern in gram.y is to not add a space
>> (make new code look like its surroundings as the documentation says).
>
> I've fixed this in v13.

Thanks.
-- 
Michael




Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-03 Thread Amit Langote
On 2017/09/04 12:38, Etsuro Fujita wrote:
> On 2017/09/02 4:10, Ashutosh Bapat wrote:
>> This rebase mainly changes patch 0001, which translates partition
>> hierarchy into inheritance hierarchy creating AppendRelInfos and
>> RelOptInfos for partitioned partitions. Because of that, it's not
>> necessary to record the partitioned partitions in a
>> PartitionedChildRelInfos::child_rels. The only RTI that goes in there
>> is the RTI of child RTE which is same as the parent RTE except inh
>> flag. I tried removing that with a series of changes but it seems that
>> the following code in ExecInitModifyTable() requires it.
>> 1897 /* The root table RT index is at the head of the
>> partitioned_rels list */
>> 1898 if (node->partitioned_rels)
>> 1899 {
>> 1900 Index   root_rti;
>> 1901 Oid root_oid;
>> 1902
>> 1903 root_rti = linitial_int(node->partitioned_rels);
>> 1904 root_oid = getrelid(root_rti, estate->es_range_table);
>> 1905 rel = heap_open(root_oid, NoLock);  /* locked by InitPlan */
>> 1906 }
>> 1907 else
>> 1908 rel = mtstate->resultRelInfo->ri_RelationDesc;
>>
>> I don't know whether we could change this code not to use
>> PartitionedChildRelInfos::child_rels.

For a root partitioned table, ModifyTable.partitioned_rels comes from
the PartitionedChildRelInfo.child_rels recorded for the table by
expand_inherited_rtentry().  In fact, the latter is copied verbatim to
ModifyTablePath (or AppendPath/MergeAppendPath) when creating it.
The only point of keeping those RT indexes around in the ModifyTable
node is for the executor to be able to locate partitioned table RT
entries and lock them.  Without them, the executor wouldn't know about
those tables at all, because there won't be subplans corresponding to
partitioned tables in the tree, and hence their RT indexes won't
appear in the ModifyTable.resultRelations list.  If your patch adds
partitioned child rel AppendRelInfos back for whatever reason, you
should also make sure in inheritance_planner() that their RT indexes
don't end up in the resultRelations list.  See this piece of code in
inheritance_planner():

1351 /* Build list of sub-paths */
1352 subpaths = lappend(subpaths, subpath);
1353
1354 /* Build list of modified subroots, too */
1355 subroots = lappend(subroots, subroot);
1356
1357 /* Build list of target-relation RT indexes */
1358 resultRelations = lappend_int(resultRelations,
appinfo->child_relid);

Maybe it won't happen, because if this appinfo corresponds to a
partitioned child table, recursion would occur and we'll get to this piece
of code for only the leaf children.

By the way, if you want to get rid of PartitionedChildRelInfo, you can do
that as long as you find some other way of putting together the
partitioned_rels list to add into the ModifyTable (Append/MergeAppend)
node created for the root partitioned table.  Currently,
PartitionedChildRelInfo (and the root->pcinfo_list) is the way for
expand_inherited_rtentry() to pass that information to the planner's
path-generating code.  We may be able to generate that list when actually
creating the path using set_append_rel_pathlist() or
inheritance_planner(), without having created a PartitionedChildRelInfo
node beforehand.

> Though I haven't read the patch yet, I think the above code is useless.
> And I proposed a patch to clean it up before [1].  I'll add that patch to
> the next commitfest.

+1.

Thanks,
Amit





Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-03 Thread Michael Paquier
On Mon, Sep 4, 2017 at 7:20 AM, Tom Lane  wrote:
> I wrote:
> On further consideration, I think the control logic I added in
> exec_simple_query() is a shade bogus.  I set it up to only force
> an implicit transaction block when there are at least two statements
> remaining to execute.  However, that has the result of allowing, eg,
>
> begin\; select 1\; commit\; vacuum;
>
> Now in principle it's perfectly OK to allow that, since the vacuum
> is alone in its transaction.  But it feels more like an implementation
> artifact than a good design.  The existing code doesn't allow it,
> and we might have a hard time duplicating this behavior if we ever
> significantly rewrote the transaction infrastructure.  Plus I'd hate
> to have to explain it to users.  I think we'd be better off enforcing
> transaction block restrictions on every statement in a multi-command
> string, regardless of the location of any COMMIT/ROLLBACK within the
> string.
>
> Hence, attached a v2 that does it like that.  I also fully reverted
> 4f896dac1 by undoing its changes to PreventTransactionChain; other
> than that, the changes in xact.c are the same as before.

Hmm. While this patch looks to me to be in better shape than what Simon
is proposing, thinking about
cah2-v61vxnentfj2v-zd+ma-g6kqmjgd5svxou3jbvdzqh0...@mail.gmail.com
which involved a migration Oracle->Postgres, I have been wondering if
it is possible to still allow savepoints in those cases to ease the
pain and surprise of some users. And while looking around, it seems to
me that it is possible. Please find the attached to show my idea,
based on Tom's v2. The use of a new transaction state like
IMPLICIT_INPROGRESS is something I had in mind upthread, but I have
not shaped it into a full-blown patch.

All the following sequences are working as I would think they should
(a couple of inserts done within each savepoint allowed me to check
that the transactions happened correctly, though the set of regression
tests presented in v2 looks sufficient):
BEGIN; SELECT 1; SAVEPOINT sp; RELEASE sp; SAVEPOINT sp; ROLLBACK TO
SAVEPOINT sp; COMMIT;
BEGIN; SELECT 1; SAVEPOINT sp; RELEASE sp; SAVEPOINT sp; ROLLBACK TO
SAVEPOINT sp; ROLLBACK;
SELECT 1; SAVEPOINT sp; RELEASE sp; SAVEPOINT sp; ROLLBACK TO SAVEPOINT sp;
So sequences of multiple commands are working with the patch attached
even if a BEGIN is not explicitly added. On HEAD or with v2, if BEGIN
is not specified, savepoint commands cause a failure.
-- 
Michael


introduce-implicit-transaction-blocks-3-michael.patch
Description: Binary data



Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-03 Thread Etsuro Fujita

On 2017/09/02 4:10, Ashutosh Bapat wrote:

This rebase mainly changes patch 0001, which translates the partition
hierarchy into an inheritance hierarchy, creating AppendRelInfos and
RelOptInfos for partitioned partitions. Because of that, it's no longer
necessary to record the partitioned partitions in
PartitionedChildRelInfos::child_rels. The only RTI that goes in there
is the RTI of the child RTE, which is the same as the parent RTE except
for the inh flag. I tried removing that with a series of changes, but it
seems that the following code in ExecInitModifyTable() requires it.
1897 /* The root table RT index is at the head of the
partitioned_rels list */
1898 if (node->partitioned_rels)
1899 {
1900 Index   root_rti;
1901 Oid root_oid;
1902
1903 root_rti = linitial_int(node->partitioned_rels);
1904 root_oid = getrelid(root_rti, estate->es_range_table);
1905 rel = heap_open(root_oid, NoLock);  /* locked by InitPlan */
1906 }
1907 else
1908 rel = mtstate->resultRelInfo->ri_RelationDesc;

I don't know whether we could change this code not to use
PartitionedChildRelInfos::child_rels.
Though I haven't read the patch yet, I think the above code is useless. 
And I proposed a patch to clean it up before [1].  I'll add that patch 
to the next commitfest.


Best regards,
Etsuro Fujita

[1] 
https://www.postgresql.org/message-id/93cf9816-2f7d-0f67-8ed2-4a4e497a6ab8%40lab.ntt.co.jp






Re: [HACKERS] CLUSTER command progress monitor

2017-09-03 Thread Masahiko Sawada
On Mon, Sep 4, 2017 at 11:37 AM, Tatsuro Yamada
 wrote:
> Hi Sawada-san, Thomas,
>
> Thanks for sharing the regression.diff.
> I realized Thomas's comment is right.
>
> The attached patch is the fixed version.
> Could you try it?
>

Yeah, in my environment the regression test passed. Thanks.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Secondary index access optimizations

2017-09-03 Thread Amit Langote
On 2017/09/02 12:44, Thomas Munro wrote:
> On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik
>  wrote:
>> postgres=# explain select * from bt where k between 1 and 2 and v = 100;
>>   QUERY PLAN
>> --
>>  Append  (cost=0.29..15.63 rows=2 width=8)
>>->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
>>  Index Cond: (v = 100)
>>->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
>>  Index Cond: (v = 100)
>>  Filter: (k <= 2)
>> (6 rows)
> 
> +1
> 
> This seems like a good feature to me: filtering stuff that is
> obviously true is a waste of CPU cycles and may even require people to
> add redundant stuff to indexes.  I was pondering something related to
> this over in the partition-wise join thread (join quals that are
> implied by partition constraints and should be discarded).
> 
> It'd be interesting to get Amit Langote's feedback, so I CC'd him.
> I'd be surprised if he and others haven't got a plan or a patch for
> this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constraints happens for a relation before we look at
its indexes).  Also, the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.

Thanks,
Amit





Re: [HACKERS] CLUSTER command progress monitor

2017-09-03 Thread Tatsuro Yamada

Hi Sawada-san, Thomas,

Thanks for sharing the regression.diff.
I realized Thomas's comment is right.

The attached patch is the fixed version.
Could you try it?

Regards,

Tatsuro Yamada
NTT Open Source Software Center

On 2017/09/01 17:59, Masahiko Sawada wrote:

On Fri, Sep 1, 2017 at 3:38 PM, Tatsuro Yamada
 wrote:

Hi Thomas,


Any comments or suggestion are welcome.



Although this patch updates src/test/regress/expected/rules.out I
think perhaps you included the wrong version?  That regression test
fails for me



Thanks for the comment.

I applied the patch on 7b69b6ce and it's fine.
Did you run the "initdb" command after "make install"?
The pg_stat_progress_cluster view is probably created during initdb.



I also got a regression test error (applied to abe85ef). Here is
regression.diff file.

*** /home/masahiko/source/postgresql/src/test/regress/expected/rules.out
2017-09-01 17:27:33.680055612 -0700
--- /home/masahiko/source/postgresql/src/test/regress/results/rules.out
2017-09-01 17:28:10.410055596 -0700
***
*** 1819,1824 
--- 1819,1849 
   pg_stat_get_db_conflict_bufferpin(d.oid) AS confl_bufferpin,
   pg_stat_get_db_conflict_startup_deadlock(d.oid) AS confl_deadlock
  FROM pg_database d;
+ pg_stat_progress_cluster| SELECT s.pid,
+ s.datid,
+ d.datname,
+ s.relid,
+ CASE s.param1
+ WHEN 0 THEN 'initializing'::text
+ WHEN 1 THEN 'scanning heap'::text
+ WHEN 2 THEN 'sorting tuples'::text
+ WHEN 3 THEN 'writing new heap'::text
+ WHEN 4 THEN 'scan heap and write new heap'::text
+ WHEN 5 THEN 'swapping relation files'::text
+ WHEN 6 THEN 'rebuilding index'::text
+ WHEN 7 THEN 'performing final cleanup'::text
+ ELSE NULL::text
+ END AS phase,
+ CASE s.param2
+ WHEN 0 THEN 'index scan'::text
+ WHEN 1 THEN 'seq scan'::text
+ ELSE NULL::text
+ END AS scan_method,
+ s.param3 AS scan_index_relid,
+ s.param4 AS heap_tuples_total,
+ s.param5 AS heap_tuples_scanned
+FROM (pg_stat_get_progress_info('CLUSTER'::text) s(pid, datid,
relid, param1, param2, param3, param4, param5, param6, param7, param8,
param9, param10)
+  LEFT JOIN pg_database d ON ((s.datid = d.oid)));
   pg_stat_progress_vacuum| SELECT s.pid,
   s.datid,
   d.datname,
***
*** 1841,1871 
   s.param7 AS num_dead_tuples
  FROM (pg_stat_get_progress_info('VACUUM'::text) s(pid, datid,
relid, param1, param2, param3, param4, param5, param6, param7, param8,
param9, param10)
LEFT JOIN pg_database d ON ((s.datid = d.oid)));
- pg_stat_progress_cluster| SELECT
- s.pid,
- s.datid,
- d.datname,
- s.relid,
- CASE s.param1
- WHEN 0 THEN 'initializing'::text
- WHEN 1 THEN 'scanning heap'::text
- WHEN 2 THEN 'sorting tuples'::text
- WHEN 3 THEN 'writing new heap'::text
- WHEN 4 THEN 'scan heap and write new heap'::text
- WHEN 5 THEN 'swapping relation files'::text
- WHEN 6 THEN 'rebuilding index'::text
- WHEN 7 THEN 'performing final cleanup'::text
- ELSE NULL::text
- END AS phase,
- CASE S.param2
- WHEN 0 THEN 'index scan'
- WHEN 1 THEN 'seq scan'
- END AS scan_method,
- s.param3 AS index_relid,
- s.param4 AS heap_blks_total,
- s.param5 AS heap_blks_scanned
-FROM (pg_stat_get_progress_info('CLUSTER'::text) s(pid, datid,
relid, param1, param2, param3, param4, param5)
-  LEFT JOIN pg_database d ON ((s.datid = d.oid)));
   pg_stat_replication| SELECT s.pid,
   s.usesysid,
   u.rolname AS usename,
--- 1866,1871 

==

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 38bf636..35a5c63 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -332,6 +332,14 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
   
  
 
+ 
+  pg_stat_progress_clusterpg_stat_progress_cluster
+  One row for each backend running
+   CLUSTER, showing current progress.
+   See .
+  
+ 
+
 

   
@@ -3233,9 +3241,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
 
   
PostgreSQL has the ability to report the progress of
-   certain commands during command execution.  Currently, the only command
-   which supports progress reporting is VACUUM.  This may be
-   expanded in the future.
+   certain commands during command execution.  Currently, the supported 
+   progress reporting commands are VACUUM and CLUSTER.
+   This may be expanded in the future.
   
 
  
@@ -3427,6 +3435,157 @@ SELECT pg_stat_get_backend_

Re: [HACKERS] Fix warnings and typo in dshash

2017-09-03 Thread Amit Kapila
On Sun, Sep 3, 2017 at 2:56 PM, Thomas Munro
 wrote:
> On Sun, Sep 3, 2017 at 6:57 PM, Amit Kapila  wrote:
>> I am seeing below warnings (on Win7) in dshash.c
>>
>> 1>  dshash.c
>> 1>src/backend/lib/dshash.c(318): warning C4334: '<<' : result of
>> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
>> intended?)
>> 1>src/backend/lib/dshash.c(679): warning C4334: '<<' : result of
>> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
>> intended?)
>> 1>src/backend/lib/dshash.c(713): warning C4334: '<<' : result of
>> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
>> intended?)
>
> Thanks!  That's a handy warning to have.  I see that it is also
> visible on the build farm:
>
> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=caecilian&dt=2017-09-02%2019%3A30%3A30&stg=make
>
> Aside from these 3 warnings, it looks like the other 17 are all
> "warning C4005: 'HAVE_LONG_LONG_INT_64': macro redefinition".  I
> wonder if it would make sense to fix that too and then turn on the
> MSVC equivalent of -Werror=xxx on a build farm animal...
>
>> Attached a patch to fix the above warning.
>
> I think it should be (size_t) 1, not UINT64CONST(1).  See attached.
>

Okay, that makes sense.  Do you think we should also change type
casting in BUCKETS_PER_PARTITION so that we are consistent?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] UPDATE of partition key

2017-09-03 Thread Amit Kapila
On Sun, Sep 3, 2017 at 5:10 PM, Amit Khandekar  wrote:
> On 31 August 2017 at 14:15, Amit Khandekar  wrote:
>> Thanks Dilip. I am working on rebasing the patch. Particularly, the
>> partition walker in my patch depended on the fact that all the tables
>> get opened (and then closed) while creating the tuple routing info.
>> But in HEAD, now only the partitioned tables get opened. So need some
>> changes in my patch.
>>
>> The partition walker related changes are going to be inapplicable once
>> the other thread [1] commits the changes for expansion of inheritance
>> in bound-order, but till then I would have to rebase the partition
>> walker changes over HEAD.
>>
>> [1] 
>> https://www.postgresql.org/message-id/0118a1f2-84bb-19a7-b906-dec040a206f2%40lab.ntt.co.jp
>>
>
> After recent commit 30833ba154, now the partitions are expanded in
> depth-first order. It didn't seem worthwhile rebasing my partition
> walker changes onto the latest code. So in the attached patch, I have
> removed all the partition walker changes.
>

It seems you have forgotten to attach the patch.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-09-03 Thread Masahiko Sawada
On Fri, Sep 1, 2017 at 11:29 PM, Fabien COELHO  wrote:
>
>>> I'm wondering whether this truncation should be yet another available
>>> command? Hmmm... maybe not.
>>
>>
>> Currently TRUNCATE pgbench_accounts command is executed within a
>> transaction started immediately before it. If we move it out of the
>> transaction, the table data will be truncated even if the copying data
>> failed. Maybe we can do TRUNCATE pgbench_accounts, pgbench_history
>> instead. Thought?
>
>
> Keep the truncate in the transaction, and truncate both (or all?) tables
> together.

Attached is the latest patch, which incorporates the comments I got so far. Please review it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


pgbench_custom_initialization_v9.patch
Description: Binary data



Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-03 Thread Peter Geoghegan
On Tue, Aug 15, 2017 at 12:17 AM, Marko Tiikkaja  wrote:
> On Tue, Aug 15, 2017 at 7:43 AM, Peter Geoghegan  wrote:
>>
>> On Mon, Aug 14, 2017 at 6:23 PM, Marko Tiikkaja  wrote:
>> > Attached is a patch for $SUBJECT.  It might still be a bit rough around
>> > the
>> > edges and probably light on docs and testing, but I thought I'd post it
>> > anyway so I won't forget.
>>
>> Is it possible for ON CONFLICT DO SELECT to raise a cardinality
>> violation error? Why or why not?
>
>
> No.  I don't see when that would need to happen.  But I'm guessing you have
> a case in mind?

Actually, no, I didn't. But I wondered if you did. I think that it
makes some sense not to, now that I think about it. ON CONFLICT DO
NOTHING doesn't have cardinality violations, because it cannot affect
a row twice if there are duplicates proposed for insertion (at least,
not through any ON CONFLICT related codepath). But, this opinion only
applies to ON CONFLICT DO SELECT, not ON CONFLICT DO SELECT FOR
UPDATE. And I have other reservations, which I'll go into
momentarily, about ON CONFLICT DO SELECT in general. So, the upshot is
that I think we need cardinality violations in all cases for this
feature. Why would a user ever not want to know that the row was
locked twice?

On to the subject of my more general reservation: Why did you include
ON CONFLICT DO SELECT at all? Why not just ON CONFLICT DO SELECT FOR
UPDATE (and FOR SHARE, ...) ?

I think I know what you're going to say about it: ON CONFLICT DO
NOTHING doesn't lock the conflicting row, so why should I insist on it
here (why not have an ON CONFLICT DO SELECT variant, too)? That's not
a bad argument. However, I think that there is still a difference.
Namely, ON CONFLICT DO NOTHING doesn't let you project the rows with
RETURNING (that may not even be visible to the command's MVCC snapshot
-- the rows that we also don't lock), because those are simply not the
semantics it has. DO NOTHING is more or less about ETL use-cases, some
of which involve data from very unclean sources. It seems questionable
to cite that as precedent for not locking a row (that is, for having a
plain ON CONFLICT DO SELECT variant at all).

In other words, while ON CONFLICT DO NOTHING may have set a precedent
here, it at least has semantics that limit the consequences of not
locking the row; and it *feels* a little bit dirty to use it
indifferently, even where that makes sense. ON CONFLICT DO SELECT is
probably going to be used within wCTEs some of the time. I'm not sure
that a plain ON CONFLICT DO SELECT variant won't allow unpredictable,
complicated problems when composed within a more complicated query.
(This brings me back!)

In other other words, plain SELECT FOR UPDATE has to do all the EPQ
stuff, in order to confront many of the same issues that ON CONFLICT
must confront in its own way [1], but a plain SELECT never does any
EPQ stuff at all. It seems to follow that a plain ON CONFLICT DO
SELECT is an oxymoron. If I've missed the point of ON CONFLICT DO
SELECT, please let me know how.

[1] 
https://wiki.postgresql.org/wiki/UPSERT#Visibility_issues_and_the_proposed_syntax_.28WHERE_clause.2Fpredicate_stuff.29
-- 
Peter Geoghegan




Re: [HACKERS] path toward faster partition pruning

2017-09-03 Thread Amit Langote
Thanks for the comments.

On 2017/09/02 2:52, Robert Haas wrote:
> On Thu, Aug 31, 2017 at 2:02 AM, Amit Langote
>  wrote:
>> Attached is now also the set of patches that implement the actual
>> partition-pruning logic, viz. the last 3 patches (0004, 0005, and 0006) of
>> the attached.
> 
> It strikes me that this patch set is doing two things but maybe in the
> opposite order that I would have chosen to attack them.  First,
> there's getting partition pruning to use something other than
> constraint exclusion.  Second, there's deferring work that is
> currently done at an early stage of the process until later, so that
> we waste less effort on partitions that are ultimately going to be
> pruned.

OK.

> 
> The second one is certainly a worthwhile goal, but there are fairly
> firm interdependencies between the first one and some other things
> that are in progress.  For example, the first one probably ought to be
> done before hash partitioning gets committed, because
> constraint-exclusion-based partition pruning won't work with hash
> partitioning, but some mechanism based on asking the partitioning code
> which partitions might match will.

Yeah.

> Such a mechanism
> is more efficient for list and range partitions, but it's the only
> thing that will work for hash partitions.  Also, Beena Emerson is
> working on run-time partition pruning, and the more I think about it,
> the more I think that overlaps with this first part.  Both patches
> need a mechanism to identify, given a btree-indexable comparison
> operator (< > <= >= =) and a set of values, which partitions might
> contain matching values.  Run-time partition pruning will call that at
> execution time, and this patch will call it at plan time, but it's the
> same logic; it's just a question of the point at which the values are
> known.  And of course we don't want to end up with two copies of the
> logic.

Agreed here too.

I agree that spending effort on the first part (deferment of locking, etc.
within the planner) does not benefit either the hash partitioning or
run-time pruning patches much.

> Therefore, IMHO, it would be best to focus first on how we're going to
> identify the partitions that survive pruning, and then afterwards work
> on transposing that logic to happen before partitions are opened and
> locked.  That way, we get some incremental benefit sooner, and also
> unblock some other development work.

Alright, I will try to do it that way.

Thanks,
Amit





Re: [HACKERS] expanding inheritance in partition bound order

2017-09-03 Thread Amit Langote
Hi Amit,

On 2017/09/03 16:07, Amit Khandekar wrote:
> On 31 August 2017 at 13:06, Amit Langote  
> wrote:
>>> Mind you, that idea has some problems anyway in the face of default
>>> partitions, null partitions, and list partitions which accept
>>> non-contiguous values (e.g. one partition for 1, 3, 5; another for 2,
>>> 4, 6).  We might need to mark the PartitionDesc to indicate whether
>>> PartitionDesc-order is in fact bound-order in a particular instance,
>>> or something like that.
>>
>> ISTM, the primary motivation for the EIBO patch at this point is to get
>> the partitions ordered in a predictable manner so that the partition-wise
>> join patch and update partition key patches could implement certain logic
>> using an O(n) algorithm rather than an O(n^2) one.  Neither of them depends
>> on the actual order in the sense of, say, sticking a PathKey to the
>> resulting Append.
> 
> Now that the inheritance hierarchy is expanded in depth-first order,
> RelationGetPartitionDispatchInfo() needs to be changed to arrange the
> PartitionDispatch array and the leaf partitions in depth-first order
> (as we know this is a requirement for the update-partition-key patch
> for efficiently determining which of the leaf partitions are already
> present in the update result rels).

I was thinking the same.

> Amit, I am not sure if you are
> already doing this as part of the patches in this mail thread. Please
> let me know.

Actually, I had thought of changing the expansion order in
RelationGetPartitionDispatchInfo to depth-first after Robert committed his
patch the other day, but haven't got around to doing that yet.  Will do
that in the updated patch (the refactoring patch) I will post sometime
later today or tomorrow on a differently titled thread, because the EIBO
work seems to be done.

> Also, let me know if you think there will be any loss of
> efficiency in tuple routing code if we arrange the Partition Dispatch
> indexes in depth-first order.

I don't think there will be any loss in the efficiency of the tuple
routing code itself.  It's just that the position of the ResultRelInfos
(of leaf partitions) and PartitionDispatch objects (of partitioned tables)
will be different in their respective arrays, that is, pd->indexes will
now have different values than formerly.

And now because the planner will put leaf partitions subplans / WCOs /
RETURNING projections in that order in the ModifyTable node, we must make
sure that we adapt the same order in the executor, as you already noted.

Thanks,
Amit





Re: odd behavior/possible bug (Was: Re: [HACKERS] PG10 partitioning - odd behavior/possible bug)

2017-09-03 Thread Joe Conway
On 09/03/2017 03:34 PM, Tom Lane wrote:
> Joe Conway  writes:
>> Notice that tsr is not empty at all on the first loop, but on the second
>> loop it is empty every second time the trigger fires.
> 
> I think the issue is that now() isn't changing within the transaction,
> so when you construct "tstzrange(lower(OLD.tr), now(), '[)')" using an
> old row whose "lower(OLD.tr)" is already "now()", you get an empty range.
> Probably using '[]' bounds would avoid the oddness.

Hmmm, good point. Sorry for the noise.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: odd behavior/possible bug (Was: Re: [HACKERS] PG10 partitioning - odd behavior/possible bug)

2017-09-03 Thread Tom Lane
Joe Conway  writes:
> Notice that tsr is not empty at all on the first loop, but on the second
> loop it is empty every second time the trigger fires.

I think the issue is that now() isn't changing within the transaction,
so when you construct "tstzrange(lower(OLD.tr), now(), '[)')" using an
old row whose "lower(OLD.tr)" is already "now()", you get an empty range.
Probably using '[]' bounds would avoid the oddness.

regards, tom lane




odd behavior/possible bug (Was: Re: [HACKERS] PG10 partitioning - odd behavior/possible bug)

2017-09-03 Thread Joe Conway
On 09/03/2017 02:28 PM, Joe Conway wrote:
> I was playing around with partitioning and found an oddity that is best
> described with the following reasonably minimal test case:



> Notice that in the first loop iteration tsr is calculated from OLD.tr
> correctly. But in the second loop iteration it is not, and therefore no
> partition can be found for the insert.
> 
> I have not dug too deeply into this yet, but was wondering if this
> behavior is sane/expected for some reason I am missing?

This does not require partitioning to reproduce -- sorry for the false
accusations ;-)

8<---
CREATE TABLE timetravel
(
  id int8,
  f1 text not null,
  tr tstzrange not null default tstzrange(now(), 'infinity', '[]')
);

CREATE OR REPLACE FUNCTION modify_timetravel()
RETURNS TRIGGER AS $$
  DECLARE
tsr tstzrange;
  BEGIN
RAISE NOTICE 'OLD.tr = %', OLD.tr;

tsr := tstzrange(lower(OLD.tr), now(), '[)');
RAISE NOTICE 'tsr = %', tsr;

OLD.tr = tsr;
INSERT INTO timetravel VALUES (OLD.*);
IF (TG_OP = 'UPDATE') THEN
  tsr := tstzrange(now(), 'infinity', '[]');
  RAISE NOTICE 'NEW.tr = %', tsr;
  NEW.tr = tsr;
  RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
  RETURN OLD;
END IF;
  END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER timetravel_audit BEFORE DELETE OR UPDATE
ON timetravel FOR EACH ROW EXECUTE PROCEDURE modify_timetravel();

INSERT INTO timetravel(id, f1)
SELECT g.i, 'row-' || g.i::text
FROM generate_series(1,10) AS g(i);

DO $$
  DECLARE
i int;
  BEGIN
FOR i IN 1..2 LOOP
  RAISE NOTICE 'loop = %', i;
  UPDATE timetravel SET f1 = f1 || '-r' || i where id < 4;
END LOOP;
  END
$$;
NOTICE:  loop = 1
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07",infinity]
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07",infinity]
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07",infinity]
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  loop = 2
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  tsr = empty
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  tsr = empty
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  tsr = ["2017-09-03 15:17:31.598734-07","2017-09-03
15:17:31.608018-07")
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  OLD.tr = ["2017-09-03 15:17:31.608018-07",infinity]
NOTICE:  tsr = empty
NOTICE:  NEW.tr = ["2017-09-03 15:17:31.608018-07",infinity]
DO
8<---

Notice that tsr is not empty at all on the first loop, but on the second
loop it is empty every second time the trigger fires.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-03 Thread Tom Lane
I wrote:
> ... PFA a patch
> that invents a notion of an "implicit" transaction block.

On further consideration, I think the control logic I added in
exec_simple_query() is a shade bogus.  I set it up to only force
an implicit transaction block when there are at least two statements
remaining to execute.  However, that has the result of allowing, eg,

begin\; select 1\; commit\; vacuum;

Now in principle it's perfectly OK to allow that, since the vacuum
is alone in its transaction.  But it feels more like an implementation
artifact than a good design.  The existing code doesn't allow it,
and we might have a hard time duplicating this behavior if we ever
significantly rewrote the transaction infrastructure.  Plus I'd hate
to have to explain it to users.  I think we'd be better off enforcing
transaction block restrictions on every statement in a multi-command
string, regardless of the location of any COMMIT/ROLLBACK within the
string.

Hence, attached a v2 that does it like that.  I also fully reverted
4f896dac1 by undoing its changes to PreventTransactionChain; other
than that, the changes in xact.c are the same as before.

regards, tom lane

diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 5e7e812..8b33676 100644
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
*** typedef enum TBlockState
*** 145,150 
--- 145,151 
  	/* transaction block states */
  	TBLOCK_BEGIN,/* starting transaction block */
  	TBLOCK_INPROGRESS,			/* live transaction */
+ 	TBLOCK_IMPLICIT_INPROGRESS, /* live transaction after implicit BEGIN */
  	TBLOCK_PARALLEL_INPROGRESS, /* live transaction inside parallel worker */
  	TBLOCK_END,	/* COMMIT received */
  	TBLOCK_ABORT,/* failed xact, awaiting ROLLBACK */
*** StartTransactionCommand(void)
*** 2700,2705 
--- 2701,2707 
  			 * previous CommitTransactionCommand.)
  			 */
  		case TBLOCK_INPROGRESS:
+ 		case TBLOCK_IMPLICIT_INPROGRESS:
  		case TBLOCK_SUBINPROGRESS:
  			break;
  
*** CommitTransactionCommand(void)
*** 2790,2795 
--- 2792,2798 
  			 * counter and return.
  			 */
  		case TBLOCK_INPROGRESS:
+ 		case TBLOCK_IMPLICIT_INPROGRESS:
  		case TBLOCK_SUBINPROGRESS:
  			CommandCounterIncrement();
  			break;
*** AbortCurrentTransaction(void)
*** 3014,3023 
  			break;
  
  			/*
! 			 * if we aren't in a transaction block, we just do the basic abort
! 			 * & cleanup transaction.
  			 */
  		case TBLOCK_STARTED:
  			AbortTransaction();
  			CleanupTransaction();
  			s->blockState = TBLOCK_DEFAULT;
--- 3017,3028 
  			break;
  
  			/*
! 			 * If we aren't in a transaction block, we just do the basic abort
! 			 * & cleanup transaction.  For this purpose, we treat an implicit
! 			 * transaction block as if it were a simple statement.
  			 */
  		case TBLOCK_STARTED:
+ 		case TBLOCK_IMPLICIT_INPROGRESS:
  			AbortTransaction();
  			CleanupTransaction();
  			s->blockState = TBLOCK_DEFAULT;
*** AbortCurrentTransaction(void)
*** 3148,3156 
   *	completes).  Subtransactions are verboten too.
   *
   *	isTopLevel: passed down from ProcessUtility to determine whether we are
!  *	inside a function or multi-query querystring.  (We will always fail if
!  *	this is false, but it's convenient to centralize the check here instead of
!  *	making callers do it.)
   *	stmtType: statement type name, for error messages.
   */
  void
--- 3153,3160 
   *	completes).  Subtransactions are verboten too.
   *
   *	isTopLevel: passed down from ProcessUtility to determine whether we are
!  *	inside a function.  (We will always fail if this is false, but it's
!  *	convenient to centralize the check here instead of making callers do it.)
   *	stmtType: statement type name, for error messages.
   */
  void
*** PreventTransactionChain(bool isTopLevel,
*** 3183,3190 
  		ereport(ERROR,
  (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
  		/* translator: %s represents an SQL statement name */
!  errmsg("%s cannot be executed from a function or multi-command string",
! 		stmtType)));
  
  	/* If we got past IsTransactionBlock test, should be in default state */
  	if (CurrentTransactionState->blockState != TBLOCK_DEFAULT &&
--- 3187,3193 
  		ereport(ERROR,
  (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
  		/* translator: %s represents an SQL statement name */
!  errmsg("%s cannot be executed from a function", stmtType)));
  
  	/* If we got past IsTransactionBlock test, should be in default state */
  	if (CurrentTransactionState->blockState != TBLOCK_DEFAULT &&
*** BeginTransactionBlock(void)
*** 3429,3434 
--- 3432,3446 
  			break;
  
  			/*
+ 			 * BEGIN converts an implicit transaction block to a regular one.
+ 			 * (Note that we allow this even if we've already done some
+ 			 * commands, which is a bit odd but matches historical

[HACKERS] PG10 partitioning - odd behavior/possible bug

2017-09-03 Thread Joe Conway
I was playing around with partitioning and found an oddity that is best
described with the following reasonably minimal test case:

8<-
CREATE TABLE timetravel
(
  id int8,
  f1 text not null,
  tr tstzrange not null default tstzrange(now(), 'infinity', '[]')
) PARTITION BY RANGE (upper(tr));

CREATE TABLE timetravel_current PARTITION OF timetravel
(
  primary key (id, tr) DEFERRABLE
) FOR VALUES FROM ('infinity') TO (MAXVALUE);
CREATE INDEX timetravel_current_tr_idx ON timetravel_current USING GIST
(tr);

CREATE TABLE timetravel_history PARTITION OF timetravel
(
  primary key (id, tr) DEFERRABLE
) FOR VALUES FROM (MINVALUE) TO ('infinity');
CREATE INDEX timetravel_history_tr_idx ON timetravel_history USING GIST
(tr);

CREATE OR REPLACE FUNCTION modify_timetravel()
RETURNS TRIGGER AS $$
  DECLARE
tsr tstzrange;
  BEGIN
RAISE NOTICE 'OLD.tr = %', OLD.tr;

tsr := tstzrange(lower(OLD.tr), now(), '[)');
RAISE NOTICE 'tsr = %', tsr;

OLD.tr = tsr;
INSERT INTO timetravel VALUES (OLD.*);
IF (TG_OP = 'UPDATE') THEN
  tsr := tstzrange(now(), 'infinity', '[]');
  RAISE NOTICE 'NEW.tr = %', tsr;
  NEW.tr = tsr;
  RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
  RETURN OLD;
END IF;
  END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER timetravel_audit BEFORE DELETE OR UPDATE
ON timetravel_current FOR EACH ROW EXECUTE PROCEDURE modify_timetravel();

INSERT INTO timetravel(id, f1)
SELECT g.i, 'row-' || g.i::text
FROM generate_series(1,10) AS g(i);

DO $$
  DECLARE
i int;
  BEGIN
FOR i IN 1..2 LOOP
  RAISE NOTICE 'loop = %', i;
  UPDATE timetravel SET f1 = f1 || '-r' || i where id < 2;
END LOOP;
  END
$$;
NOTICE:  loop = 1
NOTICE:  OLD.tr = ["2017-09-03 14:15:08.800811-07",infinity]
NOTICE:  tsr = ["2017-09-03 14:15:08.800811-07","2017-09-03
14:18:48.270274-07")
NOTICE:  NEW.tr = ["2017-09-03 14:18:48.270274-07",infinity]
NOTICE:  loop = 2
NOTICE:  OLD.tr = ["2017-09-03 14:18:48.270274-07",infinity]
NOTICE:  tsr = empty
ERROR:  no partition of relation "timetravel" found for row
DETAIL:  Partition key of the failing row contains (upper(tr)) = (null).
CONTEXT:  SQL statement "INSERT INTO timetravel VALUES (OLD.*)"
PL/pgSQL function modify_timetravel() line 11 at SQL statement
SQL statement "UPDATE timetravel SET f1 = f1 || '-r' || i where id < 2"
PL/pgSQL function inline_code_block line 7 at SQL statement
8<-

Notice that in the first loop iteration tsr is calculated from OLD.tr
correctly. But in the second loop iteration it is not, and therefore no
partition can be found for the insert.

I have not dug too deeply into this yet, but was wondering if this
behavior is sane/expected for some reason I am missing?

Thanks,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] adding the commit to a patch's thread

2017-09-03 Thread Daniel Gustafsson
> On 03 Sep 2017, at 21:12, Magnus Hagander  wrote:
> 
> On Sun, Sep 3, 2017 at 8:55 PM, Daniel Gustafsson  > wrote:
> 
> That leaves us back at parsing/scraping -committers and adding the mailthread.
> For this CF I can add the commits manually, since attaching a thread isn’t
> limited to threads in -hackers.  This way we have more time to figure out an
> automated way for the next CF.  I’ve added the commit for my patch here as an
> example:
> 
> https://commitfest.postgresql.org/14/1245/ 
> 
> 
> That's a good start.

I will do that for committed patches.

> We can also add a separate field that's just a comma separated list of commit 
> hashes that auto-links to the git server if we want. *That* would be a 
> trivial addition. Actually auto-populating it would probably be a lot less 
> so, but for a manually managed one it should be fairly easy.

We can do that as well in case we want to, one doesn’t exclude the other.
Having both a link to the thread on -committers as well as a link to Git would
be neat (the former will neatly capture discussions on -committers that lead to
follow-up patches, something the latter will struggle with since it will be
lots extra work).

cheers ./daniel

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Optional message to user when terminating/cancelling backend

2017-09-03 Thread Daniel Gustafsson
> On 02 Sep 2017, at 02:21, Thomas Munro  wrote:
> 
> On Fri, Jun 23, 2017 at 1:48 AM, Daniel Gustafsson  wrote:
>> Good point.  I’ve attached a new version which issues a NOTICE on truncation
>> and also addresses all other comments so far in this thread.  Thanks a lot 
>> for
>> the early patch reviews!
> 
> FYI this no longer builds because commit 81c5e46c490e just stole your OIDs:
> 
> make[3]: Entering directory
> `/home/travis/build/postgresql-cfbot/postgresql/src/backend/catalog'
> cd ../../../src/include/catalog && '/usr/bin/perl' ./duplicate_oids
> 772
> 972
> make[3]: *** [postgres.bki] Error 1

Thanks, I hadn’t spotted that yet.  Attached is an updated patch using new OIDs
to make it compile again.

cheers ./daniel



terminate_msg_v4.patch
Description: Binary data



Re: [HACKERS] [RFC] What would be difficult to make data models pluggable for making PostgreSQL a multi-model database?

2017-09-03 Thread MauMau
From: Henry M
> This may be interesting... they implement cypher (unfortunately they
> had to fork in order to have cypher be a first class query language
> with SQL).
>
> https://github.com/bitnine-oss/agensgraph

I'm sorry for my very late reply.

Thanks for the information.  AgensGraph is certainly interesting, but
the problem is that it's a fork of PostgreSQL, as you mentioned.  I
would like the data models, including query languages, to be pluggable
extensions, so that various people (especially database researchers?)
can develop them flexibly.  Of course, I want various data models to
be incorporated in the core as early as possible, but I'm afraid that's
not easy.  If new data models can be added as extensions, they can be
developed outside the PostgreSQL community process, become popular and
mature, and then be embraced in core, as GiST/SP-GiST indexes and
full text search were.


Regards
MauMau





Re: [HACKERS] Function to move the position of a replication slot

2017-09-03 Thread Magnus Hagander
On Sun, Sep 3, 2017 at 12:20 AM, Robert Haas  wrote:

> On Fri, Sep 1, 2017 at 11:30 PM, Peter Eisentraut
>  wrote:
> > On 8/31/17 08:19, Magnus Hagander wrote:
> >> Rebased. Now named pg_advance_replication_slot. ERROR on logical slots.
> >> Forward only.
> >>
> >> I think that, in the end, covered all the comments?
> >
> > I didn't see any explanation of what this would actually be useful for.
> > I suppose you could skip over some changes you don't want replicated,
> > but how do you find to what position to skip?
> >
> > Logical replication has a similar mechanism, using the function
> > pg_replication_origin_advance().  Any overlap there?  (Maybe the names
> > could be aligned.)
> > (https://www.postgresql.org/docs/devel/static/logical-
> replication-conflicts.html)
>
> I think you can use this to work around the absence of failover slots.
>
>
That was the initial use case I had for this, yes.

It can also be combined with file-based restore to keep a "blocker" slot
that prevents WAL removal until a segment has progressed through a
workflow, for example.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [HACKERS] adding the commit to a patch's thread

2017-09-03 Thread Magnus Hagander
On Sun, Sep 3, 2017 at 8:55 PM, Daniel Gustafsson  wrote:

> > On 03 Sep 2017, at 19:33, Magnus Hagander  wrote:
> >
> > On Sun, Sep 3, 2017 at 12:00 AM, Daniel Gustafsson  > wrote:
> > > On 01 Sep 2017, at 15:40, Robert Haas  robertmh...@gmail.com>> wrote:
> > >
> > > On Fri, Sep 1, 2017 at 4:26 AM, Alvaro Herrera <
> alvhe...@alvh.no-ip.org > wrote:
> > >> Erik Rijkers wrote:
> > >>> Would it be possible to change the commitfest a bit and make it
> possible to
> > >>> add the commit (or commit-message, or hash) to the thread in the
> > >>> commitfest-app.
> > >>
> > >> +1 to add one or more commit hashes to CF entry metadata.
> > >>
> > >> (Back-filling for old entries welcome)
> > >
> > > Couldn't the CF app scrape the commit messages for references to
> > > threads, and if the commit message points to a thread with exactly 1
> > > patch record, associate the commit to that patch?
> >
> > Since there is a Gitlink field in the patch metadata, we could start
> simple
> > with setting that to link to the commit on pg gitweb?  While adding the
> thread
> > of the commit from -committers would be great as well, this gets us off
> the
> > ground quickly.
> >
> > The original idea behind the gitlink field was to link to a git
> repo/branch representing the patch, for people who preferred to publish
> full branches for those that want it.
> >
> > This has been done for a grand total of 43 patches throughout (out of a
> total of 1231).
> >
> > Not sure if that's enough to say "let's not repurpose it”?
>
> My thinking was that it wasn’t really repurposing, since the commit is
> quite
> representative of the patch and comma separated list could house both
> (especially since the former use case is quite rare).  Looking at the code
> however, the Gitlink is a UrlField which I believe won’t support that so
> it’s
> dead in the water either way.
>

I think that would definitely be repurposing it, since it hasn't been used
for that in the past :)


> That leaves us back at parsing/scraping -committers and adding the
> mailthread.
> For this CF I can add the commits manually, since attaching a thread isn’t
> limited to threads in -hackers.  This way we have more time to figure out
> an
> automated way for the next CF.  I’ve added the commit for my patch here as
> an
> example:
>
> https://commitfest.postgresql.org/14/1245/


That's a good start.

We can also add a separate field that's just a comma separated list of
commit hashes that auto-links to the git server if we want. *That* would be
a trivial addition. Actually auto-populating it would probably be a lot
less so, but for a manually managed one it should be fairly easy.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [HACKERS] adding the commit to a patch's thread

2017-09-03 Thread Daniel Gustafsson
> On 03 Sep 2017, at 19:33, Magnus Hagander  wrote:
> 
> On Sun, Sep 3, 2017 at 12:00 AM, Daniel Gustafsson  > wrote:
> > On 01 Sep 2017, at 15:40, Robert Haas  > > wrote:
> >
> > On Fri, Sep 1, 2017 at 4:26 AM, Alvaro Herrera  > > wrote:
> >> Erik Rijkers wrote:
> >>> Would it be possible to change the commitfest a bit and make it possible 
> >>> to
> >>> add the commit (or commit-message, or hash) to the thread in the
> >>> commitfest-app.
> >>
> >> +1 to add one or more commit hashes to CF entry metadata.
> >>
> >> (Back-filling for old entries welcome)
> >
> > Couldn't the CF app scrape the commit messages for references to
> > threads, and if the commit message points to a thread with exactly 1
> > patch record, associate the commit to that patch?
> 
> Since there is a Gitlink field in the patch metadata, we could start simple
> with setting that to link to the commit on pg gitweb?  While adding the thread
> of the commit from -committers would be great as well, this gets us off the
> ground quickly.
> 
> The original idea behind the gitlink field was to link to a git repo/branch 
> representing the patch, for people who preferred to publish full branches for 
> those that want it.
> 
> This has been done for a grand total of 43 patches throughout (out of a total 
> of 1231).
> 
> Not sure if that's enough to say "let's not repurpose it”?

My thinking was that it wasn’t really repurposing, since the commit is quite
representative of the patch and comma separated list could house both
(especially since the former use case is quite rare).  Looking at the code
however, the Gitlink is a UrlField which I believe won’t support that so it’s
dead in the water either way.

That leaves us back at parsing/scraping -committers and adding the mailthread.
For this CF I can add the commits manually, since attaching a thread isn’t
limited to threads in -hackers.  This way we have more time to figure out an
automated way for the next CF.  I’ve added the commit for my patch here as an
example:

https://commitfest.postgresql.org/14/1245/

cheers ./daniel



Re: [HACKERS] [PATCH] Move all am-related reloption code into src/backend/access/[am-name] and get rid of relopt_kind for custom AM

2017-09-03 Thread Alvaro Herrera
Nikolay Shaplov wrote:
> In a message of 25 June 2017 21:05:49, Nikolay Shaplov wrote:
> 
> I've just made sure that patch is still applyable to the current master.
> 
> And I am still waiting for reviews :-)

I see that this patch adds a few tests for reloptions, for instance in
bloom.  I think we should split this into at least two commits: one that
adds those tests ahead of the functionality change, so that they can be
committed in advance to verify that the buildfarm likes them with the
current code, and to verify the coverage.

I also noticed that your patch adds an error message that didn't exist
previously, 

+   ereport(ERROR,
+   (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+  errmsg("col%i should not be grater than length", i)));

this seems a bit troublesome, because of the "col%i" thing and also
because of the typo.  I wonder if this means you've changed the way
sanity checks work here.

The new checks around toast table creation look like they belong to a
completely independent patch also ... the error message there goes
against message style guidelines also.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] JIT & function naming

2017-09-03 Thread Andres Freund
On 2017-09-03 10:11:37 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > Currently there's essentially a per EState counter and the generated
> > functions get named deform$n and evalexpr$n. That allows for profiling
> > of a single query, because different compiled expressions are
> > disambiguated. It even allows to run the same query over and over, still
> > giving meaningful results.  But it breaks down when running multiple
> > queries while profiling - evalexpr0 can mean something entirely
> > different for different queries.
> 
> > The best idea I have so far would be to name queries like
> > evalexpr_$fingerprint_$n, but for that we'd need fingerprinting support
> > outside of pg_stat_statement, which seems painful-ish.
> 
> Yeah.  Why not just use a static counter to give successive unique IDs
> to each query that gets JIT-compiled?  Then the function names would
> be like deform_$querynumber_$subexprnumber.

That works, but unfortunately it doesn't keep the names the same over
reruns. So if you rerun the query inside the same session - a quite
reasonable thing to get more accurate profiles - the names in the
profile will change. That makes it quite hard to compare profiles,
especially when a single execution of the query is too quick to see
something meaningful.

Greetings,

Andres Freund




Re: [HACKERS] adding the commit to a patch's thread

2017-09-03 Thread Magnus Hagander
On Sun, Sep 3, 2017 at 12:00 AM, Daniel Gustafsson  wrote:

> > On 01 Sep 2017, at 15:40, Robert Haas  wrote:
> >
> > On Fri, Sep 1, 2017 at 4:26 AM, Alvaro Herrera 
> wrote:
> >> Erik Rijkers wrote:
> >>> Would it be possible to change the commitfest a bit and make it
> possible to
> >>> add the commit (or commit-message, or hash) to the thread in the
> >>> commitfest-app.
> >>
> >> +1 to add one or more commit hashes to CF entry metadata.
> >>
> >> (Back-filling for old entries welcome)
> >
> > Couldn't the CF app scrape the commit messages for references to
> > threads, and if the commit message points to a thread with exactly 1
> > patch record, associate the commit to that patch?
>
> Since there is a Gitlink field in the patch metadata, we could start simple
> with setting that to link to the commit on pg gitweb?  While adding the
> thread
> of the commit from -committers would be great as well, this gets us off the
> ground quickly.
>

The original idea behind the gitlink field was to link to a git repo/branch
representing the patch, for people who preferred to publish full branches
for those that want it.

This has been done for a grand total of 43 patches throughout (out of a
total of 1231).

Not sure if that's enough to say "let's not repurpose it"?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [HACKERS] Fix warnings and typo in dshash

2017-09-03 Thread Tom Lane
Thomas Munro  writes:
> Aside from these 3 warnings, it looks like the other 17 are all
> "warning C4005: 'HAVE_LONG_LONG_INT_64': macro redefinition".

Oh, I think that one might be my fault.  I tweaked pg_config.h.win32
in 9d6b160d7 to use "#define HAVE_LONG_LONG_INT_64 1", for consistency
with what happens in an autoconf'd build.  But now I see that
Solution.pm has another definition of that macro.  (That sure looks
like a mighty ad-hoc way of building ecpg_config.h, but whatever.)

> I wonder if it would make sense to fix that too and then turn on the
> MSVC equivalent of -Werror=xxx on a build farm animal...

I don't have enough experience with MSVC to know if we want to commit
to being 100% warning-free forevermore on it.  But sure, somebody
should try that on an experimental basis to see what happens.

regards, tom lane




Re: [HACKERS] Fix warnings and typo in dshash

2017-09-03 Thread Tom Lane
Amit Kapila  writes:
> I am seeing below warnings (on Win7) in dshash.c
> 1>  dshash.c
> 1>src/backend/lib/dshash.c(318): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)
> 1>src/backend/lib/dshash.c(679): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)
> 1>src/backend/lib/dshash.c(713): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)

> Attached a patch to fix the above warning.

That will just make for different warnings on 32-bit machines.
Perhaps casting to size_t is appropriate here.

regards, tom lane




Re: [HACKERS] JIT & function naming

2017-09-03 Thread Tom Lane
Andres Freund  writes:
> Currently there's essentially a per EState counter and the generated
> functions get named deform$n and evalexpr$n. That allows for profiling
> of a single query, because different compiled expressions are
> disambiguated. It even allows to run the same query over and over, still
> giving meaningful results.  But it breaks down when running multiple
> queries while profiling - evalexpr0 can mean something entirely
> different for different queries.

> The best idea I have so far would be to name queries like
> evalexpr_$fingerprint_$n, but for that we'd need fingerprinting support
> outside of pg_stat_statement, which seems painful-ish.

Yeah.  Why not just use a static counter to give successive unique IDs
to each query that gets JIT-compiled?  Then the function names would
be like deform_$querynumber_$subexprnumber.

regards, tom lane




Re: [HACKERS] Adding support for Default partition in partitioning

2017-09-03 Thread Jeevan Ladhe
On Sat, Sep 2, 2017 at 7:03 AM, Robert Haas  wrote:

> On Fri, Sep 1, 2017 at 3:19 PM, Robert Haas  wrote:
> > On Thu, Aug 31, 2017 at 8:53 AM, Jeevan Ladhe
> >  wrote:
> >> 0001:
> >> This patch refactors RelationBuildPartitionDesc(), basically this is
> patch
> >> 0001 of default range partition[1].
> >
> > I spent a while studying this; it seems to be simpler and there's no
> > real downside.  So, committed.
>
>
Thanks Robert for taking care of this.


> BTW, the rest of this series seems to need a rebase.  The changes to
> insert.sql conflicted with 30833ba154e0c1106d61e3270242dc5999a3e4f3.


Will rebase the patches.

Regards,
Jeevan Ladhe


Re: [HACKERS] UPDATE of partition key

2017-09-03 Thread Amit Khandekar
On 31 August 2017 at 14:15, Amit Khandekar  wrote:
> Thanks Dilip. I am working on rebasing the patch. Particularly, the
> partition walker in my patch depended on the fact that all the tables
> get opened (and then closed) while creating the tuple routing info.
> But in HEAD, now only the partitioned tables get opened. So need some
> changes in my patch.
>
> The partition walker related changes are going to be inapplicable once
> the other thread [1] commits the changes for expansion of inheritence
> in bound-order, but till then I would have to rebase the partition
> walker changes over HEAD.
>
> [1] 
> https://www.postgresql.org/message-id/0118a1f2-84bb-19a7-b906-dec040a206f2%40lab.ntt.co.jp
>

After recent commit 30833ba154, now the partitions are expanded in
depth-first order. It didn't seem worthwhile rebasing my partition
walker changes onto the latest code. So in the attached patch, I have
removed all the partition walker changes. But
RelationGetPartitionDispatchInfo() traverses in breadth-first order,
which is different than the update result rels order (because
inheritance expansion order is depth-first). So, in order to make the
tuple-routing-related leaf partitions in the same order as that of the
update result rels, we would have to make changes in
RelationGetPartitionDispatchInfo(), which I am not sure whether it is
going to be done as part of the thread "expanding inheritance in
partition bound order" [1]. For now, in the attached patch, I have
reverted back to the hash table method to find the leaf partitions in
the update result rels.

[1] 
https://www.postgresql.org/message-id/CAJ3gD9eyudCNU6V-veMme%2BeyzfX_ey%2BgEzULMzOw26c3f9rzdg%40mail.gmail.com

Thanks
-Amit Khandekar




Re: [HACKERS] Fix warnings and typo in dshash

2017-09-03 Thread Thomas Munro
On Sun, Sep 3, 2017 at 6:57 PM, Amit Kapila  wrote:
> I am seeing below warnings (on Win7) in dshash.c
>
> 1>  dshash.c
> 1>src/backend/lib/dshash.c(318): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)
> 1>src/backend/lib/dshash.c(679): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)
> 1>src/backend/lib/dshash.c(713): warning C4334: '<<' : result of
> 32-bit shift implicitly converted to 64 bits (was 64-bit shift
> intended?)

Thanks!  That's a handy warning to have.  I see that it is also
visible on the build farm:

https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=caecilian&dt=2017-09-02%2019%3A30%3A30&stg=make

Aside from these 3 warnings, it looks like the other 17 are all
"warning C4005: 'HAVE_LONG_LONG_INT_64': macro redefinition".  I
wonder if it would make sense to fix that too and then turn on the
MSVC equivalent of -Werror=xxx on a build farm animal...

> Attached a patch to fix the above warning.

I think it should be (size_t) 1, not UINT64CONST(1).  See attached.

> I have noticed a typo in dshash.h for which a separate patch is attached.

+1

-- 
Thomas Munro
http://www.enterprisedb.com


fix_warning.patch
Description: Binary data



Re: [HACKERS] expanding inheritance in partition bound order

2017-09-03 Thread Amit Khandekar
On 31 August 2017 at 13:06, Amit Langote  wrote:
>> Mind you, that idea has some problems anyway in the face of default
>> partitions, null partitions, and list partitions which accept
>> non-contiguous values (e.g. one partition for 1, 3, 5; another for 2,
>> 4, 6).  We might need to mark the PartitionDesc to indicate whether
>> PartitionDesc-order is in fact bound-order in a particular instance,
>> or something like that.
>
> ISTM, the primary motivation for the EIBO patch at this point is to get
> the partitions ordered in a predictable manner so that the partition-wise
> join patch and update partition key patches could implement certain logic
> using an O(n) algorithm rather than an O(n^2) one.  Neither of them depends
> on the actual order in the sense of, say, sticking a PathKey to the
> resulting Append.

Now that the inheritance hierarchy is expanded in depth-first order,
RelationGetPartitionDispatchInfo() needs to be changed to arrange the
PartitionDispatch array and the leaf partitions in depth-first order
(as we know this is a requirement for the update-partition-key patch
for efficiently determining which of the leaf partitions are already
present in the update result rels). Amit, I am not sure if you are
already doing this as part of the patches in this mail thread. Please
let me know. Also, let me know if you think there will be any loss of
efficiency in tuple routing code if we arrange the Partition Dispatch
indexes in depth-first order.




Re: [HACKERS] JIT & function naming

2017-09-03 Thread Konstantin Knizhnik

On 09/03/2017 02:59 AM, Andres Freund wrote:

Hi,

On 2017-08-31 23:41:31 -0700, Andres Freund wrote:

I previously had an early prototype of JITing [1] expression evaluation
and tuple deforming.  I've since then worked a lot on this.

Here's an initial, not really pretty but functional, submission.

One of the things I'm not really happy about yet is the naming of the
generated functions. Those primarily matter when doing profiling, where
the function name will show up when the profiler supports JIT stuff
(e.g. with a patch I proposed to LLVM that emits perf compatible output,
there's also existing LLVM support for a profiler by intel and
oprofile).

Currently there's essentially a per EState counter and the generated
functions get named deform$n and evalexpr$n. That allows for profiling
of a single query, because different compiled expressions are
disambiguated. It even allows to run the same query over and over, still
giving meaningful results.  But it breaks down when running multiple
queries while profiling - evalexpr0 can mean something entirely
different for different queries.

The best idea I have so far would be to name queries like
evalexpr_$fingerprint_$n, but for that we'd need fingerprinting support
outside of pg_stat_statement, which seems painful-ish.

Perhaps somebody has a better idea?


As far as I understand, we do not need a precise fingerprint.
So maybe just calculate some lightweight fingerprint?
For example, take the query text (es_sourceText from EState), replace all
non-alphanumeric characters with '_', and take the first N (16?) characters
of the result?
It seems to me that in most cases it will help to identify the query...


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


