Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-21 Thread Amit Langote
On 2017/05/20 9:01, Thomas Munro wrote:
> On Sat, May 20, 2017 at 10:43 AM, Thomas Munro
>  wrote:
>> On Fri, May 19, 2017 at 6:35 PM, Amit Langote
>>  wrote:
>>> On 2017/05/19 15:16, Thomas Munro wrote:
 Would TransitionCaptureState be a better name for this struct?
>>>
>>> Yes.  Although losing the Trigger prefix might make it sound a bit
>>> ambiguous.  Right above its definition, we have TriggerData.  So,
>>> maybe TriggerTransitionCaptureState, TriggerTransitionCaptureData, or
>>> TriggerTransitionData may be worth considering.
>>
>> Ok, here's a version using TransitionCaptureState.  Those other names
>> seem too long, and "TriggerTransition" is already in use so
>> "TriggerTransitionData" seems off the table.  Having the word
>> "capture" in there seems good, since this is an object that controls
>> what we capture when we modify a set of tables.  I hope
>> that's clear.

I agree.  TransitionCaptureState sounds good.
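For illustration, the kind of state object Thomas describes -- something that records which transition tables a statement must populate, so that row-level code knows what to capture -- might be sketched like this. All names and fields here are hypothetical; this is not PostgreSQL's actual struct:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch -- NOT PostgreSQL's actual definition.  A
 * capture-state object records which transition tables the statement
 * must populate, so per-row code knows what to capture. */
typedef struct TransitionCaptureStateSketch
{
    bool    capture_old;    /* capture deleted / updated-away rows? */
    bool    capture_new;    /* capture inserted / updated-to rows? */
    size_t  n_old;          /* rows captured so far into OLD TABLE */
    size_t  n_new;          /* rows captured so far into NEW TABLE */
} TransitionCaptureStateSketch;

/* Called once per modified row; bumps the appropriate counters. */
static void
capture_row(TransitionCaptureStateSketch *tcs, bool have_old, bool have_new)
{
    if (tcs->capture_old && have_old)
        tcs->n_old++;
    if (tcs->capture_new && have_new)
        tcs->n_new++;
}
```

The point of centralizing this in one object, as the thread discusses, is that a single statement may touch many tables (inheritance children, partitions), all of which must feed the same capture state.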

> Sent too soon.  Several variables should also be renamed to make clear
> they refer to the transition capture state in effect, instead of vague
> names like 'transitions'.  Sorry for the version churn.

Ah, I was kind of getting distracted by it earlier too; thanks.

Anyway, the patch looks good to me.

Thanks,
Amit



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [POC] hash partitioning

2017-05-21 Thread Ashutosh Bapat
On Fri, May 19, 2017 at 10:35 PM, Robert Haas  wrote:
>
> + * For range and list partitioned tables, datums is an array of datum-tuples
> + * with key->partnatts datums each.
> + * For hash partitioned tables, it is an array of datum-tuples with 2 datums,
> + * modulus and remainder, corresponding to a given partition.
>
> Second line is very short; reflow as one paragraph.
>
>   * In case of range partitioning, it stores one entry per distinct range
>   * datum, which is the index of the partition for which a given datum
>   * is an upper bound.
> + * In the case of hash partitioning, the number of the entries in the indexes
> + * array is same as the greatest modulus amongst all partitions. For a given
> + * partition key datum-tuple, the index of the partition which would accept that
> + * datum-tuple would be given by the entry pointed by remainder produced when
> + * hash value of the datum-tuple is divided by the greatest modulus.
>
> Insert line break before the new text as a paragraph break.

The prologue is arranged as one paragraph (with a new line) per
member.  Within each paragraph, the explanation for each partitioning
strategy starts on its own line.  One paragraph per member is more
readable than separate paragraphs for each member and strategy.

>
> I don't really see why hash partitioning needs to touch
> partition_bounds_equal() at all.  Why can't the existing logic work
> for hash partitioning without change?

Right now it compares partnatts datum values for list and range
partitioning, whereas for hash it needs to compare two datums per
partition: the modulus and the remainder.  Hence the difference.
Further, I suggested that we use the fact that, for hash partitioning,
equality of the indexes arrays implies equality of the bounds.
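To make the remainder-to-partition mapping in the quoted comment concrete: the indexes array has one slot per remainder modulo the greatest modulus among all partitions, and each slot names the partition that accepts that remainder. The function name and the example layout below are invented for illustration; this is not the patch's code:

```c
#include <stdint.h>

/* Sketch of the routing scheme described above: look up the remainder
 * of the key's hash value, modulo the greatest modulus, in the
 * indexes array.  Illustrative only -- not PostgreSQL source. */
static int
hash_partition_for(uint64_t hashval, const int *indexes, int greatest_modulus)
{
    return indexes[hashval % (uint64_t) greatest_modulus];
}
```

For example, with a greatest modulus of 4, a partition with (modulus 2, remainder 0) would own slots 0 and 2, while partitions with (4, 1) and (4, 3) own slots 1 and 3.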

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-21 Thread Pavel Stehule
Hi

2017-04-04 23:01 GMT+02:00 Pavel Stehule :

>
>
> 2017-04-04 22:05 GMT+02:00 Fabien COELHO :
>
>>
>> After some discussion about what could be useful now that psql scripts
>> accept tests, this patch sets a few variables which can be used by psql
>> after a "front door" (i.e. actually typed by the user) query:
>>
>>  - RESULT_STATUS: the status of the query
>>  - ERROR: whether the query failed
>>  - ERROR_MESSAGE: ...
>>  - ROW_COUNT: #rows affected
>>
>>  SELECT * FROM <table>;
>>  \if :ERROR
>>\echo oops
>>\q
>>  \endif
>>
>> I'm not sure that the names are right. Maybe STATUS would be better than
>> RESULT_STATUS.
>
>
I am sending review of this patch:

1. I agree that STATUS is a better name than RESULT_STATUS.  Currently it
returns values with the PGRES prefix (like PGRES_FATAL_ERROR,
PGRES_TUPLES_OK).  Maybe we should cut this prefix; FATAL_ERROR or
TUPLES_OK looks better at this level, and the PGRES prefix makes no sense
in psql.

2. I propose also exposing ERROR_CODE -- sometimes it can be more
practical than parsing a possibly translated error message.

3. The fields ERROR_MESSAGE and ROW_COUNT are set only when they are
meaningful.  That behavior may be too strict for psql, and processing then
needs more nested \if commands.  What do you think about -1 or 0 for
ROW_COUNT (for DDL) and "" for ERROR_MESSAGE when there is no error?  It
would be consistent with the already implemented LASTOID variable (and
other psql state variables).  Using default values is not strictly clean,
but it can reduce the complexity of psql scripts.

4. All regression tests passed.
5. There are no problems with doc building.

Regards

Pavel




>
> good ideas
>
> Regards
>
> Pavel
>
>>
>>
>> --
>> Fabien.
>>


Re: [HACKERS] asynchronous execution

2017-05-21 Thread Kyotaro HORIGUCHI
At Mon, 22 May 2017 13:12:14 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170522.131214.20936668.horiguchi.kyot...@lab.ntt.co.jp>
> > The attached patch is rebased on the current master, but no
> > substantial changes other than disallowing partitioned tables on
> > async by assertion.
> 
> This is just rebased onto the current master (d761fe2).
> I'll recheck further detail after this.

Sorry, the patch was missing some files to add.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From 000f0465a59cdabd02f43e886c76c89c14d987a5 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Mon, 22 May 2017 12:42:58 +0900
Subject: [PATCH 1/4] Allow wait event set to be registered to resource owner

A WaitEventSet needs to be released via a resource owner in a certain
case. This change adds resource owner tracking to WaitEventSet and
allows the creator of a WaitEventSet to specify a resource owner.
---
 src/backend/libpq/pqcomm.c|  2 +-
 src/backend/storage/ipc/latch.c   | 18 ++-
 src/backend/storage/lmgr/condition_variable.c |  2 +-
 src/backend/utils/resowner/resowner.c | 68 +++
 src/include/storage/latch.h   |  4 +-
 src/include/utils/resowner_private.h  |  8 
 6 files changed, 97 insertions(+), 5 deletions(-)

diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index d1cc38b..1c34114 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -201,7 +201,7 @@ pq_init(void)
 (errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
-	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
+	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, NULL, 3);
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 	  NULL, NULL);
 	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 53e6bf2..8c182a2 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -51,6 +51,7 @@
 #include "storage/latch.h"
 #include "storage/pmsignal.h"
 #include "storage/shmem.h"
+#include "utils/resowner_private.h"
 
 /*
  * Select the fd readiness primitive to use. Normally the "most modern"
@@ -77,6 +78,8 @@ struct WaitEventSet
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
 
+	ResourceOwner	resowner;	/* Resource owner */
+
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
 	 * set is waiting for.
@@ -359,7 +362,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	int			ret = 0;
 	int			rc;
 	WaitEvent	event;
-	WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, 3);
+	WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, NULL, 3);
 
 	if (wakeEvents & WL_TIMEOUT)
 		Assert(timeout >= 0);
@@ -518,12 +521,15 @@ ResetLatch(volatile Latch *latch)
  * WaitEventSetWait().
  */
 WaitEventSet *
-CreateWaitEventSet(MemoryContext context, int nevents)
+CreateWaitEventSet(MemoryContext context, ResourceOwner res, int nevents)
 {
 	WaitEventSet *set;
 	char	   *data;
 	Size		sz = 0;
 
+	if (res)
+		ResourceOwnerEnlargeWESs(res);
+
 	/*
 	 * Use MAXALIGN size/alignment to guarantee that later uses of memory are
 	 * aligned correctly. E.g. epoll_event might need 8 byte alignment on some
@@ -592,6 +598,11 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 	StaticAssertStmt(WSA_INVALID_EVENT == NULL, "");
 #endif
 
+	/* Register this wait event set if requested */
+	set->resowner = res;
+	if (res)
+		ResourceOwnerRememberWES(set->resowner, set);
+
 	return set;
 }
 
@@ -633,6 +644,9 @@ FreeWaitEventSet(WaitEventSet *set)
 	}
 #endif
 
+	if (set->resowner != NULL)
+		ResourceOwnerForgetWES(set->resowner, set);
+
 	pfree(set);
 }
 
diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c
index 5afb211..1d9111e 100644
--- a/src/backend/storage/lmgr/condition_variable.c
+++ b/src/backend/storage/lmgr/condition_variable.c
@@ -66,7 +66,7 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv)
 	/* Create a reusable WaitEventSet. */
 	if (cv_wait_event_set == NULL)
 	{
-		cv_wait_event_set = CreateWaitEventSet(TopMemoryContext, 1);
+		cv_wait_event_set = CreateWaitEventSet(TopMemoryContext, NULL, 1);
 		AddWaitEventToSet(cv_wait_event_set, WL_LATCH_SET, PGINVALID_SOCKET,
 		  >procLatch, NULL);
 	}
diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c
index af46d78..a1a1121 100644
--- a/src/backend/utils/resowner/resowner.c
+++ b/src/backend/utils/resowner/resowner.c
@@ -124,6 +124,7 @@ typedef struct ResourceOwnerData
 	ResourceArray snapshotarr;	/* snapshot references */
 	ResourceArray filearr;		/* open temporary files */
 	ResourceArray dsmarr;		/* dynamic 
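The remember/forget protocol the patch introduces -- register a WaitEventSet with its owner at creation, deregister it in FreeWaitEventSet, so the owner can centrally clean up anything left over -- can be sketched generically. Everything below (the Owner struct and the function names) is hypothetical and only illustrates the pattern, not PostgreSQL's ResourceOwner machinery:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_TRACKED 8

/* Generic sketch of the remember/forget protocol: resources register
 * with an owner on creation and deregister on normal release;
 * anything still registered when the owner is released was leaked and
 * gets cleaned up centrally.  Not PostgreSQL's ResourceOwner. */
typedef struct Owner
{
    void   *tracked[MAX_TRACKED];
    int     ntracked;
} Owner;

static void
owner_remember(Owner *o, void *res)
{
    assert(o->ntracked < MAX_TRACKED);
    o->tracked[o->ntracked++] = res;
}

static void
owner_forget(Owner *o, void *res)
{
    for (int i = 0; i < o->ntracked; i++)
    {
        if (o->tracked[i] == res)
        {
            /* swap-with-last removal; order is irrelevant */
            o->tracked[i] = o->tracked[--o->ntracked];
            return;
        }
    }
}

/* Returns the number of leaked resources the owner had to clean up;
 * a real owner would actually free each one here. */
static int
owner_release(Owner *o)
{
    int leaked = o->ntracked;

    o->ntracked = 0;
    return leaked;
}
```

This mirrors the patch's shape: CreateWaitEventSet calls the "remember" step when a resource owner is supplied, and FreeWaitEventSet calls the "forget" step.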

Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-21 Thread Piotr Stefaniak
On 2017-05-22 01:50, Tom Lane wrote:
> Being lazy, I just wiped my copy and re-cloned, but it still seems the
> same as before ... last commit on the pass3 branch is from Mar 4.
> What branch should I be paying attention to?

Sorry for the trouble, this is because I interactively git-rebased it in
order to rewrite history. I do this to limit the number of commits to
the FreeBSD repository. Next time I'll leave fix-ups in chronological
order and meld them with what they fix just before committing to the
FreeBSD repository.

pass3 is the right branch. A fresh clone should have worked as in the
attached log.
me@t520 /tmp> git clone https://github.com/pstef/freebsd_indent
Cloning into 'freebsd_indent'...
remote: Counting objects: 640, done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 640 (delta 128), reused 151 (delta 116), pack-reused 469
Receiving objects: 100% (640/640), 270.61 KiB | 66.00 KiB/s, done.
Resolving deltas: 100% (409/409), done.
Checking connectivity... done.
me@t520 /tmp> cd freebsd_indent
me@t520 /t/freebsd_indent> bmake CC=clang CFLAGS='-D__FBSDID="static char 
*rcsid=" -g -O0'
clang -D__FBSDID="static char *rcsid=" -g -O0-c indent.c
clang -D__FBSDID="static char *rcsid=" -g -O0-c io.c
clang -D__FBSDID="static char *rcsid=" -g -O0-c lexi.c
clang -D__FBSDID="static char *rcsid=" -g -O0-c parse.c
clang -D__FBSDID="static char *rcsid=" -g -O0-c pr_comment.c
clang -D__FBSDID="static char *rcsid=" -g -O0-c args.c
clang   -o indent  indent.o io.o lexi.o parse.o pr_comment.o args.o 
nroff -man indent.1 > indent.cat1
me@t520 /t/freebsd_indent> cp ~/postgres/freebsd_indent/test.c .
me@t520 /t/freebsd_indent> ./indent -bad -bap -bc -bl -d0 -cdb -nce -nfc1 -di12 
-i4 -l79 -lp -nip -npro -bbb -cli1 
-U/home/me/pgindent-test/git/src/tools/pgindent/typedefs.list < test.c
typedef struct GinBtreeData
{
/* search methods */
BlockNumber (*findChildPage) (GinBtree, GinBtreeStack *);
BlockNumber (*getLeftMostChild) (GinBtree, Page);
bool(*isMoveRight) (GinBtree, Page);
bool(*findItem) (GinBtree, GinBtreeStack *);

/* insert methods */
OffsetNumber (*findChildPtr) (GinBtree, Page, BlockNumber, OffsetNumber);
GinPlaceToPageRC (*beginPlaceToPage) (GinBtree, Buffer, GinBtreeStack *, 
void *, BlockNumber, void **, Page *, Page *);
void(*execPlaceToPage) (GinBtree, Buffer, GinBtreeStack *, void *, 
BlockNumber, void *);
}


void
hardclock_device_poll(void)
{
const unsigned (*TABLE_index)[2];

if (stat(pg_data, ) != 0)
{
/*
 * The Linux Standard Base Core Specification 3.1 says this should
 * return '4, program or service status is unknown'
 * https://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-g
 * eneric/iniscrptact.html
 */
exit(is_status_request ? 4 : 1);
}
}
me@t520 /t/freebsd_indent> cat test.c
typedef struct GinBtreeData
{
/* search methods */
BlockNumber (*findChildPage) (GinBtree, GinBtreeStack *);
BlockNumber (*getLeftMostChild) (GinBtree, Page);
bool(*isMoveRight) (GinBtree, Page);
bool(*findItem) (GinBtree, GinBtreeStack *);

/* insert methods */
OffsetNumber (*findChildPtr) (GinBtree, Page, BlockNumber, 
OffsetNumber);
GinPlaceToPageRC (*beginPlaceToPage) (GinBtree, Buffer, GinBtreeStack 
*, void *, BlockNumber, void **, Page *, Page *);
void(*execPlaceToPage) (GinBtree, Buffer, GinBtreeStack *, 
void *, BlockNumber, void *);
}


void
hardclock_device_poll(void)
{
const unsigned (*TABLE_index)[2];
if (stat(pg_data, ) != 0)
{
/*
 * The Linux Standard Base Core Specification 3.1 says this 
should
 * return '4, program or service status is unknown'
 * 
https://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-g
 * eneric/iniscrptact.html
 */
exit(is_status_request ? 4 : 1);
}
}
me@t520 /t/freebsd_indent>


Re: [HACKERS] asynchronous execution

2017-05-21 Thread Kyotaro HORIGUCHI
Hello.

At Tue, 04 Apr 2017 19:25:39 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170404.192539.29699823.horiguchi.kyot...@lab.ntt.co.jp>
> The attached patch is rebased on the current master, but no
> substantial changes other than disallowing partitioned tables on
> async by assertion.

This is just rebased onto the current master (d761fe2).
I'll recheck further detail after this.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

Re: [HACKERS] Regression in join selectivity estimations when using foreign keys

2017-05-21 Thread David Rowley
On 21 May 2017 at 07:56, Tom Lane  wrote:
> I'm entirely unconvinced by this patch --- it seems to simply be throwing
> away a lot of logic.  Notably it lobotomizes the FK code altogether for
> semi/antijoin cases, but you've not shown any example that even involves
> such joins, so what's the argument for doing that?  Also, the reason
> we had the use_smallest_selectivity code in the first place was that we
> didn't believe the FK-based selectivity could be applied safely to
> outer-join cases, so simply deciding that it's OK to apply it after all
> seems insufficiently justified.

Originally I looked at just multiplying the selectivities in
use_smallest_selectivity, but on further thought I couldn't see why
using foreign keys to assist with selectivity estimation would be
unsafe when ref_is_outer is true.  We still have a very high
probability that the outer rel contains a tuple to match each of the
inner rel's tuples (because the inner side references the outer).  The
only difference between OUTER and INNER joins is that OUTER will
produce a bunch of NULL rows in place of non-matched inner rows.  That
part seems to be handled in calc_joinrel_size_estimate() by code such
as:

 if (nrows < outer_rows)
     nrows = outer_rows;

We could do much better than what we have today by also adding
outer_rows - (n_distinct inner rows on the referencing Vars), to
account for those unmatched rows.  However, I'd say that's not for
back-patching, although it may be especially worthwhile now that we
have multivariate ndistinct in PG10.
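As a toy illustration of the clamp quoted above: an outer join can never return fewer rows than its outer side, because unmatched outer rows come back NULL-extended. The numbers and the function name below are hypothetical; this is a sketch of the behaviour under discussion, not the planner's actual estimator:

```c
/* Toy outer-join size estimate: selectivity times the cross product,
 * clamped so an outer join never yields fewer rows than its outer
 * side.  Sketch only -- not calc_joinrel_size_estimate(). */
static double
outer_join_rows(double outer_rows, double inner_rows, double selectivity)
{
    double nrows = outer_rows * inner_rows * selectivity;

    if (nrows < outer_rows)
        nrows = outer_rows;
    return nrows;
}
```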

> Or in short, exhibiting one counterexample to the existing code is not
> a sufficient argument for changing things this much.  You need to give
> an argument why this is the right thing to do instead.
>
> Stepping back a bit, it seems like the core thing the FK selectivity code
> was meant to do was to prevent us from underestimating selectivity of
> multiple-clause join conditions through a false assumption of clause
> independence.  The problem with the use_smallest_selectivity code is that
> it's overestimating selectivity, but that doesn't mean that we want to go
> all the way back to the old way of doing things.  I wonder if we could get
> any traction in these dubious cases by computing the product, instead of
> minimum, of the clause selectivities (thus matching the old estimate) and
> then taking the greater of that and the FK-based selectivity.
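Tom's suggestion can be stated compactly: take the product of the clause selectivities (the old estimate) and then the greater of that and the FK-based selectivity. A minimal sketch of the proposal, not committed code:

```c
/* Sketch of the proposal quoted above: combine the clause-product
 * estimate with the FK-based estimate by taking the greater of the
 * two, so the FK number acts as a floor against underestimation. */
static double
combined_selectivity(double clause_product, double fk_selectivity)
{
    return (clause_product > fk_selectivity) ? clause_product : fk_selectivity;
}
```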

My concern with going back to selectivity multiplication was exactly
because I didn't want to go back to the original undesired behaviour
of underestimation when conditions are co-related. I'm unsure why
taking the greater is any better than just using the foreign key
selectivity. It seems senseless to use clause based selectivities at
all when we've proved the foreign key exists -- there will be no
unmatched inner tuples.

The reason I dropped the non-singleton join for SEMI and ANTI is that
I couldn't see how we could make any improvements over the original
join estimation code for this case. Perhaps I'm assuming too much
about how get_foreign_key_join_selectivity() is used by doing this?
I'm assuming that leaving the clause list intact and referring the
whole case to calc_joinrel_size_estimate() is okay to do.

The reason I added the nullfrac code in 0002 is that I was concerned
about regressing OUTER join cases where the nullfrac is accounted for
via clause_selectivity() -- although this case already regressed
between 9.5 and 9.6 for INNER joins with a non-zero nullfrac on the
join condition.

In short: if we replace the use_smallest_selectivity code with fkselec
*= clauselist_selectivity(root, removedlist, 0, jointype, sjinfo),
then I'm worried about regressing back to the old underestimations.
The extended stats code in PG10's version of clauselist_selectivity()
will ignore these join conditions, so nothing can be done yet to help
the underestimations by adding extended stats.

I also just noticed that I don't think I've got ANTI join cases
correct in the patch I sent. I'll look at that now.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-21 Thread Chapman Flack
On 05/19/17 11:41, Tom Lane wrote:

> No, nobody's done anything about allowing hash indexes to support
> uniqueness AFAIK.  I don't have a clear picture of how much work
> it would be, but it would likely be more than trivial effort;

I see what you mean. Because of the way hash values are ordered
(to allow binary search) within a page, but not between pages of
a bucket, insertion as it stands now is able to stop as soon as
it finds any page with room for the entry, but a unique-insertion
will have to check every page of the bucket for matching hashes,
and then (because only the hash and tid are in the index) chase
any of those to the heap to compare the value.

Maybe both hash collisions and overflow pages are rare enough in
practice, with reasonable data, that the performance impact would be
small, but the possibility still has to be accounted for, and the
locking may get hairier: do you now keep the lock you hold on the page
where room was found for the entry, and take another lock to walk the
remaining pages until you're sure there's no duplicate?
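A rough sketch of the unique-insertion walk described here: scan every entry in the bucket for a matching hash value, and chase each match to the heap to compare the actual value, since the index stores only (hash, tid). The names and the flat-array model of the bucket and heap are invented for illustration; this is not hash-index code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the duplicate check described above.  The index entry
 * carries only a hash and a tid; a matching hash is only a candidate
 * duplicate until the heap value is compared. */
typedef struct IndexEntry
{
    unsigned    hash;
    int         tid;    /* position in the toy "heap" array */
} IndexEntry;

static bool
unique_insert_ok(unsigned newhash, int newval,
                 const IndexEntry *bucket, size_t nentries,
                 const int *heap)
{
    for (size_t i = 0; i < nentries; i++)
    {
        if (bucket[i].hash != newhash)
            continue;               /* different hash: not a duplicate */
        if (heap[bucket[i].tid] == newval)
            return false;           /* true duplicate found in the heap */
        /* same hash, different value: mere collision, keep scanning */
    }
    return true;
}
```

Note that the scan cannot stop at the first matching hash: a collision (same hash, different value) forces it to keep walking, which is exactly why every page of the bucket must be visited.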

At least I see that interest in UNIQUE for hash indexes has been
shown on -hackers several times over the years, and is on the TODO.
Neil Conway seems to have had an idea [1] for making the locking work,
14 years ago (however relevant that might be to today's code).

... and one inquiry last year [2] did seem to get tabled because of the
lack of WAL logging, which is now a non-blocker.

I haven't seen much discussion of /why/ one would want hash-based UNIQUE.
I know my own reasons, but I'm not sure how persuasive they are in light
of the implementation realities, so maybe that makes such a discussion
worthwhile. I can start; these are the two reasons I had:

1. To a naive intuition (especially one raised more on in-memory data
   structures than the guts of databases), it just seems natural:
   hashing seems like the canonical approach to uniqueness testing
   where there's no need for ordering, intuition suggests a performance
   advantage, and so the least-astonishment principle suffers upon finding
   it isn't supported.

2. When developing a custom data type, it feels like tedious
   busy-work to have to bang out a full set of ordering operators
   for a btree operator class if there is no meaningful order for
   the type.

Maybe the intuitions behind (1) are just misinformed, the performance
ones at least, in light of Craig Ringer's low opinion of whether "hash
indexes are better than btree for anything" [3], and André Barbosa's
more recent performance comparison [4] (which does show some advantages
for hash in some circumstances, but mostly not large. The only large
advantage was in initial creation; would that be hashsort.c at work?).

But then, both [3] and [4] predate the recent projects on hash indexes
that have "made them crash safe and are on the way to making them
performant" [5], so maybe an updated comparison would be timely, or some
addition to the docs to better characterize the circumstances where hash
could be good. (Every index method newer than btree and hash has its own
part VII Internals chapter; for completeness, might it make sense to have
those for btree and hash also, even if only to broadly discuss
the conditions under which they perform especially well or poorly?)

For all sorts of indexes, would there be any use for some CREATE INDEX
syntax for a multicolumn index to say that some of its rightmost columns
aren't there to participate in the indexing scheme, but only to benefit
index-only scans? Applied to a hash index, that might offer another useful
kind of multicolumn support, which otherwise seems limited to queries
where you have the exact values of all indexed columns.

Anyway, even if my performance assumption behind (1) was too optimistic,
the astonishment when a new user finds a hash can't support uniqueness
still seems real. A related astonishment is that a hash opclass can't
support DISTINCT, and that seems like something that could be changed
with much less work than making hash indexes amcanunique. Apparently,
array comparison already looks for a btree opclass but can fall back to
a hash one, if present, for equality comparisons. Would it be difficult
to do the same for DISTINCT?

As for my reason (2), the tedium of having to bang out btree operators
for a new type with no meaningful order (which, as I've just remembered,
is necessary not just for UNIQUE constraints but even just to make
SELECT DISTINCT work), maybe there's a solution that simply reduces
the tedium. After all, if a new type has no meaningful notion of order,
an arbitrary one imposed by copy/paste of some existing opclass
for a type with the same internallength might often be good enough.
Could there be some syntactic sugar for that, say,

CREATE OPERATOR CLASS btree_foo_ops
FOR TYPE foo USING btree LIKE int4_ops;

? A transformOpclassLike function could verify that foo and the opcintype
of int4_ops have the same typlen and typbyval, and that the 

Re: [HACKERS] Re: [doc fix] PG10: wroing description on connect_timeout when multiple hosts are specified

2017-05-21 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
> Committed.  I also added a slight tweak to the wording of the documentation.

Thank you, the doc looks clearer.

Regards
Takayuki Tsunakawa




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-21 Thread Beena Emerson
Hello,

Patch for default range partition has been added. PFA the rebased v12 patch
for the same.
I have not removed the has_default variable yet.

Default range partition:
https://www.postgresql.org/message-id/CAOG9ApEYj34fWMcvBMBQ-YtqR9fTdXhdN82QEKG0SVZ6zeL1xg%40mail.gmail.com
-- 

Beena Emerson

EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


default_partition_v12_rebase.patch
Description: Binary data



[HACKERS] Default Partition for Range

2017-05-21 Thread Beena Emerson
Hello,

Many were in favour of the default partition for tables partitioned by
range [1].
Please find attached the WIP patch for the same which can be applied over
the default_partition_v12.patch.

Syntax: Same as agreed for list:
CREATE TABLE <partition_name> PARTITION OF <parent_table> DEFAULT;

Default Constraint:
Negation constraints of all existing partitions.

One case:
CREATE TABLE range1 (a int, b int) PARTITION by range (a);
CREATE TABLE range1_1 PARTITION OF range1 FOR VALUES FROM (1) TO (5);
CREATE TABLE range1_2 PARTITION OF range1 FOR VALUES FROM (7) TO (10);
CREATE TABLE range1_def PARTITION OF range1 DEFAULT;
\d+ range1_def
                                 Table "public.range1_def"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 a      | integer |           | not null |         | plain   |              |
 b      | integer |           |          |         | plain   |              |
Partition of: range1 DEFAULT
Partition constraint: (((a < 1) OR (a >= 5)) AND ((a < 7) OR (a >= 10)))
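The printed constraint reads as "outside every existing partition's range". A toy check mirroring it exactly (hypothetical function name; not server code):

```c
#include <stdbool.h>

/* Mirrors the default-partition constraint shown above:
 *   ((a < 1) OR (a >= 5)) AND ((a < 7) OR (a >= 10))
 * i.e. the negation of range1_1's [1,5) and range1_2's [7,10). */
static bool
routes_to_range1_def(int a)
{
    return (a < 1 || a >= 5) && (a < 7 || a >= 10);
}
```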

It still needs more work:
1. Handling addition of new partition after default, insertion of data, few
more bugs
2. Documentation
3. Regression tests

[1] Discussion on default partition:
https://www.postgresql.org/message-id/flat/CAOgcT0PLPge%3D5U6%3DGU5SnC3_8yutCbWWOiUva3Cw94M9zpbvgQ%40mail.gmail.com

-- 

Beena Emerson

EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


default_range_partition_WIP.patch
Description: Binary data



Re: [HACKERS] Multiple table synchronizations are processed serially

2017-05-21 Thread Masahiko Sawada
On Sat, May 20, 2017 at 4:47 AM, Peter Eisentraut
 wrote:
> On 5/19/17 01:01, Masahiko Sawada wrote:
>> Seems all four table sync workers are launched at the almost same
>> time, but three workers of them get stuck in idle transaction state
>> when creating replication slot. That is these waiting workers cannot
>> proceed its work until first connected table sync worker finishes. ps
>> command shows the followings.
>
> Creating a replication slot waits for all transactions to finish.  So if
> one of those transactions is a table copy of another subscription, it
> has to wait for that.
>
> You can avoid that by creating all the slots first and then triggering
> the initial table copy separately.
>

Thank you for suggestion!
I understood the reason why subsequent processes have to wait. If some of
the tables belonging to the publication are very large, the idle
transaction will wait for a long time, which is not good. So, as you
mentioned, it seems better to create all the slots first and then trigger
the initial table copy separately, especially in such a case.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




[HACKERS] Fix a typo in hash.c

2017-05-21 Thread Masahiko Sawada
Hi,

Attached patch for $subject.

s/curent/current/

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


fix_typo_in_hash_c.patch
Description: Binary data



Re: [HACKERS] Location of PG_CATALOG_VERSION

2017-05-21 Thread Tom Lane
Vicky Vergara  writes:
> From this message:
> https://www.postgresql.org/message-id/1585.1472410329%40sss.pgh.pa.us
> I deduced that in the code I can use
> PG_CATALOG_VERSION

No, sorry, thinko on my part.  It's CATALOG_VERSION_NO, from
<catversion.h>, that people usually use for this sort of thing.
That gives you finer grain than PG_VERSION would, although that only
matters if you're concerned about working with development versions.

regards, tom lane




[HACKERS] Location of PG_CATALOG_VERSION

2017-05-21 Thread Vicky Vergara
Hello all


The PostgreSQL version is needed internally in order to make the code
work, because, for example, the type of funcctx->max_calls changed in 9.6:



uint64_t result_count = 0;


...


#if PGSQL_VERSION > 95
funcctx->max_calls = result_count;
#else
funcctx->max_calls = (uint32_t)result_count;
#endif


PGSQL_VERSION is a result of a manipulation of the version found using 
FindPostgres.cmake (which uses pg_config --version)

From this message:
https://www.postgresql.org/message-id/1585.1472410329%40sss.pgh.pa.us




I deduced that in the code I can use

PG_CATALOG_VERSION


I made the following experiment:


#define STRINGIFY(s) XSTRINGIFY(s)
#define XSTRINGIFY(s) #s

#pragma message ("The value PGSQL_VERSION: " STRINGIFY(PGSQL_VERSION))
#ifdef PG_CATALOG_VERSION
#pragma message ("The value PG_CATALOG_VERSION: " STRINGIFY(PG_CATALOG_VERSION))
#endif



I have this result:

note: #pragma message: The value PGSQL_VERSION: 93

So PG_CATALOG_VERSION is not defined. I then went to the doxygen page to
find out which file I have to include to get the definition, but
PG_CATALOG_VERSION is not there.

So, what am I missing?

I have no problem doing more manipulations to get a value that I can use in
the #if PGSQL_VERSION > 95 comparison (like 100 for PostgreSQL 10betaX), but
if PG_CATALOG_VERSION is considered the best thing to look at, where can I
find it?

Vicky










Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-21 Thread Tom Lane
Piotr Stefaniak  writes:
>> * const unsigned(*TABLE_index)[2];
>> * OffsetNumber(*findChildPtr) (GinBtree, Page, BlockNumber, OffsetNumber);
>> * an overlength comment line is simply busted altogether

> Fixed and pushed to my github repository.

I'm pretty confused by your github repo.  It doesn't have a master
branch, and what seems to be the HEAD branch is called "pass3", but
when I tried to "git pull" just now I got


From https://github.com/pstef/freebsd_indent
 + b7b74cb...e4ccc02 pass3  -> origin/pass3  (forced update)
First, rewinding head to replay your work on top of it...
Applying: [bugfix] Don't add unneeded space in function pointer declaration.
Using index info to reconstruct a base tree...
M   indent.c
M   tests/declarations.0.stdout
Falling back to patching base and 3-way merge...
Auto-merging indent.c
CONFLICT (content): Merge conflict in indent.c
Failed to merge in the changes.
Patch failed at 0001 [bugfix] Don't add unneeded space in function pointer 
declaration.
The copy of the patch that failed is found in:
   /home/postgres/freebsd_indent/.git/rebase-apply/patch

When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".


... which is pretty odd because there were no local changes.  It should
have just done a fast-forward, I'd think.

Being lazy, I just wiped my copy and re-cloned, but it still seems the
same as before ... last commit on the pass3 branch is from Mar 4.
What branch should I be paying attention to?

> Note that indent won't wrap
> long lines like links or paths anymore. But obviously it can't join
> parts of long links that have been wrapped by previous pgindent runs.

Check, I wouldn't expect it to.  I'll probably make a manual pass to sew
split-up URLs in comments back together.

>> * the comments get formatted differently for -ts4 than -ts8
>> * extra spacing getting inserted for fairly long labels
>> * some enum declarations get the phony extra indentation
>> * comments on the same line are now run together
>> * surplus indentation for SH_ELEMENT_TYPE * data;

> Please tell me if the list of your complaints above is incomplete. I
> will analyze those next.

There's also the portability issues: __FBSDID() and bcopy() and
.

regards, tom lane




Re: [HACKERS] [ANNOUNCE] PostgreSQL 10 Beta 1 Released!

2017-05-21 Thread Huong Dangminh
> From: Bruce Momjian [mailto:br...@momjian.us]
> Sent: Saturday, May 20, 2017 1:11 AM
> On Fri, May 19, 2017 at 04:10:09AM +, Huong Dangminh wrote:
> > Hi,
> >
> > > * 10 Beta Release Notes:
> > >   https://www.postgresql.org/docs/devel/static/release-10.html
> >
> > Just a minute thing, but changing of hot_standby default value is not fully
> noted in release-10.sgml.
> > Please find the attached patch.
> 
> Patch applied.  Sorry I missed this.  I must have been so focused on
> how to add the item that I forgot the commit details.
> 

Thanks!

---
Thanks and best regards,
Dang Minh Huong
NEC Solution Innovators, Ltd.
http://www.nec-solutioninnovators.co.jp/en/





Re: [HACKERS] Allowing dash character in LTREE

2017-05-21 Thread Manuel Kniep




Manuel Kniep
Danziger Str. 116
10405 Berlin
> On 20.05.2017 at 10:27, Cyril Auburtin wrote:
> 
> It could be useful to allow the `-` char in allowed LTREE label characters
> (currently a-zA-Z0-9_,
> https://www.postgresql.org/docs/9.6/static/ltree.html)

There is a patched version here:
https://github.com/adjust/ltree

It allows almost any char but uses '::' as the label separator; maybe it's
an inspiration to do your own patch.

Manuel 



Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-21 Thread Piotr Stefaniak
> * const unsigned(*TABLE_index)[2];
> * OffsetNumber(*findChildPtr) (GinBtree, Page, BlockNumber, OffsetNumber);
> * an overlength comment line is simply busted altogether

Fixed and pushed to my github repository. Note that indent won't wrap
long lines like links or paths anymore. But obviously it can't join
parts of long links that have been wrapped by previous pgindent runs.

> * the comments get formatted differently for -ts4 than -ts8
> * extra spacing getting inserted for fairly long labels
> * some enum declarations get the phony extra indentation
> * comments on the same line are now run together
> * surplus indentation for SH_ELEMENT_TYPE * data;

Please tell me if the list of your complaints above is incomplete. I
will analyze those next.




[HACKERS] Patch: add --if-exists to pg_recvlogical

2017-05-21 Thread Rosser Schwarz
Hackers,

While doing some scripting around pg_recvlogical at $work, I found a need
for $subject. Attached, find a patch to that effect.

I tried simply to mirror the logic used elsewhere. I don't think there's
anything controversial here, but welcome any comments or suggestions.

This applies and builds successfully against master, and behaves as
designed (i.e., dropping or trying to stream from a nonexistent slot exits
cleanly). It doesn't affect any other behavior I could observe.

If accepted, this will likely need a pgindent run upon merging; I had to
give up on the rabbit hole of getting that working locally.

Thanks,

rls

-- 
:wq


pg_recvlogical--if-exists-v1.patch
Description: Binary data



[HACKERS] translatable string fixes

2017-05-21 Thread Alvaro Herrera
I noticed this entry while updating the translation for 9.6:

#: catalog/index.c:3456 commands/vacuumlazy.c:1345 commands/vacuumlazy.c:1421
#: commands/vacuumlazy.c:1610 commands/vacuumlazy.c:1820
#, c-format
msgid "%s."
msgstr "%s."

All of these correspond to errdetail printing pg_rusage_show() output.
I think these are all bogus and should be changed to
errdetail_internal() instead.  Surely if we want pg_rusage_show() output
to be translated, we should apply _() to the snprintf() call inside that
function.

At the same time, trying to append a period in the callers seems
pointless; if we really feel a strong need for that period I suggest we
add a flag to pg_rusage_show() to indicate whether to add it or not,
though my inclination is not to bother.

I also attach style fixes for other issues I found.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 96641d740437c5a002b852189754ca0a30f803fc Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Sun, 21 May 2017 10:10:52 -0400
Subject: [PATCH 1/6] Make messages identical

---
 src/backend/storage/page/bufpage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index f2a07f2111..11607827d8 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -889,7 +889,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
 			offset != MAXALIGN(offset))
 			ereport(ERROR,
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("corrupted item pointer: offset = %u, size = %u",
+					 errmsg("corrupted item pointer: offset = %u, length = %u",
 							offset, (unsigned int) size)));
 
 	if (nextitm < nitems && offnum == itemnos[nextitm])
-- 
2.11.0

From 81b220ba12027a85efb84690e9bce643bc2c93d1 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Sun, 21 May 2017 10:30:12 -0400
Subject: [PATCH 2/6] Make strings identical

---
 src/backend/utils/adt/json.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index f704418603..192f9abae1 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -2003,7 +2003,7 @@ json_object_agg_transfn(PG_FUNCTION_ARGS)
 		if (arg_type == InvalidOid)
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-					 errmsg("could not determine data type for argument 1")));
+					 errmsg("could not determine data type for argument %d", 1)));
 
 		json_categorize_type(arg_type, &state->key_category,
 							 &state->key_output_func);
@@ -2013,7 +2013,7 @@ json_object_agg_transfn(PG_FUNCTION_ARGS)
 		if (arg_type == InvalidOid)
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-					 errmsg("could not determine data type for argument 2")));
+					 errmsg("could not determine data type for argument %d", 2)));
 
 		json_categorize_type(arg_type, &state->val_category,
 							 &state->val_output_func);
-- 
2.11.0

From 8dbe584d39b4f19aab97f790652a827ecb943bbd Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Sun, 21 May 2017 12:48:15 -0400
Subject: [PATCH 3/6] correctly do not quote type name

---
 src/backend/catalog/pg_aggregate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c
index 959d3845df..ada6e6171b 100644
--- a/src/backend/catalog/pg_aggregate.c
+++ b/src/backend/catalog/pg_aggregate.c
@@ -433,7 +433,7 @@ AggregateCreate(const char *aggName,
 		if (aggTransType == INTERNALOID && func_strict(combinefn))
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
-					 errmsg("combine function with \"%s\" transition type must not be declared STRICT",
+					 errmsg("combine function with transition type %s must not be declared STRICT",
 							format_type_be(aggTransType))));
 	}
-- 
2.11.0

From d0a06f1012e461f67ae4b24ca91f629a514d7138 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Sun, 21 May 2017 13:44:20 

Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-05-21 Thread Thomas Munro
On Thu, Apr 27, 2017 at 11:03 AM, Thomas Munro
 wrote:
> On Thu, Apr 27, 2017 at 5:13 AM, Oleg Golovanov  wrote:
>> Can you actualize your patch set? The error got from
>> 0010-hj-parallel-v12.patch.
>
> I really should get around to setting up a cron job to tell me about
> that.  Here's a rebased version.

Rebased.

-- 
Thomas Munro
http://www.enterprisedb.com


parallel-shared-hash-v14.tgz
Description: GNU Zip compressed data



Re: [HACKERS] Causal reads take II

2017-05-21 Thread Thomas Munro
On Mon, May 22, 2017 at 4:10 AM, Dmitry Dolgov <9erthali...@gmail.com> wrote:
> I'm wondering about status of this patch and how can I try it out?

Hi Dmitry, thanks for your interest.

>> On 3 January 2017 at 02:43, Thomas Munro 
>> wrote:
>> The replay lag tracking patch this depends on is in the current commitfest
>
> I assume you're talking about this patch [1] (at least it's the only thread
> where I could find a `replay-lag-v16.patch`)? But `replay lag tracking` was
> returned with feedback, so what's the status of this one (`causal reads`)?

Right, replay lag tracking was committed.  I'll post a rebased causal
reads patch later today.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Causal reads take II

2017-05-21 Thread Dmitry Dolgov
Hi

I'm wondering about status of this patch and how can I try it out?

> On 3 January 2017 at 02:43, Thomas Munro 
wrote:
> The replay lag tracking patch this depends on is in the current commitfest

I assume you're talking about this patch [1] (at least it's the only thread
where I could find a `replay-lag-v16.patch`)? But `replay lag tracking` was
returned with feedback, so what's the status of this one (`causal reads`)?

> First apply replay-lag-v16.patch, then refactor-syncrep-exit-v16.patch,
> then causal-reads-v16.patch.

It would be nice to have all three of them attached (for some reason I see
only the last two of them in this thread). But anyway, there are a lot of
failed hunks when I try to apply `replay-lag-v16.patch` followed by
`refactor-syncrep-exit-v16.patch` and `causal-reads-v16.patch` (or the last
two of them separately).

[1] https://commitfest.postgresql.org/12/920/


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-05-21 Thread Fabien COELHO



Thanks for the pointer. I grepped for them without success. Here is a v4.


I am sending a review of this patch.

This patch has a trivial implementation, and there are no objections to the
new variable names.

1. there were no problems with patching or compiling
2. all regression tests passed
3. no problems with the doc build

I'll mark this patch as ready for committers


Thanks for the review.

--
Fabien.




Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-21 Thread Bruce Momjian
On Sun, May 21, 2017 at 10:12:20AM -0400, Tom Lane wrote:
> Piotr Stefaniak  writes:
> > On 2017-05-21 03:00, Tom Lane wrote:
> >> I wrote:
> >>> Also, I found two places where an overlength comment line is simply busted
> >>> altogether --- notice that a character is missing at the split point:
> 
> >> I found the cause of that: you need to apply this patch:
> 
> > I have been analyzing this and came to different conclusions.
> 
> Well, the code as it stands breaks those two comments (and a third one
> I'd failed to notice before).  With the patch I propose, the only changes
> are that those comments are left unmolested.  So even aside from the
> fact that this code is visibly unsafe, it does correspond to the symptom.

Frankly, I found it ironic that the BSD indent code, which was designed
to improve code clarity, was so confusingly written.  I went with the
sed script (and later Perl script) wrapper solution because the BSD
indent code was so confusing to me.

It seems like a "The Cobbler's children have no shoes" syndrome:


https://english.stackexchange.com/questions/159004/the-cobblers-children-have-no-shoes

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-21 Thread Tom Lane
Piotr Stefaniak  writes:
> On 2017-05-21 03:00, Tom Lane wrote:
>> I wrote:
>>> Also, I found two places where an overlength comment line is simply busted
>>> altogether --- notice that a character is missing at the split point:

>> I found the cause of that: you need to apply this patch:

> I have been analyzing this and came to different conclusions.

Well, the code as it stands breaks those two comments (and a third one
I'd failed to notice before).  With the patch I propose, the only changes
are that those comments are left unmolested.  So even aside from the
fact that this code is visibly unsafe, it does correspond to the symptom.

regards, tom lane




Re: [HACKERS] [Q] When should I use reg* types instead of oid in the system catalog?

2017-05-21 Thread MauMau
From: Tom Lane
It's probably mostly historical accident :-(.  There have been suggestions
before to convert more system catalog columns to regfoo types, but there
are serious stumbling blocks in the way:


Thank you so much for the concise, detailed explanation of the history and
current situation.  I understand that it would do more harm than good to
change existing plain oid columns to reg* types for various clients, and
that it wouldn't be very beneficial, but also not very harmful, to make
new catalogs/columns reg* types.

Regards
MauMau







Re: [HACKERS] proposal psql \gdesc

2017-05-21 Thread Pavel Stehule
2017-05-21 8:39 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> v6 patch applies cleanly; make check ok; code, comments, doc & tests ok;
> various interactive tests I did ok.
>
> Thanks for this useful little feature!
>
> Let's see what committers think about it.


Thank you

Pavel

>
>
> --
> Fabien.
>


Re: [HACKERS] proposal psql \gdesc

2017-05-21 Thread Fabien COELHO


Hello Pavel,

v6 patch applies cleanly; make check ok; code, comments, doc & tests ok; 
various interactive tests I did ok.


Thanks for this useful little feature!

Let's see what committers think about it.

--
Fabien.




Re: [HACKERS] Race conditions with WAL sender PID lookups

2017-05-21 Thread Erik Rijkers

On 2017-05-21 06:37, Erik Rijkers wrote:

On 2017-05-20 14:40, Michael Paquier wrote:
On Fri, May 19, 2017 at 3:01 PM, Masahiko Sawada wrote:

Also, as Horiguchi-san pointed out earlier, walreceiver seems to need a
similar fix.


Actually, now that I look at it, ready_to_display should as well be
protected by the lock of the WAL receiver, so it is incorrectly placed
in walreceiver.h. As you are pointing out, pg_stat_get_wal_receiver()
is lazy as well, and that's new in 10, so we have an open item here
for both of them. And I am the author for both things. No issues
spotted in walreceiverfuncs.c after review.

I am adding an open item so as both issues are fixed in PG10. With the
WAL sender part, I think that this should be a group shot.

So what do you think about the attached?



[walsnd-pid-races-v3.patch]



With this patch on current master my logical replication tests
(pgbench-over-logical-replication) run without errors for the first
time in many days (even weeks).


Unfortunately, just now another logical-replication failure occurred.  
The same as I have seen all along:


The symptom: after starting logical replication, there are no rows in
pg_stat_replication, and in the replica log, logical replication complains
about max_replication_slots being too low.  (From previous experience I
know that raising max_replication_slots does indeed 'help', but only until
the next occurrence of the same error, with the same complaint.)


Also from previous experience of this failed state I know that it can be
'cleaned up' by manually emptying these tables:
  delete from pg_subscription_rel;
  delete from pg_subscription;
  delete from pg_replication_origin;
Then it becomes possible to start a new subscription without the above 
symptoms.


I'll do some more testing and hopefully get some information that's less 
vague...



Erik Rijkers


