Re: [HACKERS] psql - add ability to test whether a variable exists

2017-09-19 Thread Robins Tharakan
I was able to test the functionality (which seemed to work fine) and fed in my 
comment to assist anyone else reviewing this patch (and intentionally let it's 
state as 'Needs Review').

While trying to provide my feedback, on hindsight I should have been more 
detailed about what I didn't test. Being my first review, I didn't understand 
that not checking a box meant 'failure'. For e.g. I read the sgml changes, 
which felt okay but didn't click 'Passed' because my env wasn't setup properly.

I've set this back to 'Needs Review' because clearly needs it.
Apologies for the noise here.

The new status of this patch is: Needs review

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql - add ability to test whether a variable exists

2017-09-19 Thread Robins Tharakan
Hi Fabien,

I was able to test the functionality (which seemed to work fine) and fed in
my comment to assist anyone else reviewing this patch (and intentionally
let it's state as 'Needs Review').

While trying to provide my feedback, on hindsight I should have been more
detailed about what I didn't test. Being my first review, I didn't
understand that not checking a box meant 'failure'. For e.g. I read the
sgml changes, which felt okay but didn't click 'Passed' because my env
wasn't setup properly.

I've set this back to 'Needs Review' because clearly needs it.
Apologies for the noise here.

-
robins

On 20 September 2017 at 16:18, Fabien COELHO  wrote:

>
> Hello Robins,
>
> Thanks for the review.
>
> The following review has been posted through the commitfest application:
>> make installcheck-world:  not tested
>> Implements feature:   tested, failed
>>
>
> Where ?
>
> Spec compliant:   not tested
>> Documentation:tested, failed
>>
>
> Where ? I just regenerated the html doc on the patch without a glitch.
>
> The patch applies cleanly and compiles + installs fine (although am unable
>> to do installcheck-world on my Cygwin setup). This is how the patch works
>> on my setup.
>>
>> $ /opt/postgres/reviewpatch/bin/psql -U postgres -h localhost
>> psql (11devel, server 9.6.1)
>> Type "help" for help.
>>
>> postgres=# \set i 1
>> postgres=# \if :{?i}
>> postgres=# \echo 'testing'
>> testing
>> postgres=# \endif
>> postgres=# \if :{?id}
>> postgres@# \echo 'testing'
>> \echo command ignored; use \endif or Ctrl-C to exit current \if block
>> postgres@# \endif
>> postgres=#
>>
>
> ISTM that this is the expected behavior.
>
> In the first case, "i" is defined, so the test is true and the echo echoes.
>
> In the second case, "id" is undefined, the test is false and the echo is
> skipped.
>
> I do not understand why you conclude that the feature "failed".
>
> --
> Fabien.
>


Re: [HACKERS] Patch: add --if-exists to pg_recvlogical

2017-09-19 Thread Rosser Schwarz
On Tue, Sep 19, 2017 at 1:12 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 9/17/17 18:21, Rosser Schwarz wrote:
> > On Fri, Sep 1, 2017 at 10:22 AM, Peter Eisentraut
> >  > > wrote:
> >> I understand the --drop-slot part.  But I don't understand what it means
> >> to ignore a missing replication slot when running --start
>


> > I'm not sure I do either, honestly. I followed the Principle of Least
> > Surprise in making it a no-op when those switches are used and the slot
> > doesn't exist, over "no one will ever do that". Because someone will.
>


> > I'm happy to hear suggestions on what to do in that case beyond exit
> > cleanly.
>


> Nonsensical option combinations should result in an error.
>

The more I think about it, I don't think it's nonsensical, though.
--create-slot --if-exists or --drop-slot --if-not-exists, obviously. I
mean, do you even logic?

Those aside, --if-exists just means a non-existent slot isn't an error
condition, doesn't it? --start --if-exists will start, if the slot exists.
Otherwise it won't, in neither case raising an error. Exactly what it says
on the tin. Perhaps the docs could make clear that combination implies
--no-loop (or at least means we'll exit immediately) if the slot does not,
in fact, exist?

Because I started work on this patch in the context of doing some scripting
around pg_recvlogical, I guess I had some vague notion that someone might
have logic in their own scripts where that state was possible (and, of
course, appropriately handled). The program exits either way: one assumes
the operator meant to do that; the other says they were wrong to have done
so.

I'm having trouble seeing a combination of options that does what it prima
facie implies as an error state. If there's broader consensus that's how it
should behave, I'll happily defer, though.

To that end, could I solicit some input from the broader audience?

It appears that you have removed the interaction of --start and
> --if-exists in your last patch, but the documentation patch still
> mentions it.  Which is correct?
>

Pardon? I've had a bit on my plate recently, so it's entirely possible, if
not likely, that I missed something, but isn't that this block?

@@ -267,6 +271,17 @@ StreamLogicalLog(void)
  res = PQexec(conn, query->data);
  if (PQresultStatus(res) != PGRES_COPY_BOTH)
  {
+ const char* sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
+
+ if (slot_not_exists_ok &&
+ sqlstate &&
+ strcmp(sqlstate, ERRCODE_UNKNOWN_OBJECT) == 0)
+ {
+ destroyPQExpBuffer(query);
+ PQclear(res);
+ disconnect_and_exit(0);
+ }
+
  fprintf(stderr, _("%s: could not send replication command \"%s\": %s"),
  progname, query->data, PQresultErrorMessage(res));
  PQclear(res);

-- 
:wq


Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-09-19 Thread Fabien COELHO


Hello Masahiko-san,

v14 applies, compiles and works. TAP tests provide good coverage.

ISTM that you probably intended "\(.*\)" (actual parenthesis) instead of 
"(.*)" (memorization) in the data generation message check.


Otherwise all is well for me.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql - add ability to test whether a variable exists

2017-09-19 Thread Fabien COELHO


Hello Robins,

Thanks for the review.


The following review has been posted through the commitfest application:
make installcheck-world:  not tested
Implements feature:   tested, failed


Where ?


Spec compliant:   not tested
Documentation:tested, failed


Where ? I just regenerated the html doc on the patch without a glitch.

The patch applies cleanly and compiles + installs fine (although am 
unable to do installcheck-world on my Cygwin setup). This is how the 
patch works on my setup.


$ /opt/postgres/reviewpatch/bin/psql -U postgres -h localhost
psql (11devel, server 9.6.1)
Type "help" for help.

postgres=# \set i 1
postgres=# \if :{?i}
postgres=# \echo 'testing'
testing
postgres=# \endif
postgres=# \if :{?id}
postgres@# \echo 'testing'
\echo command ignored; use \endif or Ctrl-C to exit current \if block
postgres@# \endif
postgres=#


ISTM that this is the expected behavior.

In the first case, "i" is defined, so the test is true and the echo 
echoes.


In the second case, "id" is undefined, the test is false and the echo is 
skipped.


I do not understand why you conclude that the feature "failed".

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SendRowDescriptionMessage() is slow for queries with a lot of columns

2017-09-19 Thread Mithun Cy
On Mon, Sep 18, 2017 at 1:38 PM, Mithun Cy  wrote:
> On Sat, Sep 16, 2017 at 3:03 AM, Andres Freund  wrote:
> So I think performance gain is visible. We saved a good amount of
> execution cycle in SendRowDescriptionMessagewhen(my callgrind report
> confirmed same) when we project a large number of columns in the query
> with these new patches.

I have tested patch, for me, patch looks good and can see improvement
in performance as a number of columns projected increases in the
query. There appear some cosmetic issues(pgindent issues + end of file
cr) in the patch if it can be considered as a valid issue they need
changes. Rest look okay for me.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error: dsa_area could not attach to a segment that has been freed

2017-09-19 Thread Gaddam Sai Ram
Thank you very much! That fixed my issue! :)
I was in an assumption that pinning the area will increase its lifetime but 
yeah after taking memory context into consideration its working fine!


Regards

G. Sai Ram







 On Wed, 20 Sep 2017 11:16:19 +0530 Thomas Munro 
 wrote 




On Fri, Sep 15, 2017 at 7:51 PM, Gaddam Sai Ram 

 wrote: 

> As i'm pinning the dsa mapping after attach, it has to stay through out 
the 

> backend session. But not sure why its freed/corrupted. 

> 

> Kindly help me in fixing this issue. Attached the copy of my extension, 

> which will reproduce the same issue. 

 

Your DSA area is pinned and the mapping is pinned, but there is one 

more thing that goes away automatically unless you nail it to the 

table: the backend-local dsa_area object which dsa_create() and 

dsa_attach() return. That's allocated in the "current memory 

context", so if you do it from your procedure simple_udf_func without 

making special arrangements it gets automatically freed at end of 

transaction. If you're going to cache it for the whole life of the 

backend, you'd better make sure it's allocated in memory context that 

lives long enough. Where you have dsa_create() and dsa_attach() 

calls, try coding like this: 

 

 MemoryContext old_context; 

 

 old_context = MemoryContextSwitchTo(TopMemoryContext); 

 area = dsa_create(...); 

 MemoryContextSwitchTo(old_context); 

 

 old_context = MemoryContextSwitchTo(TopMemoryContext); 

 area = dsa_attach(...); 

 MemoryContextSwitchTo(old_context); 

 

You'll need to #include "utils/memutils.h". 

 

-- 

Thomas Munro 

http://www.enterprisedb.com 








Re: [HACKERS] Parallel Append implementation

2017-09-19 Thread Amit Khandekar
On 11 September 2017 at 18:55, Amit Kapila  wrote:
>>> 1.
>>> + else if (IsA(subpath, MergeAppendPath))
>>> + {
>>> + MergeAppendPath *mpath = (MergeAppendPath *) subpath;
>>> +
>>> + /*
>>> + * If at all MergeAppend is partial, all its child plans have to be
>>> + * partial : we don't currently support a mix of partial and
>>> + * non-partial MergeAppend subpaths.
>>> + */
>>> + if (is_partial)
>>> + return list_concat(partial_subpaths, list_copy(mpath->subpaths));
>>>
>>> In which situation partial MergeAppendPath is generated?  Can you
>>> provide one example of such path?
>>
>> Actually currently we don't support partial paths for MergeAppendPath.
>> That code just has that if condition (is_partial) but currently that
>> condition won't be true for MergeAppendPath.
>>
>
> I think then it is better to have Assert for MergeAppend.  It is
> generally not a good idea to add code which we can never hit.

Done.

> One more thing, I think you might want to update comment atop
> add_paths_to_append_rel to reflect the new type of parallel-aware
> append path being generated. In particular, I am referring to below
> part of the comment:
>
>  * Similarly it collects partial paths from
>  * non-dummy children to create partial append paths.
>  */
> static void
> add_paths_to_append_rel()
>

Added comments.

Attached revised patch v15. There is still the open point being
discussed : whether to have non-parallel-aware partial Append path, or
always have parallel-aware Append paths.


-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


ParallelAppend_v15.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] "inconsistent page found" with checksum and wal_consistency_checking enabled

2017-09-19 Thread Kuntal Ghosh
On Wed, Sep 20, 2017 at 11:05 AM, Michael Paquier
 wrote:
> On Wed, Sep 20, 2017 at 2:26 PM, Kuntal Ghosh
>  wrote:
>> On Wed, Sep 20, 2017 at 10:22 AM, Michael Paquier
>>  wrote:
>>> On Wed, Sep 20, 2017 at 5:23 AM, Ashwin Agrawal  wrote:
 Currently, page checksum is not masked by Page masking routines used by
 wal_consistency_checking facility. So, when running `make installcheck` 
 with
 data checksum enabled and wal_consistency_checking='all', it easily and
 consistently FATALs with "inconsistent page found".
>>>
>>> Indeed. This had better be fixed before PG10 is out. I am adding an open 
>>> item.
>>>
>> Oops and surprised! How come we missed that previously. If page lsns
>> are different, checksums will be different as well. Anyhow, nice catch
>> and thanks for the patch.
>
> That happens. We have really covered maaany points during many rounds
> of reviews, still I am not surprised to see one or two things that
> fell into the cracks.
Yup, that's true. :-)


-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Make WAL segment size configurable at initdb time.

2017-09-19 Thread Andres Freund
On 2017-09-20 14:51:45 +0900, Michael Paquier wrote:
> On Wed, Sep 20, 2017 at 2:04 PM, Andres Freund  wrote:
> > Make WAL segment size configurable at initdb time.
> >
> > For performance reasons a larger segment size than the default 16MB
> > can be useful. A larger segment size has two main benefits: Firstly,
> > in setups using archiving, it makes it easier to write scripts that
> > can keep up with higher amounts of WAL, secondly, the WAL has to be
> > written and synced to disk less frequently.
> >
> > But at the same time large segment size are disadvantageous for
> > smaller databases. So far the segment size had to be configured at
> > compile time, often making it unrealistic to choose one fitting to a
> > particularly load. Therefore change it to a initdb time setting.
> >
> > This includes a breaking changes to the xlogreader.h API, which now
> > requires the current segment size to be configured.  For that and
> > similar reasons a number of binaries had to be taught how to recognize
> > the current segment size.
> 
> I have been reading through this commit, and there is one change in
> streamutil.c which is really unwelcome:
> +   /* for previous versions set the default xlog seg size */
> +   if (PQserverVersion(conn) < MINIMUM_VERSION_FOR_SHOW_CMD)
> +   {
> +   WalSegSz = DEFAULT_XLOG_SEG_SIZE;
> +   return true;
> +   }
> Because of this pg_receivewal will generate silently incorrect WAL
> segment data if connecting with builds of PostgreSQL 10 or older
> versions which do not have the default segment size of 16MB set. If
> this patch justifies such breakages, I think that this should just
> fail instead to tell users that it is better to use an older version
> of pg_receivewal compatible with the build of this server. Better safe
> than sorry.

Hm, what are you proposing to do otherwise? I'm fairly unconvinced this
is a serious problem. It'll work in like 99.95% of all clusters, and in
the ones it won't work, it'll quickly afterwards, no? Forbidding
cross-version stuff for the large majority of installations that have
the default size, for the benefit the handfull of installations where
that's not the case? Not convinced.


> --- a/src/bin/pg_test_fsync/pg_test_fsync.c
> +++ b/src/bin/pg_test_fsync/pg_test_fsync.c
> @@ -64,7 +64,7 @@ static const char *progname;
> 
>  static int secs_per_test = 5;
>  static int needs_unlink = 0;
> -static char full_buf[XLOG_SEG_SIZE],
> +static char full_buf[DEFAULT_XLOG_SEG_SIZE],
> Did you consider adding an option --wal-segzize to pg_test_fsync?

Did, yes. Don't really think it's worthwhile. If somebody wants to come
up with a patch and argue for it, please...


> + * This is default value for wal_segment_size to be used at intidb when run
> Typo here: s/intidb/initdb/.

Ah, good catch.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Minor patch to correct symbol name in logs

2017-09-19 Thread Vaishnavi Prabakaran
Hi,

Backend's lo_functions were renamed to avoid conflicting with libpq
prototypes in commit - 6fc547960dbe0b8bd6cefae5ab7ec3605a5c46fc.

Logs inside those functions still refer to old symbol names, Here is the
small patch to update the same.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Debug-log-correction-to-reflect-changed-function-nam.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Make WAL segment size configurable at initdb time.

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 2:04 PM, Andres Freund  wrote:
> Make WAL segment size configurable at initdb time.
>
> For performance reasons a larger segment size than the default 16MB
> can be useful. A larger segment size has two main benefits: Firstly,
> in setups using archiving, it makes it easier to write scripts that
> can keep up with higher amounts of WAL, secondly, the WAL has to be
> written and synced to disk less frequently.
>
> But at the same time large segment size are disadvantageous for
> smaller databases. So far the segment size had to be configured at
> compile time, often making it unrealistic to choose one fitting to a
> particularly load. Therefore change it to a initdb time setting.
>
> This includes a breaking changes to the xlogreader.h API, which now
> requires the current segment size to be configured.  For that and
> similar reasons a number of binaries had to be taught how to recognize
> the current segment size.

I have been reading through this commit, and there is one change in
streamutil.c which is really unwelcome:
+   /* for previous versions set the default xlog seg size */
+   if (PQserverVersion(conn) < MINIMUM_VERSION_FOR_SHOW_CMD)
+   {
+   WalSegSz = DEFAULT_XLOG_SEG_SIZE;
+   return true;
+   }
Because of this pg_receivewal will generate silently incorrect WAL
segment data if connecting with builds of PostgreSQL 10 or older
versions which do not have the default segment size of 16MB set. If
this patch justifies such breakages, I think that this should just
fail instead to tell users that it is better to use an older version
of pg_receivewal compatible with the build of this server. Better safe
than sorry.

--- a/src/bin/pg_test_fsync/pg_test_fsync.c
+++ b/src/bin/pg_test_fsync/pg_test_fsync.c
@@ -64,7 +64,7 @@ static const char *progname;

 static int secs_per_test = 5;
 static int needs_unlink = 0;
-static char full_buf[XLOG_SEG_SIZE],
+static char full_buf[DEFAULT_XLOG_SEG_SIZE],
Did you consider adding an option --wal-segzize to pg_test_fsync?

+ * This is default value for wal_segment_size to be used at intidb when run
Typo here: s/intidb/initdb/.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error: dsa_area could not attach to a segment that has been freed

2017-09-19 Thread Thomas Munro
On Fri, Sep 15, 2017 at 7:51 PM, Gaddam Sai Ram
 wrote:
> As i'm pinning the dsa mapping after attach, it has to stay through out the
> backend session. But not sure why its freed/corrupted.
>
> Kindly help me in fixing this issue. Attached the copy of my extension,
> which will reproduce the same issue.

Your DSA area is pinned and the mapping is pinned, but there is one
more thing that goes away automatically unless you nail it to the
table: the backend-local dsa_area object which dsa_create() and
dsa_attach() return.  That's allocated in the "current memory
context", so if you do it from your procedure simple_udf_func without
making special arrangements it gets automatically freed at end of
transaction.  If you're going to cache it for the whole life of the
backend, you'd better make sure it's allocated in memory context that
lives long enough.  Where you have dsa_create() and dsa_attach()
calls, try coding like this:

  MemoryContext old_context;

  old_context = MemoryContextSwitchTo(TopMemoryContext);
  area = dsa_create(...);
  MemoryContextSwitchTo(old_context);

  old_context = MemoryContextSwitchTo(TopMemoryContext);
  area = dsa_attach(...);
  MemoryContextSwitchTo(old_context);

You'll need to #include "utils/memutils.h".

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-09-19 Thread Masahiko Sawada
On Tue, Sep 19, 2017 at 12:41 PM, Fabien COELHO  wrote:
>
> Hello Masahiko-san,
>
>> Attached the latest version patch incorporated the tap tests.
>> Please review it.
>
>
> Patch applies, compilation & make check ok.
>
> Tests are simple and provide good coverage of new functionalities.
>
> I would suggest to add '--unlogged-tables' so speedup the tests a little.
>

Good idea, added.

> Comment: "# Custom initialization option, including a space"... ISTM that
> there is no space. Space is tested in the next test because of the v's and
> the --no-vacuum which turned them into space, which is enough.

You're right, I removed it.

> Regex are just check for the whole output, so putting twice "qr{vacuum}"
> does not check that vacuum appears twice, it checks twice that vacuum
> appears once. I do not think that it is worth trying to check for the v
> repetition, so I suggest to remove one from the first test. Repetition of '
> ' is checked with the second test.

Agreed.

> Maybe you could check that the data generation message is there.

Added the check.

Attached the latest patch. Please review it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


pgbench_custom_initialization_v14.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] "inconsistent page found" with checksum and wal_consistency_checking enabled

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 2:26 PM, Kuntal Ghosh
 wrote:
> On Wed, Sep 20, 2017 at 10:22 AM, Michael Paquier
>  wrote:
>> On Wed, Sep 20, 2017 at 5:23 AM, Ashwin Agrawal  wrote:
>>> Currently, page checksum is not masked by Page masking routines used by
>>> wal_consistency_checking facility. So, when running `make installcheck` with
>>> data checksum enabled and wal_consistency_checking='all', it easily and
>>> consistently FATALs with "inconsistent page found".
>>
>> Indeed. This had better be fixed before PG10 is out. I am adding an open 
>> item.
>>
> Oops and surprised! How come we missed that previously. If page lsns
> are different, checksums will be different as well. Anyhow, nice catch
> and thanks for the patch.

That happens. We have really covered maaany points during many rounds
of reviews, still I am not surprised to see one or two things that
fell into the cracks.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/005_encoding.pl is broken

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 12:33 AM, Tom Lane  wrote:
> That would indicate that something isn't ever retrying the worker
> start; but if that's the case, how is it that we get through the
> other subscription tests with my random-failure patch in place?

I have been able to dig into this issue further, and the problem is
indeed in the wait logic of 005_encoding.pl. It is important to wait
for the initial sync of the subscriber to happen. There is no need to
incorporate the additional wait query in wait_for_caught_up() as well.
Please see the attached which fixes the stability problems for me even
after forcing failures in launcher.c.
-- 
Michael


tap-subs-encoding.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel Append implementation

2017-09-19 Thread Amit Khandekar
On 16 September 2017 at 10:42, Amit Kapila  wrote:
> On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas  wrote:
>> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila  wrote:
>>> I think the patch stores only non-partial paths in decreasing order,
>>> what if partial paths having more costs follows those paths?
>>
>> The general picture here is that we don't want the leader to get stuck
>> inside some long-running operation because then it won't be available
>> to read tuples from the workers.  On the other hand, we don't want to
>> just have the leader do no work because that might be slow.  And in
>> most cast cases, the leader will be the first participant to arrive at
>> the Append node, because of the worker startup time.  So the idea is
>> that the workers should pick expensive things first, and the leader
>> should pick cheap things first.
>>
>
> At a broader level, the idea is good, but I think it won't turn out
> exactly like that considering your below paragraph which indicates
> that it is okay if leader picks a partial path that is costly among
> other partial paths as a leader won't be locked with that.
> Considering this is a good design for parallel append, the question is
> do we really need worker and leader to follow separate strategy for
> choosing next path.  I think the patch will be simpler if we can come
> up with a way for the worker and leader to use the same strategy to
> pick next path to process.  How about we arrange the list of paths
> such that first, all partial paths will be there and then non-partial
> paths and probably both in decreasing order of cost.  Now, both leader
> and worker can start from the beginning of the list. In most cases,
> the leader will start at the first partial path and will only ever
> need to scan non-partial path if there is no other partial path left.
> This is not bulletproof as it is possible that some worker starts
> before leader in which case leader might scan non-partial path before
> all partial paths are finished, but I think we can avoid that as well
> if we are too worried about such cases.

If there are no partial subpaths, then again the leader is likely to
take up the expensive subpath. And this scenario would not be
uncommon. And for this scenario at least, we anyway have to make it
start from cheapest one, so will have to maintain code for that logic.
Now since we anyway have to maintain that logic, why not use the same
logic for leader for all cases. Otherwise, if we try to come up with a
common logic that conditionally chooses different next plan for leader
or worker, then that logic would most probably get complicated than
the current state. Also, I don't see any performance issue if there is
a leader is running backwards while the others are going forwards.



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Index Only Scan support for cube

2017-09-19 Thread Andrey Borodin
Hi hackers!
> 23 мая 2017 г., в 14:53, Andrew Borodin  написал(а):
> 
> Here's a small patch that implements fetch function necessary for
> Index Only Scans that use cube data type.

Tom Lane have just commited d3a4f89 (Allow no-op GiST support functions to be 
omitted) Thanks, Tom! : )
"Index Only Scan support for cube" patch now is obsolete. I'm working on 
another similar patch for contribs to support GiST IOS and remove no-op support 
functions.

Best regards, Andrey Borodin.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] "inconsistent page found" with checksum and wal_consistency_checking enabled

2017-09-19 Thread Kuntal Ghosh
On Wed, Sep 20, 2017 at 10:22 AM, Michael Paquier
 wrote:
> On Wed, Sep 20, 2017 at 5:23 AM, Ashwin Agrawal  wrote:
>> Currently, page checksum is not masked by Page masking routines used by
>> wal_consistency_checking facility. So, when running `make installcheck` with
>> data checksum enabled and wal_consistency_checking='all', it easily and
>> consistently FATALs with "inconsistent page found".
>
> Indeed. This had better be fixed before PG10 is out. I am adding an open item.
>
Oops and surprised! How come we missed that previously. If page lsns
are different, checksums will be different as well. Anyhow, nice catch
and thanks for the patch.


-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Utilities for quoting literals and identifiers in Pg TAP tests

2017-09-19 Thread Craig Ringer
Hi all

Here's a little utility class I wrote for value and identifier quoting for
use in TAP tests.

Might be handy for others.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


PGValues.pm
Description: Perl program

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] increasing the default WAL segment size

2017-09-19 Thread Andres Freund
Hi,

On 2017-09-14 11:31:33 +0530, Beena Emerson wrote:
> The change looks good and is working as expected.
> PFA the updated patch after running pgindent.

I've pushed this version. Yay!  Thanks for the work Beena, everyone!

The only change I made is to run the pg_upgrade tests with a 1 MB
segment size, as discussed in [1].  We'll probably want to refine that,
but let's discuss that in the other thread.

Regards,

Andres

[1] 
http://archives.postgresql.org/message-id/20170919175457.liz3oreqiambuhca%40alap3.anarazel.de


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Craig Ringer
On 20 September 2017 at 12:16, Craig Ringer  wrote:


> The thought I had in mind upthread was to get rid of logicalrep slots
>> in favor of expanding the underlying bgworker slot with some additional
>> fields that would carry whatever extra info we need about a logicalrep
>> worker.  Such fields could also be repurposed for additional info about
>> other kinds of bgworkers, when (not if) we find out we need that.
>>
>
> That sounds OK to me personally for in-core logical rep, but it's really
> Petr and Peter who need to have a say here, not me.
>
>
Actually, I take that back. It'd bloat struct BackgroundWorker
significantly, and lead to arguments whenever logical replication needed
new fields, which it surely will. Every bgworker would pay the price. If we
added some kind of union to struct BackgroundWorker, other worker types
could later use the same space, offsetting the cost somewhat. But that'd be
no use to out-of-core workers, workers that don't actually need the extra
room, etc.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Craig Ringer
On 20 September 2017 at 12:06, Amit Kapila  wrote:

> On Wed, Sep 20, 2017 at 9:23 AM, Tom Lane  wrote:
> > Craig Ringer  writes:
> >> On 19 September 2017 at 18:04, Petr Jelinek <
> petr.jeli...@2ndquadrant.com>
> >> wrote:
> >>> If you are asking why they are not identified by the
> >>> BackgroundWorkerHandle, then it's because it's private struct and can't
> >>> be shared with other processes so there is no way to link the logical
> >>> worker info with bgworker directly.
> >
> >> I really want BackgroundWorkerHandle to be public, strong +1 from me.
> >
> > I'm confused about what you think that would accomplish.  AFAICS, the
> > point of BackgroundWorkerHandle is to allow the creator/requestor of
> > a bgworker to verify whether or not the slot in question is still
> > "owned" by its request.
> >
>
> Right, but can we avoid maintaining additional information (in_use,
> generation,..) in LogicalRepWorker which is similar to bgworker worker
> machinery (which in turn can also avoid all the housekeeping for those
> variables) if we have access to BackgroundWorkerHandle?
> 
>

As far as I can see, probably not.

Because struct BackgroundWorker only has a single by-value Datum argument,
you can't pass much more information to a bgworker than "here's a slot
number to look up what you need to know to configure yourself". That can be
a ToC position for a shm_mq, or some kind of shmem array, but you need
something. You can't pass a pointer because of ASLR and EXEC_BACKEND. If
the launcher lives significantly longer than its workers, where a worker
dying isn't fatal to the launcher, it needs to be able to re-use slots in
the fixed-size shmem array workers use to self-configure. Which means some
kind of generation counter, like we have in BackgroundWorkerHandle, and an
in_use flag.

Knowing a logical worker's bgworker has started isn't
enough. WaitForReplicationWorkerAttach needs to know it has come up
successfully as a logical replication worker. To do that, it needs to know
that the entry it's looking at in LogicalRepWorker isn't for some prior
logical replication worker that previously had the same LogicalRepWorker
slot, though not necessarily the same BackgroundWorker slot. There's no 1:1
mapping of LogicalRepWorker slot to BackgroundWorker slot to rely on.

So unless we're willing to extend struct BackgroundWorker with some kind of
union that can be used to store extra info for logical rep workers and
whatever else we might later want, I don't see LogicalRepWorker going away.
If we do that, it'll make extending the shmem for logical replication
workers harder since it'll grow the entry for every background worker, not
just logical replication workers.

Parallel query gets away without most of this because as far as I can see
parallel workers don't need to enumerate each other or look up each others'
state, which logical rep can need to do for things like initial table sync.
It doesn't appear to re-launch workers unless it has a clean slate and can
clobber the entire parallel DSM context, either. So it doesn't need to
worry about whether the worker it's talking to is the one it just launched,
or some old worker that hasn't gone away yet. The DSM segment acts as a
generation counter of sorts, with all workers in the same generation.


If we wanted to do away with LogicalRepWorker shmem slots, I suspect we'd
have to broker all inter-worker communications via shm_mq pairs, where
worker A asks the launcher for the status of worker B, or to block until
worker B does X, or whatever. Do everything over shm_mq messaging not shmem
variables. That's a pretty major change, and makes the launcher responsible
for all synchronisation and IPC. It also wouldn't work properly unless we
nuked the DSM segment containing all the queues whenever any worker died
and restarted the whole lot, which would be terribly costly in terms of
restarting transaction replays etc. Or we'd have to keep some kind of
per-shm_mq-pair "in use" flag so we knew when to nuke and reset the queue
pair for a new worker to connect, at which point we're halfway back to
where we started...


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] "inconsistent page found" with checksum and wal_consistency_checking enabled

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 5:23 AM, Ashwin Agrawal  wrote:
> Currently, page checksum is not masked by Page masking routines used by
> wal_consistency_checking facility. So, when running `make installcheck` with
> data checksum enabled and wal_consistency_checking='all', it easily and
> consistently FATALs with "inconsistent page found".

Indeed. This had better be fixed before PG10 is out. I am adding an open item.

> If anything needs to be masked on Page to perform / pass wal consistency
> checking, definitely checksum is not going to match and hence must be masked
> as well. Attaching patch to fix the same, installcheck passes with checksums
> enabled and wal_consistency_checking='all' with the fix.
>
> Clubbed to perform the masking with lsn as it sounds logical to have them
> together, as lsn is masked is all the cases so far and such is needed for
> checksum as well.

Agreed.

  * In consistency checks, the LSN of the two pages compared will likely be
- * different because of concurrent operations when the WAL is generated
- * and the state of the page when WAL is applied.
+ * different because of concurrent operations when the WAL is generated and
+ * the state of the page when WAL is applied. Also, mask out checksum as
+ * masking anything else on page means checksum is not going to match as well.
  */
Nit: Using "the LSN and the checksum" instead of the "the LSN".
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Make new crash restart test a bit more robust.

2017-09-19 Thread Andres Freund
On 2017-09-19 19:00:38 -0700, Andres Freund wrote:
> Given this fact pattern, I'll allow the case without a received error
> message in the recovery test. Objections?

Hearing none. Pushed.

While debugging this, I've also introduced a pump wrapper so that we now
get:
ok 4 - exactly one process killed with SIGQUIT
# aborting wait: program died
# stream contents: >>psql::9: WARNING:  terminating connection because 
of crash of another server process
# DETAIL:  The postmaster has commanded this server process to roll back the 
current transaction and exit, because another server process exited abnormally 
and possibly corrupted shared memory.
# HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
# psql::9: server closed the connection unexpectedly
#   This probably means the server terminated abnormally
#   before or while processing the request.
# psql::9: connection to server was lost
# <<
# pattern searched for: (?^m:MAYDAY:  terminating connection because of crash 
of another server process)
not ok 5 - psql query died successfully after SIGQUIT


Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Craig Ringer
On 20 September 2017 at 11:53, Tom Lane  wrote:

> Craig Ringer  writes:
> > On 19 September 2017 at 18:04, Petr Jelinek <
> petr.jeli...@2ndquadrant.com>
> > wrote:
> >> If you are asking why they are not identified by the
> >> BackgroundWorkerHandle, then it's because it's private struct and can't
> >> be shared with other processes so there is no way to link the logical
> >> worker info with bgworker directly.
>
> > I really want BackgroundWorkerHandle to be public, strong +1 from me.
>
> I'm confused about what you think that would accomplish.  AFAICS, the
> point of BackgroundWorkerHandle is to allow the creator/requestor of
> a bgworker to verify whether or not the slot in question is still
> "owned" by its request.  This is necessarily not useful to any other
> process, since they didn't make the request.
>

I'm using shm_mq in a sort of "connection accepter" role, where a pool of
shm_mq's are attached to by a long lived bgworker. Other backends, both
bgworkers and normal user backends, can find a free slot and attach to it
to talk to the long lived bgworker. These other backends are not
necessarily started by the long lived worker, so it doesn't have a
BackgroundWorkerHandle for them.

Currently, if the long lived bgworker dies and a peer attempts to attach to
the slot, they'll hang forever in shm_mq_wait_internal, because it cannot
use shm_mq_set_handle as described in
https://www.postgresql.org/message-id/E1XbwOs-0002Fd-H9%40gemulon.postgresql.org
to protect its self.

See also thread
https://www.postgresql.org/message-id/CAMsr%2BYHmm%3D01LsuEYR6YdZ8CLGfNK_fgdgi%2BQXUjF%2BJeLPvZQg%40mail.gmail.com
.

If the BackgroundWorkerHandle for the long-lived bgworker is copied to a
small static control shmem segment, the connecting workers can use that to
reliably bail out if the long-lived worker dies.


> The thought I had in mind upthread was to get rid of logicalrep slots
> in favor of expanding the underlying bgworker slot with some additional
> fields that would carry whatever extra info we need about a logicalrep
> worker.  Such fields could also be repurposed for additional info about
> other kinds of bgworkers, when (not if) we find out we need that.
>

That sounds OK to me personally for in-core logical rep, but it's really
Petr and Peter who need to have a say here, not me.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] why not parallel seq scan for slow functions

2017-09-19 Thread Amit Kapila
On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes  wrote:
> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro
>  wrote:
>>
>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila 
>> wrote:
>> > The attached patch fixes both the review comments as discussed above.
>
>
> that should be fixed by turning costs on the explain, as is the tradition.
>

Right.  BTW, did you get a chance to run the original test (for which
you have reported the problem) with this patch?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 12:49 PM, Amit Kapila  wrote:
> On Wed, Sep 20, 2017 at 4:25 AM, Michael Paquier
>  wrote:
>> Also, _hash_init() would need some extra work to
>> generate FPWs, but I don't think that it is necessary per its handling
>> of a per-record meta data either. So REGBUF_STANDARD could be just
>> removed from there, and there is actually no need to patch
>> src/backend/access/hash at all.
>>
>
> I think there is no need to remove it.  As per discussion above, we
> want to keep REGBUF_STANDARD for all metapage initializations for the
> matter of consistency and that will be useful for
> wal_consistency_checking in which case we anyway need full page image.

Arf, yes. You are indeed right. I misread that you still need that
anyway in XLogRecordAssemble.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Amit Kapila
On Wed, Sep 20, 2017 at 9:23 AM, Tom Lane  wrote:
> Craig Ringer  writes:
>> On 19 September 2017 at 18:04, Petr Jelinek 
>> wrote:
>>> If you are asking why they are not identified by the
>>> BackgroundWorkerHandle, then it's because it's private struct and can't
>>> be shared with other processes so there is no way to link the logical
>>> worker info with bgworker directly.
>
>> I really want BackgroundWorkerHandle to be public, strong +1 from me.
>
> I'm confused about what you think that would accomplish.  AFAICS, the
> point of BackgroundWorkerHandle is to allow the creator/requestor of
> a bgworker to verify whether or not the slot in question is still
> "owned" by its request.
>

Right, but can we avoid maintaining additional information (in_use,
generation,..) in LogicalRepWorker which is similar to bgworker worker
machinery (which in turn can also avoid all the housekeeping for those
variables) if we have access to BackgroundWorkerHandle?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Tom Lane
Craig Ringer  writes:
> On 19 September 2017 at 18:04, Petr Jelinek 
> wrote:
>> If you are asking why they are not identified by the
>> BackgroundWorkerHandle, then it's because it's private struct and can't
>> be shared with other processes so there is no way to link the logical
>> worker info with bgworker directly.

> I really want BackgroundWorkerHandle to be public, strong +1 from me.

I'm confused about what you think that would accomplish.  AFAICS, the
point of BackgroundWorkerHandle is to allow the creator/requestor of
a bgworker to verify whether or not the slot in question is still
"owned" by its request.  This is necessarily not useful to any other
process, since they didn't make the request.

The thought I had in mind upthread was to get rid of logicalrep slots
in favor of expanding the underlying bgworker slot with some additional
fields that would carry whatever extra info we need about a logicalrep
worker.  Such fields could also be repurposed for additional info about
other kinds of bgworkers, when (not if) we find out we need that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-19 Thread Amit Kapila
On Wed, Sep 20, 2017 at 4:25 AM, Michael Paquier
 wrote:
> On Wed, Sep 20, 2017 at 12:47 AM, Tom Lane  wrote:
>> Amit Kapila  writes:
>>> On Tue, Sep 19, 2017 at 9:27 AM, Michael Paquier
>>>  wrote:
 I am not saying that no index AMs take advantage FPW compressibility
 for their meta pages. There are cases like this one, as well as one
 code path in BRIN where this is useful, and it is useful as well when
 logging FPWs of the init forks for unlogged relations.
>>
>>> Hmm, why is it useful for logging FPWs of the init forks for unlogged
>>> relations?  We don't use REGBUF_STANDARD in those cases.
>>
>> But if we started to do so, that would be a concrete benefit of this
>> patch ...
>
> In the proposed set of patches, all the empty() routines part of index
> AMs which use log_newpage_buffer() (brin, gin, spgst) are doing the
> right thing by updating log_newpage_buffer(). btree also should have
> its call to log_newpage updated in btbuildempty(), and your patch is
> missing that.
>

We can add that for btree patch.

> Also, _hash_init() would need some extra work to
> generate FPWs, but I don't think that it is necessary per its handling
> of a per-record meta data either. So REGBUF_STANDARD could be just
> removed from there, and there is actually no need to patch
> src/backend/access/hash at all.
>

I think there is no need to remove it.  As per discussion above, we
want to keep REGBUF_STANDARD for all metapage initializations for the
matter of consistency and that will be useful for
wal_consistency_checking in which case we anyway need full page image.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Allow GiST opcalsses without compress\decompres functions

2017-09-19 Thread Tom Lane
Andrey Borodin  writes:
> [ 0001-Allow-uncompressed-GiST-4.patch ]

Pushed, with a bit more work on the documentation and some minor
cosmetic changes.

I did not like the fact that the new code paths added by the patch
were untested, so I went ahead and removed the no-longer-needed
no-op functions in the core GiST opclasses.  There's still room
to improve the contrib opclasses, but that's much more tedious
(you need to write an extension update script) so I didn't feel
like messing with those now.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Page Scan Mode in Hash Index

2017-09-19 Thread Amit Kapila
On Tue, Sep 19, 2017 at 9:49 PM, Robert Haas  wrote:
> On Thu, Aug 24, 2017 at 11:26 AM, Jesper Pedersen
>  wrote:
>> Based on the feedback in this thread, I have moved the patch to "Ready for
>> Committer".
>
> Reviewing 0001:
>
> _hash_readpage gets the page LSN to see if we can apply LP_DEAD hints,
> but if the table is unlogged or temporary, the LSN will never change,
> so the test in _hash_kill_items will always think that it's OK to
> apply the hints.  (This seems like it might be a pretty serious
> problem, because I'm not sure what would be a viable workaround.)
>

This point has been discussed above [1] and to avoid this problem we
are keeping the scan always behind vacuum for unlogged and temporary
tables as we are doing without this patch.   That will ensure vacuum
won't be able to remove the TIDs which we are going to mark as dead.
This has been taken care in patch 0003.  I understand that this is
slightly ugly, but the other alternative (as mentioned in the email
[1]) is much worse.

[1] - 
https://www.postgresql.org/message-id/CAA4eK1J6xiJUOidBaOt0iPsAdS0%2Bp5PoKFf1R2yVjTwrY_4snA%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Craig Ringer
On 20 September 2017 at 06:36, David Steele  wrote:

>
> I just use:
>
> $SIG{__DIE__} = sub {Carp::confess @_};
>

That's what I patched into my TestLib.pm too, until I learned of
Carp::Always.

I'd rather have Carp::Always, but it's definitely an OK fallback.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Craig Ringer
On 19 September 2017 at 20:33, Amit Kapila  wrote:


> Yeah, but you could have used the way we do for parallel query where
> we setup dsm and share all such information.  You can check the logic
> of execparallel.c and parallel.c to see how we do all such stuff for
> parallel query.


Parallel query has a very clearly scoped lifetime, which seems to help. The
parallel workers are started by a leader, whose lifetime entirely
encompasses that of the workers. They're strictly children of the leader
process.

In logical replication, the logical replication manager doesn't start the
walsenders, they're started by the postmaster in response to inbound
connections.

But the logical replication launcher does start the managers, and the
managers start the apply workers. (IIRC). If we don't mind the whole thing
dying and restarting if the launcher dies,  or workers for a db dying if a
database manager dies, then using dsm segments and detach notifications
does seem viable.

IIRC when we did something similar in pglogical we ran into problems with
(a) inability to handle an ERROR in a bgworker without exiting and being
restarted by the postmaster; and (b) the postmaster remembering bgworker
registrations across crash restart with no way to tell it not to. Maybe
Petr remembers the details?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Craig Ringer
On 19 September 2017 at 18:04, Petr Jelinek 
wrote:


>
> If you are asking why they are not identified by the
> BackgroundWorkerHandle, then it's because it's private struct and can't
> be shared with other processes so there is no way to link the logical
> worker info with bgworker directly.


I really want BackgroundWorkerHandle to be public, strong +1 from me.

It'd be very beneficial when working with shm_mq, too. Right now you cannot
pass a BackgroundWorkerHandle through shmem to another process, either via
a static shmem region or via shm_mq. This means you can't supply it to
shm_mq_attach and have shm_mq handle lifetime issues for you based on the
worker handle.

TBH I think there's a fair bit of work needed in the bgworker
infrastructure, but making BackgroundWorkerHandle public is a small and
simple start that'd be a big help.

At some point we'll also want to be able to enumerate background workers
and get handles for existing workers. Also, let background workers recover
from errors without exiting, which means factoring a bunch of stuff out of
PostgresMain. But both of those are bigger jobs.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Creating backup history files for backups taken from standbys

2017-09-19 Thread Masahiko Sawada
On Tue, Sep 19, 2017 at 2:48 PM, Masahiko Sawada  wrote:
> On Tue, Sep 19, 2017 at 8:33 AM, David Steele  wrote:
>> On 9/18/17 7:26 PM, Michael Paquier wrote:
>>> On Tue, Sep 19, 2017 at 8:14 AM, David Steele  wrote:
 On 8/31/17 11:56 PM, Michael Paquier wrote:
> Here is an updated patch with refreshed documentation, as a result of
> 449338c which was discussed in thread
> https://www.postgresql.org/message-id/d4d951b9-89c0-6bc1-b6ff-d0b2dd5a8...@pgmasters.net.
> I am just outlining the fact that a backup history file gets created
> on standbys and that it is archived.

 The patch looks good overall.  It applies cleanly and passes all tests.

 One question.  Do we want to create this file all the time (as written),
 or only when archive_mode = always?

 It appears that CleanupBackupHistory() will immediately remove the
 history file when archive_mode != always so it seems useless to write
 it.  On the other hand the code is a bit simpler this way.  Thoughts?
>>>
>>> With archive_mode = off, the bytes of the backup history file are
>>> still written to disk, so my take on the matter is to keep the code
>>> simple.
>>
>> I'm OK with that.
>>
>> I'll give Masahiko some time to respond before marking it RFC.
>>
>
> Thanks, I'll review it and send a comment by this Wednesday.
>

I've reviewed and tested the latest patch but fond no problems, so
I've marked this patch to Ready for Committer.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sync process names between ps and pg_stat_activity

2017-09-19 Thread Tom Lane
Michael Paquier  writes:
> On Fri, Sep 1, 2017 at 5:33 AM, Tom Lane  wrote:
>> Peter Eisentraut  writes:
>>> As an aside, is there a reason why the archiver process is not included
>>> in pg_stat_activity?

>> It's not connected to shared memory.

> Do you think that monitoring would be a reason sufficient to do so? My
> personal opinion on the matter is that people are more and more going
> to move on with pull (*) models (aka pg_receivewal and such with
> replication slots) instead of push (*) models (use of
> archive_command), so that monitoring of the archiver becomes less and
> less useful in the long-term. And there is also pg_stat_archiver that
> covers largely the gap for archive failures.

Meh.  I think keeping it separate from shared memory is a good thing
for reliability reasons.

> Still, one reason that could be used to connect it to shared memory is
> to control the interval of time used for archive attempts, which is
> now a interval hardcoded of 1s in pgarch_ArchiverCopyLoop(). Here more
> flexibility would be definitely useful.

AFAIR, GUC reloads work regardless of that.  If we wanted to make the
interval configurable, this would not prevent us from doing so.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sync process names between ps and pg_stat_activity

2017-09-19 Thread Michael Paquier
On Fri, Sep 1, 2017 at 5:33 AM, Tom Lane  wrote:
> Peter Eisentraut  writes:
>> As an aside, is there a reason why the archiver process is not included
>> in pg_stat_activity?
>
> It's not connected to shared memory.

Do you think that monitoring would be a reason sufficient to do so? My
personal opinion on the matter is that people are more and more going
to move on with pull (*) models (aka pg_receivewal and such with
replication slots) instead of push (*) models (use of
archive_command), so that monitoring of the archiver becomes less and
less useful in the long-term. And there is also pg_stat_archiver that
covers largely the gap for archive failures.

Still, one reason that could be used to connect it to shared memory is
to control the interval of time used for archive attempts, which is
now a interval hardcoded of 1s in pgarch_ArchiverCopyLoop(). Here more
flexibility would be definitely useful.

(*): this wording is from a colleague, not from me.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commits don't block for synchronous replication

2017-09-19 Thread Masahiko Sawada
On Tue, Sep 19, 2017 at 2:26 PM, Michael Paquier
 wrote:
> On Tue, Sep 19, 2017 at 7:50 AM, Xin Zhang  wrote:
>> If primary crashed at that moment, and failover to standby, the foo table is
>> lost, even though the replication is synced according to
>> `pg_stat_replication` view.
>
> GUC parameters are reloaded each time a query is run, and so
> SyncRepConfig is filled with the parsed data of SyncRepStandbyNames
> once the parameter is reloaded for the process. Still, here, a commit
> is waiting for a signal from a WAL sender that the wanted LSN has been
> correctly flushed on a standby so this code path does not care about
> the state of SyncRepConfig saved in the context of the process, we
> want to know what the checkpointer thinks about it. Hence using WAL
> sender data or sync_standbys_defined as a source of truth looks like a
> correct concept to me, making the problem of this bug legit.
>

I agree with the analysis and the approach of this patch.

> The check with SyncRepRequested() still holds truth: max_wal_senders
> needs a restart to be updated. Also, the other caller of
> SyncStandbysDefined() requires SyncRepConfig to be set, so this caller
> is fine.

Yeah, after reloaded the config there might be some wal senders that
don't reflect it yet but I think it cannot be a problem.

> I have looked at your patch and tested it, but found no problems
> associated with it. A backpatch would be required, so I have added an
> entry in the next commit fest with status set to "ready for committer"
> so as this bug does not fall into the cracks.

Also I found no problems in the patch.

>
>> A separate question, is the `pg_stat_replication` view the reliable way to
>> find when to failover to a standby, or there are some other ways to ensure
>> the standby is in-sync with the primary?
>
> It shows at SQL level what is currently present in shared memory by
> scanning all the WAL sender entries, so this report uses the same data
> as the backend themselves, so that's a reliable source. In Postgres
> 10, pg_stat_activity is also able to show to users what are the
> backends waiting for a change to be flushed/applied on the standby
> using the wait event called SyncRep. You could make some use of that
> as well.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Peter Geoghegan
On Tue, Sep 19, 2017 at 5:52 PM, Peter Eisentraut
 wrote:
> On 9/18/17 18:46, Peter Geoghegan wrote:
>> As I pointed out a couple of times already [1], we don't currently
>> sanitize ICU's BCP 47 language tags within CREATE COLLATION.
>
> There is no requirement that the locale strings for ICU need to be BCP
> 47.  ICU locale names like 'de@collation=phonebook' are also acceptable.

Right. But, we only document that BCP 47 is supported by Postgres.
Maybe we can use get_icu_language_tag()/uloc_toLanguageTag() to ensure
that we end up with BCP 47, even when the user happens to specify the
legacy syntax. Should we be "canonicalizing" legacy ICU locale strings
as BCP 47, too?

> The reason they are not validated is that, as you know, ICU accepts any
> locale string as valid.  You appear to have found a way to do some
> validation, but I would like to see that code.

As I mentioned, I'm simply calling
get_icu_language_tag()/uloc_toLanguageTag() to do that sanitization.
The code to get the extra sanitization is completely trivial.

I didn't post the patch that generates the errors in my opening e-mail
because I'm not confident it's correct just yet. And, I think that I
see a bigger problem: we pass a string that is almost certainly a BCP
47 string to ucol_open() from within pg_newlocale_from_collation(). We
do so despite the fact that ucol_open() apparently doesn't accept BCP
47 syntax locale strings until ICU 54 [1]. Seems entirely possible
that this accounts for the problems you saw on ICU 4.2, back when we
were still creating keyword variants (I guess that the keyword
variants seem very "BCP 47-ish" to me).

I really think we need to add some kind of debug mode that makes ICU
optionally spit out a locale display name at key points. We need this
to gain confidence that the behavior that ICU provides actually
matches what is expected across ICU different versions for different
locale formats. We talked about this as a user-facing feature before,
which can wait till v11; I just want this to debug problems like this
one.

[1] 
https://ssl.icu-project.org/apiref/icu4c/ucol_8h.html#a4721e4c0a519bb0139a874e191223590
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Boom filters for hash joins (was: A design for amcheck heapam verification)

2017-09-19 Thread Robert Haas
On Tue, Sep 19, 2017 at 4:25 PM, Tomas Vondra
 wrote:
> I haven't thought about it from that point of view. Can you elaborate
> why that would be the case? Sorry if this was explained earlier in this
> thread (I don't see it in the history, though).
>
> I can't quite remember why I haven't pursued the patch in 2015, but it
> was probably clear it wouldn't get in in the last CF, and I never got
> back to it.

IIRC, it was a clear loser performance-wise in the case where the
Bloom filter didn't end up helping, and we didn't have a way to avoid
doing it when it didn't help.  That may or may not be why you didn't
pursue it, but I'm fairly sure it was my motivation for being
unexcited about the whole idea.  I think if we can solve that problem
somehow, we have a winner.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [COMMITTERS] pgsql: Make new crash restart test a bit more robust.

2017-09-19 Thread Andres Freund
On 2017-09-19 18:06:29 -0700, Andres Freund wrote:
> On 2017-09-19 16:46:58 -0400, Tom Lane wrote:
> > Have we forgotten an fflush() or something?
> 
> After hacking a fix for my previous theory, I started adding strace into
> the mix, to verify this. Takes longer to reproduce, but after filtering
> to -e trace=network, I got this:
> 
> socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
> connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 
> ENOENT (No such file or directory)
> socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
> connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 
> ENOENT (No such file or directory)
> socket(AF_UNIX, SOCK_STREAM, 0) = 3
> connect(3, {sa_family=AF_UNIX, sun_path="/tmp/EDkYotgk3u/.s.PGSQL.57230"}, 
> 110) = 0
> getsockopt(3, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
> getsockname(3, {sa_family=AF_UNIX}, [128->2]) = 0
> sendto(3, "\0\0\0O\0\3\0\0user\0andres\0database\0pos"..., 79, MSG_NOSIGNAL, 
> NULL, 0) = 79
> recvfrom(3, "R\0\0\0\10\0\0\0\0S\0\0\0,application_name\0t"..., 16384, 0, 
> NULL, NULL) = 340
> sendto(3, "Q\0\0\0\37SELECT $$psql-connected$$;\0", 32, MSG_NOSIGNAL, NULL, 
> 0) = 32
> recvfrom(3, 
> "T\0\0\0!\0\1?column?\0\0\0\0\0\0\0\0\0\0\31\377\377\377\377\377\377"..., 
> 16384, 0, NULL, NULL) = 79
> sendto(3, "Q\0\0\0\33SELECT pg_sleep(3600);\0", 28, MSG_NOSIGNAL, NULL, 0) = 
> 28
> recvfrom(3, 0x555817dae2a0, 16384, 0, NULL, NULL) = -1 ECONNRESET (Connection 
> reset by peer)
> +++ exited with 2 +++
> 
> So indeed, we got a connreset before receiving the proper error
> message.
> 
> The corresponding server log (debug3):
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 730
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 716
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 715
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 717
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 718
> 2017-09-20 00:57:00.573 UTC [713] DEBUG:  sending SIGQUIT to process 719
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl WARNING:  
> terminating connection because of crash of another server process
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl DETAIL:  The 
> postmaster has commanded this server process to roll back the current t
> ransaction and exit, because another server process exited abnormally and 
> possibly corrupted shared memory.
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl HINT:  In a moment 
> you should be able to reconnect to the database and repeat your c
> ommand.
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl DEBUG:  
> shmem_exit(-1): 0 before_shmem_exit callbacks to make
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl DEBUG:  
> shmem_exit(-1): 0 on_shmem_exit callbacks to make
> 2017-09-20 00:57:00.573 UTC [720] DEBUG:  shmem_exit(-1): 0 before_shmem_exit 
> callbacks to make
> 2017-09-20 00:57:00.573 UTC [730] t/013_crash_restart.pl DEBUG:  
> proc_exit(-1): 0 callbacks to make
> ...
> 2017-09-20 00:57:00.577 UTC [713] DEBUG:  server process (PID 730) exited 
> with exit code 2
> 2017-09-20 00:57:00.577 UTC [713] DETAIL:  Failed process was running: SELECT 
> pg_sleep(3600);
> 2017-09-20 00:57:00.577 UTC [713] LOG:  all server processes terminated; 
> reinitializing
> 
> So the server indeed was killed by SIGQUIT, not an escalation to
> SIGKILL. And it output stuff to the server log, and didn't complain
> about communication to the client... Odd.

Hah! The reason for that is that socket_flush tries to avoid doing stuff
recursively:

static int
socket_flush(void)
{
int res;

/* No-op if reentrant call */
if (PqCommBusy)
return 0;
...

(detected by putting an elog(COMMERROR) there)

adding an abort shows the following backtrace:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7f089de4e42a in __GI_abort () at abort.c:89
#2  0x56473218a3f6 in socket_flush () at 
/home/andres/src/postgresql/src/backend/libpq/pqcomm.c:1408
#3  0x56473246e4ec in send_message_to_frontend (edata=0x5647329e34e0 
)
at /home/andres/src/postgresql/src/backend/utils/error/elog.c:3319
#4  0x56473246ad02 in EmitErrorReport () at 
/home/andres/src/postgresql/src/backend/utils/error/elog.c:1483
#5  0x5647324680af in errfinish (dummy=0) at 
/home/andres/src/postgresql/src/backend/utils/error/elog.c:483
#6  0x5647322fb340 in quickdie (postgres_signal_arg=3) at 
/home/andres/src/postgresql/src/backend/tcop/postgres.c:2608
#7  
#8  0x7f089fbb65bd in __libc_send (fd=7, buf=0x564734a22c80, n=79, flags=0) 
at ../sysdeps/unix/sysv/linux/x86_64/send.c:26
#9  0x56473217c874 in secure_raw_write (port=0x564734a1c730, 
ptr=0x564734a22c80, len=79)
at /home/andres/src/postgresql/src/backend/libpq/be-secure.c:310
#10 0x56473217c6fc

Re: [HACKERS] PG 10 release notes

2017-09-19 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
> TBH, I think that's another reason for not release-noting it.  There's no
> concrete change to point to, and so it's hard to figure out what to say.
> I'm not even very sure that we should be encouraging people to change
> existing shared_buffers settings; the experiments that Takayuki-san did
> were only on Windows 10, so we don't really know that changing that would
> be a good idea on older Windows versions.

In fact, when I ran pgbench on Windows 2008 for some other unrelated reason, I 
found larger shared buffers (4GB, 8GB) was effective, too.

I didn't know pure documentation changes are not listed on the release notes.  
But I suggest listing them (of course, excluding mere typos), because the 
documentation is also important for users as well as programs.  The 
documentation is also part of software product.  If the documented behavior and 
the actual one differs and the user is wondering which is correct, he can know 
the answer only from the release note when the mismatch is fixed.  I think the 
documentation changes are more useful for users than, for example, the 
following items:

E.1.3.11. Source Code
Improve behavior of pgindent (Piotr Stefaniak, Tom Lane)
Allow WaitLatchOrSocket() to wait for socket connection on Windows (Andres 
Freund)
Overhaul documentation build process (Alexander Lakhin)
Use XSLT to build the PostgreSQL documentation (Peter Eisentraut)


Regards
Takayuki Tsunakawa







-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 9:45 AM, Peter Eisentraut
 wrote:
> On 9/19/17 17:55, Jeff Janes wrote:
>> I guess I'm late to the party, but I don't see why this is needed at
>> all.  We encourage people to use any and all new features which are
>> appropriate to them--that is why we implement new features.  Why does
>> this feature need a special invitation?
>
> It's not clear to me how an average user would get from the press
> release or release notes to upgrading their installation to use
> SCRAM-based authentication and passwords.  A little bit more guidance
> somewhere would be helpful.
>
> The patch that Heikki posted seemed reasonable to me as a starting
> point, but there probably needs to be more "how" information somewhere.

I agree with that.

+   
+Installations using MD5 authentication are encouraged to switch to
+SCRAM-SHA-256, unless using older client programs or drivers that don't
+support it yet.
+   
I think that the addition of a link to
https://wiki.postgresql.org/wiki/List_of_drivers would be appropriate.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sync process names between ps and pg_stat_activity

2017-09-19 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Eisentraut
> > Personally, I prefer "wal writer", "wal sender" and "wal receiver"
> > that separate words as other process names.  But I don't mind leaving
> > them as they are now.
> 
> If we were to change those, that would break existing queries for
> pg_stat_activity.  That's new in PG10, so we could change it if we were
> really eager.  But it's probably not worth bothering.  Then again, there
> is pg_stat_wal_receiver.  So it's all totally inconsistent.  Not sure
> where to go.

OK, I'm comfortable with as it is now.

I made this ready for committer.  You can fix the following and commit the 
patch.  Thank you.


> >   * To achieve that, we pass "wal sender process" as username and
> > username
> 
> good catch

Regards
Takayuki Tsunakawa


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sync process names between ps and pg_stat_activity

2017-09-19 Thread Peter Eisentraut
On 9/18/17 02:07, MauMau wrote:
> (1)
> In the following comment, it's better to change "wal sender process"
> to "walsender" to follow the modified name.
> 
> - * postgres: wal sender process   
> + * postgres: walsender   
>   *
>   * To achieve that, we pass "wal sender process" as username and
> username

good catch

> (2)
> "WAL writer process" is used, not "walwriter", is used in postmaster.c
> as follows.  I guess this is for natural language.  Is this intended?
> I'm OK with either, though.
> 
> HandleChildCrash(pid, exitstatus,
>  _("WAL writer process"));

Yes, we usually use that spelling in user-facing messages.

> Personally, I prefer "wal writer", "wal sender" and "wal receiver"
> that separate words as other process names.  But I don't mind leaving
> them as they are now.

If we were to change those, that would break existing queries for
pg_stat_activity.  That's new in PG10, so we could change it if we were
really eager.  But it's probably not worth bothering.  Then again, there
is pg_stat_wal_receiver.  So it's all totally inconsistent.  Not sure
where to go.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Generic type subscripting

2017-09-19 Thread Peter Eisentraut
On 9/18/17 05:39, Arthur Zakirov wrote:
> On Mon, Sep 18, 2017 at 10:31:54AM +0200, Dmitry Dolgov wrote:
>> Just to clarify, do you mean that `CREATE SUBSCRIPTING FOR` would only make
>> a
>> dependency record? In this case `DROP SUBSCRIPTING FOR` actually means just
>> drop an init function.
> I think it would be good to add new catalog table.

Would you mind posting a summary of how you go here?

Superficially, it seems like this sort of feature should be handled by a
couple of type attributes and pg_type columns.  But you are talking
about a new system catalog and new DDL, and it's not clear to me how you
got here.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Peter Eisentraut
On 9/18/17 18:46, Peter Geoghegan wrote:
> As I pointed out a couple of times already [1], we don't currently
> sanitize ICU's BCP 47 language tags within CREATE COLLATION.

There is no requirement that the locale strings for ICU need to be BCP
47.  ICU locale names like 'de@collation=phonebook' are also acceptable.

The reason they are not validated is that, as you know, ICU accepts any
locale string as valid.  You appear to have found a way to do some
validation, but I would like to see that code.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Peter Eisentraut
On 9/19/17 17:55, Jeff Janes wrote:
> I guess I'm late to the party, but I don't see why this is needed at
> all.  We encourage people to use any and all new features which are
> appropriate to them--that is why we implement new features.  Why does
> this feature need a special invitation?

It's not clear to me how an average user would get from the press
release or release notes to upgrading their installation to use
SCRAM-based authentication and passwords.  A little bit more guidance
somewhere would be helpful.

The patch that Heikki posted seemed reasonable to me as a starting
point, but there probably needs to be more "how" information somewhere.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Jeff Janes
On Tue, Sep 19, 2017 at 4:29 PM, Michael Paquier 
wrote:

> On Wed, Sep 20, 2017 at 6:55 AM, Jeff Janes  wrote:
> > On Tue, Sep 19, 2017 at 1:32 PM, Heikki Linnakangas 
> wrote:
> >> I'm not sure what exactly to do here. Where should we stick that notice?
> >> We could put it in the release notes, where the bullet point about
> SCRAM is,
> >> but it would be well hidden. If we want to give advice to people who
> might
> >> not otherwise pay attention, it should go to a more prominent place. In
> the
> >> "Migration to version 10" section perhaps. Currently, it only lists
> >> incompatibilities, which this isn't. Perhaps put the notice after the
> list
> >> of incompatibilities (patch attached)?
> >
> > I guess I'm late to the party, but I don't see why this is needed at all.
> > We encourage people to use any and all new features which are
> appropriate to
> > them--that is why we implement new features.  Why does this feature need
> a
> > special invitation?
>
> There have been continuous complains on those lists for the last 5
> years or so that MD5 is "weak" and should be avoided. Well, Postgres
> is not wrong in the way it uses MD5 in itself, backups including raw
> MD5 hashes being more of a problem. But I would think that it is fair
> to tell in a louder to such folks that Postgres has actually done
> something on the matter.
>

People who are stressed out about it but use PostgreSQL anyway will see it
in the release notes and recognize the importance (to them) without being
told. People who don't use PostgreSQL at all due to the issue aren't going
to be reading the release notes anyway.  The place to be louder about "this
is now available" is in the announcement and press release, and in the
(currently unwritten) "E.1.1. Overview", not down in the guts.

Meanwhile the people who don't know enough about it to understand why our
use of md5 "is not wrong", will just see "encourage" and "better security"
and then go lock their users and themselves out of their database and have
a generally miserable experience.

I think the proposed invitation is too strong and warning is too weak.
Especially as there seems to be no way to audit server-side what
drivers/versions people are connecting with.  You either have to track down
every client and identify the correct binaries and run ldd against them (or
whatever you would have to do on Windows), or just go ahead and break it
and see who calls.

The people who need this don't need to be encouraged to use it, they just
need to know it exists.  The people who need to be encouraged are going to
shoot themselves in the foot.

Cheers,

Jeff


Re: [HACKERS] GUC for cleanup indexes threshold.

2017-09-19 Thread Peter Geoghegan
On Tue, Sep 19, 2017 at 4:47 PM, Claudio Freire  wrote:
> Maybe this is looking at the problem from the wrong direction.
>
> Why can't the page be added to the FSM immediately and the check be
> done at runtime when looking for a reusable page?
>
> Index FSMs currently store only 0 or 255, couldn't they store 128 for
> half-recyclable pages and make the caller re-check reusability before
> using it?

No, because it's impossible for them to know whether or not the page
that their index scan just landed on recycled just a second ago, or
was like this since before their xact began/snapshot was acquired.

For your reference, this RecentGlobalXmin interlock stuff is what
Lanin & Shasha call "The Drain Technique" within "2.5 Freeing Empty
Nodes". Seems pretty hard to do it any other way.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GUC for cleanup indexes threshold.

2017-09-19 Thread Claudio Freire
On Tue, Sep 19, 2017 at 3:31 AM, Kyotaro HORIGUCHI
 wrote:
> I was just looking the thread since it is found left alone for a
> long time in the CF app.
>
> At Mon, 18 Sep 2017 16:35:58 -0700, Peter Geoghegan  wrote in 
> 
>> On Wed, Apr 5, 2017 at 3:50 PM, Andres Freund  wrote:
>> > Hi,
>> >
>> > On 2017-04-01 03:05:07 +0900, Masahiko Sawada wrote:
>> >> On Fri, Mar 31, 2017 at 11:44 PM, Robert Haas  
>> >> wrote:
>> >> [ lots of valuable discussion ]
>> >
>> > I think this patch clearly still is in the design stage, and has
>> > received plenty feedback this CF.  I'll therefore move this to the next
>> > commitfest.
>>
>> Does anyone have ideas on a way forward here? I don't, but then I
>> haven't thought about it in detail in several months.
>
> Is the additional storage in metapage to store the current status
> of vaccum is still unacceptable even if it can avoid useless
> full-page scan on indexes especially for stable tables?
>
> Or, how about additional 1 bit in pg_stat_*_index to indicate
> that the index *don't* require vacuum cleanup stage. (default
> value causes cleanup)
>
> index_bulk_delete (or ambulkdelete) returns the flag in
> IndexBulkDeleteResult then lazy_scan_heap stores the flag in
> stats and in the next cycle it is looked up to decide the
> necessity of index cleanup.

Maybe this is looking at the problem from the wrong direction.

Why can't the page be added to the FSM immediately and the check be
done at runtime when looking for a reusable page?

Index FSMs currently store only 0 or 255, couldn't they store 128 for
half-recyclable pages and make the caller re-check reusability before
using it?

This would make the second pass totally unnecessary, for a slight
penalty when looking at the FSM. Ie: 2 lookups in the FSM instead of
one. An initial one with min free space 128 to get a half-recyclable
page, if the check fails on that page, a second lookup with min free
space 255 to get a surely-recycleable page.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2017-09-19 Thread Peter Geoghegan
On Tue, Sep 19, 2017 at 3:21 AM, Rushabh Lathia
 wrote:
> As per the earlier discussion in the thread, I did experiment using
> BufFileSet interface from parallel-hash-v18.patchset.  I took the reference
> of parallel-hash other patches to understand the BufFileSet APIs, and
> incorporate the changes to parallel create index.
>
> In order to achieve the same:
>
> - Applied 0007-Remove-BufFile-s-isTemp-flag.patch and
> 0008-Add-BufFileSet-for-sharing-temporary-files-between-b.patch from the
> parallel-hash-v18.patchset.
> - Removed the buffile.c/logtap.c/fd.c changes from the parallel CREATE
> INDEX v10 patch.
> - incorporate the BufFileSet API to the parallel tuple sort for CREATE
> INDEX.
> - Changes into few existing functions as well as added few to support the
> BufFileSet changes.

I'm glad that somebody is working on this. (Someone closer to the more
general work on shared/parallel BufFile infrastructure than I am.)

I do have some quick feedback, and I hope to be able to provide that
to both you and Thomas, as needed to see this one through. I'm not
going to get into the tricky details around resource management just
yet. I'll start with some simpler questions, to get a general sense of
the plan here.

I gather that you're at least aware that your v11 of the patch doesn't
preserve randomAccess support for parallel sorts, because you didn't
include my 0002-* testing GUCs patch, which was specifically designed
to make various randomAccess stuff testable. I also figured this to be
true because I noticed this FIXME among (otherwise unchanged)
tuplesort code:

> +static void
> +leader_takeover_tapes(Tuplesortstate *state)
> +{
> +   Sharedsort *shared = state->shared;
> +   int nLaunched = state->nLaunched;
> +   int j;
> +
> +   Assert(LEADER(state));
> +   Assert(nLaunched >= 1);
> +   Assert(nLaunched == shared->workersFinished);
> +
> +   /*
> +* Create the tapeset from worker tapes, including a leader-owned tape at
> +* the end.  Parallel workers are far more expensive than logical tapes,
> +* so the number of tapes allocated here should never be excessive. FIXME
> +*/
> +   inittapestate(state, nLaunched + 1);
> +   state->tapeset = LogicalTapeSetCreate(nLaunched + 1, shared->tapes,
> + state->fileset, state->worker);

It's not surprising to me that you do not yet have this part working,
because much of my design was about changing as little as possible
above the BufFile interface, in order for tuplesort.c (and logtape.c)
code like this to "just work" as if it was the serial case. It doesn't
look like you've added the kind of BufFile multiplexing code that I
expected to see in logtape.c. This is needed to compensate for the
code removed from fd.c and buffile.c. Perhaps it would help me to go
look at Thomas' latest parallel hash join patch -- did it gain some
kind of transparent multiplexing ability that you actually (want to)
use here?

Though randomAccess isn't used by CREATE INDEX in general, and so not
supporting randomAccess within tuplesort.c for parallel callers
doesn't matter as far as this CREATE INDEX user-visible feature is
concerned, I still believe that randomAccess is important (IIRC,
Robert thought so too). Specifically, it seems like a good idea to
have randomAccess support, both on general principle (why should the
parallel case be different?), and because having it now will probably
enable future enhancements to logtape.c. Enhancements that have it
manage parallel sorts based on partitioning/distribution/bucketing
[1]. I'm pretty sure that partitioning-based parallel sort is going to
become very important in the future, especially for parallel
GroupAggregate. The leader needs to truly own the tapes it reclaims
from workers in order for all of this to work.

Questions on where you're going with randomAccess support:

1. Is randomAccess support a goal for you here at all?

2. If so, is preserving eager recycling of temp file space during
randomAccess (materializing a final output tape within the leader)
another goal for you here? Do we need to preserve that property of
serial external sorts, too, so that it remains true that logtape.c
ensures that "the total space usage is essentially just the actual
data volume, plus insignificant bookkeeping and start/stop overhead"?
(I'm quoting from master's logtape.c header comments.)

3. Any ideas on next steps in support of those 2 goals? What problems
do you foresee, if any?

> CREATE INDEX serial_idx ON parallel_sort_test (randint);
>
> Without patch:
>
> Time: 3430054.220 ms (57:10.054)
>
> With patch (max_parallel_workers_maintenance  = 8):
>
> Time: 1163445.271 ms (19:23.445)

This looks very similar to my v10. While I will need to follow up on
this, to make sure, it seems likely that this patch has exactly the
same performance characteristics as v10.

Thanks

[1] 
https://wiki.postgresql.org/wiki/Parallel_External_Sort#Partitioning_for_parallelism_.28parallel_externa

Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Andres Freund
On 2017-09-19 17:20:49 -0400, Tom Lane wrote:
> Thomas Munro  writes:
> > This type of violent shutdown seems to be associated with occasional
> > corruption of .gcda files (the files output by GCC coverage builds).
> > The symptoms are that if you use --enable-coverage and make
> > check-world you'll very occasionally get a spurious TAP test failure
> > like this:
> 
> > #   Failed test 'pg_ctl start: no stderr'
> > #   at 
> > /home/travis/build/postgresql-cfbot/postgresql/src/bin/pg_ctl/../../../src/test/perl/TestLib.pm
> > line 301.
> > #  got:
> > 'profiling:/home/travis/build/postgresql-cfbot/postgresql/src/backend/nodes/copyfuncs.gcda:Merge
> > mismatch for function 94
> > # '
> > # expected: ''
> 
> > I'm not sure of the exact mechanism though.  GCC supplies a function
> > __gcov_flush() that normally runs at exit or execve, so if you're
> > killed without reaching those you don't get any .gcda data.  Perhaps
> > we are in exit (or fork/exec) partway through writing out coverage
> > data in __gcov_flush(), and at that moment we are killed.  Then a
> > subsequent run of instrumented code will find the half-written file
> > and print the "Merge mismatch" message.

Note that newer gcc's (7+) have a feature to avoid such corruption, by
renaming the files atomically. Possibly the fix here is to just upgrade
to such a version...

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Varying results when using merge joins over postgres_fdw vs hash joins

2017-09-19 Thread Corey Huinker
We are having an issue with a query that will return no results when the
query does a merge join with a foreign table, but (correctly) returns
results when using a hash join.

Here is the situation on the "remote" database (9.5):


# \d+ table_with_en_us_utf8_encoding
   Table "public.table_with_en_us_utf8_encoding"
 Column |  Type  | Modifiers | Storage  | Stats target |
Description
++---+--+--+-
 id | bigint |   | plain|  |
 str1   | character varying(255) |   | extended |  |
 str2   | character varying(255) |   | extended |  |
 str3   | character varying(255) |   | extended |  |
 str4   | character varying(3)   |   | extended |  |

analytics=# select encoding, datcollate, datctype from pg_database where
datname = current_database();
 encoding | datcollate  |  datctype
--+-+-
6 | en_US.UTF-8 | en_US.UTF-8




And here's what we do on the local side (9.6):

# select encoding, datcollate, datctype from pg_database where datname =
current_database();
 encoding | datcollate | datctype
--++--
6 | C  | C

# import foreign schema public limit to (table_with_en_us_utf8_encoding)
from server primary_replica into public;

# \d+ table_with_en_us_utf8_encoding
  Foreign table
"public.table_with_en_us_utf8_encoding"
 Column |  Type  | Collation | Nullable | Default | FDW
options  | Storage  | Stats target | Description
++---+--+-+--+--+--+-
 id | bigint |   |  | |
(column_name 'id')   | plain|  |
 str1   | character varying(255) |   |  | |
(column_name 'str1') | extended |  |
 str2   | character varying(255) |   |  | |
(column_name 'str2') | extended |  |
 str3   | character varying(255) |   |  | |
(column_name 'str3') | extended |  |
 str4   | character varying(3)   |   |  | |
(column_name 'str4') | extended |  |
Server: primary_replica
FDW options: (schema_name 'public', table_name
'table_with_en_us_utf8_encoding')

# create temporary table tmp_on_c_collated_foreign_server (str2 text);

# insert into tmp_on_c_collated_foreign_server (str2) values ('576228972');
# insert into tmp_on_c_collated_foreign_server (str2) values ('576228972');
# insert into tmp_on_c_collated_foreign_server (str2) values ('576228972');

--
-- query with merge join, returns zero rows
--
# explain (analyze, verbose) select e.str1, e.str2, e.str3 from
tmp_on_c_collated_foreign_server c left join table_with_en_us_utf8_encoding
e  on c.str2 = e.str2 where e.str4='2' ;

 QUERY PLAN

 Merge Join  (cost=18041.88..22322.92 rows=229221 width=1548) (actual
time=102.849..102.849 *rows=0* loops=1)
   Output: e.str1, e.str2, e.str3
   Merge Cond: ((e.str2)::text = c.str2)
   ->  Foreign Scan on public.table_with_en_us_utf8_encoding e
 (cost=17947.50..18705.95 rows=33709 width=93) (actual
time=102.815..102.815 rows=1 loops=1)
 Output: e.id, e.str1, e.str2, e.str3, e.str4
 Remote SQL: *SELECT str1, str2, str3 FROM
public.table_with_en_us_utf8_encoding WHERE ((str4 = '2'::text)) ORDER BY
str2 ASC NULLS LAST*
   ->  Sort  (cost=94.38..97.78 rows=1360 width=32) (actual
time=0.028..0.029 rows=7 loops=1)
 Output: c.str2
 Sort Key: c.str2
 Sort Method: quicksort  Memory: 25kB
 ->  Seq Scan on pg_temp_3.tmp_on_c_collated_foreign_server c
 (cost=0.00..23.60 rows=1360 width=32) (actual time=0.010..0.011 rows=7
loops=1)
   Output: c.str2
 Planning time: 4.285 ms
 Execution time: 104.458 ms
(14 rows)


--
-- query with hash join, returns rows

--

-- the default for the foreign server is to use remote estimates, so we
turn that off...

# alter foreign table table_with_en_us_utf8_encoding OPTIONS (ADD
use_remote_estimate 'false');
ALTER FOREIGN TABLE

-- and then run the same query again

# explain (analyze, verbose) select e.str1, e.str2, e.str3 from
tmp_on_c_collated_foreign_server c left join table_with_en_us_utf8_encoding
e  on c.str2 = e.str2 where e.str4='2' ;

QUERY PLAN
--
 Hash Join  (cost=110.68..139.45 rows=7 width=1548) (actual
time=154.280..154.286 *rows=7* loops=1)
   Output: e.str1, e.str2, e.str3
   Hash Cond: (c.str

Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 6:55 AM, Jeff Janes  wrote:
> On Tue, Sep 19, 2017 at 1:32 PM, Heikki Linnakangas  wrote:
>> I'm not sure what exactly to do here. Where should we stick that notice?
>> We could put it in the release notes, where the bullet point about SCRAM is,
>> but it would be well hidden. If we want to give advice to people who might
>> not otherwise pay attention, it should go to a more prominent place. In the
>> "Migration to version 10" section perhaps. Currently, it only lists
>> incompatibilities, which this isn't. Perhaps put the notice after the list
>> of incompatibilities (patch attached)?
>
> I guess I'm late to the party, but I don't see why this is needed at all.
> We encourage people to use any and all new features which are appropriate to
> them--that is why we implement new features.  Why does this feature need a
> special invitation?

There have been continuous complains on those lists for the last 5
years or so that MD5 is "weak" and should be avoided. Well, Postgres
is not wrong in the way it uses MD5 in itself, backups including raw
MD5 hashes being more of a problem. But I would think that it is fair
to tell in a louder to such folks that Postgres has actually done
something on the matter.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-09-19 Thread Melanie Plageman
On Tue, Sep 12, 2017 at 6:11 PM, Thomas Munro  wrote:

> On Wed, Sep 13, 2017 at 3:48 AM, Elvis Pranskevichus 
> wrote:
> > I incorporated those bits into your patch and rebased in onto master.
> > Please see attached.
> >
> > FWIW, I think that mixing the standby status and the default
> > transaction writability is suboptimal.  They are related, yes, but not
> > the same thing.  It is possible to have a master cluster in the
> > read-only mode, and with this patch it would be impossible to
> > distinguish from a hot-standby replica without also polling
> > pg_is_in_recovery(), which defeats the purpose of having to do no
> > database roundtrips.
>
> Hi Elvis,
>
> FYI the recovery test 001_stream_rep.pl fails with this patch applied.
> You can see that if you configure with --enable-tap-tests, build and
> then cd into src/test/recovery and "make check".
>
> The latest patch applies cleanly and builds (I am also seeing the failing
TAP tests), however, I have a concern. With a single server set up, when I
attempt to make a connection with target_session_attrs=read-write, I get
the message
psql: could not make a suitable connection to server "localhost:5432"
Whereas, when I attempt to make a connection with
target_session_attrs=read-only, it is successful.

I might be missing something, but this seems somewhat counter-intuitive. I
would expect to specify read-write as target_session_attrs and successfully
connect to a server on which read and write operations are permitted. I see
this behavior implemented in src/interfaces/libpq/fe-connect.c
Is there a reason to reject a connection to a primary server when I specify
'read-write'? Is this intentional?


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread David G. Johnston
On Tue, Sep 19, 2017 at 11:29 AM, Tom Lane  wrote:

> ​T​
> hat
> ​ ​
> doesn't work today, and this patch doesn't fix it, but it does create
> enough confusion that we never would be able to fix it.
>
> I'd be much happier if there were some notational difference
> between I-want-the-composite-variable-to-absorb-a-composite-column
> and I-want-the-composite-variable-to-absorb-N-scalar-columns.
> For backwards compatibility with what happens now, the latter would
> have to be the default.


​So, using "()" syntax​

s,t: scalar text
c,d: (text, text)

treat all numbers below as text; and the ((1,2),) as ("(1,2)",)

A. SELECT 1 INTO s; -- today 1, this patch is the same

B. SELECT 1, 2 INTO s; -- today 1, this patch is the same

C. SELECT 1, 2 INTO (s); -- ERROR syntax - scalars cannot be tagged with
(), this patch N/A

D. SELECT 1, 2 INTO s, t; -- today 1, 2, this patch is the same

E. SELECT 1, 2 INTO c; -- today (1,2), this patch is the same

F. SELECT 1, 2 INTO (c); --ERROR "1" cannot be converted to (text, text),
this patch N/A

1. SELECT (1,2) INTO c; -- today ((1,2),); this patch is the same

2. SELECT (1,2) INTO (c); -- (1,2) -- this patch N/A

3. SELECT (1,2),(3,4) INTO c,d; -- ERROR syntax -- this patch gives [I
think...it can be made to give] (1,2),(3,4)

4. SELECT (1,2),(3,4) INTO c,(d); -- ERROR syntax -- this patch N/A

5. SELECT (1,2),(3,4) INTO (c),d; -- ERROR syntax -- this patch N/A

6. SELECT (1,2),(3,4) INTO (c),(d); -- (1,2),(3,4) -- this patch N/A

!. SELECT 1, (2,3), 4 INTO s, (c), t -- 1, (2,3), 4 -- this patch N/A
@. SELECT 1, 2, 3, 4 INTO s, (c), t -- ERROR "2" cannot be made into (text,
text) -- this patch N/A

IOW, this patch, if "c" is the only target (#1) and is composite pretend
the user wrote "INTO c.1, c.2" and assign each output column as a scalar in
one-by-one fashion.  If "c" is not the only target column (#3) assign a
single output column to it.  This maintains compatibility and clean syntax
at the cost of inconsistency.

The alternate, backward compatible, option introduces mandatory () in the
syntax for all composite columns in a multi-variable target (# 3-5 errors,
#6 valid) while it changes the behavior if present on a single variable
target (#1 vs #2).

So, allowing #3 to work makes implementing #2 even more unappealing.
Making only #2 and #6 work seems like a reasonable alternative position.

The last option is to fix #1 to return (1,2) - cleanly reporting an error
if possible, must like we just did with SRFs, and apply the patch thus
gaining both consistency and a clean syntax at the expense of limited
backward incompatibility.

Arrays not considered; single-column composites might end up looking like
scalars when presented to a (c) target.

Hope this helps someone besides me understand the problem space.

David J.


Re: [HACKERS] Rewriting the test of pg_upgrade as a TAP test

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 7:57 AM, Peter Eisentraut
 wrote:
> To get things rolling, I have committed just the basic TAP tests, to
> give it a spin on the build farm.  I'll work through the rest in the
> coming days.

Thanks!
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Rewriting the test of pg_upgrade as a TAP test

2017-09-19 Thread Peter Eisentraut
On 9/19/17 07:37, Michael Paquier wrote:
>>> This patch is logged as "waiting on author" in the current commit
>>> fest, but any new patch will depend on the feedback that any other
>>> hacker has to offer based on the set of ideas I have posted upthread.
>>> Hence I am yet unsure what is the correct way to move things forward.
>>> So, any opinions? Peter or others?
>>
>> I think the first step is to send the rebased version of the patch.  It
>> was last posted in April ...
> 
> Here you go. I have not done anything fancy for cross-version tests yet.

To get things rolling, I have committed just the basic TAP tests, to
give it a spin on the build farm.  I'll work through the rest in the
coming days.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-19 Thread Michael Paquier
On Wed, Sep 20, 2017 at 12:47 AM, Tom Lane  wrote:
> Amit Kapila  writes:
>> On Tue, Sep 19, 2017 at 9:27 AM, Michael Paquier
>>  wrote:
>>> I am not saying that no index AMs take advantage FPW compressibility
>>> for their meta pages. There are cases like this one, as well as one
>>> code path in BRIN where this is useful, and it is useful as well when
>>> logging FPWs of the init forks for unlogged relations.
>
>> Hmm, why is it useful for logging FPWs of the init forks for unlogged
>> relations?  We don't use REGBUF_STANDARD in those cases.
>
> But if we started to do so, that would be a concrete benefit of this
> patch ...

In the proposed set of patches, all the empty() routines part of index
AMs which use log_newpage_buffer() (brin, gin, spgst) are doing the
right thing by updating log_newpage_buffer(). btree also should have
its call to log_newpage updated in btbuildempty(), and your patch is
missing that. Also, _hash_init() would need some extra work to
generate FPWs, but I don't think that it is necessary per its handling
of a per-record meta data either. So REGBUF_STANDARD could be just
removed from there, and there is actually no need to patch
src/backend/access/hash at all.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Peter Geoghegan
On Tue, Sep 19, 2017 at 3:23 PM, Andreas Karlsson  wrote:
> If people think it is possible to get this done in time for PostgreSQL 10
> and it does not break anything on older version of ICU (or the migration
> from older versions) I am all for it.

The only behavioral difference would occur when CREATE COLLATION is
run (no changes to the initdb behavior, where the real risk exists).

What this boils down to is that we have every reason to think that the
right thing is to reject something that ICU's uloc_toLanguageTag()
itself rejects as invalid (through the get_icu_language_tag()
function). It looked like we were equivocating on following this at
one point, prior to 2bfd1b1, in order to suit ICU 4.2 (again, see
commit eccead9). I tend to think that the way we used to concatenate
variant keywords against locale names during initdb was simply wrong
in ICU 4.2, for some unknown reason. I think that the behavior that I
propose might prevent things from silently failing on ICU 4.2 that
were previously *believed* to work there, but in fact these things
(variant keywords) never really worked.

There might be an argument to be made for passing strict = FALSE to
uloc_toLanguageTag() instead, so that we replace the language tag with
one that is known to have valid syntax, and store that in pg_collation
instead (while possibly raising a NOTICE). I guess that that would
actually be the minimal fix here. I still favor a hard reject of
invalid BCP 47 syntax, though. PostgreSQL's CREATE COLLATION is not
one of the places where this kind of leniency makes sense.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread David Steele

On 9/19/17 5:25 PM, Tom Lane wrote:

Andres Freund  writes:

On 2017-09-19 17:15:21 -0400, Tom Lane wrote:

Meh --- Carp::Always isn't standard either, so I think this is just extra
complication with little value-add.  Let's just do the Devel::Confess
incantation as Dagfinn has it.



Has ~25 times the installation base on debian tho...



https://qa.debian.org/popcon.php?package=libdevel-confess-perl (13)
vs
https://qa.debian.org/popcon.php?package=libcarp-always-perl (300)


Both of those read like "lost in the noise" to me.  I think with
either of these, we're more or less asking PG developers to install
a dependency they probably didn't have before.  We might as well
ask them to install the more useful one.


I just use:

$SIG{__DIE__} = sub {Carp::confess @_};

It also includes the stack for the confess, but that's only a single 
line and I don't care since the important information is at the top.


I have used this in production code and it doesn't seem to have any 
nasty side effects, though this is only a last resort for when a defined 
exception is not raised.  For test code it should be more than sufficient.


I have not tried this on Perls < 5.10, though.

--
-David
da...@pgmasters.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Assert that the correct locks are held when calling PageGetLSN()

2017-09-19 Thread Jacob Champion
On Wed, Sep 6, 2017 at 8:37 AM, Jacob Champion  wrote:
> On Tue, Sep 5, 2017 at 10:49 PM, Michael Paquier
>  wrote:
>> In short, it seems to me that this patch should be rejected in its
>> current shape.
>
> Is the half of the patch that switches PageGetLSN to
> BufferGetLSNAtomic correct, at least?

Any further thoughts on this? If the BufferGetLSNAtomic fixes made
here are not correct to begin with, then the rest of the patch is
probably moot; I just want to double-check that that is the case.

--Jacob


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Andreas Karlsson

On 09/19/2017 11:32 PM, Peter Geoghegan wrote:

On Tue, Sep 19, 2017 at 2:22 PM, Tom Lane  wrote:

Well, if PG10 shipped with that restriction in place then it wouldn't
be an issue ;-)


I was proposing that this be treated as an open item for v10; sorry if
I was unclear on that. Much like the "ICU locales vs. ICU collations
within pg_collation" issue, this seems like the kind of thing that we
ought to go out of our way to get right in the *first* version.


If people think it is possible to get this done in time for PostgreSQL 
10 and it does not break anything on older version of ICU (or the 
migration from older versions) I am all for it.


Andreas


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Jeff Janes
On Tue, Sep 19, 2017 at 1:32 PM, Heikki Linnakangas  wrote:


> I'm not sure what exactly to do here. Where should we stick that notice?
> We could put it in the release notes, where the bullet point about SCRAM
> is, but it would be well hidden. If we want to give advice to people who
> might not otherwise pay attention, it should go to a more prominent place.
> In the "Migration to version 10" section perhaps. Currently, it only lists
> incompatibilities, which this isn't. Perhaps put the notice after the list
> of incompatibilities (patch attached)?
>
>
I guess I'm late to the party, but I don't see why this is needed at all.
We encourage people to use any and all new features which are appropriate
to them--that is why we implement new features.  Why does this feature need
a special invitation?

Cheers,

Jeff


Re: [HACKERS] why not parallel seq scan for slow functions

2017-09-19 Thread Jeff Janes
On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro  wrote:

> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila 
> wrote:
> > The attached patch fixes both the review comments as discussed above.
>
> This cost stuff looks unstable:
>
> test select_parallel  ... FAILED
>
> !  Gather  (cost=0.00..623882.94 rows=9976 width=8)
>  Workers Planned: 4
> !->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494
> width=8)
>   (3 rows)
>
>   drop function costly_func(var1 integer);
> --- 112,120 
>   explain select ten, costly_func(ten) from tenk1;
>QUERY PLAN
>   
> 
> !  Gather  (cost=0.00..625383.00 rows=1 width=8)
>  Workers Planned: 4
> !->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500
> width=8)
>   (3 rows)
>

that should be fixed by turning costs on the explain, as is the tradition.


See attached.

Cheers,

Jeff


parallel_paths_include_tlist_cost_v4.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread Tom Lane
"David G. Johnston"  writes:
> On Tue, Sep 19, 2017 at 2:12 PM, Tom Lane  wrote:
>> Notice the parens/comma positioning.  It's only because text is such
>> a lax datatype that you didn't notice the problem.

> I saw exactly what you described - that it didn't error out and that the
> text representation of the single output composite was being stored in the
> first field of the target composite.  i.e., that it "worked".  Does that
> fact that it only works if the first field of the composite type is text
> change your opinion that the behavior is OK to break?

No.  That's not "working" for any useful value of "working".

I would indeed be happy if we could just change this behavior, but I do
not care to answer to the crowd of villagers that will appear if we do
that.  It's just way too easy to do, eg,

declare r record;
...
for r in select lots-o-columns from ... loop ...

and then expect r to contain all the columns of the SELECT, separately.
We can't change that behavior now.  (And making FOR behave differently
for this purpose than INTO wouldn't be very nice, either.)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Peter Geoghegan
On Tue, Sep 19, 2017 at 2:22 PM, Tom Lane  wrote:
> Andreas Karlsson  writes:
>> Hm, I like the idea but I see some issues.
>
>> Enforcing the BCP47 seems like a good thing to me. I do not see any
>> reason to allow input with syntax errors. The issue though is that we do
>> not want to break people's databases when they upgrade to PostgreSQL 11.
>> What if they have specified the locale in the old non-ICU format or they
>> have a bogus value and we then error out on pg_upgrade or pg_restore?
>
> Well, if PG10 shipped with that restriction in place then it wouldn't
> be an issue ;-)

I was proposing that this be treated as an open item for v10; sorry if
I was unclear on that. Much like the "ICU locales vs. ICU collations
within pg_collation" issue, this seems like the kind of thing that we
ought to go out of our way to get right in the *first* version.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Tom Lane
Andres Freund  writes:
> On 2017-09-19 17:15:21 -0400, Tom Lane wrote:
>> Meh --- Carp::Always isn't standard either, so I think this is just extra
>> complication with little value-add.  Let's just do the Devel::Confess
>> incantation as Dagfinn has it.

> Has ~25 times the installation base on debian tho...

> https://qa.debian.org/popcon.php?package=libdevel-confess-perl (13)
> vs
> https://qa.debian.org/popcon.php?package=libcarp-always-perl (300)

Both of those read like "lost in the noise" to me.  I think with
either of these, we're more or less asking PG developers to install
a dependency they probably didn't have before.  We might as well
ask them to install the more useful one.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Tom Lane
Andreas Karlsson  writes:
> Hm, I like the idea but I see some issues.

> Enforcing the BCP47 seems like a good thing to me. I do not see any 
> reason to allow input with syntax errors. The issue though is that we do 
> not want to break people's databases when they upgrade to PostgreSQL 11. 
> What if they have specified the locale in the old non-ICU format or they 
> have a bogus value and we then error out on pg_upgrade or pg_restore?

Well, if PG10 shipped with that restriction in place then it wouldn't
be an issue ;-)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread David G. Johnston
On Tue, Sep 19, 2017 at 2:12 PM, Tom Lane  wrote:

> "David G. Johnston"  writes:
> > Actually, this does work, just not the way one would immediately expect.
>
> On closer inspection, what's actually happening in your example is that
> you're getting the SELECT's ct1 result crammed into the first column of
> c1, and then a default NULL is stuck into its second column:
>
> > ​ct1: (text, text)​
>
> > DO $$
> > SELECT ('1', '2')::ct1 INTO c1;
> > RAISE NOTICE '%', c1;
> > END;
> > $$;
>
> > ​Notice: ("(1,2)",)
>
> Notice the parens/comma positioning.  It's only because text is such
> a lax datatype that you didn't notice the problem.
>
>
​I saw exactly what you described - that it didn't error out and that the
text representation of the single output composite was being stored in the
first field of the target composite.  i.e., that it "worked".  Does that
fact that it only works if the first field of the composite type is text
change your opinion that the behavior is OK to break?

David J.


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Andres Freund
On 2017-09-19 17:15:21 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2017-09-19 21:37:26 +0100, Dagfinn Ilmari Mannsåker wrote:
> >> Devel::Confess is more thorough, so +1 on that.
> 
> > Or just try Devel::Confess first, and if the require fails, go to
> > Carp::always.
> 
> Meh --- Carp::Always isn't standard either, so I think this is just extra
> complication with little value-add.  Let's just do the Devel::Confess
> incantation as Dagfinn has it.

Has ~25 times the installation base on debian tho...

https://qa.debian.org/popcon.php?package=libdevel-confess-perl (13)
vs
https://qa.debian.org/popcon.php?package=libcarp-always-perl (300)

(note this is an opt-in program)

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Tom Lane
Thomas Munro  writes:
> This type of violent shutdown seems to be associated with occasional
> corruption of .gcda files (the files output by GCC coverage builds).
> The symptoms are that if you use --enable-coverage and make
> check-world you'll very occasionally get a spurious TAP test failure
> like this:

> #   Failed test 'pg_ctl start: no stderr'
> #   at 
> /home/travis/build/postgresql-cfbot/postgresql/src/bin/pg_ctl/../../../src/test/perl/TestLib.pm
> line 301.
> #  got:
> 'profiling:/home/travis/build/postgresql-cfbot/postgresql/src/backend/nodes/copyfuncs.gcda:Merge
> mismatch for function 94
> # '
> # expected: ''

> I'm not sure of the exact mechanism though.  GCC supplies a function
> __gcov_flush() that normally runs at exit or execve, so if you're
> killed without reaching those you don't get any .gcda data.  Perhaps
> we are in exit (or fork/exec) partway through writing out coverage
> data in __gcov_flush(), and at that moment we are killed.  Then a
> subsequent run of instrumented code will find the half-written file
> and print the "Merge mismatch" message.

On a slow/loaded machine, perhaps it could be that the postmaster loses
patience and SIGKILLs a backend that's still writing its .gcda data?
If so, maybe we could make SIGKILL_CHILDREN_AFTER_SECS longer in
coverage builds?  Or bite the bullet and make it configurable ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE COLLATION does not sanitize ICU's BCP 47 language tags. Should it?

2017-09-19 Thread Andreas Karlsson
On 09/19/2017 12:46 AM, Peter Geoghegan wrote:> At one point a couple of 
months back, it was understood that

get_icu_language_tag() might not always work with (assumed) valid
locale names -- that is at least the impression that the commit
message of eccead9 left me with. But, that was only with ICU 4.2, and
in any case we've since stopped creating keyword variants at initdb
time for other reasons (see 2bfd1b1 for details of those other
reasons). I tend to think that we should not install any language tag
that uloc_toLanguageTag() does not accept as valid on general
principle (so not just at initdb time, when it's actually least
needed).

Thoughts? I can write a patch for this, if that helps. It should be
straightforward.


Hm, I like the idea but I see some issues.

Enforcing the BCP47 seems like a good thing to me. I do not see any 
reason to allow input with syntax errors. The issue though is that we do 
not want to break people's databases when they upgrade to PostgreSQL 11. 
What if they have specified the locale in the old non-ICU format or they 
have a bogus value and we then error out on pg_upgrade or pg_restore?


Andreas


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment

2017-09-19 Thread Thomas Munro
On Mon, Sep 18, 2017 at 10:18 PM, Andres Freund  wrote:
> On 2017-09-18 21:57:04 +1200, Thomas Munro wrote:
>> WARNING:  terminating connection because of crash of another server 
>> process
>> DETAIL:  The postmaster has commanded this server process to roll
>> back the current transaction and exit, because another server process
>> exited abnormally and possibly corrupted shared memory.
>> HINT:  In a moment you should be able to reconnect to the database
>> and repeat your command.
>>
>> As far as I know these are misleading, it's really just an immediate
>> shutdown.  There is no core file, so I don't believe anything actually
>> crashed.
>
> I was about to complain about these, for entirely unrelated reasons. I
> think it's a bad idea - and there's a couple complains on the lists too,
> to emit these warnings.  It's not entirely trivial to fix though :(

This type of violent shutdown seems to be associated with occasional
corruption of .gcda files (the files output by GCC coverage builds).
The symptoms are that if you use --enable-coverage and make
check-world you'll very occasionally get a spurious TAP test failure
like this:

#   Failed test 'pg_ctl start: no stderr'
#   at 
/home/travis/build/postgresql-cfbot/postgresql/src/bin/pg_ctl/../../../src/test/perl/TestLib.pm
line 301.
#  got:
'profiling:/home/travis/build/postgresql-cfbot/postgresql/src/backend/nodes/copyfuncs.gcda:Merge
mismatch for function 94
# '
# expected: ''

I'm not sure of the exact mechanism though.  GCC supplies a function
__gcov_flush() that normally runs at exit or execve, so if you're
killed without reaching those you don't get any .gcda data.  Perhaps
we are in exit (or fork/exec) partway through writing out coverage
data in __gcov_flush(), and at that moment we are killed.  Then a
subsequent run of instrumented code will find the half-written file
and print the "Merge mismatch" message.

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Tom Lane
Andres Freund  writes:
> On 2017-09-19 21:37:26 +0100, Dagfinn Ilmari Mannsåker wrote:
>> Devel::Confess is more thorough, so +1 on that.

> Or just try Devel::Confess first, and if the require fails, go to
> Carp::always.

Meh --- Carp::Always isn't standard either, so I think this is just extra
complication with little value-add.  Let's just do the Devel::Confess
incantation as Dagfinn has it.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread Tom Lane
"David G. Johnston"  writes:
> Actually, this does work, just not the way one would immediately expect.

On closer inspection, what's actually happening in your example is that
you're getting the SELECT's ct1 result crammed into the first column of
c1, and then a default NULL is stuck into its second column:

> ​ct1: (text, text)​

> DO $$
> SELECT ('1', '2')::ct1 INTO c1;
> RAISE NOTICE '%', c1;
> END;
> $$;

> ​Notice: ("(1,2)",)

Notice the parens/comma positioning.  It's only because text is such
a lax datatype that you didn't notice the problem.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Tom Lane
Heikki Linnakangas  writes:
> I'm not sure what exactly to do here. Where should we stick that notice? 
> We could put it in the release notes, where the bullet point about SCRAM 
> is, but it would be well hidden. If we want to give advice to people who 
> might not otherwise pay attention, it should go to a more prominent 
> place. In the "Migration to version 10" section perhaps. Currently, it 
> only lists incompatibilities, which this isn't. Perhaps put the notice 
> after the list of incompatibilities (patch attached)?

That seems pretty weird.  I don't see a big reason not to just put it with
the bullet point about SCRAM.  People who care will notice it there just
fine.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread Tom Lane
"David G. Johnston"  writes:
>> Aside from being inconsistent, it doesn't cover all
>> the cases --- what if you have just one query output column, that is
>> composite, and you'd like it to go into a composite variable?  That
>> doesn't work today, and this patch doesn't fix it, but it does create
>> enough confusion that we never would be able to fix it.

> Actually, this does work, just not the way one would immediately expect.

Uh ... how did you declare ct1, exactly?  I tried this before claiming
it doesn't work, and it doesn't, for me:

create type complex as (r float8, i float8);

create or replace function mkc(a float8, b float8) returns complex
language sql as 'select a,b';

select mkc(1,2);

create or replace function test() returns void language plpgsql as $$
declare c complex;
begin
  select mkc(1,2) into c;
  raise notice 'c = %', c;
end$$;

select test();

I get

ERROR:  invalid input syntax for type double precision: "(1,2)"
CONTEXT:  PL/pgSQL function test() line 4 at SQL statement

That's because it's trying to assign the result of mkc() into c.r,
not into the whole composite variable.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Dagfinn Ilmari Mannsåker
Tom Lane  writes:

> Andrew Dunstan  writes:
>> On 09/19/2017 01:31 PM, Andres Freund wrote:
>>> # Include module showing backtraces upon failures. As it's a
>>> non-standard module, don't fail if not installed.
>>> eval { use Carp::Always; }
>
>> Or maybe Devel::Confess ?
>
> Neither one seems to be present in a standard Perl installation :-(

No, hence making it optional via eval { }.  That way we can get more
useful output from the buildfarm (and Travis) by installing it there,
without imposing extra dependencies on end users.

We already depend on one non-core module (IPC::Run), so presumably we
could add others, but I presume the threshold for that is quite high.

- ilmari
-- 
"a disappointingly low fraction of the human race is,
 at any given time, on fire." - Stig Sandbeck Mathisen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Andres Freund
On 2017-09-19 21:37:26 +0100, Dagfinn Ilmari Mannsåker wrote:
> Andrew Dunstan  writes:
> 
> > On 09/19/2017 01:31 PM, Andres Freund wrote:
> >> Hi,
> >>
> >> I've had a couple cases where tap tests died, and I couldn't easily see
> >> where / why. For development of a new test I found it useful to show
> >> backtraces in that case - just adding a
> >> use Carp::Always;
> >> at the start of the relevant module did the trick.
> >>
> >> I'm wondering if we shouldn't always do so if the module is
> >> installed. I.e. have PostgresNode or something do something like
> 
> I think TestLib would be a better place, since PostgresNode uses TestLib
> and there are some tests that use TestLib but notf PostgresNode.

Fair.


> >> # Include module showing backtraces upon failures. As it's a
> >> non-standard module, don't fail if not installed.
> >> eval { use Carp::Always; }
> >>
> >> Comments?
> >
> > Or maybe Devel::Confess ?
> 
> Devel::Confess is more thorough, so +1 on that.

Or just try Devel::Confess first, and if the require fails, go to
Carp::always.

> > In an eval you need a 'require' rather than a 'use', AFAIK.
> 
> Yes, because 'use' happens as soon as the statement is parsed at compile
> time, long before the eval { } gets executed.  Devel::Confess does its
> setup in the 'import' method (called implicitly by 'use'), so we'd need:
> 
>   eval { require Devel::Confess; Devel::Confess->import };

My perl's rusty ;)

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread David G. Johnston
On Tue, Sep 19, 2017 at 11:29 AM, Tom Lane  wrote:

> Aside from being inconsistent, it doesn't cover all
> the cases --- what if you have just one query output column, that is
> composite, and you'd like it to go into a composite variable?  That
> doesn't work today, and this patch doesn't fix it, but it does create
> enough confusion that we never would be able to fix it.
>
​
Actually, this does work, just not the way one would immediately expect.

​ct1: (text, text)​

DO $$
SELECT ('1', '2')::ct1 INTO c1;
RAISE NOTICE '%', c1;
END;
$$;

​Notice: ("(1,2)",)

And so, yes, my thinking has a backward compatibility problem.  But one
that isn't fixable when constrained by backward compatibility - whether
this patch goes in or not.

David J.


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Dagfinn Ilmari Mannsåker
Andrew Dunstan  writes:

> On 09/19/2017 01:31 PM, Andres Freund wrote:
>> Hi,
>>
>> I've had a couple cases where tap tests died, and I couldn't easily see
>> where / why. For development of a new test I found it useful to show
>> backtraces in that case - just adding a
>> use Carp::Always;
>> at the start of the relevant module did the trick.
>>
>> I'm wondering if we shouldn't always do so if the module is
>> installed. I.e. have PostgresNode or something do something like

I think TestLib would be a better place, since PostgresNode uses TestLib
and there are some tests that use TestLib but notf PostgresNode.

>> # Include module showing backtraces upon failures. As it's a
>> non-standard module, don't fail if not installed.
>> eval { use Carp::Always; }
>>
>> Comments?
>
> Or maybe Devel::Confess ?

Devel::Confess is more thorough, so +1 on that.

> In an eval you need a 'require' rather than a 'use', AFAIK.

Yes, because 'use' happens as soon as the statement is parsed at compile
time, long before the eval { } gets executed.  Devel::Confess does its
setup in the 'import' method (called implicitly by 'use'), so we'd need:

  eval { require Devel::Confess; Devel::Confess->import };

-- 
"The surreality of the universe tends towards a maximum" -- Skud's Law
"Never formulate a law or axiom that you're not prepared to live with
 the consequences of."  -- Skud's Meta-Law


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Heikki Linnakangas

On 09/18/2017 11:13 AM, Noah Misch wrote:

On Thu, Sep 14, 2017 at 09:57:36AM +0300, Heikki Linnakangas wrote:

On 09/12/2017 04:09 AM, Noah Misch wrote:

On Wed, May 10, 2017 at 10:50:51PM -0400, Bruce Momjian wrote:

On Mon, May  1, 2017 at 08:12:51AM -0400, Robert Haas wrote:

On Tue, Apr 25, 2017 at 10:16 PM, Bruce Momjian  wrote:

Well, we could add "MD5 users are encouraged to switch to
SCRAM-SHA-256".  Now whether we want to list this as something on the
SCRAM-SHA-256 description, or mention it as an incompatibility, or
under Migration.  I am not clear that MD5 is in such terrible shape that
this is warranted.


I think it's warranted.  The continuing use of MD5 has been a headache
for some EnterpriseDB customers who have compliance requirements which
they must meet.  It's not that they themselves necessarily know or
care whether MD5 is secure, although in some cases they do; it's that
if they use it, they will be breaking laws or regulations to which
their business or agency is subject.  I imagine customers of other
PostgreSQL companies have similar issues.  But leaving that aside, the
advantage of SCRAM isn't merely that it uses a better algorithm to
hash the password.  It has other advantages also, like not being
vulnerable to replay attacks.  If you're doing password
authentication, you should really be using SCRAM, and encouraging
people to move to SCRAM after upgrading is a good idea.

That having been said, SCRAM is a wire protocol break.  You will not
be able to upgrade to SCRAM unless and until the drivers you use to
connect to the database add support for it.  The only such driver
that's part of libpq; other drivers that have reimplemented the
PostgreSQL wire protocol will have to be updated with SCRAM support
before it will be possible to use SCRAM with those drivers.  I think
this should be mentioned in the release notes, too.  I also think it
would be great if somebody would put together a wiki page listing all
the popular drivers and (1) whether they use libpq or reimplement the
wire protocol, and (2) if the latter, the status of any efforts to
implement SCRAM, and (3) if those efforts have been completed, the
version from which they support SCRAM.  Then, I think we should reach
out to all of the maintainers of those driver authors who aren't
moving to support SCRAM and encourage them to do so.


I have added this as an open item because we will have to wait to see
where we are with driver support as the release gets closer.


With the release near, I'm promoting this to the regular open issues section.


Thanks.

I updated the list of drivers on the wiki
(https://wiki.postgresql.org/wiki/List_of_drivers), adding a column for
whether the driver supports SCRAM authentication. Currently, the only
non-libpq driver that has implemented SCRAM is the JDBC driver. I submitted
a patch for the Go driver, but it hasn't been committed yet.

As for a recommendation in the release notes, maybe something like
"Installations using MD5 authentication are encouraged to switch to
SCRAM-SHA-256, unless using older client programs or drivers that don't
support it yet."


That sounds reasonable.

This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


I'm not sure what exactly to do here. Where should we stick that notice? 
We could put it in the release notes, where the bullet point about SCRAM 
is, but it would be well hidden. If we want to give advice to people who 
might not otherwise pay attention, it should go to a more prominent 
place. In the "Migration to version 10" section perhaps. Currently, it 
only lists incompatibilities, which this isn't. Perhaps put the notice 
after the list of incompatibilities (patch attached)?


- Heikki
diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml
index 1a9110614d..015d441376 100644
--- a/doc/src/sgml/release-10.sgml
+++ b/doc/src/sgml/release-10.sgml
@@ -508,6 +508,12 @@
 

 
+   
+Installations using MD5 authentication are encouraged to switch to
+SCRAM-SHA-256, unless using older client programs or drivers that don't
+support it yet.
+   
+
   
 
   

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Boom filters for hash joins (was: A design for amcheck heapam verification)

2017-09-19 Thread Tomas Vondra
On 09/19/2017 06:03 PM, Peter Geoghegan wrote:
> On Tue, Sep 19, 2017 at 6:28 AM, Tomas Vondra
>  wrote:
>> The patch is fairly simple, and did not try to push the bloom filters to
>> scan nodes or anything like that. It might be a meaningful first step,
>> though, particularly for selective joins (where only small number of
>> rows from the outer relation has a match in the hash table).
> 
> I believe that parallelism makes the use of Bloom filter a lot more 
> compelling, too. Obviously this is something that wasn't taken into 
> consideration in 2015.
> 

I haven't thought about it from that point of view. Can you elaborate
why that would be the case? Sorry if this was explained earlier in this
thread (I don't see it in the history, though).

I can't quite remember why I haven't pursued the patch in 2015, but it
was probably clear it wouldn't get in in the last CF, and I never got
back to it.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] "inconsistent page found" with checksum and wal_consistency_checking enabled

2017-09-19 Thread Ashwin Agrawal
Currently, page checksum is not masked by Page masking routines used
by wal_consistency_checking facility. So, when running `make installcheck`
with data checksum enabled and wal_consistency_checking='all', it easily
and consistently FATALs with "inconsistent page found".

If anything needs to be masked on Page to perform / pass wal consistency
checking, definitely checksum is not going to match and hence must be
masked as well. Attaching patch to fix the same, installcheck passes with
checksums enabled and wal_consistency_checking='all' with the fix.

Clubbed to perform the masking with lsn as it sounds logical to have them
together, as lsn is masked is all the cases so far and such is needed for
checksum as well.

Thank You,
Ashwin Agrawal
diff --git a/src/backend/access/brin/brin_xlog.c 
b/src/backend/access/brin/brin_xlog.c
index dff7198..60daa54 100644
--- a/src/backend/access/brin/brin_xlog.c
+++ b/src/backend/access/brin/brin_xlog.c
@@ -332,7 +332,7 @@ brin_mask(char *pagedata, BlockNumber blkno)
 {
Pagepage = (Page) pagedata;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
 
mask_page_hint_bits(page);
 
diff --git a/src/backend/access/common/bufmask.c 
b/src/backend/access/common/bufmask.c
index 10253d3..d880aef 100644
--- a/src/backend/access/common/bufmask.c
+++ b/src/backend/access/common/bufmask.c
@@ -23,15 +23,17 @@
  * mask_page_lsn
  *
  * In consistency checks, the LSN of the two pages compared will likely be
- * different because of concurrent operations when the WAL is generated
- * and the state of the page when WAL is applied.
+ * different because of concurrent operations when the WAL is generated and
+ * the state of the page when WAL is applied. Also, mask out checksum as
+ * masking anything else on page means checksum is not going to match as well.
  */
 void
-mask_page_lsn(Page page)
+mask_page_lsn_and_checksum(Page page)
 {
PageHeader  phdr = (PageHeader) page;
 
PageXLogRecPtrSet(phdr->pd_lsn, (uint64) MASK_MARKER);
+   phdr->pd_checksum = MASK_MARKER;
 }
 
 /*
diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index 7ba04e3..92cafe9 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -770,7 +770,7 @@ gin_mask(char *pagedata, BlockNumber blkno)
Pagepage = (Page) pagedata;
GinPageOpaque opaque;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
opaque = GinPageGetOpaque(page);
 
mask_page_hint_bits(page);
diff --git a/src/backend/access/gist/gistxlog.c 
b/src/backend/access/gist/gistxlog.c
index 4f4fe8f..7fd91ce 100644
--- a/src/backend/access/gist/gistxlog.c
+++ b/src/backend/access/gist/gistxlog.c
@@ -352,14 +352,14 @@ gist_mask(char *pagedata, BlockNumber blkno)
 {
Pagepage = (Page) pagedata;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
 
mask_page_hint_bits(page);
mask_unused_space(page);
 
/*
 * NSN is nothing but a special purpose LSN. Hence, mask it for the same
-* reason as mask_page_lsn.
+* reason as mask_page_lsn_and_checksum.
 */
GistPageSetNSN(page, (uint64) MASK_MARKER);
 
diff --git a/src/backend/access/hash/hash_xlog.c 
b/src/backend/access/hash/hash_xlog.c
index 67a856c..f19f6fd 100644
--- a/src/backend/access/hash/hash_xlog.c
+++ b/src/backend/access/hash/hash_xlog.c
@@ -1263,7 +1263,7 @@ hash_mask(char *pagedata, BlockNumber blkno)
HashPageOpaque opaque;
int pagetype;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
 
mask_page_hint_bits(page);
mask_unused_space(page);
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index d20f038..d03f544 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -9166,7 +9166,7 @@ heap_mask(char *pagedata, BlockNumber blkno)
Pagepage = (Page) pagedata;
OffsetNumber off;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
 
mask_page_hint_bits(page);
mask_unused_space(page);
diff --git a/src/backend/access/nbtree/nbtxlog.c 
b/src/backend/access/nbtree/nbtxlog.c
index 4afdf47..82337f8 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -1034,7 +1034,7 @@ btree_mask(char *pagedata, BlockNumber blkno)
Pagepage = (Page) pagedata;
BTPageOpaque maskopaq;
 
-   mask_page_lsn(page);
+   mask_page_lsn_and_checksum(page);
 
mask_page_hint_bits(page);
mask_unused_space(page);
diff --git a/src/backend/access/spgist/spgxlog.c 
b/src/backend/access/spgist/spgxlog.c
index c440d21..87def79 100644
--- a/src/backend/access/spgist/spgxlog.c
+++ b/src/backend/access/spgist/spgxlog.c
@@ -1034,7 +1034,7 @@ spg_mask(char *pagedata, BlockNumber b

Re: [HACKERS] why not parallel seq scan for slow functions

2017-09-19 Thread Thomas Munro
On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila  wrote:
> The attached patch fixes both the review comments as discussed above.

This cost stuff looks unstable:

test select_parallel  ... FAILED

!  Gather  (cost=0.00..623882.94 rows=9976 width=8)
 Workers Planned: 4
!->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
  (3 rows)

  drop function costly_func(var1 integer);
--- 112,120 
  explain select ten, costly_func(ten) from tenk1;
   QUERY PLAN
  
!  Gather  (cost=0.00..625383.00 rows=1 width=8)
 Workers Planned: 4
!->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
  (3 rows)

  drop function costly_func(var1 integer);

>From https://travis-ci.org/postgresql-cfbot/postgresql/builds/277376953 .

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Tom Lane
Andrew Dunstan  writes:
> On 09/19/2017 01:31 PM, Andres Freund wrote:
>> # Include module showing backtraces upon failures. As it's a
>> non-standard module, don't fail if not installed.
>> eval { use Carp::Always; }

> Or maybe Devel::Confess ?

Neither one seems to be present in a standard Perl installation :-(

> In an eval you need a 'require' rather than a 'use', AFAIK.

Yeah:

$ perl -e 'eval { use Carp::Always; }'
Can't locate Carp/Always.pm in @INC (@INC contains: /usr/local/lib64/perl5 
/usr/local/share/perl5 /usr/lib64/perl5/vendor_perl 
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.
$ perl -e 'eval { require Carp::Always; }'
$ echo $?
0

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add --if-exists to pg_recvlogical

2017-09-19 Thread Peter Eisentraut
On 9/17/17 18:21, Rosser Schwarz wrote:
> On Fri, Sep 1, 2017 at 10:22 AM, Peter Eisentraut
>  > wrote:
>> I understand the --drop-slot part.  But I don't understand what it means
>> to ignore a missing replication slot when running --start.
> 
> I'm not sure I do either, honestly. I followed the Principle of Least
> Surprise in making it a no-op when those switches are used and the slot
> doesn't exist, over "no one will ever do that". Because someone will.
> 
> I'm happy to hear suggestions on what to do in that case beyond exit
> cleanly.

Nonsensical option combinations should result in an error.

It appears that you have removed the interaction of --start and
--if-exists in your last patch, but the documentation patch still
mentions it.  Which is correct?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-19 Thread Alexander Korotkov
On Tue, Sep 19, 2017 at 7:54 PM, Pavel Stehule 
wrote:

> 2017-09-19 16:14 GMT+02:00 Alexander Korotkov :
>
>> On Fri, Sep 8, 2017 at 7:13 AM, Pavel Stehule 
>> wrote:
>>
>>> 2017-08-16 14:06 GMT+02:00 Pavel Stehule :
>>>
 Hi

 2017-08-15 4:37 GMT+02:00 Peter Eisentraut <
 peter.eisentr...@2ndquadrant.com>:

> On 3/11/17 07:06, Pavel Stehule wrote:
> > I am sending a updated version with separated sort direction in
> special
> > variable
>
> This patch also needs a rebase.
>

 I am sending rebased patch

>>>
>>> rebased again + fix obsolete help
>>>
>>
>> For me, patch applies cleanly, builds and passed regression tests.
>> However, patch misses regression tests covering added functionality.
>>
>
> I am not sure if there are any tests related to output of \dt+ commands -
> there result is platform depend.
>

BTW, why isn't order by name_schema available for \dt?  If it's available
we could at least cover this case by plain regression tests.
\dt+ could be covered by TAP tests, but it isn't yet.  I think one day we
should add them.  However, I don't think we should force you to write them
in order to push this simple patch.

Patch is definitely harmless, i.e. it doesn't affect anybody who doesn't
>> use new functionality.
>> But I still would prefer ordering to be options of \d* commands while
>> psql variables be defaults for those options...
>>
>
> I understand
>
> a) I don't think so commands like \dt++ (or similar) is good idea - these
> commands should be simple how it is possible
>

I don't particularly like \dt++, but second argument is probably an option.


> b) this patch doesn't block any other design - more it opens the door
> because the executive part will be implemented and users can have a
> experience with with different output sorts - so if people will need more
> quick change of result sort, then the work in this area will continue.
>

OK. As reviewer, I'm not going to block this patch if you see its
functionality limited by just psql variables.
I think you should add support of name_schema \dt and some regression tests
for this case, before I can mark this as "ready for committer".

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Log LDAP "diagnostic messages"?

2017-09-19 Thread Peter Eisentraut
On 9/15/17 08:43, Thomas Munro wrote:
> On Fri, Sep 15, 2017 at 2:12 AM, Alvaro Herrera  
> wrote:
>> I think the ldap_unbind() changes should be in a separate preliminary
>> patch to be committed separately and backpatched.
> 
> OK, here it is split into two patches.

I've looked this over.

In the 0001 patch, I would move the ldap_unbind() calls after the
ereport(LOG) calls.  We do all the other resource cleanup (pfree() etc.)
after the ereport() calls, so it would be weird to do this one
differently.  Also, in the second patch you move one of the
ldap_unbind() calls down anyway.

In the 0002 patch, I think this is a bit repetitive and could be
refactored even more.  The end result could look like

ereport(LOG,
(errmsg("blah"),
 errdetail_for_ldap(ldap)));
ldap_unbind(ldap);

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Show backtrace when tap tests fail

2017-09-19 Thread Andrew Dunstan


On 09/19/2017 01:31 PM, Andres Freund wrote:
> Hi,
>
> I've had a couple cases where tap tests died, and I couldn't easily see
> where / why. For development of a new test I found it useful to show
> backtraces in that case - just adding a
> use Carp::Always;
> at the start of the relevant module did the trick.
>
> I'm wondering if we shouldn't always do so if the module is
> installed. I.e. have PostgresNode or something do something like
>
> # Include module showing backtraces upon failures. As it's a
> non-standard module, don't fail if not installed.
> eval { use Carp::Always; }
>
> Comments?
>


Or maybe Devel::Confess ?

In an eval you need a 'require' rather than a 'use', AFAIK.


cheers

andrew

-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread chiru r
Yes, LDAP will do. However we need to sync the user accounts and  groups
between AD and PG servers.and then AD profiles will apply to PG user
accounts for authentication.

It is good if we have user profiles in core PostgreSQL database system. So
it will add more security.

Thanks,
Chiranjeevi

On Tue, Sep 19, 2017 at 3:09 PM, Bruce Momjian  wrote:

> On Tue, Sep 19, 2017 at 01:28:11PM -0400, Stephen Frost wrote:
> > Tom,
> >
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > > chiru r  writes:
> > > > We are looking  for User profiles in ope source PostgreSQL.
> > > > For example, If a  user password failed n+ times while login ,the
> user
> > > > access has to be blocked few seconds.
> > > > Please let us know, is there any plan to implement user profiles in
> feature
> > > > releases?.
> > >
> > > Not particularly.  You can do that sort of thing already via PAM,
> > > for example.
> >
> > Ugh, hardly and it's hokey and a huge pain to do, and only works on
> > platforms that have PAM.
> >
> > Better is to use an external authentication system (Kerberos, for
> > example) which can deal with this, but I do think this is also something
> > we should be considering for core, especially now that we've got a
> > reasonable password-based authentication method with SCRAM.
>
> Does LDAP do this too?
>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>


Re: [HACKERS] Create replication slot in pg_basebackup if requested and not yet present

2017-09-19 Thread Jeff Janes
On Thu, Sep 14, 2017 at 8:23 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 9/12/17 16:39, Michael Banck wrote:
> > We could split up the logic here and create the optional physical
> > replication slot in the main connection and the temporary one in the WAL
> > streamer connection, but this would keep any fragility around for
> > (likely more frequently used) temporary replication slots. It would make
> > the patch much smaller though if I revert touching temporary slots at
> > all.
>
> That's what I was thinking.
>
> But:
>
> If the race condition concern that Jeff was describing is indeed
> correct, then the current use of temporary replication slots would be
> equally affected.  So I think either we already have a problem, or we
> don't and then this patch wouldn't introduce a new one.
>
> I don't know the details of this well enough.
>

It is possible the race condition is entirely theoretical, as the WAL files
won't be eligible for recycling until the end of the *next* (future)
checkpoint (for no good reason, as far as I can tell, although fortunate in
this case) so that should give plenty of opportunity for the WAL streamer
to connect in all but the weirdest cases.  When it reserves the WAL, I
assume it does so back-dating to the LSN position reported by
pg_start_backup and will promptly fail if those are no longer available?

I don't want to hold up the patch based on theoretical questions when it
solves an actual problem.

Cheers,

Jeff


Re: [HACKERS] Add Roman numeral conversion to to_number

2017-09-19 Thread Doug Doole
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   tested, passed
Spec compliant:   tested, passed
Documentation:tested, passed

Applied clean and runs fine. Previous comments were addressed.

There's been some discussion in the thread about whether this is worth doing, 
but given that it is done it seems a bit of a moot point. So passing this off 
to the committers to make a final call.

The new status of this patch is: Ready for Committer

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module

2017-09-19 Thread Andrew Dunstan


On 09/19/2017 02:47 PM, Andrew Dunstan wrote:
>
> On 09/19/2017 11:11 AM, Tom Lane wrote:
>> Andrew Dunstan  writes:
>>> This seems to have upset a number or animals in the buildfarm.
>> Actually, after looking closer, my advice is just to drop the new
>> test cases involving accented letters.  It surprises me not in the
>> least that those would have nonportable behavior: probably, some
>> locales will case-fold them and some won't.
>>
>> (This does open up some questions about whether the opclass will
>> ever be of use for Alexey's original purpose :-()
> Actually, it now looks to me like the problem is we forgot to tell
> postgres that this data is in utf8. The problem appears to resolve (at
> least on my CentOS test box, where I reproduced the buildfarm error) if
> I add
>
> set client_encoding = 'utf8';
>
> to the sql file.
>
> Of course UTF8 bytes interpreted as LATIN1 will give results that are
> ... interesting.
>
> So let's try that first and see if it make the buildfarm go green. Maybe
> there's hope for Alexey's purpose after all.


*sigh* That worked in a couple of instances but crashed and burned
elsewhere. I'll just disable the multi-byte tests as Tom suggests.

cheers

andrew

-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module

2017-09-19 Thread Tom Lane
Andrew Dunstan  writes:
> On 09/19/2017 11:11 AM, Tom Lane wrote:
>> Actually, after looking closer, my advice is just to drop the new
>> test cases involving accented letters.  It surprises me not in the
>> least that those would have nonportable behavior: probably, some
>> locales will case-fold them and some won't.

> Actually, it now looks to me like the problem is we forgot to tell
> postgres that this data is in utf8. The problem appears to resolve (at
> least on my CentOS test box, where I reproduced the buildfarm error) if
> I add
> set client_encoding = 'utf8';
> to the sql file.

That took care of one source of problems, but I wouldn't have expected it
to resolve all the problems ... and it didn't.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PG 10 release notes

2017-09-19 Thread Joshua D. Drake

On 09/19/2017 09:32 AM, 'Bruce Momjian' wrote:

On Tue, Sep 19, 2017 at 12:30:01PM -0400, Tom Lane wrote:

"'Bruce Momjian'"  writes:

On Tue, Sep 19, 2017 at 12:22:39PM -0400, Tom Lane wrote:

We don't normally release-note documentation changes.  If this
wasn't purely a documentation change, then I was probably in error
to decide it didn't need to be in the notes.



It was purely a documentation change, but it was a documented change in a
long-standing and popular practice of not using too many shared buffers
on Windows, so I thought it wise to add it.


Well, if the intent of the note was to encourage people to raise
shared_buffers, it didn't do a very good job of that as written,
because I sure didn't understand it that way.


Do you have any suggestions since it is not a code change that I can
point to?  My guess is that the limitation was removed years ago, but we
only found out recently.


My guess is that the limitation was removed as of 9.3 with the work Haas 
did with shared buffers. Thus, yes it was years ago. I think that 
listing it regardless of the documentation change could be useful. 
Something like:


"""
Better support for large shared_buffers configurations including the 
Windows platform. Users are encouraged to review their shared_buffer 
settings against the size of their active data set and reconfigure 
appropriately.

"""

It is pretty much practitioner given that if your active data set can 
fit in shared_buffers and you aren't going to adversely affect the 
ability for the system to operate, that you should configure a high 
setting. I have seen settings as much as 96GB doing wonderfully in 
production.


JD






--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >