Re: [HACKERS] Aggregate transition state merging vs. hypothetical set functions

2017-10-13 Thread Heikki Linnakangas

On 10/13/2017 02:41 AM, Tom Lane wrote:

David Rowley  writes:

On 13 October 2017 at 12:08, Tom Lane  wrote:

Therefore, I think we need to bite the bullet and provide an aggregate
property (CREATE AGGREGATE argument / pg_aggregate column) that tells
whether the aggregate supports transition state merging.  Likely this
should have been in the state-merging patch to begin with, but better
late than never.



Are you considering that this is an option only for ordered-set
aggregates or for all?


All.


If the user defines their normal aggregate as not safe for merging,
then surely it'll not be suitable to be used as a window function
either, since the final function will also be called there multiple
times per state.


Yeah, we would probably also want to check the flag in nodeWindowAgg.
Not sure exactly how that should play out --- maybe we end up with
a tri-valued property "works as normal agg without merging, works
as normal agg with merging, works as window agg".  But this would
arguably be an improvement over the current situation.  Right now
I'm sure there are user-written aggs out there that will just crash
if used as a window agg, and the authors don't have much choice because
the performance costs of not modifying the transition state in the
finalfn are higher than they're willing to bear.  At least with a
flag they could ensure that the case will fail cleanly.


Sounds right to me. I'm not so sure there really are aggregates out 
there that would crash today if used as a window aggregate, but it sure 
would be nice to give some control on that.


We've been doing that window agg thing for a long time, so I think 
"works as window agg" should be the default for regular aggregates. For 
ordered-set aggregates, "no merging, no more transfn() calls after 
finalfn()" seems safest.


It's a bit of a shame to have different defaults for regular and 
ordered-set aggregates. But that is what we implicitly assume today, 
anyway, and it's not that hard to document.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Aggregate transition state merging vs. hypothetical set functions

2017-10-13 Thread Heikki Linnakangas

On 10/13/2017 02:08 AM, Tom Lane wrote:

I started to look into fixing orderedsetaggs.c so that we could revert
52328727b, and soon found a rather nasty problem.  Although the plain
OSAs seem amenable to supporting multiple finalfn calls on the same
transition state, the "hypothetical set" functions are not at all.
What they do is to accumulate all the regular aggregate input into
a tuplesort, then add the direct arguments as an additional "hypothetical"
row, and finally sort the result.  There's no realistic way of undoing
the addition of the hypothetical row, so these finalfns simply can't
share tuplesort state.

You could imagine accumulating the regular input into a tuplestore
and then copying it into a tuplesort in each finalfn call.  But that
seems pretty icky, and I'm not sure it'd really be any more performant
than just keeping separate tuplesort objects for each aggregate.


The current implementation, with the extra flag column, is quite naive. 
We could add some support to tuplesort to find a row with given 
attributes, and call that instead of actually adding the hypothetical 
row to the tuplesort and iterating to re-find it after performsort. For 
a result set that fits in memory, you could even do a binary search 
instead of linearly iterating through the result set.


I'm not suggesting that we do that right now, though. And even if we 
did, there might be other aggregates out there that are not safe to 
merge, so we should still add the option to CREATE AGGREGATE. But 
that'd be a nice little project, if someone wanted to make those functions 
faster.


- Heikki




Re: [HACKERS] Binary search in fmgr_isbuiltin() is a bottleneck.

2017-10-05 Thread Heikki Linnakangas

On 10/04/2017 10:33 AM, Andres Freund wrote:

On 2017-10-02 15:01:36 -0700, Andres Freund wrote:

On 2017-10-02 17:57:51 -0400, Tom Lane wrote:

Andres Freund  writes:

Done that way. It's a bit annoying, because we've to take care to
initialize the "unused" part of the array with a valid signalling it's
an unused mapping. Can't use 0 for that because fmgr_builtins[0] is a
valid entry.


The prototype code I posted further upthread just used -1 as the "unused"
marker. There's no reason the array can't be int16 rather than uint16,
and "if (index < 0)" is probably a faster test anyway.


Right, but whether we use -1 or UINT16_MAX or such doesn't matter. The
relevant bit is that we can't use 0, so we can't rely on the rest of the
array being zero-initialized, but instead have to initialize all of it
explicitly.  I've no real feelings about using -1 or UINT16_MAX - I'd be
very surprised if there's any sort of meaningful performance difference.


I pushed a further cleaned up version of these two patches.  If you see
a way to avoid initializing the "trailing" part of the
fmgr_builtin_oid_index in a different manner, I'm all ears ;)


You could put a dummy entry at fmgr_builtins[0].

BTW, there's some alignment padding in FmgrBuiltin, when 
MAXIMUM_ALIGNOF==8. You could easily shrink the struct from 32 to 24 
bytes by moving funcName to the end of the struct:


--- a/src/include/utils/fmgrtab.h
+++ b/src/include/utils/fmgrtab.h
@@ -25,11 +25,11 @@
 typedef struct
 {
 	Oid			foid;			/* OID of the function */
-	const char *funcName;		/* C name of the function */
 	short		nargs;			/* 0..FUNC_MAX_ARGS, or -1 if variable count */
 	bool		strict;			/* T if function is "strict" */
 	bool		retset;			/* T if function returns a set */
 	PGFunction	func;			/* pointer to compiled function */
+	const char *funcName;		/* C name of the function */
 } FmgrBuiltin;
 
 extern const FmgrBuiltin fmgr_builtins[];

If we care about cache efficiency here, we could move funcName out of 
the fmgr_builtins array, to a separate array of the same size. I believe 
funcName is only used when you create an internal-language function with 
CREATE FUNCTION, and having it in a separate array shouldn't hurt those 
lookups.


- Heikki




Re: [HACKERS] pgbench - minor fix for meta command only scripts

2017-10-01 Thread Heikki Linnakangas

On 09/29/2017 08:43 PM, Fabien COELHO wrote:

reality.  So I don't know if this needs backpatching or not.  But it
should be fixed for v10, as there it becomes a demonstrably live issue.


Yes.


Patch looks good to me, so committed to master and v10. Thanks!

- Heikki




Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-19 Thread Heikki Linnakangas

On 09/18/2017 11:13 AM, Noah Misch wrote:

On Thu, Sep 14, 2017 at 09:57:36AM +0300, Heikki Linnakangas wrote:

On 09/12/2017 04:09 AM, Noah Misch wrote:

On Wed, May 10, 2017 at 10:50:51PM -0400, Bruce Momjian wrote:

On Mon, May  1, 2017 at 08:12:51AM -0400, Robert Haas wrote:

On Tue, Apr 25, 2017 at 10:16 PM, Bruce Momjian <br...@momjian.us> wrote:

Well, we could add "MD5 users are encouraged to switch to
SCRAM-SHA-256".  Now whether we want to list this as something on the
SCRAM-SHA-256 description, or mention it as an incompatibility, or
under Migration.  I am not clear that MD5 is in such terrible shape that
this is warranted.


I think it's warranted.  The continuing use of MD5 has been a headache
for some EnterpriseDB customers who have compliance requirements which
they must meet.  It's not that they themselves necessarily know or
care whether MD5 is secure, although in some cases they do; it's that
if they use it, they will be breaking laws or regulations to which
their business or agency is subject.  I imagine customers of other
PostgreSQL companies have similar issues.  But leaving that aside, the
advantage of SCRAM isn't merely that it uses a better algorithm to
hash the password.  It has other advantages also, like not being
vulnerable to replay attacks.  If you're doing password
authentication, you should really be using SCRAM, and encouraging
people to move to SCRAM after upgrading is a good idea.

That having been said, SCRAM is a wire protocol break.  You will not
be able to upgrade to SCRAM unless and until the drivers you use to
connect to the database add support for it.  The only such driver
that's part of libpq; other drivers that have reimplemented the
PostgreSQL wire protocol will have to be updated with SCRAM support
before it will be possible to use SCRAM with those drivers.  I think
this should be mentioned in the release notes, too.  I also think it
would be great if somebody would put together a wiki page listing all
the popular drivers and (1) whether they use libpq or reimplement the
wire protocol, and (2) if the latter, the status of any efforts to
implement SCRAM, and (3) if those efforts have been completed, the
version from which they support SCRAM.  Then, I think we should reach
out to the maintainers of those drivers who aren't moving to support
SCRAM and encourage them to do so.


I have added this as an open item because we will have to wait to see
where we are with driver support as the release gets closer.


With the release near, I'm promoting this to the regular open issues section.


Thanks.

I updated the list of drivers on the wiki
(https://wiki.postgresql.org/wiki/List_of_drivers), adding a column for
whether the driver supports SCRAM authentication. Currently, the only
non-libpq driver that has implemented SCRAM is the JDBC driver. I submitted
a patch for the Go driver, but it hasn't been committed yet.

As for a recommendation in the release notes, maybe something like
"Installations using MD5 authentication are encouraged to switch to
SCRAM-SHA-256, unless using older client programs or drivers that don't
support it yet."


That sounds reasonable.

This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


I'm not sure what exactly to do here. Where should we stick that notice? 
We could put it in the release notes, where the bullet point about SCRAM 
is, but it would be well hidden. If we want to give advice to people who 
might not otherwise pay attention, it should go to a more prominent 
place. In the "Migration to version 10" section perhaps. Currently, it 
only lists incompatibilities, which this isn't. Perhaps put the notice 
after the list of incompatibilities (patch attached)?


- Heikki
diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml
index 1a9110614d..015d441376 100644
--- a/doc/src/sgml/release-10.sgml
+++ b/doc/src/sgml/release-10.sgml
@@ -508,6 +508,12 @@
 

 
+   
+Installations using MD5 authentication are encouraged to switch to
+SCRAM-SHA-256, unless using older client programs or drivers that don't
+support it yet.
+   
+
   
 
   



Re: [HACKERS] SCRAM in the PG 10 release notes

2017-09-14 Thread Heikki Linnakangas

On 09/12/2017 04:09 AM, Noah Misch wrote:

On Wed, May 10, 2017 at 10:50:51PM -0400, Bruce Momjian wrote:

On Mon, May  1, 2017 at 08:12:51AM -0400, Robert Haas wrote:

On Tue, Apr 25, 2017 at 10:16 PM, Bruce Momjian  wrote:

Well, we could add "MD5 users are encouraged to switch to
SCRAM-SHA-256".  Now whether we want to list this as something on the
SCRAM-SHA-256 description, or mention it as an incompatibility, or
under Migration.  I am not clear that MD5 is in such terrible shape that
this is warranted.


I think it's warranted.  The continuing use of MD5 has been a headache
for some EnterpriseDB customers who have compliance requirements which
they must meet.  It's not that they themselves necessarily know or
care whether MD5 is secure, although in some cases they do; it's that
if they use it, they will be breaking laws or regulations to which
their business or agency is subject.  I imagine customers of other
PostgreSQL companies have similar issues.  But leaving that aside, the
advantage of SCRAM isn't merely that it uses a better algorithm to
hash the password.  It has other advantages also, like not being
vulnerable to replay attacks.  If you're doing password
authentication, you should really be using SCRAM, and encouraging
people to move to SCRAM after upgrading is a good idea.

That having been said, SCRAM is a wire protocol break.  You will not
be able to upgrade to SCRAM unless and until the drivers you use to
connect to the database add support for it.  The only such driver
that's part of core is libpq; other drivers that have reimplemented the
PostgreSQL wire protocol will have to be updated with SCRAM support
before it will be possible to use SCRAM with those drivers.  I think
this should be mentioned in the release notes, too.  I also think it
would be great if somebody would put together a wiki page listing all
the popular drivers and (1) whether they use libpq or reimplement the
wire protocol, and (2) if the latter, the status of any efforts to
implement SCRAM, and (3) if those efforts have been completed, the
version from which they support SCRAM.  Then, I think we should reach
out to the maintainers of those drivers who aren't moving to support
SCRAM and encourage them to do so.


I have added this as an open item because we will have to wait to see
where we are with driver support as the release gets closer.


With the release near, I'm promoting this to the regular open issues section.


Thanks.

I updated the list of drivers on the wiki 
(https://wiki.postgresql.org/wiki/List_of_drivers), adding a column for 
whether the driver supports SCRAM authentication. Currently, the only 
non-libpq driver that has implemented SCRAM is the JDBC driver. I 
submitted a patch for the Go driver, but it hasn't been committed yet.


As for a recommendation in the release notes, maybe something like 
"Installations using MD5 authentication are encouraged to switch to 
SCRAM-SHA-256, unless using older client programs or drivers that don't 
support it yet."


- Heikki




Re: [HACKERS] SCRAM salt length

2017-08-17 Thread Heikki Linnakangas

On 08/17/2017 05:23 PM, Peter Eisentraut wrote:

On 8/17/17 09:21, Heikki Linnakangas wrote:

The RFC doesn't say anything about salt
length, but the one example in it uses a 16 byte string as the salt.
That's more or less equal to the current default of 12 raw bytes, after
base64-encoding.


The example is

S: r=rOprNGfwEbeRWgbNEkqO%hvYDpWUa2RaTCAfuxFIlj)hNlF$k0,
   s=W22ZaJ0SNY7soEsUEjb6gQ==,i=4096

That salt is 24 characters and 16 raw bytes.


Ah, I see, that's from the SCRAM-SHA-256 spec. I was looking at the 
example in the original SCRAM-SHA-1 spec:


S: r=fyko+d2lbbFgONRv9qkxdawL3rfcNHYJY1ZVvWVs7j,s=QSXCR+Q6sek8bf92,
  i=4096

- Heikki




Re: [HACKERS] SCRAM salt length

2017-08-17 Thread Heikki Linnakangas

On 08/17/2017 04:04 PM, Robert Haas wrote:

On Wed, Aug 16, 2017 at 10:42 PM, Michael Paquier
 wrote:

In the initial discussions there was as well a mention about using 16 bytes.
https://www.postgresql.org/message-id/507550bd.2030...@vmware.com
As we are using SCRAM-SHA-256, let's bump it up and be consistent.
That's now or never.


This was discussed and changed once before at
https://www.postgresql.org/message-id/df8c6e27-4d8e-5281-96e5-131a4e638...@8kdata.com


Different thing. That was the nonce length, now we're talking about salt 
length.


I think 2^96 is large enough. The RFC doesn't say anything about salt 
length, but the one example in it uses a 16 byte string as the salt. 
That's more or less equal to the current default of 12 raw bytes, after 
base64-encoding.


On 08/17/2017 05:42 AM, Michael Paquier wrote:
> That's now or never.

Not really. That constant is just the default to use when creating new 
password verifiers, but the code can handle any salt length, and 
different verifiers can have different lengths.


- Heikki




Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-08-16 Thread Heikki Linnakangas

On 08/17/2017 12:20 AM, Tom Lane wrote:

Andres Freund <and...@anarazel.de> writes:

On 2017-08-16 16:20:28 +0300, Heikki Linnakangas wrote:
+   pg_atomic_write_u64(&target->phs_nallocated, 0);



It's not ok to initialize an atomic with pg_atomic_write_u64 - works
well enough for "plain" atomics, but the fallback implementation isn't
ok with it. You're probably going to get a failure on the respective
buildfarm animal soon.


Indeed, gaur fails with

2017-08-16 17:09:38.315 EDT [13043:11] PANIC:  stuck spinlock detected at pg_atomic_compare_exchange_u64_impl, atomics.c:196
2017-08-16 17:09:38.315 EDT [13043:12] STATEMENT:  select count(*) from a_star;
2017-08-16 17:09:40.350 EDT [12437:3] LOG:  server process (PID 13043) was terminated by signal 6
2017-08-16 17:09:40.350 EDT [12437:4] DETAIL:  Failed process was running: select count(*) from a_star;

and I'm sure pademelon will fail once it gets to that.


Fixed.


I thought we had other buildfarm animals testing the fallback path,
though?

Yes, at least piculet is building with --disable-atomics.

I was able to reproduce this locally, with --disable-atomics, but only 
after hacking it to fill the struct with garbage, before initializing 
it. IOW, it failed to fail, because the spinlock happened to be 
initialized correctly by accident. Perhaps that's happening on piculet, too.


- Heikki




Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-08-16 Thread Heikki Linnakangas

On 08/16/2017 09:00 PM, Tom Lane wrote:

Robert Haas  writes:

On Wed, Aug 16, 2017 at 1:44 PM, Tom Lane  wrote:

I was feeling a bit uncomfortable with the BUFFERALIGN_DOWN() for a
different reason: if the caller has specified the exact amount of space it
needs, having shm_toc_create discard some could lead to an unexpected
failure.



Well, that's why Heikki also patched shm_toc_estimate.  With the
patch, if the size being used in shm_toc_create comes from
shm_toc_estimate, it will always be aligned and nothing bad will
happen.


Sure.  So the point is entirely about what should happen if someone
doesn't use shm_toc_estimate.


If the user invents another size out of whole cloth, then
they might get a few bytes less than they expect, but that's what you
get for not using shm_toc_estimate().


I don't buy that argument.  A caller might think "Why do I need
shm_toc_estimate, when I can compute the *exact* size I need?".
And it would have worked, up till this proposed patch.


Well, no. The size of the shm_toc struct is subtracted from the size 
that you give to shm_toc_create. As well as the sizes of the TOC 
entries. And those sizes are private to shm_toc.c, so a caller has no 
way to know what size it should pass to shm_toc_create(), in order to 
have N bytes of space actually usable. You really need to use 
shm_toc_estimate() if you want any guarantees on what will fit.


I've pushed the patch to fix this, with some extra comments and 
reformatting shm_toc_estimate.



8 byte alignment would be good enough, so BUFFERALIGN ought to be
sufficient. But it'd be nicer to have a separate more descriptive knob.


What I meant by possibly not good enough is that pg_atomic_uint64 used
in other places isn't going to be very safe.

We might be effectively all right as long as we have a coding rule that
pg_atomic_uint64 can only be placed in memory handed out by ShmemAlloc
or shm_toc_allocate, which both have bigger-than-MAXALIGN alignment
practices.  But this needs to be documented.


Yeah. We are implicitly also relying on ShmemAlloc() to return 
sufficiently-aligned memory. Palloc() too, although you probably 
wouldn't use atomic ops on a palloc'd struct. I think we should 
introduce a new ALIGNOF macro for that, and document that those 
functions return memory with enough alignment. GENUINEMAX_ALIGNOF? 
MAXSTRUCT_ALIGNOF?


- Heikki




Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-08-16 Thread Heikki Linnakangas

On 08/16/2017 08:10 PM, Andres Freund wrote:

Afaict shm_create/shm_toc_allocate don't actually guarantee that the end
of the toc's memory is suitably aligned.  But I didn't yet have any
coffee, so ...


Robert, I'm not quite sure what the intended behaviour of shm_toc is wrt
alignment. I see that individual chunks are BUFFERALIGNed (both during
estimation, and allocation). But I don't see how the size of the entire
toc is aligned, which seems a requirement, given we allocate from the
end.
Seems like we'd just have to add alignment of the total size to
shm_toc_estimate()?


Yeah, that's the gist of it.

The attached patch seems to fix this. I'm not too familiar with this DSM 
stuff, but this seems right to me. Unless someone has a better idea 
soon, I'll commit this to make the buildfarm happy.


- Heikki
diff --git a/src/backend/storage/ipc/shm_toc.c b/src/backend/storage/ipc/shm_toc.c
index 9f259441f0..121d5a1ec9 100644
--- a/src/backend/storage/ipc/shm_toc.c
+++ b/src/backend/storage/ipc/shm_toc.c
@@ -44,7 +44,7 @@ shm_toc_create(uint64 magic, void *address, Size nbytes)
 	Assert(nbytes > offsetof(shm_toc, toc_entry));
 	toc->toc_magic = magic;
 	SpinLockInit(&toc->toc_mutex);
-	toc->toc_total_bytes = nbytes;
+	toc->toc_total_bytes = BUFFERALIGN_DOWN(nbytes);
 	toc->toc_allocated_bytes = 0;
 	toc->toc_nentry = 0;
 
@@ -252,7 +252,8 @@ shm_toc_lookup(shm_toc *toc, uint64 key, bool noError)
 Size
 shm_toc_estimate(shm_toc_estimator *e)
 {
-	return add_size(offsetof(shm_toc, toc_entry),
-					add_size(mul_size(e->number_of_keys, sizeof(shm_toc_entry)),
-							 e->space_for_chunks));
+	return BUFFERALIGN(
+		add_size(offsetof(shm_toc, toc_entry),
+				 add_size(mul_size(e->number_of_keys, sizeof(shm_toc_entry)),
+						  e->space_for_chunks)));
 }
diff --git a/src/include/c.h b/src/include/c.h
index 9066e3c578..af799dc1df 100644
--- a/src/include/c.h
+++ b/src/include/c.h
@@ -598,6 +598,7 @@ typedef NameData *Name;
 #define LONGALIGN_DOWN(LEN)		TYPEALIGN_DOWN(ALIGNOF_LONG, (LEN))
 #define DOUBLEALIGN_DOWN(LEN)	TYPEALIGN_DOWN(ALIGNOF_DOUBLE, (LEN))
 #define MAXALIGN_DOWN(LEN)		TYPEALIGN_DOWN(MAXIMUM_ALIGNOF, (LEN))
+#define BUFFERALIGN_DOWN(LEN)	TYPEALIGN_DOWN(ALIGNOF_BUFFER, (LEN))
 
 /*
  * The above macros will not work with types wider than uintptr_t, like with



Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-08-16 Thread Heikki Linnakangas

On 08/16/2017 04:20 PM, Heikki Linnakangas wrote:

On 05/06/2017 04:57 PM, David Rowley wrote:

Andres mentioned in [2] that it might be worth exploring using
atomics to do the same job. So I went ahead and did that, and came
up with the attached, which is a slight variation on what he
mentioned in the thread.

To keep things a bit more simple, and streamline, I ended up
pulling out the logic for setting the startblock into another
function, which we only call once before the first call to 
heap_parallelscan_nextpage().  I also ended up changing phs_cblock

and replacing it with a counter that always starts at zero. The
actual block is calculated based on that + the startblock modulo
nblocks. This makes things a good bit more simple for detecting
when we've allocated all the blocks to the workers, and also works
nicely when wrapping back to the start of a relation when we
started somewhere in the middle due to piggybacking with a
synchronous scan.


Looks reasonable. I edited the comments and the variable names a bit,
to my liking, and committed. Thanks!


A couple of 32-bit x86 buildfarm members don't seem to be happy with 
this. I'll investigate, but if anyone has a clue, I'm all ears...


- Heikki




Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-08-16 Thread Heikki Linnakangas

On 05/06/2017 04:57 PM, David Rowley wrote:

Andres mentioned in [2] that it might be worth exploring using atomics
to do the same job. So I went ahead and did that, and came up with the
attached, which is a slight variation on what he mentioned in the
thread.

To keep things a bit more simple, and streamline, I ended up pulling
out the logic for setting the startblock into another function, which
we only call once before the first call to
heap_parallelscan_nextpage().  I also ended up changing phs_cblock and
replacing it with a counter that always starts at zero. The actual
block is calculated based on that + the startblock modulo nblocks.
This makes things a good bit more simple for detecting when we've
allocated all the blocks to the workers, and also works nicely when
wrapping back to the start of a relation when we started somewhere in
the middle due to piggybacking with a synchronous scan.


Looks reasonable. I edited the comments and the variable names a bit, to 
my liking, and committed. Thanks!


- Heikki




Re: [HACKERS] Refactoring identifier checks to consistently use strcmp

2017-08-16 Thread Heikki Linnakangas

On 04/04/2017 10:13 AM, Daniel Gustafsson wrote:

On 04 Apr 2017, at 05:52, Alvaro Herrera  wrote:

Daniel Gustafsson wrote:

Testing DefElem options is done with both strcmp() and pg_strcasecmp() a bit
mixed.  Since the option defnames are all lowercased, either via IDENT, keyword
rules or “by hand” with makeString(), using strcmp() is safe (and assumed to be
so in quite a lot of places).

While it’s not incorrect per se to use pg_strcasecmp(), it has the potential to
hide a DefElem created with a mixed-case defname where it in other places is
expected to be in lowercase, which may lead to subtle bugs.

The attached patch refactors to use strcmp() consistently for option processing
in the command code as a pre-emptive belts+suspenders move against such subtle
bugs and to make the code more consistent.  Also reorders a few checks to have
all in the same “format” and removes a comment related to the above.

Tested with randomizing case on options in make check (not included in patch).


Does it work correctly in the Turkish locale?


Yes, running make check with random case defnames under tr_TR works fine.


This no longer works:

postgres=# CREATE TEXT SEARCH DICTIONARY public.simple_dict (
TEMPLATE = pg_catalog.simple,
"STOPWORds" = english
);
ERROR:  unrecognized simple dictionary parameter: "STOPWORds"

In hindsight, perhaps we should always have been more strict about that 
to begin with, but let's not break backwards-compatibility without a 
better reason. I didn't thoroughly check all of the cases here, to see 
if there are more like this.


It'd be nice to have some kind of a rule on when pg_strcasecmp should be 
used and when strcmp() is enough. Currently, by looking at the code, I 
can't tell.


- Heikki




[HACKERS] Use PG_DETOAST_DATUM_SLICE in bitlength() (was Re: PG_GETARG_GISTENTRY?)

2017-08-16 Thread Heikki Linnakangas

On 04/25/2017 04:10 AM, Noah Misch wrote:

On Mon, Apr 24, 2017 at 09:25:25AM -0700, Mark Dilger wrote:

Noah, if you left this case out intentionally, sorry for the noise.  I did not
immediately see any reason not to follow your lead for this function.


This is not following my lead, but that doesn't make it bad.  It's just a
different topic.


(Changed subject line accordingly.)

From the patch:

--- a/src/backend/utils/adt/varbit.c
+++ b/src/backend/utils/adt/varbit.c
@@ -1187,7 +1187,7 @@ bit_overlay(VarBit *t1, VarBit *t2, int sp, int sl)
 Datum
 bitlength(PG_FUNCTION_ARGS)
 {
-   VarBit *arg = PG_GETARG_VARBIT_P(0);
+   VarBit *arg = PG_GETARG_VARBIT_P_SLICE(0,0,VARHDRSZ+VARBITHDRSZ);
 
PG_RETURN_INT32(VARBITLEN(arg));

 }


That doesn't look quite right. PG_GETARG_VARBIT_P_SLICE(X, m, n) returns 
n bytes, from offset m, within the varlena. Offset 0 points to just 
after the varlen header, i.e. the bit length. AFAICS, there's no need to 
include VARHDRSZ here, and this should be just "arg = 
PG_GETARG_VARBIT_P_SLICE(0, 0, VARBITHDRSZ)". It's a harmless mistake to 
fetch more data than needed, but let's try to not be confused over how 
slices work.


I wonder if having a PG_GETARG_VARBIT_P_SLICE macro like this is really 
a good idea. It might be useful to be able to fetch just the header, to 
get the length, like in this function. And bitgetbit() function would 
benefit from being able to fetch just a slice of the data, containing 
the bit it's interested in. But this macro seems quite inconvenient for 
both of those use cases. I'm not sure what to do instead, but I think 
that needs some more thought.


I'd suggest expanding this patch, to also make bitgetbit to fetch just a 
slice, and see what that looks like.


- Heikki




Re: [HACKERS] [PATCH] Remove unused argument in btree_xlog_split

2017-08-16 Thread Heikki Linnakangas

On 04/06/2017 03:21 PM, Aleksander Alekseev wrote:

Hi Robert,


Hmm.  I don't see anything wrong with that, particularly, but it seems
we also don't need the distinction between XLOG_BTREE_SPLIT_L and
XLOG_BTREE_SPLIT_L_ROOT or likewise between XLOG_BTREE_SPLIT_R and
XLOG_BTREE_SPLIT_R_ROOT -- in which case I think this patch should go
a little further and do all of that together.


Thank you for sharing your thoughts on this patch. Here is a new
version.


Committed, thanks.

- Heikki




Re: [HACKERS] Support for Secure Transport SSL library on macOS as OpenSSL alternative

2017-08-03 Thread Heikki Linnakangas

On 08/03/2017 01:02 PM, Daniel Gustafsson wrote:

In https://postgr.es/m/69db7657-3f9d-4d30-8a4b-e06034251...@yesql.se I
presented a WIP patch for adding support for the Apple Secure Transport SSL
library on macOS as, an alternative to OpenSSL.  That patch got put on the
backburner for a bit, but I’ve now found the time to make enough progress to
warrant a new submission for discussions on this (and hopefully help hacking).


Hooray!


Keychains
=
The frontend has support for using PEM files for certificates and keys.  It can
also look up the key for the certificate in a Keychain, or both certificate and
key in a Keychain.  The root certificate is still just read from a PEM file.


Why can't you have the root certificate in the keychain, too? Just not 
implemented yet, or is there some fundamental problem with it?



The existence of an sslcert config trumps a keychain, but when a keychain is
used I’m currently using the sslcert parameter (awaiting a discussion on how to
properly do this) for the certificate label required to search the keychain.

There is a new frontend parameter called “keychain” with which a path to a
custom Keychain file can be passed.  If set, this Keychain will be searched as
well as the default.  If not, only the default user Keychain is used.  There is
nothing that modifies the Keychain in this patch, it can read identities
(certificates and its key) but not alter them in any way.


OpenSSL also has a mechanism somewhat similar to the keychain, called 
"engines". You can e.g. keep the private key corresponding a certificate 
on a smart card, and speak to it with an OpenSSL "smart card reader" 
engine. If you do that, the 'sslkey' syntax is "<engine name>:<key 
name>". Perhaps we should adopt that syntax here as well? For example, 
to read the client certificate from the key chain, you would use libpq 
options like "keychain=/home/heikki/my_keychain 
sslcert=keychain:my_client_cert".



“keychain” is obviously a very Secure Transport specific name, and I personally
think we should avoid that.  Any new configuration added here should take
future potential implementation into consideration such that avoid the need for
lots of backend specific knobs.  “sslcertstore” comes to mind as an
alternative, but we’d also need parameters to point into the certstore for
finding what we need.  Another option would be to do a URL based scheme
perhaps.


I wouldn't actually mind using implementation-specific terms like 
"keychain" here. It makes it clear that it's implementation-specific. I 
think it would be confusing, to use the same generic option name, like 
"sslcertstore", for both a macOS keychain and e.g. the private key store 
in Windows. Or GNU Keyring. In the worst case, you might even have 
multiple such "key stores" on the same system, so you'd anyways need a 
way to specify which one you mean.


Actually, perhaps it should be made even more explicit, and call it 
"secure_transport_keychain". That's quite long, though.


Wrt. keychains, is there a system-global or per-user keychain in macOS? 
And does this patch use them? If you load a CRL into a global keychain, 
does it take effect?



Testing
===
In order to test this we need to provide an alternative to the openssl calls in
src/test/ssl/Makefile for Secure Transport.


Those openssl commands are only needed to re-generate the test 
certificates. The certificates are included in the git repository, so 
you only need to re-generate them if you want to modify them or add new 
ones. I think it's OK to require the openssl tool for that, it won't be 
needed just to run the test suite.



Documentation
=
I have started fiddling with this a little, but to avoid spending time on the
wrong thing I have done very little awaiting the outcome of discussions here.
I have tried to add lots of comments to the code however, to explain the quirks
of Secure Transport.


I think this patch in general is in very good shape, and the next step 
is to write the documentation. In particular, I'd like to see 
documentation on how the keychain stuff should work. It'll be easier to 
discuss the behavior and the interactions, once it's documented.


In fact, better to write the documentation for that now, and not bother 
to change the code, until after we've discussed and are happy with the 
documented behavior.



I went into this thinking I would write a README for how to implement a new SSL
library.  But in the end, turns out the refactoring that went into our SSL code
earlier made that part almost too easy to warrant that.  It’s really quite
straightforward.


That's nice to hear!

- Heikki




Re: [HACKERS] PostgreSQL - Weak DH group

2017-07-31 Thread Heikki Linnakangas

On 07/31/2017 02:27 PM, Heikki Linnakangas wrote:

Rebased patch attached, with proposed release notes included. Barring
new objections or arguments, I'll commit this (only) to v10 later today.


Ok, committed for v10. Thanks Nicolas and Damien, and everyone else 
involved!


- Heikki




Re: [HACKERS] PostgreSQL not setting OpenSSL session id context?

2017-07-31 Thread Heikki Linnakangas

On 07/31/2017 02:24 AM, Shay Rojansky wrote:

Just to continue the above, I can confirm that adding a simple call
to SSL_CTX_set_session_id_context() to be_tls_init() with some arbitrary
const value fixes the error for me. Attached is a patch (ideally a test
should be done for this, but that's beyond what I can invest at the moment,
let me know if it's absolutely necessary).


I agree with Tom that we don't really want abbreviated SSL handshakes, 
or other similar optimizations, to take place. PostgreSQL connections 
are quite long-lived, so we have little to gain. But it makes the attack 
surface larger. There have been vulnerabilities related to SSL 
renegotiation, resumption, abbreviated handshakes, and all that.


I think we should actually call SSL_CTX_set_session_cache_mode(ctx, 
SSL_SESS_CACHE_OFF), to disable session caching altogether. I'm not sure 
if we still need to call SSL_CTX_set_session_id_context() if we do that.


I know next-to-nothing about .Net; is there some easy way to download a 
.Net client application and test this?


- Heikki




Re: [HACKERS] PostgreSQL - Weak DH group

2017-07-31 Thread Heikki Linnakangas

On 07/13/2017 11:07 PM, Heikki Linnakangas wrote:

On 07/13/2017 10:13 PM, Robert Haas wrote:

On Thu, Jul 13, 2017 at 1:30 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Heikki Linnakangas <hlinn...@iki.fi> writes:

I don't think this can be backpatched. It changes the default DH
parameters from 1024 bits to 2048 bits. That's a good thing for
security, but older clients might not support it, and would refuse to
connect or would fall back to something less secure.


Do we have any hard information about which versions of which clients
might not support that?  (In particular I'm wondering if any still exist
in the wild.)


Yeah.  If we break clients for v10 two months from release, some
drivers won't be updated by release time, and that sounds pretty
unfriendly to me.  On the other hand, if there is only a theoretical
risk of breakage and no clients that we actually know about will have
a problem with it, then the argument for waiting is weaker.  I'm not
generally very excited about changing things after beta2, which is
where we are, but if this is a security issue then we might need to hold
our nose and go ahead.  I'm against it if it's likely to cause
real-world connectivity problems, though.


Googling around, I believe Java 6 is the only straggler [1]. So we would
be breaking that. Java 7 also doesn't support DH parameters > 1024 bits,
but it supports ECDHE, which is prioritized over DH ciphers, so it
doesn't matter.

Java 6 was released back in 2006. The last public release was in 2013.
It wouldn't surprise me to still see it bundled with random proprietary
software packages, though. The official PostgreSQL JDBC driver still
supports it, but there has been discussion recently on dropping support
for it, and even for Java 7. [2]

I would be OK with breaking DH with Java 6 in PostgreSQL 10, especially
since there's a simple workaround (generate a 1024-bit DH parameters
file). I would be less enthusiastic about doing that in a minor release,
although maybe that wouldn't be too bad either, if we put a prominent
notice with the workaround in the release notes.


Some more information on the situation with JDK version 6: I installed 
Debian wheezy on a VM, with a OpenJDK 6, and tested connecting to a 
patched server with the JDBC driver. It worked! Googling around, it 
seems that this was fixed in later versions of OpenJDK 6 
(https://bugs.openjdk.java.net/browse/JDK-8062834). I then downloaded 
the latest Oracle JRE binary version, 6u45, available from 
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase6-419409.html. 
With that, it does not work; you get errors like:


org.postgresql.util.PSQLException: SSL error: 
java.lang.RuntimeException: Could not generate DH keypair

...
Caused by: java.security.InvalidAlgorithmParameterException: Prime size 
must be multiple of 64, and can only range from 512 to 1024 (inclusive)


So, the last binary version downloadable from Oracle is affected, but 
recent versions of OpenJDK 6 work.
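The workaround mentioned earlier can be sketched as follows (a hypothetical example: the directory and filename are stand-ins; on a v10 server you would also point ssl_dh_params_file at the generated file):

```shell
# Generate 1024-bit DH parameters so clients limited to <=1024-bit DH
# (e.g. the Oracle JRE 6u45 binary above) can still connect to a server
# whose built-in default is now 2048 bits.
DEMO_PGDATA="/tmp/demo_pgdata"   # stand-in for a real $PGDATA
mkdir -p "$DEMO_PGDATA"
openssl dhparam -out "$DEMO_PGDATA/dh1024.pem" 1024 2>/dev/null
grep -c "BEGIN DH PARAMETERS" "$DEMO_PGDATA/dh1024.pem"
```

Note that this deliberately trades strength for compatibility; clients that do support 2048-bit DH will also use the weaker parameters.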


Rebased patch attached, with proposed release notes included. Barring 
new objections or arguments, I'll commit this (only) to v10 later today.


- Heikki
>From 93ef6ce1c203028384cf9febf3b4add715fff26f Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Mon, 31 Jul 2017 13:39:01 +0300
Subject: [PATCH 1/2] Always use 2048 bit DH parameters for OpenSSL ephemeral
 DH ciphers.

1024 bits is considered weak these days, but OpenSSL always passes 1024 as
the key length to the tmp_dh callback. All the code to handle other key
lengths is, in fact, dead.

To remedy those issues:

* Only include hard-coded 2048-bit parameters.
* Set the parameters directly with SSL_CTX_set_tmp_dh(), without the
  callback
* The name of the file containing the DH parameters is now a GUC. This
  replaces the old hardcoded "dh1024.pem" filename. (The files for other
  key lengths, dh512.pem, dh2048.pem, etc. were never actually used.)

Per report by Nicolas Guini and Damian Quiroga. Reviewed by Michael Paquier.

Discussion: https://www.postgresql.org/message-id/camxbouyjooautvozn6ofzym828anrdjuccotccquxjws-l2...@mail.gmail.com
---
 doc/src/sgml/config.sgml  |  24 +++
 src/backend/libpq/be-secure-openssl.c | 264 +-
 src/backend/libpq/be-secure.c |   1 +
 src/backend/utils/misc/guc.c  |  11 ++
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/include/libpq/libpq.h |   1 +
 6 files changed, 133 insertions(+), 169 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b45b7f7f69..c33d6a0349 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1203,6 +1203,30 @@ include_dir 'conf.d'
   
  
 
+ 
+  ssl_dh_params_file (string)
+  
+   ssl_dh_params_file configuration parameter
+  
+  
+  
+   
+Specifies the name of 

[HACKERS] Re: [BUGS] BUG #14634: On Windows pg_basebackup should write tar to stdout in binary mode

2017-07-14 Thread Heikki Linnakangas

On 07/14/2017 05:27 AM, Haribabu Kommi wrote:

On Fri, Jul 14, 2017 at 2:54 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:


On 05/03/2017 07:32 AM, Haribabu Kommi wrote:


[Adding -hackers mailing list]

On Fri, Apr 28, 2017 at 6:28 PM, <henry_boehl...@agilent.com> wrote:


Executing command pg_basebackup -D -F t writes its output to stdout,
which
is open in text mode, causing LF to be converted to CR LF thus corrupting
the resulting archive.

To write the tar to stdout, on Windows stdout's mode should be
temporarily
switched to binary.

https://msdn.microsoft.com/en-us/library/tw4k6df8.aspx


Thanks for reporting the issue.
With the attached patch, I was able to extract the tar file that gets
generated when the tar file is written into stdout. I tested the
the compressed tar also.

This bug needs to be fixed in back branches also.


Seems reasonable. One question:

In the patch, you used "_setmode" function, while the calls in
src/bin/pg_dump/pg_backup_archiver.c use "setmode". There are a few
places in the backend that also use "_setmode". What's the difference?
Should we change some of them to be consistent?


Actually, there is no functional difference between these two functions:
one is the POSIX variant and the other is the ISO C++ variant [1]. The
POSIX variant is deprecated on Windows, which is why we should use
_setmode instead of setmode.

I attached a patch to change the pg_dump code to use the _setmode function
instead of setmode, to be consistent with the other callers.


Ok, committed and backpatched both patches. Thanks!

- Heikki





Re: [HACKERS] PostgreSQL - Weak DH group

2017-07-13 Thread Heikki Linnakangas

On 07/13/2017 10:13 PM, Robert Haas wrote:

On Thu, Jul 13, 2017 at 1:30 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Heikki Linnakangas <hlinn...@iki.fi> writes:

I don't think this can be backpatched. It changes the default DH
parameters from 1024 bits to 2048 bits. That's a good thing for
security, but older clients might not support it, and would refuse to
connect or would fall back to something less secure.


Do we have any hard information about which versions of which clients
might not support that?  (In particular I'm wondering if any still exist
in the wild.)


Yeah.  If we break clients for v10 two months from release, some
drivers won't be updated by release time, and that sounds pretty
unfriendly to me.  On the other hand, if there is only a theoretical
risk of breakage and no clients that we actually know about will have
a problem with it, then the argument for waiting is weaker.  I'm not
generally very excited about changing things after beta2, which is
where we are, but if this is a security issue then we might need to hold
our nose and go ahead.  I'm against it if it's likely to cause
real-world connectivity problems, though.


Googling around, I believe Java 6 is the only straggler [1]. So we would 
be breaking that. Java 7 also doesn't support DH parameters > 1024 bits, 
but it supports ECDHE, which is prioritized over DH ciphers, so it 
doesn't matter.


Java 6 was released back in 2006. The last public release was in 2013. 
It wouldn't surprise me to still see it bundled with random proprietary 
software packages, though. The official PostgreSQL JDBC driver still 
supports it, but there has been discussion recently on dropping support 
for it, and even for Java 7. [2]


I would be OK with breaking DH with Java 6 in PostgreSQL 10, especially 
since there's a simple workaround (generate a 1024-bit DH parameters 
file). I would be less enthusiastic about doing that in a minor release, 
although maybe that wouldn't be too bad either, if we put a prominent 
notice with the workaround in the release notes.


[1] https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_and_ECDHE_support

[2] 
https://www.postgresql.org/message-id/69ae857b-15cc-36dd-f380-6620ef1effb9%408kdata.com


- Heikki




Re: [HACKERS] Fixup some misusage of appendStringInfo and friends

2017-07-13 Thread Heikki Linnakangas

On 04/27/2017 03:14 AM, David Rowley wrote:

On 27 April 2017 at 06:41, Peter Eisentraut
 wrote:

On 4/19/17 08:42, Ashutosh Bapat wrote:

I reviewed the patch. It compiles clean, make check-world passes. I do
not see any issue with it.


Looks reasonable.  Let's keep it for the next commit fest.


Thank you to both of you for looking. I'd thought that maybe the new
stuff in PG10 should be fixed before the release. If we waited, and
fix in PG11 then backpatching is more of a pain.

However, I wasn't careful in the patch to touch only new to PG10 code.

I'll defer to your better judgment and add to the next 'fest.


I think that's a very good argument. Cleaning up code that's new in this 
version seems like a fair game, and a good idea. The places that are not 
new in PostgreSQL 10 are more questionable, but seems harmless enough 
anyway.


Did you have an outright objection to this, Peter? The patch looks good 
to me at a quick glance, I think we should commit this now.


- Heikki





Re: [HACKERS] PostgreSQL - Weak DH group

2017-07-13 Thread Heikki Linnakangas

On 07/13/2017 08:04 PM, Alvaro Herrera wrote:

Michael Paquier wrote:

On Thu, Jul 13, 2017 at 5:32 PM, Heikki Linnakangas <hlinn...@iki.fi> wrote:



Objections to committing this now, instead of waiting for v11?


But I am -1 for the sneak part. It is not the time to have a new
feature in 10, the focus is to stabilize.


But if we were treating it as a security issue, would we backpatch it?
If we do, then it definitely makes sense to put something in pg10.  I'm
not sure that this patch is it, though -- perhaps it makes sense to put
a minimal fix in older branches, and let the new feature wait for pg11?


I don't think this can be backpatched. It changes the default DH 
parameters from 1024 bits to 2048 bits. That's a good thing for 
security, but older clients might not support it, and would refuse to 
connect or would fall back to something less secure. I don't think there 
are many such clients around anymore, but it's nevertheless not 
something we want to do in a stable release. I think the best we can do 
is to document the issue and the workaround. To recap, to use stronger 
DH parameters in stable versions, you need to do "openssl dhparam -out 
$PGDATA/dh1024.pem 2048".


But I'd like to take the opportunity to change this for new 
installations, with v10, instead of waiting for another year. Of course, 
you could say that for any new feature, too, but that doesn't 
necessarily mean that it's a bad argument :-). It's a judgment call, for 
sure.


- Heikki





[HACKERS] Re: [BUGS] BUG #14634: On Windows pg_basebackup should write tar to stdout in binary mode

2017-07-13 Thread Heikki Linnakangas

On 05/03/2017 07:32 AM, Haribabu Kommi wrote:

[Adding -hackers mailing list]

On Fri, Apr 28, 2017 at 6:28 PM,  wrote:


The following bug has been logged on the website:

Bug reference:  14634
Logged by:  Henry Boehlert
Email address:  henry_boehl...@agilent.com
PostgreSQL version: 9.6.2
Operating system:   Windows Server 2012 R2 6.3.9600
Description:

Executing command pg_basebackup -D -F t writes its output to stdout, which
is open in text mode, causing LF to be converted to CR LF thus corrupting
the resulting archive.

To write the tar to stdout, on Windows stdout's mode should be temporarily
switched to binary.

https://msdn.microsoft.com/en-us/library/tw4k6df8.aspx



Thanks for reporting the issue.
With the attached patch, I was able to extract the tar file that gets
generated when the tar file is written into stdout. I tested the
the compressed tar also.

This bug needs to be fixed in back branches also.


Seems reasonable. One question:

In the patch, you used "_setmode" function, while the calls in 
src/bin/pg_dump/pg_backup_archiver.c use "setmode". There are a few 
places in the backend that also use "_setmode". What's the difference? 
Should we change some of them to be consistent?


- Heikki





Re: [HACKERS] PostgreSQL - Weak DH group

2017-07-13 Thread Heikki Linnakangas

(We dropped the ball back in October, continuing the discussion now)

On 10/10/2016 06:24 PM, Heikki Linnakangas wrote:

On 10/06/2016 10:26 PM, Christoph Berg wrote:

Re: Heikki Linnakangas 2016-10-06 <fd6eb3cc-1585-1469-fd9e-763f8e410...@iki.fi>

I propose the attached patch. It gives up on trying to deal with multiple
key lengths (as noted earlier, OpenSSL just always passed keylength=1024, so
that was useless). Instead of using the callback, it just sets fixed DH
parameters with SSL_CTX_set_tmp_dh(), like we do for the ECDH curve. The DH
parameters are loaded from a file called "dh_params.pem" (instead of
"dh1024.pem"), if present, otherwise the built-in 2048 bit parameters are
used.


Shouldn't this be a GUC pointing to a configurable location like
ssl_cert_file? This way, people reading the security section of the
default postgresql.conf would notice that there's something (new) to
configure. (And I wouldn't want to start creating symlinks from PGDATA
to /etc/ssl/something again...)


Perhaps. The DH parameters are not quite like other configuration files,
though. The actual parameters used don't matter, you just want to
generate different parameters for every cluster. The key length of the
parameters can be considered configuration, though.


Actually, adding a GUC also solves another grief I had about this. There 
is currently no easy way to tell if your parameter file is being used, 
or if the server is failing to read it and is falling back to the 
hard-coded parameters. With a GUC, if the GUC is set it's a good 
indication that the admin expects the file to really exist and to 
contain valid parameters. So if the GUC is set, we should throw an error 
if it cannot be used. And if it's not set, we use the built-in defaults.


I rebased the patch, did some other clean up of error reporting, and 
added a GUC along those lines, as well as docs. How does this look?


It's late in the release cycle, but it would be nice to sneak this into 
v10. Using weak 1024 bit DH parameters is arguably a security issue; it 
was originally reported as such. There's a work-around for older 
versions: generate custom 2048 bit parameters and place them in a file 
called "dh1024.pem", but that's completely undocumented.


Thoughts? Objections to committing this now, instead of waiting for v11?

- Heikki

>From 481e019441dadc0826e6e9b7d4a56c49e63c654b Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Thu, 13 Jul 2017 18:29:48 +0300
Subject: [PATCH 1/1] Always use 2048 bit DH parameters for OpenSSL ephemeral
 DH ciphers.

1024 bits is considered weak these days, but OpenSSL always passes 1024 as
the key length to the tmp_dh callback. All the code to handle other key
lengths is, in fact, dead.

To remedy those issues:

* Only include hard-coded 2048-bit parameters.
* Set the parameters directly with SSL_CTX_set_tmp_dh(), without the
  callback
* The name of the file containing the DH parameters is now a GUC. This
  replaces the old hardcoded "dh1024.pem" filename. (The files for other
  key lengths, dh512.pem, dh2048.pem, etc. were never actually used.)

Per report by Nicolas Guini

Discussion: https://www.postgresql.org/message-id/camxbouyjooautvozn6ofzym828anrdjuccotccquxjws-l2...@mail.gmail.com
---
 doc/src/sgml/config.sgml  |  23 +++
 src/backend/libpq/be-secure-openssl.c | 265 +-
 src/backend/libpq/be-secure.c |   1 +
 src/backend/utils/misc/guc.c  |  11 ++
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/include/libpq/libpq.h |   1 +
 6 files changed, 132 insertions(+), 170 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 80d1679b14..6c1452a9bc 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1203,6 +1203,29 @@ include_dir 'conf.d'
   
  
 
+ 
+  ssl_dh_params_file (string)
+  
+   ssl_dh_params_file configuration parameter
+  
+  
+  
+   
+Specifies the name of the file containing Diffie-Hellman parameters used for
+so-called Perfect Forward Secrecy SSL ciphers. The default is empty, in
+which case the compiled-in default DH parameters are used. Using custom DH
+parameters reduces the exposure if an attacker manages to crack the
+well-known compiled-in DH parameters. You can create your own DH parameters
+file with the command openssl dhparam -out dhparams.pem 2048.
+   
+
+   
+This parameter can only be set in the postgresql.conf
+file or on the server command line.
+   
+  
+ 
+
  
   krb_server_keyfile (string)
   
diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c
index 67145e9412..ed1779d6b6 100644
--- a/src/backend/libpq/be-secure-openssl.c
+++ b/src/backend/libpq/be-secure-openssl.c

Re: [HACKERS] Race condition in GetOldestActiveTransactionId()

2017-07-13 Thread Heikki Linnakangas

On 08/22/2016 01:46 PM, Heikki Linnakangas wrote:

While hacking on the CSN patch, I spotted a race condition between
GetOldestActiveTransactionId() and GetNewTransactionId().
GetOldestActiveTransactionId() calculates the oldest XID that's still
running, by doing:

1. Read nextXid, without a lock. This is the upper bound, if there are
no active XIDs in the proc array.
2. Loop through the proc array, making note of the oldest XID.

While GetNewTransactionId() does:

1. Read and Increment nextXid
2. Set MyPgXact->xid.

It seems possible that if you call GetNewTransactionId() concurrently
with GetOldestActiveTransactionId(), GetOldestActiveTransactionId() sees
the new nextXid value that the concurrent GetNewTransactionId() set, but
doesn't see the old XID in the proc array. It will return a value that
doesn't cover the old XID, i.e. it won't consider the just-assigned XID
as in-progress.

Am I missing something? Commit c6d76d7c added a comment to
GetOldestActiveTransactionId() explaining why it's not necessary to
acquire XidGenLock there, but I think it missed the above race condition.

GetOldestActiveTransactionId() is not performance-critical, it's only
called when performing a checkpoint, so I think we should just bite the
bullet and grab the lock. Per attached patch.


I did some testing, and was able to indeed construct a case where 
oldestActiveXid was off-by-one in an online checkpoint record. However, 
looking at how it's used, I think it happened to not have any 
user-visible effect. The oldestActiveXid value of an online checkpoint 
is only used to determine the point where pg_subtrans is initialized, 
and the XID missed in the computation could not be a subtransaction.


Nevertheless, it's clearly a bug and the fix is simple, so I committed a 
fix.


- Heikki





Re: [HACKERS] Typo in backend/storage/ipc/standby.c

2017-07-12 Thread Heikki Linnakangas

On 07/11/2017 10:34 AM, Kyotaro HORIGUCHI wrote:

Hello.

I noticed that a comment above StandbyAcquireAccessExclusiveLock
in backend/storage/ipc/standby.c using wrong names of a variable
and a type.

The attached patch fixes it. The same mistake is found in older
versions back to 9.0.

fix_typo_of_standby_c_10_master.patch is for 10 and master and
fix_typo_of_standby_c_96_and_before.patch for 9.6 and before.


Applied, thanks!

- Heikki





Re: [HACKERS] Minor style cleanup in tab-complete.c

2017-07-12 Thread Heikki Linnakangas

On 07/12/2017 07:39 AM, Michael Paquier wrote:

On Wed, Jul 12, 2017 at 11:19 AM, Thomas Munro
 wrote:

From the triviality department: I noticed some branches in
tab-complete.c's gargantuan if statement, mostly brand new, that break
from the established brace style.  Should we fix that like this?


For consistency I think it does.


Agreed. Committed, thanks.

- Heikki





[HACKERS] Re: [COMMITTERS] pgsql: Allow multiple hostaddrs to go with multiple hostnames.

2017-07-10 Thread Heikki Linnakangas

On 07/10/2017 01:27 PM, Masahiko Sawada wrote:

This commit seems be cause of the documentation compilation error. I
think  is missing.

...

Attached small patch fixes this.


Thanks, committed!

Strangely, it worked on my system, despite that clear mistake. Looks 
like the 'osx' tool is more strict than what I had installed. I have now 
also installed opensp, so won't make this mistake again.


- Heikki





Re: [HACKERS] List of hostaddrs not supported

2017-07-10 Thread Heikki Linnakangas

On 07/10/2017 01:47 PM, Arthur Zakirov wrote:

Hello,

2017-07-10 12:30 GMT+03:00 Heikki Linnakangas <hlinn...@iki.fi>:



I just remembered that this was still pending. I made the documentation
changes, and committed this patch now.

We're uncomfortably close to wrapping the next beta, later today, but I
think it's better to get this into the hands of people testing this, rather
than wait for the next beta. I think the risk of breaking something that
used to work is small.


I get this warning during compilation using gcc 7.1.1 20170621:


fe-connect.c:1100:61: warning: comparison between pointer and zero

character constant [-Wpointer-compare]

conn->connhost[i].host != NULL && conn->connhost[i].host != '\0')


Thanks, fixed! That check for empty hostname was indeed wrong.

- Heikki





Re: [HACKERS] List of hostaddrs not supported

2017-07-10 Thread Heikki Linnakangas

On 06/09/2017 04:26 PM, Robert Haas wrote:

On Fri, Jun 9, 2017 at 6:36 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

Right. I think it's a usability fail as it is; it certainly fooled me. We
could make the error messages and documentation more clear. But even better
to allow multiple host addresses, so that it works as you'd expect.


Sure, I don't have a problem with that.  I guess part of the point of
beta releases is to correct things that don't turn out to be as smart
as we thought they were, and this seems to be an example of that.


I just remembered that this was still pending. I made the documentation 
changes, and committed this patch now.


We're uncomfortably close to wrapping the next beta, later today, but I 
think it's better to get this into the hands of people testing this, 
rather than wait for the next beta. I think the risk of breaking 
something that used to work is small.


- Heikki





Re: [HACKERS] AdvanceXLInsertBuffer vs. WAL segment compressibility

2017-07-06 Thread Heikki Linnakangas

On 07/03/2017 06:30 PM, Chapman Flack wrote:

On 07/03/2017 09:39 AM, Heikki Linnakangas wrote:


Hmm. That's not the problem, though. Imagine that instead of the loop
above, you do just:

WALInsertLockUpdateInsertingAt(CurrPos);
AdvanceXLInsertBuffer(EndPos, false);

AdvanceXLInsertBuffer() will call XLogWrite(), to flush out any pages
before EndPos, to make room in the wal_buffers for the new pages. Before
doing that, it will call WaitXLogInsertionsToFinish()


Although it's moot in the straightforward approach of re-zeroing in
the loop, it would still help my understanding of the system to know
if there is some subtlety that would have broken what I proposed
earlier, which was an extra flag to AdvanceXLInsertBuffer() that
would tell it not only to skip initializing headers, but also to
skip the WaitXLogInsertionsToFinish() check ... because I have
the entire region reserved and I hold all the writer slots
at that moment, it seems safe to assure AdvanceXLInsertBuffer()
that there are no outstanding writes to wait for.


Yeah, I suppose that would work, too.

- Heikki





Re: [HACKERS] AdvanceXLInsertBuffer vs. WAL segment compressibility

2017-07-03 Thread Heikki Linnakangas

On 06/26/2017 04:20 AM, Chapman Flack wrote:

I notice CopyXLogRecordToWAL contains this loop (in the case where
the record being copied is a switch):

while (CurrPos < EndPos)
{
/* initialize the next page (if not initialized already) */
WALInsertLockUpdateInsertingAt(CurrPos);
AdvanceXLInsertBuffer(CurrPos, false);
CurrPos += XLOG_BLCKSZ;
}

in which it calls, one page at a time, AdvanceXLInsertBuffer, which contains
its own loop able to do a sequence of pages. A comment explains why:

/*
 * We do this one page at a time, to make sure we don't deadlock
 * against ourselves if wal_buffers < XLOG_SEG_SIZE.
 */

I want to make sure I understand what the deadlock potential is
in this case. AdvanceXLInsertBuffer will call WaitXLogInsertionsToFinish
before writing any dirty buffer, and we do hold insertion slot locks
(all of 'em, in the case of a log switch, because that makes
XlogInsertRecord call WALInsertLockAcquireExclusive instead of just
WALInsertLockAcquire for other record types).

Does not the fact we hold all the insertion slots exclude the possibility
that any dirty buffer (preceding the one we're touching) needs to be checked
for in-flight insertions?


Hmm. That's not the problem, though. Imagine that instead of the loop 
above, you do just:


WALInsertLockUpdateInsertingAt(CurrPos);
AdvanceXLInsertBuffer(EndPos, false);

AdvanceXLInsertBuffer() will call XLogWrite(), to flush out any pages 
before EndPos, to make room in the wal_buffers for the new pages. Before 
doing that, it will call WaitXLogInsertionsToFinish() to wait for any 
insertions to those pages to be completed. But the backend itself is 
advertising the insertion position CurrPos, and it will therefore wait 
for itself, forever.



I've been thinking along the lines of another parameter to
AdvanceXLInsertBuffer to indicate when the caller is exactly this loop
filling out the tail after a log switch (originally, to avoid filling
in page headers). It now seems to me that, if AdvanceXLInsertBuffer
has that information, it could also be safe for it to skip the
WaitXLogInsertionsToFinish in that case. Would that eliminate the
deadlock potential, and allow the loop in CopyXLogRecordToWAL to be
replaced with a single call to AdvanceXLInsertBuffer and a single
WALInsertLockUpdateInsertingAt ?

Or have I overlooked some other subtlety?


The most straightforward solution would be to just clear each page with 
memset() in the loop. It's a bit wasteful to clear the page again, just 
after AdvanceXLInsertBuffer() has initialized it, but this isn't 
performance-critical.
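
In xlog.c context, that straightforward approach might look roughly like this (a sketch only, not the committed fix; it assumes the re-zeroing must leave intact the page header that AdvanceXLInsertBuffer() just wrote):

```c
/*
 * Sketch only (xlog.c context): after writing an XLOG switch record,
 * fill the rest of the segment with zeros so that it compresses well.
 * Each page keeps the header that AdvanceXLInsertBuffer() initialized;
 * only the data portion after the short page header is re-zeroed.
 */
char	   *currpos;

while (CurrPos < EndPos)
{
	/* initialize the next page (if not initialized already) */
	WALInsertLockUpdateInsertingAt(CurrPos);
	AdvanceXLInsertBuffer(CurrPos, false);

	/* re-zero the page contents after the page header */
	currpos = GetXLogBuffer(CurrPos);
	MemSet(currpos + SizeOfXLogShortPHD, 0, XLOG_BLCKSZ - SizeOfXLogShortPHD);

	CurrPos += XLOG_BLCKSZ;
}
```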


- Heikki





Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-07-03 Thread Heikki Linnakangas

On 05/24/2017 05:29 PM, Michael Paquier wrote:

Attached is an updated patch.


Thanks, I've pushed the backend read part of this patch. That's enough 
to fix the original complaint with password authentication. I think the 
rest was a bit dubious, and I'm hesitant to commit that (or at least to 
backport):


* Backend write: you wouldn't really expect a client to disconnect, 
while the backend is trying to send something to it. You'll get an error 
in the non-SSL case too, although I guess the error message would be 
different.


* Frontend read: pqReadData has this, after calling pqsecure_read:


/*
 * A return value of 0 could mean just that no data is now available, or
 * it could mean EOF --- that is, the server has closed the connection.
 * Since we have the socket in nonblock mode, the only way to tell the
 * difference is to see if select() is saying that the file is ready.
 * Grumble.  Fortunately, we don't expect this path to be taken much,
 * since in normal practice we should not be trying to read data unless
 * the file selected for reading already.
 *
 * In SSL mode it's even worse: SSL_read() could say WANT_READ and then
 * data could arrive before we make the pqReadReady() test, but the second
 * SSL_read() could still say WANT_READ because the data received was not
 * a complete SSL record.  So we must play dumb and assume there is more
 * data, relying on the SSL layer to detect true EOF.
 */

#ifdef USE_SSL
if (conn->ssl_in_use)
return 0;
#endif


* Frontend write: Same as in the backend, I think having the server 
disconnect while you're trying to send to it is not expected. Not sure 
if the error message is here again different, but seems best to just 
leave it alone.


- Heikki




Re: [HACKERS] gen_random_uuid security not explicit in documentation

2017-06-29 Thread Heikki Linnakangas


On 30 June 2017 06:45:04 EEST, Noah Misch  wrote:
> This PostgreSQL 10 open item is past due for your status update.
> Kindly send a status update within 24 hours, and include a date for
> your subsequent status update.

I'll fix this some time next week. (I'm on vacation right now)

- Heikki





Re: [HACKERS] Comment typo in execMain.c

2017-06-21 Thread Heikki Linnakangas

On 06/21/2017 11:35 AM, Etsuro Fujita wrote:

Here is a patch to fix a typo in a comment in ExecWithCheckOptions():
s/as as/as/.


Fixed, thanks!

- Heikki





Re: [HACKERS] AdvanceXLInsertBuffer vs. WAL segment compressibility

2017-06-21 Thread Heikki Linnakangas

(I'm cleaning up my inbox, hence the delayed reply)

On 08/02/2016 10:51 PM, Robert Haas wrote:

On Tue, Aug 2, 2016 at 2:33 PM, Bruce Momjian  wrote:

On Tue, Jul 26, 2016 at 05:42:43PM -0400, Chapman Flack wrote:

Even so, I'd be curious whether it would break anything to have
xlp_pageaddr simply set to InvalidXLogRecPtr in the dummy zero
pages written to fill out a segment. At least until it's felt
that archive_timeout has been so decidedly obsoleted by streaming
replication that it is removed, and the log-tail zeroing code
with it.

That at least would eliminate the risk of anyone else repeating
my astonishment. :)  I had read that 9.4 added built-in log-zeroing
code, and my first reaction was "cool! that may make the compression
technique we're using unnecessary, but certainly can't make it worse"
only to discover that it did, by ~ 300x, becoming now 3x *worse* than
plain gzip, which itself is ~ 100x worse than what we had.


My guess is that the bytes are there to detect problems where a 512-byte
disk sector is zeroed by a disk failure.  I don't see us changing that
for the use-case you have described.


Is there actually any code that makes such a check?

I'm inclined to doubt that was the motivation, though admittedly we're
both speculating about the contents of Heikki's brain, a tricky
proposition on a good day.


Given that we used to just leave them as garbage, it seems pretty safe 
to zero them out now.


It's kind of nice that all the XLOG pages have valid page headers. One 
way to think of the WAL switch record is that it's a very large WAL 
record that just happens to consume the rest of the WAL segment. Except 
that it's not actually represented like that; the xl_tot_len field of an 
XLOG switch record does not include the zeroed out portion. Instead, 
there's special handling in the reader code, that skips to the end of 
the segment when it sees a switch record. So that point is moot.


When I wrote that code, I don't remember if I realized that we're 
initializing the page headers, or if I thought that it's good enough 
even if we do. I guess I didn't realize it, because a comment would've 
been in order if it was intentional.


So +1 on fixing that, a patch would be welcome. In the meanwhile, have 
you tried using a different compression program? Something else than 
gzip might do a better job at the almost zero pages.


- Heikki





Re: [HACKERS] GSoC 2017 : Patch for predicate locking in Gist index

2017-06-21 Thread Heikki Linnakangas

On 06/21/2017 10:41 AM, Heikki Linnakangas wrote:

On 06/16/2017 01:24 PM, Shubham Barai wrote:

@@ -497,6 +499,13 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 	for (ptr = dist->next; ptr; ptr = ptr->next)
 		UnlockReleaseBuffer(ptr->buffer);
 	}
+
+	for (ptr = dist; ptr; ptr = ptr->next)
+		PredicateLockPageSplit(rel,
+							   BufferGetBlockNumber(buffer),
+							   BufferGetBlockNumber(ptr->buffer));
+
+


I think this new code needs to go before the UnlockReleaseBuffer() calls
above. Calling BufferGetBlockNumber() on an already-released buffer is
not cool.


.. and that's exactly what you fixed in your updated patch. Sorry for 
the noise :-)


- Heikki





Re: [HACKERS] GSoC 2017 : Patch for predicate locking in Gist index

2017-06-21 Thread Heikki Linnakangas

On 06/16/2017 01:24 PM, Shubham Barai wrote:

@@ -497,6 +499,13 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
 	for (ptr = dist->next; ptr; ptr = ptr->next)
 		UnlockReleaseBuffer(ptr->buffer);
 	}
+
+	for (ptr = dist; ptr; ptr = ptr->next)
+		PredicateLockPageSplit(rel,
+							   BufferGetBlockNumber(buffer),
+							   BufferGetBlockNumber(ptr->buffer));
+
+


I think this new code needs to go before the UnlockReleaseBuffer() calls 
above. Calling BufferGetBlockNumber() on an already-released buffer is 
not cool.


- Heikki





Re: [HACKERS] [GSOC][weekly report 3] Eliminate O(N^2) scaling from rw-conflict tracking in serializable transactions

2017-06-20 Thread Heikki Linnakangas

On 06/20/2017 06:51 AM, Mengxing Liu wrote:

But in my benchmark, the throughput decrease by 15% after the modification.
Can you help me do a quick review to find if there is anything wrong?
I also attached the flame graph before/after the modification for reference.


Hmm. The hash table ought to speed up the RWConflictExists() function 
right? Where in the flame graph is RWConflictExists()? If it only 
accounts for a small amount of the overall runtime, even drastic speedup 
there won't make much difference to the total.



Here is my questions:
1. Is there any function in HTAB like “clear” that can make itself empty 
quickly?
In this patch, when releasing a transaction object, I need to scan the hash 
table and
use “hash_search” to remove entries one by one. It would make releasing 
operation slower.


Nope, there is no such function, I'm afraid.


In a previous email, I reported that many backends wait for the lock 
“SerializableFinishedListLock”;
If we don't implement functions like “ReleaseOneSerializableXact” quickly, they 
will be the bottleneck.


Yeah, that's probably what's causing the decrease in throughput you're 
seeing.


You might need to also keep the linked lists, and use the hash table to 
only look up particular items in the linked list faster.


I was surprised to see that you're creating a lot of smallish hash 
tables, three for each PredXact entry. I would've expected all the 
PredXact entries to share the same hash tables, i.e. have only three 
hash tables in total. That would be more flexible in allocating 
resources among entries. (It won't help with the quick-release, though)



2. Is the HTAB thread-safe? I would focus on removing some unnecessary locks if 
possible.


Nope, you need to hold a lock while searching/manipulating a HTAB hash 
table. If the hash table gets big and you start to see lock contention, 
you can partition it so that each operation only needs to lock the one 
partition covering the search key.
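
A partitioned shared-memory hash table could be set up along these lines (a sketch only; the table name, sizes, partition count, and the PartitionLocks array are made up for illustration):

```c
/* Sketch: partitioned shared-memory hash table for RW-conflict lookups. */
HASHCTL		info;
HTAB	   *conflictHash;

MemSet(&info, 0, sizeof(info));
info.keysize = sizeof(SERIALIZABLEXACT *);
info.entrysize = sizeof(RWConflictData);
info.num_partitions = 16;			/* must be a power of 2 */

conflictHash = ShmemInitHash("RWConflict hash",
							 1024,	/* init_size */
							 1024,	/* max_size */
							 &info,
							 HASH_ELEM | HASH_BLOBS | HASH_PARTITION);

/* A lookup then locks only the partition that covers the key: */
uint32		hashcode = get_hash_value(conflictHash, &key);
LWLock	   *lock = &PartitionLocks[hashcode % 16];

LWLockAcquire(lock, LW_SHARED);
entry = hash_search_with_hash_value(conflictHash, &key, hashcode,
									HASH_FIND, NULL);
LWLockRelease(lock);
```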


- Heikki





Re: [HACKERS] Phantom segment upon promotion causing troubles.

2017-06-20 Thread Heikki Linnakangas

On 06/19/2017 10:30 AM, Andres Freund wrote:

Greg Burek from Heroku (CCed) reported a weird issue on IM, that was
weird enough to be interesting.  What he'd observed was that he promoted
some PITR standby, and early clones of that node work, but later clones
did not, failing to read some segment.

The problems turns out to be the following:  [explanation]


Good detective work!


The minimal fix here is presumably not to use XLByteToPrevSeg() in
RemoveXlogFile(), but XLByteToSeg(). I don't quite see what purpose it
serves here - I don't think it's ever needed.


Agreed, I don't see a reason for it either.


There seems to be a larger question here though: Why does
XLogFileReadAnyTLI() probe all timelines even if they weren't a parent
at that period?  That seems like a bad idea, especially in more
complicated scenarios where some precursor timeline might live for
longer than it was a parent?  ISTM XLogFileReadAnyTLI() should check
which timeline a segment ought to come from, based on the history?


Yeah. I've had that thought for years as well, but there has never been 
any pressing reason to bite the bullet and rewrite it, so I haven't 
gotten around to it.


- Heikki





Re: [HACKERS] [GSOC] [Weekly report 2] Eliminate O(N^2) scaling from rw-conflict tracking in serializable transactions

2017-06-15 Thread Heikki Linnakangas

On 06/15/2017 08:51 AM, Mengxing Liu wrote:

My design is as follow:

For hash table, key is the pointer of SerializableXact; Value is the 
RWConflictData object.
Hashcode is generated based on the SerializableXact pointer.
So, given a SerializableXact, we can quickly find if it is conflict with 
another SerializableXact.

Every SerializableXact has two tables: inConflictsTab and outConflictsTab.
They are allocated and initialized when creating PredXactList (in the function 
InitPredicateLocks).
When a SerializableXact object is released, it will release all its RWConflict 
objects, the hash tables are empty again.
Thus They can be reused directly after releasing.

NOTE: I stored RWConflictData in hash tables, instead of RWConflict object 
allocated by RWConflictPool.
 After I remove other linked lists, the RWConflictPool can be omitted.


Sounds good!


My code is on the :
https://github.com/liumx10/postgresql/commit/3fd9a7488de5ae19ce2ce19eae5f303153a079ff


Once you've ironed out the obvious bugs, make sure to also post it as a 
patch to this mailing list. For the sake of the archives, and to make it 
clear that you're submitting this for inclusion in PostgreSQL, under the 
PostgreSQL license.


Couple of little things caught my eye at a quick glance:


 * Test whether a hash table is empty.
+ * I didn't find any function in dynamic hash supports the requirement.


You can do: hash_get_num_entries(hashp) == 0


sprintf(name, "PredXact inConflictsTab %d", i);


Better to use snprintf() instead. sprintf() is safe here, but many 
static analysis tools will complain on any sight of plain sprintf(), so 
better to just never use it.


- Heikki





Re: [HACKERS] List of hostaddrs not supported

2017-06-09 Thread Heikki Linnakangas

On 06/09/2017 05:47 PM, Jeff Janes wrote:

Your commit to fix this part, 76b11e8a43eca4612d, is giving me compiler
warnings:

fe-connect.c: In function 'connectDBStart':
fe-connect.c:1625: warning: 'ret' may be used uninitialized in this function

gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)


Oh. Apparently that version of gcc doesn't take it for granted that the 
switch-statement covers all the possible cases. I've added a dummy 
initialization, to silence it. Thanks, and let me know if it didn't help.


- Heikki





Re: [HACKERS] List of hostaddrs not supported

2017-06-09 Thread Heikki Linnakangas

On 06/08/2017 06:18 PM, Robert Haas wrote:

On Thu, Jun 8, 2017 at 10:50 AM, Tom Lane  wrote:

Robert Haas  writes:

It doesn't seem like a problem to me if somebody else wants to extend
it to hostaddr, though.  Whether that change belongs in v10 or v11 is
debatable.  I would object to adding this as an open item with me as
the owner because doesn't seem to me to be a must-fix issue, but I
don't mind someone else doing the work.


If you want to define multiple-hostaddrs as a future feature, that
seems fine, but I think Heikki is describing actual bugs.  The minimum
that I think needs to be done for v10 is to make libpq reject a hostaddr
string with the wrong number of entries (either different from the
host list, or different from 1).


Whatever you put in the hostaddr field - or any field other than host
and port - is one entry.  There is no notion of a list of entries in
any other field, and no attempt to split any other field on a comma or
any other symbol.  The fact that ::1,::1 looks to you like two entries
rather than a single malformed entry is just a misunderstanding on
your part, just like you'd be wrong if you thought that
password=foo,bar is a list of passwords rather than a password
containing a comma.

I think the argument is about whether I made the right decision when I
scoped the feature, not about whether there's a defect in the
implementation.


Right. I think it's a usability fail as it is; it certainly fooled me. 
We could make the error messages and documentation more clear. But even 
better to allow multiple host addresses, so that it works as you'd expect.


I understand the slippery-slope argument that you might also want to 
have different usernames etc. for different hosts, but it's confusing 
that specifying a hostaddr changes the way the host-argument is 
interpreted. In the worst case, if we let that stand, someone might 
actually start to depend on that behavior. The other options don't have 
that issue. And hostaddr is much more closely tied to specifying the 
target to connect to, like host and port are.


I propose the attached patch, to allow a comma-separated list of 
hostaddr's. It includes some refactoring of the code to parse 
comma-separated lists, as it was getting a bit repetitive. It also fixes 
a couple of minor issues in error messages, see commit message.


The patch doesn't include docs changes yet. I think we should add a new 
sub-section, at the same level as the "33.1.1.1. Keyword/Value 
Connection Strings" and "33.1.1.2. Connection URIs" sub-sections, to 
explain some of the details we've discussed here. Something like:


---
33.1.1.3 Specifying Multiple Hosts

It is possible to specify multiple hosts to connect to, so that they are 
tried in the order given. In the Keyword/Value syntax, the host, 
hostaddr, and port options accept a comma-separated list of values. The 
same number of elements should be given in each option, such that e.g. 
the first hostaddr corresponds to the first host name, the second 
hostaddr corresponds to the second host name, and so forth. As an 
exception, if only one port is specified, it applies to all the hosts.


In the connection URI syntax, you can list multiple host:port pairs 
separated by commas, in the host component of the URI.


A single hostname can also translate to multiple network addresses. A 
common example of this is that a host can have both an IPv4 and an IPv6 
address.


When multiple hosts are specified, or when a single hostname is 
translated to multiple addresses, all the hosts and addresses will be 
tried in order, until one succeeds. If none of the hosts can be reached, 
the connection fails. If a connection is established successfully, but 
authentication fails, the remaining hosts in the list are not tried.


If a password file is used, you can have different passwords for 
different hosts. All the other connection options are the same for every 
host; it is not possible to e.g. specify a different username for 
different hosts.

---
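
For example, with the proposed patch, connection strings like these would work (host names and addresses made up):

```
# Keyword/Value syntax: matching lists of hosts, addresses, and ports
psql "host=db1.example.com,db2.example.com hostaddr=10.0.0.1,10.0.0.2 port=5432,5433"

# A single port applies to all hosts
psql "host=db1.example.com,db2.example.com port=5432"

# URI syntax: host:port pairs in the host component
psql "postgresql://db1.example.com:5432,db2.example.com:5433/postgres"
```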

- Heikki



0001-Allow-multiple-hostaddrs-to-go-with-multiple-hostnam.patch
Description: invalid/octet-stream



Re: [HACKERS] List of hostaddrs not supported

2017-06-09 Thread Heikki Linnakangas

On 06/08/2017 06:39 PM, David G. Johnston wrote:

These are already failing so I'd agree that explicit rejection isn't
necessary - the question seems restricted to usability.  Though I suppose
we need to consider whether there is any problem with the current setup if
indeed our intended separator is also an allowable character - i.e., do we
want to future-proof the syntax by requiring quotes now?


Hmm, there is one problem with our current use of comma as a separator: 
you cannot use a Unix-domain socket directory that has a comma in the 
name, because it's interpreted as multiple hostnames. E.g. this doesn't 
work:


psql "host=/tmp/dir,with,commas"

For hostnames, ports, and network addresses (hostaddr), a comma is not a 
problem, as it's not a valid character in any of those.


I don't know if that was considered when this patch was developed. I 
couldn't find a mention of this in the archives. But in any case, that's 
quite orthogonal to the rest of this thread.


- Heikki





Re: [HACKERS] How to refer to resource files from UDFs written in C

2017-06-09 Thread Heikki Linnakangas

On 06/09/2017 08:56 AM, Supun Nakandala wrote:

Hi hackers,

I am trying to extend PostgreSQL by adding UDT and UDF for a custom use
case and I am using C language extensions to do that.

However, I have a requirement of reading a text file from one of the C
functions. The compiled *.so files are placed in the "pg_config
--pkglibdir" directory and tried copying my text files there but it didn't
work. I found that, when these shared libs are loaded they are run from a
different working directory. In this case, what is the best way to refer to
my text files from the C code other than giving the absolute path which can
change from system to system.


All backend processes run with the data directory as the current 
directory. So you can put the files into the data directory, probably 
best to have a subdirectory there to avoid confusing them with 
PostgreSQL's own files. Or you could have a config option, to set an 
absolute path.
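
For instance, a C-language function could open a file with a path relative to the data directory, along these lines (a sketch; the subdirectory and file name are made up, and in practice you'd want error handling and a real text-building loop):

```c
#include "postgres.h"
#include "fmgr.h"
#include "storage/fd.h"
#include "utils/builtins.h"

PG_FUNCTION_INFO_V1(read_my_resource);

Datum
read_my_resource(PG_FUNCTION_ARGS)
{
	FILE	   *f;
	char		buf[1024];

	/* relative to the data directory, i.e. $PGDATA/my_extension/data.txt */
	f = AllocateFile("my_extension/data.txt", "r");
	if (f == NULL)
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not open file \"%s\": %m",
						"my_extension/data.txt")));

	if (fgets(buf, sizeof(buf), f) == NULL)
		buf[0] = '\0';
	FreeFile(f);

	PG_RETURN_TEXT_P(cstring_to_text(buf));
}
```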


- Heikki





Re: [HACKERS] PG10 transition tables, wCTEs and multiple operations on the same table

2017-06-08 Thread Heikki Linnakangas

On 06/08/2017 08:36 PM, Robert Haas wrote:

I freely admit I encouraged you to commit this.  I did not imagine
that would be followed immediately by abdicating all responsibility
for it.  My mistake, I guess.


Robert, chill out. Kevin offered to revert. It's perhaps not the best 
way forward - I'm not familiar with the details here - but it's 
certainly one way to take responsibility.


- Heikki





[HACKERS] List of hostaddrs not supported

2017-06-08 Thread Heikki Linnakangas
While testing libpq and GSS the other day, I was surprised by the 
behavior of the host and hostaddr libpq options, if you specify a list 
of hostnames.


I did this, and it took me quite a while to figure out what was 
going on:



$ psql "dbname=postgres hostaddr=::1 host=localhost,localhost user=krbtestuser" -c "SELECT 'hello'"
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may provide more information
GSSAPI continuation error: Server postgres/localhost,localhost@PG.EXAMPLE not found in Kerberos database


That was a pilot error; I specified a list of hostnames, but only one 
hostaddr. But I would've expected to get a more helpful error, pointing 
that out.


Some thoughts on this:

1. You cannot actually specify a list of hostaddrs. Trying to do so 
gives error:



psql: could not translate host name "::1,::1" to address: Name or service not known


That error message is a bit inaccurate, in that it wasn't really a "host 
name" that it tried to translate, but a raw address in string format. 
That's even more confusing if you make the mistake that you specify 
"hostaddr=localhost":



psql: could not translate host name "localhost" to address: Name or service not known


But in the first case, could we detect that there is a comma in the 
string, and give an error along the lines of "list of hostaddr's not 
supported". (Or better yet, support multiple hostaddrs)


2. The documentation is not very clear on the fact that multiple 
hostaddr's is not supported. Nor what happens if you specify a single 
hostaddr, but a list of hostnames. (The list of hostnames gets treated 
as a single hostname, that's what.)



So, this is all quite confusing. I think we should support a list of 
hostaddrs, to go with the list of hostnames. It seems like a strange 
omission. Looking at the archives, it was mentioned a few times when 
this was developed and reviewed, latest Takayuki Tsunakawa asked [1] the 
same question, but it was then forgotten about.


[1] 
https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1F63FB5E%40G01JPEXMBYT05


- Heikki





Re: [HACKERS] Server ignores contents of SASLInitialResponse

2017-06-07 Thread Heikki Linnakangas

On 06/06/2017 06:09 AM, Michael Paquier wrote:

On Thu, Jun 1, 2017 at 4:58 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

To fix, I suppose we can do what you did for SASL in your patch, and move
the cleanup of conn->gctx from closePGconn to pgDropConnection. And I
presume we need to do the same for the SSPI state too, but I don't have a
Windows set up to test that at the moment.


SSPI does not complain with sslmode=prefer as each time
pg_SSPI_startup() is called conn->sspictx is enforced to NULL. This
looks wrong to me by the way as pg_SSPI_startup() is invoked only once
per authentication, and it leaks memory this way. That's also
inconsistent with SASL and GSS. At the same time this inconsistency is
not causing actual problems except a leak with SSPI in libpq, so not
doing anything except on HEAD looks fine to me.


Ok, I committed your patch, with some minor changes. I added a line to 
also clear "conn->usesspi". Without that, if the server on first attempt 
asked for SSPI authentication, but GSS on the second attempt, we would 
incorrectly try to continue SSPI authentication during the second 
attempt. Also, I kept the existing code to discard the input and output 
data together, and added the new code after that, instead of in the 
middle. And added some newlines to pqDropConnection for beauty.


BTW, multiple connection attempts if "host" is a list of hostnames, 
which is now possible in version 10, also had the same issue. On master, 
that was the easiest way to reproduce this.


I decided to backpatch this down to 9.3, after all. It is clearly a bug, 
although unlikely to be hit in typical configurations. One configuration 
where this can be reproduced, is if you have separate "hostnossl" and 
"hostssl" lines in pg_hba.conf, for Kerberos authentication, but with 
different options. If the options are such that the first 
authentication, with SSL, fails, but the second one should succeed, 
before this fix the second attempt would nevertheless fail with the 
"duplicate authentication request".


The code in 9.2 was sufficiently different that I didn't backport it 
there, out of conservatism (ok, laziness).


- Heikki





Re: [HACKERS] Challenges preventing us moving to 64 bit transaction id (XID)?

2017-06-07 Thread Heikki Linnakangas

On 06/06/2017 07:24 AM, Ashutosh Bapat wrote:

On Tue, Jun 6, 2017 at 9:48 AM, Craig Ringer  wrote:

On 6 June 2017 at 12:13, Ashutosh Bapat  wrote:


What happens when the epoch is so low that the rest of the XID does
not fit in 32bits of tuple header? Or such a case should never arise?


Storing an epoch implies that rows can't have (xmin,xmax) different by
more than one epoch. So if you're updating/deleting an extremely old
tuple you'll presumably have to set xmin to FrozenTransactionId if it
isn't already, so you can set a new epoch and xmax.


If the page has multiple such tuples, updating one tuple will mean
updating headers of other tuples as well? This means that those tuples
need to be locked for concurrent scans? May be not, since such tuples
will be anyway visible to any concurrent scans and updating xmin/xmax
doesn't change the visibility. But we might have to prevent multiple
updates to the xmin/xmax because of concurrent updates on the same
page.


"Store the epoch in the page header" is actually a slightly 
simpler-to-visualize, but incorrect, version of what we actually need to 
do. If you only store the epoch, then all the XIDs on a page need to 
belong to the same epoch, which causes trouble when the current epoch 
changes. Just after the epoch changes, you cannot necessarily freeze all 
the tuples from the previous epoch, because they would not yet be 
visible to everyone.


The full picture is that we need to store one 64-bit XID "base" value in 
the page header, and all the xmin/xmax values in the tuple headers are 
offsets relative to that base. With that, you effectively have 64-bit 
XIDs, as long as the *difference* between any two XIDs on a page is not 
greater than 2^32. That can be guaranteed, as long as we don't allow a 
transaction to be in-progress for more than 2^32 XIDs. That seems like a 
reasonable limitation.


But yes, when the "current XID - base XID in page header" becomes 
greater than 2^32, and you need to update a tuple on that page, you need 
to first freeze the page, update the base XID on the page header to a 
more recent value, and update the XID offsets on every tuple on the page 
accordingly. And to do that, you need to hold a lock on the page. If you 
don't move any tuples around at the same time, but just update the XID 
fields, an exclusive lock on the page is enough, i.e. you don't need to 
take a super-exclusive or vacuum lock. In any case, it happens so 
infrequently that it should not become a serious burden.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Challenges preventing us moving to 64 bit transaction id (XID)?

2017-06-05 Thread Heikki Linnakangas

On 06/05/2017 11:49 AM, Tianzhou Chen wrote:

Hi Pg Hackers,

XID wraparound seems to be quite a big concern and we introduce
changes like “adding another frozen bit to each page”
[http://rhaas.blogspot.com/2016/03/no-more-full-table-vacuums.html]
to tackle this. I am just wondering what challenges prevent
us from moving to 64-bit XIDs? This is the previous thread I found:
https://www.postgresql.org/message-id/CAEYLb_UfC%2BHZ4RAP7XuoFZr%2B2_ktQmS9xqcQgE-rNf5UCqEt5A%40mail.gmail.com
and the only answer there is:

“ The most obvious reason for not using 64-bit xid values is that
they require more storage than 32-bit values. There is a patch
floating around that makes it safe to not forcibly safety shutdown
the server where currently it is necessary, but it doesn't work by
making xids 64-bit.
"

I am personally not quite convinced that is the main reason, since I
feel that for databases hitting this issue, the schema is mostly
non-trivial, and 8 more bytes per tuple don't matter that much. Could
some Postgres experts share more insights about the challenges?


That quote is accurate. We don't want to just expand XIDs to 64 bits, 
because it would significantly bloat the tuple header. PostgreSQL's 
per-tuple overhead is already quite large, compared to many other systems.


The most promising approach to tackle this is to switch to 64-bit XIDs 
in in-memory structures, and add some kind of an extra epoch field to 
the page header. That would effectively give you 64-bit XIDs, but would 
only add a single field to each page, not to every tuple.


- Heikki




Re: [HACKERS] simplehash.h typo

2017-06-05 Thread Heikki Linnakangas

On 06/05/2017 11:26 AM, Andres Freund wrote:

On 2017-06-05 11:10:12 +0300, Heikki Linnakangas wrote:

Fixed, thanks. I also fixed and clarified some other comments in the file
that seemed wrong or confusing to me.


Thanks for looking - I don't see any commit though?


Pushed now. Note to self: remove --dry-run when you intend to commit for 
real :-).


- Heikki





Re: [HACKERS] Server ignores contents of SASLInitialResponse

2017-06-05 Thread Heikki Linnakangas

On 06/05/2017 09:34 AM, Michael Paquier wrote:

On Mon, Jun 5, 2017 at 7:49 AM, Noah Misch  wrote:

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2017-06-05 23:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


I am planning to look and comment about Heikki's patch tomorrow my
time. The GSS issue requires a back-patch and is actually linked to
what is discussed for SASL. The testing behind it also needs some care
and attention.


Thanks. I'll wait for Michael's update tomorrow, and will have a look at 
this after that. Expect a new status update by Wednesday.


- Heikki





Re: [HACKERS] simplehash.h typo

2017-06-05 Thread Heikki Linnakangas

On 05/28/2017 04:50 AM, Jeff Janes wrote:

/* round up size to the next power of 2, that's the bucketing works  */


That should probably be "that's the **way** bucketing works".  Or maybe it
is an idiom I don't grok.


Fixed, thanks. I also fixed and clarified some other comments in the 
file that seemed wrong or confusing to me.


- Heikki





Re: [HACKERS] Server ignores contents of SASLInitialResponse

2017-06-01 Thread Heikki Linnakangas

On 05/25/2017 06:36 PM, Michael Paquier wrote:

On Thu, May 25, 2017 at 10:52 AM, Michael Paquier
 wrote:

Actually, I don't think that we are completely done here. Using the
patch of upthread to enforce a failure on SASLInitialResponse, I see
that connecting without SSL causes the following error:
psql: FATAL:  password authentication failed for user "mpaquier"
But connecting with SSL returns that:
psql: duplicate SASL authentication request

I have not looked at that in details yet, but it seems to me that we
should not take pg_SASL_init() twice in the scram authentication code
path in libpq for a single attempt.


Gotcha. This happens because of sslmode=prefer, on which
pqDropConnection is used to clean up the connection state. So it seems
to me that the correct fix is to move the cleanup of sasl_state to
pqDropConnection() instead of closePGconn(). Once I do so the failures
are correct, showing to the user two FATAL errors because of the two
attempts:
psql: FATAL:  password authentication failed for user "mpaquier"
FATAL:  password authentication failed for user "mpaquier"


Hmm, the SASL state cleanup is done pretty much the same way that 
GSS/SSPI cleanup is. Don't we have a similar problem with GSS?


I tested that, and I got an error from glibc, complaining about a 
double-free:



*** Error in `/home/heikki/pgsql.master/bin/psql': double free or corruption 
(out): 0x56157d13be00 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bcb)[0x7f6a460b7bcb]
/lib/x86_64-linux-gnu/libc.so.6(+0x76f96)[0x7f6a460bdf96]
/lib/x86_64-linux-gnu/libc.so.6(+0x7778e)[0x7f6a460be78e]
/home/heikki/pgsql.master/lib/libpq.so.5(+0xa1dc)[0x7f6a46d651dc]
/home/heikki/pgsql.master/lib/libpq.so.5(+0xa4b3)[0x7f6a46d654b3]
/home/heikki/pgsql.master/lib/libpq.so.5(+0xacb9)[0x7f6a46d65cb9]
/home/heikki/pgsql.master/lib/libpq.so.5(PQconnectPoll+0x14e7)[0x7f6a46d6ae81]
/home/heikki/pgsql.master/lib/libpq.so.5(+0xe895)[0x7f6a46d69895]
/home/heikki/pgsql.master/lib/libpq.so.5(PQconnectdbParams+0x4f)[0x7f6a46d675b9]
/home/heikki/pgsql.master/bin/psql(+0x3daa9)[0x56157ccafaa9]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f6a460672b1]
/home/heikki/pgsql.master/bin/psql(+0x1163a)[0x56157cc8363a]


I bisected that; the culprit was commit 61bf96cab0, where I refactored 
the libpq authentication code in preparation for SCRAM. The logic around 
that free() was always a bit wonky, but the refactoring made it outright 
broken. Attached is a patch for that, see commit message for details. 
(Review of that would be welcome.)


So, after fixing that, back to the original question; don't we have a 
similar "duplicate authentication request" problem with GSS? Yes, turns 
out that we do, even on stable branches:


psql "sslmode=prefer dbname=postgres hostaddr=127.0.0.1 
krbsrvname=postgres  host=localhost user=krbtestuser"

psql: duplicate GSS authentication request

To fix, I suppose we can do what you did for SASL in your patch, and 
move the cleanup of conn->gctx from closePGconn to pqDropConnection. And 
I presume we need to do the same for the SSPI state too, but I don't 
have a Windows set up to test that at the moment.


- Heikki



0001-Fix-double-free-bug-in-GSS-authentication.patch



Re: [HACKERS] Server ignores contents of SASLInitialResponse

2017-05-25 Thread Heikki Linnakangas

On 05/24/2017 11:33 PM, Michael Paquier wrote:

I have noticed today that the server ignores completely the contents
of SASLInitialResponse. ... Attached is a patch to fix the problem.


Fixed, thanks!

- Heikki





Re: [HACKERS] Tightening isspace() tests where behavior should match SQL parser

2017-05-23 Thread Heikki Linnakangas

On 05/20/2017 01:48 PM, Tom Lane wrote:

Attached is a proposed patch.  I'm vacillating on whether to
back-patch this --- it will fix a reported bug, but it seems
at least somewhat conceivable that it will also break cases
that were working acceptably before.  Thoughts?


+1 for back-patching. If I understand correctly, it would change the 
behavior when you pass a string with non-ASCII leading or trailing 
whitespace characters to those functions. I suppose that can happen, but 
it's only pure luck if the current code happens to work in that case. 
And even if so, it's IMO still quite reasonable to change the behavior, 
on the grounds that the new behavior is what the core SQL lexer would do.


- Heikki





Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-23 Thread Heikki Linnakangas

On 05/22/2017 10:11 PM, Vaishnavi Prabakaran wrote:

On Mon, May 22, 2017 at 5:10 PM, Michael Paquier 
wrote:


If the protocol version is SSL
3.0 or TLS 1.0, this result code is returned only if a closure alert
has occurred in the protocol, i.e. if the connection has been closed
cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
necessarily indicate that the underlying transport has been closed.


I guess this error code exist even for SSL2 protocol, In that case, don't
we need to keep the current code for this error code?


If I understand correctly, with SSLv2, SSL_ERROR_ZERO_RETURN does mean 
that the underlying transport has been closed. Returning 0 seems 
appropriate in that case, too.


But the point is moot anyway, because PostgreSQL doesn't allow SSLv2 
anymore.


- Heikki





Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-23 Thread Heikki Linnakangas

On 05/22/2017 03:10 AM, Michael Paquier wrote:

Hi all,

When attempting to connect using password authentication through SSL,
the backend will complain in its log with the following entry before
calling sendAuthRequest(), which asks the client for a password:
LOG:  could not receive data from client: Connection reset by peer

After a short talk with Heikki, it seems that be_tls_read() complains
on SSL_ERROR_ZERO_RETURN, which is documented here:
https://wiki.openssl.org/index.php/Manual:SSL_get_error(3)
The TLS/SSL connection has been closed. If the protocol version is SSL
3.0 or TLS 1.0, this result code is returned only if a closure alert
has occurred in the protocol, i.e. if the connection has been closed
cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
necessarily indicate that the underlying transport has been closed.

As this is a clean shutdown of the SSL connection, shouldn't
be_tls_read() return 0 to the caller instead of -1? This would map
with what the non-SSL code path does for raw reads.


Yeah. The be_tls_read() change looks OK to me.

Can SSL_ERROR_ZERO_RETURN also happen in a write? I suppose it can, but 
returning 0 from secure_write() seems iffy. My man page for send(2) 
doesn't say that it would return 0 if the connection has been closed by 
the peer, so that would be different from the non-SSL codepath. Looking 
at the only caller to secure_write(), it does check for 0, and would 
treat it correctly as EOF, so it would work. But it makes me a bit 
nervous, a comment in secure_write() at least would be in order, to 
point out that it can return 0 to indicate EOF, unlike the raw write(2) 
or send(2) system functions.


In libpq, do we want to set the "SSL connection has been closed 
unexpectedly" message? In the non-SSL case, pqsecure_raw_read() doesn't 
set any error message on EOF. Also, even though the comment in 
pgtls_read() says "... we should not report it as a server crash", 
looking at pqReadData, ISTM that returning 0 will in fact lead to the 
"server closed the connection unexpectedly" message. And it seems like a 
good assumption anyway, that the server did in fact terminate 
abnormally, if that happens. We don't expect the server to disconnect 
the client without notice, even though it's not unusual for the client 
to disconnect without notifying the server.


- Heikki





Re: [HACKERS] Removal of plaintext password type references

2017-05-20 Thread Heikki Linnakangas

On 05/20/2017 05:41 AM, Tom Lane wrote:

Robert Haas  writes:

I guess it does seem likely that most users of the hook would need to
do the same, but it seems confusing to pass the same function both x
and f(x), so my vote is to not do that.


Right, that was my thinking. (Except that my vote is to nevertheless 
keep it unchanged.)



I guess what's in the back of my mind is that the password type might
someday not be just a function of the password, but require other
inputs.  That is, if we change the hook signature as proposed, then
the signature of get_password_type() also becomes part of that API.
If someday f(x) needs to become f(x,y), that becomes either more API
breakage for users of the hook, or no change at all because it's the
callers' problem.

Maybe there's no reason to believe that that will ever happen.


I don't see that happening. It's too convenient that a password verifier 
is self-identifying, i.e. that you can tell what kind of a verifier it 
is, just by looking at the value, without any out-of-band information.


Although when we talked about the representation of password verifiers 
in pg_authid.rolpassword, there was discussion on having a separate 
"password type" field. I was against it, but some thought it would be a 
good idea.



But I'm not disposed to spend
a lot of energy arguing about it, so if other people feel differently,
that's cool.


TBH, I'm not that hot about it either.  But I'm thinking this
is an API break we don't need.


I'm going to just remove this from the open items list. But if some 
other committer disagrees strongly and wants to commit this, I won't object.


- Heikki





Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-05-19 Thread Heikki Linnakangas

On 05/19/2017 06:48 PM, Tom Lane wrote:

Heikki Linnakangas <hlinn...@iki.fi> writes:

You can get a pretty good typedefs list just by looking for the pattern
"} ;".


That's going to catch a lot of things that are just variables, though.
It might be all right as long as there was manual filtering after it.


At a quick glance, there are only a couple of them. These two cases 
caught my eye. In twophase.c:


static struct xllist
{
	StateFileChunk *head;		/* first data block in the chain */
	StateFileChunk *tail;		/* last block in chain */
	uint32		num_chunks;
	uint32		bytes_free;		/* free bytes left in tail block */
	uint32		total_len;		/* total data bytes in chain */
}			records;

And this in informix.c:

static struct
{
	long		val;
	int			maxdigits;
	int			digits;
	int			remaining;
	char		sign;
	char	   *val_string;
}			value;

IMHO it would actually be an improvement if there was a space rather 
than a tab there. But I'm not sure what else it would mess up to 
consider those typedef names. And those are awfully generic names; 
wouldn't hurt to rename them, anyway.


- Heikki





Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-05-19 Thread Heikki Linnakangas

On 05/19/2017 06:05 PM, Stephen Frost wrote:

* Tom Lane (t...@sss.pgh.pa.us) wrote:

Robert Haas  writes:

On Thu, May 18, 2017 at 11:00 PM, Tom Lane  wrote:

The reason PGSSTrackLevel is "unrecognized" is that it's not in
typedefs.list, which is a deficiency in our typedef-collection
technology not in indent.  (I believe the problem is that there
are no variables declared with that typename, causing there to
not be any of the kind of symbol table entries we are looking for.)



This, however, doesn't sound so good.  Isn't there some way this can be fixed?


I'm intending to look into it, but I think it's mostly independent of
whether we replace pgindent itself.  The existing code has the same
problem, really.

One brute-force way we could deal with the problem is to have a "manual"
list of names to be treated as typedefs, in addition to whatever the
buildfarm produces.  I see no other way than that to get, for instance,
simplehash.h's SH_TYPE to be formatted as a typedef.  There are also
some typedefs that don't get formatted correctly because they are only
used for wonky options that no existing typedef-reporting buildfarm member
builds.  Manual addition might be the path of least resistance there too.

Now the other side of this coin is that, by definition, such typedefs
are not getting used in a huge number of places.  If we just had to
live with it, it might not be awful.


Dealing with the typedef lists in general is a bit of a pain to get
right, to make sure that new just-written code gets correctly indented.
Perhaps we could find a way to incorporate the buildfarm typedef lists
and a manual list and a locally generated/provided set in a simpler
fashion in general.


You can get a pretty good typedefs list just by looking for the pattern 
"} ;". Something like this:


grep -o -h -I --perl-regexp -r "}\W(\w+);" src/ contrib/ | perl -pe 's/}\W(.*);/\1/' | sort | uniq > typedefs.list


It won't cover system headers and non-struct typedefs, but it catches 
those simplehash typedefs and PGSSTrackLevel. Maybe we should run that 
and merge the result with the typedef lists we collect in the buildfarm.


- Heikki





Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current

2017-05-19 Thread Heikki Linnakangas

On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote:

Hi,

I tried to build PostgreSQL 10beta1 on OpenBSD -current and i have this error:

gmake[3]: Entering directory 
'/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq'
cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include 
-I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o 
be-fsstubs.o be-fsstubs.c
cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include 
-I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o 
be-secure.o be-secure.c
cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include 
-I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o 
auth.o auth.c
auth.c: In function 'CheckBSDAuth':
auth.c:2265: error: too few arguments to function 'sendAuthRequest'


In auth.c, there is this call:
/* Send regular password request to client, and get the response */
sendAuthRequest(port, AUTH_REQ_PASSWORD);

but this function is defined like this:
static void sendAuthRequest(Port *port, AuthRequest areq, char *extradata,int 
extralen);


This was an oversight in my commit 8d3b9cce81c. It added those new 
arguments, but didn't update that call. I'll go fix that, thanks for the 
report!


You're the first to complain, and the buildfarm has been happy, so it 
seems that you're the first one to try building 10beta1 with 
--with-bsd-auth. We have one OpenBSD box in the buildfarm, "curculio", 
but apparently it's not using the --with-bsd-auth option.


Mikael, could you add --with-bsd-auth to curculio's configuration, 
please? On 9.6 and above.


- Heikki





Re: [HACKERS] 10beta1/m68k: static assertion failed: "MAXALIGN too small to fit int32"

2017-05-18 Thread Heikki Linnakangas

On 05/18/2017 12:31 PM, Christoph Berg wrote:

Re: Heikki Linnakangas 2017-05-18 <e0bfad8e-ec5d-28ee-ebbc-4670d465b...@iki.fi>

I'll commit that, barring objections. If you can verify that it fixes the
problem before that, that'd be great, otherwise I guess we'll find out some
time after the commit.


Please go ahead, I don't think I have online access to a m68k machine.
(It got demoted to an unofficial port some time ago and the old Debian
porter machines got taken down).


Ok, pushed, let's see if the port machine likes it.

- Heikki





Re: [HACKERS] 10beta1/m68k: static assertion failed: "MAXALIGN too small to fit int32"

2017-05-18 Thread Heikki Linnakangas

On 05/17/2017 10:39 PM, Christoph Berg wrote:

Not sure if a lot of people still care about m68k, but it's still one
of the unofficial Debian ports (it used to be the first non-x86 port
done decades ago):

gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels 
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -g -O2 
-fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat -Werror=format-security 
-I/usr/include/mit-krb5 -no-pie -I../../../../src/include -I/<>/build/../src/include 
-Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include/tcl8.6   -c -o slab.o 
/<>/build/../src/backend/utils/mmgr/slab.c
In file included from /<>/build/../src/include/postgres.h:47:0,
 from 
/<>/build/../src/backend/utils/mmgr/slab.c:53:
/<>/build/../src/backend/utils/mmgr/slab.c: In function 
'SlabContextCreate':
/<>/build/../src/include/c.h:753:7: error: static assertion failed: 
"MAXALIGN too small to fit int32"
  do { _Static_assert(condition, errmessage); } while(0)
   ^
/<>/build/../src/backend/utils/mmgr/slab.c:198:2: note: in 
expansion of macro 'StaticAssertStmt'
  StaticAssertStmt(MAXIMUM_ALIGNOF >= sizeof(int),
  ^~~~
: recipe for target 'slab.o' failed
make[5]: *** [slab.o] Error 1


If that's all that prevents it from working, by all means let's fix it. 
I think this should do it, although I don't have a system to test it on:


diff --git a/src/backend/utils/mmgr/slab.c b/src/backend/utils/mmgr/slab.c
index 0fcfcb4c78..e59154ddda 100644
--- a/src/backend/utils/mmgr/slab.c
+++ b/src/backend/utils/mmgr/slab.c
@@ -194,9 +194,9 @@ SlabContextCreate(MemoryContext parent,
 					 MAXALIGN(sizeof(SlabChunk)),
 					 "padding calculation in SlabChunk is wrong");
 
-	/* otherwise the linked list inside freed chunk isn't guaranteed to fit */
-	StaticAssertStmt(MAXIMUM_ALIGNOF >= sizeof(int),
-					 "MAXALIGN too small to fit int32");
+	/* Make sure the linked list node fits inside a freed chunk */
+	if (chunkSize < sizeof(int))
+		chunkSize = sizeof(int);
 
 	/* chunk, including SLAB header (both addresses nicely aligned) */
 	fullChunkSize = MAXALIGN(sizeof(SlabChunk) + MAXALIGN(chunkSize));

It adds a few instructions to check that on all platforms, unless the 
compiler can optimize that away, but this is not performance critical.


I'll commit that, barring objections. If you can verify that it fixes 
the problem before that, that'd be great, otherwise I guess we'll find 
out some time after the commit.


- Heikki





Re: [HACKERS] Typo in json.c

2017-05-18 Thread Heikki Linnakangas

On 05/18/2017 10:17 AM, Daniel Gustafsson wrote:

Spotted while reading code, patch attached.


Applied, thanks.

- Heikki





Re: [HACKERS] Removal of plaintext password type references

2017-05-18 Thread Heikki Linnakangas

On 05/15/2017 07:03 AM, Noah Misch wrote:

On Thu, May 11, 2017 at 10:08:30AM +0900, Michael Paquier wrote:

On Wed, May 10, 2017 at 10:22 PM, Michael Paquier <michael.paqu...@gmail.com> 
wrote:

On Wed, May 10, 2017 at 10:01 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Heikki Linnakangas <hlinn...@iki.fi> writes:

Also note that changing the signature check_password_hook_type would
break any external modules that use the hook. Removing
PASSWORD_TYPE_PLAINTEXT will do that too, because any password hook
function would use that constant (see e.g. contrib/passwordcheck). If we
were to change the signature, I'd actually like to simplify it by
removing the password_type parameter altogether. The hook function can
call get_password_type() on the password itself to get the same
information. But it's not worth changing the API and breaking external
modules for that.


Ahah. I just had the same thought before reading this message.


And attached is a patch to do that. I am all for this one to get a
more simple interface in place.


[Action required within three days.  This is a generic notification.]


I still don't think it's worth changing the hook function's signature 
for this. Even though it's not a big deal for external modules to 
adapt, it's not zero effort either. And it's not that much nicer for us.


- Heikki





[HACKERS] Pulling up more complicated subqueries

2017-05-17 Thread Heikki Linnakangas

Hi,

I spent some time staring at TPC-DS benchmark's query 6. It contains a 
somewhat complicated subquery, and most of the time spent on that query 
is currently spent on executing the subquery again and again. The 
essence of the query boils down to this:


CREATE TABLE foo (i int4, j int4);
CREATE TABLE bar (i int4, j int4);

INSERT INTO foo SELECT g%10, g FROM generate_series(1, 1) g;
INSERT INTO bar SELECT g%10, g FROM generate_series(1, 1) g;


SELECT * FROM foo
WHERE foo.j >= 1.2 *
  (SELECT avg(bar.j) FROM bar WHERE foo.i = bar.i);


The planner can pull up simpler subqueries, converting them to joins, 
but unfortunately this case is beyond its capabilities. If it was smart 
enough, it could be transformed into something like this:


SELECT * FROM foo
LEFT JOIN (SELECT avg(bar.j) as avg, bar.i FROM bar GROUP BY bar.i) as 
avg_bar ON foo.i = avg_bar.i

WHERE foo.j >= 1.2 * avg_bar.avg


There was some brief discussion on doing this kind of transformation 
back in 2011, at 
https://www.postgresql.org/message-id/4E2F163B.6060105%40gmail.com. I 
think we're in a better position to do that now than back then, thanks 
to all the other planner work that's been done since.


So how do we get there? I've identified a number of smaller steps that, 
when all put together, can handle this, as well as many other queries of 
course.


0. This simple case is already handled:

SELECT * FROM foo
WHERE foo.j IN (select bar.j from bar)

It's turned into a semi-join. The next steps will extend this basic 
capability, to handle more complicated cases.


postgres=# explain SELECT * FROM foo WHERE foo.j IN (select bar.j from bar);
                            QUERY PLAN
------------------------------------------------------------------
 Hash Join  (cost=42.75..93.85 rows=1130 width=8)
   Hash Cond: (foo.j = bar.j)
   ->  Seq Scan on foo  (cost=0.00..32.60 rows=2260 width=8)
   ->  Hash  (cost=40.25..40.25 rows=200 width=4)
 ->  HashAggregate  (cost=38.25..40.25 rows=200 width=4)
   Group Key: bar.j
   ->  Seq Scan on bar  (cost=0.00..32.60 rows=2260 width=4)
(7 rows)


1. Currently, the IN/EXISTS pullup is only done when the IN/EXISTS is a 
top-level qual. For example, this isn't currently pulled-up:


SELECT * FROM foo
WHERE foo.j IN (select bar.j from bar) OR foo.j < 10;

That's not a straight semi-join, but we could still turn it into a new 
kind of LEFT-SEMI join. A left-semi join is like a left join, in that it 
returns all rows from the left side, and NULLs for any non-matches. And 
like a semi-join, it returns only one matching row from the right-side, 
if there are duplicates. In the qual, replace the SubLink with an IS NOT 
NULL test. So logically, the above would be converted into:


SELECT * FROM foo
/* SEMI- */ LEFT JOIN (select bar.j from bar) ON foo.j = bar.j
WHERE bar.j IS NOT NULL OR foo.j < 10

This transformation can be applied to IN/EXIST type SubPlan references 
anywhere in the query, not only those in the WHERE clause.



2. Using a similar trick, we can transform subqueries that are not of 
the IN/EXISTS kind. (This isn't useful on its own, because this example 
would be handled as an InitPlan, which performs well. But it's a 
stepping stone in explaining this.)


For example:

SELECT * FROM foo
WHERE foo.j > (select avg(bar.j) from bar)

This can be implemented using yet another new join type, a LEFT-UNIQUE 
join. It's like a LEFT JOIN, but it must check that there are no 
duplicates in the right-hand-side, and throw an error if there are 
(ERROR:  more than one row returned by a subquery used as an expression).


SELECT * FROM foo
/* unique-checking */ LEFT JOIN (select avg(bar.j) as min_j from bar) as 
subq ON TRUE

WHERE foo.j > subq.min_j


3. All the previous examples are uncorrelated subqueries, but all these 
transformations can also be performed on correlated subqueries. The 
difference is that the subquery that's pulled up to the range table must 
be marked as LATERAL. For example:


SELECT * FROM foo
WHERE foo.j > (select avg(bar.j) from bar where bar.i = foo.i)

->

SELECT * FROM foo
/* UNIQUE-CHECKING */ LEFT JOIN LATERAL (select avg(bar.j) from bar 
where bar.i = foo.i) AS subq(j) ON true

WHERE foo.j > subq.j


4. The previous transformation isn't very interesting on its own, 
because the only way to implement a LATERAL join is a nested loop join, 
which is essentially the same as executing the subplan the naive way. 
But we can potentially de-correlate it, by pulling up the correlation qual:



SELECT * FROM foo
/* UNIQUE-CHECKING */ LEFT JOIN LATERAL (select bar.j, bar.i from bar 
WHERE bar.i = foo.i) AS subq(j, i) ON true

WHERE foo.j > subq.j

->

SELECT * FROM foo
/* UNIQUE-CHECKING */ LEFT JOIN (select bar.j, bar.i from bar) AS 
subq(j, i) ON subq.i = foo.i

WHERE foo.j > subq.j


This will get inlined into the parent query, getting rid of the subquery 
altogether. But even if that cannot be done, this is potentially much 

Re: [HACKERS] Receive buffer size for the statistics socket

2017-05-15 Thread Heikki Linnakangas

On 05/14/2017 09:54 PM, Tom Lane wrote:

Also, it's clear that a session could easily shove much more than 8KB at
a time out to the stats collector, because what we're doing in the stats
test does not involve touching any very large number of tables.  So I
think this is not just a test failure but is telling us about a plausible
mechanism for real-world statistics drops.

I observe a default receive buffer size around 124K on my Linux box,
which seems like it'd be far less prone to overflow than 8K.

I propose that it'd be a good idea to try to set the stats socket's
receive buffer size to be a minimum of, say, 100K on all platforms.
Code for this would be analogous to what we already have in pqcomm.c
(circa line 760) for forcing up the send buffer size, but SO_RCVBUF
not SO_SNDBUF.


Seems reasonable.
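
At the socket level, the proposed change is just a conditional 
setsockopt() mirroring the existing SO_SNDBUF code in pqcomm.c. A sketch 
in Python for illustration (the real fix would be C in the stats 
collector's socket setup; note that on Linux the kernel reports back 
twice the requested value):

```python
import socket

MIN_RCVBUF = 100 * 1024  # proposed 100 kB floor

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
old = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
if old < MIN_RCVBUF:
    # Only ever force the buffer up, never down.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, MIN_RCVBUF)
new = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer:", new, "bytes")
s.close()
```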


A further idea is that maybe backends should be tweaked to avoid
blasting large amounts of data at the stats collector in one go.
That would require more thought to design, though.


The data is already sent in small (< 1 kB) messages; I don't see what 
more we can do on the sender side to avoid overwhelming the receiver, 
except reduce the amount of data sent overall. But that only goes so 
far; we cannot eliminate the problem altogether, unless we also lose 
some detail.


It might nevertheless be worthwhile to reduce the overall volume. It 
would avoid some overhead, even if the buffer is large enough, although 
I don't remember pgstat being significant in any profiling I've done. 
One thing that caught my eye at a quick glance is that we are sending 
the # of tuples inserted/deleted/updated counters, even for read-only 
cases. It seems straightforward to detect that special case, and use an 
abbreviated version of PgStat_MsgTabstat without those counters, when we 
haven't updated anything. But again, might not be worth the trouble.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Heikki Linnakangas

On 05/14/2017 09:47 PM, Sokolov Yura wrote:

Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed a large amount of CPU spent in
compactify_tuples (called by PageRepareFragmentation). Most of the time
was spent in the qsort of the itemidbase items.


Ah, I played with this too a couple of years ago, see 
https://www.postgresql.org/message-id/546B89DE.7030906%40vmware.com, but 
got distracted by other things and never got around to commit that.



The itemidbase array is bounded by the number of tuples in a page, and
the itemIdSortData structure is simple, so a specialized version could
be a better choice.

The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be
reused.


Cool! Could you compare that against the bucket sort I posted in the 
above thread, please?


At a quick glance, your "prefix sort" seems to be the same algorithm 
as the bucket sort that I implemented. You chose 256 buckets, where I 
picked 32. And you're adding a shell sort implementation for small 
arrays, while I used a straight insertion sort. Not sure what these 
differences mean in practice.
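
An illustrative sketch (not taken from either patch) of the shared 
idea, in Python: one distribution pass into buckets keyed on the high 
bits of the offset, then a cheap small-array sort per bucket, producing 
the descending order that compactify_tuples wants. Assumes item offsets 
below BLCKSZ = 8192:

```python
def bucket_sort_desc(offsets, nbuckets=32):
    """One distribution pass into nbuckets by the high bits of the
    offset, then insertion sort within each bucket; descending order."""
    max_off = 8192  # BLCKSZ: item offsets fall in [0, 8192)
    buckets = [[] for _ in range(nbuckets)]
    for off in offsets:
        buckets[off * nbuckets // max_off].append(off)
    result = []
    for b in reversed(buckets):           # highest offsets first
        for i in range(1, len(b)):        # insertion sort, descending
            j = i
            while j > 0 and b[j - 1] < b[j]:
                b[j - 1], b[j] = b[j], b[j - 1]
                j -= 1
        result.extend(b)
    return result
```

With more buckets (256, as in the posted patch), each bucket holds 
fewer items and the per-bucket sort does less work, at the cost of a 
larger bucket array.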


- Heikki





Re: [HACKERS] eval_const_expresisions and ScalarArrayOpExpr

2017-05-11 Thread Heikki Linnakangas

On 05/11/2017 06:21 PM, Tom Lane wrote:

Heikki Linnakangas <hlinn...@iki.fi> writes:

Eval_const_expressions() doesn't know about ScalarArrayOpExpr.
...
That seems like an oversight. I guess that scenario doesn't happen very
often in practice, but there's no reason not to do it when it does.
Patch attached.


Yeah, I think it's a lack-of-round-tuits situation.  Your patch reminds
me of a more ambitious attempt I made awhile back to reduce the amount
of duplicative boilerplate in eval_const_expressions.  I think I wrote
it during last year's beta period, stashed it because I didn't feel like
arguing for applying it right then, and then forgot about it.


Hmph, now we're almost in the beta period again. :-(


Blowing the dust off, it's attached. It fixes ArrayRef and RowExpr as
well as ScalarArrayOpExpr, with a net growth of only 20 lines
(largely comments).


Nice!


On a side-note, I find it a bit awkward that ScalarArrayOpExpr uses a
2-element List to hold the scalar and array arguments. Wouldn't it be
much more straightforward to have explicit "Expr *scalararg" and "Expr
*arrayarg" fields?


I think it's modeled on OpExpr, which also uses a list though you could
argue for separate leftarg and rightarg fields instead.


Yeah, I think that would be better for OpExpr, too. (For a unary 
operator, rightarg would be unused.)


- Heikki





[HACKERS] eval_const_expresisions and ScalarArrayOpExpr

2017-05-11 Thread Heikki Linnakangas
Eval_const_expressions() doesn't know about ScalarArrayOpExpr. We 
simplify the arguments, but if all the arguments are booleans, we don't 
take the obvious step of replacing the whole expression with a boolean 
Const. For example:


postgres=# explain select * from foo where 1 IN (1,2,3);
 QUERY PLAN
-------------------------------------------------------------
 Result  (cost=0.00..35.50 rows=2550 width=4)
   One-Time Filter: (1 = ANY ('{1,2,3}'::integer[]))
   ->  Seq Scan on foo  (cost=0.00..35.50 rows=2550 width=4)
(3 rows)

I would've expected that to be fully evaluated at plan-time, and 
optimized away.


That seems like an oversight. I guess that scenario doesn't happen very 
often in practice, but there's no reason not to do it when it does. 
Patch attached.
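
The missing simplification amounts to: when the scalar, the array, and 
the operator are all known at plan time (and the operator is immutable), 
evaluate `scalar op ANY/ALL (array)` and substitute a boolean Const. A 
toy Python sketch of just that evaluation rule, ignoring NULLs and 
operator strictness, which the real eval_const_expressions() must handle:

```python
from operator import eq

def fold_scalar_array_op(op, use_or, scalar, array):
    """Constant-fold `scalar op ANY(array)` (use_or=True) or
    `scalar op ALL(array)` (use_or=False) when all inputs are known."""
    results = [op(scalar, elem) for elem in array]
    return any(results) if use_or else all(results)

# The example from the EXPLAIN output above: 1 IN (1,2,3)
print(fold_scalar_array_op(eq, True, 1, [1, 2, 3]))  # True
```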


On a side-note, I find it a bit awkward that ScalarArrayOpExpr uses a 
2-element List to hold the scalar and array arguments. Wouldn't it be 
much more straightforward to have explicit "Expr *scalararg" and "Expr 
*arrayarg" fields?


- Heikki



0001-Const-eval-ScalarArrayOpExprs.patch
Description: invalid/octet-stream



Re: [HACKERS] SCRAM in the PG 10 release notes

2017-05-11 Thread Heikki Linnakangas

On 05/11/2017 07:03 AM, Michael Paquier wrote:

On Thu, May 11, 2017 at 11:50 AM, Bruce Momjian  wrote:

I have added this as an open item because we will have to wait to see
where we are with driver support as the release gets closer.


As Postgres ODBC now has a hard dependency on libpq, no action is
needed there. At least this makes one driver worth mentioning.


FWIW, I wrote a patch for the Go driver at https://github.com/lib/pq, to 
implement SCRAM. It's awaiting review.


I updated the List of Drivers in the Wiki. I added a few drivers that 
were missing, like the ODBC driver, and the pgtclng driver, as well as a 
Go and Rust driver that I'm aware of. I reformatted it, and added a 
column to indicate whether each driver uses libpq or not.


https://wiki.postgresql.org/wiki/List_of_drivers

There is a similar list in our docs:

https://www.postgresql.org/docs/devel/static/external-interfaces.html

Should we update the list in the docs, adding every driver we know of? 
Or curate the list somehow, adding only more popular drivers? Or perhaps 
add a link to the Wiki page from the docs?


We can use this list in the Wiki to track which drivers implement SCRAM.

- Heikki





Re: [HACKERS] Removal of plaintext password type references

2017-05-10 Thread Heikki Linnakangas

On 05/10/2017 08:01 AM, Michael Paquier wrote:

On Wed, May 10, 2017 at 10:51 AM, Vaishnavi Prabakaran
 wrote:

Following recent removal of support to store password in plain text,
modified the code to

1. Remove  "PASSWORD_TYPE_PLAINTEXT" macro
2. Instead of using "get_password_type" to retrieve the encryption method
AND to check if the password is already encrypted or not, modified the code
to
a. Use "get_password_encryption_type" function to retrieve encryption
method.
b. Use "isPasswordEncrypted" function to check if the password is already
encrypted or not.

These changes are mainly to increase code readability and do not change
underlying functionality.


Actually, this patch reduces readability.


Yeah, I don't think this is an improvement. Vaishnavi, did you find the 
current code difficult? Perhaps some extra comments somewhere would help?


Also note that changing the signature check_password_hook_type would 
break any external modules that use the hook. Removing 
PASSWORD_TYPE_PLAINTEXT will do that too, because any password hook 
function would use that constant (see e.g. contrib/passwordcheck). If we 
were to change the signature, I'd actually like to simplify it by 
removing the password_type parameter altogether. The hook function can 
call get_password_type() on the password itself to get the same 
information. But it's not worth changing the API and breaking external 
modules for that.


- Heikki





Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-08 Thread Heikki Linnakangas

On 05/05/2017 03:42 PM, Michael Paquier wrote:

+This option is obsolete but still accepted for backwards
+compatibility.
Isn't that incorrect English? It seems to me that this should be
non-plural, as "for backward compatibility".


I changed most cases to "backward compatibility", except the one in 
create_role.sgml, because there were other instances of "backwards 
compatibility" on that page, and I didn't want this to stick out.



The comment at the top of check_password() in passwordcheck.c does not
mention scram, you may want to update that.


Reworded the comment, to not list all the possible values.


+   /*
+* We never store passwords in plaintext, so
this shouldn't
+* happen.
+*/
break;
An error here is overthinking?


It's not shown in the diff's context, but an error is returned just 
after the switch statement. I considered leaving out the "case 
PASSWORD_TYPE_PLAINTEXT" altogether, but then you might get compiler 
warnings complaining that that enum value is not handled. I also 
considered putting an explicit "default:" there, which returns an 
error, but then you'd again miss out on compiler warnings if you add a 
new password hash type and forget to add a case here to handle it.



 -- consistency of password entries
-SET password_encryption = 'plain';
-CREATE ROLE regress_passwd1 PASSWORD 'role_pwd1';
 SET password_encryption = 'md5';
Nit: this is skipping directly to role number 2.


Renumbered the test roles.

Committed with those little cleanups. Thanks for the review!

- Heikki





Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-08 Thread Heikki Linnakangas

On 05/06/2017 01:56 PM, Gavin Flower wrote:

On 06/05/17 22:44, Vik Fearing wrote:

On 05/05/2017 02:42 PM, Michael Paquier wrote:

+This option is obsolete but still accepted for backwards
+compatibility.
Isn't that incorrect English?

No.


It seems to me that this should be non-plural,
as "for backward compatibility".

"Backwards" is not plural, it's a regional variation of "backward" (or
vice versa depending on which region you come from).  Both are correct.


I am English, born & bred, and 'Backwards' feels a lot more natural to me.


Another data point:

$ grep "backwards-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
7
$ grep "backward-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
3

Another important question is whether there should be a hyphen there or not?

$ grep "backward comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
21
$ grep "backwards comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
11

It looks like the most popular spelling in our docs is "backward 
compatibility". I'll go with that.


- Heikki





Re: [HACKERS] Error message on missing SCRAM authentication with older clients

2017-05-05 Thread Heikki Linnakangas

On 05/05/2017 11:26 AM, Magnus Hagander wrote:

On Fri, May 5, 2017 at 10:19 AM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:


psql: SCRAM authentication requires libpq version 10 or above


Sounds good.


+1.


Ok, committed. Thanks!

- Heikki





Re: [HACKERS] Some thoughts about SCRAM implementation

2017-05-05 Thread Heikki Linnakangas

On 04/10/2017 09:28 PM, Álvaro Hernández Tortosa wrote:

On 10/04/17 13:02, Heikki Linnakangas wrote:

On 04/10/2017 12:39 PM, Álvaro Hernández Tortosa wrote:

* The nonce length is not specified by the RFC. I see typical
implementations use 24 chars for the client and 18 for the server.
Current code uses 10. I think it should not hurt making it at least 16
or 18.


Wouldn't hurt, I guess. IIRC I checked some other implementations,
when I picked 10, but I don't remember which ones anymore. Got a
reference for 24/18?


The first reference is the RFC example itself (non-mandatory, of
course). But then I saw that many implementations followed it. As a quick
example, GNU SASL defines:

#define SNONCE_ENTROPY_BYTES 18
https://www.gnu.org/software/gsasl/coverage/lib/scram/server.c.gcov.frameset.html


Ok, I bumped up the nonce lengths to 18 raw bytes. Thanks!
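
As a point of reference, 18 bytes of raw entropy base64-encode to 
exactly 24 characters with no padding, which matches the 24-character 
client nonces seen in typical implementations. A quick sketch:

```python
import base64
import secrets

NONCE_ENTROPY_BYTES = 18  # raw bytes of entropy, per the discussion above

raw = secrets.token_bytes(NONCE_ENTROPY_BYTES)
nonce = base64.b64encode(raw).decode("ascii")
print(len(nonce))  # 18 raw bytes -> 24 base64 characters, no padding
```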

- Heikki





Re: [HACKERS] Authentication tests, and plain 'password' authentication with a SCRAM verifier

2017-05-05 Thread Heikki Linnakangas

On 03/14/2017 09:25 PM, Heikki Linnakangas wrote:

On 03/14/2017 09:02 PM, Jeff Janes wrote:

The message returned to the client for the wrong password differs between
pg_hba-set scram and pg_hba-set md5/password methods.  Is that OK?

psql: error received from server in SASL exchange: invalid-proof

psql: FATAL:  password authentication failed for user "test"


Ah yeah, I was on the fence on that one. Currently, the server returns
the invalid-proof error to the client, as defined in RFC5802. That
results in that error message from libpq. Alternatively, the server
could elog(FATAL), like the other authentication mechanisms do, with the
same message. The RFC allows that behavior too but returning the
invalid-proof error code is potentially more friendly to 3rd party SCRAM
implementations.

One option would be to recognize the "invalid-proof" message in libpq,
and construct a more informative error message in libpq. Could use the
same wording, "password authentication failed", but it would behave
differently wrt. translation, at least.


I went ahead and changed the backend code to not send the 
"invalid-proof" error. That seemed like the easiest fix for this. You 
now get the same "password authentication failed" error as with MD5 and 
plain password authentication.


- Heikki





Re: [HACKERS] Error message on missing SCRAM authentication with older clients

2017-05-04 Thread Heikki Linnakangas

On 05/03/2017 03:12 PM, Aleksander Alekseev wrote:

Hi Heikki,


psql: SCRAM authentication not supported by this version of libpq


Maybe it would be better to specify a minimum required version?


Yeah, that could be helpful. Can you suggest a wording?

My first thought was:

psql: SCRAM authentication not supported by this version of libpq 
(version 10 or above required)


But that's very long. Perhaps:

psql: SCRAM authentication requires libpq version 10 or above

- Heikki




Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-04 Thread Heikki Linnakangas

On 05/03/2017 08:40 PM, Tom Lane wrote:

The other question I can think to ask is what will happen during
pg_upgrade, given an existing installation with one or more passwords
stored plain.  If the answer is "silently convert to MD5", I'd be
good with that.


Yes, it will silently convert to MD5. That happened even on earlier 
versions, if you had password_encryption=on in the new cluster (which 
was the default).


I'm planning to go ahead with the attached patch for this (removing 
password_encryption='plain' support, but keeping the default as 'md5').


- Heikki

From 62d42e07fe09dcb8586620ee66b1567a8f002f27 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Thu, 4 May 2017 14:31:45 +0300
Subject: [PATCH 1/1] Remove support for password_encryption='off' / 'plain'.

Storing passwords in plaintext hasn't been a good idea for a very long
time, if ever. Now seems like a good time to finally forbid it, since we're
messing with this in PostgreSQL 10 anyway.

This also removes the CREATE/ALTER USER UNENCRYPTED PASSWORD 'foo' syntax,
since storing passwords unencrypted is no longer supported. ENCRYPTED
PASSWORD 'foo' is still accepted, but ENCRYPTED is now just a noise-word,
it does the same as just PASSWORD 'foo'.

Likewise, remove the --unencrypted option from createuser, but accept
--encrypted as a no-op for backwards-compatibility. AFAICS, --encrypted was
a no-op even before this patch, because createuser encrypted the password
before sending it to the server even if --encrypted was not specified. It
added the ENCRYPTED keyword to the SQL command, but since the password was
already in encrypted form, it didn't matter. The documentation was not
clear on whether that was intended or not, but the point is moot now.

Also, while password_encryption='on' is still accepted as an alias for
'md5', it is now marked as hidden, so that it is not listed as an accepted
value in error hints, for example. That's not directly related to removing
'plain', but it seems better this way.

Discussion: https://www.postgresql.org/message-id/16e9b768-fd78-0b12-cfc1-7b6b7f238...@iki.fi
---
 doc/src/sgml/config.sgml  |  18 +++--
 doc/src/sgml/ref/alter_role.sgml  |   6 +-
 doc/src/sgml/ref/alter_user.sgml  |   2 +-
 doc/src/sgml/ref/create_group.sgml|   2 +-
 doc/src/sgml/ref/create_role.sgml |  34 +++-
 doc/src/sgml/ref/create_user.sgml |   2 +-
 doc/src/sgml/ref/createuser.sgml  |  21 +
 src/backend/commands/user.c   |  34 ++--
 src/backend/libpq/auth-scram.c|  20 +
 src/backend/libpq/auth.c  |  26 ++
 src/backend/libpq/crypt.c | 126 +-
 src/backend/parser/gram.y |  14 +++-
 src/backend/utils/misc/guc.c  |  10 +--
 src/bin/psql/tab-complete.c   |  25 ++
 src/bin/scripts/createuser.c  |  46 ---
 src/include/libpq/crypt.h |   9 ++-
 src/interfaces/libpq/fe-auth.c|   3 +-
 src/test/authentication/t/001_password.pl |  10 +--
 src/test/regress/expected/password.out|  31 
 src/test/regress/sql/password.sql |  23 +++---
 20 files changed, 154 insertions(+), 308 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0b9e3002fb..20bc3c61b1 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1188,14 +1188,16 @@ include_dir 'conf.d'
   

 When a password is specified in  or
- without writing either ENCRYPTED
-or UNENCRYPTED, this parameter determines whether the
-password is to be encrypted. The default value is md5, which
-stores the password as an MD5 hash. Setting this to plain stores
-it in plaintext. on and off are also accepted, as
-aliases for md5 and plain, respectively.  Setting
-this parameter to scram-sha-256 will encrypt the password
-with SCRAM-SHA-256.
+, this parameter determines the algorithm
+to use to encrypt the password. The default value is md5,
+which stores the password as an MD5 hash (on is also
+accepted, as alias for md5). Setting this parameter to
+scram-sha-256 will encrypt the password with SCRAM-SHA-256.
+   
+   
+Note that older clients might lack support for the SCRAM authentication
+mechanism, and hence not work with passwords encrypted with
+SCRAM-SHA-256.

   
  
diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml
index 37fcfb926c..8cd8602bc4 100644
--- a/doc/src/sgml/ref/alter_role.sgml
+++ b/doc/src/sgml/ref/alter_role.sgml
@@ -33,7 +33,7 @@ ALTER ROLE role_specification [ WIT
 | REPLICATION | NOREPLICATION
 | BYPASSRLS | NOBYPASSRLS
 | CONNECTION LIMIT connlimit
-| [ ENCRYPTED | UNENCRYPTED ]

Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-03 Thread Heikki Linnakangas

On 05/03/2017 07:14 PM, Tom Lane wrote:

Robert Haas <robertmh...@gmail.com> writes:

On Wed, May 3, 2017 at 7:31 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

So, I propose that we remove support for password_encryption='plain' in
PostgreSQL 10. If you try to do that, you'll get an error.



I have no idea how widely used that option is.


Is it possible that there are still client libraries that don't support
password encryption at all?  If so, are we willing to break them?
I'd say "yes" but it's worth thinking about.


That doesn't make sense. The client doesn't even know what 
password_encryption is set to. I think you're confusing 
password_encryption='plain' with the plaintext "password" authentication 
method.


If the server has an MD5 hash stored in pg_authid, the server will ask 
the client to do MD5 authentication. If the server has a SCRAM verifier 
in pg_authid, it will ask the client to do SCRAM authentication. If the 
server has a plaintext password in pg_authid, it will also ask the 
client to do SCRAM authentication (it could ask for MD5 authentication, 
but as the code stands, it will ask for SCRAM).


The server will only ask the client to do plaintext password 
authentication, if you put "password" as the authentication method in 
pg_hba.conf. But that works regardless of what password_encryption is 
set to.
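
The choice of authentication method therefore follows from the format 
of the string stored in pg_authid. A simplified sketch of that 
classification, loosely mirroring get_password_type() in 
src/backend/libpq/crypt.c (the prefixes follow the on-disk formats; 
error handling is omitted):

```python
def classify_stored_password(stored):
    """Guess the stored-password format: an MD5 hash is "md5" plus
    32 hex digits; a SCRAM verifier starts with "SCRAM-SHA-256$";
    anything else is treated as a plaintext password."""
    if (stored.startswith("md5") and len(stored) == 35
            and all(c in "0123456789abcdef" for c in stored[3:])):
        return "md5"
    if stored.startswith("SCRAM-SHA-256$"):
        return "scram-sha-256"
    return "plain"
```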


No, I don't think there's any valid reason to store passwords in 
plaintext anymore. In theory, you could use either MD5 or SCRAM 
authentication with a plaintext password, which would be an advantage, 
but we don't provide an option for that.


- Heikki





[HACKERS] Error message on missing SCRAM authentication with older clients

2017-05-03 Thread Heikki Linnakangas
Currently, if you use 9.6 libpq to connect to a v10 server that requires 
SCRAM authentication, you get an error:


psql: authentication method 10 not supported

I'd like to apply this small patch to all the stable branches, to give a 
nicer error message:


psql: SCRAM authentication not supported by this version of libpq

It won't help unless you upgrade to the latest minor version, of course, 
but it's better than nothing. Any objections?


- Heikki


backport-nicer-error-on-scram.patch
Description: invalid/octet-stream



[HACKERS] password_encryption, default and 'plain' support

2017-05-03 Thread Heikki Linnakangas

Hi,

In various threads on SCRAM, we've skirted around the question of 
whether we should still allow storing passwords in plaintext. I've 
avoided discussing that in those other threads, because it's been an 
orthogonal question, but it's a good question and we should discuss it.


So, I propose that we remove support for password_encryption='plain' in 
PostgreSQL 10. If you try to do that, you'll get an error.


Another question that's been touched upon but not explicitly discussed, 
is whether we should change the default to "scram-sha-256". I propose 
that we do that as well. If you need to stick to md5, e.g. because you 
use drivers that don't support SCRAM yet, you can change it in 
postgresql.conf, but the majority of installations that use modern 
clients will be more secure by default.


- Heikki





Re: [HACKERS] scram and \password

2017-05-03 Thread Heikki Linnakangas

On 05/02/2017 07:47 PM, Robert Haas wrote:

On Tue, May 2, 2017 at 3:42 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

There's going to be a default, one way or another. The default is going to
come from password_encryption, or it's going to be a hard-coded value or
logic based on server-version in PQencryptPasswordConn(). Or it's going to
be a hard-coded value or logic implemented in every application that uses
PQencryptPasswordConn(). I think looking at password_encryption makes the
most sense. The application is not in a good position to make the decision,
and forcing the end-user to choose every time they change a password is too
onerous.


I think there should be no default, and the caller should have to pass
the algorithm explicitly.  If they want to determine what default to
pass by running 'SHOW password_encryption', that's their choice.


Ok, gotcha. I disagree, I think we should provide a default. Libpq is in 
a better position to make a good choice than most applications.


I've committed the new PQencryptPasswordConn function, with the default 
behavior of doing "show password_encryption", and the changes to use it 
in psql and createuser. This closes the open issue with \password.


On 04/27/2017 07:03 AM, Michael Paquier wrote:

I think that it is a mistake to move SASLprep out of
scram_build_verifier, because pre-processing the password is not
necessary, it is normally mandatory. The BE/FE versions that you are
adding also duplicate the calls to pg_saslprep().


I played with that a little bit, but decided to keep pg_saslprep() out 
of scram_build_verifier() after all. It would seem asymmetric to have 
scram_build_verifier() call pg_saslprep(), but require callers of 
scram_SaltedPassword() to call it. So for consistency, I think 
scram_SaltedPassword() should also call pg_saslprep(). That would 
complicate the scram_SaltedPassword() function, however. It would need 
to report an OOM error somehow, for starters. Not an insurmountable 
issue, of course, but it felt cleaner this way, after all, despite the 
duplication.
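
For context, the layering under discussion is small: the salted password 
is PBKDF2-HMAC-SHA-256 over the (SASLprep'd) password, and the 
verifier's keys are HMACs derived from it, per RFC 5802. A Python 
sketch with normalization left to the caller, as described above (the 
function names echo the C ones but are illustrative):

```python
import hashlib
import hmac

def scram_salted_password(password, salt, iterations):
    # Caller is expected to have run SASLprep on `password` already.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

def scram_build_verifier(password, salt, iterations=4096):
    salted = scram_salted_password(password, salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key
```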



Using "encrypt" instead of "hash" in the function name :(


Yeah. For better or worse, I've kept the "encrypt" nomenclature 
everywhere, for consistency.


Thanks!

- Heikki





Re: [HACKERS] SCRAM authentication, take three

2017-05-02 Thread Heikki Linnakangas

On 05/02/2017 09:57 PM, Robert Haas wrote:

Does db_user_namespace work with SCRAM?


Yes. Haven't tested it, come to think of it, but it should work.

- Heikki





Re: [HACKERS] scram and \password

2017-05-02 Thread Heikki Linnakangas

On 05/01/2017 07:04 PM, Robert Haas wrote:

On Thu, Apr 27, 2017 at 3:22 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

You could argue, that since we need to document how to avoid the query and
the blocking, we might as well always require the application to run the
"show password_encryption" query before calling PQencryptPasswordConn(). But
I'd like to offer the convenience for the majority of applications that
don't mind blocking.


I still think that's borrowing trouble.  It just seems like too
critical of a thing to have a default -- if the convenience logic gets
it wrong and encrypts the password in a manner not intended by the
user, that could (a) amount to a security vulnerability or (b) lock
you out of your account.


That's true for a lot of things. The logic isn't complicated; it runs 
"SHOW password_encryption", and uses the value of that as the algorithm.


There's going to be a default, one way or another. The default is going 
to come from password_encryption, or it's going to be a hard-coded value 
or logic based on server-version in PQencryptPasswordConn(). Or it's 
going to be a hard-coded value or logic implemented in every application 
that uses PQencryptPasswordConn(). I think looking at 
password_encryption makes the most sense. The application is not in a 
good position to make the decision, and forcing the end-user to choose 
every time they change a password is too onerous.
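
The decision procedure being argued for here is deliberately small. A 
hedged Python sketch (`run_query` is a hypothetical stand-in for issuing 
the query over the connection, and the accepted algorithm names are 
illustrative):

```python
def choose_algorithm(conn, algorithm, run_query):
    """Pick the password-encryption algorithm: use the explicit choice
    if given, otherwise defer to the server's setting."""
    if algorithm is not None:
        if algorithm not in ("md5", "scram-sha-256"):
            raise ValueError("unrecognized password encryption algorithm")
        return algorithm
    # No explicit choice: ask the server. This issues a query, and can
    # therefore block, or fail in a failed transaction.
    return run_query(conn, "SHOW password_encryption")
```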



 If you ask your significant other "where do
you want to go to dinner?" and can't get a clear answer out of them
after some period of time, it's probably OK to assume they don't care
that much and you can just pick something.  If you ask the
commander-in-chief "which country should we fire the missiles at?" and
you don't get a clear and unambiguous answer, just picking something
is not a very good idea.


I don't understand the analogy. If the application explicitly passes an 
algorithm, we use that. If the application passes NULL, it means "you 
decide, based on the documented rules". If the application passes 
"mumble-mumble", you get an error. If the "SHOW password_encryption" 
query fails or libpq doesn't understand the result, you also get an error.


What do you think we should do here, then? Make password_encryption 
GUC_REPORT? Hard-code an algorithm in every application? Remove the 
convenience logic from PQencryptionPasswordConn(), and document that for 
a sensible default, the application should first run "SHOW 
password_encryption", and use the result of that as the algorithm? Or go 
with the current patch?


- Heikki





Re: [HACKERS] scram and \password

2017-04-28 Thread Heikki Linnakangas

On 04/28/2017 07:49 AM, Noah Misch wrote:

On Fri, Apr 21, 2017 at 11:04:14PM +0300, Heikki Linnakangas wrote:

I'll continue reviewing the rest of the patch on Monday, but [...]


This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Moreover, this open item has been in progress for 17 days, materially
longer than the 1-2 week RMT non-intervention period.  Refer to the policy on
open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


I just pushed some little cleanups that caught my eye while working on 
this. I need to rebase the patch set I sent earlier, and fix the little 
things that Michael pointed out, but that shouldn't take long. I plan to 
push a fix for this on Tuesday (Monday is a national holiday here).


- Heikki





Re: [HACKERS] scram and \password

2017-04-27 Thread Heikki Linnakangas

On 04/26/2017 07:57 PM, Tom Lane wrote:

Robert Haas <robertmh...@gmail.com> writes:

On Wed, Apr 26, 2017 at 6:22 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

* If algorithm is not given explicitly, PQencryptPasswordConn() queries
"SHOW password_encryption", and uses that. That is documented, and it is
also documented that it will *not* issue queries, and hence will not block,
if the algorithm is given explicitly. That's an important property for some
applications. If you want the default behavior without blocking, query "SHOW
password_encryption" yourself, and pass the result as the 'algorithm'.



TBH, I'd just require the user to specify the algorithm explicitly.
Having it run SHOW on the server seems wonky.  It introduces a bunch
of failure modes for ... no real benefit, I think.


Yeah.  Blocking is the least of your worries --- what about being in
a failed transaction, for instance?


Well, the "ALTER USER" command that you will surely run next, using the 
same connection, would fail anyway. I don't think running a query is a 
problem, as long as it's documented, and there's a documented way to 
avoid it.


You could argue, that since we need to document how to avoid the query 
and the blocking, we might as well always require the application to run 
the "show password_encryption" query before calling 
PQencryptPasswordConn(). But I'd like to offer the convenience for the 
majority of applications that don't mind blocking.



However, it's not entirely true that there's no real benefit.  If the
client app has to specify the algorithm then client code will need
extension every time we add another algorithm.  Maybe that's going to
happen so seldom that it's not a big deal, but it would be nice to
avoid that.


Yeah, I'd like to be prepared. Hopefully we don't need to add another 
algorithm any time soon, but maybe we will, or maybe we want to add an 
option for the SCRAM iteration count, for example.


- Heikki





Re: [HACKERS] scram and \password

2017-04-26 Thread Heikki Linnakangas

On 04/25/2017 06:26 PM, Heikki Linnakangas wrote:

Thoughts? Unless someone has better ideas or objections, I'll go
implement that.


This is what I came up with in the end. Some highlights and differences 
vs the plan I posted earlier:


* If algorithm is not given explicitly, PQencryptPasswordConn() queries 
"SHOW password_encryption", and uses that. That is documented, and it is 
also documented that it will *not* issue queries, and hence will not 
block, if the algorithm is given explicitly. That's an important 
property for some applications. If you want the default behavior without 
blocking, query "SHOW password_encryption" yourself, and pass the result 
as the 'algorithm'.


* In the previous draft, I envisioned an algorithm-specific 'options' 
argument. I left it out, on second thoughts (more on that below).


* The old PQencryptPassword() function is unchanged, and always uses 
md5. Per public opinion.



We talked about the alternative where PQencryptPasswordConn() would not 
look at password_encryption, but would always use the strongest possible 
algorithm supported by the server. That would avoid querying the server. 
But then I started thinking how this would work, if we make the number 
of iterations in the SCRAM verifier configurable in the future. The 
client would not know the desired number of iterations based only on the 
server version, it would need to query the server, and we would be back 
to square one. We could add an "options" argument to 
PQencryptPasswordConn() that the application could use to pass that 
information, but documenting how to fetch that information, if you don't 
want PQencryptPasswordConn() to block, gets tedious, and puts a lot of 
burden to applications. That is why I left out the "options" argument, 
after all.


I'm now thinking that if we add password hashing options like the 
iteration count in the future, they will be tacked on to 
password_encryption. For example, "password_encryption='scram-sha-256, 
iterations=1". That way, "password_encryption" will always contain 
enough information to construct password verifiers.


What will happen to existing applications using PQencryptPasswordConn() 
if a new password_encryption algorithm (or options) is added in the 
future? With this scheme, the application doesn't need to know what 
algorithms exist. An application can pass algorithm=NULL, to use the 
server default, or do "show password_encryption" and pass that, for the 
non-blocking behavior. If you use an older libpq against a new server, 
so that libpq doesn't know about the algorithm used by the server, you 
get an error.


For reviewer convenience, I put up the patched docs at 
http://hlinnaka.iki.fi/temp/scram-wip-docs/libpq-misc.html#libpq-pqencryptpasswordconn.


Thoughts? Am I missing anything?

As an alternative, I considered making password_encryption GUC_REPORT, 
so that libpq would always know it without having to issue a query. But 
it feels wrong to send that option to the client on every connection, 
when it's only needed in the rare case that you use 
PQencryptPasswordConn(). And if we added new settings like the iteration 
count in the future, those would need to be GUC_REPORT too.


- Heikki



0001-Move-scram_build_verifier-to-src-common.patch
Description: application/download


0002-Add-PQencryptPasswordConn-function-to-libpq.patch
Description: application/download


0003-Use-new-PQencryptPasswordConn-function-in-psql-and-c.patch
Description: application/download



Re: [HACKERS] scram and \password

2017-04-25 Thread Heikki Linnakangas

On 04/22/2017 01:20 AM, Michael Paquier wrote:

On Sat, Apr 22, 2017 at 5:04 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

I'll continue reviewing the rest of the patch on Monday, but one glaring
issue is that we cannot add an argument to the existing libpq
PQencryptPassword() function, because that's part of the public API. It
would break all applications that use PQencryptPassword().

What we need to do is to add a new function. What should we call that? We
don't have a tradition of using "Ex" or such suffix to mark extended
versions of existing functions. PQencryptPasswordWithMethod(user, pass,
method) ?


Do we really want to add a new function or have a hard failure? Any
application calling PQencryptPassword may trap itself silently if the
server uses scram as hba key or if the default is switched to that,
from this point of view extending the function makes sense as well.


Yeah, there is that. But we simply cannot change the signature of an 
existing function. It would not only produce compile-time errors when 
building old applications, which would arguably be a good thing, but it 
would also cause old binaries to segfault when dynamically linked with 
the new libpq.


I think it's clear that we should have a new function that takes the 
algorithm as argument. But there are open decisions on what the old 
PQencryptPassword() function should do, and also what the new function 
should do by default, if you don't specify an algorithm:


A) Have PQencryptPassword() return an md5 hash.

B) Have PQencryptPassword() return a SCRAM verifier

C) Have PQencryptPassword() return a SCRAM verifier if connected to a 
v10 server, and an md5 hash otherwise. This is tricky, because 
PQencryptPassword doesn't take a PGconn argument. It could behave like 
PQescapeString() does, and choose md5/scram depending on the server 
version of the last connection that was established.


For the new function, it's probably best to pass a PGconn argument. That 
way we can use the connection to determine the default, and it seems to 
be a good idea for future-proofing too. And an extra "options" argument 
might be good, while we're at it, to e.g. specify the number of 
iterations for SCRAM. So all in all, I propose the documentation for 
these functions to be (I chose option C from above for this):



char *
PQencryptPasswordConn(PGconn *conn,
  const char *passwd,
  const char *user,
  const char *method,
  const char *options)

[this paragraph is the same as current PQencryptPassword()]
This function is intended to be used by client applications that wish to 
send commands like ALTER ROLE joe PASSWORD 'pwd'. It is good practice to 
not send the original cleartext password in such a command, because it 
might be exposed in command logs, activity displays and so on. Instead, 
use this function to convert the password to encrypted form before it is 
sent.

[end of unchanged part]

This function may execute internal queries to the server to determine 
appropriate defaults, using the given connection object. The call can 
therefore fail if the connection is busy executing another query, or the 
current transaction is aborted.


The return value is a string allocated by malloc, or NULL if out of 
memory or other error. On error, a suitable message is stored in the 
'conn' object. The caller can assume the string doesn't contain any 
special characters that would require escaping. Use PQfreemem to free 
the result when done with it.


The function arguments are:

conn
  Connection object for the database where the password is to be changed.

passwd
  The new password

user
  Name of the role whose password is to be changed

method
  Name of the password encryption method to use. Currently supported 
methods are "md5" or "scram-sha-256". If method is NULL, the default for 
the current database is used. [i.e. this looks at password_encryption]


options
  Options specific to the encryption method, or NULL to use the 
defaults. (This argument is for future expansion, there are currently no 
options, and you should always pass NULL.)



char *
PQencryptPassword(const char *passwd, const char *user)

PQencryptPassword is an older, deprecated version of 
PQencryptPasswordConn. The difference is that PQencryptPassword does not 
require a connection object. The encryption method will be chosen 
depending on the server version of the last established connection, and 
built-in default options.




Thoughts? Unless someone has better ideas or objections, I'll go 
implement that.


- Heikki





Re: [HACKERS] OK, so culicidae is *still* broken

2017-04-24 Thread Heikki Linnakangas

On 04/24/2017 09:50 PM, Andres Freund wrote:

On 2017-04-24 14:43:11 -0400, Tom Lane wrote:

(We have accepted that kind of overhead for DSM segments, but the
intention I think is to allow only very trivial data structures in
the DSM segments.  Losing compiler pointer type checking for data
structures like the lock or PGPROC tables would be horrid.)


The relptr.h infrastructure brings some of the type-checking back, but
it's still pretty painful.  And just as important, it's quite noticeable
performance-wise.  So we have to do it for dynamic shm (until/unless we
go to using threads), but that doesn't mean we should do it for some of
the most performance critical data structures in PG...


Agreed.

For some data shared memory structures, that store no pointers, we 
wouldn't need to insist that they are mapped to the same address in 
every backend, though. In particular, shared_buffers. It wouldn't 
eliminate the problem, though, only make it less likely, so we'd still 
need to retry when it does happen.


- Heikki





Re: [HACKERS] scram and \password

2017-04-21 Thread Heikki Linnakangas

On 04/10/2017 08:42 AM, Michael Paquier wrote:

As there have been some conflicts because of the commit of SASLprep,
here is a rebased set of patches.


I've committed a modified version of the first patch, to change the 
on-disk format to RFC 5803, as mentioned on the other thread 
(https://www.postgresql.org/message-id/351ba574-85ea-d9b8-9689-8c928dd09...@iki.fi).


I'll continue reviewing the rest of the patch on Monday, but one glaring 
issue is that we cannot add an argument to the existing libpq 
PQencryptPassword() function, because that's part of the public API. It 
would break all applications that use PQencryptPassword().


What we need to do is to add a new function. What should we call that? 
We don't have a tradition of using "Ex" or such suffix to mark extended 
versions of existing functions. PQencryptPasswordWithMethod(user, pass, 
method) ?


- Heikki





Re: [HACKERS] On-disk format of SCRAM verifiers

2017-04-21 Thread Heikki Linnakangas

On 04/21/2017 05:33 PM, Simon Riggs wrote:

On 21 April 2017 at 14:42, Heikki Linnakangas <hlinn...@iki.fi> wrote:


SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>


Could you explain where you are looking? I don't see that in RFC5803



From 1.  Overview:

Yeah, it's not easy to see, I missed it earlier too. You have to look at RFC 5803 and RFC 3112 together. RFC 3112 says that the overall format is 
"<scheme>$<authInfo>$<authValue>", and RFC 5803 says that for SCRAM, scheme is "SCRAM-SHA-256" (for our variant), authInfo 
is "<iteration count>:<salt>" and authValue is "<StoredKey>:<ServerKey>"

They really should've included examples in those RFCs.


Thanks

+1 for change


Committed, thanks.

- Heikki





Re: [HACKERS] On-disk format of SCRAM verifiers

2017-04-21 Thread Heikki Linnakangas


On 21 April 2017 16:20:56 EEST, Michael Paquier <michael.paqu...@gmail.com> 
wrote:
>On Fri, Apr 21, 2017 at 10:02 PM, Simon Riggs <si...@2ndquadrant.com>
>wrote:
>> On 21 April 2017 at 10:20, Heikki Linnakangas <hlinn...@iki.fi>
>wrote:
>>> But looking more closely, I think I misunderstood RFC 5803. It
>*does* in
>>> fact specify a single string format to store the verifier in. And
>the format
>>> looks like:
>>>
>>> SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>
>>
>> Could you explain where you are looking? I don't see that in RFC5803
>
>From 1.  Overview:

Yeah, it's not easy to see, I missed it earlier too. You have to look at RFC 
5803 and RFC 3112 together. RFC 3112 says that the overall format is 
"<scheme>$<authInfo>$<authValue>", and RFC 5803 says that for SCRAM, scheme is 
"SCRAM-SHA-256" (for our variant), authInfo is "<iteration count>:<salt>" and 
authValue is "<StoredKey>:<ServerKey>"

They really should've included examples in those RFCs.

- Heikki




[HACKERS] On-disk format of SCRAM verifiers

2017-04-21 Thread Heikki Linnakangas

The current format for SCRAM verifiers in pg_authid is:

scram-sha-256

While reviewing Michael's patch to change that so that StoredKey and 
ServerKey are stored base64-encoded, rather than hex-encoded as they are 
currently [1], I looked again at RFC 5803. RFC 5803 specifies the format 
to use when storing SCRAM verifiers in LDAP. I looked at it earlier, and 
it was a source of inspiration for the current format, but I didn't 
think that it was directly applicable. I thought that in RFC 5803 the 
different fields were stored as separate fields or attributes, not as a 
single string.


But looking more closely, I think I misunderstood RFC 5803. It *does* in 
fact specify a single string format to store the verifier in. And the 
format looks like:


SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>

Alternating '$' and ':' as the separators seems a bit wonky, but it 
actually makes sense. "<iteration count>:<salt>" is treated as one 
field, and "<StoredKey>:<ServerKey>" is treated as another, which is 
logical, since the iteration count and salt are sent together to the 
client in the SCRAM challenge, while StoredKey and ServerKey must be 
kept secret.


I think we should adopt that exact format, so that our verifiers are 
compatible with RFC 5803. It doesn't make any immediate difference, but 
since there is a standard out there, might as well follow it. And just 
in case we get support for looking up SCRAM verifiers from an LDAP 
server in the future, it will come handy as we won't need to parse two 
different formats.


Barring objections, I'll go change our on-disk format for SCRAM 
verifiers to follow RFC 5803.


[1] 
https://www.postgresql.org/message-id/CAB7nPqSbsCBCxy8-DtwzRxYgTnbGUtY4uFEkLQhG%3DR%3Duo%3Dg8Fw%40mail.gmail.com


- Heikki




Re: [HACKERS] SASL minor docs typo

2017-04-19 Thread Heikki Linnakangas

On 04/18/2017 08:50 PM, Jaime Casanova wrote:

Hi,

reading SASL docs found this typo:

in protocol.sgml:1356
"""
To begin a SASL authentication exchange, the server an AuthenticationSASL
  message.
"""

I guess it should say "the server sends an AuthenticationSASL message"


Yep, fixed. Thanks!

- Heikki





Re: [HACKERS] scram and \password

2017-04-18 Thread Heikki Linnakangas

On 04/18/2017 08:44 AM, Noah Misch wrote:

On Tue, Apr 11, 2017 at 10:07:12PM +0300, Heikki Linnakangas wrote:

Thanks! I've been busy on the other thread on future-proofing the protocol
with negotiating the SASL mechanism, I'll come back to this once we get that
settled. By the end of the week, I presume.


This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


Took a bit longer than I predicted, but the other scram open items have 
now been resolved, and I'll start reviewing this now. I expect there 
will be 1-2 iterations on this before it's committed. I'll send another 
status update by Friday, if this isn't committed by then.


- Heikki





  1   2   3   4   5   6   7   8   9   10   >