[HACKERS] Re: Should phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue') be true ?

2016-06-27 Thread Noah Misch
On Sun, Jun 26, 2016 at 10:22:26PM -0400, Noah Misch wrote:
> On Wed, Jun 15, 2016 at 11:08:54AM -0400, Noah Misch wrote:
> > On Wed, Jun 15, 2016 at 03:02:15PM +0300, Teodor Sigaev wrote:
> > > On Wed, Jun 15, 2016 at 02:54:33AM -0400, Noah Misch wrote:
> > > > On Mon, Jun 13, 2016 at 10:44:06PM -0400, Noah Misch wrote:
> > > > > On Fri, Jun 10, 2016 at 03:10:40AM -0400, Noah Misch wrote:
> > > > > > [Action required within 72 hours.  This is a generic notification.]
> > > > > > 
> > > > > > The above-described topic is currently a PostgreSQL 9.6 open item.  
> > > > > > Teodor,
> > > > > > since you committed the patch believed to have created it, you own 
> > > > > > this open
> > > > > > item.  If some other commit is more relevant or if this does not 
> > > > > > belong as a
> > > > > > 9.6 open item, please let us know.  Otherwise, please observe the 
> > > > > > policy on
> > > > > > open item ownership[1] and send a status update within 72 hours of 
> > > > > > this
> > > > > > message.  Include a date for your subsequent status update.  
> > > > > > Testers may
> > > > > > discover new open items at any time, and I want to plan to get them 
> > > > > > all fixed
> > > > > > well in advance of shipping 9.6rc1.  Consequently, I will 
> > > > > > appreciate your
> > > > > > efforts toward speedy resolution.  Thanks.
> > > > > > 
> > > > > > [1] 
> > > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> > > > > 
> > > > > This PostgreSQL 9.6 open item is past due for your status update.  
> > > > > Kindly send
> > > > > a status update within 24 hours, and include a date for your 
> > > > > subsequent status
> > > > > update.  Refer to the policy on open item ownership:
> > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> > > > 
> > > >IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long 
> > > >past due
> > > >for your status update.  Please reacquaint yourself with the policy on 
> > > >open
> > > >item ownership[1] and then reply immediately.  If I do not hear from you 
> > > >by
> > > >2016-06-16 07:00 UTC, I will transfer this item to release management 
> > > >team
> > > >ownership without further notice.
> > > >
> > > >[1] 
> > > >http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> > > 
> > > I'm working on it right now.
> > 
> > That is good news, but it is not a valid status update.  In particular, it
> > does not specify a date for your next update.
> 
> You still have not delivered the status update due thirteen days ago.  If I do
> not hear from you a fully-conforming status update by 2016-06-28 03:00 UTC, or
> if this item ever again becomes overdue for a status update, I will transfer
> the item to release management team ownership.

This PostgreSQL 9.6 open item now needs a permanent owner.  Would any other
committer like to take ownership?  I see Teodor committed some things relevant
to this item just today, so the task may be as simple as verifying that those
commits resolve the item.  If this role interests you, please read this thread
and the policy linked above, then send an initial status update bearing a date
for your subsequent status update.  If the item does not have a permanent
owner by 2016-07-01 07:00 UTC, I will resolve the item by reverting all phrase
search commits.

Thanks,
nm




Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Ashutosh Bapat
On Tue, Jun 28, 2016 at 11:43 AM, Etsuro Fujita  wrote:

> On 2016/06/28 13:53, Ashutosh Bapat wrote:
>
>> Ideally, we should point out the specific column that faced the
>> conversion problem and report it, instead of saying the whole row
>> reference conversion caused the problem. But that may be too difficult.
>>
>
> I think so too.
>
> Or at least the error message should read "whole row reference of
>> foreign table ft1".
>>
>
> OK, attached is an updated version of the patch, which uses "whole-row
> reference", not "wholerow".
>

The wording "column "whole-row reference ..." doesn't look good. Whole-row
reference is not a column. The error context itself should be "whole row
reference for foreign table ft1".
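
For illustration, a minimal sketch of what such an errcontext() branch
might look like inside postgres_fdw's conversion_error_callback(),
assuming the callback has the relation OID ("relid") and attribute number
("attnum") at hand; this just shows the proposed wording, not the actual
patch:

    if (attnum > 0)
        errcontext("column \"%s\" of foreign table \"%s\"",
                   get_relid_attribute_name(relid, attnum),
                   get_rel_name(relid));
    else if (attnum == 0)
        /* whole-row reference: get_relid_attribute_name() can't handle it */
        errcontext("whole row reference for foreign table \"%s\"",
                   get_rel_name(relid));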

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Etsuro Fujita

On 2016/06/28 13:53, Ashutosh Bapat wrote:

Ideally, we should point out the specific column that faced the
conversion problem and report it, instead of saying the whole row
reference conversion caused the problem. But that may be too difficult.


I think so too.


Or at least the error message should read "whole row reference of
foreign table ft1".


OK, attached is an updated version of the patch, which uses "whole-row 
reference", not "wholerow".


Best regards,
Etsuro Fujita


postgres-fdw-conv-error-callback-v3.patch
Description: binary/octet-stream



[HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist

2016-06-27 Thread Rushabh Lathia
Hi,

Consider the below testcase:

CREATE TABLE tab(
  c1 INT NOT NULL,
  c2 INT NOT NULL
);
INSERT INTO tab VALUES (1, 2);
INSERT INTO tab VALUES (2, 1);
INSERT INTO tab VALUES (1, 2);


case 1:

SELECT c.c1, c.c2 from tab C WHERE c.c2 = ANY (
SELECT 1 FROM tab A WHERE a.c2 IN (
  SELECT 1 FROM tab B WHERE a.c1 = c.c1
  GROUP BY rollup(a.c1)
)
GROUP BY cube(c.c2)
  )
  GROUP BY grouping sets(c.c1, c.c2)
  ORDER BY 1, 2 DESC;
ERROR:  ORDER/GROUP BY expression not found in targetlist

case 2:

create sequence s;
SELECT setval('s', max(100)) from tab;
ERROR:  ORDER/GROUP BY expression not found in targetlist

Looking further, I found that the error started appearing with the commit below:

commit aeb9ae6457865c8949641d71a9523374d843a418
Author: Tom Lane 
Date:   Thu May 26 14:52:24 2016 -0400

Disable physical tlist if any Var would need multiple sortgroupref
labels.

If we revert the above commit, then the given tests run as expected.


Regards,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Ashutosh Bapat
On Tue, Jun 28, 2016 at 9:00 AM, Etsuro Fujita 
wrote:

> On 2016/06/27 18:56, Ashutosh Bapat wrote:
>
>> On Mon, Jun 27, 2016 at 3:06 PM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp> wrote:
>>
>
> I found another bug in error handling of whole-row references in
>> join pushdown; conversion_error_callback fails to take into account
>> that get_relid_attribute_name(Oid relid, AttrNumber attnum) can't
>> handle whole-row references (ie, attnum=0), in which case that would
>> cause cache lookup errors.  Attached is a small patch to address
>> this issue.
>>
>
> Do you have any testcase reproducing the bug here? It would be good to
>> include that test in the regression.
>>
>
> Done.
>
> There is always a possibility that a user would create a table (which
>> can be used as a target for the foreign table) with a column named
>> 'wholerow', in which case s/he will be confused by this error message.
>>
>
> By grepping I found that there are error messages that use "whole-row
> table reference", "whole-row reference",


These two should be fine.


> or "wholerow", so the use of "wholerow" seems to me reasonable.  (And IMO
> I think "wholerow" would most likely fit into that errcontext().)
>

As error message text that does not refer to any particular column, those
are fine. But in this case we are specifically referring to a particular
column, and that reference can be confusing. The text 'column "wholerow" of
foreign table "ft1"' is confusing either way: a user who has not created a
table with a "wholerow" column will not understand which column the error
context refers to, while a user who has created a table with a "wholerow"
column won't find any problem with the data in that column.

Ideally, we should point out the specific column that faced the conversion
problem and report it, instead of saying the whole row reference conversion
caused the problem. But that may be too difficult. Or at least the error
message should read "whole row reference of foreign table ft1".

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] A couple of cosmetic changes around shared memory code

2016-06-27 Thread Michael Paquier
On Tue, Jun 28, 2016 at 6:49 AM, Robert Haas  wrote:
> On Sun, Jun 26, 2016 at 6:19 AM, Piotr Stefaniak
>  wrote:
>>> while investigating the shm_mq code and its testing module I made some
>>> cosmetic improvements there. You can see them in the attached diff file.
>>
>> Revised patch attached.
>
> The first hunk of this corrects an outdated comment, so we should
> certainly apply that.  I'm not seeing what the value of the other bits
> is.

- proc_exit(1);
+ proc_exit(0);
Looking again at this thread with fresh eyes, isn't the origin of the
confusion the fact that we need a non-zero exit code so that the worker is
never restarted, thanks to BGW_NEVER_RESTART? Even with that, it is a
strange convention to exit with proc_exit(1) in the case where a worker
finished correctly...
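
For reference, a minimal sketch of the 9.4-era bgworker conventions being
discussed here (demo_worker_main and register_demo_worker are illustrative
names, not the test module's code):

    #include "postgres.h"
    #include "postmaster/bgworker.h"
    #include "storage/ipc.h"

    void
    demo_worker_main(Datum main_arg)
    {
        /* ... do the actual work ... */

        /* exit status 0: the postmaster unregisters the worker, no restart */
        proc_exit(0);
    }

    static void
    register_demo_worker(void)
    {
        BackgroundWorker worker;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        /* never auto-restart, whatever the exit status ends up being */
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        worker.bgw_main = demo_worker_main;
        snprintf(worker.bgw_name, BGW_MAXLEN, "demo worker");
        RegisterBackgroundWorker(&worker);
    }

With BGW_NEVER_RESTART in place, the restart behaviour is the same for
proc_exit(0) and proc_exit(1): neither exit leads to a restart, which is
why the exit code in the test module mostly ends up documenting whether
the worker thinks it finished cleanly.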
-- 
Michael




Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Etsuro Fujita

On 2016/06/27 18:56, Ashutosh Bapat wrote:

On Mon, Jun 27, 2016 at 3:06 PM, Etsuro Fujita
<fujita.ets...@lab.ntt.co.jp> wrote:



I found another bug in error handling of whole-row references in
join pushdown; conversion_error_callback fails to take into account
that get_relid_attribute_name(Oid relid, AttrNumber attnum) can't
handle whole-row references (ie, attnum=0), in which case that would
cause cache lookup errors.  Attached is a small patch to address
this issue.



Do you have any testcase reproducing the bug here? It would be good to
include that test in the regression.


Done.


There is always a possibility that a user would create a table (which
can be used as a target for the foreign table) with a column named
'wholerow', in which case s/he will be confused by this error message.


By grepping I found that there are error messages that use "whole-row
table reference", "whole-row reference", or "wholerow", so the use of
"wholerow" seems reasonable to me.  (And IMO "wholerow" would most likely
fit into that errcontext().)


Best regards,
Etsuro Fujita


postgres-fdw-conv-error-callback-v2.patch
Description: binary/octet-stream



Re: [HACKERS] Broken handling of lwlocknames.h

2016-06-27 Thread Michael Paquier
On Tue, Jun 28, 2016 at 3:22 AM, Christoph Berg  wrote:
> Re: Tom Lane 2016-06-27 <31398.1467036...@sss.pgh.pa.us>
>> Bjorn Munch reported off-list that this sequence:
>>
>> unpack tarball, cd into it
>> ./configure ...
>> cd src/test/regress
>> make
>>
>> no longer works in 9.6beta2, where it did work in previous releases.
>> I have confirmed both statements.  The failure looks like
>>
>> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
>> -Wendif-labels -Wmissing-format-attribute -Wformat-security 
>> -fno-strict-aliasing -fwrapv -g -O1 -fpic -I../../../src/include 
>> -D_GNU_SOURCE   -c -o regress.o regress.c
>> In file included from ../../../src/include/storage/lock.h:23,
>>  from ../../../src/include/access/heapam.h:22,
>>  from ../../../src/include/nodes/execnodes.h:18,
>>  from ../../../src/include/commands/trigger.h:17,
>>  from regress.c:29:
>> ../../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h: 
>> No such file or directory
>> make: *** [regress.o] Error 1
>>
>> So this is some sort of fallout from commit aa65de042f582896, which
>> invented that as a generated file.
>>
>> Perhaps the solution is to extend src/test/regress/GNUmakefile to know
>> about this file explicitly (as it already does know about
>> src/port/pg_config_paths.h).  But that seems rather brute-force; in
>> particular it seems like that does nothing to keep us from getting burnt
>> again the same way in future.  I wonder if we should modify
>> src/backend/Makefile so that it exposes a phony target for "update all the
>> generated include files", and then have src/test/regress/GNUmakefile call
>> that.  Or maybe there are other ways.
>
> That would also fix the "build plpython3 only" problem I was aiming at
> in https://www.postgresql.org/message-id/20150916200959.gb32...@msg.df7cb.de
>
> So another +1 from a packagers perspective.

Yes, that would indeed be cleaner. I have poked a bit at this and ended up
with the attached, which defines some rules to generate all the needed
files. But that does not seem to be enough: on OSX, for example, the build
still fails because it cannot find the postgres binary at
src/backend/postgres. Does somebody have an idea what this is about? It
seems that we have two problems here.
-- 
Michael


makefile-generate.patch
Description: invalid/octet-stream



Re: [HACKERS] fixing subplan/subquery confusion

2016-06-27 Thread Tom Lane
Amit Kapila  writes:
> I had couple of questions [1] related to that patch.  See if you find
> those as relevant?

I do not think those cases are directly relevant: you're talking about
appendrels not single, unflattened RTE_SUBQUERY rels.

In the subquery case, my view of how it ought to work is that Paths coming
up from the subquery would be marked as not-parallel-safe if they contain
references to unsafe functions.  It might be that that doesn't happen for
free, but my guess is that it would already work that way given a change
similar to what I proposed.

In the appendrel case, I tend to agree that the easiest solution is to
scan all the children of the appendrel and just mark the whole thing as
not consider_parallel if any of them have unsafe functions.
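
To make the appendrel idea concrete, here is a rough sketch using the
9.6-era planner structures; clear_parallel_if_child_unsafe is a
hypothetical helper, not existing code, and the per-child checks mirror
what set_rel_consider_parallel already does for base rels:

    #include "postgres.h"
    #include "nodes/relation.h"
    #include "optimizer/clauses.h"

    /* hypothetical helper: clear consider_parallel on an appendrel parent
     * if any of its children carries a parallel-unsafe expression */
    static void
    clear_parallel_if_child_unsafe(RelOptInfo *parentrel, List *child_rels)
    {
        ListCell   *lc;

        if (!parentrel->consider_parallel)
            return;             /* already known unsafe */

        foreach(lc, child_rels)
        {
            RelOptInfo *childrel = (RelOptInfo *) lfirst(lc);

            if (!childrel->consider_parallel ||
                has_parallel_hazard((Node *) childrel->reltarget->exprs, false) ||
                has_parallel_hazard((Node *) childrel->baserestrictinfo, false))
            {
                parentrel->consider_parallel = false;
                break;
            }
        }
    }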

regards, tom lane




Re: [HACKERS] Rename max_parallel_degree?

2016-06-27 Thread Amit Kapila
On Mon, Jun 27, 2016 at 10:35 PM, Julien Rouhaud
 wrote:
> On 27/06/2016 07:18, Amit Kapila wrote:
>> On Mon, Jun 27, 2016 at 10:21 AM, Amit Kapila  
>> wrote:
>>>
>>> I think as the parallel_terminate_count is only modified by postmaster
>>> and read by other processes, such an operation will be considered
>>> atomic.  We don't need to specifically make it atomic unless multiple
>>> processes are allowed to modify such a counter.  Although, I think we
>>> need to use some barriers in code to make it safe.
>>>
>>> 1.
>>> @@ -272,6 +279,8 @@ BackgroundWorkerStateChange(void)
>>>   pg_memory_barrier();
>>>
>>> slot->pid = 0;
>>>   slot->in_use = false;
>>> + if (slot->parallel)
>>> +
>>> BackgroundWorkerData->parallel_terminate_count++;
>>>
>>> I think the check of slot->parallel and increment to
>>> parallel_terminate_count should be done before marking slot->in_use to
>>> false.  Consider the case if slot->in_use is marked as false and then
>>> before next line execution, one of the backends registers dynamic
>>> worker (non-parallel worker), so that
>>> backend can see this slot as free and use it.  It will also mark the
>>> parallel flag of slot as false.  Now when postmaster will check the
>>> slot->parallel flag, it will find it false and then skip the increment
>>> to parallel_terminate_count.
>>>
>>
>> If you agree with above theory, then you need to use write barrier as
>> well after incrementing the parallel_terminate_count to avoid
>> re-ordering with slot->in_use flag.
>>
>
> I totally agree; I didn't think about this corner case.
>
> There's already a pg_memory_barrier() call in
> BackgroundWorkerStateChange(), to avoid reordering the notify_pid load.
> Couldn't we use it to also make sure the parallel_terminate_count
> increment happens before the slot->in_use store?
>

Yes, that is enough, as a memory barrier ensures that all loads and
stores before it are completed before any loads and stores that come
after it.

>  I guess that a write
> barrier will be needed in ForgetBackgroundWorker().
>

Yes.
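
Putting this sub-thread together, a hedged sketch of the ordering under
discussion for BackgroundWorkerStateChange() (slot->parallel and
parallel_terminate_count come from the patch being reviewed, not from
committed code):

    /* count the terminated parallel worker before freeing the slot */
    if (slot->parallel)
        BackgroundWorkerData->parallel_terminate_count++;

    /*
     * The barrier that is already here (to order the bgw_notify_pid load)
     * also keeps the increment above from being reordered past the stores
     * below, so the slot can never be seen as free before the terminated
     * parallel worker has been counted.
     */
    pg_memory_barrier();

    slot->pid = 0;
    slot->in_use = false;

ForgetBackgroundWorker() has no such barrier today, so the equivalent
change there would presumably need an explicit pg_write_barrier() between
the increment and the slot->in_use store.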

>>> 2.
>>> + if (parallel && (BackgroundWorkerData->parallel_register_count -
>>> +
>>> BackgroundWorkerData->parallel_terminate_count) >=
>>> +
>>> max_parallel_workers)
>>> + {
>>> + LWLockRelease(BackgroundWorkerLock);
>>> + return
>>> false;
>>> + }
>>>+
>>>
>>> I think we need a read barrier here, so that this check doesn't get
>>> reordered with the for loop below it.
>
> You mean between the end of this block and the for loop?
>

Yes.

>>>  Also, see if you find the code
>>> more readable by moving the part of the check after && to the next line.
>
> I think I'll just pgindent the file.
>

Makes sense.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] fixing subplan/subquery confusion

2016-06-27 Thread Amit Kapila
On Tue, Jun 28, 2016 at 1:42 AM, Robert Haas  wrote:
> On Mon, Jun 27, 2016 at 4:03 PM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> On Sun, Jun 26, 2016 at 10:39 PM, Noah Misch  wrote:
 On Mon, Jun 13, 2016 at 04:46:19PM -0400, Tom Lane wrote:
 The above-described topic is currently a PostgreSQL 9.6 open item ("fix
 possible confusion between subqueries and subplans").
>>
>>> This open item comes with a patch submitted by Tom Lane.  If Tom wants
>>> me to review and (if no problems are found) commit that patch to
>>> resolve this open item, I'm willing to do that.  But generally I don't
>>> commit patches submitted by other committers unless that person and I
>>> have agreed on it in advance, which is not currently the case here.
>>
>>> Tom, do you want to commit this, or do you want me to handle it, or
>>> something else?
>>
>> I was not sure if you'd agreed that the patch was correct, and in any
>> case I thought you wanted to fold it into the upperrel consider_parallel
>> patch.  I feel no great need to commit it myself, if that's what you
>> meant.
>
> OK, I'll set aside some time for further review and either commit it
> or send an update by COB Thursday US time.
>

I had couple of questions [1] related to that patch.  See if you find
those as relevant?


[1] - 
https://www.postgresql.org/message-id/CAA4eK1%2ByGs-onuJDy%2BTTqnrnT0hty_QQPC1GipS%2Bce-W3872QQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Michael Paquier
On Tue, Jun 28, 2016 at 3:21 AM, Andreas Karlsson  wrote:
> Yes, we could do that, but I do not think we should check for the existence
> of a backwards compatibility macro. Actually I think we may want to skip
> much of the OpenSSL initialization code when compiling against OpenSSL 1.1
> since they have now added automatic initialization of the library. Instead I
> think we should check for something we actually will use like SSL_CTX_new().

Agreed. Changing the routine being checked may be a good idea in this
case, and we surely want to check for something that is used in both the
frontend and the backend. So why not something more generic like SSL_read
or SSL_write?
-- 
Michael




Re: [HACKERS] How to kill a Background worker and Its metadata

2016-06-27 Thread Craig Ringer
On 28 June 2016 at 02:27, Akash Agrawal  wrote:

> Hi,
>
> I've created a background worker and I am using Postgresql-9.4. This
> bgworker handles the job queue dynamically and goes to sleep if there is no
> job to process within the next 1 hour.
>
> Now, I want to have a mechanism to wake the bgworker up in case if someone
> adds a new job while the bgworker is in sleep mode. So to do it, I have
> created a trigger which initially removes the existing background worker
> and then registers a new one.
>

Don't do that.

Instead, set the background worker's process latch, which you can find in
the PGPROC array. There are a variety of examples of how to set latches in
the sources.
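
A minimal sketch of that approach with the 9.4-era latch API (wake_worker
and worker_wait are illustrative helpers; the worker's pid is assumed to
be known, e.g. advertised in shared memory):

    #include "postgres.h"
    #include "storage/latch.h"
    #include "storage/proc.h"
    #include "storage/procarray.h"

    /* called from the trigger side to wake the sleeping worker */
    static void
    wake_worker(int worker_pid)
    {
        PGPROC *proc = BackendPidGetProc(worker_pid);

        if (proc != NULL)
            SetLatch(&proc->procLatch); /* interrupts the worker's WaitLatch() */
    }

    /* in the worker's main loop, instead of a plain one-hour sleep */
    static void
    worker_wait(void)
    {
        WaitLatch(&MyProc->procLatch,
                  WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                  3600 * 1000L);        /* up to one hour */
        ResetLatch(&MyProc->procLatch);
    }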

>
> I am retrieving the pid from pg_Stat_activity. The maximum number of
> background worker that can run simultaneously is equal to 8.  I think even
> if I call pg_terminate_backend the metadata of the background worker is not
> being deleted
>

Correct. Unless you register it as a dynamic bgworker with no automatic
restart, it'll get restarted when it exits uncleanly.

Have the worker call proc_exit(0) if you want it not to be restarted.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] How to kill a Background worker and Its metadata

2016-06-27 Thread Michael Paquier
On Tue, Jun 28, 2016 at 3:27 AM, Akash Agrawal  wrote:
> I've created a background worker and I am using Postgresql-9.4. This
> bgworker handles the job queue dynamically and goes to sleep if there is no
> job to process within the next 1 hour.
>
> Now, I want to have a mechanism to wake the bgworker up in case if someone
> adds a new job while the bgworker is in sleep mode. So to do it, I have
> created a trigger which initially removes the existing background worker and
> then registers a new one. I am using the following two queries inside it:

Why don't you just register and use a signal in this case? You could
even do something with SIGHUP...
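
For completeness, a rough sketch of the signal-based variant, modeled on
the worker_spi style of setup (handler and flag names are illustrative):

    #include "postgres.h"
    #include <signal.h>
    #include "postmaster/bgworker.h"
    #include "storage/latch.h"
    #include "storage/proc.h"

    static volatile sig_atomic_t got_sighup = false;

    static void
    worker_sighup_handler(SIGNAL_ARGS)
    {
        int save_errno = errno;

        got_sighup = true;
        SetLatch(&MyProc->procLatch);   /* wake the main loop's WaitLatch() */

        errno = save_errno;
    }

    /* in the worker's main function, before entering its loop: */
    /*   pqsignal(SIGHUP, worker_sighup_handler);               */
    /*   BackgroundWorkerUnblockSignals();                      */

The trigger side would then send SIGHUP to the worker's pid (for example
with kill() from a small C function) instead of terminating and
re-registering the worker.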
-- 
Michael




Re: [HACKERS] Documentation fixes for pg_visibility

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 5:56 PM, Michael Paquier
 wrote:
>> Under what circumstances would you wish to check only one page of a relation?
>
> What I'd like to be able to do is to stop scanning the relation once
> one defective tuple has been found: if there is at least one problem,
> the whole vm needs to be rebuilt anyway. So this function could be
> wrapped in a plpgsql function, for example. That is more flexible than
> directly modifying the function so that it stops at the first problem
> found.

I think most likely the best way to handle this is to teach VACUUM to do
PageClearAllVisible() and visibilitymap_clear() on any page where
VM_ALL_FROZEN(onerel, blkno, &vmbuffer) && !all_frozen.  This would go
well with the existing code to clear incorrectly-set visibility map
bits, and it would allow VACUUM (DISABLE_PAGE_SKIPPING) to serve the
purpose you're talking about here, but more efficiently.
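
Roughly, such a check could sit next to the existing code in
lazy_scan_heap() that clears incorrectly-set visibility map bits; this
sketch reuses the 9.6-era variable names (onerel, blkno, buf, page,
vmbuffer, all_frozen, relname) and is an illustration of the idea, not a
tested patch:

    else if (VM_ALL_FROZEN(onerel, blkno, &vmbuffer) && !all_frozen)
    {
        elog(WARNING, "page with non-frozen tuples is marked as all-frozen in relation \"%s\" page %u",
             relname, blkno);
        PageClearAllVisible(page);
        MarkBufferDirty(buf);
        visibilitymap_clear(onerel, blkno, vmbuffer);
    }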

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> On Mon, Jun 27, 2016 at 5:28 PM, Tom Lane  wrote:
>> Seems to me that it should generally be the case that consider_parallel
>> would already be clear on the parent rel if the tlist isn't parallel safe,
>> and if it isn't we probably have a bug elsewhere.  If it makes you feel
>> better, maybe you could add Assert(!has_parallel_hazard(...)) here?

> I don't see that this is true.  If someone does SELECT
> pg_backend_pid() FROM pgbench_accounts, there's only one RelOptInfo
> and nothing to clear consider_parallel for it anywhere else.

Huh?  The final tlist would go with the final_rel, ISTM, not the scan
relation.  Maybe we have some rejiggering to do to make that true, though.

> I've really been wondering whether there ought to be a separate
> UPPERREL_FOO whose only purpose is to handle applying scanjoin_target
> to the topmost scan/join rel.

That's another way of thinking about it, I guess.

> Effectively, the current code is trying
> to do that transformation in place.  We start with a scan/join rel
> that emits whatever it naturally emits and then frob all of the paths
> and partial paths to emit the scanjoin_target.  But this seems quite
> cumbersome.

It'd be no less cumbersome, because you'd end up with all those same paths
surviving into the FOO upperrel.  But it might be logically cleaner.


(Thinks for a bit...)  Actually, scratch that.  It was very intentional on
my part that different Paths for the same relation might produce different
tlists.  Without that, there's no way that we're going to get a solution
that allows extracting index expression values from index-only scans in
nontrivial plans.  Otherwise I wouldn't have introduced PathTarget to
begin with, because we already had a perfectly good way of representing a
one-size-fits-all result tlist for each RelOptInfo.

So it seems like we should not introduce a dependency here that assumes
that all Paths for a given rel have equivalent parallel_safe settings.
Maybe it'd be okay for the special case of index expressions, because
they are almost certainly going to be parallel safe, but in general it's
a restriction we don't want.

You could still save something by writing code along the line of
if (path->parallel_safe &&
has_parallel_hazard(...))
path->parallel_safe = false;
so as not to run has_parallel_hazard in the case where we already know
we lost.

Doing more than this would probably involve caching parallel-safety
status in PathTarget itself, which is certainly doable but we should
measure first to see if it's worth the trouble.

regards, tom lane




Re: [HACKERS] Documentation fixes for pg_visibility

2016-06-27 Thread Robert Haas
On Thu, Jun 23, 2016 at 12:46 AM, Michael Paquier
 wrote:
> On Thu, Jun 23, 2016 at 1:42 PM, Michael Paquier
>  wrote:
>> While looking at the module I found two mistakes in the docs:
>> pg_visibility_map and pg_visibility *not* taking in input a block
>> number are SRFs, and return a set of records. The documentation is
>> just listing them with "returns record". A patch is attached.
>
> And that: s/PD_ALL_VISIBILE/PD_ALL_VISIBLE.

Committed.  Thanks for the careful proofreading.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Documentation fixes for pg_visibility

2016-06-27 Thread Michael Paquier
On Tue, Jun 28, 2016 at 6:51 AM, Robert Haas  wrote:
> On Thu, Jun 23, 2016 at 12:53 AM, Michael Paquier
>  wrote:
>> On Thu, Jun 23, 2016 at 1:46 PM, Michael Paquier
>>  wrote:
>>> On Thu, Jun 23, 2016 at 1:42 PM, Michael Paquier
>>>  wrote:
 While looking at the module I found two mistakes in the docs:
 pg_visibility_map and pg_visibility *not* taking in input a block
 number are SRFs, and return a set of records. The documentation is
 just listing them with "returns record". A patch is attached.
>>>
>>> And that: s/PD_ALL_VISIBILE/PD_ALL_VISIBLE.
>>
>> And would it actually make sense to have pg_check_frozen(IN regclass,
>> IN blkno) to target only a certain page? Same for pg_check_visible. It
>> would take a long time to run those functions on large tables as they
>> scan all the pages of a relation at once..
>
> Under what circumstances would you wish to check only one page of a relation?

What I'd like to be able to do is to stop scanning the relation once
one defective tuple has been found: if there is at least one problem,
the whole vm needs to be rebuilt anyway. So this function could be
wrapped in a plpgsql function, for example. That is more flexible than
directly modifying the function so that it stops at the first problem
found.
-- 
Michael




Re: [HACKERS] Documentation fixes for pg_visibility

2016-06-27 Thread Robert Haas
On Thu, Jun 23, 2016 at 12:53 AM, Michael Paquier
 wrote:
> On Thu, Jun 23, 2016 at 1:46 PM, Michael Paquier
>  wrote:
>> On Thu, Jun 23, 2016 at 1:42 PM, Michael Paquier
>>  wrote:
>>> While looking at the module I found two mistakes in the docs:
>>> pg_visibility_map and pg_visibility *not* taking in input a block
>>> number are SRFs, and return a set of records. The documentation is
>>> just listing them with "returns record". A patch is attached.
>>
>> And that: s/PD_ALL_VISIBILE/PD_ALL_VISIBLE.
>
> And would it actually make sense to have pg_check_frozen(IN regclass,
> IN blkno) to target only a certain page? Same for pg_check_visible. It
> would take a long time to run those functions on large tables as they
> scan all the pages of a relation at once..

Under what circumstances would you wish to check only one page of a relation?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] A couple of cosmetic changes around shared memory code

2016-06-27 Thread Robert Haas
On Sun, Jun 26, 2016 at 6:19 AM, Piotr Stefaniak
 wrote:
>> while investigating the shm_mq code and its testing module I made some
>> cosmetic improvements there. You can see them in the attached diff file.
>
> Revised patch attached.

The first hunk of this corrects an outdated comment, so we should
certainly apply that.  I'm not seeing what the value of the other bits
is.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [HITB-Announce] HITB2016AMS Videos & GSEC Singapore Voting

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 5:29 PM, Alvaro Herrera
 wrote:
> Robert Haas wrote:
>> On Mon, Jun 20, 2016 at 5:41 PM, Hafez Kamal  wrote:
>> > See you in Singapore!
>>
>> This seems totally off-topic.  Shouldn't a post like this result in a ban?
>
> It is off-topic.  Sorry that it got through.  We get dozens of these
> every week, and the vast majority are rejected; I suppose some moderator
> slipped up (might have been me).

Ah, I didn't realize that this was an ongoing issue.  Thanks for
getting rid of as many of them as you do.

> I see that this is a repeat of an incident you already complained about
> in January 2014.  Will look into what happened exactly, and if a ban is
> warranted, I'll implement that.  Conference spammers have gotten pretty
> annoying ...

Thanks!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 5:28 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Mon, Jun 27, 2016 at 4:49 PM, Tom Lane  wrote:
>>> * Not following what you did to apply_projection_to_path, and the comment
>>> therein isn't helping.
>
>> Gee, I wonder why not?  :-)
>
>> The basic problem here is that applying a projection to a path can
>> render a formerly parallel-safe path no longer parallel-safe.  If we
>> jam a parallel-restricted target list into a formerly parallel-safe
>> path, we'd better also clear path->parallel_safe.  Currently,
>> apply_projection_to_path only needs to call has_parallel_hazard for an
>> input which is a GatherPath, which isn't too expensive because most
>> paths are not GatherPaths and if we get one that is, well, we can hope
>> parallel query will win enough during execution to make up for the
>> extra planning cost.  But if we want the consider_parallel and
>> parallel_safe flags to be set correctly for all upper rels and paths,
>> it seems that we need to do it always - hence the dismayed comment.
>> Thoughts?
>
> Seems to me that it should generally be the case that consider_parallel
> would already be clear on the parent rel if the tlist isn't parallel safe,
> and if it isn't we probably have a bug elsewhere.  If it makes you feel
> better, maybe you could add Assert(!has_parallel_hazard(...)) here?

I don't see that this is true.  If someone does SELECT
pg_backend_pid() FROM pgbench_accounts, there's only one RelOptInfo
and nothing to clear consider_parallel for it anywhere else.

I've really been wondering whether there ought to be a separate
UPPERREL_FOO whose only purpose is to handle applying scanjoin_target
to the topmost scan/join rel.  Effectively, the current code is trying
to do that transformation in place.  We start with a scan/join rel
that emits whatever it naturally emits and then frob all of the paths
and partial paths to emit the scanjoin_target.  But this seems quite
cumbersome.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Improving executor performance

2016-06-27 Thread Bruce Momjian
On Fri, Jun 24, 2016 at 05:25:27PM -0700, Peter Geoghegan wrote:
> On Fri, Jun 24, 2016 at 4:29 PM, Andres Freund  wrote:
> > As a motivation, here's a somewhat juicy example of the benefits that
> > can be gained (disabled parallelism, results vary too much):
> > SELECT l_linestatus, SUM(l_quantity) FROM lineitem GROUP BY 1 ORDER BY 2 
> > DESC LIMIT 10 ;
> > non-optimized: 9075.835 ms
> > optimized: 5194.024 ms
> >
> > and that's far from doing all the possible work for that query;
> > especially the batching is far from optimal.
> 
> That's pretty great. Let me first just say that I think that you
> broadly have the right idea here, and that it will likely be a big
> area to work on in the years ahead. This may become a big, general
> area with many sub-projects, kind of like parallelism. Also, I think
> it's very much complementary to parallelism.

Agreed.  I did put out a blog entry about this in April that has links
to some external projects trying to address this issue:

http://momjian.us/main/blogs/pgblog/2016.html#April_1_2016

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




Re: [HACKERS] parallel workers and client encoding

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 12:53 PM, Robert Haas  wrote:
> On Mon, Jun 13, 2016 at 10:27 PM, Peter Eisentraut
>  wrote:
>> Modulo that last point, here is a patch that shows how I think this could
>> work, in combination with the patch I posted previously that sets the
>> "client encoding" in the parallel worker to the server encoding.
>>
>> This patch disassembles the NotificationResponse message with a temporary
>> client encoding, and then sends it off to the real frontend using the real
>> client encoding.
>>
>> Doing it this way also takes care of a few special cases that
>> NotifyMyFrontEnd() handles, such as a client with protocol version 2 that
>> doesn't expect a payload in the message.
>
> How does this address the concern raised in the last sentence of
> https://www.postgresql.org/message-id/CA+TgmoaAAEXmjVMB4nM=znf_5b9jhuim6q3afro4spt23ti...@mail.gmail.com
> ?  It seems that if an error occurs between the two SetClientEncoding
> calls, the change will persist for the rest of the session, resulting
> in chaos.

Please find attached an a patch for a proposed alternative approach.
This does the following:

1. When the client_encoding GUC is changed in the worker,
SetClientEncoding() is not called.  Thus, GetClientEncoding() in the
worker always returns the database encoding, regardless of how the
client encoding is set.  This is better than your approach of calling
SetClientEncoding() during worker startup, I think, because the worker
could call a parallel-safe function with SET clause, and one of the
GUCs temporarily set could be client_encoding.  That would be stupid,
but somebody could do it.

2. A new function pq_getmsgrawstring() is added.  This is like
pq_getmsgstring() but it does no encoding conversion.
pq_parse_errornotice() is changed to use pq_getmsgrawstring() instead
of pq_getmsgstring().  Because of (1), when the leader receives an
ErrorResponse or NoticeResponse from the worker, it will not have been
subject to encoding conversion; because of this item, the leader will
not try to convert it either when initially parsing it.  So the extra
encoding round-trip is avoided.

3. The changes for NotifyResponse which you proposed are included
here, but with the modification that pq_getmsgrawstring() is used.
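
A sketch of what the pq_getmsgrawstring() helper described in point 2
might look like, modeled on the existing pq_getmsgstring() in
src/backend/libpq/pqformat.c minus the client-to-server conversion
(illustrative, not necessarily the attached patch's exact code):

    const char *
    pq_getmsgrawstring(StringInfo msg)
    {
        char       *str = &msg->data[msg->cursor];
        int         slen;

        /*
         * A StringInfo is guaranteed to have a trailing '\0', so strlen()
         * is safe, but the terminator must lie within the message proper.
         */
        slen = strlen(str);
        if (msg->cursor + slen >= msg->len)
            ereport(ERROR,
                    (errcode(ERRCODE_PROTOCOL_VIOLATION),
                     errmsg("invalid string in message")));
        msg->cursor += slen + 1;

        /* unlike pq_getmsgstring(), no pg_client_to_server() conversion */
        return str;
    }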

This seems to fix your original test case for me, and hopefully all of
the related cases also.  Review is appreciated.  The main thing about
this is that it doesn't rely on being able to make temporary changes
to global variables, thus hopefully making it robust in the face of
non-local transfer of control.

(Official status update: I'll commit this on Thursday unless anyone
reports a problem with it before then.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 088700e..3996897 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -810,7 +810,17 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg)
 		case 'A':/* NotifyResponse */
 			{
 /* Propagate NotifyResponse. */
-pq_putmessage(msg->data[0], &msg->data[1], msg->len - 1);
+int32		pid;
+const char *channel;
+const char *payload;
+
+pid = pq_getmsgint(msg, 4);
+channel = pq_getmsgrawstring(msg);
+payload = pq_getmsgrawstring(msg);
+pq_endmessage(msg);
+
+NotifyMyFrontEnd(channel, payload, pid);
+
 break;
 			}
 
diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index c39ac3a..716f1c3 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -390,9 +390,6 @@ static bool asyncQueueProcessPageEntries(volatile QueuePosition *current,
 			 char *page_buffer);
 static void asyncQueueAdvanceTail(void);
 static void ProcessIncomingNotify(void);
-static void NotifyMyFrontEnd(const char *channel,
- const char *payload,
- int32 srcPid);
 static bool AsyncExistsPendingNotify(const char *channel, const char *payload);
 static void ClearPendingActionsAndNotifies(void);
 
@@ -2076,7 +2073,7 @@ ProcessIncomingNotify(void)
 /*
  * Send NOTIFY message to my front end.
  */
-static void
+void
 NotifyMyFrontEnd(const char *channel, const char *payload, int32 srcPid)
 {
 	if (whereToSendOutput == DestRemote)
diff --git a/src/backend/commands/variable.c b/src/backend/commands/variable.c
index 962d75d..7180be6 100644
--- a/src/backend/commands/variable.c
+++ b/src/backend/commands/variable.c
@@ -755,6 +755,15 @@ assign_client_encoding(const char *newval, void *extra)
 {
 	int			encoding = *((int *) extra);
 
+	/*
+	 * Even though the value of the client_encoding GUC might change within a
+	 * parallel worker, we don't really change the client encoding in that
+	 * case.  For a parallel worker, the client is the leader process, which
+	 * always expects the database encoding from us.
+	 */
+	if (IsParallelWorker())
+		return;
+
 	/* We do not expect an erro

Re: [HACKERS] [HITB-Announce] HITB2016AMS Videos & GSEC Singapore Voting

2016-06-27 Thread Alvaro Herrera
Robert Haas wrote:
> On Mon, Jun 20, 2016 at 5:41 PM, Hafez Kamal  wrote:

> > See you in Singapore!
> 
> This seems totally off-topic.  Shouldn't a post like this result in a ban?

It is off-topic.  Sorry that it got through.  We get dozens of these
every week, and the vast majority are rejected; I suppose some moderator
slipped up (might have been me).

I see that this is a repeat of an incident you already complained about
in January 2014.  Will look into what happened exactly, and if a ban is
warranted, I'll implement that.  Conference spammers have gotten pretty
annoying ...

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> On Mon, Jun 27, 2016 at 4:49 PM, Tom Lane  wrote:
>> * Not following what you did to apply_projection_to_path, and the comment
>> therein isn't helping.

> Gee, I wonder why not?  :-)

> The basic problem here is that applying a projection to a path can
> render a formerly parallel-safe path no longer parallel-safe.  If we
> jam a parallel-restricted target list into a formerly parallel-safe
> path, we'd better also clear path->parallel_safe.  Currently,
> apply_projection_to_path only needs to call has_parallel_hazard for an
> input which is a GatherPath, which isn't too expensive because most
> paths are not GatherPaths and if we get one that is, well, we can hope
> parallel query will win enough during execution to make up for the
> extra planning cost.  But if we want the consider_parallel and
> parallel_safe flags to be set correctly for all upper rels and paths,
> it seems that we need to do it always - hence the dismayed comment.
> Thoughts?

Seems to me that it should generally be the case that consider_parallel
would already be clear on the parent rel if the tlist isn't parallel safe,
and if it isn't we probably have a bug elsewhere.  If it makes you feel
better, maybe you could add Assert(!has_parallel_hazard(...)) here?

I guess this means that apply_projection_to_path would need to clear
parallel_safe if rel->consider_parallel isn't true.  This would correspond
to situations where we are taking a parallel-safe path for a lower-level
relation that has consider_parallel true, and repurposing it for a new
upperrel that has consider_parallel false.  Maybe it'd be better to
not do that but just force use of a separate ProjectionPath if the
parallel_safe flag needs to change.

(I think 8b9d323cb may have made this a little less messy than it
was when you did your draft patch, btw.)

regards, tom lane




Re: [HACKERS] tuplesort.c's copytup_index() is dead code

2016-06-27 Thread Bruce Momjian
On Fri, Jun 24, 2016 at 02:26:18PM -0700, Peter Geoghegan wrote:
> On Fri, Jun 24, 2016 at 2:18 PM, Tom Lane  wrote:
> > Uh, why?  It's not a large amount of code and it seems like removing
> > it puts a fair-size hole in the symmetry of tuplesort's capabilities.
> 
> It's not a small amount of code either.
> 
> Removing the code clarifies the division of labor between COPYTUP()
> routines in general, their callers (tuplesort_putheaptuple() and
> tuplesort_puttupleslot() -- which are also puttuple_common() callers),
> and routines that are similar to those caller routines (in that they
> at least call puttuple_common()) that do not call COPYTUP()
> (tuplesort_putdatum(), and now tuplesort_putindextuplevalues()).
> 
> I believe that this has value. All the extra boilerplate code misleads.

At a minimum we can block out the code with #ifdef NOT_USED.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




Re: [HACKERS] [HITB-Announce] HITB2016AMS Videos & GSEC Singapore Voting

2016-06-27 Thread Robert Haas
On Mon, Jun 20, 2016 at 5:41 PM, Hafez Kamal  wrote:
> Videos from the 7th annual HITB Security Conference are being released
> this week!
>
> HITBSecConf Youtube channel: http://youtube.com/hitbsecconf
>
> Talks from the #HITB2016AMS CommSec track have already been uploaded and
> linked to their individual presentations:
>
> http://conference.hitb.org/hitbsecconf2016ams/commsec-track/
>
> Conference materials:
> http://conference.hitb.org/hitbsecconf2016ams/materials/
>
> ===
>
> On an unrelated note, voting for HITB GSEC talks in Singapore (August
> 25th and 26th) closes on the 30th of June! Two audience voted talks have
> already been added to the agenda:
>
> iOS 10 Kernel Heap Revisited - Stefan Esser
>
> http://gsec.hitb.org/sg2016/sessions/ios-10-kernel-heap-revisited/
>
> Look Mom! I Don't Use Shellcode: A Browser Exploitation Case Study for
> Internet Explorer 11 - Moritz Jodeit
>
> http://gsec.hitb.org/sg2016/sessions/look-mom-i-dont-use-shellcode-a-browser-exploitation-case-study-for-internet-explorer-11/
>
> HITB GSEC Voting: http://gsec.hitb.org/vote/
>
> See you in Singapore!

This seems totally off-topic.  Shouldn't a post like this result in a ban?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 4:49 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Mon, Jun 27, 2016 at 3:35 PM, Tom Lane  wrote:
>>> Oh, I thought you were still actively working on it.  What patch do
>>> you want me to review?
>
>> I'm looking for an opinion on the WIP patch attached to:
>> https://www.postgresql.org/message-id/ca+tgmozwjb9eaibj-meeaq913wkkoz4aq7vqncfrdls9wyh...@mail.gmail.com
>
> OK, I took a quick look.  Some of the details are obsolete due to my
> over-the-weekend hacking on parallel aggregation, but I think generally
> you have the right idea that we should set upperrel consider_parallel
> flags on the basis of "input rel has consider_parallel = true AND there
> are no parallel hazards in operations to be performed at this level".

OK, great.  Thanks.

> A few specific comments:
>
> * Can't we remove wholePlanParallelSafe altogether, in favor of just
> examining best_path->parallel_safe in standard_planner?

Yes, I think we can.  Before the upper planner used paths, we needed a
way to get wholePlanParallelSafe = false even if every generated path
was parallel-safe, but now that all planning stages have paths, we can
get rid of that.

> * In grouping_planner, I'm not exactly convinced that
> final_rel->consider_parallel can just be copied up without any further
> restrictions.  Easiest counterexample is a parallel restricted function in
> LIMIT.

OK, will look.

> * Why have the try_parallel_path flag at all in create_grouping_paths,
> rather than just setting or not setting grouped_rel->consider_parallel?
> If your thinking is that this is dealing with implementation restrictions
> that might not apply to paths added by an extension, then OK, but the
> variable needs to have a less generic name.  Maybe try_parallel_aggregation.

Suppose all of the relevant functions are parallel-safe, but the
aggregates don't have combine functions.  In that case,
consider_parallel ought to be true, because parallel strategies are OK
as a general matter, but the one and only parallel strategy we have
today - two-phase aggregation - will not work.

> * Copy/paste error in comment in create_distinct_paths: s/window/distinct/

OK, will fix.

> * Not following what you did to apply_projection_to_path, and the comment
> therein isn't helping.

Gee, I wonder why not?  :-)

The basic problem here is that applying a projection to a path can
render a formerly parallel-safe path no longer parallel-safe.  If we
jam a parallel-restricted target list into a formerly parallel-safe
path, we'd better also clear path->parallel_safe.  Currently,
apply_projection_to_path only needs to call has_parallel_hazard for an
input which is a GatherPath, which isn't too expensive because most
paths are not GatherPaths and if we get one that is, well, we can hope
parallel query will win enough during execution to make up for the
extra planning cost.  But if we want the consider_parallel and
parallel_safe flags to be set correctly for all upper rels and paths,
it seems that we need to do it always - hence the dismayed comment.
Thoughts?

>> It may not be correct in detail, but I'd like to know whether you
>> think it's going in the generally correct direction and what major
>> concerns you might have before spending more time on it.  Also, I'd
>> like to know whether you think it's something we should try to put
>> into 9.6 or whether you think we should leave it for next cycle.
>
> I think we should try to get this right in 9.6.  For one thing, the
> more stuff we can easily exercise in parallel mode, the more likely
> we are to find bugs before they reach the field.

OK.

(Official status update: I will post an updated version of this patch
by Thursday.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Bug in to_timestamp().

2016-06-27 Thread Bruce Momjian
On Thu, Jun 23, 2016 at 02:00:49PM -0400, Robert Haas wrote:
> > Ok.  I'm having trouble seeing this justified as a bug fix - the docs
> > clearly state our behavior is intentional.  Improved behavior, yes, but
> > that's a feature and we're in beta2.  Please be specific if you believe I've
> > misinterpreted project policies on this matter.
> 
> I would not vote to back-patch a change in this area because I think
> that could break SQL code that works today.  But I think the current
> behavior is pretty well indefensible.  It's neither intuitive nor
> compatible with Oracle.  And there is plenty of existing precedent for
> making this sort of change during the beta period.  We regard it as a
> bug fix which is too dangerous to back-patch; therefore, it is neither
> subject to feature freeze nor does it go into existing stable
> releases.  Whether to treat this particular issue in that particular
> way is something that needs to be hammered out in discussion.  Tom
> raises the valid point that we need to make sure that we've thoroughly
> studied this issue and fixed all of the related cases before
> committing anything -- we don't want to change the behavior in
> 9.6beta, release 9.6, and then have to change it again for 9.7.  But
> there is certainly no project policy which says we can't change this
> now for 9.6 if we decide that (1) the current behavior is wrong and
> (2) we are quite sure we know how to fix it.

If you are not going to back-patch this for compatibility reasons, I
don't think changing it during beta makes sense either, because you risk
breaking applications that have already been tested on earlier 9.6 betas.
This would only make sense if the to_timestamp bugs were new in 9.6.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> On Mon, Jun 27, 2016 at 3:35 PM, Tom Lane  wrote:
>> Oh, I thought you were still actively working on it.  What patch do
>> you want me to review?

> I'm looking for an opinion on the WIP patch attached to:
> https://www.postgresql.org/message-id/ca+tgmozwjb9eaibj-meeaq913wkkoz4aq7vqncfrdls9wyh...@mail.gmail.com

OK, I took a quick look.  Some of the details are obsolete due to my
over-the-weekend hacking on parallel aggregation, but I think generally
you have the right idea that we should set upperrel consider_parallel
flags on the basis of "input rel has consider_parallel = true AND there
are no parallel hazards in operations to be performed at this level".
A few specific comments:

* Can't we remove wholePlanParallelSafe altogether, in favor of just
examining best_path->parallel_safe in standard_planner?

* In grouping_planner, I'm not exactly convinced that
final_rel->consider_parallel can just be copied up without any further
restrictions.  Easiest counterexample is a parallel restricted function in
LIMIT.

* Why have the try_parallel_path flag at all in create_grouping_paths,
rather than just setting or not setting grouped_rel->consider_parallel?
If your thinking is that this is dealing with implementation restrictions
that might not apply to paths added by an extension, then OK, but the
variable needs to have a less generic name.  Maybe try_parallel_aggregation.

* Copy/paste error in comment in create_distinct_paths: s/window/distinct/

* Not following what you did to apply_projection_to_path, and the comment
therein isn't helping.

> It may not be correct in detail, but I'd like to know whether you
> think it's going in the generally correct direction and what major
> concerns you might have before spending more time on it.  Also, I'd
> like to know whether you think it's something we should try to put
> into 9.6 or whether you think we should leave it for next cycle.

I think we should try to get this right in 9.6.  For one thing, the
more stuff we can easily exercise in parallel mode, the more likely
we are to find bugs before they reach the field.

regards, tom lane




[HACKERS] How to kill a Background worker and Its metadata

2016-06-27 Thread Akash Agrawal
Hi,

I've created a background worker and I am using Postgresql-9.4. This
bgworker handles the job queue dynamically and goes to sleep if there is no
job to process within the next 1 hour.

Now, I want to have a mechanism to wake the bgworker up in case someone
adds a new job while the bgworker is asleep. To do that, I have created a
trigger which first removes the existing background worker and then
registers a new one. I am using the following two queries inside it:

select pg_terminate_backend(pid of bgworker);
select worker_spi1_launch(1);

I am retrieving the pid from pg_stat_activity. The maximum number of
background workers that can run simultaneously is 8.  I think that even
if I call pg_terminate_backend, the metadata of the background worker is
not being deleted, and as a result, after 8 insert operations I am no
longer able to register a background worker. Please suggest what can be
done here.

Best,
Akash


Re: [HACKERS] fixing subplan/subquery confusion

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 4:03 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Sun, Jun 26, 2016 at 10:39 PM, Noah Misch  wrote:
>>> On Mon, Jun 13, 2016 at 04:46:19PM -0400, Tom Lane wrote:
>>> The above-described topic is currently a PostgreSQL 9.6 open item ("fix
>>> possible confusion between subqueries and subplans").
>
>> This open item comes with a patch submitted by Tom Lane.  If Tom wants
>> me to review and (if no problems are found) commit that patch to
>> resolve this open item, I'm willing to do that.  But generally I don't
>> commit patches submitted by other committers unless that person and I
>> have agreed on it in advance, which is not currently the case here.
>
>> Tom, do you want to commit this, or do you want me to handle it, or
>> something else?
>
> I was not sure if you'd agreed that the patch was correct, and in any
> case I thought you wanted to fold it into the upperrel consider_parallel
> patch.  I feel no great need to commit it myself, if that's what you
> meant.

OK, I'll set aside some time for further review and either commit it
or send an update by COB Thursday US time.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] fixing subplan/subquery confusion

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> On Sun, Jun 26, 2016 at 10:39 PM, Noah Misch  wrote:
>> On Mon, Jun 13, 2016 at 04:46:19PM -0400, Tom Lane wrote:
>> The above-described topic is currently a PostgreSQL 9.6 open item ("fix
>> possible confusion between subqueries and subplans").

> This open item comes with a patch submitted by Tom Lane.  If Tom wants
> me to review and (if no problems are found) commit that patch to
> resolve this open item, I'm willing to do that.  But generally I don't
> commit patches submitted by other committers unless that person and I
> have agreed on it in advance, which is not currently the case here.

> Tom, do you want to commit this, or do you want me to handle it, or
> something else?

I was not sure if you'd agreed that the patch was correct, and in any
case I thought you wanted to fold it into the upperrel consider_parallel
patch.  I feel no great need to commit it myself, if that's what you
meant.

regards, tom lane




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 3:35 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> I'm not sure how to proceed here.  I have asked Tom several times to
>> look at the WIP patch and offer comments, but he so far has not done
>> so.
>
> Oh, I thought you were still actively working on it.  What patch do
> you want me to review?

I'm looking for an opinion on the WIP patch attached to:

https://www.postgresql.org/message-id/ca+tgmozwjb9eaibj-meeaq913wkkoz4aq7vqncfrdls9wyh...@mail.gmail.com

It may not be correct in detail, but I'd like to know whether you
think it's going in the generally correct direction and what major
concerns you might have before spending more time on it.  Also, I'd
like to know whether you think it's something we should try to put
into 9.6 or whether you think we should leave it for next cycle.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] MinMaxAggPath vs. parallel-safety

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 3:33 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Sun, Jun 26, 2016 at 10:35 PM, Noah Misch  wrote:
>>> The above-described topic is currently a PostgreSQL 9.6 open item ("consider
>>> whether MinMaxAggPath might fail to be parallel-safe").
>
>> Currently, MinMaxAggPath is never parallel-safe; the question is
>> whether we could allow it to be parallel-safe (not, as the current
>> phrasing implies, whether it might ever need to be other than
>> parallel-safe).
>
> Check.
>
>> It appears to me that the answer is "no", because a
>> MinMaxAggPath contains a list of MinMaxAggInfo objects, and there we
>> have this:
>> Param  *param;  /* param for subplan's output */
>> Since subplans aren't passed down to parallel workers, no
>> MinMaxAggPath can be parallel-safe.   Therefore, I think there's
>> nothing to do here right now.  Comments?
>
> Hm.  In principle, this could be made to work, since I don't think it
> would be necessary for the Param's value to pass across process
> boundaries.  (It could be locally generated within a worker, and then also
> consumed within the worker, if the worker's plan looked like a Result with
> a subplan attached.)  However, if we don't even pass down the plan trees
> for subplans, then I agree that it can't work at the moment.

We don't.  See ExecSerializePlan().

> In any case, this is an optimization opportunity not a bug.  If you want
> to kick this can down the road until parallel query is generally smarter
> about subplans, that's OK with me.

I don't really see another option.  I don't think it would be a lot of
work to pass subplans to workers along with the main plan, but finding
all of the places that can then benefit as a result of that change and
figuring out which cases are allowable sounds to me like development
work, not stabilization.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] fixing subplan/subquery confusion

2016-06-27 Thread Robert Haas
On Sun, Jun 26, 2016 at 10:39 PM, Noah Misch  wrote:
> On Mon, Jun 13, 2016 at 04:46:19PM -0400, Tom Lane wrote:
>> Robert Haas  writes:
>> > In practice, we don't yet have the ability for
>> > parallel-safe paths from subqueries to affect planning at higher query
>> > levels, but that's because the pathification stuff landed too late in
>> > the cycle for me to fully react to it.
>>
>> I wonder if that's not just from confusion between subplans and
>> subqueries.  I don't believe any of the claims made in the comment
>> adjusted in the patch below (other than "Subplans currently aren't passed
>> to workers", which is true but irrelevant).  Nested Gather nodes is
>> something that you would need, and already have, a defense for anyway.
>
> [Action required within 72 hours.  This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 9.6 open item ("fix
> possible confusion between subqueries and subplans").  Robert, since you
> committed the patch believed to have created it, you own this open item.  If
> some other commit is more relevant or if this does not belong as a 9.6 open
> item, please let us know.  Otherwise, please observe the policy on open item
> ownership[1] and send a status update within 72 hours of this message.
> Include a date for your subsequent status update.  Testers may discover new
> open items at any time, and I want to plan to get them all fixed well in
> advance of shipping 9.6rc1.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.

This open item comes with a patch submitted by Tom Lane.  If Tom wants
me to review and (if no problems are found) commit that patch to
resolve this open item, I'm willing to do that.  But generally I don't
commit patches submitted by other committers unless that person and I
have agreed on it in advance, which is not currently the case here.

Tom, do you want to commit this, or do you want me to handle it, or
something else?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> I'm not sure how to proceed here.  I have asked Tom several times to
> look at the WIP patch and offer comments, but he so far has not done
> so.

Oh, I thought you were still actively working on it.  What patch do
you want me to review?

regards, tom lane




Re: [HACKERS] MinMaxAggPath vs. parallel-safety

2016-06-27 Thread Tom Lane
Robert Haas  writes:
> On Sun, Jun 26, 2016 at 10:35 PM, Noah Misch  wrote:
>> The above-described topic is currently a PostgreSQL 9.6 open item ("consider
>> whether MinMaxAggPath might fail to be parallel-safe").

> Currently, MinMaxAggPath is never parallel-safe; the question is
> whether we could allow it to be parallel-safe (not, as the current
> phrasing implies, whether it might ever need to be other than
> parallel-safe).

Check.

> It appears to me that the answer is "no", because a
> MinMaxAggPath contains a list of MinMaxAggInfo objects, and there we
> have this:
> Param  *param;  /* param for subplan's output */
> Since subplans aren't passed down to parallel workers, no
> MinMaxAggPath can be parallel-safe.   Therefore, I think there's
> nothing to do here right now.  Comments?

Hm.  In principle, this could be made to work, since I don't think it
would be necessary for the Param's value to pass across process
boundaries.  (It could be locally generated within a worker, and then also
consumed within the worker, if the worker's plan looked like a Result with
a subplan attached.)  However, if we don't even pass down the plan trees
for subplans, then I agree that it can't work at the moment.

In any case, this is an optimization opportunity not a bug.  If you want
to kick this can down the road until parallel query is generally smarter
about subplans, that's OK with me.

regards, tom lane




[HACKERS] fixing consider_parallel for upper planner rels

2016-06-27 Thread Robert Haas
On Sun, Jun 26, 2016 at 10:42 PM, Noah Misch  wrote:
> On Mon, Jun 13, 2016 at 05:27:13PM -0400, Robert Haas wrote:
>> One problem that I've realized that is related to this is that the way
>> that the consider_parallel flag is being set for upper rels is almost
>> totally bogus, which I believe accounts for your complaint at PGCon
>> that force_parallel_mode was not doing as much as it ought to do.
>> When I originally wrote a lot of this logic, there were no upper rels,
>> which led to this code:
>>
>> if (force_parallel_mode == FORCE_PARALLEL_OFF || !glob->parallelModeOK)
>> {
>> glob->parallelModeNeeded = false;
>> glob->wholePlanParallelSafe = false;  /* either false or don't care */
>> }
>> else
>> {
>> glob->parallelModeNeeded = true;
>> glob->wholePlanParallelSafe =
>> !has_parallel_hazard((Node *) parse, false);
>> }
>>
>> The main way that wholePlanParallelSafe is supposed to be cleared is
>> in create_plan:
>>
>> /* Update parallel safety information if needed. */
>> if (!best_path->parallel_safe)
>> root->glob->wholePlanParallelSafe = false;
>>
>> However, at the time that code was written, it was impossible to rely
>> entirely on the latter mechanism.  Since there were no upper rels and
>> no paths for upper plan nodes, you could have the case where every
>> path was parallel-safe but the whole plan was not.  Therefore the
>> code shown above was needed to scan the whole darned plan for
>> parallel-unsafe things.  Post-pathification, this whole thing is
>> pretty bogus: upper rels mostly don't get consider_parallel set at
>> all, which means plans involving upper rels get marked parallel-unsafe
>> even if they are not, which means the wholePlanParallelSafe flag gets
>> cleared in a bunch of cases where it shouldn't.  And on the flip side
>> I think that the first chunk of code quoted above would be totally
>> unnecessary if we were actually setting consider_parallel correctly on
>> the upper rels.
>
> [Action required within 72 hours.  This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 9.6 open item ("set
> consider_parallel correctly for upper rels").  Robert, since you committed the
> patch believed to have created it, you own this open item.  If some other
> commit is more relevant or if this does not belong as a 9.6 open item, please
> let us know.  Otherwise, please observe the policy on open item ownership[1]
> and send a status update within 72 hours of this message.  Include a date for
> your subsequent status update.  Testers may discover new open items at any
> time, and I want to plan to get them all fixed well in advance of shipping
> 9.6rc1.  Consequently, I will appreciate your efforts toward speedy
> resolution.  Thanks.

I'm not sure how to proceed here.  I have asked Tom several times to
look at the WIP patch and offer comments, but he so far has not done
so.  I am reluctant to commit more patches whacking the planner around
post-beta2 without some review from the guy who invented the upper
planner pathification stuff that broke this in the first place.  What
I have here was more or less correct before that.  It could be argued
that this problem should really be attributed to
3fc6e2d7f5b652b417fa6937c34de2438d60fa9f rather than any of the
parallel query commits -- though it's certainly the case that
e06a38965b3bcdaa881e7e06892d4d8ab6c2c980 was the result of massive
fuzzy thinking on my part.  I am quite worried that if I whack this
around some more it's going to break more stuff, and I don't know that
I feel very comfortable doing that without some independent review.

Suggestions?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] MinMaxAggPath vs. parallel-safety

2016-06-27 Thread Robert Haas
On Sun, Jun 26, 2016 at 10:35 PM, Noah Misch  wrote:
>> I looked into this and found that the costs are considered fuzzily the
>> same, and then add_path prefers the slightly-worse path on the grounds
>> that it is marked parallel_safe while the MinMaxAgg path is not.  It seems
>> to me that there is some fuzzy thinking going on there.  On exactly what
>> grounds is a path to be preferred merely because it is parallel safe, and
>> not actually parallelized?  Or perhaps the question to ask is whether a
>> MinMaxAgg path can be marked parallel-safe.
>
> [Action required within 72 hours.  This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 9.6 open item ("consider
> whether MinMaxAggPath might fail to be parallel-safe").  Robert, since you
> committed the patch believed to have created it, you own this open item.  If
> some other commit is more relevant or if this does not belong as a 9.6 open
> item, please let us know.  Otherwise, please observe the policy on open item
> ownership[1] and send a status update within 72 hours of this message.
> Include a date for your subsequent status update.  Testers may discover new
> open items at any time, and I want to plan to get them all fixed well in
> advance of shipping 9.6rc1.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.

It turns out that this open item is phrased incorrectly.  I'll update
the phrasing.

/* A MinMaxAggPath implies use of subplans, so cannot be
parallel-safe */
pathnode->path.parallel_safe = false;

Currently, MinMaxAggPath is never parallel-safe; the question is
whether we could allow it to be parallel-safe (not, as the current
phrasing implies, whether it might ever need to be other than
parallel-safe).   It appears to me that the answer is "no", because a
MinMaxAggPath contains a list of MinMaxAggInfo objects, and there we
have this:

Param  *param;  /* param for subplan's output */

Since subplans aren't passed down to parallel workers, no
MinMaxAggPath can be parallel-safe.   Therefore, I think there's
nothing to do here right now.  Comments?

See also 
https://www.postgresql.org/message-id/ca+tgmoz7wvmpmtsntk+dfdunwmc8kk5putldnzolvj9dvea...@mail.gmail.com

(Official status update: I'll remove this open item in 3 days unless
the above analysis is shown to be incorrect.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Broken handling of lwlocknames.h

2016-06-27 Thread Christoph Berg
Re: Tom Lane 2016-06-27 <31398.1467036...@sss.pgh.pa.us>
> Bjorn Munch reported off-list that this sequence:
> 
> unpack tarball, cd into it
> ./configure ...
> cd src/test/regress
> make
> 
> no longer works in 9.6beta2, where it did work in previous releases.
> I have confirmed both statements.  The failure looks like
> 
> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
> -Wendif-labels -Wmissing-format-attribute -Wformat-security 
> -fno-strict-aliasing -fwrapv -g -O1 -fpic -I../../../src/include 
> -D_GNU_SOURCE   -c -o regress.o regress.c
> In file included from ../../../src/include/storage/lock.h:23,
>  from ../../../src/include/access/heapam.h:22,
>  from ../../../src/include/nodes/execnodes.h:18,
>  from ../../../src/include/commands/trigger.h:17,
>  from regress.c:29:
> ../../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h: 
> No such file or directory
> make: *** [regress.o] Error 1
> 
> So this is some sort of fallout from commit aa65de042f582896, which
> invented that as a generated file.
> 
> Perhaps the solution is to extend src/test/regress/GNUmakefile to know
> about this file explicitly (as it already does know about
> src/port/pg_config_paths.h).  But that seems rather brute-force; in
> particular it seems like that does nothing to keep us from getting burnt
> again the same way in future.  I wonder if we should modify
> src/backend/Makefile so that it exposes a phony target for "update all the
> generated include files", and then have src/test/regress/GNUmakefile call
> that.  Or maybe there are other ways.

That would also fix the "build plpython3 only" problem I was aiming at
in https://www.postgresql.org/message-id/20150916200959.gb32...@msg.df7cb.de

So another +1 from a packagers perspective.

Christoph




Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Andreas Karlsson

On 06/27/2016 08:12 PM, Christoph Berg wrote:

> Re: Andreas Karlsson 2016-06-27 <8a0a5959-0b83-3dc8-d9e7-66ce8c1c5...@proxel.se>
> > > The errors you report make it sound like they broke API compatibility
> > > wholesale.  Was that really their intent?  If so, where are the changes
> > > documented?
> >
> > I do not see that they have documented the removal of the SSL_library_init
> > symbol anywhere. They changed the function into a macro in the following
> > commit. I guess we have to check for some other symbol, like SSL_new.
>
> I'm not an autoconf expert, but as said in the original mail, I could
> get the SSL_library_init check to work, even if that's a macro now:


Yes, we could do that, but I do not think we should check for the 
existence of a backwards compatibility macro. Actually I think we may 
want to skip much of the OpenSSL initialization code when compiling 
against OpenSSL 1.1 since they have now added automatic initialization 
of the library. Instead I think we should check for something we 
actually will use like SSL_CTX_new().
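
For illustration, the kind of conditional initialization that implies (a
sketch only, not the patch I am working on):

#include <openssl/opensslv.h>
#include <openssl/ssl.h>

static void
initialize_ssl_library(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    /* Before 1.1.0 the library must be initialized explicitly. */
    SSL_library_init();
    SSL_load_error_strings();
#else
    /* 1.1.0 and later initialize themselves on first use. */
#endif
}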


Andreas




Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Christoph Berg
Re: Andreas Karlsson 2016-06-27 <8a0a5959-0b83-3dc8-d9e7-66ce8c1c5...@proxel.se>
> > The errors you report make it sound like they broke API compatibility
> > wholesale.  Was that really their intent?  If so, where are the changes
> > documented?
> 
> I do not see that they have documented the removal of the SSL_library_init
> symbol anywhere. They changed the function into a macro in the following
> commit. I guess we have to check for some other symbol, like SSL_new.

I'm not an autoconf expert, but as said in the original mail, I could
get the SSL_library_init check to work, even if that's a macro now:

> -  AC_CHECK_LIB(ssl,SSL_library_init, [], [AC_MSG_ERROR([library 'ssl' is required for OpenSSL])])
> +  AC_CHECK_LIB([ssl],  [SSL_library_init])

(I haven't investigated if that's due to the quoting, the removal of
the extra arguments, or simply because I reran autoreconf.)

> I think much of the above is missing from the release notes I have found. I
> hope they will be more complete at the time of the release. I am working on
> a patch to handle these API changes.
> 
> - https://www.openssl.org/news/openssl-1.1.0-notes.html
> - https://wiki.openssl.org/index.php/1.1_API_Changes

Nod, I was also disappointed when browsing the API changes document,
given it didn't mention any of the problems I was seeing.

Christoph




Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Andreas Karlsson

On 06/27/2016 05:24 PM, Tom Lane wrote:

> Christoph Berg  writes:
>> as reported by Debian's OpenSSL maintainers, PostgreSQL is failing to
>> build against a snapshot of the upcoming 1.1.0 version.
>
> The errors you report make it sound like they broke API compatibility
> wholesale.  Was that really their intent?  If so, where are the changes
> documented?


I do not see that they have documented the removal of the 
SSL_library_init symbol anywhere. They changed the function into a macro 
in the following commit. I guess we have to check for some other symbol, 
like SSL_new.


https://github.com/openssl/openssl/commit/7fa792d14d06cdaca18f225b1d2d8daf8ed24fd7

They have also, which is in the release notes, broken API compatibility 
when they made the BIO and BIO_METHOD structs opaque. This will probably 
require some ugly ugly #ifs or compatibility macros from us.


They also seem to have broken our OpenSSL thread safety callback when 
they added their new threading API and removed the CRYPTO_LOCK define. I 
have reported this in their issue tracker 
(https://github.com/openssl/openssl/issues/1260).


In addition to this there are a couple of deprecated functions 
(DH_generate_parameters() and OPENSSL_config()), but they look pretty 
easy to handle.
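
For example, a sketch of the non-deprecated replacement for
DH_generate_parameters() (the prime_len parameter and the error handling
here are placeholders, not our actual code):

#include <openssl/dh.h>

static DH *
generate_dh_params(int prime_len)
{
    DH *dh = DH_new();

    if (dh == NULL)
        return NULL;
    /* DH_generate_parameters_ex() fills in an existing DH object. */
    if (!DH_generate_parameters_ex(dh, prime_len, DH_GENERATOR_2, NULL))
    {
        DH_free(dh);
        return NULL;
    }
    return dh;
}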


I think much of the above is missing from the release notes I have 
found. I hope they will be more complete at the time of the release. I 
am working on a patch to handle these API changes.


- https://www.openssl.org/news/openssl-1.1.0-notes.html
- https://wiki.openssl.org/index.php/1.1_API_Changes

Andreas




Re: [HACKERS] Rename max_parallel_degree?

2016-06-27 Thread Julien Rouhaud
On 27/06/2016 07:18, Amit Kapila wrote:
> On Mon, Jun 27, 2016 at 10:21 AM, Amit Kapila  wrote:
>>
>> I think as the parallel_terminate_count is only modified by postmaster
>> and read by other processes, such an operation will be considered
>> atomic.  We don't need to specifically make it atomic unless multiple
>> processes are allowed to modify such a counter.  Although, I think we
>> need to use some barriers in code to make it safe.
>>
>> 1.
>> @@ -272,6 +279,8 @@ BackgroundWorkerStateChange(void)
>>   pg_memory_barrier();
>>
>> slot->pid = 0;
>>   slot->in_use = false;
>> + if (slot->parallel)
>> + BackgroundWorkerData->parallel_terminate_count++;
>>
>> I think the check of slot->parallel and increment to
>> parallel_terminate_count should be done before marking slot->in_use to
>> false.  Consider the case if slot->in_use is marked as false and then
>> before next line execution, one of the backends registers dynamic
>> worker (non-parallel worker), so that
>> backend can see this slot as free and use it.  It will also mark the
>> parallel flag of slot as false.  Now when postmaster will check the
>> slot->parallel flag, it will find it false and then skip the increment
>> to parallel_terminate_count.
>>
> 
> If you agree with above theory, then you need to use write barrier as
> well after incrementing the parallel_terminate_count to avoid
> re-ordering with slot->in_use flag.
> 

I totally agree; I didn't think about this corner case.

There's already a pg_memory_barrier() call in
BackgroundWorkerStateChange(), to avoid reordering the notify_pid load.
Couldn't we use it to also make sure the parallel_terminate_count
increment happens before the slot->in_use store?  I guess that a write
barrier will be needed in ForgetBackgroundWorker().
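
For concreteness, the ordering being discussed might look like this inside
ForgetBackgroundWorker() (a fragment to show the ordering only, not the
actual patch):

    /* Count the terminated parallel worker while the slot is still ours... */
    if (slot->parallel)
        BackgroundWorkerData->parallel_terminate_count++;

    /* ...and make that update visible before the slot is marked free. */
    pg_write_barrier();

    slot->pid = 0;
    slot->in_use = false;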

>> 2.
>> + if (parallel && (BackgroundWorkerData->parallel_register_count -
>> + BackgroundWorkerData->parallel_terminate_count) >= max_parallel_workers)
>> + {
>> + LWLockRelease(BackgroundWorkerLock);
>> + return false;
>> + }
>> +
>>
>> I think we need a read barrier here, so that this check doesn't get
>> reordered with the for loop below it.

You mean between the end of this block and the for loop? (Sorry, I'm not
at all familiar with the pg_barrier mechanism).

>>  Also, see if you find the code
>> more readable by moving the after && part of check to next line.

I think I'll just pgindent the file.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Robert Haas
On Mon, Jun 27, 2016 at 2:47 AM, Ashutosh Bapat wrote:
> On Sat, Jun 25, 2016 at 12:44 AM, Robert Haas  wrote:
>> On Wed, Jun 22, 2016 at 4:11 AM, Amit Langote
>>  wrote:
>> >> In an outer join we have to differentiate between a row being null
>> >> (because
>> >> there was no joining row on nullable side) and a non-null row with all
>> >> column values being null. If we cast the whole-row expression to a text
>> >> e.g. r.*::text and test the resultant value for nullness, it gives us
>> >> what
>> >> we want. A null row casted to text is null and a row with all null
>> >> values
>> >> casted to text is not null.
>> >
>> > You are right.  There may be non-null rows with all columns null which
>> > are
>> > handled wrongly (as Rushabh reports) and the hack I proposed is not
>> > right
>> > for.  Especially if from non-nullable side as in the reported case, NULL
>> > test for such a whole-row-var would produce the wrong result.  Casting
>> > to
>> > text as your patch does produces the correct behavior.
>>
>> I agree, but I think we'd better cast to pg_catalog.text instead, just
>> to be safe.  Committed that way.
>
> postgres_fdw resets the search path to pg_catalog while opening connection
> to the server. The reason behind this is explained in deparse.c
>
>  * We assume that the remote session's search_path is exactly "pg_catalog",
>  * and thus we need schema-qualify all and only names outside pg_catalog.

Hmm.  OK, should we revert the schema-qualification part of that
commit, or just leave it alone?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] parallel workers and client encoding

2016-06-27 Thread Robert Haas
On Mon, Jun 13, 2016 at 10:27 PM, Peter Eisentraut wrote:
> Modulo that last point, here is a patch that shows how I think this could
> work, in combination with the patch I posted previously that sets the
> "client encoding" in the parallel worker to the server encoding.
>
> This patch disassembles the NotificationResponse message with a temporary
> client encoding, and then sends it off to the real frontend using the real
> client encoding.
>
> Doing it this way also takes care of a few special cases that
> NotifyMyFrontEnd() handles, such as a client with protocol version 2 that
> doesn't expect a payload in the message.

How does this address the concern raised in the last sentence of
https://www.postgresql.org/message-id/CA+TgmoaAAEXmjVMB4nM=znf_5b9jhuim6q3afro4spt23ti...@mail.gmail.com
?  It seems that if an error occurs between the two SetClientEncoding
calls, the change will persist for the rest of the session, resulting
in chaos.
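
To make the concern concrete, I would expect the temporary switch to be
guarded along these lines (a sketch only, not the patch under review; the
message-reassembly part is elided):

    int         save_encoding = pg_get_client_encoding();

    SetClientEncoding(GetDatabaseEncoding());
    PG_TRY();
    {
        /* ... disassemble and re-send the NotificationResponse here ... */
    }
    PG_CATCH();
    {
        SetClientEncoding(save_encoding);
        PG_RE_THROW();
    }
    PG_END_TRY();
    SetClientEncoding(save_encoding);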

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Bug in to_timestamp().

2016-06-27 Thread Robert Haas
On Fri, Jun 24, 2016 at 5:16 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Fri, Jun 24, 2016 at 12:26 PM, Steve Crawford
>>  wrote:
>>> To me, 2016-02-30 is an invalid date that should generate an error.
>
>> I don't particularly disagree with that, but on the other hand, as
>> mentioned earlier, to_timestamp() is here for Oracle compatibility,
>> and if it doesn't do what Oracle's function does, then (1) it's not
>> useful for people migrating from Oracle and (2) we're making up the
>> behavior out of whole cloth.  I think things that we invent ourselves
>> should reject stuff like this, but in a compatibility function we
>> might want to, say, have compatibility.
>
> Agreed, mostly, but ... how far are we prepared to go on that?  The one
> thing I know about that is different from Oracle and is not something that
> most people would consider clearly wrong is the behavior of the FM prefix.
> We think it's a prefix that modifies only the next format code; they think
> it's a toggle.  If we make that act like Oracle, we will silently break an
> awful lot of applications, and there will be *no way* to write code that
> is correct under both interpretations.  (And no, I do not want to hear
> "let's fix it with a GUC".)  So I'm afraid we're between a rock and a hard
> place on that one --- but if we let that stand, the argument that Oracle's
> to_timestamp should be treated as right by definition loses a lot of air.

Well, I think that you're making several logical jumps that I
personally would decline to make.  First, I don't think every issue
with these functions needs to be handled in the same way as every
other.  Just because we're willing or unwilling to break compatibility
in one area doesn't mean we have to make the same decision in every
case.  We're allowed to take into effect the likely impact of making a
given change in deciding whether it's worth it.  Second, if in one or
more areas we decide that a hard backward compatibility break would be
too painful, then I think it's a good idea to ask ourselves how we
could ease the migration pain for people.  And I'd phrase that as an
open-ended question rather than "should we add a GUC?".

For example, one idea here is to create a to_timestamp_old() function
that retains the existing behavior of to_timestamp() without any
change, and then add a new to_timestamp() function and fix every
Oracle incompatibility we can find as thoroughly as we can do in one
release cycle.  So we fix this whitespace stuff, we fix the FM
modifier, and anything else that comes up, we fix it all.  Then, if
people run into trouble with the new behavior when they upgrade, we
tell them that they can either fix their application or, if they want
the old behavior, they can use to_timestamp_old().  We can also
document the differences between to_timestamp() and to_timestamp_old()
so that people can easily figure out whether those differences are
significant to them.

Another idea is to add an optional third argument to to_timestamp()
that can be used to set compatibility behaviors.

I'm not altogether convinced that it's worth the effort to provide
lots of backward-compatibility here.  Presumably, only a small
percentage of people use to_timestamp(), and only a percentage of
those are going to rely on the details we're talking about changing.
So it might be that if we just up and change this, a few people will
be grumpy and then they'll update their code and that will be it.  On
the other hand, maybe it'll be a real pain in the butt for lots of
people and we'll lose users.  I don't know how to judge how
significant these changes will be to users, and I think that the level
of impact does matter in deciding what to do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Non-text EXPLAIN output for partial aggregation

2016-06-27 Thread Robert Haas
On Sun, Jun 26, 2016 at 4:25 PM, Tom Lane  wrote:
> I noticed that the EXPLAIN code is set up so that in non-text output
> modes, you get output like this for partial-aggregate plans:
>
>"Node Type": "Aggregate",  +
>"Strategy": "Plain",   +
>"Operation": "Finalize",   +
> ...
>"Node Type": "Aggregate",  +
>"Strategy": "Plain",   +
>"Operation": "Partial",+
>
> That is, the "Operation" field type has been commandeered to indicate
> partial-aggregation cases.  This seems like a pretty bad idea to me,
> for two reasons:
>
> 1.  In other plan node types, "Operation" refers to SQL-visible semantics,
> in fact always Select/Insert/Update/Delete.  Re-using it for an
> implementation detail doesn't seem very consistent.
>
> 2.  As coded, the field is not printed at all for a non-partial aggregate
> node.  This is just wrong.  A given node type should have a fixed set of
> attributes.
>
> I think we should use some other field name, maybe "Step" or
> "PartialMode", and have "Simple" or "Plain" as the default field
> contents.  It's also arguable that this field should distinguish all the
> values of the AggSplit enum we just invented, though I'm not sure how
> important it is to report serialization options.  I'm not wedded to any
> particular ideas here, other than that omitting the field in the simple
> case is bad.
>
> Comments, naming ideas?

Simple sounds better than Plain, as between the two of them, because
Plain already means something with respect to aggregates.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Tom Lane
Christoph Berg  writes:
> as reported by Debian's OpenSSL maintainers, PostgreSQL is failing to
> build against a snapshot of the upcoming 1.1.0 version.

The errors you report make it sound like they broke API compatibility
wholesale.  Was that really their intent?  If so, where are the changes
documented?

regards, tom lane




[HACKERS] OpenSSL 1.1 breaks configure and more

2016-06-27 Thread Christoph Berg
Hi,

as reported by Debian's OpenSSL maintainers, PostgreSQL is failing to
build against a snapshot of the upcoming 1.1.0 version. The report was
for 9.5.3, but I can reproduce it in HEAD as well:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=828510
> OpenSSL 1.1.0 is about to be released.  During a rebuild of all packages using
> OpenSSL, this package fails to build.  A log of that build can be found at:
> https://breakpoint.cc/openssl-1.1-rebuild-2016-05-29/Attempted/postgresql-9.5_9.5.3-1_amd64-20160529-1510
> 
> On https://wiki.openssl.org/index.php/1.1_API_Changes you can see various of
> the reasons why it might fail.  There are also updated man pages at
> https://www.openssl.org/docs/manmaster/ that should contain useful
> information.
> 
> There is a libssl-dev package available in experimental that contains a recent
> snapshot; I suggest you try building against that to see if everything works.

$ ./configure --with-openssl
checking for CRYPTO_new_ex_data in -lcrypto... yes
checking for SSL_library_init in -lssl... no
configure: error: library 'ssl' is required for OpenSSL

I can get one step further by tweaking configure.in and running
autoreconf, but then compilation fails further down:

-  AC_CHECK_LIB(ssl,SSL_library_init, [], [AC_MSG_ERROR([library 'ssl' is required for OpenSSL])])
+  AC_CHECK_LIB([ssl],  [SSL_library_init])

make -C common all
make[4]: Entering directory '/home/cbe/projects/postgresql/pg/master/src/backend/access/common'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 
-I../../../../src/include -D_GNU_SOURCE   -c -o printtup.o printtup.c
In file included from ../../../../src/include/libpq/libpq-be.h:25:0,
 from ../../../../src/include/libpq/libpq.h:21,
 from printtup.c:19:
/usr/include/openssl/ssl.h:1740:26: error: expected identifier or ‘(’ before 
numeric constant
 __owur const COMP_METHOD *SSL_get_current_compression(SSL *s);
  ^

Christoph




Re: [HACKERS] Parallelized polymorphic aggs, and aggtype vs aggoutputtype

2016-06-27 Thread Tom Lane
Noah Misch  writes:
> What, if anything, is yet required to close this 9.6 open item?

The original complaint about polymorphic aggs is fixed to my satisfaction.
The business about how non-text-format EXPLAIN reports parallel plans
(<16002.1466972...@sss.pgh.pa.us>) remains, but perhaps we should view
that as an independent issue.

regards, tom lane




Re: [HACKERS] Rethinking representation of partial-aggregate steps

2016-06-27 Thread Tom Lane
David Rowley  writes:
> I can't help wondering how you plan to allow future expansion of
> non-serialized partial aggregates given that in setrefs.c you're
> making a hard assumption that mark_partial_aggref() should always
> receive the SERIAL versions of the aggsplit.

What I was imagining, but didn't bother to implement immediately, is
that we could pass down the appropriate AggSplit value from the plan
node (using the context argument for the mutator function).  planner.c
likewise needs a bit more plumbing to generate such plan nodes in the
first place, so I didn't feel that setrefs.c had to be smarter right now.
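
For concreteness, that plumbing amounts to roughly this much (a sketch with
made-up names, written as a walker here rather than the existing mutator,
and assuming the existing mark_partial_aggref() prototype lives where I
expect):

#include "postgres.h"
#include "nodes/nodeFuncs.h"
#include "optimizer/planmain.h"

typedef struct
{
    AggSplit    aggsplit;       /* the plan node's split mode */
} mark_aggrefs_context;

static bool
mark_aggrefs_walker(Node *node, mark_aggrefs_context *context)
{
    if (node == NULL)
        return false;
    if (IsA(node, Aggref))
        mark_partial_aggref((Aggref *) node, context->aggsplit);
    return expression_tree_walker(node, mark_aggrefs_walker,
                                  (void *) context);
}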

regards, tom lane




[HACKERS] Broken handling of lwlocknames.h

2016-06-27 Thread Tom Lane
Bjorn Munch reported off-list that this sequence:

unpack tarball, cd into it
./configure ...
cd src/test/regress
make

no longer works in 9.6beta2, where it did work in previous releases.
I have confirmed both statements.  The failure looks like

gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -g -O1 -fpic -I../../../src/include -D_GNU_SOURCE  
 -c -o regress.o regress.c
In file included from ../../../src/include/storage/lock.h:23,
 from ../../../src/include/access/heapam.h:22,
 from ../../../src/include/nodes/execnodes.h:18,
 from ../../../src/include/commands/trigger.h:17,
 from regress.c:29:
../../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h: No 
such file or directory
make: *** [regress.o] Error 1

So this is some sort of fallout from commit aa65de042f582896, which
invented that as a generated file.

Perhaps the solution is to extend src/test/regress/GNUmakefile to know
about this file explicitly (as it already does know about
src/port/pg_config_paths.h).  But that seems rather brute-force; in
particular it seems like that does nothing to keep us from getting burnt
again the same way in future.  I wonder if we should modify
src/backend/Makefile so that it exposes a phony target for "update all the
generated include files", and then have src/test/regress/GNUmakefile call
that.  Or maybe there are other ways.

regards, tom lane




Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Ashutosh Bapat
On Mon, Jun 27, 2016 at 3:06 PM, Etsuro Fujita wrote:

> On 2016/06/25 4:14, Robert Haas wrote:
>
>> Committed that way.
>>
>
> Thanks for taking care of this!
>
> I found another bug in error handling of whole-row references in join
> pushdown; conversion_error_callback fails to take into account that
> get_relid_attribute_name(Oid relid, AttrNumber attnum) can't handle
> whole-row references (ie, attnum=0), in which case that would cause cache
> lookup errors.  Attached is a small patch to address this issue.
>

Do you have a testcase reproducing the bug here? It would be good to
include that test in the regression suite.

There is always a possibility that a user would create a table (which can
be used as the target for the foreign table) with a column named 'wholerow',
in which case s/he will be confused by this error message.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-27 Thread Etsuro Fujita

On 2016/06/25 4:14, Robert Haas wrote:

> Committed that way.


Thanks for taking care of this!

I found another bug in error handling of whole-row references in join 
pushdown; conversion_error_callback fails to take into account that 
get_relid_attribute_name(Oid relid, AttrNumber attnum) can't handle 
whole-row references (ie, attnum=0), in which case that would cause 
cache lookup errors.  Attached is a small patch to address this issue.
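
The shape of the fix is roughly this (a sketch; the variable names and the
exact wording of the fallback are assumptions, not quotes from the patch):

    /* Only ask the catalogs for a column name when we have a real attnum;
     * attnum = 0 means a whole-row reference, for which no pg_attribute
     * entry exists. */
    if (attnum > 0)
        attname = get_relid_attribute_name(relid, attnum);
    else
        attname = "whole-row reference";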


Best regards,
Etsuro Fujita


postgres-fdw-conv-error-callback.patch
Description: binary/octet-stream



Re: [HACKERS] Cleanup in contrib/postgres_fdw/deparse.c

2016-06-27 Thread Etsuro Fujita

On 2016/06/25 3:39, Robert Haas wrote:

> Seems like a good cleanup.  Committed.


Thanks for committing the patch!

Best regards,
Etsuro Fujita



