Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-04-09 Thread Etsuro Fujita

(2018/04/07 4:17), Andres Freund wrote:

On 2018-03-05 17:07:10 -0500, Tom Lane wrote:

Meanwhile, I'm back to wondering what could possibly have affected
the planner's estimates, if pg_proc and pg_statistic didn't change.
I confess bafflement ... but we've now eliminated the autovacuum-
did-it theory entirely, so it's time to start looking someplace else.
I wonder if something in the postgres_fdw remote join machinery
is not as deterministic as it should be.


I wonder if temporarily changing postgres_fdw's test to specify an extra
config that installs auto_explain in full aggressiveness (i.e. including
costs etc) and enables debug3 logging could help narrow this down?


+1.  We can't rule out the possibility that the plan instability is
caused by some such unexpected behavior in postgres_fdw.


Best regards,
Etsuro Fujita



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-04-06 Thread Andres Freund
On 2018-03-05 17:07:10 -0500, Tom Lane wrote:
> Joe Conway  writes:
> > On 03/05/2018 11:19 AM, Tom Lane wrote:
> >> Joe, I wonder if you could add "log_autovacuum_min_duration = 0" to
> >> rhinoceros' extra_config options, temporarily?  Correlating that log
> >> output with the log_statement output from the test proper would let
> >> us confirm or deny whether it's autovacuum.
> 
> > Done just now. Do you want me to force a run?
> 
> Both of these runs
> 
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-03-05%2020%3A35%3A00
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-03-05%2021%3A45%3A02
> 
> appear to exonerate autovacuum, and the second seems to destroy the
> behind-the-scenes-ANALYZE theory entirely, since there was no change
> in the outputs of the extra instrumentation queries.  (In theory
> there's a window for ANALYZE to have occurred in between, but that's
> not very credible.)
> 
> So you can revert the rhinoceros config change if you like --- thanks
> for making it so quickly!
> 
> Meanwhile, I'm back to wondering what could possibly have affected
> the planner's estimates, if pg_proc and pg_statistic didn't change.
> I confess bafflement ... but we've now eliminated the autovacuum-
> did-it theory entirely, so it's time to start looking someplace else.
> I wonder if something in the postgres_fdw remote join machinery
> is not as deterministic as it should be.

I wonder if temporarily changing postgres_fdw's test to specify an extra
config that installs auto_explain in full aggressiveness (i.e. including
costs etc) and enables debug3 logging could help narrow this down?
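
For concreteness, the postgresql.conf fragment for such an extra config
could look roughly like the following (a sketch only; these are
auto_explain's documented GUCs, and how exactly this gets wired into the
buildfarm/test run is left out):

    # load auto_explain and log every statement's plan
    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = 0
    auto_explain.log_analyze = on            # actual row counts and times
    auto_explain.log_verbose = on            # EXPLAIN VERBOSE detail
    auto_explain.log_nested_statements = on  # statements inside functions too
    # plus verbose server logging
    log_min_messages = debug3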

- Andres



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-05 Thread Joe Conway
On 03/05/2018 02:07 PM, Tom Lane wrote:
> So you can revert the rhinoceros config change if you like --- thanks
> for making it so quickly!

Ok, reverted.

> Meanwhile, I'm back to wondering what could possibly have affected
> the planner's estimates, if pg_proc and pg_statistic didn't change.
> I confess bafflement ... but we've now eliminated the autovacuum-
> did-it theory entirely, so it's time to start looking someplace else.
> I wonder if something in the postgres_fdw remote join machinery
> is not as deterministic as it should be.

Do you want me to do anything manual locally on this VM?

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-05 Thread Tom Lane
Joe Conway  writes:
> On 03/05/2018 11:19 AM, Tom Lane wrote:
>> Joe, I wonder if you could add "log_autovacuum_min_duration = 0" to
>> rhinoceros' extra_config options, temporarily?  Correlating that log
>> output with the log_statement output from the test proper would let
>> us confirm or deny whether it's autovacuum.

> Done just now. Do you want me to force a run?

Both of these runs

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-03-05%2020%3A35%3A00
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-03-05%2021%3A45%3A02

appear to exonerate autovacuum, and the second seems to destroy the
behind-the-scenes-ANALYZE theory entirely, since there was no change
in the outputs of the extra instrumentation queries.  (In theory
there's a window for ANALYZE to have occurred in between, but that's
not very credible.)

So you can revert the rhinoceros config change if you like --- thanks
for making it so quickly!

Meanwhile, I'm back to wondering what could possibly have affected
the planner's estimates, if pg_proc and pg_statistic didn't change.
I confess bafflement ... but we've now eliminated the autovacuum-
did-it theory entirely, so it's time to start looking someplace else.
I wonder if something in the postgres_fdw remote join machinery
is not as deterministic as it should be.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-05 Thread Tom Lane
Joe Conway  writes:
> On 03/05/2018 11:19 AM, Tom Lane wrote:
>> Joe, I wonder if you could add "log_autovacuum_min_duration = 0" to
>> rhinoceros' extra_config options, temporarily?  Correlating that log
>> output with the log_statement output from the test proper would let
>> us confirm or deny whether it's autovacuum.

> Done just now. Do you want me to force a run?

Thanks.  I'm about to push something, so no need for a force.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-05 Thread Joe Conway
On 03/05/2018 11:19 AM, Tom Lane wrote:
> Joe, I wonder if you could add "log_autovacuum_min_duration = 0" to
> rhinoceros' extra_config options, temporarily?  Correlating that log
> output with the log_statement output from the test proper would let
> us confirm or deny whether it's autovacuum.


Done just now. Do you want me to force a run?


Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-05 Thread Tom Lane
I wrote:
> The thing that I find curious, now that we've shut off autovacuum
> altogether on those tables, is that we *still* aren't getting stable
> results.  How can that be?

I spent some time trying to figure out what's going on here.  I've still
not been able to replicate the failure on any of my machines, but I have
learned a thing or two.

On the first query that's still failing on rhinoceros (and has also been
seen to fail on other machines, before the last round of changes), which
is at line 1142 in postgres_fdw.sql in current HEAD:

EXPLAIN (verbose, costs off)
UPDATE ft2 SET c3 = 'baz'
  FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)
  WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1
  RETURNING ft2.*, ft4.*, ft5.*;
-- can't be pushed down

(call this Q1), we are expecting to get a plan of the shape of

->  Nested Loop
      ->  Foreign Scan on public.ft2
            Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE
      ->  Foreign Scan
            Relations: (public.ft4) INNER JOIN (public.ft5)

Now, there is no possible way that the planner would pick that plan if its
estimate of the number of rows out of the ft2 scan was more than 1.
Re-executing the foreign join would have high enough overhead to push the
plan to some other shape.  In fact, probably the shape we see in the
actual plan choice, on the failing machines:

->  Nested Loop
      ->  Foreign Scan
            Relations: (public.ft4) INNER JOIN (public.ft5)
      ->  Materialize
            ->  Foreign Scan on public.ft2
                  Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE

I instrumented costsize.c, and determined that the estimated size of
"S 1"."T 1" WHERE (("C 1" > 2000)), before the clamp_row_est() rounding,
is only 0.1159 rows on my machine.  So there's no way to explain the plan
change as the result of small platform-specific roundoff differences ---
we need something more than a 12X change in the selectivity estimate
before the plan shape would change like this.  There are a bunch of things
going on under the hood, such as that the table's raw rowcount estimate
gets scaled up from the original 1000 rows because it's now got more pages
than before, but none come close to explaining 12X.

However, the planner is working with quite old statistics, dating back to
the manual ANALYZE at postgres_fdw.sql line 93.  If I stick in another
manual ANALYZE just before Q1, I get exactly the same plan change reported
by rhinoceros.  (And underneath, the rowcount estimate for ft2 has gone
from 1 row to 8 rows, which is much closer to the true value of 10 rows,
so the plan change is not surprising.)  What's more, doing this also
reproduces the one other plan change seen later in rhinoceros' output.
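
Concretely, the experiment amounts to inserting something like this just
before Q1 in the test script (a sketch only, reusing the query quoted
above; exactly which table(s) to re-ANALYZE is my assumption here):

    ANALYZE "S 1"."T 1";   -- refresh stats on the heavily-updated remote table
    EXPLAIN (verbose, costs off)
    UPDATE ft2 SET c3 = 'baz'
      FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)
      WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1
      RETURNING ft2.*, ft4.*, ft5.*;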

It is, therefore, very hard to avoid the conclusion that something is
causing an ANALYZE to happen while the script runs, despite our fooling
about with the table's reloptions.  I'm not sure that that something is
autovacuum.  A platform-specific bug in reloptions handling doesn't seem
out of the question, but poking around in the code didn't spot anything
obvious.

Joe, I wonder if you could add "log_autovacuum_min_duration = 0" to
rhinoceros' extra_config options, temporarily?  Correlating that log
output with the log_statement output from the test proper would let
us confirm or deny whether it's autovacuum.

Another thing I'd like to do is temporarily add

select relpages, reltuples from pg_class where relname = 'T 1';

to the test script, both just after the manual ANALYZE and just before Q1.
If we see a change between those reports on any of the affected machines,
we'll know that *something* is changing the stats.  Now the problem with
doing that is that the expected value of relpages is platform-dependent
(I see 11 on 64-bit and 10 on 32-bit on my machines).  We can work around
that, perhaps by capturing the initial value in a temp table and printing
only the delta, but I'm not sure if it's worth the effort as opposed to
just letting it fail on 32-bit critters for a day or two.
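
A minimal sketch of the delta idea (the temp-table name here is just for
illustration):

    -- right after the manual ANALYZE:
    CREATE TEMP TABLE t1_stats_snap AS
      SELECT relpages, reltuples FROM pg_class WHERE relname = 'T 1';
    -- right before Q1: print only the change, which stays zero on every
    -- platform as long as nothing touches the stats in between
    SELECT c.relpages - s.relpages AS relpages_delta,
           c.reltuples - s.reltuples AS reltuples_delta
    FROM pg_class c, t1_stats_snap s
    WHERE c.relname = 'T 1';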

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-04 Thread Tom Lane
Ashutosh Bapat  writes:
> On Mon, Mar 5, 2018 at 8:51 AM, Etsuro Fujita
>  wrote:
>> Could totally outdated stats used in query planning be causing the failures?
>> Would an ANALYZE right before the plan-changing queries fix them?

> I think the testcase file for postgres_fdw has grown really large over
> time, as we have added more pushdown. Since most of the queries in
> that file use the same tables created at the beginning of the file,
> changes somewhere in between (esp. DMLs) affect the following queries.
> It's getting hard to add a test query to that file and expect stable
> results.

The thing that I find curious, now that we've shut off autovacuum
altogether on those tables, is that we *still* aren't getting stable
results.  How can that be?  Yeah, if you add another query in the middle
you might find that changing later results, but for any fixed contents of
the test script, why isn't the buildfarm getting the same answers that the
patch author and committer got?  I could believe different results on
32-bit and 64-bit hardware, but that does not seem to be quite the pattern
we're seeing.

> I think it's time to split the file into at least two: one
> for queries and one for DMLs.

That seems largely irrelevant right at this point.  First we have to
explain the instability.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-04 Thread Ashutosh Bapat
On Mon, Mar 5, 2018 at 8:51 AM, Etsuro Fujita
 wrote:
> (2018/03/03 5:01), Robert Haas wrote:
>>
>> On Fri, Mar 2, 2018 at 1:20 PM, Robert Haas  wrote:
>>>
>>> On Fri, Mar 2, 2018 at 5:35 AM, Etsuro Fujita
>>>   wrote:

 Agreed.  Better safe than sorry, so I disabled autovacuum for all the
 tables
 created in the postgres_fdw regression test, except the ones with no
 data
 modification created in the sections "test handling of collations" and
 "test
 IMPORT FOREIGN SCHEMA".  I'm attaching a patch for that.
>>>
>>>
>>> I have committed this patch.  Whee!
>
>
> Thank you for taking the time on this!
>
>> And that seems to have made things worse.  The failures on rhinoceros
>> look related:
>>
>>
>> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=rhinoceros&br=HEAD
>
>
> Could totally outdated stats used in query planning be causing the failures?
> Would an ANALYZE right before the plan-changing queries fix them?
>
>> And so does this failure on chub:
>>
>>
>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chub&dt=2018-03-02%2016%3A10%3A02
>
>
> The git log shows that that chub run was from before the commit, though.

I think the testcase file for postgres_fdw has grown really large over
time, as we have added more pushdown. Since most of the queries in
that file use the same tables created at the beginning of the file,
changes somewhere in between (esp. DMLs) affect the following queries.
It's getting hard to add a test query to that file and expect stable
results. I think it's time to split the file into at least two: one
for queries and one for DMLs. In the long term, we may want multiple
files, each handling one piece of functionality, like regress/sql/. If
we do so, it will be easier to fix such problems.


-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-04 Thread Etsuro Fujita

(2018/03/03 5:01), Robert Haas wrote:

On Fri, Mar 2, 2018 at 1:20 PM, Robert Haas  wrote:

On Fri, Mar 2, 2018 at 5:35 AM, Etsuro Fujita
  wrote:

Agreed.  Better safe than sorry, so I disabled autovacuum for all the tables
created in the postgres_fdw regression test, except the ones with no data
modification created in the sections "test handling of collations" and "test
IMPORT FOREIGN SCHEMA".  I'm attaching a patch for that.


I have committed this patch.  Whee!


Thank you for taking the time on this!


And that seems to have made things worse.  The failures on rhinoceros
look related:

https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=rhinoceros&br=HEAD


Could totally outdated stats used in query planning be causing the failures?
Would an ANALYZE right before the plan-changing queries fix them?



And so does this failure on chub:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chub&dt=2018-03-02%2016%3A10%3A02


The git log shows that that chub run was from before the commit, though.

Best regards,
Etsuro Fujita



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-02 Thread Robert Haas
On Fri, Mar 2, 2018 at 1:20 PM, Robert Haas  wrote:
> On Fri, Mar 2, 2018 at 5:35 AM, Etsuro Fujita
>  wrote:
>> Agreed.  Better safe than sorry, so I disabled autovacuum for all the tables
>> created in the postgres_fdw regression test, except the ones with no data
>> modification created in the sections "test handling of collations" and "test
>> IMPORT FOREIGN SCHEMA".  I'm attaching a patch for that.
>
> I have committed this patch.  Whee!

And that seems to have made things worse.  The failures on rhinoceros
look related:

https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=rhinoceros&br=HEAD

And so does this failure on chub:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chub&dt=2018-03-02%2016%3A10%3A02

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-02 Thread Robert Haas
On Fri, Mar 2, 2018 at 5:35 AM, Etsuro Fujita
 wrote:
> Agreed.  Better safe than sorry, so I disabled autovacuum for all the tables
> created in the postgres_fdw regression test, except the ones with no data
> modification created in the sections "test handling of collations" and "test
> IMPORT FOREIGN SCHEMA".  I'm attaching a patch for that.

I have committed this patch.  Whee!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-02 Thread Etsuro Fujita

(2018/03/02 3:34), Tom Lane wrote:

Robert Haas  writes:

This is another patch that got past me.  Committed now.



I'd be lying if I said I was filled with confidence that this was
going to fix it, but I don't know what to do other than keep tinkering
with it until we find something that works.


The buildfarm news on this isn't very good; jaguarundi seems happier,
but we've seen intermittent failures on four other critters since
this went in.

The idea of disabling autovacuum seems reasonable to me, but perhaps
it needs to be done to more of the tables.


Agreed.  Better safe than sorry, so I disabled autovacuum for all the 
tables created in the postgres_fdw regression test, except the ones with 
no data modification created in the sections "test handling of 
collations" and "test IMPORT FOREIGN SCHEMA".  I'm attaching a patch for 
that.


Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***
*** 35,43  CREATE TABLE "S 1"."T 1" (
  	c8 user_enum,
  	CONSTRAINT t1_pkey PRIMARY KEY ("C 1")
  );
- -- "S 1"."T 1" will be heavily updated below, so disable autovacuum for
- -- the table to avoid unexpected effects of that
- ALTER TABLE "S 1"."T 1" SET (autovacuum_enabled = 'false');
  CREATE TABLE "S 1"."T 2" (
  	c1 int NOT NULL,
  	c2 text,
--- 35,40 
***
*** 55,60  CREATE TABLE "S 1"."T 4" (
--- 52,62 
  	c3 text,
  	CONSTRAINT t4_pkey PRIMARY KEY (c1)
  );
+ -- Disable autovacuum for these tables to avoid unexpected effects of that
+ ALTER TABLE "S 1"."T 1" SET (autovacuum_enabled = 'false');
+ ALTER TABLE "S 1"."T 2" SET (autovacuum_enabled = 'false');
+ ALTER TABLE "S 1"."T 3" SET (autovacuum_enabled = 'false');
+ ALTER TABLE "S 1"."T 4" SET (autovacuum_enabled = 'false');
  INSERT INTO "S 1"."T 1"
  	SELECT id,
  	   id % 10,
***
*** 6172,6177  ALTER FOREIGN TABLE ft1 DROP CONSTRAINT ft1_c2negative;
--- 6174,6180 
  -- test WITH CHECK OPTION constraints
  -- ===
  CREATE TABLE base_tbl (a int, b int);
+ ALTER TABLE base_tbl SET (autovacuum_enabled = 'false');
  CREATE FOREIGN TABLE foreign_tbl (a int, b int)
SERVER loopback OPTIONS(table_name 'base_tbl');
  CREATE VIEW rw_view AS SELECT * FROM foreign_tbl
***
*** 6232,6237  DROP TABLE base_tbl;
--- 6235,6241 
  -- test serial columns (ie, sequence-based defaults)
  -- ===
  create table loc1 (f1 serial, f2 text);
+ alter table loc1 set (autovacuum_enabled = 'false');
  create foreign table rem1 (f1 serial, f2 text)
server loopback options(table_name 'loc1');
  select pg_catalog.setval('rem1_f1_seq', 10, false);
***
*** 6779,6784  DROP TRIGGER trig_row_after_delete ON rem1;
--- 6783,6790 
  -- ===
  CREATE TABLE a (aa TEXT);
  CREATE TABLE loct (aa TEXT, bb TEXT);
+ ALTER TABLE a SET (autovacuum_enabled = 'false');
+ ALTER TABLE loct SET (autovacuum_enabled = 'false');
  CREATE FOREIGN TABLE b (bb TEXT) INHERITS (a)
SERVER loopback OPTIONS (table_name 'loct');
  INSERT INTO a(aa) VALUES('aaa');
***
*** 6920,6931  DROP TABLE loct;
--- 6926,6941 
  -- Check SELECT FOR UPDATE/SHARE with an inherited source table
  create table loct1 (f1 int, f2 int, f3 int);
  create table loct2 (f1 int, f2 int, f3 int);
+ alter table loct1 set (autovacuum_enabled = 'false');
+ alter table loct2 set (autovacuum_enabled = 'false');
  create table foo (f1 int, f2 int);
  create foreign table foo2 (f3 int) inherits (foo)
server loopback options (table_name 'loct1');
  create table bar (f1 int, f2 int);
  create foreign table bar2 (f3 int) inherits (bar)
server loopback options (table_name 'loct2');
+ alter table foo set (autovacuum_enabled = 'false');
+ alter table bar set (autovacuum_enabled = 'false');
  insert into foo values(1,1);
  insert into foo values(3,3);
  insert into foo2 values(2,2,2);
***
*** 7685,7690  SET enable_partitionwise_join=on;
--- 7695,7702 
  CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a);
  CREATE TABLE fprt1_p1 (LIKE fprt1);
  CREATE TABLE fprt1_p2 (LIKE fprt1);
+ ALTER TABLE fprt1_p1 SET (autovacuum_enabled = 'false');
+ ALTER TABLE fprt1_p2 SET (autovacuum_enabled = 'false');
  INSERT INTO fprt1_p1 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(0, 249, 2) i;
  INSERT INTO fprt1_p2 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(250, 499, 2) i;
  CREATE FOREIGN TABLE ftprt1_p1 PARTITION OF fprt1 FOR VALUES FROM (0) TO (250)
***
*** 7697,7702  ANALYZE fprt1_p2;
--- 7709,7716 
  CREATE TABLE fprt2 (a int, b int, c varchar) PARTITION BY RANGE(b);
  CREATE TABLE fprt2_p1 (LIKE fprt2);
  CREATE TABLE fprt2_p2 (LIKE fprt2);
+ ALTER TABLE 

Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-03-01 Thread Tom Lane
Robert Haas  writes:
> On Tue, Feb 27, 2018 at 5:14 PM, Tom Lane  wrote:
>> Ping?  We're still seeing those failures on jaguarundi.

> This is another patch that got past me.  Committed now.

> I'd be lying if I said I was filled with confidence that this was
> going to fix it, but I don't know what to do other than keep tinkering
> with it until we find something that works.

The buildfarm news on this isn't very good; jaguarundi seems happier,
but we've seen intermittent failures on four other critters since
this went in.

The idea of disabling autovacuum seems reasonable to me, but perhaps
it needs to be done to more of the tables.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-28 Thread Robert Haas
On Tue, Feb 27, 2018 at 5:14 PM, Tom Lane  wrote:
>> I ran the postgres_fdw regression test with no sleep two times in a
>> CLOBBER_CACHE_ALWAYS-enabled build, and then the regression test with
>> the sleep (60 seconds) two times, but I couldn't reproduce that in
>> either case.  I suspect the changes in the order of the RETURNING output
>> there were still caused by autovacuum kicking in partway through the run.
>> So to make the regression test more stable against autovacuum, I'd
>> propose modifying it to disable autovacuum for the remote table
>> (ie "S 1"."T 1") and perform VACUUM ANALYZE on that table manually
>> instead, in hopes of getting that fixed.  Attached is a patch for that.
>> I think the changes added by the previous patch wouldn't make sense
>> anymore, so I removed them.
>
> Ping?  We're still seeing those failures on jaguarundi.

This is another patch that got past me.  Committed now.

I'd be lying if I said I was filled with confidence that this was
going to fix it, but I don't know what to do other than keep tinkering
with it until we find something that works.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-27 Thread Tom Lane
Etsuro Fujita  writes:
> (2018/02/11 6:24), Tom Lane wrote:
>> However, jaguarundi still shows a problem:
>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2018-02-10%2008%3A41%3A32

> I ran the postgres_fdw regression test with no sleep two times in a
> CLOBBER_CACHE_ALWAYS-enabled build, and then the regression test with
> the sleep (60 seconds) two times, but I couldn't reproduce that in
> either case.  I suspect the changes in the order of the RETURNING output
> there were still caused by autovacuum kicking in partway through the run.
> So to make the regression test more stable against autovacuum, I'd
> propose modifying it to disable autovacuum for the remote table
> (ie "S 1"."T 1") and perform VACUUM ANALYZE on that table manually
> instead, in hopes of getting that fixed.  Attached is a patch for that.
> I think the changes added by the previous patch wouldn't make sense
> anymore, so I removed them.

Ping?  We're still seeing those failures on jaguarundi.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-13 Thread Etsuro Fujita

(2018/02/11 6:24), Tom Lane wrote:

However, jaguarundi still shows a problem:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2018-02-10%2008%3A41%3A32

(previous run similar, so it's semi-reproducible even after this patch).
jaguarundi uses -DCLOBBER_CACHE_ALWAYS, so you might try a few repetitions
with that.


I ran the postgres_fdw regression test with no sleep two times in a
CLOBBER_CACHE_ALWAYS-enabled build, and then the regression test with
the sleep (60 seconds) two times, but I couldn't reproduce that in
either case.  I suspect the changes in the order of the RETURNING output
there were still caused by autovacuum kicking in partway through the run.
So to make the regression test more stable against autovacuum, I'd
propose modifying it to disable autovacuum for the remote table
(ie "S 1"."T 1") and perform VACUUM ANALYZE on that table manually
instead, in hopes of getting that fixed.  Attached is a patch for that.
I think the changes added by the previous patch wouldn't make sense
anymore, so I removed them.


Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***
*** 35,40  CREATE TABLE "S 1"."T 1" (
--- 35,43 
  	c8 user_enum,
  	CONSTRAINT t1_pkey PRIMARY KEY ("C 1")
  );
+ -- "S 1"."T 1" will be heavily updated below, so disable autovacuum for
+ -- the table to avoid unexpected effects of that
+ ALTER TABLE "S 1"."T 1" SET (autovacuum_enabled = 'false');
  CREATE TABLE "S 1"."T 2" (
  	c1 int NOT NULL,
  	c2 text,
***
*** 4244,4253  explain (verbose, costs off) select * from ft3 f, loct3 l
  -- ===
  -- test writable foreign table stuff
  -- ===
- -- Autovacuum on the remote side might affect remote estimates,
- -- so use local stats on ft2 as well
- ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'false');
- ANALYZE ft2;
  EXPLAIN (verbose, costs off)
  INSERT INTO ft2 (c1,c2,c3) SELECT c1+1000,c2+100, c3 || c3 FROM ft2 LIMIT 20;
  QUERY PLAN
--- 4247,4252 
***
*** 5524,  UPDATE ft2 SET c3 = 'baz'
FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)
WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1
RETURNING ft2.*, ft4.*, ft5.*;-- can't be pushed down
! QUERY PLAN 
! ---
   Update on public.ft2
 Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3
 Remote SQL: UPDATE "S 1"."T 1" SET c3 = $2 WHERE ctid = $1 RETURNING "C 1", c2, c3, c4, c5, c6, c7, c8
!->  Hash Join
   Output: ft2.c1, ft2.c2, NULL::integer, 'baz'::text, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft5.*, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3
!  Hash Cond: (ft4.c1 = ft5.c1)
   ->  Foreign Scan
!Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3
!Filter: (ft2.c2 === ft4.c1)
!Relations: (public.ft2) INNER JOIN (public.ft4)
!Remote SQL: SELECT r1."C 1", r1.c2, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, r1.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3 FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 3" r2 ON (((r1."C 1" > 2000 FOR UPDATE OF r1
!->  Nested Loop
!  Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3
!  ->  Foreign Scan on public.ft2
!Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid
!Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE
   ->  Foreign Scan on public.ft4
 Output: ft4.*, ft4.c1, ft4.c2, ft4.c3
 Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3"
!  ->  Hash
!Output: ft5.*, 

Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-10 Thread Etsuro Fujita
2018-02-11 6:24 GMT+09:00 Tom Lane :

> Robert Haas  writes:
> > On Fri, Feb 9, 2018 at 1:33 PM, Tom Lane  wrote:
> >> Robert Haas  writes:
> >>> Me neither.  I just ran the postgres_fdw regression tests 713 times in
> >>> a row without a failure.  Tom, since you seem to be able to reproduce
> >>> the problem locally, could you have a look at this proposed fix?
>
> >> I'm a bit busy, but AFAICS it's just a timing thing, so try inserting
> >> a sleep.  The attached is enough to reproduce rhinoceros' results
> >> for me.
>
> > Not for me, but when I pushed the pg_sleep up to 180 seconds, then it
> > failed.
>
> > With the proposed patch, it passes repeatedly for me with no sleep,
> > and also passes for me with the sleep.  So I guess I'll commit this
> > and see what the buildfarm thinks.
>
> FWIW, I ran a thousand cycles of postgres_fdw installcheck without seeing
> further problems.  So this fixes it at least for my configuration.
>

Thank you both for working on this issue!

> However, jaguarundi still shows a problem:
>
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2018-02-10%2008%3A41%3A32
>
> (previous run similar, so it's semi-reproducible even after this patch).
> jaguarundi uses -DCLOBBER_CACHE_ALWAYS, so you might try a few repetitions
> with that.
>

I'll look into this and send a patch by Tuesday.

Best regards,
Etsuro Fujita


Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-10 Thread Tom Lane
Robert Haas  writes:
> On Fri, Feb 9, 2018 at 1:33 PM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> Me neither.  I just ran the postgres_fdw regression tests 713 times in
>>> a row without a failure.  Tom, since you seem to be able to reproduce
>>> the problem locally, could you have a look at this proposed fix?

>> I'm a bit busy, but AFAICS it's just a timing thing, so try inserting
>> a sleep.  The attached is enough to reproduce rhinoceros' results
>> for me.

> Not for me, but when I pushed the pg_sleep up to 180 seconds, then it failed.

> With the proposed patch, it passes repeatedly for me with no sleep,
> and also passes for me with the sleep.  So I guess I'll commit this
> and see what the buildfarm thinks.

FWIW, I ran a thousand cycles of postgres_fdw installcheck without seeing
further problems.  So this fixes it at least for my configuration.
However, jaguarundi still shows a problem:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2018-02-10%2008%3A41%3A32

(previous run similar, so it's semi-reproducible even after this patch).
jaguarundi uses -DCLOBBER_CACHE_ALWAYS, so you might try a few repetitions
with that.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-09 Thread Robert Haas
On Fri, Feb 9, 2018 at 1:33 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> Me neither.  I just ran the postgres_fdw regression tests 713 times in
>> a row without a failure.  Tom, since you seem to be able to reproduce
>> the problem locally, could you have a look at this proposed fix?
>
> I'm a bit busy, but AFAICS it's just a timing thing, so try inserting
> a sleep.  The attached is enough to reproduce rhinoceros' results
> for me.

Not for me, but when I pushed the pg_sleep up to 180 seconds, then it failed.

With the proposed patch, it passes repeatedly for me with no sleep,
and also passes for me with the sleep.  So I guess I'll commit this
and see what the buildfarm thinks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-09 Thread Tom Lane
Robert Haas  writes:
> Me neither.  I just ran the postgres_fdw regression tests 713 times in
> a row without a failure.  Tom, since you seem to be able to reproduce
> the problem locally, could you have a look at this proposed fix?

I'm a bit busy, but AFAICS it's just a timing thing, so try inserting
a sleep.  The attached is enough to reproduce rhinoceros' results
for me.

regards, tom lane

diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 0b2c528..3f8bd6d 100644
*** a/contrib/postgres_fdw/sql/postgres_fdw.sql
--- b/contrib/postgres_fdw/sql/postgres_fdw.sql
*** INSERT INTO ft2 (c1,c2,c3)
*** 1133,1138 
--- 1133,1139 
  EXPLAIN (verbose, costs off)
  UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *;-- can't be pushed down
  UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *;
+ select pg_sleep(60);
  EXPLAIN (verbose, costs off)
  UPDATE ft2 SET c3 = 'baz'
FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)


Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-09 Thread Robert Haas
On Fri, Feb 9, 2018 at 5:24 AM, Etsuro Fujita
 wrote:
> I tried to reproduce that in my environment, but I couldn't.  On reflection
> I think an easy and reliable way to address that concern is to use local
> stats on foreign tables.  Attached is a patch for that.

Me neither.  I just ran the postgres_fdw regression tests 713 times in
a row without a failure.  Tom, since you seem to be able to reproduce
the problem locally, could you have a look at this proposed fix?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-09 Thread Etsuro Fujita

(2018/02/09 10:48), Etsuro Fujita wrote:

(2018/02/09 4:32), Robert Haas wrote:

On Thu, Feb 8, 2018 at 11:05 AM, Tom Lane wrote:

there's still an intermittent issue. I ran "make installcheck" in
contrib/postgres_fdw in a loop, and got a similar failure on the
47th try --- my result duplicates the second plan change shown by
rhinoceros, but not the first one. I speculate that the plan change
is a result of autovacuum kicking in partway through the run.


Will look into this.


I tried to reproduce that in my environment, but I couldn't.  On 
reflection I think an easy and reliable way to address that concern is 
to use local stats on foreign tables.  Attached is a patch for that.


Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***
*** 4244,4249  explain (verbose, costs off) select * from ft3 f, loct3 l
--- 4244,4253 
  -- ===
  -- test writable foreign table stuff
  -- ===
+ -- Autovacuum on the remote side might affect remote estimates,
+ -- so use local stats on ft2 as well
+ ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'false');
+ ANALYZE ft2;
  EXPLAIN (verbose, costs off)
  INSERT INTO ft2 (c1,c2,c3) SELECT c1+1000,c2+100, c3 || c3 FROM ft2 LIMIT 20;
  QUERY PLAN
***
*** 5520,5551  UPDATE ft2 SET c3 = 'baz'
FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)
WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1
RETURNING ft2.*, ft4.*, ft5.*;-- can't be pushed down
!   QUERY PLAN  
! --
   Update on public.ft2
 Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3
 Remote SQL: UPDATE "S 1"."T 1" SET c3 = $2 WHERE ctid = $1 RETURNING "C 1", c2, c3, c4, c5, c6, c7, c8
!->  Nested Loop
   Output: ft2.c1, ft2.c2, NULL::integer, 'baz'::text, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft5.*, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3
!  Join Filter: (ft2.c2 === ft4.c1)
!  ->  Foreign Scan on public.ft2
!Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid
!Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE
   ->  Foreign Scan
!Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3
!Relations: (public.ft4) INNER JOIN (public.ft5)
!Remote SQL: SELECT CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r3.c1, r3.c2, r3.c3 FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (((r2.c1 = r3.c1
!->  Hash Join
!  Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3
!  Hash Cond: (ft4.c1 = ft5.c1)
   ->  Foreign Scan on public.ft4
 Output: ft4.*, ft4.c1, ft4.c2, ft4.c3
 Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3"
!  ->  Hash
!Output: ft5.*, ft5.c1, ft5.c2, ft5.c3
!->  Foreign Scan on public.ft5
!  Output: ft5.*, ft5.c1, ft5.c2, ft5.c3
!  Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4"
  (24 rows)
  
  UPDATE ft2 SET c3 = 'baz'
--- 5524, 
FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1)
WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1
RETURNING ft2.*, ft4.*, ft5.*;-- can't be pushed down
! QUERY PLAN 
! 

Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-08 Thread Etsuro Fujita

(2018/02/09 4:32), Robert Haas wrote:

On Thu, Feb 8, 2018 at 11:05 AM, Tom Lane  wrote:

According to

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-02-08%2001%3A45%3A01

there's still an intermittent issue.  I ran "make installcheck" in
contrib/postgres_fdw in a loop, and got a similar failure on the
47th try --- my result duplicates the second plan change shown by
rhinoceros, but not the first one.  I speculate that the plan change
is a result of autovacuum kicking in partway through the run.


Will look into this.


Hmm.  Maybe inserting an ANALYZE command in the right place would fix it?


Might VACUUMing the right tables in the right place fix it better?
Another idea would be to modify the test cases added by that commit so
that they don't modify the existing tables, to avoid breaking the
existing test cases.
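
For what it's worth, the first idea amounts to something like this right
after the test cases that modify that table (a sketch only; where exactly
to put it is the open question):

    VACUUM ANALYZE "S 1"."T 1";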


Best regards,
Etsuro Fujita



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-08 Thread Robert Haas
On Thu, Feb 8, 2018 at 11:05 AM, Tom Lane  wrote:
> Etsuro Fujita  writes:
>> (2018/02/08 10:40), Robert Haas wrote:
>>> Uggh, I missed the fact that they were doing that.  It's probably
>>> actually useful test coverage, but it's not surprising that it isn't
>>> stable.
>
>> That was my purpose, but I agree about the instability.  Thanks again,
>> Robert!
>
> According to
>
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-02-08%2001%3A45%3A01
>
> there's still an intermittent issue.  I ran "make installcheck" in
> contrib/postgres_fdw in a loop, and got a similar failure on the
> 47th try --- my result duplicates the second plan change shown by
> rhinoceros, but not the first one.  I speculate that the plan change
> is a result of autovacuum kicking in partway through the run.

Hmm.  Maybe inserting an ANALYZE command in the right place would fix it?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-08 Thread Tom Lane
Etsuro Fujita  writes:
> (2018/02/08 10:40), Robert Haas wrote:
>> Uggh, I missed the fact that they were doing that.  It's probably
>> actually useful test coverage, but it's not surprising that it isn't
>> stable.

> That was my purpose, but I agree about the instability.  Thanks again,
> Robert!

According to

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-02-08%2001%3A45%3A01

there's still an intermittent issue.  I ran "make installcheck" in
contrib/postgres_fdw in a loop, and got a similar failure on the
47th try --- my result duplicates the second plan change shown by
rhinoceros, but not the first one.  I speculate that the plan change
is a result of autovacuum kicking in partway through the run.

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-07 Thread Etsuro Fujita

(2018/02/08 10:40), Robert Haas wrote:

On Wed, Feb 7, 2018 at 6:01 PM, Tom Lane  wrote:

The buildfarm's opinion of it is lower than yours.  Just eyeballing
the failures, I'd say there was some naivete about the reproducibility
of tuple CTIDs across different platforms.  Is there a good reason
these test cases need to print CTID?


Uggh, I missed the fact that they were doing that.  It's probably
actually useful test coverage, but it's not surprising that it isn't
stable.


That was my purpose, but I agree about the instability.  Thanks again,
Robert!


Best regards,
Etsuro Fujita



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-07 Thread Etsuro Fujita

(2018/02/08 5:39), Robert Haas wrote:

On Thu, Jan 11, 2018 at 2:58 AM, Etsuro Fujita
  wrote:

(2017/12/27 20:55), Etsuro Fujita wrote:

Attached is an updated version of the patch.


I revised code/comments a little bit.  PFA new version.


I spent a while reading through this today.  I see a few decisions
here or there that are debatable, in the sense that somebody else
might have chosen to do it differently, but I don't see anything that
actually looks wrong.  So, committed.


Thanks for committing, Robert!  Thanks for reviewing, Rushabh and Ashutosh!

Best regards,
Etsuro Fujita



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-07 Thread Robert Haas
On Wed, Feb 7, 2018 at 6:01 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> I spent a while reading through this today.  I see a few decisions
>> here or there that are debatable, in the sense that somebody else
>> might have chosen to do it differently, but I don't see anything that
>> actually looks wrong.  So, committed.
>
> The buildfarm's opinion of it is lower than yours.  Just eyeballing
> the failures, I'd say there was some naivete about the reproducibility
> of tuple CTIDs across different platforms.  Is there a good reason
> these test cases need to print CTID?

Uggh, I missed the fact that they were doing that.  It's probably
actually useful test coverage, but it's not surprising that it isn't
stable.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-07 Thread Tom Lane
Robert Haas  writes:
> I spent a while reading through this today.  I see a few decisions
> here or there that are debatable, in the sense that somebody else
> might have chosen to do it differently, but I don't see anything that
> actually looks wrong.  So, committed.

The buildfarm's opinion of it is lower than yours.  Just eyeballing
the failures, I'd say there was some naivete about the reproducibility
of tuple CTIDs across different platforms.  Is there a good reason
these test cases need to print CTID?

regards, tom lane



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-02-07 Thread Robert Haas
On Thu, Jan 11, 2018 at 2:58 AM, Etsuro Fujita
 wrote:
> (2017/12/27 20:55), Etsuro Fujita wrote:
>> Attached is an updated version of the patch.
>
> I revised code/comments a little bit.  PFA new version.

I spent a while reading through this today.  I see a few decisions
here or there that are debatable, in the sense that somebody else
might have chosen to do it differently, but I don't see anything that
actually looks wrong.  So, committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2018-01-10 Thread Etsuro Fujita
(2017/12/27 20:55), Etsuro Fujita wrote:
> Attached is an updated version of the patch.

I revised code/comments a little bit.  PFA new version.

Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/deparse.c
--- b/contrib/postgres_fdw/deparse.c
***
*** 132,138  static void deparseTargetList(StringInfo buf,
    Bitmapset *attrs_used,
    bool qualify_col,
    List **retrieved_attrs);
! static void deparseExplicitTargetList(List *tlist, List **retrieved_attrs,
  		  deparse_expr_cxt *context);
  static void deparseSubqueryTargetList(deparse_expr_cxt *context);
  static void deparseReturningList(StringInfo buf, PlannerInfo *root,
--- 132,140 
    Bitmapset *attrs_used,
    bool qualify_col,
    List **retrieved_attrs);
! static void deparseExplicitTargetList(List *tlist,
! 		  bool is_returning,
! 		  List **retrieved_attrs,
  		  deparse_expr_cxt *context);
  static void deparseSubqueryTargetList(deparse_expr_cxt *context);
  static void deparseReturningList(StringInfo buf, PlannerInfo *root,
***
*** 168,178  static void deparseLockingClause(deparse_expr_cxt *context);
  static void appendOrderByClause(List *pathkeys, deparse_expr_cxt *context);
  static void appendConditions(List *exprs, deparse_expr_cxt *context);
  static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root,
! 	  RelOptInfo *joinrel, bool use_alias, List **params_list);
  static void deparseFromExpr(List *quals, deparse_expr_cxt *context);
  static void deparseRangeTblRef(StringInfo buf, PlannerInfo *root,
     RelOptInfo *foreignrel, bool make_subquery,
!    List **params_list);
  static void deparseAggref(Aggref *node, deparse_expr_cxt *context);
  static void appendGroupByClause(List *tlist, deparse_expr_cxt *context);
  static void appendAggOrderBy(List *orderList, List *targetList,
--- 170,182 
  static void appendOrderByClause(List *pathkeys, deparse_expr_cxt *context);
  static void appendConditions(List *exprs, deparse_expr_cxt *context);
  static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root,
! 	  RelOptInfo *foreignrel, bool use_alias,
! 	  Index ignore_rel, List **ignore_conds,
! 	  List **params_list);
  static void deparseFromExpr(List *quals, deparse_expr_cxt *context);
  static void deparseRangeTblRef(StringInfo buf, PlannerInfo *root,
     RelOptInfo *foreignrel, bool make_subquery,
!    Index ignore_rel, List **ignore_conds, List **params_list);
  static void deparseAggref(Aggref *node, deparse_expr_cxt *context);
  static void appendGroupByClause(List *tlist, deparse_expr_cxt *context);
  static void appendAggOrderBy(List *orderList, List *targetList,
***
*** 1028,1034  deparseSelectSql(List *tlist, bool is_subquery, List **retrieved_attrs,
  		 * For a join or upper relation the input tlist gives the list of
  		 * columns required to be fetched from the foreign server.
  		 */
! 		deparseExplicitTargetList(tlist, retrieved_attrs, context);
  	}
  	else
  	{
--- 1032,1038 
  		 * For a join or upper relation the input tlist gives the list of
  		 * columns required to be fetched from the foreign server.
  		 */
! 		deparseExplicitTargetList(tlist, false, retrieved_attrs, context);
  	}
  	else
  	{
***
*** 1071,1077  deparseFromExpr(List *quals, deparse_expr_cxt *context)
  	appendStringInfoString(buf, " FROM ");
  	deparseFromExprForRel(buf, context->root, scanrel,
  		  (bms_num_members(scanrel->relids) > 1),
! 		  context->params_list);
  
  	/* Construct WHERE clause */
  	if (quals != NIL)
--- 1075,1081 
  	appendStringInfoString(buf, " FROM ");
  	deparseFromExprForRel(buf, context->root, scanrel,
  		  (bms_num_members(scanrel->relids) > 1),
! 		  (Index) 0, NULL, context->params_list);
  
  	/* Construct WHERE clause */
  	if (quals != NIL)
***
*** 1340,1348  get_jointype_name(JoinType jointype)
   *
   * retrieved_attrs is the list of continuously increasing integers starting
   * from 1. It has same number of entries as tlist.
   */
  static void
! deparseExplicitTargetList(List *tlist, List **retrieved_attrs,
  		  deparse_expr_cxt *context)
  {
  	ListCell   *lc;
--- 1344,1357 
   *
   * retrieved_attrs is the list of continuously increasing integers starting
   * from 1. It has same number of entries as tlist.
+  *
+  * This is used for both SELECT and RETURNING targetlists; the is_returning
+  * parameter is true only for a RETURNING targetlist.
   */
  static void
! deparseExplicitTargetList(List *tlist,
! 		  bool is_returning,
! 		  List **retrieved_attrs,
  		  deparse_expr_cxt *context)
  {
  	ListCell   *lc;
***
*** 1357,1369  deparseExplicitTargetList(List *tlist, List **retrieved_attrs,
  
  		if (i > 0)
  			appendStringInfoString(buf, ", ");
  		deparseExpr((Expr *) tle->expr, context);
  
  		*retrieved_attrs = lappend_int(*retrieved_attrs, i + 1);
  		

postgres_fdw: perform UPDATE/DELETE .. RETURNING on a join directly

2017-12-27 Thread Etsuro Fujita
(2017/04/08 10:28), Robert Haas wrote:
> On Wed, Mar 22, 2017 at 6:20 AM, Etsuro Fujita
>   wrote:
>> On 2017/02/22 19:57, Rushabh Lathia wrote:
>>> Marked this as Ready for Committer.
>>
>> I noticed that this item in the CF app was incorrectly marked as
>> Committed.  This patch isn't committed, so I returned it to the
>> previous status.  I also rebased the patch.  Attached is a new version
>> of the patch.
>
> Sorry, I marked the wrong patch as committed.  Apologies for that.

No problem.  My apologies for the long long delay.

> This doesn't apply any more because of recent changes.
>
> git diff --check complains:
> contrib/postgres_fdw/postgres_fdw.c:3653: space before tab in indent.

I rebased the patch.

> +/* Shouldn't contain the target relation. */
> +Assert(target_rel == 0);
>
> This comment should give a reason.

Done.

>   void
>   deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root,
>  Index rtindex, Relation rel,
> +   RelOptInfo *foreignrel,
>  List *targetlist,
>  List *targetAttrs,
>  List *remote_conds,
>
> Could you add a comment explaining the meaning of these various
> arguments?  It takes rtindex, rel, and foreignrel, which apparently
> are all different things, but the meaning is not explained.

Done.

>   /*
> + * Add a RETURNING clause, if needed, to an UPDATE/DELETE on a join.
> + */
> +static void
> +deparseExplicitReturningList(List *rlist,
> + List **retrieved_attrs,
> + deparse_expr_cxt *context)
> +{
> +deparseExplicitTargetList(rlist, true, retrieved_attrs, context);
> +}
>
> Do we really want to add a function for one line of code?

I don't have any strong opinion about that, so I removed that function
and modified deparseDirectUpdateSql/deparseDirectDeleteSql so that they
call deparseExplicitTargetList directly.

> +/*
> + * Look for conditions mentioning the target relation in the given
join tree,
> + * which will be pulled up into the WHERE clause.  Note that this is
safe due
> + * to the same reason stated in comments in deparseFromExprForRel.
> + */
>
> The comments for deparseFromExprForRel do not seem to address the
> topic of why this is safe.  Also, the answer to the question "safe
> from what?" is not clear.

Maybe my explanation wasn't enough, but I think the reason why this is
safe can be derived from the comments for deparseFromExprForRel.  BUT:
on reflection, I think I made the deparsing logic more complex than
necessary.  So I simplified the logic so that it no longer pulls up
target conditions, and added comments about that.

> -deparseReturningList(buf, root, rtindex, rel, false,
> - returningList, retrieved_attrs);
> +if (foreignrel->reloptkind == RELOPT_JOINREL)
> +deparseExplicitReturningList(returningList,
retrieved_attrs,);
> +else
> +deparseReturningList(buf, root, rtindex, rel, false,
> + returningList, retrieved_attrs);
>
> Why do these cases need to be handled differently?  Maybe add a brief
comment?

The reason for that is 1)

+   /*
+* When performing an UPDATE/DELETE .. RETURNING on a join directly,
+* we fetch from the foreign server any Vars specified in RETURNING
+* that refer not only to the target relation but to non-target
+* relations.  So we'll deparse them into the RETURNING clause
of the
+* remote query;

and 2) deparseReturningList can retrieve Vars of the target relation,
but can't retrieve Vars of non-target relations.

> +if ((outerrel->reloptkind == RELOPT_BASEREL&&
> + outerrel->relid == target_rel) ||
> +(innerrel->reloptkind == RELOPT_BASEREL&&
> + innerrel->relid == target_rel))
>
> 1. Surely it's redundant to check the RelOptKind if the RTI matches?

Good catch!  Revised.

> 2. Generally, the tests in this patch against various RelOptKind
> values should be adapted to use the new macros introduced in
> 7a39b5e4d11229ece930a51fd7cb29e535db4494.

I left some of those checks alone because I think that's stricter.

> The regression tests remove every remaining case where an update or
> delete fails to get pushed to the remote side.  I think we should
> still test that path, because we've still got that code.  Maybe use a
> non-pushable function in the join clause, or something.

Done.
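
(For reference, a sketch of that kind of test -- a locally defined,
non-shippable function used in the condition so the statement can't be
pushed down; the function name here is made up, and the definition in
the actual test may differ:)

    CREATE FUNCTION my_local_abs(int) RETURNS int AS
      $$ BEGIN RETURN abs($1); END; $$ LANGUAGE plpgsql IMMUTABLE;
    EXPLAIN (verbose, costs off)
    UPDATE ft2 SET c3 = 'bar'
      WHERE my_local_abs(c1) > 2000 RETURNING *;  -- can't be pushed down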

> The new test cases could use some brief comments explaining their purpose.

Done.  Also, I think some of the tests in the previous version are
redundant, so I simplified the tests a bit.

>   if (plan->returningLists)
> +{
>   returningList = (List *) list_nth(plan->returningLists,
subplan_index);
>
> +/*
> + * If UPDATE/DELETE on a join, create a RETURNING list used
in the
> + * remote query.
> + */
> +