Re: [HACKERS] path toward faster partition pruning
On Mon, Nov 6, 2017 at 3:31 PM, Amit Langote wrote:
> Attached updated set of patches, including the fix to make the new pruning
> code handle Boolean partitioning.

Hi Amit,

I have tried pruning for different values of the constraint_exclusion GUC. I am not sure exactly how it should behave, but I can see that with a DELETE statement pruning is not happening when constraint_exclusion is off, while SELECT works as expected. Is this expected behaviour?

create table lp (c1 int, c2 text) partition by list(c1);
create table lp1 partition of lp for values in (1,2);
create table lp2 partition of lp for values in (3,4);
create table lp3 partition of lp for values in (5,6);
insert into lp values (1,'p1'),(2,'p1'),(3,'p2'),(4,'p2'),(5,'p3');

show constraint_exclusion;
 constraint_exclusion
----------------------
 partition
(1 row)

explain select c1 from lp where c1 >= 1 and c1 < 2;
                        QUERY PLAN
----------------------------------------------------------
 Append  (cost=0.00..29.05 rows=6 width=4)
   ->  Seq Scan on lp1  (cost=0.00..29.05 rows=6 width=4)
         Filter: ((c1 >= 1) AND (c1 < 2))
(3 rows)

explain delete from lp where c1 >= 1 and c1 < 2;
                        QUERY PLAN
----------------------------------------------------------
 Delete on lp  (cost=0.00..29.05 rows=6 width=6)
   Delete on lp1
   ->  Seq Scan on lp1  (cost=0.00..29.05 rows=6 width=6)
         Filter: ((c1 >= 1) AND (c1 < 2))
(4 rows)

set constraint_exclusion = off;

explain select c1 from lp where c1 >= 1 and c1 < 2;
                        QUERY PLAN
----------------------------------------------------------
 Append  (cost=0.00..29.05 rows=6 width=4)
   ->  Seq Scan on lp1  (cost=0.00..29.05 rows=6 width=4)
         Filter: ((c1 >= 1) AND (c1 < 2))
(3 rows)

explain delete from lp where c1 >= 1 and c1 < 2;
                        QUERY PLAN
----------------------------------------------------------
 Delete on lp  (cost=0.00..87.15 rows=18 width=6)
   Delete on lp1
   Delete on lp2
   Delete on lp3
   ->  Seq Scan on lp1  (cost=0.00..29.05 rows=6 width=6)
         Filter: ((c1 >= 1) AND (c1 < 2))
   ->  Seq Scan on lp2  (cost=0.00..29.05 rows=6 width=6)
         Filter: ((c1 >= 1) AND (c1 < 2))
   ->  Seq Scan on lp3  (cost=0.00..29.05 rows=6 width=6)
         Filter: ((c1 >= 1) AND (c1 < 2))
(10 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
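For anyone reproducing the asymmetry, a minimal sketch against the lp setup above (this only re-runs the statements already shown; exact costs will vary by installation):

```sql
-- Sketch, assuming the lp tables created above.
SET constraint_exclusion = partition;  -- the default setting
EXPLAIN DELETE FROM lp WHERE c1 >= 1 AND c1 < 2;  -- plan touches only lp1
SET constraint_exclusion = off;
EXPLAIN DELETE FROM lp WHERE c1 >= 1 AND c1 < 2;  -- plan touches lp1, lp2 and lp3
-- SELECT pruning is unaffected by the GUC in both cases.
EXPLAIN SELECT c1 FROM lp WHERE c1 >= 1 AND c1 < 2;
```

The question raised above is whether DELETE should keep pruning via the new code path even when constraint_exclusion is off, as SELECT does.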
Re: [HACKERS] path toward faster partition pruning
On Mon, Oct 30, 2017 at 12:20 PM, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:
> In the previous versions, RT index of the table needed to be passed to
> partition.c, which I realized is no longer needed, so I removed that
> requirement from the interface. As a result, patches 0002 and 0003 have
> changed in this version.

Thanks for the fix. I am getting wrong output when the default partition is sub-partitioned further; below is a test case.

CREATE TABLE lpd(a int, b varchar, c float) PARTITION BY LIST (a);
CREATE TABLE lpd_p1 PARTITION OF lpd FOR VALUES IN (1,2,3);
CREATE TABLE lpd_p2 PARTITION OF lpd FOR VALUES IN (4,5);
CREATE TABLE lpd_d PARTITION OF lpd DEFAULT PARTITION BY LIST(a);
CREATE TABLE lpd_d1 PARTITION OF lpd_d FOR VALUES IN (7,8,9);
CREATE TABLE lpd_d2 PARTITION OF lpd_d FOR VALUES IN (10,11,12);
CREATE TABLE lpd_d3 PARTITION OF lpd_d FOR VALUES IN (6,null);
INSERT INTO lpd SELECT i,i,i FROM generate_series(1,12) i;
INSERT INTO lpd VALUES (null,null,null);

--on HEAD
postgres=# EXPLAIN (COSTS OFF) SELECT tableoid::regclass, * FROM lpd WHERE a IS NOT NULL ORDER BY 1;
                  QUERY PLAN
-----------------------------------------------
 Sort
   Sort Key: ((lpd_p1.tableoid)::regclass)
   ->  Result
         ->  Append
               ->  Seq Scan on lpd_p1
                     Filter: (a IS NOT NULL)
               ->  Seq Scan on lpd_p2
                     Filter: (a IS NOT NULL)
               ->  Seq Scan on lpd_d3
                     Filter: (a IS NOT NULL)
               ->  Seq Scan on lpd_d1
                     Filter: (a IS NOT NULL)
               ->  Seq Scan on lpd_d2
                     Filter: (a IS NOT NULL)
(14 rows)

postgres=# SELECT tableoid::regclass, * FROM lpd WHERE a IS NOT NULL ORDER BY 1;
 tableoid | a  | b  | c
----------+----+----+----
 lpd_p1   |  1 | 1  |  1
 lpd_p1   |  2 | 2  |  2
 lpd_p1   |  3 | 3  |  3
 lpd_p2   |  4 | 4  |  4
 lpd_p2   |  5 | 5  |  5
 lpd_d1   |  7 | 7  |  7
 lpd_d1   |  8 | 8  |  8
 lpd_d1   |  9 | 9  |  9
 lpd_d2   | 12 | 12 | 12
 lpd_d2   | 10 | 10 | 10
 lpd_d2   | 11 | 11 | 11
 lpd_d3   |  6 | 6  |  6
(12 rows)

--on HEAD + v8 patches
postgres=# EXPLAIN (COSTS OFF) SELECT tableoid::regclass, * FROM lpd WHERE a IS NOT NULL ORDER BY 1;
                  QUERY PLAN
-----------------------------------------------
 Sort
   Sort Key: ((lpd_p1.tableoid)::regclass)
   ->  Result
         ->  Append
               ->  Seq Scan on lpd_p1
                     Filter: (a IS NOT NULL)
               ->  Seq Scan on lpd_p2
                     Filter: (a IS NOT NULL)
(8 rows)

postgres=# SELECT tableoid::regclass, * FROM lpd WHERE a IS NOT NULL ORDER BY 1;
 tableoid | a | b | c
----------+---+---+---
 lpd_p1   | 1 | 1 | 1
 lpd_p1   | 2 | 2 | 2
 lpd_p1   | 3 | 3 | 3
 lpd_p2   | 4 | 4 | 4
 lpd_p2   | 5 | 5 | 5
(5 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] path toward faster partition pruning
On Fri, Oct 27, 2017 at 2:41 PM, Amit Langote wrote:
> 0001: added some new tests
> 0002: no change
> 0003: fixed issue that Rajkumar reported (cope with Params properly)
> 0004: no change
> 0005: fix the case to prune the default partition when warranted (the
>       issue reported by Beena)

Thanks for the updated patch. I am getting a server crash with the below query.

CREATE TABLE mp (c1 int, c2 int, c3 int) PARTITION BY LIST(c3);
CREATE TABLE mp_p1 PARTITION OF mp FOR VALUES IN (10, 20) PARTITION BY RANGE(c2);
CREATE TABLE mp_p1_1 PARTITION OF mp_p1 FOR VALUES FROM (0) TO (200);
CREATE TABLE mp_p1_2 PARTITION OF mp_p1 FOR VALUES FROM (200) TO (400);
CREATE TABLE mp_p2 PARTITION OF mp FOR VALUES IN (30, 40) PARTITION BY RANGE(c2);
CREATE TABLE mp_p2_1 PARTITION OF mp_p2 FOR VALUES FROM (0) TO (300);
CREATE TABLE mp_p2_2 PARTITION OF mp_p2 FOR VALUES FROM (300) TO (600);

INSERT INTO mp VALUES(10, 100, 10);
INSERT INTO mp VALUES(20, 200, 20);
INSERT INTO mp VALUES(21, 150, 30);
INSERT INTO mp VALUES(30, 200, 40);
INSERT INTO mp VALUES(31, 300, 30);
INSERT INTO mp VALUES(40, 400, 40);

EXPLAIN (COSTS OFF) SELECT tableoid::regclass, * FROM mp WHERE c3 = 40 AND c2 < 300;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] path toward faster partition pruning
On Thu, Oct 26, 2017 at 4:47 PM, Amit Langote wrote:
>
> Meanwhile, attached updated set of patches including fixes for the typos
> you reported in the other message. Updated 0005 fixes the first bug (the
> Case 1 in your email), while other patches 0002-0004 are updated mostly to
> fix the reported typos. A couple of tests are added in 0001 to test the
> default partition case a bit more.

Hi Amit,

While testing this feature further, I got a bug with partitions as foreign tables. Test case given below; please take a look.

CREATE EXTENSION postgres_fdw;
CREATE SERVER fp_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres', port '5432', use_remote_estimate 'true');
CREATE USER MAPPING FOR PUBLIC SERVER fp_server;

CREATE TABLE fplt1 (a int, b int, c text) PARTITION BY LIST(c);
CREATE TABLE fplt1_p1 (a int, b int, c text);
CREATE FOREIGN TABLE ftplt1_p1 PARTITION OF fplt1 FOR VALUES IN ('', '0001', '0002', '0003') SERVER fp_server OPTIONS (TABLE_NAME 'fplt1_p1');
CREATE TABLE fplt1_p2 (a int, b int, c text);
CREATE FOREIGN TABLE ftplt1_p2 PARTITION OF fplt1 FOR VALUES IN ('0004', '0005', '0006', '0007') SERVER fp_server OPTIONS (TABLE_NAME 'fplt1_p2');

INSERT INTO fplt1_p1 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(0, 198, 2) i;
INSERT INTO fplt1_p2 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(200, 398, 2) i;

--PG-HEAD
postgres=# EXPLAIN (COSTS OFF) SELECT t1.c FROM fplt1 t1, LATERAL (SELECT DISTINCT t2.c FROM fplt1 t2 WHERE t2.c = t1.c) q;
                    QUERY PLAN
--------------------------------------------------
 Nested Loop
   ->  Append
         ->  Foreign Scan on ftplt1_p1 t1
         ->  Foreign Scan on ftplt1_p2 t1_1
   ->  Unique
         ->  Append
               ->  Foreign Scan on ftplt1_p1 t2
               ->  Foreign Scan on ftplt1_p2 t2_1
(8 rows)

--PG-HEAD + v5 patches
postgres=# EXPLAIN (COSTS OFF) SELECT t1.c FROM fplt1 t1, LATERAL (SELECT DISTINCT t2.c FROM fplt1 t2 WHERE t2.c = t1.c) q;
ERROR: invalid expression for partition key

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] path toward faster partition pruning
On Mon, Oct 23, 2017 at 1:12 PM, Amit Langote wrote:
> The compiler I have here (gcc (GCC) 6.2.0) didn't complain like that for
> this typedef redefinition introduced by the 0002 patch, but it seems that
> it's not needed anyway, so got rid of that line in the attached updated
> patch.
>
> Fixed one more useless diff in 0002, but no changes in any other patch.

Thanks for the updated patches; I am able to compile them on head. While testing this, I got an observation: pruning is not scanning the default partition, leading to wrong output. Below is a test to reproduce this.

create table rp (a int, b varchar) partition by range (a);
create table rp_p1 partition of rp default;
create table rp_p2 partition of rp for values from (1) to (10);
create table rp_p3 partition of rp for values from (10) to (maxvalue);
insert into rp values (-1,'p1');
insert into rp values (1,'p2');
insert into rp values (11,'p3');

postgres=# select tableoid::regclass,* from rp;
 tableoid | a  | b
----------+----+----
 rp_p2    |  1 | p2
 rp_p3    | 11 | p3
 rp_p1    | -1 | p1
(3 rows)

--with pruning
postgres=# explain (costs off) select * from rp where a <= 1;
        QUERY PLAN
---------------------------
 Append
   ->  Seq Scan on rp_p2
         Filter: (a <= 1)
(3 rows)

postgres=# select * from rp where a <= 1;
 a | b
---+----
 1 | p2
(1 row)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] path toward faster partition pruning
On Thu, Oct 19, 2017 at 12:16 PM, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:
> Description of the attached patches:
>
> 0001: add new tests for partition-pruning
>
> 0002: patch that makes all the changes needed in the planner (adds a stub
>       function in partition.c)
>
> 0003: patch that implements the aforementioned stub (significant amount of
>       code to analyze partition clauses and gin up bounding keys to
>       compare with the values in PartitionBoundInfo; the actual function
>       that will do the comparison is just a stub as of this patch)
>
> 0004: make some preparatory changes to partition_bound_cmp/bsearch, to be
>       able to pass incomplete partition keys (aka, prefix of a multi-
>       column key) for comparison with the values in PartitionBoundInfo
>       (just a refactoring patch)
>
> 0005: implements the stub mentioned in 0003 and finally gets the new
>       partition-pruning working (also disables constraint exclusion using
>       internal partition constraints by teaching get_relation_constraints
>       to not include those).
>
> Feedback greatly welcome.

Hi Amit,

I have tried to apply the attached patches. They applied cleanly on commit id bf54c0f05c0a58db17627724a83e1b6d4ec2712c, but make failed with the below error:

../../../../src/include/nodes/relation.h:2126: error: redefinition of typedef 'AppendRelInfo'
../../../../src/include/nodes/relation.h:584: note: previous declaration of 'AppendRelInfo' was here
make[4]: *** [gistbuild.o] Error 1
Re: [HACKERS] Partition-wise aggregation/grouping
On Mon, Sep 18, 2017 at 12:37 PM, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
>
> On Tue, Sep 12, 2017 at 6:21 PM, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
>>
>> On Tue, Sep 12, 2017 at 3:24 PM, Rajkumar Raghuwanshi
>> <rajkumar.raghuwan...@enterprisedb.com> wrote:
>>>
>>> Hi Jeevan,
>>>
>>> I have started testing partition-wise aggregation and got one observation,
>>> please take a look. With the v2 patch, if I change the target list order,
>>> the query is not picking full partition-wise aggregation.
>>
>> Thanks Rajkumar for reporting this.
>>
>> I am looking into this issue and will post an updated patch with the fix.
>
> The logic for checking whether partition keys lead the group by keys needs
> to be updated here. The group by expressions can appear in any order
> without affecting the final result, and thus requiring the partition keys
> to lead the group by keys for full aggregation is not necessary. Instead
> we must ensure that the partition keys are part of the group by keys to
> compute full aggregation on a partition.
>
> Attached, revised patch-set with above fix.
>
> Also, in test-cases, I have removed DROP/ANALYZE commands on child
> relations and also removed VERBOSE from the EXPLAIN.
>
> Notes:
> HEAD: 8edacab209957520423770851351ab4013cb0167
> Partition-wise Join patch-set version: v32

Hi Jeevan,

While testing the latest v3 patches, I am getting a server crash if I reset partition_wise_agg_cost_factor; please take a look.

CREATE TABLE lp (a TEXT, b FLOAT, c INT) PARTITION BY LIST(c);
CREATE TABLE lp1 PARTITION OF lp FOR VALUES IN (10,20);
CREATE TABLE lp2 PARTITION OF lp FOR VALUES IN (30,40);
INSERT INTO lp VALUES ('a1',800, 20);
INSERT INTO lp VALUES ('a2',1250,30);
INSERT INTO lp VALUES ('a3',2975,20);
INSERT INTO lp VALUES ('a3',2850,30);

postgres=# SET enable_partition_wise_agg TO true;
SET
postgres=# SET partition_wise_agg_cost_factor TO 0.5;
SET
postgres=# SELECT MAX(b), AVG(b) FROM lp GROUP BY a HAVING a = 'a3' ORDER BY 1,2;
 max  |  avg
------+--------
 2975 | 2912.5
(1 row)

postgres=# RESET partition_wise_agg_cost_factor;
RESET
postgres=# SELECT MAX(b), AVG(b) FROM lp GROUP BY a HAVING a = 'a3' ORDER BY 1,2;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
On Wed, Sep 20, 2017 at 3:13 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> On Wed, Sep 20, 2017 at 9:44 AM, Thomas Munro wrote:
>> 2. What queries in the 0008 patch are hitting lines that 0007 doesn't hit?
>>
>> I thought about how to answer questions like this and came up with a
>> shell script that (1) makes computers run really hot for quite a long
>> time and (2) tells you which blocks of SQL hit which lines of C.
>> Please find attached the shell script and its output. The .sql files
>> have been annotated with "block" numbers (blocks being chunks of SQL
>> stuff separated by blank lines), and the C files annotated with
>> references to those block numbers, where A = block in
>> partition_join.sql and B = block in partition_join_extras.sql.
>>
>> Then to find lines that B queries hit but A queries don't and know
>> which particular queries hit them, you might use something like:
>>
>> grep -v 'SQL blocks: .*A[0-9]' < joinpath.c.aggregated_coverage | \
>>   grep 'SQL blocks: .*B[0-9]'
>
> Thanks for this. It generates a lot of output (970 lines over all the
> coverage files). It will take some time for getting anything
> meaningful out of this. Maybe there's some faster way by looking at
> the lines that are covered by B but not A. BTW, I checked those lines
> to see if there could be any bug there. But I don't see what could go
> wrong with those lines.

I have also tried to find test cases in B which hit some extra lines not hit by A, with the help of the results attached by Thomas in coverage.tarball_FILES. It took a lot of time, but I was able to find some test cases which, when added to partition_join.sql, increase the number of lines hit by 14. However, to hit these 14 extra lines the attached patch adds 900+ lines to the partition_join.sql and partition_join.out files. I have used gcov/lcov to find coverage for the files changed by the partition-wise join patches, with and without the attached patch; the results are below.
                                           with existing                    partition_join.sql + some test cases
                                           partition_join.sql               of partition_join_extra.sql
Modified Files                             Line Coverage     Functions      Line Coverage     Functions
src/backend/optimizer/geqo                 79.4 % 269/339    96.6 % 28/29   79.4 % 269/339    96.6 % 28/29
src/backend/optimizer/path/allpaths.c      92.3 % 787/853    95.5 % 42/44   92.6 % 790/853    95.5 % 42/44
src/backend/optimizer/path/costsize.c      96.8 % 1415/1462  98.4 % 61/62   96.9 % 1416/1462  98.4 % 61/62
src/backend/optimizer/path/joinpath.c      95.5 % 404/423    100.0 % 16/16  95.5 % 404/423    100.0 % 16/16
src/backend/optimizer/path/joinrels.c      92.5 % 422/456    100.0 % 16/16  93.0 % 424/456    100.0 % 16/16
src/backend/optimizer/plan/createplan.c    90.9 % 1928/2122  96.3 % 103/107 91.0 % 1930/2122  96.3 % 103/107
src/backend/optimizer/plan/planner.c       94.9 % 1609/1696  97.6 % 41/42   94.9 % 1609/1696  97.6 % 41/42
src/backend/optimizer/plan/setrefs.c       91.3 % 806/883    94.3 % 33/35   91.3 % 806/883    94.3 % 33/35
src/backend/optimizer/prep/prepunion.c     95.5 % 661/692    100.0 % 25/25  95.5 % 661/692    100.0 % 25/25
src/backend/optimizer/util/pathnode.c      88.7 % 1144/1290  98.1 % 52/53   88.8 % 1146/1290  98.1 % 52/53
src/backend/optimizer/util/placeholder.c   96.5 % 139/144    100.0 % 10/10  96.5 % 139/144    100.0 % 10/10
src/backend/optimizer/util/plancat.c       89.0 % 540/607    94.7 % 18/19   89.6 % 544/607    94.7 % 18/19
src/backend/optimizer/util/relnode.c       95.3 % 548/575    100.0 % 24/24  95.3 % 548/575    100.0 % 24/24
src/backend/utils/misc/guc.c               67.4 % 1536/2278  89.7 % 113/126 67.4 % 1536/2278  89.7 % 113/126

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out
index 9fec170..ab411b6 100644
--- a/src/test/regress/expected/partition_join.out
+++ b/src/test/regress/expected/partition_join.out
@@ -584,6 +584,215 @@ SELECT t1.a, ss.t2a, ss.t2c FROM prt1 t1 LEFT JOIN LATERAL
  550 |     |
 (12 rows)

+-- join with aggregate
+EXPLAIN (VERBOSE, COSTS OFF)
+select t1.a, count(t2.*) from prt1 t1 left join prt1 t2 on (t1.a = t2.a) where t1.a % 25 = 0 group by t1.a;
+                     QUERY PLAN
+-----------------------------------------------------
+ GroupAggregate
+   Output: t1.a, count(((t2.*)::prt1))
+   Group Key: t1.a
+   ->  Sort
+         Output: t1.a, ((t2.*)::prt1)
+         Sort Key: t1.a
+         ->  Append
+               ->  Hash Right Join
+                     Output: t1.a, ((t2.*)::prt1)
+                     Hash Cond: (t2.a = t1.a)
+                     ->  Seq Scan on public.prt1_p1 t2
+                           Output: t2.*, t2.a
+                     ->  Hash
+
Re: [HACKERS] Partition-wise aggregation/grouping
On Mon, Sep 18, 2017 at 12:37 PM, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
>
> On Tue, Sep 12, 2017 at 6:21 PM, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
>>
>> On Tue, Sep 12, 2017 at 3:24 PM, Rajkumar Raghuwanshi
>> <rajkumar.raghuwan...@enterprisedb.com> wrote:
>>>
>>> Hi Jeevan,
>>>
>>> I have started testing partition-wise aggregation and got one observation,
>>> please take a look. With the v2 patch, if I change the target list order,
>>> the query is not picking full partition-wise aggregation.
>>
>> Thanks Rajkumar for reporting this.
>>
>> I am looking into this issue and will post an updated patch with the fix.
>
> The logic for checking whether partition keys lead the group by keys needs
> to be updated here. The group by expressions can appear in any order
> without affecting the final result, and thus requiring the partition keys
> to lead the group by keys for full aggregation is not necessary. Instead
> we must ensure that the partition keys are part of the group by keys to
> compute full aggregation on a partition.
>
> Attached, revised patch-set with above fix.
>
> Also, in test-cases, I have removed DROP/ANALYZE commands on child
> relations and also removed VERBOSE from the EXPLAIN.
>
> Notes:
> HEAD: 8edacab209957520423770851351ab4013cb0167
> Partition-wise Join patch-set version: v32

Thanks for the patch. I have tested it and the issue is fixed now.
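To illustrate the ordering point from the quoted explanation, here is a sketch (assuming the pagg_tab setup from the original report elsewhere in this thread, where pagg_tab is range-partitioned on a): since the partition key appears among the GROUP BY keys either way, both target-list orders should now get full partition-wise aggregation.

```sql
-- Sketch, assuming: CREATE TABLE pagg_tab (a int, b int, c int) PARTITION BY RANGE(a)
-- with enable_partition_wise_agg and the cost factor set as in the original report.
-- The partition key (a) is part of the GROUP BY keys in both queries,
-- regardless of the order the columns appear in the target list.
EXPLAIN (COSTS OFF) SELECT a, b, count(*) FROM pagg_tab GROUP BY a, b ORDER BY 1, 2;
EXPLAIN (COSTS OFF) SELECT b, a, count(*) FROM pagg_tab GROUP BY a, b ORDER BY 1, 2;
```

With the fix, neither plan should fall back to the partial (Finalize GroupAggregate over PARTIAL) shape shown in the original report.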
Re: [HACKERS] Partition-wise aggregation/grouping
On Fri, Sep 8, 2017 at 5:47 PM, Jeevan Chalke <jeevan.cha...@enterprisedb.com> wrote:
> Here are the new patch-set re-based on HEAD (f0a0c17) and
> latest partition-wise join (v29) patches.

Hi Jeevan,

I have started testing partition-wise aggregation and got one observation; please take a look. With the v2 patch, if I change the target list order, the query is not picking full partition-wise aggregation.

SET enable_partition_wise_agg TO true;
SET partition_wise_agg_cost_factor TO 0.5;
SET enable_partition_wise_join TO true;
SET max_parallel_workers_per_gather TO 0;

CREATE TABLE pagg_tab (a int, b int, c int) PARTITION BY RANGE(a);
CREATE TABLE pagg_tab_p1 PARTITION OF pagg_tab FOR VALUES FROM (0) TO (10);
CREATE TABLE pagg_tab_p2 PARTITION OF pagg_tab FOR VALUES FROM (10) TO (20);
CREATE TABLE pagg_tab_p3 PARTITION OF pagg_tab FOR VALUES FROM (20) TO (30);
INSERT INTO pagg_tab SELECT i % 30, i % 30, i % 50 FROM generate_series(0, 299) i;
ANALYZE pagg_tab;

postgres=# explain (verbose, costs off) select a,b,count(*) from pagg_tab group by a,b order by 1,2;
                       QUERY PLAN
---------------------------------------------------------
 Sort
   Output: pagg_tab_p1.a, pagg_tab_p1.b, (count(*))
   Sort Key: pagg_tab_p1.a, pagg_tab_p1.b
   ->  Append
         ->  HashAggregate
               Output: pagg_tab_p1.a, pagg_tab_p1.b, count(*)
               Group Key: pagg_tab_p1.a, pagg_tab_p1.b
               ->  Seq Scan on public.pagg_tab_p1
                     Output: pagg_tab_p1.a, pagg_tab_p1.b
         ->  HashAggregate
               Output: pagg_tab_p2.a, pagg_tab_p2.b, count(*)
               Group Key: pagg_tab_p2.a, pagg_tab_p2.b
               ->  Seq Scan on public.pagg_tab_p2
                     Output: pagg_tab_p2.a, pagg_tab_p2.b
         ->  HashAggregate
               Output: pagg_tab_p3.a, pagg_tab_p3.b, count(*)
               Group Key: pagg_tab_p3.a, pagg_tab_p3.b
               ->  Seq Scan on public.pagg_tab_p3
                     Output: pagg_tab_p3.a, pagg_tab_p3.b
(19 rows)

-- changing target list order -- picking partial partition-wise aggregation path
postgres=# explain (verbose, costs off) select b,a,count(*) from pagg_tab group by a,b order by 1,2;
                          QUERY PLAN
---------------------------------------------------------------
 Finalize GroupAggregate
   Output: pagg_tab_p1.b, pagg_tab_p1.a, count(*)
   Group Key: pagg_tab_p1.b, pagg_tab_p1.a
   ->  Sort
         Output: pagg_tab_p1.b, pagg_tab_p1.a, (PARTIAL count(*))
         Sort Key: pagg_tab_p1.b, pagg_tab_p1.a
         ->  Append
               ->  Partial HashAggregate
                     Output: pagg_tab_p1.b, pagg_tab_p1.a, PARTIAL count(*)
                     Group Key: pagg_tab_p1.b, pagg_tab_p1.a
                     ->  Seq Scan on public.pagg_tab_p1
                           Output: pagg_tab_p1.b, pagg_tab_p1.a
               ->  Partial HashAggregate
                     Output: pagg_tab_p2.b, pagg_tab_p2.a, PARTIAL count(*)
                     Group Key: pagg_tab_p2.b, pagg_tab_p2.a
                     ->  Seq Scan on public.pagg_tab_p2
                           Output: pagg_tab_p2.b, pagg_tab_p2.a
               ->  Partial HashAggregate
                     Output: pagg_tab_p3.b, pagg_tab_p3.a, PARTIAL count(*)
                     Group Key: pagg_tab_p3.b, pagg_tab_p3.a
                     ->  Seq Scan on public.pagg_tab_p3
                           Output: pagg_tab_p3.b, pagg_tab_p3.a
(22 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Adding support for Default partition in partitioning
On Wed, Sep 6, 2017 at 5:25 PM, Jeevan Ladhe wrote:
> Hi,
>
> Attached is the rebased set of patches.
> Robert has committed[1] patch 0001 in the V26 series, hence the patch numbering
> in V27 is now decreased by 1 for each patch as compared to V26.

Hi,

I have applied the v27 patches and while testing got the below observation.

Observation: in the partition table below, d1's constraint should not allow NULL to be inserted into column b, yet I am able to insert it.

Steps to reproduce:

create table d0 (a int, b int) partition by range(a,b);
create table d1 partition of d0 for values from (0,0) to (maxvalue,maxvalue);

postgres=# insert into d0 values (0,null);
INSERT 0 1
postgres=# \d+ d1
                                   Table "public.d1"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 a      | integer |           |          |         | plain   |              |
 b      | integer |           |          |         | plain   |              |
Partition of: d0 FOR VALUES FROM (0, 0) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 0) OR ((a = 0) AND (b >= 0))))

postgres=# select tableoid::regclass,* from d0;
 tableoid | a | b
----------+---+---
 d1       | 0 |
(1 row)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
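A quick way to surface the mismatch on the same setup (a sketch, assuming the d0/d1 tables above; the predicate is copied from the partition constraint that \d+ reports):

```sql
-- Sketch, assuming the d0/d1 setup above. If the printed partition
-- constraint were actually enforced at insert time, this query should
-- return zero rows; the observation above suggests it instead returns
-- the (0, NULL) row that slipped past the (b IS NOT NULL) part.
SELECT tableoid::regclass, *
FROM d1
WHERE NOT ((a IS NOT NULL) AND (b IS NOT NULL));
```

Any row returned here is one that is stored in d1 while violating d1's own stated partition constraint.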
Re: [HACKERS] advanced partition matching algorithm for partition-wise join
On Tue, Sep 5, 2017 at 4:34 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> I have fixed the issues which were marked as TODOs in the attached
> patches. Also, I have included your test change patch in my series of
> patches. Are there any other issues you have commented out?

Thanks Ashutosh, all the commented issues got fixed. I am working on some combinations of N-way joins to test partition matching; I will send those as well once done.
Re: [HACKERS] [POC] hash partitioning
On Mon, Sep 4, 2017 at 4:08 PM, amul sul wrote:
> I've updated the patch to use an extended hash function (commit
> 81c5e46c490e2426db243eada186995da5bb0ba7) for the partitioning.

I have done some testing with these patches; everything looks fine. Attaching the sql and out files for reference.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

Attachment: hash_partition_test.out (binary data)

--Testing CREATE TABLE...PARTITION BY HASH syntax --

--basic syntax --should pass
CREATE TABLE hp_tbl (a int) PARTITION BY HASH (a);
drop table hp_tbl;

--partition with more than one column --should pass
CREATE TABLE hp_tbl (a int, b int) PARTITION BY HASH (a,b);
drop table hp_tbl;

--partition with expression --should pass
CREATE TABLE hp_tbl (a int, b int) PARTITION BY HASH (abs(a));
drop table hp_tbl;

--partition with arithmetic expression --should pass
CREATE TABLE hp_tbl (a int, b int) PARTITION BY HASH ((a+b));
drop table hp_tbl;

--partition with other data types --should pass
CREATE TABLE hp_tbl (a text, b Date) PARTITION BY HASH (a);
drop table hp_tbl;
CREATE TABLE hp_tbl (a text, b Date) PARTITION BY HASH (b);
drop table hp_tbl;

--Testing CREATE TABLE...PARTITION OF syntax --
CREATE TABLE hp_tbl (a int, b text) PARTITION BY HASH (a);

--basic partition of syntax --should pass
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);

--trying to attach same partition again --should fail
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);

--trying to attach with same modulus, different remainder --should pass
CREATE TABLE hp_tbl_p2 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 1);

--trying to attach with different modulus, different remainder --should pass
CREATE TABLE hp_tbl_p3 PARTITION OF hp_tbl FOR VALUES WITH (modulus 40, remainder 2);

--trying to create with a value not a factor of a previous value --should fail
CREATE TABLE hp_tbl_p4 PARTITION OF hp_tbl FOR VALUES WITH (modulus 45, remainder 0);

--trying to create with modulus equal to zero --should fail
CREATE TABLE hp_tbl_p4 PARTITION OF hp_tbl FOR VALUES WITH (modulus 0, remainder 1);

--trying to create with remainder greater than or equal to modulus --should fail
CREATE TABLE hp_tbl_p4 PARTITION OF hp_tbl FOR VALUES WITH (modulus 60, remainder 60);

--trying to create like a list partition --should fail
CREATE TABLE hp_tbl_p4 PARTITION OF hp_tbl FOR VALUES IN (10);

--trying to create like a range partition --should fail
CREATE TABLE hp_tbl_p4 PARTITION OF hp_tbl FOR VALUES FROM (0) TO (10);
DROP TABLE hp_tbl;

--trying to create for a list-partitioned parent --should fail
CREATE TABLE hp_tbl (a int, b text) PARTITION BY LIST (a);
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 10, remainder 0);
DROP TABLE hp_tbl;

--trying to create for a range-partitioned parent --should fail
CREATE TABLE hp_tbl (a int, b text) PARTITION BY RANGE (a);
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 10, remainder 0);
DROP TABLE hp_tbl;

--check for table description --
CREATE TABLE hp_tbl (a int, b text) PARTITION BY HASH (a);
\d+ hp_tbl
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);
\d+ hp_tbl;\d+ hp_tbl_p1
CREATE TABLE hp_tbl_p2 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 1);
\d+ hp_tbl;\d+ hp_tbl_p1;\d+ hp_tbl_p2
CREATE TABLE hp_tbl_p3 (like hp_tbl);
ALTER TABLE hp_tbl ATTACH PARTITION hp_tbl_p3 FOR VALUES WITH (modulus 20, remainder 2);
\d+ hp_tbl;\d+ hp_tbl_p1;\d+ hp_tbl_p2;\d+ hp_tbl_p3
ALTER TABLE hp_tbl DETACH PARTITION hp_tbl_p3;
\d+ hp_tbl;\d+ hp_tbl_p1;\d+ hp_tbl_p2
DROP TABLE hp_tbl_p3;
DROP TABLE hp_tbl;

--testing TEMP-NESS of hash partition --

--trying to add temp partition to permanent partition --should pass
CREATE TABLE hp_tbl (a int, b text) PARTITION BY HASH (a);
CREATE TEMP TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);
DROP TABLE hp_tbl;

--trying to add permanent partition to temp partition --should fail
CREATE TEMP TABLE hp_tbl (a int, b text) PARTITION BY HASH (a);
CREATE TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);
DROP TABLE hp_tbl;

--trying to add temp partition to temp partition --should pass
CREATE TEMP TABLE hp_tbl (a int, b text) PARTITION BY HASH (a);
CREATE TEMP TABLE hp_tbl_p1 PARTITION OF hp_tbl FOR VALUES WITH (modulus 20, remainder 0);
DROP TABLE hp_tbl;

--testing with/without oids for hash partition --

--when root partition has oids
create table hp_tbl (a int) partition by hash (a) with oids;
--partition can be created with oids --should pass
create table hp_tbl_p1 partition of hp_tbl FOR VALUES WITH (modulus 20, remainder 0) with oids;
\d+ hp_tbl_p1
drop
Re: [HACKERS] Default Partition for Range
On Wed, Aug 9, 2017 at 1:54 PM, Beena Emerson wrote:
> Hello Rajkumar,
>
> On Wed, Aug 9, 2017 at 12:37 PM, Rajkumar Raghuwanshi wrote:
>>
>> Hi Beena,
>>
>> I have applied Jeevan's v24 patches and then your v9 patch over commit
>> 5ff3d73813ebcc3ff80be77c30b458d728951036, and while testing I got a
>> server crash. Below is the sql to reproduce it.
>>
>> postgres=# CREATE TABLE rp (a int, b int) PARTITION by range (a);
>> CREATE TABLE
>> postgres=# CREATE TABLE rp_p1 PARTITION OF rp DEFAULT partition by range(a);
>> CREATE TABLE
>> postgres=# CREATE TABLE rp_p11 PARTITION OF rp_p1 FOR VALUES FROM (1) TO (15);
>> CREATE TABLE
>> postgres=# CREATE TABLE rp_p12 PARTITION OF rp_p1 DEFAULT;
>> CREATE TABLE
>> postgres=# insert into rp select i,i from generate_series(1,15) i;
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> The connection to the server was lost. Attempting reset: Failed.
>
> Thank you for testing. It seems I made a mistake in the assert
> condition. I have corrected it in this patch.

Thanks Beena, I have tested the new patch and the crash is fixed now. I got two observations; please check whether these are expected.

--difference in the description of the default partition in case of list vs range

create table lp (a int) partition by list(a);
create table lp_d partition of lp DEFAULT;

postgres=# \d+ lp_d
                                  Table "public.lp_d"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 a      | integer |           |          |         | plain   |              |
Partition of: lp DEFAULT
Partition constraint:

create table rp (a int) partition by range(a);
create table rp_d partition of rp DEFAULT;

postgres=# \d+ rp_d
                                  Table "public.rp_d"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 a      | integer |           |          |         | plain   |              |
Partition of: rp DEFAULT
Partition constraint: true

--getting "WARNING: skipped scanning foreign table ... which is a partition of default partition ..."
--when adding a new partition after adding the default as a foreign partition

CREATE EXTENSION postgres_fdw;
CREATE SERVER def_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres', port '5432', use_remote_estimate 'true');
CREATE USER MAPPING FOR PUBLIC SERVER def_server;

postgres=# CREATE TABLE frp (a int, b int) PARTITION BY RANGE(a);
CREATE TABLE
postgres=# CREATE TABLE frp_d PARTITION OF frp DEFAULT partition by RANGE(b);
CREATE TABLE
postgres=# CREATE TABLE ffrp_d_d (like frp);
CREATE TABLE
postgres=# CREATE FOREIGN TABLE ftfrp_d_d PARTITION OF frp_d DEFAULT SERVER def_server OPTIONS (TABLE_NAME 'ffrp_d_d');
CREATE FOREIGN TABLE
postgres=# CREATE TABLE frp_p2 PARTITION OF frp FOR VALUES FROM (10) TO (12);
WARNING: skipped scanning foreign table "ftfrp_d_d" which is a partition of default partition "frp_d"
CREATE TABLE

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Default Partition for Range
On Wed, Aug 9, 2017 at 8:26 AM, Beena Emerson wrote:
> I have updated the patch to make it similar to the way default/null is
> handled in list partitioning, removing the PARTITION_RANGE_DATUM_DEFAULT.
> This is to be applied over the v24 patches shared by Jeevan [1], which
> apply on commit id 5ff3d73813ebcc3ff80be77c30b458d728951036.
>
> The RelationBuildPartitionDesc has been modified a lot, especially the
> way all_bounds, ndatums and rbounds are set.
>
> [1] https://www.postgresql.org/message-id/CAOgcT0OVwDu%2BbeChWb5R5s6rfKLCiWcZT5617hqu7T3GdA1hAw%40mail.gmail.com

Hi Beena,

I have applied Jeevan's v24 patches and then your v9 patch over commit 5ff3d73813ebcc3ff80be77c30b458d728951036, and while testing I got a server crash. Below is the sql to reproduce it.

postgres=# CREATE TABLE rp (a int, b int) PARTITION by range (a);
CREATE TABLE
postgres=# CREATE TABLE rp_p1 PARTITION OF rp DEFAULT partition by range(a);
CREATE TABLE
postgres=# CREATE TABLE rp_p11 PARTITION OF rp_p1 FOR VALUES FROM (1) TO (15);
CREATE TABLE
postgres=# CREATE TABLE rp_p12 PARTITION OF rp_p1 DEFAULT;
CREATE TABLE
postgres=# insert into rp select i,i from generate_series(1,15) i;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] reload-through-the-top-parent switch the partition table
Hi Rushabh, While testing latest v2 patch, I got a crash when using --load-via-partition-root with --schema options. Below are steps to reproduce. --create below test data create schema a; create schema b; create schema c; create table t1 (a int,b text) partition by list(a); create table a.t1_p1 partition of t1 FOR VALUES in (1,2,3,4) partition by list(a); create table b.t1_p1_p1 partition of a.t1_p1 FOR VALUES in (1,2); create table c.t1_p1_p2 partition of a.t1_p1 FOR VALUES in (3,4); create table b.t1_p2 partition of t1 FOR VALUES in (5,6,7,8) partition by list(a); create table a.t1_p2_p1 partition of b.t1_p2 FOR VALUES in (5,6); create table t1_p2_p2 partition of b.t1_p2 FOR VALUES in (7,8); insert into t1 values (8,'t1'); insert into a.t1_p1 values (2,'a.t1_p1'); insert into b.t1_p1_p1 values (1,'b.t1_p1_p1'); insert into c.t1_p1_p2 values (3,'c.t1_p1_p2'); insert into b.t1_p2 values (6,'b.t1_p2'); insert into a.t1_p2_p1 values (5,'a.t1_p2_p1'); insert into t1_p2_p2 values (7,'t1_p2_p1'); insert into t1 values (4 ,'t1'); --trying to take pg_dump [edb@localhost bin]$ ./pg_dump -d postgres --schema=a -f d1.dump -Fp [edb@localhost bin]$ ./pg_dump -d postgres --load-via-partition-root -f d2.dump -Fp [edb@localhost bin]$ ./pg_dump -d postgres --load-via-partition-root --schema=a -f d3.dump -Fp pg_dump: pg_dump.c:2063: getRootTableInfo: Assertion `tbinfo->numParents == 1' failed. Aborted (core dumped) Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation On Fri, Aug 4, 2017 at 3:01 PM, Rushabh Lathia wrote: > > Here is an update patch, now renamed the switch to > --load-via-partition-root > and also added the documentation for the new switch into pg_dump as well > as pg_dumpall. > > > On Fri, Aug 4, 2017 at 7:13 AM, Amit Langote < > langote_amit...@lab.ntt.co.jp> wrote: > >> On 2017/08/04 1:08, David G. 
Johnston wrote: >> > On Thu, Aug 3, 2017 at 8:53 AM, Tom Lane wrote: >> > >> >> Robert Haas writes: >> >>> So maybe --load-via-partition-root if nobody likes my previous >> >>> suggestion of --partition-data-via-root ? >> >> >> >> WFM. >> >> >> > >> > +1 >> >> +1. >> >> Thanks, >> Amit >> >> > > > Thanks, > Rushabh Lathia > www.EnterpriseDB.com > > > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers > >
Re: [HACKERS] UPDATE of partition key
On Fri, Aug 4, 2017 at 10:28 PM, Amit Khandekar wrote:
> > Below are the TODOS at this point :
> >
> > Fix for bug reported by Rajkumar about update with join.
> I had explained the root issue of this bug here : [1]
> Attached patch includes the fix, which is explained below.

Hi Amit, I have applied the v14 patch and tested from my side; everything looks good to me. Attaching some test cases and the out file for reference.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

update_partition_test.out Description: Binary data

--===
--creating test dataset
CREATE TABLE pt (a INT, b INT, c INT) PARTITION BY RANGE(a);
CREATE TABLE pt_p1 PARTITION OF pt FOR VALUES FROM (1) to (6) PARTITION BY RANGE (b);
CREATE TABLE pt_p1_p1 PARTITION OF pt_p1 FOR VALUES FROM (11) to (44);
CREATE TABLE pt_p1_p2 PARTITION OF pt_p1 FOR VALUES FROM (44) to (66);
CREATE TABLE pt_p2 PARTITION OF pt FOR VALUES FROM (6) to (11) PARTITION BY LIST (c);
CREATE TABLE pt_p2_p1 PARTITION OF pt_p2 FOR VALUES IN (666,777,888);
CREATE TABLE pt_p2_p2 PARTITION OF pt_p2 FOR VALUES IN (999,NULL);
INSERT INTO pt (a,b,c) VALUES (1,11,111),(2,22,222),(3,33,333),(4,44,444),(5,55,555);
INSERT INTO pt (a,b,c) VALUES (6,66,666),(7,77,777),(8,88,888),(9,99,999),(10,100,NULL);

--test with updating root partition

--move data within same partition, satisfying partition constraint
--should pass
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET b = 23 WHERE b = 11; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same subtree, different partition, satisfying partition constraint
--should pass
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET b = 45,c = 422 WHERE b = 22; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data to different subtree, satisfying partition constraint
--should pass
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET a = 8,c=888 WHERE b = 33; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same partition, not satisfying partition constraint
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET b = 23, a=13 WHERE b = 11; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same subtree, different partition, not satisfying partition constraint
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET b = 45, a=14 WHERE b = 22; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data to different subtree, not satisfying partition constraint
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt SET b = 88, c=198 WHERE b = 33; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--
--test with updating child partition

--move data within same partition, satisfying partition constraint
--should pass
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1_p1 SET b = 23 WHERE b = 11; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same subtree, different partition, satisfying partition constraint
--should pass
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1 SET b = 45,c = 422 WHERE b = 22; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same subtree, different partition, satisfying partition constraint, updating leaf child
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1_p1 SET b = 45 WHERE b = 22; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data to different subtree, satisfying partition constraint, updating child partition
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1 SET a = 8,c=888 WHERE b = 33; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same partition, not satisfying partition constraint
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1_p1 SET b = 23, a=13 WHERE b = 11; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--move data within same subtree, different partition, not satisfying partition constraint
--should fail
BEGIN; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; UPDATE pt_p1 SET b = 45, a=14 WHERE b = 22; SELECT TABLEOID::REGCLASS AS PARTITION_NAME,* FROM pt; ROLLBACK;

--===
--creating test dataset
ALTER TABLE pt_p1 ADD constraint pt_p1_check check(c < 560);
ALTER TABLE pt_p1_p1 add CONSTRAINT pt_p1_p1_uk UNIQUE (c);
ALTER TABLE pt_p1_
Re: [HACKERS] UPDATE of partition key
On Tue, Jul 25, 2017 at 3:54 PM, Amit Khandekar wrote: > On 25 July 2017 at 15:02, Rajkumar Raghuwanshi > wrote: > > On Mon, Jul 24, 2017 at 11:23 AM, Amit Khandekar > > > wrote: > >> > >> > >> Attached update-partition-key_v13.patch now contains this > >> make_resultrels_ordered.patch changes. > >> > > > > I have applied attach patch and got below observation. > > > > Observation : if join producing multiple output rows for a given row to > be > > modified. I am seeing here it is updating a row and also inserting rows > in > > target table. hence after update total count of table got incremented. > > Thanks for catching this Rajkumar. > > So after the row to be updated is already moved to another partition, > when the next join output row corresponds to the same row which is > moved, that row is now deleted, so ExecDelete()=>heap_delete() gets > HeapTupleSelfUpdated, and this is not handled. So even when > ExecDelete() finds that the row is already deleted, we still call > ExecInsert(), so a new row is inserted. In ExecDelete(), we should > indicate that the row is already deleted. In the existing patch, there > is a parameter concurrenty_deleted for ExecDelete() which indicates > that the row is concurrently deleted. I think we can make this > parameter for both of these purposes so as to avoid ExecInsert() for > both these scenarios. Will work on a patch. > Thanks Amit. Got one more observation : update... returning is not working with whole row reference. please take a look. postgres=# create table part (a int, b int) partition by range(a); CREATE TABLE postgres=# create table part_p1 partition of part for values from (minvalue) to (0); CREATE TABLE postgres=# create table part_p2 partition of part for values from (0) to (maxvalue); CREATE TABLE postgres=# insert into part values (10,1); INSERT 0 1 postgres=# insert into part values (20,2); INSERT 0 1 postgres=# update part t1 set a = b returning t1; ERROR: unexpected whole-row reference found in partition key
Re: [HACKERS] UPDATE of partition key
On Mon, Jul 24, 2017 at 11:23 AM, Amit Khandekar wrote:
>
> Attached update-partition-key_v13.patch now contains this
> make_resultrels_ordered.patch changes.
>

I have applied the attached patch and got the below observation.

Observation: if the join produces multiple output rows for a given row to be modified, the row is updated and additional rows are also inserted into the target table; hence after the update the total row count of the table is incremented. Below are the steps:

postgres=# create table part_upd (a int, b int) partition by range(a);
CREATE TABLE
postgres=# create table part_upd1 partition of part_upd for values from (minvalue) to (-10);
CREATE TABLE
postgres=# create table part_upd2 partition of part_upd for values from (-10) to (0);
CREATE TABLE
postgres=# create table part_upd3 partition of part_upd for values from (0) to (10);
CREATE TABLE
postgres=# create table part_upd4 partition of part_upd for values from (10) to (maxvalue);
CREATE TABLE
postgres=# insert into part_upd select i,i from generate_series(-30,30,3)i;
INSERT 0 21
postgres=# select count(*) from part_upd;
 count
-------
    21
(1 row)

postgres=# create table non_part_upd (a int);
CREATE TABLE
postgres=# insert into non_part_upd select i%2 from generate_series(-30,30,5)i;
INSERT 0 13
postgres=# update part_upd t1 set a = (t2.a+10) from non_part_upd t2 where t2.a = t1.b;
UPDATE 7
postgres=# select count(*) from part_upd;
 count
-------
    27
(1 row)

postgres=# select tableoid::regclass,* from part_upd;
 tableoid  |  a  |  b
-----------+-----+-----
 part_upd1 | -30 | -30
 part_upd1 | -27 | -27
 part_upd1 | -24 | -24
 part_upd1 | -21 | -21
 part_upd1 | -18 | -18
 part_upd1 | -15 | -15
 part_upd1 | -12 | -12
 part_upd2 |  -9 |  -9
 part_upd2 |  -6 |  -6
 part_upd2 |  -3 |  -3
 part_upd3 |   3 |   3
 part_upd3 |   6 |   6
 part_upd3 |   9 |   9
 part_upd4 |  12 |  12
 part_upd4 |  15 |  15
 part_upd4 |  18 |  18
 part_upd4 |  21 |  21
 part_upd4 |  24 |  24
 part_upd4 |  27 |  27
 part_upd4 |  30 |  30
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
 part_upd4 |  10 |   0
(27 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
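A quick way to spot the spuriously duplicated rows after such an UPDATE is to group on the full row. This is an illustrative check, not part of the original report:

```sql
-- Rows appearing more than once point at the update-plus-insert duplication:
-- here only the row that was moved across partitions (a=10, b=0) should show up,
-- and in a correct run it would appear exactly once.
SELECT tableoid::regclass AS part, a, b, count(*)
FROM part_upd
GROUP BY tableoid, a, b
HAVING count(*) > 1;
```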
[HACKERS] drop operator class..using .. left dependency behind.
Hi, I have observed that even after dropping an operator class, I am not able to drop the schema containing it. Below is an example.

postgres=# CREATE SCHEMA sch_test;
CREATE SCHEMA
postgres=# SET search_path TO 'sch_test';
SET
postgres=# CREATE OR REPLACE FUNCTION sch_test.dummy_hashint4_39779(a int4) RETURNS int4 AS $$ BEGIN RETURN a; END; $$ LANGUAGE 'plpgsql' IMMUTABLE;
CREATE FUNCTION
postgres=# CREATE OPERATOR CLASS sch_test.custom_opclass_test FOR TYPE int4 USING HASH AS OPERATOR 1 = , FUNCTION 1 sch_test.dummy_hashint4_39779(a int4);
CREATE OPERATOR CLASS
postgres=# DROP OPERATOR CLASS sch_test.custom_opclass_test USING HASH;
DROP OPERATOR CLASS
postgres=# DROP FUNCTION sch_test.dummy_hashint4_39779(a int4);
DROP FUNCTION
postgres=# RESET search_path;
RESET
postgres=# DROP SCHEMA sch_test;
ERROR: cannot drop schema sch_test because other objects depend on it
DETAIL: operator family sch_test.custom_opclass_test for access method hash depends on schema sch_test
HINT: Use DROP ... CASCADE to drop the dependent objects too.

When investigated, found that an entry is still present in pg_opfamily.

postgres=# select * from pg_opfamily where opfname like 'custom_opclass_test%';
 opfmethod |       opfname       | opfnamespace | opfowner
-----------+---------------------+--------------+----------
       405 | custom_opclass_test |        16409 |       10
(1 row)

Is this expected behaviour?

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
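For context: CREATE OPERATOR CLASS implicitly creates an operator family with the same name when no FAMILY clause is given, and DROP OPERATOR CLASS does not remove that family. A possible manual cleanup, assuming the implicitly created family kept the class's name:

```sql
-- Drop the implicitly created operator family that was left behind;
-- after this the schema can be dropped without CASCADE.
DROP OPERATOR FAMILY sch_test.custom_opclass_test USING hash;
DROP SCHEMA sch_test;
```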
Re: [HACKERS] Adding support for Default partition in partitioning
On Thu, May 25, 2017 at 12:10 PM, Jeevan Ladhe <jeevan.la...@enterprisedb.com> wrote:
> PFA.

Hi, I have applied the v13 patch and got a crash when trying to attach a default temp partition.

postgres=# CREATE TEMP TABLE temp_list_part (a int) PARTITION BY LIST (a);
CREATE TABLE
postgres=# CREATE TEMP TABLE temp_def_part (a int);
CREATE TABLE
postgres=# ALTER TABLE temp_list_part ATTACH PARTITION temp_def_part DEFAULT;
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
[HACKERS] alter table..drop constraint pkey, left not null un-dropped
Hi All, I have created a table with a primary key and then dropped the primary key from the table, but the table still has the NOT NULL constraint that was added by the primary key. Is there any statement to drop the primary key together with the NOT NULL, or is this expected behaviour of pg?

postgres=# create table tbl (c1 int primary key);
CREATE TABLE
postgres=# \d+ tbl
                                    Table "public.tbl"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 c1     | integer |           | not null |         | plain   |              |
Indexes:
    "tbl_pkey" PRIMARY KEY, btree (c1)

postgres=# alter table tbl drop constraint tbl_pkey;
ALTER TABLE
postgres=# \d+ tbl
                                    Table "public.tbl"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 c1     | integer |           | not null |         | plain   |              |

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
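For completeness, the leftover NOT NULL can be removed with a second, explicit statement after dropping the primary key. A minimal sketch of this workaround:

```sql
-- The NOT NULL added by the primary key currently has to be dropped separately.
ALTER TABLE tbl DROP CONSTRAINT tbl_pkey;
ALTER TABLE tbl ALTER COLUMN c1 DROP NOT NULL;
```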
Re: [HACKERS] Adding support for Default partition in partitioning
Hi Rahila, pg_restore is failing for the default partition; the dump file is still storing the old syntax for the default partition.

create table lpd (a int, b int, c varchar) partition by list(a);
create table lpd_d partition of lpd DEFAULT;
create database bkp owner 'edb';
grant all on DATABASE bkp to edb;

--take plain dump of existing database
\! ./pg_dump -f lpd_test.sql -Fp -d postgres

--restore plain backup to new database bkp
\! ./psql -f lpd_test.sql -d bkp
psql:lpd_test.sql:63: ERROR: syntax error at or near "DEFAULT"
LINE 2: FOR VALUES IN (DEFAULT);
                       ^

vi lpd_test.sql
--
-- Name: lpd; Type: TABLE; Schema: public; Owner: edb
--
CREATE TABLE lpd (
    a integer,
    b integer,
    c character varying
)
PARTITION BY LIST (a);

ALTER TABLE lpd OWNER TO edb;

--
-- Name: lpd_d; Type: TABLE; Schema: public; Owner: edb
--
CREATE TABLE lpd_d PARTITION OF lpd
FOR VALUES IN (DEFAULT);

ALTER TABLE lpd_d OWNER TO edb;

Thanks,
Rajkumar
Re: [HACKERS] Adding support for Default partition in partitioning
On Thu, May 4, 2017 at 5:14 PM, Rahila Syed wrote:
> The syntax implemented in this patch is as follows,
>
> CREATE TABLE p11 PARTITION OF p1 DEFAULT;

Applied the v9 patches; the table description is still showing the old pattern for the default partition. Is it expected?

create table lpd (a int, b int, c varchar) partition by list(a);
create table lpd_d partition of lpd DEFAULT;

\d+ lpd
                                        Table "public.lpd"
 Column |       Type        | Collation | Nullable | Default | Storage  | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer           |           |          |         | plain    |              |
 b      | integer           |           |          |         | plain    |              |
 c      | character varying |           |          |         | extended |              |
Partition key: LIST (a)
Partitions: lpd_d FOR VALUES IN (DEFAULT)
Re: [HACKERS] Declarative partitioning - another take
On Fri, Apr 28, 2017 at 11:43 AM, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:
> Updated patch attached.

Hi Amit, I have applied the given patch and could see the below behaviour with statement triggers. When trying to delete a value within the partition range, the triggers get fired (with zero rows affected) as expected, but when trying to delete a value which is outside of the partition range (also with zero rows affected), no trigger is fired. Is this expected? Below are the steps to reproduce.

CREATE TABLE trigger_test_table (a INT, b INT) PARTITION BY RANGE(a);
CREATE TABLE trigger_test_table1 PARTITION OF trigger_test_table FOR VALUES FROM (0) to (6);
INSERT INTO trigger_test_table (a,b) SELECT i,i FROM generate_series(1,3)i;
CREATE TABLE trigger_test_statatics(TG_NAME varchar,TG_TABLE_NAME varchar,TG_LEVEL varchar,TG_WHEN varchar, TG_OP varchar);
CREATE FUNCTION trigger_test_procedure() RETURNS TRIGGER AS $ttp$
BEGIN
  INSERT INTO trigger_test_statatics SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN,TG_OP;
  RETURN OLD;
END;
$ttp$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_test11 AFTER DELETE ON trigger_test_table FOR EACH STATEMENT EXECUTE PROCEDURE trigger_test_procedure();
CREATE TRIGGER trigger_test12 AFTER DELETE ON trigger_test_table1 FOR EACH STATEMENT EXECUTE PROCEDURE trigger_test_procedure();

postgres=# DELETE FROM trigger_test_table WHERE a = 5;
DELETE 0
postgres=# SELECT * FROM trigger_test_statatics;
    tg_name     |   tg_table_name    | tg_level  | tg_when | tg_op
----------------+--------------------+-----------+---------+--------
 trigger_test11 | trigger_test_table | STATEMENT | AFTER   | DELETE
(1 row)

TRUNCATE TABLE trigger_test_statatics;

--statement trigger NOT fired, when trying to delete data outside the partition range.
postgres=# DELETE FROM trigger_test_table WHERE a = 10;
DELETE 0
postgres=# SELECT * FROM trigger_test_statatics;
 tg_name | tg_table_name | tg_level | tg_when | tg_op
---------+---------------+----------+---------+-------
(0 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Declarative partitioning - another take
On Mon, Apr 24, 2017 at 4:13 PM, Amit Langote wrote:
> Hi Rajkumar,
>
> It would be great if you could check if the patches fix the issues.

Hi Amit, thanks for looking into it. I have applied the fixes and checked the triggers. I can see a difference in the behaviour of statement triggers for INSERT and UPDATE: for INSERT only the root partition's triggers get fired, but for UPDATE the triggers on both the root and the child partition tables get fired. Is this expected? Below are the steps to reproduce.

CREATE TABLE pt (a INT, b INT) PARTITION BY RANGE(a);
CREATE TABLE pt1 PARTITION OF pt FOR VALUES FROM (1) to (6);
CREATE TABLE pt2 PARTITION OF pt FOR VALUES FROM (6) to (11);
INSERT INTO pt (a,b) SELECT i,i FROM generate_series(1,7)i;
CREATE TABLE pt_trigger(TG_NAME varchar,TG_TABLE_NAME varchar,TG_LEVEL varchar,TG_WHEN varchar,a_old int,a_new int,b_old int,b_new int);
CREATE FUNCTION process_pt_trigger() RETURNS TRIGGER AS $ttp$
BEGIN
  IF (TG_OP = 'INSERT') THEN
    IF (TG_LEVEL = 'STATEMENT') THEN
      INSERT INTO pt_trigger SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN,NULL,NULL,NULL,NULL;
    END IF;
    IF (TG_LEVEL = 'ROW') THEN
      INSERT INTO pt_trigger SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN,NULL,NEW.a,NULL,NEW.b;
    END IF;
    RETURN NEW;
  END IF;
  IF (TG_OP = 'UPDATE') THEN
    IF (TG_LEVEL = 'STATEMENT') THEN
      INSERT INTO pt_trigger SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN,NULL,NULL,NULL,NULL;
    END IF;
    IF (TG_LEVEL = 'ROW') THEN
      INSERT INTO pt_trigger SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN,OLD.a,NEW.a,OLD.b,NEW.b;
    END IF;
    RETURN NEW;
  END IF;
END;
$ttp$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_test11 AFTER INSERT OR UPDATE ON pt FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test12 AFTER INSERT OR UPDATE ON pt1 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test13 AFTER INSERT OR UPDATE ON pt2 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test21 BEFORE INSERT OR UPDATE ON pt FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test22 BEFORE INSERT OR UPDATE ON pt1 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test23 BEFORE INSERT OR UPDATE ON pt2 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test32 AFTER INSERT OR UPDATE ON pt1 FOR EACH ROW EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test33 AFTER INSERT OR UPDATE ON pt2 FOR EACH ROW EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test42 BEFORE INSERT OR UPDATE ON pt1 FOR EACH ROW EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER trigger_test43 BEFORE INSERT OR UPDATE ON pt2 FOR EACH ROW EXECUTE PROCEDURE process_pt_trigger();

postgres=# INSERT INTO pt (a,b) VALUES (8,8);
INSERT 0 1
postgres=# SELECT * FROM pt_trigger;
    tg_name     | tg_table_name | tg_level  | tg_when | a_old | a_new | b_old | b_new
----------------+---------------+-----------+---------+-------+-------+-------+-------
 trigger_test21 | pt            | STATEMENT | BEFORE  |       |       |       |
 trigger_test43 | pt2           | ROW       | BEFORE  |       |     8 |       |     8
 trigger_test33 | pt2           | ROW       | AFTER   |       |     8 |       |     8
 trigger_test11 | pt            | STATEMENT | AFTER   |       |       |       |
(4 rows)

postgres=# TRUNCATE TABLE pt_trigger;
TRUNCATE TABLE
postgres=# UPDATE pt SET a = 2 WHERE a = 1;
UPDATE 1
postgres=# SELECT * FROM pt_trigger;
    tg_name     | tg_table_name | tg_level  | tg_when | a_old | a_new | b_old | b_new
----------------+---------------+-----------+---------+-------+-------+-------+-------
 trigger_test21 | pt            | STATEMENT | BEFORE  |       |       |       |
 trigger_test22 | pt1           | STATEMENT | BEFORE  |       |       |       |
 trigger_test42 | pt1           | ROW       | BEFORE  |     1 |     2 |     1 |     1
 trigger_test32 | pt1           | ROW       | AFTER   |     1 |     2 |     1 |     1
 trigger_test11 | pt            | STATEMENT | AFTER   |       |       |       |
 trigger_test12 | pt1           | STATEMENT | AFTER   |       |       |       |
(6 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
On Fri, Apr 21, 2017 at 7:59 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> Here's an updated patch set

Hi, I have applied the v18 patches and got a crash in m-way joins when the partition ranges differ; below are the steps to reproduce it.

CREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a);
CREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (250);
CREATE TABLE prt1_p3 PARTITION OF prt1 FOR VALUES FROM (500) TO (600);
CREATE TABLE prt1_p2 PARTITION OF prt1 FOR VALUES FROM (250) TO (500);
INSERT INTO prt1 SELECT i, i, to_char(i, 'FM') FROM generate_series(0, 599, 2) i;
ANALYZE prt1;
CREATE TABLE prt4_n (a int, b int, c text) PARTITION BY RANGE(a);
CREATE TABLE prt4_n_p1 PARTITION OF prt4_n FOR VALUES FROM (0) TO (300);
CREATE TABLE prt4_n_p2 PARTITION OF prt4_n FOR VALUES FROM (300) TO (500);
CREATE TABLE prt4_n_p3 PARTITION OF prt4_n FOR VALUES FROM (500) TO (600);
INSERT INTO prt4_n SELECT i, i, to_char(i, 'FM') FROM generate_series(0, 599, 2) i;
ANALYZE prt4_n;
SET enable_partition_wise_join = on;
EXPLAIN (COSTS OFF)
SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2, prt1 t3 WHERE t1.a = t2.a AND t2.a = t3.a;
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Declarative partitioning - another take
Hi, I have observed the below with statement triggers. I am able to create statement triggers at the root partition, but these triggers are not getting fired on updating the partition.

CREATE TABLE pt (a INT, b INT) PARTITION BY RANGE(a);
CREATE TABLE pt1 PARTITION OF pt FOR VALUES FROM (1) to (7);
CREATE TABLE pt2 PARTITION OF pt FOR VALUES FROM (7) to (11);
INSERT INTO pt (a,b) SELECT i,i FROM generate_series(1,10)i;
CREATE TABLE pt_trigger(TG_NAME varchar,TG_TABLE_NAME varchar,TG_LEVEL varchar,TG_WHEN varchar);
CREATE OR REPLACE FUNCTION process_pt_trigger() RETURNS TRIGGER AS $pttg$
BEGIN
  IF (TG_OP = 'UPDATE') THEN
    INSERT INTO pt_trigger SELECT TG_NAME,TG_TABLE_NAME,TG_LEVEL,TG_WHEN;
    RETURN NEW;
  END IF;
  RETURN NULL;
END;
$pttg$ LANGUAGE plpgsql;
CREATE TRIGGER pt_trigger_after_p0 AFTER UPDATE ON pt FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER pt_trigger_before_p0 BEFORE UPDATE ON pt FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();

postgres=# UPDATE pt SET a = 2 WHERE a = 1;
UPDATE 1
postgres=# SELECT * FROM pt_trigger ORDER BY 1;
 tg_name | tg_table_name | tg_level | tg_when
---------+---------------+----------+---------
(0 rows)

No statement-level trigger is fired in this case; is this expected behaviour?

But if I create the triggers on a leaf partition, the trigger is getting fired.

CREATE TRIGGER pt_trigger_after_p1 AFTER UPDATE ON pt1 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();
CREATE TRIGGER pt_trigger_before_p1 BEFORE UPDATE ON pt1 FOR EACH STATEMENT EXECUTE PROCEDURE process_pt_trigger();

postgres=# UPDATE pt SET a = 5 WHERE a = 4;
UPDATE 1
postgres=# SELECT * FROM pt_trigger ORDER BY 1;
       tg_name        | tg_table_name | tg_level  | tg_when
----------------------+---------------+-----------+---------
 pt_trigger_after_p1  | pt1           | STATEMENT | AFTER
 pt_trigger_before_p1 | pt1           | STATEMENT | BEFORE
(2 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Parallel Append implementation
On Fri, Mar 24, 2017 at 12:38 AM, Amit Khandekar wrote: > Meanwhile, attached is a WIP patch v10. The only change in this patch > w.r.t. the last patch (v9) is that this one has a new function defined > append_nonpartial_cost(). Just sending this to show how the algorithm > looks like; haven't yet called it. > Hi, I have given patch on latest pg sources (on commit 457a4448732881b5008f7a3bcca76fc299075ac3). configure and make all install ran successfully, but initdb failed with below error. [edb@localhost bin]$ ./initdb -D data The files belonging to this database system will be owned by user "edb". This user must also own the server process. The database cluster will be initialized with locale "en_US.UTF-8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled. creating directory data ... ok creating subdirectories ... ok selecting default max_connections ... sh: line 1: 3106 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=100 -c shared_buffers=1000 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 sh: line 1: 3112 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=50 -c shared_buffers=500 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 sh: line 1: 3115 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=40 -c shared_buffers=400 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 sh: line 1: 3118 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=30 -c shared_buffers=300 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 sh: line 1: 3121 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=20 -c 
shared_buffers=200 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 sh: line 1: 3124 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=10 -c shared_buffers=100 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 10 selecting default shared_buffers ... sh: line 1: 3127 Aborted (core dumped) "/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c max_connections=10 -c shared_buffers=16384 -c dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1 400kB selecting dynamic shared memory implementation ... posix creating configuration files ... ok running bootstrap script ... TRAP: FailedAssertion("!(LWLockTranchesAllocated >= LWTRANCHE_FIRST_USER_DEFINED)", File: "lwlock.c", Line: 501) child process was terminated by signal 6: Aborted initdb: removing data directory "data" [edb@localhost bin]$ Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
> On Mon, Mar 20, 2017 at 1:19 PM, Ashutosh Bapat > wrote: I have created some test to cover partition wise joins with postgres_fdw, also verified make check. patch attached. Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 059c5c3..f0b1a32 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -7181,3 +7181,149 @@ AND ftoptions @> array['fetch_size=6']; (1 row) ROLLBACK; +-- === +-- test partition-wise-joins +-- === +SET enable_partition_wise_join=on; +--range partition +CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE fprt1_p1 (a int, b int, c text); +CREATE TABLE fprt1_p2 (a int, b int, c text); +CREATE FOREIGN TABLE ftprt1_p1 PARTITION OF fprt1 FOR VALUES FROM (0) TO (250) +SERVER loopback OPTIONS (TABLE_NAME 'fprt1_p1'); +CREATE FOREIGN TABLE ftprt1_p2 PARTITION OF fprt1 FOR VALUES FROM (250) TO (500) +SERVER loopback OPTIONS (TABLE_NAME 'fprt1_p2'); +INSERT INTO fprt1_p1 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(0, 249, 2) i; +INSERT INTO fprt1_p2 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(250, 499, 2) i; +ANALYZE fprt1; +CREATE TABLE fprt2 (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE fprt2_p1 (a int, b int, c text); +CREATE TABLE fprt2_p2 (a int, b int, c text); +CREATE FOREIGN TABLE ftprt2_p1 PARTITION OF fprt2 FOR VALUES FROM (0) TO (250) +SERVER loopback OPTIONS (TABLE_NAME 'fprt2_p1'); +CREATE FOREIGN TABLE ftprt2_p2 PARTITION OF fprt2 FOR VALUES FROM (250) TO (500) +SERVER loopback OPTIONS (TABLE_NAME 'fprt2_p2'); +INSERT INTO fprt2_p1 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(0, 249, 3) i; +INSERT INTO fprt2_p2 SELECT i, i, to_char(i/50, 'FM') FROM generate_series(250, 499, 3) i; +ANALYZE fprt2; +-- inner join three tables, all join qualified +EXPLAIN (COSTS OFF) +SELECT 
t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + QUERY PLAN + + Sort + Sort Key: t1.a, t3.c + -> Append + -> Foreign Scan + Relations: ((public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 t2)) INNER JOIN (public.ftprt1_p1 t3) + -> Foreign Scan + Relations: ((public.ftprt1_p2 t1) INNER JOIN (public.ftprt2_p2 t2)) INNER JOIN (public.ftprt1_p2 t3) +(7 rows) + +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + a | b | c +-+-+-- + 0 | 0 | + 150 | 150 | 0003 + 250 | 250 | 0005 + 400 | 400 | 0008 +(4 rows) + +-- left outer join + nullable clasue +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; + QUERY PLAN +- + Merge Append + Sort Key: t1.a, ftprt2_p1.b, ftprt2_p1.c + -> Foreign Scan + Relations: (public.ftprt1_p1 t1) LEFT JOIN (public.ftprt2_p1 fprt2) +(4 rows) + +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; + a | b | c +---+---+-- + 0 | 0 | + 2 | | + 4 | | + 6 | 6 | + 8 | | +(5 rows) + +-- full outer join + right outer join +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 FULL JOIN fprt2 t2 ON (t1.a = t2.b and t1.b = t2.a) RIGHT JOIN fprt1 t3 ON (t2.a = t3.b and t2.a = t3.b) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + QUERY PLAN +- + Sort + Sort Key: t1.a, t3.c + -> Hash Join + Hash Cond: (t3.b = t1.b) + -> Append + -> Seq Scan on fprt1 t3 + -> Foreign Scan on ftprt1_p1 t3_1 + -> Foreign Scan on ftprt1_p2 t3_2 + -> Hash + -> Append + -> Foreign Scan + Relations: (public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 t2) + -> Foreign Scan +
Re: [HACKERS] wait events for disk I/O
On Sat, Mar 18, 2017 at 10:52 PM, Rushabh Lathia wrote: > > > On Sat, Mar 18, 2017 at 5:15 PM, Robert Haas wrote: >> >> On Fri, Mar 17, 2017 at 10:01 AM, Rushabh Lathia >> wrote: >> > I tried to cover all the suggestion in the attached latest patch. >> >> Committed. I reworded the documentation entries, renamed a few of the >> wait events to make things more consistent, put all three lists in >> rigorous alphabetical order, and I fixed a couple of places where an >> error return from a system call could lead to returning without >> clearing the wait event. >> Thanks, I ran pgbench with shared_buffers set to 128kB, able to see below wait IO events. DataFileRead DataFileWrite WALInitSync WALInitWrite WALWrite DataFileSync WALRead Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] wait events for disk I/O
On Thu, Mar 9, 2017 at 10:54 AM, Rushabh Lathia wrote:
> Thanks Rajkumar for performing tests on this patch.
>
> Yes, I also noticed similar results in my testing. Additionally, I sometimes
> noticed a ReadSLRUPage event on my system.
>
> I also ran the REINDEX DATABASE command and noticed the below IO events:
>
> SyncImmedRelation
> WriteDataBlock
> WriteBuffile
> WriteXLog
> ReadDataBlock

I have tried generating some more events by running pgbench in a master/standby configuration, and was able to see three more events:

WriteInitXLogFile
SyncInitXLogFile
ReadXLog
Re: [HACKERS] wait events for disk I/O
On Wed, Mar 8, 2017 at 4:50 PM, Rushabh Lathia wrote:
> I am attaching another version of the patch, as I found a stupid mistake
> in the earlier version of the patch, where I missed initializing the
> initial value of the WaitEventIO enum. Also, the earlier version was not
> applying cleanly on the current version of the sources.

I have applied the attached patch, set shared_buffers to 128kB, and ran pgbench; I am able to see the below distinct IO events.

--with ./pgbench -i -s 500 postgres
 application_name | wait_event_type | wait_event      | query
 pgbench          | IO              | ExtendDataBlock | copy pgbench_account
 pgbench          | IO              | WriteXLog       | copy pgbench_account
 pgbench          | IO              | WriteDataBlock  | copy pgbench_account
 pgbench          | IO              | ReadDataBlock   | vacuum analyze pgben
 pgbench          | IO              | ReadBuffile     | alter table pgbench_

--with ./pgbench -T 600 postgres (readwrite)
 application_name | wait_event_type | wait_event      | query
 pgbench          | IO              | ReadDataBlock   | UPDATE pgbench_accou
 pgbench          | IO              | WriteDataBlock  | UPDATE pgbench_telle
                  | IO              | SyncDataBlock   |
 pgbench          | IO              | SyncDataBlock   | UPDATE pgbench_telle
                  | IO              | SyncDataBlock   | autovacuum: VACUUM A
 pgbench          | IO              | WriteXLog       | END;
 pgbench          | IO              | ExtendDataBlock | copy pgbench_account

--with ./pgbench -T 600 -S postgres (select only)
 application_name | wait_event_type | wait_event      | query
 pgbench          | IO              | ReadDataBlock   | SELECT abalance FROM

Attached excel with all IO event values.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

[Attachment: wait_event_for_disk_io.ods]
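For reference, samples like the ones above can be collected with a monitoring query along these lines. This is a sketch that assumes the patch's wait_event_type/wait_event additions to pg_stat_activity; the exact event names shown depend on the patch version:

```sql
-- Poll I/O wait events from all backends while pgbench generates load;
-- repeat the query (e.g. with psql's \watch) to collect samples.
SELECT application_name,
       wait_event_type,
       wait_event,
       left(query, 20) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'IO';
```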
Re: [HACKERS] Declarative partitioning - another take
On Wed, Jan 4, 2017 at 10:37 AM, Amit Langote wrote: > On 2017/01/03 19:04, Rajkumar Raghuwanshi wrote: > > On Tue, Dec 27, 2016 at 3:24 PM, Amit Langote wrote: > >> > >> Attached patch should fix the same. > > > > I have applied attached patch, server crash for range is fixed, but still > > getting crash for multi-level list partitioning insert. > > > > postgres=# CREATE TABLE test_ml_l (a int, b int, c varchar) PARTITION BY > > LIST(c); > > [ ... ] > > > postgres=# INSERT INTO test_ml_l SELECT i, i, to_char(i/50, 'FM') > FROM > > generate_series(0, 599, 2) i; > > server closed the connection unexpectedly > > This probably means the server terminated abnormally > > before or while processing the request. > > The connection to the server was lost. Attempting reset: Failed. > > Hm, that's odd. I tried your new example, but didn't get the crash. > > Thanks, > Amit > Thanks, I have pulled latest sources from git, and then applied patch "fix-wrong-ecxt_scantuple-crash.patch", Not getting crash now, may be I have missed something last time.
Re: [HACKERS] Declarative partitioning - another take
On Tue, Dec 27, 2016 at 3:24 PM, Amit Langote wrote: > On 2016/12/27 18:30, Rajkumar Raghuwanshi wrote: > > Hi Amit, > > > > I have pulled latest sources from git and tried to create multi-level > > partition, getting a server crash, below are steps to reproduce. please > > check if it is reproducible in your machine also. > > > > [ ... ] > > > postgres=# INSERT INTO test_ml SELECT i, i, to_char(i, 'FM') FROM > > generate_series(0, 599, 2) i; > > server closed the connection unexpectedly > > This probably means the server terminated abnormally > > before or while processing the request. > > The connection to the server was lost. Attempting reset: Failed. > > Thanks for the example. Looks like there was an oversight in my patch > that got committed as 2ac3ef7a01 [1]. > > Attached patch should fix the same. > > Thanks, > Amit > > [1] > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h= > 2ac3ef7a01df859c62d0a02333b646d65eaec5ff > Hi Amit, I have applied attached patch, server crash for range is fixed, but still getting crash for multi-level list partitioning insert. 
postgres=# CREATE TABLE test_ml_l (a int, b int, c varchar) PARTITION BY LIST(c); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p1 PARTITION OF test_ml_l FOR VALUES IN ('', '0003', '0004', '0010') PARTITION BY LIST (c); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p1_p1 PARTITION OF test_ml_l_p1 FOR VALUES IN ('', '0003'); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p1_p2 PARTITION OF test_ml_l_p1 FOR VALUES IN ('0004', '0010'); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p2 PARTITION OF test_ml_l FOR VALUES IN ('0001', '0005', '0002', '0009') PARTITION BY LIST (c); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p2_p1 PARTITION OF test_ml_l_p2 FOR VALUES IN ('0001', '0005'); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p2_p2 PARTITION OF test_ml_l_p2 FOR VALUES IN ('0002', '0009'); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p3 PARTITION OF test_ml_l FOR VALUES IN ('0006', '0007', '0008', '0011') PARTITION BY LIST (ltrim(c,'A')); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p3_p1 PARTITION OF test_ml_l_p3 FOR VALUES IN ('0006', '0007'); CREATE TABLE postgres=# CREATE TABLE test_ml_l_p3_p2 PARTITION OF test_ml_l_p3 FOR VALUES IN ('0008', '0011'); CREATE TABLE postgres=# INSERT INTO test_ml_l SELECT i, i, to_char(i/50, 'FM') FROM generate_series(0, 599, 2) i; server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Failed. Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation
Re: [HACKERS] Declarative partitioning - another take
Hi Amit, I have pulled latest sources from git and tried to create multi-level partition, getting a server crash, below are steps to reproduce. please check if it is reproducible in your machine also. postgres=# CREATE TABLE test_ml (a int, b int, c varchar) PARTITION BY RANGE(a); CREATE TABLE postgres=# CREATE TABLE test_ml_p1 PARTITION OF test_ml FOR VALUES FROM (0) TO (250) PARTITION BY RANGE (b); CREATE TABLE postgres=# CREATE TABLE test_ml_p1_p1 PARTITION OF test_ml_p1 FOR VALUES FROM (0) TO (100); CREATE TABLE postgres=# CREATE TABLE test_ml_p1_p2 PARTITION OF test_ml_p1 FOR VALUES FROM (100) TO (250); CREATE TABLE postgres=# CREATE TABLE test_ml_p2 PARTITION OF test_ml FOR VALUES FROM (250) TO (500) PARTITION BY RANGE (c); CREATE TABLE postgres=# CREATE TABLE test_ml_p2_p1 PARTITION OF test_ml_p2 FOR VALUES FROM ('0250') TO ('0400'); CREATE TABLE postgres=# CREATE TABLE test_ml_p2_p2 PARTITION OF test_ml_p2 FOR VALUES FROM ('0400') TO ('0500'); CREATE TABLE postgres=# CREATE TABLE test_ml_p3 PARTITION OF test_ml FOR VALUES FROM (500) TO (600) PARTITION BY RANGE ((b + a)); CREATE TABLE postgres=# CREATE TABLE test_ml_p3_p1 PARTITION OF test_ml_p3 FOR VALUES FROM (1000) TO (1100); CREATE TABLE postgres=# CREATE TABLE test_ml_p3_p2 PARTITION OF test_ml_p3 FOR VALUES FROM (1100) TO (1200); CREATE TABLE postgres=# INSERT INTO test_ml SELECT i, i, to_char(i, 'FM') FROM generate_series(0, 599, 2) i; server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Failed. Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation
Re: [HACKERS] Declarative partitioning - another take
On Thu, Oct 6, 2016 at 12:44 PM, Amit Langote wrote:
> Attached revised patches. Also, includes a fix for an issue reported by
> Rajkumar Raghuwanshi [1] which turned out to be a bug in one of the later
> patches. I will now move on to addressing the comments on patch 0003.
>
> Thanks a lot for the review!
>
> Thanks,
> Amit
>
> [1] https://www.postgresql.org/message-id/5dded2f1-c7f6-e7fc-56b5-23ab59495...@lab.ntt.co.jp

Hi, I have applied the latest patches and am getting some errors and a crash; please check whether you are also able to reproduce them.

Observation 1: Not able to create an index on a partitioned table.

CREATE TABLE rp (c1 int, c2 int) PARTITION BY RANGE(c1);
CREATE TABLE rp_p1 PARTITION OF rp FOR VALUES START (1) END (10);
CREATE TABLE rp_p2 PARTITION OF rp FOR VALUES START (10) END (20);

CREATE INDEX idx_rp_c1 on rp(c1);
ERROR:  cannot create index on partitioned table "rp"

Observation 2: Getting a "cache lookup failed" error for a multi-column range partition.

CREATE TABLE rp1_m (c1 int, c2 int) PARTITION BY RANGE(c1, ((c1 + c2)/2));
CREATE TABLE rp1_m_p1 PARTITION OF rp1_m FOR VALUES START (1, 1) END (10, 10);
ERROR:  cache lookup failed for attribute 0 of relation 16429

Observation 3: Getting a server crash with a multi-column range partition.

CREATE TABLE rp2_m (c1 int, c2 int) PARTITION BY RANGE(((c2 + c1)/2), c2);
CREATE TABLE rp2_m_p1 PARTITION OF rp2_m FOR VALUES START (1, 1) END (10, 10);
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Declarative partitioning - another take
On Tue, Oct 4, 2016 at 1:32 PM, Amit Langote wrote:
> Attached updated patches.
>
> Thanks,
> Amit

Hi, I observed that when creating a foreign table as a range partition, data is not inserted into the specified partition range. Below are steps to reproduce.

CREATE EXTENSION postgres_fdw;
CREATE SERVER pwj_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres', port '5432', use_remote_estimate 'true');
CREATE USER MAPPING FOR PUBLIC SERVER pwj_server;

CREATE TABLE test_range (a int) PARTITION BY RANGE(a);

CREATE TABLE test_range_p1 (a int);
CREATE FOREIGN TABLE ft_test_range_p1 PARTITION OF test_range FOR VALUES START (1) END (10) SERVER pwj_server OPTIONS (TABLE_NAME 'test_range_p1');

CREATE TABLE test_range_p2 (a int);
CREATE FOREIGN TABLE ft_test_range_p2 PARTITION OF test_range FOR VALUES START (20) END (30) SERVER pwj_server OPTIONS (TABLE_NAME 'test_range_p2');

CREATE TABLE test_range_p3 (a int);
CREATE FOREIGN TABLE ft_test_range_p3 PARTITION OF test_range FOR VALUES START (10) END (20) SERVER pwj_server OPTIONS (TABLE_NAME 'test_range_p3');

postgres=# INSERT INTO test_range (a) values (5),(25),(15);
INSERT 0 3
postgres=# select tableoid::regclass, * from test_range;
     tableoid     | a
------------------+----
 ft_test_range_p1 |  5
 ft_test_range_p2 | 15
 ft_test_range_p3 | 25
(3 rows)

--Here ft_test_range_p2 was created for the range 20-30 but holds the value 15.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
On Tue, Sep 20, 2016 at 4:26 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> PFA patch which takes care of some of the TODOs mentioned in my
> previous mail. The patch is based on the set of patches supporting
> declarative partitioning by Amit Langote posted on 26th August.

I have applied the declarative partitioning patches posted by Amit Langote on 26 Aug 2016 and then the latest partition-wise-join patch, and am getting the below error during "make install".

../../../../src/include/catalog/partition.h:37: error: redefinition of typedef ‘PartitionScheme’
../../../../src/include/nodes/relation.h:492: note: previous declaration of ‘PartitionScheme’ was here
make[4]: *** [commit_ts.o] Error 1
make[4]: Leaving directory `/home/edb/Desktop/edb_work/WORKDB/PG_PWJ/postgresql/src/backend/access/transam'
make[3]: *** [transam-recursive] Error 2
make[3]: Leaving directory `/home/edb/Desktop/edb_work/WORKDB/PG_PWJ/postgresql/src/backend/access'
make[2]: *** [access-recursive] Error 2
make[2]: Leaving directory `/home/edb/Desktop/edb_work/WORKDB/PG_PWJ/postgresql/src/backend'
make[1]: *** [all-backend-recurse] Error 2
make[1]: Leaving directory `/home/edb/Desktop/edb_work/WORKDB/PG_PWJ/postgresql/src'
make: *** [all-src-recurse] Error 2

PS: I am using gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17).

I commented out the below statement in the src/include/catalog/partition.h file and then tried to install again; it worked fine.

/* typedef struct PartitionSchemeData *PartitionScheme; */

Thanks & Regards,
Rajkumar Raghuwanshi
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
Hi, I got a server crash with partition_wise_join, steps to reproduce given below. postgres=# set enable_partition_wise_join=true; SET postgres=# CREATE TABLE tbl (a int,c text) PARTITION BY LIST(a); CREATE TABLE postgres=# CREATE TABLE tbl_p1 PARTITION OF tbl FOR VALUES IN (1, 2); CREATE TABLE postgres=# CREATE TABLE tbl_p2 PARTITION OF tbl FOR VALUES IN (3, 4); CREATE TABLE postgres=# INSERT INTO tbl VALUES (1,'P1'),(2,'P1'),(3,'P2'),(4,'P2'); INSERT 0 4 postgres=# EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM tbl t1 INNER JOIN tbl t2 ON (t1.a = t2.a) WHERE t1.c = 'P1' AND t1.c = 'P2'; NOTICE: join between relations (b 1) and (b 2) is considered for partition-wise join. server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Failed. !> Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
On Fri, Sep 9, 2016 at 3:17 PM, Ashutosh Bapat < ashutosh.ba...@enterprisedb.com> wrote: > Hi All, > > PFA the patch to support partition-wise joins for partitioned tables. The > patch > is based on the declarative parition support patches provided by Amit > Langote > on 26th August 2016. > I have applied declarative partitioning patches posted by Amit Langote on 26 Aug 2016 and then partition-wise-join patch, getting below error while make install. ../../../../src/include/nodes/relation.h:706: error: redefinition of typedef ‘PartitionOptInfo’ ../../../../src/include/nodes/relation.h:490: note: previous declaration of ‘PartitionOptInfo’ was here make[4]: *** [gistbuild.o] Error 1 make[4]: Leaving directory `/home/edb/Desktop/edb_work/WO RKDB/PG/postgresql/src/backend/access/gist' make[3]: *** [gist-recursive] Error 2 make[3]: Leaving directory `/home/edb/Desktop/edb_work/WO RKDB/PG/postgresql/src/backend/access' make[2]: *** [access-recursive] Error 2 make[2]: Leaving directory `/home/edb/Desktop/edb_work/WO RKDB/PG/postgresql/src/backend' make[1]: *** [all-backend-recurse] Error 2 make[1]: Leaving directory `/home/edb/Desktop/edb_work/WO RKDB/PG/postgresql/src' make: *** [all-src-recurse] Error 2 PS : I am using - gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17) Attached the patch for the fix of above error. Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 1e9fed9..963b022 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -487,7 +487,7 @@ typedef enum RelOptKind ((reloptkind) == RELOPT_OTHER_MEMBER_REL || \ (reloptkind) == RELOPT_OTHER_JOINREL) -typedef struct PartitionOptInfo PartitionOptInfo; +typedef struct PartitionOptInfo; typedef struct RelOptInfo { @@ -561,7 +561,7 @@ typedef struct RelOptInfo /* * TODO: Notice recursive usage of RelOptInfo. 
 */
-	PartitionOptInfo *part_info;
+	struct PartitionOptInfo *part_info;

 	/* Set only for "other" base or join relations. */
 	Relids parent_relids;
Re: [HACKERS] Declarative partitioning - another take
I have continued testing declarative partitioning with the latest patch. I have some more observations, given below.

Observation 1: Getting an overlap error with START ... EXCLUSIVE in range partitioning.

create table test_range_bound (a int) partition by range(a);

--creating a partition to contain records {1,2,3,4}; by default 1 is inclusive and 5 is exclusive
create table test_range_bound_p1 partition of test_range_bound for values start (1) end (5);

--now trying to create a partition, explicitly making the start exclusive, to contain records {5,6,7}: here creating with START (4) EXCLUSIVE, so the range should be 5 to 8, but getting a partition overlap error.
create table test_range_bound_p2 partition of test_range_bound for values start (4) EXCLUSIVE end (8);
ERROR:  partition "test_range_bound_p2" would overlap partition "test_range_bound_p1"

Observation 2: Able to create a sub-partition outside the range set for the parent table, making it impossible to insert data that satisfies any of the partitions.

create table test_subpart (c1 int) partition by range (c1);
create table test_subpart_p1 partition of test_subpart for values start (1) end (100) inclusive partition by range (c1);
create table test_subpart_p1_sub1 partition of test_subpart_p1 for values start (101) end (200);

\d+ test_subpart
                 Table "public.test_subpart"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 c1     | integer |           | plain   |              |
Partition Key: RANGE (c1)
Partitions: test_subpart_p1 FOR VALUES START (1) END (100) INCLUSIVE

\d+ test_subpart_p1
                Table "public.test_subpart_p1"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 c1     | integer |           | plain   |              |
Partition Of: test_subpart FOR VALUES START (1) END (100) INCLUSIVE
Partition Key: RANGE (c1)
Partitions: test_subpart_p1_sub1 FOR VALUES START (101) END (200)

insert into test_subpart values (50);
ERROR:  no partition of relation "test_subpart_p1" found for row
DETAIL:  Failing row contains (50).
insert into test_subpart values (150);
ERROR:  no partition of relation "test_subpart" found for row
DETAIL:  Failing row contains (150).

Observation 3: Getting "cache lookup failed" when selecting from a list-partitioned table containing arrays.

CREATE TABLE test_array (i int, j int[], k text[]) PARTITION BY LIST (j);
CREATE TABLE test_array_p1 PARTITION OF test_array FOR VALUES IN ('{1}');
CREATE TABLE test_array_p2 PARTITION OF test_array FOR VALUES IN ('{2,2}');
INSERT INTO test_array (i,j[1],k[1]) VALUES (1,1,1);
INSERT INTO test_array (i,j[1],j[2],k[1]) VALUES (2,2,2,2);

postgres=# SELECT tableoid::regclass,* FROM test_array_p1;
   tableoid    | i |  j  |  k
---------------+---+-----+-----
 test_array_p1 | 1 | {1} | {1}
(1 row)

postgres=# SELECT tableoid::regclass,* FROM test_array_p2;
   tableoid    | i |   j   |  k
---------------+---+-------+-----
 test_array_p2 | 2 | {2,2} | {2}
(1 row)

postgres=# SELECT tableoid::regclass,* FROM test_array;
ERROR:  cache lookup failed for type 0

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Fri, Sep 9, 2016 at 2:25 PM, Amit Langote wrote:
> On 2016/09/06 22:04, Amit Langote wrote:
> > Will fix.
>
> Here is an updated set of patches.
>
> In addition to fixing a couple of bugs reported by Ashutosh and Rajkumar,
> there are a few major changes:
>
> * change the way individual partition bounds are represented internally
>   and the way a collection of partition bounds associated with a
>   partitioned table is exposed to other modules. Especially list
>   partition bounds, which are manipulated more efficiently as discussed
>   at [1].
>
> * \d partitioned_table now shows the partition count, and \d+ lists
>   partition names and their bounds as follows:
>
> \d t6
>            Table "public.t6"
>  Column |       Type        | Modifiers
> --------+-------------------+-----------
>  a      | integer           |
>  b      | character varying |
> Partition Key: LIST (a)
> Number of partitions: 3 (Use \d+ to list them.)
>
> \d+ t6
>                         Table "public.t6"
>  Column |       Type        | Modifiers | Storage  | Stats target | Description
> --------+-------------------+-----------+----------+--------------+-------------
>  a      | integer           |           | plain    |              |
>  b      | character varying |           | extended |              |
> Partition Key: LIST (a)
> Partitions: t6_p1 FOR VALUES IN (1, 2, NULL),
>             t6_p2 FOR VALUES IN (4, 5),
>             t6_p3 FOR VALUES IN (3, 6)
Re: [HACKERS] Declarative partitioning - another take
On Wed, Sep 7, 2016 at 3:58 PM, Amit Langote wrote:
> Hi,
>
> On 2016/09/07 17:56, Rajkumar Raghuwanshi wrote:
> > Hi,
> >
> > I have a query regarding list partitioning.
> >
> > For example, if I want to store employee data in a table, with "IT" dept
> > employees in the emp_p1 partition, "HR" dept employees in the emp_p2
> > partition, and employees belonging to neither of these two going into the
> > emp_p3 partition.
> >
> > In this case I am not sure how to create the partitioned table. Do we have
> > something like the UNBOUNDED we have for range partitioning, or the
> > "DEFAULT" that Oracle has for list partitioning?
> >
> > create table employee (empid int, dept varchar) partition by list(dept);
> > create table emp_p1 partition of employee for values in ('IT');
> > create table emp_p2 partition of employee for values in ('HR');
> > create table emp_p3 partition of employee for values in (??);
>
> Sorry, no such feature is currently offered. It might be possible to
> offer something like a "default" list partition which accepts values other
> than those specified for other existing partitions. However, that means
> if we add a non-default list partition after a default one has been
> created, the implementation must make sure that it moves any values from
> the default partition that now belong to the newly created partition.
>
> Thanks,
> Amit

Thanks for clarifying. But I can see the same data-moving problem when adding a new partition to a range-partitioned table with an unbounded partition. In the example given here, I initially created a partitioned table "test" with test_p3 having an unbounded end; later I tried to change test_p3 to contain the values 7-9 only and to add a new partition test_p4 containing 10 to unbounded.
--create partition table and some leafs
CREATE TABLE test (a int, b int) PARTITION BY RANGE(a);
CREATE TABLE test_p1 PARTITION OF test FOR VALUES START (1) END (4);
CREATE TABLE test_p2 PARTITION OF test FOR VALUES START (4) END (7);
CREATE TABLE test_p3 PARTITION OF test FOR VALUES START (7) END UNBOUNDED;

--insert some data
INSERT INTO test SELECT i, i*10 FROM generate_series(1,3) i;
INSERT INTO test SELECT i, i*10 FROM generate_series(4,6) i;
INSERT INTO test SELECT i, i*10 FROM generate_series(7,13) i;

SELECT tableoid::regclass,* FROM test;
 tableoid | a  |  b
----------+----+-----
 test_p1  |  1 |  10
 test_p1  |  2 |  20
 test_p1  |  3 |  30
 test_p2  |  4 |  40
 test_p2  |  5 |  50
 test_p2  |  6 |  60
 test_p3  |  7 |  70
 test_p3  |  8 |  80
 test_p3  |  9 |  90
 test_p3  | 10 | 100
 test_p3  | 11 | 110
 test_p3  | 12 | 120
 test_p3  | 13 | 130
(13 rows)

--cannot attach test_p4 directly because of an overlap error, hence detached test_p3 and then attached test_p4
ALTER TABLE test DETACH PARTITION test_p3;
CREATE TABLE test_p4 (like test);
ALTER TABLE test ATTACH PARTITION test_p4 FOR VALUES start (10) end UNBOUNDED;

--now cannot attach test_p3 because of the overlap with test_p4, causing data loss from the main test table
ALTER TABLE test ATTACH PARTITION test_p3 FOR VALUES start (7) end (10);
ERROR:  source table contains a row violating partition bound specification
ALTER TABLE test ATTACH PARTITION test_p3 FOR VALUES start (7) end (13);
ERROR:  partition "test_p3" would overlap partition "test_p4"

SELECT tableoid::regclass,* FROM test;
 tableoid | a |  b
----------+---+----
 test_p1  | 1 | 10
 test_p1  | 2 | 20
 test_p1  | 3 | 30
 test_p2  | 4 | 40
 test_p2  | 5 | 50
 test_p2  | 6 | 60
(6 rows)
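Assuming this pre-committed START/END syntax, one manual way to avoid stranding the rows in the detached test_p3 (a sketch, not something the patch does automatically) is to move the out-of-range rows back through the parent before re-attaching:

```sql
-- Hypothetical cleanup sketch: move rows with a >= 10 out of the detached
-- test_p3 and re-route them through the parent (they land in test_p4);
-- test_p3 then holds only 7-9 and can be attached for exactly that range.
BEGIN;
WITH moved AS (
    DELETE FROM test_p3 WHERE a >= 10 RETURNING *
)
INSERT INTO test SELECT * FROM moved;
ALTER TABLE test ATTACH PARTITION test_p3 FOR VALUES START (7) END (10);
COMMIT;
```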
Re: [HACKERS] Declarative partitioning - another take
Hi, I have a query regarding list partitioning, For example if I want to store employee data in a table, with "IT" dept employee in emp_p1 partition, "HR" dept employee in emp_p2 partition and if employee belongs to other than these two, should come in emp_p3 partition. In this case not sure how to create partition table. Do we have something like we have UNBOUNDED for range partition or oracle have "DEFAULT" for list partition. create table employee (empid int, dept varchar) partition by list(dept); create table emp_p1 partition of employee for values in ('IT'); create table emp_p2 partition of employee for values in ('HR'); create table emp_p3 partition of employee for values in (??); Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation On Tue, Sep 6, 2016 at 6:37 PM, Amit Langote wrote: > On Tue, Sep 6, 2016 at 9:19 PM, Robert Haas wrote: > > On Wed, Aug 31, 2016 at 1:05 PM, Amit Langote > > wrote: > >>> However, it seems a lot better to make it a property of the parent > >>> from a performance point of view. Suppose there are 1000 partitions. > >>> Reading one toasted value for pg_class and running stringToNode() on > >>> it is probably a lot faster than scanning pg_inherits to find all of > >>> the child partitions and then doing an index scan to find the pg_class > >>> tuple for each and then decoding all of those tuples and assembling > >>> them into some data structure. > >> > >> Seems worth trying. One point that bothers me a bit is how do we > enforce > >> partition bound condition on individual partition basis. For example > when > >> a row is inserted into a partition directly, we better check that it > does > >> not fall outside the bounds and issue an error otherwise. With current > >> approach, we just look up a partition's bound from the catalog and gin > up > >> a check constraint expression (and cache in relcache) to be enforced in > >> ExecConstraints(). 
With the new approach, I guess we would need to look > >> up the parent's partition descriptor. Note that the checking in > >> ExecConstraints() is turned off when routing a tuple from the parent. > > > > [ Sorry for the slow response. ] > > > > Yeah, that's a problem. Maybe it's best to associate this data with > > the childrels after all - or halfway in between, e.g. augment > > pg_inherits with this information. After all, the performance problem > > I was worried about above isn't really much of an issue: each backend > > will build a relcache entry for the parent just once and then use it > > for the lifetime of the session unless some invalidation occurs. So > > if that takes a small amount of extra time, it's probably not really a > > big deal. On the other hand, if we can't build the implicit > > constraint for the child table without opening the parent, that's > > probably going to cause us some serious inconvenience. > > Agreed. So I will stick with the existing approach. > > Thanks, > Amit > > > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers >
Re: [HACKERS] Declarative partitioning - another take
Hi, I have applied the updated patches given by you and observed the below: in the given example, the t6_p3 partition is not supposed to accept NULL, but I am able to insert one directly, resulting in two NULLs in the table.

--create a partition table
create table t6 (a int, b varchar) partition by list(a);
create table t6_p1 partition of t6 for values in (1,2,null);
create table t6_p2 partition of t6 for values in (4,5);
create table t6_p3 partition of t6 for values in (3,6);

--insert some values
insert into t6 select i,i::varchar from generate_series(1,6) i;
insert into t6 values (null,'A');

--try inserting null into the t6_p3 partition table
insert into t6_p3 values (null,'A');

select tableoid::regclass,* from t6;
 tableoid | a | b
----------+---+---
 t6_p1    | 1 | 1
 t6_p1    | 2 | 2
 t6_p1    |   | A
 t6_p2    | 4 | 4
 t6_p2    | 5 | 5
 t6_p3    | 3 | 3
 t6_p3    | 6 | 6
 t6_p3    |   | A
(8 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Tue, Sep 6, 2016 at 1:37 PM, Amit Langote wrote:
> Hi,
>
> On 2016/09/06 16:13, Ashutosh Bapat wrote:
> > I found a server crash while running make check in the regress folder with
> > this set of patches. The problem is that RelationBuildPartitionKey()'s
> > partexprsrc may be used uninitialized. Initializing it with NIL fixes the
> > crash. Here's a patch to fix it. Came up with the fix after discussion
> > with Amit.
>
> Thanks for the report. Here is a rebased version of the patches including
> your fix (no significant changes from those posted on Aug 26).
>
> Thanks,
> Amit
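A quick way to confirm the anomaly above (a sketch against the same t6 example) is to count NULL-keyed rows per partition; only t6_p1 should ever appear in this output:

```sql
-- With the duplicate-NULL bug, this returns a row for t6_p3 as well as t6_p1.
select tableoid::regclass as partition, count(*) as null_rows
from t6
where a is null
group by 1;
```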
Re: [HACKERS] Postgres_fdw join pushdown - getting server crash in left outer join of three table
Thanks for the commit. I have tested it again. Not getting server crash now. Thanks & Regards, Rajkumar Raghuwanshi QMG, EnterpriseDB Corporation On Mon, May 16, 2016 at 9:38 PM, Robert Haas wrote: > On Fri, May 13, 2016 at 6:40 PM, Michael Paquier > wrote: > > On Fri, May 13, 2016 at 11:14 PM, Robert Haas > wrote: > >> So, barring objections, I intend to apply the attached fixup patch, > >> which replaces Michael's logic with Ashutosh's logic and rewrites the > >> comments such to be much more explicit. > > > > Re-oops. I didn't check what was committed to be honest. And it should > > not have been my version, definitely. > > OK, committed. > > -- > Robert Haas > EnterpriseDB: http://www.enterprisedb.com > The Enterprise PostgreSQL Company >
[HACKERS] postgres_fdw : Not able to update foreign table referring to a local table's view when use_remote_estimate = true
Hi, I observed the below in postgres_fdw.

Observation: Updating a foreign table which refers to a local table's view (with use_remote_estimate = true) fails with the below error.

ERROR:  column "ctid" does not exist
CONTEXT:  Remote SQL command: EXPLAIN SELECT c1, ctid FROM public.lt_view FOR UPDATE

create extension postgres_fdw;
create server link_server foreign data wrapper postgres_fdw options (host 'localhost', dbname 'postgres', port '5447');
create user mapping for public server link_server;

create table lt (c1 integer, c2 integer);
insert into lt values (1,null);
create view lt_view as select * from lt;
create foreign table ft (c1 integer, c2 integer) server link_server options (table_name 'lt_view');

--alter server with use_remote_estimate 'false'
alter server link_server options (add use_remote_estimate 'false');

--update foreign table referring to local view -- able to update
update ft set c2 = c1;
UPDATE 1

--alter server with use_remote_estimate 'true'
alter server link_server options (SET use_remote_estimate 'true');

--update foreign table referring to local view -- fails, throwing error
update ft set c2 = c1;
psql:/home/edb/Desktop/edb_work/Postgres_Fdw/dml_pushdown_35882/observation_view.sql:24: ERROR:  column "ctid" does not exist
CONTEXT:  Remote SQL command: EXPLAIN SELECT c1, ctid FROM public.lt_view FOR UPDATE

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
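For context, ctid is a physical system column that exists only on real tables, so any remote query that fetches it from a view must fail; this can be confirmed directly on the remote side (minimal sketch):

```sql
-- On the remote server:
SELECT ctid, c1 FROM lt;       -- works: lt is a plain heap table
SELECT ctid, c1 FROM lt_view;  -- ERROR: column "ctid" does not exist
```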
[HACKERS] postgres_fdw : altering foreign table not invalidating prepare statement execution plan.
Hi,

I observed the below in postgres_fdw.

*Observation:* A prepared statement's execution plan is not changing even after altering the foreign table to point to a new schema.

CREATE EXTENSION postgres_fdw;
CREATE SCHEMA s1;
create table s1.lt (c1 integer, c2 varchar);
insert into s1.lt values (1, 's1.lt');
CREATE SCHEMA s2;
create table s2.lt (c1 integer, c2 varchar);
insert into s2.lt values (1, 's2.lt');
CREATE SERVER link_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres', port '5447', use_remote_estimate 'true');
CREATE USER MAPPING FOR public SERVER link_server;
create foreign table ft (c1 integer, c2 varchar) server link_server options (schema_name 's1', table_name 'lt');
ANALYZE ft;

PREPARE stmt_ft AS select c1,c2 from ft;
EXECUTE stmt_ft;
 c1 |  c2
----+-------
  1 | s1.lt
(1 row)

--changed foreign table ft pointing schema from s1 to s2
ALTER foreign table ft options (SET schema_name 's2', SET table_name 'lt');
ANALYZE ft;

EXPLAIN (COSTS OFF, VERBOSE) EXECUTE stmt_ft;
              QUERY PLAN
--------------------------------------
 Foreign Scan on public.ft
   Output: c1, c2
   Remote SQL: SELECT c1, c2 FROM s1.lt
(3 rows)

EXECUTE stmt_ft;
 c1 |  c2
----+-------
  1 | s1.lt
(1 row)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
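[Editor's aside: to make the staleness mechanism easier to see, here is a toy Python model of a prepared-statement cache. All names here are made up for illustration; this is not the actual postgres_fdw or plan-cache code. A plan built once at PREPARE time keeps targeting the old remote table unless something invalidates it when the foreign table's options change.]

```python
# Toy model of the reported bug: a prepared statement caches the deparsed
# remote SQL, so a later ALTER FOREIGN TABLE is not reflected.
# All names are illustrative, not postgres_fdw internals.

def deparse_remote_sql(options):
    """Build the remote query from the foreign table's current options."""
    return f"SELECT c1, c2 FROM {options['schema_name']}.{options['table_name']}"

class NaivePlanCache:
    """Builds the plan once and never revalidates it (the buggy behavior)."""
    def __init__(self):
        self._plan = None

    def get_plan(self, options):
        if self._plan is None:
            self._plan = deparse_remote_sql(options)
        return self._plan

class InvalidatingPlanCache:
    """Rebuilds the plan whenever the options it was built from change."""
    def __init__(self):
        self._plan = None
        self._built_from = None

    def get_plan(self, options):
        if self._plan is None or self._built_from != options:
            self._plan = deparse_remote_sql(options)
            self._built_from = dict(options)
        return self._plan

options = {'schema_name': 's1', 'table_name': 'lt'}
naive, fixed = NaivePlanCache(), InvalidatingPlanCache()
naive.get_plan(options)                               # PREPARE stmt_ft
fixed.get_plan(options)
options = {'schema_name': 's2', 'table_name': 'lt'}   # ALTER FOREIGN TABLE
print(naive.get_plan(options))  # still targets s1.lt (stale plan)
print(fixed.get_plan(options))  # targets s2.lt
```

The naive cache reproduces the report: EXECUTE after the ALTER still runs the remote SQL against s1.lt.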
Re: [HACKERS] Postgres_fdw join pushdown - INNER - FULL OUTER join combination generating wrong result
Thanks Ashutosh for the patch. I have applied and tested it, and I am now getting the proper result for the reported issue.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Tue, Mar 29, 2016 at 7:50 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
>>> *Observation:* Inner join and full outer join combination on a table
>>> generating wrong result.
>>>
>>> SELECT * FROM lt;
>>>  c1
>>> ----
>>>   1
>>>   2
>>> (2 rows)
>>>
>>> SELECT * FROM ft;
>>>  c1
>>> ----
>>>   1
>>>   2
>>> (2 rows)
>>>
>>> \d+ ft
>>>                 Foreign table "public.ft"
>>>  Column |  Type   | Modifiers | FDW Options | Storage | Stats target | Description
>>> --------+---------+-----------+-------------+---------+--------------+-------------
>>>  c1     | integer |           |             | plain   |              |
>>> Server: link_server
>>> FDW Options: (table_name 'lt')
>>>
>>> --inner join and full outer join on local tables
>>> SELECT t1.c1,t2.c1,t3.c1 FROM lt t1 INNER JOIN lt t2 ON (t1.c1 = t2.c1) FULL JOIN lt t3 ON (t2.c1 = t3.c1);
>>>  c1 | c1 | c1
>>> ----+----+----
>>>   1 |  1 |  1
>>>   2 |  2 |  2
>>> (2 rows)
>>>
>>> --inner join and full outer join on corresponding foreign tables
>>> SELECT t1.c1,t2.c1,t3.c1 FROM ft t1 INNER JOIN ft t2 ON (t1.c1 = t2.c1) FULL JOIN ft t3 ON (t2.c1 = t3.c1);
>>>  c1 | c1 | c1
>>> ----+----+----
>>>   1 |  1 |  1
>>>   1 |  2 |
>>>   2 |  1 |
>>>   2 |  2 |  2
>>> (4 rows)
>
> Thanks Rajkumar for the detailed report.
>
>> I think the reason for that is in foreign_join_ok. This in that function:
>> wrongly pulls up remote_conds from joining relations in the FULL JOIN
>> case. I think we should not pull up such conditions in the FULL JOIN case.
>
> Right. For a full outer join, since each joining relation acts as outer
> for the other, we can not pull up the quals to either join clauses or other
> clauses. So, in such a case, we will need to encapsulate the joining
> relation with conditions into a subquery. Unfortunately, the current
> deparse logic does not handle this encapsulation. Adding that functionality
> so close to the feature freeze might be risky given the amount of code
> changes required.
>
> PFA a patch with a quick fix. A full outer join with either of the joining
> relations having WHERE conditions (or other clauses) is not pushed down. In
> the particular case that was reported, the bug triggered because of the way
> conditions are handled for an inner join. For an inner join, all the
> conditions in ON as well as WHERE clause are treated like they are part of
> the WHERE clause. This allows pushing down a join even if there are unpushable
> join clauses. But the pushable conditions can be put back into the ON
> clause. This avoids using subqueries while deparsing.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
[HACKERS] Postgres_fdw join pushdown - INNER - FULL OUTER join combination generating wrong result
Hi,

I am testing the postgres_fdw join pushdown feature for PostgreSQL 9.6, and I observed the below issue.

*Observation:* An inner join and full outer join combination on a table generates a wrong result.

SELECT * FROM lt;
 c1
----
  1
  2
(2 rows)

SELECT * FROM ft;
 c1
----
  1
  2
(2 rows)

\d+ ft
                Foreign table "public.ft"
 Column |  Type   | Modifiers | FDW Options | Storage | Stats target | Description
--------+---------+-----------+-------------+---------+--------------+-------------
 c1     | integer |           |             | plain   |              |
Server: link_server
FDW Options: (table_name 'lt')

--inner join and full outer join on local tables
SELECT t1.c1,t2.c1,t3.c1 FROM lt t1 INNER JOIN lt t2 ON (t1.c1 = t2.c1) FULL JOIN lt t3 ON (t2.c1 = t3.c1);
 c1 | c1 | c1
----+----+----
  1 |  1 |  1
  2 |  2 |  2
(2 rows)

--inner join and full outer join on corresponding foreign tables
SELECT t1.c1,t2.c1,t3.c1 FROM ft t1 INNER JOIN ft t2 ON (t1.c1 = t2.c1) FULL JOIN ft t3 ON (t2.c1 = t3.c1);
 c1 | c1 | c1
----+----+----
  1 |  1 |  1
  1 |  2 |
  2 |  1 |
  2 |  2 |  2
(4 rows)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
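[Editor's aside: the expected two-row answer can be cross-checked with a small Python simulation of the join semantics. This is a sketch over the two-row table above; inner_join and full_join are hypothetical helpers, not planner code.]

```python
# Simulate SELECT ... FROM lt t1 INNER JOIN lt t2 ON (t1.c1 = t2.c1)
#                      FULL JOIN lt t3 ON (t2.c1 = t3.c1)
# over lt = {1, 2}. The correct result has two fully matched rows.

def inner_join(left, right):
    """Nested-loop inner join on equality."""
    return [(l, r) for l in left for r in right if l == r]

def full_join(left_pairs, right, key=lambda pair: pair[1]):
    """Full outer join of (t1, t2) pairs against t3 on t2 = t3."""
    rows, matched_right = [], set()
    for pair in left_pairs:
        hits = [r for r in right if r == key(pair)]
        if hits:
            rows.extend((pair[0], pair[1], r) for r in hits)
            matched_right.update(hits)
        else:
            rows.append((pair[0], pair[1], None))  # unmatched left side
    # emit right-side rows that matched nothing, NULL-extended on the left
    rows.extend((None, None, r) for r in right if r not in matched_right)
    return rows

lt = [1, 2]
result = full_join(inner_join(lt, lt), lt)
print(sorted(result))  # [(1, 1, 1), (2, 2, 2)]
```

This matches the local-table plan's output and confirms the four-row pushed-down result is wrong.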
Re: [HACKERS] Postgres_fdw join pushdown - getting server crash in left outer join of three table
Thanks Ashutosh for the patch. I have applied and retested it, and I am no longer getting the server crash.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Mon, Mar 21, 2016 at 8:02 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> Thanks Michael for looking into this.
>
>> In get_useful_ecs_for_relation, it seems to me that this assertion
>> should be removed and replaced by an actual check, because even if
>> right_ec and left_ec are initialized, we cannot be sure that ec_relids
>> contains the relations specified:
>>
>>     /*
>>      * restrictinfo->mergeopfamilies != NIL is sufficient to guarantee
>>      * that left_ec and right_ec will be initialized, per comments in
>>      * distribute_qual_to_rels, and rel->joininfo should only contain ECs
>>      * where this relation appears on one side or the other.
>>      */
>>     if (bms_is_subset(relids, restrictinfo->right_ec->ec_relids))
>>         useful_eclass_list = list_append_unique_ptr(useful_eclass_list,
>>                                                     restrictinfo->right_ec);
>>     else
>>     {
>>         Assert(bms_is_subset(relids, restrictinfo->left_ec->ec_relids));
>>         useful_eclass_list = list_append_unique_ptr(useful_eclass_list,
>>                                                     restrictinfo->left_ec);
>>     }
>
> An EC covers all the relations covered by all the equivalence members it
> contains. In case of a mergejoinable clause for an outer join, an EC may
> cover just a single relation whose column appears on either side of the
> clause. In this case, bms_is_subset() for a given join relation covering a
> single relation in the EC will be false. So, we have to use bms_overlap()
> instead of bms_is_subset(). The caller get_useful_pathkeys_for_rel()
> extracts the equivalence member (if any) which is entirely covered by the
> given relation. Otherwise, you are correct that we would have to convert
> the assertion into a condition. I have added comments in
> get_useful_ecs_for_relation() explaining why.
>
>> See for example the attached (with more tests, including combinations
>> of joins and three-table joins). I have added an open item for 9.6 on
>> the wiki.
>
> Thanks for those tests. Actually, that code is relevant for joins which
> can not be pushed down to the foreign server. For such joins we try to add
> pathkeys which will help merge joins. I have included the relevant tests,
> rewriting them to use local tables so that the entire join is not pushed
> down to the foreign server.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
[HACKERS] Postgres_fdw join pushdown - getting server crash in left outer join of three table
Hi,

I am testing the postgres_fdw join pushdown feature for PostgreSQL 9.6, and I observed the below issue.

*Observation:* A left outer join involving foreign tables more than two times causes a server crash. I added the below statement to contrib/postgres_fdw/postgres_fdw.sql, ran make check, and got a server crash.

-- left outer join of three tables
EXPLAIN (COSTS false, VERBOSE)
SELECT t1.c1,t2.c1,t3.c1 FROM ft2 t1 LEFT JOIN ft2 t2 ON (t1.c1 = t2.c1) LEFT JOIN ft2 t3 ON (t2.c1 = t3.c1) OFFSET 10 LIMIT 10;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
connection to server was lost

I face the same crash with a left outer join, right outer join, or a combination of left and right outer joins over three tables, and also with one local and two foreign tables.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
Re: [HACKERS] Issue with NULLS LAST, with postgres_fdw sort pushdown
Thanks Ashutosh. I retested the issue after applying the given patch; it is fine now.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Wed, Mar 2, 2016 at 2:35 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> Thanks Rajkumar for your report. Let me know if the attached patch fixes
> the issue.
>
> The code did not add a NULLS LAST clause in the case when pk_nulls_first
> is false in the pathkey. PFA the fix for the same. I have also added a few
> tests to postgres_fdw.sql for a few combinations of asc/desc and nulls
> first/last.
>
> On Mon, Feb 29, 2016 at 3:49 PM, Rajkumar Raghuwanshi <rajkumar.raghuwan...@enterprisedb.com> wrote:
>> Hi,
>>
>> I am testing the postgres_fdw sort pushdown feature for PostgreSQL 9.6,
>> and I observed the below issue.
>>
>> *Observation:* When giving the nulls last option with the order by clause
>> as 'desc nulls last', the remote query does not include NULLS LAST and
>> gives a wrong result in 9.6, while 9.5 gives the proper result.
>>
>> For testing, I have a table "fdw_sort_test" on the foreign server, for
>> which a foreign table is created on the local server via postgres_fdw.
>>
>> db2=# select * from fdw_sort_test ;
>>  id | name
>> ----+------
>>   1 | xyz
>>   3 |
>>   2 | abc
>>   4 | pqr
>> (4 rows)
>>
>> on version 9.6:
>>
>> db1=# select * from fdw_sort_test order by name desc nulls last;
>>  id | name
>> ----+------
>>   3 |
>>   1 | xyz
>>   4 | pqr
>>   2 | abc
>> (4 rows)
>>
>> db1=# explain verbose select * from fdw_sort_test order by name desc nulls last;
>>                              QUERY PLAN
>> ----------------------------------------------------------------------
>>  Foreign Scan on public.fdw_sort_test  (cost=100.00..129.95 rows=561 width=122)
>>    Output: id, name
>>    Remote SQL: SELECT id, name FROM public.fdw_sort_test ORDER BY name DESC
>> (3 rows)
>>
>> on version 9.5:
>>
>> db1=# select * from fdw_sort_test order by name desc nulls last;
>>  id | name
>> ----+------
>>   1 | xyz
>>   4 | pqr
>>   2 | abc
>>   3 |
>> (4 rows)
>>
>> db1=# explain verbose select * from fdw_sort_test order by name desc nulls last;
>>                              QUERY PLAN
>> ----------------------------------------------------------------------
>>  Sort  (cost=152.44..153.85 rows=561 width=122)
>>    Output: id, name
>>    Sort Key: fdw_sort_test.name DESC NULLS LAST
>>    ->  Foreign Scan on public.fdw_sort_test  (cost=100.00..126.83 rows=561 width=122)
>>          Output: id, name
>>          Remote SQL: SELECT id, name FROM public.fdw_sort_test
>>
>> *steps to reproduce:*
>>
>> --connect to sql
>> \c postgres postgres
>> --create role and database db1, will act as local server
>> create role db1 password 'db1' superuser login;
>> create database db1 owner=db1;
>> grant all on database db1 to db1;
>>
>> --create role and database db2, will act as foreign server
>> create role db2 password 'db2' superuser login;
>> create database db2 owner=db2;
>> grant all on database db2 to db2;
>>
>> --connect to db2 and create a table
>> \c db2 db2
>> create table fdw_sort_test (id integer, name varchar(50));
>> insert into fdw_sort_test values (1,'
[HACKERS] Issue with NULLS LAST, with postgres_fdw sort pushdown
Hi,

I am testing the postgres_fdw sort pushdown feature for PostgreSQL 9.6, and I observed the below issue.

*Observation:* When giving the nulls last option with the order by clause as 'desc nulls last', the remote query does not include NULLS LAST and gives a wrong result in 9.6, while 9.5 gives the proper result.

For testing, I have a table "fdw_sort_test" on the foreign server, for which a foreign table is created on the local server via postgres_fdw.

db2=# select * from fdw_sort_test ;
 id | name
----+------
  1 | xyz
  3 |
  2 | abc
  4 | pqr
(4 rows)

on version 9.6:

db1=# select * from fdw_sort_test order by name desc nulls last;
 id | name
----+------
  3 |
  1 | xyz
  4 | pqr
  2 | abc
(4 rows)

db1=# explain verbose select * from fdw_sort_test order by name desc nulls last;
                             QUERY PLAN
----------------------------------------------------------------------
 Foreign Scan on public.fdw_sort_test  (cost=100.00..129.95 rows=561 width=122)
   Output: id, name
   Remote SQL: SELECT id, name FROM public.fdw_sort_test ORDER BY name DESC
(3 rows)

on version 9.5:

db1=# select * from fdw_sort_test order by name desc nulls last;
 id | name
----+------
  1 | xyz
  4 | pqr
  2 | abc
  3 |
(4 rows)

db1=# explain verbose select * from fdw_sort_test order by name desc nulls last;
                             QUERY PLAN
----------------------------------------------------------------------
 Sort  (cost=152.44..153.85 rows=561 width=122)
   Output: id, name
   Sort Key: fdw_sort_test.name DESC NULLS LAST
   ->  Foreign Scan on public.fdw_sort_test  (cost=100.00..126.83 rows=561 width=122)
         Output: id, name
         Remote SQL: SELECT id, name FROM public.fdw_sort_test

*steps to reproduce:*

--connect to sql
\c postgres postgres
--create role and database db1, will act as local server
create role db1 password 'db1' superuser login;
create database db1 owner=db1;
grant all on database db1 to db1;

--create role and database db2, will act as foreign server
create role db2 password 'db2' superuser login;
create database db2 owner=db2;
grant all on database db2 to db2;

--connect to db2 and create a table
\c db2 db2
create table fdw_sort_test (id integer, name varchar(50));
insert into fdw_sort_test values (1,'xyz');
insert into fdw_sort_test values (3,null);
insert into fdw_sort_test values (2,'abc');
insert into fdw_sort_test values (4,'pqr');

--connect to db1 and create postgres_fdw
\c db1 db1
create extension postgres_fdw;
create server db2_link_server foreign data wrapper postgres_fdw options (host 'db2_machine_ip', dbname 'db2', port 'db_machine_port_no');
create user mapping for db1 server db2_link_server options (user 'db2', password 'db2');

--create a foreign table
create foreign table fdw_sort_test (id integer, name varchar(50)) server db2_link_server;

--run the below query and check the output
select * from fdw_sort_test order by name desc nulls last;
--check the explain plan
explain verbose select * from fdw_sort_test order by name desc nulls last;

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
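[Editor's aside: PostgreSQL's default null ordering for DESC is NULLS FIRST, so once the deparsed remote query drops the NULLS LAST clause, the remote server sorts the NULL row first, which is exactly the 9.6 output above. A minimal Python model of the two orderings, with None standing in for SQL NULL (a sketch, not the fdw code):]

```python
# Model ORDER BY name DESC with explicit NULL placement over the
# four-row fdw_sort_test table from the report.
rows = [(1, 'xyz'), (3, None), (2, 'abc'), (4, 'pqr')]

def sort_desc(rows, nulls_last):
    """Sort by name descending, placing None per nulls_last."""
    non_null = sorted((r for r in rows if r[1] is not None),
                      key=lambda r: r[1], reverse=True)
    nulls = [r for r in rows if r[1] is None]
    return non_null + nulls if nulls_last else nulls + non_null

# What the buggy remote query computes: "ORDER BY name DESC" with no
# NULLS clause, i.e. PostgreSQL's DESC default of NULLS FIRST.
remote_desc_default = sort_desc(rows, nulls_last=False)
# What the user asked for: DESC NULLS LAST.
desc_nulls_last = sort_desc(rows, nulls_last=True)
print(remote_desc_default)  # the NULL row (3, None) comes first
print(desc_nulls_last)      # the NULL row (3, None) comes last
```

The first list matches the wrong 9.6 output and the second matches the correct 9.5 output, which is why deparsing must emit NULLS LAST explicitly.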