[HACKERS] Remove pre-10 compatibility code in hash index

2017-05-08 Thread Amit Kapila
Commit ea69a0dead5128c421140dc53fac165ba4af8520 has bumped the hash
index version and obviates the need for backward compatibility code
added by commit 293e24e507838733aba4748b514536af2d39d7f2.  This is
mentioned in the commit message; please find attached a patch to remove
the pre-10 compatibility code in hash indexes.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


remove_pre10_compat_hash_index_v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Time based lag tracking for logical replication

2017-05-08 Thread Noah Misch
On Fri, May 05, 2017 at 07:07:26AM +, Noah Misch wrote:
> On Wed, May 03, 2017 at 08:28:53AM +0200, Simon Riggs wrote:
> > On 23 April 2017 at 01:10, Petr Jelinek  
> > wrote:
> > > Hi,
> > >
> > > The time based lag tracking commit [1] added an interface for logging
> > > the progress of replication so that we can report lag as a time interval
> > > instead of just bytes. But that patch didn't include support for the
> > > built-in logical replication.
> > >
> > > So I wrote something that implements this. I didn't much like the API
> > > layering in terms of exporting the walsender's LagTrackerWrite() for use
> > > by the plugin directly. Normally an output plugin does not have to care
> > > whether it's running under the walsender or not; it uses an abstracted
> > > write interface which can be implemented in various ways (that's how
> > > we implement the SQL interface to logical decoding, after all). So I
> > > decided to add another function to the logical decoding write API called
> > > update_progress and call that one from the output plugin. The walsender
> > > then implements that new API to call LagTrackerWrite(), while the SQL
> > > interface simply does not implement it at all. This seems like a cleaner
> > > way of doing it.
> > >
> > > Thoughts?
> > 
> > Agreed, that is cleaner.
> > 
> > I don't see any pacing or comments about it, nor handling of
> > intermediate messages while we process a large transaction.
> > 
> > I'll look at adding some pacing code in WalSndUpdateProgress()
> 
> [Action required within three days.]
> 
> Simon, I understand, from an annotation on the open items list, that you have
> volunteered to own this item.  Please observe the policy on open item
> ownership[1] and send a status update within three calendar days of this
> message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.
> 
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
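The optional update_progress layering Petr describes in the quoted text
above can be sketched roughly as follows. This is only an illustration
of the design idea — the struct and function names here are made up and
are not the actual logical decoding API:

```c
#include <stddef.h>

/* Hypothetical write API: update_progress is optional and may be left
 * NULL by callers (such as the SQL interface) that do not track lag. */
typedef struct WriteAPI
{
	void		(*write_data) (const char *data);
	void		(*update_progress) (long lsn);	/* NULL if not implemented */
} WriteAPI;

static int	lag_updates = 0;

static void
sink_write(const char *data)
{
	(void) data;				/* discard; a real sink would send the data */
}

static void
walsender_update_progress(long lsn)
{
	(void) lsn;					/* a real walsender would feed the lag tracker */
	lag_updates++;
}

/* The output plugin calls only the abstraction; it never needs to know
 * whether it is running under a walsender or the SQL interface. */
static void
plugin_emit(WriteAPI *api, const char *data, long lsn)
{
	api->write_data(data);
	if (api->update_progress != NULL)
		api->update_progress(lsn);
}
```

The point of the NULL check is that the plugin stays agnostic: the
walsender supplies the callback, the SQL interface simply leaves it
unset.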




Re: [HACKERS] modeling parallel contention (was: Parallel Append implementation)

2017-05-08 Thread Haribabu Kommi
On Mon, May 8, 2017 at 11:39 AM, David Rowley 
wrote:

>
> We really need a machine with good IO concurrency, and not too much
> RAM, to test these things out. It could well be that for a sufficiently
> large table we'd want to scan a whole 1GB extent per worker.
>
> I did post a patch to have heap_parallelscan_nextpage() use atomics
> instead of locking over in [1], but I think doing atomics there does
> not rule out also adding batching later. In fact, I think it
> structures things so batching would be easier than it is today.
>

As part of our internal PostgreSQL project, we developed a parallel seq
scan with batch mode only. The problem we faced with batch mode is
making sure that all the parallel workers finish at almost the same
time, which requires a proper distribution of data pages. Otherwise, one
worker may be left doing the last batch alone after all the others have
finished their work, and in that case we cannot achieve good
performance.

With the current approach, by contrast, the most extra work the last
worker can be left doing is scanning the final page of the table.

If we go with batching of 1GB per worker, there is also a chance that
all the data satisfying the query condition falls into a single extent;
in that case too, batching may not yield good results.
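The page-at-a-time behaviour described above can be sketched with C11
atomics. This is a hedged illustration of the dispatch idea, not the
actual heap_parallelscan_nextpage() implementation — the struct and
function names are invented:

```c
#include <stdatomic.h>

/* Workers claim one block at a time with an atomic fetch-add, so the
 * worst-case imbalance at the end of a scan is a single page: whichever
 * worker claims the last block does at most one page of extra work. */
typedef struct ScanDispatch
{
	atomic_long	next_block;
	long		nblocks;
} ScanDispatch;

/* Returns the next block number to scan, or -1 when the scan is done. */
static long
claim_next_page(ScanDispatch *scan)
{
	long		blk = atomic_fetch_add(&scan->next_block, 1);

	return (blk < scan->nblocks) ? blk : -1;
}
```

Batching would replace the `1` in the fetch-add with a batch size, which
is why atomics do not rule out adding batching later.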

Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] SUBSCRIPTIONS and pg_upgrade

2017-05-08 Thread Michael Paquier
On Sat, May 6, 2017 at 12:01 AM, Peter Eisentraut
 wrote:
> On 5/2/17 21:44, Noah Misch wrote:
>>> I wonder if we should have an --no-subscriptions option, now that they
>>> are dumped by default, just like we have --no-blobs, --no-owner,
>>> --no-password, --no-privileges, --no-acl, --no-tablespaces, and
>>> --no-security-labels.  It seems like there is probably a fairly large
>>> use case for excluding subscriptions even if you have sufficient
>>> permissions to dump them.

Reading the thread for the first time, I am +1 on that. It feels more
natural to include subscriptions by default when taking a dump, and it
is useful as well to be able to override the default so that they are
not included.

On Tue, May 9, 2017 at 1:26 AM, Peter Eisentraut
 wrote:
> On 5/6/17 14:50, Noah Misch wrote:
>>> I consider this item low priority and don't plan to work on it before
>>> all the other open items under logical replication are addressed.
>>>
>>> (Here, working on it would include thinking further about whether it is
>>> necessary at all or what alternatives might look like.)
>>
>> That's informative, but it's not a valid status update.  This PostgreSQL 10
>> open item is past due for your status update.  Kindly send a valid status
>> update within 24 hours.  Refer to the policy on open item ownership:
>> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
>
> Fair enough.  I have set myself a reminder to report back on May 30.

I think that it would be nice to fix that even before beta, so
attached is a patch to add --no-subscriptions to pg_dump, pg_dumpall
and pg_restore.
-- 
Michael


dump-no-subs.patch
Description: Binary data



Re: [HACKERS] Re: [Pkg-postgresql-public] Debian "postgresql-common" config check issue with pg10

2017-05-08 Thread Fabien COELHO


Hallo Christoph,


>> I've relaxed the regexps there. It's still not exactly what the PG
>> parser accepts, but I think it's a superset now, so we should be safe.
>
> Will it be released with the next pg10dev compilation?

The answer is "yes", and it works as expected.

Vielen Dank again for the fix!

--
Fabien.




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-08 Thread Peter Eisentraut
On 5/8/17 23:23, Peter Eisentraut wrote:
> The way this uses RESTRICT and CASCADE appears to be backwards from its
> usual meaning.  Normally, CASCADE when dropping an object that is still
> used by others will cause those other objects to be dropped.  The
> equivalent here would be DROP REPLICATION SLOT + CASCADE would drop the
> subscription.
> 
> What we want to simulate instead is an "auto" dependency of the slot on
> the subscription.  So you can drop the slot separately (subject to other
> restrictions), and it is dropped automatically when the subscription is
> dropped.  To avoid that, you can disassociate the slot from the
> subscription, which you have implemented.
> 
> I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
> associated with the subscription, it should be there when we drop the
> subscription.  Otherwise, the user has to disassociate the slot and take
> care of it manually.  So just keep the "cascade" behavior.
> 
> Similarly, I wouldn't check first whether the slot exists.  If the
> subscription is associated with the slot, it should be there.

Here is your patch amended for that.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v2-0001-Remove-the-NODROP-SLOT-option-from-DROP-SUBSCRIPT.patch
Description: invalid/octet-stream



Re: [HACKERS] Should pg_current_wal_location() become pg_current_wal_lsn()

2017-05-08 Thread Peter Eisentraut
On 5/1/17 08:10, David Rowley wrote:
> On 20 April 2017 at 07:29, Euler Taveira  wrote:
>> 2017-04-19 1:32 GMT-03:00 Michael Paquier :
>>>
>>> I vote for "location" -> "lsn". I would expect complains about the
>>> current inconsistency at some point, and the function names have been
>>> already changed for this release..
> 
> OK, so I've created a draft patch which does this.

After reading this patch, I see that

a) The scope of the compatibility break is expanded significantly beyond
what was already affected by the xlog->wal renaming.

b) Generally, things read less nicely and look more complicated.

So I still think we'd be better off leaving things the way they are.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-08 Thread Peter Eisentraut
On 5/8/17 17:55, Petr Jelinek wrote:
> On 08/05/17 22:55, Peter Eisentraut wrote:
>> On 5/5/17 13:01, Petr Jelinek wrote:
>>> What do you think of attached. I actually did add RESTRICT/CASCADE, in a
>>> way that if there is slot RESTRICT will refuse to drop subscription and
>>> CASCADE will try to drop it. Still all errors.
>>>
>>> The new way to not drop slot is to set slot_name to NONE which is new
>>> value that I invented (inspiration from OWNED BY sequences) which
>>> basically disassociates the subscription from slot.
>>>
>>> It's slightly less automatic this way but perhaps that's not a bad
>>> thing, at least nobody will drop slots by mistake while we still make
>>> sure that slots are not left over without anybody noticing.
>>
>> I think this is OK.  Could you send a version of this patch based on
>> master please?
>>
> 
> Here it is.

The way this uses RESTRICT and CASCADE appears to be backwards from its
usual meaning.  Normally, CASCADE when dropping an object that is still
used by others will cause those other objects to be dropped.  The
equivalent here would be DROP REPLICATION SLOT + CASCADE would drop the
subscription.

What we want to simulate instead is an "auto" dependency of the slot on
the subscription.  So you can drop the slot separately (subject to other
restrictions), and it is dropped automatically when the subscription is
dropped.  To avoid that, you can disassociate the slot from the
subscription, which you have implemented.

I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
associated with the subscription, it should be there when we drop the
subscription.  Otherwise, the user has to disassociate the slot and take
care of it manually.  So just keep the "cascade" behavior.

Similarly, I wouldn't check first whether the slot exists.  If the
subscription is associated with the slot, it should be there.
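The behaviour proposed above can be outlined in pseudo-real C. This is a
hypothetical sketch of the decision flow only, with made-up names — not
the actual DropSubscription() code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch: if a slot is associated with the subscription, it is expected
 * to exist and is dropped along with it ("cascade" behaviour); setting
 * slot_name to NONE (modeled as NULL here) disassociates the slot and
 * leaves it for the user to handle manually. */
typedef struct Subscription
{
	const char *name;
	const char *slot_name;		/* NULL models "slot_name = NONE" */
} Subscription;

static bool
drop_replication_slot(const char *slot_name)
{
	/* stand-in for the command sent to the remote walsender */
	printf("dropping slot \"%s\"\n", slot_name);
	return true;
}

static bool
drop_subscription(Subscription *sub)
{
	if (sub->slot_name != NULL && !drop_replication_slot(sub->slot_name))
	{
		fprintf(stderr, "could not drop slot \"%s\"\n", sub->slot_name);
		return false;			/* error out; never leave an orphaned slot */
	}
	return true;
}
```

With no RESTRICT/CASCADE keyword, the single code path mirrors an "auto"
dependency: the slot follows the subscription unless explicitly
disassociated first.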

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_dump emits ALTER TABLE ONLY partitioned_table

2017-05-08 Thread Amit Langote
On 2017/05/08 12:42, Stephen Frost wrote:
> Amit,
> 
> * Amit Langote (langote_amit...@lab.ntt.co.jp) wrote:
>> Thanks for committing the patch after improving it quite a bit, and sorry
>> that I couldn't reply promptly during the last week due to vacation.
> 
> No worries, hopefully you have an opportunity to review the additional
> changes I made and understand why they were necessary.  Certainly, feel
> free to reach out if you have any questions or notice anything else
> which should be improved.

Do you intend to push the other patch to add regression tests for the
non-inherited constraints?  Here it is attached again for you to look over.

Thanks,
Amit
>From a27689d445d88462dc499f04ed6779cb686a9588 Mon Sep 17 00:00:00 2001
From: amit 
Date: Tue, 18 Apr 2017 10:59:35 +0900
Subject: [PATCH] Improve test coverage of partition constraints

Better exercise the code manipulating partition constraints a bit
differently from the traditional inheritance children.
---
 src/bin/pg_dump/t/002_pg_dump.pl   | 10 +---
 src/test/regress/expected/create_table.out | 41 ++
 src/test/regress/sql/create_table.sql  | 21 ---
 3 files changed, 55 insertions(+), 17 deletions(-)

diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index ce0c9ef54d..08ae7eb83b 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -4791,12 +4791,16 @@ qr/CREATE TRANSFORM FOR integer LANGUAGE sql \(FROM SQL WITH FUNCTION pg_catalog
 		catch_all=> 'CREATE ... commands',
 		create_order => 91,
 		create_sql => 'CREATE TABLE dump_test_second_schema.measurement_y2006m2
-	   PARTITION OF dump_test.measurement FOR VALUES
-	   FROM (\'2006-02-01\') TO (\'2006-03-01\');',
+	   PARTITION OF dump_test.measurement (
+	  unitsales DEFAULT 0 CHECK (unitsales >= 0)
+	   )
+	   FOR VALUES FROM (\'2006-02-01\') TO (\'2006-03-01\');',
 		regexp => qr/^
 			\Q-- Name: measurement_y2006m2;\E.*\n
 			\Q--\E\n\n
-			\QCREATE TABLE measurement_y2006m2 PARTITION OF dump_test.measurement\E\n
+			\QCREATE TABLE measurement_y2006m2 PARTITION OF dump_test.measurement (\E\n
+			\s+\QCONSTRAINT measurement_y2006m2_unitsales_check CHECK ((unitsales >= 0))\E\n
+			\)\n
 			\QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n
 			/xm,
 		like => {
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index dda0d7ee5d..79603166d1 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -618,16 +618,42 @@ CREATE TABLE part_b PARTITION OF parted (
 ) FOR VALUES IN ('b');
 ERROR:  column "b" specified more than once
 CREATE TABLE part_b PARTITION OF parted (
-	b NOT NULL DEFAULT 1 CHECK (b >= 0),
-	CONSTRAINT check_a CHECK (length(a) > 0)
+	b NOT NULL DEFAULT 1,
+	CONSTRAINT check_a CHECK (length(a) > 0),
+	CONSTRAINT check_b CHECK (b >= 0)
 ) FOR VALUES IN ('b');
 NOTICE:  merging constraint "check_a" with inherited definition
 -- conislocal should be false for any merged constraints
-SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_b'::regclass AND conname = 'check_a';
+SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_b'::regclass;
  conislocal | coninhcount 
 +-
  f  |   1
-(1 row)
+ t  |   0
+(2 rows)
+
+-- Once check_b is added to the parent, it should be made non-local for part_b
+ALTER TABLE parted ADD CONSTRAINT check_b CHECK (b >= 0);
+NOTICE:  merging constraint "check_b" with inherited definition
+SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_b'::regclass;
+ conislocal | coninhcount 
++-
+ f  |   1
+ f  |   1
+(2 rows)
+
+-- Neither check_a nor check_b are droppable from part_b
+ALTER TABLE part_b DROP CONSTRAINT check_a;
+ERROR:  cannot drop inherited constraint "check_a" of relation "part_b"
+ALTER TABLE part_b DROP CONSTRAINT check_b;
+ERROR:  cannot drop inherited constraint "check_b" of relation "part_b"
+-- And dropping it from parted should leave no trace of them on part_b, unlike
+-- traditional inheritance where they will be left behind, because they would
+-- be local constraints.
+ALTER TABLE parted DROP CONSTRAINT check_a, DROP CONSTRAINT check_b;
+SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_b'::regclass;
+ conislocal | coninhcount 
++-
+(0 rows)
 
 -- specify PARTITION BY for a partition
 CREATE TABLE fail_part_col_not_found PARTITION OF parted FOR VALUES IN ('c') PARTITION BY RANGE (c);
@@ -643,9 +669,6 @@ CREATE TABLE part_c_1_10 PARTITION OF part_c FOR VALUES FROM (1) TO (10);
  a  | text|   |  | 
  b  | integer |   | not null | 1
 Partition of: parted FOR VALUES IN ('b')
-Check constraints:
-"check_a" 

Re: [HACKERS] Logical replication - TRAP: FailedAssertion in pgstat.c

2017-05-08 Thread Masahiko Sawada
On Tue, May 9, 2017 at 1:26 AM, Petr Jelinek
 wrote:
> On 08/05/17 17:52, Masahiko Sawada wrote:
>> On Fri, May 5, 2017 at 8:13 PM, Petr Jelinek
>>  wrote:
>>> On 03/05/17 13:23, Erik Rijkers wrote:
 On 2017-05-03 08:17, Petr Jelinek wrote:
> On 02/05/17 20:43, Robert Haas wrote:
>> On Thu, Apr 20, 2017 at 2:58 PM, Peter Eisentraut

>>> code path that calls CommitTransactionCommand() should have one, no?
>>
>> Is there anything left to be committed here?
>>
>
> Afaics the fix was not committed. Peter wanted more comprehensive fix
> which didn't happen. I think something like attached should do the job.

 I'm running my pgbench-over-logical-replication test in chunks of 15
 minutes, with different pgbench -c (num clients) and -s (scale) values.

 With this patch (and nothing else)  on top of master (8f8b9be51fd7 to be
 precise):

> fix-statistics-reporting-in-logical-replication-work.patch

 logical replication is still often failing (as expected, I suppose; it
 seems because of "initial snapshot too large") but indeed I do not see
>>>
>>> Yes that's different thing that we've been discussing a bit in snapbuild
>>> woes thread.
>>>
 the 'TRAP: FailedAssertion in pgstat.c' anymore.

 (If there is any other configuration of patches worth testing please let
 me know)

>>>
>>> Thanks, so the patch works.
>>>
>>
>> I think that we should commit the local transaction that did the initial
>> data copy, and then report the stats as well. Currently the table sync
>> worker doesn't commit the local transaction in LogicalRepSyncTableStart
>> (maybe until applying the commit record?) if its status is changed to
>> SUBREL_STATE_CATCHUP. That's why the table sync worker hits the
>> assertion failure.
>>
>
> That would fix the assert as well, yes, but it would also mean that if
> the worker crashed between the initial copy and the end of catchup, there
> would be no way to restart it without manual intervention from the user,
> since the synchronization position would be lost. Hence the fix I
> proposed, which does it differently and keeps the whole sync in a single
> transaction.

I understand now that the data synchronization, including applying
logical records after the state changes to SUBREL_STATE_CATCHUP, should
be done in a single transaction. Thank you for the explanation.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] snapbuild woes

2017-05-08 Thread Erik Rijkers

On 2017-05-05 02:00, Andres Freund wrote:

> Could you have a look?

Running tests with these three patches:


0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
fix-statistics-reporting-in-logical-replication-work.patch

(on top of 44c528810)

I test by 15-minute pgbench runs while there is a logical replication 
connection. Primary and replica are on the same machine.


I have seen errors on 3 different machines (where error means: at least
1 of the 4 pgbench tables is not md5-equal). It seems that better, faster
machines yield fewer errors.


Normally I see in pg_stat_replication (on master) one process in state 
'streaming'.


  pid  |     wal     | replay_loc  |   diff   |   state   |   app   | sync_state
 16495 | 11/EDBC     | 11/EA3FEEE8 | 58462488 | streaming | derail2 | async


Often there are another two processes in pg_stat_replication that remain 
in state 'startup'.


In the failing sessions the 'streaming'-state process is missing; there
are only the two processes that are, and remain, in 'startup'.


FWIW, below is the output of a successful and a failed run:


-- successful run:
creating tables...
1590400 of 2500000 tuples (63%) done (elapsed 5.34 s, remaining 3.05 s)
2500000 of 2500000 tuples (100%) done (elapsed 9.63 s, remaining 0.00 s)
vacuum...
set primary keys...
done.
create publication pub1 for all tables;
create subscription sub1 connection 'port=6972 application_name=derail2' 
publication pub1 with (disabled);

alter subscription sub1 enable;
-- pgbench -c 90 -j 8 -T 900 -P 180 -n   --  scale 25
progress: 180.0 s, 82.5 tps, lat 1086.845 ms stddev 3211.785
progress: 360.0 s, 25.4 tps, lat 3469.040 ms stddev 6297.440
progress: 540.0 s, 28.9 tps, lat 3131.438 ms stddev 4288.130
progress: 720.0 s, 27.5 tps, lat 3285.024 ms stddev 4113.841
progress: 900.0 s, 47.2 tps, lat 1896.698 ms stddev 2182.695
transaction type: 
scaling factor: 25
query mode: simple
number of clients: 90
number of threads: 8
duration: 900 s
number of transactions actually processed: 38175
latency average = 2128.606 ms
latency stddev = 3948.634 ms
tps = 42.151205 (including connections establishing)
tps = 42.151589 (excluding connections establishing)
-- waiting 0s... (always)
 port | pg_stat_replication | pid  |     wal     | replay_loc  | diff | ?column? |   state   |   app   | sync_state
 6972 | pg_stat_replication | 2545 | 18/432B2180 | 18/432B2180 |    0 | t        | streaming | derail2 | async


2017.05.08 23:19:22
-- getting md5 (cb)
6972 a,b,t,h:  2500000 25 250  38175  b2ba48b53 b3788a837 d1afac950 d4abcc72e master
6973 a,b,t,h:  2500000 25 250  38175  b2ba48b53 b3788a837 d1afac950 d4abcc72e replica ok  bee2312c7

2017.05.08 23:20:48

 port | pg_stat_replication | pid  |     wal     | replay_loc  |   diff   | ?column? |   state   |   app   | sync_state
 6972 | pg_stat_replication | 2545 | 18/4AEEC8C0 | 18/453FBD20 | 95357856 | f        | streaming | derail2 | async




-- failure:
creating tables...
1777100 of 2500000 tuples (71%) done (elapsed 5.06 s, remaining 2.06 s)
2500000 of 2500000 tuples (100%) done (elapsed 7.41 s, remaining 0.00 s)
vacuum...
set primary keys...
done.
create publication pub1 for all tables;
create subscription sub1 connection 'port=6972 application_name=derail2' 
publication pub1 with (disabled);

alter subscription sub1 enable;
 port | pg_stat_replication |  pid  |     wal     | replay_loc | diff | ?column? |  state  |   app   | sync_state
 6972 | pg_stat_replication | 11945 | 18/5E2913D0 |            |      |          | catchup | derail2 | async


-- pgbench -c 90 -j 8 -T 900 -P 180 -n   --  scale 25
progress: 180.0 s, 78.4 tps, lat 1138.348 ms stddev 2884.815
progress: 360.0 s, 69.2 tps, lat 1309.716 ms stddev 2594.231
progress: 540.0 s, 59.0 tps, lat 1519.146 ms stddev 2033.400
progress: 720.0 s, 62.9 tps, lat 1421.854 ms stddev 1775.066
progress: 900.0 s, 57.0 tps, lat 1575.693 ms stddev 1681.800
transaction type: 
scaling factor: 25
query mode: simple
number of clients: 90
number of threads: 8
duration: 900 s
number of transactions actually processed: 58846
latency average = 1378.259 ms
latency stddev = 2304.159 ms
tps = 65.224168 (including connections establishing)
tps = 65.224788 (excluding connections establishing)
-- waiting 0s... (always)
 port | pg_stat_replication |  pid  |     wal     | replay_loc | diff | ?column? |  state  |   app   | sync_state
 6972 | pg_stat_replication | 11948 | 18/7469A038 |            |      |          | startup | derail2 | async
 6972 | pg_stat_replication | 12372 | 18/7469A038 |            |      |          | startup | derail2 | async




During my tests, I keep an eye on pg_stat_replication (refreshing every
2s), and when I see those two 'startup' lines in pg_stat_replication
without any 'streaming' line I know the test
Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-08 Thread Petr Jelinek
On 08/05/17 22:55, Peter Eisentraut wrote:
> On 5/5/17 13:01, Petr Jelinek wrote:
>> What do you think of attached. I actually did add RESTRICT/CASCADE, in a
>> way that if there is slot RESTRICT will refuse to drop subscription and
>> CASCADE will try to drop it. Still all errors.
>>
>> The new way to not drop slot is to set slot_name to NONE which is new
>> value that I invented (inspiration from OWNED BY sequences) which
>> basically disassociates the subscription from slot.
>>
>> It's slightly less automatic this way but perhaps that's not a bad
>> thing, at least nobody will drop slots by mistake while we still make
>> sure that slots are not left over without anybody noticing.
> 
> I think this is OK.  Could you send a version of this patch based on
> master please?
> 

Here it is.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


Remove-the-NODROP-SLOT-option-from-DROP-SUBSCRIPTION-0508.patch
Description: binary/octet-stream



Re: [HACKERS] Google Summer Of Code 2017 & PostgreSQL

2017-05-08 Thread David Fetter
On Sat, May 06, 2017 at 07:54:51PM -0400, Stephen Frost wrote:
> Greetings!
> 
> I am very pleased to be able to announce that the PostgreSQL project has
> been approved for 4 GSoC projects this year!

Thanks for the hard work!

Google's process has gotten a lot more rigorous over the years, and as an
under-appreciated consequence, people who would rather be doing other
things on their projects get to shoulder that burden.

> The four projects are:
> 
> - Add errors handling and parallel execution to COPY
> 
>   Student: Alexey Kondratov
>   Mentors: Anastasia Lubennikova, Alexander Korotkov, Alvaro Herrera
> 
> - Eliminate O(N^2) scaling from rw-conflict tracking in serializable
>   transactions
> 
>   Student: Mengxing Liu
>   Mentors: Alvaro Herrera, Kevin Grittner
> 
> - Explicitly support predicate locks in index AMs besides btree
> 
>   Student: Shubham Barai
>   Mentors: Andrey Borodin, Kevin Grittner
> 
> - Foreign Keys for Array Elements
> 
>   Student: Mark Rofail
>   Mentors: Alexander Korotkov, Stephen Frost, Alvaro Herrera

Good luck to all, and welcome to Alexey, Mengxing, Shubham, and Mark!

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] pgbench tap tests & minor fixes

2017-05-08 Thread Fabien COELHO


Hello,


> st->cnt -- number of transactions finished, successfully or failed, right?

Or *skipped*. That is why I changed the declaration comment.

> one iteration of for(;;) is for one transaction or really less. Right?

No, under -R -L late schedules are simply skipped.

> We can't process two transactions in one iteration of this loop. So we
> can't increase st->cnt more than once during one iteration?

Yes we can, if they are skipped because the scheduling was too late to
execute them on time.

>	processXactStats(thread, st, &now, true, agg);
>
> Let's imagine that thread->throttle_trigger is now_us - 10 000,
> latency_limit is 5 000 and throttle_delay is 100.
>
> How many times would you call processXactStats in this while loop?

Around 100 times to catch up.

> And each time it would do st->cnt++

Yep. The "true" argument tells the stats that the transaction was
skipped, though. It is just counting late transactions that could not be
processed.

> And this while loop is inside for(;;) in which, as I said above, we can
> do st->cnt++ not more than once. I see no logic here.

The logic is that at most one *real* transaction is actually performed
in the for loop, but there may be any number of "skipped" (unexecuted)
ones, which is fine.

> PS This is a fast reply. Maybe it will make things clear fast, either
> for me or for you. I will carefully answer your full letter tomorrow (I
> hope nothing will prevent me from doing it)

Today was a holiday in France. Tomorrow is not.
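To make the skipped-transaction accounting concrete, here is a minimal
sketch of the catch-up loop with a fixed wait standing in for
getPoissonRand(). The names mirror the pgbench fragment quoted upthread,
but this is an illustration under those assumptions, not the real code:

```c
/* Count the late (skipped) schedules without executing them: exactly
 * one per skipped transaction, all inside a single iteration of the
 * outer for (;;) loop. */
static int
catch_up(long *throttle_trigger, long now_us, long latency_limit,
		 long throttle_delay)
{
	int			skipped = 0;

	while (*throttle_trigger < now_us - latency_limit)
	{
		skipped++;				/* processXactStats(..., skipped = true, ...) */
		*throttle_trigger += throttle_delay;	/* next rendez-vous */
	}
	return skipped;
}
```

With the trigger 10 000 µs behind, a latency limit of 5 000 µs and a
fixed 100 µs delay, the loop records 50 skipped transactions before the
schedule catches up to within the limit — many st->cnt increments, yet
at most one real transaction per outer iteration.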

--
Fabien.




Re: [HACKERS] Re: logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-08 Thread Peter Eisentraut
On 5/7/17 04:21, Noah Misch wrote:
> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.

I think we have a workable patch, but it needs some rebasing.  I hope to
have this sorted by Wednesday.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-08 Thread Peter Eisentraut
On 5/5/17 13:01, Petr Jelinek wrote:
> What do you think of attached. I actually did add RESTRICT/CASCADE, in a
> way that if there is slot RESTRICT will refuse to drop subscription and
> CASCADE will try to drop it. Still all errors.
> 
> The new way to not drop slot is to set slot_name to NONE which is new
> value that I invented (inspiration from OWNED BY sequences) which
> basically disassociates the subscription from slot.
> 
> It's slightly less automatic this way but perhaps that's not a bad
> thing, at least nobody will drop slots by mistake while we still make
> sure that slots are not left over without anybody noticing.

I think this is OK.  Could you send a version of this patch based on
master please?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pgbench tap tests & minor fixes

2017-05-08 Thread Nikolay Shaplov
On 8 May 2017 at 22:08:51, Fabien COELHO wrote:

> >	while (thread->throttle_trigger < now_us - latency_limit &&
> >		   /* with -t, do not overshoot */
> >		   (nxacts <= 0 || st->cnt < nxacts))
> 
>  ...
> 
> >	if (nxacts > 0 && st->cnt >= nxacts)
> >	{
> >		st->state = CSTATE_FINISHED;
> >		break;
> >	}

st->cnt -- the number of transactions finished, whether successful or failed,
right?

One iteration of for(;;) handles one transaction, or really less, right? We
can't process two transactions in one iteration of this loop. So we can't
increase st->cnt more than once during one iteration?


So let's look at the while loop:

while (thread->throttle_trigger < now_us - latency_limit &&
       /* with -t, do not overshoot */
       (nxacts <= 0 || st->cnt < nxacts))
{
    processXactStats(thread, st, , true, agg);
    /* next rendez-vous */
    wait = getPoissonRand(thread, throttle_delay);
    thread->throttle_trigger += wait;
    st->txn_scheduled = thread->throttle_trigger;
}


Let's imagine that thread->throttle_trigger is now_us - 10000,
latency_limit is 5000 and throttle_delay is 100.

How many times would you call processXactStats in this while loop?
And each time it would do st->cnt++.

And this while loop is inside for(;;), in which, as I said above, we can do
st->cnt++ not more than once. I see no logic here.


PS This is a fast reply. Maybe it will make things clear fast, either for me
or for you. I will carefully answer your full letter tomorrow (I hope nothing
will prevent me from doing it).

-- 
Nikolay Shaplov, independent Perl & C/C++ developer. Available for hire.




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-08 Thread Robert Haas
On Thu, May 4, 2017 at 4:40 PM, Sven R. Kunze  wrote:
> It yields
>
> CREATE TABLE p1 PARTITION OF test DEFAULT PARTITION BY LIST(b);
>
> This reads to me like "DEFAULT PARTITION".
>
> I can imagine a lot of confusion when those queries are encountered in the
> wild. I know this thread is about creating a default partition but I were to
> propose a minor change in the following direction, I think confusion would
> be greatly avoided:
>
> CREATE TABLE p1 PARTITION OF test AS DEFAULT PARTITIONED BY LIST(b);
>
> I know it's a bit longer but I think those 4 characters might serve
> readability in the long term. It was especially confusing to see PARTITION
> in two positions serving two different functions.

Well, we certainly can't make that change just for default partitions.
I mean, that would be non-orthogonal, right?  You can't say that the
way to subpartition is to write "PARTITION BY strategy" when the table
is unpartitioned or is a non-default partition but "PARTITIONED BY
strategy" when it is a default partition.  That would certainly not be
a good way of confusing users less, and would probably result in a
variety of special cases in places like ruleutils.c or pg_dump, plus
some weasel-wording in the documentation.  We COULD do a general
change from "CREATE TABLE table_name PARTITION BY strategy" to "CREATE
TABLE table_name PARTITIONED BY strategy".  I don't have any
particular arguments against that except that the current syntax is
more like Oracle, which might count for something, and maybe the fact
that we're a month after feature freeze.  Still, if we want to change
that, now would be the time; but I favor leaving it alone.

I don't have a big objection to adding AS.  If that's the majority
vote, fine; if not, that's OK, too.  I can see it might be a bit more
clear in the case you mention, but it might also just be a noise word
that we don't really need.  There don't seem to be many uses of AS
that would pose a risk of actual grammar conflicts here.  I can
imagine someone wanting to use CREATE TABLE ... PARTITION BY ... AS
SELECT ... to create and populate a partition in one command, but that
wouldn't be a conflict because it'd have to go AFTER the partition
specification.  In the DEFAULT case, you'd end up with something like

CREATE TABLE p1 PARTITION OF test AS DEFAULT AS 

...which is neither great nor horrible syntax-wise and maybe not such
a good thing to support anyway since it would have to lock the parent
to add the partition and then keep the lock on the parent while
populating the new child (ouch).

So I guess I'm still in favor of the CREATE TABLE p1 PARTITION OF test
DEFAULT syntax, but if it ends up being AS DEFAULT instead, I can live
with that.

Other opinions?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgbench tap tests & minor fixes

2017-05-08 Thread Fabien COELHO


Hello Alvaro,

>> Here is a v3, with less files. I cannot say I find it better, but it
>> still works.
>>
>> The "command_likes" function has been renamed "command_checks".

> Do parts of this need to be backpatched?

I would not bother too much about backpatching.

> I notice that you're patching pgbench.c, probably to fix some bug(s);

The bug fix part is about small issues that I noticed while writing
extensive tests. Probably nobody would have noticed otherwise for some
time.

> is the idea that we would backpatch all the new tests on whatever old
> branches need the bugfixes too? If so, how far back do the fixes need to
> go?

I'd say 9.6. There have been quite some changes and significant
restructuring on pgbench wrt prior versions.

> ISTM TestLib::command_checks() needs a comment explaining what it does.
> Its API seems pretty opaque.

Ok. That I can do. I'm wondering about Windows portability, which I cannot
check.


--
Fabien.




[HACKERS] Re: [Pkg-postgresql-public] Debian "postgresql-common" config check issue with pg10

2017-05-08 Thread Fabien COELHO


Hallo Christoph,


>>   password_encryption = scram-sha-256

> Hmm. Naïvely I would have assumed this would be missing quotes :)

That's what I thought at first, but the comments, documentation and code
were saying otherwise, so I dug into the debian packaging instead.

> I've relaxed the regexps there. It's still not exactly what the PG
> parser accepts, but I think it's a superset now, so we should be safe.

Ok.

> Thanks for spotting and reporting!

Thanks for the fix! Will it be released with the next pg10dev compilation?

--
Fabien.


Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-08 Thread Jeevan Ladhe
Hi Robert,

Thanks for your explanation.

On Mon, May 8, 2017 at 9:56 PM, Robert Haas  wrote:

> On Thu, May 4, 2017 at 4:28 PM, Jeevan Ladhe
>  wrote:
> > While reviewing the code I was trying to explore more cases, and here
> > comes an open question to my mind:
> > should we allow the default partition table to be partitioned further?
>
> I think yes.  In general, you are allowed to partition a partition,
> and I can't see any justification for restricting that for default
> partitions when we allow it everywhere else.
>
> > If we allow it (as in the current case), then observe the following case,
> > where I have defined a default partition which is further partitioned on
> > a different column.
> >
> > postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
> > CREATE TABLE
> > postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6, 7, 8);
> > CREATE TABLE
> > postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY LIST(b);
> > CREATE TABLE
> > postgres=# INSERT INTO test VALUES (20, 24, 12);
> > ERROR:  no partition of relation "test_pd" found for row
> > DETAIL:  Partition key of the failing row contains (b) = (24).
> >
> > Note, that it does not allow inserting the tuple(20, 24, 12) because
> though
> > a=20
> > would fall in default partition i.e. test_pd, table test_pd itself is
> > further
> > partitioned and does not have any partition satisfying b=24.
>
> Right, that looks like correct behavior.  You would have gotten the
> same result if you had tried to insert into test_pd directly.
>
> > Further if I define a default partition for table test_pd, then the tuple
> > gets inserted.
>
> That also sounds correct.
>
> > Doesn't this sound like the whole purpose of having DEFAULT partition on
> > test table is defeated?
>
> Not to me.  It's possible to do lots of silly things with partitioned
> tables.  For example, one case that we talked about before is that you
> can define a range partition for, say, VALUES (0) TO (100), and then
> subpartition it and give the subpartitions bounds which are outside
> the range 0-100.  That's obviously silly and will lead to failures
> inserting tuples, but we chose not to try to prohibit it because it's
> not really broken, just useless.  There are lots of similar cases
> involving other features.  For example, you can apply an inherited
> CHECK (false) constraint to a table, which makes it impossible for
> that table or any of its children to ever contain any rows; that is
> probably a dumb configuration.  You can create two unique indexes with
> exactly the same definition; unless you're creating a new one with the
> intent of dropping the old one, that doesn't make sense.  You can
> define a trigger that always throws an ERROR and then another trigger
> which runs later that modifies the tuple; the second will never be run
> because the first one will always kill the transaction before we get
> there.  Those things are all legal, but often unuseful.  Similarly
> here.  Defining a default list partition and then subpartitioning it
> by list is not likely to be a good schema design, but it doesn't mean
> we should try to disallow it.  It is important to distinguish between
> things that are actually *broken* (like a partitioning configuration
> where the tuples that can be inserted into a partition manually differ
> from the ones that are routed to it automatically) and things that are
> merely *lame* (like creating a multi-level partitioning hierarchy when
> a single level would have done the job just as well).  The former
> should be prevented by the code, while the latter is at most a
> documentation issue.


I agree with you that it is up to the user how he decides to partition an
already partitioned table, and also that we should have a demarcation
between things to be handled by code and things to be left to common sense
and the ability to define a good schema.

I am OK with the current behavior, provided we have at least a one-liner in
the documentation alerting the user that partitioning the default partition
will limit the ability to route tuples that do not fit in any other
partition.

Regards,
Jeevan Ladhe


Re: [HACKERS] Not getting error if ALTER SUBSCRIPTION syntax is wrong.

2017-05-08 Thread Peter Eisentraut
On 5/6/17 08:44, Petr Jelinek wrote:
>> We could check validity of the connection string though to complain
>> immediately like we do in CREATE.
> 
> The attached does exactly that.

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Re: [Pkg-postgresql-public] Debian "postgresql-common" config check issue with pg10

2017-05-08 Thread Christoph Berg
Re: Fabien COELHO 2017-05-08 
> Thus I naïvely added:
> 
>   password_encryption = scram-sha-256

Hmm. Naïvely I would have assumed this would be missing quotes :)

> The result is:
> 
>   Error: Invalid line 88 in /etc/postgresql/10/main/postgresql.conf:
> »password_encryption = scram-sha-256«
> 
> However, it works if I put 'scram-sha-256' (with simple quotes).
> 
> The underlying issue is that the '-' character breaks the config checker,
> ISTM that the simple value regex in function "read_conf_file" in module
> "PgCommon.pm" should be extended to allow more chars in unquoted strings, to
> be consistent with lexer definitions in "src/backend/utils/misc/guc-file.l".

I've relaxed the regexps there. It's still not exactly what the PG
parser accepts, but I think it's a superset now, so we should be safe.

https://anonscm.debian.org/cgit/pkg-postgresql/postgresql-common.git/commit/?id=93a2ec91ea83e427fc2e68789e572864e158a32e

> In passing, I would like to point out that the French quotation chevrons
> (guillemets) used on the wrong sides and without spacing is probably eye
> watering pain to any French reader, maybe like using ß in place of B in a
> text. Also utf8 chars might not work properly under some terminal encodings.
> Maybe using simple ascii ">>" and "<<" for the messages would also be more
> portable?

I think »« are used the other way round in German vs. French, that's
probably why it was like that ;). Anyway, we are not quoting output in
most of the error() calls, so I've simply dropped the quotes.

Thanks for spotting and reporting!

Christoph




[HACKERS] Debian "postgresql-common" config check issue with pg10

2017-05-08 Thread Fabien COELHO


Hello Peter,

Although this is really a small debian packaging issue, I cc pgdev
because it illustrates unintended consequences of trivial changes.


Thanks (again!) to the great and up-to-date apt.postgresql.org repository, 
I've tried to test the new scram-sha-256 feature. For that I looked in 
"postgres.conf" and found:


  #password_encryption = md5  # md5, scram-sha-256 or plain

Thus I naïvely added:

  password_encryption = scram-sha-256

After:

  sh> pg_ctlcluster 10 main start

The result is:

  Error: Invalid line 88 in /etc/postgresql/10/main/postgresql.conf:
»password_encryption = scram-sha-256«

However, it works if I put 'scram-sha-256' (with simple quotes).

The underlying issue is that the '-' character breaks the config checker, 
ISTM that the simple value regex in function "read_conf_file" in module 
"PgCommon.pm" should be extended to allow more chars in unquoted strings, 
to be consistent with lexer definitions in 
"src/backend/utils/misc/guc-file.l".


I think that the issue appeared when 'scram' was recently changed to
'scram-sha-256', as it is the first option enum value with a dash;
all other options use '_'.


In passing, I would like to point out that the French quotation chevrons 
(guillemets) used on the wrong sides and without spacing is probably eye 
watering pain to any French reader, maybe like using ß in place of B in a 
text. Also utf8 chars might not work properly under some terminal 
encodings. Maybe using simple ascii ">>" and "<<" for the messages would 
also be more portable?


--
Fabien.


Re: [HACKERS] pgbench tap tests & minor fixes

2017-05-08 Thread Alvaro Herrera
Fabien COELHO wrote:
> 
> Here is a v3, with less files. I cannot say I find it better, but it still
> works.
> 
> The "command_likes" function has been renamed "command_checks".

Do parts of this need to be backpatched?  I notice that you're patching
pgbench.c, probably to fix some bug(s); is the idea that we would
backpatch all the new tests on whatever old branches need the bugfixes
too?  If so, how far back do the fixes need to go?

ISTM TestLib::command_checks() needs a comment explaining what it does.
Its API seems pretty opaque.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-08 Thread Andres Freund
On 2017-05-08 12:28:38 -0400, Peter Eisentraut wrote:
> On 5/8/17 02:58, Noah Misch wrote:
> > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
> > for your status update.  Please reacquaint yourself with the policy on open
> > item ownership[1] and then reply immediately.  If I do not hear from you by
> > 2017-05-09 07:00 UTC, I will transfer this item to release management team
> > ownership without further notice.
> 
> I got side-tracked by the minor releases preparation.  I will work on
> this now and report on Wednesday.

This first needs discussion, rather than work on a patch.  Could you
please reply to
http://archives.postgresql.org/message-id/20170507234334.yimkp6aq2o43wtt2%40alap3.anarazel.de

You've so far not actually replied to the concerns raised in this thread
in a more than superficial way, not responded to questions, and it's
been two weeks.  This is an architectural issue, not just a bug.  We
can't even think of going to beta before it's resolved.  It's understandable
that you don't have time to properly resolve it immediately, but we need to
be able to have a discussion about where to go from here.

- Andres




Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-08 Thread Peter Eisentraut
On 5/8/17 02:58, Noah Misch wrote:
> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
> for your status update.  Please reacquaint yourself with the policy on open
> item ownership[1] and then reply immediately.  If I do not hear from you by
> 2017-05-09 07:00 UTC, I will transfer this item to release management team
> ownership without further notice.

I got side-tracked by the minor releases preparation.  I will work on
this now and report on Wednesday.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical replication - TRAP: FailedAssertion in pgstat.c

2017-05-08 Thread Petr Jelinek
On 08/05/17 18:12, Peter Eisentraut wrote:
> On 5/5/17 07:13, Petr Jelinek wrote:
>> Yes that's different thing that we've been discussing a bit in snapbuild
>> woes thread.
>>
>>> the 'TRAP: FailedAssertion in pgstat.c' anymore.
>>>
>>> (If there is any other configuration of patches worth testing please let
>>> me know)
>>
>> Thanks, so the patch works.
> 
> Committed, with s/xact_started/started_tx/, to match nearby code.
> 

Thanks!

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Logical replication - TRAP: FailedAssertion in pgstat.c

2017-05-08 Thread Petr Jelinek
On 08/05/17 17:52, Masahiko Sawada wrote:
> On Fri, May 5, 2017 at 8:13 PM, Petr Jelinek
>  wrote:
>> On 03/05/17 13:23, Erik Rijkers wrote:
>>> On 2017-05-03 08:17, Petr Jelinek wrote:
 On 02/05/17 20:43, Robert Haas wrote:
> On Thu, Apr 20, 2017 at 2:58 PM, Peter Eisentraut
>>>
>> code path that calls CommitTransactionCommand() should have one, no?
>
> Is there anything left to be committed here?
>

 Afaics the fix was not committed. Peter wanted more comprehensive fix
 which didn't happen. I think something like attached should do the job.
>>>
>> I'm running my pgbench-over-logical-replication test in chunks of 15
>> minutes, with different pgbench -c (num clients) and -s (scale) values.
>>>
>>> With this patch (and nothing else)  on top of master (8f8b9be51fd7 to be
>>> precise):
>>>
 fix-statistics-reporting-in-logical-replication-work.patch
>>>
>>> logical replication is still often failing (as expected, I suppose; it
>> seems because of "initial snapshot too large") but indeed I do not see
>>
>> Yes that's different thing that we've been discussing a bit in snapbuild
>> woes thread.
>>
>>> the 'TRAP: FailedAssertion in pgstat.c' anymore.
>>>
>>> (If there is any other configuration of patches worth testing please let
>>> me know)
>>>
>>
>> Thanks, so the patch works.
>>
> 
> I think that we should commit the local transaction that did initial
> data copy, and then report stat as well. Currently table sync worker
> doesn't commit the local transaction in LogicalRepSyncTableStart
> (maybe until apply commit record?) if its status is changed to
> SUBREL_STATE_CATCHUP. That's why the table sync worker issues
> assertion failure.
> 

That would fix the assert as well yes, but it would also mean that if
the worker crashed between the initial copy and the end of catchup there
would be no way to restart it without manual intervention from user
since the synchronization position would be lost. Hence the fix I
proposed which does it differently and has the whole sync in a single
transaction.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-08 Thread Robert Haas
On Thu, May 4, 2017 at 4:28 PM, Jeevan Ladhe
 wrote:
> While reviewing the code I was trying to explore more cases, and here
> comes an open question to my mind:
> should we allow the default partition table to be partitioned further?

I think yes.  In general, you are allowed to partition a partition,
and I can't see any justification for restricting that for default
partitions when we allow it everywhere else.

> If we allow it (as in the current case), then observe the following case,
> where I have defined a default partition which is further partitioned on a
> different column.
>
> postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
> CREATE TABLE
> postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6, 7, 8);
> CREATE TABLE
> postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY LIST(b);
> CREATE TABLE
> postgres=# INSERT INTO test VALUES (20, 24, 12);
> ERROR:  no partition of relation "test_pd" found for row
> DETAIL:  Partition key of the failing row contains (b) = (24).
>
> Note, that it does not allow inserting the tuple(20, 24, 12) because though
> a=20
> would fall in default partition i.e. test_pd, table test_pd itself is
> further
> partitioned and does not have any partition satisfying b=24.

Right, that looks like correct behavior.  You would have gotten the
same result if you had tried to insert into test_pd directly.

> Further if I define a default partition for table test_pd, then the tuple
> gets inserted.

That also sounds correct.

> Doesn't this sound like the whole purpose of having DEFAULT partition on
> test table is defeated?

Not to me.  It's possible to do lots of silly things with partitioned
tables.  For example, one case that we talked about before is that you
can define a range partition for, say, VALUES (0) TO (100), and then
subpartition it and give the subpartitions bounds which are outside
the range 0-100.  That's obviously silly and will lead to failures
inserting tuples, but we chose not to try to prohibit it because it's
not really broken, just useless.  There are lots of similar cases
involving other features.  For example, you can apply an inherited
CHECK (false) constraint to a table, which makes it impossible for
that table or any of its children to ever contain any rows; that is
probably a dumb configuration.  You can create two unique indexes with
exactly the same definition; unless you're creating a new one with the
intent of dropping the old one, that doesn't make sense.  You can
define a trigger that always throws an ERROR and then another trigger
which runs later that modifies the tuple; the second will never be run
because the first one will always kill the transaction before we get
there.  Those things are all legal, but often unuseful.  Similarly
here.  Defining a default list partition and then subpartitioning it
by list is not likely to be a good schema design, but it doesn't mean
we should try to disallow it.  It is important to distinguish between
things that are actually *broken* (like a partitioning configuration
where the tuples that can be inserted into a partition manually differ
from the ones that are routed to it automatically) and things that are
merely *lame* (like creating a multi-level partitioning hierarchy when
a single level would have done the job just as well).  The former
should be prevented by the code, while the latter is at most a
documentation issue.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SUBSCRIPTIONS and pg_upgrade

2017-05-08 Thread Peter Eisentraut
On 5/6/17 14:50, Noah Misch wrote:
>> I consider this item low priority and don't plan to work on it before
>> all the other open items under logical replication are addressed.
>>
>> (Here, working on it would include thinking further about whether it is
>> necessary at all or what alternatives might look like.)
> 
> That's informative, but it's not a valid status update.  This PostgreSQL 10
> open item is past due for your status update.  Kindly send a valid status
> update within 24 hours.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

Fair enough.  I have set myself a reminder to report back on May 30.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical replication - TRAP: FailedAssertion in pgstat.c

2017-05-08 Thread Peter Eisentraut
On 5/5/17 07:13, Petr Jelinek wrote:
> Yes that's different thing that we've been discussing a bit in snapbuild
> woes thread.
> 
>> the 'TRAP: FailedAssertion in pgstat.c' anymore.
>>
>> (If there is any other configuration of patches worth testing please let
>> me know)
> 
> Thanks, so the patch works.

Committed, with s/xact_started/started_tx/, to match nearby code.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical replication - TRAP: FailedAssertion in pgstat.c

2017-05-08 Thread Masahiko Sawada
On Fri, May 5, 2017 at 8:13 PM, Petr Jelinek
 wrote:
> On 03/05/17 13:23, Erik Rijkers wrote:
>> On 2017-05-03 08:17, Petr Jelinek wrote:
>>> On 02/05/17 20:43, Robert Haas wrote:
 On Thu, Apr 20, 2017 at 2:58 PM, Peter Eisentraut
>>
> code path that calls CommitTransactionCommand() should have one, no?

 Is there anything left to be committed here?

>>>
>>> Afaics the fix was not committed. Peter wanted more comprehensive fix
>>> which didn't happen. I think something like attached should do the job.
>>
>> I'm running my pgbench-over-logical-replication test in chunks of 15
>> minutes, with different pgbench -c (num clients) and -s (scale) values.
>>
>> With this patch (and nothing else)  on top of master (8f8b9be51fd7 to be
>> precise):
>>
>>> fix-statistics-reporting-in-logical-replication-work.patch
>>
>> logical replication is still often failing (as expected, I suppose; it
>> seems because of "initial snapshot too large") but indeed I do not see
>
> Yes that's different thing that we've been discussing a bit in snapbuild
> woes thread.
>
>> the 'TRAP: FailedAssertion in pgstat.c' anymore.
>>
>> (If there is any other configuration of patches worth testing please let
>> me know)
>>
>
> Thanks, so the patch works.
>

I think that we should commit the local transaction that did initial
data copy, and then report stat as well. Currently table sync worker
doesn't commit the local transaction in LogicalRepSyncTableStart
(maybe until apply commit record?) if its status is changed to
SUBREL_STATE_CATCHUP. That's why the table sync worker issues
assertion failure.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] [PATCH] Incremental sort

2017-05-08 Thread Robert Haas
On Fri, May 5, 2017 at 11:13 AM, Alexander Korotkov
 wrote:
> Incremental sort is faster in the vast majority of cases.  It appears to be
> slower only when the whole dataset is one sort group.  In this case incremental
> sort is useless, and it should be considered as misuse of incremental sort.
> Slowdown is related to the fact that we anyway have to do extra comparisons,
> unless we somehow push our comparison result into qsort itself and save some
> cpu cycles (but that would be unreasonable break of encapsulation).  Thus,
> in such cases regression seems to be inevitable anyway.  I think we could
> evade this regression during query planning.  If we see that there would be
> only few groups, we should choose plain sort instead of incremental sort.

I'm sorry that I don't have time to review this in detail right now,
but it sounds like you are doing good work to file down cases where
this might cause regressions, which is great.  Regarding the point in
the paragraph above, I'd say that it's OK for the planner to be
responsible for picking between Sort and Incremental Sort in some way.
It is, after all, the planner's job to decide between different
strategies for executing the same query and, of course, sometimes it
will be wrong, but that's OK as long as it's not wrong too often (or
by too much, hopefully).  It may be a little difficult to get this
right, though, because I'm not sure that the information you need
actually exists (or is reliable).  For example, consider the case
where we need to sort 100m rows and there are 2 groups.  If 1 group
contains 1 row and the other group contains all of the rest, there is
really no point in an incremental sort.  On the other hand, if each
group contains 50m rows and we can get the data presorted by the
grouping column, there might be a lot of point to an incremental sort,
because two 50m-row sorts might be a lot cheaper than one 100m sort.
More generally, it's quite easy to imagine situations where the
individual groups can be quicksorted but sorting all of the rows
requires I/O, even when the number of groups isn't that big.  On the
other hand, the real sweet spot for this is probably the case where
the number of groups is very large, with many single-row groups or
many groups with just a few rows each, so if we can at least get this
to work in those cases that may be good enough.  On the third hand,
when costing aggregation, I think we often underestimate the number of
groups and there might well be similar problems here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Logical replication ApplyContext bloat

2017-05-08 Thread Peter Eisentraut
On 5/5/17 03:09, Noah Misch wrote:
> [Action required within three days.  This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.

The discussion is still ongoing.  I will focus on it this week and
report back on Wednesday.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical replication ApplyContext bloat

2017-05-08 Thread Peter Eisentraut
I'm not sure about some of the details.

I think it would make more sense to reset ApplyMessageContext in
apply_dispatch(), so that it is done consistently after every message
(as the name implies), not only some messages.

Also, perhaps ApplyMessageContext should be a child of
TopTransactionContext.  (You have it as a child of ApplyContext, which
is under TopMemoryContext.)
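The per-message reset being suggested can be pictured with a toy allocator (an abstract Python sketch; PostgreSQL's real mechanism is MemoryContextReset() on a C memory context, and all names below are illustrative):

```python
class ToyContext:
    """Stand-in for a PostgreSQL memory context: allocations are tracked
    and all freed together by reset()."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.chunks = []

    def alloc(self, nbytes):
        self.chunks.append(bytearray(nbytes))

    def reset(self):
        self.chunks.clear()

def apply_dispatch(msg, ctx):
    ctx.alloc(len(msg))   # per-message parsing/work allocations
    # ... dispatch on the message type and apply it ...
    ctx.reset()           # reset consistently after *every* message

top_txn = ToyContext("TopTransactionContext")
apply_msg = ToyContext("ApplyMessageContext", parent=top_txn)

for m in [b"BEGIN", b"INSERT ...", b"UPDATE ...", b"COMMIT"]:
    apply_dispatch(m, apply_msg)

print(len(apply_msg.chunks))  # 0: no bloat accumulates across messages
```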

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PROPOSAL] Use SnapshotAny in get_actual_variable_range

2017-05-08 Thread Dmitriy Sarafannikov

> + else \
> + (snapshotdata).xmin = \
> + TransactionIdLimitedForOldSnapshots(RecentGlobalDataXmin, \
> + relation); \
> 
> I think we don't need to use TransactionIdLimitedForOldSnapshots() as
> that is required to override xmin for table vacuum/pruning purposes.
> 
>> Maybe we need
>> to use GetOldestXmin()?
> 
> It is a costly call as it needs ProcArrayLock, so I don't think it
> makes sense to use it here.

Ok, i agree. Patch is attached.



snapshot_non_vacuumable_v3.patch
Description: Binary data



Re: [HACKERS] PG 10 release notes

2017-05-08 Thread Masahiko Sawada
On Mon, May 8, 2017 at 10:50 PM, Bruce Momjian  wrote:
> On Mon, May  8, 2017 at 12:18:54PM +0900, Masahiko Sawada wrote:
>> On Fri, May 5, 2017 at 8:38 AM, Bruce Momjian  wrote:
>> > On Fri, Apr 28, 2017 at 01:12:34PM +0900, Masahiko Sawada wrote:
>> >> On Tue, Apr 25, 2017 at 10:31 AM, Bruce Momjian  wrote:
>> >> > I have committed the first draft of the Postgres 10 release notes.  They
>> >> > are current as of two days ago, and I will keep them current.  Please
>> >> > give me any feedback you have.
>> >> >
>> >>
>> Related to the following item in the release note, oldestxmin is also
>> reported by VACUUM VERBOSE and autovacuum, which was introduced by
>> >> commit 9eb344faf54a849898d9be012ddfa8204cfeb57c (author is Simon).
>> >>
>> >> * Have VACUUM VERBOSE report the number of skipped frozen pages
>> >> (Masahiko Sawada)
>> >>
>> >> Could we add this item to the release note?
>> >
>> > OK, adjusted text below and commit added above this item:
>> >
>> >Have VACUUM VERBOSE report
>> >the number of skipped frozen pages and oldest xmin (Masahiko Sawada)
>> >
>>
>> Thank you! But actually the main author of this item is Simon; I wasn't
>> involved in it. Maybe we can have it as a separate item.
>
> I just added Simon's name to the same item.
>

Thank you!

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Fix a typo in snapmgr.c

2017-05-08 Thread Masahiko Sawada
On Mon, May 8, 2017 at 10:12 PM, Tom Lane  wrote:
> Simon Riggs  writes:
>> So rearranged code a little to keep it lean.
>
> Didn't you break it with that?  As it now stands, the memcpy will
> copy the nonzero value.
>

Right, I'd missed it. That code should be placed before the first memcpy.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] PG 10 release notes

2017-05-08 Thread Bruce Momjian
On Mon, May  8, 2017 at 12:18:54PM +0900, Masahiko Sawada wrote:
> On Fri, May 5, 2017 at 8:38 AM, Bruce Momjian  wrote:
> > On Fri, Apr 28, 2017 at 01:12:34PM +0900, Masahiko Sawada wrote:
> >> On Tue, Apr 25, 2017 at 10:31 AM, Bruce Momjian  wrote:
> >> > I have committed the first draft of the Postgres 10 release notes.  They
> >> > are current as of two days ago, and I will keep them current.  Please
> >> > give me any feedback you have.
> >> >
> >>
> >> Related to the following item in the release note, oldestxmin is also
> >> reported by VACUUM VERBOSE and autovacuum, which was introduced by
> >> commit 9eb344faf54a849898d9be012ddfa8204cfeb57c (author is Simon).
> >>
> >> * Have VACUUM VERBOSE report the number of skipped frozen pages
> >> (Masahiko Sawada)
> >>
> >> Could we add this item to the release note?
> >
> > OK, adjusted text below and commit added above this item:
> >
> >Have VACUUM VERBOSE report
> >the number of skipped frozen pages and oldest xmin (Masahiko Sawada)
> >
> 
> Thank you! But actually the main author of this item is Simon; I wasn't
> involved in it. Maybe we can have it as a separate item.

I just added Simon's name to the same item.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] [PROPOSAL] Use SnapshotAny in get_actual_variable_range

2017-05-08 Thread Amit Kapila
On Mon, May 8, 2017 at 6:30 PM, Dmitriy Sarafannikov
 wrote:
>
> I think we can use RecentGlobalDataXmin for non-catalog relations and
> RecentGlobalXmin for catalog relations (probably a check similar to
> what we have in heap_page_prune_opt).
>
>
> I took the check from heap_page_prune_opt (maybe this check should be a
> separate function?)
> But it requires initializing the snapshot for a specific relation.
>

+ else \
+ (snapshotdata).xmin = \
+ TransactionIdLimitedForOldSnapshots(RecentGlobalDataXmin, \
+ relation); \

I think we don't need to use TransactionIdLimitedForOldSnapshots() as
that is required to override xmin for table vacuum/pruning purposes.

> Maybe we need
> to use GetOldestXmin()?

It is a costly call as it needs ProcArrayLock, so I don't think it
makes sense to use it here.
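The catalog/non-catalog choice under discussion can be summarized in a small sketch (illustrative Python only; in PostgreSQL this logic lives in C, and RecentGlobalXmin is never newer than RecentGlobalDataXmin because the data horizon may ignore catalog-only snapshots):

```python
def prune_xmin_horizon(is_catalog_relation, recent_global_xmin,
                       recent_global_data_xmin):
    """Pick the xmin horizon the way heap_page_prune_opt does (sketch):
    catalog relations use the conservative RecentGlobalXmin, plain user
    tables may use the less conservative RecentGlobalDataXmin."""
    if is_catalog_relation:
        return recent_global_xmin
    return recent_global_data_xmin

print(prune_xmin_horizon(True, 100, 120),
      prune_xmin_horizon(False, 100, 120))  # 100 120
```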


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Atomics for heap_parallelscan_nextpage()

2017-05-08 Thread Amit Kapila
On Sat, May 6, 2017 at 7:27 PM, David Rowley
 wrote:
>
> I've also noticed that both the atomics patch and unpatched master do
> something that looks a bit weird with synchronous seq-scans. If the
> parallel seq-scan piggybacked on another scan, then subsequent
> parallel scans will start at the same non-zero block location, even
> when no other concurrent scans exist.
>

That is what we expect.  The same is true for non-parallel scans as
well; refer to the code below.

heapgettup_pagemode()
{
..

finished = (page == scan->rs_startblock) ||
(scan->rs_numblocks != InvalidBlockNumber ? --scan->rs_numblocks == 0 : false);


/*
* Report our new scan position for synchronization purposes. We
* don't do that when moving backwards, however. That would just
* mess up any other forward-moving scanners.
*
* Note: we do this before checking for end of scan so that the
* final state of the position hint is back at the start of the
* rel.  That's not strictly necessary, but otherwise when you run
* the same query multiple times the starting position would shift
* a little bit backwards on every invocation, which is confusing.
* We don't guarantee any specific ordering in general, though.
*/
if (scan->rs_syncscan)
ss_report_location(scan->rs_rd, page);
..
}
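The wrap-around behaviour that excerpt implements can be illustrated in isolation (a Python sketch of the block-visiting order only, not of the actual heap access code):

```python
def seqscan_block_order(nblocks, startblock):
    """Visit every block exactly once, starting at startblock and wrapping,
    finishing when the scan comes back around to its start block."""
    page = startblock
    visited = []
    while True:
        visited.append(page)
        page = (page + 1) % nblocks
        # the position hint (ss_report_location analogue) would be reported
        # here, so it ends up back at startblock when the scan finishes
        if page == startblock:
            break
    return visited

order = seqscan_block_order(10, 7)  # piggybacked on a scan that was at block 7
print(order)  # [7, 8, 9, 0, 1, 2, 3, 4, 5, 6]
```

A later scan that picks up the hint therefore also starts at block 7, which matches the behaviour observed with parallel scans.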


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Page Scan Mode in Hash Index

2017-05-08 Thread Ashutosh Sharma
Hi,

On Tue, Apr 4, 2017 at 3:56 PM, Amit Kapila  wrote:
> On Sun, Apr 2, 2017 at 4:14 AM, Ashutosh Sharma  wrote:
>>
>> Please note that these patches needs to be applied on top of  [1].
>>
>
> Few more review comments:
>
> 1.
> - page = BufferGetPage(so->hashso_curbuf);
> + blkno = so->currPos.currPage;
> + if (so->hashso_bucket_buf == so->currPos.buf)
> + {
> + buf = so->currPos.buf;
> + LockBuffer(buf, BUFFER_LOCK_SHARE);
> + page = BufferGetPage(buf);
> + }
>
> Here, you are assuming that only bucket page can be pinned, but I
> think that assumption is not right.  When _hash_kill_items() is called
> before moving to next page, there could be a pin on the overflow page.
> You need some logic to check if the buffer is pinned, then just lock
> it.  I think once you do that, it might not be convenient to release
> the pin at the end of this function.

Yes, there are a few cases where we might have a pin on overflow pages
too, and we need to handle such cases in _hash_kill_items. I have taken
care of this in the attached v7 patch. Thanks.

>
> Add some comments on top of _hash_kill_items to explain the new
> changes or say some thing like "See _bt_killitems for details"

Added some more comments on the new changes.

>
> 2.
> + /*
> + * We save the LSN of the page as we read it, so that we know whether it
> + * safe to apply LP_DEAD hints to the page later.  This allows us to drop
> + * the pin for MVCC scans, which allows vacuum to avoid blocking.
> + */
> + so->currPos.lsn = PageGetLSN(page);
> +
>
> The second part of above comment doesn't make sense because vacuum's
> will still be blocked because we hold the pin on primary bucket page.

That's right. It doesn't make sense because we won't allow vacuum to
start. I have removed it.

>
> 3.
> + {
> + /*
> + * No more matching tuples were found. return FALSE
> + * indicating the same. Also, remember the prev and
> + * next block number so that if fetching tuples using
> + * cursor we remember the page from where to start the
> + * scan.
> + */
> + so->currPos.prevPage = (opaque)->hasho_prevblkno;
> + so->currPos.nextPage = (opaque)->hasho_nextblkno;
>
> You can't read opaque without having lock and by this time it has
> already been released.

I have corrected it. Please refer to the attached v7 patch.

> Also, I think if you want to save position for
> cursor movement, then you need to save the position of last bucket
> when scan completes the overflow chain, however as you have written it
> will be always invalid block number. I think there is similar problem
> with prevblock number.

Did you mean the last bucket or the last page? If it is the last page,
then I am already storing it.

>
> 4.
> +_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber 
> offnum,
> +   OffsetNumber maxoff, ScanDirection dir)
> +{
> + HashScanOpaque so = (HashScanOpaque) scan->opaque;
> + IndexTuple  itup;
> + int itemIndex;
> +
> + if (ScanDirectionIsForward(dir))
> + {
> + /* load items[] in ascending order */
> + itemIndex = 0;
> +
> + /* new page, relocate starting position by binary search */
> + offnum = _hash_binsearch(page, so->hashso_sk_hash);
>
> What is the need to find offset number when this function already has
> that as an input parameter?

It's not required. I have removed it.

>
> 5.
> +_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber 
> offnum,
> +   OffsetNumber maxoff, ScanDirection dir)
>
> I think maxoff is not required to be passed a parameter to this
> function as you need it only for forward scan. You can compute it
> locally.

It is required for both forward and backward scan. However, I am not
passing it to _hash_load_qualified_items() now.

>
> 6.
> +_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber 
> offnum,
> +   OffsetNumber maxoff, ScanDirection dir)
> {
> ..
> + if (ScanDirectionIsForward(dir))
> + {
> ..
> + while (offnum <= maxoff)
> {
> ..
> + if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup) &&
> + _hash_checkqual(scan, itup))
> + {
> + /* tuple is qualified, so remember it */
> + _hash_saveitem(so, itemIndex, offnum, itup);
> + itemIndex++;
> + }
> +
> + offnum = OffsetNumberNext(offnum);
> ..
>
> Why are you traversing the whole page even when there is no match?
> There is a similar problem in backward scan. I think this is a blunder.

Fixed. Please check the attached
'0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev7.patch'

>
> 7.
> + if (so->currPos.buf == so->hashso_bucket_buf ||
> + so->currPos.buf == so->hashso_split_bucket_buf)
> + {
> + so->currPos.prevPage = InvalidBlockNumber;
> + LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);
> + }
> + else
> + {
> + so->currPos.prevPage = (opaque)->hasho_prevblkno;
> + _hash_relbuf(rel, so->currPos.buf);
> + }
> +
> + so->currPos.nextPage = (opaque)->hasho_nextblkno;
>
> What makes you think it is safe to read opaque after releasing the lock?

Nothing makes me think to read opaque 

Re: [HACKERS] Detecting schema changes during logical replication

2017-05-08 Thread Daniele Varrazzo
On Mon, May 8, 2017 at 3:48 AM, Craig Ringer
 wrote:

> Sounds like you're reimplementing pglogical
> (http://2ndquadrant.com/pglogical) on top of a json protocol.

The fact the protocol is JSON is more a detail, but it's a good start
as it's human-readable.

> [...]
> I have no reason to object to your doing it yourself, and you're welcome to
> use pglogical as a reference for how to do things (see the license). It just
> seems like a waste.

Logical Replication, for the first time, offers a way to implement a
replication solution that is not several layers away from the
database. Or even: for the first time, it's something I understand.

Using the logical replication we can perform some manipulation of the
data I will want to use (tables not necessarily in the same places,
schemas not necessarily matching). In particular one not-really-minor
annoyance of the current system is that adding a column of the master
regularly breaks the replica, and pglogical doesn't resolve this
problem. We currently use certain features of Londiste (tables living
in different schemas, slightly different data types with the same
textual representation, extra columns on the slave...) that are
against pglogical requirements, but which can be implemented with no
problem in a customised replication solution built on top of streaming
replication.

All in all I'm more thrilled by the idea of having a database throwing
a stream of changes at me in a format I can reinterpret, allowing me
to write in a target database with a high degree of configurability in
the middle (i.e. I just have a Python script receiving the data,
munging them and then performing queries), than a complete but
schema-rigid replication solution.


-- Daniele




Re: [HACKERS] Fix a typo in snapmgr.c

2017-05-08 Thread Tom Lane
Simon Riggs  writes:
> So rearranged code a little to keep it lean.

Didn't you break it with that?  As it now stands, the memcpy will
copy the nonzero value.

regards, tom lane




Re: [HACKERS] [PROPOSAL] Use SnapshotAny in get_actual_variable_range

2017-05-08 Thread Dmitriy Sarafannikov
> I think we can use RecentGlobalDataXmin for non-catalog relations and
> RecentGlobalXmin for catalog relations (probably a check similar to
> what we have in heap_page_prune_opt).

I took the check from heap_page_prune_opt (maybe this check should be a
separate function?), but it requires initializing the snapshot for a
specific relation. Maybe we need to use GetOldestXmin()?

New version of patch is attached.

snapshot_non_vacuumable_v2.patch
Description: Binary data


Re: [HACKERS] no test coverage for ALTER FOREIGN DATA WRAPPER name HANDLER ...

2017-05-08 Thread Ashutosh Bapat
On Fri, May 5, 2017 at 6:08 AM, Mark Dilger  wrote:
> Hackers,
>
> just FYI, I cannot find any regression test coverage of this part of the 
> grammar, not
> even in the contrib/ directory or TAP tests.  I was going to submit a patch 
> to help out,
> and discovered it is not so easy to do, and perhaps that is why the coverage 
> is missing.
>
> My apologies for the noise if this is already common knowledge.  Also, if I'm 
> wrong,
> I'd appreciate a pointer to the test that I am failing to find.
>

It depends on what you want to test. You could write a simple test
in postgres_fdw or file_fdw where you specify the same handler as
CREATE FOREIGN DATA WRAPPER command (basically a no-op) to check
whether the command succeeds. If you want to do any more serious
testing, it will require defining at least some of the FDW hooks that will
be returned by the handler, and then exercising those hooks.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Should pg_current_wal_location() become pg_current_wal_lsn()

2017-05-08 Thread Neha Khatri
On Sat, May 6, 2017 at 4:09 PM, Tom Lane  wrote:

> Noah Misch  writes:
> > On Fri, May 05, 2017 at 03:36:39AM +1200, David Rowley wrote:
> >> Do any of the committers who voted for this change feel inclined to
> >> pick this patch up?
>
> > I'll echo that question.  This open item lacks a clear owner.  One might
> argue
> > that 806091c caused it by doing the backward-compatibility breaks that
> > inspired this patch, but that's a link too tenuous to create ownership.
>
> If no one else takes this, I will pick it up --- but I don't anticipate
> having any time for it until after Monday's back-branch release wraps.
>

[In case forgotten] pg_controldata and pg_waldump interfaces should also be
considered for this standardization.

Following are pg_controldata interfaces that might require change:

  Latest checkpoint location:
  Prior checkpoint location:
  Latest checkpoint's REDO location:
  Minimum recovery ending location:
  Backup start location:
  Backup end location:

The pg_waldump help options(and probably documentation) would also require
change:
  -e, --end=RECPTR   stop reading at log position RECPTR
  -s, --start=RECPTR start reading at log position RECPTR

Regards,
Neha


Re: [HACKERS] proposal psql \gdesc

2017-05-08 Thread Pavel Stehule
2017-05-08 12:59 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> After giving it some thoughts, I see three possible solutions:
>>>
>>> 0. Do nothing about it.
>>>I would prefer the prepare is cleaned up.
>>>
>>> 1. assign a special name, eg "_psql_gdesc_", so that
>>>DEALLOCATE "_psql_gdesc_" can be issued afterwards.
>>>
>>> [...]
>>>
>>
>> The doc says about unnamed prepared statements - any new unnamed prepared
>> statement deallocates previous by self. From psql environment is not
>> possible to create unnamed prepared statement.
>>
>
> That is a good point. It seems that it is not possible to execute it
> either.
>
> I prefer @0 agaisnt @1 because workflow is simple and robust. If unnamed PP
>> doesn't exist, then it will be created, else it will be replaced. @1 has
>> a little bit more complex workflow, because there is no command like
>> DEALLOCATE IF EXISTS, so I have to ensure deallocation in all possible
>> ways.
>>
>
> ISTM that it is only if the PQprepare succeeded, so there should be only
> one point, at the end of the corresponding OK condition?
>
>> Another reason for @0 is that there is no need to generate an auxiliary
>> name.
>>
>
> Yes. I do not like special names either. But I do not like keeping objects
> alive if they are dead. Not sure which is worse.
>
> My opinion in this case is not too strong - just I see the advantages of @0
>> (robust and simple) nice. The question is about cost of unwanted allocated
>> PP to end of session.
>>
>
> My opinion is not strong either, it is more the principle that I do not
> like to let things forever live in the server while they are known dead.
>
> Hmmm. Strange. For some obscure reason, the unnamed prepared statement
> does not show in pg_prepared_statements:
>

Looks like that is by design. An unnamed PP is close to a PP created by PL/pgSQL.


>
>   calvin=# PREPARE test AS SELECT 2;
>   calvin=# EXECUTE test;
> -- 2
>   calvin=# SELECT 1 AS one \gdesc
> -- one | integer
>   calvin=# SELECT * FROM  pg_prepared_statements ;
> -- just one row:
> -- test | PREPARE test AS SELECT 2; │7..
>
>
> Conclusion: Maybe let it as solution 0 for the time being, with a comment
> telling that it will be cleaned on the next unnamed PQprepare, and the
> committer will have its opinion.
>

good decision.

Thank you for the review. I'll send a new version soon.

Regards

Pavel


>
> --
> Fabien.


Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Masahiko Sawada
On Mon, May 8, 2017 at 8:53 PM, Erik Rijkers  wrote:
> On 2017-05-08 13:13, Masahiko Sawada wrote:
>>
>> On Mon, May 8, 2017 at 7:14 PM, Erik Rijkers  wrote:
>>>
>>> On 2017-05-08 11:27, Masahiko Sawada wrote:


>
>>>
>>> FWIW, running
>>>
 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
 0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
 fix-statistics-reporting-in-logical-replication-work.patch
>>>
>>>
>>> (on top of 44c528810)
>>
>>
>> Thanks, which thread are these patches attached on?
>>
>
> The first two patches are here:
> https://www.postgresql.org/message-id/20170505004237.edtahvrwb3uwd5rs%40alap3.anarazel.de
>
> and last one:
> https://www.postgresql.org/message-id/22cc402c-88eb-fa35-217f-0060db2c72f0%402ndquadrant.com
>
> ( I have to include that last one or my tests fail within minutes. )

Thank you! I will look at these patches.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Erik Rijkers

On 2017-05-08 13:13, Masahiko Sawada wrote:
> On Mon, May 8, 2017 at 7:14 PM, Erik Rijkers  wrote:
>> On 2017-05-08 11:27, Masahiko Sawada wrote:
>>
>> FWIW, running
>>
>>> 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
>>> 0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
>>> fix-statistics-reporting-in-logical-replication-work.patch
>>
>> (on top of 44c528810)
>
> Thanks, which thread are these patches attached on?

The first two patches are here:
https://www.postgresql.org/message-id/20170505004237.edtahvrwb3uwd5rs%40alap3.anarazel.de

and last one:
https://www.postgresql.org/message-id/22cc402c-88eb-fa35-217f-0060db2c72f0%402ndquadrant.com

( I have to include that last one or my tests fail within minutes. )


Erik Rijkers




Re: [HACKERS] logical replication deranged sender

2017-05-08 Thread Petr Jelinek
On 08/05/17 01:17, Jeff Janes wrote:
> After dropping a subscription, it says it succeeded and that it dropped
> the slot on the publisher.
> 
> But the publisher still has the slot, and a full-tilt process described
> by ps as 
> 
> postgres: wal sender process jjanes [local] idle in transaction
> 
> Strace shows that this process is doing nothing but opening, reading,
> lseek, and closing from pg_wal, and calling sbrk.  It never sends anything.
> 
> This is not how it should work, correct?
> 

No, and I don't see how this happens though, we only report success if
the publisher side said that DROP_REPLICATION_SLOT succeeded. So far I
don't see anything in source that would explain this. I will need to
reproduce it first to see what's happening (wasn't able to do that yet,
but it might just need more time since you say it does not always happen).

> 
> The subscribing server's logs shows:
> 
> 21270  2017-05-07 16:04:16.517 PDT LOG:  process 21270 still waiting for
> AccessShareLock on relation 6100 of database 0 after 1000.051 ms
> 21270  2017-05-07 16:04:16.517 PDT DETAIL:  Process holding the lock:
> 25493. Wait queue: 21270.
> 21270  2017-05-07 16:04:16.520 PDT LOG:  process 21270 acquired
> AccessShareLock on relation 6100 of database 0 after 1003.608 ms
> 25493 DROP SUBSCRIPTION 2017-05-07 16:04:16.520 PDT LOG:  duration:
> 1005.176 ms CPU: user: 0.00 s, system: 0.00 s, elapsed: 1.00 s  
> statement: drop subscription foobar ;
> 

Yeah that's normal because DropSubscription holds exclusive lock on
pg_subscription. I am guessing 21270 is the logical replication launcher?

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Petr Jelinek
On 08/05/17 11:27, Masahiko Sawada wrote:
> Hi,
> 
> I encountered a situation where DROP SUBSCRIPTION got stuck when
> initial table sync is in progress. In my environment, I created
> several tables with some data on publisher. I created subscription on
> subscriber and drop subscription immediately after that. It doesn't
> always happen but I often encountered it on my environment.
> 
> ps -x command shows the following.
> 
>  96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
> SUBSCRIPTION
>  96801 ?Ts 0:00 postgres: bgworker: logical replication
> worker for subscription 40993    waiting
>  96805 ?Ss 0:07 postgres: bgworker: logical replication
> worker for subscription 40993 sync 16418
>  96806 ?Ss 0:01 postgres: wal sender process masahiko [local] idle
>  96807 ?Ss 0:00 postgres: bgworker: logical replication
> worker for subscription 40993 sync 16421
>  96808 ?Ss 0:00 postgres: wal sender process masahiko [local] idle
> 
> The DROP SUBSCRIPTION process (pid 96796) is waiting for the apply
> worker process (pid 96801) to stop while holding a lock on
> pg_subscription_rel. On the other hand the apply worker is waiting for
> acquiring a tuple lock on pg_subscription_rel needed for heap_update.
> Also table sync workers (pid 96805 and 96807) are waiting for the
> apply worker process to change their status.
> 

Looks like we should kill apply before dropping dependencies.

> Also, even when DROP SUBSCRIPTION is done successfully, the table sync
> worker can be orphaned because I guess that the apply worker can exit
> before changing the status of the table sync worker.

Well the tablesync worker should stop itself if the subscription got
removed, but of course again the dependencies are an issue, so we should
probably kill those explicitly as well.
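The wait cycle described in this thread can be modelled abstractly (a Python illustration of the lock-ordering problem only; it is not PostgreSQL code, and all names are invented):

```python
import threading

catalog_lock = threading.Lock()   # stands in for the pg_subscription_rel lock
worker_blocked = threading.Event()
worker_done = threading.Event()   # "apply worker has exited"

def apply_worker():
    worker_blocked.set()
    catalog_lock.acquire()        # blocks: DROP SUBSCRIPTION holds the lock
    catalog_lock.release()
    worker_done.set()

catalog_lock.acquire()            # DROP SUBSCRIPTION takes the lock first
t = threading.Thread(target=apply_worker)
t.start()
worker_blocked.wait()

# DROP SUBSCRIPTION now waits for the worker to stop, but the worker
# cannot proceed until the lock is released: a wait cycle.
deadlock = not worker_done.wait(timeout=0.5)
print("deadlock detected:", deadlock)  # deadlock detected: True

catalog_lock.release()            # the fix: stop workers *before* waiting
t.join()
```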

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Other formats in pset like markdown, rst, mediawiki

2017-05-08 Thread Fabien COELHO


Hello Jan,

Please give a number to submitted patches. I think that this was v3.

The patch does NOT fix various issues I pointed out in my previous review:

 - tabs introduced in "doc/src/sgml/ref/psql-ref.sgml"
 - too long help line in "src/bin/psql/help.c"
 - spurious space after a comma in "src/fe_utils/print.c"
   and possibly elsewhere.

On Sun, 23 Apr 2017, Jan Michálek wrote:


Markdown include characters/sequences which are interpreted as markers:
_Italic_, **Bold**, *** => horizontal rules, > block quote... `inline
code`... If they are interpreted within a table cell then probably they
should be escaped somehow.


I have treated "_*|<>"


Probably not enough, see below. Note the escaping chars should also be 
escaped.



I'm able to sanitize characters, but complex sequences will be a problem. I
will look at this, but I don't know if I'm able to do this.


I do not know whether only those are necessary. Have you checked? Guessing 
is probably not the right approach.



Concerning MARKDOWN, and according to the following source about github 
markdown implementation:


https://enterprise.github.com/downloads/en/markdown-cheatsheet.pdf

The following characters may need to be backslash escaped, although it 
does not cover HTML stuff.


\   backslash
`   backtick
*   asterisk
_   underscore
{}  curly braces
[]  square brackets
()  parentheses
#   hash mark
+   plus sign
-   minus sign (hyphen)
.   dot
!   exclamation

Another source https://genius.com/3057216 suggests (* # / ( ) [ ] < >),
which should protect HTML.

However, the escaping seems to be the backslash character, NOT using html 
encoding  as done in your version.


Where did you find the precise escaping rules for markdown? I do not think 
that it should be invented...
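A minimal escaping helper based on the character list above might look like this (a sketch under the assumption that plain backslash escaping is sufficient; the character set, including '|' for table cells, is taken from the cheat sheet and would need verification against a real renderer):

```python
MD_SPECIALS = set('\\`*_{}[]()#+-.!|')  # list above, plus '|' for table cells

def md_escape_cell(text):
    """Backslash-escape markdown metacharacters inside one table cell."""
    return ''.join('\\' + ch if ch in MD_SPECIALS else ch for ch in text)

print(md_escape_cell('a*b_c'))  # a\*b\_c
```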



I have looked at RST, according to this reference:


http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#grid-tables

The good news is that you do not need to handle a special | case because 
you would only produce clean tables.


I've tested UTF-8 with plane 1 (你好!) and plane 2 (!) and the alignment 
seems to work well, incredible!


My main interest in this was RST. I'm using markdown only in github
issues and my knowledge of md is poor.


Then maybe only do RST?!

It looks much simpler anyway, and if you do MARKDOWN the support needs to 
be clean.


About the code:

I'm still at odds with the code which needs to test for markdown to call 
for different functions in multiple places. If you keep md and in order to 
avoid that, I would suggest to extend the pg_wcs* functions with a list of 
characters which may have different sizes with additional args, say:


  pg_wcssize(// same args, plus:
 char * escaped_chars, // will require escaping
 int escape_len, // how many chars added when escaping
 int nllen // len of newline if substituted
 );

So that pg_wcssize(..., "\r", 1, 1) would behave as before (\n and \t are 
rather special cases), and the various constants could be held in the 
format description so the whole thing would be parametric.


Same approach with format.

That would allow to simplify the code significantly and to share it 
between MARKDOWN and others. Also, the multiple "else if" list would be 
simplified by using strchr or the escaped_chars string.


--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Masahiko Sawada
On Mon, May 8, 2017 at 7:14 PM, Erik Rijkers  wrote:
> On 2017-05-08 11:27, Masahiko Sawada wrote:
>>
>> Hi,
>>
>> I encountered a situation where DROP SUBSCRIPTION got stuck when
>> initial table sync is in progress. In my environment, I created
>> several tables with some data on publisher. I created subscription on
>> subscriber and drop subscription immediately after that. It doesn't
>> always happen but I often encountered it on my environment.
>>
>> ps -x command shows the following.
>>
>>  96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
>> SUBSCRIPTION
>>  96801 ?Ts 0:00 postgres: bgworker: logical replication
>> worker for subscription 40993waiting
>>  96805 ?Ss 0:07 postgres: bgworker: logical replication
>> worker for subscription 40993 sync 16418
>>  96806 ?Ss 0:01 postgres: wal sender process masahiko [local]
>> idle
>>  96807 ?Ss 0:00 postgres: bgworker: logical replication
>> worker for subscription 40993 sync 16421
>>  96808 ?Ss 0:00 postgres: wal sender process masahiko [local]
>> idle
>>
>
> FWIW, running
>
>> 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
>> 0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
>> fix-statistics-reporting-in-logical-replication-work.patch
>
> (on top of 44c528810)

Thanks. Which thread are these patches attached to?

>
> I have encountered the same condition as well in the last few days, a few
> times (I think 2 or 3 times).

IIUC there are two issues; one is that a deadlock can happen between
the DROP SUBSCRIPTION and the apply worker process, and the other is that
the table sync worker can be orphaned if the apply worker exits before
changing its status. The latter might relate to another issue reported by
Jeff[1].

[1] 
https://www.postgresql.org/message-id/CAMkU%3D1xUJKs%3D2etq2K7bmbY51Q7g853HLxJ7qEB2Snog9oRvDw%40mail.gmail.com

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal psql \gdesc

2017-05-08 Thread Fabien COELHO


Hello Pavel,


After giving it some thoughts, I see three possible solutions:

0. Do nothing about it.
   I would prefer the prepare is cleaned up.

1. assign a special name, eg "_psql_gdesc_", so that
   DEALLOCATE "_psql_gdesc_" can be issued afterwards.

[...]


The docs say that any new unnamed prepared statement deallocates the
previous one by itself. From the psql environment it is not possible to
create an unnamed prepared statement.


That is a good point. It seems that it is not possible to execute it 
either.



I prefer @0 against @1 because the workflow is simple and robust. If the
unnamed PP doesn't exist, then it will be created, else it will be replaced.
@1 has a little bit more complex workflow, because there is no command like
DEALLOCATE IF EXISTS, so I have to ensure deallocation in all possible
ways.


ISTM that it is only if the PQprepare succeeded, so there should be only 
one point, at the end of the corresponding OK condition?



Another reason for @0 is that there is no need to generate some auxiliary
name.


Yes. I do not like special names either. But I do not like keeping objects 
alive if they are dead. Not sure which is worse.



My opinion in this case is not too strong - I just find the advantages of @0
(robust and simple) nice. The question is about the cost of an unwanted
allocated PP until the end of the session.


My opinion is not strong either, it is more the principle that I do not 
like to let things forever live in the server while they are known dead.


Hmmm. Strange. For some obscure reason, the unnamed prepared statement 
does not show in pg_prepared_statements:


  calvin=# PREPARE test AS SELECT 2;
  calvin=# EXECUTE test;
-- 2
  calvin=# SELECT 1 AS one \gdesc
-- one | integer
  calvin=# SELECT * FROM  pg_prepared_statements ;
-- just one row:
-- test | PREPARE test AS SELECT 2; │7..


Conclusion: Maybe let it as solution 0 for the time being, with a comment 
telling that it will be cleaned on the next unnamed PQprepare, and the 
committer will have its opinion.


--
Fabien.


Re: [HACKERS] Other formats in pset like markdown, rst, mediawiki

2017-05-08 Thread Fabien COELHO




Hello Jan,

Please give a number to submitted patches. I think that this was v3.

The patch does NOT fix various issues I pointed out in my previous review:

 - tabs introduced in "doc/src/sgml/ref/psql-ref.sgml"
 - too long help line in "src/bin/psql/help.c"
 - spurious space after a comma in "src/fe_utils/print.c"
   and possibly elsewhere.

On Sun, 23 Apr 2017, Jan Michálek wrote:


> > Markdown include characters/sequences which are interpreted as markers:
> > _Italic_, **Bold**, *** => horizontal rules, > block quote... `inline
> > code`... If they are interpreted within a table cell then probably they
> > should be escaped somehow.

I have treated "_*|<>"


Probably not enough, see below. Note the escaping chars should also be escaped.


> I`m able to sanitize characters, but complex sequences will be problem. I
> will look on this, but I don`t know, if I`m able to do this.


I do not know whether only those are necessary. Have you checked? Guessing is
probably not the right approach.


Concerning MARKDOWN, and according to the following source about github markdown
implementation:

https://enterprise.github.com/downloads/en/markdown-cheatsheet.pdf

The following characters may need to be backslash escaped, although it does not
cover HTML stuff.

\   backslash
`   backtick
*   asterisk
_   underscore
{}  curly braces
[]  square brackets
()  parentheses
#   hash mark
+   plus sign
-   minus sign (hyphen)
.   dot
!   exclamation

Another source https://genius.com/3057216 suggests (* # / ( ) [ ] < >),
which should protect HTML.

However, the escaping seems to be the backslash character, NOT the HTML
encoding as done in your version.

Where did you find the precise escaping rules for markdown? I do not think that
it should be invented...


I have looked at RST, according to this reference:

http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#grid-tables

The good news is that you do not need to handle a special | case because you
would only produce clean tables.

I've tested UTF-8 with plane 1 (你好!) and plane 2 (!) and the alignment seems
to work well, incredible!


> My main interest on this was in rst. I'm using markdown only in github
> issues and my knowledge about md is poor.


Then maybe only do RST?!

It looks much simpler anyway, and if you do MARKDOWN the support needs to be
clean.

About the code:

I'm still at odds with the code, which needs to test for markdown to call
different functions in multiple places. If you keep md, then in order to avoid
that, I would suggest extending the pg_wcs* functions with a list of characters
which may have different sizes, with additional args, say:

  pg_wcssize(// same args, plus:
 char * escaped_chars, // will require escaping
 int escape_len, // how many chars added when escaping
 int nllen // len of newline if substituted
 );

So that pg_wcssize(..., "\r", 1, 1) would behave as before (\n and \t are rather
special cases), and the various constants could be held in the format
description so the whole thing would be parametric.

Same approach with format.

That would allow to simplify the code significantly and to share it between
MARKDOWN and others. Also, the multiple "else if" list would be simplified by
using strchr or the escaped_chars string.

--
Fabien.


Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-08 Thread Dagfinn Ilmari Mannsåker
Heikki Linnakangas  writes:

> On 05/06/2017 01:56 PM, Gavin Flower wrote:
>> On 06/05/17 22:44, Vik Fearing wrote:
>>> On 05/05/2017 02:42 PM, Michael Paquier wrote:
 +This option is obsolete but still accepted for backwards
 +compatibility.
 Isn't that incorrect English?
>>> No.
>>>
 It seems to me that this be non-plural,
 as "for backward compatibility".
>>> "Backwards" is not plural, it's a regional variation of "backward" (or
>>> vice versa depending on which region you come from).  Both are correct.
>>
>> I am English, born & bred, and 'Backwards' feels a lot more natural to me.
>
> Another data point:
>
> $ grep "backwards-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
> 7
> $ grep "backward-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
> 3
>
> Another important question is whether there should be a hyphen there or not?
>
> $ grep   "backward comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml  |
> wc -l
> 21
> ~/git-sandbox-pgsql/master (master)$ grep   "backwards comp"
> doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml  | wc -l
> 11

This misses out instances where the phrase is broken across lines.
Using a less line-oriented tool, ag (aka The Silver Searcher), gives
more hits for the non-hyphenated case, but doesn't significantly change
the s-to-non-s ratio.

$ ag 'backwards-\s*comp' doc/**/*.sgml|grep -c backward
7
$ ag 'backward-\s*comp' doc/**/*.sgml|grep -c backward
3
$ ag 'backward\s+comp' doc/**/*.sgml | grep -c backward
29
$ ag 'backwards\s+comp' doc/**/*.sgml | grep -c backward
20


-- 
"I use RMS as a guide in the same way that a boat captain would use
 a lighthouse.  It's good to know where it is, but you generally
 don't want to find yourself in the same spot." - Tollef Fog Heen




Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Erik Rijkers

On 2017-05-08 11:27, Masahiko Sawada wrote:

Hi,

I encountered a situation where DROP SUBSCRIPTION got stuck when
initial table sync is in progress. In my environment, I created
several tables with some data on publisher. I created subscription on
subscriber and drop subscription immediately after that. It doesn't
always happen but I often encountered it on my environment.

ps -x command shows the following.

 96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
SUBSCRIPTION
 96801 ?Ts 0:00 postgres: bgworker: logical replication
worker for subscription 40993waiting
 96805 ?Ss 0:07 postgres: bgworker: logical replication
worker for subscription 40993 sync 16418
 96806 ?Ss 0:01 postgres: wal sender process masahiko 
[local] idle

 96807 ?Ss 0:00 postgres: bgworker: logical replication
worker for subscription 40993 sync 16421
 96808 ?Ss 0:00 postgres: wal sender process masahiko 
[local] idle




FWIW, running


0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
fix-statistics-reporting-in-logical-replication-work.patch

(on top of 44c528810)

I have encountered the same condition as well in the last few days, a 
few times (I think 2 or 3 times).


Erik Rijkers




Re: [HACKERS] Draft release notes for next week's back-branch releases

2017-05-08 Thread Petr Jelinek
On 06/05/17 19:38, Tom Lane wrote:
> Petr Jelinek  writes:
>> On 06/05/17 19:15, Tom Lane wrote:
>>> (Or, wait a minute.  That documentation only applies to v10, but we
>>> need to be writing this relnote for 9.6 users.  What terminology should
>>> we be using anyway?)
> 
>> Yeah we need to somehow mention that it only affects 3rd party tools
>> using logical decoding.
> 
>> "The initial snapshot created for a logical decoding slot was
>> potentially incorrect.  This could allow the 3rd party tools using
>> the logical decoding to copy incomplete existing(?) data.  This was
>> more likely to happen if the source server was busy at the time of
>> slot creation, or if two slots were created concurrently."
> 
>>> Also, do we need to recommend that people not trust any logical replicas
>>> at this point, but recreate them after installing the update?
> 
>> Yes, but only if there was preexisting data *and* there was concurrent
>> activity on the server when the "replication" was setup.
> 
> OK, I can work with this.  Thanks for the help!
> 

Great, thanks.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-08 Thread Masahiko Sawada
Hi,

I encountered a situation where DROP SUBSCRIPTION got stuck when
initial table sync is in progress. In my environment, I created
several tables with some data on the publisher. I created a subscription on
the subscriber and dropped the subscription immediately after that. It doesn't
always happen, but I often encountered it in my environment.

ps -x command shows the following.

 96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
SUBSCRIPTION
 96801 ?Ts 0:00 postgres: bgworker: logical replication
worker for subscription 40993waiting
 96805 ?Ss 0:07 postgres: bgworker: logical replication
worker for subscription 40993 sync 16418
 96806 ?Ss 0:01 postgres: wal sender process masahiko [local] idle
 96807 ?Ss 0:00 postgres: bgworker: logical replication
worker for subscription 40993 sync 16421
 96808 ?Ss 0:00 postgres: wal sender process masahiko [local] idle

The DROP SUBSCRIPTION process (pid 96796) is waiting for the apply
worker process (pid 96801) to stop while holding a lock on
pg_subscription_rel. On the other hand the apply worker is waiting for
acquiring a tuple lock on pg_subscription_rel needed for heap_update.
Also table sync workers (pid 96805 and 96807) are waiting for the
apply worker process to change their status.

Also, even when DROP SUBSCRIPTION completes successfully, the table sync
worker can be orphaned, because I guess the apply worker can exit before
changing the status of the table sync worker.

I'm using 1f30295.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] May cause infinite loop when initializing rel-cache contains partitioned table

2017-05-08 Thread Amit Langote
Hi,

On 2017/05/05 17:26, 甄明洋 wrote:
> The function of  initializing rel-cache is RelationCacheInitializePhase3 and 
> in src/backend/utils/cache/relcache.c file.
> When initializing the partition description information, we forget to check 
> if partition key or descriptor is NULL. 
> Therefore, after the loop restarts, the partition information will be 
> initialized again, resulting in an infinite loop.
> Code:
> /*
> * Reload partition key and descriptor for a partitioned table.
> */
> if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
> {
> RelationBuildPartitionKey(relation);
> Assert(relation->rd_partkey != NULL);
> 
> 
> RelationBuildPartitionDesc(relation);
> Assert(relation->rd_partdesc != NULL);
> 
> 
> restart = true;
> }

Thanks for bringing it to notice.  The above code should follow what's
done for other fields that are initialized by
RelationCacheInitializePhase3().  Since none of the entries in the
relcache init file are partitioned tables, an infinite loop won't occur
in practice, but it's a good idea to fix the code anyway.

Attached patch does that.

Thanks,
Amit
From 6706999b159a76a3eb812eb3f7aad6e066a1c587 Mon Sep 17 00:00:00 2001
From: amit 
Date: Mon, 8 May 2017 18:03:09 +0900
Subject: [PATCH] Check partitioning fields are NULL in
 RelationCacheInitializePhase3

---
 src/backend/utils/cache/relcache.c | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 30789c1edc..c3721d9e43 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -3858,13 +3858,20 @@ RelationCacheInitializePhase3(void)
 		}
 
 		/*
-		 * Reload partition key and descriptor for a partitioned table.
+		 * Reload the partition key and descriptor for a partitioned table.
 		 */
-		if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
+		if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+			relation->rd_partkey == NULL)
 		{
 			RelationBuildPartitionKey(relation);
 			Assert(relation->rd_partkey != NULL);
 
+			restart = true;
+		}
+
+		if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+			relation->rd_partdesc == NULL)
+		{
 			RelationBuildPartitionDesc(relation);
 			Assert(relation->rd_partdesc != NULL);
 
-- 
2.11.0




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-08 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Craig Ringer
> On 13 April 2017 at 14:59, Tsunakawa, Takayuki
>  wrote:
> 
> > 2. Make transaction_read_only GUC_REPORT This is to avoid the added
> > round-trip by SHOW command.  It also benefits client apps that want to
> know when the server gets promoted?  And this may simplify the libpq code.
> > I don't understand yet why we need to provide this feature for older servers
> by using SHOW.  Those who are already using <= 9.6 in production completed
> the system or application, and their business is running.  Why would they
> want to just replace libpq and use this feature?
> 
> I think "transaction_read_only" is a bit confusing for something we're
> expecting to change under us.
> 
> To me, a "read only" xact is one created with
> 
> BEGIN READ ONLY TRANSACTION;
> 
>  which I would not expect to become read/write under me, since I
> explicitly asked for read-only.
> 
> It's more like "session read only" that we're interested in IMO.

Are you suggesting that we provide a GUC_REPORT read-only variable 
"session_read_only" and that libpq should use it?

Anyway, I added this item in the PostgreSQL 10 Open Items page under
"Design Decisions to Recheck Mid-Beta".  I'm willing to submit a patch for PG10.

Regards
Takayuki Tsunakawa






Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-08 Thread Heikki Linnakangas

On 05/05/2017 03:42 PM, Michael Paquier wrote:

+This option is obsolete but still accepted for backwards
+compatibility.
Isn't that incorrect English? It seems to me that this be non-plural,
as "for backward compatibility".


I changed most cases to "backward compatibility", except the one in 
create_role.sgml, because there were other instances of "backwards 
compatibility" on that page, and I didn't want this to stick out.



The comment at the top of check_password() in passwordcheck.c does not
mention scram, you may want to update that.


Reworded the comment, to not list all the possible values.


+   /*
+* We never store passwords in plaintext, so
this shouldn't
+* happen.
+*/
break;
An error here is overthinking?


It's not shown in the diff's context, but an error is returned just 
after the switch statement. I considered leaving out the "case 
PASSWORD_TYPE_PLAINTEXT" altogether, but then you might get compiler 
warnings complaining that that enum value is not handled. I also 
considered putting an explicit "default:" there, which returns an 
error, but then you'd again miss out on compiler warnings, if you add a 
new password hash type and forget to add a case here to handle it.



 -- consistency of password entries
-SET password_encryption = 'plain';
-CREATE ROLE regress_passwd1 PASSWORD 'role_pwd1';
 SET password_encryption = 'md5';
Nit: this is skipping directly to role number 2.


Renumbered the test roles.

Committed with those little cleanups. Thanks for the review!

- Heikki





Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-08 Thread Rahila Syed
>I am not able add a new partition if default partition is further
>partitioned with default partition.

Thanks for reporting. I will fix this.

>pg_restore is failing for default partition, dump file still storing old
>syntax of default partition.

Thanks for reporting. I will fix this once the syntax is finalized.


On Fri, May 5, 2017 at 12:46 PM, Jeevan Ladhe  wrote:

> Hi Rahila,
>
> I am not able add a new partition if default partition is further
> partitioned
> with default partition.
>
> Consider example below:
>
> postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
> CREATE TABLE
> postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6,
> 7, 8);
> CREATE TABLE
> postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY
> LIST(b);
> CREATE TABLE
> postgres=# CREATE TABLE test_pd_pd PARTITION OF test_pd DEFAULT;
> CREATE TABLE
> postgres=# INSERT INTO test VALUES (20, 24, 12);
> INSERT 0 1
> *postgres=# CREATE TABLE test_p2 PARTITION OF test FOR VALUES IN(15);*
> *ERROR:  could not open file "base/12335/16420": No such file or directory*
>
>
> Thanks,
> Jeevan Ladhe
>
> On Fri, May 5, 2017 at 11:55 AM, Rajkumar Raghuwanshi <
> rajkumar.raghuwan...@enterprisedb.com> wrote:
>
>> Hi Rahila,
>>
>> pg_restore is failing for default partition, dump file still storing old
>> syntax of default partition.
>>
>> create table lpd (a int, b int, c varchar) partition by list(a);
>> create table lpd_d partition of lpd DEFAULT;
>>
>> create database bkp owner 'edb';
>> grant all on DATABASE bkp to edb;
>>
>> --take plain dump of existing database
>> \! ./pg_dump -f lpd_test.sql -Fp -d postgres
>>
>> --restore plain backup to new database bkp
>> \! ./psql -f lpd_test.sql -d bkp
>>
>> psql:lpd_test.sql:63: ERROR:  syntax error at or near "DEFAULT"
>> LINE 2: FOR VALUES IN (DEFAULT);
>>^
>>
>>
>> vi lpd_test.sql
>>
>> --
>> -- Name: lpd; Type: TABLE; Schema: public; Owner: edb
>> --
>>
>> CREATE TABLE lpd (
>> a integer,
>> b integer,
>> c character varying
>> )
>> PARTITION BY LIST (a);
>>
>>
>> ALTER TABLE lpd OWNER TO edb;
>>
>> --
>> -- Name: lpd_d; Type: TABLE; Schema: public; Owner: edb
>> --
>>
>> CREATE TABLE lpd_d PARTITION OF lpd
>> FOR VALUES IN (DEFAULT);
>>
>>
>> ALTER TABLE lpd_d OWNER TO edb;
>>
>>
>> Thanks,
>> Rajkumar
>>
>
>


Re: [HACKERS] Fix a typo in snapmgr.c

2017-05-08 Thread Masahiko Sawada
On Mon, May 8, 2017 at 4:57 PM, Simon Riggs  wrote:
> On 8 May 2017 at 06:28, Masahiko Sawada  wrote:
>> Hi,
>>
>> Attached patch for $subject.
>>
>> s/recovey/recovery/
>
> There was a typo, but the comment itself was slightly wrong and also
> duplicated with another comment further down.
>
> So rearranged code a little to keep it lean.

Okay, thank you!

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Fix a typo in snapmgr.c

2017-05-08 Thread Simon Riggs
On 8 May 2017 at 06:28, Masahiko Sawada  wrote:
> Hi,
>
> Attached patch for $subject.
>
> s/recovey/recovery/

There was a typo, but the comment itself was slightly wrong and also
duplicated with another comment further down.

So rearranged code a little to keep it lean.

Thanks

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] password_encryption, default and 'plain' support

2017-05-08 Thread Heikki Linnakangas

On 05/06/2017 01:56 PM, Gavin Flower wrote:

On 06/05/17 22:44, Vik Fearing wrote:

On 05/05/2017 02:42 PM, Michael Paquier wrote:

+This option is obsolete but still accepted for backwards
+compatibility.
Isn't that incorrect English?

No.


It seems to me that this be non-plural,
as "for backward compatibility".

"Backwards" is not plural, it's a regional variation of "backward" (or
vice versa depending on which region you come from).  Both are correct.


I am English, born & bred, and 'Backwards' feels a lot more natural to me.


Another data point:

$ grep "backwards-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
7
$ grep "backward-comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml | wc -l
3

Another important question is whether there should be a hyphen there or not?

$ grep   "backward comp" doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml  | 
wc -l

21
~/git-sandbox-pgsql/master (master)$ grep   "backwards comp" 
doc/src/sgml/ref/*.sgml doc/src/sgml/*.sgml  | wc -l

11

It looks like the most popular spelling in our docs is "backward 
compatibility". I'll go with that.


- Heikki





Re: [HACKERS] proposal psql \gdesc

2017-05-08 Thread Pavel Stehule
Hi

2017-05-08 9:08 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> A complement to my previous comments:
>
> Also, maybe it would be better if the statement is cleaned up server side
>> at the end of the execution. Not sure how to achieve that, though, libpq
>> seems to lack the relevant function:-(
>>
>>  """although there is no libpq function for deleting a prepared
>>  statement, the SQL DEALLOCATE statement can be used for that purpose."""
>>
>> Hmmm... I have not found how to use DEALLOCATE to cleanup an unnamed
>> statement, it does not allow a "zero-length" name. Maybe it could be
>> extended somehow, or a function could be provided for the purpose, eg
>> by passing a NULL query to PQprepare...
>>
>
> After giving it some thoughts, I see three possible solutions:
>
> 0. Do nothing about it.
>I would prefer the prepare is cleaned up.


> 1. assign a special name, eg "_psql_gdesc_", so that
>DEALLOCATE "_psql_gdesc_" can be issued afterwards.


> 2. allow executing DEALLOCATE "";
>
> 3. add the missing PQdeallocate function to libpq?
>
> Version 2 is server side, so it would not be compatible when connected to
> server running previous versions. Not desirable.
>
> Version 3 may have implication at the protocol level and server side, if
> so it does not seem desirable to introduce such a change.
>
> So maybe only version 1 is possible.
>

The docs say that any new unnamed prepared statement deallocates the
previous one by itself. From the psql environment it is not possible to
create an unnamed prepared statement. So there is no possible conflict,
and only one unnamed prepared statement can exist. The overhead is like
any call of a PL/pgSQL function, where any embedded SQL statements are
prepared implicitly. So @0 is safe from my perspective. It looks as though
unnamed PPs were designed for this kind of short-lived PP.

I prefer @0 against @1 because the workflow is simple and robust. If the
unnamed PP doesn't exist, then it will be created, else it will be replaced.
@1 has a little bit more complex workflow, because there is no command like
DEALLOCATE IF EXISTS, so I have to ensure deallocation in all possible
ways. Another reason for @0 is that there is no need to generate some
auxiliary name.

So in this case, I think @0 is a good enough way (due to how unnamed PPs
behave), and it can be enhanced by @3, but @3 requires wide discussion about
design (and can be overkill for the \gdesc command) and would be a problem
whenever you use a new client against an old server. @2 has the same problem
(which you mentioned).

My opinion in this case is not too strong - I just find the advantages of @0
(robust and simple) nice. The question is about the cost of an unwanted
allocated PP until the end of the session.

Regards

Pavel



> --
> Fabien.
>


Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-08 Thread Thomas Munro
On Fri, May 5, 2017 at 8:29 AM, Thomas Munro
 wrote:
> On Fri, May 5, 2017 at 2:40 AM, Robert Haas  wrote:
>> On Thu, May 4, 2017 at 4:46 AM, Thomas Munro
>>  wrote:
>>> On Thu, May 4, 2017 at 4:02 AM, Alvaro Herrera  
>>> wrote:
 Robert Haas wrote:
> I suspect that most users would find it more useful to capture all of
> the rows that the statement actually touched, regardless of whether
> they hit the named table or an inheritance child.

 Yes, agreed.  For the plain inheritance cases each row would need to
 have an indicator of which relation it comes from (tableoid); I'm not
 sure if such a thing would be useful in the partitioning case.
>>>
>>> On Thu, May 4, 2017 at 4:26 AM, David Fetter  wrote:
 +1 on the not-duct-tape view of partitioned tables.
>>>
>>> Hmm.  Ok.  Are we talking about PG10 or PG11 here?  Does this approach
>>> makes sense?
>>
>> I was thinking PG10 if it can be done straightforwardly.
>
> Ok, I will draft a patch to do it the way I described and see what people 
> think.

FYI I am still working on this and will post a draft patch to do this
(that is: make transition tables capture changes from children with
appropriate tuple conversion) in the next 24 hours.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] snapbuild woes

2017-05-08 Thread Andres Freund


On May 3, 2017 10:45:16 PM PDT, Noah Misch  wrote:
>On Thu, Apr 27, 2017 at 09:42:58PM -0700, Andres Freund wrote:
>> 
>> 
>> On April 27, 2017 9:34:44 PM PDT, Noah Misch 
>wrote:
>> >On Fri, Apr 21, 2017 at 10:36:21PM -0700, Andres Freund wrote:
>> >> On 2017-04-17 21:16:57 -0700, Andres Freund wrote:
>> >> > I've since the previous update reviewed Petr's patch, which he
>> >since has
>> >> > updated over the weekend.  I'll do another round tomorrow, and
>will
>> >see
>> >> > how it looks.  I think we might need some more tests for this to
>be
>> >> > committable, so it might not become committable tomorrow.  I
>hope
>> >we'll
>> >> > have something in tree by end of this week, if not I'll send an
>> >update.
>> >> 
>> >> I was less productive this week than I'd hoped, and creating a
>> >testsuite
>> >> was more work than I'd anticipated, so I'm slightly lagging
>behind. 
>> >I
>> >> hope to have a patchset tomorrow, aiming to commit something
>> >> Monday/Tuesday.
>> >
>> >This PostgreSQL 10 open item is past due for your status update. 
>> >Kindly send
>> >a status update within 24 hours, and include a date for your
>subsequent
>> >status
>> >update.  Refer to the policy on open item ownership:
>>
>>https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
>> 
>> I committed part of the series today, plan to continue doing so over
>the next few days.  Changes require careful review & testing, this is
>easy to get wrong...
>
>This PostgreSQL 10 open item is past due for your status update. 
>Kindly send
>a status update within 24 hours, and include a date for your subsequent
>status
>update.
>
>Also, this open item has been alive for three weeks, well above
>guideline.  I
>understand it's a tricky bug, but I'm worried this isn't on track to
>end.
>What is missing to make it end?
>
>Refer to the policy on open item ownership:
>https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

I plan to commit the next pending patch after the back-branch releases are cut 
- it's an invasive fix, and the issue doesn't cause corruption, "just" slow slot 
creation. So it seems better to wait for a few days rather than hurry it into 
the release.

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [HACKERS] proposal psql \gdesc

2017-05-08 Thread Fabien COELHO


Hello Pavel,

A complement to my previous comments:

Also, maybe it would be better if the statement is cleaned up server side at 
the end of the execution. Not sure how to achieve that, though, libpq seems 
to lack the relevant function:-(


 """although there is no libpq function for deleting a prepared
 statement, the SQL DEALLOCATE statement can be used for that purpose."""

Hmmm... I have not found how to use DEALLOCATE to clean up an unnamed 
statement; it does not allow a zero-length name. Maybe it could be extended 
somehow, or a function could be provided for the purpose, e.g. by passing a 
NULL query to PQprepare...


After giving it some thought, I see three possible solutions, besides doing
nothing:

0. Do nothing about it.
   I would prefer that the prepared statement be cleaned up.

1. assign a special name, eg "_psql_gdesc_", so that
   DEALLOCATE "_psql_gdesc_" can be issued afterwards.

2. allow executing DEALLOCATE "";

3. add the missing PQdeallocate function to libpq?

Version 2 is server-side, so it would not work when connected to a server 
running a previous version. Not desirable.


Version 3 may have implications at the protocol level and server side; if 
so, it does not seem desirable to introduce such a change.


So maybe only version 1 is possible.
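A sketch of what option 1 implies on the client side (Python, illustrative only; the statement name `_psql_gdesc_` is an assumption from the list above, and real psql would do this in C via PQprepare/PQdescribePrepared):

```python
GDESC_STMT_NAME = "_psql_gdesc_"  # option 1: a reserved, non-empty name

def gdesc_cleanup_sql(stmt_name: str) -> str:
    """Build the DEALLOCATE issued after describing the statement.
    libpq has no PQdeallocate(), but the SQL DEALLOCATE statement also
    drops protocol-level prepared statements -- provided the name is
    non-empty, which is why the unnamed statement cannot be used."""
    if not stmt_name:
        raise ValueError('DEALLOCATE cannot reference the unnamed ("") statement')
    return f'DEALLOCATE "{stmt_name}"'

print(gdesc_cleanup_sql(GDESC_STMT_NAME))  # -> DEALLOCATE "_psql_gdesc_"
```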

--
Fabien.




Re: [HACKERS] SUBSCRIPTIONS and pg_upgrade

2017-05-08 Thread Noah Misch
On Sat, May 06, 2017 at 11:50:16AM -0700, Noah Misch wrote:
> On Fri, May 05, 2017 at 11:01:57AM -0400, Peter Eisentraut wrote:
> > On 5/2/17 21:44, Noah Misch wrote:
> > >> I wonder if we should have an --no-subscriptions option, now that they
> > >> are dumped by default, just like we have --no-blobs, --no-owner,
> > >> --no-password, --no-privileges, --no-acl, --no-tablespaces, and
> > >> --no-security-labels.  It seems like there is probably a fairly large
> > >> use case for excluding subscriptions even if you have sufficient
> > >> permissions to dump them.
> > > 
> > > [Action required within three days.  This is a generic notification.]
> > > 
> > > The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> > > since you committed the patch believed to have created it, you own this 
> > > open
> > > item.  If some other commit is more relevant or if this does not belong 
> > > as a
> > > v10 open item, please let us know.  Otherwise, please observe the policy 
> > > on
> > > open item ownership[1] and send a status update within three calendar 
> > > days of
> > > this message.  Include a date for your subsequent status update.  Testers 
> > > may
> > > discover new open items at any time, and I want to plan to get them all 
> > > fixed
> > > well in advance of shipping v10.  Consequently, I will appreciate your 
> > > efforts
> > > toward speedy resolution.  Thanks.
> > 
> > I consider this item low priority and don't plan to work on it before
> > all the other open items under logical replication are addressed.
> > 
> > (Here, working on it would include thinking further about whether it is
> > necessary at all or what alternatives might look like.)
> 
> That's informative, but it's not a valid status update.  This PostgreSQL 10
> open item is past due for your status update.  Kindly send a valid status
> update within 24 hours.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2017-05-09 07:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com




Re: [HACKERS] multi-column range partition constraint

2017-05-08 Thread Amit Langote
On 2017/05/03 6:30, Robert Haas wrote:
> On Tue, May 2, 2017 at 2:51 AM, Amit Langote
>  wrote:
>> Per an off-list report from Olaf Gawenda (thanks Olaf), it seems that the
>> range partition's constraint is sometimes incorrect, at least in the case
>> of multi-column range partitioning.  See below:
>>
>> create table p (a int, b int) partition by range (a, b);
>> create table p1 partition of p for values from (1, 1) to (10 ,10);
>> create table p2 partition of p for values from (11, 1) to (20, 10);
>>
>> Perhaps unusual, but it's still a valid definition.  Tuple-routing puts
>> rows where they belong correctly.
>>
>> -- ok
>> insert into p values (10, 9);
>> select tableoid::regclass, * from p;
>>  tableoid | a  | b
>> ----------+----+---
>>  p1       | 10 | 9
>> (1 row)
>>
>> -- but see this
>> select tableoid::regclass, * from p where a = 10;
>>  tableoid | a | b
>> ----------+---+---
>> (0 rows)
>>
>> explain select tableoid::regclass, * from p where a = 10;
>>                   QUERY PLAN
>> ------------------------------------------
>>  Result  (cost=0.00..0.00 rows=0 width=12)
>>    One-Time Filter: false
>> (2 rows)
>>
>> -- or this
>> insert into p1 values (10, 9);
>> ERROR:  new row for relation "p1" violates partition constraint
>> DETAIL:  Failing row contains (10, 9).
>>
>> This is because of the constraint being generated is not correct in this
>> case.  p1's constraint is currently:
>>
>>   a >= 1 and a < 10
>>
>> where it should really be the following:
>>
>>   (a > 1  OR (a = 1  AND b >= 1))
>> AND
>>   (a < 10 OR (a = 10 AND b < 10))
>>
>> Attached patch rewrites get_qual_for_range() for the same, along with some
>> code rearrangement for reuse.  I also added some new tests to insert.sql
>> and inherit.sql, but wondered (maybe, too late now) whether there should
>> really be a declarative_partition.sql for these, moving in some of the old
>> tests too.
>>
>> Adding to the open items list.
> 
> I think there are more problems here.  With the patch:
> 
> rhaas=# create table p (a int, b int) partition by range (a, b);
> CREATE TABLE
> rhaas=# create table p1 partition of p for values from (unbounded,0)
> to (unbounded,1);
> CREATE TABLE
> rhaas=# insert into p1 values (-2,-2);
> ERROR:  new row for relation "p1" violates partition constraint
> DETAIL:  Failing row contains (-2, -2).
> rhaas=# insert into p1 values (2,2);
> ERROR:  new row for relation "p1" violates partition constraint
> DETAIL:  Failing row contains (2, 2).
> 
> Really, the whole CREATE TABLE .. PARTITION statement is meaningless
> and should be disallowed, because it's not meaningful to have a
> partition bound specification with a non-unbounded value following an
> unbounded value.

Yes, disallowing this in the first place is the best thing to do.
Attached patch 0001 implements that.  With the patch:

create table p1 partition of p for values from (unbounded,0) to (unbounded,1);
ERROR:  cannot specify finite value after UNBOUNDED
LINE 1: ...able p1 partition of p for values from (unbounded,0) to (unb...
 ^
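The check the 0001 patch adds amounts to a single scan over the bound's datums. A minimal Python sketch under that assumption (names are illustrative; the real check lives in the C transformation of the partition bound specification):

```python
UNBOUNDED = object()  # stand-in for an UNBOUNDED datum in a range bound

def check_range_bound(datums):
    """Raise if a finite value follows UNBOUNDED: every column after the
    first UNBOUNDED one is meaningless for range comparison, so such a
    bound specification should be rejected up front."""
    seen_unbounded = False
    for d in datums:
        if d is UNBOUNDED:
            seen_unbounded = True
        elif seen_unbounded:
            raise ValueError("cannot specify finite value after UNBOUNDED")

check_range_bound([1, UNBOUNDED])          # fine: UNBOUNDED only at the end
check_range_bound([UNBOUNDED, UNBOUNDED])  # fine
# check_range_bound([UNBOUNDED, 0])        # raises, as in the example above
```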
> BTW, I think we should also add a function that prints the partition
> constraint, and have psql display that in the \d+ output, because
> people might need that - e.g. if you want to attach a partition
> without having to validate it, you need to be able to apply an
> appropriate constraint to it in advance, so you'll want to see what
> the existing partition constraints look like.

Agreed, that would be helpful; done in the attached 0003.  With the
patch:

create table p (a int, b int) partition by range (a, b);
create table p1 partition of p for values from (1, 1) to (10 ,10);
create table p2 partition of p for values from (11, 1) to (20, 10);
\d p2
                 Table "public.p2"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           | not null |
 b      | integer |           | not null |
Partition of: p FOR VALUES FROM (11, 1) TO (20, 10)
Partition constraint: CHECK ((((a > 11) OR ((a = 11) AND (b >= 1))) AND
((a < 20) OR ((a = 20) AND (b < 10)))))

create table p3 (like p);
alter table p3 add constraint partcheck check ((((a > 21) OR ((a = 21) AND
(b >= 1))) AND ((a < 30) OR ((a = 30) AND (b < 5)))));

alter table p attach partition p3 for values from (21, 1) to (30, 10);
INFO:  partition constraint for table "p3" is implied by existing constraints
ALTER TABLE
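The expanded constraint generated above is exactly lexicographic tuple comparison, which is easy to see in a sketch (Python, illustrative only; the real qual is built in C by get_qual_for_range()):

```python
def in_partition(row, lower, upper):
    """(a, b) is in the partition iff lower <= (a, b) < upper, compared
    lexicographically -- i.e. the expanded form
      (a > lo_a OR (a = lo_a AND b >= lo_b)) AND
      (a < hi_a OR (a = hi_a AND b < hi_b))."""
    a, b = row
    lo_a, lo_b = lower
    hi_a, hi_b = upper
    above = a > lo_a or (a == lo_a and b >= lo_b)
    below = a < hi_a or (a == hi_a and b < hi_b)
    # Python's tuple comparison is lexicographic, so both forms agree:
    assert (above and below) == (lower <= row < upper)
    return above and below

print(in_partition((10, 9), (1, 1), (10, 10)))  # True: (10, 9) belongs to p1
print(10 >= 1 and 10 < 10)  # False: the old per-column qual wrongly rejects it
```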

BTW, is it strange that the newly added pg_get_partition_constraintdef()
requires the relcache entry to be created for the partition and all of its
ancestor relations up to the root (I mean the fact that the relcache entry
needs to be created at all)?  I can see only one other function,
set_relation_column_names(), creating the relcache entry at all.

> While I'm glad we have partitioning as a feature, I'm starting to get
> a bit depressed by the number of bugs that are 

Re: [HACKERS] Transition tables for triggers on foreign tables and views

2017-05-08 Thread Noah Misch
On Sat, May 06, 2017 at 06:58:35PM +, Noah Misch wrote:
> On Tue, May 02, 2017 at 06:06:47PM -0500, Kevin Grittner wrote:
> > On Fri, Apr 28, 2017 at 10:56 PM, Tom Lane  wrote:
> > 
> > > They will fire if you have an INSTEAD OF row-level trigger; the existence
> > > of that trigger is what determines whether we implement DML on a view
> > > through the view's own triggers or through translation to an action on
> > > the underlying table.
> > >
> > > I do not think it'd be reasonable to throw an error for creation of
> > > a statement-level view trigger when there's no row-level trigger,
> > > because that just imposes a hard-to-deal-with DDL ordering dependency.
> > >
> > > You could make a case for having the updatable-view translation code
> > > print a WARNING if it notices that there are statement-level triggers
> > > that cannot be fired due to the translation.
> > 
> > Oh, I see -- you can add all the AFTER ... FOR EACH STATEMENT
> > triggers you want for an updatable view and they will quietly sit
> > there without firing no matter how many statements perform the
> > supposedly triggering action, but as soon as you add an INSTEAD OF
> > ... FOR EACH ROW trigger they spring to life.  On the face of it that
> > seems to me to violate the POLA, but I kinda see how it evolved.
> > 
> > I need to look at this and the rather similar odd behavior under
> > inheritance.  I hope to post something Friday.
> 
> This PostgreSQL 10 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2017-05-09 07:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com




Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-08 Thread Noah Misch
On Sat, May 06, 2017 at 07:02:46PM +, Noah Misch wrote:
> On Wed, May 03, 2017 at 11:29:29PM -0400, Peter Eisentraut wrote:
> > On 4/30/17 04:05, Noah Misch wrote:
> > > The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> > > since you committed the patch believed to have created it, you own this 
> > > open
> > > item.  If some other commit is more relevant or if this does not belong 
> > > as a
> > > v10 open item, please let us know.  Otherwise, please observe the policy 
> > > on
> > > open item ownership[1] and send a status update within three calendar 
> > > days of
> > > this message.  Include a date for your subsequent status update.  Testers 
> > > may
> > > discover new open items at any time, and I want to plan to get them all 
> > > fixed
> > > well in advance of shipping v10.  Consequently, I will appreciate your 
> > > efforts
> > > toward speedy resolution.  Thanks.
> > 
> > I'm working on this and will report on Friday.
> 
> This PostgreSQL 10 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2017-05-09 07:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com




Re: [HACKERS] Not getting error if ALTER SUBSCRIPTION syntax is wrong.

2017-05-08 Thread Noah Misch
On Sat, May 06, 2017 at 02:44:27PM +0200, Petr Jelinek wrote:
> On 05/05/17 19:51, Petr Jelinek wrote:
> > On 05/05/17 14:40, tushar wrote:
> >> Hi,
> >>
> >> While testing logical replication against v10, I encountered an issue
> >> where data stops migrating after ALTER PUBLICATION.
> >>
> >> X Server
> >> \\ Make sure wal_level is set to logical in postgresql.conf file
> >> \\ Create table and insert 1 row -> create table test(n int); insert
> >> into test values (1);
> >> \\create publication for all -> create publication pub for ALL TABLES ;
> >>
> >>
> >> Y server
> >>
> >> \\ Make sure wal_level is set to logical in postgresql.conf file
> >> \\create table -> create table test(n int);
> >>
> >> \\create Subscription
> >>
> >> CREATE SUBSCRIPTION sub CONNECTION 'host=localhost dbname=postgres
> >> port=5432 ' PUBLICATION pub;
> >>
> >> [...]
> >>
> >> I think the syntax of ALTER SUBSCRIPTION is probably not correct, but
> >> surprisingly it is not throwing an error.
> >>
> > 
> > Syntax of ALTER command is correct, syntax of the connection string is
> > not, you are probably getting errors in log from the replication worker.
> > 
> > We could check validity of the connection string though to complain
> > immediately like we do in CREATE.
> > 
> 
> The attached does exactly that.
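A rough sketch of the kind of up-front check being added (Python, simplified: no quoting or URI support, and the keyword set is a hypothetical subset; the real patch calls libpq's existing conninfo parser from C):

```python
KNOWN_KEYWORDS = {"host", "hostaddr", "port", "dbname", "user",
                  "password", "connect_timeout", "application_name"}

def conninfo_errors(conninfo: str):
    """Return error messages for a keyword=value connection string; an
    empty list means it parses.  Mirrors the kind of diagnostics
    PQconninfoParse() reports, so CREATE/ALTER SUBSCRIPTION can complain
    immediately instead of leaving errors to the replication worker's log."""
    errors = []
    for token in conninfo.split():
        keyword, sep, _value = token.partition("=")
        if not sep:
            errors.append(f'missing "=" after "{token}"')
        elif keyword not in KNOWN_KEYWORDS:
            errors.append(f'invalid connection option "{keyword}"')
    return errors

print(conninfo_errors("host=localhost dbname=postgres port=5432"))  # -> []
print(conninfo_errors("host=localhost bdname=postgres"))
# -> ['invalid connection option "bdname"']
```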

[Action required within three days.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item.  Peter,
since you committed the patch believed to have created it, you own this open
item.  If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know.  Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message.  Include a date for your subsequent status update.  Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10.  Consequently, I will appreciate your efforts
toward speedy resolution.  Thanks.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

