Fwd: Re: [HACKERS] MSVC odd TAP test problem

2017-05-09 Thread Andrew Dunstan

On 05/09/2017 09:37 PM, Michael Paquier wrote:

> On Wed, May 10, 2017 at 2:11 AM, Andrew Dunstan
>  wrote:
>> (After extensive trial and error) Turns out it's not quite that, it's
>> the kill_kill stuff. I think for now we should just disable it on the
>> platform. That means not running tests 7 and 8 of the logical_decoding
>> tests and all of the crash_recovery test. Test::More has nice
>> facilities for skipping tests cleanly. See attached patch.
> +SKIP:
> +{
> +# some Windows Perls at least don't like IPC::Run's start/kill_kill 
> regime.
> +skip "Test fails on Windows perl", 2 if $Config{osname} eq 'MSWin32';
> So this basically works with msys but not with MSWin32? Interesting...


On Msys we use the Msys DTK perl to run prove, and it executes the Msys
shell to run commands, with Msys signal emulation. The buildfarm client
goes to some trouble to arrange this. So it's very different.



>
> Does it make a difference if you use for example coup_d_grace =>
> "QUIT"? Per the docs of IPC::Run SIGTERM is used for kills on Windows.


No idea. I'll try.


>
> +if  ($Config{osname} eq 'MSWin32')
> +{
> +# some Windows Perls at least don't like IPC::Run's start/kill_kill 
> regime.
> +plan skip_all => "Test fails on Windows perl";
> +}
> Indentation is weird here, with a mix of spaces and tabs.


I will indent it before I commit anything.

cheers

andrew

-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [doc fix] PG10: wrong description on connect_timeout when multiple hosts are specified

2017-05-09 Thread Tsunakawa, Takayuki
Hello, Robert

I found a wrong sentence here in the doc.  I'm sorry, this is what I asked you 
to add...

https://www.postgresql.org/docs/devel/static/libpq-connect.html#libpq-paramkeywords

connect_timeout
Maximum wait for connection, in seconds (write as a decimal integer string). 
Zero or not specified means wait indefinitely. It is not recommended to use a 
timeout of less than 2 seconds. This timeout applies separately to each 
connection attempt. For example, if you specify two hosts and both of them are 
unreachable, and connect_timeout is 5, the total time spent waiting for a 
connection might be up to 10 seconds.


The program behavior is that libpq times out after connect_timeout seconds 
regardless of how many hosts are specified.  I confirmed it like this:

$ export PGOPTIONS="-c post_auth_delay=30"
$ psql -d "dbname=postgres connect_timeout=5" -h localhost,localhost -p 5432,5433
(psql errors out after 5 seconds)

Could you fix the doc with something like this?

"This timeout applies across all the connection attempts. For example, if you 
specify two hosts and both of them are unreachable, and connect_timeout is 5, 
the total time spent waiting for a connection is up to 5 seconds."
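For reviewers, the difference between the two readings of connect_timeout can be sketched in a few lines (a toy model in Python, not libpq itself; the function name and flag are made up for illustration):

```python
def worst_case_wait(connect_timeout, num_hosts, per_attempt=True):
    """Worst-case seconds before the client gives up, under two readings
    of connect_timeout.  per_attempt=True models the current doc wording
    (timeout applies separately to each connection attempt);
    per_attempt=False models the observed behavior (one timeout across
    all attempts)."""
    if connect_timeout == 0:
        return float("inf")  # zero means wait indefinitely
    return connect_timeout * num_hosts if per_attempt else connect_timeout

# Doc's current wording: 2 unreachable hosts x 5 seconds -> up to 10 seconds
assert worst_case_wait(5, 2, per_attempt=True) == 10
# Observed behavior: psql errors out after 5 seconds total
assert worst_case_wait(5, 2, per_attempt=False) == 5
```

This is only a model of the arithmetic in the two wordings; the actual behavior is whatever fe-connect.c implements.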

Should we also change the minimum "2 seconds" part to be longer, according to 
the number of hosts?

Regards
Takayuki Tsunakawa





Re: [HACKERS] Declarative partitioning - another take

2017-05-09 Thread Amit Langote
On 2017/05/10 12:59, Robert Haas wrote:
> On Tue, May 9, 2017 at 11:54 PM, Thomas Munro
>  wrote:
>> +A statement that targets a parent table in a inheritance or partitioning
>>
>> A tiny typo: s/a inheritance/an inheritance/
> 
> Now he tells me.

Thanks both.

Regards,
Amit





Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-09 Thread Peter Eisentraut
On 5/7/17 19:43, Andres Freund wrote:
> 3. Keep the catalog, make ALTER properly transactional, blocking
>concurrent nextval()s. This resolves the issue that nextval() can't
>see the changed definition of the sequence.

This was the intended choice.

I think I have more clarity about the different, overlapping issues:

1. Concurrent ALTER SEQUENCE causes "tuple concurrently updated"
   error, caused by updates to pg_sequence catalog.  This can be fixed
   by taking a self-exclusive lock somewhere.

   A typical answer for other catalogs is to use
   ShareRowExclusiveLock.  But in another context it was argued that
   this is undesirable for other reasons, and it's better to stick with
   RowExclusiveLock and deal with the occasional consequences.  But
   this question is obsoleted by #2.

2. There is some inconsistency or disagreement about what to lock
   when modifying relation metadata.  When you alter a non-relation
   object, you just have the respective catalog to lock, and you have
   to make the trade-offs described in #1.  When you alter a relation
   object, you can lock the catalog or the actual relation, or both.
   I had gone with locking the catalog, but existing practice in
   tablecmds.c is to lock the relation with the most appropriate lock
   level, and use RowExclusiveLock for the catalog.  We can make
   sequences do that, too.

3. Sequence WAL writes and catalog updates are not protected by same
   lock.  We can fix that by holding a lock longer.  If per #2 we make
   the lock on the sequence the "main" one, then we just hold that one
   across the catalog update.

4. Some concerns about AlterSequence() opening the pg_sequence
   catalog while holding the sequence buffer lock.  Move some things
   around to fix that.

5. nextval() disregarding uncommitted ALTER SEQUENCE changes.  In
   

Re: [HACKERS] Removal of plaintext password type references

2017-05-09 Thread Michael Paquier
On Wed, May 10, 2017 at 10:51 AM, Vaishnavi Prabakaran
 wrote:
> Following recent removal of support to store password in plain text,
> modified the code to
>
> 1. Remove  "PASSWORD_TYPE_PLAINTEXT" macro
> 2. Instead of using "get_password_type" to retrieve the encryption method
> AND to check if the password is already encrypted or not, modified the code
> to
> a. Use "get_password_encryption_type" function to retrieve encryption
> method.
> b. Use "isPasswordEncrypted" function to check if the password is already
> encrypted or not.
>
> These changes are mainly to increase code readability and do not change
> underlying functionality.

Actually, this patch reduces readability.

> Attached the patch for community's review.

+/*
+ * Verify whether the given password is already encrypted or not
+ */
+bool
+isPasswordEncrypted(const char *password)
+{
+   if(isMD5(password) || isSCRAM(password))
+   return true;
+   return false;
 }
New hash functions may be added in the future, and we now have a
facility that can be easily extended for this purpose. We have
already been through a couple of commits to make that modular, and I
think that it would be a bad idea to drop that. Please note that in
this facility we still need to be able to track passwords that are in
a plain format, because, even if Postgres does not store them in
cleartext, nothing prevents users from sending cleartext strings to
the server.

In your patch, get_password_encryption_type() and
isPasswordEncrypted() are actually doing the same work. There is no
need for duplication as well.

In short, I am clearly -1 for this patch.
-- 
Michael




Re: [HACKERS] .pgpass's behavior has changed

2017-05-09 Thread Kyotaro HORIGUCHI
At Mon, 1 May 2017 11:34:41 -0400, Robert Haas  wrote in 

> On Fri, Apr 28, 2017 at 3:54 AM, Kyotaro HORIGUCHI
>  wrote:
> > But the above also leaves a bug so I sent another patch to fix
> > it. The attached patch restores 9.6's behavior of looking up the
> > .pgpass file in the same manner as the other patch.
> 
> Thanks for catching this.  Committed.

Thanks!

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Declarative partitioning - another take

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 11:54 PM, Thomas Munro
 wrote:
> +A statement that targets a parent table in a inheritance or partitioning
>
> A tiny typo: s/a inheritance/an inheritance/

Now he tells me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] May cause infinite loop when initializing rel-cache contains partitioned table

2017-05-09 Thread Robert Haas
On Mon, May 8, 2017 at 5:07 AM, Amit Langote
 wrote:
> Thanks for bringing this to notice.  The above code should follow what's
> done for other fields that are initialized by
> RelationCacheInitializePhase3().  Since none of the entries in
> the relcache init file are partitioned tables, an infinite loop won't
> occur in practice, but it's a good idea to fix the code anyway.
>
> Attached patch does that.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 11:48 PM, Thomas Munro
 wrote:
> Hmm.  DB2 has transition tables (invented them maybe?) and it allows
> OLD/NEW TABLE on row-level triggers:
>
> https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.admin.dbobj.doc/doc/t0020236.html

Yeah, my impression is that Kevin was pretty keen on supporting that
case.  I couldn't say exactly why, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Declarative partitioning - another take

2017-05-09 Thread Thomas Munro
On Wed, May 10, 2017 at 3:51 PM, Robert Haas  wrote:
> On Sun, May 7, 2017 at 9:44 PM, Amit Langote
>  wrote:
>> I think that makes sense.  Modified it to read: "A statement that targets
>> a parent table in a inheritance or partitioning hierarchy..." in the
>> attached updated patch.
>
> LGTM.  Committed.

+A statement that targets a parent table in a inheritance or partitioning

A tiny typo: s/a inheritance/an inheritance/

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Declarative partitioning - another take

2017-05-09 Thread Robert Haas
On Sun, May 7, 2017 at 9:44 PM, Amit Langote
 wrote:
> I think that makes sense.  Modified it to read: "A statement that targets
> a parent table in a inheritance or partitioning hierarchy..." in the
> attached updated patch.

LGTM.  Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Thomas Munro
On Wed, May 10, 2017 at 11:22 AM, Thomas Munro
 wrote:
> On Wed, May 10, 2017 at 9:57 AM, Alvaro Herrera
>  wrote:
>> Thomas Munro wrote:
>>
>>> Recall that transition tables can be specified for statement-level
>>> triggers AND row-level triggers.  If you specify them for row-level
>>> triggers, then they can see all rows changed so far each time they
>>> fire.
>>
>> Uhmm ... why do we do this?  It seems like a great way to cause much
>> confusion.  Shouldn't we see the transition table containing the whole
>> set for statement-level triggers only, and give row-level triggers just
>> the individual affected row each time?
>
> I assumed that had come from the standard.  I don't have a published
> standard, but I have just taken a quick look at one of the publicly
> available drafts dated 2006.  I think its model is that the transition
> tables are always conceptually there, and NEW and OLD are just range
> variables over those tables.  That may explain why transition tables
> are mentioned in the context of row-level triggers, and it may be that
> the spec's authors never intended row-level triggers to be able to see
> the (partial) transition table other than through the range variables
> that access exactly one row, but I don't see any wording that
> explicitly says so in the spec.  Do you?  Thoughts, Kevin?

Hmm.  DB2 has transition tables (invented them maybe?) and it allows
OLD/NEW TABLE on row-level triggers:

https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.admin.dbobj.doc/doc/t0020236.html

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Remove pre-10 compatibility code in hash index

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 1:41 AM, Amit Kapila  wrote:
> Commit ea69a0dead5128c421140dc53fac165ba4af8520 has bumped the hash
> index version and obviates the need for backward compatibility code
> added by commit 293e24e507838733aba4748b514536af2d39d7f2.  The same
> has been mentioned in the commit message, please find attached patch
> to remove the pre-10 compatibility code in hash index.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Typos in comments in execMain.c

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 4:26 AM, Etsuro Fujita
 wrote:
> Here is a small patch to fix typos in comments for InitResultRelInfo() in
> execMain.c: s/ResultRelationInfo/ResultRelInfo/ and s/the the/the/.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 11:40 PM, Thomas Munro
 wrote:
> On Wed, May 10, 2017 at 2:31 PM, Amit Langote
>  wrote:
>> On 2017/05/10 6:51, Thomas Munro wrote:
>>> No such problem exists for partition hierarchies since the tables all
>>> appear as the same type to user code (though conversions may be
>>> happening for technical reasons).
>>
>> To clarify a bit, there may exist differences in the ordering of columns,
>> either between the parent and its partitions or between different
>> partitions.  For example, while parent's rowtype is (a int, b char, c
>> float), a partition's may be (b char, a int, c float), and yet another
>> partition may have (c float, a int, b char).  If some user code happens to
>> depend on the ordering of columns, selecting from the parent and selecting
>> from a partition directly may return the same result but in different 
>> formats.
>
> Right.  And the patch I posted converts all transition tuples it
> collects from child tables to match the TupleDescriptor of the
> relation you named, which it gets from
> estate->es_root_result_relations[0].  Is that right?  I suppose it
> will be very common for partitions to have matching TupleDescriptors,
> so the TupleConversionMap will usually be NULL meaning no conversion
> is ever done.  But in the inheritance case they might be different on
> purpose, and in both inheritance and partitioning cases they might be
> different in physical ways that aren't logically important as you said
> (column order, dropped columns).

Hmm.  What if the partitioning hierarchy contains foreign tables?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Thomas Munro
On Wed, May 10, 2017 at 2:31 PM, Amit Langote
 wrote:
> On 2017/05/10 6:51, Thomas Munro wrote:
>> No such problem exists for partition hierarchies since the tables all
>> appear as the same type to user code (though conversions may be
>> happening for technical reasons).
>
> To clarify a bit, there may exist differences in the ordering of columns,
> either between the parent and its partitions or between different
> partitions.  For example, while parent's rowtype is (a int, b char, c
> float), a partition's may be (b char, a int, c float), and yet another
> partition may have (c float, a int, b char).  If some user code happens to
> depend on the ordering of columns, selecting from the parent and selecting
> from a partition directly may return the same result but in different formats.

Right.  And the patch I posted converts all transition tuples it
collects from child tables to match the TupleDescriptor of the
relation you named, which it gets from
estate->es_root_result_relations[0].  Is that right?  I suppose it
will be very common for partitions to have matching TupleDescriptors,
so the TupleConversionMap will usually be NULL meaning no conversion
is ever done.  But in the inheritance case they might be different on
purpose, and in both inheritance and partitioning cases they might be
different in physical ways that aren't logically important as you said
(column order, dropped columns).
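The name-based remapping that a TupleConversionMap performs can be sketched like this (a Python analogy, not the executor code; the "return None when descriptors match" convention mirrors the common no-conversion case mentioned above):

```python
def build_conversion_map(src_cols, dst_cols):
    """Build a positional map from src column order to dst column order,
    matching columns by name -- loosely analogous to a TupleConversionMap.
    Returns None when the descriptors already match, i.e. no conversion
    is needed (the common case for partitions)."""
    if src_cols == dst_cols:
        return None
    return [src_cols.index(name) for name in dst_cols]

def convert(row, cmap):
    """Apply a conversion map; a None map means pass the row through."""
    return row if cmap is None else tuple(row[i] for i in cmap)

parent = ("a", "b", "c")   # parent rowtype: (a int, b char, c float)
part = ("b", "a", "c")     # a partition with a different column order
cmap = build_conversion_map(part, parent)
assert convert(("x", 1, 2.0), cmap) == (1, "x", 2.0)
assert build_conversion_map(parent, parent) is None  # no map, no work
```

The physical differences (column order, dropped columns) disappear after conversion, which is what lets all transition tuples share the root relation's TupleDescriptor.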

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Transition tables for triggers on foreign tables and views

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 11:24 PM, Thomas Munro
 wrote:
> Since the last one conflicted with recent changes, here's a rebased
> patch to prevent transition tables on views and foreign tables.

Committed, but I made the new error details more like the existing one
that also checks relkind.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] delta relations in AFTER triggers

2017-05-09 Thread Robert Haas
On Thu, May 4, 2017 at 4:23 PM, Thomas Munro
 wrote:
> We can't possibly support transition tables on TRUNCATE (the whole
> point of TRUNCATE is not to inspect all the rows so we can't collect
> them), and we already reject ROW triggers on TRUNCATE, so we should
> reject transition tables on STATEMENT triggers for TRUNCATE at
> creation time too.  See attached.  Thoughts?

Committed, with the addition of a regression test, the inclusion of
which in future bug fixes of this sort I recommend.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Transition tables for triggers on foreign tables and views

2017-05-09 Thread Thomas Munro
Hi,

Since the last one conflicted with recent changes, here's a rebased
patch to prevent transition tables on views and foreign tables.

-- 
Thomas Munro
http://www.enterprisedb.com


prevent-unsupported-transition-tables-v2.patch
Description: Binary data



Re: [HACKERS] delta relations in AFTER triggers

2017-05-09 Thread Robert Haas
On Thu, May 4, 2017 at 5:51 AM, Thomas Munro
 wrote:
> Reproduced here.  The stack looks like this:
>
> frame #3: 0x00010f06f8b0
> postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> lineNumber=1237) + 128 at assert.c:54
> frame #4: 0x00010f0cbc85
> postgres`tuplestore_rescan(state=0x7ff219840200) + 85 at
> tuplestore.c:1237
> frame #5: 0x00010eced9b1
> postgres`ExecReScanNamedTuplestoreScan(node=0x7ff21d007840) + 81
> at nodeNamedtuplestorescan.c:197
> frame #6: 0x00010eca46a6
> postgres`ExecReScan(node=0x7ff21d007840) + 822 at execAmi.c:216
> frame #7: 0x00010ece7eca
> postgres`ExecNestLoop(node=0x7ff21d006310) + 538 at
> nodeNestloop.c:148
>
> I think the problem is that the tuplestore read pointer hasn't been
> opened with the "rewindable" flag.  It works for me with the attached.
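The invariant behind that failed assertion can be modeled in a few lines (a toy Python analogy of tuplestore read pointers, not the real tuplestore.c; the flag value is assumed for illustration):

```python
REWIND = 0x0002  # stand-in for EXEC_FLAG_REWIND (value assumed here)

class ReadPointer:
    def __init__(self, eflags):
        self.eflags = eflags
        self.pos = 0

class ToyTupleStore:
    """Toy model: rescan is only legal on a read pointer that was
    allocated with the rewind capability, mirroring the Assert that
    fired in tuplestore_rescan."""
    def __init__(self):
        self.readptrs = []

    def alloc_read_pointer(self, eflags):
        self.readptrs.append(ReadPointer(eflags))
        return len(self.readptrs) - 1

    def rescan(self, ptr):
        rp = self.readptrs[ptr]
        assert rp.eflags & REWIND, "read pointer not opened as rewindable"
        rp.pos = 0

store = ToyTupleStore()
bad = store.alloc_read_pointer(0)        # the pre-fix situation
good = store.alloc_read_pointer(REWIND)  # the fix: request rewindability
store.rescan(good)                       # fine
try:
    store.rescan(bad)                    # trips the assertion
    raised = False
except AssertionError:
    raised = True
assert raised
```

The committed fix amounts to allocating the NamedTuplestoreScan's read pointer with the rewind flag so ExecReScan can legally reset it.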

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] multi-column range partition constraint

2017-05-09 Thread Robert Haas
On Mon, May 8, 2017 at 2:59 AM, Amit Langote
 wrote:
> Yes, disallowing this in the first place is the best thing to do.
> Attached patch 0001 implements that.  With the patch:

Committed.

With regard to 0002, some of the resulting constraints are a bit more
complicated than seems desirable:

create table foo1 partition of foo for values from (unbounded,
unbounded, unbounded) to (1, unbounded, unbounded);
yields:
Partition constraint: CHECK (((a < 1) OR (a = 1) OR (a = 1)))

It would be better not to have (a = 1) in there twice, and better
still to have the whole thing as (a <= 1).

create table foo2 partition of foo for values from (3, 4, 5) to (6, 7,
unbounded);
yields:
Partition constraint: CHECK ((((a > 3) OR ((a = 3) AND (b > 4)) OR ((a
= 3) AND (b = 4) AND (c >= 5))) AND ((a < 6) OR ((a = 6) AND (b < 7))
OR ((a = 6) AND (b = 7)))))

The first half of that (for the lower bound) is of course fine, but
the second half could be written better using <=, like instead of

((a = 6) AND (b < 7)) OR ((a = 6) AND (b = 7))
you could have:
((a = 6) AND (b <= 7))

This isn't purely cosmetic because the simpler constraint is probably
noticeably faster to evaluate.

I think you should have a few test cases like this in the patch - that
is, cases where some but not all bounds are finite.
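The simplification being asked for can be sketched as follows (a Python sketch of the desired output shape, not PostgreSQL's actual constraint generator; bounds here are the finite prefix, with any remaining columns unbounded):

```python
def upper_bound_constraint(cols, bounds):
    """Row-wise upper-bound check for a range partition, folding the
    final disjunct pair into '<=' when the remaining columns are
    unbounded -- the shape suggested above, not what the current
    code emits."""
    disjuncts = []
    unbounded_tail = len(bounds) < len(cols)
    for i, val in enumerate(bounds):
        terms = [f"({cols[j]} = {bounds[j]})" for j in range(i)]
        # Exclusive '<' everywhere, except the last finite column when
        # everything after it is unbounded: then '<=' covers both the
        # 'strictly below' and 'equal here, anything after' cases.
        op = "<=" if (i == len(bounds) - 1 and unbounded_tail) else "<"
        terms.append(f"({cols[i]} {op} {val})")
        disjuncts.append(terms[0] if len(terms) == 1
                         else "(" + " AND ".join(terms) + ")")
    return " OR ".join(disjuncts)

# foo1: FROM (unbounded, ...) TO (1, unbounded, unbounded)
assert upper_bound_constraint(["a", "b", "c"], [1]) == "(a <= 1)"
# foo2's upper bound: TO (6, 7, unbounded)
assert upper_bound_constraint(["a", "b", "c"], [6, 7]) == \
    "(a < 6) OR ((a = 6) AND (b <= 7))"
```

With all columns bounded the last comparison stays strict, so the fully-finite case is unchanged; the win is only for trailing unbounded columns, where the simpler expression should also be cheaper to evaluate.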

> BTW, is it strange that the newly added pg_get_partition_constraintdef()
> requires the relcache entry to be created for the partition and all of its
> ancestor relations up to the root (I mean the fact that the relcache entry
> needs to be created at all)?  I can see only one other function,
> set_relation_column_names(), creating the relcache entry at all.

I suggest that you display this information only when "verbose" is set
- i.e. \d+ not just \d.  I don't intrinsically object to forcing
the relcache entry to be built here, but note that it means this will
block when the parent is locked.  Between that and the fact that this
information will only sometimes be of interest, restricting it to \d+
seems preferable.

Next update on this issue by Thursday 5/11.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-09 Thread Masahiko Sawada
On Wed, May 10, 2017 at 2:46 AM, Masahiko Sawada  wrote:
> On Mon, May 8, 2017 at 8:42 PM, Petr Jelinek
>  wrote:
>> On 08/05/17 11:27, Masahiko Sawada wrote:
>>> Hi,
>>>
>>> I encountered a situation where DROP SUBSCRIPTION got stuck when
>>> initial table sync is in progress. In my environment, I created
>>> several tables with some data on publisher. I created subscription on
>>> subscriber and drop subscription immediately after that. It doesn't
>>> always happen but I often encountered it on my environment.
>>>
>>> ps -x command shows the following.
>>>
>>>  96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
>>> SUBSCRIPTION
>>>  96801 ?Ts 0:00 postgres: bgworker: logical replication
>>> worker for subscription 40993   waiting
>>>  96805 ?Ss 0:07 postgres: bgworker: logical replication
>>> worker for subscription 40993 sync 16418
>>>  96806 ?Ss 0:01 postgres: wal sender process masahiko [local] idle
>>>  96807 ?Ss 0:00 postgres: bgworker: logical replication
>>> worker for subscription 40993 sync 16421
>>>  96808 ?Ss 0:00 postgres: wal sender process masahiko [local] idle
>>>
>>> The DROP SUBSCRIPTION process (pid 96796) is waiting for the apply
>>> worker process (pid 96801) to stop while holding a lock on
>>> pg_subscription_rel. On the other hand the apply worker is waiting for
>>> acquiring a tuple lock on pg_subscription_rel needed for heap_update.
>>> Also table sync workers (pid 96805 and 96807) are waiting for the
>>> apply worker process to change their status.
>>>
>>
>> Looks like we should kill apply before dropping dependencies.
>
> Sorry, after investigated I found out that DROP SUBSCRIPTION process
> is holding AccessExclusiveLock on pg_subscription (, not
> pg_subscription_rel) and apply worker is waiting for acquiring a lock
> on it.

Hmm it seems there are two cases. One is that the apply worker waits
to acquire AccessShareLock on pg_subscription but DropSubscription
already acquired AcessExclusiveLock on it and waits for the apply
worker to finish. Another case is that the apply worker waits to
acquire a tuple lock on pg_subscription_rel but DropSubscription (maybe
dropping dependencies) already acquired it.

> So I guess that the dropping dependencies are not relevant with
> this.  It seems to me that the main cause is that DROP SUBSCRIPTION
> waits for apply worker to finish while keeping to hold
> AccessExclusiveLock on pg_subscription. Perhaps we need to contrive
> ways to reduce lock level somehow.
>
>>
>>> Also, even when DROP SUBSCRIPTION is done successfully, the table sync
>>> worker can be orphaned because I guess that the apply worker can exit
>>> before change status of table sync worker.
>>
>> Well the tablesync worker should stop itself if the subscription got
>> removed, but of course again the dependencies are an issue, so we should
>> probably kill those explicitly as well.
>
> Yeah, I think that we should ensure that the apply worker exits after
> killed all involved table sync workers.
>

Barring any objections, I'll add these two issues to the open items list.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] delta relations in AFTER triggers

2017-05-09 Thread Noah Misch
On Sat, May 06, 2017 at 05:34:46AM +, Noah Misch wrote:
> On Fri, May 05, 2017 at 08:23:33AM +1200, Thomas Munro wrote:
> > On Fri, May 5, 2017 at 12:39 AM, Neha Sharma
> >  wrote:
> > > While testing the feature we encountered one more crash,below is the
> > > scenario to reproduce.
> > >
> > > create table t1 ( a int);
> > > create table t2 ( a int);
> > > insert into t1 values (11),(12),(13);
> > >
> > > create or replace function my_trig() returns trigger
> > > language plpgsql as $my_trig$
> > > begin
> > > insert into t2(select a from new_table);
> > > RETURN NEW;
> > > end;
> > > $my_trig$;
> > >
> > > create trigger my_trigger
> > > after truncate or update  on t1
> > > referencing new table as new_table old table as oldtab
> > > for each statement
> > > execute procedure my_trig();
> > >
> > > truncate t1;
> > > server closed the connection unexpectedly
> > > This probably means the server terminated abnormally
> > > before or while processing the request.
> > > The connection to the server was lost. Attempting reset: Failed.
> > 
> > Thanks.  Reproduced here.  The stack looks like this:
> > 
> > frame #3: 0x000103e5e8b0
> > postgres`ExceptionalCondition(conditionName="!((trigdata->tg_event)
> > & 0x0003) == 0x) || (((trigdata->tg_event) & 0x0003)
> > == 0x0002) || (((trigdata->tg_event) & 0x0003) == 0x0001))
> > && (((trigdata->tg_event) & 0x0018) == 0x) &&
> > !(trigdata->tg_event & 0x0020) && !(trigdata->tg_event &
> > 0x0040)) || (trigdata->tg_oldtable == ((void*)0) &&
> > trigdata->tg_newtable == ((void*)0)))", errorType="FailedAssertion",
> > fileName="trigger.c", lineNumber=2045) + 128 at assert.c:54
> > frame #4: 0x000103a6f542
> > postgres`ExecCallTriggerFunc(trigdata=0x7fff5c40bad0, tgindx=0,
> > finfo=0x7fd8ba0817b8, instr=0x,
> > per_tuple_context=0x7fd8b906f928) + 258 at trigger.c:2039
> > frame #5: 0x000103a754ed
> > postgres`AfterTriggerExecute(event=0x7fd8ba092460,
> > rel=0x0001043fd9c0, trigdesc=0x7fd8ba068758,
> > finfo=0x7fd8ba0817b8, instr=0x,
> > per_tuple_context=0x7fd8b906f928,
> > trig_tuple_slot1=0x,
> > trig_tuple_slot2=0x) + 1469 at trigger.c:3860
> > frame #6: 0x000103a73080
> > postgres`afterTriggerInvokeEvents(events=0x7fd8ba07fb00,
> > firing_id=1, estate=0x7fd8ba090440, delete_ok='\x01') + 592 at
> > trigger.c:4051
> > frame #7: 0x000103a72b7b
> > postgres`AfterTriggerEndQuery(estate=0x7fd8ba090440) + 203 at
> > trigger.c:4227
> > frame #8: 0x000103a498aa
> > postgres`ExecuteTruncate(stmt=0x7fd8ba059f40) + 2026 at
> > tablecmds.c:1485
> > 
> > There's an assertion that it's (one of INSERT, UPDATE, DELETE, an
> > AFTER trigger, not deferred) *or* there are no transition tables.
> > Here it's TRUNCATE and there are transition tables, so it fails:
> > 
> > /*
> >  * Protect against code paths that may fail to initialize transition 
> > table
> >  * info.
> >  */
> > Assert(((TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
> >  TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event) ||
> >  TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) &&
> > TRIGGER_FIRED_AFTER(trigdata->tg_event) &&
> > !(trigdata->tg_event & AFTER_TRIGGER_DEFERRABLE) &&
> > !(trigdata->tg_event & AFTER_TRIGGER_INITDEFERRED)) ||
> >(trigdata->tg_oldtable == NULL && trigdata->tg_newtable == 
> > NULL));
> > 
> > 
> > We can't possibly support transition tables on TRUNCATE (the whole
> > point of TRUNCATE is not to inspect all the rows so we can't collect
> > them), and we already reject ROW triggers on TRUNCATE, so we should
> > reject transition tables on STATEMENT triggers for TRUNCATE at
> > creation time too.  See attached.  Thoughts?
> 
> [Action required within three days.  This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 10 open item.  Kevin,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.
> 
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:

Re: [HACKERS] delta relations in AFTER triggers

2017-05-09 Thread Noah Misch
On Sat, May 06, 2017 at 05:33:24AM +, Noah Misch wrote:
> On Thu, May 04, 2017 at 09:51:03PM +1200, Thomas Munro wrote:
> > On Thu, May 4, 2017 at 9:12 PM, Prabhat Sahu
> >  wrote:
> > > I have been testing this for a while and observed a server crash while 
> > > referencing a table column value in a trigger procedure for an AFTER 
> > > DELETE trigger.
> > >
> > > -- Steps to reproduce:
> > > CREATE TABLE t1(c1 int);
> > > CREATE TABLE t2(cc1 int);
> > > INSERT INTO t1 VALUES (10);
> > > INSERT INTO t2 VALUES (10);
> > >
> > > CREATE OR REPLACE FUNCTION trig_func() RETURNS trigger AS
> > > $$ BEGIN
> > > DELETE FROM t1 WHERE c1 IN (select OLD.cc1 from my_old);
> > > RETURN OLD;
> > > END; $$ LANGUAGE PLPGSQL;
> > >
> > > CREATE TRIGGER trg1
> > >   AFTER DELETE ON t2
> > > REFERENCING OLD TABLE AS my_old
> > > FOR EACH ROW
> > >   EXECUTE PROCEDURE trig_func();
> > >
> > > DELETE FROM t2 WHERE cc1 =10;
> > > server closed the connection unexpectedly
> > > This probably means the server terminated abnormally
> > > before or while processing the request.
> > > The connection to the server was lost. Attempting reset: Failed.
> > 
> > Reproduced here.  The stack looks like this:
> > 
> > frame #3: 0x00010f06f8b0
> > postgres`ExceptionalCondition(conditionName="!(readptr->eflags &
> > 0x0002)", errorType="FailedAssertion", fileName="tuplestore.c",
> > lineNumber=1237) + 128 at assert.c:54
> > frame #4: 0x00010f0cbc85
> > postgres`tuplestore_rescan(state=0x7ff219840200) + 85 at
> > tuplestore.c:1237
> > frame #5: 0x00010eced9b1
> > postgres`ExecReScanNamedTuplestoreScan(node=0x7ff21d007840) + 81
> > at nodeNamedtuplestorescan.c:197
> > frame #6: 0x00010eca46a6
> > postgres`ExecReScan(node=0x7ff21d007840) + 822 at execAmi.c:216
> > frame #7: 0x00010ece7eca
> > postgres`ExecNestLoop(node=0x7ff21d006310) + 538 at
> > nodeNestloop.c:148
> > 
> > I think the problem is that the tuplestore read pointer hasn't been
> > opened with the "rewindable" flag.  It works for me with the attached.
> 
> [Action required within three days.  This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 10 open item.  Kevin,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.
> 
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transition tables for triggers on foreign tables and views

2017-05-09 Thread Noah Misch
On Mon, May 08, 2017 at 12:00:20AM -0700, Noah Misch wrote:
> On Sat, May 06, 2017 at 06:58:35PM +, Noah Misch wrote:
> > On Tue, May 02, 2017 at 06:06:47PM -0500, Kevin Grittner wrote:
> > > On Fri, Apr 28, 2017 at 10:56 PM, Tom Lane  wrote:
> > > 
> > > > They will fire if you have an INSTEAD OF row-level trigger; the 
> > > > existence
> > > > of that trigger is what determines whether we implement DML on a view
> > > > through the view's own triggers or through translation to an action on
> > > > the underlying table.
> > > >
> > > > I do not think it'd be reasonable to throw an error for creation of
> > > > a statement-level view trigger when there's no row-level trigger,
> > > > because that just imposes a hard-to-deal-with DDL ordering dependency.
> > > >
> > > > You could make a case for having the updatable-view translation code
> > > > print a WARNING if it notices that there are statement-level triggers
> > > > that cannot be fired due to the translation.
> > > 
> > > Oh, I see -- you can add all the AFTER ... FOR EACH STATEMENT
> > > triggers you want for an updatable view and they will quietly sit
> > > there without firing no matter how many statements perform the
> > > supposedly triggering action, but as soon as you add a INSTEAD OF
> > > ... FOR EACH ROW trigger they spring to life.  On the face of it that
> > > seems to me to violate the POLA, but I kinda see how it evolved.
> > > 
> > > I need to look at this and the rather similar odd behavior under
> > > inheritance.  I hope to post something Friday.
> > 
> > This PostgreSQL 10 open item is past due for your status update.  Kindly 
> > send
> > a status update within 24 hours, and include a date for your subsequent 
> > status
> > update.  Refer to the policy on open item ownership:
> > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> 
> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
> for your status update.  Please reacquaint yourself with the policy on open
> item ownership[1] and then reply immediately.  If I do not hear from you by
> 2017-05-09 07:00 UTC, I will transfer this item to release management team
> ownership without further notice.
> 
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

This PostgreSQL 10 open item now needs a permanent owner.  Would any other
committer like to take ownership?  If this role interests you, please read
this thread and the policy linked above, then send an initial status update
bearing a date for your subsequent status update.  If the item does not have a
permanent owner by 2017-05-13 03:00 UTC, I will resolve the item by reverting
transition table patches.

Thanks,
nm




Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Chapman Flack
On 05/09/17 18:48, Mark Dilger wrote:

> I don't have any positive expectation that the postgres community will go
> along with any of this, but just from my point of view, the cleaner way to
> do what you are proposing is something like setting a session variable.
> 
> In your middle tier java application or whatever, you'd run something like
> 
> SET SESSION ON BEHALF OF 'joe user'
> 
> And then when you reuse that connection for the next web page you
> would do something like that again:
> 
> SET SESSION ON BEHALF OF "anonymous from 1.2.3.4"

I may have muddled my presentation by starting it with the more-speculative
ideas about building support into client-side drivers. At present, none of
that exists, and in the simple case where the application code has access
to the driver for sending arbitrary SQL, then yes, you just have it send

SET SESSION ON BEHALF OF 'joe user'
or
SET SESSION ON BEHALF OF "anonymous from 1.2.3.4"

or something equivalent but spelled differently, like

SET appident.user TO 'joe user'
or
SET appident.user TO 'anonymous'; SET appident.inet_addr TO '1.2.3.4'

Your spelling is nicer, but requires teaching the parser new syntax
and touching lots of places in PostgreSQL core, probably a non-starter.
In contrast, writing an 'appident' extension that defines some new GUCs
would be trivial and well-encapsulated. (What I'd think ideal would be
an extension that defines _whatever new GUCs make sense for your
application_ so if it would serve your purpose to have appident.locale
you just make it so. I wonder if an extension can define one GUC that
you set in the config file to a list of other GUCs it should also define
within its own namespace, or if that would run afoul of the order in which
initialization happens.)

The other bit of my proposal was to prevent Mallory from spoofing
his appident info by managing to inject some SQL through your app
like "21' && sET_/**/cONfiG('appident.user', 'alice', fA/**/lsE)".

That's where the appident.cookie() function comes in. You just
query it once at session establishment and remember the cookie.
That allows your code to say:

SET SESSION ON BEHALF OF 'joe user' BECAUSE I HAVE :cookie AND I SAY SO;

and Mallory can't inject that because he doesn't have :cookie and the
appident.cookie() function only succeeds the first time.

Without any new syntax, that could be spelled either:

SELECT appident.set('user', 'joe user', is_local => false,
  cookie => :cookie)
(where the cookie is simply verified by the SQL-callable function), or
SET appident.user TO :cookie || 'joe user'
(where it's verified by the check hook on the GUC, then stripped off).

That's pretty much the total extent of what I would propose the
extension to PostgreSQL proper would do. Just let you define some
new GUCs with meaning to your application, and not interpret them
or use them for anything, but provide a bit of protection so your
code controls them and arbitrary SQL queries can't.
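The cookie handshake described above can be simulated in a few lines. This is only an illustrative sketch, not PostgreSQL or extension code: the names `Session`, `get_cookie`, and `set_ident` are hypothetical stand-ins for the proposed `appident.cookie()` function and cookie-checked GUC setter.

```python
# Illustrative model of the proposed appident cookie handshake.
# All names here are hypothetical; the real mechanism would live in a
# PostgreSQL extension plus driver support.
import secrets

class Session:
    """Simulates one backend session with cookie-guarded GUCs."""
    def __init__(self):
        self._cookie = None
        self._cookie_issued = False
        self.gucs = {}

    def get_cookie(self):
        # appident.cookie() may only succeed once per session, so code
        # running later (e.g. injected SQL) cannot obtain the cookie.
        if self._cookie_issued:
            raise RuntimeError("appident.cookie() already called")
        self._cookie_issued = True
        self._cookie = secrets.token_hex(16)
        return self._cookie

    def set_ident(self, cookie, name, value):
        # The check hook rejects any SET not accompanied by the cookie.
        if cookie != self._cookie:
            raise PermissionError("bad or missing appident cookie")
        self.gucs[name] = value

s = Session()
cookie = s.get_cookie()          # the driver does this once at connect time
s.set_ident(cookie, "appident.user", "joe user")

# Mallory, injecting SQL later, has no cookie and cannot get one:
try:
    s.get_cookie()
except RuntimeError:
    pass
try:
    s.set_ident("guessed", "appident.user", "alice")
except PermissionError:
    pass
assert s.gucs["appident.user"] == "joe user"
```

The point of the simulation is only that the one-shot cookie function gives the connecting code a secret that later-arriving SQL cannot reproduce.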

The more science-fiction, client-side ideas I proposed were just
ruminations on what might be useful to application code sitting on top
of a taller stack of third-party code that might get in the way of just
sending your arbitrary SET command ahead of your query.

> and so forth.  You wouldn't have to worry about threads since this is
> being handled much like SET ROLE only instead of changing the role
> under which your database handle is operating, it only changes an
> opaque value that you can then use for whatever purpose.  I would
> not expect the database permissions logic to use the value in any
> way, but just to preserve it for you to query in logging or from a stored
> procedure or similar.  As long as your jdbc driver or similar does not
> prevent you from running non-standard sql statements, you should
> be able to pass this down the line without any of the third party
> software in the middle messing with it.

It's those more complex architectures I was thinking of with the client-
side ideas, where your code may be at the top of such a tall stack of
persistence/ORM/whatever layers that you're not sure you can just emit
an arbitrary SET command and have it come out in front of the right query
generated by the lower layers. That's where it might be handy to have a
way to associate the info with the current thread or current request
in a way that doesn't need any support in third party layers in the middle,
but can be retrieved by the driver (or a thin wrapper around it, down
at the bottom of the stack) and turned into the proper SET commands. That's
really a separable, less immediate, future-work idea.

> If this feature were implemented, I'd probably use it.  I might also be
> willing to write this with you in the unlikely event that it gets community
> approval.

I might just go ahead and try writing the extension part. Isn't that
the beauty of extensions? Approval? We don't need no stinkin' approval. :)

But critiques and better ideas are never unwelcome. :)

-Chap



Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Amit Langote
On 2017/05/10 6:51, Thomas Munro wrote:
> No such problem exists for partition hierarchies since the tables all
> appear as the same type to user code (though conversions may be
> happening for technical reasons).

To clarify a bit, there may exist differences in the ordering of columns,
either between the parent and its partitions or between different
partitions.  For example, while parent's rowtype is (a int, b char, c
float), a partition's may be (b char, a int, c float), and yet another
partition may have (c float, a int, b char).  If some user code happens to
depend on the ordering of columns, selecting from the parent and selecting
from a partition directly may return the same rows but with the columns in a
different order.
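The column-ordering hazard described above can be shown with a toy example. This is plain Python, not PostgreSQL behavior per se: it just illustrates why positional access to result rows is fragile across tables with different physical column order, while name-based access is stable.

```python
# Hypothetical rows as a driver might return them: the parent's column
# order is (a, b, c); one partition stores the same data as (b, a, c).
parent_cols = ("a", "b", "c")
partition_cols = ("b", "a", "c")

row_via_parent = (1, "x", 2.5)       # ordered per parent_cols
row_via_partition = ("x", 1, 2.5)    # same logical row, partition order

# Positional access gives different answers for the same logical row:
assert row_via_parent[0] != row_via_partition[0]

# Name-based access is stable regardless of physical column order:
by_name_parent = dict(zip(parent_cols, row_via_parent))
by_name_partition = dict(zip(partition_cols, row_via_partition))
assert by_name_parent == by_name_partition == {"a": 1, "b": "x", "c": 2.5}
```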

Thanks,
Amit





[HACKERS] how to shrink pg_attribute table?

2017-05-09 Thread xu jian
Hello,
My application creates temp tables frequently. Now I find the 
pg_attribute table is growing larger and larger, and is now 90GB.
pgstattuple shows 82% free space, so I restored the db to a dev environment and 
tried VACUUM FULL to shrink it.
However, the VACUUM FULL ran for 24 hours and hung there. I can see a CPU at 
100% usage, but no disk activity.
It looks like a bug in VACUUM FULL. My PG version is 9.6.1, and autovacuum uses 
the default settings.
Has anyone seen this issue before? How can I shrink the pg_attribute table, and 
what is the best autovacuum configuration? Thanks

James


[HACKERS] Removal of plaintext password type references

2017-05-09 Thread Vaishnavi Prabakaran
Hi All,

Following the recent removal of support for storing passwords in plain text,
I modified the code to:

1. Remove the "PASSWORD_TYPE_PLAINTEXT" macro.
2. Instead of using "get_password_type" both to retrieve the encryption method
and to check whether the password is already encrypted:
a. Use the "get_password_encryption_type" function to retrieve the encryption
method.
b. Use the "isPasswordEncrypted" function to check whether the password is
already encrypted.

These changes are mainly meant to improve code readability and do not change
the underlying functionality.
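The split between the two helpers can be sketched as follows. This is a rough Python model only; the real code is C in the backend, and the verifier formats shown (an "md5" prefix plus 32 hex digits, and a "SCRAM-SHA-256$" prefix) are simplified assumptions for illustration.

```python
# Rough Python model of the renamed helpers; names mirror the patch's
# proposed functions but the implementation here is purely illustrative.
import re

MD5_RE = re.compile(r"^md5[0-9a-f]{32}$")

def get_password_encryption_type(shadow_pass: str) -> str:
    """Classify a stored password verifier by its format."""
    if MD5_RE.match(shadow_pass):
        return "md5"
    if shadow_pass.startswith("SCRAM-SHA-256$"):
        return "scram-sha-256"
    return "unknown"

def is_password_encrypted(shadow_pass: str) -> bool:
    """True when the string is already in a recognized verifier format."""
    return get_password_encryption_type(shadow_pass) != "unknown"

assert is_password_encrypted("md5" + "0" * 32)
assert not is_password_encrypted("secret123")
```

Separating "which format is this?" from "is this encrypted at all?" is the readability gain the patch is after.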

Attached the patch for community's review.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Removal-of-plaintext-password-reference.patch
Description: Binary data



Re: [HACKERS] MSVC odd TAP test problem

2017-05-09 Thread Michael Paquier
On Wed, May 10, 2017 at 2:11 AM, Andrew Dunstan
 wrote:
> (After extensive trial and error) Turns out it's not quite that, it's
> the kill_kill stuff. I think for now we should just disable it on the
> platform. That means not running tests 7 and 8 of the logical_decoding
> tests and all of the crash_recovery test. Test::More has nice
> facilities for skipping tests cleanly. See attached patch.

+SKIP:
+{
+# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.
+skip "Test fails on Windows perl", 2 if $Config{osname} eq 'MSWin32';
So this basically works with msys but not with MSWin32? Interesting...

Does it make a difference if you use, for example, coup_d_grace =>
"QUIT"? Per the docs of IPC::Run SIGTERM is used for kills on Windows.

+if  ($Config{osname} eq 'MSWin32')
+{
+# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.
+plan skip_all => "Test fails on Windows perl";
+}
Indentation is weird here, with a mix of spaces and tabs.
-- 
Michael




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-09 Thread Amit Langote
On 2017/05/10 2:09, Robert Haas wrote:
> On Tue, May 9, 2017 at 9:26 AM, Rahila Syed  wrote:
>>> Hi Rahila,
>>
>>> I am not able to add a new partition if the default partition is further
>>> partitioned with a default partition.
>>
>>> Consider example below:
>>
>>> postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
>>> CREATE TABLE
>>> postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6, 7,
>>> 8);
>>> CREATE TABLE
>>> postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY
>>> LIST(b);
>>> CREATE TABLE
>>> postgres=# CREATE TABLE test_pd_pd PARTITION OF test_pd DEFAULT;
>>> CREATE TABLE
>>> postgres=# INSERT INTO test VALUES (20, 24, 12);
>>> INSERT 0 1
>>> postgres=# CREATE TABLE test_p2 PARTITION OF test FOR VALUES IN(15);
>> ERROR:  could not open file "base/12335/16420": No such file or directory
>>
>> Regarding a fix for this, I think we need to prohibit this case, that is,
>> prohibit creation of a new partition after a default partition which is
>> further partitioned. Currently, before adding a new partition after a
>> default partition, all the rows of the default partition are scanned, and
>> if a row matching the new partition's constraint exists, the new partition
>> is not added.
>>
>> If we allow this for a default partition which is partitioned further, we
>> will have to scan all the partitions of the default partition for matching
>> rows, which can slow down execution.
> 
> I think this case should be allowed

+1

> and I don't think it should
> require scanning all the partitions of the default partition.  This is
> no different than any other case where multiple levels of partitioning
> are used.  First, you route the tuple at the root level; then, you
> route it at the next level; and so on.  It shouldn't matter whether
> the routing at the top level is to that level's default partition or
> not.

It seems that adding a new partition at the same level as the default
partition will require scanning it or its (leaf) partitions if
partitioned.  Consider that p1, pd are partitions of a list-partitioned
table p accepting 1 and everything else, respectively, and pd is further
partitioned.  When adding p2 of p for 2, we need to scan the partitions of
pd to check if there are any (2, ...) rows.
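The conflict check described above can be modeled with a toy simulation. This is not PostgreSQL internals, just an illustration of why adding a new partition alongside a default partition requires inspecting the rows already routed to the default.

```python
# Toy model of list partitioning with a default partition, showing why
# adding a new partition must check the default partition for conflicts.
class ListPartitioned:
    def __init__(self):
        self.partitions = {}   # accepted key -> list of (key, row)
        self.default = []      # rows matching no explicit partition

    def insert(self, key, row):
        self.partitions.get(key, self.default).append((key, row))

    def add_partition(self, key):
        # Rows already routed to the default that would now belong to
        # the new partition must be detected (PostgreSQL raises an
        # error in this case; here we just refuse).
        if any(k == key for k, _ in self.default):
            raise ValueError("default partition contains conflicting rows")
        self.partitions[key] = []

p = ListPartitioned()
p.add_partition(1)
p.insert(1, "row-a")    # goes to the partition for 1
p.insert(2, "row-b")    # no partition for 2, so it lands in the default
conflict = False
try:
    p.add_partition(2)  # must scan the default and find (2, "row-b")
except ValueError:
    conflict = True
assert conflict
```

When the default partition is itself partitioned, the same check has to reach down into its leaf partitions, which is the scan cost being discussed.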

As for fixing the reported issue whereby the partitioned default
partition's non-existent file is being accessed, it would help to take a
look at the code in ATExecAttachPartition() starting at the following:

/*
 * Set up to have the table be scanned to validate the partition
 * constraint (see partConstraint above).  If it's a partitioned table, we
 * instead schedule its leaf partitions to be scanned.
 */
if (!skip_validate)
{

Thanks,
Amit





Re: [HACKERS] no test coverage for ALTER FOREIGN DATA WRAPPER name HANDLER ...

2017-05-09 Thread Amit Langote
On 2017/05/09 22:52, Mark Dilger wrote:
> 
>> On May 9, 2017, at 12:18 AM, Amit Langote  
>> wrote:
>>
>> Hi,
>>
>> On 2017/05/05 9:38, Mark Dilger wrote:
>>> Hackers,
>>>
>>> just FYI, I cannot find any regression test coverage of this part of the 
>>> grammar, not
>>> even in the contrib/ directory or TAP tests.  I was going to submit a patch 
>>> to help out,
>>> and discovered it is not so easy to do, and perhaps that is why the 
>>> coverage is missing.
>>
>> I think we could add the coverage by defining a dummy C FDW handler
>> function in regress.c.  I see that some other regression tests use C
>> functions defined there, such as test_atomic_ops().
>>
>> What do you think about the attached patch?  I am assuming you only meant
>> to add tests for the code in foreigncmds.c (CREATE/ALTER FOREIGN DATA
>> WRAPPER).
> 
> Thank you for creating the patch.  I see only one small oversight, which is
> that the successful case of ALTER FOREIGN DATA WRAPPER ... HANDLER ... is
> still not tested.  I added one line for that in the attached modification of
> your patch.

Ah right, thanks.

Adding this to the next commitfest as Ashutosh suggested.

Regards,
Amit





Re: [HACKERS] SUBSCRIPTIONS and pg_upgrade

2017-05-09 Thread Michael Paquier
On Wed, May 10, 2017 at 12:00 AM, Peter Eisentraut
 wrote:
> On 5/9/17 04:54, Petr Jelinek wrote:
>>> I think that it would be nice to fix that even before beta, so
>>> attached is a patch to add --no-subscriptions to pg_dump, pg_dumpall
>>> and pg_restore.
>>
>> Looks okay to me, it's simple enough patch.
>
> Committed, thanks.
>
> (There was some inconsistent variable naming no_subscription vs
> no_subscriptions, which I sorted out.)

Thanks.
-- 
Michael




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Thomas Munro
On Wed, May 10, 2017 at 9:57 AM, Alvaro Herrera
 wrote:
> Thomas Munro wrote:
>
>> Recall that transition tables can be specified for statement-level
>> triggers AND row-level triggers.  If you specify them for row-level
>> triggers, then they can see all rows changed so far each time they
>> fire.
>
> Uhmm ... why do we do this?  It seems like a great way to cause much
> confusion.  Shouldn't we see the transition table containing the whole
> set for statement-level triggers only, and give row-level triggers just
> the individual affected row each time?

I assumed that had come from the standard.  I don't have a published
standard, but I have just taken a quick look at one of the publicly
available drafts dated 2006.  I think its model is that the transition
tables are always conceptually there, and NEW and OLD are just range
variables over those tables.  That may explain why transition tables
are mentioned in the context of row-level triggers, and it may be that
the spec's authors never intended row-level triggers to be able to see
the (partial) transition table other than through the range variables
that access exactly one row, but I don't see any wording that
explicitly says so in the spec.  Do you?  Thoughts, Kevin?

After thinking about this some more, it's not only the conversion to
some arbitrary parent tuple type that would be surprising to a user of
inheritance + triggers + transition tables, it's the fact that a
row-level trigger on a given child table will also see tuples
collected from other tables in the hierarchy.  I think I didn't quite
get that right in -v2: it should probably build the
TriggerTransitionFilter from the union of all child tables' transition
table flags, not just the named table's, so that if any child table has
a row trigger we start collecting transition tuples from all the others
for it to see...  That sounds pretty crazy to me.

So, assuming we want to proceed with this plan of collecting
transition tuples from children, I see approximately 3 choices:

1.  Reject transition tables on row-level triggers.
2.  Reject transition tables on row-level triggers on tables that
inherit from other tables.
3.  Continue to allow transition tables on row-level triggers, but
document that they must be prepared to see transition tuples as they
would when querying arbitrary parent tables, and see tuples from other
tables in the hierarchy (!)

Another possibility which I haven't seriously considered is per-table
transition tables so you'd collect each child's tuples separately.

I take Alvaro's comment as a vote for 1.  I vote for 1 too.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Mark Dilger

> On May 9, 2017, at 3:14 PM, Chapman Flack  wrote:
> 
> On 05/09/2017 01:25 PM, Mark Dilger wrote:
> 
>> Consensus, no, but utility, yes.
>> 
>> In three tier architectures there is a general problem that the database
>> role used by the middle tier to connect to the database does not entail
>> information about the user who, such as a visitor to your website, made
>> the request of the middle tier.  Chapman wants this information so he
>> can include it in the logs, but stored procedures that work in support
>> of the middle tier might want it for locale information, etc.  As things
>> currently stand, there is no good way to get this passed all the way down
>> into the database stored procedure that needs it, given that you are
>> typically calling down through third party code that doesn't play along.
> 
> I like this framing.
> 
> Clearly a good part of the story is outside of PostgreSQL proper, and
> has to be written elsewhere. There could be a picture like this:
> 
> middle tier receiving request (webapp?) - knows user/origin info
>   |
>   V
> third-party code (rails? web2py? spring?) - doesn't play along
>   |
>   V
> PQ protocol driver (pg? psycopg2? pgjdbc?) - could offer support
>   .
>   .
>   V
> PostgreSQL (what to do here?)
> 
> 
> What to do on the client side of the . . > can only be suggested and
> would have to be independently implemented by several drivers, but
> I could imagine a driver offering some API to tuck a bit of
> application-specific data into some form of thread-local storage.
> In the picture above, the top layer, where the user/origin info
> is known, would need a small modification to call that driver API
> and provide that info. The request could then be processed on down
> through the third-party layer(s) that don't play along. When
> it reaches the driver, something magic will happen to forward
> the thread-local preserved information on to PostgreSQL along with
> the query.
> 
> That of course isn't enough if the intervening layers that don't
> play along use thread pools, and the request could ultimately
> reach the driver on a different thread. But for the simple case
> it gives an idea.
> 
> As to how the driver then propagates the info to PostgreSQL, seems
> to me it could generate a SET in front of the actual query. Most or
> all of what would be needed in PostgreSQL might be possible in an
> extension, which I could try my hand at writing. Here's the idea:
> 
> The extension would define one or more custom GUCs, with flags /
> check hooks to enforce strict limits on when and how they can be set.
> 
> If the client stack is using a simple connection-per-request
> approach, they could just be PGC_BACKEND, and the client part of
> the picture could just be that the top layer supplies them as
> options= in the conninfo string, which various drivers already
> support.
> 
> But if connections may be pooled and re-used for different identities
> and origins, that isn't enough. So the extension would provide
> a function that can be called once in the session, returning
> a random magic cookie. The driver itself would call this function
> upon connecting, and save the cookie in a per-connection
> private variable. Code above the driver in the stack would have
> no access to it, as the function can't be called a second time,
> and so could not spoof identities just by sending arbitrary SET
> commands. The extension would reject any attempts to set or reset
> those GUCs unless accompanied by the cookie.
> 
> Stored procedures could then look at those GUCs for locale / identity
> / origin information and trust that they haven't been spoofed by
> injected commands.
> 
> If there were such a thing as a log_line_prefix_hook, then such an
> extension could also support my original idea and add some new
> escapes to log the added information. But there doesn't seem to be
> such a hook at present. Or, if there were simply a %{name-of-GUC}
> escape supported in log_line_prefix, nothing more would even be
> needed.
> 
> Does this sound workable?

I don't have any positive expectation that the postgres community will go
along with any of this, but just from my point of view, the cleaner way to
do what you are proposing is something like setting a session variable.

In your middle tier java application or whatever, you'd run something like

SET SESSION ON BEHALF OF 'joe user'

And then when you reuse that connection for the next web page you
would do something like that again:

SET SESSION ON BEHALF OF "anonymous from 1.2.3.4"

and so forth.  You wouldn't have to worry about threads since this is
being handled much like SET ROLE only instead of changing the role
under which your database handle is operating, it only changes an
opaque value that you can then use for whatever purpose.  I would
not expect the database permissions logic to use the value in any
way, but just to preserve it for you to query in logging or from a 

Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Chapman Flack
On 05/09/2017 01:25 PM, Mark Dilger wrote:

> Consensus, no, but utility, yes.
> 
> In three tier architectures there is a general problem that the database
> role used by the middle tier to connect to the database does not entail
> information about the user who, such as a visitor to your website, made
> the request of the middle tier.  Chapman wants this information so he
> can include it in the logs, but stored procedures that work in support
> of the middle tier might want it for locale information, etc.  As things
> currently stand, there is no good way to get this passed all the way down
> into the database stored procedure that needs it, given that you are
> typically calling down through third party code that doesn't play along.

I like this framing.

Clearly a good part of the story is outside of PostgreSQL proper, and
has to be written elsewhere. There could be a picture like this:

 middle tier receiving request (webapp?) - knows user/origin info
   |
   V
 third-party code (rails? web2py? spring?) - doesn't play along
   |
   V
 PQ protocol driver (pg? psycopg2? pgjdbc?) - could offer support
   .
   .
   V
 PostgreSQL (what to do here?)


What to do on the client side of the . . > can only be suggested and
would have to be independently implemented by several drivers, but
I could imagine a driver offering some API to tuck a bit of
application-specific data into some form of thread-local storage.
In the picture above, the top layer, where the user/origin info
is known, would need a small modification to call that driver API
and provide that info. The request could then be processed on down
through the third-party layer(s) that don't play along. When
it reaches the driver, something magic will happen to forward
the thread-local preserved information on to PostgreSQL along with
the query.

That of course isn't enough if the intervening layers that don't
play along use thread pools, and the request could ultimately
reach the driver on a different thread. But for the simple case
it gives an idea.
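For the simple non-pooled case, that idea can be sketched concretely. This is a speculative Python illustration, not any existing driver API: `set_request_ident` and `DriverWrapper` are hypothetical names, and the wrapper merely records what it would send on the wire.

```python
# Sketch of the client-side idea: the top layer stashes user/origin
# info in thread-local storage; a thin wrapper at the bottom of the
# stack reads it back and emits the SET ahead of the real query, with
# the middle layers never needing to cooperate.
import threading

_request_ident = threading.local()

def set_request_ident(user, origin):
    """Called by the web tier, which knows who made the request."""
    _request_ident.user = user
    _request_ident.origin = origin

class DriverWrapper:
    """Thin wrapper around the real driver at the bottom of the stack."""
    def __init__(self):
        self.sent = []          # stands in for SQL sent on the wire

    def execute(self, sql):
        user = getattr(_request_ident, "user", None)
        if user is not None:
            # Prepend the identity before the real query.
            self.sent.append(f"SET appident.user TO '{user}'")
        self.sent.append(sql)

set_request_ident("joe user", "1.2.3.4")
db = DriverWrapper()
db.execute("SELECT 1")   # layers in between don't have to play along
assert db.sent == ["SET appident.user TO 'joe user'", "SELECT 1"]
```

As noted above, this breaks down once thread pools decouple the requesting thread from the thread that reaches the driver; the sketch only covers the simple case.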

As to how the driver then propagates the info to PostgreSQL, seems
to me it could generate a SET in front of the actual query. Most or
all of what would be needed in PostgreSQL might be possible in an
extension, which I could try my hand at writing. Here's the idea:

The extension would define one or more custom GUCs, with flags /
check hooks to enforce strict limits on when and how they can be set.

If the client stack is using a simple connection-per-request
approach, they could just be PGC_BACKEND, and the client part of
the picture could just be that the top layer supplies them as
options= in the conninfo string, which various drivers already
support.

But if connections may be pooled and re-used for different identities
and origins, that isn't enough. So the extension would provide
a function that can be called once in the session, returning
a random magic cookie. The driver itself would call this function
upon connecting, and save the cookie in a per-connection
private variable. Code above the driver in the stack would have
no access to it, as the function can't be called a second time,
and so could not spoof identities just by sending arbitrary SET
commands. The extension would reject any attempts to set or reset
those GUCs unless accompanied by the cookie.

Stored procedures could then look at those GUCs for locale / identity
/ origin information and trust that they haven't been spoofed by
injected commands.
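To make the handshake concrete, here is a toy model of the cookie scheme in plain Python. Everything here (Session, get_magic_cookie, set_identity_guc, app.auth_user) is invented for illustration of the proposal, not an existing PostgreSQL API:

```python
import secrets

class Session:
    """Toy model of the proposed cookie-gated identity GUCs."""

    def __init__(self):
        self._cookie = None
        self._gucs = {}

    def get_magic_cookie(self) -> str:
        # Callable only once per session; the driver calls it on connect
        # and keeps the result in a per-connection private variable.
        if self._cookie is not None:
            raise RuntimeError("cookie already issued for this session")
        self._cookie = secrets.token_hex(16)
        return self._cookie

    def set_identity_guc(self, name: str, value: str, cookie: str) -> None:
        # Any SET of an identity GUC must be accompanied by the cookie,
        # so code above the driver cannot spoof identities with
        # arbitrary injected SET commands.
        if self._cookie is None or cookie != self._cookie:
            raise PermissionError("identity GUCs require the session cookie")
        self._gucs[name] = value
```

Code above the driver that tries to call get_magic_cookie() a second time, or to SET without the cookie, gets an error rather than a spoofed identity.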

If there were such a thing as a log_line_prefix_hook, then such an
extension could also support my original idea and add some new
escapes to log the added information. But there doesn't seem to be
such a hook at present. Or, if there were simply a %{name-of-GUC}
escape supported in log_line_prefix, nothing more would even be
needed.

Does this sound workable?

-Chap


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CTE inlining

2017-05-09 Thread Andrew Dunstan


On 05/09/2017 04:14 PM, Tom Lane wrote:
>
> Well, TBH that is pre-judging what (if anything) is going to be changed
> by a feature that we don't even have design consensus on, let alone a
> patch for.  I don't think that's an improvement or a good service to
> our users; it's just promoting confusion.  I think that until we do have
> a design and a patch, we're better off leaving the docs reading the way
> they have for the past eight years.
>
> I'm also a bit mystified by the apparent urgency to change something,
> anything, in this area when we're already past feature freeze.  This
> would be a fit subject for discussion several months from now when
> v11 development opens, but right now it's just distracting us from
> stabilizing v10.
>
>   


+1. We should not be in the business of documenting or declaring our
intention of having vaporware. Say we get a patch and for some reason it
gets rejected. We've had features that took two or three releases to get
accepted. This is why we've historically shied away from roadmaps. And
we've been bitten in the past by deprecating things only to undeprecate
them later.

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Alvaro Herrera
Thomas Munro wrote:

> Recall that transition tables can be specified for statement-level
> triggers AND row-level triggers.  If you specify them for row-level
> triggers, then they can see all rows changed so far each time they
> fire.

Uhmm ... why do we do this?  It seems like a great way to cause much
confusion.  Shouldn't we see the transition table containing the whole
set for statement-level triggers only, and give row-level triggers just
the individual affected row each time?

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Thomas Munro
On Tue, May 9, 2017 at 10:29 PM, Thomas Munro
 wrote:
> In master, the decision to populate transition tables happens in
> AfterTriggerSaveEvent (called by the ExecAR* functions) in trigger.c.
> It does that if it sees that there are triggers on the
> relation-being-modified that have transition tables.
>
> With this patch, nodeModifyTable.c makes a 'TriggerTransitionFilter'
> object to override that behaviour, if there are child tables of either
> kind.  That is passed to the ExecAR* functions and thence to
> AfterTriggerSaveEvent, overriding its usual behaviour, so that it can
> know that it needs to collect tuples from child nodes and how to convert
> them to the layout needed for the tuplestores if necessary.
>
> Thoughts?  I'm not yet sure about the locking situation.  Generally
> needs some more testing.

Here is a new version with tidied up tests and a couple of small bug
fixes.  However, I've realised that there is a surprising behaviour
with this approach, and I'm not sure what to do about it.

Recall that transition tables can be specified for statement-level
triggers AND row-level triggers.  If you specify them for row-level
triggers, then they can see all rows changed so far each time they
fire.  Now our policy of firing the statement level triggers only for
the named relation but firing row-level triggers for all modified
relations leads to a tricky problem for the inheritance case: what
type of transition tuples should the child table's row-level triggers
see?

Suppose you have an inheritance hierarchy like this:

 animal
  -> mammal
  -> cat

You define a statement-level trigger on "animal" and another
statement-level trigger on "mammal".  You define a row-level trigger
on "cat".  When you update either "animal" or "mammal", the row
triggers on "cat" might run.  Row-level triggers on "cat" see OLD and
NEW as "cat" tuples, of course, but if they are configured to see
transition tables, should they see "cat", "mammal" or "animal" tuples
in the transition tables?  With my patch as it is, that depends on
which level of the hierarchy you explicitly updated!

No such problem exists for partition hierarchies since the tables all
appear as the same type to user code (though conversions may be
happening for technical reasons).
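For illustration only (plain Python, not server code), the layout conversion at issue amounts to projecting a child row onto whichever ancestor's column list the transition tuplestore uses, roughly what a tuple conversion map does; the column names are of course made up:

```python
def project_to_ancestor(child_row: dict, ancestor_attrs: list) -> tuple:
    """Keep only the attributes the ancestor relation has, in the
    ancestor's attribute order."""
    return tuple(child_row[a] for a in ancestor_attrs)

# A "cat" row carries columns its ancestors do not have.
cat_row = {"name": "Tom", "legs": 4, "fur": "tabby"}

# What the transition table holds depends on which level of the
# hierarchy was named in the UPDATE:
as_animal = project_to_ancestor(cat_row, ["name", "legs"])
as_mammal = project_to_ancestor(cat_row, ["name", "legs", "fur"])
```

Row-level triggers on "cat" would thus see differently shaped transition tuples for "UPDATE animal" versus "UPDATE mammal", which is exactly the surprise described above.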

-- 
Thomas Munro
http://www.enterprisedb.com


transition-tuples-from-child-tables-v2.patch
Description: Binary data



[HACKERS] Re: [GSOC 17] Eliminate O(N^2) scaling from rw-conflict tracking in serializable transactions

2017-05-09 Thread Alvaro Herrera
Hi Mengxing,

Mengxing Liu wrote:
> Hi, Alvaro and Kevin. I'm  Mengxing.  
> 
> This is a “synchronization” email to  tell you what I've done and my next 
> plan. I'm looking forward to your advice. 

Welcome!

> According to my proposal, I want to prepare the experimental environment 
> during the community bonding period. 
> 
> As this is the first time I have discussed this with Alvaro, I will
> introduce the environment again.
> 
> My lab have a Lenovo System x3950 X6 machine. 
> 
> https://www.lenovo.com/images/products/system-x/pdfs/datasheets/x3950_x6_ds.pdf
> 
> More specifically, there are 8 sockets, each has 15 Intel(R) Xeon(R) CPU 
> E7-8870 v2 @ 2.30GHz. 
> 
> Thus we have 120 cores in total. The storage is a 1 TB SSD with a SAS
> interface and 500 MB/s write bandwidth.

OK, having a single disk and 120 CPU cores sounds unbalanced.

> Because we have too many cores, the SSD becomes the bottleneck. In my test,
> pgbench scales to 36 connections (18 kTPS throughput). CPU utilization
> is below 30%.
> 
> Therefore:
> 
> 1. Is there anything wrong with my tuning parameters? For example, should I
> turn off "fsync"? Because we don't care about recovery here.

While it's true that we don't care about recovery, I'm not sure that
benchmark results would still be valid with fsync=off.  I would try
synchronous_commit=off instead, and perhaps also enlarge wal_buffers.
What do you have shared_buffers set to?

> 2. Can we use just two sockets (30 physical cores) to run the database server?
> Then the CPU can be the bottleneck, so that it shows the problem we are
> trying to solve.

Sure -- it's not a Postgres option, but an operating system option:
you'd set the "maxcpus" parameter in GRUB when booting Linux.
Alternatively, you could use "numactl" to bind the postgres server to a
subset of CPUs.  (And you could put pgbench on a different CPU set).

> >  What method of communication will be used among the mentors and with 
> > Mengxing.
> 
> What method do you prefer?

Mailing list is fine.

> >  Frequency of status updates (must be at least once a week and more often 
> > is encouraged).
> 
> How about reporting my status once a week?

Once a week sounds good to me.

> >  What steps will be taken next during the community bonding period.
> 
> As I wrote in the proposal, I want to establish the environment and read the 
> related source code. Do you have any suggestions for me?


-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] CTE inlining

2017-05-09 Thread Tom Lane
Ilya Shkuratov  writes:
> Ok, it seems that most people in the discussion agree that removing the
> optimization fence is the right thing to do.
> Nonetheless I am still hoping to discuss the algorithm and its implementation.

Yeah, so far we've mainly discussed whether to do that and how to control
it, not what the actual results would be.

> I suppose, in case of a single reference we can validate CTE subquery and 
> inline it
> just before SS_process_ctes() in subquery_planner() and then process remaining
> CTEs as before. 

Yeah, something like that ought to be possible.  I'd prefer to think of it
as related to the work prepjointree.c does, ie pull_up_sublinks, which
raises the question of why we do SS_process_ctes before that rather than after it.

You might want to start by postponing planning of CTE subqueries as late
as possible, without any other changes.  In particular, I think it'd be
a good thing to postpone Path-to-Plan conversion of the subqueries
until much later, ideally not till the final toplevel create_plan() call.
This is tied up with early Plan creation for regular subqueries too, so
that might have to be an all-or-nothing conversion.  But we aren't really
going to have much flexibility of planning for subqueries until we do
that.

> The case of multiple references is more interesting.
> Ideally, we would decide whether to inline just before pull_up_sublinks(),
> so all the optimizations can be applied to the inlined subquery. But it is
> impossible, as we have no information to build subquery paths and estimate
> their costs at this point.

TBH, I would just ignore that completely, at least until the simpler
case is done and committed.  Trying to bite that off as part of the
initial patch is likely to lead to never getting anything done at all.
And I'm not exactly convinced either that it will win often enough to
be worth the trouble, or that users would thank you for rewriting their
queries that way.

When and if we get to this, you could imagine tackling it a bit like
preprocess_minmax_aggregates, or like what I did with OR clauses in
<22002.1487099...@sss.pgh.pa.us>, ie just repeat a lot of the work
over again to get several complete Path trees, and then pick one.
This is kind of expensive and ugly but it's hard to see how to do it
sensibly in a bottom-up fashion.

regards, tom lane




Re: [HACKERS] export import bytea from psql

2017-05-09 Thread Daniel Verite
Pavel Stehule wrote:

> 1. using psql variables - we just need to write commands \setfrom and
> \setfrombf - this should be a very simple implementation. The value can be
> used multiple times. On the other hand, the loaded variable can hold a lot
> of memory (and that can be invisible to the user).

This could be simplified by using the variable only for the filename,
and then injecting the contents of the file into the PQexec'd query
as the interpolation of the variable.
We already have:
 :var for verbatim injection
 :'var' for injection quoted-as-text
 :"var" for injection quoted-as-identifier

What if we add new forms of dereferencing, for instance
(not necessarily with this exact syntax):
 : for injecting the quoted-as-text contents of the file pointed
  to by var.
 :{var} same thing but with file contents quoted as binary
 (PQescapeByteaConn)

then we could write:

\set img '/path/to/image.png'
insert into image(binary) values(:{img});

We could also go further  in that direction. More new interpolation
syntax could express that a variable is to be passed as a
parameter rather than injected (assuming a parametrized query),
and whether the value is directly the contents or it's a filename
pointing to the contents, and whether its format is binary or text,
and even support an optional oid or typename coupled to
the variable.
That would be a lot of new syntax for interpolation but no new
backslash command and no change to \set itself.
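As a sketch of the quoting rules involved (Python stand-ins for what psql/libpq do; the :{var} hex form is an assumption modeled on PQescapeByteaConn's hex output, since the syntax above is explicitly not final):

```python
def as_verbatim(v: str) -> str:
    # :var  - verbatim injection
    return v

def as_literal(v: str) -> str:
    # :'var' - quoted-as-text: double any embedded single quotes
    return "'" + v.replace("'", "''") + "'"

def as_identifier(v: str) -> str:
    # :"var" - quoted-as-identifier: double any embedded double quotes
    return '"' + v.replace('"', '""') + '"'

def as_bytea_literal(data: bytes) -> str:
    # hypothetical :{var} - file contents quoted as binary, hex-escaped
    return "'\\x" + data.hex() + "'::bytea"

print(as_literal("it's"))        # 'it''s'
print(as_bytea_literal(b"PNG"))  # '\x504e47'::bytea
```

The point of the proposal is that only the last function is new; the first three interpolation forms already exist in psql.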


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Fabien COELHO



What about detecting the empty result (eg PQntuples()==0?) and writing
"Empty result" instead of the strange looking empty table above? That would
just mean skipping the PrintQueryResult call in this case?


PQntuples == 0 every time - the query is not executed.


I meant to test the query which collects type names, which is executed?

Or check that PQnfields() == 0 on the PQdescribePrepared() result, so 
that there is no need to execute the type name collection query?



For the case "SELECT;" the empty table is correct.


Ok. Then write "Empty table"?

For TRUNCATE and similar commands I am not sure. The empty table is maybe
unusual, but it is valid - like "SELECT;".


I would partly disagree:

"SELECT;" does indeed return an empty relation, so I agree that an empty 
table is valid whether spelled out as "Empty table" or explicitly.


However, ISTM that "TRUNCATE stuff;" does *NOT* return a relation, so 
maybe "No table" would be ok, but not an empty table... ?!


So I could be okay with both:

  SELECT \gdesc
  -- "Empty table" or some other string
Or
  -- Name | Type

Although I prefer the first one, because the second looks like a bug 
somehow: I asked for a description, but nothing is described... even if 
the answer is somehow valid, it looks pretty strange.


The same result does not really suit "TRUNCATE Foo \gdesc", where "No table"
would seem more appropriate?

In both cases, "Empty result" is kind of neutral: it does not promise a
table or not. Hmmm. At least not too much. Or maybe some other string such
as "Nothing" or "No result"?


Now I wonder whether the No vs Empty cases can be distinguished?

--
Fabien.




Re: [HACKERS] CTE inlining

2017-05-09 Thread Tom Lane
"David G. Johnston"  writes:
> On Tue, May 9, 2017 at 12:36 PM, Tom Lane  wrote:
>> Also, considering that this behavior has been there since 8.4,
>> I think it's sheerest chutzpah to claim that changing the docs in
>> v10 would materially reduce the backward-compatibility concerns
>> for whatever we might do in v11.

> No it won't, but those who are using 10 as their first version would
> probably be happy if this was covered in a bit more depth.

I think the existing doc text is perfectly clear (while David's proposed
replacement text is not).

> Even a comment
> like "Unlike most other DBMS PostgreSQL presently executes the subquery in
> the CTE in relative isolation.  It is suggested that you document any
> intentional usage of this optimization fence as a query planning hint so
> that should the default change in the future you can update the query to
> support whatever official syntax is implemented to retain this behavior.

Well, TBH that is pre-judging what (if anything) is going to be changed
by a feature that we don't even have design consensus on, let alone a
patch for.  I don't think that's an improvement or a good service to
our users; it's just promoting confusion.  I think that until we do have
a design and a patch, we're better off leaving the docs reading the way
they have for the past eight years.

I'm also a bit mystified by the apparent urgency to change something,
anything, in this area when we're already past feature freeze.  This
would be a fit subject for discussion several months from now when
v11 development opens, but right now it's just distracting us from
stabilizing v10.

regards, tom lane




Re: [HACKERS] snapbuild woes

2017-05-09 Thread Erik Rijkers

On 2017-05-09 21:00, Petr Jelinek wrote:

On 09/05/17 19:54, Erik Rijkers wrote:

On 2017-05-09 11:50, Petr Jelinek wrote:



Ah okay, so this is the same issue that's reported by both Masahiko Sawada
[1] and Jeff Janes [2].

[1]
https://www.postgresql.org/message-id/CAD21AoBYpyqTSw%2B%3DES%2BxXtRGMPKh%3DpKiqjNxZKnNUae0pSt9bg%40mail.gmail.com
[2]
https://www.postgresql.org/message-id/flat/CAMkU%3D1xUJKs%3D2etq2K7bmbY51Q7g853HLxJ7qEB2Snog9oRvDw%40mail.gmail.com


I don't understand why you come to that conclusion: both Masahiko Sawada
and Jeff Janes have a DROP SUBSCRIPTION in the mix; my cases don't.
Isn't that a real difference?


( I do sometimes get that DROP-SUBSCRIPTION too, but much less often 
than the sync-failure. )








Re: [HACKERS] CTE inlining

2017-05-09 Thread David G. Johnston
On Tue, May 9, 2017 at 12:36 PM, Tom Lane  wrote:

> Also, considering that this behavior has been there since 8.4,
> I think it's sheerest chutzpah to claim that changing the docs in
> v10 would materially reduce the backward-compatibility concerns
> for whatever we might do in v11.
>

​No it won't, but those who are using 10 as their first version would
probably be happy if this was covered in a bit more depth.  Even a comment
like "Unlike most other DBMS PostgreSQL presently executes the subquery in
the CTE​ in relative isolation.  It is suggested that you document any
intentional usage of this optimization fence as a query planning hint so
that should the default change in the future you can update the query to
support whatever official syntax is implemented to retain this behavior.

We cannot really help those who have been using this since 8.4 and won't
read the relevant docs because they don't need to learn anything new; but
we can at least avoid subjecting newer customers.  I'm not that strongly in
favor of it but it would be a nice touch - even if it won't change the
risk/reward calculation in any meaningful way.

David J.


Re: [HACKERS] Logical replication ApplyContext bloat

2017-05-09 Thread Stas Kelvich

> On 9 May 2017, at 21:53, Peter Eisentraut  
> wrote:
> 
> On 5/8/17 11:26, Peter Eisentraut wrote:
>> I think it would make more sense to reset ApplyMessageContext in
>> apply_dispatch(), so that it is done consistently after every message
>> (as the name implies), not only some messages.
> 
> Committed that.
> 
>> Also, perhaps ApplyMessageContext should be a child of
>> TopTransactionContext.  (You have it as a child of ApplyContext, which
>> is under TopMemoryContext.)
> 
> Left that as is.

Thanks!

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company






Re: [HACKERS] CTE inlining

2017-05-09 Thread Tom Lane
"David G. Johnston"  writes:
> On Tue, May 9, 2017 at 12:15 PM, Peter Eisentraut <
> peter.eisentr...@2ndquadrant.com> wrote:
>> If we just tell them that the thing they might have relied on might go
>> away, without a replacement to suggest, then we're just confusing and
>> scaring them, no?

> We'd end up suggesting our OFFSET 0 hack as true protection.

Considering that many of the commenters in this thread view OFFSET 0
as a vile hack that ought to go away, I can hardly see how that's
an improvement.

I tend to agree with Peter that there's no need to do anything until
we have a committable code improvement.  Documentation changes that
push people towards adding OFFSET 0, without any certainty that that
will be the long-term answer, do not seem like a net positive.

Also, considering that this behavior has been there since 8.4,
I think it's sheerest chutzpah to claim that changing the docs in
v10 would materially reduce the backward-compatibility concerns
for whatever we might do in v11.

regards, tom lane




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Pavel Stehule
2017-05-09 21:23 GMT+02:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 5/3/17 02:56, Pavel Stehule wrote:
> > Sometimes I have to solve the result types of some query. It is
> > invisible in psql. You have to materialize table or you have to
> > create view. Now, when we can enhance \g command, we can introduce
> > query describing
> >
> > some like
> >
> > select a, b from foo
> > \gdesc
> >
> >  name |  type   | length | collation
> > ------+---------+--------+-----------
> >  a    | varchar |     30 |
> >  b    | numeric |     20 |
> >
> >
> > here is the patch. It is based on PQdescribePrepared result.
>
> I have often wished for functionality like this, so I'm in favor of
> investigating this.
>
> I don't think you need a separate call to prepare the query.  You can
> get the result column types using PQftype().  (Hmm, you can get the
> typmod that way, but not the collation.)
>

the describe command is used and collation info is not available

see the attached patches


> My thinking in the past has been to put the column types either in the
> column headers, like "colname (coltype)", or in the footer, along with
> the actual query result.
>

My first idea was something like the classic GUI presentation:

  colname1
    type
  ======
   data

but the header is not multi-line.

Merging with the result is another way, but mostly you don't need this info,
so a special command looks better.




>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] CTE inlining

2017-05-09 Thread Serge Rielau
On Tue, May 9, 2017 at 12:22 PM, David G. Johnston  
wrote:
On Tue, May 9, 2017 at 12:15 PM, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
On 5/5/17 08:43, David Rowley wrote:
> How about we get the ball rolling on this in v10 and pull that part
> out of the docs. If anything that'll buy us a bit more wiggle room to
> change this in v11.
>
> I've attached a proposed patch.

If we just tell them that the thing they might have relied on might go
away, without a replacement to suggest, then we're just confusing and
scaring them, no?

We'd end up suggesting our OFFSET 0 hack as true protection. If they know for a
fact that their use of CTE for its barrier properties is not supported they are
also more likely to document intentional usage with something like: "-- CHANGE
THIS ONCE VERSION 11 IS RELEASED!!! --" which would make finding the call sites
that need to add the new "MATERIALIZED" keyword much easier.

How about adding MATERIALIZED now (in 10) as a noise word? Give people a
release to switch over before pulling the rug.

Cheers
Serge

Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Tom Lane
David Fetter  writes:
> On Tue, May 09, 2017 at 12:48:01PM -0400, Tom Lane wrote:
>> I don't think that's a problem: while psql will remove "--" and everything
>> following it until newline, it won't remove the newline.  So there's still
>> a token boundary there.

> We may still need to be careful.

> davidfetter@davidfetter=# SELECT 'foo'-- stuff goes here
> 'bar';
>  ?column?
> ----------
>  foobar
> (1 row)

If you read the SQL spec, you'll find that particular behavior is
required by spec, and would still be required by spec with or
without the comment.  Perhaps it's a trap for unwary SQL programmers,
but psql isn't making it worse.

regards, tom lane




Re: [HACKERS] CTE inlining

2017-05-09 Thread David G. Johnston
On Tue, May 9, 2017 at 12:15 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 5/5/17 08:43, David Rowley wrote:
> > How about we get the ball rolling on this in v10 and pull that part
> > out of the docs. If anything that'll buy us a bit more wiggle room to
> > change this in v11.
> >
> > I've attached a proposed patch.
>
> If we just tell them that the thing they might have relied on might go
> away, without a replacement to suggest, then we're just confusing and
> scaring them, no?
>

​We'd end up suggesting our OFFSET 0 hack as true protection.  If they know
for a fact that their use of CTE for its barrier properties is not
supported they are also more likely to document intentional usage with
something like:   "-- CHANGE THIS ONCE VERSION 11 IS RELEASED!!! --" which
would make finding the call sites that need to add the new "MATERIALIZED"
​keyword much easier.

David J.


Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Peter Eisentraut
On 5/3/17 02:56, Pavel Stehule wrote:
> Sometimes I have to solve the result types of some query. It is
> invisible in psql. You have to materialize table or you have to
> create view. Now, when we can enhance \g command, we can introduce
> query describing
> 
> some like
> 
> select a, b from foo
> \gdesc
> 
>  name |  type   | length | collation
> ------+---------+--------+-----------
>  a    | varchar |     30 |
>  b    | numeric |     20 |
> 
> 
> here is the patch. It is based on PQdescribePrepared result.

I have often wished for functionality like this, so I'm in favor of
investigating this.

I don't think you need a separate call to prepare the query.  You can
get the result column types using PQftype().  (Hmm, you can get the
typmod that way, but not the collation.)

My thinking in the past has been to put the column types either in the
column headers, like "colname (coltype)", or in the footer, along with
the actual query result.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread David Fetter
On Tue, May 09, 2017 at 12:48:01PM -0400, Tom Lane wrote:
> David Fetter  writes:
> > On Fri, May 05, 2017 at 02:20:26PM -0400, Robert Haas wrote:
> >> On Thu, May 4, 2017 at 10:59 AM, Chapman Flack  
> >> wrote:
> >>> invalid input syntax for integer: "21' && 1=2)) Uni/**/ON
> >>> SEl/**/eCT 0x646665743166657274,0x646665743266657274,
> >>> 0x646665743366657274 -- "
> 
> >> Now that is choice.  I wonder what specific database system that's
> >> targeting...
> 
> > It could well be targeting some class of pipeline to the database,
> > too, for example one that removes comments and/or un-escapes.
> 
> Yeah.  It's a bit hard to see a database's parser treating "Uni/**/ON"
> as UNION, but if some stack someplace had a keyword check ahead of
> a comment-stripping step, maybe that could do something useful.

Right.

> > It occurs to me that psql's habit of stripping out everything on a
> > line that follows a double dash  might be vulnerable in this way, but
> > I wouldn't see such vulnerabilities as super easy to exploit, as psql
> > isn't usually exposed directly to input from the internet.
> 
> I don't think that's a problem: while psql will remove "--" and everything
> following it until newline, it won't remove the newline.  So there's still
> a token boundary there.  If we tried to strip /*...*/ comments we'd have
> to be more careful.

We may still need to be careful.

davidfetter@davidfetter=# SELECT 'foo'-- stuff goes here
'bar';
 ?column?
----------
 foobar
(1 row)
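The difference between the two stripping strategies can be sketched outside psql with a toy stripper (a deliberate simplification: it ignores comment markers inside string literals and nested /* */ comments, both of which a real lexer must handle):

```python
import re

def strip_line_comments(sql: str) -> str:
    # Remove "--" to end of line but KEEP the newline, so a token
    # boundary survives -- effectively what psql does.
    return re.sub(r"--[^\n]*", "", sql)

def strip_block_comments_naive(sql: str) -> str:
    # Deleting /*...*/ outright can fuse adjacent tokens.
    return re.sub(r"/\*.*?\*/", "", sql, flags=re.DOTALL)

def strip_block_comments_safe(sql: str) -> str:
    # Replacing the comment with one space preserves the boundary.
    return re.sub(r"/\*.*?\*/", " ", sql, flags=re.DOTALL)

print(strip_line_comments("SELECT 'foo'-- stuff\n'bar';"))  # newline kept
print(strip_block_comments_naive("SELECT 1/**/2;"))         # tokens fused
print(strip_block_comments_safe("SELECT 1/**/2;"))          # boundary kept
```

The 'foo' 'bar' concatenation above is then the SQL spec's doing (adjacent string literals separated by a newline), not the stripper's; the /* */ case is where a careless stripper would actually change token structure.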

> As far as the actual thread topic goes, I tend to agree with
> Robert's doubt that there's enough utility or consensus for this.

I'm pretty sure we're going to need a logger with more structure than
our default, especially as those logs get machine-parsed, and more
importantly, machine-acted-upon.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] CTE inlining

2017-05-09 Thread Peter Eisentraut
On 5/5/17 08:43, David Rowley wrote:
> How about we get the ball rolling on this in v10 and pull that part
> out of the docs. If anything that'll buy us a bit more wiggle room to
> change this in v11.
> 
> I've attached a proposed patch.

If we just tell them that the thing they might have relied on might go
away, without a replacement to suggest, then we're just confusing and
scaring them, no?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Pavel Stehule
2017-05-09 20:37 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> [...] it is a little bit worse. I cannot distinguish between SELECT\gdesc
>> and TRUNCATE xxx\gdesc. All are valid commands and produce an empty result,
>> so the result of the \gdesc command should be an empty result too.
>>
>> postgres=# truncate table xx\gdesc
>> ┌──┬──┐
>> │ Name │ Type │
>> ╞══╪══╡
>> └──┴──┘
>> (0 rows)
>>
>
> Hmmm. At least it is better than the previous error.
>
> What about detecting the empty result (eg PQntuples()==0?) and writing
> "Empty result" instead of the strange looking empty table above? That would
> just mean skipping the PrintQueryResult call in this case?


PQntuples == 0 every time - the query is not executed.

I am not sure what is more correct. The "Empty result" string is not used
elsewhere in psql. For the case "SELECT;" the empty table is correct. For
TRUNCATE and similar commands I am not sure. The empty table is maybe
unusual, but it is valid - like "SELECT;". The implementation is not a
problem in either case. The question is what is more natural for users: a) the
string "Empty result", or b) an empty table.

I prefer (b) due to consistency with current behavior (that is the only
reason; I have no other).

postgres=# select 'ahoj' where false;
┌──┐
│ ?column? │
╞══╡
└──┘
(0 rows)



>
>
> --
> Fabien.


Re: [HACKERS] snapbuild woes

2017-05-09 Thread Petr Jelinek
On 09/05/17 19:54, Erik Rijkers wrote:
> On 2017-05-09 11:50, Petr Jelinek wrote:
> 
>> I rebased the above mentioned patch to apply to the patches Andres sent,
>> if you could try to add it on top of what you have and check if it still
>> fails, that would be helpful.
> 
> It still fails.
> 
> With these patches
> 
> - 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
> - 2-WIP-Possibly-more-robust-snapbuild-approach.patch +
> - fix-statistics-reporting-in-logical-replication-work.patch +
> - Skip-unnecessary-snapshot-builds.patch
> 
> built again on top of 44c528810a1 ( so I had to add the
> 'fix-statistics-rep*' patch because without it I immediately got that
> Assertion failure again ).
> 
> As always most runs succeed (especially on this large 192GB 16-core
> server).
> 
> But attached is an output file of a number of runs of my
> pgbench_derail2.sh test.
> 
> Overal result:
> 
> -- out_20170509_1635.txt
>   3 -- pgbench -c 64 -j 8 -T 900 -P 180 -n   --  scale 25
>   2 -- All is well.
>   1 -- Not good, but breaking out of wait (21 times no change)
> 
> I broke it off after iteration 4, so 5 never ran, and
> iteration 1 failed due to a mistake in the harness (something stupid I
> did) - not interesting.
> 
> iteration 2 succeeds. (eventually has 'replica ok')
> 
> iteration 3 succeeds. (eventually has 'replica ok')
> 
> iteration 4 fails.
>   Just after 'alter subscription sub1 enable' I caught (as is usual)
> pg_stat_replication.state as 'catchup'. So far so good.
>   After the 15-minute pgbench run pg_stat_replication has only 2
> 'startup' lines (and none 'catchup' or 'streaming'):
> 
>  port | pg_stat_replication |  pid   | wal | replay_loc | diff |
> ?column? |  state  |   app   | sync_state
>  6972 | pg_stat_replication | 108349 | 19/8FBCC248 ||  |
>  | startup | derail2 | async
>  6972 | pg_stat_replication | 108351 | 19/8FBCC248 ||  |
>  | startup | derail2 | async
> 
> (that's from:
>select $port1 as port,'pg_stat_replication' as pg_stat_replication, pid
>  , pg_current_wal_location() wal, replay_location replay_loc,
> pg_current_wal_location() - replay_location as diff
>  , pg_current_wal_location() <= replay_location
>  , state, application_name as app, sync_state
>from pg_stat_replication
> )
> 
> This remains in this state for as long as my test-programs lets it
> (i.e., 20 x 30s, or something like that, and then the loop is exited);
> in the output file it says: 'Not good, but breaking out of wait'
> 
> Below is the accompanying ps (with the 2 'deranged senders' as Jeff
> Janes would surely call them):
> 
> 
> UID PID   PPID  C STIME TTY  STAT   TIME CMD
> rijkers  107147  1  0 17:11 pts/35   S+ 0:00
> /var/data1/pg_stuff/pg_installations/pgsql.logical_replication2/bin/postgres
> -D /var/data1/pg_stuff/pg_installations
> rijkers  107149 107147  0 17:11 ?Ss 0:00  \_ postgres:
> logger process
> rijkers  107299 107147  0 17:11 ?Ss 0:01  \_ postgres:
> checkpointer process
> rijkers  107300 107147  0 17:11 ?Ss 0:00  \_ postgres:
> writer process
> rijkers  107301 107147  0 17:11 ?Ss 0:00  \_ postgres: wal
> writer process
> rijkers  107302 107147  0 17:11 ?Ss 0:00  \_ postgres:
> autovacuum launcher process
> rijkers  107303 107147  0 17:11 ?Ss 0:00  \_ postgres: stats
> collector process
> rijkers  107304 107147  0 17:11 ?Ss 0:00  \_ postgres:
> bgworker: logical replication launcher
> rijkers  108348 107147  0 17:12 ?Ss 0:01  \_ postgres:
> bgworker: logical replication worker for subscription 70310 sync 70293
> rijkers  108350 107147  0 17:12 ?Ss 0:00  \_ postgres:
> bgworker: logical replication worker for subscription 70310 sync 70298
> rijkers  107145  1  0 17:11 pts/35   S+ 0:02
> /var/data1/pg_stuff/pg_installations/pgsql.logical_replication/bin/postgres
> -D /var/data1/pg_stuff/pg_installations
> rijkers  107151 107145  0 17:11 ?Ss 0:00  \_ postgres:
> logger process
> rijkers  107160 107145  0 17:11 ?Ss 0:08  \_ postgres:
> checkpointer process
> rijkers  107161 107145  0 17:11 ?Ss 0:07  \_ postgres:
> writer process
> rijkers  107162 107145  0 17:11 ?Ss 0:02  \_ postgres: wal
> writer process
> rijkers  107163 107145  0 17:11 ?Ss 0:00  \_ postgres:
> autovacuum launcher process
> rijkers  107164 107145  0 17:11 ?Ss 0:02  \_ postgres: stats
> collector process
> rijkers  107165 107145  0 17:11 ?Ss 0:00  \_ postgres:
> bgworker: logical replication launcher
> rijkers  108349 107145  0 17:12 ?Ss 0:27  \_ postgres: wal
> sender process rijkers [local] idle
> rijkers  108351 107145  0 17:12 ?Ss 0:26  \_ postgres: wal
> sender process rijkers [local] idle
> 
> I have had no time to add (or view) any CPUinfo.
> 
> 

Ah okay, so this is same 

Re: [HACKERS] Logical replication ApplyContext bloat

2017-05-09 Thread Peter Eisentraut
On 5/8/17 11:26, Peter Eisentraut wrote:
> I think it would make more sense to reset ApplyMessageContext in
> apply_dispatch(), so that it is done consistently after every message
> (as the name implies), not only some messages.

Committed that.

> Also, perhaps ApplyMessageContext should be a child of
> TopTransactionContext.  (You have it as a child of ApplyContext, which
> is under TopMemoryContext.)

Left that as is.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Fabien COELHO


Hello Pavel,

[...] it is a little bit worse. I cannot distinguish between 
SELECT\gdesc and TRUNCATE xxx\gdesc. All are valid commands and produce 
an empty result, so the result of a \gdesc command should be an empty result too.


postgres=# truncate table xx\gdesc
┌──┬──┐
│ Name │ Type │
╞══╪══╡
└──┴──┘
(0 rows)


Hmmm. At least it is better than the previous error.

What about detecting the empty result (eg PQntuples()==0?) and writing 
"Empty result" instead of the strange looking empty table above? That 
would just mean skipping the PrintQueryResult call in this case?


--
Fabien.


Re: [HACKERS] COMPRESS VALUES feature request

2017-05-09 Thread Marko Tiikkaja
On Tue, May 9, 2017 at 8:18 PM, Erez Segal  wrote:

> In the IRC channel - johto suggested an implementation:
>
>  if you want to get really fancy you could have two columns where
> only one of set; one would be the value (if reasonably unique) and the
> other the id (if not)
>

I only suggested how to do this in an application if one really wants the
feature so badly.  I don't think this would fly as a native implementation.


> I'd like to add that an ENUM can be used instead of the id+lookup table in
> the 2nd column for non unique values.
>

An ENUM in postgres *is* an id + lookup table.


.m


Re: [HACKERS] CTE inlining

2017-05-09 Thread Ilya Shkuratov

Ok, it seems that most people in the discussion agree that removing the
optimization fence is the right thing to do. But so far the main topic has been
whether it is worth making "inlining" the default, and how we should enable it.

Nonetheless I am still hoping to discuss the algorithm and its implementation.

I suppose that in the case of a single reference we can validate the CTE
subquery and inline it just before SS_process_ctes() in subquery_planner(), and
then process the remaining CTEs as before.

The case of multiple references is more interesting.
Ideally, we would decide whether to inline just before pull_up_sublinks(), so
that all the optimizations can be applied to the inlined subquery. But it is
impossible, as we have no information to build subquery paths and estimate
their costs at this point. All the necessary initialization is performed in
query_planner(), which is invoked far later, in grouping_planner(). (As far as
I understand.)

The most straightforward way is to compare the CTE scan cost with the subquery
execution and result scan cost in set_rel_size(), just after
set_cte_pathlist(), and alter the RelOptInfo if we choose to inline
(e.g. (CTE scan) < (cheapest_path(subquery) + subquery scan)).
This way we can still push down predicates as is done in
set_subquery_pathlist(), but we miss pull_up_subquery().
Besides, it seems like a quick and dirty solution.

Maybe it is possible to add a subquery scan to the RTE_CTE RelOptInfo, but I'm
not sure.

So what is the right way to compare a CTE scan against subquery execution with
a subsequent scan?
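The comparison asked about above can be restated as a toy cost model. Everything here is a hypothetical sketch, not planner code: it just generalizes the inequality (CTE scan) < (cheapest_path(subquery) + subquery scan) to a CTE referenced n times.

```python
# Hypothetical cost-model sketch; none of these names exist in the planner.

def choose_inline(subquery_cost, cte_scan_cost, inlined_ref_cost, n_refs):
    """True if inlining looks cheaper than materializing the CTE once."""
    # Materialize: run the subquery once, then scan the result per reference.
    materialize = subquery_cost + n_refs * cte_scan_cost
    # Inline: each reference runs its own (possibly predicate-pushed) copy.
    inline = n_refs * inlined_ref_cost
    return inline < materialize
```

With one reference and effective predicate pushdown (per-reference cost well below the full subquery cost) inlining wins; with many references and little pushdown, materialization does. That is why the decision has to be made somewhere real path costs are available.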

I am new to PostgreSQL development, so I need guidance from someone familiar
with the optimizer infrastructure to ensure that I am moving in the right
direction and not doing something weird.


P.S. There is a paper [1] describing implementation of CTE optimization in Orca 
optimizer. It may be useful, though architecture is completely different.

[1] Optimization of Common Table Expressions in MPP Database Systems 
(http://www.vldb.org/pvldb/vol8/p1704-elhelw.pdf)


Ilya Shkuratov




[HACKERS] COMPRESS VALUES feature request

2017-05-09 Thread Erez Segal
Hi,

Following a discussion in the IRC channel, I'd like to suggest a feature
request for the ToDo wiki page.

The use case is having a column with strings/floats, where some values are
very common and can be compressed, while many other values are unique,
making a lookup table not efficient.

It can be part of a column definition, set in CREATE TABLE/ALTER TABLE
e.g. field VARCHAR(255) COMPRESS (NULL, 'EREZ', 'SEGAL')

The feature exists in TeraData:
https://community.teradata.com/t5/Database/how-does-compress-work-in-tables/td-p/13950

In the IRC channel - johto suggested an implementation:

 if you want to get really fancy you could have two columns where
only one of set; one would be the value (if reasonably unique) and the
other the id (if not)
I'd like to add that an ENUM can be used instead of the id+lookup table in
the 2nd column for non unique values.


Re: [HACKERS] logical replication deranged sender

2017-05-09 Thread Petr Jelinek
On 09/05/17 19:13, Jeff Janes wrote:
> On Tue, May 9, 2017 at 9:18 AM, Petr Jelinek
> > wrote:
> 
> On 08/05/17 13:47, Petr Jelinek wrote:
> > On 08/05/17 01:17, Jeff Janes wrote:
> >> After dropping a subscription, it says it succeeded and that it dropped
> >> the slot on the publisher.
> >>
> >> But the publisher still has the slot, and a full-tilt process described
> >> by ps as
> >>
> >> postgres: wal sender process jjanes [local] idle in transaction
> >>
> >> Strace shows that this process is doing nothing but opening, reading,
> >> lseek, and closing from pg_wal, and calling sbrk.  It never sends 
> anything.
> >>
> >> This is not how it should work, correct?
> >>
> >
> > No, and I don't see how this happens though, we only report success if
> > the publisher side said that DROP_REPLICATION_SLOT succeeded. So far I
> > don't see anything in source that would explain this. I will need to
> > reproduce it first to see what's happening (wasn't able to do that yet,
> > but it might just need more time since you say it does not always
> happen).
> >
> 
> Hm I wonder are there any workers left on subscriber when this happens?
> 
> 
> Yes.  using ps, I get this:
> 
> postgres: bgworker: logical replication worker for subscription 16408
> sync 16391
> postgres: bgworker: logical replication worker for subscription 16408
> sync 16388
> 
> They seem to be permanently blocked on a socket to read from the publisher.
> 
> On the publisher side, I think it is very slowly assembling a snapshot. 
> It seems to be adding one xid at a time, and then re-sorting the entire
> list.  Over and over.
> 

Okay, then it's the same issue Masahiko Sawada reported in a nearby
thread, or at least it has the same cause.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Pavel Stehule
2017-05-09 18:15 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> Patch applies cleanly and compiles.
>>>
>>
> Idem for v2. "make check" ok. Tests look good.
>
> I would suggest some rewording, maybe:
>>>
>>> "Show the description of the result of the current query buffer without
>>> actually executing it, by considering it a prepared statement."
>>>
>>> done
>>
>
> Ok. If some native English speaker can clarify the sentence further, or
> improve it anyway, thanks in advance!
>
>   SELECT $1 AS unknown_type \gdesc
>>>
>>
>> It is not unknown type - the default placeholder type is text
>>
>
> Indeed. I really meant something like:
>
>   calvin=# SELECT $1 + $2 \gdesc
>   ERROR:  operator is not unique: unknown + unknown
>   ...
>
> More comments:
>
> I propose that the help message could be "describe result of query without
> executing it".
>

done

>
> I found an issue. \gdesc fails when the command does not return a result:
>
>  calvin=# TRUNCATE pgbench_history \gdesc
>  ERROR:  syntax error at or near ")"
>  LINE 2:  (VALUES ) s (name, tp, tpm)
>
> I guess the issue is that PQdescribePrepared returns an empty description,
> which is fine, but then the second query should be skipped, and some
> message should be output instead, like "no result" or whatever...
>
> This need fixing, and a corresponding test should be added.
>

it is a little bit worse. I cannot distinguish between SELECT\gdesc and
TRUNCATE xxx\gdesc. All are valid commands and produce an empty result, so
the result of a \gdesc command should be an empty result too.

 postgres=# truncate table xx\gdesc
┌──┬──┐
│ Name │ Type │
╞══╪══╡
└──┴──┘
(0 rows)

postgres=# select \gdesc
┌──┬──┐
│ Name │ Type │
╞══╪══╡
└──┴──┘
(0 rows)


> Also I would suggest to add a \g after the first test, which would execute
> the current buffer after its description, to show that the current buffer
> does indeed hold the query:
>
>  calvin=# SELECT 1 as one, ... \gdesc \g
>  -- one | int
>  -- ...
>  -- 1 | ...
>
>
done

Regards

Pavel


> --
> Fabien.
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 3b86612..2e46f4c 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1957,6 +1957,16 @@ Tue Oct 26 21:40:57 CEST 1999
 
 
   
+\gdesc
+
+
+Show the description of the result of the current query buffer without
+actually executing it, by considering it a prepared statement.
+
+
+  
+
+  
 \gexec
 
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index b3263a9..a1c8e1d 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -88,6 +88,7 @@ static backslashResult exec_command_errverbose(PsqlScanState scan_state, bool ac
 static backslashResult exec_command_f(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_g(PsqlScanState scan_state, bool active_branch,
 			   const char *cmd);
+static backslashResult exec_command_gdesc(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_gexec(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_gset(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_help(PsqlScanState scan_state, bool active_branch);
@@ -337,6 +338,8 @@ exec_command(const char *cmd,
 		status = exec_command_f(scan_state, active_branch);
 	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0)
 		status = exec_command_g(scan_state, active_branch, cmd);
+	else if (strcmp(cmd, "gdesc") == 0)
+		status = exec_command_gdesc(scan_state, active_branch);
 	else if (strcmp(cmd, "gexec") == 0)
 		status = exec_command_gexec(scan_state, active_branch);
 	else if (strcmp(cmd, "gset") == 0)
@@ -1328,6 +1331,25 @@ exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd)
 }
 
 /*
+ * \gdesc -- describe query result
+ */
+static backslashResult
+exec_command_gdesc(PsqlScanState scan_state, bool active_branch)
+{
+	backslashResult status = PSQL_CMD_SKIP_LINE;
+
+	if (active_branch)
+	{
+		pset.gdesc_flag = true;
+		status = PSQL_CMD_SEND;
+	}
+	else
+		ignore_slash_filepipe(scan_state);
+
+	return status;
+}
+
+/*
  * \gexec -- send query and execute each field of result
  */
 static backslashResult
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index a2f1259..529034f 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -1323,7 +1323,87 @@ SendQuery(const char *query)
 		}
 	}
 
-	if (pset.fetch_count <= 0 || pset.gexec_flag ||
+	if (pset.gdesc_flag)
+	{
+		/*
+		 * An unnamed prepared statement is used. It is not possible to
+		 * create an unnamed prepared statement from psql user space,
+		 * so there should not be any conflict. At the moment it is not
+		 * possible to deallocate this prepared statement, so it has
+		 * to live to the end of the session or until another \gdesc call.
+		 

Re: [HACKERS] snapbuild woes

2017-05-09 Thread Erik Rijkers

On 2017-05-09 11:50, Petr Jelinek wrote:

I rebased the above mentioned patch to apply to the patches Andres 
sent,
if you could try to add it on top of what you have and check if it 
still

fails, that would be helpful.


It still fails.

With these patches

- 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
- 2-WIP-Possibly-more-robust-snapbuild-approach.patch +
- fix-statistics-reporting-in-logical-replication-work.patch +
- Skip-unnecessary-snapshot-builds.patch

built again on top of 44c528810a1 ( so I had to add the 
'fix-statistics-rep*' patch because without it I immediately got that 
Assertion failure again ).


As always most runs succeed (especially on this large 192GB 16-core 
server).


But attached is an output file of a number of runs of my 
pgbench_derail2.sh test.


Overal result:

-- out_20170509_1635.txt
  3 -- pgbench -c 64 -j 8 -T 900 -P 180 -n   --  scale 25
  2 -- All is well.
  1 -- Not good, but breaking out of wait (21 times no change)

I broke it off after iteration 4, so 5 never ran, and
iteration 1 failed due to a mistake in the harness (something stupid I 
did) - not interesting.


iteration 2 succeeds. (eventually has 'replica ok')

iteration 3 succeeds. (eventually has 'replica ok')

iteration 4 fails.
  Just after 'alter subscription sub1 enable' I caught (as is usual) 
pg_stat_replication.state as 'catchup'. So far so good.
  After the 15-minute pgbench run pg_stat_replication has only 2 
'startup' lines (and none 'catchup' or 'streaming'):


 port | pg_stat_replication |  pid   | wal | replay_loc | diff | 
?column? |  state  |   app   | sync_state
 6972 | pg_stat_replication | 108349 | 19/8FBCC248 ||  | 
 | startup | derail2 | async
 6972 | pg_stat_replication | 108351 | 19/8FBCC248 ||  | 
 | startup | derail2 | async


(that's from:
   select $port1 as port,'pg_stat_replication' as pg_stat_replication, 
pid
 , pg_current_wal_location() wal, replay_location replay_loc, 
pg_current_wal_location() - replay_location as diff

 , pg_current_wal_location() <= replay_location
 , state, application_name as app, sync_state
   from pg_stat_replication
)

This remains in this state for as long as my test-programs lets it 
(i.e., 20 x 30s, or something like that, and then the loop is exited); 
in the output file it says: 'Not good, but breaking out of wait'


Below is the accompanying ps (with the 2 'deranged senders' as Jeff 
Janes would surely call them):



UID PID   PPID  C STIME TTY  STAT   TIME CMD
rijkers  107147  1  0 17:11 pts/35   S+ 0:00 
/var/data1/pg_stuff/pg_installations/pgsql.logical_replication2/bin/postgres 
-D /var/data1/pg_stuff/pg_installations
rijkers  107149 107147  0 17:11 ?Ss 0:00  \_ postgres: 
logger process
rijkers  107299 107147  0 17:11 ?Ss 0:01  \_ postgres: 
checkpointer process
rijkers  107300 107147  0 17:11 ?Ss 0:00  \_ postgres: 
writer process
rijkers  107301 107147  0 17:11 ?Ss 0:00  \_ postgres: wal 
writer process
rijkers  107302 107147  0 17:11 ?Ss 0:00  \_ postgres: 
autovacuum launcher process
rijkers  107303 107147  0 17:11 ?Ss 0:00  \_ postgres: stats 
collector process
rijkers  107304 107147  0 17:11 ?Ss 0:00  \_ postgres: 
bgworker: logical replication launcher
rijkers  108348 107147  0 17:12 ?Ss 0:01  \_ postgres: 
bgworker: logical replication worker for subscription 70310 sync 70293
rijkers  108350 107147  0 17:12 ?Ss 0:00  \_ postgres: 
bgworker: logical replication worker for subscription 70310 sync 70298
rijkers  107145  1  0 17:11 pts/35   S+ 0:02 
/var/data1/pg_stuff/pg_installations/pgsql.logical_replication/bin/postgres 
-D /var/data1/pg_stuff/pg_installations
rijkers  107151 107145  0 17:11 ?Ss 0:00  \_ postgres: 
logger process
rijkers  107160 107145  0 17:11 ?Ss 0:08  \_ postgres: 
checkpointer process
rijkers  107161 107145  0 17:11 ?Ss 0:07  \_ postgres: 
writer process
rijkers  107162 107145  0 17:11 ?Ss 0:02  \_ postgres: wal 
writer process
rijkers  107163 107145  0 17:11 ?Ss 0:00  \_ postgres: 
autovacuum launcher process
rijkers  107164 107145  0 17:11 ?Ss 0:02  \_ postgres: stats 
collector process
rijkers  107165 107145  0 17:11 ?Ss 0:00  \_ postgres: 
bgworker: logical replication launcher
rijkers  108349 107145  0 17:12 ?Ss 0:27  \_ postgres: wal 
sender process rijkers [local] idle
rijkers  108351 107145  0 17:12 ?Ss 0:26  \_ postgres: wal 
sender process rijkers [local] idle


I have had no time to add (or view) any CPUinfo.


Erik Rijkers








Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-09 Thread Masahiko Sawada
On Mon, May 8, 2017 at 8:42 PM, Petr Jelinek
 wrote:
> On 08/05/17 11:27, Masahiko Sawada wrote:
>> Hi,
>>
>> I encountered a situation where DROP SUBSCRIPTION got stuck when
>> initial table sync is in progress. In my environment, I created
>> several tables with some data on publisher. I created subscription on
>> subscriber and drop subscription immediately after that. It doesn't
>> always happen but I often encountered it on my environment.
>>
>> ps -x command shows the following.
>>
>>  96796 ?Ss 0:00 postgres: masahiko postgres [local] DROP
>> SUBSCRIPTION
>>  96801 ?Ts 0:00 postgres: bgworker: logical replication
>> worker for subscription 40993waiting
>>  96805 ?Ss 0:07 postgres: bgworker: logical replication
>> worker for subscription 40993 sync 16418
>>  96806 ?Ss 0:01 postgres: wal sender process masahiko [local] 
>> idle
>>  96807 ?Ss 0:00 postgres: bgworker: logical replication
>> worker for subscription 40993 sync 16421
>>  96808 ?Ss 0:00 postgres: wal sender process masahiko [local] 
>> idle
>>
>> The DROP SUBSCRIPTION process (pid 96796) is waiting for the apply
>> worker process (pid 96801) to stop while holding a lock on
>> pg_subscription_rel. On the other hand the apply worker is waiting for
>> acquiring a tuple lock on pg_subscription_rel needed for heap_update.
>> Also table sync workers (pid 96805 and 96807) are waiting for the
>> apply worker process to change their status.
>>
>
> Looks like we should kill apply before dropping dependencies.

Sorry, after investigating I found out that the DROP SUBSCRIPTION process
is holding an AccessExclusiveLock on pg_subscription (not
pg_subscription_rel) and the apply worker is waiting to acquire a lock
on it. So I guess that the dropped dependencies are not relevant to
this.  It seems to me that the main cause is that DROP SUBSCRIPTION
waits for the apply worker to finish while continuing to hold the
AccessExclusiveLock on pg_subscription. Perhaps we need to find ways
to reduce the lock level somehow.
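The stuck state described here is a plain wait cycle: DROP SUBSCRIPTION holds the exclusive lock and waits for the apply worker, while the apply worker waits for that same lock. A generic cycle check over a wait-for graph (illustrative only, not PostgreSQL's deadlock detector) makes the shape explicit:

```python
# Generic wait-for-graph cycle check; process names are from the report.

def has_wait_cycle(waits_for):
    """waits_for: dict mapping each blocked process to what it waits on."""
    for start in waits_for:
        seen, cur = set(), start
        while cur in waits_for:
            if cur in seen:
                return True
            seen.add(cur)
            cur = waits_for[cur]
    return False

edges = {"DROP SUBSCRIPTION": "apply worker",   # waits for worker to exit
         "apply worker": "DROP SUBSCRIPTION"}   # waits for the lock
```

Because one edge is "waiting for a process to exit" rather than a heavyweight lock, the server's deadlock detector would never see the cycle, which would match the observed indefinite hang.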

>
>> Also, even when DROP SUBSCRIPTION is done successfully, the table sync
>> worker can be orphaned because I guess that the apply worker can exit
>> before change status of table sync worker.
>
> Well the tablesync worker should stop itself if the subscription got
> removed, but of course again the dependencies are an issue, so we should
> probably kill those explicitly as well.

Yeah, I think that we should ensure that the apply worker exits only
after killing all the involved table sync workers.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Mark Dilger

> On May 9, 2017, at 9:48 AM, Tom Lane  wrote:
> 
> David Fetter  writes:
>> On Fri, May 05, 2017 at 02:20:26PM -0400, Robert Haas wrote:
>>> On Thu, May 4, 2017 at 10:59 AM, Chapman Flack  
>>> wrote:
 invalid input syntax for integer: "21' && 1=2)) Uni/**/ON
 SEl/**/eCT 0x646665743166657274,0x646665743266657274,
 0x646665743366657274 -- "
> 
>>> Now that is choice.  I wonder what specific database system that's
>>> targeting...
> 
>> It could well be targeting some class of pipeline to the database,
>> too, for example one that removes comments and/or un-escapes.
> 
> Yeah.  It's a bit hard to see a database's parser treating "Uni/**/ON"
> as UNION, but if some stack someplace had a keyword check ahead of
> a comment-stripping step, maybe that could do something useful.
> 
>> It occurs to me that psql's habit of stripping out everything on a
>> line that follows a double dash  might be vulnerable in this way, but
>> I wouldn't see such vulnerabilities as super easy to exploit, as psql
>> isn't usually exposed directly to input from the internet.
> 
> I don't think that's a problem: while psql will remove "--" and everything
> following it until newline, it won't remove the newline.  So there's still
> a token boundary there.  If we tried to strip /*...*/ comments we'd have
> to be more careful.
> 
> As far as the actual thread topic goes, I tend to agree with Robert's
> doubt that there's enough utility or consensus for this.

Consensus, no, but utility, yes.

In three-tier architectures there is a general problem: the database
role used by the middle tier to connect to the database carries no
information about the user (such as a visitor to your website) who made
the request of the middle tier.  Chapman wants this information so he
can include it in the logs, but stored procedures that work in support
of the middle tier might want it for locale information, etc.  As things
currently stand, there is no good way to get this passed all the way down
into the database stored procedure that needs it, given that you are
typically calling down through third-party code that doesn't play along.
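One commonly used workaround (not a complete answer to the question) is for the middle tier to stash the end-user identity in a session-local setting right after taking a connection from the pool, e.g. PostgreSQL's set_config('myapp.end_user', ..., false), which stored procedures can read back with current_setting(). The sketch below models that per-connection key/value behaviour with a plain class; the Session class and the setting name are invented for illustration.

```python
# Toy model of per-connection settings (set_config / current_setting);
# Session and "myapp.end_user" are illustrative names only.

class Session:
    def __init__(self):
        self._settings = {}

    def set_config(self, name, value):
        self._settings[name] = value

    def current_setting(self, name, default=None):
        return self._settings.get(name, default)

conn = Session()
# Middle tier, once per request, before calling third-party code:
conn.set_config("myapp.end_user", "visitor-42")
# Deep inside a stored procedure (or a logging hook) later:
who = conn.current_setting("myapp.end_user")
```

The third-party code in the middle never needs to know about the setting, which is what makes this pattern survive "code that doesn't play along".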

I expect proposals to address this have been made and rejected before.
Is there a word or phrase for this that I can use to search the archives
to read up on the issue?

Thanks,

Mark Dilger



Re: [HACKERS] logical replication deranged sender

2017-05-09 Thread Jeff Janes
On Tue, May 9, 2017 at 9:18 AM, Petr Jelinek 
wrote:

> On 08/05/17 13:47, Petr Jelinek wrote:
> > On 08/05/17 01:17, Jeff Janes wrote:
> >> After dropping a subscription, it says it succeeded and that it dropped
> >> the slot on the publisher.
> >>
> >> But the publisher still has the slot, and a full-tilt process described
> >> by ps as
> >>
> >> postgres: wal sender process jjanes [local] idle in transaction
> >>
> >> Strace shows that this process is doing nothing but opening, reading,
> >> lseek, and closing from pg_wal, and calling sbrk.  It never sends
> anything.
> >>
> >> This is not how it should work, correct?
> >>
> >
> > No, and I don't see how this happens though, we only report success if
> > the publisher side said that DROP_REPLICATION_SLOT succeeded. So far I
> > don't see anything in source that would explain this. I will need to
> > reproduce it first to see what's happening (wasn't able to do that yet,
> > but it might just need more time since you say it does not always happen).
> >
>
> Hm I wonder are there any workers left on subscriber when this happens?
>

Yes.  using ps, I get this:

postgres: bgworker: logical replication worker for subscription 16408 sync
16391
postgres: bgworker: logical replication worker for subscription 16408 sync
16388

They seem to be permanently blocked on a socket to read from the publisher.

On the publisher side, I think it is very slowly assembling a snapshot.  It
seems to be adding one xid at a time, and then re-sorting the entire list.
Over and over.
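The pattern described above (append one xid, then re-sort the whole list, over and over) pays a full O(n log n) sort per insertion; keeping the list sorted with a binary-search insert avoids the repeated sorts. A minimal illustration of the two approaches, not the actual snapbuild code:

```python
import bisect

# The observed pattern: every insertion re-sorts the entire list.
def add_xid_resort(xids, xid):
    xids.append(xid)
    xids.sort()          # O(n log n) on every single insertion

# Same sorted-list invariant with an O(n) in-place insert, no re-sort.
def add_xid_insort(xids, xid):
    bisect.insort(xids, xid)
```

Over thousands of in-progress xids the difference is what turns snapshot assembly from "very slow" into cheap bookkeeping.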

Cheers,

Jeff


Re: [HACKERS] MSVC odd TAP test problem

2017-05-09 Thread Andrew Dunstan


On 05/06/2017 08:54 PM, Andrew Dunstan wrote:
>
> On 05/06/2017 07:41 PM, Craig Ringer wrote:
>>
>> On 7 May 2017 4:24 am, "Andrew Dunstan"
>> > > wrote:
>>
>>
>> I have been working on enabling the remaining TAP tests on MSVC
>> build in
>> the buildfarm client, but I have come across an odd problem. The bin
>> tests all run fine, but the recover tests crash and in such a way
>> as to
>> crash the buildfarm client itself and require some manual cleanup.
>> This
>> happens at some stage after the tests have run (the final "ok" is
>> output) but before the END handler in PostgresNode.pm (I put some
>> traces
>> in there to see if I could narrow down where there were problems).
>>
>> The symptom is that this appears at the end of the output when the
>> client calls "vcregress.pl  taptest
>> src/test/recover":
>>
>> Terminating on signal SIGBREAK(21)
>> Terminating on signal SIGBREAK(21)
>> Terminate batch job (Y/N)?
>>
>> And at that point there is nothing at all apparently running,
>> according
>> to Sysinternals Process Explorer, including the buildfarm client.
>>
>> It's 100% repeatable on bowerbird, and I'm a bit puzzled about how to
>> fix it.
>>
>>
>> Anyone have any clues?
>>
>>
>> That looks like we've upset CMD.exe its self. I'm not sure how ...
>> leaking a signal to the parent proc?
>>
>> I suspect this could be something to do with console process groups.
>>
>> Bowerbird is win8 . So this isn't going to be related to the support
>> for ANSI escapes added in win10.
>>
>> A serach for the error turns up a complaint about IPC::Run as the
>> first hit. Probably not coincidence.
>>
>>
>> http://stackoverflow.com/q/40924750
>>
>> See this bug
>>
>> https://rt.cpan.org/Public/Bug/Display.html?id=101093
>>
>>
>>
>
>
> Actually, it's Win10, looks like I forgot to update the personality, my bad.
>
> I had a feeling it was probably something to do with timeout. That RT
> ticket looks like it's on the money.
>



(After extensive trial and error) Turns out it's not quite that, it's
the kill_kill stuff. I think for now we should just disable it on the
platform. That means not running tests 7 and 8 of the logical_decoding
tests and all of the crash_recovery test. Test::More has nice
facilities for skipping tests cleanly. See attached patch.

cheers

andrew



-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

>From f3ffdad568e9fbce6b8cc3c6ffc4490842b1b5fb Mon Sep 17 00:00:00 2001
From: Andrew Dunstan 
Date: Tue, 9 May 2017 13:03:41 -0400
Subject: [PATCH] Avoid tests which crash the calling process on Windows

Certain recovery tests use the Perl IPC::Run module's start/kill_kill
method of processing. On at least some versions of perl this causes the
whole process and its caller to crash. If we ever find a better way of
doing these tests they can be re-enabled.
---
 src/test/recovery/t/006_logical_decoding.pl | 22 +++---
 src/test/recovery/t/011_crash_recovery.pl   | 12 +++-
 2 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl
index bf9b50a..095cfa8 100644
--- a/src/test/recovery/t/006_logical_decoding.pl
+++ b/src/test/recovery/t/006_logical_decoding.pl
@@ -8,6 +8,7 @@ use warnings;
 use PostgresNode;
 use TestLib;
 use Test::More tests => 16;
+use Config;
 
 # Initialize master node
 my $node_master = get_new_node('master');
@@ -72,13 +73,20 @@ is($node_master->psql('otherdb', "SELECT location FROM pg_logical_slot_peek_chan
 $node_master->safe_psql('otherdb', qq[SELECT pg_create_logical_replication_slot('otherdb_slot', 'test_decoding');]);
 
 # make sure you can't drop a slot while active
-my $pg_recvlogical = IPC::Run::start(['pg_recvlogical', '-d', $node_master->connstr('otherdb'), '-S', 'otherdb_slot', '-f', '-', '--start']);
-$node_master->poll_query_until('otherdb', "SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NOT NULL)");
-is($node_master->psql('postgres', 'DROP DATABASE otherdb'), 3,
-	'dropping a DB with inactive logical slots fails');
-$pg_recvlogical->kill_kill;
-is($node_master->slot('otherdb_slot')->{'slot_name'}, undef,
-	'logical slot still exists');
+#
+SKIP:
+{
+	# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.
+	skip "Test fails on Windows perl", 2 if $Config{osname} eq 'MSWin32';
+
+	my $pg_recvlogical = IPC::Run::start(['pg_recvlogical', '-d', $node_master->connstr('otherdb'), '-S', 'otherdb_slot', '-f', '-', '--start']);
+	$node_master->poll_query_until('otherdb', "SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NOT NULL)");
+	

Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-09 Thread Robert Haas
On Tue, May 9, 2017 at 9:26 AM, Rahila Syed  wrote:
>>Hi Rahila,
>
>>I am not able add a new partition if default partition is further
>> partitioned
>>with default partition.
>
>>Consider example below:
>
>>postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
>>CREATE TABLE
>>postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6, 7,
>> 8);
>>CREATE TABLE
>>postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY
>> LIST(b);
>>CREATE TABLE
>>postgres=# CREATE TABLE test_pd_pd PARTITION OF test_pd DEFAULT;
>>CREATE TABLE
>>postgres=# INSERT INTO test VALUES (20, 24, 12);
>>INSERT 0 1
>>postgres=# CREATE TABLE test_p2 PARTITION OF test FOR VALUES IN(15);
> ERROR:  could not open file "base/12335/16420": No such file or directory
>
> Regarding fix for this I think we need to prohibit this case. That is
> prohibit creation
> of new partition after a default partition which is further partitioned.
> Currently before adding a new partition after default partition all the rows
> of default
> partition are scanned and if a row which matches the new partitions
> constraint exists
> the new partition is not added.
>
> If we allow this for default partition which is partitioned further, we will
> have to scan
> all the partitions of default partition for matching rows which can slow
> down execution.

I think this case should be allowed and I don't think it should
require scanning all the partitions of the default partition.  This is
no different than any other case where multiple levels of partitioning
are used.  First, you route the tuple at the root level; then, you
route it at the next level; and so on.  It shouldn't matter whether
the routing at the top level is to that level's default partition or
not.
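The level-by-level routing described here can be sketched as follows. This is a purely illustrative Python sketch, not PostgreSQL's actual tuple-routing code, and all names in it are made up: at each level the tuple either matches an explicit partition's value list or falls through to that level's default partition, and the same rule recurses.

```python
def route(tuple_, node):
    """Return the leaf partition name for tuple_, routing one level at a time."""
    if node.get("leaf"):
        return node["name"]
    key = tuple_[node["key"]]
    # First, try the explicit partitions at this level.
    for part in node["parts"]:
        if part.get("values") and key in part["values"]:
            return route(tuple_, part)
    # No explicit partition accepted the key: fall through to DEFAULT.
    for part in node["parts"]:
        if part.get("default"):
            return route(tuple_, part)
    raise LookupError("no partition found for key %r" % (key,))

# Mirrors the example from the thread: test -> test_pd (DEFAULT, itself
# partitioned by b) -> test_pd_pd (DEFAULT).
tree = {
    "key": "a",
    "parts": [
        {"name": "test_p1", "values": {4, 5, 6, 7, 8}, "leaf": True},
        {"name": "test_pd", "default": True, "key": "b", "parts": [
            {"name": "test_pd_pd", "default": True, "leaf": True},
        ]},
    ],
}

print(route({"a": 5, "b": 1}, tree))     # test_p1
print(route({"a": 20, "b": 24}, tree))   # test_pd_pd
```

Routing into a default partition is just another branch of the same recursion, which is why no extra scan of the default partition's children is needed at insert time.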

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread Tom Lane
David Fetter  writes:
> On Fri, May 05, 2017 at 02:20:26PM -0400, Robert Haas wrote:
>> On Thu, May 4, 2017 at 10:59 AM, Chapman Flack  wrote:
>>> invalid input syntax for integer: "21' && 1=2)) Uni/**/ON
>>> SEl/**/eCT 0x646665743166657274,0x646665743266657274,
>>> 0x646665743366657274 -- "

>> Now that is choice.  I wonder what specific database system that's
>> targeting...

> It could well be targeting some class of pipeline to the database,
> too, for example one that removes comments and/or un-escapes.

Yeah.  It's a bit hard to see a database's parser treating "Uni/**/ON"
as UNION, but if some stack someplace had a keyword check ahead of
a comment-stripping step, maybe that could do something useful.

> It occurs to me that psql's habit of stripping out everything on a
> line that follows a double dash  might be vulnerable in this way, but
> I wouldn't see such vulnerabilities as super easy to exploit, as psql
> isn't usually exposed directly to input from the internet.

I don't think that's a problem: while psql will remove "--" and everything
following it until newline, it won't remove the newline.  So there's still
a token boundary there.  If we tried to strip /*...*/ comments we'd have
to be more careful.
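The difference can be sketched in a few illustrative lines (Python, not psql's actual lexer): deleting a /*...*/ comment outright glues the neighboring tokens together, while dropping "--" to end of line but keeping the newline always leaves a token boundary.

```python
import re

def strip_block_comments_naive(sql):
    # Unsafe: removing the comment entirely glues adjacent tokens together.
    return re.sub(r"/\*.*?\*/", "", sql, flags=re.S)

def strip_block_comments_safe(sql):
    # Safe: replacing the comment with a space keeps the token boundary.
    return re.sub(r"/\*.*?\*/", " ", sql, flags=re.S)

def strip_dash_comments(sql):
    # psql-style: drop "--" to end of line but keep the newline itself.
    return re.sub(r"--[^\n]*", "", sql)

print(strip_block_comments_naive("Uni/**/ON"))   # UniON  - a keyword again
print(strip_block_comments_safe("Uni/**/ON"))    # Uni ON - still two tokens
print(strip_dash_comments("SELECT 1 -- note\nFROM t"))
```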

As far as the actual thread topic goes, I tend to agree with Robert's
doubt that there's enough utility or consensus for this.

regards, tom lane




Re: [HACKERS] Should pg_current_wal_location() become pg_current_wal_lsn()

2017-05-09 Thread David Steele

On 5/9/17 10:00 AM, Stephen Frost wrote:

* Tom Lane (t...@sss.pgh.pa.us) wrote:


#2: Rename all these functions and columns to "lsn", as in this patch.


+1


<...>


#2 strikes me as the best option, though that's probably not much of a
surprise to anyone who's been following my comments on this thread.


+1.  If anything this analysis makes me more convinced it is the right
thing to do.


If I read this correctly, we won't change the names of any functions 
that we haven't *already* changed the names of, and only one view would 
be changed to bring it into line with the rest.


--
-David
da...@pgmasters.net




Re: [HACKERS] logical replication deranged sender

2017-05-09 Thread Petr Jelinek
On 08/05/17 13:47, Petr Jelinek wrote:
> On 08/05/17 01:17, Jeff Janes wrote:
>> After dropping a subscription, it says it succeeded and that it dropped
>> the slot on the publisher.
>>
>> But the publisher still has the slot, and a full-tilt process described
>> by ps as 
>>
>> postgres: wal sender process jjanes [local] idle in transaction
>>
>> Strace shows that this process is doing nothing but opening, reading,
>> lseek, and closing from pg_wal, and calling sbrk.  It never sends anything.
>>
>> This is not how it should work, correct?
>>
> 
> No, and I don't see how this happens though, we only report success if
> the publisher side said that DROP_REPLICATION_SLOT succeeded. So far I
> don't see anything in source that would explain this. I will need to
> reproduce it first to see what's happening (wasn't able to do that yet,
> but it might just need more time since you say it does not always happen).
> 

Hmm, I wonder: are there any workers left on the subscriber when this happens?

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] proposal psql \gdesc

2017-05-09 Thread Fabien COELHO


Hello Pavel,


Patch applies cleanly and compiles.


Idem for v2. "make check" ok. Tests look good.


I would suggest some rewording, maybe:

"Show the description of the result of the current query buffer without
actually executing it, by considering it a prepared statement."


done


OK. If some native English speaker can clarify the sentence further, or
improve it in any way, thanks in advance!



  SELECT $1 AS unknown_type \gdesc


It is not an unknown type - the default placeholder type is text


Indeed. I really meant something like:

  calvin=# SELECT $1 + $2 \gdesc
  ERROR:  operator is not unique: unknown + unknown
  ...

More comments:

I propose that the help message could be "describe result of query without 
executing it".


I found an issue. \gdesc fails when the command does not return a result:

 calvin=# TRUNCATE pgbench_history \gdesc
 ERROR:  syntax error at or near ")"
 LINE 2:  (VALUES ) s (name, tp, tpm)

I guess the issue is that PQdescribePrepared returns an empty description, 
which is fine, but then the second query should be skipped, and some 
message should be output instead, like "no result" or whatever...
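The suggested fix can be sketched as follows (illustrative Python, not psql source): only build the second, type-listing query when the Describe response reports at least one output column, and otherwise print a message.

```python
def describe_result(columns):
    """columns: list of (name, type_name) pairs from a Describe response."""
    if not columns:
        # PQdescribePrepared returned an empty description: no result set,
        # so skip the formatting query entirely and report that instead.
        return "The command has no result."
    return "\n".join("%s | %s" % (name, typ) for name, typ in columns)

print(describe_result([("one", "int"), ("two", "text")]))
print(describe_result([]))   # e.g. TRUNCATE: report instead of failing
```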


This needs fixing, and a corresponding test should be added.

Also, I would suggest adding a \g after the first test, which would execute
the current buffer after its description, to show that the current buffer 
does indeed hold the query:


 calvin=# SELECT 1 as one, ... \gdesc \g
 -- one | int
 -- ...
 -- 1 | ...

--
Fabien.




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-09 Thread Petr Jelinek
On 09/05/17 16:28, Peter Eisentraut wrote:
> On 5/9/17 04:39, Petr Jelinek wrote:
 What we want to simulate instead is an "auto" dependency of the slot on
 the subscription.  So you can drop the slot separately (subject to other
 restrictions), and it is dropped automatically when the subscription is
 dropped.  To avoid that, you can disassociate the slot from the
 subscription, which you have implemented.

 I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
 associated with the subscription, it should be there when we drop the
 subscription.  Otherwise, the user has to disassociate the slot and take
 care of it manually.  So just keep the "cascade" behavior.

 Similarly, I wouldn't check first whether the slot exists.  If the
 subscription is associated with the slot, it should be there.
>>>
>>> Here is your patch amended for that.
>>
>> I am fine with this mechanism as well.
> 
> Committed.
> 
> I also wrote a bit of documentation about slot handling for
> subscriptions, covering some of what was discussed in this thread.
> 

Great, thanks.

Here is rebased version of the other patch (the interface rework). I
also fixed the tab completion there.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


Rework-the-options-for-logical-replication-v2.patch
Description: binary/octet-stream



[HACKERS] export import bytea from psql

2017-05-09 Thread Pavel Stehule
Hi

This is a topic that I have been trying to solve for some time. I would
like to open a discussion before the summer commitfest.

Working with binary data, long (multiline) text fields, or long XML that
is not necessarily UTF8-encoded is not well supported by the psql console.
It is not hard to use the binary protocol and write a custom application,
but that needs some time and some experience. Built-in support in psql
could help a lot of users.

Export
=
Last time, the most contentious point was the mapping of bytea fields to
files. Does it make sense to push more than one bytea field into a single
file? It is an implementation detail - just raise (or don't raise) an
exception when the count > 1. A 1:1 mapping is a little cleaner and
simpler from the user's perspective, but some use cases then become
impossible and the files would have to be joined at the operating system
level. Can we reach an agreement?

Design
--

SELECT xxx
\gstore file -- use text protocol

SELECT xxx
\gbstore file -- use binary protocol
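Whichever syntax is chosen, an export over the text protocol has to undo the bytea text encoding before writing the file. A minimal sketch of that decoding step (Python, illustrative only; assumes the server's default hex output format, bytea_output = 'hex'):

```python
import binascii

def bytea_text_to_bytes(value):
    """Decode PostgreSQL's hex-format bytea text output, e.g. '\\x89504e47'."""
    if not value.startswith("\\x"):
        raise ValueError("expected hex-format bytea (bytea_output = 'hex')")
    return binascii.unhexlify(value[2:])

# First four bytes of a PNG file, as the text protocol would return them:
print(bytea_text_to_bytes("\\x89504e47"))   # b'\x89PNG'
```

A \gbstore-style command using the binary protocol would receive the raw bytes directly and skip this step.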

Theoretically we could support multiple target files in one command, but
I think that would be over-engineering. If somebody needs it, they can
write a small app in Perl, Python, ...

Import
=

There are more possible ways

1. Using psql variables - we just need to write the commands \setfrom and
\setfrombf - this should be a very simple implementation. The value can be
used multiple times. On the other hand, the loaded variable can hold a lot
of memory (and that can be invisible to the user). Another disadvantage:
when an SQL command using this value fails, the full SQL command can be
written to the log (there is a high risk of log bloat).

2. Using psql variables with query parameters - the first part is the same
as #1, but this needs support for parameterized queries. This is a
partially optimized version of the first variant - the risk of log bloat
is reduced.

3. Using a special gexec-like command where query parameters can be
replaced by the specified content. Something like

insert into foo values($1, $2)
\gusefiles xml:~/xx.xml bytea:~/avatar.jpg

This is a one-step command, so the work can be faster, and there is zero
risk of content forgotten in a variable. On the other hand, the command is
more complex.

Binary import with the binary protocol is risky if the content doesn't
respect the protocol. I don't think we need to support unknown (arbitrary)
formats and custom data types. Only known formats need to be supported:
text, json, xml (text or binary), bytea (text or binary). Using the binary
protocol for the XML format enforces automatic conversion.

Opinions? Notes?

Regards

Pavel


Re: [HACKERS] SUBSCRIPTIONS and pg_upgrade

2017-05-09 Thread Peter Eisentraut
On 5/9/17 04:54, Petr Jelinek wrote:
>> I think that it would be nice to fix that even before beta, so
>> attached is a patch to add --no-subscriptions to pg_dump, pg_dumpall
>> and pg_restore.
> 
> Looks okay to me, it's simple enough patch.

Committed, thanks.

(There was some inconsistent variable naming no_subscription vs
no_subscriptions, which I sorted out.)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] idea: custom log_line_prefix components besides application_name

2017-05-09 Thread David Fetter
On Fri, May 05, 2017 at 02:20:26PM -0400, Robert Haas wrote:
> On Thu, May 4, 2017 at 10:59 AM, Chapman Flack  wrote:
> > invalid input syntax for integer: "21' && 1=2)) Uni/**/ON
> > SEl/**/eCT 0x646665743166657274,0x646665743266657274,
> > 0x646665743366657274 -- "
> 
> Now that is choice.  I wonder what specific database system that's
> targeting...

It could well be targeting some class of pipeline to the database,
too, for example one that removes comments and/or un-escapes.

It occurs to me that psql's habit of stripping out everything on a
line that follows a double dash  might be vulnerable in this way, but
I wouldn't see such vulnerabilities as super easy to exploit, as psql
isn't usually exposed directly to input from the internet.

> > I just wonder if anybody thinks web apps, and therefore this
> > scenario, are common enough these days to maybe justify one or two
> > more GUCs with their own log_line_prefix escapes, such as
> > app_client_addr or app_user. Naturally they would only be as
> > reliable as the app setting them, and uninterpreted by PostgreSQL
> > itself, and so functionally no different from the uninterpreted
> > string already available as application_name.  The benefit is
> > perhaps to be clearer than just overloading application_name to
> > carry two or three pieces of information (and perhaps privacy, if
> > you care about app user identities and source IPs showing up in ps
> > titles).
> >
> > Worth considering, or is application_name Good Enough?
> 
> I mean, if there were a list of things that needed to propagated
> that was (1) lengthy and (2) universally agreed, then we'd probably
> want more than one field.  But your list is pretty short, so I guess
> I don't see why you can't just join them together with a punctuation
> mark of your choice and call it good.
> 
> I might be missing something, though.

That there isn't universal agreement probably points to wanting an
ability to place arbitrary fields in the logs, not just a
log_line_prefix.  This would be made a good bit simpler by structuring
logs, by default, in some serialization a little easier to reason
about (and among other things, parse correctly) than CSV.
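As a sketch of that idea (illustrative Python only, nothing to do with PostgreSQL's actual logging code): a serialized record with arbitrary fields round-trips through a parser trivially, unlike a fixed prefix plus CSV.

```python
import json

def log_line(message, **fields):
    """Emit one structured log record; arbitrary app-supplied fields."""
    record = {"message": message}
    record.update(fields)            # e.g. app_user, app_client_addr, ...
    return json.dumps(record, sort_keys=True)

line = log_line("connection authorized",
                user="jjanes", app_user="web-42", app_client_addr="10.0.0.5")
print(line)
print(json.loads(line)["app_user"])  # parsing back is unambiguous
```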

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-09 Thread Peter Eisentraut
On 5/9/17 03:27, Masahiko Sawada wrote:
> I think we should change tab-completion support for that as well.
> 
> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
> index 183fc37..046fdd5 100644
> --- a/src/bin/psql/tab-complete.c
> +++ b/src/bin/psql/tab-complete.c
> @@ -2684,7 +2684,7 @@ psql_completion(const char *text, int start, int end)
> 
> /* DROP SUBSCRIPTION */
> else if (Matches3("DROP", "SUBSCRIPTION", MatchAny))
> -   COMPLETE_WITH_LIST2("DROP SLOT", "NODROP SLOT");
> +   COMPLETE_WITH_LIST2("CASCADE", "RESTRICT");
> 
>  /* EXECUTE */
> else if (Matches1("EXECUTE"))

Thanks for pointing that out.  I just moved it under the generic DROP
support section.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-09 Thread Peter Eisentraut
On 5/9/17 04:39, Petr Jelinek wrote:
>>> What we want to simulate instead is an "auto" dependency of the slot on
>>> the subscription.  So you can drop the slot separately (subject to other
>>> restrictions), and it is dropped automatically when the subscription is
>>> dropped.  To avoid that, you can disassociate the slot from the
>>> subscription, which you have implemented.
>>>
>>> I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
>>> associated with the subscription, it should be there when we drop the
>>> subscription.  Otherwise, the user has to disassociate the slot and take
>>> care of it manually.  So just keep the "cascade" behavior.
>>>
>>> Similarly, I wouldn't check first whether the slot exists.  If the
>>> subscription is associated with the slot, it should be there.
>>
>> Here is your patch amended for that.
> 
> I am fine with this mechanism as well.

Committed.

I also wrote a bit of documentation about slot handling for
subscriptions, covering some of what was discussed in this thread.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Should pg_current_wal_location() become pg_current_wal_lsn()

2017-05-09 Thread Stephen Frost
Tom, all,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Peter Eisentraut  writes:
> > On 5/1/17 08:10, David Rowley wrote:
> >> OK, so I've created a draft patch which does this.
> 
> > After reading this patch, I see that
> > a) The scope of the compatibility break is expanded significantly beyond
> > what was already affected by the xlog->wal renaming.

Changing one additional view doesn't strike me as a significantly
expanded set.

> > b) Generally, things read less nicely and look more complicated.

I disagree.

> > So I still think we'd be better off leaving things the way they are.
> 
> What I find in a bit of looking around is that
> 
> 1. There are no functions named *lsn* except the support functions
> for the pg_lsn type.

Even so, that doesn't strike me as having been particularly intentional
or because it was "better" to use something else, especially given the
column naming.

> 2. There are half a dozen or so functions named *location* (the
> ones hit in this patch).  All of them existed in 9.6, but with
> names involving "xlog"; they're already renamed to involve "wal".

Right, so changing location -> lsn for those doesn't expand the
compatibility issue really.

> 3. We have these system columns named *lsn*:
> 
> regression=# select attrelid::regclass, attname, atttypid::regtype from 
> pg_attribute where attname like '%lsn%' and attrelid<16384;
>attrelid   |   attname   | atttypid 
> --+-+--
>  pg_stat_wal_receiver | receive_start_lsn   | pg_lsn
>  pg_stat_wal_receiver | received_lsn| pg_lsn
>  pg_stat_wal_receiver | latest_end_lsn  | pg_lsn
>  pg_replication_slots | restart_lsn | pg_lsn
>  pg_replication_slots | confirmed_flush_lsn | pg_lsn
>  pg_replication_origin_status | remote_lsn  | pg_lsn
>  pg_replication_origin_status | local_lsn   | pg_lsn
>  pg_subscription_rel  | srsublsn| pg_lsn
>  pg_stat_subscription | received_lsn| pg_lsn
>  pg_stat_subscription | latest_end_lsn  | pg_lsn
> (10 rows)
> 
> The first seven of those existed in 9.6.
> 
> 4. We have these system columns named *location*:
> 
> regression=# select attrelid::regclass, attname, atttypid::regtype from 
> pg_attribute where attname like '%location%' and attrelid<16384;
>   attrelid   | attname | atttypid 
> -+-+--
>  pg_stat_replication | sent_location   | pg_lsn
>  pg_stat_replication | write_location  | pg_lsn
>  pg_stat_replication | flush_location  | pg_lsn
>  pg_stat_replication | replay_location | pg_lsn
> (4 rows)
> 
> All of those existed in 9.6.  The patch proposes to rename them to *lsn.

Right, four column names in one view- the one view which is different
from all of the other views that exist.

> So it seems like we do not have such a big problem with function names:
> the relevant functions all use "location" in their names, and we could
> just keep doing that.  The problem that we've got is inconsistency in
> table/view column names.  However, there is no way to fix it without
> adding a new dollop of compatibility break on top of what we've done
> already.
> 
> For completeness, I think the available alternatives are:
> 
> #1: Leave these names as they stand.

-1

> #2: Rename all these functions and columns to "lsn", as in this patch.

+1

> #3: Rename the functions but not the columns.

-1

> #4: Leave the function names alone, standardize on "lsn" in column names
> (hence, touching pg_stat_replication only).

-1

> #5: Standardize on "location" not "lsn" (hence, leaving the function
> names alone, and touching pg_stat_wal_receiver, pg_replication_slots,
> and pg_replication_origin_status, as well as the new-in-v10-anyway
> pg_subscription_rel and pg_stat_subscription).

Ugh, that seems even worse, and expands the compatibility breakage, -5.

> #3 has the advantage of not breaking anything we didn't break already,
> but it isn't accomplishing very much in terms of improving consistency.

Agreed.

> With #4, we have a rule that function names use "location" while column
> names use "lsn", which is a bit odd but not terrible.

Given that we're changing the function names already, I really don't see
why this makes any sense.  Perhaps it's just me, but 'location' is much
more of a vague term than 'lsn' and if we're talking about an 'lsn' then
that's what we should be using.

> I think #5 would best serve Peter's point about readability, because
> I agree that "lsn" isn't doing us any favors in that direction.

I disagree that 'lsn' is worse than 'location'- at least for my part,
it's much more precise and clear about what we're talking about.
"location" could be where a file exists on the disk, where in the world
we are, where we are in the heap, where we are when stepping through an
index, etc.  With the work on 

Re: [HACKERS] no test coverage for ALTER FOREIGN DATA WRAPPER name HANDLER ...

2017-05-09 Thread Mark Dilger

> On May 9, 2017, at 12:18 AM, Amit Langote  
> wrote:
> 
> Hi,
> 
> On 2017/05/05 9:38, Mark Dilger wrote:
>> Hackers,
>> 
>> just FYI, I cannot find any regression test coverage of this part of the 
>> grammar, not
>> even in the contrib/ directory or TAP tests.  I was going to submit a patch 
>> to help out,
>> and discovered it is not so easy to do, and perhaps that is why the coverage 
>> is missing.
> 
> I think we could add the coverage by defining a dummy C FDW handler
> function in regress.c.  I see that some other regression tests use C
> functions defined there, such as test_atomic_ops().
> 
> What do you think about the attached patch?  I am assuming you only meant
> to add tests for the code in foreigncmds.c (CREATE/ALTER FOREIGN DATA
> WRAPPER).

Thank you for creating the patch.  I only see one small oversight, which is the
successful case of ALTER FOREIGN DATA WRAPPER ... HANDLER ... is still
not tested.  I added one line for that in the attached modification of your 
patch.

Mark Dilger



0002-Add-some-FDW-HANDLER-DDL-tests.patch
Description: Binary data



Re: [HACKERS] Should pg_current_wal_location() become pg_current_wal_lsn()

2017-05-09 Thread Tom Lane
Peter Eisentraut  writes:
> On 5/1/17 08:10, David Rowley wrote:
>> OK, so I've created a draft patch which does this.

> After reading this patch, I see that
> a) The scope of the compatibility break is expanded significantly beyond
> what was already affected by the xlog->wal renaming.
> b) Generally, things read less nicely and look more complicated.

> So I still think we'd be better off leaving things the way they are.

What I find in a bit of looking around is that

1. There are no functions named *lsn* except the support functions
for the pg_lsn type.

2. There are half a dozen or so functions named *location* (the
ones hit in this patch).  All of them existed in 9.6, but with
names involving "xlog"; they're already renamed to involve "wal".

3. We have these system columns named *lsn*:

regression=# select attrelid::regclass, attname, atttypid::regtype from 
pg_attribute where attname like '%lsn%' and attrelid<16384;
   attrelid   |   attname   | atttypid 
--+-+--
 pg_stat_wal_receiver | receive_start_lsn   | pg_lsn
 pg_stat_wal_receiver | received_lsn| pg_lsn
 pg_stat_wal_receiver | latest_end_lsn  | pg_lsn
 pg_replication_slots | restart_lsn | pg_lsn
 pg_replication_slots | confirmed_flush_lsn | pg_lsn
 pg_replication_origin_status | remote_lsn  | pg_lsn
 pg_replication_origin_status | local_lsn   | pg_lsn
 pg_subscription_rel  | srsublsn| pg_lsn
 pg_stat_subscription | received_lsn| pg_lsn
 pg_stat_subscription | latest_end_lsn  | pg_lsn
(10 rows)

The first seven of those existed in 9.6.

4. We have these system columns named *location*:

regression=# select attrelid::regclass, attname, atttypid::regtype from 
pg_attribute where attname like '%location%' and attrelid<16384;
  attrelid   | attname | atttypid 
-+-+--
 pg_stat_replication | sent_location   | pg_lsn
 pg_stat_replication | write_location  | pg_lsn
 pg_stat_replication | flush_location  | pg_lsn
 pg_stat_replication | replay_location | pg_lsn
(4 rows)

All of those existed in 9.6.  The patch proposes to rename them to *lsn.


So it seems like we do not have such a big problem with function names:
the relevant functions all use "location" in their names, and we could
just keep doing that.  The problem that we've got is inconsistency in
table/view column names.  However, there is no way to fix it without
adding a new dollop of compatibility break on top of what we've done
already.

For completeness, I think the available alternatives are:

#1: Leave these names as they stand.

#2: Rename all these functions and columns to "lsn", as in this patch.

#3: Rename the functions but not the columns.

#4: Leave the function names alone, standardize on "lsn" in column names
(hence, touching pg_stat_replication only).

#5: Standardize on "location" not "lsn" (hence, leaving the function
names alone, and touching pg_stat_wal_receiver, pg_replication_slots,
and pg_replication_origin_status, as well as the new-in-v10-anyway
pg_subscription_rel and pg_stat_subscription).

#3 has the advantage of not breaking anything we didn't break already,
but it isn't accomplishing very much in terms of improving consistency.
With #4, we have a rule that function names use "location" while column
names use "lsn", which is a bit odd but not terrible.
I think #5 would best serve Peter's point about readability, because
I agree that "lsn" isn't doing us any favors in that direction.

So this boils down to whether we are willing to touch any of these
column names in order to improve consistency.  I think it might be
worth doing, but there's no doubt that we're adding more compatibility
pain if we do anything but #1 or #3.

regards, tom lane

PS: There are some other changes in David's patch, such as
s/position/location/ in some text, that I think we should do in any
case.  But the first decision has to be about the view column names.




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-09 Thread Rahila Syed
>Hi Rahila,

>I am not able add a new partition if default partition is further
partitioned
>with default partition.

>Consider example below:

>postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
>CREATE TABLE
>postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6,
7, 8);
>CREATE TABLE
>postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY
LIST(b);
>CREATE TABLE
>postgres=# CREATE TABLE test_pd_pd PARTITION OF test_pd DEFAULT;
>CREATE TABLE
>postgres=# INSERT INTO test VALUES (20, 24, 12);
>INSERT 0 1
>postgres=# CREATE TABLE test_p2 PARTITION OF test FOR VALUES IN(15);
>ERROR:  could not open file "base/12335/16420": No such file or directory

Regarding a fix for this, I think we need to prohibit this case. That is,
prohibit creation of a new partition after a default partition which is
further partitioned. Currently, before adding a new partition after the
default partition, all the rows of the default partition are scanned, and
if a row which matches the new partition's constraint exists, the new
partition is not added.

If we allow this for a default partition which is partitioned further, we
will have to scan all the partitions of the default partition for matching
rows, which can slow down execution.

So as not to hamper performance, an error should be thrown in this case,
and the user should be expected to change the schema to avoid partitioning
default partitions.

Kindly give your opinions.
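The existing pre-attach check described above can be sketched like this (illustrative Python, not PostgreSQL source; names are made up):

```python
# Before attaching a new LIST partition, scan the default partition's rows;
# if any row would now belong to the new partition, refuse the attach.

def can_add_partition(default_rows, key, new_values):
    """default_rows: rows as dicts; new_values: the new partition's IN-list."""
    for row in default_rows:
        if row[key] in new_values:
            return False, row        # this row violates the new constraint
    return True, None

# The default partition holds (20, 24, 12), as in the example above.
default_partition = [{"a": 20, "b": 24, "c": 12}]

print(can_add_partition(default_partition, "a", {15}))   # (True, None)
print(can_add_partition(default_partition, "a", {20}))   # refused: row matches
```

The question in this sub-thread is whether that scan should recurse into the default partition's own partitions, or whether the case should simply be rejected.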



On Fri, May 5, 2017 at 12:46 PM, Jeevan Ladhe  wrote:

> Hi Rahila,
>
> I am not able add a new partition if default partition is further
> partitioned
> with default partition.
>
> Consider example below:
>
> postgres=# CREATE TABLE test ( a int, b int, c int) PARTITION BY LIST (a);
> CREATE TABLE
> postgres=# CREATE TABLE test_p1 PARTITION OF test FOR VALUES IN(4, 5, 6,
> 7, 8);
> CREATE TABLE
> postgres=# CREATE TABLE test_pd PARTITION OF test DEFAULT PARTITION BY
> LIST(b);
> CREATE TABLE
> postgres=# CREATE TABLE test_pd_pd PARTITION OF test_pd DEFAULT;
> CREATE TABLE
> postgres=# INSERT INTO test VALUES (20, 24, 12);
> INSERT 0 1
> postgres=# CREATE TABLE test_p2 PARTITION OF test FOR VALUES IN(15);
> ERROR:  could not open file "base/12335/16420": No such file or directory
>
>
> Thanks,
> Jeevan Ladhe
>
> On Fri, May 5, 2017 at 11:55 AM, Rajkumar Raghuwanshi <
> rajkumar.raghuwan...@enterprisedb.com> wrote:
>
>> Hi Rahila,
>>
>> pg_restore is failing for default partition, dump file still storing old
>> syntax of default partition.
>>
>> create table lpd (a int, b int, c varchar) partition by list(a);
>> create table lpd_d partition of lpd DEFAULT;
>>
>> create database bkp owner 'edb';
>> grant all on DATABASE bkp to edb;
>>
>> --take plain dump of existing database
>> \! ./pg_dump -f lpd_test.sql -Fp -d postgres
>>
>> --restore plain backup to new database bkp
>> \! ./psql -f lpd_test.sql -d bkp
>>
>> psql:lpd_test.sql:63: ERROR:  syntax error at or near "DEFAULT"
>> LINE 2: FOR VALUES IN (DEFAULT);
>>^
>>
>>
>> vi lpd_test.sql
>>
>> --
>> -- Name: lpd; Type: TABLE; Schema: public; Owner: edb
>> --
>>
>> CREATE TABLE lpd (
>> a integer,
>> b integer,
>> c character varying
>> )
>> PARTITION BY LIST (a);
>>
>>
>> ALTER TABLE lpd OWNER TO edb;
>>
>> --
>> -- Name: lpd_d; Type: TABLE; Schema: public; Owner: edb
>> --
>>
>> CREATE TABLE lpd_d PARTITION OF lpd
>> FOR VALUES IN (DEFAULT);
>>
>>
>> ALTER TABLE lpd_d OWNER TO edb;
>>
>>
>> Thanks,
>> Rajkumar
>>
>
>


Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread amul sul
On Tue, May 9, 2017 at 3:51 PM, Ashutosh Bapat
 wrote:
> On Tue, May 9, 2017 at 2:59 PM, Amit Langote
>  wrote:
>> Hi Amul,
>>
>> On 2017/05/09 16:13, amul sul wrote:
>>> Hi,
>>>
>>> Current pg_dump --exclude-table option excludes partitioned relation
>>> and dumps all its child relations and vice versa for --table option, which
>>> I think is incorrect.
>>
>> I agree that `pg_dump -t partitioned_table` should dump all of its
>> partitions too and that `pg_dump -T partitioned_table` should exclude
>> partitions.  Your patch achieves that.  Some comments:
>>
>> I think we should update doc/src/sgml/ref/pg_dump.sgml to mention this
>> behavior.
>>
>> +static void
>> +find_partition_by_relid(Archive *fout, Oid parentId, SimpleOidList *oids)
>>
>> Is expand_partitioned_table() a slightly better name?
>>
>>
>> +   appendPQExpBuffer(query, "SELECT relkind "
>> + "FROM pg_catalog.pg_class "
>> + "WHERE oid = %u", partid);
>> +
>> +   res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
>> +
>> +   if (PQntuples(res) == 0)
>> +   exit_horribly(NULL, "no matching partition tables 
>> were ");
>> +
>> +   relkind = PQgetvalue(res, 0, 0);
>> +
>> +   if (relkind[0] == RELKIND_PARTITIONED_TABLE)
>> +   find_partition_by_relid(fout, partid, oids);
>>
>> Instead of recursing like this requiring to send one query for every
>> partition, why not issue just one query such that all the partitions in
>> the inheritance tree are returned.  For example, as follows:
>>
>> +appendPQExpBuffer(query, "WITH RECURSIVE partitions (partoid) AS ("
>> + "   SELECT inhrelid"
>> + "   FROM pg_inherits"
>> + "   WHERE inhparent = %u"
>> + " UNION ALL"
>> + "   SELECT inhrelid"
>> + "   FROM pg_inherits, partitions"
>> + "   WHERE inhparent = partitions.partoid )"
>> + " SELECT partoid FROM partitions",
>> + parentId);
>>
>> I included your patch with the above modifications in the attached 0001
>> patch, which also contains the documentation updates.  Please take a look.
>
> I think this patch too has the same problem I described in my reply to
> Amul; it fires a query to server for every partitioned table it
> encounters, that's not very efficient.
>
I agree with Ashutosh; if such information is already available then we
should leverage it.

>>
>> When testing the patch, I found that if --table specifies the parent and
>> --exclude specifies one of its partitions, the latter won't be forcefully
>> be included due to the partitioned table expansion, which seems like an
>> acceptable behavior.
>
> unless the partition is default partition or a hash partition.
>
>> On the other hand, if --table specifies a partition
>> and --exclude specifies the partition's parent, the latter makes partition
>> to be excluded as well as part of the expansion, which seems fine too.
>>
>
> I am not sure if it's worth considering partitions and partitioned
> table as different entities as far as dump is concerned. We should
> probably dump the whole partitioned table or none of it. There's merit
> in dumping just a single partition to transfer that data to some other
> server, but I am not sure how much the feature would be used.
>
But it won't be useless.

> In order to avoid any such potential anomalies, we should copy dump
> flag for a partition from that of the parent as I have described in my
> reply to Amul.
>
>> One more thing I am wondering about is whether `pg_dump -t partition`
>> should be dumped as a partition definition (that is, as CREATE TABLE
>> PARTITION OF) which is what happens currently or as a regular table (just
>> CREATE TABLE)?  I'm thinking the latter would be better, but would like to
>> confirm before writing any code for that.
>
> If we go about dumping a partition without its parent table, we should
> dump CREATE TABLE with proper list of columns and constraints without
> PARTITION OF clause. But I am not sure whether we should go that
> route.

IMO, it's worth dumping CREATE TABLE without the PARTITION OF clause.
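
As a sketch (the table and column names here are made up), the difference
between the current and the proposed output of `pg_dump -t part1` would be
something like:

```sql
-- What pg_dump emits today for a partition:
CREATE TABLE part1 PARTITION OF parent
    FOR VALUES IN (1, 2, 3);

-- What it could emit under the proposal: a standalone table with the
-- same columns; whether the partition constraint should be preserved as
-- a plain CHECK constraint (as shown) is an open question.
CREATE TABLE part1 (
    a integer,
    b integer,
    CHECK (a IN (1, 2, 3))
);
```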

Regards,
Amul


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] no test coverage for ALTER FOREIGN DATA WRAPPER name HANDLER ...

2017-05-09 Thread Ashutosh Bapat
On Tue, May 9, 2017 at 12:48 PM, Amit Langote
 wrote:
> Hi,
>
> On 2017/05/05 9:38, Mark Dilger wrote:
>> Hackers,
>>
>> just FYI, I cannot find any regression test coverage of this part of the 
>> grammar, not
>> even in the contrib/ directory or TAP tests.  I was going to submit a patch 
>> to help out,
>> and discovered it is not so easy to do, and perhaps that is why the coverage 
>> is missing.
>
> I think we could add the coverage by defining a dummy C FDW handler
> function in regress.c.  I see that some other regression tests use C
> functions defined there, such as test_atomic_ops().
>
> What do you think about the attached patch?  I am assuming you only meant
> to add tests for the code in foreigncmds.c (CREATE/ALTER FOREIGN DATA
> WRAPPER).

Looks OK to me. It at least tests whether a function with the right
return type has been provided, as well as the case where multiple handlers
are provided. Maybe add it to the next commitfest for more opinions.
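
As a sketch of what such coverage might exercise (the handler function
name assumes a dummy C function added to regress.c, per Amit's suggestion;
the wrapper name is made up):

```sql
-- Register a dummy handler function provided by the regression library:
CREATE FUNCTION test_fdw_handler()
    RETURNS fdw_handler
    AS 'regress', 'test_fdw_handler'
    LANGUAGE C;

-- Exercise the HANDLER clauses of the grammar:
CREATE FOREIGN DATA WRAPPER dummy_fdw;
ALTER FOREIGN DATA WRAPPER dummy_fdw HANDLER test_fdw_handler;
ALTER FOREIGN DATA WRAPPER dummy_fdw NO HANDLER;
DROP FOREIGN DATA WRAPPER dummy_fdw;
```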


-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-09 Thread Thomas Munro
On Mon, May 8, 2017 at 7:09 PM, Thomas Munro
 wrote:
> On Fri, May 5, 2017 at 8:29 AM, Thomas Munro
>  wrote:
>> On Fri, May 5, 2017 at 2:40 AM, Robert Haas  wrote:
>>> On Thu, May 4, 2017 at 4:46 AM, Thomas Munro
>>>  wrote:
 On Thu, May 4, 2017 at 4:02 AM, Alvaro Herrera  
 wrote:
> Robert Haas wrote:
>> I suspect that most users would find it more useful to capture all of
>> the rows that the statement actually touched, regardless of whether
>> they hit the named table or an inheritance child.
>
> Yes, agreed.  For the plain inheritance cases each row would need to
> have an indicator of which relation it comes from (tableoid); I'm not
> sure if such a thing would be useful in the partitioning case.

 On Thu, May 4, 2017 at 4:26 AM, David Fetter  wrote:
> +1 on the not-duct-tape view of partitioned tables.

 Hmm.  Ok.  Are we talking about PG10 or PG11 here?  Does this approach
 makes sense?
>>>
>>> I was thinking PG10 if it can be done straightforwardly.
>>
>> Ok, I will draft a patch to do it the way I described and see what people 
>> think.
>
> FYI I am still working on this and will post a draft patch to do this
> (that is: make transition tables capture changes from children with
> appropriate tuple conversion) in the next 24 hours.

Ok, here is a first swing at it, for discussion.

In master, the decision to populate transition tables happens in
AfterTriggerSaveEvent (called by the ExecAR* functions) in trigger.c.
It does that if it sees that there are triggers on the
relation-being-modified that have transition tables.

With this patch, nodeModifyTable.c makes a 'TriggerTransitionFilter'
object to override that behaviour if there are child tables of either
kind.  That is passed to the ExecAR* functions and thence to
AfterTriggerSaveEvent, overriding its usual behaviour, so that it knows
it needs to collect tuples from child nodes and how to convert
them to the layout needed for the tuplestores if necessary.

Thoughts?  I'm not yet sure about the locking situation.  Generally
needs some more testing.

-- 
Thomas Munro
http://www.enterprisedb.com


transition-tuples-from-child-tables.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] snapbuild woes

2017-05-09 Thread Erik Rijkers

On 2017-05-09 11:50, Petr Jelinek wrote:

On 09/05/17 10:59, Erik Rijkers wrote:

On 2017-05-09 10:50, Petr Jelinek wrote:

On 09/05/17 00:03, Erik Rijkers wrote:

On 2017-05-05 02:00, Andres Freund wrote:


Could you have a look?


[...]

I rebased the above mentioned patch to apply to the patches Andres sent,
if you could try to add it on top of what you have and check if it still
fails, that would be helpful.


I suppose you mean these; but they do not apply anymore:

20170505/0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch
20170505/0002-WIP-Possibly-more-robust-snapbuild-approach.patch

Andres, any chance you could update them?

alternatively I could use the older version again..

thanks,

Erik Rijkers


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [GSOC 17] Eliminate O(N^2) scaling from rw-conflict tracking in serializable transactions

2017-05-09 Thread Mengxing Liu
Hi Alvaro and Kevin, I'm Mengxing.

This is a “synchronization” email to tell you what I've done and my next plan.
I'm looking forward to your advice.




According to my proposal, I want to prepare the experimental environment during 
the community bonding period. 

As this is the first time I discuss with Alvaro, here I will introduce the 
environment again. 

My lab have a Lenovo System x3950 X6 machine. 

https://www.lenovo.com/images/products/system-x/pdfs/datasheets/x3950_x6_ds.pdf

More specifically, there are 8 sockets, each with 15 Intel(R) Xeon(R) CPU
E7-8870 v2 @ 2.30GHz cores.

Thus we have 120 cores in total. The storage is a 1 TB SSD with a SAS
interface and 500 MB/s write bandwidth.

The OS is Ubuntu 14.04.




I've compiled and installed PostgreSQL and have tested it using pgbench. I
tuned the database parameters according to

http://www.dbdude.com/postgresql/postgresqlperformance.php#BESTPRACTICES

For pgbench, I set the scale factor (-s) to 512. Other settings are left at
their default values.




Because we have so many cores, the SSD becomes the bottleneck. In my test,
pgbench scales to 36 connections (~18 kTPS throughput), and CPU utilization
stays below 30%.

Therefore:

1. Is there anything wrong with my tuning parameters? For example, should I
turn off "fsync", since we don't care about crash recovery here?

2. Can we use just two sockets (30 physical cores) to run the database
server? Then the CPU would become the bottleneck, exposing the problem we
are trying to solve.
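
For what it's worth, a benchmark-only postgresql.conf fragment along the
lines being asked about might look like this (the values are purely
illustrative, and these settings are unsafe for production since they
sacrifice crash safety):

```
fsync = off                  # no crash safety; benchmark only
synchronous_commit = off
full_page_writes = off
checkpoint_timeout = 30min   # spread out checkpoint I/O
max_wal_size = 64GB
shared_buffers = 32GB
```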







BTW, here are Stephen's questions.

>  What method of communication will be used among the mentors and with 
> Mengxing.

What method do you prefer?




>  Frequency of status updates (must be at least once a week and more often is 
> encouraged).

How about reporting my status once a week?




>  What steps will be taken next during the community bonding period.

As I wrote in the proposal, I want to establish the environment and read the 
related source code. Do you have any suggestions for me?




--
Mengxing Liu








Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Ashutosh Bapat
On Tue, May 9, 2017 at 2:59 PM, Amit Langote
 wrote:
> Hi Amul,
>
> On 2017/05/09 16:13, amul sul wrote:
>> Hi,
>>
>> Current pg_dump --exclude-table option excludes partitioned relation
>> and dumps all its child relations and vice versa for --table option, which
>> I think is incorrect.
>
> I agree that `pg_dump -t partitioned_table` should dump all of its
> partitions too and that `pg_dump -T partitioned_table` should exclude
> partitions.  Your patch achieves that.  Some comments:
>
> I think we should update doc/src/sgml/ref/pg_dump.sgml to mention this
> behavior.
>
> +static void
> +find_partition_by_relid(Archive *fout, Oid parentId, SimpleOidList *oids)
>
> Is expand_partitioned_table() a slightly better name?
>
>
> +   appendPQExpBuffer(query, "SELECT relkind "
> + "FROM pg_catalog.pg_class "
> + "WHERE oid = %u", partid);
> +
> +   res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
> +
> +   if (PQntuples(res) == 0)
> +   exit_horribly(NULL, "no matching partition tables 
> were ");
> +
> +   relkind = PQgetvalue(res, 0, 0);
> +
> +   if (relkind[0] == RELKIND_PARTITIONED_TABLE)
> +   find_partition_by_relid(fout, partid, oids);
>
> Instead of recursing like this requiring to send one query for every
> partition, why not issue just one query such that all the partitions in
> the inheritance tree are returned.  For example, as follows:
>
> +appendPQExpBuffer(query, "WITH RECURSIVE partitions (partoid) AS ("
> + "   SELECT inhrelid"
> + "   FROM pg_inherits"
> + "   WHERE inhparent = %u"
> + " UNION ALL"
> + "   SELECT inhrelid"
> + "   FROM pg_inherits, partitions"
> + "   WHERE inhparent = partitions.partoid )"
> + " SELECT partoid FROM partitions",
> + parentId);
>
> I included your patch with the above modifications in the attached 0001
> patch, which also contains the documentation updates.  Please take a look.

I think this patch too has the same problem I described in my reply to
Amul: it fires a query to the server for every partitioned table it
encounters, which is not very efficient.
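
One way to avoid the per-table queries, as a rough sketch (untested), is
to fetch the whole inheritance closure in a single round trip and filter
it on the client side:

```sql
-- Every (ancestor, descendant) pair across all inheritance/partition
-- trees, in one query instead of one recursive query per parent:
WITH RECURSIVE partitions (rootoid, partoid) AS (
    SELECT inhparent, inhrelid
    FROM pg_inherits
  UNION ALL
    SELECT p.rootoid, i.inhrelid
    FROM pg_inherits i
    JOIN partitions p ON i.inhparent = p.partoid
)
SELECT rootoid, partoid FROM partitions;
```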

>
> When testing the patch, I found that if --table specifies the parent and
> --exclude specifies one of its partitions, the latter won't be forcefully
> be included due to the partitioned table expansion, which seems like an
> acceptable behavior.

unless the partition is default partition or a hash partition.

> On the other hand, if --table specifies a partition
> and --exclude specifies the partition's parent, the latter makes partition
> to be excluded as well as part of the expansion, which seems fine too.
>

I am not sure if it's worth considering partitions and partitioned
table as different entities as far as dump is concerned. We should
probably dump the whole partitioned table or none of it. There's merit
in dumping just a single partition to transfer that data to some other
server, but I am not sure how much the feature would be used.

In order to avoid any such potential anomalies, we should copy the dump
flag for a partition from that of its parent, as I described in my
reply to Amul.

> One more thing I am wondering about is whether `pg_dump -t partition`
> should be dumped as a partition definition (that is, as CREATE TABLE
> PARTITION OF) which is what happens currently or as a regular table (just
> CREATE TABLE)?  I'm thinking the latter would be better, but would like to
> confirm before writing any code for that.

If we go about dumping a partition without its parent table, we should
dump CREATE TABLE with proper list of columns and constraints without
PARTITION OF clause. But I am not sure whether we should go that
route.
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Jeevan Ladhe
Hi Amit, Ashutosh,

On Tue, May 9, 2017 at 3:29 PM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

> On Tue, May 9, 2017 at 3:13 PM, Amit Langote
>  wrote:
> > On 2017/05/09 17:21, Jeevan Ladhe wrote:
> >> On Tue, May 9, 2017 at 12:43 PM, amul sul  wrote:
> >>> Current pg_dump --exclude-table option excludes partitioned relation
> >>> and dumps all its child relations and vice versa for --table option,
> which
> >>> I think is incorrect.
> >>>
> >>> In this case we might need to explore all partitions and exclude or
> include
> >>> from dump according to given pg_dump option, attaching WIP patch
> proposing
> >>> same fix.   Thoughts/Comments?
> >>
> >> Also, I can see similar issue exists with inheritance.
> >> In attached patch, I have extended Amul's original patch to address the
> >> inheritance dumping issue.
> >
> > Perhaps, it will be better not to touch the regular inheritance tables in
> > this patch.
>
> Yeah, I think it's fine if parent gets dumped without one or more of
> its children, that's user's choice when it used a certain pattern.
> Problematic case might be when we dump a child without its parent and
> have INHERITS clause there. pg_restore would throw an error. But in
> case that problem exists it's very old and should be fixed separately.


I agree that this should be taken as a separate fix, rather than handling
it together with the partitioning fix.

Regards,
Jeevan Ladhe


[HACKERS] Issues with replication slots(which created manually) against logical replication

2017-05-09 Thread tushar

Hi,

While testing logical replication against v10, I encountered a couple of
issues when creating logical/physical replication slots manually.


Case 1 - with a logical replication slot (created manually)
Publication Server (X)
\\ Make sure wal_level is set to logical in postgresql.conf
\\ Create a table and insert 1 row -> create table t(n int); insert into t
values (1);

\\ Create a publication -> create publication pub for table t;
\\ Create a logical replication slot; but first, run 'make && make install'
in the contrib/test_decoding folder
select * from
pg_create_logical_replication_slot('my_logical','test_decoding');



Subscription Server (Y)
\\ Make sure wal_level is set to logical in postgresql.conf
\\ Create a table -> create table t(n int);
\\ Create a subscription, using the existing slot

postgres=# CREATE SUBSCRIPTION sub CONNECTION 'host=localhost
dbname=postgres port=5000 user=centos' publication pub with (NOCREATE
SLOT, Slot name=my_logical);

NOTICE:  synchronized table states
CREATE SUBSCRIPTION

If we check the publication server (X) and subscription server (Y), we see
this error in the log file:


2017-05-09 10:41:49.570 BST [1809] LOG:  starting logical replication 
worker for subscription "sub"
2017-05-09 10:41:49.579 BST [2346] LOG:  logical replication apply for 
subscription sub started
2017-05-09 10:41:49.588 BST [2346] ERROR:  could not receive data from 
WAL stream: ERROR:  option "proto_version" = "1" is unknown
CONTEXT:  slot "my_logical", output plugin "test_decoding", in the 
startup callback
2017-05-09 10:41:49.589 BST [1801] LOG:  worker process: logical 
replication worker for subscription 16391 (PID 2346) exited with exit code 1


Case 2 - with a physical replication slot (created manually)


Publication Server(X)
\\ Make sure wal_level is set to logical in postgresql.conf file
\\ Create a table and insert 1 row -> create table t(n int); insert into t
values (1);

\\create publication for all -> create publication pub for table t;

\\create physical replication slot
postgres=# select * from pg_create_physical_replication_slot('my_test');
 slot_name | wal_position
---+--
 my_test   |
(1 row)


Subscription Server (Y)
\\ Make sure wal_level is set to logical in postgresql.conf
\\ Create a table -> create table t(n int);
\\ Create a subscription, using the existing slot, which is physical

postgres=# CREATE SUBSCRIPTION sub CONNECTION 'host=localhost
dbname=postgres port=5000 user=centos' publication pub with (NOCREATE
SLOT, Slot name=my_test);

NOTICE:  synchronized table states
CREATE SUBSCRIPTION
postgres=#

In the subscription server log file, we see this error:

2017-05-09 10:51:44.037 BST [2738] ERROR:  could not receive data from 
WAL stream: ERROR:  cannot use physical replication slot for logical 
decoding
2017-05-09 10:51:44.038 BST [1801] LOG:  worker process: logical 
replication worker for subscription 16393 (PID 2738) exited with exit code 1


I think we should throw an error while creating the subscription.
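
Until such a check exists, the slot can at least be verified on the
publisher before creating the subscription, for example:

```sql
-- On the publisher: the slot must exist, be logical, and use the output
-- plugin that built-in logical replication expects ('pgoutput').
SELECT slot_name, slot_type, plugin
FROM pg_replication_slots
WHERE slot_name IN ('my_logical', 'my_test');
```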

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Ashutosh Bapat
On Tue, May 9, 2017 at 3:13 PM, Amit Langote
 wrote:
> On 2017/05/09 17:21, Jeevan Ladhe wrote:
>> On Tue, May 9, 2017 at 12:43 PM, amul sul  wrote:
>>> Current pg_dump --exclude-table option excludes partitioned relation
>>> and dumps all its child relations and vice versa for --table option, which
>>> I think is incorrect.
>>>
>>> In this case we might need to explore all partitions and exclude or include
>>> from dump according to given pg_dump option, attaching WIP patch proposing
>>> same fix.   Thoughts/Comments?
>>
>> Also, I can see similar issue exists with inheritance.
>> In attached patch, I have extended Amul's original patch to address the
>> inheritance dumping issue.
>
> Perhaps, it will be better not to touch the regular inheritance tables in
> this patch.

Yeah, I think it's fine if a parent gets dumped without one or more of
its children; that's the user's choice when using a certain pattern.
A problematic case might be when we dump a child without its parent and
have an INHERITS clause there; pg_restore would throw an error. But if
that problem exists, it's very old and should be fixed separately.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-09 Thread Petr Jelinek
On 09/05/17 11:44, Masahiko Sawada wrote:
> On Tue, May 9, 2017 at 5:57 PM, Petr Jelinek
>  wrote:
>> On 09/05/17 10:51, Masahiko Sawada wrote:
>>> On Tue, May 9, 2017 at 5:39 PM, Petr Jelinek
>>>  wrote:
 On 09/05/17 07:07, Peter Eisentraut wrote:
> On 5/8/17 23:23, Peter Eisentraut wrote:
>> The way this uses RESTRICT and CASCADE appears to be backwards from its
>> usual meaning.  Normally, CASCADE when dropping an object that is still
>> used by others will cause those other objects to be dropped.  The
>> equivalent here would be DROP REPLICATION SLOT + CASCADE would drop the
>> subscription.
>>
>> What we want to simulate instead is an "auto" dependency of the slot on
>> the subscription.  So you can drop the slot separately (subject to other
>> restrictions), and it is dropped automatically when the subscription is
>> dropped.  To avoid that, you can disassociate the slot from the
>> subscription, which you have implemented.
>>
>> I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
>> associated with the subscription, it should be there when we drop the
>> subscription.  Otherwise, the user has to disassociate the slot and take
>> care of it manually.  So just keep the "cascade" behavior.
>>
>> Similarly, I wouldn't check first whether the slot exists.  If the
>> subscription is associated with the slot, it should be there.
>>>
>>> IIUC in this design, for example if we mistakenly create a
>>> subscription without creating replication slot and corresponding
>>> replication slot doesn't exist on publisher, we cannot drop
>>> subscription until we create corresponding replication slot manually.
>>> Isn't it become a problem for user?
>>>
>>
>> We can, that's why the NONE value for slot name was added by the patch
>> so that subscription can be made "slot-less".
> 
> Yeah, but since we can still create subscription with only NOCREATE
> SLOT option (option name will be changed but still exists), if we do
> that then non-NULL value is stored into subslotname and the
> subscription is enable. We can drop such subscription after disabled
> it and after set its slot name to NONE. But I think it's still not
> good for user..
> 

Well that's why I originally had the options directly as part of DROP
SUBSCRIPTION. But if you read the discussion in this thread, that's not
something we should be doing. There is no sensible way of mapping the
nodrop behavior to the only allowed options of RESTRICT and CASCADE, so
there is not much we can do other than the ALTER.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] snapbuild woes

2017-05-09 Thread Petr Jelinek
On 09/05/17 10:59, Erik Rijkers wrote:
> On 2017-05-09 10:50, Petr Jelinek wrote:
>> On 09/05/17 00:03, Erik Rijkers wrote:
>>> On 2017-05-05 02:00, Andres Freund wrote:

 Could you have a look?
>>>
>>> Running tests with these three patches:
>>>
 0001-WIP-Fix-off-by-one-around-GetLastImportantRecPtr.patch+
 0002-WIP-Possibly-more-robust-snapbuild-approach.patch +
 fix-statistics-reporting-in-logical-replication-work.patch
>>> (on top of 44c528810)
>>>
>>> I test by 15-minute pgbench runs while there is a logical replication
>>> connection. Primary and replica are on the same machine.
>>>
>>> I have seen errors on 3 different machines (where error means: at least
>>> 1 of the 4 pgbench tables is not md5-equal). It seems better, faster
>>> machines yield less errors.
>>>
>>> Normally I see in pg_stat_replication (on master) one process in state
>>> 'streaming'.
>>>
>>>  pid  | wal | replay_loc  |   diff   |   state   |   app   |
>>> sync_state
>>> 16495 | 11/EDBC | 11/EA3FEEE8 | 58462488 | streaming | derail2 |
>>> async
>>>
>>> Often there are another two processes in pg_stat_replication that remain
>>> in state 'startup'.
>>>
>>> In the failing sessions the 'streaming'-state process is missing; in
>>> failing sessions there are only the two processes that are and remain in
>>> 'startup'.
>>
>> Hmm, startup is the state where slot creation is happening. I wonder if
>> it's just taking long time to create snapshot because of the 5th issue
>> which is not yet fixed (and the original patch will not apply on top of
>> this change). Alternatively there is a bug in this patch.
>>
>> Did you see high CPU usage during the test when there were those
>> "startup" state walsenders?
>>
> 
> I haven't noticed but I didn't pay attention to that particularly.
> 
> I'll try to get some CPU-info logged...
> 

I rebased the above mentioned patch to apply to the patches Andres sent,
if you could try to add it on top of what you have and check if it still
fails, that would be helpful.

Thanks!

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


Skip-unnecessary-snapshot-builds.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical replication syntax (was DROP SUBSCRIPTION, query cancellations and slot handling)

2017-05-09 Thread Masahiko Sawada
On Tue, May 9, 2017 at 5:57 PM, Petr Jelinek
 wrote:
> On 09/05/17 10:51, Masahiko Sawada wrote:
>> On Tue, May 9, 2017 at 5:39 PM, Petr Jelinek
>>  wrote:
>>> On 09/05/17 07:07, Peter Eisentraut wrote:
 On 5/8/17 23:23, Peter Eisentraut wrote:
> The way this uses RESTRICT and CASCADE appears to be backwards from its
> usual meaning.  Normally, CASCADE when dropping an object that is still
> used by others will cause those other objects to be dropped.  The
> equivalent here would be DROP REPLICATION SLOT + CASCADE would drop the
> subscription.
>
> What we want to simulate instead is an "auto" dependency of the slot on
> the subscription.  So you can drop the slot separately (subject to other
> restrictions), and it is dropped automatically when the subscription is
> dropped.  To avoid that, you can disassociate the slot from the
> subscription, which you have implemented.
>
> I think we can therefore do without RESTRICT/CASCADE here.  If a slot is
> associated with the subscription, it should be there when we drop the
> subscription.  Otherwise, the user has to disassociate the slot and take
> care of it manually.  So just keep the "cascade" behavior.
>
> Similarly, I wouldn't check first whether the slot exists.  If the
> subscription is associated with the slot, it should be there.
>>
>> IIUC in this design, for example if we mistakenly create a
>> subscription without creating replication slot and corresponding
>> replication slot doesn't exist on publisher, we cannot drop
>> subscription until we create corresponding replication slot manually.
>> Isn't it become a problem for user?
>>
>
> We can, that's why the NONE value for slot name was added by the patch
> so that subscription can be made "slot-less".

Yeah, but since we can still create a subscription with only the NOCREATE
SLOT option (the option name will be changed but still exists), doing so
stores a non-NULL value into subslotname and leaves the subscription
enabled. We can drop such a subscription only after disabling it and
setting its slot name to NONE. But I think that's still not good for the
user.

> The change of
> RESTRICT/CASCADE behavior that Peter made is just about whether we
> refuse to drop subscription by default when there is slot associated
> with or if we just go straight to dropping the slot.
>

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Amit Langote
On 2017/05/09 17:21, Jeevan Ladhe wrote:
> On Tue, May 9, 2017 at 12:43 PM, amul sul  wrote:
>> Current pg_dump --exclude-table option excludes partitioned relation
>> and dumps all its child relations and vice versa for --table option, which
>> I think is incorrect.
>>
>> In this case we might need to explore all partitions and exclude or include
>> from dump according to given pg_dump option, attaching WIP patch proposing
>> same fix.   Thoughts/Comments?
> 
> Also, I can see similar issue exists with inheritance.
> In attached patch, I have extended Amul's original patch to address the
> inheritance dumping issue.

Perhaps it would be better not to touch regular inheritance tables in
this patch.

Thanks,
Amit





Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Ashutosh Bapat
On Tue, May 9, 2017 at 12:43 PM, amul sul  wrote:
> Hi,
>
> The current pg_dump --exclude-table option excludes a partitioned relation
> but dumps all of its child relations, and vice versa for the --table option,
> which I think is incorrect.
>
> In this case we might need to explore all partitions and exclude or include
> them from the dump according to the given pg_dump option; attaching a WIP
> patch proposing such a fix.   Thoughts/Comments?
>

+1 for fixing this.

Since we didn't catch this issue earlier, it looks like we don't have
test cases covering this scenario. Maybe we should add some in the
patch.

The way this patch has been written, there is a possibility that a
partition is included multiple times in the list of tables, if the names
of the partition and the parent table both match the pattern. That's not
a correctness issue, but it would slow down simple_oid_list_member() a
bit because of the longer OID list.
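
A minimal layout that triggers the duplication (hypothetical table names; both the parent and the partition match the same -t pattern):

```sql
-- Hypothetical schema: parent and partition share a name prefix.
CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
CREATE TABLE measurements_y2017 PARTITION OF measurements
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

-- With the patch as written, `pg_dump -t 'measurements*'` matches both
-- relations directly, and the parent's partition expansion then adds
-- measurements_y2017's OID to the include list a second time.
```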

I am not sure what the use would be of dumping a partitioned table
without its partitions, or vice versa. I think, instead, we should treat
a partitioned table and its partitions as a single unit as far as dump
is concerned: either we dump the partitioned table along with all its
partitions, or we dump none of them. In that case, we should modify the
query in expand_table_name_patterns() not to include partitions in the
result. The rest of the code in the patch would take care of
including/excluding the partitions when the parent table is
included/excluded.
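
A sketch of the kind of filter expand_table_name_patterns() could apply, assuming the pg_class.relispartition column that came with declarative partitioning (this is an illustration of the idea, not the patch's actual query; the pattern is a placeholder):

```sql
-- Match the -t/-T pattern against top-level relations only; partitions
-- are then pulled in (or left out) by the expansion of their parent.
SELECT c.oid
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p', 'v', 'm', 'f', 'S')
  AND NOT c.relispartition        -- skip partitions in pattern matching
  AND c.relname OPERATOR(pg_catalog.~) '^(measurements.*)$';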

This patch looks up pg_inherits for every partitioned table it
encounters, which is costly. A better-performing fix would instead set a
table's dump flag based on the corresponding setting of its parent. The
parent is available in the TableInfo structure, but the comment there
says that it's set only for dumpable objects. The comment is correct,
since flagInhTables() skips tables with dump = false. Maybe we should
modify that function not to skip the partitions, find their parent
tables, and set the dump flag based on the parents'. This means that the
immediate parent's dump flag would have to be set correctly before any
of its children are encountered by flagInhTables() for multi-level
partitioning to work. I don't think we can guarantee that. Maybe we can
modify flagInhTables() to set the parent tables of a parent if that's
not done already, and then set the dump flag from the top parent down.
If we do that, we will have to add code in flagInhTables() to skip
entries whose parents have already been set, since those might have been
set because of an earlier grandchild.

Even better would be to dump the partitions along with the partitioned
table instead of creating separate TableInfo entries for them. But I
think that's a lot of change.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Amit Langote
Hi Amul,

On 2017/05/09 16:13, amul sul wrote:
> Hi,
> 
> The current pg_dump --exclude-table option excludes a partitioned relation
> but dumps all of its child relations, and vice versa for the --table option,
> which I think is incorrect.

I agree that `pg_dump -t partitioned_table` should dump all of its
partitions too and that `pg_dump -T partitioned_table` should exclude
partitions.  Your patch achieves that.  Some comments:

I think we should update doc/src/sgml/ref/pg_dump.sgml to mention this
behavior.

+static void
+find_partition_by_relid(Archive *fout, Oid parentId, SimpleOidList *oids)

Is expand_partitioned_table() a slightly better name?


+   appendPQExpBuffer(query, "SELECT relkind "
+ "FROM pg_catalog.pg_class "
+ "WHERE oid = %u", partid);
+
+   res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
+
+   if (PQntuples(res) == 0)
+   exit_horribly(NULL, "no matching partition tables were 
");
+
+   relkind = PQgetvalue(res, 0, 0);
+
+   if (relkind[0] == RELKIND_PARTITIONED_TABLE)
+   find_partition_by_relid(fout, partid, oids);

Instead of recursing like this, which requires sending one query for
every partition, why not issue just one query that returns all the
partitions in the inheritance tree?  For example, as follows:
+appendPQExpBuffer(query, "WITH RECURSIVE partitions (partoid) AS ("
+ "   SELECT inhrelid"
+ "   FROM pg_inherits"
+ "   WHERE inhparent = %u"
+ " UNION ALL"
+ "   SELECT inhrelid"
+ "   FROM pg_inherits, partitions"
+ "   WHERE inhparent = partitions.partoid )"
+ " SELECT partoid FROM partitions",
+ parentId);

I included your patch with the above modifications in the attached 0001
patch, which also contains the documentation updates.  Please take a look.

When testing the patch, I found that if --table specifies the parent and
--exclude-table specifies one of its partitions, the latter won't be
forcibly included by the partitioned table expansion, which seems like
acceptable behavior.  On the other hand, if --table specifies a partition
and --exclude-table specifies the partition's parent, the latter causes
the partition to be excluded as well, as part of the expansion, which
seems fine too.

One more thing I am wondering about is whether `pg_dump -t partition`
should dump the partition as a partition definition (that is, as CREATE
TABLE ... PARTITION OF), which is what happens currently, or as a
regular table (just CREATE TABLE).  I'm thinking the latter would be
better, but would like to confirm before writing any code for that.
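
For illustration, the two dump shapes being weighed, for a hypothetical partition measurements_y2017 (column list made up for the example):

```sql
-- Current behavior: `pg_dump -t measurements_y2017` emits a partition
-- definition, which cannot be restored without the parent table.
CREATE TABLE measurements_y2017 PARTITION OF measurements
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

-- Alternative being floated: emit a standalone, regular table.
CREATE TABLE measurements_y2017 (ts timestamptz, v int);
```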

I will add this as an open item.  Thanks for working on it.

Thanks,
Amit
>From 3b3103d6a23aab78592d7547e11f7e6414cfe0ec Mon Sep 17 00:00:00 2001
From: amit 
Date: Tue, 9 May 2017 16:39:14 +0900
Subject: [PATCH] Expand partitioned table specified using pg_dump's -t or -T
 options

Report and patch by Amul Sul.
---
 doc/src/sgml/ref/pg_dump.sgml | 12 +
 src/bin/pg_dump/pg_dump.c | 57 +--
 2 files changed, 62 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 6cf7e570ef..c90a3298ca 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -520,10 +520,11 @@ PostgreSQL documentation
 table.
 For this purpose, table includes views, materialized views,
 sequences, and foreign tables.  Multiple tables
-can be selected by writing multiple -t switches.  Also, the
-table parameter is
-interpreted as a pattern according to the same rules used by
-psql's \d commands (see -t switches.  If the
+specified table is a partitioned table, its partitions are dumped
+too.  Also, the table
+parameter is interpreted as a pattern according to the same rules used
+by psql's \d commands (see ),
 so multiple tables can also be selected by writing wildcard characters
 in the pattern.  When using wildcards, be careful to quote the pattern
@@ -572,7 +573,8 @@ PostgreSQL documentation
 class="parameter">table pattern.  The pattern is
 interpreted according to the same rules as for -t.
 -T can be given more than once to exclude tables
-matching any of several patterns.
+matching any of several patterns.  If the specified table is a
+partitioned table, its partitions are not dumped either.

 

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index af84c25093..00146786e3 100644
--- 

Re: [HACKERS] Bug in pg_dump --table and --exclude-table for declarative partition table handling.

2017-05-09 Thread Jeevan Ladhe
In my last email I mistakenly attached Amul's patch itself.
Please find attached a patch extending the fix to inheritance relations.

Regards,
Jeevan Ladhe

On Tue, May 9, 2017 at 1:51 PM, Jeevan Ladhe 
wrote:

> Hi,
>
> On Tue, May 9, 2017 at 12:43 PM, amul sul  wrote:
>
>> Hi,
>>
>> The current pg_dump --exclude-table option excludes a partitioned relation
>> but dumps all of its child relations, and vice versa for the --table
>> option, which I think is incorrect.
>>
>> In this case we might need to explore all partitions and exclude or
>> include
>> them from the dump according to the given pg_dump option; attaching a WIP
>> patch proposing such a fix.   Thoughts/Comments?
>>
>> Regards,
>> Amul
>>
>
> +1.
>
> Also, I can see similar issue exists with inheritance.
> In attached patch, I have extended Amul's original patch to address the
> inheritance dumping issue.
>
> Regards,
> Jeevan Ladhe
>
>


pg_dump_fix_for_partition_and_inheritance_wip.patch
Description: Binary data


