Re: In-place index updates and HOT (Was: [HACKERS] Patch: Write Amplification Reduction Method (WARM))

2017-07-28 Thread Claudio Freire
On Fri, Jul 28, 2017 at 8:32 PM, Peter Geoghegan  wrote:
> README.HOT says that that cost is not worth the benefit of
> preventing a new index write, but I think that it ought to take into
> account that not all index writes are equal. There is an appreciable
> difference between inserting a new tuple, and updating one in-place. We
> can remove the cost (hurting new snapshots by making them go through old
> heap pages) while preserving most of the benefits (no logically
> unnecessary index bloat).

It's a neat idea.

And, well, now that you mention it, you don't need to touch indexes at all.

You can create the new chain and "update" the index to point to it
without ever touching the index itself: repoint the old HOT chain's
start line pointer at the new HOT chain, create a new line pointer
for the old chain, and point to it from the new HOT chain's t_tid.

Existing index tuples thus now point to the right HOT chain without
having to go into the index and make any changes.

You do need the new HOT chain to live in the same page for this,
however.
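
To sketch the swap (a toy model only -- real HOT chains live in heap
pages behind ItemId line pointers, and none of the structures or names
below are PostgreSQL's):

typedef struct Slot
{
	int		first_tuple;	/* offset of chain's first tuple, -1 if free */
} Slot;

/*
 * Swap a new HOT chain in behind the line pointer the index points at.
 * The index keeps referencing 'root'; the superseded chain is parked
 * under a fresh, index-invisible slot, and the new chain's tail link
 * (the t_tid analogue) records where the old chain went.
 */
static void
swap_in_new_chain(Slot *slots, int root, int spare,
				  int new_chain_first, int *new_chain_back_link)
{
	slots[spare].first_tuple = slots[root].first_tuple; /* park old chain */
	slots[root].first_tuple = new_chain_first;	/* index sees new chain */
	*new_chain_back_link = spare;	/* new chain remembers the old one */
}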




Re: [HACKERS] AlterUserStmt anmd RoleSpec rules in grammar.y

2017-07-28 Thread Robert Haas
On Thu, Jul 27, 2017 at 2:52 AM, Pavel Golub  wrote:
> One  more  notice.  ALTER  USER  ALL  works  in  EnterpriseDB 10beta2
> installer. That's weird. I thought EnterpriseDB uses official sources.

I find it really hard to believe that we're doing anything else.  It
wouldn't make any sense to patch the PostgreSQL source code and then
release the installers as PostgreSQL installers.  And if we *were*
going to do that, wouldn't we patch something more interesting than
the ALTER USER command?  I don't know what's going on here but I have
a feeling that EnterpriseDB secretly maintaining patch sets that we
inject into our PostgreSQL installers is not that thing.

Adding a few EDB people.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] [PATCH] Vacuum: Update FSM more frequently

2017-07-28 Thread Claudio Freire
As discussed in the vacuum ring buffer thread:

Make Vacuum update the FSM more frequently, to avoid the case where
autovacuum fails to reach the point where it updates the FSM in
highly contended tables.

Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids
recursing into branches that already contain enough free space, to
avoid having to traverse the whole FSM and thus induce quadratic
costs. Intermediate FSM vacuums are only supposed to make enough
free space visible to avoid extension until the final (non-partial)
FSM vacuum.
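
As a toy illustration of that pruning recursion (not the patch's code --
the real FSM is a max-aggregate tree spread across disk pages; here it
is just an implicit binary heap in memory, and all names are invented):

#include <stdint.h>

/*
 * Re-aggregate a max-tree whose leaves hold per-page free-space values,
 * but skip any subtree whose root already advertises at least
 * 'threshold' free space: callers only need to see *enough* space to
 * avoid relation extension, not the exact maximum.
 */
static uint8_t
vacuum_subtree(uint8_t *tree, int node, int nnodes, uint8_t threshold)
{
	int			left = 2 * node + 1;
	int			right = 2 * node + 2;
	uint8_t		l, r;

	if (left >= nnodes)				/* leaf: value is authoritative */
		return tree[node];
	if (tree[node] >= threshold)	/* enough space already visible */
		return tree[node];

	l = vacuum_subtree(tree, left, nnodes, threshold);
	r = (right < nnodes) ? vacuum_subtree(tree, right, nnodes, threshold) : 0;
	tree[node] = (l > r) ? l : r;
	return tree[node];
}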

Run partial FSM vacuums after each index pass, which is a point
at which whole ranges of the heap have been thoroughly cleaned, and
we can expect no further updates to those ranges of the FSM save
for concurrent activity. When there are no indexes, and thus no
index passes, run partial FSM vacuums every 8GB of dirtied pages
or 1/8th of the relation, whichever is highest. This allows some
partial work to be made visible without incurring quadratic cost.

In any case, the FSM is small in relation to the table, so even when
quadratic cost is involved, it should not be problematic. Index
passes already incur quadratic cost, and the addition of the FSM
is unlikely to be measurable.

Run a partial FSM vacuum with a low pruning threshold right at
the beginning of the VACUUM to finish up any work left over from
prior canceled vacuum runs, something that is common in highly
contended tables when running only autovacuum.

Autovacuum canceling is thus handled by updating the FSM first-thing
when autovacuum retries vacuuming a relation. I attempted to add
an autovacuum work item for performing the FSM update shortly
after the cancel, but that had some issues so I abandoned the approach.

For one, it would sometimes crash when adding the work item from
inside autovacuum itself. I didn't find the cause of the crash, but I suspect
AutoVacuumRequestWork was being called in a context where it
was not safe. In any case, the way work items work didn't seem like
it would have suited our purposes anyway: work items are only ever
processed after processing all tables in the database, which could
take ages; they are limited to 256, which would make the approach
troublesome for databases with more than 256 tables that trigger this
case; and they required de-duplication, which had quadratic cost
without major refactoring of the feature.

So, patch attached. I'll add it to the next CF as well.
From 791b9d2006b5abd67a8efb3f7c6cc99141ddbb09 Mon Sep 17 00:00:00 2001
From: Claudio Freire 
Date: Fri, 28 Jul 2017 21:31:59 -0300
Subject: [PATCH] Vacuum: Update FSM more frequently

Make Vacuum update the FSM more frequently, to avoid the case where
autovacuum fails to reach the point where it updates the FSM in
highly contended tables.

Introduce a tree pruning threshold to FreeSpaceMapVacuum that avoids
recursing into branches that already contain enough free space, to
avoid having to traverse the whole FSM and thus induce quadratic
costs. Intermediate FSM vacuums are only supposed to make enough
free space visible to avoid extension until the final (non-partial)
FSM vacuum.

Run partial FSM vacuums after each index pass, which is a point
at which whole ranges of the heap have been thoroughly cleaned, and
we can expect no further updates to those ranges of the FSM save
for concurrent activity. When there are no indexes, and thus no
index passes, run partial FSM vacuums every 8GB of dirtied pages
or 1/8th of the relation, whichever is highest. This allows some
partial work to be made visible without incurring quadratic cost.

In any case, the FSM is small in relation to the table, so even when
quadratic cost is involved, it should not be problematic. Index
passes already incur quadratic cost, and the addition of the FSM
is unlikely to be measurable.

Run a partial FSM vacuum with a low pruning threshold right at
the beginning of the VACUUM to finish up any work left over from
prior canceled vacuum runs, something that is common in highly
contended tables when running only autovacuum.
---
 src/backend/access/brin/brin.c|  2 +-
 src/backend/access/brin/brin_pageops.c| 10 +++---
 src/backend/commands/vacuumlazy.c | 60 +--
 src/backend/storage/freespace/freespace.c | 31 
 src/backend/storage/freespace/indexfsm.c  |  2 +-
 src/include/storage/freespace.h   |  2 +-
 6 files changed, 90 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index efebeb0..bb80edd 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -1424,5 +1424,5 @@ brin_vacuum_scan(Relation idxrel, BufferAccessStrategy strategy)
 	 * the way to the top.
 	 */
 	if (vacuum_fsm)
-		FreeSpaceMapVacuum(idxrel);
+		FreeSpaceMapVacuum(idxrel, 0);
 }
diff --git a/src/backend/access/brin/brin_pageops.c 

In-place index updates and HOT (Was: [HACKERS] Patch: Write Amplification Reduction Method (WARM))

2017-07-28 Thread Peter Geoghegan

Pavan Deolasee  wrote:

> One good thing is that the patch is ready and fully functional. So that
> allows those who are keen to run real performance tests and see the actual
> impact of the patch.


Very true.


> I see your point. But I would like to think this way: does the technology
> significantly help many common use cases, that are currently not addressed
> by HOT? It probably won't help all workloads, that's given. Also, we don't
> have any credible alternative while this patch has progressed quite a lot.
> May be Robert will soon present the pluggable storage/UNDO patch and that
> will cover everything and more that is currently covered by HOT/WARM. That
> will probably make many other things redundant.


Well, I don't assume that it will; again, I just don't know. I agree
with your general assessment of things, which is that WARM, EDB's
Z-Heap/UNDO project, and things like IOTs have significant overlap in
terms of the high-level problems that they fix. While it's hard to say
just how much overlap exists, it's clearly more than a little. And, you
are right that we don't have a credible alternative in this general
category right now. The WARM patch is available today.

As you may have noticed, in recent weeks I've been very vocal about the
role of index bloat in cases where bloat has a big impact on production
workloads. I think that it has an under-appreciated role in workloads
that deteriorate over time, as bloat accumulates. Perhaps HOT made such
a big difference to workloads 10 years ago not just because it prevented
creating new index entries. It also reduced fragmentation of the
keyspace in indexes, by never inserting duplicates in the first place.

I have some rough ideas related to this, and to the general questions
you're addressing. I'd like to run these by you.

In-place index updates + HOT
----------------------------

Maybe we could improve things markedly in this general area by "chaining
together HOT chains", and updating index heap pointers in place, to
point to the start of the latest HOT chain in that chain of chains
(provided the index tuple was "logically unchanged" -- otherwise, you'd
need to have both sets of indexed values at once, of course). Index
tuples therefore always point to the latest HOT chain, favoring recent
MVCC snapshots over older ones.

Pruning
-------

HOT pruning is great because you can remove heap bloat without worrying
about there being index entries with heap item pointers pointing to what
is removed. But isn't that limitation as much about what is in the index
as it is about what is in the heap?

Under this scheme, you don't even have to keep around the old ItemId
stub when pruning, if it's a sufficiently old HOT chain that no index
points to the corresponding TID. That may not seem like a lot of bloat
to have to keep around, but it accumulates within a page until VACUUM
runs, ultimately limiting the effectiveness of pruning for certain
workloads.

Old snapshots/row versions
--------------------------

Superseding HOT chains have their last heap tuple's t_tid point to the
start of the preceding/superseded HOT chain (not their own TID, as
today, which is redundant), which may or may not be on the same heap
page. That's how old snapshots go backwards to get old versions, without
needing their own "logically redundant" index entries. So with UPDATE
heavy workloads that are essentially HOT-safe today, performance doesn't
tank due to a long running transaction that obstructs pruning within a
heap page, and thus necessitates the insertion of new index tuples.
That's the main justification for this entire design.
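
As a toy model of that lookup (with MVCC visibility crudely reduced to
a single xmin cutoff, which the real thing of course is not):

#include <stdint.h>

/*
 * Each chain records a back-link -- via its last tuple's t_tid in the
 * design above -- to the chain it superseded.  Recent snapshots find
 * their version in the newest chain immediately; only old snapshots pay
 * the cost of walking backwards, possibly across pages.
 */
typedef struct Chain
{
	uint32_t	first_xmin;		/* xmin of this chain's oldest version */
	int			prev_chain;		/* superseded chain's slot, or -1 */
} Chain;

static int
chain_for_snapshot(const Chain *chains, int newest, uint32_t snap_xmin)
{
	int			c = newest;

	/* walk back while the whole chain is too new for this snapshot */
	while (c != -1 && chains[c].first_xmin >= snap_xmin)
		c = chains[c].prev_chain;

	return c;					/* -1: no sufficiently old version left */
}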

It's also possible that pruning can be taught that since only one index
update was logically necessary when the to-be-pruned HOT chain was
created, it's worth doing a "retail index tuple deletion" against the
index tuple that was logically necessary, then completely obliterating
the HOT chain, stub item pointer and all.

Bloat and locality
------------------

README.HOT argues against HOT chains that span pages, which this is a
bit like, on the grounds that it's bad news that every recent snapshot
has to go through the old heap page. That makes sense, but only because
the temporal locality there is horrible, which would not be the case
here. README.HOT says that that cost is not worth the benefit of
preventing a new index write, but I think that it ought to take into
account that not all index writes are equal. There is an appreciable
difference between inserting a new tuple, and updating one in-place. We
can remove the cost (hurting new snapshots by making them go through old
heap pages) while preserving most of the benefits (no logically
unnecessary index bloat).

The benefit of HOT is clearly more bloat prevention than not having to
visit indexes at all. InnoDB secondary index updates update the index
twice: The first time, during the update itself, and the second time, by
the purge thread, once the xact commits. Clearly they care about 

Re: [HACKERS] psql's \d and \dt are sending their complaints to different output files

2017-07-28 Thread Tom Lane
Daniel Gustafsson  writes:
> On 27 Jul 2017, at 19:40, Tom Lane  wrote:
>> listTables and listDbRoleSettings print a custom message rather than
>> an empty table for no matches (but in QUIET mode they just do the
>> latter).  I think that's actually a good decision for listDbRoleSettings,
>> because the user might be confused about which pattern is which, and
>> we can straighten him out with a custom error message.  But the only
>> good argument for listTables behaving that way is that people are used
>> to it.  

> Which is a pretty good argument for not changing it.

Yeah, not hearing any votes for changing it, I'll leave it be.

>> I'm not sure about this one.  It definitely seems bogus that \dRp+ is
>> omitting the owner column, but I'm less excited about duplicating the
>> pubname.

> It’s indeed a bit silly to duplicate the information like that.

The minimum change here seems to be to add the owner column, so I did
that.

regards, tom lane




Re: [HACKERS] Adding support for Default partition in partitioning

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 9:30 AM, Ashutosh Bapat
 wrote:
> 0004 patch
> The patch adds another column partdefid to catalog pg_partitioned_table. The
> column gives the OID of the default partition for a given partitioned table.
> This means that the default partition's OID is stored in two places: 1. in
> the default partition table's pg_class entry and 2. in pg_partitioned_table.
> There is no way to detect when these two go out of sync, and keeping them in
> sync is also a maintenance burden. Given that the default partition's OID is
> required only while adding/dropping a partition, which is a less frequent
> operation, it won't hurt to join a few catalogs (pg_inherits and pg_class in
> this case) to find out the default partition's OID. That occasional
> performance hit is worth avoiding the maintenance burden.

Performance isn't the only consideration here.  We also need to think
about locking and concurrency.  I think that most operations that
involve locking the parent will also involve locking the default
partition.  However, we can't safely build a relcache entry for a
relation before we've got some kind of lock on it.  We can't assume
that there is no concurrent DDL going on before we take some lock.  We
can't assume invalidation messages are processed before we have taken
some lock.  If we read multiple catalog tuples, they may be from
different points in time.  If we can figure out everything we need to
know from one or two syscache lookups, it may be easier to verify that
the code is bug-free vs. having to do something more complicated.
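
For instance, with the proposed pg_partitioned_table.partdefid column,
fetching the default partition's OID would be a single lookup in the
existing PARTRELID syscache -- a sketch only, since partdefid is the
column under discussion here, not an existing catalog field (backend
code would also need utils/syscache.h and catalog/pg_partitioned_table.h):

static Oid
get_default_partition_oid(Oid parentrelid)
{
	HeapTuple	tuple;
	Oid			defpartid = InvalidOid;

	tuple = SearchSysCache1(PARTRELID, ObjectIdGetDatum(parentrelid));
	if (HeapTupleIsValid(tuple))
	{
		defpartid = ((Form_pg_partitioned_table) GETSTRUCT(tuple))->partdefid;
		ReleaseSysCache(tuple);
	}
	return defpartid;
}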

Now that having been said, I'm not taking the position that Jeevan's
patch (based on Amit Langote's idea) has definitely got the right
idea, just that you should think twice before shooting down the
approach.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?

2017-07-28 Thread Tom Lane
Andrew Dunstan  writes:
>> On 07/27/2017 11:58 PM, Tom Lane wrote:
>>> I wonder if it'd be worth getting the buildfarm
>>> to log the output of "perl -V" so we could get a clearer picture
>>> of what's being tested.

Looks like this, but it's rather tedious. I think I might back it out. I
> guess I could also write it to a log file, if we really think we need it.
> 

Yeah, that's awfully verbose :-(.  I suspect most vendors wouldn't bother
with enumerating quite so many patches.

I fixed configure so that it will report what it saw in $Config{ccflags};
that may be sufficient.

regards, tom lane




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Tom Lane
Ashutosh Sharma  writes:
> I quickly ran some basic test cases on my Windows machine (the machine
> where I have been reproducing the issue) and they are all passing.
> Also, all the automated test cases for the contrib module hstore_plperl
> are passing.

Cool, thanks.  I hope people will set up some buildfarm machines with
the formerly-broken configurations.

regards, tom lane




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Tom Lane
Christoph Berg  writes:
> The plperl segfault on Debian's kfreebsd port I reported back in 2013
> is also still present:
> https://www.postgresql.org/message-id/20130515064201.GC704%40msgid.df7cb.de
> https://buildd.debian.org/status/fetch.php?pkg=postgresql-10=kfreebsd-amd64=10~beta2-1=1499947011=0

So it'd be interesting to know if it's any better with HEAD ...

regards, tom lane




Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 3:26 PM, Tom Lane  wrote:
>> (Boy, our implementation of DROP COLUMN is painful!  If we really got
>> rid of columns when they were dropped we could've avoided this whole
>> mess.)
>
> I think the pain arises mostly from the decision to allow partitions
> to not all have identical rowtype.  I would have lobbied against that
> choice if I'd been paying more attention at the start ... but I wasn't.

Given the way DROP COLUMN works, I can't really imagine that being a
hit with end users even if you'd won the argument on this list.  It
would be totally reasonable to ask users to put the columns in the
same order in all partitions, but asking them to have
identically-sized and positioned dropped columns is full of fail.

On the other hand, if DROP COLUMN didn't work this way, then you'd
definitely have won that argument and we would be spared this bug (and
many others).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Ashutosh Sharma
On Sat, Jul 29, 2017 at 12:05 AM, Tom Lane  wrote:
> Ashutosh Sharma  writes:
>> On Fri, Jul 28, 2017 at 10:05 PM, Tom Lane  wrote:
>>> Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
>>> via a command-line #define rather than putting it into perl's config.h,
>>> and that results in a change in the apparent size of the PerlInterp
>>> struct (because of IMem and friends).
>
>> Yes, That's right. We would have seen different result if the
>> PERL_IMPLICIT_SYS would not have been defined globally.
>
> I've pushed the changes to the build scripts now.  I had to revise the
> Mkvcbuild.pm part a bit, because you'd forgotten to propagate the extra
> defines into hstore_plperl.

Thanks for that.

> So that code is untested; please confirm
> that HEAD works in your problem environment now.

I quickly ran some basic test cases on my Windows machine (the machine
where I have been reproducing the issue) and they are all passing.
Also, all the automated test cases for the contrib module hstore_plperl
are passing.

> I notice that Mkvcbuild.pm is artificially injecting a define for
> PLPERL_HAVE_UID_GID.  I strongly suspect that that was a hack meant
> to work around the lack of this mechanism, and that we could now
> get rid of it or clean it up.  But there's no evidence in the commit
> log about the reason for it --- it seems to go back to the original
> addition of MSVC support.  Anybody remember?
>
> regards, tom lane




Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-28 Thread Tom Lane
Robert Haas  writes:
> If we're remapping the varattnos, I don't see how we can just pass
> whole-row references through.  I mean, if the partition and the parent
> have different varattnos, then a whole-row attribute for one is a
> different thing from a whole-row attribute for the other; the
> HeapTuple you would need to build in each case is different, based on
> the column order for the relation you're worrying about.

There is longstanding code in the planner to handle this for traditional-
inheritance cases; IIRC what it does is build a ROW() expression that
emits the proper output rowtype regardless of the discrepancies.
Not sure why that's apparently not getting applied for partitioning.
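
One way that translation can be expressed (a sketch -- makeVar() and
ConvertRowtypeExpr are real node APIs, but the surrounding variables
are assumed; the planner actually builds a ROW() expression listing the
child columns, which amounts to the same thing):

/* whole-row Var for the child rel (varattno 0 means "whole row") */
Var		   *childvar = makeVar(child_rti, 0, child_reltype, -1,
							   InvalidOid, 0);
ConvertRowtypeExpr *cre = makeNode(ConvertRowtypeExpr);

cre->arg = (Expr *) childvar;
cre->resulttype = parent_reltype;	/* present the parent's rowtype */
cre->convertformat = COERCE_IMPLICIT_CAST;
cre->location = -1;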

> (Boy, our implementation of DROP COLUMN is painful!  If we really got
> rid of columns when they were dropped we could've avoided this whole
> mess.)

I think the pain arises mostly from the decision to allow partitions
to not all have identical rowtype.  I would have lobbied against that
choice if I'd been paying more attention at the start ... but I wasn't.

regards, tom lane




Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-28 Thread Vladimir Kunschikov
I haven't seen that but can't guarantee that such case does not exist

On 28 July 2017 at 9:19 PM, "Alvaro Herrera" <alvhe...@2ndquadrant.com>
wrote:

> Vladimir Kunschikov wrote:
> > >This "maxlen" business and the fallback error message are
> > >strange.  We have roughly equivalent code in pg_basebackup.c
> > >which has been working since 2011
> > >Perhaps you can drop the memchr/fallback tricks and adopt the
> > >pg_basebackup coding?  Or is there a specific reason to have
> > >the memchr check?
> >
> > Of course those tricks can be dropped; the function will be much prettier.
> > The 'tricks' were made to pass some strict internal tests.
>
> Well, if there are cases which cause zlib to return a message that
> causes a crash when printing it or some other problem, then let's use
> your version both in this new code and in pg_basebackup.  Do these cases
> really exist?
>
> > Initially I used exactly that function from pg_basebackup.c:
> > https://github.com/kunschikov/postgres/commit/15e9fda6df51cf17c0b0a4f201ee0f93cf258de9#diff-98e3f8ce5d6e87950dd66e4c8bdedb21R713
> > It was rewritten for the sake of somewhat exaggerated security.
> > Version #5 in attachment.
>
> I'd rather have all the security we can get, so before dropping those
> protections, let's make sure they are useless.
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-28 Thread Peter Geoghegan

Robert Haas  wrote:

> (Boy, our implementation of DROP COLUMN is painful!  If we really got
> rid of columns when they were dropped we could've avoided this whole
> mess.)


I tend to agree. I can recall several cases where it led to bugs that
went undetected for quite a while.

--
Peter Geoghegan




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-28 Thread Mark Rofail
On Fri, Jul 28, 2017 at 1:19 PM, Erik Rijkers  wrote:

> One small thing while building docs:
>
> $  cd doc/src/sgml && make html
> osx -wall -wno-unused-param -wno-empty -wfully-tagged -D . -D . -x lower
> postgres.sgml >postgres.xml.tmp
> osx:ref/create_table.sgml:960:100:E: document type does not allow element
> "VARLISTENTRY" here
> Makefile:147: recipe for target 'postgres.xml' failed
> make: *** [postgres.xml] Error 1
>

I will work on it.

How's the rest of the patch?


Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?

2017-07-28 Thread Andrew Dunstan


On 07/28/2017 08:22 AM, Andrew Dunstan wrote:
>
> On 07/27/2017 11:58 PM, Tom Lane wrote:
>> I kinda suspect we're not actively testing non-MULTIPLICITY builds
>> either.  The 5.8.7 test I just ran was with a non-MULTIPLICITY build,
>> so the case doesn't seem actively broken, but I doubt there is any
>> buildfarm coverage.  I wonder if it'd be worth getting the buildfarm
>> to log the output of "perl -V" so we could get a clearer picture
>> of what's being tested.
>>
> It's quite possible, but in general it will need a new buildfarm client 
> release. If you choose a few possibly critical animals we can ask the owners 
> to apply a fairly simple patch.


Looks like this, but it's rather tedious. I think I might back it out. I
guess I could also write it to a log file, if we really think we need it.



cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 1:06 AM, Noah Misch  wrote:
> [Action required within three days.  This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 10 open item.  Robert,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.

I'll try to get this resolved by the end of next week, but I don't
know if that will be possible.  I don't completely understand the
issue yet.

If we're remapping the varattnos, I don't see how we can just pass
whole-row references through.  I mean, if the partition and the parent
have different varattnos, then a whole-row attribute for one is a
different thing from a whole-row attribute for the other; the
HeapTuple you would need to build in each case is different, based on
the column order for the relation you're worrying about.

(Boy, our implementation of DROP COLUMN is painful!  If we really got
rid of columns when they were dropped we could've avoided this whole
mess.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [GSOC] Eliminate O(N^2) scaling from rw-conflict tracking in serializable transactions

2017-07-28 Thread Robert Haas
On Wed, Jul 26, 2017 at 11:41 AM, Mengxing Liu <
liu-m...@mails.tsinghua.edu.cn> wrote:

> Hi, all. There was a very strange phenomenon I couldn't explain. So I was
> wondering if you can help me.
>
> I was trying to replace the linked list with a skip list in serializable
> transaction object for faster conflict tracking. But the performance is bad.
> So I used the instruction "rdtsc" to compare the speed of my skip list and
> the original linked list. The skip list was about 1.5x faster.
>
> The interesting thing is that if I added the instruction "rdstc" at the
> end of the function "RWConflictExists",
> the performance of the whole system was increased by at most 3 times!
> Here is the result.
>
> benchmark           without rdtsc   with rdtsc
> simple read/write   4.91            14.16
> ssibench            9.72            10.24
> tpcb                26.45           26.38
>
> (The simple read/write benchmark has the largest number of conflicts.)
>
> The patch is attached. All the difference of the two columns is
> with/without a simple line of code:
> __asm__ __volatile__ ("rdtsc");
> But I don't know why this instruction will influence the performance so
> much!
>

Lock contention is really expensive, so a slight delay that is just long
enough to prevent the contention from happening can sometimes improve
performance.  This example is surprisingly dramatic, though.  Of course, we
can't commit it this way -- it will break on non-x86.
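
(For reference, the probe can at least be guarded so it degrades
gracefully off x86 -- a sketch, with an invented function name:)

#include <stdint.h>

static inline uint64_t
read_tsc(void)
{
#if defined(__x86_64__) || defined(__i386__)
	uint32_t	lo, hi;

	__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t) hi << 32) | lo;
#else
	return 0;					/* no cheap cycle counter assumed here */
#endif
}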

I would suggest that you gather information on what wait events are
occurring in the "without rdtsc" case.  Like this:

$ script
$ psql
psql=> select wait_event from pg_stat_activity;
psql=> \watch 0.5
...run test in another window...
^C
\q
^D
...use awk or perl or something to count up the wait events and see where
the contention is happening...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 12:29 PM, Noah Misch  wrote:
> Your colleagues achieve compliance despite uncertainty; for inspiration, I
> recommend examining Alvaro's status updates as examples of this.  The policy
> currently governs your open items even if you disagree with it.

I emphatically agree with that.  If the RMT is to accomplish its
purpose, it must be able to exert authority even when an individual
contributor doesn't like the decisions it makes.

On the other hand, nothing in the open item policy the current RMT has
adopted prohibits you from using judgement about when and how
vigorously to enforce that policy in any particular case, and I would
encourage you to do so.  It didn't make much sense to keep sending
Kevin increasingly strident form letters about each individual item
when he wasn't responding to any emails at all, and it makes equally
little sense to me to nag someone over a technical failure to include
a date when things are obviously progressing adequately.  As Andres
quite rightly says downthread:

> That's just process over substance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Tom Lane
Ashutosh Sharma  writes:
> On Fri, Jul 28, 2017 at 10:05 PM, Tom Lane  wrote:
>> Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
>> via a command-line #define rather than putting it into perl's config.h,
>> and that results in a change in the apparent size of the PerlInterp
>> struct (because of IMem and friends).

> Yes, That's right. We would have seen different result if the
> PERL_IMPLICIT_SYS would not have been defined globally.

I've pushed the changes to the build scripts now.  I had to revise the
Mkvcbuild.pm part a bit, because you'd forgotten to propagate the extra
defines into hstore_plperl.  So that code is untested; please confirm
that HEAD works in your problem environment now.

I notice that Mkvcbuild.pm is artificially injecting a define for
PLPERL_HAVE_UID_GID.  I strongly suspect that that was a hack meant
to work around the lack of this mechanism, and that we could now
get rid of it or clean it up.  But there's no evidence in the commit
log about the reason for it --- it seems to go back to the original
addition of MSVC support.  Anybody remember?

regards, tom lane




Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-28 Thread Alvaro Herrera
Vladimir Kunschikov wrote:
> >This "maxlen" business and the fallback error message are
> >strange.  We have roughly equivalent code in pg_basebackup.c
> >which has been working since 2011
> >Perhaps you can drop the memchr/fallback tricks and adopt the
> >pg_basebackup coding?  Or is there a specific reason to have
> >the memchr check?
> 
> Of course those tricks can be dropped; the function will be much prettier.
> The 'tricks' were made to pass some strict internal tests.

Well, if there are cases which cause zlib to return a message that
causes a crash when printing it or some other problem, then let's use
your version both in this new code and in pg_basebackup.  Do these cases
really exist?

> Initially I used exactly that function from pg_basebackup.c:
> https://github.com/kunschikov/postgres/commit/15e9fda6df51cf17c0b0a4f201ee0f93cf258de9#diff-98e3f8ce5d6e87950dd66e4c8bdedb21R713
> It was rewritten for the sake of somewhat exaggerated security.
> Version #5 in attachment.

I'd rather have all the security we can get, so before dropping those
protections, let's make sure they are useless.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-28 Thread Vladimir Kunschikov
>This "maxlen" business and the fallback error message are
>strange.  We have roughly equivalent code in pg_basebackup.c
>which has been working since 2011
>Perhaps you can drop the memchr/fallback tricks and adopt the
>pg_basebackup coding?  Or is there a specific reason to have
>the memchr check?

Of course those tricks can be dropped; the function will be much prettier.
The 'tricks' were made to pass some strict internal tests.
Initially I used exactly that function from pg_basebackup.c:
https://github.com/kunschikov/postgres/commit/15e9fda6df51cf17c0b0a4f201ee0f93cf258de9#diff-98e3f8ce5d6e87950dd66e4c8bdedb21R713
It was rewritten for the sake of somewhat exaggerated security.
Version #5 in attachment.

--
Regards,
Vladimir Kunschikov
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index 991fe11..b8af1bc 100644
--- a/src/bin/pg_dump/compress_io.c
+++ b/src/bin/pg_dump/compress_io.c
@@ -709,5 +709,19 @@ hasSuffix(const char *filename, const char *suffix)
   suffix,
   suffixlen) == 0;
 }
+#endif
+
+const char *
+get_cfp_error(cfp* fp)
+{
+#ifdef HAVE_LIBZ
+	if(fp->compressedfp){
+		int errnum;
+		const char *errmsg = gzerror(fp->compressedfp, &errnum);
 
+		if( errnum != Z_ERRNO)
+			return errmsg;
+	}
 #endif
+	return strerror(errno);
+}
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index 8f2e752..06b3762 100644
--- a/src/bin/pg_dump/compress_io.h
+++ b/src/bin/pg_dump/compress_io.h
@@ -65,5 +65,6 @@ extern int	cfgetc(cfp *fp);
 extern char *cfgets(cfp *fp, char *buf, int len);
 extern int	cfclose(cfp *fp);
 extern int	cfeof(cfp *fp);
+extern const char * get_cfp_error(cfp *fp);
 
 #endif
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index 79922da..e9c26bc 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -352,7 +352,7 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
@@ -490,7 +490,7 @@ _WriteByte(ArchiveHandle *AH, const int i)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
	if (cfwrite(&i, 1, ctx->dataFH) != 1)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return 1;
 }
@@ -519,7 +519,7 @@ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (cfwrite(buf, len, ctx->dataFH) != len)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index f839712..b35d3fc 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -555,8 +555,12 @@ _tarReadRaw(ArchiveHandle *AH, void *buf, size_t len, TAR_MEMBER *th, FILE *fh)
 			{
 res = GZREAD(&((char *) buf)[used], 1, len, th->zFH);
 if (res != len && !GZEOF(th->zFH))
+{
+	int errnum;
+	const char *errmsg = gzerror(th->zFH, &errnum);
 	exit_horribly(modulename,
-  "could not read from input file: %s\n", strerror(errno));
+  "could not read from input file: %s\n", errnum == Z_ERRNO? strerror(errno) : errmsg);
+}
 			}
 			else
 			{



Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-28 Thread Tom Lane
Andres Freund  writes:
> Anyway, I'll commit it after another pass in ~1 week if it doesn't get a
> review till then, but I assume it'll.

FWIW, I intend to review it today, or tomorrow at the very latest.
(Right now I'm buried in perl droppings.)

regards, tom lane




Re: [HACKERS] UPDATE of partition key

2017-07-28 Thread Amit Khandekar
On 28 July 2017 at 20:10, Robert Haas  wrote:
> On Wed, Jul 26, 2017 at 2:13 AM, Amit Langote
>  wrote:
>> Sorry to be responding this late to Amit's make_resultrel_ordered
>> patch itself, but I agree that we should teach the planner to *always*
>> expand partitioned tables in the partition bound order.
>
> Sounds like we have unanimous agreement on that point.

I too agree.

>
>> I checked that we get the same result relation order with both the
>> patches, but I would like to highlight a notable difference here between
>> the approaches taken by our patches.  In my patch, I have now taught
>> RelationGetPartitionDispatchInfo() to lock *only* the partitioned tables
>> in the tree, because we need to look at its partition descriptor to
>> collect partition OIDs and bounds.  We can defer locking (and opening the
>> relation descriptor of) leaf partitions to a point where planner has
>> determined that the partition will be accessed after all (not pruned),
>> which will be done in a separate patch of course.

With Amit Langote's patch, we can very well do the locking beforehand
by find_all_inheritors(), and then run
RelationGetPartitionDispatchInfo() with noLock, so as to remove the
deadlock problem. But I think we should keep these two tasks separate,
i.e. expanding the partition tree in bound order, and making
RelationGetPartitionDispatchInfo() work for the planner.

Regarding building the PartitionDispatchInfo in the planner, we should
do that only after it is known that partition columns are updated, so
it can't be done in expand_inherited_rtentry() because it would be too
soon. For planner setup, RelationGetPartitionDispatchInfo() should
just build the tupmap for each partitioned table, and then initialize
the rest of the fields, like the tuple slot, reldesc, etc., later during
execution.

So for now, I feel we should just do the changes for making sure the
order is same, and then over that, separately modify
RelationGetPartitionDispatchInfo() for planner.

>
> That's very desirable, but I believe it introduces a deadlock risk
> which Amit's patch avoids.  A transaction using the code you've
> written here is eventually going to lock all partitions, BUT it's
> going to move the partitioned ones to the front of the locking order
> vs. what find_all_inheritors would do.  So, when multi-level
> partitioning is in use, I think it could happen that some other
> transaction is accessing the table using a different code path that
> uses the find_all_inheritors order without modification.  If those
> locks conflict (e.g. query vs. DROP) then there's a deadlock risk.

Yes, I agree. Even with single-level partitioning, find_all_inheritors()
orders the leaf partitions by OID values, so that's also going to be
ordered differently.

>
> Unfortunately I don't see any easy way around that problem, but maybe
> somebody else has an idea.

One approach I had considered was to have find_inheritance_children()
itself lock the children in bound order, so that everyone will have
bound-ordered oids, but that would be too expensive since it requires
opening all partitioned tables to initialize partition descriptors. In
find_inheritance_children(), we get all oids without opening any
tables. But now that I think more of it, it's only the partitioned
tables that we have to open, not the leaf partitions; and furthermore,
I didn't see calls to find_inheritance_children() and
find_all_inheritors() in performance-critical code, except in
expand_inherited_rtentry(). All of them are in DDL commands; but yes,
that can change in the future.

Regarding dynamically locking specific partitions as and when needed,
I think this method inherently has the issue of deadlock because the
order would be random. So it feels like there is no way around other
than to lock all partitions beforehand.



Regarding using first resultrel for mapping RETURNING and WCO, I think
we can use (a renamed) getASTriggerResultRelInfo() to get the root
result relation, and use WCO and RETURNING expressions of this
relation to do the mapping for child rels. This way, there won't be
insert/update specific code, and we don't need to use first result
relation.

While checking the whole-row bug on the other thread [1] , I noticed
that the RETURNING/WCO expressions for the per-subplan result rels are
formed by considering not just simple vars, but also whole row vars
and other nodes. So for update-tuple-routing, there would be some
result-rels WCOs formed using adjust_appendrel_attrs(), while for
others, they would be built using map_partition_varattnos() which only
considers simple vars. So the bug in [1] would be there for
update-partition-key as well, when the tuple is routed into a newly
built resultrel. May be, while fixing the bug in [1] , this might be
automatically solved.



Below are the TODOs at this point:

Fix for bug reported by Rajkumar about update with join.
Do 

[HACKERS] Red-Black tree traversal tests

2017-07-28 Thread Victor Drobny

Hello,

Postgres now has its own red-black tree implementation. This tree has 4
types of traversals. In the attachment, you can find a module test that
checks the correctness of the tree traversal strategies.


I hope that someone can find it useful.

Thank you for your attention!
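
For what it's worth, the pre-order validity check described in the
attached README can be done in linear time with a stack -- a sketch
assuming distinct integer keys, independent of the module's actual code:

#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Return true iff seq[0..n) could be the pre-order traversal of some
 * binary search tree with distinct keys.  Popping the stack whenever a
 * larger key arrives establishes a rising lower bound: once we have
 * descended into a right subtree, every later key must exceed that
 * subtree's root.
 */
static bool
preorder_is_valid_bst(const int *seq, int n)
{
	int		   *stack = malloc(n * sizeof(int));
	int			top = -1;
	int			lower = INT_MIN;
	bool		ok = true;

	for (int i = 0; i < n && ok; i++)
	{
		if (seq[i] < lower)
			ok = false;			/* violates the right-subtree bound */
		while (ok && top >= 0 && stack[top] < seq[i])
			lower = stack[top--];
		if (ok)
			stack[++top] = seq[i];
	}
	free(stack);
	return ok;
}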

--
--
Victor Drobny
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 3ce9904..b7ed0af 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -13,6 +13,7 @@ SUBDIRS = \
 		  test_extensions \
 		  test_parser \
 		  test_pg_dump \
+		  test_rbtree \
 		  test_rls_hooks \
 		  test_shm_mq \
 		  worker_spi
diff --git a/src/test/modules/test_rbtree/.gitignore b/src/test/modules/test_rbtree/.gitignore
new file mode 100644
index 000..5dcb3ff
--- /dev/null
+++ b/src/test/modules/test_rbtree/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/test_rbtree/Makefile b/src/test/modules/test_rbtree/Makefile
new file mode 100644
index 000..11a10cf
--- /dev/null
+++ b/src/test/modules/test_rbtree/Makefile
@@ -0,0 +1,21 @@
+# src/test/modules/test_rbtree/Makefile
+
+MODULE_big = test_rbtree
+OBJS = test.o $(WIN32RES)
+PGFILEDESC = "test_rbtree - rbtree traversal testing"
+
+EXTENSION = test_rbtree
+DATA = test_rbtree--1.0.sql
+
+REGRESS = test_rbtree
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_rbtree
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_rbtree/README b/src/test/modules/test_rbtree/README
new file mode 100644
index 000..9e6c977
--- /dev/null
+++ b/src/test/modules/test_rbtree/README
@@ -0,0 +1,17 @@
+test_rbtree is a module for checking the correctness of all kinds of
+traversal of a red-black tree. Right now rbtree in postgres has 4 kinds of
+traversals: Left-Current-Right, Right-Current-Left, Current-Left-Right and
+Left-Right-Current.
+
+This extension has 4 functions; each function checks one traversal.
+Checking the correctness of the first two types is based on the fact that a
+red-black tree is a binary search tree, so the elements should be iterated
+in increasing (for Left-Current-Right) or decreasing (for Right-Current-Left)
+order.
+To verify the last two strategies, we check whether the produced sequence
+is valid: for a given pre- or post-order traversal of a binary search tree
+it is always possible to say whether it is correct and to rebuild the
+original tree, since such a sequence always determines the current node,
+the left subtree and the right subtree.
+
+These tests are performed on red-black trees that store integers.
\ No newline at end of file
diff --git a/src/test/modules/test_rbtree/expected/test_rbtree.out b/src/test/modules/test_rbtree/expected/test_rbtree.out
new file mode 100644
index 000..be8cd75
--- /dev/null
+++ b/src/test/modules/test_rbtree/expected/test_rbtree.out
@@ -0,0 +1,25 @@
+CREATE EXTENSION test_rbtree;
+SELECT testleftright();
+ testleftright 
+---
+ 
+(1 row)
+
+SELECT testrightleft();
+ testrightleft 
+---
+ 
+(1 row)
+
+SELECT testdirect();
+ testdirect 
+
+ 
+(1 row)
+
+SELECT testinverted();
+ testinverted 
+--
+ 
+(1 row)
+
diff --git a/src/test/modules/test_rbtree/int_rbtree.h b/src/test/modules/test_rbtree/int_rbtree.h
new file mode 100644
index 000..d153616
--- /dev/null
+++ b/src/test/modules/test_rbtree/int_rbtree.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * int_rbtree.h
+ *		Definitions for integer red-black tree
+ *
+ * Copyright (c) 2013-2017, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *		src/test/modules/test_rbtree/int_rbtree.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef INT_RBTREE_H
+#define INT_RBTREE_H
+
+#include "lib/rbtree.h"
+
+typedef struct IntRBTreeNode
+{
+	RBNode		rbnode;
+	int			key;
+
+} IntRBTreeNode;
+
+static int
+cmp(const RBNode *a, const RBNode *b, void *arg)
+{
+	const IntRBTreeNode *ea = (const IntRBTreeNode *) a;
+	const IntRBTreeNode *eb = (const IntRBTreeNode *) b;
+
+	return ea->key - eb->key;
+}
+
+static RBNode *
+alloc(void *arg)
+{
+	IntRBTreeNode *ea;
+	ea = malloc(sizeof(IntRBTreeNode));
+	return (RBNode *) ea;
+}
+
+static void
+fr(RBNode * node, void *arg)
+{
+	free(node);
+}
+
+#endif // INT_RBTREE_H
diff --git a/src/test/modules/test_rbtree/sql/test_rbtree.sql b/src/test/modules/test_rbtree/sql/test_rbtree.sql
new file mode 100644
index 000..edd90c1
--- /dev/null
+++ b/src/test/modules/test_rbtree/sql/test_rbtree.sql
@@ -0,0 +1,6 @@
+CREATE EXTENSION test_rbtree;
+
+SELECT testleftright();
+SELECT 

Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Ashutosh Sharma
On Fri, Jul 28, 2017 at 10:05 PM, Tom Lane  wrote:
> Ashutosh Sharma  writes:
>> On Fri, Jul 28, 2017 at 7:22 PM, Tom Lane  wrote:
>>> Assuming that the Perl crew know what they're doing and this list is
>>> complete, I notice that not one of the symbols they show as relevant
>>> starts with an underscore.  So I'm thinking that my previous semi-
>>> joking idea of absorbing only -D switches for names that *don't*
>>> start with an underscore was actually a good solution.
>
>> Okay, as per your suggestion, I have modified my earlier patches that
>> imports the -D switches used by perl into plperl accordingly i.e. it
>> now ignores the switches whose name starts with underscore. Please
>> find plperl_win_v3 and plperl_linux_v2 patches attached with this
>> email.
>
> OK, thanks.  I've pushed the XSUB/dTHX patch after another round of
> code-reading and some minor comment improvements; we'll soon see
> what the buildfarm thinks of it.  In the meantime I'll work on these
> two patches.

Sure, Thanks a lot.

>
>>> (BTW, you never did tell us exactly what defines you're getting
>>> out of Perl's flags on the problem installation.)
>
>> I am really sorry about that. I just missed that in my earlier email.
>> Here are the defines used in the perl where i could reproduce the
>> issue,
>
>> C:\Users\ashu>perl -MConfig -e "print $Config{ccflags}"
>> -nologo -GF -W3 -O1 -MD -Zi -DNDEBUG -GL -fp:precise -DWIN32
>> -D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE  -DUSE_SITECUSTOMIZE
>> -DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS
>
> Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
> via a command-line #define rather than putting it into perl's config.h,
> and that results in a change in the apparent size of the PerlInterp
> struct (because of IMem and friends).

Yes, That's right. We would have seen different result if the
PERL_IMPLICIT_SYS would not have been defined globally.

> We'd expected as much, but it's
> good to have clear confirmation.

That's right. It is always good to have a clear confirmation. Thanks.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com




Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-28 Thread Andres Freund
On 2017-07-28 09:29:58 -0700, Noah Misch wrote:
> On Thu, Jul 27, 2017 at 10:08:57PM -0700, Andres Freund wrote:
> > For me that means the policy isn't quite right.  It's not like I can
> > force Tom to review the patch at a specific date. But the thread has
> > been progressing steadily over the last days, so I'm not particularly
> > concerned.
> 
> Your colleagues achieve compliance despite uncertainty; for inspiration, I
> recommend examining Alvaro's status updates as examples of this.  The policy
> currently governs your open items even if you disagree with it.

That's just process over substance.  Publishing repeated "I'll look at
it again in $small_n days" doesn't really provide something useful.

Anyway, I'll commit it after another pass in ~1 week if it doesn't get a
review till then, but I assume it'll.

Greetings,

Andres Freund




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Tom Lane
Ashutosh Sharma  writes:
> On Fri, Jul 28, 2017 at 7:22 PM, Tom Lane  wrote:
>> Assuming that the Perl crew know what they're doing and this list is
>> complete, I notice that not one of the symbols they show as relevant
>> starts with an underscore.  So I'm thinking that my previous semi-
>> joking idea of absorbing only -D switches for names that *don't*
>> start with an underscore was actually a good solution.

> Okay, as per your suggestion, I have modified my earlier patches that
> imports the -D switches used by perl into plperl accordingly i.e. it
> now ignores the switches whose name starts with underscore. Please
> find plperl_win_v3 and plperl_linux_v2 patches attached with this
> email.

OK, thanks.  I've pushed the XSUB/dTHX patch after another round of
code-reading and some minor comment improvements; we'll soon see
what the buildfarm thinks of it.  In the meantime I'll work on these
two patches.

>> (BTW, you never did tell us exactly what defines you're getting
>> out of Perl's flags on the problem installation.)

> I am really sorry about that. I just missed that in my earlier email.
> Here are the defines used in the perl where i could reproduce the
> issue,

> C:\Users\ashu>perl -MConfig -e "print $Config{ccflags}"
> -nologo -GF -W3 -O1 -MD -Zi -DNDEBUG -GL -fp:precise -DWIN32
> -D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE  -DUSE_SITECUSTOMIZE
> -DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS

Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
via a command-line #define rather than putting it into perl's config.h,
and that results in a change in the apparent size of the PerlInterp
struct (because of IMem and friends).  We'd expected as much, but it's
good to have clear confirmation.
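
(To illustrate the failure mode -- this is not the real Perl header,
just a schematic of why a command-line define changes the ABI:)

struct interp_like
{
	long	always_there;
#ifdef PERL_IMPLICIT_SYS
	void   *IMem;				/* host-supplied vtables, present only when
								 * the define is visible at compile time */
	void   *IEnv;
#endif
	long	next_field;			/* lands at a different offset otherwise */
};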

regards, tom lane




Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-28 Thread Noah Misch
On Thu, Jul 27, 2017 at 10:08:57PM -0700, Andres Freund wrote:
> On 2017-07-27 22:04:59 -0700, Noah Misch wrote:
> > On Thu, Jul 27, 2017 at 09:49:18PM -0700, Andres Freund wrote:
> > > On 2017-07-27 21:46:57 -0700, Noah Misch wrote:
> > > > On Thu, Jul 27, 2017 at 02:29:32AM +, Noah Misch wrote:
> > > > > On Mon, Jul 24, 2017 at 08:04:30AM +0100, Andres Freund wrote:
> > > > > > 
> > > > > > 
> > > > > > On July 24, 2017 7:10:19 AM GMT+01:00, Noah Misch 
> > > > > >  wrote:
> > > > > > >On Tue, Jul 18, 2017 at 01:04:10PM -0700, Andres Freund wrote:
> > > > > > >> Ok, I'll flesh out the patch till Thursday.  But I do think we're
> > > > > > >going
> > > > > > >> to have to do something about the back branches, too.
> > > > > > >
> > > > > > >This PostgreSQL 10 open item is past due for your status update. 
> > > > > > >Kindly send
> > > > > > >a status update within 24 hours, and include a date for your 
> > > > > > >subsequent
> > > > > > >status
> > > > > > >update.  Refer to the policy on open item ownership:
> > > > > > >https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > > > > 
> > > > > > I sent out a note fleshed out patch last week, which Tom reviewed. 
> > > > > > Planning to update it to address that review today or tomorrow.
> > > > > 
> > > > > This PostgreSQL 10 open item is past due for your status update.  
> > > > > Kindly send
> > > > > a status update within 24 hours, and include a date for your 
> > > > > subsequent status
> > > > > update.  Refer to the policy on open item ownership:
> > > > > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > > 
> > > > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long 
> > > > past due
> > > > for your status update.  Please reacquaint yourself with the policy on 
> > > > open
> > > > item ownership[1] and then reply immediately.  If I do not hear from 
> > > > you by
> > > > 2017-07-29 05:00 UTC, I will transfer this item to release management 
> > > > team
> > > > ownership without further notice.
> > > > 
> > > > [1] 
> > > > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > 
> > > I've updated the patch based on review (today). Awaiting new review.
> > > 
> > > FWIW, I don't see the point of these messages when there is a new patch
> > > version posted today.
> > 
> > The policy says, "Each update shall state a date when the community will
> > receive another update".  Nothing you've sent today specifies a deadline for
> > your next update, so your ownership of this item remains out of
> > compliance.
> 
> For me that means the policy isn't quite right.  It's not like I can
> force Tom to review the patch at a specific date. But the thread has
> been progressing steadily over the last days, so I'm not particularly
> concerned.

Your colleagues achieve compliance despite uncertainty; for inspiration, I
recommend examining Alvaro's status updates as examples of this.  The policy
currently governs your open items even if you disagree with it.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: LP_DEAD hinting and not holding on to a buffer pin on leaf page (Was: [HACKERS] [WIP] Zipfian distribution in pgbench)

2017-07-28 Thread Peter Geoghegan

Amit Kapila  wrote:

Isn't it possible to confirm if the problem is due to commit
2ed5b87f9?  Basically, if we have unlogged tables, then it won't
release the pin.  So if the commit in question is the culprit, then
the same workload should not lead to bloat.


That's a great idea. Alik?

--
Peter Geoghegan


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-28 Thread Alvaro Herrera
Kunshchikov Vladimir wrote:
> Hello Alvaro,
> 
> here goes v4 version: removed unused header.
> 
Compilation of this code snippet with -Wall -Wextra -std=c89 doesn't produce 
> any warnings.

Great, thanks.

+const char *
+get_cfp_error(cfp* fp)
+{
+#ifdef HAVE_LIBZ
+   if(fp->compressedfp){
+   int errnum;
+   static const char fallback[] = "Zlib error";
+   const int maxlen = 255;
+   const char *errmsg = gzerror(fp->compressedfp, &errnum);
+   if(!errmsg || !memchr(errmsg, 0, maxlen))
+   errmsg = fallback;
+
+   return errnum == Z_ERRNO ? strerror(errno) : errmsg;
+   }
 #endif
+   return strerror(errno);
+}

This "maxlen" business and the fallback error message are strange.  We
have roughly equivalent code in pg_basebackup.c, which has been working
since 2011 (commit 048d148fe631), and it looks like this:

#ifdef HAVE_LIBZ
static const char *
get_gz_error(gzFile gzf)
{
int errnum;
const char *errmsg;

errmsg = gzerror(gzf, &errnum);
if (errnum == Z_ERRNO)
return strerror(errno);
else
return errmsg;
}
#endif

Perhaps you can drop the memchr/fallback tricks and adopt the
pg_basebackup coding?  Or is there a specific reason to have the memchr
check?
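
For what it's worth, here is a minimal standalone sketch of that suggestion
(assuming a zlib build; the cfp struct below is just a stand-in for the real
one in compress_io.c):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Stand-in for pg_dump's cfp; the real definition lives in compress_io.c. */
typedef struct cfp
{
	FILE	   *uncompressedfp;
	gzFile		compressedfp;
} cfp;

/*
 * pg_basebackup-style error reporting: let zlib supply the message and
 * fall back to strerror(errno) only when zlib reports Z_ERRNO, i.e. the
 * underlying failure was a plain system error.
 */
const char *
get_cfp_error(cfp *fp)
{
	if (fp->compressedfp)
	{
		int			errnum;
		const char *errmsg = gzerror(fp->compressedfp, &errnum);

		return errnum == Z_ERRNO ? strerror(errno) : errmsg;
	}
	return strerror(errno);
}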

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tab complete for psql pset pager values

2017-07-28 Thread Jeff Janes
On Wed, Jul 26, 2017 at 8:02 AM, Pavel Stehule 
wrote:

> Hi
>
> attached trivial patch for missing tab complete for \pset pager setting
>
>
Looks good to me.  I've set it ready for committer.

Cheers,

Jeff


Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Ashutosh Sharma
On Fri, Jul 28, 2017 at 7:22 PM, Tom Lane  wrote:
> Ashutosh Sharma  writes:
>> Thanks for the patch. I am seeing some compilation errors on Windows
>> with the patch. Below are the errors observed,
>> ...
>> I did spend some time to find the reason for these compilation errors and
>> could eventually find that some of the Windows specific functions
>> inside plperl module are calling Perl APIs without fetching the perl
>> interpreter's context using dTHX.
>
> Ah, thanks.  I just stuck that in where compiler errors were telling
> me to, so I didn't realize there were Windows-specific functions to
> worry about.
>
>> Moreover, I have also tested this patch along with the patch to import
>> switches used by perl into plperl and together it fixes the server
>> crash issue. Also, now, the interpreter size in both perl and plperl
>> module are the same thereby generating the same key on both plperl and
>> perl module. Thanks.
>
> Excellent.  So this looks like the avenue to pursue.
>
> As far as the question of which symbols to import goes: on my own
> machines I'm seeing a lot of things like
>
> $ perl -MConfig -e 'print $Config{ccflags}'
> -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector 
> -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
>
> $ perl -MConfig -e 'print $Config{ccflags}'
>  -Ae -D_HPUX_SOURCE -Wl,+vnocompatwarnings -DDEBUGGING -I/usr/local/include 
> -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
>
> I'm really quite afraid to import symbols like _LARGEFILE_SOURCE and
> _FILE_OFFSET_BITS into plperl; those *will* break things if they
> are different from what core Postgres is built with.  Moreover,
> I came across a relevant data structure in perl.h:
>
> /* These are all the compile time options that affect binary compatibility.
>Other compile time options that are binary compatible are in perl.c
>Both are combined for the output of perl -V
>However, this string will be embedded in any shared perl library, which 
> will
>allow us add a comparison check in perlmain.c in the near future.  */
> #ifdef DOINIT
> EXTCONST char PL_bincompat_options[] =
> #  ifdef DEBUG_LEAKING_SCALARS
>  " DEBUG_LEAKING_SCALARS"
> #  endif
> #  ifdef DEBUG_LEAKING_SCALARS_FORK_DUMP
>  " DEBUG_LEAKING_SCALARS_FORK_DUMP"
> #  endif
> #  ifdef FAKE_THREADS
>  " FAKE_THREADS"
> #  endif
> #  ifdef MULTIPLICITY
>  " MULTIPLICITY"
> #  endif
> ... lots more ...

Thanks for sharing this information. I too had a look into the
'PL_bincompat_options' data structure in perl.h and I didn't see any
macro name starting with an underscore. Based on this information, we can
now confidently say that we do not need any -D switches starting with an
underscore for binary compatibility purposes.

>
> Assuming that the Perl crew know what they're doing and this list is
> complete, I notice that not one of the symbols they show as relevant
> starts with an underscore.  So I'm thinking that my previous semi-
> joking idea of absorbing only -D switches for names that *don't*
> start with an underscore was actually a good solution.

Yes, it was a good solution indeed.

> If that
> turns out to not be enough of a filter, we could consider looking
> into perl.h to extract the set of symbols tested in building
> PL_bincompat_options and then intersecting that with what we get
> from Perl's ccflags.  But that would be a lot more complex, so
> I'd rather go with the simpler filter rule for now.

Okay, as per your suggestion, I have modified my earlier patches that
import the -D switches used by perl into plperl accordingly, i.e., they
now ignore the switches whose names start with an underscore. Please
find the plperl_win_v3 and plperl_linux_v2 patches attached with this
email.

>
> (BTW, you never did tell us exactly what defines you're getting
> out of Perl's flags on the problem installation.)

I am really sorry about that. I just missed that in my earlier email.
Here are the defines used in the perl build where I could reproduce the
issue:

C:\Users\ashu>perl -MConfig -e "print $Config{ccflags}"
-nologo -GF -W3 -O1 -MD -Zi -DNDEBUG -GL -fp:precise -DWIN32
-D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE  -DUSE_SITECUSTOMIZE
-DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index e0bf607..f0ada1b 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -518,6 +518,16 @@ sub mkvcbuild
 		  $solution->AddProject('plperl', 'dll', 'PLs', 'src/pl/plperl');
 		$plperl->AddIncludeDir($solution->{options}->{perl} . '/lib/CORE');
 		$plperl->AddDefine('PLPERL_HAVE_UID_GID');
+
+		foreach my $f (split(" ",$Config{ccflags}))
+		{
+			if ($f =~ /^-D/ && $f !~ /^-D_/)
+			{
+				$f =~ s/\-D//;
+

[HACKERS] Re: [bug fix] PG10: libpq doesn't connect to alternative hosts when some errors occur

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 1:30 AM, Noah Misch  wrote:
> On Fri, May 19, 2017 at 11:08:41AM +0900, Michael Paquier wrote:
>> On Fri, May 19, 2017 at 11:01 AM, Tsunakawa, Takayuki
>>  wrote:
>> > From: pgsql-hackers-ow...@postgresql.org
>> >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Eisentraut
>> >> The problem is that if we decide to change the behavior mid-beta, then 
>> >> we'll
>> >> only have the rest of beta to find out whether people will like the other
>> >> behavior.
>> >>
>> >> I would aim for the behavior that is most suitable for refinement in the
>> >> future.  The current behavior seems to match that.
>> >
>> > I think the pre-final release period is the very timing for refinement, in 
>> > the perspective of users and PG developers as users.
>>
>> Sure that is the correct period to argue.
>
> We've reached that period.  If anyone is going to push for a change here, now
> is the time.  Absent such arguments, the behavior won't change.

Well, I started out believing that the current behavior was for the
best, and then completely reversed my position and favored the OP's
proposal.  Nothing has really happened since then to change my mind,
so I guess I'm still in that camp.  But do we have any new data
points?  Have any beta-testers tested this and what do they think?
The only non-developer (i.e. person not living in an ivory tower) who
has weighed in here is Tels, who favored reversing the original
decision and adopting Tsunakawa-san's position, and that was 2 months
ago.

I am pretty reluctant to tinker with this at this late date and in the
face of several opposing votes, but I do think that we bet on the
wrong horse.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Write Amplification Reduction Method (WARM)

2017-07-28 Thread Robert Haas
On Fri, Jul 28, 2017 at 12:39 AM, Pavan Deolasee
 wrote:
> I see your point. But I would like to think this way: does the technology
> significantly help many common use cases, that are currently not addressed
> by HOT? It probably won't help all workloads, that's given. Also, we don't
> have any credible alternative while this patch has progressed quite a lot.
> May be Robert will soon present the pluggable storage/UNDO patch and that
> will cover everything and more that is currently covered by HOT/WARM. That
> will probably make many other things redundant.

A lot of work is currently being done on this, by multiple people,
mostly not including me, and a lot of good progress is being made.
But it's not exactly ready to ship, nor will it be any time soon.  I
think we can run a 1-client pgbench without crashing the server at
this point, if you tweak the configuration a little bit and don't do
anything fancy like, say, try to roll back a transaction.  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UPDATE of partition key

2017-07-28 Thread Robert Haas
On Wed, Jul 26, 2017 at 2:13 AM, Amit Langote
 wrote:
> Sorry to be responding this late to the Amit's make_resultrel_ordered
> patch itself, but I agree that we should teach the planner to *always*
> expand partitioned tables in the partition bound order.

Sounds like we have unanimous agreement on that point.  Yesterday, I
was discussing with Beena Emerson, who is working on run-time
partition pruning, that it would also be useful for that purpose, if
you're trying to prune based on a range query.

> I checked that we get the same result relation order with both the
> patches, but I would like to highlight a notable difference here between
> the approaches taken by our patches.  In my patch, I have now taught
> RelationGetPartitionDispatchInfo() to lock *only* the partitioned tables
> in the tree, because we need to look at its partition descriptor to
> collect partition OIDs and bounds.  We can defer locking (and opening the
> relation descriptor of) leaf partitions to a point where planner has
> determined that the partition will be accessed after all (not pruned),
> which will be done in a separate patch of course.

That's very desirable, but I believe it introduces a deadlock risk
which Amit's patch avoids.  A transaction using the code you've
written here is eventually going to lock all partitions, BUT it's
going to move the partitioned ones to the front of the locking order
vs. what find_all_inheritors would do.  So, when multi-level
partitioning is in use, I think it could happen that some other
transaction is accessing the table using a different code path that
uses the find_all_inheritors order without modification.  If those
locks conflict (e.g. query vs. DROP) then there's a deadlock risk.

Unfortunately I don't see any easy way around that problem, but maybe
somebody else has an idea.
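
To make the hazard concrete, here is a self-contained C sketch of that
failure mode, using plain pthreads rather than PostgreSQL's lock manager
(the lock names are stand-ins): two code paths taking the same two locks
in opposite orders can deadlock.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for locks on a mid-level partitioned table and one of its leaves. */
static pthread_mutex_t sub_parent = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t leaf = PTHREAD_MUTEX_INITIALIZER;

/* Path A: partitioned tables first, leaf partitions deferred. */
static void *
path_a(void *arg)
{
	(void) arg;
	pthread_mutex_lock(&sub_parent);
	sleep(1);					/* widen the race window */
	pthread_mutex_lock(&leaf);	/* blocks if path B already holds leaf */
	puts("path A acquired both locks");
	pthread_mutex_unlock(&leaf);
	pthread_mutex_unlock(&sub_parent);
	return NULL;
}

/* Path B: a find_all_inheritors-style order that reaches the leaf first. */
static void *
path_b(void *arg)
{
	(void) arg;
	pthread_mutex_lock(&leaf);
	sleep(1);
	pthread_mutex_lock(&sub_parent);	/* blocks if path A holds sub_parent */
	puts("path B acquired both locks");
	pthread_mutex_unlock(&sub_parent);
	pthread_mutex_unlock(&leaf);
	return NULL;
}

int
main(void)
{
	pthread_t	a,
				b;

	pthread_create(&a, NULL, path_a, NULL);
	pthread_create(&b, NULL, path_b, NULL);
	pthread_join(a, NULL);		/* with the sleeps, this typically deadlocks */
	pthread_join(b, NULL);
	return 0;
}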

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-07-28 Thread Andreas Joseph Krogh
Hi -hackers.
 
I'm reading https://www.postgresql.org/docs/10/static/pgupgrade.html to try to 
understand how to upgrade standby-servers using pg_upgrade with pg10.
 
The text in step 10 says:
"You will not be running pg_upgrade on the standby servers, but rather rsync", 
which to me sounds like rsync, in step 10-f, should be issued on the standby 
servers. Is this the case? If so, I don't understand how the standby's data is 
upgraded and what "remote_dir" is. If rsync is supposed to be issued on the 
primary then I think it should be explicitly mentioned, and step 10-f should 
provide a clearer example with more detailed values for the directory structures 
involved.
 
I really think section 10 needs improvement as I'm certainly not comfortable 
upgrading standbys following the existing procedure.
 
Thanks.
 
--
Andreas Joseph Krogh


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-07-28 Thread Michael Paquier
On Fri, Jul 28, 2017 at 7:28 AM, Masahiko Sawada  wrote:
> That also requires to share the same XID space with all remote nodes.

You are putting your finger on the main bottleneck with global
consistency that XC and XL have because of that. And the source feeding
the XIDs is a SPOF.

> Perhaps the CSN based snapshot can make this more simple.

Hm. This needs a closer look.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Tom Lane
Ashutosh Sharma  writes:
> Thanks for the patch. I am seeing some compilation errors on Windows
> with the patch. Below are the errors observed,
> ...
> I did spend some time to find the reason for these compilation errors and
> could eventually find that some of the Windows specific functions
> inside plperl module are calling Perl APIs without fetching the perl
> interpreter's context using dTHX.

Ah, thanks.  I just stuck that in where compiler errors were telling
me to, so I didn't realize there were Windows-specific functions to
worry about.

> Moreover, I have also tested this patch along with the patch to import
> switches used by perl into plperl and together it fixes the server
> crash issue. Also, now, the interpreter size in both perl and plperl
> module are the same thereby generating the same key on both plperl and
> perl module. Thanks.

Excellent.  So this looks like the avenue to pursue.

As far as the question of which symbols to import goes: on my own
machines I'm seeing a lot of things like

$ perl -MConfig -e 'print $Config{ccflags}'
-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector 
-I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64

$ perl -MConfig -e 'print $Config{ccflags}'
 -Ae -D_HPUX_SOURCE -Wl,+vnocompatwarnings -DDEBUGGING -I/usr/local/include 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 

I'm really quite afraid to import symbols like _LARGEFILE_SOURCE and
_FILE_OFFSET_BITS into plperl; those *will* break things if they
are different from what core Postgres is built with.  Moreover,
I came across a relevant data structure in perl.h:

/* These are all the compile time options that affect binary compatibility.
   Other compile time options that are binary compatible are in perl.c
   Both are combined for the output of perl -V
   However, this string will be embedded in any shared perl library, which will
   allow us add a comparison check in perlmain.c in the near future.  */
#ifdef DOINIT
EXTCONST char PL_bincompat_options[] =
#  ifdef DEBUG_LEAKING_SCALARS
 " DEBUG_LEAKING_SCALARS"
#  endif
#  ifdef DEBUG_LEAKING_SCALARS_FORK_DUMP
 " DEBUG_LEAKING_SCALARS_FORK_DUMP"
#  endif
#  ifdef FAKE_THREADS
 " FAKE_THREADS"
#  endif
#  ifdef MULTIPLICITY
 " MULTIPLICITY"
#  endif
... lots more ...

Assuming that the Perl crew know what they're doing and this list is
complete, I notice that not one of the symbols they show as relevant
starts with an underscore.  So I'm thinking that my previous semi-
joking idea of absorbing only -D switches for names that *don't*
start with an underscore was actually a good solution.  If that
turns out to not be enough of a filter, we could consider looking
into perl.h to extract the set of symbols tested in building
PL_bincompat_options and then intersecting that with what we get
from Perl's ccflags.  But that would be a lot more complex, so
I'd rather go with the simpler filter rule for now.
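
To illustrate that filter rule with a standalone toy (the real code is the
Perl in Mkvcbuild.pm, not this): keep a -D switch only when the defined
name does not begin with an underscore.

#include <stdio.h>
#include <string.h>

/*
 * Print the -D switches from a ccflags-style string that plperl could
 * safely absorb, skipping names that begin with an underscore
 * (platform/ABI knobs such as _LARGEFILE_SOURCE or _FILE_OFFSET_BITS).
 */
static void
print_absorbable_defines(const char *ccflags)
{
	char		buf[1024];
	char	   *tok;

	strncpy(buf, ccflags, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';

	for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " "))
	{
		if (strncmp(tok, "-D", 2) == 0 && tok[2] != '\0' && tok[2] != '_')
			printf("absorb: %s\n", tok);
	}
}

int
main(void)
{
	/* prints only -DMULTIPLICITY and -DPERL_IMPLICIT_CONTEXT */
	print_absorbable_defines("-D_REENTRANT -D_GNU_SOURCE -DMULTIPLICITY "
							 "-DPERL_IMPLICIT_CONTEXT -D_FILE_OFFSET_BITS=64");
	return 0;
}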

(BTW, you never did tell us exactly what defines you're getting
out of Perl's flags on the problem installation.)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Adding support for Default partition in partitioning

2017-07-28 Thread Ashutosh Bapat
On Wed, Jul 26, 2017 at 5:44 PM, Jeevan Ladhe
 wrote:
> Hi,
>
> I have rebased the patches on the latest commit.
>

Thanks for rebasing the patches. The patches apply and compile
cleanly. make check passes.

Here are some review comments
0001 patch
Most of this patch is the same as the 0002 patch posted in thread [1]. I have
extensively reviewed that patch for Amit Langote. Can you please compare these
two patches and try to address those comments, OR just use the patch from that
thread? For example, canSkipPartConstraintValidation() is named
PartConstraintImpliedByRelConstraint() in that patch. OR
+if (scanRel_constr == NULL)
+return false;
+
is not there in that patch since returning false is wrong when partConstraint
is NULL. I think this patch needs those fixes. Also, this patch set would need
a rebase when 0001 from that thread gets committed.

0002 patch
+if (!and_args)
+result = NULL;
Add "NULL, if there are not partition constraints e.g. in case of default
partition as the only partition.". This patch avoids calling
validatePartitionConstraints() and hence canSkipPartConstraintValidation() when
partConstraint is NULL, but patches in [1] introduce more callers of
canSkipPartConstraintValidation() which may pass NULL. So, it's better that we
handle that case.

0003 patch
+parentRel = heap_open(parentOid, AccessExclusiveLock);
In [2], Amit Langote has given a reason as to why heap_drop_with_catalog()
should not heap_open() the parent relation. But this patch still calls
heap_open() without giving any counter argument. Also I don't see
get_default_partition_oid() using Relation anywhere. If you remove that
heap_open(), please remove the following heap_close() as well.
+heap_close(parentRel, NoLock);

+/*
+ * The default partition accepts any non-specified
+ * value, hence it should not get a mapped index while
+ * assigning those for non-null datums.
+ */
Instead of "any non-specified value", you may want to use "any value not
specified in the lists of other partitions" or something like that.

+ * If this is a NULL, route it to the null-accepting partition.
+ * Otherwise, route by searching the array of partition bounds.
You may want to write it as "If this is a null partition key, ..." to clarify
what's NULL.

+ * cur_index < 0 means we could not find a non-default partition of
+ * this parent. cur_index >= 0 means we either found the leaf
+ * partition, or the next parent to find a partition of.
+ *
+ * If we couldn't find a non-default partition check if the default
+ * partition exists, if it does, get its index.
In order to avoid repeating "we couldn't find a ...", you may want to add ",
try default partition if one exists." in the first sentence itself.
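
For what it's worth, the intended control flow reads roughly like this toy
standalone model (plain arrays instead of the patch's bound search and
get_default_partition_oid(); only the fallback logic is the point here):

#include <stdio.h>

#define NPARTS 3

static const int list_bounds[NPARTS] = {1, 2, 3};	/* non-default partitions */
static const int default_index = NPARTS;	/* slot for the default partition */

/*
 * Toy routing: try the non-default partitions first; if no bound matches,
 * fall back to the default partition when one exists.
 */
static int
route_tuple(int key, int has_default)
{
	int			i;

	for (i = 0; i < NPARTS; i++)
		if (list_bounds[i] == key)
			return i;			/* cur_index >= 0: found a partition */
	if (has_default)
		return default_index;	/* no bound matched: use the default */
	return -1;					/* cur_index < 0: caller reports an error */
}

int
main(void)
{
	printf("key 2 -> %d\n", route_tuple(2, 1));	/* matches a list bound */
	printf("key 9 -> %d\n", route_tuple(9, 1));	/* routed to the default */
	printf("key 9 -> %d\n", route_tuple(9, 0));	/* -1: would be an error */
	return 0;
}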

get_default_partition_oid() is defined in this patch and then redefined in
0004. Let's define it only once, preferably in or before the 0003 patch.

+ * partition strategy. Assign the parent strategy to the default
s/parent/parent's/

+-- attaching default partition overlaps if the default partition already exists
+CREATE TABLE def_part PARTITION OF list_parted DEFAULT;
+CREATE TABLE fail_def_part (LIKE part_1 INCLUDING CONSTRAINTS);
+ALTER TABLE list_parted ATTACH PARTITION fail_def_part DEFAULT;
+ERROR:  cannot attach a new partition to table "list_parted" having a
default partition
For the 0003 patch this testcase is the same as the testcase in the next hunk:
no new partition can be added after the default partition. Please add this
testcase in the next set of patches.

+-- fail
+insert into part_default values ('aa', 2);
Maybe explain why the insert should fail, e.g. "A row which would fit another
partition does not fit the default partition, even when inserted directly",
or something like that. I see that many of the tests in that file do not
explain why something should "fail" or be "ok", but maybe it's better to
document the reason for better readability and future reference.

+-- check in case of multi-level default partitioned table
s/in/the/ ?. Or you may want to reword it as "default partitioned partition in
multi-level partitioned table" as there is nothing like "default partitioned
table". May be we need a testcase where every level of a multi-level
partitioned table has a default partition.

+-- drop default, as we need to add some more partitions to test tuple routing
Should be clubbed with the actual DROP statement?

+-- Check that addition or removal of any partition is correctly dealt with by
+-- default partition table when it is being used in cached plan.
The plan of a prepared statement gets cached only after it's executed 5 times.
Before that the statement gets invalidated, but there's no cached plan that
gets invalidated. The test is fine here, but in order to test the cached plan
as mentioned in the comment, you may want to execute the prepared statement at
least five times first.

Re: LP_DEAD hinting and not holding on to a buffer pin on leaf page (Was: [HACKERS] [WIP] Zipfian distribution in pgbench)

2017-07-28 Thread Amit Kapila
On Wed, Jul 26, 2017 at 3:32 AM, Peter Geoghegan  wrote:
> On Fri, Jul 14, 2017 at 5:06 PM, Peter Geoghegan  wrote:
>> I think that what this probably comes down to, more than anything
>> else, is that you have leftmost hot/bloated leaf pages like this:
>>
>>
>>           idx          | level | l_item | blkno | btpo_prev | btpo_next | btpo_flags | type | live_items | dead_items | avg_item_size | page_size | free_size | highkey
>> -----------------------+-------+--------+-------+-----------+-----------+------------+------+------------+------------+---------------+-----------+-----------+-------------------------
>> ...
>>  pgbench_accounts_pkey |     0 |      1 |     1 |         0 |      2751 |         65 | l    |        100 |         41 |            16 |      8192 |      5328 | 11 00 00 00 00 00 00 00
>>  pgbench_accounts_pkey |     0 |      2 |  2751 |         1 |      2746 |         65 | l    |         48 |         90 |            16 |      8192 |      5388 | 32 00 00 00 00 00 00 00
>> ...
>>
>> The high key for the leftmost shows that only values below 0x11 belong
>> on the first page. This is about 16 or 17 possible distinct values,
>> and yet the page has 100 live items, and 41 dead items; in total,
>> there is room for 367 items. It's only slightly better with other
>> nearby pages. This is especially bad because once the keyspace gets
>> split up this finely, it's *impossible* to reverse it -- it's more or
>> less a permanent problem, at least until a REINDEX.
>
> I've been thinking about this a lot, because this really does look
> like a pathological case to me. I think that this workload is very
> sensitive to how effective kill_prior_tuples/LP_DEAD hinting is. Or at
> least, I can imagine that mechanism doing a lot better than it
> actually manages to do here. I wonder if it's possible that commit
> 2ed5b87f9, which let MVCC snapshots not hold on to a pin on leaf
> pages, should have considered workloads like this.
>

Isn't it possible to confirm if the problem is due to commit
2ed5b87f9?  Basically, if we have unlogged tables, then it won't
release the pin.  So if the commit in question is the culprit, then
the same workload should not lead to bloat.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?

2017-07-28 Thread Andrew Dunstan


On 07/27/2017 11:58 PM, Tom Lane wrote:
>
> I kinda suspect we're not actively testing non-MULTIPLICITY builds
> either.  The 5.8.7 test I just ran was with a non-MULTIPLICITY build,
> so the case doesn't seem actively broken, but I doubt there is any
> buildfarm coverage.  I wonder if it'd be worth getting the buildfarm
> to log the output of "perl -V" so we could get a clearer picture
> of what's being tested.
>
It's quite possible, but in general it will need a new buildfarm client 
release. If you choose a few possibly critical animals we can ask the owners to 
apply a fairly simple patch.

cheers

andrew



-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Update comments in nodeModifyTable.c

2017-07-28 Thread Etsuro Fujita

On 2017/07/26 22:39, Robert Haas wrote:

On Wed, Jun 14, 2017 at 10:40 PM, Etsuro Fujita
 wrote:

Attached is an updated version of the patch.


Well, now I'm confused:

   * Initialize the junk filter(s) if needed.  INSERT queries need a filter
   * if there are any junk attrs in the tlist.  UPDATE and DELETE always
   * need a filter, since there's always a junk 'ctid' or 'wholerow'
- * attribute present --- no need to look first.
+ * attribute present if not foreign table, and if foreign table, there
+ * are always junk attributes present the FDW needs to identify the exact
+ * row to update or delete --- no need to look first.  For foreign tables,
+ * there's also a wholerow attribute when the relation has a row-level
+ * trigger on UPDATE/DELETE but not on INSERT.

So the first part of the change weakens the assertion that a 'ctid' or
'wholerow' attribute will always be present by saying that an FDW may
instead have other attributes sufficient to identify the row.


That's right.


But
then the additional sentence says that there will be a 'wholerow'
attribute after all.  So this whole change seems to me to be going
around in a circle.


What I mean by the additional sentence is: if the result table is a 
foreign table that has a row-level UPDATE/DELETE trigger, a 'wholerow' 
attribute will also be present.  So, if the result table didn't have the 
trigger, there wouldn't be a 'wholerow', and in that case it's possible 
that only attributes other than 'ctid' and 'wholerow' would be present.


Best regards,
Etsuro Fujita



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql's \d and \dt are sending their complaints to different output files

2017-07-28 Thread Daniel Gustafsson
> On 27 Jul 2017, at 19:40, Tom Lane  wrote:
> 
> Daniel Gustafsson  writes:
>>> On 19 Jun 2017, at 17:32, Tom Lane  wrote:
>>> So, if we're getting into enforcing consistency in describe.c, there's
>>> lots to do.
> 
>> Addressed in attached patch, see list of patches below.
> 
> I've pushed most of this.

Thanks!

> listTables and listDbRoleSettings print a custom message rather than
> an empty table for no matches (but in QUIET mode they just do the
> latter).  I think that's actually a good decision for listDbRoleSettings,
> because the user might be confused about which pattern is which, and
> we can straighten him out with a custom error message.  But the only
> good argument for listTables behaving that way is that people are used
> to it.  

Which is a pretty good argument for not changing it.

> Should we override backwards compatibility to the extent of
> dropping the custom messages in listTables, and just printing an empty
> table-of-tables?
> 
>> 0004 - Most verbose metacommands include additional information on top of 
>> what
>> is in the normal output, while \dRp+ dropped two columns (moving one to the
>> title).  This includes the information from \dRp in \dRp+.  Having the pubname
>> in the title as well as the table is perhaps superfluous, but consistent with
>> other commands so opted for it.
> 
> I'm not sure about this one.  It definitely seems bogus that \dRp+ is
> omitting the owner column, but I'm less excited about duplicating the
> pubname.

It’s indeed a bit silly to duplicate the information like that.

Another option would perhaps be to move to a format like the one in \dx+, and
only list the tables per subscription like how extension objects are listed.
Not sure if that’s any better here though.

cheers ./daniel

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] BUG #14759: insert into foreign data partitions fail

2017-07-28 Thread Etsuro Fujita

On 2017/07/26 15:29, Amit Langote wrote:

On 2017/07/25 9:43, David G. Johnston wrote:

On Mon, Jul 24, 2017 at 5:19 PM, Amit Langote 

Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-28 Thread Amit Khandekar
On 28 July 2017 at 11:17, Amit Langote  wrote:
> On 2017/07/26 16:58, Amit Langote wrote:
>> Rajkumar Raghuwanshi reported [1] on the "UPDATE partition key" thread
>> that whole-row vars don't play nicely with partitioned table operations.
>>
>> For example, when used to convert WITH CHECK OPTION constraint expressions
>> and RETURNING target list expressions, it will error out if the
>> expressions contained whole-row vars.  That's a bug, because whole-row
>> vars are legal in those expressions.  I think the decision to output error
>> upon encountering a whole-row in the input expression was based on the
>> assumption that it will only ever be used to convert partition constraint
>> expressions, which cannot contain those.  So, we can relax that
>> requirement so that its other users are not bitten by it.
>>
>> Attached fixes that.
>
> Attached a revised version of the patch.
>
> Updated patch moves the if (found_whole_row) elog(ERROR) bit in
> map_partition_varattnos to the callers.  Only the callers know if
> whole-row vars are not expected in the expression it's getting converted
> from map_partition_varattnos.  Given the current message text (mentioning
> "partition key"), it didn't seem appropriate to have the elog inside this
> function, since it's callable from places where it wouldn't make sense.

create table foo (a int, b text) partition by list (a);
create table foo1 partition of foo for values in (1);
create table foo2(b text, a int) ;
alter table foo attach partition foo2 for values in (2);

postgres=# insert into foo values (1, 'hi there');
INSERT 0 1
postgres=# insert into foo values (2, 'bi there');
INSERT 0 1
postgres=# insert into foo values (2, 'error there') returning foo;
ERROR:  table row type and query-specified row type do not match
DETAIL:  Table has type text at ordinal position 1, but query expects integer.

This is because the composite-type tuple descriptor of the leaf
partition doesn't match the RETURNING slot's tuple descriptor.

We probably need to do what
inheritance_planner()=>adjust_appendrel_attrs() does for adjusting the
attrs in the child rel's parse tree; i.e., handle some specific nodes
other than just simple Var nodes. In adjust_appendrel_attrs_mutator(),
for a whole-row node, it updates var->vartype to the child rel's rowtype.

I suspect that the other nodes that adjust_appendrel_attrs_mutator()
handles (like JoinExpr, CurrentOfExpr, etc.) may also require similar
adjustments for our case, because WithCheckOptions (and I think even
RETURNING) can have a subquery.
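
To make the whole-row part concrete, here is a toy standalone model of the
adjustment (simplified types, hypothetical OIDs; not the real mutator):

#include <stdio.h>

typedef unsigned int Oid;

typedef struct Var
{
	int			varno;			/* range table index of the relation */
	int			varattno;		/* column number; 0 means whole-row */
	Oid			vartype;		/* type OID of this Var */
} Var;

/*
 * What an adjust_appendrel_attrs-style mutator does for a whole-row Var:
 * repoint it at the child rel and retype it to the child's composite
 * rowtype, so RETURNING/WCO expressions produce tuples that match the
 * leaf partition's descriptor.
 */
static void
adjust_whole_row_var(Var *var, int child_varno, Oid child_rowtype)
{
	var->varno = child_varno;
	if (var->varattno == 0)		/* whole-row reference */
		var->vartype = child_rowtype;
}

int
main(void)
{
	Var			v = {1, 0, 16384};	/* whole-row Var, parent rowtype OID */

	adjust_whole_row_var(&v, 2, 16390);	/* hypothetical child varno/OID */
	printf("varno=%d vartype=%u\n", v.varno, v.vartype);
	return 0;
}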


>
> Thanks,
> Amit
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-28 Thread Kunshchikov Vladimir
Hello Alvaro,

here goes v4 version: removed unused header.

Compilation of this code snippet with -Wall -Wextra -std=c89 doesn't produce 
any warnings.

--
Best regards,
Vladimir Kunschikov
Lead software developer
IDS project
InfoTeCS JSC


From: Kunshchikov Vladimir
Sent: Thursday, July 27, 2017 10:34:22 AM
To: Alvaro Herrera
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap



Hello, Alvaro,

  thanks for the suggestions,  attached version #3 with all of your 
requirements met.



--
Best regards,
Vladimir Kunshchikov
Lead software developer

Intrusion Detection and Prevention Systems Development Department

InfoTeCS JSC


From: Alvaro Herrera 
Sent: Wednesday, July 26, 2017 7:40:20 PM
To: Kunshchikov Vladimir
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

Kunshchikov Vladimir wrote:
>  Hello Alvaro, thanks for the feedback, fixed all of your points.
> Attached new version of patch.

Looks great -- it's a lot smaller than the original even.  One final
touch -- see cfread(), where we have an #ifdef where we test for
fp->compressedfp; the "#else" branch uses the same code as the
!fp->compressedfp.  I think your get_cfp_error can be written more
simply using that form.  (Also, your version probably errors or warns
about "code before declaration" in that routine, which is not allowed in
C89.)

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index 991fe11e8a..28e5b417c2 100644
--- a/src/bin/pg_dump/compress_io.c
+++ b/src/bin/pg_dump/compress_io.c
@@ -709,5 +709,22 @@ hasSuffix(const char *filename, const char *suffix)
   suffix,
   suffixlen) == 0;
 }
+#endif
 
+const char *
+get_cfp_error(cfp* fp)
+{
+#ifdef HAVE_LIBZ
+	if(fp->compressedfp){
+		int errnum;
+		static const char fallback[] = "Zlib error";
+		const int maxlen = 255;
+		const char *errmsg = gzerror(fp->compressedfp, &errnum);
+		if(!errmsg || !memchr(errmsg, 0, maxlen))
+			errmsg = fallback;
+
+		return errnum == Z_ERRNO ? strerror(errno) : errmsg;
+	}
 #endif
+	return strerror(errno);
+}
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index 8f2e752cba..06b3762233 100644
--- a/src/bin/pg_dump/compress_io.h
+++ b/src/bin/pg_dump/compress_io.h
@@ -65,5 +65,6 @@ extern int	cfgetc(cfp *fp);
 extern char *cfgets(cfp *fp, char *buf, int len);
 extern int	cfclose(cfp *fp);
 extern int	cfeof(cfp *fp);
+extern const char * get_cfp_error(cfp *fp);
 
 #endif
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index 79922da8ba..e9c26bc571 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -352,7 +352,7 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
@@ -490,7 +490,7 @@ _WriteByte(ArchiveHandle *AH, const int i)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (cfwrite(&i, 1, ctx->dataFH) != 1)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return 1;
 }
@@ -519,7 +519,7 @@ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (cfwrite(buf, len, ctx->dataFH) != len)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index f839712945..b35d3fcaa5 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -555,8 +555,12 @@ _tarReadRaw(ArchiveHandle *AH, void *buf, size_t len, TAR_MEMBER *th, FILE *fh)
 			{
 res = GZREAD(&((char *) buf)[used], 1, len, th->zFH);
 if (res != len && !GZEOF(th->zFH))
+{
+	int errnum;
+	const char *errmsg = gzerror(th->zFH, &errnum);
 	exit_horribly(modulename,
-  "could not read from input file: %s\n", strerror(errno));
+  "could not read from input file: %s\n", errnum == Z_ERRNO? strerror(errno) : errmsg);
+}
 			}
 			else
 			{

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-28 Thread Erik Rijkers

On 2017-07-27 21:08, Mark Rofail wrote:

On Thu, Jul 27, 2017 at 7:15 PM, Erik Rijkers  wrote:


It would help (me at least) if you could be more explicit about what
exactly each instance is.



I apologize, I thought it was clear through the context.


Thanks a lot.  It just makes it really easy for testers like me who aren't 
following a thread too closely and just snatch a half hour here and 
there to look into a feature/patch.



One small thing while building docs:

$  cd doc/src/sgml && make html
osx -wall -wno-unused-param -wno-empty -wfully-tagged -D . -D . -x lower 
postgres.sgml >postgres.xml.tmp
osx:ref/create_table.sgml:960:100:E: document type does not allow 
element "VARLISTENTRY" here

Makefile:147: recipe for target 'postgres.xml' failed
make: *** [postgres.xml] Error 1

(Debian 8/jessie)


thanks,


Erik Rijkers



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/perl extension fails on Windows

2017-07-28 Thread Ashutosh Sharma
Hi,

On Fri, Jul 28, 2017 at 4:20 AM, Tom Lane  wrote:
>
> Andrew Dunstan  writes:
> > On 07/27/2017 04:33 PM, Tom Lane wrote:
> >> So I was trying to figure a way to not include XSUB.h except in a very
> >> limited part of plperl, like ideally just the .xs files.  It's looking
> >> like that would take nontrivial refactoring though :-(.  Another problem
> >> is that this critical bit of the library API is in XSUB.h:
>
> > That's the sort of thing that prompted me to ask what was the minimal
> > set of defines required to fix the original problem (assuming such a
> > thing exists)
> > We haven't used PERL_IMPLICIT_CONTEXT to date, and without ill effect.
> > For example. it's in the ExtUtils::Embed::ccopts for the perl that
> > jacana and bowerbird happily build and test against.
>
> Well, actually, PERL_IMPLICIT_CONTEXT is turned on automatically in any
> MULTIPLICITY build, and since it changes all the Perl ABIs, we *are*
> relying on it.  However, after further study of the Perl docs I noticed
> that we could dispense with XSUB.h if we defined PERL_NO_GET_CONTEXT
> (which turns the quoted stanza into a no-op).  That results in needing to
> sprinkle plperl.c with "dTHX" declarations, as explained in perlguts.pod.
> They're slightly tricky to place correctly, because they load up a pointer
> to the current Perl interpreter, so you have to be wary of where to put
> them in functions that change interpreters.  But I seem to have it working
> in the attached patch.  (One benefit of doing this extra work is that it
> should be a bit more efficient, in that we load up a Perl interpreter
> pointer only once per function called, not once per usage therein.  We
> could remove many of those fetches too, if we wanted to sprinkle the
> plperl code with yet more THX droppings; but I left that for another day.)
>
> Armed with that, I got rid of XSUB.h in plperl.c and moved the
> PG_TRY-using functions in the .xs files to plperl.c.  I think this would
> fix Ashutosh's problem, though I am not in a position to try it with a
> PERL_IMPLICIT_SYS build here.

Thanks for the patch. I am seeing some compilation errors on Windows
with the patch. Below are the errors observed,

1>ClCompile:
1>  plperl.c
1>src/pl/plperl/plperl.c(4051): error C2065: 'my_perl' : undeclared identifier

2>ClCompile:
2>  hstore_plperl.c
2>contrib/hstore_plperl/hstore_plperl.c(77): error C2065: 'my_perl' :
undeclared identifier

I did spend some time to find the reason for these compilation errors and
could eventually find that some of the Windows specific functions
inside plperl module are calling Perl APIs without fetching the perl
interpreter's context using dTHX. Please note that we have now
defined the PERL_NO_GET_CONTEXT macro before including the Perl headers in
plperl.h, which means any plperl function calling Perl APIs should
fetch the perl interpreter context variable 'my_perl' at the
start of the function using dTHX; we were not doing that for the
'setlocale_perl', 'hstore_to_plperl' and 'plperl_to_hstore' functions.
I have corrected this, and attached is the new version of the patch.
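
For anyone skimming the thread, the shape of the fix is roughly the sketch
below -- not the actual plperl code; the function body is a hypothetical
stand-in, and it assumes a Perl built with an implicit-context ABI:

#define PERL_NO_GET_CONTEXT		/* must precede the Perl headers */
#include "EXTERN.h"
#include "perl.h"
#include <locale.h>

static char *
setlocale_perl_sketch(int category, char *locale)
{
	dTHX;						/* fetch my_perl once, at function entry */
	SV		   *sv = newSVpv("demo", 0);	/* Perl calls now compile, since
											 * the implicit my_perl argument
											 * they expand to is in scope */

	SvREFCNT_dec(sv);
	return setlocale(category, locale);
}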

Moreover, I have also tested this patch along with the patch to import
switches used by perl into plperl and together it fixes the server
crash issue. Also, now, the interpreter size in both perl and plperl
module are the same thereby generating the same key on both plperl and
perl module. Thanks.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com


avoid-XSUB.h-in-plperl.c-v2.patch
Description: Binary data


plperl_win_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] asynchronous execution

2017-07-28 Thread Kyotaro HORIGUCHI
Thank you for the comment.

At Wed, 26 Jul 2017 17:16:43 -0400, Robert Haas  wrote 
in 
> But if we do, then I fear we'll just be reintroducing the same
> performance regression that we introduced by switching to this
> framework from the previous one - or maybe a different one, but a
> regression all the same.  Every type of intermediate node will have to
> have a code path where it uses ExecAsyncRequest() /
> ExecAsyncHogeResponse() rather than ExecProcNode() to get tuples, and

I understand Robert's concern and I think I share the same
opinion. It needs a further, different framework.

At Thu, 27 Jul 2017 06:39:51 -0400, Robert Haas  wrote 
in 
> On Wed, Jul 26, 2017 at 5:43 PM, Tom Lane  wrote:
> > I have not been paying any attention to this thread whatsoever,
> > but I wonder if you can address your problem by building on top of
> > the ExecProcNode replacement that Andres is working on,
> > https://www.postgresql.org/message-id/20170726012641.bmhfcp5ajpudi...@alap3.anarazel.de
> >
> > The scheme he has allows $extra_stuff to be injected into ExecProcNode at
> > no cost when $extra_stuff is not needed, because you simply don't insert
> > the wrapper function when it's not needed.  I'm not sure that it will
> > scale well to several different kinds of insertions though, for instance
> > if you wanted both instrumentation and async support on the same node.
> > But maybe those two couldn't be arms-length from each other anyway,
> > in which case it might be fine as-is.
> 
> Yeah, I don't quite see how that would apply in this case -- what we
> need here is not as simple as just conditionally injecting an extra
> bit.

Thank you for the pointer, Tom. The subject (segfault in HEAD...)
hadn't made me think that this kind of discussion was being held
there. Anyway, it seems very close to asynchronous execution, so
I'll catch up on it and consider how I can tie it in with this work.
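
My rough understanding of that scheme, boiled down to a standalone model:
the wrapper carrying the extra behavior is installed in the node's function
pointer only when needed, so the common path pays nothing.

#include <stdio.h>

typedef struct PlanState PlanState;
typedef int (*ExecProcNodeMtd) (PlanState *node);

struct PlanState
{
	ExecProcNodeMtd ExecProcNode;		/* what callers invoke */
	ExecProcNodeMtd ExecProcNodeReal;	/* the underlying implementation */
	int			counter;				/* stands in for instrumentation */
};

/* Stand-in for a real node's tuple-producing function. */
static int
ExecSeqScanModel(PlanState *node)
{
	(void) node;
	return 42;
}

/* Wrapper with the extra behavior; only installed when it's wanted. */
static int
ExecProcNodeInstr(PlanState *node)
{
	node->counter++;			/* the "extra stuff", e.g. instrumentation */
	return node->ExecProcNodeReal(node);
}

int
main(void)
{
	PlanState	node = {ExecSeqScanModel, ExecSeqScanModel, 0};
	int			need_extra = 1;
	int			tuple;

	if (need_extra)				/* inject the wrapper only if required */
		node.ExecProcNode = ExecProcNodeInstr;

	tuple = node.ExecProcNode(&node);
	printf("tuple=%d wrapper calls=%d\n", tuple, node.counter);
	return 0;
}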

Regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Missing comment for max_logical_replication_workers in postgresql.conf.sample

2017-07-28 Thread Tatsuo Ishii
Attached patch looks good except the excessive tab stops:
+					# (change requires restart)

I will commit/push this, removing the excessive tab stops, if
there's no objection.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> On Thu, 27 Jul 2017 14:38:29 +0900
> Masahiko Sawada  wrote:
> 
>> On Thu, Jul 27, 2017 at 10:14 AM, Yugo Nagata  wrote:
>> > Hi,
>> >
>> > I found that postgresql.conf.sample is missing a comment
>> > to note that changing max_logical_replication_workers requires
>> > restart of the server.
>> >
>> > Other such parameters has the comments, so I think the new
>> > parameter also needs this. Attached is a simple patch to fix
>> > this.
>> >
>> 
>> Good point. Similarly, dynamic_shared_memory_type and event_source are
>> required restarting to change but are not mentioned in
>> postgresql.conf.sample. Should we add a comment as well?
> 
> I think so. The updated patch is attached.
> 
>> 
>> Regards,
>> 
>> --
>> Masahiko Sawada
>> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
>> NTT Open Source Software Center
> 
> 
> -- 
> Yugo Nagata 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers