Re: [WIP PATCH] Index scan offset optimisation using visibility map

2018-10-02 Thread Andrey Borodin
Hi!

> On 2 Oct 2018, at 11:55, Michail Nikolaev wrote:
> 
> > Okay, it has been more than a couple of days and the patch has not been
> > updated, so I am marking as returned with feedback. 
> 
> Yes, it has been more than a couple of days, but there has also been
> almost no feedback since 20 Mar, when the patch design was changed :)
> But seriously - I am still working on it.

Let's move this to CF 2018-11? Obviously it is WIP, but the patch is
being discussed and the author cares about it.


Best regards, Andrey Borodin.


Re: Add SKIP LOCKED to VACUUM and ANALYZE

2018-10-02 Thread Michael Paquier
On Tue, Oct 02, 2018 at 02:22:42AM +, Bossart, Nathan wrote:
> On 10/1/18, 7:07 PM, "Michael Paquier"  wrote:
>> On Mon, Oct 01, 2018 at 03:37:01PM +, Bossart, Nathan wrote:
>>> Without the find_all_inheritors() stuff, I think we would just need to
>>> modify the ANALYZE documentation patch to say something like
>>> 
>>> Specifies that ANALYZE should not wait for any conflicting locks
>>> to be released: if a relation cannot be locked immediately without
>>> waiting, the relation is skipped.  Note that even with this
>>> option, ANALYZE can still block when opening the relation's
>>> indexes or when acquiring sample rows to prepare the statistics.
>>> Also, while ANALYZE ordinarily processes all leaf partitions of
>>> partitioned tables, this option will cause ANALYZE to skip all
>>> leaf partitions if there is a conflicting lock on the partitioned
>>> table.
>>
>> Yes, I think that we could live with something like that.  I could give
>> this whole thing a shot, but this will have to wait until the commit
>> fest is closed as there are still many patches to classify.  If you can
>> send a patch that's of course always helpful..
> 
> Sure, here it is.

I was just playing with autovacuum and some inheritance trees; even now,
locking a child can cause an autovacuum worker to block if it is
analyzing the parent when acquiring row samples.  Anyway, coming back to
the patch...

My comments are mostly about the documentation bits added:
- Inheritance trees have the same problem, so it would be nice to
mention them as well.
- One workaround, which we could document (?), would be to list the leaves
of a partition tree individually, so that their locks are checked one by
one.

+  Specifies that ANALYZE should not wait for any
+  conflicting locks to be released: if a relation cannot be locked

Perhaps it would be more precise to say "when beginning to work on a
relation", because of the restrictions in row sampling?

The rest of the patch looks clean to me.
--
Michael


signature.asc
Description: PGP signature


Re: Pluggable Storage - Andres's take

2018-10-02 Thread Andres Freund
On 2018-09-27 20:03:58 -0700, Andres Freund wrote:
> On 2018-09-28 12:21:08 +1000, Haribabu Kommi wrote:
> > Here I attached further cleanup patches.
> > 1. Re-arrange the GUC variable
> > 2. Added a check function hook for default_table_access_method GUC
> 
> Cool.
> 
> 
> > 3. Added a new hook validate_index. I tried to change the function
> > validate_index_heapscan to slotify, but that have many problems as it
> > is accessing some internals of the heapscandesc structure and accessing
> > the buffer and etc.
> 
> Oops, I also did that locally, in a way. I also made a validate a
> callback, as the validation logic is going to be specific to the AMs.
> Sorry for not pushing that up earlier.  I'll try to do that soon,
> there's a fair amount of change.

I've pushed an updated version with a fair amount of pending changes,
and I hope it merges all your pending patches (those not made redundant
by our concurrent development).

There's currently 3 regression test failures, that I'll look into
tomorrow:
- partition_prune shows a few additional "Heap Blocks: exact=1" lines. I'm
  a bit confused as to why, but haven't really investigated yet.
- fast_default fails, because I've undone most of 7636e5c60fea83a9f3c,
  I'll have to redo that in a different way.
- I occasionally see failures in aggregates.sql - I've not figured out
  what's going on there.

Amit Khandekar said he'll publish a new version of the slot-abstraction
patch tomorrow, so I'll rebase it onto that ASAP.

My next planned steps are a) to try to commit parts of the
slot-abstraction work b) to try to break out a few more pieces out of
the large pluggable storage patch.

Greetings,

Andres Freund



Re: Tid scan improvements

2018-10-02 Thread David Rowley
On 28 September 2018 at 18:13, Edmund Horner  wrote:
> On Fri, 28 Sep 2018 at 17:02, Edmund Horner  wrote:
>> I did run pgindent over it though. :)
>
> But I didn't check if it still applied to master.  Sigh.  Here's one that 
> does.

I know commit fest is over, but I made a pass of this to hopefully
provide a bit of guidance so that it's closer for the November 'fest.

I've only done light testing on the patch and it does seem to work,
but there are a few things that I think should be changed. Most
importantly, I think #11 below needs to be done. That might override
some of the items that come before it in the list, as you will likely
have to pull out some of the code I mention due to changing #11.
I've kept them around anyway, just in case some of it remains.

1. Could wrap for tables > 16TB. Please use double. See index_pages_fetched()

int nrandompages;
int nseqpages;

2. Should multiply by nseqpages, not add.

run_cost += spc_random_page_cost * nrandompages + spc_seq_page_cost + nseqpages;

3. Should be double:

BlockNumber pages = selectivity * baserel->pages;

4. The comment needs to be updated to mention what the new code does in
heapgettup() and heapgettup_pagemode()

+
  /* start from last page of the scan */
- if (scan->rs_startblock > 0)
- page = scan->rs_startblock - 1;
+ if (scan->rs_numblocks == InvalidBlockNumber)
+ {
+ if (scan->rs_startblock > 0)
+ page = scan->rs_startblock - 1;
+ else
+ page = scan->rs_nblocks - 1;
+ }
  else
- page = scan->rs_nblocks - 1;
+ page = scan->rs_startblock + scan->rs_numblocks - 1;
+

5. Variables should be called "inclusive". We use "strict" to indicate
an operator comparison cannot match NULL values.

+ bool strict; /* Indicates < rather than <=, or > rather */
+ bool strict2; /* than >= */

Don't break the comment like that. If you need more space, don't end
the comment; instead start a new line and tab the next line out to
match the * of the first line.

6. Why not pass the TidExpr into MakeTidOpExprState() and have it set
the type, instead of repeating the code?

7. It's not very obvious why the following Assert() can't fail.

+ bool invert;
+ bool invert2;
+
+ Assert(list_length(((BoolExpr *) expr)->args) == 2);

I had to hunt around quite a bit to see that
TidQualFromBaseRestrictinfo could only ever make the list have 2
elements, and we'd not form a BoolExpr with just 1. (but see #11)

8. Many instances of the word "strict" are used to mean "inclusive".
Can you please change all of them.

9. Confusing comment:

+ * If the expression was non-strict (<=) and the offset is 0, then just
+ * pretend it was strict, because offset 0 doesn't exist and we may as
+ * well exclude that block.

Shouldn't this be, "If the operator is non-inclusive, then since TID
offsets are 1-based, for simplicity, we can just class the expression
as inclusive.", or something along those lines.

10. Comment talks about LHS, but the first OpExpr in a list of two
OpExprs has nothing to do with left hand side.  You could use LHS if
you were talking about the first arg in an OpExpr, but this is not the
case here.

/* If the LHS is not the lower bound, swap them. */

You could probably just ensure that the >=, > ops is the first in the
list inside TidQualFromBaseRestrictinfo(), but you'd need to clearly
comment that this is the case in both locations. Perhaps use lcons()
for the lower end and lappend() for the upper end, but see #11.

11. I think the qual matching code needs an overhaul.  Really you
should attempt to find the smallest and largest ctid for your
implicitly ANDed ranges.  This would require getting rid of the
BETWEEN-type clauses you're trying to build in
TidQualFromBaseRestrictinfo and instead including all quals, rather
than ignoring the other quals once you've already found your complete
range bounds.

The problem with doing it the way that you're doing it now is in cases like:

create table t1(a int);
insert into t1 select generate_Series(1,1000);
create index on t1 (a);
select ctid,a from t1 order by a desc limit 1; -- find the max ctid.
ctid |a
-+--
 (44247,178) | 1000
(1 row)

set max_parallel_workers_per_gather=0;
explain analyze select ctid,* from t1 where ctid > '(0,0)' and ctid <=
'(44247,178)' and ctid <= '(0,1)';
 QUERY PLAN
-
 Tid Scan on t1  (cost=0.01..169248.78 rows=1 width=10) (actual
time=0.042..2123.432 rows=1 loops=1)
   TID Cond: ((ctid > '(0,0)'::tid) AND (ctid <= '(44247,178)'::tid))
   Filter: (ctid <= '(0,1)'::tid)
   Rows Removed by Filter: 999
 Planning Time: 4.049 ms
 Execution Time: 2123.464 ms
(6 rows)

Due to how you've coded TidQualFromBaseRestrictinfo(), the ctid <=
'(0,1)' qual does not make it into the range. It's left as a filter in
the Tid Scan.

I think I'm going to stop here, as changing this is going to cause
quite a bit of churn.

but one more...

12. I think the changes 

Re: SerializeParamList vs machines with strict alignment

2018-10-02 Thread Amit Kapila
On Wed, Oct 3, 2018 at 8:29 AM Amit Kapila  wrote:
>
> On Wed, Oct 3, 2018 at 12:25 AM Tom Lane  wrote:
> >
> > Amit Kapila  writes:
> > >> Okay, I have pushed the test case patch on HEAD.  Attached is the
> > >> code-fix patch, let's wait for a day so that we have all the results
> > >> which can help us to discuss the merits of this patch.
> >
> > > By now, the added test has failed on gharial [1] with below log on the 
> > > server:
> >
> > Yeah, gharial and anole both don't like this, which is interesting
> > but not really surprising, considering that IA64 is in some part
> > an HPPA follow-on architecture.
> >
>
> Now chipmunk also failed for the same test.
>
> >  What I find more interesting is
> > that both of the live Sparc critters are happy --- so despite
> > Thomas' statements upthread, they're coping with unaligned accesses.
> > Maybe you should have back-patched the test to older branches so
> > we could see what castoroides and protosciurus would do.  But it's
> > probably not worth additional delay.
> >
>
> Agreed, I will push the code-fix on HEAD and code+test in back-branches.
>

Pushed, let's wait and see if this can make all the buildfarm members
failing due to this issue happy.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-02 Thread Thomas Munro
On Wed, Oct 3, 2018 at 4:42 PM Thomas Munro
 wrote:
> On Wed, Oct 3, 2018 at 2:24 PM Thomas Munro
>  wrote:
> > Over the thread for bug #14825 I posted some draft code to show one
> > way to save/restore the enum blacklist for parallel workers.  Here's a
> > better version, and a new thread.  0001 is the code by Andrew Dunstan
> > and Tom Lane that was reverted in 93a1af0b, unchanged by me except for
> > resolving trivial conflicts on current master.  0002 fixes the failure
> > seen with make installcheck when postgresql.conf says
> > force_parallel_mode = regress.
>
> Added to the next commitfest.

... which promptly caused cfbot to report that the documentation
doesn't build anymore, because it used one of those old "" tags
that are now outlawed.  Fixed.

-- 
Thomas Munro
http://www.enterprisedb.com


0001-Relax-transactional-restrictions-on-ALTER-TYPE-.--v2.patch
Description: Binary data


0002-fixup-Relax-transactional-restrictions-on-ALTER-T-v2.patch
Description: Binary data


Re: Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-02 Thread Thomas Munro
On Wed, Oct 3, 2018 at 2:24 PM Thomas Munro
 wrote:
> Over the thread for bug #14825 I posted some draft code to show one
> way to save/restore the enum blacklist for parallel workers.  Here's a
> better version, and a new thread.  0001 is the code by Andrew Dunstan
> and Tom Lane that was reverted in 93a1af0b, unchanged by me except for
> resolving trivial conflicts on current master.  0002 fixes the failure
> seen with make installcheck when postgresql.conf says
> force_parallel_mode = regress.

Added to the next commitfest.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: partition tree inspection functions

2018-10-02 Thread Michael Paquier
On Mon, Oct 01, 2018 at 04:27:57PM +0900, Amit Langote wrote:
> Yeah, maybe there is no reason to delay proceeding with
> pg_partition_children which provides a useful functionality.

So, I have been looking at your patch, and there are a couple of things
which could be improved.

Putting the new function pg_partition_children() in partition.c is a
bad idea I think.  So instead I think that we should put that in a
different location, say utils/adt/partitionfuncs.c.

+   TupleDescInitEntry(tupdesc, (AttrNumber) 1, "relid",
+  REGCLASSOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parentid",
+  REGCLASSOID, -1, 0);
REGCLASSOID is used mainly for casting, so instead let's use OIDOID like
any other system function.

+   values[2] = psprintf("%d", level);
+   values[3] = psprintf("%c", relkind == RELKIND_PARTITIONED_TABLE ?
+   'f' :
+   't');
Using Datum objects is more elegant in this context.

isleaf is not particularly useful as it can just be fetched with a join
on pg_class/relkind.  So I think that we had better remove it.

I have cleaned up the tests a bit, removing duplicates and most of the
things which touched the size of relations, to have something more
portable.

We could have a flavor taking a relation name as input, with qualified
names handled properly (see pg_get_viewdef_name for example); I am not
sure that's really mandatory, so I left it out.  I have also added some
comments here and there.  The docs could still be worded a bit better.

My result is the patch attached.  What do you think?
--
Michael
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 9a7f683658..d41c09b68b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -20197,6 +20197,46 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 The function returns the number of new collation objects it created.

 
+   
+Partitioning Information Functions
+
+ 
+  Name Return Type Description
+ 
+
+  
+  
+   pg_partition_tree(oid)
+   setof record
+   
+List information about a partition tree for the given partitioned
+table, consisting of one row for each partition in a tree.  The
+information available is the OID of the partition, the OID of its
+immediate partitioned table, and its level in the hierarchy,
+beginning at 0 for the top-most parent, and
+incremented by 1 for each level up.
+   
+  
+ 
+
+   
+
+   
+To check the total size of the data contained in
+measurement table described in
+, one could use the
+following query:
+   
+
+
+=# SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size
+ FROM pg_partition_tree('measurement');
+ total_size 
+
+ 24 kB
+(1 row)
+
+
   
 
   
diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile
index 4b35dbb8bb..132ec7620c 100644
--- a/src/backend/utils/adt/Makefile
+++ b/src/backend/utils/adt/Makefile
@@ -20,8 +20,8 @@ OBJS = acl.o amutils.o arrayfuncs.o array_expanded.o array_selfuncs.o \
 	jsonfuncs.o like.o lockfuncs.o mac.o mac8.o misc.o nabstime.o name.o \
 	network.o network_gist.o network_selfuncs.o network_spgist.o \
 	numeric.o numutils.o oid.o oracle_compat.o \
-	orderedsetaggs.o pg_locale.o pg_lsn.o pg_upgrade_support.o \
-	pgstatfuncs.o \
+	orderedsetaggs.o partitionfuncs.o pg_locale.o pg_lsn.o \
+	pg_upgrade_support.o pgstatfuncs.o \
 	pseudotypes.o quote.o rangetypes.o rangetypes_gist.o \
 	rangetypes_selfuncs.o rangetypes_spgist.o rangetypes_typanalyze.o \
 	regexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \
diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
new file mode 100644
index 00..fc0a904967
--- /dev/null
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -0,0 +1,128 @@
+/*-
+ *
+ * partitionfuncs.c
+ *		  Functions for accessing partitioning data
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		  src/backend/utils/adt/partitionfuncs.c
+ *
+ *-
+ */
+
+#include "postgres.h"
+
+#include "access/htup_details.h"
+#include "catalog/partition.h"
+#include "catalog/pg_inherits.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "utils/fmgrprotos.h"
+
+/*
+ * pg_partition_tree
+ *
+ * Produce a view with one row per member of a partition tree, beginning
+ * from the top-most parent given by the caller.  This gives information
+ * about each partition, its immediate partitioned parent and its level in
+ * the hierarchy.
+ */
+Datum
+pg_partition_tree(PG_FUNCTION_ARGS)
+{
+#define 

Re: SerializeParamList vs machines with strict alignment

2018-10-02 Thread Amit Kapila
On Wed, Oct 3, 2018 at 12:25 AM Tom Lane  wrote:
>
> Amit Kapila  writes:
> >> Okay, I have pushed the test case patch on HEAD.  Attached is the
> >> code-fix patch, let's wait for a day so that we have all the results
> >> which can help us to discuss the merits of this patch.
>
> > By now, the added test has failed on gharial [1] with below log on the 
> > server:
>
> Yeah, gharial and anole both don't like this, which is interesting
> but not really surprising, considering that IA64 is in some part
> an HPPA follow-on architecture.
>

Now chipmunk also failed for the same test.

>  What I find more interesting is
> that both of the live Sparc critters are happy --- so despite
> Thomas' statements upthread, they're coping with unaligned accesses.
> Maybe you should have back-patched the test to older branches so
> we could see what castoroides and protosciurus would do.  But it's
> probably not worth additional delay.
>

Agreed, I will push the code-fix on HEAD and code+test in back-branches.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: file cloning in pg_upgrade and CREATE DATABASE

2018-10-02 Thread Alvaro Herrera
On 2018-Oct-03, Michael Paquier wrote:

> On Tue, Oct 02, 2018 at 10:35:02PM -0300, Alvaro Herrera wrote:
> > I'm not clear what interface you are proposing.  Maybe they would just
> > use the clone-or-fail mode, and note whether it fails?  If that's not
> > it, please explain.
> 
> Okay.  What I am proposing is to not have any kind of automatic mode to
> keep the code simple, with a new option called --transfer-mode, able to
> do three things:
> - "link", which is the equivalent of the existing --link.
> - "copy", the default and the current behavior, which copies files.
> - "clone", the new mode proposed in this thread.

I see.  Peter is proposing to have a fourth mode, essentially
--transfer-mode=clone-or-copy.

Thinking about a generic tool that wraps pg_upgrade (say, Debian's
wrapper for it) this makes sense: just use the fastest non-destructive
mode available.  Which ones are available depends on what the underlying
filesystem is, so it's not up to the tool's writer to decide which to
use ahead of time.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: file cloning in pg_upgrade and CREATE DATABASE

2018-10-02 Thread Michael Paquier
On Tue, Oct 02, 2018 at 10:35:02PM -0300, Alvaro Herrera wrote:
> I'm not clear what interface you are proposing.  Maybe they would just
> use the clone-or-fail mode, and note whether it fails?  If that's not
> it, please explain.

Okay.  What I am proposing is to not have any kind of automatic mode to
keep the code simple, with a new option called --transfer-mode, able to
do three things:
- "link", which is the equivalent of the existing --link.
- "copy", the default and the current behavior, which copies files.
- "clone", the new mode proposed in this thread.
--
Michael




Re: file cloning in pg_upgrade and CREATE DATABASE

2018-10-02 Thread Alvaro Herrera
On 2018-Oct-03, Michael Paquier wrote:

> There could be an argument for having an automatic mode within this
> scheme, still I am not really a fan of this.  When somebody integrates
> pg_upgrade within an upgrade framework, they would likely test if
> cloning actually works, bumping immediately on a failure, no?  I'd like
> to think that copy should be the default, cloning being available as an
> option.  Cloning is not supported on many filesystems anyway..

I'm not clear what interface you are proposing.  Maybe they would just
use the clone-or-fail mode, and note whether it fails?  If that's not
it, please explain.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-02 Thread Thomas Munro
Hi Andrew, Tom, all,

Over the thread for bug #14825 I posted some draft code to show one
way to save/restore the enum blacklist for parallel workers.  Here's a
better version, and a new thread.  0001 is the code by Andrew Dunstan
and Tom Lane that was reverted in 93a1af0b, unchanged by me except for
resolving trivial conflicts on current master.  0002 fixes the failure
seen with make installcheck when postgresql.conf says
force_parallel_mode = regress.

-- 
Thomas Munro
http://www.enterprisedb.com


0001-Relax-transactional-restrictions-on-ALTER-TYPE-.-ADD.patch
Description: Binary data


0002-fixup-Relax-transactional-restrictions-on-ALTER-TYPE.patch
Description: Binary data


Re: speeding up planning with partitions

2018-10-02 Thread Amit Langote
On 2018/10/03 8:31, Thomas Munro wrote:
> On Fri, Sep 14, 2018 at 10:29 PM Amit Langote
>  wrote:
>> Attached is what I have at the moment.
> 
> I realise this is a WIP but FYI the docs don't build (you removed a
>  element that is still needed, when removing a paragraph).

Thanks Thomas for the heads up, will fix.

Regards,
Amit




Re: SSL tests failing with "ee key too small" error on Debian SID

2018-10-02 Thread Michael Paquier
On Mon, Oct 01, 2018 at 09:18:01PM +0900, Kyotaro HORIGUCHI wrote:
> In Debian, /etc/ssl/openssl.cnf has been changed to
> "CipherString=DEFAULT@SECLEVEL=2", which implies that "RSA and DHE
> keys need to be at least 2048 bits long" according to the
> following page.
> 
> https://wiki.debian.org/ContinuousIntegration/TriagingTips/openssl-1.1.1
> 
> It seems to be a Debian-specific feature, and I suppose (differently
> from what I said in my previous mail) it won't happen on other
> platforms.

Ah...  Thanks for the information.  I have missed that bit.  Likely
other platforms would not bother much about that.

> The attached second patch just changes the key size to 2048 bits, and
> the "ee key too small" errors are eliminated in 001_ssltests_master,
> but instead I got a "ca md too weak" error.  That one is eliminated by
> using sha256 instead of sha1 in cas.config. (third attachment)

In the end I find your suggestion quite tempting, instead of having to
tweak the global system configuration.  That should normally work with
any configuration.  This would require regenerating the certs in the
tree.  Any thoughts from others?

> By the way, I got (with both 1.0.2k and 1.1.1) a "tlsv1 alert
> unknown ca" error from 002_scram.pl.  It is fixed for me by the
> fourth attachment, but I'm not sure why we haven't had such a
> complaint before. (Does it happen only for me?)

I am actually seeing that for 001_ssltests (expected, as there are some
cases with revoked certs), but not for 002_scram.
--
Michael




Re: file cloning in pg_upgrade and CREATE DATABASE

2018-10-02 Thread Michael Paquier
On Tue, Oct 02, 2018 at 02:31:35PM +0200, Peter Eisentraut wrote:
> I can see the argument for that.  But I don't understand where the
> automatic mode fits into this.  I would like to keep all three modes
> from my patch: copy, clone-if-possible, clone-or-fail, unless you want
> to argue against that.

I'd like to argue against that :)

There could be an argument for having an automatic mode within this
scheme, still I am not really a fan of this.  When somebody integrates
pg_upgrade within an upgrade framework, they would likely test if
cloning actually works, bumping immediately on a failure, no?  I'd like
to think that copy should be the default, cloning being available as an
option.  Cloning is not supported on many filesystems anyway..
--
Michael




Re: speeding up planning with partitions

2018-10-02 Thread Thomas Munro
On Fri, Sep 14, 2018 at 10:29 PM Amit Langote
 wrote:
> Attached is what I have at the moment.

I realise this is a WIP but FYI the docs don't build (you removed a
 element that is still needed, when removing a paragraph).

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [HACKERS] Optional message to user when terminating/cancelling backend

2018-10-02 Thread Thomas Munro
On Tue, Oct 2, 2018 at 1:37 AM Daniel Gustafsson  wrote:
> > On 1 Oct 2018, at 01:19, Michael Paquier  wrote:
> > On Sun, Sep 30, 2018 at 10:51:44PM +0900, Michael Paquier wrote:
> >> You could have chosen something less complicated, like "ホゲ", which is
> >> the equivalent of "foo" in English.  Anyway, non-ASCII characters should
> >> not be included in the final patch.
>
> Fixed in the attached v16 revision.

Hi Daniel,

It looks like you missed another case that needs tolerance for late
signal delivery on Windows:

+select pg_cancel_backend(pg_backend_pid());

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.15263

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [HACKERS] proposal: schema variables

2018-10-02 Thread Thomas Munro
On Sun, Sep 30, 2018 at 11:20 AM Pavel Stehule  wrote:
> I hope so now, there are almost complete functionality. Please, check it.

Hi Pavel,

FYI there is a regression test failure on Windows:

plpgsql ... FAILED

*** 4071,4077 
end;
$$ language plpgsql;
select stacked_diagnostics_test();
- NOTICE: sqlstate: 22012, message: division by zero, context:
[PL/pgSQL function zero_divide() line 4 at RETURN <- SQL statement
"SELECT zero_divide()" <- PL/pgSQL function stacked_diagnostics_test()
line 6 at PERFORM]
+ NOTICE: sqlstate: 42702, message: column reference "v" is ambiguous,
context: [PL/pgSQL function zero_divide() line 4 at RETURN <- SQL
statement "SELECT zero_divide()" <- PL/pgSQL function
stacked_diagnostics_test() line 6 at PERFORM]

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.15234

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Progress reporting for pg_verify_checksums

2018-10-02 Thread Thomas Munro
On Sat, Sep 29, 2018 at 1:07 AM Michael Banck  wrote:
> I've attached v4 of the patch.

Hi Michael,

Windows doesn't like sigaction:

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.15189

I'm not sure if we classify this as a "frontend" program.  Should it
be using pqsignal() from src/port/pqsignal.c?  Or perhaps keep
sigaction as you have it (pqsignal.c says that we require sigaction on
all Unices), but put #ifndef WIN32 around that stuff, since SIGUSR1 is
never going to work there anyway.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Obtaining a more consistent view definition when a UNION subquery contains undecorated constants

2018-10-02 Thread Jacob Champion
On Thu, Sep 27, 2018 at 3:38 PM Tom Lane  wrote:
> Jimmy Yih  writes:
> > Looking at the internal code (mostly get_from_clause_item() function), we
> > saw that when a subquery is used, there is no tuple descriptor passed to
> > get_query_def() function. Because of this, get_target_list() uses the
> > resnames obtained from the pg_rewrite entry's ev_action field.  However, it
> > seems to be fairly simple to construct a dummy tuple descriptor from the
> > ev_action to pass down the call stack so that the column names will be
> > consistent when deparsing both T_A_Const and T_TypeCast parse tree nodes
> > involving a UNION.  Attached is a patch that demonstrates this.
>
> I'm afraid that this just moves the weird cases somewhere else, ie
> you might see an AS clause where you had none before, or a different
> AS clause from what you originally wrote.  The parser has emitted the
> same parse tree as if there were an explicit AS there; I doubt that
> we want ruleutils to second-guess that unless it really has to.

Can you give a quick example of something that "breaks" with this
approach? I think we're having trouble seeing it.

It might help to have some additional context on why we care: this
problem shows up in pg_upgrade testing, since we're basically
performing the following steps:
- create a view in the old database
- dump the old database schema
- load the schema into the new database
- dump the new database schema and compare to the old one

So from this perspective, we don't mind so much if the view definition
changes between creation and dump, but we do mind if it changes a
second time after the dump has been restored, since it shows up as a
false negative in the diff. In other words, we'd like the dumped view
definitions to be "stable" with respect to dumps and restores.

Thanks,
--Jacob



Re: Performance improvements for src/port/snprintf.c

2018-10-02 Thread Tom Lane
Andres Freund  writes:
> I've looked through the patch. Looks good to me. Some minor notes:

[ didn't see this till after sending my previous ]

> - How about adding our own strchrnul for the case where we don't
>   HAVE_STRCHRNUL?  It's possible that other platforms have something
>   similar, and the code would look more readable that way.

Sure, we could just make a "static inline strchrnul()" for use when
!HAVE_STRCHRNUL.  No objection.

> - I know it's not new, but is it actually correct to use va_arg(args, int64)
>   for ATYPE_LONGLONG?

Well, the problem with just doing s/int64/long long/g is that the
code would then fail on compilers without a "long long" type.
We could ifdef our way around that, but I don't think the code would
end up prettier.

Given that we only ever use "ll" modifiers via INT64_FORMAT, and that
that'll only be set to "ll" if int64 is indeed "long long", those code
paths should be dead code in any situation where the type pun is wrong.

regards, tom lane



Re: Performance improvements for src/port/snprintf.c

2018-10-02 Thread Tom Lane
Here's a version of this patch rebased over commit 625b38ea0.

That commit's fix for the possibly-expensive memset means that we need
to reconsider performance numbers for this patch.  I re-ran my previous
tests, and it's still looking like this is a substantial win, as it makes
snprintf.c faster than the native snprintf for most non-float cases.
We're still stuck at something like 10% penalty for float cases.

While there might be value in implementing our own float printing code,
I have a pretty hard time getting excited about the cost/benefit ratio
of that.  I think that what we probably really ought to do here is hack
float4out/float8out to bypass the extra overhead, as in the 0002 patch
below.

For reference, I attach the testbed I'm using now plus some results.
I wasn't able to get my cranky NetBSD system up today, so I don't
have results for that.  However, I did add recent glibc (Fedora 28)
to the mix, and I was interested to discover that they seem to have
added a fast-path for format strings that are exactly "%s", just as
NetBSD did.  I wonder if we should reconsider our position on doing
that.  It'd be a simple enough addition...

regards, tom lane

diff --git a/configure b/configure
index 6414ec1..0448c6b 100755
*** a/configure
--- b/configure
*** fi
*** 15100,15106 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
--- 15100,15106 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
diff --git a/configure.in b/configure.in
index 158d5a1..23b5bb8 100644
*** a/configure.in
--- b/configure.in
*** PGAC_FUNC_WCSTOMBS_L
*** 1571,1577 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
--- 1571,1577 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open strchrnul symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 90dda8e..7894caa 100644
*** a/src/include/pg_config.h.in
--- b/src/include/pg_config.h.in
***
*** 523,528 
--- 523,531 
  /* Define to 1 if you have the  header file. */
  #undef HAVE_STDLIB_H
  
+ /* Define to 1 if you have the `strchrnul' function. */
+ #undef HAVE_STRCHRNUL
+ 
  /* Define to 1 if you have the `strerror_r' function. */
  #undef HAVE_STRERROR_R
  
diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32
index 93bb773..f7a051d 100644
*** a/src/include/pg_config.h.win32
--- b/src/include/pg_config.h.win32
***
*** 394,399 
--- 394,402 
  /* Define to 1 if you have the  header file. */
  #define HAVE_STDLIB_H 1
  
+ /* Define to 1 if you have the `strchrnul' function. */
+ /* #undef HAVE_STRCHRNUL */
+ 
  /* Define to 1 if you have the `strerror_r' function. */
  /* #undef HAVE_STRERROR_R */
  
diff --git a/src/port/snprintf.c b/src/port/snprintf.c
index 1be5f70..3094ad8 100644
*** a/src/port/snprintf.c
--- b/src/port/snprintf.c
*** flushbuffer(PrintfTarget *target)
*** 314,320 
  }
  
  
! static void fmtstr(char *value, int leftjust, int minlen, int maxwidth,
  	   int pointflag, PrintfTarget *target);
  static void fmtptr(void *value, PrintfTarget *target);
  static void fmtint(int64 value, char type, int forcesign,
--- 314,322 
  }
  
  
! static bool find_arguments(const char *format, va_list args,
! 			   PrintfArgValue *argvalues);
! static void fmtstr(const char *value, int leftjust, int minlen, int maxwidth,
  	   int pointflag, 

Re: [PATCH v19] GSSAPI encryption support

2018-10-02 Thread Robbie Harwood
Michael Paquier  writes:

> On Mon, Aug 06, 2018 at 05:23:28PM -0400, Robbie Harwood wrote:
>> If you're in a position where you're using Kerberos (or most other
>> things from the GSSAPI) for authentication, the encryption comes at
>> little to no additional setup cost.  And then you get all the security
>> benefits outlined in the docs changes.
>
> Please note that the last patch set does not apply anymore, so I have
> moved it to CF 2018-11 and marked it as waiting on author.

Indeed.  Please find v19 attached.  It's just a rebase; no architecture
changes.

I'll set commitfest status back to "Needs Review" once I've sent this
mail.

Thanks,
--Robbie


signature.asc
Description: PGP signature
From 4a91571db6873da46becf2f7ad0cd9055df2 Mon Sep 17 00:00:00 2001
From: Robbie Harwood 
Date: Thu, 10 May 2018 16:12:03 -0400
Subject: [PATCH] libpq GSSAPI encryption support

On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.

Add frontend and backend encryption support functions.  Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.

In postmaster, pull GSSAPI authorization checking into a shared
function.  Also share the initiator name between the encryption and
non-encryption codepaths.

Modify pqsecure_write() to take a non-const pointer.  The pointer will
not be modified, but this change (or a const-discarding cast, or a
malloc()+memcpy()) is necessary for GSSAPI due to const/struct
interactions in C.

For HBA, add "hostgss" and "hostnogss" entries that behave similarly
to their SSL counterparts.  "hostgss" requires either "gss", "trust",
or "reject" for its authentication.

Similarly, add a "gssmode" parameter to libpq.  Supported values are
"disable", "require", and "prefer".  Notably, negotiation will only be
attempted if credentials can be acquired.  Move credential acquisition
into its own function to support this behavior.

Finally, add documentation for everything new, and update existing
documentation on connection security.

Thanks to Michael Paquier for the Windows fixes.
---
 doc/src/sgml/client-auth.sgml   |  75 --
 doc/src/sgml/libpq.sgml |  57 +++-
 doc/src/sgml/runtime.sgml   |  77 +-
 src/backend/libpq/Makefile  |   4 +
 src/backend/libpq/auth.c| 107 +++-
 src/backend/libpq/be-gssapi-common.c|  64 +
 src/backend/libpq/be-gssapi-common.h|  26 ++
 src/backend/libpq/be-secure-gssapi.c| 321 ++
 src/backend/libpq/be-secure.c   |  16 ++
 src/backend/libpq/hba.c |  51 +++-
 src/backend/postmaster/pgstat.c |   3 +
 src/backend/postmaster/postmaster.c |  64 -
 src/include/libpq/hba.h |   4 +-
 src/include/libpq/libpq-be.h|  11 +-
 src/include/libpq/libpq.h   |   3 +
 src/include/libpq/pqcomm.h  |   5 +-
 src/include/pgstat.h|   3 +-
 src/interfaces/libpq/Makefile   |   4 +
 src/interfaces/libpq/fe-auth.c  |  82 +-
 src/interfaces/libpq/fe-connect.c   | 232 +++-
 src/interfaces/libpq/fe-gssapi-common.c | 128 +
 src/interfaces/libpq/fe-gssapi-common.h |  23 ++
 src/interfaces/libpq/fe-secure-gssapi.c | 345 
 src/interfaces/libpq/fe-secure.c|  16 +-
 src/interfaces/libpq/libpq-fe.h |   3 +-
 src/interfaces/libpq/libpq-int.h|  30 ++-
 src/tools/msvc/Mkvcbuild.pm |  10 +
 27 files changed, 1571 insertions(+), 193 deletions(-)
 create mode 100644 src/backend/libpq/be-gssapi-common.c
 create mode 100644 src/backend/libpq/be-gssapi-common.h
 create mode 100644 src/backend/libpq/be-secure-gssapi.c
 create mode 100644 src/interfaces/libpq/fe-gssapi-common.c
 create mode 100644 src/interfaces/libpq/fe-gssapi-common.h
 create mode 100644 src/interfaces/libpq/fe-secure-gssapi.c

diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index c2114021c3..6f9f2b7560 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -108,6 +108,8 @@ hostnossl  database  user
 host   database  user  IP-address  IP-mask  auth-method  auth-options
 hostssldatabase  user  IP-address  IP-mask  auth-method  auth-options
 hostnossl  database  user  IP-address  IP-mask  auth-method  auth-options
+hostgssdatabase  user  IP-address  IP-mask  auth-method  auth-options
+hostnogss  database  user  IP-address  IP-mask  auth-method  auth-options
 
The meaning of the fields is as follows:
 
@@ -128,9 +130,10 @@ hostnossl  database  user
  
   
This record matches connection attempts made using TCP/IP.
-

Re: SerializeParamList vs machines with strict alignment

2018-10-02 Thread Thomas Munro
On Wed, Oct 3, 2018 at 7:55 AM Tom Lane  wrote:
> Amit Kapila  writes:
> >> Okay, I have pushed the test case patch on HEAD.  Attached is the
> >> code-fix patch, let's wait for a day so that we have all the results
> >> which can help us to discuss the merits of this patch.
>
> > By now, the added test has failed on gharial [1] with below log on the 
> > server:
>
> Yeah, gharial and anole both don't like this, which is interesting
> but not really surprising, considering that IA64 is in some part
> an HPPA follow-on architecture.  What I find more interesting is
> that both of the live Sparc critters are happy --- so despite
> Thomas' statements upthread, they're coping with unaligned accesses.

Just for fun:

$ curl -O 
https://people.debian.org/~aurel32/qemu/sparc/debian_etch_sparc_small.qcow2
$ qemu-system-sparc -hda debian_etch_sparc_small.qcow2 -nographic
...
Debian GNU/Linux 4.0 debian-sparc ttyS0

debian-sparc login: root
Password: root
...
debian-sparc:~# cat <<EOF > /etc/apt/sources.list
deb http://archive.debian.org/debian/ etch main
deb-src http://archive.debian.org/debian/ etch main
deb http://archive.debian.org/debian-security/ etch/updates main contrib
EOF
debian-sparc:~# apt-get update
...
debian-sparc:~# apt-get dist-upgrade
...
debian-sparc:~# apt-get install gcc libc6-dev
...
debian-sparc:~# cat <<EOF > test.c
int main(int argc, char **argv)
{
  int i[2];
  int *p;

  p = (int *)(((char *) &i[0]) + 2);
  *p = 42;
}
EOF
debian-sparc:~# gcc test.c
debian-sparc:~# ./a.out
Bus error

--
Thomas Munro
http://www.enterprisedb.com



Re: Performance improvements for src/port/snprintf.c

2018-10-02 Thread Andres Freund
Hi,

On 2018-09-26 21:30:25 -0400, Tom Lane wrote:
> Here's a rebased version of <15785.1536776...@sss.pgh.pa.us>.
> 
> I think we should try to get this reviewed and committed before
> we worry more about the float business.  It would be silly to
> not be benchmarking any bigger changes against the low-hanging
> fruit here.

I've looked through the patch. Looks good to me. Some minor notes:

- How about adding our own strchrnul for the case where we don't
  HAVE_STRCHRNUL?  It's possible that other platforms have something
  similar, and the code would look more readable that way.
- I know it's not new, but is it actually correct to use va_arg(args, int64)
  for ATYPE_LONGLONG?

Greetings,

Andres Freund



Re: Cygwin linking rules

2018-10-02 Thread Marco Atzeri

On 02.10.2018 19:07, Tom Lane wrote:
> Andrew Dunstan  writes:
>> On 09/29/2018 02:13 PM, Marco Atzeri wrote:
>>> [ proposed patch ]
> 
>> Yes. So there are a couple of things here. First, the dll has
>> SO_MAJORVERSION in the name. And second it stops building any static
>> libraries and instead builds windows import libraries with names like
>> libpq.a.
> 
> I'm pretty much -1 on adding SO_MAJORVERSION to the library names.
> It seems like it will cause churn to library users without really
> accomplishing much, because when was the last time we changed the
> SO_MAJORVERSION of anything?
> 
> I'd suggest that if we ever do change libpq's SO_MAJORVERSION,
> that would be the time to append the suffix (so we'd go from libpq.dll
> to libpq-6.dll).  For now, let's not fix what isn't broken.

On Cygwin the library has been cygpq-5.dll for a long time;
around 2013, with 9.2.x, we standardized on including the
SO_MAJORVERSION in the library name, like all the other
packages:

https://cygwin.com/packages/x86_64/libpq5/libpq5-10.5-1

same as on Unix/Linux:
https://packages.debian.org/sid/amd64/libpq5/filelist

> However, the .a linking definitely is broken, and if this way
> of building fixes it, that's great.  I do not have the ability
> to test it, but we can throw it into the buildfarm to see what
> happens.
> 
>> I think we should apply this to HEAD. If it's not too late it would
>> probably be a good thing for release 11 - would need a release note.
> 
> I think it's too late for 11; we're too close to RC1, and besides
> the problem this is fixing doesn't seem to manifest before my
> recent port/common library changes.  (If that's not so, maybe it
> qualifies as a bug fix for back branches; but it seems rather
> high risk.)

No problem.  It works for my Cygwin package release,
but I could have broken something along the way, and it needs to be tested.

> regards, tom lane

Regards
Marco





Re: Slotification of partition tuple conversion

2018-10-02 Thread Andres Freund
On 2018-10-02 11:02:37 -0700, Andres Freund wrote:
> On 2018-10-02 18:35:29 +0900, Amit Langote wrote:
> > Hi,
> > 
> > I looked at the patch.  Some comments.
> > 
> > On 2018/10/02 16:35, Andres Freund wrote:
> > > I wasn't quite happy yet with that patch.
> > > 
> > > - ConvertTupleSlot seems like a too generic name, it's very unclear it's
> > >  related to tuple mapping, rather than something internal to slots. I
> > >  went for execute_attr_map_slot (and renamed do_convert_tuple to
> > >  execute_attr_map_tuple, to match).
> > > 
> > >  I'd welcome a better name.
> > 
> > do_convert_tuple -> convert_tuple_using_map
> > ConvertTupleSlot -> convert_tuple_using_map_slot
> 
> I think my name is more descriptive, referencing the attr map seems
> better to me.
> 
> 
> > > - I disliked inheritence_tupconv_map, it doesn't seem very clear why
> > >  this is named inheritence_* (typo aside). I went for
> > >  convert_tuples_by_name_map_if_req() - while I think this sounds
> > >  too much like it converts tuples itself it should be renamed with the
> > >  rest of the convert_tuples_by_* routines.
> > > 
> > >  I'd welcome a better name.
> > 
> > Agree about doing something about the convert_* names.  A comment near the
> > beginning of tupconvert.c says all of these convert_* routines are meant
> > as conversion "setup" routines:
> 
> I'll try to tackle that in a separate commit.
> 
> 
> > /*
> >  * The conversion setup routines have the following common API:
> > 
> > So, maybe:
> > 
> > convert_tuples_by_position -> get_conversion_map_by_position
> > convert_tuples_by_name -> get_conversion_map_by_name
> > convert_tuples_by_name_map -> get_attr_map_for_conversion
> > convert_tuples_by_name_map_if_req -> get_attr_map_for_conversion_if_req
> 
> s/get/build/? I'm also not a fan of the separation of attr and
> conversion map to signal the difference between tuples and slots being
> converted...
> 
> 
> > I noticed the comment at the top of tupconvert.c requires updating:
> > 
> >  * of dropped columns.  There is some overlap of functionality with the
> >  * executor's "junkfilter" routines, but these functions work on bare
> >  * HeapTuples rather than TupleTableSlots.
> > 
> > Now we have functions that manipulate TupleTableSlots.
> 
> Heh.  I think I'll just drop the entire sentence - I don't really think
> there's much of an overlap to junkfilters these days.

I've pushed this now.  We can discuss about the other renaming and then
I'll commit that separately.

Greetings,

Andres Freund



Regarding Google Code In mentorship

2018-10-02 Thread Roshni Ram
Hello,
I would like to know if there is an opening to become a Google Code-In
mentor with your organization this year.
I have prior experience in Java, Python, Android, React, React native, Deep
learning, Computer Vision, MySQL, PostgreSQL.
I have prior experience with open source and with working remotely, and I
hope to bring these experiences to the Google Code-In mentor role.
LinkedIn: https://www.linkedin.com/in/roshni-ram-306b0a164/
GitHub: https://github.com/roshniRam
Looking forward to your reply

Thank you
Roshni Ram.


Re: Commit fest 2018-09

2018-10-02 Thread David Fetter
On Tue, Oct 02, 2018 at 03:52:20PM +0900, Michael Paquier wrote:
> On Thu, Sep 06, 2018 at 01:50:23PM -0700, Michael Paquier wrote:
> > We are already in September, hence it is time to move on with the 2nd
> > commit fest for v12.  As usual, there are many patches waiting for
> > review and integration:
> > https://commitfest.postgresql.org/19/
> > With a couple of days of delay, I have switched the CF app as
> > in-progress on Tuesday AOE.  Please note that I am fine to act as CFM
> > and help with the patch classification.  If somebody wishes to act as
> > such and is familiar with the community process, feel free to reply to
> > this thread to manifest yourself.
> 
> And more or less one month later here we are, with the following final
> score:
> Committed: 46
> Moved to next CF: 127
> Rejected: 5
> Returned with Feedback: 37
> Total: 215
> 
> I have spent some time classifying the existing entries, roughly moving
> to the next commit fest entries which had rather fresh reviews, the
> patches which got no reviews.  Patches which did not apply without any
> reviews have been moved to the next CF, waiting for their respective
> author(s).  Patches not having much activity, or waiting for author
> input for more than a couple of weeks have been returned with feedback.
> 
> Thanks to all who participated in the patch review, authoring, and
> everybody else who helped in making the different patches move forward.

Thanks for doing this hard and unglamorous work!

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: Slotification of partition tuple conversion

2018-10-02 Thread Andres Freund
On 2018-10-02 18:35:29 +0900, Amit Langote wrote:
> Hi,
> 
> I looked at the patch.  Some comments.
> 
> On 2018/10/02 16:35, Andres Freund wrote:
> > I wasn't quite happy yet with that patch.
> > 
> > - ConvertTupleSlot seems like a too generic name, it's very unclear it's
> >  related to tuple mapping, rather than something internal to slots. I
> >  went for execute_attr_map_slot (and renamed do_convert_tuple to
> >  execute_attr_map_tuple, to match).
> > 
> >  I'd welcome a better name.
> 
> do_convert_tuple -> convert_tuple_using_map
> ConvertTupleSlot -> convert_tuple_using_map_slot

I think my name is more descriptive, referencing the attr map seems
better to me.


> > - I disliked inheritence_tupconv_map, it doesn't seem very clear why
> >  this is named inheritence_* (typo aside). I went for
> >  convert_tuples_by_name_map_if_req() - while I think this sounds
> >  too much like it converts tuples itself it should be renamed with the
> >  rest of the convert_tuples_by_* routines.
> > 
> >  I'd welcome a better name.
> 
> Agree about doing something about the convert_* names.  A comment near the
> beginning of tupconvert.c says all of these convert_* routines are meant
> as conversion "setup" routines:

I'll try to tackle that in a separate commit.


> /*
>  * The conversion setup routines have the following common API:
> 
> So, maybe:
> 
> convert_tuples_by_position -> get_conversion_map_by_position
> convert_tuples_by_name -> get_conversion_map_by_name
> convert_tuples_by_name_map -> get_attr_map_for_conversion
> convert_tuples_by_name_map_if_req -> get_attr_map_for_conversion_if_req

s/get/build/? I'm also not a fan of the separation of attr and
conversion map to signal the difference between tuples and slots being
converted...


> I noticed the comment at the top of tupconvert.c requires updating:
> 
>  * of dropped columns.  There is some overlap of functionality with the
>  * executor's "junkfilter" routines, but these functions work on bare
>  * HeapTuples rather than TupleTableSlots.
> 
> Now we have functions that manipulate TupleTableSlots.

Heh.  I think I'll just drop the entire sentence - I don't really think
there's much of an overlap to junkfilters these days.

Greetings,

Andres Freund



NOTIFY and pg_notify performance when deduplicating notifications

2018-10-02 Thread julien
Hi,

Back in 2016 a patch was proposed to fix the O(N^2) performance on transactions 
that generate many notifications. The performance issue is caused by the check 
for duplicate notifications.
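The quadratic behaviour can be sketched like this (illustrative only — the names and data structure are assumptions, not async.c's actual code):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Each NOTIFY scans the transaction's entire pending list looking for a
 * duplicate before appending, so queuing N distinct notifications costs
 * O(N^2) string comparisons in total.
 */
static bool
already_pending(char **pending, int npending, const char *payload)
{
	for (int i = 0; i < npending; i++)
		if (strcmp(pending[i], payload) == 0)
			return true;
	return false;
}
```

With the proposed collapse_mode 'never', this scan would simply be skipped for the notifications concerned.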

I have a feature built around LISTEN / NOTIFY that works perfectly well, except
for the enormous performance impact on transactions that emit large numbers of
notifications.  It’s very hard to work around the problem on the application
side, and transactions that could take just a few seconds end up taking over 30
minutes.

The patch that was proposed was nearly finalized, but ended up being abandoned. 
The latest patch is here: 
https://www.postgresql.org/message-id/CAP_rwwmKjO_p3kYB4jYceqcvcyRYjBQdji1GSCyqvLK%3D5nZzWQ%40mail.gmail.com
 . 

I understand that the only work left to be done on the patch was to address 
comments made on the proposed syntax. I’m attaching an updated patch that 
changes the syntax to allow for a variable number of modes. The new syntax 
would be NOTIFY channel [ , payload [ , collapse_mode ] ] ; where collapse_mode 
can be 'never' or 'maybe'.

I hope this patch can be reviewed and included in PostgreSQL. 

Best regards.

--
Julien Demoor
 


postgresql-notify-collapse-mode.patch
Description: Binary data


Re: Cygwin linking rules

2018-10-02 Thread Tom Lane
Andrew Dunstan  writes:
> On 09/29/2018 02:13 PM, Marco Atzeri wrote:
>> [ proposed patch ]

> Yes. So there are a couple of things here. First, the dll has 
> SO_MAJORVERSION in the name. And second it stops building any static 
> libraries and instead builds windows import libraries with names like 
libpq.a.

I'm pretty much -1 on adding SO_MAJORVERSION to the library names.
It seems like it will cause churn to library users without really
accomplishing much, because when was the last time we changed the
SO_MAJORVERSION of anything?

I'd suggest that if we ever do change libpq's SO_MAJORVERSION,
that would be the time to append the suffix (so we'd go from libpq.dll
to libpq-6.dll).  For now, let's not fix what isn't broken.

However, the .a linking definitely is broken, and if this way
of building fixes it, that's great.  I do not have the ability
to test it, but we can throw it into the buildfarm to see what
happens.

> I think we should apply this to HEAD. If it's not too late it would 
> probably be a good thing for release 11 - would need a release note.

I think it's too late for 11; we're too close to RC1, and besides
the problem this is fixing doesn't seem to manifest before my
recent port/common library changes.  (If that's not so, maybe it
qualifies as a bug fix for back branches; but it seems rather
high risk.)

regards, tom lane



Re: pg_ls_tmpdir()

2018-10-02 Thread Tom Lane
"Bossart, Nathan"  writes:
> On 10/2/18, 7:22 AM, "Andrew Dunstan"  wrote:
>> See the bottom of src/backend/catalog/system_views.sql starting around 
>> line 1010.

> AFAICT the cleanest way to do this in system_views.sql is to hard-code
> the pg_default tablespace OID in the DEFAULT expression.  So, it might
> be best to use the two function approach if we want pg_ls_tmpdir() to
> default to the pg_default tablespace.

That would be pretty bletcherous, so +1 for just making two C functions.

regards, tom lane



Re: transction_timestamp() inside of procedures

2018-10-02 Thread Merlin Moncure
On Wed, Sep 26, 2018 at 10:55 AM Alvaro Herrera
 wrote:
>
> On 2018-Sep-26, Tom Lane wrote:
>
> > Alvaro Herrera  writes:
> > > On 2018-Sep-26, Tom Lane wrote:
> > >> I agree that it would be surprising for transaction timestamp to be newer
> > >> than statement timestamp.  So for now at least, I'd be satisfied with
> > >> documenting the behavior.
> >
> > > Really?  I thought it was practically obvious that for transaction-
> > > controlling procedures, the transaction timestamp would not necessarily
> > > be aligned with the statement timestamp.  The surprise would come
> > > together with the usage of the new feature, so existing users would not
> > > be surprised in any way.
> >
> > Nope.  That's the same poor reasoning we've fallen into in some other
> > cases, of assuming that "the user" is a point source of knowledge.
> > But DBMSes tend to interact with lots of different code.  If some part
> > of application A starts using intraprocedure transactions, and then
> > application B breaks because it wasn't expecting to see xact_start
> > later than query_start in pg_stat_activity, you've still got a problem.
>
> While that's true, I think it's also highly hypothetical.
>
> What could be the use for the transaction timestamp?  I think one of the
> most important uses (at least in pg_stat_activity) is to verify that
> transactions are not taking excessively long time to complete;

+1

I think the existing behavior is broken, and extremely so.
Transaction timestamp has a very clear definition to me.  I'm
planning to move a lot of code into stored procedures from bash, and
upon doing so it's going to trip all kinds of nagios alarms that are
looking at the longest running transaction.

merlin



Re: has_column_privilege behavior (was Re: Assert failed in snprintf.c)

2018-10-02 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> One might reasonably expect NULL there, but column_privilege_check
>> observes that you have table-level select privilege so it doesn't
>> bother to look up the column number.  Not sure if this is worth
>> doing something about.

> Yeah, I'm on the fence about if it makes sense to do anything here or
> not.  Hard to see how getting a NULL back is really more useful in this
> case.

Yeah, I'm not terribly excited about changing it either.  I just
thought it'd be a good idea to point it out in case somebody else
feels differently.

regards, tom lane



Re: pg_ls_tmpdir()

2018-10-02 Thread Bossart, Nathan
On 10/2/18, 7:22 AM, "Andrew Dunstan"  wrote:
> On 10/02/2018 08:00 AM, Laurenz Albe wrote:
>> Christoph Berg wrote:
>>> Re: Bossart, Nathan 2018-10-01 
>>> <69fd7e51-2b13-41fd-9438-17395c73f...@amazon.com>
>>>>> 1. Do we really need two functions, one without input argument
>>>>>    to list the default tablespace?
>>>>>    I think that anybody who uses such a function would
>>>>>    be able to feed the OID of "pg_default".
>>>> That seems reasonable to me.  I've removed the second version of the
>>>> function.
>>> You could make that one argument have a DEFAULT value that makes it
>>> act on pg_default.
>> I looked at that, and I don't think you can add a DEFAULT for
>> system functions installed during bootstrap.
>> At least I failed in the attempt.
>
>
> See the bottom of src/backend/catalog/system_views.sql starting around 
> line 1010.

AFAICT the cleanest way to do this in system_views.sql is to hard-code
the pg_default tablespace OID in the DEFAULT expression.  So, it might
be best to use the two function approach if we want pg_ls_tmpdir() to
default to the pg_default tablespace.

Nathan



Re: has_column_privilege behavior (was Re: Assert failed in snprintf.c)

2018-10-02 Thread Stephen Frost
Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> OK, so here's a patch that I think does the right things.
> >> I noticed that has_foreign_data_wrapper_privilege() and some other
> >> recently-added members of the has_foo_privilege family had not gotten
> >> the word about not failing on bogus OIDs, so I also fixed those.
> 
> > I just glanced through it pretty quickly, but looks good to me.
> 
> Pushed with some test cases; thanks for reviewing!

Thanks for hacking on it.

> BTW, I noticed while making the test cases that there are some odd-seeming
> behaviors as a result of early exits from the test functions.  For
> instance,
> 
> regression=# create table mytab(f1 int, f2 int);
> CREATE TABLE
> regression=# select has_column_privilege('mytab',99::int2,'select');
>  has_column_privilege 
> --
>  t
> (1 row)

Ah, yeah, that's the whole "you have access to all columns if you have
SELECT rights on the table".

> One might reasonably expect NULL there, but column_privilege_check
> observes that you have table-level select privilege so it doesn't
> bother to look up the column number.  Not sure if this is worth
> doing something about.

Yeah, I'm on the fence about if it makes sense to do anything here or
not.  Hard to see how getting a NULL back is really more useful in this
case.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: has_column_privilege behavior (was Re: Assert failed in snprintf.c)

2018-10-02 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> OK, so here's a patch that I think does the right things.
>> I noticed that has_foreign_data_wrapper_privilege() and some other
>> recently-added members of the has_foo_privilege family had not gotten
>> the word about not failing on bogus OIDs, so I also fixed those.

> I just glanced through it pretty quickly, but looks good to me.

Pushed with some test cases; thanks for reviewing!

BTW, I noticed while making the test cases that there are some odd-seeming
behaviors as a result of early exits from the test functions.  For
instance,

regression=# create table mytab(f1 int, f2 int);
CREATE TABLE
regression=# select has_column_privilege('mytab',99::int2,'select');
 has_column_privilege 
--
 t
(1 row)

One might reasonably expect NULL there, but column_privilege_check
observes that you have table-level select privilege so it doesn't
bother to look up the column number.  Not sure if this is worth
doing something about.
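The early exit behind that result can be modelled as follows (a simplified sketch — the real column_privilege_check works against catalogs and ACLs, not booleans):

```c
/*
 * Simplified model of the behaviour described above: a table-level
 * privilege short-circuits before the column number is ever validated,
 * so a bogus attnum still yields "true".  Returns 1 for true, 0 for
 * false, -1 standing in for SQL NULL.
 */
static int
column_privilege_check(int has_table_priv, int column_exists,
					   int has_column_priv)
{
	if (has_table_priv)
		return 1;				/* column number never looked up */
	if (!column_exists)
		return -1;				/* only now could NULL be returned */
	return has_column_priv ? 1 : 0;
}
```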

regards, tom lane



Re: Tuple conversion naming

2018-10-02 Thread Andres Freund
Hi,

On 2018-10-02 17:28:26 +0900, Amit Langote wrote:
> On 2018/10/02 16:40, Andres Freund wrote:
> > I'm kinda wondering if we shouldn't have the tuple
> > conversion functions just use the slot based functionality in the back,
> > and just store those in the TupConversionMap.
> 
> Sorry, I didn't understand this.

I was basically wondering if we should just allocate two slots in
TupConversionMap and then drive the tuple conversion via the slots.

Greetings,

Andres Freund



Re: transction_timestamp() inside of procedures

2018-10-02 Thread Andres Freund
On 2018-09-28 09:35:48 +0200, Peter Eisentraut wrote:
> On 26/09/2018 23:48, Peter Eisentraut wrote:
> > That's certainly a good argument.  Note that if we implemented that the
> > transaction timestamp is advanced inside procedures, that would also
> > mean that the transaction timestamp as observed in pg_stat_activity
> > would move during VACUUM, for example.  That might or might not be
> > desirable.

I think it doesn't hurt, although it's also not a huge advantage.


> diff --git a/src/backend/access/transam/xact.c 
> b/src/backend/access/transam/xact.c
> index 9aa63c8792..245735420c 100644
> --- a/src/backend/access/transam/xact.c
> +++ b/src/backend/access/transam/xact.c
> @@ -1884,14 +1884,19 @@ StartTransaction(void)
>   TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
>  
>   /*
> -  * set transaction_timestamp() (a/k/a now()).  We want this to be the 
> same
> -  * as the first command's statement_timestamp(), so don't do a fresh
> -  * GetCurrentTimestamp() call (which'd be expensive anyway).  Also, mark
> -  * xactStopTimestamp as unset.
> -  */
> - xactStartTimestamp = stmtStartTimestamp;
> - xactStopTimestamp = 0;
> +  * set transaction_timestamp() (a/k/a now()).  Normally, we want this to
> +  * be the same as the first command's statement_timestamp(), so don't 
> do a
> +  * fresh GetCurrentTimestamp() call (which'd be expensive anyway).  But
> +  * for transactions started inside statements (e.g., procedure calls), 
> we
> +  * want to advance the timestamp.
> +  */
> + if (xactStartTimestamp < stmtStartTimestamp)
> + xactStartTimestamp = stmtStartTimestamp;
> + else
> + xactStartTimestamp = GetCurrentTimestamp();
>   pgstat_report_xact_timestamp(xactStartTimestamp);
> + /* Mark xactStopTimestamp as unset. */
> + xactStopTimestamp = 0;

It's a bit weird to make this decision based on these two timestamps
differing.  For one, it only indirectly seems to be guaranteed that
xactStartTimestamp is even set to anything here (to 0 by virtue of being
a global var).

Greetings,

Andres Freund



Re: [HACKERS] Re: Improve OR conditions on joined columns (common star schema problem)

2018-10-02 Thread Tom Lane
Noah Misch  writes:
> On Mon, Oct 01, 2018 at 09:32:10PM -0400, Tom Lane wrote:
>> FWIW, my problem with this patch is that I remain unconvinced of the basic
>> correctness of the transform (specifically the unique-ification approach).
>> Noah's points would be important to address if we were moving the patch
>> towards commit, but I don't see much reason to put effort into it until
>> we can think of a way to prove whether that works.

> Not even effort to fix the assertion failures I reported?

If it seemed relevant to the proof-of-correctness problem, I would look
into it, but ...

regards, tom lane



Re: transction_timestamp() inside of procedures

2018-10-02 Thread Andres Freund
Hi,

On 2018-10-02 10:55:56 +0200, Peter Eisentraut wrote:
> On 28/09/2018 09:35, Peter Eisentraut wrote:
> >> That's certainly a good argument.  Note that if we implemented that the
> >> transaction timestamp is advanced inside procedures, that would also
> >> mean that the transaction timestamp as observed in pg_stat_activity
> >> would move during VACUUM, for example.  That might or might not be
> >> desirable.
> > 
> > Attached is a rough implementation.
> > 
> > I'd be mildly in favor of doing this, but we have mentioned tradeoffs in
> > this thread.
> 
> So do we want to do this or not?

Without having reviewed the patch yet, yes, I'd say we want this.

Greetings,

Andres Freund



Regarding Google Code-In mentorship

2018-10-02 Thread KIRTIKA SINGHAL
Hello,
I would like to know if there is a vacancy for a Google Code-In mentor
with your organization this year.
I have prior experience in React, JavaScript, Machine Learning, and Deep
Learning. I also have prior experience with open source and working
remotely. I hope to bring these experiences to the Google Code-In mentor
role.
LinkedIn: https://www.linkedin.com/in/kirtika-singhal-30371416a/
Github: https://github.com/singhalkirtika

Looking forward to your reply.

Thank you
Kirtika Singhal


Re: [HACKERS] Re: Improve OR conditions on joined columns (common star schema problem)

2018-10-02 Thread Noah Misch
On Mon, Oct 01, 2018 at 09:32:10PM -0400, Tom Lane wrote:
> Michael Paquier  writes:
> > On Mon, Sep 03, 2018 at 06:59:10PM -0700, Noah Misch wrote:
> >> If you're going to keep this highly-simplified estimate, please expand the
> >> comment to say why it doesn't matter or what makes it hard to do better.  
> >> The
> >> non-planunionor.c path for the same query computes its own estimate of the
> >> same underlying quantity.  Though it may be too difficult in today's 
> >> system,
> >> one could copy the competing path's row count estimate here.  Perhaps it
> >> doesn't matter because higher-level processing already assumes equal row 
> >> count
> >> among competitors?
> 
> > As there has been no replies to Noah's review for one month, I am
> > marking this patch as returned with feedback for now.
> 
> FWIW, my problem with this patch is that I remain unconvinced of the basic
> correctness of the transform (specifically the unique-ification approach).
> Noah's points would be important to address if we were moving the patch
> towards commit, but I don't see much reason to put effort into it until
> we can think of a way to prove whether that works.

Not even effort to fix the assertion failures I reported?



Re: SerializeParamList vs machines with strict alignment

2018-10-02 Thread Amit Kapila
On Tue, Oct 2, 2018 at 11:21 AM Amit Kapila  wrote:
>
> On Tue, Oct 2, 2018 at 10:07 AM Tom Lane  wrote:
> >
> > Amit Kapila  writes:
> > > On Tue, Oct 2, 2018 at 9:38 AM Tom Lane  wrote:
> > >> (But it might be worth choosing slightly less
> > >> generic object names, to avoid a conflict against other sub-tests
> > >> later in that script.)
> >
> > > The function name and statement name seems okay to me.  How about
> > > changing the table name to fooarr or arrtest?
> >
> > Up to you.
> >
>
> Okay, I have pushed the test case patch on HEAD.  Attached is the
> code-fix patch, let's wait for a day so that we have all the results
> which can help us to discuss the merits of this patch.
>

By now, the added test has failed on gharial [1] with the below log on the server:
2018-10-02 00:46:57.019 MDT [5bb31455.2066:193] LOG:  statement:
CREATE TABLE fooarr(f1 text, f2 int[], f3 text);
2018-10-02 00:46:57.147 MDT [5bb31455.2066:194] LOG:  statement:
INSERT INTO fooarr VALUES('1', ARRAY[1,2], 'one');
2018-10-02 00:46:57.149 MDT [5bb31455.2066:195] LOG:  statement:
PREPARE pstmt(text, int[]) AS SELECT * FROM fooarr WHERE f1 = $1 AND
f2 = $2;
2018-10-02 00:46:57.153 MDT [5bb31455.2066:196] LOG:  statement:
EXPLAIN (COSTS OFF) EXECUTE pstmt('1', make_some_array(1,2));
2018-10-02 00:46:57.157 MDT [5bb31455.2066:197] LOG:  statement:
EXECUTE pstmt('1', make_some_array(1,2));
2018-10-02 00:46:57.157 MDT [5bb31455.2066:198] DETAIL:  prepare:
PREPARE pstmt(text, int[]) AS SELECT * FROM fooarr WHERE f1 = $1 AND
f2 = $2;
2018-10-02 00:46:57.784 MDT [5bb313c8.1da5:4] LOG:  server process
(PID 8294) was terminated by signal 10
2018-10-02 00:46:57.784 MDT [5bb313c8.1da5:5] DETAIL:  Failed process
was running: EXECUTE pstmt('1', make_some_array(1,2));
2018-10-02 00:46:57.784 MDT [5bb313c8.1da5:6] LOG:  terminating any
other active server processes

AFAICS, the machine details are as follows:
uname -m = ia64
uname -r = B.11.31
uname -s = HP-UX
uname -v = U

This seems to indicate that this hardware is alignment-sensitive.  I
haven't done a detailed study of this architecture, but if anybody is
familiar with this architecture, then feel free to share your
thoughts.  On a quick look at one of the docs on this architecture, there
doesn't seem to be any recommendation on aligning memory access to an
8-byte boundary.  See section 3.2.2 (Addressable Units and Alignment)
in doc [2].

This test doesn't seem to have been executed on chipmunk yet, so I will
wait for its results.

I will further look into it tomorrow morning.

[1] - 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2018-10-02%2006%3A30%3A47
[2] - https://www.csee.umbc.edu/portal/help/architecture/aig.pdf

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: transction_timestamp() inside of procedures

2018-10-02 Thread Bruce Momjian
On Tue, Oct  2, 2018 at 10:55:56AM +0200, Peter Eisentraut wrote:
> On 28/09/2018 09:35, Peter Eisentraut wrote:
> >> That's certainly a good argument.  Note that if we implemented that the
> >> transaction timestamp is advanced inside procedures, that would also
> >> mean that the transaction timestamp as observed in pg_stat_activity
> >> would move during VACUUM, for example.  That might or might not be
> >> desirable.
> > 
> > Attached is a rough implementation.
> > 
> > I'd be mildly in favor of doing this, but we have mentioned tradeoffs in
> > this thread.
> 
> So do we want to do this or not?

I thought some more about this.  I think there are a few issues:

1  Utility:  since you can't use CALL in a transaction block, our
current code will always have transaction_timestamp() and
statement_timestamp() as identical in a procedure.  Having
transaction_timestamp() advance on COMMIT gives users a new ability.

2  Surprise:  What do people use transaction_timestamp() for and what
behavior would be most expected?

3  Other databases:  How do other database systems handle this, and the
SQL standard?

Based on 1 and 2, I suggest we change transaction_timestamp() to advance
on COMMIT in procedures, and document this.  I have no idea about #3.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Commit fest 2018-09

2018-10-02 Thread Joe Conway
On 10/02/2018 10:13 AM, Tom Lane wrote:
> Michael Paquier  writes:
>> Thanks to all who participated in the patch review, authoring, and
>> everybody else who helped in making the different patches move forward.
> 
> Thanks for being CFM!  I know it's a lot of work ...

+10!

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: pgsql: Improve autovacuum logging for aggressive and anti-wraparound ru

2018-10-02 Thread Tom Lane
Michael Paquier  writes:
> My brain is rather fried for the rest of the day...  But we could just
> be looking at using USE_ASSERT_CHECKING.  Thoughts from others are
> welcome.

I'd go with folding the condition into a plain Assert.  Then it's
obvious that no code is added in a non-assert build.  I can see that
some cases might be so complicated that that isn't nice, but this
case doesn't seem to qualify.

regards, tom lane



Re: Commit fest 2018-09

2018-10-02 Thread Tom Lane
Michael Paquier  writes:
> Thanks to all who participated in the patch review, authoring, and
> everybody else who helped in making the different patches move forward.

Thanks for being CFM!  I know it's a lot of work ...

regards, tom lane



PostgreSQL 11 RC1 + GA Dates

2018-10-02 Thread Jonathan S. Katz
Hi,

Based on the current status of the open items and where we are at in the
release cycle, the date for the first release candidate of PostgreSQL 11
will be 2018-10-11.

If all goes well with RC1, the PostgreSQL 11.0 GA release will be
2018-10-18. This is subject to change if we find any issues during the
RC1 period that indicate we need to make an additional release candidate
prior to going GA.

To the entire community, thank you for all of your hard work on
developing PostgreSQL 11. It is exciting that we are finally at this
point where we are ready to make our major release.

Jonathan





Re: [HACKERS] Support for Secure Transport SSL library on macOS as OpenSSL alternative

2018-10-02 Thread Tom Lane
Peter Eisentraut  writes:
> On 26/09/2018 23:19, Daniel Gustafsson wrote:
>> It’s not clear to me just how common it is to use GCC via homebrew on macOS.

> I use that all the time.

Hm, so did 5e2217131 break anything for you?  Does that version of gcc
claim to know -F or -framework switches?

regards, tom lane



Re: file cloning in pg_upgrade and CREATE DATABASE

2018-10-02 Thread Peter Eisentraut
On 28/09/2018 07:19, Michael Paquier wrote:
> +static bool cloning_ok = true;
> +
> +pg_log(PG_VERBOSE, "copying \"%s\" to \"%s\"\n",
> +   old_file, new_file);
> +if (cloning_ok &&
> +!cloneFile(old_file, new_file, map->nspname, map->relname, true))
> +{
> +pg_log(PG_VERBOSE, "cloning not supported, switching to copying\n");
> +cloning_ok = false;
> +copyFile(old_file, new_file, map->nspname, map->relname);
> +}
> +else
> +copyFile(old_file, new_file, map->nspname, map->relname);
> 
> This part overlaps with the job that check_reflink() already does.
> Wouldn't it be more simple to have check_reflink do a one-time check
> with the automatic mode, enforcing to REFLINK_NEVER if cloning test did
> not pass when REFLINK_AUTO is used?  This would simplify
> transfer_relfile().

I'll look into that.

> The --help output of pg_upgrade has not been updated.

will fix

> I am not a fan of the --reflink interface to be honest, even if this
> maps to what cp offers, particularly because there is already the --link
> mode, and that the new option interacts with it.  Here is an idea of
> interface with an option named, named say, --transfer-method:
> - "link", maps to the existing --link, which is just kept as a
> deprecated alias.
> - "clone" is the new mode you propose.
> - "copy" is the default, and copies directly files.  This automatic mode
> also makes the implementation around transfer_relfile more difficult to
> apprehend in my opinion, and I would think that all the different
> transfer modes ought to be maintained within it.  pg_upgrade.h also has
> logic for such transfer modes.

I can see the argument for that.  But I don't understand where the
automatic mode fits into this.  I would like to keep all three modes
from my patch: copy, clone-if-possible, clone-or-fail, unless you want
to argue against that.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: settings to control SSL/TLS protocol version

2018-10-02 Thread Daniel Gustafsson
> On 2 Oct 2018, at 14:23, Peter Eisentraut  
> wrote:
> 
> On 01/10/2018 23:30, Daniel Gustafsson wrote:
>>>   ssl_min_protocol_version = 'TLSv1'
>>>   ssl_max_protocol_version = ‘any'
>> 
>> I don’t think ‘any’ is a clear name for a setting which means “the highest
>> supported version”.  How about ‘max_supported’ or something similar?
> 
> I can see the argument for an alternative, but your suggestion is a
> mouthful.

Agreed, but I can’t think of a better wording. Perhaps just ‘tls_max’?

>> +1 for using a min/max approach for setting the version, and it should be
>> trivial to add support for in the pending GnuTLS and Secure Transport 
>> patches.
> 
> AFAICT, in GnuTLS this is done via the "priorities" setting that also
> sets the ciphers.  There is no separate API for just the TLS version.
> It would be interesting to see how Secure Transport can do it.

Secure Transport has a fairly neat API for this, SSLSetProtocolVersionMax() and
SSLSetProtocolVersionMin() (available since Lion).

cheers ./daniel


Re: settings to control SSL/TLS protocol version

2018-10-02 Thread Peter Eisentraut
On 01/10/2018 23:30, Daniel Gustafsson wrote:
>>ssl_min_protocol_version = 'TLSv1'
>>ssl_max_protocol_version = ‘any'
> 
> I don’t think ‘any’ is a clear name for a setting which means “the highest
> supported version”.  How about ‘max_supported’ or something similar?

I can see the argument for an alternative, but your suggestion is a
mouthful.

> +1 for using a min/max approach for setting the version, and it should be
> trivial to add support for in the pending GnuTLS and Secure Transport patches.

AFAICT, in GnuTLS this is done via the "priorities" setting that also
sets the ciphers.  There is no separate API for just the TLS version.
It would be interesting to see how Secure Transport can do it.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg_ls_tmpdir()

2018-10-02 Thread Andrew Dunstan




On 10/02/2018 08:00 AM, Laurenz Albe wrote:
> Christoph Berg wrote:
> > Re: Bossart, Nathan 2018-10-01 <69fd7e51-2b13-41fd-9438-17395c73f...@amazon.com>
> > > > 1. Do we really need two functions, one without input argument
> > > >    to list the default tablespace?
> > > >    I think that anybody who uses such a function would
> > > >    be able to feed the OID of "pg_default".
> > >
> > > That seems reasonable to me.  I've removed the second version of the
> > > function.
> >
> > You could make that one argument have a DEFAULT value that makes it
> > act on pg_default.
>
> I looked at that, and I don't think you can add a DEFAULT for
> system functions installed during bootstrap.
> At least I failed in the attempt.



See the bottom of src/backend/catalog/system_views.sql starting around 
line 1010.


cheers

andrew


--
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Support for Secure Transport SSL library on macOS as OpenSSL alternative

2018-10-02 Thread Peter Eisentraut
On 26/09/2018 23:19, Daniel Gustafsson wrote:
> It’s not clear to me just how common it is to use GCC via homebrew on macOS.

I use that all the time.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Problem while updating a foreign table pointing to a partitioned table on foreign server

2018-10-02 Thread Etsuro Fujita

(2018/09/21 20:03), Etsuro Fujita wrote:

(2018/09/18 21:14), Kyotaro HORIGUCHI wrote:

At Fri, 14 Sep 2018 22:01:39 +0900, Etsuro Fujita wrote
in <5b9bb133.1060...@lab.ntt.co.jp>

I wrote a patch using
the Param-based approach, and compared the two approaches.



I don't think
there would be any warts as discussed above in the Param-based
approach for now. (That approach modifies the planner so that the
targetrel's tlist would contain Params as well as Vars/PHVs, so
actually, it breaks the planner assumption that a rel's tlist would
only include Vars/PHVs, but I don't find any issues on that at least
for now. Will look into that in more detail.)


I spent quite a bit of time looking into that, but I couldn't find any 
issues, including ones discussed in [1]:


* In contrib/postgres_fdw, the patch does the special handling of the 
Param representing the remote table OID in deparsing a remote SELECT 
query and building fdw_scan_tlist, but it wouldn't need the 
pull_var_clause change as proposed in [1].  And ISTM that that handling 
would be sufficient to avoid errors like 'variable not found in subplan 
target lists' as in [1].


* Params as extra target expressions can never be used as Pathkeys or 
something like that, so it seems unlikely that that approach would cause 
'could not find pathkey item to sort' errors in 
prepare_sort_from_pathkeys() as in [1].


* I checked other parts of the planner such as subselect.c and 
setrefs.c, but I couldn't find any issues.



What do you think about that?



I confirmed that a FOREIGN_PARAM_EXEC is evaluated and stored
into the parent node. For the mentioned Merge/Sort/ForeignScan
case, Sort node takes the parameter value via projection. I
didn't know PARAM_EXEC works that way. I consulted nodeNestLoop
but not fully understood.

So I think it works. I still don't think the expanded tupledesc is a
wart, but this is smarter than that. In addition, it
seems back-patchable. I must admit that yours is better.


I also think that approach would be back-patchable to PG9.3, where 
contrib/postgres_fdw landed with the writable functionality, so I'm 
inclined to vote for the Param-based approach.  Attached is an updated 
version of the patch.  Changes:


* Added this to use_physical_tlist():


One thing I noticed is: in any approach, I think use_physical_tlist
needs to be modified so that it disables doing build_physical_tlist for
a foreign scan in the case where the FDW added resjunk columns for
UPDATE/DELETE that are different from user/system columns of the foreign
table; else such columns would not be emitted from the foreign scan.


* Fixed a bug in conversion_error_callback() in contrib/postgres_fdw.c

* Simplified your contrib/postgres_fdw.c tests as discussed

* Revise code/comments a bit

* Added docs to fdwhandler.sgml

* Rebased the patch against the latest HEAD

Best regards,
Etsuro Fujita

[1] 
https://www.postgresql.org/message-id/flat/cakcux6ktu-8teflwtquuzbyfaza83vuzurd7c1yhc-yewyy...@mail.gmail.com
*** a/contrib/postgres_fdw/deparse.c
--- b/contrib/postgres_fdw/deparse.c
***
*** 130,135  static void deparseTargetList(StringInfo buf,
--- 130,136 
    Relation rel,
    bool is_returning,
    Bitmapset *attrs_used,
+   bool tableoid_needed,
    bool qualify_col,
    List **retrieved_attrs);
  static void deparseExplicitTargetList(List *tlist,
***
*** 901,906  build_tlist_to_deparse(RelOptInfo *foreignrel)
--- 902,926 
    PVC_RECURSE_PLACEHOLDERS));
  	}
  
+ 	/* Also, add the Param representing the remote table OID, if it exists. */
+ 	if (fpinfo->tableoid_param)
+ 	{
+ 		TargetEntry *tle;
+ 
+ 		/*
+ 		 * Core code should have contained the Param in the given relation's
+ 		 * reltarget.
+ 		 */
+ 		Assert(list_member(foreignrel->reltarget->exprs,
+ 		   fpinfo->tableoid_param));
+ 
+ 		tle = makeTargetEntry((Expr *) copyObject(fpinfo->tableoid_param),
+ 			  list_length(tlist) + 1,
+ 			  NULL,
+ 			  false);
+ 		tlist = lappend(tlist, tle);
+ 	}
+ 
  	return tlist;
  }
  
***
*** 1052,1058  deparseSelectSql(List *tlist, bool is_subquery, List **retrieved_attrs,
  		Relation	rel = heap_open(rte->relid, NoLock);
  
  		deparseTargetList(buf, rte, foreignrel->relid, rel, false,
! 		  fpinfo->attrs_used, false, retrieved_attrs);
  		heap_close(rel, NoLock);
  	}
  }
--- 1072,1080 
  		Relation	rel = heap_open(rte->relid, NoLock);
  
  		deparseTargetList(buf, rte, foreignrel->relid, rel, false,
! 		  fpinfo->attrs_used,
! 		  fpinfo->tableoid_param ? true : false,
! 		  false, retrieved_attrs);
  		heap_close(rel, NoLock);
  	}
  }
***
*** 1093,1098  deparseFromExpr(List *quals, deparse_expr_cxt *context)
--- 1115,1122 
   * This is used for both SELECT and RETURNING targetlists; the is_returning
   * parameter is true only for a RETURNING targetlist.
   *
+  * For SELECT, the target list contains remote 

Re: pg_ls_tmpdir()

2018-10-02 Thread Laurenz Albe
Christoph Berg wrote:
> Re: Bossart, Nathan 2018-10-01 
> <69fd7e51-2b13-41fd-9438-17395c73f...@amazon.com>
> > > 1. Do we really need two functions, one without input argument
> > > to list the default tablespace?
> > > I think that anybody who uses such a function would
> > > be able to feed the OID of "pg_default".
> > 
> > That seems reasonable to me.  I've removed the second version of the
> > function.
> 
> You could make that one argument have a DEFAULT value that makes it
> act on pg_default.

I looked at that, and I don't think you can add a DEFAULT for
system functions installed during bootstrap.
At least I failed in the attempt.

Yours,
Laurenz Albe




Re: has_column_privilege behavior (was Re: Assert failed in snprintf.c)

2018-10-02 Thread Stephen Frost
Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> Having said that, I'm fine with having it return NULL if the given
> >> attname matches an attisdropped column.
> 
> > Ok, that's really all I was asking about.
> 
> Ah, we were just talking past each other then :-(.  That behavior existed
> already, it wasn't something my draft patch introduced, so I was confused
> what you were talking about.

No worries.

> >> ... What I was on about is what
> >> happens when you write
> >> has_column_privilege('sometab'::regclass, 'somecol', 'SELECT');
> >> and sometab exists but somecol doesn't.
> 
> > Yeah, having that throw an error seems reasonable to me.
> 
> OK, so here's a patch that I think does the right things.
> 
> I noticed that has_foreign_data_wrapper_privilege() and some other
> recently-added members of the has_foo_privilege family had not gotten
> the word about not failing on bogus OIDs, so I also fixed those.

I just glanced through it pretty quickly, but looks good to me.

Thanks!

Stephen




Re: pg_ls_tmpdir()

2018-10-02 Thread Christoph Berg
Re: Bossart, Nathan 2018-10-01 <69fd7e51-2b13-41fd-9438-17395c73f...@amazon.com>
> > 1. Do we really need two functions, one without input argument
> >to list the default tablespace?
> >I think that anybody who uses such a function would
> >be able to feed the OID of "pg_default".
> 
> That seems reasonable to me.  I've removed the second version of the
> function.

You could make that one argument have a DEFAULT value that makes it
act on pg_default.

Christoph



Re: Early WIP/PoC for inlining CTEs

2018-10-02 Thread Andreas Karlsson

Hi,

Here is an updated patch which adds some simple syntax for adding the 
optimization barrier. For example:


WITH x AS MATERIALIZED (
   SELECT 1
)
SELECT * FROM x;

Andreas
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 21a2ef5ad3a..017b73e0a29 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -1888,7 +1888,7 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t
 
 -- join in CTE
 EXPLAIN (VERBOSE, COSTS OFF)
-WITH t (c1_1, c1_3, c2_1) AS (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
+WITH t (c1_1, c1_3, c2_1) AS MATERIALIZED (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
  QUERY PLAN  
 -
  Limit
@@ -1905,7 +1905,7 @@ WITH t (c1_1, c1_3, c2_1) AS (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2
Output: t.c1_1, t.c2_1, t.c1_3
 (12 rows)
 
-WITH t (c1_1, c1_3, c2_1) AS (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
+WITH t (c1_1, c1_3, c2_1) AS MATERIALIZED (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
  c1_1 | c2_1 
 --+--
   101 |  101
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 88c4cb4783f..3032aed34f9 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -501,8 +501,8 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t
 SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10 FOR SHARE;
 -- join in CTE
 EXPLAIN (VERBOSE, COSTS OFF)
-WITH t (c1_1, c1_3, c2_1) AS (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
-WITH t (c1_1, c1_3, c2_1) AS (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
+WITH t (c1_1, c1_3, c2_1) AS MATERIALIZED (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
+WITH t (c1_1, c1_3, c2_1) AS MATERIALIZED (SELECT t1.c1, t1.c3, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1)) SELECT c1_1, c2_1 FROM t ORDER BY c1_3, c1_1 OFFSET 100 LIMIT 10;
 -- ctid with whole-row reference
 EXPLAIN (VERBOSE, COSTS OFF)
 SELECT t1.ctid, t1, t2, t1.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10;
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 4db8142afaa..43f1b0e3f15 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -72,7 +72,7 @@ SELECT [ ALL | DISTINCT [ ON ( expressionand with_query is:
 
-with_query_name [ ( column_name [, ...] ) ] AS ( select | values | insert | update | delete )
+with_query_name [ ( column_name [, ...] ) ] AS [ MATERIALIZED ] ( select | values | insert | update | delete )
 
 TABLE [ ONLY ] table_name [ * ]
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 925cb8f3800..84435acb83e 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2542,6 +2542,7 @@ _copyCommonTableExpr(const CommonTableExpr *from)
 	COPY_NODE_FIELD(ctequery);
 	COPY_LOCATION_FIELD(location);
 	COPY_SCALAR_FIELD(cterecursive);
+	COPY_SCALAR_FIELD(ctematerialized);
 	COPY_SCALAR_FIELD(cterefcount);
 	COPY_NODE_FIELD(ctecolnames);
 	COPY_NODE_FIELD(ctecoltypes);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 3bb91c95958..65875d90201 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2794,6 +2794,7 @@ _equalCommonTableExpr(const CommonTableExpr *a, const CommonTableExpr *b)
 	COMPARE_NODE_FIELD(ctequery);
 	COMPARE_LOCATION_FIELD(location);
 	COMPARE_SCALAR_FIELD(cterecursive);
+	COMPARE_SCALAR_FIELD(ctematerialized);
 	COMPARE_SCALAR_FIELD(cterefcount);
 	COMPARE_NODE_FIELD(ctecolnames);
 	COMPARE_NODE_FIELD(ctecoltypes);
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 22dbae15d3b..56bfdcc5d10 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -3094,6 +3094,7 @@ _outCommonTableExpr(StringInfo str, const CommonTableExpr *node)
 	WRITE_NODE_FIELD(ctequery);
 	

Re: Slotification of partition tuple conversion

2018-10-02 Thread Amit Langote
Hi,

I looked at the patch.  Some comments.

On 2018/10/02 16:35, Andres Freund wrote:
> I wasn't quite happy yet with that patch.
> 
> - ConvertTupleSlot seems like a too generic name, it's very unclear it's
>  related to tuple mapping, rather than something internal to slots. I
>  went for execute_attr_map_slot (and renamed do_convert_tuple to
>  execute_attr_map_tuple, to match).
> 
>  I'd welcome a better name.

do_convert_tuple -> convert_tuple_using_map
ConvertTupleSlot -> convert_tuple_using_map_slot

?

> - I disliked inheritence_tupconv_map, it doesn't seem very clear why
>  this is named inheritence_* (typo aside). I went for
>  convert_tuples_by_name_map_if_req() - while I think this sounds
>  too much like it converts tuples itself it should be renamed with the
>  rest of the convert_tuples_by_* routines.
> 
>  I'd welcome a better name.

Agree about doing something about the convert_* names.  A comment near the
beginning of tupconvert.c says all of these convert_* routines are meant
as conversion "setup" routines:

/*
 * The conversion setup routines have the following common API:

So, maybe:

convert_tuples_by_position -> get_conversion_map_by_position
convert_tuples_by_name -> get_conversion_map_by_name
convert_tuples_by_name_map -> get_attr_map_for_conversion
convert_tuples_by_name_map_if_req -> get_attr_map_for_conversion_if_req

> - Combined the two patches, they seemed to largely affect related code

Hmm yeah, the code and data structures that my patch changed are only
related to the cases which involve tuple conversion.

I noticed the comment at the top of tupconvert.c requires updating:

 * of dropped columns.  There is some overlap of functionality with the
 * executor's "junkfilter" routines, but these functions work on bare
 * HeapTuples rather than TupleTableSlots.

Now we have functions that manipulate TupleTableSlots.

Thanks,
Amit




Re: transction_timestamp() inside of procedures

2018-10-02 Thread Peter Eisentraut
On 28/09/2018 09:35, Peter Eisentraut wrote:
>> That's certainly a good argument.  Note that if we implemented that the
>> transaction timestamp is advanced inside procedures, that would also
>> mean that the transaction timestamp as observed in pg_stat_activity
>> would move during VACUUM, for example.  That might or might not be
>> desirable.
> 
> Attached is a rough implementation.
> 
> I'd be mildly in favor of doing this, but we have mentioned tradeoffs in
> this thread.

So do we want to do this or not?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Tuple conversion naming

2018-10-02 Thread Amit Langote
On 2018/10/02 16:40, Andres Freund wrote:
>>> For executing them we have:
>>> - do_convert_tuple
>>> - ConvertPartitionTupleSlot
>>>
>>> which is two randomly differing spellings of related functionality,
>>> without the name indicating that they, for reasons, don't both use
>>> TupleConversionMap.
>>
>> Actually, ConvertPartitionTupleSlot performs two tasks: 1. performs the
>> conversion of the tuple in the input slot using do_convert_tuple, and 2.
>> switches the TupleTableSlot to be used from this point on to a
>> partition-specific one.
> 
> Yea, I had this mostly written with the view of the code after the
> "slotification of partition tuple conversion" thread.

Ah, okay.  I'm looking at the new patch you posted.

>>> I'm also not particularly happy about
>>> "do_convert_tuple", which is a pretty generic name.
>>>
>>> A lot of this probably made sense at some time, but we got to clean this
>>> up. We'll grow more partitioning related code, and this IMO makes
>>> reading the code harder - and given the change rate, that's something I
>>> frequently have to do.
>>
>> On the "Slotification of partition tuple conversion" thread, I suggested
>> that we try to decouple partitioning-related tuple conversion from
>> tupconvert.c, which may significantly improve things imho.  See the last
>> paragraph of my message here:
>>
>> https://www.postgresql.org/message-id/ed66156b-54fe-b5d2-10bf-3c73a8052e7e%40lab.ntt.co.jp
>>
>> Basically, Amit Khandekar proposed that we add another function with the
>> same functionality as do_convert_tuple to tupconvert.c that accepts and
>> returns TupleTableSlot instead of HeapTuple.  Per discussion, it turned
>> out that we won't need a full TupleConversionMap for partitioning-related
>> conversions if we start using this new function, so we could land the new
>> conversion function in execPartition.c instead of tupconvert.c.  Perhaps,
>> we could just open-code do_convert_tuple()'s functionality in the existing
>> ConvertPartitionTupleSlot().
>>
>> Of course, that is not to say we shouldn't do anything about the possibly
>> unhelpful names of the functions that tupconvert.c exports.
> 
> I'm not quite sure I see the point of a full decoupling of
> partitioning-related tuple conversion from tupconvert.c. Yes, the
> routines should be more clearly named, but why should the code be
> separate?

Hmm, okay.  After looking at your patch on the other thread, I feel less
strongly about this.

> I'm kinda wondering if we shouldn't have the tuple
> conversion functions just use the slot based functionality in the back,
> and just store those in the TupConversionMap.

Sorry, I didn't understand this.

Thanks,
Amit




Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-10-02 Thread Etsuro Fujita

(2018/10/02 16:45), Michael Paquier wrote:

On Tue, Oct 02, 2018 at 04:11:54PM +0900, Amit Langote wrote:

I tried to close it as being committed, but couldn't do so, because I
can't find Fujita-san's name in the list of committers in the CF app's
drop down box that lists all committers.


Indeed, Fujita-san has been added to the list.  And I switched this CF
entry at the same time.


Thank you guys!

Best regards,
Etsuro Fujita



Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-10-02 Thread Michael Paquier
On Tue, Oct 02, 2018 at 04:11:54PM +0900, Amit Langote wrote:
> I tried to close it as being committed, but couldn't do so, because I
> can't find Fujita-san's name in the list of committers in the CF app's
> drop down box that lists all committers.

Indeed, Fujita-san has been added to the list.  And I switched this CF
entry at the same time.
--
Michael


signature.asc
Description: PGP signature


Re: Sync ECPG scanner with core

2018-10-02 Thread John Naylor
It seems the patch tester is confused by the addition of the
demonstration diff file. I'm reattaching just the patchset to see if
it turns green.

-John Naylor
From 107e3c8a0b65b0196ea4370a724c8b2a1b0fdf79 Mon Sep 17 00:00:00 2001
From: John Naylor 
Date: Sun, 30 Sep 2018 12:51:41 +0700
Subject: [PATCH v1 1/4] First pass at syncing ECPG scanner with the core
 scanner.

Adjust whitespace and formatting, clean up some comments, and move
the block of whitespace rules.
---
 src/backend/parser/scan.l |   2 +-
 src/fe_utils/psqlscan.l   |   2 +-
 src/interfaces/ecpg/preproc/pgc.l | 773 --
 3 files changed, 408 insertions(+), 369 deletions(-)

diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l
index 950b8b8591..a2454732a1 100644
--- a/src/backend/parser/scan.l
+++ b/src/backend/parser/scan.l
@@ -192,7 +192,7 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
  * XXX perhaps \f (formfeed) should be treated as a newline as well?
  *
  * XXX if you change the set of whitespace characters, fix scanner_isspace()
- * to agree, and see also the plpgsql lexer.
+ * to agree.
  */
 
 space			[ \t\n\r\f]
diff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l
index fdf49875a7..25253b54ea 100644
--- a/src/fe_utils/psqlscan.l
+++ b/src/fe_utils/psqlscan.l
@@ -151,7 +151,7 @@ extern void psql_yyset_column(int column_no, yyscan_t yyscanner);
  * XXX perhaps \f (formfeed) should be treated as a newline as well?
  *
  * XXX if you change the set of whitespace characters, fix scanner_isspace()
- * to agree, and see also the plpgsql lexer.
+ * to agree.
  */
 
 space			[ \t\n\r\f]
diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l
index 0792118cfe..b96f17ca20 100644
--- a/src/interfaces/ecpg/preproc/pgc.l
+++ b/src/interfaces/ecpg/preproc/pgc.l
@@ -108,16 +108,19 @@ static struct _if_value
  * We use exclusive states for quoted strings, extended comments,
  * and to eliminate parsing troubles for numeric strings.
  * Exclusive states:
- *	 bit string literal
- *	 extended C-style comments in C
- *	 extended C-style comments in SQL
- *	 delimited identifiers (double-quoted identifiers) - thomas 1997-10-27
- *	 hexadecimal numeric string - thomas 1997-11-16
- *	 standard quoted strings - thomas 1997-07-30
- *	 standard quoted strings in C - michael
- *	 extended quoted strings (support backslash escape sequences)
- *	 national character quoted strings
+ *   bit string literal
+ *   extended C-style comments in C
+ *   extended C-style comments in SQL
+ *   delimited identifiers (double-quoted identifiers)
+ *  
+ *   hexadecimal numeric string
+ *   standard quoted strings
+ *   extended quoted strings (support backslash escape sequences)
+ *   national character quoted strings
+ *   standard quoted strings in C
  *   $foo$ quoted strings
+ *  
+ *  
  *   quoted identifier with Unicode escapes
  *   quoted string with Unicode escapes
  */
@@ -138,6 +141,48 @@ static struct _if_value
 %x xui
 %x xus
 
+/*
+ * In order to make the world safe for Windows and Mac clients as well as
+ * Unix ones, we accept either \n or \r as a newline.  A DOS-style \r\n
+ * sequence will be seen as two successive newlines, but that doesn't cause
+ * any problems.  SQL-style comments, which start with -- and extend to the
+ * next newline, are treated as equivalent to a single whitespace character.
+ *
+ * NOTE a fine point: if there is no newline following --, we will absorb
+ * everything to the end of the input as a comment.  This is correct.  Older
+ * versions of Postgres failed to recognize -- as a comment if the input
+ * did not end with a newline.
+ *
+ * XXX perhaps \f (formfeed) should be treated as a newline as well?
+ *
+ * XXX if you change the set of whitespace characters, fix ecpg_isspace()
+ * to agree.
+ */
+
+space			[ \t\n\r\f]
+horiz_space		[ \t\f]
+newline			[\n\r]
+non_newline		[^\n\r]
+
+comment			("--"{non_newline}*)
+
+whitespace		({space}+|{comment})
+
+/*
+ * SQL requires at least one newline in the whitespace separating
+ * string literals that are to be concatenated.  Silly, but who are we
+ * to argue?  Note that {whitespace_with_newline} should not have * after
+ * it, whereas {whitespace} should generally have a * after it...
+ */
+
+horiz_whitespace		({horiz_space}|{comment})
+whitespace_with_newline	({horiz_whitespace}*{newline}{whitespace}*)
+
+quote			'
+quotestop		{quote}{whitespace}*
+quotecontinue	{quote}{whitespace_with_newline}{quote}
+quotefail		{quote}{whitespace}*"-"
+
 /* Bit string
  */
 xbstart			[bB]{quote}
@@ -216,17 +261,17 @@ xdcinside		({xdcqq}|{xdcqdq}|{xdcother})
  * The "extended comment" syntax closely resembles allowable operator syntax.
  * The tricky part here is to get lex to recognize a string starting with
  * slash-star as a comment, when interpreting it as an operator would produce
- * a longer match --- remember lex will prefer a longer match!	Also, if we
+ * a longer match 

Re: Tuple conversion naming

2018-10-02 Thread Andres Freund
Hi,

On 2018-10-02 16:18:19 +0900, Amit Langote wrote:
> Hi,
> 
> I agree that some clean up might be in order, but want to clarify a few
> points.
> 
> On 2018/10/02 15:11, Andres Freund wrote:
> > Hi,
> > 
> > The naming around partition related tuple conversions is imo worthy of
> > improvement.
> 
> Note that tuple conversion functionality in tupconvert.c has existed even
> before partitioning, although partitioning may have contributed to
> significant expansion of its usage.

Fair enough, doesn't really change my point tho :)


> > For executing them we have:
> > - do_convert_tuple
> > - ConvertPartitionTupleSlot
> >
> > which is two randomly differing spellings of related functionality,
> > without the name indicating that they, for reasons, don't both use
> > TupleConversionMap.
> 
> Actually, ConvertPartitionTupleSlot performs two tasks: 1. performs the
> conversion of the tuple in the input slot using do_convert_tuple, and 2.
> switches the TupleTableSlot to be used from this point on to a
> partition-specific one.

Yea, I had this mostly written with the view of the code after the
"slottification of partition tuple conversion" thread.


> > I'm also not particularly happy about
> > "do_convert_tuple", which is a pretty generic name.
> > 
> > A lot of this probably made sense at some time, but we got to clean this
> > up. We'll grow more partitioning related code, and this IMO makes
> > reading the code harder - and given the change rate, that's something I
> > frequently have to do.
> 
> On the "Slotification of partition tuple conversion" thread, I suggested
> that we try to decouple partitioning-related tuple conversion from
> tupconvert.c, which may significantly improve things imho.  See the last
> paragraph of my message here:
> 
> https://www.postgresql.org/message-id/ed66156b-54fe-b5d2-10bf-3c73a8052e7e%40lab.ntt.co.jp
> 
> Basically, Amit Khandekar proposed that we add another function with the
> same functionality as do_convert_tuple to tupconvert.c that accepts and
> returns TupleTableSlot instead of HeapTuple.  Per discussion, it turned
> out that we won't need a full TupleConversionMap for partitioning-related
> conversions if we start using this new function, so we could land the new
> conversion function in execPartition.c instead of tupconvert.c.  Perhaps,
> we could just open-code do_convert_tuple()'s functionality in the existing
> ConvertPartitionTupleSlot().
> 
> Of course, that is not to say we shouldn't do anything about the possibly
> unhelpful names of the functions that tupconvert.c exports.

I'm not quite sure I see the point of a full decoupling of
partitioning-related tuple conversion from tupconvert.c. Yes, the
routines should be more clearly named, but why should the code be
separate?   I'm kinda wondering if we shouldn't have the tuple
conversion functions just use the slot based functionality in the back,
and just store those in the TupConversionMap.

Greetings,

Andres Freund



Re: Slotification of partition tuple conversion

2018-10-02 Thread Andres Freund
Hi,

On 2018-09-28 15:36:00 +0530, Amit Khandekar wrote:
> On Wed, 26 Sep 2018 at 03:33, Andres Freund  wrote:
> >
> > Hi Amit,
> >
> > Could you rebase this patch, it doesn't apply anymore.
>
> Thanks for informing. Attached are both mine and Amit Langote's patch
> rebased and attached ...

I wasn't quite happy yet with that patch.

- ConvertTupleSlot seems like a too generic name, it's very unclear it's
 related to tuple mapping, rather than something internal to slots. I
 went for execute_attr_map_slot (and renamed do_convert_tuple to
 execute_attr_map_tuple, to match).

 I'd welcome a better name.

- I disliked inheritence_tupconv_map, it doesn't seem very clear why
 this is named inheritence_* (typo aside). I went for
 convert_tuples_by_name_map_if_req() - while I think this sounds
 too much like it converts tuples itself, it should be renamed with the
 rest of the convert_tuples_by_* routines.

 I'd welcome a better name.

- Combined the two patches, they seemed to largely affect related code


I'm pretty tired right now, so I'm sure the patch, as attached, contains
a few flubs.  I'll try to get this committed tomorrow morning PST.

- Andres
>From 39591cf622ec18ee095448df3888295a3060ac1c Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Tue, 2 Oct 2018 00:25:44 -0700
Subject: [PATCH] Change partition mapping code to use slots more widely and
 consistently.

Author: Amit Khandekar and Amit Langote (combined patches), editorialized by me
Reviewed-By: Amit Langote, Amit Khandekar, Andres Freund
Discussion:
https://postgr.es/m/caj3gd9fr0wrneae8vqffntyons_uffprpqxhnd9q42vzb+j...@mail.gmail.com
https://postgr.es/m/e4f9d743-cd4b-efb0-7574-da21d86a7f36%40lab.ntt.co.jp
Backpatch: -
---
 src/backend/access/common/tupconvert.c | 176 ++---
 src/backend/commands/analyze.c |   2 +-
 src/backend/commands/copy.c|  29 +++-
 src/backend/commands/trigger.c |   4 +-
 src/backend/executor/execExprInterp.c  |   2 +-
 src/backend/executor/execMain.c|  79 ++-
 src/backend/executor/execPartition.c   | 101 +-
 src/backend/executor/nodeModifyTable.c |  24 ++--
 src/include/access/tupconvert.h|   8 +-
 src/include/executor/execPartition.h   |  18 ++-
 src/pl/plpgsql/src/pl_exec.c   |  14 +-
 11 files changed, 270 insertions(+), 187 deletions(-)

diff --git a/src/backend/access/common/tupconvert.c b/src/backend/access/common/tupconvert.c
index 3bc67b846d4..68d977526bf 100644
--- a/src/backend/access/common/tupconvert.c
+++ b/src/backend/access/common/tupconvert.c
@@ -22,6 +22,7 @@
 
 #include "access/htup_details.h"
 #include "access/tupconvert.h"
+#include "executor/tuptable.h"
 #include "utils/builtins.h"
 
 
@@ -31,7 +32,7 @@
  * The setup routine checks whether the given source and destination tuple
  * descriptors are logically compatible.  If not, it throws an error.
  * If so, it returns NULL if they are physically compatible (ie, no conversion
- * is needed), else a TupleConversionMap that can be used by do_convert_tuple
+ * is needed), else a TupleConversionMap that can be used by execute_attr_map_tuple
  * to perform the conversion.
  *
  * The TupleConversionMap, if needed, is palloc'd in the caller's memory
@@ -214,55 +215,13 @@ convert_tuples_by_name(TupleDesc indesc,
 	TupleConversionMap *map;
 	AttrNumber *attrMap;
 	int			n = outdesc->natts;
-	int			i;
-	bool		same;
 
 	/* Verify compatibility and prepare attribute-number map */
-	attrMap = convert_tuples_by_name_map(indesc, outdesc, msg);
+	attrMap = convert_tuples_by_name_map_if_req(indesc, outdesc, msg);
 
-	/*
-	 * Check to see if the map is one-to-one, in which case we need not do a
-	 * tuple conversion.  We must also insist that both tupdescs either
-	 * specify or don't specify an OID column, else we need a conversion to
-	 * add/remove space for that.  (For some callers, presence or absence of
-	 * an OID column perhaps would not really matter, but let's be safe.)
-	 */
-	if (indesc->natts == outdesc->natts &&
-		indesc->tdhasoid == outdesc->tdhasoid)
+	if (attrMap == NULL)
 	{
-		same = true;
-		for (i = 0; i < n; i++)
-		{
-			Form_pg_attribute inatt;
-			Form_pg_attribute outatt;
-
-			if (attrMap[i] == (i + 1))
-continue;
-
-			/*
-			 * If it's a dropped column and the corresponding input column is
-			 * also dropped, we needn't convert.  However, attlen and attalign
-			 * must agree.
-			 */
-			inatt = TupleDescAttr(indesc, i);
-			outatt = TupleDescAttr(outdesc, i);
-			if (attrMap[i] == 0 &&
-inatt->attisdropped &&
-inatt->attlen == outatt->attlen &&
-inatt->attalign == outatt->attalign)
-continue;
-
-			same = false;
-			break;
-		}
-	}
-	else
-		same = false;
-
-	if (same)
-	{
-		/* Runtime conversion is not needed */
-		pfree(attrMap);
+		/* runtime conversion is not needed */
 		return NULL;
 	}
 
@@ -367,11 +326,78 @@ convert_tuples_by_name_map(TupleDesc indesc,
 	return attrMap;
 }
 
+/*
+ * Returns mapping created by 

Re: Tuple conversion naming

2018-10-02 Thread Amit Langote
Hi,

I agree that some clean up might be in order, but want to clarify a few
points.

On 2018/10/02 15:11, Andres Freund wrote:
> Hi,
> 
> The naming around partition related tuple conversions is imo worthy of
> improvement.

Note that tuple conversion functionality in tupconvert.c has existed even
before partitioning, although partitioning may have contributed to
significant expansion of its usage.

> For building tuple conversion maps we have:
> - convert_tuples_by_name
> - convert_tuples_by_name_map
> - convert_tuples_by_position
> - ExecSetupChildParentMapForLeaf
> - TupConvMapForLeaf
> - free_conversion_map

Of these, ExecSetupChildParentMapForLeaf and TupConvMapForLeaf go away
with the patch posted in the "Speeding up INSERTs and UPDATEs to
partitioned tables" thread:

https://www.postgresql.org/message-id/CAKJS1f-SnNXUS0_2wOn-7ZsuvrmQd_6_t-uG8pyUKfdnE9_y-A%40mail.gmail.com

To summarize what the above patch does and why it removed those
functions:  those functions allocate the memory needed to hold
TupleConversionMap pointers for *all* partitions in a given partition
tree, but the patch refactored the tuple routing initialization code such
that such allocations (the aforementioned and quite a few others) and the
functions are redundant.

> I've a bunch of problems with this:
> 1) convert_tuples_by_name etc sound like they actually perform tuple
>conversion, rather than just prepare for it
> 2) convert_tuples_by_name_map returns not TupleConversionMap, but an
>array. But convert_tuples_by_name does return TupleConversionMap.
> 3) The naming is pretty inconsistent. free_conversion_map
>vs. convert_tuples_by_name, but both deal with TupleConversionMap?

Yeah, I had similar thoughts in past.

> For executing them we have:
> - do_convert_tuple
> - ConvertPartitionTupleSlot
>
> which is two randomly differing spellings of related functionality,
> without the name indicating that they, for reasons, don't both use
> TupleConversionMap.

Actually, ConvertPartitionTupleSlot performs two tasks: 1. performs the
conversion of the tuple in the input slot using do_convert_tuple, and 2.
switches the TupleTableSlot to be used from this point on to a
partition-specific one.

> I'm also not particularly happy about
> "do_convert_tuple", which is a pretty generic name.
> 
> A lot of this probably made sense at some time, but we got to clean this
> up. We'll grow more partitioning related code, and this IMO makes
> reading the code harder - and given the change rate, that's something I
> frequently have to do.

On the "Slotification of partition tuple conversion" thread, I suggested
that we try to decouple partitioning-related tuple conversion from
tupconvert.c, which may significantly improve things imho.  See the last
paragraph of my message here:

https://www.postgresql.org/message-id/ed66156b-54fe-b5d2-10bf-3c73a8052e7e%40lab.ntt.co.jp

Basically, Amit Khandekar proposed that we add another function with the
same functionality as do_convert_tuple to tupconvert.c that accepts and
returns TupleTableSlot instead of HeapTuple.  Per discussion, it turned
out that we won't need a full TupleConversionMap for partitioning-related
conversions if we start using this new function, so we could land the new
conversion function in execPartition.c instead of tupconvert.c.  Perhaps,
we could just open-code do_convert_tuple()'s functionality in the existing
ConvertPartitionTupleSlot().

Of course, that is not to say we shouldn't do anything about the possibly
unhelpful names of the functions that tupconvert.c exports.

Thanks,
Amit




Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-10-02 Thread Amit Langote
On 2018/08/31 21:40, Etsuro Fujita wrote:
> (2018/08/31 21:30), Jonathan S. Katz wrote:
>> Thank you for taking care of that and for committing the patch. I have
>> now closed this issues on the open items list.
> 
> Thanks!

I noticed that the CF entry for this was not closed.  As of this morning,
it's been moved to the next 2018-11 CF:

https://commitfest.postgresql.org/20/1554/

I tried to close it as being committed, but couldn't do so, because I
can't find Fujita-san's name in the list of committers in the CF app's
drop down box that lists all committers.

Thanks,
Amit




Re: shared-memory based stats collector

2018-10-02 Thread Kyotaro HORIGUCHI
The previous patch doesn't work...

At Thu, 27 Sep 2018 22:00:49 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20180927.220049.168546206.horiguchi.kyot...@lab.ntt.co.jp>
> - 0001 to 0006 is rebased version of v4.
> - 0007 adds conditional locking to dshash
> 
> - 0008 is the no-UDP stats collector.
> 
> If required lock is not acquired for some stats items, report
> functions immediately return after storing the values locally. The
> stored values are merged with later calls. Explicitly calling
> pgstat_cleanup_pending_stat() at a convenient timing tries to
> apply the pending values, but the function is not called anywhere
> for now.
> 
> stats collector process is used only to save and load saved stats
> files and create shared memory for stats. I'm going to remove
> stats collector.
> 
> I'll continue working this way.

It neither works nor even compiles, since I failed to include some
changes. The attached v6-0008 at least compiles and works.

0001-0007 are not attached since they are still applicable on
master HEAD with offsets.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From ffbe9d78239352df9ff9edac3e66675117703d88 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Thu, 27 Sep 2018 21:36:06 +0900
Subject: [PATCH 8/8] Ultra PoC of full-shared-memory stats collector.

This patch is a superultra PoC of a full-shared-memory stats collector,
which means UDP is no longer involved in the stats collector mechanism.
Some statistics items can be postponed when the required lock is not
available, and applying them can be retried by calling
pgstat_cleanup_pending_stat() at a convenient time (not called in this
patch).
---
 src/backend/access/transam/xlog.c |4 +-
 src/backend/postmaster/checkpointer.c |8 +-
 src/backend/postmaster/pgstat.c   | 2424 -
 src/backend/storage/buffer/bufmgr.c   |8 +-
 src/backend/storage/lmgr/deadlock.c   |2 +-
 src/backend/utils/adt/pgstatfuncs.c   |2 +-
 src/include/pgstat.h  |  357 +
 7 files changed, 936 insertions(+), 1869 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 7375a78ffc..980c7e9e0e 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -8604,9 +8604,9 @@ LogCheckpointEnd(bool restartpoint)
 		_secs, _usecs);
 
 	/* Accumulate checkpoint timing summary data, in milliseconds. */
-	BgWriterStats.m_checkpoint_write_time +=
+	BgWriterStats.checkpoint_write_time +=
 		write_secs * 1000 + write_usecs / 1000;
-	BgWriterStats.m_checkpoint_sync_time +=
+	BgWriterStats.checkpoint_sync_time +=
 		sync_secs * 1000 + sync_usecs / 1000;
 
 	/*
diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 1a033093c5..62e1ee7ace 100644
--- a/src/backend/postmaster/checkpointer.c
+++ b/src/backend/postmaster/checkpointer.c
@@ -376,7 +376,7 @@ CheckpointerMain(void)
 		{
 			checkpoint_requested = false;
 			do_checkpoint = true;
-			BgWriterStats.m_requested_checkpoints++;
+			BgWriterStats.requested_checkpoints++;
 		}
 		if (shutdown_requested)
 		{
@@ -402,7 +402,7 @@ CheckpointerMain(void)
 		if (elapsed_secs >= CheckPointTimeout)
 		{
 			if (!do_checkpoint)
-BgWriterStats.m_timed_checkpoints++;
+BgWriterStats.timed_checkpoints++;
 			do_checkpoint = true;
 			flags |= CHECKPOINT_CAUSE_TIME;
 		}
@@ -1296,8 +1296,8 @@ AbsorbFsyncRequests(void)
 	LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
 
 	/* Transfer stats counts into pending pgstats message */
-	BgWriterStats.m_buf_written_backend += CheckpointerShmem->num_backend_writes;
-	BgWriterStats.m_buf_fsync_backend += CheckpointerShmem->num_backend_fsync;
+	BgWriterStats.buf_written_backend += CheckpointerShmem->num_backend_writes;
+	BgWriterStats.buf_fsync_backend += CheckpointerShmem->num_backend_fsync;
 
 	CheckpointerShmem->num_backend_writes = 0;
 	CheckpointerShmem->num_backend_fsync = 0;
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index a3d5f4856f..339425720f 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -93,6 +93,23 @@
 #define PGSTAT_FUNCTION_HASH_SIZE	512
 
 
+#define PGSTAT_MAX_QUEUE_LEN	100
+
+/*
+ * Operation mode of pgstat_get_db_entry.
+ */
+#define PGSTAT_TABLE_READ	0
+#define PGSTAT_TABLE_WRITE	1
+#define PGSTAT_TABLE_CREATE 2
+#define	PGSTAT_TABLE_NOWAIT 4
+
+typedef enum
+{
+	PGSTAT_TABLE_NOT_FOUND,
+	PGSTAT_TABLE_FOUND,
+	PGSTAT_TABLE_LOCK_FAILED
+} pg_stat_table_result_status;
+
 /* --
  * Total number of backends including auxiliary
  *
@@ -119,16 +136,12 @@ int			pgstat_track_activity_query_size = 1024;
  * Stored directly in a stats message structure so it can be sent
  * without needing to copy things around.  We assume this inits to zeroes.
  */
-PgStat_MsgBgWriter BgWriterStats;
+PgStat_BgWriter BgWriterStats;
 
 /* --
  * Local data
  * --
  */
-NON_EXEC_STATIC 

Re: inconsistency and inefficiency in setup_conversion()

2018-10-02 Thread John Naylor
On 10/2/18, Michael Paquier  wrote:
> v4 does not apply anymore.  I am moving this patch to next commit fest,
> waiting on author.

v5 attached.

-John Naylor
From ea0a180bde325b0383ce7f0b3d48d1ce9e941393 Mon Sep 17 00:00:00 2001
From: John Naylor 
Date: Mon, 2 Jul 2018 12:52:07 +0700
Subject: [PATCH v5 1/2] Add pg_language lookup.

This didn't seem worth doing before, but an upcoming commit will add
88 entries to pg_proc whose language is 'c'. In addition, we can
now use 'internal' in Gen_fmgrtab.pl instead of looking up the OID.
This simplifies that script and its Makefile a bit.
---
 doc/src/sgml/bki.sgml|  6 +-
 src/backend/catalog/genbki.pl|  8 +++
 src/backend/utils/Gen_fmgrtab.pl |  9 ++-
 src/backend/utils/Makefile   |  8 +--
 src/include/catalog/pg_proc.dat  | 98 
 src/include/catalog/pg_proc.h|  2 +-
 src/tools/msvc/Solution.pm   |  4 +-
 7 files changed, 68 insertions(+), 67 deletions(-)

diff --git a/doc/src/sgml/bki.sgml b/doc/src/sgml/bki.sgml
index 0fb309a1bd..176fa8bf4d 100644
--- a/doc/src/sgml/bki.sgml
+++ b/doc/src/sgml/bki.sgml
@@ -408,8 +408,8 @@
 that's error-prone and hard to understand, so for frequently-referenced
 catalogs, genbki.pl provides mechanisms to write
 symbolic references instead.  Currently this is possible for references
-to access methods, functions, operators, opclasses, opfamilies, and
-types.  The rules are as follows:
+to access methods, functions, languages, operators, opclasses, opfamilies,
+and types.  The rules are as follows:

 

@@ -419,7 +419,7 @@
   Use of symbolic references is enabled in a particular catalog column
   by attaching BKI_LOOKUP(lookuprule)
   to the column's definition, where lookuprule
-  is pg_am, pg_proc,
+  is pg_am, pg_proc, pg_language,
   pg_operator, pg_opclass,
   pg_opfamily, or pg_type.
   BKI_LOOKUP can be attached to columns of
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
index 649200260a..f321662695 100644
--- a/src/backend/catalog/genbki.pl
+++ b/src/backend/catalog/genbki.pl
@@ -167,6 +167,13 @@ foreach my $row (@{ $catalog_data{pg_am} })
 	$amoids{ $row->{amname} } = $row->{oid};
 }
 
+# language OID lookup
+my %langoids;
+foreach my $row (@{ $catalog_data{pg_language} })
+{
+	$langoids{ $row->{lanname} } = $row->{oid};
+}
+
 # opclass OID lookup
 my %opcoids;
 foreach my $row (@{ $catalog_data{pg_opclass} })
@@ -241,6 +248,7 @@ foreach my $row (@{ $catalog_data{pg_type} })
 # Map catalog name to OID lookup.
 my %lookup_kind = (
 	pg_am   => \%amoids,
+	pg_language => \%langoids,
 	pg_opclass  => \%opcoids,
 	pg_operator => \%operoids,
 	pg_opfamily => \%opfoids,
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index fa30436895..b288c5e5fd 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -59,6 +59,8 @@ die "No include path; you must specify -I.\n" if !$include_path;
 # Note: We pass data file names as arguments and then look for matching
 # headers to parse the schema from. This is backwards from genbki.pl,
 # but the Makefile dependencies look more sensible this way.
+# We currently only need pg_proc, but retain the possibility of reading
+# more than one data file.
 my %catalogs;
 my %catalog_data;
 foreach my $datfile (@input_files)
@@ -78,13 +80,10 @@ foreach my $datfile (@input_files)
 	$catalog_data{$catname} = Catalog::ParseData($datfile, $schema, 0);
 }
 
-# Fetch some values for later.
+# Fetch a value for later.
 my $FirstBootstrapObjectId =
   Catalog::FindDefinedSymbol('access/transam.h', $include_path,
 	'FirstBootstrapObjectId');
-my $INTERNALlanguageId =
-  Catalog::FindDefinedSymbolFromData($catalog_data{pg_language},
-	'INTERNALlanguageId');
 
 # Collect certain fields from pg_proc.dat.
 my @fmgr = ();
@@ -94,7 +93,7 @@ foreach my $row (@{ $catalog_data{pg_proc} })
 	my %bki_values = %$row;
 
 	# Select out just the rows for internal-language procedures.
-	next if $bki_values{prolang} ne $INTERNALlanguageId;
+	next if $bki_values{prolang} ne 'internal';
 
 	push @fmgr,
 	  {
diff --git a/src/backend/utils/Makefile b/src/backend/utils/Makefile
index e797539d09..da40f6b6c4 100644
--- a/src/backend/utils/Makefile
+++ b/src/backend/utils/Makefile
@@ -31,15 +31,11 @@ generated-header-symlinks: $(top_builddir)/src/include/utils/header-stamp $(top_
 
 $(SUBDIRS:%=%-recursive): fmgr-stamp errcodes.h
 
-FMGR_DATA := $(addprefix $(top_srcdir)/src/include/catalog/,\
-	pg_language.dat pg_proc.dat \
-	)
-
 # fmgr-stamp records the last time we ran Gen_fmgrtab.pl.  We don't rely on
 # the timestamps of the individual output files, because the Perl script
 # won't update them if they didn't change (to avoid unnecessary recompiles).
-fmgr-stamp: Gen_fmgrtab.pl $(catalogdir)/Catalog.pm $(FMGR_DATA) $(top_srcdir)/src/include/access/transam.h
-	$(PERL) -I $(catalogdir) $< -I 

Re: [WIP PATCH] Index scan offset optimisation using visibility map

2018-10-02 Thread Michail Nikolaev
Hello.

> Okay, it has been more than a couple of days and the patch has not been
> updated, so I am marking as returned with feedback.

Yes, more than a couple of days have passed, but there has also been almost
no feedback since 20 Mar, when the patch design was changed :)
But seriously - I am still working on it and was digging into it just last night
( https://github.com/michail-nikolaev/postgres/commits/index_only_fetch )

The main issue currently is cost estimation. In the right case (10m relation,
0.5 index correlation, 0.1 selectivity for the filter) it works like a charm,
with a 200%-400% performance boost.
But the same case with 1.0 selectivity gives 96% compared to the baseline. So,
to do correct cost estimation I need the correct selectivity of the filter
predicate.

Currently I am thinking of calculating it on the fly and switching to the new
method if the selectivity is small. But it feels a little awkward.

Thanks,
Michail.


Re: pgsql: Improve autovacuum logging for aggressive and anti-wraparound ru

2018-10-02 Thread Michael Paquier
On Tue, Oct 02, 2018 at 01:18:01PM +0900, Masahiko Sawada wrote:
> I'm not sure both styles would be appropriate style in the postgres
> code so I would rather add elog(ERROR) instead. Thought?

My brain is rather fried for the rest of the day...  But we could just
be looking at using USE_ASSERT_CHECKING.  Thoughts from others are
welcome.
--
Michael


signature.asc
Description: PGP signature


Re: Commit fest 2018-09

2018-10-02 Thread Michael Paquier
On Thu, Sep 06, 2018 at 01:50:23PM -0700, Michael Paquier wrote:
> We are already in September, hence it is time to move on with the 2nd
> commit fest for v12.  As usual, there are many patches waiting for
> review and integration:
> https://commitfest.postgresql.org/19/
> With a couple of days of delay, I have switched the CF app as
> in-progress on Tuesday AOE.  Please note that I am fine to act as CFM
> and help with the patch classification.  If somebody wishes to act as
> such and is familiar with the community process, feel free to reply to
> this thread to manifest yourself.

And more or less one month later here we are, with the following final
score:
Committed: 46
Moved to next CF: 127
Rejected: 5
Returned with Feedback: 37
Total: 215

I have spent some time classifying the existing entries, roughly moving
to the next commit fest the entries which had rather fresh reviews as
well as the patches which got no reviews.  Patches which did not apply
and had no reviews have been moved to the next CF, waiting on their
respective author(s).  Patches without much activity, or waiting for
author input for more than a couple of weeks, have been returned with
feedback.

Thanks to all who participated in the patch review, authoring, and
everybody else who helped in making the different patches move forward.
--
Michael




Re: [HACKERS] Can ICU be used for a database's default sort order?

2018-10-02 Thread Michael Paquier
On Wed, Sep 12, 2018 at 01:06:12PM +1200, Thomas Munro wrote:
> The ideal scope would be to track all referenced collation versions on
> every index, and only update them at CREATE INDEX or REINDEX time
> (also, as discussed in some other thread, CHECK constraints and
> partition keys might be invalidated and should in theory also carry
> versions that can only be updated by running a hypothetical RECHECK or
> REPARTITION command).  Then a shared pg_collation catalog would make
> perfect sense, and there would be no need for it to have a collversion
> column at all, or an ALTER COLLATION ... REFRESH VERSION command, and
> therefore there would be no way to screw it up by REFRESHing the
> VERSION without having really fixed the problem.

Please note that the latest patch set does not apply, so this has been
switched to commit fest 2018-11, waiting on author for a rebase.
--
Michael




Re: [HACKERS] pgbench - allow to store select results into variables

2018-10-02 Thread Fabien COELHO


Bonjour Michaël,


> On Tue, Aug 14, 2018 at 01:38:21PM +0200, Fabien COELHO wrote:
>> I re-attached the v19 for a check on the list.
>
> You are marked as the committer of this patch in the CF app since last
> April and this patch is marked as ready for committer.  Are you planning
> to look at it soon?

Here is yet another rebase.

Whether Stephen will have time to commit this patch is unclear. I'd 
suggest that I remove his name from the committer column so that another 
committer may consider it. What do you think?

For me this patch is fundamental because it allows a test script to 
interact both ways with the database and to act on database data (in 
particular thanks to \if and the expressions already added), and also 
because actually retrieving results is a key benchmark compliance 
constraint that pgbench does not currently meet.


--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 8c464c9d42..d9cdec15da 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -942,6 +942,51 @@ pgbench  options  d
   
 
   
+   
+
+ \cset [prefix] or
+ \gset [prefix]
+
+
+
+ 
+  These commands may be used to end SQL queries, replacing a semicolon.
+  \cset replaces an embedded semicolon (\;) within
+  a compound SQL command, and \gset replaces a final
+  (;) semicolon which ends the SQL command. 
+ 
+
+ 
+  When these commands are used, the preceding SQL query is expected to
+  return one row, the columns of which are stored into variables named after
+  column names, and prefixed with prefix if provided.
+ 
+
+ 
+  The following example puts the final account balance from the first query
+  into variable abalance, and fills variables
+  one, two and
+  p_three with integers from a compound query.
+
+UPDATE pgbench_accounts
+  SET abalance = abalance + :delta
+  WHERE aid = :aid
+  RETURNING abalance \gset
+-- compound of two queries
+SELECT 1 AS one, 2 AS two \cset
+SELECT 3 AS three \gset p_
+
+ 
+
+ 
+  
+\cset and \gset commands do not work when
+empty SQL queries appear within a compound SQL command.
+  
+ 
+
+   
+

 \if expression
 \elif expression
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 436764b9c9..eb9478b3d5 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -467,12 +467,15 @@ static const char *QUERYMODE[] = {"simple", "extended", "prepared"};
 
 typedef struct
 {
-	char	   *line;			/* text of command line */
+	char	   *first_line;		/* first line for short display */
+	char	   *lines;			/* full multi-line text of command */
 	int			command_num;	/* unique index of this Command struct */
 	int			type;			/* command type (SQL_COMMAND or META_COMMAND) */
 	MetaCommand meta;			/* meta command identifier, or META_NONE */
 	int			argc;			/* number of command words */
 	char	   *argv[MAX_ARGS]; /* command word list */
+	int			compound;   /* last compound command (number of \;) */
+	char	  **gset;   /* per-compound command prefix */
 	PgBenchExpr *expr;			/* parsed expression, if needed */
 	SimpleStats stats;			/* time spent in this command */
 } Command;
@@ -1681,6 +1684,107 @@ valueTruth(PgBenchValue *pval)
 	}
 }
 
+/* read all responses from backend, storing into variable or discarding */
+static bool
+read_response(CState *st, char **gset)
+{
+	PGresult   *res;
+	int			compound = 0;
+
+	while ((res = PQgetResult(st->con)) != NULL)
+	{
+		switch (PQresultStatus(res))
+		{
+			case PGRES_COMMAND_OK: /* non-SELECT commands */
+			case PGRES_EMPTY_QUERY: /* may be used for testing no-op overhead */
+				if (gset[compound] != NULL)
+				{
+					fprintf(stderr,
+							"client %d file %d command %d compound %d: "
+							"\\gset/cset expects a row\n",
+							st->id, st->use_file, st->command, compound);
+					st->ecnt++;
+					return false;
+				}
+				break;			/* OK */
+
+			case PGRES_TUPLES_OK:
+				if (gset[compound] != NULL)
+				{
+					/* store result into variables if required */
+					int			ntuples = PQntuples(res),
+								nfields = PQnfields(res),
+								f;
+
+					if (ntuples != 1)
+					{
+						fprintf(stderr,
+								"client %d file %d command %d compound %d: "
+								"expecting one row, got %d\n",
+								st->id, st->use_file, st->command, compound, ntuples);
+						st->ecnt++;
+						PQclear(res);
+						discard_response(st);
+						return false;
+					}
+
+					for (f = 0; f < nfields; f++)
+					{
+						char	   *varname = PQfname(res, f);
+
+						/* prefix varname if required, will be freed below */
+						if (*gset[compound] != '\0')
+							varname = psprintf("%s%s", gset[compound], varname);
+
+						/* store result as a string */
+						if (!putVariable(st, "gset", varname,
+										 PQgetvalue(res, 0, f)))
+						{
+							/* internal error, should it rather abort? */
+							fprintf(stderr,
+									"client %d file %d command %d compound %d: "
+

Re: [PATCH] Add support for ON UPDATE/DELETE actions on ALTER CONSTRAINT

2018-10-02 Thread Michael Paquier
On Mon, Sep 17, 2018 at 12:03:17AM -0300, Matheus de Oliveira wrote:
> You are correct. I have made a test that tries all combinations of ALTER
> CONSTRAINT ON UPDATE/DELETE ACTION, but it caused a really huge output. I
> have changed that to a simple DO block that still tries all possibilities,
> but now I just verify that the ALTER matches the same definition in
> pg_trigger as a constraint that was created with the target action already,
> which seems simpler and works for any change.
> 
> Please, let me know if you all think any change should be made on this
> patch.

The last patch set does not apply, so this is moved to next CF, waiting
on author as new status.
--
Michael




Tuple conversion naming

2018-10-02 Thread Andres Freund
Hi,

The naming around partition-related tuple conversions is imo worthy of
improvement.

For building tuple conversion maps we have:
- convert_tuples_by_name
- convert_tuples_by_name_map
- convert_tuples_by_position
- ExecSetupChildParentMapForLeaf
- TupConvMapForLeaf
- free_conversion_map

I've a bunch of problems with this:
1) convert_tuples_by_name etc sound like they actually perform tuple
   conversion, rather than just prepare for it
2) convert_tuples_by_name_map returns not TupleConversionMap, but an
   array. But convert_tuples_by_name does return TupleConversionMap.
3) The naming is pretty inconsistent. free_conversion_map
   vs. convert_tuples_by_name, but both deal with TupleConversionMap?

For executing them we have:
- do_convert_tuple
- ConvertPartitionTupleSlot

which is two randomly differing spellings of related functionality,
without the name indicating that they, for reasons, don't both use
TupleConversionMap.  I'm also not particularly happy about
"do_convert_tuple", which is a pretty generic name.


A lot of this probably made sense at some point, but we've got to clean
this up. We'll grow more partitioning-related code, and this IMO makes
reading the code harder - and given the change rate, that's something I
frequently have to do.


Greetings,

Andres Freund



Re: [HACKERS] Transactions involving multiple postgres foreign servers, take 2

2018-10-02 Thread Michael Paquier
On Fri, Aug 03, 2018 at 05:52:24PM +0900, Masahiko Sawada wrote:
> I attached the updated version patch as the previous versions conflict
> with the current HEAD.

Please note that the latest patch set does not apply anymore, so this
patch is moved to next CF, waiting on author.
--
Michael

