Re: Fix error message when trying to alter statistics on included column

2018-07-06 Thread Yugo Nagata
On Mon, 2 Jul 2018 14:23:09 -0400
Robert Haas  wrote:

> On Thu, Jun 28, 2018 at 5:28 AM, Yugo Nagata  wrote:
> > According to the error message, it is not allowed to alter statistics on
> > included column because this is "non-expression column".
> >
> >  postgres=# create table test (i int, d int);
> >  CREATE TABLE
> >  postgres=# create index idx on test(i) include (d);
> >  CREATE INDEX
> >  postgres=# alter index idx alter column 2 set statistics 10;
> >  ERROR:  cannot alter statistics on non-expression column "d" of index "idx"
> >  HINT:  Alter statistics on table column instead.
> >
> > However, I think this should be forbidden because this is not a key column
> > but an included column. Even if we decide to support expressions in included
> > columns in the future, it is meaningless to do this because statistics on
> > an included column are never used by the planner.
> >
> > Attached is the patch to fix the error message. In this fix, column number
> > is checked first. After applying this, the message is changed as below;
> >
> >  postgres=# alter index idx alter column 2 set statistics 10;
> >  ERROR:  cannot alter statistics on included column "d" of index "idx"
> 
> I think you should add an open item for this.

I was about to add this to the wiki, but someone has already done this. Thanks!
https://wiki.postgresql.org/wiki/PostgreSQL_11_Open_Items

> 
> -- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> 


-- 
Yugo Nagata 



Re: CREATE TABLE .. LIKE .. EXCLUDING documentation

2018-07-06 Thread Yugo Nagata
On Wed, 4 Jul 2018 10:46:30 +0200
Peter Eisentraut  wrote:

> On 02.07.18 10:38, Daniel Gustafsson wrote:
> >> On 29 Jun 2018, at 18:44, Tom Lane  wrote:
> > 
> >> +1 for shortening it as proposed by Peter.  The existing arrangement
> >> made sense when it was first written, when there were only about three
> >> individual options IIRC.  Now it's just confusing, especially since you
> >> can't tell very easily whether any of the individual options were
> >> intentionally omitted from the list.  It will not get better with
> >> more options, either.
> > 
> > Marking this "Waiting for Author", awaiting an updated version expanded per
> > the above comment.
> 
> I ended up rewriting that whole section a bit to give it more structure.
>  I included all the points discussed in this thread.

Thank you for fixing this. 

> 
> -- 
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> 


-- 
Yugo Nagata 



Re: make installcheck-world in a clean environment

2018-07-06 Thread Alexander Lakhin
Hello Peter,
06.07.2018 00:39, Peter Eisentraut wrote:
> Exactly what order of steps are you executing that doesn't work?
In Centos 7, using the master branch from git:
./configure --enable-tap-tests
make install
make install -C contrib
chown -R postgres:postgres /usr/local/pgsql/
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data
make clean
# Also you can just install binary packages to get the same state.

make installcheck-world
# This check fails.

Best regards,
--

Alexander Lakhin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



How to remove elements from array .

2018-07-06 Thread Brahmam Eswar
Hi,

I tried to use array_remove to remove elements from an array, but it says the
function doesn't exist. I'm able to use other array functions.

1) Capture the results with multiple columns into an array.
2) If any results exist, then loop through the array to find the record
with Col1='Y'.
3) If Col1='Y', then get the respective value of Col2 (10) and delete the
records with the same Col2 value, if any exist.

Col1  Col2
  Y    10
  N    20
  N    10

Need to delete record 1 and record 3. To delete the array records I'm using
array_remove, but it says it doesn't exist.

Version: pgAdmin 4.






Snippet :-

CREATE or REPLACE FUNCTION FUNC1
(
<< List of elements >>
) AS $$

DECLARE

TEST_CODES record1 ARRAY;
 TEMP_REF_VALUE VARCHAR(4000);

BEGIN
IS_VALID := 'S';

  SELECT ARRAY
   (SELECT ROW(Col1,Col2,COl3,Col4) ::record1
FROM table1  INTO TEST_CODES
IF array_length(TEST_CODES, 1) > 0 THEN
FOR indx IN array_lower(TEST_CODES, 1)..array_upper(TEST_CODES, 1) LOOP
 IF TEST_CODES[indx].COL1 = 'Y' THEN
TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
TEST_CODES := array_remove(TEST_CODES,TEMP_REF_VALUE);
END IF;
   END Loop;
END IF;


-- 
Thanks & Regards,
Brahmeswara Rao J.


Re: pgsql: Fix "base" snapshot handling in logical decoding

2018-07-06 Thread Arseny Sher


Alvaro Herrera  writes:

> I just don't see it that VACUUM FULL would change the xmin of anything
> to FrozenXid, and in my experiments it doesn't.  Did you mean VACUUM
> FREEZE?

Well, docs for VACUUM say:

FREEZE

Selects aggressive “freezing” of tuples. Specifying FREEZE is
equivalent to performing VACUUM with the vacuum_freeze_min_age and
vacuum_freeze_table_age parameters set to zero. Aggressive freezing
is always performed when the table is rewritten, so this option is
redundant when FULL is specified.

So this is literally the same. rewrite_heap_tuple does the job.


> Thanks for the detective work!  I pushed this test change.

Thank you, I appreciate this.

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



small doc fix - using expressions in plpgsql FETCH command

2018-07-06 Thread Pavel Stehule
Hi

The PL/pgSQL FETCH documentation has a reference to the SQL FETCH command. SQL
FETCH allows only integer constants as the count; PL/pgSQL allows any
expression. Here the documentation is not clear, and people can get confused
and apply the SQL FETCH limits to PL/pgSQL FETCH.

https://stackoverflow.com/questions/51129336/pl-pgsql-move-with-variable-value/51169438?noredirect=1#comment89349088_51169438

I propose a small enhancement:

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
index 5b2aac618e..b65cb11d00 100644
--- a/doc/src/sgml/plpgsql.sgml
+++ b/doc/src/sgml/plpgsql.sgml
@@ -3250,7 +3250,8 @@ MOVE direction { FROM | IN } cursor;
  as specifying NEXT.
  direction values that require moving
  backward are likely to fail unless the cursor was declared or opened
- with the SCROLL option.
+ with the SCROLL option. The count
+ can be any expression with an integer result, or an integer constant.
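
For illustration, a minimal PL/pgSQL sketch (names are illustrative) showing
that the count may be any integer-valued expression:

do $$
declare
  c scroll cursor for select i from generate_series(1, 100) i;
  n int := 10;
  r record;
begin
  open c;
  move absolute n - 1 in c;        -- count given as an expression
  fetch next from c into r;        -- fetches row n
  raise notice 'row %: %', n, r.i; -- prints "row 10: 10"
  close c;
end;
$$;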
 

 

Options, notes?

Regards

Pavel


RE: automatic restore point

2018-07-06 Thread Yotsunaga, Naoki
>-Original Message-
>From: Michael Paquier [mailto:mich...@paquier.xyz] 
>Sent: Tuesday, July 3, 2018 10:22 AM

>This kind of thing is heavily application-dependent.  For example, you would
>likely not care if an operator, who has newly joined the team in charge of
>the maintenance of this data, unfortunately drops a table which includes logs
>from 10 years back, and you would very likely care about a table dropped
>which has users' login data.  My point is that you need to carefully design
>the shape of the configuration you would use, so that any application's admin
>would be able to cope with it; for example, allowing exclusion filters with
>regular expressions could be a good idea to dig into.  And you also need to
>think about it so that it is backward compatible.

Thanks for comments.

Does that mean that which tables matter depends on the application (user)?
For example, suppose there are two tables, A and B. It is OK even if table A
disappears, but it is a problem if table B disappears. So, when table B is
dropped, the automatic restore point fires; for table A, it does not.
So, it is difficult to implement an automatic restore point in PostgreSQL by
default.
Is my interpretation right?

---
Naoki Yotsunaga




RE: automatic restore point

2018-07-06 Thread Yotsunaga, Naoki
>-Original Message-
>From: Jaime Casanova [mailto:jaime.casan...@2ndquadrant.com] 
>Sent: Tuesday, July 3, 2018 11:06 AM

>Thinking on Michael's suggestion of using event triggers, you can create an
>event trigger to run pg_create_restore_point() on DROP; here's a simple
>example of how that should look:
>https://www.postgresql.org/docs/current/static/functions-event-triggers.html

>You can also create a normal trigger BEFORE TRUNCATE to create a restore point
>just before running the TRUNCATE command.

Thanks for comments.
I was able to understand.
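
For reference, a minimal sketch of such an event trigger (names are
illustrative; pg_create_restore_point() requires superuser and a sensible
WAL/archiving setup to be useful for PITR):

CREATE OR REPLACE FUNCTION restore_point_on_drop()
RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
  -- Label the restore point with the time of the DROP.
  PERFORM pg_create_restore_point('before_drop_' || clock_timestamp());
END;
$$;

CREATE EVENT TRIGGER make_restore_point_on_drop
  ON sql_drop
  EXECUTE PROCEDURE restore_point_on_drop();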

---
Naoki Yotsunaga


Re: How to remove elements from array .

2018-07-06 Thread Pavel Stehule
Hi

2018-07-06 9:49 GMT+02:00 Brahmam Eswar :

> Hi ,
>
> I tried to use array_remove to remove elements from an array, but it says the
> function doesn't exist. I'm able to use other array functions.
>
> 1) Capture the results with multiple columns into an array.
> 2) If any results exist, then loop through the array to find the record
> with Col1='Y'.
> 3) If Col1='Y', then get the respective value of Col2 (10) and delete the
> records with the same Col2 value, if any exist.
>
> Col1  Col2
>   Y    10
>   N    20
>   N    10
>
> Need to delete record 1 and record 3. To delete the array records I'm using
> array_remove, but it says it doesn't exist.
>
> Version: pgAdmin 4.
>
>
>
>
In this case, unnesting can be a solution:

postgres=# select * from foo;
+++
| c1 | c2 |
+++
| t  | 10 |
| f  | 20 |
| f  | 20 |
+++
(3 rows)

postgres=# do $$
declare a foo[] default array(select foo from foo);
begin
  a := array(select (c1,c2)::foo from unnest(a) g(c1,c2) where g.c1 = true);
  raise notice 'a=%', a;
end;
$$;
NOTICE:  a={"(t,10)"}
DO

Regards

Pavel



>
>
>
> Snippet :-
>
> CREATE or REPLACE FUNCTION FUNC1
> (
> << List of elements >>
> ) AS $$
>
> DECLARE
>
> TEST_CODES record1 ARRAY;
>  TEMP_REF_VALUE VARCHAR(4000);
>
> BEGIN
> IS_VALID := 'S';
>
>   SELECT ARRAY
>(SELECT ROW(Col1,Col2,COl3,Col4) ::record1
> FROM table1  INTO TEST_CODES
> IF array_length(TEST_CODES, 1) > 0 THEN
> FOR indx IN array_lower(TEST_CODES, 1)..array_upper(TEST_CODES, 1) LOOP
>  IF TEST_CODES[indx].COL1 = 'Y' THEN
> TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
> TEST_CODES := array_remove(TEST_CODES,TEMP_REF_VALUE);
> END IF;
>END Loop;
> END IF;
>
>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: Fix to not check included columns in ANALYZE on indexes

2018-07-06 Thread Yugo Nagata
On Sat, 30 Jun 2018 14:13:49 -0400
Tom Lane  wrote:

> Peter Geoghegan  writes:
> > I think that the argument Tom is making is that it might be useful to
> > have statistics on the expression regardless of this -- the expression
> > may be interesting in some general sense. For example, one can imagine
> > the planner creating a plan with a hash aggregate rather than a group
> > aggregate, but only when statistics on an expression are available,
> > somehow.
> 
> Right.  For instance, "select sum(x) from ... group by y+z" is only
> suitable for hash aggregation if we can predict that there's a fairly
> small number of distinct values of y+z.  This makes it useful to have
> stats on the expression y+z, independently of whether any related index
> actually gets used in the plan.

Thank you for your explanation. I understand the usefulness of statistics
on non-key expression attributes, and that "CREATE INDEX ... INCLUDE" might be
a means to collect statistics on "non-key" expressions in the future.
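
(For concreteness, a sketch with illustrative names of how such statistics can
be collected today: ANALYZE gathers statistics on an expression when an
expression index on it exists, which helps the planner estimate the number of
groups for GROUP BY y + z.)

CREATE TABLE t (x int, y int, z int);
CREATE INDEX t_y_plus_z_idx ON t ((y + z));
INSERT INTO t SELECT g, g % 10, g % 7 FROM generate_series(1, 10000) g;
ANALYZE t;
EXPLAIN SELECT sum(x) FROM t GROUP BY y + z;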

> 
>   regards, tom lane
> 


-- 
Yugo Nagata 



Typo in Japanese translation of psql.

2018-07-06 Thread Taiki Kondo
Hi all,

I found typo in Japanese translation of psql.

Please find attached.


Sincerely,

--
Taiki Kondo
NEC Solution Innovators, Ltd.
diff --git a/src/bin/psql/po/ja.po b/src/bin/psql/po/ja.po
index 34bd8a4e25..fe8dc3804f 100644
--- a/src/bin/psql/po/ja.po
+++ b/src/bin/psql/po/ja.po
@@ -1512,12 +1512,12 @@ msgstr "継承元"
 #: describe.c:2896
 #, c-format
 msgid "Number of child tables: %d (Use \\d+ to list them.)"
-msgstr "子テーブル数: %d (\\+d で一覧を表示)"
+msgstr "子テーブル数: %d (\\d+ で一覧を表示)"
 
 #: describe.c:2898
 #, c-format
 msgid "Number of partitions: %d (Use \\d+ to list them.)"
-msgstr "パーティション数: %d (\\+d で一覧を表示)。"
+msgstr "パーティション数: %d (\\d+ で一覧を表示)。"
 
 #: describe.c:2906
 msgid "Child tables"


Re: Fix to not check included columns in ANALYZE on indexes

2018-07-06 Thread Yugo Nagata
On Fri, 29 Jun 2018 17:31:51 +0300
Teodor Sigaev  wrote:

> > AFAICS, we'd just have to revert this patch later, so I don't see
> > much value in it.
> True, I suppose we should apply this patch just for consistency, because we
> don't allow expressions in included columns.

Yes, this is what I intend in my patch, but I won't insist on it if there
is a reason to leave the code as it is, since the current code is almost
harmless.

Thanks,


-- 
Yugo Nagata 



Re: Typo in Japanese translation of psql.

2018-07-06 Thread Yugo Nagata
On Fri, 6 Jul 2018 08:33:56 +
Taiki Kondo  wrote:

> Hi all,
> 
> I found typo in Japanese translation of psql.

Good catch!

However, I think you have to submit the whole po file to Patch Tracker[1]
instead of a patch according to the wiki [2].

[1] https://redmine.postgresql.org/projects/pgtranslation
[2] https://wiki.postgresql.org/wiki/NLS

Regards,

> 
> Please find attached.
> 
> 
> Sincerely,
> 
> --
> Taiki Kondo
> NEC Solution Innovators, Ltd.


-- 
Yugo Nagata 



RE: Speeding up INSERTs and UPDATEs to partitioned tables

2018-07-06 Thread Kato, Sho
Thanks David!

I did benchmark with pgbench, and see a speedup for INSERT / UPDATE scenarios.
I used range partition.

Benchmark results are as follows.

1. 11beta2 result

 part_num |   tps_ex   | latency_avg | update_latency | select_latency | insert_latency
----------+------------+-------------+----------------+----------------+----------------
      100 | 479.456278 |       2.086 |          1.382 |          0.365 |          0.168
      200 | 169.155411 |       5.912 |          4.628 |          0.737 |          0.299
      400 |  24.857495 |       40.23 |         36.606 |          2.252 |          0.881
      800 |   6.718104 |     148.853 |        141.471 |          5.253 |          1.433
     1600 |   1.934908 |     516.825 |        489.982 |         21.102 |          3.701
     3200 |   0.456967 |    2188.362 |       2101.247 |         72.784 |          8.833
     6400 |   0.116643 |    8573.224 |        8286.79 |        257.904 |         14.949


2. 11beta2 + patch1 + patch2

patch1: Allow direct lookups of AppendRelInfo by child relid
commit 7d872c91a3f9d49b56117557cdbb0c3d4c620687
patch2: 0001-Speed-up-INSERT-and-UPDATE-on-partitioned-tables.patch

 part_num |   tps_ex    | latency_avg | update_latency | select_latency | insert_latency
----------+-------------+-------------+----------------+----------------+----------------
      100 | 1224.430344 |       0.817 |          0.551 |          0.085 |          0.048
      200 |  689.567511 |        1.45 |           1.12 |          0.119 |           0.05
      400 |  347.876616 |       2.875 |          2.419 |          0.185 |          0.052
      800 |  140.489269 |       7.118 |          6.393 |          0.329 |          0.059
     1600 |   29.681672 |      33.691 |         31.272 |          1.517 |          0.147
     3200 |    7.021957 |     142.412 |          136.4 |          4.033 |          0.214
     6400 |    1.462949 |     683.557 |        669.187 |          7.677 |          0.264


benchmark script:

\set aid random(1, 100 * 1)
\set delta random(-5000, 5000)
BEGIN;
UPDATE test.accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM test.accounts WHERE aid = :aid;
INSERT INTO test.accounts_history (aid, delta, mtime) VALUES (:aid, :delta, 
CURRENT_TIMESTAMP);
END;

partition key is aid.

-Original Message-
From: David Rowley [mailto:david.row...@2ndquadrant.com] 
Sent: Thursday, July 05, 2018 6:19 PM
To: Kato, Sho/加藤 翔 
Cc: PostgreSQL Hackers 
Subject: Re: Speeding up INSERTs and UPDATEs to partitioned tables

On 5 July 2018 at 18:39, Kato, Sho  wrote:
> postgres=# create table a(i int) partition by range(i);
> CREATE TABLE
> postgres=# create table a_1 partition of a for values from (1) to (200);
> CREATE TABLE
> postgres=# create table a_2 partition of a for values from (200) to (400);
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

Hi,

Thanks for testing. I'm unable to reproduce this on beta2 or master as of 
f61988d16.

Did you try make clean then building again?  The 0001 patch does change 
PartitionDescData, so if you've not rebuilt all .c files which use that then 
that might explain your crash.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-07-06 Thread Etsuro Fujita

(2018/07/04 21:37), Ashutosh Bapat wrote:

On Wed, Jul 4, 2018 at 5:36 PM, Etsuro Fujita
  wrote:

(2018/07/04 19:04), Ashutosh Bapat wrote:

On Fri, Jun 29, 2018 at 6:21 PM, Etsuro Fujita
   wrote:

(2018/06/22 22:54), Ashutosh Bapat wrote:

+   if (enable_partitionwise_join && rel->top_parent_is_partitioned)
+   {
+       build_childrel_tlist(root, rel, childrel, 1, &appinfo);
+   }

Why do we need rel->top_parent_is_partitioned? If a relation is
partitioned (if (rel->part_scheme), it's either the top parent or is
partition of some other partitioned table. In either case this
condition will be true.



This would be needed to avoid unnecessarily applying build_childrel_tlist
to
child rels of a partitioned table for which we don't consider
partitionwise
join.  Consider:

postgres=# create table lpt (c1 int, c2 text) partition by list (c1);
CREATE TABLE
postgres=# create table lpt_p1 partition of lpt for values in (1);
CREATE TABLE
postgres=# create table lpt_p2 (c1 int check (c1 in (2)), c2 text);
CREATE TABLE
postgres=# create table test (c1 int, c2 text);
CREATE TABLE
postgres=# explain verbose select * from (select * from lpt union all
select
* from lpt_p2) ss(c1, c2) inner join test on (ss.c1 = test.c1);



I might misunderstand your words, but in the above example the patch doesn't
apply build_childrel_tlist to lpt_p1 and lpt_p2.  The reason for that is
because we can avoid adjusting the tlists for the corresponding subplans at
plan creation time so that whole-row Vars in the tlists are transformed into
CREs.  I think the overhead of the adjustment is not that big, but not zero,
so it would be worth avoiding applying build_childrel_tlist to partitions if
the top parent won't participate in a partitionwise-join at all.


I don't think that answers my question. When we join lpt with test,
your patch will apply build_childrel_tlist() to lpt_p1 and lpt_p2 even
when join between lpt and test is not going to use partition-wise
join. Why?


Maybe my explanation including the example was not good.  Sorry about 
that, but my patch will *not* apply build_childrel_tlist to lpt_p1 and 
lpt_p2 since the top parent of lpt_p1 and lpt_p2 is the UNION ALL 
subquery and hence not a partitioned table (ie, we have 
rel->top_parent_is_partitioned=false for lpt_p1 and lpt_p2).



As per your explanation, the condition "if (enable_partitionwise_join &&
rel->top_parent_is_partitioned)" is used to avoid applying
build_childrel_tlist() when partition-wise join won't be possible. But it's
not covering all the cases.


Let me explain about that: 1) my patch won't apply that function to a 
child if its top parent is an appendrel built from a UNION ALL subquery, 
even though the child is a partition of a partitioned table pulled up 
from a leaf subquery into the parent query, like lpt_p1, and 2) my patch 
will apply that function to a child if its top parent is a partitioned 
table, whether or not the partitioned table is involved in a 
partitionwise join.  By #1, we avoid the adjustment work at plan 
creation time, as explained above.  It might be worth trying to be 
smarter about #2 (for example, in the case of a join of a partitioned 
table and a non-partitioned table, since we don't consider a 
partitionwise join for that join, it's better to not apply that function 
to partitions of the partitioned table, to avoid the adjustment work at 
plan creation time), but ISTM we don't have enough information to be 
smarter.



An
in-between state will produce a hell lot of confusion for any further
optimization. Whenever we change the code around partition-wise
operations in future, we will have to check whether or not a given
child rel has its whole-row Var embedded in ConvertRowtypeExpr. As I
have mentioned earlier, I am also not comfortable with the targetlist
of child relations being type inconsistent with that of the parent,
type consistency being a fundamental rule in inheritance. Worse, we keep them
inconsistent during path creation and make them consistent at the time
of creating plans. A child's whole-row Var has the datatype of the child,
whereas that of the parent has the datatype of the parent.



I don't see any critical issue here.  Could you elaborate a bit more on that
point?


I think breaking a fundamental rule like this itself is critical. But
interestingly I am not able to find a case where it becomes a problem.
But may be I haven't tried enough. And the question is if it's not
required to have the targetlists type consistent, why even bother with
ConvertRowtypeExpr addition there? We can use your approach of adding
ConvertRowtypeExpr at the end in all the cases.


I think that the tlist of a (not-the-topmost) child relation doesn't 
need to be type-consistent with that of the parent; it has only to 
include all Vars that are needed for higher joinquals and final output 
to the parent appendrel.  (In other words, I think we can build the 
tlist in the same manner as we build tlists of base or join relations in 
the main join tree.)

Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Andrew Dunstan
On Thu, Jun 28, 2018 at 1:39 AM, Michael Paquier  wrote:
> On Thu, Jan 11, 2018 at 10:35:22PM -0500, Stephen Frost wrote:
>> Magnus, this was your thread to begin with, though I know others have
>> been involved, any chance you'll be able to review this for commit
>> during this CF?  I agree that this is certainly a good thing to have
>> too, though I've not looked at the patch itself in depth.  Is there
>> anything we can do to help move it along?
>
> As an effort to move on with bug items in the commit fest, attached are
> two patches with a proposed commit message as well as polished comments.
> Those are proposed for back-patching.  The 2PC issue is particularly
> bad in my opinion because having any 2PC file on-disk and corrupted
> means that a transaction is lost.  I have been playing a bit with
> hexedit and changed a couple of bytes in one of them...  If trying to
> use a base backup which includes one, then the standby reading it would
> be similarly confused.

Thanks to Michael for progressing this.

Back in August, nearly a year ago, Robert Haas said upthread:

> This bug fix has been pending in "Ready for Committer" state for about
> 4.5 months. Three committers (Magnus, Heikki, Tom) have contributed
> to the thread to date. Maybe one of them would like to commit this?

Although the state is now back to "Needs Review", I echo those
sentiments. This issue has now been hanging around for about 18
months.

cheers

andrew

-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] WIP: Aggregation push-down

2018-07-06 Thread Antonin Houska
Robert Haas  wrote:

> On Fri, Feb 23, 2018 at 11:08 AM, Antonin Houska  wrote:
> > I spent some more time thinking about this. What about adding a new strategy
> > number for hash index operator classes, e.g. HTBinaryEqualStrategyNumber? 
> > For
> > most types both HTEqualStrategyNumber and HTBinaryEqualStrategyNumber 
> > strategy
> > would point to the same operator, but types like numeric would naturally 
> > have
> > them different.
> >
> > Thus the pushed-down partial aggregation can only use the
> > HTBinaryEqualStrategyNumber's operator to compare grouping expressions. In 
> > the
> > initial version (until we have useful statistics for the binary values) we 
> > can
> > avoid the aggregation push-down if the grouping expression output type has 
> > the
> > two strategies implemented using different functions because, as you noted
> > upthread, grouping based on binary equality can result in excessive number 
> > of
> > groups.
> >
> > One open question is whether the binary equality operator needs a separate
> > operator class or not. If an opclass cares only about the binary equality, 
> > its
> > hash function(s) can be a lot simpler.
> 
> Hmm.  How about instead adding another regproc field to pg_type which
> stores the OID of a function that tests binary equality for that
> datatype?  If that happens to be equal to the OID you got from the
> opclass, then you're all set.

I suppose you mean pg_operator, not pg_type. What I don't like about this is
that the new field would only be useful for a very small fraction of
operators.

On the other hand, the drawback of an additional operator class is that we'd
have to modify almost all the existing operator classes for the hash AM. (The
absence of the new strategy number in an operator class cannot mean that the
existing equality operator can be used to compare binary values too, because
then we couldn't guarantee correct behavior of already-existing user-defined
operator classes.)
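
To illustrate why a type like numeric would have the two strategies point to
different operators, values can compare equal without being binary-equal (a
sketch):

SELECT '1.0'::numeric = '1.00'::numeric AS values_equal,              -- true
       '1.0'::numeric::text <> '1.00'::numeric::text AS repr_differs; -- true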


-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Web: https://www.cybertec-postgresql.com



Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Michael Paquier
Robert Haas wrote:
> Although the state is now back to "Needs Review", I echo those
> sentiments. This issue has now been hanging around for about 18
> months.

For what it's worth, I volunteer to finish the work :)

The 2PC patch is really simple, and fixes a data loss issue.  The second
patch has been looked at by Heikki, Magnus, and me at least once each,
and there is visibly agreement on having it.  Having reviews after
a new patch version is sent, by somebody other than the one who sent the
patches, is of course always nice..
--
Michael


signature.asc
Description: PGP signature


Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-07-06 Thread Ashutosh Bapat
On Fri, Jul 6, 2018 at 4:29 PM, Etsuro Fujita
 wrote:
> (2018/07/04 21:37), Ashutosh Bapat wrote:
>>
>> On Wed, Jul 4, 2018 at 5:36 PM, Etsuro Fujita
>>   wrote:
>>>
>>> (2018/07/04 19:04), Ashutosh Bapat wrote:

 On Fri, Jun 29, 2018 at 6:21 PM, Etsuro Fujita
wrote:
>
> (2018/06/22 22:54), Ashutosh Bapat wrote:
>>
>> +   if (enable_partitionwise_join && rel->top_parent_is_partitioned)
>> +   {
>> +       build_childrel_tlist(root, rel, childrel, 1, &appinfo);
>> +   }
>>
>> Why do we need rel->top_parent_is_partitioned? If a relation is
>> partitioned (if (rel->part_scheme), it's either the top parent or is
>> partition of some other partitioned table. In either case this
>> condition will be true.
>
>
> This would be needed to avoid unnecessarily applying
> build_childrel_tlist
> to
> child rels of a partitioned table for which we don't consider
> partitionwise
> join.  Consider:
>
> postgres=# create table lpt (c1 int, c2 text) partition by list (c1);
> CREATE TABLE
> postgres=# create table lpt_p1 partition of lpt for values in (1);
> CREATE TABLE
> postgres=# create table lpt_p2 (c1 int check (c1 in (2)), c2 text);
> CREATE TABLE
> postgres=# create table test (c1 int, c2 text);
> CREATE TABLE
> postgres=# explain verbose select * from (select * from lpt union all
> select
> * from lpt_p2) ss(c1, c2) inner join test on (ss.c1 = test.c1);
>
>
>>> I might misunderstand your words, but in the above example the patch
>>> doesn't
>>> apply build_childrel_tlist to lpt_p1 and lpt_p2.  The reason for that is
>>> because we can avoid adjusting the tlists for the corresponding subplans
>>> at
>>> plan creation time so that whole-row Vars in the tlists are transformed
>>> into
>>> CREs.  I think the overhead of the adjustment is not that big, but not
>>> zero,
>>> so it would be worth avoiding applying build_childrel_tlist to partitions
>>> if
>>> the top parent won't participate in a partitionwise-join at all.
>>
>>
>> I don't think that answers my question. When we join lpt with test,
>> your patch will apply build_childrel_tlist() to lpt_p1 and lpt_p2 even
>> when join between lpt and test is not going to use partition-wise
>> join. Why?
>
>
> Maybe my explanation including the example was not good.  Sorry about that,
> but my patch will *not* apply build_childrel_tlist to lpt_p1 and lpt_p2
> since the top parent of lpt_p1 and lpt_p2 is the UNION ALL subquery and
> hence not a partitioned table (ie, we have
> rel->top_parent_is_partitioned=false for lpt_p1 and lpt_p2).
>
>> As per your explanation, the condition "if
>> (enable_partitionwise_join && rel->top_parent_is_partitioned)" is
>> used to avoid applying build_childrel_tlist() when partition-wise join
>> won't be possible. But it's not covering all the cases.
>
>
> Let me explain about that: 1) my patch won't apply that function to a child
> if its top parent is an appendrel built from a UNION ALL subquery, even
> though the child is a partition of a partitioned table pulled up from a leaf
> subquery into the parent query, like lpt_p1, and 2) my patch will apply that
> function to a child if its top parent is a partitioned table, whether or not
> the partitioned table is involved in a partitionwise join.  By #1, we avoid
> the adjustment work at plan creation time, as explained above.  It might be
> worth trying to be smarter about #2 (for example, in the case of a join of a
> partitioned table and a non-partitioned table, since we don't consider a
> partitionwise join for that join, it's better to not apply that function to
> partitions of the partitioned table, to avoid the adjustment work at plan
> creation time), but ISTM we don't have enough information to be smarter.

That looks like a kludge to me rather than a proper fix. It's not
clear to me when somebody can expect ConvertRowtypeExpr in the
targetlist and when not, while creating paths and, to an extent, plans.
For example, inside add_paths_to_append_rel() or in
apply_scanjoin_target_to_paths(), or for that matter any path creation
or plan creation function, we will sometimes get targetlists with
ConvertRowtypeExpr and sometimes not. How do we know which is which?


>
 A ConvertRowtypeExpr
 is used to fix this inconsistency. That's why I chose to use
 pull_var_clause() as a place to fix the problem and not fix
 ConvertRowtypeExpr in targetlist itself.
>>>
>>>
>>>
>>> I think the biggest issue in the original patch you proposed is that we
>>> spend extra cycles where partitioning is not involved, which is the
>>> biggest
>>> reason why I think the original patch isn't the right way to go.
>>
>>
>> When there are no partitions involved, there won't be any
>> ConvertRowtypeExprs there which means the function
>> is_converted_whole_row_reference() would just return from the first
>> line checking IsA() and nullness of node.

Re: log_min_messages shows debug instead of debug2

2018-07-06 Thread Andrew Dunstan




On 05/17/2018 08:43 PM, Ideriha, Takeshi wrote:

-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
OK, I'm happy enough to commit it then, barring other objections.  I was
going to just do that, but then I realized we're in feature freeze right now,
so I suppose this should go into the next CommitFest.
should go into the next CommitFest.

Thank you for your discussion.
Sure, I registered it in the next CommitFest as 'Ready for Committer'.

https://commitfest.postgresql.org/18/1638/



Committed.

cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: PATCH: Update snowball stemmers

2018-07-06 Thread Andrew Dunstan




On 06/26/2018 08:20 AM, Arthur Zakirov wrote:

Hello hackers,

I'd like to propose the patch which syncs PostgreSQL snowball stemmers.
As Tom pointed [1] stemmers haven't synced for a very long time.

I copied all source files without changes, except replacing '#include
"../runtime/header.h"' with '#include "header.h"' and removing includes
of standard headers from utilities.c.

Hungarian language uses ISO-8859-1 and UTF-8 charsets in Postgres HEAD.
But in Snowball HEAD it is ISO-8859-2 per commit [2]. This patch changes
hungarian's charset from ISO-8859-1 to ISO-8859-2 too.

Additionally updated files in the patch are:
- utilities.c
- header.h

Will add to the next commitfest.

Any comments?

1 - https://www.postgresql.org/message-id/5689.1519054983%40sss.pgh.pa.us
2 - 
https://github.com/snowballstem/snowball/commit/4bcae97db044253ea2edae1dd3ca59f3cddd4b9d




I agree with Tom that we should sync with the upstream before we do
anything else. This is a very large patch but with fairly limited
impact. I think now, at the start of a dev cycle, is the right time to
apply it.


I don't know if we have a buildfarm animal testing Hungarian. Maybe we 
need a buildfarm animal or two testing a large number of locales.


cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: How can we submit code patches that implement our (pending) patents?

2018-07-06 Thread Dave Cramer
On 4 July 2018 at 21:15, Tsunakawa, Takayuki  wrote:

> From: Craig Ringer [mailto:cr...@2ndquadrant.com]
> > I'm assuming you don't want to offer a grant that lets anyone use them
> for
> > anything. But if you have a really broad grant to PostgreSQL, all someone
> > would have to do to inherit the grant is re-use some part of PostgreSQL.
>
> Your assumption is right.  No scope is the same as no patent; it won't
> help to defend PostgreSQL community against rival companies/communities of
> other DBMSs.  Or, I think we can set the scope to what OIN states.
> Fortunately, anyone can join OIN free of charge.
>
>
> > I guess there's a middle ground somewhere that protects substantial
> > derivatives and extracts but stops you using some Pg code snippets as a
> > freebie license.
>
> Are you assuming that developers want to use PG code snippets for
> non-PostgreSQL or even non-DBMS software?  I believe that accepting
> patented code from companies would be practically more useful for
> PostgreSQL enhancement and growth.  PostgreSQL is now a mature software,
> and it can be more corporate-friendly like other software under Apache
> License.
>
Certainly there is history of people using PG code for non-PostgreSQL or
at least commercial derivative work. Greenplum for example.



Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Andrew Dunstan




On 07/06/2018 07:18 AM, Michael Paquier wrote:

Robert Haas wrote:

Although the state is now back to "Needs Review", I echo those
sentiments. This issue has now been hanging around for about 18
months.




No, those are my words, not Robert's :-)



For what it's worth, I volunteer to finish the work :)

The 2PC patch is really simple, and fixes a data loss issue.  The second
patch has been looked up by Heikki, Magnus and me at least once by each,
and there is visibly an agreement on having it.  Having reviews after
a new patch version is sent, by somebody else than the one who sent the
patches is of course always nice..


If you're comfortable committing it then go for it. It will be good to 
have the CF item resolved.


cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Michael Paquier
On Fri, Jul 06, 2018 at 08:05:50AM -0400, Andrew Dunstan wrote:
> On 07/06/2018 07:18 AM, Michael Paquier wrote:
> > For what it's worth, I volunteer to finish the work :)
> > 
> > The 2PC patch is really simple, and fixes a data loss issue.  The second
> > patch has been looked up by Heikki, Magnus and me at least once by each,
> > and there is visibly an agreement on having it.  Having reviews after
> > a new patch version is sent, by somebody else than the one who sent the
> > patches is of course always nice..
> 
> If you're comfortable committing it then go for it. It will be good to have
> the CF item resolved.

Sure.  I think I can finish the set on Monday JST then.
--
Michael


signature.asc
Description: PGP signature


Re: [HACKERS] Optional message to user when terminating/cancelling backend

2018-07-06 Thread Pavel Stehule
Hi

I am testing the last patch, and it looks like truncating the message and
appending "..." doesn't work.

slot->len holds the trimmed length, not the original length.

Regards

Pavel


Re: patch to allow disable of WAL recycling

2018-07-06 Thread Jerry Jelinek
Thomas,

We're using a zfs recordsize of 8k to match the PG blocksize of 8k, so what
you're describing is not the issue here.

Thanks,
Jerry


On Thu, Jul 5, 2018 at 3:44 PM, Thomas Munro 
wrote:

> On Fri, Jul 6, 2018 at 3:37 AM, Jerry Jelinek 
> wrote:
> >> If the problem is specifically the file system caching behavior, then we
> >> could also consider using the dreaded posix_fadvise().
> >
> > I'm not sure that solves the problem for non-cached files, which is where
> > we've observed the performance impact of recycling, where what should be
> a
> > write intensive workload turns into a read-modify-write workload because
> > we're now reading an old WAL file that is many hours, or even days, old
> and
> > has thus fallen out of the memory-cached data for the filesystem. The
> disk
> > reads still have to happen.
>
> What ZFS record size are you using?  PostgreSQL's XLOG_BLCKSZ is usually
> 8192 bytes.  When XLogWrite() calls write(some multiple of XLOG_BLCKSZ), on
> a traditional filesystem the kernel will say 'oh, that's overwriting whole
> pages exactly, so I have no need to read it from disk' (for example in
> FreeBSD ffs_vnops.c ffs_write() see the comment "We must perform a
> read-before-write if the transfer size does not cover the entire buffer").
> I assume ZFS has a similar optimisation, but it uses much larger records
> than the traditional 4096 byte pages, defaulting to 128KB.  Is that the
> reason for this?
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: [HACKERS] PoC: full merge join on comparison clause

2018-07-06 Thread Ashutosh Bapat
Hi,
I have started reviewing these patches. I haven't grasped the design
yet. But here are some comments on the first patch.


-clauses = (MergeJoinClause) palloc0(nClauses *
sizeof(MergeJoinClauseData));
+parent->mj_Clauses = (MergeJoinClause) palloc0(nClauses *
sizeof(MergeJoinClauseData));

crosses 80 characters.

-StrategyNumber opstrategy = mergestrategies[iClause];
+StrategyNumber sort_strategy = mergestrategies[iClause];
-intop_strategy;
+intjoin_strategy;
I don't see a reason why we should change the names of the variables here. These
are operator strategies and there's no need to change their names. The name
change is introducing unnecessary diffs.

+clause->lexpr = ExecInitExpr((Expr *) linitial(qual->args),
(PlanState *) parent);
+clause->rexpr = ExecInitExpr((Expr *) lsecond(qual->args),
(PlanState *) parent);

cross 80 characters.

 /*
@@ -378,20 +375,29 @@ MJEvalInnerValues(MergeJoinState *mergestate,
TupleTableSlot *innerslot)
 return result;
 }

+/* Tuple comparison result */
+typedef enum
+{
+MJCR_NextInner = 1,
+MJCR_NextOuter = -1,
+MJCR_Join = 0
+} MJCompareResult;
+
 /*
  * MJCompare
  *
- * Compare the mergejoinable values of the current two input tuples
- * and return 0 if they are equal (ie, the mergejoin equalities all
- * succeed), >0 if outer > inner, <0 if outer < inner.
+ * Compare the mergejoinable values of the current two input tuples.
+ * If they are equal, i.e., the mergejoin equalities all succeed,
+ * return MJCR_Join, if outer > inner, MJCR_NextInner, and else
+ * MJCR_NextOuter.
  *
  * MJEvalOuterValues and MJEvalInnerValues must already have been called
  * for the current outer and inner tuples, respectively.
  */
-static int
+static MJCompareResult
 MJCompare(MergeJoinState *mergestate)
 {

I am not sure about this change either. MJCompare()'s job is to compare the
given keys in the two tuples and return the comparison result. The result was
used as-is to decide which side to advance in an equality-based merge join. But
for an inequality-based merge join the result needs to be interpreted further.
I think we should write a wrapper around MJCompare which interprets the result,
rather than changing MJCompare itself. Or at least change the name of MJCompare.
The first option is better in case we use MJCompare for purposes other than
merge join in the future. I am not sure what those could be, but say a
merge-based union or something like that.

 /*
  * Sweep through the existing EquivalenceClasses looking for matches to
@@ -273,7 +274,7 @@ process_equivalence(PlannerInfo *root,
 /*
  * A "match" requires matching sets of btree opfamilies.  Use of
  * equal() for this test has implications discussed in the comments
- * for get_mergejoin_opfamilies().
+ * for get_equality_opfamilies().

I think we should leave the word mergejoin in there, or at least indicate that
these are btree opfamilies, so that we don't confuse them with hash equality
operator families.

It will be good if you can write something about why these changes are
required in the file. If you are using git format-patch, you could
write a commit message that gets added to the patch. That way, it
leaves there for anybody to review.

I am having a difficult time reading the next patch. There are various
changes in the second patch, which I don't understand the reason
behind. I think some comments will help, in as commit message or in
the code.

I will continue reviewing the patches.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: Generating partitioning tuple conversion maps faster

2018-07-06 Thread Alexander Kuzmenkov

On 07/05/2018 02:52 PM, David Rowley wrote:

On 30 June 2018 at 05:40, David Rowley  wrote:

I think your idea
to reduce the loops in test 6 from 2000 down to 1001 should be worth
it. I'll try the idea out next week.

The attached changes things to use your way of picking up the search
for the next column at the column after the last match was found.


Great. I think we can use the same approach for
make_inh_translation_list, as in the attached patch. It shows some
improvement on test 6. I got the following tps, median of 11 runs
(forgot to turn off fsync though):


test  master  v3   v4
1     414     416  408
2     259     409  404
3     263     400  405
4     417     416  404
5     118     311  305
6      85     280  303

--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

>From a21da8b6e875977eff7b313e37975b39b390a03e Mon Sep 17 00:00:00 2001
From: "dgrow...@gmail.com" 
Date: Mon, 25 Jun 2018 16:31:47 +1200
Subject: [PATCH] Improve performance of tuple conversion map generation

Previously convert_tuples_by_name_map naively performed a search of each
outdesc column starting at the first column in indesc and searched each
indesc column until a match was found.  When partitioned tables had many
columns this could result in slow generation of the tuple conversion maps.
For INSERT and UPDATE statements that touched few rows, this could mean a
very large overhead indeed.

We can do a bit better with this loop.  It's quite likely that the columns
in partitioned tables and their partitions are in the same order, so it
makes sense to start searching for each outer column at the inner
column position one after where the previous match was found (per idea from
Alexander Kuzmenkov).  This makes the best-case search O(N) instead of
O(N^2).  The worst case is still O(N^2), but it seems unlikely that would
happen.
---
 src/backend/access/common/tupconvert.c | 37 ++---
 src/backend/optimizer/prep/prepunion.c | 38 --
 2 files changed, 47 insertions(+), 28 deletions(-)

diff --git a/src/backend/access/common/tupconvert.c b/src/backend/access/common/tupconvert.c
index 2d0d2f4..11038c6 100644
--- a/src/backend/access/common/tupconvert.c
+++ b/src/backend/access/common/tupconvert.c
@@ -295,12 +295,16 @@ convert_tuples_by_name_map(TupleDesc indesc,
 		   const char *msg)
 {
 	AttrNumber *attrMap;
-	int			n;
+	int			outnatts;
+	int			innatts;
 	int			i;
+	int			nextindesc = -1;
 
-	n = outdesc->natts;
-	attrMap = (AttrNumber *) palloc0(n * sizeof(AttrNumber));
-	for (i = 0; i < n; i++)
+	outnatts = outdesc->natts;
+	innatts = indesc->natts;
+
+	attrMap = (AttrNumber *) palloc0(outnatts * sizeof(AttrNumber));
+	for (i = 0; i < outnatts; i++)
 	{
 		Form_pg_attribute outatt = TupleDescAttr(outdesc, i);
 		char	   *attname;
@@ -313,10 +317,28 @@ convert_tuples_by_name_map(TupleDesc indesc,
 		attname = NameStr(outatt->attname);
 		atttypid = outatt->atttypid;
 		atttypmod = outatt->atttypmod;
-		for (j = 0; j < indesc->natts; j++)
+
+		/*
+		 * Now search for an attribute with the same name in the indesc. It
+		 * seems likely that a partitioned table will have the attributes in
+		 * the same order as the partition, so the search below is optimized
+		 * for that case.  It is possible that columns are dropped in one of
+		 * the relations, but not the other, so we use the 'nextindesc'
+		 * counter to track the starting point of the search.  If the inner
+		 * loop encounters dropped columns then it will have to skip over
+		 * them, but it should leave 'nextindesc' at the correct position for
+		 * the next outer loop.
+		 */
+		for (j = 0; j < innatts; j++)
 		{
-			Form_pg_attribute inatt = TupleDescAttr(indesc, j);
+			Form_pg_attribute inatt;
+
+			nextindesc++;
 
+			if (nextindesc >= innatts)
+				nextindesc = 0;
+
+			inatt = TupleDescAttr(indesc, nextindesc);
 			if (inatt->attisdropped)
 				continue;
 			if (strcmp(attname, NameStr(inatt->attname)) == 0)
@@ -330,7 +352,7 @@ convert_tuples_by_name_map(TupleDesc indesc,
 	   attname,
 	   format_type_be(outdesc->tdtypeid),
 	   format_type_be(indesc->tdtypeid;
-				attrMap[i] = (AttrNumber) (j + 1);
+				attrMap[i] = inatt->attnum;
 				break;
 			}
 		}
@@ -343,7 +365,6 @@ convert_tuples_by_name_map(TupleDesc indesc,
 			   format_type_be(outdesc->tdtypeid),
 			   format_type_be(indesc->tdtypeid;
 	}
-
 	return attrMap;
 }
 
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 2d47024..7d7517b 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -51,6 +51,7 @@
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
+#include "utils/syscache.h"
 
 
 typedef struct
@@ -1896,9 +1897,11 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation,
 	List	

Re: Explain buffers wrong counter with parallel plans

2018-07-06 Thread Amit Kapila
On Tue, Jul 3, 2018 at 4:18 PM, Amit Kapila  wrote:
> On Mon, Jul 2, 2018 at 6:02 PM, Robert Haas  wrote:
>>
>
>> To fix the problem with Limit that you mention, we could just modify
>> nodeLimit.c so that when the state is changed from LIMIT_INWINDOW to
>> LIMIT_WINDOWEND, we also call ExecShutdownNode on the child plan.
>>
>
> It should work.
>

I have tried this idea, but it doesn't completely solve the problem.
The problem is that nodes below LIMIT won't get a chance to accumulate
the stats as they won't be able to call InstrStopNode.  So the result
will be something like below:

postgres=# explain (analyze,buffers,timing off,costs off) select *
from t1 limit 5;
   QUERY PLAN
-
 Limit (actual rows=5 loops=1)
   Buffers: shared hit=6 read=224
   ->  Gather (actual rows=5 loops=1)
 Workers Planned: 2
 Workers Launched: 2
 Buffers: shared hit=1 read=63
 ->  Parallel Seq Scan on t1 (actual rows=17213 loops=3)
   Buffers: shared hit=6 read=224
 Planning Time: 0.105 ms
 Execution Time: 1363068.675 ms
(10 rows)

In this plan, you can notice that stats (Buffers:) at Parallel Seq
Scan and Limit are same, but Gather node shows different stats.  One
idea could be that in ExecShutdownNode, somehow, we allow the nodes to
count the stats, maybe by calling InstrStartNode/InstrStopNode, but
not sure if that is the best way to fix it.

Thoughts?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: shared-memory based stats collector

2018-07-06 Thread Magnus Hagander
On Wed, Jul 4, 2018 at 11:23 PM, Tom Lane  wrote:

> Kyotaro HORIGUCHI  writes:
> > At Mon, 2 Jul 2018 14:25:58 -0400, Robert Haas 
> wrote in  gmail.com>
> >> Copying the whole hash table kinds of sucks, partly because of the
> >> time it will take to copy it, but also because it means that memory
> >> usage is still O(nbackends * ntables).  Without looking at the patch,
> >> I'm guessing that you're doing that because we need a way to show each
> >> transaction a consistent snapshot of the data, and I admit that I
> >> don't see another obvious way to tackle that problem.  Still, it would
> >> be nice if we had a better idea.
>
> > The consistency here means "repeatable read" of an object's stats
> > entry, not a snapshot covering all objects. We don't need to copy
> > all the entries at once following this definition. The attached
> > version makes a cache entry only for requested objects.
>
> Uh, what?  That's basically destroying the long-standing semantics of
> statistics snapshots.  I do not think we can consider that acceptable.
> As an example, it would mean that scan counts for indexes would not
> match up with scan counts for their tables.
>

I agree that this is definitely something that needs to be considered. I
took a look some time ago at the same thing, and ran up against exactly
that one (and at the time did not have time to fix it).
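
(For concreteness, the kind of cross-view consistency Tom means -- within one
statistics snapshot these should agree; the table name is illustrative:)

SELECT relname, idx_scan
  FROM pg_stat_user_tables WHERE relname = 'some_table';
SELECT indexrelname, idx_scan
  FROM pg_stat_user_indexes WHERE relname = 'some_table';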

I have not yet had time to look at the downstream suggested handling
(UNDO). However, I had one other thing from my notes I wanted to mention :)

We should probably consider adding an API to fetch counters that *don't*
follow these rules, in case it's not needed. When going through files we're
still stuck at that bottleneck, but if going through shared memory it
should be possible to make it a lot cheaper by volunteering to "not need
that".

We should also consider the ability to fetch stats for a single object,
which would require no copying of the whole structure at all. I think
something like this could for example be used for autovacuum rechecks. On
top of the file based transfer that would help very little, but through
shared memory it could be a lot lighter weight.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: no partition pruning when partitioning using array type

2018-07-06 Thread Andrew Dunstan




On 05/08/2018 02:18 AM, Amit Langote wrote:

On 2018/03/01 17:16, Amit Langote wrote:

Added this to CF (actually moved to the September one after first having
added it to the CF that is just getting started).

It seems to me that we don't need to go with my originally proposed
approach to teach predtest.c to strip RelabelType nodes.  Apparently, it's
only partition.c that's adding the RelabelType node around partition key
nodes in such cases.  That is, in the case of having an array, record,
enum, and range type as the partition key.  No other part of the system
ends up adding one as far as I can see.  Parser's make_op(), for example,
that is used to generate an OpExpr for a qual involving the partition key,
won't put RelabelType around the partition key node if the operator in
question has "pseudo"-types listed as declared types of its left and right
arguments.

So I revised the patch to drop all the predtest.c changes and instead
modify get_partition_operator() to avoid generating RelabelType in such
cases.  Please find it attached.

I would like to revisit this as a bug fix for get_partition_operator() to
be applied to both PG 10 and HEAD.  In the former case, it fixes the bug
that constraint exclusion code will fail to prune correctly when partition
key is of array, enum, range, or record type due to the structural
mismatch between the OpExpr that partitioning code generates and one that
the parser generates for WHERE clauses involving partition key columns.

In HEAD, since we already fixed that case in e5dcbb88a15d [1] which is a
different piece of code anyway, the patch only serves to improve the
deparse output emitted by ruleutils.c for partition constraint expressions
where pseudo-type partition key is involved.  The change can be seen in
the updated test output for create_table test.

Attached are patches for PG 10 and HEAD, which implement a slightly
different approach to fixing this.  Now, instead of passing the partition
key's type as lefttype and righttype to look up the operator, the operator
class declared type is passed.  Also, we now decide whether to create a
RelabelType node on top based on whether the partition key's type is
different from the selected operator's input type, with exception for
pseudo-type types.

Thanks,
Amit

[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e5dcbb88a15d
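
(For context, a minimal sketch of the reported scenario -- list partitioning
on an array-type key, where constraint exclusion failed to prune; names are
illustrative:)

CREATE TABLE p (a int[]) PARTITION BY LIST (a);
CREATE TABLE p1 PARTITION OF p FOR VALUES IN ('{1}');
CREATE TABLE p2 PARTITION OF p FOR VALUES IN ('{2}');
EXPLAIN SELECT * FROM p WHERE a = '{1}';  -- should scan only p1 once fixed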




Since this email we have branched off REL_11_STABLE. Do we want to 
consider this as a bug fix for Release 11? If so, should we add it to 
the open items list?


cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: hot_standby_feedback vs excludeVacuum and snapshots

2018-07-06 Thread Simon Riggs
On 6 July 2018 at 03:30, Thomas Munro  wrote:
> On Thu, Jul 5, 2018 at 8:27 AM, Noah Misch  wrote:
>>> However, 49bff5300d527 also introduced a similar bug where 
>>> subtransaction
>>> commit would fail to release an AccessExclusiveLock, leaving the lock to
>>> be removed sometimes early and sometimes late. This commit fixes
>>> that bug also. Backpatch to PG10 needed.
>>
>> Subtransaction commit is too early to release an arbitrary
>> AccessExclusiveLock.  The primary releases every AccessExclusiveLock at
>> top-level transaction commit, top-level transaction abort, or subtransaction
>> abort.  CommitSubTransaction() doesn't do that; it transfers locks to the
>> parent sub(xact).  Standby nodes can't safely remove an arbitrary lock 
>> earlier
>> than the primary would.
>
> But we don't release locks acquired by committing subxacts until the
> top level xact commits.  Perhaps that's what the git commit message
> meant by "early", as opposed to "late" meaning when
> StandbyReleaseOldLocks() next runs because an XLOG_RUNNING_XACTS
> record is replayed?

Locks held by subtransactions were not released at the correct timing
of top-level commit; they are now.
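
(To make the timing concrete, a sketch with an illustrative table name:)

BEGIN;
SAVEPOINT s;
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
RELEASE SAVEPOINT s;  -- subxact commit: the lock transfers to the parent
COMMIT;               -- only here is the AccessExclusiveLock released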

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: BUG #14999: pg_rewind corrupts control file global/pg_control

2018-07-06 Thread Andrew Dunstan
The following review has been posted through the commitfest application:
make installcheck-world:  not tested
Implements feature:   not tested
Spec compliant:   not tested
Documentation:not tested

I agree that this problem should just be documented, and no further fix done.
The proposed commit message seems to be too long - I think it should mainly
just refer to the mailing list discussion. Other than that it seems fine.

The new status of this patch is: Ready for Committer


Re: PATCH: Update snowball stemmers

2018-07-06 Thread Tom Lane
Andrew Dunstan  writes:
> On 06/26/2018 08:20 AM, Arthur Zakirov wrote:
>> I'd like to propose the patch which syncs PostgreSQL snowball stemmers.
>> As Tom pointed [1] stemmers haven't synced for a very long time.

> I agree with Tom that we should sync with the upstream before we do 
> anything else. This is a very large patch but with fairly limited 
> impact. I think now at the start of a dev cycle is the right time to 
> apply it.

Agreed in principle.  The thing that's actually a bit shaky here is
to decide what "upstream" to follow, given that the original maintainer
has retired and there doesn't seem to be active work going on.

> I don't know if we have a buildfarm animal testing Hungarian. Maybe we 
> need a buildfarm animal or two testing a large number of locales.

I dunno that we need to set up a permanent buildfarm member for this.
I had been thinking in terms of testing every available locale on my
own machines before pushing, but given that the code is pretty static,
do we need to do that repetitively?

regards, tom lane



pgbench issue

2018-07-06 Thread Sichen Zhao
Hi,

I am a developer from China. When I use pgbench -l to output a log file,
pgbench doesn't seem to work; my pgbench version is 9.2.13.
Can anyone help me?

PS: when I use the 9.2.23 pgbench, -l works.


thanks in advance

Best Regards
Sichen Zhao



Re: New function pg_stat_statements_reset_query() to reset statistics of a specific query

2018-07-06 Thread Fujii Masao
On Fri, Jul 6, 2018 at 3:34 AM, Sergei Kornilov  wrote:
> Hello
>
>> Currently pg_stat_statements_reset() and other stats reset functions like
>> pg_stat_reset() can be executed only by superusers.
>> But why did you allow pg_read_all_stats role to execute that function,
>> by default? That change looks strange to me.
> This is not a behavior change, only a documentation fix to match the current state. Commit 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=25fff40798fc4ac11a241bfd9ab0c45c085e2212
> created version 1.5 and granted EXECUTE to pg_read_all_stats.

Hmm... so pg_stat_statements_reset() is allowed to be executed by
pg_read_all_stats role while other stats reset functions like
pg_stat_reset() can be executed only by superusers. Which looks
strange and inconsistent to me.

Regards,

-- 
Fujii Masao



pg_recvlogical use of SIGHUP

2018-07-06 Thread Dave Cramer
There is some undocumented (nothing in the docs) code that allows for
closing and reopening the output file for pg_recvlogical.

Since this doesn't change the name of the file in the process, I'm wondering
how one might actually use this "feature"?

I could see it being useful if we were able to change the file name
and create a new file based on some form of template.

Am I missing something?

Dave Cramer


Re: pg_recvlogical use of SIGHUP

2018-07-06 Thread Andres Freund
Hi,

On 2018-07-06 13:49:37 -0400, Dave Cramer wrote:
> There is some undocumented (nothing in the docs) code that allows for
> closing and reopening the output file for pg_recvlogical.
> 
> Since this doesn't change the name of the file in the process I'm wondering
> how one might actually use this "feature" ?

You can rename the file, then sighup, no? Renaming while the file is
open will continue to write into the renamed file, but sighup'ing will
use the original name.

Greetings,

Andres Freund



"interesting" issue with restore from a pg_dump with a database-wide search_path

2018-07-06 Thread Larry Rosenman
I have the following:

\set DB `echo $DB`
CREATE SCHEMA IF NOT EXISTS postgis;
CREATE SCHEMA IF NOT EXISTS topology;
CREATE SCHEMA IF NOT EXISTS tiger;
SET search_path=public,postgis,tiger,topology;
ALTER DATABASE :DB SET search_path=public,postgis,tiger,topology;
\c wm_test
CREATE EXTENSION fuzzystrmatch schema postgis;
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis schema postgis;
-- Enable Topology
CREATE EXTENSION postgis_topology schema topology;
-- Enable PostGIS Advanced 3D
-- and other geoprocessing algorithms
CREATE EXTENSION postgis_sfcgal schema postgis;
-- rule based standardizer
CREATE EXTENSION address_standardizer schema postgis;
-- example rule data set
CREATE EXTENSION address_standardizer_data_us schema postgis;
-- Enable US Tiger Geocoder
CREATE EXTENSION postgis_tiger_geocoder schema tiger;
-- routing functionality
CREATE EXTENSION pgrouting schema postgis;
-- spatial foreign data wrappers
CREATE EXTENSION ogr_fdw schema postgis;
-- LIDAR support
CREATE EXTENSION pointcloud schema postgis;
-- LIDAR Point cloud patches to geometry type cases
CREATE EXTENSION pointcloud_postgis schema postgis;
-

when I pg_dump -Fc the database and then try to restore it after a
create database, I get errors.  To get a clean restore I need to do:

---
\set DB `echo ${DB}`
CREATE SCHEMA IF NOT EXISTS postgis;
CREATE SCHEMA IF NOT EXISTS topology;
CREATE SCHEMA IF NOT EXISTS tiger;
SET search_path=public,postgis,tiger,topology;
ALTER DATABASE :DB SET search_path=public,postgis,tiger,topology;
\c 
CREATE EXTENSION fuzzystrmatch schema postgis;
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis schema postgis;
-- Enable Topology
CREATE EXTENSION postgis_topology schema topology;
-- Enable PostGIS Advanced 3D
-- and other geoprocessing algorithms
CREATE EXTENSION postgis_sfcgal schema postgis;
-- rule based standardizer
CREATE EXTENSION address_standardizer schema postgis;
-- example rule data set
CREATE EXTENSION address_standardizer_data_us schema postgis;
-- Enable US Tiger Geocoder
CREATE EXTENSION postgis_tiger_geocoder schema tiger;
-- routing functionality
CREATE EXTENSION pgrouting schema postgis;
-- spatial foreign data wrappers
CREATE EXTENSION ogr_fdw schema postgis;
-- LIDAR support
CREATE EXTENSION pointcloud schema postgis;
-- LIDAR Point cloud patches to geometry type cases
CREATE EXTENSION pointcloud_postgis schema postgis;

Is the need to do this expected?

This is 10.4 on FreeBSD.



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: "interesting" issue with restore from a pg_dump with a database-wide search_path

2018-07-06 Thread Joshua D. Drake

On 07/06/2018 11:27 AM, Larry Rosenman wrote:

when I pg_dump -Fc the database and then try to restore it after a
create database, I get errors.  To get a clean restore I need to do:


Knowing the errors would be helpful.

jD



---
\set DB `echo ${DB}`
CREATE SCHEMA IF NOT EXISTS postgis;
CREATE SCHEMA IF NOT EXISTS topology;
CREATE SCHEMA IF NOT EXISTS tiger;
SET search_path=public,postgis,tiger,topology;
ALTER DATABASE :DB SET search_path=public,postgis,tiger,topology;
\c
CREATE EXTENSION fuzzystrmatch schema postgis;
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis schema postgis;
-- Enable Topology
CREATE EXTENSION postgis_topology schema topology;
-- Enable PostGIS Advanced 3D
-- and other geoprocessing algorithms
CREATE EXTENSION postgis_sfcgal schema postgis;
-- rule based standardizer
CREATE EXTENSION address_standardizer schema postgis;
-- example rule data set
CREATE EXTENSION address_standardizer_data_us schema postgis;
-- Enable US Tiger Geocoder
CREATE EXTENSION postgis_tiger_geocoder schema tiger;
-- routing functionality
CREATE EXTENSION pgrouting schema postgis;
-- spatial foreign data wrappers
CREATE EXTENSION ogr_fdw schema postgis;
-- LIDAR support
CREATE EXTENSION pointcloud schema postgis;
-- LIDAR Point cloud patches to geometry type cases
CREATE EXTENSION pointcloud_postgis schema postgis;

Is the need to do this expected?

This is 10.4 on FreeBSD.





--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: pgbench issue

2018-07-06 Thread Andres Freund
Hi,

On 2018-07-06 16:58:03 +, Sichen Zhao wrote:
> I am a developer from China. When I use pgbench -l to output a log file,
> pgbench doesn't seem to work; my pgbench version is 9.2.13.
> Can anyone help me?
> 
> PS: when I use the 9.2.23 pgbench, -l works.

Given that you say the newer version works, I don't understand what
you're really asking - just use the newer version?

Note that I do not see any relevant changes to pgbench between
9.2.13 and 9.2.23.

Greetings,

Andres Freund



Re: "interesting" issue with restore from a pg_dump with a database-wide search_path

2018-07-06 Thread Larry Rosenman
On Fri, Jul 06, 2018 at 11:35:41AM -0700, Joshua D. Drake wrote:
> On 07/06/2018 11:27 AM, Larry Rosenman wrote:
> > when I pg_dump -Fc the database and then try to restore it after a
> > create database, I get errors.  To get a clean restore I need to do:
> 
> Knowing the errors would be helpful.
> 
> jD
ler=# drop database wm_common;create database wm_common
DROP DATABASE
ler-# ;
CREATE DATABASE
ler=# \q
borg.lerctr.org /home/ler $ pg_restore -d wm_common wm_t
borg.lerctr.org /home/ler $ cd WM
borg.lerctr.org /home/ler/WM $ pg_restore -d wm_common wm_test.dump
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 12; 3079 887963 EXTENSION 
postgis_tiger_geocoder
pg_restore: [archiver (db)] could not execute query: ERROR:  function 
soundex(character varying) does not exist
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.
Command was: CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder WITH 
SCHEMA tiger;



pg_restore: [archiver (db)] Error from TOC entry 5400; 0 0 COMMENT EXTENSION 
postgis_tiger_geocoder
pg_restore: [archiver (db)] could not execute query: ERROR:  extension 
"postgis_tiger_geocoder" does not exist
Command was: COMMENT ON EXTENSION postgis_tiger_geocoder IS 'PostGIS tiger 
geocoder and reverse geocoder';



pg_restore: [archiver (db)] Error from TOC entry 11; 3079 887754 EXTENSION 
postgis_topology
pg_restore: [archiver (db)] could not execute query: ERROR:  type "geometry" 
does not exist
Command was: CREATE EXTENSION IF NOT EXISTS postgis_topology WITH SCHEMA 
topology;



pg_restore: [archiver (db)] Error from TOC entry 5401; 0 0 COMMENT EXTENSION 
postgis_topology
pg_restore: [archiver (db)] could not execute query: ERROR:  extension 
"postgis_topology" does not exist
Command was: COMMENT ON EXTENSION postgis_topology IS 'PostGIS topology 
spatial types and functions';


> 
> 
> > ---
> > \set DB `echo ${DB}`
> > CREATE SCHEMA IF NOT EXISTS postgis;
> > CREATE SCHEMA IF NOT EXISTS topology;
> > CREATE SCHEMA IF NOT EXISTS tiger;
> > SET search_path=public,postgis,tiger,topology;
> > ALTER DATABASE :DB SET search_path=public,postgis,tiger,topology;
> > \c
> > CREATE EXTENSION fuzzystrmatch schema postgis;
> > -- Enable PostGIS (includes raster)
> > CREATE EXTENSION postgis schema postgis;
> > -- Enable Topology
> > CREATE EXTENSION postgis_topology schema topology;
> > -- Enable PostGIS Advanced 3D
> > -- and other geoprocessing algorithms
> > CREATE EXTENSION postgis_sfcgal schema postgis;
> > -- rule based standardizer
> > CREATE EXTENSION address_standardizer schema postgis;
> > -- example rule data set
> > CREATE EXTENSION address_standardizer_data_us schema postgis;
> > -- Enable US Tiger Geocoder
> > CREATE EXTENSION postgis_tiger_geocoder schema tiger;
> > -- routing functionality
> > CREATE EXTENSION pgrouting schema postgis;
> > -- spatial foreign data wrappers
> > CREATE EXTENSION ogr_fdw schema postgis;
> > -- LIDAR support
> > CREATE EXTENSION pointcloud schema postgis;
> > -- LIDAR Point cloud patches to geometry type cases
> > CREATE EXTENSION pointcloud_postgis schema postgis;
> > 
> > Is the need to do this expected?
> > 
> > This is 10.4 on FreeBSD.
> > 
> > 
> > 
> 
> -- 
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
> 

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: pg_recvlogical use of SIGHUP

2018-07-06 Thread Dave Cramer
On 6 July 2018 at 14:11, Andres Freund  wrote:

> Hi,
>
> On 2018-07-06 13:49:37 -0400, Dave Cramer wrote:
> > There is some undocumented (nothing in the docs) code that allows for
> > closing and reopening the output file for pg_recvlogical.
> >
> > Since this doesn't change the name of the file in the process I'm
> wondering
> > how one might actually use this "feature" ?
>
> You can rename the file, then sighup, no? Renaming while the file is
> open will continue to write into the renamed file, but sighup'ing will
> use the original name.
>

That is the missing piece, thanks!
I'll prepare a patch for the docs

Dave Cramer


Re: shared-memory based stats collector

2018-07-06 Thread Robert Haas
On Fri, Jul 6, 2018 at 10:29 AM, Magnus Hagander  wrote:
> I agree that this is definitely something that needs to be considered. I
> took a look some time ago at the same thing, and ran up against exactly that
> one (and at the time did not have time to fix it).
>
> I have not yet had time to look at the downstream suggested handling (UNDO).
> However, I had one other thing from my notes I wanted to mention :)
>
> We should probably consider adding an API to fetch counters that *don't*
> follow these rules, in case it's not needed. When going through files we're
> still stuck at that bottleneck, but if going through shared memory it should
> be possible to make it a lot cheaper by volunteering to "not need that".
>
> We should also consider the ability to fetch stats for a single object,
> which would require no copying of the whole structure at all. I think
> something like this could for example be used for autovacuum rechecks. On
> top of the file based transfer that would help very little, but through
> shared memory it could be a lot lighter weight.

I think we also have to ask ourselves in general whether snapshots of
this data are worth what they cost.  I don't think anyone would doubt
that a consistent snapshot of the data is better than an inconsistent
view of the data if the costs were equal.  However, if we can avoid a
huge amount of memory usage and complexity on large systems with
hundreds of backends by ditching the snapshot requirement, then we
should ask ourselves how important we think the snapshot behavior
really is.

Note that commit 3cba8999b34 relaxed the synchronization requirements
around GetLockStatusData().  In other words, since 2011, you can no
longer be certain that 'select * from pg_locks' is returning a
perfectly consistent view of the lock status.  If this has caused
anybody a major problem, I'm unaware of it.  Maybe the same would end
up being true here.  The amount of memory we're consuming for this
data may be a bigger problem than minor inconsistencies in the view of
the data would be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: shared-memory based stats collector

2018-07-06 Thread Andres Freund
On 2018-07-06 14:49:53 -0400, Robert Haas wrote:
> I think we also have to ask ourselves in general whether snapshots of
> this data are worth what they cost.  I don't think anyone would doubt
> that a consistent snapshot of the data is better than an inconsistent
> view of the data if the costs were equal.  However, if we can avoid a
> huge amount of memory usage and complexity on large systems with
> hundreds of backends by ditching the snapshot requirement, then we
> should ask ourselves how important we think the snapshot behavior
> really is.

Indeed. I don't think it's worthwhile major additional memory or code
complexity in this situation. The likelihood of benefitting from more /
better stats seems far higher than a more accurate view of the stats -
which aren't particularly accurate themselves. They don't even survive
crashes right now, so I don't think the current accuracy is very high.

Greetings,

Andres Freund



Re: shared-memory based stats collector

2018-07-06 Thread Joshua D. Drake

On 07/06/2018 11:57 AM, Andres Freund wrote:

On 2018-07-06 14:49:53 -0400, Robert Haas wrote:

I think we also have to ask ourselves in general whether snapshots of
this data are worth what they cost.  I don't think anyone would doubt
that a consistent snapshot of the data is better than an inconsistent
view of the data if the costs were equal.  However, if we can avoid a
huge amount of memory usage and complexity on large systems with
hundreds of backends by ditching the snapshot requirement, then we
should ask ourselves how important we think the snapshot behavior
really is.

Indeed. I don't think it's worthwhile major additional memory or code
complexity in this situation. The likelihood of benefitting from more /
better stats seems far higher than a more accurate view of the stats -
which aren't particularly accurate themselves. They don't even survive
crashes right now, so I don't think the current accuracy is very high.



Will stats, if we move toward the suggested changes, be "less" accurate 
than they are now? We already know that stats are generally not accurate, 
but they are close enough. If we move toward this change, will it still 
be close enough?


JD



Greetings,

Andres Freund



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: Explain buffers wrong counter with parallel plans

2018-07-06 Thread Robert Haas
On Fri, Jul 6, 2018 at 9:44 AM, Amit Kapila  wrote:
> I have tried this idea, but it doesn't completely solve the problem.
> The problem is that nodes below LIMIT won't get a chance to accumulate
> the stats as they won't be able to call InstrStopNode.

I'm not sure I understand.  Why not?  I see that we'd need to insert
an extra call to InstrStopNode() if we were stopping the node while it
was running, because then InstrStartNode() would have already been
done, but the corresponding call to InstrStopNode() would not have
been done.  But I'm not sure how that would happen in this case.  Can
you explain further?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg_recvlogical use of SIGHUP

2018-07-06 Thread Dave Cramer
>
>
> On 6 July 2018 at 14:11, Andres Freund  wrote:
>
>> Hi,
>>
>> On 2018-07-06 13:49:37 -0400, Dave Cramer wrote:
>> > There is some undocumented (nothing in the docs) code that allows for
>> > closing and reopening the output file for pg_recvlogical.
>> >
>> > Since this doesn't change the name of the file in the process I'm
>> wondering
>> > how one might actually use this "feature" ?
>>
>> You can rename the file, then sighup, no? Renaming while the file is
>> open will continue to write into the renamed file, but sighup'ing will
>> use the original name.
>>
>
> That is the missing piece, thanks!
> I'll prepare a patch for the docs
>


See attached patch.


pg_recvlogical.patch
Description: Binary data


Re: PATCH: Update snowball stemmers

2018-07-06 Thread Arthur Zakirov
On Fri, Jul 06, 2018 at 12:37:26PM -0400, Tom Lane wrote:
> > I don't know if we have a buildfarm animal testing Hungarian. Maybe we 
> > need a buildfarm animal or two testing a large number of locales.
> 
> I dunno that we need to set up a permanent buildfarm member for this.
> I had been thinking in terms of testing every available locale on my
> own machines before pushing, but given that the code is pretty static,
> do we need to do that repetitively?

I ran installcheck for the hu_HU.UTF-8, hu_HU.ISO-8859-2 and ru_RU.UTF-8
locales on my laptop. The tests passed.

I think it is worth considering that most text search function calls
(to_tsvector, to_tsquery) in the tests use a specific text search
configuration (english or a custom one). Only a few of them use the default
text search configuration; they are in the json.sql, jsonb.sql and
tsearch.sql tests.

Is it a good idea to modify some current tests to use the default
configuration, or to add specific tests for locale testing?
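
For example, a minimal illustration of the distinction:

SELECT to_tsvector('english', 'The quick brown foxes');  -- pinned configuration
SELECT to_tsvector('The quick brown foxes');  -- uses default_text_search_config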

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



Re: documentation fixes for partition pruning, round three

2018-07-06 Thread Alvaro Herrera
On 2018-Jul-06, Amit Langote wrote:

> On 2018/07/06 6:55, David Rowley wrote:
> > On 6 July 2018 at 09:41, Alvaro Herrera  wrote:
> >> On 2018-Jul-05, Peter Eisentraut wrote:
> >>> Committed.
> >>
> >> Thanks for handling this.
> >>
> >> Should we do this in REL_11_STABLE too?  I vote yes.
> > 
> > Sorry for not paying much attention to this, but I've read through
> > what's been committed and I also think PG11 deserves this too.
> 
> +1

Done, thanks :-)

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: shared-memory based stats collector

2018-07-06 Thread Robert Haas
On Fri, Jul 6, 2018 at 3:02 PM, Joshua D. Drake  wrote:
> Will stats, if we move toward the suggested changes, be "less" accurate than
> they are now? We already know that stats are generally not accurate, but they
> are close enough. If we move toward this change, will it still be close
> enough?

The proposed change would have no impact at all on the long-term
accuracy of the statistics.  It would just mean that there would be
race conditions when reading them, so that for example you would be
more likely to see a count of heap scans that doesn't match the count
of index scans, because an update arrives in between when you read the
first value and when you read the second one.  I don't see that
mattering a whole lot, TBH, but maybe I'm missing something.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: New function pg_stat_statements_reset_query() to reset statistics of a specific query

2018-07-06 Thread Robert Haas
On Fri, Jul 6, 2018 at 1:26 PM, Fujii Masao  wrote:
> Hmm... so pg_stat_statements_reset() is allowed to be executed by
> pg_read_all_stats role while other stats reset functions like
> pg_stat_reset() can be executed only by superusers. Which looks
> strange and inconsistent to me.

Yeah, why would a pg_READ_all_stats role let you change stuff?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: shared-memory based stats collector

2018-07-06 Thread Andres Freund
On 2018-07-06 12:02:39 -0700, Joshua D. Drake wrote:
> On 07/06/2018 11:57 AM, Andres Freund wrote:
> > On 2018-07-06 14:49:53 -0400, Robert Haas wrote:
> > > I think we also have to ask ourselves in general whether snapshots of
> > > this data are worth what they cost.  I don't think anyone would doubt
> > > that a consistent snapshot of the data is better than an inconsistent
> > > view of the data if the costs were equal.  However, if we can avoid a
> > > huge amount of memory usage and complexity on large systems with
> > > hundreds of backends by ditching the snapshot requirement, then we
> > > should ask ourselves how important we think the snapshot behavior
> > > really is.
> > Indeed. I don't think it's worthwhile major additional memory or code
> > complexity in this situation. The likelihood of benefitting from more /
> > better stats seems far higher than a more accurate view of the stats -
> > which aren't particularly accurate themselves. They don't even survive
> > crashes right now, so I don't think the current accuracy is very high.
> 
> 
> Will stats, if we move toward the suggested changes, be "less" accurate than
> they are now? We already know that stats are generally not accurate, but they
> are close enough. If we move toward this change, will it still be close
> enough?

I don't think there's a meaningful difference to before. And at the same
time less duplication / hardcoded structure will allow us to increase
the amount of stats we keep.

Greetings,

Andres Freund



Re: shared-memory based stats collector

2018-07-06 Thread Joshua D. Drake

On 07/06/2018 12:34 PM, Robert Haas wrote:

On Fri, Jul 6, 2018 at 3:02 PM, Joshua D. Drake  wrote:

Will stats, if we move toward the suggested changes, be "less" accurate than
they are now? We already know that stats are generally not accurate, but they
are close enough. If we move toward this change, will it still be close
enough?

The proposed change would have no impact at all on the long-term
accuracy of the statistics.  It would just mean that there would be
race conditions when reading them, so that for example you would be
more likely to see a count of heap scans that doesn't match the count
of index scans, because an update arrives in between when you read the
first value and when you read the second one.  I don't see that
mattering a whole lot, TBH, but maybe I'm missing something.


I agree that it probably isn't a big deal. Generally speaking when we 
look at stats it is to get an "idea" of what is going on. We don't care 
if we are missing an increase/decrease of 20 of any particular value 
within stats. Based on this and what Andres said, it seems like a net 
win to me.


JD





--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: shared-memory based stats collector

2018-07-06 Thread Magnus Hagander
On Fri, Jul 6, 2018 at 8:57 PM, Andres Freund  wrote:

> On 2018-07-06 14:49:53 -0400, Robert Haas wrote:
> > I think we also have to ask ourselves in general whether snapshots of
> > this data are worth what they cost.  I don't think anyone would doubt
> > that a consistent snapshot of the data is better than an inconsistent
> > view of the data if the costs were equal.  However, if we can avoid a
> > huge amount of memory usage and complexity on large systems with
> > hundreds of backends by ditching the snapshot requirement, then we
> > should ask ourselves how important we think the snapshot behavior
> > really is.
>
> Indeed. I don't think it's worthwhile major additional memory or code
> complexity in this situation. The likelihood of benefitting from more /
> better stats seems far higher than a more accurate view of the stats -
> which aren't particularly accurate themselves. They don't even survive
> crashes right now, so I don't think the current accuracy is very high.
>

Definitely agreed.

*If* we can provide the snapshot view of them without too much overhead, I
think it's worth looking into that while *also* providing a lower overhead
interface for those that don't care about it.

If it ends up that keeping the snapshots becomes too much overhead in either
performance or code maintenance, then I agree we can probably drop that.
But we should at least properly investigate the cost.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: "interesting" issue with restore from a pg_dump with a database-wide search_path

2018-07-06 Thread Tom Lane
Larry Rosenman  writes:
> On Fri, Jul 06, 2018 at 11:35:41AM -0700, Joshua D. Drake wrote:
>> Knowing the errors would be helpful.

> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 12; 3079 887963 EXTENSION 
> postgis_tiger_geocoder
> pg_restore: [archiver (db)] could not execute query: ERROR:  function 
> soundex(character varying) does not exist
> HINT:  No function matches the given name and argument types. You might need 
> to add explicit type casts.

This looks like a problem with the postgis_tiger_geocoder extension.
It's depending on the fuzzystrmatch extension (which has the soundex
function), but seemingly this dependency is not declared in the
extension's control file.  If it were, the search path would've been
set to include the schema of the fuzzystrmatch extension during
CREATE EXTENSION.

regards, tom lane
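
For illustration, a condensed sketch of the workaround Larry reported as
working (same object names as in his script), run in the freshly created
target database before pg_restore:

CREATE SCHEMA IF NOT EXISTS postgis;
CREATE SCHEMA IF NOT EXISTS tiger;
CREATE SCHEMA IF NOT EXISTS topology;
ALTER DATABASE wm_common SET search_path = public, postgis, tiger, topology;
-- installing the missing dependency up front also makes soundex() visible:
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch SCHEMA postgis;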



Re: shared-memory based stats collector

2018-07-06 Thread Andres Freund
Hi,

On 2018-07-06 22:03:12 +0200, Magnus Hagander wrote:
> *If* we can provide the snapshot view of them without too much overhead, I
> think it's worth looking into that while *also* providing a lower overhead
> interface for those that don't care about it.

I don't see how that's possible without adding significant amounts of
complexity and probably memory / cpu overhead. The current stats already
are quite inconsistent (often outdated, partially updated, messages
dropped when busy) - I don't see what we really gain by building
something MVCC like in the "new" stats subsystem.


> If it ends up that keeping the snapshots becomes too much overhead in either
> performance or code maintenance, then I agree we can probably drop that.
> But we should at least properly investigate the cost.

I don't think it's worthwhile to do more than think a bit about it. There
are fairly obvious tradeoffs in complexity here. Trying to get there seems
like a good way to make the feature too big.

Greetings,

Andres Freund



Re: Make deparsing of column defaults faster

2018-07-06 Thread Peter Eisentraut
On 05.07.18 18:58, Justin Pryzby wrote:
> I checked on one customer running PG10.4, for which pg_dump takes 8 minutes to
> pg_dump -s.
> 
> I imported existing schema to PG12dev (which itself took 25min) and compared:
> patched: 2m33.616s
> unpatched: 7m19.578s

Which patch is that, Jeff's or mine?

> Note, that affects pg_upgrade, which is how this issue originally came up [0].
> (But I believe pg_upgrade likes to call pg_dump from the old server version, 
> so
> pg_upgrade to v11 couldn't benefit unless this was included in PG10.5).

pg_upgrade calls the new pg_dump.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [PATCH] Use access() to check file existence in GetNewRelFileNode().

2018-07-06 Thread Peter Eisentraut
On 18.05.18 16:04, Michael Paquier wrote:
> On Fri, May 18, 2018 at 02:42:08PM +0800, Paul Guo wrote:
>> 2018-05-17 21:18 GMT+08:00 Michael Paquier :
>>> You should also add this patch to the next commit fest, development of
>>> v11 is done and it is a stabilization period now, so no new patches are
>>> merged.  Here is where you can register the patch:
>>> https://commitfest.postgresql.org/18/
>>
>> Submitted.

This patch looks sensible to me.  We also use access() elsewhere in the
backend to test for file existence.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: New function pg_stat_statements_reset_query() to reset statistics of a specific query

2018-07-06 Thread Sergei Kornilov
06.07.2018, 22:35, "Robert Haas" :
> On Fri, Jul 6, 2018 at 1:26 PM, Fujii Masao  wrote:
>>  Hmm... so pg_stat_statements_reset() is allowed to be executed by
>>  pg_read_all_stats role while other stats reset functions like
>>  pg_stat_reset() can be executed only by superusers. Which looks
>>  strange and inconsistent to me.
>
> Yeah, why would a pg_READ_all_stats role let you change stuff?

+1, personally I was surprised the first time. But I thought it had been
discussed before version 1.5 was committed.
I read the original thread [1] and, as far as I can see, the
pg_stat_statements_reset change was not discussed.

Let's remove this grant?
Or grant to pg_monitor role instead of pg_read_all_stats?

regards, Sergei

[1] 
https://www.postgresql.org/message-id/flat/ca+ocxoyrdsc1xylff9s698gugypxbs4cvj+0gwo8u65nmyj...@mail.gmail.com
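
For what it's worth, a site that dislikes the current default could adjust
it locally along these lines (a sketch, assuming pg_stat_statements 1.5 is
installed):

REVOKE EXECUTE ON FUNCTION pg_stat_statements_reset() FROM pg_read_all_stats;
GRANT EXECUTE ON FUNCTION pg_stat_statements_reset() TO pg_monitor;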



Re: Let's remove DSM_IMPL_NONE.

2018-07-06 Thread Peter Eisentraut
On 26.06.18 09:10, Kyotaro HORIGUCHI wrote:
> --- a/src/bin/initdb/initdb.c
> +++ b/src/bin/initdb/initdb.c
> @@ -984,6 +984,16 @@ test_config_settings(void)
>   ok_buffers = 0;
>  
>  
> + /*
> +  * Server doesn't confirm that the server-default DSM implementation is
> +  * actually workable. Choose a fine one for probing then it is used on 
> the
> +  * new database.
> +  */
> + printf(_("selecting dynamic shared memory implementation ... "));
> + fflush(stdout);
> + dynamic_shared_memory_type = choose_dsm_implementation();
> + printf("%s\n", dynamic_shared_memory_type);
> +
>   printf(_("selecting default max_connections ... "));
>   fflush(stdout);
>  

I don't understand that comment.  initdb does test whether dsm=posix
works.  What more were you hoping for?

> @@ -996,10 +1006,11 @@ test_config_settings(void)
>"\"%s\" --boot -x0 %s "
>"-c max_connections=%d "
>"-c shared_buffers=%d "
> -  "-c dynamic_shared_memory_type=none "
> +  "-c dynamic_shared_memory_type=%s "
>"< \"%s\" > \"%s\" 2>&1",
>backend_exec, boot_options,
>test_conns, test_buffs,
> +  dynamic_shared_memory_type,
>DEVNULL, DEVNULL);
>   status = system(cmd);
>   if (status == 0)

We could perhaps avoid some variability here by running the
bootstrapping runs in initdb using hardcoded dsm settings of
"sysv"/"windows".

> --- a/src/include/storage/dsm_impl.h
> +++ b/src/include/storage/dsm_impl.h
> @@ -14,11 +14,10 @@
>  #define DSM_IMPL_H
>  
>  /* Dynamic shared memory implementations. */
> -#define DSM_IMPL_NONE0
> -#define DSM_IMPL_POSIX   1
> -#define DSM_IMPL_SYSV2
> -#define DSM_IMPL_WINDOWS 3
> -#define DSM_IMPL_MMAP4
> +#define DSM_IMPL_POSIX   0
> +#define DSM_IMPL_SYSV1
> +#define DSM_IMPL_WINDOWS 2
> +#define DSM_IMPL_MMAP3

I would avoid renumbering here.  It was kind of sensible to have NONE =
0, so I'd keep the non-NONE ones as non-zero.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: New function pg_stat_statements_reset_query() to reset statistics of a specific query

2018-07-06 Thread Alvaro Herrera
On 2018-Jul-06, Sergei Kornilov wrote:

> 06.07.2018, 22:35, "Robert Haas" :
> > On Fri, Jul 6, 2018 at 1:26 PM, Fujii Masao  wrote:
> >>  Hmm... so pg_stat_statements_reset() is allowed to be executed by
> >>  pg_read_all_stats role while other stats reset functions like
> >>  pg_stat_reset() can be executed only by superusers. Which looks
> >>  strange and inconsistent to me.
> >
> > Yeah, why would a pg_READ_all_stats role let you change stuff?
> 
> +1, personally I was surprised the first time. But I thought it had been
> discussed before version 1.5 was committed.
> I read the original thread [1] and, as far as I can see, the
> pg_stat_statements_reset change was not discussed.

Ugh, it's true :-(
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=25fff40798fc4ac11a241bfd9ab0c45c085e2212#patch8

Dave, Simon, any comments?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Bug in to_timestamp().

2018-07-06 Thread Tom Lane
Alexander Korotkov  writes:
> I took a look at this patch.  It looks good to me.  It applies
> cleanly on current master, builds without warnings, and passes tests.
> Functionality seems to be well covered by documentation and regression
> tests.  So, I'm going to commit it if there are no objections.

AFAIR, the argument was mainly over whether we agreed with the proposed
behavioral changes.  It seems a bit premature to me to commit given that
there's not consensus on that.

regards, tom lane



Re: Should contrib modules install .h files?

2018-07-06 Thread Peter Eisentraut
On 05.07.18 14:51, Andrew Gierth wrote:
> Anyone have any objection to putting this into 11beta if it works, as
> well as 12devel?

Yes, I have expressed concerns about this approach elsewhere in this thread.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Should contrib modules install .h files?

2018-07-06 Thread Andrew Gierth
> "Peter" == Peter Eisentraut  writes:

 > On 02.07.18 15:26, Tom Lane wrote:
 >> FWIW, I agree with Andres' thought that each contrib module should
 >> have its own subdirectory under $(includedir_server). Otherwise
 >> we're going to be faced with questions about whether .h files need
 >> to be renamed because they're not globally unique enough.

 Peter> Then they perhaps should be renamed. That seems like a much
 Peter> simpler solution.

Personally I think that more -I options is less pain than having to
rename things or deal with conflicts.

Where exactly are you suggesting that they should be installed? Directly
in $(installdir_server), or in $(installdir_server)/extension or
equivalent?

 Peter> The use case being discussed here is installing a data type
 Peter> extension's header so you can write a transform for it. The
 Peter> extension's name as well as the data type's own name already
 Peter> have to be pretty much globally unique if you want it to be
 Peter> useful. So it doesn't seem very difficult to me to have the
 Peter> extension install a single header file with that same name.

That's assuming a single header file, which might be a bit more
restrictive than absolutely necessary.

 Peter> The other side of this is that the PLs have to install their
 Peter> header files. Which the in-core PLs already do. Would we we want
 Peter> to move their header files under a new per-extension directory
 Peter> scheme?

The in-core PLs could reasonably be grandfathered in in their current
locations, at least for now.

-- 
Andrew (irc:RhodiumToad)



Re: peripatus build failures....

2018-07-06 Thread Larry Rosenman
On Thu, Jul 05, 2018 at 07:47:39AM -0500, Larry Rosenman wrote:
> On Wed, Jul 04, 2018 at 08:37:40PM -0500, Larry Rosenman wrote:
> > On Wed, Jul 04, 2018 at 08:19:48PM -0500, Larry Rosenman wrote:
> > > On Thu, Jul 05, 2018 at 12:56:49PM +1200, Thomas Munro wrote:
> > > > On Thu, Jul 5, 2018 at 12:35 PM, Larry Rosenman  wrote:
> > > > > I agree.  Is there an easy way I can add this work around to 
> > > > > peripatus'
> > > > > source tree:
> > > > >
> > > > > It may be that adding "LDFLAGS+= -Wl,-z,notext" (and removing 
> > > > > LLD_UNSAFE) will let the port build with lld.
> > > > 
> > > > Maybe something like this at the end of your build-farm.conf?
> > > > 
> > > > if ($branch =~ /^REL(9|_10)/)
> > > > {
> > > > $conf{config_env}->{"LDFLAGS"} = "...something something...";
> > > > }
> > > > 
> > > 
> > > Good news.  I ran a quick build test on my checked out FreeBSD ports
> > > tree and with Ed Maste's suggestion, it builds. 
> > > 
> > > Ed's suggestion:
> > > remove LLD_UNSAFE, and add to the LDFLAGS+= line in the port
> > > -Wl,-z,notext.
> > > 
> > > So, that is indeed a fix for us.  My question is:
> > > how to add this LDFLAG for FreeBSD >= 12 and PostgreSQL <= 11's configure 
> > > et al
> > > 
> > > I'm more than willing to try and generate a patch, but would like some
> > > guidance.  I'm also willing to give access to my box.
> > > 
> > > 
> > 
> > I also filed FreeBSD pr
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229523 to make the
> > change in the FreeBSD port.
> > 
> 
> I suspect HEAD and REL_11 are ok due to changes in the pgport source.
> (Someone with better git foo would have to check that).
> 
> 
anyone want to look at this, or at least give me a clue on how to add
this to 10 & below?


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: peripatus build failures....

2018-07-06 Thread Tom Lane
Larry Rosenman  writes:
> anyone want to look at this, or at least give me a clue on how to add
> this to 10 & below?

I do not like the "-Wl,-z,notext" thing at all.  It fails to explain
why things are working OK in v11/HEAD, which makes me think that it's
band-aiding something rather than really fixing it.

Perhaps you could spend a bit of time with git bisect and find out
which commit un-broke things?  That should help us narrow down the
true explanation.

regards, tom lane



Re: peripatus build failures....

2018-07-06 Thread Larry Rosenman
On Fri, Jul 06, 2018 at 06:40:49PM -0400, Tom Lane wrote:
> Larry Rosenman  writes:
> > anyone want to look at this, or at least give me a clue on how to add
> > this to 10 & below?
> 
> I do not like the "-Wl,-z,notext" thing at all.  It fails to explain
> why things are working OK in v11/HEAD, which makes me think that it's
> band-aiding something rather than really fixing it.
> 
> Perhaps you could spend a bit of time with git bisect and find out
> which commit un-broke things?  That should help us narrow down the
> true explanation.
> 
>   regards, tom lane

Following the advice in the error message, the following ALSO fixes it
(REL_10_STABLE):

borg.lerctr.org /home/ler/Git/postgresql/src/port $ git diff
diff --git a/src/port/Makefile b/src/port/Makefile
index 81f01b25bb..9ef00b3d54 100644
--- a/src/port/Makefile
+++ b/src/port/Makefile
@@ -28,6 +28,7 @@ top_builddir = ../..
 include $(top_builddir)/src/Makefile.global

 override CPPFLAGS := -I$(top_builddir)/src/port -DFRONTEND $(CPPFLAGS)
+override CFLAGS   := -fPIC
 LIBS += $(PTHREAD_LIBS)

 OBJS = $(LIBOBJS) $(PG_CRC32C_OBJS) chklocale.o erand48.o inet_net_ntop.o \
borg.lerctr.org /home/ler/Git/postgresql/src/port $

Is that more acceptable?
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: BUG #14999: pg_rewind corrupts control file global/pg_control

2018-07-06 Thread Michael Paquier
On Fri, Jul 06, 2018 at 03:45:46PM +0000, Andrew Dunstan wrote:
> I agree that this problem should just be documented, and no further
> fix done. The proposed commit message seems too long - I think
> it should mainly just refer to the mailing list discussion. Other
> than that it seems fine. 
> 
> The new status of this patch is: Ready for Committer

Thanks for the review, Andrew.  Committed as suggested.
--
Michael


signature.asc
Description: PGP signature


Re: peripatus build failures....

2018-07-06 Thread Larry Rosenman
On Fri, Jul 06, 2018 at 06:05:36PM -0500, Larry Rosenman wrote:
> On Fri, Jul 06, 2018 at 06:40:49PM -0400, Tom Lane wrote:
> > Larry Rosenman  writes:
> > > anyone want to look at this, or at least give me a clue on how to add
> > > this to 10 & below?
> > 
> > I do not like the "-Wl,-z,notext" thing at all.  It fails to explain
> > why things are working OK in v11/HEAD, which makes me think that it's
> > band-aiding something rather than really fixing it.
> > 
> > Perhaps you could spend a bit of time with git bisect and find out
> > which commit un-broke things?  That should help us narrow down the
> > true explanation.
> > 
> > regards, tom lane
> 
> Following the advice in the error message, the following ALSO fixes it
> (REL_10_STABLE):
> 
> borg.lerctr.org /home/ler/Git/postgresql/src/port $ git diff
> diff --git a/src/port/Makefile b/src/port/Makefile
> index 81f01b25bb..9ef00b3d54 100644
> --- a/src/port/Makefile
> +++ b/src/port/Makefile
> @@ -28,6 +28,7 @@ top_builddir = ../..
>  include $(top_builddir)/src/Makefile.global
> 
>  override CPPFLAGS := -I$(top_builddir)/src/port -DFRONTEND $(CPPFLAGS)
> +override CFLAGS   := -fPIC
>  LIBS += $(PTHREAD_LIBS)
> 
>  OBJS = $(LIBOBJS) $(PG_CRC32C_OBJS) chklocale.o erand48.o inet_net_ntop.o \
> borg.lerctr.org /home/ler/Git/postgresql/src/port $
> 
> Is that more acceptable?

Err, add a $(CFLAGS) to the end of the added line.


> -- 
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
> US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: peripatus build failures....

2018-07-06 Thread Tom Lane
Larry Rosenman  writes:
> On Fri, Jul 06, 2018 at 06:40:49PM -0400, Tom Lane wrote:
>> I do not like the "-Wl,-z,notext" thing at all.  It fails to explain
>> why things are working OK in v11/HEAD, which makes me think that it's
>> band-aiding something rather than really fixing it.

> Following the advice in the error message, the following ALSO fixes it
> (REL_10_STABLE):
> +override CFLAGS   := -fPIC

Yeah, I wondered about whether that wasn't a cleaner answer, but the same
problem remains: if we need that in src/port/, why don't all the branches
need it?  It would be unsurprising if we'd gained a requirement for -fPIC
somewhere along the line, but I don't entirely believe that we lost one.
So I'd still like to know when this changed.

regards, tom lane



Re: no partition pruning when partitioning using array type

2018-07-06 Thread Alvaro Herrera
On 2018-May-08, Amit Langote wrote:

> I would like to revisit this as a bug fix for get_partition_operator() to
> be applied to both PG 10 and HEAD.  In the former case, it fixes the bug
> that constraint exclusion code will fail to prune correctly when partition
> key is of array, enum, range, or record type due to the structural
> mismatch between the OpExpr that partitioning code generates and one that
> the parser generates for WHERE clauses involving partition key columns.

Interesting patchset.  Didn't read your previous v2, v3 versions; I only
checked your latest, v1 (???).

I'm wondering about the choice of OIDs in the new test.  I wonder if
it's possible to get ANYNONARRAY (or others) by way of having a
polymorphic function that passes its polymorphic argument in a qual.  I
suppose it won't do anything in v10, or will it?  Worth checking :-)
Why not use IsPolymorphicType?  Also, I think it'd be good to have tests
for all these cases (even in v10), just to make sure we don't break it
going forward.  At least the array case is clearly broken today ...
A test for the RECORDOID case would be particularly welcome, since it's
somehow different from the other cases.  (I didn't understand why you
removed the test in the latest version.)
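
For reference, the broken array case can be reproduced with a sketch like
this (hypothetical names):

CREATE TABLE arrpart (a int[]) PARTITION BY LIST (a);
CREATE TABLE arrpart1 PARTITION OF arrpart FOR VALUES IN ('{1}');
CREATE TABLE arrpart2 PARTITION OF arrpart FOR VALUES IN ('{2, 3}');
-- without the fix, the partition qual's OpExpr does not structurally match
-- the one the parser builds for this WHERE clause, so nothing is excluded:
EXPLAIN (COSTS OFF) SELECT * FROM arrpart WHERE a = '{1}'::int[];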

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: peripatus build failures....

2018-07-06 Thread Larry Rosenman
On Fri, Jul 06, 2018 at 07:18:46PM -0400, Tom Lane wrote:
> Larry Rosenman  writes:
> > On Fri, Jul 06, 2018 at 06:40:49PM -0400, Tom Lane wrote:
> >> I do not like the "-Wl,-z,notext" thing at all.  It fails to explain
> >> why things are working OK in v11/HEAD, which makes me think that it's
> >> band-aiding something rather than really fixing it.
> 
> > Following the advice in the error message, the following ALSO fixes it
> > (REL_10_STABLE):
> > +override CFLAGS   := -fPIC
> 
> Yeah, I wondered about whether that wasn't a cleaner answer, but the same
> problem remains: if we need that in src/port/, why don't all the branches
> need it?  It would be unsurprising if we'd gained a requirement for -fPIC
> somewhere along the line, but I don't entirely believe that we lost one.
> So I'd still like to know when this changed.
> 
BROKE:0c62356cc8777961221a643fa77f62e1c7361085
GOOD: f044d71e331d77a0039cec0a11859b5a3c72bc95

f044d71e331d77a0039cec0a11859b5a3c72bc95 fixed it.

Here's the revs I tried:

BROKE:9d4649ca49416111aee2c84b7e4441a0b7aa2fac
GOOD: 3b8f6e75f3c8c6d192621f21624cc8cee04ec3cb
GOOD: 3a5e0a91bb324ad2b2b1a0623a3f2e37772b43fc
GOOD: 8989f52b1b0636969545e6c8f6c813bc563ebcf5
BROKE:0c62356cc8777961221a643fa77f62e1c7361085
GOOD: f044d71e331d77a0039cec0a11859b5a3c72bc95
>   regards, tom lane

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


signature.asc
Description: PGP signature


Re: Explain buffers wrong counter with parallel plans

2018-07-06 Thread Amit Kapila
On Sat, Jul 7, 2018 at 12:44 AM, Robert Haas  wrote:
> On Fri, Jul 6, 2018 at 9:44 AM, Amit Kapila  wrote:
>> I have tried this idea, but it doesn't completely solve the problem.
>> The problem is that nodes below LIMIT won't get a chance to accumulate
>> the stats as they won't be able to call InstrStopNode.
>
> I'm not sure I understand.  Why not?  I see that we'd need to insert
> an extra call to InstrStopNode() if we were stopping the node while it
> was running, because then InstrStartNode() would have already been
> done, but the corresponding call to InstrStopNode() would not have
> been done.  But I'm not sure how that would happen in this case.  Can
> you explain further?
>

Okay, let me try.  The code flow is that for each node we call
InstrStartNode() -> ExecProcNodeReal() -> InstrStopNode().  Now let's say
we have to execute the plan Limit -> Gather -> Parallel Seq Scan.  First,
for the Limit node, we call InstrStartNode() and ExecProcNodeReal(); then
for Gather we call InstrStartNode(), ExecProcNodeReal() and
InstrStopNode().  Now the Limit node decides that it needs to shut down
all the nodes (ExecShutdownNode), and after that it calls InstrStopNode()
for the Limit node.  So in this flow, after shutting down the nodes, the
Gather node never gets a chance to use the stats collected during
ExecShutdownNode.
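
The plan shape in question can be produced along these lines (a sketch;
the table name is hypothetical and the planner must choose a parallel
scan, so sizes and settings matter):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_tab WHERE val > 0 LIMIT 10;
--  Limit
--    ->  Gather
--          ->  Parallel Seq Scan on big_tab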

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Transition relations: correlating OLD TABLE and NEW TABLE

2018-07-06 Thread Brent Kerby
In a situation where we're using transition relations to capture changes
after UPDATE statements, i.e., using a trigger of the form

CREATE TRIGGER trigger_name AFTER UPDATE ON table_name
REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE procedure();

there seems to be a challenge in how to join `old_table` and `new_table` so
that the old and new version of each row can be matched up with each other.
Of course if the table has a primary key, then this can be used, but I'm
wondering how to handle this in the general case where a primary key might
not exist.

According to this blog (
http://databasedoings.blogspot.com/2017/07/cool-stuff-in-postgresql-10-transition.html)
it is possible to use ROW_NUMBER() OVER () to create a key to join the old
and new tables, but this depends on an implementation detail (or at least
an undocumented one?): that the rows will be returned in the same order for
the two tables. Is it correct that this will work under the existing
implementation?
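
For concreteness, a minimal sketch of that approach; audit_log and table t
are hypothetical, and the join leans on exactly that undocumented ordering
assumption:

CREATE TABLE audit_log (old_row t, new_row t);  -- t's row type as column type

CREATE FUNCTION log_update() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (old_row, new_row)
    SELECT o.r, n.r
    FROM (SELECT ROW_NUMBER() OVER () AS rn, ot AS r FROM old_table ot) o
    JOIN (SELECT ROW_NUMBER() OVER () AS rn, nt AS r FROM new_table nt) n
         USING (rn);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_audit AFTER UPDATE ON t
REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE log_update();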

If there's not a clean way of matching up the old and new versions, could
the transition relation mechanism be extended in order to make this
possible? Here's a couple ideas:

1) A special system column could be added to the two transition relations,
OLD TABLE, and NEW TABLE, providing a common value that could be used to
join corresponding rows; it could be a sequential value (like what
ROW_NUMBER() would generate), or it could be some other unique identifier
for the row that is convenient for implementation. But there's some
awkwardness in the fact that this special column name could clash with the
columns in the table (unless an existing reserved name is used). Also,
exposing a unique row identifier might restrict potential future
implementations.

2) Maybe a cleaner way would be to add a third kind of transition table,
say, a "CHANGE TABLE", which could be used as an alternative to "OLD TABLE"
and "NEW TABLE". A change table could contain just two columns, say
'old_row' and 'new_row', each of which have the appropriate record type. In
this way, the old table and new table are essentially "pre-joined" in the
transition table.

Would this be workable? Or is there some other way of achieving this?

- Brent Kerby


Re: Transition relations: correlating OLD TABLE and NEW TABLE

2018-07-06 Thread David G. Johnston
On Friday, July 6, 2018, Brent Kerby  wrote:

> Of course if the table has a primary key, then this can be used, but I'm
> wondering how to handle this in the general case where a primary key might
> not exist.
>

Personally, I would consider the lack of a PK a rare and special case... I'd
handle the proposed situation by adding a bigserial column to the table.

David J.
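
A sketch of that workaround (hypothetical table name):

ALTER TABLE t ADD COLUMN audit_id bigserial;  -- unique value for every row
ALTER TABLE t ADD PRIMARY KEY (audit_id);     -- optionally promote it to the PK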


Re: Explain buffers wrong counter with parallel plans

2018-07-06 Thread Amit Kapila
On Sat, Jul 7, 2018 at 7:45 AM, Amit Kapila  wrote:
> On Sat, Jul 7, 2018 at 12:44 AM, Robert Haas  wrote:
>> On Fri, Jul 6, 2018 at 9:44 AM, Amit Kapila  wrote:
>>> I have tried this idea, but it doesn't completely solve the problem.
>>> The problem is that nodes below LIMIT won't get a chance to accumulate
>>> the stats as they won't be able to call InstrStopNode.
>>
>> I'm not sure I understand.  Why not?  I see that we'd need to insert
>> an extra call to InstrStopNode() if we were stopping the node while it
>> was running, because then InstrStartNode() would have already been
>> done, but the corresponding call to InstrStopNode() would not have
>> been done.  But I'm not sure how that would happen in this case.  Can
>> you explain further?
>>
>
> Okay, let me try.  The code flow is that for each node we call
> InstrStartNode() -> ExecProcNodeReal() -> InstrStopNode().  Now let's say
> we have to execute the plan Limit -> Gather -> Parallel Seq Scan.  First,
> for the Limit node, we call InstrStartNode() and ExecProcNodeReal(); then
> for Gather we call InstrStartNode(), ExecProcNodeReal() and
> InstrStopNode().  Now the Limit node decides that it needs to shut down
> all the nodes (ExecShutdownNode), and after that it calls InstrStopNode()
> for the Limit node.  So in this flow, after shutting down the nodes, the
> Gather node never gets a chance to use the stats collected during
> ExecShutdownNode.
>

I went ahead and tried the solution which I had mentioned yesterday,
that is to allow ExecShutdownNode to count stats.  Apart from fixing
this problem, it will also fix the problem with Gather Merge as
reported by Adrien [1], because now Gather Merge will also get a
chance to count stats after shutting down workers.

Note that, I have changed the location of InstrStartParallelQuery in
ParallelQueryMain so that the buffer usage stats are accumulated only
for the plan execution which is what we do for instrumentation
information as well.  If we don't do that, it will count some
additional stats for ExecutorStart which won't match with what we have
in Instrumentation structure of each node.

[1] - 
https://www.postgresql.org/message-id/01952aab-33ca-36cd-e74b-ce32f3eefc84%40anayrat.info

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fix_gather_stats_v1.patch
Description: Binary data


Re: EXPLAIN of Parallel Append

2018-07-06 Thread Amit Kapila
On Wed, Mar 14, 2018 at 8:58 PM, Jesper Pedersen
 wrote:
> Hi,
>
> Given
>
> -- test.sql --
> CREATE TABLE t1 (
> a integer NOT NULL,
> b integer NOT NULL
> ) PARTITION BY HASH (b);
> CREATE TABLE t1_p00 PARTITION OF t1 FOR VALUES WITH (MODULUS 4, REMAINDER
> 0);
> CREATE TABLE t1_p01 PARTITION OF t1 FOR VALUES WITH (MODULUS 4, REMAINDER
> 1);
> CREATE TABLE t1_p02 PARTITION OF t1 FOR VALUES WITH (MODULUS 4, REMAINDER
> 2);
> CREATE TABLE t1_p03 PARTITION OF t1 FOR VALUES WITH (MODULUS 4, REMAINDER
> 3);
> INSERT INTO t1 (SELECT i, i FROM generate_series(1, 100) AS i);
> ANALYZE;
> -- test.sql --
>
> Running
>
> EXPLAIN (ANALYZE) SELECT * FROM t1 WHERE a = 5432;
>
> gives
>
>  Gather  (cost=1000.00..12780.36 rows=4 width=8) (actual time=61.270..61.309
> rows=1 loops=1)
>Workers Planned: 2
>Workers Launched: 2
>->  Parallel Append  (cost=0.00..11779.96 rows=4 width=8) (actual
> time=38.915..57.209 rows=0 loops=3)
..
..
> (18 rows)
>
> Parallel Append's ntuples is 1, but given nloops is 3 you end up with the
> slightly confusing "(actual ... *rows=0* loops=3)".
>

The number of rows displayed is total_rows / loops, which is why you are
seeing these numbers: 1 row over 3 loops rounds down to 0.  This behavior
is the same for all parallel nodes; it is nothing specific to Parallel
Append.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



How to remove elements from array .

2018-07-06 Thread Brahmam Eswar
Hi ,

I tried to use array_remove to remove elements from an array but it's
saying function doesn't exist . I'm able to use other array functions.

1) Capture the results with multiple columns into array .
2)  if ay results exist then loop through an array to find out the record
with col1='Y'
3)  If col1='Y' then get the respective value of Col2 (10) and delete the
similar records of col2 if exist.

Col1Col2
 Y 10
 N 20
N  10

Need to delete record1 and record3.To delete the array records i'm using
array_remove but it says doesn't exist.

Version pgadmin4 .






Snippet :-

CREATE or REPLACE FUNCTION FUNC1
(
<< List of elements >>
) AS $$

DECLARE

TEST_CODES record1 ARRAY;
 TEMP_REF_VALUE VARCHAR(4000);

BEGIN
IS_VALID := 'S';

  SELECT ARRAY
   (SELECT ROW(Col1,Col2,COl3,Col4) ::record1
FROM table1  INTO TEST_CODES
IF array_length(TEST_CODES, 1) > 0 THEN
FOR indx IN array_lower(TEST_CODES, 1)..array_upper(TEST_CODES, 1) LOOP
 IF TEST_CODES[indx].COL1 = 'Y' THEN
TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
TEST_CODES := array_remove(TEST_CODES,TEMP_REF_VALUE);
END IF;
   END Loop;
END IF;


-- 
Thanks & Regards,
Brahmeswara Rao J.


Re: pgsql: Fix "base" snapshot handling in logical decoding

2018-07-06 Thread Arseny Sher


Alvaro Herrera  writes:

> I just don't see it that VACUUM FULL would change the xmin of anything
> to FrozenXid, and in my experiments it doesn't.  Did you mean VACUUM
> FREEZE?

Well, docs for VACUUM say:

FREEZE

Selects aggressive “freezing” of tuples. Specifying FREEZE is
equivalent to performing VACUUM with the vacuum_freeze_min_age and
vacuum_freeze_table_age parameters set to zero. Aggressive freezing
is always performed when the table is rewritten, so this option is
redundant when FULL is specified.

So this is literally the same. rewrite_heap_tuple does the job.


> Thanks for the detective work!  I pushed this test change.

Thank you, I appreciate this.

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



small doc fix - using expressions in plpgsql FETCH command

2018-07-06 Thread Pavel Stehule
Hi

The PL/pgSQL FETCH documentation refers to the SQL FETCH command. SQL FETCH
allows only integer constants as the count, whereas PL/pgSQL allows any
expression. The documentation is not clear on this point, and people can get
confused and assume the SQL FETCH limits apply to PL/pgSQL FETCH as well.

https://stackoverflow.com/questions/51129336/pl-pgsql-move-with-variable-value/51169438?noredirect=1#comment89349088_51169438

I propose a small enhancement:

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
index 5b2aac618e..b65cb11d00 100644
--- a/doc/src/sgml/plpgsql.sgml
+++ b/doc/src/sgml/plpgsql.sgml
@@ -3250,7 +3250,8 @@ MOVE [ direction { FROM | IN } ] cursor;
  as specifying NEXT.
  direction values that require moving
  backward are likely to fail unless the cursor was declared or opened
- with the SCROLL option.
+ with the SCROLL option. The count
+ can be any expression with an integer result, not only an integer constant.
 

 

Options, notes?
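
As a sanity check, here is a minimal sketch showing that the count can
indeed be an expression in PL/pgSQL (the SQL-level FETCH/MOVE would
reject this):

do $$
declare
  c cursor for select g from generate_series(1, 100) g;
  n int := 5;
  v int;
begin
  open c;
  move forward n * 2 from c;  -- count given by an expression
  fetch c into v;
  raise notice 'v = %', v;    -- prints 11
end;
$$;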

Regards

Pavel


RE: automatic restore point

2018-07-06 Thread Yotsunaga, Naoki
>-Original Message-
>From: Michael Paquier [mailto:mich...@paquier.xyz] 
>Sent: Tuesday, July 3, 2018 10:22 AM

>This kind of thing is heavily application-dependent.  For example, you would
>likely not care if an operator, who has newly-joined the team in charge of
>the maintenance of this data, drops unfortunately a table which includes logs
>from 10 years back, and you would very likely care about a table dropped
>which has user's login data.  My point is that you need to carefully design
>the shape of the configuration you would use, so as any application's admin
>would be able to cope with it, for example allowing exclusion filters with
>regular expressions could be a good idea to dig into.  And also you need to
>think about it so as it is backward compatible.

Thanks for comments.

Does that mean that which tables matter depends on the application (user)?
For example, suppose there are two tables, A and B. It is OK even if table A
disappears, but it is a problem if table B disappears. So when table B is
dropped, the automatic restore point should work, while for table A it should
not.
So it is difficult to implement an automatic restore point in PostgreSQL by
default.
Is my interpretation right?

---
Naoki Yotsunaga




RE: automatic restore point

2018-07-06 Thread Yotsunaga, Naoki
>-Original Message-
>From: Jaime Casanova [mailto:jaime.casan...@2ndquadrant.com] 
>Sent: Tuesday, July 3, 2018 11:06 AM

>Thinking on Michael's suggestion of using event triggers, you can create an
>event trigger to run pg_create_restore_point() on DROP, here's a simple
>example of how that should look:
>https://www.postgresql.org/docs/current/static/functions-event-triggers.html
>
>You can also create a normal trigger BEFORE TRUNCATE to create a restore
>point just before running the TRUNCATE command.

Thanks for comments.
I was able to understand.
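
For the archives, here is a minimal sketch of the event trigger approach
Jaime describes (the function name and the DROP TABLE filter are
illustrative only, not taken from any patch):

CREATE FUNCTION make_restore_point_on_drop() RETURNS event_trigger AS $$
BEGIN
    -- restore point labels must stay under 64 characters
    PERFORM pg_create_restore_point('before_drop_' ||
                                    to_char(now(), 'YYYYMMDDHH24MISS'));
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER restore_point_on_drop
    ON ddl_command_start
    WHEN TAG IN ('DROP TABLE')
    EXECUTE PROCEDURE make_restore_point_on_drop();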

---
Naoki Yotsunaga


Re: How to remove elements from array .

2018-07-06 Thread Pavel Stehule
Hi

2018-07-06 9:49 GMT+02:00 Brahmam Eswar :

> Hi ,
>
> I tried to use array_remove to remove elements from an array but it's
> saying function doesn't exist . I'm able to use other array functions.
>
> 1) Capture the results with multiple columns into array .
> 2)  if ay results exist then loop through an array to find out the record
> with col1='Y'
> 3)  If col1='Y' then get the respective value of Col2 (10) and delete the
> similar records of col2 if exist.
>
> Col1Col2
>  Y 10
>  N 20
> N  10
>
> Need to delete record1 and record3.To delete the array records i'm using
> array_remove but it says doesn't exist.
>
> Version pgadmin4 .
>
>
>
>
In this case, unnesting can be a solution:

postgres=# select * from foo;
+++
| c1 | c2 |
+++
| t  | 10 |
| f  | 20 |
| f  | 20 |
+++
(3 rows)

postgres=# do $$
declare a foo[] default array(select foo from foo);
begin
  a := array(select (c1,c2)::foo from unnest(a) g(c1,c2) where g.c1 = true);
  raise notice 'a=%', a;
end;
$$;
NOTICE:  a={"(t,10)"}
DO
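
Note that array_remove() has existed since PostgreSQL 9.3 (pgAdmin4 is just
a client; the server version is what matters). The "function doesn't exist"
error most likely comes from the argument types: TEST_CODES is a record1[]
while TEMP_REF_VALUE is a varchar, and there is no
array_remove(record1[], character varying). With matching types it resolves
fine:

postgres=# select array_remove(array[1,2,1], 1);
 array_remove
--------------
 {2}
(1 row)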

Regards

Pavel



>
>
>
> Snippet :-
>
> CREATE or REPLACE FUNCTION FUNC1
> (
> << List of elements >>
> ) AS $$
>
> DECLARE
>
> TEST_CODES record1 ARRAY;
>  TEMP_REF_VALUE VARCHAR(4000);
>
> BEGIN
> IS_VALID := 'S';
>
>   SELECT ARRAY
>(SELECT ROW(Col1,Col2,COl3,Col4) ::record1
> FROM table1  INTO TEST_CODES
> IF array_length(TEST_CODES, 1) > 0 THEN
> FOR indx IN array_lower(TEST_CODES, 1)..array_upper(TEST_CODES, 1) LOOP
>  IF TEST_CODES[indx].COL1 = 'Y' THEN
> TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
> TEST_CODES := array_remove(TEST_CODES,TEMP_REF_VALUE);
> END IF;
>END Loop;
> END IF;
>
>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: Fix to not check included columns in ANALYZE on indexes

2018-07-06 Thread Yugo Nagata
On Sat, 30 Jun 2018 14:13:49 -0400
Tom Lane  wrote:

> Peter Geoghegan  writes:
> > I think that the argument Tom is making is that it might be useful to
> > have statistics on the expression regardless of this -- the expression
> > may be interesting in some general sense. For example, one can imagine
> > the planner creating a plan with a hash aggregate rather than a group
> > aggregate, but only when statistics on an expression are available,
> > somehow.
> 
> Right.  For instance, "select sum(x) from ... group by y+z" is only
> suitable for hash aggregation if we can predict that there's a fairly
> small number of distinct values of y+z.  This makes it useful to have
> stats on the expression y+z, independently of whether any related index
> actually gets used in the plan.

Thank you for your explanation. I understand the usefulness of statistics
on non-key expression attributes, and that "CREATE INDEX ... INCLUDE" might
be a means to collect statistics on "non-key" expressions in the future.

> 
>   regards, tom lane
> 


-- 
Yugo Nagata 



Typo in Japanese translation of psql.

2018-07-06 Thread Taiki Kondo
Hi all,

I found a typo in the Japanese translation of psql.

Please find attached.


Sincerely,

--
Taiki Kondo
NEC Solution Innovators, Ltd.
diff --git a/src/bin/psql/po/ja.po b/src/bin/psql/po/ja.po
index 34bd8a4e25..fe8dc3804f 100644
--- a/src/bin/psql/po/ja.po
+++ b/src/bin/psql/po/ja.po
@@ -1512,12 +1512,12 @@ msgstr "継承元"
 #: describe.c:2896
 #, c-format
 msgid "Number of child tables: %d (Use \\d+ to list them.)"
-msgstr "子テーブル数: %d (\\+d で一覧を表示)"
+msgstr "子テーブル数: %d (\\d+ で一覧を表示)"
 
 #: describe.c:2898
 #, c-format
 msgid "Number of partitions: %d (Use \\d+ to list them.)"
-msgstr "パーティション数: %d (\\+d で一覧を表示)。"
+msgstr "パーティション数: %d (\\d+ で一覧を表示)。"
 
 #: describe.c:2906
 msgid "Child tables"


Re: Fix to not check included columns in ANALYZE on indexes

2018-07-06 Thread Yugo Nagata
On Fri, 29 Jun 2018 17:31:51 +0300
Teodor Sigaev  wrote:

> > AFAICS, we'd just have to revert this patch later, so I don't see
> > much value in it.
> True, I suppose we should apply this patch just for consistency, because we 
> don't allow expression in included columns.

Yes, this is what I intended in my patch, but I don't insist on it if there
is a reason to leave the code as it is, since the current code is almost
harmless.

Thanks,


-- 
Yugo Nagata 



Re: Typo in Japanese translation of psql.

2018-07-06 Thread Yugo Nagata
On Fri, 6 Jul 2018 08:33:56 +
Taiki Kondo  wrote:

> Hi all,
> 
> I found typo in Japanese translation of psql.

Good catch!

However, I think you have to submit the whole .po file to the Patch
Tracker [1] instead of a patch, according to the wiki [2].

[1] https://redmine.postgresql.org/projects/pgtranslation
[2] https://wiki.postgresql.org/wiki/NLS

Regards,

> 
> Please find attached.
> 
> 
> Sincerely,
> 
> --
> Taiki Kondo
> NEC Solution Innovators, Ltd.


-- 
Yugo Nagata 



RE: Speeding up INSERTs and UPDATEs to partitioned tables

2018-07-06 Thread Kato, Sho
Thanks David!

I ran a benchmark with pgbench and saw a speedup in the INSERT/UPDATE
scenarios. I used range partitioning.

Benchmark results are as follows.

1. 11beta2 result

 part_num |   tps_ex   | latency_avg | update_latency | select_latency | insert_latency
----------+------------+-------------+----------------+----------------+----------------
      100 | 479.456278 |       2.086 |          1.382 |          0.365 |          0.168
      200 | 169.155411 |       5.912 |          4.628 |          0.737 |          0.299
      400 |  24.857495 |       40.23 |         36.606 |          2.252 |          0.881
      800 |   6.718104 |     148.853 |        141.471 |          5.253 |          1.433
     1600 |   1.934908 |     516.825 |        489.982 |         21.102 |          3.701
     3200 |   0.456967 |    2188.362 |       2101.247 |         72.784 |          8.833
     6400 |   0.116643 |    8573.224 |        8286.79 |        257.904 |         14.949


2. 11beta2 + patch1 + patch2

patch1: Allow direct lookups of AppendRelInfo by child relid
commit 7d872c91a3f9d49b56117557cdbb0c3d4c620687
patch2: 0001-Speed-up-INSERT-and-UPDATE-on-partitioned-tables.patch

 part_num |   tps_ex    | latency_avg | update_latency | select_latency | insert_latency
----------+-------------+-------------+----------------+----------------+----------------
      100 | 1224.430344 |       0.817 |          0.551 |          0.085 |          0.048
      200 |  689.567511 |        1.45 |           1.12 |          0.119 |           0.05
      400 |  347.876616 |       2.875 |          2.419 |          0.185 |          0.052
      800 |  140.489269 |       7.118 |          6.393 |          0.329 |          0.059
     1600 |   29.681672 |      33.691 |         31.272 |          1.517 |          0.147
     3200 |    7.021957 |     142.412 |          136.4 |          4.033 |          0.214
     6400 |    1.462949 |     683.557 |        669.187 |          7.677 |          0.264


benchmark script:

\set aid random(1, 100 * 1)
\set delta random(-5000, 5000)
BEGIN;
UPDATE test.accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM test.accounts WHERE aid = :aid;
INSERT INTO test.accounts_history (aid, delta, mtime) VALUES (:aid, :delta, 
CURRENT_TIMESTAMP);
END;

partition key is aid.
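
For reference, the tables were set up along these lines -- a sketch only;
the exact DDL and partition bounds here are mine for illustration, with the
aid range split evenly across part_num range partitions:

CREATE SCHEMA test;
CREATE TABLE test.accounts (aid int, abalance int)
    PARTITION BY RANGE (aid);
CREATE TABLE test.accounts_history (aid int, delta int, mtime timestamptz)
    PARTITION BY RANGE (aid);
-- e.g. for part_num = 100, one partition per 10000 aids:
CREATE TABLE test.accounts_p1 PARTITION OF test.accounts
    FOR VALUES FROM (1) TO (10001);
CREATE TABLE test.accounts_history_p1 PARTITION OF test.accounts_history
    FOR VALUES FROM (1) TO (10001);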

-Original Message-
From: David Rowley [mailto:david.row...@2ndquadrant.com] 
Sent: Thursday, July 05, 2018 6:19 PM
To: Kato, Sho/加藤 翔 
Cc: PostgreSQL Hackers 
Subject: Re: Speeding up INSERTs and UPDATEs to partitioned tables

On 5 July 2018 at 18:39, Kato, Sho  wrote:
> postgres=# create table a(i int) partition by range(i); CREATE TABLE 
> postgres=# create table a_1 partition of a for values from(1) to 
> (200); CREATE TABLE postgres=# create table a_2 partition of a for 
> values from(200) to (400); server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

Hi,

Thanks for testing. I'm unable to reproduce this on beta2 or master as of 
f61988d16.

Did you try make clean then building again?  The 0001 patch does change 
PartitionDescData, so if you've not rebuilt all .c files which use that then 
that might explain your crash.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-07-06 Thread Etsuro Fujita

(2018/07/04 21:37), Ashutosh Bapat wrote:

On Wed, Jul 4, 2018 at 5:36 PM, Etsuro Fujita
  wrote:

(2018/07/04 19:04), Ashutosh Bapat wrote:

On Fri, Jun 29, 2018 at 6:21 PM, Etsuro Fujita
   wrote:

(2018/06/22 22:54), Ashutosh Bapat wrote:

+   if (enable_partitionwise_join && rel->top_parent_is_partitioned)
+   {
+       build_childrel_tlist(root, rel, childrel, 1, &appinfo);
+   }

Why do we need rel->top_parent_is_partitioned? If a relation is
partitioned (if (rel->part_scheme), it's either the top parent or is
partition of some other partitioned table. In either case this
condition will be true.



This would be needed to avoid unnecessarily applying build_childrel_tlist
to
child rels of a partitioned table for which we don't consider
partitionwise
join.  Consider:

postgres=# create table lpt (c1 int, c2 text) partition by list (c1);
CREATE TABLE
postgres=# create table lpt_p1 partition of lpt for values in (1);
CREATE TABLE
postgres=# create table lpt_p2 (c1 int check (c1 in (2)), c2 text);
CREATE TABLE
postgres=# create table test (c1 int, c2 text);
CREATE TABLE
postgres=# explain verbose select * from (select * from lpt union all
select
* from lpt_p2) ss(c1, c2) inner join test on (ss.c1 = test.c1);



I might misunderstand your words, but in the above example the patch doesn't
apply build_childrel_tlist to lpt_p1 and lpt_p2.  The reason for that is
because we can avoid adjusting the tlists for the corresponding subplans at
plan creation time so that whole-row Vars in the tlists are transformed into
CREs.  I think the overhead of the adjustment is not that big, but not zero,
so it would be worth avoiding applying build_childrel_tlist to partitions if
the top parent won't participate in a partitionwise-join at all.


I don't think that answers my question. When we join lpt with test,
your patch will apply build_childrel_tlist() to lpt_p1 and lpt_p2 even
when join between lpt and test is not going to use partition-wise
join. Why?


Maybe my explanation including the example was not good.  Sorry about 
that, but my patch will *not* apply build_childrel_tlist to lpt_p1 and 
lpt_p2 since the top parent of lpt_p1 and lpt_p2 is the UNION ALL 
subquery and hence not a partitioned table (ie, we have 
rel->top_parent_is_partitioned=false for lpt_p1 and lpt_p2).



As per your explanation, the condition "if
(enable_partitionwise_join && rel->top_parent_is_partitioned)" is
used to avoid applying build_childrel_tlist() when partition-wise join
won't be possible. But it's not covering all the cases.


Let me explain about that: 1) my patch won't apply that function to a 
child if its top parent is an appendrel built from a UNION ALL subquery, 
even though the child is a partition of a partitioned table pulled up 
from a leaf subquery into the parent query, like lpt_p1, and 2) my patch 
will apply that function to a child if its top parent is a partitioned 
table, whether or not the partitioned table is involved in a 
partitionwise join.  By #1, we avoid the adjustment work at plan 
creation time, as explained above.  It might be worth trying to be 
smarter about #2 (for example, in the case of a join of a partitioned 
table and a non-partitioned table, since we don't consider a 
partitionwise join for that join, it's better to not apply that function 
to partitions of the partitioned table, to avoid the adjustment work at 
plan creation time), but ISTM we don't have enough information to be 
smarter.



An
in-between state will produce a hell lot of confusion for any further
optimization. Whenever we change the code around partition-wise
operations in future, we will have to check whether or not a given
child rel has its whole-row Var embedded in ConvertRowtypeExpr. As I
have mentioned earlier, I am also not comfortable with the targetlist
of child relations being type inconsistent with that of the parent,
which is a fundamental rule in inheritance. Worst keep them
inconsistent during path creation and make them consistent at the time
of creating plans. A child's whole-row Var has datatype of the child
where as that of parent has datatype of parent.



I don't see any critical issue here.  Could you elaborate a bit more on that
point?


I think breaking a fundamental rule like this itself is critical. But
interestingly I am not able to find a case where it becomes a problem.
But may be I haven't tried enough. And the question is if it's not
required to have the targetlists type consistent, why even bother with
ConvertRowtypeExpr addition there? We can use your approach of adding
ConvertRowtypeExpr at the end in all the cases.


I think that the tlist of a (not-the-topmost) child relation doesn't 
need to be type-consistent with that of the parent; it has only to 
include all Vars that are needed for higher joinquals and final output 
to the parent appendrel.  (In other words, I think we can build the 
tlist in the same manner as we build tlists of base or join relations in 
the main join tree.)

Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Andrew Dunstan
On Thu, Jun 28, 2018 at 1:39 AM, Michael Paquier  wrote:
> On Thu, Jan 11, 2018 at 10:35:22PM -0500, Stephen Frost wrote:
>> Magnus, this was your thread to begin with, though I know others have
>> been involved, any chance you'll be able to review this for commit
>> during this CF?  I agree that this is certainly a good thing to have
>> too, though I've not looked at the patch itself in depth.  Is there
>> anything we can do to help move it along?
>
> As an effort to move on with bug items in the commit fest, attached are
> two patches with a proposed commit message as well as polished comments.
> Those are proposed for back-patching.  The 2PC issue is particularly
> bad in my opinion because having any 2PC file on-disk and corrupted
> means that a transaction is lost.  I have been playing a bit with
> hexedit and changed a couple of bytes in one of them...  If trying to
> use a base backup which includes one, then the standby reading it would
> be similarly confused.

Thanks to Michael for progressing this.

Back in August, nearly a year ago, Robert Haas said upthread:

> This bug fix has been pending in "Ready for Committer" state for about
> 4.5 months. Three committers (Magnus, Heikki, Tom) have contributed
> to the thread to date. Maybe one of them would like to commit this?

Although the state is now back to "Needs Review", I echo those
sentiments. This issue has now been hanging around for about 18
months.

cheers

andrew

-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] WIP: Aggregation push-down

2018-07-06 Thread Antonin Houska
Robert Haas  wrote:

> On Fri, Feb 23, 2018 at 11:08 AM, Antonin Houska  wrote:
> > I spent some more time thinking about this. What about adding a new strategy
> > number for hash index operator classes, e.g. HTBinaryEqualStrategyNumber? 
> > For
> > most types both HTEqualStrategyNumber and HTBinaryEqualStrategyNumber 
> > strategy
> > would point to the same operator, but types like numeric would naturally 
> > have
> > them different.
> >
> > Thus the pushed-down partial aggregation can only use the
> > HTBinaryEqualStrategyNumber's operator to compare grouping expressions. In 
> > the
> > initial version (until we have useful statistics for the binary values) we 
> > can
> > avoid the aggregation push-down if the grouping expression output type has 
> > the
> > two strategies implemented using different functions because, as you noted
> > upthread, grouping based on binary equality can result in excessive number 
> > of
> > groups.
> >
> > One open question is whether the binary equality operator needs a separate
> > operator class or not. If an opclass cares only about the binary equality, 
> > its
> > hash function(s) can be a lot simpler.
> 
> Hmm.  How about instead adding another regproc field to pg_type which
> stores the OID of a function that tests binary equality for that
> datatype?  If that happens to be equal to the OID you got from the
> opclass, then you're all set.

I suppose you mean pg_operator, not pg_type. What I don't like about this is
that the new field would only be useful for very little fraction of
operators.

On the other hand, the drawback of an additional operator class is that we'd
have to modify almost all the existing operator classes for the hash AM. (The
absence of the new strategy number in an operator class cannot mean that the
existing equality operator can be used to compare binary values too, because
thus we can't guarantee correct behavior of the already existing user-defined
operator classes.)
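
To make the numeric example concrete: two values can be equal under the
type's equality operator while being binary-distinct (numeric keeps the
display scale), which is exactly the case where the two strategies would
point to different operators. A quick illustration:

SELECT 0.0 = 0.00 AS logically_equal,
       0.0::text <> 0.00::text AS binary_distinct;

 logically_equal | binary_distinct
-----------------+-----------------
 t               | t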


-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Web: https://www.cybertec-postgresql.com



Re: [HACKERS] Crash on promotion when recovery.conf is renamed

2018-07-06 Thread Michael Paquier
Robert Haas wrote:
> Although the state is now back to "Needs Review", I echo those
> sentiments. This issue has now been hanging around for about 18
> months.

For what it's worth, I volunteer to finish the work :)

The 2PC patch is really simple, and fixes a data loss issue.  The second
patch has been looked at by Heikki, Magnus and me at least once each,
and there is visibly agreement on having it.  Reviews after a new patch
version is sent, by somebody other than the one who sent the patches,
are of course always nice...
--
Michael




Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-07-06 Thread Ashutosh Bapat
On Fri, Jul 6, 2018 at 4:29 PM, Etsuro Fujita
 wrote:
> (2018/07/04 21:37), Ashutosh Bapat wrote:
>>
>> On Wed, Jul 4, 2018 at 5:36 PM, Etsuro Fujita
>>   wrote:
>>>
>>> (2018/07/04 19:04), Ashutosh Bapat wrote:

 On Fri, Jun 29, 2018 at 6:21 PM, Etsuro Fujita
wrote:
>
> (2018/06/22 22:54), Ashutosh Bapat wrote:
>>
>> +   if (enable_partitionwise_join && rel->top_parent_is_partitioned)
>> +   {
>> +       build_childrel_tlist(root, rel, childrel, 1, &appinfo);
>> +   }
>>
>> Why do we need rel->top_parent_is_partitioned? If a relation is
>> partitioned (if (rel->part_scheme), it's either the top parent or is
>> partition of some other partitioned table. In either case this
>> condition will be true.
>
>
> This would be needed to avoid unnecessarily applying
> build_childrel_tlist
> to
> child rels of a partitioned table for which we don't consider
> partitionwise
> join.  Consider:
>
> postgres=# create table lpt (c1 int, c2 text) partition by list (c1);
> CREATE TABLE
> postgres=# create table lpt_p1 partition of lpt for values in (1);
> CREATE TABLE
> postgres=# create table lpt_p2 (c1 int check (c1 in (2)), c2 text);
> CREATE TABLE
> postgres=# create table test (c1 int, c2 text);
> CREATE TABLE
> postgres=# explain verbose select * from (select * from lpt union all
> select
> * from lpt_p2) ss(c1, c2) inner join test on (ss.c1 = test.c1);
>
>
>>> I might misunderstand your words, but in the above example the patch
>>> doesn't
>>> apply build_childrel_tlist to lpt_p1 and lpt_p2.  The reason for that is
>>> because we can avoid adjusting the tlists for the corresponding subplans
>>> at
>>> plan creation time so that whole-row Vars in the tlists are transformed
>>> into
>>> CREs.  I think the overhead of the adjustment is not that big, but not
>>> zero,
>>> so it would be worth avoiding applying build_childrel_tlist to partitions
>>> if
>>> the top parent won't participate in a partitionwise-join at all.
>>
>>
>> I don't think that answers my question. When we join lpt with test,
>> your patch will apply build_childrel_tlist() to lpt_p1 and lpt_p2 even
>> when join between lpt and test is not going to use partition-wise
>> join. Why?
>
>
> Maybe my explanation including the example was not good.  Sorry about that,
> but my patch will *not* apply build_childrel_tlist to lpt_p1 and lpt_p2
> since the top parent of lpt_p1 and lpt_p2 is the UNION ALL subquery and
> hence not a partitioned table (ie, we have
> rel->top_parent_is_partitioned=false for lpt_p1 and lpt_p2).
>
>> As per your explanation, the condition "if
>> (enable_partitionwise_join && rel->top_parent_is_partitioned)" is
>> used to avoid applying build_childrel_tlist() when partition-wise join
>> won't be possible. But it's not covering all the cases.
>
>
> Let me explain about that: 1) my patch won't apply that function to a child
> if its top parent is an appendrel built from a UNION ALL subquery, even
> though the child is a partition of a partitioned table pulled up from a leaf
> subquery into the parent query, like lpt_p1, and 2) my patch will apply that
> function to a child if its top parent is a partitioned table, whether or not
> the partitioned table is involved in a partitionwise join.  By #1, we avoid
> the adjustment work at plan creation time, as explained above.  It might be
> worth trying to be smarter about #2 (for example, in the case of a join of a
> partitioned table and a non-partitioned table, since we don't consider a
> partitionwise join for that join, it's better to not apply that function to
> partitions of the partitioned table, to avoid the adjustment work at plan
> creation time), but ISTM we don't have enough information to be smarter.

That looks like a kludge to me rather than a proper fix. It's not
clear to me when somebody can expect a ConvertRowtypeExpr in the
targetlist and when not, while creating paths and, to an extent, plans.
For example, inside add_paths_to_append_rel() or
apply_scanjoin_target_to_paths(), or for that matter any path or plan
creation function, we will sometimes get targetlists with
ConvertRowtypeExpr and sometimes not. How do we know which is which?


>
 A ConvertRowtypeExpr
 is used to fix this inconsistency. That's why I chose to use
 pull_var_clause() as a place to fix the problem and not fix
 ConvertRowtypeExpr in targetlist itself.
>>>
>>>
>>>
>>> I think the biggest issue in the original patch you proposed is that we
>>> spend extra cycles where partitioning is not involved, which is the
>>> biggest
>>> reason why I think the original patch isn't the right way to go.
>>
>>
>> When there are no partitions involved, there won't be any
>> ConvertRowtypeExprs there which means the function
>> is_converted_whole_row_reference() would just return from the first
>> line checking IsA() and nullness of node.

Re: log_min_messages shows debug instead of debug2

2018-07-06 Thread Andrew Dunstan




On 05/17/2018 08:43 PM, Ideriha, Takeshi wrote:

-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
OK, I'm happy enough to commit it then, barring other objections.  I was
just going to do that but then I realized we're in feature freeze right
now, so I suppose this should go into the next CommitFest.

Thank you for your discussion.
Sure, I registered it for the next CommitFest with 'Ready for Committer'.

https://commitfest.postgresql.org/18/1638/



Committed.

cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: PATCH: Update snowball stemmers

2018-07-06 Thread Andrew Dunstan




On 06/26/2018 08:20 AM, Arthur Zakirov wrote:

Hello hackers,

I'd like to propose the patch which syncs PostgreSQL snowball stemmers.
As Tom pointed out [1], the stemmers haven't been synced for a very long time.

I copied all source files without changes, except replacing '#include
"../runtime/header.h"' with '#include "header.h"' and removing includes
of standard headers from utilities.c.

Hungarian language uses ISO-8859-1 and UTF-8 charsets in Postgres HEAD.
But in Snowball HEAD it is ISO-8859-2 per commit [2]. This patch changes
hungarian's charset from ISO-8859-1 to ISO-8859-2 too.

Additionally updated files in the patch are:
- utilities.c
- header.h

Will add to the next commitfest.

Any comments?

1 - https://www.postgresql.org/message-id/5689.1519054983%40sss.pgh.pa.us
2 - 
https://github.com/snowballstem/snowball/commit/4bcae97db044253ea2edae1dd3ca59f3cddd4b9d




I agree with Tom that we should sync with the upstream before we do 
anything else. This is a very large patch but with fairly limited 
impact. I think now at the start of a dev cycle is the right time to 
apply it.


I don't know if we have a buildfarm animal testing Hungarian. Maybe we 
need a buildfarm animal or two testing a large number of locales.


cheers

andrew

--
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: How can we submit code patches that implement our (pending) patents?

2018-07-06 Thread Dave Cramer
On 4 July 2018 at 21:15, Tsunakawa, Takayuki  wrote:

> From: Craig Ringer [mailto:cr...@2ndquadrant.com]
> > I'm assuming you don't want to offer a grant that lets anyone use them for
> > anything. But if you have a really broad grant to PostgreSQL, all someone
> > would have to do to inherit the grant is re-use some part of PostgreSQL.
>
> Your assumption is right.  No scope is the same as no patent; it won't
> help to defend PostgreSQL community against rival companies/communities of
> other DBMSs.  Or, I think we can set the scope to what OIN states.
> Fortunately, anyone can join OIN free of charge.
>
>
> > I guess there's a middle ground somewhere that protects substantial
> > derivatives and extracts but stops you using some Pg code snippets as a
> > freebie license.
>
> Are you assuming that developers want to use PG code snippets for
> non-PostgreSQL or even non-DBMS software?  I believe that accepting
> patented code from companies would be practically more useful for
> PostgreSQL enhancement and growth.  PostgreSQL is now a mature software,
> and it can be more corporate-friendly like other software under Apache
> License.
>
Certainly there is history of people using PG code for non-PostgreSQL or at
least commercial derivative work. Greenplum for example.



Dave Cramer

da...@postgresintl.com
www.postgresintl.com

