Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Tom Lane
Andrew Dunstan  writes:
> Here's a patch along those lines.

Looks sane in a quick once-over.  Note this needs to be back-patched.

regards, tom lane




Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Andrew Dunstan


On 05/14/2017 12:04 AM, Andrew Dunstan wrote:
>
> On 05/13/2017 11:32 PM, Robert Haas wrote:
>> On Sat, May 13, 2017 at 6:22 PM, Tom Lane  wrote:
>>> Or at least, that's what I surmise from the fact that buildfarm critter
>>> caiman has been failing that test for the last day or so, with symptoms
>>> indicating whitespace changes in Data::Dumper output.  Some poking into
>>> the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
>>> to 2.167 on May 11, so that fits ...
>> Depending on the precise details of how Data::Dumper prints things
>> doesn't seem like a particularly good idea.
>>
>
> I'd be inclined to set $Data::Dumper::Indent to 0, which would suppress
> all indentation, and adjust the test results accordingly. We already
> set $Data::Dumper::Sortkeys to 1, so there's precedent for controlling
> how it presents data back to us.
>


Here's a patch along those lines.

cheers

andrew


-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/contrib/hstore_plperl/expected/hstore_plperlu.out b/contrib/hstore_plperl/expected/hstore_plperlu.out
index b09fb78..d719d29 100644
--- a/contrib/hstore_plperl/expected/hstore_plperlu.out
+++ b/contrib/hstore_plperl/expected/hstore_plperlu.out
@@ -20,15 +20,12 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]));
 return scalar(keys %{$_[0]});
 $$;
 SELECT test1('aa=>bb, cc=>NULL'::hstore);
-INFO:  $VAR1 = {
-  'aa' => 'bb',
-  'cc' => undef
-};
-
+INFO:  $VAR1 = {'aa' => 'bb','cc' => undef};
  test1 
 ---
  2
@@ -39,12 +36,12 @@ LANGUAGE plperlu
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]));
 return scalar(keys %{$_[0]});
 $$;
 SELECT test1none('aa=>bb, cc=>NULL'::hstore);
 INFO:  $VAR1 = '"aa"=>"bb", "cc"=>NULL';
-
  test1none 
 ---
  0
@@ -56,15 +53,12 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]));
 return scalar(keys %{$_[0]});
 $$;
 SELECT test1list('aa=>bb, cc=>NULL'::hstore);
-INFO:  $VAR1 = {
-  'aa' => 'bb',
-  'cc' => undef
-};
-
+INFO:  $VAR1 = {'aa' => 'bb','cc' => undef};
  test1list 
 ---
  2
@@ -77,18 +71,12 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]->[0], $_[0]->[1]));
 return scalar(keys %{$_[0]});
 $$;
 SELECT test1arr(array['aa=>bb, cc=>NULL'::hstore, 'dd=>ee']);
-INFO:  $VAR1 = {
-  'aa' => 'bb',
-  'cc' => undef
-};
-$VAR2 = {
-  'dd' => 'ee'
-};
-
+INFO:  $VAR1 = {'aa' => 'bb','cc' => undef};$VAR2 = {'dd' => 'ee'};
  test1arr 
 --
 2
@@ -101,6 +89,7 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 
 $rv = spi_exec_query(q{SELECT 'aa=>bb, cc=>NULL'::hstore AS col1});
 elog(INFO, Dumper($rv->{rows}[0]->{col1}));
@@ -111,13 +100,8 @@ $rv = spi_exec_prepared($plan, {}, $val);
 elog(INFO, Dumper($rv->{rows}[0]->{col1}));
 $$;
 SELECT test3();
-INFO:  $VAR1 = {
-  'aa' => 'bb',
-  'cc' => undef
-};
-
+INFO:  $VAR1 = {'aa' => 'bb','cc' => undef};
 INFO:  $VAR1 = '"a"=>"1", "b"=>"boo", "c"=>NULL';
-
  test3 
 ---
  
@@ -138,6 +122,7 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_TD->{new}));
 if ($_TD->{new}{a} == 1) {
 $_TD->{new}{b} = {a => 1, b => 'boo', c => undef};
@@ -147,14 +132,7 @@ return "MODIFY";
 $$;
 CREATE TRIGGER test4 BEFORE UPDATE ON test1 FOR EACH ROW EXECUTE PROCEDURE test4();
 UPDATE test1 SET a = a;
-INFO:  $VAR1 = {
-  'a' => '1',
-  'b' => {
- 'aa' => 'bb',
- 'cc' => undef
-   }
-};
-
+INFO:  $VAR1 = {'a' => '1','b' => {'aa' => 'bb','cc' => undef}};
 SELECT * FROM test1;
  a |b
 ---+-
diff --git a/contrib/hstore_plperl/sql/hstore_plperlu.sql b/contrib/hstore_plperl/sql/hstore_plperlu.sql
index 8d8508c..c714b35 100644
--- a/contrib/hstore_plperl/sql/hstore_plperlu.sql
+++ b/contrib/hstore_plperl/sql/hstore_plperlu.sql
@@ -15,6 +15,7 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]));
 return scalar(keys %{$_[0]});
 $$;
@@ -26,6 +27,7 @@ LANGUAGE plperlu
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 elog(INFO, Dumper($_[0]));
 return scalar(keys %{$_[0]});
 $$;
@@ -38,6 +40,7 @@ TRANSFORM FOR TYPE hstore
 AS $$
 use Data::Dumper;
 $Data::Dumper::Sortkeys = 1;
+$Data::Dumper::Indent = 0;
 

Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Tom Lane
Andrew Dunstan  writes:
> On 05/13/2017 11:32 PM, Robert Haas wrote:
>> Depending on the precise details of how Data::Dumper prints things
>> doesn't seem like a particularly good idea.

> I'd be inclined to set $Data::Dumper::Indent to 0, which would suppress
> all indentation, and adjust the test results accordingly. We already
> set $Data::Dumper::Sortkeys to 1, so there's precedent for controlling
> how it presents data back to us.

Sounds plausible to me.

regards, tom lane




Re: [HACKERS] Hash Functions

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 11:47 PM, Andres Freund  wrote:
> It'll be differently sized on different platforms.  So everyone will have to 
> write hash functions that look at each member individually, rather than 
> hashing the entire struct at once.  And for each member you'll have to use a 
> type specific hash function...

Well, that's possibly kind of annoying, but it still sounds like
pretty mechanical work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Hash Functions

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 1:57 PM, Tom Lane  wrote:
> Basically, this is simply saying that you're willing to ignore the
> hard cases, which reduces the problem to one of documenting the
> portability limitations.  You might as well not even bother with
> worrying about the integer case, because porting between little-
> and big-endian systems is surely far less common than cases you've
> already said you're okay with blowing off.
>
> That's not an unreasonable position to take, perhaps; doing better
> than that is going to be a lot more work and it's not very clear
> how much real-world benefit results.  But I can't follow the point
> of worrying about endianness but not encoding.

Encoding is a user choice, not a property of the machine.  Or, looking
at it from another point of view, the set of values that can be
represented by an int4 is the same whether they are represented in
big-endian form or in little-endian form, but the set of values that
are representable changes when you switch encodings.  You could argue
that text-under-LATIN1 and text-under-UTF8 aren't really the same data
type at all.  It's one thing to say "you can pick up your data and
move it to a different piece of hardware and nothing will break".
It's quite another thing to say "you can pick up your data and convert
it to a different encoding and nothing will break".  The latter is
generally false already.  Maybe LATIN1 -> UTF8 is no-fail, but what
about UTF8 -> LATIN1 or SJIS -> anything?  Based on previous mailing
list discussions, I'm under the impression that it is sometimes
debatable how a character in one encoding should be converted to some
other encoding, either because it's not clear whether there is a
mapping at all or it's unclear what mapping should be used.  See,
e.g., 2dbbf33f4a95cdcce66365bcdb47c885a8858d3c, or
https://www.postgresql.org/message-id/1739a900-30ab-f48e-aec4-2b35475ecf02%402ndquadrant.com
where it was discussed that being able to convert encoding A ->
encoding B does not guarantee the ability to perform the reverse
conversion.
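
A quick way to see the asymmetry is that some conversions simply have no
mapping and error out. A minimal sketch, assuming a UTF8 database (the
functions are the standard convert_to/convert):

-- U+20AC (the Euro sign) exists in UTF8 and LATIN9 but not in LATIN1,
-- so this conversion fails outright instead of yielding bytes to hash:
SELECT convert(convert_to('€', 'UTF8'), 'UTF8', 'LATIN1');
-- ERROR:  character with byte sequence 0xe2 0x82 0xac in encoding "UTF8"
--         has no equivalent in encoding "LATIN1"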

Arguing that a given int4 value should hash to the same value on every
platform seems like a request that is at least superficially
reasonable, if possibly practically tricky in some cases.  Arguing
that every currently supported encoding should hash every character
the same way when they don't all have the same set of characters and
the mappings between them are occasionally debatable is asking for the
impossible.  I certainly don't want to commit to a design for hash
partitioning that involves a compatibility break any time somebody
changes any encoding conversion in the system, even if a hash function
that involved translating every character to some sort of universal
code point before hashing it didn't seem likely to be horribly slow.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Andrew Dunstan


On 05/13/2017 11:32 PM, Robert Haas wrote:
> On Sat, May 13, 2017 at 6:22 PM, Tom Lane  wrote:
>> Or at least, that's what I surmise from the fact that buildfarm critter
>> caiman has been failing that test for the last day or so, with symptoms
>> indicating whitespace changes in Data::Dumper output.  Some poking into
>> the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
>> to 2.167 on May 11, so that fits ...
> Depending on the precise details of how Data::Dumper prints things
> doesn't seem like a particularly good idea.
>


I'd be inclined to set $Data::Dumper::Indent to 0, which would suppress
all indentation, and adjust the test results accordingly. We already
set $Data::Dumper::Sortkeys to 1, so there's precedent for controlling
how it presents data back to us.
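
For reference, a minimal sketch of the adjusted test, assembled from the
fragments in the patch elsewhere in this thread (the INFO line matches the
new expected output):

CREATE FUNCTION test1(val hstore) RETURNS int
LANGUAGE plperlu
TRANSFORM FOR TYPE hstore
AS $$
use Data::Dumper;
$Data::Dumper::Sortkeys = 1;  # deterministic key order (already set)
$Data::Dumper::Indent = 0;    # suppress all indentation/line breaks
elog(INFO, Dumper($_[0]));
return scalar(keys %{$_[0]});
$$;

SELECT test1('aa=>bb, cc=>NULL'::hstore);
-- INFO:  $VAR1 = {'aa' => 'bb','cc' => undef};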

cheers

andrew



-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Andrew Dunstan


On 05/13/2017 11:43 PM, Craig Ringer wrote:
>
>
> On 14 May 2017 11:33, "Robert Haas" wrote:
>
> On Sat, May 13, 2017 at 6:22 PM, Tom Lane wrote:
> > Or at least, that's what I surmise from the fact that buildfarm critter
> > caiman has been failing that test for the last day or so, with symptoms
> > indicating whitespace changes in Data::Dumper output.  Some poking into
> > the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
> > to 2.167 on May 11, so that fits ...
>
> Depending on the precise details of how Data::Dumper prints things
> doesn't seem like a particularly good idea.
>
>
> Surely Test::More::is_deeply is the way to go here.

This isn't a TAP test. It's a standard pg_regress test.

cheers

andrew

-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Hash Functions

2017-05-13 Thread Andres Freund


On May 13, 2017 8:44:22 PM PDT, Robert Haas  wrote:
>On Sat, May 13, 2017 at 7:08 PM, Andres Freund  wrote:
>> I seriously doubt that's true.  A lot of more complex types have
>> internal alignment padding and such.
>
>True, but I believe we require those padding bytes to be zero.  If we
>didn't, then hstore_hash would be broken already.

It'll be differently sized on different platforms.  So everyone will have to 
write hash functions that look at each member individually, rather than hashing 
the entire struct at once.  And for each member you'll have to use a type 
specific hash function...

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [HACKERS] Hash Functions

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 7:08 PM, Andres Freund  wrote:
> I seriously doubt that's true.  A lot of more complex types have
> internal alignment padding and such.

True, but I believe we require those padding bytes to be zero.  If we
didn't, then hstore_hash would be broken already.

> Consider e.g. something like
> jsonb, hstore, or postgis types - you *can* convert them to something
> that's unambiguous, but it's going to be fairly expensive.

I'm fuzzy on what you think we'd need to do.

> Essentially
> you'd have to do something like calling the output function, and then
> hashing the result of that.

I really don't see why we'd have to go to nearly that length.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Craig Ringer
On 14 May 2017 11:33, "Robert Haas"  wrote:

On Sat, May 13, 2017 at 6:22 PM, Tom Lane  wrote:
> Or at least, that's what I surmise from the fact that buildfarm critter
> caiman has been failing that test for the last day or so, with symptoms
> indicating whitespace changes in Data::Dumper output.  Some poking into
> the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
> to 2.167 on May 11, so that fits ...

Depending on the precise details of how Data::Dumper prints things
doesn't seem like a particularly good idea.


Surely Test::More::is_deeply is the way to go here.


Re: [HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 6:22 PM, Tom Lane  wrote:
> Or at least, that's what I surmise from the fact that buildfarm critter
> caiman has been failing that test for the last day or so, with symptoms
> indicating whitespace changes in Data::Dumper output.  Some poking into
> the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
> to 2.167 on May 11, so that fits ...

Depending on the precise details of how Data::Dumper prints things
doesn't seem like a particularly good idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] issue: record or row variable cannot be part of multiple-item INTO list

2017-05-13 Thread Pavel Stehule
2017-05-13 22:20 GMT+02:00 Tom Lane :

> Pavel Stehule  writes:
> > I am working on migrating a large Oracle application to Postgres. When I
> > started migrating procedures with OUT parameters I found the following limit
>
> > "record or row variable cannot be part of multiple-item INTO list"
>
> IIRC, the reason for disallowing that is that it's totally unclear what
> the semantics ought to be.  Is that variable a single target (demanding
> a compatible composite-valued column from the source query), or does it
> eat one source column per field within the record/row?  The former is 100%
> inconsistent with what happens if the record/row is the only INTO target;
> while the latter would be very bug-prone, and it's especially unclear what
> ought to happen if it's an as-yet-undefined record variable.


I don't think so. The semantics would stay the same as now.

Now, the output (s1,s2,s3) can be assigned to:

1. scalar variables - implemented with aux row variable (s1,s2,s3) ->
r(ts1,ts2,ts3)
2. record - (s1, s2, s3) -> rec(s1,s2,s3)
3. row - (s1,s2,s3) -> r(s1,s2,s3)

If we allow composite values there, then the situation is the same:

1. (s1, c2, s3, c4) -> r(ts1, tc2, ts3, tc4)
2. (s1, c2, s3, c4) -> rec(s1, c2, s3, c4)
3. (s1, c2, s3, c4) -> row(s1, c2, s3, c4)

So there is no inconsistency if we use the rule:

1. if there is one target, use it
2. if there are more targets, create an aux row variable

The same technique is used for function output - build_row_from_vars - and
there are no problems there.

If you try to assign a composite to a scalar or a scalar to a composite,
then the assignment should fail. But when the statement is correct, such
invalid assignments will not occur.


>
> Yeah, we could invent some semantics or other, but I think it would
> mostly be a foot-gun for unwary programmers.
>
> We do allow you to write out the columns individually for such cases:
>
> SELECT ... INTO v1, rowvar.c1, rowvar.c2, rowvar.c3, v2 ...
>

It doesn't help performance or readability (and maintainability) in the
following cases.

There is a common pattern:

PROCEDURE p(..., OUT r widetab%ROWTYPE, OUT errordesc COMPOSITE)

Now there is a workaround:

SELECT * FROM p() INTO auxrec;
r := auxrec.widetab;
errordesc := auxrec.errordesc;

But it creates N (the number of OUT variables) assignment commands over
records.

If this workaround is safe, then an implementation based on an aux row
variable should be safe too, because the workaround is just the manual form
of it.
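
A self-contained sketch of that workaround (the table, type, and function
names here - widetab, errordesc_t, p - are hypothetical, for illustration
only):

-- Hypothetical setup: a wide table and a composite error type.
CREATE TABLE widetab (a int, b text);
CREATE TYPE errordesc_t AS (code int, message text);

CREATE FUNCTION p(OUT r widetab, OUT errordesc errordesc_t)
LANGUAGE plpgsql AS $$
BEGIN
    SELECT * INTO r FROM widetab LIMIT 1;
    errordesc := (0, 'ok')::errordesc_t;
END;
$$;

DO $$
DECLARE
    auxrec    record;
    r         widetab;
    errordesc errordesc_t;
BEGIN
    -- Current workaround: fetch into one record, then copy field by field.
    SELECT * FROM p() INTO auxrec;
    r := auxrec.r;
    errordesc := auxrec.errordesc;

    -- The proposal would allow writing this directly instead:
    -- SELECT * FROM p() INTO r, errordesc;
END;
$$;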



>
> and I think it's better to encourage people to stick to that.


I don't think using tens of OUT variables is nice, but the current behavior
is too restrictive. Moreover, I didn't find a case where the proposed
implementation would not work (allowing records needs some work).


>
> regards, tom lane
>


[HACKERS] Event triggers + table partitioning cause server crash in current master

2017-05-13 Thread Mark Dilger
Hackers,

I discovered a reproducible crash using event triggers in the current
development version, 29c7d5e483acaa74a0d06dd6c70b320bb315.
I was getting a crash before this version, and cloned a fresh copy of
the sources to be sure I was up to date, so I don't think the bug can be
attributed to Andres' commit.  (The prior version I was testing against
was heavily modified by me, so I recreated the bug using the latest
standard, unmodified sources.)

I create both before and after event triggers early in the regression test
schedule, which then fire here and there during the following tests, leading
fairly reproducibly to the server crashing somewhere during the test suite.
These crashes do not happen for me without the event triggers being added
to the tests.  Many tests show as 'FAILED' simply because the logging
that happens in the event triggers creates unexpected output for the test.
Those "failures" are expected.  The server crashes are not.

The server logs suggest the crashes might be related to partitioned tables.

Please find attached the patch that includes my changes to the sources
for recreating this bug.  The logs and regression.diffs are a bit large; let
me know if you need them.

I built using the command

./configure --enable-cassert --enable-tap-tests && make -j4 && make check

Mark Dilger



to_reproduce_bug.patch
Description: Binary data



Re: [HACKERS] Hash Functions

2017-05-13 Thread Andres Freund
On 2017-05-13 10:29:09 -0400, Robert Haas wrote:
> On Sat, May 13, 2017 at 12:52 AM, Amit Kapila  wrote:
> > Can we think of defining separate portable hash functions which can be
> > used for the purpose of hash partitioning?
> 
> I think that would be a good idea.  I think it shouldn't even be that
> hard.  By data type:
> 
> - Integers.  We'd need to make sure that we get the same results for
> the same value on big-endian and little-endian hardware, and that
> performance is good on both systems.  That seems doable.
> 
> - Floats.  There may be different representations in use on different
> hardware, which could be a problem.  Tom didn't answer my question
> about whether any even-vaguely-modern hardware is still using non-IEEE
> floats, which I suspect means that the answer is "no".  If every bit
> of hardware we are likely to find uses basically the same
> representation of the same float value, then this shouldn't be hard.
> (Also, even if this turns out to be hard for floats, using a float as
> a partitioning key would be a surprising choice because the default
> output representation isn't even unambiguous; you need
> extra_float_digits for that.)
> 
> - Strings.  There's basically only one representation for a string.
> If we assume that the hash code only needs to be portable across
> hardware and not across encodings, a position for which I already
> argued upthread, then I think this should be manageable.
> 
> - Everything Else.  Basically, everything else is just a composite of
> that stuff, I think.

I seriously doubt that's true.  A lot of more complex types have
internal alignment padding and such.  Consider e.g. something like
jsonb, hstore, or postgis types - you *can* convert them to something
that's unambiguous, but it's going to be fairly expensive.  Essentially
you'd have to do something like calling the output function, and then
hashing the result of that.  And hash-partitioning is particularly
interesting for e.g. large amounts of points in a geospatial scenario,
because other types of partitioning are quite hard to maintain.

- Andres




Re: [HACKERS] Hash Functions

2017-05-13 Thread Jeff Davis
On Fri, May 12, 2017 at 12:38 PM, Robert Haas  wrote:
> That is a good question.  I think it basically amounts to this
> question: is hash partitioning useful, and if so, for what?

Two words: parallel query. To get parallelism, one of the best
approaches is dividing the data, then doing as much work as possible
before combining it again. If you have hash partitions on some key,
then you can do partition-wise joins or partition-wise aggregation on
that key in parallel with no synchronization/communication overhead
(until the final result).

You've taken postgres pretty far in this direction already; hash
partitioning will take it one step further by allowing more pushdowns
and lower sync/communication costs.
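
As a sketch of why matching hash partitioning helps, using the syntax from
the proposed [POC] hash partitioning patch (not committed at this point;
table names are illustrative):

-- Two tables hash-partitioned the same way on the join key.
CREATE TABLE orders (order_id int, total numeric)
    PARTITION BY HASH (order_id);
CREATE TABLE orders_0 PARTITION OF orders
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE orders_1 PARTITION OF orders
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

CREATE TABLE lines (order_id int, qty int)
    PARTITION BY HASH (order_id);
CREATE TABLE lines_0 PARTITION OF lines
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE lines_1 PARTITION OF lines
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- Each pair (orders_N, lines_N) covers the same slice of the key space,
-- so the join below can in principle be executed partition by partition,
-- with the per-partition joins running in parallel:
SELECT o.order_id, sum(l.qty)
FROM orders o JOIN lines l USING (order_id)
GROUP BY o.order_id;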

Some of these things can be done with range partitioning, too, but see
my other message here:

https://www.postgresql.org/message-id/CAMp0ubfNMSGRvZh7N7TRzHHN5tz0ZeFP13Aq3sv6b0H37fdcPg%40mail.gmail.com

Regards,
Jeff Davis




Re: [HACKERS] walsender & parallelism

2017-05-13 Thread Andres Freund
On 2017-04-25 11:18:35 -0400, Robert Haas wrote:
> On Thu, Apr 20, 2017 at 9:40 PM, Andres Freund  wrote:
> > This'd be easier to look at if the fairly separate facility of allowing
> > walsender to execute SQL commands had been committed separately, rather
> > than as part of a fairly large commit of largely unrelated things.
> 
> Good grief, yes.  I really don't think that should have been slipped
> into a commit that was mostly about logical replication, with only a
> brief mention in the commit message and, AFAICS, no documentation
> whatsoever.

In my opinion some explanation about how this came about would be
appropriate.

- Andres




Re: [HACKERS] logical decoding of two-phase transactions

2017-05-13 Thread Dmitry Dolgov
On 13 May 2017 at 22:22, Tom Lane  wrote:
>
> Apparently you are not testing against current HEAD.  That's been there
> since d10c626de (a whole two days now ;-))

Indeed, I was working on a more than two-day old antiquity. Unfortunately,
it's even more complicated
to apply this patch against the current HEAD, so I'll wait for a rebased
version.


[HACKERS] Latest Data::Dumper breaks hstore_plperl regression test

2017-05-13 Thread Tom Lane
Or at least, that's what I surmise from the fact that buildfarm critter
caiman has been failing that test for the last day or so, with symptoms
indicating whitespace changes in Data::Dumper output.  Some poking into
the Fedora repo shows that rawhide updated perl-Data-Dumper from 2.161
to 2.167 on May 11, so that fits ...

regards, tom lane




Re: [HACKERS] snapbuild woes

2017-05-13 Thread Andres Freund
On 2017-05-12 10:57:55 +0200, Petr Jelinek wrote:
> Hmm, well it helps but actually now that we don't track individual
> running transactions anymore it got much less effective (my version of
> 0005 does as well).
> 
> The example workload I test with is:
> session 1: open transaction, do a write, keep it open
> session 2: pgbench  -M simple -N -c 10 -P 1 -T 5
> session 3: run CREATE_REPLICATION_SLOT LOGICAL in walsender
> session 2: pgbench  -M simple -N -c 10 -P 1 -T 20
> session 1: commit
> 
> And wait for session 3 to finish slot creation, takes about 20 mins on
> my laptop without patches, minute and half with your patches for 0004
> and 0005 (or with your 0004 and my 0005) and about 2s with my original
> 0004 and 0005.

Is that with assertions enabled or not?  With assertions enabled, nearly
all the time post-patches seems to be spent in some Asserts in
reorderbuffer.c; without them it takes less than a second for me here.

I'm applying these now.

Greetings,

Andres Freund




Re: [HACKERS] logical decoding of two-phase transactions

2017-05-13 Thread Tom Lane
Dmitry Dolgov <9erthali...@gmail.com> writes:
> Just a note about this patch. Of course time flies by and it needs rebase,
> but also there are few failing tests right now:

> ERROR:  function pg_wal_lsn_diff(pg_lsn, unknown) does not exist

Apparently you are not testing against current HEAD.  That's been there
since d10c626de (a whole two days now ;-)).

regards, tom lane




Re: [HACKERS] issue: record or row variable cannot be part of multiple-item INTO list

2017-05-13 Thread Tom Lane
Pavel Stehule  writes:
> I am working on migrating a large Oracle application to Postgres. When I
> started migrating procedures with OUT parameters I found the following limit

> "record or row variable cannot be part of multiple-item INTO list"

IIRC, the reason for disallowing that is that it's totally unclear what
the semantics ought to be.  Is that variable a single target (demanding
a compatible composite-valued column from the source query), or does it
eat one source column per field within the record/row?  The former is 100%
inconsistent with what happens if the record/row is the only INTO target;
while the latter would be very bug-prone, and it's especially unclear what
ought to happen if it's an as-yet-undefined record variable.

Yeah, we could invent some semantics or other, but I think it would
mostly be a foot-gun for unwary programmers.

We do allow you to write out the columns individually for such cases:

SELECT ... INTO v1, rowvar.c1, rowvar.c2, rowvar.c3, v2 ...

and I think it's better to encourage people to stick to that.
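
For instance (illustrative composite type and values):

CREATE TYPE triple AS (c1 int, c2 int, c3 int);

DO $$
DECLARE
    v1     int;
    v2     text;
    rowvar triple;
BEGIN
    -- One source column per target, with the composite's fields
    -- spelled out explicitly:
    SELECT 1, 10, 20, 30, 'done'
      INTO v1, rowvar.c1, rowvar.c2, rowvar.c3, v2;
END;
$$;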

regards, tom lane




Re: [HACKERS] logical decoding of two-phase transactions

2017-05-13 Thread Dmitry Dolgov
Hi

> On 4 April 2017 at 19:13, Masahiko Sawada  wrote:
>
> Other than that issue current patch still could not pass 'make check'
> test of contrib/test_decoding.

Just a note about this patch. Of course time flies by and it needs a rebase,
but there are also a few failing tests right now:

* one that was already mentioned by Masahiko
* one from `ddl`, where the expected output is:

```
SELECT slot_name, plugin, slot_type, active,
NOT catalog_xmin IS NULL AS catalog_xmin_set,
xmin IS NULL  AS data_xmin_not_set,
pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_wal
FROM pg_replication_slots;
    slot_name    |    plugin     | slot_type | active | catalog_xmin_set | data_xmin_not_set | some_wal
-----------------+---------------+-----------+--------+------------------+-------------------+----------
 regression_slot | test_decoding | logical   | f      | t                | t                 | t
(1 row)
```

but the result is:

```
SELECT slot_name, plugin, slot_type, active,
NOT catalog_xmin IS NULL AS catalog_xmin_set,
xmin IS NULL  AS data_xmin_not_set,
pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_wal
FROM pg_replication_slots;
ERROR:  function pg_wal_lsn_diff(pg_lsn, unknown) does not exist
LINE 5: pg_wal_lsn_diff(restart_lsn, '0/01000000') > 0 AS some_w...
^
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
```


[HACKERS] issue: record or row variable cannot be part of multiple-item INTO list

2017-05-13 Thread Pavel Stehule
Hi

I am working on migrating a large Oracle application to Postgres. When I
started migrating procedures with OUT parameters I found the following limit:

"record or row variable cannot be part of multiple-item INTO list"

I checked the code and it looks like this limit is not necessary for ROW
types (which is enough for migration from Oracle, where REC is not
available).

Do you think this limit is necessary for ROW types?

@@ -3368,19 +3368,7 @@ read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, bool *strict)
    switch (tok)
    {
        case T_DATUM:
-           if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW)
-           {
-               check_assignable(yylval.wdatum.datum, yylloc);
-               *row = (PLpgSQL_row *) yylval.wdatum.datum;
-
-               if ((tok = yylex()) == ',')
-                   ereport(ERROR,
-                           (errcode(ERRCODE_SYNTAX_ERROR),
-                            errmsg("record or row variable cannot be part of multiple-item INTO list"),
-                            parser_errposition(yylloc)));
-               plpgsql_push_back_token(tok);
-           }
-           else if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC)


Regards

Pavel


Re: [HACKERS] Valgrind & tests for `numeric`

2017-05-13 Thread Tom Lane
Dmitry Dolgov <9erthali...@gmail.com> writes:
> Recently I noticed, that when I'm running the regression tests under
> Valgrind 3.9.0,
> one test for `numeric` is constantly failing:

That seems very strange, especially since many PG developers use
valgrind, and we even have one buildfarm animal testing with it,
and nobody has ever reported such a thing.  I'm inclined to suspect
a bug in that valgrind version.

regards, tom lane




[HACKERS] Valgrind & tests for `numeric`

2017-05-13 Thread Dmitry Dolgov
Hi

Recently I noticed that when I run the regression tests under Valgrind
3.9.0, one test for `numeric` consistently fails.

Here is what is expected:
```
SELECT i as pow,
round((-2.5 * 10 ^ i)::numeric, -i),
round((-1.5 * 10 ^ i)::numeric, -i),
round((-0.5 * 10 ^ i)::numeric, -i),
round((0.5 * 10 ^ i)::numeric, -i),
round((1.5 * 10 ^ i)::numeric, -i),
round((2.5 * 10 ^ i)::numeric, -i)
FROM generate_series(-5,5) AS t(i);
 pow |  round   |  round   |  round   |  round  |  round  |  round
-----+----------+----------+----------+---------+---------+---------
  -5 | -0.00003 | -0.00002 | -0.00001 | 0.00001 | 0.00002 | 0.00003
  -4 |  -0.0003 |  -0.0002 |  -0.0001 |  0.0001 |  0.0002 |  0.0003
  -3 |   -0.003 |   -0.002 |   -0.001 |   0.001 |   0.002 |   0.003
  -2 |    -0.03 |    -0.02 |    -0.01 |    0.01 |    0.02 |    0.03
  -1 |     -0.3 |     -0.2 |     -0.1 |     0.1 |     0.2 |     0.3
   0 |       -3 |       -2 |       -1 |       1 |       2 |       3
   1 |      -30 |      -20 |      -10 |      10 |      20 |      30
   2 |     -300 |     -200 |     -100 |     100 |     200 |     300
   3 |    -3000 |    -2000 |    -1000 |    1000 |    2000 |    3000
   4 |   -30000 |   -20000 |   -10000 |   10000 |   20000 |   30000
   5 |  -300000 |  -200000 |  -100000 |  100000 |  200000 |  300000
(11 rows)
```

and here is what I've got (see the last row):
```
SELECT i as pow,
round((-2.5 * 10 ^ i)::numeric, -i),
round((-1.5 * 10 ^ i)::numeric, -i),
round((-0.5 * 10 ^ i)::numeric, -i),
round((0.5 * 10 ^ i)::numeric, -i),
round((1.5 * 10 ^ i)::numeric, -i),
round((2.5 * 10 ^ i)::numeric, -i)
FROM generate_series(-5,5) AS t(i);
 pow |  round   |  round   |  round   |  round  |  round  |  round
-----+----------+----------+----------+---------+---------+---------
  -5 | -0.00003 | -0.00002 | -0.00001 | 0.00001 | 0.00002 | 0.00003
  -4 |  -0.0003 |  -0.0002 |  -0.0001 |  0.0001 |  0.0002 |  0.0003
  -3 |   -0.003 |   -0.002 |   -0.001 |   0.001 |   0.002 |   0.003
  -2 |    -0.03 |    -0.02 |    -0.01 |    0.01 |    0.02 |    0.03
  -1 |     -0.3 |     -0.2 |     -0.1 |     0.1 |     0.2 |     0.3
   0 |       -3 |       -2 |       -1 |       1 |       2 |       3
   1 |      -30 |      -20 |      -10 |      10 |      20 |      30
   2 |     -300 |     -200 |     -100 |     100 |     200 |     300
   3 |    -3000 |    -2000 |    -1000 |    1000 |    2000 |    3000
   4 |   -30000 |   -20000 |   -10000 |   10000 |   20000 |   30000
   5 |  -300000 |  -200000 |        0 |       0 |  200000 |  300000
(11 rows)
```

I can see this behavior on the latest master with `USE_VALGRIND` enabled. Is
this a known issue?


Re: [HACKERS] Hash Functions

2017-05-13 Thread Jeff Davis
On Fri, May 12, 2017 at 11:45 AM, Tom Lane  wrote:
> Forget hash partitioning.  There's no law saying that that's a good
> idea and we have to have it.  With a different set of constraints,
> maybe we could do it, but I think the existing design decisions have
> basically locked it out --- and I doubt that hash partitioning is so
> valuable that we should jettison other desirable properties to get it.

A lot of the optimizations that can make use of hash partitioning
could also make use of range partitioning. But let me defend hash
partitioning:

* hash partitioning requires fewer decisions by the user
* naturally balances data and workload among partitions in most cases
* easy to match with a degree of parallelism

But with range partitioning, you can have situations where different
tables have different distributions of data. If you partition to
balance the data between partitions in both cases, then that makes
partition-wise join a lot harder because the boundaries don't line up.
If you make the boundaries line up to do partition-wise join, the
partitions might have wildly different amounts of data in them. Either
way, it makes parallelism harder.

Even without considering joins, range partitioning could force you to
make a choice between balancing the data and balancing the workload.
If you are partitioning based on date, then a lot of the workload will
be on more recent partitions. That's desirable sometimes (e.g. for
vacuum) but not always desirable for parallelism.

Hash partitioning doesn't have these issues and goes very nicely with
parallel query.

Regards,
Jeff Davis




Re: [HACKERS] Hash Functions

2017-05-13 Thread Tom Lane
Robert Haas  writes:
> On Sat, May 13, 2017 at 12:52 AM, Amit Kapila  wrote:
>> Can we think of defining separate portable hash functions which can be
>> used for the purpose of hash partitioning?

> I think that would be a good idea.  I think it shouldn't even be that
> hard.  By data type:

> - Integers.  We'd need to make sure that we get the same results for
> the same value on big-endian and little-endian hardware, and that
> performance is good on both systems.  That seems doable.

> - Floats.  There may be different representations in use on different
> hardware, which could be a problem.  Tom didn't answer my question
> about whether any even-vaguely-modern hardware is still using non-IEEE
> floats, which I suspect means that the answer is "no".  If every bit
> of hardware we are likely to find uses basically the same
> representation of the same float value, then this shouldn't be hard.
> (Also, even if this turns out to be hard for floats, using a float as
> a partitioning key would be a surprising choice because the default
> output representation isn't even unambiguous; you need
> extra_float_digits for that.)

> - Strings.  There's basically only one representation for a string.
> If we assume that the hash code only needs to be portable across
> hardware and not across encodings, a position for which I already
> argued upthread, then I think this should be manageable.

Basically, this is simply saying that you're willing to ignore the
hard cases, which reduces the problem to one of documenting the
portability limitations.  You might as well not even bother with
worrying about the integer case, because porting between little-
and big-endian systems is surely far less common than cases you've
already said you're okay with blowing off.

That's not an unreasonable position to take, perhaps; doing better
than that is going to be a lot more work and it's not very clear
how much real-world benefit results.  But I can't follow the point
of worrying about endianness but not encoding.

regards, tom lane




Re: [HACKERS] Hash Functions

2017-05-13 Thread Jeff Davis
On Fri, May 12, 2017 at 10:34 AM, Tom Lane  wrote:
> Maintaining such a property for float8 (and the types that depend on it)
> might be possible if you believe that nobody ever uses anything but IEEE
> floats, but we've never allowed that as a hard assumption before.

This is not such a big practical problem (for me at least) because
hashing of floats is of dubious value.

> Even architecture dependence isn't the whole scope of the problem.
> Consider for example dumping a LATIN1-encoded database and trying
> to reload it into a UTF8-encoded database.  People will certainly
> expect that to be possible, and do you want to guarantee that the
> hash of a text value is encoding-independent?

That is a major problem. In an ideal world, we could make that work
with something like ucol_getSortKey(), but we don't require ICU, and
we can't mix getSortKey() with strxfrm(), or even strxfrm() results
from different platforms.

I don't think it's correct to hash the code points, either, because
strings may be considered equal in a locale even if the code points
aren't identical. But I don't think postgres lives up to that standard
currently.

But hash partitioning is too valuable to give up on entirely. I think
we should consider supporting a limited subset of types for now with
something not based on the hash am.

Regards,
Jeff Davis




[HACKERS] proposal - using names as primary names of plpgsql function parameters instead $ based names

2017-05-13 Thread Pavel Stehule
Hi

While working on plpgsql_check, I have to check function parameters. I can
use fn_vargargnos and out_param_varno to get the list of arguments and the
related varno(s). When I detect some issue, I use the refname. That is not
very nice now, because these refnames are $-based; the long names are only
aliases, and there is no way to find the related alias.

So, my proposal: use the parameter names as the refnames of the parameter
variables, and make the $-based names the aliases. From the user's
perspective there is no change.
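
For example (standard plpgsql, with an illustrative function; the proposal
only changes which name is the primary one internally):

CREATE FUNCTION add_one(val int) RETURNS int
LANGUAGE plpgsql AS $$
BEGIN
    -- "val" and "$1" name the same variable; currently the internal
    -- refname is the $-based one and "val" is only an alias, which is
    -- what tools like plpgsql_check end up reporting.
    RETURN val + 1;
END;
$$;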

Comments, notes?

Regards

Pavel


Re: [HACKERS] multi-column range partition constraint

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 12:42 AM, Amit Langote  wrote:
> Attached is the correct version.

Thank you!  I committed 0001 with a couple of cosmetic tweaks and with
the change I previously suggested around partexprs_item.  You argued
that wouldn't work because the contents of partexprs_item were
modified, but that's not so: partexprs_item in
get_range_key_properties is a pointer to the partexprs_item in the
caller.  When it modifies *partexprs_item, it's not changing the
contents of the ListCell itself, just the caller's ListCell *.  I also
ran pgindent over the patch.

Also committed 0002.  In that case, I removed CHECK (...) from the
output; the caller can always add that if it's desired, but since a
partitioning constraint is NOT a CHECK constraint I don't actually
think it's useful in most cases.  I also tweaked the regression tests
slightly.

Thanks again.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Hash Functions

2017-05-13 Thread Robert Haas
On Sat, May 13, 2017 at 12:52 AM, Amit Kapila  wrote:
> Can we think of defining separate portable hash functions which can be
> used for the purpose of hash partitioning?

I think that would be a good idea.  I think it shouldn't even be that
hard.  By data type:

- Integers.  We'd need to make sure that we get the same results for
the same value on big-endian and little-endian hardware, and that
performance is good on both systems.  That seems doable.

- Floats.  There may be different representations in use on different
hardware, which could be a problem.  Tom didn't answer my question
about whether any even-vaguely-modern hardware is still using non-IEEE
floats, which I suspect means that the answer is "no".  If every bit
of hardware we are likely to find uses basically the same
representation of the same float value, then this shouldn't be hard.
(Also, even if this turns out to be hard for floats, using a float as
a partitioning key would be a surprising choice because the default
output representation isn't even unambiguous; you need
extra_float_digits for that.)

- Strings.  There's basically only one representation for a string.
If we assume that the hash code only needs to be portable across
hardware and not across encodings, a position for which I already
argued upthread, then I think this should be manageable.

- Everything Else.  Basically, everything else is just a composite of
that stuff, I think.
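
For comparison, the hash functions behind the current hash opclasses are
already callable from SQL, which makes it easy to probe a given platform's
results (this only probes the existing, non-portable-by-contract functions,
not the proposed portable ones):

-- Each returns the 32-bit hash currently used by hash indexes and joins;
-- portable variants would have to guarantee these values match across
-- architectures.
SELECT hashint4(42) AS h_int4,
       hashint8(42) AS h_int8,
       hashfloat8(42.0::float8) AS h_float8,
       hashtext('42') AS h_text;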

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH v2] Progress command to monitor progression of long running SQL queries

2017-05-13 Thread Amit Kapila
On Wed, May 10, 2017 at 10:10 PM, Remi Colinet  wrote:
>
> Parallel queries can also be monitored. The same mechanism is used to monitor
> child workers with a slight difference: the main worker requests the child
> workers' progress directly in order to dump the whole progress tree in shared
> memory.
>

What if there is an error in the worker (like "out of memory") while
gathering the statistics?  It seems that both the workers and the main
backend will just error out.  I am not sure it is a good idea to error
out the backend or a parallel worker, as it will just end the query
execution.  Also, even if that is okay, there doesn't seem to be a way
for a parallel worker to communicate the error back to the master
backend; rather, it will just exit silently, which is not right.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2017-05-13 Thread Amit Kapila
On Fri, May 12, 2017 at 9:14 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Wed, May 10, 2017 at 8:39 PM, Masahiko Sawada  
>> wrote:
>>> ... I'd like to propose to change relation
>>> extension lock management so that it works using LWLock instead.
>
>> That's not a good idea because it'll make the code that executes while
>> holding that lock noninterruptible.
>
> Is that really a problem?  We typically only hold it over one kernel call,
> which ought to be noninterruptible anyway.
>

During parallel bulk load operations, I think we hold it over multiple
kernel calls.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2017-05-13 Thread Amit Kapila
On Thu, May 11, 2017 at 6:09 AM, Masahiko Sawada  wrote:
> This work would be helpful not only for existing workload but also
> future works like some parallel utility commands, which is discussed
> on other threads[1]. At least for parallel vacuum, this feature helps
> to solve issue that the implementation of parallel vacuum has.
>
> I ran pgbench for 10 min three times(scale factor is 5000), here is a
> performance measurement result.
>
> clients   TPS(HEAD)   TPS(Patched)
> 4   2092.612   2031.277
> 8   3153.732   3046.789
> 16 4562.072   4625.419
> 32 6439.391   6479.526
> 64 7767.364   7779.636
> 100   7917.173   7906.567
>
> * 16 core Xeon E5620 2.4GHz
> * 32 GB RAM
> * ioDrive
>
> In current implementation, it seems there is no performance degradation so 
> far.
>

I think it is good to check pgbench, but we should do tests of the
bulk load as this lock is stressed during such a workload.  Some of
the tests we have done when we have improved the performance of bulk
load can be found in an e-mail [1].

[1] -
https://www.postgresql.org/message-id/CAFiTN-tkX6gs-jL8VrPxg6OG9VUAKnObUq7r7pWQqASzdF5OwA%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [POC] hash partitioning

2017-05-13 Thread Dilip Kumar
On Fri, May 12, 2017 at 6:08 PM, amul sul  wrote:
> Hi,
>
> Please find the following updated patches attached:

I have done some testing with the new patch; most of the cases worked as
expected, except the one below.

I expected the planner to choose only "Seq Scan on t1", but it is scanning
both partitions:

create table t (a int, b varchar) partition by hash(a);
create table t1 partition of t for values with (modulus 8, remainder 0);
create table t2 partition of t for values with (modulus 8, remainder 1);

postgres=# explain select * from t where a=8;
QUERY PLAN
--
 Append  (cost=0.00..51.75 rows=12 width=36)
   ->  Seq Scan on t1  (cost=0.00..25.88 rows=6 width=36)
 Filter: (a = 8)
   ->  Seq Scan on t2  (cost=0.00..25.88 rows=6 width=36)
 Filter: (a = 8)
(5 rows)


Some cosmetic comments.
---
+ RangeVar   *rv = makeRangeVarFromNameList(castNode(List, nameEl->arg));
+

Useless Hunk.

 /*
- * Build a CREATE SEQUENCE command to create the sequence object, and
- * add it to the list of things to be done before this CREATE/ALTER
- * TABLE.
+ * Build a CREATE SEQUENCE command to create the sequence object, and add
+ * it to the list of things to be done before this CREATE/ALTER TABLE.
  */

It seems that in src/backend/parser/parse_utilcmd.c you have changed the
existing code with pgindent.  I think it's not a good idea to mix pgindent
changes with your patch.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

