Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Andrew Dunstan


On 07/17/2015 10:11 AM, Andrew Dunstan wrote:


On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:



 This patch makes Postgres core more complex

Yes, it does. But, that was not the purpose, obviously. :-)

 while not really solving the problem in Javascript.

It still allows for less risk of silent data corruption on the js side.




I have already pointed out how this patch is fundamentally broken. You 
can achieve your aims by a fairly small amount of code inside your 
logical decoder, and with no core code changes whatsoever. So I'm 
puzzled why we are even still debating this broken design.



Incidentally, this doesn't look acceptable anyway:

!   es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
!                      NUMERICOID, 1702 /* numeric_out */);


We don't hardcode function oids elsewhere. So this is also something 
that makes the patch unacceptable.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] segfault in 9.5alpha - plpgsql function, implicit cast and IMMUTABLE cast function

2015-07-17 Thread Tom Lane
Geoff Winkless pgsqlad...@geoff.dj writes:
 While doing some testing of 9.5a one of my colleagues (not on list) found a
 reproducible server segfault.

Hm, looks like commit 1345cc67bbb014209714af32b5681b1e11eaf964 is to
blame: memory management for the plpgsql cast cache needs to be more
complicated than I realized :-(.

regards, tom lane




[HACKERS] Re: [COMMITTERS] pgsql: Retain comments on indexes and constraints at ALTER TABLE ... TY

2015-07-17 Thread Michael Paquier
On Fri, Jul 17, 2015 at 11:16 PM, Kevin Grittner kgri...@ymail.com wrote:
 Heikki Linnakangas heikki.linnakan...@iki.fi wrote:

 This fixes bug #13126, reported by Kirill Simonov.

 It looks like you missed something with the addition of
 AT_ReAddComment:

 test_ddl_deparse.c:80:11: warning: enumeration value 'AT_ReAddComment' not 
 handled in switch [-Wswitch]
switch (subcmd->subtype)
 ^

Oops. If someone could pick up the attached (backpatch to 9.5 needed)...
-- 
Michael


20150717_testddl_warning.patch
Description: binary/octet-stream



Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Shulgin, Oleksandr
On Jul 17, 2015 4:31 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 07/17/2015 10:11 AM, Andrew Dunstan wrote:


 On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:


  This patch makes Postgres core more complex

 Yes, it does. But, that was not the purpose, obviously. :-)

  while not really solving the problem in Javascript.

 It still allows for less risk of silent data corruption on the js side.



 I have already pointed out how this patch is fundamentally broken. You
can achieve your aims by a fairly small amount of code inside your logical
decoder, and with no core code changes whatsoever. So I'm puzzled why we
are even still debating this broken design.



 Incidentally, this doesn't look acceptable anyway:

 !  es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
 !                     NUMERICOID, 1702 /* numeric_out */);


 We don't hardcode function oids elsewhere. So this is also something that
makes the patch unacceptable.

Well, good to know (I believe I've asked about this in the first mail
specifically).

Is there any way a built-in function oid would change/differ on different
server versions? What would be the recommended way to do this?

--
Alex


Re: [HACKERS] proposal: multiple psql option -c

2015-07-17 Thread Marc Mamin
 On Thu, Jul 16, 2015 at 12:42 PM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 Hi
 can we support multiple -c option?
 Why? Because some statements like VACUUM cannot be used together with any 
 other statements with single -c option. The current solution is using echo 
 and pipe op, but it is a complication in some complex scripts - higher 
 complication when you run psql via multiple sudo statement.
 Example:
 psql -c "select pg_stat_reset()" -c "vacuum full analyze" dbname
 or on all db
 psql -At -c "select datname from pg_databases" postgres | \
 xargs -n 1 -P 3 psql -c ... -c ...
 Ideas, notes, comments?

Hi,
Can't you handle this with a script on the target server ?

I have this one due to a profile issue:

   cat cicrunpsql.sh
   #!/bin/sh

   # set the isdbx environment before calling psql with the passed 
arguments.
   # required for remote calls with ssh

   #example
   # cicrunpsql.sh -Uisdb3 -c "select 1"
   # ssh isdb3@localhost cicrunpsql.sh -Uisdb3 -c "select 1"

   # remote calls per ssh do not get the profile automatically...
   . ~/.profile || exit 1

 psql "$@"


Re: [HACKERS] Retrieve the snapshot's LSN

2015-07-17 Thread Robert Haas
On Fri, Jul 17, 2015 at 4:16 AM, Florent Guiliani flor...@guiliani.fr wrote:
 but such an LSN need not exist.  Suppose A writes a commit record at
 LSN 0/1, and then B writes a commit record at 0/10100, and then B
 calls ProcArrayEndTransaction().  At this point, B is visible and A is
 not visible, even though A's commit record precedes that of B.

 Maybe that's what Andres referred as doable with some finicky locking.

 There are some race conditions when building a snapshot with an associated
 consistent LSN. If I understand your example, A is supposed to call
 ProcArrayEndTransaction() anytime soon.

Right.

 Could we wait/lock until it
 happens?

In theory, yes.  I'm not sure what the code would look like, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] Function to get size of asynchronous notification queue

2015-07-17 Thread Robert Haas
On Fri, Jul 17, 2015 at 4:23 AM, Brendan Jurd dire...@gmail.com wrote:
 On Thu, 16 Jul 2015 at 08:37 Gurjeet Singh gurj...@singh.im wrote:
 OK. Please send a new patch with the changes you agree to, and I can mark
 it ready for committer.

 Done.  Please find attached patch v3.  I have changed proportion to
 fraction, and made other wording improvements per your suggestions.

Speaking as a man whose children just finished fifth-grade math, a
proportion, technically speaking, is actually a relationship between
two fractions or ratios.  So I agree that fraction is the right word
here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] postgres_fdw extension support

2015-07-17 Thread Paul Ramsey
 
On July 17, 2015 at 12:49:04 AM, Simon Riggs (si...@2ndquadrant.com) wrote:
 On 17 July 2015 at 01:23, Michael Paquier wrote:
  
   Well, as I see it there’s three broad categories of behavior available:
  
   1- Forward nothing non-built-in (current behavior)
   2- Use options to forward only specified non-built-in things (either in
   function chunks (extensions, as in this patch) or one-by-one (mark your
   desired functions / ops))
   3- Forward everything if a “forward everything” option is set
   
  Then what you are describing here is a parameter able to do a switch
  among this selection:
  - nothing, which is to check on built-in stuff
  - extension list.
  - all.
  
 all seems to be a very blunt instrument but is certainly appropriate in 
 some cases  
  
 I see an intermediate setting, giving four categories in total  
  
 0. Nothing, as now  
 1. Extension list option on the Foreign Server
 2. Extension list option on the Foreign Data Wrapper, applies to all Foreign 
 Servers of that type
 3. All extensions allowed

I feel like a lot of knobs are being discussed for maximum configurability, but 
without knowing if people are going to show up with the desire to fiddle them 
:) If marking extensions is not considered a good API, then I’d lean towards 
(a) being able to toggle global forwarding on and off combined with (b) the 
ability to mark individual objects forwardable or not. Which, I see, is almost 
what you’ve described.

There’s no facility to add OPTIONS to an EXTENSION right now, so this 
capability seems to be very much server-by-server (adding a FDW-specific 
capability to the EXTENSION mechanism seems like overkill for a niche feature 
addition).

But within the bounds of the SERVER, being able to build combinations of 
allowed/restricted forwarding is certainly powerful,

  CREATE SERVER foo
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS ( host 'localhost', dbname 'foo', forward 'all -postgis' );

  CREATE SERVER foo
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS ( host 'localhost', dbname 'foo', forward 'none +postgis' );

  CREATE SERVER foo
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS ( host 'localhost', dbname 'foo', forward 'none +' );

Once we get to individual functions or operators it breaks down, since of 
course ‘’ refers to piles of different operators for different types. Their 
descriptions would be unworkably verbose once you have more than a tiny handful.

I’m less concerned with configurability than just the appropriateness of 
forwarding particular functions. In an earlier thread on this topic the problem 
of forwarding arbitrary user-defined PL/PgSQL functions was brought up. In 
passing it was mentioned that maybe VOLATILE functions should *never* be 
forwarded, ever. Or, that only IMMUTABLE functions should *ever* be 
forwarded. Since PostGIS includes a generous mix of all kinds of functions, 
discussing whether that kind of policy is required for any kind of additional 
forwarding would be interesting. Maybe IMMUTABLE/STABLE/VOLATILE doesn’t 
actually capture what it is that makes a function/op FORWARDABLE or not.
 








Re: [HACKERS] Retrieve the snapshot's LSN

2015-07-17 Thread Florent Guiliani
On Thu, Jul 16, 2015 at 7:13 PM, Andres Freund and...@anarazel.de wrote:
 On 2015-07-16 13:08:48 -0400, Robert Haas wrote:
 On Thu, Jul 16, 2015 at 12:54 PM, Andres Freund and...@anarazel.de wrote:
  Well, in combination with logical decoding it kinda has one: It should
  allow you to take a dump of the database with a certain snapshot and
  replay all transactions with a commit lsn bigger than the snapshot's
  lsn and end up with a continually consistent database.
 
  Visibility for HS actually works precisely in commit LSN order, even if
  that is possibly different than on the primary...

 That makes sense, and hopefully answers Florent's question about why
 this is only exposed through the slot mechanism.

 Trying to swap in the pub conversation, I think Florent wanted to be able to
 re-sync a standby from an existing slot. Which kinda makes sense to
 me. We could do something like
 SELECT * FROM pg_export_snapshot_for_slot(...);

 which would return the snapshot name and the LSN.

 There'd need to be some finicky locking to get that, but it should be
 possible.

A pg_export_snapshot_for_slot(...) would work very well.

Let me explain the use case. You have many downstream systems that are
replicated with logical decoding. Using a dedicated replication slot
for each target is not practical. A single logical replication slot is
configured. It generates a stream of LSN-stamped transactions in
commit order. Those transactions are published to all downstream
nodes.

The snapshot exported during the slot creation can be used to generate
a complete dump that the replicated systems will load before applying
the transaction stream.

How do you individually reload/recover one of the downstream nodes? You
can use the initial dump and reapply all transactions emitted since
the slot's inception. It will quickly become impractical. What you
need is to generate a newer dump and only apply the transactions from
that point.

Problem: How do you synchronize this newer dump with the LSN-stamped
stream of transactions? Being able to tell what LSN corresponds to the
consistent dump would solve it.

I've started a quick & dirty solution:
https://github.com/flyerman/postgres/commit/a13432d5e596a8b13ff911637afd764f53af2ab3

where I copied CreateReplicationSlot():
https://github.com/flyerman/postgres/blob/a13432d5e596a8b13ff911637afd764f53af2ab3/src/backend/replication/walsender.c#L764

into ExportLogicalDecodingSnapshot() and removed everything that isn't
needed for the snapshot creation. I still need to plug it into the
replication protocol grammar to test it.

It's not a very good solution. Among other bad things, it will exercise
the output plugin for nothing.

--
Florent




Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Shulgin, Oleksandr
On Jul 17, 2015 12:23 AM, Ryan Pedela rped...@datalanche.com wrote:

 On Thu, Jul 16, 2015 at 11:51 AM, Robert Haas robertmh...@gmail.com
wrote:

 I don't understand these issues in great technical depth, but if
 somebody is arguing that it's OK for PostgreSQL to be difficult to use
 for a certain category of user for several years until the next
 language rev becomes mainstream, then I disagree.  The fact that
 somebody wrote a patch to try to solve a problem means that the thing
 in question is a problem for at least that one user.  If he's the only
 one, maybe we don't need to care all that much.  If his needs are
 representative of a significant user community, we should not turn our
 backs on that community, regardless of whether we like the patch he
 wrote, and regardless of how well we are meeting the needs of other
 communities (like node.js users).


 I completely agree. However we aren't talking about a usability problem
with Postgres. We are actually talking about a usability problem with
Javascript, and trying to implement a band-aid for it with Postgres.
Javascript doesn't support large numbers, it just doesn't. There is nothing
the Postgres community can do about that. Only the ECMAscript standards
committee and implementers can fix Javascript.

 Here is the current user flow of reading numerics from Postgres and then
doing some math with them in Javascript.

 1. SELECT json
 2. Use json-bignum [1] module or custom JSON parser to correctly parse
numerics.
 3. Perform addition, subtraction, etc of numerics using either custom
numeric math library or an existing library such as bigdecimal.js [2].

 Here is the user flow if this patch is accepted.

 1. SELECT json with quoting flags set
 2. Custom parser to find numeric strings within JSON and convert them
into numerics. This is easy if JSON is simple, but may be difficult with a
very complex JSON.
 3. Perform addition, subtraction, etc of numerics using either custom
numeric math library or an existing library such as bigdecimal.js [2].

 It is almost the exact same user flow so what is the point?

In my case there's no select: we're running this in the context of a
logical decoding plugin.

The whole safeguarding idea enabled by this patch is that if the
client *expects* big numbers *and* it needs to perform arithmetic on them,
it'll have the special handling anyway. And IMO, it would actually make
more sense to use a big-numbers module only at the point where you have the
need for special handling, not to parse the whole input in a nonstandard
way.

But the clients that are unaware of big numbers or don't care about them
shouldn't be *forced* to use external modules for parsing json.

 This patch makes Postgres core more complex

Yes, it does. But, that was not the purpose, obviously. :-)

 while not really solving the problem in Javascript.

It still allows for less risk of silent data corruption on the js side.

--
Alex


Re: [HACKERS] segfault in 9.5alpha - plpgsql function, implicit cast and IMMUTABLE cast function

2015-07-17 Thread Michael Paquier
On Fri, Jul 17, 2015 at 7:52 PM, Geoff Winkless pgsqlad...@geoff.dj wrote:
 While doing some testing of 9.5a one of my colleagues (not on list) found a
 reproducible server segfault.
 [...]
 Hope someone can get something useful from the above. Any questions, please
 ask.

A test case is more than enough to look at this issue and guess what
is happening, thanks! The issue can be reproduced on REL9_5_STABLE and
master, and by looking at the stack trace it seems that the problem is
caused by an attempt to delete a memory context that has already been
free'd.

* thread #1: tid = 0x, 0x000109f30dee
postgres`MemoryContextDelete(context=0x7f7f7f7f7f7f7f7f) + 30 at
mcxt.c:206, stop reason = signal SIGSTOP
frame #0: 0x000109f30dee
postgres`MemoryContextDelete(context=0x7f7f7f7f7f7f7f7f) + 30 at
mcxt.c:206
   203 void
   204 MemoryContextDelete(MemoryContext context)
   205 {
-> 206 AssertArg(MemoryContextIsValid(context));
   207 /* We had better not be deleting TopMemoryContext ... */
   208 Assert(context != TopMemoryContext);
   209 /* And not CurrentMemoryContext, either */
(lldb) bt
* thread #1: tid = 0x, 0x000109f30dee
postgres`MemoryContextDelete(context=0x7f7f7f7f7f7f7f7f) + 30 at
mcxt.c:206, stop reason = signal SIGSTOP
  * frame #0: 0x000109f30dee
postgres`MemoryContextDelete(context=0x7f7f7f7f7f7f7f7f) + 30 at
mcxt.c:206
frame #1: 0x000109b7e261
postgres`fmgr_sql(fcinfo=0x7f84c28d5870) + 433 at functions.c:1044

I am adding it to the list of Open Items for 9.5. I'll look into that
in the next couple of days (Tuesday at worst).
Regards,
-- 
Michael




Re: [HACKERS] [PATCH] postgres_fdw extension support

2015-07-17 Thread Simon Riggs
On 17 July 2015 at 13:51, Paul Ramsey pram...@cleverelephant.ca wrote:


 There’s no facility to add OPTIONS to an EXTENSION right now, so this
 capability seems to be very much server-by-server (adding a FDW-specific
 capability to the EXTENSION mechanism seems like overkill for a niche
 feature addition).


Options already exist on CREATE FOREIGN DATA WRAPPER, so it should be easy
to support that.

I'd rather add it once on the wrapper than be forced to list all the
options on every foreign server, unless required to do so.

-- 
Simon Riggs   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Andrew Dunstan


On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:



 This patch makes Postgres core more complex

Yes, it does. But, that was not the purpose, obviously. :-)

 while not really solving the problem in Javascript.

It still allows for less risk of silent data corruption on the js side.




I have already pointed out how this patch is fundamentally broken. You 
can achieve your aims by a fairly small amount of code inside your 
logical decoder, and with no core code changes whatsoever. So I'm 
puzzled why we are even still debating this broken design.


cheers

andrew





[HACKERS] Re: [COMMITTERS] pgsql: Retain comments on indexes and constraints at ALTER TABLE ... TY

2015-07-17 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@iki.fi wrote:

 This fixes bug #13126, reported by Kirill Simonov.

It looks like you missed something with the addition of
AT_ReAddComment:

test_ddl_deparse.c:80:11: warning: enumeration value 'AT_ReAddComment' not 
handled in switch [-Wswitch]
switch (subcmd->subtype)
^

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] postgres_fdw extension support

2015-07-17 Thread Paul Ramsey

On July 17, 2015 at 5:57:42 AM, Simon Riggs (si...@2ndquadrant.com) wrote:

 Options already exist on CREATE FOREIGN DATA WRAPPER, so it should be easy to 
 support that.
  
 I'd rather add it once on the wrapper than be forced to list all the options 
 on every foreign server, unless required to do so.  

Gotcha, that does make sense.

P. 




[HACKERS] GSets: Getting collation related error when GSets has text column

2015-07-17 Thread Jeevan Chalke
Hi,

When we have a text column in the GROUPING SETS (and with some specific
order of columns), we are getting an error saying
"could not determine which collation to use for string comparison"

Here is the example:

postgres=# select sum(ten) from onek group by rollup(four::text), two
order by 1;
ERROR:  could not determine which collation to use for string comparison
HINT:  Use the COLLATE clause to set the collation explicitly.

However if we have SQL like:
select sum(ten) from onek group by two, rollup(four::text)
order by 1;

It is not failing.

After spending enough time, if I understood the code correctly, we are
re-ordering the sort columns but we are not remapping the grpColIdx
correctly.

Attached patch which attempts to fix this issue. However I am not sure
whether it is correct. But it does not add any new issues as such.
Added a few tests in the patch as well.

Please have a look.

Thanks

-- 
Jeevan B Chalke
Principal Software Engineer, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a6ce96e..96def6b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -2402,13 +2402,8 @@ build_grouping_chain(PlannerInfo *root,
 	 * Prepare the grpColIdx for the real Agg node first, because we may need
 	 * it for sorting
 	 */
-	if (list_length(rollup_groupclauses) > 1)
-	{
-		Assert(rollup_lists && llast(rollup_lists));
-
-		top_grpColIdx =
-			remap_groupColIdx(root, llast(rollup_groupclauses));
-	}
+	if (parse-groupingSets)
+		top_grpColIdx = remap_groupColIdx(root, llast(rollup_groupclauses));
 
 	/*
 	 * If we need a Sort operation on the input, generate that.
@@ -2429,11 +2424,14 @@ build_grouping_chain(PlannerInfo *root,
 	while (list_length(rollup_groupclauses) > 1)
 	{
 		List	   *groupClause = linitial(rollup_groupclauses);
-		List	   *gsets = linitial(rollup_lists);
+		List	   *gsets;
 		AttrNumber *new_grpColIdx;
 		Plan	   *sort_plan;
 		Plan	   *agg_plan;
 
+		Assert(rollup_lists && linitial(rollup_lists));
+		gsets = linitial(rollup_lists);
+
 		Assert(groupClause);
 		Assert(gsets);
 
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 5c47717..4ff5963 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -755,4 +755,27 @@ select array(select row(v.a,s1.*) from (select two,four, count(*) from onek grou
  {(2,0,0,250),(2,0,2,250),(2,0,,500),(2,1,1,250),(2,1,3,250),(2,1,,500),(2,,0,250),(2,,1,250),(2,,2,250),(2,,3,250),(2,,,1000)}
 (2 rows)
 
+-- Grouping on text columns
+select sum(ten) from onek group by two, rollup(four::text) order by 1;
+ sum  
+--
+ 1000
+ 1000
+ 1250
+ 1250
+ 2000
+ 2500
+(6 rows)
+
+select sum(ten) from onek group by rollup(four::text), two order by 1;
+ sum  
+--
+ 1000
+ 1000
+ 1250
+ 1250
+ 2000
+ 2500
+(6 rows)
+
 -- end
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index e478d34..cc5b89a 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -205,4 +205,8 @@ group by rollup(ten);
 select * from (values (1),(2)) v(a) left join lateral (select v.a, four, ten, count(*) from onek group by cube(four,ten)) s on true order by v.a,four,ten;
 select array(select row(v.a,s1.*) from (select two,four, count(*) from onek group by cube(two,four) order by two,four) s1) from (values (1),(2)) v(a);
 
+-- Grouping on text columns
+select sum(ten) from onek group by two, rollup(four::text) order by 1;
+select sum(ten) from onek group by rollup(four::text), two order by 1;
+
 -- end



Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-07-17 Thread Alexander Korotkov
On Fri, Jul 17, 2015 at 6:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Peter Geoghegan p...@heroku.com writes:
  I've heard that clock_gettime() with CLOCK_REALTIME_COARSE, or with
  CLOCK_MONOTONIC_COARSE can have significantly lower overhead than
  gettimeofday().

 It can, but it also has *much* lower precision, typically 1ms or so.


I've written a simple benchmark of QueryPerformanceCounter() for Windows. The
source code is as follows.

#include <stdio.h>
#include <windows.h>
#include <Winbase.h>

int main(int argc, char* argv[])
{
LARGE_INTEGER start, freq, current;
long count = 0;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);
current = start;
while (current.QuadPart < start.QuadPart + freq.QuadPart)
{
QueryPerformanceCounter(&current);
count++;
}
printf("QueryPerformanceCounter() per second: %ld\n", count);
return 0;
}

On my virtual machine it runs 1532746 QueryPerformanceCounter() calls per
second. In contrast, my MacBook can natively run 26260236 gettimeofday()
calls per second.
So, performance of PostgreSQL instr_time.h can vary by more than an order
of magnitude. It's possible that we could find systems where measurements
of time are even much slower.
In general, there could be some systems where accurate measurement of time
intervals is impossible or slow. That means we should provide them some
different solution like sampling. But does that mean we should force the
majority of systems to use sampling, which is both slower and less accurate
for them? Could we end up with different options for the user?

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Alvaro Herrera
Andrew Dunstan wrote:

 I have already pointed out how this patch is fundamentally broken. You can
 achieve your aims by a fairly small amount of code inside your logical
 decoder, and with no core code changes whatsoever. So I'm puzzled why we are
 even still debating this broken design.

I went through all your responses over the entire thread and I couldn't
find your argument about how this is fundamentally broken.  Can you
restate, or maybe give an archive link if I just missed it?


(Saying "but it changes so much of the existing code" is not really a
fundamental problem to me.  I mean, it's not like the existing code is
perfect and needs no changes.)

-- 
Álvaro Herrera   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] TABLESAMPLE patch is really in pretty sad shape

2015-07-17 Thread Petr Jelinek

On 2015-07-16 17:08, Tom Lane wrote:

Petr Jelinek p...@2ndquadrant.com writes:

On 2015-07-16 15:59, Tom Lane wrote:

I'm not clear on whether sequence AMs would need explicit catalog
representation, or could be folded down to just a single SQL function
with special signature as I suggested for tablesample handlers.
Is there any need for a sequence AM to have additional catalog
infrastructure like index AMs need?



That depends on the route we will choose to take with the storage there.
If we allow custom columns for sequence AMs (which is what both Heikki
and me seem to be inclined to do) then I think it will still need
catalog, plus it's also easier to just reuse the relam behavior than
coming up up with something completely new IMHO.


Hm.  I've not been following the sequence AM stuff carefully, but if you
want to use relam to point at a sequence's AM then really sequence AMs
have to be represented in pg_am.  (It would be quite broken from a
relational theory standpoint if relam could point to either of two
catalogs; not to mention that we have no way to guarantee OID uniqueness
across multiple catalogs.)



Well not necessarily, it would just mean that relam has a different 
meaning depending on relkind, so the OID uniqueness is not needed at all.



As things stand today, putting both kinds of AM into one catalog would be
pretty horrible, but it seems not hard to make it work if we did this sort
of refactoring first.  We'd end up with three columns in pg_am: amname,
amkind (index or sequence), and amhandler.  The type and contents of the
struct returned by amhandler could be different depending on amkind, which
means that the APIs could be as different as we need, despite sharing just
one catalog.  The only real restriction is that index and sequence AMs
could not have the same names, which doesn't seem like much of a problem
from here.



Yes, this would be better.

--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Alvaro Herrera
Shulgin, Oleksandr wrote:
 On Jul 17, 2015 4:31 PM, Andrew Dunstan and...@dunslane.net wrote:

  Incidentally, this doesn't look acceptable anyway:
 
  !  es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
  !  NUMERICOID, 1702 /* numeric_out */);
 
  We don't hardcode function oids elsewhere. So this is also something that
  makes the patch unacceptable.
 
 Well, good to know (I believe I've asked about this in the first mail
 specifically).
 
 Is there any way a built-in function oid would change/differ on different
 server versions? What would be the recommended way to do this?

C'mon, that's a trivial problem.  Just use getTypeOutputInfo();
numeric's OID is hardcoded as NUMERICOID.

-- 
Álvaro Herrera   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Grouping Sets: Fix unrecognized node type bug

2015-07-17 Thread Jeevan Chalke
On Wed, Jul 15, 2015 at 10:21 PM, Andrew Gierth and...@tao11.riddles.org.uk
 wrote:

  Jeevan == Jeevan Chalke jeevan.cha...@enterprisedb.com writes:

  Jeevan Hi,
  Jeevan It looks like we do support nested GROUPING SETS, I mean Sets
  Jeevan within Sets, not other types.  However this nesting is broken.

 Good catch, but I'm not yet sure your fix is correct; I'll need to look
 into that.


Sure. Thanks.

However I wonder why we are supporting GROUPING SETS inside GROUPING SETS.
Oracle throws an error for this.
We are not trying to be Oracle-compatible; I am just curious to know.

I have tried restricting it in the attached patch.

But it may require a few comment adjustments.

Thanks


 --
 Andrew (irc:RhodiumToad)




-- 
Jeevan B Chalke
Principal Software Engineer, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e0ff6f1..738715f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -371,9 +371,9 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 relation_expr_list dostmt_opt_list
 transform_element_list transform_type_list
 
-%type <list>	group_by_list
+%type <list>	group_by_list grouping_sets_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
-%type <node>	grouping_sets_clause
+%type <node>	grouping_sets_clause grouping_sets_item
 
 %type <list>	opt_fdw_options fdw_options
 %type <defelt>	fdw_option
@@ -10343,6 +10343,18 @@ group_by_item:
 			| grouping_sets_clause	{ $$ = $1; }
 		;
 
+grouping_sets_list:
+			grouping_sets_item{ $$ = list_make1($1); }
+			| grouping_sets_list ',' grouping_sets_item		{ $$ = lappend($1,$3); }
+		;
+
+grouping_sets_item:
+			a_expr	{ $$ = $1; }
+			| empty_grouping_set	{ $$ = $1; }
+			| cube_clause			{ $$ = $1; }
+			| rollup_clause			{ $$ = $1; }
+		;
+
 empty_grouping_set:
 			'(' ')'
 {
@@ -10371,7 +10383,7 @@ cube_clause:
 		;
 
 grouping_sets_clause:
-			GROUPING SETS '(' group_by_list ')'
+			GROUPING SETS '(' grouping_sets_list ')'
 {
 	$$ = (Node *) makeGroupingSet(GROUPING_SET_SETS, $4, @1);
 }
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 842c2ae..e75dceb 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -145,6 +145,25 @@ select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
|   |  12 |   36
 (6 rows)
 
+-- nesting with grouping sets
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets(((a, b
+  order by 1 desc;
+ERROR:  syntax error at or near sets
+LINE 2:   group by grouping sets((), grouping sets(((a, b
+  ^
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(rollup(c), grouping sets(cube(c
+  order by 1 desc;
+ERROR:  syntax error at or near sets
+LINE 2:   group by grouping sets(grouping sets(rollup(c), grouping s...
+  ^
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(a, grouping sets(a), a))
+  order by 1 desc;
+ERROR:  syntax error at or near sets
+LINE 2:   group by grouping sets(grouping sets(a, grouping sets(a), ...
+  ^
 -- empty input: first is 0 rows, second 1, third 3 etc.
 select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
  a | b | sum | count 
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index 0bffb85..b7e4826 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -73,6 +73,17 @@ select grouping(a), a, array_agg(b),
 select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
   from gstest2 group by rollup (a,b) order by rsum, a, b;
 
+-- nesting with grouping sets
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets(((a, b
+  order by 1 desc;
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(rollup(c), grouping sets(cube(c
+  order by 1 desc;
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(a, grouping sets(a), a))
+  order by 1 desc;
+
 -- empty input: first is 0 rows, second 1, third 3 etc.
 select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
 select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());



Re: [HACKERS] Parallel Seq Scan

2015-07-17 Thread Haribabu Kommi
On Thu, Jul 16, 2015 at 1:10 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 Thanks, I will fix this in next version of patch.


I am posting in this thread as I am not sure whether it needs a
separate thread or not.

I have gone through the code and found that the newly added funnel node
is tightly coupled with partial seq scan. In order to add many more
parallel plans along with parallel seq scan, we need to remove the
integration of this node with partial seq scan.

To achieve the same, I have the following ideas.

Plan:
1) Add the funnel path immediately for every parallel path, similar to
the current parallel seq scan, but during plan generation generate the
funnel plan only for the top funnel path and ignore the rest of the
funnel paths.

2) Instead of adding a funnel path immediately after the partial seq
scan path is generated, add the funnel path in grouping_planner once
the final rel path is generated, before creating the plan.

Execution:
The funnel execution varies based on the below plan node.
1) partial scan - Funnel does the local scan also and returns the tuples
2) partial agg - Funnel does the merging of aggregate results and
returns the final result.

Any other better ideas to achieve the same?

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] [PATCH] Function to get size of asynchronous notification queue

2015-07-17 Thread Brendan Jurd
On Thu, 16 Jul 2015 at 08:37 Gurjeet Singh gurj...@singh.im wrote:

 OK. Please send a new patch with the changes you agree to, and I can mark
 it ready for committer.


Done.  Please find attached patch v3.  I have changed "proportion" to
"fraction", and made other wording improvements per your suggestions.

Cheers,
BJ
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 14806,14811  SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
--- 14806,14817 
/row
  
row
+entryliteralfunctionpg_notification_queue_usage()/function/literal/entry
+entrytypedouble/type/entry
+entryfraction of the asynchronous notification queue currently occupied (0-1)/entry
+   /row
+ 
+   row
 entryliteralfunctionpg_my_temp_schema()/function/literal/entry
 entrytypeoid/type/entry
 entryOID of session's temporary schema, or 0 if none/entry
***
*** 14945,14954  SET search_path TO replaceableschema/ optional, replaceableschema/, ..
  primarypg_listening_channels/primary
 /indexterm
  
 para
  functionpg_listening_channels/function returns a set of names of
! channels that the current session is listening to.  See xref
! linkend=sql-listen for more information.
 /para
  
 indexterm
--- 14951,14969 
  primarypg_listening_channels/primary
 /indexterm
  
+indexterm
+ primarypg_notification_queue_usage/primary
+/indexterm
+ 
 para
  functionpg_listening_channels/function returns a set of names of
! asynchronous notification channels that the current session is listening
! to.  functionpg_notification_queue_usage/function returns the
! fraction of the total available space for notifications currently
! occupied by notifications that are waiting to be processed, as a
! typedouble/type in the range 0-1.
! See xref linkend=sql-listen and xref linkend=sql-notify
! for more information.
 /para
  
 indexterm
*** a/doc/src/sgml/ref/notify.sgml
--- b/doc/src/sgml/ref/notify.sgml
***
*** 166,171  NOTIFY replaceable class=PARAMETERchannel/replaceable [ , replaceable cla
--- 166,176 
 current transaction so that cleanup can proceed.
/para
para
+The function functionpg_notification_queue_usage/function returns the
+proportion of the queue that is currently occupied by pending notifications.
+See xref linkend=functions-info for more information.
+   /para
+   para
 A transaction that has executed commandNOTIFY/command cannot be
 prepared for two-phase commit.
/para
*** a/src/backend/commands/async.c
--- b/src/backend/commands/async.c
***
*** 371,376  static bool asyncQueueIsFull(void);
--- 371,377 
  static bool asyncQueueAdvance(volatile QueuePosition *position, int entryLength);
  static void asyncQueueNotificationToEntry(Notification *n, AsyncQueueEntry *qe);
  static ListCell *asyncQueueAddEntries(ListCell *nextNotify);
+ static double asyncQueueUsage(void);
  static void asyncQueueFillWarning(void);
  static bool SignalBackends(void);
  static void asyncQueueReadAllNotifications(void);
***
*** 1362,1387  asyncQueueAddEntries(ListCell *nextNotify)
  }
  
  /*
!  * Check whether the queue is at least half full, and emit a warning if so.
!  *
!  * This is unlikely given the size of the queue, but possible.
!  * The warnings show up at most once every QUEUE_FULL_WARN_INTERVAL.
   *
!  * Caller must hold exclusive AsyncQueueLock.
   */
! static void
! asyncQueueFillWarning(void)
  {
! 	int			headPage = QUEUE_POS_PAGE(QUEUE_HEAD);
! 	int			tailPage = QUEUE_POS_PAGE(QUEUE_TAIL);
! 	int			occupied;
! 	double		fillDegree;
! 	TimestampTz t;
  
  	occupied = headPage - tailPage;
  
  	if (occupied == 0)
! 		return;	/* fast exit for common case */
  
  	if (occupied < 0)
  	{
--- 1363,1399 
  }
  
  /*
!  * SQL function to return the fraction of the notification queue currently
!  * occupied.
!  */
! Datum
! pg_notification_queue_usage(PG_FUNCTION_ARGS)
! {
! 	double usage;
! 
! 	LWLockAcquire(AsyncQueueLock, LW_SHARED);
! 	usage = asyncQueueUsage();
! 	LWLockRelease(AsyncQueueLock);
! 
! 	PG_RETURN_FLOAT8(usage);
! }
! 
! /*
!  * Return the fraction of the queue that is currently occupied.
   *
!  * The caller must hold AsyncQueueLock in (at least) shared mode.
   */
! static double
! asyncQueueUsage(void)
  {
! 	int		headPage = QUEUE_POS_PAGE(QUEUE_HEAD);
! 	int		tailPage = QUEUE_POS_PAGE(QUEUE_TAIL);
! 	int		occupied;
  
  	occupied = headPage - tailPage;
  
  	if (occupied == 0)
! 		return (double) 0;		/* fast exit for common case */
  
  	if (occupied < 0)
  	{
***
*** 1389,1396  asyncQueueFillWarning(void)
  		occupied += QUEUE_MAX_PAGE + 1;
  	}
  
! 	fillDegree = (double) occupied / (double) ((QUEUE_MAX_PAGE + 1) / 2);
  
  	if (fillDegree < 0.5)
  		return;
  
--- 1401,1424 
  		occupied += QUEUE_MAX_PAGE + 1;
  	}
  

Re: [HACKERS] [PATCH] postgres_fdw extension support

2015-07-17 Thread Simon Riggs
On 17 July 2015 at 01:23, Michael Paquier michael.paqu...@gmail.com wrote:


  Well, as I see it there’s three broad categories of behavior available:
 
  1- Forward nothing non-built-in (current behavior)
  2- Use options to forward only specified non-built-in things (either in
  function chunks (extensions, as in this patch) or one-by-one (mark your
  desired functions / ops))
  3- Forward everything if a “forward everything” option is set

 Then what you are describing here is a parameter able to do a switch
 among this selection:
 - nothing, which is to check on built-in stuff
 - extension list.
 - all.


"all" seems to be a very blunt instrument, but it is certainly appropriate
in some cases

I see an intermediate setting, giving four categories in total

0. Nothing, as now
1. Extension list option on the Foreign Server
2. Extension list option on the Foreign Data Wrapper, applies to all
Foreign Servers of that type
3. All extensions allowed

I would imagine we would need Inclusion and Exclusion on each
e.g. +postgis, -postgis

Cat 2 and 3 would be merged to get the list for the specific server. That
would allow you to set a default for all servers of +postgis, yet set a
specific server as -postgis, for example.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Retrieve the snapshot's LSN

2015-07-17 Thread Florent Guiliani
On Thu, Jul 16, 2015 at 6:40 PM, Robert Haas robertmh...@gmail.com wrote:
 I don't think the snapshot's LSN has a well-defined meaning in
 general.  The obvious meaning would be the LSN such that all commits
 prior to that LSN are visible and all later commits are invisible,

I like this definition.

 but such an LSN need not exist.  Suppose A writes a commit record at
 LSN 0/1, and then B writes a commit record at 0/10100, and then B
 calls ProcArrayEndTransaction().  At this point, B is visible and A is
 not visible, even though A's commit record precedes that of B.

Maybe that's what Andres referred to as doable with some finicky locking.

There are some race conditions in building a snapshot with an associated
consistent LSN. If I understand your example, A is supposed to call
ProcArrayEndTransaction() anytime soon. Could we wait/lock until that
happens?

--
Florent




Re: [HACKERS] Support for N synchronous standby servers - take 2

2015-07-17 Thread Beena Emerson
Robert Haas wrote:
 
 On Thu, Jul 16, 2015 at 1:32 PM, Simon Riggs <simon@> wrote:
  Personally, I think we're going to find that using JSON for this
  rather than a custom syntax makes the configuration strings two or
  three times as long for
 
  They may well be 2-3 times as long. Why is that a negative?
 
 In my opinion, brevity makes things easier to read and understand.  We
 also don't support multi-line GUCs, so if your configuration takes 140
 characters, you're going to have a very long line in your
 postgresql.conf (and in your pg_settings output, etc.)
 
  * No additional code required in the server to support this syntax (so
 no
  bugs)
 
 I think you'll find that this is far from true.  Presumably not any
 arbitrary JSON object will be acceptable.  You'll have to parse it as
 JSON, and then validate that it is of the expected form.  It may not
 be MORE code than implementing a mini-language from scratch, but I
 wouldn't expect to save much.
 
  * Developers will immediately understand the format
 
 I doubt it.  I think any format that we pick will have to be carefully
 documented.  People may know what JSON looks like in general, but they
 will not immediately know what bells and whistles are available in
 this context.
 
 * Easy to programmatically manipulate in a range of languages
 
 I agree that JSON has that advantage, but I doubt that it is important
 here.  I would expect that people might need to generate a new config
 string and dump it into postgresql.conf, but that should be easy with
 any reasonable format.  I think it will be rare to need to parse the
 postgresql.conf string, manipulate it programatically, and then put it
 back.  As we've already said, most configurations are simple and
 shouldn't change frequently.  If they're not or they do, that's a
 problem of itself.
 

All points here are valid, and I would prefer a new language over JSON. I
agree the new validation code would have to be properly tested to avoid
bugs, but it won't be too difficult.

Also, I think the approach of generating a WAL record was avoided because any
attempt to change the syncrep settings would wait indefinitely when a
mandatory sync candidate (as per the current settings) goes down (explained in
earlier post id: cahgqgwe_-hczw687b4sdmwqakkpcu-uxmf3mkydb9mu38cj...@mail.gmail.com)





-
Beena Emerson

--
View this message in context: 
http://postgresql.nabble.com/Support-for-N-synchronous-standby-servers-take-2-tp5849384p5858255.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] Grouping Sets: Fix unrecognized node type bug

2015-07-17 Thread Kyotaro HORIGUCHI
Hello, this looks to be a kind of thinko. The attached patch
fixes it.

===
According to the comment on transformGroupingSet, it assumes that
the given GROUPING SETS node is already flattened out, and
flatten_grouping_sets() does that. The details of the
transformation are described in the comment for the function.

The problem is what the function does for nested grouping sets.

 Node *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
 result_set = lappend(result_set, n2);

This does not flatten the list as required: n2 should be
concatenated if it is a list. The attached small patch fixes it,
and the problematic query returns a sane (perhaps) result.

# Though I don't know the exact definition of the syntax..

=# select sum(c) from gstest2 group by grouping sets ((), grouping sets ((), 
grouping sets ((;
 sum 
-----
  12
  12
  12
(3 rows)

=# select sum(c) from gstest2 group by grouping sets ((a), grouping sets ((b), 
grouping sets ((c;
 sum 
-----
  10
   2
   6
   6
   8
   4
(6 rows)

regards,

At Fri, 17 Jul 2015 11:37:26 +0530, Jeevan Chalke 
jeevan.cha...@enterprisedb.com wrote in 
CAM2+6=xprgumbqwtsczbecc3xjv4zh1ryq3fwds5uajon1i...@mail.gmail.com
    Jeevan> It looks like we do support nested GROUPING SETS, I mean Sets
    Jeevan> within Sets, not other types.  However this nesting is broken.
 
  Good catch, but I'm not yet sure your fix is correct; I'll need to look
  into that.
 
 
 Sure. Thanks.
 
 However I wonder why we are supporting GROUPING SETS inside GROUPING SETS.
 On Oracle, it is throwing an error.
 We are not trying to be Oracle compatible, but just curious to know.
 
 I have tried restricting it in attached patch.
 
 But it may require few comment adjustment.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index e90e1d6..708ebc9 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -1804,8 +1804,10 @@ flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 foreach(l2, gset-content)
 {
 	Node	   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
-
-	result_set = lappend(result_set, n2);
+	if (IsA(n2, List))
+		result_set = list_concat(result_set, (List *)n2);
+	else
+		result_set = lappend(result_set, n2);
 }
 
 /*



[HACKERS] segfault in 9.5alpha - plpgsql function, implicit cast and IMMUTABLE cast function

2015-07-17 Thread Geoff Winkless
Hi all

While doing some testing of 9.5a one of my colleagues (not on list) found a
reproducible server segfault.

We've broken it down to a minimal script to reproduce below.

Reproduced on both machines on which we've installed 9.5 so far (both built
from source since we don't have any RHEL7 machines in development):

RHEL5.3 (Linux 2.6.18-128.el5 i386), gcc version 4.6.4
CentOS 6.5 (Linux 2.6.32-431.el6.i686), gcc version 4.4.7-4

Script for psql:

=== cut ===

CREATE OR REPLACE FUNCTION to_date(integer) RETURNS date LANGUAGE sql
IMMUTABLE AS $$

SELECT $1::text::date

$$;


DROP CAST IF EXISTS (integer AS date);

CREATE CAST (integer AS date) WITH FUNCTION to_date(integer) AS IMPLICIT;



CREATE OR REPLACE FUNCTION newcrash(INTEGER) returns DATE LANGUAGE plpgsql
AS $$ BEGIN

RETURN $1;

END$$;


SELECT newcrash(20150202);

SELECT newcrash(20150203);


=== cut ===



It doesn't crash the first time, but does consistently crash the second.
Given that if I remove IMMUTABLE from the function definition it doesn't
fail, it implies that there's a problem with the mechanism used to cache
function results - although the fact that the second function call doesn't
have to be the same value does suggest it's a problem with the code that
*searches* that result cache, rather than the section that retrieves it.


I tried cutting out the implicit CAST altogether and doing

RETURN to_date($1);


but this doesn't fail, which implies also that it's something related to
the implicit cast.


If I DECLARE a local DATE variable and SELECT INTO that (rather than just
using RETURN $1), it crashes at that point too.

Hope someone can get something useful from the above. Any questions, please
ask.


Geoff


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2015-07-17 Thread Robert Haas
Overall, you seem to have made some significant progress on the design
since the last version of this patch.  There's probably a lot left to
do, but the design seems more mature now.  I haven't read the code,
but here are some comments based on the email.

On Thu, Jul 9, 2015 at 6:18 AM, Ashutosh Bapat
ashutosh.ba...@enterprisedb.com wrote:
 The patch introduces a GUC atomic_foreign_transaction, which when ON ensures
 atomic commit for foreign transactions, otherwise not. The value of this GUC
 at the time of committing or preparing a local transaction is used. This
 gives applications the flexibility to choose the behaviour as late in the
 transaction as possible. This GUC has no effect if there are no foreign
 servers involved in the transaction.

Hmm.  I'm not crazy about that name, but I don't have a better idea either.

One thing about this design is that it makes atomicity a property of
the transaction rather than the server.  That is, any given
transaction is either atomic with respect to all servers or atomic
with respect to none.  You could also design this the other way: each
server is either configured to do atomic commit, or not.  When a
transaction is committed, it prepares on those servers which are
configured for it, and then commits the others.  So then you can have
a partially atomic transaction where, for example, you transfer
money from account A to account B (using one or more FDW connections
that support atomic commit) and also use twitter_fdw to tweet about it
(using an FDW connection that does NOT support atomic commit).  The
tweet will survive even if the local commit fails, but that's OK.  You
could even do this at table granularity: we'll prepare the transaction
if at least one foreign table involved in the transaction has
atomic_commit = true.

In some sense I think this might be a nicer design, because suppose
you connect to a foreign server and mostly just log stuff but
occasionally do important things there.  In your design, you can do
this, but you'll need to make sure atomic_foreign_transaction is set
for the correct set of transactions.  But in what I'm proposing here
we might be able to derive the correct value mostly automatically.

We should consider other possible designs as well; the choices we make
here may have a significant impact on usability.

 Another GUC max_fdw_transactions sets the maximum number of transactions
 that can be simultaneously prepared on all the foreign servers. This limits
 the memory required for remembering the prepared foreign transactions.

How about max_prepared_foreign_transactions?

 Two new FDW hooks are introduced for transaction management.
 1. GetPrepareId: to get the prepared transaction identifier for a given
 foreign server connection. An FDW which doesn't want to support this feature
 can keep this hook undefined (NULL). When defined the hook should return a
 unique identifier for the transaction prepared on the foreign server. The
 identifier should be unique enough not to conflict with currently prepared
 or future transactions. This point will be clear when discussing phase 2 of
 2PC.

 2. HandleForeignTransaction: to end a transaction in specified way. The hook
 should be able to prepare/commit/rollback current running transaction on
 given connection or commit/rollback a previously prepared transaction. This
 is described in detail while describing phase two of two-phase commit. The
 function is required to return a boolean status of whether the requested
 operation was successful or not. The function or its minions should not
 raise any error on failure so as not to interfere with the distributed
 transaction processing. This point will be clarified more in the description
 below.

HandleForeignTransaction is not very descriptive, and I think you're
jamming together things that ought to be separated.  Let's have a
PrepareForeignTransaction and a ResolvePreparedForeignTransaction.

 A foreign server, user mapping corresponding to an unresolved foreign
 transaction is not allowed to be dropped or altered until the foreign
 transaction is resolved. This is required to retain the connection
 properties which need to resolve the prepared transaction on the foreign
 server.

I agree with not letting it be dropped, but I think not letting it be
altered is a serious mistake.  Suppose the foreign server dies in a
fire, its replica is promoted, and we need to re-point the master at
the replica's hostname or IP.

 Handling non-atomic foreign transactions
 ===
 When atomic_foreign_transaction is disabled, one-phase commit protocol is
 used to commit/rollback the foreign transactions. After the local
 transaction has committed/aborted, all the foreign transactions on the
 registered foreign connections are committed or aborted resp. using hook
 HandleForeignTransaction. Failing to commit a foreign transaction does not
 affect the other foreign transactions; they are still tried to be committed
 (if the local 

Re: [HACKERS] Retrieve the snapshot's LSN

2015-07-17 Thread Robert Haas
On Fri, Jul 17, 2015 at 8:31 AM, Florent Guiliani flor...@guiliani.fr wrote:
 A pg_export_snapshot_for_slot(...) would work very well.

 Let me explain the use case. You have many downstream systems that are
 replicated with logical decoding. Using a dedicated replication slot
 for each target is not practical. A single logical replication slot is
 configured. It generates a stream of LSN-stamped transactions in
 commit order. Those transactions are published to all downstream
 nodes.

 The snapshot exported during the slot creation can be used to generate
 a complete dump that the replicated systems will load before applying
 the transaction stream.

 How do you individually reload/recover one of the downstream node? You
 can use the initial dump and reapply all transactions emitted since
 the slot's inception. It will quickly become impractical. What you
 need is to generate a newer dump and only apply the transactions from
 that point.

I'd like to point out that I told Andres repeatedly during the
development of logical decoding that this exact thing was going to be
a problem.  He told me fixing it was way too complicated, but I hope
he'll relent, because I still think this is important.

(Not trying to be a jerk here.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] option for psql - short list non template database

2015-07-17 Thread Robert Haas
On Thu, Jul 16, 2015 at 12:21 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Terribly often I use this pattern:

  psql -c "select datname from pg_database where not datistemplate and
 datallowconn" postgres

 What about introduction new long option that does it?

 psql -At -list --names --without-templates

I think you should just put an alias in your .bashrc file.  If we add
everyone's favorite shortcuts to every tool, we will end up with an
unsupportable number of features that are each of interest to only a
very few people.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSets: Getting collation related error when GSets has text column

2015-07-17 Thread Andrew Gierth
>>>>> "Jeevan" == Jeevan Chalke jeevan.cha...@enterprisedb.com writes:

 Jeevan> Hi,

 Jeevan> When we have a text column in the GROUPING SETS (and with some specific
 Jeevan> order of columns), we are getting an error saying
 Jeevan> "could not determine which collation to use for string comparison"

Good catch.

 Jeevan> After spending enough time, if I understood the code correctly,
 Jeevan> we are re-ordering the sort columns but we are not remapping
 Jeevan> the grpColIdx correctly.

Yup.

 Jeevan> Attached is a patch which attempts to fix this issue. However I am
 Jeevan> not sure whether it is correct. But it does not add any new
 Jeevan> issues as such.  Added a few tests in the patch as well.

I wouldn't have bothered moving the Assert; it can be removed. Otherwise
it looks correct.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] option for psql - short list non template database

2015-07-17 Thread Pavel Stehule
2015-07-17 18:56 GMT+02:00 Robert Haas robertmh...@gmail.com:

 On Thu, Jul 16, 2015 at 12:21 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Terribly often I use this pattern:
 
   psql -c "select datname from pg_database where not datistemplate and
  datallowconn" postgres
 
  What about introduction new long option that does it?
 
  psql -At -list --names --without-templates

 I think you should just put an alias in your .bashrc file.  If we add
 everyone's favorite shortcuts to every tool, we will end up with an
 unsupportable number of features that are each of interest to only a
 very few people.


sure - but taking a list of databases and iterating over it is a pretty
common task.

Pavel



 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



Re: [HACKERS] [PATCH] Generalized JSON output functions

2015-07-17 Thread Andrew Dunstan


On 07/17/2015 10:30 AM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


I have already pointed out how this patch is fundamentally broken. You can
achieve your aims by a fairly small amount of code inside your logical
decoder, and with no core code changes whatsoever. So I'm puzzled why we are
even still debating this broken design.

I went through all your responses over the entire thread and I couldn't
find your argument about how this is fundamentally broken.  Can you
restate, or maybe give an archive link if I just missed it?


(Saying "but it changes so much of the existing code" is not really a
fundamental problem to me.  I mean, it's not like the existing code is
perfect and needs no changes.)




On July 13 I wrote:

Yes, but I think the plugin is the right place to do it. What is more, 
this won't actually prevent you completely from producing 
non-ECMAScript compliant JSON, since json or jsonb values containing 
offending numerics won't be caught, AIUI. But a fairly simple to write 
function that reparsed and fixed the JSON inside the decoder would work.


The OP admitted that this was a serious flaw in his approach. In fact, 
given that a json value can contain an offending numeric value, any 
approach which doesn't involve reparsing is pretty much bound to fail.


cheers

andrew




Re: [HACKERS] 9.6 First Commitfest Begins

2015-07-17 Thread Heikki Linnakangas
The third week of the July commitfest is now past us. There are 35 
patches left in Needs Review state. Progress has slowed somewhat, which 
is understandable, as the trivial patches often get weeded out first. 
But we need to make progress on the bigger patches too to complete this.


If you see a feature you like, or don't like, in the commitfest, please 
take a moment to read the patch and comment on it. If you can do 
testing, that is very welcome too; just pick a patch, compile it, and do 
exotic things with it to see if you can break it.


If you have signed up as Reviewer, please go ahead and do the
review you intend to do as soon as possible. Please also feel free to
just review a patch without putting your name in; drive-by reviews are
welcome.


On 07/07/2015 02:31 PM, Heikki Linnakangas wrote:

The first week of the July commitfest has passed. A lot of progress has
been made, but there's still a lot to do. There are still 53 patches in
Needs Review state.

Please pick a patch, and review it. Any patch. Don't be afraid of
reviewing a patch that someone else has signed up for.

If you have submitted a patch for review, please consider also reviewing
someone else's patch. If no-one has signed up to review your patch, you
can ask someone who you think would be a good reviewer to do so. He
might say no, but you've got nothing to lose.


- Heikki





Re: [HACKERS] Grouping Sets: Fix unrecognized node type bug

2015-07-17 Thread Andrew Gierth
>>>>> "Kyotaro" == Kyotaro HORIGUCHI horiguchi.kyot...@lab.ntt.co.jp writes:

 Kyotaro> Hello, this looks to be a kind of thinko. The attached patch
 Kyotaro> fixes it.

No, that's still wrong. Just knowing that there is a List is not enough
to tell whether to concat it or append it.

Jeevan's original patch tries to get around this by making the RowExpr
case wrap another List around its result (which is then removed by the
concat), but this is the wrong approach too because it breaks nested
RowExprs (which isn't valid syntax in the spec, because the spec allows
only column references in GROUP BY, not arbitrary expressions, but which
we have no reason not to support).

Attached is the current version of my fix (with Jeevan's regression
tests plus one of mine).

-- 
Andrew (irc:RhodiumToad)

diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index e90e1d6..5a48a02 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -1734,7 +1734,7 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
  * Inside a grouping set (ROLLUP, CUBE, or GROUPING SETS), we expect the
  * content to be nested no more than 2 deep: i.e. ROLLUP((a,b),(c,d)) is
  * ok, but ROLLUP((a,(b,c)),d) is flattened to ((a,b,c),d), which we then
- * normalize to ((a,b,c),(d)).
+ * (later) normalize to ((a,b,c),(d)).
  *
  * CUBE or ROLLUP can be nested inside GROUPING SETS (but not the reverse),
  * and we leave that alone if we find it. But if we see GROUPING SETS inside
@@ -1803,9 +1803,16 @@ flatten_grouping_sets(Node *expr, bool toplevel, bool *hasGroupingSets)
 
foreach(l2, gset->content)
 {
-	Node	   *n2 = flatten_grouping_sets(lfirst(l2), false, NULL);
+	Node	   *n1 = lfirst(l2);
+	Node	   *n2 = flatten_grouping_sets(n1, false, NULL);
 
-	result_set = lappend(result_set, n2);
+	if (IsA(n1, GroupingSet) &&
+		((GroupingSet *) n1)->kind == GROUPING_SET_SETS)
+	{
+		result_set = list_concat(result_set, (List *) n2);
+	}
+	else
+		result_set = lappend(result_set, n2);
 }
 
 /*
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index 842c2ae..ff3ba9b 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -145,6 +145,127 @@ select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
|   |  12 |   36
 (6 rows)
 
+-- nesting with grouping sets
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets((), grouping sets(())))
+  order by 1 desc;
+ sum 
+-
+  12
+  12
+  12
+(3 rows)
+
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets((), grouping sets(((a, b)))))
+  order by 1 desc;
+ sum 
+-
+  12
+  12
+   8
+   2
+   2
+(5 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(rollup(c), grouping sets(cube(c))))
+  order by 1 desc;
+ sum 
+-
+  12
+  12
+   6
+   6
+   6
+   6
+(6 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(a, grouping sets(a, cube(b)))
+  order by 1 desc;
+ sum 
+-
+  12
+  10
+  10
+   8
+   4
+   2
+   2
+(7 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(grouping sets((a, (b))))
+  order by 1 desc;
+ sum 
+-
+   8
+   2
+   2
+(3 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(grouping sets((a, b)))
+  order by 1 desc;
+ sum 
+-
+   8
+   2
+   2
+(3 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(a, grouping sets(a), a))
+  order by 1 desc;
+ sum 
+-
+  10
+  10
+  10
+   2
+   2
+   2
+(6 rows)
+
+select sum(c) from gstest2
+  group by grouping sets(grouping sets(a, grouping sets(a, grouping sets(a), ((a)), a, grouping sets(a), (a)), a))
+  order by 1 desc;
+ sum 
+-
+  10
+  10
+  10
+  10
+  10
+  10
+  10
+  10
+   2
+   2
+   2
+   2
+   2
+   2
+   2
+   2
+(16 rows)
+
+select sum(c) from gstest2
+  group by grouping sets((a,(a,b)), grouping sets((a,(a,b)),a))
+  order by 1 desc;
+ sum 
+-
+  10
+   8
+   8
+   2
+   2
+   2
+   2
+   2
+(8 rows)
+
 -- empty input: first is 0 rows, second 1, third 3 etc.
 select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
  a | b | sum | count 
diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql
index 0bffb85..d886fae 100644
--- a/src/test/regress/sql/groupingsets.sql
+++ b/src/test/regress/sql/groupingsets.sql
@@ -73,6 +73,35 @@ select grouping(a), a, array_agg(b),
 select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
   from gstest2 group by rollup (a,b) order by rsum, a, b;
 
+-- nesting with grouping sets
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets((), grouping sets(())))
+  order by 1 desc;
+select sum(c) from gstest2
+  group by grouping sets((), grouping sets((), grouping sets(((a, b)))))
+  order by 1 desc;
+select sum(c) from gstest2
+  group by grouping sets(grouping 

Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-07-17 Thread Peter Geoghegan
On Fri, Jul 17, 2015 at 11:37 AM, Robert Haas robertmh...@gmail.com wrote:
 I am on vacation right now, but I might have some time tomorrow to deal with
 it. If not, it will be Sunday or Monday when I get to it.

 Is this still pending?

Yes.


-- 
Peter Geoghegan




Re: [HACKERS] Asynchronous execution on FDW

2015-07-17 Thread Robert Haas
On Fri, Jul 3, 2015 at 4:41 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 At a quick glance, I think this has all the same problems as starting the
 execution at ExecInit phase. The correct way to do this is to kick off the
 queries in the first IterateForeignScan() call. You said that ExecProc
 phase does not fit - why not?

What exactly are those problems?

I can think of these:

1. If the scan is parametrized, we probably can't do it for lack of
knowledge of what they will be.  This seems easy; just don't do it in
that case.
2. It's possible that we're down inside some subtree of the plan that
won't actually get executed.  This is trickier.

Consider this:

Append
-> Foreign Scan
-> Foreign Scan
-> Foreign Scan
repeat 17 more times

If we don't start each foreign scan until the first tuple is fetched,
we will not get any benefit here, because we won't fetch the first
tuple from query #2 until we finish reading the results of query #1.
If the result of the Append node will be needed in its entirety, we
really, really want to launch all of those queries as early as possible.
OTOH, if there's a Limit node with a small limit on top of the Append
node, that could be quite wasteful.  We could decide not to care:
after all, if our limit is satisfied, we can just bang the remote
connections shut, and if they wasted some CPU, well, tough luck for
them.  But it would be nice to be smarter.  I'm not sure how, though.
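The eager-launch side of that tradeoff (setting the Limit problem aside) can be sketched with ordinary threads; every name below is hypothetical and stands in for executor/FDW machinery, it is not real FDW code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remote_query(i):
    # Hypothetical stand-in for a query shipped to a foreign server;
    # the sleep models network and remote-execution latency.
    time.sleep(0.01)
    return "result-%d" % i

def append_eager(n_children):
    """Kick off every child scan up front, then consume them in order.
    The remote servers work concurrently, so total latency approaches
    max(child latencies) instead of their sum."""
    with ThreadPoolExecutor(max_workers=n_children) as pool:
        futures = [pool.submit(remote_query, i) for i in range(n_children)]
        return [f.result() for f in futures]

print(append_eager(3))
# -> ['result-0', 'result-1', 'result-2']
```

The lazy variant would call remote_query() only inside the consuming loop, serializing the 0.01 s waits -- which is the behavior described above when each foreign scan starts at its first fetch.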

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-07-17 Thread Robert Haas
On Thu, Jul 9, 2015 at 10:49 AM, Andrew Dunstan and...@dunslane.net wrote:
 Where are we on this? This is currently a 9.5 release blocker.

 I am on vacation right now, but I might have some time tomorrow to deal with
 it. If not, it will be Sunday or Monday when I get to it.

Is this still pending?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Re: [COMMITTERS] pgsql: Retain comments on indexes and constraints at ALTER TABLE ... TY

2015-07-17 Thread Heikki Linnakangas

On 07/17/2015 05:40 PM, Michael Paquier wrote:

On Fri, Jul 17, 2015 at 11:16 PM, Kevin Grittner kgri...@ymail.com wrote:

Heikki Linnakangas heikki.linnakan...@iki.fi wrote:


This fixes bug #13126, reported by Kirill Simonov.


It looks like you missed something with the addition of
AT_ReAddComment:

test_ddl_deparse.c:80:11: warning: enumeration value 'AT_ReAddComment' not 
handled in switch [-Wswitch]
 switch (subcmd->subtype)
 ^


Oops. If someone could pick up the attached (backpatch to 9.5 needed)...


Hmm, that function is pretty fragile; it will segfault on any AT_* type 
that it doesn't recognize. Thankfully you get that compiler warning, but 
we have added AT_* type codes before in minor releases. I couldn't quite 
figure out what the purpose of that module is, as there is no 
documentation or README or file-header comments on it. If it's there 
just to so you can run the regression tests that come with it, it might 
make sense to just add a default case to that switch to handle any 
unrecognized commands, and perhaps even remove the cases for the 
currently untested subcommands as it's just dead code. If it's supposed 
to act like a sample that you can copy-paste and modify, then perhaps 
that would still be the best option, and add a comment there explaining 
that it only cares about the most common subtypes but you can add 
handlers for others if necessary.


Alvaro, you want to comment on that?

- Heikki





Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-07-17 Thread Andrew Dunstan


On 07/17/2015 02:37 PM, Robert Haas wrote:

On Thu, Jul 9, 2015 at 10:49 AM, Andrew Dunstan and...@dunslane.net wrote:

Where are we on this? This is currently a 9.5 release blocker.

I am on vacation right now, but I might have some time tomorrow to deal with
it. If not, it will be Sunday or Monday when I get to it.

Is this still pending?



Yes, been tied up a bit unexpectedly this week, am just getting down to 
it right now.


cheers

andrew




Re: [HACKERS] pgbench - allow backslash-continuations in custom scripts

2015-07-17 Thread Fabien COELHO


Hello Tom,

(although actually, why wouldn't we want to just implement variable 
substitution exactly like it is in psql?


Pgbench variable substitution is performed when the script is run, not while 
the file is being processed for being split, which is when a lexer would be 
used. The situation is not the same with psql. The most it could do would be 
to keep track of what substitution are done in queries.



So this is looking *eminently* doable.


Possibly.  How much more effort would be involved compared to the quick patch 
I did, I wonder:-)


I had a quick look at the code, and although it seems doable to hack the 
psql lexer for pgbench benefit, I do not think it is a good idea:


 - improving pgbench scripts is really a small feature which requires
   a light weight approach in my opinion. There is no real benefit
   of having a lexer solution which can handle contrived cases, because
   they would be contrived cases and not the kind of tests really written
   by pgbench users.

 - the solution would probably be fragile to changes in psql, or
   could prevent such changes because of the pgbench dependency,
   and this does not look desirable.

 - it would involve much more time than I'm ready to give on such a
   small feature.

So the current patch, or possibly variants of this patch to fix issues 
that may be raised, is what I'm ready to put forward on this.


If you feel that this feature only deserve a lexer solution, then the 
patch should be returned with feedback.


--
Fabien.




[HACKERS] patch: enhanced DROP POLICY tab complete

2015-07-17 Thread Pavel Stehule
Hi

I am sending a trivial patch that enforces more precise tab completion
for the DROP POLICY statement.

Regards

Pavel
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
new file mode 100644
index 0683548..9596af6
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*** static const SchemaQuery Query_for_list_
*** 742,747 
--- 742,760 
 "   FROM pg_catalog.pg_tablesample_method "\
 "  WHERE substring(pg_catalog.quote_ident(tsmname),1,%d)='%s'"
  
+ #define Query_for_list_of_policies \
+ " SELECT pg_catalog.quote_ident(polname) "\
+ "   FROM pg_catalog.pg_policy "\
+ "  WHERE substring(pg_catalog.quote_ident(polname),1,%d)='%s'"
+ 
+ #define Query_for_list_of_tables_for_policy \
+ "SELECT pg_catalog.quote_ident(relname) "\
+ "  FROM pg_catalog.pg_class "\
+ " WHERE (%d = pg_catalog.length('%s'))"\
+ "   AND oid IN "\
+ "       (SELECT polrelid FROM pg_catalog.pg_policy "\
+ "         WHERE pg_catalog.quote_ident(polname)='%s')"
+ 
  /*
   * This is a list of all things in Pgsql, which can show up after CREATE or
   * DROP; and there is also a query to get a list of them.
*** psql_completion(const char *text, int st
*** 2891,2905 
  		COMPLETE_WITH_QUERY(Query_for_list_of_event_triggers);
  	}
  
  	/* DROP POLICY <name> ON */
  	else if (pg_strcasecmp(prev3_wd, "DROP") == 0 &&
  			 pg_strcasecmp(prev2_wd, "POLICY") == 0)
  		COMPLETE_WITH_CONST("ON");
  	/* DROP POLICY <name> ON <table> */
  	else if (pg_strcasecmp(prev4_wd, "DROP") == 0 &&
  			 pg_strcasecmp(prev3_wd, "POLICY") == 0 &&
  			 pg_strcasecmp(prev_wd, "ON") == 0)
! 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
  
  	/* DROP RULE */
  	else if (pg_strcasecmp(prev3_wd, "DROP") == 0 &&
--- 2904,2929 
  		COMPLETE_WITH_QUERY(Query_for_list_of_event_triggers);
  	}
  
+ 	/* DROP POLICY <name> */
+ 	else if (pg_strcasecmp(prev2_wd, "DROP") == 0 &&
+ 			 pg_strcasecmp(prev_wd, "POLICY") == 0)
+ 	{
+ 		COMPLETE_WITH_QUERY(Query_for_list_of_policies);
+ 	}
  	/* DROP POLICY <name> ON */
  	else if (pg_strcasecmp(prev3_wd, "DROP") == 0 &&
  			 pg_strcasecmp(prev2_wd, "POLICY") == 0)
+ 	{
  		COMPLETE_WITH_CONST("ON");
+ 	}
  	/* DROP POLICY <name> ON <table> */
  	else if (pg_strcasecmp(prev4_wd, "DROP") == 0 &&
  			 pg_strcasecmp(prev3_wd, "POLICY") == 0 &&
  			 pg_strcasecmp(prev_wd, "ON") == 0)
! 	{
! 		completion_info_charp = prev2_wd;
! 		COMPLETE_WITH_QUERY(Query_for_list_of_tables_for_policy);
! 	}
  
  	/* DROP RULE */
  	else if (pg_strcasecmp(prev3_wd, "DROP") == 0 &&



Re: [HACKERS] Memory Accounting v11

2015-07-17 Thread Jeff Davis
On Fri, 2015-07-17 at 15:52 +1200, David Rowley wrote:

 Should we mark the patch as returned with feedback in the commitfest
 app then?

I believe the memory accounting patch has been rejected. Instead, the
work will be done in the HashAgg patch.

Thank you for the review!

Regards,
Jeff Davis







Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-07-17 Thread Peter Geoghegan
On Fri, Jul 17, 2015 at 6:20 PM, Andrew Dunstan and...@dunslane.net wrote:
 OK, I have committed this and updated the open issues list on the wiki.

Thanks, Andrew.


-- 
Peter Geoghegan




Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2015-07-17 Thread Jim Nasby

On 7/15/15 11:38 PM, Thakur, Sameer wrote:

Hello,

I am not really willing to show up as the picky guy here, but could it be possible 
to receive those patches as attached to emails instead of having them referenced 
by URL? I imagine that you are directly using the nabble interface.

Just configured a new mail client for nabble, did not know how to use it within 
an existing conversation.


Does this actually handle multiple indexes? It doesn't appear so, which 
I'd think is a significant problem... :/


I'm also not seeing how this will deal with exhausting 
maintenance_work_mem. ISTM that when that happens you'd definitely want 
a better idea of what's going on...

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




[HACKERS] WAL test/verification tool

2015-07-17 Thread Thomas Munro
Hi

I have heard rumours of a tool that could verify or compare the
effects of applying WAL records for testing/development purposes, but
I've been unable to track it down or find out if it was publicly
released.  Does anyone know the status of that or what it was called?

Thanks,

-- 
Thomas Munro
http://www.enterprisedb.com




[HACKERS] BRIN index and aborted transaction

2015-07-17 Thread Tatsuo Ishii
Forgive me if this has already been discussed somewhere.

When a transaction aborts, it seems a BRIN index leaves summary data
which is not valid any more. Is this an expected behavior?  I guess
the answer is yes, because it does not affect correctness of a query
result, but I would like to make sure.

The second question is: when does the wrong summary data go away? It seems
vacuum does not help. Do I have to recreate the index (or reindex)?

test=# begin;
BEGIN
test=# insert into t1 values(101);
INSERT 0 1
test=# SELECT * FROM brin_page_items(get_raw_page('brinidx', 2),'brinidx');
 itemoffset | blknum | attnum | allnulls | hasnulls | placeholder |       value
------------+--------+--------+----------+----------+-------------+--------------------
          1 |      0 |      1 | f        | f        | f           | {1 .. 28928}
          2 |    128 |      1 | f        | f        | f           | {28929 .. 57856}
          3 |    256 |      1 | f        | f        | f           | {57857 .. 86784}
[snip]
         34 |   4224 |      1 | f        | f        | f           | {954625 .. 983552}
         35 |   4352 |      1 | f        | f        | f           | {983553 .. 101}
(35 rows)

test=# abort;
ROLLBACK
test=# SELECT * FROM brin_page_items(get_raw_page('brinidx', 2),'brinidx');
 itemoffset | blknum | attnum | allnulls | hasnulls | placeholder |       value
------------+--------+--------+----------+----------+-------------+--------------------
          1 |      0 |      1 | f        | f        | f           | {1 .. 28928}
          2 |    128 |      1 | f        | f        | f           | {28929 .. 57856}
          3 |    256 |      1 | f        | f        | f           | {57857 .. 86784}
[snip]
         34 |   4224 |      1 | f        | f        | f           | {954625 .. 983552}
         35 |   4352 |      1 | f        | f        | f           | {983553 .. 101}
(35 rows)

test=# vacuum t1;
VACUUM
test=# SELECT * FROM brin_page_items(get_raw_page('brinidx', 2),'brinidx');
 itemoffset | blknum | attnum | allnulls | hasnulls | placeholder |       value
------------+--------+--------+----------+----------+-------------+--------------------
          1 |      0 |      1 | f        | f        | f           | {1 .. 28928}
          2 |    128 |      1 | f        | f        | f           | {28929 .. 57856}
          3 |    256 |      1 | f        | f        | f           | {57857 .. 86784}
[snip]
         33 |   4096 |      1 | f        | f        | f           | {925697 .. 954624}
         34 |   4224 |      1 | f        | f        | f           | {954625 .. 983552}
         35 |   4352 |      1 | f        | f        | f           | {983553 .. 101}
(35 rows)

test=# select max(i) from t1;
   max   
---------
 100
(1 row)
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] [PATCH] Function to get size of asynchronous notification queue

2015-07-17 Thread Brendan Jurd
On Fri, 17 Jul 2015 at 23:14 Robert Haas robertmh...@gmail.com wrote:

 Committed.  I changed one remaining use of "proportion" to "fraction",
 fixed an OID conflict, and reverted some unnecessary whitespace
 changes.


Thanks Robert.  Sorry I missed a "proportion" in my latest version, and
thanks for catching it.

Cheers,
BJ


Re: [HACKERS] dblink: add polymorphic functions.

2015-07-17 Thread Corey Huinker
On Wed, Jul 8, 2015 at 12:21 PM, Joe Conway m...@joeconway.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 07/08/2015 08:51 AM, Corey Huinker wrote:
  Questions: Would moving rowtype to the first parameter resolve the
  parameter ambiguity issue?

 Not for the existing functions but with new functions I don't think it
 matters. You would know to always ignore either the first or last
 argument when determining which variant was wanted.

  Would we want new function names anyway?

 Yes, see above

  How much of a goal is reducing function count?

 Do you mean reducing number of C functions or SQL functions?



C functions. Was there a reason (performance, style, etc) to have only one
function per function name, and let it suss out which signature call
happened this time, as opposed to having the signatures with defaulted
values implemented either as C wrappers or SQL wrappers around a C function
which can then assume the full-blown signature?

I'm asking because if making the C code more straightforward is a goal, and
wrapper overhead is minimal, then that could be a separate patch, either
preceding or following the polymorphic one.

UPDATE:
After less than an hour of recoding for the 3 _any() functions with full
signatures, I noticed that the C code would be exactly the same for the
non-anyelement sets if we implemented the signatures with omitted
parameters as SQL wrappers (of course, the _any() ones would call the C
function without STRICT mode).

So this patch would result in less C code while still adding 3 new
functions. Can anyone think of why that wouldn't be the best way to go?




 The problem is that jsonb_to_recordset() does not behave like these
 new dblink functions in that it requires a runtime column definition
 list. It might be better to use something completely different, so I
 think _any() or maybe _to_any() is better.

 Any other ideas for names out there?

 Joe




_any() is what I'm going with, sticking with a trailing anyelement to
highlight that it's just like the function without the _any aspect.

I'm also remembering to drop the --1.1 and restore the regression test case
I hijacked.


Re: [HACKERS] WAL test/verification tool

2015-07-17 Thread Alvaro Herrera
Thomas Munro wrote:
 Hi
 
 I have heard rumours of a tool that could verify or compare the
 effects of applying WAL records for testing/development purposes, but
 I've been unable to track it down or find out if it was publicly
 released.  Does anyone know the status of that or what it was called?

http://www.postgresql.org/message-id/cab7npqt6k-weggl0sg00ql2tctvn3n4taibyboni_jrf4cy...@mail.gmail.com

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_resetsysid

2015-07-17 Thread Peter Eisentraut
On 6/14/15 11:29 AM, Petr Jelinek wrote:
 0002 - Adds pg_resetsysid utility which changes the system id to newly
 generated one.
 
 0003 - Adds -s option to pg_resetxlog to change the system id to the one
 specified - this is separate from the other one as it can be potentially
 more dangerous.

Adding a new top-level binary creates a lot of maintenance overhead
(e.g., documentation, man page, translations, packaging), and it seems
silly to create a new tool whose only purpose is to change one number in
one file.  If we're going to have this in pg_resetxlog anyway, why not
create another option letter to assigns a generated ID?  As the
documentation says, resetting the system ID clears the WAL, so it's not
like this is a completely danger-free situation.





Re: [HACKERS] Support for N synchronous standby servers - take 2

2015-07-17 Thread Jim Nasby

On 7/16/15 12:40 PM, Robert Haas wrote:

They may well be 2-3 times as long. Why is that a negative?

In my opinion, brevity makes things easier to read and understand.  We
also don't support multi-line GUCs, so if your configuration takes 140
characters, you're going to have a very long line in your
postgresql.conf (and in your pg_settings output, etc.)


Brevity goes both ways, but I don't think that's the real problem here; 
it's the lack of multi-line support. The JSON that's been proposed makes 
you work really hard to track what level of nesting you're at, while 
every alternative format I've seen is terse enough to be very clear on a 
single line.


I'm guessing it'd be really ugly/hard to support at least this GUC being 
multi-line?

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Support retrieving value from any sequence

2015-07-17 Thread Jim Nasby

On 7/14/15 12:06 PM, Tom Lane wrote:

Thom Brown t...@linux.com writes:

On 14 July 2015 at 17:17, Robert Haas robertmh...@gmail.com wrote:

Since it's trivial to define this function if you need it, I'm not
sure there's a reason to include it in core.



It's not always possible to create functions on a system when access
is restricted.  It may even be the case that procedural languages are
prohibited, and plpgsql has been removed.


By that argument, *any* random function has to be in the core.

I really don't see what's wrong with "SELECT last_value FROM sequence",
especially since that has worked in every Postgres version since 6.x.
Anyone slightly worried about backwards compatibility wouldn't use
an equivalent function even if we did add one.


Because you can't do that for all functions in a database.

FWIW, I think it'd be better to have a pg_sequences view that's the 
equivalent of SELECT * FROM sequence for every sequence in the 
database. That would let you get whatever info you needed.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-07-17 Thread Andrew Dunstan


On 07/17/2015 02:49 PM, Andrew Dunstan wrote:


On 07/17/2015 02:37 PM, Robert Haas wrote:
On Thu, Jul 9, 2015 at 10:49 AM, Andrew Dunstan and...@dunslane.net 
wrote:

Where are we on this? This is currently a 9.5 release blocker.
I am on vacation right now, but I might have some time tomorrow to 
deal with

it. If not, it will be Sunday or Monday when I get to it.

Is this still pending?



Yes, been tied up a bit unexpectedly this week, am just getting down 
to it right now.






OK, I have committed this and updated the open issues list on the wiki.

cheers

andrew


