Re: [HACKERS] replication_reserved_connections

2013-07-28 Thread Atri Sharma


Sent from my iPad

On 28-Jul-2013, at 5:53, Marko Tiikkaja ma...@joh.to wrote:

 Hi,
 
 Yesterday an interesting scenario was diagnosed on IRC.  If you're running a 
 synchronous slave and the connection to the slave is lost momentarily, your 
 backends start naturally waiting for the slave to reconnect.  If then your 
 application keeps trying to create new connections, it can use all 
 non-reserved connections, thus locking out the synchronous slave when the 
 connection problem has resolved itself. This brings the entire cluster into a 
 state where manual intervention is necessary.
 
Solving that was fun!

 While you could limit the number of connections for non-replication roles, 
 that's not always possible or desirable.  I would like to introduce a way to 
 reserve connection slots for replication.  However, it's not clear how this 
 would work.  I looked at how superuser_reserved_connections is implemented, and 
 with small changes I could see how to implement two ideas:
 
  1) Reserve a portion of superuser_reserved_connections for replication
 connections.  For example, with max_connections=10,
 superuser_reserved_connections=2 and
 replication_reserved_connections=1, at 8 connections either a
 replication connection or a superuser connection can be created,
 and at 9 connections only a superuser one would be allowed.  This
 is a bit clumsy as there still aren't guaranteed slots for
 replication.
 

I would generally agree with sharing superuser reserved connections with 
replication. One thing I would like to explore is whether we could potentially 
add some sort of priority system for avoiding contention between superuser 
threads and replication threads competing for the same connection.

We could potentially add a GUC for specifying which has the higher priority.

I am just musing here, though.

Thanks and Regards,

Atri

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] improve Chinese locale performance

2013-07-28 Thread Martijn van Oosterhout
On Tue, Jul 23, 2013 at 10:34:21AM -0400, Robert Haas wrote:
 I pretty much lost interest in ICU upon reading that they use UTF-16
 as their internal format.
 
 http://userguide.icu-project.org/strings#TOC-Strings-in-ICU

The UTF-8 support has been steadily improving:

  For example, icu::Collator::compareUTF8() compares two UTF-8 strings
  incrementally, without converting all of the two strings to UTF-16 if
  there is an early base letter difference.

http://userguide.icu-project.org/strings/utf-8

For all other encodings you should be able to use an iterator. As to
performance I have no idea.

The main issue with strxfrm() is its lame API. If it supported
returning prefixes you'd be set, but as it is you need 10MB of memory
just to transform a 10MB string, even if only the first few characters
would be enough to sort...

Mvg,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [HACKERS] replication_reserved_connections

2013-07-28 Thread Marko Tiikkaja

On 28/07/2013 08:51, Atri Sharma wrote:

I would generally agree with sharing superuser reserved connections with 
replication. One thing I would like to explore is whether we could potentially 
add some sort of priority system for avoiding contention between superuser 
threads and replication threads competing for the same connection.

We could potentially add a GUC for specifying which has the higher priority.


This sounds an awful lot like it would have to scan through the list of 
existing connections, which I wanted to avoid.


Or maybe we could maintain a separate list of reserved connections, 
i.e. ones that were created when we were at max_connections - 
ReservedBackends?  We could quickly look through that list to see how 
many of each kind we have allowed.  Not sure if that's practical, though.




Regards,
Marko Tiikkaja




[HACKERS] Comparing toasted data (was improve Chinese locale performance)

2013-07-28 Thread Greg Stark
On Sun, Jul 28, 2013 at 10:39 AM, Martijn van Oosterhout
klep...@svana.org wrote:
 The main issue with strxfrm() is its lame API. If it supported
 returning prefixes you'd be set, but as it is you need 10MB of memory
 just to transform a 10MB string, even if only the first few characters
 would be enough to sort...

It occurs to me that the same issue impacts our handling of toast
data. If you compare a toasted bytea (or string in C locale) it would
be nice to fetch just the first chunk and start the comparison. Only
if you reach the end of that chunk should the next chunk be needed.
Even compressed data need not be decompressed past the point where the
comparison is decided. If the other datum is not toasted then you can
even know up front the worst case for how much needs to be
detoasted.

It's too bad this wouldn't work for non-C locale strings. The tool to
do it would be strxfrm again but I can't imagine how to store toasted
strxfrm data in addition to the string that wouldn't cost more than it
gained.

-- 
greg




Re: [HACKERS] replication_reserved_connections

2013-07-28 Thread Gibheer
On Sun, 28 Jul 2013 02:23:47 +0200
Marko Tiikkaja ma...@joh.to wrote:

 Hi,
 
 Yesterday an interesting scenario was diagnosed on IRC.  If you're 
 running a synchronous slave and the connection to the slave is lost 
 momentarily, your backends start naturally waiting for the slave to 
 reconnect.  If then your application keeps trying to create new 
 connections, it can use all non-reserved connections, thus locking
 out the synchronous slave when the connection problem has resolved
 itself. This brings the entire cluster into a state where manual
 intervention is necessary.
 
 While you could limit the number of connections for non-replication 
 roles, that's not always possible or desirable.  I would like to 
 introduce a way to reserve connection slots for replication.
 However, it's not clear how this would work.  I looked at how 
 superuser_reserved_connections is implemented, and with small changes I 
 could see how to implement two ideas:
 
   1) Reserve a portion of superuser_reserved_connections for
  replication connections.  For example, with max_connections=10,
  superuser_reserved_connections=2 and
  replication_reserved_connections=1, at 8 connections either a
  replication connection or a superuser connection can be created,
  and at 9 connections only a superuser one would be allowed.  This
  is a bit clumsy as there still aren't guaranteed slots for
  replication.
   2) A GUC which says superuser_reserved_connections can be used up
  by replication connections, and then limiting the number of
  replication connections using per-role limits to make sure
  superusers aren't locked out.
 
 Does anyone see a better way to do this?  I'm not too satisfied with 
 either of these ideas.
 
 
 Regards,
 Marko Tiikkaja
 
 

Hi,

I had the same problem and I created a patch to introduce a GUC for
reserved_replication_connections as a separate flag.
You can find my patch here
https://commitfest.postgresql.org/action/patch_view?id=1180

I am still waiting for feedback though.

regards,

Stefan Radomski




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-28 Thread Greg Stark
On Sun, Jul 28, 2013 at 6:49 AM, David Fetter da...@fetter.org wrote:
 Are you still on this?  Do you have questions or concerns?

Still on this, I've just been a bit busy the past few days.


-- 
greg




Re: [HACKERS] replication_reserved_connections

2013-07-28 Thread Marko Tiikkaja

On 2013-07-28 19:21, Gibheer wrote:

I had the same problem and I created a patch to introduce a GUC for
reserved_replication_connections as a separate flag.
You can find my patch here
https://commitfest.postgresql.org/action/patch_view?id=1180


Oops.  I guess I should've searched through the archives before my 
email.  I didn't remember seeing anything about this so I just assumed 
nobody was working on it.


I'll take a look at your patch.


Regards,
Marko Tiikkaja




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-28 Thread Greg Stark
On Wed, Jul 24, 2013 at 7:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I don't see any workable fix that doesn't involve the funny token, though.
 Consider

 CREATE VIEW v AS SELECT ... FROM UNNEST(...) WITH ORDINALITY;
 CREATE VIEW v AS SELECT ... FROM UNNEST(...) WITH NO DATA;

 WITH ORDINALITY really needs to be parsed as part of the FROM clause.
 WITH NO DATA really needs to *not* be parsed as part of the FROM clause.
 Without looking ahead more than one token, there is absolutely no way
 for the grammar to decide if it's got the whole FROM clause or not
 at the point where WITH is the next token.  So our choices are to have
 two-token lookahead at the lexer level, or to give up on bison and find
 something that can implement a parsing algorithm better than LALR(1).
 I know which one seems more likely to get done in the foreseeable future.


It occurs to me we might be being silly here.

Instead of collapsing WITH TIME and WITH ORDINALITY into a single
token, why don't we just modify the WITH token to WITH_FOLLOWED_BY_TIME
and WITH_FOLLOWED_BY_ORDINALITY but still keep the following token?
Then we can just include those two tokens everywhere we include WITH.
Basically we would be giving the parser a free extra token of
lookahead whenever it gets WITH.

I think that's isomorphic to what Tom suggested but requires less
surgery on the parser and automatically covers any other cases we
don't need to track down.

-- 
greg




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-28 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 Instead of collapsing WITH TIME and WITH ORDINALITY into a single
 token, why don't we just modify the WITH token to WITH_FOLLOWED_BY_TIME
 and WITH_FOLLOWED_BY_ORDINALITY but still keep the following token?
 Then we can just include those two tokens everywhere we include WITH.
 Basically we would be giving the parser a free extra token of
 lookahead whenever it gets WITH.

 I think that's isomorphic to what Tom suggested but requires less
 surgery on the parser and automatically covers any other cases we
 don't need to track down.

I suspect it's likely to come out about the same either way once you
account for all the uses of WITH.  Might be worth trying both to see
which seems less ugly.

regards, tom lane




Re: [HACKERS] replication_reserved_connections

2013-07-28 Thread Andres Freund
On 2013-07-28 02:23:47 +0200, Marko Tiikkaja wrote:
 While you could limit the number of connections for non-replication roles,
 that's not always possible or desirable.  I would like to introduce a way to
 reserve connection slots for replication.  However, it's not clear how this
 would work.  I looked at how superuser_reserved_connections is implemented,
 and with small changes I could see how to implement two ideas:
 
 Does anyone see a better way to do this?  I'm not too satisfied with either
 of these ideas.

Personally I think we just shouldn't allow normal connections in the
backend slots added by max_wal_senders. They are internally *added*
to max_connections, so limiting them seems perfectly fine to me, since
the system still provides max_connections connections externally.

Hm... I wonder how that's managed for 9.4's max_worker_processes.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] install libpq.dll in bin directory on Windows / Cygwin

2013-07-28 Thread Andrew Dunstan


On 07/25/2013 06:27 PM, MauMau wrote:

From: Andrew Dunstan andrew.duns...@pgexperts.com

on Windows it's necessary to have libpq.dll/cygpq.dll either in the PATH
or in the same directory as client .exe files. The buildfarm client has
for many years simply copied this dll from the installation lib to the
installation bin directory after running make install. But I can't
really see why we don't do that as part of make install anyway. I
haven't tested but I think something like this patch would achieve this
goal - it would fix something that's tripped a lot of people up over the
years.

Comments? If we do this, should it be backported?


I was just about to propose something related to this.

On native Windows (not on Cygwin or MinGW), DLLs are in general placed 
in (1) the system directory (e.g. c:\windows\system32), (2) somewhere in 
the PATH, or (3) the same directory as the EXEs which automatically load 
the DLL at invocation.  It's no problem that most DLLs are in 
PostgreSQL's lib directory, because they are loaded at runtime by 
postgres.exe by calling LoadLibrary() with an absolute or a relative 
path.  However, the files below are misplaced:


libecpg.dll
libecpg_compat.dll
libpgtypes.dll

These should be placed in the bin directory.  That way, when running 
embedded SQL C applications, users can just add bin to PATH, which is 
usually done already.  Otherwise, users have to add both bin and lib 
to PATH, and lib is usually not put on PATH by most software.


Could you please place the above files in bin and remove them from lib?



I don't have a problem adding them to the bin directory. I'd be very 
slightly wary of removing them from the lib directory, for legacy 
reasons. Maybe these changes should be done for git tip and not 
backported (or maybe just to 9.3). Or we could just decide to clean up 
this mess in one fell swoop.




BTW, why is libpq.dll in lib necessary?  For the above files?  If so, 
we can remove libpq.dll from lib.  Or does libpqwalreceiver.dll need it?


Not sure. Perhaps you could experiment and see if anything bad happens 
if libpq is just installed in the bin directory and not in lib.


cheers

andrew





Re: [HACKERS] install libpq.dll in bin directory on Windows / Cygwin

2013-07-28 Thread Andres Freund
On 2013-07-28 15:37:49 -0400, Andrew Dunstan wrote:
 BTW, why is libpq.dll in lib necessary?  For the above files?  If so, we
 can remove libpq.dll from lib.  Or does libpqwalreceiver.dll need it?
 
 Not sure. Perhaps you could experiment and see if anything bad happens if
 libpq is just installed in the bin directory and not in lib.

Yes, it does need it.

Andres




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-28 Thread Josh Berkus

 I think if we can design conf.d separately for config files of management
 tools, then it is better to have postgresql.auto.conf in $PGDATA rather
 than in $PGDATA/conf.d

One of the biggest current complaints about recovery.conf from
Debian/Ubuntu users is the fact that it lives in $PGDATA.  Won't we just
get the same thing here?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-28 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 I think if we can design conf.d separately for config files of management
 tools, then it is better to have postgresql.auto.conf in $PGDATA rather
 than in $PGDATA/conf.d

 One of the biggest current complaints about recovery.conf from
 Debian/Ubuntu users is the fact that it lives in $PGDATA.  Won't we just
 get the same thing here?

I don't think that's the same case, but ... why exactly don't they like
recovery.conf, and can you show that the location of the file has
anything to do with the underlying complaint?  Personally I bet it's
more about the confusion between configuration and triggering functions.
Until we can get those things separated, using recovery.conf to argue
about file locations will result in nothing but confusion and bad
design.

regards, tom lane




[HACKERS] Re: [COMMITTERS] pgsql: Add GET DIAGNOSTICS ... PG_CONTEXT in PL/PgSQL

2013-07-28 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
 OK.  One possibly non-obvious point is that I think the field should be
 defined as context containing associated non-constant strings; this
 would mean in particular that CopyErrorData would need to change it
 to CurrentMemoryContext in the copied struct, and then ReThrowError
 would change it back when restoring the data onto the error stack.
 This detail is probably a no-op in current usages, but in the future it
 might allow modification of a copied ErrorData while it's outside
 ErrorContext, if anyone should want to do that.
 
 Also I'd advise declaring the field as struct MemoryContextData *
 to avoid having to include palloc.h into elog.h.

Good points, all.  Apologies for it taking a bit to get to this, but
please take a look when you get a chance.  Barring objections, this is
what I'm planning to commit, which moves most calls to use the new
edata->assoc_context instead of ErrorContext.  This also means we can
allow GET DIAG ... PG_CONTEXT to be called from exception handlers,
which I've set up and added regression tests for.
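For reference, the feature this enables looks like the following from PL/pgSQL (a sketch against a build with this patch applied; the exact context text returned may differ):

```sql
CREATE OR REPLACE FUNCTION show_context() RETURNS text AS $$
DECLARE
  stack text;
BEGIN
  -- Fetch the current call stack; with this change it is also
  -- legal inside an exception handler.
  GET DIAGNOSTICS stack = PG_CONTEXT;
  RETURN stack;
END;
$$ LANGUAGE plpgsql;

SELECT show_context();
```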

Thanks!

Stephen
diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
new file mode 100644
index 036fe09..a812ccd 100644
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
*** err_gettext(const char *str)
*** 87,93 
  /* This extension allows gcc to check the format string for consistency with
 the supplied arguments. */
  __attribute__((format_arg(1)));
! static void set_errdata_field(char **ptr, const char *str);
  
  /* Global variables */
  ErrorContextCallback *error_context_stack = NULL;
--- 87,93 
  /* This extension allows gcc to check the format string for consistency with
 the supplied arguments. */
  __attribute__((format_arg(1)));
! static void set_errdata_field(MemoryContextData *cxt, char **ptr, const char *str);
  
  /* Global variables */
  ErrorContextCallback *error_context_stack = NULL;
*** errstart(int elevel, const char *filenam
*** 373,378 
--- 373,383 
  	/* errno is saved here so that error parameter eval can't change it */
  	edata->saved_errno = errno;
  
+ 	/*
+ 	 * Any allocations for this error state level should go into ErrorContext
+ 	 */
+ 	edata->assoc_context = ErrorContext;
+ 
  	recursion_depth--;
  	return true;
  }
*** errmsg(const char *fmt,...)
*** 786,792 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE(edata->domain, message, false, true);
  
--- 791,797 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE(edata->domain, message, false, true);
  
*** errmsg_internal(const char *fmt,...)
*** 815,821 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE(edata->domain, message, false, false);
  
--- 820,826 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE(edata->domain, message, false, false);
  
*** errmsg_plural(const char *fmt_singular,
*** 838,844 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE_PLURAL(edata->domain, message, false);
  
--- 843,849 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE_PLURAL(edata->domain, message, false);
  
*** errdetail(const char *fmt,...)
*** 859,865 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE(edata->domain, detail, false, true);
  
--- 864,870 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE(edata->domain, detail, false, true);
  
*** errdetail_internal(const char *fmt,...)
*** 886,892 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE(edata->domain, detail, false, false);
  
--- 891,897 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE(edata->domain, detail, false, false);
  
*** errdetail_log(const char *fmt,...)
*** 907,913 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(ErrorContext);
  
  	EVALUATE_MESSAGE(edata->domain, detail_log, false, true);
  
--- 912,918 
  
  	recursion_depth++;
  	CHECK_STACK_DEPTH();
! 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
  
  	EVALUATE_MESSAGE(edata->domain, detail_log, false, true);
  
*** errdetail_plural(const char *fmt_singula
*** 930,936 

[HACKERS] Evaluate arbitrary expression on tuple inside trigger function?

2013-07-28 Thread Tom Dunstan
Hi all

I'm trying to hack a trigger function to evaluate an expression on the
tuple that the trigger has been fired for, kinda like a check
constraint. I looked at ExecRelCheck in execMain.c which does
more-or-less what I want to do, and I have the parsed node tree all
ready to go. The problem that I'm facing is that ExecRelCheck uses a
passed in EState to set up the executor in the right mode, and with
the right memory context, but the EState doesn't get passed in to the
trigger function, and I can't see anything obvious hanging off the
TriggerData that does get passed in that would give me access to it.

Can anyone either point me to where I might be able to get a handle on
the current EState, or otherwise recommend a way forward?

Thanks

Tom




[HACKERS] Bison 3.0 updates

2013-07-28 Thread Tom Lane
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old.  The failures look like

cubeparse.y:42.1-13: warning: deprecated directive, use '%name-prefix' 
[-Wdeprecated]
 %name-prefix="cube_yy"
 ^

(which looks like 3.0 isn't actually ready for prime time, but at least
it's only a warning)

cubeparse.c:163:5: error: conflicting types for 'cube_yyparse'
 int cube_yyparse (void);
 ^
cubeparse.y:32:5: note: previous declaration of 'cube_yyparse' was here
 int cube_yyparse(void *result);
 ^

A look in the Bison release notes explains this one: they stopped
supporting YYPARSE_PARAM, which contrib/cube and contrib/seg both use.
The recommended replacement is %parse-param, which is certainly a whole
lot cleaner: it lets you specify the datatype of the extra parser
parameter, instead of having it default to void *.  This option also
changes the signature of yyerror(), but that's not a problem.

At first I thought this was going to make us go through a tool upgrade
exercise, because I couldn't find %parse-param in the documentation for
bison 1.875, which is our oldest supported version.  But further
research shows that %parse-param actually was introduced in 1.875,
they just forgot to document it :-(.

So I propose the attached patch, which I've verified still works with
1.875.  I don't plan to install 3.0 just to test this, but I assume it's
OK there.

I'm thinking we should apply this to all supported branches, in case
somebody gets the idea to build an older branch with bleeding-edge
tools.  Any objections?

regards, tom lane

diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c
index ce8eaa870fc760b7c7b4607e0844c38ffd5e56bb..dab0e6e7586e306ecfabb6537789a91f95d656e8 100644
*** a/contrib/cube/cube.c
--- b/contrib/cube/cube.c
*** PG_MODULE_MAGIC;
*** 26,33 
  #define ARRPTR(x)  ( (double *) ARR_DATA_PTR(x) )
  #define ARRNELEMS(x)  ArrayGetNItems( ARR_NDIM(x), ARR_DIMS(x))
  
! extern int	cube_yyparse();
! extern void cube_yyerror(const char *message);
  extern void cube_scanner_init(const char *str);
  extern void cube_scanner_finish(void);
  
--- 26,33 
  #define ARRPTR(x)  ( (double *) ARR_DATA_PTR(x) )
  #define ARRNELEMS(x)  ArrayGetNItems( ARR_NDIM(x), ARR_DIMS(x))
  
! extern int	cube_yyparse(NDBOX **result);
! extern void cube_yyerror(NDBOX **result, const char *message);
  extern void cube_scanner_init(const char *str);
  extern void cube_scanner_finish(void);
  
*** Datum
*** 156,167 
  cube_in(PG_FUNCTION_ARGS)
  {
  	char	   *str = PG_GETARG_CSTRING(0);
! 	void	   *result;
  
  	cube_scanner_init(str);
  
  	if (cube_yyparse(&result) != 0)
! 		cube_yyerror("bogus input");
  
  	cube_scanner_finish();
  
--- 156,167 
  cube_in(PG_FUNCTION_ARGS)
  {
  	char	   *str = PG_GETARG_CSTRING(0);
! 	NDBOX	   *result;
  
  	cube_scanner_init(str);
  
  	if (cube_yyparse(&result) != 0)
! 		cube_yyerror(&result, "bogus input");
  
  	cube_scanner_finish();
  
diff --git a/contrib/cube/cubeparse.y b/contrib/cube/cubeparse.y
index 21fe5378773da012bd223f4437f02d248da6b6c6..d7205b824cb5c0f21c913b5746552898d8590327 100644
*** a/contrib/cube/cubeparse.y
--- b/contrib/cube/cubeparse.y
***
*** 1,10 
  %{
  /* NdBox = [(lowerleft),(upperright)] */
  /* [(xLL(1)...xLL(N)),(xUR(1)...xUR(n))] */
  
- /* contrib/cube/cubeparse.y */
- 
- #define YYPARSE_PARAM result  /* need this to pass a pointer (void *) to yyparse */
  #define YYSTYPE char *
  #define YYDEBUG 1
  
--- 1,9 
  %{
+ /* contrib/cube/cubeparse.y */
+ 
  /* NdBox = [(lowerleft),(upperright)] */
  /* [(xLL(1)...xLL(N)),(xUR(1)...xUR(n))] */
  
  #define YYSTYPE char *
  #define YYDEBUG 1
  
*** extern int cube_yylex(void);
*** 28,35 
  static char *scanbuf;
  static int	scanbuflen;
  
! void cube_yyerror(const char *message);
! int cube_yyparse(void *result);
  
  static int delim_count(char *s, char delim);
  static NDBOX * write_box(unsigned int dim, char *str1, char *str2);
--- 27,34 
  static char *scanbuf;
  static int	scanbuflen;
  
! extern int	cube_yyparse(NDBOX **result);
! extern void cube_yyerror(NDBOX **result, const char *message);
  
  static int delim_count(char *s, char delim);
  static NDBOX * write_box(unsigned int dim, char *str1, char *str2);
*** static NDBOX * write_point_as_box(char *
*** 38,43 
--- 37,43 
  %}
  
  /* BISON Declarations */
+ %parse-param {NDBOX **result}
  %expect 0
  %name-prefix="cube_yy"
  
*** box: O_BRACKET paren_list COMMA paren_li
*** 70,76 
  			YYABORT;
  		}
  
! 		*((void **)result) = write_box( dim, $2, $4 );
  
  	}
  
--- 70,76 
  			YYABORT;
  		}
  
! 		*result = write_box( dim, $2, $4 );
  
  	}
  
*** box: O_BRACKET paren_list COMMA paren_li
*** 97,103 
  			YYABORT;
  		}
  
! 		*((void **)result) = write_box( dim, $1, $3 );
  	}
  
 

Re: [HACKERS] Bison 3.0 updates

2013-07-28 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Buildfarm member anchovy has been failing for the last couple of days,
[...]
 I'm thinking we should apply this to all supported branches, in case
 somebody gets the idea to build an older branch with bleeding-edge
 tools.  Any objections?

Certainly looks cleaner to me, and if it works for older bison then it
seems reasonable to back-patch it.

However, I comment on this mainly because anchovy has had issues with
9.1 and older for some time, which looks like an issue with GCC 4.8.0.
Did you happen to resolve or identify what is happening there..?

Thanks!

Stephen




Re: [HACKERS] Bison 3.0 updates

2013-07-28 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 However, I comment on this mainly because anchovy has had issues with
 9.1 and older for some time, which looks like an issue with GCC 4.8.0.
 Did you happen to resolve or identify what is happening there..?

Yeah, we know about that:
http://www.postgresql.org/message-id/14242.1365200...@sss.pgh.pa.us

The bottom line was:
 It looks like our choices are (1) teach configure to enable
 -fno-aggressive-loop-optimizations if the compiler recognizes it,
 or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.

I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions.  Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.

regards, tom lane




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-28 Thread Andrew Gierth
I propose the following patch (which goes on top of the current
ordinality one) to implement the suggested grammar changes.

I think this is the cleanest way, and I've tested that it both
passes regression and allows constructs like WITH time AS (...)
to work.

-- 
Andrew (irc:RhodiumToad)
*** a/src/backend/parser/gram.y
--- b/src/backend/parser/gram.y
***
*** 608,615  static Node *makeRecursiveViewSelect(char *relname, List 
*aliases, Node *query);
   * The grammar thinks these are keywords, but they are not in the kwlist.h
   * list and so can never be entered directly.  The filter in parser.c
   * creates these tokens when required.
   */
! %token	NULLS_FIRST NULLS_LAST WITH_ORDINALITY WITH_TIME
  
  /* Precedence: lowest to highest */
  %nonassoc	SET				/* see relation_expr_opt_alias */
--- 608,645 
   * The grammar thinks these are keywords, but they are not in the kwlist.h
   * list and so can never be entered directly.  The filter in parser.c
   * creates these tokens when required.
+  *
+  * The rules for referencing WITH and these special lookahead keywords are
+  * as follows:
+  *
+  * If WITH is followed by a fixed token, such as WITH OIDS, or a non-keyword
+  * token such as '(', then use WITH directly, except as indicated below.
+  *
+  * If WITH could be followed by an object name, then use the with_keyword
+  * production instead. Also, if there are alternative branches in which some
+  * have a fixed keyword following WITH and some have an object name, then
+  * use with_keyword for all of them, overriding the above rule.
+  *
+  * (Similar rules would apply for NULLS_P, but currently there are no
+  * instances in the grammar where this is used other than as a special
+  * case or as an identifier.)
+  *
+  * The productions associated with these special cases are listed under
+  * "Special-case keyword sequences" near the end of the grammar. It is
+  * intended that these be the ONLY places that the special lookahead
+  * keywords appear, in order to avoid complicating the main body of the
+  * grammar.
+  *
+  * To add a new special case:
+  *   - add the special token names here in a %token decl
+  *   - add or extend the productions under "Special-case keyword sequences"
+  *   - add appropriate comparisons in:
+  *   base_yylex in src/backend/parser/parser.c
+  *   filtered_base_yylex in src/interfaces/ecpg/preproc/parser.c
   */
! 
! %token	NULLS_BEFORE_FIRST NULLS_BEFORE_LAST
! %token  WITH_BEFORE_ORDINALITY WITH_BEFORE_TIME
  
  /* Precedence: lowest to highest */
  %nonassoc	SET				/* see relation_expr_opt_alias */
***
*** 838,848  CreateRoleStmt:
}
;
  
- 
- opt_with: WITH						{}
-   | /*EMPTY*/							{}
-   ;
- 
  /*
   * Options for CREATE ROLE and ALTER ROLE (also used by CREATE/ALTER USER
   * for backwards compatibility).  Note: the only option required by SQL99
--- 868,873 
***
*** 3127,3138  ExclusionConstraintList:

{ $$ = lappend($1, $3); }
;
  
! ExclusionConstraintElem: index_elem WITH any_operator
{
$$ = list_make2($1, $3);
}
  		/* allow OPERATOR() decoration for the benefit of ruleutils.c */
!   | index_elem WITH OPERATOR '(' any_operator ')'
{
$$ = list_make2($1, $5);
}
--- 3152,3163 

{ $$ = lappend($1, $3); }
;
  
! ExclusionConstraintElem: index_elem with_keyword any_operator
{
$$ = list_make2($1, $3);
}
  		/* allow OPERATOR() decoration for the benefit of ruleutils.c */
!   | index_elem with_keyword OPERATOR '(' any_operator ')'
{
$$ = list_make2($1, $5);
}
***
*** 6188,6195 ****
  opt_asc_desc: ASC				{ $$ = SORTBY_ASC; }
  	| /*EMPTY*/					{ $$ = SORTBY_DEFAULT; }
  	;
  
! opt_nulls_order: NULLS_FIRST	{ $$ = SORTBY_NULLS_FIRST; }
! 	| NULLS_LAST				{ $$ = SORTBY_NULLS_LAST; }
  	| /*EMPTY*/ 

Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-28 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 One of the biggest current complaints about recovery.conf from
 Debian/Ubuntu users is the fact that it lives in $PGDATA.  Won't we just
 get the same thing here?

I don't think so, see below.

 I don't think that's the same case, but ... why exactly don't they like
 recovery.conf, and can you show that the location of the file has
 anything to do with the underlying complaint?  Personally I bet it's
 more about the confusion between configuration and triggering functions.
 Until we can get those things separated, using recovery.conf to argue
 about file locations will result in nothing but confusion and bad
 design.

I think it's all about having a user-edited configuration file somewhere
else than /etc, and that keeping the triggering functions separated from
the setup itself would be a good thing to have.

I bet that if some facts and links are necessary they will appear soon,
now is not a right time for me to be searching for those facts…

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

