Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Simon Riggs
On Sat, Jan 14, 2012 at 1:02 AM, Noah Misch n...@leadboat.com wrote:

 Also, what happens if an all-visible bit gets set on the standby
 through some other mechanism - e.g. restored from an FPI or
 XLOG_HEAP_NEWPAGE?  I'm not sure whether we ever do an FPI of the
 visibility map page itself, but we certainly do it for the heap pages.
  So it might be that this infrastructure would (somewhat bizarrely)
 trust the visibility map bits but not the PD_ALL_VISIBLE bits.

 Simon spoke to the FPI side of the question.  For heap pages, the
 XLOG_HEAP_NEWPAGE consumers are CLUSTER, VACUUM FULL and ALTER TABLE SET
 TABLESPACE.  For the last, we will have already logged any PD_ALL_VISIBLE bits
 through normal channels.  CLUSTER and VACUUM FULL never set PD_ALL_VISIBLE or
 visibility map bits.  When, someday, they do, we might emit a separate WAL
 record to force the recovery conflict.  However, CLUSTER/VACUUM FULL already
 remove tuples still-visible to standby snapshots without provoking a recovery
 conflict.  (Again only with hot_standby_feedback=off.)

If that were the case it would be a bug.

CLUSTER/VACUUM FULL emit an AccessExclusiveLock record that would
conflict with any current lock holders, so should be fine on that.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Simon Riggs
On Sat, Jan 14, 2012 at 5:42 AM, Robert Haas robertmh...@gmail.com wrote:

 Other than that, it seems like we might be converging on a workable
 solution: if hot_standby_feedback=off, disable index-only scans for
 snapshots taken during recovery; if hot_standby_feedback=on, generate
 recovery conflicts when a snapshot's xmin precedes the youngest xmin
 on a page marked all-visible, but allow the use of index-only scans,
 and allow sequential scans to trust PD_ALL_VISIBLE.  Off the top of my
 head, I don't see a hole in that logic...

Please use the logic described earlier. Using the existing mechanisms
and infrastructure is all we need here.

We want latestRemovedXid, not a new version of it via xmins. Keep
index-only scans enabled all the time and let conflicts be generated.
There are already mechanisms in place to handle conflicts, so avoiding
them another way is not required or desired.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring relation free space

2012-01-14 Thread Jaime Casanova
On Thu, Dec 15, 2011 at 4:11 PM, Noah Misch n...@leadboat.com wrote:
 On Sun, Nov 06, 2011 at 10:20:49PM +0100, Bernd Helmle wrote:
 --On 6. November 2011 01:08:11 -0200 Greg Smith g...@2ndquadrant.com wrote:

 Attached patch adds a new function to the pageinspect extension for
 measuring total free space, in either tables or indexes.

 I wonder if that should be done in the pgstattuple module, which outputs
 some similar numbers.

 Indeed, pgstattuple already claims to show precisely the same measure.  Its
 reckoning is right in line for heaps, but the proposed pageinspect function
 finds more free space in indexes:

 [local] test=# SELECT t.free_percent, relation_free_space('pg_proc'), 
 i.free_percent, relation_free_space('pg_proc_proname_args_nsp_index') FROM 
 pgstattuple('pg_proc') t, pgstattuple('pg_proc_proname_args_nsp_index') i;
  free_percent | relation_free_space | free_percent | relation_free_space
 --------------+---------------------+--------------+---------------------
          2.53 |           0.0253346 |         8.61 |            0.128041
 (1 row)

 Is one of those index figures simply wrong, or do they measure two senses of
 free space, both of which are interesting to DBAs?


I created a test env using pgbench -s 20 -F 90, then created a new
table (that keeps track of actions happening on the pgbench tables,
so insert-only) and changed a few fillfactors:

        relname        | reltuples |    reloptions
-----------------------+-----------+------------------
 audit_log             |    804977 |
 pgbench_accounts      |   1529890 | {fillfactor=90}
 pgbench_accounts_pkey |   1529890 | {fillfactor=50}
 pgbench_branches      |        20 | {fillfactor=100}
 pgbench_branches_pkey |        20 |
 pgbench_history       |     94062 |
 pgbench_tellers       |       200 | {fillfactor=100}
 pgbench_tellers_pkey  |       200 |
(8 rows)


After running pgbench -n -c 4 -j 2 -T 300 a few times, I used the
attached free_space.sql to see what pg_freespacemap, pgstattuple and
relation_free_space had to say about these tables. The result is
attached in result_free_space.out.

My first conclusion is that pg_freespacemap is unreliable when indexes
are involved (looking at the documentation of that module confirms
that); also, the fact that the FSM is not designed for accuracy makes
me think it is not an option.

pgstattuple and relation_free_space are very close in all the numbers
except for two indexes, pgbench_branches_pkey and pgbench_tellers_pkey;
after a VACUUM FULL and a REINDEX (with the difference persisting) I
checked the contents of pgbench_tellers_pkey (it has only one page
besides the metapage), and the numbers I looked at were:

page size: 8192
free size:  4148

which in plain words means 50% of free space... So, answering Noah's
question: if that difference has some meaning I can't see it, but
looking at the evidence, the measure relation_free_space() is giving is
the right one.
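
For reference, those per-page numbers can be read back with a query like
this one (a sketch, assuming the pageinspect extension is installed and
that block 1 is the single leaf page of the index):

SELECT page_size, free_size,
       round(free_size::numeric / page_size, 4) AS free_ratio
  FROM bt_page_stats('pgbench_tellers_pkey', 1);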

So, tomorrow (or, looking at the clock, later today) I will update
the relation_free_space() patch to accept TOAST tables and other kinds
of indexes, and add it to the commitfest, unless someone says that my
math is wrong and there is somehow a more accurate way of getting the
free space (which is entirely possible).

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
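-- free_space.sql (attached): compares the free-space figures reported by
-- pg_freespacemap, relation_free_space() and pgstattuple for each relation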
with relation(relid, relname, relpages) as (
	select oid, relname, relpages 
	  from pg_class 
	 where relkind in ('r', 'i') and (relname like 'pgbench%' or relname = 'audit_log')
),
q(relid, relname, total_size, free_size) as (
	select relid, relname, pg_relation_size(relid::regclass), (select sum(avail) from pg_freespace(relid::regclass)) 
	  from relation
)
select relname, total_size, round((free_size::numeric / total_size), 6) as fsm_free_size, relation_free_space(relid::regclass::text),
   ((pgstattuple(relid)).free_percent / 100) pgstattuple_free_pct
  from q
 order by 1;


result_free_space.out
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multithread Query Planner

2012-01-14 Thread Frederico
Does this mean it's possible to use threads?

Att,

Fred

Sent via iPad

On 13/01/2012, at 20:47, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Christopher Browne cbbro...@gmail.com writes:
 Yes, don't try to use threads.
 
 http://wiki.postgresql.org/wiki/Developer_FAQ#Why_don.27t_you_use_threads.2C_raw_devices.2C_async-I.2FO.2C_.3Cinsert_your_favorite_wizz-bang_feature_here.3E.3F
 
 ... threads are not currently used instead of multiple processes for
 backends because:
 
 I would only add that the backend code is really written in a process
 based perspective, with a giant number of private variables that are in
 fact global variables.
 
 Trying to “clean” that out in order to get to threads… wow.
 
 Regards,
 -- 
 Dimitri Fontaine
 http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2012-01-14 Thread Hitoshi Harada
On Fri, Jan 13, 2012 at 2:05 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Nov 29, 2011 at 10:10 AM, Robert Haas robertmh...@gmail.com wrote:

 I have plans to try to improve this, but it's one of those things that
 I care about more than the people who write the checks do, so it
 hasn't quite gotten to the top of the priority list yet.

 All right... I have a patch that I think fixes this, at least so far
 as relations are concerned.  I rewrote
 RangeVarGetAndCheckCreationNamespace() extensively, did surgery on its
 callers, and then modified CREATE OR REPLACE VIEW and ALTER <relkind>
 .. SET SCHEMA to make use of it rather than doing things as they did
 before.

 So this patch prevents (1) concurrently dropping a schema in which a
 new relation is being created, (2) concurrently dropping a schema into
 which an existing relation is being moved, and (3) using CREATE OR
 REPLACE VIEW to attempt to obtain a lock on a relation you don't own
 (the command would eventually fail, of course, but if there were, say,
 an outstanding AccessShareLock on the relation, you'd queue up for the
 lock and thus prevent any further locks from being granted, despite
 your lack of any rights on the target).

The patch looks OK, though I wonder if we could have a way to release
the lock on the namespace before the end of the transaction. Since the
lock is held the whole time, the possibility of deadlock is now bigger.
Say,

-- tx1
BEGIN;
SELECT * FROM s.x;
DROP SCHEMA t;

-- tx2
BEGIN;
SELECT * FROM t.y;
DROP SCHEMA s;

I know it's a limited situation, though. Maybe the right way would be
to check the namespace at the end of the transaction if any object is
created, rather than locking it.

 It doesn't fix any of the non-relation object types.  That would be
 nice to do, but I think it's material for a separate patch.

I took a quick look under src/include/catalog and the objects that
have namespace are

- collation
- constraint
- conversion
- extension
- operator
- operator class
- operator family
- function (proc)
- ts_config
- ts_dict
- ts_parser
- ts_template
- (what's missing?)

I agree with you that it's not worth doing everything, but functions are
nice to have. I don't mind if we don't have it for the other objects.

Thanks,
-- 
Hitoshi Harada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2012-01-14 Thread Hitoshi Harada
On Sat, Jan 14, 2012 at 2:25 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
 On Fri, Jan 13, 2012 at 2:05 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Nov 29, 2011 at 10:10 AM, Robert Haas robertmh...@gmail.com wrote:

 I have plans to try to improve this, but it's one of those things that
 I care about more than the people who write the checks do, so it
 hasn't quite gotten to the top of the priority list yet.

 All right... I have a patch that I think fixes this, at least so far
 as relations are concerned.  I rewrote
 RangeVarGetAndCheckCreationNamespace() extensively, did surgery on its
  callers, and then modified CREATE OR REPLACE VIEW and ALTER <relkind>
 .. SET SCHEMA to make use of it rather than doing things as they did
 before.

 So this patch prevents (1) concurrently dropping a schema in which a
 new relation is being created, (2) concurrently dropping a schema into
 which an existing relation is being moved, and (3) using CREATE OR
 REPLACE VIEW to attempt to obtain a lock on a relation you don't own
 (the command would eventually fail, of course, but if there were, say,
 an outstanding AccessShareLock on the relation, you'd queue up for the
 lock and thus prevent any further locks from being granted, despite
 your lack of any rights on the target).

 The patch looks OK, though I wonder if we could have a way to release
 the lock on the namespace before the end of the transaction. Since the
 lock is held the whole time, the possibility of deadlock is now bigger.
 Say,

 -- tx1
 BEGIN;
 SELECT * FROM s.x;
 DROP SCHEMA t;

 -- tx2
 BEGIN;
 SELECT * FROM t.y;
 DROP SCHEMA s;

Although I wrote
 I know it's a limited situation, though. Maybe the right way would be
 to check the namespace at the end of the transaction if any object is
 created, rather than locking it.

Actually, what's surprising to me is that even SELECT holds a lock on the
namespace to the end of the transaction. The ideal way would be to lock
only on CREATE or other DDL.

Thanks,
-- 
Hitoshi Harada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command Triggers

2012-01-14 Thread Dimitri Fontaine
Andres Freund and...@anarazel.de writes:
 If you target C coded triggers then all you need to do is provide a
 pointer to the Node *parsetree, I would think.  What else?
 Yes.

 Being able to turn that into a statement again is still valuable imo.

That part of the WIP code is still in the patch, yes.

 The drawback though is still the same, the day you do that you've
 proposed a public API and changing the parsetree stops being internal
 refactoring.
 Yes, sure. I don't particularly care though actually. Changing some generic
 guts of trigger functions is not really that much of a problem compared to
 the other stuff involved in a version migration.

Let's hear about it from Tom, who's mainly been against publishing that.

 The point is that with CREATE COMMAND TRIGGER only the internal part of the
 triggers needs to change, not the external interface. Which is a big
 difference from my pov.

I'm not sure.  The way you get the arguments would stay rather stable,
but the parsetree would change at each release: that's not a long-term
API here.  I fail to see much difference between a hook and a command
trigger as soon as you've chosen to implement the feature in C.

 Also hooks are relatively hard to stack, i.e. its hard to use them sensibly 
 from multiple independent projects. They also cannot be purely installed via 
 extensions/sql.

That remains true, you can't easily know in which order your hooks will
get fired, contrary to triggers, and you can't even list the hooks.

I fear that we won't be able to answer your need here in 9.2 though.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Noah Misch
On Sat, Jan 14, 2012 at 08:08:29AM +, Simon Riggs wrote:
 On Sat, Jan 14, 2012 at 1:02 AM, Noah Misch n...@leadboat.com wrote:
  However, CLUSTER/VACUUM FULL already
  remove tuples still-visible to standby snapshots without provoking a
  recovery conflict.  (Again only with hot_standby_feedback=off.)
 
 If that were the case it would be a bug.
 
 CLUSTER/VACUUM FULL emit an AccessExclusiveLock record that would
 conflict with any current lock holders, so should be fine on that.

I speak of this sequence (M = master connection, S = standby connection):

M: CREATE TABLE t AS SELECT * FROM generate_series(1,1000) t(n);
S: BEGIN ISOLATION LEVEL REPEATABLE READ; SELECT 0;
M: DELETE FROM t WHERE n <= 10;
M: VACUUM FULL t;
S: SELECT count(*) FROM t; -- 990, should be 1000

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring relation free space

2012-01-14 Thread Noah Misch
On Sat, Jan 14, 2012 at 04:41:57AM -0500, Jaime Casanova wrote:
 pgstattuple and relation_free_space are very close in all the numbers
 except for two indexes, pgbench_branches_pkey and pgbench_tellers_pkey;
 after a VACUUM FULL and a REINDEX (with the difference persisting) I
 checked the contents of pgbench_tellers_pkey (it has only one page
 besides the metapage), and the numbers I looked at were:
 
 page size: 8192
 free size:  4148
 
 which in plain words means 50% of free space... So, answering Noah's
 question: if that difference has some meaning I can't see it, but
 looking at the evidence, the measure relation_free_space() is giving is
 the right one.
 
 So, tomorrow (or, looking at the clock, later today) I will update
 the relation_free_space() patch to accept TOAST tables and other kinds
 of indexes, and add it to the commitfest, unless someone says that my
 math is wrong and there is somehow a more accurate way of getting the
 free space (which is entirely possible).

Did you see this followup[1]?  To summarize:

- pgstattuple() and relation_free_space() should emit the same number, even if
  that means improving pgstattuple() at the same time.
- relation_free_space() belongs in the pgstattuple extension, because its role
  is cheaper access to a single pgstattuple() metric.

Thanks,
nm

[1] http://archives.postgresql.org/message-id/20111218165625.gb6...@tornado.leadboat.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Simon Riggs
On Sat, Jan 14, 2012 at 11:17 AM, Noah Misch n...@leadboat.com wrote:
 On Sat, Jan 14, 2012 at 08:08:29AM +, Simon Riggs wrote:
 On Sat, Jan 14, 2012 at 1:02 AM, Noah Misch n...@leadboat.com wrote:
  However, CLUSTER/VACUUM FULL already
  remove tuples still-visible to standby snapshots without provoking a
  recovery conflict.  (Again only with hot_standby_feedback=off.)

 If that were the case it would be a bug.

 CLUSTER/VACUUM FULL emit an AccessExclusiveLock record that would
 conflict with any current lock holders, so should be fine on that.

 I speak of this sequence (M = master connection, S = standby connection):

 M: CREATE TABLE t AS SELECT * FROM generate_series(1,1000) t(n);
 S: BEGIN ISOLATION LEVEL REPEATABLE READ; SELECT 0;
 M: DELETE FROM t WHERE n <= 10;
 M: VACUUM FULL t;
 S: SELECT count(*) FROM t; -- 990, should be 1000

OK, so we need to emit a heap_xlog_cleanup_info() record at the end of
CLUSTER to conflict with anybody that doesn't yet have a lock but has
a snapshot that can see tuples the CLUSTER implicitly removed. Will
do.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 With the exception of EXPLAIN
 support which I think is merely an oversight, all of those issues,
 including the problems in Hot Standby mode, remain because nobody
 knows exactly what we ought to do to fix them.  When somebody figures
 it out, I predict they'll get fixed pretty quickly.

So this problem goes into the 9.2 Open Items list, right? It doesn't look
tied to this particular commit fest. Also, with so many people involved,
I think there shouldn't be any finger pointing here.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multithread Query Planner

2012-01-14 Thread Dimitri Fontaine
Frederico zepf...@gmail.com writes:
 This means it's possible use threads?

The short answer is “no”.
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql NUL record and field separator

2012-01-14 Thread Peter Eisentraut
Inspired by this question http://stackoverflow.com/questions/6857265 I
have implemented a way to set the psql record and field separators to a
zero byte (ASCII NUL character).  This can be very useful in shell
scripts to have an unambiguous separator.  Other GNU tools such as find,
grep, sort, xargs also support this.  So with this you could for example
do

psql --record-separator-zero -At -c 'select something from somewhere' | xargs -0 dosomething

I have thought about two different ways to implement this.  Attempt one
was to make the backslash command option parsing zero-byte proof top to
bottom by using PQExpBuffers, so you could then write \R '\000'.  But
that turned out to be very invasive and complicated.  And worst, you
couldn't use it from the command line, because psql -R '\000' doesn't
work (the octal escape syntax is not used on the command line).

So attempt two, which I present here, is to just have separate syntax to
set the separators to zero bytes.  From the command line it would be
--record-separator-zero and --field-separator-zero, and from within psql
it would be \pset recordsep_zero and \pset fieldsep_zero.  I don't care
much for the verbosity of this, so I'm still thinking about ways to
abbreviate this.  I think the most common use of this would be to set
the record separator from the command line, so we could use a short
option such as -0 or -z for that.

Patch attached.  Comments welcome.
diff --git i/doc/src/sgml/ref/psql-ref.sgml w/doc/src/sgml/ref/psql-ref.sgml
index a9b1ed2..752d6de 100644
--- i/doc/src/sgml/ref/psql-ref.sgml
+++ w/doc/src/sgml/ref/psql-ref.sgml
@@ -193,6 +193,15 @@ PostgreSQL documentation
 </varlistentry>
 
 <varlistentry>
+  <term><option>--field-separator-zero</option></term>
+  <listitem>
+  <para>
+  Set the field separator for unaligned output to a zero byte.
+  </para>
+  </listitem>
+</varlistentry>
+
+<varlistentry>
   <term><option>-h <replaceable class="parameter">hostname</replaceable></></term>
   <term><option>--host=<replaceable class="parameter">hostname</replaceable></></term>
   <listitem>
@@ -320,6 +329,16 @@ PostgreSQL documentation
 </varlistentry>
 
 <varlistentry>
+  <term><option>--record-separator-zero</option></term>
+  <listitem>
+  <para>
+  Set the record separator for unaligned output to a zero byte.  This is
+  useful for interfacing, for example, with <literal>xargs -0</literal>.
+  </para>
+  </listitem>
+</varlistentry>
+
+<varlistentry>
   <term><option>-s</></term>
   <term><option>--single-step</></term>
   <listitem>
@@ -1909,6 +1928,16 @@ lo_import 152801
   </varlistentry>
 
   <varlistentry>
+  <term><literal>fieldsep_zero</literal></term>
+  <listitem>
+  <para>
+  Sets the field separator to use in unaligned output format to a zero
+  byte.
+  </para>
+  </listitem>
+  </varlistentry>
+
+  <varlistentry>
   <term><literal>footer</literal></term>
   <listitem>
   <para>
@@ -2078,6 +2107,16 @@ lo_import 152801
   </varlistentry>
 
   <varlistentry>
+  <term><literal>recordsep_zero</literal></term>
+  <listitem>
+  <para>
+  Sets the record separator to use in unaligned output format to a zero
+  byte.
+  </para>
+  </listitem>
+  </varlistentry>
+
+  <varlistentry>
   <term><literal>tableattr</literal> (or <literal>T</literal>)</term>
   <listitem>
   <para>
diff --git i/src/bin/psql/command.c w/src/bin/psql/command.c
index 69fac83..9421a73 100644
--- i/src/bin/psql/command.c
+++ w/src/bin/psql/command.c
@@ -2284,11 +2284,26 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 	{
 		if (value)
 		{
-			free(popt->topt.fieldSep);
-			popt->topt.fieldSep = pg_strdup(value);
+			free(popt->topt.fieldSep.separator);
+			popt->topt.fieldSep.separator = pg_strdup(value);
+			popt->topt.fieldSep.separator_zero = false;
 		}
 		if (!quiet)
-			printf(_("Field separator is \"%s\".\n"), popt->topt.fieldSep);
+		{
+			if (popt->topt.fieldSep.separator_zero)
+				printf(_("Field separator is zero byte.\n"));
+			else
+				printf(_("Field separator is \"%s\".\n"), popt->topt.fieldSep.separator);
+		}
 	}
 
+	else if (strcmp(param, "fieldsep_zero") == 0)
+	{
+		free(popt->topt.fieldSep.separator);
+		popt->topt.fieldSep.separator = NULL;
+		popt->topt.fieldSep.separator_zero = true;
+		if (!quiet)
+			printf(_("Field separator is zero byte.\n"));
+	}
 
 	/* record separator for unaligned text */
@@ -2296,18 +2311,30 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 	{
 		if (value)
 		{
-			free(popt->topt.recordSep);
-			popt->topt.recordSep = pg_strdup(value);
+			free(popt->topt.recordSep.separator);
+			popt->topt.recordSep.separator = pg_strdup(value);
+			popt->topt.recordSep.separator_zero = false;
 		}
 		if (!quiet)
 		{
-			if (strcmp(popt->topt.recordSep, "\n") == 0)
+			if (popt->topt.recordSep.separator_zero)
+				printf(_("Record separator 

[HACKERS] pg_statistic, lack of documentation

2012-01-14 Thread Sergey Konoplev
Hi,

http://www.postgresql.org/docs/9.1/interactive/catalog-pg-statistic.html

It specifies that entries are created by ANALYZE, but does not mention
that if a table is empty no entry is created for it. That is probably
worth adding to the docs. The test case is below.

grayhemp@[local]:5432 test=# create table t1 (i integer);
CREATE TABLE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
(0 rows)
grayhemp@[local]:5432 test=# analyze t1;
ANALYZE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
(0 rows)
grayhemp@[local]:5432 test=# insert into t1 values (1);
INSERT 0 1
grayhemp@[local]:5432 test=# analyze t1;
ANALYZE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
           0 |        4
(1 row)

-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] controlling the location of server-side SSL files

2012-01-14 Thread Peter Eisentraut
On Mon, 2012-01-02 at 06:32 +0200, Peter Eisentraut wrote:
 I think I would like to have a set of GUC parameters to control the
 location of the server-side SSL files.

Here is the patch for this.

One thing that is perhaps worth thinking about:  Currently, we just
ignore missing root.crt and root.crl files.  With this patch, we still
do this, even if the user has given a specific nondefault location.
That seems a bit odd, but I can't think of a simple way to do it better.
diff --git i/doc/src/sgml/config.sgml w/doc/src/sgml/config.sgml
index 0cc3296..519715f 100644
--- i/doc/src/sgml/config.sgml
+++ w/doc/src/sgml/config.sgml
@@ -668,6 +668,66 @@ SET ENABLE_SEQSCAN TO OFF;
   </listitem>
  </varlistentry>
 
+ <varlistentry id="guc-ssl-ca-file" xreflabel="ssl_ca_file">
+  <term><varname>ssl_ca_file</varname> (<type>string</type>)</term>
+  <indexterm>
+   <primary><varname>ssl_ca_file</> configuration parameter</primary>
+  </indexterm>
+  <listitem>
+   <para>
+    Specifies the name of the file containing the SSL server certificate
+    authority.  The default is <filename>root.crt</filename>.  Relative
+    paths are relative to the data directory.  This parameter can only be
+    set at server start.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry id="guc-ssl-cert-file" xreflabel="ssl_cert_file">
+  <term><varname>ssl_cert_file</varname> (<type>string</type>)</term>
+  <indexterm>
+   <primary><varname>ssl_cert_file</> configuration parameter</primary>
+  </indexterm>
+  <listitem>
+   <para>
+    Specifies the name of the file containing the SSL server certificate.
+    The default is <filename>server.crt</filename>.  Relative paths are
+    relative to the data directory.  This parameter can only be set at
+    server start.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry id="guc-ssl-crl-file" xreflabel="ssl_crl_file">
+  <term><varname>ssl_crl_file</varname> (<type>string</type>)</term>
+  <indexterm>
+   <primary><varname>ssl_crl_file</> configuration parameter</primary>
+  </indexterm>
+  <listitem>
+   <para>
+    Specifies the name of the file containing the SSL server certificate
+    revocation list.  The default is <filename>root.crl</filename>.
+    Relative paths are relative to the data directory.  This parameter can
+    only be set at server start.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry id="guc-ssl-key-file" xreflabel="ssl_key_file">
+  <term><varname>ssl_key_file</varname> (<type>string</type>)</term>
+  <indexterm>
+   <primary><varname>ssl_key_file</> configuration parameter</primary>
+  </indexterm>
+  <listitem>
+   <para>
+    Specifies the name of the file containing the SSL server private key.
+    The default is <filename>server.key</filename>.  Relative paths are
+    relative to the data directory.  This parameter can only be set at
+    server start.
+   </para>
+  </listitem>
+ </varlistentry>
+
  <varlistentry id="guc-ssl-renegotiation-limit" xreflabel="ssl_renegotiation_limit">
   <term><varname>ssl_renegotiation_limit</varname> (<type>integer</type>)</term>
   <indexterm>
diff --git i/doc/src/sgml/runtime.sgml w/doc/src/sgml/runtime.sgml
index 1c3a9c8..a855279 100644
--- i/doc/src/sgml/runtime.sgml
+++ w/doc/src/sgml/runtime.sgml
@@ -1831,10 +1831,8 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
    SSL certificates and make sure that clients check the server's certificate.
    To do that, the server
    must be configured to accept only <literal>hostssl</> connections (<xref
-   linkend="auth-pg-hba-conf">) and have SSL
-   <filename>server.key</filename> (key) and
-   <filename>server.crt</filename> (certificate) files (<xref
-   linkend="ssl-tcp">). The TCP client must connect using
+   linkend="auth-pg-hba-conf">) and have SSL key and certificate files
+   (<xref linkend="ssl-tcp">). The TCP client must connect using
    <literal>sslmode=verify-ca</> or
    <literal>verify-full</> and have the appropriate root certificate
    file installed (<xref linkend="libpq-connect">).
@@ -2053,10 +2051,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
   </note>
 
   <para>
-   To start in <acronym>SSL</> mode, the files <filename>server.crt</>
-   and <filename>server.key</> must exist in the server's data directory.
-   These files should contain the server certificate and private key,
-   respectively.
+   To start in <acronym>SSL</> mode, files containing the server certificate
+   and private key must exist.  By default, these files are expected to be
+   named <filename>server.crt</> and <filename>server.key</>, respectively, in
+   the server's data directory, but other names and locations can be specified
+   using the configuration parameters <xref linkend="guc-ssl-cert-file">
+   and <xref linkend="guc-ssl-key-file">.
    On Unix systems, the permissions on <filename>server.key</filename> must
    disallow any access to world or group; achieve this by the command
    <command>chmod 0600 server.key</command>.
@@ -2144,27 +2144,27 @@ pg_dumpall -p 

Re: [HACKERS] Multithread Query Planner

2012-01-14 Thread Thomas Munro
On 13 January 2012 20:14, Frederico zepf...@gmail.com wrote:
 I'm trying to develop a multithreaded planner, and sometimes a memory
 access exception is raised.

I was a bit confused about what you are trying to do -- somehow
use concurrency during the planning phase, or during
execution (maybe having produced concurrency-aware plans)?

Here is my naive thought: Since threads are not really an option
as explained by others, you could use helper processes to
implement executor concurrency, by replacing nodes with proxies
that talk to helper processes (perhaps obtained from a
per-cluster pool).  The proxy nodes would send their child
subplans and the information needed to get the appropriate
snapshot, and receive tuples via some kind of IPC (perhaps
shmem-backed queues or pipes or whatever).

A common use case in other RDBMSs is running queries over
multiple partitions using parallelism.  In the above scheme that
could be done if the children of Append nodes were candidates for
emigration to helper processes.  OTOH there are some plans
produced by UNION and certain kinds of OR that could probably
benefit.

There may be some relevant stuff in PostgreSQL-XC?

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] xlog location arithmetic

2012-01-14 Thread Fujii Masao
On Tue, Dec 6, 2011 at 1:19 PM, Euler Taveira de Oliveira
eu...@timbira.com wrote:
 Hi,

 A while ago when blogging about WAL [1], I noticed that a function to deal with
 xlog location arithmetic is wanted. I remembered Depesz [2] mentioning it and,
 after some questions during trainings and conferences, I decided to translate
 my shell script function into C.

 The attached patch implements the function pg_xlog_location_diff (bikeshed
 colors are welcome). It calculates the difference between two given
 transaction log locations. Now that we have pg_stat_replication view, it will
 be easy to get the lag just passing columns as parameters. Also, the
 monitoring tools could take advantage of it instead of relying on a fragile
 routine to get the lag.

 I noticed that the pg_xlogfile_name* functions do not sanity-check the xrecoff
 boundaries, but that is material for another patch.


 [1] http://eulerto.blogspot.com/2011/11/understanding-wal-nomenclature.html
 [2]
 http://www.depesz.com/index.php/2011/01/24/waiting-for-9-1-pg_stat_replication/

I think that this function is very useful. Can you add the patch into
CommitFest 2012-1 ?
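
For illustration, the kind of query this would enable (a sketch, assuming
the proposed function name and the existing pg_stat_replication columns):

SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
  FROM pg_stat_replication;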

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] xlog location arithmetic

2012-01-14 Thread Euler Taveira de Oliveira
On 14-01-2012 11:06, Fujii Masao wrote:
 I think that this function is very useful. Can you add the patch into
 CommitFest 2012-1 ?
 
Sure. But I must adjust the patch based on the thread comments (basically,
numeric output). I have a new patch but need to test it before submitting it.
I'll post it this weekend.


-- 
   Euler Taveira de Oliveira - Timbira   http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2012-01-14 Thread Robert Haas
On Sat, Jan 14, 2012 at 5:25 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
 The patch looks ok, though I wonder if we could have a way to release
 the lock on namespace much before the end of transaction.

Well, that would kind of miss the point, wouldn't it?  I mean, the race
is that the process dropping the schema might not see the newly
created object.  The only way to prevent that is to hold some sort of
lock until the newly created object is visible, which doesn't happen
until commit.

 I know it's a limited situation, though. Maybe the right way would be
 to check the namespace at the end of the transaction if any object is
 created, rather than locking it.

And then what?  If you find that you created an object in a namespace
that's been subsequently dropped, what do you do about that?  As far
as I can see, your only real choice would be to roll back the
transaction that uncovers this problem, which is likely to produce
more rollbacks than just letting the deadlock detector sort it out.

 It doesn't fix any of the non-relation object types.  That would be
 nice to do, but I think it's material for a separate patch.

 I took a quick look under src/include/catalog and the objects that
 have namespace are

 - collation
 - constraint
 - conversion
 - extension
 - operator
 - operator class
 - operator family
 - function (proc)
 - ts_config
 - ts_dict
 - ts_parser
 - ts_template
 - (what's missing?)

 I agree with you that it's not worth doing everything, but function is
 nice to have. I don't mind if we don't have it with the other objects.

I think the fix for all of them will be very symmetrical; it's just
relations that have a different code path.  I don't see the point of
plugging some but not others; that just provides a mixture of good and
bad examples for future hackers to copy, which doesn't seem ideal.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabled features on Hot Standby

2012-01-14 Thread Robert Haas
On Sat, Jan 14, 2012 at 6:44 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
 With the exception of EXPLAIN
 support which I think is merely an oversight, all of those issues,
 including the problems in Hot Standby mode, remain because nobody
 knows exactly what we ought to do to fix them.  When somebody figures
 it out, I predict they'll get fixed pretty quickly.

 So this problem goes into the 9.2 Open Items list, right? It doesn't look
 tied to this particular commit fest. Also, with so many people involved,
 I think there shouldn't be any finger pointing here.

Well, I wouldn't personally be horrified if this didn't get fixed for
9.2, but since you and Simon and Noah seem to feel strongly about it,
I'm inclined to say we should go ahead and fix it, so sure.  Actually,
come to think of it, it really is on the open items list already: "fix
it so index-only scans work on the standby" is merely a more ambitious
version of "disable index-only scans on the standby", so it pretty
much works out to the same thing.

Furthermore, if we unconditionally generate recovery conflicts as
Simon is proposing, this sounds like it might be pretty darn simple.
I was in the midst of thinking about how to make the locking work if
we changed behavior when changing hot_standby_feedback, but if we
don't even need to go there so much the better.  I'm happy to accept
Simon/Noah's judgement that the rate of conflicts will be acceptable;
I haven't worked with Hot Standby enough myself to have an educated
opinion on that topic.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command Triggers

2012-01-14 Thread Robert Haas
On Fri, Jan 13, 2012 at 5:53 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Andres Freund and...@anarazel.de writes:
 I personally think this is an error and those details should at least be
 available on the c level (e.g. some pg_command_trigger_get_plan() function,
 only available via C) to allow sensible playing around with that knowledge. I
 don't really see making progress towards a nice interface unless we get
 something to play around with out there.

 If you target C coded triggers then all you need to do is provide a
 pointer to the Node *parsetree, I would think.  What else?

 The drawback though is still the same, the day you do that you've
 proposed a public API and changing the parsetree stops being internal
 refactoring.  The way around this problem is that if you want a command
 trigger in C, just write an extension that implements the Process
 Utility hook.  Bonus, you can have that working with already released
 versions of PostgreSQL.

But on the flip side, I think we're generally a bit more flexible
about exposing things via C than through the procedural languages.  I
think it's reasonable for people to complain about their PL/pgsql
functions breaking between major releases, but I have less sympathy
for someone who has the same problem when coding in C.  When you
program in C, you're interfacing with the guts of the system, and we
can't improve the system without changing the guts.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql filename completion: quoting

2012-01-14 Thread Noah Misch
Occasionally, I have a SQL file destined for psql's \i command whose name
contains a space.  Less often, I'll have a .csv destined for \copy with the
same problem.  psql's filename completion does not handle these well.  It
completes on the literal name, but the commands will only recognize quoted
names.  For example, given a file "foo bar", \i f<TAB> will complete to \i
foo bar, which will not execute.  If I type \i 'f<TAB>, completion will not
help at all.

The attached patch wraps rl_filename_completion_function() to dequote on input
and enquote on output.  Now, \i f<TAB> and \i 'f<TAB> will both complete
to \i 'foo bar', which executes as expected.  The new code handles embedded
whitespace, quotes, and backslashes.

tab-complete.c works in terms of whitespace-separated words.  As such, \i
'foo b<TAB> does not complete, because tab-complete.c has no notion of quotes
affecting token boundaries.  It thinks 'foo is one token and b is another.
I'm sure we could fix this (Bash gets it right).  It seemed rather independent
code-wise, so I did not attempt that for this patch.

Thanks,
nm
diff --git a/src/bin/psql/stringutils.c b/src/bin/psql/stringutils.c
index 3b5ce1b..77387dc 100644
*** a/src/bin/psql/stringutils.c
--- b/src/bin/psql/stringutils.c
*************** strip_quotes(char *source, char quote, char escape, int encoding)
*** 272,274 ****
--- 272,343 ----
  
*dst = '\0';
  }
+ 
+ 
+ /*
+  * quote_if_needed
+  *
+  * Opposite of strip_quotes().  If source denotes itself literally without
+  * quoting or escaping, returns NULL.  Otherwise, returns a malloc'd copy with
+  * quoting and escaping applied:
+  *
+  * source -   string to parse
+  * entails_quote - any of these present?  need outer quotes
+  * quote -doubled within string, affixed to both ends
+  * escape -   doubled within string
+  * encoding - the active character-set encoding
+  *
+  * Do not use this as a substitute for PQescapeStringConn().  Use it for
+  * strings to be parsed by strtokx() or psql_scan_slash_option().
+  */
+ char *
+ quote_if_needed(const char *source, const char *entails_quote,
+   char quote, char escape, int encoding)
+ {
+   const char *src;
+   char   *ret;
+   char   *dst;
+   boolneed_quotes = false;
+ 
+   psql_assert(source);
+   psql_assert(quote);
+ 
+   src = source;
+   dst = ret = pg_malloc(2 * strlen(src) + 3); /* excess */
+ 
+   *dst++ = quote;
+ 
+   while (*src)
+   {
+   		char	c = *src;
+   int i;
+ 
+   if (c == quote)
+   {
+   need_quotes = true;
+   *dst++ = quote;
+   }
+   else if (c == escape)
+   {
+   need_quotes = true;
+   *dst++ = escape;
+   }
+   else if (strchr(entails_quote, c))
+   need_quotes = true;
+ 
+   i = PQmblen(src, encoding);
+   while (i--)
+   *dst++ = *src++;
+   }
+ 
+   *dst++ = quote;
+   *dst = '\0';
+ 
+   if (!need_quotes)
+   {
+   free(ret);
+   ret = NULL;
+   }
+ 
+   return ret;
+ }
diff --git a/src/bin/psql/stringutils.h b/src/bin/psql/stringutils.h
index c7c5f38..c64fc58 100644
*** a/src/bin/psql/stringutils.h
--- b/src/bin/psql/stringutils.h
*************** extern char *strtokx(const char *s,
*** 19,22 ****
--- 19,25 ----
bool del_quotes,
int encoding);
  
+ extern char *quote_if_needed(const char *source, const char *entails_quote,
+   char quote, char escape, int encoding);
+ 
  #endif   /* STRINGUTILS_H */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index a27ef69..d226106 100644
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*************** static char *complete_from_list(const char *text, int state);
*** 670,675 ****
--- 670,676 ----
  static char *complete_from_const(const char *text, int state);
  static char **complete_from_variables(char *text,
  						const char *prefix, const char *suffix);
+ static char *complete_from_files(const char *text, int state);
  
  static PGresult *exec_query(const char *query);
  
*************** psql_completion(char *text, int start, int end)
*** 1619,1625 ****
  			 pg_strcasecmp(prev3_wd, "BINARY") == 0) &&
  			(pg_strcasecmp(prev_wd, "FROM") == 0 ||
  			 pg_strcasecmp(prev_wd, "TO") == 0))
! 		matches = completion_matches(text, filename_completion_function);
  
  	/* Handle COPY|BINARY <sth> FROM|TO filename */
  	else if ((pg_strcasecmp(prev4_wd, "COPY") == 0 ||
--- 1620,1629 ----
  			 pg_strcasecmp(prev3_wd, "BINARY") == 0) &&
 

[HACKERS] psql COPY vs. ON_ERROR_ROLLBACK, multi-command strings

2012-01-14 Thread Noah Misch
It has bothered me that psql's \copy ignores the ON_ERROR_ROLLBACK setting.
Only SendQuery() takes note of ON_ERROR_ROLLBACK, and \copy, like all
backslash commands, does not route through SendQuery().  Looking into this
turned up several other weaknesses in psql's handling of COPY.  For example,
SendQuery() does not release the savepoint after an ordinary COPY:

[local] test=# set log_statement = 'all'; set client_min_messages = 'log'; copy (select 0) to stdout;
SET
SET
LOG:  statement: RELEASE pg_psql_temporary_savepoint
LOG:  statement: SAVEPOINT pg_psql_temporary_savepoint
LOG:  statement: copy (select 0) to stdout;
0

When psql sends a command string containing more than one command, at least
one of which is a COPY, we stop processing results at the first COPY:

[local] test=# set log_statement = 'all'; set client_min_messages = 'log'; copy (select 0) to stdout\; select 1/0; select 1;
SET
SET
LOG:  statement: RELEASE pg_psql_temporary_savepoint
LOG:  statement: SAVEPOINT pg_psql_temporary_savepoint
LOG:  statement: copy (select 0) to stdout; select 1/0;
0
LOG:  statement: select 1;
ERROR:  current transaction is aborted, commands ignored until end of transaction block

To make the above work normally, this patch improves SendQuery()-based COPY
command handling to process the entire queue of results whenever we've
processed a COPY.  It also brings sensible handling in the face of multiple
COPY commands in a single command string.  See the included test cases for
some examples.

We must prepare for COPY to fail for a local reason, like client-side malloc()
failure or network I/O problems.  The server will still have the connection in
a COPY mode, and we must get it out of that mode.  The \copy command was
prepared for the COPY FROM case with this code block:

	/*
	 * Make sure we have pumped libpq dry of results; else it may still be in
	 * ASYNC_BUSY state, leading to false readings in, eg, get_prompt().
	 */
	while ((result = PQgetResult(pset.db)) != NULL)
	{
		success = false;
		psql_error("\\copy: unexpected response (%d)\n",
				   PQresultStatus(result));
		/* if still in COPY IN state, try to get out of it */
		if (PQresultStatus(result) == PGRES_COPY_IN)
			PQputCopyEnd(pset.db, _("trying to exit copy mode"));
		PQclear(result);
	}

It arose from these threads:
http://archives.postgresql.org/pgsql-hackers/2006-11/msg00694.php
http://archives.postgresql.org/pgsql-general/2009-08/msg00268.php

However, psql still enters an infinite loop when COPY TO STDOUT encounters a
client-side failure, such as malloc() failure.  I've merged the above
treatment into lower-level routines, granting plain COPY commands similar
benefits, and fixed the COPY TO handling.  To help you test the corner cases
involved here, I've attached a gdb script that will inject client side
failures into both kinds of COPY commands.  Attach gdb to your psql process,
source the script, and compare the behavior of commands like these with and
without the patch:

\copy (select 0) to pstdout

create table t (c int);
\copy t from stdin
1
\.

With plain COPY handled thoroughly, I fixed \copy by having it override the
source or destination stream, then call SendQuery().  This gets us support for
ON_ERROR_ROLLBACK, \timing, \set ECHO and \set SINGLESTEP for free.  I found
it reasonable to treat \copy's SQL command more like a user-entered command
than an internal command, because \copy is such a thin wrapper around COPY
FROM STDIN/COPY TO STDOUT.

This patch makes the second argument of PSQLexec() superfluous, but I have not
removed that argument.

Incidentally, psql's common.c had several switches on PQresultStatus(res) that
did not include a case for PGRES_COPY_BOTH and also silently assumed any
unlisted status was some kind of error.  I tightened these to distinguish all
known statuses and give a diagnostic upon receiving an unknown status.

Thanks,
nm

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 69fac83..c7d024a 100644
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
*************** exec_command(const char *cmd,
*** 320,344 ****
  	/* \copy */
  	else if (pg_strcasecmp(cmd, "copy") == 0)
  	{
- 		/* Default fetch-it-all-and-print mode */
- 		instr_time	before,
- 					after;
- 
  		char	   *opt = psql_scan_slash_option(scan_state,
  										 OT_WHOLE_LINE, NULL, false);
  
- 		if (pset.timing)
- 			INSTR_TIME_SET_CURRENT(before);
- 
  		success = do_copy(opt);
- 
- 		if (pset.timing && success)
- 		{
- 			INSTR_TIME_SET_CURRENT(after);
- 			

[HACKERS] Patch: Allow SQL-language functions to reference parameters by parameter name

2012-01-14 Thread Matthew Draper

I just remembered to make time to advance this from WIP to proposed
patch this week... and then worked out I'm rudely dropping it into the
last commitfest at the last minute. :/


Anyway, my interpretation of the previous discussion is a general
consensus that permitting ambiguous parameter/column references is
somewhat unsavoury, but better than the alternatives:

http://archives.postgresql.org/pgsql-hackers/2011-04/msg00433.php
http://archives.postgresql.org/pgsql-hackers/2011-04/msg00744.php
(and surrounds)


The attached patch is essentially unchanged from my WIP version; it's
updated to current master (d0dcb31), and fixes a trivial copy/paste
error. It passes `make check`.
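
For context, a minimal example of what the patch enables (my illustration,
not taken from the patch; the table and function names are hypothetical):

CREATE FUNCTION item_price(item_id int) RETURNS numeric AS $$
    -- 'item_id' refers to the function parameter by name,
    -- where previously only $1 would have worked
    SELECT price FROM items WHERE items.id = item_id;
$$ LANGUAGE sql;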

Also attached is a rather light doc patch, which I've separated because
I'm hesitant about which approach to take. The attached version just
changes the existing mention of naming parameters in:

http://www.postgresql.org/docs/9.1/static/xfunc-sql.html#XFUNC-NAMED-PARAMETERS

It presumably ought to be clearer about the name resolution
priorities... in a quick look, I failed to locate the corresponding
discussion of column name references to link to (beyond a terse sentence
in 4.2.1).

The alternative would be to adjust most of the examples in section 35.4,
using parameter names as the preferred option, and thus presenting $n as
the alternative.

I'm happy to do that, but I figure it'd be a bit presumptuous to present
such a patch without some discussion; it's effectively rewriting the
project's opinion of how to reference function parameters.



With regard to the discussion about aliasing the function name when used
as a qualifier
(http://archives.postgresql.org/pgsql-hackers/2011-04/msg00871.php), my
only suggestion would be that using $0 (as in, '$0.paramname') appears
safe -- surely any spec change causing it issues would equally affect
the existing $1 etc. '$.paramname' seems to look better, but presumably
runs into trouble by looking like an operator.

That whole discussion seems above my pay grade, however.


Original WIP post:

http://archives.postgresql.org/pgsql-hackers/2011-03/msg01479.php

This is an open TODO:

http://wiki.postgresql.org/wiki/Todo#SQL-Language_Functions



I've just added the above post to the CF app; I'll update it to point to
this one once it appears.



Thanks!

Matthew


-- 
matt...@trebex.net
diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c
index 5642687..74f3e7d 100644
*** a/src/backend/executor/functions.c
--- b/src/backend/executor/functions.c
*************** typedef SQLFunctionCache *SQLFunctionCac
*** 115,121 ****
--- 115,123 ----
   */
  typedef struct SQLFunctionParseInfo
  {
+ 	char	   *name;			/* function's name */
  	Oid		   *argtypes;		/* resolved types of input arguments */
+ 	char	  **argnames;		/* names of input arguments */
  	int			nargs;			/* number of input arguments */
  	Oid			collation;		/* function's input collation, if known */
  }	SQLFunctionParseInfo;
*************** typedef struct SQLFunctionParseInfo
*** 123,128 ****
--- 125,132 ----
  
  /* non-export function prototypes */
  static Node *sql_fn_param_ref(ParseState *pstate, ParamRef *pref);
+ static Node *sql_fn_post_column_ref(ParseState *pstate, ColumnRef *cref, Node *var);
+ static Node *sql_fn_param_ref_num(ParseState *pstate, SQLFunctionParseInfoPtr pinfo, int paramno, int location);
  static List *init_execution_state(List *queryTree_list,
  	 SQLFunctionCachePtr fcache,
  	 bool lazyEvalOK);
*************** prepare_sql_fn_parse_info(HeapTuple proc
*** 162,167 ****
--- 166,172 ----
  	int			nargs;
  
  	pinfo = (SQLFunctionParseInfoPtr) palloc0(sizeof(SQLFunctionParseInfo));
+ 	pinfo->name = NameStr(procedureStruct->proname);
  
  	/* Save the function's input collation */
  	pinfo->collation = inputCollation;
*************** prepare_sql_fn_parse_info(HeapTuple proc
*** 200,205 ****
--- 205,240 ----
  		pinfo->argtypes = argOidVect;
  	}
  
+ 	if (nargs > 0)
+ 	{
+ 		Datum		proargnames;
+ 		Datum		proargmodes;
+ 		int			argnum;
+ 		int			n_arg_names;
+ 		bool		isNull;
+ 
+ 		proargnames = SysCacheGetAttr(PROCNAMEARGSNSP, procedureTuple,
+ 	  Anum_pg_proc_proargnames,
+ 	  &isNull);
+ 		if (isNull)
+ 			proargnames = PointerGetDatum(NULL);	/* just to be sure */
+ 
+ 		proargmodes = SysCacheGetAttr(PROCNAMEARGSNSP, procedureTuple,
+ 	  Anum_pg_proc_proargmodes,
+ 	  &isNull);
+ 		if (isNull)
+ 			proargmodes = PointerGetDatum(NULL);	/* just to be sure */
+ 
+ 		n_arg_names = get_func_input_arg_names(proargnames, proargmodes, &pinfo->argnames);
+ 
+ 		if (n_arg_names < nargs)
+ 			pinfo->argnames = NULL;
+ 	}
+ 	else
+ 	{
+ 		pinfo->argnames = NULL;
+ 	}
+ 
  	return pinfo;
  }
  
*************** prepare_sql_fn_parse_info(HeapTuple proc
*** 209,222 ****
  void
  sql_fn_parser_setup(struct ParseState *pstate, SQLFunctionParseInfoPtr pinfo)
  {
- 	/* Later we might use these hooks to support parameter names */
  	pstate->p_pre_columnref_hook = NULL;
! 	

Re: [HACKERS] Remembering bug #6123

2012-01-14 Thread Kevin Grittner
Tom Lane  wrote:
 
 Well, the bottom line that's concerning me here is whether throwing
 errors is going to push anyone's application into an unfixable
 corner.  I'm somewhat encouraged that your Circuit Courts software
 can adapt to it, since that's certainly one of the larger and more
 complex applications out there. Or at least I would be if you had
 actually verified that the CC code was okay with the recently-
 proposed patch versions. Do you have any thorough tests you can run
 against whatever we end up with?
 
In spite of several attempts over the years to come up with automated
tests of our applications at a level that would show these issues,
we're still dependent on business analysts to run through a standard
list of tests for each release, plus tests designed to exercise code
modified for the release under test.  For the release where we went
to PostgreSQL triggers, that included running lists against the
statistics tables to see which trigger functions had not yet been
exercised in testing, until we had everything covered.
 
To test the new version of this patch, we would need to pick an
application release, and use the patch through the development,
testing, and staging cycles,  We would need to look for all triggers
needing adjustment, and make the necessary changes.  We would need to
figure out which triggers were important to cover, and ensure that
testing covered all of them.
 
Given the discussions with my new manager this past week, I'm pretty
sure we can work this into a release that would complete testing and
hit pilot deployment in something like three months, give or take a
little.  I can't actually make any promises on that until I talk to
her next week.
 
From working through all the BEFORE triggers with UPDATE or DELETE
statements this summer, I'm pretty confident that the ones which
remain can be handled by reissuing the DELETE (we didn't keep any of
the UPDATEs with this pattern) and returning NULL.  The most
complicated and troublesome trigger code has to do with purging old
data.  I suspect that if we include testing of all purge processes in
that release cycle, we'll shake out just about any issues we have.
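 
For anyone following along, the pattern in question is roughly the
following -- a hypothetical sketch, not our actual trigger code; it uses
pg_trigger_depth() (new in 9.2) to let the reissued DELETE through:
 
CREATE FUNCTION master_before_delete() RETURNS trigger AS $$
BEGIN
    IF pg_trigger_depth() > 1 THEN
        RETURN OLD;  -- our own reissued DELETE; let it proceed
    END IF;
    -- purge work that may indirectly update or delete the target row:
    DELETE FROM detail WHERE master_id = OLD.id;
    -- reissue the DELETE against the current row state, then return NULL
    -- to suppress the original operation
    DELETE FROM master WHERE id = OLD.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
 
CREATE TRIGGER master_cleanup BEFORE DELETE ON master
    FOR EACH ROW EXECUTE PROCEDURE master_before_delete();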
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-01-14 Thread Marco Nenciarini
Hello,

On Sun, 11/12/2011 at 19.45 -0500, Noah Misch wrote:
 On Sat, Dec 10, 2011 at 09:47:53AM +0100, Gabriele Bartolini wrote:
  So, here is a summary:
 
  --------------- --------- ---------
                 |   ON    |   ON    |
  Action         | DELETE  | UPDATE  |
  --------------- --------- ---------
  CASCADE        |   Row   |  Error  |
  SET NULL       |   Row   |   Row   |
  SET DEFAULT    |   Row   |   Row   |
  ARRAY CASCADE  | Element | Element |
  ARRAY SET NULL | Element | Element |
  NO ACTION      |    -    |    -    |
  RESTRICT       |    -    |    -    |
  --------------- --------- ---------
 
  If that's fine with you guys, Marco and I will refactor the development  
  based on these assumptions.
 
 Looks fine.
 

This is our latest version of the patch. Gabriele, Gianni and I have
discussed a lot and decided to send an initial patch which uses EACH
REFERENCES instead of ARRAY REFERENCES. The reason behind this is that
ARRAY REFERENCES generates an ambiguous grammar, and we all agreed that
EACH REFERENCES makes sense (and at the same time does not introduce any
new keyword). This is however open for discussion.


The patch now includes the following clauses on the delete/update
actions - as per previous emails:


--------------- --------- ---------
               |   ON    |   ON    |
Action         | DELETE  | UPDATE  |
--------------- --------- ---------
CASCADE        |   Row   |  Error  |
SET NULL       |   Row   |   Row   |
SET DEFAULT    |   Row   |   Row   |
ARRAY CASCADE  | Element | Element |
ARRAY SET NULL | Element | Element |
NO ACTION      |    -    |    -    |
RESTRICT       |    -    |    -    |
--------------- --------- ---------


We will resubmit the patch for the 2012-01 commit fest.


Thank you,
Marco

-- 
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it 



foreign-key-array-v2.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-01-14 Thread Ryan Kelly
On Tue, Jan 10, 2012 at 11:29:58AM +0200, Heikki Linnakangas wrote:
 On 09.01.2012 15:49, Ryan Kelly wrote:
 On Mon, Jan 09, 2012 at 10:35:50AM +0200, Heikki Linnakangas wrote:
 That assumes that it's safe to longjmp out of PQconnectdbParams at
 any instant. It's not.
 I'm guessing because it could result in a resource leak?
 
 Yes, and other unfinished business, too.
 
 I think you'd need to use the asynchronous connection functions
 PQconnectStartParams() and PQconnectPoll(), and select().
 New patch attached.
 
 Thanks, some comments:
 
 * Why do you need the timeout?
 
 * If a SIGINT arrives before you set sigint_interrupt_enabled, it
 just sets cancel_pressed but doesn't jump out of the connection
 attempt. You need to explicitly check cancel_pressed after setting
 sigint_interrupt_enabled to close that race condition.
 
 * You have to reinitialize the fd mask with FD_ZERO/SET before each
 call to select(). select() modifies the mask.
 
 * In case of PGRES_POLLING_WRITING, you have to wait until the
 socket becomes writable, not readable.
 
 Attached is a new version that fixes those.

Yup, I'm an idiot.

 
 There's one caveat in the libpq docs about PQconnectStart/PQconnectPoll:
 
 The connect_timeout connection parameter is ignored when using 
 PQconnectPoll; it is the application's responsibility to decide whether an 
 excessive amount of time has elapsed. Otherwise, PQconnectStart followed by 
 a PQconnectPoll loop is equivalent to PQconnectdb.
 
 So after this patch, connect_timeout will be ignored in \connect.
 That probably needs to be fixed. You could incorporate a timeout
 fairly easily into the select() calls, but unfortunately there's no
 easy way to get the connect_timeout value. You could to parse the
 connection string the user gave with PQconninfoParse(), but the
 effective timeout setting could come from a configuration file, too.
 
 Not sure what to do about that. If there was a
 PQconnectTimeout(conn) function, similar to PQuser(conn),
 PQhost(conn) et al, you could use that. Maybe we should add that, or
 even better, a generic function that could be used to return not
 just connect_timeout, but all the connection options in effect in a
 connection.

I have attached a new patch which handles the connect_timeout option by
adding a PQconnectTimeout(conn) function to access the connect_timeout
which I then use to retrieve the existing value from the old connection.

I also borrowed some code from other places:

* connectDBComplete had some logic surrounding handling the timeout
  value (src/interfaces/libpq/fe-connect.c:1402).
* The timeout code is lifted from pqSocketPoll which, if made public,
  could essentially replace all the select/timeout code in that loop
  (src/interfaces/libpq/fe-misc.c:1034).

Before I get flamed for duplicating code, yes, I know it's a bad thing
to do. If anyone has any guidance I'd be glad to implement their
suggestions.
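
To make the pattern concrete, here is a minimal, self-contained sketch of
the PQconnectStartParams()/PQconnectPoll()/select() loop under discussion,
folding in the review points above (re-zero the fd mask every iteration,
wait for writability on PGRES_POLLING_WRITING, honor a timeout). The
function name and the caller-supplied timeout are illustrative only, not
the patch itself:

#include <sys/select.h>
#include "libpq-fe.h"

static PGconn *
connect_interruptibly(const char *const *keywords,
					  const char *const *values, int timeout_secs)
{
	PGconn	   *conn = PQconnectStartParams(keywords, values, 1);
	PostgresPollingStatusType status = PGRES_POLLING_WRITING;

	if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
		return conn;			/* caller inspects the error, if any */

	while (status != PGRES_POLLING_OK && status != PGRES_POLLING_FAILED)
	{
		fd_set		mask;
		struct timeval tv;
		int			sock = PQsocket(conn);

		FD_ZERO(&mask);			/* select() modifies the mask: redo each time */
		FD_SET(sock, &mask);
		tv.tv_sec = timeout_secs;
		tv.tv_usec = 0;

		/* wait for read- or writability, whichever PQconnectPoll asked for */
		if (select(sock + 1,
				   status == PGRES_POLLING_READING ? &mask : NULL,
				   status == PGRES_POLLING_WRITING ? &mask : NULL,
				   NULL, timeout_secs > 0 ? &tv : NULL) <= 0)
			break;				/* timeout or caught signal: abandon the attempt */

		status = PQconnectPoll(conn);
	}
	return conn;				/* caller checks PQstatus(conn) == CONNECTION_OK */
}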

-Ryan Kelly

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 72c9384..0ee2df1 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1375,6 +1375,24 @@ char *PQoptions(const PGconn *conn);
      </para>
     </listitem>
    </varlistentry>
+
+    <varlistentry id="libpq-pqconnecttimeout">
+     <term>
+      <function>PQconnectTimeout</function>
+      <indexterm>
+       <primary>PQconnectTimeout</primary>
+      </indexterm>
+     </term>
+
+     <listitem>
+      <para>
+       Returns the connect_timeout property as given to libpq.
+<synopsis>
+char *PQconnectTimeout(const PGconn *conn);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
   </para>
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 69fac83..7ba22d0 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1516,7 +1516,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
 	PGconn	   *o_conn = pset.db,
-			   *n_conn;
+			   *n_conn = NULL;
 	char	   *password = NULL;
 
 	if (!dbname)
@@ -1549,7 +1549,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 
 	while (true)
 	{
-#define PARAMS_ARRAY_SIZE	8
+#define PARAMS_ARRAY_SIZE	9
 		const char **keywords = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*keywords));
 		const char **values = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*values));
 
@@ -1567,17 +1567,120 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		values[5] = pset.progname;
 		keywords[6] = "client_encoding";
 		values[6] = (pset.notty || getenv("PGCLIENTENCODING")) ? NULL : "auto";
-		keywords[7] = NULL;
-		values[7] = NULL;
+		keywords[7] = "connect_timeout";
+		values[7] = PQconnectTimeout(o_conn);
+		keywords[8] = NULL;
+		values[8] = NULL;
 
-		n_conn = PQconnectdbParams(keywords, values, true);
+		/* attempt connection asynchronously */
+		n_conn = PQconnectStartParams(keywords, values, true);
+
+		if (sigsetjmp(sigint_interrupt_jmp, 1) != 0)
+		{
+			/* interrupted during connection attempt */
+			PQfinish(n_conn);
+			n_conn = NULL;
+		}
+		else
+		

Re: [HACKERS] Measuring relation free space

2012-01-14 Thread Jaime Casanova
On Sat, Jan 14, 2012 at 6:26 AM, Noah Misch n...@leadboat.com wrote:

 - pgstattuple() and relation_free_space() should emit the same number, even if
  that means improving pgstattuple() at the same time.

yes, i just wanted to understand which one was more accurate and
why... and give the opportunity for anyone to point my error if any

 - relation_free_space() belongs in the pgstattuple extension, because its role
  is cheaper access to a single pgstattuple() metric.


oh! right! so, what about just fixing the free_percent that
pgstattuple is providing

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2012-01-14 Thread Pavel Stehule
2012/1/14 Andrew Dunstan and...@dunslane.net:


 On 01/12/2012 10:51 AM, Andrew Dunstan wrote:



 On 01/12/2012 10:44 AM, Pavel Stehule wrote:

 2012/1/12 Andrew Dunstan and...@dunslane.net:


 On 01/12/2012 09:00 AM, Joey Adams wrote:

 I wrote an array_to_json function during GSoC 2010:



 http://git.postgresql.org/gitweb/?p=json-datatype.git;a=blob;f=json_io.c#l289

 It's not exposed as a procedure called array_to_json: it's part of the
 to_json function, which decides what to do based on the argument type.


 Excellent, this is just the point at which I stopped work last night, so
 with your permission I'll steal this and it will save me a good chunk of
 time.

 this should be little bit more enhanced to support a row arrays - it
 can be merged with some routines from pst tool
 http://okbob.blogspot.com/2010/11/new-version-of-pst-collection-is.html



 I will be covering composites.



 OK, here's a patch that does both query_to_json and array_to_json, along
 with docs and regression tests. It include Robert's original patch, although
 I can produce a differential patch if required. It can also be pulled from
 https://bitbucket.org/adunstan/pgdevel

 A couple of things to note. First, the problem about us losing column names
 that I noted a couple of months ago and Tom did a bit of work on is
 exercised by this. We really need to fix it. Example:


Support for SELECT ROW (x AS "real name", y AS "real name") is a good idea
and could be used in more places than just here.

Regards

Pavel


   andrew=# select array_to_json(array_agg(row(z.*)))
            from (select $$a$$ || x as b,
                         y as c,
                         array[row(x.*,array[1,2,3]),
                               row(y.*,array[4,5,6])] as z
                  from generate_series(1,1) x,
                       generate_series(4,4) y) z;
                                 array_to_json
   ------------------------------------------------------------------------
    [{"f1":"a1","f2":4,"f3":[{"f1":1,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]}]
   (1 row)


 Here we've lost b, c and z as column names.

Second, what should we do when the database encoding isn't UTF8? I'm
inclined to emit a \uXXXX escape for any non-ASCII character (assuming it
 has a unicode code point - are there any code points in the non-unicode
 encodings that don't have unicode equivalents?). The alternative would be to
 fail on non-ASCII characters, which might be ugly. Of course, anyone wanting
 to deal with JSON should be using UTF8 anyway, but we still have to deal
 with these things. What about SQL_ASCII? If there's a non-ASCII sequence
 there we really have no way of telling what it should be. There at least I
 think we should probably error out.

 cheers

 andrew




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command Triggers

2012-01-14 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 But on the flip side, I think we're generally a bit more flexible
 about exposing things via C than through the procedural languages.

So we could still expose the parsetree of the current command.  I wonder
if it's already possible to get that from a C coded trigger, but I'll
admit I'm yet to code a trigger in C.  Will look into that.

Then as Andres proposed, a new function would be available to get the
value, we're not changing the trigger procedure function API in case the
language is C…
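
For reference: an ordinary C-coded DML trigger already receives its extra
state through fcinfo->context rather than through SQL-visible arguments,
which is why a parse tree could be exposed to C triggers without touching
the PL-facing function API. A minimal skeleton of that existing API
(illustrative only, not the proposed command-trigger interface):

#include "postgres.h"
#include "fmgr.h"
#include "commands/trigger.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_c_trigger);

Datum
my_c_trigger(PG_FUNCTION_ARGS)
{
	TriggerData *trigdata = (TriggerData *) fcinfo->context;

	/* make sure the trigger manager, not plain SQL, invoked us */
	if (!CALLED_AS_TRIGGER(fcinfo))
		elog(ERROR, "my_c_trigger: not called by trigger manager");

	/* tg_trigtuple is the row the event fired for; return it unchanged */
	return PointerGetDatum(trigdata->tg_trigtuple);
}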

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] lots of unused variable warnings in assert-free builds

2012-01-14 Thread Peter Eisentraut
In builds without --enable-cassert (I guess not many developers use
those a lot), there are quite a few unused variable warnings.  These
usually hold some intermediate result that the assert checks later.  I
see that in some places our code already uses #ifdef
USE_ASSERT_CHECKING, presumably to hide similar issues.  But in most
cases using this would significantly butcher the code.  I found that
adding __attribute__((unused)) is cleaner.  Attached is a patch that
cleans up all the warnings I encountered.
diff --git i/src/backend/access/hash/hashovfl.c w/src/backend/access/hash/hashovfl.c
index 130c296..d4329fb 100644
--- i/src/backend/access/hash/hashovfl.c
+++ w/src/backend/access/hash/hashovfl.c
@@ -391,7 +391,9 @@ _hash_freeovflpage(Relation rel, Buffer ovflbuf,
 	uint32		ovflbitno;
 	int32		bitmappage,
 bitmapbit;
+#ifdef USE_ASSERT_CHECKING
 	Bucket		bucket;
+#endif
 
 	/* Get information from the doomed page */
 	_hash_checkpage(rel, ovflbuf, LH_OVERFLOW_PAGE);
@@ -400,7 +402,9 @@ _hash_freeovflpage(Relation rel, Buffer ovflbuf,
 	ovflopaque = (HashPageOpaque) PageGetSpecialPointer(ovflpage);
 	nextblkno = ovflopaque->hasho_nextblkno;
 	prevblkno = ovflopaque->hasho_prevblkno;
+#ifdef USE_ASSERT_CHECKING
 	bucket = ovflopaque->hasho_bucket;
+#endif
 
 	/*
 	 * Zero the page for debugging's sake; then write and release it. (Note:
diff --git i/src/backend/executor/execCurrent.c w/src/backend/executor/execCurrent.c
index b07161f..2a59fc6 100644
--- i/src/backend/executor/execCurrent.c
+++ w/src/backend/executor/execCurrent.c
@@ -151,7 +151,7 @@ execCurrentOf(CurrentOfExpr *cexpr,
 	{
 		ScanState  *scanstate;
 		bool		lisnull;
-		Oid			tuple_tableoid;
+		Oid			tuple_tableoid __attribute__((unused));
 		ItemPointer tuple_tid;
 
 		/*
diff --git i/src/backend/executor/nodeMaterial.c w/src/backend/executor/nodeMaterial.c
index b320b54..4ab660e 100644
--- i/src/backend/executor/nodeMaterial.c
+++ w/src/backend/executor/nodeMaterial.c
@@ -66,7 +66,7 @@ ExecMaterial(MaterialState *node)
 			 * Allocate a second read pointer to serve as the mark. We know it
 			 * must have index 1, so needn't store that.
 			 */
-			int			ptrno;
+			int			ptrno __attribute__((unused));
 
 			ptrno = tuplestore_alloc_read_pointer(tuplestorestate,
  node->eflags);
diff --git i/src/backend/executor/nodeSetOp.c w/src/backend/executor/nodeSetOp.c
index 7fa5730..c6a88a8 100644
--- i/src/backend/executor/nodeSetOp.c
+++ w/src/backend/executor/nodeSetOp.c
@@ -344,7 +344,7 @@ setop_fill_hash_table(SetOpState *setopstate)
 	SetOp	   *node = (SetOp *) setopstate-ps.plan;
 	PlanState  *outerPlan;
 	int			firstFlag;
-	bool		in_first_rel;
+	bool		in_first_rel __attribute__((unused));
 
 	/*
 	 * get state info from node
diff --git i/src/backend/executor/nodeWorktablescan.c w/src/backend/executor/nodeWorktablescan.c
index e2f3dd4..e72d1cb 100644
--- i/src/backend/executor/nodeWorktablescan.c
+++ w/src/backend/executor/nodeWorktablescan.c
@@ -30,7 +30,7 @@ static TupleTableSlot *
 WorkTableScanNext(WorkTableScanState *node)
 {
 	TupleTableSlot *slot;
-	EState	   *estate;
+	EState	   *estate __attribute__((unused));
 	Tuplestorestate *tuplestorestate;
 
 	/*
diff --git i/src/backend/libpq/be-fsstubs.c w/src/backend/libpq/be-fsstubs.c
index b864c86..1bf765c 100644
--- i/src/backend/libpq/be-fsstubs.c
+++ w/src/backend/libpq/be-fsstubs.c
@@ -378,7 +378,7 @@ lo_import_internal(text *filename, Oid lobjOid)
 {
 	File		fd;
 	int			nbytes,
-tmp;
+tmp __attribute__((unused));
 	char		buf[BUFSIZE];
 	char		fnamebuf[MAXPGPATH];
 	LargeObjectDesc *lobj;
diff --git i/src/backend/libpq/pqcomm.c w/src/backend/libpq/pqcomm.c
index 35812f4..df41d60 100644
--- i/src/backend/libpq/pqcomm.c
+++ w/src/backend/libpq/pqcomm.c
@@ -1373,7 +1373,7 @@ fail:
 void
 pq_putmessage_noblock(char msgtype, const char *s, size_t len)
 {
-	int			res;
+	int			res __attribute__((unused));
 	int			required;
 
 	/*
diff --git i/src/backend/optimizer/path/costsize.c w/src/backend/optimizer/path/costsize.c
index e1c070e..fd16cb1 100644
--- i/src/backend/optimizer/path/costsize.c
+++ w/src/backend/optimizer/path/costsize.c
@@ -3245,7 +3245,7 @@ void
 set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)
 {
 	PlannerInfo *subroot = rel->subroot;
-	RangeTblEntry *rte;
+	RangeTblEntry *rte __attribute__((unused));
 	ListCell   *lc;
 
 	/* Should only be applied to base relations that are subqueries */
diff --git i/src/backend/optimizer/plan/createplan.c w/src/backend/optimizer/plan/createplan.c
index e41d2a6..7dccf7d 100644
--- i/src/backend/optimizer/plan/createplan.c
+++ w/src/backend/optimizer/plan/createplan.c
@@ -1820,7 +1820,7 @@ create_foreignscan_plan(PlannerInfo *root, ForeignPath *best_path,
 	ForeignScan *scan_plan;
 	RelOptInfo *rel = best_path->path.parent;
 	Index		scan_relid = rel->relid;
-	RangeTblEntry *rte;
+	RangeTblEntry *rte __attribute__((unused));
 	bool		fsSystemCol;
 	int			i;
 
diff --git i/src/backend/parser/analyze.c 

Re: [HACKERS] separate initdb -A options for local and host

2012-01-14 Thread Peter Eisentraut
On lör, 2011-11-26 at 01:20 +0200, Peter Eisentraut wrote:
 I think it would be useful to have separate initdb -A options for local
 and host entries.  In 9.1, we went out of our way to separate the peer
 and ident methods, but we have moved the confusion into the initdb -A
 option, where ident sometimes means peer, and peer sometimes means
 ident.  Moreover, having separate options would allow what I think
 would be a far more common use case, namely having local peer and host
 something other than ident, such as md5.
 
 I'm thinking, we could keep the existing -A option, but add long options
 such as --auth-local and --auth-host, to specify more detail.

Here is a patch that implements exactly that.
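
For illustration, with the patch applied the common case described above
could be requested like this (hypothetical invocation; the long options
and method names are the ones the patch introduces):

initdb -D /usr/local/pgsql/data --auth-local=peer --auth-host=md5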
diff --git i/doc/src/sgml/ref/initdb.sgml w/doc/src/sgml/ref/initdb.sgml
index d816c21..08a3b86 100644
--- i/doc/src/sgml/ref/initdb.sgml
+++ w/doc/src/sgml/ref/initdb.sgml
@@ -118,10 +118,33 @@ PostgreSQL documentation
   <term><option>--auth=<replaceable class="parameter">authmethod</replaceable></option></term>
   <listitem>
    <para>
-     This option specifies the authentication method for local users
-     used in <filename>pg_hba.conf</>.  Do not use <literal>trust</>
-     unless you trust all local users on your system.  <literal>Trust</>
-     is the default for ease of installation.
+     This option specifies the authentication method for local users used
+     in <filename>pg_hba.conf</> (<literal>host</literal>
+     and <literal>local</literal> lines).  Do not use <literal>trust</>
+     unless you trust all local users on your system.  <literal>Trust</> is
+     the default for ease of installation.
+    </para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><option>--auth-host=<replaceable class="parameter">authmethod</replaceable></option></term>
+   <listitem>
+    <para>
+     This option specifies the authentication method for local users via
+     TCP/IP connections used in <filename>pg_hba.conf</>
+     (<literal>host</literal> lines).
+    </para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><option>--auth-local=<replaceable class="parameter">authmethod</replaceable></option></term>
+   <listitem>
+    <para>
+     This option specifies the authentication method for local users via
+     Unix-domain socket connections used in <filename>pg_hba.conf</>
+     (<literal>local</literal> lines).
     </para>
    </listitem>
   </varlistentry>
diff --git i/src/backend/libpq/pg_hba.conf.sample w/src/backend/libpq/pg_hba.conf.sample
index 0a50905..a12ba26 100644
--- i/src/backend/libpq/pg_hba.conf.sample
+++ w/src/backend/libpq/pg_hba.conf.sample
@@ -79,11 +79,11 @@
 @remove-line-for-nolocal@# local is for Unix domain socket connections only
 @remove-line-for-nolocal@local   all             all                                     @authmethodlocal@
 # IPv4 local connections:
-host    all             all             127.0.0.1/32            @authmethod@
+host    all             all             127.0.0.1/32            @authmethodhost@
 # IPv6 local connections:
-host    all             all             ::1/128                 @authmethod@
+host    all             all             ::1/128                 @authmethodhost@
 # Allow replication connections from localhost, by a user with the
 # replication privilege.
 @remove-line-for-nolocal@#local   replication     @default_username@                @authmethodlocal@
-#host    replication     @default_username@        127.0.0.1/32            @authmethod@
-#host    replication     @default_username@        ::1/128                 @authmethod@
+#host    replication     @default_username@        127.0.0.1/32            @authmethodhost@
+#host    replication     @default_username@        ::1/128                 @authmethodhost@
diff --git i/src/bin/initdb/initdb.c w/src/bin/initdb/initdb.c
index 9df2656..21ced98 100644
--- i/src/bin/initdb/initdb.c
+++ w/src/bin/initdb/initdb.c
@@ -64,6 +64,34 @@
 /* Ideally this would be in a .h file, but it hardly seems worth the trouble */
 extern const char *select_default_timezone(const char *share_path);
 
+static const char *auth_methods_host[] = {"trust", "reject", "md5", "password", "ident", "radius",
+#ifdef ENABLE_GSS
+   "gss",
+#endif
+#ifdef ENABLE_SSPI
+   "sspi",
+#endif
+#ifdef KRB5
+   "krb5",
+#endif
+#ifdef USE_PAM
+   "pam", "pam ",
+#endif
+#ifdef USE_LDAP
+   "ldap",
+#endif
+#ifdef USE_SSL
+   "cert",
+#endif
+   NULL};
+static const char *auth_methods_local[] = {"trust", "reject", "md5", "password", "peer", "radius",
+#ifdef USE_PAM
+   "pam", "pam ",
+#endif
+#ifdef USE_LDAP
+   "ldap",
+#endif
+	NULL};
 
 /*
  * these values are passed in by makefile defines
 static const char *default_text_search_config = "";
 static char *username = "";
 static bool pwprompt = false;
 static char *pwfilename = NULL;
-static char *authmethod = "";
-static char *authmethodlocal = "";
+static const char *authmethodhost = "";
+static 

Re: [HACKERS] WIP -- renaming implicit sequences

2012-01-14 Thread Thomas Munro
On 12 January 2012 00:58, Tom Lane t...@sss.pgh.pa.us wrote:
 Hmm ... this seems a bit inconsistent with the fact that we got rid of
 automatic renaming of indexes a year or three back.  Won't renaming of
 serials have all the same problems that caused us to give up on renaming
 indexes?

I was sort of planning to do something similar for constraints (once
the patch to support renaming constraints lands) and indexes (I didn't
know they'd previously been automatically renamed and that had been
dropped).

Would you say that I should abandon this, no chance of being accepted?
 Is there a technical problem I'm missing, other than the gap between
unique name generation and execution of the implicit ALTERs?

Maybe I should look into writing a 'tidy rename' procedure for tables
and columns instead, rather than modifying the behaviour of core ALTER
TABLE.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2012-01-14 Thread Joey Adams
On Sat, Jan 14, 2012 at 3:06 PM, Andrew Dunstan and...@dunslane.net wrote:
 Second, what should we do when the database encoding isn't UTF8? I'm
 inclined to emit a \uXXXX escape for any non-ASCII character (assuming it
 has a unicode code point - are there any code points in the non-unicode
 encodings that don't have unicode equivalents?). The alternative would be to
 fail on non-ASCII characters, which might be ugly. Of course, anyone wanting
 to deal with JSON should be using UTF8 anyway, but we still have to deal
 with these things. What about SQL_ASCII? If there's a non-ASCII sequence
 there we really have no way of telling what it should be. There at least I
 think we should probably error out.

I don't think there is a satisfying solution to this problem.  Things
working against us:

 * Some server encodings support characters that don't map to Unicode
characters (e.g. unused slots in Windows-1252).  Thus, converting to
UTF-8 and back is lossy in general.

 * We want a normalized representation for comparison.  This will
involve a mixture of server and Unicode characters, unless the
encoding is UTF-8.

 * We can't efficiently convert individual characters to and from
Unicode with the current API.

 * What do we do about \u0000?  TEXT datums cannot contain NUL characters.

I'd say just ban Unicode escapes and non-ASCII characters unless the
server encoding is UTF-8, and ban all \u0000 escapes.  It's easy, and
whatever we support later will be a superset of this.

Strategies for handling this situation have been discussed in prior
emails.  This is where things got stuck last time.

- Joey

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dry-run mode for pg_archivecleanup

2012-01-14 Thread Josh Kupershmidt
On Sun, Dec 11, 2011 at 9:52 AM, Gabriele Bartolini
gabriele.bartol...@2ndquadrant.it wrote:
 Hi guys,

  I have added the '-n' option to pg_archivecleanup which performs a dry-run
 and outputs the names of the files to be removed to stdout (making possible
 to pass the list via pipe to another process).

  Please find attached the small patch. I submit it to the CommitFest.

Hi Gabriele,

I have signed on to review this patch for the 2012-01 CF. The patch
applies cleanly, includes the necessary documentation, and implements
a useful feature.

I think the actual debugging line:

+   fprintf(stdout, "%s\n", WALFilePath);

might need to be tweaked. First, it's printing to stdout, and I think
pg_archivecleanup intentionally sends all its output to stderr, so
that it may show up in the postmaster log. (I expect the dry-run mode
would often be used to test out an archive_cleanup_command, instead of
only in stand-alone mode, where stdout would be fine.)

Also, I think the actual message should be something a little more
descriptive than just the WALFilePath. In debug mode,
pg_archivecleanup prints out a message like:

fprintf(stderr, "%s: removing file \"%s\"\n",
progname, WALFilePath);

I think we'd want to print something similar, i.e. a "would remove file ..." message.

Oh, and I think the "removing file ..." debug message above should not
be printed in dry-run mode, lest we confuse the admin.
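
To make that concrete, here is one shape the per-file logic could take (a
sketch against the names visible in the excerpts above; the dryrun flag is
assumed). Whether the dry-run line belongs on stdout, for piping, or on
stderr, for the postmaster log, is exactly the open question:

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

static void
CleanupOneFile(const char *progname, const char *WALFilePath, bool dryrun)
{
	if (dryrun)
	{
		/* report only; stdout so the list can be piped to another process */
		printf("%s: would remove file \"%s\"\n", progname, WALFilePath);
		return;
	}
	fprintf(stderr, "%s: removing file \"%s\"\n", progname, WALFilePath);
	unlink(WALFilePath);
}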

Other than that, everything looks good.

Josh

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2012-01-14 Thread Andrew Dunstan



On 01/14/2012 06:11 PM, Joey Adams wrote:

On Sat, Jan 14, 2012 at 3:06 PM, Andrew Dunstan and...@dunslane.net wrote:

Second, what should we do when the database encoding isn't UTF8? I'm
inclined to emit a \uXXXX escape for any non-ASCII character (assuming it
has a unicode code point - are there any code points in the non-unicode
encodings that don't have unicode equivalents?). The alternative would be to
fail on non-ASCII characters, which might be ugly. Of course, anyone wanting
to deal with JSON should be using UTF8 anyway, but we still have to deal
with these things. What about SQL_ASCII? If there's a non-ASCII sequence
there we really have no way of telling what it should be. There at least I
think we should probably error out.

I don't think there is a satisfying solution to this problem.  Things
working against us:

  * Some server encodings support characters that don't map to Unicode
characters (e.g. unused slots in Windows-1252).  Thus, converting to
UTF-8 and back is lossy in general.

  * We want a normalized representation for comparison.  This will
involve a mixture of server and Unicode characters, unless the
encoding is UTF-8.

  * We can't efficiently convert individual characters to and from
Unicode with the current API.

  * What do we do about \u0000?  TEXT datums cannot contain NUL characters.

I'd say just ban Unicode escapes and non-ASCII characters unless the
server encoding is UTF-8, and ban all \u0000 escapes.  It's easy, and
whatever we support later will be a superset of this.

Strategies for handling this situation have been discussed in prior
emails.  This is where things got stuck last time.




Well, from where I'm coming from, nuls are not a problem. But
escape_json() is currently totally encoding-unaware. It produces \uXXXX
escapes for low-ASCII characters, and just passes through characters
with the high bit set. That's possibly OK for EXPLAIN output - we really
don't want EXPLAIN failing. But maybe we should ban JSON output for
EXPLAIN if the encoding isn't UTF8.


Another question in my mind is what to do when the client encoding isn't 
UTF8.


None of these is an insurmountable problem, ISTM - we just need to make 
some decisions.
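
As a concrete illustration of the \uXXXX escaping under discussion, a
minimal standalone sketch that renders a string as ASCII-only JSON. It
assumes valid UTF-8 input and sidesteps the database-encoding questions
raised above; it is not the existing escape_json():

#include <stdio.h>

static void
json_escape_ascii(const unsigned char *s, FILE *out)
{
	fputc('"', out);
	while (*s)
	{
		unsigned int cp = *s;
		int			extra = 0;

		/* decode one UTF-8 sequence into a code point */
		if (cp >= 0xF0)
		{
			cp &= 0x07;
			extra = 3;
		}
		else if (cp >= 0xE0)
		{
			cp &= 0x0F;
			extra = 2;
		}
		else if (cp >= 0xC0)
		{
			cp &= 0x1F;
			extra = 1;
		}
		s++;
		while (extra-- > 0)
			cp = (cp << 6) | (*s++ & 0x3F);

		if (cp == '"' || cp == '\\')
			fprintf(out, "\\%c", cp);
		else if (cp >= 0x20 && cp < 0x7F)
			fputc((int) cp, out);
		else if (cp <= 0xFFFF)
			fprintf(out, "\\u%04X", cp);	/* controls and non-ASCII */
		else
		{
			/* beyond the BMP: encode as a UTF-16 surrogate pair */
			cp -= 0x10000;
			fprintf(out, "\\u%04X\\u%04X",
					0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF));
		}
	}
	fputc('"', out);
}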


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2012-01-14 Thread Mike Lewis
I am very interested in experimenting with functional indexes into JSON
structures.  I think this could be very powerful combined with full text
search as well as constraints.  It would allow for using postgres as an
unstructured data store without sacrificing the powerful indexing features,
durability, and transactional semantics that comes with using postgres or
RDBMSes in general.

One use case in particular I have been trying to solve for lately is
persisting and synchronizing client-side state (in a mobile application)
with a server.  It would be nice to have a flat, schemaless table (maybe a
table that's like (id, type, owner, data) where data would be a JSON blob).
 I could do this now without JSON support, but I think indexing inside that
JSON blob and having validation database side is valuable as well.  And as
I mentioned before, i'd rather not throw out the baby with the bathwater by
using a different type of database because ACID, replication, and
constraints are also very important to this.  As is being consistent with
the rest of our technology stack. (I'd essentially be using a relational
database to persist an object database)

I'm also not too concerned about storage consumption with this (even though
columnar compression would help a lot in the future) since it's easily
partitionable by user ID.

For my case the equivalent of postgres's XPath would work.  Also having it
as a maintained contrib module would be sufficient, although it being part
of core as XPath is would be even better.


Just my $0.02... even if I'm a bit late to the conversation.

Thanks!
Mike


Re: [HACKERS] xlog location arithmetic

2012-01-14 Thread Greg Smith

On 01/14/2012 09:12 AM, Euler Taveira de Oliveira wrote:

But I must adjust the patch based on the thread comments (basically,
numeric output). I have a new patch but need to test it before submitting it.
I'll post this weekend.


It's now at https://commitfest.postgresql.org/action/patch_view?id=776 
listed as waiting on you right now.  It's good to put patches into the 
CF application early.  Helps planning, and gives a safety net against 
all sorts of things.  We wouldn't want something this obviously useful 
to get kicked out if, for example, you lost your Internet connection 
over the weekend and then didn't technically qualify as having submitted 
it there before the deadline.  As someone who sweated today for two 
hours when my power at home was turned off to install a new circuit 
breaker, I'm feeling particularly paranoid right now about that sort of 
thing here.


The fact that you got some review feedback before the official CF start 
doesn't mean you can't be listed there right now.  In fact, those are 
things I like to see tracked.  Having links to all of the e-mail 
messages that were important turning points for a feature that changed 
during review is very helpful to reviewers and committers.  And the 
easiest way to keep up with that is to start as early as possible:  add 
it to the app right after the first patch submission.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Why is CF 2011-11 still listed as In Progress?

2012-01-14 Thread Greg Smith

On 01/09/2012 09:56 PM, Greg Smith wrote:
The main question still lingering about is the viability of pushing 
out an 9.2alpha3 at this point.  That was originally scheduled for 
December 20th.  There was a whole lot of active code whacking still in 
progress that week though.  And as soon as that settled (around the 
30th), there was a regular flurry of bug fixes for a solid week 
there.  A quick review of recent activity suggests right now might 
finally be a good time to at least tag alpha3; exactly what to do 
about releasing the result I don't have a good suggestion for.


I would have sworn I left this next to the bike shed...from the crickets 
chirping I guess not.  I did complete bumping forward the patches that 
slipped through the November CF the other day, and it's properly closed now.


As for CF 2012-01, I had thought Robert Haas was going to run that one.  
My saying that is not intended to put him on the hook.  Normally we'd 
have an official deadline announcement by now too, which as one of the 
notable lagging cat herders I'm content to absorb a chunk of blame for.


If someone wants to advocate an early time for the official cut-off 
tomorrow, don't let me stop you.  But since this last one for 9.2 is 
"too big to fail" for me, I'm happy to take care of the announcement 
myself as the 15th comes to an end relative to PST time tomorrow.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] xlog location arithmetic

2012-01-14 Thread Gurjeet Singh
On Sat, Jan 14, 2012 at 8:18 PM, Greg Smith g...@2ndquadrant.com wrote:

 On 01/14/2012 09:12 AM, Euler Taveira de Oliveira wrote:

 But I must adjust the patch based on the thread comments (basically,
 numeric output). I have a new patch but need to test it before submitting
 it.
 I'll post this weekend.


 It's now at
 https://commitfest.postgresql.org/action/patch_view?id=776 listed
 as waiting on you right now.  It's good to put patches into the CF
 application early.  Helps planning, and gives a safety net against all
 sorts of things.  We wouldn't want something this obviously useful to get
 kicked out if, for example, you lost your Internet connection over the
 weekend and then didn't technically qualify as having submitted it there
 before the deadline.  As someone who sweated today for two hours when my
 power at home was turned off to install a new circuit breaker, I'm feeling
 particularly paranoid right now about that sort of thing here.
 The fact that you got some review feedback before the official CF start
 doesn't mean you can't be listed there right now.  In fact, those are
 things I like to see tracked.  Having links to all of the e-mail messages
 that were important turning points for a feature that changed during review
 is very helpful to reviewers and committers.  And the easiest way to keep
 up with that is to start as early as possible:  add it to the app right
 after the first patch submission.


I agree.

So lets make it easy for the patch submitter to start the process. I
propose that we have a page in the CF application where people can
upload/attach the patch, and the app posts the patch to -hackers and uses
the post URL to create the CF entry.

Regards,
-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [HACKERS] automating CF submissions (was xlog location arithmetic)

2012-01-14 Thread Greg Smith

On 01/14/2012 10:49 PM, Gurjeet Singh wrote:
So lets make it easy for the patch submitter to start the process. I 
propose that we have a page in the CF application where people can 
upload/attach the patch, and the app posts the patch to -hackers and 
uses the post URL to create the CF entry.




That would be nice, but there's at least two serious problems with it, 
which I would guess are both unsolvable without adding an unsupportable 
amount of work to the current PostgreSQL web team.  First, it is 
technically risky for a web application hosted on postgresql.org to be 
e-mailing this list.  There are some things in the infrastructure that 
do that already--I believe the pgsql-commiters list being driven from 
commits is the busiest such bot.  But all of the ones that currently 
exist are either moderated, have a limited number of approved 
submitters, or both.


If it were possible for a bot to create a postgresql.org community 
account, then trigger an e-mail to pgsql-hackers just by filling out a 
web form, I'd give it maybe six months before it has to be turned off 
for a bit--because there will be thousands of messages queued up once the first 
bored spammer figures that out.  Securing web to e-mail gateways is a 
giant headache, and everyone working on the PostgreSQL infrastructure 
who might work on that is already overloaded with community volunteer 
work.  There's an element of zero-sum game here:  while this would 
provide some assistance to new contributors, the time to build and 
maintain the thing would be coming mainly out of senior contributors.  I 
see the gain+risk vs. reward here skewed the wrong way.


Second, e-mail provides some level of validation that patches being 
submitted are coming from the person they claim.  We currently reject 
patches that are only shared with the community on the web, via places 
like github.  The process around this mailing list tries to make it 
clear sending patches to here is a code submission under the PostgreSQL 
license.  And e-mail nowadays keeps increasing the number of checks that 
confirm it's coming from the person it claims sent it.  I can go check 
into the DKIM credentials your Gmail message to the list contained if 
I'd like, to help confirm it really came from your account.  E-mail 
headers are certainly not perfectly traceable and audit-able, but they 
are far better than what you'd get from a web submission.  Little audit 
trail there beyond "came from this IP address".


One unicorn I would like to have here would give the CF app a database 
of recent e-mails to pgsql-hackers.  I login to the CF app, click on 
Add recent submission, and anything matching my e-mail address appears 
with a checkbox next to it.  Click on the patch submissions, and then 
something like you described would happen.  That would save me the 
annoying work around looking up message IDs so much.


The role CF manager would benefit even more from infrastructure like 
that too.  Something that listed all the recent e-mail messages for an 
existing submission, such that you could just click on the ones that you 
wanted added to the patch's e-mail history, would save me personally 
enough time that I could probably even justify writing it.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] lots of unused variable warnings in assert-free builds

2012-01-14 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I see that in some places our code already uses #ifdef
 USE_ASSERT_CHECKING, presumably to hide similar issues.  But in most
 cases using this would significantly butcher the code.  I found that
 adding __attribute__((unused)) is cleaner.  Attached is a patch that
 cleans up all the warnings I encountered.

Surely this will fail entirely on most non-gcc compilers?  Not to
mention that next month's gcc may complain "hey, you used this 'unused'
variable".  I think #ifdef USE_ASSERT_CHECKING is really the only way
if you care about quieting these warnings.  (Personally, I don't.)
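
One middle ground: hide the attribute behind a macro that expands to
nothing on compilers without __attribute__ support, so call sites stay
portable and greppable. A small self-contained sketch (the macro name is
illustrative, not something the tree defines):

#ifdef __GNUC__
#define USED_FOR_ASSERTS_ONLY __attribute__((unused))
#else
#define USED_FOR_ASSERTS_ONLY
#endif

static int
alloc_read_pointer_stub(void)
{
	return 1;					/* stand-in for the real call */
}

int
main(void)
{
	int			ptrno USED_FOR_ASSERTS_ONLY;

	/* result is only consulted by Assert() in assert-enabled builds */
	ptrno = alloc_read_pointer_stub();
	return 0;
}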

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Our poll() based WaitLatch implementation is broken

2012-01-14 Thread Peter Geoghegan
Build Postgres master, on Linux or another platform that will use the
poll() implementation rather than the older select(). Send the
Postmaster SIGKILL. Observe that the WAL Writer lives on, representing
a denial of service as it stays attached to shared memory, busy
waiting (evident from the fact that it quickly leaks memory).

WAL writer shouldn't do this. It isn't doing anything stupid like
relying on the return value of WaitLatch(), which is documented to
only reliably indicate certain wake events but not others. The main
event loop calls PostmasterIsAlive(), which is supposed to be totally
reliable. If I use the select() based latch implementation, it behaves
just fine.

I have some doubts about the latch usage within WAL Writer as things
stands - it needs to be cleaned up a bit (I think it should use the
process latch, because I'm paranoid about timeout invalidation issues
now and in the future, plus it doesn't record errno in the handlers).
These smaller issues are covered in passing in the group commit patch
that Simon Riggs and I are currently working on in advance of the
final 9.2 commitfest.

In case it matters:

[peter@peterlaptop postmaster]$ uname -a
Linux peterlaptop 3.1.6-1.fc16.x86_64 #1 SMP Wed Dec 21 22:41:17 UTC
2011 x86_64 x86_64 x86_64 GNU/Linux

I'd debug this myself, but I'm a little bit preoccupied with group
commit right now.

The rationale for introducing the poll()-based implementation where
available was that it performed better than a select()-based one. I
wonder, how compelling a win is that expected to be?
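
For reference, the kind of check the poll() path has to get right: the
postmaster holds the write end of a pipe, and a child detects parent death
by seeing EOF on the read end, which poll() reports as POLLHUP on some
platforms and as plain readability on others. A self-contained sketch with
assumed names, not the actual unix_latch.c code:

#include <poll.h>
#include <unistd.h>
#include <stdbool.h>

static bool
postmaster_is_alive(int postmaster_alive_fd)
{
	struct pollfd pfd;

	pfd.fd = postmaster_alive_fd;
	pfd.events = POLLIN;
	pfd.revents = 0;

	if (poll(&pfd, 1, 0) <= 0)
		return true;			/* nothing pending: parent still alive */

	if (pfd.revents & (POLLHUP | POLLERR | POLLNVAL))
		return false;			/* write end closed: postmaster is gone */

	if (pfd.revents & POLLIN)
	{
		char		c;

		/* EOF on the pipe likewise means the postmaster exited */
		return read(postmaster_alive_fd, &c, 1) > 0;
	}
	return true;
}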

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers