Re: [HACKERS] Hot standby, misc issues

2009-12-06 Thread Simon Riggs
On Sat, 2009-12-05 at 22:56 +0200, Heikki Linnakangas wrote:

 So that RecordKnownAssignedTransactionIds() call needs to be put back.

OK

 BTW, if you want to resurrect the check in KnownAssignedXidsRemove(),
 you also need to not complain before you reach the running-xacts record
 and open up for read-only connections.

Yep

-- 
 Simon Riggs   www.2ndQuadrant.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] PageIndexTupleDelete

2009-12-06 Thread Simon Riggs

Just noticed that PageIndexTupleDelete does not check that

pd_special != MAXALIGN(pd_special)

whereas PageIndexMultiDelete() does this. Both routines state that
"paranoia seems justified", so this omission looks like an error.

Looking a little deeper at this...

We only call PageIndexTupleDelete() in cases where
PageIndexMultiDelete() has only a single element. We also call it
directly (in btree case) when recovering page splits and when updating
parent pages as part of page removal.

Having a special function that exists only for use on rare occasions
seems like a great recipe for sporadic corruption, if we are taking the
"paranoia seems justified" approach.

I would be inclined to dispose of PageIndexTupleDelete altogether. We
may yet be able to optimise PageIndexMultiDelete for low values of
nitems, if that is important.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Patch: Remove gcc dependency in definition of inline functions

2009-12-06 Thread Kurt Harriman

Tom Lane wrote:

Kurt Harriman harri...@acm.org writes:

(Does anybody still use a C compiler that doesn't support
inline functions?)



The question isn't so much that, it's whether the compiler supports
inline functions with the same behavior as gcc.  


With this patch, if the compiler doesn't accept the inline keyword
(or __inline or __inline__), then HAVE_INLINE remains undefined, and
the out-of-line definitions are used.

So, these concerns would be applicable in case there is a compiler
which accepts the keyword but ignores it, thus fooling configure.

(Also these concerns become applicable if we eliminate the
out-of-line definitions altogether as some have suggested.)


At minimum that would require
* not generating extra copies of the function


Even if someone uses such a compiler, maybe the extra copies are not
too bad.  These are small functions.  If not inlined, the compiled
code size on x86-32 is
list_head              48 bytes
list_tail              48 bytes
list_length            48 bytes
MemoryContextSwitchTo  32 bytes
Total                 176 bytes
In src/backend there are 478 .c files that #include postgres.h, so the
potential bloat is about 82 KB (= 478 * 176).

This would also occur any time the user specifies compiler options that
suppress inlining, such as for a debug build; but then the executable
size usually doesn't matter.


* not whining about unreferenced static functions


I could update the patch so that configure will test for this.

(BTW, MSVC is a whiner, but clams up if __forceinline is used.)

How many compilers have you tested this patch against?  


Only a small number.  By submitting it to pgsql-hackers, I hope that
it can be exposed to broader coverage.


Which ones does it actually offer any benefit for?


MSVC is one.

I hope eventually it will be found that all compilers of interest
do a good enough job with static inline functions so that we can
use a lot more of them.  For example, I'd like to expunge the
abominable heap_getattr() and fastgetattr() macros in access/htup.h
and replace them with an inline function.  Such improvements are
less easily justifiable if they are limited to gcc.

Regards,
... kurt



[HACKERS] Hot standby, recent changes

2009-12-06 Thread Heikki Linnakangas
1. The XLogFlush() call you added to dbase_redo doesn't help where it
is. You need to call XLogFlush() after the *commit* record of the DROP
DATABASE. The idea is to minimize the window where the files have already
been deleted but the entry in pg_database is still visible, if someone
kills the standby and starts it up as a new master. This isn't really
hot standby's fault, you have the same window with PITR, so I think it
would be better to handle this as a separate patch. Also, please handle
all the commands that use ForceSyncCommit().

2. Allow RULEs ON INSERT, ON UPDATE and ON DELETE iff they generate
only SELECT statements that act INSTEAD OF the actual event. This
affects any read-only transaction, not just hot standby, so I think this
should be discussed and changed as separate patch.

3. The "Out of lock mem" killer in StandbyAcquireAccessExclusiveLock is
quite harsh. It aborts all read-only transactions. It should be enough
to kill just one random one, or maybe the one that's holding most locks.
Also, if there still isn't enough shared memory after killing all
backends, it goes into an infinite loop. I guess that shouldn't happen,
but it seems a bit squishy anyway. It would be nice to differentiate
between "out of shared mem" and "someone else is holding the lock" more
accurately. Maybe add a new return value to LockAcquire() for out of
shared mem.

4. Need to handle the case where master is started up with
wal_standby_info=true, shut down, and restarted with
wal_standby_info=false, while the standby server runs continuously. And
the code in StartupXLog() to initialize recovery snapshot from a
shutdown checkpoint needs to check that too.

5. You removed this comment from KnownAssignedXidsAdd:

-   /*
-* XXX: We should check that we don't exceed maxKnownAssignedXids.
-* Even though the hash table might hold a few more entries than that,
-* we use fixed-size arrays of that size elsewhere and expected all
-* entries in the hash table to fit.
-*/

I think the issue still exists. The comment refers to
KnownAssignedXidsGet, which takes as an argument an array that has to be
large enough to hold all entries in the known-assigned hash table. If
there's more entries than expected in the hash table,
KnownAssignedXidsGet will overrun the array. Perhaps
KnownAssignedXidsGet should return a palloc'd array instead or
something, but I don't think it's fine as it is.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Cancelling idle in transaction state

2009-12-06 Thread Hannu Krosing
On Sun, 2009-12-06 at 07:58 +, Simon Riggs wrote:
 On Sat, 2009-12-05 at 18:13 -0700, James Pye wrote:
  On Dec 5, 2009, at 12:25 PM, Simon Riggs wrote:
   ...
  
  I'm not volunteering here, but having worked with the protocol, I do have a 
  suggestion:
 
 Thanks. Looks like good input. With the further clarification that we
 use NOTIFY it seems a solution is forming.

If we use NOTIFY, then the sufficiently smart client (tm) should
probably declare that it is waiting for such a notify, no?

That would mean that it should have issued either

LISTEN CANCEL_IDLE_TRX_pid

or, with the new payload-enabled NOTIFY, just

LISTEN CANCEL_IDLE_TRX

and then the NOTIFY would include the pid of the rolled-back backend and
possibly some other extra info.

OTOH, we could also come up with something that looks like a NOTIFY from
the client end, but is sent only to the one connection that is canceled
instead of to all listeners.


-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability 
   Services, Consulting and Training





Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:

 2. Allow RULEs ON INSERT, ON UPDATE and ON DELETE iff they generate
 only SELECT statements that act INSTEAD OF the actual event. This
 affects any read-only transaction, not just hot standby, so I think this
 should be discussed and changed as separate patch.

OK

 4. Need to handle the case where master is started up with
 wal_standby_info=true, shut down, and restarted with
 wal_standby_info=false, while the standby server runs continuously. And
 the code in StartupXLog() to initialize recovery snapshot from a
 shutdown checkpoint needs to check that too.

I don't really understand the use case for shutting down the server and
then using it as a HS base backup. Why would anyone do that? Why would
they have their server down for potentially hours, when they can take
the backup while the server is up? If the server is idle, it can be
up-and-idle just as easily as down-and-idle, in which case we wouldn't
need to support this at all. Adding yards of code for this capability
isn't important to me. I'd rather strip the whole lot out than keep
fiddling with a low priority area. Please justify this as a real world
solution before we continue trying to support it.

 5. You removed this comment from KnownAssignedXidsAdd:
 
 -   /*
 -* XXX: We should check that we don't exceed maxKnownAssignedXids.
 -* Even though the hash table might hold a few more entries than that,
 -* we use fixed-size arrays of that size elsewhere and expected all
 -* entries in the hash table to fit.
 -*/
 
 I think the issue still exists. The comment refers to
 KnownAssignedXidsGet, which takes as an argument an array that has to be
 large enough to hold all entries in the known-assigned hash table. If
 there's more entries than expected in the hash table,
 KnownAssignedXidsGet will overrun the array. Perhaps
 KnownAssignedXidsGet should return a palloc'd array instead or
 something, but I don't think it's fine as it is.

Same issue perhaps, different place.

-- 
 Simon Riggs   www.2ndQuadrant.com




[HACKERS] Allowing DML RULEs that produce Read Only actions during RO xacts

2009-12-06 Thread Simon Riggs

I would like to allow RULEs ON INSERT, ON UPDATE and ON DELETE during
read only transactions iff they generate only SELECT statements that act
INSTEAD OF the actual event.

CREATE RULE foorah
  AS ON INSERT TO foo
  DO INSTEAD SELECT remote_insert(NEW.col1, NEW.col2, ...);

The above rule is currently disallowed during READ ONLY transactions,
even though the write action is re-written into a read-only action.

I have a small patch that allows this, attached here with test cases.

This would be a small, but useful additional feature for Hot Standby,
since it would allow INSERT, UPDATE, DELETE statements to be re-routed,
for various applications.

-- 
 Simon Riggs   www.2ndQuadrant.com
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 1383123..0210d35 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -581,6 +581,14 @@ ExecCheckXactReadOnly(PlannedStmt *plannedstmt)
 	if (plannedstmt->intoClause != NULL)
 		goto fail;
 
+	/*
+	 * If we're running a SELECT, allow it. This ensures that a
+	 * write rule such as ON INSERT DO SELECT can be executed in
+	 * a read-only session.
+	 */
+	if (plannedstmt->commandType == CMD_SELECT)
+		return;
+
 	/* Fail if write permissions are requested on any non-temp table */
 	foreach(l, plannedstmt-rtable)
 	{
diff --git a/src/test/regress/expected/transactions.out b/src/test/regress/expected/transactions.out
index fc98f01..f9eda6b 100644
--- a/src/test/regress/expected/transactions.out
+++ b/src/test/regress/expected/transactions.out
@@ -65,6 +65,24 @@ SELECT * FROM writetest, temptest; -- ok
 
 CREATE TABLE test AS SELECT * FROM writetest; -- fail
 ERROR:  transaction is read-only
+BEGIN;
+SET transaction_read_only = false;
+CREATE RULE write2read_rule AS ON INSERT TO writetest DO INSTEAD SELECT new.a;
+COMMIT;
+BEGIN;
+SHOW transaction_read_only;
+ transaction_read_only 
+-----------------------
+ on
+(1 row)
+
+INSERT INTO writetest VALUES (1); -- ok
+ a 
+---
+ 1
+(1 row)
+
+COMMIT;
 START TRANSACTION READ WRITE;
 DROP TABLE writetest; -- ok
 COMMIT;
diff --git a/src/test/regress/sql/transactions.sql b/src/test/regress/sql/transactions.sql
index c670ae1..12a03fa 100644
--- a/src/test/regress/sql/transactions.sql
+++ b/src/test/regress/sql/transactions.sql
@@ -51,6 +51,15 @@ EXECUTE test; -- fail
 SELECT * FROM writetest, temptest; -- ok
 CREATE TABLE test AS SELECT * FROM writetest; -- fail
 
+BEGIN;
+SET transaction_read_only = false;
+CREATE RULE write2read_rule AS ON INSERT TO writetest DO INSTEAD SELECT new.a;
+COMMIT;
+BEGIN;
+SHOW transaction_read_only;
+INSERT INTO writetest VALUES (1); -- ok
+COMMIT;
+
 START TRANSACTION READ WRITE;
 DROP TABLE writetest; -- ok
 COMMIT;



Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:
 1. The XLogFlush() call you added to dbase_redo doesn't help where it
 is. You need to call XLogFlush() after the *commit* record of the DROP
 DATABASE. The idea is minimize the window where the files have already
 been deleted but the entry in pg_database is still visible, if someone
 kills the standby and starts it up as a new master. This isn't really
 hot standby's fault, you have the same window with PITR, so I think it
 would be better to handle this as a separate patch. Also, please handle
 all the commands that use ForceSyncCommit().

I think I'll just add a flag to the commit record to do this. That way
anybody that sets ForceSyncCommit will do as you propose.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Patch: Remove gcc dependency in definition of inline functions

2009-12-06 Thread Kurt Harriman

Bruce Momjian wrote:

Peter Eisentraut wrote:

On mån, 2009-11-30 at 07:06 -0500, Bruce Momjian wrote:

I thought one problem was that inline is a suggestion that the compiler
can ignore, while macros have to be implemented as specified.

Sure, but one could argue that a compiler that doesn't support inline
usefully is probably not the sort of compiler that you use for compiling
performance-relevant software anyway.  We can support such systems in a
degraded way for historical value and evaluation purposes as long as
it's pretty much free, like we support systems without working int8.


The issue is that many compilers will take inline as a suggestion and
decide if it is worth-while to inline it --- I don't think it is inlined
unconditionally by any modern compilers.

Right now we think we are better at deciding what should be inlined than
the compiler --- of course, we might be wrong, and it would be good to
performance test this.


Just to clarify, this patch doesn't add new inline functions or
convert any macros to inline functions.  At present, PostgreSQL
header files define four functions as inline for gcc only.  This
patch merely opens up those existing inline functions to be used
with any compiler that accepts them, not just gcc.  If this goes
well, it may be a preparatory step toward future adoption of more
inline functions and fewer macros.

Regards,
... kurt




Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:

 3. The Out of lock mem killer in StandbyAcquireAccessExclusiveLock is
 quite harsh. It aborts all read-only transactions. It should be enough
 to kill just one random one, or maybe the one that's holding most locks.
 Also, if there still isn't enough shared memory after killing all
 backends, it goes into an infinite loop. I guess that shouldn't happen,
 but it seems a bit squishy anyway. It would be nice to differentiate
 between out of shared mem and someone else is holding the lock more
 accurately. Maybe add a new return value to LockAcquire() for out of
 shared mem.

OK, will abort infinite loop by adding new return value.

If people don't like having everything killed, they can add more lock
memory. It isn't worth adding more code because its hard to test and
unlikely to be an issue in real usage, and it has a clear workaround
that is already mentioned in a hint.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Cancelling idle in transaction state

2009-12-06 Thread Robert Haas

On Dec 6, 2009, at 2:58 AM, Simon Riggs si...@2ndquadrant.com wrote:


On Sat, 2009-12-05 at 18:13 -0700, James Pye wrote:

On Dec 5, 2009, at 12:25 PM, Simon Riggs wrote:

...


I'm not volunteering here, but having worked with the protocol, I  
do have a suggestion:


Thanks. Looks like good input. With the further clarification that we
use NOTIFY it seems a solution is forming.


Notice, not NOTIFY.

...Robert



Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 11:20 +, Simon Riggs wrote:
 On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:
 
  3. The Out of lock mem killer in StandbyAcquireAccessExclusiveLock is
  quite harsh. It aborts all read-only transactions. It should be enough
  to kill just one random one, or maybe the one that's holding most locks.
  Also, if there still isn't enough shared memory after killing all
  backends, it goes into an infinite loop. I guess that shouldn't happen,
  but it seems a bit squishy anyway. It would be nice to differentiate
  between out of shared mem and someone else is holding the lock more
  accurately. Maybe add a new return value to LockAcquire() for out of
  shared mem.
 
 OK, will abort infinite loop by adding new return value.

You had me for a minute, but there is no infinite loop. Once we start
killing people it reverts to throwing an ERROR on an out of memory
condition.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] PageIndexTupleDelete

2009-12-06 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Having a special function that exists only for use in rare occasions
 seems like a great recipe for sporadic corruption, if we are taking the
 paranoia seems justified approach.

It used to be called all the time, before some of the more popular
paths got optimized into PageIndexMultiDelete calls.  I'm not
particularly worried about it being broken.  I *am* worried about
the performance hit of using MultiDelete to no purpose.

regards, tom lane



Re: [HACKERS] Cancelling idle in transaction state

2009-12-06 Thread Tom Lane
Hannu Krosing ha...@2ndquadrant.com writes:
 On Sun, 2009-12-06 at 07:58 +, Simon Riggs wrote:
 Thanks. Looks like good input. With the further clarification that we
 use NOTIFY it seems a solution is forming.

 If we use notify, then the sufficiently smart client (tm)  should
 probably declared that it is waiting for such notify , no ?

We are using NOTICE, not NOTIFY, assuming that we use anything at all
(which I still regard as unnecessary).  Please stop injecting confusion
into the discussion.

regards, tom lane



Re: [HACKERS] Allowing DML RULEs that produce Read Only actions during RO xacts

2009-12-06 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 I would like to allow RULEs ON INSERT, ON UPDATE and ON DELETE during
 read only transactions iff they generate only SELECT statements that act
 INSTEAD OF the actual event.

I don't actually believe there is any use case for such a thing.

 This would be a small, but useful additional feature for Hot Standby,
 since it would allow INSERT, UPDATE, DELETE statements to be re-routed,
 for various applications.

How would you reroute them without a non-read-only operation happening
somewhere along the line?
 
 + /*
 +  * If we're running a SELECT, allow it. This ensures that a
 +  * write rule such as ON INSERT DO SELECT can be executed in
 +  * a read-only session.
 +  */
 + if (plannedstmt->commandType == CMD_SELECT)
 + return;

This will fail, very nastily, in writable-CTE cases.

regards, tom lane



Re: [HACKERS] Allowing DML RULEs that produce Read Only actions during RO xacts

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 10:26 -0500, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
  I would like to allow RULEs ON INSERT, ON UPDATE and ON DELETE during
  read only transactions iff they generate only SELECT statements that act
  INSTEAD OF the actual event.
 
 I don't actually believe there is any use case for such a thing.
 
  This would be a small, but useful additional feature for Hot Standby,
  since it would allow INSERT, UPDATE, DELETE statements to be re-routed,
  for various applications.
 
 How would you reroute them without a non-read-only operation happening
 somewhere along the line?

I showed how in my example. The non-read-only operation happens on a
remote server.

If there are additional technical reasons why not, that's fine.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 [ exclusion constraints patch ]

Still working on this patch.  I ran into something I didn't like at all:

 + if (newrel != NULL)
 + ereport(ERROR,
 + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 +  errmsg("cannot rewrite table while adding "
 + "operator exclusion constraint")));

This would be bad enough if the restriction were what the message
alleges, ie, you can't write an ALTER TABLE that both rewrites the heap
and adds an exclusion constraint.  However, actually the error also
occurs if you issue a rewriting ALTER TABLE against a table that already
has an exclusion constraint.  That raises it from annoyance to
showstopper IMO.

There is not a whole lot that can be done to fix this given the design
that exclusion checking is supposed to be done in ATRewriteTable ---
when that runs, we of course have not built the new index yet, so
there's no way to use it to check the constraint.

I think the right way to go at it is to drop that part of the patch
and instead have index_build execute a separate pass to check the
constraint after it's built an exclusion-constraint index.  In the
typical case where you're just doing ALTER TABLE ADD CONSTRAINT EXCLUDE,
this ends up being the same amount of work since there's no need to
run an ATRewriteTable scan at all.  It would be extra work if someone
tried to add multiple exclusion constraints in one ALTER; but I have a
hard time getting excited about that, especially in view of the fact
that the cost of the index searches is going to swamp the heapscan
anyway.

This approach would also mean that the constraint is re-verified
if someone does a REINDEX.  I think this is appropriate behavior
since reindexing a regular unique index also causes re-verification.
However I can imagine someone objecting on grounds of cost ---
it would probably about double the cost of reindexing compared
to assuming the constraint is still good.  We could probably teach
index_build to skip the check pass during REINDEX, but I don't
really want to.

Comments?

regards, tom lane



[HACKERS] A sniffer for the buffer

2009-12-06 Thread Jonas J
Hi,

I'm a Computer Science student and I'm currently studying database
buffer managers.  I want to do some experiments and see how page access
works in PostgreSQL (and I will also do some I/O experiments).

So, I want to write a sniffer for the storage layer of PostgreSQL. It
should tell me which page PostgreSQL is reading or writing. Then, when I
run some operation, the pages requested (for read or write) by
PostgreSQL from the buffer are recorded in a separate file, and
afterwards I can build some graphs, draw conclusions, etc...

I wrote the code below to do that, but I'm not quite sure whether I'm
doing the right thing. Can someone give me an opinion?

Many Thanks in advance,
Jonas Jeske

The modified code is in bufmgr.c of PostgreSQL source code.

Buffer
ReadBuffer(Relation reln, BlockNumber blockNum)
{
	volatile BufferDesc *bufHdr;
	Block		bufBlock;
	bool		found;
	bool		isExtend;
	bool		isLocalBuf;

	/* jjeske starts here */
	FILE *fp;
	fp = fopen("/home/jjeske/tpccuva/pgsql/saida.txt", "a");
	fprintf(fp, "Read Block n: %d\n", (int) blockNum);
	fclose(fp);



static void
write_buffer(Buffer buffer, bool unpin)
{
	volatile BufferDesc *bufHdr;

	/* jjeske starts here */
	FILE *fp;
	fp = fopen("/home/jjeske/tpccuva/pgsql/saida.txt", "a");
	fprintf(fp, "Write Block n: %d\n", (int) buffer);
	fclose(fp);



Re: [HACKERS] Summary and Plan for Hot Standby

2009-12-06 Thread Simon Riggs
On Sun, 2009-11-15 at 16:07 +0200, Heikki Linnakangas wrote:

 - When switching from standby mode to normal operation, we momentarily
 hold all AccessExclusiveLocks held by prepared xacts twice, needing
 twice the lock space. You can run out of lock space at that point,
 causing failover to fail.

Proposed patch to fix that attached.

-- 
 Simon Riggs   www.2ndQuadrant.com
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index ba4d69c..1aed31c 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -1757,6 +1757,14 @@ RecoverPreparedTransactions(void)
 			 */
 			ProcessRecords(bufptr, xid, twophase_recover_callbacks);
 
+			/*
+			 * Release locks held by the standby process after we process each
+			 * prepared transaction. As a result, we don't need to hold too
+			 * many additional locks at one time.
+			 */
+			if (InHotStandby)
+				StandbyReleaseLockTree(xid, hdr->nsubxacts, subxids);
+
 			pfree(buf);
 		}
 	}



Re: [HACKERS] Allowing DML RULEs that produce Read Only actions during RO xacts

2009-12-06 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On Sun, 2009-12-06 at 10:26 -0500, Tom Lane wrote:
 How would you reroute them without a non-read-only operation happening
 somewhere along the line?

 I showed how in my example. The non-read-only operation happens on a
 remote server.

That seems awfully dubious from a transactional-safety point of view.

regards, tom lane



[HACKERS] Error message translation, with variable reason text

2009-12-06 Thread Simon Riggs

I have an ereport() in the HS patch that looks like this

ereport(trace_recovery(DEBUG1),
  (errmsg("recovery cancels virtual transaction %u/%u pid %d because of "
          "conflict with %s",
          waitlist->backendId,
          waitlist->localTransactionId,
          pid,
          reason)));

The purpose of this is to give a LOG message with a variable reason
code. Should I worry about the translatability of something that exists
for DEBUG, and if so, what is the best/approved/recommended way of
making a message both translatable and variable?

Thanks,

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Allowing DML RULEs that produce Read Only actions during RO xacts

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 10:26 -0500, Tom Lane wrote:
 
  + /*
  +  * If we're running a SELECT, allow it. This ensures that a
  +  * write rule such as ON INSERT DO SELECT can be executed in
  +  * a read-only session.
  +  */
  + if (plannedstmt->commandType == CMD_SELECT)
  + return;
 
 This will fail, very nastily, in writable-CTE cases.

If the feature is not able to be added easily, then I'm not interested
either. It's a nice-to-have and I have no time for anything more, so if
it gives problems in other areas, I would not pursue further.

One question: would a writable-CTE be running in a read-only xact?

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 10:51 +, Simon Riggs wrote:

  5. You removed this comment from KnownAssignedXidsAdd:
  
  -   /*
  -* XXX: We should check that we don't exceed maxKnownAssignedXids.
  -* Even though the hash table might hold a few more entries than that,
  -* we use fixed-size arrays of that size elsewhere and expected all
  -* entries in the hash table to fit.
  -*/
  
  I think the issue still exists. The comment refers to
  KnownAssignedXidsGet, which takes as an argument an array that has to be
  large enough to hold all entries in the known-assigned hash table. If
  there's more entries than expected in the hash table,
  KnownAssignedXidsGet will overrun the array. Perhaps
  KnownAssignedXidsGet should return a palloc'd array instead or
  something, but I don't think it's fine as it is.
 
 Same issue perhaps, different place.

Well, it's not the same issue. One was about entering things into a
shared memory hash table; the other is about reading values into a
locally malloc'd array. The array is sized as the max size of
KnownAssignedXids, so there is no danger that there would ever be an
overrun.

One caller does not set the max size explicitly, so I will add a
comment/code changes to make that clearer.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Error message translation, with variable reason text

2009-12-06 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 The purpose of this is to give an LOG message with a variable reason
 code. Should I worry about the translatability of something that exists
 for DEBUG,

Probably not.  I don't even think you should use ereport, just elog
--- or if there's a good functional reason to use ereport, use
errmsg_internal so that translators don't see this message.

regards, tom lane



Re: [HACKERS] A sniffer for the buffer

2009-12-06 Thread Greg Smith

Jonas J wrote:

Buffer
ReadBuffer(Relation reln, BlockNumber blockNum)...
fprintf(fp, "Read Block n: %d\n", (int) blockNum);
The "key", as it were, for database block reads is both of the things 
passed into ReadBuffer.  You'd need to save both the Relation number 
(which turns into the subdirectory used to store that table in the base/ 
directory) and the block number to do anything useful with this data.




static void
write_buffer(Buffer buffer, bool unpin)
{
volatile BufferDesc *bufHdr;

/* jjeske starts here */
FILE *fp;
fp = fopen("/home/jjeske/tpccuva/pgsql/saida.txt", "a");
fprintf(fp, "Write Block n: %d\n", (int) buffer);

fclose(fp);
And that won't work at all.  Buffer is a structure, not an integer.  
You need to wait until it's been locked, then save the same data as on 
the read side (relation and block number) from inside the structure.  
You probably want to hook FlushBuffer to do that job.  In fact, there's 
already a DTrace probe in there you could easily use to collect the data 
you want without even touching the source code if you can get a DTrace 
capable system.  Note the following code in bufmgr.c FlushBuffer:


   TRACE_POSTGRESQL_BUFFER_FLUSH_START(buf->tag.forkNum,
                                       buf->tag.blockNum,
                                       reln->smgr_rnode.spcNode,
                                       reln->smgr_rnode.dbNode,
                                       reln->smgr_rnode.relNode);

That's exporting everything you're trying to save yourself such that a 
DTrace program can observe it. 

In fact, I'd suggest that any time you're trying to get some data out of 
the internals, start by seeing if there's a DTrace probe point with that 
info in it already, because those have been thought through to make sure 
they're exporting the right data and in the right way (i.e. after 
locking the structures properly).  On the read side, 
TRACE_POSTGRESQL_BUFFER_READ_START inside of ReadBuffer_common is the 
one you should do the same thing as, if you must export this stuff 
manually rather than use DTrace.
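To make the read side concrete: what is worth persisting per buffer access is the relation identity plus the block number, i.e. the same fields the probes export. A standalone sketch in plain C follows; it is not PostgreSQL source, and the type and field names are invented for illustration (real code would pull these fields out of the locked buffer header and SMgrRelation).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Invented record type for illustration only. */
typedef struct BufferAccessLog
{
    unsigned int spcNode;       /* tablespace OID */
    unsigned int dbNode;        /* database OID */
    unsigned int relNode;       /* relation file node */
    unsigned int blockNum;      /* block within the relation */
    char         kind;          /* 'R' = read, 'W' = write */
} BufferAccessLog;

/* Format one log line into buf; returns the number of characters written. */
static int
format_buffer_access(const BufferAccessLog *rec, char *buf, size_t buflen)
{
    return snprintf(buf, buflen, "%c %u/%u/%u blk %u",
                    rec->kind, rec->spcNode, rec->dbNode,
                    rec->relNode, rec->blockNum);
}
```

Logging all of these fields, rather than the bare Buffer value, is what makes the trace joinable back to a particular table on disk.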


There's more about the locking I'm alluding to inside 
src/backend/storage/buffer/README , you should check out that file for 
more info.


P.S. I think you're using a fairly old version of the PostgreSQL source 
code, which may not have the same names and DTrace points I'm 
mentioning.  That's probably a mistake--if you want to hack on the 
PostgreSQL code, you should be using at least PostgreSQL 8.4, and it's 
better still to checkout from the CVS/git repos.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Heikki Linnakangas
Simon Riggs wrote:
 On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:
 4. Need to handle the case where master is started up with
 wal_standby_info=true, shut down, and restarted with
 wal_standby_info=false, while the standby server runs continuously. And
 the code in StartupXLog() to initialize recovery snapshot from a
 shutdown checkpoint needs to check that too.
 
 I don't really understand the use case for shutting down the server and
 then using it as a HS base backup. Why would anyone do that? Why would
 they have their server down for potentially hours, when they can take
 the backup while the server is up? If the server is idle, it can be
 up-and-idle just as easily as down-and-idle, in which case we wouldn't
 need to support this at all. Adding yards of code for this capability
 isn't important to me. I'd rather strip the whole lot out than keep
 fiddling with a low priority area. Please justify this as a real world
 solution before we continue trying to support it.

This affects using a shutdown checkpoint as a recovery start point
(restore point) in general, not just a fresh backup taken from a shut
down database.

Consider this scenario:

0. You have a master and a standby configured properly, and up and running.
1. You shut down master for some reason.
2. You restart standby. For some reason. Maybe by accident, or you want
to upgrade minor version or whatever.
3. Standby won't accept connections until the master is started too.
Admin says WTF?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Summary and Plan for Hot Standby

2009-12-06 Thread Heikki Linnakangas
Simon Riggs wrote:
 On Sun, 2009-11-15 at 16:07 +0200, Heikki Linnakangas wrote:
 
 - When switching from standby mode to normal operation, we momentarily
 hold all AccessExclusiveLocks held by prepared xacts twice, needing
 twice the lock space. You can run out of lock space at that point,
 causing failover to fail.
 
 Proposed patch to fix that attached.

Doesn't eliminate the problem completely, but certainly makes it much
less likely.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Jeff Davis
On Sun, 2009-12-06 at 10:46 -0500, Tom Lane wrote:
 This would be bad enough if the restriction were what the message
 alleges, ie, you can't write an ALTER TABLE that both rewrites the heap
 and adds an exclusion constraint.  However, actually the error also
 occurs if you issue a rewriting ALTER TABLE against a table that already
 has an exclusion constraint.  That raises it from annoyance to
 showstopper IMO.

The following works as expected for me:

  create table foo(i int, j int, k int);

  -- the following causes the error in question:
  alter table foo add exclude(i with =),
alter column k type text;

  alter table foo add exclude(i with =); -- works
  alter table foo alter column k type text; -- works

So the workaround is simply to break the ALTER into two statements.

(Aside: the error message should probably have a DETAIL component
telling the user to break up the ALTER commands into separate actions.)

Aha -- I think I see the problem you're having: if you try to rewrite
one of the columns contained in the exclusion constraint, you get that
error:

  create table bar_exclude(i int, exclude (i with =));

  -- raises same error, which is confusing
  alter table bar_exclude alter column i type text;

Compared with UNIQUE:
  create table bar_unique(i int unique);
  alter table bar_unique alter column i type text; -- works

However, I think we _want_ exclusion constraints to fail in that case.
Unique constraints can succeed because both types support UNIQUE, and
the semantics are the same. But in the case of exclusion constraints,
it's quite likely that the semantics will be different or the named
operator won't exist at all. I think it's more fair to compare with a
unique index on an expression:

  create table bar_unique2(i int);
  create unique index bar_unique2_idx on bar_unique2 ((i + 1));

  alter table bar_unique2 alter column i type text;
  ERROR:  operator does not exist: text + integer

You could make the argument that we should do the same thing: try to
re-apply the expression on top of the new column. The only situation
where I can imagine that would be useful is if you are using exclusion
constraints in place of UNIQUE (even then, it's different, because it
uses operator names, not BTEqualStrategyNumber for the default btree
opclass).

If the user alters the type of a column that's part of an exclusion
constraint, the semantics are complex enough that I think we should
inform them that they need to drop the constraint, change the column,
and re-add it. So, my personal opinion is that we need to change the
error message (and probably have two distinct error messages for the two
cases) rather than changing the algorithm.

Comments?

Regards,
Jeff Davis





Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 Aha -- I think I see the problem you're having: if you try to rewrite
 one of the columns contained in the exclusion constraint, you get that
 error:

It fails for me regardless of which column is actually modified.
It could be this is a consequence of other changes I've been making,
but given the way ALTER TABLE works it's hard to see why the specific
column being modified would matter.  Anything that forces a table
rewrite would lead to running through this code path.

I don't really agree with your argument that it's okay to reject the
case of altering the underlying column type, anyhow.  There's enough
commonality of operator names that it's sensible to try to preserve
the constraint across a change.  Consider replacing a circle with a
polygon, say.  A no-overlap restriction is still sensible.

regards, tom lane



Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Jeff Davis
On Sun, 2009-12-06 at 14:06 -0500, Tom Lane wrote:
 It fails for me regardless of which column is actually modified.
 It could be this is a consequence of other changes I've been making,
 but given the way ALTER TABLE works it's hard to see why the specific
 column being modified would matter.  Anything that forces a table
 rewrite would lead to running through this code path.

In ATAddOperatorExclusionConstraint():

  tab->constraints = lappend(tab->constraints, newcon);

if that isn't done, it doesn't go into the code path with that error
message at all.

 I don't really agree with your argument that it's okay to reject the
 case of altering the underlying column type, anyhow.  There's enough
 commonality of operator names that it's sensible to try to preserve
 the constraint across a change.  Consider replacing a circle with a
 polygon, say.  A no-overlap restriction is still sensible.

Let's say someone is changing from box to a postgis geometry
representing a box. For boxes, "=" actually means "equal area" (aside:
this is insane); but for polygons, "=" means "equal". Changing in the
other direction is bad as well. I know operators mostly follow
convention most of the time, but I'm concerned with the subtle (and
surprising) differences.

But if someone is changing a column type, which causes a table rewrite,
hopefully they've planned it. I'll look into doing it as you suggest.

Regards,
Jeff Davis




Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 ... I'll look into doing it as you suggest.

I'm already working with a pretty-heavily-editorialized version.
Don't worry about it.

regards, tom lane



Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 20:32 +0200, Heikki Linnakangas wrote:
 Simon Riggs wrote:
  On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:
  4. Need to handle the case where master is started up with
  wal_standby_info=true, shut down, and restarted with
  wal_standby_info=false, while the standby server runs continuously. And
  the code in StartupXLog() to initialize recovery snapshot from a
  shutdown checkpoint needs to check that too.
  
  I don't really understand the use case for shutting down the server and
  then using it as a HS base backup. Why would anyone do that? Why would
  they have their server down for potentially hours, when they can take
  the backup while the server is up? If the server is idle, it can be
  up-and-idle just as easily as down-and-idle, in which case we wouldn't
  need to support this at all. Adding yards of code for this capability
  isn't important to me. I'd rather strip the whole lot out than keep
  fiddling with a low priority area. Please justify this as a real world
  solution before we continue trying to support it.
 
 This affects using a shutdown checkpoint as a recovery start point
 (restore point) in general, not just a fresh backup taken from a shut
 down database.
 
 Consider this scenario:
 
 0. You have a master and a standby configured properly, and up and running.
 1. You shut down master for some reason.
 2. You restart standby. For some reason. Maybe by accident, or you want
 to upgrade minor version or whatever.
 3. Standby won't accept connections until the master is started too.
 Admin says WTF?

OK, I accept that as a possible scenario.

I'm concerned that the number of possible scenarios we are trying to
support in the first version is too high, increasing the likelihood that
some of them do not work correctly because the test scenarios didn't
re-create them exactly.

In the above scenario, if it is of serious concern the admin can
failover to the standby and then access the standby for queries. If the
master was down for any length of time that's exactly what we'd expect
to happen anyway. 

So I don't see the scenario as very likely; I'm sure it will happen to
somebody. In any case, it is in the opposite direction to the main
feature, which is a High Availability server, so any admin that argued
that he wanted to have his HA standby stay as a standby in this case
would likely be in a minority.

I would rather document it as a known caveat and be done.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Robert Haas
On Sun, Dec 6, 2009 at 3:06 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Sun, 2009-12-06 at 20:32 +0200, Heikki Linnakangas wrote:
 Simon Riggs wrote:
  On Sun, 2009-12-06 at 12:32 +0200, Heikki Linnakangas wrote:
  4. Need to handle the case where master is started up with
  wal_standby_info=true, shut down, and restarted with
  wal_standby_info=false, while the standby server runs continuously. And
  the code in StartupXLog() to initialize recovery snapshot from a
  shutdown checkpoint needs to check that too.
 
  I don't really understand the use case for shutting down the server and
  then using it as a HS base backup. Why would anyone do that? Why would
  they have their server down for potentially hours, when they can take
  the backup while the server is up? If the server is idle, it can be
  up-and-idle just as easily as down-and-idle, in which case we wouldn't
  need to support this at all. Adding yards of code for this capability
  isn't important to me. I'd rather strip the whole lot out than keep
  fiddling with a low priority area. Please justify this as a real world
  solution before we continue trying to support it.

 This affects using a shutdown checkpoint as a recovery start point
 (restore point) in general, not just a fresh backup taken from a shut
 down database.

 Consider this scenario:

 0. You have a master and a standby configured properly, and up and running.
 1. You shut down master for some reason.
 2. You restart standby. For some reason. Maybe by accident, or you want
 to upgrade minor version or whatever.
 3. Standby won't accept connections until the master is started too.
 Admin says WTF?

 OK, I accept that as a possible scenario.

 I'm concerned that the number of possible scenarios we are trying to
 support in the first version is too high, increasing the likelihood that
 some of them do not work correctly because the test scenarios didn't
 re-create them exactly.

 In the above scenario, if it is of serious concern the admin can
 failover to the standby and then access the standby for queries. If the
 master was down for any length of time that's exactly what we'd expect
 to happen anyway.

 So I don't see the scenario as very likely; I'm sure it will happen to
 somebody. In any case, it is in the opposite direction to the main
 feature, which is a High Availability server, so any admin that argued
 that he wanted to have his HA standby stay as a standby in this case
 would likely be in a minority.

 I would rather document it as a known caveat and be done.

For what it's worth, this doesn't seem particularly unlikely or
unusual to me.  But on the other hand, I do really want to get this
committed soon, and I don't have time to help at the moment, so maybe
I should keep my mouth shut.

...Robert



Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Dimitri Fontaine
On 6 Dec 2009, at 23:26, Robert Haas wrote:
 Consider this scenario:
 
 0. You have a master and a standby configured properly, and up and running.
 1. You shut down master for some reason.
 2. You restart standby. For some reason. Maybe by accident, or you want
 to upgrade minor version or whatever.
 3. Standby won't accept connections until the master is started too.
 Admin says WTF?
 
 I would rather document it as a known caveat and be done.

+1

 For what it's worth, this doesn't seem particularly unlikely or
 unusual to me.

I'm sorry to have to disagree here. Shutting down the master in my book means 
you're upgrading it: minor or major, or the box under it. It's not a frequent 
event; more frequent than a crash, but still.

Now the master is offline, you have a standby, and you're restarting it too, 
but you don't mean that as a switchover. I'm with Simon here; the WTF is "are 
you in search of trouble?" more than anything else. I don't think shutting down 
the standby while the master is offline is considered good practice if 
your goal is HA...

Regards,
-- 
dim


Re: [HACKERS] add more frame types in window functions (ROWS)

2009-12-06 Thread Andrew Gierth
 Hitoshi == Hitoshi Harada umi.tan...@gmail.com writes:

 Hitoshi Attached is updated version. I added AggGetMemoryContext()
 Hitoshi in executor/nodeAgg.h (though I'm not sure where to go...)
 Hitoshi and its second argument iswindowagg is output parameter to
 Hitoshi know whether the call context is Agg or WindowAgg. Your
 Hitoshi proposal of APIs to know whether the function is called as
 Hitoshi Aggregate or not is also a candidate to be, but it seems out
 Hitoshi of this patch scope, so it doesn't touch anything.

I don't really like the extra argument; aggregate functions should
almost never have to care about whether they're being called as window
functions rather than aggregate functions. And if it does care, I
don't see why this is the appropriate function for it. At the very
least the function should accept NULL for the iswindowagg pointer to
avoid useless variables in the caller.

So for this and the regression test problem mentioned in the other mail,
I'm setting this back to waiting on author.

-- 
Andrew.



Re: [HACKERS] Hot standby, recent changes

2009-12-06 Thread Simon Riggs
On Sun, 2009-12-06 at 17:26 -0500, Robert Haas wrote:

 For what it's worth, this doesn't seem particularly unlikely or
 unusual to me.

I don't know many people who shut down both nodes of a highly available
application at the same time. If they did, I wouldn't expect them to
complain they couldn't run queries on the standby when two obvious
and simple workarounds exist to allow them to access their data: start
the master again, or make the standby switchover, both of which are part
of standard operating procedures.

It doesn't seem very high up the list of additional features, at least.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] Clearing global statistics

2009-12-06 Thread Itagaki Takahiro

Greg Smith g...@2ndquadrant.com wrote:

 This implements the TODO item Allow the 
 clearing of cluster-level statistics, based on previous discussion 
 ending at http://archives.postgresql.org/pgsql-hackers/2009-03/msg00920.php

 -Not sure if this should be named pg_stat_reset_global (to match the way 
 these are called global stats in the source) or 
 pg_stat_reset_cluster.  Picked the former for V1, not attached to that 
 decision at all.  Might even make sense to use a name that makes it 
 obvious the pg_stat_bgwriter data is what's targeted.

A couple of comments:

 * We will be able to reset global counters and current database's counters.
   Do we need to have a method to reset other databases' counters?
   Or, will pg_stat_reset_global just reset counters of all databases?

 * Is it useful to have a method to reset counters separately?
   For example, pg_stat_reset( which text )
   which := 'buffers' | 'checkpoints' | 'tables' | 'functions' |  ...
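One way the dispatch behind such a pg_stat_reset(which) could be wired up internally is a name-to-handler table. This is a hypothetical sketch with invented names throughout, not PostgreSQL code:

```c
#include <assert.h>
#include <string.h>

/* Counters standing in for the real statistics areas. */
static int buffers_resets = 0;
static int checkpoints_resets = 0;

static void reset_buffers(void)     { buffers_resets++; }
static void reset_checkpoints(void) { checkpoints_resets++; }

typedef struct ResetHandler
{
    const char *name;           /* value accepted by pg_stat_reset(which) */
    void      (*reset)(void);   /* clears that area's counters */
} ResetHandler;

static const ResetHandler reset_handlers[] = {
    {"buffers", reset_buffers},
    {"checkpoints", reset_checkpoints},
};

/* Returns 1 if a handler for 'which' was found and invoked, else 0. */
static int
stat_reset(const char *which)
{
    size_t i;

    for (i = 0; i < sizeof(reset_handlers) / sizeof(reset_handlers[0]); i++)
    {
        if (strcmp(reset_handlers[i].name, which) == 0)
        {
            reset_handlers[i].reset();
            return 1;
        }
    }
    return 0;                   /* unknown area; caller should raise an error */
}
```

Under this scheme a new resettable area means one new table entry rather than a new UDF per area.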

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Clearing global statistics

2009-12-06 Thread Greg Smith

Itagaki Takahiro wrote:

Greg Smith g...@2ndquadrant.com wrote:

  
-Not sure if this should be named pg_stat_reset_global (to match the way 
these are called global stats in the source) or 
pg_stat_reset_cluster.  Picked the former for V1, not attached to that 
decision at all.  Might even make sense to use a name that makes it 
obvious the pg_stat_bgwriter data is what's targeted.



A couple of comments:

 * We will be able to reset global counters and current database's counters.
   Do we need to have a method to reset other databases' counters?
   Or, will pg_stat_reset_global just reset counters of all databases?

 * Is it useful to have a method to reset counters separately?
   For example, pg_stat_reset( which text )
   which := 'buffers' | 'checkpoints' | 'tables' | 'functions' |  ...
  
The fact that you're asking the question this way suggests to me I've 
named this completely wrong.  pg_stat_reset_global only resets the bits 
global to all databases.  It doesn't touch any of the database-specific 
things that pg_stat_reset can handle right now.  At the moment, the only 
global information is what's in pg_stat_bgwriter:  buffer statistics and 
checkpoint stats.  I'm thinking that I should rename this new function 
to pg_stat_reset_bgwriter so it's obvious how limited its target is.  
Using either global or cluster for the name is just going to leave 
people thinking it acts across a much larger area than it does.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] Clearing global statistics

2009-12-06 Thread Itagaki Takahiro

Greg Smith g...@2ndquadrant.com wrote:

 I'm thinking that I should rename this new function 
 to pg_stat_reset_bgwriter so it's obvious how limited its target is.

I don't think that is a good name, because we might have other cluster-level
statistics not related to the bgwriter in the future. I hope you will suggest
an extensible method when you improve this area.

To be honest, I have a plan to add performance statistics counters to
postgres. They are not bgwriter counters, but they are cluster-level. I'd
like to use your infrastructure in my work, too :)

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





[HACKERS] Wrapping up CommitFest 2009-11

2009-12-06 Thread Greg Smith
Next Tuesday was our target date for being finished with the current 
CommitFest, and we're a little overdue to start kicking back a lot more 
patches that still clearly need a whole additional round of work on 
them.  Here's how that's going to happen:


This Tuesday 12/8 at noon GMT, I'm going to start transitioning anything 
that's been "Waiting for author" for more than 5 days to "Returned with 
Feedback", removing those patches from the current CommitFest.  If you 
were given feedback some time ago but have not acted on it, that's your 
deadline; not much time is left to submit a new update and have it 
considered soon.  I'm going to continue kicking out more patches in that 
state each day afterwards too.  This, by the way, is likely to end up 
the standard policy for future CommitFests, possibly even becoming 
automated one day:  5 days for authors to resubmit after review feedback 
before we just assume the patch is dead for this round.


We have a number of patches that have received updates from their 
authors that are waiting for a re-review.  I'm going to take a look at 
as many of these as I can as the second-pass reviewer to try and 
determine whether the patch is ready to pass to a committer or not.  I'd 
prefer to get the original reviewer to help make that assessment.  If 
you did a review for this CommitFest, please check the web application 
to see if there's been an update addressing your concerns and let me 
know what you think of the updated result.


There are also a couple of patches that for various reasons have yet to 
get a first review done.  We've been reassigning for those the last 
couple of days, and I expect that all those will also be looked at by 
Tuesday as well (except for SE-PostgreSQL/Lite, which we all know is a 
bit more complicated).  This will leave some time for their authors to 
respond to feedback before we close up here, and of course we still have 
one more CommitFest left for patches that are returned with feedback to 
be resubmitted for.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] add more frame types in window functions (ROWS)

2009-12-06 Thread Hitoshi Harada
2009/12/7 Andrew Gierth and...@tao11.riddles.org.uk:
 Hitoshi == Hitoshi Harada umi.tan...@gmail.com writes:

  Hitoshi Attached is updated version. I added AggGetMemoryContext()
  Hitoshi in executor/nodeAgg.h (though I'm not sure where to go...)
  Hitoshi and its second argument iswindowagg is output parameter to
  Hitoshi know whether the call context is Agg or WindowAgg. Your
  Hitoshi proposal of APIs to know whether the function is called as
  Hitoshi Aggregate or not is also a candidate to be, but it seems out
  Hitoshi of this patch scope, so it doesn't touch anything.

 I don't really like the extra argument; aggregate functions should
 almost never have to care about whether they're being called as window
 functions rather than aggregate functions. And if it does care, I
 don't see why this is the appropriate function for it. At the very
 least the function should accept NULL for the iswindowagg pointer to
 avoid useless variables in the caller.

I've just remembered such a situation from before, around here:
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/array_userfuncs.c.diff?r1=1.30;r2=1.31

Of course it is fixed now. Still, thinking about error reporting and
the like, there should be a way to know which context you are called in.
OK, I agree "iswindowagg" can be nullable, though most "isnull"
parameters cannot be NULL, and forcing the caller to supply a boolean
stack variable doesn't seem like hard work.

 So for this and the regression test problem mentioned in the other mail,
 I'm setting this back to waiting on author.

In my humble opinion, a view regression test is not necessary in either
my patch or yours. At least the window test has not tested views since
it was introduced. Am I missing something?


Regards,



-- 
Hitoshi Harada



Re: [HACKERS] Clearing global statistics

2009-12-06 Thread Greg Smith

Itagaki Takahiro wrote:

Greg Smith g...@2ndquadrant.com wrote:

  
I'm thinking that I should rename this new function 
to pg_stat_reset_bgwriter so it's obvious how limited its target is.



I don't think that is a good name, because we might have other cluster-level
statistics not related to the bgwriter in the future. I hope you will suggest
an extensible method when you improve this area.
  
I follow what you mean now.  I'll take a look at allowing pg_stat_reset 
to act on an input as you suggested, rather than adding more of these 
UDFs every time somebody adds a new area they want to be able to clear.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] tsearch parser inefficiency if text includes urls or emails - new version

2009-12-06 Thread Greg Smith
After getting off to a good start, it looks like this patch is now stuck 
waiting for a second review pass from Kevin right now, with no open 
items for Andres to correct.  Since the only issues on the table seem to 
be that of code aesthetics and long-term planning for this style of 
implementation rather than specific functional bits, I'm leaning toward 
saying this one is ready to have a committer look at it.  Any comments 
from Kevin or Andres about where this is at?


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] Syntax for partitioning

2009-12-06 Thread Greg Smith

Simon Riggs wrote:

I will review and eventually commit this, if appropriate, though it is
3rd in my queue and will probably not be done for at least 2 weeks,
possibly 4 weeks.
  
I've marked Simon as the next reviewer and expected committer on this 
patch and have updated it to Returned with Feedback.  That's not 
saying work is going to stop on it.  It just looks like that is going to 
extend beyond when we want this CommitFest to finish, and I want to pull 
it off the list of things I'm monitoring as part of that.  Everyone 
should keep hammering away at nailing this fundamental bit down, so that 
the rest of the partitioning patch ideas floating around finally have a 
firm place to start attaching to.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] YAML Was: CommitFest status/management

2009-12-06 Thread Itagaki Takahiro

Josh Berkus j...@agliodbs.com wrote:

 Having compared the JSON and YAML output formats, I think having YAML as
 a 2nd human-readable format might be valuable, even though it adds
 nothing to machine-processing.

Sure. YAML is human-readable and can present information that would be
too verbose in the one-line text format. +1 to add a YAML format to EXPLAIN.

 Again, if there were a sensible way to do YAML as a contrib module, I'd
 go for that, but there isn't.

Some ideas to export formatters:
 1. Provides register_explain_format(name, handler) function.
Contrib modules can register their handlers when LOADed.

 2. Provides a method to specify a filter function. XML output is
passed to the function and probably converted with XSLT.
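Idea (1) above is essentially a name-to-handler registry with dispatch by
format name. A toy sketch of that pattern (hypothetical names; not
PostgreSQL's actual hook machinery):

```python
# Minimal formatter registry: contrib modules would call
# register_explain_format() at load time; EXPLAIN dispatches by name.
_formatters = {}

def register_explain_format(name, handler):
    """Register a handler that turns a plan dict into output text."""
    _formatters[name] = handler

def explain(plan, fmt="text"):
    if fmt not in _formatters:
        raise ValueError("unknown EXPLAIN format: %s" % fmt)
    return _formatters[fmt](plan)

# Two trivial handlers standing in for real formatters.
register_explain_format("text", lambda plan: plan["Node Type"])
register_explain_format("upper", lambda plan: plan["Node Type"].upper())

print(explain({"Node Type": "Seq Scan"}, "upper"))  # SEQ SCAN
```

Whatever the handler interface, the open question remains how much of
explain.c's plan-traversal logic each out-of-core handler would have to
duplicate.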

BTW, an extensible formatter is surely useful, but what will we do about
the XML and JSON formats once we have it? Will we remove them from core?
If we have a plan in that direction, we should complete it before the 8.5.0
release; it will be hard to remove them after the release.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] YAML Was: CommitFest status/management

2009-12-06 Thread Robert Haas
On Sun, Dec 6, 2009 at 8:34 PM, Itagaki Takahiro
itagaki.takah...@oss.ntt.co.jp wrote:

 Josh Berkus j...@agliodbs.com wrote:

 Having compared the JSON and YAML output formats, I think having YAML as
 a 2nd human-readable format might be valuable, even though it adds
 nothing to machine-processing.

 Sure. YAML is human readable and can print more information that is
 too verbose in one-line text format. +1 to add YAML format to EXPLAIN.

 Again, if there were a sensible way to do YAML as a contrib module, I'd
 go for that, but there isn't.

 Some ideas to export formatters:
  1. Provides register_explain_format(name, handler) function.
    Contrib modules can register their handlers when LOADed.

  2. Provides a method to specify a filter function. XML output is
    passed to the function and probably converted with XSLT.

 BTW, an extensible formatter is surely useful, but what will we do about
 XML and JSON formats when we have it? Will we remove them from core?
 If we have a plan to the direction, we should complete it before 8.5.0
 release. We will be hard to revert them after the release.

I sent a previous message about this, but just realized I forgot to
copy the list.  I thought about the possibility of a pluggable system
for formats when I wrote the original machine-readable explain patch.
It seemed to me at the time that if we went this route any contrib
module would have to duplicate significant portions of explain.c, no
matter where we put the hooks.  We'd presumably feel obliged to update
the contrib module when making any changes to explain.c, so I think
the maintenance burden would actually be higher that way than with
everything in core.

The main point here for me is that the JSON format is already
parseable by YAML parsers, and can probably be turned into YAML using
a very short Perl script - possibly even using a sed script.  I think
that it's overkill to support two formats that are that similar.
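For illustration, such a converter really can be tiny. A sketch
(hypothetical code; it hand-emits YAML block style from parsed JSON using
only a JSON parser, rather than a YAML library):

```python
import json

def to_yaml(obj, indent=0):
    """Render parsed JSON in a minimal YAML block style."""
    pad = "  " * indent
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, (dict, list)) and value:
                lines.append(pad + key + ":")
                lines.extend(to_yaml(value, indent + 1))
            else:
                lines.append(pad + key + ": " + json.dumps(value))
    elif isinstance(obj, list):
        for value in obj:
            if isinstance(value, (dict, list)) and value:
                lines.append(pad + "-")
                lines.extend(to_yaml(value, indent + 1))
            else:
                lines.append(pad + "- " + json.dumps(value))
    return lines if indent else "\n".join(lines)

# A fragment shaped like EXPLAIN (FORMAT JSON) output:
plan = json.loads('{"Plan": {"Node Type": "Seq Scan", "Relation Name": "pgbench_accounts"}}')
print(to_yaml(plan))
```

Scalars are emitted with json.dumps, which is itself valid YAML, so
quoting stays correct without any YAML-specific code.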

Tallying up the votes on this patch, we have Tom Lane, Andrew Dunstan,
Josh Drake, and myself arguing against including this in core, and
Josh Berkus and Itagaki Takahiro arguing for it.  Ron Mayer seemed
somewhat in favor of it in his message on-list but sent a message
off-list taking the opposite position.  Brendan Jurd cast aspersions
on the human-readability of the text format but didn't explicitly take
a position on the core issue, and Euler Taveira de Oliveira suggested
that writing a converter would be easier than writing a module
framework (which I think is almost certainly true) but also didn't
explicitly take a position on the core issue.

Overall, that sounds to me like a slight preponderance of "no" votes.
Anyone think I've mis-tallied or want to weigh in?

...Robert



Re: [HACKERS] add more frame types in window functions (ROWS)

2009-12-06 Thread Andrew Gierth
 Hitoshi == Hitoshi Harada umi.tan...@gmail.com writes:

  So for this and the regression test problem mentioned in the other
  mail, I'm setting this back to waiting on author.

 Hitoshi In my humble opinion, a view regression test is not necessary
 Hitoshi in either my patch or yours. At least the window test has not
 Hitoshi tested views since it was introduced. Am I
 Hitoshi missing something?

If you don't want to include views in the regression test, then take
them out; I will object to that, but I'll leave it up to the committer
to decide whether it is appropriate, provided the other concerns are
addressed.

But what you have in the regression tests _now_ is, as far as I can
tell, only working by chance; with rules and window being in the
same parallel group, and window.sql creating a view, you have an
obvious race condition, unless I'm missing something.

I included view tests in my own patch for the very simple reason that
I found actual bugs that way during development.

-- 
Andrew (irc:RhodiumToad)



Re: [HACKERS] Syntax for partitioning

2009-12-06 Thread Itagaki Takahiro

Greg Smith g...@2ndquadrant.com wrote:

 I've marked Simon as the next reviewer and expected committer on this 
 patch and have updated it to Returned with Feedback.

OK. I'll re-submit improved patches in the next commit fest.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Adding support for SE-Linux security

2009-12-06 Thread Robert Haas
On Sat, Dec 5, 2009 at 8:18 AM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
  I offered to review it.  I was going to mostly review the parts that
  impacted our existing code, and I wasn't going to be able to do a
  thorough job of the SE-Linux-specific files.

 Review it and commit it, after making whatever modifications are
 necessary?  Or review it in part, leaving the final review and commit
 to someone else?

 I just read through the latest version of this patch and it does
 appear to be in significantly better shape than the versions I read
 back in July.  So it might not require a Herculean feat of strength to
 get this in, but I still think it's going to be a big job.  There's a
 lot of code here that needs to be verified and in some cases probably
 cleaned up or restructured.  If you're prepared to take it on, I'm not
 going to speak against that, other than to say that I think you have
 your work cut out for you.

 This is no harder than many of the other seemingly crazy things I have
 done, e.g. Win32 port, client library threading.  If this is a feature
 we should have, I will get it done or get others to help me complete the
 task.

Well, I have always thought that it would be sort of a feather in our
cap to support this, which is why I've done a couple of reviews of it
in the past.  I tend to agree with Tom that only a small fraction of
our users will probably want it, but then again someone's been paying
KaiGai to put a pretty hefty amount of work into this over the last
year-plus, so obviously someone not only wants the feature but wants
it merged.  Within our community, I think that there have been a lot
of people who have liked the concept of this feature but very few who
have liked the patch, so there's somewhat of a disconnect between our
aspirations and our better technical judgment.  Tom is a notable
exception who I believe likes neither the concept nor the patch, which
is something we may need to resolve before getting too serious about
this.

...Robert



[HACKERS] Need a mentor, and a project.

2009-12-06 Thread abindra
Hello there,

I am a graduate student at the University of Washington, Tacoma
(http://www.tacoma.washington.edu/tech/) with an interest in databases 
(especially query processing). I am familiar with database theory and in an 
earlier life I used to be an application developer and have done a lot of 
SQL/database-related work. I have been interested in learning about and contributing 
to Postgres for a while now. This quarter I was the TA for the undergrad intro 
to databases class. I convinced my Prof. to use PostgreSQL for teaching and it has 
been fun. It has also allowed me to familiarize myself with Postgres from an 
external user's point of view.

Next quarter I am planning to do an Independent Study course where the main 
objective would be to allow me to get familiar with the internals of Postgres 
by working on a project(s). I would like to work on something that could 
possibly be accepted as a patch.

This is (I think) somewhat similar to what students do during Google Summer of Code, and 
I was hoping to get some help here in terms of:
1. A good project to work on for a newbie.
2. Would someone be willing to be a mentor? It would be nice to be able to get 
some guidance on a one-to-one basis.

Thanks for your time. If you have any questions or need more information, 
please do let me know.

Regards
Ashish









Re: [HACKERS] add more frame types in window functions (ROWS)

2009-12-06 Thread Greg Smith

Andrew Gierth wrote:

But what you have in the regression tests _now_ is, as far as I can
tell, only working by chance; with rules and window being in the
same parallel group, and window.sql creating a view, you have an
obvious race condition, unless I'm missing something.
  
You're right.  rules.sql does this, among other things that might get 
impacted:


 SELECT viewname, definition FROM pg_views WHERE schemaname <> 
'information_schema' ORDER BY viewname;


Both rules and window are part of the same parallel group:

 test: select_views portals_p2 rules foreign_key cluster dependency guc 
bitmapops combocid tsearch tsdicts foreign_data window xmlmap


Which means that views created in the window test could absolutely cause 
the rules test to fail given a bad race condition.  Either rules or 
window needs to be moved to another section of the test schedule.  (I 
guess you could cut down the scope of rules to avoid this particular 
problem, but hacking other people's regression tests to work around 
issues caused by yours is never good practice).  I also agree with 
Andrew's sentiment that including a view on top of the new window 
implementations is good practice, just for general robustness.
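The failure mode is easy to reproduce in miniature: one task enumerating
shared state while a parallel task mutates it produces schedule-dependent
output, which is exactly what a fixed expected-output file cannot
tolerate. A toy sketch (hypothetical; not the real regression harness):

```python
import threading
import time

views = {"rtest_v1"}        # stands in for pg_views
result = []

def rules_test():
    # like rules.sql: SELECT viewname FROM pg_views ORDER BY viewname
    time.sleep(0.01)
    result.append(sorted(views))

def window_test():
    # like window.sql: CREATE VIEW v AS ...
    views.add("v")

threads = [threading.Thread(target=rules_test),
           threading.Thread(target=window_test)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whether "v" appears in the listing depends on thread scheduling.
print(result[0])
```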


Also, while not a strict requirement, I personally hate seeing things 
with simple names used in the regression tests, like how "v" is used in 
this patch.  The better written regression tests use a preface for all 
their names to decrease the chance of a conflict here; for example, the 
rules test names all of its views with names like rtest_v1 so they're 
very clearly not going to conflict with views created by other tests.  
No cleverness there can eliminate the conflict with rules though, when 
it's looking at every view in the system.


It looks like a lot of progress has been made on this patch through its 
review.  But there's enough open issues still that I think it could use 
a bit more time to mature before we try to get it committed--the fact 
that it's been getting bounced around for weeks now and the regression 
tests aren't even completely settled down yet is telling.  The feature 
seems complete, useful, and functionally solid, but still in need of 
some general cleanup and re-testing afterwards.  I'm going to mark this 
one Returned with Feedback for now.  Please continue to work on 
knocking all these issues out, this should be a lot easier to get 
committed in our next CF.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] add more frame types in window functions (ROWS)

2009-12-06 Thread Hitoshi Harada
2009/12/7 Greg Smith g...@2ndquadrant.com:
 Which means that views created in the window test could absolutely cause the
 rules test to fail given a bad race condition.  Either rules or window needs
 to be moved to another section of the test schedule.  (I guess you could cut
 down the scope of rules to avoid this particular problem, but hacking
 other people's regression tests to work around issues caused by yours is
 never good practice).  I also agree with Andrew's sentiment that including a
 view on top of the new window implementations is good practice, just for
 general robustness.

I agree that window and rules cannot be in the same group if window
includes a view test.

 It looks like a lot of progress has been made on this patch through its
 review.  But there's enough open issues still that I think it could use a
 bit more time to mature before we try to get it committed--the fact that
 it's been getting bounced around for weeks now and the regression tests
 aren't even completely settled down yet is telling.  The feature seems
 complete, useful, and functionally solid, but still in need of some general
 cleanup and re-testing afterwards.  I'm going to mark this one Returned
 with Feedback for now.  Please continue to work on knocking all these
 issues out, this should be a lot easier to get committed in our next CF.

OK, Andrew, thank you for collaborating on this. This fest helped my
patch progress greatly. Further comments from others on the memory
leak issue are still welcome.


Regards,


-- 
Hitoshi Harada



Re: [HACKERS] [patch] pg_ctl init extension

2009-12-06 Thread Greg Smith
I just noticed that there was an updated patch here that never made its 
way onto the CommitFest app listing.  I just fixed that and took a quick 
look at it.  I was in favor of this code change, but I have to say even 
I don't really like how it ended up getting documented--and I'm sure 
there are others who would be downright hostile toward it.


The biggest problem is that all of the places that used to say 
<command>initdb</command> when talking about creating a cluster now just say 
"database cluster initialization"--with no link to a section covering 
that topic.  That's not a good step forward.  The part I'm more 
favorable toward, but that I expect other people to really cringe at, is that 
the examples showing how to manually run initdb have all been switched 
to use "pg_ctl initdb" too.


If we're going to have a smooth transition toward supporting both styles 
of working in the next release, I think what needs to happen to the 
documentation here is adding a very clear section saying that "initdb" 
and "pg_ctl initdb" are the same thing, and noting why both forms 
exist.  Maybe a short note in both pg_ctl and initdb pointing at each 
other; not sure yet what's best.  Then update all the current places 
that said initdb but have been rewritten in this doc patch to 
"database cluster initialization", so they still reference something appropriate. 

Going as far as making all the examples exclusively use the pg_ctl form 
right now is probably more than the community at large wants to handle 
just yet I suspect.  At best, maybe we could make some or all of those 
either use both forms, or link them to the new discussion of 
alternatives section. 

I'm glad we made some progress (and are basically at code complete now) 
on this patch this round.  Given that this patch doesn't have a large 
amount of support, I think that the remaining work here is fine-tuning 
the documentation to cover the new option available without introducing 
an abrupt change people won't like.  I'm going to mark this "returned 
with feedback" for now since I think that's going to take a bit more 
work than we really have time for right now, particularly given the 
limited number of people who care about this change.  Zdenek, once this 
CommitFest clears out, I can help out with getting the documentation 
parts here smoothed over so this is in proper shape to commit during the 
next one; I don't think there's anything left you need to do.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] YAML Was: CommitFest status/management

2009-12-06 Thread Greg Smith

Robert Haas wrote:

The main point here for me is that the JSON format is already
parseable by YAML parsers, and can probably be turned into YAML using
a very short Perl script - possibly even using a sed script.  I think
that it's overkill to support two formats that are that similar.
  
It's not merely that JSON can be turned into YAML, or that it just 
happens to be parseable by YAML parsers.  While there was some 
possible divergence in earlier versions, a JSON document *is* a 
YAML 1.2 document already.  JSON is actually a subset of YAML that uses one of 
the many possible YAML styles--basically, YAML accepts anything in JSON 
format, along with others.  This means that by providing JSON output, 
we've *already* provided YAML output, too.  Just not the nice looking 
output people tend to associate with YAML.


Accordingly, there is really no basis for this patch to exist from the 
perspective of helping a typical tool author.  If you want to parse YAML 
robustly, you're going to grab someone's parsing library to do it rather 
than writing it yourself, and if you do that it will accept the existing 
JSON output just fine too.  Basically this patch lives or dies by 
whether it looks so much nicer to people as to justify its code weight.


Given the above, I don't understand why writing this patch was deemed 
worthwhile in the first place, but I hate to tell people they can't have 
something they find visually appealing just because I don't think it's 
an improvement.  Consider me a neutral vote, although I suspect the 
above may sway some people who were on the fence toward disapproval.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] Reading recovery.conf earlier

2009-12-06 Thread Fujii Masao
On Sun, Dec 6, 2009 at 2:49 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Proposal is to split out the couple of lines in
 readRecoveryCommandFile() that set important state and make it read in
 an option block that can be used by caller. It would then be called by
 both postmaster (earlier in startup) and again later by startup process,
 as happens now. I want to do it that way so I can read file before we
 create shared memory, so I don't have to worry about passing details via
 shared memory itself.

I agree with the proposal that postmaster reads the recovery.conf.
Because this would enable all child processes to easily obtain the
parameter values in that, like GUC parameters.

But I'm not sure why recovery.conf should be read separately by
postmaster and the startup process. How about making postmaster
read all of it, for simplicity?
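For context, recovery.conf is a flat keyword = 'value' file, so the
question is less about parsing than about which process holds the parsed
results. A toy parser for that format (a sketch only; the real logic
lives in readRecoveryCommandFile()):

```python
def parse_recovery_conf(text):
    """Parse keyword = 'value' lines, skipping blanks and # comments."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            options[key.strip()] = value.strip().strip("'")
    return options

sample = """
# sample recovery.conf
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_timeline = 'latest'
"""
print(parse_recovery_conf(sample)["recovery_target_timeline"])  # latest
```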

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] [PATCH] Largeobject Access Controls (r2460)

2009-12-06 Thread Greg Smith
I just looked over the latest version of this patch and it seems to 
satisfy all the issues suggested by the initial review.  This looks like 
it's ready for a committer from a quality perspective and I'm going to 
mark it as such.


I have a guess what some of the first points of discussion are going to 
be though, so might as well raise them here.  This patch is 2.8K lines 
of code that's in a lot of places:  a mix of full new functions, tweaks 
to existing ones, docs, regression tests, it's a well structured but 
somewhat heavy bit of work.  One obvious question is whether there's 
enough demand for access controls on large objects to justify adding the 
complexity involved to do so.  A second thing I'm concerned about is 
what implications this change would have for in-place upgrades.  If 
there's demand and it's not going to cause upgrade issues, then we just 
need to find a committer willing to chew on it.  I think those are the 
main hurdles left for this patch.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [HACKERS] named generic constraints [feature request]

2009-12-06 Thread Caleb Cushing
 no -

 -- is line comment in SQL - it same like // in C++

Sorry, I didn't see this was updated. I know -- is a comment.

I mean in sql  means NOT. Your function name is emptystr, which
implies it looks for an empty string and returns true if the string is
found to be empty (at least in my mind). So if you want to create a
constraint of "not empty" you'd write NOT emptystr(col); however, the
way you wrote it would only return true if the string was NOT empty,
which is a double negative, meaning that it is empty, thereby rejecting
all but empty strings.

my final function that I wrote ended up looking like this (note: I
didn't specify to include whitespace in my original explanation):




CREATE OR REPLACE FUNCTION empty(TEXT)
RETURNS bool AS $$
 SELECT $1 ~ '^[[:space:]]*$';
 $$ LANGUAGE sql
IMMUTABLE;
COMMENT ON FUNCTION empty(TEXT)
IS 'Find empty strings or strings containing only whitespace';

which I'm using like this (note: this is not the full table)

 CREATE TABLE users (
user_name   TEXTNOT NULL
UNIQUE
CHECK ( NOT empty( user_name ))
);
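As a quick sanity check of what that pattern accepts (a sketch using
Python's `re`; `\s` approximates POSIX `[[:space:]]` for ASCII input):

```python
import re

# Mirrors '^[[:space:]]*$': the entire string is zero or more whitespace
# characters, so it matches "" and whitespace-only strings.
empty = re.compile(r"^\s*$")

for s in ["", "   ", "\t\n", "bob", " a "]:
    print(repr(s), bool(empty.match(s)))
```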

I still wish I could write,something like

CREATE CONSTRAINT empty CHECK ( VALUE NOT ~ '^[[:space:]]*$' );

CREATE TABLE users (
user_name   TEXTNOT NULL
UNIQUE
CHECK ( NOT empty )
);
 CREATE TABLE roles (
role_name   TEXTNOT NULL
UNIQUE
CHECK ( NOT empty)
);
-- 
Caleb Cushing

http://xenoterracide.blogspot.com



Re: [HACKERS] LDAP where DN does not include UID attribute

2009-12-06 Thread Greg Smith

Magnus Hagander wrote:

On Sun, Nov 29, 2009 at 13:05, Magnus Hagander mag...@hagander.net wrote:
  

I'll be happy to work on this to get it ready for commit, or do you
want to do the updates?



Here's a patch with my work so far. Major points missing is the
sanitizing of the username and the docs.
  
It looks like you did an initial review here already.  Since we haven't 
heard from Robert in a while and you seem interested in the patch, I 
just updated this one to list you as the committer and marked it "ready 
for committer".  You can commit it or bounce it from there, but it's 
obvious none of our other reviewers are going to be able to do anything 
with it.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] Listen / Notify - what to do when the queue is full

2009-12-06 Thread Greg Smith

Jeff Davis wrote:

On Mon, 2009-11-30 at 14:14 +0100, Joachim Wieland wrote:
  

I have a new version that deals with this problem but I need to clean
it up a bit. I am planning to post it this week.



Are planning to send a new version soon?

As it is, we're 12 days from the end of this commitfest, so we don't
have much time to hand the patch back and forth before we're out of
time.
  
Joachim has been making good progress here, but given the current code 
maturity vs. implementation complexity of this work my guess is that 
even if we got an updated version today it's not going to hit commit 
quality given the time left.  I'm going to mark this one "returned with 
feedback", and I do hope that work continues on this patch so it's early 
in the queue for the final CommitFest for this version.  It seems like 
it just needs a bit more time to let the design mature and get the kinks 
worked out, and it could turn into a useful feature--I know I've wanted 
NOTIFY with a payload for years.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [HACKERS] [PATCH] Largeobject Access Controls (r2460)

2009-12-06 Thread KaiGai Kohei
Greg Smith wrote:
 I just looked over the latest version of this patch and it seems to 
 satisfy all the issues suggested by the initial review.  This looks like 
 it's ready for a committer from a quality perspective and I'm going to 
 mark it as such.

Thanks for your efforts.

 I have a guess what some of the first points of discussion are going to 
 be though, so might as well raise them here.  This patch is 2.8K lines 
 of code that's in a lot of places:  a mix of full new functions, tweaks 
 to existing ones, docs, regression tests, it's a well structured but 
 somewhat heavy bit of work.  One obvious questions is whether there's 
 enough demand for access controls on large objects to justify adding the 
 complexity involved to do so.

At least, it is a todo item in the community:
  http://wiki.postgresql.org/wiki/Todo#Binary_Data

Apart from SELinux, it is quite natural to apply access controls to
binary data. Without any valid access controls, users will
not want to store their sensitive information, such as confidential PDF
files, as large objects.

 A second thing I'm concerned about is 
 what implications this change would have for in-place upgrades.  If 
 there's demand and it's not going to cause upgrade issues, then we just 
 need to find a committer willing to chew on it.  I think those are the 
 main hurdles left for this patch.

I guess we need to create an empty metadata entry with the given OID in
pg_largeobject_metadata for each large object when we upgrade
in-place from 8.4.x or an earlier release to the upcoming release.
However, there are no format changes in the pg_largeobject catalog itself,
even for an empty large object, so I guess we only need a small amount
of additional support in pg_dump to create the empty metadata.

Any suggestions here would be welcome.

Thanks,
-- 
OSS Platform Development Division, NEC
KaiGai Kohei kai...@ak.jp.nec.com



Re: [HACKERS] [PATCH] Largeobject Access Controls (r2460)

2009-12-06 Thread Jaime Casanova
On Sun, Dec 6, 2009 at 11:19 PM, Greg Smith g...@2ndquadrant.com wrote:
 I just looked over the latest version of this patch and it seems to satisfy
 all the issues suggested by the initial review.  This looks like it's ready
 for a committer from a quality perspective and I'm going to mark it as such.


Yes, I have just finished my tests and it seems the patch is working
just fine...

BTW, it seems KaiGai missed this comment in
src/backend/catalog/pg_largeobject.c when renaming the parameter:
* large_object_privilege_checks is not refered here,

I still don't like the name, but we have changed it a lot of times, so
if anyone has a better idea, now is the time to speak up.

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] pgbench: new feature allowing to launch shell commands

2009-12-06 Thread Michael Paquier
Hi,

I took care of making the 3 cases you mentioned work in the new version
of the patch attached. It worked in my case for both shell and setshell
without any problem.
The code has also been reorganized so as to lighten the processing in doCustom.
It looks cleaner on this part.

The only remaining issues are the thread bug and the documentation.
As for the bug, I am currently looking into it.
It should take a little bit of time; I don't really know yet where it
comes from exactly...
For the documentation, I'll write a little bit more once the code
issues are solved.

Regards

-- 
Michael Paquier
NIPPON TELEGRAPH AND
TELEPHONE CORPORATION
NTT Open Source Software Center
diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 8a6437f..9c33f7e 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -159,6 +159,7 @@ typedef struct
 } Variable;
 
 #define MAX_FILES		128		/* max number of SQL script files allowed */
+#define SHELL_COMMAND_SIZE	256		/* maximum size allowed for shell command */
 
 /*
  * structures used in custom query mode
@@ -590,6 +591,131 @@ getQueryParams(CState *st, const Command *command, const char **params)
 		params[i] = getVariable(st, command-argv[i + 1]);
 }
 
+static bool
+process_shellcommand(CState *st, char **argv, int argc, char *commandShell)
+{
+	int		j,
+			retvalglob = 0,
+			retval = 0,
+			arg_min = 0;
+	char	*var;
+
+	/* The minimum number of arguments required is different depending on \setshell or \shell */
+	if (pg_strcasecmp(argv[0], shell) == 0)
+	{
+		arg_min = 2;
+	}
+	else if (pg_strcasecmp(argv[0], setshell) == 0)
+	{
+		arg_min = 3;
+	}
+	else
+	{
+		fprintf(stderr, %s: undefined command type\n, argv[0]);
+		return true;
+	}
+
+	/* construction of the command line with all the transmitted arguments */
+	retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,%s,argv[arg_min - 1]);
+	if (retval  0
+		|| retval  SHELL_COMMAND_SIZE-1)
+	{
+		fprintf(stderr, %s: error loading shell parameter: too many characters\n,argv[0]);
+		return true;
+	}
+
+	for (j = arg_min; j  argc; j++)
+	{
+		char *commandLoc = strdup(commandShell);
+
+		/* Look at first if the argument is :-based or not */
+		if (*argv[j] == ':')
+		{
+			/* Case of a ::-based argument */
+			if (argv[j][1] == ':')
+			{
+retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,%s %s, commandLoc, argv[j] + 1);
+			}
+			else
+			{
+/* Case of a :-only-based argument */
+if ((var = getVariable(st, argv[j] + 1)) == NULL)
+{
+	fprintf(stderr, %s: undefined variable %s\n, argv[0], argv[j]);
+	return true;
+}
+retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,%s %s, commandLoc, var);
+			}
+		}
+		else
+		{
+			/* Case when the argument is neither : nor :: based */
+			retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,%s %s, commandLoc, argv[j]);
+		}
+		retvalglob += retval;
+		if (retval  0
+			|| retvalglob  SHELL_COMMAND_SIZE-1)
+		{
+			fprintf(stderr, %s: error loading shell parameter: too many characters\n, argv[0]);
+			free(commandLoc);
+			return true;
+		}
+		free(commandLoc);
+	}
+	return false;
+}
+
+static bool
+process_shellvariable(CState *st, char **argv ,char *commandShell)
+{
+	int		retval;
+	char	res[64];
+	FILE	*respipe = NULL;
+
+	/*
+	 * Data treatment
+	 * prototype: /setshell aid skewerand +additional arguments
+	 */
+
+	respipe = popen(commandShell,"r");
+	if (respipe == NULL)
+	{
+		fprintf(stderr, "%s: error launching shell script\n", argv[0]);
+		return true;
+	}
+	if (fgets(res, sizeof(res), respipe) == NULL)
+	{
+		fprintf(stderr, "%s: error getting parameter\n", argv[0]);
+		return true;
+	}
+
+	retval = pclose(respipe);
+	if (retval == -1)
+	{
+		fprintf(stderr, "%s: error closing shell script\n", argv[0]);
+		return true;
+	}
+	/* Transform the parameter into an integer */
+	retval = atoi(res);
+	if (retval == 0)
+	{
+		fprintf(stderr, "%s: error input integer\n", argv[0]);
+		return true;
+	}
+	/* ready to put the variable */
+	snprintf(res, sizeof(res), "%d", retval);
+
+	if (putVariable(st, argv[1], res) == false)
+	{
+		fprintf(stderr, "%s: out of memory\n", argv[0]);
+		return true;
+	}
+#ifdef DEBUG
+	printf("shell parameter name: %s, value: %s\n", argv[1], res);
+#endif
+	return false;
+}
+
 #define MAX_PREPARE_NAME		32
 static void
 preparedStatementName(char *buffer, int file, int state)
@@ -992,7 +1118,50 @@ top:
 
 			st->listen = 1;
 		}
+		else if (pg_strcasecmp(argv[0], "setshell") == 0)
+		{
+			bool	status = false;
+			char	commandShell[SHELL_COMMAND_SIZE];
+
+			/* construction of the command line with all the transmitted arguments */
+			status = process_shellcommand(st, argv, argc, commandShell);
+			if (status == true)
+			{
+				st->ecnt++;
+				return true;
+			}
+			/* get as script output the variable and put it */
+			status = process_shellvariable(st, argv, commandShell);
+			if (status == true)
+			{
+				st->ecnt++;
+				return true;
+			}
+
+			st->listen = 1;
+		}
+		else if (pg_strcasecmp(argv[0], "

Re: [HACKERS] [PATCH] Largeobject Access Controls (r2460)

2009-12-06 Thread KaiGai Kohei
Jaime Casanova wrote:
 On Sun, Dec 6, 2009 at 11:19 PM, Greg Smith g...@2ndquadrant.com wrote:
 I just looked over the latest version of this patch and it seems to satisfy
 all the issues suggested by the initial review.  This looks like it's ready
 for a committer from a quality perspective and I'm going to mark it as such.

 
 yes. i have just finished my tests and seems like the patch is working
 just fine...
 
 BTW, seems like KaiGai miss this comment in
 src/backend/catalog/pg_largeobject.c when renaming the parameter
 * large_object_privilege_checks is not refered here,
 
 i still doesn't like the name but we have changed it a lot of times so
 if anyone has a better idea now is when you have to speak

Oops, it should be fixed to "lo_compat_privileges".
This comment also has a version-number issue, so I fixed it as follows:

BEFORE:
/*
 * large_object_privilege_checks is not refered here,
 * because it is a compatibility option, but we don't
 * have ALTER LARGE OBJECT prior to the v8.5.0.
 */

AFTER:
 /*
  * The 'lo_compat_privileges' is not checked here, because we
  * don't have any access control features in the 8.4.x series
  * or earlier release.
  * So, it is not a place we can define a compatible behavior.
  */

Nothing is changed in the other code, including anything corresponding to
in-place upgrading. I'm waiting for suggestions.

Thanks,
-- 
OSS Platform Development Division, NEC
KaiGai Kohei kai...@ak.jp.nec.com


sepgsql-02-blob-8.5devel-r2461.patch.gz
Description: application/gzip

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] operator exclusion constraints

2009-12-06 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 [ exclusion constraint patch ]

Applied after quite a lot of editorialization.  For future reference,
here is a summary of what I did:

* Reworded and renamed stuff to try to be consistent about calling these
things exclusion constraints.  The original code and docs bore quite
a few traces of the various other terminologies, which is not too
surprising given the history but would have been confusing to future
readers of the code.

* Moved the verification of new exclusion constraints into index_build
processing as per discussion.

* Unified the EXCLUDE parsing path with the existing unique/pkey path
by the expedient of adding an excludeOpNames list to IndexStmt.  This
got rid of quite a lot of duplicated code, and fixed a number of bizarre
corner cases like the bogus wording of the index creation NOTICE messages.

* Cleaned up some things that didn't really meet project practices.
To mention a couple: one aspect of the "try to make the patch look
like it had always been there" rule is to insert new stuff in unsurprising
places.  Adding code at the end of a list or file very often doesn't meet
this test.  I tried to put the EXCLUDE constraint stuff beside
UNIQUE/PRIMARY KEY handling where possible.  Another pet peeve that was
triggered a lot in this patch is that you seemed to be intent on fitting
ereport() calls into 80 columns no matter what, and would break the error
message texts in random places to make it fit.  There's a good practical
reason not to do that: it makes it hard to grep the source code for an
error message.  You can break at phrase boundaries if you must, but
in general I prefer to just let the text go past the right margin.

There are a couple of loose ends yet:

* I made CREATE...LIKE processing handle exclusion constraints the same
as unique/pkey constraints, ie, they're processed by INCLUDING INDEXES.
There was some discussion of rearranging things to make these be processed
by INCLUDING CONSTRAINTS instead, but no consensus that I recall.  In
any case, failing to copy them at all is clearly no good.

* I'm not too satisfied with the behavior of psql's \d:

regression=# create table foo (f1 int primary key using index tablespace ts1,
regression(# f2 int, EXCLUDE USING btree (f2 WITH =) using index tablespace ts1,
regression(# f3 int, EXCLUDE USING btree (f3 WITH =) DEFERRABLE INITIALLY 
DEFERRED);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo"
NOTICE:  CREATE TABLE / EXCLUDE will create implicit index "foo_f2_exclusion" for table "foo"
NOTICE:  CREATE TABLE / EXCLUDE will create implicit index "foo_f3_exclusion" for table "foo"
CREATE TABLE
regression=# \d foo
   Table "public.foo"
 Column |  Type   | Modifiers 
--------+---------+-----------
 f1     | integer | not null
 f2     | integer | 
 f3     | integer | 
Indexes:
    "foo_pkey" PRIMARY KEY, btree (f1), tablespace "ts1"
    "foo_f2_exclusion" btree (f2), tablespace "ts1"
    "foo_f3_exclusion" btree (f3) DEFERRABLE INITIALLY DEFERRED
Exclusion constraints:
    "foo_f2_exclusion" EXCLUDE USING btree (f2 WITH =)
    "foo_f3_exclusion" EXCLUDE USING btree (f3 WITH =) DEFERRABLE INITIALLY DEFERRED

regression=# 

This might have been defensible back when the idea was to keep constraints
decoupled from indexes, but now it just looks bizarre.  We should either
get rid of the "Exclusion constraints:" display and attach the info to
the index entries, or hide indexes that are attached to exclusion
constraints.  I lean to the former on the grounds of the precedent for
unique/pkey indexes --- which is not totally arbitrary, since an index
is usable as a query index regardless of its function as a constraint.
It's probably a debatable point though.

regards, tom lane



Re: [HACKERS] EXPLAIN BUFFERS

2009-12-06 Thread Euler Taveira de Oliveira
Itagaki Takahiro escreveu:
 The attached patch is rebased to current CVS.
 
I'm looking at your patch now... It is almost there but has some issues.

(i) documentation: you have more than three counters and they could be
mentioned in docs too.

+Include information on the buffers. Specifically, include the number of
+buffer hits, number of disk blocks read, and number of local buffer read.


(ii) format: why does text output format have less counters than the other ones?

+   if (es->format == EXPLAIN_FORMAT_TEXT)
+   {
+       appendStringInfoSpaces(es->str, es->indent * 2);
+       appendStringInfo(es->str, "Blocks Hit: %ld  Read: %ld  Temp Read: %ld\n",
+           usage->blks_hit, usage->blks_read, usage->temp_blks_read);
+   }
+   else
+   {
+       ExplainPropertyLong("Hit Blocks", usage->blks_hit, es);
+       ExplainPropertyLong("Read Blocks", usage->blks_read, es);
+       ExplainPropertyLong("Written Blocks", usage->blks_written, es);
+       ExplainPropertyLong("Local Hit Blocks", usage->local_blks_hit, es);
+       ExplainPropertyLong("Local Read Blocks", usage->local_blks_read, es);
+       ExplainPropertyLong("Local Written Blocks", usage->local_blks_written, es);
+       ExplainPropertyLong("Temp Read Blocks", usage->temp_blks_read, es);
+       ExplainPropertyLong("Temp Written Blocks", usage->temp_blks_written, es);
+   }

(iii) string: i don't like the string in text format because (1) it is not
concise (only the first item has the word 'Blocks'), (2) what block is it
about? Shared, Local, or Temp?, (3) why don't you include the other ones?, and
(4) why don't you include the written counters?

 ->  Seq Scan on pg_namespace nc  (cost=0.00..1.07 rows=4 width=68) (actual time=0.015..0.165 rows=4 loops=1)
       Filter: (NOT pg_is_other_temp_schema(oid))
       Blocks Hit: 11  Read: 0  Temp Read: 0

(iv) text format: i don't have a good suggestion but here are some ideas. The
former is too long and the latter is too verbose. :( Another option is to
suppress words hit, read, and written; and just document it.

Shared Blocks (11 hit, 5 read, 0 written); Local Blocks (5 hit, 0 read, 0
written); Temp Blocks (0 read, 0 written)

or

Shared Blocks: 11 hit, 5 read, 0 written
Local Blocks: 5 hit, 0 read, 0 written
Temp Blocks: 0 read, 0 written

(v) accumulative: i don't remember if we discussed it but is there a reason
the number of buffers isn't accumulative? We already have cost and time that
are both accumulative. I saw BufferUsageAccumDiff() function but didn't try to
figure out why it isn't accumulating or passing the counters to parent nodes.

euler=# explain (analyze true, buffers true) select * from pgbench_branches
inner join pgbench_history using (bid) where bid > 100;
                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1.02..18.62 rows=3 width=476) (actual time=0.136..0.136 rows=0 loops=1)
   Hash Cond: (pgbench_history.bid = pgbench_branches.bid)
   Blocks Hit: 2  Read: 0  Temp Read: 0
   ->  Seq Scan on pgbench_history  (cost=0.00..15.50 rows=550 width=116) (actual time=0.034..0.034 rows=1 loops=1)
         Blocks Hit: 1  Read: 0  Temp Read: 0
   ->  Hash  (cost=1.01..1.01 rows=1 width=364) (actual time=0.022..0.022 rows=0 loops=1)
         Blocks Hit: 0  Read: 0  Temp Read: 0
         ->  Seq Scan on pgbench_branches  (cost=0.00..1.01 rows=1 width=364) (actual time=0.019..0.019 rows=0 loops=1)
               Filter: (bid > 100)
               Blocks Hit: 1  Read: 0  Temp Read: 0
 Total runtime: 0.531 ms
(11 rows)

(vi) comment: the 'at start' is superfluous. Please, remove it.

+	long		blks_hit;		/* # of buffer hits at start */
+	long		blks_read;		/* # of disk blocks read at start */

(vii) all nodes: I'm thinking if we need this information in all nodes (even
in those nodes that don't read or write). It would be less verbose but it
could complicate some parser's life. Of course, if we suppress this
information, we need to include it on the top node even if we don't read or
write in it.

I didn't have time to adjust your patch per comments above but if you can
address all of those issues I certainly could check your patch again.


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] pgbench: new feature allowing to launch shell commands

2009-12-06 Thread Michael Paquier
The threading bug appears when a duration is set for pgbench tests.
If a number of xacts is set instead of a duration, the error doesn't
happen.

If I understood the problem well, when the alarm signal comes, all the
threads have to disconnect, even the ones looking for a setshell parameter at
that moment, creating the thread error:
setshell: error getting parameter
Client 0 aborted in state 1. Execution of meta-command failed.

I am trying to find an elegant way to solve this, but I can't figure out yet
how to deal with this error, as it looks tricky.


[HACKERS] bug: json format and auto_explain

2009-12-06 Thread Euler Taveira de Oliveira
Hi,

While testing the EXPLAIN BUFFERS patch I found a bug. I'm too tired to
provide a fix right now, but if someone can take it I will appreciate it. It seems
ExplainJSONLineEnding() doesn't expect the es->grouping_stack list to be a null
pointer.

eu...@harman $ ./install/bin/psql
psql (8.5devel)
Type help for help.

euler=# load 'auto_explain';
LOAD
euler=# set auto_explain.log_min_duration to 0;
SET
euler=# set auto_explain.log_format to 'json';
SET
euler=# explain (format json) select * from pgbench_branches inner join
pgbench_history using (bid) where bid > 100;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
euler=# \q
eu...@harman /a/pgsql/dev $ gdb ./install/bin/postgres ./data/core
GNU gdb 6.8
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as i686-pc-linux-gnu...

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /usr/lib/libxml2.so.2...done.
Loaded symbols for /usr/lib/libxml2.so.2
Reading symbols from /usr/lib/libssl.so.0.9.8...done.
Loaded symbols for /usr/lib/libssl.so.0.9.8
Reading symbols from /usr/lib/libcrypto.so.0.9.8...done.
Loaded symbols for /usr/lib/libcrypto.so.0.9.8
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /usr/lib/libldap-2.4.so.2...done.
Loaded symbols for /usr/lib/libldap-2.4.so.2
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/libz.so.1...done.
Loaded symbols for /lib/libz.so.1
Reading symbols from /usr/lib/libgssapi_krb5.so.2...done.
Loaded symbols for /usr/lib/libgssapi_krb5.so.2
Reading symbols from /usr/lib/libkrb5.so.3...done.
Loaded symbols for /usr/lib/libkrb5.so.3
Reading symbols from /lib/libcom_err.so.2...done.
Loaded symbols for /lib/libcom_err.so.2
Reading symbols from /usr/lib/libk5crypto.so.3...done.
Loaded symbols for /usr/lib/libk5crypto.so.3
Reading symbols from /lib/libresolv.so.2...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /usr/lib/liblber-2.4.so.2...done.
Loaded symbols for /usr/lib/liblber-2.4.so.2
Reading symbols from /usr/lib/libsasl2.so.2...done.
Loaded symbols for /usr/lib/libsasl2.so.2
Reading symbols from /usr/lib/libgnutls.so.26...done.
Loaded symbols for /usr/lib/libgnutls.so.26
Reading symbols from /usr/lib/libkrb5support.so.0...done.
Loaded symbols for /usr/lib/libkrb5support.so.0
Reading symbols from /lib/libpthread.so.0...done.
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libcrypt.so.1...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /usr/lib/libtasn1.so.3...done.
Loaded symbols for /usr/lib/libtasn1.so.3
Reading symbols from /usr/lib/libgcrypt.so.11...done.
Loaded symbols for /usr/lib/libgcrypt.so.11
Reading symbols from /usr/lib/libgpg-error.so.0...done.
Loaded symbols for /usr/lib/libgpg-error.so.0
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Reading symbols from /a/pgsql/dev/install/lib/auto_explain.so...done.
Loaded symbols for /a/pgsql/dev/install/lib/auto_explain.so
Core was generated by `postgres: euler euler [local] EXPLAIN '.
Program terminated with signal 11, Segmentation fault.
[New process 2751]
#0  0x081a2158 in ExplainJSONLineEnding (es=0xbfd117dc) at
/a/pgsql/dev/postgresql/src/backend/commands/explain.c:1885

warning: Source file is more recent than executable.
1885			if (linitial_int(es->grouping_stack) != 0)
(gdb) print es->grouping_stack
$1 = (List *) 0x0
(gdb) print es->str->data
$2 = 0x865b09c ""
(gdb) bt
#0  0x081a2158 in ExplainJSONLineEnding (es=0xbfd117dc) at
/a/pgsql/dev/postgresql/src/backend/commands/explain.c:1885
#1  0x081a1ada in ExplainOpenGroup (objtype=0x84fcb64 "Plan",
labelname=0x84fcb64 "Plan", labeled=1 '\001', es=0xbfd117dc) at
/a/pgsql/dev/postgresql/src/backend/commands/explain.c:1684
#2  0x0819faac in ExplainNode (plan=0x8652c98, planstate=0x8654868,
outer_plan=0x0, relationship=0x0, plan_name=0x0, es=0xbfd117dc) at
/a/pgsql/dev/postgresql/src/backend/commands/explain.c:739
#3  0x0819f393 in ExplainPrintPlan (es=0xbfd117dc, queryDesc=0x8653724) at
/a/pgsql/dev/postgresql/src/backend/commands/explain.c:474
#4  0xb7ef0010 in explain_ExecutorEnd (queryDesc=0x8653724) at
/a/pgsql/dev/postgresql/contrib/auto_explain/auto_explain.c:238
#5  0x081f1094 in ExecutorEnd (queryDesc=0x8653724) at
/a/pgsql/dev/postgresql/src/backend/executor/execMain.c:314
#6 

Re: [HACKERS] [patch] pg_ctl init extension

2009-12-06 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 The biggest problem is that all of the places that used to say 
 <command>initdb</command> when talking about creating a cluster now just say 
 "database cluster initialization"--with no link to a section covering 
 that topic.  That's not a good forward step.  The part I'm more 
 favorable toward that I expect other people to really cringe at is that 
 the examples showing how to manually run initdb have all been switched 
 to use pg_ctl initdb too.

That's easily dealt with ;-) ... just rip out all those parts of the
diff.  I think all that needs to be done in the docs is to list the initdb
option in the pg_ctl reference page.

If you think it's code-complete, let's just commit the code and not
try to have a re-education project going on with the docs.

regards, tom lane



Re: [HACKERS] YAML Was: CommitFest status/management

2009-12-06 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Given the above, I don't understand why writing this patch was deemed 
 worthwhile in the first place,

It was written and submitted by one person who did not bother to ask
first whether anyone else thought it was worthwhile.  So its presence
on the CF list should not be taken as evidence that there's consensus
for it.

regards, tom lane



Re: [HACKERS] New VACUUM FULL

2009-12-06 Thread Tom Lane
Itagaki Takahiro itagaki.takah...@oss.ntt.co.jp writes:
 I added regression tests for database-wide vacuums and check changes
 of relfilenodes in those commands.
 ...
 BTW, I needed to add an ORDER BY clause in the select_views test. I didn't modify
 tests in select_views at all, but database-wide vacuum moves tuples in the
 select_views test. I think the fix should be reasonable because an unsorted
 result set is always unstable in a regression test.

You should take those out again; if I am the committer I certainly will.
Such a test will guarantee complete instability of every other
regression test, and it's not worth it.

regards, tom lane



Re: [HACKERS] pgbench: new feature allowing to launch shell commands

2009-12-06 Thread Michael Paquier
Please find attached the latest version of the patch,
with the threading bug corrected and the documentation updated as well.

The origin of the bug was the alarm signal. Once the duration is over, all
the threads have to finish and timer_exceeded is set to true.
A check on this variable in the setshell process is enough to avoid showing
the thread error.

Regards,
diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 8a6437f..bb7f468 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -159,6 +159,7 @@ typedef struct
 } Variable;
 
 #define MAX_FILES		128		/* max number of SQL script files allowed */
+#define SHELL_COMMAND_SIZE	256		/* maximum size allowed for shell command */
 
 /*
  * structures used in custom query mode
@@ -590,6 +591,137 @@ getQueryParams(CState *st, const Command *command, const char **params)
 		params[i] = getVariable(st, command->argv[i + 1]);
 }
 
+static bool
+process_shellcommand(CState *st, char **argv, int argc, char *commandShell)
+{
+	int		j,
+			retvalglob = 0,
+			retval = 0,
+			arg_min = 0;
+	char	*var;
+
+	/* The minimum number of arguments required is different depending on \setshell or \shell */
+	if (pg_strcasecmp(argv[0], "shell") == 0)
+	{
+		arg_min = 2;
+	}
+	else if (pg_strcasecmp(argv[0], "setshell") == 0)
+	{
+		arg_min = 3;
+	}
+	else
+	{
+		fprintf(stderr, "%s: undefined command type\n", argv[0]);
+		return true;
+	}
+
+	/* construction of the command line with all the transmitted arguments */
+	retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,"%s",argv[arg_min - 1]);
+	if (retval < 0
+		|| retval > SHELL_COMMAND_SIZE-1)
+	{
+		fprintf(stderr, "%s: error loading shell parameter: too many characters\n",argv[0]);
+		return true;
+	}
+
+	for (j = arg_min; j < argc; j++)
+	{
+		char *commandLoc = strdup(commandShell);
+
+		/* Look at first if the argument is :-based or not */
+		if (*argv[j] == ':')
+		{
+			/* Case of a ::-based argument */
+			if (argv[j][1] == ':')
+			{
+				retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,"%s %s", commandLoc, argv[j] + 1);
+			}
+			else
+			{
+				/* Case of a :-only-based argument */
+				if ((var = getVariable(st, argv[j] + 1)) == NULL)
+				{
+					fprintf(stderr, "%s: undefined variable %s\n", argv[0], argv[j]);
+					return true;
+				}
+				retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,"%s %s", commandLoc, var);
+			}
+		}
+		else
+		{
+			/* Case when the argument is neither : nor :: based */
+			retval = snprintf(commandShell,SHELL_COMMAND_SIZE-1,"%s %s", commandLoc, argv[j]);
+		}
+		retvalglob += retval;
+		if (retval < 0
+			|| retvalglob > SHELL_COMMAND_SIZE-1)
+		{
+			fprintf(stderr, "%s: error loading shell parameter: too many characters\n", argv[0]);
+			free(commandLoc);
+			return true;
+		}
+		free(commandLoc);
+	}
+	return false;
+}
+
+#define SHELL_ERROR	-1
+#define SHELL_OK	0
+#define SHELL_ALARM	1
+
+static int
+process_shellvariable(CState *st, char **argv ,char *commandShell)
+{
+	int		retval;
+	char	res[64];
+	FILE	*respipe = NULL;
+
+	/*
+	 * Data treatment
+	 * prototype: /setshell aid skewerand +additional arguments
+	 */
+
+	respipe = popen(commandShell,"r");
+	if (respipe == NULL)
+	{
+		fprintf(stderr, "%s: error launching shell script\n", argv[0]);
+		return SHELL_ERROR;
+	}
+	if (fgets(res, sizeof(res), respipe) == NULL)
+	{
+		if (timer_exceeded)
+			return SHELL_ALARM;
+		fprintf(stderr, "%s: error getting parameter\n", argv[0]);
+		return SHELL_ERROR;
+	}
+
+	retval = pclose(respipe);
+	if (retval == -1)
+	{
+		fprintf(stderr, "%s: error closing shell script\n", argv[0]);
+		return SHELL_ERROR;
+	}
+	/* Transform the parameter into an integer */
+	retval = atoi(res);
+	if (retval == 0)
+	{
+		fprintf(stderr, "%s: error input integer\n", argv[0]);
+		return SHELL_ERROR;
+	}
+	/* ready to put the variable */
+	snprintf(res, sizeof(res), "%d", retval);
+
+	if (putVariable(st, argv[1], res) == false)
+	{
+		fprintf(stderr, "%s: out of memory\n", argv[0]);
+		return SHELL_ERROR;
+	}
+#ifdef DEBUG
+	printf("shell parameter name: %s, value: %s\n", argv[1], res);
+#endif
+	return SHELL_OK;
+}
+
 #define MAX_PREPARE_NAME		32
 static void
 preparedStatementName(char *buffer, int file, int state)
@@ -992,7 +1124,53 @@ top:
 
 			st->listen = 1;
 		}
+		else if (pg_strcasecmp(argv[0], "setshell") == 0)
+		{
+			int		retval;
+			bool	status = false;
+			char	commandShell[SHELL_COMMAND_SIZE];
+
+			/* construction of the command line with all the transmitted arguments */
+			status = process_shellcommand(st, argv, argc, commandShell);
+			if (status == true)
+			{
+				st->ecnt++;
+				return true;
+			}
+			/* get as script output the variable and put it */
+			retval = process_shellvariable(st, argv, commandShell);
+			if (retval < 0)
+			{
+				st->ecnt++;
+				return true;
+			}
+			else if (retval > 0)
+				return clientDone(st, true); /* exit correctly, time is out */
+
+			st->listen = 1;
+		}
+		else if (pg_strcasecmp(argv[0], "shell") == 0)
+		{
+			int		retval;
+			bool	status = false;
Re: [HACKERS] EXPLAIN BUFFERS

2009-12-06 Thread Itagaki Takahiro

Euler Taveira de Oliveira eu...@timbira.com wrote:

 I'm looking at your patch now... It is almost there but has some issues.
 
 (i) documentation: you have more than three counters and they could be
 mentioned in docs too.

I'll add documentation for all variables.

 (ii) format: why does text output format have less counters than the other 
 ones?

That's because lines will be too long for text format. I think the
three values in it are the most important and useful ones.

 (iii) string: i don't like the string in text format
 (1) it is not concise (only the first item has the word 'Blocks'),
 (2) what block is it about? Shared, Local, or Temp?

The format was suggested here and no objections then.
http://archives.postgresql.org/pgsql-hackers/2009-10/msg00268.php
I think the current output is enough and useful in normal use.
We can use XML or JSON format for more details.

I think
Blocks Hit: 1641  Read: 0  Temp Read: 1443
means
Blocks (Hit: 1641  Read: 0  Temp Read: 1443)
i.e., blocks hit, blocks read, and temp blocks read.

 (3) why don't you include the other ones?, and
 (4) why don't you include the written counters?
 (iv) text format: i don't have a good suggestion but here are some ideas. The
 former is too long and the latter is too verbose.

Their reasons are the same as (ii).

 (v) accumulative: i don't remember if we discussed it but is there a reason
 the number of buffers isn't accumulative? We already have cost and time that
 are both accumulative. I saw BufferUsageAccumDiff() function but didn't try to
 figure out why it isn't accumulating or passing the counters to parent nodes.

It's reasonable. I'll change so if no objections.

 (vi) comment: the 'at start' is superfluous. Please, remove it.

Ooops, I'll remove them.

 (vii) all nodes: I'm thinking if we need this information in all nodes (even
 in those nodes that don't read or write). It would be less verbose but it
 could complicate some parser's life. Of course, if we suppress this
 information, we need to include it on the top node even if we don't read or
 write in it.

I cannot understand what you mean -- should I suppress the lines when they
have all-zero values?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] bug: json format and auto_explain

2009-12-06 Thread Tom Lane
Euler Taveira de Oliveira eu...@timbira.com writes:
 While testing the EXPLAIN BUFFERS patch I found a bug. I'm too tired to
 provide a fix right now but if someone can take it I will appreciate. It seems
 ExplainJSONLineEnding() doesn't expect es->grouping_stack list as a null 
 pointer.

Looks like auto_explain is under the illusion that it need not call
ExplainBeginOutput/ExplainEndOutput.

regards, tom lane



Re: [HACKERS] YAML Was: CommitFest status/management

2009-12-06 Thread Itagaki Takahiro

Tom Lane t...@sss.pgh.pa.us wrote:

 It was written and submitted by one person who did not bother to ask
 first whether anyone else thought it was worthwhile.  So its presence
 on the CF list should not be taken as evidence that there's consensus
 for it.

Should we have a Needs Discussion phase before Needs Review?
Reviewers, including me, think patches with needs-review status are
worthwhile. In contrast, contributors often register their patches
in the CF without discussion just because of no response; they cannot
tell whether no response is silent approval or not.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] bug: json format and auto_explain

2009-12-06 Thread Itagaki Takahiro

Tom Lane t...@sss.pgh.pa.us wrote:

 Looks like auto_explain is under the illusion that it need not call
 ExplainBeginOutput/ExplainEndOutput.

They were added by the XML formatter; I suppose it worked on 8.4.

Explain{Begin/End}Output are static functions, so we cannot call them
from an external contrib module. Instead, I suggest calling them
automatically from ExplainPrintPlan. The original code in ExplainPrintPlan
was moved into ExplainOneResult; that name might be debatable.

Patch attached.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



auto_explain_fix_20091207.patch
Description: Binary data



Re: [HACKERS] named generic constraints [feature request]

2009-12-06 Thread Pavel Stehule
2009/12/7 Caleb Cushing xenoterrac...@gmail.com:
 no -

 -- is line comment in SQL - it same like // in C++

 sorry didn't see this was updated. I know -- is a comment

 I mean in sql  means NOT your function name is emptystr which
 implies it looks for an emptystr and returns true if the string is
 found to be empty (at least in my mind). so if you want to create a
 contrstraint of not empty you'd write NOT emptystr(col) however the
 way you wrote it would only return true if the string was NOT  empty
 which is a double negative meaning that it is empty thereby rejecting
 all but empty strings.

 my final function that I wrote ended up looking like this (note: I
 didn't specify to include whitespace in my original explanation.




 CREATE OR REPLACE FUNCTION empty(TEXT)
 RETURNS bool AS $$
  SELECT $1 ~ '^[[:space:]]*$';
     $$ LANGUAGE sql
        IMMUTABLE;
 COMMENT ON FUNCTION empty(TEXT)
        IS 'Find empty strings or strings containing only whitespace';

 which I'm using like this (note: this is not the full table)

  CREATE TABLE users (
        user_name       TEXT    NOT NULL
                                UNIQUE
                                CHECK ( NOT empty( user_name ))
 );

 I still wish I could write,something like

 CREATE CONSTRAINT empty CHECK ( VALUE NOT ~ '^[[:space:]]*$';)

 CREATE TABLE users (
        user_name       TEXT    NOT NULL
                                UNIQUE
                                CHECK ( NOT empty )
 );
  CREATE TABLE roles (
    role_name       TEXT    NOT NULL
                                UNIQUE
                                CHECK ( NOT empty)


I understand. But I don't see any significant benefit in this
non-standard feature. You save a few chars. I think it is useless.

Regards
Pavel Stehule


 );
 --
 Caleb Cushing

 http://xenoterracide.blogspot.com




Re: [HACKERS] New VACUUM FULL

2009-12-06 Thread Itagaki Takahiro

Tom Lane t...@sss.pgh.pa.us wrote:

 You should take those out again; if I am the committer I certainly will.
 Such a test will guarantee complete instability of every other
 regression test, and it's not worth it.

I read the original comment as saying to add regression tests for
database-wide vacuums. But I'll reduce the scope of the vacuums if they
are not acceptable.

The new patch contains only table-based vacuum for local tables and some
system tables, to test that non-INPLACE vacuum is not used for system tables:
VACUUM FULL pg_am;
VACUUM FULL pg_class;
VACUUM FULL pg_database;

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



vacuum-full_20091207a.patch
Description: Binary data
