[HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Jonathan Gray
Following an upgrade to 8.2.2, many of my plpgsql functions started to cause
server process crashes.

I make use of a custom data-type uniqueidentifier, available here:

http://gborg.postgresql.org/project/uniqueidentifier
ftp://gborg.postgresql.org/pub/uniqueidentifier/stable/uniqueidentifier-0.2.
tar.gz

This type has given me the same kind of process crash once before, but that
was related to NULL values in a foreign-key referenced field (unresolved to
this day, but behavior is allowed for all builtin types).

I see the crash in plpgsql functions that return an aggregate (composite) type
which contains a uniqueidentifier (including triggers which have
uniqueidentifiers in the NEW record).  Here is a test case I was able to create:

 TEST SETUP 

CREATE SCHEMA test;

CREATE TYPE test.guid_plus AS (
  id  public.uniqueidentifier,
  num integer
);

CREATE TABLE test.guid_table (
  id  public.uniqueidentifier,
  num integer
);

INSERT INTO test.guid_table (id,num) VALUES (newid(),1);
INSERT INTO test.guid_table (id,num) VALUES (newid(),2);
INSERT INTO test.guid_table (id,num) VALUES (newid(),3);
INSERT INTO test.guid_table (id,num) VALUES (newid(),4);

CREATE OR REPLACE FUNCTION test.break_guid (idlower integer, idupper
integer) RETURNS SETOF test.guid_plus AS
$$
DECLARE
    x RECORD;
    gplus_ret test.guid_plus;
BEGIN
    FOR x IN SELECT id,num FROM test.guid_table WHERE id > idlower AND id <
idupper LOOP
    gplus_ret :=
(x.id::uniqueidentifier,x.num::integer)::test.guid_plus;
    -- I usually do the following: (but tried above with same result)
    --   gplus_ret := (x.id,x.num);
    RETURN NEXT gplus_ret;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

 CAUSE THE CRASH 

SELECT * FROM test.break_guid(0,5);


server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.


 FROM THE LOGS, SET AT DEBUG5 

2007-02-05 22:31:07 PST [31363]: [45-1] DEBUG: StartTransactionCommand
2007-02-05 22:31:07 PST [31363]: [46-1] DEBUG: StartTransaction
2007-02-05 22:31:07 PST [31363]: [47-1] DEBUG: name: unnamed; blockState:
DEFAULT; state: INPROGR, xid/subid/cid: 7198/1/0, nestlvl: 1, children: 
2007-02-05 22:31:07 PST [31278]: [775-1] DEBUG: reaping dead processes
2007-02-05 22:31:07 PST [31278]: [776-1] DEBUG: server process (PID 31363)
was terminated by signal 11
2007-02-05 22:31:07 PST [31278]: [777-1] LOG: server process (PID 31363) was
terminated by signal 11
2007-02-05 22:31:07 PST [31278]: [778-1] LOG: terminating any other active
server processes
2007-02-05 22:31:07 PST [31278]: [779-1] DEBUG: sending SIGQUIT to process
31361
2007-02-05 22:31:07 PST [31278]: [780-1] DEBUG: reaping dead processes
2007-02-05 22:31:07 PST [31278]: [781-1] LOG: all server processes
terminated; reinitializing
2007-02-05 22:31:07 PST [31278]: [782-1] DEBUG: shmem_exit(0)
2007-02-05 22:31:07 PST [31278]: [783-1] DEBUG: invoking
IpcMemoryCreate(size=537141248)
2007-02-05 22:31:07 PST [31364]: [1-1] LOG: database system was interrupted
at 2007-02-05 22:30:35 PST
2007-02-05 22:31:07 PST [31364]: [2-1] LOG: checkpoint record is at
0/286D97FC
2007-02-05 22:31:07 PST [31364]: [3-1] LOG: redo record is at 0/286D97FC;
undo record is at 0/0; shutdown TRUE
2007-02-05 22:31:07 PST [31364]: [4-1] LOG: next transaction ID: 0/7192;
next OID: 155654
2007-02-05 22:31:07 PST [31364]: [5-1] LOG: next MultiXactId: 1; next
MultiXactOffset: 0
2007-02-05 22:31:07 PST [31364]: [6-1] LOG: database system was not properly
shut down; automatic recovery in progress
2007-02-05 22:31:07 PST [31365]: [1-1] FATAL: the database system is
starting up
2007-02-05 22:31:07 PST [31365]: [2-1] DEBUG: proc_exit(1)
2007-02-05 22:31:07 PST [31365]: [3-1] DEBUG: shmem_exit(1)
2007-02-05 22:31:07 PST [31365]: [4-1] DEBUG: exit(1)
2007-02-05 22:31:07 PST [31278]: [784-1] DEBUG: forked new backend,
pid=31365 socket=8
2007-02-05 22:31:07 PST [31278]: [785-1] DEBUG: reaping dead processes
2007-02-05 22:31:07 PST [31278]: [786-1] DEBUG: server process (PID 31365)
exited with exit code 1
2007-02-05 22:31:07 PST [31364]: [7-1] LOG: record with zero length at
0/286D9844
2007-02-05 22:31:07 PST [31364]: [8-1] LOG: redo is not required
2007-02-05 22:31:07 PST [31364]: [9-1] LOG: database system is ready
2007-02-05 22:31:07 PST [31364]: [10-1] DEBUG: transaction ID wrap limit is
2147484171, limited by database postgres
2007-02-05 22:31:07 PST [31364]: [11-1] DEBUG: proc_exit(0)
2007-02-05 22:31:07 PST [31364]: [12-1] DEBUG: shmem_exit(0)
2007-02-05 22:31:07 PST [31364]: [13-1] DEBUG: exit(0)
2007-02-05 22:31:07 PST [31278]: [787-1] DEBUG: reaping dead processes
2007-02-05 22:31:07 PST [31368]: [1-1] DEBUG: proc_exit(0)
2007-02-05 22:31:07 PST [31368]: [2-1] DEBUG: shmem_exit(0)
2007-02-05 22:31:07 PST [31368]: [3-1] DEBUG: exit(0)
2007-02-05 22:31:07 PST [31278]: [788-1] DEBUG: reaping dead processes

The data in 

Re: [HACKERS] libpq docs about PQfreemem

2007-02-06 Thread Magnus Hagander
On Mon, Feb 05, 2007 at 05:21:34PM -0500, Bruce Momjian wrote:
 Magnus Hagander wrote:
  have this about PQfreemem():
  
  Frees memory allocated by <application>libpq</>, particularly
 <function>PQescapeByteaConn</function>,
 <function>PQescapeBytea</function>,
 <function>PQunescapeBytea</function>,
 and <function>PQnotifies</function>.
 It is needed by Microsoft Windows, which cannot free memory across
 DLLs, unless multithreaded DLLs (<option>/MD</option> in VC6) are used.
 On other platforms, this function is the same as the standard library
  function <function>free()</>.
</para>
  
  
  
  That's also a very old comment, dating back to when we could build libpq
  with VC6 only and nothing else. Now we can build with MinGW, Borland,
  VC2005 and I think also VC2003, which would mean adding a note that this
  also depends on *which compiler* and *which compiler version*.
  The /MD mention is just one of several different options to select the
  runtime libraries, so it seems really misplaced.
  
  Now, there are two options for this. Either we fix it (I can put
  together a patch), or we remove it altogether. To me, it seems to be
  just an implementation detail and some kind of explanation why we're
  doing it - which would live better in a source code comment than in the
  docs.
  This includes the part about how it's just the same as free() on other
  platforms. That's just an implementation detail, and I assume we don't
  want people to rely on that - in case we ever want to change it in the
  future for some reason. (the doc for the other functions say you have to
  use PQfreemem without mentioning any exceptions)
  
  Thoughts? Rip out or update?
 
 Are you saying that almost all Win32 binaries and libraries now can free
 across DLLs?

No, I'm saying that the problem is not just between multithreaded and
not; it depends on a lot of other factors as well. It's actually the
same issue as with PQtrace, which has a better explanation.

For example, you can't free() a pointer allocated with the MSVC runtime
library if you're using the Borland runtime library's version of free().

//Magnus

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Proposal: Commit timestamp

2007-02-06 Thread Zeugswetter Andreas ADI SD

 What I'm more concerned about, with Jan's proposal, is the assumption 
 that you always want to resolve conflicts by time (except for 
 balances, 
 for which we don't have much information, yet). I'd rather 

Um, I think the proposal was only for beneficial backend functionality
for replication in general and time-based conflict resolution. And
time-based is surely one of the important conflict resolution methods
for async MM replication.

Sure there are others, like rule-based or priority-based, but I think
you don't need additional backend functionality for those.

Andreas

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Referential Integrity and SHARE locks

2007-02-06 Thread Simon Riggs
On Mon, 2007-02-05 at 23:25 +, Gregory Stark wrote:
 Gregory Stark [EMAIL PROTECTED] writes:
 
  Bruce Momjian [EMAIL PROTECTED] writes:
 
  OK, please propose some wording so at least we can get agreement on
  that.
 
  How about something open-ended like "arrange for updates that do not update
  columns referenced by foreign keys from other tables to avoid being blocked
  by locks from concurrent RI checks"
 
 Hum. Reading back in the thread it seems what I wrote is basically equivalent
 to the wording Simon originally proposed.

I like your wording. It's clearer and includes Stephan's clarification.
Some minor mods...

TODO

avoid blocking of updates because of concurrent RI checks when those
updates do not alter columns referenced by foreign keys from other
tables
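
To illustrate the blocking the TODO describes, here is a sketch with
hypothetical parent/child tables (8.2-era behaviour, where any UPDATE
needs an exclusive row lock that conflicts with the FOR SHARE lock the
RI trigger takes):

-- Session 1: the RI check behind this INSERT locks the referenced
-- parent row with SELECT ... FOR SHARE until the transaction ends.
BEGIN;
INSERT INTO child (parent_id, payload) VALUES (1, 'x');

-- Session 2: touches only a column no foreign key references, yet it
-- still blocks until session 1 commits or aborts.
UPDATE parent SET last_seen = now() WHERE id = 1;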


-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread jon5pg
Reading the post again I caught a typo in my query.  I had been playing
with variations of this test to try and get it working, but I have had no
success with any combination as long as it returns this kind of type.

I was comparing integers to uniqueidentifiers, which actually works, but is
unrelated to the issue.

Should be:

 CREATE OR REPLACE FUNCTION test.break_guid (numlower integer, numupper
 integer) RETURNS SETOF test.guid_plus AS
 $$
 DECLARE
 x RECORD;
 gplus_ret test.guid_plus;
 BEGIN
 FOR x IN SELECT id,num FROM test.guid_table WHERE num > numlower AND
 num < numupper LOOP
 gplus_ret :=
 (x.id::uniqueidentifier,x.num::integer)::test.guid_plus;
 -- I usually do the following: (but tried above with same result)
 --   gplus_ret := (x.id,x.num);
 RETURN NEXT gplus_ret;
 END LOOP;
 RETURN;
 END;
 $$ LANGUAGE plpgsql;


Jonathan Gray
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] libpq docs about PQfreemem

2007-02-06 Thread Zeugswetter Andreas ADI SD

  future for some reason. (the doc for the other functions say you
have to
  use PQfreemem without mentioning any exceptions)
  
  Thoughts? Rip out or update?
 
 Are you saying that almost all Win32 binaries and libraries now can
free
 across DLLs?

You can under very narrow conditions. You need to force code generation 
for Multithreaded DLL run-time libraries (e.g. in VC6 msvcrt.dll) 
for all exe's and dll's.
This is bad for debugging, memory checkers and probably impossible
when using different compilers.
So you really need PQfreemem.

Andreas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Proposed adjustments in MaxTupleSize and toast thresholds

2007-02-06 Thread Simon Riggs
On Mon, 2007-02-05 at 19:18 -0500, Jan Wieck wrote:
 On 2/5/2007 11:52 AM, Tom Lane wrote:
  Simon Riggs [EMAIL PROTECTED] writes:
  Sounds like a good time to suggest making these values configurable,
  within certain reasonable bounds to avoid bad behaviour.
  
  Actually, given what we've just learned --- namely that choosing these
  values at random is a bad idea --- I'd want to see a whole lot of
  positive evidence before adding such a configuration knob.
 
 Some of the evidence is TOAST itself. Every time you do not SET a column 
 that has been toasted into external storage during an UPDATE, you win 
 because the columns data isn't read during the scan for the row to 
 update, it isn't read during heap_update(), it isn't actually updated at 
 all (the toast reference is copied as is and the external value reused), 
 and not a single byte of the external data is bloating WAL. If someone 
 knows that 99% of their updates will not hit certain text columns in 
 their tables, actually forcing them to be compressed no matter what and 
 to be stored external if they exceed 100 bytes will be a win.

Yes, that's the main use case.

 Of course, this is a bit different from Simon's approach. What I 
 describe here is a per pg_attribute configuration to enforce a certain 
 new toaster behavior. Since we already have something that gives the 
 toaster a per column cluestick (like not to bother trying to compress), 
 it might be much easier to implement than Simon's proposal. It would 
 require that the toaster goes over the initial heap tuple for those 
 specially configured columns even if the tuple is below the toast 
 threshold, which suggests that a pg_class.relhasspecialtoastneeds could 
 be useful. But I think as for fine tuning capabilities, a column 
 insensitive maximum tuple size is insufficient anyway.

Well, sounds like we both want the same thing. The only discussion seems
to be about user interface.

Setting it per column is much better for very fine tuning, but setting
them in isolation doesn't help decide what to do when you have lots of
medium length strings where the sum exceeds the toast target.

IMHO it would be better to have a column-level storage priority (default
0) and then a table-level settable toast target. So we start applying
the storage handling mechanisms on the highest priority columns and keep
going in descending order until we are under the limit for the table.

ALTER TABLE foo
ALTER COLUMN foocol 
SET STORAGE EXTERNAL PRIORITY 5
WITH 
(toast_target = 400);   /* must be MAXALIGNed value */

Equal priorities are allowed, in which case lowest attribute id wins,
i.e. current behaviour remains the default.
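
For comparison, the per-column knob that exists today (the cluestick Jan
mentions) is SET STORAGE; a quick sketch, reusing the illustrative names
above:

ALTER TABLE foo ALTER COLUMN foocol SET STORAGE EXTERNAL;  -- allows out-of-line storage, no compression
ALTER TABLE foo ALTER COLUMN foocol SET STORAGE EXTENDED;  -- compression first, then out-of-line (default for text)

The catch, and the reason for this thread, is that these settings only
come into play once the whole tuple exceeds the toast threshold.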

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Marko Kreen

On 2/6/07, Jonathan Gray [EMAIL PROTECTED] wrote:

Following an upgrade to 8.2.2, many of my plpgsql functions started to cause
server process crashes.

I make use of a custom data-type uniqueidentifier, available here:

http://gborg.postgresql.org/project/uniqueidentifier
ftp://gborg.postgresql.org/pub/uniqueidentifier/stable/uniqueidentifier-0.2.
tar.gz

This type has given me the same kind of process crash once before, but that
was related to NULL values in a foreign-key referenced field (unresolved to
this day, but behavior is allowed for all builtin types).


Indeed, the code can crash on NULL values as the NULL checks
are missing or wrong in the functions.  Actually all the various
functions except newid() should be declared STRICT IMMUTABLE
thus immediately avoiding problems with NULLs.

Could you reproduce the crash with this change?  I'll try
to play with this myself too.
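
For illustration, the kind of change being suggested would look roughly
like this; the exact function names in the uniqueidentifier module are
assumptions here, not checked against its source:

ALTER FUNCTION uniqueidentifier_in(cstring) STRICT IMMUTABLE;
ALTER FUNCTION uniqueidentifier_out(uniqueidentifier) STRICT IMMUTABLE;
ALTER FUNCTION uniqueidentifier_eq(uniqueidentifier, uniqueidentifier) STRICT IMMUTABLE;
-- newid() is the exception: it must stay VOLATILE (and need not be
-- strict), since it returns a fresh value on every call.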

--
marko

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Referential Integrity and SHARE locks

2007-02-06 Thread Richard Huxton

Simon Riggs wrote:

On Sat, 2007-02-03 at 09:43 -0800, Stephan Szabo wrote:

On Sat, 3 Feb 2007, Simon Riggs wrote:


On Fri, 2007-02-02 at 16:50 -0500, Tom Lane wrote:

No, I don't.  I think knowledge of which columns are in a PK is quite a
few levels away from the semantics of row locking.  To point out just
one problem, what happens when you add or drop a PK?  Or drop and
replace with a different column set?  Yes, I know dropping one requires
exclusive lock on the table, but the transaction doing it could hold row
locks within the table, and now it's very unclear what they mean.

There are issues, yes. Dropping PKs is a very irregular occurrence, nor
is it likely to be part of a complex transaction. It wouldn't bother me
to say that if a transaction already holds a RowExclusiveLock or a
RowShareLock it cannot upgrade to an AccessExclusiveLock.

The lock check seems like a strange constraint, given that it's not
necessarily going to be anything that conflicts with the row locks. I'm
not sure there'd be a better idea given this sort of scheme, but it still
seems strange.


The TODO I was requesting you consider was this:

Develop non-conflicting locking scheme to allow RI checks to co-exist
peacefully with non-PK UPDATEs on the referenced table.

That is, IMHO, a general statement of an important unresolved issue with
our Referential Integrity implementation. That is in no way intended as
any form of negative commentary on the excellent detailed work that has
got us so far already.

Well, if we really want to solve that completely then we really need
column locking, or at least locking at the level of arbitrary (possibly
overlapping) unique constraints, not just the PK because foreign keys
don't necessarily reference the primary key.  But the PK case is certainly
the most common and it'd certainly be nice to cover that case.


IMHO generic column level locking would hardly ever be used. Locking for
RI seems to be 99% of the use case, which means we'd be OK if we found a
way of only locking an arbitrary number of unique col groups. By
definition, each of these column groups is covered by a unique index.


Not saying this'll gain us anything but...

It has occurred to me that the lock could be reduced in another way. If 
we had an immutable constraint that could be applied to pkey-columns 
then we wouldn't have to worry about updates at all. If a pkey value was 
there before an update, it would be there after too. The only thing 
you'd need to prevent would be deletes.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] Proposed adjustments in MaxTupleSize and toast thresholds

2007-02-06 Thread Simon Riggs
On Tue, 2007-02-06 at 12:10 +0900, ITAGAKI Takahiro wrote:
 Simon Riggs [EMAIL PROTECTED] wrote:
 
   Actually, given what we've just learned --- namely that choosing these
   values at random is a bad idea --- I'd want to see a whole lot of
   positive evidence before adding such a configuration knob.
  
  3. assemble performance evidence
  
  Step 3 is always there for performance work, so even if you don't
  mention it, I'll assume everybody wants to see that as soon as possible
  before we progress.
 
 There was performance evidence of using TOAST in order to do partial updates.
 It added a flag to force toasting. The toast threshold suggested now is
 more flexible, but I think it is one piece of the evidence.
 
   Vertical Partitioning with TOAST
   http://archives.postgresql.org/pgsql-hackers/2005-12/msg00013.php
 

Apologies to Junji-san. I'd thought my idea was original, but it seems
we think along similar lines.

That is the kind of performance gain I see possible.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Reading the post again I caught a typo in my query.  I had been playing
with variations of this test to try and get it working, but I have had no
success with any combination as long as it returns this kind of type.

I was comparing integers to uniqueidentifiers, which actually works, but is
unrelated to the issue.


Does it still do it if you just return a single uniqueidentifier?

1. RETURN newid()
2. SELECT INTO r newid(); RETURN r;
3. SELECT id INTO r ...query... LIMIT 1; RETURN r;

If all of these fail, then presumably it's allocating memory incorrectly,
but it only shows up in a function's context.
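
Something along these lines, as throwaway functions against the test
schema from the original post (a sketch, not tested):

CREATE FUNCTION test.t1() RETURNS public.uniqueidentifier AS $$
BEGIN
    RETURN newid();
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION test.t2() RETURNS public.uniqueidentifier AS $$
DECLARE r public.uniqueidentifier;
BEGIN
    SELECT INTO r newid();
    RETURN r;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION test.t3() RETURNS public.uniqueidentifier AS $$
DECLARE r public.uniqueidentifier;
BEGIN
    SELECT id INTO r FROM test.guid_table LIMIT 1;
    RETURN r;
END;
$$ LANGUAGE plpgsql;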


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Dead code in _bt_split?

2007-02-06 Thread Heikki Linnakangas

Bruce Momjian wrote:

Heikki Linnakangas wrote:

Bruce Momjian wrote:

Heikki, did this code cleanup get included in your recent btree split
fix?

No.


OK, would you please send a patch to remove the unused code.  Thanks.


Ok, here you are.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/nbtree/nbtinsert.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/nbtree/nbtinsert.c,v
retrieving revision 1.148
diff -c -r1.148 nbtinsert.c
*** src/backend/access/nbtree/nbtinsert.c	27 Jan 2007 20:53:30 -	1.148
--- src/backend/access/nbtree/nbtinsert.c	6 Feb 2007 10:23:26 -
***
*** 855,876 
  	/* cope with possibility that newitem goes at the end */
  	if (i >= newitemoff)
  	{
! 		if (newitemonleft)
! 		{
! 			_bt_pgaddtup(rel, leftpage, newitemsz, newitem, leftoff,
! 		 "left sibling");
! 			itup_off = leftoff;
! 			itup_blkno = BufferGetBlockNumber(buf);
! 			leftoff = OffsetNumberNext(leftoff);
! 		}
! 		else
! 		{
! 			_bt_pgaddtup(rel, rightpage, newitemsz, newitem, rightoff,
! 		 "right sibling");
! 			itup_off = rightoff;
! 			itup_blkno = BufferGetBlockNumber(rbuf);
! 			rightoff = OffsetNumberNext(rightoff);
! 		}
  	}
  
  	/*
--- 855,865 
  	/* cope with possibility that newitem goes at the end */
  	if (i >= newitemoff)
  	{
! 		_bt_pgaddtup(rel, rightpage, newitemsz, newitem, rightoff,
! 	 "right sibling");
! 		itup_off = rightoff;
! 		itup_blkno = BufferGetBlockNumber(rbuf);
! 		rightoff = OffsetNumberNext(rightoff);
  	}
  
  	/*

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Marko Kreen

On 2/6/07, Marko Kreen [EMAIL PROTECTED] wrote:

Indeed, the code can crash on NULL values as the NULL checks
are missing or wrong in the functions.  Actually all the various
functions except newid() should be declared STRICT IMMUTABLE
thus immediately avoiding problems with NULLs.

Could you reproduce the crash with this change?  I'll try
to play with this myself too.


STRICT IMMUTABLE fixed the crash for me, so it seems it was a bug
in the module.  Although it did not happen in 8.2.1, so it seems
some change in 8.2.2 made it trigger.

Attached is a patch for uniqueidentifier-0.2 that removes the
buggy checks and makes functions STRICT IMMUTABLE.

--
marko


null.fix.diff
Description: Binary data

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Bitmap index thoughts

2007-02-06 Thread Heikki Linnakangas

Gavin Sherry wrote:

On Thu, 1 Feb 2007, Bruce Momjian wrote:


Where are we on this patch?  Does it have performance tests to show
where it is beneficial?  Is it ready to be reviewed?


Here's an updated patch:

http://www.alcove.com.au/~swm/bitmap-2007-02-02.patch

In this patch, I rewrote the index build system. It was fast before for
well clustered data but for poorly clustered data, it was very slow. Now,
it is pretty good for each distribution type.

I have various test cases but the one which showed bitmap in a poor light was
a table of 600M rows. The key to the table had a cardinality of 100,000.
When the table was loaded with keys clustered, the build time was 1000
seconds with bitmap (2200 with btree). With poorly clustered data (e.g.,
the index key was (1, 2, 3, ..., 6000, 1, 2, 3, ...)), the build time for
bitmap was 14000 seconds!

So, I rewrote this to compress data using HRL encoding (the same scheme we
use in the bitmap AM itself). Now, clustered data is just as fast and
unclustered data is 2000 seconds.

The select performance at a cardinality of 100,000 is similar to btree but
faster with lower cardinalities.

Jie also contributed a rewrite of the WAL code to this patch. Not only is
the code faster now, but it handles the notion of incomplete actions --
like btree and friends do. The executor code still needs some work from me
-- Jie and I have dirtied things up while experimenting -- but we would
really like some review of the code so that this can get squared away
well before the approach of 8.3 feature freeze.

One of the major deficiencies remaining is the lack of VACUUM support.
Heikki put his hand up for this and I'm holding him to it! ;-)


Thanks :). I'll take a look at it.

I'm a bit worried that vacuuming can get complicated if an index is in 
fact an index + a heap + a btree. To remove empty lov items and the 
entries in the auxiliary heap and b-tree, you need to:


1. Memorize empty lov-items
2. Scan the heap, and mark the heap tuples corresponding the empty 
lov-items as dead

3. Scan the b-tree, removing pointers to dead heap tuples
4. Remove dead heap tuples
5. Remove empty lov-items

Maybe it's possible to call the existing vacuuming code recursively, but 
it feels quite horrible.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Marko Kreen

On 2/6/07, Marko Kreen [EMAIL PROTECTED] wrote:

STRICT IMMUTABLE fixed the crash for me, so it seems it was a bug
in the module.  Although it did not happen in 8.2.1, so it seems
some change in 8.2.2 made it trigger.


The trigger was the following patch:

http://archives.postgresql.org/pgsql-committers/2007-02/msg00016.php

as function test.break_guid() assigns NULLs to gplus_ret.

--
marko

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] Type casting bug in 8.1.[67]?

2007-02-06 Thread Michael Paesold

Hello all,

after upgrading from 8.1.5 to 8.1.7, I got errors in the server log when 
updating decimal values using string constants. I tried the same using 
psql (pasted the query from below) and it fails, too. Downgrading to 
8.1.5 resolved the issue.


ERROR:  attribute 4 has wrong type
DETAIL:  Table has type numeric, but query expects numeric.
STATEMENT:  UPDATE reminder SET reminder_charges='0' WHERE reminder_id=29362

reminder_charges is defined as:
reminder_charges   | numeric(5,2)   | not null

I guess this is a bug.

Best Regards
Michael Paesold


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Type casting bug in 8.1.[67]?

2007-02-06 Thread Martijn van Oosterhout
On Tue, Feb 06, 2007 at 12:34:50PM +0100, Michael Paesold wrote:
 Hello all,
 
 after upgrading from 8.1.5 to 8.1.7, I got errors in the server log when 
 updating decimal values using string constants. I tried the same using 
 psql (pasted the query from below) and it fails, too. Downgrading to 
 8.1.5 resolved the issue.

Someone on -general just posted exactly the same problem for 8.1.7. I
wonder if the recent security update broke something else?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] Dead code in _bt_split?

2007-02-06 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Bruce Momjian wrote:
 OK, would you please send a patch to remove the unused code.  Thanks.

 Ok, here you are.

Applied with an added comment and Assert.

While testing it I realized that there seems to be a nearby bug in
_bt_findsplitloc: it fails to consider the possibility of moving all the
extant items to the left side.  It will always return a firstright =
maxoff.  ISTM this would mean that it could choose a bad split if the
incoming item goes at the end and both it and the last extant item are
large: in this case they should be split apart, but they won't be.

Heikki, do you feel like looking at that, or shall I?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Tom Lane
Marko Kreen [EMAIL PROTECTED] writes:
 Attached is a patch for uniqueidentifier-0.2 that removes the
 buggy checks and makes functions STRICT IMMUTABLE.

Not sure where you should send that, but it's not here.

regards, tom lane

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Modifying and solidifying contrib

2007-02-06 Thread Andrew Sullivan
On Mon, Jan 29, 2007 at 04:22:43PM -0500, Tom Lane wrote:
 
 (A) I'm not sure we would have heard about it, and (B) any one user is
 probably only using a subset of what has been proposed to be loaded by
 default, so the odds of collisions would go way up.

As a data point, some time ago (7.2 days) I used to do this as a
matter of completeness, and never had a collision.  

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
If they don't do anything, we don't need their acronym.
--Josh Hamilton, on the US FEMA

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Tom Lane
Marko Kreen [EMAIL PROTECTED] writes:
 On 2/6/07, Marko Kreen [EMAIL PROTECTED] wrote:
 STRICT IMMUTABLE fixed the crash for me, so it seems it was a bug
 in the module.  Although it did not happen in 8.2.1, so it seems
 some change in 8.2.2 made it trigger.

 Trigger was following patch:
 http://archives.postgresql.org/pgsql-committers/2007-02/msg00016.php
 as function test.break_guid() assigns NULLs to gplus_ret.

So in fact the problem was that the input function was not declared
STRICT and yet failed to handle nulls... which means it was broken
as of 8.2.0, the OP just hadn't tried to throw a null at it except
in the context of plpgsql ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Dead code in _bt_split?

2007-02-06 Thread Heikki Linnakangas

Tom Lane wrote:

Heikki Linnakangas [EMAIL PROTECTED] writes:

Bruce Momjian wrote:

OK, would you please send a patch to remove the unused code.  Thanks.



Ok, here you are.


Applied with an added comment and Assert.

While testing it I realized that there seems to be a nearby bug in
_bt_findsplitloc: it fails to consider the possibility of moving all the
extant items to the left side.  It will always return a firstright <=
maxoff.  ISTM this would mean that it could choose a bad split if the
incoming item goes at the end and both it and the last extant item are
large: in this case they should be split apart, but they won't be.

Heikki, do you feel like looking at that, or shall I?


I'll take a look at it.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Type casting bug in 8.1.[67]?

2007-02-06 Thread Tom Lane
Michael Paesold [EMAIL PROTECTED] writes:
 after upgrading from 8.1.5 to 8.1.7, I got errors in the server log when 
 updating decimal values using string constants.

Have you got a constraint or functional index on that column?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] misread release notes

2007-02-06 Thread ohp
Hi all,

Methinks I made a big mistake when swapping from 8.2.1 to 8.2.2

I read the fix incorrect permission check in
information_schema_key_column_usage_view chapter in HISTORY far too fast
and fed psql on each database with share/information_schema.sql.
Too late to stop it!
What did I do wrong, and how can I go backwards (I imagine %I broke
something!)

Unrelated, I have a problem with conforming strings: adding an E is easy in C
or PHP, but SpamAssassin has this plpgsql function:

CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
  intokenary BYTEA[],
  inspam_count INTEGER,
  inham_count INTEGER,
  inatime INTEGER)
RETURNS VOID AS '
DECLARE
  _token BYTEA;
  new_tokens INTEGER := 0;
BEGIN
  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
_token := intokenary[i];
UPDATE bayes_token
   SET spam_count = greatest_int(spam_count + inspam_count, 0),
   ham_count = greatest_int(ham_count + inham_count, 0),
   atime = greatest_int(atime, inatime)
 WHERE id = inuserid
   AND token = _token;
IF NOT FOUND THEN
  -- we do not insert negative counts, just return true
  IF NOT (inspam_count < 0 OR inham_count < 0) THEN
INSERT INTO bayes_token (id, token, spam_count, ham_count, atime)
VALUES (inuserid, _token, inspam_count, inham_count, inatime);
IF FOUND THEN
  new_tokens := new_tokens + 1;
END IF;
  END IF;
END IF;
  END LOOP;

  IF new_tokens > 0 AND inatime > 0 THEN
UPDATE bayes_vars
   SET token_count = token_count + new_tokens,
   newest_token_age = greatest_int(newest_token_age, inatime),
   oldest_token_age = least_int(oldest_token_age, inatime)
 WHERE id = inuserid;
  ELSEIF new_tokens > 0 AND NOT inatime > 0 THEN
UPDATE bayes_vars
   SET token_count = token_count + new_tokens
 WHERE id = inuserid;
  ELSEIF NOT new_tokens > 0 AND inatime > 0 THEN
UPDATE bayes_vars
   SET newest_token_age = greatest_int(newest_token_age, inatime),
   oldest_token_age = least_int(oldest_token_age, inatime)
 WHERE id = inuserid;
  END IF;
  RETURN;
END;
' LANGUAGE 'plpgsql';

As you see, token is bytea; where do I put the E in the INSERT query?

Keep the good job!
Regards
-- 
Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges+33-5-61-50-97-01 (Fax)
31190 AUTERIVE   +33-6-07-63-80-64 (GSM)
FRANCE  Email: ohp@pyrenet.fr
--
Make your life a dream, make your dream a reality. (St Exupery)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Modifying and solidifying contrib

2007-02-06 Thread Andrew Dunstan

Andrew Sullivan wrote:

On Mon, Jan 29, 2007 at 04:22:43PM -0500, Tom Lane wrote:
  

(A) I'm not sure we would have heard about it, and (B) any one user is
probably only using a subset of what has been proposed to be loaded by
default, so the odds of collisions would go way up.



As a data point, some time ago (7.2 days) I used to do this as a
matter of completeness, and never had a collision.  

  


(bangs head against brick wall)


of course there isn't a collision.

The point I at least have been trying to make is that extensions 
generally (e.g. from pgfoundry) should protect themselves from possible 
collisions with core and other unknown extensions that might be loaded, 
by using unique namespace(s), and further, that the standard extensions 
(i.e. what we now load from contrib) should act as good exemplars by 
doing likewise, with some support given for legacy uses that expect them 
to use the public schema.


cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Proposal: Commit timestamp

2007-02-06 Thread Markus Schiltknecht

Hi,

Zeugswetter Andreas ADI SD wrote:

And time based
is surely one of the important conflict resolution methods for async MM
replication.


That's what I'm questioning. Wouldn't any other deterministic, but 
seemingly random abort decision be as clever as time based conflict 
resolution? It would then be clear to the user that it's random, and not 
some "in most cases time based, but not in others, and only if..." thing.



Sure there are others, like rule based priority based but I think
you don't need additional backend functionality for those.


Got the point, yes. I'm impatient, sorry.

Nevertheless, I'm questioning whether it is worth adding backend 
functionality for that. And given this probably is the most wanted 
resolution method, this question might be heretical. You could also 
see it as sort of a user-education question: don't favor time based 
resolution if that's the one resolution method with the most traps.


Regards

Markus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Modifying and solidifying contrib

2007-02-06 Thread Andrew Sullivan
On Tue, Feb 06, 2007 at 11:43:24AM -0500, Andrew Dunstan wrote:
 Andrew Sullivan wrote:
 
 As a data point, some time ago (7.2 days) I used to do this as a
 matter of completeness, and never had a collision.  
 
 The point I at least have been trying to make is that extensions 
 generally (e.g. from pgfoundry) should protect themselves from possible 
 collisions with core and other unknown extensions that might be loaded, 
 by using unique namespace(s), and further, that the standard extensions 

This wasn't a disagreement with your general point.  I was just
trying to say that the problem did not in fact exist at some point,
so the empirical rathole perhaps doesn't need to be explored.  The
style question is the only one that is relevant, I think.  (I happen
to agree with you on that, and it seems to me that a more complete
proposal for namespace guidelines might be nice.)

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] misread release notes

2007-02-06 Thread Tom Lane
ohp@pyrenet.fr writes:
 I read the fix incorrect permission check in
 information_schema_key_column_usage_view chapter in HISTORY far too fast
 and fed psql on each database with share/information_schema.sql.
 Too late to stop it!
 What did I do wrong, and how can I go backwards (I imagine %I broke
 something!)

I don't think you changed anything --- none of the commands say OR REPLACE
so they'd just all fail.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] misread release notes

2007-02-06 Thread Andrew Dunstan

ohp@pyrenet.fr wrote:

Unrelated, I have problem with conforming string: adding a E is easy in C
or PHP but SpamAssassin has this plpgsql function  :


  

[snip]

As you see, token is bytea where do I put the E in the insert query?

  


Since you aren't using a string literal in this function for token or 
anything else, AFAICS, I don't understand why you would need to.
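
To illustrate the distinction (values invented): the E prefix would only
matter for a bytea literal typed directly into the SQL text, not for a
plpgsql variable that is already bytea:

-- a literal in SQL text is where escape-string syntax comes into play
INSERT INTO bayes_token (id, token, spam_count, ham_count, atime)
VALUES (1, E'\\001\\002'::bytea, 0, 0, 0);
-- inside put_tokens, _token is passed around as a bytea value, so no
-- string escaping (and no E) is involved.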


cheers

andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Pl/pgsql functions causing crashes in 8.2.2

2007-02-06 Thread Marko Kreen

On 2/6/07, Tom Lane [EMAIL PROTECTED] wrote:

Marko Kreen [EMAIL PROTECTED] writes:
 Attached is a patch for uniqueidentifier-0.2 that removes the
 buggy checks and makes functions STRICT IMMUTABLE.

Not sure where you should send that, but it's not here.


I did Cc: the maintainer.

--
marko

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] doxygen.postgresql.org

2007-02-06 Thread Magnus Hagander
http://doxygen.postgresql.org is now set up for your browsing pleasure.
It's synced to anoncvs once per day.

//Magnus

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


[HACKERS] strlcat

2007-02-06 Thread Peter Eisentraut
I've been looking into converting some code to use strlcpy rather than 
strncpy, as previously discussed.  Making good use of strlcpy requires 
the availability of strlcat as well, so I'm going to add the OpenBSD 
version thereof to src/port/ as well, unless anyone objects.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Tom Lane
As per numerous reports this morning, PG 8.2.2 and 8.1.7 both fail on
fairly simple scenarios involving typmod-bearing columns (varchar,
numeric, etc) with check constraints or functional indexes (and maybe
other cases too, but those are the ones reported so far).  I have not
been able to reproduce the failures in 8.0 but I think it may have the
same issue in a weaker form.  We need a quick re-release I'm afraid.

I have applied a patch that resolves the problem AFAICT, but this time
around it would be nice to get some more eyeballs and testing on it.
Please try CVS HEAD or branch tips this afternoon, if you can.  Core
is currently thinking of wrapping update tarballs this evening (maybe
around midnight UTC?).

Also, if anyone can find a related failure in 8.0.11 please notify us.
I'm not sure whether we need an update in that branch or not ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Modifying and solidifying contrib

2007-02-06 Thread Andrew Dunstan

Nikolay Samokhvalov wrote:

On 2/5/07, Andrew Dunstan [EMAIL PROTECTED] wrote:
[...]

I would suggest we start with what is (I think) simplest and clearest:

. catalog support via a simple extension-schema(s) map
. initdb installs standard extensions if it finds them, unless told 
not to

. support for adjusting search path.


Why adjusting search_path is needed at all?



If the extension installs everything in dedicated namespace(s), I think 
we would want to have an option to add those namespaces easily to search 
paths. Right now all we can do is to set a search path. It would be 
nice, for example, to have support for appending or prepending something 
to the search path.
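
A rough sketch of what appending could look like with what exists today
(the schema name is made up):

-- append an extension's schema to whatever search_path is in effect
SELECT set_config('search_path',
                  current_setting('search_path') || ', myextschema',
                  false);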


I suspect most apps/extensions don't currently use namespaces much, or 
we might well have seen more demand in this area.


cheers

andrew

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Bruno Wolff III
On Tue, Feb 06, 2007 at 13:27:47 -0500,
  Tom Lane [EMAIL PROTECTED] wrote:
 
 I have applied a patch that resolves the problem AFAICT, but this time
 around it would be nice to get some more eyeballs and testing on it.
 Please try CVS HEAD or branch tips this afternoon, if you can.  Core
 is currently thinking of wrapping update tarballs this evening (maybe
 around midnight UTC?).

Is a test going to get added to the regression tests to catch similar
regressions in the future?

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Thomas F. O'Connell

On Feb 6, 12:27 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
 As per numerous reports this morning, PG 8.2.2 and 8.1.7 both fail on
 fairly simple scenarios involving typmod-bearing columns (varchar,
 numeric, etc) with check constraints or functional indexes (and maybe
 other cases too, but those are the ones reported so far).  I have not
 been able to reproduce the failures in 8.0 but I think it may have the
 same issue in a weaker form.  We need a quick re-release I'm afraid.

Should the existing source and binaries be pulled in the meantime?

--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
 Is a test going to get added to the regression tests to catch similar
 regressions in the future?

I've been thinking about that.  It seems that the regression tests have
fairly poor coverage of use of typmod-bearing data types in general;
most of our tests of complicated queries tend to use simple datatypes
like int or text.  I don't have any immediate thoughts what to do about
that --- massive expansion of the tests doesn't seem justified --- but
this isn't the first bug we've hit in this area.  It's just a bit more
embarrassing than most :-(

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Type casting bug in 8.1.[67]?

2007-02-06 Thread Michael Paesold

Tom Lane wrote:

Michael Paesold [EMAIL PROTECTED] writes:
after upgrading from 8.1.5 to 8.1.7, I got errors in the server log when 
updating decimal values using string constants.


Have you got a constraint or functional index on that column?


Yes.

Check constraints:
    tc_reminder_charges CHECK (reminder_charges >= 0::numeric)

As I read from your other post, you already figured that the issue is 
related to check constraints (or functional indexes).
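
For the record, a minimal reproduction along the lines of this report
might look like the following (table name and values invented):

CREATE TABLE reminder_test (
    reminder_id      serial PRIMARY KEY,
    reminder_charges numeric(5,2) NOT NULL
        CHECK (reminder_charges >= 0::numeric)
);
INSERT INTO reminder_test (reminder_charges) VALUES (1.00);
-- reported to fail on 8.1.7 with "attribute ... has wrong type"
UPDATE reminder_test SET reminder_charges = '0' WHERE reminder_id = 1;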


Best Regards,
Michael Paesold

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Jim Nasby

On Feb 6, 2007, at 12:40 PM, Tom Lane wrote:

Bruno Wolff III [EMAIL PROTECTED] writes:

Is a test going to get added to the regression tests to catch similar
regressions in the future?


I've been thinking about that.  It seems that the regression tests have
fairly poor coverage of use of typmod-bearing data types in general;
most of our tests of complicated queries tend to use simple datatypes
like int or text.  I don't have any immediate thoughts what to do about
that --- massive expansion of the tests doesn't seem justified --- but
this isn't the first bug we've hit in this area.  It's just a bit more
embarrassing than most :-(


What about the idea that's been floated in the past about an "extensive"
mode for regression testing that would (generally) only be used by the
build farm? That would mean others wouldn't have to suffer through
extremely long make check runs.

Or is there another reason not to expand the tests?
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Tom Lane
Jim Nasby [EMAIL PROTECTED] writes:
 On Feb 6, 2007, at 12:40 PM, Tom Lane wrote:
 ... massive expansion of the tests doesn't seem justified

 What about the idea that's been floated in the past about a -- 
 extensive mode for regression testing that would (generally) only be  
 used by the build farm. That would mean others wouldn't have to  
 suffer through extremely long make check's.

 Or is there another reason not to expand the tests?

I'm not concerned so much about the runtime as the development and
maintenance effort...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] getting status transaction error

2007-02-06 Thread Merlin Moncure

On 2/6/07, Merlin Moncure [EMAIL PROTECTED] wrote:

around 6:30 this morning, I started getting the following messages in my log:

Feb  6 06:33:34 mojo postgres[1117]: [2-1] :: ERROR:  could not access
status of transaction 51911
Feb  6 06:34:35 mojo postgres[1128]: [2-1] :: ERROR:  could not access
status of transaction 51911
[...]

repeated roughly once a minute.  I've never seen this before. this is
on postgres 8.1.1 running on fedora core 4 smp.  I don't have any
contextual information yet but I'm getting ready to turn statement
logging on.

Anybody know what this is?

[x-posting to -hackers]

actually, here is some more relevant bits from the log.
Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
processing database template0
Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
status of transaction 51911
Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
file pg_clog/0207: No such file or directory

repeated ad nauseam

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] getting status transaction error

2007-02-06 Thread Stefan Kaltenbrunner
Merlin Moncure wrote:
 On 2/6/07, Merlin Moncure [EMAIL PROTECTED] wrote:
 around 6:30 this morning, I started getting the following messages in
 my log:

 Feb  6 06:33:34 mojo postgres[1117]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:34:35 mojo postgres[1128]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 [...]

 repeated roughly once a minute.  I've never seen this before. this is
 on postgres 8.1.1 running on fedora core 4 smp.  I don't have any
 contextual information yet but I'm getting ready to turn statement
 logging on.

 Anybody know what this is?
 [x-posting to -hackers]
 
 actually, here is some more relevant bits from the log.
 Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
 processing database template0
 Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
 file pg_clog/0207: No such file or directory

hmm I first thought it could have been
http://archives.postgresql.org/pgsql-committers/2006-01/msg00288.php
which affects 8.1.1 but that's not the very same error as the one
created by the above bug.


Stefan

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] [GENERAL] getting status transaction error

2007-02-06 Thread Alvaro Herrera
Merlin Moncure wrote:
 On 2/6/07, Merlin Moncure [EMAIL PROTECTED] wrote:
 around 6:30 this morning, I started getting the following messages in my 
 log:
 
 Feb  6 06:33:34 mojo postgres[1117]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:34:35 mojo postgres[1128]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 [...]
 
 repeated roughly once a minute.  I've never seen this before. this is
 on postgres 8.1.1 running on fedora core 4 smp.  I don't have any
 contextual information yet but I'm getting ready to turn statement
 logging on.
 
 Anybody know what this is?
 [x-posting to -hackers]
 
 actually, here is some more relevant bits from the log.
 Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
 processing database template0
 Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
 file pg_clog/0207: No such file or directory

I guess the problem here is that autovacuum believes that template0
needs a database-wide vacuum due to Xid wraparound getting closer.  And
that database seems to have Xid 51911 somewhere, the clog bit for
which was in the 0207 file which was deleted some time ago.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] getting status transaction error

2007-02-06 Thread Merlin Moncure

On 2/6/07, Stefan Kaltenbrunner [EMAIL PROTECTED] wrote:

Merlin Moncure wrote:
 On 2/6/07, Merlin Moncure [EMAIL PROTECTED] wrote:
 around 6:30 this morning, I started getting the following messages in
 my log:

 Feb  6 06:33:34 mojo postgres[1117]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:34:35 mojo postgres[1128]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 [...]

 repeated roughly once a minute.  I've never seen this before. this is
 on postgres 8.1.1 running on fedora core 4 smp.  I don't have any
 contextual information yet but I'm getting ready to turn statement
 logging on.

 Anybody know what this is?
 [x-posting to -hackers]

 actually, here is some more relevant bits from the log.
 Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
 processing database template0
 Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
 file pg_clog/0207: No such file or directory

hmm I first thought it could have been
http://archives.postgresql.org/pgsql-committers/2006-01/msg00288.php
which affects 8.1.1 but that's not the very same error as the one
created by the above bug.


ya, it doesn't seem to match, as this seems to be repeating quite
regularly.  interesting that my 'clog' files start at 06B6 and count
up. 0207 is way off the charts.

a lot of applications are hitting this database, and so far everything
seems to be running ok (i found this log msg by accident), but I am
now officially very nervous.

merlin



Re: [HACKERS] [GENERAL] getting status transaction error

2007-02-06 Thread Merlin Moncure

On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

 actually, here is some more relevant bits from the log.
 Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
 processing database template0
 Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
 file pg_clog/0207: No such file or directory

I guess the problem here is that autovacuum believes that template0
needs a database-wide vacuum due to Xid wraparound getting closer.  And
that database seems to have Xid 51911 somewhere, the clog bit for
which was in the 0207 file which was deleted some time ago.


Latest checkpoint's NextXID: 2162841139
2^31:   2147483648

is this related?
merlin



Re: [GENERAL] [HACKERS] getting status transaction error

2007-02-06 Thread Alvaro Herrera
Merlin Moncure wrote:

 ya, it doesn't seem to match, as this seems to be repeating quite
 regularly.  interesting that my 'clog' files start at 06B6 and count
 up. 0207 is way off the charts.
 
 a lot of applications are hitting this database, and so far everything
 seems to be running ok (i found this log msg by accident), but I am
 now officially very nervous.

I don't think there's much cause for concern here.  If my theory is
correct, this is an autovacuum bug which was fixed in 8.1.7.

What I'd do is create a 0207 clog file, fill it with 0x55 (which is
transactions committed for all transactions in that interval), and do
a VACUUM FREEZE on that database.  You'll need to set
pg_database.datallowconn=true beforehand.

Of course, I'd copy the files somewhere else and experiment on a scratch
postmaster, running on a different port, just to be sure ...
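
A minimal sketch of the SQL side of that recovery, assuming the missing
segment really is pg_clog/0207 and that it has already been recreated at
the filesystem level as a file of 0x55 bytes (a full clog segment is 32
pages of 8 kB, i.e. 256 kB; 0x55 marks every transaction in the segment
as committed, matching the advice above).  The catalog tweak and the
VACUUM FREEZE come straight from the recipe; the rest is illustrative,
not a tested procedure:

-- allow connections to template0 temporarily (superuser required)
UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';

-- then, connected to template0:
VACUUM FREEZE;

-- and lock it down again afterwards
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';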

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] [HACKERS] getting status transaction error

2007-02-06 Thread Merlin Moncure

On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

Merlin Moncure wrote:

 ya, it doesn't seem to match, as this seems to be repeating quite
 regularly.  interesting that my 'clog' files start at 06B6 and count
 up. 0207 is way off the charts.

 a lot of applications are hitting this database, and so far everything
 seems to be running ok (i found this log msg by accident), but I am
 now officially very nervous.

I don't think there's much cause for concern here.  If my theory is
correct, this is an autovacuum bug which was fixed in 8.1.7.

What I'd do is create a 0207 clog file, fill it with 0x55 (which is
transactions committed for all transactions in that interval), and do
a VACUUM FREEZE on that database.  You'll need to set
pg_database.datallowconn=true beforehand.

Of course, I'd copy the files somewhere else and experiment on a scratch
postmaster, running on a different port, just to be sure ...


thats a big help, database is actually fairly huge, so I may have to
just go ahead and do it.   I'm off to a meeting, but I'll check back
when I'm done and assuming nobody else says 'don't do that', I'll try
the fix and post back with the result.

thanks all,
merlin



Re: [HACKERS] [GENERAL] getting status transaction error

2007-02-06 Thread Alvaro Herrera
Merlin Moncure wrote:
 On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
  actually, here is some more relevant bits from the log.
  Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
  processing database template0
  Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
  status of transaction 51911
  Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
  file pg_clog/0207: No such file or directory
 
 Latest checkpoint's NextXID: 2162841139
 2^31:   2147483648

I think the relevant arithmetic here is

echo 2162841139 51911 - p | dc
1618399228

That's a billion and a half transactions.  Autovacuum uses the formula

this_whole_db = (tmp->age >
 (int32) ((MaxTransactionId >> 3) * 3 - 100000));

to determine whether it needs database-wide vacuum. 
(MaxTransactionId >> 3) is 536870911, so the calculation is
536870911 * 3 - 100000

echo 536870911 3 * 100000 - p | dc
1610512733

which looks awfully close to the number above.  About 7 million
transactions must have passed since the first time the error showed up
-- does that sound likely?

Well, scratch that -- what's the _current_ Xid? (not latest
checkpoint's)

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] [HACKERS] getting status transaction error

2007-02-06 Thread Alvaro Herrera
Merlin Moncure wrote:
 On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
 Merlin Moncure wrote:
 
  ya, it doesn't seem to match, as this seems to be repeating quite
  regularly.  interesting that my 'clog' files start at 06B6 and count
  up. 0207 is way off the charts.
 
  a lot of applications are hitting this database, and so far everything
  seems to be running ok (i found this log msg by accident), but I am
  now officially very nervous.
 
 I don't think there's much cause for concern here.  If my theory is
 correct, this is an autovacuum bug which was fixed in 8.1.7.
 
 What I'd do is create a 0207 clog file, fill it with 0x55 (which is
 transactions committed for all transactions in that interval), and do
 a VACUUM FREEZE on that database.  You'll need to set
 pg_database.datallowconn=true beforehand.
 
 Of course, I'd copy the files somewhere else and experiment on a scratch
 postmaster, running on a different port, just to be sure ...
 
 thats a big help, database is actually fairly huge, so I may have to
 just go ahead and do it.   I'm off to a meeting, but I'll check back
 when I'm done and assuming nobody else says 'don't do that', I'll try
 the fix and post back with the result.

Well, you don't need to copy all databases for the test area, just the
base/oid dir for template0 (along with all pg_xlog and pg_clog files,
etc, but these shouldn't be as big as all the other stuff in base/).

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Ooops ... seems we need a re-release pronto

2007-02-06 Thread Michael Paesold

Tom Lane wrote:

Bruno Wolff III [EMAIL PROTECTED] writes:

Is a test going to get added to the regression tests to catch similar
regressions in the future?


I've been thinking about that.  It seems that the regression tests have
fairly poor coverage of use of typmod-bearing data types in general;
most of our tests of complicated queries tend to use simple datatypes
like int or text.  I don't have any immediate thoughts what to do about
that --- massive expansion of the tests doesn't seem justified --- but
this isn't the first bug we've hit in this area.  It's just a bit more
embarrassing than most :-(


I think at least the most simple cases should be added. At the very least a 
test that would have caught this issue. This is really the first time that 
I had to pull a minor release and go back to a previous version. ;-)


As far as I understand, it's as simple as this (untested):

CREATE TABLE tab (
c DECIMAL(5,2) NOT NULL,
CHECK (c >= 0)
);
INSERT INTO tab VALUES ('0');

Right?

Or at least:
UPDATE tab SET c='0';

Best Regards
Michael Paesold



[HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Stefan Kaltenbrunner
I'm still getting random failures from some of my buildfarm members
which is starting to get a bit irritating and annoying :-(

some recent failures:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=zebra&dt=2007-02-06%2015:25:04
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-06%2000:03:04
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-04%2003:03:09
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=impala&dt=2007-02-02%2003:03:02


any ideas ?


Stefan



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Alvaro Herrera
Stefan Kaltenbrunner wrote:
 I'm still getting random failures from some of my buildfarm members
 which is starting to get a bit irritating and annoying :-(
 
 some recent failures:
 
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=zebra&dt=2007-02-06%2015:25:04
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-06%2000:03:04
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-04%2003:03:09
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=impala&dt=2007-02-02%2003:03:02
 
 
 any ideas ?

Since they are sporadic, my guess is that it's due to autovacuum not
letting pgstat catch up.  I'd try either setting
autovacuum_vacuum_cost_delay to a low value, or extending the sleep
period in the stats test, to give more time for pgstat to catch up with
those messages.

Setting the cost_delay sounds a reasonable thing to do anyway, and in
fact I already proposed it and nobody objected (AFAIR).  Now we only
have to agree on a reasonable value.
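
As a concrete (and purely illustrative) example of what that would look
like, the current setting can be checked with SHOW, and a buildfarm
animal would carry a line like the commented one in postgresql.conf; the
10 ms figure below is a placeholder, not the value being agreed on here:

SHOW autovacuum_vacuum_cost_delay;
-- in postgresql.conf:
--   autovacuum_vacuum_cost_delay = 10    -- milliseconds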

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [GENERAL] getting status transaction error

2007-02-06 Thread Merlin Moncure

On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

Merlin Moncure wrote:
 On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
  actually, here is some more relevant bits from the log.
  Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
  processing database template0
  Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
  status of transaction 51911
  Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
  file pg_clog/0207: No such file or directory
 
 Latest checkpoint's NextXID: 2162841139
 2^31:   2147483648

I think the relevant arithmetic here is

echo 2162841139 51911 - p | dc
1618399228

That's a billion and a half transactions.  Autovacuum uses the formula

this_whole_db = (tmp->age >
 (int32) ((MaxTransactionId >> 3) * 3 - 100000));

to determine whether it needs database-wide vacuum.
(MaxTransactionId >> 3) is 536870911, so the calculation is
536870911 * 3 - 100000

echo 536870911 3 * 100000 - p | dc
1610512733

which looks awfully close to the number above.  About 7 million
transactions must have passed since the first time the error showed up
-- does that sound likely?

Well, scratch that -- what's the _current_ Xid? (not latest
checkpoint's)


I don't know any better way to get that than this:
postgres=# insert into foo default values;
INSERT 0 1
postgres=# select xmin,xmax from foo;
    xmin    | xmax
------------+------
 2163877346 |    0
(1 row)
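
A hedged side note: for a rough idea of how far along each database is
(this reports ages relative to the current transaction counter rather
than the current xid itself), 8.1 already has age() and
pg_database.datfrozenxid:

SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database;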

merlin



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Alvaro Herrera
Alvaro Herrera wrote:

 Setting the cost_delay sounds a reasonable thing to do anyway, and in
 fact I already proposed it and nobody objected (AFAIR).  Now we only
 have to agree on a reasonable value.

Also note this message:

Date: Sat, 27 Jan 2007 21:51:40 -0500
Message-ID: [EMAIL PROTECTED]
From: Tom Lane [EMAIL PROTECTED]
Subject: Re: [GENERAL] Stats collector frozen? 

[...]
 If this theory is correct, then we can improve the reliability of the
 stats test a good deal if we put a sleep() at the *start* of the test,
 to let any old backends get out of the way.  It seems worth a try
 anyway.  I'll add this to HEAD and if the stats failure noise seems to
 go down, we can back-port it.

which was followed by this commit

revision 1.6
date: 2007-01-28 00:02:31 -0300;  author: tgl;  state: Exp;  lines: +4 -0;
Add a delay at the start of the stats test, to let any prior stats
activity quiesce.  Possibly this will fix the large increase in
non-reproducible stats test failures we've noted since turning on
stats_row_level by default.


Apparently it wasn't enough to completely eliminate the problems.  Did
it reduce them?  I haven't been watching the buildfarm closely enough to
know for sure.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Stefan Kaltenbrunner
Alvaro Herrera wrote:
 Stefan Kaltenbrunner wrote:
 I'm still getting random failures from some of my buildfarm members
 which is starting to get a bit irritating and annoying :-(

 some recent failures:

 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=zebra&dt=2007-02-06%2015:25:04
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-06%2000:03:04
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=clownfish&dt=2007-02-04%2003:03:09
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=impala&dt=2007-02-02%2003:03:02


 any ideas ?
 
 Since they are sporadic, my guess is that it's due to autovacuum not
 letting pgstat catch up.  I'd try either setting
 autovacuum_vacuum_cost_delay to a low value, or extending the sleep
 period in the stats test, to give more time for pgstat to catch up with
 those messages.

hmm now that I look closer - all those members above are actually VMs of
some sort and the host is rather busy at times (multiple VMs competing
for CPU and IO) so that might be a factor here.

 
 Setting the cost_delay sounds a reasonable thing to do anyway, and in
 fact I already proposed it and nobody objected (AFAIR).  Now we only
 have to agree on a reasonable value.

not sure on a reasonable value, but we still have time to test different
values if needed for 8.3 - but I think we should really try to get rid
of those sporadic failures because they might lead to other issues
going unnoticed.


Stefan



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Stefan Kaltenbrunner
Alvaro Herrera wrote:
 Alvaro Herrera wrote:
 
 Setting the cost_delay sounds a reasonable thing to do anyway, and in
 fact I already proposed it and nobody objected (AFAIR).  Now we only
 have to agree on a reasonable value.
 
 Also note this message:
 
 Date: Sat, 27 Jan 2007 21:51:40 -0500
 Message-ID: [EMAIL PROTECTED]
 From: Tom Lane [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Stats collector frozen? 
 
 [...]
 If this theory is correct, then we can improve the reliability of the
 stats test a good deal if we put a sleep() at the *start* of the test,
 to let any old backends get out of the way.  It seems worth a try
 anyway.  I'll add this to HEAD and if the stats failure noise seems to
 go down, we can back-port it.
 
 which was followed by this commit
 
 revision 1.6
 date: 2007-01-28 00:02:31 -0300;  author: tgl;  state: Exp;  lines: +4 -0;
 Add a delay at the start of the stats test, to let any prior stats
 activity quiesce.  Possibly this will fix the large increase in
 non-reproducible stats test failures we've noted since turning on
 stats_row_level by default.
 
 
 Apparently it wasn't enough to completely eliminate the problems.  Did
 it reduce them?  I haven't been watching the buildfarm closely enough to
 know for sure.

at least for my members it seems it did not have any effect at all.
I actually think I got more failures in the period afterwards, but the
failures are too sporadic to quantify that (and in some way it also
depends on the number of commits done, which directly influences the
number of builds/tests).


Stefan



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Also note this message:
 If this theory is correct, then we can improve the reliability of the
 stats test a good deal if we put a sleep() at the *start* of the test,
 to let any old backends get out of the way.  It seems worth a try
 anyway.  I'll add this to HEAD and if the stats failure noise seems to
 go down, we can back-port it.

 Apparently it wasn't enough to completely eliminate the problems.  Did
 it reduce them?  I haven't been watching the buildfarm closely enough to
 know for sure.

It doesn't seem to have helped much if at all :-(.

The $64 question in my mind is whether the failures represent pgstats
not working at all, or just being pretty slow when the system is under
load.  It seems likely to be the latter, but ...  I don't want to just
keep jacking the sleep up indefinitely, anyway; that will slow the
regression tests down for little reason.

I'm tempted to propose replacing the fixed sleep with a short plpgsql
function that sleeps for a second, checks to see if the stats have
changed, repeats if not; giving up only after perhaps 30 seconds.
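
A rough sketch of what such a polling function might look like (this is
not the committed test; pg_sleep() and clock_timestamp() are available
as of 8.2, and the table checked here, 'tenk2', is only a placeholder):

create or replace function wait_for_stats() returns void as $$
declare
    start_time timestamptz := clock_timestamp();
    updated bool;
begin
    loop
        -- has the expected stats activity shown up yet?
        select coalesce(seq_scan, 0) > 0 into updated
          from pg_stat_user_tables
         where relname = 'tenk2';
        exit when coalesce(updated, false);
        -- give up after roughly 30 seconds
        exit when clock_timestamp() >= start_time + interval '30 seconds';
        perform pg_sleep(1);
    end loop;
end;
$$ language plpgsql;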

It'd be interesting to try to gather stats on the length of the delay
taken, but I don't see a good way to do that within the current
regression-test infrastructure.

regards, tom lane



Re: [HACKERS] referential Integrity and SHARE locks

2007-02-06 Thread Marc Munro
Simon Riggs started this thread with the question:

  . . .
  Why do we need a SHARE lock at all, on the **referenc(ed)** table?
  . . .

The root problem addressed by this thread seems to be that using share
locks in this way increases the likelihood of deadlock, and causes
blocking when no blocking is actually needed.

I would like to make a few observations leading to two alternative
proposals for dealing with this issue.

Deadlocks arise because of differences in the order in which locks are
taken.  If we have a parent table P, and a child C, and we modify two
children of the same P, locks will be taken in the order C1, P, C2.
Another process modifying only C2, will cause locks to be taken in the
order C2, P, leading to the possibility of deadlock.  With the current
system of RI, this sort of deadlock arises far too easily with the
result that RI is often disabled.

It is solely the order in which the locks are taken that causes the
problem.  If the RI constraints could lock the parent records before
locking the child, the possibility of deadlock would be much reduced.  

Proposal 1: Alter the way RI triggers fire, so that they complete before
locking the row against which they fire.


Having a background in Oracle, I found myself considering how this is
not usually a problem with Oracle databases.  If I understand it
correctly, in Oracle the referential integrity constraints are
implemented by locking the index associated with the constraint, rather
than the records themselves.

Proposal 2: Lock the index associated with the parent record, rather
than the parent record itself.  Updates to indexed fields, and deletions
of records would need to also take such locks, but this should be enough
to allow non-referenced fields to be updated in a parent, even while
transactions are modifying its children.


__
Marc






Re: [HACKERS] Re: [COMMITTERS] pgsql: Add documentation for Windows on how to set an environment

2007-02-06 Thread Magnus Hagander
Bruce Momjian wrote:
 Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
 Bruce Momjian wrote:
 I think environment variables are used rarely enough on Win32 that we
 should supply a hint.
 I think every Windows administrator who is not totally clueless knows 
 how to set the environment. Maybe home users don't use it much, but 
 admins certainly need to know about it.
 Another argument against it is that the libpq documentation is an
 entirely random place to discuss it, as libpq is hardly the only part of
 Postgres that responds to environment variables.

 I liked the idea of mentioning it in the Windows FAQ, instead.
 
 OK, sure.  Magnus.
 

I assume you mean add it to the win32 FAQ, so I did. Will be on next
site build.

//Magnus



Re: [HACKERS] Re: [COMMITTERS] pgsql: Add documentation for Windows on how to set an environment

2007-02-06 Thread Bruce Momjian
Magnus Hagander wrote:
 Bruce Momjian wrote:
  Tom Lane wrote:
  Andrew Dunstan [EMAIL PROTECTED] writes:
  Bruce Momjian wrote:
  I think environment variables are used rarely enough on Win32 that we
  should supply a hint.
  I think every Windows administrator who is not totally clueless knows 
  how to set the environment. Maybe home users don't use it much, but 
  admins certainly need to know about it.
  Another argument against it is that the libpq documentation is an
  entirely random place to discuss it, as libpq is hardly the only part of
  Postgres that responds to environment variables.
 
  I liked the idea of mentioning it in the Windows FAQ, instead.
  
  OK, sure.  Magnus.
  
 
 I assume you mean add it to the win32 FAQ, so I did. Will be on next
 site build.

Yep, perfect, thanks.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  Homepage      http://momjian.us
  EnterpriseDB  http://www.enterprisedb.com
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Andrew Dunstan

Tom Lane wrote:

I'm tempted to propose replacing the fixed sleep with a short plpgsql
function that sleeps for a second, checks to see if the stats have
changed, repeats if not; giving up only after perhaps 30 seconds.

It'd be interesting to try to gather stats on the length of the delay
taken, but I don't see a good way to do that within the current
regression-test infrastructure.


  


Have it log something that will appear on the postmaster log but not the 
client log? Buildfarm members mostly post their complete postmaster 
logs, and we could postprocess those.


cheers

andrew




[HACKERS] proposed todo: use insert/update returning anywhere a table is allowed

2007-02-06 Thread Merlin Moncure

It would be great to be able to join to update, returning, etc.  It
looks like the code was deliberately tied off as-is and I was
surprised not to see a todo for this.

the basic idea is to be able to do things like:
select * from (update foo set id = 1 returning *) q;

which currently syntax errors out on 'set' which is a bit odd.

here are tom's brief notes on it and also some context on how select
combined with insert/update might influence triggers:

http://archives.postgresql.org/pgsql-hackers/2006-10/msg00693.php

merlin



Re: [HACKERS] Status of autovacuum and the sporadic stats failures ?

2007-02-06 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I'm tempted to propose replacing the fixed sleep with a short plpgsql
 function that sleeps for a second, checks to see if the stats have
 changed, repeats if not; giving up only after perhaps 30 seconds.
 
 It'd be interesting to try to gather stats on the length of the delay
 taken, but I don't see a good way to do that within the current
 regression-test infrastructure.

 Have it log something that will appear on the postmaster log but not the 
 client log? Buildfarm members mostly post their complete postmaster 
 logs, and we could postprocess those.

Or even just eyeball them.  Good idea --- I'll do it as soon as I get a
chance to catch my breath.  Today's a bit busy :-(

Or if someone else wants to run with the idea, go for it.

regards, tom lane



Re: [HACKERS] proposed todo: use insert/update returning anywhere a table is allowed

2007-02-06 Thread Jeff Davis
On Tue, 2007-02-06 at 16:54 -0500, Merlin Moncure wrote:
 It would be great to be able to join to update, returning, etc.  It
 looks like the code was deliberately tied off as-is and I was
 surprised not to see a todo for this.
 
 the basic idea is to be able to do things like:
 select * from (update foo set id = 1 returning *) q;
 
 which currently syntax errors out on 'set' which is a bit odd.
 
 here are tom's brief notes on it and also some context on how select
 combined with insert/update might influence triggers:
 
 http://archives.postgresql.org/pgsql-hackers/2006-10/msg00693.php
 

Also a relevent thread:

http://archives.postgresql.org/pgsql-general/2006-09/msg00803.php

Regards,
Jeff Davis




[HACKERS] Proposal: TABLE functions

2007-02-06 Thread Pavel Stehule

Hello,

Currently PostgreSQL supports set returning functions.

ANSI SQL 2003 adds a new type of function - table functions - with this
syntax:


CREATE FUNCTION foo() RETURNS TABLE (c1 t1, ... )

PostgreSQL equal statements are:

CREATE TYPE tmptype AS (c1 t1, ...)
CREATE FUNCTION ... RETURNS SETOF tmptype AS ...

All necessary infrastructure is done. Implementation probably needs only
small changes in the parser.


This feature doesn't need any changes in SQL functions. I expect they
will be more readable and consistent.


CREATE OR REPLACE FUNCTION foo(f integer)
RETURNS TABLE(a int, b int) AS
$$
 SELECT a, b
   FROM footab
  WHERE a > f;
$$ LANGUAGE sql;

plpgpsql RETURN have to be enhanced for table expressions.

CREATE OR REPLACE FUNCTION foo(f integer)
RETURNS TABLE(a int, b int) AS -- they are not variables!
$$
 BEGIN
   RETURN TABLE(SELECT a, b  -- it's secure, a,b are not variables
FROM footab
  WHERE a > f);
 END;
$$ LANGUAGE plpgsql;

RETURN NEXT can be used without changes. This feature doesn't allow
combining RETURN TABLE and RETURN NEXT statements.


Table functions can have only IN arguments.

Advantages:
* conformance with ANSI SQL 2003
* less probability of collisions between varnames and colnames

Regards
Pavel Stehule






Re: [HACKERS] Logging functions executed by queries in 8.2?

2007-02-06 Thread korryd
 Josh Berkus josh@agliodbs.com writes:
  In recent versions, we've changed the logging of function executions so 
  that only the function call is logged, and not any of the queries which it 
  may execute internally.  While most of the time this method is superior 
  for performance analysis, in applications with extensive multi-line stored 
  procedures sometimes you want to log each individual query.
 
 ISTM that the wave of the future for this is an instrumentation plug-in,
 not further kluging of the query logging functionality.  I had the
 impression that Korry and EDB had some prototype capability in that
 direction already, and surely it shouldn't be that hard to write if not.


There's a sort of proof-of-concept PL/pgSQL tracer plugin in the
debugger project on pgFoundry - I haven't played with it in a few months
so I can't promise that it will run at the moment.

If anyone is interested, let me know and I'll add this to my ToDo
list.  

-- Korry


Re: [HACKERS] Logging functions executed by queries in 8.2?

2007-02-06 Thread Josh Berkus
Korry,

 If anyone is interested, let me know and I'll add this to my ToDo
 list.

The Sun benchmarking team needs this.  However, we need to be able to feed 
the data into some kind of mass analysis ala pg_fouine so that we can do 
overall performance analysis.

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [HACKERS] referential Integrity and SHARE locks

2007-02-06 Thread Gregory Stark
Marc Munro [EMAIL PROTECTED] writes:

 Proposal 1: Alter the way RI triggers fire, so that they complete before
 locking the row against which they fire.

It's kind of hard to know what records the user will choose to update before
he actually does the update...

 Proposal 2: Lock the index associated with the parent record, rather
 than the parent record itself.  

That doesn't help in our case because each version of a record has an index
entry. So even updates to unrelated fields imply index modifications. Worse,
deleting and updating don't remove the old index entries so even if you've
locked them you won't prevent people from doing exactly those operations
you're trying to avoid.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-02-06 Thread Rick Gigger

Tom Lane wrote:

Rick Gigger [EMAIL PROTECTED] writes:

I thought that the following todo item just barely missed 8.2:
Allow a warm standby system to also allow read-only statements [pitr]


No, it's a someday-wishlist item; the work involved is not small.


Thanks very much for the info.  I'm not sure why I thought that one was 
near completion.  I can now come up with an alternative plan.




Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-02-06 Thread Rick Gigger

Andrew Hammond wrote:

On Jan 26, 2:38 pm, [EMAIL PROTECTED] (Tom Lane) wrote:

Rick Gigger [EMAIL PROTECTED] writes:

I thought that the following todo item just barely missed 8.2:
Allow a warm standby system to also allow read-only statements [pitr]

No, it's a someday-wishlist item; the work involved is not small.


Slony1 has supported log-shipping replication for about a year now. It
provides similar functionality.


Yes but Slony is much more complicated, has significantly more 
administrative overhead, and as far as I can tell is much more likely to 
impact my production system than this method would.


Slony is a lot more flexible and powerful but I don't need that.  I just 
want a backup that is reasonably up to date that I can do queries on 
and fail over to in case of hardware failure on my primary db.


I am going to be looking more closely at Slony now that it seems to be 
the best option for this.  I am not looking forward to how it will 
complicate my life though. (Not saying it is bad, just complicated.  At 
least more complicated than simple postgres log shipping.)





Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-02-06 Thread Rick Gigger

Gregory Stark wrote:

Rick Gigger [EMAIL PROTECTED] writes:


I thought that the following todo item just barely missed 8.2:

Allow a warm standby system to also allow read-only statements [pitr]
This is useful for checking PITR recovery.


No, nobody worked on it prior to 8.2. Afaik there's still nobody working on
it. It's not trivial. Consider for example that your read-only query would
still need to come up with a snapshot and there's nowhere currently to find
out what transactions were in-progress at that point in the log replay.

There's also the problem that currently WAL replay doesn't allow for any
locking, so there's no way for read-only queries to protect themselves
against the WAL replay thrashing the buffer pages they're looking at.

It does seem to be doable and I agree it would be a great feature, but as far
as I know nobody's working on it for 8.3.


Thanks again for the update.




Re: [HACKERS] Proposal: Commit timestamp

2007-02-06 Thread Jan Wieck

On 2/6/2007 11:44 AM, Markus Schiltknecht wrote:

Hi,

Zeugswetter Andreas ADI SD wrote:

And time based
is surely one of the important conflict resolution methods for async MM
replication.


That's what I'm questioning. Wouldn't any other deterministic, but 
seemingly random abort decision be as clever as time based conflict 
resolution? It would then be clear to the user that it's random and not 
some in most cases time based, but no in others and only if... thing.



Sure there are others, like rule based priority based but I think
you don't need additional backend functionality for those.


Got the point, yes. I'm impatient, sorry.

Nevertheless, I'm questioning whether it is worth adding backend 
functionality for that. And given that this probably is the most wanted 
resolution method, this question might be heretical. You could also 
see it as sort of a user-education question: don't favor time based 
resolution if that's the one resolution method with the most traps.


These are all very good suggestions towards additional conflict 
resolution mechanisms, that solve one or the other problem. As we have 
said for years now, one size will not fit all. What I am after for the 
moment is a system that supports by default a last update wins on the 
row level, where last update certainly is a little fuzzy, but not by 
minutes. Plus balance type columns. A balance column is not propagated 
as a new value, but as a delta between the old and the new value. All 
replicas will apply the delta to that column regardless of whether the 
replication info is newer or older than the existing row. That way, 
literal value type columns (like an address) will maintain cluster-wide 
the value of the last update to the row, while balance type columns will 
maintain cluster-wide the sum of all changes.
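
A tiny illustration of the balance-column idea, with made-up table and
numbers: the origin does not ship the new absolute balance, it ships the
delta, and every replica applies that delta locally:

-- on the origin the application runs, say:
UPDATE accounts SET balance = balance - 100.00 WHERE id = 42;

-- the replication system forwards (id = 42, delta = -100.00), and each
-- replica applies the same delta rather than the origin's final value:
UPDATE accounts SET balance = balance + (-100.00) WHERE id = 42;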


Whatever strategy one will use, in an async multimaster there are always 
cases that can be resolved by rules (last update being one of them), and 
some that I can't even imagine solving so far. I guess some of the cases 
will simply boil down to the application has to make sure that ... 
never occurs. Think of a multi-item order, created on one node, while 
another node is deleting the long unused item (which would have to be 
backordered). Now while those two nodes figure out what to do to make 
this consistent again, a third node does a partial shipment of that 
order. The solution is simple, reinsert the deleted item ... only that 
there were rather nasty ON DELETE CASCADE's on that item that removed 
all the consumer reviews, product descriptions, data sheets and what 
not. It's going to be an awful lot of undo.


I haven't really made up my mind about a user defined rule based 
conflict resolution interface yet. I do plan to have a unique and 
foreign key constraint based, synchronous advisory locking system on top 
of my system in a later version (advisory key locks would stay in place 
until the transaction, that placed them, replicates).


I guess you see by now why I wanted to keep the discussion about the 
individual, rather generic support features in the backend separate from 
the particular features I plan to implement in the replication system. 
Everyone has different needs and consequently an async multi-master 
must do a whole range of mutually exclusive things altogether ... 
because Postgres can never accept a partial solution. We want the egg 
laying milk-wool-pig or nothing.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [HACKERS] Proposal: TABLE functions

2007-02-06 Thread Jeremy Drake
On Tue, 6 Feb 2007, Pavel Stehule wrote:

 Hello,

 Currently PostgreSQL support set returning functions.

 ANSI SQL 2003 goes with new type of functions - table functions. With this
 syntax

 CREATE FUNCTION foo() RETURNS TABLE (c1 t1, ... )

 PostgreSQL equal statements are:

 CREATE TYPE tmptype AS (c1 t1, ...)
 CREATE FUNCTION ... RETURNS SETOF tmptype AS ...

or you can do
CREATE FUNCTION foo(OUT c1 t1, OUT ...) RETURNS SETOF record AS ...

But I think this would be nice, I think the OUT parameters make less sense
than saying RETURNS TABLE(...).  But what about functions not returning
SETOF?



--
The Schwine-Kitzenger Institute study of 47 men over the age of 100
showed that all had these things in common:

(1) They all had moderate appetites.
(2) They all came from middle class homes
(3) All but two of them were dead.



Re: [HACKERS] referential Integrity and SHARE locks

2007-02-06 Thread Marc Munro
On Tue, 2007-02-06 at 23:47 +0000, Gregory Stark wrote:
 Marc Munro [EMAIL PROTECTED] writes:
 
  Proposal 1: Alter the way RI triggers fire, so that they complete before
  locking the row against which they fire.
 
 It's kind of hard to know what records the user will choose to update before
 he actually does the update...

The RI triggers currently fire when a record is updated.  Under my
proposal they would fire in the same way but before the record is locked
rather than after.  Or am I missing your point?

  Proposal 2: Lock the index associated with the parent record, rather
  than the parent record itself.  
 
 That doesn't help in our case because each version of a record has an index
 entry. So even updates to unrelated fields imply index modifications. Worse,
 deleting and updating don't remove the old index entries so even if you've
 locked them you won't prevent people from doing exactly those operations
 you're trying to avoid.

I guess my proposal was incomplete.  Obviously, before deleting, or
updating an indexed column, a lock would have to be taken on the index.
I believe this would suffice to guarantee referential integrity without
blocking updates that leave the referred indexes unchanged.

What you say about each version of a record having an index entry
confuses me.  I thought there was one index entry that led to a chain
of tuples.  If this is not the case, I don't see how the current
exclusive locks on indexes work to enforce uniqueness.  Could you point
me to somewhere in the code or the documentation that explains this?

It still seems to me that if we can lock an index entry to guarantee
uniqueness, we can also lock it to implement RI constraints.

__
Marc






Re: [HACKERS] doxygen.postgresql.org

2007-02-06 Thread Luke Lonergan
Yay!

This rocks IMO, but I'm a borderline PHB so what do I know ;-)

- Luke


On 2/6/07 9:19 AM, Magnus Hagander [EMAIL PROTECTED] wrote:

 http://doxygen.postgresql.org is now set up for your browsing pleasure.
 It's synced to anoncvs once per day.
 
 //Magnus
 
 





Re: [HACKERS] referential Integrity and SHARE locks

2007-02-06 Thread Tom Lane
Marc Munro [EMAIL PROTECTED] writes:
 The RI triggers currently fire when a record is updated.  Under my
 proposal they would fire in the same way but before the record is locked
 rather than after.  Or am I missing your point?

IOW, some other transaction could update or delete the tuple meanwhile?
Doesn't seem very promising.

regards, tom lane



Re: [HACKERS] Proposal: TABLE functions

2007-02-06 Thread Joshua D. Drake
Pavel Stehule wrote:
 Hello,
 
 Currently PostgreSQL support set returning functions.
 
 ANSI SQL 2003 goes with new type of functions - table functions. With
 this syntax
 
 CREATE FUNCTION foo() RETURNS TABLE (c1 t1, ... )
 

Yeah this should be pretty easy because a table is just a composite
type. You can already do this:

CREATE TABLE foo (id bigint, first_name text);

CREATE FUNCTION foo() RETURNS SETOF foo ...
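
For completeness, a small self-contained sketch of that existing
SETOF-composite form, which is what the proposed RETURNS TABLE syntax
would map onto (table name and filter here are made up):

CREATE TABLE footab (a int, b int);

CREATE OR REPLACE FUNCTION foo(f integer) RETURNS SETOF footab AS $$
    SELECT a, b FROM footab WHERE a > f;
$$ LANGUAGE sql;

-- usage:
SELECT * FROM foo(10);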

 PostgreSQL equal statements are:
 
 CREATE TYPE tmptype AS (c1 t1, ...)
 CREATE FUNCTION ... RETURNS SETOF tmptype AS ...
 
 All necessary infrastructure is done. Implementation needs propably only
 small changes in parser.
 
 This feature doesn't need any changes in SQL functions. I expect so they
 will be more readable and consistent.
 
 CREATE OR REPLACE FUNCTION foo(f integer)
 RETURNS TABLE(a int, b int) AS
 $$
  SELECT a, b
    FROM footab
   WHERE a > f;
 $$ LANGUAGE sql;
 
 plpgpsql RETURN have to be enhanced for table expressions.
 
 CREATE OR REPLACE FUNCTION foo(f integer)
 RETURNS TABLE(a int, b int) AS -- they are not variables!
 $$
  BEGIN
RETURN TABLE(SELECT a, b  -- it's secure, a,b are not variables
 FROM footab
   WHERE a > f);
  END;
 $$ LANGUAGE plpgsql;
 
 RETURN NEXT can be used without changes. This feature doesn't allow
 combination of RETURN TABLE and RETURN NEXT statement.
 
 Table functions can have only IN arguments.
 
 Advances:
 * conformance with ansi sql 2003
 * less propability of colision varnames and colnames
 
 Regards
 Pavel Stehule
 
 
 
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] doxygen.postgresql.org

2007-02-06 Thread Bruce Momjian
Luke Lonergan wrote:
 Yay!
 
 This rocks IMO, but I'm a borderline PHB so what do I know ;-)

You think, Oh, this will make my developers more productive.  :-)

-- 
  Bruce Momjian [EMAIL PROTECTED]   http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [PATCHES] Fix database is ready race condition

2007-02-06 Thread Jim Nasby

On Feb 5, 2007, at 8:19 AM, Tom Lane wrote:

Simon Riggs [EMAIL PROTECTED] writes:

My suggestions would be
1. Database system has completed recovery and
2. Database system is ready to accept connections


The second was in fact the wording I had in mind, sorry for not being
clear.  As to the first, the question is whether a log message at that
specific point has any use.  It's not completion of recovery, exactly,
since we go through that spot whether it's a normal startup or recovery
(and there is another log message when we actually do any WAL replay).
AFAICS it's a fairly randomly chosen place in a long sequence of Things
That Must Happen.  Part of the reason Markus is seeing a race condition
is that this isn't the last thing done before the startup subprocess
exits --- see BootstrapMain.  So I'm for just getting rid of it.


It is useful to know if the database had to do recovery, though, and  
if it did do recovery, it would be useful to know how long it took if  
the subsequent startup took a real amount of time.


BTW, this is a real problem I've seen on a database with 500k entries  
in pg_class... it takes several minutes to finish starting after the  
'Postmaster is ready' message.

--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-02-06 Thread Jim Nasby

On Feb 5, 2007, at 12:53 PM, Andrew Hammond wrote:

On Jan 26, 2:38 pm, [EMAIL PROTECTED] (Tom Lane) wrote:

Rick Gigger [EMAIL PROTECTED] writes:

I thought that the following todo item just barely missed 8.2:
Allow a warm standby system to also allow read-only statements  
[pitr]


No, it's a someday-wishlist item; the work involved is not small.


Slony1 has supported log-shipping replication for about a year now. It
provides similar functionality.


Not really

1) It's not possible for a PITR 'slave' to fall behind to a state  
where it will never catch up, unless it's just on inadequate  
hardware. Same isn't true with slony.

2) PITR handles DDL seamlessly
3) PITR is *much* simpler to configure and maintain
--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] Modifying and solidifying contrib

2007-02-06 Thread Jim Nasby

On Feb 5, 2007, at 11:19 AM, Andrew Dunstan wrote:

Jim Nasby wrote:
There was also mention of having a means to tell pg_dump not to  
dump extensions...
What's the use case for that? What will we do if there are db  
objects that depend on some extensions? Given that there will be  
some uninstall support, this one seems less necessary.


In addition to Martijn's tsearch case, there's also PostGIS. And I  
believe this is a pretty big pain for them.


I really think we should approach this by not trying to do  
everything at once.


That's fine; I just wanted to point out the use case while lists were  
being made.

--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] 10 weeks to feature freeze (Pending Work)

2007-02-06 Thread Andrew Hammond

On 2/6/07, Jim Nasby [EMAIL PROTECTED] wrote:

On Feb 5, 2007, at 12:53 PM, Andrew Hammond wrote:
 On Jan 26, 2:38 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
 Rick Gigger [EMAIL PROTECTED] writes:
 I thought that the following todo item just barely missed 8.2:
 Allow a warm standby system to also allow read-only statements
 [pitr]

 No, it's a someday-wishlist item; the work involved is not small.

 Slony1 has supported log-shipping replication for about a year now. It
 provides similar functionality.

Not really

1) It's not possible for a PITR 'slave' to fall behind to a state
where it will never catch up, unless it's just on inadequate
hardware. Same isn't true with slony.


I imagine that there are ways to screw up WAL shipping too, but there
are plenty more ways to mess up slony.


2) PITR handles DDL seamlessly
3) PITR is *much* simpler to configure and maintain


4) You need 3 databases to do log shipping using slony1. An origin, a
subscriber which generates the logs and obviously the log-replica.

All of which is why I qualified my statement with similar.



Re: [HACKERS] Proposal: Commit timestamp

2007-02-06 Thread Jim Nasby
Something worth noting... the only places I've actually seen MM  
replication implemented, each master was in fact still responsible  
for its own set of data. It was essentially something that you could  
really do with Slony, if you could tolerate the extreme complexity  
that would be involved. It might well be worth focusing on that case  
first, before trying to come up with a perfect last-committed mechanism.


On Feb 5, 2007, at 5:20 AM, Zeugswetter Andreas ADI SD wrote:


I think you are completely ignoring practicability. Or are you saying,
that such a system exists and works for e.g. a loosly connected  
group of

laptop field agents that only sporadically have a connection to the
cluster.

I think Jan's definition gives a pragmatic solution to the problem,
and will be able to give good automatic conflict resolution.

It has downsides he stated, and cannot guarantee 100% correct  
automatic

conflict
resolution in case of connection loss, but I am quite sure you are not
able to do
better, without loosing yourself in theory.

e.g. assume all clocks vary by no more than 30 seconds when
disconnected, you can
require manual (or rule based) resolution to all conflicts that  
vary by

less than
1 minute.


--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] Proposal: Commit timestamp

2007-02-06 Thread Markus Schiltknecht

Hi,

Jan Wieck wrote:
Whatever strategy one will use, in an async multimaster there are always 
cases that can be resolved by rules (last update being one of them), and 
some that I can't even imagine solving so far. I guess some of the cases 
will simply boil down to the application has to make sure that ... 
never occurs. Think of a multi-item order, created on one node, while 
another node is deleting the long unused item (which would have to be 
backordered). Now while those two nodes figure out what to do to make 
this consistent again, a third node does a partial shipment of that 
order.


It helps to categorize these conflict types. There basically are:

* data conflicts: simple row data, i.e. update - update conflicts.

* uniqueness conflicts: two rows conflict because they'd violate a 
uniqueness constraint, i.e. insert - insert, update - insert or update - 
update.


* visibility conflicts: basically the remaining update - delete and 
delete - delete cases. But also SELECT FOR UPDATE candidates, etc... 
Everything having to do with a row not yet or no longer being visible 
to a transaction.


Your example certainly involves a visibility conflict (update - delete). 
Not even (sync) Postgres-R can guarantee consistency on the visibility 
level, i.e.  a first transaction's SELECT FOR UPDATE might not see some 
just recently committed transaction's newly inserted rows (because that 
one isn't replayed yet on the node, thus the transaction is working on 
an 'old' snapshot of the database state). Another simpler example: 
Postgres-R doesn't raise a serialization error on delete-delete 
conflicts, it simply deletes the row once, even if two transactions 
confirmed to have committed a transaction which deleted a row.


Luckily, most applications don't need that anyway, though.

The solution is simple, reinsert the deleted item ... 


..at which point timestamps certainly won't help :-)   Sorry, couldn't 
resist...


only that 
there were rather nasty ON DELETE CASCADE's on that item that removed 
all the consumer reviews, product descriptions, data sheets and what 
not. It's going to be an awful lot of undo.


Huh? Are you planning on aborting *parts* of a transaction? I didn't 
think about that, but my gut feeling is that you don't want to do that.


I haven't really made up my mind about a user defined rule based 
conflict resolution interface yet. I do plan to have a unique and 
foreign key constraint based, synchronous advisory locking system on top 
of my system in a later version (advisory key locks would stay in place 
until the transaction, that placed them, replicates).


You'd have to elaborate on that...

I guess you see by now why I wanted to keep the discussion about the 
individual, rather generic support features in the backend separate from 
the particular features I plan to implement in the replication system.


Sure. I know, discussions about replication can get endless, probably 
even are so by definition ;-)  But hey, they're fun!


Everyone has different needs and consequently an async multi-master 
must do a whole range of mutually exclusive things altogether ... 
because Postgres can never accept a partial solution. We want the egg 
laying milk-wool-pig or nothing.


Like the one which would result from a merge of such an async 
replication with a sync one? Imagine being able to choose between sync 
and async per transaction...


Regards

Markus

