Re: [HACKERS] SHMMAX seems entirely broken in OS X 10.4.2

2005-08-30 Thread Luke Lonergan
Tom,

On 8/29/05 9:37 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Did that.  Set shmall first, shmmax second, both together in one sysctl
 command; no joy anywhere.  Are you trying this on fully up-to-date
 Tiger?

Did you try the values I sent earlier?  If you set them both in
/etc/sysctl.conf and reboot, it should work.  We get 100% repeatability on
10.4.2 platforms.

- Luke



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Performance gain from reduction of GROUP BY memory

2005-08-30 Thread Simon Riggs
On Mon, 2005-08-29 at 20:25 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  I notice that Neil's patch regarding reducing the number of memory
  allocations during aggregation operations isn't mentioned. It was
  originally discussed in 8.0beta (2-3?) time.
 
  What happened there?
  - patch not committed in the end
  - committed but not mentioned, as a dropped item
  - committed but not mentioned, since part of a larger patch
 
 Are you speaking of these patches?

Yes, those look like the ones I mentioned.

Those seem to have a useful performance improvement?

At very least, the change in Aggregate function API should be mentioned,
no?

 2005-04-06 19:56  neilc
 
   * src/backend/utils/adt/: float.c, numeric.c: Apply the nodeAgg
   optimization to more of the builtin transition functions. This
   patch optimizes int2_sum(), int4_sum(), float4_accum() and
   float8_accum() to avoid needing to copy the transition function's
   state for each input tuple of the aggregate. In an extreme case
   (e.g. SELECT sum(int2_col) FROM table where table has a single
   column), it improves performance by about 20%. For more complex
   queries or tables with wider rows, the relative performance
   improvement will not be as significant.
 
 2005-04-04 19:50  neilc
 
   * src/backend/utils/adt/numeric.c: This patch changes
    int2_avg_accum() and int4_avg_accum() to use the nodeAgg performance
   hack Tom introduced recently. This means we can avoid copying the
   transition array for each input tuple if these functions are
   invoked as aggregate transition functions.
   
   To test the performance improvement, I created a 1 million row
   table with a single int4 column. Without the patch, SELECT avg(col)
   FROM table took about 4.2 seconds (after the data was cached); with
   the patch, it took about 3.2 seconds. Naturally, the performance
   improvement for a less trivial query (or a table with wider rows)
   would be relatively smaller.
 
 2005-03-12 15:25  tgl
 
   * contrib/intagg/int_aggregate.c,
   contrib/intagg/int_aggregate.sql.in, doc/src/sgml/xaggr.sgml,
   doc/src/sgml/xfunc.sgml, src/backend/executor/nodeAgg.c,
   src/backend/utils/adt/int8.c: Adjust the API for aggregate function
   calls so that a C-coded function can tell whether it is being used
   as an aggregate or not.  This allows such a function to avoid
   re-pallocing a pass-by-reference transition value; normally it
   would be unsafe for a function to scribble on an input, but in the
   aggregate case it's safe to reuse the old transition value.  Make
   int8inc() do this.  This gets a useful improvement in the speed of
   COUNT(*), at least on narrow tables (it seems to be swamped by I/O
   when the table rows are wide).  Per a discussion in early December
   with Neil Conway.  I also fixed int_aggregate.c to check this,
   thereby turning it into something approaching a supportable
   technique instead of being a crude hack.

I'll search CVS directly next time. Thanks.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Teodor Sigaev

Fixed in 8.0, 7.4 and 7.3 branches.

Tom Lane wrote:

Teodor Sigaev [EMAIL PROTECTED] writes:


http://www.sigaev.ru/gist/concur.pl
http://www.sigaev.ru/gist/concur.sh



BTW, these scripts seem to indicate that there's a GIST or
contrib/intarray problem in the 8.0 branch.  I was trying to use 'em
to test REL8_0_STABLE branch tip to verify my t_ctid chain backpatch,
and I pretty consistently see "Problem with update":

Start: parallel mode with 4 flows
Problem with update {77,77}:0 count:1 at concur.pl line 91.
Issuing rollback() for database handle being DESTROY'd without explicit 
disconnect().
Problem with update {43,24}:3 count:1 at concur.pl line 91.
Issuing rollback() for database handle being DESTROY'd without explicit 
disconnect().
Problem with update {43,43}:2 count:1 at concur.pl line 91.
Issuing rollback() for database handle being DESTROY'd without explicit 
disconnect().
1 flow finish. Stats: ni:75000 nu:1661 nd:216 nv:13(nf:3) nt:780
All flow finish; status: 255; elapsed time: 265.48 sec

Is this something that can be fixed for 8.0.4?

regards, tom lane


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Query Sampling

2005-08-30 Thread Simon Riggs
On Mon, 2005-08-29 at 14:58 -0700, Varun Kacholia wrote:
   I assume you realise that Bernoulli sampling is currently possible using
  the random() function and setseed()?
 Yes, select * from table where random() < x does the job.
  
  I can't see why TABLESAMPLE affects a sequential scan *only*, in all
  cases. I agree that there seems little point in sampling rows from a
  table when it is already sufficiently restricted that the query could
  use an index.
  AFAICS this clause would potentially affect Index and Bitmap scans also,
  and would be required for full correctness to the standard.
 
 As I see it, there are 3 ways of implementing the sample operator:
 1. modify node[Seq|Tid|Index|..]scan.c to consider sampling
 2. create new nodes for each of the possible scans: sequential, index, tid et al.
 3. support sequential scan only for sampling.
 
 (1) does not seem to be attractive, while (2) is a lot of work to
 begin with. I was planning to start with (3) and move toward (2) in the
 long run.
 I would appreciate your opinion on this.

IMHO creating new nodes just for sampling would duplicate too much code.
To me, the Bernoulli sampling sounds like 2-3 carefully placed
statements in the executor nodes and a couple of additions to the node
data structures. (As well as logic in the parser). 

Sounds like you would be better off prototyping something for sequential
scans. If you can get Bernoulli working, you can move on to get SYSTEM
working - which needs a deeper reach into the guts of the block request
logic.
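For the Bernoulli route that already works today, a minimal SQL sketch (the table name and sampling fraction here are illustrative, not from the thread):

```sql
-- Keep each row with probability 0.01 (a ~1% Bernoulli-style sample).
-- setseed() fixes the pseudo-random sequence so the sample is
-- repeatable within the session.
SELECT setseed(0.42);
SELECT * FROM orders WHERE random() < 0.01;
```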

We might be able to get away with the thought that SYSTEM sampling will
actually use BERNOULLI sampling when an index or bitmap scan is used.
That would give us standards compliant behaviour without too much effort
(since that effort is essentially wasted, as previously observed).

Best Regards, Simon Riggs



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Mario Weilguni
Am Dienstag, 30. August 2005 11:25 schrieb Teodor Sigaev:
 Fixed in 8.0, 7.4 and 7.3 branches.

 Tom Lane wrote:
  Teodor Sigaev [EMAIL PROTECTED] writes:
 http://www.sigaev.ru/gist/concur.pl
 http://www.sigaev.ru/gist/concur.sh
 
  BTW, these scripts seem to indicate that there's a GIST or
  contrib/intarray problem in the 8.0 branch.  I was trying to use 'em
  to test REL8_0_STABLE branch tip to verify my t_ctid chain backpatch,
  and I pretty consistently see "Problem with update":
 
  Start: parallel mode with 4 flows
  Problem with update {77,77}:0 count:1 at concur.pl line 91.
  Issuing rollback() for database handle being DESTROY'd without explicit
  disconnect(). Problem with update {43,24}:3 count:1 at concur.pl line 91.
  Issuing rollback() for database handle being DESTROY'd without explicit
  disconnect(). Problem with update {43,43}:2 count:1 at concur.pl line 91.
  Issuing rollback() for database handle being DESTROY'd without explicit
  disconnect(). 1 flow finish. Stats: ni:75000 nu:1661 nd:216 nv:13(nf:3)
  nt:780 All flow finish; status: 255; elapsed time: 265.48 sec
 
  Is this something that can be fixed for 8.0.4?
 
  regards, tom lane

Since 7.4 we have had troubles with ltree (occasional corruption of the buffer
cache, not on-disk); might this bug be somehow related to the ltree problem?
7.2 was rock-stable with ltree.

Best regards,
Mario Weilguni

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Teodor Sigaev
Since 7.4 we have troubles with ltree (seldom corruption of buffer cache, not 
on-disk), might this bug be somehow related to the ltree problem?

7.2 was rock-stable with ltree.


Not sure. The fixed bug was (@ is the contains operator):

update wow set a = a || '{101}'::int[] where a @ '{1,2,3}';
select a from wow where a @ '{1,2,3}' and not a @ '{101}';

After the update, the select must not find any rows, but it did. The problem
was in the GiST code, so any GiST indexes were affected.


Can you say more about your trouble?


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Mario Weilguni
Am Dienstag, 30. August 2005 12:19 schrieb Teodor Sigaev:
  Since 7.4 we have troubles with ltree (seldom corruption of buffer cache,
  not on-disk), might this bug be somehow related to the ltree problem? 7.2
  was rock-stable with ltree.

 Not sure. The fixed bug was (@ is the contains operator):

 update wow set a = a || '{101}'::int[] where a @ '{1,2,3}';
 select a from wow where a @ '{1,2,3}' and not a @ '{101}';

 After the update, the select must not find any rows, but it did. The problem
 was in the GiST code, so any GiST indexes were affected.

 Can you say more about your trouble?

We have queries that use ltree for sorting too, the sort looks like this:
  order by subpath(ltreefield, 0, nlevel(ltreefield) - 1)

But concurrent access triggers a bug that results in an SQL error:
ERROR:  invalid positions

Now we use locking to prevent concurrent access on the most-used concurrent
part of the program, and the problem is extremely rare now (I had only 4
occurrences in one year), but it still happens (there are other access paths
that do not use locking, but they are rarely accessed).

It seems the ltree length parameter is set to 0 in the tuples, while the
content itself is still there. Example:
Say the tuple was treefield='1.2.3.4.5' before.
After the occurrence of the error, I get: treefield='' (empty, but not null)

Using a tool Tom Lane told me to use, I checked it, and on disk I still had
1.2.3.4.5, but the length parameter of the ltree column was 0 (sorry, I was
wrong in my first mail; the on-disk data was broken too).

Might this be somehow related to the intarray bugs?

Best regards,
Mario Weilguni


p.s.: I tried hard to create a self-contained test for tracking this down, but 
failed. 


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] SHMMAX seems entirely broken in OS X 10.4.2

2005-08-30 Thread Dave Cramer

This is from my Powerbook

in /etc/rc

sysctl -w kern.sysv.shmmax=512 kern.sysv.shmmin=1 kern.sysv.shmmni=32 kern.sysv.shmseg=8 kern.sysv.shmall=1024


I also had to up maxprocperuid to 200 to get buildfarm to run

I'm pretty sure shmall had to be increased to allow shmmax to be increased.



Dave
On 30-Aug-05, at 12:37 AM, Tom Lane wrote:


Jeff - [EMAIL PROTECTED] writes:


On Aug 29, 2005, at 8:18 PM, Tom Lane wrote:


Has anyone been able to set kern.sysv.shmmax above 4MB at all in
latest OS X?





yeah, you need to set shmmax and shmall.



Did that.  Set shmall first, shmmax second, both together in one sysctl
command; no joy anywhere.  Are you trying this on fully up-to-date
Tiger?

regards, tom lane

---(end of  
broadcast)---

TIP 2: Don't 'kill -9' the postmaster





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] dangling lock information?

2005-08-30 Thread Hannu Krosing
On E, 2005-08-29 at 13:09 +0200, Andreas Pflug wrote:
 Hannu Krosing wrote:
 
 On P, 2005-08-28 at 22:23 +0200, Andreas Pflug wrote:
   
 
 I'm currently testing pgAdmin support for slony, on pgsql CVS HEAD, and 
 encounter strange problems from time to time.
 After dropping and recreating the slony schema, all changes committed 
 and all backends in IDLE state, I'm getting
"relation with OID xxx does not exist" when I'm trying to add a path. 
 This seems to be triggered inside slony functions when a
 LOCK _test.pg_config IN EXCLUSIVE MODE
 is performed.
 The problem is gone as soon as I close the connection I've been using 
 for prior schema changes, and use a fresh connection.
 
 Does this description ring a bell for somebody?
 
 
 
  seems like the usual "pl/pgsql caches query plans and the relation
  referenced inside the cached plan is gone" thing
   
 
 Kind of, but the complete schema including procedures was dropped, so 
 apparently after recreation the old plans were reused?!?

In that case this should probably be asked at slony list.

Added to CC.

-- 
Hannu Krosing [EMAIL PROTECTED]


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Teodor Sigaev

We have queries that use ltree for sorting too, the sort looks like this:
  order by subpath(ltreefield, 0, nlevel(ltreefield) - 1)

But concurrent access triggers a bug that results in an SQL error:
ERROR:  invalid positions


contrib_regression=# select nlevel(t), subpath(t, 0, nlevel(t)-1) from 
text2ltree('1.2.3.4.5') as t;

 nlevel | subpath
--------+---------
      5 | 1.2.3.4
(1 row)

contrib_regression=# select nlevel(t), subpath(t, 0, nlevel(t)-1) from 
text2ltree('1') as t;

 nlevel | subpath
--------+---------
      1 |
(1 row)

contrib_regression=# select nlevel(t), subpath(t, 0, nlevel(t)-1) from 
text2ltree('') as t;

ERROR:  invalid positions
contrib_regression=#

So the error just means subpath() was called with incorrect arguments (for an
empty ltree, nlevel(t) - 1 is -1).



It seems the ltree length parameter is set to 0 in the tuples, while the
content itself is still there. Example:

Say the tuple was treefield='1.2.3.4.5' before.
After the occurrence of the error, I get: treefield='' (empty, but not null)

Using a tool Tom Lane told me to use, I checked it, and on-disk I had still 
1.2.3.4.5, but the length parameter of the ltree column was 0 (sorry, I was 
wrong in my first mail, on-disk was broken too.)


Interesting... but without a test suite or more information I'm helpless.
How often do updates/inserts of the table and/or the ltree column occur? Vacuum?




Might this be somehow related to the intarray bugs?


No, except in the case where you update your table with something like:
update  tbl set ltreefield=... where ltreefield ...;

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] PG_DUMp

2005-08-30 Thread esolsona

Hi all,

Do you know how I can make a backup of a database automatically? I have a script pg_dump.exe DBNAME -U USER > FILE but I don't know how to supply the password. If I put -W password, the command line says that I passed more parameters than the program accepts.

Best Regards



Re: [HACKERS] Intermittent stats test failures on buildfarm

2005-08-30 Thread Kris Jurka



On Tue, 30 Aug 2005, Tom Lane wrote:


What we are left with turns out to be multiple occurrences of the first
pathology on exactly three buildfarm members:

ferret  Cygwin
kuduSolaris 9, x86
dragonfly   Solaris 9, x86

So what to make of this?  Dunno, but it is clearly a very
platform-specific behavior.  Anyone see a connection between Cygwin
and Solaris?



One thing to note about kudu and dragonfly is that they are running under
vmware.  This, combined with cygwin's reputation, makes me suspect that
the connection is that they are both struggling under load, although
canary (NetBSD 1.6 x86) is set up in the same fashion and has shown no
such failures.


I'm also in the process of moving, so I put this machine in a box last 
night and it won't be up and running for a week or two.  I do have very 
similar copies of the OS image running on other machines if you'd like me 
to test something specific.


Kris Jurka

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] PG_DUMp

2005-08-30 Thread Andrew Dunstan



[EMAIL PROTECTED] wrote:



Hi all,

Do you know how I can make a backup of a database automatically? I have a
script pg_dump.exe DBNAME -U USER > FILE but I don't know how to supply
the password. If I put -W password, the command line says that I passed
more parameters than the program accepts.





You have asked this question on the wrong list - please ask questions 
like this on the pgsql-general list.


If you look up the documentation on how to use a pgpass file you should 
discover an answer to your question.
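For reference, the file Andrew means is libpq's password file: ~/.pgpass on Unix (pgpass.conf under Windows). A hypothetical entry looks like this:

```
# ~/.pgpass  (format: hostname:port:database:username:password;
#  * may be used as a wildcard in the first four fields)
localhost:5432:mydb:myuser:secret
```

The file must be readable only by its owner (chmod 0600 on Unix), otherwise libpq ignores it; with it in place, pg_dump runs without prompting for a password.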


cheers

andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] SHMMAX seems entirely broken in OS X 10.4.2

2005-08-30 Thread Jeff Trout


On Aug 30, 2005, at 12:37 AM, Tom Lane wrote:



Did that.  Set shmall first, shmmax second, both together in one sysctl
command; no joy anywhere.  Are you trying this on fully up-to-date
Tiger?



Just ran software update and (besides a couple apps) it had a  
security update.


skittlebrau:~ postgres$ grep shm /etc/rc
# sysctl -w kern.sysv.shmmax=4194305 kern.sysv.shmmin=1 kern.sysv.shmmni=32 kern.sysv.shmseg=8 kern.sysv.shmall=1024

sysctl -w kern.sysv.shmmax=335544320
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=16
sysctl -w kern.sysv.shmall=327680

skittlebrau:~ postgres$ sysctl -a | grep shm
kern.sysv.shmmax: 335544320
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 16
kern.sysv.shmall: 327680

good luck.

--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [Slony1-general] Re: [HACKERS] dangling lock information?

2005-08-30 Thread David Parker
The slony log trigger saves execution plans, so any given connection
that has been used with a slony schema installed will have cached OIDs
referring to the sl_log_1 table. When you drop the schema, those OIDs
obviously go away. When you re-create the schema, and try to use the old
connection, it still has the old plan cached in it, so the OIDs in the
plan are out of sync with what actually exists in the database.

This is the behavior I've observed in our environment, anyway. The
problem always shows up when slony is RE-installed under an outstanding
connection.
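The failure mode David describes can be sketched with plain pl/pgsql on the 8.0-era servers discussed here (names are hypothetical; later PostgreSQL releases invalidate such cached plans automatically):

```sql
CREATE TABLE log_tbl (id int);

-- The INSERT's plan, including log_tbl's OID, is cached on first call.
CREATE FUNCTION add_log() RETURNS void AS '
BEGIN
    INSERT INTO log_tbl VALUES (1);
    RETURN;
END;' LANGUAGE plpgsql;

SELECT add_log();               -- caches a plan referencing the old OID

DROP TABLE log_tbl;
CREATE TABLE log_tbl (id int);  -- same name, different OID

SELECT add_log();               -- in the same session this fails:
                                -- ERROR: relation with OID xxx does not exist
```

A fresh connection (or recreating the function) starts with no cached plan, which is why reconnecting makes the problem disappear.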

- DAP 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hannu
Krosing
Sent: Tuesday, August 30, 2005 7:28 AM
To: Andreas Pflug
Cc: slony1-general@gborg.postgresql.org; PostgreSQL-development
Subject: [Slony1-general] Re: [HACKERS] dangling lock information?

On E, 2005-08-29 at 13:09 +0200, Andreas Pflug wrote:
 Hannu Krosing wrote:
 
 On P, 2005-08-28 at 22:23 +0200, Andreas Pflug wrote:
   
 
 I'm currently testing pgAdmin support for slony, on pgsql CVS HEAD, 
 and encounter strange problems from time to time.
 After dropping and recreating the slony schema, all changes 
  committed and all backends in IDLE state, I'm getting "relation 
  with OID xxx does not exist" when I'm trying to add a path.
 This seems to be triggered inside slony functions when a
 LOCK _test.pg_config IN EXCLUSIVE MODE is performed.
 The problem is gone as soon as I close the connection I've been 
 using for prior schema changes, and use a fresh connection.
 
 Does this description ring a bell for somebody?
 
 
 
  seems like the usual "pl/pgsql caches query plans and the relation 
  referenced inside the cached plan is gone" thing
   
 
 Kind of, but the complete schema including procedures was dropped, so 
 apparently after recreation the old plans were reused?!?

In that case this should probably be asked at slony list.

Added to CC.

--
Hannu Krosing [EMAIL PROTECTED]

___
Slony1-general mailing list
Slony1-general@gborg.postgresql.org
http://gborg.postgresql.org/mailman/listinfo/slony1-general

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] ALTER TABLE ( smallinto - boolean ) ...

2005-08-30 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 You could do something like
 
 ... type boolean using case when field1=0 then false else true end;

Or you could save typing and just use USING field1 <> 0

Odd that everyone did a CASE for that.
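Spelled out against a hypothetical table t with a smallint column field1, the two variants from this thread are:

```sql
-- The CASE form everyone reached for:
ALTER TABLE t ALTER COLUMN field1 TYPE boolean
    USING CASE WHEN field1 = 0 THEN false ELSE true END;

-- Greg's shorter equivalent:
ALTER TABLE t ALTER COLUMN field1 TYPE boolean USING field1 <> 0;
```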


-- 
greg


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Intermittent stats test failures on buildfarm

2005-08-30 Thread Tom Lane
Kris Jurka [EMAIL PROTECTED] writes:
 On Tue, 30 Aug 2005, Tom Lane wrote:
 What we are left with turns out to be multiple occurrences of the first
 pathology on exactly three buildfarm members:
 
 ferret   Cygwin
 kudu Solaris 9, x86
 dragonflySolaris 9, x86
 
 So what to make of this?  Dunno, but it is clearly a very
 platform-specific behavior.  Anyone see a connection between Cygwin
 and Solaris?

 One thing to note about kudu and dragonfly is that they are running under 
 vmware.  This, combined with cygwin's reputation, makes me suspect that 
 the connection is that they are both struggling under load, although 
 canary (NetBSD 1.6 x86) is set up in the same fashion and has shown no 
 such failures.

Hmm.  One pretty obvious explanation of the failure is simply that the
machine is so loaded that the stats collector doesn't get to run for a
few seconds.  I had dismissed this idea because I figured the buildfarm
machine owners would schedule the tests to run at relatively low-load
times of day ... but maybe that's not true on these two machines?

We could try increasing the delay in the stats test, say from two
seconds to five.  If it is just a matter of load, that should result
in a very large drop in the frequency of the failure.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: VACUUM/t_ctid bug (was Re: [HACKERS] GiST concurrency commited)

2005-08-30 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
 Is this something that can be fixed for 8.0.4?

 Fixed in 8.0, 7.4 and 7.3 branches.

Excellent news.  Thanks.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Performance gain from reduction of GROUP BY memory

2005-08-30 Thread Bruce Momjian
Simon Riggs wrote:
 On Mon, 2005-08-29 at 20:25 -0400, Tom Lane wrote:
  Simon Riggs [EMAIL PROTECTED] writes:
   I notice that Neil's patch regarding reducing the number of memory
   allocations during aggregation operations isn't mentioned. It was
   originally discussed in 8.0beta (2-3?) time.
  
   What happened there?
   - patch not committed in the end
   - committed but not mentioned, as a dropped item
   - committed but not mentioned, since part of a larger patch
  
  Are you speaking of these patches?
 
 Yes, those look like the ones I mentioned.
 
 Those seem to have a useful performance improvement?
 
 At very least, the change in Aggregate function API should be mentioned,
 no?

  2005-03-12 15:25  tgl
  
  * contrib/intagg/int_aggregate.c,
  contrib/intagg/int_aggregate.sql.in, doc/src/sgml/xaggr.sgml,
  doc/src/sgml/xfunc.sgml, src/backend/executor/nodeAgg.c,
  src/backend/utils/adt/int8.c: Adjust the API for aggregate function
  calls so that a C-coded function can tell whether it is being used
  as an aggregate or not.  This allows such a function to avoid
  re-pallocing a pass-by-reference transition value; normally it
  would be unsafe for a function to scribble on an input, but in the
  aggregate case it's safe to reuse the old transition value.  Make
  int8inc() do this.  This gets a useful improvement in the speed of
  COUNT(*), at least on narrow tables (it seems to be swamped by I/O
  when the table rows are wide).  Per a discussion in early December
  with Neil Conway.  I also fixed int_aggregate.c to check this,
  thereby turning it into something approaching a supportable
  technique instead of being a crude hack.

I don't usually document internal API changes in the release notes. 
Should I?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1beta, SunOS and shmget

2005-08-30 Thread Thomas F. O'Connell


On Aug 29, 2005, at 12:41 PM, Tom Lane wrote:


Alvaro Herrera [EMAIL PROTECTED] writes:


On Mon, Aug 29, 2005 at 11:30:46AM -0400, Tom Lane wrote:


20 buffers ... ugh.  Obviously we are on the hairy edge of no longer
functioning at all in 1MB shared memory.  I'm not sure there is a whole
lot we can do about this, but it's a tad irritating that clog, subtrans,
and multixact are eating the equivalent of about 16 buffers
(nonconfigurable) while the main buffer pool is so badly starved.





8 buffers each, I think, no?  That's 32 buffers total.



You're right; I was thinking that NUM_SLRU_BUFFERS was 4, but I see it's
now 8.  Did we bump that up on the basis of any solid evidence?  There's
256K of shared memory going into those four dedicated buffer areas,
which is kind of a lot when you're hoping to fit into 1MB.

I just finished going through the initialization sequence to trace the
calculation of shared memory size, and what I find in CVS tip is that
it works out like this:

shared_buffers * 8314
max_connections * (217.68 * max_locks_per_transaction + 356)
max_prepared_transactions * (217.68 * max_locks_per_transaction + 576)
wal_buffers * 8192
max_fsm_relations * 70
max_fsm_pages * 6
plus about 500K fixed space

(These numbers are on a 32-bit machine; some of the multipliers would be
a little higher on 64-bit.)

The formula given in the docs doesn't seem to have been updated since 7.2:

250 kB + 8.2 kB * shared_buffers + 14.2 kB * max_connections

Most of the bloat since then seems to be accounted for by 2PC and the
addition of subtrans and multixact buffers.
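Plugging illustrative (non-default) settings into the formula above gives a feel for the totals; 217.68 is rounded up to 218 so plain shell arithmetic suffices:

```shell
# Rough shared-memory estimate (bytes) from the CVS-tip formula above.
# The settings below are made-up examples, not PostgreSQL defaults.
shared_buffers=1000
max_connections=50
max_prepared_transactions=5
max_locks_per_transaction=64
wal_buffers=8
max_fsm_relations=1000
max_fsm_pages=20000

total=$(( shared_buffers * 8314 ))
total=$(( total + max_connections * (218 * max_locks_per_transaction + 356) ))
total=$(( total + max_prepared_transactions * (218 * max_locks_per_transaction + 576) ))
total=$(( total + wal_buffers * 8192 ))
total=$(( total + max_fsm_relations * 70 + max_fsm_pages * 6 + 500 * 1024 ))
echo "$(( total / 1024 )) kB"    # prints "9638 kB" for these settings
```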



Maybe we could make them allocate them automatically based on
shared_buffers, with a ceiling of 8?



Seems like it'd be reasonable to skinny down the number of dedicated
buffers when shared_buffers is tiny, but I'm not sure about the
particular equation to use.

regards, tom lane


Should the new formulation be sent to pgsql-docs? This looks like it
could be worked into a patch pretty easily. Seems like it would make
sense to update the docs for 8.1...


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Performance gain from reduction of GROUP BY memory

2005-08-30 Thread Alvaro Herrera
On Tue, Aug 30, 2005 at 10:23:49AM -0400, Bruce Momjian wrote:

   2005-03-12 15:25  tgl
   
 * contrib/intagg/int_aggregate.c,
 contrib/intagg/int_aggregate.sql.in, doc/src/sgml/xaggr.sgml,
 doc/src/sgml/xfunc.sgml, src/backend/executor/nodeAgg.c,
 src/backend/utils/adt/int8.c: Adjust the API for aggregate function
 calls so that a C-coded function can tell whether it is being used
 as an aggregate or not.  This allows such a function to avoid
 re-pallocing a pass-by-reference transition value; normally it
 would be unsafe for a function to scribble on an input, but in the
 aggregate case it's safe to reuse the old transition value.  Make
 int8inc() do this.  This gets a useful improvement in the speed of
 COUNT(*), at least on narrow tables (it seems to be swamped by I/O
 when the table rows are wide).  Per a discussion in early December
 with Neil Conway.  I also fixed int_aggregate.c to check this,
 thereby turning it into something approaching a supportable
 technique instead of being a crude hack.
 
 I don't usually document internal API changes in the release notes. 
 Should I?

Doesn't this potentially affect user-defined aggregates?

-- 
Alvaro Herrera alvherre[]alvh.no-ip.org  Architect, www.EnterpriseDB.com
Lo esencial es invisible para los ojos (A. de Saint Exúpery)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Performance gain from reduction of GROUP BY memory

2005-08-30 Thread Bruce Momjian
Alvaro Herrera wrote:
 On Tue, Aug 30, 2005 at 10:23:49AM -0400, Bruce Momjian wrote:
 
   2005-03-12 15:25  tgl

   * contrib/intagg/int_aggregate.c,
   contrib/intagg/int_aggregate.sql.in, doc/src/sgml/xaggr.sgml,
   doc/src/sgml/xfunc.sgml, src/backend/executor/nodeAgg.c,
   src/backend/utils/adt/int8.c: Adjust the API for aggregate function
   calls so that a C-coded function can tell whether it is being used
   as an aggregate or not.  This allows such a function to avoid
   re-pallocing a pass-by-reference transition value; normally it
   would be unsafe for a function to scribble on an input, but in the
   aggregate case it's safe to reuse the old transition value.  Make
   int8inc() do this.  This gets a useful improvement in the speed of
   COUNT(*), at least on narrow tables (it seems to be swamped by I/O
   when the table rows are wide).  Per a discussion in early December
   with Neil Conway.  I also fixed int_aggregate.c to check this,
   thereby turning it into something approaching a supportable
   technique instead of being a crude hack.
  
  I don't usually document internal API changes in the release notes. 
  Should I?
 
 Doesn't this potentially affect user-defined aggregates?

I read it as something that _could_ be used by user-defined aggregates,
but not something that would require a changes to a user-defined
aggregate.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Performance gain from reduction of GROUP BY memory

2005-08-30 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 Alvaro Herrera wrote:
 On Tue, Aug 30, 2005 at 10:23:49AM -0400, Bruce Momjian wrote:
 I don't usually document internal API changes in the release notes. 
 Should I?
 
 Doesn't this potentially affect user-defined aggregates?

 I read it as something that _could_ be used by user-defined aggregates,
 but not something that would require changes to an existing user-defined
 aggregate.

I tend to agree with Bruce here.  If he documented changes of this size
the release notes would be twice as long and even fewer people would
read them.

regards, tom lane

---(end of broadcast)---


Re: [HACKERS] SHMMAX seems entirely broken in OS X 10.4.2

2005-08-30 Thread Luke Lonergan
Jeff,

On 8/30/05 5:28 AM, Jeff Trout [EMAIL PROTECTED] wrote:

 Just ran software update and (besides a couple apps) it had a
 security update.

I just did that to test this.

 skittlebrau:~ postgres$ grep shm /etc/rc

Luke-Lonergans-Computer:~ lukelonergan$ cat /etc/sysctl.conf
kern.sysv.shmall=327680
kern.sysv.shmmax=335544320
#kern.sysv.shmall=268435456
#kern.sysv.shmmax=268435456

 skittlebrau:~ postgres$ sysctl -a | grep shm

Luke-Lonergans-Computer:~ lukelonergan$ sysctl -a | grep shm
kern.sysv.shmmax: 335544320
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 327680

Thx for the tip.

I also tried different settings (shmall set to 500K) and shmmax ended up at
-1.  So, it seems that shmall is in pages.  I didn't try other allowable
configs, which was a problem before.

- Luke



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Chris Browne
[EMAIL PROTECTED] (David Parker) writes:
 The slony log trigger saves execution plans, so any given connection
 that has been used with a slony schema installed will have cached OIDs
 referring to the sl_log_1 table. When you drop the schema, those OIDs
 obviously go away. When you re-create the schema, and try to use the old
 connection, it still has the old plan cached in it, so the OIDs in the
 plan are out of sync with what actually exists in the database.

 This is the behavior I've observed in our environment, anyway. The
 problem always shows up when slony is RE-installed under an outstanding
 connection.

I have observed much the same behaviour...

It would be really useful to have some guidance as to how to resolve
this.

What is needed is to invalidate the cached execution plans.

Unfortunately, it's not at all obvious how to accomplish that :-(.

Alas, any time I touch the SPI code in other than relatively trivial
ways, it falls over and croaks :-(.
-- 
let name = "cbbrowne" and tld = "cbbrowne.com" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/linuxdistributions.html
One good turn gets most of the blankets. 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] 8.1beta, SunOS and shmget

2005-08-30 Thread Tom Lane
Thomas F. O'Connell [EMAIL PROTECTED] writes:
 On Aug 29, 2005, at 12:41 PM, Tom Lane wrote:
 I just finished going through the initialization sequence to trace the
 calculation of shared memory size, and what I find in CVS tip is that
 it works out like this:

 Should the new formulation be sent to pgsql-docs? This looks like it  
 could be worked into a patch pretty easily. Seems like it would make  
 sense to update the docs for 8.1...

Already done.
http://developer.postgresql.org/docs/postgres/kernel-resources.html#SYSVIPC

regards, tom lane

---(end of broadcast)---


Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Tom Lane
Chris Browne [EMAIL PROTECTED] writes:
 What is needed is to invalidate the cached execution plans.

Yeah.  This is not really Slony's fault --- we need a general solution
to that in the backend.  I think Neil was working on it, but I dunno
how far along he is.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] problems installing pgsql

2005-08-30 Thread Ricardo Gamero
Hello everybody!

I'm sorry to post such a simple question, but I don't know what to do. I need to install PostgreSQL 8.0.3 on Red Hat 9, but when I try, these errors appear:

[EMAIL PROTECTED] root]# rpm -ivh postgresql-server-8.0.3-1PGDG.i686.rpm
warning: postgresql-server-8.0.3-1PGDG.i686.rpm: V3 DSA signature: NOKEY, key ID 748f7d0e
error: Failed dependencies:
        libcrypto.so.4 is needed by postgresql-server-8.0.3-1PGDG
        libpq.so.4 is needed by postgresql-server-8.0.3-1PGDG
        libssl.so.4 is needed by postgresql-server-8.0.3-1PGDG
        postgresql = 8.0.3 is needed by postgresql-server-8.0.3-1PGDG
        postgresql  7.4 conflicts with postgresql-server-8.0.3-1PGDG
I've tried installing this packages:

postgresql72-libs-1-3.i386.rpm
postgresql-libs-7.1.3-5.72.i386.rpm 
but I receive similar dependency errors, for example:

[EMAIL PROTECTED] root]# rpm -ivh postgresql-libs-7.1.3-5.72.i386.rpm
Preparing... ### [100%]
        package postgresql-libs-7.2.3-5.80 (which is newer than postgresql-libs-7.1.3-5.72) is already installed
        file /usr/lib/libecpg.so.3 from install of postgresql-libs-7.1.3-5.72 conflicts with file from package postgresql-libs-7.2.3-5.80
        file /usr/lib/libpgeasy.so.2 from install of postgresql-libs-7.1.3-5.72 conflicts with file from package postgresql-libs-7.2.3-5.80
        file /usr/lib/libpq.so.2 from install of postgresql-libs-7.1.3-5.72 conflicts with file from package postgresql-libs-7.2.3-5.80
        file /usr/lib/libpq.so.2.0 from install of postgresql-libs-7.1.3-5.72 conflicts with file from package postgresql-libs-7.2.3-5.80

I have also tried to download the openssl package, because I read somewhere that it is what I need, but I can't find the RPM package.


I'd really appreciate your help. Thank you so much!



Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Alvaro Herrera
On Tue, Aug 30, 2005 at 12:45:18PM -0400, Chris Browne wrote:
 [EMAIL PROTECTED] (David Parker) writes:
  The slony log trigger saves execution plans, so any given connection
  that has been used with a slony schema installed will have cached OIDs
  referring to the sl_log_1 table. When you drop the schema, those OIDs
  obviously go away. When you re-create the schema, and try to use the old
  connection, it still has the old plan cached in it, so the OIDs in the
  plan are out of sync with what actually exists in the database.
 
  This is the behavior I've observed in our environment, anyway. The
  problem always shows up when slony is RE-installed under an outstanding
  connection.
 
 I have observed much the same behaviour...
 
 It would be really useful to have some guidance as to how to resolve
 this.
 
 What is needed is to invalidate the cached execution plans.

The simplest way to do that is to disconnect the client, and start a
fresh session.

 Unfortunately, it's not at all obvious how to accomplish that :-(.

I don't think it can be easily done with the current code.  This is
plpgsql code, right?  There are some ways to cause recompilation for
those, at least on the 8.1 code I'm looking at.

-- 
Alvaro Herrera alvherre[]alvh.no-ip.org  Architect, www.EnterpriseDB.com
Si quieres ser creativo, aprende el arte de perder el tiempo

---(end of broadcast)---


Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Neil Conway

Tom Lane wrote:

Yeah.  This is not really Slony's fault --- we need a general solution
to that in the backend.  I think Neil was working on it, but I dunno
how far along he is.


Yeah, I had wanted to get this into 8.1, but I couldn't find time. I 
still plan to work on it for 8.2, unless someone beats me to it.


-Neil

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Intermittent stats test failures on buildfarm

2005-08-30 Thread Rocco Altier
Also, kookaburra (AIX) has a problem with the stats test as well.

What is most puzzling to me is that it only happens with cc (not gcc),
and only when the buildfarm runs from a cronjob.  If I run the build
script interactively from the command line, the stats collector runs
fine.

The environment under cron is not significantly different from the
command line, so I am at a bit of a loss as to the reason why.

Any thoughts?

-rocco

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
 Sent: Tuesday, August 30, 2005 12:31 AM
 To: pgsql-hackers@postgreSQL.org
 Subject: [HACKERS] Intermittent stats test failures on buildfarm
 
 
 I just spent a tedious hour digging through the buildfarm results
 to see what I could learn about the intermittent failures we're seeing
 in the stats regression test, such as here:
 http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=ferret&dt=2005-05-29%2018:25:09
 This is seen in both Check and InstallCheck steps.  A variant 
 pathology
 is seen here:
 http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=gerbil&dt=2005-07-22%2007:58:01
 Notice that only the heap stats columns are wrong in this 
 case, not the
 index stats.  I think that this variant behavior may have 
 been fixed by
 this patch:
 
 2005-07-23 20:33  tgl
 
   * src/backend/postmaster/pgstat.c: Fix some failures to 
 initialize
   table entries induced by recent autovacuum integration. 
  Not clear
   this explains recent stats problems, but it's definitely wrong.
 
 but it's not certain since nobody traced through the code to exhibit
 why those uninitialized table entries would have led to this 
 particular
 visible symptom.  But with no occurrences of that behavior since the
 patch went in, I suspect it's fixed.
 
 What we are left with turns out to be multiple occurrences of 
 the first
 pathology on exactly three buildfarm members:
 
   ferret  Cygwin
   kuduSolaris 9, x86
   dragonfly   Solaris 9, x86
 
 There are no occurrences of the failure on the native-Windows 
 machines,
 nor on buzzard (Solaris 10, SPARC), nor on gerbil (Solaris 9, SPARC)
 (though gerbil has one old occurrence of the second 
 pathology, so maybe
 that observation should be taken with a grain of salt).  And none
 whatever on any other buildfarm member.
 
 The same three machines are showing the failure in the 8.0 
 branch, too,
 so it's not a recently-introduced issue.
 
 And one thing more: kudu and dragonfly are actually the same machine,
 same OS, different compilers.
 
 So what to make of this?  Dunno, but it is clearly a very
 platform-specific behavior.  Anyone see a connection between Cygwin
 and Solaris?
 
   regards, tom lane
 

---(end of broadcast)---


Re: [HACKERS] Intermittent stats test failures on buildfarm

2005-08-30 Thread Tom Lane
Rocco Altier [EMAIL PROTECTED] writes:
 Also, kookaburra (AIX) has a problem with the stats test as well.

kookaburra's problem is entirely different, not intermittent in the
least.  The error diff shows that stats collection is off, and its
postmaster log says

LOG:  could not bind socket for statistics collector: Permission denied
LOG:  disabling statistics collector for lack of working socket

I have no idea what's causing that --- the only reason I know of for
EACCES from bind() is trying to bind to a privileged port number, and
one hopes we're not doing that.

regards, tom lane

---(end of broadcast)---


Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Andreas Pflug

Alvaro Herrera wrote:

 Unfortunately, it's not at all obvious how to accomplish that :-(.

 I don't think it can be easily done with the current code.  This is
 plpgsql code, right?  There are some ways to cause recompilation for
 those, at least on the 8.1 code I'm looking at.


Well, at least when a procedure is dropped, its cached plans could be
dropped as well (apparently the cached plan is located through some kind
of hash, not by pg_proc.oid?). I do understand that the usual case, a
table OID changing while cached inside a procedure, isn't easily
detectable, because it would require dependency information generated
from the procedure's source.


Regards,
Andreas

---(end of broadcast)---


Re: [HACKERS] [Slony1-general] Re: dangling lock information?

2005-08-30 Thread Chris Browne
[EMAIL PROTECTED] (Alvaro Herrera) writes:
 On Tue, Aug 30, 2005 at 12:45:18PM -0400, Chris Browne wrote:
 [EMAIL PROTECTED] (David Parker) writes:
  The slony log trigger saves execution plans, so any given
  connection that has been used with a slony schema installed will
  have cached OIDs referring to the sl_log_1 table. When you drop
  the schema, those OIDs obviously go away. When you re-create the
  schema, and try to use the old connection, it still has the old
  plan cached in it, so the OIDs in the plan are out of sync with
  what actually exists in the database.
 
  This is the behavior I've observed in our environment,
  anyway. The problem always shows up when slony is RE-installed
  under an outstanding connection.
 
 I have observed much the same behaviour...
 
 It would be really useful to have some guidance as to how to
 resolve this.
 
 What is needed is to invalidate the cached execution plans.

 The simplest way to do that is to disconnect the client, and start a
 fresh session.

I'm keen on a "simplest way" that doesn't essentially involve having
to restart the application...

 Unfortunately, it's not at all obvious how to accomplish that :-(.

 I don't think it can be easily done with the current code.  This is
 plpgsql code, right?  There are some ways to cause recompilation for
 those, at least on the 8.1 code I'm looking at.

No, the troublesome parts are in C/SPI code.

If it's something Neil Conway hasn't quite figured out how to handle
yet, I don't feel so bad that I can't imagine a way to do it...  :-)
-- 
select 'cbbrowne' || '@' || 'acm.org';
http://cbbrowne.com/info/spiritual.html
A cool feature of OOP is that the simplest examples are 500 lines.  
-- Peter Sestoft

---(end of broadcast)---


Re: [HACKERS] Pre-allocated free space for row updating (like PCTFREE)

2005-08-30 Thread Satoshi Nagayasu
Simon Riggs wrote:
 The summary was:
 
 1. Have a PCTFREE column added on a table by table basis

I think a good place to keep the PCTFREE value is a new column in
pg_class, and ALTER TABLE should be able to change it.

 2. Apply PCTFREE for Inserts only
 3. Allow Updates to use the full space in the block.

4. Allow fragmentation within each page to be repaired.

Updates cause fragmentation within the page, so we need to keep a large
contiguous free space in each page if the PCTFREE feature is to be
effective.

-- 
NAGAYASU Satoshi [EMAIL PROTECTED]

---(end of broadcast)---


Re: [HACKERS] Intermittent stats test failures on buildfarm

2005-08-30 Thread Andrew Dunstan
Tom Lane wrote:

 Rocco Altier [EMAIL PROTECTED] writes:
  Also, kookaburra (AIX) has a problem with the stats test as well.

 kookaburra's problem is entirely different, not intermittent in the
 least.  The error diff shows that stats collection is off, and its
 postmaster log says

 LOG:  could not bind socket for statistics collector: Permission denied
 LOG:  disabling statistics collector for lack of working socket

 I have no idea what's causing that --- the only reason I know of for
 EACCES from bind() is trying to bind to a privileged port number, and
 one hopes we're not doing that.

The other thing that's rather odd is that it's failing at the
installcheck stage, which means it just passed this same test moments
before at the check stage. Installcheck failures in buildfarm should
always be regarded suspiciously.


cheers

andrew

---(end of broadcast)---


[HACKERS] statement logging / extended query protocol issues

2005-08-30 Thread Oliver Jowett
8.1-beta1 produces some odd results with statement logging enabled when
the extended query protocol is used (e.g. when using the JDBC driver).
Repeatedly running a simple query with log_statement = 'all' produces this:

LOG:  statement: PREPARE  AS SELECT 'dummy statement'
LOG:  statement: BIND
LOG:  statement: EXECUTE   [PREPARE:  SELECT 'dummy statement']
[...]
LOG:  statement: PREPARE S_2 AS SELECT 'dummy statement'
LOG:  statement: BIND
LOG:  statement: EXECUTE   [PREPARE:  SELECT 'dummy statement']
LOG:  statement: BIND
LOG:  statement: EXECUTE   [PREPARE:  SELECT 'dummy statement']
LOG:  statement: BIND
[...]

Comments:
- The PREPARE lines are misleading as the query actually sent does not
include PREPARE at all.
- The driver never sends EXECUTE as a statement, but it is logged as one.
- PREPARE  AS is a confusing way of saying the unnamed statement
- The BIND lines are content-free.

Secondly, running a query that uses portals produces output like this:

LOG:  statement: PREPARE S_3 AS SELECT * from pg_proc
LOG:  statement: BIND C_4
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]
LOG:  statement: EXECUTE C_4  [PREPARE:  SELECT * from pg_proc]

Comments:
- The BIND is still fairly content-free.
- The EXECUTEs are a bit misleading as the SELECT was actually only run
once (there are multiple Execute messages for the same portal). You
could infer that there is only one SELECT from the repeated portal name
and the lack of an intervening BIND, I suppose.

8.1 is admittedly better than 8.0 here (8.0 had no logging in this case
at all), but it's not very user-friendly as it stands. I'm sure the
JDBC list is going to get lots of "why does statement logging give me
this weird output" questions :/

I've attached the Java code I used to produce this. It expects a single
argument, the JDBC URL to use, e.g.
'jdbc:postgresql://localhost:8101/test?user=oliver'

-O
import java.sql.*;
import java.util.*;

public class TestStatementLogging {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");

        Connection conn = DriverManager.getConnection(args[0]);
        conn.setAutoCommit(false);

        PreparedStatement stmt = conn.prepareStatement("SELECT 'dummy statement'");
        for (int j = 0; j < 10; ++j)
            stmt.executeQuery();
        stmt.close();

        stmt = conn.prepareStatement("SELECT * from pg_proc");
        stmt.setFetchSize(1);
        ResultSet rs = stmt.executeQuery();
        while (rs.next())
            ;
        stmt.close();

        conn.createStatement().execute("I am a syntax error");
    }
}

---(end of broadcast)---