Re: [HACKERS] custom exception handling support?

2005-03-20 Thread Pavel Stehule
> Hi,
> I want to add support for exceptions that are
> supported in Oracle, in plpgsql.
>
> Mainly I want to add custom exception support in
> plpgsql, like in Oracle where we use
> EXCEPTION myexp
>
> Can anybody help me?

Hello 

http://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

But at this time it is impossible to get details about the exception.
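For example, a minimal sketch of what the 8.0-style error trapping already allows (hypothetical names; assumes a table "uniq_table" with a unique constraint):

CREATE OR REPLACE FUNCTION insert_or_skip(val integer) RETURNS text AS '
BEGIN
    INSERT INTO uniq_table VALUES (val);
    RETURN ''inserted'';
EXCEPTION
    WHEN unique_violation THEN
        -- the error is trapped, but the handler gets no variables
        -- carrying the error message or detail
        RETURN ''duplicate'';
END;
' LANGUAGE 'plpgsql';

User-defined condition names (Oracle-style "EXCEPTION myexp") are not available either; an error raised with RAISE EXCEPTION can only be caught via the generic RAISE_EXCEPTION condition or OTHERS.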

Regards
Pavel Stehule




Re: [HACKERS] invalidating cached plans

2005-03-20 Thread Tom Lane
Harald Fuchs <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> One possible approach is to do the invalidation on a sufficiently coarse
>> grain that we don't care.  For example, I would be inclined to make any
>> change in a table's schema invalidate all plans that use that table at
>> all; that would then subsume the constraint problem for instance.  This
>> doesn't solve the inlined function problem however.

> How about making this even more coarse-grained?  Blindly throw all
> cached plans away when something in the database DDL changes.

Well, the problem is not so much that we can't tell what depends on
which, as that we have no mechanism to make plan invalidation happen
in the first place.  I don't think that "throw everything away" will
make it very much simpler.

regards, tom lane



[HACKERS] custom exception handling support?

2005-03-20 Thread Ali Baba
Hi,
I want to add support for exceptions that are
supported in Oracle, in plpgsql.

Mainly I want to add custom exception support in
plpgsql, like in Oracle where we use
EXCEPTION myexp

Can anybody help me?


Regards,
Asif Ali.




Re: [HACKERS] read-only database

2005-03-20 Thread Satoshi Nagayasu

Tom Lane wrote:
> I'd view this as a postmaster state that propagates to backends.
> Probably you'd enable it by means of a postmaster option, and the
> only way to get out of it is to shut down and restart the postmaster
> without the option.

I've created a patch to make a postmaster read-only.
(attached patch can be applied to 8.0.1)

Read-only state can be enabled/disabled by the postmaster option
or the postgresql.conf option.

If you start the postmaster with the "-r" option,
the cluster will go read-only:

% pg_ctl -o "-i -r" -D $PGDATA start

Or if you set "readonly_cluster = true" in postgresql.conf,
the cluster will also become read-only.

Any comments?
-- 
NAGAYASU Satoshi <[EMAIL PROTECTED]>
OpenSource Development Center,
NTT DATA Corp. http://www.nttdata.co.jp

diff -ru postgresql-8.0.1.orig/src/backend/executor/execMain.c postgresql-8.0.1/src/backend/executor/execMain.c
--- postgresql-8.0.1.orig/src/backend/executor/execMain.c	2005-01-15 02:53:33.0 +0900
+++ postgresql-8.0.1/src/backend/executor/execMain.c	2005-03-21 13:12:22.0 +0900
@@ -43,6 +43,7 @@
 #include "optimizer/clauses.h"
 #include "optimizer/var.h"
 #include "parser/parsetree.h"
+#include "postmaster/postmaster.h"
 #include "utils/acl.h"
 #include "utils/guc.h"
 #include "utils/lsyscache.h"
@@ -127,7 +128,7 @@
 * If the transaction is read-only, we need to check if any writes are
 * planned to non-temporary tables.
 */
-   if (XactReadOnly && !explainOnly)
+   if ( (XactReadOnly || ReadOnlyCluster) && !explainOnly)
ExecCheckXactReadOnly(queryDesc->parsetree);
 
/*
diff -ru postgresql-8.0.1.orig/src/backend/postmaster/postmaster.c postgresql-8.0.1/src/backend/postmaster/postmaster.c
--- postgresql-8.0.1.orig/src/backend/postmaster/postmaster.c	2005-01-13 01:38:17.0 +0900
+++ postgresql-8.0.1/src/backend/postmaster/postmaster.c	2005-03-21 13:21:17.0 +0900
@@ -236,6 +236,8 @@
 extern int optreset;
 #endif
 
+bool   ReadOnlyCluster = false;
+
 /*
  * postmaster.c - function prototypes
  */
@@ -440,7 +442,7 @@
 
opterr = 1;
 
-   while ((opt = getopt(argc, argv, "A:a:B:b:c:D:d:Fh:ik:lm:MN:no:p:Ss-:")) != -1)
+   while ((opt = getopt(argc, argv, "A:a:B:b:c:D:d:Fh:ik:lm:MN:no:p:rSs-:")) != -1)
{
switch (opt)
{
@@ -515,6 +517,9 @@
case 'p':
SetConfigOption("port", optarg, PGC_POSTMASTER, 
PGC_S_ARGV);
break;
+   case 'r':
+   SetConfigOption("readonly_cluster", "true", 
PGC_POSTMASTER, PGC_S_ARGV);
+   break;
case 'S':
 
/*
diff -ru postgresql-8.0.1.orig/src/backend/tcop/utility.c postgresql-8.0.1/src/backend/tcop/utility.c
--- postgresql-8.0.1.orig/src/backend/tcop/utility.c	2005-01-25 02:46:29.0 +0900
+++ postgresql-8.0.1/src/backend/tcop/utility.c	2005-03-21 13:13:45.0 +0900
@@ -47,6 +47,7 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "postmaster/bgwriter.h"
+#include "postmaster/postmaster.h"
 #include "rewrite/rewriteDefine.h"
 #include "rewrite/rewriteRemove.h"
 #include "storage/fd.h"
@@ -265,7 +266,7 @@
 static void
 check_xact_readonly(Node *parsetree)
 {
-   if (!XactReadOnly)
+   if (!XactReadOnly && !ReadOnlyCluster)
return;
 
/*
diff -ru postgresql-8.0.1.orig/src/backend/utils/misc/guc.c postgresql-8.0.1/src/backend/utils/misc/guc.c
--- postgresql-8.0.1.orig/src/backend/utils/misc/guc.c	2005-01-01 14:43:08.0 +0900
+++ postgresql-8.0.1/src/backend/utils/misc/guc.c	2005-03-21 13:06:42.0 +0900
@@ -851,6 +851,15 @@
 #endif
},
 
+   {
+   {"readonly_cluster", PGC_POSTMASTER, UNGROUPED,
+   gettext_noop("Enables the postmaster read-only."),
+   NULL
+   },
+   &ReadOnlyCluster,
+   false, NULL, NULL
+   },
+
/* End-of-list marker */
{
{NULL, 0, 0, NULL, NULL}, NULL, false, NULL, NULL
diff -ru postgresql-8.0.1.orig/src/include/postmaster/postmaster.h postgresql-8.0.1/src/include/postmaster/postmaster.h
--- postgresql-8.0.1.orig/src/include/postmaster/postmaster.h	2005-01-01 07:03:39.0 +0900
+++ postgresql-8.0.1/src/include/postmaster/postmaster.h	2005-03-21 13:03:16.0 +0900
@@ -34,6 +34,7 @@
 extern HANDLE PostmasterHandle;
 #endif
 
+extern bool ReadOnlyCluster;
 
 extern int PostmasterMain(int argc, char *argv[]);
 extern void ClosePostmasterPorts(bool am_syslogger);
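A quick way to exercise the patch once it is applied might be (a sketch; exact error wording aside, any write should be rejected while the cluster is read-only):

-- with the postmaster started read-only, e.g.  pg_ctl -o "-i -r" -D $PGDATA start
CREATE TABLE t (i integer);   -- expected to be rejected while read-only
INSERT INTO t VALUES (1);     -- likewise rejected
SELECT 1;                     -- plain reads should still work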


Re: [HACKERS] Very strange query difference between 7.3.6 and 7.4.6

2005-03-20 Thread Joshua D. Drake
Tom Lane wrote:
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
 

O.k. I got 7.3.9 to operate as expected on FC2 (64bit) and these are my 
results:
   

 

enable_hashagg on:
   

 

HashAggregate  (cost=80.00..82.50 rows=1000 width=404) (actual 
time=209.746..209.750 rows=1 loops=1)
   

You got confused somewhere along the line, because 7.3 doesn't have
hash aggregation ...
 

DOH! You are correct. Heh... Anyway here is the real 7.3.9
on FC2 64bit plan:
RY 
PLAN   


Aggregate  (cost=69.83..132.33 rows=100 width=404) (actual 
time=771.96..771.96 rows=1 loops=1)
  ->  Group  (cost=69.83..129.83 rows=1000 width=404) (actual 
time=579.98..767.54 rows=8845 loops=1)
->  Sort  (cost=69.83..72.33 rows=1000 width=404) (actual 
time=579.96..590.00 rows=8845 loops=1)
  Sort Key: post_id, topic_id, topic_title, topic_status, 
topic_replies, topic_time, topic_type, topic_vote, topic_last_post_id, 
forum_name, forum_status, forum_id, auth_view, auth_read, auth_post, 
auth_reply, auth_edit, auth_delete, auth_sticky, auth_announce, 
auth_pollcreate, auth_vote, auth_attachments
  ->  Seq Scan on foo  (cost=0.00..20.00 rows=1000 
width=404) (actual time=0.05..107.62 rows=8845 loops=1)
Total runtime: 774.57 msec
(6 rows)





--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




Re: [HACKERS] Avoiding unnecessary writes during relation drop and truncate

2005-03-20 Thread Qingqing Zhou

"Tom Lane" <[EMAIL PROTECTED]> writes
> So it'll get an error ... this scenario doesn't strike me as any worse
> than any other problem occurring in post-commit cleanup.  The locks left
> around by the not-cleaned-up transaction would probably be a bigger
> issue, for example.

Yes, the result is acceptable. BTW, what about other post-commit cleanup
errors? Do we have any idea to avoid/alleviate them?

Regards,
Qingqing





Re: [HACKERS] [PERFORM] How to read query plan

2005-03-20 Thread Miroslav Šulc
Tom Lane wrote:
> ...
> I think the reason this is popping to the top of the runtime is that the
> joins are so wide (an average of ~85 columns in a join tuple according
> to the numbers above).  Because there are lots of variable-width columns
> involved, most of the time the fast path for field access doesn't apply
> and we end up going to nocachegetattr --- which itself is going to be
> slow because it has to scan over so many columns.  So the cost is
> roughly O(N^2) in the number of columns.

As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
it be helpful to change them to char(1)? Would it solve the variable-width
problem at least for some fields and speed the query up?

> As a short-term hack, you might be able to improve matters if you can
> reorder your LEFT JOINs to have the minimum number of columns
> propagating up from the earlier join steps.  In other words make the
> later joins add more columns than the earlier, as much as you can.

That will be hard as the main table which contains most of the fields is
LEFT JOINed with the others. I'll look at it if I find some way to
improve it.

I'm not sure whether I understand the process of performing the plan but
I imagine that the data from AdDevicesSites are retrieved only once when
they are loaded and maybe stored in memory. Are the columns stored in
the order they are in the SQL command? If so, wouldn't it be useful to
move all varchar fields to the end of the SELECT query? I'm just
guessing because I don't know at all how a database server is
implemented and what it really does.

> regards, tom lane

Miroslav




Re: [HACKERS] [PERFORM] How to read query plan

2005-03-20 Thread John Arbash Meinel
Miroslav Šulc wrote:
> Tom Lane wrote:
>> ...
>> I think the reason this is popping to the top of the runtime is that the
>> joins are so wide (an average of ~85 columns in a join tuple according
>> to the numbers above).  Because there are lots of variable-width columns
>> involved, most of the time the fast path for field access doesn't apply
>> and we end up going to nocachegetattr --- which itself is going to be
>> slow because it has to scan over so many columns.  So the cost is
>> roughly O(N^2) in the number of columns.
>
> As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
> it be helpful to change them to char(1)? Would it solve the
> variable-width problem at least for some fields and speed the query up?

I'm guessing there really wouldn't be a difference. I think varchar()
and char() are stored the same way, just one always has space padding. I
believe they are both varlena types, so they are still "variable" length.

>> As a short-term hack, you might be able to improve matters if you can
>> reorder your LEFT JOINs to have the minimum number of columns
>> propagating up from the earlier join steps.  In other words make the
>> later joins add more columns than the earlier, as much as you can.
>
> That will be hard as the main table which contains most of the fields
> is LEFT JOINed with the others. I'll look at it if I find some way to
> improve it.

One thing that you could try is to select just the primary keys from
the main table, and then later on join back to that table to get the
rest of the columns. It is a little bit hackish, but if it makes your
query faster, you might want to try it.
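For instance, something along these lines (a sketch; the extra table and key
names here are made up, only AdDevicesSites and the LIMIT/OFFSET come from the
thread):

SELECT wide.*
FROM (
    SELECT a.idpk
    FROM AdDevicesSites a
    LEFT JOIN Forums f ON f.id = a.forum_id
    -- ... all the remaining LEFT JOINs, WHERE, ORDER BY and LIMIT/OFFSET
    --     go in here, carrying only a.idpk upward ...
    ORDER BY a.idpk
    LIMIT 30 OFFSET 6000
) AS keys
JOIN AdDevicesSites wide ON wide.idpk = keys.idpk;

That way the wide varchar columns are fetched only for the few rows that
survive, instead of being copied up through every join level.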
> I'm not sure whether I understand the process of performing the plan
> but I imagine that the data from AdDevicesSites are retrieved only
> once when they are loaded and maybe stored in memory. Are the columns
> stored in the order they are in the SQL command? If so, wouldn't it be
> useful to move all varchar fields to the end of the SELECT query? I'm
> just guessing because I don't know at all how a database server is
> implemented and what it really does.

I don't think they are stored in the order of the SELECT <> portion. I'm
guessing they are loaded and saved as you go. But the order of the
LEFT JOIN at the end is probably important.

John
=:->




Re: [HACKERS] [PERFORM] How to read query plan

2005-03-20 Thread Miroslav Šulc
Tom Lane wrote:
> I wrote:
>> Since ExecProject operations within a nest of joins are going to be
>> dealing entirely with Vars, I wonder if we couldn't speed matters up
>> by having a short-circuit case for a projection that is only Vars.
>> Essentially it would be a lot like execJunk.c, except able to cope
>> with two input tuples.  Using heap_deformtuple instead of retail
>> extraction of fields would eliminate the O(N^2) penalty for wide tuples.
>
> Actually, we already had a pending patch (from Atsushi Ogawa) that
> eliminates that particular O(N^2) behavior in another way.  After
> applying it, I get about a factor-of-4 reduction in the runtime for
> Miroslav's example.

Is there a chance we will see this patch in the 8.0.2 release? And when
can we expect this release?

> ExecEvalVar and associated routines are still a pretty good fraction of
> the runtime, so it might still be worth doing something like the above,
> but it'd probably be just a marginal win instead of a big win.
>
> regards, tom lane

Miroslav




Re: [HACKERS] [PERFORM] How to read query plan

2005-03-20 Thread Miroslav Šulc
Tom Lane wrote:
> Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> As there are a lot of varchar(1) in the AdDevicesSites table, wouldn't
>> it be helpful to change them to char(1)? Would it solve the variable-width
>> problem at least for some fields and speed the query up?
>
> No, because char(1) isn't physically fixed-width (consider multibyte
> encodings).  There's really no advantage to char(N) in Postgres.

I was aware of that :-(

> I don't know what you're doing with those fields, but if they are
> effectively booleans or small codes you might be able to convert them to
> bool or int fields.  There is also the "char" datatype (not to be
> confused with char(1)) which can hold single ASCII characters, but is
> nonstandard and a bit impoverished as to functionality.

The problem lies in migration from MySQL to PostgreSQL. In MySQL we
(badly) chose enum for yes/no switches (there's nothing like a boolean
field type in MySQL as far as I know, but we could use tinyint). It will
be very time consuming to rewrite all such enums and check whether the
code still works.
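For what it's worth, the conversion Tom suggests should be a one-liner per
column with 8.0's ALTER COLUMN ... TYPE ... USING (a sketch; the column name
and the stored 'y' value are guesses):

ALTER TABLE AdDevicesSites
    ALTER COLUMN ad_approved TYPE boolean
    USING (ad_approved = 'y');

The three-valued 'yes'/'no'/'DK' switches would of course need smallint or
"char" rather than boolean.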

> However, I doubt this is worth pursuing.  One of the things I tested
> yesterday was a quick hack to organize the storage of intermediate join
> tuples with fixed-width fields first and non-fixed ones later.  It
> really didn't help much at all :-(.  I think the trouble with your
> example is that in the existing code, the really fast path applies only
> when the tuple contains no nulls --- and since you're doing all that
> left joining, there's frequently at least one null lurking.

Unfortunately I don't see any other way than LEFT JOINing in this case.

> regards, tom lane

Miroslav




Re: [HACKERS] Avoiding tuple construction/deconstruction during joining

2005-03-20 Thread Miroslav Šulc
Tom Lane wrote:
So I have some results. I have tested the query on both PostgreSQL 8.0.1 
and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL 
result is 11,667.916 ms, MySQL result is 448.4 ms.
   

That's a fairly impressive discrepancy :-(, and even the slot_getattr()
patch that Atsushi Ogawa provided isn't going to close the gap.
(I got about a 4x speedup on Miroslav's example in my testing, which
leaves us still maybe 6x slower than MySQL.)
 

As I wrote, the comparison is not "fair". Here are the conditions:
"Both databases are running on the same machine (my laptop) and contain 
the same data. However there are some differences in the data table 
definitions:
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I 
use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not of 
the same type (smallint <-> integer (SERIAL)), in MySQL I use the same 
types"

For those not used to MySQL, enum is an integer "mapped" to a text 
string (that's how I see it). That means that when you have field such 
as enum('yes','no','DK'), in the table there are stored numbers 1, 2 and 
3 which are mapped to the text values 'yes', 'no' and 'DK'. The 
description is not accurate (I'm not MySQL programmer, I didn't check it 
recently and I didn't inspect the code - I wouldn't understand it 
either) but I think it's not that important. What is important is the 
fact that MySQL has to work with some dozen fields that are numbers and 
PostgreSQL has to work with the same fields as varchar(). Some of the 
other fields are also varchars. This might (or might not) cause the 
speed difference. However I think that if devs figure out how to speed 
this up, other cases will benefit from the improvement too.
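To make the mapping concrete, the closest PostgreSQL equivalent of an
enum('yes','no','DK') column would probably be a constrained small integer,
something like (a sketch with invented names):

CREATE TABLE enum_emulation_example (
    answer smallint NOT NULL DEFAULT 1
        CHECK (answer IN (1, 2, 3))  -- 1 = 'yes', 2 = 'no', 3 = 'DK'
);

which is roughly what MySQL stores and compares internally, while the current
schema makes PostgreSQL compare varchar values instead.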

As I understood from the contributions of others, 2) shouldn't have a
great impact on the speed.

Looking at the post-patch profile for the test case, there is still
quite a lot of cycles going into tuple assembly and disassembly:
Each sample counts as 0.01 seconds.
 %   cumulative   self  self total   
time   seconds   secondscalls  ms/call  ms/call  name
24.47  4.49 4.49 _mcount
 8.01  5.96 1.47  9143692 0.00 0.00  ExecEvalVar
 6.92  7.23 1.27  6614373 0.00 0.00  slot_deformtuple
 6.54  8.43 1.20  9143692 0.00 0.00  slot_getattr
 6.21  9.57 1.14   103737 0.01 0.03  ExecTargetList
 5.56 10.59 1.02   103775 0.01 0.01  DataFill
 3.22 11.18 0.59   103775 0.01 0.01  ComputeDataSize
 2.83 11.70 0.52 ExecEvalVar
 2.72 12.20 0.50  9094122 0.00 0.00  memcpy
 2.51 12.66 0.46 encore
 2.40 13.10 0.44   427448 0.00 0.00  nocachegetattr
 2.13 13.49 0.39   103775 0.00 0.02  heap_formtuple
 2.07 13.87 0.38 noshlibs
 1.20 14.09 0.22   225329 0.00 0.00  _doprnt
 1.20 14.31 0.22 msquadloop
 1.14 14.52 0.21 chunks
 0.98 14.70 0.18   871885 0.00 0.00  AllocSetAlloc
 0.98 14.88 0.18 $$dyncall
 0.76 15.02 0.14   594242 0.00 0.00  FunctionCall3
 0.71 15.15 0.13   213312 0.00 0.00  comparetup_heap
 0.65 15.27 0.12 6364 0.02 0.13  printtup
 0.60 15.38 0.11   790702 0.00 0.00  pfree

(_mcount is profiling overhead, ignore it.)  It looks to me like just
about everything in the top dozen functions is there as a result of the
fact that join steps form new tuples that are the merge of their input
tuples.  Even our favorite villains, palloc and pfree, are down in the
sub-percent range.
I am guessing that the reason MySQL wins on this is that they avoid
doing any data copying during a join step.  I wonder whether we could
accomplish the same by taking Ogawa's patch to the next level: allow
a TupleTableSlot to contain either a "materialized" tuple as now,
or a "virtual" tuple that is simply an array of Datums and null flags.
(It's virtual in the sense that any pass-by-reference Datums would have
to be pointing to data at the next level down.)  This would essentially
turn the formtuple and deformtuple operations into no-ops, and get rid
of a lot of the associated overhead such as ComputeDataSize and
DataFill.  The only operations that would have to forcibly materialize
a tuple would be ones that need to keep the tuple till after they fetch
their next input tuple --- hashing and sorting are examples, but very
many plan node types don't ever need to do that.
I haven't worked out the details, but it seems likely that this could be
a relatively nonintrusive patch.  The main thing that would be an issue
would be that direct reference to slot->val would become verboten (since
you could no lo

Re: [HACKERS] invalidating cached plans

2005-03-20 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> writes:

> One possible approach is to do the invalidation on a sufficiently coarse
> grain that we don't care.  For example, I would be inclined to make any
> change in a table's schema invalidate all plans that use that table at
> all; that would then subsume the constraint problem for instance.  This
> doesn't solve the inlined function problem however.

How about making this even more coarse-grained?  Blindly throw all
cached plans away when something in the database DDL changes.




Re: [HACKERS] [PERFORM] Avoiding tuple construction/deconstruction during joining

2005-03-20 Thread PFC
	On my machine (Laptop with Pentium-M 1.6 GHz and 512MB DDR333) I get the  
following timings :

	Big Joins Query with all the fields and no ORDER BY (I just put a SELECT
* in the first table) yielding about 6k rows:
	=> 12136.338 ms

	Replacing the SELECT * from the table with many fields by just a SELECT  
of the foreign key columns :
	=> 1874.612 ms

	I felt like playing a bit so I implemented a hash join in python  
(download the file, it works on Miroslav's data) :
	All timings do not include time to fetch the data from the database.  
Fetching all the tables takes about 1.1 secs.

	* With something that looks like the current implementation (copying  
tuples around) and fetching all the fields from the big table :
	=> Fetching all the tables : 1.1 secs.
	=> Joining : 4.3 secs

* Fetching only the integer fields
=> Fetching all the tables : 0.4 secs.
=> Joining : 1.7 secs
	* A smarter join which copies nothing and updates the rows as they are  
processed, adding fields :
	=> Fetching all the tables :  1.1 secs.
	=> Joining : 0.4 secs
	With the just-in-time compiler activated, it goes down to about 0.25  
seconds.

	First thing, this confirms what Tom said.
	It also means that doing this query in the application can be a lot  
faster than doing it in postgres including fetching all of the tables.  
There's a problem somewhere ! It should be the other way around ! The  
python mappings (dictionaries : { key : value } ) are optimized like crazy  
but they store column names for each row. And it's a dynamic script  
language ! Argh.

Note : run the program like this :
python test.py |less -S
	So that the time spent scrolling your terminal does not spoil the  
measurements.

Download test program :
http://boutiquenumerique.com/pf/miroslav/test.py


Re: [HACKERS] [PERFORM] Avoiding tuple construction/deconstruction during joining

2005-03-20 Thread PFC
	I have asked him for the data and played with his queries, and obtained  
massive speedups with the following queries :

http://boutiquenumerique.com/pf/miroslav/query.sql
http://boutiquenumerique.com/pf/miroslav/query2.sql
http://boutiquenumerique.com/pf/miroslav/materialize.sql
	Note that my optimized version of the Big Joins is not much faster than
the materialized view without index (hash joins are damn fast in postgres)  
but of course using an index...



Re: [HACKERS] Avoiding tuple construction/deconstruction during joining

2005-03-20 Thread Miroslav Šulc
Tom Lane wrote:
> Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> seriously, I am far below this level of knowledge. But I can contribute
>> a test that (maybe) can help. I have rewritten the query so it JOINs the
>> varchar() fields (in fact all fields except the IDPK) at the last INNER
>> JOIN. Though there is one more JOIN, the query is more than 5 times
>> faster (1975.312 ms) :-)
>
> That confirms my thought that passing the data up through multiple
> levels of join is what's killing us.  I'll work on a solution.  This
> will of course be even less back-patchable to 8.0.* than Ogawa's work,
> but hopefully it will fix the issue for 8.1.
>
> regards, tom lane

Tom, thank you and the others for help. I'm glad that my problem can
help PostgreSQL improve and that there are people like you that
understand each little bit of the software :-)

Miroslav






Re: [HACKERS] Real-Time Vacuum Possibility

2005-03-20 Thread Christopher Browne
[EMAIL PROTECTED] (Rod Taylor) wrote:
> It's a fairly limited case and by no means removes the requirement for
> regular vacuums, but for an update heavy structure perhaps it would be
> worth while? Even if it could only keep indexes clean it would help.

The problem that persists with this is that it throws in extra
processing at the time that the system is the _most_ busy doing
updates, thereby worsening latency at times when the system may
already be reeling at the load.

I think, as a result, that VACUUM will _have_ to be done
asynchronously.

What strikes me as being a useful approach would be to set up an
LRU-ordered (or perhaps unordered) queue of pages that have had tuples
"killed off" by DELETE or UPDATE.

Thus, a DELETE/UPDATE would add the page the tuple is on to the list.

"VACUUM RECENT CHANGES" (or something of the sort) could walk through
just those pages.  Cleaning up indexes would require some further
reads, but that's a given.

This "architecture" would be way more supportive than the present way
vacuum works for tables which are large and which have portions that
see heavy update activity.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "gmail.com")
http://linuxdatabases.info/info/lisp.html
Rules of the Evil Overlord  #129. "Despite the delicious irony, I will
not force two heroes to fight each other in the arena."




Re: [HACKERS] Avoiding unnecessary writes during relation drop and truncate

2005-03-20 Thread Tom Lane
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> What if AtEOXact_Inval() fails (though the chance is slim)? Does that mean
> that smgrDoPendingDeletes() -> DropRelFileNodeBuffers can never get
> executed, which means we can never "dropped without write" the buffers
> belonging to the victim relation? So when the BgWrite sweeps, it will write
> those buffers to a non-logically-existed file?

So it'll get an error ... this scenario doesn't strike me as any worse
than any other problem occurring in post-commit cleanup.  The locks left
around by the not-cleaned-up transaction would probably be a bigger
issue, for example.

regards, tom lane



[HACKERS] caches lifetime with SQL vs PL/PGSQL procs

2005-03-20 Thread strk
On postgresql-8.0.0 I've faced a *really* weird behavior.

A simple query (single table - simple function call - no index),
makes postgres process grow about as much as the memory size required
to keep ALL rows in memory.

The invoked procedure call doesn't leak.
It's IMMUTABLE.
Calls other procedures (not leaking).

Now.
One of the other procedures it calls is an 'SQL' one.
Replacing it with a corresponding 'PL/PGSQL' implementation
drastically reduces memory occupation:

SQL:   220Mb
PL/PGSQL:   13Mb

The function body is *really* simple:

-- SQL
CREATE OR REPLACE FUNCTION get_proj4_from_srid(integer) RETURNS text AS
'SELECT proj4text::text FROM spatial_ref_sys WHERE srid= $1'
LANGUAGE 'sql' IMMUTABLE STRICT; 

-- PL/PGSQL
CREATE OR REPLACE FUNCTION get_proj4_from_srid(integer) RETURNS text AS
' BEGIN
RETURN proj4text::text FROM spatial_ref_sys WHERE srid= $1;
END
' LANGUAGE 'plpgsql' IMMUTABLE STRICT; 


Is this expected ?

--strk;



Re: [HACKERS] Changing the default wal_sync_method to open_sync for Win32?

2005-03-20 Thread Kenneth Marshall
On Wed, Mar 16, 2005 at 11:20:12PM -0500, Bruce Momjian wrote:
> 
> Basically we do open_datasync -> fdatasync -> fsync.  This is
> empirically what we found to be fastest on most operating systems, and
> we default to the first one that exists on the operating system.
> 
> Notice we never default to open_sync.  However, on Win32, Magnus got a
> 60% speedup by using open_sync, implemented using
> FILE_FLAG_WRITE_THROUGH.  Now, because this the fastest on Win32, I
> think we should default to open_sync on Win32.  The attached patch
> implements this.
> 
> 2.  Another question is what to do with 8.0.X?  Do we backpatch this for
> Win32 performance?  Can we test it enough to know it will work well? 
> 8.0.2 is going to have a more rigorous testing cycle because of the
> buffer manager changes.
> 
My preference would be to back-patch to 8.0.1. I have some projects
where the performance difference will decide whether or not they go
with MSSQL or PostgreSQL.
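For anyone who wants to experiment ahead of any backpatch, the method can
already be forced by hand on a given installation (a sketch):

-- in postgresql.conf:  wal_sync_method = open_sync   (then reload/restart)
SHOW wal_sync_method;   -- verify the active setting from psql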

Ken



[HACKERS] [Mail Delivery System ] Warning: message 1DAroW-0002DC-00 delayed 144 hours

2005-03-20 Thread Greg Stark

One of the RBL lists you (Bruno and Tom) use seems to be poorly maintained. My
new IP address is listed in their database as a dynamic address (it's not, it
even reverse resolves). I've notified this list multiple times that this IP
address is listed improperly and never received any reply. RBL lists with
inaccurate data can be worse than no lists at all.


--- Begin Message ---
This message was created automatically by mail delivery software (Exim).

A message that you sent has not yet been delivered to one or more of its
recipients after more than 144 hours on the queue on stark.xeocode.com.

The message identifier is: 1DAroW-0002DC-00
The subject of the message is: Re: [HACKERS] [BUGS] We are not following the 
spec for HAVING without GROUP
The date of the message is:14 Mar 2005 10:49:04 -0500

The address to which the message has not yet been delivered is:

  [EMAIL PROTECTED]
Delay reason: SMTP error from remote mailer after RCPT TO:<[EMAIL 
PROTECTED]>:
host wolff.to [66.93.249.74]: 451 http://openrbl.org/216.58.44.227

No action is required on your part. Delivery attempts will continue for
some time, and this warning may be repeated at intervals if the message
remains undelivered. Eventually the mail delivery software will give up,
and when that happens, the message will be returned to you.

--- End Message ---


-- 
greg



Re: [HACKERS] Avoiding unnecessary writes during relation drop and truncate

2005-03-20 Thread Qingqing Zhou

"Tom Lane" <[EMAIL PROTECTED]> writes
> It strikes me that the FlushRelationBuffers call is unnecessary and
> causes useless I/O, namely writing out pages into a file that's
> about to be deleted anyway.  If we simply removed it then any buffers
> belonging to the victim relation would stay in memory until commit;
> then they'd be dropped *without* write by the smgr unlink operation
> (which already calls DropRelFileNodeBuffers).
>

In my understanding, DropRelFileNodeBuffers is done in post-commit stage.
The call sequences are:

 ...
 AtEOXact_Inval(true);

 smgrDoPendingDeletes(true);
 ...

What if AtEOXact_Inval() fails (though the chance is slim)? Does that mean
that smgrDoPendingDeletes() -> DropRelFileNodeBuffers can never get
executed, which means we can never "dropped without write" the buffers
belonging to the victim relation? So when the BgWrite sweeps, it will write
those buffers to a non-logically-existed file?

Regards,
Qingqing





Re: [HACKERS] what to do with backend flowchart

2005-03-20 Thread Qingqing Zhou

"Tom Lane" <[EMAIL PROTECTED]> writes
> If your objection is that it's not being maintained, then that is no
> solution.  Once it's out of the source code CVS it is *guaranteed* to
> not get updated to track source-code changes.
>

Is it possible that we insert some tags (like doc++ does) into source code
comments and write a small parser to retrieve them automatically and
generate most of the backend flowchart?

Regards,
Qingqing





Re: [HACKERS] read-only planner input

2005-03-20 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> Here's one idea to fix this: when planning a Query, transform the Query 
> into a "PlannedQuery". This would essentially be the same as the 
> QueryState we discussed earlier, except that we would also walk through 
> the Query and adjust references to nested Queries to refer to 
> PlannedQueries instead (so RTEs for subqueries would reference the 
> PlannedQuery, not the Query, for example). There would then be a 
> "planned query walker" that would walk both the original query and 
> additional planner-specific working state, and so on.

> Perhaps we could use some trickery to avoid the PlannedQuery vs. Query 
> distinction when a particular piece of code doesn't care, by making 
> Query the first field of PlannedQuery. In other words:

> struct PlannedQuery {
>  Query q;
>  /* other fields */
> };

> So we could treat a PlannedQuery * like a Query *. I don't really like 
> this solution.

No.  At that point you've essentially booted away the entire point of
the change :-(

IIRC one of the main reasons for wanting to make the planner read-only
is so that it does *not* modify subquery RTE contents --- there are all
sorts of uglinesses involved in the fact that it presently does, mainly
having to be sure that we plan each subquery exactly once.  If we go
this route then we won't be able to fix any of that stuff.

> Another possibility would be to punt, and keep in_info_list as part of 
> Query.

That's seeming like the path of least resistance at the moment ... but
it still isn't going to solve the subquery RTE issues.  I'm feeling a
bit discouraged about this concept right now ... maybe we need to back
off and think about a fresh start.

regards, tom lane



Re: [HACKERS] read-only planner input

2005-03-20 Thread Neil Conway
Tom Lane wrote:
That's a bit nasty.  I'm fairly sure that I added in_info_list to the
walker recursion because I had to; I don't recall the exact scenario,
but I think it needs to be possible to reassign relation numbers
within that data structure if we are doing it elsewhere in a query
tree.
It was r1.125 of clauses.c, and yes, it seems pretty important. It's 
used in adjust_inherited_attrs_mutator() in prep/prepunion.c, 
flatten_join_alias_vars_mutator() in util/var.c, and three different 
walkers in rewriteManip.c. That's from trawling through the original 
(Jan of '03) patch -- it may have been used elsewhere subsequently.

Here's one idea to fix this: when planning a Query, transform the Query 
into a "PlannedQuery". This would essentially be the same as the 
QueryState we discussed earlier, except that we would also walk through 
the Query and adjust references to nested Queries to refer to 
PlannedQueries instead (so RTEs for subqueries would reference the 
PlannedQuery, not the Query, for example). There would then be a 
"planned query walker" that would walk both the original query and 
additional planner-specific working state, and so on.

Perhaps we could use some trickery to avoid the PlannedQuery vs. Query 
distinction when a particular piece of code doesn't care, by making 
Query the first field of PlannedQuery. In other words:

struct PlannedQuery {
Query q;
/* other fields */
};
So we could treat a PlannedQuery * like a Query *. I don't really like 
this solution.

Another possibility would be to punt, and keep in_info_list as part of 
Query. We'd then need to resolve modifications to it in the same we way 
will need to resolve modifications to legitimate parts of the Query 
(e.g. by making an initial shallow copy and avoiding destructive 
updates, per earlier discussion).

-Neil


Re: [HACKERS] Very strange query difference between 7.3.6 and 7.4.6

2005-03-20 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> O.k. I got 7.3.9 to operate as expected on FC2 (64bit) and these are my 
> results:

> enable_hashagg on:

>  HashAggregate  (cost=80.00..82.50 rows=1000 width=404) (actual 
> time=209.746..209.750 rows=1 loops=1)

You got confused somewhere along the line, because 7.3 doesn't have
hash aggregation ...

regards, tom lane



Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Thomas Hallgren
Shachar Shemesh wrote:
To summarize, just give me read only access to the old project's data 
and I'm set.
I second that.
- thomas


Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Shachar Shemesh
Marc G. Fournier wrote:
On Sun, 20 Mar 2005, Thomas Hallgren wrote:
Marc G. Fournier wrote:
Once I've copied both over, I'll get Chris to mark the gborg project 
as being 'disabled' so that nobody will see it over there anymore ...

Ok, I submitted a request for the project under pgfoundry. Same name 
(oledb).

Actually, it would be very nice if the project was still visible at 
gborg with a single page explaining that the project has moved. 
People may get the wrong idea if it just disappears.

Seconded. The address was in the files in the release.
I'm still in status quo with PL/Java. I'm eager to help out to get 
the move done, but I still don't know what more I can do. My 
requirements are the same as Shachar's although a dump of the current 
bug database would be helpful.

mailing lists and cvs are easy, as they are the same format under 
either ... its the 'database conversion' stuff that is the reason for 
the long hold up for the rest ...
Yes, well. I never liked much the way gborg handled bugs. In fact, it's 
the fact that people started using the bug system under gborg that 
causes me to miss pgfoundry so much.  Comments such as the one on 
http://gborg.postgresql.org/project/oledb/bugs/bugupdate.php?755 also 
don't help.

The three bugs or so that actually need tracking I can transfer myself. 
I don't think it justifies writing automatic conversion. It does not 
even seem as if gborg is enjoying a lot of active projects anyways. Of 
the three projects that had anything new to say in over a month, two 
have just stated that they don't care about the rest of the info anyways.

To summarize, just give me read only access to the old project's data 
and I'm set.

  Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Dave Cramer
I think having to be on a specific server to get automatic updates on 
the front page is the problem. Moving it is not the correct solution. There
are may postgresql related projects that don't live on pgfoundry, or 
even gborg.

Why is this a necessity?
Can't we set up some sort of interface to the latest news?
Dave
Marc G. Fournier wrote:
On Sun, 20 Mar 2005, Thomas Hallgren wrote:
Marc G. Fournier wrote:
If only the CVS/Mailing lists are needed, and nothing that is "in 
the database", then this shouldn't be too hard ... go to pgfoundry, 
submit for the new project ... once it is approved, create the 
various mailing lists that you have on gborg, and then, *before* you 
do anything else on either, I can move copies of both from gborg -> 
pgfoundry  ...

Once I've copied both over, I'll get Chris to mark the gborg project 
as being 'disabled' so that nobody will see it over there anymore ...

Actually, it would be very nice if the project was still visible at 
gborg with a single page explaining that the project has moved. 
People may get the wrong idea if it just disappears.

I'm still in status quo with PL/Java. I'm eager to help out to get 
the move done, but I still don't know what more I can do. My 
requirements are the same as Shachar's although a dump of the current 
bug database would be helpful.

mailing lists and cvs are easy, as they are the same format under 
either ... its the 'database conversion' stuff that is the reason for 
the long hold up for the rest ...


Marc G. Fournier   Hub.Org Networking Services 
(http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 
7615664


--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Marc G. Fournier
On Sun, 20 Mar 2005, Thomas Hallgren wrote:
Marc G. Fournier wrote:
If only the CVS/Mailing lists are needed, and nothing that is "in the 
database", then this shouldn't be too hard ... go to pgfoundry, submit for 
the new project ... once it is approved, create the various mailing lists 
that you have on gborg, and then, *before* you do anything else on either, 
I can move copies of both from gborg -> pgfoundry  ...

Once I've copied both over, I'll get Chris to mark the gborg project as 
being 'disabled' so that nobody will see it over there anymore ...

Actually, it would be very nice if the project was still visible at gborg 
with a single page explaining that the project has moved. People may get the 
wrong idea if it just disappears.

I'm still in status quo with PL/Java. I'm eager to help out to get the move 
done, but I still don't know what more I can do. My requirements are the same 
as Shachar's although a dump of the current bug database would be helpful.
mailing lists and cvs are easy, as they are the same format under either 
... its the 'database conversion' stuff that is the reason for the long 
hold up for the rest ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Thomas Hallgren
Marc G. Fournier wrote:
If only the CVS/Mailing lists are needed, and nothing that is "in the 
database", then this shouldn't be too hard ... go to pgfoundry, submit 
for the new project ... once it is approved, create the various mailing 
lists that you have on gborg, and then, *before* you do anything else on 
either, I can move copies of both from gborg -> pgfoundry  ...

Once I've copied both over, I'll get Chris to mark the gborg project as 
being 'disabled' so that nobody will see it over there anymore ...

Actually, it would be very nice if the project was still visible at 
gborg with a single page explaining that the project has moved. People 
may get the wrong idea if it just disappears.

I'm still in status quo with PL/Java. I'm eager to help out to get the 
move done, but I still don't know what more I can do. My requirements 
are the same as Shachar's although a dump of the current bug database 
would be helpful.

Genpages, uploads, etc. are things that I can reconstruct from material 
that's in the CVS.

Regards,
Thomas Hallgren


Re: [HACKERS] Very strange query difference between 7.3.6 and 7.4.6

2005-03-20 Thread Joshua D. Drake

> On 7.4 and up you may have to set enable_hashagg = off to force a
> Sort/GroupAggregate plan instead of HashAggregate.
O.k. on FC2 7.4.6 64bit I get:
-
HashAggregate  (cost=80.00..82.50 rows=1000 width=404) (actual 
time=235.064..235.068 rows=1 loops=1)
  ->  Seq Scan on foo  (cost=0.00..20.00 rows=1000 width=404) (actual 
time=0.024..10.409 rows=8845 loops=1)
Total runtime: 236.703 ms
(3 rows)

That is with enable_hashagg on. With enable_hashagg off I get:
GroupAggregate  (cost=69.83..134.83 rows=1000 width=404) (actual 
time=688.150..688.151 rows=1 loops=1)
  ->  Sort  (cost=69.83..72.33 rows=1000 width=404) (actual 
time=543.251..554.363 rows=8845 loops=1)
Sort Key: post_id, topic_id, topic_title, topic_status, 
topic_replies, topic_time, topic_type, topic_vote, topic_last_post_id, 
forum_name, forum_status, forum_id, auth_view, auth_read, auth_post, 
auth_reply, auth_edit, auth_delete, auth_sticky, auth_announce, 
auth_pollcreate, auth_vote, auth_attachments
->  Seq Scan on foo  (cost=0.00..20.00 rows=1000 width=404) 
(actual time=0.008..7.635 rows=8845 loops=1)
Total runtime: 690.881 ms
(5 rows)

On the FC3 64bit, I am seeing similar results:
With enable_hashagg on:
   QUERY PLAN
---
HashAggregate  (cost=1041.15..1041.15 rows=1 width=333) (actual 
time=260.543..260.544 rows=1 loops=1)
  ->  Seq Scan on foo  (cost=0.00..510.45 rows=8845 width=333) (actual 
time=11.638..68.744 rows=8845 loops=1)
Total runtime: 261.195 ms
(3 rows)

With enable_hashagg off:
                                 QUERY PLAN
--------------------------------------------------------------------------
GroupAggregate  (cost=1090.27..1643.08 rows=1 width=333) (actual 
time=1075.690..1075.690 rows=1 loops=1)
  ->  Sort  (cost=1090.27..1112.38 rows=8845 width=333) (actual 
time=943.242..946.261 rows=8845 loops=1)
Sort Key: post_id, topic_id, topic_title, topic_status, 
topic_replies, topic_time, topic_type, topic_vote, topic_last_post_id, 
forum_name, forum_status, forum_id, auth_view, auth_read, auth_post, 
auth_reply, auth_edit, auth_delete, auth_sticky, auth_announce, 
auth_pollcreate, auth_vote, auth_attachments
->  Seq Scan on foo  (cost=0.00..510.45 rows=8845 width=333) 
(actual time=0.044..15.936 rows=8845 loops=1)
Total runtime: 1084.778 ms
(5 rows)

Odd that FC3 is so much slower, the FC3 machine puts the FC2 machine
to shame for IO.
However, the source query doesn't choose a hashagg on the FC3 machine,
which your
test case does. I am having problems getting 7.3.9 to start on the FC3 
machine.
Very weird, I get this error:

IpcSemaphoreCreate: semget(key=5435117, num=17, 03600) failed: No space 
left on device

I am familiar with this error and know how to fix it. However, I get the
error even with default settings, and with the other instance of PostgreSQL
(the 7.4.6) shut down. So I am at a loss there.

O.k. I got 7.3.9 to operate as expected on FC2 (64bit) and these are my 
results:

enable_hashagg on:
HashAggregate  (cost=80.00..82.50 rows=1000 width=404) (actual 
time=209.746..209.750 rows=1 loops=1)
  ->  Seq Scan on foo  (cost=0.00..20.00 rows=1000 width=404) (actual 
time=0.018..10.218 rows=8845 loops=1)
Total runtime: 210.580 ms
(3 rows)

enable_hashagg off:
GroupAggregate  (cost=69.83..134.83 rows=1000 width=404) (actual 
time=661.197..661.198 rows=1 loops=1)
  ->  Sort  (cost=69.83..72.33 rows=1000 width=404) (actual 
time=517.531..528.360 rows=8845 loops=1)
Sort Key: post_id, topic_id, topic_title, topic_status, 
topic_replies, topic_time, topic_type, topic_vote, topic_last_post_id, 
forum_name, forum_status, forum_id, auth_view, auth_read, auth_post, 
auth_reply, auth_edit, auth_delete, auth_sticky, auth_announce, 
auth_pollcreate, auth_vote, auth_attachments
->  Seq Scan on foo  (cost=0.00..20.00 rows=1000 width=404) 
(actual time=0.008..7.728 rows=8845 loops=1)
Total runtime: 663.903 ms
(5 rows)

So at this point, from what I can tell, FC3 64bit 7.4.6 is slower by at
least 400 ms (with the wrong plan) and is choosing the wrong plan. Yet 
FC2 doesn't have these issues. Hmmm

FC2 has glibc 2.3.3 and gcc 3.3.3
FC3 has glibc 2.3.4 and gcc 3.4.2
What next?
Sincerely,
Joshua D. Drake








Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression tests

2005-03-20 Thread Andrew Dunstan

After some further digging, I think we have 3 problems.

1. On Windows gettext wants to hijack printf and friends, as below. This
strikes me as rather unfriendly behaviour by a library header file.
Anyway, mercifully libintl.h is included in our source in exactly one
spot, so I think the thing to do for this problem is a) undo that
hijacking and b) make sure any hijacking we want to do occurs after the
point where that file in included (in c.h). This causes most of the
noise, but is probably harmless, since our hijacking does in fact win
out. We need to fix the warnings, though.

2. We have multiple #defines for snprintf and vsnprintf (in port.h and
win32.h).

3. ecpg wants to use our pg*printf routines (because USE_SNPRINTF is
defined) but doesn't know where to find them.

what a mess :-(

cheers

andrew


Bruce Momjian wrote:

>Thanks to Andrew Dunstan, I found the cause of these link errors. 
>Andrew found this in libintl:
>   
>   #undef snprintf
>   #define snprintf libintl_snprintf
>   extern int snprintf (char *, size_t, const char *, ...);
>
>What is happening is that we do:
>
>   #define snprintf pg_snprintf
>
>and then libintl.h (?) does:
>
>   #define snprintf libintl_snprintf
>
>so the effect is:
>
>   #define pg_snprintf libintl_snprintf
>
>In fact, in this example, the system complains about a missing X3 symbol:
>
>   #define X1 X2
>   #define X2 X3
>   
>   int
>   main(int argc, char *argv[])
>   {
>   X1;
>   }
>
>so the effet of the defines is:
>
>   #define X1 X3
>
>Anyway, the reason ecpg is failing is that it is the only client-side
>program that doesn't use libintl for internationalization.  It is on our
>TODO list to do that, but it hasn't been done yet.
>
>However, only Win32 is seeing this failure, and only when configure
>--enable-nls.  I think this is because only Win32 does the redefine of
>snprint and friends.
>
>Comments?
>   
>---
>
>Nicolai Tufar wrote:
>  
>
>>On Wed, 16 Mar 2005 01:00:21 -0500 (EST), Bruce Momjian
>> wrote:
>>
>>
>>>I have applied a modified version of your patch, attached.
>>>  
>>>
>>I am so sorry, I sent untested patch again.  Thank you very
>>much for patience in fixing it. The patch looks perfectly
>>fine and works under Solaris. 
>>
>>Under win32 I am still struggling with build environment.
>>In many directories link fails with "undefined reference to
>>`pg_snprintf'" in other it fails with  "undefined reference to
>>`_imp__libintl_sprintf'". In yet another directory it fails with
>>both, like in src/interfaces/ecpg/pgtypeslib:
>>
>>dlltool --export-all  --output-def pgtypes.def numeric.o datetime.o
>>common.o dt_common.o timestamp.o interval.o pgstrcasecmp.o
>>dllwrap  -o libpgtypes.dll --dllname libpgtypes.dll  --def pgtypes.def
>>numeric.o datetime.o common.o dt_common.o timestamp.o interval.o
>>pgstrcasecmp.o  -L../../../../src/port -lm
>>numeric.o(.text+0x19ea):numeric.c: undefined reference to
>>`_imp__libintl_sprintf'
>>datetime.o(.text+0x476):datetime.c: undefined reference to `pg_snprintf'
>>common.o(.text+0x1cd):common.c: undefined reference to `pg_snprintf'
>>common.o(.text+0x251):common.c: undefined reference to `pg_snprintf'
>>dt_common.o(.text+0x538):dt_common.c: undefined reference to
>>`_imp__libintl_sprintf'
>>dt_common.o(.text+0x553):dt_common.c: undefined reference to
>>`_imp__libintl_sprintf'
>>dt_common.o(.text+0x597):dt_common.c: undefined reference to
>>`_imp__libintl_sprintf'
>>dt_common.o(.text+0x5d5):dt_common.c: undefined reference to
>>`_imp__libintl_sprintf'
>>dt_common.o(.text+0x628):dt_common.c: undefined reference to
>>`_imp__libintl_sprintf'
>>dt_common.o(.text+0x7e8):dt_common.c: more undefined references to
>>`_imp__libintl_sprintf' follow
>>c:\MinGW\bin\dllwrap.exe: c:\MinGW\bin\gcc exited with status 1
>>make: *** [libpgtypes.a] Error 1
>>
>>Could someone with a better grasp of configure and 
>>win32 environment check it? Aparently no one regularily 
>>compiles source code under win32 during development cycle
>>these days.
>>
>>
>>Best regards,
>>Nicolai
>>
>>---(end of broadcast)---
>>TIP 4: Don't 'kill -9' the postmaster
>>
>>
>>
>
>  
>




Re: [HACKERS] what to do with backend flowchart

2005-03-20 Thread Tom Lane
Robert Treat <[EMAIL PROTECTED]> writes:
> I'm currently working on consolidating some of the content on the developer 
> site with the current web code cvs and am wondering what to do with 
> http://developer.postgresql.org/docs/pgsql/src/tools/backend/index.html.  
> This link actually comes right out of the postgresql sources, but it is 
> actually quite a bit out of date at this point.

At the level of detail it's giving, it's not all that out of
date, though I agree that it could probably do with a look-through.

> know that it doesn't match reality at this point.  We could have someone 
> update it and then generate it out of cvs onto the website, but it really 
> seems like the kind of thing that should live in the web code rather than 
> core cvs anyway.  I'm willing to import it all into the web cvs, and send a 
> patch removing it from the core cvs if no one objects, lmk.   

If your objection is that it's not being maintained, then that is no
solution.  Once it's out of the source code CVS it is *guaranteed* to
not get updated to track source-code changes.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Marc G. Fournier
If only the CVS/mailing lists are needed, and nothing that is "in the 
database", then this shouldn't be too hard ... go to pgfoundry, apply for 
the new project ... once it is approved, create the various mailing lists 
that you have on gborg, and then, *before* you do anything else on either, 
I can move copies of both from gborg -> pgfoundry ...

Once I've copied both over, I'll get Chris to mark the gborg project as 
being 'disabled' so that nobody will see it over there anymore ...


 On Sun, 20 Mar 2005, Shachar Shemesh wrote:
Hi all,
When pgfoundry was opened, there was some talk about moving the projects from 
gborg there. This has not, to date, happened. Is there any chance of this 
happening now, even if only for this specific project? I feel really bad about 
releasing a new version of OLE DB while the news of the previous release has 
not yet disappeared from the main page.

I don't even need all of the infrastructure. Moving just CVS and the mailing 
lists will be more than enough. There is nothing important in the bug and 
other areas that cannot be reconstructed in a few minutes' work.

Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] what to do with backend flowchart

2005-03-20 Thread Robert Treat
I'm currently working on consolidating some of the content on the developer 
site with the current web code cvs and am wondering what to do with 
http://developer.postgresql.org/docs/pgsql/src/tools/backend/index.html.  
This link actually comes right out of the postgresql sources, but it is 
actually quite a bit out of date at this point.  I guess it might be worth 
asking if it is so out of date that it isn't even worth displaying anymore... 
it looks useful to me but I'm not certain how far out of whack it is, just 
know that it doesn't match reality at this point.  We could have someone 
update it and then generate it out of cvs onto the website, but it really 
seems like the kind of thing that should live in the web code rather than 
core cvs anyway.  I'm willing to import it all into the web cvs, and send a 
patch removing it from the core cvs if no one objects, lmk.   

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] GUC variable for setting number of local buffers

2005-03-20 Thread Tom Lane
Markus Bertheau <[EMAIL PROTECTED]> writes:
>> It's already true that the individual buffers, as opposed to the buffer
>> descriptors, are allocated only as needed; which makes the overhead
>> of a large local_buffers setting pretty small if you don't actually do
>> much with temp tables in a given session.  So I was thinking about
>> making the default value fairly robust, maybe 1000 (as compared to
>> the historical value of 64...).

> Why does the dba need to set that variable at all then?

Because you do have to have a limit.  You want the thing trying to keep
all of a large temp table in core?

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] read-only planner input

2005-03-20 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> I've got most of this finished; I'll post a patch soon. One issue I ran 
> into is how to handle query_tree_mutator() and query_tree_walker(): they 
> both expect to be able to traverse a Query's in_info_list, which my 
> patch moves into the QueryState struct. If maintaining this property is 
> essential, it seems that we'll need a way to get the QueryState 
> associated with a given Query.

whereupon the entire premise of a read-only tree collapses ...

That's a bit nasty.  I'm fairly sure that I added in_info_list to the
walker recursion because I had to; I don't recall the exact scenario,
but I think it needs to be possible to reassign relation numbers
within that data structure if we are doing it elsewhere in a query
tree.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Avoiding unnecessary writes during relation drop and truncate

2005-03-20 Thread Tom Lane
Simon Riggs <[EMAIL PROTECTED]> writes:
> ISTM that buffers belonging to the victim relation would not necessarily
> stay in memory.

Right.  They'd be unpinned and therefore candidates for being written
out and recycled.  So we *might* write them before they are dropped.
That's still better than *definitely* writing them.

The particular case that annoyed me into thinking about this was
using strace to watch a backend manipulate a temp table, and seeing
it carefully write out a bunch of pages from local buffers just
before it dropped the temp table :-(.  For local buffers there are
no checkpoints nor bgwriter and so accumulation of a lot of dirty
pages can be expected.  It didn't matter all that much before
yesterday, with the size of the local buffer pool hardwired at 64,
but when there are hundreds or thousands of local buffers it'll
be important to suppress useless writes.
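
To illustrate the trade-off, here is a small self-contained toy model
(emphatically not PostgreSQL code; the names and structure are made up
for this sketch): flushing the dirty buffers of a relation that is about
to be unlinked costs writes that simply invalidating them would avoid.

    #include <stdio.h>

    #define NBUFFERS 8

    struct buf
    {
        int rel;    /* which relation the cached page belongs to */
        int dirty;  /* would need an I/O if flushed              */
        int valid;  /* still holds a usable page                 */
    };

    static struct buf pool[NBUFFERS] = {
        {1, 1, 1}, {1, 0, 1}, {2, 1, 1}, {1, 1, 1},
        {2, 0, 1}, {1, 1, 1}, {2, 1, 1}, {1, 0, 1},
    };

    /* Old behaviour: write every dirty page of rel, then forget them. */
    static int flush_then_drop(int rel)
    {
        int writes = 0;
        for (int i = 0; i < NBUFFERS; i++)
            if (pool[i].valid && pool[i].rel == rel)
            {
                if (pool[i].dirty)
                    writes++;       /* pretend we issued a write here */
                pool[i].valid = 0;
            }
        return writes;
    }

    /* Proposed behaviour: just invalidate; the file is going away anyway. */
    static int drop_without_write(int rel)
    {
        for (int i = 0; i < NBUFFERS; i++)
            if (pool[i].valid && pool[i].rel == rel)
                pool[i].valid = 0;
        return 0;
    }

    int main(void)
    {
        printf("flush-then-drop of rel 1:    %d writes\n", flush_then_drop(1));
        printf("drop-without-write of rel 2: %d writes\n", drop_without_write(2));
        return 0;
    }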

> Removing FlushRelationBuffers in those circumstances will save a scan of
> shared_buffers, but will it save I/O? Perhaps not, but I care more about
> the O(N) operation on shared_buffers than I do about the I/O.

Realistically, wasted I/O costs more.  But yeah, saving one scan of the
buffer arena is a nice side benefit.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] rewriter in updateable views

2005-03-20 Thread Bernd Helmle
--On Saturday, March 19, 2005 11:05:39 -0500 Tom Lane <[EMAIL PROTECTED]> 
wrote:

Jaime Casanova <[EMAIL PROTECTED]> writes:
On Fri, 18 Mar 2005 23:31:26 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
Why do you not define the problem as "when we decide a view is
updateable and create the needed rules for it, also create default
values for it by copying up from the base tables"?
Well, that was our first thought. but what if the default value is
changed in the base table?
So?  Being able to have a different default for the view could be
construed as a feature, not a bug.
As far as I can see, we have the following options to handle this:
1.
- Create default values in views, inherited from their base tables, in the 
CREATE VIEW command.

- Extend ALTER TABLE table ... SET DEFAULT ... to track dependencies when 
changing default values in base tables. We need to know when a default 
value in a view was overridden by a user-issued ALTER TABLE view ... SET 
DEFAULT, so we need some extra information somewhere. I think the plus of 
this implementation is that we don't touch the rewriter and don't spend 
extra time rewriting a query. The negative is that it adds side effects 
to ALTER TABLE ... SET DEFAULT ... when views are involved.

2.
Extend the rewriter (rewriteTargetList()) to derive column default values 
from a base table if the pg_attribute.atthasdef column value is set to 
false and the base table has a valid default expression. This adds extra 
time when rewriting the target list of a query, and we need to reparse the 
query tree to find out which base table(s)/columns to look at if we don't 
save extra information somewhere, but we don't have the overhead of 
keeping views and base tables in sync.

--
 Bernd
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Avoiding unnecessary writes during relation drop and truncate

2005-03-20 Thread Simon Riggs
On Sat, 2005-03-19 at 18:53 -0500, Tom Lane wrote:
> Currently, in places like heap_drop_with_catalog, we issue a
> FlushRelationBuffers() call followed by smgrscheduleunlink().
> The latter doesn't actually do anything right away, but schedules
> a file unlink to occur after transaction commit.
> 
> It strikes me that the FlushRelationBuffers call is unnecessary and
> causes useless I/O, namely writing out pages into a file that's
> about to be deleted anyway.  If we simply removed it then any buffers
> belonging to the victim relation would stay in memory until commit;
> then they'd be dropped *without* write by the smgr unlink operation
> (which already calls DropRelFileNodeBuffers).
> 
> This doesn't cause any problems with rolling back the transaction before
> commit; we can perfectly well leave dirty pages in the buffer pool in
> that case.  About the only downside I can see is that the Flush allows
> buffer pages to be freed slightly sooner, and hence possibly used for
> something else later in the same transaction ... but that's hardly worth
> the cost of writing data that might not need to be written at all.
> 
> Similar remarks apply to the partial FlushRelationBuffers calls that are
> currently done just before partial or full truncation of a relation ---
> except that those are even sillier, because we are writing data that we
> are definitely going to tell the kernel to forget about immediately
> afterward.  We should just drop any buffers that are past the truncation
> point.  smgrtruncate isn't roll-back-able anyway, so the caller already
> has to be certain that the pages aren't going to be needed anymore
> regardless of any subsequent rollback.
> 
> Can anyone see a flaw in this logic?
> 
> I think that the FlushRelationBuffers calls associated with deletion
> are leftover from a time when we actually deleted the target file
> immediately (ie, back when DROP TABLE wasn't rollback-safe).  The
> ones associated with truncation were probably just modeled on the
> deletion logic without sufficient thought.

Yes, I think FlushRelationBuffers can be simply removed. I'd wanted to
get rid of it before, but hadn't seen how to.

Not sure I understand all of your other comments though. If all you mean
to do is to simply remove the call, then please ignore this:

ISTM that buffers belonging to the victim relation would not necessarily
stay in memory. If they were still pinned, there would be a lock that
would have prevented the DROP TABLE from going through. The buffers are
not pinned and so will stay in memory only until aged out by the
dirty-page writing process. You're right, there's no particular benefit
in writing them earlier - we have a bgwriter now that can do that for us
when it comes to it.

Removing FlushRelationBuffers in those circumstances will save a scan of
shared_buffers, but will it save I/O? Perhaps not, but I care more about
the O(N) operation on shared_buffers than I do about the I/O.

Best Regards, Simon Riggs






---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] GUC variable for setting number of local buffers

2005-03-20 Thread Simon Riggs
On Sat, 2005-03-19 at 12:57 -0500, Tom Lane wrote:
> That means we can go ahead with providing a GUC variable to make the
> array size user-selectable.  I was thinking of calling it either
> "local_buffers" (in contrast to "shared_buffers") or "temp_buffers"
> (to emphasize the fact that they're used for temporary tables).
> Anyone have a preference, or a better alternative?

> Comments?

All of that is good news...

Currently we already have a GUC that describes the amount of memory a
backend can use: work_mem. Would it not be possible to continue to use
that setting and resize the temp_buffers area as needed, so that work_mem
is not exceeded - and so we need not set local_temp_buffers?

It will become relatively hard to judge how to set work_mem and
local_temp_buffers for larger queries, and almost impossible to do that
in a multi-user system. To do that, we would need some additional
feedback that could be interpreted so as to judge how large to set
these. Perhaps we could report local buffer and memory usage in EXPLAIN
ANALYZE? It would be much better if we could decide how best to use
work_mem according to the query plan that is just about to be executed,
and then set all areas accordingly. After all, not all queries would use
both limits simultaneously.

This is, of course, a nice problem to have. :-)

If we must have a GUC, local_temp_buffers works better for me.
local_buffers is my second choice because it matches the terminology
used everywhere in the code and also because temp_buffers sounds like it
is a global setting, which it would not be.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] GUC variable for setting number of local buffers

2005-03-20 Thread Bruce Momjian
Markus Bertheau wrote:
> On Sat, 19/03/2005 at 12:57 -0500, Tom Lane wrote:
> 
> > It's already true that the individual buffers, as opposed to the buffer
> > descriptors, are allocated only as needed; which makes the overhead
> > of a large local_buffers setting pretty small if you don't actually do
> > much with temp tables in a given session.  So I was thinking about
> > making the default value fairly robust, maybe 1000 (as compared to
> > the historical value of 64...).
> 
> Why does the dba need to set that variable at all then?

It is like sort_mem: it is local memory, but it is limited so that a single
backend does not exhaust all the RAM on the machine.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] GUC variable for setting number of local buffers

2005-03-20 Thread Markus Bertheau
On Sat, 19/03/2005 at 12:57 -0500, Tom Lane wrote:

> It's already true that the individual buffers, as opposed to the buffer
> descriptors, are allocated only as needed; which makes the overhead
> of a large local_buffers setting pretty small if you don't actually do
> much with temp tables in a given session.  So I was thinking about
> making the default value fairly robust, maybe 1000 (as compared to
> the historical value of 64...).

Why does the dba need to set that variable at all then?

-- 
Markus Bertheau <[EMAIL PROTECTED]>




Re: [HACKERS] read-only planner input

2005-03-20 Thread Neil Conway
Tom Lane wrote:
I'd go with PlannerState.  QueryState for some reason sounds more like
execution-time state.
Well, not to me :) It just makes sense to me to think of QueryState as 
the working state associated with a Query. Not sure it makes a big 
difference, though.

Pulling the "planner internal" stuff out of the Query node does seem
like a good idea, even so.
I've got most of this finished; I'll post a patch soon. One issue I ran 
into is how to handle query_tree_mutator() and query_tree_walker(): they 
both expect to be able to traverse a Query's in_info_list, which my 
patch moves into the QueryState struct. If maintaining this property is 
essential, it seems that we'll need a way to get the QueryState 
associated with a given Query. We can't just change the query tree 
walker to be a "query state walker", since we need to be able to recurse 
into subqueries, and the RTE for a subquery will only contain a Query, 
not its QueryState. Any thoughts on the best way to fix this?
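
For readers following the thread, a hedged sketch of the split being
discussed; the field names are assumptions drawn from this exchange, not
the actual source:

    /* Sketch only -- assumed names, not real PostgreSQL code. */
    typedef struct List  List;    /* opaque here: pg's generic list type   */
    typedef struct Query Query;   /* parse tree; meant to stay read-only   */

    typedef struct QueryState
    {
        Query *query;             /* the tree currently being planned      */
        List  *in_info_list;      /* moved out of Query, per this proposal */
        /* ... other planner-private working state ... */
    } QueryState;

The snag described above is then visible: a subquery RTE stores only a
Query, so query_tree_walker() has no way to reach the matching
in_info_list unless it can somehow map a Query back to its QueryState.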

-Neil
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[HACKERS] Moving a project from gborg to pgfoundry?

2005-03-20 Thread Shachar Shemesh
Hi all,
When pgfoundry was opened, there was some talk about moving the projects 
from gborg there. This has not, to date, happened. Is there any chance 
of this happening now, even if only for this specific project? I feel 
really bad about releasing a new version of OLE DB while the news of the 
previous release has not yet disappeared from the main page.

I don't even need all of the infrastructure. Moving just CVS and the 
mailing lists will be more than enough. There is nothing important in 
the bug and other areas that cannot be reconstructed in a few minutes' work.

 Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster