[HACKERS] row-wise comparison question/issue

2006-10-20 Thread Jeremy Drake
I noticed something odd when trying to use the row-wise comparison
mentioned in the release notes for 8.2 and in the docs
http://developer.postgresql.org/pgdocs/postgres/functions-comparisons.html#ROW-WISE-COMPARISON

This sets up a suitable test:

create type myrowtype AS (a integer, b integer);
create table myrowtypetable (rowval myrowtype);

insert into myrowtypetable select (a, b)::myrowtype from
generate_series(1,5) a, generate_series(1,5) b;

First I get this error:

select rowval < rowval from myrowtypetable ;
ERROR:  operator does not exist: myrowtype < myrowtype
LINE 1: select rowval < rowval from myrowtypetable ;
                      ^
HINT:  No operator matches the given name and argument type(s). You may
need to add explicit type casts.

OK, I guess I can live with that.  I did create a new type, and there are
no operators for it...

Now, I can do the following (pointless) query
select ROW((rowval).*) < ROW((rowval).*) from myrowtypetable ;

and I get 25 rows of 'f'.  So far so good.

But if I try to do
select rowval from myrowtypetable ORDER BY ROW((rowval).*);
ERROR:  could not identify an ordering operator for type record
HINT:  Use an explicit ordering operator or modify the query.

or even
select rowval from myrowtypetable ORDER BY ROW((rowval).*) USING <;
ERROR:  operator does not exist: record < record
HINT:  No operator matches the given name and argument type(s). You may
need to add explicit type casts.

I know that that less-than operator exists, because I just used it in the
query that worked above.  It seems that ORDER BY just can't find it for
some reason.

Is it not supposed to work in ORDER BY?  It doesn't really make sense to
me why ORDER BY should be special here.



-- 
All extremists should be taken out and shot.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] Microseconds granularity SIGALRM interrupt support

2006-10-20 Thread NikhilS
Hi, 

Currently we have enable_sig_alarm(), which provides millisecond-level
granularity when specifying a delay. I tried using it by just specifying a
nonzero value for the timeval.it_value.tv_usec field before calling
setitimer, but it didn't seem to work well.

Do we have any function in the postgres codebase which provides a microsecond level of delay coupled with SIGALRM support?

Regards,
Nikhils
EnterpriseDB   http://www.enterprisedb.com

--
All the world's a stage, and most of us are desperately unrehearsed.



Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Zeugswetter Andreas ADI SD
  At least according to [1], kernel AIO on Linux still doesn't work for
  buffered (i.e. non-O_DIRECT) files. There have been patches available
  for quite some time that implement this, but I'm not sure when they
  are likely to get into the mainline kernel.
  
  -Neil
  
  [1] http://lse.sourceforge.net/io/aio.html
 
 An improvement is going into 2.6.19 to handle asynchronous 
 vector reads and writes. This was covered by Linux Weekly 
 News a couple of weeks ago:
 http://lwn.net/Articles/201682/

That is orthogonal. We don't really need vectored I/O so much, since we
rely on OS readahead. We want async I/O to tell the OS earlier that we
will need these random pages, and to continue our work in the meantime.
For random I/O it is really important to tell the OS and the disk
subsystem about many pages in parallel, so it can optimize head movements
and keep more than one disk busy at a time.

Andreas

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Statements with syntax errors are not logged

2006-10-20 Thread Peter Eisentraut
Am Freitag, 20. Oktober 2006 00:32 schrieb Tom Lane:
 So I'm inclined to leave the behavior as-is.  The documentation for
 log_statement already says

   Note: Statements that generate syntax errors are not logged. Set
   log_min_error_statement to error to log such statements.

Oh, I missed that.  Let's leave it as is.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Andreas Pflug
Neil Conway wrote:
 Why does adminpack install functions into pg_catalog? This is
 inconsistent with the rest of the contrib/ packages, not to mention the
 definition of pg_catalog itself (which ought to hold builtin object
 definitions). And as AndrewSN pointed out on IRC, it also breaks
 pg_dump.
   
Having pg_dump not save the function definitions is intended
behaviour. Actually, this was different with admin80, and restoring an
8.0 backup to an 8.1 server will throw several errors now.
I'd consider installing contrib modules an act of installation, not
something that backup/restore should perform (after all, pg_restore isn't
able to do so, since it can't provide the dll/lib module).

Regards,
Andreas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Syntax bug? Group by?

2006-10-20 Thread Markus Schaber
Hi, Mark,

Mark Woodward wrote:

 My question, is it a syntactic technicality that PostgreSQL asks for a
 group by, or a bug in the parser?
 I think that it's a lack of special-casing the = operator. Imagine
 where ycis_id < 15 or where ycis_id @| $RECTANGLE or other (probably
 user-defined) operators on (probably user-defined) datatypes.

 The parser has no real knowledge of what the operators do; it simply
 requests one that returns a bool.

 One could make the parser special-case the = operator, and maybe some
 others, but I doubt it's worth the effort.
 
 I understand the SQL, and this isn't an SQL question (else it would be on
 a different list); it is a PostgreSQL internals question and IMHO a
 potential bug.

And that's why I talked about PostgreSQL internals.

 The original query:
 select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;
 
 Should NOT require a group by to get ycis_id in the results.

And, as I wrote, this is only possible when the query parser
special-cases the = operator (compared to all other operators).
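
For reference, a sketch of the two ways such a query can already be written
legally, with no parser special-casing (table and column names follow the
query quoted below):

select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15 group by ycis_id;

-- or drop the column from the target list and re-add it as a constant
select 15 as ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;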

HTH,
Markus
-- 
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Statements with syntax errors are not logged

2006-10-20 Thread Peter Eisentraut
Am Freitag, 20. Oktober 2006 09:13 schrieb Adrian Maier:
 It would have been much more convenient to see the bad queries in
 the logs ...

I think you are missing the point of this discussion.  All the erroneous 
queries are logged.  The question is merely under what configuration.
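
For anyone following along, a minimal sketch of the configuration in
question (log_min_error_statement is a superuser-only setting; it can
equally be set in postgresql.conf):

SET log_min_error_statement = error;   -- failing statements are now logged with their text
SELCT 1;                               -- deliberately broken; shows up in the server log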

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Alvaro Herrera
Zeugswetter Andreas ADI SD wrote:

  An improvement is going into 2.6.19 to handle asynchronous 
  vector reads and writes. This was covered by Linux Weekly 
  News a couple of weeks ago:
  http://lwn.net/Articles/201682/
 
 That is orthogonal. We don't really need vector io so much, since we
 rely on OS readahead. We want asyc IO to tell the OS earlier, that we
 will need these random pages, and continue our work in the meantime.

Of course, you can use asynchronous vector write with a single entry in
the vector if you want to perform an asynchronous write.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] bug or feature, || -operator and NULLs

2006-10-20 Thread Markus Schaber
Hi, Martijn,

Martijn van Oosterhout wrote:
 On Wed, Oct 18, 2006 at 11:36:44AM -0700, Jeff Davis wrote:
 What's being suggested simply violates common sense. Basically:

 if (a = b) then (a||c = b||c)

 If a is 'x' and b is 'x' and c is NULL, the above statement doesn't hold
 in PostgreSQL.
 
 Heh, well, c is supposed to be not NULL. Missed that. I was using the
 equals to include (NULL = NULL) but in SQL it's not like that.

Maybe you should replace = with IS NOT DISTINCT FROM :-)
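
A quick sketch of the difference (plain SQL, nothing assumed beyond 8.2):

SELECT NULL = NULL;                      -- null
SELECT NULL IS NOT DISTINCT FROM NULL;   -- true
SELECT 'x' || NULL;                      -- null, which is why a = b does not imply a||c = b||c for null c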

HTH,
Markus

-- 
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Statements with syntax errors are not logged

2006-10-20 Thread Adrian Maier

On 10/19/06, Tom Lane [EMAIL PROTECTED] wrote:

Peter Eisentraut [EMAIL PROTECTED] writes:
 When setting log_statement = 'all', statements that fail parsing are not
 logged.
 Is that intentional?

The 'mod' and 'ddl' settings obviously can't be handled until after
basic parsing.  We could create a completely separate code path for
'all' but I'm not sure I see the point.


Hello,

Sometimes it can be very useful to be able to see even the incorrect
commands: for example, when the incorrect query is generated by an
application or library that you haven't written yourself.

A few days ago I was experimenting with Lazarus (an Object Pascal
based IDE similar to Delphi) and I was getting some unexpected syntax
errors.  In order to debug the problem I had to hack the sources of the
postgres unit and add some writeln's right before the PQexec calls.

It would have been much more convenient to see the bad queries in
the logs ...


--
Adrian Maier

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Markus Schaber
Hi, Devrim,

Devrim GUNDUZ wrote:

 I have almost finished working on the multiple rpm + postmaster issue today.
 The spec file and patches in the pgsqlrpms CVS work almost as expected and
 do some preliminary work on the multiple postmaster installation issue
 (we can build the rpms and they are installed correctly).

Did you look at how the debian guys handle this?

HTH,
Markus

-- 
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Peter Eisentraut
Am Freitag, 20. Oktober 2006 15:19 schrieb Markus Schaber:
 Devrim GUNDUZ wrote:
  I have almost finished working on the multiple rpm + postmaster issue today.
  The spec file and patches in the pgsqlrpms CVS work almost as expected and
  do some preliminary work on the multiple postmaster installation issue
  (we can build the rpms and they are installed correctly).

 Did you look at how the debian guys handle this?

[EMAIL PROTECTED]:~$ ls -1 /usr/share/locale/de/LC_MESSAGES/p*
/usr/share/locale/de/LC_MESSAGES/pg_controldata-8.1.mo
/usr/share/locale/de/LC_MESSAGES/pg_ctl-8.1.mo
/usr/share/locale/de/LC_MESSAGES/pg_dump-8.1.mo
/usr/share/locale/de/LC_MESSAGES/pg_resetxlog-8.1.mo
/usr/share/locale/de/LC_MESSAGES/pgscripts-8.1.mo
/usr/share/locale/de/LC_MESSAGES/pitchablespeed.mo
/usr/share/locale/de/LC_MESSAGES/pmount.mo
/usr/share/locale/de/LC_MESSAGES/popt.mo
/usr/share/locale/de/LC_MESSAGES/postgres-8.1.mo
/usr/share/locale/de/LC_MESSAGES/ppdtranslations.mo
/usr/share/locale/de/LC_MESSAGES/privacy.mo
/usr/share/locale/de/LC_MESSAGES/psmisc.mo
/usr/share/locale/de/LC_MESSAGES/psql-8.1.mo

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Devrim GUNDUZ
Hi Markus,

On Fri, 2006-10-20 at 15:19 +0200, Markus Schaber wrote:

 Did you look at how the debian guys handle this? 

No, but I believe what Peter wrote two days ago was the Debian guys'
solution.

Regards, 
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Devrim GUNDUZ
Hi Peter,

On Fri, 2006-10-20 at 15:37 +0200, Peter Eisentraut wrote:
  Did you look at how the debian guys handle this?
 
 [EMAIL PROTECTED]:~$ ls -1 /usr/share/locale/de/LC_MESSAGES/p* 
 snip

Quick question: could you please point me to a way to change the names
of the .mo files and make each PostgreSQL release see its own locale
directory?

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] row-wise comparison question/issue

2006-10-20 Thread Merlin Moncure

On 10/20/06, Jeremy Drake [EMAIL PROTECTED] wrote:

I noticed something odd when trying to use the row-wise comparison
mentioned in the release notes for 8.2 and in the docs
http://developer.postgresql.org/pgdocs/postgres/functions-comparisons.html#ROW-WISE-COMPARISON

This sets up a suitable test:

create type myrowtype AS (a integer, b integer);
create table myrowtypetable (rowval myrowtype);

insert into myrowtypetable select (a, b)::myrowtype from
generate_series(1,5) a, generate_series(1,5) b;

First I get this error:

select rowval < rowval from myrowtypetable ;
ERROR:  operator does not exist: myrowtype < myrowtype
LINE 1: select rowval < rowval from myrowtypetable ;
                      ^
HINT:  No operator matches the given name and argument type(s). You may
need to add explicit type casts.

OK, I guess I can live with that.  I did create a new type, and there are
no operators for it...

Now, I can do the following (pointless) query
select ROW((rowval).*) < ROW((rowval).*) from myrowtypetable ;

and I get 25 rows of 'f'.  So far so good.

But if I try to do
select rowval from myrowtypetable ORDER BY ROW((rowval).*);
ERROR:  could not identify an ordering operator for type record
HINT:  Use an explicit ordering operator or modify the query.

or even
select rowval from myrowtypetable ORDER BY ROW((rowval).*) USING <;
ERROR:  operator does not exist: record < record
HINT:  No operator matches the given name and argument type(s). You may
need to add explicit type casts.

I know that that less-than operator exists, because I just used it in the
query that worked above.  It seems that ORDER BY just can't find it for
some reason.

Is it not supposed to work in ORDER BY? It doesn't really make sense to
me why ORDER BY should be special here.


That would be neat.  I know that row construction and comparison as
currently implemented is SQL standard... is the stuff you are
suggesting also standard? (I'm guessing no.)

I'll throw something else on the pile:

esilo=# select (foo).* from foo order by (foo).*;
ERROR:  column foo.* does not exist

esilo=# select (foo).* from foo;
a | b | c
---+---+---
(0 rows)

seems a little contradictory...

Note, Jeremy, that the more common use of row comparison would be to
construct rows on the fly, usually on fields comprising a key, with an
explicit order by:

select a,b,c from foo where (a,b,c) > (1,2,3) order by a,b,c;

This works fine.
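
A slightly fuller sketch of that pattern (the table foo and its composite
index are assumptions here, not from the original mail); in 8.2 the row
comparison can be satisfied directly by the index:

create table foo (a int, b int, c int);
create index foo_abc_idx on foo (a, b, c);

-- next page after the last key seen, (1,2,3)
select a, b, c from foo
where (a, b, c) > (1, 2, 3)
order by a, b, c
limit 50;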

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Alvaro Herrera
Devrim GUNDUZ wrote:
 Hi Peter,
 
 On Fri, 2006-10-20 at 15:37 +0200, Peter Eisentraut wrote:
   Did you look at how the debian guys handle this?
  
  [EMAIL PROTECTED]:~$ ls -1 /usr/share/locale/de/LC_MESSAGES/p* 
  snip
 
 Quick question: could you please point me to a way to change the names
 of the .mo files and make each PostgreSQL release see its own locale
 directory?

The package files are here:

http://packages.debian.org/unstable/misc/postgresql-8.1

you can find the patch Debian applies following a link below (the
diff.gz), on which you see the method for doing this.

ISTM these parts of that patch should be applied to our code:

+--- postgresql-8.1beta3-old/src/bin/pg_controldata/pg_controldata.c  2005-10-03 02:28:41.0 +0200
++++ postgresql-8.1beta3/src/bin/pg_controldata/pg_controldata.c  2005-10-13 18:00:42.0 +0200
+@@ -73,7 +73,7 @@
+   char       *strftime_fmt = "%c";
+   const char *progname;
+ 
+-  set_pglocale_pgservice(argv[0], "pg_controldata");
++  set_pglocale_pgservice(argv[0], "pg_controldata-8.1");

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] row-wise comparison question/issue

2006-10-20 Thread Tom Lane
Jeremy Drake [EMAIL PROTECTED] writes:
 select rowval from myrowtypetable ORDER BY ROW((rowval).*) USING <;
 ERROR:  operator does not exist: record < record

This isn't required by the spec, and it's not implemented.  I don't
see that it'd give any new functionality anyway, since you can always
do ORDER BY rowval.f1, rowval.f2, ...

The cases that are implemented are comparisons of explicit row
constructors, eg (a,b,c) < (d,e,f) --- which I think is all
you'll find support for in the spec.
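
Concretely, a sketch with Jeremy's test table, using the composite type's
actual field names a and b in place of the generic f1/f2 above (the value
ROW(3, 3) is just an arbitrary comparison key):

select rowval from myrowtypetable order by (rowval).a, (rowval).b;

select rowval from myrowtypetable where ROW((rowval).a, (rowval).b) < ROW(3, 3);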

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Devrim GUNDUZ
Hi Alvaro,

On Fri, 2006-10-20 at 10:53 -0300, Alvaro Herrera wrote:
 
 The package files are here:
 
 http://packages.debian.org/unstable/misc/postgresql-8.1
 
 you can find the patch Debian applies following a link below (the
 diff.gz), on which you see the method for doing this. 

Thanks. Let me finish it this weekend.

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Microseconds granularity SIGALRM interrupt support

2006-10-20 Thread Tom Lane
NikhilS [EMAIL PROTECTED] writes:
 Do we have any function in the postgres codebase which provides microseconds
 level of delay coupled with SIGALRM support?

On most Unixen the resolution of SIGALRM is millisecond(s), so you'd be
living in a dream world if you assumed it would work.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread mark
On Fri, Oct 20, 2006 at 11:13:33AM +0530, NikhilS wrote:
 Good idea, but async i/o is generally poorly supported.

 Async i/o is stably supported on most *nix (apart from Linux 2.6.*) plus
 Windows.
 Guess it would be still worth it, since one fine day 2.6.* will start
 supporting it properly too.

Only if it can be shown that async I/O actually results in an improvement.

Currently, it's speculation, with the one trial implementation showing
little to no improvement. Support is a big word in the face of this
initial evidence... :-)

It's possible that the PostgreSQL design limits the effectiveness of
such things. It's possible that PostgreSQL, having been optimized to not
use features such as these, has found a way of operating better,
contrary to those who believe that async I/O, threads, and so on, are
faster. It's possible that async I/O is supported, but poorly implemented
on most systems.

Take into account that async I/O doesn't guarantee parallel I/O. The
concept of async I/O is that an application can proceed to work on other
items while waiting for scheduled work in the background. This can be
achieved with a background system thread (GLIBC?). There is no requirement
that it actually process the requests in parallel. In fact, any system that
did process the requests in parallel, would be easier to run to a halt.
For example, for the many systems that do not use RAID, we would potentially
end up with scattered reads across the disk all running in parallel, with
no priority on the reads, which could mean that data we do not yet need
is returned first, causing PostgreSQL to be unable to move forwards. If
the process is CPU bound at all, this could be an overall loss.

Point being, async I/O isn't a magic bullet. There is no evidence that it
would improve the situation on any platform.

One would need to consider the PostgreSQL architecture, determine where
the bottleneck actually is, and understand why it is a bottleneck fully,
before one could decide how to fix it. So, what is the bottleneck? Is
PostgreSQL unable to max out the I/O bandwidth? Where? Why?

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Peter Eisentraut
Am Freitag, 20. Oktober 2006 15:53 schrieb Alvaro Herrera:
 ISTM these parts of that patch should be applied to our code:

 +--- postgresql-8.1beta3-old/src/bin/pg_controldata/pg_controldata.c  2005-10-03 02:28:41.0 +0200
 ++++ postgresql-8.1beta3/src/bin/pg_controldata/pg_controldata.c  2005-10-13 18:00:42.0 +0200
 +@@ -73,7 +73,7 @@
 +   char       *strftime_fmt = "%c";
 +   const char *progname;
 +
 +-  set_pglocale_pgservice(argv[0], "pg_controldata");
 ++  set_pglocale_pgservice(argv[0], "pg_controldata-8.1");

Once the RPM crowd has figured out their needs, I'm all in favor of taking the 
common pieces from the Debs and RPMs and turning them into built-in build 
features.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 ISTM these parts of that patch should be applied to our code:

 +--- postgresql-8.1beta3-old/src/bin/pg_controldata/pg_controldata.c  2005-10-03 02:28:41.0 +0200
 ++++ postgresql-8.1beta3/src/bin/pg_controldata/pg_controldata.c  2005-10-13 18:00:42.0 +0200
 +@@ -73,7 +73,7 @@
 +   char       *strftime_fmt = "%c";
 +   const char *progname;
 + 
 +-  set_pglocale_pgservice(argv[0], "pg_controldata");
 ++  set_pglocale_pgservice(argv[0], "pg_controldata-8.1");

Egad.  What an ugly, unmaintainable crock.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  ISTM these parts of that patch should be applied to our code:
 
  +--- postgresql-8.1beta3-old/src/bin/pg_controldata/pg_controldata.c  2005-10-03 02:28:41.0 +0200
  ++++ postgresql-8.1beta3/src/bin/pg_controldata/pg_controldata.c  2005-10-13 18:00:42.0 +0200
  +@@ -73,7 +73,7 @@
  +   char       *strftime_fmt = "%c";
  +   const char *progname;
  + 
  +-  set_pglocale_pgservice(argv[0], "pg_controldata");
  ++  set_pglocale_pgservice(argv[0], "pg_controldata-8.1");
 
 Egad.  What an ugly, unmaintainable crock.

Well, we could certainly not use it in the same form, but with a macro,
making it more future-proof.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] misbehaving planer?

2006-10-20 Thread Darcy Buskermolen
I have a setup in which a table has been partitioned into 30 partitions on 
type (1-30); however, no matter what I do I can't make the planner use 
constraint exclusion on it. As you can see from the plan, it figures that 
there is at least 1 row in each partition (which there is not).  Also, 
yesterday, when I was first looking into this, the plan for partitions 28, 
29 and 30 was different (they were still 0 rows then too): it showed the 
estimated rows as 4.


(All the following were done after a fresh VACUUM ANALYZE)


db=# SELECT version();
                                                version
----------------------------------------------------------------------------------------------------------
 PostgreSQL 8.1.3 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
(1 row)




db=# show constraint_exclusion;
 constraint_exclusion 
--
 on
(1 row)

db=# \d tbl_ps
                                   Table "public.tbl_ps"
     Column     |         Type          |                      Modifiers
----------------+-----------------------+------------------------------------------------------
 id             | integer               | not null default nextval('tbl_ps_id_seq'::regclass)
 uid            | integer               | 
 normalized_txt | character varying(50) | 
 type           | smallint              | 
 lastlogin      | integer               | 
Indexes:
    "id_idx" btree (pse_id)
Triggers:
    tbl_ps_partitioner BEFORE INSERT OR DELETE OR UPDATE ON tbl_ps FOR EACH ROW EXECUTE PROCEDURE tbl_ps_handler()

db=#  \d s_ps.tbl_ps_type_1
                                 Table "s_ps.tbl_ps_type_1"
     Column      |         Type          |                      Modifiers
-----------------+-----------------------+------------------------------------------------------
 id              | integer               | not null default nextval('tbl_ps_id_seq'::regclass)
 uid             | integer               | 
 normalized_text | character varying(50) | 
 interest_type   | smallint              | 
 lastlogin       | integer               | 
Indexes:
    "index_09_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= '0'::text AND normalized_text::text <= '9'::text
    "index_a_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'a'::text AND normalized_text::text < 'b'::text
    "index_b_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'b'::text AND normalized_text::text < 'c'::text
    "index_c_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'c'::text AND normalized_text::text < 'd'::text
    "index_cluster_on_part_1" btree (normalized_text, lastlogin) CLUSTER
    "index_d_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'd'::text AND normalized_text::text < 'e'::text
    "index_e_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'e'::text AND normalized_text::text < 'f'::text
    "index_f_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'f'::text AND normalized_text::text < 'g'::text
    "index_g_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'g'::text AND normalized_text::text < 'h'::text
    "index_h_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'h'::text AND normalized_text::text < 'i'::text
    "index_i_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'i'::text AND normalized_text::text < 'j'::text
    "index_id_on_type_1" btree (id)
    "index_j_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'j'::text AND normalized_text::text < 'k'::text
    "index_k_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'k'::text AND normalized_text::text < 'l'::text
    "index_l_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'l'::text AND normalized_text::text < 'm'::text
    "index_m_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'm'::text AND normalized_text::text < 'n'::text
    "index_n_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'n'::text AND normalized_text::text < 'o'::text
    "index_o_on_tupe_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'o'::text AND normalized_text::text < 'p'::text
    "index_p_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'p'::text AND normalized_text::text < 'q'::text
    "index_q_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'q'::text AND normalized_text::text < 'r'::text
    "index_r_on_type_1" btree (normalized_text, lastlogin) WHERE normalized_text::text >= 'r'::text AND normalized_text::text < 's'::text

Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Zeugswetter Andreas ADI SD

 db=#  \d s_ps.tbl_ps_type_1
       Table "s_ps.tbl_ps_type_1"
...
 Check constraints:
     "tbl_ps_typ_1_type_check" CHECK (type = 1)
 Inherits: tbl_ps
...
 myyearbook=# EXPLAIN ANALYZE SELECT uid FROM tbl_ps WHERE type = 1 and 
 normalized_text='bush';
                                                   QUERY PLAN
...
   ->  Index Scan using index_b_on_type_2 on tbl_ps_type_2 tbl_ps  
       (cost=0.00..4.53 rows=1 width=4) (actual time=0.099..0.099 rows=0 loops=1)
         Index Cond: ((normalized_text)::text = 'bush'::text)
         Filter: (type = 1)

Do you have corresponding constraints on all the other table partitions?

BTW, I doubt that the many partial indexes are really helpful here.
What you are doing basically only replaces one btree header page.
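
For comparison, a sketch of the alternative being suggested: one plain
composite index over the same columns (the index name here is made up):

CREATE INDEX tbl_ps_type_1_text_login_idx
    ON s_ps.tbl_ps_type_1 (normalized_text, lastlogin);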

Andreas

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Tom Lane
Darcy Buskermolen [EMAIL PROTECTED] writes:
 I have a setup in which a table has been partitioned into 30 partitions on 
 type (1 -30), however no matter what I do i can't make the planner try to use 
 constraint exclusion on it.

Do you have constraint_exclusion turned on?  What are the check
constraints on the other children of tbl_ps?  This example doesn't
really show whether the planner is misbehaving or not.

The multiple-partial-index setup on tbl_ps_type_1 looks pretty silly to
me...  it seems unlikely to buy anything except extra planning overhead.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Zeugswetter Andreas ADI SD

  Good idea, but async i/o is generally poorly supported.

 Only if it can be shown that async I/O actually results in an 
 improvement.

sure.

 fix it. So, what is the bottleneck? Is PostgreSQL unable to 
 max out the I/O bandwidth? Where? Why?

Yup, that would be the scenario where it helps (provided that you have
a smart disk or a disk array and an intelligent OS aio implementation).
It would be used to fetch the data pages pointed at from an index leaf,
or the next level index pages.
We measured the IO bandwidth difference on Windows with EMC as being
nearly proportional to the number of parallel outstanding requests, up to
at least 16-32.

Andreas

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Darcy Buskermolen
On Friday 20 October 2006 08:26, Tom Lane wrote:
 Darcy Buskermolen [EMAIL PROTECTED] writes:
  I have a setup in which a table has been partitioned into 30 partitions
  on type (1 -30), however no matter what I do i can't make the planner try
  to use constraint exclusion on it.

 Do you have constraint_exclusion turned on?  What are the check
 constraints on the other children of tbl_ps?  

Yes, CE is on (you can see it in the session paste). The other child tables 
have similar CHECKs of type=2, type=3, type=4 and so on, one for each of the 
30 tables.


 This example doesn't 
 really show whether the planner is misbehaving or not.

 The multiple-partial-index setup on tbl_ps_type_1 looks pretty silly to
 me...  it seems unlikely to buy anything except extra planning overhead.

This was a direct port from a big fat table. I agree, I'm not convinced that 
the partial indexes will buy me much, but this box is so IO bound that the 
planner overhead may just offset the I/O needed for bigger indexes.



   regards, tom lane

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Tom Lane
Darcy Buskermolen [EMAIL PROTECTED] writes:
 Yes, CE is on (you can see it in the session paste). The other child tables 
 have similar CHECKs of type=2, type=3, type=4 and so on, one for each of the 
 30 tables.

[ looks again... ]  Oh, here's your problem:

 type   | smallint  | 

Check constraints:
tbl_ps_typ_1_type_check CHECK (type = 1)

That CHECK is a cross-type comparison (int2 vs int4).  Per the docs:

 Avoid cross-datatype comparisons in the CHECK constraints, as the
 planner will currently fail to prove such conditions false. For
 example, the following constraint will work if x is an integer
 column, but not if x is a bigint:

 CHECK ( x = 1 )

 For a bigint column we must use a constraint like: 

 CHECK ( x = 1::bigint )

 The problem is not limited to the bigint data type --- it can
 occur whenever the default data type of the constant does not match
 the data type of the column to which it is being
 compared. Cross-datatype comparisons in the supplied queries are
 usually OK, just not in the CHECK conditions.

So you can either cast to int2 in the CHECKs, or change the column to
plain integer (int2 is probably not saving you anything here anyway).
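
Concretely, an untested sketch of the two fixes for the partition shown
earlier (constraint and column names follow the check constraint above):

-- option 1: make the constant the same type as the column
ALTER TABLE s_ps.tbl_ps_type_1 DROP CONSTRAINT tbl_ps_typ_1_type_check;
ALTER TABLE s_ps.tbl_ps_type_1
    ADD CONSTRAINT tbl_ps_typ_1_type_check CHECK (type = 1::smallint);

-- option 2: widen the column instead (needs doing on the parent and every partition)
ALTER TABLE s_ps.tbl_ps_type_1 ALTER COLUMN type TYPE integer;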

 The multiple-partial-index setup on tbl_ps_type_1 looks pretty silly to
 me...  it seems unlikely to buy anything except extra planning overhead.

 This was a direct port from a big fat table. I agree, I'm not convinced that 
 the partial indexes will buy me much, but this box is so IO bound that the 
 planner overhead may just offset the I/O needed for bigger indexes.

Well, you should measure it, but I bet the planner wastes way more time
considering the twenty-some indexes than is saved by avoiding one level
of btree search, which is about the most you could hope for.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Darcy Buskermolen
On Friday 20 October 2006 09:27, Tom Lane wrote:
 Darcy Buskermolen [EMAIL PROTECTED] writes:
  Yes, CE is on (you can see it in the session paste). The other child
  tables have similar CHECKs of type=2, type=3, type=4 and so on, one for
  each of the 30 tables.

 [ looks again... ]  Oh, here's your problem:

  type   | smallint  |

 Check constraints:
 tbl_ps_typ_1_type_check CHECK (type = 1)

 That CHECK is a cross-type comparison (int2 vs int4).  Per the docs:

Doh, thanks for the sanity check. I completely missed that.

  Avoid cross-datatype comparisons in the CHECK constraints, as the
  planner will currently fail to prove such conditions false. For
  example, the following constraint will work if x is an integer
  column, but not if x is a bigint:

  CHECK ( x = 1 )

  For a bigint column we must use a constraint like:

  CHECK ( x = 1::bigint )

  The problem is not limited to the bigint data type --- it can
  occur whenever the default data type of the constant does not match
  the data type of the column to which it is being
  compared. Cross-datatype comparisons in the supplied queries are
  usually OK, just not in the CHECK conditions.

 So you can either cast to int2 in the CHECKs, or change the column to
 plain integer (int2 is probably not saving you anything here anyway).

  The multiple-partial-index setup on tbl_ps_type_1 looks pretty silly to
  me...  it seems unlikely to buy anything except extra planning overhead.
 
  This was a direct port from a big fat table. I agree, I'm not convinced
  that the partial indexes will buy me much, but this box is so IO bound
  that the planner overhead may just offset the I/O needed for bigger
  indexes.

 Well, you should measure it, but I bet the planner wastes way more time
 considering the twenty-some indexes than is saved by avoiding one level
 of btree search, which is about the most you could hope for.

Yes, measurement will happen; step one was the partitioning.


   regards, tom lane

 ---(end of broadcast)---
 TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread mark
On Fri, Oct 20, 2006 at 05:37:48PM +0200, Zeugswetter Andreas ADI SD wrote:
 Yup, that would be the scenario where it helps (provided that you have
 a smart disk or a disk array and an intelligent OS aio implementation).
 It would be used to fetch the data pages pointed at from an index leaf,
 or the next level index pages.
 We measured the IO bandwidth difference on Windows with EMC as being 
 nearly proportional to the number of parallel outstanding requests, up to
 at least 16-32.

Measured it using what? I was under the impression only one
proof-of-implementation existed, and that the scenarios and
configuration of the person who wrote it did not show significant
improvement.

You have PostgreSQL on Windows with EMC with async I/O support to
test with?

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] xlogdump fixups and WAL log question.

2006-10-20 Thread Theo Schlossnagle
Not sure who cares, so xzilla indicated I should drop a note here.  I  
just made the xlogdump stuff work for 8.1 (trivial) and fixed a few  
other small issues that caused it to not work right both generally  
and in our environment.


http://pgfoundry.org/tracker/index.php?func=detail&aid=1000760&group_id=1000202&atid=772


We're using it to track down what's causing some wal log ruckus.   
We're generating about a quarter terabyte of WAL logs a day (on bad  
days) which is posing some PITR backup pains.  That amount isn't a  
severe challenge to backup, but our previous install was on Oracle  
and it generated substantially less archive redo logs (10-20 gigs per  
day).


Is it possible to create tables in fashion that will not write info  
to the WAL log -- knowingly and intentionally making them  
unrecoverable?  This is very desirable for us.  We snapshot tables  
from a production environment.  If the database goes down and we  
recover, the old snapshots are out of date anyway and serve no useful  
purpose.  The periodic snapshot procedure would re-snap them in short  
order anyway.  I'd like to do:


INSERT INTO TABLE tblfoo_snap1 AS SELECT * from table on remote  
database NO LOGGING;


(NO LOGGING being the only part we're currently missing) Is something  
like this possible?


Cheers ;-)
Theo

// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Martijn van Oosterhout
On Fri, Oct 20, 2006 at 10:05:01AM -0400, [EMAIL PROTECTED] wrote:
 Only if it can be shown that async I/O actually results in an improvement.
 
 Currently, it's speculation, with the one trial implementation showing
 little to no improvement. Support is a big word in the face of this
 initial evidence... :-)

Yeah, the single test so far, on a system that didn't support
asynchronous I/O, doesn't prove anything. It would help if there was a
reasonable system that did support async I/O so it could be tested
properly.

 Point being, async I/O isn't a magic bullet. There is no evidence that it
 would improve the situation on any platform.

I think it's likely to help with index scans. Prefetching index leaf
pages I think could be good. As would prefetching pages from a
(bitmap) index scan.

It won't help much on very simple queries, but where it should shine is
a merge join across two index scans. Currently postgresql would do
something like:

Loop
  Fetch left tuple for join
Fetch btree leaf
  Fetch tuple off disk
  Fetch right tuples for join
Fetch btree leaf
  Fetch tuple off disk

Currently it fetches a block from one file, then a block from the other,
back and forth. With async I/O you could read from both files and the
indexes simultaneously, thus in theory leading to better I/O
throughput.
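
(The shape of query meant here is roughly this sketch: a merge join of two
large tables on their indexed keys, so both inputs are index scans that
could be prefetched; the table names are hypothetical:)

SELECT a.id, a.payload, b.payload
FROM big_a a JOIN big_b b ON a.id = b.id
ORDER BY a.id;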

 One would need to consider the PostgreSQL architecture, determine where
 the bottleneck actually is, and understand why it is a bottleneck fully,
 before one could decide how to fix it. So, what is the bottleneck? Is
 PostgreSQL unable to max out the I/O bandwidth? Where? Why?

For systems where postgresql is unable to saturate the i/o bandwidth,
this is the proposed solution. Are there others?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] xlogdump fixups and WAL log question.

2006-10-20 Thread Tom Lane
Theo Schlossnagle [EMAIL PROTECTED] writes:
 Is it possible to create tables in fashion that will not write info  
 to the WAL log -- knowingly and intentionally making them  
 unrecoverable?

Use temp tables?

Also, it's likely that much of the WAL volume is full-page images.
While you can't safely turn those off in 8.1, you can dial down the
frequency of occurrence by increasing checkpoint_segments and
checkpoint_timeout as much as you can stand.  (The tradeoffs are
amount of space occupied by pg_xlog/ and time to recover from a crash.)
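
A minimal sketch of the temp-table route (the source table name below is a
placeholder); temp tables live in backend-local buffers and their contents
are not WAL-logged, though they are also visible only to the session that
created them:

CREATE TEMP TABLE tblfoo_snap1 AS
    SELECT * FROM tblfoo_source;   -- generates no WAL for the snapshot data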

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [PATCHES] zic with msvc

2006-10-20 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 + #ifdef WIN32
 + #define _WIN32_WINNT 0x0400
 + #endif

Hmm ... in pg_ctl.c I see

#define _WIN32_WINNT 0x0500

Is there a reason for these to be different?  Are there other places
that will need this (ie, maybe it should be in c.h instead?)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] xlogdump fixups and WAL log question.

2006-10-20 Thread Theo Schlossnagle


On Oct 20, 2006, at 1:58 PM, Tom Lane wrote:


Theo Schlossnagle [EMAIL PROTECTED] writes:

Is it possible to create tables in fashion that will not write info
to the WAL log -- knowingly and intentionally making them
unrecoverable?


Use temp tables?


temp tables won't work too well -- unless I can make a whole  
tablespace temp and multiple backends can see it.  They work fine  
for small tables we snapshot (couple hundred or even a few thousand  
rows), but many of the tables are a few hundred thousand rows and  
several processes on the system all need them.



Also, it's likely that much of the WAL volume is full-page images.
While you can't safely turn those off in 8.1, you can dial down the
frequency of occurrence by increasing checkpoint_segments and
checkpoint_timeout as much as you can stand.  (The tradeoffs are
amount of space occupied by pg_xlog/ and time to recover from a  
crash.)


Our pg_xlog is currently at 9.6GB.  Not sure I can reasonably tune it  
up much higher.


// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [PATCHES] zic with msvc

2006-10-20 Thread Magnus Hagander
  + #ifdef WIN32
  + #define _WIN32_WINNT 0x0400
  + #endif
 
 Hmm ... in pg_ctl.c I see
 
 #define _WIN32_WINNT 0x0500
 
 Is there a reason for these to be different?  Are there other 
 places that will need this (ie, maybe it should be in c.h instead?)

Not really. The default appears to be 0x0400 for MinGW (or it wouldn't
have worked before), but 0x0350 or so for Visual C++.
If we define it to 0x0500 we pull in headers that will only work on 2000
or newer. But I don't really see that as a problem - I think we said
that we don't care about earlier ones anyway, in which case it's
definitely not a problem to stick it in c.h.

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Tom Lane
 On Fri, Oct 20, 2006 at 10:05:01AM -0400, [EMAIL PROTECTED] wrote:
 One would need to consider the PostgreSQL architecture, determine where
 the bottleneck actually is, and understand why it is a bottleneck fully,
 before one could decide how to fix it. So, what is the bottleneck?

I think Mark's point is not being taken sufficiently to heart in this
thread.

It's not difficult at all to think of reasons why attempted read-ahead
could be a net loss.  One that's bothering me right at the moment is
that each such request would require a visit to the shared buffer
manager to see if we already have the desired page in buffers.  (Unless
you think it'd be cheaper to force the kernel to uselessly read the
page...)  Then another visit when we actually need the page.  That means
that readahead will double the contention for the buffer manager locks,
which is likely to put us right back into the context swap storm problem
that we've spent the last couple of releases working out of.

So far I've seen no evidence that async I/O would help us, only a lot
of wishful thinking.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Merlin Moncure

On 10/20/06, Tom Lane [EMAIL PROTECTED] wrote:

So far I've seen no evidence that async I/O would help us, only a lot
of wishful thinking.


Is this thread moot?  While researching this thread I came across this
article: http://kerneltrap.org/node/6642 describing claims of a 30%
performance boost when using posix_fadvise to ask the O/S to prefetch
data.  ISTM that this kind of improvement is in line with what AIO can
provide, and posix_fadvise is cleaner, not requiring threads and such.

merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Martijn van Oosterhout
On Fri, Oct 20, 2006 at 03:04:55PM -0400, Merlin Moncure wrote:
 On 10/20/06, Tom Lane [EMAIL PROTECTED] wrote:
 So far I've seen no evidence that async I/O would help us, only a lot
 of wishful thinking.
 
 is this thread moot?  while researching this thread I came across this
 article: http://kerneltrap.org/node/6642 describing claims of 30%
 performance boost when using posix_fadvise to ask the o/s to prefetch
 data.  istm that this kind of improvement is in line with what aio can
 provide, and posix_fadvise is cleaner, not requiring threads and such.

Hmm, my man page says:

   POSIX_FADV_WILLNEED and POSIX_FADV_NOREUSE both initiate a
   non-blocking read of the specified region into the page cache. 
   The amount of data read may be decreased by the kernel depending
   on VM load. (A few megabytes will usually be fully satisfied,
   and more is rarely useful.)

This appears to be exactly what we want, no? It would be nice to get
some idea of what systems support this.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] backup + restore fails

2006-10-20 Thread Andreas Seltenreich
Holger Schoenen writes:

 ERROR:  invalid byte sequence for encoding UTF8: 0xe46973
 Command was: --
[...]
 -- Started on 2006-09-15 14:56:51 Westeuropäische Normalzeit

The same problem was recently reported on the pgsql-de-allgemein list.

Would just avoiding %Z in Win32's strftime be an acceptable solution?
elog.c is already doing this, however because of the length of the
zone names, not the localization problem.  The attached patch is
completely untested because I don't have access to a win32 box.

regards,
andreas

Index: src/bin/pg_dump/pg_backup_archiver.c
===
RCS file: /projects/cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v
retrieving revision 1.137
diff -c -r1.137 pg_backup_archiver.c
*** src/bin/pg_dump/pg_backup_archiver.c  14 Oct 2006 23:07:22 -0000  1.137
--- src/bin/pg_dump/pg_backup_archiver.c  20 Oct 2006 18:59:11 -0000
***************
*** 2780,2785 ****
  {
      char        buf[256];
  
!     if (strftime(buf, 256, "%Y-%m-%d %H:%M:%S %Z", localtime(&tim)) != 0)
          ahprintf(AH, "-- %s %s\n\n", msg, buf);
  }
--- 2780,2793 ----
  {
      char        buf[256];
  
!     if (strftime(buf, 256,
!                  /* Win32 timezone names are long and localized and
!                   * can interfere with utf-8 dumps */
! #ifndef WIN32
!                  "%Y-%m-%d %H:%M:%S %Z",
! #else
!                  "%Y-%m-%d %H:%M:%S",
! #endif
!                  localtime(&tim)) != 0)
          ahprintf(AH, "-- %s %s\n\n", msg, buf);
  }
Index: src/bin/pg_dump/pg_dumpall.c
===
RCS file: /projects/cvsroot/pgsql/src/bin/pg_dump/pg_dumpall.c,v
retrieving revision 1.84
diff -c -r1.84 pg_dumpall.c
*** src/bin/pg_dump/pg_dumpall.c  7 Oct 2006 20:59:05 -0000  1.84
--- src/bin/pg_dump/pg_dumpall.c  20 Oct 2006 18:59:12 -0000
***************
*** 1320,1325 ****
      char        buf[256];
      time_t      now = time(NULL);
  
!     if (strftime(buf, 256, "%Y-%m-%d %H:%M:%S %Z", localtime(&now)) != 0)
          printf("-- %s %s\n\n", msg, buf);
  }
--- 1320,1333 ----
      char        buf[256];
      time_t      now = time(NULL);
  
!     if (strftime(buf, 256,
!                  /* Win32 timezone names are long and localized and
!                   * can interfere with utf-8 dumps */
! #ifndef WIN32
!                  "%Y-%m-%d %H:%M:%S %Z",
! #else
!                  "%Y-%m-%d %H:%M:%S",
! #endif
!                  localtime(&now)) != 0)
          printf("-- %s %s\n\n", msg, buf);
  }

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] misbehaving planer?

2006-10-20 Thread Simon Riggs
On Fri, 2006-10-20 at 12:27 -0400, Tom Lane wrote:
  This was a direct port from a big fat table. I agree, I'm not
 convinced that 
  the  partial indexes will buy me much, but this box is so IO bound
 that the 
  planner overhead my just offset the needing to IO bigger indexes.
 
 Well, you should measure it, but I bet the planner wastes way more
 time
 considering the twenty-some indexes than is saved by avoiding one
 level
 of btree search, which is about the most you could hope for.

I note that in allpaths.c:set_plain_rel_pathlist() we consider partial
indexes before we consider constraint exclusion. We normally wouldn't
notice that, but in this case it would be a big loss.

Is there a reason for that? check_partial_indexes() doesn't seem to have
important side-effects that are required before
relation_excluded_by_constraints() can be tested.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Devrim GUNDUZ
Hi,

On Fri, 2006-10-20 at 10:25 -0400, Tom Lane wrote:
 
 Egad.  What an ugly, unmaintainable crock. 

I want to second this. I would not make this in RPM spec file.

What about changing

localedir='${prefix}/share/locale'

line (in configure) to

localedir='${prefix}/share/locale/pgsql/$PGMAJORVERSION'

or so?

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Peter Eisentraut
Tom Lane wrote:
 Egad.  What an ugly, unmaintainable crock.

There is only one major PostgreSQL release per year, so the time spent 
thinking about an alternative solution already exceeds the cost of just 
taking the existing one.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Neil Conway
On Fri, 2006-10-20 at 11:50 +0200, Andreas Pflug wrote:
 Having pg_dump not saving the function definitions is an intended
 behaviour.

The manual defines the pg_catalog schema as containing the system
tables and all the built-in data types, functions, and
operators (section 5.7.5). adminpack is none of the above, so I don't
think it should be located in pg_catalog.

 I'd consider installing contrib modules as an act of installation, not
 something that backup/restore should perform

AFAICS this is inconsistent with how every other contrib module behaves:
installing the contrib module into a database results in DDL for that
contrib module being included in pg_dump's output.

 (finally, pg_restore isn't able to do so, since it can't provide
 the dll/lib module).

This is not related to adminpack per se: pg_dump is never able to
provide the shared object for any C language UDF. By your logic, pg_dump
shouldn't emit DDL for any such function.

-Neil



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Neil Conway
On Fri, 2006-10-20 at 05:52 +0100, Dave Page wrote:
 The adminpack was originally written and intended to become builtin
 functions

This is not unique to adminpack: several contrib modules might
eventually become (or have already become) builtins, but adminpack is
the only module that defines objects in the pg_catalog schema.

 pg_catalog was used to ensure compatibility in the future

This is again not unique to adminpack. If users install a contrib module
into a schema that is in their search path, then if the module is
subsequently moved to pg_catalog, no queries will need to be changed. If
users install a module into some schema that isn't in their search path
and use explicit schema references, they are essentially asking for
their application to break if the object moves to a different schema.

  And as AndrewSN pointed out on IRC, it also breaks pg_dump.
 
 It does? In what way?

It breaks in the sense of completely not working :)

% pg_dump | grep file_write
% cd contrib/adminpack
% grep -A1 file_write adminpack.sql
CREATE FUNCTION pg_catalog.pg_file_write(text, text, bool) RETURNS
bigint
   AS '$libdir/adminpack', 'pg_file_write'
LANGUAGE C VOLATILE STRICT;
% psql -f adminpack.sql 
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
% pg_dump | grep file_write
%

-Neil



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Multiple postmaster + RPM + locale issues

2006-10-20 Thread Peter Eisentraut
Devrim GUNDUZ wrote:
 localedir='${prefix}/share/locale/pgsql/$PGMAJORVERSION'

It's probably better not to create nonstandard directories 
below /usr/share/locale, because that's not your directory.  If you 
want to go with nonstandard paths, create one 
in /usr/share/postgresql/.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] xlogdump fixups and WAL log question.

2006-10-20 Thread Simon Riggs
On Fri, 2006-10-20 at 13:18 -0400, Theo Schlossnagle wrote:
 Not sure who cares, so xzilla indicated I should drop a note here.  I  
 just made the xlogdump stuff work for 8.1 (trivial) and fixed a few  
 other small issues that caused it to not work right both generally  
 and in our environment.
 
  http://pgfoundry.org/tracker/index.php?func=detail&aid=1000760&group_id=1000202&atid=772

Diogo Biazus was working on that; I care also.

 We're using it to track down what's causing some wal log ruckus.   
 We're generating about a quarter terabyte of WAL logs a day (on bad  
 days) which is posing some PITR backup pains.  That amount isn't a  
 severe challenge to backup, but our previous install was on Oracle  
 and it generated substantially less archive redo logs (10-20 gigs per  
 day).

As Tom says, definitely because of full_page_writes=on

 Is it possible to create tables in fashion that will not write info  
 to the WAL log -- knowingly and intentionally making them  
 unrecoverable?  This is very desirable for us.  We snapshot tables  
 from a production environment.  If the database goes down and we  
 recover, the old snapshots are out of date anyway and serve no useful  
 purpose.  The periodic snapshot procedure would re-snap them in short  
 order anyway.  I'd like to do:
 
 INSERT INTO TABLE tblfoo_snap1 AS SELECT * from table on remote  
 database NO LOGGING;
 
 (NO LOGGING being the only part we're currently missing) Is something  
 like this possible?

Do you want this because of:
1) performance?
2) to reduce the WAL volume of PITR backups?

If you're thinking (1), then I guess I'd ask whether you've considered
what will happen when the reporting environment includes data from other
sources, as it inevitably will. At that point, data loss would be much
more annoying. My experience is that the success of your current
implementation will quickly lead to greatly increased user requirements.

I've been looking at ways of reducing the WAL volume for PITR backups.
Here's a few ideas:

1. Provide a filter that can be easily used by archive_command to remove
full page writes from WAL files. This would require us to disable the
file size test when we begin recovery on a new WAL file, plus we would
need to redesign the initial location of the checkpoint record, since we
could no longer rely on the XLogRecPtr being a byte offset within the
file.

e.g. archive_command = 'pg_WAL_filter -f | ... '

2. Include tablespaceid within the header of xlog records. This would
allow us to filter out WAL from one or more tablespaces, similarly to
(1), plus it would also allow single tablespace recovery.

e.g. archive_command = 'pg_WAL_filter -x 35456 | ... '

There are some other ideas for generally reducing WAL volume also.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Peter Eisentraut
Neil Conway wrote:
 Why does adminpack install functions into pg_catalog? This is
 inconsistent with the rest of the contrib/ packages, not to mention
 the definition of pg_catalog itself (which ought to hold builtin
 object definitions).

Nothing except initdb should add objects in pg_catalog.  AFAICS, 
adminpack doesn't have any special requirements, so it should behave 
like all other contrib modules.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] xlogdump fixups and WAL log question.

2006-10-20 Thread Theo Schlossnagle


On Oct 20, 2006, at 4:24 PM, Simon Riggs wrote:


On Fri, 2006-10-20 at 13:18 -0400, Theo Schlossnagle wrote:

Not sure who cares, so xzilla indicated I should drop a note here.  I
just made the xlogdump stuff work for 8.1 (trivial) and fixed a few
other small issues that caused it to not work right both generally
and in our environment.

http://pgfoundry.org/tracker/index.php?func=detail&aid=1000760&group_id=1000202&atid=772


Diogo Biazus was working on that; I care also.


Cool.  Patch is short.


We're using it to track down what's causing some wal log ruckus.
We're generating about a quarter terabyte of WAL logs a day (on bad
days) which is posing some PITR backup pains.  That amount isn't a
severe challenge to backup, but our previous install was on Oracle
and it generated substantially less archive redo logs (10-20 gigs per
day).


As Tom says, definitely because of full_page_writes=on


Can I turn that off in 8.1?


Is it possible to create tables in fashion that will not write info
to the WAL log -- knowingly and intentionally making them
unrecoverable?  This is very desirable for us.  We snapshot tables
from a production environment.  If the database goes down and we
recover, the old snapshots are out of date anyway and serve no useful
purpose.  The periodic snapshot procedure would re-snap them in short
order anyway.  I'd like to do:

INSERT INTO TABLE tblfoo_snap1 AS SELECT * from table on remote
database NO LOGGING;

(NO LOGGING being the only part we're currently missing) Is something
like this possible?


Do you want this because of:
1) performance?


performance in that a substantial portion of my time is spent writing  
to pg_xlog



2) to reduce the WAL volume of PITR backups?


Yes.  Main concern.



e.g. archive_command = 'pg_WAL_filter -f | ... '
e.g. archive_command = 'pg_WAL_filter -x 35456 | ... '

There are some other ideas for generally reducing WAL volume also.


I'd like to see them not written to the xlogs at all (if possible).   
Seems rather unnecessary unless I'm missing something.


// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Andreas Pflug
Neil Conway wrote:
 On Fri, 2006-10-20 at 05:52 +0100, Dave Page wrote:
   
 The adminpack was originally written and intended to become builtin
 functions
 

 This is not unique to adminpack: several contrib modules might
 eventually become (or have already become) builtins, but adminpack is
 the only module that defines objects in the pg_catalog schema.
   
.. which appears simply pragmatic, given that it provides server
maintenance functions, not functions usually called from user applications.

   
 pg_catalog was used to ensure compatibility in the future
 

 This is again not unique to adminpack. If users install a contrib module
 into a schema that is in their search path, then if the module is
 subsequently moved to pg_catalog, no queries will need to be changed. If
 users install a module into some schema that isn't in their search path
 and use explicit schema references, they are essentially asking for
 their application to break if the object moves to a different schema.
   
Please note that adminpack is intended for administrator's use, and
should be robust to (i.e. not dependent on) search path. We previously
had this dependency in pgadmin, and found it sucks. Putting the stuff in
pg_catalog works as desired and has no negative effects (apart from the
contrib not working after pg_dump/pg_restore if not installed, which is
expected behaviour anyway).

However, adminpack was already crippled to the edge of usability for me,
so I'm prepared to see it fade away further. (Since there's still no
pg_terminate_backend available, which is definitely needed, I regularly
need to install my personal adminpack.)

Regards,
Andreas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Beta, RC Time?

2006-10-20 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Where are we on releasing beta2 or perhaps going right to an RC1
  release?  Seems it is time for one of them.
 
 I think we need a beta2 now, and perhaps RC1 in a week.  We've done
 enough portability hacking recently that some more beta seems indicated.

OK, what steps do we need to do to get beta2 out?

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] row-wise comparison question/issue

2006-10-20 Thread Jeremy Drake
On Fri, 20 Oct 2006, Tom Lane wrote:

 Jeremy Drake [EMAIL PROTECTED] writes:
 select rowval from myrowtypetable ORDER BY ROW((rowval).*) USING <;
 ERROR:  operator does not exist: record < record

 This isn't required by the spec, and it's not implemented.  I don't
 see that it'd give any new functionality anyway, since you can always
 do ORDER BY rowval.f1, rowval.f2, ...

 The cases that are implemented are comparisons of explicit row
 constructors, eg (a,b,c) < (d,e,f) --- which I think is all
 you'll find support for in the spec.

I just think it is quite unexpected that the operator < is defined in some
places and not in others.  And the way I wrote the order by, it should
have been comparing explicit row constructors (compare the explicitly
constructed row for each rowval in order to sort).  I don't understand how
the operator < in a where clause would be different from the operator <
used by the order by.  If I were to make a custom type in C, and write
these same operators for it, they would work in both places, right?  Why
then would this be any different?


-- 
If someone had told me I would be Pope one day, I would have studied
harder.
-- Pope John Paul I

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Beta, RC Time?

2006-10-20 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I think we need a beta2 now, and perhaps RC1 in a week.  We've done
 enough portability hacking recently that some more beta seems indicated.

 OK, what steps do we need to do to get beta2 out?

I think all we really gotta do is update the release notes and stamp it.
There aren't any open portability issues as of today, and the known bugs
mostly seem to affect 8.1 too :-( so I don't see them as reasons to
delay beta2.

I'm working on the release notes right now, should be able to commit
shortly.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] row-wise comparison question/issue

2006-10-20 Thread Tom Lane
Jeremy Drake [EMAIL PROTECTED] writes:
 I just think it is quite unexpected that the operator < is defined in some
 places and not in others.

Row-wise comparison isn't an operator, it's a syntactic construct.
Consider

(now(), 'foo', 42) < (SELECT timestampcol, textcol, intcol FROM sometable WHERE ...)

There isn't any single operator in the system that implements that.

(And no, orthogonality is not one of the strong points of SQL...)

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] Want to use my own query-plan

2006-10-20 Thread dakotali kasap
Hi everybody,

I have started to work on a project that will be implemented on top of
PostgreSQL, so I need to warm up with PostgreSQL's internals. I downloaded
the source code and am currently looking at it, but I have some questions:

1. How can I prepare my own query plan? (I will need this because sometimes
I may prefer an index scan of a table, or a merge join of two relations.
For now I just want to supply my own simple query plan.)

2. How can I make PostgreSQL execute my own query plan?

Thanks for the answers.

Baran

[HACKERS] PgSQL users quota

2006-10-20 Thread Tux P
Hi,

Is there any chance of seeing the quota implementation described in this
post in one of the next releases?
http://archives.postgresql.org/pgsql-hackers/2004-07/msg00392.php

Thank you!
Nick


Re: [SPAM?] Re: [HACKERS] Asynchronous I/O Support

2006-10-20 Thread Merlin Moncure

On 10/21/06, Martijn van Oosterhout kleptog@svana.org wrote:

On Fri, Oct 20, 2006 at 03:04:55PM -0400, Merlin Moncure wrote:
 On 10/20/06, Tom Lane [EMAIL PROTECTED] wrote:
 So far I've seen no evidence that async I/O would help us, only a lot
 of wishful thinking.

 is this thread moot?  while researching this thread I came across this
 article: http://kerneltrap.org/node/6642 describing claims of 30%
 performance boost when using posix_fadvise to ask the o/s to prefetch
 data.  istm that this kind of improvement is in line with what aio can
 provide, and posix_fadvise is cleaner, not requiring threads and such.

Hmm, my man page says:

   POSIX_FADV_WILLNEED and POSIX_FADV_NOREUSE both initiate a
   non-blocking read of the specified region into the page cache.
   The amount of data read may be decreased by the kernel depending
   on VM load. (A few megabytes will usually be fully satisfied,
   and more is rarely useful.)

This appears to be exactly what we want, no? It would be nice to get
some idea of what systems support this.


right, and a small clarification: the above claim of 30% was from
using adaptive readahead, not posix_fadvise.  posix_fadvise was
suggested by none other than andrew morton as the way to get the most
i/o out of your box.  there was no mention of aio :)
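
for anyone wanting to experiment, here is a minimal sketch of issuing such
a prefetch hint with posix_fadvise.  the path and byte range are made up,
and the call is only a hint, so whether it actually helps depends on the
kernel and VM load:

#define _POSIX_C_SOURCE 200112L   /* needed for posix_fadvise on glibc */

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int     fd = open("/tmp/somefile", O_RDONLY);   /* made-up path */

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /*
     * Ask the kernel to start reading the first 8 MB in the background.
     * posix_fadvise returns an error number directly (it does not set errno).
     */
    if (posix_fadvise(fd, 0, 8 * 1024 * 1024, POSIX_FADV_WILLNEED) != 0)
        fprintf(stderr, "posix_fadvise: hint not supported here\n");

    /* ... do other work; later reads of that range should mostly hit cache ... */

    close(fd);
    return 0;
}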

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Neil Conway
On Fri, 2006-10-20 at 22:59 +0200, Peter Eisentraut wrote:
 Nothing except initdb should add objects in pg_catalog.  AFAICS, 
 adminpack doesn't have any special requirements, so it should behave 
 like all other contrib modules.

Okay. Are there any opinions on whether we should make this change to
contrib/adminpack now (i.e. during the 8.2 beta), later (for 8.3), or
not at all?

-Neil



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Want to use my own query-plan

2006-10-20 Thread Neil Conway
On Fri, 2006-10-20 at 16:05 -0700, dakotali kasap wrote:
 1. How can I prepare my own query plan?

You can't: there is currently no public API for constructing plans by
hand. You could kludge something together, but it would be pretty
fragile (internal planner data structures may well change between
releases).

 2. How can I make postgresql execute my own query plan?

Once you have a valid Plan, you can just feed it to the executor as
normal (CreateQueryDesc(), ExecutorStart(), ExecutorRun(),
ExecutorEnd(), etc.)
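
In rough outline, the call sequence looks like the sketch below. This is
only an approximation following the 8.2-era declarations; the argument
lists differ between releases, so treat them as placeholders rather than
a reference:

/* Sketch only: "query" and "plan" are your hand-built Query and Plan trees */
QueryDesc  *qdesc;

qdesc = CreateQueryDesc(query,                    /* Query node matching the plan */
                        plan,                     /* hand-built Plan tree */
                        GetTransactionSnapshot(), /* snapshot to run under */
                        InvalidSnapshot,          /* no crosscheck snapshot */
                        None_Receiver,            /* discard result tuples */
                        NULL,                     /* no parameters */
                        false);                   /* no instrumentation */

ExecutorStart(qdesc, 0);                          /* build executor state */
ExecutorRun(qdesc, ForwardScanDirection, 0L);     /* count = 0: run to completion */
ExecutorEnd(qdesc);                               /* shut down plan nodes */
FreeQueryDesc(qdesc);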

-Neil




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] hacking postgres hashjoin algorithm

2006-10-20 Thread HS
From my understanding, postgres first needs to partition the tables.
ExecHashTableCreate() is the function that partitions the tables, right?


Martijn van Oosterhout wrote:

On Sun, Oct 15, 2006 at 11:08:18PM -0400, HS wrote:

Hello there

I am trying to play around with the hashjoin algorithm in postgres.

I am using the statement like
Select count(*) from r,s where r.id=s.id;

I looked at the function ExecHashJoin() in nodeHashjoin.c and cannot 
find where the algorithm is comparing if r.id equals s.id


The code doing the work is actually ExecScanHashBucket() which is in
nodeHash.c. The actual check is done by the ExecQual there...

Hope this helps,
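
For what it's worth, here is a rough generic-C sketch of what that probe
boils down to. This is not the backend code itself, just an illustration
of walking a bucket's chain and applying the equality check (the analogue
of the ExecQual on r.id = s.id):

#include <stddef.h>

/* One entry in a hash bucket's chain, holding an inner tuple's join key. */
typedef struct HashEntry
{
    int                key;     /* join key of the hashed (inner) tuple */
    struct HashEntry  *next;    /* next entry in the same bucket */
} HashEntry;

/* Return the first entry in outer_key's bucket whose key matches, or NULL. */
static HashEntry *
scan_hash_bucket(HashEntry **buckets, unsigned int nbuckets, int outer_key)
{
    unsigned int bucketno = (unsigned int) outer_key % nbuckets;
    HashEntry   *entry;

    for (entry = buckets[bucketno]; entry != NULL; entry = entry->next)
    {
        if (entry->key == outer_key)    /* this test is the "r.id = s.id" part */
            return entry;
    }
    return NULL;                        /* no match in this bucket */
}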


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [PATCHES] zic with msvc

2006-10-20 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 + #ifdef WIN32
 + #define _WIN32_WINNT 0x0400
 + #endif
 
 Hmm ... in pg_ctl.c I see
 
 #define _WIN32_WINNT 0x0500
 
 Is there a reason for these to be different?  Are there other 
 places that will need this (ie, maybe it should be in c.h instead?)

 Not really. The default appears to be 0x0400 for MingW (or it wouldn't
 have worked before), but 0x0350 or so for Visual C++.
 If we define it to 0x0500 we pull in headers that will only work on 2000
 or newer.

Hm.  Actually, if the rest of the backend compiles without this, then
I guess the real question is what's zic.c doing that needs it?  pg_ctl.c
has an excuse because it's doing weird MS-specific junk, but zic.c is
supposed to be bog-standard portable code.  It really shouldn't have
anything that's further out there than you could find in ten other
places in the backend.
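
If it did end up in c.h or some similar shared header, presumably it would
want a guard so as not to clobber a toolchain's own setting -- something
along these lines (sketch only, reusing the 0x0400 value from the zic
patch):

#if defined(WIN32) && !defined(_WIN32_WINNT)
#define _WIN32_WINNT 0x0400     /* minimum Windows API level we rely on */
#endif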

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 On Fri, 2006-10-20 at 22:59 +0200, Peter Eisentraut wrote:
 Nothing except initdb should add objects in pg_catalog.  AFAICS, 
 adminpack doesn't have any special requirements, so it should behave 
 like all other contrib modules.

 Okay. Are there any opinions on whether we should make this change to
 contrib/adminpack now (i.e. during the 8.2 beta), later (for 8.3), or
 not all at?

AFAIR the point of adminpack was to support pgAdmin, which expects those
functions to be in pg_catalog.  At some point we might as well just take
it out instead of whacking it until it meets some arbitrary restrictions
and isn't at all backwards-compatible anymore.

(No, I don't find these arguments that it mustn't put anything in
pg_catalog to be very compelling ... if we seriously believed that,
we'd have arranged for the system to enforce it.)

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org