Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Heikki Linnakangas
David E. Wheeler wrote:
 Howdy Hackers,
 
 Is there a published maintenance policy somewhere? Something that says
 for how long the project supports minor releases of PostgreSQL. 

We don't have a published policy, but I believe an unofficial policy has
been to support minor releases for about 5 years.

 For
 example, does 7.4 still get bug fixes and minor releases? If not, how
 does one know when support for a major version has been dropped?

Hmm, I thought we dropped support for 7.4 a while ago, and there's no
download link for it on www.postgresql.org anymore. But looking at the
CVS history, I see that others are still committing fixes to 7.4 branch.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Dave Page
On Tue, Jul 7, 2009 at 8:28 AM, Heikki
Linnakangas heikki.linnakan...@enterprisedb.com wrote:

 For
 example, does 7.4 still get bug fixes and minor releases? If not, how
 does one know when support for a major version has been dropped?

 Hmm, I thought we dropped support for 7.4 a while ago, and there's no
 download link for it on www.postgresql.org anymore. But looking at the
 CVS history, I see that others are still committing fixes to 7.4 branch.

We dropped the link when we released 8.4, primarily for space reasons.
I believe Tom is still patching 7.4 though, as Red Hat has obligations
to support it and he'd have to do it regardless of project policy.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com



Re: [HACKERS] 8.3 PLpgSQL Can't Compare Records?

2009-07-07 Thread Albe Laurenz
David E. Wheeler wrote:
 This code:
 
  CREATE OR REPLACE FUNCTION foo() returns boolean as $$
  DECLARE
  have_rec record;
  want_rec record;
  BEGIN
  have_rec := row(1, 2);
  want_rec := row(3, 5);
  RETURN have_rec IS DISTINCT FROM want_rec;
  END;
  $$ language plpgsql;
 
  SELECT ROW(1, 2) IS DISTINCT FROM ROW(3, 5);
 
  SELECT foo();
  DROP FUNCTION foo();
 
 Works as expected on 8.4, outputting:
 
[...]
 
 On 8.3, however, the row comparison in the SQL statement works, but it
 fails in the PL/pgSQL function, with this output:
 
   ?column?
  ----------
   t
  (1 row)
 
  psql:huh.sql:14: ERROR:  operator does not exist: record = record
  LINE 1: SELECT   $1  IS DISTINCT FROM  $2
   ^
  HINT:  No operator matches the given name and argument type(s).  You 
 might need to add explicit type casts.
  QUERY:  SELECT   $1  IS DISTINCT FROM  $2
  CONTEXT:  PL/pgSQL function foo line 7 at RETURN
 
 
 Is this a known issue in 8.3? If so, is there a known workaround?

The change is probably here:
http://archives.postgresql.org/pgsql-committers/2008-10/msg00110.php

So I think it is safe to argue that this is not a bug in 8.3, but an 
improvement in 8.4.
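
As a workaround on 8.3, comparing field by field sidesteps the missing
record = record operator. A minimal sketch, assuming a composite type
some_type with columns a and b (all of these names are hypothetical):

  CREATE TYPE some_type AS (a int, b int);  -- hypothetical composite type

  CREATE OR REPLACE FUNCTION rows_differ(have_rec some_type, want_rec some_type)
  RETURNS boolean AS $$
  BEGIN
      -- Compare column by column; 8.3 has operators for the column types
      -- even though it lacks a generic record = record operator.
      RETURN have_rec.a IS DISTINCT FROM want_rec.a
          OR have_rec.b IS DISTINCT FROM want_rec.b;
  END;
  $$ LANGUAGE plpgsql;

  SELECT rows_differ(ROW(1, 2)::some_type, ROW(3, 5)::some_type);  -- t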

Yours,
Laurenz Albe



Re: [HACKERS] ECPG support for string pseudo-type

2009-07-07 Thread Michael Meskes
On Sat, Jul 04, 2009 at 03:39:14PM +0200, Boszormenyi Zoltan wrote:
 The attached patch is built upon our previous patch supporting
 dynamic cursor and SQLDA.

Please don't do this unless the new patch relies on some changes made in the
older one. 

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo/Skype: michaelmeskes, Jabber: mes...@jabber.org
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] ECPG support for string pseudo-type

2009-07-07 Thread Michael Meskes
On Sat, Jul 04, 2009 at 05:09:04PM +0200, Boszormenyi Zoltan wrote:
 OK, let me retry. This version treats string as a non-reserved word,
 and also discovers whether the PGC contains this construct below,
 as in ecpg/tests/preproc/type.pgc:
 
 exec sql type string is char[11];
 typedef char string[11];

I don't like this because it means a datatype can change its meaning in the
middle of a source file. Your new datatype is only needed for Informix
compatibility, so why not define it only when running in compatibility mode?
An Informix file will not have such a typedef because it wouldn't/shouldn't
work there either.

Also the new datatype needs some documentation.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo/Skype: michaelmeskes, Jabber: mes...@jabber.org
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Heikki Linnakangas
Fujii Masao wrote:
 On Tue, Jul 7, 2009 at 12:16 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Fujii Masao masao.fu...@gmail.com writes:
 In order for the primary server (ie. a normal backend) to read an archived
 file, restore_command also needs to be specified in postgresql.conf. In this
 case, how should we handle restore_command in recovery.conf?
 I confess to not having paid much attention to this thread so far, but ...
 what is the rationale for having such a capability at all?
 
 If the XLOG files which are required for recovery exist only in the
 primary server, the standby server has to read them in some way. For
 example, when the latest XLOG file of the primary server is 09 and the
 standby server has only 01, the missing files (02-08) have to be read
 for recovery by the standby server. In this case, the XLOG records in
 09 or later are shipped to the standby server in real time by the
 synchronous replication feature.
 
 The problem which I'd like to solve is how to make the standby server
 read the XLOG files (XLOG file, backup history file and timeline history
 file) which exist only in the primary server. In the previous patch, we
 had to manually copy those missing files to the archive of the standby
 server or use the warm-standby mechanism. This would decrease the
 usability of synchronous replication. So, I proposed one solution which
 makes the standby server read those missing files automatically:
 introducing a new function pg_read_xlogfile() which reads the specified
 XLOG file.

pg_read_xlogfile() feels like a quite hacky way to implement that. Do we
require the master to always have read access to the PITR archive? And
indeed, to have a PITR archive configured to begin with. If you need to
set up archiving just because of the standby server, how do old files
that are no longer required by the standby get cleaned up?

I feel that the master needs to know explicitly which WAL files the
standby might still need, and refrain from deleting them. IOW, keep
enough history in pg_xlog. Then we have the risk of running out of disk
space in pg_xlog if the connection to the standby is lost for a long
time, so we'll need some cap on that, after which the master declares
the standby dead and deletes the old WAL anyway. Nevertheless, I think
that would be much simpler to implement, and simpler for admins. And if
the standby can read old WAL segments from the PITR archive, in addition
to requesting them from the primary, it is just as safe.

I'd like to see a description of the proposed master/slave protocol for
replication. If I understood correctly, you're proposing that the
standby server connects to the master with libpq like any client,
authenticates as usual, and then sends a message indicating that it
wants to switch to replication mode. In replication mode, normal FE/BE
messages are not accepted, but there's a different set of message types
for transferring XLOG data.

I'd like to see a more formal description of that protocol and the new
message types, and some examples of how they would be used in different
scenarios, like when a standby server connects to the master for the
first time and needs to catch up.


Looking at the patch briefly, it seems to assume that there is only one
WAL sender active at any time. What happens when a new WAL sender
connects and one is active already? While supporting multiple slaves
isn't a priority, I think we should support multiple WAL senders right
from the start. It shouldn't be much harder, and otherwise we need to
ensure that the switch from old WAL sender to a new one is clean, which
seems non-trivial. Or not accept a new WAL sender while old one is still
active, but then a dead WAL sender process (because the standby suddenly
crashed, for example) would inhibit a new standby from connecting,
possibly for several minutes.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Brendan Jurd
Hi folks,

We're now about a week away from the start of the July 2009
commitfest, and we need to make a decision about whether to start
using http://commitfest.postgresql.org to manage it, or punt to the
next commitfest and continue to use the wiki for July.

Robert and I have been upgrading the app in response to the feedback
so far, and personally I think the app is already a far superior tool
to the wiki for managing a commitfest.  I think we should use it for
2009-07.

What we need in order to proceed is:
 a) a viable consensus from patch reviewers/committers to switch over,
 b) an email out to -hackers about the new app,
 c) updates to the wiki developer pages pointing to the new app, and
 d) decommissioning the wiki July CF.

I can do b), c) and d) but I could use your help in obtaining a).

If you think we should switch over, please say so.

If you think the app needs some tweaking, please say so but let's
switch over anyway and make those tweaks as we go.

If you think the app is fundamentally less useful than the wiki,
please say so and we'll work out whether we can resolve your objection
in time for the start of the CF.

Thanks in advance for your comments.

Cheers,
BJ



[HACKERS] New types for transparent encryption

2009-07-07 Thread Itagaki Takahiro
Our manual says we can use pgcrypto functions or encrypted filesystems
for data encryption.
http://www.postgresql.org/docs/8.4/static/encryption-options.html

However, they are not always the best approaches in some cases.

For the pgcrypto functions, users' SQL must contain key strings
and they need to know which columns are encrypted. Users complain
that they want to treat encrypted values as if they were not encrypted.

For encrypted filesystems, the whole database will be encrypted
and there are considerable overheads. In addition, encrypted
filesystems are not well-maintained on some platforms.


I'd like to propose adding types that are encrypted and decrypted
transparently to contrib/pgcrypto. It would be a simple wrapper
type around bytea. The pseudo-code could be:

  CREATE TYPE encrypted_text (
    INPUT = pgp_sym_encrypt_text(textin($1), password(), options()),
    OUTPUT = textout(pgp_sym_decrypt_text($1, password(), options())),
    LIKE bytea
  );

password() and options() are SQL functions and we can redefine them
if needed. The default implementations refer to custom GUC variables
(pgcrypto.password and pgcrypto.options) so that encryption is done
only in the database server and applications don't have to know the details.
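
For comparison, here is a rough sketch of what users must write today with
plain pgcrypto calls; pgcrypto.password is an assumed custom GUC (on 8.x it
also requires listing 'pgcrypto' in custom_variable_classes):

  SET pgcrypto.password = 'secret';

  CREATE TABLE t (secret bytea);

  INSERT INTO t
  VALUES (pgp_sym_encrypt('sensitive data',
                          current_setting('pgcrypto.password')));

  SELECT pgp_sym_decrypt(secret,
                         current_setting('pgcrypto.password'))
  FROM t;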


I hope this will be an enhancement of contrib/pgcrypto.
Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center




Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread Heikki Linnakangas
Itagaki Takahiro wrote:
 Our manual says we can use pgcrypto functions or encrypted filesystems
 for data encryption.
 http://www.postgresql.org/docs/8.4/static/encryption-options.html
 
 However, they are not always the best approaches in some cases.
 
 For the pgcrypto functions, users' SQL must contain key strings
 and they need to know which columns are encrypted. Users complain
 that they want to treat encrypted values as if they were not encrypted.
 
 For encrypted filesystems, the whole database will be encrypted
 and there are considerable overheads. In addition, encrypted
 filesystems are not well-maintained on some platforms.
 
 
 I'd like to propose adding types that are encrypted and decrypted
 transparently to contrib/pgcrypto. It would be a simple wrapper
 type around bytea. The pseudo-code could be:
 
   CREATE TYPE encrypted_text (
     INPUT = pgp_sym_encrypt_text(textin($1), password(), options()),
     OUTPUT = textout(pgp_sym_decrypt_text($1, password(), options())),
     LIKE bytea
   );
 
 password() and options() are SQL functions and we can redefine them
 if needed. The default implementations refer to custom GUC variables
 (pgcrypto.password and pgcrypto.options) so that encryption is done
 only in the database server and applications don't have to know the details.

What kind of attacks would this protect against? Seems a bit pointless
to me if the password is being sent to the server anyway. If the
attacker has superuser access to the server, he can harvest the
passwords as the clients send them in. If he doesn't, the usual access
controls with GRANT/REVOKE would be enough.

I would see some value in this if the encryption was done in the client,
and the server never saw any decrypted values. That would reduce the
damage of a compromised server. A driver extension to do that
transparently would be neat.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread Greg Stark
On Tue, Jul 7, 2009 at 10:09 AM, Heikki
Linnakangas heikki.linnakan...@enterprisedb.com wrote:

 What kind of attacks would this protect against? Seems a bit pointless
 to me if the password is being sent to the server anyway. If the
 attacker has superuser access to the server, he can harvest the
 passwords as the clients send them in. If he doesn't, the usual access
 controls with GRANT/REVOKE would be enough.

It would still protect against offline attacks such as against backup files.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2

2009-07-07 Thread Kedar Potdar
Yes, I am working on integrating some of the comments received on the
patch. I should be able to post the latest patch next week.

Regards,
--
Kedar.



On Tue, Jul 7, 2009 at 10:18 AM, Jaime Casanova 
jcasa...@systemguards.com.ec wrote:

 On Mon, Jun 8, 2009 at 9:02 AM, Kedar Potdar kedar.pot...@gmail.com
 wrote:
  Hi,
 
  PFA patch, readme for automating partitions in PostgreSQL 8.4 Beta 2 and
  testcases.
 

 if you are still working on this, can you please update the patch to cvs
 head?

 --
  Sincerely,
  Jaime Casanova
  PostgreSQL support and training
  Systems consulting and development
  Guayaquil - Ecuador
  Cell +59387171157




-- 
Regards,
--
Kedar.

Read 'me' - http://kpotdar.livejournal.com


Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread Heikki Linnakangas
Greg Stark wrote:
 On Tue, Jul 7, 2009 at 10:09 AM, Heikki
 Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 What kind of attacks would this protect against? Seems a bit pointless
 to me if the password is being sent to the server anyway. If the
 attacker has superuser access to the server, he can harvest the
 passwords as the clients send them in. If he doesn't, the usual access
 controls with GRANT/REVOKE would be enough.
 
 It would still protect against offline attacks such as against backup files.

True, but filesystem-level encryption handles that scenario with less pain.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread tomas

On Tue, Jul 07, 2009 at 05:35:28PM +0900, Itagaki Takahiro wrote:
 Our manual says we can use pgcrypto functions or encrypted filesystems
 for data encryption.
 http://www.postgresql.org/docs/8.4/static/encryption-options.html

As other posters have put it, I'd be very sceptical of server-side
decryption. If the server has all the necessary bits to decrypt the
data, all bets are off.

[encryption might be OK, with an asymmetrical scheme in the
vein of public key cryptography].

Client-side decryption (and maybe encryption as well) seems way more
attractive. For that, libpqtypes[1],[2] might come in very handy.

[1] http://pgfoundry.org/projects/libpqtypes/
[2] http://libpqtypes.esilo.com/

Regards
-- tomás



Re: [HACKERS] Small foreign key error message improvement

2009-07-07 Thread Peter Eisentraut
On Monday 06 July 2009 23:00:18 Tom Lane wrote:
 It seems to me that the right fix here is not so much to tweak the
 message wording as to put in an error location cursor.  In more
 complicated cases (eg, multiple FOREIGN KEY clauses) the suggested
 wording change wouldn't help much anyway.

It looks like this would involve equipping the Value node with location 
information and passing that around everywhere.  This could also be used to 
supply better location information for a number of other cases.  Does that 
sound like the right direction?



Re: [HACKERS] GRANT ON ALL IN schema

2009-07-07 Thread Simon Riggs

On Fri, 2009-07-03 at 12:44 +0200, Petr Jelinek wrote:
 Petr Jelinek wrote:
  So, here is the first version of the patch.
 Attached is v2 with slightly improved code, nothing has changed 
 feature-wise.

I would like to see 

GRANT ... ON ALL OBJECTS ...

because I know that I will forget to do TABLES, VIEWS and SEQUENCES
every time I want to do this.

If we are aggregating all objects of a type, why not aggregate all
objects, so we just issue one command?

(I'm sure we can do something intelligent with privileges that don't
apply to all object types rather than just fail. e.g. UPDATE privilege
should be same as USAGE on a sequence.)
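
(For reference, a hand-rolled approximation of the per-type form that the
patch automates; the function name and the SELECT-only privilege are just
illustrative:)

  CREATE OR REPLACE FUNCTION grant_select_on_all_tables(schemaname text,
                                                        grantee text)
  RETURNS void AS $$
  DECLARE
      rel record;
  BEGIN
      -- Loop over ordinary tables (relkind = 'r') in the given schema
      -- and issue one GRANT per table.
      FOR rel IN
          SELECT quote_ident(n.nspname) || '.' || quote_ident(c.relname) AS qname
          FROM pg_class c
          JOIN pg_namespace n ON n.oid = c.relnamespace
          WHERE n.nspname = schemaname AND c.relkind = 'r'
      LOOP
          EXECUTE 'GRANT SELECT ON ' || rel.qname
                  || ' TO ' || quote_ident(grantee);
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  SELECT grant_select_on_all_tables('public', 'some_role');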

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Andrew Dunstan



Heikki Linnakangas wrote:

David E. Wheeler wrote:
  

Howdy Hackers,

Is there a published maintenance policy somewhere? Something that says
for how long the project supports minor releases of PostgreSQL. 



We don't have a published policy, but I believe an unofficial policy has
been to support minor releases for about 5 years.
  


My recollection is that we don't have a maximum lifetime, but we do have 
a minimum lifetime of about two release cycles, which is in practice 
about 2 to 2.5 years. Beyond that, we try to maintain the branches as 
long as the effort is not too great. When the branches become 
unmaintainable they are dropped.


  

For
example, does 7.4 still get bug fixes and minor releases? If not, how
does one know when support for a major version has been dropped?



Hmm, I thought we dropped support for 7.4 a while ago, and there's no
download link for it on www.postgresql.org anymore. But looking at the
CVS history, I see that others are still committing fixes to 7.4 branch.
  


Indeed we are :-) I don't recall any decision not to continue support 
for 7.4, which is still quite solid, if a bit limited. (I had to help 
rescue somebody who had been running 6.5 recently, so don't think people 
aren't running extremely old branches.) If you're going to backpatch 
something, going back a couple more branches is often not a great 
difficulty, unless the code drift is large. Most backpatches are 
relatively limited in scope. If there is something that is invasive and 
difficult, that's a possible reason to drop support.


Most users don't want to be upgrading all the time, and I believe we 
inspire some confidence in our user base by a) being quite conservative 
about what we backpatch, and b) giving our stable branches quite long 
lifetimes.


BTW, 7.4 is less than six years old. If we were going to impose an 
arbitrary branch lifetime limit, I think five or six years is about right.


cheers

andrew



Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Andrew Dunstan



Heikki Linnakangas wrote:

While supporting multiple slaves
isn't a priority, 
  


Really? I should have thought it was a basic requirement. At the very 
least we need to design with it in mind.


cheers

andrew



Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Fujii Masao
Hi,

Thanks for the comment!

On Tue, Jul 7, 2009 at 5:07 PM, Heikki
Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 pg_read_xlogfile() feels like a quite hacky way to implement that. Do we
 require the master to always have read access to the PITR archive? And
 indeed, to have a PITR archive configured to begin with. If you need to
 set up archiving just because of the standby server, how do old files
 that are no longer required by the standby get cleaned up?

 I feel that the master needs to know explicitly which WAL files the
 standby might still need, and refrain from deleting them. IOW, keep
 enough history in pg_xlog. Then we have the risk of running out of disk
 space in pg_xlog if the connection to the standby is lost for a long
 time, so we'll need some cap on that, after which the master declares
 the standby dead and deletes the old WAL anyway. Nevertheless, I think
 that would be much simpler to implement, and simpler for admins. And if
 the standby can read old WAL segments from the PITR archive, in addition
 to requesting them from the primary, it is just as safe.

I'm thinking of making pg_read_xlogfile() read the XLOG files from pg_xlog
when restore_command is not specified or returns a non-zero code (ie.
failure). So pg_read_xlogfile() with the following settings might already
cover the case you described.

- checkpoint_segments = N (big number)
- restore_command = ''

In this case, we can expect that the XLOG files which are required for the
standby exist in pg_xlog because of the large checkpoint_segments, and
pg_read_xlogfile() reads them only from pg_xlog. checkpoint_segments
would act as the cap and determine the maximum disk usage of pg_xlog.
The overflow files which are no longer required for the standby are
removed safely by postgres. OTOH, if there is not enough disk space for
pg_xlog, we can specify restore_command and decrease
checkpoint_segments. This is a more flexible approach, I think.

But if the primary should never restore an archived file, should I just
get rid of the code by which pg_read_xlogfile() restores them?

 I'd like to see a description of the proposed master/slave protocol for
 replication. If I understood correctly, you're proposing that the
 standby server connects to the master with libpq like any client,
 authenticates as usual, and then sends a message indicating that it
 wants to switch to replication mode. In replication mode, normal FE/BE
 messages are not accepted, but there's a different set of message types
 for transferring XLOG data.

http://archives.postgresql.org/message-id/4951108a.5040...@enterprisedb.com
 I don't think we need or should allow running regular queries before
 entering replication mode. The backend should become a walsender
 process directly after authentication.

I changed the protocol according to your suggestion.
Here is the current protocol:

On start-up, the standby calls PQstartReplication(), which is a new libpq
function. It sends the startup packet with a special code for replication
to the primary, like a cancel request. The backend which receives this
code becomes a walsender directly. Authentication is performed as
normal. Then walsender switches the XLOG file and sends the
ReplicationStart message 'l', which includes the timeline ID and the
replication start XLOG position.

ReplicationStart (B)
Byte1('l'): Identifies the message as a replication-start indicator.
Int32(17): Length of message contents in bytes, including self.
Int32: The timeline ID
Int32: The start log file of replication
Int32: The start byte offset of replication

After that, walsender sends the XLogData message 'w' in real time; it
includes the XLOG records, a flag (indicating, e.g., whether the records
should be fsynced or not), and the XLOG position. The standby
receives the message using PQgetXLogData(), another new libpq
function. OTOH, after writing or fsyncing the records, the standby
sends the XLogResponse message 'r', which includes the flag and the
position of the written/fsynced records, using PQputXLogRecPtr(),
also a new libpq function.

XLogData (B)
Byte1('w'): Identifies the message as XLOG records.
Int32: Length of message contents in bytes, including self.
Int8: Flag bits indicating how the records should be treated.
Int32: The log file number of the records.
Int32: The byte offset of the records.
Byte n: The XLOG records.

XLogResponse (F)
Byte1('r'):  Identifies the message as ACK for XLOG records.
Int32: Length of message contents in bytes, including self.
Int8: Flag bits indicating how the records were treated.
Int32: The log file number of the records.
Int32: The byte offset of the records.

Normal exit of walsender (e.g. by smart shutdown) sends the
ReplicationEnd message 'z'. OTOH, normal exit of walreceiver
sends the existing Terminate message 'X'.

The above protocol is used between walsender and walreceiver.

Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread Bill Moran
In response to Itagaki Takahiro itagaki.takah...@oss.ntt.co.jp:

 Our manual says we can use pgcrypto functions or encrypted filesystems
 for data encryption.
 http://www.postgresql.org/docs/8.4/static/encryption-options.html
 
 However, they are not always the best approaches in some cases.
 
 For the pgcrypto functions, users' SQL must contain key strings
 and they need to know which columns are encrypted. Users complain
 that they want to treat encrypted values as if they were not encrypted.
 
 For encrypted filesystems, the whole database will be encrypted
 and there are considerable overheads. In addition, encrypted
 filesystems are not well-maintained on some platforms.
 
 
 I'd like to propose adding types that are encrypted and decrypted
 transparently to contrib/pgcrypto. It would be a simple wrapper
 type around bytea. The pseudo-code could be:
 
   CREATE TYPE encrypted_text (
     INPUT = pgp_sym_encrypt_text(textin($1), password(), options()),
     OUTPUT = textout(pgp_sym_decrypt_text($1, password(), options())),
     LIKE bytea
   );
 
 password() and options() are SQL functions and we can redefine them
 if needed. The default implementations refer to custom GUC variables
 (pgcrypto.password and pgcrypto.options) so that encryption is done
 only in the database server and applications don't have to know the details.
 
 
 I hope this will be an enhancement of contrib/pgcrypto.
 Comments welcome.

As others have noted, I doubt that the overall protection ability would
be very great with such a feature.

What I'd _really_ like to see is native PKI encryption implemented, so
roles could have keys associated with them, and roles could encrypt data
for other roles and sign data, thus allowing some really powerful data
protection schemes.

Just coming up with a methodology for this is non-trivial, though.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Teodor Sigaev



CREATE INDEX test_idx ON test USING gist
  (i CONSTRAINT =, c CONSTRAINT &&);

which would avoid the need for updating the catalog, of course.

Hmm, looks like index-ified table constraints

--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Merge Append Patch merged up to 85devel

2009-07-07 Thread Teodor Sigaev
Can you provide some more details about the objective of this patch?  Or 
a link to previous discussion?


Suppose Greg's patch could be modified to support ordered OR index scans:
SELECT ... WHERE (c > 10 AND c < 20) OR (c > 100 AND c < 110) ORDER BY c DESC

with plan:
Result
   -> Append
         -> Backward index scan (c > 100 AND c < 110)
         -> Backward index scan (c > 10 AND c < 20)

I suggested a similar patch two years ago:
http://archives.postgresql.org/message-id/45742c51.9020...@sigaev.ru (4th
point) and the subsequent discussion:
http://archives.postgresql.org/pgsql-patches/2006-12/msg00029.php


--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Reduce the memcpy call from SearchCatCache

2009-07-07 Thread Atsushi Ogawa
Tom Lane writes:
 Atsushi Ogawa a_og...@hi-ho.ne.jp writes:
 The attached patch reduces the memcpy calls in SearchCatCache
 and SearchCatCacheList. This patch directly uses cache->cc_skey
 when looking up the hash table.

 How much did you test this patch?  I'm fairly sure it will break
 things.
 There are cases where cache lookups happen recursively.

I tested with the regression tests and pgbench. However, I did not consider
the recursive case. I revised the patch to be safe for recursive calls,
but I cannot find a test case in which a recursive call happens.

In my understanding, a recursive call in SearchCatCache does not happen
while looking up the hash table; the recursive call happens while reading
the relation. If cache->cc_skey is copied before reading the relation,
I think it is safe.

best regards,

--- Atsushi Ogawa

*** ./src/backend/utils/cache/catcache.c.orig	2009-07-07 15:19:56.0 +0900
--- ./src/backend/utils/cache/catcache.c	2009-07-07 15:19:46.0 +0900
***************
*** 1124,1140 ****
  
  	/*
  	 * initialize the search key information
  	 */
! 	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
! 	cur_skey[0].sk_argument = v1;
! 	cur_skey[1].sk_argument = v2;
! 	cur_skey[2].sk_argument = v3;
! 	cur_skey[3].sk_argument = v4;
  
  	/*
  	 * find the hash bucket in which to look for the tuple
  	 */
! 	hashValue = CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey);
  	hashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets);
  
  	/*
--- 1124,1141 ----
  
  	/*
  	 * initialize the search key information
+ 	 * directly use cache->cc_skey while looking for hash table
  	 */
! 	cache->cc_skey[0].sk_argument = v1;
! 	cache->cc_skey[1].sk_argument = v2;
! 	cache->cc_skey[2].sk_argument = v3;
! 	cache->cc_skey[3].sk_argument = v4;
  
  	/*
  	 * find the hash bucket in which to look for the tuple
  	 */
! 	hashValue = CatalogCacheComputeHashValue(cache, cache->cc_nkeys,
! 	                                         cache->cc_skey);
  	hashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets);
  
  	/*
***************
*** 1160,1166 ****
  			HeapKeyTest(ct->tuple,
  						cache->cc_tupdesc,
  						cache->cc_nkeys,
! 						cur_skey,
  						res);
  			if (!res)
  				continue;
--- 1161,1167 ----
  			HeapKeyTest(ct->tuple,
  						cache->cc_tupdesc,
  						cache->cc_nkeys,
! 						cache->cc_skey,
  						res);
  			if (!res)
  				continue;
***************
*** 1206,1211 ****
--- 1207,1218 ----
  	}
  
  	/*
+ 	 * We need copy ScanKey data, because it is possible for recursive
+ 	 * cache lookup.
+ 	 */
+ 	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
+ 
+ 	/*
  	 * Tuple was not found in cache, so we have to try to retrieve it directly
  	 * from the relation.  If found, we will add it to the cache; if not
  	 * found, we will add a negative cache entry instead.
***************
*** 1371,1389 ****
  
  	/*
  	 * initialize the search key information
  	 */
! 	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
! 	cur_skey[0].sk_argument = v1;
! 	cur_skey[1].sk_argument = v2;
! 	cur_skey[2].sk_argument = v3;
! 	cur_skey[3].sk_argument = v4;
  
  	/*
  	 * compute a hash value of the given keys for faster search.  We don't
  	 * presently divide the CatCList items into buckets, but this still lets
  	 * us skip non-matching items quickly most of the time.
  	 */
! 	lHashValue = CatalogCacheComputeHashValue(cache, nkeys, cur_skey);
  
  	/*
  	 * scan the items until we find a match or exhaust our list
--- 1378,1396 ----
  
  	/*
  	 * initialize the search key information
+ 	 * directly use cache->cc_skey while looking for hash table
  	 */
! 	cache->cc_skey[0].sk_argument = v1;
! 	cache->cc_skey[1].sk_argument = v2;
! 	cache->cc_skey[2].sk_argument = v3;
! 	cache->cc_skey[3].sk_argument = v4;
  
  	/*
  	 * compute a hash value of the given keys for faster search.  We don't
  	 * presently divide the CatCList items into buckets, but this still lets
  	 * us skip non-matching items quickly most of the time.
  	 */
! 	lHashValue = CatalogCacheComputeHashValue(cache, nkeys, cache->cc_skey);
  
  	/*
  	 * scan the items until we find a match or exhaust our list
***************
*** 1410,1416 ****
  			HeapKeyTest(cl->tuple,
  						cache->cc_tupdesc,
  						nkeys,
! 

Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote: 
 
 I'm interested in hearing from anyone who has practical experience
 with tuning these variables, or any ideas on what we should test to
 get a better idea as to how to set them.
 
I don't remember any clear resolution to the wild variations in plan
time mentioned here:
 
http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php
 
I think it would be prudent to try to figure out why small changes in
the query caused the large changes in the plan times Andres was
seeing.  Has anyone else ever seen such behavior?  Can we get
examples?  (It should be enough to get the statistics and the schema,
since this is about planning time, not run time.)
 
My own experience is that when we investigate a complaint about a
query not performing to user or application programmer expectations,
we have sometimes found that boosting these values has helped.  We
boost them overall (in postgresql.conf) without ever having seen a
downside.  We currently have geqo disabled and set both collapse
limits to 20.  We should probably just set them both to several
hundred and not wait until some query with more than 20 tables
performs badly, but I'm not sure we have any of those yet.
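 
In postgresql.conf terms, the settings described above are:
 
  geqo = off
  from_collapse_limit = 20
  join_collapse_limit = 20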
 
In short, my experience is that when setting these higher has made any
difference at all, it has always generated a plan that saved more time
than the extra planning required.  Well, I'd bet that there has been
an increase in the plan time of some queries which wound up with the
same plan anyway, but the difference has never been noticeable; the net
effect has been a plus for us.
 
I guess the question is whether there is anyone who has had a contrary
experience.  (There must have been some benchmarks to justify adding
geqo at some point?)
 
-Kevin



Re: [HACKERS] Small foreign key error message improvement

2009-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Monday 06 July 2009 23:00:18 Tom Lane wrote:
 It seems to me that the right fix here is not so much to tweak the
 message wording as to put in an error location cursor.  In more
 complicated cases (eg, multiple FOREIGN KEY clauses) the suggested
 wording change wouldn't help much anyway.

 It looks like this would involve equipping the Value node with location 
 information and passing that around everywhere.  This could also be used to 
 supply better location information for a number of other cases.  Does that 
 sound like the right direction?

Yeah, something more or less like that.  The trick is to not clutter the
code too much.  Perhaps the parser should use an alternate version of
makeString that accepts a location parameter, while leaving existing
calls elsewhere as-is?

regards, tom lane



Re: [HACKERS] Reduce the memcpy call from SearchCatCache

2009-07-07 Thread Tom Lane
Atsushi Ogawa a_og...@hi-ho.ne.jp writes:
 Tom Lane writes:
 There are cases where cache lookups happen recursively.

 I tested regression test and pgbench. However, I did not consider
 recursive case. I revised a patch for safe recursive call.
 But I cannot find test case in which recursive call happens.

Try turning on CLOBBER_CACHE_ALWAYS or CLOBBER_CACHE_RECURSIVELY to
get a demonstration of what can happen under the right conditions.

I think the only really safe way to do what you propose would be to
refactor the ScanKey API to separate the datum values and is-null
flags from the more static parts of the data structure.  That would
be a pretty large/invasive patch, and the numbers cited here don't
seem to me to justify the work.  It's even possible that it could
end up being a net performance loss due to having to pass around more
pointers :-(

regards, tom lane



Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Tom Lane
Dave Page dp...@pgadmin.org writes:
 On Tue, Jul 7, 2009 at 8:28 AM, Heikki
 Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 Hmm, I thought we dropped support for 7.4 a while ago, and there's no
 download link for it on www.postgresql.org anymore. But looking at the
 CVS history, I see that others are still committing fixes to 7.4 branch.

 We dropped the link when we released 8.4, primarily for space reasons.
 I believe Tom is still patching 7.4 though, as Red Hat has obligations
 to support it and he'd have to do it regardless of project policy.

I'm still theoretically on the hook for 7.3, too.  In practice I doubt
I'd be able to get any but really critical security updates into RHEL-3
or RHEL-4 at this point, so the notion that we're supporting these old
versions because Red Hat wants 'em is probably not holding any water
anymore.

I'd personally be perfectly happy with a community decision to desupport
7.4 now, or perhaps after the next set of update releases (which we're
probably overdue for, BTW).  We cannot support an indefinitely large set
of back branches, and a five-year lifespan seems about right to me.

regards, tom lane



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Robert Haas
On Jul 7, 2009, at 9:31 AM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:



Robert Haas robertmh...@gmail.com wrote:


I'm interested in hearing from anyone who has practical experience
with tuning these variables, or any ideas on what we should test to
get a better idea as to how to set them.


I don't remember any clear resolution to the wild variations in plan
time mentioned here:

http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php

I think it would be prudent to try to figure out why small changes in
the query caused the large changes in the plan times Andres was
seeing.  Has anyone else ever seen such behavior?  Can we get
examples?  (It should be enough to get the statistics and the schema,
since this is about planning time, not run time.)


Well, there's not really enough information there to figure out
specifically what was happening, but from 10,000 feet,
join_collapse_limit and from_collapse_limit constrain the join order.
If the estimates are all accurate, setting them to a value < infinity
will either leave the plans unchanged or make them worse.  If it's
making them better, then the estimates are off and the join order
constraint happens to be preventing the planner from considering the
cases that really hurt you.  But that's mostly luck.



My own experience is that when we investigate a complaint about a
query not performing to user or application programmer expectations,
we have sometimes found that boosting these values has helped.  We
boost them overall (in postgresql.conf) without ever having seen a
downside.  We currently have geqo disabled and set both collapse
limits to 20.  We should probably just set them both to several
hundred and not wait until some query with more than 20 tables
performs badly, but I'm not sure we have any of those yet.

In short, my experience is that when setting these higher has made any
difference at all, it has always generated a plan that saved more time
than the extra planning required.  Well, I'd bet that there has been
an increase in the plan time of some queries which wound up with the
same plan anyway, but the difference has never been noticeable; the net
effect has been a plus for us.


You have a big dataset AIUI so the right values for you might be too  
high for some people with, say, OLTP workloads.



I guess the question is whether there is anyone who has had a contrary
experience.  (There must have been some benchmarks to justify adding
geqo at some point?)


GEQO or something like it is certainly needed for very large planning  
problems.  The non-GEQO planner takes exponential time in the size of  
the problem, so at some point that's going to get ugly.  But  
triggering it at the level we do now seems unnecessarily pessimistic  
about what constitutes too much planning.


...Robert



Re: [HACKERS] GRANT ON ALL IN schema

2009-07-07 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 I would like to see 
 GRANT ... ON ALL OBJECTS ...

This seems inherently broken, since different types of objects
will have different grantable privileges.

 (I'm sure we can do something intelligent with privileges that don't
 apply to all object types rather than just fail. e.g. UPDATE privilege
 should be same as USAGE on a sequence.)

Anything you do in that line will be an ugly kluge, and will tend to
encourage insecure over-granting of privileges (ie GRANT ALL ON ALL
OBJECTS ... what's the point of using permissions at all then?)

regards, tom lane



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Andres Freund
Hi Kevin, Hi all,

On Tuesday 07 July 2009 16:31:14 Kevin Grittner wrote:
 Robert Haas robertmh...@gmail.com wrote:
  I'm interested in hearing from anyone who has practical experience
  with tuning these variables, or any ideas on what we should test to
  get a better idea as to how to set them.

 I don't remember any clear resolution to the wild variations in plan
 time mentioned here:

 http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php

 I think it would be prudent to try to figure out why small changes in
 the query caused the large changes in the plan times Andres was
 seeing.  Has anyone else ever seen such behavior?  Can we get
 examples?  (It should be enough to get the statistics and the schema,
 since this is about planning time, not run time.)
I don't think it is surprising that small changes to those variables change
the plan time widely on a complex query.
E.g. an increase of one in from_collapse_limit can completely change the
plan, as more subqueries get inlined before optimization.

I don't know the exact behaviour when more joins exist than
join_collapse_limit, but it is not hard to imagine that this can also
dramatically change the plan complexity. As quite a few different views were
involved, all the changes to the *_limit variables could have triggered plan
changes in different parts of the query.

I plan to revisit the issue you referenced, btw. First there was the release
phase, and then I could not motivate myself to investigate a bit more...

The mail you referenced contains a completely bogus and ugly query that shows
similar symptoms, by the way. I guess the variations would be even bigger if
differently sized views/subqueries were used.

 My own experience is that when we investigate a complaint about a
 query not performing to user or application programmer expectations,
 we have sometimes found that boosting these values has helped.  We
 boost them overall (in postgresql.conf) without ever having seen a
 downside.  We currently have geqo disabled and set both collapse
 limits to 20.  We should probably just set them both to several
 hundred and not wait until some query with more than 20 tables
 performs badly, but I'm not sure we have any of those yet.
I have not found consistently better results with geqo enabled. Some queries
are better, others worse. Often the comparison is not reliably reproducible.
(The possibility of setting geqo to some known starting value would be nice
for such comparisons.)

I cannot reasonably plan some queries with join_collapse_limit set to 20. At
least not without setting the geqo threshold very low and geqo_effort to a
low value.
So I would definitely not agree that removing join_collapse_limit is a good
idea.

Andres



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 I cannot reasonably plan some queries with join_collapse_limit set to 20.
 At least not without setting the geqo threshold very low and geqo_effort
 to a low value.
 So I would definitely not agree that removing join_collapse_limit is a good idea.

Can you show some specific examples?  All of this discussion seems like
speculation in a vacuum ...

regards, tom lane



Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Tom Lane
Fujii Masao masao.fu...@gmail.com writes:
 On Tue, Jul 7, 2009 at 12:16 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I confess to not having paid much attention to this thread so far, but ...
 what is the rationale for having such a capability at all?

 If the XLOG files which are required for recovery exist only in the
 primary server, the standby server has to read them in some way. For
 example, when the latest XLOG file of the primary server is 09 and the
 standby server has only 01, the missing files (02-08) have to be read
 for recovery by the standby server. In this case, the XLOG records in
 09 or later are shipped to the standby server in real time by the
 synchronous replication feature.

 The problem which I'd like to solve is how to make the standby server
 read the XLOG files (XLOG file, backup history file and timeline history
 file) which exist only in the primary server. In the previous patch, we
 had to manually copy those missing files to the archive of the standby
 server or use the warm-standby mechanism. This would decrease the
 usability of synchronous replication. So, I proposed one solution which
 makes the standby server read those missing files automatically:
 introducing a new function pg_read_xlogfile() which reads the specified
 XLOG file.

 Is this solution in the right direction? Do you have another
 reasonable solution?

This design seems totally wrong to me.  It's confusing the master's
pg_xlog directory with the archive.  We should *not* use pg_xlog as
a long-term archive area; that's terrible from both a performance
and a reliability perspective.  Performance because pg_xlog has to be
fairly high-speed storage, which conflicts with it needing to hold
a lot of stuff; and reliability because the entire point of all this
is to survive a master server crash, and you're probably not going to
have its pg_xlog anymore after that.

If slaves need to be able to get at past WAL, they should be getting
it from a separate archive server that is independent of the master DB.

regards, tom lane



Re: [HACKERS] Re: [COMMITTERS] pgsql: Initialise perl library as documented in perl API.

2009-07-07 Thread Andrew Dunstan



Andrew Dunstan wrote:


I think we need to float a bug upstream to the perl guys on this, but 
as a holding position I suggest that we alter the #ifdef test to avoid 
calling PERL_SYS_INIT3() where MYMALLOC is defined. It's ugly, but I 
can't think of another simple way around it (and we've done worse 
things to accommodate platform weirdness ;-) )





It turns out that this doesn't work on Cygwin with its latest Perl 
(which has MYMALLOC defined). If we call PERL_SYS_INIT3() it fails in 
that call, and if we don't call it it fails later. We hadn't noticed it 
earlier because my Cygwin buildfarm member's Perl is a couple of years 
old. I guess we're lucky that it's not a more critical platform, but 
there is no guarantee that this is the only place it will fail, now or 
in the future. I will see about following up more energetically with the 
Perl people.


The workaround on Cygwin is to downgrade the perl installation to 5.8.8.

cheers

andrew



Re: [HACKERS] 8.3 PLpgSQL Can't Compare Records?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 12:49 AM, Albe Laurenz wrote:


Is this a known issue in 8.3? If so, is there a known workaround?


The change is probably here:
http://archives.postgresql.org/pgsql-committers/2008-10/msg00110.php

So I think it is safe to argue that this is not a bug in 8.3, but an  
improvement in 8.4.


Right, good find, thanks.

David



Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 8:06 AM, Tom Lane wrote:

I'd personally be perfectly happy with a community decision to desupport
7.4 now, or perhaps after the next set of update releases (which we're
probably overdue for, BTW).  We cannot support an indefinitely large set
of back branches, and a five-year lifespan seems about right to me.


I had kind of thought it was five active versions, which translates to  
more or less the same thing. In that case, 7.4 would shortly be  
dropped. So I ask:


1. Should 7.4 be dropped after the release of 7.4.26?

2. Should there be an articulated, published maintenance policy? Or,  
at least, a prominent list saying, these are the versions we actively  
support as of now?


Thanks,

David



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Andres Freund
On Tuesday 07 July 2009 17:40:50 Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  I cannot reasonably plan some queries with join_collapse_limit set to 20.
  At least not without setting the geqo threshold very low and geqo_effort
  to a low value.
  So I would definitely not agree that removing join_collapse_limit is a good idea.
 Can you show some specific examples?  All of this discussion seems like
 speculation in a vacuum ...
I still may not publish the original schema (and I still have not heard any
reasonable reasons why) - the crazy query in the referenced email shows
similar problems and has a somewhat similar structure.

If that is not enough, I will try to design a schema that is similar to and
yet different enough from the original schema. That will take a day or two,
though.


Andres



Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Andrew Dunstan



David E. Wheeler wrote:

On Jul 7, 2009, at 8:06 AM, Tom Lane wrote:


I'd personally be perfectly happy with a community decision to desupport
7.4 now, or perhaps after the next set of update releases (which we're
probably overdue for, BTW).  We cannot support an indefinitely large set
of back branches, and a five-year lifespan seems about right to me.


I had kind of thought it was five active versions, which translates to 
more or less the same thing. In that case, 7.4 would shortly be 
dropped. So I ask:


1. Should 7.4 be dropped after the release of 7.4.26?

2. Should there be an articulated, published maintenance policy? Or, 
at least, a prominent list saying, these are the versions we actively 
support as of now?





One thing I think we really should do is give prominent public notice of 
any EOL for a branch, preferably at least a couple of months in advance. 
If the lifetime were absolutely fixed it might not matter so much, but as 
it isn't, I think we owe that to our users.


cheers

andrew



Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Kevin Grittner
Brendan Jurd dire...@gmail.com wrote: 
 
 If you think the app is fundamentally less useful than the wiki,
 please say so and we'll work out whether we can resolve your
 objection in time for the start of the CF.
 
It's been down for a while now.  I don't know if this is causal, but
the failure seemed to start when I went into the patch I submitted,
and clicked on the link to edit a comment I added.
 
At any rate, the reliability doesn't seem to match the wiki yet, which
would seem to be a pretty fundamental thing to fix before switching.
 
-Kevin



Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Kevin Grittner
Brendan Jurd dire...@gmail.com wrote: 
 
 If you think the app is fundamentally less useful than the wiki,
 please say so and we'll work out whether we can resolve your
 objection in time for the start of the CF.
 
Oh, sure -- I post about it being down, and seconds after I hit send
it comes up again.   :-/
 
Do we know the cause?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Brendan Jurd
2009/7/8 Kevin Grittner kevin.gritt...@wicourts.gov:
 Oh, sure -- I post about it being down, and seconds after I hit send
 it comes up again.   :-/

 Do we know that cause?

Well, no, since I've never observed it being down and I really have
no idea what you mean by that.

Maybe you could describe the symptoms you observed?  Was the webserver
totally uncontactable, or was it an error in the web app itself?

Cheers,
BJ

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Kevin Grittner
Brendan Jurd dire...@gmail.com wrote: 
 
 Maybe you could describe the symptoms you observed?  Was the
 webserver totally uncontactable, or was it an error in the web app
 itself?
 
When I clicked the link to edit the comment, it clocked until the
browser timed out.  So then I tried the URL for the main page, and it
again clocked until the browser timed out.  I then tried again on the
main page, and it clocked for a minute, so I posted the message about
it being down.  Moments after I sent that, the page came up.
 
If anyone else was hitting the site and it was working, perhaps the
problem could have been elsewhere.  (Our network or our Internet path
to the site could have had some temporary issue.)
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 I guess the question is whether there is anyone who has had a contrary
 experience.  (There must have been some benchmarks to justify adding
 geqo at some point?)

The CVS history shows that geqo was integrated on 1997-02-19, which
I think means that it must have been developed against Postgres95
(or even earlier Berkeley releases?).  That was certainly before any
of the current community's work on the optimizer began.  A quick look
at the code as it stood on that date suggests that the regular
optimizer's behavior for large numbers of rels was a lot worse than it
is today --- notably, it looks like it would consider a whole lot more
Cartesian-product joins than we do now; especially if you had bushy
mode turned on, which you'd probably have to do to find good plans in
complicated cases.  There were also a bunch of enormous inefficiencies
that we've whittled down over time, such as the mechanisms for comparing
pathkeys or the use of integer Lists to represent relid sets.

So while I don't doubt that geqo was absolutely essential when it was
written, it's fair to question whether it still provides a real win.
And we could definitely stand to take another look at the default
thresholds.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GRANT ON ALL IN schema

2009-07-07 Thread Greg Stark
On Tue, Jul 7, 2009 at 4:16 PM, Tom Lanet...@sss.pgh.pa.us wrote:

 (I'm sure we can do something intelligent with privileges that don't
 apply to all object types rather than just fail. e.g. UPDATE privilege
 should be same as USAGE on a sequence.)

 Anything you do in that line will be an ugly kluge, and will tend to
 encourage insecure over-granting of privileges (ie GRANT ALL ON ALL
 OBJECTS ... what's the point of using permissions at all then?)

That seems a bit pessimistic. While I disagree with Simon's rule I
think you can get plenty of mileage out of a more conservative rule of
just granting the privilege to all objects for which that privilege is
defined. Especially when you consider that we allow listing multiple
privileges in a single command.
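
For instance, a minimal sketch of that conservative rule (purely
illustrative syntax -- nothing like this exists as of 8.4, and the
object and role names are made up):

  GRANT SELECT, UPDATE ON ALL OBJECTS IN SCHEMA app TO reporting;
  -- SELECT and UPDATE would be granted on tables, views and
  -- sequences, where both privileges are defined; functions, which
  -- only have EXECUTE, would simply be skipped rather than failing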

-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 9:13 AM, Andrew Dunstan wrote:

One thing I think we really should do is give prominent public  
notice of any EOL for a branch. At least a couple of months,  
preferably. If the lifetime were absolutely fixed it might not  
matter so much, but as it isn't I think we owe that to our users.


Perhaps a maintenance page on the site with a table for each version  
of PostgreSQL, in reverse chronological order, showing the initial  
release date and the date of last supported release (anticipated,  
perhaps, to be something like Sept 1 for 7.4).


So something like:

 branch |  released  | curr_version | curr_date  | final_date
--------+------------+--------------+------------+------------------------
 8.4    | 2009-07-01 | 8.4.0        | 2009-07-01 |
 8.3    | 2008-02-04 | 8.3.7        | 2009-03-16 |
 8.2    | 2006-12-05 | 8.2.13       | 2009-03-16 |
 8.1    | 2005-11-08 | 8.1.17       | 2009-03-16 |
 8.0    | 2005-01-19 | 8.0.21       | 2009-03-16 |
 7.4    | 2003-11-17 | 7.4.25       | 2009-03-16 | 2009-09-01 (projected)
 7.3    | 2002-11-27 | 7.3.21       | 2008-01-07 | 2008-01-07
 7.2    | 2002-02-04 | 7.2.8        | 2005-05-09 | 2005-05-09
 7.1    | 2001-04-13 | 7.1.3        | 2001-08-15 | 2001-08-15
 7.0    | 2000-05-08 | 7.0.3        | 2000-11-11 | 2000-11-11

Best,

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Have \d show child tables that inherit from the specified parent

2009-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Sunday 10 May 2009 03:05:48 dam...@dalibo.info wrote:
 Here's a second version. Main changes are :
 
 * Child tables are sorted by name
 * \d only shows the number of child tables
 * \d+ shows the full list

 Committed.

I looked at this patch.  I'm a bit disturbed by the
ORDER BY c.oid::pg_catalog.regclass
business.  I doubt that does what the author thinks, and I'm not sure
what we really want anyway.  If we want the child table names sorted
alphabetically then the regclass value needs to be further cast to text,
ie
ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text

As-is, it's sorting by the OID values, which is almost like sorting the
children in creation order, which might possibly be thought preferable
... except that once the OID counter wraps around we won't be able to
guarantee that anymore.
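
For illustration, a minimal sketch of the underlying query with the
extra cast (assuming a parent table named "parent"):

  SELECT c.oid::pg_catalog.regclass
  FROM pg_catalog.pg_inherits i
       JOIN pg_catalog.pg_class c ON c.oid = i.inhrelid
  WHERE i.inhparent = 'parent'::pg_catalog.regclass
  ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;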

Comments?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Alvaro Herrera
David E. Wheeler wrote:
 On Jul 7, 2009, at 9:13 AM, Andrew Dunstan wrote:

 One thing I think we really should do is give prominent public notice 
 of any EOL for a branch. At least a couple of months, preferably. If 
 the lifetime were absolutely fixed it might not matter so much, but as 
 it isn't I think we owe that to our users.

 Perhaps a maintenance page on the site with a table for each version of 
 PostgreSQL, in reverse chronological order, showing the initial release 
 date and the date of last supported release (anticipated, perhaps, to be 
 something like Sept 1 for 7.4).

We have an RSS:
http://www.postgresql.org/versions.rss

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Greg Stark
On Tue, Jul 7, 2009 at 4:49 PM, Tom Lanet...@sss.pgh.pa.us wrote:
 This design seems totally wrong to me.  It's confusing the master's
 pg_xlog directory with the archive.  We should *not* use pg_xlog as
 a long-term archive area; that's terrible from both a performance
 and a reliability perspective.  Performance because pg_xlog has to be
 fairly high-speed storage, which conflicts with it needing to hold
 a lot of stuff; and reliability because the entire point of all this
 is to survive a master server crash, and you're probably not going to
 have its pg_xlog anymore after that.

Hm, those are all good points.

 If slaves need to be able to get at past WAL, they should be getting
 it from a separate archive server that is independent of the master DB.

But this conflicts with earlier discussions where we were concerned
about the length of the path wal has to travel between the master and
the slaves. We want slaves to be able to be turned on simply using a
simple robust configuration and to be able to respond quickly to
transactions that are committed in the master for synchronous
operation.

Having WAL written to the master's xlog directory, copied to the
archive, then copied from the archive to the slave's WAL directory, and
then finally reread and replayed on the slave means
a lot of extra complicated configuration which can be set up wrong and
which might not be apparent until things fall apart. And it means a
huge latency before the WAL files are finally replayed on the slave,
which will make transitioning to synchronous mode -- with a whole
other different mode of operation to configure -- quite tricky and
potentially slow.

I'm not sure how to reconcile these two sets of priorities though.
Your points above are perfectly valid as well. How do other databases
handle log shipping? Do they depend on archived logs to bring the
slaves up to speed? Is there a separate log management daemon?

-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Mon, 2009-07-06 at 18:27 +0100, Simon Riggs wrote:
 In many cases, people add unique indexes solely to allow replication to
 work correctly. The index itself may never be used, especially in high
 volume applications.

 Interesting. Maybe we should at least try to leave room for this feature
 to be added later. I agree that, from a theoretical perspective,
 requiring a UNIQUE constraint to use an index is wrong. For one thing,
 you can't ensure the uniqueness without defining some total order
 (although you can define an arbitrary total order for cases with no
 meaningful total order).

This seems a bit pointless.  There is certainly not any use case for a
constraint without an enforcement mechanism (or at least none the PG
community is likely to consider legitimate ;-)).  And it's not very
realistic to suppose that you'd check a constraint by doing a seqscan
every time.  Therefore there has to be an index underlying the
constraint somehow.  Jeff's complaint about total order is not an
argument against having an index, it's just pointing out that btree is
not the only possible type of index.  It's perfectly legitimate to
imagine using a hash index to enforce uniqueness, for example.  If hash
indexes had better performance we'd probably already have been looking
for a way to do that, and wanting some outside-the-AM mechanism for it
so we didn't have to duplicate code from btree.

Also, if hash indexes were a realistic alternative to btree for this,
we'd already have come up against the problem that the CONSTRAINT syntax
doesn't provide any way to specify what kind of index you want to use
underneath the constraint.  I think it might be interesting to turn
around Jeff's syntax sketch and provide a way to say that a CONSTRAINT
declaration should depend on some previously added index, eg
something like

ALTER TABLE tab ADD CONSTRAINT UNIQUE (col1, col2) USING index

Not sure how that squares exactly with the question of variant
definitions of uniqueness semantics (as opposed to purely implementation
decisions like hash vs btree).  But it's a different way to come at it.
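
Spelled out, usage might look something like this (hypothetical syntax,
just to make the sketch above concrete):

  CREATE INDEX tab_col1_idx ON tab USING hash (col1);
  -- attach a uniqueness constraint to the pre-existing index
  ALTER TABLE tab ADD CONSTRAINT tab_col1_key UNIQUE (col1)
      USING tab_col1_idx;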

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 One possibility would be to remove join_collapse_limit entirely, but
 that would eliminate one possibly-useful piece of functionality that
 it currently enables: namely, the ability to exactly specify the join
 order by setting join_collapse_limit to 1.  So one possibility would
 be to rename the variable something like explicit_join_order and make
 it a Boolean; another possibility would be to change the default value
 to INT_MAX.

As the person who put in those thresholds, I kind of prefer going over
to the boolean definition.  That was the alternative that we considered;
the numeric thresholds were used instead because they were easy to
implement and seemed to possibly offer more control.  But I'm not
convinced that anyone has really used them profitably.  I agree that
the ability to use JOIN syntax to specify the join order exactly (with
join_collapse_limit=1) is the only really solid use-case anyone has
proposed for either threshold.  I'm interested in Andreas' comment that
he has use-cases where using the collapse_limit is better than allowing
geqo to take over for very large problems ... but I think we need to see
those use-cases and see if there's a better fix.
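
For the record, that join-order use-case looks like this (a minimal
sketch with made-up tables a, b, c):

  SET join_collapse_limit = 1;
  -- the planner now follows the syntactic join order exactly:
  -- a is joined to b first, and that result is joined to c
  SELECT *
  FROM a
  JOIN b ON b.a_id = a.id
  JOIN c ON c.b_id = b.id;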

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GRANT ON ALL IN schema

2009-07-07 Thread Simon Riggs

On Tue, 2009-07-07 at 11:16 -0400, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
  I would like to see 
  GRANT ... ON ALL OBJECTS ...
 
 This seems inherently broken, since different types of objects
 will have different grantable privileges.
 
  (I'm sure we can do something intelligent with privileges that don't
  apply to all object types rather than just fail. e.g. UPDATE privilege
  should be same as USAGE on a sequence.)
 
 Anything you do in that line will be an ugly kluge, and will tend to
 encourage insecure over-granting of privileges (ie GRANT ALL ON ALL
 OBJECTS ... what's the point of using permissions at all then?)

My perspective would be that privilege systems that are too complex fall
into disuse, leading to less security, not more.

On any database that has moderate security or better, permissions errors
are one of the top three errors on production databases. Simplifying the
commands, whether by aggregating them or some other way, is likely to
yield usability benefits for a wide range of users.

Unix allows chmod to run against multiple object types. How annoying
would it be if you had to issue "chmodfile", "chmodlink" and "chmoddir"
separately for each class of object? (Links don't barf if you try to set
their file mode, for example.) We follow the Unix file system in many
other ways; why not this one?

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Simon Riggs

On Tue, 2009-07-07 at 13:22 -0400, Tom Lane wrote:
 ALTER TABLE tab ADD CONSTRAINT UNIQUE (col1, col2) USING index

This would be very useful, though perhaps only because we do not have
REINDEX CONCURRENTLY.

It is likely to be useful in the future to allow an index with N
columns, yet which can provide uniqueness with < N of those columns.
This capability is known as covered indexes and will be important if
Heikki writes his index-only scan code.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Greg Stark
On Tue, Jul 7, 2009 at 5:58 PM, Tom Lanet...@sss.pgh.pa.us wrote:
 So while I don't doubt that geqo was absolutely essential when it was
 written, it's fair to question whether it still provides a real win.
 And we could definitely stand to take another look at the default
 thresholds

The whole point of these parameters is to save time planning large
complex queries -- which are rarely going to be the kind of short,
simple, fast to execute oltp queries where planning time makes a big
difference. The larger more complex the query the more likely it is to
be a long-running dss or olap style query where shaving one percent
off the runtime would be worth spending many seconds planning.

I propose that there's a maximum reasonable planning time within which
a programmer would normally expect the database to be able to come up
with a plan for virtually any query. Personally I would be surprised if
a plain EXPLAIN took more than, say, 30s -- perhaps even something more
like 10s.

We should benchmark the planner on increasingly large sets of
relations on a typical developer machine and set geqo_threshold to
whatever value the planner can handle in that length of time. I suspect
even at 10s you're talking about substantially larger values than the
current default.

-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New types for transparent encryption

2009-07-07 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 Itagaki Takahiro wrote:
 CREATE TYPE encrypted_text (
 INPUT = pgp_sym_encrypt_text(textin($1), password(), options()),
 OUTPUT = textout(pgp_sym_decrypt_text($1, password(), options())),
 LIKE bytea
 );
 
 password() and options() are SQL functions and we can re-define them
 if needed. The default implementations are to refer to custom GUC variables
 (pgcrypto.password and pgcrypto.options) so that encryption is done
 only in the database server and applications don't have to know the details.

 What kind of attacks would this protect against?

I agree that this seems more like offering security theater than real
security.  I'm also pretty concerned about the implications of a
datatype whose I/O operations fundamentally don't work without knowledge
of values that are supposed to be kept secret.  What is your expectation
for how pg_dump will handle such columns?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 We should benchmark the planner on increasingly large sets of
 relations on a typical developer machine and set geqo to whatever
 value the planner can handle in that length of time. I suspect even at
 10s you're talking about substantially larger values than the current
 default.

The problem is to find some realistic benchmark cases.  That's one
reason why I was pestering Andreas to see his actual use cases ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Simon Riggs

On Tue, 2009-07-07 at 13:22 -0400, Tom Lane wrote:
 Jeff Davis pg...@j-davis.com writes:
  On Mon, 2009-07-06 at 18:27 +0100, Simon Riggs wrote:
  In many cases, people add unique indexes solely to allow replication to
  work correctly. The index itself may never be used, especially in high
  volume applications.
 
  Interesting. Maybe we should at least try to leave room for this feature
  to be added later. I agree that, from a theoretical perspective,
  requiring a UNIQUE constraint to use an index is wrong. For one thing,
  you can't ensure the uniqueness without defining some total order
  (although you can define an arbitrary total order for cases with no
  meaningful total order).
 
 This seems a bit pointless.  There is certainly not any use case for a
 constraint without an enforcement mechanism (or at least none the PG
 community is likely to consider legitimate ;-)).  And it's not very
 realistic to suppose that you'd check a constraint by doing a seqscan
 every time.  Therefore there has to be an index underlying the
 constraint somehow.  

I think the idea has been misconstrued.

Obviously a constraint requires an enforcement mechanism. That doesn't
imply that the enforcement mechanism must be fully usable as an index.

The example being discussed was enforcing uniqueness on monotonically
increasing columns. If we knew that a column value was GENERATED ALWAYS
using a sequence, then we could simply skip the uniqueness check
altogether. No index, yet an enforced unique constraint.

Yes, we would need to understand the relationship between the sequence
and the table and throw an error in certain sequence update cases (and
we may need to check those with a seq scan). But that seems a small
price to pay for the avoidance of a potentially very large index that
may have no purpose.
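
As a sketch, borrowing the SQL-standard GENERATED ALWAYS spelling
(which Postgres does not have today; names are made up):

  CREATE TABLE audit_log (
      id   bigint GENERATED ALWAYS AS IDENTITY,  -- hypothetical for PG
      msg  text NOT NULL,
      UNIQUE (id)  -- enforceable without an index: values can only
                   -- ever come from the sequence, so they are unique
                   -- by construction
  );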

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Andres Freund
On Tuesday 07 July 2009 19:45:44 Tom Lane wrote:
 Greg Stark gsst...@mit.edu writes:
  We should benchmark the planner on increasingly large sets of
  relations on a typical developer machine and set geqo to whatever
  value the planner can handle in that length of time. I suspect even at
  10s you're talking about substantially larger values than the current
  default.
 The problem is to find some realistic benchmark cases.  That's one
 reason why I was pestering Andreas to see his actual use cases ...
I will start writing a reduced/altered schema tomorrow then...

Andres


PS: It's Andres btw ;-)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Greg Stark
On Tue, Jul 7, 2009 at 6:22 PM, Tom Lanet...@sss.pgh.pa.us wrote:

 This seems a bit pointless.  There is certainly not any use case for a
 constraint without an enforcement mechanism (or at least none the PG
 community is likely to consider legitimate ;-)).  And it's not very
 realistic to suppose that you'd check a constraint by doing a seqscan
 every time.  Therefore there has to be an index underlying the
 constraint somehow.

I'm not entirely convinced that running a full scan to enforce
constraints is necessarily such a crazy idea. It may well be the most
efficient approach after a major bulk load. And consider a read-only
database where the only purpose of the constraint is to inform the
optimizer that it can trust the property to hold.

That said, this seems like an orthogonal issue to me.

 Jeff's complaint about total order is not an
 argument against having an index, it's just pointing out that btree is
 not the only possible type of index.  It's perfectly legitimate to
 imagine using a hash index to enforce uniqueness, for example.  If hash
 indexes had better performance we'd probably already have been looking
 for a way to do that, and wanting some outside-the-AM mechanism for it
 so we didn't have to duplicate code from btree.

I'm a bit at a loss why we need this extra data structure though. The
whole duplicated code issue seems to me to be one largely of code
structure. If we hoisted the heap-value rechecking code out of the
btree AM then the hash AM could reuse it just fine.

Both the hash and btree AMs would have to implement some kind of
insert-unique-key operation which would hold some kind of lock
preventing duplicate unique keys from being inserted but both btree
and hash could implement that efficiently by locking one page or one
hash value.

GIST would need something like this "store the key value or tid in
shared memory" mechanism. But that could be implemented as an external
facility which GIST then made use of -- just the way every part of the
system makes use of other parts. It doesn't mean we have to make
"prevent concurrent unique inserts" not the responsibility of the AM,
which knows best how to handle that.

-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 On Tue, Jul 7, 2009 at 4:49 PM, Tom Lanet...@sss.pgh.pa.us wrote:
 This design seems totally wrong to me.
 ...

 But this conflicts with earlier discussions where we were concerned
 about the length of the path wal has to travel between the master and
 the slaves. We want slaves to be able to be turned on simply using a
 simple robust configuration and to be able to respond quickly to
 transactions that are committed in the master for synchronous
 operation.

Well, the problem I've really got with this is that if you want sync
replication, couching it in terms of WAL files in the first place seems
like getting off on fundamentally the wrong foot.  That still leaves you
with all the BS about having to force WAL file switches (and eat LSN
space) for all sorts of undesirable reasons.  I think we want the
API to operate more like a WAL stream.  I would envision the slaves
connecting to the master's replication port and asking "feed me WAL
beginning at LSN position thus-and-so", with no notion of WAL file
boundaries exposed anyplace.  The point about not wanting to archive
lots of WAL on the master would imply that the master reserves the right
to fail if the requested starting position is too old, whereupon the
slave needs some way to resync --- but that probably involves something
close to taking a fresh base backup to copy to the slave.  You either
have the master not recycle its WAL while the backup is going on (so the
slave can start reading afterwards), or expect the slave to absorb and
buffer the WAL stream while the backup is going on.  In neither case is
there any reason to have an API that involves fetching arbitrary chunks
of past WAL, and certainly not one that is phrased as fetching specific
WAL segment files.
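
Concretely, the exchange might run something like this (the command
name is invented purely for illustration):

  slave  -> master:  connects to the replication port
  slave  -> master:  STREAM_WAL 2F/8A000000
  master -> slave:   streams WAL records from that LSN onward, or fails
                     at once if that position has already been recycled
                     (whereupon the slave takes a new base backup)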

There are still some interesting questions in this about exactly how you
switch over from catchup mode to following the live WAL broadcast.
With the above design it would be the master's responsibility to manage
that, since presumably the requested start position will almost always
be somewhat behind the live end of WAL.  It might be nicer to push that
complexity to the slave side, but then you do need two data paths
somehow (ie, retrieving the slightly-stale WAL is separated from
tracking live events).  Which is what you're saying we should avoid,
and I do see the point there.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] information_schema.columns changes needed for OLEDB

2009-07-07 Thread Peter Eisentraut
On Monday 06 July 2009 22:16:12 Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  I have the attached patch that would make character_octet_length the
  product of character_maximum_length and the maximum octet length of a
  single character in the selected server encoding.  So for UTF-8, this
  would be factor 4.  This doesn't exactly correspond to the behavior that
  you expect, but I think it's more correct overall anyway.

 +1, but that new query isn't very schema-safe ... I think it needs a few
 pg_catalog. qualifications.

Applied with fixes.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Jeff Davis
On Tue, 2009-07-07 at 18:36 +0100, Simon Riggs wrote:
 On Tue, 2009-07-07 at 13:22 -0400, Tom Lane wrote:
 It is likely to be useful in the future to allow an index with N
 columns, yet which can provide uniqueness with < N of those columns.
 This capability is known as covered indexes and will be important if
 Heikki writes his index-only scan code.

My patch offers this capability, and the language I suggested would
support it.

In the current version of the patch, just use InvalidStrategy (0)
instead of, say, BTEqualStrategyNumber (3) for the attributes that you
don't want to be a part of the constraint. Some of the proper error
checking is not done yet, but it will work.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 10:18 AM, Alvaro Herrera wrote:


We have an RSS:
http://www.postgresql.org/versions.rss


Does anyone use it? And it only goes back to 8.0, and it's served with
the text/html content-type.


Best,

David

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Tue, 2009-07-07 at 18:36 +0100, Simon Riggs wrote:
 On Tue, 2009-07-07 at 13:22 -0400, Tom Lane wrote:
 It is likely to be useful in the future to allow an index with N
 columns, yet which can provide uniqueness with < N of those columns.
 This capability is known as covered indexes and will be important if
 Heikki writes his index-only scan code.

 My patch offers this capability, and the language I suggested would
 support it.

 In the current version of the patch, just use InvalidStrategy (0)
 instead of, say, BTEqualStrategyNumber (3) for the attributes that you
 don't want to be a part of the constraint. Some of the proper error
 checking is not done yet, but it will work.

I don't think this even approximates the need --- in particular it's not
clear what the semantics of combination across different index columns
are.  I assume you've hot-wired it so that several BTEqualStrategyNumber
columns will work like a normal multicolumn uniqueness constraint (IOW
it's okay as long as at least one column is NULL or isn't equal).  But
I'm not at all sure that's what I'd want for some other operator type.

Also, what happens if you want to use the same index to support more
than one logical constraint?  This is impossible if you put the
information into pg_index, I think.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Heikki Linnakangas
Tom Lane wrote:
 Greg Stark gsst...@mit.edu writes:
 On Tue, Jul 7, 2009 at 4:49 PM, Tom Lanet...@sss.pgh.pa.us wrote:
 This design seems totally wrong to me.
 ...
 
 But this conflicts with earlier discussions where we were concerned
 about the length of the path wal has to travel between the master and
 the slaves. We want slaves to be able to be turned on simply using a
 simple robust configuration and to be able to respond quickly to
 transactions that are committed in the master for synchronous
 operation.
 
 Well, the problem I've really got with this is that if you want sync
 replication, couching it in terms of WAL files in the first place seems
 like getting off on fundamentally the wrong foot.  That still leaves you
 with all the BS about having to force WAL file switches (and eat LSN
 space) for all sorts of undesirable reasons.  I think we want the
 API to operate more like a WAL stream.

I think we all agree on that.

  I would envision the slaves
 connecting to the master's replication port and asking "feed me WAL
 beginning at LSN position thus-and-so", with no notion of WAL file
 boundaries exposed anyplace.  

Yep, that's the way I envisioned it to work in my protocol suggestion
that Fujii adopted
(http://archives.postgresql.org/message-id/4951108a.5040...@enterprisedb.com).
The begin and end values are XLogRecPtrs, not WAL filenames.

 The point about not wanting to archive
 lots of WAL on the master would imply that the master reserves the right
 to fail if the requested starting position is too old, whereupon the
 slave needs some way to resync --- but that probably involves something
 close to taking a fresh base backup to copy to the slave.

Works for me, except that people will want the ability to use a PITR
archive for the catchup, if available. The master should have no
business peeking into the archive, however. That should be
implemented entirely in the slave.

And I'm sure people will want the option to retain WAL longer in the
master, to avoid an expensive resync if the slave falls behind. It would
be simple to provide a GUC option for "always retain X GB of old WAL
in pg_xlog".
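
Something along these lines, say (the GUC name and unit are invented;
nothing like it exists today):

  # hypothetical postgresql.conf entry
  wal_keep_size = 2GB   # never recycle the newest 2 GB of WAL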

 There are still some interesting questions in this about exactly how you
 switch over from catchup mode to following the live WAL broadcast.
 With the above design it would be the master's responsibility to manage
 that, since presumably the requested start position will almost always
 be somewhat behind the live end of WAL.  It might be nicer to push that
 complexity to the slave side, but then you do need two data paths
 somehow (ie, retrieving the slightly-stale WAL is separated from
 tracking live events).  Which is what you're saying we should avoid,
 and I do see the point there.

Yeah, that logic belongs to the master.

We'll want to send a message from the master to the slave when the catchup
is done, so that the slave knows it's up-to-date. For logging, if for no
other reason.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Alvaro Herrera
David E. Wheeler wrote:
 On Jul 7, 2009, at 10:18 AM, Alvaro Herrera wrote:

 We have an RSS:
 http://www.postgresql.org/versions.rss

 Does anyone use it?

No idea.

 And it only goes back to 8.0

Huh, true :-(  This should be fixed.

 and it served with the text/html content-type.

Not for me:

$ lynx -head -dump http://www.postgresql.org/versions.rss
HTTP/1.1 200 OK
Date: Tue, 07 Jul 2009 18:56:48 GMT
Server: Apache
Last-Modified: Wed, 01 Jul 2009 11:25:40 GMT
ETag: "bd2589-a8d-46da32eda5500"
Accept-Ranges: bytes
Content-Length: 2701
Connection: close
Content-Type: application/rss+xml

I guess it depends on the mirror.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Have \d show child tables that inherit from the specified parent

2009-07-07 Thread Peter Eisentraut
On Tuesday 07 July 2009 19:35:54 Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On Sunday 10 May 2009 03:05:48 dam...@dalibo.info wrote:
  Here's a second version. Main changes are :
 
  * Child tables are sorted by name
  * \d only shows the number of child tables
  * \d+ shows the full list
 
  Committed.

 I looked at this patch.  I'm a bit disturbed by the
   ORDER BY c.oid::pg_catalog.regclass
 business.  I doubt that does what the author thinks, and I'm not sure
 what we really want anyway.  If we want the child table names sorted
 alphabetically then the regclass value needs to be further cast to text,
 ie
   ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text

Fixed; it was clearly meant to be by name.

Creation order might be useful, but we don't really support that anywhere 
else.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 11:59 AM, Alvaro Herrera wrote:


And it only goes back to 8.0


Huh, true :-(  This should be fixed.


Yeah. Or we should have a table. I could create one in the wiki, I  
guess, but I would assume that the core team would want to have formal  
control over scheduled maintenance expirations…



and it's served with the text/html content-type.


Not for me:

$ lynx -head -dump http://www.postgresql.org/versions.rss
HTTP/1.1 200 OK
Date: Tue, 07 Jul 2009 18:56:48 GMT
Server: Apache
Last-Modified: Wed, 01 Jul 2009 11:25:40 GMT
ETag: "bd2589-a8d-46da32eda5500"
Accept-Ranges: bytes
Content-Length: 2701
Connection: close
Content-Type: application/rss+xml

I guess it depends on the mirror.


Right:

% curl -I http://en.wikipedia.org/wiki/Nofollow
HTTP/1.0 200 OK
Date: Tue, 07 Jul 2009 07:37:07 GMT
Server: Apache
X-Powered-By: PHP/5.2.4-2ubuntu5wm1
Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
Content-Language: en
Vary: Accept-Encoding,Cookie
X-Vary-Options: Accept-Encoding;list-contains=gzip,Cookie;
  string-contains=enwikiToken;string-contains=enwikiLoggedOut;
  string-contains=enwiki_session;string-contains=centralauth_Token;
  string-contains=centralauth_Session;string-contains=centralauth_LoggedOut
Last-Modified: Mon, 06 Jul 2009 21:52:17 GMT
Content-Length: 55543
Content-Type: text/html; charset=utf-8
Age: 41449
X-Cache: HIT from sq21.wikimedia.org
X-Cache-Lookup: HIT from sq21.wikimedia.org:3128
X-Cache: MISS from sq22.wikimedia.org
X-Cache-Lookup: MISS from sq22.wikimedia.org:80
Via: 1.1 sq21.wikimedia.org:3128 (squid/2.7.STABLE6), 1.0
  sq22.wikimedia.org:80 (squid/2.7.STABLE6)
Connection: close

Best,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 And I'm sure people will want the option to retain WAL longer in the
 master, to avoid an expensive resync if the slave falls behind. It would
 be simple to provide a GUC option for always retain X GB of old WAL in
 pg_xlog.

Right, we would want to provide some more configurability on the
when-to-recycle-WAL decision than there is now.  But the basic point
is that I don't see the master pg_xlog as being a long-term archive.
The amount of back WAL that you'd want to keep there is measured in
minutes or hours, not weeks or months.

(If nothing else, there is no point in keeping so much WAL that catching
up by scanning it would take longer than taking a fresh base backup.
My impression from recent complaints about our WAL-reading speed is that
that might be a pretty tight threshold ...)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Peter Eisentraut
On Tuesday 07 July 2009 11:29:07 Brendan Jurd wrote:
 We're now about a week away from the start of the July 2009
 commitfest, and we need to make a decision about whether to start
 using http://commitfest.postgresql.org to manage it, or punt to the
 next commitfest and continue to use the wiki for July.

I have the following concern: Likely, this tool and the overall process will 
evolve over time.  To pick an example that may or may not be actually useful, 
in the future we might want to change from a fixed list of patch sections to a 
free list of tags, say.  Then someone might alter the application backend, and 
we'd use that new version for the next commit fest at the time.  What will 
that do to the data of old commit fests?

With the wiki, the data of the old fests will pretty much stay what it was, 
unless we change the wiki templates in drastic ways, as I understand it.  But 
if we did changes like the above, or more complicated things, perhaps, what 
will happen?  Perhaps we simply don't care about the historical data.  But if 
we do, we better have pretty high confidence that the current application will 
do for a while or is easily upgradable.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Andrew Dunstan



David E. Wheeler wrote:

On Jul 7, 2009, at 9:13 AM, Andrew Dunstan wrote:

One thing I think we really should do is give prominent public notice 
of any EOL for a branch. At least a couple of months, preferably. If 
the lifetime were absolutely fixed it might not matter so much, but 
as it isn't I think we owe that to our users.


Perhaps a maintenance page on the site with a table for each version 
of PostgreSQL, in reverse chronological order, showing the initial 
release date and the date of last supported release (anticipated, 
perhaps, to be something like Sept 1 for 7.4).


So something like:

 branch |  released  | curr_version | curr_date  | final_date
--------+------------+--------------+------------+------------------------
 8.4    | 2009-07-01 | 8.4.0        | 2009-07-01 |
 8.3    | 2008-02-04 | 8.3.7        | 2009-03-16 |
 8.2    | 2006-12-05 | 8.2.13       | 2009-03-16 |
 8.1    | 2005-11-08 | 8.1.17       | 2009-03-16 |
 8.0    | 2005-01-19 | 8.0.21       | 2009-03-16 |
 7.4    | 2003-11-17 | 7.4.25       | 2009-03-16 | 2009-09-01 (projected)
 7.3    | 2002-11-27 | 7.3.21       | 2008-01-07 | 2008-01-07
 7.2    | 2002-02-04 | 7.2.8        | 2005-05-09 | 2005-05-09
 7.1    | 2001-04-13 | 7.1.3        | 2001-08-15 | 2001-08-15
 7.0    | 2000-05-08 | 7.0.3        | 2000-11-11 | 2000-11-11




Yeah, nice. I was thinking of a press release when we EOL a branch as well.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Robert Haas

On Jul 7, 2009, at 12:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:


Robert Haas robertmh...@gmail.com writes:

One possibility would be to remove join_collapse_limit entirely, but
that would eliminate one possibly-useful piece of functionality that
it currently enables: namely, the ability to exactly specify the join
order by setting join_collapse_limit to 1.  So one possibility would
be to rename the variable something like explicit_join_order and make
it a Boolean; another possibility would be to change the default value
to INT_MAX.


As the person who put in those thresholds, I kind of prefer going over
to the boolean definition.


I'm OK with that, but out of conservatism I suggested changing the  
default to unlimited in this release.  If by chance there is something  
we're missing and these parameters are doing someone any good, we can  
suggest that they set them back to the old values rather than telling  
them to use a private build.  If on the other hand we don't get any  
complaints, we can remove them with greater confidence in a future  
release.  But maybe that's too conservative.


Now, here's another thought: if we think it's reasonable for people to  
want to explicitly specify the join order, a GUC isn't really the best  
fit, because it's all or nothing.  Maybe we'd be better off dropping  
the GUCs entirely and adding some other bit of syntax that forces the  
join order, but only for that particular join.



That was the alternative that we considered;
the numeric thresholds were used instead because they were easy to
implement and seemed to possibly offer more control.  But I'm not
convinced that anyone has really used them profitably.  I agree that
the ability to use JOIN syntax to specify the join order exactly (with
join_collapse_limit=1) is the only really solid use-case anyone has
proposed for either threshold.  I'm interested in Andreas' comment that
he has use-cases where using the collapse_limit is better than allowing
geqo to take over for very large problems ... but I think we need to see
those use-cases and see if there's a better fix.

   regards, tom lane


Agreed.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Kevin Grittner
Peter Eisentraut pete...@gmx.net wrote: 
 
 in the future we might want to change from a fixed list of patch
 sections to a free list of tags, say.  Then someone might alter the
 application backend, and we'd use that new version for the next
 commit fest at the time.  What will that do to the data of old
 commit fests?
 
Certainly you see how trivial that conversion would be.  If that were
the worst case anyone could even imagine, it would be a pretty strong
argument that the schema is more than good enough to proceed.
 
Do you see anything fundamentally wrong with the structure in terms
of long term goals?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Dimitri Fontaine

On 7 Jul 2009, at 19:37, Greg Stark wrote:

I propose that there's a maximum reasonable planning time


It sounds so much like the planner_effort GUC that has been talked  
about in the past...

  http://archives.postgresql.org/pgsql-performance/2009-05/msg00137.php

...except this time you want to measure it in seconds. The problem
with measuring it in seconds is that once the time has elapsed, it's
hard to switch from the classic planner to geqo without starting from
scratch again.

Would it be possible to start geqo from current planner state?

Another idea would be to have more complex metrics for deciding when
to run geqo, that is, guesstimate the query planning difficulty very
quickly based on more than just the number of relations in the FROM
clause: presence of subqueries, UNION, EXISTS, IN, or branches in the
WHERE clause, number of operators and index support for them, maybe
some information from the stats too... The idea would be to:
 - set an effort threshold above which we'd better run geqo (GUC,
   disabling possible)
 - if the threshold is enabled, compute the metric
 - if metric >= threshold, use geqo; if not, the classic planner
 - maybe default to disabling the threshold

It seems it'd be easier to set the new GUC on a per-query basis...

The obvious problem with this approach is that computing the metric
will take some time better spent planning queries, but maybe we could
have a fast path for easy queries, which would look a lot like $subject.


Regards,
--
dim

I hope this will give readers better ideas than its bare content...
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Dimitri Fontaine

On 7 Jul 2009, at 21:16, Robert Haas wrote:
Now, here's another thought: if we think it's reasonable for people  
to want to explicitly specify the join order, a GUC isn't really the  
best fit, because it's all or nothing.  Maybe we'd be better off  
dropping the GUCs entirely and adding some other bit of syntax that  
forces the join order, but only for that particular join.


MySQL calls them Straight Joins:
 
http://www.mysqlperformanceblog.com/2006/12/28/mysql-session-variables-and-hints/

I'm not sure our best move here would be in this direction :)
--
dim
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Have \d show child tables that inherit from the specified parent

2009-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Tuesday 07 July 2009 19:35:54 Tom Lane wrote:
 I looked at this patch.  I'm a bit disturbed by the
 ORDER BY c.oid::pg_catalog.regclass
 business.  I doubt that does what the author thinks, and I'm not sure
 what we really want anyway.  If we want the child table names sorted
 alphabetically then the regclass value needs to be further cast to text,
 ie
 ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text

 Fixed; it was clearly meant to be by name.

Testing shows that we still have a problem: the query now fails outright
with pre-8.3 servers:

regression=# \d+ a
ERROR:  cannot cast type regclass to text

Perhaps it would be sufficient to ORDER BY c.relname.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 With the wiki, the data of the old fests will pretty much stay what is
 was, unless we change the wiki templates in drastic ways, as I
 understand it.  But if we did changes like the above, or more
 complicated things, perhaps, what will happen?  Perhaps we simply
 don't care about the historical data.  But if we do, we better have
 pretty high confidence that the current application will do for a
 while or is easily upgradable.

I'm not convinced that we care in the least about commitfests that are
more than a fest or two back; especially since the mailing lists archive
all the interesting underlying data.  However, if we did, the answer
doesn't seem that hard: keep the old database instance on-line for
serving up the old data, and put a new one beside it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Dimitri Fontaine dfonta...@hi-media.com writes:
 Another idea would be to have more complex metrics for deciding when  
 to run geqo, that is guesstimate the query planning difficulty very  
 quickly, based on more than just the number of relations in the from:  
 presence of subqueries, UNION, EXISTS, IN, or branches in where  
 clause, number of operators and index support for them, maybe some  
 information from the stats too...

Pointless, since GEQO is only concerned with examining alternative join
orderings.  I see no reason whatever to think that number-of-relations
isn't the correct variable to test to decide whether to use it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread Alvaro Herrera
David E. Wheeler wrote:
 On Jul 7, 2009, at 11:59 AM, Alvaro Herrera wrote:

 And it only goes back to 8.0

 Huh, true :-(  This should be fixed.

 Yeah. Or we should have a table. I could create one in the wiki, I  
 guess, but I would assume that the core team would want to have formal  
 control over scheduled maintenance expirations…

The web team already has a table, and it is published as the RSS I
linked to.  If you want it in another format I think it should be on the
main website (not wiki), derived from the table already present.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Dimitri Fontaine

On 7 Jul 2009, at 21:12, Tom Lane wrote:

Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:

And I'm sure people will want the option to retain WAL longer in the
master, to avoid an expensive resync if the slave falls behind. It would
be simple to provide a GUC option for "always retain X GB of old WAL in
pg_xlog".


Right, we would want to provide some more configurability on the
when-to-recycle-WAL decision than there is now.  But the basic point
is that I don't see the master pg_xlog as being a long-term archive.
The amount of back WAL that you'd want to keep there is measured in
minutes or hours, not weeks or months.


Could we add yet another specialized postmaster child to handle the
archive, which would be like a default archive_command implemented in
core? This separate process could then be responsible for feeding the
slave(s) with the WAL history for any LSN not available in pg_xlog
anymore.


The bonus would be to have a good reliable WAL archiving default setup  
for simple PITR and simple replication setups. One of the reasons PITR  
looks so difficult is that it involves reading a lot of documentation  
then hand-writing scripts even in the simple default case.


(If nothing else, there is no point in keeping so much WAL that catching
up by scanning it would take longer than taking a fresh base backup.
My impression from recent complaints about our WAL-reading speed is that
that might be a pretty tight threshold ...)


Could the design above make it so that your latest PITR backup is
always an option for setting up a WAL shipping slave?


Regards,
--
dim
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote: 
 
 if we think it's reasonable for people to want to explicitly specify
 the join order
 
Regardless of the syntax (GUC or otherwise), that is an optimizer
hint.  I thought we were trying to avoid those.
 
Although -- we do have all those enable_* GUC values which are also
optimizer hints; perhaps this should be another of those? 
enable_join_reorder?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Robert Haas

On Jul 7, 2009, at 2:14 PM, Peter Eisentraut pete...@gmx.net wrote:


On Tuesday 07 July 2009 11:29:07 Brendan Jurd wrote:

We're now about a week away from the start of the July 2009
commitfest, and we need to make a decision about whether to start
using http://commitfest.postgresql.org to manage it, or punt to the
next commitfest and continue to use the wiki for July.


I have the following concern: Likely, this tool and the overall process
will evolve over time.  To pick an example that may or may not be
actually useful, in the future we might want to change from a fixed list
of patch sections to a free list of tags, say.  Then someone might alter
the application backend, and we'd use that new version for the next
commit fest at the time.  What will that do to the data of old commit
fests?


I don't see this as being much of an obstacle.  I have migrated data
far more complex than that, and I venture to say that most of the rest
of the regular posters in this forum have too.


With the wiki, the data of the old fests will pretty much stay what it
was, unless we change the wiki templates in drastic ways, as I
understand it.  But if we did changes like the above, or more
complicated things, perhaps, what will happen?  Perhaps we simply don't
care about the historical data.  But if we do, we better have pretty
high confidence that the current application will do for a while or is
easily upgradable.


I suspect both are true, but in the unlikely event that we decide on  
some massive change to the system, we can either run the DBs in  
parallel as Tom suggests, or dump out the older data in Wiki markup  
and post it on there. But I can't imagine what we'd want to do that  
would even make us consider such drastic steps.  Your example would  
not be a difficult migration, for instance.


...Robert 
 




Re: [HACKERS] [pgsql-www] commitfest.postgresql.org

2009-07-07 Thread Alvaro Herrera
Robert Haas wrote:

 I suspect both are true, but in the unlikely event that we decide on  
 some massive change to the system, we can either run the DBs in parallel 
 as Tom suggests, or dump out the older data in Wiki markup and post it on 
 there. But I can't imagine what we'd want to do that would even make us 
 consider such drastic steps.  Your example would not be a difficult 
 migration, for instance.

By the way, if the migration of the current commitfest was an automatic
procedure, is there a chance that the old commitfests can be migrated as
well?

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Dimitri Fontaine

On Jul 7, 2009, at 21:45, Tom Lane wrote:

> Dimitri Fontaine dfonta...@hi-media.com writes:
>> Another idea would be to have more complex metrics for deciding when
>> to run geqo
>
> Pointless, since GEQO is only concerned with examining alternative join
> orderings.  I see no reason whatever to think that number-of-relations
> isn't the correct variable to test to decide whether to use it.

Oh. It seems I prefer showing my ignorance to learning enough first.
Writing mails is so much easier...


Sorry for the noise,
--
dim


Re: [HACKERS] Maintenance Policy?

2009-07-07 Thread David E. Wheeler

On Jul 7, 2009, at 12:59 PM, Alvaro Herrera wrote:

>> Yeah. Or we should have a table. I could create one in the wiki, I
>> guess, but I would assume that the core team would want to have formal
>> control over scheduled maintenance expirations…
>
> The web team already has a table, and it is published as the RSS I
> linked to.  If you want it in another format I think it should be on
> the main website (not wiki), derived from the table already present.

That would be great, with a link to it from an appropriate part of the
nav.


Best,

David


[HACKERS] Using results from INSERT ... RETURNING

2009-07-07 Thread Marko Tiikkaja

Hello.

Here's a patch (WIP) that implements INSERT .. RETURNING inside a CTE.
It should apply cleanly against CVS HEAD.


The INSERT query isn't rewritten so rules and default values don't work. 
Recursive CTEs don't work either.
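
To make the intended use concrete, here is a minimal example of the
syntax the patch targets (avoiding defaults and rules per the caveats
above):

CREATE TABLE foo (id int, val text);

WITH inserted AS (
    INSERT INTO foo VALUES (1, 'one')
    RETURNING id, val
)
SELECT * FROM inserted;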


Regards,
Marko Tiikkaja
*** a/src/backend/commands/explain.c
--- b/src/backend/commands/explain.c
***
*** 651,656  explain_outNode(StringInfo str,
--- 651,659 
  		case T_Hash:
  			pname = "Hash";
  			break;
+ 		case T_InsertReturning:
+ 			pname = "INSERT RETURNING";
+ 			break;
  		default:
  			pname = "???";
  			break;
*** a/src/backend/executor/Makefile
--- b/src/backend/executor/Makefile
***
*** 15,21  include $(top_builddir)/src/Makefile.global
  OBJS = execAmi.o execCurrent.o execGrouping.o execJunk.o execMain.o \
 execProcnode.o execQual.o execScan.o execTuples.o \
 execUtils.o functions.o instrument.o nodeAppend.o nodeAgg.o \
!nodeBitmapAnd.o nodeBitmapOr.o \
 nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeHash.o \
 nodeHashjoin.o nodeIndexscan.o nodeMaterial.o nodeMergejoin.o \
 nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
--- 15,21 
  OBJS = execAmi.o execCurrent.o execGrouping.o execJunk.o execMain.o \
 execProcnode.o execQual.o execScan.o execTuples.o \
 execUtils.o functions.o instrument.o nodeAppend.o nodeAgg.o \
!nodeBitmapAnd.o nodeBitmapOr.o nodeInsertReturning.o \
 nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeHash.o \
 nodeHashjoin.o nodeIndexscan.o nodeMaterial.o nodeMergejoin.o \
 nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
*** a/src/backend/executor/execMain.c
--- b/src/backend/executor/execMain.c
***
*** 86,94  static void ExecutePlan(EState *estate, PlanState *planstate,
  			DestReceiver *dest);
  static void ExecSelect(TupleTableSlot *slot,
  		   DestReceiver *dest, EState *estate);
! static void ExecInsert(TupleTableSlot *slot, ItemPointer tupleid,
  		   TupleTableSlot *planSlot,
! 		   DestReceiver *dest, EState *estate);
  static void ExecDelete(ItemPointer tupleid,
  		   TupleTableSlot *planSlot,
  		   DestReceiver *dest, EState *estate);
--- 86,94 
  			DestReceiver *dest);
  static void ExecSelect(TupleTableSlot *slot,
  		   DestReceiver *dest, EState *estate);
! void ExecInsert(TupleTableSlot *slot, ItemPointer tupleid,
  		   TupleTableSlot *planSlot,
! 		   DestReceiver *dest, EState *estate, ResultRelInfo* resultRelInfo, bool clearReturningTuple);
  static void ExecDelete(ItemPointer tupleid,
  		   TupleTableSlot *planSlot,
  		   DestReceiver *dest, EState *estate);
***
*** 98,104  static void ExecUpdate(TupleTableSlot *slot, ItemPointer tupleid,
  static void ExecProcessReturning(ProjectionInfo *projectReturning,
  	 TupleTableSlot *tupleSlot,
  	 TupleTableSlot *planSlot,
! 	 DestReceiver *dest);
  static TupleTableSlot *EvalPlanQualNext(EState *estate);
  static void EndEvalPlanQual(EState *estate);
  static void ExecCheckRTPerms(List *rangeTable);
--- 98,105 
  static void ExecProcessReturning(ProjectionInfo *projectReturning,
  	 TupleTableSlot *tupleSlot,
  	 TupleTableSlot *planSlot,
! 	 DestReceiver *dest,
! 	 bool clearTuple);
  static TupleTableSlot *EvalPlanQualNext(EState *estate);
  static void EndEvalPlanQual(EState *estate);
  static void ExecCheckRTPerms(List *rangeTable);
***
*** 190,196  standard_ExecutorStart(QueryDesc *queryDesc, int eflags)
  		case CMD_SELECT:
  			/* SELECT INTO and SELECT FOR UPDATE/SHARE need to mark tuples */
  			if (queryDesc->plannedstmt->intoClause != NULL ||
! queryDesc->plannedstmt->rowMarks != NIL)
  estate->es_output_cid = GetCurrentCommandId(true);
  			break;
  
--- 191,198 
  		case CMD_SELECT:
  			/* SELECT INTO and SELECT FOR UPDATE/SHARE need to mark tuples */
  			if (queryDesc->plannedstmt->intoClause != NULL ||
! queryDesc->plannedstmt->rowMarks != NIL ||
! queryDesc->plannedstmt->hasWritableCtes)
  estate->es_output_cid = GetCurrentCommandId(true);
  			break;
  
***
*** 1670,1676  lnext:	;
  break;
  
  			case CMD_INSERT:
! ExecInsert(slot, tupleid, planSlot, dest, estate);
  break;
  
  			case CMD_DELETE:
--- 1672,1678 
  break;
  
  			case CMD_INSERT:
! ExecInsert(slot, tupleid, planSlot, dest, estate, estate->es_result_relation_info, true);
  break;
  
  			case CMD_DELETE:
***
*** 1742,1756  ExecSelect(TupleTableSlot *slot,
   *		index relations.
   * 
   */
! static void
  ExecInsert(TupleTableSlot *slot,
  		   ItemPointer tupleid,
  		   TupleTableSlot *planSlot,
  		   DestReceiver *dest,
! 		   EState *estate)
  {
  	HeapTuple	tuple;
- 	ResultRelInfo *resultRelInfo;
  	Relation	resultRelationDesc;
  	Oid			newId;
  
--- 1744,1759 
   *		index relations.
   * 

Re: [HACKERS] 8.4, One-Time Filter and subquery ( ... FROM function() union all ... )

2009-07-07 Thread Sergey Burladyan
Tom Lane t...@sss.pgh.pa.us writes:

 As of CVS HEAD you get
 
                                      QUERY PLAN
 ------------------------------------------------------------------------------------
  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=1)
    One-Time Filter: false
  Total runtime: 0.179 ms
 (3 rows)

Thank you, Tom !

-- 
Sergey Burladyan



Re: [HACKERS] Re: Synch Rep: direct transfer of WAL file from the primary to the standby

2009-07-07 Thread Tom Lane
Dimitri Fontaine dfonta...@hi-media.com writes:
 Could we add yet another postmaster specialized child to handle the  
 archive, which would be like a default archive_command implemented in  
 core.

I think this fails the basic sanity check: do you need it to still work
when the master is dead?  It's reasonable to ask the master to supply a
few gigs of very-recent WAL, but as soon as the word "archive" enters
the conversation, you should be thinking in terms of a different
machine.  Or at least a design that easily scales to put the archive on
a different machine.

regards, tom lane



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Although -- we do have all those enable_* GUC values which are also
 optimizer hints; perhaps this should be another of those? 
 enable_join_reorder?

Not a bad suggestion, especially since turning it off would usually be
considered just about as bad an idea as turning off the other ones.

regards, tom lane



Re: [HACKERS] WIP: generalized index constraints

2009-07-07 Thread Jeff Davis
On Tue, 2009-07-07 at 13:22 -0400, Tom Lane wrote:
 Also, if hash indexes were a realistic alternative to btree for this,
 we'd already have come up against the problem that the CONSTRAINT syntax
 doesn't provide any way to specify what kind of index you want to use
 underneath the constraint.  I think it might be interesting to turn
 around Jeff's syntax sketch and provide a way to say that a CONSTRAINT
 declaration should depend on some previously added index, eg
 something like
 
   ALTER TABLE tab ADD CONSTRAINT UNIQUE (col1, col2) USING index
 

How about:

ALTER TABLE tab ADD INDEX CONSTRAINT [name]
(col1 [op], col2 [op]) USING index

And then just validate the constraint at creation time, and store the
information in pg_constraint.
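
A sketch of how that might look (the syntax and all names here are
hypothetical):

CREATE TABLE tab (col1 int, col2 int);
CREATE INDEX tab_idx ON tab (col1, col2);

-- attach a constraint to the pre-existing index, one operator per column
ALTER TABLE tab ADD INDEX CONSTRAINT tab_con (col1 =, col2 =)
    USING tab_idx;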

Regards,
Jeff Davis






Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Robert Haas
On Jul 7, 2009, at 3:03 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:

> Robert Haas robertmh...@gmail.com wrote:
>> if we think it's reasonable for people to want to explicitly specify
>> the join order
>
> Regardless of the syntax (GUC or otherwise), that is an optimizer
> hint.  I thought we were trying to avoid those.

I guess my point is that there's not a lot of obvious benefit in
allowing the functionality to exist but handicapping it so that it's
useful in as few cases as possible.  If the consensus is that we want
half a feature (but not more or less than half), that's OK with me,
but it's not obvious to me why we should choose to want that.


...Robert



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I guess my point is that there's not a lot of obvious benefit in  
 allowing the functionality to exist but handicapping it so that it's  
 useful in as few cases as possible.  If the consensus is that we want  
 half a feature (but not more or less than half), that's OK with me,  
 but it's not obvious to me why we should choose to want that.

Well, the question to my mind is whether the collapse_threshold GUCs in
their current form actually represent a feature ;-).  They were put
in pretty much entirely on speculation that someone might find them
useful.  Your argument is that they are not only useless but a foot-gun,
and so far we haven't got any clear contrary evidence.  If we accept
that argument then we should take them out, not just change the default.

My own thought is that from_collapse_limit has more justification,
since it basically acts to stop a subquery from being flattened when
that would make the parent query too complex, and that seems like a
more understandable and justifiable behavior than treating JOIN
syntax specially.  But I'm fine with removing join_collapse_limit
or reducing it to a boolean.
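
For illustration (tables hypothetical): with from_collapse_limit set
below the combined FROM-list size, the subquery below stays a separate
planning problem instead of being flattened into the parent:

SET from_collapse_limit = 4;

SELECT *
FROM a, b, c,
     (SELECT d.x FROM d, e WHERE d.id = e.id) AS sub
WHERE a.x = sub.x;
-- flattening would yield 5 FROM items, exceeding the limit of 4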

regards, tom lane



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
> Kevin Grittner kevin.gritt...@wicourts.gov wrote:
>> Robert Haas robertmh...@gmail.com wrote:
>>> if we think it's reasonable for people to want to explicitly
>>> specify the join order
>>
>> Regardless of the syntax (GUC or otherwise), that is an optimizer
>> hint.  I thought we were trying to avoid those.
>
> I guess my point is that there's not a lot of obvious benefit in
> allowing the functionality to exist but handicapping it so that it's
> useful in as few cases as possible.  If the consensus is that we
> want half a feature (but not more or less than half), that's OK with
> me, but it's not obvious to me why we should choose to want that.
 
Ever since I've been following these lists, there has been a pretty
steady dribble of requests for optimizer hints of one type or another.
The standard response has always been, "If you have a query which is
optimizing poorly, show us, and we'll try to fix the optimizer so that
it doesn't need hints to do a good job."  The enable_* GUCs
effectively *are* optimizer hints, but they are strongly discouraged
for anything but diagnostic purposes.  I guess I don't see any reason
to consider this issue as being different.
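 
(The accepted diagnostic pattern is something like the following,
scoped to a single transaction and never left on in production; the
query is hypothetical:
 
BEGIN;
SET LOCAL enable_seqscan = off;  -- affects only this transaction
EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;
ROLLBACK;
 
which shows what plan the optimizer was avoiding, without hinting
anything globally.)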
 
If implementing them this way seems to handicap them, I guess that's
probably to discourage their use, which seems reasonable to me; I bet
people would shoot themselves in the foot much more often than they
would help themselves with such options, we'd lose information which
might help to improve the optimizer, and the list would probably get a
pretty steady stream of slow queries which would turn out to be
(after digging deep enough) caused by people using hints that caused a
sub-optimal plan to be chosen.
 
-Kevin



Re: [HACKERS] bytea vs. pg_dump

2009-07-07 Thread Peter Eisentraut
On Wednesday 06 May 2009 18:47:57 Tom Lane wrote:
 So the ambiguous-input problem is solved if we define the new format(s)
 to be started by backslash and something that the old code would reject.
 I'd keep it short, like \x, but there's still room for multiple
 formats if anyone really wants to go to the trouble.

Here is a first cut at a new hex bytea input and output format.  Example:

SET bytea_output_hex = true;

SELECT E'\\xDeAdBeEf'::bytea;
   bytea
------------
 \xdeadbeef
(1 row)

Bernd did some performance testing for me, and it looked pretty good.

Questions:

Should this be the default format?

Should the configuration parameter be a boolean or an enum, opening 
possibilities for other formats?
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index c944d8f..a6ac9c8 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL$ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/datatype.sgml,v 1.237 2009/04/27 16:27:35 momjian Exp $ -->
 
 <chapter id="datatype">
  <title id="datatype-title">Data Types</title>
@@ -1189,6 +1189,63 @@ SELECT b, char_length(b) FROM test2;
    </para>
 
    <para>
+    The <type>bytea</type> type supports two external formats for
+    input and output: the traditional bytea format that is particular
+    to PostgreSQL, and the hex format.  Both of these are always
+    accepted on input.  The output format depends on the configuration
+    parameter bytea_output_format; the default is hex.  (Note that the
+    hex format was introduced in PostgreSQL 8.5, so earlier versions
+    and some tools don't understand it.)
+   </para>
+
+   <para>
+    The <acronym>SQL</acronym> standard defines a different binary
+    string type, called <type>BLOB</type> or <type>BINARY LARGE
+    OBJECT</type>.  The input format is different from
+    <type>bytea</type>, but the provided functions and operators are
+    mostly the same.
+   </para>
+
+  <sect2>
+   <title>Hex Format</title>
+
+   <para>
+    The hex format encodes the binary data as 2 hexadecimal digits per
+    byte, most significant nibble first.  The entire string is
+    preceded by the sequence <literal>\x</literal> (to distinguish it
+    from the bytea format).  In SQL literals, the backslash may need
+    to be escaped, but it is one logical backslash as far as the
+    <type>bytea</type> type is concerned.  The hex format is compatible with a wide
+    range of external applications and protocols, and it tends to be
+    faster than the traditional bytea format, so its use is
+    somewhat preferable.
+   </para>
+
+   <para>
+    Example:
+<programlisting>
+SELECT E'\\xDEADBEEF';
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2>
+   <title>Traditional Bytea Format</title>
+
+   <para>
+    The traditional bytea format takes the approach of representing a
+    binary string as a sequence of ASCII characters and escaping those
+    bytes that cannot be represented as an ASCII character by a
+    special escape sequence.  If, from the point of view of the
+    application, representing bytes as characters makes sense, then
+    this representation can be convenient, but in practice it is
+    usually confusing because it fuzzes up the distinction between
+    binary strings and character strings, and the particular escape
+    mechanism that was chosen is also somewhat unwieldy.  So this
+    format should probably not be used for most new applications.
+   </para>
+
+   <para>
     When entering <type>bytea</type> values, octets of certain
     values <emphasis>must</emphasis> be escaped (but all octet
     values <emphasis>can</emphasis> be escaped) when used as part
@@ -1341,14 +1398,7 @@ SELECT b, char_length(b) FROM test2;
     have to escape line feeds and carriage returns if your interface
     automatically translates these.
    </para>
-
-   <para>
-    The <acronym>SQL</acronym> standard defines a different binary
-    string type, called <type>BLOB</type> or <type>BINARY LARGE
-    OBJECT</type>.  The input format is different from
-    <type>bytea</type>, but the provided functions and operators are
-    mostly the same.
-   </para>
+  </sect2>
 </sect1>
 
 
diff --git a/src/backend/utils/adt/encode.c b/src/backend/utils/adt/encode.c
index eed799a..b8a3cef 100644
--- a/src/backend/utils/adt/encode.c
+++ b/src/backend/utils/adt/encode.c
@@ -122,8 +122,8 @@ static const int8 hexlookup[128] = {
 	-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
 };
 
-static unsigned
-hex_encode(const char *src, unsigned len, char *dst)
+size_t
+hex_encode(const char *src, size_t len, char *dst)
 {
 	const char *end = src + len;
 
@@ -152,8 +152,8 @@ get_hex(char c)
 	return (char) res;
 }
 
-static unsigned
-hex_decode(const char *src, unsigned len, char *dst)
+size_t
+hex_decode(const char *src, size_t len, char *dst)
 {
 	const char *s,
 			   *srcend;
diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c
index 4cf3966..3c24686 100644
--- a/src/backend/utils/adt/varlena.c
+++ 

Re: [HACKERS] bytea vs. pg_dump

2009-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 Here is a first cut at a new hex bytea input and output format.  Example:
 ...
 SET bytea_output_hex = true;

 Should the configuration parameter be a boolean or an enum, opening 
 possibilities for other formats?

Enum.  If we do this then it seems entirely fair that someone might
want other settings someday.  Also, it seems silly to pick a format
partly on the grounds that it's expansible, and then not make the
control GUC expansible.  Perhaps

SET bytea_output = [ hex | traditional ]
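
Usage would then look something like this (a sketch, assuming the enum
form above is adopted):

SET bytea_output = hex;
SELECT E'\\xDeAdBeEf'::bytea;   -- would print \xdeadbeef
SET bytea_output = traditional; -- back to the escaped format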

regards, tom lane



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Alvaro Herrera
Tom Lane wrote:

 My own thought is that from_collapse_limit has more justification,
 since it basically acts to stop a subquery from being flattened when
 that would make the parent query too complex, and that seems like a
 more understandable and justifiable behavior than treating JOIN
 syntax specially.

Isn't that what we use OFFSET 0 for?  That one has also the nice
property that you can actually specify which subquery you want to
prevent from being flattened.
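
That is, the familiar trick of (tables hypothetical):

SELECT *
FROM (SELECT * FROM big_table WHERE flag OFFSET 0) AS ss
WHERE ss.id = 42;
-- the OFFSET 0 keeps the subquery from being pulled up into the parent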

Personally I have never seen a case where the collapse_limits were
useful tools.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] 8.4, One-Time Filter and subquery ( ... FROM function() union all ... )

2009-07-07 Thread Mark Mielke
I found Tom's response ambiguous - but positive either way, so it
gave me a smile. :-)

Which of the following two great things occurred?
1) Tom popped a quick fix on CVS HEAD? (Pretty fast!)
2) Tom or somebody else had already done it?

Cheers,
mark

On 07/07/2009 05:14 PM, Sergey Burladyan wrote:
> Tom Lane t...@sss.pgh.pa.us writes:
>> As of CVS HEAD you get
>>
>>                                       QUERY PLAN
>>  ------------------------------------------------------------------------------------
>>   Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=1)
>>     One-Time Filter: false
>>   Total runtime: 0.179 ms
>>  (3 rows)
>
> Thank you, Tom !

--
Mark Mielke  m...@mielke.cc



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
> Tom Lane wrote:
>> My own thought is that from_collapse_limit has more justification,
>> since it basically acts to stop a subquery from being flattened when
>> that would make the parent query too complex, and that seems like a
>> more understandable and justifiable behavior than treating JOIN
>> syntax specially.
>
> Isn't that what we use OFFSET 0 for?  That one has also the nice
> property that you can actually specify which subquery you want to
> prevent from being flattened.

Well, if you want to modify your queries to prevent long planning times,
that'd be one way to do it.  It doesn't seem like a generally useful
answer to me though.  For example, typically the subquery would actually
be a view that might be used in various contexts.  If you stick an
OFFSET in it then you disable flattening in all those contexts, likely
not the best answer.

 Personally I have never seen a case where the collapse_limits were
 useful tools.

I'm not convinced they're useful either.

regards, tom lane



Re: [HACKERS] 8.4, One-Time Filter and subquery ( ... FROM function() union all ... )

2009-07-07 Thread Tom Lane
Mark Mielke m...@mark.mielke.cc writes:
 Which of the following two great things occurred?
  1) Tom popped a quick fix on CVS HEAD? (Pretty fast!)
  2) Tom or somebody else had already done it?

http://archives.postgresql.org/pgsql-committers/2009-07/msg00067.php

regards, tom lane



Re: [HACKERS] WIP: Deferrable unique constraints

2009-07-07 Thread Jeff Davis
First, I'm happy that you're working on this; I think it's important. I
am working on another index constraints feature that may have some
interaction:

http://archives.postgresql.org/pgsql-hackers/2009-07/msg00302.php

Let me know if you see any potential conflicts between our work.

On Tue, 2009-07-07 at 19:38 +0100, Dean Rasheed wrote:
 For potential uniqueness violations a
 deferred trigger is queued to do a full check at the end of the
 statement or transaction, or when SET CONSTRAINTS is called. The
 trigger then replays the insert in a fake insert mode, which doesn't
 actually insert anything - it just checks that what is already there
 is unique, waiting for other transactions if necessary.

What does the deferred trigger do? Do you need a fake insert mode or
can you use an index search instead?

I'm thinking that you could just store the TID of the tuple that causes
the potential violation in your list. Then, when you recheck the list,
for each potential violation, find the tuple from the TID, do a search
using the appropriate attributes, and if you get multiple results
there's a conflict.
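
In SQL terms the recheck would amount to something like this for each
queued TID (column names hypothetical, and $1 standing for the stored
TID; the real implementation would use an index search in C, not SQL):

SELECT count(*)
FROM tab
WHERE (col1, col2) = (SELECT col1, col2 FROM tab WHERE ctid = $1);
-- a count greater than 1 means the deferred uniqueness check fails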

Regards,
Jeff Davis




Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Robert Haas

On Jul 7, 2009, at 4:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:

> Robert Haas robertmh...@gmail.com writes:
>> I guess my point is that there's not a lot of obvious benefit in
>> allowing the functionality to exist but handicapping it so that it's
>> useful in as few cases as possible.  If the consensus is that we want
>> half a feature (but not more or less than half), that's OK with me,
>> but it's not obvious to me why we should choose to want that.
>
> Well, the question to my mind is whether the collapse_threshold GUCs in
> their current form actually represent a feature ;-).  They were put
> in pretty much entirely on speculation that someone might find them
> useful.  Your argument is that they are not only useless but a foot-gun,
> and so far we haven't got any clear contrary evidence.  If we accept
> that argument then we should take them out, not just change the default.
>
> My own thought is that from_collapse_limit has more justification,
> since it basically acts to stop a subquery from being flattened when
> that would make the parent query too complex, and that seems like a
> more understandable and justifiable behavior than treating JOIN
> syntax specially.  But I'm fine with removing join_collapse_limit
> or reducing it to a boolean.

That's pretty much where I am with it, too.  The feature I was
referring to was not the collapse limits, but the ability to
explicitly specify the join order, which perhaps could be a useful
tool for reducing planning time or coping with bad estimates if you
could do it for only some of the joins in the query, but which we're
instead proposing to keep as an all-or-nothing flag.


...Robert



Re: [HACKERS] 8.4, One-Time Filter and subquery ( ... FROM function() union all ... )

2009-07-07 Thread Sergey Burladyan

Oh, now a problem with a simple query:

8.4.0 from Debian:
explain analyze select i from t where i >= 10 and i <= 1;
                                      QUERY PLAN
 ------------------------------------------------------------------------------------
  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
    One-Time Filter: false
  Total runtime: 0.030 ms

CVS HEAD:
explain analyze select i from t where i >= 10 and i <= 1;
                                           QUERY PLAN
 ---------------------------------------------------------------------------------------------------
  Seq Scan on t  (cost=0.00..17030.00 rows=1 width=4) (actual time=449.651..449.651 rows=0 loops=1)
    Filter: ((i >= 10) AND (i <= 1))
  Total runtime: 449.726 ms

-- 
Sergey Burladyan



Re: [HACKERS] *_collapse_limit, geqo_threshold

2009-07-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
> On Jul 7, 2009, at 4:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> My own thought is that from_collapse_limit has more justification,
>
> That's pretty much where I am with it, too.  The feature I was
> referring to was not the collapse limits, but the ability to
> explicitly specify the join order, which perhaps could be a useful
> tool for reducing planning time or coping with bad estimates if you
> could do it for only some of the joins in the query, but which we're
> instead proposing to keep as an all-or-nothing flag.

It's pretty much all-or-nothing now: the GUC does not give you any sort
of useful control over *which* joins are reorderable.
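
The only lever is global and syntactic, e.g. (tables hypothetical):

SET join_collapse_limit = 1;
-- now planned strictly as ((a JOIN b) JOIN c), exactly as written
SELECT * FROM a JOIN b ON a.id = b.id JOIN c ON b.id = c.id;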

regards, tom lane


