Re: [HACKERS] Fwd: [BUGS] fix: plpgsql: return query and dropped columns problem

2009-08-06 Thread Jaime Casanova
On Tue, Aug 4, 2009 at 4:30 AM, Pavel Stehule <pavel.steh...@gmail.com> wrote:
 forward patch to pg_hackers

 Here is the fixed patch. Please, Jaime, can you look at it?


this one passed regression tests and my personal test script (of
course it passed the script the last time too)... I'm doing a lot of
alter table [add|drop] column...

it seems it's good enough and implements Tom's suggestions... can
this be reviewed by a committer?

-- 
Sincerely,
Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] slow commits with heavy temp table usage in 8.4.0

2009-08-06 Thread Tom Lane
I wrote:
 As I said, my inclination for improving this area, if someone wanted
 to work on it, would be to find a way to do truncate-in-place on
 temp tables.  ISTM that in the case you're showing --- truncate that's
 not within a subtransaction, on a table that's drop-on-commit anyway
 --- we should not need to keep around the pre-truncation data.  So we
 could just do ftruncate instead of creating a new file, and we'd not
 need a new copy of the pg_class row either.  So that should make both
 the function time and the commit time a lot better.  But I'm not sure
 if the use-case is popular enough to deserve such a hack.

Actually, this is easier than I thought, because there is already
bookkeeping being done that (in effect) tracks whether a table has
already been truncated in the current transaction.  So we can rely
on that, and with only a very few lines of code added, ensure that
a situation like this does only one full-scale transaction-safe
truncation per transaction.  The attached prototype patch does this
and seems to fix the speed problem nicely.  It's not tremendously
well tested, but perhaps you'd like to test?  Should work in 8.4.
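
For reference, the kind of workload this targets looks roughly like
this (a sketch only; table and column names are illustrative):

    BEGIN;
    CREATE TEMP TABLE scratch (id int, payload text) ON COMMIT DROP;
    INSERT INTO scratch SELECT g, 'x' FROM generate_series(1, 10000) g;
    TRUNCATE scratch;   -- first truncation: full transaction-safe path
    INSERT INTO scratch SELECT g, 'y' FROM generate_series(1, 10000) g;
    TRUNCATE scratch;   -- later truncations in the same xact stay cheap
    COMMIT;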

regards, tom lane

Index: src/backend/catalog/heap.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/catalog/heap.c,v
retrieving revision 1.357
diff -c -r1.357 heap.c
*** src/backend/catalog/heap.c  2 Aug 2009 22:14:52 -   1.357
--- src/backend/catalog/heap.c  6 Aug 2009 06:15:05 -
***************
*** 2342,2359 
{
Oid rid = lfirst_oid(cell);
Relation    rel;
-   Oid toastrelid;
  
rel = heap_open(rid, AccessExclusiveLock);
relations = lappend(relations, rel);
- 
-   /* If there is a toast table, add it to the list too */
-   toastrelid = rel->rd_rel->reltoastrelid;
-   if (OidIsValid(toastrelid))
-   {
-   rel = heap_open(toastrelid, AccessExclusiveLock);
-   relations = lappend(relations, rel);
-   }
}
  
/* Don't allow truncate on tables that are referenced by foreign keys */
--- 2342,2350 
***************
*** 2364,2383 
{
Relation    rel = lfirst(cell);
  
!   /* Truncate the actual file (and discard buffers) */
!   RelationTruncate(rel, 0);
  
!   /* If this relation has indexes, truncate the indexes too */
!   RelationTruncateIndexes(rel);
! 
!   /*
!    * Close the relation, but keep exclusive lock on it until commit.
!    */
heap_close(rel, NoLock);
}
  }
  
  /*
   * heap_truncate_check_FKs
   *Check for foreign keys referencing a list of relations that
   *are to be truncated, and raise error if there are any
--- 2355,2402 
{
Relation    rel = lfirst(cell);
  
!   /* Truncate the relation */
!   heap_truncate_one_rel(rel);
  
!   /* Close the relation, but keep exclusive lock on it until commit */
heap_close(rel, NoLock);
}
  }
  
  /*
+  * heap_truncate_one_rel
+  *
+  * This routine deletes all data within the specified relation.
+  *
+  * This is not transaction-safe, because the truncation is done immediately
+  * and cannot be rolled back later.  Caller is responsible for having
+  * checked permissions etc, and must have obtained AccessExclusiveLock.
+  */
+ void
+ heap_truncate_one_rel(Relation rel)
+ {
+   Oid toastrelid;
+ 
+   /* Truncate the actual file (and discard buffers) */
+   RelationTruncate(rel, 0);
+ 
+   /* If the relation has indexes, truncate the indexes too */
+   RelationTruncateIndexes(rel);
+ 
+   /* If there is a toast table, truncate that too */
+   toastrelid = rel->rd_rel->reltoastrelid;
+   if (OidIsValid(toastrelid))
+   {
+   Relation    toastrel = heap_open(toastrelid, AccessExclusiveLock);
+ 
+   RelationTruncate(toastrel, 0);
+   RelationTruncateIndexes(toastrel);
+   /* keep the lock... */
+   heap_close(toastrel, NoLock);
+   }
+ }
+ 
+ /*
   * heap_truncate_check_FKs
   *Check for foreign keys referencing a list of relations that
   *are to be truncated, and raise error if there are any
Index: src/backend/commands/tablecmds.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.295
diff -c -r1.295 tablecmds.c
*** src/backend/commands/tablecmds.c2 Aug 2009 22:14:52 -   1.295
--- src/backend/commands/tablecmds.c6 Aug 2009 06:15:06 -
***
*** 775,780 

Re: [HACKERS] 8.4 win32 shared memory patch

2009-08-06 Thread Magnus Hagander
On Tue, Aug 4, 2009 at 19:13, Kevin Field <kevinjamesfi...@gmail.com> wrote:
 On Sat, Aug 1, 2009 at 20:30, Kevin Field <kevinjamesfi...@gmail.com>
 wrote:
   The event viewer says:
  
   The description for Event ID ( 0 ) in Source ( PostgreSQL ) cannot be
   found. The local computer may not have the necessary registry
   information or message DLL files to display messages from a remote
   computer. You may be able to use the /AUXSOURCE= flag to retrieve this
   description; see Help and Support for details. The following
   information is part of the event: pg_ctl: could not find postgres
   program executable
  
   And yes, I renamed it correctly...
 
  Check permissions on it. If you moved it at some point, it may have
  the wrong permissions. They should be the same as for the other .EXEs
  in that directory.
 
  The two files (new and old exe) have identical permissions.

 That's just weird. It could be that the postgres executable won't work
 - maybe because of some DLL issue. Can you run postgres -V on the
 executable, or does that give you some error?

 It reports the version correctly.  Sorry...any other ideas?

For the archives: I worked off-list with Kevin and we noticed that the
issue was that the *old* version was 8.0rc1, thus pg_ctl was looking
for 8.0rc1. Upgrading to 8.0 first and then applying the new
postgres.exe binary appears to have worked.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] 8.4 win32 shared memory patch

2009-08-06 Thread Magnus Hagander
On Thu, Aug 6, 2009 at 09:25, Magnus Hagander <mag...@hagander.net> wrote:
 On Tue, Aug 4, 2009 at 19:13, Kevin Field <kevinjamesfi...@gmail.com> wrote:
 On Sat, Aug 1, 2009 at 20:30, Kevin Field <kevinjamesfi...@gmail.com>
 wrote:
   The event viewer says:
  
   The description for Event ID ( 0 ) in Source ( PostgreSQL ) cannot be
   found. The local computer may not have the necessary registry
   information or message DLL files to display messages from a remote
   computer. You may be able to use the /AUXSOURCE= flag to retrieve this
   description; see Help and Support for details. The following
   information is part of the event: pg_ctl: could not find postgres
   program executable
  
   And yes, I renamed it correctly...
 
  Check permissions on it. If you moved it at some point, it may have
  the wrong permissions. They should be the same as for the other .EXEs
  in that directory.
 
  The two files (new and old exe) have identical permissions.

 That's just weird. It could be that the postgres executable won't work
 - maybe because of some DLL issue. Can you run postgres -V on the
 executable, or does that give you some error?

 It reports the version correctly.  Sorry...any other ideas?

 For the archives: I worked off-list with Kevin and we noticed that the
 issue was that the *old* version was 8.0rc1, thus pg_ctl was looking
 for 8.0rc1. Upgrading to 8.0 first and then applying the new
 postgres.exe binary appears to have worked.

Clearly I'm not awake yet. That's obviously 8.4rc1 and 8.4.0.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Dimitri Fontaine
Tom Lane <t...@sss.pgh.pa.us> writes:

 Andrew Dunstan <and...@dunslane.net> writes:
 preventing a clash might be fairly difficult.

 Yeah, I was just thinking about that.  The easiest way to avoid
 collisions would be to make pg_dump (in --binary-upgrade mode)
 responsible for being sure that *every* new pg_type and pg_class row
 OID matches what it was in the old DB. 

As we already have WITH OIDS for CREATE TABLE command, maybe adding
support for WITH OID ... to the necessary commands would do the trick?

Instead of messing with pg_type, pg_dump would then have to issue an OID
'decorated' command such as
  CREATE TYPE footype ... WITH OID 27604;

 We could stop doing that
 once we have all the user tables in place --- I don't believe it's
 necessary to preserve the OIDs of user indexes.  But we need to
 preserve toast table OIDs, and toast table index OIDs too if those
 are created at the same time they are now (else we risk one of them
 colliding with a toast table OID we want to create later).

It seems harder to come up with a general purpose syntax to support the
feature in case of toast tables, though.

Regards,
-- 
dim



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Boszormenyi Zoltan
Tom Lane wrote:
 At the moment it looks to me like pg_migrator has crashed and burned
 for 8.4, at least for general-purpose usage.

It means that you don't have the constraint
you thought you had. So you can change the
Red Hat/Fedora PostgreSQL 8.4 packages to use
the upstream default of integer timestamps...

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [HACKERS] the case for machine-readable error fields

2009-08-06 Thread Sam Mason
On Wed, Aug 05, 2009 at 08:57:14PM +0200, Pavel Stehule wrote:
 2009/8/5 Tom Lane <t...@sss.pgh.pa.us>:
  Peter pointed out upthread that the SQL standard already calls out some
  things that should be available in this way --- has anyone studied that
  yet?
 
 yes - it's part of the GET DIAGNOSTICS statement
 
 http://savage.net.au/SQL/sql-2003-2.bnf.html#condition%20information%20item%20name

Just out of interest, how is this supposed to be used?  Also, how many
other SQL statements can be run when a transaction has been aborted?  I
would've thought that only COMMIT or ROLLBACK (and their synonyms) make
sense and GET DIAGNOSTICS seems wrong for this purpose.

I (and most code I've seen) normally structure client calls off to the
database as follows:

  db.execute("BEGIN;
    INSERT INTO foo (a,b) VALUES ($1,$2);
    INSERT INTO bar (c,d) VALUES ($3,$4);
    SELECT frub($5,$6);
    COMMIT;", a,b,c,d,e,f);

Where would a call to GET DIAGNOSTICS sensibly go?  Or is it defined
to return information about the last executed transaction?  I can't find
much in the above page or in anything Google gives back about it.

Supporting it is fine from a standards point of view; from a calling
code's correctness point of view it seems much better to send the info
back at a protocol level.

-- 
  Sam  http://samason.me.uk/



[HACKERS] Table and Index compression

2009-08-06 Thread PFC



With the talk about adding compression to pg_dump lately, I've been  
wondering if tables and indexes could be compressed too.

So I've implemented a quick on-the-fly compression patch for postgres

Sorry for the long email, but I hope you find this interesting.

Why compress?

1- To save disk space?
Disks are so cheap now that it is not a valid reason.
Besides, anyone with a database that is big enough to fill a modern  
harddisk probably also has a monster RAID to get more IO throughput.


2- To make the database faster?
This would be a valid reason.

If the database is made smaller through compression, this can have several  
interesting effects:
- you need less RAM to cache it entirely, so random IO hits more cache  
than before
- if the CPU can decompress faster than the disk IO, even seq scans can be  
faster (this will not be the case on monster RAID setups, but for cheap  
servers, it could be)


So, why not? I coded it.

I've benchmarked lzo, lzf, quicklz, lzjb, and fastLZ.
The best for this is lzo: very fast decompression, a good compression  
ratio on a sample of postgres tables and indexes, and a license that could  
work.

QuickLZ compresses faster and more, but is not free.

Compression would compress each page independently, so it is fast to  
decompress a single page.
This means 8k pages are a bit small, so I used 32k pages to get better  
compression.


I've checked various table and index files.
- your average table with many columns, some TEXT, etc, compresses about 2x
- when the row header is significant vs the row data, it can reach 3x
- indexes compress well too, 2-4x
- gist indexes could be compressed up to 6x sometimes

Roughly, 2-3x compression is achievable on a complete database.



Implementation

This lives in md.c
The implementation is about 100 lines of code (!)

Instead of calling FileWrite and FileRead, md.c calls special functions  
that read and write compressed blocks.


* Writing:

The block is compressed in a temp memory buffer.
A header (prepended to the compressed data) tells the length of this data.
Then, it is written where it belongs in the disk file.

* Reading:

The first 4k of compressed data is read.
Looking at the length header, we know how much more to read.
The rest of the data (if any) is read.
The data is decompressed.

Since a compressed block can be larger than the original block, I have  
enlarged the block size in the files by 4k, so that there is a block every  
40k instead of every 32k (with 32k postgres pages).


That's it, very simple.



Now, the reason it works is that the underlying file is handled as sparse  
by the OS.
The holes between compressed blocks are removed: not recorded on disk,  
and never cached either.


However, sparse files can have big performance problems if you do this:

- write a small block in the middle of the file, surrounded by holes
- later enlarge this block

When the block is enlarged, if it needs an extra filesystem page, it will  
not be allocated contiguously.

When it is read later, it will need an extra seek, which is really bad.

So, looking at the compression statistics:

gist index for geometric coordinates search:
a 32k page is compressed to 1-2 4k-pages, very rarely 3 pages.

btree indexes:
a 32k page is compressed to 2-3 4k-pages

large table:
a 32k page is compressed to 2-4 4k-pages

Therefore, on write, I pre-allocate some space in the sparse file, by  
writing more than needed: currently I write 5 4k-blocks.
Whatever is written after the compressed data is just whatever garbage was  
previously in the buffer; it is ignored on reads.


This means the disk space savings are less than a full compression, but  
access is much smoother, in fact much like a regular non-sparse file,  
since the blocks between the holes almost never need to be grown.


Without pre-allocating, performance is abysmal, not even worth talking  
about.


Pre-allocated but not actually used blocks are never read, except maybe by  
OS readahead during seq scan.
On a heavy random access database they will not be touched, not wasting  
any space in the OS cache.




shared_buffers thus contains decompressed blocks: a row that is updated  
very often will not go through decompression-compression cycles each time.

The OS cache contains compressed data.

Some tests:

It appears to behave as expected. It didn't crash (yet...).

Basically it looks like RAM has doubled and CPU speed is halved.

Random access queries are faster, even on a cold cache, and of course,  
much better cached afterwards, since the amount of data the OS cache can  
hold is at least doubled.


Seq scans on a huge table, reading data from disk, are a tiny bit slower,  
which is strange: on my test box, the disks are slow (50 MB/s) and lzo  
can decompress much faster than this. At least it isn't slower.


Seq scans on a cached table that would fit in RAM anyway are slower,  
because it needs to be decompressed.



Re: [HACKERS] the case for machine-readable error fields

2009-08-06 Thread Pavel Stehule
2009/8/6 Sam Mason <s...@samason.me.uk>:
 On Wed, Aug 05, 2009 at 08:57:14PM +0200, Pavel Stehule wrote:
 2009/8/5 Tom Lane <t...@sss.pgh.pa.us>:
  Peter pointed out upthread that the SQL standard already calls out some
  things that should be available in this way --- has anyone studied that
  yet?

 yes - it's part of the GET DIAGNOSTICS statement

 http://savage.net.au/SQL/sql-2003-2.bnf.html#condition%20information%20item%20name

 Just out of interest, how is this supposed to be used?  Also, how many
 other SQL statements can be run when a transaction has been aborted?  I
 would've thought that only COMMIT or ROLLBACK (and their synonyms) make
 sense and GET DIAGNOSTICS seems wrong for this purpose.

 I (and most code I've seen) normally structure client calls off to the
 database as follows:

  db.execute("BEGIN;
    INSERT INTO foo (a,b) VALUES ($1,$2);
    INSERT INTO bar (c,d) VALUES ($3,$4);
    SELECT frub($5,$6);
    COMMIT;", a,b,c,d,e,f);

 Where would a call to GET DIAGNOSTICS sensibly go?  Or is it defined
 to return information about the last executed transaction?  I can't find
 much in the above page or in anything Google gives back about it.

 Supporting it is fine from a standards point of view; from a calling
 code's correctness point of view it seems much better to send the info
 back at a protocol level.

typically in SQL/PSM (stored procedures - look at the GET DIAGNOSTICS
statement in the plpgsql docs), maybe in ecpg. Other environments raise
an exception - so you can get some data from the exception or from special
structures related to the environment - php, ruby, .NET etc
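
For example, a minimal plpgsql sketch (the table and column names are
illustrative):

    CREATE OR REPLACE FUNCTION update_and_count() RETURNS integer AS $$
    DECLARE
        rc integer;
    BEGIN
        UPDATE t SET flag = true WHERE flag = false;
        GET DIAGNOSTICS rc = ROW_COUNT;  -- rows touched by the last statement
        RETURN rc;
    END;
    $$ LANGUAGE plpgsql;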

Pavel

 --
  Sam  http://samason.me.uk/





Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Peter Eisentraut
On Thursday 06 August 2009 06:32:06 Bruce Momjian wrote:
 I have applied the attached patch to pg_migrator to detect enum,
 composites, and arrays.  I tested it and the only error I got was with
 the breakmigrator table that was supplied by Jeff, and once I removed
 that table the migration went fine, meaning there are no cases of these
 stored in the regression test database.

That might be a bit excessive.  As I understand it, arrays of built-in types 
(e.g., int[]) should work fine.  I suspect the majority of uses of arrays will 
be with built-in types, so allowing that would help a significant portion of 
installations.



Re: [HACKERS] Shipping documentation untarred

2009-08-06 Thread Peter Eisentraut
On Wednesday 05 August 2009 16:13:48 Magnus Hagander wrote:
 Just to verify, there is not going to be any changes in the actual
 format of the generated files, right?

Correct.



Re: [HACKERS] Adding error message source

2009-08-06 Thread Peter Eisentraut
On Wednesday 05 August 2009 17:45:46 Pavel Stehule wrote:
 SQLCODE could carry enough information about user or system exception.
 There are reserved space for custom codes. Maybe for administration
 should be interesting, if error is system error or application error -
 but this should be described by SQLCODE well too.

Yes, for exactly this reason, we just added SQLSTATE support to 
log_line_prefix.
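
(For reference, that is the %e escape; e.g. log_line_prefix = '%t [%p] %e '.)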



Re: [HACKERS] Re: [COMMITTERS] pgsql: Reserve the shared memory region during backend startup on

2009-08-06 Thread Magnus Hagander
On Wed, Aug 5, 2009 at 16:53, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Magnus Hagander <mag...@hagander.net> writes:
 But. I'll look into cleaning those up for HEAD anyway, but due to lack
 of reports I think we should skip backpatch. Reasonable?

 Fair enough.

 Here's what I came up with. Seems ok?

 Works for me.

Applied.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Adding error message source

2009-08-06 Thread Magnus Hagander
On Wed, Aug 5, 2009 at 16:11, Heikki
Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
 Magnus Hagander wrote:
 As for the source, I think we'd just decorate the error messages
 with errsource(ERRSOURCE_USER) or something like that at places where
 needed, and have it default to internal - so we don't have to touch
 each and every error message in the backend.

 Are you suggesting that all messages would have source of internal,
 until we get around to change them? That doesn't seem nice. There is a
 *lot* of messages that are not internal. I think we should classify all
 messages correctly, or not at all.

No, but under the idea I had most *would* be internal. Not until we
get around to change them, but always.


 Could we deduce the category through some other means? Messages related
 to bgwriter or archiver, for example, would be differentiate by looking
 at what the current process is.

Yes.


 Differentiating between "write failed because disk is full" and "syntax
 error because you typoed a query" would be harder. Maybe we could
 classify all messages coming from md.c and smgr.c into a storage
 category, but you'd likely need a lot of exceptions to that rule.

That's exactly the one I want to differentiate between.


 Would you like to propose a concrete list sources that we would have?
 The implementation effort depends a lot on the categorization.

Well, the only one I have a direct use case for is this: I want logs
that are created because of typos in a psql client, or because an
application does bad things (sends broken queries, bad escaping, NULL
in a not-NULL field, etc.) not to clutter up the log when I look for
information about checkpoints and log archiving.

I don't really have a case for making it more fine-grained like that,
but I figured someone else might have :-)


Having SQLSTATE in log_line_prefix is certainly a good thing, but I
don't see how I can easily use that to filter for this. Perhaps I'm
wrong at that? Maybe I can do something with a very long chain of
different greps on the different classes, but that seems to almost
guarantee that I'll miss something... And we seem to set for example
ERRCODE_WARNING as a default for everything that's a warning and
doesn't specify an explicit SQLSTATE - I bet that includes both
warnings that are caused by the client and that are caused by the
system.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Alvaro Herrera
Dimitri Fontaine wrote:
 Tom Lane <t...@sss.pgh.pa.us> writes:

  We could stop doing that
  once we have all the user tables in place --- I don't believe it's
  necessary to preserve the OIDs of user indexes.  But we need to
  preserve toast table OIDs, and toast table index OIDs too if those
  are created at the same time they are now (else we risk one of them
  colliding with a toast table OID we want to create later).
 
 It seems harder to come up with a general purpose syntax to support the
 feature in case of toast tables, though.

There's already general purpose syntax for relation options which can be
used to get options that do not ultimately end up in
pg_class.reloptions.  An existing example is WITH (oids).  One such
option could be used here.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] problem with splitting a string

2009-08-06 Thread Werner Echezuria


 What use is there for fuzzy predicates?  I think it would mainly be to
 stop more students from coming up with new implementations of the same
 thing over and over.


Well, I'm sorry if none of us involved in these projects has already
explained the true usefulness of SQLf and fuzzy databases; I guess we
focused just on the technical problem and never explained the theory.

For example, here is a paragraph from the paper "Flexible queries in
relational databases":

   This paper deals with this second type of uncertainty and is concerned
essentially with database language extensions in order to deal with more
expressive requirements. Indeed, consider a query such that, for instance,
"retrieve the apartments which are not too expensive and not too far from
downtown". In such a case, there does not exist a definite threshold for
which the price becomes suddenly too high, but rather we have to
discriminate between prices which are perfectly acceptable for the user,
and other prices, somewhat higher, which are still more or less acceptable
(especially if the apartment is close to downtown). Note that the meaning
of vague predicate expressions like "not too expensive" is context/user
dependent, rather than universal. Fuzzy set membership functions [26] are
convenient tools for modelling user's preference profiles, and the large
panoply of fuzzy set connectives can capture the different user attitudes
concerning the way the different criteria present in his/her query
compensate or not; see [4] for a unified presentation in the fuzzy set
framework of the existing proposals for handling flexible queries.
Moreover, in a given query, some part of the request may be less important
to fulfill (e.g., in the above example, the price requirement may be
judged more important than the distance to downtown); the handling of
importance leads to the need for weighted connectives, as it will be seen
in the following.


I really think this could be something useful, but it is sometimes difficult
to implement, and I'm trying to find a different and easier way to do things.

regards


Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migratedtables

2009-08-06 Thread Kevin Grittner
Andrew Dunstan <and...@dunslane.net> wrote:
 
 Excluding every database that has a composite/array-of 
 user-defined-type/enum type would be pretty nasty. After all, these
 are features we boast of.
 
Any idea whether domains are an issue?  I was thinking of trying this
tool soon, and we don't seem to be using any of the problem features
-- unless type issues include domains.
 
-Kevin



Re: [HACKERS] Adding error message source

2009-08-06 Thread Tom Lane
Magnus Hagander <mag...@hagander.net> writes:
 On Wed, Aug 5, 2009 at 16:11, Heikki
 Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
 Would you like to propose a concrete list sources that we would have?
 The implementation effort depends a lot on the categorization.

 Well, the only one I have a direct use case for is this: I want logs
 that are created because of typos in a psql client, or because an
 application does bad things (sends broken queries, bad escaping, NULL
 in a not-NULL field, etc.) not to clutter up the log when I look for
 information about checkpoints and log archiving.

Well, it seems like you could get 90% of the way there just by filtering
on the PID --- watching the bgwriter, walwriter, and archiver should
cover this use-case reasonably well.

regards, tom lane



Re: [HACKERS] Adding error message source

2009-08-06 Thread Magnus Hagander
On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Magnus Hagander <mag...@hagander.net> writes:
 On Wed, Aug 5, 2009 at 16:11, Heikki
 Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
 Would you like to propose a concrete list sources that we would have?
 The implementation effort depends a lot on the categorization.

 Well, the only one I have a direct use case for is this: I want logs
 that are created because of typos in a psql client, or because an
 application does bad things (sends broken queries, bad escaping, NULL
 in a not-NULL field, etc.) not to clutter up the log when I look for
 information about checkpoints and log archiving.

 Well, it seems like you could get 90% of the way there just by filtering
 on the PID --- watching the bgwriter, walwriter, and archiver should
 cover this use-case reasonably well.

Right. But that's pretty hard to automate, since they will get a
new pid whenever the database is restarted. Which is hopefully not
very often, but it's still an issue. Plus, it's hard to do any kind of
historical look at things.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Filtering dictionaries support and unaccent dictionary

2009-08-06 Thread Teodor Sigaev

 Isn't that function leaking res pointer?

fixed

 Also, I'm curious why you're allocating 2*sizeof(TSLexeme) in
 unaccent_lexize ...

That is part of the dictionary interface: lexize returns an array of
TSLexeme, and the last structure should have a NULL lexeme field.



The filter_dictionary file is not changed; it's attached only for consistency.
--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/


unaccent-0.6.gz
Description: Unix tar archive


filter_dictionary-0.1.gz
Description: Unix tar archive



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Tom Lane
Alvaro Herrera <alvhe...@commandprompt.com> writes:
 Dimitri Fontaine wrote:
 It seems harder to come up with a general purpose syntax to support the
 feature in case of toast tables, though.

 There's already general purpose syntax for relation options which can be
 used to get options that do not ultimately end up in
 pg_class.reloptions.  An existing example is WITH (oids).  One such
 option could be used here.

That would cover the problem for OIDs needed during CREATE TABLE, but
what about types and enum values?

The half-formed idea I had was a set of GUC variables:

set next_pg_class_oid = 12345;
set next_pg_type_oid = 12346;
set next_toast_table_oid = ...
set next_toast_index_oid = ...

and finally it could do CREATE TABLE.  CREATE TYPE would only need
next_pg_type_oid (except for a composite type).

Enum values wouldn't work too well this way, unless we were willing to
have a GUC that took a list of OIDs.  I thought about having binary
upgrade mode build up the enum list one entry at a time, by adding
a command like

ALTER TYPE enum_type ADD VALUE 'label' WITH OID oid

which would also have some use for modifying enums on the fly.

regards, tom lane



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migratedtables

2009-08-06 Thread Andrew Dunstan



Kevin Grittner wrote:
 Andrew Dunstan <and...@dunslane.net> wrote:
  Excluding every database that has a composite/array-of
  user-defined-type/enum type would be pretty nasty. After all, these
  are features we boast of.

 Any idea whether domains are an issue?  I was thinking of trying this
 tool soon, and we don't seem to be using any of the problem features
 -- unless type issues include domains.

I don't believe that they are an issue. The issue arises only when a 
catalog oid is used in the on-disk representation of a type. AFAIK the 
on-disk representation of a domain is the same as its base type.


cheers

andrew



Re: [HACKERS] Adding error message source

2009-08-06 Thread Tom Lane
Magnus Hagander <mag...@hagander.net> writes:
 On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Well, it seems like you could get 90% of the way there just by filtering
 on the PID --- watching the bgwriter, walwriter, and archiver should
 cover this use-case reasonably well.

 Right. But that's pretty hard to automate, since they will get a
 new pid whenever the database is restarted. Which is hopefully not
 very often, but it's still an issue. Plus, it's hard to do any kind of
 historical look at things.

I don't think there'd be much logical difficulty in having an output
field (ie, CSV column or log_line_prefix escape) that represents a
classification of the PID, say as postmaster, backend, AV worker,
AV launcher, bgwriter, ...  It would only require changing things
in one place, whereas your original proposal seemed mighty open-ended.

regards, tom lane



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migratedtables

2009-08-06 Thread Tom Lane
Andrew Dunstan <and...@dunslane.net> writes:
 Kevin Grittner wrote:
 Any idea whether domains are an issue?

 I don't believe that they are an issue. The issue arises only when a 
 catalog oid is used in the on-disk representation of a type. AFAIK the 
 on-disk representation of a domain is the same as its base type.

Arrays of domains would be a problem, if we had 'em, which we don't...

Also, as Peter already noted, arrays of built-in types are not really a
problem because the OID won't have changed since 8.3.  It's only arrays
of types created post-initdb that are risk factors.
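
For illustration (a sketch only; the element type's OID is stored in the
array's on-disk header, so it must stay the same across the upgrade):

    CREATE TYPE pt AS (x int, y int);   -- gets a post-initdb OID
    CREATE TABLE t1 (ps pt[]);          -- a risk factor
    CREATE TABLE t2 (xs int[]);         -- int's OID is fixed: fine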

regards, tom lane



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Andrew Dunstan



Tom Lane wrote:
 Alvaro Herrera <alvhe...@commandprompt.com> writes:
  Dimitri Fontaine wrote:
   It seems harder to come up with a general purpose syntax to support the
   feature in case of toast tables, though.

  There's already general purpose syntax for relation options which can be
  used to get options that do not ultimately end up in
  pg_class.reloptions.  An existing example is WITH (oids).  One such
  option could be used here.

 That would cover the problem for OIDs needed during CREATE TABLE, but
 what about types and enum values?

 The half-formed idea I had was a set of GUC variables:

 set next_pg_class_oid = 12345;
 set next_pg_type_oid = 12346;
 set next_toast_table_oid = ...
 set next_toast_index_oid = ...

 and finally it could do CREATE TABLE.  CREATE TYPE would only need
 next_pg_type_oid (except for a composite type).

 Enum values wouldn't work too well this way, unless we were willing to
 have a GUC that took a list of OIDs.  I thought about having binary
 upgrade mode build up the enum list one entry at a time, by adding
 a command like

 ALTER TYPE enum_type ADD VALUE 'label' WITH OID oid

 which would also have some use for modifying enums on the fly.



It's going to be fairly grotty whatever we do. I'm worried a bit that 
we'll be providing some footguns, but I guess we'll just need to hold 
our noses and do whatever it takes.


cheers

andrew



Re: [HACKERS] Adding error message source

2009-08-06 Thread Magnus Hagander
On Thu, Aug 6, 2009 at 16:33, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Magnus Hagander <mag...@hagander.net> writes:
 On Thu, Aug 6, 2009 at 16:20, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Well, it seems like you could get 90% of the way there just by filtering
 on the PID --- watching the bgwriter, walwriter, and archiver should
 cover this use-case reasonably well.

 Right. But that's pretty hard to automate, since they will get a
 new pid whenever the database is restarted. Which is hopefully not
 very often, but it's still an issue. Plus, it's hard to do any kind of
 historical look at things.

 I don't think there'd be much logical difficulty in having an output
 field (ie, CSV column or log_line_prefix escape) that represents a
 classification of the PID, say as postmaster, backend, AV worker,
 AV launcher, bgwriter, ...  It would only require changing things
 in one place, whereas your original proposal seemed mighty open-ended.

Good point. That *would* probably take care of much of the need. The
downside is that aggressive filtering that way would get rid of important
messages coming out of a single backend, like "out of disk space".
Meaning it would still not be possible to filter on the difference
between "ERROR: syntax error in query" and "ERROR: out of disk space"? But
it'd be an improvement still.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Tom Lane
Andrew Dunstan <and...@dunslane.net> writes:
 It's going to be fairly grotty whatever we do. I'm worried a bit that 
 we'll be providing some footguns, but I guess we'll just need to hold 
 our noses and do whatever it takes.

Yeah.  One advantage of the GUC approach is we could make 'em SUSET.
I don't actually see any particularly serious risk of abuse there
(about all you could do is make your CREATEs fail) ... but why not
be careful ...

regards, tom lane



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Merlin Moncure
On Thu, Aug 6, 2009 at 4:32 AM, Dimitri Fontaine <dfonta...@hi-media.com> wrote:
 Tom Lane <t...@sss.pgh.pa.us> writes:

 Andrew Dunstan <and...@dunslane.net> writes:
 preventing a clash might be fairly difficult.

 Yeah, I was just thinking about that.  The easiest way to avoid
 collisions would be to make pg_dump (in --binary-upgrade mode)
 responsible for being sure that *every* new pg_type and pg_class row
 OID matches what it was in the old DB.

 As we already have WITH OIDS for CREATE TABLE command, maybe adding
 support for WITH OID ... to the necessary commands would do the trick?

 Instead of messing with pg_type, pg_dump would then have to issue a OID
 'decorated' command such as
  CREATE TYPE footype ... WITH OID 27604;

Unfortunately it's not enough to just do this with 'create type' and
'create type as'; we also have to do it with 'create table'.  Some
people (like me) use tables as composite types because of the extra
flexibility it gives you.  So, potentially, OIDs for enums, tables,
and types need to be preserved.
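
For example (a minimal sketch; names are illustrative):

    CREATE TABLE pair (a int, b text);     -- rowtype doubles as a composite type
    CREATE TABLE holder (id int, p pair);  -- the table used as a column type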

I am very much in support for any system that allows creation of a
type with a specific OID.  This is not just a problem with the
migrator, but will allow for more robust transfers of data over the
binary protocol (think binary dblink) without resorting to hacks to
that do lookups based on typename.

IOW, setting specific OIDs should ideally be exposed at the SQL
level and should be possible for any type that can be part of a
container.

merlin



Re: [HACKERS] Adding error message source

2009-08-06 Thread Tom Lane
Magnus Hagander <mag...@hagander.net> writes:
 On Thu, Aug 6, 2009 at 16:33, Tom Lane <t...@sss.pgh.pa.us> wrote:
 I don't think there'd be much logical difficulty in having an output
 field (ie, CSV column or log_line_prefix escape) that represents a
 classification of the PID, say as postmaster, backend, AV worker,
  AV launcher, bgwriter, ...  It would only require changing things
 in one place, whereas your original proposal seemed mighty open-ended.

 Good point. That *would* probably take care of much of the need. The
 downside is that aggressive filtering that way would get rid of important
 messages coming out of a single backend, like "out of disk space".
 Meaning it would still not be possible to filter on the difference
 between "ERROR: syntax error in query" and "ERROR: out of disk space"? But
 it'd be an improvement still.

Hmm, well, what about filtering on the SQLSTATE?  You could do that
today.  A category-level filter would probably serve pretty well.
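
(The category is the first two characters of the SQLSTATE; e.g. class 42
covers syntax errors and access rule violations, while class 53 covers
insufficient resources such as disk full.)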

regards, tom lane



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread David E. Wheeler

On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:


That would cover the problem for OIDs needed during CREATE TABLE, but
what about types and enum values?


I haven't been following this discussion very closely, but wanted to  
ask: is someone writing regression tests for these cases that  
pg_migrator keeps bumping into?


Best,

David



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Peter Eisentraut
On Thursday 06 August 2009 17:54:37 David E. Wheeler wrote:
 On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:
  That would cover the problem for OIDs needed during CREATE TABLE, but
  what about types and enum values?

 I haven't been following this discussion very closely, but wanted to
 ask: is someone writing regression tests for these cases that
 pg_migrator keeps bumping into?

Well, pg_migrator has no included test suite.  There you go.



[HACKERS] dblink bulk operations

2009-08-06 Thread Andrew Dunstan


Last night I needed to move a bunch of data from an OLTP database to an 
archive database, and used dblink with a bunch of insert statements. 
Since I was moving about 4m records this was distressingly but not 
surprisingly slow. It set me wondering why we don't build more support 
for libpq operations into dblink, like transactions and prepared 
queries, and maybe COPY too. It would be nice to be able to do something 
like:


   select dblink_connect('dbh','dbname=foo');
   select dblink_begin('dbh');
   select dblink_prepare('dbh','sth','insert into bar values ($1,$2,$3)');
   select dblink_exec_prepared('dbh','sth',row(a,b,c)) from bar; -- can
   we do this?
   select dblink_commit('dbh');
   select dblink_disconnect('dbh');


Does this seem worthwhile and doable, or am I smoking crack?

cheers

andrew



Re: [HACKERS] GRANT ON ALL IN schema

2009-08-06 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
 Tom Lane wrote:
 I'm not sure whether there is consensus on not using GRANT ON VIEW
 (ie, having these patches treat tables and views alike).  I was waiting
 to see if Stephen would put forward a convincing counterargument ...

 Conceptually it is right, I think. A view is a virtual table, so the  
 counter-argument would need to be pretty good ISTM.

With regard to DefaultACL-

I don't like just masking out the bits for views at create view time.
Right now, a user can 'GRANT INSERT ON view TO role;' and it'll
actually store insert privs for that view and use them for ON INSERT DO
INSTEAD type of work.  If we're going to treat them as virtual tables,
then we should do that and include all the same permissions that tables
get for views.  Additionally, this will make it less of a surprise if we
support updatable views at some point in the future (we wouldn't have
to deal with possibly changing the default acl mask).

Personally, I find that I want different controls on views in general.
This may stem from my compulsive need for a 'clean' system where I don't
want permissions granted on objects that can't support them (eg: views
which don't have ON INSERT DO INSTEAD rules).  As for changing the
default ACL syntax to not be based around SCHEMA- I'm concerned that
we'll then have to define some kind of ordering preference if we get
away from the defaults being associated with the container object.  If
we have defaults for users and schemas, which takes precedence?  I don't
like the idea of trying to merge them.  I'm also not really a fan of
having the defaults be based on pattern-matching to a relation name,
that's just creating another namespace headache, imv.

For my needs, the syntax is not of great importance, I'll use what I
have to.  If ALTER DEFAULT PERMISSIONS is the consensus, then I'd rather
at least have it than not have anything.

With regard to GRANT ALL-

While I don't want to go against the SQL spec, its position is that in
'GRANT SELECT ON TABLE tab1' the 'TABLE' is optional and not relevant.
We can keep that and still implement a 'GRANT SELECT ON VIEW tab1' which
is limited to only operating on views, allowing admins to be more
explicit about what they want.  That would at least reduce the
disconnect between 'grant on all', 'default acls', and regular GRANT
with regard to tables vs. views, presuming we keep them split.

I do like the general idea of making it easier to run commands across
multiple tables, etc, rather than having 'GRANT ON ALL' syntax.  As I
believe has been mentioned before, this is a case where we could improve
our client tools rather than implement it on the server.  For example:

\cmd grant select on * to user

Of course, our new psql * handling would mean this would grant
select on everything in pg_catalog too, at least if we do the same as
\d *

I've got a simple perl script which does this, and I know others have
pl/pgsql functions and the like for doing it.  Adding that capability to
psql, if we can do it cleanly, would be nice.
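
For concreteness, one such pl/pgsql helper might look like this (a
sketch only; the function name and signature are illustrative):

    CREATE OR REPLACE FUNCTION grant_on_schema(priv text, sch text,
                                               rolename text)
    RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT n.nspname, c.relname
                   FROM pg_class c
                   JOIN pg_namespace n ON c.relnamespace = n.oid
                  WHERE n.nspname = sch AND c.relkind IN ('r', 'v')
        LOOP
            EXECUTE 'GRANT ' || priv || ' ON '
                || quote_ident(r.nspname) || '.' || quote_ident(r.relname)
                || ' TO ' || quote_ident(rolename);
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;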

Adding some kind of 'run-multiple' stored proc is an interesting idea
but I'm afraid the users this is really targetting aren't going to
appreciate or understand something like:

select
  cmd('grant select on '
   || quote_ident(nspname)
   || '.'
   || quote_ident(relname)
   || ' to public')
from pg_class
join pg_namespace on (pg_class.relnamespace = pg_namespace.oid)
where pg_namespace.nspname = 'myschema';

Writing a function which takes something like:
select grant('SELECT','myschema','*','role');
or takes any kind of actual syntax like:
select cmd('grant select on * to role');
just strikes me as forcing users to use a function for the sake of it
being a function.

I really feel like we should be able to take a page from the unix book
here and come up with some way to handle wildcards in certain
statements, ala chmod.

grant select on * to role;
grant select on myschema.* to role;
grant select on ab* to role;

We don't currently allow * in GRANT syntax, and I strongly doubt that
the SQL committee will some day allow it AND make it mean something
different.  If we're really that worried about it, we could have
'GRANTALL' or 'MGRANT' or something.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] dblink bulk operations

2009-08-06 Thread Merlin Moncure
On Thu, Aug 6, 2009 at 11:11 AM, Andrew Dunstan <and...@dunslane.net> wrote:

 Last night I needed to move a bunch of data from an OLTP database to an
 archive database, and used dblink with a bunch of insert statements. Since I
 was moving about 4m records this was distressingly but not surprisingly
 slow. It set me wondering why we don't build more support for libpq
 operations into dblink, like transactions and prepared queries, and maybe
 COPY too. It would be nice to be able to do something like:

   select dblink_connect('dbh','dbname=foo');
   select dblink_begin('dbh');

you can always exec a sql 'begin'.
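
E.g., with the existing dblink API (a sketch; this still issues one
remote INSERT per row, but inside a single remote transaction):

    SELECT dblink_connect('dbh', 'dbname=archive');
    SELECT dblink_exec('dbh', 'BEGIN');
    SELECT dblink_exec('dbh',
           'INSERT INTO bar VALUES ('
           || quote_literal(a) || ',' || quote_literal(b) || ','
           || quote_literal(c) || ')')
      FROM bar;
    SELECT dblink_exec('dbh', 'COMMIT');
    SELECT dblink_disconnect('dbh');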

   select dblink_prepare('dbh','sth','insert into bar values ($1,$2,$3)');
   select dblink_exec_prepared('dbh','sth',row(a,b,c)) from bar; -- can
   we do this?

The answer to this I think is yes, but not quite that way.  Much
better, I think, is to use 8.4 variadic functions, always use the
parameterized features of libpq, and use the binary protocol
when possible.  This does end up running much faster, and is easier to
use... (we've done exactly that for our in-house stuff).  IIRC you can
parameterize 'execute', so the above should work for prepared queries
as well.

If we get the ability to set specific OIDs for types, I can remove
some of the hacks  we have to send text for composites and arrays of
composites.

 select * from pqlink_exec(connstr, 'select $1 + $2', 3, 4) as R(v int);
 v
---
 7
(1 row)


merlin



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Joshua D. Drake
On Wed, 2009-08-05 at 22:57 -0400, Bruce Momjian wrote:
 Andrew Dunstan wrote:
  
 Well, pg_migrator has gotten pretty far without supporting these
 features, and I think I would have heard about it if someone had these
 and migrated because vacuum analyze found it right away.  I am afraid
 the best we can do is to throw an error when we see these cases and hope
 we can improve things for 8.5.


*most* users will not even know there is such a thing as a composite
type. Throw an error and call it good for this release.

Joshua D. Drake


 -- 
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com
 
   + If your life is a hard drive, Christ can be your backup. +
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [HACKERS] dblink bulk operations

2009-08-06 Thread David Fetter
On Thu, Aug 06, 2009 at 11:11:58AM -0400, Andrew Dunstan wrote:

 Last night I needed to move a bunch of data from an OLTP database to an  
 archive database, and used dblink with a bunch of insert statements.  
 Since I was moving about 4m records this was distressingly but not  
 surprisingly slow. It set me wondering why we don't build more support  
 for libpq operations into dblink, like transactions and prepared  
 queries, and maybe COPY too. It would be nice to be able to do something  
 like:

select dblink_connect('dbh','dbname=foo');
select dblink_begin('dbh');
select dblink_prepare('dbh','sth','insert into bar values ($1,$2,$3)');
select dblink_exec_prepared('dbh','sth',row(a,b,c)) from bar; -- can
we do this?
select dblink_commit('dbh');
select dblink_disconnect('dbh');


 Does this seem worthwhile and doable, or am I smoking crack?

For what it's worth, DBI-Link provides a lot of this.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] Prefix support for synonym dictionary

2009-08-06 Thread Teodor Sigaev

1. The docs should be clarified a little. For instance, it should have a
link back to the definition of a prefix search (12.3.2). I included my
doc suggestions as an attachment.

Thank you, merged


2. dsynonym_init() uses findwrd() in a slightly confusing (and perhaps
fragile) way. After calling findwrd(), the end pointer is pointing at
either the end of the string, or the *; depending on whether the string
ends in * and whether flags is NULL. I only mention this because I had
to take a more careful look to see what was happening. Perhaps add a
comment to make it more clear?

Add comments:
/*
 * Finds the next whitespace-delimited word within the 'in' string.
 * Returns a pointer to the first character of the word, and a pointer
 * to the next byte after the last character in the word (in *end).
 * Character '*' at the end of a word will not be treated as a word
 * character if flags is not null.
 */
static char *
findwrd(char *in, char **end, uint16 *flags)




3. The patch looks for the special byte '*'. I think that's fine,
because we depend on the files being in UTF-8 encoding, where it's the
same byte. However, I thought it was worth mentioning in case we want to
support other encodings for text search files later.


tsearch_readline() converts the file's UTF-8 encoding into the server 
encoding. pgsql supports only encodings that are a superset of ASCII, so 
it's safe to use the asterisk with any encoding.


--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/


synonym_prefix-0.2.gz
Description: Unix tar archive



Re: [HACKERS] Prefix support for synonym dictionary

2009-08-06 Thread Robert Haas
2009/8/6 Teodor Sigaev <teo...@sigaev.ru>:
 1. The docs should be clarified a little. For instance, it should have a
 link back to the definition of a prefix search (12.3.2). I included my
 doc suggestions as an attachment.

 Thank you, merged

 2. dsynonym_init() uses findwrd() in a slightly confusing (and perhaps
 fragile) way. After calling findwrd(), the end pointer is pointing at
 either the end of the string, or the *; depending on whether the string
 ends in * and whether flags is NULL. I only mention this because I had
 to take a more careful look to see what was happening. Perhaps add a
 comment to make it more clear?

 Add comments:
 /*
  * Finds the next whitespace-delimited word within the 'in' string.
  * Returns a pointer to the first character of the word, and a pointer
  * to the next byte after the last character in the word (in *end).
  * Character '*' at the end of a word will not be treated as a word
  * character if flags is not null.
  */
 static char *
 findwrd(char *in, char **end, uint16 *flags)



 3. The patch looks for the special byte '*'. I think that's fine,
 because we depend on the files being in UTF-8 encoding, where it's the
 same byte. However, I thought it was worth mentioning in case we want to
 support other encodings for text search files later.

 tsearch_readline() converts the file's UTF-8 encoding into the server
 encoding. pgsql supports only encodings that are a superset of ASCII, so
 it's safe to use the asterisk with any encoding

Jeff,

Based on these comments, do you want to go ahead and mark this Ready
for Committer?

https://commitfest.postgresql.org/action/patch_view?id=133

...Robert



Re: [HACKERS] dblink bulk operations

2009-08-06 Thread Andrew Dunstan



David Fetter wrote:


For what it's worth, DBI-Link provides a lot of this.


  


Indeed, but that assumes that perl+DBI+DBD::Pg is available, which is by 
no means always the case. If we're going to have a dblink module ISTM it 
should be capable of reasonable bulk operations.


cheers

andrew



Re: [HACKERS] the case for machine-readable error fields

2009-08-06 Thread Sam Mason
On Thu, Aug 06, 2009 at 11:41:55AM +0200, Pavel Stehule wrote:
 typically in SQL/PSM (stored procedures - look on GET DIAGNOSTICS
 statement in plpgsql doc), maybe in ecpg. Other's environments raise
 exception - so you can get some data from exception or from special
 structures related to environment - php, ruby, .NET etc

Sorry, I should have said that I saw how it was used in stored
procedures.  My interest was in getting the client to do something
interesting; if you've already got the complexity of a stored procedure
it shouldn't be too hard to teach it where the problem is.


One thing I didn't see any comment on was on the fact that I think
CREATE UNIQUE INDEX is really creating a constraint--it's just not
showing up as one.  For the constraint name to be sent back in the case
of an error I think this needs to be changed.
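
A minimal illustration of the difference (hypothetical table):

    CREATE TABLE t (a int);
    -- records a pg_constraint entry whose name could be reported:
    ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);
    -- enforces uniqueness too, but records no pg_constraint entry:
    CREATE UNIQUE INDEX t_a_idx ON t (a);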

Triggers (and other domain specific code) seem less important here as
they can always fail with whatever error is appropriate.

-- 
  Sam  http://samason.me.uk /



Re: [HACKERS] dblink bulk operations

2009-08-06 Thread David Fetter
On Thu, Aug 06, 2009 at 12:28:15PM -0400, Andrew Dunstan wrote:
 David Fetter wrote:

 For what it's worth, DBI-Link provides a lot of this.

 Indeed, but that assumes that perl+DBI+DBD::Pg is available, which
 is by  no means always the case. If we're going to have a dblink
 module ISTM it  should be capable of reasonable bulk operations.

I didn't mean to suggest that you should use DBI-Link, just that it's
a requirement that's come up in very similar contexts to that of
dblink.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] dblink bulk operations

2009-08-06 Thread Merlin Moncure
On Thu, Aug 6, 2009 at 11:11 AM, Andrew Dunstanand...@dunslane.net wrote:

 Last night I needed to move a bunch of data from an OLTP database to an
 archive database, and used dblink with a bunch of insert statements. Since I
 was moving about 4m records this was distressingly but not surprisingly
 slow. It set me wondering why we don't build more support for libpq
 operations into dblink, like transactions and prepared queries, and maybe
 COPY too. It would be nice to be able to do something like:

   select dblink_connect('dbh','dbname=foo');
   select dblink_begin('dbh');
   select dblink_prepare('dbh','sth','insert into bar values ($1,$2,$3)');
   select dblink_exec_prepared('dbh','sth',row(a,b,c)) from bar; -- can
   we do this?
   select dblink_commit('dbh');
   select dblink_disconnect('dbh');

thinking about this some more, you can get pretty close with vanilla
dblink with something like (i didn't test):

select dblink_exec('dbh', 'prepare xyz as insert into foo select ($1::foo).*');
select dblink_exec('dbh', 'execute xyz(' || my_foo::text || ')');

This may defeat a little bit of what you are trying to achieve
(especially performance), but it is much easier to craft for basically
any table as long as the fields match.  The above runs into problems
with quoting (e.g., a composite with a bytea in it), but works ok most
of the time.

If you want faster/better, dblink needs to be refactored to parametrize
queries and, if possible, use the binary protocol.
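
Something like this, equally untested; quote_literal() is meant to
guard the composite's text form (connection and table names as in the
example above):

    select dblink_exec('dbh',
        'prepare xyz as insert into foo select ($1::foo).*');
    select dblink_exec('dbh',
        'execute xyz(' || quote_literal(f::text) || ')')
    from foo f;
    select dblink_exec('dbh', 'deallocate xyz');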

merlin



[HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Alvaro Herrera
Today we got a report in the spanish list about the message in $subject.
The server is 8.4 running on Windows.

Any ideas?  I'm wondering what kind of diagnostics we can run to debug
the problem.  xlogdump perhaps?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
---BeginMessage---

Dear list,

I have a server running version 8.4 on Windows XP; it serves
9 concurrent users with no problems, until a few minutes ago a power
outage caused my server to shut down abruptly. The problem is that I
cannot get the service to start; it throws the following log:


==
2009-08-05 11:58:19 COTLOG:  el sistema de bases de datos fue  
interrumpido durante la recuperación en 2009-08-05 11:12:14 COT
2009-08-05 11:58:19 COTHINT:  Esto probablemente significa que algunos  
datos están corruptos y tendrá que usar el respaldo más reciente para  
la recuperación.
2009-08-05 11:58:19 COTLOG:  el sistema de bases de datos no fue  
apagado apropiadamente; se está efectuando la recuperación automática

2009-08-05 11:58:19 COTLOG:  redo comienza en 0/75334970
2009-08-05 11:58:19 COTLOG:  la dirección de página 0/6D374000 en el  
archivo de registro 0, segmento 117, posición 3620864 es inesperada

2009-08-05 11:58:19 COTLOG:  redo listo en 0/75370F68
2009-08-05 11:58:19 COTLOG:  última transacción completada al tiempo  
de registro 2009-08-05 09:45:59.796-05

2009-08-05 11:58:19 COTPANIC:  cannot make new WAL entries during recovery


This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.

2009-08-05 11:58:19 COTLOG:  proceso de inicio (PID 2636) terminó con  
código de salida 3
2009-08-05 11:58:19 COTLOG:  abortando el inicio debido a una falla en  
el procesamiento de inicio

==

Please, if anyone can help me, I will be very grateful.

Regards,

Marcos Rios
Arequipa-Peru

---End Message---



Re: [HACKERS] Prefix support for synonym dictionary

2009-08-06 Thread Jeff Davis
On Thu, 2009-08-06 at 12:19 -0400, Robert Haas wrote:
 Based on these comments, do you want to go ahead and mark this Ready
 for Committer?

Done, thanks Teodor.

However, on the commitfest page, the patches got updated in the wrong
places: prefix support and filtering dictionary support are pointing
at each others' patches.

Regards,
Jeff Davis






[HACKERS] Null handling and plpython

2009-08-06 Thread Peter Eisentraut
I'm reviewing the plpython data type handling patch from the commit fest.  I 
have not dealt much with the plpython code before, and I'm a bit puzzled by 
its elaborately silly handling of null values.  A representative example (for 
the current code):

        if (tupdesc->attrs[atti]->attisdropped)
        {
            modvalues[i] = (Datum) 0;
            modnulls[i] = 'n';
        }
        else if (plval != Py_None)
        {
            plstr = PyObject_Str(plval);
            if (!plstr)
                PLy_elog(ERROR, "could not compute "
                         "string representation of Python "
                         "object, while modifying trigger row");
            src = PyString_AsString(plstr);

            modvalues[i] =
                InputFunctionCall(&proc->result.out.r.atts[atti].typfunc,
                                  src,
                                  proc->result.out.r.atts[atti].typioparam,
                                  tupdesc->attrs[atti]->atttypmod);
            modnulls[i] = ' ';

            Py_DECREF(plstr);
            plstr = NULL;
        }
        else /* FUN STARTS HERE */
        {
            modvalues[i] =
                InputFunctionCall(&proc->result.out.r.atts[atti].typfunc,
                                  NULL,
                                  proc->result.out.r.atts[atti].typioparam,
                                  tupdesc->attrs[atti]->atttypmod);
            modnulls[i] = 'n';
        }

        Py_DECREF(plval);
        plval = NULL;
    }

    rtup = SPI_modifytuple(tdata->tg_relation, otup, natts,
                           modattrs, modvalues, modnulls);

First of all, SPI_modifytuple (which wraps around heap_modify_tuple) appears 
to ignore the values when a slot is marked to be null.

And then, what is the supposed semantics of calling a nonstrict input function 
with NULL as the cstring value?  InputFunctionCall() requires that the return 
value is null if and only if the input cstring was NULL, but we'll call the 
input function anyway.  Couldn't the call to InputFunctionCall() be scrapped 
altogether in the above case?



Re: [HACKERS] Adding error message source

2009-08-06 Thread Peter Eisentraut
On Thursday 06 August 2009 17:33:40 Tom Lane wrote:
 I don't think there'd be much logical difficulty in having an output
 field (ie, CSV column or log_line_prefix escape) that represents a
 classification of the PID, say as postmaster, backend, AV worker,
 AV launcher, bgwriter, etc.  It would only require changing things
 in one place, whereas your original proposal seemed mighty open-ended.

You might be able to achieve that if you use the %q escape and put something 
after the %q that you can search for.



Re: [HACKERS] the case for machine-readable error fields

2009-08-06 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 
 I'm not proposing that we implement GET DIAGNOSTICS as a statement.
 I was just thinking that the list of values it's supposed to make
 available might do as a guide to what extra error fields we need to
 provide where.
 
From what I could find on a quick scan:
 
RETURNED_SQLSTATE
CLASS_ORIGIN
SUBCLASS_ORIGIN
CONSTRAINT_CATALOG
CONSTRAINT_SCHEMA
CONSTRAINT_NAME
CATALOG_NAME
SCHEMA_NAME
TABLE_NAME
COLUMN_NAME
CURSOR_NAME
MESSAGE_TEXT
MESSAGE_LENGTH
MESSAGE_OCTET_LENGTH
 
CATALOG is, of course, equivalent to database in the PostgreSQL world.
 
-Kevin



Re: [HACKERS] slow commits with heavy temp table usage in 8.4.0

2009-08-06 Thread Todd A. Cook

Tom Lane wrote:


I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area.  So I think it's got to be a platform difference not a
PG version difference.  In particular I think we are probably looking
at a filesystem issue: how fast can you delete [...] 30000 files.


I'm still on Fedora 7, so maybe this will be motivation to upgrade.

FYI, on my 8.2.13 system, the test created 30001 files which were all
deleted during the commit.  On my 8.4.0 system, the test created 60001
files, of which 30000 were deleted at commit and 30001 disappeared
later (presumably during a checkpoint?).



But I'm not sure
if the use-case is popular enough to deserve such a hack.


FWIW, the full app was looping over a set of datasets.  On each iteration,
it computed some intermediate results into a temp table, generated several
reports from those intermediate results, and finally truncated the table
for the next iteration.
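
In case it helps, the shape of the loop was roughly this (names
invented; all inside one transaction, since ON COMMIT DROP requires a
transaction block):

    CREATE TEMP TABLE scratch(k int, v numeric) ON COMMIT DROP;
    -- then, per dataset:
    INSERT INTO scratch
        SELECT g, g * 1.5 FROM generate_series(1, 10000) g;
    SELECT count(*), sum(v) FROM scratch;  -- one of several reports
    TRUNCATE scratch;                      -- reset for the next dataset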

-- todd



Re: [HACKERS] slow commits with heavy temp table usage in 8.4.0

2009-08-06 Thread Todd A. Cook

Tom Lane wrote:


The attached prototype patch does this
and seems to fix the speed problem nicely.  It's not tremendously
well tested, but perhaps you'd like to test?  Should work in 8.4.


I'll give it a try and report back (though probably not until tomorrow).

-- todd



Re: [HACKERS] Prefix support for synonym dictionary

2009-08-06 Thread Robert Haas
On Thu, Aug 6, 2009 at 12:53 PM, Jeff Davispg...@j-davis.com wrote:
 On Thu, 2009-08-06 at 12:19 -0400, Robert Haas wrote:
 Based on these comments, do you want to go ahead and mark this Ready
 for Committer?

 Done, thanks Teodor.

 However, on the commitfest page, the patches got updated in the wrong
 places: prefix support and filtering dictionary support are pointing
 at each others' patches.

Fixed.

...Robert



Re: [HACKERS] slow commits with heavy temp table usage in 8.4.0

2009-08-06 Thread Alex Hunsaker
On Thu, Aug 6, 2009 at 11:32, Todd A. Cooktc...@blackducksoftware.com wrote:
 Tom Lane wrote:

 I took a look through the CVS history and verified that there were
 no post-8.4 commits that looked like they'd affect performance in
 this area.  So I think it's got to be a platform difference not a
 PG version difference.  In particular I think we are probably looking
 at a filesystem issue: how fast can you delete [...] 30000 files.

 I'm still on Fedora 7, so maybe this will be motivation to upgrade.

 FYI, on my 8.2.13 system, the test created 30001 files which were all
 deleted during the commit.  On my 8.4.0 system, the test created 60001
 files, of which 30000 were deleted at commit and 30001 disappeared
 later (presumably during a checkpoint?).

Smells like the FSM?  With double the number of files, maybe something
simple like turning on dir_index (if you are on ext3) will help?



Re: [HACKERS] the case for machine-readable error fields

2009-08-06 Thread Kevin Grittner
Kevin Grittner kevin.gritt...@wicourts.gov wrote: 
 From what I could find on a quick scan:
  
 RETURNED_SQLSTATE
 CLASS_ORIGIN
 SUBCLASS_ORIGIN
 CONSTRAINT_CATALOG
 CONSTRAINT_SCHEMA
 CONSTRAINT_NAME
 CATALOG_NAME
 SCHEMA_NAME
 TABLE_NAME
 COLUMN_NAME
 CURSOR_NAME
 MESSAGE_TEXT
 MESSAGE_LENGTH
 MESSAGE_OCTET_LENGTH
 
Also, though I'm not yet totally clear on their meaning:
 
COMMAND_FUNCTION
DYNAMIC_FUNCTION
 
And since users can declare a condition and associate it with a
SQLSTATE, and later use that to terminate a database transaction with
the SIGNAL command:
 
| If the value of the RETURNED_SQLSTATE corresponds to unhandled
| user-defined exception, then the value of CONDITION_IDENTIFIER is
| the condition name of the user-defined exception.
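
The nearest thing on our side today is plpgsql's RAISE ... USING
(new in 8.4); a minimal sketch of raising a user-defined condition
with its own SQLSTATE:

    CREATE FUNCTION fail_demo() RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        RAISE EXCEPTION 'stock too low' USING ERRCODE = 'P0001';
    END $$;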
 
-Kevin



Re: [HACKERS] Null handling and plpython

2009-08-06 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 And then, what is the supposed semantics of calling a nonstrict input
 function with NULL as the cstring value?  InputFunctionCall() requires
 that the return value is null if and only if the input cstring was
 NULL, but we'll call the input function anyway.  Couldn't the call
 to InputFunctionCall() be scrapped altogether in the above case?

No.  The point of this is to allow domain_in() to apply domain
constraint checks that might or might not throw error on null values.
I'm not sure whether the code you quote is getting this right in
all the branches, but the last case isn't useless.
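
A concrete case (sketch; the domain name is made up): given

    CREATE DOMAIN d_notnull AS integer NOT NULL;
    SELECT NULL::d_notnull;   -- fails: domain does not allow null values

the same rejection has to happen when a null arrives through an
input-function path such as COPY, which is why domain_in must be
invoked even for null input.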

regards, tom lane



Re: [HACKERS] GRANT ON ALL IN schema

2009-08-06 Thread decibel

On Aug 5, 2009, at 11:59 AM, Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

... bulk-grant could be based on object type,
object name (with wildcard or regexp pattern), schema membership, or
maybe other things, and I think that would be quite useful if we can
figure out how to make it clean and elegant.


Yeah.  In the end you can always write a plpgsql function that filters
on anything at all.  The trick is to pick some useful subset of
functionality that can be exposed in a less messy way.

Or maybe we are going at this the wrong way?  Would it be better to try
harder to support the write-a-plpgsql-function approach?  I don't think
the documentation even mentions that approach, let alone provides any
concrete examples.  It might be interesting to document it and see if
there are any simple things we could do to file off rough edges in doing
grants that way, rather than implementing what must ultimately be a
limited solution directly in GRANT.


I'm not sure if this is what you were thinking, but something I've  
added to all our databases is a simple exec function (see below).  
This makes it a lot less painful to perform arbitrary operations.  
Perhaps we should add something similar to the core database? On a  
related note, I also have tools.raise(level text, message text) that  
allows you to perform a plpgsql RAISE command from sql; I've found  
that to be very useful in scripts to allow for raising an exception.


In this specific case, I think there's enough demand to warrant a  
built-in mechanism for granting, but if something like exec() is  
built-in then the bar isn't as high for what the built-in GRANT  
mechanism needs to handle.


CREATE OR REPLACE FUNCTION tools.exec(
sql text
, echo boolean
) RETURNS text LANGUAGE plpgsql AS $exec$
BEGIN
RAISE DEBUG 'Executing dynamic sql: %', sql;
EXECUTE sql;

IF echo THEN
RETURN sql;
ELSE
RETURN NULL;
END IF;
END;
$exec$;

The echo parameter is sometimes useful in scripts so you have some  
idea what's going on; but it should be optional.
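
For example, combined with the catalogs it covers the bulk-grant case
from the other thread (schema name hypothetical):

    SELECT tools.exec('GRANT SELECT ON '
                      || quote_ident(n.nspname) || '.' || quote_ident(c.relname)
                      || ' TO public', true)
    FROM pg_class c
    JOIN pg_namespace n ON c.relnamespace = n.oid
    WHERE n.nspname = 'myschema' AND c.relkind = 'r';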

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [HACKERS] GRANT ON ALL IN schema

2009-08-06 Thread Pavel Stehule

 \cmd grant select on * to user


when I wrote epsql I implemented a \fetchall metastatement.
http://okbob.blogspot.com/2009/03/experimental-psql.html

It could be used for GRANT:

DECLARE x CURSOR FOR SELECT * FROM information_schema.tables;
\fetchall x GRANT ALL ON :table_name TO public;

CLOSE x;

regards
Pavel Stehule

 Of course, our new psql * handling would mean this would grant
 select on everything in pg_catalog too, at least if we do the same as
 \d *

 I've got a simple perl script which does this, and I know others have
 pl/pgsql functions and the like for doing it.  Adding that capability to
 psql, if we can do it cleanly, would be nice.

 Adding some kind of 'run-multiple' stored proc is an interesting idea
 but I'm afraid the users this is really targetting aren't going to
 appreciate or understand something like:

 select
  cmd('grant select on '
   || quote_ident(nspname)
   || '.'
   || quote_ident(relname)
   || ' to public')
 from pg_class
 join pg_namespace on (pg_class.relnamespace = pg_namespace.oid)
 where pg_namespace.nspname = 'myschema';

 Writing a function which takes something like:
 select grant('SELECT','myschema','*','role');
 or takes any kind of actual syntax like:
 select cmd('grant select on * to role');
 just strikes me as forcing users to use a function for the sake of it
 being a function.

 I really feel like we should be able to take a page from the unix book
 here and come up with some way to handle wildcards in certain
 statements, ala chmod.

 grant select on * to role;
 grant select on myschema.* to role;
 grant select on ab* to role;

 We don't currently allow * in GRANT syntax, and I strongly doubt that
 the SQL committee will some day allow it AND make it mean something
 different.  If we're really that worried about it, we could have
 'GRANTALL' or 'MGRANT' or something.

        Thanks,

                Stephen






Re: [HACKERS] Table and Index compression

2009-08-06 Thread Josh Berkus
On 8/6/09 2:39 AM, PFC wrote:
 
 
 With the talk about adding compression to pg_dump lately, I've been
 wondering if tables and indexes could be compressed too.
 So I've implemented a quick on-the-fly compression patch for postgres

I find this very interesting, and would like to test it further on some
client workloads, before you/we put more work into completing it.

I think if we can implement compressed database as an option (perhaps at
initdb time, perhaps at tablespace creation time) then it will be very
attractive.

Where is the patch?

BTW, who are you actually?

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Guillaume Smet
Pierre,

On Thu, Aug 6, 2009 at 11:39 AM, PFCli...@peufeu.com wrote:
 The best for this is lzo : very fast decompression, a good compression ratio
 on a sample of postgres table and indexes, and a license that could work.

The license of lzo doesn't allow us to include it in PostgreSQL
without relicensing PostgreSQL as GPL.

I'm not sure what you mean by "a license that could work".

Note that it doesn't change the interest of your approach. It's just
that I'm not sure we can find a performance-acceptable, BSD-licensed
compression library (it was discussed a lot of times here).

-- 
Guillaume



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Greg Stark
I like the idea too, but I think there are some major problems to
solve. In particular I think we need a better solution to blocks
growing than sparse files.

The main problem with using sparse files is that currently postgres is
careful to allocate blocks early so it can fail  if there's not enough
space. With your sparse file solution Postgres might only find out
there's no space after it has already committed a transaction.
bgwriter has no good course of action to take if it finds out there's
nowhere to put the data it has in shared buffers.

But I think even if you solve that it's not really a good long-term
solution. We don't know how the OS handles block allocation for this
type of file. I'm actually moderately surprised it isn't skipping
enough blocks assuming you'll allocate them eventually. Even if it
does handle it the way you expect, what happens when you do grow a
block?  It'll have to allocate it way out of the way, and we have no
way to repair that discontinuity later.

Also, the way you've preallocated blocks effectively nails the maximum
compression at 2x. That seems to be leaving a lot of money on the
table.

To handle read-write tables I think we would need to directly
implement the kind of indirection layer that you're getting out of the
filesystem's block layer currently. That would let you allocate enough
blocks to hold the data uncompressed and then free up those blocks
once you're sure the data is compressible.

One possibility is to handle only read-only tables. That would make
things a *lot* simpler. But it sure would be inconvenient if it's only
useful on large static tables but requires you to rewrite the whole
table -- just what you don't want to do with large static tables -- to
get the benefit.



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Robert Haas
On Thu, Aug 6, 2009 at 4:03 PM, Greg Starkgsst...@mit.edu wrote:
 I like the idea too, but I think there are some major problems to
 solve. In particular I think we need a better solution to blocks
 growing than sparse files.

How much benefit does this approach have over using TOAST compression
more aggressively?

...Robert



[HACKERS] Executor documentation

2009-08-06 Thread Dan Colish
Hi,

I've been looking through the current state of documentation,
including comments, with respect to the executor code, and I would
like to improve its condition. If anyone has notes,
pseudocode, thoughts on how it all really works, or anything else that
can help me out, I'd really appreciate it. Currently, I'm just going
through and finding comments that need more, then adding to them.
I'd also like to add to the README where it's calling for more
documentation, i.e., the XXX line. I think this cleanup will really
help clear up some of the confusion I've had with the executor and
maybe identify areas of it that can be improved. Thanks in advance,

--
--Dan



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Alvaro Herrera
Alvaro Herrera wrote:

 2009-08-05 11:58:19 COTLOG:  el sistema de bases de datos fue
 interrumpido durante la recuperación en 2009-08-05 11:12:14 COT
 2009-08-05 11:58:19 COTHINT:  Esto probablemente significa que
 algunos datos están corruptos y tendrá que usar el respaldo más
 reciente para la recuperación.
 2009-08-05 11:58:19 COTLOG:  el sistema de bases de datos no fue
 apagado apropiadamente; se está efectuando la recuperación
 automática
 2009-08-05 11:58:19 COTLOG:  redo comienza en 0/75334970
 2009-08-05 11:58:19 COTLOG:  la dirección de página 0/6D374000 en el
 archivo de registro 0, segmento 117, posición 3620864 es inesperada
 2009-08-05 11:58:19 COTLOG:  redo listo en 0/75370F68
 2009-08-05 11:58:19 COTLOG:  última transacción completada al tiempo
 de registro 2009-08-05 09:45:59.796-05
 2009-08-05 11:58:19 COTPANIC:  cannot make new WAL entries during recovery

After adding %p to the log_line_prefix, it becomes clear that the
process calling XLogInsert here is the startup process.

This is new code in 8.4.  Is no one concerned about this?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[HACKERS] compilation with libeditpreferred is broken

2009-08-06 Thread Zdenek Kotala
When I try to compile postgresql with the --with-libedit-preferred
option, compilation fails when readline is also installed on the
system. You can see the error report at:

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dot_mothdt=2009-08-06%2012:46:04

The main problem is in src/bin/psql/input.h, where the following ifdef
magic appears:

  #define USE_READLINE 1
  #if defined(HAVE_READLINE_READLINE_H)
    #include <readline/readline.h>
  #elif defined(HAVE_EDITLINE_READLINE_H)
    #include <editline/readline.h>
  #elif defined(HAVE_READLINE_H)
    #include <readline.h>
  #endif

  #if defined(HAVE_READLINE_HISTORY_H)
    #include <readline/history.h>
  #elif defined(HAVE_EDITLINE_HISTORY_H)
    #include <editline/history.h>
  #elif defined(HAVE_HISTORY_H)
    #include <history.h>
  #endif
  

The problem is that libedit does not distribute editline/history.h, and
configure detects that there is a readline/history.h and sets the
HAVE_READLINE_HISTORY_H macro. input.h then includes
editline/readline.h and readline/history.h, which causes symbol
conflicts.

It seems to me that editline never distributed a history.h file and
that HAVE_EDITLINE_HISTORY_H is nonsense. But I'm not sure.

I attached a suggested fix, but it also needs some work in ./configure,
depending on whether editline/history.h ever existed. Does anyone know
the history?

I need to backport this fix to branches 8.2 - 8.4, because the
OpenSolaris PostgreSQL binaries build is broken now.

Zdenek









diff -r dd477d7938da src/bin/psql/input.h
--- a/src/bin/psql/input.h	Wed Jun 03 00:38:34 2009 +
+++ b/src/bin/psql/input.h	Thu Aug 06 22:24:58 2009 +0200
@@ -18,17 +18,16 @@
 #define USE_READLINE 1
 #if defined(HAVE_READLINE_READLINE_H)
 #include <readline/readline.h>
+#if defined(HAVE_READLINE_HISTORY_H)
+#include <readline/history.h>
+#endif
 #elif defined(HAVE_EDITLINE_READLINE_H)
 #include <editline/readline.h>
 #elif defined(HAVE_READLINE_H)
 #include <readline.h>
+#if defined(HAVE_HISTORY_H)
+#include <history.h>
 #endif
-#if defined(HAVE_READLINE_HISTORY_H)
-#include <readline/history.h>
-#elif defined(HAVE_EDITLINE_HISTORY_H)
-#include <editline/history.h>
-#elif defined(HAVE_HISTORY_H)
-#include <history.h>
 #endif
 #endif
 



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote: 
 On Thu, Aug 6, 2009 at 4:03 PM, Greg Starkgsst...@mit.edu wrote:
 I like the idea too, but I think there are some major problems to
 solve. In particular I think we need a better solution to blocks
 growing than sparse files.
 
 How much benefit does this approach have over using TOAST
 compression more aggressively?
 
I was wondering the same thing.  It seems like compressing a page at a
time should allow more space savings than a column at a time, and
possibly do it faster.
 
One question I have banging around in my head is what to do with
out-of-line storage.  Sometimes you have a large column which you know
contains data which is already compressed and/or encrypted, so
attempting compression would give little or no benefit; so I'm
inclined to think that if we do page compression, it shouldn't deal
with toast tables.  We could leave them to the current techniques. 
That leaves some subtle problems with how to deal with a datum which
currently compresses from, say, several kB down to one or two hundred
bytes.  Current TOAST logic would typically compress and inline it.
What would we do if we're trying to push heap compression to the page
level?
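
For reference, the per-column control we already have; a sketch with
made-up names:

    -- MAIN prefers compression but keeps the value inline;
    -- EXTERNAL stores out of line without compression, which suits
    -- data that is already compressed or encrypted.
    ALTER TABLE archive_data ALTER COLUMN payload SET STORAGE MAIN;
    ALTER TABLE archive_data ALTER COLUMN jpeg_blob SET STORAGE EXTERNAL;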
 
-Kevin



Re: [HACKERS] Fwd: [BUGS] fix: plpgsql: return query and dropped columns problem

2009-08-06 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 There is fixed patch. Please, Jaime, can you look on it?

Applied with significant revisions.  I really wanted this code factored
out, because we'd otherwise end up duplicating it in other PLs (and it
was already duplicative of execQual.c).  So I pushed the support code
into a new file tupconvert.c.

regards, tom lane



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 This is new code in 8.4.  Is no one concerned about this?

[ shrug... ]  It's uninvestigatable with only this amount of detail.
How about a test case, or at least a backtrace?

regards, tom lane



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Greg Stark
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
 After adding %p to the log_line_prefix, it becomes clear that the
 process calling XLogInsert here is the startup process.

 This is new code in 8.4.  Is no one concerned about this?

Can you get a backtrace?

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Alvaro Herrera
Greg Stark wrote:
 2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
  After adding %p to the log_line_prefix, it becomes clear that the
  process calling XLogInsert here is the startup process.
 
  This is new code in 8.4.  Is no one concerned about this?
 
 Can you get a backtrace?

I'll ask the user.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Greg Stark
2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
 2009-08-05 11:58:19 COTLOG:  la dirección de página 0/6D374000 en el
 archivo de registro 0, segmento 117, posición 3620864 es inesperada


Incidentally, Google's translate gives me the impression that the
above message corresponds to:

if (!XLByteEQ(hdr->xlp_pageaddr, recaddr))
{
    ereport(emode,
            (errmsg("unexpected pageaddr %X/%X in log file "
                    "%u, segment %u, offset %u",
                    hdr->xlp_pageaddr.xlogid,
                    hdr->xlp_pageaddr.xrecoff,
                    readId, readSeg, readOff)));
    return false;
}


I'm not sure what this indicates though.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
 2009-08-05 11:58:19 COTLOG:  la dirección de página 0/6D374000 en el
 archivo de registro 0, segmento 117, posición 3620864 es inesperada
 

 Incidentally, Google's translate gives me the impression that the
 above message corresponds to:

  if (!XLByteEQ(hdr->xlp_pageaddr, recaddr))
  {
      ereport(emode,
              (errmsg("unexpected pageaddr %X/%X in log file "
                      "%u, segment %u, offset %u",
                      hdr->xlp_pageaddr.xlogid,
                      hdr->xlp_pageaddr.xrecoff,
                      readId, readSeg, readOff)));
      return false;
  }

 I'm not sure what this indicates though.

It probably means nothing: this is a typical termination condition when
reading from a recycled WAL file.  You come to a page that doesn't have
the expected page address, because what it's got is whatever it had in
the WAL file's previous cycle of life.

The PANIC is evidently happening during post-WAL-scanning cleanup,
perhaps while we're trying to repair incomplete btree page splits
or some such.  But I don't want to speculate too much without a
stack trace.

regards, tom lane



Re: [HACKERS] PANIC: cannot make new WAL entries during recovery in the wild

2009-08-06 Thread Alvaro Herrera
Greg Stark wrote:
 2009/8/6 Alvaro Herrera alvhe...@commandprompt.com:
  2009-08-05 11:58:19 COTLOG:  la dirección de página 0/6D374000 en el
  archivo de registro 0, segmento 117, posición 3620864 es inesperada
 
 
 Incidentally, Google's translate gives me the impression that the
 above message corresponds to:
 
  if (!XLByteEQ(hdr->xlp_pageaddr, recaddr))
  {
      ereport(emode,
              (errmsg("unexpected pageaddr %X/%X in log file "
                      "%u, segment %u, offset %u",
                      hdr->xlp_pageaddr.xlogid,
                      hdr->xlp_pageaddr.xrecoff,
                      readId, readSeg, readOff)));
      return false;
  }

Yes, that's correct.  (It would be great to have a system to
reverse-translate the messages based on the PO files ...)

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] compilation with libeditpreferred is broken

2009-08-06 Thread Tom Lane
Zdenek Kotala zdenek.kot...@sun.com writes:
 It seems to me that editline never distributed history.h file and
 HAVE_EDITLINE_HISTORY_H is nonsense. But I'm not sure.

I wouldn't count on that, in part because there are so many versions of
editline.  On an OS X machine I see

$ ls -l /usr/include/*line*
/usr/include/editline:
total 16
-rw-r--r--  1 root  wheel  6882 Feb 19  2008 readline.h

/usr/include/readline:
total 16
lrwxr-xr-x  1 root  wheel  22 Jul 23 11:31 history.h@ -> ../editline/readline.h
lrwxr-xr-x  1 root  wheel  22 Jul 23 11:31 readline.h@ -> ../editline/readline.h

regards, tom lane



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Josh Berkus
On 8/6/09 1:03 PM, Greg Stark wrote:
 One possibility is to handle only read-only tables. That would make
 things a *lot* simpler. But it sure would be inconvenient if it's only
 useful on large static tables but requires you to rewrite the whole
 table -- just what you don't want to do with large static tables -- to
 get the benefit.

While less flexible, I could see combining this with partitioning to
still be useful.  If we could rewrite specific partitions as compressed,
then there are a lot of cumulative-data applications that would benefit.

Not as exciting as being able to compress the whole thing, of course.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



[HACKERS] Array detection in pg_dump

2009-08-06 Thread Bruce Momjian
Is there a reason we don't use pg_type.typcategory to detect arrays in
Postgres 8.4?  Right now I see this in pg_dump.c:

if (g_fout->remoteVersion >= 80300)
{
    appendPQExpBuffer(query, "SELECT tableoid, oid, typname, "
                      "typnamespace, "
                      "(%s typowner) AS rolname, "
                      "typinput::oid AS typinput, "
                      "typoutput::oid AS typoutput, typelem, typrelid, "
                      "CASE WHEN typrelid = 0 THEN ' '::\"char\" "
                      "ELSE (SELECT relkind FROM pg_class WHERE oid = "
                      "typrelid) END AS typrelkind, "
                      "typtype, typisdefined, "
                      "typname[0] = '_' AND typelem != 0 AND "
--                    "(SELECT typarray FROM pg_type te WHERE oid = pg_type.typelem) = oid AS "
                      "isarray "
                      "FROM pg_type",
                      username_subquery);
}

It seems the appropriate 8.4+ test would be:

t.typtype = 'b' AND t.typcategory = 'A'

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] mixed, named notation support

2009-08-06 Thread Tom Lane
Bernd Helmle maili...@oopsware.de writes:
 Here again a patch version with updated documentation. I will stop 
 reviewing this patch now and mark this ready for committer, so we have some 
 time left to incorporate additional feedback.

I'm starting to look at this now, and my very first reaction was
what in the world is a leaky list?.  I'm not sure I like the
data structure itself, but the terminology is certainly completely
unhelpful.  Can't you come up with something better than
continuous/leaky?

regards, tom lane



Re: [HACKERS] Table and Index compression

2009-08-06 Thread Ron Mayer
I'm curious what advantages there are in building compression into
the database itself, rather than using filesystem-based compression.

I see ZFS articles[1] discussing how enabling compression
improves performance with ZFS; for Linux, Btrfs has compression
features as well[2]; and on Windows, NTFS does too.

[1]http://blogs.sun.com/observatory/entry/zfs_compression_a_win_win
[2]http://lwn.net/Articles/305697/





Re: [HACKERS] Array detection in pg_dump

2009-08-06 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Is there a reason we don't use pg_type.typcategory to detect arrays in
 Postgres 8.4?  Right now I see this in pg_dump.c:

typcategory is user-assignable and thus not too reliable; furthermore
it wouldn't prove that the type is the array type for its typelem.
(Consider things like point and name --- there can be multiple types
with the same typelem, but only one is the real array type for that
typelem.)  The typelem back-link check is much safer.

regards, tom lane



Re: [HACKERS] Array detection in pg_dump

2009-08-06 Thread Bruce Momjian
Bruce Momjian wrote:
 Is there a reason we don't use pg_type.typcategory to detect arrays in
 Postgres 8.4?  Right now I see this in pg_dump.c:
 
 if (g_fout->remoteVersion >= 80300)
 {
     appendPQExpBuffer(query, "SELECT tableoid, oid, typname, "
                       "typnamespace, "
                       "(%s typowner) AS rolname, "
                       "typinput::oid AS typinput, "
                       "typoutput::oid AS typoutput, typelem, typrelid, "
                       "CASE WHEN typrelid = 0 THEN ' '::\"char\" "
                       "ELSE (SELECT relkind FROM pg_class WHERE oid = "
                       "typrelid) END AS typrelkind, "
                       "typtype, typisdefined, "
                       "typname[0] = '_' AND typelem != 0 AND "
 --  "(SELECT typarray FROM pg_type te WHERE oid = pg_type.typelem) = oid AS isarray "

                                     ^^

Oh, and what does that 'te' do?  Seems useless.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Array detection in pg_dump

2009-08-06 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Is there a reason we don't use pg_type.typcategory to detect arrays in
  Postgres 8.4?  Right now I see this in pg_dump.c:
 
 typcategory is user-assignable and thus not too reliable; furthermore
 it wouldn't prove that the type is the array type for its typelem.
 (Consider things like point and name --- there can be multiple types
 with the same typelem, but only one is the real array type for that
 typelem.)  The typelem back-link check is much safer.

Thanks;  I will use the pg_dump code in pg_migrator then.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Array detection in pg_dump

2009-08-06 Thread Bruce Momjian
Bruce Momjian wrote:
 Bruce Momjian wrote:
  Is there a reason we don't use pg_type.typcategory to detect arrays in
  Postgres 8.4?  Right now I see this in pg_dump.c:
  
  if (g_fout->remoteVersion >= 80300)
  {
      appendPQExpBuffer(query, "SELECT tableoid, oid, typname, "
                        "typnamespace, "
                        "(%s typowner) AS rolname, "
                        "typinput::oid AS typinput, "
                        "typoutput::oid AS typoutput, typelem, typrelid, "
                        "CASE WHEN typrelid = 0 THEN ' '::\"char\" "
                        "ELSE (SELECT relkind FROM pg_class WHERE oid = "
                        "typrelid) END AS typrelkind, "
                        "typtype, typisdefined, "
                        "typname[0] = '_' AND typelem != 0 AND "
  --  "(SELECT typarray FROM pg_type te WHERE oid = pg_type.typelem) = oid AS isarray "

                                      ^^
 
 Oh, and what does that 'te' do?  Seems useless.

Oh, I see it now; aliasing the inner pg_type as 'te' makes the bare
pg_type reference in the subquery point at the outer pg_type.  Wow,
that is confusing.  I will clear that up in my pg_migrator version.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Alpha Releases: Docs?

2009-08-06 Thread Bruce Momjian
Robert Haas wrote:
 On Wed, Aug 5, 2009 at 6:59 PM, Josh Berkusj...@agliodbs.com wrote:
  As far as the release notes, I think we would have to have proof that
  the alpha-generated release notes are as good or close to the quality of
  the release notes using the current process.  If they are, we can use
  them for 8.6, or even for 8.5 if the quality is similar, but we can't
  know that without creating identical release notes for 8.5 and comparing
  them, to make sure the alpha process has not missed any items, etc.
 
  I can't speak for Robert or Peter, but for me this gives me exactly zero
  incentive to bother.  If you're just going to do the same amount of work
  anyway ... and potentially delay the release by just as much ... then
  there's really no point in me spending my nights and weekends wrestling
  with SGML formatting.  I'll leave it to you.
 
 I think I am in agreement.  Parsing Bruce's words carefully, he seems
 to be saying that the only way to determine whether the release notes
 are of sufficient quality is to repeat the whole process of release
 note generation ab initio to determine whether what has been produced
 is good enough.  Presumably this would be followed by some comparison
 of the two work products (by a panel of impartial judges?).
 
 I can't believe this is necessary.  It ought to be possible with
 careful bookkeeping to make it easy to verify that every commit has
 been either included or intentionally omitted.  The obvious system
 that occurs to me is to track the git hash of each commit and the
 release note text associated with it, but I am sure there are other
 unique identifiers that could equally well be used.  Once you've
 verified that, then the only remaining issue is the actual quality of
 the work product, and I would think that it could be much faster to
 edit someone else's work than to do the whole thing over.  Peter and
 Josh both have excellent written communication skills, and I like to
 think that I do as well; I would think that the necessary work would
 be more on the order of fine-tuning than a wholesale rewrite.
 
 That having been said, I am not going to spend a lot of time trying to
 push water up a hill.

I would love to get out of the release-note-writing business, but I
can't imagine how such a document could be written incrementally, so it
is logical that I would want some kind of test to see that the method I
didn't think would work would actually work.

I could state right now that I will not do any 8.5 release notes and
force folks to cobble something together, and hope it works, but that
is hardly responsible.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Alpha Releases: Docs?

2009-08-06 Thread Josh Berkus
Bruce,

 I would love to get out of the release-note-writing business, but I
 can't imagine how such a document could be written incrementally, so it
 is logical that I would want some kind of test to see that the method I
 didn't think would work would actually work.

What about Robert's suggested method of simply checking the incremental
version against the commit logs?  Then you'd only be writing what was
actually missed, rather than writing everything and then checking it.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] Alpha Releases: Docs?

2009-08-06 Thread Bruce Momjian
Josh Berkus wrote:
 Bruce,
 
  I would love to get out of the release-note-writing business, but I
  can't imagine how such a document could be written incrementally, so it
  is logical that I would want some kind of test to see that the method I
  didn't think would work would actually work.
 
 What about Robert's suggested method of simply checking the incremental
 version against the commit logs?  Then you'd only be writing what was
 actually missed, rather than writing everything and then checking it.

That seems like more work than just doing the entire thing.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Alpha Releases: Docs?

2009-08-06 Thread Bruce Momjian
Josh Berkus wrote:
 Bruce,
 
  I would love to get out of the release-note-writing business, but I
  can't imagine how such a document could be written incrementally, so it
  is logical that I would want some kind of test to see that the method I
  didn't think would work would actually work.
 
 What about Robert's suggested method of simply checking the incremental
 version against the commit logs?  Then you'd only be writing what was
 actually missed, rather than writing everything and then checking it.

I don't see why it is a problem to have folks creating incremental
release notes and having me create some for 8.5 to test against?  This
is how we have automated other processes --- you do as good a job as I
do and I stop doing the job.  It seems it would be me wasting my time to
validate your work, but I am willing to do it.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Bruce Momjian
Peter Eisentraut wrote:
 On Thursday 06 August 2009 06:32:06 Bruce Momjian wrote:
  I have applied the attached patch to pg_migrator to detect enum,
  composites, and arrays.  I tested it and the only error I got was with
  the breakmigrator table that was supplied by Jeff, and once I removed
  that table the migration went fine, meaning there are no cases of these
  stored in the regression test database.
 
 That might be a bit excessive.  As I understand it, arrays of built-in types 
 (e.g., int[]) should work fine.  I suspect the majority of uses of arrays 
 will 
 be with built-in types, so allowing that would help a significant portion of 
 installations.

Agreed.  I realized that last night, and have modified pg_migrator to
test FirstNormalObjectId.

The pg_migrator limitations are now:

pg_migrator will not work if a user column is defined as:

o  data type tsquery
o  data type 'name' and is not the first column
o  a user-defined composite data type
o  a user-defined array data type
o  a user-defined enum data type

You must drop any such columns and migrate them manually.
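
A rough pre-flight query for spotting such columns (a sketch; it
assumes user-defined types have OIDs >= 16384, i.e. FirstNormalObjectId):

    SELECT n.nspname, c.relname, a.attname, t.typname
    FROM pg_attribute a
    JOIN pg_class c ON a.attrelid = c.oid
    JOIN pg_namespace n ON c.relnamespace = n.oid
    JOIN pg_type t ON a.atttypid = t.oid
    WHERE c.relkind = 'r'
      AND a.attnum > 0 AND NOT a.attisdropped
      AND t.oid >= 16384
      AND (t.typtype IN ('c', 'e') OR t.typelem != 0);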

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Bruce Momjian
David E. Wheeler wrote:
 On Aug 6, 2009, at 7:28 AM, Tom Lane wrote:
 
  That would cover the problem for OIDs needed during CREATE TABLE, but
  what about types and enum values?
 
 I haven't been following this discussion very closely, but wanted to  
 ask: is someone writing regression tests for these cases that  
 pg_migrator keeps bumping into?

Yes, I have regression tests I run but they are not in CVS, partly
because they are tied to other scripts I have to manage server settings.

Here are my scripts:

http://momjian.us/tmp/pg_migrator_test.tgz

One big problem is that pg_migrator fails as soon as it hits one of
these so there isn't much to automate.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Wed, 2009-08-05 at 22:57 -0400, Bruce Momjian wrote:
  Andrew Dunstan wrote:
   
  Well, pg_migrator has gotten pretty far without supporting these
  features, and I think I would have heard about it if someone had these
  and migrated because vacuum analyze found it right away.  I am afraid
  the best we can do is to throw an error when we see these cases and hope
  we can improve things for 8.5.
 
 
 *most* users will not even know there is such a thing as a composite
 type. Throw an error and call it good for this release.

Done, pg_migrator 8.4.3 released.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Alpha Releases: Docs?

2009-08-06 Thread Robert Haas
On Thu, Aug 6, 2009 at 8:20 PM, Bruce Momjianbr...@momjian.us wrote:
 Josh Berkus wrote:
 Bruce,

  I would love to get out of the release-note-writing business, but I
  can't imagine how such a document could be written incrementally, so it
  is logical that I would want some kind of test showing that the method
  I doubted would actually work.

 What about Robert's suggested method of simply checking the incremental
 version against the commit logs?  Then you'd only be writing what was
 actually missed, rather than writing everything and then checking it.

 I don't see why it is a problem to have folks creating incremental
 release notes and having me create some for 8.5 to test against?  This
 is how we have automated other processes --- you do as good a job as I
 do and I stop doing the job.  It seems it would be me wasting my time to
 validate your work, but I am willing to do it.

Well, to some extent this may boil down to semantics, but keep in mind
that our goal in volunteering is to get the release out the door more
quickly.   We need you to be willing to do less work, not more, or at
least find a way to get some of that work out of the critical path of
the release.

...Robert



[HACKERS] pg_ctl stop -m fast after -m smart

2009-08-06 Thread Itagaki Takahiro

pg_ctl stop -m smart waits until all connections are disconnected, while
pg_ctl stop -m fast disconnects all connections forcibly.
But running fast after smart still waits for disconnections.

Can we change the behavior so that fast overrides smart mode?
I'd like to achieve the following sequence:
$ pg_ctl stop
$ (some connections remain, so the server keeps waiting)
$ [Ctrl+C]
$ pg_ctl stop -m fast
$ (connections are forcibly closed and the server stops safely)

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center




Re: [HACKERS] mixed, named notation support

2009-08-06 Thread Robert Haas
On Thu, Aug 6, 2009 at 7:10 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Bernd Helmle maili...@oopsware.de writes:
 Here again a patch version with updated documentation. I will stop
 reviewing this patch now and mark this ready for committer, so we have some
 time left to incorporate additional feedback.

 I'm starting to look at this now, and my very first reaction was
 "what in the world is a leaky list?".  I'm not sure I like the
 data structure itself, but the terminology is certainly completely
 unhelpful.  Can't you come up with something better than
 "continuous"/"leaky"?

Stepping back a bit, are we sure this is a feature we even want to
support?  It was already pointed out in the thread on Parser's hook
based on funccall that SQL:201x may standardize => for this purpose.
I realize that's a problem because of the possibility of a
user-defined operator named =>, but aren't we usually reluctant to
adopt syntax that is thought likely to be incompatible with current or
future SQL standards?

http://archives.postgresql.org/pgsql-hackers/2009-07/msg01715.php
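
To make the discussion concrete, here is a sketch of the three notations
using a toy function (the function is made up, and => is shown only because
it is the spelling the standard is expected to use; the patch may settle on
a different spelling precisely because of the operator conflict):

    CREATE FUNCTION fmt(msg text, prefix text DEFAULT '* ') RETURNS text
        AS $$ SELECT $2 || $1 $$ LANGUAGE sql;

    SELECT fmt('hi');                         -- positional notation
    SELECT fmt(prefix => '- ', msg => 'hi');  -- named notation
    SELECT fmt('hi', prefix => '- ');         -- mixed notation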

...Robert



Re: [HACKERS] docs: mention autovacuum when ANALYZE is recommended

2009-08-06 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera alvhe...@commandprompt.com writes:
  Bruce asked me to look for places in the docs that mention that an
  ANALYZE is recommended, to mention the possibility that autovacuum takes
  care.  This patch does that.
 
 I think you found the right places to touch, but is "let the autovacuum
 daemon do it" sufficient?  It seems like that needs some qualifiers
 about whether autovacuum is enabled, how long you should expect to wait
 for the stats to get updated, etc.  It's probably not a good idea to
 duplicate all that in each place, but maybe a link to the main
 documentation about autovacuum is reasonable in each place.

Sorry this fell through the cracks.  How does this look?  My idea would
be to backpatch it to 8.3 and 8.4.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Index: doc/src/sgml/backup.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.129
diff -c -p -r2.129 backup.sgml
*** doc/src/sgml/backup.sgml	26 Jun 2009 22:06:11 -	2.129
--- doc/src/sgml/backup.sgml	6 Aug 2009 21:10:41 -
*** pg_dump -h replaceablehost1/ replac
*** 168,177 
 para
  After restoring a backup, it is wise to run xref
  linkend=sql-analyze endterm=sql-analyze-title on each
! database so the query optimizer has useful statistics. An easy way
! to do this is to run commandvacuumdb -a -z/; this is
! equivalent to running commandVACUUM ANALYZE/ on each database
! manually.  For more advice on how to load large amounts of data
  into productnamePostgreSQL/ efficiently, refer to xref
  linkend=populate.
 /para
--- 168,177 
 para
  After restoring a backup, it is wise to run xref
  linkend=sql-analyze endterm=sql-analyze-title on each
! database so the query optimizer has useful statistics;
! see xref linkend=vacuum-for-statistics endterm=vacuum-for-statistics-title
! and xref linkend=autovacuum endterm=autovacuum-title for more information.
! For more advice on how to load large amounts of data
  into productnamePostgreSQL/ efficiently, refer to xref
  linkend=populate.
 /para
Index: doc/src/sgml/indices.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/indices.sgml,v
retrieving revision 1.78
diff -c -p -r1.78 indices.sgml
*** doc/src/sgml/indices.sgml	17 Jun 2009 21:58:49 -	1.78
--- doc/src/sgml/indices.sgml	6 Aug 2009 21:11:09 -
*** SELECT am.amname AS index_method,
*** 1025,1031 
   real statistics, some default values are assumed, which are
   almost certain to be inaccurate.  Examining an application's
   index usage without having run commandANALYZE/command is
!  therefore a lost cause.
  /para
 /listitem
  
--- 1025,1033 
   real statistics, some default values are assumed, which are
   almost certain to be inaccurate.  Examining an application's
   index usage without having run commandANALYZE/command is
!  therefore a lost cause. 
!  See xref linkend=vacuum-for-statistics endterm=vacuum-for-statistics-title
!  and xref linkend=autovacuum endterm=autovacuum-title for more information.
  /para
 /listitem
  
Index: doc/src/sgml/maintenance.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/maintenance.sgml,v
retrieving revision 1.95
diff -c -p -r1.95 maintenance.sgml
*** doc/src/sgml/maintenance.sgml	17 Jun 2009 13:59:28 -	1.95
--- doc/src/sgml/maintenance.sgml	6 Aug 2009 20:12:00 -
***
*** 253,259 
/sect2
  
sect2 id=vacuum-for-statistics
!titleUpdating Planner Statistics/title
  
 indexterm zone=vacuum-for-statistics
  primarystatistics/primary
--- 253,259 
/sect2
  
sect2 id=vacuum-for-statistics
!title id=vacuum-for-statistics-titleUpdating Planner Statistics/title
  
 indexterm zone=vacuum-for-statistics
  primarystatistics/primary
Index: doc/src/sgml/perform.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/perform.sgml,v
retrieving revision 1.71
diff -c -p -r1.71 perform.sgml
*** doc/src/sgml/perform.sgml	17 Jun 2009 21:58:49 -	1.71
--- doc/src/sgml/perform.sgml	6 Aug 2009 21:11:36 -
*** SELECT * FROM x, y, a, b, c WHERE someth
*** 974,980 
  table.  With no statistics or obsolete statistics, the planner might
  make poor decisions during query planning, leading to poor
  performance on any tables with inaccurate or nonexistent
! statistics.
 /para
/sect2
  
--- 974,983 
  table.  With no statistics or obsolete statistics, the planner might
  make poor decisions during query planning, leading to poor
  performance on any tables with inaccurate or nonexistent
! statistics; see
! xref linkend=vacuum-for-statistics endterm=vacuum-for-statistics-title
! and xref linkend=autovacuum endterm=autovacuum-title
! for more information.
 /para
/sect2

[HACKERS] include/commands/version.h is not used

2009-08-06 Thread Itagaki Takahiro
Hi,

I found that include/commands/version.h is empty and not included by any
file.  What is the purpose of this file?
http://doxygen.postgresql.org/version_8h.html
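
For what it's worth, a quick grep from the top of a source tree (assuming a
standard checkout layout) seems to confirm that nothing includes it:

    $ grep -r "commands/version.h" src contrib
    (no matches)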

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center




Re: [HACKERS] Re: [Pg-migrator-general] Composite types break pg_migrated tables

2009-08-06 Thread David E. Wheeler

On Aug 6, 2009, at 6:00 PM, Bruce Momjian wrote:


Yes, I have regression tests I run but they are not in CVS, partly
because they are tied to other scripts I have to manage server  
settings.


Here are my scripts:

http://momjian.us/tmp/pg_migrator_test.tgz

One big problem is that pg_migrator fails as soon as it hits one of
these so there isn't much to automate.


Perhaps when I return from vacation I'll have a look at these and see  
if I can think of a way to automate them.


Best,

David



[HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Michael Paquier
Hi all,

Here is a short patch implementing a new pgbench feature that allows
shell commands to be launched from a pgbench transaction file.
The user just has to prefix a command line in the transaction file with
\shell followed by the desired shell command.

As an example of a transaction:
Begin;
[Transaction instructions]
Prepare transaction 'TXID';
\shell ls ~/pg_twophase;
Commit prepared 'TXID';

This patch was particularly useful for determining the size of the state
files flushed to disk for transactions that are prepared but not committed.
In addition, I added a new built-in transaction that launches a 2PC
transaction with a random prepare identifier of the format Txxx.
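
For anyone who wants to reproduce this kind of test without the new
built-in transaction, a custom script run with -f might look like this (a
sketch assuming the patch's \shell syntax; \setrandom and :variable
substitution are existing pgbench features, the substitution is textual and
so also applies inside the literal, and max_prepared_transactions must be
set high enough):

    \setrandom id 1 100000
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :id;
    PREPARE TRANSACTION 'tx:id';
    \shell ls ~/pg_twophase
    COMMIT PREPARED 'tx:id';

It could then be run with something like: pgbench -n -f twophase.sql mydb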

I also created a page in postgresql's wiki about this feature.
Please refer to this link:
http://wiki.postgresql.org/wiki/Pgbench:_shell_command

Regards,

-- 
Michael Paquier

NTT OSSC


Re: [HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Michael Paquier
Sorry, I forgot to attach the patch.

Regards,

Michael

On Fri, Aug 7, 2009 at 12:23 PM, Michael Paquier
michael.paqu...@gmail.com wrote:

 Hi all,

 Here is a short patch implementing a new pgbench feature that allows
 shell commands to be launched from a pgbench transaction file.
 The user just has to prefix a command line in the transaction file with
 \shell followed by the desired shell command.

 As an example of a transaction:
 Begin;
 [Transaction instructions]
 Prepare transaction 'TXID';
 \shell ls ~/pg_twophase;
 Commit prepared 'TXID';

 This patch was particularly useful for determining the size of the state
 files flushed to disk for transactions that are prepared but not committed.
 In addition, I added a new built-in transaction that launches a 2PC
 transaction with a random prepare identifier of the format Txxx.

 I also created a page in postgresql's wiki about this feature.
 Please refer to this link:
 http://wiki.postgresql.org/wiki/Pgbench:_shell_command

 Regards,

 --
 Michael Paquier

 NTT OSSC




-- 
Michael Paquier

NTT OSSC


postgresql-8.4.0-pgbenchshell.patch
Description: Binary data



Re: [HACKERS] pg_ctl stop -m fast after -m smart

2009-08-06 Thread Fujii Masao
Hi,

On Fri, Aug 7, 2009 at 10:31 AM, Itagaki Takahiro
itagaki.takah...@oss.ntt.co.jp wrote:

 pg_ctl stop -m smart waits until all connections are disconnected, while
 pg_ctl stop -m fast disconnects all connections forcibly.
 But running fast after smart still waits for disconnections.

 Can we change the behavior so that fast overrides smart mode?

+1. This behavior was supported in 8.2 and before, but was broken in 8.3.
Here is a patch; it should be backported to 8.3 and 8.4.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


fast_shutdown.patch
Description: Binary data



Re: [HACKERS] Filtering dictionaries support and unaccent dictionary

2009-08-06 Thread Robert Haas
2009/8/6 Teodor Sigaev teo...@sigaev.ru:
 Isn't that function leaking res pointer?  Also, I'm curious why you're

 fixed

 allocating 2*sizeof(TSLexeme) in unaccent_lexize ...

 That is part of the dictionary interface: lexize returns an array of
 TSLexeme, and the last element must have a NULL lexeme field.
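
For readers who haven't written a dictionary, a minimal sketch of that
convention (a made-up function, not the unaccent code; it shows why
2*sizeof(TSLexeme) is enough for a single output lexeme):

    #include "postgres.h"
    #include "fmgr.h"
    #include "tsearch/ts_public.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(my_lexize);

    Datum
    my_lexize(PG_FUNCTION_ARGS)
    {
        char       *in = (char *) PG_GETARG_POINTER(1);  /* input token */
        int32       len = PG_GETARG_INT32(2);            /* token length */
        TSLexeme   *res = palloc0(2 * sizeof(TSLexeme)); /* lexeme + terminator */

        res[0].lexeme = pnstrdup(in, len);  /* pass the token through */
        /* res[1].lexeme stays NULL: the terminator the interface expects */
        PG_RETURN_POINTER(res);
    }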


 The filter_dictionary file is not changed; it's attached only for consistency.

I am not sure whether this has been formally reviewed by anyone yet;
do we think it's Ready for Committer?

Thanks,

...Robert



Re: [HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Robert Haas
On Thu, Aug 6, 2009 at 11:26 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 Sorry, I forgot to attach the patch.

Please add your patches at
https://commitfest.postgresql.org/action/commitfest_view/open

...Robert



Re: [HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Itagaki Takahiro

Michael Paquier michael.paqu...@gmail.com wrote:

  Here is a short patch implementing a new pgbench feature that allows
  shell commands to be launched from a pgbench transaction file.
  \shell ls ~/pg_twophase;

+1 for the \shell command itself, but is the performance adequate for your
purpose?  Spawning a new process is not cheap.

-1 for the -P option, because its purpose is too narrow and 'ls' and '/tmp/'
are not portable.  We don't need to include your workload, because you can
use -f FILENAME to run your benchmark script.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Alvaro Herrera
Michael Paquier wrote:

 I also created a page in postgresql's wiki about this feature.
 Please refer to this link:
 http://wiki.postgresql.org/wiki/Pgbench:_shell_command

Please don't use colons in wiki page names.  Pgbench_shell_command
should be fine.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [PATCH] pgbench: new feature allowing to launch shell commands

2009-08-06 Thread Michael Paquier
Yes, it dramatically decreases the transaction throughput.
This feature was not implemented for performance at all, but for
analysis purposes.
I used it mainly to look at the size of the state files in pg_twophase
for transactions that are prepared but not committed.

Regards

On Fri, Aug 7, 2009 at 12:55 PM, Itagaki Takahiro 
itagaki.takah...@oss.ntt.co.jp wrote:


 Michael Paquier michael.paqu...@gmail.com wrote:

   Here is a short patch implementing a new pgbench feature that allows
   shell commands to be launched from a pgbench transaction file.
   \shell ls ~/pg_twophase;

  +1 for the \shell command itself, but is the performance adequate for
  your purpose?  Spawning a new process is not cheap.

  -1 for the -P option, because its purpose is too narrow and 'ls' and '/tmp/'
  are not portable.  We don't need to include your workload, because you can
  use -f FILENAME to run your benchmark script.

 Regards,
 ---
 ITAGAKI Takahiro
 NTT Open Source Software Center





-- 
Michael Paquier

NTT OSSC

