Re: [GENERAL] Inheritance on foreign key

2009-10-16 Thread decibel

On Oct 12, 2009, at 1:21 PM, Erik Jones wrote:

On Oct 9, 2009, at 9:57 AM, Takeichi Kanzaki Cabrera wrote:

Hi everybody, I need your help. I have a hierarchy of tables, and
another table that has a foreign key to the top table of the
hierarchy. Can I insert a value into the "other" table where the
value it references is not in the parent table? (It's in one of its
children.)


No, foreign key checks do not (yet) follow inheritance
hierarchies. Here's the specific clause in the manual
(http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html) that
covers this:


"All check constraints and not-null constraints on a parent table  
are automatically inherited by its children. Other types of  
constraints (unique, primary key, and foreign key constraints) are  
not inherited."



That said, there are ways around this. We're using inheritance to  
deal with things like customer "accounts" such as bank accounts,  
debit cards, etc. There's stuff that all of these have in common, and  
stuff that's specific, so the bank_account and debit_card tables each  
inherit from a customer_account table.  
customer_account.customer_account_type_id specifies what type of  
account a record is. Using that, we have a trigger that you can put  
on some other table that's referencing  
customer_account.customer_account_id; that trigger implements part of  
the functionality of a true foreign key. It only handles certain  
cases because that's all we need, but I believe you should be able to  
provide full foreign key support if you wanted to create all the  
right trigger functions. The key is to have the trigger function look  
at the parent table to determine what type of account / record it is,  
and then use that information to go to the appropriate child table  
and acquire a FOR UPDATE lock.
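
To give a feel for the shape of that, here's a minimal hypothetical
sketch (the table names come from the description above, but the type
ids, the referencing table, and the locking cases are all assumptions):

CREATE OR REPLACE FUNCTION check_customer_account_fk() RETURNS trigger AS $$
DECLARE
    v_type_id int;
BEGIN
    -- The parent table sees all child rows under inheritance, so this
    -- tells us which child table the referenced account lives in.
    SELECT customer_account_type_id INTO v_type_id
        FROM customer_account
        WHERE customer_account_id = NEW.customer_account_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'customer_account % does not exist',
            NEW.customer_account_id;
    END IF;

    -- Lock the row in the appropriate child table so it can't be
    -- deleted out from under us before we commit.
    IF v_type_id = 1 THEN      -- hypothetical type id for bank accounts
        PERFORM 1 FROM bank_account
            WHERE customer_account_id = NEW.customer_account_id FOR UPDATE;
    ELSIF v_type_id = 2 THEN   -- hypothetical type id for debit cards
        PERFORM 1 FROM debit_card
            WHERE customer_account_id = NEW.customer_account_id FOR UPDATE;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- referencing_table is a stand-in for whatever table holds the FK
CREATE TRIGGER referencing_table_customer_account_fk
    BEFORE INSERT OR UPDATE ON referencing_table
    FOR EACH ROW EXECUTE PROCEDURE check_customer_account_fk();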


I can probably provide a more concrete example of this if anyone's  
interested.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] contrib/plantuner - enable PostgreSQL planner hints

2009-10-16 Thread decibel

On Oct 6, 2009, at 2:57 PM, Oleg Bartunov wrote:
this is an announcement of our new contribution module for  
PostgreSQL - Plantuner - enable planner hints

(http://www.sai.msu.su/~megera/wiki/plantuner).

=# set enable_seqscan=off;
=# set plantuner.forbid_index='id_idx2';



Out of curiosity, did you look at doing hints as comments in a query?
I'm guessing you couldn't actually do that in just a contrib module,
but it's how Oracle handles hints, and it seems to be *much* more
convenient, because a hint then only applies to a specific query. I
think it's extremely unlikely you would intentionally want the same
hint to apply to a bunch of queries, and extremely likely that you
could accidentally forget to re-enable something.


That said, thanks for contributing this!
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] What happens when syslog gets blocked?

2009-08-06 Thread decibel

On Aug 6, 2009, at 2:00 PM, Bill Moran wrote:

In response to Tom Lane :


decibel  writes:

We recently had a problem with a database where the /var filesystem
got corrupted. This appears to have seriously impacted the  
ability of

STDERR from Postgres to get put out to disk, which ended up blocking
backends.



Because of this we want to switch from using STDERR to using syslog,
but I'm not sure if syslog() can end up blocking or not.


syslog (at least in the implementations I'm familiar with) has the
opposite problem: when the going gets tough, it starts losing  
messages.

I do not think you'll really be making your life better by switching.


Well ... "life better" really depends on which failure scenario you're
more comfortable with ... personally, I'd rather lose log messages  
than
have the DB system go down.  Of course, if auditing is critical to  
your

scenario, then your priorities are different ...



Bingo. I'm thinking we should make mention of this in the docs...
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Constraint between 2 tables and taking a coherent snapshot of both

2009-08-06 Thread decibel

On Jul 26, 2009, at 1:32 PM, Ivan Sergio Borgonovo wrote:

Actually a serializable transaction doesn't even seem a too bad
solution... but I just would like to understand better how to manage
this situation so that I could make it as simple as possible AND
lower as much as possible the chances that the transaction will have
to be rolled back.



This sounds like exactly what serializable transactions are for. And I
would certainly promote simplicity over worrying about things like
rollback performance.
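
For reference, a minimal sketch of that pattern (the table names here
are hypothetical, not from the original post):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Both statements now run against one coherent snapshot. If a
-- concurrent transaction updates the same rows first, the server
-- raises a serialization failure (SQLSTATE 40001) and the client
-- simply retries the whole transaction from the top.
UPDATE accounts SET balance = balance - 10 WHERE account_id = 1;
UPDATE account_audit SET last_change = now() WHERE account_id = 1;
COMMIT;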

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





[GENERAL] What happens when syslog gets blocked?

2009-08-06 Thread decibel
We recently had a problem with a database where the /var filesystem  
got corrupted. This appears to have seriously impacted the ability of  
STDERR from Postgres to get put out to disk, which ended up blocking  
backends.


Because of this we want to switch from using STDERR to using syslog,  
but I'm not sure if syslog() can end up blocking or not. I know that  
(by default) syslog uses UDP when logging to an external syslog, but  
what happens if you're using the local syslog? Is it still UDP or  
some other mechanism that could potentially block the backends?
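
For reference, the switch itself is just a couple of settings in
postgresql.conf; the facility and ident values below are only examples:

log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'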


Also, I think we should either warn users about STDERR (and
presumably the CSV logging) or change things so that something that
breaks logging doesn't block backends.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] REINDEX "is not a btree"

2009-07-10 Thread decibel

On Jul 10, 2009, at 6:47 AM, Vanessa Lopez wrote:
I discovered the table that was causing the error, deleted it and
created it again (I lost some data, but at least everything else is
working now).

Yes, for the backup we copy everything we had under /data (the
directory containing "base", "global", and so on ... we do backups
every day from the server), and then we restore the whole /data
directory at once ... but it did not solve the problem ..



Given the problems you've had, I strongly suggest you take a pg_dump
of the database, restore that dump, and use the restored copy. I bet
there are other problems lurking in your database.
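
Something along these lines (the database name is hypothetical):

pg_dump -Fc mydb > mydb.dump          # reads every row, so it will also
                                      # trip over any remaining bad tuples
createdb mydb_restored
pg_restore -d mydb_restored mydb.dump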

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] REINDEX "is not a btree"

2009-07-09 Thread decibel

On Jul 4, 2009, at 8:06 AM, Craig Ringer wrote:

On Fri, 2009-07-03 at 15:00 +0100, Vanessa Lopez wrote:

I don't know much about Postgres, I have no clue what else I can do.
Please, please, any help is very very much appreciated. I have lots of
databases and months of work in Postgres (also lots of backups for the
data in /data).


When you say "in /data", do you mean the directory that contains the
directories "pg_xlog", "base", "global", "pg_clog", etc ?

Did you back up and restore the WHOLE data directory at once? Or did
you restore only parts of it?



And how exactly did you make the backups? You can't simply take a  
filesystem copy of a running database; that won't work.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] cache reads vs. disk reads

2009-07-09 Thread decibel

On Jul 1, 2009, at 4:20 AM, Gerd König wrote:

b) pg_statio_user_tables =>

heap_blks_read is the number of disk blocks read for that table
(excluding index access). Does heap_blks_hit mean the number of
accesses to the cache for that data? And is the number of
heap_blks_read included in heap_blks_hit, or is this number only the
additional accesses after reading the data from disk into the buffer?



Take a look in the manual; there's a pretty clear explanation in there.
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Multi - table statistics

2009-07-09 Thread decibel

On Jul 1, 2009, at 4:15 AM, Scara Maccai wrote:
A query is using an Index Scan instead of a Bitmap Index Scan because
it "thinks" that the number of rows returned will be low (12). In
fact, the number of rows returned is 120.

This happens because there are some WHERE conditions that check
cross-table values (something like taba.col1 < tabb.col2). Since
PostgreSQL doesn't know (I guess) anything about cross-table
statistics, how are the "rows" values calculated?


Are you doing this as a JOIN or a subquery? That could make a  
difference. Also, non-equality predicates (ie: <) are hard to  
estimate for.



Is there any way I can force the planner to use bitmap index scan?



I would try pulling the explicit values you need into a temp table,  
analyze that, and then join. The planner will then know exactly how  
many rows it's dealing with. But keep in mind that it doesn't  
actually look at the values it will be getting, so if you have a  
skewed distribution of values in the join column in the larger table  
you might be stuck...
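
For example, a rough sketch of that approach (table, column, and
filter names are all placeholders):

CREATE TEMP TABLE t2_vals AS
    SELECT col2 FROM tabb WHERE col3 = 42;
ANALYZE t2_vals;    -- now the planner knows exactly how many rows

SELECT taba.*
FROM taba
    JOIN t2_vals t ON taba.col1 < t.col2;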

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Determining the names of columns in a dynamic query

2009-04-24 Thread Decibel!

My MTA says your email domain doesn't exist...

On Apr 24, 2009, at 1:27 PM, Johan Nel wrote:
Sorry for cross posting, but it seems nobody reads the pgsql.sql NG.
I haven't received any response there, so let's see if this creates
some reaction.


Is it possible in pgsql to have something to manipulate columns  
where the column names will only be resolved during execution of  
the select statement:


FOR rec IN (SELECT * FROM table)
LOOP
  -- This is what I would like to do
  FOR col IN (rec.column_names)
  LOOP
IF col = 'blahblah' THEN
  -- Do something
ELSEIF col = 'nextcol'
  --
ELSE
  -- Other column
END IF;
  END LOOP;
END LOOP;

For the above, it does not need to be exactly the same, but is there a
group of functions available to at least return, for a dynamic query,
the number of columns, the name of a column given its ordinal number,
etc.?


You didn't mention, but that looks like PL/pgSQL, so I'll assume that
it is.

Unfortunately, the answer is no. You'd have to use something like
PL/Perl.
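
For example, a rough PL/Perl sketch (mytable is a placeholder): rows
come back as hash references, so the keys give you the column names at
runtime:

CREATE OR REPLACE FUNCTION column_names() RETURNS SETOF text AS $$
    my $rv = spi_exec_query('SELECT * FROM mytable LIMIT 1');
    # each row is a hashref keyed by column name
    foreach my $col (sort keys %{ $rv->{rows}[0] }) {
        return_next($col);
    }
    return;
$$ LANGUAGE plperl;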

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





[GENERAL] Can't use "any" with SQL functions

2009-04-24 Thread Decibel!

Is there a reason we disallow using "any" with SQL functions?

deci...@phonebook.local=# create function textcoalesce("any") returns  
void language sql as $$SELECT coalesce( $1::text, '' )$$;

ERROR:  SQL functions cannot have arguments of type "any"
STATEMENT:  create function textcoalesce("any") returns void language  
sql as $$SELECT coalesce( $1::text, '' )$$;

ERROR:  SQL functions cannot have arguments of type "any"
deci...@phonebook.local=# SELECT version();
 
version
 
---
 PostgreSQL 8.3.7 on powerpc-apple-darwin8.11.0, compiled by GCC  
powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc.  
build 5370)

(1 row)

deci...@phonebook.local=#
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] NOVALIDATE in postgresql?

2009-02-21 Thread decibel

On Feb 19, 2009, at 1:49 PM, Adrian Klaver wrote:

From the Oracle manual:
ENABLE NOVALIDATE means the constraint is checked for new or  
modified rows, but existing data may violate the constraint.


So you are looking for an incomplete constraint?



More likely they want to add a constraint but can't afford the time  
it would take to scan the table while holding an exclusive lock. At  
least that's the situation we're facing at work.


FWIW, I've been talking to Command Prompt about developing a fix for
this, targeting inclusion in 8.5. I think Alvaro and I have come up
with a reasonable plan, but there hasn't been time to present it to
the community yet.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Is there any way to reliably influence WHERE predicate evaluation ordering?

2008-09-26 Thread Decibel!

On Sep 26, 2008, at 4:19 PM, Tom Lane wrote:

Decibel! <[EMAIL PROTECTED]> writes:

Does anyone have any ideas on a clean and reliable way to do this?


Use a trigger.


The problem is that the rules are sitting on a view, not a real  
table. And the view has columns that don't exist in both tables, so I  
can't use a trigger on the tables to deal with everything.


This is 8.1; do later versions have the ability to put triggers on  
views?

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






[GENERAL] Is there any way to reliably influence WHERE predicate evaluation ordering?

2008-09-26 Thread Decibel!
I've been working with some views that UNION ALL two tables and are
also updatable. One field in the view ('committed') simply indicates
which table a row came from. I don't want people to try and update
that field and think it'll take effect, so I have an assert function:


CREATE OR REPLACE RULE test_v__update AS ON UPDATE TO test_v DO INSTEAD
    UPDATE test_committed SET i=NEW.i
        WHERE ( s=OLD.s )
            AND assert( NOT NEW.committed IS DISTINCT FROM OLD.committed, 'Changing committed is not allowed' )
;

All fine and good, but the assert would fire on this case:

update test_v set committed = true,i=i+1 WHERE s=1;

Where s=1 is absolutely a row in the 'committed' table. I finally  
added some debugging and found the problem:


NOTICE:  OLD.committed = TRUE
NOTICE:  NOT DISTINCT =TRUE
NOTICE:  NEW.committed = TRUE
NOTICE:  NOT DISTINCT with s =TRUE
NOTICE:  OLD.committed = FALSE
NOTICE:  NOT DISTINCT =FALSE

AHA! The debug functions (and therefore the assert) were being
evaluated for each row in either table, even though they're marked as
IMMUTABLE.

This poses a problem in 2 ways: first, it means that every assert has
to include s = OLD.s AND ..., complicating code. But perhaps even
worse, it looks like the functions will force evaluation to happen
for every row in each table. That's not going to cut it on a
multi-million row table...


Changing the rule so that the functions were actually executed as
part of the SET seems to have solved the issue, but it's *really* ugly:

CREATE OR REPLACE RULE test_v__update AS ON UPDATE TO test_v DO INSTEAD
    UPDATE test_committed SET i=NEW.i
        , s = CASE WHEN assert( NOT NEW.committed IS DISTINCT FROM OLD.committed, 'Changing committed is not allowed' ) THEN s ELSE NULL END
    WHERE s=OLD.s
;

I suspect I could do something like

CREATE OR REPLACE RULE ...
    UPDATE test_committed SET i = NEW.i
        WHERE s IN ( SELECT s FROM ( SELECT s, assert(...) FROM test_committed WHERE s = OLD.s ) a )
;

instead, but I haven't found a way to do that without making matters
worse...


Does anyone have any ideas on a clean and reliable way to do this?  
What I think would be ideal is if there was some way to force  
evaluation order in the WHERE clause of the update, but I don't think  
that's possible.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828



Re: [GENERAL] Single character bitfields

2008-08-22 Thread Decibel!

On Aug 21, 2008, at 3:45 AM, Tomasz Ostrowski wrote:

- one byte "char" (with quotes), but it is a non standard, integral
type, will cause interface problems and I don't know if it will not be
deprecated some time.



It's used in the catalogs, so I'd say the odds of it getting replaced
anytime in the near future (if ever) are next to zero, especially
considering "it ain't broke, so don't fix it" and that there are
undoubtedly people using it in the wild.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Changing between ORDER BY DESC and ORDER BY ASC

2008-08-16 Thread Decibel!

On Aug 15, 2008, at 12:35 PM, William Garrison wrote:
Is there an easy way to write one single query that can alternate  
between ASC and DESC orders?  Ex:


CREATE OR REPLACE FUNCTION GetSomeStuff(_start integer, _count integer, _sortDesc boolean)
RETURNS SETOF text AS
$BODY$
    SELECT
        something
    FROM
        whatever
    WHERE
        whatever
    ORDER BY
        another_column
    OFFSET $1 LIMIT $2
    ($3 = true ? 'DESC' : 'ASC');   -- wishful syntax; this is the part that doesn't exist
$BODY$
LANGUAGE 'sql' VOLATILE;

I can think of a few ways, but I am hoping for something more elegant.
1) In my case another_column is numeric, so I could multiply by
negative one if I want it in the other order.  Not sure what this
does to the optimizer if the column is indexed or not.


In my experience, it's pretty rare for an index to be used to satisfy  
an ORDER BY.


2) I could write the statement twice, once with ASC and once with  
DESC, and then use IF/ELSE structure to pick one.

3) I could generate the statement dynamically.

I am hoping there is some super secret extension that can handle  
this.  This seems like one of those foolish things in SQL, where it  
is too declarative.  ASC and DESC should be parameters to order by,  
not a part of the syntax.  But I digress... any other suggestions?


None that I can think of, unfortunately. It might not be horribly  
hard to allow plpgsql to use a variable for ASC vs DESC; that might  
be your best bet.
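
Something like this rough sketch, reusing the names from the example
above (untested):

CREATE OR REPLACE FUNCTION GetSomeStuff(_start integer, _count integer, _sortDesc boolean)
RETURNS SETOF text AS
$BODY$
DECLARE
    r text;
BEGIN
    -- Build the ORDER BY direction dynamically and EXECUTE the result
    FOR r IN EXECUTE
        'SELECT something FROM whatever ORDER BY another_column '
        || CASE WHEN _sortDesc THEN 'DESC' ELSE 'ASC' END
        || ' OFFSET ' || _start || ' LIMIT ' || _count
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$BODY$ LANGUAGE plpgsql;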

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Confronting the maximum column limitation

2008-08-16 Thread Decibel!

On Aug 12, 2008, at 3:15 PM, Jeff Gentry wrote:

So I've seen the header file where the 1600 column limit is defined



IIRC, that limit is directly related to block size, so one possible
fix is to increase the block size. AFAIK anything up to 64K blocks
should be safe.


BTW, keep in mind that if you're storing anything that's a varlena
(anything that's variable length, including numeric) where you have
that many columns, every single varlena is going to end up toasted.
That's bound to have a *serious* performance impact.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] [HACKERS] postmaster.pid not visible

2008-07-17 Thread Decibel!

Moving to -general. -hackers is for discussion about PG development.

On Jul 16, 2008, at 1:10 AM, cinu wrote:


Hi All, I installed PostgreSQL-8.3.1 on my Suse Linux machine


You should upgrade; I'm pretty sure 8.3 is up to 8.3.3 now.

, it went on fine without any problems and I was able to create and
access the database; even I was able to start, restart and check the
status of the service. Since it is my local machine and people are
remotely connecting to the database on my local machine, I used to
keep the machine up and running. Today I came and checked and it was
telling me that the service of postgres is not running, so I went and
checked the postmaster.pid file; it was not in the data folder, but I
was able to get to the psql prompt and execute standard sql
statements, and even people were able to connect remotely and access
the database on my machine. The only difficulty that I was facing was
that I was unable to restart or stop the service. So with the help of
the ps -ef | grep postgres command I was able to trace out the pid
and then manually kill the pid with the kill -9 command; after this I
was able to restart, stop or check the status of the service.

Don't use kill -9. There's almost never a reason to do that, and
hasn't been for probably 20 years or more.


Can anyone throw light on why the postmaster.pid was not visible? The
other interesting factor that I observed was that the postgres
service was running on the 5432 port; this was visible from the /tmp
location. Also I would like to know if there is any other alternative
with which I can restart the service and retain the postmaster.pid
file.


My guess would be that something went in and removed the .pid file.
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Feature: FOR UPDATE SKIP LOCKED

2008-07-14 Thread Decibel!

On Jul 8, 2008, at 7:49 PM, Goldeneye Solutions Information wrote:
I've been reading up on FOR UPDATE NOWAIT and it looks like it was
added in 8.1.


How difficult is it to add FOR UPDATE SKIP LOCKED or something  
similar? (basically skip locked rows / oracle syntax)

More background here:
http://forge.mysql.com/worklog/task.php?id=3597

It would be quite useful to implement a database queue. Although  
FOR UPDATE NOWAIT and trying again can work as well as other  
techniques,
just skipping over the locks has its advantages (simplicity and  
zero wait)



Patches welcome. :) You could likely use the patch that added NOWAIT  
as a template and be pretty close to something. You should bounce the  
idea off of -hackers first if you want to do this.


If you're not looking to hack the backend code, I'd suggest doing a  
random OFFSET in your select. You'd need to first do a select to pick  
a row, then try to actually lock it. You could also have an old  
record stick around a long time that way, so I'd suggest forcibly  
trying OFFSET 0 on some non-trivial number of attempts (say 10%). You  
might be able to achieve the same effect by applying a function to  
your random number that pushes it towards 0.
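
A sketch of that idea against a hypothetical queue table:

BEGIN;
SELECT job_id FROM job_queue
    WHERE NOT taken
    ORDER BY job_id
    -- force the head of the queue ~10% of the time so old rows
    -- can't linger forever
    OFFSET CASE WHEN random() < 0.1 THEN 0
                ELSE (random() * 10)::int END
    LIMIT 1
    FOR UPDATE NOWAIT;  -- errors out immediately instead of waiting
-- on success, mark the job taken and COMMIT; on a lock error,
-- ROLLBACK and try again
COMMIT;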

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] function question

2008-06-21 Thread Decibel!

On Jun 21, 2008, at 8:06 AM, kevin kempter wrote:

Hi LIst;
Is there a way to print all the lines being executed for a
function, like the equivalent of psql -ef for an sql file?


No, but you could use a debugger on it if you're running a more  
recent version...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] pg_locks "at-a-glance" view

2008-06-19 Thread Decibel!

On Jun 19, 2008, at 8:07 AM, David Fetter wrote:

On Wed, Jun 18, 2008 at 05:39:59PM -0700, Jeff Davis wrote:
I was trying to create a more "at-a-glance" view of the pg_locks  
table.

I included the SQL I came up with (after talking to Merlin) at the
bottom of this message.

The idea is to show any queries that are waiting on a lock, and the
query that currently holds the lock on which those queries are  
waiting.


Is my logic correct?


I'm not exactly sure, but it appears to match, at first blush, what's
in src/backend/storage/lmgr/lock.c:
static const LOCKMASK LockConflicts[] = {



Yeah, if you look at the code, locks are defined as numbers and I
believe there's a very simple pattern to what conflicts; a higher lock
number conflicts with all those that are lower. So, it might be a lot
cleaner to have a function that defines numbers for all the lock
modes and just test to see if one lock is higher than another.


I think your logic in the view is correct. It might be helpful to  
also list how long the queries have been running.
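
Along the same lines, here's a minimal sketch of pairing up waiters
with holders straight from pg_locks (not the view from the original
post):

SELECT w.pid AS waiting_pid, h.pid AS holding_pid,
        w.locktype, w.relation::regclass
    FROM pg_locks w
    JOIN pg_locks h
        ON  h.locktype = w.locktype
        AND h.database      IS NOT DISTINCT FROM w.database
        AND h.relation      IS NOT DISTINCT FROM w.relation
        AND h.transactionid IS NOT DISTINCT FROM w.transactionid
        AND h.pid <> w.pid
    WHERE NOT w.granted
        AND h.granted;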


I really wish we had some kind of a site for listing useful queries  
like this...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Database design: Data synchronization

2008-06-19 Thread Decibel!

On Jun 18, 2008, at 7:07 AM, David wrote:

- Many foreign keys weren't enforced

- Some fields needed special treatment (eg: should be unique, or
behave like a foreign key ref, even if db schema doesn't specify it.
In other cases they need to be updated during the migration).

- Most auto-incrementing primary keys (and related foreign key
references) needed to be updated during migration, because they are
already used in the destination database for other records.

- Many tables are undocumented, some fields have an unknown purpose

- Some tables didn't have fields that can be used as a 'natural' key
for the purpose of migration (eg: tables which only exist to link
together other tables, or tables where there are duplicate records).

I wrote a Python script (using SQLAlchemy and Elixir) to do the above
for our databases.

Are there any existing migration tools which could have helped with
the above? (it would have required a *lot* of user help).

Are there recommended ways of designing tables so that synchronization
is easier?

The main thing I've read about is ensuring that all records have a
natural key of some kind, eg GUID. Also, your migration app needs to
have rules for conflict resolution.



Well, it sounds like you've got a good list of what NOT to do. The
first step is to make sure that you have a good database design,
outside of replication considerations. Most tables should have
natural unique keys; make sure you have FKs, document things (see
the COMMENT ON command), etc. If you have low data quality to start
with, spreading that around is just going to make things worse.


For the actual replication, there isn't really a multi-master
solution for Postgres. Your best bet is to try and design the system
so that you don't have conflicts (ie: if you have a bunch of branch
offices, each one is responsible for their own data). You can then
build something akin to multi-master using londiste and pgq.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Clustering with minimal locking

2008-06-17 Thread Decibel!

On Jun 17, 2008, at 11:37 AM, Scott Ribe wrote:

BOOM! Deadlock.


No more likely than with the current cluster command. Acquiring the
lock is the same risk; but it is held for much less time.



Actually, no (at least in 8.2). CLUSTER grabs an exclusive lock  
before it does any work meaning that it can't deadlock by itself. Of  
course you could always do something like


BEGIN;
SELECT * FROM a;
CLUSTER .. ON a;
COMMIT;

Which does introduce the risk of a deadlock, but that's your fault,
not Postgres's.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Clustering with minimal locking

2008-06-16 Thread Decibel!

On May 28, 2008, at 11:21 AM, Scott Ribe wrote:
If I'm not totally off-base, here's one way to enable clustering on  
systems

that run 24/7:

1 cluster current rows
1.1 note current last committed transaction
1.2 copy all visible rows to new table in cluster order
1.3 build indexes on new table
2 add changes
2.1 note current last committed transaction
2.2 apply to new table (& indexes) all changes committed since 1.1
3 put new table into service
3.1 take exclusive lock on table


BOOM! Deadlock.


3.2 apply to new table (& indexes) all changes committed since 2.1
3.3 switch in new table
3.4 release lock
3.5 clean up old table storage

I don't know enough about pg internals to know how big a project this
would be, but it seems to me that the WAL provides many of the pieces
needed to support steps 1.1 and 2.2, for instance. (Even so, I know
it's still not trivial, just perhaps not huge.)

- I guess there's still the possibility that 3.1 could stall in the
presence of long-lived transactions--but this is certainly no worse
than the current situation where it would stall before starting the
cluster operation.

- By "apply changes" I mean insert, update, delete rows--of course
schema changes would be locked out during the cluster, even if it
takes days ;-)


What you're describing is possible; it's done for CREATE INDEX
CONCURRENTLY. But it's not very easy to add. I think what makes a lot
more sense is to have a form of clustering that puts effort into
placing tuples in the correct location. If you had that, you could
effectively migrate stuff into proper cluster order in userland, or
just let it take care of itself. Presumably the table would
eventually end up clustered if rows are updated often enough.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Money data type - Deprecated?

2008-06-16 Thread Decibel!

On May 27, 2008, at 8:29 AM, Merlin Moncure wrote:
On Mon, May 26, 2008 at 4:05 PM, Andrew Sullivan  
<[EMAIL PROTECTED]> wrote:

On Mon, May 26, 2008 at 03:46:39PM -0400, Justin wrote:
Thats a good Question is the Money type ever going to die i  
believe it was

deprecated several versions ago?


I think it was undeprecated in 8.3 because someone (D'Arcy J.M.  
Cain?)

made a bunch of changes to the datatype to make it more
currency-portable.  Have a look in the release notes.


Your summary is pretty much correct.  The 'deprecated' moniker was for
all intents and purposes removed when the range was expanded.  For the
record, I don't agree with this...IMO the money type is not really
rigorous enough to be useful.  It's missing tons of basic casts and is
just plain weird.  OTOH, it's a neat example of a fixed precision
integer type.

I personally think that it really belongs in contrib as an example of
adding a new type...with all the money bits stripped off, as a
strictly fixed precision integer.


I think what it really needs is currency information so that you can  
tell if a value is in USD or something else. But it seems that every  
time that's brought up, people want to start worrying about storing  
conversion tables.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






[GENERAL] Short-circuiting FK check for a newly-added field

2008-05-20 Thread Decibel!
I need to add a field to a fairly large table. In the same alter  
statement I'd like to add a FK constraint on that new field. Is there  
any way to avoid the check of the table that the database is doing  
right now? The check is pointless because the newly added field is  
nothing but NULLs.


This is version 8.1.mumble.
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] DB page cache/query performance

2008-05-19 Thread Decibel!

On May 14, 2008, at 6:18 PM, Greg Smith wrote:
Also, how long should pages stay in the cache? (Assuming I have way
more memory than the total size of all the tables/indexes.) Is there
any time-based expiration (in addition to LRU-based, which in my case
should never be resorted to)?


Forever, at least as far as the PostgreSQL one is concerned.  They  
only get evicted when a new buffer needs to be allocated and  
there's none available.  It's not quite a LRU mechanism either.  If  
you visit http://www.westnet.com/~gsmith/content/postgresql/  
there's a presentation called "Inside the PostgreSQL Buffer Cache"  
that goes over how that part works.  You'd probably find a look  
inside your system using pg_buffercache informative.  Some of the  
sample queries I give only work on 8.3, but the "Buffer contents  
summary, with percentages" one should be OK on 8.1.



Hrm... don't seqscans use a separate set of buffers in 8.3? While  
technically those won't be evicted until needed, you're unlikely to  
find stuff hanging around there for terribly long...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Request for Materialized Views

2008-05-14 Thread Decibel!

On May 5, 2008, at 2:43 AM, [EMAIL PROTECTED] wrote:

We are using Postgres in our company successfully since 2001 but now
we arrived at the conclusion that we really need "Materialized Views"
for our further business. Also we decided that we should try to pay
someone from the community for the feature to be implemented for
Postgres instead of doing it ourselves.
While we know that this is common practice in open source projects I
would like to ask how such a kind of "announcement" should be made for
Postgres.


You should post to -hackers. If you're willing to fund development by  
yourself then it shouldn't be hard to get this done. I know for a  
fact that Command Prompt will develop features for hire, so they'd  
probably be willing to undertake this. But fully sponsoring this  
might not be cheap...


If you're not willing to fund this entirely on your own then things  
get a lot more complicated.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Best approach for large table maintenance

2008-05-14 Thread Decibel!

On Apr 22, 2008, at 12:04 PM, Vanole, Mike wrote:

It seems that running vacuum still has value in the above approach
because I still see index row versions were removed.



That means either part of the import failed (transaction aborted), or  
you're updating or deleting rows between the load and the vacuum.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] rounding problems

2008-05-14 Thread Decibel!

On May 13, 2008, at 1:36 PM, Justin wrote:

Is is limit less no, but what is?



numeric is limitless, unless you specifically bound it. Or you run  
out of space...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] rounding problems

2008-05-14 Thread Decibel!

On May 12, 2008, at 10:42 PM, Craig Ringer wrote:

Personally I'd be tempted to use a `double precision' (float8) for
things like materials consumption.



Or you could just use an un-bounded numeric...
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] ranked subqueries vs distinct question

2008-05-14 Thread Decibel!

On May 14, 2008, at 9:07 AM, Karsten Hilbert wrote:


That doesn't work, unfortunately, because the urb (cities)
table doesn't have the zip code. That's stored in a street
table which foreign keys into the urb table.



SELECT name, zip, zip='04317' AS zipmatch
FROM urb LEFT JOIN streets ON (streets.urb_id = urb.urb_id )
ORDER BY zipmatch DESC, name
;

Of course you'll need to adjust the table and field names appropriately.

Someone should probably teach the gnumed folks about schemas, too... ;)
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] In the belly of the beast (MySQLCon)

2008-04-19 Thread Decibel!

On Apr 18, 2008, at 2:18 PM, Alvaro Herrera wrote:

Actually, electrons themselves flow rather slowly -- millimeters per
second according to Wikipedia.  The signal propagation is a bit  
faster:

"typically 75% of light speed", Wikipedia again.



Yeah, electrons move *very* slowly in a solid. Presumably somewhat  
faster in liquid. Definitely faster in a gas, but you have to have a  
pretty good vacuum for them to actually approach the speed of light.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Slony-I for upgrades - was Re: In the belly of the beast (MySQLCon)

2008-04-19 Thread Decibel!

On Apr 18, 2008, at 2:42 PM, Chris Browne wrote:

However, it is unusual for a database to consist of just one table of
that sort.  If you have a case like this, it will make plenty of sense
to split this set of tables into pieces, and add them in at least
somewhat incrementally.



Does anyone happen to have any scripts/code that will just trawl  
through a database, adding tables to a set one at a time?

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] No server after starting

2008-04-19 Thread Decibel!

On Apr 18, 2008, at 2:00 PM, Bayless Kirtley wrote:
First, I am new to Postgres. I am hoping to migrate an existing
Java application from a couple of tried but unreliable open source
Java databases. I have a fair amount of experience with Oracle,
Informix and DB2, but it has been a few years. Now on to my problem.

I finally got PostgreSQL installed on Windows XP as a service.
After restarting the computer, the Task Manager shows 5
postgres.exe processes running. From a CMD window, I issue a pg_ctl
status command and get the response: "No server running." What am I
doing wrong? I took all the defaults on installation except for
assigning a password rather than a random generated one.


Earlier today, I had it installed as a program and all worked fine  
after creating and initializing the database cluster. I was able to  
start the database from a non-admin user. Then I could check status  
and even shutdown as an admin user. I have added the ...\bin  
directory to the PATH and added the PGDATA environment variable.  
After this, I decided to remove Postgres and reinstall it as a  
service.


I suspect that PGDATA might be set incorrectly; you should verify  
that. You can see what directory it's actually using by connecting  
and doing a SHOW data_directory;

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Feature request/suggestion - CREATE SCHEMA LIKE

2008-03-25 Thread Decibel!

On Mar 18, 2008, at 7:17 AM, Joris Dobbelsteen wrote:

   Features like CREATE DATABASE WITH TEMPLATE or CREATE TABLE LIKE
are very useful, but it would be great to have such a feature at the
mid-level too. I mean something like CREATE SCHEMA LIKE that would
copy all the template schema's relations, etc...
 What do you think about it? Would it be hard to implement? Is it
worth the effort?

I believe the CREATE DATABASE one exists because of the way postgresql
creates a database. I thought it just copied the template database
(but I am not completely sure).



You are correct. This is why the database used as a template can't  
have anyone connected to it; if somebody was, we can't get a  
consistent filesystem-level copy of the database.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Create user trigger?

2008-03-25 Thread Decibel!

Dropping the slony list.

On Mar 18, 2008, at 5:32 AM, Glyn Astill wrote:
We're setting up 3 servers replicating using slony. I was wondering  
if it'd be possible for me to create a set of triggers that fire  
whenever a user is created/dropped/modified on one of the servers  
that goes and performs the same action the other two servers.


There is no support for DDL triggers, even though they're frequently
requested. Unfortunately, almost every time it's brought up it
degrades into a debate about why we can't add triggers to catalog
tables, forgetting that there are other ways to do this. Searching the
pgsql-hackers archives should be enlightening.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






[GENERAL] Space wasted by denormalized data

2008-02-28 Thread Decibel!
I wrote this query to identify how much space is being wasted in a
database by denormalized data. For each field in each table, it
calculates how much space the field is taking in its table as well as
indexes, then it calculates how much space would be needed if that field
were normalized into a separate table. It places some (somewhat
arbitrary) minimums on how much space would have to be saved to include
that field in the output. If you want to get rid of the limit you should
still keep savings > 0 in the query, otherwise you'll start seeing
normalization suggestions that make no sense (like normalizing an int).

As it stands, the query reports a total for the entire database, but you
can use just sub-sections of the query to see savings per-table or
per-field.

-- Total
SELECT pg_size_pretty(sum(sum)::bigint)
-- Summarize by table
FROM (SELECT schemaname, tablename, sum(savings), pg_size_pretty(sum(savings)::bigint)
    -- Get pretty size. Start here if you want per-table info
    FROM (SELECT *, pg_size_pretty(savings::bigint)
        FROM (
            -- Here's where the real work starts
            SELECT *, table_space_delta + coalesce(index_space_delta, 0) - side_table_space AS savings
            FROM (
                -- Figure out how much space we'd save in indexes by converting to an int
                SELECT *, index_tuple_count*(avg_width-4) AS index_space_delta
                FROM (SELECT s.*, index_count, index_tuple_count
                    FROM (SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct
                            -- How much space would we gain by changing this field to an int?
                            , reltuples*(1-null_frac)*(avg_width-4) AS table_space_delta
                            -- Estimate how big our "side table" will be
                            , CASE WHEN n_distinct >= 0 THEN n_distinct ELSE -n_distinct * reltuples END
                                * (24+4+avg_width+6+4+6+avg_width) AS side_table_space
                        FROM pg_stats s
                            JOIN pg_class c ON c.relname=tablename AND c.relkind='r'
                            JOIN pg_namespace n ON n.oid = c.relnamespace AND n.nspname=s.schemaname
                        WHERE schemaname NOT IN ('pg_catalog','information_schema')
                    ) s
                    NATURAL LEFT JOIN (
                        SELECT n.nspname AS schemaname, c.relname AS tablename, attname
                            , count(*) AS index_count, sum(i.reltuples) AS index_tuple_count
                        FROM pg_index x
                            JOIN pg_class c ON c.oid = x.indrelid
                            JOIN pg_class i ON i.oid = x.indexrelid
                            JOIN pg_namespace n ON n.oid = c.relnamespace
                            JOIN pg_attribute a ON a.attrelid = i.oid
                        GROUP BY n.nspname, c.relname, attname
                    ) i
                ) a
            ) a
        ) a
        -- Minimum savings to consider per-field
        WHERE savings > 1e6) a
    GROUP BY schemaname, tablename
    -- Minimum savings to consider per-table
    HAVING sum(savings) > 1e7
    ORDER BY sum(savings) DESC
) a
;
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828




Re: [GENERAL] Approaches for Lookup values (codes) in OLTP application

2008-02-23 Thread Decibel!

On Feb 15, 2008, at 1:43 PM, Scott Marlowe wrote:

Generally speaking, I tend towards using the real value as the key and
foreign key in lookup tables, but occasionally using an artificial
numeric key is a better choice.



Something to consider here... any table that will have either a lot  
of rows or a lot of "type" fields will likely be better off with a  
phantom key (such as a serial) rather than storing text values in the  
base table. As an example, we have a 500G database at work that  
currently doesn't use any phantom keys for this kind of thing. I  
recently estimated that if I normalized every field where doing so  
would save more than 1MB it would reduce the size of the database by  
142GB. Granted, about half of that is in a somewhat unusual table  
that logs emails (a lot of the emails have the same text, so the gain  
there is from normalizing that), but even discounting that 75G is  
nothing to sneeze at in an OLTP database.
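
A hypothetical illustration of the pattern (not our actual schema):
instead of repeating the full text in the log table, store a phantom
key pointing at a lookup table:

CREATE TABLE email_body (
    email_body_id   serial PRIMARY KEY,
    body            text NOT NULL UNIQUE
);

CREATE TABLE email_log (
    email_log_id    serial PRIMARY KEY,
    sent_at         timestamptz NOT NULL DEFAULT now(),
    -- a few bytes per row instead of a copy of the body
    email_body_id   int NOT NULL REFERENCES email_body
);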

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] unnesesary sorting after Merge Full Join

2008-02-23 Thread Decibel!

On Feb 21, 2008, at 4:08 AM, Alexey Nalbat wrote:
I'd like to use ORDER BY in any specified order, and LIMIT/OFFSET
for paging query results. The query is a FULL OUTER JOIN of two tables
by field id. I expect the results of a Merge Full Join to be ordered
by some "combined id", so there should be no need for an extra Sort if
I specify ORDER BY on that "combined id". But unfortunately it is not
so.

Here is a simple example:
-- BEGIN
create table t1 as select generate_series(1,1000000,2) as id;
create table t2 as select generate_series(1,1000000,3) as id;

create index i1 on t1 ( id );
create index i2 on t2 ( id );

analyze t1;
analyze t2;

explain analyze
select id, t1.*, t2.*
from t1 natural full join t2
order by 1 limit 10 offset 10;

drop table t1;
drop table t2;
-- END

Postgresql chooses such a plan:
 Limit  (cost=44080.12..44080.15 rows=10 width=8) (actual time=6724.850..6724.906 rows=10 loops=1)
   ->  Sort  (cost=44080.10..45330.10 rows=500000 width=8) (actual time=6724.806..6724.845 rows=20 loops=1)
         Sort Key: (COALESCE(t1.id, t2.id))
         Sort Method:  top-N heapsort  Memory: 25kB
         ->  Merge Full Join  (cost=0.00..30775.28 rows=500000 width=8) (actual time=0.142..5237.289 rows=666667 loops=1)
               Merge Cond: (t1.id = t2.id)
               ->  Index Scan using i1 on t1  (cost=0.00..15212.30 rows=500000 width=4) (actual time=0.079..1188.601 rows=500000 loops=1)
               ->  Index Scan using i2 on t2  (cost=0.00..10146.30 rows=333334 width=4) (actual time=0.051..793.635 rows=333334 loops=1)

The desired plan is much faster:
 Limit  (cost=0.62..1.23 rows=10 width=8) (actual time=0.262..0.366 rows=10 loops=1)
   ->  Merge Full Join  (cost=0.00..30775.28 rows=500000 width=8) (actual time=0.156..0.303 rows=20 loops=1)
         Merge Cond: (t1.id = t2.id)
         ->  Index Scan using i1 on t1  (cost=0.00..15212.30 rows=500000 width=4) (actual time=0.088..0.120 rows=15 loops=1)
         ->  Index Scan using i2 on t2  (cost=0.00..10146.30 rows=333334 width=4) (actual time=0.056..0.078 rows=11 loops=1)


I found comment in src/backend/optimizer/path/pathkeys.c:
* EXCEPTION: in a FULL or RIGHT join, we cannot treat the result as
* having the outer path's path keys, because null lefthand rows may be
* inserted at random points. It must be treated as unsorted.

How can I get rid of this sorting? Or could this behavior of Merge  
Full Join be improved?


Theoretically, this can be improved, but I suspect it would be
non-trivial. I suspect that the problem is the planner doesn't realize
that the join key could never be null, which is often not the case.
Consider this example:


decibel=# create table t1 as select generate_series(1,100,2) as id1;
SELECT
decibel=# create table t2 as select generate_series(1,100,3) as id2;
SELECT

Create index, etc.

explain analyze
select id1, id2
from t1 full join t2 on (t1.id1=t2.id2)
order by 1 limit 10 offset 10;

Note that in this case you have to re-sort, because NULLs sort  
differently.


As a workaround, I suggest creating a table that contains all the IDs  
from t1 and t2. You could maintain this table via a trigger if you  
wanted. You could then quickly determine the exact IDs you wanted,  
and then join against the two real tables.
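
A sketch of that workaround, reusing t1 and t2 from above (here the id
table is built once; the trigger maintenance is left out):

CREATE TABLE all_ids AS
    SELECT id1 AS id FROM t1
    UNION
    SELECT id2 FROM t2;
CREATE UNIQUE INDEX all_ids_id ON all_ids (id);
ANALYZE all_ids;

-- page through the small id table first, then join
SELECT a.id, t1.id1, t2.id2
FROM (SELECT id FROM all_ids ORDER BY id LIMIT 10 OFFSET 10) a
    LEFT JOIN t1 ON t1.id1 = a.id
    LEFT JOIN t2 ON t2.id2 = a.id
ORDER BY a.id;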

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Some Autovacuum Questions

2008-02-13 Thread Decibel!

On Feb 12, 2008, at 9:13 AM, Thomas Chille wrote:

vacuum_cost_delay = 200 
vacuum_cost_page_hit = 6
vacuum_cost_limit = 100



Vacuum is going to take forever with those settings. I strongly  
suggest you set them back to default. If you need to throttle vacuum,  
try setting cost_delay to between 10 and 20.
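
In other words, something much closer to the defaults, for example:

vacuum_cost_delay = 10      # ms; 10-20 if you need throttling at all
vacuum_cost_page_hit = 1    # the default
vacuum_cost_limit = 200     # the default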

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] end of life for pg versions...

2008-02-11 Thread Decibel!

On Feb 11, 2008, at 9:33 AM, Ivan Sergio Borgonovo wrote:

On Mon, 11 Feb 2008 15:36:21 +0100
Magnus Hagander <[EMAIL PROTECTED]> wrote:

http://www.postgresql.org/support/security

that's probably the best one you can find. Or that in combination
with the news archive (http://www.postgresql.org/about/newsarchive)


Really... without making it too formal, as a developer I'd appreciate
a rough schedule: a page where you would say something like:

- we expect our next minor release will come out in X months
- we expect our major release will come out in Y months
- EOL of release A for platform B is planned around date Z

Even with a disclaimer with a very bland commitment to the release
schedule, it could help developers to build up their own schedule and
support list too, and give some hook for advocacy as well.


The problem with that is that as a volunteer-run project, dates can
be off by a mile. Less than a year ago the plan was to release 8.3 in
August-September 2007. Instead it was released a week or two ago.

IIRC, the decision to end support for a version is determined in
large part by how hard it would be to back-patch something. If a bug
was found that dated back to 7.4 but was very difficult to fix in 7.4,
I bet you'd see 7.4 get EOL'd unless someone wanted to pay to
back-patch it.


I think the closest thing to a policy you'll find is a discussion
from a year or two ago where the consensus was that we should
endeavor to support a version for at least 2 years after its
replacement comes out (ie: 8.2 should be supported for at least 2
years after we released 8.3).

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] pg_stat_activity xact_start and autovacuum

2008-02-11 Thread Decibel!

On Feb 11, 2008, at 8:14 AM, Alvaro Herrera wrote:

Actually it's not just autovacuum; it's any lazy vacuum.  It's hard to
tell those processes apart in pg_stat_activity.  Perhaps we could have
added a column in pg_stat_activity indicating processes that don't
hold old tuples, but I feel that would have been a little too much.



I don't think it'd be too hard to construct a regex that would catch  
all vacuums, after which you could throw out FULLs. I'm thinking  
something like


\s*vacuum((\s+full){0,1}\s+\S+){0,1};{0,1}

Where \s indicates whitespace and \S indicates not whitespace (sorry,  
don't have a regex manual handy...)


You could probably even simplify that to

\s*vacuum(\s+full){0}

Of course, you'd want to perform all of those in a case-insensitive  
manner.
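
For example, something like this against 8.x's pg_stat_activity
(where the columns are still procpid and current_query) would show
plain vacuums while skipping VACUUM FULL:

SELECT procpid, current_query
    FROM pg_stat_activity
    WHERE current_query ~* E'^\\s*vacuum'
        AND current_query !~* E'^\\s*vacuum\\s+full';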

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Conditional ordering operators

2008-02-11 Thread Decibel!

You should start a project for this on pgFoundry. It looks very useful!

On Feb 6, 2008, at 1:15 PM, Sergey Konoplev wrote:


Hello everybody.

I've written a script (see attachment) which creates operators

@< - ascending ordering
@> - descending ordering

that allows you to replace code like this

if  then
   for
   select 
   from 
   where 
   order by
   field1 desc,
   field2
   loop
   
   end loop;
elsif  then
   for
   select 
   from 
   where 
   order by
   field3,
   field1 desc,
   field2 desc
   loop
   
   end loop;
else
   for
   select 
   from 
   where 
   order by
   field4
   loop
   
   end loop;
end if;

that way

for
   select 
   from 
   where 
   order by
   case when  then
   @>field1
   @ then
   @field1
   @>field2
   else
   @
end loop;

It looks better, doesn't it?

Also it provides Oracle like OVER PARTITION effect

select * from (
   values
   (1.2, '2007-11-23 12:00'::timestamp, true),
   (1.4, '2007-11-23 12:00'::timestamp, true),
   (1.2, '2007-11-23 12:00'::timestamp, false),
   (1.4, '2007-01-23 12:00'::timestamp, false),
   (3.5, '2007-08-31 13:35'::timestamp, false)
) _
order by
   @column3
   else
   @>column2
   @--  
Regards,

Sergey Konoplev


--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] [pgsql-advocacy] PostgreSQL professionals group at LinkedIn.com

2008-02-01 Thread Decibel!
On Mon, Jan 28, 2008 at 09:07:34PM -0300, Alvaro Herrera wrote:
> Decibel! wrote:
> > On Tue, Jan 22, 2008 at 08:20:30PM +0100, Gevik Babakhani wrote:
> > > Dear all,
> > > 
> > > I have created a group for PostgreSQL professionals at LinkedIn.com
> > > Feel free to join if you like.
> > > 
> > > http://www.linkedin.com/e/gis/51776/760A11717C03
> > 
> > How is that different than the existing Postgres group?
> 
> Is there an existing Postgres group?

Yes.
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828




Re: [GENERAL] [pgsql-advocacy] PostgreSQL professionals group at LinkedIn.com

2008-01-28 Thread Decibel!
On Tue, Jan 22, 2008 at 08:20:30PM +0100, Gevik Babakhani wrote:
> Dear all,
> 
> I have created a group for PostgreSQL professionals at LinkedIn.com
> Feel free to join if you like.
> 
> http://www.linkedin.com/e/gis/51776/760A11717C03

How is that different than the existing Postgres group?
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828




Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 6, 2007, at 5:10 PM, Gregory Stark wrote:
It needs to store the number of bits present as well. Otherwise it
wouldn't be able to tell apart B'1' and B'01' ... B'0001'

...

Only in the sense that numeric also has to store some meta data as
well, like the weight and display precision.



Hrm... perhaps that's another worthwhile target for the varvarlena  
technique...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 6, 2007, at 5:19 PM, Vyacheslav Kalinin wrote:

> It needs to store the number of bits present as well

Couldn't that be reduced to 1 byte that'd say how many bits count  
in the last byte?


> Only in the sense that numeric also has to store some meta data
> as well, like the weight and display precision.

Is it really necessary to store display precision when it can be  
taken from the table column definition?



Two problems...

1) CREATE TABLE n(n numeric);

2) The knowledge of extra type information (ie: the numbers in char()  
or numeric()) doesn't extend deeply enough into the code. This is part  
of why char() uses the exact same storage mechanism as varchar().

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 5, 2007, at 7:23 PM, Michael Glaesemann wrote:

On Dec 5, 2007, at 14:19 , Alex Mayrhofer wrote:
i'm trying to find out the storage size for bit(n) data. My  
initial assumption would be that for any 8 bits, one byte of  
storage is required.


select pg_column_size(B'1') as "1bit",
   pg_column_size(B'1111') as "4bits",
   pg_column_size(B'11111111') as "1byte",
   pg_column_size(B'111111111111') as "12bits",
   pg_column_size(B'1111111111111111') as "2bytes",
   pg_column_size(B'11111111111111111') as "17bits",
   pg_column_size(B'111111111111111111111111') as "3bytes";
 1bit | 4bits | 1byte | 12bits | 2bytes | 17bits | 3bytes
------+-------+-------+--------+--------+--------+--------
    9 |     9 |     9 |     10 |     10 |     11 |     11
(1 row)

Looks like there's 8 bytes of overhead as well, probably because a  
bit string is a varlena type.


Wow, that's screwed up... that's a lot more than varlena overhead:

select pg_column_size('a'::text), pg_column_size(1::numeric),  
pg_column_size(3111234::numeric);

 pg_column_size | pg_column_size | pg_column_size
----------------+----------------+----------------
              5 |             10 |             12
(1 row)

Apparently it's something related to numeric.
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Rules slower than Dynamic SQL ?

2007-12-06 Thread Decibel!

On Nov 26, 2007, at 6:09 PM, Simon Riggs wrote:

On Mon, 2007-11-26 at 16:01 -0800, Alex Vinogradovs wrote:

I've got a data warehouse with pretty high rate of insert into
partitioned tables. What I've noticed is that rule-based partitioning
seems to be somewhat slower than insertions made directly into
partitions through execution of dynamic SQL.

Is it really true ?


Depends how complex your rules are, but I think yes is the short  
answer.


My understanding is that each rule has to re-evaluate the query  
that's being run. That's going to add some overhead to a simple  
INSERT INTO table VALUES ... but if you think that's bad, try it with  
an expensive INSERT INTO table SELECT statement!
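For comparison, a hedged sketch of the usual trigger-based router (the
measurement tables here are hypothetical; this mirrors the partitioning
example in the docs):

CREATE OR REPLACE FUNCTION measurement_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- route the row to its partition; a rule would instead re-evaluate
    -- the whole source query once per rule
    IF NEW.logdate >= DATE '2007-12-01' THEN
        INSERT INTO measurement_2007_12 VALUES (NEW.*);
    ELSE
        INSERT INTO measurement_2007_11 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_insert
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();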

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-12-06 Thread Decibel!

On Nov 20, 2007, at 6:14 PM, Ow Mun Heng wrote:

On Mon, 2007-11-19 at 08:24 -0500, Bill Moran wrote:

In response to Ow Mun Heng <[EMAIL PROTECTED]>:


Even with the regular vacuuming and even a vacuum full (on my test DB)
I still see that perhaps something is wrong (from the below)

(I got this gem from the mailling list archives)
hmxmms=> SELECT
c.relname,
c.reltuples::bigint as rowcnt,
pg_stat_get_tuples_inserted(c.oid) AS inserted,
pg_stat_get_tuples_updated(c.oid) AS updated,
pg_stat_get_tuples_deleted(c.oid) AS deleted
FROM pg_class c
WHERE c.relkind = 'r'::"char"
GROUP BY c.oid, c.relname, c.reltuples
HAVING pg_stat_get_tuples_updated(c.oid) +
pg_stat_get_tuples_deleted(c.oid) > 1000
ORDER BY pg_stat_get_tuples_updated(c.oid) +
pg_stat_get_tuples_deleted(c.oid) DESC;
    relname    |  rowcnt  | inserted | updated | deleted
---------------+----------+----------+---------+----------
 tst_r         | 11971691 |        0 |       0 | 22390528  <--
 pg_statistic  |     1465 |      280 |    7716 |      153
 dr_ns         |  2305571 |     1959 |       0 |     1922
 pg_attribute  |     3787 |     1403 |     184 |     1292

No matter how many times I vacuum/full, the deleted number still
doesn't go down.


Are you sure you're interpreting that number correctly?  I took it to
mean a counter of the number of delete operations since server start.


Actually, it's not on server start; it's on stats reset. Which can  
happen at server start depending on your config.


You are right. This is definitely a snafu in my interpretation. After I
restarted PG on the laptop, the numbers went away. So then I'm confused
as to why the above "gem" was provided as a means to see which tables
need more vacuuming.



By itself it doesn't help; you need to track how many rows have been  
updated or deleted since the last time you vacuumed. That, along with  
the rowcount, will give you an idea of how much of the table is dead  
space.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] PITR and warm standby setup questions

2007-11-13 Thread Decibel!

On Nov 12, 2007, at 11:07 PM, Greg Smith wrote:

On Mon, 12 Nov 2007, Mason Hale wrote:

After the wal segment file is copied by the restore_command  
script, is it safe to delete it from my archive?


While I believe you can toss them immediately, you should  
considering keeping those around for a bit regardless as an  
additional layer of disaster recovery resources.  I try to avoid  
deleting them until a new base backup is made, because if you have  
the last backup and all the archived segments it gives you another  
potential way to rebuild the database in case of a large disaster  
damages both the primary and the secondary.  You can never have too  
many ways to try and recover from such a situation.


Plus, the new resumable recovery probably won't be happy if you're  
too aggressive about nuking WAL logs from the archive.

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Using generate_series to create a unique ID in a query?

2007-11-13 Thread Decibel!

On Nov 12, 2007, at 5:11 PM, Sarah Dougherty wrote:
For some context, I am trying to create a report that provides a  
list of client charges and payments and a "running balance" after  
each transaction. Because  we often have multiple charges and/or  
payments on the same day, we can't use the transaction date to  
calculate this balance.  Instead, I want to calculate our running  
balance by assigning a transaction ID to each transaction a d then  
having the query sum up transaction amounts for all transactions  
with an equal or lower ID.


I think you'd be much better off writing a function that does this  
for you... it'd have to accept an amount and then keep a running  
total. Wouldn't be a bad idea to have a boolean you can pass in that  
will reset the total, too... that would allow using it in a grouping  
scenario. Erm, I guess you'd actually have to accept something like a  
hash; pass in a hash of all the grouping fields and whenever that  
changes you reset the total.
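(For what it's worth, window functions, which arrived in 8.4 after this
thread, handle the running balance directly; a sketch, assuming a
transactions table with transaction_id and amount columns:)

SELECT transaction_id, amount,
       sum(amount) OVER (ORDER BY transaction_id) AS running_balance
  FROM transactions
 ORDER BY transaction_id;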

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] FreeBSD portupgrade of 8.1 -> 8.2

2007-11-12 Thread Decibel!
You can force the pkg_delete with -f (or maybe -F). You'll want to  
delete all the postgresql (ie: postgresql-*) packages as well, and  
then re-install them after you install 8.2.


On Nov 12, 2007, at 5:39 PM, Steve Manes wrote:

I've got 8.1 running fine.  I want to upgrade to 8.2.  Problem is,  
FreeBSD's portupgrade utility only wants to upgrade my existing 8.1  
installation.


So I grabbed the latest ports collection, which includes  
postgresql82-client and postgresql82-server.  Running 'make  
install' on postgresql82-client gives me:


===>  Installing for postgresql-client-8.2.5_1

===>  postgresql-client-8.2.5_1 conflicts with installed package(s):
  postgresql-client-8.1.10

  They install files into the same place.
  Please remove them first with pkg_delete(1).
*** Error code 1

So I ran: pkg_delete postgresql-client-8.1.10

... and got this error:

pkg_delete: package 'postgresql-client-8.1.10' is required by these  
other packages

and may not be deinstalled:
dovecot-1.0.0
kde-3.5.6_1
koffice-1.6.2_3,2
php5-extensions-1.1
php5-pgsql-5.2.3
postgresql-libpqxx-2.6.9
postgresql-plperl-8.1.9
postgresql-server-8.1.10_2

I seem to have hit a brick wall.  I tried installing postgresql82-server
first but it wouldn't do that without the 8.2 client library
installed.


What's the portupgrade process in FreeBSD??





--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Updated .vim file

2007-11-12 Thread Decibel!

On Oct 31, 2007, at 9:33 AM, Filip Rembiałkowski wrote:

2007/10/30, Decibel! <[EMAIL PROTECTED]>:

Does anyone have a .vim file that takes dollar quoting into account?
I've tried the one mentioned at
http://archives.postgresql.org/pgsql-general/2006-04/msg01266.php , but
it doesn't appear to understand dollar quotes.


dollar quoting is mostly used for function bodies.
would you like to have them all in StringConstantColor? :)
I like it more as it is now in Vim...


Ugh. Yeah, good point. What I actually want is dollar quoting except  
for functions. Just catching '$$' would suffice for most of it...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






[GENERAL] Updated .vim file

2007-10-30 Thread Decibel!
Does anyone have a .vim file that takes dollar quoting into account?
I've tried the one mentioned at
http://archives.postgresql.org/pgsql-general/2006-04/msg01266.php , but
it doesn't appear to understand dollar quotes.
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828




Re: [GENERAL] how to know the current size of a database

2007-09-19 Thread Decibel!

On Sep 19, 2007, at 5:36 AM, [EMAIL PROTECTED] wrote:
I want to know about the size of my database. For example, I want to know
how many MB of data the current myDatabase database takes in a postgres
server.


If you don't need an exact size, this query will be a lot faster than  
the size functions:


SELECT pg_size_pretty(sum(relpages)*8192) FROM pg_class;
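If you do need the exact figure, the built-in function (available since
8.1) works as well; it is just slower because it stats every relation file:

SELECT pg_size_pretty(pg_database_size(current_database()));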
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Decibel!

On Sep 19, 2007, at 6:30 AM, Bjørn T Johansen wrote:

It's a Dell server with the following spec:

PE2950 Quad-Core Xeon E5335 2.0GHz, dual
4GB 667MHz memory
3 x 73GB SAS 15000 rpm disk
PERC 5/I Integrated controller card (8 ports, 256MB cache, battery  
backup) x 6 backplane


RAID5 is not a recipe for performance on a database, if that's what  
you were thinking.


Of course, without having any idea of database size or transaction  
rate, it's impossible to tell you if that's a good server for your  
needs or not. Maybe all you need is a 486. :)

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Performance issue with nested loop

2007-09-01 Thread Decibel!
10) AND
(overview = 1) AND (zweitstudium <> 2) AND (status > 0))

  ->  Index Scan using stud_pkey on stud
(cost=0.00..4.67 rows=1 width=4) (actual time=0.024..0.026 rows=1  
loops=420)


Index Cond: (stud.sid = "outer".sid)

Filter: ((status > 0) AND
(length((vname)::text) > 1) AND (length((nname)::text) > 1))

->  Unique  (cost=5383.29..5384.98 rows=337 width=4) (actual
time=1.520..2.686 rows=511 loops=208)

  ->  Sort  (cost=5383.29..5384.13 rows=337 width=4)  
(actual

time=1.519..1.871 rows=511 loops=208)

Sort Key: public.ausb.sid

->  Hash Join  (cost=17.61..5369.14 rows=337  
width=4)

(actual time=1.133..314.584 rows=511 loops=1)

  Hash Cond: ("outer".uniid = "inner".uniid)

  ->  Seq Scan on ausb  (cost=0.00..4827.30
rows=104174 width=8) (actual time=0.030..226.532 rows=103593 loops=1)

Filter: ((overview = 1) AND  
(zweitstudium <>

2) AND (status > 0))

  ->  Hash  (cost=17.60..17.60 rows=2 width=4)
(actual time=0.392..0.392 rows=2 loops=1)

->  Seq Scan on uni  (cost=0.00..17.60
rows=2 width=4) (actual time=0.369..0.381 rows=2 loops=1)

  Filter: ((uniort)::text =
'Augsburg'::text)



The estimated row numbers are not bad as long as one table is affected.
They're much worse as soon as two or more tables are joined. Though the
query plans are slightly different, the number of merged rows at different
stages seems to be rather the same for both plans. The big difference in my
eyes is the cost of the first nested loop. This seems to be the point
where the long-running query consumes most time. I've then set
enable_nestloop to off, and actually the problem disappears.


Other maybe relevant parameters:
default_statistics_target = 100
work_mem = 4096
max_fsm_pages = 10

My questions:

What could be the problem behind the high amount of actual time used for
the nested loop in the first query?

If we decided to permanently turn off nested loops, what side effects
would we have to expect?

Are there more granular ways to tell the query planner when to use
nested loops?

Or just other ideas what to do? We'd be grateful for any hint!


Here's what's killing you:

  ->  Nested Loop  (cost=65462.58..78785.78 rows=1 width=4) (actual time=6743.856..698776.957 rows=250 loops=1)

Join Filter: ("outer".sid = "inner".sid)

->  Merge Join  (cost=11031.79..11883.12 rows=1 width=12) (actual time=387.837..433.612 rows=494 loops=1)

That merge thinks it's only going to see 1 row, but it ends up with  
494, which results in:


->  Unique  (cost=54430.79..4.18 rows=10599 width=4) (actual time=6.851..1374.135 rows=40230 loops=494)

The mis-estimation is actually coming from lower in the query... I  
see there's one place where it expects 180 rows and gets 511, which  
is part of the problem. Try increasing the stats on ausb.sid.
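That would look something like this (1000 is just an illustrative target;
remember to re-analyze afterwards):

ALTER TABLE ausb ALTER COLUMN sid SET STATISTICS 1000;
ANALYZE ausb;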


Oh, and please don't line-wrap explain output.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Reliable and fast money transaction design

2007-08-29 Thread Decibel!
On Wed, Aug 29, 2007 at 08:37:26AM -0500, Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 08/29/07 07:27, cluster wrote:
> > OK, thanks. But what about the second question in which the UPDATE is
> > based on a SELECT max(...) statement on another table? How can I ensure
> > that no other process inserts a row between my SELECT max() and UPDATE -
> > making my SELECT max() invalid?
> > 
> > A table lock could be an option but I am only interested in blocking for
> > row insertions for this particular account_id. Insertions for other
> > account_ids will not make the SELECT max() invalid and should therefore
> > be allowed.
> 
> Well, concurrency and transactional consistency *allows* other
> processes to update the table after you start your transaction.  You
> just won't *see* their updates while you're inside of a transaction.

Just make sure and read up about transaction isolation... in the default
of READ COMMITTED mode, you can sometimes see changes made by other
transactions.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-28 Thread Decibel!
On Tue, Aug 28, 2007 at 09:54:23PM -0400, Tom Lane wrote:
> Decibel! <[EMAIL PROTECTED]> writes:
> > On Tue, Aug 28, 2007 at 08:31:10PM +0100, Gregory Stark wrote:
> >> Postgres tries to reuse WAL files. Once the archive_command completes it
> >> believes it is safe to reuse the old file without deleting it. That will do
> >> nasty things if you've used ln as your archive command.
> 
> > I thought that was specifically disabled when PITR was enabled? Or do we
> > just do a rename rather than an unlink when creating a new file?
> 
> No.  The only difference is we don't recycle the file until the
> archive_command says it's done with it.
> 
> The archive_command must actually physically copy the data someplace
> else, and must not return success until it's sure the copy is good.
> Perhaps the docs are not sufficiently clear on the point?

Yeah... I think that's a big gotcha waiting to smack someone. I'd
actually make the mention stand out so that hopefully no one can miss
it... or do we have an official method for putting warnings in the docs?

"Because WAL segment files are renamed and not re-created from scratch,
it is critical that the archive command actually copy files, not move
or hard-link them."
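A minimal sketch of a copying archive_command, with a hypothetical
archive path (the test guards against overwriting an already-archived
segment):

archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'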
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] autovacuum not running

2007-08-28 Thread Decibel!
On Tue, Aug 28, 2007 at 03:10:34PM -0700, Ben wrote:
> Hm, I assumed it wasn't running because pg_stat_all_tables shows the last
> vacuum from several weeks ago, and this is an active db. Also, I see no 
> vacuum activity in the logs. But "show autovacuum" does show it being 
> on

Last vacuum, or last autovacuum?
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Reliable and fast money transaction design

2007-08-28 Thread Decibel!
On Tue, Aug 28, 2007 at 05:48:50PM +0200, cluster wrote:
> I need a way to perform a series of money transactions (row inserts) 
> together with some row updates in such a way that integrity is ensured 
> and performance is high.
>
> I have two tables:
>   ACCOUNTS (
>  account_id int,
>  balance int
>   );
> 
>   TRANSACTIONS (
>  transaction_id int,
>  source_account_id int,
>  destination_account_id int,
>  amount int
>   );
> 
> When a money transaction from account_id = 111 to account_id = 222 with 
> the amount of 123 is performed, the following things must happen as an 
> atomic event:
>1) INSERT INTO TRANSACTIONS
> (source_account_id, destination_account_id, amount)
> VALUES (111, 222, 123)
>2) UPDATE ACCOUNTS SET balance = balance-123 WHERE account_id=111
>3) UPDATE ACCOUNTS SET balance = balance+123 WHERE account_id=222
 
Why do you think you need to do anything special for this? As long as
you're doing these 3 steps in a single transaction, everything should be
fine. At most, you might need to set your transaction isolation level to
serializable, but I don't think that's actually needed.
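In other words, something along these lines (using the tables from the
post; the isolation level line is shown commented out since READ
COMMITTED should suffice here):

BEGIN;
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- likely unnecessary
INSERT INTO transactions (source_account_id, destination_account_id, amount)
    VALUES (111, 222, 123);
UPDATE accounts SET balance = balance - 123 WHERE account_id = 111;
UPDATE accounts SET balance = balance + 123 WHERE account_id = 222;
COMMIT;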

> A lot of such money transactions will happen in parallel so I need 
> ensure integrity of the rows in ACCOUNTS.
> This might be done by creating an *immutable* function that performs the 
> three steps but this will block unnecessarily if to completely unrelated 
> money transactions are tried to be performed in parallel.
> 
> Any suggestions on how to perform step 1-3 while ensuring integrity?
> 
> 
> QUESTION 2:
> 
> For various reasons I might need to modify the ACCOUNTS table to
> ACCOUNTS (
>  account_id int,
>  transaction_id int,
>  balance int,
>  
>   );
> 
> so that the balance for account_id=111 is given by
>SELECT balance FROM ACCOUNTS
>WHERE account_id=111
>ORDER BY transaction_id DESC
>LIMIT 1
> 
> How will that effect how I should perform the steps 1-3 above?
> 
> Thanks
> 
> Thanks
> 

-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Geographic High-Availability/Replication

2007-08-28 Thread Decibel!
On Fri, Aug 24, 2007 at 06:54:35PM +0200, Markus Schiltknecht wrote:
> Gregory Stark wrote:
> >Only if your application is single-threaded. By single-threaded I don't 
> >refer
> >to operating system threads but to the architecture. If you're processing a
> >large batch file handling records one by one and waiting for each commit
> >before proceeding then it's single threaded. If you have a hundred 
> >independent
> >clients on separate connections doing separate things then each one of them
> >could get 6tps. Which you have will depend on your application and your 
> >needs,
> >it may not be something you can change.
> 
> Correct.
> 
> Plus, as in the implementation of Postgres-R, performance is *not* bound 
> to the slowest node. Instead, every node can process transactions at 
> its own speed. Slower nodes might then have to queue transactions from 
> those until they catch up again.

But is the complete transaction information safely stored on all nodes
before a commit returns?
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Seeking datacenter PITR backup procedures [RESENDING]

2007-08-28 Thread Decibel!
On Thu, Aug 23, 2007 at 06:58:36PM -0400, Bill Moran wrote:
> Decibel! <[EMAIL PROTECTED]> wrote:
> >
> > On Aug 19, 2007, at 7:23 AM, Bill Moran wrote:
> > >> Assumptions:
> > >> a. After pg_stop_backup(), Pg immediately recycles log files and  
> > >> hence wal
> > >> logs can be copied to backup. This is a clean start.
> > >
> > > I don't believe so.  AFAIK, all pg_stop_backup() does is remove the
> > > marker that pg_start_backup() put in place to tell the recovery  
> > > process
> > > when the filesystem backup started.
> > 
> > I'm pretty certain that's not the case. For a PITR to ensure that  
> > data is back to a consistent state after a recovery, it has to replay  
> > all the transactions that took place between pg_start_backup and  
> > pg_stop_backup; so it needs to know when pg_stop_backup() was  
> > actually run.
> 
> Sounds likely ... but I don't believe that forces any specific log
> cycling activity, like the OP suggested.
> 
> Be nice if someone who knew for sure would chime in ;)

Oh, that one's easy... it was changed in 8.2. Previously, you had to
either manually copy the active WAL file or wait for it to roll over
before you had a valid PITR backup. In 8.2, pg_stop_backup forces WAL
rotation (but note that you still have to wait for the archive command
to complete before the backup is valid).
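So the 8.2-era sequence looks roughly like this (the label and the copy
step are illustrative):

SELECT pg_start_backup('nightly base');
-- filesystem-level copy of $PGDATA happens here (pg_xlog can be excluded)
SELECT pg_stop_backup();  -- in 8.2+ this forces the current WAL segment
                          -- to be archived before the backup is complete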
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-28 Thread Decibel!
On Tue, Aug 28, 2007 at 08:31:10PM +0100, Gregory Stark wrote:
> "Steve Crawford" <[EMAIL PROTECTED]> writes:
> 
> > 4. Much more up-to-the-minute recovery data.
> >
> > In your scenario, what about using "cp -l" (or "ln") instead? Since the
> > hard-link is only creating a new pointer, it will be very fast and
> > save a bunch of disk IO on your server and it doesn't appear that the
> > tempdir is for much other than organizing purposes anyway.
> 
> Postgres tries to reuse WAL files. Once the archive_command completes it
> believes it is safe to reuse the old file without deleting it. That will do
> nasty things if you've used ln as your archive command.

I thought that was specifically disabled when PITR was enabled? Or do we
just do a rename rather than an unlink when creating a new file?
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-28 Thread Decibel!
On Tue, Aug 28, 2007 at 11:05:52AM -0700, Steve Crawford wrote:
> In your scenario, what about using "cp -l" (or "ln") instead? Since the
> hard-link is only creating a new pointer, it will be very fast and
> save a bunch of disk IO on your server and it doesn't appear that the
> tempdir is for much other than organizing purposes anyway.

Note that that will only work if you're creating the link on the same
filesystem, and having /tmp and your data in the same filesystem isn't
such a hot idea.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Decibel!
On Mon, Aug 27, 2007 at 12:08:17PM -0400, Jonah H. Harris wrote:
> On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Indeed.  In fact, the most likely implementation of this (refuse to do
> > anything with a page with a bad CRC) would be a net loss from that
> > standpoint, because you couldn't get *any* data out of a page, even if
> > only part of it had been zapped.

I think it'd be perfectly reasonable to have a mode where you could
bypass the check so that you could see what was in the corrupted page
(as well as deleting everything on the page so that you could "fix" the
corruption). Obviously, this should be restricted to superusers.

> At least you would know it was corrupted, instead of getting funky
> errors and/or crashes.

Or worse, getting what appears to be perfectly valid data, but isn't.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Enterprise Wide Deployment

2007-08-23 Thread Decibel!

On Aug 17, 2007, at 5:37 PM, Andrej Ricnik-Bay wrote:

On 8/14/07, john_sm <[EMAIL PROTECTED]> wrote:

Hey guys, for an enterprise wide deployment, what will you
suggest and why among - Red Hat Linux, Suse Linux and
Ubuntu Linux, also, do you think, we can negotiate the
support pricing down?

For all it's worth:  my personal experiences with RH support
were shocking, to say the least, and I can't fathom why
anyone would want to pay for it.

If you have in-house linux expertise, choose whatever they're
familiar with.  If you don't - find a local company that can give
you support and use what they're familiar with.  Just my 2 cents.


While you're looking at support; I strongly recommend looking at  
getting a support contract for PostgreSQL as well if you're going to  
be banking your business on it. While it's pretty rare to run into  
problems in production (depending on the knowledge of your staff and  
the quality of your hardware), it can happen.


(Disclosure: I work for one company that provides PostgreSQL support)
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-23 Thread Decibel!

On Aug 17, 2007, at 5:48 PM, Joey K. wrote:
We have several web applications with Pg 8.2.x running on isolated  
servers (~25). The database size on each machines (du -h pgdata) is  
~2 GB. We have been using nightly filesystem backup (stop pg, tar  
backup to ftp, start pg) and it worked well.


We would like to move to PITR backups since the database size will  
increase moving forward and our current backup method might  
increase server downtimes.


We have a central ftp backup server (yes, ftp :-) which we would  
like to use for weekly full and daily incremental PITR backups.


After reading the docs, PITR is still fuzzy. Our ideas for backup  
are (do not worry about the syntax),


** START **

tmpwal = "/localhost/tmp"   # tmp space on server 1 for storing wal  
files before ftp

Configure $pgdata/postgresql.conf archive_command = "cp %p $tmpwal/%f"


Why not just FTP WAL files directly?


Day 1:
% psql pg_start_backup(); tar pgdata.tar --exclude pg_xlog/ pgdata
% psql pg_stop_backup()
% ftp put pgdata.tar ftpserver:/server1/day1/pgdata
% ftp put $tmpwal/* ftpserver:/server1/day1/wal
% rm -f $tmpwal/* pgdata.tar


The last 2 are a race condition... you could easily lose a WAL file  
that way.


Keep in mind that that pgdata.tar is 100% useless unless you also  
have the WAL files that were created during the backup. I generally  
recommend to folks that they keep two base copies around for that  
reason.



Day 2:
% ftp put $tmpwal/* ftpserver:/server1/day2/wal
% rm -f $tmpwal/*

Day 3:
...
...

Day 7:
% rm -f $tmpwal/*
Start over

Recovery on server1 (skeleton commands),
% rm -f $tmpwal/*
% mv pgdata pgdata.hosed
% ftp get ftpbackup:/server1/day1/pgdata.tar  .
% tar -xvf pgdata.tar
% ftp get ftpbackup:/server1/day1/wal/*  $tmpwal
% ftp get ftpbackup:/server1/day2/wal/*  $tmpwal
.
.
% cp -r pgdata.hosed/pg_xlog pgdata/
% echo "cp $tmpwal/%f %p" > pgdata/recovery.conf
% start pg (recovery begins)

** END **

Assumptions:
a. After pg_stop_backup(), Pg immediately recycles log files and  
hence wal logs can be copied to backup. This is a clean start.

b. New wal files since (a) are incremental backups

We are not sure if WAL log filenames are unique and possibly  
overwrite older wal files during recovery.


I'm seeking suggestions from others with experience performing  
PostgreSQL PITR backups from multiple servers to a central backup  
server.


In general, your handling of WAL files seems fragile and error-prone.  
I think it would make far more sense to just FTP them directly, and  
not try and get fancy with different directories for different days.  
*when* a WAL file was generated is meaningless until you compare it  
to a base backup to see if that WAL file is required for the base  
backup, useful (but not required) to the base backup, or useless for  
the base backup.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Seeking datacenter PITR backup procedures [RESENDING]

2007-08-23 Thread Decibel!

On Aug 19, 2007, at 7:23 AM, Bill Moran wrote:

Assumptions:
a. After pg_stop_backup(), Pg immediately recycles log files and hence
wal logs can be copied to backup. This is a clean start.


I don't believe so.  AFAIK, all pg_stop_backup() does is remove the
marker that pg_start_backup() put in place to tell the recovery process
when the filesystem backup started.


I'm pretty certain that's not the case. For a PITR to ensure that  
data is back to a consistent state after a recovery, it has to replay  
all the transactions that took place between pg_start_backup and  
pg_stop_backup; so it needs to know when pg_stop_backup() was  
actually run.



By not backing up pg_xlog, you are
going to be behind by however many transactions are in the most recent
transaction log that has not yet been archived.  Depending on how often
your databases are updated, this is likely acceptable.  If you need
anything more timely than that, you'll probably want to implement
Slony or some other replication system.


Just keep in mind that Slony is *not* a backup solution (though you  
could possibly argue that its log shipping is).

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Geographic High-Availability/Replication

2007-08-23 Thread Decibel!

On Aug 22, 2007, at 3:37 PM, Joshua D. Drake wrote:

You can not do multi master cross continent reliably.


I'm pretty sure that credit card processors and some other companies  
do it... it just costs a LOT to actually do it well.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Searching for Duplicates and Hosed the System

2007-08-23 Thread Decibel!

On Aug 21, 2007, at 12:04 AM, Tom Lane wrote:

If you need to deal with very large result sets, the standard advice
is to use a cursor so you can pull a few hundred or thousand rows
at a time via FETCH.


In case it's not obvious... in this case you might want to dump the  
output of that query into another table; perhaps a temp table...


CREATE TEMP TABLE dupe_check AS SELECT ...
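Or, going the cursor route Tom described, a sketch (the duplicate-finding
query here is hypothetical, standing in for the elided SELECT):

BEGIN;
DECLARE dupe_cur CURSOR FOR
    SELECT email, count(*) FROM people GROUP BY email HAVING count(*) > 1;
FETCH FORWARD 1000 FROM dupe_cur;   -- repeat until no more rows come back
CLOSE dupe_cur;
COMMIT;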
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Automating logins for mundane chores

2007-08-23 Thread Decibel!

On Aug 18, 2007, at 5:20 AM, Phoenix Kiula wrote:

I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons, is quite a convoluted one).

So I read up on .pgpass.


FWIW, *IF* you can trust identd in your environment, I find it to be  
easier to deal with than .pgpass or the like.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Function with Integer array parameter

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 11:14:25AM -0400, Ranjan Kumar Baisak wrote:
> Postgres Gurus,
> Please suggest me what is wrong with this 
> function.
> This function tries to retrieve set of rows from description table based 
> on set of ID fields passed as array.
> 
> The error, I get is : ERROR:  only one AS item needed for language "plpgsql"
> 
> CREATE OR REPLACE function get_description_list(integer[]) RETURNS SETOF 
> type_description AS
> 'DECLARE
...
>WHERE   d_base.id in array_to_string(ints_desc_ids alias,',')

Note the quotes.

Use dollar quoting... it$$s your friend.
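A hedged sketch of the function with a dollar-quoted body and an ANY()
array test (type_description and the description table are the poster's;
the body itself is illustrative):

CREATE OR REPLACE FUNCTION get_description_list(integer[])
RETURNS SETOF type_description AS $$
DECLARE
    r type_description;
BEGIN
    FOR r IN
        SELECT * FROM description d_base WHERE d_base.id = ANY ($1)
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;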
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Performance question

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 08:52:02AM -0300, Marcelo de Moraes Serpa wrote:
> Hello list,
> 
> If I've got a trigger that calls a function each time there is a DELETE or
> UPDATE operation on a table in my system, and in this function I retrieve
> some boolean information from another table and based on this information,
> additional code will be ran or not in this function. Could the solely fact
> of calling the function and selecting the data on another table (and the
> trigger on each update and delete on any table) affect the overall db
> performance in a noticiable manner ?

Of course, you're adding at least one extra query to each UPDATE and
DELETE. Plus the overhead of the trigger itself.

The real question is: so what? If you need that logic to happen, you
need it to happen. Unless you'll be updating or deleting scores of rows
a second, I wouldn't worry too much about it.

Remember the first rule of performance tuning: don't. :)
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] how to get id of currently executed query?

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 01:30:11PM +0200, hubert depesz lubaczewski wrote:
> hi,
> i need something to distinguish two separate calls for some select.
> 
> i tried to use c functions GetCurrentTransactionId() and
> GetCurrentCommandId(),
> but there is a problem:
> if i'll make plpgsql function, which uses GetCurrentTransactionId() and
> GetCurrentCommandId() - getCurrentCommandId changes ( sql's from
> functions also increment the command-id-counter).

Well of course, if you're running it in a separate command. If you run
the function twice from one query I'd expect both to return the same.

Of course you can easily get the same XID back from multiple commands if
you're in an explicit transaction.

BTW, it would likely be useful to others if you posted your code for
those functions somewhere. Just yesterday I was thinking of something
where it would be handy to know what your current XID is.

> so my question is - is there any way, to be able to tell "one query from
> another"?
> 
> i would like to write a plpgsql function that would be able to
> differentiate somehow it's calls from one statement to another.

I think you're going to have to give a more explicit example of what
you're actually trying to do.

> i'm not sure if my description is correct - if you dont understand
> something, please let me know.

Well, I'm not following, but it's early in the morning and I didn't
sleep well, so... :)
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 12:12:03PM +0200, Rainer Bauer wrote:
> "Scott Marlowe" wrote:
> 
> >When I go to amazon.com I only ever get three pages of results.  ever.
> > Because they know that returning 190 pages is not that useful, as
> >hardly anyone is going to wander through that many pages.
> >
> >Google, you'll notice says "Results 1 - 10 of about 5,610,000 for
> >blacksmith"  i.e. it's guesstimating as well.  no reason for google to
> >look at every single row for blacksmith to know that there's about 5.6
> >million.
> 
> But if you go to eBay, they always give you an accurate count. Even if the no.
> of items found is pretty large (example: <http://search.ebay.com/new>).

And I'd bet money that they're using a full text search of some kind to
get those results, which isn't remotely close to the same thing as a
generic SELECT count(*).
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] memory optimization

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 09:17:37AM +0300, Sabin Coanda wrote:
> >>
> >> So, what is better from the postgres memory point of view: to use 
> >> temporary
> >> objects, or to use common variables ?
> >
> >A temp table might take *slightly* more room than variables...
> >
> >> Can you suggest me other point of views to be taken into consideration in 
> >> my
> >> case ?
> >
> >Code maintenance. I can't think of anyway to replace a temp table with
> >variables that isn't a complete nightmare.
> 
> With some conversion procedures that is even easiest to do it ;)

Sorry, I'm not quite grokking what you're saying there...

I guess maybe the original question wasn't clear enough... when temp
tables were mentioned I assumed that you were dealing with multiple
rows, but maybe that's not the case.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Deadlocks caused by a foreign key constraint

2007-08-16 Thread Decibel!
On Thu, Aug 16, 2007 at 01:21:43AM -0400, Tom Lane wrote:
> Decibel! <[EMAIL PROTECTED]> writes:
> > But... taking a quick look at RI_FKey_check in backend/utils/adt/ 
> > ri_triggers.c, I don't see it checking to see if the FK has changed,  
> > which seems odd. I would think that if the FK fields haven't changed  
> > that there's no need to perform the check.
> 
> You looked in the wrong place; see AfterTriggerSaveEvent in
> commands/trigger.c

Ahh, I figured it must be in here somewhere...

I guess it's safe to say that RI triggers get a decent amount of special
treatment in the backend compared to normal triggers.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Interpreting statistics collector output

2007-08-15 Thread Decibel!

On Aug 15, 2007, at 2:11 PM, Gregory Stark wrote:

"Decibel!" <[EMAIL PROTECTED]> writes:

On Wed, Aug 15, 2007 at 01:26:02PM -0400, Steve Madsen wrote:

On Aug 15, 2007, at 11:52 AM, Decibel! wrote:
I can't really think of a case where a seqscan wouldn't return all the
rows in the table... that's what it's meant to do.


LIMIT


Ok, you got me. :P But normally you wouldn't do a LIMIT without some  
kind of an ORDER BY, which would mean scanning the whole table.


In any case, it's not a perfect metric, but in general use it seems  
to be "good enough".

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Deadlocks caused by a foreign key constraint

2007-08-15 Thread Decibel!

On Aug 15, 2007, at 1:27 PM, Dmitry Koterov wrote:
I have tested all cases, the code I quoted is complete and minimal.  
All operations are non-blocking (count incrementation is non-blocking,
insertion with a foreign key is non-blocking too), but it still
generates a deadlock from time to time. Deletion of the foreign  
key constraint completely solves the problem.


Code? Got a reproducible test case?

You said "I'm pretty sure that recent versions check to see if the  
key actually changed", but how could that be if Postgres uses row-level
locking, not field-level locking? It seems it cannot check which fields
have changed; it locks the whole row.


You already have the child row that's being updated; both versions of  
it. So you don't have to lock anything to see if the FK field has  
changed or not.


But... taking a quick look at RI_FKey_check in backend/utils/adt/ 
ri_triggers.c, I don't see it checking to see if the FK has changed,  
which seems odd. I would think that if the FK fields haven't changed  
that there's no need to perform the check.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Trigger Procedure Error: NEW used in query that is not in a rule

2007-08-15 Thread Decibel!
On Sat, Aug 11, 2007 at 02:43:30AM -0500, Javier Fonseca V. wrote:
> Hello.
> 
> I'm doing a Trigger Procedure in pl/pgSQL.  It makes some kind of auditing.
> 
> I think that it's working alright except for the next line:
> 
> EXECUTE 'INSERT INTO ' || quote_ident(somedynamictablename) || ' SELECT
> new.*';
> 
> PostgreSQL keeps telling me: "ERROR: NEW used in query that is not in a
> rule".  I think that this NEW problem is because of the scope of the EXECUTE
> statement (outside the scope of the trigger), so it doesn't recognize the
> NEW record.

Sort-of... the issue is that EXECUTE hands the string off to the
backend, which has no clue what "NEW" is; only the trigger procedure
understands NEW.

> Maybe I could fix it concatenating column names and the 'new' values but I
> want to do my trigger as flexible as possible (I have several tables to
> audit).
> 
> Somebody has any suggestion?

You could theoretically make the trigger entirely dynamic by having it
pull the needed info out of the system catalogs... but I wouldn't want
to see the performance of that... If you care about performance *at
all*, I'd suggest writing some code that will generate the triggers for
a given table for you. I don't expect it'd be much harder than writing a
completely dynamic trigger.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Interpreting statistics collector output

2007-08-15 Thread Decibel!
On Wed, Aug 15, 2007 at 01:26:02PM -0400, Steve Madsen wrote:
> On Aug 15, 2007, at 11:52 AM, Decibel! wrote:
> >I can't really think of a case where a seqscan wouldn't return all the
> >rows in the table... that's what it's meant to do.
> 
> Isn't a sequential scan the only option if an appropriate index does  
> not exist?  E.g., for a query with a WHERE clause, but none of the  
> referenced columns are indexed.

Yes, and that seqscan is going to read the entire table and then apply a
filter.

> Put another way: consider a large table with no indexes.   
> seq_tup_read / seq_scan is the average number of rows returned per  
> scan, and if this is a small percentage of the row count, then it  
> seems reasonable to say an index should help query performance.   
> (With the understanding that it's fewer common rather than many  
> unique queries.)
 
decibel=# select * into i from generate_series(1,9) i;
SELECT
decibel=# select seq_scan, seq_tup_read  from pg_stat_all_tables where 
relname='i';
 seq_scan | seq_tup_read 
----------+--------------
        0 |            0
(1 row)

decibel=# select * from i where i=1;
 i 
---
 1
(1 row)

decibel=# select seq_scan, seq_tup_read  from pg_stat_all_tables where 
relname='i';
 seq_scan | seq_tup_read 
----------+--------------
        1 |            9
(1 row)
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] memory optimization

2007-08-15 Thread Decibel!
On Wed, Aug 15, 2007 at 10:21:31AM +0300, Sabin Coanda wrote:
> Hi there,
> 
> I have a procedure which uses temporary objects (table and sequence). I 
> tried to optimize it, using common variables (array and long varchar) 
> instead. I didn't found any difference in performance, but I'd like to 
> choose the best option from other points of view. One of them is the memory.
> 
> So, what is better from the postgres memory point of view: to use temporary 
> objects, or to use common variables ?

A temp table might take *slightly* more room than variables...

> Can you suggest me other point of views to be taken into consideration in my 
> case ?

Code maintenance. I can't think of anyway to replace a temp table with
variables that isn't a complete nightmare.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Copy command and duplicate items (Support Replace?)

2007-08-15 Thread Decibel!
On Tue, Aug 14, 2007 at 10:50:33AM +0800, Ow Mun Heng wrote:
> Hi,
> 
> Writing a script to pull data from SQL server into a flat-file (or just
> piped in directly to PG using Perl DBI)
> 
> Just wondering if the copy command is able to do a replace if there are
> existing data in the Db already. (This is usually in the case of updates
> to specific rows and there be a timestamp indicating it has been changed
> etc.)
> 
> In MySQL, the mysqlimport util has the --replace function which will
> replace the data if there is any event of a duplicate.
> 
> Does PG support this?

No; you'll need to COPY into a temporary or staging table and then
proceed from there. Alternatively, you could use
http://pgfoundry.org/projects/pgloader/.
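A sketch of the staging-table approach (the target table, key column, and
file path are all hypothetical):

BEGIN;
CREATE TEMP TABLE staging (LIKE target) ON COMMIT DROP;
COPY staging FROM '/path/to/extract.csv' WITH CSV;
DELETE FROM target t USING staging s WHERE t.id = s.id;  -- rows being replaced
INSERT INTO target SELECT * FROM staging;
COMMIT;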
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Performance check of my database

2007-08-15 Thread Decibel!
On Sun, Aug 12, 2007 at 05:40:26PM -0400, Harpreet Dhaliwal wrote:
> Hi,
> 
> Lately I completed the business logic of my application and all related
> database work.
> 
> Now i need to check the performance of my database, how much load it can
> bear, perfomance to different queries and stored procedures.
> 
> Basically i need to do the performance testing of my database and based on
> that I need to take a call whether i should go for clustering or not.
> 
> Please let me know the best practices in postgres for such an activity.

Best bet is to write something that drives a fake workload though your
complete system; that way you can see exactly how everything in the
system will handle load.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Deadlocks caused by a foreign key constraint

2007-08-15 Thread Decibel!
On Fri, Aug 10, 2007 at 09:38:36PM +0400, Dmitry Koterov wrote:
> Hello.
> 
> I have a number of deadlock because of the foreign key constraint:
> 
> Assume we have 2 tables: A and B. Table A has a field fk referenced to
> B.id as a foreign key constraint.
> 
> 
> -- transaction #1
> BEGIN;
> ...
> INSERT INTO A(x, y, fk) VALUES (1, 2, 666);
> ...
> END;
> 
> 
> -- transaction #2
> BEGIN;
> UPDATE B SET z = z + 1 WHERE id = 666;
> ...
> UPDATE B SET z = z + 1 WHERE id = 666;
> ...
> UPDATE B SET z = z + 1 WHERE id = 666;
> END;
> 
> 
> You see, table A is only inserted into, and table B only has its field z
> updated on its single row.
> If we execute a lot of these transactions concurrently using multiple
> parellel threads, sometimes we have a deadlock:
> 
> DETAIL:  Process 6867 waits for ShareLock on transaction 1259392; blocked by
> process 30444.
> Process 30444 waits for ShareLock on transaction 1259387; blocked by
> process 6867.
> CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."B" x WHERE "id" = $1
> FOR SHARE OF x"
> 
> If I delete the foreign key constraint, all begins to work fine.
> Seems Postgres thinks that "UPDATE B SET z = z + 1 WHERE id = 666" query may
> modify B.id field and touch A.fk, so it holds the shareable lock on it.

What version are you running? I'm pretty sure that recent versions check
to see if the key actually changed.

> The question is: is it possible to KEEP this foreign key constraint, but
> avoid deadlocks?

I'm pretty sure that the deadlock is actually being caused by your
application code, likely because you're doing multiple updates within
one transaction, but not being careful about the id order you do them
in.
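One common fix is to grab the row locks in a consistent order up front; a
sketch using the tables from the post:

BEGIN;
-- lock every B row this transaction will touch, in id order
SELECT 1 FROM b WHERE id IN (111, 222) ORDER BY id FOR UPDATE;
UPDATE b SET z = z + 1 WHERE id = 111;
UPDATE b SET z = z + 1 WHERE id = 222;
COMMIT;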
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Cluster and MVCC

2007-08-15 Thread Decibel!
On Fri, Aug 10, 2007 at 06:34:03PM +0100, Simon Riggs wrote:
> On Fri, 2007-08-10 at 10:02 -0400, Brad Nicholson wrote:
> > I just want to confirm that the cluster/MVCC issues are due to
> > transaction visibility.  Assuming that no concurrent access is happening
> > to a given table when the cluster command is issued (when takes it
> > visibility snapshot), it is safe to cluster that table.  Correct?
> 
> Yes, as long as pre-existing transactions do not then access the
> clustered table. If they do, rows they should have seen will now not be
> visible, yet you won't get an error message to say so.

Don't you also need to be in a serialized transaction?
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Interpreting statistics collector output

2007-08-15 Thread Decibel!
On Thu, Aug 09, 2007 at 09:14:55PM -0400, Steve Madsen wrote:
> On Aug 8, 2007, at 6:08 PM, Decibel! wrote:
> >Something else I like to look at is pg_stat_all_tables seq_scan and
>seq_tup_read. If seq_scan is a large number and seq_tup_read/seq_scan is
>also large, that indicates that you could use an index on that table.
> 
> If seq_tup_read / seq_scan is large relative to the number of rows in  
> the table, wouldn't that imply that those sequential scans are often  
> returning most of the rows in the table?  In that case, would an  
> index help much or is a sequential scan the expected result?

I can't really think of a case where a seqscan wouldn't return all the
rows in the table... that's what it's meant to do.

What I was driving at by looking at seq_tup_read is that a small table
isn't going to use indexes anyway, so for the small tables it's
generally not worth worrying about indexes. If you wanted to be more
accurate you could look at reltuples or maybe relpages in pg_class
instead.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Permission ALTER PASSWORD

2007-08-15 Thread Decibel!
On Wed, Aug 08, 2007 at 06:35:51PM -0300, Anderson Alves de Albuquerque  wrote:
>  After user $USER executes this ALTER, the password gets changed. Can I block
> the ALTER ... PASSWORD command for user $USER?

No, there's no way to do that. You might want to look at using
ident-based authentication for that user instead.
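
For example, a hypothetical pg_hba.conf line (the database and user
names are placeholders; on 8.x the method is spelled "ident sameuser",
while newer releases use "peer" for local connections):

    local   mydb    someuser    ident sameuser

With ident-based authentication the password stored in the database is
never consulted, so a password change by that user has no effect.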
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Bytea question with \208

2007-08-09 Thread Decibel!
On Thu, Aug 09, 2007 at 04:16:15PM -0400, Woody Woodring wrote:
> My bad, the table I was looking at (8.7) had the first column as the
> decimal representation, and I did notice that the numbers changed as they
> moved right.
> 
> Is there a way for bytea to take a hex number, or do I need to convert the
> bit stream to octal numbers?

to_hex()?
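
Or, going the other way (an untested sketch): since the escape format
wants octal, decimal 208 is written as octal 320, and decode() will take
the hex spelling directly:

    SELECT E'\\320'::bytea;      -- octal escape for decimal 208 (hex d0)
    SELECT decode('d0', 'hex');  -- the same byte from its hex representation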

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
> Sent: Thursday, August 09, 2007 2:14 PM
> To: Woody Woodring
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Bytea question with \208 
> 
> "Woody Woodring" <[EMAIL PROTECTED]> writes:
> > Could someone explain why \208 is not a valid syntax for bytea?
> 
> Aren't those escapes octal?
> 
>   regards, tom lane
> 

-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Modified FIFO queue and insert rule

2007-08-09 Thread Decibel!
On Thu, Aug 09, 2007 at 06:14:43PM +0200, Leif B. Kristensen wrote:
> On Wednesday 8. August 2007 15:12, Alban Hertroys wrote:
> >You should probably use a trigger (a before one maybe) instead of a
> > rule.
> 
> I tried that too, but I'm still quite shaky on how to write triggers, 
> and the same thing happened there: the inserted record was immediately 
> deleted. I solved the problem temporarily with two lines in PHP:

You have to use a BEFORE trigger for this to work, unless you're careful
about how you build your where clause. The AFTER trigger is going to see
the row that you just inserted, so you'd have to explicitly exclude it
from the DELETE.
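
A minimal sketch of the BEFORE trigger approach, using your table and
column names (the function and trigger names are made up, and this is
untested):

    CREATE OR REPLACE FUNCTION recent_places_dedupe() RETURNS trigger AS $$
    BEGIN
        -- remove any older entry for this place before the new row lands
        DELETE FROM recent_places WHERE place_fk = NEW.place_fk;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER recent_places_dedupe_trg
        BEFORE INSERT ON recent_places
        FOR EACH ROW EXECUTE PROCEDURE recent_places_dedupe();

Because it fires before the insert, the new row doesn't exist yet and so
can't delete itself.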

> function set_last_selected_place($place) {
> pg_query("DELETE FROM recent_places WHERE place_fk = $place");
> pg_query("INSERT INTO recent_places (place_fk) VALUES ($place)");
> }
> 
> As my application is single-user, and everything is already wrapped up 
> in a transaction anyway, there's no real problem with this. But I'd 
> still like to understand how to do it 'properly' inside the DB.

Better than what you're doing right now would be to wrap everything into
a function and just call that. Depending on your design, that could be
more (or less) "correct" than trying to do it with a trigger.
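
Something like this hypothetical, untested translation of your PHP
helper into a database function:

    CREATE OR REPLACE FUNCTION set_last_selected_place(p_place int)
    RETURNS void AS $$
    BEGIN
        DELETE FROM recent_places WHERE place_fk = p_place;
        INSERT INTO recent_places (place_fk) VALUES (p_place);
    END;
    $$ LANGUAGE plpgsql;

PHP then just runs SELECT set_last_selected_place($place); and the logic
lives in one place in the database.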
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Time for Vacuum vs. Vacuum Full

2007-08-09 Thread Decibel!
On Thu, Aug 09, 2007 at 10:22:57AM -0400, Vivek Khera wrote:
> 
> On Aug 9, 2007, at 9:38 AM, Brad Nicholson wrote:
> 
> >I have the times that it takes to to do a regular
> >vacuum on the clusters, will vacuum full take longer?
> 
> almost certainly it will, since it has to move data to compact pages  
> rather than just tagging the rows as reusable.
> 
> you can speed things up by dropping your indexes first, then running  
> vacuum full, then re-creating your indexes.  this will make for  
> better (more compact) indexes too.
> 
> as for how much longer, I don't know how to estimate that.

A generally easier approach would be to cluster the tables on an
appropriate index. That does re-write the table from scratch, but in
cases of bad bloat that can actually be a lot faster.
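
For example, with hypothetical names (on 8.2 the syntax is CLUSTER
indexname ON tablename; later releases also accept CLUSTER tablename
USING indexname):

    CLUSTER bloated_table_pkey ON bloated_table;
    ANALYZE bloated_table;  -- the rewrite invalidates the old statistics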

One thing you can do to test this out is to setup another copy of the
database using PITR or some other file-based copy mechanism and try
running VACUUM FULL vs CLUSTER. Note that a copy obtained via pg_dump
obviously won't work for this. :)
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Sylph Searcher

2007-08-09 Thread Decibel!
Is there a way to get this to work remotely? I.e., is there an indexing
part that can be run on the mail server that you'd connect to remotely?

On Thu, Aug 09, 2007 at 05:30:13PM +0900, Tatsuo Ishii wrote:
> Hi,
> 
> I made a small demonstration for Sylph Searcher at Linux World at SF
> and was asked by Josh Berkus where he can download it. I would like to
> share the info with PostgreSQL users. Here is the URL:
> 
> http://sylpheed.sraoss.jp/en/download.html#searcher
> 
> For those who are not familiar with Sylph Searcher, here is a brief
> explanation:
> 
> Sylph-Searcher is a PostgreSQL+tsearch2 application that enables fast
> full-text search of messages stored in mailboxes of Sylpheed, or
> normal MH folders.
> 
> Sylph-Searcher requires the following programs:
> 
> GLib 2.4.0 or later (http://www.gtk.org/)
> GTK+ 2.4.0 or later (http://www.gtk.org/)
> MeCab 0.93 or later + mecab-ipadic (http://mecab.sourceforge.net/)
> PostgreSQL 8.2 or later + tsearch2 (http://www.postgresql.org/)
> LibSylph 1.0.0 or later (http://sylpheed.sraoss.jp/) 
> 
> The license of Sylph-Searcher follows the BSD license.
> 
> I'm using Sylph Searcher on my Linux laptop regularly with mew (a mail
> program running inside emacs) and have a fairly large tsearch2 database
> on it. The database size is ~1.7GB, and the number of messages is more
> than 300,000. Note that sylph-searcher can run on Windows too.
> 
> Sylph searcher was developed by Hiroyuki Yamamoto, who is the author
> of Sylpheed, a mail client.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> 

-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Automation using postgres

2007-08-08 Thread Decibel!
On Wed, Aug 08, 2007 at 11:48:28AM -0400, Jasbinder Singh Bali wrote:
> Hi,
> 
> In my system, I am using Postgres triggers to launch some Unix tools, and thus
> Postgres not only serves the purpose of data storage but also works as an
> engine to automate the whole system. (This is about my system, speaking at a
> broader level.)
> 
> I just wanted to know if there's any related work in this area, so that I can
> compare my system with existing ones.

I think that's pretty rare. You should write something up about what
you're doing and post it; it could be useful info.
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Interpreting statistics collector output

2007-08-08 Thread Decibel!
On Wed, Aug 08, 2007 at 11:01:13AM -0400, Steve Madsen wrote:
> Can anyone provide a brief overview of how to go about interpreting  
> the information generated by the statistics collector?  I've looked  
> around and can't find old mailing list messages or anything in the  
> manual beyond the basics of how to query the statistics.
> 
> Cache hit rates are easy to compute, but is this useful beyond  
> knowing if Postgres has enough buffers?
> 
> Can anything useful be gleaned from looking at idx_scan /  
> idx_tup_read / idx_tup_fetch?

Yes, that will give you information about how often an index is being
used. If you see indexes where idx_scan is a small number, that's an
indication that that index isn't being used for queries and could
potentially be dropped.
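
As a hypothetical starting point (untested; zero usage only means "not
used since the stats were last reset"), something like:

    SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_all_indexes
    WHERE idx_scan = 0
    ORDER BY schemaname, relname;

will list candidate indexes to investigate before dropping anything.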

Something else I like to look at is pg_stat_all_tables seq_scan and
seq_tup_read. If seq_scan is a large number and seq_tup_read/seq_scan is
also large, that indicates that you could use an index on that table.
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Data Mart with Postgres

2007-08-08 Thread Decibel!
On Wed, Aug 08, 2007 at 08:56:47AM -0300, André Volpato wrote:
> 
> Hello everybody,
> I'm working on a small project for a client, using Postgres to
> store data in a dimensional model, fact-oriented, i.e., a datamart.
> At this time, all I have is a populated database with the common
> "star schema" relations (PKs / FKs).
> Below is a list of the main goals of this project:
> 1. Front-end app (PHP5)
> 2. Transactional database for this app (Postgres)
> 3. Datamart in Postgres (described above)
> 4. ROLAP server that supports Postgres (Java - Mondrian)
> 5. Front-end app to manage queries to the ROLAP server (JSP - JPivot)
> Users will have web access to (1), and will be able to create
> views using (5).
> It seems like I'm trying to reinvent the wheel, but the point here
> is that the client can't afford proprietary BI solutions, nor a
> proprietary OS.
> What I have read all across the internet is that Postgres does not
> support this kind of application, which demands materialized views,
> built-in bitmap indexes, and so on. In the open-source world, I find
> those missing features with Mondrian/JPivot.
> Has anyone ever used this structure before? At least Mondrian
> and JPivot?

I haven't, but it's certainly possible to build a datamart without bitmap
indexes or mviews; it's just a question of performance. MViews you can
build yourself easily enough; as for bitmap indexes, IIRC you can get
those in Bizgres.
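
By way of illustration, a hand-rolled materialized view is just a table
plus a refresh routine -- a sketch with made-up fact table and column
names:

    CREATE TABLE sales_summary AS
        SELECT region_id, sum(amount) AS total
        FROM sales_fact
        GROUP BY region_id;

    CREATE OR REPLACE FUNCTION refresh_sales_summary() RETURNS void AS $$
    BEGIN
        -- rebuild the summary from scratch on each refresh
        DELETE FROM sales_summary;
        INSERT INTO sales_summary
            SELECT region_id, sum(amount)
            FROM sales_fact
            GROUP BY region_id;
    END;
    $$ LANGUAGE plpgsql;

Call refresh_sales_summary() from cron or at the end of each ETL load.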
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Reordering columns, will this ever be simple?

2007-08-08 Thread Decibel!
On Tue, Aug 07, 2007 at 02:28:20PM -0500, Tony Caduto wrote:
> Gregory Stark wrote:
> >"novnov" <[EMAIL PROTECTED]> writes:
> >
> >  
> >>Is there any plan to add such a capability to postgres? 
> >>
> >
> >It's been talked about. I wouldn't be surprised to see it in 8.4 but
> >nobody's said they'll be doing it yet and there are a lot of other
> >more exciting ideas too.
> >
> >  
> From an admin tool developer's perspective, the ability to reorder columns
> without manually copying to a new table and all that is pretty exciting :-)

Patches welcome. :)

BTW, this is much more likely to happen if we divorce presentation order
from actual storage order, something that there is some desire to do
because it would allow the engine to automagically store things in the
optimal ordering from an alignment standpoint.
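
For reference, the manual workaround looks something like this (a sketch
with hypothetical names; indexes, constraints, and grants all have to be
recreated as well):

    BEGIN;
    CREATE TABLE mytable_new (b int, a text);  -- columns in the desired order
    INSERT INTO mytable_new (b, a) SELECT b, a FROM mytable;
    DROP TABLE mytable;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;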
-- 
Decibel!, aka Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



