[SQL] Analyze makes queries slow...

2003-08-06 Thread Stef
Hi all,

I have a problem :

I have a select statement that selects from 7 tables and
groups the information by 6 columns of the
tables involved.

When there are no rows in pg_statistics,
the query runs under 3 minutes.

When I analyze the biggest table of the 7
(approx 100 rows), the query takes longer than
12 hours (I had to kill it eventually).

I have the explain plan of the original, under-3-minutes
query, and would like to reverse engineer this to
build up a query with explicit join clauses, as I understand
that this is a way of forcing the planner to join the tables
in a faster way.

I got very close a couple of times, but still can't get it 100%
the same as what the planner did prior to analyzing.

The database is actually faster when analyzed, except
for two or three multiple-join queries (which don't finish
after analyze). So I would like to make an exception
for the tables that are used in these queries only,
or write explicit joins.

Can anybody help, or give some links to good 
help resources?

TIA
Stefan

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Analyze makes queries slow...

2003-08-14 Thread Stef
On Fri, 8 Aug 2003 09:24:48 -0700
Jonathan Gardner <[EMAIL PROTECTED]> wrote:

=> Try the performance list.

Thanks for the tip

Stef



[SQL] Table versions

2003-10-29 Thread Stef
Hi all,

I'm trying to create some kind of table version control
system for approximately 300 postgres databases 
ranging in version from 7.1.2 to 7.3.4.

I compared the "pg_dump -s" output between
the various versions of databases, but the format is inconsistent,
and I can't do diffs to check that table structures are identical
on the various databases this way.

What I did next is put a trigger on pg_attribute that should, in theory,
on insert and update, fire a function that will increment a version
number in a table comment every time a table's structure is modified.
I tried to make the function update a comment in pg_description to
accomplish this.

I'm having a lot of trouble doing this and testing it, and after plenty of tries
it's still not working. I've attached the trigger statement and the plpgsql function.
(There might be a few mistakes, and I haven't attempted to cater for
system columns and multiple changes yet.)

Can somebody please tell me if what I'm trying will ever work, or
suggest an alternative (easier) way to compare a specific table's
structure amongst various databases that are not necessarily
on the same network, nor on the same version of postgres.
 
Regards 
Stefan


test.plpgsql
Description: Binary data


test.trigger
Description: Binary data




Re: [SQL] Table versions

2003-10-29 Thread Stef
Correction on the function :
The function currently on the database has
   select int4(description) + 1 into v_new_version from pg_description 
   where objoid = NEW.attrelid;

instead of 
   select int4(description) into v_new_version from pg_description 
   where objoid = NEW.attrelid;

##START##
=> Hi all,
=> 
=> I'm trying to create some kind of table version control
=> system for approximately 300 postgres databases 
=> ranging in version from 7.1.2 to 7.3.4.
=> 
=> I compared the "pg_dump -s" output between 
=> the various versions of databases, but the format is inconsistent,
=> and I can't do diff's to check that table structures are identical
=> on the various databases this way. 
=> 
=> What I did next, is put a trigger on pg_attribute that should, in theory,
=> on insert and update, fire up a function that will increment a version
=> number on a table comment every time a table's structure is modified.
=> I tried to make the function update a comment on pg_description to
=> accomplish this. 
=> 
=> I'm having a lot of trouble doing this and testing it, and after plenty tries
=> it's still not working. I've attached the trigger statement and the plpgsql function.
=> (There might be a few mistakes, and I haven't attempted to cater for
=> system columns and  multiple changes yet.)
=> 
=> Can somebody please tell me if what I'm trying will ever work, or
=> maybe an alternative (easier) way to compare a specific table's 
=> structure amongst various databases, that are not necessarily 
=> on the same network, nor of the same version of postgres.  
=>  
=> Regards 
=> Stefan
=> 




Re: [SQL] Table versions

2003-10-29 Thread Stef
Thanks guys,

I had a feeling this was the case, but wasn't sure.
The one-version pg_dump looks like a winner.

Regards
Stefan

##START##
=> Rod Taylor <[EMAIL PROTECTED]> writes:
=> >> What I did next, is put a trigger on pg_attribute that should, in theory,
=> >> on insert and update, fire up a function that will increment a version
=> 
=> > System tables do not use the same process for row insertion / updates as
=> > the rest of the system. Your trigger will rarely be fired.
=> 
=> s/rarely/never/.  We do not support triggers on system catalogs.  The
=> system should have done its best to prevent you from creating one ...
=> I suppose you had to hack around with a "postgres -O" standalone backend?
=> 
=> Returning to the original problem, it seems to me that comparing "pg_dump
=> -s" output is a reasonable way to proceed.  The problem of inconsistent
=> output format across pg_dump versions is a red herring --- just use a
=> single pg_dump version (the one for your newest server) for all the
=> dumps.  Recent pg_dump versions still talk to older servers, back to 7.0
=> or thereabouts.
=> 
=>  regards, tom lane
=> 




[SQL] 'invalid' value in timestamp with timezone.

2003-11-03 Thread Stef
Hi all,

I've noticed for the first time in some 7.1.2 databases that
I use, that somebody/something managed to insert a literal
alpha value of 'invalid' into 'timestamp with time zone' columns.

I tested it myself, and it worked :
test712=# create table bob (field timestamp);
CREATE
test712=# \d bob 
   Table "bob"
 Attribute |   Type   | Modifier 
---+--+--
 field | timestamp with time zone | 

test712=# insert into bob values('invalid');
INSERT 3885934 1

On 7.3.4 it gives me :
test734=> insert into bob2 values('invalid');
ERROR:  TIMESTAMP WITH TIME ZONE 'invalid' no longer supported

but other strings gave me the same error on both db's :
test734=> insert into bob2 values('valid');
ERROR:  Bad timestamp external representation 'valid'

I looked in the documentation and on the net, but cannot find
much reference to this "ex-feature".

I'm 80% sure no one actually inserted these 'invalid' values into the tables,
and what I need to know is :
What else could insert a value of 'invalid' into a 7.1.2
"timestamp with time zone" type column?

Was there a function or something similar?

Regards 
Stef




[SQL] Historical logging of pg_stat_activity ?

2004-01-13 Thread Stef
Hello everyone,

Yes, yet again I have a rather crazy idea;
go with me on this and hopefully it will make a bit
more sense than normal :)

Currently, I have a rather extensive application
that gets a lot of queries. I was wanting to track the
general 'usage' pattern over time, but of course, that
would require a sort of 'historical view' of pg_stat_activity.
If there were such a thing, it would let me isolate
hotspots or queries that are often used and allow me
to focus on them first for speed benefits, as well as
allowing me to produce some nifty histograms :)

So, the long and short: is there a pg_stat table
that contains the most 'used' queries and also (I know I
am shooting for the moon here) execution time?

I know I -could- turn on log_timestamp and log_statement
and then parse out (using perl) the most called ones for
each day and then sort them all and ... but that's a bit of
a pain if there is already such a thing in existence.

regards
Stef



[SQL] Fw: postgres logging

2004-01-23 Thread Stef
Hi all, 

I've switched on log_statement in postgresql.conf
for debugging purposes. I tried logging connections
and pids as well, but I need to match up the logged statements
to specific connections.

The postmaster logs to a separate log file, but at the moment
it's impossible to tell which SQL comes from which connection.
Is there an easy way to accomplish this?

Kind Regards
Stefan





Re: [SQL] Fw: postgres logging [SOLVED]

2004-01-23 Thread Stef
Stef mentioned :
=> The postmaster logs to a separate log file, but at the moment
=> it's impossible to tell which sql comes from which connection.
=> Is there an easy way to accomplish this?

Erm... sorry,

It appears my postgresql.conf is not an original. It didn't have
the log_pid option in it. :)
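
For anyone else who hits this, the relevant 7.3/7.4-era postgresql.conf settings look roughly like the sketch below (names as of that vintage; check your own version's documentation):

```
log_statement = true     # log every SQL statement
log_pid = true           # prefix each log line with the backend PID
log_timestamp = true     # prefix each log line with a timestamp
log_connections = true   # log each new connection, with its PID
```

With log_pid and log_connections both on, the PID in each statement line can be matched back to the connection line that introduced it.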




[SQL] postgres timeout.

2004-01-28 Thread Stef
Hi all ,

I'm trying to find out if there is a specific setting
to make transactions time out faster in a scenario
where there's an update on a table in a transaction
block, and another update process tries to update
the same column.

It looks like the second process will wait until you
end the transaction block in the first transaction.

I've looked at the deadlock_timeout parameter and
other parameters, but I don't think I found what
I'm looking for.

I basically need to be able to let the second process
exit with an error after waiting 5 - 10 seconds.

Please can someone help?

Kind Regards
Stefan




Re: [SQL] postgres timeout. [SOLVED]

2004-01-28 Thread Stef
Hi all, 

It seems I always find a solution just after
panicking a little bit.
Anyway, I found that statement_timeout solved
my problem. When I tested it earlier, I actually 
made an error, and skipped it as a possible 
solution.
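
For reference, a minimal sketch of the setting in question, with a hypothetical table (7.3 takes the value in milliseconds; note that statement_timeout bounds the statement's total run time, not just its time spent waiting on a lock):

```sql
SET statement_timeout = 5000;   -- 5 seconds, in milliseconds; 0 disables it

BEGIN;
UPDATE stock SET qty = qty - 1 WHERE item_id = 7;
-- if the row is locked by another open transaction for longer than 5 s,
-- the UPDATE is cancelled with an error instead of waiting indefinitely
COMMIT;
```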

Cheers
Stef

Stef mentioned :
=> Forgot to mention that I use postgres 7.3.4
=> 
=> Stef mentioned :
=> => Hi all ,
=> => 
=> => I'm trying to find out if there is a specific setting 
=> => to make transactions time out faster in a scenario
=> => where there's an update on a table in a transaction 
=> => block, and another update process tries to update
=> => the same column.
=> => 
=> => It looks like the second process will wait until you
=> => end the transaction block in the first transaction.
=> => 
=> => I've looked at the deadlock timeout parameter and
=> => other parameters, but I don't think I found  what 
=> => I'm looking for.
=> => 
=> => I basically need to be able to let the second process
=> => exit with an error after waiting 5 - 10 seconds.
=> => 
=> => Please can someone help?
=> => 
=> => Kind Regards
=> => Stefan
=> => 
=> 




Re: [SQL] postgres timeout.

2004-01-28 Thread Stef
Forgot to mention that I use postgres 7.3.4

Stef mentioned :
=> Hi all ,
=> 
=> I'm trying to find out if there is a specific setting 
=> to make transactions time out faster in a scenario
=> where there's an update on a table in a transaction 
=> block, and another update process tries to update
=> the same column.
=> 
=> It looks like the second process will wait until you
=> end the transaction block in the first transaction.
=> 
=> I've looked at the deadlock timeout parameter and
=> other parameters, but I don't think I found  what 
=> I'm looking for.
=> 
=> I basically need to be able to let the second process
=> exit with an error after waiting 5 - 10 seconds.
=> 
=> Please can someone help?
=> 
=> Kind Regards
=> Stefan
=> 




Re: [SQL] Exceptions when 0 rows affected.

2004-05-13 Thread Stef

Andrei Bintintan mentioned :
=> Your problem depends on what interface/programming language you're using.

Yep, I tried to do it using rules or triggers, but I can't get them to do exactly
what I want, and it's not a good idea to put any rules or triggers on my database.

I'm sticking to psql though, and managed to create the query with nested
nullif and coalesce calls to make my query fail if there are 0 rows matching for
an update or delete.
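
The exact query wasn't posted, but one variant of the same idea (hypothetical table and values) is to fold a subselect into the WHERE clause that raises an error, here a division by zero, when no rows match, so the surrounding transaction aborts:

```sql
BEGIN;
UPDATE accounts
   SET balance = balance - 10
 WHERE id = 42
   -- if no row matches, count(*) = 0 and 1/0 raises an error,
   -- which aborts the whole transaction block under psql -f
   AND (SELECT 1 / count(*) FROM accounts WHERE id = 42) IS NOT NULL;
COMMIT;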

Cheers
Stef



[SQL] Exceptions when 0 rows affected.

2004-05-10 Thread Stef
Hi all,

I've been looking on Google, the archives and the documentation,
but cannot find what I'm looking for. Maybe I've read the
answer, but it's still evading me.

I'm working with postgres 7.3.4 using the psql client.

I want to know if it's possible to raise an error in a
transactional BEGIN->END block when e.g. an update
or delete statement affects zero rows.

I want the entire transaction block to roll back when
this happens. I can do it in perl and java, but would
in this instance like to load a file of SQL statements
using psql -f. Is it possible?

Kind Regards
Stefan




Re: [SQL] Date format problems

2004-06-03 Thread Stef
Tom Lane mentioned :
=> Check the manual about runtime
=> configuration settings and postmaster switches.

I use this :
export PGDATESTYLE='ISO,European'

P.S. Is there something wrong with the date on the postgres mailing list machine?
All my mail from the list arrives with a date of 16 Feb 2004,
and hides at the bottom of my e-mail folder.

Cheers
Stef



Re: [SQL] Datetime problem

2004-06-14 Thread Stef
Hello Eric,

Are you looking for something like :
select to_char(timestamp 'now',' MM DD HH MI SS');
or the values in your example below :
select to_char(timestamp '20041010 00:00:00',' MM DD HH MI SS');

Eric Lemes mentioned :
=> Hello there,
=> 
=> I'm with a little trouble with postgresql and date/time conversions:
=> 
=> - select to_timestamp('2004 10 10 00 00 00', ' MM DD HH MI SS')
=> 
=> the output is:
=> 
=> - 2004-10-09 23:00:00-03
=> 
=> Anybody can help me?
=> 
=> 
=> []'s
=> 
=> Eric Lemes de Godoy Cintra
=> Analista de Sistemas Líder
=> Zanthus Sistemas e Equip. Automação S/A
=> Desenvolvimento de Software
=> 
=> 
=> 
=> [EMAIL PROTECTED]
=> Fone: 55(11) 3750-7077
=> 
=> 
=> 
=> ---
=> Outgoing mail is certified Virus Free.
=> Checked by AVG anti-virus system (http://www.grisoft.com).
=> Version: 6.0.693 / Virus Database: 454 - Release Date: 31/5/2004
=> 




[SQL] ERROR: Unable to format timestamp; internal coding error

2004-06-23 Thread Stef
Hi all,

I've narrowed my problem down to the following
Java Timestamp that I managed to insert into
a postgres 7.3.4 database : 
Timestamp : '1475666-11-30 02:00:00.0'

My problem is, that when I try and select from the table I inserted 
this timestamp into, I get the following error :
ERROR:  Unable to format timestamp; internal coding error

OR if I try converting it :

=# select date(hpacc_cancelled) from hp_acc;
ERROR:  Unable to convert timestamp to date
=# select to_char(hpacc_cancelled,'') from hp_acc where hpacc_cancelled is not null;
ERROR:  to_char(): Unable to convert timestamp to tm

Is there any way I can isolate these and fix them on the database ?
Is there a way to select the values in the table?

Kind Regards
Stefan



Re: [SQL] Order of execution of rules

2004-07-12 Thread Stef
Hilary Forbes mentioned :
=> Can I guarantee that if I have more than one Rule on a table they will be executed 
in a given order? I know triggers are done alphabetically but is the same true of 
rules?

http://www.postgresql.org/docs/7.4/static/sql-createrule.html#AEN37748




[SQL] Maybe install debian on your home pc?

2004-08-12 Thread Stef
Hi Marcus,

Here's something interesting for you :
http://www.gnoppix.org/

It looks like it may be easier to install, as it's
entirely geared towards installation rather than just booting.
It says it's based on woody, so I don't know if the 2.6 kernel is
a boot option here. Maybe check it out...

Cheers
Stefan



Re: [SQL] Maybe install debian on your home pc?

2004-08-12 Thread Stef
Erm .. sorry list people. This one slipped to the wrong address...

Stef mentioned :
=> Hi Marcus,
=> 
=> Here's something interesting for you :
=> http://www.gnoppix.org/
=> 
=> It looks like it may be easier to install, as it's
=> entirely geared towards installation rather than just booting.
=> It says it's based on woody, so I don't know if the 2.6 kernel is
=> a boot option here. Maybe check it out...
=> 
=> Cheers
=> Stefan



[SQL] Preserving column order when recreating table.

2004-09-15 Thread Stef
Hi all,

I'm struggling with a situation where I
want to recreate a table (in more than 30 databases) to
fix the column order (attnum sequence) and, in another case,
fix different definitions for the same column in a table, e.g.
amount numeric(16,2)
instead of :
amount numeric(16,5)

The complication comes in when the newly created table
has extra (new) columns, or its column order is not the
same as the original table's, but the corresponding column
names are the same.

My question is : 
Is it possible to do this in DML/DDL ?
That is, dumping  the data,  dropping the table,
recreating the table, and reimporting the data in the correct order ?

I've tried different things, but I cannot write SQL to do this.
I know it's easy from a script, but I don't have any other type 
of access to the database servers.

Kind Regards
Stefan



Re: [SQL] Preserving column order when recreating table.

2004-09-15 Thread Stef
Andrew Sullivan mentioned :
=> I'm not sure why you want to do the former, but in any case, it's
Because lazy people write inserts without specifying column names.

=> possible by creating a new table which has things the way you want;
=> select all the old data from the old table into the new table (using
=> the column names to get everything in the order you like, of course),

I like this idea, but each database may have a different table definition
for the same table, and if I want to automate this, I need to figure out the 
column names on the fly.

=> and then rename the old table, rename the new table to the old table
=> name, and drop the old table if you like.

I think I've got the solution now. I'll do it in two steps.
First add/drop all the columns that are not there/not supposed to be there,
and in the second step do what you suggested.
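
Spelled out with purely illustrative table and column names, the second step might look like:

```sql
BEGIN;
CREATE TABLE accounts_new (
    id     integer,
    amount numeric(16,2)          -- the corrected definition
);
-- the explicit column list fixes the column order regardless of attnum
INSERT INTO accounts_new (id, amount)
    SELECT id, amount FROM accounts;
ALTER TABLE accounts RENAME TO accounts_old;
ALTER TABLE accounts_new RENAME TO accounts;
DROP TABLE accounts_old;
COMMIT;
```

Wrapping the whole swap in one transaction means readers see either the old or the new table, never a missing one.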

Thanks!!

Kind Regards
Stefan



Re: [SQL] Preserving column order when recreating table.

2004-09-15 Thread Stef
Andrew mentioned :
=> Ugh.  Sorry to say so, but this sounds to me really a lot like the
=> cure is worse than the disease.  The answer to "Bob did something
=> incredibly stupid" is not "We'll bend ourselves into contortions to
=> support it."  (This is not to say I don't sympathise.  You wouldn't
=> believe how much I do.)

Funny you should say that. It looks like "Bob" had a hand in pg_dump -d, 'cause I've
many times wished there were column names specified there, too :)
(I'm talking pre-7.4 here, dunno if it's changed already)

=> and then rename the old table, rename the new table to the old table
=> name, and drop the old table if you like.

The only problem I've run into now is duplicate index names. I think this
is why I didn't use this solution originally. But I figured out a way to
modify pieces of the "create table" statement to drop all the indexes
and constraints first.

Is there an easier way around this?


Stef



[SQL] Postgres schema comparison.

2005-03-07 Thread Stef
Hi all,

I've got a master database and many other databases that
all have (or are supposed to have) exactly the same schema
as the master database (the master database is basically an empty
template database containing the schema definition).

The problem is that none of the schemas actually match the master schema,
e.g. missing columns, columns not in the correct order (attnum), missing indexes
and primary keys, and in severe cases, missing sequences and tables.

I have the wonderful job of re-synching all the schemas out there not
conforming to the master. I've looked everywhere for something that
will help with this. I'm specifically looking for a way to do a checksum
or something similar on tables and/or the schema as a whole, to be able to
do a table comparison with the master database.

It would be a bonus to pick up exactly what is missing, but for now, just
identifying differences is what I want to achieve. I'm using postgres 7.3 mostly, but
I may want to use this for 7.4 and 8.0 databases as well.

Has anybody got some suggestions of what I can do or use to do this?

TIA
Kind Regards
Stefan



Re: [ADMIN] [SQL] Postgres schema comparison.

2005-03-07 Thread Stef
Markus Schaber mentioned :
=> There are (at least) two independently developed pgdiff applications,
=> they can be found at:
=> 
=> http://pgdiff.sourceforge.net/
=> 
=> http://gborg.postgresql.org/project/pgdiff/projdisplay.php

Thanks a lot!

=> I did not try the first one, but the latter one worked on some of my
=> datas, but fails on others. I filed a bug report some time ago, but got
=> no answer, so I'm afraid this tool currently is unmaintained:
=> http://gborg.postgresql.org/project/pgdiff/bugs/bugupdate.php?895
=> 
=> But maybe a pg_dump --schema-only on all the databases, and then
=> manually diffing the files may already fulfil your needs.

I've tested something similar, which seems to work OK for me for now :
pg_dump -s -t [TABLE] [DBNAME] | grep -v "^--" | md5sum 

The problem I have with this is that I have to run the command per table,
and seeing that I have over 500 tables in each database, this takes quite a
long time.

I'll test some of the above pgdiffs, and see if either can do it better.

Kind Regards
Stefan



Re: [SQL] [ADMIN] Postgres schema comparison.

2005-03-07 Thread Stef
[EMAIL PROTECTED] mentioned :
=> Are you just synching the schemas, or do you also need to synch the data?

Schemas now, data later.

To do the data part, I'm thinking of using Slony, because it seems to be able to
do pretty much everything I need from that side. But, unfortunately, I can't
even start fixing the data before the schemas are fixed.

Kind Regards
Stefan



Re: [SQL] [ADMIN] Postgres schema comparison.

2005-03-07 Thread Stef
John DeSoi mentioned :
=> Develop a function that builds a string describing the tables/schemas 
=> you want to compare. Then have your function return the md5 sum of the 
=> string as the result. This will give you a 32 character value you can 
=> use to determine if there is a mismatch.

OK, this may be exactly what I need. I've compiled and installed
contrib/pgcrypto, and I want to use either one of :
 Result data type | Schema |  Name  | Argument data types 
--+++-
 bytea| public | digest | bytea, text
 bytea| public | digest | text, text

Is it possible to somehow pass the output of "\d [TABLE NAME]"
to this function? If not, what would return consistent text
describing the columns, indexes and primary keys of a table?
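
A minimal sketch of the digest(text, text) call, with a made-up description string standing in for whatever \d-equivalent text gets assembled:

```sql
-- pgcrypto's digest() returns bytea; encode(..., 'hex') makes it readable.
SELECT encode(digest('mytable: id integer, name character(25)', 'md5'), 'hex');
-- yields a 32-character hex string usable as a comparison key
```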

Kind Regards
Stefan



Re: [ADMIN] [SQL] Postgres schema comparison.

2005-03-07 Thread Stef
Markus Schaber mentioned :
=> Some weeks ago, I posted here a script that uses psql to create split
=> dumps. Maybe you can reuse some of its logics to create per-table
=> md5sums for all tables in a database automatically.


Thanks, but I've got something very similar to this already. I almost
thought you had managed to split the output of a single schema dump from
"pg_dump --schema-only" into portions belonging to the various tables.
That would be very impressive :)

Kind Regards
Stefan



Re: [ADMIN] [SQL] Postgres schema comparison.

2005-03-07 Thread Stef
Tom Lane mentioned :
=> > The problem I have with this, is that I have to run the command per table,
=> 
=> Why?
=> 
=> If the problem is varying order of table declarations, try 8.0's
=> pg_dump.

Yes, this will solve the global schema check, but I will still need to split
it into per-table dumps to do per-table comparisons.

Kind Regards
Stefan 



Re: [SQL] [SOLVED] Postgres schema comparison.

2005-03-07 Thread Stef
John DeSoi mentioned :
=> I'm not sure you can use \d directly, but if you startup psql with the 
=> -E option it will show you all the SQL it is using to run the \d 
=> command. It should be fairly easy to get the strings you need from the 
=> results of running a similar query. The psql source is a good place to 
=> look also.

Sometimes you just need to see things from a different perspective.
Thanks!

Here's my final solution that runs in less than a minute for +- 543 tables :
for x in $(psql -tc "select relname from pg_class where relkind = 'r' and relname not like 'pg_%'")
do
   echo "$(psql -tc "select encode(digest('$(psql -c '\d '${x}'' mer9188_test | tr -d \"\'\")', 'md5'), 'hex')" mer9188_test | grep -v "^$" | tr -d " "):${x}"
done > compare_list.lst



Re: [ADMIN] [SQL] Postgres schema comparison.

2005-03-08 Thread Stef
Jim Buttafuoco mentioned :
=> I use dblink to attach to both databases and query pg_namespace, pg_class,
=> pg_attribute ... to get the diffs. See attached as an example. Look for the
=> dblink_connect lines to specify your database. You will need to install
=> contrib/dblink. I used this with the 7.4.X series and have NOT tested yet with
=> 8.0.X.

Thanks!

This is something I haven't even thought of.
Only some of the machines have dblink installed at the moment,
but that's the same work as having to install pgcrypto everywhere.

This is actually more thorough. It seems to be working with some minor changes
on 7.3 (the dblink functions there don't allow multiple connections, and take only
one argument, so I created temp tables instead). This is actually very fast.

Thanks again.

Kind Regards
Stefan



Re: [SQL] [SOLVED] Postgres schema comparison.

2005-03-09 Thread Stef
Hi all,

If anyone is interested, here's the final solution
that I'm using to build a list of tables and their md5sums
based on what the psql interface queries when you do '\d [TABLE NAME]'

I attached the function I created, and this is the SQL I run :
select relname||':'||get_table_checksum(relname) from pg_class where relkind = 'r' and relname not like ('pg_%') and relname not like ('sql_%') order by relname;

This gives the same result for a specific table across all versions of
postgres >= 7.3, and runs for a minute or so for +- 450 tables on my machine.
It may break if you have some exotic definitions that I didn't test for,
but I think it's pretty solid as it is here.

Kind Regards
Stefan

Stef mentioned :
=> Here's my final solution that runs in less than a minute for +- 543 tables :
=> for x in $(psql -tc "select relname from pg_class where relkind = 'r' and relname not like 'pg_%'")
=> do
=>    echo "$(psql -tc "select encode(digest('$(psql -c '\d '${x}'' mer9188_test | tr -d \"\'\")', 'md5'), 'hex')" mer9188_test | grep -v "^$" | tr -d " "):${x}"
=> done > compare_list.lst

get_table_checksum.sql
Description: Binary data



[SQL] Numeric and CSV under 8.0.1 ?

2005-04-08 Thread Stef
Hello Everyone,
Currently, here at work, I am doing the whole
'advocacy' part of postgreSQL. It's not really hard to
do, as the other databases are MySQL and Sybase ;)

There is obviously a whole spate of data munging
going on in the background, and I noticed that psql in
8.0.1 now accepts CSV ! Great. 

Except it doesn't really appear to be 100% there.
Numerics wrapped in '...' don't appear to go in.
Is this a 'known problem' ?

Table "public.visitor_main"
  Column  | Type |  Modifiers   
--+--+--
 iuserid  | numeric(12,0)| not null
 firstname| character(25)| not null
 lastname | character(25)| not null


Sample Data line
'3236','Alonzo','Peter'

ERROR:  invalid input syntax for type numeric: "'3236'"
CONTEXT:  COPY visitor_main, line 1, column iuserid: "'3236'"


Thoughts ?
Regards
Steph




Re: [SQL] Numeric and CSV under 8.0.1 ?

2005-04-08 Thread Stef

Hello Keith,
well, I can understand that 3236 appears to be a
string, but I was under the impression that -everything-
in a CSV format file could have ' ' around it? Is this not
the case ?

Sorry if I am being completely insane here :)
Steph

On Fri, Apr 08, 2005 at 02:12:11PM -0400, Keith Worthington wrote:
> On Fri, 8 Apr 2005 12:51:47 -0400, Stef wrote
> > Hello Everyone,
> > Currently, here at work, I am doing the whole
> > 'advocacy' part of postgreSQL. It's not really hard to
> > do, as the other database's are MySQL and Sybase ;)
> > 
> > There is obviously a whole spat of data munging
> > going on in the background, and I noticed that psql in
> > 8.0.1 now accepts CSV ! Great.
> > 
> > Except, it doesn't really appear to be 100% there.
> > Numeric's wrapped in '...' don't want to appear to go in.
> > Is this a 'known problem' ?
> > 
> > Table "public.visitor_main"
> >   Column  | Type |  Modifiers
> > 
> > --+--+--
> >  iuserid  | numeric(12,0)| not null
> >  firstname| character(25)| not null
> >  lastname | character(25)| not null
> > 
> > Sample Data line
> > '3236','Alonzo','Peter'
> > 
> > ERROR:  invalid input syntax for type numeric: "'3236'"
> > CONTEXT:  COPY visitor_main, line 1, column iuserid: "'3236'"
> > 
> > Thoughts ?
> > Regards
> > Steph
> 
> Steph,
> 
> '3236' is a string not a numeric.  As I see it (novice that I am) you have
> three choices.  1) Write an external program (gawk, sed, etc.) to remove the
> quotes around that field.  2) Import the data into an intermediate table and
> then using an after trigger move and manipulate the data using CAST.  3)
> Import the data into your table using a BEFORE trigger and manipulate the data
> using CAST.
> 
> HTH
> 
> Kind Regards,
> Keith
> 
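
For what it's worth, two ways around this suggest themselves. COPY's CSV mode appears to take a QUOTE option, so pointing it at the single quote may work (hedged, untested here): `COPY visitor_main FROM '/tmp/visitors.csv' WITH CSV QUOTE AS '''';`. Alternatively, Keith's option 1, stripping the quotes externally, can be as simple as (file names hypothetical):

```shell
# Strips ALL single quotes, so embedded apostrophes
# (e.g. O'Brien) would be mangled too -- fine for purely numeric fields.
printf "'3236','Alonzo','Peter'\n" > visitors.csv
sed "s/'//g" visitors.csv > visitors_clean.csv
cat visitors_clean.csv
```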




Re: [SQL] converting epoch to timestamp

2005-10-26 Thread Stef
Rajesh Kumar Mallah mentioned :
=> Can anyone tell me how to convert epoch to timestamp ?
=> 
=> ie reverse of :
=> 
=> SELECT EXTRACT( epoch FROM  now() );
=> +------------------+
=> |    date_part     |
=> +------------------+
=> | 1130317518.61997 |
=> +------------------+

Here is one way (In my case I still had to add/subtract timezone diff)
select '19700101'::timestamp + foo.epoch::interval
  from (select extract(epoch from now()) || ' seconds' as epoch) foo;
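The same conversion can be sanity-checked outside the database; a minimal Python sketch of the epoch-to-timestamp direction, using the value from the post (timezone handling is the caller's problem, as noted above):

```python
from datetime import datetime, timezone

def epoch_to_timestamp(epoch: float) -> datetime:
    # Same idea as adding the epoch as an interval to '1970-01-01':
    # interpret the value as seconds since the Unix epoch, in UTC.
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

ts = epoch_to_timestamp(1130317518.61997)  # the date_part value above
```
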

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[SQL] Archival of Live database to Historical database

2001-01-29 Thread Stef Telford


Hello everyone,
I have hit a limit in my knowledge and i am looking for
some guidance. Currently I have two separate databases, one for
live data, the other for historical data. The only difference really
being that the historical one has a SERIAL in it, so that the tables
can keep more than one 'version history'.

What i would like to do is, after my insert transaction to the
live database, have the information also transferred to the
historical one. Now, I can do this via perl (and i have been doing it
this way) using two database handles, but this is rather clumsy and
I know there must be a 'better' or more 'elegant' solution.

So i stumbled onto triggers and functions. All well and good.
I create the trigger to fire off the procedure after a successful insert
into the table (and yes, i do have triggers on the 30 or so tables i use).

The problem arises in the procedure. I don't know the syntax
to reference another database. I assume there must be some way to
simply say (in a function) copy the data inserted into this database
into the other one as well.

If it helps any, the tables have the same name and all the same
fields (apart from the SERIAL in the historical version, but since that
auto-increments i wouldn't have to worry about it).

I am interested in ideas, code and pointers as to whether this is a
good idea or not. thank you.

Regards,
Steff
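The two-handle scheme described above can be sketched like this; sqlite3 (stdlib) stands in for the two Postgres connections, and the table name and columns are simplified stand-ins, not the real schema:

```python
import sqlite3

# Stand-ins for the live and historical database handles.
live = sqlite3.connect(":memory:")
hist = sqlite3.connect(":memory:")

live.execute("CREATE TABLE client (order_id INTEGER PRIMARY KEY, name TEXT)")
# The historical copy carries the extra SERIAL (history_id) so it can
# keep more than one version of each row.
hist.execute("CREATE TABLE client ("
             "history_id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "order_id INTEGER, name TEXT)")

def save_client(order_id, name):
    # Replace the row in the live database, then append a new version
    # to the historical one, mirroring the perl two-handle approach.
    live.execute("DELETE FROM client WHERE order_id = ?", (order_id,))
    live.execute("INSERT INTO client VALUES (?, ?)", (order_id, name))
    hist.execute("INSERT INTO client (order_id, name) VALUES (?, ?)",
                 (order_id, name))
    live.commit()
    hist.commit()

save_client(22, "STEFS")
save_client(22, "STEFS (updated)")

live_rows = live.execute("SELECT COUNT(*) FROM client").fetchone()[0]
hist_rows = hist.execute("SELECT COUNT(*) FROM client").fetchone()[0]
```

After the two calls, the live table holds one current row while the historical table holds both versions of order 22.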



Re: [SQL] Archival of Live database to Historical database

2001-01-30 Thread Stef Telford

Richard wrote:
> > Hello everyone,
> > I have hit on a limit in my knowledge and i am looking for
> > some guidance. Currently I have two seperate databases, one for
> > live data, the other for historical data. The only difference really
> > being that the historical data has a Serial in it so that the tables
> > can keep more than one 'version history'.
> >
> > What i would like to do, is after my insert transaction to the
> > live database, i would like the information also transferred to the
> > historical one. Now. I can do this via perl (and i have been doing 
> > this way) and using two database handles. This is rather clumsy and
> > I know there must be a 'better' or more 'elegant' solution.
> 
> Not really (AFAIK) - this crops up fairly regularly but there's no way 
> to do a cross-database query.
> 

After going through the mailing list archive, i can see
that yes, this is asked a lot, and that no, there is no
real solution to it at present. a shame, to be sure.

> You could use rules/triggers to set a "dirty" flag for each record 
> that needs copying - but it sounds like you're already doing that.
> 
> If you wanted things to be more "real-time" you could look at 
> LISTEN/NOTIFY

What i would ideally like to do is have the live database
fire a trigger after an insert, so that the data will also be
copied across using a function. However, if cross-database
functions or triggers are not possible, then i can't do this and
will have to stick with the current scheme (two database handles).

It's not pretty, but it works, which is the main thing.

Can i ask the postgreSQL powers that be: how hard would it be to
have the ability to reference different databases on the same
machine? I know it might make sense to have the two on separate
machines, but that would require hostname resolution and other
silly things. All that is really needed is the ability to reference
another database on the SAME machine.

Of course, i can see this is a loaded gun. It would be very easy
to do some very nasty things, and more than a few race conditions
spring to mind. Anyway, i look forward to getting screamed at for
such a silly, preposterous idea ;)

regards,
Steff



[SQL] Trigger Function and Html Output

2001-04-01 Thread Stef Telford

Hello,
i find i must turn once again to the list for help, with what
is probably another silly request.

I have a view (made of 4 tables) and now find that users
need to have the view differ based on some criteria from the
database. it's... well, it's rather icky to go into, but i can figure that
part out myself.

The long and short of it is this: I would like the view to
return an html table. I understand that I can do this via a function
and a trigger on select, massage the data, and then construct
each line to simply be returned to the perl program. The only
problem is that, as far as i am aware, a trigger returns and
is processed on a 'per line' basis for a select, so how would i
get the view's column titles output for the table header?

I ideally want to move as much of the construction of the
table as possible from perl into the database: the head of the table,
each row, and then of course closing the table 'off'. I know
this sounds like a strange way to do things, but i think this is the
best way to go.

many thanks
stefs

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] Trigger Function and Html Output

2001-04-01 Thread Stef Telford

Richard Huxton wrote:
> That's what the list is for, and it's only silly if you already know the
> answer :-)

well, i do agree that the only stupid question is the one not asked,
but there are some questions that are universally stupid, and i just
hoped this wasn't one of them ;>

> > The long and short of it is this, I would like the view to
> > return a html table. I understand that I can do this via a function
> > and a trigger on select, massage the data, and then construct
> > each line to simply be returned to the perl program. The only
> > problem comes with, as far as i am aware, a trigger returns and
> > is processed on a 'per line' basis for a select, so how would i
> > get the view's column titles output for the table header ?
>
> For this sort of stuff, I like to make as much use of Perl's DBI and
> HTML templating modules as I can. For the templating, it's trivial to
> write your own stuff for this sort of problem, but have a look at the
> various modules on CPAN - there'll be something there for you.

actually, I have been using the perl DBI and have written my own
templating software. maybe i should try to explain more. I have a
perl cgi which, until it is run, doesn't know the query. nothing new
there, as most queries are dynamic by nature, but this query comes
from the user preferences, which are stored in the database.

Each select works on a view. rather than hardcode the view into the
perl CGI, i would rather have the table header/column titles returned
as the first item as text/html (i know about the func procedure to get the
table_attributes), and then have all the formatting thereafter done by the
database for each row.

maybe i am naive in thinking this way, but surely a database function
formatting the returned string must be quicker than perl. (speaking
generically, of course; i concede that there are times when the reverse
is true)

i -am- a perl convert. i truly am. i would just prefer to take the massaging
of data and put it into a trigger function in the database. after all, i
would rather use the database than just have it as a large flat file ;)

> I've got to admit, I try to avoid using SELECT * if I can - maybe it's
> just me, but I've always felt if the database changes the code needs to
> break. I'd rather get errors than unexpected results. IMHO of course.

i don't like select * either, but i do see that there are some justified
uses for it. 'never say never', in my book ;)

> > I ideally want to make as much of the perl construction
> > of the table from the database, this includes the head of the table,
> > all and each row, and then of course closing the table 'off'. I know
> > this sounds like a strange way to do things, but i think this is the
> > best way to go.
>
> Definitely look at some of the general-purpose templating modules.
> They'll all handle tables.
>

thank you for the input; if i was just starting out, i would agree with
you. I can't really explain it any better than i have previously, but
hopefully you will see that i want to use the database to do this.

hopefully that isn't that strange a request ;)

many thanks,
stef
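The formatting being asked of the database can at least be pinned down as code. A sketch, in Python rather than perl or a plpgsql trigger, purely for illustration, of emitting the column-title header row followed by one table row per tuple; the column names and sample row are hypothetical stand-ins for the view's output:

```python
from html import escape

def html_table(columns, rows):
    # First the table head built from the view's column titles,
    # then one <tr> per result row, then the table closed 'off'.
    head = "<tr>" + "".join(f"<th>{escape(c)}</th>" for c in columns) + "</tr>"
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(v))}</td>" for v in row) + "</tr>"
        for row in rows)
    return f"<table>{head}{body}</table>"

# Hypothetical column names and row, standing in for the view's output.
html = html_table(["client_id", "name"], [("100-555", "STEFS")])
```
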

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] breakage in schema with foreign keys between 7.0.3 and 7.1

2001-04-18 Thread Stef Telford

Hello everyone

me again (apologies in advance :). I have been running a database
under 7.0.3 for some months now, and it has been fine. The tables all loaded
and it was working flawlessly. Then 7.1 came out, and I noticed it had outer
joins (which are a big win in one of the main views i use).

So, i started loading the schema into 7.1, but it seems to break.
Now, i have included the 3 tables below, but first i would like to explain
some of the design criteria behind them.

1) I need to have order_id as a primary key across the system (system key?)
   so that i can pull out rows based on an order_id. The same goes for
   history_id in the client.

2) I also need to have client_id as a secondary key across the system,
   as another application frontend references on client_id. it's icky, but it
   works.

3) i have taken out some of the non-important fields, so please don't tell
   me that i have over-normalised my data ;p

for some reason though, under 7.1, when trying to create the tables i
get this error -> UNIQUE constraint matching given keys for referenced table
"client" not found. I know what it is saying, but i don't quite understand
what has changed between 7.0.3 and 7.1.



CREATE TABLE action
(
    ORDER_ID      integer   PRIMARY KEY,
    ORDERTYPE     integer   NOT NULL,
    client_id     char(16)  NOT NULL,
    priority      integer   DEFAULT 5 NOT NULL,
    creation_id   name      DEFAULT user,
    creation_date datetime  DEFAULT now(),
    close_id      name      NULL,
    close_date    datetime  NULL,
    lock_id       name      NULL,
    lock_date     datetime  NULL
) \g

CREATE TABLE client
(
    ORDER_ID    integer   REFERENCES action (ORDER_ID)
                          ON UPDATE CASCADE
                          INITIALLY DEFERRED,
    history_id  SERIAL,
    active      boolean,
    client_id   char(16)  NOT NULL,
    change_id   name      DEFAULT USER,
    change_date datetime  DEFAULT NOW(),
    PRIMARY KEY (ORDER_ID, history_id)
) \g

CREATE TABLE client_dates
(
    ORDER_ID    integer   REFERENCES action (ORDER_ID)
                          ON UPDATE CASCADE
                          INITIALLY DEFERRED,
    LOCATION_ID integer   NOT NULL,
    history_id  integer   REFERENCES client (history_id)
                          ON UPDATE CASCADE
                          INITIALLY DEFERRED,
    active      boolean,
    client_id   char(16)  REFERENCES client (client_id)
                          ON UPDATE CASCADE
                          INITIALLY DEFERRED,
    dte_action  integer   NULL,
    change_id   name      DEFAULT USER,
    change_date datetime  DEFAULT NOW(),
    PRIMARY KEY (ORDER_ID, LOCATION_ID, history_id)
) \g


thank you. i know it's something almost smackingly obvious, but
i can't seem to understand why it was working and now isn't. i even went
through the changelog!

regards
    Stef

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] breakage in schema with foreign keys between 7.0.3 and 7.1

2001-04-18 Thread Stef Telford

Stephan Szabo wrote:
> Hmm, don't know why it's not in changelog, but the spec requires that
> the target fields of a foreign key constraint are themselves constrained
> by a unique or primary key constraint. 

maybe it's time for me to go and re-read the changelog with a fine-tooth
comb (i have been known to be blind to the obvious before, and
if this is the case then i more than apologise :)

> 7.0 didn't actually check this,
> but 7.1 does.  The reason for this is because while 7.0 would let you
> specify such a constraint, it wouldn't really work entirely properly
> if the field wasn't actually unique.  You'll need a unique constraint
> on client.client_id.

hhrrm. the only problem with -that- is that client_id by itself is not
unique, but in conjunction with order_id it is. order_id is wholly
unique. maybe i should just drop the foreign key on client_id then,
although i did want to use referential integrity on client_id on
an insert.

although, now that i think about it, the criteria for having the changes
on client_id cascade are totally gone, and i could (read: will) just
use a 'references' column.

in short, thank you. i have just figured out what an idiot i have been
(again, i hear you all say ;)

many thanks, and good work on postgresql 7.1; it seems to be quite
a bit quicker (and praise the lord for outer joins =)

stefs
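For the record, the shape of the fix being settled on here can be sketched as DDL. This is a guess at the intended schema, not tested against 7.1, and it takes at face value the claim above that (ORDER_ID, client_id) is the combination that is actually unique: declare that pair UNIQUE in client, and make the child table reference the pair rather than client_id alone.

```sql
-- Sketch only: client_id alone is not unique, but (ORDER_ID, client_id)
-- is claimed to be, so constrain the pair and reference the pair.
CREATE TABLE client
(
    ORDER_ID    integer   REFERENCES action (ORDER_ID)
                          ON UPDATE CASCADE INITIALLY DEFERRED,
    history_id  SERIAL,
    client_id   char(16)  NOT NULL,
    PRIMARY KEY (ORDER_ID, history_id),
    UNIQUE (ORDER_ID, client_id)
) \g

CREATE TABLE client_dates
(
    ORDER_ID    integer,
    client_id   char(16),
    FOREIGN KEY (ORDER_ID, client_id)
        REFERENCES client (ORDER_ID, client_id)
        ON UPDATE CASCADE INITIALLY DEFERRED
) \g
```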

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] Referential Integrity Question (Delete/Insert during Transaction)

2001-06-18 Thread Stef Telford

hello again everyone,

I seem to have hit what i -think- may be a bug (but i am not crying
wolf just yet ;).

I have three tables: action, client and order_details. action has a primary
key 'order_id'; client references action (along with adding client_id as
part of its own primary key); order_details references client (both parts of
the primary key there). all foreign keys (order_id in client, and order_id
plus client_id in order_details) are set to INITIALLY DEFERRED. so far so
good, i hope.

Now, i have a trigger that fires on insert, so i delete from the live
database and then insert the changes, rather than doing an update. not
great, but it shouldn't be a problem.

The problem comes when i do this:

mms_post=# BEGIN;
BEGIN
mms_post=# DELETE from client WHERE order_id = 22;
DELETE 1
mms_post=# INSERT INTO client
mms_post-# (cli_business_name, cli_chain_id, cli_business_type, cli_short_name,
mms_post-#  cli_sic, order_id, client_id, cli_agent_bank_id, cli_operating_name,
mms_post-#  creation_id, cli_web_page, cli_tcc, creation_date)
mms_post-# VALUES ('STEFS', '100-0333', 1, 'FHASDLKJH HFAKSDJ HKALSDJ', 2534, 22,
mms_post-#  '100-555', 230, 'FHASDLKJH HFAKSDJ HKALSDJFH', 'jack',
mms_post-#  '[EMAIL PROTECTED]', 'R', '2001-06-18 13:46:45-04');
INSERT 24211 1
mms_post=# COMMIT;
ERROR:  referential integrity violation - key in client still
referenced from order_details


Now, the way i understand it, shouldn't the integrity of any foreign keys
be checked at the -end- of the transaction, after all the commands have been
processed? it seems that the DELETE is being processed and rejected, but
the foreign key would be 'okay' due to the following INSERT. I have tried
SET CONSTRAINTS as well, with no difference :\

Does this make any sense, or am i completely mad? (more than likely)

regards,
Stefs.
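The reading above matches the standard SQL semantics: an INITIALLY DEFERRED constraint is only checked at COMMIT, so deleting and re-inserting the same key inside one transaction should pass. A sketch of that expected behaviour using sqlite3 (stdlib, which implements deferrable foreign keys) as a stand-in; whether a given Postgres 7.x release actually defers the check this way is exactly the open question in this thread:

```python
import sqlite3

# autocommit mode; the transaction is controlled explicitly below
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("PRAGMA foreign_keys = ON")
db.execute("CREATE TABLE client (order_id INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE order_details (
    order_id INTEGER REFERENCES client (order_id)
             DEFERRABLE INITIALLY DEFERRED)""")
db.execute("INSERT INTO client VALUES (22)")
db.execute("INSERT INTO order_details VALUES (22)")

# Delete and re-insert the referenced key inside one transaction; the
# deferred FK is only checked at COMMIT, when the key exists again.
db.execute("BEGIN")
db.execute("DELETE FROM client WHERE order_id = 22")
db.execute("INSERT INTO client VALUES (22)")
db.execute("COMMIT")  # succeeds under deferred semantics

remaining = db.execute("SELECT COUNT(*) FROM client").fetchone()[0]
```
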

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster