Hello Everyone,
Currently, here at work, I am doing the whole
'advocacy' part of PostgreSQL. It's not really hard to
do, as the other databases are MySQL and Sybase ;)
There is obviously a whole spate of data munging
going on in the background, and I noticed that psql in
8.0.1 now
:11PM -0400, Keith Worthington wrote:
On Fri, 8 Apr 2005 12:51:47 -0400, Stef wrote
Hello Everyone,
Currently, here at work, I am doing the whole
'advocacy' part of PostgreSQL. It's not really hard to
do, as the other databases are MySQL and Sybase ;)
There is obviously
some exotic definitions that I didn't test for,
but I think it's pretty solid as it is here.
Kind Regards
Stefan
Stef mentioned :
= Here's my final solution that runs in less than a minute for +- 543 tables :
= for x in $(psql -tc select relname from pg_class where relkind = 'r' and
relname
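(The command above is truncated in the archive; a sketch of the catalog query such a loop would start from, with the system-catalog exclusion being an assumption since the original WHERE clause is cut off:)

```sql
-- List all ordinary tables (relkind = 'r'), skipping the system
-- catalogs. The exclusion pattern is an assumption; the original
-- command is truncated in the archive.
SELECT relname
  FROM pg_class
 WHERE relkind = 'r'
   AND relname NOT LIKE 'pg%';
```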
Jim Buttafuoco mentioned :
= I use dblink to attach to both databases and query pg_namespace, pg_class,
pg_attribute ... to get the diffs. See
= attached as an example. look for the dblink_connect lines to specify your
database. You will need to install
= contrib/dblink. I used this with
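(Jim's attachment is not preserved in the archive; a minimal sketch of the approach he describes, assuming contrib/dblink is installed and using an illustrative connection and database name:)

```sql
-- Minimal dblink sketch (connection and database names are
-- illustrative only). Open a named connection to the master database,
-- then diff the remote table list against the local one.
SELECT dblink_connect('masterconn', 'dbname=master');

SELECT relname
  FROM dblink('masterconn',
              'SELECT relname FROM pg_class WHERE relkind = ''r''')
       AS remote(relname name)
EXCEPT
SELECT relname FROM pg_class WHERE relkind = 'r';

SELECT dblink_disconnect('masterconn');
```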
Hi all,
I've got a master database with many other databases that
all have (or are supposed to have) the exact same schema
as the master database (the master database is basically an empty
template database containing the schema definition).
The problem is that none of the schemas actually
Markus Schaber mentioned :
= There are (at least) two independently developed pgdiff applications,
= they can be found at:
=
= http://pgdiff.sourceforge.net/
=
= http://gborg.postgresql.org/project/pgdiff/projdisplay.php
Thanks a lot!
= I did not try the first one, but the latter one worked on
[EMAIL PROTECTED] mentioned :
= Are you just synching the schemas, or do you also need to synch the data?
Schemas now, data later.
To do the data part, I'm thinking of using Slony, because it seems to be able to
do pretty much everything I need from that side. But, unfortunately I can't
even
John DeSoi mentioned :
= Develop a function that builds a string describing the tables/schemas
= you want to compare. Then have your function return the md5 sum of the
= string as the result. This will give you a 32 character value you can
= use to determine if there is a mismatch.
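(A sketch of John's idea, using a 7.x-style quoted plpgsql body; the function name and the choice of columns in the signature string are illustrative. Note md5() is built in only from 7.4 on; older servers would need contrib or a client-side checksum.)

```sql
-- Build one string describing a table's columns in attnum order, and
-- return its md5, giving a single 32-character value per table that
-- two databases can compare. Names here are illustrative.
CREATE OR REPLACE FUNCTION table_signature(name) RETURNS text AS '
DECLARE
    tname ALIAS FOR $1;
    sig   text := '''';
    r     record;
BEGIN
    FOR r IN SELECT a.attname, t.typname
               FROM pg_attribute a, pg_class c, pg_type t
              WHERE c.relname = tname
                AND a.attrelid = c.oid
                AND a.atttypid = t.oid
                AND a.attnum > 0
              ORDER BY a.attnum
    LOOP
        sig := sig || r.attname || '' '' || r.typname || '';'';
    END LOOP;
    RETURN md5(sig);
END;
' LANGUAGE plpgsql;
```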
OK, this
Tom Lane mentioned :
= The problem I have with this, is that I have to run the command per table,
=
= Why?
=
= If the problem is varying order of table declarations, try 8.0's
= pg_dump.
Yes, this will solve the global schema check, but I will still need to split
it into per-table dumps, to
John DeSoi mentioned :
= I'm not sure you can use \d directly, but if you startup psql with the
= -E option it will show you all the SQL it is using to run the \d
= command. It should be fairly easy to get the strings you need from the
= results of running a similar query. The psql source is a
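(The real query psql issues for \d is considerably more elaborate; a simplified version of the sort of thing `psql -E` reveals, with a hypothetical table name:)

```sql
-- Simplified version of the catalog query behind \d: column names,
-- types, and NOT NULL flags for one table ('mytable' is hypothetical).
SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS coltype,
       a.attnotnull
  FROM pg_attribute a, pg_class c
 WHERE c.relname = 'mytable'
   AND a.attrelid = c.oid
   AND a.attnum > 0
 ORDER BY a.attnum;
```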
Andrew Sullivan mentioned :
= I'm not sure why you want to do the former, but in any case, it's
Because lazy people write inserts without specifying column names.
= possible by creating a new table which has things the way you want;
= select all the old data from the old table into the new table
way around this?
Stef
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Hi Marcus,
Here's something interesting for you :
http://www.gnoppix.org/
It looks like it may be easier to install, as it's
entirely geared towards installation rather than just booting.
It says it's based on woody, so I don't know if the 2.6 kernel is
a boot option here. Maybe check it
Erm .. sorry list people. This one slipped to the wrong address...
Stef mentioned :
= Hi Marcus,
=
= Here's something interesting for you :
= http://www.gnoppix.org/
=
= It looks like it may be easier to install, as it's
= entirely geared towards installation rather than just booting
Hi all,
I've narrowed my problem down to the following
Java Timestamp that I managed to insert into
a postgres 7.3.4 database :
Timestamp : '1475666-11-30 02:00:00.0'
My problem is, that when I try and select from the table I inserted
this timestamp into, I get the following error :
ERROR:
Hello Eric,
Are you looking for something like :
select to_char(timestamp 'now',' MM DD HH MI SS');
or the values in your example below :
select to_char(timestamp '20041010 00:00:00',' MM DD HH MI SS');
Eric Lemes mentioned :
= Hello there,
=
= I'm having a little trouble with postgresql
of : 16 Feb 2004
and hides the bottom of my e-mail folder.
Cheers
Stef
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
though, and managed to create the query with a nested
nullif and coalesce to make my query fail if there are 0 rows matching for
an update or delete.
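(Stef's actual query is not shown; a guess at the shape of the trick, relying on COALESCE evaluating its second argument only when the first is NULL, with entirely illustrative table and column names:)

```sql
-- Guess at the shape of the trick (not the original query): if the
-- target row exists, COALESCE returns its id and the second argument
-- is never evaluated; if it does not, the SELECT 1/0 is evaluated and
-- the division-by-zero error aborts the UPDATE instead of letting it
-- silently match 0 rows.
UPDATE accounts
   SET balance = balance - 10
 WHERE id = COALESCE(
             NULLIF((SELECT min(id) FROM accounts WHERE id = 42), 0),
             (SELECT 1/0));
```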
Cheers
Stef
---(end of broadcast)---
TIP 8: explain analyze is your friend
Hi all,
I've been looking on Google, the archives and documentation,
but cannot find what I'm looking for. Maybe I've read the
answer, but it's still evading me.
I'm working with postgres 7.3.4 using the psql client.
I want to know if it's possible to raise an error in a
transactional
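(On 7.3, the usual answer is a plpgsql wrapper around RAISE EXCEPTION, which also aborts the surrounding transaction block; a sketch with an illustrative function name, using the 7.x quoted-body style:)

```sql
-- Raise an arbitrary error from SQL by wrapping RAISE EXCEPTION in a
-- plpgsql function. Calling it inside BEGIN ... aborts the transaction.
CREATE OR REPLACE FUNCTION raise_error(text) RETURNS integer AS '
DECLARE
    msg ALIAS FOR $1;
BEGIN
    RAISE EXCEPTION ''%'', msg;
    RETURN 0;  -- never reached
END;
' LANGUAGE plpgsql;

-- Usage:
-- BEGIN;
-- SELECT raise_error('something went wrong');  -- aborts the transaction
```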
Hi all,
I'm trying to find out if there is a specific setting
to make transactions time out faster in a scenario
where there's an update on a table in a transaction
block, and another update process tries to update
the same column.
It looks like the second process will wait until you
end the
Hi all,
It seems I always find a solution just after
panicking a little bit.
Anyway, I found that statement_timeout solved
my problem. When I tested it earlier, I actually
made an error, and skipped it as a possible
solution.
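(For reference, statement_timeout is per-session and measured in milliseconds, with 0 disabling it; a sketch of the scenario, with illustrative table and column names:)

```sql
-- With a timeout set, the blocked updater errors out instead of
-- waiting indefinitely on the other transaction's row lock.
SET statement_timeout = 5000;   -- give up after 5 seconds

BEGIN;
-- Blocks if another transaction already updated this row and has not
-- committed; cancelled with an error once the timeout expires.
UPDATE stock SET qty = qty - 1 WHERE item = 'widget';
```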
Cheers
Stef
Stef mentioned :
= Forgot to mention that I use
Hi all,
I've switched on log_statement in postgresql.conf
for debugging purposes. I tried logging connections
and pids as well, but I need to match up the logged statements
to specific connections.
The postmaster logs to a separate log file, but at the moment
it's impossible to tell which
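(One way to line the two up in the 7.3/7.4 era is to prefix every log line with the backend pid; a postgresql.conf sketch, noting that these settings were reworked in 8.0, where log_line_prefix = '%p ' serves the same purpose:)

```
# postgresql.conf fragment (7.3/7.4-era setting names)
log_statement   = true
log_connections = true
log_pid         = true    # prefix every log line with the backend pid
log_timestamp   = true
```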
and ... but that's a bit of
a pain if there is already such a thing in existence.
regards
Stef
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
to this ex-feature
I'm 80% sure no one actually inserted these 'invalid' values into tables,
and what I need to know is :
What else can insert a value of 'invalid' into a 7.1.2
timestamp with timezone type column.
Was there a function or something similar?
Regards
Stef
Hi all,
I'm trying to create some kind of table version control
system for approximately 300 postgres databases
ranging in version from 7.1.2 to 7.3.4.
I compared the pg_dump -s output between
the various versions of databases, but the format is inconsistent,
and I can't do diffs to check
Correction on the function :
The function currently on the database has
select int4(description) + 1 into v_new_version from pg_description
where objoid = NEW.attrelid;
instead of
select int4(description) into v_new_version from pg_description
where objoid =
Thanks guys,
I had a feeling this was the case, but wasn't sure.
The one-version pg_dump looks like a winner.
Regards
Stefan
= Rod Taylor [EMAIL PROTECTED] writes:
= What I did next, is put a trigger on pg_attribute that should, in theory,
= on insert and update, fire up a function
On Fri, 8 Aug 2003 09:24:48 -0700
Jonathan Gardner [EMAIL PROTECTED] wrote:
= Try the performance list.
Thanks for the tip
Stef
---(end of broadcast)---
TIP 8: explain analyze is your friend
Hi all,
I have a problem :
A select statement that selects from 7 tables,
groups the information by 6 columns of the
tables involved.
When there are no rows in pg_statistics,
the query runs under 3 minutes.
When I analyze the biggest table of the 7
(approx 100 rows), the query takes longer
Hello,
I find I must turn once again to the list for help, in what
is probably another silly request.
I have a view (made of 4 tables) and now find that users
need to have the view differ based on some criteria from the
database. It's... well, it's rather icky to go into, but I
that I want to use the database to do this.
Hopefully that isn't that strange a request ;)
many thanks,
stef
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so
Hello everyone,
I have hit on a limit in my knowledge and I am looking for
some guidance. Currently I have two separate databases, one for
live data, the other for historical data. The only difference really
being that the historical data has a Serial in it so that the tables
can keep