[ADMIN] Circular Dependency in Tables and Deletion of Data
Hello,

Note: I know that this is not very good database design, but what if such a scenario comes up in front of you? I have created an uncommon database design, as follows. I created two tables:

TableA with two attributes, A1 and A2. A1 is the primary key.
TableB with two attributes, B1 and B2. B1 is the primary key, and B2 references TableA.A1.

I then altered TableA to add one more foreign key constraint: TableA.A2 references TableB.B1. This creates a circular dependency. Now that I have added one record to each table:

Will it allow me to delete the records without a schema change? If yes, how? I cannot drop the tables either. Is this a useless feature, or where is it used? Can PostgreSQL detect it automatically and restrict it? Do other databases restrict this kind of thing?

Sample code is given below for trial purposes:

create table TableA (
    A1 integer primary key,
    A2 integer
);

create table TableB (
    B1 integer primary key,
    B2 integer references TableA(A1)
);

ALTER TABLE TableA ADD CONSTRAINT distfk
    FOREIGN KEY (A2) REFERENCES TableB(B1) MATCH FULL;

INSERT INTO TableA VALUES (1, null);
INSERT INTO TableB VALUES (100, 1);
UPDATE TableA SET A2 = 100 WHERE A1 = 1;
DELETE FROM TableA;

Isn't it a crazy thing?

Regards,
Mans

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
Re: [ADMIN] Circular Dependency in Tables and Deletion of Data
On Monday, 24 July 2006 at 14:53, Mans wrote:
> Isn't it a crazy thing?

Yes, but with deferrable constraints and transaction blocks, it should be no problem to manage the data.

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [ADMIN] Circular Dependency in Tables and Deletion of Data
Hi Peter,

Can you please give some more detailed information about that?

Regards,
Mans

Peter Eisentraut wrote:
> On Monday, 24 July 2006 at 14:53, Mans wrote:
> > Isn't it a crazy thing?
>
> Yes, but with deferrable constraints and transaction blocks, it should be no problem to manage the data.
>
> -- Peter Eisentraut http://developer.postgresql.org/~petere/
[ADMIN] btree index - incorrect results
Hi,

Using this query plan, an extra uid shows up in this example. We are in the process of upgrading from 8.1.0 to 8.1.4. Does anyone see anything wrong with this query plan that might be causing a problem?

Index Scan using pk_recent_projects on recent_projects (cost=0.00..5.81 rows=1 width=6)
  Index Cond: ((user_id = 139) AND (project_id = 3))

... we have a 'hot' table (one with many, many transactions) that gets inserted into and deleted from often. About once a month now, when we do a select from that table, the results of the select do not match the where clause. For example:

select * from recent_projects where user_id = 139

sometimes produces these results:

 user_id | project_id
---------+------------
     139 |          3
     139 |          1
     139 |         17
     754 |         11

Tom writes:
> Hmm, that looks sorta familiar. What is the query plan that's used for this SELECT?
> regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend

Hey Tom,

Here is the query:

DELETE FROM recent_projects WHERE project_id = 3 AND user_id = 139;

And here is the query plan:

Index Scan using pk_recent_projects on recent_projects (cost=0.00..5.81 rows=1 width=6)
  Index Cond: ((user_id = 139) AND (project_id = 3))

The table definition is:

CREATE TABLE recent_projects (
    user_id int4 NOT NULL,
    project_id int4 NOT NULL,
    last_viewed timestamp,
    CONSTRAINT pk_recent_projects PRIMARY KEY (user_id, project_id),
    CONSTRAINT fk_recent_project_id FOREIGN KEY (project_id)
        REFERENCES project (project_id) MATCH SIMPLE
        ON UPDATE RESTRICT ON DELETE CASCADE,
    CONSTRAINT fk_recent_user_id FOREIGN KEY (user_id)
        REFERENCES users (user_id) MATCH SIMPLE
        ON UPDATE RESTRICT ON DELETE RESTRICT
) WITHOUT OIDS;

This is a table with a lot of transactions. The behavior we noticed is that we do the delete as specified above and then do a reinsert with a new timestamp and everything else the same (lazy, I know, but not my code). What happens some of the time is that the reinsert fails and gives a duplicate key failure.
What has fixed this in the past is reindexing the table - but we don't want to rely on that forever. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
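For reference, the reindex workaround mentioned above would look something like the following, using the table and index names from the schema posted earlier in the thread. Note that REINDEX takes an exclusive lock on the table while it runs:

```sql
-- Rebuild every index on the table (locks recent_projects while it runs):
REINDEX TABLE recent_projects;

-- Or rebuild just the primary-key index that is returning wrong results:
REINDEX INDEX pk_recent_projects;
```

This only repairs the damaged index; it does not address whatever is corrupting it in the first place, which is why the poster does not want to rely on it forever.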
Re: [ADMIN] Circular Dependency in Tables and Deletion of Data
On Mon, 24 Jul 2006, Mans wrote:
> Note: I know that this is not very good database design, but what if such a scenario comes up in front of you? I have created an uncommon design: TableA with two attributes A1 and A2, where A1 is the primary key; and TableB with two attributes B1 and B2, where B1 is the primary key and B2 references TableA.A1. I then altered TableA to add one more foreign key constraint: TableA.A2 references TableB.B1. This creates a circular dependency. Now that I have added one record to each table, will it allow me to delete without a schema change? If yes, how? Is this a useless feature, or where is it used? Can PostgreSQL detect it automatically and restrict it?
>
> create table TableA ( A1 integer primary key, A2 integer );
> create table TableB ( B1 integer primary key, B2 integer references TableA(A1) );
> ALTER TABLE TableA ADD CONSTRAINT distfk FOREIGN KEY (A2) REFERENCES TableB(B1) MATCH FULL;

You need to have one or both of the constraints marked at least as deferrable. If you make the constraint initially deferred, then it will automatically wait until transaction end to check that constraint (thus you can use a second statement in the same transaction to remove the related rows). If you leave it as initially immediate, then you can use SET CONSTRAINTS to make it temporarily deferred.

Also, the schema you gave should not prevent you from removing rows whose referencing column is NULL, which is potentially useful for some people.
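A minimal sketch of the deferred-constraint approach described above, using the sample schema and data from the original post (the DROP/ADD assumes distfk was created non-deferrable, as it was there):

```sql
-- Recreate the circular foreign key as deferrable and initially deferred,
-- so it is checked at COMMIT rather than after each statement:
ALTER TABLE TableA DROP CONSTRAINT distfk;
ALTER TABLE TableA ADD CONSTRAINT distfk
    FOREIGN KEY (A2) REFERENCES TableB(B1)
    MATCH FULL DEFERRABLE INITIALLY DEFERRED;

-- Now both rows of the cycle can be removed inside one transaction:
BEGIN;
DELETE FROM TableB WHERE B1 = 100;  -- would fail if checked immediately
DELETE FROM TableA WHERE A1 = 1;    -- removes the row referencing TableB
COMMIT;                             -- deferred constraints are checked here
```

With an initially immediate but DEFERRABLE constraint, the same effect can be had by issuing SET CONSTRAINTS distfk DEFERRED inside the transaction before the deletes.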
Re: [ADMIN] Unexplained growth of tables
Thanks to all who helped on this one, for I was going insane. I am now auto-vacuuming every 30 seconds and have changed the parameters so the vacuuming takes place much more aggressively. Performance has increased, and table size appears to have been addressed.

For the newbies like myself who may make the same mistake: I assumed that autovacuum was going to adversely affect performance if it ran often, so I extended the time between autovacuum cycles. BIG MISTAKE! I assume in many applications the net effect may not be huge, but in our application, where we currently have more than 10 million insert/update transactions per day, it definitely made a difference. I pulled my hair long on this one until the light bulb came on and I figured out it must be related to MVCC and the vast amount of change in the database at any given time.

Once again, thanks to all of you who were kind enough to point me in the right direction.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Benjamin Krajmalnik
Sent: Sunday, July 23, 2006 2:06 PM
To: Tom Lane; Peter Eisentraut
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Unexplained growth of tables

I am running 8.1.4 on Windows. Quick question - I assume CLUSTER will lock the tables, correct? In my scenario, where the number of live rows is very small but the number of dead rows is huge, how long will this take? I have a problem in that this database is a live monitoring system which I cannot take down for too long. The only database on this server (other than the postgres database) is this database. From your experience, based on the activity of this database, how often should I run autovacuum?
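For reference, the kind of autovacuum tuning described above is done in postgresql.conf. The values below are purely illustrative (not the poster's actual settings), using the parameter names of the integrated autovacuum in PostgreSQL 8.1:

```
# postgresql.conf (PostgreSQL 8.1) -- illustrative values only
autovacuum = on
autovacuum_naptime = 30                 # seconds between autovacuum runs
autovacuum_vacuum_threshold = 500       # min dead tuples before a vacuum
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size added to threshold
```

Lowering the naptime and the thresholds makes autovacuum fire more often and on smaller amounts of churn, which is what "more aggressive" means here.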
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Sunday, July 23, 2006 11:48 AM
To: Peter Eisentraut
Cc: pgsql-admin@postgresql.org; Benjamin Krajmalnik
Subject: Re: [ADMIN] Unexplained growth of tables

Peter Eisentraut [EMAIL PROTECTED] writes:
> Benjamin Krajmalnik wrote:
> > What can be causing this growth? Not vacuuming often enough?
> That is exactly the reason.
> > I have pg_autovacuum running every 60 seconds. These tables have 10-15 insert/update statements per second.
> You should probably run VACUUM FULL to get the table back to a normal size and then closely monitor what pg_autovacuum actually does.

CLUSTER might be a better answer, since VACUUM FULL won't do anything to help shrink the indexes. As for the pg_autovacuum frequency, realize that pg_autovacuum processes one database per firing. So if you have N databases, any one database is going to be looked at every N*60 seconds, not every 60 seconds. You might need a shorter autovacuum cycle.

Lastly, what PG version is this? We had some bugs that kept autovacuum from being fully informed in some cases, but AFAIK they're all fixed in the latest minor releases.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?
       http://archives.postgresql.org
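The two recovery options Tom contrasts can be sketched as follows; the table and index names (yourtable, yourtable_pkey) are hypothetical stand-ins. Both commands take an exclusive lock on the table while they run:

```sql
-- Option 1: VACUUM FULL compacts the table but does not shrink its indexes,
-- so rebuild them separately afterwards:
VACUUM FULL yourtable;
REINDEX TABLE yourtable;

-- Option 2: CLUSTER rewrites the table in index order and rebuilds the
-- indexes in the same pass (8.1 syntax: CLUSTER indexname ON tablename):
CLUSTER yourtable_pkey ON yourtable;

ANALYZE yourtable;  -- refresh planner statistics after either option
```

For a table with few live rows and many dead ones, CLUSTER is typically fast, since it only copies the live rows to a fresh file.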
[ADMIN] Alter table set statistics with partitioned tables
I have had to bump the stats target on a partitioned table in order to get the planner to use an index over a seqscan. This has worked well in making the system perform where it needs to, as it reduced one query's execution time from 45 seconds to 1 second. The one problem I have run into is that when I create a new child table / table partition, the stats value (attstattarget in pg_attribute) does not get carried over from the parent - instead the system default (-1) is set. Is it supposed to behave this way - meaning that I need to explicitly define the custom value for each child table? Or is there a way for this to be implicitly copied when the child table is created? I prefer not to change the system default on all the tables in the database when the value needs to be increased for only one case. Thanks, Kevin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
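The explicit per-child workaround the poster describes would look roughly like this; parent, child_2006_07, some_col, and the target of 200 are all hypothetical names and values for illustration:

```sql
-- The parent's per-column statistics target is NOT inherited,
-- so it must be repeated for each new child partition:
CREATE TABLE child_2006_07 () INHERITS (parent);
ALTER TABLE child_2006_07 ALTER COLUMN some_col SET STATISTICS 200;
ANALYZE child_2006_07;  -- collect statistics at the new target
```

The SET STATISTICS value only takes effect at the next ANALYZE (or autovacuum-triggered analyze), which is why the explicit ANALYZE follows it.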
[ADMIN] problem backup/restore PSQL DB
Hi all,

I am trying to back up a psql database on one system (Debian) and restore it to another system (Ubuntu) which has a freshly installed psql db. However, I got the following errors. Can anyone give me some help with solving this? Many thanks.

-OME Restore-
\_ Extracting postgres database ome and checking archive version
/bin/tar --preserve-permissions --same-owner --directory /tmp -xf 'ome_backup_2006-07-24.tar' OMEmaint omeDB_backup
\_ Checking archive for OMEIS files
/bin/tar -tf 'ome_backup_2006-07-24.tar' Files/lastFileID
\_ Restoring postgres database ome
Checking database
Database ome (version 2.23) was found and it will be overwritten. Continue ? [y/n]: y
su postgres -c '/usr/bin/createuser --adduser --createdb ome'
createuser: creation of new user failed: ERROR: user ome already exists
su postgres -c '/usr/bin/createdb -T template0 ome'
CREATE DATABASE
su postgres -c '/usr/bin/pg_restore -O -d ome --use-set-session-authorization /tmp/omeDB_backup'
pg_restore: [archiver (db)] could not execute query: ERROR: user rajesh does not exist
Re: [ADMIN] problem backup/restore PSQL DB
It looks like it's a user configuration issue. I have hit user-setup issues while restoring DBs before, and generally I find it useful to create the users and/or groups manually on the new server first, before restoring the DBs, whenever I hit problems like that. Try syncing the users manually (assuming that there are only a few and that it could be done manually with some level of ease) and then try restoring again.

-Paul

-- View this message in context: http://www.nabble.com/problem-backup-restore-PSQL-DB-tf1995768.html#a5478199
Sent from the PostgreSQL - admin forum at Nabble.com.
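A sketch of the manual user sync suggested above. On the destination server, create the user the restore complains about before running pg_restore again (rajesh is the user from the error message; any role attributes are guesses and should match the old server):

```sql
-- Run as a superuser on the destination server:
CREATE USER rajesh;

-- Alternatively, copy all users/groups from the old server in one step:
--   pg_dumpall -g > globals.sql      (run on the old Debian server)
--   psql -f globals.sql postgres     (run on the new Ubuntu server)
```

pg_dumpall -g dumps only the global objects (users, groups, tablespaces), which is usually the cleanest way to reproduce them on a fresh installation before restoring individual databases.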