Re: [GENERAL] "duplicate key violates unique constraint" error
On Wed, Nov 11, 2009 at 10:12 PM, tamanna madaan wrote:
> Hi All
>
> I have a cluster setup with one master and one slave. Replication from
> master to slave is not taking place. I am getting this error "duplicate
> key violates unique constraint" in my slon.log on the slave. This error
> is thrown while slon is inserting a row in a table on the slave. This
> must be because duplicate rows are being returned while querying the
> sl_log_1 table. I googled about the same problem and found that there is
> some bug in postgres due to which some table or index on that table gets
> corrupted.

If your underlying file system is unreliable and results in a corrupted index that allows PostgreSQL to insert duplicate rows, that is NOT a bug in PostgreSQL; it is a failing in your hardware / OS that you need to get fixed. PostgreSQL can't be blamed for errors created by bad hardware, and no amount of coding can overcome that deficit.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
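If the suspicion is a corrupted index rather than genuinely duplicated heap rows, rebuilding the index is one way to test that theory. A hedged sketch — the cluster schema name `_yourcluster` is a placeholder for the OP's actual Slony cluster name:

```sql
-- Rebuild the indexes on the replication log table. If the heap really
-- does contain duplicate rows, REINDEX will fail with the same
-- "duplicate key" error, telling you the data (not just the index)
-- is damaged. Schema name below is illustrative.
REINDEX TABLE _yourcluster.sl_log_1;
```

As Scott points out, though, if corruption came from bad hardware, a reindex only papers over the symptom until it happens again.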
Re: [GENERAL] "duplicate key violates unique constraint" error
tamanna madaan wrote:
> I am getting this error "duplicate key violates unique constraint" in
> my slon.log on the slave. This error is thrown while slon is inserting
> a row in a table on the slave. This must be because duplicate rows are
> being returned while querying the sl_log_1 table.

And are there any errors in the PostgreSQL logs? Does this table contain duplicates?

> I googled about the same problem and found that there is some bug in
> postgres due to which some table or index on that table gets corrupted.

What bug? Do you have a number or mailing-list reference?

> I am using postgres 8.1.2 and slony 1.1.5.
>
> Please suggest which version of postgres has a fix for the above
> mentioned problem.

I'm not sure it's clear what problem we're talking about here. However, you are missing *16* updates for PostgreSQL (8.1.18) and four for slony (1.1.9).

First step - get a fresh cup of tea or coffee, read through the release notes between 8.1.2 and 8.1.18 and, once you are happy, upgrade. Do the same for slony.

Then, if the slony sl_log table(s) have errors, I'd probably restart the whole replication from scratch. I'd never be happy that I'd found all the problems and corrected them. If you are starting the replication from base, it might make sense to upgrade slony to 1.2.17 while you're doing so.

HTH

--
Richard Huxton
Archonet Ltd
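Richard's question ("Does this table contain duplicates?") can be answered directly. A hedged sketch — the schema name `_yourcluster` is a placeholder, and the grouping columns follow the usual Slony-I 1.x sl_log_1 layout, which may differ in your version:

```sql
-- Look for rows in the replication log that would collide on the
-- unique key. Any row returned here means the heap itself contains
-- duplicates, not just a misbehaving index.
SELECT log_origin, log_xid, log_actionseq, count(*)
  FROM _yourcluster.sl_log_1
 GROUP BY log_origin, log_xid, log_actionseq
HAVING count(*) > 1;
```

Note that if only the index is corrupted, this sequential-scan query may return nothing even though indexed lookups misbehave.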
[GENERAL] "duplicate key violates unique constraint" error
Hi All

I have a cluster setup with one master and one slave. Replication from master to slave is not taking place. I am getting this error "duplicate key violates unique constraint" in my slon.log on the slave. This error is thrown while slon is inserting a row in a table on the slave. This must be because duplicate rows are being returned while querying the sl_log_1 table.

I googled about the same problem and found that there is some bug in postgres due to which some table or index on that table gets corrupted. Due to this, duplicate rows are either stored in the table, or there are no duplicate rows but duplicates are returned while querying the table.

I am using postgres 8.1.2 and slony 1.1.5. Please suggest which version of postgres has a fix for the above-mentioned problem.

Thanks...
Tamanna
Re: [GENERAL] duplicate key violates unique constraint
2009/8/8:
> Hello all,
>
> I have a table named "t_i_shift_shadow" with the following spec:
>
>       Column      |         Type          |  Modifiers
>  -----------------+-----------------------+-----------------------------
>  cid             | character varying(20) | not null
>  shift_rev_id    | character varying(20) | not null
>  start_time      | bigint                | not null
>  end_time        | bigint                | not null
>  lunch_from_time | bigint                | default -1
>  lunch_to_time   | bigint                | default -1
>  shift_date      | date                  | not null
>  is_cyclic       | integer               | not null
>  serial          | integer               | not null default
>                                            nextval('t_i_shift_shadow_serial_seq'::regclass)
>  Indexes:
>     "t_i_shift_shadow_pkey" PRIMARY KEY, btree (serial)
>  Triggers:
>     _replcluster_logtrigger_4 AFTER INSERT OR DELETE OR UPDATE ON
>     t_i_shift_shadow FOR EACH ROW EXECUTE PROCEDURE
>     _replcluster.logtrigger('_replcluster', '4', 'k')
>
> I am trying to replicate this table using slony.
>
> But I get the following error... I wanted to know if this is a postgres
> error? If it is, what is causing it?
>
> ERROR -->
>
> PGRES_FATAL_ERROR select "_replcluster".setAddTable(8, 36,
> 'public.t_i_shift_shadow', 't_i_shift_shadow_pkey', 'shift shadow
> table'); - ERROR: duplicate key violates unique constraint
> "sl_table_tab_reloid_key"

Looks like you are trying to add a table with the same ID a second time. The table ID has to be unique.

> I am a bit confused about where to post it... So I am posting it here
> first...

You can post such questions to the slony1-general mailing list.

--
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/
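The violated constraint, `sl_table_tab_reloid_key`, is unique on the table's relation OID in the Slony-I catalog, so the table is most likely already registered. A hedged sketch to confirm that before retrying `setAddTable` (Slony-I's sl_table catalog columns assumed here):

```sql
-- Check whether t_i_shift_shadow is already registered for replication.
-- A row coming back means setAddTable was already run for this relation
-- (possibly under a different tab_id), which is what the unique
-- constraint on tab_reloid is rejecting.
SELECT tab_id, tab_set, tab_reloid::regclass AS table_name
  FROM _replcluster.sl_table
 WHERE tab_reloid = 'public.t_i_shift_shadow'::regclass;
```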
[GENERAL] duplicate key violates unique constraint
Hello all,

I have a table named "t_i_shift_shadow" with the following spec:

      Column      |         Type          |  Modifiers
 -----------------+-----------------------+-----------------------------
 cid             | character varying(20) | not null
 shift_rev_id    | character varying(20) | not null
 start_time      | bigint                | not null
 end_time        | bigint                | not null
 lunch_from_time | bigint                | default -1
 lunch_to_time   | bigint                | default -1
 shift_date      | date                  | not null
 is_cyclic       | integer               | not null
 serial          | integer               | not null default
                                           nextval('t_i_shift_shadow_serial_seq'::regclass)
 Indexes:
    "t_i_shift_shadow_pkey" PRIMARY KEY, btree (serial)
 Triggers:
    _replcluster_logtrigger_4 AFTER INSERT OR DELETE OR UPDATE ON
    t_i_shift_shadow FOR EACH ROW EXECUTE PROCEDURE
    _replcluster.logtrigger('_replcluster', '4', 'k')

I am trying to replicate this table using slony.

But I get the following error... I wanted to know if this is a postgres error? If it is, what is causing it?

ERROR -->

PGRES_FATAL_ERROR select "_replcluster".setAddTable(8, 36, 'public.t_i_shift_shadow', 't_i_shift_shadow_pkey', 'shift shadow table'); - ERROR: duplicate key violates unique constraint "sl_table_tab_reloid_key"

I am a bit confused about where to post it... So I am posting it here first...

Thanks and Regards,
Sweta.
Re: [GENERAL] duplicate key violates unique constraint
Hey,

I've just found out what's happening. The problem is that the "serial" datatype creates a sequence in the background (project_id_seq). If the sequence's current number is 1, and I manually insert a new entry with ID=2, the sequence doesn't "know" it. So when I try the INSERT statement, the next value in the sequence is 2, and I get the error.

The thing is, I'm migrating my system from mysql to postgresql, and that's why I was inserting the numbers directly (importing the .sql file), without respecting the background sequence.

Thanks a lot to those who answered me.

Regards,
Verônica

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] duplicate key violates unique constraint
Ron Johnson wrote:
> # select * from projects;
>  project_id | username | project_name
> ------------+----------+--------------
>           1 | foo      |
>           2 | bar      |
> (2 rows)
>
> dupe_filenames=# insert into projects (project_name, username )
> dupe_filenames-# values ('foo', 'bar');
> ERROR:  duplicate key violates unique constraint "projects_pkey"

And you didn't insert records with those numbers by hand (not using the sequence)? It seems your sequence is a bit behind, which only happens if you don't always use it to generate your ids.

I suggest you check your sequence values and update the sequence to the highest value in use if it's too low. You should only need to do that once.

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.
T: ++31(0)534346874
F: ++31(0)534346876
I: www.magproductions.nl
A: Postbus 416, 7500 AK Enschede

// Integrate Your World //
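Alban's "update the sequence to the highest value in use" can be done in one statement. A hedged sketch — the sequence name `projects_project_id_seq` is the default name PostgreSQL reports when creating the serial column:

```sql
-- Resynchronise the serial's backing sequence with the table after rows
-- were inserted with explicit ids (e.g. imported from a mysql dump).
-- After this, nextval() resumes above the current maximum.
SELECT setval('projects_project_id_seq',
              (SELECT max(project_id) FROM projects));
```

This is a one-time fix; as long as subsequent inserts omit project_id and let the sequence supply it, the two stay in sync.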
Re: [GENERAL] duplicate key violates unique constraint
On 09/18/06 21:19, brian wrote:
> Ron Johnson wrote:
>> On 09/18/06 19:25, Jeff Davis wrote:
>>> On Mon, 2006-09-18 at 19:47 -0300, [EMAIL PROTECTED] wrote:
>>>> I'm trying to create a table with a PRIMARY KEY. The CREATE
>>>> statement looks like this:
>>>>
>>>> CREATE TABLE "projects" (
>>>>   "project_id" serial,
>>>>   "username" varchar(30) NOT NULL default '',
>>>>   "project_name" varchar(30) NOT NULL default '',
>>>>   PRIMARY KEY ("project_id")
>>>> );
>>>>
>>>> The problem is that sometimes, I would say 1 in 10 tries, when I use
>>>> an INSERT command I get the following error:
>>>>
>>>> "duplicate key violates unique constraint"
>>>>
>>>> The INSERT query is that:
>>>> "INSERT INTO projects (\"project_name\", \"username\") VALUES
>>>> ('$project_name', '$username')";
>>>
>>> That INSERT statement will not cause a unique constraint violation.
>>> Are you sure that is the statement causing the problem? Are there any
>>> rules or triggers that may modify the behavior of that INSERT?
>>
>> If there already are records in the table, sure it would.
>>
>> ...
>>
>> dupe_filenames=# insert into projects (project_id, username )
>> dupe_filenames-# values (1, 'foo');
>> ERROR:  duplicate key violates unique constraint "projects_pkey"
>
> If you insert a project_id, yes. The original query from vtaquette does
> not.

Ah, darn it. Gotta get those glasses.

Still, though...

# select * from projects;
 project_id | username | project_name
------------+----------+--------------
          1 | foo      |
          2 | bar      |
(2 rows)

dupe_filenames=# insert into projects (project_name, username )
dupe_filenames-# values ('foo', 'bar');
ERROR:  duplicate key violates unique constraint "projects_pkey"

--
Ron Johnson, Jr.
Jefferson LA USA

Is "common sense" really valid? For example, it is "common sense" to white-power racists that whites are superior to blacks, and that those with brown skins are mud people. However, that "common sense" is obviously wrong.
Re: [GENERAL] duplicate key violates unique constraint
Ron Johnson wrote:
> On 09/18/06 19:25, Jeff Davis wrote:
>> On Mon, 2006-09-18 at 19:47 -0300, [EMAIL PROTECTED] wrote:
>>> I'm trying to create a table with a PRIMARY KEY. The CREATE statement
>>> looks like this:
>>>
>>> CREATE TABLE "projects" (
>>>   "project_id" serial,
>>>   "username" varchar(30) NOT NULL default '',
>>>   "project_name" varchar(30) NOT NULL default '',
>>>   PRIMARY KEY ("project_id")
>>> );
>>>
>>> The problem is that sometimes, I would say 1 in 10 tries, when I use
>>> an INSERT command I get the following error:
>>>
>>> "duplicate key violates unique constraint"
>>>
>>> The INSERT query is that:
>>> "INSERT INTO projects (\"project_name\", \"username\") VALUES
>>> ('$project_name', '$username')";
>>
>> That INSERT statement will not cause a unique constraint violation.
>> Are you sure that is the statement causing the problem? Are there any
>> rules or triggers that may modify the behavior of that INSERT?
>
> If there already are records in the table, sure it would.
>
> ...
>
> dupe_filenames=# insert into projects (project_id, username )
> dupe_filenames-# values (1, 'foo');
> INSERT 0 1
> dupe_filenames=# insert into projects (project_id, username )
> dupe_filenames-# values (2, 'bar');
> INSERT 0 1
>
> ...
>
> dupe_filenames=# insert into projects (project_id, username )
> dupe_filenames-# values (1, 'foo');
> ERROR:  duplicate key violates unique constraint "projects_pkey"

If you insert a project_id, yes. The original query from vtaquette does not.

brian
Re: [GENERAL] duplicate key violates unique constraint
On 09/18/06 19:25, Jeff Davis wrote:
> On Mon, 2006-09-18 at 19:47 -0300, [EMAIL PROTECTED] wrote:
>> I'm trying to create a table with a PRIMARY KEY. The CREATE statement
>> looks like this:
>>
>> CREATE TABLE "projects" (
>>   "project_id" serial,
>>   "username" varchar(30) NOT NULL default '',
>>   "project_name" varchar(30) NOT NULL default '',
>>   PRIMARY KEY ("project_id")
>> );
>>
>> The problem is that sometimes, I would say 1 in 10 tries, when I use
>> an INSERT command I get the following error:
>>
>> "duplicate key violates unique constraint"
>>
>> The INSERT query is that:
>> "INSERT INTO projects (\"project_name\", \"username\") VALUES
>> ('$project_name', '$username')";
>
> That INSERT statement will not cause a unique constraint violation. Are
> you sure that is the statement causing the problem? Are there any rules
> or triggers that may modify the behavior of that INSERT?

If there already are records in the table, sure it would.

dupe_filenames=# create table projects (
dupe_filenames(#   project_id serial,
dupe_filenames(#   username varchar(30) NOT NULL default '',
dupe_filenames(#   project_name varchar(30) NOT NULL default '',
dupe_filenames(#   primary key (project_id));
NOTICE:  CREATE TABLE will create implicit sequence "projects_project_id_seq" for serial column "projects.project_id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "projects_pkey" for table "projects"
CREATE TABLE
dupe_filenames=# insert into projects (project_id, username )
dupe_filenames-# values (1, 'foo');
INSERT 0 1
dupe_filenames=# insert into projects (project_id, username )
dupe_filenames-# values (2, 'bar');
INSERT 0 1
dupe_filenames=# select * from projects;
 project_id | username | project_name
------------+----------+--------------
          1 | foo      |
          2 | bar      |
(2 rows)
dupe_filenames=# commit;
COMMIT
dupe_filenames=# insert into projects (project_id, username )
dupe_filenames-# values (1, 'foo');
ERROR:  duplicate key violates unique constraint "projects_pkey"

> I suggest you turn on query logging, which you can do by setting the
> configuration variable "log_statement" (found in postgresql.conf) to
> 'all'. Then you can see exactly what queries are being sent and which
> one causes the error.

--
Ron Johnson, Jr.
Jefferson LA USA
Re: [GENERAL] duplicate key violates unique constraint
On Mon, 2006-09-18 at 19:47 -0300, [EMAIL PROTECTED] wrote:
> Hi,
>
> I'm trying to create a table with a PRIMARY KEY. The CREATE statement
> looks like this:
>
> CREATE TABLE "projects" (
>   "project_id" serial,
>   "username" varchar(30) NOT NULL default '',
>   "project_name" varchar(30) NOT NULL default '',
>   PRIMARY KEY ("project_id")
> );
>
> The problem is that sometimes, I would say 1 in 10 tries, when I use an
> INSERT command I get the following error:
>
> "duplicate key violates unique constraint"
>
> The INSERT query is that:
> "INSERT INTO projects (\"project_name\", \"username\") VALUES
> ('$project_name', '$username')";

That INSERT statement will not cause a unique constraint violation. Are you sure that is the statement causing the problem? Are there any rules or triggers that may modify the behavior of that INSERT?

I suggest you turn on query logging, which you can do by setting the configuration variable "log_statement" (found in postgresql.conf) to 'all'. Then you can see exactly what queries are being sent and which one causes the error.

Regards,
Jeff Davis
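For quick one-off debugging, Jeff's log_statement setting can also be flipped for a single session instead of editing postgresql.conf. A hedged sketch — on the 8.x releases this thread targets, the session-level form requires superuser:

```sql
-- Log every statement this session sends, without touching
-- postgresql.conf. Watch the server log while reproducing the 1-in-10
-- failure to see exactly which statement triggers the duplicate key.
SET log_statement = 'all';
```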
[GENERAL] duplicate key violates unique constraint
Hi,

I'm trying to create a table with a PRIMARY KEY. The CREATE statement looks like this:

CREATE TABLE "projects" (
  "project_id" serial,
  "username" varchar(30) NOT NULL default '',
  "project_name" varchar(30) NOT NULL default '',
  PRIMARY KEY ("project_id")
);

The problem is that sometimes, I would say 1 in 10 tries, when I use an INSERT command I get the following error:

"duplicate key violates unique constraint"

The INSERT query is that:
"INSERT INTO projects (\"project_name\", \"username\") VALUES ('$project_name', '$username')";

Can someone help me please?!!

Thanks,
Verônica
Re: [GENERAL] duplicate key violates unique constraint
Thanx to Richard Huxton and Csaba. I will upgrade PostgreSQL up to 8.0 and use:

> 2. Catch the error on the insert and try the update again. This
> requires version 8.0 or higher.

Best regards
Re: [GENERAL] duplicate key violates unique constraint
Csaba Nagy wrote:
> That would work indeed. But I guess the savepoint solution will be the
> simplest and fastest if the OP has or can install the 8.0 version.

I'd say so. Otherwise you'll just sit on the lock, and then still have to deal with an error later anyway when the lock times out.

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] duplicate key violates unique constraint
That would work indeed. But I guess the savepoint solution will be the simplest and fastest if the OP has or can install the 8.0 version.

Cheers,
Csaba.

On Mon, 2005-06-13 at 17:49, Richard Huxton wrote:
> Csaba Nagy wrote:
> [snip]
>>> If you have more than one client, this can always happen. You have
>>> two choices:
>>>  1. Use a lock to stop two clients interacting like this
>>
>> This won't work unless you make all the clients serialized, or you
>> have all the ip's already inserted in the database... you can't lock
>> on an unknown key, otherwise the locking will also need to insert, and
>> you're back to the same race condition ;-)
>
> You can, however, have something more finely-grained than whole-table
> locking (assuming one IP updated/inserted at a time) by filling a dummy
> table with e.g. integers 0..255 and locking a row there based on (e.g.)
> the last octet of your target IP.
>
> --
> Richard Huxton
> Archonet Ltd
Re: [GENERAL] duplicate key violates unique constraint
Csaba Nagy wrote:
> [snip]
>> If you have more than one client, this can always happen. You have two
>> choices:
>>  1. Use a lock to stop two clients interacting like this
>
> This won't work unless you make all the clients serialized, or you have
> all the ip's already inserted in the database... you can't lock on an
> unknown key, otherwise the locking will also need to insert, and you're
> back to the same race condition ;-)

You can, however, have something more finely-grained than whole-table locking (assuming one IP updated/inserted at a time) by filling a dummy table with e.g. integers 0..255 and locking a row there based on (e.g.) the last octet of your target IP.

--
Richard Huxton
Archonet Ltd
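Richard's dummy-table scheme can be sketched concretely. A hedged sketch — table and column names are illustrative, and `generate_series()` needs 8.0 or later (on older releases, populate the 256 rows from a client script):

```sql
-- One-time setup: a lock table with one row per possible last octet.
CREATE TABLE ip_locks (n integer PRIMARY KEY);
INSERT INTO ip_locks SELECT generate_series(0, 255);

-- Per request: lock the row matching the last octet of the target IP,
-- then run the UPDATE / INSERT-if-missing sequence. Two clients working
-- on IPs that share a last octet serialize here; everyone else proceeds
-- in parallel. The lock is released at COMMIT/ROLLBACK.
BEGIN;
SELECT n FROM ip_locks
 WHERE n = 42   -- last octet of the target IP, computed by the client
   FOR UPDATE;
-- ... UPDATE table1 ...; IF NOT FOUND THEN INSERT ... END IF; ...
COMMIT;
```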
Re: [GENERAL] duplicate key violates unique constraint
[snip]
> If you have more than one client, this can always happen. You have two
> choices:
>  1. Use a lock to stop two clients interacting like this

This won't work unless you make all the clients serialized, or you have all the ip's already inserted in the database... you can't lock on an unknown key, otherwise the locking will also need to insert, and you're back to the same race condition ;-)

Cheers,
Csaba.
Re: [GENERAL] duplicate key violates unique constraint
ON.KG wrote:
> before inserting or updating this table there're some checkings, logs,
> etc., so I'm using PL/pgSQL for that
>
> after all checkings and logs I have:
>
> UPDATE table1
>    SET hits = hits + 1
>  WHERE ip = some_ip;
>
> IF NOT FOUND THEN
>     INSERT INTO table1 (ip) VALUES (some_ip);
> END IF;
>
> when IP is not found in table it inserts new record into table
> but in logs i see error
>
> ERROR: duplicate key violates unique constraint "table1"
> CONTEXT: PL/pgSQL function "insert_table1" line 68 at SQL statement

If you can have more than one client running this at once, you have a race condition here. The order runs something like:

1. client A tries to update SOME_IP, no rows affected
2. client B tries to update SOME_IP, no rows affected
3. client A tries the insert of SOME_IP
4. client B tries the insert of SOME_IP - fails!

If you have more than one client, this can always happen. You have two choices:

1. Use a lock to stop two clients interacting like this
2. Catch the error on the insert and try the update again. This requires version 8.0 or higher.

--
Richard Huxton
Archonet Ltd
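Richard's choice 2 can be sketched as a PL/pgSQL function on 8.0 or later, where exception trapping (and the `$$` dollar quoting used here) became available. A hedged sketch — the function name and shape are illustrative, not the OP's actual `insert_table1`:

```sql
-- Update-then-insert with a retry: if a concurrent transaction wins the
-- race between our failed UPDATE and our INSERT, catch the duplicate-key
-- error and fall back to updating the row that now exists.
CREATE OR REPLACE FUNCTION bump_hits(some_ip char(15)) RETURNS void AS $$
BEGIN
    UPDATE table1 SET hits = hits + 1 WHERE ip = some_ip;
    IF FOUND THEN
        RETURN;
    END IF;
    BEGIN
        INSERT INTO table1 (ip) VALUES (some_ip);
    EXCEPTION WHEN unique_violation THEN
        -- someone else inserted the row between our UPDATE and INSERT
        UPDATE table1 SET hits = hits + 1 WHERE ip = some_ip;
    END;
END;
$$ LANGUAGE plpgsql;
```

The PostgreSQL manual's version of this pattern wraps the whole thing in a LOOP, which also covers the rarer opposite race (the row being deleted between the failed INSERT and the retry UPDATE).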
Re: [GENERAL] duplicate key violates unique constraint
Your problem is that the trigger's "found" check will not see the row inserted by a concurrent transaction. In other words, your insert actually fails; the record you see was inserted by another concurrent transaction, and the "found" check didn't work because the other transaction started after yours but was quicker, and your transaction can't see its results.

What you are trying to do is the "insert-or-update" thing, which cannot be done safely in the way you tried. Don't even bother to try; there are lots of discussions on the list, and the conclusion is that you can't avoid a race condition between the concurrent inserts. There will always be a way for one of them to fail with an error. You could actually ignore the error if it's not part of a bigger transaction, which would of course be broken by the error. Your only way to avoid the error completely is to place a savepoint before the insert, catch the error, roll back to the savepoint, and then continue your transaction as you need.

HTH,
Csaba.

On Mon, 2005-06-13 at 18:22, ON.KG wrote:
> Hi All!
>
> I have table:
>
> CREATE TABLE table1 (
>    ip char(15) NOT NULL,
>    hits integer NOT NULL default '1',
>    PRIMARY KEY (ip)
> );
>
> So it's counting hits per each IP for the current day, and every day it
> is truncated by cron:
> TRUNCATE TABLE table1;
>
> before inserting or updating this table there're some checkings, logs,
> etc., so I'm using PL/pgSQL for that
>
> after all checkings and logs I have:
>
> UPDATE table1
>    SET hits = hits + 1
>  WHERE ip = some_ip;
>
> IF NOT FOUND THEN
>     INSERT INTO table1 (ip) VALUES (some_ip);
> END IF;
>
> when IP is not found in table it inserts new record into table
> but in logs i see error
> ERROR: duplicate key violates unique constraint "table1"
> CONTEXT: PL/pgSQL function "insert_table1" line 68 at SQL statement
>
> But record is inserted into table
>
> what may be the problem?
>
> i also tried before:
>
> SELECT INTO cnt hits
>   FROM table1
>  WHERE ip = some_ip;
>
> IF FOUND THEN
>     UPDATE table1 SET hits = hits + 1 WHERE ip = some_ip;
> ELSE
>     INSERT INTO table1 (ip) VALUES (some_ip);
> END IF;
>
> But same error still appears
>
> Thank You
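Csaba's savepoint approach can also be driven from the client side when the insert is part of a larger transaction (savepoints require 8.0+). A hedged sketch with an illustrative IP value:

```sql
-- Protect a larger transaction from the duplicate-key error: mark a
-- savepoint, attempt the insert, and on failure roll back only to the
-- savepoint and update instead. Earlier work in the transaction survives.
BEGIN;
-- ... earlier statements in the same transaction ...
SAVEPOINT before_insert;
INSERT INTO table1 (ip) VALUES ('10.0.0.1');
-- If the INSERT fails with "duplicate key", the client then issues:
--   ROLLBACK TO SAVEPOINT before_insert;
--   UPDATE table1 SET hits = hits + 1 WHERE ip = '10.0.0.1';
COMMIT;
```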
[GENERAL] duplicate key violates unique constraint
Hi All!

I have table:

CREATE TABLE table1 (
   ip char(15) NOT NULL,
   hits integer NOT NULL default '1',
   PRIMARY KEY (ip)
);

So it's counting hits per each IP for the current day, and every day it is truncated by cron:
TRUNCATE TABLE table1;

before inserting or updating this table there're some checkings, logs, etc., so I'm using PL/pgSQL for that

after all checkings and logs I have:

UPDATE table1
   SET hits = hits + 1
 WHERE ip = some_ip;

IF NOT FOUND THEN
    INSERT INTO table1 (ip) VALUES (some_ip);
END IF;

when IP is not found in table it inserts new record into table, but in logs i see error

ERROR: duplicate key violates unique constraint "table1"
CONTEXT: PL/pgSQL function "insert_table1" line 68 at SQL statement

But record is inserted into table

what may be the problem?

i also tried before:

SELECT INTO cnt hits
  FROM table1
 WHERE ip = some_ip;

IF FOUND THEN
    UPDATE table1 SET hits = hits + 1 WHERE ip = some_ip;
ELSE
    INSERT INTO table1 (ip) VALUES (some_ip);
END IF;

But same error still appears

Thank You
[GENERAL] duplicate key violates unique constraint "pg_class_oid_index"
postgresql 7.4.0, redhat 7.3 (under vmware 4.0 on win2k)

Windows crashed and some of the files on Redhat got corrupted, including some files in /var/lib/pgsql/data/pg_xlog/. When I tried to start postmaster, it failed with the message "Invalid primary checkPoint record". I think it was trying to look for files named "0021", but only files named "0022" through "0027" were there. So I ran pg_resetxlog (this is a purely experimental installation, so I can destroy and recreate the database). After the reset, pg_xlog/ ends up having only "0023".

Now, before the crash, I had created some tables like t, t2, and t3. The last activities, I think, were importing contrib/tablefunc.sql and creating +- 10k records in t3 from a Perl script in the form of some hundreds of transactions (all already committed).

After the db is back up, I see (with \d in psql) only t. When I try to recreate treeadj1, postgres complains with this message:

duplicate key violates unique constraint "pg_class_oid_index"

And I see in the pg_class table there are t2 and t3 entries.

Next I tried to do pg_dump, because obviously the database is inconsistent. pg_dump fails with this message:

pg_dump: attempt to lock table "t3" failed: ERROR: relation "public.t3" does not exist

The question: what is the best/safest way to deal with this kind of situation?

1) mess with pg_class and possibly other system tables to fix the inconsistencies (How? I'm currently clueless at this :-)
2) do the dump with -t to only dump existing tables (I tried this once; pg_restore fails with the message "input file does not appear to be a valid archive (too short?)". I'm wild-guessing this is because the t table is empty. Dump file attached.)
3) restore from last backup (I'll lose more recent data).

--
dave

--
-- PostgreSQL database dump
--

SET SESSION AUTHORIZATION 'dave';

SET search_path = public, pg_catalog;

--
-- TOC entry 2 (OID 1466918)
-- Name: t; Type: TABLE; Schema: public; Owner: dave
--

CREATE TABLE t (
    i integer
);

--
-- Data for TOC entry 3 (OID 1466918)
-- Name: t; Type: TABLE DATA; Schema: public; Owner: dave
--

COPY t2b (i) FROM stdin;
\.
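Before deciding between the three options, it helps to see exactly which catalog rows survived the reset. A hedged sketch of the kind of inspection query involved — orphaned pg_class rows for t2/t3 whose underlying files are gone would explain both the pg_class_oid_index violation on re-create and the pg_dump lock failure:

```sql
-- Compare what the catalog thinks exists with what \d shows. Rows for
-- t2/t3 here, combined with "relation does not exist" errors, point to
-- pg_class entries whose data files were lost in the crash.
SELECT oid, relname, relkind, relfilenode
  FROM pg_class
 WHERE relname IN ('t', 't2', 't3');
```

Interpreting and repairing such rows by hand is exactly the "mess with system tables" of option 1, and is risky; on a disposable installation, option 3 (or dump-what-you-can and re-initdb) is the usual advice.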