Jamie Fox wrote:
> Hi -
>
> After what seemed to be a normal, successful pg_migrator migration from
> 8.3.7 to 8.4.0, in either link or copy mode, vacuumlo fails on both our
> production and qa databases:
>
> Jul  1 11:17:03 db2 postgres[9321]: [14-1] LOG:  duration: 175.563 ms
>   statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM "public"."xml_user")
> Jul  1 11:17:03 db2 postgres[9321]: [15-1] ERROR:  large object 17919608 does not exist
> Jul  1 11:17:03 db2 postgres[9321]: [16-1] ERROR:  current transaction is aborted, commands ignored until end of transaction block
>
> I migrated our qa database using pg_dump/pg_restore, and vacuumlo has no
> problem with it. When I try querying the two databases for large objects
> manually, I see the same error in the one that was migrated with
> pg_migrator:
>
> select loread(lo_open(xml_data, 262144), 1073741819) from xml_user where id = '10837246';
> ERROR: large object 24696063 does not exist
> SQL state: 42704
>
> I can also see that the pg_largeobject table is different: in the
> pg_restore version, Rows (estimated) is 316286 and Rows (counted) is the
> same; in the pg_migrator version, Rows (counted) is only 180507.
>
> Any advice on what I might look for to try and track down this problem?
> pg_restore on our production database takes too long, so it would be
> really nice to use pg_migrator instead.
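As a starting point for tracking this down, one way to enumerate the dangling references is to anti-join the referencing column against pg_largeobject directly. This is a hedged sketch, not from the thread: it assumes the table and column names from the report above ("public"."xml_user".xml_data holding large-object OIDs), and it must be run against a live database:

```sql
-- Sketch: list xml_user rows whose large-object OID has no pages in
-- pg_largeobject (i.e. references that would make loread() fail).
-- Table/column names are taken from the error messages above.
SELECT u.id, u.xml_data AS missing_lo_oid
FROM "public"."xml_user" u
WHERE NOT EXISTS (
    SELECT 1
    FROM pg_largeobject lo
    WHERE lo.loid = u.xml_data
);
```

Comparing the count from this query between the pg_restore and pg_migrator copies should show whether the ~135k-row discrepancy in pg_largeobject corresponds exactly to the objects vacuumlo trips over.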
[ Email moved to hackers list. ]

Wow, I didn't test large objects specifically, and I am confused why there
would be a count discrepancy. I will need to do some research unless someone
else can guess at the cause.

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +