On Sat, 10 Jun 2006, Jeff Frost wrote:
I'm running a test dump now, so we'll see sometime tomorrow (it takes about
20 hrs with the current setup) if it worked properly or if I find a new
problem. :-)
You'll be happy to hear that the test dump was successful and actually only
required 12 hrs.
On Fri, 9 Jun 2006, Tom Lane wrote:
pg_dumpall calls pg_dump, so only one place to fix. I've already
committed the fix in CVS, if you'd prefer to use a tested patch.
http://developer.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump.c.diff?r1=1.422.2.3;r2=1.422.2.4
I wrote:
>> This is computing obj_description() redundantly for each pg_largeobject
>> chunk. Perhaps there is a memory leak in obj_description() in 7.3.2?
Actually, obj_description() is a SQL-language function, and we had
horrendous problems with end-of-function-call memory leakage in SQL
functions.
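To make that redundancy concrete, here is a sketch (an illustration of the shape of the problem only; the committed patch linked above is the authoritative fix). The first form below evaluates obj_description() once per pg_largeobject row, i.e. once per chunk, before the DISTINCT collapses them; the second form evaluates it once per large object:

    -- evaluated once per chunk row, then de-duplicated:
    SELECT DISTINCT loid, obj_description(loid, 'pg_largeobject')
    FROM pg_largeobject;

    -- evaluated once per distinct loid:
    SELECT loid, obj_description(loid, 'pg_largeobject')
    FROM (SELECT DISTINCT loid FROM pg_largeobject) ss;

Combined with the per-call leak in 7.3's SQL functions, the first form would explain memory use that grows with the number of chunks rather than the number of large objects.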
On Fri, 9 Jun 2006, Tom Lane wrote:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: Memory exhausted in AllocSetAlloc(96)
Hm, I'm not sure why it did that. Possibly an ANALYZE on pg_largeobject
would change the plan for the SELECT DISTINCT and get you out of
trouble.
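For reference, trying that is just the following (run as a superuser on the 7.3 server; the EXPLAIN shows whether the plan for the DISTINCT actually changes):

    ANALYZE pg_largeobject;
    EXPLAIN SELECT DISTINCT loid FROM pg_largeobject;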
On Fri, 9 Jun 2006, Jeff Frost wrote:
Got the REINDEX completed and found a new error that I haven't seen before:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: Memory exhausted in AllocSetAlloc(96)
pg_dump: The command was: FETCH 100 IN blobcmt
pg_dumpall: pg_dump failed
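For context, blobcmt is the cursor pg_dump opens to read the large-object comments (a query of the shape discussed above) and then drains 100 rows at a time; roughly (the exact query text varies by pg_dump version):

    BEGIN;
    DECLARE blobcmt CURSOR FOR
        SELECT DISTINCT loid, obj_description(loid, 'pg_largeobject')
        FROM pg_largeobject;
    FETCH 100 IN blobcmt;   -- pg_dump repeats this until no rows are returned
    CLOSE blobcmt;
    COMMIT;

The error above means the backend ran out of memory while filling one of those 100-row batches.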
On Tue, 6 Jun 2006, Tom Lane wrote:
Some cursory trawling in the REL7_3 sources says that this means that
"SELECT DISTINCT loid FROM pg_largeobject" found a large object OID
that then could not be found by an indexscan of pg_largeobject. So
I'd try a REINDEX of pg_largeobject to see if that fixes it.
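Concretely, a sketch of both the check and the fix (run as a database superuser; 145391 is the OID from the original inv_open error; depending on the release, reindexing a system catalog may need extra steps, so see the REINDEX docs, and note that REINDEX locks pg_largeobject against other use while it runs):

    -- normally planned as an index scan; verify with EXPLAIN if unsure:
    SELECT count(*) FROM pg_largeobject WHERE loid = 145391;

    -- force a sequential scan for comparison:
    SET enable_indexscan = off;
    SELECT count(*) FROM pg_largeobject WHERE loid = 145391;
    RESET enable_indexscan;

    -- if the two counts disagree, the index is out of sync with the heap:
    REINDEX TABLE pg_largeobject;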
I'm curious why this would happen:
pg_dump: dumpBlobs(): could not open large object: ERROR: inv_open: large object 145391 not found
The db being dumped is 7.3.2 and the pg_dumpall is from a source-compiled
8.1.4. The OS in question is Redhat 8 (soon to be upgraded). When I used the
7.3.2