Hey, a client of ours has been having some data corruption in their
database.  We got the corruption fixed and believe we've found the
cause (they had a script killing any waiting queries whenever the
number of locks on their database hit 1000), but they're still getting
errors from one table:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 18446744073709551613
pg_dump: The command was: COPY public.foo (<columns>) TO stdout;

That seems like an incredibly large memory allocation request (roughly
16 exabytes) - it shouldn't be possible for the table to really be that
large, should it?  Any idea what might be wrong if it's actually trying
to allocate that much memory for a COPY command?
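
For what it's worth, that value is exactly 2^64 - 3, which is what a
small negative number becomes when reinterpreted as an unsigned 64-bit
size - so our working guess is a corrupt (negative) length field
somewhere in the table.  A minimal C sketch of that wraparound (the
variable names here are just illustrative, not PostgreSQL internals):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical corrupt length field holding a small negative value. */
        int64_t len = -3;

        /* Cast to an unsigned 64-bit size: wraps around to 2^64 - 3,
         * i.e. 18446744073709551613, the exact size in the error above. */
        uint64_t alloc_size = (uint64_t) len;

        printf("%llu\n", (unsigned long long) alloc_size);
        return 0;
    }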
