Oliver Elphick writes:
>"Ansley, Michael" wrote:
>  >Hi, all
>  >
>  >I dumped a table using pg_dump, and then tried to import it into a new
>  >database.  Of the ~210,000 records in the original table, only about 193,000
>  >were loaded, and the remainder caused an error: something about the query
>  >buffer being too small.  Now, I know that queries are limited in size, due
>  >to the size of the query buffer, but none of these insert queries are longer
>  >than about 600 characters.  Is it possible that the stream from the dump
>  >file is causing the buffer to overflow by simply overloading it with
>  >incoming queries, and if so, how do I prevent this?  If not, then does
>  >anybody have any idea what the problem is?
> 
>The problem is some kind of syntax error in the dump output that makes
>the parser think a query is not finished, so it runs together many
>queries until it runs out of space.  You need to find where the error is
>occurring and correct the dump script.


I had a similar problem the other night and fixed it by running the
output of pg_dump through "sed -e 's/<tab><tab>/<tab>\\N<tab>/g'".  It
seems like a pair of adjacent tabs (implying a null field) was
messing it up.  (There is one wrinkle with that sed script: given
three adjacent tabs, I wasn't winding up with <tab>\N<tab>\N<tab>
like I wanted/expected, but <tab>\N<tab><tab>.  That's because sed's
g flag resumes scanning just past each replacement, so the second,
overlapping pair of tabs never gets examined.  The output still
reloaded just fine, though.)
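
A looping version of the script should catch those overlapping pairs
as well; something like this ought to work (just a sketch, with <tab>
again standing for a literal tab character):

      sed -e ':a' -e 's/<tab><tab>/<tab>\\N<tab>/g' -e 'ta'

The t command branches back to the :a label whenever the s command
made a substitution, so a second pass picks up the pair that the
first pass skipped.  It still won't catch a null in the first or last
column, which shows up as a single tab at the start or end of a line
rather than as a pair of tabs.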

I also added the following code to pg_dumpall, because while
pg_dumpall dumps pg_shadow and now ACLs, it doesn't dump pg_group!
Right after the `echo "${BS}."' for pg_shadow, add


      echo "delete from pg_group;"
      echo "copy pg_group from stdin;"
      psql -q template1 <<END
      copy pg_group to stdout;
      END
      echo "${BS}."


Todd

