2006/11/2, Albe Laurenz <[EMAIL PROTECTED]>:
>>>> psql -h host -p port -d database -U user <dump.sql
>>>
>>> It's a good enough solution in most cases, but when the row count
>>> starts to skyrocket, it simply doesn't seem to cut it (at least I
>>> couldn't make it work).
>>
>> INSERT statements? You dumped with the -d flag, didn't you?
>
> No I didn't, actually. :) The data was never in the database in the
> first place: it was generated from a different source. True, it was
> generated as a CSV file which I converted into INSERT statements, but
> conversion between the two is not a problem (given 1.5GB of RAM).

Then the best way is to convert it back to CSV and use the COPY
statement to load it into the table (or \copy from psql).
You don't need any third-party tools for that; it's all in PostgreSQL.
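For example, something along these lines should work (the table name
and file paths here are just placeholders, assuming the table columns
match the CSV):

    -- server-side: the file path is read by the PostgreSQL server process
    COPY mytable FROM '/path/to/data.csv' WITH CSV;

    -- client-side alternative from psql: the file is read by the psql client
    \copy mytable from 'data.csv' with csv

COPY loads the whole file in a single command, so it is far faster than
running millions of individual INSERT statements.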

I had a problem with COPY, but I can't remember what exactly... come to
think of it, it probably could have done the job, assuming I define
the primary key as DEFAULT nextval('id'), since there was no id in the
rows I was importing. Nice to have alternatives. Thanks for the suggestion.
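A rough sketch of that nextval idea, with made-up table, column and
sequence names; listing only the CSV columns in \copy leaves the id to
be filled from the sequence default:

    CREATE SEQUENCE items_id_seq;
    CREATE TABLE items (
        id   integer PRIMARY KEY DEFAULT nextval('items_id_seq'),
        name text,
        qty  integer
    );

    -- load only the columns present in the CSV; id comes from the sequence
    \copy items (name, qty) from 'items.csv' with csv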

t.n.a.
