I was trying to run a bulk data load using the COPY command on PGSQL 8.1.0.

After loading about 3,500,000 records it ran out of memory; I assume this is because COPY runs the entire load as a single transaction and there wasn't room to track one that large. Does COPY offer anything like Oracle's SQL*Loader, where you can specify the number of records to load between commits, or will I have to break the file I am loading into smaller files?
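If splitting is the answer, I was planning on something along these lines (an untested sketch; the file, table, and database names are just placeholders):

    # split the input into 500,000-line chunks
    split -l 500000 bigload.dat chunk_

    # load each chunk via psql's client-side \copy
    for f in chunk_*; do
        psql -d mydb -c "\copy mytable from '$f'"
    done

Since each psql -c call commits on its own, a failure would only cost me the chunk in flight rather than the whole load.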

Or can the transaction be bypassed altogether with the COPY command? Since the load is going into an empty table, any failure could easily be recovered from by truncating the table and reloading the data anyway.
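That is, on any failure I would be perfectly happy to just do something like the following (again a sketch, with placeholder names):

    # clear out the partially loaded table and retry the whole load
    psql -d mydb -c "TRUNCATE TABLE mytable"
    psql -d mydb -c "\copy mytable from 'bigload.dat'"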

Thanks,

Kevin
