On Thu, 11 Aug 2011, David Johnston wrote:
If you have duplicates with matching real keys, inserting into a staging table and then moving new records to the final table is your best option. (In general it is better to do a two-step import with a staging table, since you can readily use PostgreSQL to perform any intermediate translations.) As for the import itself,
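A minimal sketch of that two-step import, using hypothetical names throughout (a main_table with a serial key plus data columns col_a and col_b, a staging_table, and a file path), not the actual schema from this thread:

    -- Staging table: same data columns as the main table, but no key column.
    CREATE TABLE staging_table (
        col_a text,
        col_b integer
    );

    -- Bulk-load the .csv into the staging table (use \copy from psql if the
    -- file lives on the client rather than the server).
    COPY staging_table (col_a, col_b) FROM '/path/to/data.csv' WITH CSV;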
David, I presume what you call a staging table is what I refer to as a copy of the main table, but with no key attribute. Writing the SELECT statement to delete from the staging table those rows that already exist in the main table is where I'm open to suggestions.
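One possible shape for that statement, continuing the hypothetical column names above; it is a DELETE with a join rather than a bare SELECT, and since the staging table has no key the match is on the data columns themselves, with IS NOT DISTINCT FROM keeping the comparison NULL-safe:

    -- Remove from staging any row that already exists in the main table.
    DELETE FROM staging_table s
    USING main_table m
    WHERE s.col_a IS NOT DISTINCT FROM m.col_a
      AND s.col_b IS NOT DISTINCT FROM m.col_b;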
In this case I would just import the data to a staging table without any kind of artificial key, just the true key,
There is no true key, only an artificial key so that I can ensure rows are unique. That key is in the main table with the 50K rows; there is no key column in the .csv file.

Thanks,

Rich
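Given that the only key is an artificial one generated in the main table, the last step of the sketch above would leave that column out of the insert so its default (a sequence, assuming a serial column) assigns new values; again, all names here are hypothetical:

    -- Move the surviving (new) rows into the main table; the artificial key
    -- is omitted from the column list so its serial default fills it in.
    INSERT INTO main_table (col_a, col_b)
    SELECT col_a, col_b
    FROM staging_table;

    -- Staging table is no longer needed.
    DROP TABLE staging_table;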