Jeremy Haile wrote:
>> Note that things will go faster if you use "copy from stdin" for the
>> initial bulk data load.  Individual inserts in PostgreSQL are quite
>> costly compared to MySQL; it's the transactional overhead.  By
>> grouping them together you can make things much faster.  COPY FROM
>> STDIN does all the inserts in one big transaction.
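
To make that concrete, here's a minimal sketch with a made-up table
"items" (the COPY data columns are tab-separated).  Each bare INSERT
commits in its own transaction, while COPY does the whole load in one;
wrapping the INSERTs in an explicit BEGIN/COMMIT gets you much of the
same win:

    -- one transaction per row: slow
    INSERT INTO items (id, name) VALUES (1, 'apple');
    INSERT INTO items (id, name) VALUES (2, 'banana');

    -- one transaction for the whole load: fast
    COPY items (id, name) FROM STDIN;
    1	apple
    2	banana
    \.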
> 
> You could do "copy from file" as well right?  (no performance difference
> compared to "copy from stdin")  I do this all the time.
> 
> Also - maybe I misunderstand something, but why does PostgreSQL's
> implementation prevent it from ignoring insert errors during a copy?
> If you add a unique constraint to the table before copying, PostgreSQL
> already generates errors on the constraint violations - so I don't see
> why any additional locking would be required for it to simply say "if
> there is an error while copying in, ignore it and continue inserting
> the other rows".
> 
> PostgreSQL's copy command doesn't currently support this, so the temp
> table followed by a distinct select is the way to go.  But I didn't
> follow all of the talk about it requiring locking the table and being
> inherently impossible for PostgreSQL to support.
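
For reference, that workaround looks roughly like this (all names made
up):

    -- empty clone of the real table
    CREATE TEMP TABLE staging AS SELECT * FROM target LIMIT 0;
    COPY staging FROM '/path/to/data.tsv';
    INSERT INTO target SELECT DISTINCT * FROM staging;

Note the DISTINCT only removes duplicates within the loaded file; rows
that collide with what's already in "target" still need the delete step
mentioned below.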
> 
> I've wanted a similar feature.  I select rows into a table on a regular
> basis.  I'd like the new rows to be able to overlap existing ones and
> have PostgreSQL ignore the failed inserts.  SQL Server offers a flag
> that allows you to ignore inserts whose primary key already exists in
> the table.  The only solution in PostgreSQL is to run a query that
> manually deletes the duplicate rows from a temp table before inserting
> - which takes much more time.
> 
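
The manual version of that flag (IGNORE_DUP_KEY, I believe) comes out
to something like this, again with made-up names:

    -- drop staged rows whose key already exists in the real table
    DELETE FROM staging WHERE id IN (SELECT id FROM target);
    INSERT INTO target SELECT * FROM staging;

or a single INSERT ... SELECT with a NOT EXISTS subquery.  My guess is
that the locking talk was about the race in this pattern: unless
"target" is locked, a concurrent transaction can insert a conflicting
row between the duplicate check and the insert, and the unique
constraint will still fire.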

I would also like this feature... :) (obviously)

I also didn't exactly follow the locking discussion; as far as I know,
I don't need it for my use case.

- George
