I have a logging table where I insert a large number of rows every 5
minutes.  For performance reasons this happens in bulk inserts of about
5000 rows at a time (i.e., INSERT INTO table VALUES (...), (...), (...)).
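For illustration, each bulk insert looks roughly like this (log_entries
and its columns are placeholder names, not my real schema):

    INSERT INTO log_entries (parent_id, message, created_at)
    VALUES
        (101, 'first event',  NOW()),
        (102, 'second event', NOW()),
        (103, 'third event',  NOW());
    -- ...and so on, up to roughly 5000 value lists per statement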

 

One of the fields in the table is an id that links each row to a parent
table.  The rows queue up in memory briefly before I build the bulk
insert, so it is possible that by the time the insert runs a separate
process has already deleted the corresponding entry in the parent table
and the id is invalid.
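The relevant part of the schema is roughly the following (table and
column names are made up for the example; the real tables are InnoDB
with an actual FOREIGN KEY constraint, which is what produces the
failure):

    CREATE TABLE parent (
        id INT NOT NULL PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE log_entries (
        id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        parent_id  INT NOT NULL,
        message    VARCHAR(255),
        created_at DATETIME,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
    ) ENGINE=InnoDB;

    -- If a queued row's parent_id is deleted from parent before the bulk
    -- INSERT runs, the foreign key check fails for that one row.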

 

Right now, when this happens, the entire insert of 5000 rows fails
because a single row is bad.  I want that one row to fail silently and
the other 4999 to insert successfully.

 

Any ideas how I can do this?  INSERT IGNORE seems like it would make
sense, but it appears to ignore only duplicate-key errors, not foreign
key failures.
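For reference, the variant I mean is just (same placeholder names as
above):

    INSERT IGNORE INTO log_entries (parent_id, message, created_at)
    VALUES
        (101, 'first event', NOW()),
        (999, 'orphaned row', NOW()),   -- parent row 999 has been deleted
        (103, 'third event', NOW());
    -- IGNORE only seems to downgrade duplicate-key errors to warnings;
    -- the foreign key failure on the second row still aborts the statement.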

 

John A. McCaskey

Software Development Engineer

IP Sciences, Inc.

[EMAIL PROTECTED]

206.902.2027

 
