I have web sites where there are periodic inserts (& deletes) of many
new records generated by crawlers.

In order to reduce the # of queries I do compound operations like
INSERT INTO table VALUES (a),(b),(c) and DELETE FROM table WHERE
record_id IN ($LONG_ID_LIST).
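To make the batching concrete, here is a minimal sketch in Python, using the stdlib sqlite3 module as a stand-in for MySQL (the table and column names are made up for illustration; MySQL's DB-API driver would use %s placeholders instead of ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (record_id INTEGER PRIMARY KEY, url TEXT)")

# One multi-row INSERT instead of three single-row round trips.
conn.execute("INSERT INTO records VALUES (1, 'a'), (2, 'b'), (3, 'c')")

# One DELETE with an IN (...) list instead of one DELETE per id.
ids = [1, 3]
placeholders = ",".join("?" * len(ids))
conn.execute(f"DELETE FROM records WHERE record_id IN ({placeholders})", ids)

remaining = [r[0] for r in conn.execute("SELECT record_id FROM records")]
print(remaining)  # [2]
```

The win is fewer client/server round trips, but each statement also runs longer, which is exactly where the locking question below comes in.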

At the same time as these operations are occurring, users are running
queries on these tables.

I've noticed that during these long compound operations SELECT queries
seem to be slow, as if the tables are locked against reads. Does this
sound like what is happening?

If so, to prevent the tables from being locked, should I:

a) return to single-row INSERT and DELETE operations & suffer the
query overhead for each single operation?

or

b) do the compound INSERT and DELETE with LOW_PRIORITY? Does that mean
the tables won't get locked? Will that result in INSERTs or DELETEs
being spread over time, such that SELECTs could potentially get
different counts over that spread of time?

or

c) something else ?
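One possible "something else" (my own suggestion, not from the options above): keep the batched statements but cap their size, so each individual statement finishes, and releases any table lock, quickly. A hedged sketch of splitting a long ID list into fixed-size chunks; the chunk size of 500 and the table name are arbitrary illustrations, and the executor is assumed to be a DB-API style cursor.execute callable:

```python
def chunked(ids, size=500):
    """Yield successive fixed-size slices of ids."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def delete_in_batches(execute, ids, size=500):
    # Each DELETE touches at most `size` rows, so any lock it takes
    # is held only briefly between batches.
    for batch in chunked(ids, size):
        placeholders = ",".join("?" * len(batch))
        execute(f"DELETE FROM records WHERE record_id IN ({placeholders})",
                batch)

# Example: 1200 ids become 3 statements of 500, 500, and 200 ids.
batch_sizes = [len(b) for b in chunked(list(range(1200)))]
print(batch_sizes)  # [500, 500, 200]
```

The trade-off is the same one raised in option b): readers can see intermediate counts between batches, since the overall delete is no longer atomic.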


