I've seen occasional deadlocks reported, and I'm wondering how I should avoid them, or if my approach is just wrong.

I have two client processes running in PHP on different machines, both accessing the same database (all InnoDB on MySQL 5.0.24a). Each has an integer daemon_id that it uses to claim a chunk of the task list, which it takes away and processes before coming back for more. The task-grabbing query goes like this (the surrounding code does failure detection and rollback where necessary):

BEGIN;
UPDATE task SET daemon_id = $my_daemon_id WHERE daemon_id = 0 AND status = 'new' LIMIT 100;
COMMIT;
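The failure detection and rollback mentioned above can be sketched roughly like this (the daemon_id value 42 is illustrative; MySQL reports a deadlock as error 1213, ER_LOCK_DEADLOCK):

```sql
START TRANSACTION;
UPDATE task SET daemon_id = 42 WHERE daemon_id = 0 AND status = 'new' LIMIT 100;
-- If the UPDATE fails with error 1213 (deadlock), the client rolls back,
-- waits briefly, and retries the whole transaction:
ROLLBACK;
-- Otherwise it commits the claim:
COMMIT;
```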

Then it grabs the tasks it has claimed:

SELECT * FROM task WHERE daemon_id = $my_daemon_id AND status = 'new';

These rows should never be touched by another daemon, because its daemon_id won't match.

As each process completes its tasks, it issues single updates:

UPDATE task SET status = 'complete' WHERE id = 123;

This mechanism is intended to let me add more client processes to increase capacity without them treading on each other.
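Put together, one full cycle for a single daemon looks like this (daemon_id 42 and task id 123 are illustrative values):

```sql
-- 1. Claim a batch of unowned tasks:
START TRANSACTION;
UPDATE task SET daemon_id = 42 WHERE daemon_id = 0 AND status = 'new' LIMIT 100;
COMMIT;

-- 2. Fetch the tasks just claimed:
SELECT * FROM task WHERE daemon_id = 42 AND status = 'new';

-- 3. Process each one and mark it done:
UPDATE task SET status = 'complete' WHERE id = 123;
```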

I don't see where the deadlock can occur, but apparently there is one in the initial task grab. If the two processes try to grab the same tasks at once, I would expect the later of the two to fail to do the update, or simply to find no tasks in common, because the transaction is atomic. Have I got this all wrong? Do I need to be more explicit about locking during the update? Is there a better way of doing this?

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
