> > > I have several CGI and cron scripts that I would like to coordinate
> > > via a "First In / First Out" style buffer. That is, some processes
> > > are adding work units, and some take the oldest and start work on
> > > them.
> > >
> > > Since I need the queue to both survive system crashes and provide an
> > > audit record of all work units in/out, people have suggested using an
> > > ACID-compliant database for the task (SQLite or MySQL, depending on
> > > volume).
> > >
> > > Can SQLAlchemy be used for the task?
> > >
> > > Any help would be appreciated, as I am new to SQL and SQLAlchemy.
I've attached a proof of concept for how I've approached the "homegrown" version of this in the past. A jobs table gets two update passes: one to atomically mark a batch of jobs as "in progress" by a certain procid, after which the transaction is released so that other processes can theoretically work on other jobs; then, as each job is completed, the corresponding row is marked as "complete" (or "failed") within a separate short transaction per row. No long-running transactions are used, so there is always room for multiple processes to join in the work.

Oftentimes you'll hear of people doing this with MySQL using "SELECT ... FOR UPDATE" to lock the rows as they are selected within the transaction, but that method has never appealed to me.

All that said, I'm sure my approach has issues that are not (or perhaps are) readily apparent. If you want really failproof behavior without much tinkering, the off-the-shelf solutions are probably worth a look.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
Attachment: queue.py (binary data)