I'm looking at using the pyprocessing module to set up a (dispatcher -->
workqueue --> slave job) kind of environment.

The dispatcher will be my app server, and I've decided to just use a table
in the app database for the persistent job queue, since everything is going
to have to connect to that database anyway.

Here's the question: the dispatcher reads the job queue via SQLA, so it has
an open connection pool. Its children are created with fork(), so they start
life with that same pool -- and the same open sockets -- inherited from the
parent. Is there any way to make sure I don't hit multiple-readers-on-one-socket
issues, perhaps by ensuring that each child starts with a fully closed
connection pool, or that its next DB request gets a fresh connection,
distinct from the parent dispatcher's?
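For what it's worth, here is a minimal sketch of one way to handle this,
assuming a recent SQLAlchemy; the in-memory SQLite URL just stands in for the
real app database. Calling engine.dispose() in the child immediately after
fork() discards the inherited pooled connections, so the child's next checkout
opens a fresh socket that the parent never touches:

```python
import os
from sqlalchemy import create_engine, text

# Stand-in for the app database; substitute your real DSN here.
engine = create_engine("sqlite://")

# Parent uses the pool before forking, so a connection exists to inherit.
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

pid = os.fork()
if pid == 0:
    # Child: throw away the inherited pooled connections. The next
    # checkout below opens a brand-new connection of its own.
    engine.dispose()
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
    os._exit(0)  # exit without running the parent's cleanup handlers
else:
    _, status = os.waitpid(pid, 0)
```

Alternatively, creating the engine with poolclass=NullPool (from
sqlalchemy.pool) avoids pooling entirely, so every checkout is a fresh
connection and there is nothing for the children to inherit.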

Thanks,
Rick

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---