I'm writing a cluster monitor that collects information from a set of
machines and logs it to a database.

In the interest of not hammering the db unnecessarily, I'm
considering the following design:
1. A series of independent "monitor" threads that collect information
over TCP from the cluster of machines, and write it to a queue
2. A "logger" thread that empties the queue every second or so and
inserts the collected information to the db via a single insert
statement
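To make the design concrete, here's a rough sketch of what I have in
mind (using the py3 `queue` module; `drain`, `logger_loop`, and
`insert_rows` are my own placeholder names, not anything from the
stdlib):

```python
import queue
import time

def drain(q):
    """Empty the queue in one pass; returns everything currently queued."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            break
    return items

def logger_loop(q, insert_rows, interval=1.0, stop=None):
    """Logger thread body: wake up every `interval` seconds and hand the
    accumulated samples to insert_rows() as one batch (e.g. a single
    multi-row INSERT / executemany against the db)."""
    while stop is None or not stop.is_set():
        time.sleep(interval)
        rows = drain(q)
        if rows:
            insert_rows(rows)
```

The monitor threads would just `q.put(sample)` as they collect data,
so the only shared state is the queue itself.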

Reading up on Python's built-in Queue class, though, it seems oriented
towards "job queues", with a two-step dequeue operation (get() and
task_done()). I'm worried that this would make it too heavyweight for
my application. Is there documentation somewhere on what exactly
task_done() does, and whether I can disable the tracking of a job once
it's removed from the queue? The Python docs for the Queue module were
a bit light.
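To illustrate the two-step operation I mean:

```python
import queue

q = queue.Queue()
q.put("sample")

item = q.get()   # step 1: remove the item from the queue
q.task_done()    # step 2: mark it as processed -- but is this required
                 # if nothing ever calls q.join()?
```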

martin
--
http://mail.python.org/mailman/listinfo/python-list

Reply via email to