Piet van Oostrum wrote:
AJ> if __name__ == '__main__':
AJ>    THREADS = []
AJ>    for i in range(CONCURRENCY):
AJ>        THREADS.append(threading.Thread(target=threadProcessRecipient))
AJ>    for thread in THREADS:
AJ>        thread.run()

You should use thread.start(), not thread.run(). When you use run(),
execution is sequential, as you are experiencing. With start() you get
concurrency.

Thanks! Changing this method call fixes the original problem we saw. In the process, though, I've run into another one that looks like a race condition or deadlock.
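For reference, the startup code now looks roughly like this (a sketch based on the snippet quoted above; threadProcessRecipient and CONCURRENCY are from my original post, and the join() calls are just so the main thread waits for the workers):

import threading

if __name__ == '__main__':
    THREADS = []
    for i in range(CONCURRENCY):
        THREADS.append(threading.Thread(target=threadProcessRecipient))
    for thread in THREADS:
        thread.start()   # start(), not run(): run() would execute in the calling thread
    for thread in THREADS:
        thread.join()    # wait for all worker threads to finish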

Our email script DKIM-signs all messages with pydkim. Because this is so CPU-intensive, we run the signing in an external process, dkimsign.py (which basically accepts the message on stdin, calls dkim.sign() on it and passes the result back on stdout). We call the DKIM signing while preparing each message in each thread like this:
cmd = subprocess.Popen(
    ['/usr/bin/nice', PYTHONBIN, 'dkimsign.py',
     DKIMSELECTOR, DKIMDOMAIN, DKIMPRIVATEKEYFILE],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE)
message = cmd.communicate(message)[0]
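And dkimsign.py itself is roughly the following (a simplified sketch, not the exact script; it assumes pydkim's dkim.sign(message, selector, domain, privkey) returns the DKIM-Signature header line, which is then prepended to the message):

import sys
import dkim  # pydkim

if __name__ == '__main__':
    selector, domain, privatekeyfile = sys.argv[1:4]
    message = sys.stdin.read()        # the hung children block here forever
    key = open(privatekeyfile).read()
    sig = dkim.sign(message, selector, domain, key)
    sys.stdout.write(sig + message)   # signed message goes back on stdout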

The majority of the time (around 80%) our mail merge works fine -- messages are sent out concurrently as intended. The rest of the time, some of the threads deadlock when calling dkimsign.py. Specifically, they hang inside the communicate() call above: dkimsign.py never receives any input (it blocks on message = sys.stdin.read()) and sits there waiting forever, so the thread never makes any further progress.

Is this a bug in subprocess? Is there some way to set a timeout on the communicate() call so I can detect these deadlocks?
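To make the question concrete, the kind of timeout I have in mind is a watchdog around communicate(), something like the sketch below (communicate_with_timeout is my own name, not anything from subprocess; it kills the child if it hasn't finished within the given number of seconds, since communicate() takes no timeout argument on the Python we're running):

import threading

def communicate_with_timeout(proc, data, timeout):
    # Kill the child if communicate() has not returned within `timeout` seconds.
    timer = threading.Timer(timeout, proc.kill)   # proc.kill() needs Python 2.6+
    timer.start()
    try:
        out, _ = proc.communicate(data)
    finally:
        timer.cancel()
    if proc.returncode < 0:
        # a negative return code means the child was killed by a signal
        raise RuntimeError('dkimsign.py timed out after %d seconds' % timeout)
    return out

# usage in the thread body, replacing the plain communicate() call:
# message = communicate_with_timeout(cmd, message, 30)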

Cheers,
AJ