Neal Becker wrote:
I'm using multiprocessing as a crude batch queuing system, like this:

import multiprocessing
import my_test_program as prog
(where my_test_program has a function called 'run')

def run_test(args):
    prog.run(args[1:])

cases = []
for t in test_conditions:
    args = [prog.__name__] + [more args...]
    cases.append(args)

(leaving out details, but 'cases' ends up being the list of test cases to run)

pool = multiprocessing.Pool()
results = pool.map(run_test, cases)

The problem is that it doesn't seem to keep all my CPUs busy, even though there are more test cases than CPUs. Ideas?

If the test cases do a lot of I/O, then that could be the bottleneck: the workers would spend their time blocked on disk or network rather than on the CPU.
--
http://mail.python.org/mailman/listinfo/python-list
