On 19Jun2015 18:16, Fabien <fabien.mauss...@gmail.com> wrote:
On 06/19/2015 04:25 PM, Andres Riancho wrote:
   My recommendation is that you should pass some extra arguments to the task:
    * A unique task id
    * A result multiprocessing.Queue

    When an exception is raised you put (unique_id, exception) into the queue; when it succeeds you put (unique_id, None). In the main process you consume the queue and do your error handling.

    Note that some exceptions can't be serialized; that's where tblib [0] comes in handy.

[0]https://pypi.python.org/pypi/tblib
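
For illustration, a minimal sketch of that id-plus-queue pattern could look like the following (untested; Process workers and os.listdir are stand-ins here, not part of the original code -- the real task logic would go where os.listdir is called):

import multiprocessing
import os

def run_task(task_id, result_q, path):
    # os.listdir stands in for the real work; it raises OSError on a bad path
    try:
        os.listdir(path)
        result_q.put((task_id, None))           # success
    except Exception as e:
        # the exception must be picklable; tblib helps when it is not
        result_q.put((task_id, e))              # failure

if __name__ == '__main__':
    result_q = multiprocessing.Queue()
    procs = []
    for task_id, path in enumerate(['.', '/no/such/dir']):
        p = multiprocessing.Process(target=run_task,
                                    args=(task_id, result_q, path))
        p.start()
        procs.append(p)
    for _ in procs:
        tid, exc = result_q.get()               # consume and handle errors here
        print('task %d: %s' % (tid, 'OK' if exc is None else repr(exc)))
    for p in procs:
        p.join()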

Regards,

Thanks, I wasn't aware of the multiprocessing.Queue workflow. It seems like it's going to require some changes to the actual code of the tasks, though. Did I get it right that I should stop raising exceptions, then?

Something like:

def task_1(path, q):
    # do the actual work
    if dont_work:
        q.put(RuntimeError("didn't work"))
        return
    # finished
    q.put(None)
    return

I would keep your core logic Pythonic: raise exceptions. But I would wrap each task in something that catches any Exception subclass and reports back to the queue. Untested example:

 def subwrapper(q, func, *args, **kwargs):
     try:
         # run the real task; on success report its return value
         q.put(('COMPLETED', func(*args, **kwargs)))
     except Exception as e:
         # on failure report the exception plus enough context to log or retry
         q.put(('FAILED', e, func, args, kwargs))

then dispatch tasks like this (using functools.partial to bind the queue and the task, since pool.map only passes one iterable argument to its function):

 pool.map(functools.partial(subwrapper, q, task_1), dirs, chunksize=1)

and have a thread (or the main program) collect things from the queue for logging and other handling. Obviously you might return something more sophisticated than my simple tuple above, but I'm sure you get the idea.
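
Putting the pieces together, an untested end-to-end sketch might look like this (assuming Python 3, where functools.partial objects pickle cleanly; a Manager queue is used because a plain multiprocessing.Queue can't be handed to Pool workers, and os.listdir stands in for the real task):

import functools
import multiprocessing
import os

def subwrapper(q, func, *args, **kwargs):
    try:
        q.put(('COMPLETED', func(*args, **kwargs)))
    except Exception as e:
        q.put(('FAILED', e, func, args, kwargs))

def task_1(path):
    # stand-in task: raises OSError if the directory does not exist
    return len(os.listdir(path))

if __name__ == '__main__':
    dirs = ['.', '/no/such/dir']
    manager = multiprocessing.Manager()
    q = manager.Queue()                 # proxy queue that Pool workers can use
    pool = multiprocessing.Pool(processes=2)
    pool.map(functools.partial(subwrapper, q, task_1), dirs, chunksize=1)
    pool.close()
    pool.join()
    # by the time map() returns every task has reported; a separate thread
    # could just as well drain the queue while the pool is still running
    while not q.empty():
        print(q.get())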

Cheers,
Cameron Simpson <c...@zip.com.au>

He's silly and he's ignorant, but he's got guts, and guts is enough.
       - Sgt. Hartmann
--
https://mail.python.org/mailman/listinfo/python-list
