New submission from olarn <bo.bantukulol...@gmail.com>:

multiprocessing.Pool apparently attempts to repopulate the pool when a 
sub-process worker crashes. However, the pool seems to hang after roughly 
4 * (number of workers) process re-spawns.

I've tracked the issue down to queue.get() stalling at multiprocessing/pool.py, 
line 102.

Is this a known issue? Are there any known workarounds?

To reproduce this issue:

import multiprocessing
import multiprocessing.util
import logging

multiprocessing.util._logger = multiprocessing.util.log_to_stderr(logging.DEBUG)
import time
import ctypes


def crash_py_interpreter():
    # Deliberately crash the worker with an out-of-bounds write through a
    # ctypes pointer, so the interpreter dies with a segfault rather than
    # a Python exception.
    print("attempting to crash the interpreter in ",
          multiprocessing.current_process())
    i = ctypes.c_char(b'a')
    j = ctypes.pointer(i)
    c = 0
    while True:
        j[c] = b'a'
        c += 1


def test_fn(x):
    # The worker quits immediately; the sleep below is never reached.
    print("test_fn in ", multiprocessing.current_process().name, x)
    exit(0)

    time.sleep(0.1)


if __name__ == '__main__':

    # pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    pool = multiprocessing.Pool(processes=1)

    args_queue = list(range(20))

    # subprocess quits cleanly on every task
    pool.map(test_fn, args_queue)

    # subprocess crashes (segfaults) on every task
    # pool.map(crash_py_interpreter, args_queue)
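
For Python 3, the closest thing to a workaround I can see is switching to 
concurrent.futures.ProcessPoolExecutor, which marks itself as broken when a 
worker dies and raises BrokenProcessPool instead of hanging. A minimal sketch 
of the same worker-death scenario (this uses a different API, so it only 
sidesteps the problem rather than fixing Pool itself):

import concurrent.futures
import os
from concurrent.futures.process import BrokenProcessPool


def worker_exits(x):
    # Simulates the abrupt worker death from the repro above.
    os._exit(1)


if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        try:
            # Consuming the map iterator raises BrokenProcessPool once a
            # worker dies, instead of blocking forever like Pool.map does.
            list(executor.map(worker_exits, range(20)))
        except BrokenProcessPool as exc:
            print("pool is broken:", exc)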

----------
components: Library (Lib)
messages: 305124
nosy: olarn
priority: normal
severity: normal
status: open
title: Multiprocessing.Pool hangs after re-spawning several worker processes.
type: behavior
versions: Python 2.7, Python 3.6

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31886>
_______________________________________