cagney <andrew.cag...@gmail.com> added the comment:

I think the only pragmatic solution here is to add an optional parameter to 
logging.basicConfig() specifying that the logger should use a single global 
lock, and to start documenting that thread locks and fork() don't work well 
together.

And note that this solution is pragmatic, not correct (@dhr, when we discussed 
this offline, pointed out that since Python uses threads, a "correct" 
solution would be to use some sort of inter-process lock).
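To illustrate what an inter-process lock buys you: a multiprocessing.Lock is backed by an OS-level semaphore shared across fork(), so a release in the parent is visible to the child — unlike a threading.Lock, whose per-process copy in the child stays locked with no owner to release it. A minimal sketch (POSIX-only; the variable names are mine, not from logging):

```python
import multiprocessing
import os
import time

# Process-shared lock: parent and child contend for the same underlying
# OS semaphore even after fork().
ipc_lock = multiprocessing.Lock()

ipc_lock.acquire()          # held across the fork
pid = os.fork()
if pid == 0:
    # Child: blocks until the *parent* releases -- a threading.Lock
    # copy would stay locked here forever.
    ipc_lock.acquire()
    ipc_lock.release()
    os._exit(0)
else:
    time.sleep(0.1)         # let the child block on the lock first
    ipc_lock.release()      # visible to the child; it can now proceed
    _, status = os.waitpid(pid, 0)
```

The child exits cleanly only because the release crossed the process boundary.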

For instance, if I were to implement emit() as something contrived like:

    with lock:
        s = record.to_string()
        for ch in s:
            b.append(ch)
        for ch in b:
            stdout.write(ch)
            stdout.flush()
        b = []

then when fork() breaks 'lock', the guarantee that the code is atomic is also 
broken:

- when the child enters the code, 'b' may be in an inconsistent, half-written 
state
- the guarantee that log records don't interleave is lost

While a global lock would help mitigate the first case, it really isn't a 
"correct" fix.
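For what the mitigation can look like in practice: os.register_at_fork() (3.7+, POSIX) lets a module hand the child a fresh, unlocked lock, so a lock that some parent thread held at fork time cannot deadlock the child. Note this addresses only the deadlock, not the half-written 'b' state above. A hypothetical sketch (names are illustrative):

```python
import os
import threading

_lock = threading.Lock()

def _reinit_in_child():
    # Give the child a fresh lock: whichever parent thread held the old
    # one does not exist in the child, so nobody could ever release it.
    global _lock
    _lock = threading.Lock()

os.register_at_fork(after_in_child=_reinit_in_child)

_lock.acquire()             # stand-in for "some other thread holds the lock"
pid = os.fork()
if pid == 0:
    # Without the reinit handler this would time out: the child inherits
    # a locked lock whose owner was never copied into this process.
    ok = _lock.acquire(timeout=2)
    os._exit(0 if ok else 1)

_lock.release()
_, status = os.waitpid(pid, 0)
```

The full pattern also takes the lock in a before= handler so the protected state is quiescent at fork time; the sketch above deliberately skips that to show the deadlock half on its own.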

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue36533>
_______________________________________