[issue14125] Windows: failures in refleak mode
sbt shibt...@gmail.com added the comment:

The failures for test_multiprocessing and test_concurrent_futures seem to be caused by a leak in _multiprocessing.win32.WaitForMultipleObjects(). The attached patch fixes those leaks for me (on a 32-bit build).

--
keywords: +patch
nosy: +sbt
Added file: http://bugs.python.org/file24655/mp_wfmo_leak.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14125 ___
___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14125] Windows: failures in refleak mode
sbt shibt...@gmail.com added the comment:

The attached patch fixes the time-related refleaks.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14125 ___
[issue14125] Windows: failures in refleak mode
sbt shibt...@gmail.com added the comment:

Ah. Forgot the patch.

--
Added file: http://bugs.python.org/file24662/time_strftime_leak.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14125 ___
[issue14166] private dispatch table for picklers
New submission from sbt shibt...@gmail.com:

Currently the only documented way to have customised pickling for a type is to register a reduction function with the global dispatch table managed by the copyreg module. But such global changes are liable to disrupt other code which uses pickling.

Multiprocessing deals with this by defining a ForkingPickler class which subclasses the pure Python _Pickler class (using undocumented features), and supports registering reduction functions specifically for that class. I would like to see some documented alternative which works with both C and Python implementations. At least then multiprocessing can avoid using slow pure Python pickling.

The attached patch allows a pickler object to have a private dispatch table which it uses *instead* of the global one. It lets one write code like

    p = pickle.Pickler(...)
    p.dispatch_table = copyreg.dispatch_table.copy()
    p.dispatch_table[SomeClass] = reduce_SomeClass

or

    class MyPickler(pickle.Pickler):
        dispatch_table = copyreg.dispatch_table.copy()

    MyPickler.dispatch_table[SomeClass] = reduce_SomeClass
    p = MyPickler(...)

The equivalent using copyreg would be

    copyreg.pickle(SomeClass, reduce_SomeClass)
    p = pickle.Pickler(...)

--
files: pickle_dispatch.patch
keywords: patch
messages: 154695
nosy: sbt
priority: normal
severity: normal
status: open
title: private dispatch table for picklers
type: enhancement
versions: Python 3.3
Added file: http://bugs.python.org/file24697/pickle_dispatch.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14166 ___
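To make the proposed API concrete, here is a small self-contained sketch of the private-dispatch-table pattern (it assumes Python 3.3+, where this feature landed; Point and reduce_point are invented for illustration, and the example reduces a Point to a plain tuple so no custom class needs to be importable by the unpickler):

```python
import copyreg
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Reduction function: return (callable, args).  Reducing to a plain
    # tuple keeps the example independent of how Point itself pickles.
    return (tuple, ((p.x, p.y),))

buf = io.BytesIO()
pickler = pickle.Pickler(buf)
# Start from a copy of the global table so copyreg registrations still
# apply, then add an entry visible only to this pickler.
pickler.dispatch_table = copyreg.dispatch_table.copy()
pickler.dispatch_table[Point] = reduce_point
pickler.dump(Point(1, 2))

result = pickle.loads(buf.getvalue())
print(result)
```

The global copyreg.dispatch_table is left untouched, so other code that pickles Point instances is unaffected.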
[issue14166] private dispatch table for picklers
sbt shibt...@gmail.com added the comment:

> I don't understand the following code: ... since self.dispatch_table
> is a property returning self._dispatch_table. Did you mean
> type(self).dispatch_table?

More or less. That code was a botched attempt to match the behaviour of the C implementation. The C implementation does not expose the dispatch table unless it has been explicitly set (on the pickler or the pickler class), and it ignores any dispatch_table (or persistent_id) attribute on the metaclass.

I will do a fixed patch with docs.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14166 ___
[issue12328] multiprocessing's overlapped PipeConnection on Windows
sbt shibt...@gmail.com added the comment:

> Hmm, I tried to apply the latest patch to the default branch and it
> failed. It also seems the patch was done against a changeset
> (508bc675af63) which doesn't exist in the repo...

I will do an updated patch against a public changeset. (I usually use a patch for the project files to disable those extensions which my Windows setup cannot compile.)

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12328 ___
[issue14166] private dispatch table for picklers
sbt shibt...@gmail.com added the comment:

Updated patch with docs.

--
Added file: http://bugs.python.org/file24729/pickle_dispatch.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14166 ___
[issue12328] multiprocessing's overlapped PipeConnection on Windows
sbt shibt...@gmail.com added the comment:

Updated patch against 2822765e48a7.

--
Added file: http://bugs.python.org/file24730/pipe_poll_fix.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12328 ___
[issue12328] multiprocessing's overlapped PipeConnection on Windows
sbt shibt...@gmail.com added the comment:

Updated patch addressing Antoine's comments.

--
Added file: http://bugs.python.org/file24737/pipe_poll_fix.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12328 ___
[issue14206] multiprocessing.Queue documentation is lacking important details
sbt shibt...@gmail.com added the comment:

What you were told on IRC was wrong. By default the queue *does* have infinite size.

When a process puts an item on the queue for the first time, a background thread is started which is responsible for writing items to the underlying pipe. This does mean that, on exit, the process should wait for the background thread to flush all the data to the pipe. This happens automatically unless you specifically prevent it by calling the cancel_join_thread() method.

If you stick to those methods supported by standard queue objects, then things should work correctly. (Maybe cancel_join_thread() would be better named allow_exit_without_flush().)

--
nosy: +sbt

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14206 ___
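The lazily started feeder thread described above can be seen with a minimal sketch (single process for simplicity; the same Queue object behaves identically when shared with child processes):

```python
from multiprocessing import Queue

q = Queue()        # no maxsize given, so capacity is effectively infinite
q.put("hello")     # the first put() starts the background feeder thread,
                   # which writes queued items to the underlying pipe
item = q.get()     # read the item back from the pipe
print(item)
# On exit the process joins the feeder thread to flush the pipe,
# unless cancel_join_thread() has been called to opt out.
```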
[issue14300] dup_socket() on Windows should use WSA_FLAG_OVERLAPPED
New submission from sbt shibt...@gmail.com:

According to Microsoft's documentation, sockets created using socket() have the overlapped attribute, but sockets created with WSASocket() do not unless you pass the WSA_FLAG_OVERLAPPED flag. The documentation for WSADuplicateSocket() says:

    If the source process uses the socket function to create the socket,
    the destination process must pass the WSA_FLAG_OVERLAPPED flag to its
    WSASocket function call.

This means that dup_socket() in socketmodule.c should use

    return WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                     FROM_PROTOCOL_INFO, info, 0, WSA_FLAG_OVERLAPPED);

instead of

    return WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                     FROM_PROTOCOL_INFO, info, 0, 0);

(On Windows, the new multiprocessing.connection.wait() function depends on the overlapped attribute, although it is primarily intended for use with pipe connections, not sockets.)

Patch attached.

--
files: socket_dup.patch
keywords: patch
messages: 155748
nosy: sbt
priority: normal
severity: normal
status: open
title: dup_socket() on Windows should use WSA_FLAG_OVERLAPPED
versions: Python 3.3
Added file: http://bugs.python.org/file24841/socket_dup.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14300 ___
[issue14300] dup_socket() on Windows should use WSA_FLAG_OVERLAPPED
sbt shibt...@gmail.com added the comment:

pitrou wrote:

> Are you sure this is desired? Nowhere can I think of a place in the
> stdlib where we use overlapped I/O on sockets.

multiprocessing.connection.wait() does overlapped zero-length reads on sockets. Its documentation currently claims that it works with sockets. Also it would seem strange if some sockets (created with socket()) have the overlapped attribute, but some others (created with WSASocket()) don't.

amaury.forgeotdarc wrote:

> Which problem are you trying to solve?

For one thing, the fact that socketmodule.c does not obey the word "must" in the quote from Microsoft's documentation.

> Can this change be tested somehow?

An additional test could be added to test_multiprocessing.TestWait. Slightly surprisingly, in the testing I have done so far, using wait() with a duplicated socket seems to work without the patch. However, I would be rather wary of just assuming that it works in all cases and on all versions of Windows.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14300 ___
[issue14288] Make iterators pickleable
sbt shibt...@gmail.com added the comment:

I think

    PyAPI_FUNC(PyObject *) _PyIter_GetIter(const char *iter);

has a confusing name for a convenience function which retrieves an attribute from the builtin module by name. Not sure what would be better. Maybe _PyIter_GetBuiltin().

--
nosy: +sbt

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14288 ___
[issue14310] Socket duplication for windows
sbt shibt...@gmail.com added the comment:

It appears that the 4th argument of the socket constructor is undocumented, so presumably one is expected to use fromfd() instead. Maybe you could have a frominfo(info) function (to match fromfd(fd, ...)) and a dupinfo(pid) method.

(It appears that multiprocessing uses DuplicateHandle() instead of WSADuplicateSocket() for duplicating socket handles on Windows. That should be fixed.)

--
nosy: +sbt

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___
[issue14308] '_DummyThread' object has no attribute '_Thread__block'
sbt shibt...@gmail.com added the comment:

_DummyThread.__init__() explicitly deletes self._Thread__block:

    def __init__(self):
        Thread.__init__(self, name=_newname("Dummy-%d"))

        # Thread.__block consumes an OS-level locking primitive, which
        # can never be used by a _DummyThread.  Since a _DummyThread
        # instance is immortal, that's bad, so release this resource.
        del self._Thread__block
            ^^^

--
nosy: +sbt

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14308 ___
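The _Thread__block spelling is just Python's private-name mangling: an attribute written as self.__block inside class Thread is stored under the name _Thread__block, which is what a subclass must write out to reach it. A stripped-down sketch (Base and Dummy are stand-ins, not the real threading classes):

```python
class Base:
    def __init__(self):
        self.__block = object()  # stored as the mangled name _Base__block

class Dummy(Base):
    def __init__(self):
        Base.__init__(self)
        # Outside Base's class body the mangled name must be written out
        # explicitly, just as _DummyThread does with _Thread__block.
        del self._Base__block

d = Dummy()
print(hasattr(d, "_Base__block"))
```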
[issue14308] '_DummyThread' object has no attribute '_Thread__block'
sbt shibt...@gmail.com added the comment:

Ignore my last message...

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14308 ___
[issue14335] Reimplement multiprocessing's ForkingPickler using dispatch_table
New submission from sbt shibt...@gmail.com:

The attached patch reimplements ForkingPickler using the new dispatch_table attribute. This allows ForkingPickler to subclass Pickler (implemented in C) instead of _Pickler (implemented in Python).

--
components: Library (Lib)
files: mp_forking_pickler.patch
keywords: patch
messages: 156028
nosy: sbt
priority: normal
severity: normal
status: open
title: Reimplement multiprocessing's ForkingPickler using dispatch_table
type: performance
versions: Python 3.3
Added file: http://bugs.python.org/file24887/mp_forking_pickler.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14335 ___
[issue12338] multiprocessing.util._eintr_retry doen't recalculate timeouts
sbt shibt...@gmail.com added the comment:

_eintr_retry is currently unused. The attached patch removes it. If it is retained then we should at least add a warning that it does not recalculate timeouts.

--
keywords: +patch
Added file: http://bugs.python.org/file24888/mp_remove_eintr_retry.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12338 ___
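For contrast, a retry helper that does recompute the remaining timeout after EINTR could look something like this (a hypothetical sketch, not stdlib code; eintr_retry_with_timeout and flaky_wait are invented names):

```python
import errno
import time

def eintr_retry_with_timeout(func, timeout):
    # Retry func(remaining) on EINTR, shrinking the timeout so the total
    # wait never exceeds the caller's original deadline.
    deadline = time.monotonic() + timeout
    while True:
        remaining = max(deadline - time.monotonic(), 0.0)
        try:
            return func(remaining)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise

# Simulate a wait that is interrupted once before succeeding.
seen_timeouts = []
def flaky_wait(t):
    seen_timeouts.append(t)
    if len(seen_timeouts) == 1:
        raise OSError(errno.EINTR, "interrupted system call")
    return "done"

result = eintr_retry_with_timeout(flaky_wait, 1.0)
print(result, seen_timeouts)
```

Each retry sees a timeout no larger than the previous one, which is exactly the property the removed helper lacked.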
[issue14336] Difference between pickle implementations for function objects
New submission from sbt shibt...@gmail.com:

When pickling a function object, if it cannot be saved as a global the C implementation falls back to using copyreg/__reduce__/__reduce_ex__. The comment for the changeset which added this fallback claims that it is for compatibility with the Python implementation. See

    http://hg.python.org/cpython/rev/c6753db9c6af

However, the current Python implementations do not have any such fallback. This affects both 2.x and 3.x. For example:

    Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pickle, cPickle, copy_reg
    >>> def f():
    ...     pass
    ...
    >>> _f = f
    >>> del f
    >>> copy_reg.pickle(type(_f), lambda obj: (str, ("FALLBACK",)))
    >>> cPickle.dumps(_f)
    "c__builtin__\nstr\np1\n(S'FALLBACK'\np2\ntp3\nRp4\n."
    >>> pickle.dumps(_f)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "c:\Python27\lib\pickle.py", line 1374, in dumps
        Pickler(file, protocol).dump(obj)
      File "c:\Python27\lib\pickle.py", line 224, in dump
        self.save(obj)
      File "c:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "c:\Python27\lib\pickle.py", line 748, in save_global
        (obj, module, name))
    pickle.PicklingError: Can't pickle <function f at 0x0299A470>: it's not found as __main__.f

I don't know what should be done. I would be tempted to always fall back to copyreg/__reduce__/__reduce_ex__ when save_global fails (not just for function objects), but that might make error messages less helpful.

--
components: Library (Lib)
messages: 156069
nosy: sbt
priority: normal
severity: normal
status: open
title: Difference between pickle implementations for function objects
type: behavior
versions: Python 2.7, Python 3.3

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14336 ___
[issue14310] Socket duplication for windows
sbt shibt...@gmail.com added the comment:

> I think this captures the functionality better than duplicate or
> duppid() since there is no actual duplication involved until the
> fromshare() function is called.

Are you saying the WSADuplicateSocket() call in share() doesn't duplicate the handle into the target process? I am pretty sure it does.

(Delaying handle duplication till WSASocket() is called in the target process would be rather problematic, since then you cannot close the original socket in the source process until you know the duplication has occurred.)

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___
[issue14310] Socket duplication for windows
sbt shibt...@gmail.com added the comment:

> If duplication happened early, then there would have to be a way to
> unduplicate it in the source process if, say, IPC somehow failed.

There is currently no API to undo the effects of WSADuplicateSocket(). If this were a normal handle then you could use the DUPLICATE_CLOSE_SOURCE flag with DuplicateHandle() to close it. But using DuplicateHandle() with socket handles is discouraged.

I find the ability to duplicate and close handles in unrelated processes of the same user rather surprising.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___
[issue14288] Make iterators pickleable
sbt shibt...@gmail.com added the comment:

> ... and that pickling things like dict iterators entail running the
> iterator to completion and storing all of the results in a list.

The thing to emphasise here is that pickling an iterator is destructive: afterwards the original iterator will be empty. I can't think of any other examples where pickling an object causes non-trivial mutation of that object.

Come to think of it, doesn't copy.copy() delegate to __reduce__()/__reduce_ex__()? It would be a bit surprising if copy.copy(myiterator) were to consume myiterator. I expect copy.copy() to return an independent copy without mutating the original object.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14288 ___
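The copy.copy() point can be checked directly, since copy falls back to __reduce_ex__() when a type defines no __copy__. On a modern CPython, where list iterators pickle by recording the underlying sequence and an index, the original iterator is not consumed:

```python
import copy

it = iter([1, 2, 3])
it2 = copy.copy(it)      # goes through __reduce_ex__() under the hood

copied = list(it2)
original = list(it)      # still intact: copying did not consume it
print(copied, original)
```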
[issue14288] Make iterators pickleable
sbt shibt...@gmail.com added the comment:

> If you look at the patch it isn't (or shouldn't be).

Sorry. I misunderstood when Raymond said "running the iterator to completion".

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14288 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

Jimbofbx wrote:

>     def main():
>         from multiprocessing import Pipe, reduction
>         i, o = Pipe()
>         print(i); reduced = reduction.reduce_connection(i)
>         print(reduced); newi = reduced[0](*reduced[1])
>         print(newi); newi.send("hi")
>         o.recv()

On Windows with a PipeConnection object you should use rebuild_pipe_connection() instead of rebuild_connection(). With that change, on Python 3.3 I get

    <multiprocessing.connection.PipeConnection object at 0x025BBCB0>
    (<function rebuild_pipe_connection at 0x0262F420>, (('.\\pipe\\pyc-6000-1-30lq4p', 356, False), True, True))
    <multiprocessing.connection.PipeConnection object at 0x029FF710>

Having said all that, I agree multiprocessing.reduction should be fixed. Maybe an enable_pickling_support() function could be added to register the necessary things with copyreg.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

ForkingPickler is only used when creating a child process. The multiprocessing.reduction module is only really intended for sending stuff to *pre-existing* processes.

As things stand, after importing multiprocessing.reduction you can do something like

    buf = io.BytesIO()
    pickler = ForkingPickler(buf)
    pickler.dump(conn)
    data = buf.getvalue()
    writer.send_bytes(data)

But that is rather less simple and obvious than just doing

    writer.send(conn)

which was possible in pyprocessing.

Originally, just importing the module magically registered the reduce functions with copyreg. Since this was undesirable, the reduction functions were instead registered with ForkingPickler. But this fix rather missed the point of the module.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

> But ForkingPickler could be used in multiprocessing.connection,
> couldn't it?

I suppose so.

Note that the way a connection handle is transferred between existing processes is unnecessarily inefficient on Windows. A background server thread (one per process) has to be started, and the receiving process must connect back to the sending process to receive its duplicate handle.

There is a simpler way to do this on Windows. The sending process duplicates the handle, and the receiving process duplicates that second handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE flag. That way no server thread is necessary on Windows. I got this to work recently for pickling references to file handles for mmaps on Windows. (A server thread would still be necessary on Unix.)

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
New submission from sbt shibt...@gmail.com:

In multiprocessing.connection on Windows, socket handles are indirectly duplicated using DuplicateHandle() instead of WSADuplicateSocket(). According to Microsoft's documentation this is not supported.

This is easily avoided by using socket.detach() instead of duplicating the handle.

--
files: mp_socket_dup.patch
keywords: patch
messages: 157747
nosy: sbt
priority: normal
severity: normal
status: open
title: Avoid using DuplicateHandle() on sockets in multiprocessing.connection
Added file: http://bugs.python.org/file25153/mp_socket_dup.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

> There is a simpler way to do this on Windows. The sending process
> duplicates the handle, and the receiving process duplicates that
> second handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE
> flag. That way no server thread is necessary on Windows.

Note that this should not be done for socket handles, since DuplicateHandle() is not supposed to work for them. socket.share() and socket.fromshare() with a server thread can be used for sockets.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
Changes by sbt shibt...@gmail.com:

Removed file: http://bugs.python.org/file25153/mp_socket_dup.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
sbt shibt...@gmail.com added the comment:

> What is the bug that this fixes? Can you provide a test case?

The bug is using an API in a way that the documentation says is wrong/unreliable. There does not seem to be a classification for that. I have never seen a problem caused by using DuplicateHandle(), so I cannot provide a test case.

Note that socket.dup() used to be implemented using DuplicateHandle(), but that was changed to WSADuplicateSocket(). See Issue 9753.

--
Added file: http://bugs.python.org/file25154/mp_socket_dup.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
sbt shibt...@gmail.com added the comment:

Actually, Issue 9753 was causing failures in test_socket.BasicTCPTest and test_socket.BasicTCPTest2 on at least one Windows XP machine.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
sbt shibt...@gmail.com added the comment:

> Is there a reason the patch changes close() to win32.CloseHandle()?

This is a Windows-only code path, so close() is just an alias for win32.CloseHandle(). It allows removal of the lines

    # Late import because of circular import
    from multiprocessing.forking import duplicate, close

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue14087] multiprocessing.Condition.wait_for missing
sbt shibt...@gmail.com added the comment:

New patch skips tests if ctypes is not available.

--
Added file: http://bugs.python.org/file25155/cond_wait_for.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14087 ___
[issue14532] multiprocessing module performs a time-dependent hmac comparison
sbt shibt...@gmail.com added the comment:

I only looked quickly at the web pages, so I may have misunderstood. But it sounds like this applies when the attacker gets multiple chances to guess the digest for a *fixed* message (which was presumably chosen by the attacker).

That is not the case here, because deliver_challenge() generates a new message each time. Therefore the expected digest changes each time.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14532 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

There is an undocumented function multiprocessing.allow_connection_pickling() whose docstring claims it allows connection and socket objects to be pickled. The attached patch fixes the multiprocessing.reduction module so that it works correctly. This means that TestPicklingConnections can be re-enabled in the unit tests.

The patch uses the new socket.share() and socket.fromshare() methods on Windows.

--
keywords: +patch
Added file: http://bugs.python.org/file25160/mp_pickle_conn.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
sbt shibt...@gmail.com added the comment:

> I think a generic solution must be found for multiprocessing, so I'll
> create a separate issue.

I have submitted a patch for Issue 4892 which makes connection and socket objects picklable. It uses socket.share() and socket.fromshare() on Windows.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

Updated patch which uses ForkingPickler in Connection.send(). Note that connection sharing still has to be enabled using allow_connection_pickling().

Support could be enabled automatically, but that would introduce more circular imports which confuse me. It might be worthwhile refactoring to eliminate all circular imports.

--
Added file: http://bugs.python.org/file25167/mp_pickle_conn.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

> But connection doesn't depend on reduction, neither does forking.

If registration of (Pipe)Connection is done in reduction then you can't make (Pipe)Connection picklable *automatically* unless you make connection depend on reduction (possibly indirectly). A circular import can be avoided by making reduction not import connection at module level. So not hard to fix.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14532] multiprocessing module performs a time-dependent hmac comparison
sbt shibt...@gmail.com added the comment:

I think it would be reasonable to add a safe comparison function to hmac. Its documentation could explain briefly when it would be preferable to ==.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14532 ___
[issue14548] garbage collection just after multiprocessing's fork causes exceptions
New submission from sbt shibt...@gmail.com:

When running test_multiprocessing on Linux I occasionally see a stream of errors caused by ignored weakref callbacks:

    Exception AssertionError: AssertionError() in <Finalize object, dead> ignored

These do not cause the unittests to fail.

Finalizers from the parent process are supposed to be cleared after the fork. But if a garbage collection occurs before that, then Finalize callbacks can be run in the wrong process. Disabling gc during fork seems to prevent the errors. Or maybe the Finalizer should record the pid of the process which created it and only invoke the callback if it matches the current pid.

(Compare Issue 1336 concerning subprocess.)

--
messages: 158049
nosy: sbt
priority: normal
severity: normal
status: open
title: garbage collection just after multiprocessing's fork causes exceptions

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14548 ___
[issue14548] garbage collection just after multiprocessing's fork causes exceptions
sbt shibt...@gmail.com added the comment:

Patch to disable gc.

--
keywords: +patch
Added file: http://bugs.python.org/file25180/mp_disable_gc.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14548 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

The last patch did not work on Unix. Here is a new version where the reduction functions are automatically registered, so allow_connection_pickling() is redundant.

--
Added file: http://bugs.python.org/file25181/mp_pickle_conn.patch

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14548] garbage collection just after multiprocessing's fork causes exceptions
sbt shibt...@gmail.com added the comment:

That's a problem indeed. Perhaps we need a global fork lock shared between subprocess and multiprocessing?

I did an atfork patch which included a (recursive) fork lock. See http://bugs.python.org/review/6721/show The patch included changes to multiprocessing and subprocess. (Being able to acquire the lock when doing fd manipulation is quite useful. For instance, the creation of Process.sentinel currently has a race which can mean that another process inherits the write end of the pipe. That would cause Process.join() to wait till both processes terminate.) Actually, for Finalizers I think it would be easier to just record and check the pid. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14548 ___
[issue14532] multiprocessing module performs a time-dependent hmac comparison
sbt shibt...@gmail.com added the comment: Why not just

def time_independent_equals(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 0

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14532 ___
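The suggested function is valid Python 3 as written; a runnable check (assuming byte-string inputs, as used for HMAC digests):

```python
def time_independent_equals(a, b):
    # Examine every position instead of returning at the first
    # mismatch, so the running time does not depend on where the
    # inputs differ (only the length check short-circuits).
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 0

print(time_independent_equals(b"digest", b"digest"))  # → True
print(time_independent_equals(b"digest", b"digesx"))  # → False
```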
[issue14548] garbage collection just after multiprocessing's fork causes exceptions
sbt shibt...@gmail.com added the comment: Alternative patch which records pid when Finalize object is created. The callback does nothing if recorded pid does not match os.getpid(). -- Added file: http://bugs.python.org/file25195/mp_finalize_pid.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14548 ___
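A minimal sketch of that behaviour (PidCheckedFinalize is a hypothetical name; the actual patch adds the check inside multiprocessing.util.Finalize):

```python
import os

class PidCheckedFinalize:
    """Sketch: remember which process created the finalizer and run
    the callback only in that process."""

    def __init__(self, callback):
        self._callback = callback
        self._pid = os.getpid()  # recorded at creation time

    def __call__(self):
        # A forked child inherits this object, but must not run the
        # parent's cleanup; do nothing if the pid no longer matches.
        if self._pid != os.getpid():
            return None
        return self._callback()
```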
[issue14548] garbage collection just after multiprocessing's fork causes exceptions
sbt shibt...@gmail.com added the comment:

But what if Finalize is used to clean up a resource that gets duplicated in children, like a file descriptor? See e.g. forking.py, line 137 (in Popen.__init__()) or heap.py, line 244 (BufferWrapper.__init__()).

This was how Finalize objects already acted (or were supposed to). In the case of BufferWrapper this is intended. BufferWrapper objects do not have reference counting semantics. Instead the memory is deallocated when the object is garbage collected in the process that created it. (Garbage collection in a child process should *not* invalidate memory owned by the parent process.) You can prevent the parent process from garbage collecting the object too early by following the advice below from the documentation:

Explicitly pass resources to child processes

On Unix a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process. Apart from making the code (potentially) compatible with Windows this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.

In the case of the sentinel in Popen.__init__(), it is harmless if this end of the pipe gets accidentally inherited by another process. Since Process does not have a closefds argument like subprocess.Popen, unintended leaking happens all the time. And even without the pid check, I think this finalizer would very rarely be triggered in a child process. (A Process object can only be garbage collected after it has been joined, and it can only be joined by its parent process.)
-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14548 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: I think there are some issues with the treatment of the DWORD type. (DWORD is a typedef for unsigned long.) _subprocess always treats them as signed, whereas _multiprocessing treats them (correctly) as unsigned. _windows does a mixture: functions from _subprocess parse DWORD arguments as signed (i), functions from _multiprocessing parse DWORD arguments as unsigned (k), and the constants are signed. So in _windows the constants GENERIC_READ, NMPWAIT_WAIT_FOREVER and INFINITE will be negative. I think this will potentially cause errors from PyArg_ParseTuple() when used as arguments to functions from _multiprocessing. I think it is also rather confusing that some functions (eg CreatePipe()) return handles using a wrapper type which closes on garbage collection, while others (eg CreateNamedPipe()) return handles as plain integers. (The code also needs updating because quite a few functions have since been added to _multiprocessing.win32.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
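The sign confusion can be demonstrated without any Windows code (a sketch using struct; the format letters "I"/"i" here stand in for PyArg_ParseTuple's "k"/"i", they are not the real parsing path):

```python
import struct

INFINITE = 0xFFFFFFFF  # winbase.h defines INFINITE as all 32 bits set

# An unsigned 32-bit view round-trips the constant unchanged...
packed = struct.pack("<I", INFINITE)
print(struct.unpack("<I", packed)[0])  # → 4294967295

# ...while reinterpreting the same bits as signed yields -1, which is
# why constants exposed through a signed path come out negative and
# can then be rejected by code expecting an unsigned value.
print(struct.unpack("<i", packed)[0])  # → -1
```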
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: Attached is an up to date patch. * code has been moved to Modules/_windows.c * DWORD is uniformly treated as unsigned * _subprocess's handle wrapper type has been removed (although subprocess.py still uses a Python implemented handle wrapper type) I'm not familiar with Visual Studio. I ended up copying _socket.vcproj to _windows.vcproj and editing it by hand. I also edited _multiprocessing.vcproj and pythoncore.vcproj by hand. -- Added file: http://bugs.python.org/file25217/windows_module.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: I don't think we need the vcproj file, unless I missed something. _multiprocessing.win32 currently wraps closesocket(), send() and recv() so it needs to link against ws2_32.lib. I don't know how to make _windows link against ws2_32.lib without adding a vcproj file for _windows unless we make pythoncore depend on ws2_32.lib. I presume this is why _socket and _select have their own vcproj files. Maybe the socket functions could be moved directly to the top level of _multiprocessing instead since they are not really win32 functions. (And I suppose if that does not happen then _multiprocessing should also stop linking against ws2_32.lib.) BTW why does _select link against wsock32.lib instead of ws2_32.lib? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: New patch. Compared to the previous one:

* socket functions have been moved from _windows to _multiprocessing
* _windows.vcproj has been removed (so _windows is part of pythoncore.vcproj)
* no changes to pcbuild.sln needed
* removed reference to 'win32_functions.c' in setup.py (I am not sure whether/how setup.py is used on Windows.)

 Lib/multiprocessing/connection.py          |  124 +-
 Lib/multiprocessing/forking.py             |   31 +-
 Lib/multiprocessing/heap.py                |    6 +-
 Lib/multiprocessing/reduction.py           |    6 +-
 Lib/subprocess.py                          |  104 +-
 Lib/test/test_multiprocessing.py           |    2 +-
 Modules/_multiprocessing/multiprocessing.c |   83 +-
 Modules/_multiprocessing/win32_functions.c |  823
 Modules/_windows.c                         | 1337 +++
 PC/_subprocess.c                           |  697 --
 PC/config.c                                |    6 +-
 PCbuild/_multiprocessing.vcproj            |    4 -
 PCbuild/pythoncore.vcproj                  |    8 +-
 setup.py                                   |    1 -
 14 files changed, 1568 insertions(+), 1664 deletions(-)

-- Added file: http://bugs.python.org/file25223/windows_module.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment:

I think the module would be better named _win32, since that's the name of the API (like POSIX under Unix).

Changed in new patch.

Also, it seems there are a couple of naming inconsistencies remaining (e.g. the overlapped wrapper is named _multiprocessing.win32.Overlapped)

I've fixed that one (and changed the initial comment at the beginning of _win32.c), but I can't see any other. I also removed a duplicate of getulong(). -- Added file: http://bugs.python.org/file25224/win32_module.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue14087] multiprocessing.Condition.wait_for missing
sbt shibt...@gmail.com added the comment: New patch which calculates endtime outside loop. -- Added file: http://bugs.python.org/file25240/cond_wait_for.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14087 ___
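The structure can be sketched as a standalone function (hypothetical; the patch adds this logic to multiprocessing's Condition, and `wait` here stands in for Condition.wait()). Computing the deadline once, outside the loop, means spurious wakeups consume the remaining time instead of restarting the full timeout:

```python
import time

def wait_for(wait, predicate, timeout=None):
    # Deadline is fixed once, before the wait loop starts.
    endtime = None if timeout is None else time.monotonic() + timeout
    result = predicate()
    while not result:
        if endtime is not None:
            remaining = endtime - time.monotonic()
            if remaining <= 0:
                break  # deadline passed; report the last predicate value
            wait(remaining)  # only ever wait for what is left
        else:
            wait(None)  # no timeout: block until notified
        result = predicate()
    return result
```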
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: How about _windowsapi or _winapi then, to ensure there are no clashes? I don't have any strong feelings, but I would prefer _winapi. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: s/_win32/_winapi/g -- Added file: http://bugs.python.org/file25241/winapi_module.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue11750] Mutualize win32 functions
sbt shibt...@gmail.com added the comment: Overlapped's naming is still lagging behind :-) Argh. And a string in winapi_module too. Yet another patch. -- Added file: http://bugs.python.org/file25252/winapi_module.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11750 ___
[issue14310] Socket duplication for windows
sbt shibt...@gmail.com added the comment: Can this issue be reclosed now? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment: Up to date patch. -- Added file: http://bugs.python.org/file25270/mp_pickle_conn.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment: A couple of minor changes based on Antoine's earlier review (which I did not notice till now). -- Added file: http://bugs.python.org/file25272/mp_pickle_conn.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___