[issue27879] add os.syncfs()
Nir Soffer added the comment: Updating Python version; this is not relevant to 3.6 now. On Linux users can use "sync --file-system /path", but it would be nice to have something that works on multiple platforms. -- nosy: +nirs versions: +Python 3.11 -Python 3.6 ___ Python tracker <https://bugs.python.org/issue27879> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
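A cross-platform wrapper could try the Linux syncfs(2) libc call via ctypes and fall back to the process-wide os.sync() elsewhere. A minimal sketch, assuming glibc exposes the syncfs symbol (the helper name and fallback behavior are mine, not a proposed API):

```python
import ctypes
import os

def syncfs(fd):
    """Flush the filesystem containing fd.

    Sketch only: uses the Linux syncfs(2) libc wrapper when available,
    falling back to os.sync() (which flushes all filesystems) elsewhere.
    """
    try:
        libc = ctypes.CDLL(None, use_errno=True)
        func = libc.syncfs  # raises AttributeError if the symbol is missing
    except (OSError, AttributeError):
        os.sync()  # weaker fallback: flushes every filesystem
        return
    if func(fd) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```

For example, `syncfs(os.open("/path", os.O_RDONLY))` would flush only the filesystem containing /path, like "sync --file-system /path" does.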
[issue29988] with statements are not ensuring that __exit__ is called if __enter__ succeeds
Nir Soffer added the comment: Does https://github.com/python/cpython/pull/1799 solve this issue for a synchronous with statement? with closing(this), closing(that): If it does, can we backport this fix to Python 3.6? 3.6 is used as the system Python for RHEL/CentOS 8, which will be around for at least 5 years or so. -- nosy: +nirs ___ Python tracker <https://bugs.python.org/issue29988> ___
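For the synchronous case, contextlib.ExitStack already guarantees that every __exit__ entered so far runs even if a later __enter__ or the body fails. A small sketch with a dummy context manager (the Resource class is illustrative):

```python
from contextlib import ExitStack

class Resource:
    """Dummy context manager recording enter/exit calls."""
    def __init__(self, name, log):
        self.name = name
        self.log = log

    def __enter__(self):
        self.log.append(("enter", self.name))
        return self

    def __exit__(self, *exc):
        self.log.append(("exit", self.name))
        return False  # do not swallow exceptions

log = []
try:
    with ExitStack() as stack:
        stack.enter_context(Resource("a", log))
        raise RuntimeError("simulated failure after a successful enter")
except RuntimeError:
    pass

# "a" was entered, so its __exit__ ran despite the exception
assert log == [("enter", "a"), ("exit", "a")]
```

This sidesteps the multi-manager `with a, b:` form entirely, which is why ExitStack is often recommended as the workaround.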
[issue40327] list(sys.modules.items()) can throw RuntimeError: dictionary changed size during iteration
Nir Soffer added the comment: Does this really affect only Python 3.7? We see this in RHEL 8.2, using Python 3.6.8: https://bugzilla.redhat.com/show_bug.cgi?id=1837199#c69 Likely caused by: lvs = dict(self._lvs) without locking. self._lvs is a dict that may contain thousands of items. I'm not sure if this is relevant now for upstream, but a backport to 3.6 would be useful. -- nosy: +nirs ___ Python tracker <https://bugs.python.org/issue40327> ___
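On the application side, one mitigation is to take the snapshot under the same lock the writers hold, so `dict(self._lvs)` can never observe a resize mid-copy. A hypothetical sketch (class and attribute names are illustrative, modeled loosely on the code quoted above):

```python
import threading

class LvCache:
    """Illustrative: a dict shared between threads, copied safely."""
    def __init__(self):
        self._lock = threading.Lock()
        self._lvs = {}

    def update(self, name, info):
        with self._lock:
            self._lvs[name] = info

    def snapshot(self):
        # Holding the writers' lock makes dict(self._lvs) immune to
        # "RuntimeError: dictionary changed size during iteration".
        with self._lock:
            return dict(self._lvs)

cache = LvCache()
cache.update("lv1", {"size": 1024})
copy = cache.snapshot()
copy["lv2"] = {}  # mutating the copy...
assert "lv2" not in cache.snapshot()  # ...does not touch the original
```

An alternative without a lock is to retry the copy in a loop, catching RuntimeError, but the lock makes the invariant explicit.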
[issue10819] ValueError on repr(closed_socket_file)
Nir Soffer added the comment: I find this new behavior a usability regression. Before this change, code (e.g. Python 2 code ported to Python 3) could do:

fd = sock.fileno()

without handling errors, since a closed socket would raise (good). Now such code needs to check the return value (bad):

fd = sock.fileno()
if fd == -1:
    fail...

This is also not consistent with other objects:

>>> f = open("Makefile")
>>> f.fileno()
3
>>> f.close()
>>> f.fileno()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file
>>> repr(f)
"<_io.TextIOWrapper name='Makefile' mode='r' encoding='UTF-8'>"

The issue with repr() on a closed socket can be mitigated easily inside __repr__, handling closed sockets without affecting code using file descriptors. Can we return the old safe behavior? -- nosy: +nirs ___ Python tracker <https://bugs.python.org/issue10819> ___
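Code that wants the old failure mode back can wrap fileno() itself. A sketch of such a helper (the name is mine, not a proposed API), relying on the current behavior where a closed socket's fileno() returns -1:

```python
import socket

def checked_fileno(sock):
    """Return the socket's fd, raising like file objects do when closed."""
    fd = sock.fileno()
    if fd == -1:
        raise ValueError("I/O operation on closed socket")
    return fd

s = socket.socket()
assert checked_fileno(s) >= 0
s.close()
try:
    checked_fileno(s)
except ValueError:
    pass  # same failure mode as a closed file object
else:
    raise AssertionError("expected ValueError on closed socket")
s = None
```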
[issue26868] Document PyModule_AddObject's behavior on error
Change by Nir Soffer : -- nosy: +nirs ___ Python tracker <https://bugs.python.org/issue26868> ___
[issue20215] socketserver.TCPServer can not listen IPv6 address
Nir Soffer added the comment: Doesn't it also affect 2.7, 3.6, 3.7, and 3.8? -- ___ Python tracker <https://bugs.python.org/issue20215> ___
[issue20215] socketserver.TCPServer can not listen IPv6 address
Change by Nir Soffer : -- nosy: +nirs ___ Python tracker <https://bugs.python.org/issue20215> ___
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Nir Soffer <nir...@gmail.com> added the comment: Attaching reproducer for os.fdopen() -- Added file: https://bugs.python.org/file47492/fdopen_nfs_test.py ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Nir Soffer <nir...@gmail.com> added the comment: Attaching reproducer for mmapobject.size() -- Added file: https://bugs.python.org/file47491/mmap_size_nfs_test.py ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Nir Soffer <nir...@gmail.com> added the comment: Antoine, thanks for fixing this on master! But I don't think this issue can be closed yet.

First, the issue is not about performance but reliability; I probably made a bad choice when I marked this as performance. When you call mmap.mmap() in one thread, the entire process hangs for an hour because the file descriptor is on a non-responsive NFS server. With the fix, only the thread accessing the file descriptor is affected; the rest of the system can function normally.

Second, the issue affects Python 2.7, which is the production version on many servers, and will be for many years, e.g. on RHEL/CentOS 7. I think it is important to fix this issue for these users.

Here are examples of the issue using the reproducer scripts I uploaded to the bug. When mmap.mmap() blocks, the entire process hangs. I unblocked the process from another shell by removing the iptables rule.

# python bpo-33021/mmap_nfs_test.py mnt dumbo.tlv.redhat.com
2018-03-17 01:17:57,846 - (MainThread) - Starting canary thread
2018-03-17 01:17:57,846 - (Canary) - Blocking access to storage
2018-03-17 01:17:57,857 - (Canary) - If this test is hang, please run: iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP
2018-03-17 01:17:57,857 - (Canary) - check 0
2018-03-17 01:17:58,858 - (Canary) - check 1
2018-03-17 01:17:59,858 - (Canary) - check 2
2018-03-17 01:18:00,859 - (Canary) - check 3
2018-03-17 01:18:01,859 - (Canary) - check 4
2018-03-17 01:18:02,859 - (Canary) - check 5
2018-03-17 01:18:03,860 - (Canary) - check 6
2018-03-17 01:18:04,860 - (Canary) - check 7
2018-03-17 01:18:05,861 - (Canary) - check 8
2018-03-17 01:18:06,861 - (Canary) - check 9
2018-03-17 01:18:07,862 - (Canary) - check 10
2018-03-17 01:18:07,868 - (MainThread) - Calling mmap.mmap
(I remove the iptables rule here)
2018-03-17 01:18:57,683 - (MainThread) - OK
2018-03-17 01:18:57,683 - (MainThread) - Done
2018-03-17 01:18:57,683 - (Canary) - check 11

When mmapobject.size() was called, the entire process hung. I unblocked the process from another shell by removing the iptables rule.

# python bpo-33021/mmap_size_nfs_test.py mnt dumbo.tlv.redhat.com
2018-03-17 01:22:17,991 - (MainThread) - Starting canary thread
2018-03-17 01:22:17,992 - (Canary) - Blocking access to storage
2018-03-17 01:22:18,001 - (Canary) - If this test is hang, please run: iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP
2018-03-17 01:22:18,001 - (Canary) - check 0
2018-03-17 01:22:19,002 - (Canary) - check 1
2018-03-17 01:22:20,002 - (Canary) - check 2
2018-03-17 01:22:21,002 - (Canary) - check 3
2018-03-17 01:22:22,003 - (Canary) - check 4
2018-03-17 01:22:23,003 - (Canary) - check 5
2018-03-17 01:22:24,004 - (Canary) - check 6
2018-03-17 01:22:25,004 - (Canary) - check 7
2018-03-17 01:22:26,004 - (Canary) - check 8
2018-03-17 01:22:27,005 - (Canary) - check 9
2018-03-17 01:22:28,005 - (MainThread) - Calling mmapobject.size
(I removed the iptables rule here)
2018-03-17 01:23:38,701 - (MainThread) - OK
2018-03-17 01:23:38,701 - (MainThread) - Done
2018-03-17 01:23:38,701 - (Canary) - check 10

I found that the os.fdopen issue does not affect RHEL/CentOS 7, because they use Python 2.7.5, and the issue was introduced in Python 2.7.7, in:

commit 5c863bf93809cefeb4469512eadac291b7046051
Author: Benjamin Peterson <benja...@python.org>
Date: Mon Apr 14 19:45:46 2014 -0400

    when an exception is raised in fdopen, never close the fd
    (changing on my mind on #21191)

This issue affects Fedora (Python 2.7.14) and probably other distros using the latest Python 2.7. Here is an example run showing how this affects Fedora 27:

# python fdopen_nfs_test.py mnt dumbo.tlv.redhat.com
2018-03-17 01:43:52,718 - (MainThread) - Starting canary thread
2018-03-17 01:43:52,718 - (Canary) - Blocking access to storage
2018-03-17 01:43:52,823 - (Canary) - If this test is hang, please run: iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP
2018-03-17 01:43:52,824 - (Canary) - check 0
2018-03-17 01:43:53,824 - (Canary) - check 1
2018-03-17 01:43:54,824 - (Canary) - check 2
2018-03-17 01:43:55,825 - (Canary) - check 3
2018-03-17 01:43:56,825 - (Canary) - check 4
2018-03-17 01:43:57,825 - (Canary) - check 5
2018-03-17 01:43:58,826 - (Canary) - check 6
2018-03-17 01:43:59,826 - (Canary) - check 7
2018-03-17 01:44:00,826 - (Canary) - check 8
2018-03-17 01:44:01,827 - (Canary) - check 9
2018-03-17 01:44:02,827 - (Canary) - check 10
2018-03-17 01:44:02,834 - (MainThread) - Calling os.fdopen
(remove iptables rule, and force-unmount here)
2018-03-17 01:50:25,853 - (MainThread) - OK
2018-03-17 01:50:25,854 - (Canary) - check 11
2018-03-17 01:50:25,895 - (MainThread) - Done
Traceback (most recent call last):
  File "fdopen_nfs_test.py", line 75, in <module>
    os.unlink(filename)
OSError: [Errno 2] No such file or directory: 'mnt/test'

So, I think we should:
- backport to 3.7, 3.6
- reconsider backport to
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Nir Soffer <nir...@gmail.com> added the comment: Python cannot protect a raw file descriptor from a badly behaved multi-threaded application. For example, the application may close a file descriptor twice, which may close an unrelated file descriptor created by another thread between the two close calls. This issue affects all functions using raw file descriptors, and we cannot protect them with the GIL. Even if fstat() were not thread safe, we could not protect it using the GIL, since that blocks the entire application until fstat() returns. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
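The double-close hazard described above can be sketched in application code: the only robust pattern is to invalidate the stored fd before closing it, so a second close() becomes a no-op instead of closing a descriptor the OS has already reassigned to another thread. A sketch (FdHolder is illustrative, not CPython code):

```python
import os

class FdHolder:
    """Owns a raw fd; close() is idempotent to avoid the double-close race."""
    def __init__(self, fd):
        self._fd = fd

    def close(self):
        # Swap the fd out before closing, so a second call cannot
        # close an fd that was already reused by another thread.
        fd, self._fd = self._fd, None
        if fd is not None:
            os.close(fd)

r, w = os.pipe()
h = FdHolder(r)
h.close()
h.close()  # safe no-op instead of closing a reused descriptor
os.close(w)
```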
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Change by Nir Soffer <nir...@gmail.com>: -- pull_requests: +5787 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
Change by Nir Soffer <nir...@gmail.com>: -- keywords: +patch pull_requests: +5786 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
[issue33021] Some fstat() calls do not release the GIL, possibly hanging all threads
New submission from Nir Soffer <nir...@gmail.com>: If the file descriptor is on a non-responsive NFS server, calling fstat() can block for a long time, hanging all threads. Most of the fstat() calls release the GIL around the call, but some calls seem to have been forgotten.

In Python 3, the calls are now handled by _py_fstat(), which releases the GIL internally, but some calls use _py_fstat_noraise(), which does not release the GIL. Most of the calls to _py_fstat_noraise() release the GIL around the call, except these 2 calls, affecting users of:

- mmap.mmap()
- os.urandom()
- random.seed()

In Python 2 there are more fstat() calls to fix, affecting users of:

- imp.load_dynamic()
- imp.load_source()
- mmap.mmap()
- mmapobject.size()
- os.fdopen()
- os.urandom()
- random.seed()

-- components: Library (Lib) messages: 313407 nosy: brett.cannon, eric.snow, ncoghlan, nirs, serhiy.storchaka, twouters, vstinner, yselivanov priority: normal severity: normal status: open title: Some fstat() calls do not release the GIL, possibly hanging all threads type: performance versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33021> ___
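Until a fixed interpreter is deployed, an application can at least confine the blocking call to a worker thread so only that thread hangs. A hypothetical mitigation sketch (a hung worker still leaks a thread, which is exactly why the real fix must release the GIL in C):

```python
import concurrent.futures
import os

# One worker is enough for a sketch; a hung os.fstat() call blocks
# only this worker, not every thread in the process.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def fstat_with_timeout(fd, timeout):
    """Run os.fstat(fd) in a worker thread, raising on timeout."""
    future = _pool.submit(os.fstat, fd)
    return future.result(timeout=timeout)

fd = os.open(".", os.O_RDONLY)
st = fstat_with_timeout(fd, timeout=10.0)
os.close(fd)
assert st.st_size >= 0
_pool.shutdown(wait=True)
```

On timeout the caller gets concurrent.futures.TimeoutError and can report the stuck mount instead of freezing the whole process.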
[issue32186] io.FileIO hang all threads if fstat blocks on inaccessible NFS server
Change by Nir Soffer <nir...@gmail.com>: -- keywords: +patch pull_requests: +4563 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32186> ___
[issue32186] io.FileIO hang all threads if fstat blocks on inaccessible NFS server
Change by Nir Soffer <nir...@gmail.com>: -- pull_requests: +4564 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32186> ___
[issue32186] io.FileIO hang all threads if fstat blocks on inaccessible NFS server
Nir Soffer <nir...@gmail.com> added the comment: Forgot to mention - reproducible with Python 2.7. Similar issues exist in Python 3, but I did not try to reproduce them since we are using Python 2.7. I posted patches for both 2.7 and master:

- https://github.com/python/cpython/pull/4651
- https://github.com/python/cpython/pull/4652

-- nosy: +benjamin.peterson, stutzbach, vstinner ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32186> ___
[issue32186] io.FileIO hang all threads if fstat blocks on inaccessible NFS server
New submission from Nir Soffer <nir...@gmail.com>: Using io.FileIO can hang all threads when accessing an inaccessible NFS server. To reproduce this, you need to open the file like this:

fd = os.open(filename, ...)
fio = io.FileIO(fd, "r+", closefd=True)

Inside fileio_init, there is a checkfd call, calling fstat without releasing the GIL. This will hang all threads. The expected behavior is blocking only the thread blocked on the system call, so the system stays responsive and can serve other tasks.

Here is the log showing this issue, created with the attached reproducer script (fileio_nfs_test.py).

# python fileio_nfs_test.py mnt/fileio.out dumbo.tlv.redhat.com
2017-11-30 18:41:49,159 - (MainThread) - pid=3436
2017-11-30 18:41:49,159 - (MainThread) - Opening mnt/fileio.out
2017-11-30 18:41:49,160 - (MainThread) - OK fd=3
2017-11-30 18:41:49,161 - (MainThread) - Starting canary thread
2017-11-30 18:41:49,161 - (Canary) - Blocking access to storage
2017-11-30 18:41:49,169 - (Canary) - If this test is hang, please run: iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP
2017-11-30 18:41:49,169 - (Canary) - check 0
2017-11-30 18:41:49,169 - (MainThread) - Waiting until storage is blocked...
2017-11-30 18:41:50,170 - (Canary) - check 1
2017-11-30 18:41:51,170 - (Canary) - check 2
2017-11-30 18:41:52,171 - (Canary) - check 3
2017-11-30 18:41:53,171 - (Canary) - check 4
2017-11-30 18:41:54,172 - (Canary) - check 5
2017-11-30 18:41:55,172 - (Canary) - check 6
2017-11-30 18:41:56,172 - (Canary) - check 7
2017-11-30 18:41:57,173 - (Canary) - check 8
2017-11-30 18:41:58,173 - (Canary) - check 9
2017-11-30 18:41:59,174 - (MainThread) - Opening io.FileIO

Everything is hung now! After some time I ran this from another shell:

iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP

And now the script is unblocked and finishes.

2017-11-30 18:45:29,683 - (MainThread) - OK
2017-11-30 18:45:29,684 - (MainThread) - Creating mmap
2017-11-30 18:45:29,684 - (Canary) - check 10
2017-11-30 18:45:29,684 - (MainThread) - OK
2017-11-30 18:45:29,685 - (MainThread) - Filling mmap
2017-11-30 18:45:29,685 - (MainThread) - OK
2017-11-30 18:45:29,685 - (MainThread) - Writing mmap to storage
2017-11-30 18:45:29,719 - (MainThread) - OK
2017-11-30 18:45:29,719 - (MainThread) - Syncing
2017-11-30 18:45:29,719 - (MainThread) - OK
2017-11-30 18:45:29,720 - (MainThread) - Done

We have a canary thread logging every second. Once we tried to open the FileIO, the canary thread stopped - this is possible only if the io extension module was holding the GIL during a blocking call.

And here is the backtrace of the hung process in the kernel:

# cat /proc/3436/stack
[] rpc_wait_bit_killable+0x24/0xb0 [sunrpc]
[] __rpc_execute+0x154/0x410 [sunrpc]
[] rpc_execute+0x68/0xb0 [sunrpc]
[] rpc_run_task+0xf6/0x150 [sunrpc]
[] nfs4_call_sync_sequence+0x63/0xa0 [nfsv4]
[] _nfs4_proc_getattr+0xcc/0xf0 [nfsv4]
[] nfs4_proc_getattr+0x72/0xf0 [nfsv4]
[] __nfs_revalidate_inode+0xbf/0x310 [nfs]
[] nfs_getattr+0x95/0x250 [nfs]
[] vfs_getattr+0x46/0x80
[] vfs_fstat+0x45/0x80
[] SYSC_newfstat+0x24/0x60
[] SyS_newfstat+0xe/0x10
[] system_call_fastpath+0x16/0x1b
[] 0x

You cannot attach to the process with gdb, since it is in D state, but once the process is unblocked, gdb takes control, and we see:

Thread 2 (Thread 0x7f97a2ea5700 (LWP 4799)):
#0 0x7f97ab925a0b in do_futex_wait.constprop.1 () from /lib64/libpthread.so.0
#1 0x7f97ab925a9f in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2 0x7f97ab925b3b in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3 0x7f97abc455f5 in PyThread_acquire_lock () from /lib64/libpython2.7.so.1.0
#4 0x7f97abc11156 in PyEval_RestoreThread () from /lib64/libpython2.7.so.1.0
#5 0x7f97a44f9086 in time_sleep () from /usr/lib64/python2.7/lib-dynload/timemodule.so
#6 0x7f97abc18bb0 in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#7 0x7f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#8 0x7f97abc183fc in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#9 0x7f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#10 0x7f97abc183fc in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#11 0x7f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#12 0x7f97abba494d in function_call () from /lib64/libpython2.7.so.1.0
#13 0x7f97abb7f9a3 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#14 0x7f97abc135bd in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#15 0x7f97abc1857d in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#16 0x7f97abc1857d in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#17 0x7f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#18 0x7f97abba4858 in function_call () from /lib64/libpython2.7.so.1.0
#19 0x7f97abb7f9a3 in PyObject_Cal
[issue31945] Configurable blocksize in HTTP(S)Connection
Nir Soffer <nir...@gmail.com> added the comment: When using the high-level request() API, users can control the block size by wrapping the file object with an iterator:

class FileIter:

    def __init__(self, file, blocksize):
        self.file = file
        self.blocksize = blocksize

    def __iter__(self):
        while True:
            datablock = self.file.read(self.blocksize)
            if not datablock:
                break
            yield datablock

Adding a configurable block size will avoid this workaround. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31945> ___
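On Python 3 the same workaround can be written more compactly with the two-argument iter() form. A brief sketch using an in-memory file (the helper name is mine):

```python
import io
from functools import partial

def file_chunks(f, blocksize=512 * 1024):
    # iter(callable, sentinel) calls f.read(blocksize) until it
    # returns the empty sentinel, yielding one block at a time.
    return iter(partial(f.read, blocksize), b"")

body = io.BytesIO(b"x" * 100)
chunks = list(file_chunks(body, blocksize=64))
assert [len(c) for c in chunks] == [64, 36]
```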
[issue31945] Configurable blocksize in HTTP(S)Connection
Change by Nir Soffer <nir...@gmail.com>: -- keywords: +patch pull_requests: +4241 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31945> ___
[issue31945] Configurable blocksize in HTTP(S)Connection
New submission from Nir Soffer <nir...@gmail.com>: blocksize is hardcoded to 8192 in send() and _read_readable(), preventing efficient upload when using a file-like body. Users of the module that are not interested in chunked encoding can rewrite the copy loop using HTTPConnection.send():

conn = HTTPSConnection(...)
conn.putrequest(...)
conn.putheader(...)
conn.endheaders()

while True:
    chunk = file.read(512*1024)
    if not chunk:
        break
    conn.send(chunk)

But fixing send() to use a configurable blocksize seems more useful. Also, users of requests do not have access to the underlying connection, so they cannot use a preferred buffer size. When reading from /dev/zero and uploading to a server that drops the received data, a larger buffer size gives 3X more throughput *and* 1/3 of the CPU time. With real storage and network, the effect will probably be much smaller. -- components: Library (Lib) messages: 305571 nosy: brett.cannon, haypo, nirs, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Configurable blocksize in HTTP(S)Connection versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31945> ___
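In Python 3.7 and later, HTTPConnection grew a blocksize constructor parameter that does exactly this. A quick sketch; constructing the connection does not open a socket, so this runs offline:

```python
from http.client import HTTPConnection

# Python 3.7+: blocksize controls the chunk size send() uses when
# copying a file-like body (the default is 8192).
conn = HTTPConnection("example.com", blocksize=512 * 1024)
assert conn.blocksize == 512 * 1024
```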
[issue30931] Race condition in asyncore may access the wrong dispatcher
Nir Soffer added the comment: Victor, I mostly agree with you, but I think we have several bugs here, and we should do the minimal fix for each of them. Your PR is too big, trying to fix too much.

(Bug 1) Dispatcher A closes dispatcher B. Currently, asyncore calls the handlers of dispatcher B. We don't have such a bug now - if dispatcher B is closed, it is not in the map, and the current code will skip the fd when checking:

for fd in r:
    obj = map.get(fd)
    if obj is None:
        continue

(Bug 2) Dispatcher A closes dispatcher B and creates dispatcher C. C may get the same fd as B. If B was ready, asyncore may get C from the map and access it instead of B. This is the issue reported in this bug.

(Bug 3) If handle_read() closed the dispatcher, asyncore will call handle_write() on a closed dispatcher. This is a very old bug with asyncore.readwrite().

(Bug 4) handle_xxx_event() internal methods may call multiple handle_xxx() user methods, again not checking if the dispatcher was closed after each invocation. Same, a very old bug.

So I suggest that we fix *only* bug 2 in https://github.com/python/cpython/pull/2854. The issue in readwrite() can be solved with the approach you suggest, but I prefer to make small and careful steps so we don't introduce regressions in stable versions. Also, there is already too much noise here and in the various PRs about this issue; better to file bugs for the rest of the issues and discuss them separately.

This commit https://github.com/python/cpython/pull/2854/commits/bbd2d09ab999fa2214cbbd2589ae3642facd3057 looks fine with the test_poll_close_replace_dispatchers test added in the later commit. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30931> ___
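The fd-reuse race in (Bug 2), and the identity check that guards against it, can be sketched with plain dicts (names are illustrative, not asyncore's internals):

```python
class Dispatcher:
    """Illustrative stand-in for an asyncore dispatcher."""
    def __init__(self, name):
        self.name = name
        self.handled = []

    def handle_read(self):
        self.handled.append("read")

fd_map = {}

# poll() was prepared while B owned fd 5
b = Dispatcher("B")
fd_map[5] = b
prepared = dict(fd_map)

# during the same loop iteration another dispatcher closes B, and a
# new dispatcher C is created; the kernel reuses fd 5
del fd_map[5]
c = Dispatcher("C")
fd_map[5] = c

# naive dispatch: fd 5 is "ready", the map lookup finds C - the wrong object
assert fd_map.get(5) is c

# guarded dispatch: only deliver the event if the fd still belongs to
# the dispatcher it was prepared for
for fd, owner in prepared.items():
    if fd_map.get(fd) is not owner:
        continue  # fd was closed and reused: drop the stale event
    owner.handle_read()

assert b.handled == [] and c.handled == []  # stale event not delivered
```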
[issue30980] Calling asyncore.file_wrapper.close twice may close unrelated file descriptor
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2952 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30980> ___
[issue30980] Calling asyncore.file_wrapper.close twice may close unrelated file descriptor
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2951 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30980> ___
[issue30980] Calling asyncore.file_wrapper.close twice may close unrelated file descriptor
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2950 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30980> ___
[issue30985] Set closing variable in asyncore at close
Nir Soffer added the comment: Giampaolo, people using only 3.7 should probably use asyncio. Fixing asyncore is more important for those that can use only 2.7 (e.g. CentOS/RHEL) or have to support both Python 3 and 2. Do you think using _closed is safer for a backport? This can also clash with existing code. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30985> ___
[issue30931] Race condition in asyncore may access the wrong dispatcher
Nir Soffer added the comment:

> I use the same trick all over the place in pyftpdlib:
> https://github.com/giampaolo/pyftpdlib/blob/1268bb185cd63c657d78bc33309041628e62360a/pyftpdlib/handlers.py#L537

This allows detection of closed sockets, but does not fix the issue of accessing the wrong dispatcher.

> In practical terms, does this bug report aim to fix this issue?

Yes, see the attached PRs. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30931> ___
[issue30985] Set closing variable in asyncore at close
Nir Soffer added the comment: The "new" closing attribute is as old as asyncore; it was just unused :-) -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30985> ___
[issue30931] Race condition in asyncore may access the wrong dispatcher
Nir Soffer added the comment: On my PR 2854, Nir added these comments (extract):

> "And now we try to read from close dispatcher. I think we should wait
> for #2804 (...)"
> Sorry, I don't understand. My PR fixes described the bug that you
> described in msg298682:
> "If a dispatchers is closed and and a new dispatcher is created, the
> new dispatcher may get the same file descriptor. If the file
> descriptor was in the ready list returned from poll()/select(),
> asyncore may try to invoke one of the callbacks (e.g. handle_write)
> on the new dispatcher."
> About reading from closed dispatcher: sorry, I don't know the asyncore
> concurrency model, but I know the asyncio concurrency model. In
> asyncio, I don't see why a "dispatcher" would affect any "unrelated"
> dispatcher. Said differently: if a dispatcher A closes the dispatcher
> B and dispatcher B is "readable", the read event will be handled
> anyway. The "close event" will be handled at the next loop iteration.
> The oversimplified asyncio model is "each iteration is atomic".

I don't know asyncio enough, but I suspect this issue may affect it if it is using the fd to check if a reader/writer is closed. This is the main issue in asyncore, assuming that during the loop:

obj = map.get(fd)

means that obj is the dispatcher that owned the fd when preparing for poll()/select(), and that we can use obj for calling the I/O callbacks. The dispatcher owning this fd may have been closed during the loop, and a new dispatcher may have been created with the *same* fd. The only way to check if a dispatcher is closed is to check the object.

> Closing a dispatcher calls its del_channel() which removes its file
> descriptor from the asyncore map.

And creating a new one at that point will add a new channel using the same fd.

> If I understand correctly, the *current behaviour* depends on the file
> descriptor order, since select.select() and select.poll.poll() returns
> ordered file descriptors (smallest to largest). If a dispatcher A is
> called and closes a dispatcher B, the dispatcher B is called or not
> depending on the file descriptor. If A.fileno() < B.fileno(), B is not
> called. If A.fileno() > B.fileno(), B was already called. Am I right?

I don't think the order matters, only the fact that closing a dispatcher and creating a new one is likely to reuse the same fd.

> What is the use case of a dispatcher closing another one? What is the
> consequence on my PR? Does the dispatcher B expects to not be called
> to read if it was already closed?

I don't know what the use case of the reporter is, but here is one possible example - you implement a proxy; each proxy connection has 2 legs, each using a dispatcher connected to an endpoint. If one leg is closed, you also close the other leg (one dispatcher closing another). Now if you create a new dispatcher during the same loop iteration, it may get the same fd as the other closed leg, and asyncore may try to read/write with this dispatcher. This may work or not, depending on how you implement the dispatcher. This is demonstrated in https://github.com/python/cpython/pull/2764/commits/5aeb0098d2347984f3a89cf036c305edd2b8382b

> I see that close() also immediately closes the underlying socket. Does
> it mean that my PR can triggered OSError since we try recv() on a
> closed socket?

Yes, this is a typical issue in asyncore if you don't protect your subclass against double close. asyncore.file_wrapper is protected, but asyncore.dispatcher and its subclasses are not. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30931> ___
[issue30994] Asyncore does not need to copy map.items() before polling
Nir Soffer added the comment: Using a quick test with 1000 clients sending 100 pings, I could not see a significant difference between master and this patch. It seems that the extra copy is hidden by the noise. Having this documented is good enough if someone wants to use this. -- stage: -> resolved status: open -> closed ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30994> ___
[issue30994] Asyncore does not need to copy map.items() before polling
Nir Soffer added the comment: The advantage is avoiding a wasteful copy on each iteration. -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30994> ___
[issue30985] Set closing variable in asyncore at close
Nir Soffer added the comment: I agree that this change alone may not be important enough to fix in asyncore today - but it enables the easy detection of closed sockets needed for https://github.com/python/cpython/pull/2764 or the alternative https://github.com/python/cpython/pull/2854. Unless you suggest a better way to detect closed dispatchers. Introducing new failure modes in asyncore is extremely risky; error handling is asyncore's weakest point. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30985> ___
[issue30994] Asyncore does not need to copy map.items() before polling
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2872 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30994> ___
[issue30994] Asyncore does not need to copy map.items() before polling
New submission from Nir Soffer: Asyncore is not thread safe, and cannot be called from multiple threads. Hence it does not need to copy the socket_map when preparing for poll or select. The copy was introduced in:

commit d74900ebb5a22b387b49684990da1925e1d6bdc9
Author: Josiah Carlson <josiah.carl...@gmail.com>
Date: Mon Jul 7 04:15:08 2008 +

    Committing Py3k version of changelist 64080 and 64257, along with
    updated tests for smtpd, which required updating with the new
    semantics.

This is a huge patch; it looks like a port of asyncore to Python 3, trying to keep the behavior of the Python 2 code. Converting map.items() to list(map.items()) is correct, but on Python 3 we can take advantage of the fact that items() does not copy anything. -- components: Library (Lib) messages: 298880 nosy: Nir Soffer, giampaolo.rodola, haypo priority: normal severity: normal status: open title: Asyncore does not need to copy map.items() before polling versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30994> ___
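The trade-off can be shown with plain dicts (a generic sketch, not asyncore code): on Python 3, items() returns a live view and copies nothing, while list(d.items()) takes a snapshot that costs a copy per iteration but tolerates mutation.

```python
# Generic illustration of items() view vs. list(items()) snapshot.
socket_map = {fd: "dispatcher-%d" % fd for fd in range(3)}

# Iterating the view directly: no copy is made.
seen = [fd for fd, obj in socket_map.items()]

# Mutating the dict while iterating the live view fails loudly.
try:
    for fd, obj in socket_map.items():
        del socket_map[fd]  # e.g. a handler closing a dispatcher
except RuntimeError:
    mutated_during_view = True  # "dictionary changed size during iteration"

# A snapshot tolerates mutation, at the cost of a copy per iteration.
socket_map = {fd: "dispatcher-%d" % fd for fd in range(3)}
for fd, obj in list(socket_map.items()):
    socket_map.pop(fd, None)
```

So dropping the copy is only safe as long as nothing mutates the map during the iteration, which is the premise of this issue.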
[issue30985] Set closing variable in asyncore at close
Changes by Nir Soffer <nsof...@redhat.com>: -- nosy: +haypo ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30985> ___
[issue30985] Set closing variable in asyncore at close
New submission from Nir Soffer: This is an old issue with asyncore - asyncore has a "closing" attribute, but it was never used. Every user has to implement the close-once logic in dispatcher subclasses. Here are typical fixes in user code: - https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/asyncevent.py#L497 - https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/asyncevent.py#L540 Fixing the closing attribute will allow fixing the races during asyncore.poll() and asyncore.poll2(), see https://github.com/python/cpython/pull/2764 -- nosy: +Nir Soffer versions: +Python 2.7, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30985> ___
[issue30977] reduce uuid.UUID() memory footprint
Nir Soffer added the comment: This saves memory, but using str(uuid.uuid4()) requires even less memory. If you really want to save memory, you can keep uuid.uuid4().int. Can you explain why someone would like to have 100 uuid objects instead of 100 strings? What is the advantage of keeping UUID objects around? -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30977> ___
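The three representations mentioned above can be compared directly with sys.getsizeof (a sketch; byte counts vary by Python version and platform, so none are hardcoded here).

```python
# Compare the footprint of a UUID object, its string form, and its int.
import sys
import uuid

u = uuid.uuid4()
as_str = str(u)   # canonical 36-character form
as_int = u.int    # a 128-bit integer

# The UUID object keeps its .int alive, so its real footprint is at
# least the object itself plus the integer it references.
uuid_footprint = sys.getsizeof(u) + sys.getsizeof(as_int)

# All three representations round-trip to the same UUID.
assert uuid.UUID(as_str) == u
assert uuid.UUID(int=as_int) == u
```

Which form is cheapest depends on what the application does with the values; the round-trip shows no information is lost either way.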
[issue30980] Calling asyncore.file_wrapper.close twice may close unrelated file descriptor
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2839 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30980> ___
[issue30980] Calling asyncore.file_wrapper.close twice may close unrelated file descriptor
New submission from Nir Soffer: Commit 4d4c69dc35154a9c21fed1b6b4088e741fbc6ae6 added protection against double close in file_wrapper.close, but the test did not consider the fact that file_wrapper dups the file descriptor, making the test ineffective.

>>> fd1, fd2 = os.pipe()
>>> f = asyncore.file_wrapper(fd1)
>>> os.close(f.fd)
>>> f.close()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.5/asyncore.py", line 621, in close
    os.close(self.fd)
OSError: [Errno 9] Bad file descriptor
>>> f.fd
4
>>> fd3, fd4 = os.pipe()
>>> fd3
4
>>> f.close()
>>> os.close(fd3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor

f.close() closed an unrelated file descriptor. -- messages: 298753 nosy: Nir Soffer, haypo priority: normal severity: normal status: open title: Calling asyncore.file_wrapper.close twice may close unrelated file descriptor versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30980> ___
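One robust way to guard against this is to invalidate the stored fd on the first close, so a second close() can never touch a recycled descriptor number. The wrapper below is a hypothetical standalone sketch of that idea (it does not use asyncore, which is removed in recent Python versions).

```python
# Sketch: a file wrapper whose close() is idempotent by resetting the
# stored fd to -1 before closing, so a recycled fd number is never hit.
import os

class FileWrapper:
    def __init__(self, fd):
        self.fd = os.dup(fd)  # like asyncore.file_wrapper, own a dup

    def close(self):
        if self.fd < 0:
            return            # already closed; never touch self.fd again
        fd, self.fd = self.fd, -1
        os.close(fd)

r, w = os.pipe()
f = FileWrapper(r)
f.close()
f.close()  # second close is a no-op, even if the fd number was reused
os.close(r)
os.close(w)
```

Swapping the fd to -1 *before* calling os.close() also keeps the wrapper safe if os.close() itself raises.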
[issue30931] Race condition in asyncore may access the wrong dispatcher
Nir Soffer added the comment: Adding more info after discussion in github. After polling readable/writable dispatchers, asyncore.poll(2) receives a list of ready file descriptors, and invokes callbacks on the dispatcher objects. If a dispatcher is closed and a new dispatcher is created, the new dispatcher may get the same file descriptor. If the file descriptor was in the ready list returned from poll()/select(), asyncore may try to invoke one of the callbacks (e.g. handle_write) on the new dispatcher.

Here is an example in asyncore.poll():

    r, w, e = select.select(r, w, e, timeout)
    for fd in r:
        obj = map.get(fd)
        if obj is None:
            continue
        read(obj)        # read() may close obj, removing fd from the map,
                         # then create a new one getting the same fd...
    for fd in w:
        obj = map.get(fd)    # this gets the new object from the map,
                             # instead of the closed one
        if obj is None:
            continue
        write(obj)       # invokes write on the wrong socket, which is
                         # not writable
    for fd in e:
        obj = map.get(fd)    # same issue here
        if obj is None:
            continue
        _exception(obj)

asyncore.poll2() has the same issue:

    r = pollster.poll(timeout)
    for fd, flags in r:
        obj = map.get(fd)
        if obj is None:
            continue
        readwrite(obj, flags)    # fd may have been closed and recreated
                                 # in a previous iteration of the loop

This issue is demonstrated in the failing test: https://github.com/python/cpython/pull/2707/commits/5aeb0098d2347984f3a89cf036c305edd2b8382b -- title: Race condition in asyncore wrongly closes channel -> Race condition in asyncore may access the wrong dispatcher ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30931> ___
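One defensive pattern against this fd-recycling race is to snapshot the polled objects and check, before each callback, that the map still holds the *same* object. This is a generic sketch of that guard, not the actual asyncore code or the PR's fix.

```python
# Sketch: skip events for an fd whose dispatcher was replaced between
# poll() and dispatch, using an identity check against a snapshot.
socket_map = {}

class Dispatcher:
    def __init__(self, fd):
        self.fd = fd
        self.events = []
        socket_map[fd] = self

old = Dispatcher(4)
polled = {4: socket_map[4]}   # snapshot taken right after poll()

# A read handler closes the old dispatcher, and a new dispatcher is
# created that happens to get the same fd number.
del socket_map[4]
new = Dispatcher(4)

for fd, obj in polled.items():
    if socket_map.get(fd) is not obj:
        continue              # fd was recycled; drop the stale event
    obj.events.append("write")
```

With the identity check, neither the closed dispatcher nor its unrelated replacement receives the stale "write" event.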
[issue30931] Race condition in asyncore wrongly closes channel
Nir Soffer added the comment: Can you provide a minimal reproducer, or better, add a failing test? -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30931> ___
[issue30914] test_alpn_protocols (test.test_ssl.ThreadedTests) fails on Fedora 26
New submission from Nir Soffer: To reproduce:

checkout https://github.com/nirs/cpython/commit/9648088e6ccd6d0cc04f450f55628fd8eda3784c
mkdir debug
cd debug
../configure --with-pydebug
make
make test
...
==
FAIL: test_alpn_protocols (test.test_ssl.ThreadedTests)
--
Traceback (most recent call last):
  File "/home/nsoffer/buildbot/worker/fedora26_py36/build/Lib/test/test_ssl.py", line 3272, in test_alpn_protocols
    self.assertIsInstance(stats, ssl.SSLError)
AssertionError: {'compression': None, 'cipher': ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), 'peercert': {}, 'client_alpn_protocol': None, 'client_npn_protocol': None, 'version': 'TLSv1.2', 'session_reused': False, 'session': <_ssl.Session object at 0x7f8f65a191f8>, 'server_alpn_protocols': [None], 'server_npn_protocols': [None], 'server_shared_ciphers': [[('ECDHE-ECDSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('ECDHE-ECDSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('ECDHE-RSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('DHE-DSS-AES256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256), ('DHE-DSS-AES128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CHACHA20-POLY1305', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-CCM8', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-CCM', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-AES256-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-AES256-SHA', 'TLSv1.0', 256), ('ECDHE-RSA-AES256-SHA', 'TLSv1.0', 256), ('DHE-RSA-AES256-CCM8', 'TLSv1.2', 256), ('DHE-RSA-AES256-CCM', 'TLSv1.2', 256), ('DHE-RSA-AES256-SHA256', 'TLSv1.2', 256), ('DHE-DSS-AES256-SHA256', 'TLSv1.2', 256), ('DHE-RSA-AES256-SHA', 'SSLv3', 256), ('DHE-DSS-AES256-SHA', 'SSLv3', 256), ('ECDHE-ECDSA-AES128-CCM8', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-CCM', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-AES128-SHA256', 'TLSv1.2', 128), ('ECDHE-ECDSA-AES128-SHA', 'TLSv1.0', 128), ('ECDHE-RSA-AES128-SHA', 'TLSv1.0', 128), ('DHE-RSA-AES128-CCM8', 'TLSv1.2', 128), ('DHE-RSA-AES128-CCM', 'TLSv1.2', 128), ('DHE-RSA-AES128-SHA256', 'TLSv1.2', 128), ('DHE-DSS-AES128-SHA256', 'TLSv1.2', 128), ('DHE-RSA-AES128-SHA', 'SSLv3', 128), ('DHE-DSS-AES128-SHA', 'SSLv3', 128), ('ECDHE-ECDSA-CAMELLIA256-SHA384', 'TLSv1.2', 256), ('ECDHE-RSA-CAMELLIA256-SHA384', 'TLSv1.2', 256), ('ECDHE-ECDSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('ECDHE-RSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CAMELLIA256-SHA256', 'TLSv1.2', 256), ('DHE-DSS-CAMELLIA256-SHA256', 'TLSv1.2', 256), ('DHE-RSA-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-DSS-CAMELLIA128-SHA256', 'TLSv1.2', 128), ('DHE-RSA-CAMELLIA256-SHA', 'SSLv3', 256), ('DHE-DSS-CAMELLIA256-SHA', 'SSLv3', 256), ('DHE-RSA-CAMELLIA128-SHA', 'SSLv3', 128), ('DHE-DSS-CAMELLIA128-SHA', 'SSLv3', 128), ('AES256-GCM-SHA384', 'TLSv1.2', 256), ('AES128-GCM-SHA256', 'TLSv1.2', 128), ('AES256-CCM8', 'TLSv1.2', 256), ('AES256-CCM', 'TLSv1.2', 256), ('AES128-CCM8', 'TLSv1.2', 128), ('AES128-CCM', 'TLSv1.2', 128), ('AES256-SHA256', 'TLSv1.2', 256), ('AES128-SHA256', 'TLSv1.2', 128), ('AES256-SHA', 'SSLv3', 256), ('AES128-SHA', 'SSLv3', 128), ('CAMELLIA256-SHA256', 'TLSv1.2', 256), ('CAMELLIA128-SHA256', 'TLSv1.2', 128), ('CAMELLIA256-SHA', 'SSLv3', 256), ('CAMELLIA128-SHA', 'SSLv3', 128)]]} is not an instance of

Not sure if this is a python issue or a Fedora issue; posting here for now. -- components: Tests messages: 298245 nosy: Nir Soffer priority: normal severity: normal status: open title: test_alpn_protocols (test.test_ssl.ThreadedTests) fails on Fedora 26 versions: Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30914> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Nir Soffer added the comment: I rebased the patch on master (it was created against the legacy git tree on GitHub), and sent a pull request. -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2747 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue29854] Segfault when readline history is more then 2 * history size
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2702 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2701 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2698 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Changes by Nir Soffer <nir...@gmail.com>: -- pull_requests: +2687 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Nir Soffer added the comment: So we have version 0x502 without libedit emulation succeeding on FreeBSD 9.x, and failing on 10.x. I think we are missing something, or maybe the libedit check is wrong. We need results from all builders to do something with this. I think at least for now we want to see readline info from all builders, not only for failed tests. Maybe switch the failing test to run only on Linux for now? -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue30871] Add a "python info" command somewhere to dump versions of all dependencies
Nir Soffer added the comment: I like the idea, may be also useful in https://github.com/sosreport/sos/blob/master/sos/plugins/python.py -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30871> ___
[issue29854] Segfault when readline history is more then 2 * history size
Nir Soffer added the comment: The failures look like libedit failures on OS X, where the history size is ignored. The test is skipped if is_editline is set; we should probably skip on these platforms too. -- nosy: +Nir Soffer ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Nir Soffer added the comment: This issue does not exist on OS X 10.11.6 (the latest my old Mac can install). I tested using an .editrc file:

$ cat ~/.editrc
history size 5

with a history file with 10 items that crashes on Linux using GNU readline. This setting is ignored; items are added to the history file without truncating it to 5 items. I also tested truncating the size using readline.set_history_length(). It works correctly, but this means every application needs to implement its own readline configuration, instead of reusing the system readline configuration. So this bug is relevant only to GNU readline; we need to skip this test when using libedit. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
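A minimal sketch of the per-application workaround discussed above (the readline module's actual API for this is readline.set_history_length()): cap the history length explicitly in the application instead of relying on the "history-size" setting in ~/.inputrc.

```python
# Cap the in-process history length explicitly; the cap is applied
# when write_history_file() saves the history.
import readline

readline.set_history_length(5)

# Entries can still be added freely; only the saved file is truncated.
for i in range(10):
    readline.add_history("cmd-%d" % i)
```

This works with both GNU readline and libedit, but it has to be repeated in every application, which is exactly the drawback noted above.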
[issue29854] Segfault when readline history is more then 2 * history size
Nir Soffer added the comment: I think the issue can be solved in readline or in the code using it, but I don't have more time to dig into this, and I think that python should not crash in this case. I don't have an environment to test Apple editline, so I cannot test this issue. The PR includes a test case now; running the new test on OS X will tell us if this is the case. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
Nir Soffer added the comment: Sure, I'll add news entry and tests. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29854> ___
[issue29854] Segfault when readline history is more then 2 * history size
New submission from Nir Soffer: GNU readline lets the user limit the history size by setting:

$ cat ~/.inputrc
set history-size 1000

So I cooked this test script:

$ cat history.py
from __future__ import print_function
import readline
readline.read_history_file(".history")
print("current_history_length", readline.get_current_history_length())
print("history_length", readline.get_history_length())
print("history_get_item(1)", readline.get_history_item(1))
print("history_get_item(1000)", readline.get_history_item(1000))
input()
readline.write_history_file(".history")

And this history file generator:

$ cat make-history
for i in range(2000):
    print("%04d" % i)

Generating a .history file with 2000 entries:

$ python3 make-history > .history

Finally running the test script:

$ python3 history.py
current_history_length 1000
history_length -1
history_get_item(1) None
history_get_item(1000) None
please crash
Segmentation fault (core dumped)

So we have a few issues here:
- segfault
- history_get_item returns None for both 1 and 1000, although we have 1000 items in history
- history_length is always wrong (-1) instead of the expected value (1000) set in .inputrc

Running with gdb we see:

$ gdb python3
GNU gdb (GDB) Fedora 7.12.1-46.fc25
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu". Type "show configuration" for configuration details.
For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>.
For help, type "help". Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...Reading symbols from /usr/lib/debug/usr/libexec/system-python.debug...done. done.
(gdb) run history.py
Starting program: /usr/bin/python3 history.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
current_history_length 1000
history_length -1
history_get_item(1) None
history_get_item(1000) None
crash?

Program received signal SIGSEGV, Segmentation fault.
0x7fffeff60fab in call_readline (sys_stdin=, sys_stdout=, prompt=) at /usr/src/debug/Python-3.5.2/Modules/readline.c:1281
1281        line = (const char *)history_get(length)->line;
(gdb) list
1276        if (using_libedit_emulation) {
1277            /* handle older 0-based or newer 1-based indexing */
1278            line = (const char *)history_get(length + libedit_history_start - 1)->line;
1279        } else
1280    #endif /* __APPLE__ */
1281        line = (const char *)history_get(length)->line;
1282        else
1283            line = "";
1284        if (strcmp(p, line))
1285            add_history(p);

So we assume that history_get(length) returns non-null when length > 0, but this assumption is not correct. In 2 other usages in Modules/readline.c we validate that the history_get() return value is not null before using it.

If we change the .history contents to 1999 lines, we get:

$ python3 make-history | head -1999 > .history
$ python3 history.py
current_history_length 1000
history_length -1
history_get_item(1) None
history_get_item(1000) 0999
crash?
$ wc -l .history
1000 .history
$ head -1 .history
1000
$ tail -1 .history
crash?

So now it does not crash, but item 1 is still None.

Trying again with a history file with 1000 entries:

$ python3 make-history | head -1000 > .history
$ python3 history.py
current_history_length 1000
history_length -1
history_get_item(1)
history_get_item(1000) 0999
looks fine!
$ wc -l .history
1000 .history
$ head -1 history
head: cannot open 'history' for reading: No such file or directory
$ head -1 .history
0001
$ tail -1 .history
looks fine!

Finally trying with 1001 items:

$ python3 make-history | head -1001 > .history
$ python3 history.py
current_history_length 1000
history_length -1
history_get_item(1) None
history_get_item(1000) 0999

And item 1 is wrong. I got the same results with python 2.7, 3.5 and master on fedora 25. The root cause seems to be a readline bug when the history file is bigger than the history-size in .inputrc, but I could not find readline library documentation yet, so I don't know if the issue is incorrect usage of the readline APIs, or a bug in readline. -- components: Extension Modules messages: 289865 nosy: nirs priority: normal severity: normal status: open title: Segfault
[issue26180] multiprocessing.util._afterfork_registry leak in threaded environment
Changes by Nir Soffer <nir...@gmail.com>: -- nosy: +nirs ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26180> ___
[issue25551] Event's test_reset_internal_locks too fragile
Changes by Nir Soffer <nir...@gmail.com>: -- keywords: +patch Added file: http://bugs.python.org/file40941/0001-Issue-254074-Test-condition-behavior-instead-of-inte.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25551> ___
[issue25551] Event's test_reset_internal_locks too fragile
New submission from Nir Soffer: test_reset_internal_locks is looking at Event's _cond._lock - this makes it harder to change internal details of the Condition object and makes the test fragile. We should test the condition behavior instead. -- components: Tests messages: 254074 nosy: nirs priority: normal severity: normal status: open title: Event's test_reset_internal_locks too fragile type: enhancement versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25551> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Changes by Nir Soffer <nir...@gmail.com>: -- nosy: +haypo, pitrou ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Changes by Nir Soffer <nir...@gmail.com>: -- type: -> behavior ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Nir Soffer added the comment: The commit hash in the previous message is a git commit from the github mirror: https://github.com/python/cpython/commit/8cb1ccbb8b9ed01c26d2c5be7cc86682e525dce7 -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
New submission from Nir Soffer: When using threading.Lock, threading.Condition._is_owned assumes that the calling thread owns the condition lock if the lock cannot be acquired. This check is completely wrong if another thread owns the lock.

>>> cond = threading.Condition(threading.Lock())
>>> threading.Thread(target=cond.acquire).start()
>>> cond._is_owned()
True
>>> cond.notify()
>>> cond.wait(0)
False

Careful users that acquire the condition before calling wait() or notify() are not affected. Careless users that should have been warned by a RuntimeError are. Tested on Python 2.7, 3.4.2 and 3.6.0a0. -- components: Library (Lib) messages: 253703 nosy: nirs priority: normal severity: normal status: open title: threading.Condition._is_owned() is wrong when using threading.Lock versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
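An ownership check that does not rely on "acquire failed, therefore I must own it" has to record the owning thread explicitly, since a plain Lock has no owner concept. The wrapper below is a hypothetical sketch of that idea, not the stdlib fix.

```python
# Sketch: a lock wrapper that records the owner thread, so ownership
# can be tested correctly even when another thread holds the lock.
import threading

class OwnedLock:
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self):
        self._lock.acquire()
        self._owner = threading.get_ident()

    def release(self):
        self._owner = None
        self._lock.release()

    def _is_owned(self):
        return self._owner == threading.get_ident()

lock = OwnedLock()

# Another thread takes the lock...
t = threading.Thread(target=lock.acquire)
t.start()
t.join()

# ...and the current thread correctly reports that it does NOT own it,
# unlike Condition._is_owned() with a plain Lock in the session above.
assert not lock._is_owned()
lock.release()  # a plain Lock may be released from any thread
```

RLock avoids the problem because it already tracks its owner; the flaw only bites when a plain Lock is passed to Condition.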
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Nir Soffer added the comment: The issue was introduced in this commit:

commit 8cb1ccbb8b9ed01c26d2c5be7cc86682e525dce7
Author: Guido van Rossum <gu...@python.org>
Date: Thu Apr 9 22:01:42 1998 +

    New Java-style threading module. The doc strings are in a separate
    module.

-- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25516] threading.Condition._is_owned() is wrong when using threading.Lock
Changes by Nir Soffer <nir...@gmail.com>: -- keywords: +patch Added file: http://bugs.python.org/file40900/0001-Issue-25516-threading.Condition._is_owned-fails-when.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25516> ___
[issue25362] Use with instead of try finally
New submission from Nir Soffer: Using "with" is clearer and less error prone. -- components: Library (Lib) files: 0001-Use-with-instead-of-try-finally.patch keywords: patch messages: 252716 nosy: nirs priority: normal severity: normal status: open title: Use with instead of try finally versions: Python 3.6 Added file: http://bugs.python.org/file40740/0001-Use-with-instead-of-try-finally.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25362> ___
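The kind of transformation the patch applies can be illustrated with a lock (an illustration of the pattern, not a hunk from the patch): the two forms below are equivalent, but the with form cannot forget the release.

```python
# try/finally vs. with for releasing a lock.
import threading

lock = threading.Lock()

# before: explicit acquire/release with try/finally
lock.acquire()
try:
    shared = 1
finally:
    lock.release()

# after: the context manager acquires and releases for us
with lock:
    shared = 2
```

The with form also releases correctly if the body raises, which is the error-prone part of hand-written try/finally.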
[issue25362] In threading module, use with instead of try finally
Changes by Nir Soffer <nir...@gmail.com>: Added file: http://bugs.python.org/file40741/0001-In-threading-module-use-with-instead-of-try-finally.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25362> ___
[issue25298] Add lock and rlock weakref tests
Changes by Nir Soffer <nir...@gmail.com>: -- nosy: +pitrou ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25298> ___
[issue25319] Keep lock type when reseting internal locks
Changes by Nir Soffer <nir...@gmail.com>: -- nosy: +gregory.p.smith ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25319> ___
[issue25319] Keep lock type when reseting internal locks
New submission from Nir Soffer: When Event._reset_internal_locks was called after fork, it used to reinitialize its condition without arguments, using RLock instead of Lock. -- components: Library (Lib) files: 0001-Keep-lock-type-when-reseting-internal-locks.patch keywords: patch messages: 252326 nosy: nirs priority: normal severity: normal status: open title: Keep lock type when reseting internal locks type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file40686/0001-Keep-lock-type-when-reseting-internal-locks.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25319> ___
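The idea behind the fix can be sketched with a simplified, hypothetical Event: remember which lock type was used at construction, so the post-fork reset recreates the same type instead of falling back to Condition's default RLock.

```python
# Sketch: preserve the lock type across _reset_internal_locks().
import threading

class Event:
    def __init__(self, lock_factory=threading.Lock):
        self._lock_factory = lock_factory
        self._cond = threading.Condition(lock_factory())
        self._flag = False

    def _reset_internal_locks(self):
        # Recreate the condition with the same lock type, not the
        # RLock that a bare threading.Condition() would use.
        self._cond = threading.Condition(self._lock_factory())

e = Event()
lock_type_before = type(e._cond._lock)
e._reset_internal_locks()
lock_type_after = type(e._cond._lock)
```

Here the reset keeps a plain Lock a plain Lock; without the factory, calling Condition() with no argument would silently switch it to an RLock.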
[issue25319] Keep lock type when reseting internal locks
Changes by Nir Soffer <nir...@gmail.com>: Added file: http://bugs.python.org/file40688/0001-Keep-lock-type-when-reseting-internal-locks.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25319> ___
[issue25319] Keep lock type when reseting internal locks
Changes by Nir Soffer <nir...@gmail.com>: Added file: http://bugs.python.org/file40687/0001-Keep-lock-type-when-reseting-internal-locks-2.7.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25319> ___
[issue25298] Add lock and rlock weakref tests
Changes by Nir Soffer <nir...@gmail.com>: -- components: Tests files: 0001-Add-lock-rlock-weakref-tests.patch keywords: patch nosy: nirs priority: normal severity: normal status: open title: Add lock and rlock weakref tests versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file40655/0001-Add-lock-rlock-weakref-tests.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25298> ___
[issue25298] Add lock and rlock weakref tests
New submission from Nir Soffer: The same patch also works for 2.7. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25298> ___
[issue25249] Unneeded and unsafe mkstemp replacement in test_subprocess.py
New submission from Nir Soffer: The module defines an unsafe replacement for tempfile.mkstemp, used if it is not available. This function is available in both the master and 2.7 branches. -- components: Tests files: 0001-Remove-unneeded-and-unsafe-mkstemp-replacement.patch keywords: patch messages: 251720 nosy: nirs priority: normal severity: normal status: open title: Unneeded and unsafe mkstemp replacement in test_subprocess.py versions: Python 2.7, Python 3.6 Added file: http://bugs.python.org/file40599/0001-Remove-unneeded-and-unsafe-mkstemp-replacement.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25249> ___
[issue25249] Unneeded and unsafe mkstemp replacement in test_subprocess.py
Changes by Nir Soffer <nir...@gmail.com>: Added file: http://bugs.python.org/file40601/0001-Remove-unneeded-and-unsafe-mkstemp-replacement-2.7.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25249> ___
[issue6721] Locks in the standard library should be sanitized on fork
Changes by Nir Soffer nir...@gmail.com: -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6721 ___
[issue22697] Deadlock with writing to stderr from forked process
Nir Soffer added the comment: This is a duplicate of http://bugs.python.org/issue6721 -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue22697 ___
[issue1762561] unable to serialize Infinity or NaN on ARM using marshal
Nir Soffer nir...@gmail.com added the comment: As someone who has to develop on ARM OABI, I find this won't-fix policy rather frustrating. If you happen to need this patch on 2.7, this is the same patch as arm-float2.diff, which applies cleanly to release 2.7.2. Changes from arm-float2.diff:
- Remove whitespace-only changes
- Replace tabs with spaces
- Fix indentation in changed code
Enjoy. -- nosy: +nirs Added file: http://bugs.python.org/file24818/arm-oabi-float-2.7.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1762561 ___
[issue4277] asynchat's handle_error inconsistency
Nir Soffer nir...@gmail.com added the comment: The idea is good, but it seems that the error handling should be inlined into initiate_send. Also, those 3 special exceptions should be defined once in the module instead of being repeated. -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4277 ___
[issue1068268] subprocess is not EINTR-safe
Changes by Nir Soffer nir...@gmail.com: -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1068268 ___
[issue2637] urllib.quote() escapes characters unnecessarily and contrary to docs
Nir Soffer nir...@gmail.com added the comment: Senthil said: "The way to handle this issue would be add these characters '%/:=?~#+!$,;'@()*[]' to always_safe list." This is wrong - for example, '=?' are NOT safe when quoting parameters for a query string. This would break existing code that assumes the default safe parameter. Other characters may be unsafe in other parts of the url - I did not check which, and I don't have time to check. The current default (safe='/') is the best option - it works correctly in most cases, and at worst escapes some characters that are safe in a particular use case. Since only the user knows the context, the user should pass safe characters to the function. If you don't specify anything, the function should be as safe as possible for the worst use case. If you want to add characters to the default safe list, you have to make sure that the function will not break for common use cases. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2637 ___
[issue2637] urllib.quote() escapes characters unnecessarily and contrary to docs
Nir Soffer nir...@gmail.com added the comment: Here is one example of code that would break if the safe parameter were changed in the careless way mentioned here (look for url_encode): http://dev.pocoo.org/projects/werkzeug/browser/werkzeug/urls.py#L112 I'm sure we can find similar code in every web application. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2637 ___
[issue2637] urllib.quote() escapes characters unnecessarily and contrary to docs
Nir Soffer nir...@gmail.com added the comment: You can control what is safe in your particular context using the safe keyword argument. How do you want to support unicode? You must decide which character encoding to use, which depends on how the server side decodes the url. Just document the fact that this function does not accept unicode. -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2637 ___
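A short illustration of the point above, using Python 3's urllib.parse.quote (the successor of urllib.quote): whether a character like '=' is safe depends entirely on which part of the URL you are quoting, so only the caller can decide, via the safe argument.

```python
from urllib.parse import quote

# Default safe='/' - fine for path segments:
assert quote("/some path/") == "/some%20path/"

# '=' and '&' must be escaped when quoting a single query *value*,
# otherwise they would change the query structure:
assert quote("a=b&c", safe="") == "a%3Db%26c"

# ...but must be kept when quoting a whole query string:
assert quote("a=b&c=d", safe="=&") == "a=b&c=d"
```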
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: handle_expt is documented to be called when there is OOB data. However, handle_expt_event is not documented, and according to the framework design as I see it, it simply means the socket has an exceptional condition when select returns. On unix, this means there is OOB data, and on Windows, it means there is some error. This works exactly the same as handle_read_event and handle_write_event - they may be called on a connection refused error. Checking for errors in handle_expt_event is the right thing to do, and allows you to avoid the ugly checks and double try..except in _exception. If you want handle_foo_event to be called only on foo events, then it will have nothing to do except call handle_foo. This is actually the case now in handle_expt_event. I don't see any advantage in this design change. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: I was wrong about handle_connect_event - it is called only from the dispatcher, so it will not break 3rd party dispatchers. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: This is asyncore-fix-refused-3.patch with some fixes:
1. Call handle_close instead of the non-existing handle_close_event
2. Remove unneeded handle_close_event in test classes
3. Revert removal of handle_expt_event in test classes - it is not clear why it was removed in the previous patch
Tested on Mac OS X 10.5. -- Added file: http://bugs.python.org/file14617/asyncore-fix-refused-4.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: I tested asyncore_fix_refused-3.patch on Mac OS X 10.5 - all asyncore and asynchat tests pass. There is one minor issue - _exception calls the non-existing handle_close_event instead of handle_close. However, looking again at the code I think that it is ugly and wrong - handle_expt_event should handle the low level expt event called from select, allowing third party dispatchers to override the behavior as needed. Another issue - lately, a new event was added - handle_connect_event - this is wrong! There is no such low level event. handle_connect is a high level event, implied by the first read or write on the connecting socket. This event will break 3rd party dispatchers that do not implement it, and is not documented. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: I'll check the patch this week. The asyncore framework has low level events - handle_read_event, handle_write_event and handle_expt_event - these events are not used for reading, writing and OOB - they are just responsible for calling the high level events. The high level events - handle_connect, handle_accept, handle_read, handle_write, handle_close and handle_expt - should be used only for specific events. I don't see any problem in checking for errors in handle_expt_event; it works just like handle_read_event, which calls handle_connect. This design allows you to replace the dispatcher with your own dispatcher class, implementing only the low level events. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
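The two-level design described above can be sketched without asyncore itself. The names below follow asyncore's conventions, but the classes are purely illustrative: a low-level handle_*_event method translates select() readiness into high-level callbacks, and the first readiness on a connecting socket implies "connected".

```python
class BaseDispatcher:
    """Illustrative sketch of asyncore's two-level event design."""
    connected = False

    # Low-level event, called by the select() loop:
    def handle_read_event(self):
        if not self.connected:
            # First readiness on a connecting socket implies "connected".
            self.connected = True
            self.handle_connect()
        self.handle_read()

    # High-level events, meant to be overridden by applications:
    def handle_connect(self):
        pass

    def handle_read(self):
        pass


class Recorder(BaseDispatcher):
    def __init__(self):
        self.events = []

    def handle_connect(self):
        self.events.append("connect")

    def handle_read(self):
        self.events.append("read")


d = Recorder()
d.handle_read_event()
d.handle_read_event()
# d.events is now ["connect", "read", "read"]
```

A custom dispatcher only has to provide the low-level handle_*_event interface; this is the drop-in property the comment argues for.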
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: I have a big problem with asyncore_fix_refused.patch - it assumes that a dispatcher has a socket attribute, which can be used with getsockopt(). This is true for the default dispatcher class implemented in asyncore, but won't work with file_dispatcher, or 3rd party dispatcher classes. The framework should let you drop in your own dispatcher class that implements a minimal interface - handle_x_event, writable, readable etc. What are the issues with asyncore-handle-connect-event-3.patch on Windows? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: Tested on Ubuntu Linux 9.04. The tests will probably fail on Windows, since connection refused is detected through handle_expt_event, and not in handle_read_event. I hope someone on Windows will fix this :-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: The first fix reverted to the 2.5 behavior, where self.connected is false when handle_connect is called. This behavior is a little stupid - why call handle_connect if the socket is not really connected? This fix ensures that handle_connect is called only if the socket is connected - just like it is done in handle_write_event. Now it does not matter whether you set the connected flag before or after handle_connect. Tested in trunk on Mac OS X. -- Added file: http://bugs.python.org/file14561/asycore-handle-connect-event-2.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: This version also fixes handle_expt_event, so the connection refused error is handled the same way on Windows. -- Added file: http://bugs.python.org/file14562/asycore-handle-connect-event-3.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue6550] asyncore incorrect failure when connection is refused and using async_chat channel
Nir Soffer nir...@gmail.com added the comment: The patch is tested with release26-maint and trunk. -- versions: +Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6550 ___
[issue5262] PythonLauncher considered harmfull
Nir Soffer nir...@gmail.com added the comment: I also think it should be removed. Opening a file should run it only if it is executable. -- nosy: +nirs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5262 ___
[issue1123] split(None, maxsplit) does not strip whitespace correctly
Nir Soffer added the comment: There is a problem only when maxsplit is smaller than the available splits. In other cases, the docs and the behavior match. __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1123 __
[issue1123] split(None, maxsplit) does not strip whitespace correctly
Nir Soffer added the comment: I did not look into the source, but obviously there is stripping of leading and trailing whitespace. When you specify a separator you get:

>>> '  '.split(' ')
['', '', '']
>>> ' a b '.split(' ')
['', 'a', 'b', '']

So one would expect to get this without stripping:

>>> ' a b '.split()
['', 'a', 'b', '']

But you get this:

>>> ' a b '.split()
['a', 'b']

So the documentation is correct. __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1123 __
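The behavior discussed above can be checked directly; these results were verified against CPython:

```python
# With an explicit separator, empty strings mark leading/trailing matches:
assert '  '.split(' ') == ['', '', '']
assert ' a b '.split(' ') == ['', 'a', 'b', '']

# With sep=None, leading and trailing whitespace is stripped first:
assert ' a b '.split() == ['a', 'b']

# ...unless maxsplit is exhausted before the trailing whitespace is
# reached - the remainder, including its trailing '\n', is kept as-is:
assert 'k: v\n'.split(None, 1) == ['k:', 'v\n']
assert 'k: v\n'.split(None, 2) == ['k:', 'v']
```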
[issue1125] bytes.split shold have same interface as str.split, or different name
Nir Soffer added the comment: Why bytes should not use a default whitespace split behavior as str? __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1125 __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1125] bytes.split shold have same interface as str.split, or different name
New submission from Nir Soffer:

>>> b'foo bar'.split()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: split() takes at least 1 argument (0 given)
>>> b'foo bar'.split(None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: expected an object with the buffer interface

str.split and bytes.split should have the same interface, or different names. -- components: Library (Lib) messages: 55723 nosy: nirs severity: normal status: open title: bytes.split shold have same interface as str.split, or different name versions: Python 3.0 __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1125 __
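For reference, this inconsistency was later resolved: in modern Python 3, bytes.split accepts no argument (or None) and splits on runs of ASCII whitespace, mirroring str.split. A quick check:

```python
# bytes.split now mirrors str.split's whitespace-splitting default:
assert b'foo  bar'.split() == [b'foo', b'bar']
assert b'foo  bar'.split(None) == [b'foo', b'bar']
assert 'foo  bar'.split() == ['foo', 'bar']
```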
[issue1123] split(None, maxsplit) does not strip whitespace correctly
Nir Soffer added the comment: typo in the title -- title: split(None, maxplit) does not strip whitespace correctly -> split(None, maxsplit) does not strip whitespace correctly __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1123 __
[issue1123] split(None, maxplit) does not strip whitespace correctly
New submission from Nir Soffer: The string object .split docs say (http://docs.python.org/lib/string-methods.html): "If sep is not specified or is None, a different splitting algorithm is applied. First, whitespace characters (spaces, tabs, newlines, returns, and formfeeds) are stripped from both ends." If the maxsplit argument is set and is smaller than the number of possible parts, whitespace is not removed. Examples:

>>> 'k: v\n'.split(None, 1)
['k:', 'v\n']

Expected: ['k:', 'v']

>>> u'k: v\n'.split(None, 1)
[u'k:', u'v\n']

Expected: [u'k:', u'v']

With larger values of maxsplit, it works correctly:

>>> 'k: v\n'.split(None, 2)
['k:', 'v']
>>> u'k: v\n'.split(None, 2)
[u'k:', u'v']

This looks like an implementation bug, because it does not make sense that the stripping depends on the maxsplit argument, and it would be hard to explain such behavior. Maybe the stripping should be removed in Python 3? It does not make sense to strip a string behind your back when you want to split it, and the caller can easily strip the string if needed. -- components: Library (Lib) messages: 55720 nosy: nirs severity: normal status: open title: split(None, maxplit) does not strip whitespace correctly versions: Python 2.4, Python 2.5, Python 3.0 __ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1123 __