[ python-Bugs-1680312 ] httplib fails to parse response on HEAD request
Bugs item #1680312, was opened at 2007-03-14 04:33 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680312&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Duplicate Priority: 7 Private: No Submitted By: Patrick Altman (altman) Assigned to: Nobody/Anonymous (nobody) Summary: httplib fails to parse response on HEAD request Initial Comment: When attempting to get the response headers to a HEAD request, httplib hangs and then eventually throws an exception. Details can be found in the following post: http://groups.google.com/group/comp.lang.python/browse_thread/thread/ff9fa7c5e6dbea7f/ Thanks, Patrick Altman [EMAIL PROTECTED] -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 07:00 Message: Logged In: YES user_id=849994 Originator: NO This is a duplicate of #1486335. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680312&group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
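The hang happens because httplib waits for a body that a HEAD response never sends. A minimal sketch of the behaviour that eventually fixed this, using the modern http.client (Python 3) rather than the Python 2 httplib discussed in the report; the one-shot server and all names are invented for the example:

```python
import http.client
import socket
import threading

def serve_head_response(srv):
    # One-shot server: send headers only (no body), then close.
    conn, _ = srv.accept()
    conn.recv(1024)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # pick any free port
srv.listen(1)
t = threading.Thread(target=serve_head_response, args=(srv,))
t.start()

conn = http.client.HTTPConnection("127.0.0.1", srv.getsockname()[1])
conn.request("HEAD", "/")             # the method is recorded on the response
resp = conn.getresponse()
body = resp.read()                    # b"": HEAD means "do not wait for a body"
t.join()
srv.close()
```

Because the client knows the request method was HEAD, `read()` returns immediately instead of blocking on the advertised 10 bytes, which is exactly the situation the reporter hit.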
[ python-Bugs-767111 ] AttributeError thrown by urllib.open_http
Bugs item #767111, was opened at 2003-07-07 12:52 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=767111&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Closed Resolution: Fixed Priority: 6 Private: No Submitted By: Stuart Bishop (zenzen) Assigned to: Nobody/Anonymous (nobody) Summary: AttributeError thrown by urllib.open_http Initial Comment: In 2.3b2, looks like an error condition isn't being picked up on line 300 or 301 of urllib.py. The code that triggered this traceback was simply: url = urllib.urlopen(action, data) Traceback (most recent call last): File "autospamrep.py", line 170, in ? current_page = handle_spamcop_page(current_page) File "autospamrep.py", line 140, in handle_spamcop_page url = urllib.urlopen(action, data) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 78, in urlopen return opener.open(url, data) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 183, in open return getattr(self, name)(url, data) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 308, in open_http return self.http_error(url, fp, errcode, errmsg, headers, data) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 323, in http_error return self.http_error_default(url, fp, errcode, errmsg, headers) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 551, in http_error_default return addinfourl(fp, headers, "http:" + url) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 837, in __init__ addbase.__init__(self, fp) File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/urllib.py", line 787, in __init__ self.read = self.fp.read AttributeError: 'NoneType' object has no attribute 'read' -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:28 Message: Logged In: YES user_id=849994 Originator: NO Fixed in rev. 54376, 54377 (2.5). Raises IOError now. -- Comment By: Atul Varma (varmaa) Date: 2007-02-25 00:58 Message: Logged In: YES user_id=863202 Originator: NO I have attempted to fix this bug in patch 1668132: http://sourceforge.net/tracker/index.php?func=detail&aid=1668132&group_id=5470&atid=305470 -- Comment By: Georg Brandl (birkenfeld) Date: 2005-12-15 22:11 Message: Logged In: YES user_id=1188172 Further information can be found in #1163401, which has been closed as a duplicate. -- Comment By: A.M. Kuchling (akuchling) Date: 2005-01-07 12:39 Message: Logged In: YES user_id=11375 No, not at this point in time. Unassigning (or, if this bug is on the radar for 2.3.5/2.4.1, I can find time to work on it). -- Comment By: Raymond Hettinger (rhettinger) Date: 2005-01-07 01:37 Message: Logged In: YES user_id=80475 Andrew, are you still working on this one? -- Comment By: Rob Probin (robzed) Date: 2004-03-18 23:43 Message: Logged In: YES user_id=1000470 The file pointer (fp) is None (inside urllib) from httplib. This appears to be caused by a BadStatusLine exception in getreply() (line 1016, httplib). This sets self.file to self._conn.sock.makefile('rb', 0), then does a self.close(), which sets self.file to None. Being new to this piece of code, I'm not sure whether it's urllib assuming the file isn't going to be closed, or the BadStatusLine exception clearing the file.
Certainly it looks like the error -1 is not being trapped by open_http() in urllib upon calling h.getreply(), which assumes that the file still exists even in an error condition. It may be a coincidence, but it appears to occur more when a web browser on the same machine is refreshing. Regards Rob -- Comment By: Rob Probin (robzed) Date: 2004-03-17 22:24 Message: Logged In: YES
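Rob's analysis points at httplib raising BadStatusLine and closing the file object behind urllib's back. A small repro of the triggering condition with the modern http.client (Python 3): a server that answers with something that is not an HTTP status line. The one-shot server and all names are invented for the example:

```python
import http.client
import socket
import threading

def serve_bad_status(srv):
    # Reply with bytes that are not a valid "HTTP/x.y code reason" line.
    conn, _ = srv.accept()
    conn.recv(1024)
    conn.sendall(b"totally-not-http\r\n\r\n")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=serve_bad_status, args=(srv,))
t.start()

conn = http.client.HTTPConnection("127.0.0.1", srv.getsockname()[1])
conn.request("GET", "/")
try:
    conn.getresponse()
    err = None
except http.client.BadStatusLine as exc:   # the exception Rob identified
    err = exc
t.join()
srv.close()
```

In the old urllib path this exception was swallowed inside getreply(), leaving fp as None; the modern client surfaces it directly.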
[ python-Bugs-1680230 ] urllib.urlopen() raises AttributeError
Bugs item #1680230, was opened at 2007-03-13 23:09 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680230&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Björn Lindqvist (sonderblade) Assigned to: Nobody/Anonymous (nobody) Summary: urllib.urlopen() raises AttributeError Initial Comment: When you connect urlopen() to a socket that does not send any data, it produces an AttributeError. File "/lib/python2.6/urllib.py", line 608, in http_error_default return addinfourl(fp, headers, "http:" + url) File "/lib/python2.6/urllib.py", line 951, in __init__ addbase.__init__(self, fp) File "/lib/python2.6/urllib.py", line 898, in __init__ self.read = self.fp.read AttributeError: 'NoneType' object has no attribute 'read' Raising an exception is OK (I think?), but it should be an IOError instead of an AttributeError. See the attached patch for a test case for the bug. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:28 Message: Logged In: YES user_id=849994 Originator: NO Committed fix and your test case in rev. 54376, 54377 (2.5). -- Comment By: Björn Lindqvist (sonderblade) Date: 2007-03-13 23:13 Message: Logged In: YES user_id=51702 Originator: YES File Added: test-urllib-tc-attr-err.patch -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680230&group_id=5470
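The committed fix (rev. 54376, 54377) made urllib raise IOError instead of tripping over None. A hypothetical sketch of the general shape of such a guard; the helper name and signature are invented for illustration and are not the actual committed diff:

```python
def checked_response_file(fp, url):
    # Hypothetical guard in the spirit of the fix: fail loudly with
    # IOError when httplib hands back no file object at all, instead
    # of letting attribute access on None raise AttributeError later.
    if fp is None:
        raise IOError("http error", "no response file object", url)
    return fp

try:
    checked_response_file(None, "http://example.invalid/")
    raised = False
except IOError:
    raised = True
```

The point of the change is exactly what the reporter asked for: callers catching IOError/socket errors around urlopen() no longer need a separate case for AttributeError.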
[ python-Bugs-1429783 ] urllib.py: AttributeError on BadStatusLine
Bugs item #1429783, was opened at 2006-02-11 18:15 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1429783&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: kxroberto (kxroberto) Assigned to: Nobody/Anonymous (nobody) Summary: urllib.py: AttributeError on BadStatusLine Initial Comment: PythonWin 2.3.5 (#62, Feb 8 2005, 16:23:02) [MSC v.1200 32 bit (Intel)] on win32. In httplib, errcode -1 and file=self._conn.sock.makefile('rb', 0) are returned on BadStatusLine: except BadStatusLine, e: ### hmm. if getresponse() ever closes the socket on a bad request, ### then we are going to have problems with self.sock ### should we keep this behavior? do people use it? # keep the socket open (as a file), and return it self.file = self._conn.sock.makefile('rb', 0) # close our socket -- we want to restart after any protocol error self.close() self.headers = None return -1, e.line, None fp = h.getfile() delivers None in urllib.URLopener.open_http, and this is the traceback leading to an AttributeError: Traceback (most recent call last): File "<interactive input>", line 1, in ?
File "C:\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "C:\Python23\lib\urllib.py", line 306, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "C:\Python23\lib\urllib.py", line 319, in http_error result = method(url, fp, errcode, errmsg, headers) File "C:\Python23\lib\urllib.py", line 584, in http_error_301 return self.http_error_302(url, fp, errcode, errmsg, headers, data) File "C:\Python23\lib\urllib.py", line 565, in http_error_302 data) File "C:\Python23\lib\urllib.py", line 580, in redirect_internal return self.open(newurl) File "C:\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "C:\Python23\lib\urllib.py", line 306, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "C:\Python23\lib\urllib.py", line 323, in http_error return self.http_error_default(url, fp, errcode, errmsg, headers) File "C:\Python23\lib\urllib.py", line 327, in http_error_default void = fp.read() AttributeError: 'NoneType' object has no attribute 'read' As I get this error rarely, I cannot reproduce exactly how self._conn.sock.makefile('rb', 0) delivers None in that case. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:29 Message: Logged In: YES user_id=849994 Originator: NO Fixed that bug finally in rev. 54376, 54377 (2.5). -- Comment By: Neal Norwitz (nnorwitz) Date: 2006-02-12 06:56 Message: Logged In: YES user_id=33168 I should add that the other bug is still open. -- Comment By: Neal Norwitz (nnorwitz) Date: 2006-02-12 06:55 Message: Logged In: YES user_id=33168 This may be a duplicate of a bug submitted by Bram Cohen. It was a couple of years ago and I don't remember any other details. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1429783&group_id=5470
[ python-Bugs-1593751 ] poor urllib error handling
Bugs item #1593751, was opened at 2006-11-09 21:04 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1593751&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: poor urllib error handling Initial Comment: I set up a simple server that returns an empty response. from socket import * s = socket() s.bind((, )) while 1: x, c = s.accept(); print c; x.recv(1000); x.close() ... Pointing urllib at this gives a traceback: Python 2.6a0 (trunk:52099M, Oct 3 2006, 09:59:17) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import urllib urllib.urlopen('http://localhost:/') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/guido/p/Lib/urllib.py", line 82, in urlopen return opener.open(url) File "/home/guido/p/Lib/urllib.py", line 190, in open return getattr(self, name)(url) File "/home/guido/p/Lib/urllib.py", line 334, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "/home/guido/p/Lib/urllib.py", line 351, in http_error return self.http_error_default(url, fp, errcode, errmsg, headers) File "/home/guido/p/Lib/urllib.py", line 608, in http_error_default return addinfourl(fp, headers, "http:" + url) File "/home/guido/p/Lib/urllib.py", line 951, in __init__ addbase.__init__(self, fp) File "/home/guido/p/Lib/urllib.py", line 898, in __init__ self.read = self.fp.read AttributeError: 'NoneType' object has no attribute 'read' I can repeat this with 2.2.3 and 2.4.3 as well (don't have 2.3 around for testing).
The direct cause of the problem is that h.getfile() on line 329 of urllib.py (in head of trunk) returns None. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:30 Message: Logged In: YES user_id=849994 Originator: NO Finally fixed in rev. 54376, 54377 (2.5). -- Comment By: Robert Winder (robertwinder) Date: 2006-12-15 16:15 Message: Logged In: YES user_id=195085 Originator: NO Same error handling with 2.3. Suggested fix doesn't work and gives: AttributeError: addinfourl instance has no attribute 'read' -- Comment By: Robert Carr (racarr) Date: 2006-12-05 15:26 Message: Logged In: YES user_id=1649655 Originator: NO Fix? Index: urllib.py === --- urllib.py (revision 52918) +++ urllib.py (working copy) @@ -895,8 +895,10 @@ def __init__(self, fp): self.fp = fp -self.read = self.fp.read -self.readline = self.fp.readline + try: + self.read = self.fp.read + self.readline = self.fp.readline + except: print "File handler is none" if hasattr(self.fp, "readlines"): self.readlines = self.fp.readlines if hasattr(self.fp, "fileno"): self.fileno = self.fp.fileno -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1593751&group_id=5470
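Guido's repro server accepts a connection and closes it without sending a single byte. A runnable equivalent against the modern http.client (Python 3), where this case now raises RemoteDisconnected (a BadStatusLine subclass) instead of the AttributeError above; the one-shot server and names are invented for the example:

```python
import http.client
import socket
import threading

def close_without_reply(srv):
    # Accept, read the request, send nothing, close: an empty response.
    conn, _ = srv.accept()
    conn.recv(1000)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=close_without_reply, args=(srv,))
t.start()

conn = http.client.HTTPConnection("127.0.0.1", srv.getsockname()[1])
conn.request("GET", "/")
try:
    conn.getresponse()
    err = None
except http.client.RemoteDisconnected as exc:   # BadStatusLine subclass
    err = exc
t.join()
srv.close()
```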
[ python-Bugs-855819 ] urllib does not handle Connection reset
Bugs item #855819, was opened at 2003-12-07 16:59 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=855819&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Stefan Fleiter (fleiter) Assigned to: Nobody/Anonymous (nobody) Summary: urllib does not handle Connection reset Initial Comment: Python 2.2.3+ (#1, Nov 18 2003, 01:16:59) [GCC 3.3.2 (Debian)] on linux2 and Python 2.3.3c1 (#2, Dec 6 2003, 16:44:56) [GCC 3.3.3 20031203 (prerelease) (Debian)] on linux2 Server which does reset Connection: = import SocketServer class RequestHandler(SocketServer.BaseRequestHandler): def handle(self): self.request.send("") server = SocketServer.TCPServer(("localhost", 2000), RequestHandler) server.serve_forever() urllib-Code: === import urllib f = urllib.urlopen("http://localhost:2000") Traceback: === Traceback (most recent call last): File "url.py", line 4, in ? f = urllib.urlopen("http://localhost:2000") File "/usr/lib/python2.2/urllib.py", line 73, in urlopen return _urlopener.open(url) File "/usr/lib/python2.2/urllib.py", line 178, in open return getattr(self, name)(url) File "/usr/lib/python2.2/urllib.py", line 301, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "/usr/lib/python2.2/urllib.py", line 318, in http_error return self.http_error_default(url, fp, errcode, errmsg, headers) File "/usr/lib/python2.2/urllib.py", line 546, in http_error_default return addinfourl(fp, headers, "http:" + url) File "/usr/lib/python2.2/urllib.py", line 824, in __init__ addbase.__init__(self, fp) File "/usr/lib/python2.2/urllib.py", line 778, in __init__ self.read = self.fp.read The cause seems to be that urllib.addbase depends on the fp argument being a valid socket, while fp = h.getfile() in open_http sets it to None because in httplib.HTTP.getreply() the BadStatusLine exception handling was triggered. urllib2 does handle this right. Thanks for reading all of this. :-) -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:31 Message: Logged In: YES user_id=849994 Originator: NO Finally fixed in rev. 54376, 54377 (2.5). Now raises IOError. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=855819&group_id=5470
[ python-Bugs-891832 ] commands module doesn't support background commands
Bugs item #891832, was opened at 2004-02-06 14:57 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891832&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Closed Resolution: Wont Fix Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Nobody/Anonymous (nobody) Summary: commands module doesn't support background commands Initial Comment: The structure of the command passed to os.popen() prevents the getoutput() and getstatusoutput() functions from accepting commands for background execution. I think it would be sufficient to see if the last non-whitespace character in the command was '&' and, if so, suppress insertion of the semicolon into the command passed to os.popen(): dosemi = not cmd.strip()[-1:] == '&' pipe = os.popen('{ %s%s } 2>&1' % (cmd, dosemi and ';' or ''), 'r') The above is untested, but based on my fiddling at the shell prompt seems to be what's called for. Since the status and output mean little or nothing when the command is executed in the background, perhaps a better alternative would be to add a new function to the module which doesn't return either, but dumps stdout and stderr to /dev/null. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:34 Message: Logged In: YES user_id=849994 Originator: NO I think now that the subprocess module is there, it should be used. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891832&group_id=5470
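Georg's closing comment points at subprocess. A sketch of what the requested "run in the background, dump output to /dev/null" helper looks like there; the command is a placeholder chosen only so the example is self-contained:

```python
import subprocess
import sys

# Popen returns immediately, so the command effectively runs in the
# background; DEVNULL discards stdout/stderr, as the report suggested.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('background work')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
status = proc.wait()   # only needed if the exit status matters later
```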
[ python-Bugs-506100 ] commands.getstatusoutput(): cmd.exe support
Bugs item #506100, was opened at 2002-01-20 18:13 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=506100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Platform-specific Status: Closed Resolution: Wont Fix Priority: 5 Private: No Submitted By: Pierre Rouleau (pierre_rouleau) Assigned to: Nobody/Anonymous (nobody) Summary: commands.getstatusoutput(): cmd.exe support Initial Comment: ##commands.getstatusoutput(): Does not support DOS-type shells # - # # Inside commands.py, the getstatusoutput() function is not capable of running a # DOS-type shell command. The current code assumes that the operating system # is running a Unix-type shell. # # The old code is: def getstatusoutput(cmd): """Return (status, output) of executing cmd in a shell.""" import os pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r') text = pipe.read() sts = pipe.close() if sts is None: sts = 0 if text[-1:] == '\n': text = text[:-1] return sts, text # I propose that we update that code to check the operating system and support # DOS-style shells (for DOS, NT, OS/2) with the following modified code: def getstatusoutput(cmd): """Return (status, output) of executing cmd in a shell.""" import os if os.name in ['nt', 'dos', 'os2']: # use DOS-style command shell for NT, DOS and OS/2 pipe = os.popen(cmd + ' 2>&1', 'r') else: # use Unix style for all others pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r') text = pipe.read() sts = pipe.close() if sts is None: sts = 0 if text[-1:] == '\n': text = text[:-1] return sts, text -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:36 Message: Logged In: YES user_id=849994 Originator: NO commands.py explicitly states UNIX only.
Anyway, nowadays you should use the subprocess module for such tasks. -- Comment By: Pierre Rouleau (pierre_rouleau) Date: 2002-01-24 01:14 Message: Logged In: YES user_id=420631 The change proposed is for DOS-type shells, not DOS itself (as far as I know, pure MS-DOS or PC-DOS are not supported). But Win32 platforms are (NT, 2000, ...) and they use the same type of native command interpreter shell. With the proposed change getstatusoutput() works in those. cmd.exe is available in NT, 2000 and also in OS/2. -- Comment By: Martin v. Löwis (loewis) Date: 2002-01-23 08:05 Message: Logged In: YES user_id=21627 My first reaction to this was "Is DOS still supported?" Changing subject to mention cmd.exe (which is not a DOS application). -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=506100&group_id=5470
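The commands-module helpers quoted above later moved into subprocess. A minimal sketch of the modern equivalent (the example command assumes a Unix-like shell with `echo`):

```python
import subprocess

# subprocess.getstatusoutput runs the command through the shell and
# folds stderr into the returned output, much like the code quoted
# above; in recent Pythons it also works on Windows via cmd.exe.
status, output = subprocess.getstatusoutput("echo hello")
```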
[ python-Bugs-1197883 ] Installation path sent to configure
Bugs item #1197883, was opened at 2005-05-08 22:56 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1197883&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Installation Group: None Status: Closed Resolution: Duplicate Priority: 5 Private: No Submitted By: Björn Lindqvist (sonderblade) Assigned to: Nobody/Anonymous (nobody) Summary: Installation path sent to configure Initial Comment: This is a minor problem but it makes some regression tests that rely upon Python's installation path fail. $ ./configure --prefix=/opt/ All Python stuff will be installed with an extra '/'. /opt//bin/python /opt//lib/python2.5 etc. Not good. Configure or some other installation script should recognise the redundant '/' and strip it. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 08:44 Message: Logged In: YES user_id=849994 Originator: NO Duplicate of #1676135. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1197883&group_id=5470
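The doubled slash is a pure path-normalization problem. A sketch of the idea in Python (illustrating what configure could do, not what it actually does); the variable names are invented:

```python
import posixpath

prefix = "/opt/"                      # as passed: ./configure --prefix=/opt/
bindir = prefix + "/bin/python"       # naive concatenation doubles the slash
clean = posixpath.normpath(bindir)    # collapses '//' into '/'
```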
[ python-Bugs-1680034 ] Importing SystemRandom wastes entropy.
Bugs item #1680034, was opened at 2007-03-13 17:17 Message generated for change (Comment added) made by stephent98 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680034&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Closed Resolution: Wont Fix Priority: 5 Private: No Submitted By: Steve Tyler (stephent98) Assigned to: Nobody/Anonymous (nobody) Summary: Importing SystemRandom wastes entropy. Initial Comment: Importing SystemRandom wastes entropy. The strace snippet shows a 16 byte read from /dev/urandom, which is presumably done to seed a random number generator. However SystemRandom does not need a seed, so the read is not needed. test case: #!/usr/bin/python from random import SystemRandom strace snippet: open("/dev/urandom", O_RDONLY|O_LARGEFILE) = 4 read(4, "\\\333\277Q\243K\350 \321\316\26_\271\364~", 16) = 16 close(4) = 0 Python version: python-2.4.4-1.fc6 (Fedora Core 6) -- Comment By: Steve Tyler (stephent98) Date: 2007-03-14 12:17 Message: Logged In: YES user_id=1741843 Originator: YES Here is how I monitor the entropy: watch -d -n 1 cat /proc/sys/kernel/random/entropy_avail Repeatedly running this script will consume almost all system entropy: #!/usr/bin/python import gnome.ui For the record, the entropy-hog in this test case is not Python-related: #6 0x007742ae in fread () from /lib/libc.so.6 #7 0x0014cfd9 in g_rand_new () from /lib/libglib-2.0.so.0 #8 0x043eef5c in ORBit_genuid_init () from /usr/lib/libORBit-2.so.0 #9 0x043f5892 in CORBA_ORB_init () from /usr/lib/libORBit-2.so.0 #10 0x045596de in bonobo_activation_orb_init () from /usr/lib/libbonobo-activation.so.4 #11 0x04559b46 in bonobo_activation_init () from /usr/lib/libbonobo-activation.so.4 #12 0x002a5317 in initactivation () from
/usr/lib/python2.4/site-packages/gtk-2.0/bonobo/activation.so #13 0x049d2f48 in _PyImport_LoadDynamicModule () from /usr/lib/libpython2.4.so.1.0 -- Comment By: Raymond Hettinger (rhettinger) Date: 2007-03-13 20:17 Message: Logged In: YES user_id=80475 Originator: NO Sorry, am closing this as won't fix. The 16 bytes are used to seed the MersenneTwister which is used by tempfile.py upon startup. That is a reasonable use of the resource. FWIW, it is possible for you to recover most of those 16 bytes of entropy just by calling the twister itself. Also, it is my understanding that /dev/urandom is continuously refilling its hardware based entropy source (so the supply is limitless, but not instant). -- Comment By: Steve Tyler (stephent98) Date: 2007-03-13 19:08 Message: Logged In: YES user_id=1741843 Originator: YES Here is a little more background on why wasting entropy is a problem. When accessed as /dev/urandom, as many bytes as are requested are returned even when the entropy pool is exhausted. http://www.linux.com/howtos/Secure-Programs-HOWTO/random-numbers.shtml When the entropy pool is exhausted, the Linux RNG (accessed via /dev/urandom) behaves like a pseudo-random number generator, which is not acceptable for cryptographic applications such as password generators. Analysis of the Linux Random Number Generator http://www.pinkas.net/PAPERS/gpr06.pdf Of course one can work around this issue by not using the random module and accessing /dev/urandom or /dev/random directly. For some perspective, simply importing the gnome.ui module consumes 4096 bytes of random data in a library I have not been able to completely identify. (I don't think it is Python, though.) -- Comment By: Steve Tyler (stephent98) Date: 2007-03-13 18:30 Message: Logged In: YES user_id=1741843 Originator: YES Entropy is not an unlimited quantity, therefore the existing behavior is undesirable. My app is a random password generator which may need the entropy for itself. 
https://sourceforge.net/projects/gnome-password/ -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-13 17:32 Message: Logged In: YES user_id=849994 Originator: NO This is not caused by SystemRandom, but by instantiating (and thereby seeding) the normal (Mersenne Twister) random number generator, which is done automatically when random is imported. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1680034&group_id=5470
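For a password generator that must not depend on the seeded Mersenne Twister, both os.urandom and random.SystemRandom read the kernel CSPRNG directly. A minimal sketch (importing random still performs the one-time 16-byte seed read discussed above):

```python
import os
import random
import string

raw = os.urandom(16)               # kernel randomness, no seeding involved

# SystemRandom is a thin wrapper over os.urandom, so its output never
# touches the Mersenne Twister state.
sysrand = random.SystemRandom()
password = "".join(sysrand.choice(string.ascii_letters) for _ in range(12))
```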
[ python-Bugs-1582282 ] email.header decode within word
Bugs item #1582282, was opened at 2006-10-22 09:16 Message generated for change (Comment added) made by bwarsaw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582282&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Tokio Kikuchi (tkikuchi) Assigned to: Barry A. Warsaw (bwarsaw) Summary: email.header decode within word Initial Comment: The problem is filed in mailman bug report: http://sourceforge.net/tracker/index.php?func=detail&aid=1578539&group_id=103&atid=100103 While Microsoft Entourage's way of encoding iso-8859-1 text is not compliant with RFC 2047, Python email.header.decode_header should treat this 'word' as a simple us-ascii string and should not parse it into a series of string/charset pairs. Sm=?ISO-8859-1?B?9g==?=rg=?ISO-8859-1?B?5Q==?=sbord should be parsed as [('Sm=?ISO-8859-1?B?9g==?=rg=?ISO-8859-1?B?5Q==?=sbord', None)], not as [('Sm', None), ('\xf6', 'iso-8859-1'), ('g', None), ('\xe5', 'iso-8859-1'), ('sbord', None)] -- Comment By: Barry A. Warsaw (bwarsaw) Date: 2007-03-14 08:58 Message: Logged In: YES user_id=12800 Originator: NO Whoops! Resolution should have been Fixed -- Comment By: Barry A. Warsaw (bwarsaw) Date: 2007-03-14 01:00 Message: Logged In: YES user_id=12800 Originator: NO r54370 for Python 2.5 r54371 for Python 2.6 -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582282&group_id=5470
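For contrast with the malformed Entourage header above, this is how decode_header handles well-formed input (the first call is the canonical example from the email.header documentation):

```python
from email.header import decode_header

# A properly delimited RFC 2047 encoded word decodes into a single
# (bytes, charset) chunk; plain ASCII text comes back with charset None.
chunks = decode_header("=?iso-8859-1?q?p=F6stal?=")
plain = decode_header("Hello")
```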
[ python-Bugs-1486335 ] httplib: read/_read_chunked fails with ValueError sometimes
Bugs item #1486335, was opened at 2006-05-11 04:14 Message generated for change (Comment added) made by altman You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: kxroberto (kxroberto) Assigned to: Greg Ward (gward) Summary: httplib: read/_read_chunked fails with ValueError sometimes Initial Comment: This occasionally shows up in a logged trace, when an application crashes on ValueError on a http(s)_response.read(): (py2.3.5 - yet relevant httplib code is still the same in current httplib) \' File socket.pyo, line 283, in read\\n\', \' File httplib.pyo, line 389, in read\\n\', \' File httplib.pyo, line 426, in _read_chunked\\n\', \'ValueError: invalid literal for int(): \\n\'] ::: it's the line: chunk_left = int(line, 16) Don't know what this line is about. Yet, that should be protected, as a http_response.read() should not fail with ValueError, but only with IOError/EnvironmentError, socket.error - otherwise error/exception handling becomes a random task. -Robert Side note regarding IO exception handling: See also FR #1481036 (IOBaseError): why is socket.error.__bases__ (<class exceptions.Exception at 0x011244E0>,)? -- Comment By: Patrick Altman (altman) Date: 2007-03-14 10:39 Message: Logged In: YES user_id=405010 Originator: NO I am attempting to use a HEAD request against Amazon S3 to check whether a file exists or not and, if it does, parse the md5 hash from the ETag in the response to verify the contents of the file, so as to save on the bandwidth of uploading files when it is not necessary.
If the file exists, the HEAD works as expected and I get valid headers back that I can parse, pulling the ETag out of the dictionary using getheader('ETag')[1:-1] (using the slice to trim off the double-quotes in the string). The problem lies when I attempt to send a HEAD request when no file exists. As expected, a 404 Not Found response is sent back from Amazon; however, my test scripts seem to hang. I run python with trace.py and it hangs here: --- modulename: httplib, funcname: _read_chunked httplib.py(536): assert self.chunked != _UNKNOWN httplib.py(537): chunk_left = self.chunk_left httplib.py(538): value = '' httplib.py(542): while True: httplib.py(543): if chunk_left is None: httplib.py(544): line = self.fp.readline() --- modulename: socket, funcname: readline socket.py(321): data = self._rbuf socket.py(322): if size < 0: socket.py(324): if self._rbufsize <= 1: socket.py(326): assert data == "" socket.py(327): buffers = [] socket.py(328): recv = self._sock.recv socket.py(329): while data != "\n": socket.py(330): data = recv(1) It eventually completes with an exception here: File "C:\Python25\lib\httplib.py", line 509, in read return self._read_chunked(amt) File "C:\Python25\lib\httplib.py", line 548, in _read_chunked chunk_left = int(line, 16) ValueError: invalid literal for int() with base 16: '' For reference, ethereal captured the following request and response: HEAD REMOVED HTTP/1.1 Host: s3.amazonaws.com Accept-Encoding: identity Date: Tue, 13 Mar 2007 02:54:12 GMT Authorization: AWS REMOVED HTTP/1.1 404 Not Found x-amz-request-id: E20B4C0D0C48B2EF x-amz-id-2: REMOVED Content-Type: application/xml Transfer-Encoding: chunked Date: Tue, 13 Mar 2007 02:54:16 GMT Server: AmazonS3 -- Comment By: John J Lee (jjlee) Date: 2006-08-07 19:23 Message: Logged In: YES user_id=261020 I think it's only worth worrying about bad chunking that a) has been observed in the wild (though not necessarily by us) and b) popular browsers can cope with.
Greg: If there is an error here, it's at EOF, so it's not that big a deal. That's only if the response will be closed at the end of the current transaction. Quoting from 1411097: if the connection will not close at the end of the transaction, the behaviour should not change from what's currently in SVN (we should not assume that the chunked response has ended unless we see the proper terminating CRLF). Perhaps we don't need to be quite as strict as that, but the point is that otherwise, how do we know the server hasn't already sent that last CRLF, and that it will turn up in three weeks' time?-) If that happens, not sure exactly how httplib will treat the CRLF and possible chunked encoding trailers, but I suspect something bad happens. Perhaps we could just always close the connection in this case? I'm
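The traces above all die on httplib's `chunk_left = int(line, 16)`. Below is a minimal sketch of the protection the report asks for, parsing the chunk-size line defensively so that bad framing surfaces as IOError rather than ValueError. The helper name is illustrative; this is not the actual httplib code.

```python
def parse_chunk_size(line):
    """Parse one HTTP chunked-encoding size line defensively.

    Mirrors what httplib's _read_chunked does with each chunk-size line,
    but converts framing problems into IOError so that callers of
    response.read() only need to catch IO-style errors, as the report
    requests.  (Sketch only, not httplib's implementation.)
    """
    # The size field may be followed by ";name=value" chunk extensions.
    line = line.split(b";", 1)[0].strip()
    if not line:
        # Empty line where a chunk size belongs: e.g. readline() hit EOF
        # because the server closed without proper chunked framing.
        raise IOError("incomplete chunked read: missing chunk size")
    try:
        return int(line, 16)  # chunk sizes are hexadecimal
    except ValueError:
        raise IOError("invalid chunk size: %r" % line)
```

On the Amazon 404 trace above, readline() ultimately returns an empty string, so this raises IOError where the stock code raises `ValueError: invalid literal for int() with base 16: ''`.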
[ python-Bugs-1528074 ] difflib.SequenceMatcher.find_longest_match() wrong result
Bugs item #1528074, was opened at 2006-07-25 03:59 Message generated for change (Comment added) made by rtvd You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1528074group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: John Machin (sjmachin) Assigned to: Tim Peters (tim_one) Summary: difflib.SequenceMatcher.find_longest_match() wrong result Initial Comment: A short example script is attached. Two strings, each 500 bytes long. Longest match is the first 32 bytes of each string. Result produced is a 10-byte match later in the string. If one byte is chopped off the end of each string (len now 499 each), the correct result is produced. Other observations, none of which may be relevant: 1. Problem may be in the heuristic for popular elements in the __chain_b method. In this particular example, the character '0' (which has frequency of 6 in the b string) is not popular with a len of 500 but is popular with a len of 499. 2. '0' is the last byte of the correct longest match. 3. The correct longest match is at the start of each of the strings. 4. Disabling the popular heuristic (like below) appears to make the problem go away:
if 0: # if n >= 200 and len(indices) * 100 > n:
    populardict[elt] = 1
    del indices[:]
else:
    indices.append(i)
5. The callable self.isbpopular is created but appears to be unused. 6. The determination of 'popular' doesn't accord with the comments, which say 1%; however, with len==500 it takes 6 to be popular. -- Comment By: Denys Rtveliashvili (rtvd) Date: 2007-03-14 23:11 Message: Logged In: YES user_id=1416496 Originator: NO By the way, I found that the implementation should be changed completely. 
The current one has O(n^2) computational complexity, while one based on suffix trees using Ukkonen's algorithm would use only O(n). -- Comment By: Denys Rtveliashvili (rtvd) Date: 2007-03-11 18:29 Message: Logged In: YES user_id=1416496 Originator: NO I have submitted a test case for this bug to SourceForge. The ID is #1678339. Also I have submitted a fix for this bug (ID #1678345), but the fix reduces the performance significantly. -- Comment By: Denys Rtveliashvili (rtvd) Date: 2007-03-10 20:24 Message: Logged In: YES user_id=1416496 Originator: NO The quick test for this bug is:
for i in xrange(190, 200):
    text1 = "a" + "b"*i
    text2 = "b"*i + "c"
    m = difflib.SequenceMatcher(None, text1, text2)
    (aptr, bptr, l) = m.find_longest_match(0, len(text1), 0, len(text2))
    print "i:", i, "l:", l, "aptr:", aptr, "bptr:", bptr
    assert l == i
The assertion will fail when i==199 (the length of the texts will be 200). And yes, the bug is clearly populardict-related. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1528074group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
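For readers on current Python: the "popular element" heuristic discussed above was eventually made optional. Since Python 2.7.1/3.2, SequenceMatcher accepts an autojunk parameter, and autojunk=False disables the heuristic. A small demonstration of the failure mode at the report's length-200 threshold, sketched against that modern API (the report itself predates autojunk):

```python
import difflib

# Inputs of length 200: with the heuristic on, the frequent character
# "b" (>1% of a 200-char string) is treated as "popular" and excluded
# from matching, so the long run is missed; autojunk=False disables
# the heuristic and the full 199-character run is found.
text1 = "a" + "b" * 199
text2 = "b" * 199 + "c"

m = difflib.SequenceMatcher(None, text1, text2, autojunk=False)
match = m.find_longest_match(0, len(text1), 0, len(text2))
assert match.size == 199  # the whole run of "b"
```

With the default autojunk=True, the same call reports a much shorter match, which is exactly the wrong result the bug describes.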
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 20:48 Message: Logged In: YES user_id=261020 Originator: NO I'm not sure what your complaint is. What's wrong with response.msg.getallmatchingheaders()? -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by davidma You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: David Margrave (davidma) Date: 2007-03-14 21:10 Message: Logged In: YES user_id=31040 Originator: YES getallmatchingheaders() works fine. The problem is with the self.headers in the SimpleHTTPRequestHandler and derived classes. 
A website may send multiple set-cookie headers, using gmail.com as an example:
Set-Cookie: GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail
Set-Cookie: GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail
The SimpleHTTPRequestHandler class combines multiple set-cookie response headers into a single comma-separated string which it stores in the headers dictionary, i.e.
self.headers['set-cookie'] = GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail
The problem is that if code uses self.headers['set-cookie'] and string.split on the comma delimiter to recover the original distinct cookie values, it will run into trouble because of the use of the comma character within the cookies' expiration tags, such as Expires=Tue, 13-Mar-07 21:03:04 GMT. Again, getallmatchingheaders() is fine as an alternative, but as long as you are going to the trouble of storing multiple set-cookie response headers in the self.headers dict, using a delimiter of some sort, I'd argue you might as well also take care that your delimiter is either unique or escaped within the fields you are delimiting. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 20:48 Message: Logged In: YES user_id=261020 Originator: NO I'm not sure what your complaint is. What's wrong with response.msg.getallmatchingheaders()? -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
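As the comment notes, a naive split(",") breaks on the weekday comma inside the Expires date. A sketch of a caller-side workaround (hypothetical helper name; a heuristic, not a general Set-Cookie parser): split only on commas that start a new name=value cookie.

```python
import re

def split_joined_set_cookie(value):
    """Split a comma-joined Set-Cookie header back into cookies.

    Heuristic sketch: split only on a comma followed by what looks like
    the start of a new cookie (token then "="), which skips the comma in
    "Expires=Tue, 13-Mar-07 ..." because "13-Mar-07" is followed by a
    space, not "=".  Not robust against commas inside cookie values.
    """
    return [c.strip() for c in re.split(r",(?=\s*[^;,=\s]+=[^=])", value)]

joined = ("GMAIL_RTT=EXPIRED; Domain=.google.com; "
          "Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, "
          "GMAIL_LOGIN=EXPIRED; Domain=.google.com; "
          "Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail")
cookies = split_joined_set_cookie(joined)
assert len(cookies) == 2
assert cookies[0].startswith("GMAIL_RTT=")
assert cookies[1].startswith("GMAIL_LOGIN=")
```

This is exactly why getallmatchingheaders() remains the safe answer: it returns the original, unjoined header lines, so no splitting heuristic is needed.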
[ python-Bugs-1681020 ] execfile locks file forever if there are any syntax errors
Bugs item #1681020, was opened at 2007-03-14 22:16 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1681020group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: virwen (virwen) Assigned to: Nobody/Anonymous (nobody) Summary: execfile locks file forever if there are any syntax errors Initial Comment: When I execfile a file which contains a syntax error, the file becomes locked and stays this way all the way until I exit the interpreter (I am unable to delete it, for example). I have tried but failed to find any way to unlock the file. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1681020group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1564508 ] BaseCookie does not support $Port
Bugs item #1564508, was opened at 2006-09-24 14:05 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1564508group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anders Aagaard (aagaande) Assigned to: Nobody/Anonymous (nobody) Summary: BaseCookie does not support $Port Initial Comment: Sending a cookie containing $Port to python's Cookie.py causes this exception:
File "/usr/lib64/python2.4/Cookie.py", line 621, in load
self.__ParseString(rawdata)
File "/usr/lib64/python2.4/Cookie.py", line 646, in __ParseString
M[ K[1:] ] = V
File "/usr/lib64/python2.4/Cookie.py", line 437, in __setitem__
raise CookieError("Invalid Attribute %s" % K)
CookieError: Invalid Attribute port
For RFC 2965 compatibility, more keys have to be added to the Morsel class in the same file. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 21:20 Message: Logged In: YES user_id=261020 Originator: NO Why do you want RFC 2965 compatibility? I'm not trolling; RFC 2965 is dead as an internet protocol (except as a basis for implementing the older cookie protocols, as RFC 2965 + compatibility hacks -- but $Port is not relevant in that case). The authors of the RFC gave up on an effort to publish errata to the RFC, due to the complexities and the lack of interest from the internet at large. AFAIK, $Port is not implemented by browsers (except for maybe Opera and lynx, IIRC). It just never caught on. See also http://python.org/sf/1638033 -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1564508group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
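For reference, the rejection happens because Morsel.__setitem__ only accepts attribute keys listed in the class's reserved-key table. A sketch of the kind of extension the report asks for, written against Python 3's http.cookies (the 2.x Cookie module in the traceback is organized the same way); note that _reserved is an internal, undocumented detail, so this is a hack rather than a supported API:

```python
from http.cookies import Morsel, SimpleCookie

# Teach Morsel about the RFC 2965 "$Port" attribute by registering it
# in the reserved-key table.  This mutates a class-level dict, so the
# patch is global to the process -- illustrative only.
Morsel._reserved["port"] = "Port"

c = SimpleCookie()
c["session"] = "abc123"
c["session"]["port"] = "80"   # would raise CookieError without the patch
assert c["session"]["port"] == "80"
```

Given jjlee's point that $Port never caught on in browsers, ignoring the attribute is arguably the better fix than supporting it, which is presumably why this never landed.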
[ python-Bugs-1681020 ] execfile locks file forever if there are any syntax errors
Bugs item #1681020, was opened at 2007-03-14 21:16 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1681020group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Closed Resolution: Out of Date Priority: 5 Private: No Submitted By: virwen (virwen) Assigned to: Nobody/Anonymous (nobody) Summary: execfile locks file forever if there are any syntax errors Initial Comment: When I execfile a file which contains a syntax error, the file becomes locked and stays this way all the way until I exit the interpreter (I am unable to delete it, for example). I have tried but failed to find any way to unlock the file. -- Comment By: Georg Brandl (gbrandl) Date: 2007-03-14 22:35 Message: Logged In: YES user_id=849994 Originator: NO Thanks for the report, this has already been fixed in SVN. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1681020group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1068268 ] subprocess is not EINTR-safe
Bugs item #1068268, was opened at 2004-11-17 22:07 Message generated for change (Comment added) made by mpitt You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1068268group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 3 Private: No Submitted By: Peter Åstrand (astrand) Assigned to: Peter Åstrand (astrand) Summary: subprocess is not EINTR-safe Initial Comment: The subprocess module is not safe for use with signals, because it doesn't retry the system calls upon EINTR. However, as far as I understand it, this is true for most other Python modules as well, so it isn't obvious that the subprocess needs to be fixed. The problem was first noticed by John P Speno. -- Comment By: Martin Pitt (mpitt) Date: 2007-03-14 23:36 Message: Logged In: YES user_id=80975 Originator: NO I updated Peter's original patch to 2.5+svn fixes and added proper tests to test_subprocess.py. It works great now. What do you think about this approach? Fixing it only in submodule feels a bit strange, but then again, this is meant to be an easy to use abstraction, and most of the people that were hit by this (according to Google) encountered the problem in subprocess. I don't see how to attach something here, so I attached the updated patch to the Ubuntu bug (https://launchpad.net/bugs/87292): http://librarian.launchpad.net/6807594/subprocess-eintr-safety.patch Thanks, Martin -- Comment By: Martin Pitt (mpitt) Date: 2007-02-26 13:15 Message: Logged In: YES user_id=80975 Originator: NO I just got two different Ubuntu bug reports about this problem as well, and I'm unsure how to circumvent this at the application level. 
http://librarian.launchpad.net/6514580/Traceback.txt http://librarian.launchpad.net/6527195/Traceback.txt (from https://launchpad.net/bugs/87292 and its duplicate) -- Comment By: Matt Johnston (mattjohnston) Date: 2004-12-22 08:07 Message: Logged In: YES user_id=785805 I've hit this on a Solaris 9 box without explicitly using signals. Using the DCOracle module, a separate Oracle process is executed. When this terminates, a SIGCHLD is sent to the calling python process, which may be in the middle of a select() in the communicate() call, causing EINTR. From the output of truss (like strace), a sigchld handler doesn't appear to be getting explicitly installed by the Oracle module. SunOS 5.9 Generic_112233-01 sun4u sparc SUNW,Sun-Fire-280R -- Comment By: Peter Åstrand (astrand) Date: 2004-11-17 22:15 Message: Logged In: YES user_id=344921 One way of testing subprocess for signal-safeness is to insert these lines just after _cleanup():
import signal
signal.signal(signal.SIGALRM, lambda x, y: 1)
signal.alarm(1)
import time
time.sleep(0.99)
Then run test_subprocess.py. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1068268group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
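The fix being discussed is mechanical: wrap each interruptible system call in a retry loop. A minimal sketch of that pattern (illustrative helper, not the patch itself; Python 3.5's PEP 475 later built this retry into the interpreter so it no longer has to be done per call site):

```python
import errno
import os

def read_retrying(fd, n):
    """os.read() that restarts on EINTR.

    When a signal (e.g. SIGCHLD from a child exiting, as in the Solaris
    report above) interrupts the call, retry instead of letting the
    EINTR OSError escape to the caller.
    """
    while True:
        try:
            return os.read(fd, n)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            # interrupted by a signal mid-call: retry

# usage: reading from a pipe, much as subprocess.communicate() does
r, w = os.pipe()
os.write(w, b"done")
assert read_retrying(r, 4) == b"done"
os.close(r)
os.close(w)
```

The subprocess patch applies the same loop around its read, write, select, and waitpid calls; the loop itself is the whole trick.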
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 23:57 Message: Logged In: YES user_id=261020 Originator: NO SimpleHTTPRequestHandler is not part of httplib. Did you mean to refer to module SimpleHTTPServer rather than httplib, perhaps? I don't see the particular bit of code you refer to (neither in httplib nor in module SimpleHTTPServer), but re the general issue: Regardless of the fact that RFC 2616 ss. 
4.2 says headers MUST be able to be combined with commas, Netscape Set-Cookie headers simply don't work that way, and Netscape Set-Cookie headers are here to stay. So, Set-Cookie headers must not be combined. (Quoting does not help, because Netscape Set-Cookie headers contain cookie values that 1. may contain commas and 2. do not support quoting -- any quote (") characters are in fact part of the cookie value itself rather than being part of a quoting mechanism. And there is no precedent for any choice of delimiter other than a comma, nor for any other Netscape Set-Cookie cookie value quoting mechanism.) -- Comment By: David Margrave (davidma) Date: 2007-03-14 21:10 Message: Logged In: YES user_id=31040 Originator: YES getallmatchingheaders() works fine. The problem is with the self.headers in the SimpleHTTPRequestHandler and derived classes. A website may send multiple set-cookie headers, using gmail.com as an example: Set-Cookie: GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail Set-Cookie: GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The SimpleHTTPRequestHandler class combines multiple set-cookie response headers into a single comma-separated string which it stores in the headers dictionary i.e. 
self.headers ['set-cookie'] = GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The problem is if you try to use code that uses self.headers['set-cookie'] and use string.split to get the original distinct cookie values on the comma delimiter, you'll run into trouble because of the use of the comma character within the cookies' expiration tags, such as Expires=Tue, 13-Mar-07 21:03:04 GMT Again, getallmatchingheaders() is fine as an alternative, but as long as you are going to the trouble of storing multiple set-cookie response headers in the self.headers dict, using a delimiter of some sort, I'd argue you might as well also take care that your delimiter is either unique or escaped within the fields you are delimiting. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 20:48 Message: Logged In: YES user_id=261020 Originator: NO I'm not sure what your complaint is. What's wrong with response.msg.getallmatchingheaders()? -- You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by davidma You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: David Margrave (davidma) Date: 2007-03-15 00:30 Message: Logged In: YES user_id=31040 Originator: YES fair enough, the RFC says they have to be joinable with commas, so the behavior is correct. I can get by with getallmatchingheaders if I need access to the original individual cookie values. thanks, dave -- Comment By: John J Lee (jjlee) Date: 2007-03-14 23:57 Message: Logged In: YES user_id=261020 Originator: NO SimpleHTTPRequestHandler is not part of httplib. Did you mean to refer to module SimpleHTTPServer rather than httplib, perhaps? 
I don't see the particular bit of code you refer to (neither in httplib nor in module SimpleHTTPServer), but re the general issue: Regardless of the fact that RFC 2616 ss. 4.2 says headers MUST be able to be combined with commas, Netscape Set-Cookie headers simply don't work that way, and Netscape Set-Cookie headers are here to stay. So, Set-Cookie headers must not be combined. (Quoting does not help, because Netscape Set-Cookie headers contain cookie values that 1. may contain commas and 2. do not support quoting -- any quote (") characters are in fact part of the cookie value itself rather than being part of a quoting mechanism. And there is no precedent for any choice of delimiter other than a comma, nor for any other Netscape Set-Cookie cookie value quoting mechanism.) -- Comment By: David Margrave (davidma) Date: 2007-03-14 21:10 Message: Logged In: YES user_id=31040 Originator: YES getallmatchingheaders() works fine. The problem is with the self.headers in the SimpleHTTPRequestHandler and derived classes. A website may send multiple set-cookie headers, using gmail.com as an example: Set-Cookie: GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail Set-Cookie: GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The SimpleHTTPRequestHandler class combines multiple set-cookie response headers into a single comma-separated string which it stores in the headers dictionary i.e. 
self.headers ['set-cookie'] = GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The problem is if you try to use code that uses self.headers['set-cookie'] and use string.split to get the original distinct cookie values on the comma delimiter, you'll run into trouble because of the use of the comma character within the cookies' expiration tags, such as Expires=Tue, 13-Mar-07 21:03:04 GMT Again, getallmatchingheaders() is fine as an alternative, but as long as you are going to the trouble of storing multiple set-cookie response headers in the self.headers dict, using a delimiter of some sort, I'd argue you might as well also take care that your delimiter is either unique or escaped within the fields you are delimiting. -- Comment By: John J Lee (jjlee) Date: 2007-03-14 20:48 Message: Logged In: YES user_id=261020 Originator: NO I'm not sure what your complaint is. What's wrong with response.msg.getallmatchingheaders()? -- You can respond by visiting:
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: John J Lee (jjlee) Date: 2007-03-15 00:45 Message: Logged In: YES user_id=261020 Originator: NO Huh? 1. *What* behaviour is correct? You still have not said which bit of code you're talking about, or even which module. 2. You seem to have got the sense of what I said backwards. As I said, RFC 2616 is (in practice) WRONG about joining with commas being OK for Set-Cookie. Set-Cookie headers must NOT be joined with commas, despite what RFC 2616 says. 
-- Comment By: David Margrave (davidma) Date: 2007-03-15 00:30 Message: Logged In: YES user_id=31040 Originator: YES fair enough, the RFC says they have to be joinable with commas, so the behavior is correct. I can get by with getallmatchingheaders if I need access to the original individual cookie values. thanks, dave -- Comment By: John J Lee (jjlee) Date: 2007-03-14 23:57 Message: Logged In: YES user_id=261020 Originator: NO SimpleHTTPRequestHandler is not part of httplib. Did you mean to refer to module SimpleHTTPServer rather than httplib, perhaps? I don't see the particular bit of code you refer to (neither in httplib nor in module SimpleHTTPServer), but re the general issue: Regardless of the fact that RFC 2616 ss. 4.2 says headers MUST be able to be combined with commas, Netscape Set-Cookie headers simply don't work that way, and Netscape Set-Cookie headers are here to stay. So, Set-Cookie headers must not be combined. (Quoting does not help, because Netscape Set-Cookie headers contain cookie values that 1. may contain commas and 2. do not support quoting -- any quote (") characters are in fact part of the cookie value itself rather than being part of a quoting mechanism. And there is no precedent for any choice of delimiter other than a comma, nor for any other Netscape Set-Cookie cookie value quoting mechanism.) -- Comment By: David Margrave (davidma) Date: 2007-03-14 21:10 Message: Logged In: YES user_id=31040 Originator: YES getallmatchingheaders() works fine. The problem is with the self.headers in the SimpleHTTPRequestHandler and derived classes. 
A website may send multiple set-cookie headers, using gmail.com as an example: Set-Cookie: GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail Set-Cookie: GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The SimpleHTTPRequestHandler class combines multiple set-cookie response headers into a single comma-separated string which it stores in the headers dictionary i.e. self.headers ['set-cookie'] = GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The problem is if you try to use code that uses self.headers['set-cookie'] and use string.split to get the original distinct cookie values on the comma delimiter, you'll run into trouble because of the use of the comma character within the cookies' expiration tags, such as Expires=Tue, 13-Mar-07 21:03:04 GMT Again, getallmatchingheaders() is fine as an alternative, but as long as you are going to the trouble of storing multiple set-cookie response headers in the self.headers dict, using a delimiter of some
[ python-Bugs-1660009 ] continuing problem with httplib multiple set-cookie headers
Bugs item #1660009, was opened at 2007-02-14 19:52 Message generated for change (Comment added) made by davidma You can respond by visiting: https://sourceforge.net/tracker/?func=detailatid=105470aid=1660009group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: David Margrave (davidma) Assigned to: Nobody/Anonymous (nobody) Summary: continuing problem with httplib multiple set-cookie headers Initial Comment: This is related to [ 432621 ] httplib: multiple Set-Cookie headers, which I was unable to re-open. The workaround that was adopted in the previous bug tracker item was to combine multiple set-cookie headers received from the server, into a single set-cookie element in the headers dictionary, with the cookies joined into a comma-separated string. The problem arises when a comma character appears inside the 'expires' field of one of the cookies. This makes it difficult to split the cookie headers back apart. The comma character should be escaped, or a different separator character used. i.e. expires=Sun, 17-Jan-2038 19:14:07 GMT For now I am using the workaround that gstein suggested, use response.msg.getallmatchingheaders() Python 2.3 has this behavior, and probably later versions. -- Comment By: David Margrave (davidma) Date: 2007-03-15 00:58 Message: Logged In: YES user_id=31040 Originator: YES See the addheader method of the HTTPMessage class in httplib.py:
def addheader(self, key, value):
    """Add header for field key handling repeats."""
    prev = self.dict.get(key)
    if prev is None:
        self.dict[key] = value
    else:
        combined = ", ".join((prev, value))
        self.dict[key] = combined
also see the original tracker entry where this fix was first discussed and implemented: https://sourceforge.net/tracker/index.php?func=detailaid=432621group_id=5470atid=105470 -- Comment By: John J Lee (jjlee) Date: 2007-03-15 00:45 Message: Logged In: YES user_id=261020 Originator: NO Huh? 1. *What* behaviour is correct? You still have not said which bit of code you're talking about, or even which module. 2. You seem to have got the sense of what I said backwards. As I said, RFC 2616 is (in practice) WRONG about joining with commas being OK for Set-Cookie. Set-Cookie headers must NOT be joined with commas, despite what RFC 2616 says. -- Comment By: David Margrave (davidma) Date: 2007-03-15 00:30 Message: Logged In: YES user_id=31040 Originator: YES fair enough, the RFC says they have to be joinable with commas, so the behavior is correct. I can get by with getallmatchingheaders if I need access to the original individual cookie values. thanks, dave -- Comment By: John J Lee (jjlee) Date: 2007-03-14 23:57 Message: Logged In: YES user_id=261020 Originator: NO SimpleHTTPRequestHandler is not part of httplib. Did you mean to refer to module SimpleHTTPServer rather than httplib, perhaps? I don't see the particular bit of code you refer to (neither in httplib nor in module SimpleHTTPServer), but re the general issue: Regardless of the fact that RFC 2616 ss. 4.2 says headers MUST be able to be combined with commas, Netscape Set-Cookie headers simply don't work that way, and Netscape Set-Cookie headers are here to stay. So, Set-Cookie headers must not be combined. (Quoting does not help, because Netscape Set-Cookie headers contain cookie values that 1. may contain commas and 2. do not support quoting -- any quote (") characters are in fact part of the cookie value itself rather than being part of a quoting mechanism. 
And there is no precedent for any choice of delimter other than a comma, nor for any other Netscape Set-Cookie cookie value quoting mechanism.) -- Comment By: David Margrave (davidma) Date: 2007-03-14 21:10 Message: Logged In: YES user_id=31040 Originator: YES getallmatchingheaders() works fine. The problem is with the self.headers in the SimpleHTTPRequestHandler and derived classes. A website may send multiple set-cookie headers, using gmail.com as an example: Set-Cookie: GMAIL_RTT=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail Set-Cookie: GMAIL_LOGIN=EXPIRED; Domain=.google.com; Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail The SimpleHTTPRequestHandler class combines multiple set-cookie response headers into a single comma-separated string which it stores in the headers dictionary i.e.
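The splitting ambiguity davidma describes can be demonstrated directly. Below is a minimal sketch: a naive comma split cuts inside the Expires dates, while a regex that splits only at commas immediately followed by a name=value token recovers the two original cookies. The regex heuristic is purely illustrative -- it is not something httplib itself provides, and as jjlee notes, no delimiter choice is fully robust for Netscape cookies.

```python
import re

# Two Netscape-style cookies as httplib's addheader would join them:
joined = ("GMAIL_RTT=EXPIRED; Domain=.google.com; "
          "Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail, "
          "GMAIL_LOGIN=EXPIRED; Domain=.google.com; "
          "Expires=Tue, 13-Mar-07 21:03:04 GMT; Path=/mail")

# A naive split also cuts inside the two Expires dates:
naive = joined.split(", ")
print(len(naive))  # 4 pieces, not the 2 cookies we started with

# Heuristic: assume a new cookie starts at a comma that is
# immediately followed by a token=... attribute.
cookies = re.split(r", (?=[^;,\s]+=)", joined)
print(len(cookies))  # 2
```

This recovers the two cookies for this input, but it would misfire on any cookie whose value itself contains a comma followed by an equals-bearing token, which is exactly why the comment thread concludes the headers should not be joined in the first place.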
[ python-Bugs-1328278 ] __getslice__ taking priority over __getitem__
Bugs item #1328278, was opened at 2005-10-16 18:22 Message generated for change (Comment added) made by stupidgeekman You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1328278&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Closed Resolution: Wont Fix Priority: 5 Private: No Submitted By: Josh Marshall (jpmarshall) Assigned to: Nobody/Anonymous (nobody) Summary: __getslice__ taking priority over __getitem__ Initial Comment: When creating a class that uses __getitem__ to implement slicing, if __getattr__ is also implemented, slicing will fail. This is due to the (deprecated) __getslice__ method being called before __getitem__. The attached file demonstrates this. If __getitem__ is implemented on its own, all is rosy. When we add __getattr__ and do not raise an AttributeError when __getslice__ is searched for, the slicing fails. If we raise this AttributeError, __getitem__ is called next. The only other reference I could find to this bug is on the jython mailing list, from 2003: http://sourceforge.net/mailarchive/forum.php?thread_id=2350972&forum_id=5586 My question is: why is __getslice__ called before __getitem__? I assumed that because it is deprecated, it would be the last resort for a slicing. Is this planned to be fixed, or is there existing behaviour that relies on it?
-- Comment By: Tim (stupidgeekman) Date: 2007-03-14 19:33 Message: Logged In: YES user_id=1743956 Originator: NO

I would suggest that the list class should use the same form suggested in the documentation, namely:

    if sys.version_info < (2, 0):
        # They won't be defined if version is at least 2.0 final
        def __getslice__(self, i, j):
            return self[max(0, i):max(0, j):]
        def __setslice__(self, i, j, seq):
            self[max(0, i):max(0, j):] = seq
        def __delslice__(self, i, j):
            del self[max(0, i):max(0, j):]
    ...

in order to ensure that the *slice methods are not defined unless needed for backward compatibility with an older interpreter; classes developed with the above suggested structure should then work properly.

-- Comment By: Georg Brandl (birkenfeld) Date: 2005-11-11 11:50 Message: Logged In: YES user_id=1188172

You're correct. __getslice__ is supported for backwards compatibility, and its semantics cannot change (before 3.0, that is).

-- Comment By: Thomas Lee (krumms) Date: 2005-11-10 06:48 Message: Logged In: YES user_id=315535

This seems to be the documented, expected behavior: http://www.python.org/doc/2.4.2/ref/sequence-methods.html As to _why_ __getslice__ is called before __getitem__, I'm not sure -- but it's right there in the docs.

-- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1328278&group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
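As the thread notes, __getslice__ could not be removed before Python 3.0; in Python 3 it is gone entirely, and __getitem__ always receives a slice object, so the shadowing problem cannot occur. A minimal sketch of the __getslice__-free pattern (the class name and structure here are illustrative, not from the bug report):

```python
class Seq:
    """Sequence that implements slicing purely via __getitem__."""

    def __init__(self, data):
        self.data = list(data)

    def __getitem__(self, key):
        # In Python 3, s[2:5] arrives here as slice(2, 5, None);
        # there is no __getslice__ to intercept it first.
        if isinstance(key, slice):
            return Seq(self.data[key])
        return self.data[key]

s = Seq(range(10))
print(s[2:5].data)  # [2, 3, 4]
print(s[3])         # 3
```

In Python 2 this same class would still hit the reported bug for simple two-index slices if __getattr__ were added, because the interpreter probes for __getslice__ first.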
[ python-Bugs-1603150 ] wave module forgets to close file on exception
Bugs item #1603150, was opened at 2006-11-26 10:59 Message generated for change (Comment added) made by polivare You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603150&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: amerinese (amerinese) Assigned to: Nobody/Anonymous (nobody) Summary: wave module forgets to close file on exception

Initial Comment: I am using Python 2.4 on Windows XP SP2. The wave module function

    f = wave.open(file)

in the case that file is a path to a not-yet-opened file, may raise an exception after opening the file if the contents do not conform to the WAV format. However, it forgets to close the file when the exception is raised, keeping other functions from accessing the file (at least until the file object is garbage collected). The regular file-handling idiom

    f = wave.open(file)
    try:
        ## do something with the wav file
    finally:
        f.close()

doesn't work: since wave.open(file) raises the exception before returning, f is never bound and can't be closed, but the file is open. The reason I know this is that when I try to delete the file after opening it raised a RIFF or 'not a WAV file' exception, Windows claims the file is locked.

-- Comment By: Patricio Olivares (polivare) Date: 2007-03-14 23:57 Message: Logged In: YES user_id=1413642 Originator: NO

wave.open expects either a str or a file object. When it gets a str, it opens the file, works on it, and closes the file, all in the inner scope of the wave.open function. But if the file pointed to by the str is not in correct WAV format, then wave.open throws wave.Error but *doesn't close the file*. It assumes that the file will be garbage collected and then closed, but that does not happen.
I believe it has to do with the note at http://docs.python.org/ref/customization.html#l2h-177 The problem shows up mostly in the interactive interpreter on Windows, because Windows won't let you delete or move a file that is in use by another process, so you need to close the interpreter to release the file.

-- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-19 16:59 Message: Logged In: YES user_id=11375 Originator: NO

Try putting the wave.open() inside the try...finally.

-- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603150&group_id=5470