[issue11051] system calls per import
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11051 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment:

OK, I've rewritten the whole bz2 module (patch attached), and I think it is now ready for review. The BZ2File implementation is a cleaned-up version of the one from my previous patch, with some further additions. I've factored out the common compressor/decompressor stuff into classes Compressor and Decompressor in the _bz2 extension module; with these, BZ2Compressor, BZ2Decompressor, compress() and decompress() are trivial to implement in Python. My earlier efficiency concerns seem to have been unfounded; I ran some quick tests with a 4MB bz2 file, and there wasn't any measurable performance difference from the existing all-C implementation.

I have added a peek() method to BZ2File, in accordance with Antoine's suggestion, but it's not clear how it should interpret its argument. I followed the lead of io.BufferedReader, and simply ignored the arg, returning whatever data is already buffered. The patch also includes tests for peek() in test_bz2, based on test_io's BufferedRWPairTest. Also, while looking at io.BufferedReader's implementation, I noticed that it doesn't actually seem to use raw.peek() at all. If this is correct, then perhaps peek() is unnecessary, and shouldn't be added.

The patch also adds a property 'eof' to BZ2Decompressor, so that the user can test whether EOF has been reached on the compressed stream.

For the new files (Modules/_bz2module.c and Lib/bz2.py), I'm guessing there should be some license boilerplate added at the top of each. I wasn't sure exactly what this should look like, though - some advice would be helpful here.

--
Added file: http://bugs.python.org/file20621/bz2-v3.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
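The BufferedReader-style peek() behaviour described above can be sketched as follows. This is an illustrative model only, not the patch's actual code; the class and attribute names (BufferedDecompressReader, _buffer, _fill_buffer) are invented for the example.

```python
import io

class BufferedDecompressReader:
    """Toy reader that peeks the way io.BufferedReader does."""

    def __init__(self, raw, chunk_size=8192):
        self._raw = raw
        self._buffer = b""
        self._chunk_size = chunk_size

    def _fill_buffer(self):
        # Refill only when the buffer is empty.
        if not self._buffer:
            self._buffer = self._raw.read(self._chunk_size)

    def peek(self, n=0):
        # Like io.BufferedReader.peek(): n is only a hint and is
        # effectively ignored; return whatever is buffered without
        # advancing the stream position.
        self._fill_buffer()
        return self._buffer

    def read(self, n=-1):
        self._fill_buffer()
        if n < 0 or n >= len(self._buffer):
            data, self._buffer = self._buffer, b""
        else:
            data, self._buffer = self._buffer[:n], self._buffer[n:]
        return data

reader = BufferedDecompressReader(io.BytesIO(b"hello world"))
peeked = reader.peek(1)    # returns all buffered data, not 1 byte
first5 = reader.read(5)    # peek() did not consume anything
```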
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment: Looks good to me. My earlier patch was more defensive because I wasn't sure whether any of the other tests might be using cgi.log(), but it seems that this isn't the case. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue6715] xz compressor support
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6715 ___
[issue11018] typo in test_bz2
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11018 ___
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment:

> > * The read*() methods are implemented very inefficiently. Since they have to deal with the bytes objects returned by BZ2Decompressor.decompress(), a large read results in lots of allocations that weren't necessary in the C implementation.
>
> It probably depends on the buffer size. Trying to fix this /might/ be premature optimization.

Actually, looking at the code again (and not being half-asleep this time), I think readline() and readlines() are fine. My worry is about read(), where the problem isn't the size of the buffer but rather the fact that every byte that is read gets copied around more than necessary:

* Read into the readahead buffer in _fill_readahead().
* Copy into 'data' in _read_block().
* Copy into a newly-allocated bytes object for read()'s return value.

But you're right; this is probably premature optimization. I'll do some proper performance measurements before I jump into rewriting. In the meanwhile, FWIW, I noticed that with the Python implementation, test_bz2 took 20% longer than with my C implementation (~1.5s, up from ~1.25s). I don't think this is a very reliable indicator of real-world performance, though.

> Also, as with GzipFile, one goal should be for BZ2File to be wrappable in an io.BufferedReader, which has its own very fast buffering layer (and also a fast readline() if you implement peek() in BZ2File).

Ah, OK. I suppose that is a sensible way of using it. peek() will be quite easy to implement. How should it interpret its argument, though? PEP 3116 (New I/O) makes no mention of the function. BufferedReader appears to ignore it and return however much data is convenient.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
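The wrapping suggested above looks like this in practice. This sketch uses today's bz2 module to demonstrate the idea; the temporary-file plumbing is only for the example.

```python
import bz2
import io
import os
import tempfile

# Create a small .bz2 file to read back.
with tempfile.NamedTemporaryFile(suffix=".bz2", delete=False) as f:
    path = f.name
    f.write(bz2.compress(b"line one\nline two\n"))

# Wrap the decompressing file object in io.BufferedReader, which layers
# its own fast buffering and readline() on top of the decompressor.
raw = bz2.BZ2File(path, "rb")
buffered = io.BufferedReader(raw)
line1 = buffered.readline()
line2 = buffered.readline()
buffered.close()
os.unlink(path)
```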
[issue10276] zlib crc32/adler32 buffer length truncation (64-bit)
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Here is an updated patch, which corrects a typo in the previous patch, and adds a test to test_zlib. The test uses a memory-mapped sparse file, so it gets skipped on systems without mmap. The alternative would be to allocate a 4+GB buffer of ordinary memory, which causes heavy swapping on my machine (4GB of RAM). The test also gets skipped on 32-bit builds, where the address space is too small for this bug to arise.

I'm not sure whether the test can count on the created file actually being sparse, so I had the test require the 'largefile' resource, to be on the safe side.

--
Added file: http://bugs.python.org/file20543/zlib-v2.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10276 ___
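The sparse-file technique reads roughly like this. This is a sketch, not the actual test_zlib code, and it is scaled down to 1MB so it runs anywhere; the real test would map a buffer larger than 4GB to exercise the truncation.

```python
import mmap
import os
import tempfile
import zlib

# Create a (hopefully) sparse file by seeking past the end and writing a
# single byte, then memory-map it and checksum the whole mapping.
size = 1 << 20          # the real test would use something over 1 << 32
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, size - 1, os.SEEK_SET)
    os.write(fd, b"\x00")
    with mmap.mmap(fd, size, access=mmap.ACCESS_READ) as m:
        crc = zlib.crc32(m)   # mmap objects support the buffer protocol
finally:
    os.close(fd)
    os.unlink(path)
```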
[issue10684] Folders get deleted when trying to change case with shutil.move (case insensitive file systems only)
Nadeem Vawda nadeem.va...@gmail.com added the comment:

> BTW, what is the best way to check for a case-insensitive file-system? The test here merely checks if sys.platform returns mac, darwin or win32.

I would suggest not checking at all. If the system is case-sensitive, the test will pass, so it doesn't really make a difference. You could write a small function that creates a dummy file and then tries to access it via a case variant of its name, but that seems unnecessary.

> You can't solve this by trying to do different things on different operating systems. This bug depends on file system properties, not OS.

It's worth pointing out that it depends on both the FS *and* OS. For example, an NTFS filesystem is case-insensitive under Windows, but case-sensitive under Linux. This has caused me headaches in the past.

> I still think the best avenue would be to first try straight os.rename, and if that fails (maybe only if target exists), the logic that is currently in shutil.move.

I agree. If os.rename() succeeds, there is no need to copy the file and then delete the original. If it fails because the two paths are on different devices, the existing code can safely be used without any further checks. I'm not sure if there are any other failure cases that would need to be handled, though.

--
nosy: +nvawda

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10684 ___
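The rename-first strategy being agreed on here can be sketched as follows. This is a simplified model of the idea, not shutil's actual code; real shutil.move also has to handle moving into a directory and rename-onto-existing semantics.

```python
import os
import shutil
import tempfile

def move(src, dst):
    # Try a plain rename first; on the same filesystem this also covers
    # the case-only rename that this bug is about.
    try:
        os.rename(src, dst)
    except OSError:
        # Fall back (e.g. EXDEV for a cross-device move) to the
        # copy-then-delete logic that shutil.move currently always uses.
        shutil.copy2(src, dst)
        os.remove(src)

# Usage: move a file within one directory (the rename path is taken).
d = tempfile.mkdtemp()
src = os.path.join(d, "report.txt")
with open(src, "w") as f:
    f.write("data")
dst = os.path.join(d, "archived.txt")
move(src, dst)
```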
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment:

> Interesting! If you are motivated, a further approach would be to expose the compressor and decompressor objects from the C extension, and write the file object in Python (as in Lib/gzip.py).

I had initially considered doing something like that, but I decided not to for reasons that I can't quite remember. However, in hindsight it seems like it would have been a better approach than doing everything in C. I'll start on it ASAP.

> > On a related note, the 'buffering' argument to __init__() is ignored, and I was wondering whether this should be documented explicitly?
>
> Yes, it should probably be deprecated if it's not useful anymore.

How would I go about doing this? Would it be sufficient to raise a DeprecationWarning if the argument is provided by the caller, and add a note to the docstring and documentation?

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
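The mechanics being asked about look roughly like this. The class is a hypothetical stand-in, not the real BZ2File; the sentinel object is one common way to distinguish "argument not passed" from any explicit value.

```python
import warnings

_sentinel = object()

class BZ2FileSketch:  # hypothetical stand-in for BZ2File
    def __init__(self, filename, mode="r", buffering=_sentinel):
        # Warn only when the caller actually passes the ignored argument.
        if buffering is not _sentinel:
            warnings.warn("the 'buffering' argument is ignored",
                          DeprecationWarning, stacklevel=2)
        self._filename = filename

# Passing buffering= triggers the warning; omitting it does not.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BZ2FileSketch("spam.bz2", buffering=1024)
    BZ2FileSketch("spam.bz2")
```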
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment: * I had initially considered doing something *like* that -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Here is a quick-and-dirty reimplementation of BZ2File in Python, on top of the existing C implementation of BZ2Compressor and BZ2Decompressor. There are a couple of issues with this code that need to be fixed:

* BZ2Decompressor doesn't signal when it reaches the EOS marker, so it doesn't seem possible to detect a premature end-of-file. This was easy in the C implementation, when using bzDecompress() directly.

* The read*() methods are implemented very inefficiently. Since they have to deal with the bytes objects returned by BZ2Decompressor.decompress(), a large read results in lots of allocations that weren't necessary in the C implementation.

I hope to resolve both of these issues (and do a general code cleanup) by writing a C extension module that provides a thin wrapper around bzCompress()/bzDecompress(), and reimplementing the module's public interface in Python on top of it. This should reduce the size of the code by close to half, and make it easier to read and maintain. I'm not sure when I'll be able to get around to it, though, so I thought I should post what I've done so far.

Other changes in the patch:

* write(), writelines() and seek() now return meaningful values instead of None, in line with the behaviour of other file-like objects.

* Fixed a typo in test_bz2's testReadChunk10() that caused the test to pass regardless of whether the data read was correct (self.assertEqual(text, text) -> self.assertEqual(text, self.TEXT)). This one might be worth committing now, since it isn't dependent on the rewrite.

--
Added file: http://bugs.python.org/file20521/bz2module-v2.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
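The first issue, detecting a truncated stream, is what an end-of-stream flag on the decompressor solves. The modern bz2.BZ2Decompressor does expose such an 'eof' attribute; this sketch shows how a file object would use it.

```python
import bz2

# A complete stream sets eof once the EOS marker is consumed; a
# truncated stream exhausts its input with eof still False, which is
# exactly the "premature end-of-file" condition described above.
compressed = bz2.compress(b"payload " * 100)
truncated = compressed[:-10]   # chop the end (EOS marker and CRC) off

d = bz2.BZ2Decompressor()
d.decompress(truncated)
premature_eof = not d.eof      # ran out of data before the EOS marker

d2 = bz2.BZ2Decompressor()
d2.decompress(compressed)
stream_complete = d2.eof
```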
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Here is a patch that rewrites BZ2File to implement the requested feature, and adds some tests using BytesIO objects. Some notes:

* Iteration and the read*() methods now use the same buffering machinery, so they can be mixed freely. The test for issue8397 has been updated accordingly.

* readlines() now respects its size argument. The existing implementation appears to effectively ignore it.

* writelines() no longer uses the (deprecated) old buffer protocol, and is now much simpler.

* Currently, calling next() on a writable BZ2File results in a rather unhelpful error message; the patched version checks that the file is readable before trying to actually read.

* The docstrings have been rewritten to clarify that all of the methods deal with bytes and not text strings.

One thing I was unsure of is how to handle exceptions that occur in BZ2File_dealloc(). Does the error status need to be cleared before it returns?

The documentation for the bz2 module appears to be quite out of date; I will upload a patch in the next day or so. On a related note, the 'buffering' argument to __init__() is ignored, and I was wondering whether this should be documented explicitly? The current documentation claims that it allows the caller to specify a buffer size, or request unbuffered I/O.

--
keywords: +patch
Added file: http://bugs.python.org/file20510/bz2module-v1.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment: Yes, see bz2module-v1.diff. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
[issue5863] bz2.BZ2File should accept other file-like objects.
Nadeem Vawda nadeem.va...@gmail.com added the comment: I have been working on a patch for this issue. I've implemented everything except for readline(), readlines() and the iterator protocol. In the existing implementation, the reading methods seem to interact weirdly - iternext() uses a readahead buffer, while none of the other methods do. Does anyone know if there's a reason for this? I was planning on having all the reading methods use a common buffer, which should allow free mixing of read methods and iteration. Looking at issue8397, I'm guessing it would be fine, but I wanted to double-check in case there's a quirk of the iteration protocol that I've overlooked, or something like that. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
[issue10903] ZipExtFile:_update_crc fails for CRC = 0x80000000
Nadeem Vawda nadeem.va...@gmail.com added the comment: I have been unable to reproduce this on either 3.2rc1 or 2.6. I used a Zip archive containing a single file with the data 'ba\n' (CRC 0xDDEAA107). -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10903 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment: Good idea; they look like more work to fix than the warnings so far. Aside from those two, it looks like test_cgi is all that's left. Just to clarify, did you manage to reproduce the test_cgi warning? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Sorry, scratch that - I misunderstood the semantics of SocketIO.close(). I hadn't realized that the underlying socket is supposed to stay open until it itself is also explicitly closed (as well as all SocketIO objects referring to it).

I've been able to get rid of 2 of the 7 warnings in test_urllib2net with the following change:

diff --git a/Lib/urllib/request.py b/Lib/urllib/request.py
--- a/Lib/urllib/request.py
+++ b/Lib/urllib/request.py
@@ -2151,7 +2151,9 @@
         conn = self.ftp.ntransfercmd(cmd)
         self.busy = 1
         # Pass back both a suitably decorated object and a retrieval length
-        return (addclosehook(conn[0].makefile('rb'), self.endtransfer), conn[1])
+        fp = addclosehook(conn[0].makefile('rb'), self.endtransfer)
+        conn[0].close()
+        return (fp, conn[1])
     def endtransfer(self):
         if not self.busy:
             return

It seems that most of the remaining warnings are the result of FTPHandler.ftp_open() not doing anything to close the ftpwrapper objects it creates. I haven't been able to figure out exactly what the correct place to do this is, though.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue4681] mmap offset should be off_t instead of ssize_t, and size calculation needs corrected
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4681 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Looking at the warnings from test_urllib2net, it seems that they all originate in the FTP tests:

* OtherNetworkTests.test_ftp()
* TimeoutTest.test_ftp_basic()
* TimeoutTest.test_ftp_default_timeout()

Most of these leaks seem to stem from the fact that socket.SocketIO.close() doesn't behave as documented. According to its docstring, it is meant to decrement the underlying socket's refcount, and close it if the refcount drops to zero. However, to do this job it calls socket._decref_socketios(), which is defined as follows:

    def _decref_socketios(self):
        if self._io_refs > 0:
            self._io_refs -= 1
        if self._closed:
            self.close()

Clearly, this doesn't do what the docstring describes. Changing the second conditional from "if self._closed:" to "if self._io_refs <= 0:" disposes of all but one of the ResourceWarnings, but also breaks 8 tests in test_socket. It seems that the tests expect a socket to remain open after all referring SocketIO objects have been closed, which contradicts the docstring for SocketIO.close(). I suppose I should open a separate issue for this.

The remaining warning occurs in test_ftp() when retrieving a non-existent file; I haven't yet managed to figure out what is causing it.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Fix attached for test_imaplib. Most of the warnings were simply due to reap_server() not closing the server object correctly. The remaining warning was due to a genuine leak in imaplib.IMAP4.__init__() - if an exception is raised after the connection is opened, the socket is not closed.

--
Added file: http://bugs.python.org/file20283/test_imaplib.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Awesome. That just leaves test_urllibnet, test_urllib2net, and test_cgi. I'm hoping to post patches for the first two tomorrow.

About test_cgi, I've fiddled around with it a bit more. The leak manifests itself with any set of tests including test_cgi and test___all__, for example:

    ☿ ./python -Wd -E -bb -m test.regrtest test___all__ test_cgi
    [1/2] test___all__
    [2/2] test_cgi
    All 2 tests OK.
    sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='/dev/null' encoding='UTF-8'>

... but not with any other 2-test combination. This led me to think it was something specific to test___all__, but it does also come up when running all tests *except* test___all__. I'm guessing there's something somewhere that's causing the cgi module to be garbage-collected between the tests finishing and the process terminating. Without some familiarity with unittest's internals, I can't say anything more, though.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment: Have you tried my patch (resourcewarning-fixes-3.diff)? It fixes the warning for me without breaking anything. I was just worried that the warning was something triggered by my specific system configuration when you said that you couldn't reproduce it. I was trying to see if I could find a reason why it might appear on one system but not another. If you *have* been able to reproduce it, then I needn't look any further :P -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment: r87710 introduces a ResourceWarning in test_threading. Fix attached. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Changes by Nadeem Vawda nadeem.va...@gmail.com: Added file: http://bugs.python.org/file20255/test_threading.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue6643] Throw away more radioactive locks that could be held across a fork in threading.py
Nadeem Vawda nadeem.va...@gmail.com added the comment:

r87710 introduces an AttributeError in test_thread's TestForkInThread test case. If os.fork() is called from a thread created by the _thread module, threading._after_fork() will get a _DummyThread (with no _block attribute) as the current thread. I've attached a patch that checks whether the thread has a _block attribute before trying to reinitialize it.

--
nosy: +nvawda
Added file: http://bugs.python.org/file20259/test_thread.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6643 ___
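The guard described above can be sketched like this. The classes and function names below are stand-ins for illustration, not the actual threading.py internals or the patch's literal code.

```python
import threading

def reset_after_fork(thread):
    # Only reinitialize _block if the thread object actually has one;
    # dummy threads (created for threads that the threading module did
    # not start) do not, which is what caused the AttributeError.
    if hasattr(thread, "_block"):
        thread._block.__init__()   # fresh, unlocked condition
        return True
    return False

class FakeThread:
    def __init__(self):
        self._block = threading.Condition()

class FakeDummyThread:   # like _DummyThread: no _block attribute
    pass

reset_real = reset_after_fork(FakeThread())      # reinitialized
reset_dummy = reset_after_fork(FakeDummyThread())  # safely skipped
```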
[issue8052] subprocess close_fds behavior should only close open fds
Nadeem Vawda nadeem.va...@gmail.com added the comment:

According to POSIX [1], if a multi-threaded program calls fork(), the child process may only use async-signal-safe system calls between fork() and exec*(). readdir() is not required to be async-signal-safe [2], so reading /proc/self/fd in the child process is undefined behaviour. This is a pity, since it would IMO be a much cleaner solution than the current code.

Of course, procfs isn't standard in any case; would it be necessary to have a fallback for systems without it? Or do all *nix systems that we care about provide it? In the former case, I suppose it might be possible to use the procfs on systems where readdir() is known to be safe, and use the fallback where it isn't. But such special cases would complicate the code rather than simplifying it...

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/fork.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html

--
nosy: +nvawda

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8052 ___
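The /proc-based enumeration under discussion looks like this in Python. This is a sketch of the idea only: it returns the open descriptors of the current process on Linux, or None where /proc is unavailable (the case that would need a fallback). Note the caveat above: doing this between fork() and exec*() relies on readdir() being async-signal-safe, which POSIX does not guarantee.

```python
import os

def open_fds():
    # List the process's open file descriptors via procfs (Linux).
    fd_dir = "/proc/self/fd"
    if not os.path.isdir(fd_dir):
        return None          # no procfs: caller must use a fallback
    # Note: the listing itself briefly opens one extra descriptor for
    # reading the directory.
    return sorted(int(name) for name in os.listdir(fd_dir))

fds = open_fds()
```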
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment: r87736 introduces another DeprecationWarning; this time in test_time (line 150; s/assertEquals/assertEqual/). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

The fix for test_normalization was committed as r87441. As for test_cgi, I still seem to get the leak (also on Linux; Ubuntu 10.10 64-bit). I'll poke around with it some more tomorrow.

In addition to the ResourceWarnings, some of the tests have been raising DeprecationWarnings:

* test_unittest
* test_array
* test_httplib (trivial fix - replace assertEquals with assertEqual)

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue5863] bz2.BZ2File should accept other file-like objects.
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5863 ___
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

test_cgi causes a strange filehandle leak that only causes a warning when regrtest terminates, and for some reason doesn't show up if you run just test_cgi by itself. I've attached a patch that closes the filehandle.

--
nosy: +lukasz.langa
Added file: http://bugs.python.org/file19812/resourcewarning-fixes-3.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue10512] regrtest ResourceWarning - unclosed socket
New submission from Nadeem Vawda nadeem.va...@gmail.com:

When running make test on Python3, test_socket reports a number of ResourceWarnings due to unclosed sockets. Attached is a patch that changes the relevant tests so that they close all the created sockets. test_multiprocessing and test_xmlrpc have a similar problem; I will upload patches for these shortly.

--
components: Tests
files: test_socket-resourcewarning-fix.diff
keywords: patch
messages: 122209
nosy: nvawda
priority: normal
severity: normal
status: open
title: regrtest ResourceWarning - unclosed socket
type: behavior
versions: Python 3.2
Added file: http://bugs.python.org/file19783/test_socket-resourcewarning-fix.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
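The pattern used to fix tests like these can be sketched as follows. This is an illustration of the approach (not the patch's literal code): every socket a test opens is registered with addCleanup(), so it is closed even if the test fails partway, and no ResourceWarning is emitted.

```python
import socket
import unittest

class SocketCleanupTest(unittest.TestCase):
    def make_socket(self):
        # Register the socket for closing as soon as it is created.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.addCleanup(sock.close)
        return sock

    def test_socket_open(self):
        sock = self.make_socket()
        self.assertGreaterEqual(sock.fileno(), 0)

# Run the test case in-process to show the cleanup fires without errors.
result = unittest.TestResult()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SocketCleanupTest)
suite.run(result)
```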
[issue10512] regrtest ResourceWarning - unclosed sockets and files
Nadeem Vawda nadeem.va...@gmail.com added the comment:

Attached is a patch that fixes the warnings in test_xmlrpc, along with some other file- and socket-related warnings in test_normalization, test_timeout and test_tk that only show up when regrtest is run with -uall.

The warning in test_timeout could be fixed with a smaller modification of the test code, but I thought it was better to have two separate attributes for the two sockets. It seemed misleading to have _some_ of the setup/teardown code in setUp() and tearDown(), but then be doing more in the actual tests.

The warnings in test_multiprocessing seem to be due to leaks in the actual multiprocessing module, not in the test code, so that might be a bit more work to fix.

--
title: regrtest ResourceWarning - unclosed socket -> regrtest ResourceWarning - unclosed sockets and files
Added file: http://bugs.python.org/file19789/resourcewarning-fixes-2.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10512 ___
[issue1508475] transparent gzip compression in urllib
Changes by Nadeem Vawda nadeem.va...@gmail.com: -- nosy: +nvawda ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1508475 ___
[issue10276] zlib crc32/adler32 buffer length truncation (64-bit)
New submission from Nadeem Vawda nadeem.va...@gmail.com:

zlib.crc32() and zlib.adler32() in Modules/zlibmodule.c don't handle buffers of >= 4GB correctly. The length of a Py_buffer is of type Py_ssize_t, while the C zlib functions take the length as an unsigned integer. This means that on a 64-bit build, the buffer length gets silently truncated to 32 bits, which results in incorrect output for large inputs. Attached is a patch that fixes this by computing the checksum incrementally, using small-enough chunks of the buffer.

A better fix might be to have Modules/zlib/crc32.c use 64-bit lengths. I tried this, but I couldn't get it to work. It seems that if the system already has zlib installed, Python will link against the existing version instead of compiling its own.

Testing this might be a bit tricky. Allocating a 4+GB regular buffer isn't practical. Using a memory-mapped file would work, but I'm not sure having a unit test create a multi-gigabyte file is a great thing to do.

--
components: Library (Lib)
files: zlib-checksum-truncation.diff
keywords: patch
messages: 120114
nosy: nvawda
priority: normal
severity: normal
status: open
title: zlib crc32/adler32 buffer length truncation (64-bit)
type: behavior
versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3
Added file: http://bugs.python.org/file19453/zlib-checksum-truncation.diff

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10276 ___
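The chunked-checksum fix described above works like this in Python terms. This is a sketch of the idea, not the C patch itself: the buffer is fed to zlib.crc32() in slices whose length fits in an unsigned int, carrying the running checksum through as the second argument. The real patch would use UINT_MAX-sized chunks; a tiny chunk size here demonstrates that the incremental result equals a single whole-buffer call.

```python
import zlib

def crc32_chunked(data, chunk_size=1 << 16):
    # Compute the CRC incrementally over fixed-size slices; the running
    # checksum is threaded through as crc32()'s second argument.
    crc = 0
    for start in range(0, len(data), chunk_size):
        crc = zlib.crc32(data[start:start + chunk_size], crc)
    return crc

payload = b"abcdefgh" * 10000
```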
[issue1519] async_chat.__init__() parameters
Nadeem Vawda added the comment:

Thanks for pointing that out; I've uploaded a second patch that changes async_chat.__init__() to use 'sock' instead of 'conn'. This change shouldn't affect anything either, since the argument is simply passed to asyncore.dispatcher.__init__().

Added file: http://bugs.python.org/file8834/asynchat.2.patch

__ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1519 __

Index: Lib/asynchat.py
===================================================================
--- Lib/asynchat.py (revision 59215)
+++ Lib/asynchat.py (working copy)
@@ -59,11 +59,11 @@
     ac_in_buffer_size = 4096
     ac_out_buffer_size = 4096
 
-    def __init__ (self, conn=None):
+    def __init__ (self, sock=None, map=None):
         self.ac_in_buffer = ''
         self.ac_out_buffer = ''
         self.producer_fifo = fifo()
-        asyncore.dispatcher.__init__ (self, conn)
+        asyncore.dispatcher.__init__ (self, sock, map)
 
     def collect_incoming_data(self, data):
         raise NotImplementedError, "must be implemented in subclass"
[issue1519] async_chat.__init__() parameters
New submission from Nadeem Vawda:

The __init__() function for asynchat.async_chat doesn't allow the caller to specify a channel map. I thought it would make sense to add an optional 'map' parameter, for consistency with asyncore.dispatcher. If the parameter is not specified, asyncore.dispatcher.__init__() will default to using the global map, which is the current behaviour.

--
components: Library (Lib)
files: asynchat.patch
messages: 57930
nosy: nvawda
severity: minor
status: open
title: async_chat.__init__() parameters
type: behavior
versions: Python 2.5
Added file: http://bugs.python.org/file8822/asynchat.patch

__ Tracker [EMAIL PROTECTED] http://bugs.python.org/issue1519 __