William Tisäter added the comment:
I'm not sure this is a suitable feature for `make_archive()`; it would only
introduce a more expensive and ugly lookup. Using this method with a
pre-defined filename that already includes the extension must be rare. If you
really want this behaviour, I would prefer h
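For context, `shutil.make_archive()` takes a base name *without* the extension and appends the format's suffix itself, which is why passing a filename that already ends in the extension is awkward. A minimal sketch (the directory names here are just for the example):

```python
import os
import shutil
import tempfile

# Build a small directory tree to archive.
src = tempfile.mkdtemp()
with open(os.path.join(src, "data.txt"), "w") as f:
    f.write("hello\n")

dest = tempfile.mkdtemp()
# make_archive() appends ".tar.gz" for the "gztar" format, so passing
# a base name that already ends in ".tar.gz" would produce
# "name.tar.gz.tar.gz" -- the double-extension annoyance under discussion.
archive = shutil.make_archive(os.path.join(dest, "backup"), "gztar", root_dir=src)
print(archive)  # path ending in "backup.tar.gz"
```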
William Tisäter added the comment:
This turned out to be a simple fix for an annoying and time-consuming error. I
patched it as discussed earlier and decided to leave the filename out.
--
components: +Windows -Library (Lib)
keywords: +patch
nosy: +tiwilliam
versions: +Python 3.5
Added file: http
William Tisäter added the comment:
That makes sense.
I went ahead and updated `Lib/gzip.py` to use `io.DEFAULT_BUFFER_SIZE` instead.
This will change the existing behaviour in two ways:
* Start using 1024 * 8 as the buffer size instead of 1024.
* Add one more kwarg (`buffer_size`) to `GzipFile`
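The `buffer_size` kwarg above is the proposed API, not something `GzipFile` exposes today, so this sketch emulates the patched read path with a plain wrapper function that reads in `io.DEFAULT_BUFFER_SIZE` (1024 * 8 on CPython) chunks:

```python
import gzip
import io
import os
import tempfile

def read_gzip(path, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """Decompress `path` by reading in buffer_size chunks, mirroring the
    chunked read the patch introduces (buffer_size is the proposed kwarg)."""
    chunks = []
    with gzip.GzipFile(path) as f:
        while True:
            chunk = f.read(buffer_size)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)

# Round-trip a payload larger than one buffer to exercise the chunked loop.
payload = b"x" * 50000
fd, path = tempfile.mkstemp(suffix=".gz")
os.close(fd)
with gzip.GzipFile(path, "wb") as f:
    f.write(payload)
print(read_gzip(path) == payload)  # True
```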
Changes by William Tisäter :
--
nosy: +tiwilliam
___
Python tracker
<http://bugs.python.org/issue20050>
___
William Tisäter added the comment:
I played around with different file and chunk sizes using the attached
benchmark script.
After several test runs I think 1024 * 16 would be the biggest win without
losing too many μs on small seeks. You can find my benchmark output here:
https://gist.github.com
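The original script and numbers are in the gist; as a rough stand-in, the comparison can be sketched by timing reads through a compressed file at different chunk sizes (the file size, repeat count, and chunk sizes here are illustrative, not the original benchmark parameters):

```python
import gzip
import os
import tempfile
import timeit

# Create a modest compressed test file.
fd, path = tempfile.mkstemp(suffix=".gz")
os.close(fd)
with gzip.GzipFile(path, "wb") as f:
    f.write(os.urandom(1024 * 256))

def read_in_chunks(chunk_size):
    # Read the whole stream in fixed-size chunks, the operation whose
    # per-chunk overhead the buffer-size change is meant to reduce.
    with gzip.GzipFile(path) as f:
        while f.read(chunk_size):
            pass

for chunk_size in (1024, 1024 * 8, 1024 * 16):
    t = timeit.timeit(lambda: read_in_chunks(chunk_size), number=20)
    print(f"{chunk_size:>6} bytes: {t:.4f}s")
```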