William Tisäter added the comment:

I played around with different file and chunk sizes using the attached benchmark 
script.

After several test runs I think 1024 * 16 is the best trade-off: the biggest win 
without losing too many μs on small seeks. You can find my benchmark output here: 
https://gist.github.com/tiwilliam/11273483

My test data was generated with the following commands:

dd if=/dev/random of=10K bs=1024 count=10
dd if=/dev/random of=1M bs=1024 count=1000
dd if=/dev/random of=5M bs=1024 count=5000
dd if=/dev/random of=100M bs=1024 count=100000
dd if=/dev/random of=1000M bs=1024 count=1000000
gzip 10K 1M 5M 100M 1000M
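For reference, a minimal sketch of the kind of seek timing the benchmark performs. 
The exact numbers in the linked gist come from the attached benchmark_20962.py; 
this sketch only illustrates the approach, and the file name and offsets are 
placeholders matching the dd/gzip commands above.

```python
# Minimal sketch: time how long GzipFile.seek() takes for a given offset.
# gzip has no random access, so seek() decompresses forward from the start;
# a larger internal read chunk size should make long forward seeks cheaper.
import gzip
import time

def time_seek(path, offset, repeat=5):
    """Return the best-of-`repeat` wall-clock time to seek to `offset`."""
    best = float("inf")
    for _ in range(repeat):
        with gzip.open(path, "rb") as f:
            start = time.perf_counter()
            f.seek(offset)
            best = min(best, time.perf_counter() - start)
    return best

# Example (assumes the files generated above, e.g. "100M.gz", exist):
# print(time_seek("100M.gz", 50 * 1024 * 1024))
```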

----------
nosy: +tiwilliam
Added file: http://bugs.python.org/file35029/benchmark_20962.py

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20962>
_______________________________________