Gurmeet Singh added the comment:
Thanks for letting me know about the optimization.
I trusted you that the system call is made only once, though I did look at the code
to see whether the size of the read-in buffer is passed to the C routine. I should
apologize for raising this issue, since it is incorrect.
Still, out of curiosity, you may be interested in the findings from the last
experiment I did to understand this issue:
1 >>> import io
2 >>> fl = io.FileIO('c:/temp9/Capability/Analyzing Data.mp4', 'rb')
3 >>> barr = bytearray(70934549)
4 >>> barr2= bytearray(70934549)
5 >>> id(barr)
29140440
6 >>> id(barr2)
26433560
7 >>> fl.readinto(barr)
70934549
8 >>> barr2 = barr[:]
9 >>> fl.close()
10 >>> fl = io.FileIO('c:/temp9/Capability/Analyzing Data.mp4', 'rb')
11 >>> barrt = bytearray(1)
12 >>> id(barrt)
34022512
13 >>> fl.readinto(barrt)
1
14 >>> fl.close()
>>>
The time taken by line 7 was much greater than that of line 13. It was also greater
than that of line 8 (though not by as large a margin as with line 11). But I cannot
say for sure whether the time for line 13 plus line 8 is equal to or less than that
of line 7; it looks smaller, but more precise measurement is needed to say anything
further.
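For what it is worth, here is a minimal sketch (my own, not part of the original
report) of how the comparison could be timed more precisely with time.perf_counter;
the file path and buffer size below are placeholders:

    import io
    import time

    PATH = 'c:/temp9/Capability/Analyzing Data.mp4'  # placeholder path
    SIZE = 70934549                                  # placeholder size

    # Time a single large readinto (corresponds to line 7).
    buf = bytearray(SIZE)
    with io.FileIO(PATH, 'rb') as fl:
        t0 = time.perf_counter()
        fl.readinto(buf)
        t_read = time.perf_counter() - t0

    # Time the bytearray copy (corresponds to line 8).
    t0 = time.perf_counter()
    copy = buf[:]
    t_copy = time.perf_counter() - t0

    # Time a one-byte readinto (corresponds to line 13).
    small = bytearray(1)
    with io.FileIO(PATH, 'rb') as fl:
        t0 = time.perf_counter()
        fl.readinto(small)
        t_small = time.perf_counter() - t0

    print(t_read, t_copy, t_small)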
I tried to reason about the situation as follows (for this I looked at the
hyperlink you gave). The underlying system call takes a size argument, so I guess
that passing a large value makes the C code ask the disk subsystem to read that
much data, and the call takes longer simply because disk access is slow.
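A minimal sketch of that idea (mine, not the tracker's code): the buffer size
passed down ends up as the count argument of the OS-level read, so a larger buffer
means one larger, slower disk read. The path is again a placeholder:

    import os

    PATH = 'c:/temp9/Capability/Analyzing Data.mp4'  # placeholder path

    fd = os.open(PATH, os.O_RDONLY | getattr(os, 'O_BINARY', 0))
    try:
        # os.read(fd, n) asks the OS for up to n bytes in a single call;
        # the cost grows with n because more data has to come off the disk.
        small_chunk = os.read(fd, 1)          # cheap: one byte
        os.lseek(fd, 0, os.SEEK_SET)
        big_chunk = os.read(fd, 70934549)     # expensive: the whole file
    finally:
        os.close(fd)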
Thanks for your time. Sorry for the incorrect issue.
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue17440>
_______________________________________