On Tue, Oct 09, 2012 at 04:54:41PM -0400, Thor Lancelot Simon wrote:
> On Tue, Oct 09, 2012 at 10:15:06PM +0200, Manuel Bouyer wrote:
> >
> > Now, iostat -x shows that sequential read of a large file (either with cat
> > or dd bs=1m) is done in 32k transfers at the disk level. So I guess
> > something is not working properly here ...
>
> I saw that too, though I managed to get 64K transfers. I have been meaning
> to test with mmap, since that is more like the code path through the upper
> layers that produces the attempt to page in an entire executable.
>
> I've looked all through every layer I can identify and I do not see where
> the seemingly still remaining 64K restriction comes from. I thought it
> was FFS, but in fact *every vestige* of the maxcontig or "maximum extent
> size" use seems to have been excised from our FFS code. So it's a mystery
> to me.
There is still a reference to MAXPHYS in ufs_bmaparray(), which, if I got it
right, will limit the *runp returned by VOP_BMAP(), which in turn limits
iobytes in genfs_do_io(). But that doesn't explain why I see 32k transfers.
And, to make things worse, I see 64k if I start the big-file read with debug
enabled, so it looks timing-dependent :(

--
Manuel Bouyer <bou...@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--