Thanks a lot for all the responses to my question.
To be honest, I cannot fully follow everything you are discussing, but I
am trying to understand it better -
Below is part of the log from debugging my driver code. It prints the start
sector and the sector count of every read/write request in the block layer's
request queue.
It seems that many of the requested sectors are contiguous, and I am curious
why the block layer does not merge these contiguous sectors into a single
request. For example, if the block layer generated 'start_sect: 48776, nsect:
64, rw: r' instead of the eight requests below, I would expect better
performance. (A sketch of the relevant driver code follows the log.)
...
start_sect: 48776, nsect: 8, rw: r
start_sect: 48784, nsect: 8, rw: r
start_sect: 48792, nsect: 8, rw: r
start_sect: 48800, nsect: 8, rw: r
start_sect: 48808, nsect: 8, rw: r
start_sect: 48816, nsect: 8, rw: r
start_sect: 48824, nsect: 8, rw: r
start_sect: 48832, nsect: 8, rw: r
...
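
For reference, the output above comes from a debug print along the lines of
the sketch below (simplified, not my exact code; dump_req and
setup_queue_limits are made-up names for illustration). I have also included
the queue-limit helpers that, as I understand it, bound how large a merged
request can get; the helper names vary between kernel versions (older kernels
use blk_queue_max_sectors() and blk_queue_max_phys_segments()):

#include <linux/blkdev.h>

/* Print one request in the same format as the log above. */
static void dump_req(struct request *rq)
{
	printk(KERN_DEBUG "start_sect: %llu, nsect: %u, rw: %c\n",
	       (unsigned long long)blk_rq_pos(rq),  /* start sector */
	       blk_rq_sectors(rq),                  /* sector count */
	       rq_data_dir(rq) == READ ? 'r' : 'w');
}

/* The block layer will not merge a request beyond these limits, so
 * small values here would force small requests. The numbers below
 * are illustrative only, not what my driver actually sets. */
static void setup_queue_limits(struct request_queue *q)
{
	blk_queue_max_hw_sectors(q, 128);  /* at most 64 KB per request */
	blk_queue_max_segments(q, 32);     /* s/g segments per request */
}

If these limits are generous enough, my understanding is that the elevator
should be able to back-merge adjacent 8-sector requests like the ones above,
which is why this behaviour puzzles me.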

Thanks.

Regards,
Yunpeng

>-----Original Message-----
>From: James Bottomley [mailto:james.bottom...@suse.de]
>Sent: April 13, 2010 3:58
>To: Martin K. Petersen
>Cc: Robert Hancock; Gao, Yunpeng; linux-...@vger.kernel.org;
>linux-mmc@vger.kernel.org
>Subject: Re: How to make kernel block layer generate bigger request in the
>request queue?
>
>On Mon, 2010-04-12 at 14:26 -0400, Martin K. Petersen wrote:
>> >>>>> "James" == James Bottomley <james.bottom...@suse.de> writes:
>>
>> >> Correct.  It's quite unlikely for pages to be contiguous so this is
>> >> the best we can do.
>>
>> James> Actually, average servers do about 50% contiguous on average
>> James> since we changed the mm layer to allocate in ascending physical
>> James> page order ...  this figure is highly sensitive to mm changes
>> James> though, and can vary from release to release.
>>
>> Interesting.  When did this happen?
>
>The initial work was done by Bill Irwin, years ago.  For a while it was
>good, but then after Mel Gorman did the page reclaim code, we became
>highly sensitive to the reclaim algorithms for this, so it's fluctuated
>a bit ever since.  Even with all this, the efficiency is highly
>dependent on the amount of free memory: once the machine starts running
>to exhaustion (excluding page cache, since that usually allocates
>correctly to begin with) the contiguity really drops.
>
>> Last time I gathered data on segment merge efficiency (1 year+ ago) I
>> found that adjacent pages were quite rare for a normal fs type workload.
>> Certainly not in the 50% ballpark.  I'll take another look when I have a
>> moment...
>
>I got 60% with an I/O bound test with about a gigabyte of free memory a
>while ago (2.6.31, I think).  Even for machines approaching memory
>starvation, 30% seems achievable.
>
>James
>
