On Mon, Sep 25, 2017 at 01:54:42PM +0800, Huang, Ying wrote:
> Hi, Minchan,
> 
> Minchan Kim <[email protected]> writes:
> 
> > Hi Huang,
> >
> > On Thu, Sep 21, 2017 at 09:33:10AM +0800, Huang, Ying wrote:
> >> From: Huang Ying <[email protected]>
> 
> [snip]
> 
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 9c4bdddd80c2..e62c8e2e34ef 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -434,6 +434,26 @@ config THP_SWAP
> >>  
> >>      For selection by architectures with reasonable THP sizes.
> >>  
> >> +config VMA_SWAP_READAHEAD
> >> +  bool "VMA based swap readahead"
> >> +  depends on SWAP
> >> +  default y
> >> +  help
> >> +	  VMA based swap readahead detects the page access pattern in a
> >> +	  VMA and adjusts the swap readahead window for pages in the
> >> +	  VMA accordingly.  It works better for more complex workloads
> >> +	  compared with the original physical swap readahead.
> >> +
> >> +    It can be controlled via the following sysfs interface,
> >> +
> >> +      /sys/kernel/mm/swap/vma_ra_enabled
> >> +      /sys/kernel/mm/swap/vma_ra_max_order
> >
> > It might be better to discuss this in another thread, but since you mention
> > the new interface here again, I will discuss it here.
> >
> > We are creating a new ABI here, so I want to ask a question here.
> >
> > Did you consider using /sys/block/xxx/queue/read_ahead_kb for the
> > swap readahead knob? Reusing such a common/consistent knob would be better
> > than adding a new, separate knob.
> 
> The problem is that the configuration of VMA based swap readahead is
> global instead of block device specific.  Because it works in a virtual
> way, that is, swap blocks on different block devices may be read ahead
> together, it is a little hard to use a block device specific
> configuration.

Fair enough. page-cluster should have been like that from the beginning,
instead of vma_ra_max_order.

One more question: Do we need a separate vma_ra_enabled?

Can't we disable it via echo 0 > /sys/kernel/mm/swap/vma_ra_max_order
like page-cluster?
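
To illustrate (a rough sketch only, not code from the patch; the helper
name below is made up): with a single knob where 0 means "disabled", in
the same spirit as page-cluster, the enable check could be derived from
the order value alone:

#include <stdbool.h>	/* in-kernel this would come from <linux/types.h> */

/* Sketch: single sysfs knob; writing 0 turns VMA readahead off. */
static unsigned int vma_ra_max_order;

/* Hypothetical helper: true when VMA based readahead should be used. */
static bool vma_readahead_enabled(void)
{
	return vma_ra_max_order != 0;
}

Then the separate vma_ra_enabled file would not be needed.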
