On Mon, Feb 14, 2005 at 06:36:43PM +0000, Hugh Dickins wrote:
> On Mon, 14 Feb 2005, Andrea Arcangeli wrote:
> > > By the way, while we're talking of remove_exclusive_swap_page:
> > > a more functional issue I sometimes wonder about, why don't we
> > > remove_exclusive_swap_page on write fault?  Keeping the swap slot
> > > is valuable if read fault, but once the page is dirtied, wouldn't
> > > it usually be better to free that slot and allocate another later?
> > 
> > Avoiding swap fragmentation is one reason to leave it allocated. So you
> > can swapin/swapout/swapin/swapout always in the same place on disk as
> > long as there's plenty of swap still available. I'm not sure how much
> > speedup this provides, but certainly it makes sense.
> 
> I rather thought it would tend to increase swap fragmentation: that
> the next time (if) this page has to be written out to swap, the disk
> has to seek back to some ancient position to write this page, when
> the rest of the cluster being written is more likely to come from a
> recently allocated block of contiguous swap pages (though if many of
> them are being rewritten rather than newly allocated, they'll be
> scattered all over the disk, with no contiguity at all).
> 
> Of course, freeing as soon as dirty does leave a hole behind, which
> tends towards swap fragmentation: but I thought the swap allocator
> tried for contiguous clusters before it fell back on isolated pages
> (I haven't checked, and akpm has changes to swap allocation in -mm).
> 
> Hmm, I think you're thinking of the overall fragmentation of swap,
> and are correct about that; whereas I'm saying "fragmentation"
> when what I'm really concerned about is increased seeking.

Swapouts aren't the problem. The swapins with physical readahead are the
ones that benefit from the reduced overall fragmentation. Or at least
this was the case in 2.4; you're right that something might be different
now that we no longer follow a virtual-address-space order on swapout
(but there is probably still some localization effect during the
->nopage faults).
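
A toy sketch may make the allocation behaviour under discussion concrete.
This is not the real kernel scan_swap_map() logic; the cluster size, the
scan order, and the flat swap_map list are all simplified assumptions,
written in userspace Python rather than kernel C:

```python
# Toy model of a swap allocator that prefers contiguous clusters and
# only falls back to isolated free slots.  swap_map[i] == 0 means
# slot i is free; a simplification, not the real scan_swap_map().

CLUSTER = 4  # hypothetical stand-in for SWAPFILE_CLUSTER


def alloc_slot(swap_map, cursor=0):
    """Return an allocated slot index, or -1 if swap is full."""
    n = len(swap_map)
    # First pass: look for a fully free cluster past the cursor and
    # take its first slot, keeping consecutive allocations contiguous.
    for base in range(cursor, n - CLUSTER + 1):
        if all(swap_map[i] == 0 for i in range(base, base + CLUSTER)):
            swap_map[base] = 1
            return base
    # Fallback: take any free slot, even an isolated hole left
    # behind by an early free.
    for i in range(n):
        if swap_map[i] == 0:
            swap_map[i] = 1
            return i
    return -1


swap_map = [0] * 12
for _ in range(4):
    alloc_slot(swap_map)       # slots 0..3, nicely contiguous

swap_map[1] = 0                # freed on write fault: hole at slot 1
print(alloc_slot(swap_map))    # 4 -- the hole is skipped while a
                               # clean cluster is still available
```

With plenty of swap free, the allocator keeps writing fresh clusters and
the early-freed hole at slot 1 just fragments the map; only when no free
cluster remains does the fallback reuse it, matching the "contiguous
clusters before isolated pages" behaviour described above.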