On Thu, 31 Jan 2008 12:32:24 +0100 Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Thu, Jan 31, 2008 at 05:52:09AM -0500, Rik van Riel wrote:
> > Don't malloc() and free() hopelessly fragment memory
> > over time, ensuring that little related data can be
> > found inside each 1MB chunk if the process is large
> > enough? (say, firefox)
>
> Even if they do (I don't know if it's true or not) it does not really
> matter because on modern hard disks/systems it does not cost less to
> transfer 1MB versus 4K. The actual threshold seems to be rising in
> fact.

That is definitely true.

> The only drawback is that the swap might be full sooner, but
> I would actually consider this a feature because it would likely
> end many prolonged oom death dances much sooner.

A second drawback would be that we evict more potentially useful data
every time we swap in a whole lot of extra data around the little bit
of data we need.

On the other hand, swapping should be the exception on many of today's
workloads.

Maybe we can measure how many of the swapped-in pages end up being used,
and how many are evicted again without being used, and automatically
change our chunk size based on those statistics?

I would expect most desktop systems to end up with large chunks,
because they rarely swap.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/