On Tue, Aug 13, 2013 at 10:51:37AM -0700, Linus Torvalds wrote:
> I realize that benchmarking cares, and yes, I also realize that some
> benchmarks actually want to reboot the machine between some runs just
> to get repeatability, but if you're benchmarking a 16TB machine I'm
> guessing any serious benchmark that actually uses that much memory is
> going to take many hours to a few days to run anyway? Having some way
> to wait until the memory is all done (which might even be just a silly
> shell script that does "ps" and waits for the kernel threads to all go
> away) isn't going to kill the benchmark - and the benchmark itself
> will then not have to worry about hitting the "oops, I need to
> initialize 2GB of RAM now because I hit an uninitialized page".

I am not overly concerned with the cost of having to set up a page struct on first touch, but what I need to avoid is adding any permanent cost to page faults on a system that is already "primed".
> Ok, so I don't know all the issues, and in many ways I don't even
> really care. You could do it other ways, I don't think this is a big
> deal. The part I hate is the runtime hook into the core MM page
> allocation code, so I'm just throwing out any random thing that comes
> to my mind that could be used to avoid that part.

The only mm structure we are adding to is a new flag in page->flags. That didn't seem like too much. I had hoped to restrict the core mm changes to check_new_page and free_pages_check, but I haven't gotten there yet.

Not putting uninitialized pages on the LRU would work, but then I would be concerned about any calculations based on totalpages. I might be too paranoid there, but having that be incorrect until after the system is booted worries me.

Nate