Hi Joonsoo,

Sorry about the delay...
On Mon, 2013-12-23 at 11:11 +0900, Joonsoo Kim wrote:
> On Mon, Dec 23, 2013 at 09:44:38AM +0900, Joonsoo Kim wrote:
> > On Fri, Dec 20, 2013 at 10:48:17PM -0800, Davidlohr Bueso wrote:
> > > On Fri, 2013-12-20 at 14:01 +0000, Mel Gorman wrote:
> > > > On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > > > > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim
> > > > > <iamjoonsoo....@lge.com> wrote:
> > > > >
> > > > > > If parallel faults occur, we can fail to allocate a hugepage,
> > > > > > because many threads dequeue a hugepage to handle a fault on the
> > > > > > same address. This makes the reserved pool run short just for a
> > > > > > little while, and it causes a faulting thread that should be able
> > > > > > to get a hugepage to receive a SIGBUS signal instead.
> > > > > >
> > > > > > To solve this problem, we already have a nice solution: the
> > > > > > hugetlb_instantiation_mutex. It blocks other threads from diving
> > > > > > into the fault handler. This solves the problem cleanly, but it
> > > > > > introduces a performance penalty, because it serializes all fault
> > > > > > handling.
> > > > > >
> > > > > > Now, I try to remove the hugetlb_instantiation_mutex to get rid
> > > > > > of that performance degradation.
> > > > >
> > > > > So the whole point of the patch is to improve performance, but the
> > > > > changelog doesn't include any performance measurements!
> > > >
> > > > I don't really deal with hugetlbfs any more and I have not examined
> > > > this series, but I remember why I never really cared about this
> > > > mutex. It wrecks fault scalability, but AFAIK fault scalability
> > > > almost never mattered for workloads using hugetlbfs. The most common
> > > > user of hugetlbfs by far is sysv shared memory. The memory is faulted
> > > > early in the lifetime of the workload and after that it does not
> > > > matter. At worst, it hurts application startup time, but that is
> > > > still poor motivation for putting a lot of work into removing the
> > > > mutex.
> > >
> > > Yep, important hugepage workloads initially pound heavily on this
> > > lock, then it naturally decreases.
> > >
> > > > Microbenchmarks will be able to trigger problems in this area, but
> > > > it'd be important to check if any workload that matters is actually
> > > > hitting that problem.
> > >
> > > I was thinking of writing one to actually get some numbers for this
> > > patchset -- I don't know of any benchmark that might stress this lock.
> > >
> > > However, I first measured the number of cycles it costs to start an
> > > Oracle DB, and things went south with these changes. A simple 'startup
> > > immediate' calls hugetlb_fault() ~5000 times. For a vanilla kernel,
> > > this costs ~7.5 billion cycles; with this patchset it goes up to ~27.1
> > > billion. While there is naturally a fair amount of variation, these
> > > changes do seem to do more harm than good, at least in real world
> > > scenarios.
> >
> > Hello,
> >
> > I think that the number of cycles is not the proper way to measure this
> > patchset, because cycles are wasted by fault handling failure. Instead,
> > it targets improved elapsed time.

Fair enough, however the fact of the matter is that this approach does end
up hurting performance. Regarding total startup time, I hardly saw any
difference: with both vanilla and this patchset it takes close to 33.5
seconds.

> > Could you tell me how long it takes to fault all of its hugepages?
> >
> > Anyway, this order of magnitude still seems a problem.
:/

> > I guess that cycles are wasted by zeroing hugepages in the fault path,
> > as Andrew pointed out.
> >
> > I will send more patches to fix this problem.

> Hello, Davidlohr.
>
> Here goes the fix on top of this series.

... and with this patch we go from 27 down to 11 billion cycles, so this
approach still costs more than what we currently have. A perf stat shows
that an entire 1GB-hugepage-aware DB startup costs around ~30 billion
cycles on a vanilla kernel, so the impact of hugetlb_fault() is definitely
non-trivial and IMO worth considering.

Now, I took my old patchset (https://lkml.org/lkml/2013/7/26/299) for a
ride and things do look quite a bit better, which is basically what Andrew
was suggesting previously anyway. With the hash table approach the startup
time went down to ~25.1 seconds, which is a nice -24.7% time reduction,
with hugetlb_fault() consuming roughly 5.3 billion cycles.

This hash table was on an 80-core system, so since we do the power-of-two
round up we end up with 256 entries -- I think we can do better if we
enlarge it further, maybe something like a static 1024, or probably
better, 8-ish * nr cpus (a rough sketch of that is appended below).

Thoughts? Is there any reason why we cannot go with this instead? Yes, we
still keep the mutex, but the approach is (1) proven better for
performance on real world workloads and (2) far less invasive.

Thanks,
Davidlohr
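
P.S.: To make the hash table idea a bit more concrete, here is a rough,
untested sketch of what I mean. Note this is *not* the code from the
patchset linked above; the names (htlb_fault_mutex_table and friends), the
8 * nr cpus sizing, and hashing the mapping plus page index with jhash2
are just assumptions for illustration.

/*
 * Sketch only: replace the single hugetlb_instantiation_mutex with a
 * table of mutexes.  A fault hashes (mapping, index) to pick a mutex,
 * so only concurrent faults on the same huge page serialize against
 * each other; unrelated faults can proceed in parallel.
 */
#include <linux/mutex.h>
#include <linux/jhash.h>
#include <linux/cpumask.h>
#include <linux/log2.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/init.h>

static struct mutex *htlb_fault_mutex_table;	/* hypothetical names */
static unsigned int htlb_num_fault_mutexes;

static int __init htlb_fault_mutex_init(void)
{
	unsigned int i;

	/* "8-ish * nr cpus", rounded up to a power of two for cheap masking */
	htlb_num_fault_mutexes = roundup_pow_of_two(8 * num_possible_cpus());

	htlb_fault_mutex_table = kmalloc(htlb_num_fault_mutexes *
					 sizeof(struct mutex), GFP_KERNEL);
	if (!htlb_fault_mutex_table)
		return -ENOMEM;

	for (i = 0; i < htlb_num_fault_mutexes; i++)
		mutex_init(&htlb_fault_mutex_table[i]);
	return 0;
}

static u32 htlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
{
	unsigned long key[2] = { (unsigned long)mapping, idx };

	return jhash2((u32 *)key, sizeof(key) / sizeof(u32), 0) &
	       (htlb_num_fault_mutexes - 1);
}

/*
 * Then, in hugetlb_fault(), instead of taking the global
 * hugetlb_instantiation_mutex:
 *
 *	hash = htlb_fault_mutex_hash(mapping, idx);
 *	mutex_lock(&htlb_fault_mutex_table[hash]);
 *	... instantiate the huge page, consume the reservation ...
 *	mutex_unlock(&htlb_fault_mutex_table[hash]);
 */

The point being that the global serialization goes away for unrelated
faults, while two threads faulting the same huge page still serialize and
the SIGBUS-on-transient-reserve-shortage problem never shows up.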