On Mon, Mar 21, 2016 at 22:08:06 +0000, Peter Maydell wrote:
> It is not _necessary_, but it is a performance optimization to
> speed up the "missed in the TLB" case. (A TLB flush will wipe
> the tb_jmp_cache table.) From the thread where the move-to-front-of-list
> behaviour was added in 2010, benefits cited:
(snip)
> I think what's happening here is that for guest CPUs where TLB
> invalidation happens fairly frequently (notably ARM, because
> we don't model ASIDs in the QEMU TLB and thus have to flush
> the TLB on any context switch) the case of "we didn't hit in
> the TLB but we do have this TB and it was used really recently"
> happens often enough to make it worthwhile for the
> tb_find_physical() code to keep its hash buckets in LRU order.
>
> Obviously that's all five year old data now, so a pinch of
> salt may be indicated, but I'd rather we didn't just remove
> the optimisation without some benchmarking to check that it's
> not significant.

A 2x difference is huge. Good point. Most of my tests have been on
x86-on-x86, where the difference for many CPU-intensive benchmarks
(such as SPEC) was negligible.

I just tested current master booting Alex's debian ARM image without
the LRU behaviour, and I see a 20% increase in boot time. I'll add
per-bucket locks to keep the same behaviour without hurting
scalability.

Thanks,

		Emilio