On 09/09/2010 04:26 PM, Alexander Graf wrote:
On 09.09.2010, at 20:13, Hollis Blanchard wrote:
On 09/09/2010 04:16 AM, Liu Yu-B13201 wrote:
Yes, it's hard to resume TLB0. We only resumed TLB1 in the previous code.
But TLB1 is even smaller (13 free entries) than on 440,
so there is still little chance of getting a hit,
and thus the resumption is useless.

The only reason hits are unlikely in TLB1 is because you still don't have large 
page support in the host. Once you have that, you can use TLB1 for large guest 
mappings, and it will become extremely likely that you get hits in TLB1. This 
is true even if the guest wants 256MB but the host supports only e.g. 16MB 
large pages, and must split the guest mapping into multiple large host pages.
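To put numbers on that: here is a rough sketch (purely illustrative, not the actual
kvm-ppc shadow TLB code; the function name and the 16MB host page size are my
assumptions) of covering one guest TLB1 mapping with several host large pages, so
guest TLB1 hits still resolve out of host TLB1:

/* Illustrative only -- not the in-tree e500 shadow TLB code. */
#include <stdio.h>

#define HOST_LARGE_PAGE_SIZE (16UL << 20)  /* assume the host offers 16MB pages */

/* Cover one guest large mapping with as many host large pages as needed. */
static void shadow_guest_large_mapping(unsigned long guest_ea,
                                       unsigned long guest_size)
{
    unsigned long offset;

    for (offset = 0; offset < guest_size; offset += HOST_LARGE_PAGE_SIZE)
        printf("host TLB1 entry: ea=0x%lx, size=%luMB\n",
               guest_ea + offset, HOST_LARGE_PAGE_SIZE >> 20);
}

int main(void)
{
    /* A 256MB guest mapping splits into 16 host 16MB entries. */
    shadow_guest_large_mapping(0xc0000000UL, 256UL << 20);
    return 0;
}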

When will you have hugetlbfs for e500? That's going to make such a dramatic
difference that I'm not sure it's worth investing time in optimizing the MMU
code until then.
I'm not sure I agree. Sure, huge pages give another big win, but the state as 
is should at least be fast enough for prototyping.
Sure, and it sounds like you can prototype with it already. My point is that, in your 80-20 rule of optimization, the 20% is going to change radically once large page support is in place.

Remember that the guest kernel is mapped with just a couple large pages. During guest Linux boot on 440, I think about half the boot time is spent TLB thrashing in the initcalls. Using TLB0 can ameliorate that for now, but why bother, since it doesn't help you towards the real solution?
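For a sense of scale, a back-of-the-envelope comparison (the 256MB mapping size,
4KB TLB0 pages, and 16MB host large pages are assumptions for illustration only):

/* Back-of-the-envelope arithmetic; all sizes are assumed, not measured. */
#include <stdio.h>

int main(void)
{
    unsigned long mapping = 256UL << 20;         /* one guest large kernel mapping */
    unsigned long tlb0_entries = mapping >> 12;  /* shadowed as 4KB TLB0 pages */
    unsigned long tlb1_entries = mapping >> 24;  /* shadowed as 16MB host large pages */

    /* 65536 small entries vs. 16 large ones -- hence the thrashing. */
    printf("4KB shadow entries: %lu, 16MB shadow entries: %lu\n",
           tlb0_entries, tlb1_entries);
    return 0;
}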

I'm not saying this shouldn't be committed, if that's how you interpreted my comments, but in my opinion there are more useful things to do than continuing to optimize a path that is going to disappear in the future. Once you *do* have hugetlbfs in the host, you're not going to want to use TLB0 for guest TLB1 mappings any more anyways.

Hollis Blanchard
Mentor Graphics, Embedded Systems Division

