Avi Kivity wrote:
>
> Note: I think we have to check _after_ kvm_mmu_get_page(), otherwise we
> might be checking a page that is not write-protected and can change again.
>
> So the logic needs to be something like
>
> for_each_shadow_entry:
>     if (!last_level && !present(*spte))
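The ordering point above — recheck the guest pte only after kvm_mmu_get_page() has write-protected the guest page — can be reduced to a small user-space sketch. All names here (map_level, read_gpte, get_shadow_page, gpte_t) are illustrative stand-ins, not the kernel's API:

```c
/* Minimal sketch (not kernel code): recheck the cached guest pte only
 * AFTER the shadow page is obtained.  Before that point the guest page
 * is not yet write-protected, so a pre-check could pass and the pte
 * could still change afterwards. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gpte_t;

static gpte_t guest_pte;              /* pte another vcpu may rewrite      */

static gpte_t read_gpte(void)         /* stand-in for kvm_read_guest_atomic */
{
    return guest_pte;
}

static int get_shadow_page(void)      /* stand-in for kvm_mmu_get_page()   */
{
    return 1;                         /* pretend allocation succeeded      */
}

/* True if the mapping can be installed; false if the gpte changed under
 * us, in which case the caller must bail out and let the fault retry.   */
static bool map_level(gpte_t cached)
{
    get_shadow_page();                /* write-protects the gpte page...   */
    if (read_gpte() != cached)        /* ...so only now is the check stable */
        return false;
    return true;
}
```

A pre-allocation check would be checking a page that is still writable by the guest, which is exactly the hole the note describes.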
>
On 07/05/2010 12:09 PM, Xiao Guangrong wrote:
Avi Kivity wrote:
I'm not convinced we can bypass the checks. Consider:

    VCPU0                                  VCPU1

    #PF
    walk_addr
      -> gpml4e0,gpdpe0,gpde0,gpte0
                                           replace gpdpe0 with gpdpe1
    #PF
    walk_addr
      -> gpml4e0,
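The race above can be flattened into a single-threaded sketch: VCPU0 snapshots its walk, "VCPU1" rewrites a directory entry, and the cached walk no longer matches the live guest tables. Names (guest_walk, walk_addr, walk_is_stale) are illustrative, not kernel APIs:

```c
/* Sketch of the walk-invalidation race: a cached walk result can go
 * stale at any level when another vcpu rewrites a guest directory
 * entry (gpdpe0 -> gpdpe1 in the example above). */
#include <assert.h>
#include <stdint.h>

enum { GPML4E, GPDPE, GPDE, GPTE, NLEVELS };

static uint64_t guest_walk[NLEVELS];   /* live guest page-table entries */

static void walk_addr(uint64_t cached[NLEVELS])
{
    for (int i = 0; i < NLEVELS; i++)
        cached[i] = guest_walk[i];     /* vcpu0 snapshots the walk      */
}

static int walk_is_stale(const uint64_t cached[NLEVELS])
{
    for (int i = 0; i < NLEVELS; i++)
        if (cached[i] != guest_walk[i])
            return 1;                  /* some level changed since walk */
    return 0;
}
```

This is why the fetch path cannot blindly trust gw->ptes[]: every level it traverses must either be rechecked or already be write-protected.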
On 07/05/2010 11:45 AM, Xiao Guangrong wrote:
Avi Kivity wrote:
Looking at the code more carefully, maybe this code is wrong:

    if (!direct) {
        r = kvm_read_guest_atomic(vcpu->kvm,
    -                             gw->pte_gpa[level - 2],
    +
On 07/05/2010 05:52 AM, Xiao Guangrong wrote:
Avi Kivity wrote:
Umm, if we move the check before the judgment, it'll check every level;
actually, only the opened mapping and the last level need to be checked, so
for performance reasons maybe it's better to keep two check-points.

Avi Kivity wrote:
> What exactly are the conditions when you w
On 07/03/2010 03:57 PM, Xiao Guangrong wrote:
Avi Kivity wrote:
And this check is not sufficient, since it's only reached if the
mapping is zapped or does not exist; in other words, only when this
judgment is broken:

    is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)

but if the middle level is present and it's not the larg
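The "judgment" being discussed is the condition under which fetch walks through an existing shadow entry instead of building a new one. A minimal sketch — note the bit masks and helper bodies here are simplified stand-ins (the real kvm predicates use the shadow pte masks, not raw bit 0):

```c
/* Simplified sketch of the fetch "judgment": a present, non-large
 * middle entry is reused without any recheck, which is the gap Xiao
 * points out -- the recheck only runs on the *other* side of this
 * condition (absent entry, or a large pte that must be broken). */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PT_PRESENT_MASK   (1ULL << 0)   /* stand-in present bit    */
#define PT_PAGE_SIZE_MASK (1ULL << 7)   /* stand-in large-page bit */

static bool is_shadow_present_pte(uint64_t spte)
{
    return spte & PT_PRESENT_MASK;
}

static bool is_large_pte(uint64_t spte)
{
    return spte & PT_PAGE_SIZE_MASK;
}

/* True when fetch simply walks through the existing shadow entry. */
static bool can_reuse(uint64_t spte)
{
    return is_shadow_present_pte(spte) && !is_large_pte(spte);
}
```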
On 07/03/2010 04:03 PM, Xiao Guangrong wrote:
Avi Kivity wrote:
Well, in the description, it looked like everything was using small
pages (in kvm, level=1 means the PTE level; we need to change this one
day). Please describe again and say exactly when the guest or host
uses large pages.

Avi Kivity wrote:
> Oh, I see what you mean.
On 07/03/2010 03:44 PM, Avi Kivity wrote:
On 07/03/2010 03:31 PM, Xiao Guangrong wrote:
Avi Kivity wrote:

    if (!direct) {
        r = kvm_read_guest_atomic(vcpu->kvm,
                                  gw->pte_gpa[level - 2],
                                  &curr_pte, sizeof(curr_pte));
        if (r || curr_pte != gw->ptes[level - 2]) {
            kvm_mmu_put_page(shadow_page, sp
On 07/03/2010 03:16 PM, Xiao Guangrong wrote:
Avi Kivity wrote:
On 07/03/2010 01:31 PM, Xiao Guangrong wrote:
See how the pte is reread inside fetch with mmu_lock held.
It looks like something is broken in the 'fetch' functions; this patch will
fix it.

Subject: [PATCH] KVM: MMU: fix last level broken in FNAME(fetch)

We read the guest level out of 'mmu
On 07/02/2010 08:03 PM, Marcelo Tosatti wrote:
On Thu, Jul 01, 2010 at 09:55:56PM +0800, Xiao Guangrong wrote:
Combine guest pte read between guest pte walk and pte prefetch

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/paging_tmpl.h | 48 ++-
 1 files changed, 33 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
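The patch's idea — read each guest pte once during walk_addr, cache it in the walker, and let the prefetch path consume the cached copy instead of issuing a second guest read — can be sketched as follows. The types and names (guest_walker, read_guest, prefetch_pte) are illustrative stand-ins for the real paging_tmpl.h structures:

```c
/* Sketch of "combine guest pte read between guest pte walk and pte
 * prefetch": the walk reads guest memory once per level and caches the
 * result; prefetch reuses the cache, so both paths see the same gpte
 * snapshot and guest memory is touched only once. */
#include <assert.h>
#include <stdint.h>

#define PT_LEVELS 4

struct guest_walker {
    uint64_t ptes[PT_LEVELS];          /* gptes cached during the walk   */
};

static uint64_t guest_mem[PT_LEVELS];  /* stand-in for guest page tables */
static int guest_reads;                /* counts accesses to guest memory */

static uint64_t read_guest(int level)  /* stand-in for a guest-memory read */
{
    guest_reads++;
    return guest_mem[level];
}

static void walk(struct guest_walker *gw)
{
    for (int l = 0; l < PT_LEVELS; l++)
        gw->ptes[l] = read_guest(l);   /* single read, result cached     */
}

static uint64_t prefetch_pte(const struct guest_walker *gw, int level)
{
    return gw->ptes[level];            /* reuse cache: no second read    */
}
```

Beyond saving a read, using one snapshot for both paths avoids the two paths disagreeing when the guest pte changes between the walk and the prefetch.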