On 14.01.2013, at 10:03, Gleb Natapov wrote:
On Thu, Jan 10, 2013 at 01:45:04PM +0100, Alexander Graf wrote:
Hi Marcelo / Gleb,
This is my current patch queue for ppc. Please pull.
Highlights this time:
- Book3S: enable potential sPAPR guest emulation on PR KVM on pHyp
- BookE:
-----Original Message-----
From: Paul Mackerras [mailto:pau...@samba.org]
Sent: Thursday, January 17, 2013 12:53 PM
To: Bhushan Bharat-R65777
Cc: kvm-ppc@vger.kernel.org; k...@vger.kernel.org; ag...@suse.de; Bhushan Bharat-R65777
Subject: Re: [PATCH 5/8] KVM: PPC: debug stub interface
On Thu, Jan 17, 2013 at 11:53:38AM +0100, Alexander Graf wrote:
When a host mapping fault happens in a guest TLB1 entry today, we
map the translated guest entry into the host's TLB1.
This isn't particularly clever when the guest is mapped by normal 4k
pages, since these would be a lot better to put into TLB0 instead.
This patch adds the required logic to map such entries into the
host's TLB0.
When emulating tlbwe, we want to automatically map the entry that just got
written in our shadow TLB map, because chances are quite high that it's
going to be used very soon.
Today this happens explicitly, duplicating all the logic that is in
kvmppc_mmu_map() already. Just call that one instead.
Guests can trigger MMIO exits using dcbf. Since we don't emulate cache
incoherent MMIO, just do nothing and move on.
Reported-by: Ben Collins be...@servergy.com
Signed-off-by: Alexander Graf ag...@suse.de
Tested-by: Ben Collins be...@servergy.com
---
 arch/powerpc/kvm/emulate.c | 2 ++
 1 file changed, 2 insertions(+)
Hi Marcelo / Gleb,
This is my current patch queue for ppc against 3.8. Please pull.
It contains a bug fix for an issue that Ben Collins ran into, where
a guest would just abort because it traps during an unknown instruction.
Alex
The following changes since commit
Guests can trigger MMIO exits using dcbf. Since we don't emulate cache
incoherent MMIO, just do nothing and move on.
Reported-by: Ben Collins be...@servergy.com
Signed-off-by: Alexander Graf ag...@suse.de
Tested-by: Ben Collins be...@servergy.com
CC: sta...@vger.kernel.org
---
On 01/17/2013 04:50:39 PM, Alexander Graf wrote:
@@ -1024,9 +1001,11 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
 {
 	struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
 	struct tlbe_priv *priv;
-	struct kvm_book3e_206_tlb_entry *gtlbe, stlbe;
+
On 01/17/2013 06:29:56 PM, Alexander Graf wrote:
On 18.01.2013, at 01:20, Alexander Graf wrote:
On 18.01.2013, at 01:11, Scott Wood wrote:
It also seems like it would be cleaner to just invalidate the old entry
in tlbwe, and then this function doesn't need to change at all. I am a
This patch set improves the shadow TLB handling of our e500
target.
The really important bit here is that with these patches applied,
we can map guest TLB1 entries into the host's TLB0. This gives a
significant performance improvement as you can see below.
Alex
v1 -> v2:
  - new patch: Move
On 01/17/2013 08:34:53 PM, Alexander Graf wrote:
When we invalidate shadow TLB maps on the host, we don't mark them
as invalid. But we should.
Fix this by clearing E500_TLB_VALID from their flags when
invalidating.
Signed-off-by: Alexander Graf ag...@suse.de
---