Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled

2013-10-07 Thread Alexander Graf

On 07.10.2013, at 15:58, Aneesh Kumar K.V wrote:

> Alexander Graf  writes:
> 
>> On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote:
>> 
>>> Alexander Graf  writes:
>>> 
 On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"
> 
> 
> 
> 
>>> 
>>> Can you explain this better?
>> 
>> You're basically doing
>> 
>> hwaddr ppc_hash64_pteg_search(...)
>> {
>>if (kvm) {
>>pteg = read_from_kvm();
>>foreach pte in pteg {
>>if (match) return offset;
>>}
>>return -1;
>>} else {
>>foreach pte in pteg {
>>pte = read_pte_from_memory();
>>if (match) return offset;
>>}
>>return -1;
>>}
>> }
>> 
>> This is massive code duplication. The only difference between kvm and
>> tcg is the source of the pteg read. David already abstracted the
>> actual pte0/pte1 reads away in the ppc_hash64_load_hpte0 and
>> ppc_hash64_load_hpte1 wrapper functions.
>> 
>> Now imagine we could add a temporary pteg store in env. Then have something 
>> like this in ppc_hash64_load_hpte0:
>> 
>> if (need_kvm_htab_access) {
>>     if (env->current_cached_pteg != this_pteg) {
>>         read_pteg(env->cached_pteg);
>>     }
>>     return env->cached_pteg[x].pte0;
>> } else {
>>     
>> }
>> 
>> That way the actual resolver doesn't care where the PTEG comes from,
>> as it only ever checks pte0/pte1 and leaves all the magic on where
>> those come from to the load function.
> 
> I tried to do this and didn't like the end result. For one, we
> unnecessarily bloat the CPUPPCState struct to carry pteg information
> and an associated array, i.e., we now need the below in CPUPPCState.

How about something like

token = ppc_hash64_start_access();
foreach (hpte entry) {
   pte0 = ppc_hash64_load_hpte0(token, ...);
   ...
}
ppc_hash64_stop_access(token);

That way you could put the buffer and pteg_group into the token struct and only 
allocate and use it when KVM with HV is in use.

> 
> int pteg_group;
> unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
> 
> Also our search can be more efficient with the current code:

We're anything but performance critical at this point.

> 
>while (index < hpte_buf.header.n_valid) {
> 
> against 
> 
>for (i = 0; i < HPTES_PER_GROUP; i++) {
> 
> I guess the former is better when there are invalid hpte entries.
> 
> We now also need to update kvm_cpu_synchronize_state to clear
> pteg_group so that we would not look at stale values. If we ever want
> to read the pteg anywhere else, we could look at doing this. But at
> this stage, IMHO it unnecessarily makes it all more complex and
> less efficient.

The point is to make it less complex. I don't like the idea of having 2 hash 
lookups in the same code base that do basically the same thing. And efficiency 
only ever counts in the TCG case here.


Alex




Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled

2013-10-07 Thread Aneesh Kumar K.V
Alexander Graf  writes:

> On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote:
>
>> Alexander Graf  writes:
>> 
>>> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
 From: "Aneesh Kumar K.V"
 



>> 
>> Can you explain this better?
>
> You're basically doing
>
> hwaddr ppc_hash64_pteg_search(...)
> {
> if (kvm) {
> pteg = read_from_kvm();
> foreach pte in pteg {
> if (match) return offset;
> }
> return -1;
> } else {
> foreach pte in pteg {
> pte = read_pte_from_memory();
> if (match) return offset;
> }
> return -1;
> }
> }
>
> This is massive code duplication. The only difference between kvm and
> tcg is the source of the pteg read. David already abstracted the
> actual pte0/pte1 reads away in the ppc_hash64_load_hpte0 and
> ppc_hash64_load_hpte1 wrapper functions.
>
> Now imagine we could add a temporary pteg store in env. Then have something 
> like this in ppc_hash64_load_hpte0:
>
> if (need_kvm_htab_access) {
>     if (env->current_cached_pteg != this_pteg) {
>         read_pteg(env->cached_pteg);
>     }
>     return env->cached_pteg[x].pte0;
> } else {
> 
> }
>
> That way the actual resolver doesn't care where the PTEG comes from,
> as it only ever checks pte0/pte1 and leaves all the magic on where
> those come from to the load function.

I tried to do this and didn't like the end result. For one, we
unnecessarily bloat the CPUPPCState struct to carry pteg information
and an associated array, i.e., we now need the below in CPUPPCState.

int pteg_group;
unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];

Also our search can be more efficient with the current code:

while (index < hpte_buf.header.n_valid) {

against 

for (i = 0; i < HPTES_PER_GROUP; i++) {

I guess the former is better when there are invalid hpte entries.

We now also need to update kvm_cpu_synchronize_state to clear
pteg_group so that we would not look at stale values. If we ever want
to read the pteg anywhere else, we could look at doing this. But at
this stage, IMHO it unnecessarily makes it all more complex and
less efficient.

-aneesh




Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled

2013-10-02 Thread Alexander Graf

On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote:

> Alexander Graf  writes:
> 
>> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
>>> From: "Aneesh Kumar K.V"
>>> 
>>> With kvm enabled, we store the hash page table information in the 
>>> hypervisor.
>>> Use ioctl to read the htab contents. Without this we get the below error 
>>> when
>>> trying to read the guest address
>>> 
>>>  (gdb) x/10 do_fork
>>>  0xc0098660:   Cannot access memory at address 
>>> 0xc0098660
>>>  (gdb)
>>> 
>>> Signed-off-by: Aneesh Kumar K.V
>>> ---
>>>  target-ppc/kvm.c| 59 
>>> +
>>>  target-ppc/kvm_ppc.h| 12 +-
>>>  target-ppc/mmu-hash64.c | 57 
>>> ---
>>>  3 files changed, 104 insertions(+), 24 deletions(-)
>>> 
>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>> index 1838465..05b066c 100644
>>> --- a/target-ppc/kvm.c
>>> +++ b/target-ppc/kvm.c
>>> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr)
>>>  void kvm_arch_init_irq_routing(KVMState *s)
>>>  {
>>>  }
>>> +
>>> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>>> + bool secondary, target_ulong ptem,
>>> + target_ulong *hpte0, target_ulong *hpte1)
>>> +{
>>> +int htab_fd;
>>> +uint64_t index;
>>> +hwaddr pte_offset;
>>> +target_ulong pte0, pte1;
>>> +struct kvm_get_htab_fd ghf;
>>> +struct kvm_get_htab_buf {
>>> +struct kvm_get_htab_header header;
>>> +/*
>>> + * Older kernel required one extra byte.
>> 
>> Older than what?
>> 
>>> + */
> 
> Since we decided to drop that kernel patch, that should be updated as
> "kernel requires one extra byte".
> 
>>> +unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
>>> +} hpte_buf;
>>> +
>>> +index = (hash * HPTES_PER_GROUP) & cpu->env.htab_mask;
>>> +*hpte0 = 0;
>>> +*hpte1 = 0;
>>> +if (!cap_htab_fd) {
>>> +return 0;
>>> +}
>>> +
> 
> .
> 
>>> 
>>> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>>> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
>>>   bool secondary, target_ulong ptem,
>>>   ppc_hash_pte64_t *pte)
>>>  {
>>> -hwaddr pte_offset = pteg_off;
>>> +hwaddr pte_offset;
>>>  target_ulong pte0, pte1;
>>> -int i;
>>> -
>>> -for (i = 0; i < HPTES_PER_GROUP; i++) {
>>> -pte0 = ppc_hash64_load_hpte0(env, pte_offset);
>>> -pte1 = ppc_hash64_load_hpte1(env, pte_offset);
>>> -
>>> -if ((pte0 & HPTE64_V_VALID)
>>> -&& (secondary == !!(pte0 & HPTE64_V_SECONDARY))
>>> -&& HPTE64_V_COMPARE(pte0, ptem)) {
>>> -pte->pte0 = pte0;
>>> -pte->pte1 = pte1;
>>> -return pte_offset;
>>> +int i, ret = 0;
>>> +
>>> +if (kvm_enabled()) {
>>> +ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash,
>>> +secondary, ptem,
>>> +&pte->pte0, &pte->pte1);
>> 
>> Instead of duplicating the search, couldn't you just hook yourself into 
>> ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just 
>> cache the current pteg to ensure things don't become dog slow.
>> 
> 
> Can you explain this better?

You're basically doing

hwaddr ppc_hash64_pteg_search(...)
{
if (kvm) {
pteg = read_from_kvm();
foreach pte in pteg {
if (match) return offset;
}
return -1;
} else {
foreach pte in pteg {
pte = read_pte_from_memory();
if (match) return offset;
}
return -1;
}
}

This is massive code duplication. The only difference between kvm and tcg is 
the source of the pteg read. David already abstracted the actual pte0/pte1 
reads away in the ppc_hash64_load_hpte0 and ppc_hash64_load_hpte1 wrapper functions.

Now imagine we could add a temporary pteg store in env. Then have something 
like this in ppc_hash64_load_hpte0:

if (need_kvm_htab_access) {
    if (env->current_cached_pteg != this_pteg) {
        read_pteg(env->cached_pteg);
    }
    return env->cached_pteg[x].pte0;
} else {
    
}

That way the actual resolver doesn't care where the PTEG comes from, as it only 
ever checks pte0/pte1 and leaves all the magic on where those come from to the 
load function.


Alex




Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled

2013-09-30 Thread Aneesh Kumar K.V
Alexander Graf  writes:

> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V"
>>
>> With kvm enabled, we store the hash page table information in the hypervisor.
>> Use ioctl to read the htab contents. Without this we get the below error when
>> trying to read the guest address
>>
>>   (gdb) x/10 do_fork
>>   0xc0098660:   Cannot access memory at address 
>> 0xc0098660
>>   (gdb)
>>
>> Signed-off-by: Aneesh Kumar K.V
>> ---
>>   target-ppc/kvm.c| 59 
>> +
>>   target-ppc/kvm_ppc.h| 12 +-
>>   target-ppc/mmu-hash64.c | 57 
>> ---
>>   3 files changed, 104 insertions(+), 24 deletions(-)
>>
>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>> index 1838465..05b066c 100644
>> --- a/target-ppc/kvm.c
>> +++ b/target-ppc/kvm.c
>> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr)
>>   void kvm_arch_init_irq_routing(KVMState *s)
>>   {
>>   }
>> +
>> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
>> + bool secondary, target_ulong ptem,
>> + target_ulong *hpte0, target_ulong *hpte1)
>> +{
>> +int htab_fd;
>> +uint64_t index;
>> +hwaddr pte_offset;
>> +target_ulong pte0, pte1;
>> +struct kvm_get_htab_fd ghf;
>> +struct kvm_get_htab_buf {
>> +struct kvm_get_htab_header header;
>> +/*
>> + * Older kernel required one extra byte.
>
> Older than what?
>
>> + */

Since we decided to drop that kernel patch, that should be updated as
"kernel requires one extra byte".

>> +unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
>> +} hpte_buf;
>> +
>> +index = (hash * HPTES_PER_GROUP) & cpu->env.htab_mask;
>> +*hpte0 = 0;
>> +*hpte1 = 0;
>> +if (!cap_htab_fd) {
>> +return 0;
>> +}
>> +

.

>>
>> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
>> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
>>bool secondary, target_ulong ptem,
>>ppc_hash_pte64_t *pte)
>>   {
>> -hwaddr pte_offset = pteg_off;
>> +hwaddr pte_offset;
>>   target_ulong pte0, pte1;
>> -int i;
>> -
>> -for (i = 0; i < HPTES_PER_GROUP; i++) {
>> -pte0 = ppc_hash64_load_hpte0(env, pte_offset);
>> -pte1 = ppc_hash64_load_hpte1(env, pte_offset);
>> -
>> -if ((pte0 & HPTE64_V_VALID)
>> -&& (secondary == !!(pte0 & HPTE64_V_SECONDARY))
>> -&& HPTE64_V_COMPARE(pte0, ptem)) {
>> -pte->pte0 = pte0;
>> -pte->pte1 = pte1;
>> -return pte_offset;
>> +int i, ret = 0;
>> +
>> +if (kvm_enabled()) {
>> +ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash,
>> +secondary, ptem,
>> +&pte->pte0, &pte->pte1);
>
> Instead of duplicating the search, couldn't you just hook yourself into 
> ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just 
> cache the current pteg to ensure things don't become dog slow.
>

Can you explain this better?

-aneesh




Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled

2013-09-30 Thread Alexander Graf

On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:

From: "Aneesh Kumar K.V"

With kvm enabled, we store the hash page table information in the hypervisor.
Use ioctl to read the htab contents. Without this we get the below error when
trying to read the guest address

  (gdb) x/10 do_fork
  0xc0098660:   Cannot access memory at address 
0xc0098660
  (gdb)

Signed-off-by: Aneesh Kumar K.V
---
  target-ppc/kvm.c| 59 +
  target-ppc/kvm_ppc.h| 12 +-
  target-ppc/mmu-hash64.c | 57 ---
  3 files changed, 104 insertions(+), 24 deletions(-)

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 1838465..05b066c 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr)
  void kvm_arch_init_irq_routing(KVMState *s)
  {
  }
+
+hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
+ bool secondary, target_ulong ptem,
+ target_ulong *hpte0, target_ulong *hpte1)
+{
+int htab_fd;
+uint64_t index;
+hwaddr pte_offset;
+target_ulong pte0, pte1;
+struct kvm_get_htab_fd ghf;
+struct kvm_get_htab_buf {
+struct kvm_get_htab_header header;
+/*
+ * Older kernel required one extra byte.


Older than what?


+ */
+unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
+} hpte_buf;
+
+index = (hash * HPTES_PER_GROUP) & cpu->env.htab_mask;
+*hpte0 = 0;
+*hpte1 = 0;
+if (!cap_htab_fd) {
+return 0;
+}
+
+ghf.flags = 0;
+ghf.start_index = index;
+htab_fd = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_HTAB_FD, &ghf);
+if (htab_fd < 0) {
+goto error_out;
+}
+/*
+ * Read the hpte group
+ */
+if (read(htab_fd, &hpte_buf, sizeof(hpte_buf)) < 0) {
+goto out;
+}
+
+index = 0;
+pte_offset = (hash * HASH_PTEG_SIZE_64) & cpu->env.htab_mask;
+while (index < hpte_buf.header.n_valid) {
+pte0 = hpte_buf.hpte[(index * 2)];
+pte1 = hpte_buf.hpte[(index * 2) + 1];
+if ((pte0 & HPTE64_V_VALID)
+&& (secondary == !!(pte0 & HPTE64_V_SECONDARY))
+&& HPTE64_V_COMPARE(pte0, ptem)) {
+*hpte0 = pte0;
+*hpte1 = pte1;
+close(htab_fd);
+return pte_offset;
+}
+index++;
+pte_offset += HASH_PTE_SIZE_64;
+}
+out:
+close(htab_fd);
+error_out:
+return -1;
+}
diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
index 4ae7bf2..dad0e57 100644
--- a/target-ppc/kvm_ppc.h
+++ b/target-ppc/kvm_ppc.h
@@ -42,7 +42,9 @@ int kvmppc_get_htab_fd(bool write);
  int kvmppc_save_htab(QEMUFile *f, int fd, size_t bufsize, int64_t max_ns);
  int kvmppc_load_htab_chunk(QEMUFile *f, int fd, uint32_t index,
 uint16_t n_valid, uint16_t n_invalid);
-
+hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
+ bool secondary, target_ulong ptem,
+ target_ulong *hpte0, target_ulong *hpte1);
  #else

  static inline uint32_t kvmppc_get_tbfreq(void)
@@ -181,6 +183,14 @@ static inline int kvmppc_load_htab_chunk(QEMUFile *f, int 
fd, uint32_t index,
  abort();
  }

+static inline hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
+   bool secondary,
+   target_ulong ptem,
+   target_ulong *hpte0,
+   target_ulong *hpte1)
+{
+abort();
+}
  #endif

  #ifndef CONFIG_KVM
diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 67fc1b5..2288fe8 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -302,37 +302,50 @@ static int ppc_hash64_amr_prot(CPUPPCState *env, 
ppc_hash_pte64_t pte)
  return prot;
  }

-static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
+static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash,
   bool secondary, target_ulong ptem,
   ppc_hash_pte64_t *pte)
  {
-hwaddr pte_offset = pteg_off;
+hwaddr pte_offset;
  target_ulong pte0, pte1;
-int i;
-
-for (i = 0; i < HPTES_PER_GROUP; i++) {
-pte0 = ppc_hash64_load_hpte0(env, pte_offset);
-pte1 = ppc_hash64_load_hpte1(env, pte_offset);
-
-if ((pte0 & HPTE64_V_VALID)
-&& (secondary == !!(pte0 & HPTE64_V_SECONDARY))
-&& HPTE64_V_COMPARE(pte0, ptem)) {
-pte->pte0 = pte0;
-pte->pte1 = pte1;
-return pte_offset;
+int i, ret = 0;
+
+if (kvm_enabled()) {
+ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash,
+secondary, ptem,
+&pte->pte0, &pte->pte1);


Instead of duplicating the search, couldn't you just hook yourself into 
ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just cache 
the current pteg to ensure things don't become dog slow.
