Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-11-08 Thread Cedric Le Goater
On 11/06/2013 06:55 AM, Paul Mackerras wrote:
> On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
>>
>> On 05.11.2013, at 13:28, Cedric Le Goater  wrote:
>>
>>> +/*
>>> + * Compare endian order of host and guest to determine whether we need
>>> + * to byteswap or not
>>> + */
>>> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>>> {
>>> -   return vcpu->arch.shared->msr & MSR_LE;
>>> +   return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
>>
>> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.
> 
> Or (MSR_KERNEL & MSR_LE).

Yes, that is better. I will resend the patch with an update. That was 
quite laborious for a single-line patch ... 
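
For the record, the helper would then look something like this (untested
sketch, reusing the expression from the patch and relying on MSR_KERNEL
carrying the host MSR_LE setting, as Paul suggests):

/*
 * Compare endian order of host and guest to determine whether we need
 * to byteswap or not.  MSR_KERNEL reflects the host byte order, so no
 * mfmsr() is needed.
 */
static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
{
	return ((MSR_KERNEL & MSR_LE) >> MSR_LE_LG) ^
		((vcpu->arch.shared->msr & MSR_LE) >> MSR_LE_LG);
}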

Thanks,

C.



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-11-05 Thread Paul Mackerras
On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
> 
> On 05.11.2013, at 13:28, Cedric Le Goater  wrote:
> 
> > +/*
> > + * Compare endian order of host and guest to determine whether we need
> > + * to byteswap or not
> > + */
> > static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> > {
> > -   return vcpu->arch.shared->msr & MSR_LE;
> > +   return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
> 
> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

Or (MSR_KERNEL & MSR_LE).

> > +   /* if we are loading data from a guest which is in Split
> > +* Little Endian mode, the byte order is reversed 
> 
> Only for data. Instructions are still not reversed. You express this well in
> the code, but not in the comment.

Well, his comment does say "if we are loading data", but I agree it's
slightly ambiguous (the guest's instructions are our data).

Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-11-05 Thread Alexander Graf

On 05.11.2013, at 13:28, Cedric Le Goater  wrote:

> On 11/04/2013 12:44 PM, Alexander Graf wrote:
>> 
>> On 10.10.2013, at 12:16, Paul Mackerras  wrote:
>> 
>>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
 
 
 On 09.10.2013 at 07:59, Paul Mackerras wrote:
 
> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>> 
>> 
>> On 09.10.2013 at 01:31, Paul Mackerras wrote:
>> 
>>> True, until we get to POWER8 with its split little-endian support,
>>> where instructions and data can have different endianness...
>> 
>> How exactly does that work?
> 
> They added an extra MSR bit called SLE which enables the split-endian
> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> LE bit controls instruction endianness, and data endianness depends on
> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> LE=0 you get little-endian data and big-endian instructions, and vice
> versa with SLE=1 and LE=1.
 
 So ld32 should only honor LE and get_last_inst only looks at SLE and swaps 
 even the vcpu cached version if it's set, no?
>>> 
>>> Not exactly; instruction endianness depends only on MSR[LE], so
>>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>>> cached version should be host endian always.
>> 
>> I agree. It makes the code flow easier.
> 
> 
> To take the host endian order into account when deciding whether to
> byteswap, we could modify kvmppc_need_byteswap() as follows:
> 
> 
> +/*
> + * Compare endian order of host and guest to determine whether we need
> + * to byteswap or not
> + */
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> -   return vcpu->arch.shared->msr & MSR_LE;
> +   return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^

mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

> +   ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
> }
> 
> 
> 
> and I think MSR[SLE] could be handled this way:
> 
> 
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> @@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
> ulong *eaddr,
>  u32 *ptr, bool data)
> {
>int ret;
> +   bool byteswap;
> 
>ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> 
> -   if (kvmppc_need_byteswap(vcpu))
> +   byteswap = kvmppc_need_byteswap(vcpu);
> +
> +   /* if we are loading data from a guest which is in Split
> +* Little Endian mode, the byte order is reversed 

Only for data. Instructions are still not reversed. You express this well in the
code, but not in the comment.

> +*/
> +   if (data && (vcpu->arch.shared->msr & MSR_SLE))
> +   byteswap = !byteswap;
> +
> +   if (byteswap)
>*ptr = swab32(*ptr);
> 
>return ret;
> 
> 
> How does that look?
> 
> This is not tested and the MSR_SLE definition is missing. I will fix that in 
> v5.

Alrighty :)


Alex



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-11-05 Thread Cedric Le Goater
On 11/04/2013 12:44 PM, Alexander Graf wrote:
> 
> On 10.10.2013, at 12:16, Paul Mackerras  wrote:
> 
>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>>
>>>
>>> On 09.10.2013 at 07:59, Paul Mackerras wrote:
>>>
 On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>
>
> On 09.10.2013 at 01:31, Paul Mackerras wrote:
>
>> True, until we get to POWER8 with its split little-endian support,
>> where instructions and data can have different endianness...
>
> How exactly does that work?

 They added an extra MSR bit called SLE which enables the split-endian
 mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
 LE bit controls instruction endianness, and data endianness depends on
 LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
 LE=0 you get little-endian data and big-endian instructions, and vice
 versa with SLE=1 and LE=1.
>>>
>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps 
>>> even the vcpu cached version if it's set, no?
>>
>> Not exactly; instruction endianness depends only on MSR[LE], so
>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>> cached version should be host endian always.
> 
> I agree. It makes the code flow easier.


To take the host endian order into account when deciding whether to
byteswap, we could modify kvmppc_need_byteswap() as follows:


+/*
+ * Compare endian order of host and guest to determine whether we need
+ * to byteswap or not
+ */
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
-   return vcpu->arch.shared->msr & MSR_LE;
+   return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
+   ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
 }



and I think MSR[SLE] could be handled this way:


 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
@@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
ulong *eaddr,
  u32 *ptr, bool data)
 {
int ret;
+   bool byteswap;
 
ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-   if (kvmppc_need_byteswap(vcpu))
+   byteswap = kvmppc_need_byteswap(vcpu);
+
+   /* if we are loading data from a guest which is in Split
+* Little Endian mode, the byte order is reversed 
+*/
+   if (data && (vcpu->arch.shared->msr & MSR_SLE))
+   byteswap = !byteswap;
+
+   if (byteswap)
*ptr = swab32(*ptr);
 
return ret;


How does that look?

This is not tested and the MSR_SLE definition is missing. I will fix that in v5.
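
For reference, the missing define would presumably just mirror the other
MSR bit macros in reg.h; SLE is IBM bit 5 of the 64-bit MSR, i.e. bit 58
counting from the least significant bit. Untested guess:

#define MSR_SLE_LG	58	/* Split Little Endian */
#define MSR_SLE		__MASK(MSR_SLE_LG)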

Thanks,

C.



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-11-04 Thread Alexander Graf

On 10.10.2013, at 12:16, Paul Mackerras  wrote:

> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>> 
>> 
>> On 09.10.2013 at 07:59, Paul Mackerras wrote:
>> 
>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
 
 
 On 09.10.2013 at 01:31, Paul Mackerras wrote:
 
> True, until we get to POWER8 with its split little-endian support,
> where instructions and data can have different endianness...
 
 How exactly does that work?
>>> 
>>> They added an extra MSR bit called SLE which enables the split-endian
>>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>>> LE bit controls instruction endianness, and data endianness depends on
>>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>> versa with SLE=1 and LE=1.
>> 
>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps 
>> even the vcpu cached version if it's set, no?
> 
> Not exactly; instruction endianness depends only on MSR[LE], so
> get_last_inst should not look at MSR[SLE].  I would think the vcpu
> cached version should be host endian always.

I agree. It makes the code flow easier.


Alex



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-10 Thread Paul Mackerras
On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
> 
> 
> On 09.10.2013 at 07:59, Paul Mackerras wrote:
> 
> > On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
> >> 
> >> 
> >> On 09.10.2013 at 01:31, Paul Mackerras wrote:
> >> 
> >>> True, until we get to POWER8 with its split little-endian support,
> >>> where instructions and data can have different endianness...
> >> 
> >> How exactly does that work?
> > 
> > They added an extra MSR bit called SLE which enables the split-endian
> > mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> > LE bit controls instruction endianness, and data endianness depends on
> > LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> > LE=0 you get little-endian data and big-endian instructions, and vice
> > versa with SLE=1 and LE=1.
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps 
> even the vcpu cached version if it's set, no?

Not exactly; instruction endianness depends only on MSR[LE], so
get_last_inst should not look at MSR[SLE].  I would think the vcpu
cached version should be host endian always.
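
In other words, something like this (just a sketch of the intent; the real
helper may also have to cope with a failed instruction fetch, but the point
is that the cached value needs no further swapping):

static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
	/* stored in host endian order by the guest exit paths */
	return vcpu->arch.last_inst;
}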

Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-09 Thread Cedric Le Goater
On 10/09/2013 10:29 AM, Alexander Graf wrote:
> 
> 
> On 09.10.2013 at 07:59, Paul Mackerras wrote:
> 
>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>
>>>
>>> On 09.10.2013 at 01:31, Paul Mackerras wrote:
>>>
 True, until we get to POWER8 with its split little-endian support,
 where instructions and data can have different endianness...
>>>
>>> How exactly does that work?
>>
>> They added an extra MSR bit called SLE which enables the split-endian
>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>> LE bit controls instruction endianness, and data endianness depends on
>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>> LE=0 you get little-endian data and big-endian instructions, and vice
>> versa with SLE=1 and LE=1.
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and 
> swaps even the vcpu cached version if it's set, no?
 
Here is the table (PowerISA) illustrating the endian modes for all
combinations:

SLE   LE    Data      Instruction

 0     0    Big       Big
 0     1    Little    Little
 1     0    Little    Big
 1     1    Big       Little


My understanding is that when reading instructions, we should test MSR[LE] 
and for data, test MSR[SLE] ^ MSR[LE].

This has to be done in conjunction with the host endian order to determine 
if we should byte-swap or not, but we can assume the host is big endian
for the moment and fix the byte-swapping later.
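
Assuming a big-endian host for now, that would boil down to something like
the sketch below (untested; kvmppc_inst_is_le()/kvmppc_data_is_le() are
made-up names, and MSR_SLE still needs to be defined):

/* guest fetches instructions in little-endian order */
static inline bool kvmppc_inst_is_le(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.shared->msr & MSR_LE;
}

/* guest data accesses are little-endian when MSR[SLE] ^ MSR[LE] is set */
static inline bool kvmppc_data_is_le(struct kvm_vcpu *vcpu)
{
	return !(vcpu->arch.shared->msr & MSR_SLE) !=
	       !(vcpu->arch.shared->msr & MSR_LE);
}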

C.



>>
>> There is also a user accessible "mtsle" instruction that sets the
>> value of the SLE bit.  This enables programs to flip their data
>> endianness back and forth quickly, so it's usable for short
>> instruction sequences, without the need to generate instructions of
>> the opposite endianness.
>>
>> Paul.
> 



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-09 Thread Alexander Graf


On 09.10.2013 at 07:59, Paul Mackerras wrote:

> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>> 
>> 
>> On 09.10.2013 at 01:31, Paul Mackerras wrote:
>> 
>>> True, until we get to POWER8 with its split little-endian support,
>>> where instructions and data can have different endianness...
>> 
>> How exactly does that work?
> 
> They added an extra MSR bit called SLE which enables the split-endian
> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> LE bit controls instruction endianness, and data endianness depends on
> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> LE=0 you get little-endian data and big-endian instructions, and vice
> versa with SLE=1 and LE=1.

So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even 
the vcpu cached version if it's set, no?


Alex

> 
> There is also a user accessible "mtsle" instruction that sets the
> value of the SLE bit.  This enables programs to flip their data
> endianness back and forth quickly, so it's usable for short
> instruction sequences, without the need to generate instructions of
> the opposite endianness.
> 
> Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Paul Mackerras
On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
> 
> 
> On 09.10.2013 at 01:31, Paul Mackerras wrote:
> 
> > True, until we get to POWER8 with its split little-endian support,
> > where instructions and data can have different endianness...
> 
> How exactly does that work?

They added an extra MSR bit called SLE which enables the split-endian
mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
LE bit controls instruction endianness, and data endianness depends on
LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
LE=0 you get little-endian data and big-endian instructions, and vice
versa with SLE=1 and LE=1.

There is also a user accessible "mtsle" instruction that sets the
value of the SLE bit.  This enables programs to flip their data
endianness back and forth quickly, so it's usable for short
instruction sequences, without the need to generate instructions of
the opposite endianness.
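
For illustration only (POWER8 only, untested, and assuming the assembler
knows the mnemonic), a user-space sequence flipping data endianness around
a single load might look like:

static inline unsigned int load_u32_byte_reversed(const unsigned int *p)
{
	unsigned int val;

	asm volatile("mtsle	1\n\t"		/* data order becomes LE ^ 1 */
		     "lwz	%0,0(%1)\n\t"	/* load seen with reversed bytes */
		     "mtsle	0"		/* restore normal data order */
		     : "=&r" (val)
		     : "b" (p)
		     : "memory");
	return val;
}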

Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Alexander Graf


On 09.10.2013 at 01:31, Paul Mackerras wrote:

> On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
>> 
>> On 08.10.2013, at 16:12, Cédric Le Goater  wrote:
>> 
>>> MMIO emulation reads the last instruction executed by the guest
>>> and then emulates it. If the guest is running in Little Endian mode,
>>> the instruction needs to be byte-swapped before being emulated.
>>> 
>>> This patch stores the last instruction in the endian order of the
>>> host, byte-swapping it if needed. The common code which fetches
>>> 'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
>>> exit paths for the Book3S PR and HV guests use their own version
>>> in assembly.
>>> 
>>> Finally, the meaning of the 'is_bigendian' argument of the
>>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>>> slightly changed to represent a possible reverse operation. This
>>> is used in conjunction with kvmppc_is_bigendian() to determine if
>>> the instruction being emulated should be byte-swapped.
>>> 
>>> Signed-off-by: Cédric Le Goater 
>>> ---
>>> 
>>> Changes in v2:
>>> 
>>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>>  exit paths. (Paul Mackerras)
>>> 
>>> - moved the byte swapping logic to kvmppc_handle_load() and
>>>  kvmppc_handle_store() by changing the is_bigendian parameter
>>>  meaning. (Paul Mackerras)
>>> 
>>> arch/powerpc/include/asm/kvm_book3s.h   |9 -
>>> arch/powerpc/include/asm/kvm_ppc.h  |   10 +-
>>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++
>>> arch/powerpc/kvm/book3s_segment.S   |   10 ++
>>> arch/powerpc/kvm/emulate.c  |1 -
>>> arch/powerpc/kvm/powerpc.c  |   16 
>>> 6 files changed, 46 insertions(+), 11 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
>>> b/arch/powerpc/include/asm/kvm_book3s.h
>>> index 4ee6c66..f043e62 100644
>>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>>> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
>>> ulong *eaddr,
>>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
>>>  u32 *inst)
>>> {
>>> -return kvmppc_ld32(vcpu, eaddr, inst, false);
>>> +int ret;
>>> +
>>> +ret = kvmppc_ld32(vcpu, eaddr, inst, false);
>>> +
>>> +if (kvmppc_need_byteswap(vcpu))
>>> +*inst = swab32(*inst);
>> 
>> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going 
>> to need a byteswap, regardless of whether it's an instruction or not.
> 
> True, until we get to POWER8 with its split little-endian support,
> where instructions and data can have different endianness...

How exactly does that work?

Alex

> 
> Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Paul Mackerras
On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater  wrote:
> 
> > MMIO emulation reads the last instruction executed by the guest
> > and then emulates it. If the guest is running in Little Endian mode,
> > the instruction needs to be byte-swapped before being emulated.
> > 
> > This patch stores the last instruction in the endian order of the
> > host, byte-swapping it if needed. The common code which fetches
> > 'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
> > exit paths for the Book3S PR and HV guests use their own version
> > in assembly.
> > 
> > Finally, the meaning of the 'is_bigendian' argument of the
> > routines kvmppc_handle_load() and kvmppc_handle_store() is
> > slightly changed to represent a possible reverse operation. This
> > is used in conjunction with kvmppc_is_bigendian() to determine if
> > the instruction being emulated should be byte-swapped.
> > 
> > Signed-off-by: Cédric Le Goater 
> > ---
> > 
> > Changes in v2:
> > 
> > - replaced rldicl. by andi. to test the MSR_LE bit in the guest
> >   exit paths. (Paul Mackerras)
> > 
> > - moved the byte swapping logic to kvmppc_handle_load() and
> >   kvmppc_handle_store() by changing the is_bigendian parameter
> >   meaning. (Paul Mackerras)
> > 
> > arch/powerpc/include/asm/kvm_book3s.h   |9 -
> > arch/powerpc/include/asm/kvm_ppc.h  |   10 +-
> > arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++
> > arch/powerpc/kvm/book3s_segment.S   |   10 ++
> > arch/powerpc/kvm/emulate.c  |1 -
> > arch/powerpc/kvm/powerpc.c  |   16 
> > 6 files changed, 46 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
> > b/arch/powerpc/include/asm/kvm_book3s.h
> > index 4ee6c66..f043e62 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
> > ulong *eaddr,
> > static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> >   u32 *inst)
> > {
> > -   return kvmppc_ld32(vcpu, eaddr, inst, false);
> > +   int ret;
> > +
> > +   ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> > +
> > +   if (kvmppc_need_byteswap(vcpu))
> > +   *inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to 
> need a byteswap, regardless of whether it's an instruction or not.

True, until we get to POWER8 with its split little-endian support,
where instructions and data can have different endianness...

Paul.


Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Cedric Le Goater
On 10/08/2013 04:25 PM, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater  wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates it. If the guest is running in Little Endian mode,
>> the instruction needs to be byte-swapped before being emulated.
>> 
>> This patch stores the last instruction in the endian order of the
>> host, byte-swapping it if needed. The common code which fetches
>> 'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
>> exit paths for the Book3S PR and HV guests use their own version
>> in assembly.
>> 
>> Finally, the meaning of the 'is_bigendian' argument of the
>> routines kvmppc_handle_load() and kvmppc_handle_store() is
>> slightly changed to represent a possible reverse operation. This
>> is used in conjunction with kvmppc_is_bigendian() to determine if
>> the instruction being emulated should be byte-swapped.
>>
>> Signed-off-by: Cédric Le Goater 
>> ---
>>
>> Changes in v2:
>>
>> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>>   exit paths. (Paul Mackerras)
>>
>> - moved the byte swapping logic to kvmppc_handle_load() and
>>   kvmppc_handle_store() by changing the is_bigendian parameter
>>   meaning. (Paul Mackerras)
>>
>> arch/powerpc/include/asm/kvm_book3s.h   |9 -
>> arch/powerpc/include/asm/kvm_ppc.h  |   10 +-
>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++
>> arch/powerpc/kvm/book3s_segment.S   |   10 ++
>> arch/powerpc/kvm/emulate.c  |1 -
>> arch/powerpc/kvm/powerpc.c  |   16 
>> 6 files changed, 46 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
>> b/arch/powerpc/include/asm/kvm_book3s.h
>> index 4ee6c66..f043e62 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
>> ulong *eaddr,
>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
>>u32 *inst)
>> {
>> -return kvmppc_ld32(vcpu, eaddr, inst, false);
>> +int ret;
>> +
>> +ret = kvmppc_ld32(vcpu, eaddr, inst, false);
>> +
>> +if (kvmppc_need_byteswap(vcpu))
>> +*inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to
> need a byteswap, regardless of whether it's an instruction or not.

Yes. The byteswap logic is not related to instruction or data. I will move it
into kvmppc_ld32().
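
Roughly like this, I think (untested, helpers as in the patch):

static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
			      u32 *ptr, bool data)
{
	int ret;

	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);

	/* swap into host order here, so kvmppc_ld_inst() and the other
	 * callers no longer have to do it themselves */
	if (kvmppc_need_byteswap(vcpu))
		*ptr = swab32(*ptr);

	return ret;
}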

Thanks Alex,

C.



Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Alexander Graf

On 08.10.2013, at 16:12, Cédric Le Goater  wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates it. If the guest is running in Little Endian mode,
> the instruction needs to be byte-swapped before being emulated.
> 
> This patch stores the last instruction in the endian order of the
> host, byte-swapping it if needed. The common code which fetches
> 'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
> exit paths for the Book3S PR and HV guests use their own version
> in assembly.
> 
> Finally, the meaning of the 'is_bigendian' argument of the
> routines kvmppc_handle_load() and kvmppc_handle_store() is
> slightly changed to represent a possible reverse operation. This
> is used in conjunction with kvmppc_is_bigendian() to determine if
> the instruction being emulated should be byte-swapped.
> 
> Signed-off-by: Cédric Le Goater 
> ---
> 
> Changes in v2:
> 
> - replaced rldicl. by andi. to test the MSR_LE bit in the guest
>   exit paths. (Paul Mackerras)
> 
> - moved the byte swapping logic to kvmppc_handle_load() and
>   kvmppc_handle_store() by changing the is_bigendian parameter
>   meaning. (Paul Mackerras)
> 
> arch/powerpc/include/asm/kvm_book3s.h   |9 -
> arch/powerpc/include/asm/kvm_ppc.h  |   10 +-
> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++
> arch/powerpc/kvm/book3s_segment.S   |   10 ++
> arch/powerpc/kvm/emulate.c  |1 -
> arch/powerpc/kvm/powerpc.c  |   16 
> 6 files changed, 46 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
> b/arch/powerpc/include/asm/kvm_book3s.h
> index 4ee6c66..f043e62 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, 
> ulong *eaddr,
> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> u32 *inst)
> {
> - return kvmppc_ld32(vcpu, eaddr, inst, false);
> + int ret;
> +
> + ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> +
> + if (kvmppc_need_byteswap(vcpu))
> + *inst = swab32(*inst);

This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to 
need a byteswap, regardless of whether it's an instruction or not.


Alex



[PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

2013-10-08 Thread Cédric Le Goater
MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, byte-swapping it if needed. The common code which fetches
'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
exit paths for the Book3S PR and HV guests use their own version
in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to represent a possible reverse operation. This
is used in conjunction with kvmppc_is_bigendian() to determine if
the instruction being emulated should be byte-swapped.

Signed-off-by: Cédric Le Goater 
---

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |9 -
 arch/powerpc/include/asm/kvm_ppc.h  |   10 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++
 arch/powerpc/kvm/book3s_segment.S   |   10 ++
 arch/powerpc/kvm/emulate.c  |1 -
 arch/powerpc/kvm/powerpc.c  |   16 
 6 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
b/arch/powerpc/include/asm/kvm_book3s.h
index 4ee6c66..f043e62 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong 
*eaddr,
 static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
  u32 *inst)
 {
-   return kvmppc_ld32(vcpu, eaddr, inst, false);
+   int ret;
+
+   ret = kvmppc_ld32(vcpu, eaddr, inst, false);
+
+   if (kvmppc_need_byteswap(vcpu))
+   *inst = swab32(*inst);
+
+   return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h 
b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@ extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-  unsigned int rt, unsigned int bytes,
-  int is_bigendian);
+ unsigned int rt, unsigned int bytes,
+ int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-   unsigned int rt, unsigned int bytes,
-   int is_bigendian);
+  unsigned int rt, unsigned int bytes,
+  int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-   u64 val, unsigned int bytes, int is_bigendian);
+  u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
   struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S 
b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..ff7da8b 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,21 @@ fast_interrupt_c_return:
lwz r8, 0(r10)
mtmsrd  r3
 
+   ld  r0, VCPU_MSR(r9)
+
+   andi.   r10, r0, MSR_LE
+
/* Store the result */
stw r8, VCPU_LAST_INST(r9)
 
+   beq after_inst_store
+
+   /* Swap and store the result */
+   addi    r11, r9, VCPU_LAST_INST
+   stwbrx  r8, 0, r11
+
/* Unset guest mode. */
+after_inst_store:
li  r0, KVM_GUEST_MODE_HOST_HV
stb r0, HSTATE_IN_GUEST(r13)
b   guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S 
b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..677ef7a 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,18 @@ ld_last_inst:
sync
 
 #endif
+   ld  r8, SVCPU_SHADOW_SRR1(r13)
+
+   andi.   r10, r8, MSR_LE
+
stw r0, SVCPU_LAST_INST(r13)
 
+   beq no_ld_last_inst
+
+   /* swap and store the result */
+   addi    r11, r13, SVCPU_LAST_INST
+   stwbrx  r0, 0, r11
+
 no_ld_last_inst:
 
/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emu