Re: [PATCH v3 5/6] x86/vPIC: vpic_elcr_mask() master bit 2 control

2023-12-05 Thread Jan Beulich
On 05.12.2023 18:29, Roger Pau Monné wrote:
> On Tue, Nov 28, 2023 at 11:35:46AM +0100, Jan Beulich wrote:
>> @@ -387,7 +387,7 @@ static int cf_check vpic_intercept_elcr_
>>  if ( dir == IOREQ_WRITE )
>>  {
>>  /* Some IRs are always edge trig. Slave IR is always level trig. */
>> -data = (*val >> shift) & vpic_elcr_mask(vpic);
>> +data = (*val >> shift) & vpic_elcr_mask(vpic, 1);
> 
> Not that it matters much, but I think you could use
> vpic_elcr_mask(vpic, 0) to strictly keep the same behavior as
> before?

Indeed, as also said in the description. Personally I view it as (slightly)
more logical to not mask off ...

>>  if ( vpic->is_master )
>>  data |= 1 << 2;
> 
> Since the bit is forcefully set here anyway.

... and then set the bit, hence why I chose to go with 1.
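
A minimal standalone sketch (plain C, independent of the Xen sources) of
why both choices end up equivalent on the write path, given that bit 2
is forced set afterwards:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Exhaustively compare the two write-path variants for the master PIC. */
    for ( unsigned int val = 0; val < 0x100; ++val )
    {
        /* vpic_elcr_mask(vpic, 1) followed by forcing bit 2. */
        uint8_t with_mb2    = (uint8_t)((val & (0xf8 | (1 << 2))) | (1 << 2));
        /* vpic_elcr_mask(vpic, 0) followed by forcing bit 2. */
        uint8_t without_mb2 = (uint8_t)((val & 0xf8) | (1 << 2));

        assert(with_mb2 == without_mb2);
    }

    return 0;
}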

> Regardless:
> 
> Reviewed-by: Roger Pau Monné 

Thanks.

Jan



Re: [PATCH v3 5/6] x86/vPIC: vpic_elcr_mask() master bit 2 control

2023-12-05 Thread Roger Pau Monné
On Tue, Nov 28, 2023 at 11:35:46AM +0100, Jan Beulich wrote:
> Master bit 2 is treated specially: We force it set, but we don't expose
> the bit being set to the guest. While right now the read and write
> handling can easily use the fixed mask, the restore input checking that
> is about to be put in place wants to use the inverted mask to prove that
> no bits are unduly set. That will require master bit 2 to be set in the
> mask. OTOH the read path requires the bit to be clear (the bit can have
> either value for the use on the write path). Hence give use sites
> control over that bit.
> 
> Signed-off-by: Jan Beulich 
> ---
> v3: New, split from larger patch.
> ---
> I'm certainly open to naming suggestions for the new macro parameter.
> "mb2" can certainly be misleading as to Multiboot 2. Yet "master_bit_2"
> it too long for my taste, not the least because of the macro then
> needing to be split across lines.

Let's leave it as mb2; given the context, I think it is difficult to
mistake this code as having anything to do with multiboot.

> 
> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -41,7 +41,7 @@
>  #define vpic_lock(v)   spin_lock(__vpic_lock(v))
>  #define vpic_unlock(v) spin_unlock(__vpic_lock(v))
>  #define vpic_is_locked(v) spin_is_locked(__vpic_lock(v))
> -#define vpic_elcr_mask(v) ((v)->is_master ? 0xf8 : 0xde)
> +#define vpic_elcr_mask(v, mb2) ((v)->is_master ? 0xf8 | ((mb2) << 2) : 0xde)
>  
>  /* Return the highest priority found in mask. Return 8 if none. */
>  #define VPIC_PRIO_NONE 8
> @@ -387,7 +387,7 @@ static int cf_check vpic_intercept_elcr_
>  if ( dir == IOREQ_WRITE )
>  {
>  /* Some IRs are always edge trig. Slave IR is always level trig. */
> -data = (*val >> shift) & vpic_elcr_mask(vpic);
> +data = (*val >> shift) & vpic_elcr_mask(vpic, 1);

Not that it matters much, but I think you could use
vpic_elcr_mask(vpic, 0) to strictly keep the same behavior as
before?

>  if ( vpic->is_master )
>  data |= 1 << 2;

Since the bit is forcefully set here anyway.

Regardless:

Reviewed-by: Roger Pau Monné 

Thanks, Roger.



[PATCH v3 5/6] x86/vPIC: vpic_elcr_mask() master bit 2 control

2023-11-28 Thread Jan Beulich
Master bit 2 is treated specially: We force it set, but we don't expose
the bit being set to the guest. While right now the read and write
handling can easily use the fixed mask, the restore input checking that
is about to be put in place wants to use the inverted mask to prove that
no bits are unduly set. That will require master bit 2 to be set in the
mask. OTOH the read path requires the bit to be clear (the bit can have
either value for the use on the write path). Hence give use sites
control over that bit.

Signed-off-by: Jan Beulich 
---
v3: New, split from larger patch.
---
I'm certainly open to naming suggestions for the new macro parameter.
"mb2" can certainly be misleading as to Multiboot 2. Yet "master_bit_2"
it too long for my taste, not the least because of the macro then
needing to be split across lines.

--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -41,7 +41,7 @@
 #define vpic_lock(v)   spin_lock(__vpic_lock(v))
 #define vpic_unlock(v) spin_unlock(__vpic_lock(v))
 #define vpic_is_locked(v) spin_is_locked(__vpic_lock(v))
-#define vpic_elcr_mask(v) ((v)->is_master ? 0xf8 : 0xde)
+#define vpic_elcr_mask(v, mb2) ((v)->is_master ? 0xf8 | ((mb2) << 2) : 0xde)
 
 /* Return the highest priority found in mask. Return 8 if none. */
 #define VPIC_PRIO_NONE 8
@@ -387,7 +387,7 @@ static int cf_check vpic_intercept_elcr_
     if ( dir == IOREQ_WRITE )
     {
         /* Some IRs are always edge trig. Slave IR is always level trig. */
-        data = (*val >> shift) & vpic_elcr_mask(vpic);
+        data = (*val >> shift) & vpic_elcr_mask(vpic, 1);
         if ( vpic->is_master )
             data |= 1 << 2;
         vpic->elcr = data;
@@ -395,7 +395,7 @@ static int cf_check vpic_intercept_elcr_
     else
     {
         /* Reader should not see hardcoded level-triggered slave IR. */
-        data = vpic->elcr & vpic_elcr_mask(vpic);
+        data = vpic->elcr & vpic_elcr_mask(vpic, 0);
         if ( !shift )
             *val = data;
         else
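
For reference, a minimal standalone sketch (plain C, independent of the
Xen tree; the restore-side check is only a hypothetical shape, since that
code is introduced in a later patch) of the mask values the reworked
macro yields and how the write, read, and prospective restore paths
would use them:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Local copy of the macro, with a plain bool standing in for (v)->is_master. */
#define elcr_mask(is_master, mb2) ((is_master) ? 0xf8 | ((mb2) << 2) : 0xde)

int main(void)
{
    /* Master: bit 2 is part of the mask only when the caller asks for it. */
    assert(elcr_mask(true, 1) == 0xfc);
    assert(elcr_mask(true, 0) == 0xf8);
    /* Slave: mb2 has no effect. */
    assert(elcr_mask(false, 1) == 0xde);
    assert(elcr_mask(false, 0) == 0xde);

    /*
     * Hypothetical restore-side check (shape only): with mb2 == 1 the
     * inverted mask accepts a saved master ELCR that has bit 2 set, which
     * the write path forces anyway.
     */
    uint8_t saved_master_elcr = 0x0c;                /* bits 2 and 3 set */
    assert(!(saved_master_elcr & ~elcr_mask(true, 1)));

    return 0;
}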