On 14.09.2023 20:15, Shawn Anastasio wrote:
> On 9/13/23 2:29 AM, Jan Beulich wrote:
>> On 12.09.2023 20:35, Shawn Anastasio wrote:
>>> --- a/xen/arch/ppc/include/asm/bitops.h
>>> +++ b/xen/arch/ppc/include/asm/bitops.h
>>> @@ -1,9 +1,335 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-or-later */
>>> +/*
>>> + * Adapted from Linux's arch/powerpc/include/asm/bitops.h.
>>> + *
>>> + * Merged version by David Gibson <da...@gibson.dropbear.id.au>.
>>> + * Based on ppc64 versions by: Dave Engebretsen, Todd Inglett, Don
>>> + * Reed, Pat McCarthy, Peter Bergner, Anton Blanchard.  They
>>> + * originally took it from the ppc32 code.
>>> + */
>>>  #ifndef _ASM_PPC_BITOPS_H
>>>  #define _ASM_PPC_BITOPS_H
>>>
>>> +#include <asm/memory.h>
>>> +
>>> +#define __set_bit(n, p)         set_bit(n, p)
>>> +#define __clear_bit(n, p)       clear_bit(n, p)
>>> +
>>> +#define BITOP_BITS_PER_WORD     32
>>> +#define BITOP_MASK(nr)          (1U << ((nr) % BITOP_BITS_PER_WORD))
>>> +#define BITOP_WORD(nr)          ((nr) / BITOP_BITS_PER_WORD)
>>> +#define BITS_PER_BYTE           8
>>> +
>>>  /* PPC bit number conversion */
>>> -#define PPC_BITLSHIFT(be)  (BITS_PER_LONG - 1 - (be))
>>> -#define PPC_BIT(bit)               (1UL << PPC_BITLSHIFT(bit))
>>> -#define PPC_BITMASK(bs, be)        ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))
>>> +#define PPC_BITLSHIFT(be)    (BITS_PER_LONG - 1 - (be))
>>> +#define PPC_BIT(bit)         (1UL << PPC_BITLSHIFT(bit))
>>> +#define PPC_BITMASK(bs, be)  ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))
>>> +
>>> +/* Macro for generating the ***_bits() functions */
>>> +#define DEFINE_BITOP(fn, op, prefix)                                        \
>>> +static inline void fn(unsigned int mask,                                    \
>>> +                      volatile unsigned int *p_)                            \
>>> +{                                                                           \
>>> +    unsigned int old;                                                       \
>>> +    unsigned int *p = (unsigned int *)p_;                                   \
>>
>> What use is this, when ...
>>
>>> +    asm volatile ( prefix                                                   \
>>> +                   "1: lwarx %0,0,%3,0\n"                                   \
>>> +                   #op "%I2 %0,%0,%2\n"                                     \
>>> +                   "stwcx. %0,0,%3\n"                                       \
>>> +                   "bne- 1b\n"                                              \
>>> +                   : "=&r" (old), "+m" (*p)                                 \
>>
>> ... the "+m" operand isn't used and ...
>>
>>> +                   : "rK" (mask), "r" (p)                                  
>>>     \
>>> +                   : "cc", "memory" );                                     
>>>     \
>>
>> ... there's a memory clobber anyway?
>>
> 
> I see what you're saying, and I'm not sure why it was written this way
> in Linux. That said, this is the kind of thing that I'm hesitant to
> change without knowing the rationale of the original author. If you are
> confident that this can be dropped given that there is already a
> memory clobber, I could drop it. Otherwise I'm inclined to leave it in a
> state that matches Linux.

This being an arch-independent property, I am confident. Yet still you're
the maintainer, so if you want to keep it like this initially, that'll be
okay. If I feel bothered enough, I could always send a patch afterwards.

Jan
