On 22.02.2024 10:05, Roger Pau Monne wrote:
> The usage of a cmpxchg loop in hpet_get_channel() is unnecessary, as the same
> can be achieved with an atomic increment, which is both simpler to read and
> avoids any need for a loop.
> 
> Note there can be a small divergence in the channel returned if next_channel
> overflows, but the returned channel will always be in the [0, num_hpets_used)
> range, and that's fine for the purpose of balancing HPET channels across CPUs.
> This is also theoretical, as there's no system currently with 2^32 CPUs (as
> long as next_channel is 32 bits wide).

The code change looks good to me, but I fail to see the connection to
2^32 CPUs. So it feels like I'm missing something, in which case I'd
rather not offer any R-b.

Jan

> Signed-off-by: Roger Pau Monné <roger....@citrix.com>
> ---
>  xen/arch/x86/hpet.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
> index d982b0f6b2c9..4777dc859d96 100644
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -458,11 +458,7 @@ static struct hpet_event_channel *hpet_get_channel(unsigned int cpu)
>      if ( num_hpets_used >= nr_cpu_ids )
>          return &hpet_events[cpu];
>  
> -    do {
> -        next = next_channel;
> -        if ( (i = next + 1) == num_hpets_used )
> -            i = 0;
> -    } while ( cmpxchg(&next_channel, next, i) != next );
> +    next = arch_fetch_and_add(&next_channel, 1) % num_hpets_used;
>  
>      /* try unused channel first */
>      for ( i = next; i < next + num_hpets_used; i++ )

