On 24.05.2024 22:03, Andrew Cooper wrote:
> generic_f?s() being static inline is the cause of lots of the complexity
> between the common and arch-specific bitops.h
> 
> They appear to be static inline for constant-folding reasons (ARM uses them
> for this), but there are better ways to achieve the same effect.
> 
> It is presumptuous that an unrolled binary search is the right algorithm to
> use on all microarchitectures.  Indeed, it's not for the eventual users, but
> that can be addressed at a later point.
> 
> It is also nonsense to implement the int form as the base primitive and
> construct the long form from 2x int in 64-bit builds, when it's just one extra
> step to operate at the native register width.
> 
> Therefore, implement generic_f?sl() in lib/.  They're not actually needed in
> x86/ARM/PPC by the end of the cleanup (i.e. the functions will be dropped by
> the linker), and they're only expected be needed by RISC-V on hardware which
> the linker), and they're only expected to be needed by RISC-V on hardware
> which lacks the Zbb extension.
> 
> Implement generic_fls() in terms of generic_flsl() for now, but this will be
> cleaned up in due course.
> 
> Provide basic runtime testing using __constructor inside the lib/ file.  This
> is important, as it means testing runs if and only if generic_f?sl() are used
> elsewhere in Xen.
> 
> Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>

Reviewed-by: Jan Beulich <jbeul...@suse.com>
with a suggestion and a question.

> I suspect we want to swap CONFIG_DEBUG for CONFIG_BOOT_UNIT_TESTS in due
> course.  These ought to be usable in a release build too.

+1

> --- /dev/null
> +++ b/xen/lib/generic-ffsl.c
> @@ -0,0 +1,65 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/bitops.h>
> +#include <xen/boot-check.h>
> +#include <xen/init.h>
> +
> +unsigned int generic_ffsl(unsigned long x)
> +{
> +    unsigned int r = 1;
> +
> +    if ( !x )
> +        return 0;
> +
> +#if BITS_PER_LONG > 32

To be future-proof, perhaps ahead of this

#if BITS_PER_LONG > 64
# error "..."
#endif

or a functionally similar BUILD_BUG_ON()?
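The BUILD_BUG_ON() form would amount to a compile-time width check along
these lines (sketched standalone with C11 _Static_assert, which is what
such macros typically expand to; BITS_PER_LONG is derived locally here
rather than taken from Xen's headers):

```c
#include <limits.h>

/* Stand-in for Xen's BITS_PER_LONG, derived from the C implementation. */
#define BITS_PER_LONG ((int)(CHAR_BIT * sizeof(unsigned long)))

/*
 * Fail the build, rather than silently miscomputing, if long ever grows
 * beyond 64 bits and the binary search is missing a step.
 */
_Static_assert(BITS_PER_LONG <= 64,
               "generic_ffsl() needs another step for BITS_PER_LONG > 64");
```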

> --- /dev/null
> +++ b/xen/lib/generic-flsl.c
> @@ -0,0 +1,68 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/bitops.h>
> +#include <xen/boot-check.h>
> +#include <xen/init.h>
> +
> +/* Mask of type UL with the upper x bits set. */
> +#define UPPER_MASK(x) (~0UL << (BITS_PER_LONG - (x)))
> +
> +unsigned int generic_flsl(unsigned long x)
> +{
> +    unsigned int r = BITS_PER_LONG;
> +
> +    if ( !x )
> +        return 0;
> +
> +#if BITS_PER_LONG > 32
> +    if ( !(x & UPPER_MASK(32)) )
> +    {
> +        x <<= 32;
> +        r -= 32;
> +    }
> +#endif
> +    if ( !(x & UPPER_MASK(16)) )
> +    {
> +        x <<= 16;
> +        r -= 16;
> +    }
> +    if ( !(x & UPPER_MASK(8)) )
> +    {
> +        x <<= 8;
> +        r -= 8;
> +    }
> +    if ( !(x & UPPER_MASK(4)) )
> +    {
> +        x <<= 4;
> +        r -= 4;
> +    }
> +    if ( !(x & UPPER_MASK(2)) )
> +    {
> +        x <<= 2;
> +        r -= 2;
> +    }
> +    if ( !(x & UPPER_MASK(1)) )
> +    {
> +        x <<= 1;
> +        r -= 1;
> +    }
> +
> +    return r;
> +}

While, as you say, the expectation is for this code to not commonly come
into actual use, I still find the algorithm a little inefficient in terms
of the constants used, specifically considering how they would need
instantiating in resulting assembly. It may be that Arm's fancy constant-
move insns can actually efficiently synthesize them, but I think on most
other architectures it would be more efficient (and presumably no less
efficient on Arm) to shift the "remaining" value right, thus allowing for
successively smaller (and hence easier to instantiate) constants to be
used.
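Concretely, such a right-shifting variant might look like the following
(an illustrative sketch only, not proposed code for the patch; note how
each successive mask is a small constant that fits common immediate
encodings, unlike the wide UPPER_MASK() values):

```c
#include <limits.h>

/*
 * Hypothetical right-shift variant of generic_flsl(): instead of shifting
 * the value left against large upper-bit masks, shift the remaining bits
 * right, so each test uses a smaller, easier-to-instantiate constant.
 */
static unsigned int flsl_rshift(unsigned long x)
{
    unsigned int r = 1;

    if ( !x )
        return 0;

#if ULONG_MAX > 0xffffffffUL
    if ( x & 0xffffffff00000000UL ) { x >>= 32; r += 32; }
#endif
    if ( x & 0xffff0000UL ) { x >>= 16; r += 16; }
    if ( x & 0xff00 )       { x >>= 8;  r += 8; }
    if ( x & 0xf0 )         { x >>= 4;  r += 4; }
    if ( x & 0xc )          { x >>= 2;  r += 2; }
    if ( x & 0x2 )          { r += 1; }

    return r;
}
```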

Jan
