The bit-number argument to the bitops is currently "int" on most, if
not all, architectures except sparc64, where it is "unsigned long".
This already has the
potential of causing failures on extremely large non-NUMA x86 boxes
(specifically if any one node contains more than 8 TiB of memory, e.g.
in an interleaved memory system.)
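
To make the 8 TiB figure concrete: with the usual one bitmap bit per
4 KiB page, a signed 32-bit bit number tops out at 2^31 pages, and
2^31 * 4 KiB = 8 TiB.  The declaration below is only a sketch of the
shape of the change, not the exact in-tree prototype:

	/*
	 * Sketch only.  Today the bit number is a plain "int" on most
	 * architectures:
	 *
	 *	void set_bit(int nr, volatile unsigned long *addr);
	 *
	 * Widening it to "long" lets the index cover anything the
	 * machine word can address:
	 */
	void set_bit(long nr, volatile unsigned long *addr);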

x86 has hardware bitmask instructions whose bit-offset operand is
signed; this limits the type to either "int" or "long".
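
For reference, a rough sketch of what the "long" variant could look
like on x86-64 (illustrative only, not the in-tree implementation):
with a 64-bit operand size, BTS treats the bit offset in the register
as a signed 64-bit displacement from the base address.

	/*
	 * Illustrative sketch, not the in-tree x86 code.  The non-atomic
	 * flavour is shown; the atomic one would add a LOCK prefix.
	 */
	static inline void sketch_set_bit(long nr, volatile unsigned long *addr)
	{
		asm volatile("bts %1,%0"
			     : "+m" (*addr)
			     : "r" (nr)
			     : "memory");
	}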

It seems pretty clear to me at least that x86-64 really should use
"long".  However, before blindly making that change I wanted to feel
people out for what this should look like across architectures.

Moving this forward, I see a couple of possibilities:

1. We simply change the type to "long" on x86, and let this be a fully
   architecture-specific option.  This is easy, obviously.

2. Same as above, except we also define a typedef for whatever type is
   the bitops argument type (bitops_t?  bitpos_t?)

3. Change the type to "long" Linux-wide, on the logic that it should be
   the same as the general machine width across all platforms.

4. Do some macro hacks so the bitops are dependent on the size of the
   argument (see the sketch after this list).

5. Introduce _long versions of the bitops.

6. Do nothing at all.
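
For option 4, one possible shape (the set_bit_int/set_bit_long helper
names here are hypothetical, not existing interfaces) is to dispatch
on the size of the bit-number argument at compile time, e.g. with
gcc's __builtin_choose_expr:

	/* Hypothetical helpers, one per width. */
	void set_bit_int(int nr, volatile unsigned long *addr);
	void set_bit_long(long nr, volatile unsigned long *addr);

	/*
	 * Pick the wide implementation whenever the caller passes
	 * something bigger than "int"; only the selected call is
	 * ever emitted.
	 */
	#define set_bit(nr, addr)					\
		__builtin_choose_expr(sizeof(nr) > sizeof(int),		\
				      set_bit_long((nr), (addr)),	\
				      set_bit_int((nr), (addr)))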


Are there any 64-bit architectures where a 64-bit argument would be very
costly?

        -hpa