Scott David Daniels wrote:

Since floating point has to identify the position of the highest bit,
you can use that hardware to "quickly" get to the highest bit.  IEEE,
as math.frexp presents it, has the mantissa in the range
.5 <= mantissa < 1.0, but some other floating-point formats used
different ranges for the mantissa.  This should work for anything where
the exponent is truly a "binary point."  In an older IBM floating-point
format, for example, the exponent was a "hexadecimal point," so it only
went up by 1 when the value was multiplied by 16.

import math

EXP_OF_ONE = math.frexp(1.0)[1]

def high_bit(value):
    '''Find the highest order bit of a positive integer <= maxfloat'''
    assert value > 0
    return math.frexp(value)[1] - EXP_OF_ONE

Your point, that taking floor(log2(x)) is redundant, is a good catch.  However, you should have added 'untested' ;-).  When value has more significant bits than the fp mantissa can hold, this expression can be off by 1 (but no more, I assume).  The following correction should be sufficient:

res = math.frexp(value)[1] - EXP_OF_ONE
test = 1 << res
if test > value:       res -= 1
elif 2 * test < value: res += 1
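
For readers following along, here is the original snippet and that correction folded into a single function.  This is only a minimal sketch, and the name high_bit_checked is mine rather than from either post:

import math

EXP_OF_ONE = math.frexp(1.0)[1]

def high_bit_checked(value):
    '''Index of the highest set bit of a positive integer <= maxfloat.'''
    assert value > 0
    res = math.frexp(value)[1] - EXP_OF_ONE
    test = 1 << res
    if test > value:        # float conversion rounded value up past 2**res
        res -= 1
    elif 2 * test < value:  # guard for the other direction, if it can occur
        res += 1
    return res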

For value = 2**n - 1, n > 53, the uncorrected result is always 1 too high on my Intel machine, so the first correction is sometimes needed.  I do not know whether the second one is.
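
That claim can be checked directly against int.bit_length() (added in Python 2.7), which gives the exact answer; the particular values of n below are just an arbitrary spread around 53:

import math

EXP_OF_ONE = math.frexp(1.0)[1]

for n in (52, 53, 54, 60, 100):
    value = 2**n - 1
    raw = math.frexp(value)[1] - EXP_OF_ONE
    exact = value.bit_length() - 1   # true index of the highest set bit
    print(n, raw, exact)
    # With IEEE doubles, expect raw == exact for n <= 53
    # and raw == exact + 1 for n > 53.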

Terry Jan Reedy
