First and foremost, thanks for the effort you put into making tutorials.
I will explain the binary search method in this post.
To get an intuition for how the algorithm works, let's consider only x >= 1
first. I assume readers already know the following equalities.
log2(x) = 1 + log2(x/2) ---- (1)
log2(x) = 0.5 * log2(x^2) ---- (2)
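As a quick sanity check, both can be verified numerically against the log2 in Nim's std/math (a minimal sketch, separate from the algorithm below):

import std/math

let x = 3.2
echo log2(x), " = ", 1.0 + log2(x / 2)   # equality (1)
echo log2(x), " = ", 0.5 * log2(x * x)   # equality (2)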
The core of the algorithm is how to exploit this recursive structure of
log2(x).
The process is:
1. if x >= 2, apply (1), making x become x/2
2. otherwise (i.e. 1 <= x < 2), apply (2), making x become x*x
Let's see an example
log2(3.2)
= 1 + log2(1.6) by (1)
= 1 + 0.5 * log2(2.56) by (2)
= 1 + 0.5 * (1 + log2(1.28)) by (1)
= 1 + 0.5 * (1 + 0.5 * log2(1.6384)) by (2)
= 1 + 0.5 * (1 + 0.5 * 0.5 * log2(2.68435456)) by (2)
= 1 + 0.5 * (1 + 0.5 * 0.5 * (1 + log2(1.34217728))) by (1)
= (1 + 0.5 + 0.5^3) + 0.5^3 * log2(1.34217728) expand
= 1.625 + 0.125 * log2(1.34217728) simplify
Look at how x changes: 3.2 -> 1.6 -> 2.56 -> 1.28 -> 1.6384 -> 2.68435456 ->
1.34217728 -> ... We have controlled the value of x to oscillate between 1 and
2. Since 1 <= x < 2, we have 0 <= log2(x) < 1, so we can say that
1.625 + 0.125 * 0 <= log2(3.2) < 1.625 + 0.125 * 1
1.625 <= log2(3.2) < 1.75
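Indeed, a check against std/math's log2 confirms the true value falls in this range:

import std/math
echo log2(3.2)   # ≈ 1.6781, within [1.625, 1.75)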
And notice that during the above process, each time (2) is applied, another
coefficient of 0.5 multiplies log2(...), so the width of the bound on log2(3.2)
keeps halving. If we want the bound to be smaller than some B, we can continue
the above process until the coefficient is smaller than B.
Coding the above process:
const B = 0.0001
# This equality holds throughout the process:
#   log2(3.2) = y + c * log2(x)
var x = 3.2
var c = 1.0
var y = 0.0
while c > B:
  if x >= 2:
    x /= 2      # apply (1)
    y += c
  else:
    x *= x      # apply (2)
    c /= 2
echo "so, ", y, " <= log2(3.2) < ", y + B
# so, 1.677978515625 <= log2(3.2) < 1.678078515625
I hope readers can follow up to this point. If you understand the case x >= 1,
the case 0 < x < 1 is similar. We make use of another equality:
log2(x) = -1 + log2(x*2) ---- (3)
The process now becomes
1. if x <= 0.5, apply (3), making x become 2*x
2. otherwise (i.e. 0.5 < x < 1), apply (2), making x become x*x
so the process keeps x between 0.5 and 1, hence -1 <= log2(x) < 0, and
therefore y - c <= log2(n) < y for the original input n (with the same
invariant log2(n) = y + c * log2(x) as before).
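Here is a minimal sketch of the same bounded loop for this case, using 0.4 as an example input:

const B = 0.0001
# This equality holds during the process:
#   log2(0.4) = y + c * log2(x)
var x = 0.4
var c = 1.0
var y = 0.0
while c > B:
  if x <= 0.5:
    x *= 2      # apply (3)
    y -= c
  else:
    x *= x      # apply (2)
    c /= 2
echo "so, ", y - B, " <= log2(0.4) < ", y
# log2(0.4) ≈ -1.3219 lies inside the printed bound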
Combining both cases, and making the bound as small as possible (i.e. the
machine epsilon for floating point), we get
import std/fenv  # for epsilon

func log2(n: float): float =
  # Invariant: log2(n) = result + c * log2(x).
  # Assumes n > 0; result is Nim's implicit return variable, starting at 0.0.
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= 2:
      x /= 2      # apply (1)
      result += c
    elif x <= 0.5:
      x *= 2      # apply (3)
      result -= c
    else:
      x *= x      # apply (2)
      c /= 2
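A quick usage check (output values shown approximately):

echo log2(3.2)      # ≈ 1.6780719051126
echo log2(1024.0)   # 10.0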
For other bases, precompute the coefficient, or just use `log(n,b) =
log(n)/log(b)`:
const inv_log2_10 = 1/log2(10)
func log10(n: float): float = inv_log2_10 * log2(n)
func log(n, b: float): float = log2(n)/log2(b)
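For example:

echo log10(1000.0)    # ≈ 3.0
echo log(81.0, 3.0)   # ≈ 4.0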
It is also possible to modify log2 for other bases, like the following:
func log(n, b: float): float =
  # Invariant: log(n, b) = result + c * log(x, b).
  # Assumes n > 0 and b > 1; epsilon comes from std/fenv, as above.
  let inv_b = 1/b
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= b:
      x *= inv_b
      result += c
    elif x <= inv_b:
      x *= b
      result -= c
    else:
      x *= x
      c /= 2
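A quick check of this version:

echo log(100.0, 10.0)   # ≈ 2.0
echo log(0.125, 2.0)    # -3.0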
My feeling is that it **may** be numerically unstable for some extreme values
of the base, but in theory floating point multiplication does not lose
significance (or loses it only very slowly), so x should keep enough
significance during the whole process and it should be fine.
Lastly, the binary search method gains 1 bit of accuracy per squaring
iteration; the accuracy is arbitrary, and the method is suitable for
decimal/fraction arithmetic (not only floating point). But notice that, due to
the squaring of x, the size of x grows exponentially in a naive
arbitrary-precision implementation.
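As a rough check of the 1-bit-per-squaring claim, here is a hypothetical counting variant (log2Count is my own name, not from the code above); for float64 it should report 53 squarings, one per halving of c from 1.0 down past epsilon:

import std/fenv

func log2Count(n: float): tuple[value: float, squarings: int] =
  # Same algorithm as log2 above, but also counts the squaring steps.
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= 2:
      x /= 2
      result.value += c
    elif x <= 0.5:
      x *= 2
      result.value -= c
    else:
      x *= x
      c /= 2
      inc result.squarings

echo log2Count(3.2)   # expect value ≈ 1.6781 and 53 squarings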