On Sat, Nov 1, 2014 at 7:02 PM, Christian Gollwitzer <aurio...@gmx.de> wrote:
> Your above algorithm is obviously doing Heron- or Newton-Raphson iterations,
> so the same as with floating point math. The first line before the while
> loop computes some approximation to sqrt(n). Instead of doing bit shuffling,
> you could compute this by FP math and get closer to the desired result,
> unless the integer is too large to be represented by FP. Now, the
> terminating condition seems to rely on the fact that the initial estimate
> x>=sqrt(n), but I don't really understand it. My guess is that if you do
> x=int(sqrt(n)), then do the first iteration, then swap x and y such that
> x>y, then enter the loop, you would simply start with a better estimate in
> case that the significant bits can be represented by the float.
Part of the point of that algorithm is that it never uses FP, and is therefore not limited by FP restrictions.

As to the assumption that x >= sqrt(n), that is safe. If the bit length is even (that is, n is between an odd power of 2 and an even one - for example, 2**13 < 16000 <= 2**14), the initial estimate is the exact square root of the power of two at the top of that range (bit length of 14 means x is 2**7, and 128 == sqrt(16384)). If the bit length is odd (e.g. 2**8 < 300 <= 2**9), the initial estimate rounds the halving upward (bit length of 9 means x is 2**(9//2+1), and 32 > sqrt(512)). So there's a guarantee that the initial estimate is no lower than the square root of the target number.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
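[The original code under discussion isn't quoted in this excerpt; the sketch below is a standard pure-integer Newton's-method isqrt consistent with the description above - bit-length-based initial estimate that is guaranteed >= sqrt(n), then Heron iterations in integer arithmetic, terminating when the estimate stops decreasing. Names are illustrative.]

```python
def isqrt(n):
    """Integer square root (floor) via Newton's method, no floating point.

    The initial estimate x = 2**((bit_length + 1) // 2) satisfies
    x >= sqrt(n): for even bit length it is exactly the square root of
    the power of two at the top of n's range, and for odd bit length the
    halving rounds upward. Starting from above, the iteration decreases
    monotonically until it reaches floor(sqrt(n)).
    """
    if n < 0:
        raise ValueError("isqrt requires a non-negative integer")
    if n == 0:
        return 0
    # Initial estimate, no lower than sqrt(n) by the bit-length argument.
    x = 1 << ((n.bit_length() + 1) // 2)
    while True:
        # One Heron/Newton step, all in integer arithmetic.
        y = (x + n // x) // 2
        if y >= x:
            # Estimate stopped shrinking: x is floor(sqrt(n)).
            return x
        x = y

print(isqrt(16000))  # 126, since 126**2 = 15876 <= 16000 < 127**2
print(isqrt(300))    # 17, since 17**2 = 289 <= 300 < 18**2
```

Because everything stays in integers, this works unchanged on values far beyond what a float can represent exactly, which is exactly the point being made above.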