Tim Peters wrote:
> I do the same kind of thing all the time; e.g., here from the tail end
> of a hybrid modular/binary gcd function, which uses parity checks in
> three places:
>
>     # invariants:
>     # a > b >= 0
>     # a is odd
>     # b is odd or 0
>     while b:
>         a %= b
>         if a & 1:
>             a = b-a
>         assert a & 1 == 0
>         a >>= 1
>         if a:
>             while a & 1 == 0:
>                 a >>= 1
>         a, b = b, a
>
>     return a << nbits
>
> For a long time CPython took time proportional to the number of bits
> in n to compute n&1, so it wasn't actually efficient, but recent
> releases repaired that. In any case the code is quite clear and
> directly to the point.
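For readers who want to run the fragment: here is one way the quoted loop might be wrapped into a complete function. Only the "while b:" loop and the final "return a << nbits" come from the quoted message; the function name hybrid_gcd and all of the setup (argument normalization, the nbits counter, and the initial stripping of factors of 2) are a reconstruction for illustration, not Tim's actual code.

    def hybrid_gcd(a, b):
        # Setup below is a reconstruction; only the main loop is quoted above.
        a, b = abs(a), abs(b)
        if a == 0:
            return b
        if b == 0:
            return a
        # Factor out the power of 2 common to both; it is restored at the end.
        nbits = 0
        while (a | b) & 1 == 0:
            a >>= 1
            b >>= 1
            nbits += 1
        # Any remaining factors of 2 cannot be part of the gcd, so drop them.
        while a & 1 == 0:
            a >>= 1
        while b & 1 == 0:
            b >>= 1
        if a < b:
            a, b = b, a
        # invariants: a > b >= 0, a is odd, b is odd or 0
        while b:
            a %= b
            if a & 1:
                a = b - a
            assert a & 1 == 0
            a >>= 1
            if a:
                while a & 1 == 0:
                    a >>= 1
            a, b = b, a
        return a << nbits

With this wrapper, hybrid_gcd(48, 36) returns 12 and hybrid_gcd(270, 192) returns 6.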
While we're at it, do you have any idea why Python is so slow on this kind of bit-heavy code? See for instance http://shootout.alioth.debian.org/debian/benchmark.php?test=nsievebits&lang=all, where the Python version needs to play caching tricks to get decent speed.
-- 
Giovanni Bajo
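For illustration only (this is not the shootout entry itself): the nsieve-bits benchmark packs the sieve's prime flags into individual bits, so a direct Python translation spends its time on expressions like "flags & (1 << i)". A minimal sketch, assuming the same bit-per-flag layout and using one big integer as the bit array:

    def nsieve_bits(n):
        # One Python integer used as a bit array; bits 0..n start out set.
        flags = (1 << (n + 1)) - 1
        count = 0
        for i in range(2, n + 1):
            if flags & (1 << i):          # bit i still set => i is prime
                count += 1
                for j in range(i + i, n + 1, i):
                    flags &= ~(1 << j)    # clear every multiple of i
        return count

Because Python integers are immutable, every bitwise operation here builds a brand-new integer object (flags itself, roughly n bits wide, is rebuilt on every clear), which is a large part of why code in this style is slow without the caching tricks the shootout version resorts to.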
