[Giovanni Bajo]
> Since we're at it, do you have any idea on why Python is so slow when
> doing such bit-full code?
Is it, compared to the other interpreted languages?  Doesn't look like
it.  The cost of going around the eval loop once utterly swamps the cost
of doing a bitwise operation on native ints, so relative interpreter
overhead is massive.

> See for instance
> http://shootout.alioth.debian.org/debian/benchmark.php?test=nsievebits&lang=all,
> where the Python version needs to play caching tricks to get some speed.

So they mandate use of an algorithm that's naive in several ways.  What
a waste of time ;-)  You can get a strong speedup (at the cost of
strongly increased memory use) by using a vector of bits instead of
playing C-ish chunking games:

    def primes_in_range(M):
        bits = [1] * M
        count = 0
        for prime in xrange(2, M):
            if bits[prime]:
                count += 1
                bits[2*prime : M : prime] = [0] * ((M-1)//prime - 1)
        return count

That does "the inner loop" (turned into one slice assignment) at C
speed, so is much faster.

_______________________________________________
Python-3000 mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-3000
Unsubscribe: http://mail.python.org/mailman/options/python-3000/archive%40mail-archive.com
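[For readers trying the sieve above under Python 3, where xrange is
gone: here is a minimal port using range, plus a small trial-division
cross-check.  The helper name count_primes_naive and the check itself
are my additions, not part of the original post.]

```python
def primes_in_range(M):
    """Count the primes below M using a bit vector (Python 3 port:
    xrange -> range; otherwise the same slice-assignment trick)."""
    bits = [1] * M  # bits[i] == 1 means "i not yet marked composite"
    count = 0
    for prime in range(2, M):
        if bits[prime]:
            count += 1
            # Knock out every multiple of `prime` below M in one
            # C-speed slice assignment.  The RHS length must equal the
            # slice length: the multiples 2*prime, 3*prime, ... < M,
            # of which there are (M-1)//prime - 1.
            bits[2*prime : M : prime] = [0] * ((M - 1) // prime - 1)
    return count

def count_primes_naive(M):
    """O(M * sqrt(M)) trial division, used only as a cross-check."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(M) if is_prime(n))

if __name__ == "__main__":
    for M in (0, 2, 10, 100, 1000):
        assert primes_in_range(M) == count_primes_naive(M)
    print(primes_in_range(1000))  # 168 primes below 1000
```

The length arithmetic on the right-hand side is the delicate part: an
extended (stepped) slice assignment in Python requires the replacement
sequence to have exactly the same length as the slice, which is why the
`(M - 1) // prime - 1` term must be exact.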
