Anton, it may sound like a joke, but while investigating potential performance problems in our classlib with the DaCapo benchmark, I found that the same kind of method (counting set bits in a bit set) is the hottest method in the antlr benchmark. This method is not from the classlib, though, but from the benchmark itself, and its implementation is very close to ours and not optimized. I wonder if our JIT will someday be able to automatically detect such algorithms and replace them with a more optimized version...
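For reference, here is a minimal sketch of the two approaches being compared: the naive per-bit loop (in the style of BitSet.cardinality() and the benchmark's hot method) versus the table-free "parallel counting" trick from the page Anton linked. This is an illustration, not Harmony's or the benchmark's actual code:

```java
public class BitCount {
    // Naive approach: test every bit position one by one.
    static int naiveCount(int v) {
        int count = 0;
        for (int i = 0; i < 32; i++) {
            if ((v & (1 << i)) != 0) {
                count++;
            }
        }
        return count;
    }

    // Parallel (SWAR) counting, no lookup table: sum adjacent
    // 1-bit, 2-bit, and 4-bit fields in place, then fold the
    // byte sums together with one multiply.
    static int parallelCount(int v) {
        v = v - ((v >>> 1) & 0x55555555);                 // 2-bit sums
        v = (v & 0x33333333) + ((v >>> 2) & 0x33333333);  // 4-bit sums
        v = (v + (v >>> 4)) & 0x0F0F0F0F;                 // 8-bit sums
        return (v * 0x01010101) >>> 24;                   // total of all bytes
    }

    public static void main(String[] args) {
        for (int v : new int[] { 0, 1, 0xFF, 0xDEADBEEF, -1 }) {
            System.out.println(Integer.toHexString(v) + " -> "
                    + naiveCount(v) + " / " + parallelCount(v));
        }
    }
}
```

Both run in constant space; the parallel version does a fixed handful of arithmetic operations instead of 32 loop iterations, which is exactly the kind of 'tricky but fast' rewrite the thread is discussing.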
On 7/25/06, Mikhail Fursov <[EMAIL PROTECTED]> wrote:
On 7/25/06, Anton Luht <[EMAIL PROTECTED]> wrote:
> For example, consider the current implementation of
> java.util.BitSet.cardinality(). It just checks all bits one by one
> and increments a count. [1] gives an overview of algorithms for counting
> set bits. The fastest algorithms use table lookup, which requires
> additional memory, but there are algorithms that are several times
> faster than iteration (in C) and don't require tables. I believe such
> local optimizations, with good comments and links to the corresponding
> resources, won't do any harm and are a good place to start.
>
> Does it make sense?
>
> [1] http://www-db.stanford.edu/~manku/bitcount/bitcount.html

Yes, the BitSet code looks unoptimized, but at least it's readable :) I always use Wikipedia to read about implementations of classic algorithms. E.g. it also covers a bit-counting algorithm, with a reference to the book where it was published: http://en.wikipedia.org/wiki/Bit_array
So when improving an algorithm and making it less readable and 'tricky', I propose adding a reference to the original source.

-- Mikhail Fursov
-- Mikhail Fursov