"n00m" <[EMAIL PROTECTED]> writes:
> http://rapidshare.com/files/21267938/m1000.txt
> http://rapidshare.com/files/21268386/m4000.txt

I get 33190970 for the first set and 0 for the second set.

The first set only makes 38853 distinct dictionary entries, presumably
because the numbers are all fairly small, so there's a lot of duplication.
The second set makes 8246860 distinct entries.

I got 128.42 sec runtime on the second set, on a 1.2 GHz Pentium M.
That was with a listcomp in the summation, which burned a lot of
extra memory; I didn't bother writing out an explicit summation loop.
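(For anyone curious about the memory point: a minimal sketch, with a made-up
dict standing in for the real one. sum() over a list comprehension
materializes the whole temporary list first; a generator expression feeds
sum() one value at a time and avoids that.)

```python
# Stand-in dictionary; the real one had millions of distinct entries.
entries = {i: i * i for i in range(100000)}

# List comprehension: builds the entire temporary list in memory first.
total_listcomp = sum([v for v in entries.values()])

# Generator expression: same result, no intermediate list.
total_genexp = sum(v for v in entries.values())

assert total_listcomp == total_genexp
```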

I guess I can see a few other possible micro-optimizations but no
obvious algorithm improvements.
-- 
http://mail.python.org/mailman/listinfo/python-list