[steve@sylar ~]$ python2.7 -m timeit -s "L = [(1,2), (3,4), (0,5),
(9,100), (3,7), (2,8)]" "sorted(L, lambda (p,q),(r,s): cmp(p*s, q*r))"
10000 loops, best of 3: 25.1 usec per loop

[steve@sylar ~]$ python2.7 -m timeit -s "L = [(1,2), (3,4), (0,5),
(9,100), (3,7), (2,8)]" -s "from fractions import Fraction" "sorted(L,
key=lambda t: Fraction(*t))"
1000 loops, best of 3: 236 usec per loop


So for a short list, I get a factor of ten difference. For a longer
list, I'd expect the key function to win out. Much to my astonishment,
it doesn't -- I get similar results regardless of the size of L.

This shouldn't be surprising. The cost is primarily in the comparisons,
not in the creation of the Fraction objects: with a key function, the n
Fraction constructions are a one-time O(n) cost, while the sort still
performs O(n log n) comparisons, so the comparisons dominate for large
lists. Now, the number of comparisons won't change whether you use a
cmp function or key objects; the algorithm will compare and swap the
same elements in the same order. So if Fraction.__lt__ takes seven
times as long as a cmp call, that ratio is preserved even for very
long lists.
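(For readers on Python 3, where sorted() no longer accepts a cmp
argument, the same cross-multiplying comparison can be expressed via
functools.cmp_to_key. A minimal sketch, using the list from the timeit
runs above; frac_cmp is my name for the comparison function:)

```python
from fractions import Fraction
from functools import cmp_to_key

L = [(1, 2), (3, 4), (0, 5), (9, 100), (3, 7), (2, 8)]

def frac_cmp(a, b):
    # Compare p/q with r/s by cross-multiplying: p*s versus q*r.
    # (Valid because all denominators here are positive.)
    p, q = a
    r, s = b
    return (p * s > q * r) - (p * s < q * r)

by_cmp = sorted(L, key=cmp_to_key(frac_cmp))
by_key = sorted(L, key=lambda t: Fraction(*t))
assert by_cmp == by_key
```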

A lot can be saved if __lt__ assumes that the other object is of
the same kind:

class Fcomp(tuple):
    # sorted() only needs __lt__; compare p/q < r/s by
    # cross-multiplying (valid while denominators are positive).
    def __lt__(self, other):
        return self[0]*other[1] < self[1]*other[0]

python -m timeit -s "L = [(1,2), (3,4), (0,5), (9,100), (3,7), (2,8)]" \
    -s "from fcomp import Fcomp" "sorted(L, key=Fcomp)"
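(A quick self-contained sanity check, my addition, that the Fcomp
ordering agrees with the Fraction ordering on this data, without
needing a separate fcomp module:)

```python
from fractions import Fraction

class Fcomp(tuple):
    # Only __lt__ is needed: list.sort()/sorted() compare with < only.
    def __lt__(self, other):
        # Cross-multiplication; assumes positive denominators.
        return self[0] * other[1] < self[1] * other[0]

L = [(1, 2), (3, 4), (0, 5), (9, 100), (3, 7), (2, 8)]
assert sorted(L, key=Fcomp) == sorted(L, key=lambda t: Fraction(*t))
```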

Regards,
Martin

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev