On Mar 19, 10:08 am, sturlamolden <[EMAIL PROTECTED]> wrote:
> On 18 Mar, 23:45, Arnaud Delobelle <[EMAIL PROTECTED]> wrote:
>
> > > def nonunique(lst):
> > >    slst = sorted(lst)
> > >    dups = [s[0] for s in
> > >         filter(lambda t : t[0] == t[1], zip(slst[:-1],slst[1:]))]
> > >    return [dups[0]] + [s[1] for s in
> > >         filter(lambda t : t[0] != t[1], zip(dups[:-1],dups[1:]))]
>
> > Argh!  What's wrong with something like:
>
> > def duplicates(l):
> >     i = j = object()
> >     for k in sorted(l):
> >         if i != j == k: yield k
> >         i, j = j, k
>
> Nice, and more readable. But I'd use Paul Robin's solution. It is O(N),
> as opposed to ours, which are O(N log N).

I'd use Raymond Hettinger's solution. It is O(N) like Paul's, and
IMHO more readable.
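Neither Paul's nor Raymond's code is quoted upthread, but for the
archives, the O(N) idea being compared is a single counting pass over
the unsorted list. A minimal sketch (my own illustration, not either
poster's actual code), using collections.defaultdict:

from collections import defaultdict

def duplicates(lst):
    # One pass to count occurrences: O(N) time, O(N) extra space,
    # versus the O(N log N) sort-based versions quoted above.
    counts = defaultdict(int)
    for item in lst:
        counts[item] += 1
    # Report each duplicated value exactly once.
    return [item for item, count in counts.items() if count > 1]

E.g. duplicates([1, 2, 2, 3, 3, 3]) gives [2, 3] (in no guaranteed
order). Note that this requires hashable elements, whereas the
sort-based versions only need the elements to be orderable.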