On Fri, 07 Aug 2009 17:53:10 -0300, kj <no.em...@please.post> wrote:

Suppose that x is some list.  To produce a version of the list with
duplicate elements removed one could, I suppose, do this:

    x = list(set(x))

but I expect that this will not preserve the original order of
elements.
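
A tiny sketch of that concern (the exact iteration order of a set is an
implementation detail, so the round-trip only guarantees the members, not
their sequence):

```python
# Illustrative only: converting through a set keeps the unique members
# but gives no guarantee about their order.
x = [3, 1, 2, 1, 3]
deduped = list(set(x))
# The members are 1, 2 and 3, but in no particular order:
assert sorted(deduped) == [1, 2, 3]
```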

I suppose that I could write something like

def uniquify(items):
    seen = set()
    ret = []
    for i in items:
        if not i in seen:
            ret.append(i)
            seen.add(i)
    return ret

Assuming the elements are hashable, yes, that's the fastest pure-Python
approach (apart from some micro-optimizations, such as binding ret.append
and seen.add to local names, or writing `i not in seen` instead of
`not i in seen`).
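
For illustration, a sketch of the same function with those
micro-optimizations applied (uniquify_fast is my name for this variant,
not anything from the thread):

```python
def uniquify_fast(items):
    # Same algorithm as uniquify, with the bound methods hoisted into
    # local names so the loop avoids repeated attribute lookups.
    seen = set()
    seen_add = seen.add
    ret = []
    ret_append = ret.append
    for i in items:
        if i not in seen:       # 'not in' instead of 'not i in seen'
            ret_append(i)
            seen_add(i)
    return ret

print(uniquify_fast([1, 2, 1, 3, 2]))  # [1, 2, 3]
```

The output is identical to the original; only the constant factor inside
the loop changes, which usually matters only for large inputs.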

See bearophile's recipe [1], another one [2] by Tim Peters (quite old, but
the comment section is worth reading), and this thread [3]:

[1] http://code.activestate.com/recipes/438599/
[2] http://code.activestate.com/recipes/52560/
[3] http://groups.google.com/group/comp.lang.python/t/40c6c455f4fd5154/

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list