Is there an obvious/Pythonic way to remove duplicates from a 
list (resulting order doesn't matter, or can be sorted 
post facto)?  My first-pass hack was something of the form

 >>> myList = [3,1,4,1,5,9,2,6,5,3,5]
 >>> uniq = dict([(k, None) for k in myList]).keys()

or alternatively

 >>> uniq = list(set(myList))

However, it seems like there's a fair bit of overhead 
here: creating a dictionary just to extract its keys, or 
creating a set just to convert it back to a list.  It feels 
like there's something obvious I'm missing, but I can't 
put my finger on it.
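
The closest I've come to a version that also preserves the 
original order is an explicit loop with a "seen" set -- just a 
rough sketch, and it still feels like a lot of machinery for 
such a simple job:

 def dedupe(items):
     """Return the unique items, keeping first-seen order."""
     seen = set()
     result = []
     for item in items:
         if item not in seen:       # only keep the first occurrence
             seen.add(item)
             result.append(item)
     return result

 >>> dedupe([3,1,4,1,5,9,2,6,5,3,5])
 [3, 1, 4, 5, 9, 2, 6]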

Thanks...

-tkc




