Hi, I am making a web crawler in Python. To avoid duplicate URLs, I have to maintain lists of downloaded URLs and to-be-downloaded URLs, of which the latter grows exponentially, resulting in a MemoryError exception. What are the possible ways to avoid this?
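
A minimal sketch of the setup described above, assuming a set holds the already-seen URLs and a deque holds the pending ones (the regex link extraction is only a stand-in for whatever parser the crawler really uses):

    import re
    import urllib.request
    from collections import deque

    seen = set()       # every URL that has ever been queued or downloaded
    pending = deque()  # URLs still waiting to be downloaded

    def enqueue(url):
        # Queue a URL only the first time it is seen, so duplicates
        # never inflate the pending queue.
        if url not in seen:
            seen.add(url)
            pending.append(url)

    enqueue("http://example.com/")

    while pending:
        url = pending.popleft()
        try:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip pages that fail to download or decode
        # Crude link extraction, just for the sketch; a real crawler
        # would use a proper HTML parser.
        for link in re.findall(r'href="(http[^"]+)"', html):
            enqueue(link)

If even the deduplicated structures outgrow memory, the same bookkeeping can be moved out of RAM, for example by keeping the seen URLs and the pending queue in an on-disk store such as the standard library's sqlite3 or shelve modules instead of in-memory containers.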