Please consider batching this data and doing larger writes. Thrashing the hard drive is not a good plan for performance or hardware longevity. For example, crawl an entire FQDN and then write out the results in one operation. If your job fails in the middle and you have to start that FQDN over, no big deal. If that's too big of a chunk for your purposes, perhaps break each FQDN up into top-level directories and crawl each of those in one operation before writing to disk.
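Something along these lines, where fetch() is just a stand-in for whatever your crawler actually does per URL (this sketch assumes 2.x-era urllib2 and JSON output; adapt to taste):

    import json
    import urllib2

    def fetch(url):
        # Stand-in for your real per-URL crawl logic.
        return {'url': url, 'length': len(urllib2.urlopen(url).read())}

    def crawl_fqdn(fqdn, urls):
        # Accumulate results for the whole FQDN in memory...
        results = [fetch(url) for url in urls]
        # ...then hit the disk exactly once.
        with open('results-%s.json' % fqdn, 'w') as f:
            json.dump(results, f)

One write per FQDN instead of several writes per second, and if the job dies mid-FQDN you just redo that one FQDN.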
There are existing solutions for managing job queues, so you can pick whichever you like. If you're unfamiliar with them, maybe start by looking at Celery (quick sketch at the bottom of this message).

Michael

On Mon, Jul 9, 2012 at 1:52 AM, Plumo <richar...@gmail.com> wrote:
>> What are you keeping in this status file that needs to be saved
>> several times per second? Depending on what type of state you're
>> storing and how persistent it needs to be, there may be a better way
>> to store it.
>>
>> Michael
>
> This is for a threaded web crawler. I want to cache what URLs are
> currently in the queue so if terminated the crawler can continue next
> time from the same point.
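P.S. In case Celery is new to you, here's a minimal sketch. The broker URL (a local Redis here) and the task body are placeholders, not a recommendation:

    from celery import Celery

    # Assumes a Redis broker on localhost; point this at whatever you run.
    app = Celery('crawler', broker='redis://localhost:6379/0')

    @app.task
    def crawl(url):
        # Your per-URL crawl logic goes here.
        pass

    # Enqueue work. The broker holds the queue, so pending URLs can
    # survive a crash (as long as the broker itself persists them) and a
    # restarted worker carries on from where it left off.
    crawl.delay('http://example.com/')

That gets you the resume-after-termination behavior without writing your own status file at all.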