Hi All,

I use the following command to pause and resume crawls:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1

The problem is that after I pause the crawl and resume it later, the spider still crawls the URLs it already crawled before. The default duplicate filter does not seem to work. Could you tell me what is wrong? I already set dont_filter to False in each Request.
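For reference, here is a stripped-down sketch of how I build the requests (the spider name, domain, and callback are just placeholders, not my real code):

import scrapy

class SomeSpider(scrapy.Spider):
    # "somespider" and example.com stand in for my real spider and site.
    name = "somespider"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        for href in response.css("a::attr(href)").extract():
            # dont_filter=False (the default), so the dupefilter should be
            # able to drop requests already recorded in JOBDIR's requests.seen
            # when the crawl is resumed.
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse,
                dont_filter=False,
            )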


Thanks

Jack


  

