I want to avoid crawling already-visited URLs.

I downloaded the deltafetch.py script and put it next to the
settings.py file.
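
So my project layout looks roughly like this (the module name matches 
the middleware path below; the rest is the usual Scrapy layout):

    General_Spider_code_version_1/
        settings.py
        deltafetch.py
        spiders/
            ...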

I added these settings to my settings.py:

>
> SPIDER_MIDDLEWARES = {
>     'General_Spider_code_version_1.deltafetch.DeltaFetch': 100,
> }
> DELTAFETCH_ENABLED = True
> DOTSCRAPY_ENABLED = True
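
For context, my understanding is that DeltaFetch skips a request only 
if an earlier response with the same request fingerprint produced 
items. A minimal sketch of that logic (simplified, not the actual 
deltafetch.py code):

    from scrapy import Request
    from scrapy.utils.request import request_fingerprint

    class DeltaFetchSketch(object):
        """Simplified stand-in for the real DeltaFetch middleware."""

        def __init__(self):
            # the real middleware persists this dict to a db file
            # under the .scrapy directory
            self.db = {}

        def process_spider_output(self, response, result, spider):
            for r in result:
                if isinstance(r, Request):
                    # drop requests whose responses already yielded items
                    if request_fingerprint(r) in self.db:
                        continue
                else:
                    # an item was scraped: remember the request it came from
                    self.db[request_fingerprint(response.request)] = b'1'
                yield r

If that understanding is right, the already-scraped item pages should 
be skipped on my second run.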


I ran my spider the first time, and a .spider folder was generated. 
Then I ran the spider a second time, but it crawled the 
already-visited URLs again.
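
For reference, I start the crawl the usual way (the spider name is 
illustrative):

    scrapy crawl my_spider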

I know this because I write a JSON file for each item, and each time I 
run my spider, new JSON files are generated for the already-scraped 
items.
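
My item pipeline writes one file per item, roughly like this (the 
output path and the id field are illustrative):

    import json
    import os

    class JsonWriterPipeline(object):
        def process_item(self, item, spider):
            # one file per scraped item, named after an id field
            os.makedirs('output', exist_ok=True)
            path = os.path.join('output', '%s.json' % item['id'])
            with open(path, 'w') as f:
                json.dump(dict(item), f)
            return item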

What am I doing wrong, please?
