Hi all,

I have a project in which I have to crawl a large number of different 
sites. All of them can be crawled with the same spider, as I don't need 
to extract items from their pages. The approach I have in mind is to 
parametrize the domain in the spider and call the scrapy crawl command 
passing the domain and start URLs as arguments, so I can avoid writing a 
separate spider for every site (the list of sites will grow over time); 
minimal sketches of the spider and of how I would schedule it follow 
below the questions. The idea is to deploy it to a server running 
scrapyd, so several questions come to me:

- Is this the best approach I can take?
- If so, are there any concurrency problems if I schedule the same 
spider several times with different arguments?
- If this is not the best approach, and it is better to create a single 
spider per site... I will have to update the project frequently. Does a 
project update affect running spiders?
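
For reference, here is a minimal sketch of the parametrized spider I 
have in mind. The spider name "generic", the comma-separated start_urls 
convention, and the link-following parse method are just placeholders I 
made up to illustrate the idea:

# generic_spider.py
import scrapy

class GenericSpider(scrapy.Spider):
    name = "generic"

    def __init__(self, domain=None, start_urls=None, *args, **kwargs):
        super(GenericSpider, self).__init__(*args, **kwargs)
        # Restrict the crawl to the domain passed as a spider argument.
        self.allowed_domains = [domain] if domain else []
        # Accept one or more start URLs, comma-separated.
        self.start_urls = start_urls.split(",") if start_urls else []

    def parse(self, response):
        # No items to extract; just follow every link and let the
        # offsite middleware filter out URLs outside allowed_domains.
        for href in response.css("a::attr(href)").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)

I would then run it locally like this:

scrapy crawl generic -a domain=example.com -a start_urls=http://example.com/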
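
And on the scrapyd side, I would schedule one job per site through 
schedule.json, passing the same spider arguments as POST parameters 
("myproject" here is a placeholder for the deployed project name):

curl http://localhost:6800/schedule.json \
    -d project=myproject -d spider=generic \
    -d domain=example.com -d start_urls=http://example.com/

The concurrency question above is about exactly this: each of these 
calls would be the same spider running at the same time with different 
arguments.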


Thanks to all for this great community.
Bernardo
