I'm using Scrapy to crawl some websites. In my project, every spider has the 
same code except for its start_urls, allowed domain, and name. (In other 
words, my spider is really one general spider that I use to crawl any website.)
My aims:
1. Use just one spider (since every spider has the same code), and set its 
start_urls, domain, and name dynamically (maybe I can fetch this info from 
a database); I sketch this below.
2. Run the spider so that it crawls several websites at the same time.
3. Record a log for every website; for example, the website 'www.hhhh.com' 
should have a log file named 'hhhh_log'.
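
Here is a rough sketch of what I have in mind, assuming CrawlerProcess from 
a recent Scrapy; SITES, GeneralSpider, and the URLs are placeholders standing 
in for the database query, and I'm not sure attaching a plain 
logging.FileHandler per spider is the right way to get the per-site logs:

import logging

import scrapy
from scrapy.crawler import CrawlerProcess

# Stand-in for rows fetched from a database: (name, domain, start URLs).
# These values are made up for illustration.
SITES = [
    ("hhhh", "www.hhhh.com", ["http://www.hhhh.com/"]),
    ("example", "www.example.com", ["http://www.example.com/"]),
]

class GeneralSpider(scrapy.Spider):
    """One spider class shared by every site; only the arguments differ."""

    def __init__(self, name=None, start_urls=None, allowed_domains=None,
                 **kwargs):
        super().__init__(name, **kwargs)
        self.start_urls = start_urls or []
        self.allowed_domains = allowed_domains or []
        # Per-site log file, e.g. 'hhhh_log'. The spider's own messages go
        # to a logger named after the spider, so a handler attached there
        # only receives this site's messages (Scrapy's core/engine messages
        # still go to the main log).
        handler = logging.FileHandler("%s_log" % self.name)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s [%(name)s] %(levelname)s: %(message)s"))
        logging.getLogger(self.name).addHandler(handler)

    def parse(self, response):
        self.logger.info("Crawled %s", response.url)
        # ... the shared extraction code would go here ...

if __name__ == "__main__":
    process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
    for name, domain, urls in SITES:
        # Scheduling every crawl before start() makes them run
        # concurrently in one reactor.
        process.crawl(GeneralSpider, name=name,
                      start_urls=urls, allowed_domains=[domain])
    process.start()  # blocks until every crawl is finished

With something like this, one process would crawl every site at the same time 
and each site's own messages would end up in '<name>_log'.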

Can anyone give me some ideas?
