1. https://gist.github.com/nramirezuy/e05171a559d1b99caf66 - note that the 
gist's spider will hang due to this issue: https://github.com/scrapy/scrapy/pull/708
2. Just add more URLs to start_urls.
3. If you launch several spiders, you can save a separate log for each 
one; I also think you can use Twisted logging and write to different 
files based on a parameter. A rough sketch of all three points follows.

On Monday, August 18, 2014 05:56:03 UTC-3, bin xiong wrote:
>
> I'm using Scrapy to crawl some websites. In my project, every spider has 
> the same code except for start_urls, domain, and name. (That is, my spider 
> is just a generic spider that I use to crawl every website.) 
> My aims:
> 1. Use just one spider (since every spider has the same code), and set 
> start_urls, domain, and name dynamically (maybe I can get this info from 
> a database)
> 2. Run the spider and make it crawl several websites at the same time
> 3. Record a log for every website; for example, the website 'www.hhhh.com' 
> should have a log file named 'hhhh_log'
>
> Can anyone give me some ideas?
>
