Hi,

I'm researching scrapy-redis, but I don't know how it works. I need a 
step-by-step example.
Can you help me?

Thank you



On Thursday, April 25, 2013 4:12:20 PM UTC+7, Andres Douglas wrote:
>
> Rolando, thanks for sharing, this is really interesting. How stable is it 
> at this point? It seems like it's been a while since you published it, but 
> the code on GitHub still has a warning about not being production-ready. 
>
> On Sunday, August 28, 2011 9:47:14 PM UTC-7, Rolando Espinoza La fuente 
> wrote:
>>
>> I've published a scrapy+redis integration. It allows you to:
>> * run many crawlers for the same spider and share the workload
>> * run many post-processing workers to consume the items
>> * persist the request queue, and therefore pause/resume crawling
>>
>> Certainly this is best suited for CPU-bound scrapers.
>>
>> Requires the latest development version of Scrapy and hasn't been
>> tested in production.
>>
>> Source code: https://github.com/darkrho/scrapy-redis
>>
>> Regards,
>>
>> ~Rolando
>>
>>
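Since you asked for a step-by-step example: the core idea Rolando lists above (many crawlers sharing one request queue, with duplicate filtering and persistence) can be sketched in plain Python. This is only an illustration of the concept, not scrapy-redis's actual code; Redis is replaced here by an in-memory stand-in (a deque plus a set of fingerprints), where the real project keeps both in Redis so separate crawler processes can share them. The class and method names below are hypothetical.

```python
# Sketch of the shared-queue idea behind scrapy-redis (illustrative only).
# In the real project the queue and the seen-fingerprint set live in Redis,
# so any number of crawler processes can push/pop against the same state.
import hashlib
from collections import deque


class SharedScheduler:
    """Hypothetical stand-in for a Redis-backed request queue with dedup."""

    def __init__(self):
        self.queue = deque()   # pending request URLs (Redis list in reality)
        self.seen = set()      # fingerprints of enqueued URLs (Redis set)

    def fingerprint(self, url):
        # scrapy fingerprints whole requests; hashing the URL is enough here
        return hashlib.sha1(url.encode()).hexdigest()

    def enqueue(self, url):
        fp = self.fingerprint(url)
        if fp in self.seen:    # duplicate filter: drop already-seen requests
            return False
        self.seen.add(fp)
        self.queue.append(url)
        return True

    def next_request(self):
        # popping from a shared queue is what lets crawlers split the work;
        # because the queue outlives any one crawler, crawling can be
        # paused and resumed
        return self.queue.popleft() if self.queue else None


# Two "crawlers" share one scheduler and divide the workload between them.
sched = SharedScheduler()
for url in ["http://a", "http://b", "http://a", "http://c"]:
    sched.enqueue(url)         # the second http://a is filtered out

crawled = {"crawler-1": [], "crawler-2": []}
workers = ["crawler-1", "crawler-2"]
i = 0
while (url := sched.next_request()) is not None:
    crawled[workers[i % 2]].append(url)
    i += 1

print(crawled)  # each crawler got a disjoint share of the queue
```

In scrapy-redis itself this wiring is done through Scrapy's settings (pointing the scheduler and duplicate filter at Redis-backed classes), so each crawler process you start simply attaches to the same queue.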

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/d/optout.
