Hi, I don't want exactly a spider, but a system that crawls single pages on demand when users request them through a REST service. I was thinking of a queue (Celery?) from which Scrapy would read URLs and to which it would write the JSON results once parsed. Right now I have a system built on XPath + regular expressions and some OCR, but I'm looking for better scalability. Do you have any experience with this kind of setup?
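To make the idea concrete, here is a minimal stdlib-only sketch of the pattern I mean: the REST endpoint enqueues a URL, a worker drains the queue, parses the page, and produces a JSON result. In a real deployment the `queue.Queue` would be a Celery task queue and the parse step a Scrapy spider; `html.parser` and the names `PageParser`, `parse_page`, and `worker` are just illustrative stand-ins.

```python
import json
import queue
import threading
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Collects the <title> text and all link hrefs from one page
    (stand-in for the XPath/regex extraction a spider would do)."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def parse_page(url, page_html):
    """Worker step: parse one requested page into a JSON result string."""
    parser = PageParser()
    parser.feed(page_html)
    return json.dumps({"url": url, "title": parser.title, "links": parser.links})

def worker(jobs, results):
    """Drain the job queue, writing one JSON result per requested URL."""
    while True:
        try:
            url, page_html = jobs.get_nowait()
        except queue.Empty:
            return
        results.append(parse_page(url, page_html))

# Usage: the REST endpoint would enqueue; a worker thread drains.
jobs = queue.Queue()
jobs.put(("http://example.com", "<html><title>Hi</title><a href='/a'>a</a></html>"))
results = []
t = threading.Thread(target=worker, args=(jobs, results))
t.start()
t.join()
```

With Celery, `parse_page` would become a `@app.task` and the REST handler would call `parse_page.delay(url)`, letting the broker handle queuing and the result backend hold the JSON.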
Best regards,
- luismiguel
