Hi, 

Thanks for the interest. I'm looking specifically for replies from 
project maintainers/contributors to see whether or not this is feasible in the 
first place - I don't want to waste anyone's time, including mine. If it is, 
and I can do it on my own, I'd love to tackle it with some design guidance, 
to make sure I'm working with the framework instead of against it.

On Wednesday, January 28, 2015 at 6:15:16 PM UTC-8, user12345 wrote:
>
> I'm working on a scrapy project where a "rabbit client" and "crawl worker" 
> work together to consume scrape requests from a queue. These requests carry 
> more configuration than a start_url - it could be something like a url and a 
> set of xpaths, or a domain-specific configuration, like a site-specific 
> product ID (from which we programmatically build the url) and optional 
> identifiers like color, style, and size to further specify the item one 
> wants to scrape.
>
> I'm wondering if it would be desirable to have built-in support for more 
> specific "crawl configurations" like this within the framework? If that's 
> the case, I'd be more than happy to have a design discussion and hash out 
> the details.
>
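For concreteness, here is a minimal sketch of how such queue messages might be 
normalized into crawl requests. The payload shapes, field names, and URL 
template below are illustrative assumptions, not an existing Scrapy API:

```python
import json

# Assumed site layout for building URLs from product IDs (illustrative only).
URL_TEMPLATE = "https://example.com/products/{product_id}"


def build_crawl_config(message_body):
    """Turn a raw queue message into a (url, metadata) pair.

    Supports two hypothetical message shapes:
    - {"url": ..., "xpaths": {...}}            # url plus extraction xpaths
    - {"product_id": ..., "color": ..., ...}   # domain-specific configuration
    """
    payload = json.loads(message_body)
    if "url" in payload:
        return payload["url"], {"xpaths": payload.get("xpaths", {})}
    # Domain-specific configuration: build the URL programmatically
    # and keep the optional variant identifiers as request metadata.
    url = URL_TEMPLATE.format(product_id=payload["product_id"])
    variant = {k: payload[k] for k in ("color", "style", "size") if k in payload}
    return url, {"variant": variant}


# Example messages as they might arrive from the rabbit client:
url_msg = json.dumps({"url": "https://example.com/p/1",
                      "xpaths": {"title": "//h1/text()"}})
sku_msg = json.dumps({"product_id": "ABC123", "color": "red", "size": "M"})

print(build_crawl_config(url_msg))
print(build_crawl_config(sku_msg))
```

In a spider, the resulting metadata could travel in `Request.meta` so the 
parse callback knows which xpaths or variant identifiers to apply.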
