I'm working on a scrapy project where a "rabbit client" and a "crawl worker" 
work together to consume scrape requests from a queue. These requests carry 
more configuration than just a start_url: a request might be a URL plus a 
set of XPaths, or a domain-specific configuration such as a site-specific 
product ID (from which we programmatically build the URL) plus optional 
identifiers like color, style, and size to further narrow down the item one 
wants to scrape. A rough sketch of what I mean is below.
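
Here is a minimal sketch of what such a queue message and its translation 
into a Request could look like in my setup. The message schema, the field 
names, the URL template, and the "crawl_config" meta key are all hypothetical 
illustrations of the idea, not an existing format:

    import json

    import scrapy

    # Hypothetical per-site URL template used to build a product URL from an ID.
    PRODUCT_URL_TEMPLATE = "https://www.example.com/products/{product_id}"


    def request_from_message(raw_body: bytes) -> scrapy.Request:
        """Turn one queue message (JSON) into a scrapy Request.

        Two example message shapes:
          {"url": "...", "xpaths": {"title": "//h1/text()"}}
          {"product_id": "123", "color": "red", "size": "M"}
        """
        config = json.loads(raw_body)

        if "url" in config:
            url = config["url"]
        else:
            # Domain-specific configuration: build the URL programmatically.
            url = PRODUCT_URL_TEMPLATE.format(product_id=config["product_id"])

        # Carry the full crawl configuration along with the request so the
        # spider callback can use the XPaths / identifiers when parsing.
        return scrapy.Request(url, meta={"crawl_config": config}, dont_filter=True)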

Would it be desirable to have built-in support for more specific "crawl 
configurations" like this within the framework? If so, I'd be more than 
happy to have a design discussion and hash out the details.
