I've been using scrapy professionally for some time now, so you can imagine
my surprise when I stumbled across a sister project called Crawl Frontier
(docs: http://crawl-frontier.readthedocs.org/en/v0.2.0/topics/frontier-at-a-glance.html).

From the docs:

A crawl frontier is the part of a crawling system that decides the logic
and policies to follow when a crawler is visiting websites (what pages
should be crawled next, priorities and ordering, how often pages are
revisited, etc).

Now, my question is this: why would I use this project with scrapy? It
sounds like this is a component that has been broken out of scrapy so that
people could use the "crawl logic" part of scrapy with other crawling
systems (e.g. headless JS browsers, requests, etc.).
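
To make my mental model concrete, here's a rough sketch in plain Python
(driving the requests library) of the role I *think* the frontier plays:
it owns scheduling, priorities, and de-duplication, while the fetcher just
downloads pages. All of the names below are my own invention for
illustration, not the Crawl Frontier API:

import heapq
import re
from urllib.parse import urljoin

import requests


class SimpleFrontier:
    """Owns the crawl policy: what to fetch next, in what order, skipping repeats."""

    def __init__(self, seeds):
        self._heap = []     # (priority, url) min-heap -> ordering/priority policy
        self._seen = set()  # already-scheduled URLs -> de-duplication policy
        for url in seeds:
            self.schedule(url, priority=0)

    def schedule(self, url, priority):
        # Only schedule URLs we have not seen before.
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (priority, url))

    def next_requests(self, n=1):
        # Hand the fetcher the n highest-priority URLs.
        batch = []
        while self._heap and len(batch) < n:
            _, url = heapq.heappop(self._heap)
            batch.append(url)
        return batch

    def page_crawled(self, url, extracted_links):
        # Policy choice for this sketch: newly discovered links get lower priority.
        for link in extracted_links:
            self.schedule(link, priority=1)


def extract_links(base_url, body):
    # Crude href extractor, just enough for the sketch.
    return [urljoin(base_url, href) for href in re.findall(r'href="([^"]+)"', body)]


frontier = SimpleFrontier(seeds=["http://example.com/"])
for _ in range(10):                          # arbitrary crawl budget
    for url in frontier.next_requests(n=2):  # the frontier decides what is next
        response = requests.get(url)         # the fetcher is interchangeable
        frontier.page_crawled(url, extract_links(url, response.text))

Is that roughly the intended division of labour, and if so, what does
plugging Crawl Frontier into scrapy buy me over scrapy's built-in scheduler?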

I've searched Google but haven't found many tutorials or introductory articles.

*Does anyone have more information or some links in addition to the
official docs?*

Thanks,
Travis
