Hey Jim,

Scrapy is great at two things:
1. downloading web pages, and
2. extracting structured data from them.

In your case, you should already have access to the raw files (via 
FTP, etc.), as well as to the data in a structured format. It would be 
possible to do what you're aiming for with Scrapy, but it doesn't seem to 
be the most elegant solution. Is there anything stopping you from setting 
up an rsync cron job or similar to keep the failover in sync?
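
For example (host, paths, and schedule here are just placeholders for 
your setup), a crontab entry along these lines would mirror the primary 
every 15 minutes:

    # m   h  dom mon dow  command
    */15  *  *   *   *    rsync -az --delete user@primary:/srv/data/ /srv/data/

The --delete flag also propagates removals, so only keep it if the 
failover really should be an exact mirror of the primary.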


Cheers,
-Jakob
