Hello,

During my crawl, some pages fail because of an unexpected redirection, and no response is returned. How can I catch this kind of error and re-schedule a request with the original URL rather than the redirected one?
After a lot of searching on Google, it looks like there are two ways to address this issue: one is to catch the exception in a downloader middleware, the other is to handle the download exception in the errback of the spider's request. If so, here are my questions: for method 1, I don't know how to get at the original URL inside the process_exception function; for method 2, I don't know how to pass an external parameter to the errback function in the spider. Any suggestion on this recrawl issue is highly appreciated.

Regards,
Bing
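A minimal sketch of what both methods could look like, under some assumptions: the `Request` and `Failure` classes below are simplified stand-ins so the snippet runs without Scrapy installed (in real code they would be `scrapy.Request` and the Twisted `Failure` passed to an errback, which exposes the failed request as `failure.request`). The names `RetryOriginalUrlMiddleware`, `handle_error`, and the `MAX_RETRIES` limit are made up for illustration. The underlying ideas are real Scrapy idioms, though: stash data in `request.meta` (method 1), and bind extra errback arguments with `functools.partial` (method 2).

```python
from functools import partial

# --- Simplified stand-ins so this sketch runs without Scrapy installed. ---
class Request:
    def __init__(self, url, errback=None, meta=None, dont_filter=False):
        self.url = url
        self.errback = errback
        self.meta = meta if meta is not None else {}
        self.dont_filter = dont_filter

    def replace(self, **kwargs):
        # Mimics scrapy.Request.replace: a copy with some attributes changed.
        params = dict(url=self.url, errback=self.errback,
                      meta=dict(self.meta), dont_filter=self.dont_filter)
        params.update(kwargs)
        return Request(**params)


class Failure:
    """Stand-in for the Twisted Failure an errback receives."""
    def __init__(self, request):
        self.request = request


# Method 1: downloader middleware. Record the original URL in request.meta
# in process_request, then read it back in process_exception and return a
# new Request to re-schedule the crawl of the original URL.
class RetryOriginalUrlMiddleware:
    MAX_RETRIES = 2  # assumed limit, not from the original post

    def process_request(self, request, spider=None):
        request.meta.setdefault('original_url', request.url)
        return None  # continue normal processing

    def process_exception(self, request, exception, spider=None):
        retries = request.meta.get('retry_times', 0)
        if retries < self.MAX_RETRIES:
            original = request.meta.get('original_url', request.url)
            retry = request.replace(url=original, dont_filter=True)
            retry.meta['retry_times'] = retries + 1
            return retry  # returning a Request re-schedules it
        return None  # give up after MAX_RETRIES


# Method 2: errback on the spider's request. Bind the extra parameter with
# functools.partial (alternatively, carry it in request.meta and read it
# from failure.request.meta inside the errback).
def handle_error(failure, original_url):
    # failure.request is the request that failed; re-issue the original URL.
    return Request(original_url, dont_filter=True)


url = 'http://example.com/page'
req = Request(url, errback=partial(handle_error, original_url=url))
```

With real Scrapy, the middleware would be enabled via the `DOWNLOADER_MIDDLEWARES` setting, and `process_exception` returning a `Request` object causes the engine to schedule it. Note also that Scrapy's built-in `RedirectMiddleware` records the pre-redirect chain in `request.meta['redirect_urls']`, whose first element is the original URL.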
