Hi

During my crawl, some pages fail because of unexpected redirections and 
no response is returned. How can I catch this type of download error and 
re-schedule the request?

Before posting, I did a lot of Google research and found two methods for 
this issue. One is to catch the error in a downloader middleware's 
process_exception function, but with this method I don't know how to get 
the original URL inside that function so I can return a new request. The 
other is to catch the error in the errback function of the spider's 
request, but with this method I don't know how to pass extra parameters 
to the errback function.

Any suggestion on this issue is highly appreciated.

Regards,
Bing

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.
