Yes, the Scrapy spider prints the following in its log (the requests come back flagged ['partial']):

2015-10-30 17:41:35+0530 [myspider] DEBUG: Crawled (200) <GET https://example.com/searchid?ss=1&id=1000> (referer: https://example.com/search?ss=1) ['partial']
2015-10-30 17:41:35+0530 [myspider] DEBUG: Crawled (200) <GET https://example.com/searchid?ss=2&id=2000> (referer: https://example.com/search?ss=2) ['partial']
2015-10-30 17:41:35+0530 [myspider] DEBUG: Crawled (200) <GET https://example.com/searchid?ss=3&id=3000> (referer: https://example.com/search?ss=3) ['partial']


How can I fix this?
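For what it's worth, Scrapy attaches the ['partial'] flag to a response when the connection is closed before the full body has been received (Twisted reports potential data loss), which fits the proxy data-loss suggestion quoted below. Here is a minimal sketch of one possible workaround: re-queuing responses that carry the flag from the spider callback. The spider name, URL and retry limit are placeholders taken from the log above, not your actual project.

import scrapy


class MySpider(scrapy.Spider):
    # Placeholder spider; the real spider, proxy middleware and start URLs
    # are assumed to be configured elsewhere.
    name = "myspider"
    start_urls = ["https://example.com/search?ss=1"]

    max_partial_retries = 3  # arbitrary limit for this sketch

    def parse(self, response):
        # Scrapy marks a response 'partial' when the connection was closed
        # before the full body arrived (e.g. a flaky proxy dropped it).
        if "partial" in response.flags:
            retries = response.meta.get("partial_retries", 0)
            if retries < self.max_partial_retries:
                # Re-issue the same request; dont_filter bypasses the
                # duplicate filter so the retry is not silently dropped.
                retry = response.request.replace(dont_filter=True)
                retry.meta["partial_retries"] = retries + 1
                yield retry
            # After max_partial_retries the truncated response is dropped.
            return
        # ... parse the complete response as usual here ...

If the proxies themselves keep dropping connections, retrying like this only papers over the problem; rotating out the proxy that produced the truncated response is another option.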


On Friday, October 30, 2015 at 2:15:02 PM UTC+5:30, Nikolaos-Digenis 
Karagiannis wrote:
>
> You may be experiencing data loss in your proxy.
> How do you know that your crawl fails? Does it print a traceback?
>
>
> On Saturday, 4 January 2014 15:16:45 UTC+2, Shivkumar Agrawal wrote:
>>
>> Hi 
>>
>> I have created a crawler in Scrapy. It uses a list of proxies to crawl. I 
>> have 3-4 sites to crawl daily. During crawling, the Scrapy logs show a 
>> ['partial'] message and my crawl fails on those requests. I have spent a 
>> lot of time googling with no luck. 
>> Can anybody help with this matter?
