After poking around the Scrapy code, it seems this behaviour comes down to 
"needs_backout" and "max_active_size". In a nutshell, I think active_size 
tracks the size in bytes of the response.body of each response currently 
being scraped, so it equals the number of bytes of responses held in memory 
at any one time. max_active_size is hardcoded as max_active_size=5000000 in 
scrapy/core/scraper.py, and if active_size exceeds it the needs_backout 
function returns True.
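Here's a rough sketch of that bookkeeping, paraphrased from 
scrapy/core/scraper.py as of Scrapy 1.x (simplified, not verbatim, and the 
exact code varies by version):

    # Paraphrased/simplified from scrapy/core/scraper.py; not verbatim.
    class Slot:
        MIN_RESPONSE_SIZE = 1024

        def __init__(self, max_active_size=5000000):
            self.max_active_size = max_active_size
            self.active_size = 0  # bytes of response bodies in the scraper

        def add_response_request(self, response, request):
            # Each response entering the scraper adds its body size
            # (small bodies count as at least MIN_RESPONSE_SIZE).
            self.active_size += max(len(response.body), self.MIN_RESPONSE_SIZE)

        def finish_response(self, response, request):
            # The size is released once the spider/pipelines are done with it.
            self.active_size -= max(len(response.body), self.MIN_RESPONSE_SIZE)

        def needs_backout(self):
            # Once ~5 MB of response bodies are in flight, ask the
            # engine to stop feeding it new requests.
            return self.active_size > self.max_active_size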
Next, looking at scrapy/core/engine.py shows that the engine only takes 
another request from the scheduler if needs_backout is not True.
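The engine-side check looks roughly like this (again paraphrased from 
Scrapy 1.x, where it is evaluated before each pull from the scheduler):

    # Paraphrased/simplified from scrapy/core/engine.py; not verbatim.
    def _needs_backout(self, spider):
        slot = self.slot
        return (
            not self.running
            or slot.closing
            or self.downloader.needs_backout()
            or self.scraper.slot.needs_backout()  # the ~5 MB check above
        )

    def _next_request(self, spider):
        # New requests are only pulled from the scheduler while no
        # backout condition holds.
        while not self._needs_backout(spider):
            if not self._next_request_from_scheduler(spider):
                break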
You can tweak these numbers by hand and then examine the number of live 
items/responses in the live-references table (telnet prefs()) at the point 
where Scrapy stops processing more requests and all focus goes on clearing 
the pipeline to free up items/responses.
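For example, connecting to the telnet console of a running crawl (port 6023 
by default) and calling prefs() prints a table like the one below. The counts 
here are made up, just to show the shape of the output, and the class names 
(MySpider, MyItem) are placeholders for your own:

    $ telnet localhost 6023
    >>> prefs()
    Live References
    HtmlResponse                       11   oldest: 2s ago
    MySpider                            1   oldest: 35s ago
    Request                          1500   oldest: 30s ago
    MyItem                             20   oldest: 1s ago

Once active_size crosses max_active_size you should see the HtmlResponse 
count plateau while Requests keep piling up in the scheduler.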
