Hi,
First of all, thanks for such a wonderful piece of software :)
I am using Scrapy with HTTPCACHE_ENABLED set to True and the cache policy
set to DummyPolicy. While crawling, I sometimes get empty or very small
responses that I do not want to cache; however, they were cached on the
initial crawl.
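For reference, the relevant part of settings.py looks roughly like this (the
DummyPolicy module path is the one from recent Scrapy releases and may differ
in older versions):

    HTTPCACHE_ENABLED = True
    HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'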
To invalidate the cache entries that hold these small responses, I created a
new cache policy, SimplePolicy, which overrides is_cached_response_fresh,
like so:
    from scrapy.extensions.httpcache import DummyPolicy  # scrapy.contrib.httpcache in older versions

    class SimplePolicy(DummyPolicy):
        def is_cached_response_fresh(self, response, request):
            # Don't consider small responses as fresh
            return len(response.body) > 100
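For the new policy to actually be used, HTTPCACHE_POLICY has to point at it
in settings.py, e.g. (the module path below is only illustrative):

    HTTPCACHE_POLICY = 'myproject.policies.SimplePolicy'  # illustrative path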
However, I am still not getting fresh data; the spider keeps using the
cached responses.
Any idea what's wrong?
Thanks,
Ram