You can purge the entire contents of the cache; you just have to clear
the swap.state file and restart Squid. Make sure Squid is stopped first,
so it doesn't rewrite the file while you're clearing it:

echo "" > /var/cache/squid/swap.state
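Spelled out as a full sequence, that might look like the following (a
sketch assuming the squid binary is on the PATH and the default cache
directory; adjust paths to your installation):

```shell
squid -k shutdown                        # stop Squid cleanly first
echo "" > /var/cache/squid/swap.state    # empty the cache index, as above
squid                                    # start Squid again
```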

Hope this helps.

Regards, Pablo

On 7/12/07, martin sarsale <[EMAIL PROTECTED]> wrote:
Kinkie wrote:
> On 7/2/07, martin sarsale <[EMAIL PROTECTED]> wrote:
>> Dear all:
>> We're developing the new version of our CMS and we would like to use
>> squid in accelerator mode to speed up our service.
>>
>>  From the application side, we know exactly when the data changed and we
>> would like to invalidate all cached data for that site. Is this
>> possible? maybe using squidclient or something.
>>
>> We can't do this by purging URL by URL, since that doesn't make much
>> sense (and we don't have the URL list!). We want to wipe out every
>> cached object for mysite.com.
>
> You can't do that on the squid side either, since squid doesn't index
> objects by URL but by hash. The only way is to PURGE the relevant
> object.
>
> You can reduce the window of staleness quite a lot by specifying the
> following HTTP header in every response:
>
> Cache-Control: s-maxage=XXX, public, proxy-revalidate
>
> (reference taken from: http://www.mnot.net/cache_docs/)
> By choosing the right XXX value (the time in seconds before the object
> expires), you can strike the right balance between higher load on the
> backend (smaller values of XXX) and a higher chance of serving stale
> content (larger values of XXX).
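For purging individual objects as suggested above, the PURGE method has
to be enabled in squid.conf first. A minimal sketch, assuming the
conventional `purge` ACL name from the Squid documentation and a
placeholder URL:

```
# squid.conf: allow PURGE requests, but only from the proxy host itself
acl purge method PURGE
http_access allow purge localhost
http_access deny purge

# then, after `squid -k reconfigure`, evict a single object with:
#   squidclient -m PURGE http://mysite.com/some/page.html
```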

(sorry for the delay)
I understand what you are proposing with that header, but IMHO that's
only valid for a 'dumb' system that cannot determine when its content
was modified. Since my system has this feature (I know the exact date
the content was altered), I would like to let Squid handle ALL the work
except when intervention is really needed.

I understand about object hashes... but does it hash the full URL (i.e.,
including the domain)? Because if the domain were hashed separately, I
could purge everything under that domain's hash.
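For what it's worth, Squid's store key is (roughly) a single digest over
the request method plus the full URL, so the domain is not a separable
part of the key. A quick illustration of the idea, using md5sum
(Squid's exact key derivation differs; this is only a sketch):

```shell
# One digest over method + full URL: the domain dissolves into the hash,
# so two pages on the same domain get completely unrelated keys, and
# there is no per-domain bucket that could be purged in one go.
printf 'GET%s' 'http://mysite.com/a.html' | md5sum
printf 'GET%s' 'http://mysite.com/b.html' | md5sum
```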

Any other hints? Unofficial patches? Alternative products? Squid forks?

thanks
