[resending, I accidentally left off the list addr]

> If you cache very large files, you may need to change
> cache_swap_low 88
> cache_swap_high 89
> to force the cleanup process to be more aggressive with
> removing the oldest cached files
>
> Marcus


I don't see how changing those values (except as possibly a
temporary bandaid) could fix the problem.  To me it looks very clear
that squid's internal accounting of how much space I'm using is
incorrect.  If the internal accounting never hits those thresholds,
the files will never be deleted.  Lowering them to 50% might fix it,
but instead I've just lowered my max cache size, since the accounting
seems to be off by a factor of 2x or so.

btw, I also tried 3.1.8 built --with-large-files (although I'm already
using the 64-bit version).  Same thing.
