On 29/08/2013 10:26 a.m., Nehal J Wani wrote:
> Your reply is definitely helpful for re-configuring the new proxy server,
> but I am more interested in the correct way of migrating/copying the cache.
> Will a simple rsync -vertp --partial --progress /var/squid/cache1/* <dst> do?
> What are the remaining steps? Will there be any permission issues?
> What if those copied files just sit there and are never used?

If you are in doubt, just copy the cache contents (rsync is fine), erase the swap.state journal, and run squid -z. A full rebuild of the index will happen on next startup.
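As a minimal sketch of that sequence (the paths and the "squid" owner are assumptions; adjust them to your cache_dir and to the cache_effective_user your Squid runs as):

    # stop Squid on the new box so nothing touches the cache_dir mid-copy
    squid -k shutdown

    # copy the old cache; -a preserves permissions/ownership/timestamps
    rsync -a --partial --progress /var/squid/cache1/ <dst>:/var/squid/cache1/

    # on the new box: make sure the cache is owned by the Squid user,
    # then drop the index journal so a full re-scan happens at startup
    chown -R squid:squid /var/squid/cache1
    rm -f /var/squid/cache1/swap.state

    # create any missing swap directories, then start Squid
    squid -z
    squid

The chown step answers the permissions question: the copied files must be readable and writable by whatever user Squid drops privileges to, or the rebuild will fail.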

... the rest of this is about trouble you may get into by doing it that way ...

NOTE first that the cache is just a cache. You can rebuild it from nothing just by running any Squid with an empty cache_dir. If you have a very large cache now, this is in fact probably the best way to go. You will have a few days of low HIT rate as it fills up again, but the growth curve back to normal HITs is exponential and client traffic is never bogged down completely.
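As a sketch, "starting empty" is just pointing cache_dir at a fresh directory and initialising it (the storage type, size, and L1/L2 values here are illustrative, not a recommendation):

    # squid.conf on the new box - a single 10 GB aufs cache_dir
    cache_dir aufs /var/squid/cache1 10240 16 256

    # initialise the empty swap directories, then start Squid
    squid -z
    squid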

If you want to go ahead with the upgrade on a huge cache, and/or if it needs to re-scan while in production, be aware that this means some hours or days (depending on size) during which you will have a 0 HIT rate anyway.

You can do the startup and resulting rebuild "offline" in a proxy with a dummy config file pointing at the cache_dir, while the "old" one (or a dummy with no cache) continues to run. This will result in a cache which is somewhat out of date, with a burst of revalidations and removals when swapping the cache into production use. The growth back to normal HIT rate starts further along the growth curve and is exponential, but slightly slower than for an empty cache.
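A minimal sketch of such a dummy config (the port, paths, and log names are assumptions; the point is that it only needs enough to reference the cache_dir, on a port and pid file that do not clash with the live proxy):

    # dummy.conf - just enough to rebuild the copied cache offline
    http_port 3129
    cache_dir aufs /var/squid/cache1 10240 16 256
    pid_filename /var/run/squid-rebuild.pid
    cache_log /var/log/squid/rebuild-cache.log
    access_log none

    # run the rebuild against the dummy config; stop it once it completes
    squid -f /etc/squid/dummy.conf
    squid -f /etc/squid/dummy.conf -k shutdown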

Squid (all versions) will check the swap.state file and load it normally if possible. If there are any problems, Squid will re-scan the entire cache_dir and rebuild the index from the disk contents (see above). Be aware that a swap.state corruption bug was fixed in 3.2, and the journal format was upgraded there, so Squid-3.3 will most likely opt to re-scan regardless of what you do.
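Either way, the rebuild reports its progress in cache.log; a simple way to watch which path Squid took and how far along it is (log path is an assumption):

    # Squid logs periodic "Store rebuilding is N% complete" progress lines
    tail -f /var/log/squid/cache.log | grep -i rebuild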

Amos
