: Another solution could be to use a multi-level CARP config, which incidentally
: scales far better horizontally than ICP/HTCP, as it eliminates the iterative
: "sideways" queries altogether by hashing URLs to parent cache_peers. In this
        ...
: different IP or TCP port that actually does the caching. This solves your
: issue by giving every edge instance the same list of parent cache_peers - it

I briefly considered an idea very similar to this (using a hashing 
feature of the load balancers designed for session affinity in 
place of any peering in Squid) but ruled it out fairly early on because 
any modification to the cluster (adding or removing a machine) would 
immediately drop the cache hit rate of the entire cluster -- the only 
hits would be for URLs whose hashcode mapped to the same machine with N 
servers in the pool as it did with N +/- 1.
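
To make that concern concrete, here's a toy sketch in Python (purely 
illustrative -- plain modulo hashing, not Squid's or the balancer's 
actual hash function) showing that shrinking a pool from N to N-1 
machines remaps roughly (N-1)/N of all URLs, so only about 1/N of the 
existing cache entries would still be hits:

  import hashlib

  def bucket(url, n):
      # stable hash so the url -> machine mapping is reproducible
      h = int(hashlib.md5(url.encode()).hexdigest(), 16)
      return h % n

  urls = ["http://example.com/page%d" % i for i in range(10000)]
  n = 10
  moved = sum(1 for u in urls if bucket(u, n) != bucket(u, n - 1))
  print("remapped: %.1f%%" % (100.0 * moved / len(urls)))  # ~90% for n=10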

Is there a feature of CARP that can be used to mitigate this problem?  

I see that a load-factor can be set per parent, so it seems like it might 
be possible to mitigate the removal of a machine in the short term by 
doubling the load factor on another peer (the one before it in the config, 
I would assume?) so that the hash function still picked the same machines 
for all existing cached URLs -- but that seems like it could only 
be used as a short-term workaround for an urgent outage; I'm not sure how 
you would apply the same idea when adding/removing machines periodically 
based on need.
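
In case it helps the discussion, here's a toy harness for experimenting 
with that load-factor idea. It assumes CARP behaves like a weighted 
"highest score wins" hash over (URL, peer) pairs, with the load factor 
as a simple multiplier -- that's my simplified reading of the CARP 
draft, not Squid's actual implementation (the draft derives its 
multipliers from the load factors in a more involved way), and the 
peer names below are made up:

  import hashlib

  def score(url, peer, load_factor):
      # combined url+peer hash, scaled by that peer's load factor
      h = int(hashlib.md5((url + peer).encode()).hexdigest(), 16)
      return load_factor * h

  def pick(url, peers):
      # peers: {name: load_factor}; the highest-scoring peer wins
      return max(peers, key=lambda p: score(url, p, peers[p]))

  urls = ["http://example.com/page%d" % i for i in range(10000)]
  before = {"parent1": 1.0, "parent2": 1.0, "parent3": 1.0, "parent4": 1.0}
  after = dict(before)
  del after["parent4"]        # take one parent out of the config ...
  after["parent3"] = 2.0      # ... and double another peer's load factor

  moved = sum(1 for u in urls if pick(u, before) != pick(u, after))
  print("remapped: %.1f%%" % (100.0 * moved / len(urls)))

Swapping in different before/after configs makes it easy to see what 
fraction of the URL space actually moves under a given change.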


-Hoss
