Barring the development of an in-code fix, I think your best bet is to take the config generator route you describe, but then run the generator from your squid initscript triggered by "start" or "reload". Beyond pushing out an updated text file to your boxes and building the cache_peer lines from there (excluding one's own IP), some other solutions to get that list could be:

- Get the list of cache_peer IPs from the A records returned for a DNS hostname query
- If using multicast ICP/HTCP, script a query to the configured multicast address and add the IPs of the responding hosts to the cache_peer config
- Put a text file on an internal web server that contains the list of peer IPs

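For instance, combining the initscript idea with the DNS option, the generator could be as simple as the following sketch (the DNS name, ports, and paths are placeholders I made up, not anything squid requires):

  #!/bin/sh
  # Sketch only: resolve the peer list from a DNS name, skip this box's
  # own address, and emit sibling cache_peer lines into a snippet that
  # the shared squid.conf pulls in.
  PEERS_NAME="squid-peers.example.com"
  MY_IP=$(hostname -i | awk '{print $1}')
  OUT=/etc/squid/peers.conf

  : > "$OUT"
  for ip in $(dig +short "$PEERS_NAME" A); do
      [ "$ip" = "$MY_IP" ] && continue
      echo "cache_peer $ip sibling 3128 3130 proxy-only" >> "$OUT"
  done

Run that from "start"/"reload" (followed by a "squid -k reconfigure" if squid is already up) and the peer list tracks DNS without anyone having to log into the boxes.
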
Another solution could be to use a multi-level CARP config, which incidentally scales far better horizontally than ICP/HTCP, as it eliminates the iterative "sideways" queries altogether by hashing URLs to parent cache_peers. In this setup, you'd run two squids on each box: one "edge" squid that answers client queries but does no caching of its own, and one "parent" squid, listening on a different IP or TCP port, that actually does the caching. This solves your issue by giving every edge instance the same list of parent cache_peers - one of those peers just happens to be local, but it is never the same instance. Likewise, all the parent instances can have identical configs.
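
As a rough illustration of what the two configs might look like (the IPs, ports, and cache_dir are made up, and plenty of other required directives are omitted):

  # edge.conf - answers clients, caches nothing, CARP-hashes to parents
  http_port 3128
  cache deny all
  never_direct allow all
  cache_peer 10.0.0.1 parent 4128 0 carp no-query
  cache_peer 10.0.0.2 parent 4128 0 carp no-query
  cache_peer 10.0.0.3 parent 4128 0 carp no-query

  # parent.conf - the instance that actually does the caching
  http_port 4128
  cache_dir ufs /var/spool/squid-parent 10240 16 256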

That said, it is a bit of work to get multiple squids running on a single box - you need your initscripts to call squid with different -f options for the different configs, set up separate log files, etc.
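
In an initscript that ends up looking roughly like this (paths are only illustrative, and each config also needs its own pid_filename, access_log, and cache_log so the two instances don't collide):

  # fragment of the initscript's case statement
  start)
      /usr/sbin/squid -f /etc/squid/parent.conf
      /usr/sbin/squid -f /etc/squid/edge.conf
      ;;
  reload)
      /usr/sbin/squid -f /etc/squid/parent.conf -k reconfigure
      /usr/sbin/squid -f /etc/squid/edge.conf -k reconfigure
      ;;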

-C

On Sep 28, 2009, at 8:24 PM, Chris Hostetter wrote:


: The DNS way would indeed be nice. It's not possible in current Squid
: however, if anyone is able to sponsor some work it might be doable.

If i can demonstrate enough advantages in getting peering to work i might
just be able to convince someone to think about doing that ... but that
also assumes i can get the operations team adamant enough to protest
having a hack where they need to run a "config_generator" script on
every box whenever a cluster changes (because a script like that would be
fairly straightforward to write as a one-off, it's just harder to
implement as a general purpose feature in squid)

: With Squid-2.7 you can use the 'include' directive to split the squid.conf
: apart and contain the unique per-machine parts in a separate file to the
: shared parts.

yeah, i'm already familiar with include, but either way i need a
per-machine snippet to get around the "sibling to self" problem *and* a
way to reconfig when the snippet changes (because of the cluster changing
problem)
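
(for reference, the kind of split i have in mind is roughly this -- the
file names and IPs are just placeholders:

  # squid.conf, identical on every box
  include /etc/squid/local-peers.conf

  # /etc/squid/local-peers.conf, generated per machine, listing every
  # peer except this box
  cache_peer 10.0.0.2 sibling 3128 3130 proxy-only
  cache_peer 10.0.0.3 sibling 3128 3130 proxy-only

...plus a "squid -k reconfigure" whenever the snippet gets regenerated)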

-Hoss
