Hello all,

What is the recommended approach to perform load balancing and high 
availability across N Squid servers? 
I have the following list of requirements to fulfill:

1) Manage N Squid servers that share their cache (as far as I understand, this 
is done using cache_peer). Desirable.
2) Availability: if any of the N servers fails, clients are redirected to the 
remaining N-1. Preferable.
3) Scalability: the load is distributed (round-robin or some other algorithm) 
across the N servers; if a new server is added (N+1), new clients will be able 
to use it, reducing the load on the rest. Preferable.
4) I need to be able to identify client IP addresses on the Squid side and/or 
perform Squid authentication. The client IP and user name are later passed to 
the ICAP server to scan the HTTP(S) request/response contents, using 
icap_send_client_ip and icap_send_client_username. Very important requirement.
5) I need to support both HTTP and HTTPS connections with selective SSL Bump, 
i.e. for some web sites I do not want to look inside SSL, so that the original 
site's certificates are used for encryption. Very important requirement too.
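For reference, here is roughly what I have in mind on each Squid node for points 1, 4 and 5 (a sketch only, assuming Squid 3.5+; the host names, the ICAP URL, the CA certificate path and the ACL site list are all made up):

```
# Point 1: sibling peers share cache via ICP (3130); proxy-only means we
# only fetch hits from siblings, never store their objects locally.
cache_peer proxy2.example.com sibling 3128 3130 proxy-only
cache_peer proxy3.example.com sibling 3128 3130 proxy-only

# Point 4: pass client IP and authenticated user name to the ICAP server.
icap_enable on
icap_service req_service reqmod_precache icap://icap.example.com:1344/reqmod
adaptation_access req_service allow all
icap_send_client_ip on
icap_send_client_username on

# Point 5: selective SSL Bump -- peek at the TLS handshake, splice
# (tunnel untouched, original certificates) the listed sites, bump the rest.
acl no_bump_sites ssl::server_name .examplebank.com .examplegov.org
http_port 3128 ssl-bump cert=/etc/squid/bump_ca.pem
ssl_bump peek all
ssl_bump splice no_bump_sites
ssl_bump bump all
```

This still leaves open how clients are balanced across the N nodes in the first place, which is the actual question.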

I know that strictly for HTTP I could use HAProxy with X-Forwarded-For or 
something similar, but the 5th requirement is very important and I could not 
find a way to handle SSL with HAProxy properly.
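The HTTP-only setup I mean would look roughly like this (a sketch, with made-up addresses; HAProxy inserts X-Forwarded-For so Squid can recover the client IP, but this does not help for CONNECT/HTTPS tunnels, which is exactly the problem):

```
# haproxy.cfg sketch: balance plain-HTTP proxy traffic across two squids.
frontend proxy_in
    bind *:3128
    mode http
    option forwardfor        # add X-Forwarded-For with the client IP
    default_backend squids

backend squids
    mode http
    balance roundrobin
    server squid1 192.0.2.11:3128 check
    server squid2 192.0.2.12:3128 check
```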

The only idea that comes to my mind is to use some form of round-robin load 
balancing at the DNS level, but it has its own drawbacks (I would still need 
to check the availability of my N servers somehow, and it is not real 
balancing, more like distribution).
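To illustrate what I mean by DNS round-robin (a made-up zone fragment): one name resolves to all N proxies and resolvers rotate the record order, but a dead server keeps being handed out until its record is removed, and per-client caching makes the spread uneven:

```
; proxy.example.com resolves to every Squid node; short TTL to
; limit how long a removed/dead server lingers in resolver caches.
proxy.example.com.  60  IN  A  192.0.2.11
proxy.example.com.  60  IN  A  192.0.2.12
proxy.example.com.  60  IN  A  192.0.2.13
```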
Any help/thoughts are appreciated.

Thank you!

Best regards,
Rafael Akchurin
Diladele B.V.

_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
