Hi there,
Personally I would use a combination of keepalived (Linux only), haproxy (most
Unix derivatives) and squid. If needed you can run keepalived, haproxy
and squid on the same machine.
Keepalived would supply the VIP using VRRP, and haproxy would do the load
balancing and cookie detection. Requests without the login cookie would be
sent to one of the squids; otherwise the request would go directly to the app
servers.
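As a rough sketch of what I mean (all IPs, interface names, and the cookie
name "session" are placeholders you would adapt to your environment), the
keepalived side is just a VRRP instance holding the VIP:

# Hypothetical keepalived.conf fragment
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100        # the VIP haproxy binds to
    }
}

and the cookie detection in haproxy could look roughly like this (using the
frontend/backend syntax of newer haproxy releases):

# Hypothetical haproxy.cfg sketch
frontend http-in
    bind 10.0.0.100:80                      # the VIP from keepalived
    acl logged_in hdr_sub(Cookie) session=  # assumed login cookie name
    use_backend appservers if logged_in
    default_backend squids

backend squids
    balance roundrobin
    server squid1 10.0.0.11:3128 check
    server squid2 10.0.0.12:3128 check

backend appservers
    balance roundrobin
    server app1 10.0.0.21:80 check
    server app2 10.0.0.22:80 check

With that in place squid only ever sees anonymous traffic, so you never have
to worry about it caching a logged-in user's pages.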
There are other ways to do this, but this is currently my favorite and I have
tried it out more than once.
Regards,
Axel
On 20.04.2007 at 2:44, "Cloude Porteus"
<[EMAIL PROTECTED]> wrote:
> Squid developers,
> We would like to start using Squid-caches to take the load off of our
> application and http servers. We need someone who has hands-on
> experience setting up squid-cache clusters in a load balancing
> environment. We also need a custom cache filter written to check for
> the existence of a logged-in cookie, so that we only cache and serve
> pages for non-logged in users. Our proposed setup looks like this:
>
> internet
> |
> |
> firewall & load balancer
> |
> |
> +--Squid-cache 1
> +--Squid-cache 2
> |
> |
> +--App server 1
> +--App server 2
> |
> |
> +--DB server, etc.
>
> I would like the Squid cache to also handle load balancing its
> requests to the App servers, but the only documentation I can find is
> the setup Wikipedia had/has for its servers. If this is not possible,
> we could put another load balancer between the Squid & App servers,
> although I'd rather not have the extra server.
>
> "Squid cache servers used response time measurements to distribute
> page requests between seven web servers."
>
> We have not ordered our servers yet, so I'm also looking for
> recommended server configurations for our two squid servers, ram, disk
> space, etc. We will be ordering servers as soon as we have this
> component of our solution worked out.
>
> Please feel free to ask for more details or make suggestions based on
> this information.
>
> Thanks,
> Cloude