> Squid also takes away the work of doing SSL (presuming you're running it
> on a different machine). Unfortunately it doesn't support HTTP/1.1, which
> means that most generated pages (those that don't set Content-Length) end
> up forcing Squid to close and then reopen the connection to the web
> server.

It is true that Squid doesn't support HTTP/1.1, but 'most generated pages'?
Unless they are actually emitted progressively, they should have a
perfectly good Content-Length header.
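
To illustrate the point, a minimal sketch (Python stdlib; the handler
name, port and page body are invented for illustration): buffer the whole
generated body first, then emit an explicit Content-Length, and nothing
forces the proxy to close the connection to find the end of the response.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GeneratedPage(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # allow persistent connections

        def do_GET(self):
            # Build the page in memory rather than streaming it out.
            body = b"<html><body>generated at request time</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Without this header the server would have to signal
            # end-of-body by closing the connection.
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), GeneratedPage).serve_forever()

Only a page that is genuinely streamed out as it is computed has an
excuse for omitting the header.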

> I've also had some problems when Squid had a large number of connections
> open (several thousand); though that may have been because of my
> half_closed_clients setting. Squid 3 coped a lot better when I tried it
> (quite a few months ago now - and using FreeBSD and the special kqueue
> system call) but crashed under some (admittedly synthetic) conditions.

It runs out of the box with a very conservative limit on max open file
descriptors - this may or may not be the cause of the problems you have
seen. Certainly I ran Squid with >16,000 connections back in 1999...
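
For what it's worth, here's a hypothetical sketch (Python, stdlib
resource module) of checking the descriptor limit a process actually
inherits; this only shows the OS side, since Squid also has its own
compile/config-time maximum.

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("fd limit: soft=%d hard=%d" % (soft, hard))
    if soft < hard:
        # Raise the soft limit to the hard limit; raising the hard
        # limit itself needs root (or a login.conf change on FreeBSD).
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))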

> You still have periods of time when the web servers are busy using their
> CPUs to generate HTML rather than waiting for database queries. This is
> especially true if you cache a lot of data somewhere on the web servers
> themselves (which, in my experience, reduces the database load a great
> deal). If you REALLY need to reduce the number of connections (because you
> have a large number of web servers doing a lot of computation, say) then
> it might still be useful.

Aha, a Postgres-related topic in this thread!  What you say is very true,
but given that the connection overhead is so vanishingly small, why not
simply run without a persistent DB connection in this case?  I would
maintain that if your web servers are holding idle DB connections open
for so long that it's a problem, then simply close the connections!
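
To make that concrete, a sketch of the connect-per-request pattern
(Python with psycopg2; the DSN, table and helper name are invented for
illustration):

    import psycopg2

    def handle_request():
        # Open a connection only for the duration of the request,
        # instead of parking an idle persistent connection on the
        # database server.
        conn = psycopg2.connect("dbname=app user=web")
        try:
            cur = conn.cursor()
            cur.execute("SELECT count(*) FROM items")
            return cur.fetchone()[0]
        finally:
            conn.close()  # frees the backend slot immediately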

M
