Re: [squid-users] About bottlenecks (Max number of connections, etc.)

2013-02-25 Thread Alex Rousskov
On 02/23/2013 09:10 PM, Amos Jeffries wrote:
> But once you are reaching the above types of limit there is no magic
> setting that can gain big %-points more speed.

Actually, there is: "workers N" :-). I have seen SMP Squid handle about
a Gbit of traffic while doing authentication, URL rewriting, and ICAP.
It just takes a few workers and a few more CPU cores... While SMP is not
a solution to every problem and does have serious limitations in some
environments, it is a very good performance booster in many situations.
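
For anyone wanting to try it, a minimal squid.conf sketch (the worker
count and core numbers below are illustrative, not a recommendation;
both directives exist in Squid 3.2 and later):

  # run four worker processes instead of one
  workers 4

  # optional: pin each worker to its own CPU core to reduce
  # scheduler churn (worker process numbers start at 1)
  cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4

Note that caching under SMP has its own constraints (a shared memory
cache, and the rock store for shared disk caching), which is part of
the "serious limitations" caveat above.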


HTH,

Alex.



Re: [squid-users] About bottlenecks (Max number of connections, etc.)

2013-02-23 Thread Amos Jeffries

On 23/02/2013 3:59 p.m., Manuel wrote:

Hi,

We are having problems with our Squid servers during traffic peaks. In
the past we got various errors such as "Your cache is running out of
filedescriptors", syncookies errors, etc., but we have since tuned those
issues away and no longer see them. The problem now is that during big
traffic peaks all of the servers, which differ in resources and are
spread across two datacenters (all running Squid as a reverse proxy
caching content from several webservers in other datacenters), fail to
deliver content (HTML pages, JS and CSS files, both gzipped and
non-gzipped, as well as images), and we see no errors at all. The more
connections/requests, the higher the percentage of clients that fail to
get the content. So we are trying to find out where the bottleneck is.
Is Squid unable to deal with more than X connections per second, or is
there some other bottleneck? It seems to start failing at around 20,000
connections to each server.


What does squid -v say for you?



Squid does have a practical total service capacity, somewhere around
the 20K concurrent connections you mention, as the result of a large
number of cumulative little details:


* The number of connections any Squid can have attached is limited only
by your configured FD limits and available server RAM. Squid uses
~64 KB per network socket for traffic state, which works out to roughly
1.25 GB of RAM just for I/O buffers at 20,000 concurrent client
connections, and about twice that once the matching server-side sockets
are counted (see the sketch after these points).


* How fast each client can be served data is bottlenecked by both the
TCP network stack and CPU speed. Look up "Buffer Bloat" for the kinds
of issues that affect Squid operation from the TCP direction.


* Squid's parser is not very efficient yet, which can bottleneck
request handling speed. On the developer test machine (a slow
single-threaded server with a ~1.2 GHz processor) Squid-3.2 achieves
around 950 req/sec. On common ISP hardware, which has much faster CPU
capacity, Squid is known to reach over twice that (~2.5K req/sec).
 - At 20K concurrent connections this means each connection can send
only one request every 4-8 seconds (for example, 20,000 connections /
2,500 req/sec = 8 seconds between requests on each connection).
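
To put some numbers on the first point, a hedged squid.conf sketch (the
values are illustrative; max_filedescriptors is a real directive, but
it cannot raise the limit above what the OS grants the process):

  # allow up to 64K file descriptors; the OS limit for the squid
  # process (ulimit -n / setrlimit) must be at least this high
  max_filedescriptors 65536

  # back-of-envelope RAM for I/O buffers at 20,000 client connections:
  #   20,000 sockets x ~64 KB        = ~1.25 GB
  #   x2 for matching server sockets = ~2.5 GB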



Of course the Squid developers would all like Squid to be as fast as
possible (my personal aim is to break the 1K req/sec barrier on that
1.2 GHz machine) and there is a lot of ongoing effort to improve speed
all over Squid. But once you are reaching the types of limit above,
there is no magic setting that can gain big percentage points more
speed. If you want to participate in the improvements, please upgrade
to the latest Squid available (3.3 daily snapshot [stable] or 3.HEAD
packages [experimental]) and profile anything and everything you
suspect might improve performance. Patch contributions to squid-dev are
very welcome, and discussions highlighting what needs updating almost
equally so.
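
As a starting point for finding where your servers actually stand, the
cache manager reports are the easiest tool. A sketch using squidclient
(which ships with Squid; the grep patterns are just one way to trim the
output):

  # current vs. maximum file descriptor usage
  squidclient mgr:info | grep -i 'file desc'

  # request rate and service times averaged over the last 5 minutes
  squidclient mgr:5min | grep client_http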


Amos



[squid-users] About bottlenecks (Max number of connections, etc.)

2013-02-22 Thread Manuel
Hi,

We are having problems with our Squid servers during traffic peaks. In
the past we got various errors such as "Your cache is running out of
filedescriptors", syncookies errors, etc., but we have since tuned those
issues away and no longer see them. The problem now is that during big
traffic peaks all of the servers, which differ in resources and are
spread across two datacenters (all running Squid as a reverse proxy
caching content from several webservers in other datacenters), fail to
deliver content (HTML pages, JS and CSS files, both gzipped and
non-gzipped, as well as images), and we see no errors at all. The more
connections/requests, the higher the percentage of clients that fail to
get the content. So we are trying to find out where the bottleneck is.
Is Squid unable to deal with more than X connections per second, or is
there some other bottleneck? It seems to start failing at around 20,000
connections to each server.

Thank you in advance


