Re: [squid-users] Squid performance recommendation

2022-09-24 Thread David Touzeau

Hi

We have some experience with cluster configurations.

https://wiki.articatech.com/en/proxy-service/hacluster

Using Kubernetes for Squid with 40K users is a very "risky adventure".

Squid requires very high disk (I/O) performance, which means both good 
hard disks and a decent controller card.


You will hit a functional limit of Kubernetes, which by design is not 
suited to this type of service.


Of course you can continue down this path.

But from experience we see this a lot:

"To take the load, you're going to install a lot of instances on 
multiple virtualization servers.

Whereas 2 or 3 physical machines could handle it all."


On 20/09/2022 at 21:52, Pintér Szabolcs wrote:


Hi squid community,

I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.


Parameters: I need HA, caching (small objects only, nothing like big 
Windows updates), scaling (this is only secondary), and I want to use and 
modify (in production, during working hours) complex black- and whitelists


I have some idea:

1. A huge kubernetes cluster

pro: Easy to scale, change the config and update.

contra: I'm afraid of the network latency (because of the many extra 
layers, e.g. the VM network stack, the Kubernetes network stack with 
VXLAN, etc.).


2. Simple VMs with HAProxy in TCP mode

pro: less network latency (I think)

contra: more administration time


Does anybody have any experience with Squid on Kubernetes (or similar 
technology) with a large number of users?


Which do you think is the best solution, or do you have another idea 
for the implementation?


Thanks!

Best, Szabolcs

--
*Pintér Szabolcs Péter*
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600
+36 30 471-3827
spin...@npsh.hu


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-21 Thread ngtech1ltd
Hey Szabolcs,
 
Since Amos answered your question regarding a simple VM, I would like to 
address the k8s part.
 
A huge Kubernetes cluster is good for very specific use cases.
It's not "easy" to scale or to change the config and update out of the box;
you will need to work on that, since there aren't any ready-to-use solutions 
for these on k8s.
'kubectl apply -f x.yaml' is not really a good solution for every scaling 
problem.
Also take into account that you will probably have issues with cache HITs 
if the load-distribution
algorithm does not route the same requests to the same proxy.
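For illustration, a minimal HAProxy sketch of such an affinity scheme (the 
backend name and server addresses are assumptions, not from this thread):

# haproxy.cfg fragment: hash on the request URI so repeat requests for
# the same object land on the same Squid, preserving its chance of a HIT
backend squid_pool
    mode http
    balance uri
    hash-type consistent   # limits remapping when a node joins or leaves
    server squid1 10.0.0.11:3128 check
    server squid2 10.0.0.12:3128 check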
 
Since the *big* k8s clusters usually run on bare metal, it's possible to get 
up to 30 percent more
performance than on VMs. Also, the network latency is not so high in a k8s 
cluster for the same reason.
Basically, in most k8s clusters traffic moves almost as if through shared 
memory.
 
It’s possible to define the specs of the project and to asses from there.
HAproxy will be able to handle 40k clients without any issues and to allow full 
HA you might need 2 HAproxy machines.
The real issue with such a setup is how the config is applied.
For example, a big list of black and whitelist domains might be better stored 
outside of squid.
Depends on your requirements you might be able to use either ufdbGuard or 
another solution.
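For reference, the plain-Squid way keeps the lists in external files that 
are read at (re)configure time (a minimal sketch; the file paths are 
assumptions):

# squid.conf fragment: domain lists maintained outside squid.conf itself
acl whitelist dstdomain "/etc/squid/whitelist.txt"
acl blacklist dstdomain "/etc/squid/blacklist.txt"
http_access allow whitelist
http_access deny blacklist

Editing the files still requires a 'squid -k reconfigure' to take effect, 
which is exactly the reload interruption discussed elsewhere in this thread.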
 
There aren’t many differences between containerized squid to VM’s is not a lot. 
Actually, in the case of a simple forward proxy
it might be pretty simple to run a containerized squid on-top of a VM (which 
how k8s is most runs like these days).
 
As for autoscaling Squid containers on top of k8s, you will probably need to 
invest a lot more than with a VM to make this fit your needs.
 
As for the extra administration time you mentioned for a VM: that is not true 
(in my opinion and experience).
There isn't much of a difference between a VM and a container for a simple 
forward Squid setup.
(It would be different if you needed interception of connections.)
 
If you share more details on the required setup itself, it would be pretty 
simple to find the right way to a good solution.
 
I really recommend reading this article:
https://ably.com/blog/no-we-dont-use-kubernetes
 
which touches on many aspects of k8s vs VMs.
 
I can try to give you an idea for an implementation on VMs, but I am still 
missing a couple of pieces to pick the best one.
 
Yours,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Pintér Szabolcs
Sent: Tuesday, 20 September 2022 22:52
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid performance recommendation
 
Hi squid community,
I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.
Parameters: I need HA, caching (small objects only, nothing like big Windows 
updates), scaling (this is only secondary), and I want to use and modify (in 
production, during working hours) complex black- and whitelists
I have some idea:

1. A huge kubernetes cluster 
pro: Easy to scale, change the config and update.
contra: I'm afraid of the network latency (because of the many extra layers, 
e.g. the VM network stack, the Kubernetes network stack with VXLAN, etc.).
2. Simple VMs with HAProxy in TCP mode
pro: less network latency (I think)
contra: more administration time


Does anybody have any experience with Squid on Kubernetes (or similar technology) 
with a large number of users?

Which do you think is the best solution, or do you have another idea for the 
implementation?

Thanks!

Best, Szabolcs
-- 
Pintér Szabolcs Péter
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600 
+36 30 471-3827 
spin...@npsh.hu

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-21 Thread Marcus Kool


On 20/09/2022 20:52, Pintér Szabolcs wrote:


Hi squid community,

I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.

Parameters: I need HA, caching (small objects only, nothing like big Windows 
updates), scaling (this is only secondary), and I want to use and modify (in 
production, during working hours) complex black- and whitelists

[snip]


Modifying the Squid config in production during working hours is a requirement 
that needs careful thought, since the web proxy is unavailable while it reloads 
its configuration.

HA can resolve this with the following rolling procedure (a load-balancer 
sketch follows the list):
1. change config squid node 1
2. load balancer stops new connections to node 1
3. wait X minutes, maybe 15 minutes, for most connections to node 1 to disappear
4. reload the config on node 1 - existing connections are closed
5. wait until Squid on node 1 is operational again
6. load balancer allows new connections to node 1 and stops new connections to 
node 2
7. change config squid node 2
8. wait X minutes, maybe 15 minutes, for most connections to node 2 to disappear
9. reload the config on node 2 - existing connections are closed
10. wait until Squid on node 2 is operational again
11. load balancer allows new connections to node 2
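Steps 2 and 6 can be driven through HAProxy's runtime socket; a minimal 
sketch, assuming a squid_pool/node1 naming and the socket path below:

# stop new connections to node 1; existing connections are left to finish
echo "set server squid_pool/node1 state drain" | socat stdio /var/run/haproxy.sock

# ... change the config on node 1, wait, reload, wait for Squid to be up ...

# put node 1 back into rotation
echo "set server squid_pool/node1 state ready" | socat stdio /var/run/haproxy.sock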

Depending on what your requirements are, you may consider using ufdbGuard for 
Squid since ufdbGuard can reload its configuration without interrupting clients 
of the web proxy.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-21 Thread Amos Jeffries

On 21/09/22 07:52, Pintér Szabolcs wrote:

Hi squid community,

I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.




Number of users is of low relevance to Squid. What matters is the rate 
of requests they are sending to Squid.


For example: each of your 40k users sending one request per hour to 
Squid is not a problem, but if they each send one per second you will 
need multiple Squid instances.
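One way to measure that rate on an existing proxy is to count access.log 
lines per second (a sketch assuming the default native log format, whose 
first field is a UNIX timestamp; the log path is an assumption):

# peak requests-per-second over the last 100k requests
tail -n 100000 /var/log/squid/access.log |
  awk '{ n[int($1)]++ } END { for (t in n) print n[t] }' |
  sort -rn | head -1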




Parameters: I need HA,


Assuming you do mean "high availability" rather than something unusual:
Squid is designed to maximize availability - whether it meets this 
criterion will depend on several factors:


 * what measure(s) you consider necessary for this requirement.
   Proxy uptime? Response time?
   How much outage is acceptable for each?

 * the complexity of features and policy Squid is configured with.
  - impacts reconfigure/restart times, and response times.

 * consistency of client compliance to HTTP
  - impacts response times


caching (small objects only, nothing like big Windows 
updates),


No problem for Squid.


scaling (it is only secondary), and


Not a problem for Squid.

I want to use and modify (in 
production, during working hours) complex black- and whitelists




Should not be a problem. Details of course depend on your specific 
policy and update needs.





I have some idea:

1. A huge kubernetes cluster

pro: Easy to scale, change the config and update.

contra: I'm afraid of the network latency (because of the many extra 
layers, e.g. the VM network stack, the Kubernetes network stack with VXLAN, etc.).




Sorry, I have no experience here, so I cannot answer the remainder of 
your questions.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid performance recommendation

2022-09-20 Thread Pintér Szabolcs

Hi squid community,

I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.


Parameters: I need HA, caching (small objects only, nothing like big Windows 
updates), scaling (this is only secondary), and I want to use and modify (in 
production, during working hours) complex black- and whitelists


I have some idea:

1. A huge kubernetes cluster

pro: Easy to scale, change the config and update.

contra: I'm afraid of the network latency (because of the many extra 
layers, e.g. the VM network stack, the Kubernetes network stack with VXLAN, etc.).


2. Simple VMs with HAProxy in TCP mode

pro: less network latency (I think)

contra: more administration time


Does anybody have any experience with Squid on Kubernetes (or similar 
technology) with a large number of users?


Which do you think is the best solution, or do you have another idea 
for the implementation?


Thanks!

Best, Szabolcs

--
*Pintér Szabolcs Péter*
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600
+36 30 471-3827
spin...@npsh.hu
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-09-05 Thread Marcio B.
Thanks!

Regards,

Márcio Bacci

On Sun, 5 Sep 2021 at 15:25, Eliezer Croitoru 
wrote:

> From:
>
> https://serverfault.com/a/717273/227456
>
>
>
>
> The number of file descriptors is set in the systemd unit file. By default
> this is 16384, as you can see in /usr/lib/systemd/system/squid.service.
>
> To override this, create a locally overriding
> /etc/systemd/system/squid.service which changes the amount of file
> descriptors. It should look something like this:
>
> .include /usr/lib/systemd/system/squid.service
>
>
>
> [Service]
>
> LimitNOFILE=65536
>
> Do not edit the default file /usr/lib/systemd/system/squid.service, as it
> will be restored whenever the package is updated. That is why we put it in
> a local file to override defaults.
>
> After creating this file, tell systemd about it:
>
> systemctl daemon-reload
>
> and then restart squid.
>
> systemctl restart squid
>
>
>
>
>
> Eliezer
>
>
>
>
>
>
>
> *From:* NgTech LTD 
> *Sent:* Tuesday, August 31, 2021 6:11 PM
> *To:* Marcio B. 
> *Cc:* Squid Users 
> *Subject:* Re: [squid-users] Squid performance issues
>
>
>
> Hey Marcio,
>
>
>
> You will need to add a systemd service file that extends the current one
> with more FileDescriptors.
>
>
>
> I cannot guide now I do hope to be able to write later.
>
>
>
> If anyone is able to help faster go ahead.
>
>
>
> Eliezer
>
>
>
>
>
> On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:
>
> Hi,
>
> I implemented a Squid server in version 4.6 on Debian and tested it for
> about 40 days. However I put it into production today and Internet browsing
> was extremely slow.
>
> In /var/log/syslog I'm getting the following messages:
>
> Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
>
> I searched the Internet, but I only found very old information
> referring to files that don't exist on my Squid server.
>
> The only thing I did was add the following value to the
> /etc/security/limits.conf file:
>
> * - nofile 65535
>
> however, this did not solve it.
>
> Does anyone have any idea how I could solve this problem?
>
>
>
> Regards,
>
>
>
> Márcio Bacci
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-09-05 Thread Eliezer Croitoru
From:

https://serverfault.com/a/717273/227456

 


The number of file descriptors is set in the systemd unit file. By default this 
is 16384, as you can see in /usr/lib/systemd/system/squid.service.

To override this, create a locally overriding /etc/systemd/system/squid.service 
which changes the amount of file descriptors. It should look something like 
this:

.include /usr/lib/systemd/system/squid.service

 

[Service]

LimitNOFILE=65536

Do not edit the default file /usr/lib/systemd/system/squid.service, as it will 
be restored whenever the package is updated. That is why we put it in a local 
file to override defaults.

After creating this file, tell systemd about it:

systemctl daemon-reload

and then restart squid.

systemctl restart squid
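To verify the new limit took effect (a sketch; assumes squidclient is 
installed and the cache manager is reachable with default settings):

# the limit systemd will apply
systemctl show squid -p LimitNOFILE

# the limit the running Squid actually got
squidclient mgr:info | grep -i "file desc"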

 

 

Eliezer

 

 

 

From: NgTech LTD  
Sent: Tuesday, August 31, 2021 6:11 PM
To: Marcio B. 
Cc: Squid Users 
Subject: Re: [squid-users] Squid performance issues

 

Hey Marcio,

 

You will need to add a systemd service file that extends the current one with 
more FileDescriptors.

 

I cannot guide now I do hope to be able to write later.

 

If anyone is able to help faster go ahead.

 

Eliezer

 

 

On Tue, 31 Aug 2021, 18:05, Marcio B. <marcioba...@gmail.com> wrote:

Hi,

I implemented a Squid server, version 4.6, on Debian and tested it for about 
40 days. However, when I put it into production today, Internet browsing was 
extremely slow.

In /var/log/syslog I'm getting the following messages:

Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors


I searched the Internet, but I only found very old information referring 
to files that don't exist on my Squid server.

The only thing I did was add the following value to the 
/etc/security/limits.conf file:

* - nofile 65535

however, this did not solve it.

Does anyone have any idea how I could solve this problem?

 

Regards,

 

Márcio Bacci

___
squid-users mailing list
squid-users@lists.squid-cache.org 
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-08-31 Thread L . P . H . van Belle
Hi Marcio, 

You'd better upgrade to Debian Bullseye and see if it happens there too. 
If you don't want that, try this:

systemctl edit squid.service 
Add:

[Service]
LimitNOFILE=65535
 
 
Save, and run: systemctl restart squid

But I would recommend using Debian Bullseye. 


Greetz, 

Louis
 
 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of 
NgTech LTD
Sent: Tuesday, 31 August 2021 17:11
To: Marcio B.
Cc: Squid Users
Subject: Re: [squid-users] Squid performance issues



Hey Marcio, 

You will need to add a systemd service file that extends the current one with 
more FileDescriptors.


I cannot guide now I do hope to be able to write later.


If anyone is able to help faster go ahead.


Eliezer




On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:

Hi,

I implemented a Squid server, version 4.6, on Debian and tested it for about 
40 days. However, when I put it into production today, Internet browsing was 
extremely slow.

In /var/log/syslog I'm getting the following messages:

Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors


I searched the Internet, but I only found very old information referring 
to files that don't exist on my Squid server.

The only thing I did was add the following value to the 
/etc/security/limits.conf file:

* - nofile 65535

however, this did not solve it.

Does anyone have any idea how I could solve this problem?


Regards,


Márcio Bacci

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-08-31 Thread Klaus Brandl
Look at your cache.log after Squid starts; there you can see how
many file descriptors are available:

2021/08/31 17:14:36.870 kid1| With 1024 file descriptors available

Maybe there is a file like /etc/default/squid:

SQUID_MAXFD=1024
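A quick way to check a running instance (the log path is an assumption):

grep -i "file descriptors" /var/log/squid/cache.log | tail -1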

Regards

Klaus

Am Dienstag, dem 31.08.2021 um 18:10 +0300 schrieb NgTech LTD:
> Hey Marcio,
> 
> You will need to add a systemd service file that extends the current
> one with more FileDescriptors.
> 
> I cannot guide now I do hope to be able to write later.
> 
> If anyone is able to help faster go ahead.
> 
> Eliezer
> 
> 
> On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:
> > Hi,
> > 
> > I implemented a Squid server in version 4.6 on Debian and tested it
> > for about 40 days. However I put it into production today and
> > Internet browsing was extremely slow.
> > 
> > In /var/log/syslog I'm getting the following messages:
> > 
> > Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is
> > running out of filedescriptors
> > 
> > Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is
> > running out of filedescriptors
> > 
> > Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is
> > running out of filedescriptors
> > 
> > 
> > I searched the Internet, but I only found very old information
> > referring to files that don't exist on my Squid server.
> > 
> > The only thing I did was add the following value to the
> > /etc/security/limits.conf file:
> > 
> > * - nofile 65535
> > 
> > however, this did not solve it.
> > 
> > Does anyone have any idea how I could solve this problem?
> > 
> > Regards,
> > 
> > Márcio Bacci
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-08-31 Thread NgTech LTD
Hey Marcio,

You will need to add a systemd service file that extends the current one
with more FileDescriptors.

I cannot guide now I do hope to be able to write later.

If anyone is able to help faster go ahead.

Eliezer


On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:

> Hi,
>
> I implemented a Squid server in version 4.6 on Debian and tested it for
> about 40 days. However I put it into production today and Internet browsing
> was extremely slow.
>
> In /var/log/syslog I'm getting the following messages:
>
> Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
>
> I searched the Internet, but I only found very old information
> referring to files that don't exist on my Squid server.
>
> The only thing I did was add the following value to the
> /etc/security/limits.conf file:
>
> * - nofile 65535
>
> however, this did not solve it.
>
> Does anyone have any idea how I could solve this problem?
>
> Regards,
>
> Márcio Bacci
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid performance issues

2021-08-31 Thread Marcio B.
Hi,

I implemented a Squid server, version 4.6, on Debian and tested it for
about 40 days. However, when I put it into production today, Internet browsing
was extremely slow.

In /var/log/syslog I'm getting the following messages:

Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out of
filedescriptors

Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out of
filedescriptors

Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out of
filedescriptors


I searched the Internet, but I only found very old information
referring to files that don't exist on my Squid server.

The only thing I did was add the following value to the
/etc/security/limits.conf file:

* - nofile 65535

however, this did not solve it.

Does anyone have any idea how I could solve this problem?

Regards,

Márcio Bacci
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance 3.5.20 → 3.5.23

2017-01-25 Thread Stephen Baynes
Looks like this was a false alarm. The test environment I was using had
some random fluctuations in its results. I assumed they averaged out over
multiple test runs; however, it looks as if I got some bad rolls of the dice
and they did not.
I now have a modified test which has <1% variation over four runs, each
taking 3 hours. Testing both versions, I now see a negligible difference
between them.

On 13 January 2017 at 10:10, Stephen Baynes 
wrote:

> Is there a known performance fall off going 3.5.20 → 3.5.23?
>
> I am seeing a 15% to 20% performance drop on my normal download benchmark
> and a crude test of uploading shows a few percent slowdown.
>
> Running on a Linux derived from Debian.
>
> Thanks
>
> --
>
> Stephen Baynes
>
>


-- 

Stephen Baynes
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance 3.5.20 → 3.5.23

2017-01-13 Thread Yuri

"Premature optimization is root of all evlis".


On 13.01.2017 16:10, Stephen Baynes wrote:

Is there a known performance fall off going 3.5.20 → 3.5.23?

I am seeing a 15% to 20% performance drop on my normal download 
benchmark and a crude test of uploading shows a few percent slowdown.


Running on a Linux derived from Debian.

Thanks

--

Stephen Baynes



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid performance 3.5.20 → 3.5.23

2017-01-13 Thread Stephen Baynes
Is there a known performance fall off going 3.5.20 → 3.5.23?

I am seeing a 15% to 20% performance drop on my normal download benchmark
and a crude test of uploading shows a few percent slowdown.

Running on a Linux derived from Debian.

Thanks

-- 

Stephen Baynes
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-05 Thread Amos Jeffries
On 4/08/2016 11:55 p.m., brendan kearney wrote:
> At what point does buffer bloat set in?  I have a linux router with the
> below sysctl tweaks load balancing with haproxy to 2 squid instances.  I
> have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
> TX to 1024 on all interfaces.

Exact timing will depend on your systems. AFAIU, it is the point where
control signals about congestion spend longer in the traffic buffer
than one endpoint needs to start re-sending packets, causing the
congestion to get worse - a meltdown sort of behaviour.

If Squid takes, say, 1ms to process an I/O cycle, and reads 4KB per cycle,
any server that sends more than 4KB/ms will fill the buffer somewhat.
 (Real I/O cycles are dynamic in timing, so there is no easily pinpointed
time when bloat effects start to happen.)

What I would expect to see with buffer limits set to 8MB is that on
transfers of objects greater than 8MB (e.g. 1GB) the first ~12MB happen
really fast, then speed drops off a cliff down to the slower rate at
which Squid is processing data out of the buffer.

With my fake numbers from above: 1ms x 4KB ==> 4MB/sec. So in theory you
would get up to 64Mbps for the first chunk of large objects, then drop
down to 32Mbps. Then the Squid->client buffers start filling, and there
is a second drop down to whatever speed the client is emptying its side at.
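Spelling out that back-of-envelope calculation with the same fake numbers:

# 4 KB moved per 1 ms I/O cycle, expressed in Mbit/s
awk 'BEGIN { print 4*1024 * 8 / 0.001 / 1e6, "Mbit/s" }'   # ~32.8, the "32Mbps" above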

The issue is not visible on any object smaller than those cliff
boundaries. And may not be user visible at all unless total network load
reaches rates where the processing speed drops - which makes the speed
drop occur much sooner.
 In particular, as I said earlier, as Squid gets more processing load its
I/O cycles slow down, effectively shifting the speed 'cliff' to lower
thresholds.

 If there is any problem in the traffic, it will take 2 seconds for
Squid to become aware and even begin to start failure recovery.
Signals like end-of-object might arrive faster if the TCP stack is
optimized for control signals and cause up to 8MB of data at the end of
the object to be truncated. Other weird things like that start to happen
depending on the TCP stack implementation.

> 
> The squid servers run with almost the same hardware and tweaks, except the
> ring buffers have only been bumped to 512.
> 
> DSL Reports has a speed test page that supposedly finds and quantifies
> buffer bloat and my setup does not introduce it, per their tests.

The result there will depend on the size of the objects they test with. And,
as Marcus mentioned, the bandwidth*latency product to the test server has an
impact on what data sizes will be required to find any problems.

> 
> I am only running a home internet connection (50 down x 15 up) but have a
> wonderful browsing experience.  I imagine scale of bandwidth might be a
> factor, but have no idea where buffer bloat begins to set in.

At values higher than your "50 down" by the sounds of it. I assume that
means 50 Mbps, which is well under the 64Mbps cliff your 8MB buffer causes.

It is rare to see a home connection that needs industrial-scale
performance optimizations tuned with Squid. The bottleneck is that
Internet modem. Anything you configure internally greater than its
limits is effectively "infinity".

The bloat effects (if any) will be happening in your ISP's network.
Bloating is particularly nasty as it affects *others* sharing the
network worse than the individual causing it.


> # Maximum number of outstanding syn requests allowed; default 128
> #net.ipv4.tcp_max_syn_backlog = 2048
> net.ipv4.tcp_max_syn_backlog = 16284
> 

For each of these entries there will be ~256 bytes of RAM used by Squid
to remember that it occurred, plus whatever your TCP stack uses.
Not big, but the latency effect of waiting for an FD to become available in
Squid might be noticed in highly loaded network conditions.


> # Discourage Linux from swapping idle processes to disk (default = 60)
> #vm.swappiness = 10
> 
> # Increase Linux autotuning TCP buffer limits

AFAIK these are the limits, not what is actually used. The latest Linux
versions contain algorithms designed by the buffer bloat research team
that prevent insane buffers being created even if the limits are set large.


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread Marcus Kool



On 08/04/2016 10:08 AM, Heiler Bemerguy wrote:


Sorry Amos, but I've tested with modifying JUST these two sysctl parameters and 
the difference is huge.

Without maximum tcp buffers set to 8MB, I got a 110KB/s download speed, and 
with a 8MB kernel buffer I got a 9.5MB/s download speed (via squid, of course).

I think it has to do with the TCP maximum Window Size, the kernel can set on a 
connection.


With these tuning parameters it is always important to look at the 
bandwidth*latency product.
I see that you are from Brasil and I know from experience that latencies to 
Europe are 230+ ms and latencies to USA vary between 80 and 200 ms.
I believe that the large variation in latency is due to the limited 
international capacity of Brasil (the Level3 link from the SP-IX to USA is most 
of the day 90+% utilized).

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread Heiler Bemerguy


Sorry Amos, but I've tested modifying JUST these two sysctl 
parameters, and the difference is huge.


Without the maximum TCP buffers set to 8MB I got a 110KB/s download speed, 
and with an 8MB kernel buffer I got a 9.5MB/s download speed (via squid, 
of course).


I think it has to do with the maximum TCP window size the kernel can 
set on a connection.



--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 04/08/2016 03:16, Amos Jeffries wrote:

On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:

I think it doesn't really matter how much squid sets its default buffer.
The linux kernel will upscale to the maximum set by the third option.
(and the TCP Window Size will follow that)

net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608


Having large system buffers like that just leads to buffer bloat
problems. Squid is still the bottleneck if it is sending only 4KB each
I/O cycle to the client - no matter how much is already received by
Squid, or stuck in kernel queues waiting to arrive to Squid. The more
heavily loaded the proxy is the longer each I/O cycle gets as all
clients get one slice of the cycle to do whatever processing they need done.

The buffers limited by HTTP_REQBUF_SZ are not dynamic so it's not just a
minimum. Nathan found a 300% speed increase from a 3x buffer size
increase. Which is barely noticeable (but still present) on small
responses, but very noticeable with large transactions.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread brendan kearney
At what point does buffer bloat set in?  I have a Linux router with the
sysctl tweaks below, load balancing with HAProxy to 2 Squid instances.  I
have 4 x 1Gb interfaces bonded and have bumped the RX and
TX ring buffers to 1024 on all interfaces.

The squid servers run with almost the same hardware and tweaks, except the
ring buffers have only been bumped to 512.

DSL Reports has a speed test page that supposedly finds and quantifies
buffer bloat and my setup does not introduce it, per their tests.

I am only running a home internet connection (50 down x 15 up) but have a
wonderful browsing experience.  I imagine scale of bandwidth might be a
factor, but have no idea where buffer bloat begins to set in.

# Favor low latency over high bandwidth
net.ipv4.tcp_low_latency = 1

# Use the full range of ports.
net.ipv4.ip_local_port_range = 1025 65535

# Maximum number of open files per process; default 1048576
#fs.nr_open = 1000

# Increase system file descriptor limit; default 402289
fs.file-max = 10

# Maximum number of requests queued to a listen socket; default 128
net.core.somaxconn = 1024

# Maximum number of packets backlogged in the kernel; default 1000
#net.core.netdev_max_backlog = 2000
net.core.netdev_max_backlog = 4096

# Maximum number of outstanding syn requests allowed; default 128
#net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_syn_backlog = 16284

# Discourage Linux from swapping idle processes to disk (default = 60)
#vm.swappiness = 10

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Increase Linux autotuning UDP buffer limits
net.ipv4.udp_mem = 4096 87380 16777216

# Make room for more TIME_WAIT sockets due to more clients,
# and allow them to be reused if we run out of sockets
# Also increase the max packet backlog
net.core.netdev_max_backlog = 5
net.ipv4.tcp_max_syn_backlog = 3
net.ipv4.tcp_max_tw_buckets = 200
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0

On Aug 4, 2016 2:17 AM, "Amos Jeffries"  wrote:

> On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:
> >
> > I think it doesn't really matter how much squid sets its default buffer.
> > The linux kernel will upscale to the maximum set by the third option.
> > (and the TCP Window Size will follow that)
> >
> > net.ipv4.tcp_wmem = 1024 32768 8388608
> > net.ipv4.tcp_rmem = 1024 32768 8388608
> >
>
> Having large system buffers like that just leads to buffer bloat
> problems. Squid is still the bottleneck if it is sending only 4KB each
> I/O cycle to the client - no matter how much is already received by
> Squid, or stuck in kernel queues waiting to arrive to Squid. The more
> heavily loaded the proxy is the longer each I/O cycle gets as all
> clients get one slice of the cycle to do whatever processing they need
> done.
>
> The buffers limited by HTTP_REQBUF_SZ are not dynamic so it's not just a
> minimum. Nathan found a 300% speed increase from a 3x buffer size
> increase. Which is barely noticeable (but still present) on small
> responses, but very noticeable with large transactions.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread Amos Jeffries
On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:
> 
> I think it doesn't really matter how much squid sets its default buffer.
> The linux kernel will upscale to the maximum set by the third option.
> (and the TCP Window Size will follow that)
> 
> net.ipv4.tcp_wmem = 1024 32768 8388608
> net.ipv4.tcp_rmem = 1024 32768 8388608
> 

Having large system buffers like that just leads to buffer bloat
problems. Squid is still the bottleneck if it is sending only 4KB each
I/O cycle to the client - no matter how much is already received by
Squid, or stuck in kernel queues waiting to arrive to Squid. The more
heavily loaded the proxy is the longer each I/O cycle gets as all
clients get one slice of the cycle to do whatever processing they need done.

The buffers limited by HTTP_REQBUF_SZ are not dynamic, so it's not just a
minimum. Nathan found a 300% speed increase from a 3x buffer size
increase, which is barely noticeable (but still present) on small
responses, but very noticeable with large transactions.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Marcus Kool



On 08/03/2016 10:27 AM, Amos Jeffries wrote:

On 3/08/2016 9:45 p.m., Marcus Kool wrote:



On 08/03/2016 12:30 AM, Amos Jeffries wrote:



If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though,
faster traffic, but also some assertions being hit.


I remember the thread about increasing the request buffer to 64K and it
looked so promising.
Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 3.5.x?



It has not had much testing other than Nathan's use, so I'm a bit
hesitant to call it stable. But just raising the 4KB limit a bit, to 64K
or less, should not have much negative effect other than extra RAM
per transaction for buffering (bumped x8 from 256KB per client
connection to 2MB).


I am about to configure an array of squid servers to process 50 gbit of traffic
and the performance increase that Nathan originally reported is significant...
So if I understand it correctly, raising it to 16K in 3.5.20
will most likely have no issues.  I will give it a try.

Thanks
Marcus


We got a bit ambitious and made the main buffers dynamic and effectively
unlimited for Squid-4. But that hit an issue, so has been pulled out
while Nathan figures out how to avoid it.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Heiler Bemerguy


I think it doesn't really matter how large Squid sets its default buffer. 
The Linux kernel will scale it up to the maximum set by the third option 
(and the TCP window size will follow that).


net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 03/08/2016 06:45, Marcus Kool wrote:



On 08/03/2016 12:30 AM, Amos Jeffries wrote:



If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though,
faster traffic, but also some assertions being hit.


I remember the thread about increasing the request buffer to 64K and it
looked so promising.
Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 
3.5.x?


Marcus


You may find that memory becomes your bottleneck at higher speeds.
8-16GB sounds like a lot for most uses, but when you have enough
connections active to drive Gbps (with 4-6x 64KB I/O buffers) there
is a lot of parallel pressure on the RAM.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Amos Jeffries
On 3/08/2016 9:45 p.m., Marcus Kool wrote:
> 
> 
> On 08/03/2016 12:30 AM, Amos Jeffries wrote:
> 
> 
>> If that's not fast enough, you may also wish to patch in a larger value
>> for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
>> read_ahead_gap in squid.conf. That has had some mixed results though,
>> faster traffic, but also some assertions being hit.
> 
> I remember the thread about increasing the request buffer to 64K and it
> looked so promising.
> Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 3.5.x?
> 

It has not had much testing other than Nathan's use, so I'm a bit
hesitant to call it stable. But just raising the 4KB limit a bit, to 64K
or less, should not have much negative effect other than extra RAM
per transaction for buffering (bumped x8 from 256KB per client
connection to 2MB).

We got a bit ambitious and made the main buffers dynamic and effectively
unlimited for Squid-4. But that hit an issue, so has been pulled out
while Nathan figures out how to avoid it.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Marcus Kool



On 08/03/2016 12:30 AM, Amos Jeffries wrote:



If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though,
faster traffic, but also some assertions being hit.


I remember the thread about increasing the request buffer to 64K and it
looked so promising.
Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 3.5.x?

Marcus


You may find that memory becomes your bottleneck at higher speeds.
8-16GB sounds like a lot for most uses, but when you have enough
connections active to drive Gbps (with 4-6x 64KB I/O buffers) there
is a lot of parallel pressure on the RAM.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-02 Thread Amos Jeffries
On 3/08/2016 2:42 p.m., Heiler Bemerguy wrote:
> 
> in /etc/sysctl.conf, add:
> 
> net.core.rmem_max = 8388608
> net.core.wmem_max = 8388608
> net.core.wmem_default = 32768
> net.core.rmem_default = 32768
> net.ipv4.tcp_wmem = 1024 32768 8388608
> net.ipv4.tcp_rmem = 1024 32768 8388608
> 


Please also bump up your version to 3.5.20; there have been more than a
few performance bug fixes since 3.5.1.

For near-Gbps speeds you will need one of the 'Extreme CARP'
configurations for spreading the workload between Squid processes. That
has achieved 800-900 Mbps a while back.


If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB, with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though:
faster traffic, but also some assertions being hit.
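A hedged sketch of that change (the exact line in src/defines.h may differ 
between releases, and Squid must be rebuilt afterwards):

# src/defines.h:
#   -#define HTTP_REQBUF_SZ 4096
#   +#define HTTP_REQBUF_SZ 65536

# matching squid.conf increase:
read_ahead_gap 64 KB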

You may find that memory becomes your bottleneck at higher speeds.
8-16GB sounds like a lot for most uses, but when you have enough
connections active to drive Gbps (with 4-6x 64KB I/O buffers) there
is a lot of parallel pressure on the RAM.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-02 Thread Heiler Bemerguy


in /etc/sysctl.conf, add:

net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.wmem_default = 32768
net.core.rmem_default = 32768
net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608
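To apply and confirm the values without a reboot (a minimal sketch):

sysctl -p /etc/sysctl.conf                    # load the new values
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem    # confirm what is active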


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 02/08/2016 23:37, Paul van Tuel wrote:

Hi All,
We've been running Squid for many years. Recently we upgraded our 
internet link to a 1Gbps link, but we are finding that squid is not 
able to drive this link to its full potential (previous links have 
been 30Mbps or 100Mbps).

Currently running squid 3.5.1, but have tried 3.4, 3.3, 3.2 versions too.

Upload speeds from the server (without using the local proxy) to the 
internet are 200-300Mbps
Download speeds from the server (without using the local proxy) to the 
internet are 300-600Mbps (the link is not guaranteed).


If we use squid (or tinyproxy) to upload a file, the upload speed 
varies from 15-50Mbps.
If we use squid (or tinyproxy) to download a file from the internet, 
the speeds varies from 80-115Mbps.


We have used various combinations of hardware:
* Dell Power Edge T300, 2950 with SSD or SAS disks. Both using 
bare-metal or VMWare ESXi. Quad-core 2.6GHz Xeon processors with 8G or 
16G RAM
* We have used windows 7 Pro, Ubuntu, Centos 6, Centos 7 each as 
bare-metal and as VM under ESXi.
* We have used a white box (Asus motherboard with 1Gbps NIC, i7, 16G 
RAM) with each of the above OSes in bare-metal installations.


Squid configuration has basically been the out-of-the-box sample file 
with no authentication enabled. Only one user testing the performance 
of the squid proxy. The server is pretty much idle.


Each time the result is the same - if we go direct to the internet 
without using the local proxy the speed is as we would expect. If we 
use the local proxy, the speed drops significantly.
Is this expected behaviour? Or is there something we can do to speed 
up/tune squid's performance? I would be expecting squid to utilise the 
full bandwidth available (similar to what the server can download if 
you do not use a proxy).



Thank you
Paul.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-02 Thread Paul van Tuel

Hi All,
We've been running Squid for many years. Recently we upgraded our 
internet link to a 1Gbps link, but we are finding that squid is not able 
to drive this link to its full potential (previous links have been 
30Mbps or 100Mbps).

Currently running squid 3.5.1, but have tried 3.4, 3.3, 3.2 versions too.

Upload speeds from the server (without using the local proxy) to the 
internet are 200-300Mbps
Download speeds from the server (without using the local proxy) to the 
internet are 300-600Mbps (the link is not guaranteed).


If we use squid (or tinyproxy) to upload a file, the upload speed 
varies from 15-50Mbps.
If we use squid (or tinyproxy) to download a file from the internet, the 
speeds varies from 80-115Mbps.


We have used various combinations of hardware:
* Dell Power Edge T300, 2950 with SSD or SAS disks. Both using 
bare-metal or VMWare ESXi. Quad-core 2.6GHz Xeon processors with 8G or 
16G RAM
* We have used windows 7 Pro, Ubuntu, Centos 6, Centos 7 each as 
bare-metal and as VM under ESXi.
* We have used a white box (Asus motherboard with 1Gbps NIC, i7, 16G 
RAM) with each of the above OSes in bare-metal installations.


Squid configuration has basically been the out-of-the-box sample file 
with no authentication enabled. Only one user testing the performance of 
the squid proxy. The server is pretty much idle.


Each time the result is the same - if we go direct to the internet 
without using the local proxy the speed is as we would expect. If we use 
the local proxy, the speed drops significantly.
Is this expected behaviour? Or is there something we can do to speed 
up/tune squid's performance? I would be expecting squid to utilise the 
full bandwidth available (similar to what the server can download if you 
do not use a proxy).



Thank you
Paul.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance profiling

2013-06-21 Thread Ahmed Talha Khan


 I want to share the results with the community on the Squid wikis. How
 do I do that?


 We are collecting some ad-hoc benchmark details for Squid releases at
 http://wiki.squid-cache.org/KnowledgeBase/Benchmarks. So far this is not
 exactly rigorous testing, although following the methodology for stats
 collection (as outlined in the intro section) retains consistency and
 improves comparability between submissions.

 Since you are using a different methodology, please feel free to write up a
 new article on it. The details you just posted looks like a good start. We
 can offer wiki or static web page, or reference from our benchmarking page
 to a blog publication of your own.


Yes, sure, that would be great. I will write a blog post and post it here as well.


 Some results from the tests are:

 Server response size = 200 Bytes
 "New" means keep-alives were turned off
 "Keep-alive" means keep-alives were used, with 100 HTTP requests per connection
 c = concurrent requests


              HTTP                    HTTPS
          New     | Keep-Alive    New     | Keep-Alive

 RPS
  c=50    6466    | 20227         1336    | 14461
  c=100   6392    | 21583         1303    | 14683
  c=200   5986    | 21462         1300    | 13967

 Throughput (Mbps)
  c=50    26      | 82.4          5.4     | 59
  c=100   25.8    | 88            5.25    | 60
  c=200   24      | 88            5.4     | 58

 Latency (ms)
  c=50    7.5     | 2.7           36      | 3.75
  c=100   15.8    | 5.27          80      | 8
  c=200   26.5    | 11.3          168     | 18


The SSL numbers seem pretty low to me for such a powerful machine. Do
you think these can be improved somehow?
For HTTPS I was using a 1024-bit key. The ciphers selected
between ab and Squid, and between Squid and
lighttpd, were TLS_RSA_WITH_AES_256_CBC_SHA and
TLS_DHE_RSA_WITH_AES_256_CBC_SHA respectively.



 With these results I profiled Squid with the perf tool and got some
 results that I could not understand, so my questions relate to
 them.


 Thank you. Some very nice numbers. I hope they give a clue to anyone still
 thinking persistent connections need to be disabled to improve performance.


 For the HTTPS case, the CPU utilization peaks around 90% on all cores
 and the perf profiler gives:

 24.63%  squid  libc-2.15.so        [.] __memset_sse2

  6.13%  squid  libcrypto.so.1.0.0  [.] bn_sqr4x_mont

  4.98%  squid  [kernel.kallsyms]   [k] hypercall_page
         |
         --- hypercall_page
             |
             |--93.73%-- check_events


 Why is so much time spent in one instruction by Squid? And a
 memset instruction at that! Any pointers?


 Squid was originally written in C and still has a lot of memset() calls
 around the place clearing memory before use. We have made a few attempts to
 track them down and remove unnecessary usages but a lot still remain.
 Another attempt was tried in the more recent code, so you may find a lower
 profile rating in the current 3.HEAD.

 Also check whether you have memory_pools on or off. That can affect the
 number of calls to memset().


Memory pools were ON. I did not change the default behavior
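For anyone repeating the test, these are the squid.conf directives involved 
(a sketch; the values are illustrative only):

# disable pooled allocation entirely, to compare allocation/memset costs
memory_pools off
# or keep pools but cap the idle memory they may hold
memory_pools_limit 64 MB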


 Since in this case all the CPU power is being used, it is understandable
 that the performance cannot be improved here. The problem arises with
 the HTTP case.


 On the contrary, code improvements can be made to reduce Squid's CPU cycle
 requirements, which in turn raises performance. If your
 profiling can highlight things like memset() or Squid functions
 currently consuming large amounts of CPU, effort can be targeted at
 reducing those occurrences for the best work/performance gains.


Yes, obviously code improvements can be done. What I meant to say was
that in the current scenario, with
the current code base, these numbers will stay constant.

 For the plain HTTP case, the CPU utilization is only around 50-60% on
 all the cores and perf says:


  8.47%  squid  [kernel.kallsyms]   [k] hypercall_page
         --- hypercall_page
             |--94.78%-- check_events

  1.78%  squid  libc-2.15.so        [.] vfprintf
  1.62%  squid  [kernel.kallsyms]   [k] xen_spin_lock
  1.44%  squid  libc-2.15.so        [.] __memcpy_ssse3_back


 These results show that squid is NOT CPU bound at this point. Neither
 is it Network IO bound.

Re: [squid-users] Squid performance profiling

2013-06-21 Thread Ahmed Talha Khan
On Thu, Jun 20, 2013 at 5:21 PM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:


 On 06/20/2013 06:51 AM, Amos Jeffries wrote:



 If anyone is interested with very detailed benchmarks, then I can provide
 them.


 Yes please :-)

 PS. could you CC the squid-dev mailing list as well with the details. The
 more developer eyes we can get on this data the better. Although please do
 test a current release first, we have significantly
 changed the ACL handling which was one bottleneck in Squid, and have
 altered the mempools use of memset() in several locations in the latest
 3.HEAD code.

 Amos


 I understand that Amos is eager to get more tests and more results about
 the latest enhancements, but as Amos himself also stated earlier, please
 use a released version of Squid for testing since the test results for
 3.3.x or 3.4.x are interesting for admins of Squid who can consider
 upgrading,
 but test results for 3.HEAD are not useful for them since they are not
 likely
 to consider an upgrade to 3.HEAD.

Yes sure I can do that.


 And if you have spare resources, it would be interesting to perform the
 same test for 3.3.5 and 3.2.11 to see the differences between releases.
 And of course, when 3.4 comes out, perform the test again...

 The test that you performed is very nice. I am sure that many like this.
 But I also like to see the full squid.conf. Just for transparency and
 maybe to suggest an optimisation tweak.

 Thanks
 Marcus



--
Regards,
-Ahmed Talha Khan


Re: [squid-users] Squid performance profiling

2013-06-21 Thread Ahmed Talha Khan
On Fri, Jun 21, 2013 at 10:41 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:
 On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:
 On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
 My test methodology looks like this

 generator(apache benchmark)---squid--server(lighttpd)
 ...
 These results show that squid is NOT CPU bound at this point. Neither
 is it Network IO bound because i can get much more throughput when I
 only run the generator with the server. In this case squid should be
 able to do more. Where is the bottleneck coming from?


 The bottleneck may be coming from your test methodology -- you are
 allowing Squid to slow down the benchmark instead of benchmark driving
 the Squid box to its limits. You appear to be using what we call a best
 effort test, where the request rate is determined by Squid response
 time. In most real-world environments concerned with performance, the
 request rate does not decrease just because a proxy wants to slow down a
 little.


 Then the question becomes why squid is slowing down?

 I think there are 2.5 primary reasons for that:

 1) Higher concurrency level (c in your tables) means more
 waiting/queuing time for each transaction: When [a part of] one
 transaction has to wait for [a part of] another before being served,
 transaction response time goes up. For example, the more network sockets
 are ready at the same time, the higher the response time is going to
 be for the transaction which socket happens to be the last one ready
 during that specific I/O loop iteration.


Are these queues maintained internally inside squid? What can be done
to reduce this?

 2a) Squid sometimes uses hard-coded limits for various internal caches
 and tables. With higher concurrency level, Squid starts hitting those
 limits and operating less efficiently (e.g., not keeping a connection
 persistent because the persistent connection table is full -- I do not
 remember whether this actually happens, so this is just an example of
 what could happen to illustrate 2a).

Can you point me to some of the key ones and their impact? So that I
can test by changing
these limits and seeing if it enhances/degrades the performance. Also,
any tweaks in
the network stack that might help with that. I am primarily interested
in enhancing the SSL performance.



 2b) Poor concurrency scale. Some Squid code becomes slower with more
 concurrent transactions flying around because that code has to iterate
 more structures while dealing with more collisions and such.


 Well, all that can be done on this front is to wait for
the changes to go in.

 There is nothing we can do about #1, but we can improve #2a and #2b
 (they are kind of related).


 best effort tests also
 give a good measure of what the proxy(server) can do without breaking
 it.

 Yes, but, in my experience, the vast majority of best-effort results are
 misinterpreted: It is very difficult to use a best-effort test
 correctly, and it is very easy to come to the wrong conclusions by
 looking at its results. YMMV.


 Do you see any wrong conclusion that I might have made in
interpreting these results?

 BTW, a persistent load test does not have to break the proxy. You only
 need to break the proxy if you want to find where its breaking point
 (and, hence, the bottleneck) is with respect to load (or other traffic
 parameters).



Sure

 Do you see any reason from the perf results/benchmarks
 why squid would not be utilizing all CPU and giving out more requests
 per second?

 In our tests, Squid does utilize virtually all CPU cycles (if we push it
 hard enough). It is just a matter of creating enough/appropriate load.


Why would it not do so in my test setup? It does use all CPU cores to
the fullest in the case of HTTPS, but not in the case of HTTP, as I
pointed out earlier.

 However, if you are asking whether Squid can be changed to run faster
 than it does today, then the answer is yes, of course. There is still a
 lot of inefficient, slow code in Squid.


 HTH,

 Alex.




--
Regards,
-Ahmed Talha Khan


Re: [squid-users] Squid performance profiling

2013-06-21 Thread Ahmed Talha Khan


 I understand that Amos is eager to get more tests and more results about
 the latest enhancements, but as Amos himself also stated earlier, please
 use a released version of Squid for testing since the test results for
 3.3.x or 3.4.x are interesting for admins of Squid who can consider
 upgrading,
 but test results for 3.HEAD are not useful for them since they are not
 likely
 to consider an upgrade to 3.HEAD.

 And if you have spare resources, it would be interesting to perform the
 same test for 3.3.5 and 3.2.11 to see the differences between releases.
 And of course, when 3.4 comes out, perform the test again...

 The test that you performed is very nice. I am sure that many will like this.
 But I also like to see the full squid.conf. Just for transparency and
 maybe to suggest an optimisation tweak.


Here is my squid.conf.
I am using squid as a normal forward proxy with SSL bump.

You will be able to see that I have not tampered with the
memory_pools or ssl_session settings much.

*SQUID.CONF**
cache_effective_user madmin

always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

# No need for caching
cache deny all

# Don't need logs for benchmarks
access_log none

# SMP scale
workers 8

# Turn off ICAP for the benchmarks
icap_enable off

#Only for benchmarks
http_access allow all

# Dynamic certificate generation
sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s /usr/local/squid-3.3/var/lib/ssl_db -M 4MB
sslcrtd_children 10


#PORTS
http_port 10.174.198.149:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/home/madmin/squid/ca.pem key=/home/madmin/squid/ca.pem
https_port 10.174.198.149:3129 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/home/madmin/squid/ca.pem key=/home/madmin/squid/ca.pem


**





 Thanks
 Marcus



--
Regards,
-Ahmed Talha Khan


Re: [squid-users] Squid performance profiling

2013-06-21 Thread Alex Rousskov
On 06/21/2013 04:34 AM, Ahmed Talha Khan wrote:
 Then the question becomes: why is squid slowing down?

 I think there are 2.5 primary reasons for that:

 1) Higher concurrency level (c in your tables) means more
 waiting/queuing time for each transaction: When [a part of] one
 transaction has to wait for [a part of] another before being served,
 transaction response time goes up. For example, the more network sockets
 are ready at the same time, the higher the response time is going to
 be for the transaction which socket happens to be the last one ready
 during that specific I/O loop iteration.

 Are these queues maintained internally inside squid? What can be done
 to reduce this?

Resource contention is a property of the workload. While optimizations
targeted at making Squid spend fewer resources (e.g., CPU cycles) per
transaction or adding more resources (e.g., CPU cores) effectively
reduce contention as a side effect, there is nothing you can do to
reduce it directly.


 2a) Squid sometimes uses hard-coded limits for various internal caches
 and tables. With higher concurrency level, Squid starts hitting those
 limits and operating less efficiently (e.g., not keeping a connection
 persistent because the persistent connection table is full -- I do not
 remember whether this actually happens, so this is just an example of
 what could happen to illustrate 2a).
 
 Can you point me to some of the key ones and their impact, so that I
 can test by changing these limits and seeing whether that enhances or
 degrades performance? Also, are there any tweaks in the network stack
 that might help with that? I am primarily interested in enhancing the
 SSL performance.

I doubt #2a is a bottleneck here, so if you want to optimize Squid, I
would recommend looking at other areas. Please note that I was simply
answering your "why Squid slows down" question, not outlining an
optimization plan.


I am not sure there is consensus among Squid developers regarding the
best optimization plan, but I can offer a few starting points:

* To enhance SSL performance, try hardware SSL accelerators. OpenSSL
supports some. Or are you already using them? Also, session caching may
be important. We will be posting a patch adding SSL session cache for
SMP Squid shortly. Sorry, I cannot answer most of your earlier questions
about session caching without additional research/checking, but I can
tell you that SMP session caching currently does not work at all (but
will work, at least for some forms of sessions, with the pending patch).

* To enhance overall Squid performance for small responses, I would
recommend focusing on reducing re-parsing and memory copying. The
StringNG project underpins many of the latter optimizations.

* To enhance overall Squid performance for large responses, especially
cached ones, I would focus on eliminating linear scans of in-memory pieces.

There are many other problematic areas and OS tweaks, of course. Some
are known and some are yet to be discovered. Their importance varies
depending on your workload and environment. If these changes, tweaks,
and optimizations were easy (or were backed by a lot of demand), we
would have done them already.


 2b) Poor concurrency scale. Some Squid code becomes slower with more
 concurrent transactions flying around because that code has to iterate
 more structures while dealing with more collisions and such.

 
  Well, all that can be done on this front is for me to wait for
 the changes to go in.

Waiting is one of at least three options you have:
http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


 Do you see any wrong conclusion that I might have made in
 interpreting these results?

I believe you shared test results and asked good questions about them
pretty much without interpreting those results. Please correct me if I
am wrong.


 Why would it not do so in my test setup? It does use all CPU cores to
 the fullest in the case of HTTPS, but not in the case of HTTP, as I
 pointed out earlier.

I have already speculated about that. In summary, your HTTP workload is
probably not giving Squid enough CPU work to do. HTTPS adds more CPU
work, of course. Needless to say, it is rather risky for me to criticize
a test based on a high-level description and a results table. I hope you
will forgive me if my guess ends up being wrong!


Thank you,

Alex.



Re: [squid-users] Squid performance profiling

2013-06-21 Thread Amos Jeffries

On 21/06/2013 10:34 p.m., Ahmed Talha Khan wrote:

On Fri, Jun 21, 2013 at 10:41 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:

On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:

On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:

On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:

My test methodology looks like this

generator(apache benchmark)---squid--server(lighttpd)

...

These results show that squid is NOT CPU bound at this point. Neither
is it Network IO bound because I can get much more throughput when I
only run the generator with the server. In this case squid should be
able to do more. Where is the bottleneck coming from?



The bottleneck may be coming from your test methodology -- you are
allowing Squid to slow down the benchmark instead of benchmark driving
the Squid box to its limits. You appear to be using what we call a best
effort test, where the request rate is determined by Squid response
time. In most real-world environments concerned with performance, the
request rate does not decrease just because a proxy wants to slow down a
little.



Then the question becomes: why is squid slowing down?

I think there are 2.5 primary reasons for that:

1) Higher concurrency level (c in your tables) means more
waiting/queuing time for each transaction: When [a part of] one
transaction has to wait for [a part of] another before being served,
transaction response time goes up. For example, the more network sockets
are ready at the same time, the higher the response time is going to
be for the transaction which socket happens to be the last one ready
during that specific I/O loop iteration.


Are these queues maintained internally inside squid? What can be done
to reduce this?


The queue is created in a single step by the kernel. It responds with a
set of FDs that have I/O events to be handled. Squid is then expected to
iterate over them and do the I/O.
Like Alex said, there is nothing that can be done about that queue
itself. Looping over it fast and scheduling multiple internal Calls at
once is tempting, but that just offloads the delay from the
select/poll/epoll/kqueue loop to the AsyncCall queue; the visible/total
delay remains constant (or possibly gets worse if they are double-queued).





2a) Squid sometimes uses hard-coded limits for various internal caches
and tables. With higher concurrency level, Squid starts hitting those
limits and operating less efficiently (e.g., not keeping a connection
persistent because the persistent connection table is full -- I do not
remember whether this actually happens, so this is just an example of
what could happen to illustrate 2a).

Can you point me to some of the key ones and their impact? So that I
can test by changing
these limits and seeing if it enhances/degrades the performance. Also,
any tweaks in
the network stack that might help with that. I am primarily interested
in enhancing the SSL performance.


Much of the lag in SSL is due to the handshake exchanges it requires.
A small number of bytes in each direction wastes entire packet
round-trip times just to set it up, followed by the processing
overhead of actually encrypting the bits.


The certificate generation process is a well-known slow one; there is
nothing that can be done there as it relies heavily on the random
number generator in the machine. SSL-bump with certificate generation
uses caching to avoid that to some extent - it would be worthwhile
testing how often (if at all) your benchmarks are held up waiting for
new certs to be created.
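
As an illustration, a minimal squid.conf sketch of leaning harder on those caches (the sizes and helper count are illustrative assumptions, not measured recommendations; the directives are the same ones already used in the configuration posted in this thread):

sslcrtd_program /usr/local/squid-3.3/libexec/ssl_crtd -s /usr/local/squid-3.3/var/lib/ssl_db -M 16MB
sslcrtd_children 32
http_port 10.174.198.149:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/home/madmin/squid/ca.pem key=/home/madmin/squid/ca.pem

A larger -M disk cache and dynamic_cert_mem_cache_size allow more generated certificates to be reused rather than re-created, and extra sslcrtd_children reduce queuing when many new hosts are bumped at once.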





2b) Poor concurrency scale. Some Squid code becomes slower with more
concurrent transactions flying around because that code has to iterate
more structures while dealing with more collisions and such.


  Well, all that can be done on this front is for me to wait for
the changes to go in.


There is nothing we can do about #1, but we can improve #2a and #2b
(they are kind of related).



best effort tests also
give a good measure of what the proxy(server) can do without breaking
it.

Yes, but, in my experience, the vast majority of best-effort results are
misinterpreted: It is very difficult to use a best-effort test
correctly, and it is very easy to come to the wrong conclusions by
looking at its results. YMMV.


  Do you see any wrong conclusion that I might have made in
interpreting these results?


BTW, a persistent load test does not have to break the proxy. You only
need to break the proxy if you want to find where its breaking point
(and, hence, the bottleneck) is with respect to load (or other traffic
parameters).



Sure


Do you see any reason from the perf results/benchmarks
why squid would not be utilizing all CPU and giving out more requests
per second?

In our tests, Squid does utilize virtually all CPU cycles (if we push it
hard enough). It is just a matter of creating enough/appropriate load.


Why would it not do so in my test setup? It does use all CPU cores to the
fullest in the case of HTTPS, but 

[squid-users] Squid performance profiling

2013-06-20 Thread Ahmed Talha Khan
Hello All,

I have been trying to benchmark the performance of squid for some time
now for plain HTTP and HTTPS traffic.

The key performance indicators that I am looking at are Requests Per
Second (RPS), Throughput (mbps) and Latency (ms).

My test methodology looks like this

generator(apache benchmark)---squid--server(lighttpd)


All 3 are running on separate VMs on AWS.
The specs for all the machines are:
8 vCPU @ 2.13 GHz
16 GB RAM
Squid using 8 SMP workers to utilize all cores

In all these tests I have made sure that the generator and server are
always more powerful than squid. For latency calculation, Time per
request is calculated with and without squid inline and the difference
between them is taken.

I am using a release 3.HEAD just prior to the release of 3.3.

I want to share the results with the community on the squid wikis. How
do I do that?

Some results from the tests are:

Server response size = 200 Bytes
New means keep-alive was turned off
Keep-Alive means keep-alive was used with 100 HTTP requests per connection
c = concurrent requests

                    HTTP                     HTTPS
               New   | Keep-Alive       New  | Keep-Alive

RPS
  c=50        6466   | 20227            1336 | 14461
  c=100       6392   | 21583            1303 | 14683
  c=200       5986   | 21462            1300 | 13967

Throughput (mbps)
  c=50          26   | 82.4              5.4 | 59
  c=100       25.8   | 88               5.25 | 60
  c=200         24   | 88                5.4 | 58

Latency (ms)
  c=50         7.5   | 2.7                36 | 3.75
  c=100       15.8   | 5.27               80 | 8
  c=200       26.5   | 11.3              168 | 18


With these results I profiled squid with the perf tool and got some
results that I could not understand, so my questions are related to
them.


For the HTTPS case, the CPU utilization peaks around 90% on all cores
and the perf profiler gives:

24.63%  squid  libc-2.15.so        [.] __memset_sse2

 6.13%  squid  libcrypto.so.1.0.0  [.] bn_sqr4x_mont

 4.98%  squid  [kernel.kallsyms]   [k] hypercall_page
        |
        --- hypercall_page
           |
           |--93.73%-- check_events


Why is so much time spent in one instruction by squid? And a memset
instruction at that! Any pointers?

Since in this case all CPU power is being used, it is understandable
that the performance cannot be improved here. The problem arises with
the HTTP case.

For the plain HTTP case, the CPU utilization is only around 50-60% on
all the cores and perf says:


 8.47%  squid  [kernel.kallsyms]   [k] hypercall_page
        --- hypercall_page
           |--94.78%-- check_events

 1.78%  squid  libc-2.15.so        [.] vfprintf
 1.62%  squid  [kernel.kallsyms]   [k] xen_spin_lock
 1.44%  squid  libc-2.15.so        [.] __memcpy_ssse3_back


These results show that squid is NOT CPU bound at this point. Neither
is it Network IO bound because I can get much more throughput when I
only run the generator with the server. In this case squid should be
able to do more. Where is the bottleneck coming from?

If anyone is interested in very detailed benchmarks, I can provide them.


--
Regards,
-Ahmed Talha Khan


Re: [squid-users] Squid performance profiling

2013-06-20 Thread Amos Jeffries

On 20/06/2013 8:00 p.m., Ahmed Talha Khan wrote:

Hello All,

I have been trying to benchmark the performance of squid for some time
now for plain HTTP and HTTPS traffic.

The key performance indicators that I am looking at are Requests Per
Second (RPS), Throughput (mbps) and Latency (ms).

My test methodology looks like this

generator(apache benchmark)---squid--server(lighttpd)


All 3 are running on separate VMs on AWS.
The specs for all the machines are:
8 vCPU @ 2.13 GHz
16 GB RAM
Squid using 8 SMP workers to utilize all cores


Using 8 workers is probably not a good idea. The recommended practice is
to use one core per worker and leave at least one spare core for the
kernel's usage. Squid does pass a fair chunk of work to the kernel for
I/O, while each worker will completely max out as many CPU cycles as it
can grab from its own core. If there is no core retained for kernel
usage, those two properties will result in CPU contention slowdown as
Squid and the kernel fight for cycles.
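
For example, a minimal squid.conf sketch of that advice for this 8-core box (a sketch only; cpu_affinity_map is a standard squid.conf directive, but verify the core numbering on your own hardware before copying):

workers 7
cpu_affinity_map process_numbers=1,2,3,4,5,6,7 cores=2,3,4,5,6,7,8

This runs seven workers pinned to cores 2-8 and leaves one core free for the kernel's I/O work.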




In all these tests I have made sure that the generator and server are
always more powerful than squid. For latency calculation, Time per
request is calculated with and without squid inline and the difference
between them is taken.

I am using a release 3.HEAD just prior to the release of 3.3.


Then please upgrade to the 3.3 stable release or a current 3.HEAD. A few
memory leaks and issues have been resolved since 3.3 was released, and
those fixes are in the current stable. There are also additional
performance improvements in the current 3.HEAD which will be in 3.4 when
it branches.




I want to share the results with the community on the squid wikis. How
to do that?


We are collecting some ad-hoc benchmark details for Squid releases at
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks. So far this is not
exactly rigorous testing, although following the methodology for
stats collection (as outlined in the intro section) retains consistency
and improves comparability between submissions.


Since you are using a different methodology, please feel free to write
up a new article on it. The details you just posted look like a good
start. We can offer a wiki or static web page, or a reference from our
benchmarking page to a blog publication of your own.


If you are intending to publish the results, I highly recommend that
you settle on a packaged and numbered version of Squid so others can
replicate the tests or do additional comparative testing on the same
code. 3.HEAD is a rolling release for which it is relatively difficult
to locate the exact sources of any given revision, whereas the numbered
packages can be referenced from our permanent archives in your description.




Some results from the tests are:

Server response size = 200 Bytes
New means keep-alive was turned off
Keep-Alive means keep-alive was used with 100 HTTP requests per connection
c = concurrent requests

                    HTTP                     HTTPS
               New   | Keep-Alive       New  | Keep-Alive

RPS
  c=50        6466   | 20227            1336 | 14461
  c=100       6392   | 21583            1303 | 14683
  c=200       5986   | 21462            1300 | 13967

Throughput (mbps)
  c=50          26   | 82.4              5.4 | 59
  c=100       25.8   | 88               5.25 | 60
  c=200         24   | 88                5.4 | 58

Latency (ms)
  c=50         7.5   | 2.7                36 | 3.75
  c=100       15.8   | 5.27               80 | 8
  c=200       26.5   | 11.3              168 | 18


With these results I profiled squid with the perf tool and got some
results that I could not understand, so my questions are related to
them.


Thank you. Some very nice numbers. I hope they give a clue to anyone 
still thinking persistent connections need to be disabled to improve 
performance.



For the HTTPS case, the CPU utilization peaks around 90% on all cores
and the perf profiler gives:

24.63%  squid  libc-2.15.so        [.] __memset_sse2

 6.13%  squid  libcrypto.so.1.0.0  [.] bn_sqr4x_mont

 4.98%  squid  [kernel.kallsyms]   [k] hypercall_page
        |
        --- hypercall_page
           |
           |--93.73%-- check_events


Why is so much time spent in one instruction by squid? And a memset
instruction at that! Any pointers?


Squid was originally written in C and still has a lot of memset() calls 

Re: [squid-users] Squid performance profiling

2013-06-20 Thread Marcus Kool



On 06/20/2013 06:51 AM, Amos Jeffries wrote:




If anyone is interested in very detailed benchmarks, I can provide them.


Yes please :-)

PS. could you CC the squid-dev mailing list as well with the details? The more
developer eyes we can get on this data the better. Although please do test a
current release first; we have significantly changed the ACL handling,
which was one bottleneck in Squid, and have altered the mempools' use of
memset() in several locations in the latest 3.HEAD code.

Amos


I understand that Amos is eager to get more tests and more results about
the latest enhancements, but as Amos himself also stated earlier, please
use a released version of Squid for testing since the test results for
3.3.x or 3.4.x are interesting for admins of Squid who can consider upgrading,
but test results for 3.HEAD are not useful for them since they are not likely
to consider an upgrade to 3.HEAD.

And if you have spare resources, it would be interesting to perform the
same test for 3.3.5 and 3.2.11 to see the differences between releases.
And of course, when 3.4 comes out, perform the test again...

The test that you performed is very nice. I am sure that many will like this.
But I also like to see the full squid.conf. Just for transparency and
maybe to suggest an optimisation tweak.

Thanks
Marcus


Re: [squid-users] Squid performance profiling

2013-06-20 Thread Alex Rousskov
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:

 My test methodology looks like this
 
 generator(apache benchmark)---squid--server(lighttpd)
...
 These results show that squid is NOT CPU bound at this point. Neither
 is it Network IO bound because I can get much more throughput when I
 only run the generator with the server. In this case squid should be
 able to do more. Where is the bottleneck coming from?

The bottleneck may be coming from your test methodology -- you are
allowing Squid to slow down the benchmark instead of benchmark driving
the Squid box to its limits. You appear to be using what we call a best
effort test, where the request rate is determined by Squid response
time. In most real-world environments concerned with performance, the
request rate does not decrease just because a proxy wants to slow down a
little.

When we want to find the bottleneck, we often tell Web Polygraph to
increase proxy load until things start to break. In this persistent
load mode, Polygraph does not allow the proxy to determine the request
rate. It keeps pounding the proxy [at the configured rate], just like
real users would. I do not know whether ab can do it, but I would not be
surprised if it can. <plug>Still, I would recommend that you use a
benchmarking tool designed to test proxies rather than origin servers
:-).</plug>


Cheers,

Alex.



Re: [squid-users] Squid performance profiling

2013-06-20 Thread Ahmed Talha Khan
On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:

 My test methodology looks like this

 generator(apache benchmark)---squid--server(lighttpd)
 ...
 These results show that squid is NOT CPU bound at this point. Neither
 is it Network IO bound because I can get much more throughput when I
 only run the generator with the server. In this case squid should be
 able to do more. Where is the bottleneck coming from?

 The bottleneck may be coming from your test methodology -- you are
 allowing Squid to slow down the benchmark instead of benchmark driving
 the Squid box to its limits. You appear to be using what we call a best
 effort test, where the request rate is determined by Squid response
 time. In most real-world environments concerned with performance, the
 request rate does not decrease just because a proxy wants to slow down a
 little.

Then the question becomes: why is squid slowing down? It is not CPU
bound, nor is it network bound. What can cause squid not to utilize all
the resources that it can? I agree with your point of view that
real-world environments are more like webpolygraph (or spirent,
avalanche for that matter), but best-effort tests also give a good
measure of what the proxy (server) can do without breaking it. Do you
see any reason from the perf results/benchmarks why squid would not be
utilizing all CPU and giving out more requests per second?


 When we want to find the bottleneck, we often tell Web Polygraph to
 increase proxy load until things start to break. In this persistent
 load mode, Polygraph does not allow the proxy to determine the request
 rate. It keeps pounding the proxy [at the configured rate], just like
 real users would. I do not know whether ab can do it, but I would not be
 surprised if it can. plugStill, I would recommend that you use a
 benchmarking tool designed to test proxies rather than origin servers
 :-)./plug

I plan to test it with webpolygraph as well when I get the time and
resources, but for now I want to make sense of the lower numbers. BTW,
ab cannot do this constant-request-rate model of testing.


 Cheers,

 Alex.




--
Regards,
-Ahmed Talha Khan


Re: [squid-users] Squid performance profiling

2013-06-20 Thread Alex Rousskov
On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:
 On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:
 On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
 My test methodology looks like this

 generator(apache benchmark)---squid--server(lighttpd)
 ...
 These results show that squid is NOT CPU bound at this point. Neither
 is it Network IO bound because I can get much more throughput when I
 only run the generator with the server. In this case squid should be
 able to do more. Where is the bottleneck coming from?


 The bottleneck may be coming from your test methodology -- you are
 allowing Squid to slow down the benchmark instead of benchmark driving
 the Squid box to its limits. You appear to be using what we call a best
 effort test, where the request rate is determined by Squid response
 time. In most real-world environments concerned with performance, the
 request rate does not decrease just because a proxy wants to slow down a
 little.


 Then the question becomes: why is squid slowing down?

I think there are 2.5 primary reasons for that:

1) Higher concurrency level (c in your tables) means more
waiting/queuing time for each transaction: When [a part of] one
transaction has to wait for [a part of] another before being served,
transaction response time goes up. For example, the more network sockets
are ready at the same time, the higher the response time is going to
be for the transaction which socket happens to be the last one ready
during that specific I/O loop iteration.

2a) Squid sometimes uses hard-coded limits for various internal caches
and tables. With higher concurrency level, Squid starts hitting those
limits and operating less efficiently (e.g., not keeping a connection
persistent because the persistent connection table is full -- I do not
remember whether this actually happens, so this is just an example of
what could happen to illustrate 2a).

2b) Poor concurrency scale. Some Squid code becomes slower with more
concurrent transactions flying around because that code has to iterate
more structures while dealing with more collisions and such.

There is nothing we can do about #1, but we can improve #2a and #2b
(they are kind of related).


 best effort tests also
 give a good measure of what the proxy(server) can do without breaking
 it.

Yes, but, in my experience, the vast majority of best-effort results are
misinterpreted: It is very difficult to use a best-effort test
correctly, and it is very easy to come to the wrong conclusions by
looking at its results. YMMV.

BTW, a persistent load test does not have to break the proxy. You only
need to break the proxy if you want to find where its breaking point
(and, hence, the bottleneck) is with respect to load (or other traffic
parameters).


 Do you see any reason from the perf results/benchmarks
 why squid would not be utilizing all CPU and giving out more requests
 per second?

In our tests, Squid does utilize virtually all CPU cycles (if we push it
hard enough). It is just a matter of creating enough/appropriate load.

However, if you are asking whether Squid can be changed to run faster
than it does today, then the answer is yes, of course. There is still a
lot of inefficient, slow code in Squid.


HTH,

Alex.



Re: [squid-users] Squid performance with high load

2013-03-31 Thread Hasanen AL-Bana
On Sun, Mar 31, 2013 at 3:20 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 31/03/2013 9:07 a.m., Hasanen AL-Bana wrote:

 The above config for cache_dirs is not working properly.


 You are top-posting.
 . Why?
 .. There is no above config.
Sorry, it is the new Gmail composer...



 I can see the aufs dir growing rapidly while the Rock directory has
 been created but it is empty!


 ---
 Store Directory Statistics:
 Store Entries  : 1166040
 Maximum Swap Size  : 174080 KB
 Current Store Swap Size: 85456552.00 KB
 Current Capacity   : 4.91% used, 95.09% free

 Store Directory #0 (rock): /mnt/ssd/cache/
 FS Block Size 1024 Bytes

 Maximum Size: 30720 KB
 Current Size: 760592.00 KB 0.25%
 Maximum entries:   239
 Current entries:  5942 0.25%
 Pending operations: 137 out of 0
 Flags:

 Store Directory #1 (aufs): /mnt/sas1/cache/store1
 FS Block Size 4096 Bytes
 First level subdirectories: 32
 Second level subdirectories: 512
 Maximum Size: 143360 KB
 Current Size: 84695960.00 KB
 Percent Used: 5.91%
 Filemap bits in use: 1159378 of 2097152 (55%)
 Filesystem Space in use: 121538556/-1957361748 KB (-5%)
 Filesystem Inodes in use: 1176103/146243584 (1%)
 Flags:
 Removal policy: lru
 LRU reference age: 0.17 days

 On Sat, Mar 30, 2013 at 5:10 PM, Hasanen AL-Bana hasa...@gmail.com
 wrote:

 Thank you Amos for clarifying these issues.
 I will skip SMP and use a single worker since Rock limits my max object
 size to 32KB when used in shared environments.
 My new cache_dir configuration looks like this now :

 cache_dir rock /mnt/ssd/cache/ 30 max-size=131072
 cache_dir aufs /mnt/sas1/cache/store1  140 32 512


 NP: Rock is a 'slot'-based database format and does not support objects
 larger than 32KB, unless you are using the experimental large-rock code.
 max-size will be capped down to max-size=32767. You should have seen a
 warning about that when starting or reconfiguring Squid.

 To prevent the AUFS dir filling with small objects that can best be served
 from Rock, you will also need a min-size= parameter on the AUFS. Otherwise
 Squid will base selection on capacity loading and will determine that the
 1.4TB dir has more free space than the 300GB Rock one.
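
A minimal sketch of that pairing, assuming the 300GB rock and 1.4TB AUFS sizes discussed here (the exact numbers are illustrative):

cache_dir rock /mnt/ssd/cache 300000 max-size=32767
cache_dir aufs /mnt/sas1/cache/store1 1400000 32 512 min-size=32768

With the boundary set one byte apart, every object belongs to exactly one of the two stores, regardless of free-space-based selection.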


OK, according to the wiki at
http://wiki.squid-cache.org/Features/RockStore, the max size is
limited to 32KB if I work in a shared environment. I have only one
worker now and my rock max-size is set to 131072.
To make it work, I had to change my cache_dir selection algorithm to
round-robin.
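
For reference, the directive involved is the same one that is set to least-load in the configuration earlier in this thread:

store_dir_select_algorithm round-robin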



 I have enabled store.log to be used with some other software
 collecting data from it

 my disks are now mounted with

 noatime,barrier=0,journal_async_commit,noauto_da_alloc,nobh,data=writeback,commit=10

 I will keep the list posted with my results.
 Thanks.


 Amos


Re: [squid-users] Squid performance with high load

2013-03-30 Thread Amos Jeffries

On 30/03/2013 6:33 a.m., Hasanen AL-Bana wrote:

Hi,

I am running squid 3.2 with an average of 50k req/min. Total received
bandwidth is around 200mbit/s.
I have a problem when my aufs cache_dirs reach a size above 600GB.
Traffic starts dropping and going up again, happening every 20~30 minutes.
I have more than enough RAM in the system (125GB DDR3!), all disks
are SAS 15k rpm, and one of them is an SSD (450GB).
So hardware should not cause any problem and I should easily spawn
multiple squid workers at any time.
So what could cause such problems ?


#1 - AUFS is *not* an SMP-aware component in Squid.

Each of the two workers you are using will be altering the on-disk 
portion of the cache without updating the in-memory index. When the 
other worker encounters these over-written files it will erase them.


For now you are required to use the macro hacks to split the cache_dir
lines between the workers.
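
A minimal sketch of that hack, assuming two workers (the path and size are illustrative):

workers 2
cache_dir aufs /mnt/sas1/cache/worker${process_number} 500000 32 512

Each worker kid expands ${process_number} to its own number, so the workers index and write disjoint directories instead of corrupting a shared one.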



#2 - you have multiple cache_dir per disk. Or so it seems from your 
configuration. Is that correct?


 * squid load balances between cache_dirs, treating them as separate
physical HDDs when it comes to loading calculations.


* the memory requirement for indexing these 1.6 TB of disk space is
~24GB per worker plus the 20GB of shared memory cache == 68GB of RAM.




I advise you to inform Squid about the real disk topology it is working
with. Use no more than one AUFS cache_dir per physical disk and allocate
the cache_dir size to ~80-90% of the full available disk space, with each
worker assigned to use a sub-set of the AUFS cache_dirs and associated disks.


For best performance and object de-duplication you should consider using
a rock cache_dir for small objects. They *are* shared between workers, and
can also share disk space with an AUFS disk.
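
Putting those recommendations together, a sketch of the suggested layout for this box (all sizes are illustrative assumptions of roughly 80-90% of each disk):

cache_dir rock /mnt/ssd/rock 380000 max-size=32767
cache_dir aufs /mnt/sas1/cache 500000 32 512 min-size=32768
cache_dir aufs /mnt/sas2/cache 500000 32 512 min-size=32768

One cache_dir per physical disk keeps Squid's load calculations aligned with the real hardware, while the rock dir absorbs the small objects and is shared between workers.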




Thank you.


include /etc/squid3/refresh.conf


cache_mem 20 GB

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT


http_access allow localhost manager
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost

http_access allow all


Sure, why bother having security at all?


  maximum_object_size_in_memory 512 KB
  memory_cache_mode always
  memory_replacement_policy heap GDSF
  cache_replacement_policy heap LFUDA
  store_dir_select_algorithm least-load
  max_open_disk_fds 0
  maximum_object_size 200 MB

  cache_swap_high 98
  cache_swap_low  97


# access_log stdio:/var/log/squid3/access.log
  access_log none
  cache_log /var/log/squid3/cache.log
  cache_store_log stdio:/var/log/squid3/store.log


Usually the log you want to use is access_log, with the store debug log 
(cache_store_log) disabled.



  workers 2

cache_dir aufs /mnt/ssd/cache/store1 5 32 512
cache_dir aufs /mnt/ssd/cache/store2 5 32 512
cache_dir aufs /mnt/ssd/cache/store3 5 32 512
cache_dir aufs /mnt/ssd/cache/store4 5 32 512
cache_dir aufs /mnt/ssd/cache/store5 5 32 512
cache_dir aufs /mnt/ssd/cache/store6 5 32 512
cache_dir aufs /mnt/ssd/cache/store7 5 32 512


cache_dir aufs /mnt/sas1/cache/store1  5 32 512
cache_dir aufs /mnt/sas1/cache/store2  5 32 512
cache_dir aufs /mnt/sas1/cache/store3  5 32 512
cache_dir aufs /mnt/sas1/cache/store4  5 32 512
cache_dir aufs /mnt/sas1/cache/store5  5 32 512
cache_dir aufs /mnt/sas1/cache/store6  5 32 512
cache_dir aufs /mnt/sas1/cache/store7  5 32 512
cache_dir aufs /mnt/sas1/cache/store8  5 32 512
cache_dir aufs /mnt/sas1/cache/store9  5 32 512
cache_dir aufs /mnt/sas1/cache/store10  5 32 512
cache_dir aufs /mnt/sas1/cache/store11  5 32 512
cache_dir aufs /mnt/sas1/cache/store12  5 32 512


cache_dir aufs /mnt/sas2/cache/store1  5 32 512
cache_dir aufs /mnt/sas2/cache/store2  5 32 512
cache_dir aufs /mnt/sas2/cache/store3  5 32 512
cache_dir aufs /mnt/sas2/cache/store4  5 32 512
cache_dir aufs /mnt/sas2/cache/store5  5 32 512
cache_dir aufs /mnt/sas2/cache/store6  5 32 512
cache_dir aufs /mnt/sas2/cache/store7  5 32 512
cache_dir aufs /mnt/sas2/cache/store8  5 32 512
cache_dir aufs /mnt/sas2/cache/store9  

Re: [squid-users] Squid performance with high load

2013-03-30 Thread Hasanen AL-Bana
Thank you Amos for clarifying these issues.
I will skip SMP and use a single worker since Rock limits my max object
size to 32KB when used in shared environments.
My new cache_dir configuration looks like this now :

cache_dir rock /mnt/ssd/cache/ 30 max-size=131072
cache_dir aufs /mnt/sas1/cache/store1  140 32 512

I have enabled store.log to be used with some other software
collecting data from it

my disks are now mounted with
noatime,barrier=0,journal_async_commit,noauto_da_alloc,nobh,data=writeback,commit=10

I will keep the list posted with my results.
Thanks.

On Sat, Mar 30, 2013 at 12:57 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/03/2013 6:33 a.m., Hasanen AL-Bana wrote:

 Hi,

 I am running squid 3.2 with an average of 50k req/min. Total received
 bandwidth is around 200mbit/s.
 I have a problem when my aufs cache_dirs reach a size above 600GB.
 Traffic starts dropping and going up again, happening every 20~30
 minutes.
 I have more than enough RAM in the system (125GB DDR3!), all disks
 are SAS 15k rpm, and one of them is an SSD (450GB).
 So hardware should not cause any problem and I should easily spawn
 multiple squid workers at any time.
 So what could cause such problems ?


 #1 - AUFS is *not* an SMP-aware component in Squid.

 Each of the two workers you are using will be altering the on-disk portion
 of the cache without updating the in-memory index. When the other worker
 encounters these over-written files it will erase them.

 For now you are required to use the macro hacks to split the cache_dir
 lines between the workers.


 #2 - you have multiple cache_dir per disk. Or so it seems from your
 configuration. Is that correct?

  * squid load balances between cache_dirs, treating them as separate
 physical HDDs when it comes to loading calculations.

 * the memory requirement for indexing these 1.6 TB of disk space is ~24GB
 per worker plus the 20GB of shared memory cache == 68GB of RAM.



 I advise you to inform Squid about the real disk topology it is working
 with. Use no more than one AUFS cache_dir per physical disk and allocate the
 cache_dir size to ~80-90% of the full available disk space, with each worker
 assigned to use a sub-set of the AUFS cache_dirs and associated disks.

 For best performance and object de-duplication you should consider using
 a rock cache_dir for small objects. They *are* shared between workers, and can
 also share disk space with an AUFS disk.



 Thank you.


 
 include /etc/squid3/refresh.conf


 cache_mem 20 GB

 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl localnet src fc00::/7   # RFC 4193 local private network range
 acl localnet src fe80::/10  # RFC 4291 link-local (directly
 plugged) machines

 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT


 http_access allow localhost manager
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 http_access allow localnet
 http_access allow localhost

 http_access allow all


 Sure, why bother having security at all?


   maximum_object_size_in_memory 512 KB
   memory_cache_mode always
   memory_replacement_policy heap GDSF
   cache_replacement_policy heap LFUDA
   store_dir_select_algorithm least-load
   max_open_disk_fds 0
   maximum_object_size 200 MB

   cache_swap_high 98
   cache_swap_low  97


 # access_log stdio:/var/log/squid3/access.log
   access_log none
   cache_log /var/log/squid3/cache.log
   cache_store_log stdio:/var/log/squid3/store.log


 Usually the log you want to use is access_log, with the store debug log
 (cache_store_log) disabled.


   workers 2

 cache_dir aufs /mnt/ssd/cache/store1 5 32 512
 cache_dir aufs /mnt/ssd/cache/store2 5 32 512
 cache_dir aufs /mnt/ssd/cache/store3 5 32 512
 cache_dir aufs /mnt/ssd/cache/store4 5 32 512
 cache_dir aufs /mnt/ssd/cache/store5 5 32 512
 cache_dir aufs /mnt/ssd/cache/store6 5 32 512
 cache_dir aufs /mnt/ssd/cache/store7 5 32 512


 cache_dir aufs /mnt/sas1/cache/store1  5 32 512
 cache_dir aufs /mnt/sas1/cache/store2  5 32 512
 cache_dir aufs /mnt/sas1/cache/store3  5 32 512
 cache_dir aufs /mnt/sas1/cache/store4  5 32 512
 cache_dir aufs /mnt/sas1/cache/store5  5 32 512
 cache_dir aufs /mnt/sas1/cache/store6  5 32 512
 cache_dir aufs /mnt/sas1/cache/store7  5 32 

Re: [squid-users] Squid performance with high load

2013-03-30 Thread Hasanen AL-Bana
The above config for cache_dirs is not working properly.
I can see the aufs dir growing rapidly while the Rock directory has
been created but it is empty!

---
Store Directory Statistics:
Store Entries  : 1166040
Maximum Swap Size  : 174080 KB
Current Store Swap Size: 85456552.00 KB
Current Capacity   : 4.91% used, 95.09% free

Store Directory #0 (rock): /mnt/ssd/cache/
FS Block Size 1024 Bytes

Maximum Size: 30720 KB
Current Size: 760592.00 KB 0.25%
Maximum entries:   239
Current entries:  5942 0.25%
Pending operations: 137 out of 0
Flags:

Store Directory #1 (aufs): /mnt/sas1/cache/store1
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 512
Maximum Size: 143360 KB
Current Size: 84695960.00 KB
Percent Used: 5.91%
Filemap bits in use: 1159378 of 2097152 (55%)
Filesystem Space in use: 121538556/-1957361748 KB (-5%)
Filesystem Inodes in use: 1176103/146243584 (1%)
Flags:
Removal policy: lru
LRU reference age: 0.17 days

On Sat, Mar 30, 2013 at 5:10 PM, Hasanen AL-Bana hasa...@gmail.com wrote:
 Thank you Amos for clarifying these issues.
 I will skip SMP and use a single worker since Rock limits my max object
 size to 32KB when used in shared environments.
 My new cache_dir configuration looks like this now :

 cache_dir rock /mnt/ssd/cache/ 30 max-size=131072
 cache_dir aufs /mnt/sas1/cache/store1  140 32 512

 I have enabled store.log to be used with some other software
 collecting data from it

 my disks are now mounted with
 noatime,barrier=0,journal_async_commit,noauto_da_alloc,nobh,data=writeback,commit=10

 I will keep the list posted with my results.
 Thanks.

 On Sat, Mar 30, 2013 at 12:57 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/03/2013 6:33 a.m., Hasanen AL-Bana wrote:

 Hi,

 I am running squid 3.2 with an average of 50k req/min. Total received
 bandwidth is around 200mbit/s.
 I have a problem when my aufs cache_dirs reach a size above 600GB.
 Traffic starts dropping and going up again, happening every 20~30
 minutes.
 I have more than enough RAM in the system (125GB DDR3!), all disks
 are SAS 15k rpm, and one of them is an SSD (450GB).
 So hardware should not cause any problem and I should easily spawn
 multiple squid workers at any time.
 So what could cause such problems ?


 #1 - AUFS is *not* an SMP-aware component in Squid.

 Each of the two workers you are using will be altering the on-disk portion
 of the cache without updating the in-memory index. When the other worker
 encounters these over-written files it will erase them.

 For now you are required to use the macro hacks to split the cache_dir
 lines between the workers.


 #2 - you have multiple cache_dir per disk. Or so it seems from your
 configuration. Is that correct?

  * squid load balances between cache_dirs, treating them as separate
 physical HDDs when it comes to loading calculations.

 * the memory requirement for indexing these 1.6 TB of disk space is ~24GB
 per worker plus the 20GB of shared memory cache == 68GB of RAM.



 I advise you to inform Squid about the real disk topology it is working
 with. Use no more than one AUFS cache_dir per physical disk and allocate the
 cache_dir size to ~80-90% of the full available disk space, with each worker
 assigned to use a sub-set of the AUFS cache_dirs and associated disks.

 For best performance and object de-duplication you should consider using
 a rock cache_dir for small objects. They *are* shared between workers, and can
 also share disk space with an AUFS disk.



 Thank you.


 
 include /etc/squid3/refresh.conf


 cache_mem 20 GB

 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl localnet src fc00::/7   # RFC 4193 local private network range
 acl localnet src fe80::/10  # RFC 4291 link-local (directly
 plugged) machines

 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT


 http_access allow localhost manager
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 http_access allow localnet
 http_access allow localhost

 http_access allow all


 Sure, why bother having security at all?


   maximum_object_size_in_memory 512 KB
   memory_cache_mode always
   memory_replacement_policy heap GDSF
   

Re: [squid-users] Squid performance with high load

2013-03-30 Thread Amos Jeffries

On 31/03/2013 9:07 a.m., Hasanen AL-Bana wrote:

The above config for cache_dirs is not working properly.


You are top-posting.
. Why?
.. There is no above config.


I can see the aufs dir growing rapidly while the Rock directory has
been created but it is empty!

---
Store Directory Statistics:
Store Entries  : 1166040
Maximum Swap Size  : 174080 KB
Current Store Swap Size: 85456552.00 KB
Current Capacity   : 4.91% used, 95.09% free

Store Directory #0 (rock): /mnt/ssd/cache/
FS Block Size 1024 Bytes

Maximum Size: 30720 KB
Current Size: 760592.00 KB 0.25%
Maximum entries:   239
Current entries:  5942 0.25%
Pending operations: 137 out of 0
Flags:

Store Directory #1 (aufs): /mnt/sas1/cache/store1
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 512
Maximum Size: 143360 KB
Current Size: 84695960.00 KB
Percent Used: 5.91%
Filemap bits in use: 1159378 of 2097152 (55%)
Filesystem Space in use: 121538556/-1957361748 KB (-5%)
Filesystem Inodes in use: 1176103/146243584 (1%)
Flags:
Removal policy: lru
LRU reference age: 0.17 days

On Sat, Mar 30, 2013 at 5:10 PM, Hasanen AL-Bana hasa...@gmail.com wrote:

Thank you Amos for clarifying these issues.
I will skip SMP and use a single worker since Rock limits my max object
size to 32KB when used in shared environments.
My new cache_dir configuration looks like this now :

cache_dir rock /mnt/ssd/cache/ 30 max-size=131072
cache_dir aufs /mnt/sas1/cache/store1  140 32 512


NP: Rock is a 'slot'-based database format and does not support objects 
larger than 32KB, unless you are using the experimental large-rock code. 
max-size will be capped down to max-size=32767. You should have seen a 
warning about that when starting or reconfiguring Squid.


To prevent the AUFS dir filling with small objects that can best be
served from Rock, you will also need a min-size= parameter on the AUFS.
Otherwise Squid will base selection on capacity loading and will
determine that the 1.4TB dir has more free space than the 300GB Rock one.




I have enabled store.log to be used with some other software
collecting data from it

my disks are now mounted with
noatime,barrier=0,journal_async_commit,noauto_da_alloc,nobh,data=writeback,commit=10

I will keep the list posted with my results.
Thanks.


Amos


[squid-users] Squid performance with high load

2013-03-29 Thread Hasanen AL-Bana
Hi,

I am running squid 3.2 with an average of 50k req/min. Total received
bandwidth is around 200mbit/s.
I have a problem when my aufs cache_dirs reach a size above 600GB.
Traffic starts dropping and going up again, happening every 20~30 minutes.
I have more than enough RAM in the system (125GB DDR3!), all disks
are SAS 15k rpm, and one of them is an SSD (450GB).
So hardware should not cause any problem and I should easily spawn
multiple squid workers at any time.
So what could cause such problems ?

Thank you.


include /etc/squid3/refresh.conf


cache_mem 20 GB

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT


http_access allow localhost manager
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost

http_access allow all

coredump_dir /usr/local/squid/var/cache/squid


# SNONO-SYSTEMS CONFIGURATION

 http_port 3128
 http_port 3129 intercept
 snmp_port 3401
 snmp_access allow localhost

 qos_flows tos
 qos_flows local-hit=0x30

 maximum_object_size_in_memory 512 KB
 memory_cache_mode always
 memory_replacement_policy heap GDSF
 cache_replacement_policy heap LFUDA
 store_dir_select_algorithm least-load
 max_open_disk_fds 0
 maximum_object_size 200 MB

 cache_swap_high 98
 cache_swap_low  97


# access_log stdio:/var/log/squid3/access.log
 access_log none
 cache_log /var/log/squid3/cache.log
 cache_store_log stdio:/var/log/squid3/store.log
 logfile_rotate 1
 minimum_expiry_time 60 seconds
 request_header_max_size 64 KB
 reply_header_max_size 64 KB
 request_body_max_size 0 KB
 client_request_buffer_max_size 128 KB
 cache_effective_user proxy
 cache_effective_group proxy
 visible_hostname snono-systems
 fqdncache_size 8096
 pipeline_prefetch on
 max_filedescriptors 5

 workers 2

cache_dir aufs /mnt/ssd/cache/store1 5 32 512
cache_dir aufs /mnt/ssd/cache/store2 5 32 512
cache_dir aufs /mnt/ssd/cache/store3 5 32 512
cache_dir aufs /mnt/ssd/cache/store4 5 32 512
cache_dir aufs /mnt/ssd/cache/store5 5 32 512
cache_dir aufs /mnt/ssd/cache/store6 5 32 512
cache_dir aufs /mnt/ssd/cache/store7 5 32 512


cache_dir aufs /mnt/sas1/cache/store1  5 32 512
cache_dir aufs /mnt/sas1/cache/store2  5 32 512
cache_dir aufs /mnt/sas1/cache/store3  5 32 512
cache_dir aufs /mnt/sas1/cache/store4  5 32 512
cache_dir aufs /mnt/sas1/cache/store5  5 32 512
cache_dir aufs /mnt/sas1/cache/store6  5 32 512
cache_dir aufs /mnt/sas1/cache/store7  5 32 512
cache_dir aufs /mnt/sas1/cache/store8  5 32 512
cache_dir aufs /mnt/sas1/cache/store9  5 32 512
cache_dir aufs /mnt/sas1/cache/store10  5 32 512
cache_dir aufs /mnt/sas1/cache/store11  5 32 512
cache_dir aufs /mnt/sas1/cache/store12  5 32 512


cache_dir aufs /mnt/sas2/cache/store1  5 32 512
cache_dir aufs /mnt/sas2/cache/store2  5 32 512
cache_dir aufs /mnt/sas2/cache/store3  5 32 512
cache_dir aufs /mnt/sas2/cache/store4  5 32 512
cache_dir aufs /mnt/sas2/cache/store5  5 32 512
cache_dir aufs /mnt/sas2/cache/store6  5 32 512
cache_dir aufs /mnt/sas2/cache/store7  5 32 512
cache_dir aufs /mnt/sas2/cache/store8  5 32 512
cache_dir aufs /mnt/sas2/cache/store9  5 32 512
cache_dir aufs /mnt/sas2/cache/store10  5 32 512
cache_dir aufs /mnt/sas2/cache/store11  5 32 512
cache_dir aufs /mnt/sas2/cache/store12  5 32 512

===

and my refresh patterns are:

#general
refresh_pattern \.(jp(e?g|e|2)|tiff?|bmp|gif|png) 12560 99% 30240 ignore-no-cache ignore-no-store override-expire override-lastmod ignore-private
refresh_pattern \.(z(ip|[0-9]{2})|r(ar|[0-9]{2})|jar|bz2|gz|tar|rpm|vpu) 12560 99% 30240 ignore-no-cache ignore-no-store override-expire override-lastmod ignore-private
refresh_pattern \.(mp3|wav|og(g|a)|flac|midi?|rm|aac|wma|mka|ape) 12560 99% 30240 ignore-no-cache ignore-no-store override-expire override-lastmod ignore-private
refresh_pattern 

[squid-users] squid performance is low because only 1 cpu is being used !!!!

2013-03-05 Thread Ahmad
Hi,
I have CentOS 64-bit with kernel 3.7.5 compiled with tproxy features.
I noted that in rush hour, squid + squidGuard is bypassing.
I noted that squid is using only 1 CPU.
Here is an output sample:
===
[root@squid squid-3.3.1]# mpstat -u
Linux 3.7.5 (squid)  03/05/2013  _x86_64_  (24 CPU)

09:10:10 AM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
09:10:10 AM  all   5.10   0.00  1.70    14.09  0.00   0.44    0.00    0.00  78.67

[root@squid squid-3.3.1]# mpstat -P ALL
Linux 3.7.5 (squid)  03/05/2013  _x86_64_  (24 CPU)

09:10:17 AM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
09:10:17 AM  all   5.10   0.00  1.70    14.09  0.00   0.44    0.00    0.00  78.67
09:10:17 AM    0   8.63   0.00  2.67    30.17  0.00   0.15    0.00    0.00  58.37
09:10:17 AM    1  12.35   0.01  4.50    27.91  0.00   0.44    0.00    0.00  54.79
09:10:17 AM    2   5.51   0.00  1.96    27.65  0.00   0.06    0.00    0.00  64.81
09:10:17 AM    3   7.17   0.00  2.16    22.10  0.00   0.06    0.00    0.00  68.51
09:10:17 AM    4   5.15   0.00  1.93    27.29  0.00   0.06    0.00    0.00  65.56
09:10:17 AM    5   7.63   0.00  1.83    18.54  0.00   0.06    0.00    0.00  71.93
09:10:17 AM    6   5.18   0.00  1.90    28.46  0.00   0.07    0.00    0.00  64.40
09:10:17 AM    7   6.73   0.00  1.45    14.06  0.00   0.05    0.00    0.00  77.70
09:10:17 AM    8   5.19   0.00  1.82    28.47  0.00   0.06    0.00    0.00  64.46
09:10:17 AM    9   6.95   0.00  1.28    10.67  0.00   0.04    0.00    0.00  81.06
09:10:17 AM   10   5.54   0.00  1.78    28.10  0.00   0.06    0.00    0.00  64.51
09:10:17 AM   11   6.68   0.00  1.25     9.02  0.00   0.05    0.00    0.00  83.00
09:10:17 AM   12   1.30   0.00  0.43     9.27  0.00   0.00    0.00    0.00  89.00
09:10:17 AM   13   8.05   0.00  3.83     2.68  0.00   2.61    0.00    0.00  82.83
09:10:17 AM   14   3.37   0.00  1.55     7.86  0.00   0.79    0.00    0.00  86.43
09:10:17 AM   15   6.85   0.00  3.27     2.39  0.00   2.12    0.00    0.00  85.36
09:10:17 AM   16   3.49   0.00  1.61     8.03  0.00   0.83    0.00    0.00  86.03
09:10:17 AM   17   1.78   0.00  0.20     3.02  0.00   0.12    0.00    0.00  94.87
09:10:17 AM   18   1.24   0.00  0.38     8.08  0.00   0.00    0.00    0.00  90.29
09:10:17 AM   19   4.86   0.00  2.17     2.64  0.00   1.51    0.00    0.00  88.82
09:10:17 AM   20   1.15   0.00  0.34     8.03  0.00   0.00    0.00    0.00  90.48
09:10:17 AM   21   5.07   0.00  2.27     2.26  0.00   1.57    0.00    0.00  88.83
09:10:17 AM   22   1.16   0.00  0.35     8.83  0.00   0.00    0.00    0.00  89.66
09:10:17 AM   23   1.75   0.00  0.16     1.75  0.00   0.00    0.00    0.00  96.34
===
Now I will try to migrate to the squid 3.2 stable version.
I had a look at
http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
Now if I use squid 3.2 stable, will it support the SMP feature of using
multiple CPUs by default, or do I have to compile it with SMP enabled?
Also,
do I need more config to operate SMP, or is it configured by default?


best regards

with my best regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-performance-is-low-because-only-1-cpu-is-being-used-tp4658844.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid performance is low because only 1 cpu is being used !!!!

2013-03-05 Thread Amos Jeffries

On 6/03/2013 3:14 a.m., Ahmad wrote:

hi ,.
i have centos 64 bit with kernel 3.7.5 compiled with tproxy features .
i noted  that in rush hour , squid squid guard is bypassing .


Are you basing that on the "detected possible bypass attack" messages
mentioned in threads from days back?
... that would mean the same thing if it were worded "detected possible
attack" without naming the type of attack found.




i noted that squid is using only 1 cpu .


Yes. Squid has always been that way.
Having a dedicated CPU does not necessarily mean slow - the OS and 
helpers can use the other(s).



here is output sample:
===
[root@squid squid-3.3.1]# mpstat -u
Linux 3.7.5 (squid)  03/05/2013  _x86_64_  (24 CPU)

09:10:10 AM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
09:10:10 AM  all   5.10   0.00  1.70    14.09  0.00   0.44    0.00    0.00  78.67

[root@squid squid-3.3.1]# mpstat -P ALL
Linux 3.7.5 (squid)  03/05/2013  _x86_64_  (24 CPU)

09:10:17 AM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %idle
09:10:17 AM  all   5.10   0.00  1.70    14.09  0.00   0.44    0.00    0.00  78.67
09:10:17 AM    0   8.63   0.00  2.67    30.17  0.00   0.15    0.00    0.00  58.37
09:10:17 AM    1  12.35   0.01  4.50    27.91  0.00   0.44    0.00    0.00  54.79
09:10:17 AM    2   5.51   0.00  1.96    27.65  0.00   0.06    0.00    0.00  64.81
09:10:17 AM    3   7.17   0.00  2.16    22.10  0.00   0.06    0.00    0.00  68.51
09:10:17 AM    4   5.15   0.00  1.93    27.29  0.00   0.06    0.00    0.00  65.56
09:10:17 AM    5   7.63   0.00  1.83    18.54  0.00   0.06    0.00    0.00  71.93
09:10:17 AM    6   5.18   0.00  1.90    28.46  0.00   0.07    0.00    0.00  64.40
09:10:17 AM    7   6.73   0.00  1.45    14.06  0.00   0.05    0.00    0.00  77.70
09:10:17 AM    8   5.19   0.00  1.82    28.47  0.00   0.06    0.00    0.00  64.46
09:10:17 AM    9   6.95   0.00  1.28    10.67  0.00   0.04    0.00    0.00  81.06
09:10:17 AM   10   5.54   0.00  1.78    28.10  0.00   0.06    0.00    0.00  64.51
09:10:17 AM   11   6.68   0.00  1.25     9.02  0.00   0.05    0.00    0.00  83.00
09:10:17 AM   12   1.30   0.00  0.43     9.27  0.00   0.00    0.00    0.00  89.00
09:10:17 AM   13   8.05   0.00  3.83     2.68  0.00   2.61    0.00    0.00  82.83
09:10:17 AM   14   3.37   0.00  1.55     7.86  0.00   0.79    0.00    0.00  86.43
09:10:17 AM   15   6.85   0.00  3.27     2.39  0.00   2.12    0.00    0.00  85.36
09:10:17 AM   16   3.49   0.00  1.61     8.03  0.00   0.83    0.00    0.00  86.03
09:10:17 AM   17   1.78   0.00  0.20     3.02  0.00   0.12    0.00    0.00  94.87
09:10:17 AM   18   1.24   0.00  0.38     8.08  0.00   0.00    0.00    0.00  90.29
09:10:17 AM   19   4.86   0.00  2.17     2.64  0.00   1.51    0.00    0.00  88.82
09:10:17 AM   20   1.15   0.00  0.34     8.03  0.00   0.00    0.00    0.00  90.48
09:10:17 AM   21   5.07   0.00  2.27     2.26  0.00   1.57    0.00    0.00  88.83
09:10:17 AM   22   1.16   0.00  0.35     8.83  0.00   0.00    0.00    0.00  89.66
09:10:17 AM   23   1.75   0.00  0.16     1.75  0.00   0.00    0.00    0.00  96.34


These all appear to be idle. Why would it be a good thing to swap the 
main Squid running state between CPU caches at peak load time?

That wastes time and cycles.

Squid is designed to take one CPU and use it to max capacity - the code 
is not using that capacity very efficiently yet but we are working on 
that (assistance welcome).



===
Now I will try to migrate to the stable version of Squid 3.2.
I had a look at
http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
If I use Squid 3.2 stable, will it support the SMP feature of using
multiple CPUs by default, or do I have to compile it with SMP enabled?
Also, do I need more configuration to enable SMP, or is it configured
by default?


The default (at present) is that SMP support is built in when available 
at build time but disabled in the default configuration.


You need to set a SMP worker count to enable it. 
http://www.squid-cache.org/Doc/config/workers/


Note carefully the list of things on that SMP scaling wiki page which 
are listed as *NOT* supporting SMP yet, and how to prepare your 
configuration of them for SMP usage.
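
For illustration, a minimal SMP configuration along the lines of that 
wiki page might look like this (a sketch assuming Squid 3.2 or later; 
the worker count, path and core numbers are made-up examples, not a 
tested setup):

  # squid.conf -- minimal SMP sketch
  workers 4

  # cache_dir is one of the things not yet SMP-aware, so give each
  # worker its own directory via the ${process_number} macro
  cache_dir aufs /var/cache/squid/${process_number} 16384 32 512

  # optionally pin each worker process to its own CPU core
  cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4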


Amos


[squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
I have some Dell 1950 servers dedicated to squid in my production
environment. Each with 16GB RAM and 300G disk
As the website traffic grows, the load of squid becomes high at high
traffic time. Average load is higher than 10.

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.01   0.06    0.13     1.23     1.45     28.87      0.00    4.13   2.19   0.04
sda1       0.00    0.01   0.06    0.11     1.23     1.45     31.59      0.00    4.52   2.40   0.04
sdb        0.07    0.07   0.01    0.01     0.33     0.32     59.88      0.00   19.75  15.74   0.03
sdc        0.00    2.08   9.13  104.44    81.30  1066.74     20.22      0.50   11.95   1.73  19.63

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.50    0.00    3.75   24.34    0.00   68.41

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.50    0.00     2.00     0.00      8.00      0.04   70.00  70.00   3.50
sda1       0.00    0.00   0.50    0.00     2.00     0.00      8.00      0.04   70.00  70.00   3.50
sdb        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdc        0.00    0.00  21.50  186.00   204.00  3106.25     31.91     17.76  100.55   2.63  54.65

Here is the squidclient mgr:info output

Squid Object Cache: Version 3.1.12
Start Time: Sun, 14 Aug 2011 19:39:15 GMT
Current Time:   Thu, 18 Aug 2011 04:41:20 GMT
Connection information for squid:
Number of clients accessing cache:  77651
Number of HTTP requests received:   40449309
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   8319.3
Average ICP messages per minute since start:0.0
Select loop called: 476454933 times, 0.612 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 30.7%, 60min: 32.1%
Hits as % of bytes sent:5min: 40.5%, 60min: 43.2%
Memory hits as % of hit requests:   5min: 88.3%, 60min: 88.8%
Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
Storage Swap size:  120792244 KB
Storage Swap capacity:  90.0% used, 10.0% free
Storage Mem size:   5191632 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:   20.61 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.00865  0.00865
Cache Misses:          0.01035  0.01035
Cache Hits:            0.0      0.0
Near Hits:             0.00091  0.00091
Not-Modified Replies:  0.0      0.0
DNS Lookups:           0.0      0.0
ICP Queries:           0.0      0.0
Resource usage for squid:
UP Time:        291725.519 seconds
CPU Time:       37204.391 seconds
CPU Usage:      12.75%
CPU Usage, 5 minute avg:        19.42%
CPU Usage, 60 minute avg:       18.20%
Process Data Segment Size via sbrk(): 1012440 KB
Maximum Resident Size: 28552368 KB
Page faults with physical i/o: 2957
Memory usage for squid via mallinfo():
Total space in arena:  -1265560 KB
Ordinary blocks:   -1308538 KB 264611 blks
Small blocks:   0 KB  0 blks
Holding blocks: 20708 KB  9 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   42978 KB
Total in use:  -1287830 KB 103%
Total free: 42978 KB -3%
Total size:-1244852 KB
Memory accounted for:
Total accounted:   -1781767 KB 143%
memPool accounted: 6606841 KB -531%
memPool unaccounted:   -7851693 KB 0%
memPoolAlloc calls: 10008474163
memPoolFree calls:  10065124847
File descriptor usage for squid:
Maximum number of file descriptors:   20480
Largest file desc currently in use:   4828
Number of file desc currently in use: 4703
Files queued for open: 178
Available number of file descriptors: 15599
Reserved number of file descriptors:   100
Store Disk files open:  22
Internal Data Structures:
5860834 StoreEntries
256880 StoreEntries with MemObjects
256646 Hot Object Cache Items
5860661 on-disk objects

related parameters

cache_mem 5120 MB
maximum_object_size 51200 KB
maximum_object_size_in_memory 1024 KB

log_icp_queries off
cache_swap_low 90
cache_swap_high 95
hosts_file /etc/squid/hosts
cache_dir aufs /export/squid/cache 131072 32 256

Is there any idea I can 

Re: [squid-users] squid performance tunning

2011-08-18 Thread Łukasz Makowski

W dniu 2011-08-18 08:19, Chen Bangzhong pisze:

I have some Dell 1950 servers dedicated to squid in my production
environment. Each with 16GB RAM and 300G disk
As the website traffic grows, the load of squid becomes high at high
traffic time. Average load is higher than 10.

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.01   0.06    0.13     1.23     1.45     28.87      0.00    4.13   2.19   0.04
sda1       0.00    0.01   0.06    0.11     1.23     1.45     31.59      0.00    4.52   2.40   0.04
sdb        0.07    0.07   0.01    0.01     0.33     0.32     59.88      0.00   19.75  15.74   0.03
sdc        0.00    2.08   9.13  104.44    81.30  1066.74     20.22      0.50   11.95   1.73  19.63

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.50    0.00    3.75   24.34    0.00   68.41

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.50    0.00     2.00     0.00      8.00      0.04   70.00  70.00   3.50
sda1       0.00    0.00   0.50    0.00     2.00     0.00      8.00      0.04   70.00  70.00   3.50
sdb        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdc        0.00    0.00  21.50  186.00   204.00  3106.25     31.91     17.76  100.55   2.63  54.65

Here is the squidclient mgr:info output

Squid Object Cache: Version 3.1.12
Start Time: Sun, 14 Aug 2011 19:39:15 GMT
Current Time:   Thu, 18 Aug 2011 04:41:20 GMT
Connection information for squid:
 Number of clients accessing cache:  77651
 Number of HTTP requests received:   40449309
 Number of ICP messages received:0
 Number of ICP messages sent:0
 Number of queued ICP replies:   0
 Number of HTCP messages received:   0
 Number of HTCP messages sent:   0
 Request failure ratio:   0.00
 Average HTTP requests per minute since start:   8319.3
 Average ICP messages per minute since start:0.0
 Select loop called: 476454933 times, 0.612 ms avg
Cache information for squid:
 Hits as % of all requests:  5min: 30.7%, 60min: 32.1%
 Hits as % of bytes sent:5min: 40.5%, 60min: 43.2%
 Memory hits as % of hit requests:   5min: 88.3%, 60min: 88.8%
 Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 Storage Swap size:  120792244 KB
 Storage Swap capacity:  90.0% used, 10.0% free
 Storage Mem size:   5191632 KB
 Storage Mem capacity:   100.0% used,  0.0% free
 Mean Object Size:   20.61 KB
 Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.00865  0.00865
 Cache Misses:          0.01035  0.01035
 Cache Hits:            0.0      0.0
 Near Hits:             0.00091  0.00091
 Not-Modified Replies:  0.0      0.0
 DNS Lookups:           0.0      0.0
 ICP Queries:           0.0      0.0
Resource usage for squid:
 UP Time:        291725.519 seconds
 CPU Time:       37204.391 seconds
 CPU Usage:      12.75%
 CPU Usage, 5 minute avg:        19.42%
 CPU Usage, 60 minute avg:       18.20%
 Process Data Segment Size via sbrk(): 1012440 KB
 Maximum Resident Size: 28552368 KB
 Page faults with physical i/o: 2957
Memory usage for squid via mallinfo():
 Total space in arena:  -1265560 KB
 Ordinary blocks:   -1308538 KB 264611 blks
 Small blocks:   0 KB  0 blks
 Holding blocks: 20708 KB  9 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:   42978 KB
 Total in use:  -1287830 KB 103%
 Total free: 42978 KB -3%
 Total size:-1244852 KB
Memory accounted for:
 Total accounted:   -1781767 KB 143%
 memPool accounted: 6606841 KB -531%
 memPool unaccounted:   -7851693 KB 0%
 memPoolAlloc calls: 10008474163
 memPoolFree calls:  10065124847
File descriptor usage for squid:
 Maximum number of file descriptors:   20480
 Largest file desc currently in use:   4828
 Number of file desc currently in use: 4703
 Files queued for open: 178
 Available number of file descriptors: 15599
 Reserved number of file descriptors:   100
 Store Disk files open:  22
Internal Data Structures:
 5860834 StoreEntries
 256880 StoreEntries with MemObjects
 256646 Hot Object Cache Items
 5860661 on-disk objects

related parameters

cache_mem 5120 MB
maximum_object_size 51200 KB
maximum_object_size_in_memory 1024 KB

log_icp_queries off
cache_swap_low 90

Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
 Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.00865  0.00865
        Cache Misses:          0.01035  0.01035
        Cache Hits:            0.0  0.0
        Near Hits:             0.00091  0.00091
        Not-Modified Replies:  0.0  0.0
        DNS Lookups:           0.0  0.0
        ICP Queries:           0.0  0.0

Response time is reasonable at this time, but a capture at peak time is
better for performance tuning. Try "atop 1" at peak time; this magic
tool can make the bottleneck clear.

Try multi-instance, which can improve throughput dramatically. Docs are here:
http://wiki.squid-cache.org/MultipleInstances

CARP is another choice for extreme perf demand.
http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
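
As a rough sketch of the multi-instance approach (file names, paths and
ports here are hypothetical; see the wiki page above for the real
details), each instance gets its own config differing only in a few
directives, and each is started separately:

  # /etc/squid/instance1.conf (instance2.conf differs only in the
  # port, pid file, log files and cache_dir)
  http_port 3128
  pid_filename /var/run/squid-instance1.pid
  access_log /var/log/squid/access-instance1.log
  cache_dir aufs /cache1/squid 16384 32 512

  # start each instance against its own config file
  squid -f /etc/squid/instance1.conf
  squid -f /etc/squid/instance2.conf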


Re: [squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
My cached objects will expire after 10 minutes.

Cache-Control:max-age=600

I don't know why there are so many disk writes and there are so many
objects on disk.

In addition, "Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%"
is very low.

Can I increase the cache_mem? or not use disk cache at all?


2011/8/18 Łukasz Makowski lukasz.makow...@itsoft.pl:
 W dniu 2011-08-18 08:19, Chen Bangzhong pisze:

 I have some Dell 1950 servers dedicated to squid in my production
 environment. Each with 16GB RAM and 300G disk
 As the website traffic grows, the load of squid becomes high at high
 traffic time. Average load is higher than 10.

 Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda               0.00     0.01  0.06  0.13     1.23     1.45    28.87
     0.00    4.13   2.19   0.04
 sda1              0.00     0.01  0.06  0.11     1.23     1.45    31.59
     0.00    4.52   2.40   0.04
 sdb               0.07     0.07  0.01  0.01     0.33     0.32    59.88
     0.00   19.75  15.74   0.03
 sdc               0.00     2.08  9.13 104.44    81.30  1066.74
 20.22     0.50   11.95   1.73  19.63

 avg-cpu:  %user   %nice %system %iowait  %steal   %idle
            3.50    0.00    3.75   24.34    0.00   68.41

 Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda               0.00     0.00  0.50  0.00     2.00     0.00     8.00
     0.04   70.00  70.00   3.50
 sda1              0.00     0.00  0.50  0.00     2.00     0.00     8.00
     0.04   70.00  70.00   3.50
 sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00
     0.00    0.00   0.00   0.00
 sdc               0.00     0.00 21.50 186.00   204.00  3106.25
 31.91    17.76  100.55   2.63  54.65

 Here is the squidclient mgr:info output

 Squid Object Cache: Version 3.1.12
 Start Time:     Sun, 14 Aug 2011 19:39:15 GMT
 Current Time:   Thu, 18 Aug 2011 04:41:20 GMT
 Connection information for squid:
         Number of clients accessing cache:      77651
         Number of HTTP requests received:       40449309
         Number of ICP messages received:        0
         Number of ICP messages sent:    0
         Number of queued ICP replies:   0
         Number of HTCP messages received:       0
         Number of HTCP messages sent:   0
         Request failure ratio:   0.00
         Average HTTP requests per minute since start:   8319.3
         Average ICP messages per minute since start:    0.0
         Select loop called: 476454933 times, 0.612 ms avg
 Cache information for squid:
         Hits as % of all requests:      5min: 30.7%, 60min: 32.1%
         Hits as % of bytes sent:        5min: 40.5%, 60min: 43.2%
         Memory hits as % of hit requests:       5min: 88.3%, 60min: 88.8%
         Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
         Storage Swap size:      120792244 KB
         Storage Swap capacity:  90.0% used, 10.0% free
         Storage Mem size:       5191632 KB
         Storage Mem capacity:   100.0% used,  0.0% free
         Mean Object Size:       20.61 KB
         Requests given to unlinkd:      0
 Median Service Times (seconds)  5 min    60 min:
         HTTP Requests (All):   0.00865  0.00865
         Cache Misses:          0.01035  0.01035
         Cache Hits:            0.0  0.0
         Near Hits:             0.00091  0.00091
         Not-Modified Replies:  0.0  0.0
         DNS Lookups:           0.0  0.0
         ICP Queries:           0.0  0.0
 Resource usage for squid:
         UP Time:        291725.519 seconds
         CPU Time:       37204.391 seconds
         CPU Usage:      12.75%
         CPU Usage, 5 minute avg:        19.42%
         CPU Usage, 60 minute avg:       18.20%
         Process Data Segment Size via sbrk(): 1012440 KB
         Maximum Resident Size: 28552368 KB
         Page faults with physical i/o: 2957
 Memory usage for squid via mallinfo():
         Total space in arena:  -1265560 KB
         Ordinary blocks:       -1308538 KB 264611 blks
         Small blocks:               0 KB      0 blks
         Holding blocks:         20708 KB      9 blks
         Free Small blocks:          0 KB
         Free Ordinary blocks:   42978 KB
         Total in use:          -1287830 KB 103%
         Total free:             42978 KB -3%
         Total size:            -1244852 KB
 Memory accounted for:
         Total accounted:       -1781767 KB 143%
         memPool accounted:     6606841 KB -531%
         memPool unaccounted:   -7851693 KB 0%
         memPoolAlloc calls: 10008474163
         memPoolFree calls:  10065124847
 File descriptor usage for squid:
         Maximum number of file descriptors:   20480
         Largest file desc currently in use:   4828
         Number of file desc currently in use: 4703
         Files queued for open:                 178
         Available number of file descriptors: 15599
         Reserved number of file 

Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
2011/8/18 Chen Bangzhong bangzh...@gmail.com:
 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

Static content like pictures should be cached longer, like 1 day (max-age=86400).

 I don't know why there are so many disk writes and there are so many
 objects on disk.

 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

Maybe caused by disk read timeouts. You used too much disk space; you
can shrink it little by little, until the disk busy percentage drops
to 80% or lower.

 Can I increase the cache_mem? or not use disk cache at all?

I used all memory I can use :-)


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 18/08/11 19:40, Drunkard Zhang wrote:

2011/8/18 Chen Bangzhong:

My cached objects will expire after 10 minutes.

Cache-Control:max-age=600


Static content like pictures should cache longer, like 1 day, 86400.


Could also be a whole year. If you control the origin website, set 
caching times as large as reasonably possible for each object. With 
revalidate settings relevant to its likely replacement needs. And always 
send a correct ETag.


With those details Squid and other caches will take care of reducing 
caching times to suit the network and disk needs and 
updates/revalidation to suit your needs. So please set it large.
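
For example, an origin response for a long-lived static object might 
carry headers roughly like these (the values are illustrative only):

  HTTP/1.1 200 OK
  Content-Type: image/png
  Content-Length: 34215
  Cache-Control: public, max-age=31536000, must-revalidate
  Last-Modified: Mon, 01 Aug 2011 00:00:00 GMT
  ETag: "cover-v42"

With a max-age a year out and a stable ETag, a cache can serve the 
object locally and revalidate it cheaply when it needs to.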





I don't know why there are so many disk writes and there are so many
objects on disk.


All traffic goes through either the RAM cache or, if it is bigger than 
maximum_object_size_in_memory, through the disks.


From that info report ~60% of your traffic bytes are MISS responses. A 
large portion of that MISS traffic is likely not storable, so will be 
written to cache then discarded immediately. Squid is overall 
mostly-write with its disk behaviour.


Likely your 10-minute age is affecting this in a big way. The cache will 
have a lot of storable objects which are stale. On the next request they 
will be fetched into memory, then replaced by a revalidation REFRESH 
(near-HIT) response, which writes new data back to disk later.




In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
is very low.


Maybe cause by disk read timeout. You used too much disk space, you
can shrink it a little by a little, until disk busy percentage reduced
to 80% or lower.


Your Squid version is one which will promote HIT objects from disk and 
service repeat HITs from memory, which reduces that disk-hit % a lot 
more than earlier Squid versions would show.





Can I increase the cache_mem? or not use disk cache at all?


I used all memory I can use :-)


Indeed, the more the merrier. Unless it is swapping under high load. If 
that happens Squid speed goes terrible almost immediately.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

 Static content like pictures should cache longer, like 1 day, 86400.

 Could also be a whole year. If you control the origin website, set caching
 times as large as reasonably possible for each object. With revalidate
 settings relevant to its likely replacement needs. And always send a correct
 ETag.

 With those details Squid and other caches will take care of reducing caching
 times to suit the network and disk needs and updates/revalidation to suit
 your needs. So please set it large.


 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
 portion of that MISS traffic is likely not storable, so will be written to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

 Likely your 10-minute age is affecting this in a big way. The cache will
 have a lot of storable object which are stale. Next request they will be
 fetched into memory, then replaced by a revalidation REFRESH (near-HIT)
 response, which writes new data back to disk later.


 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

 Maybe cause by disk read timeout. You used too much disk space, you
 can shrink it a little by a little, until disk busy percentage reduced
 to 80% or lower.

 Your Squid version is one which will promote HIT objects from disk and
 service repeat HITs from memory. Which reducing that disk-hit % a lot more
 than earlier squid versions would show it as.


 Can I increase the cache_mem? or not use disk cache at all?

 I used all memory I can use :-)

 Indeed, the more the merrier. Unless it is swapping under high load. If that
 happens Squid speed goes terrible almost immediately.

Actually I disabled swap at all, and use a script to start squid
process immediately when killed by OS. OS will kill squid when OOM(Out
of memory).


Re: [squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
Thank you Amos and Drunkard.

My website hosts novels; that is, users can read novels there.

The pages are not truly static content, so I can only cache them for
10 minutes.

My squids serve both non-cachable requests (working like nginx) and
cachable requests (10-minute cache), so a 60% cache miss rate is
reasonable. It is not a good design, but we can't do more now.

Another point is that only hot novels are read by users. Crawlers/robots
will push many objects into the cache. These objects are rarely read by
users and will expire after 10 minutes.

If the HTTP response header indicates it is not cachable (e.g.
max-age=0), will Squid save the response in RAM or on disk? My guess is
Squid will discard the response.

If the HTTP response header indicates it is cachable (e.g. max-age=600),
Squid will save it in cache_mem. If the object is larger than
maximum_object_size_in_memory, it will be written to disk.

Can you tell me when Squid will save an object to disk? When will
Squid delete stale objects?




2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

 Static content like pictures should cache longer, like 1 day, 86400.

 Could also be a whole year. If you control the origin website, set caching
 times as largeas reasonably possible for each object. With revalidate
 settings relevant to its likely replacement needs. And always send a correct
 ETag.

 With those details Squid and other caches will take care of reducing caching
 times to suit the network and disk needs and updates/revalidation to suit
 your needs. So please set it large.


 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
 portion of that MISS traffic is likely not storable, so will be written to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

 Likely your 10-minute age is affecting this in a big way. The cache will
 have a lot of storable object which are stale. Next request they will be
 fetched into memory, then replaced by a revalidation REFRESH (near-HIT)
 response, which writes new data back to disk later.


 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

 Maybe cause by disk read timeout. You used too much disk space, you
 can shrink it a little by a little, until disk busy percentage reduced
 to 80% or lower.

 Your Squid version is one which will promote HIT objects from disk and
 service repeat HITs from memory. Which reducing that disk-hit % a lot more
 than earlier squid versions would show it as.


 Can I increase the cache_mem? or not use disk cache at all?

 I used all memory I can use :-)

 Indeed, the more the merrier. Unless it is swapping under high load. If that
 happens Squid speed goes terrible almost immediately.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



Re: [squid-users] squid performance tunning

2011-08-18 Thread Kaiwang Chen
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

 Static content like pictures should cache longer, like 1 day, 86400.

 Could also be a whole year. If you control the origin website, set caching
 times as large as reasonably possible for each object. With revalidate
 settings relevant to its likely replacement needs. And always send a correct
 ETag.

 With those details Squid and other caches will take care of reducing caching
 times to suit the network and disk needs and updates/revalidation to suit
 your needs. So please set it large.


 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
 portion of that MISS traffic is likely not storable, so will be written to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

Will a "cache deny" rule matching those non-storable objects suppress
storing them to disk?
And the HTTP header 'Cache-Control: no-store'?


 Likely your 10-minute age is affecting this in a big way. The cache will
 have a lot of storable object which are stale. Next request they will be
 fetched into memory, then replaced by a revalidation REFRESH (near-HIT)
 response, which writes new data back to disk later.


 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

 Maybe cause by disk read timeout. You used too much disk space, you
 can shrink it a little by a little, until disk busy percentage reduced
 to 80% or lower.

 Your Squid version is one which will promote HIT objects from disk and
 service repeat HITs from memory. Which reducing that disk-hit % a lot more
 than earlier squid versions would show it as.


 Can I increase the cache_mem? or not use disk cache at all?

 I used all memory I can use :-)

 Indeed, the more the merrier. Unless it is swapping under high load. If that
 happens Squid speed goes terrible almost immediately.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Thanks,
Kaiwang


Re: [squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
Mean Object Size:   20.61 K
maximum_object_size_in_memory 1024 KB

So most objects will be saved in RAM first; that still can't explain why
there are so many disk writes.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.52    0.00    1.63    6.95    0.00   89.91

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.01   0.06    0.13     1.24     1.45     28.96      0.00    4.16   2.20   0.04
sda1       0.00    0.01   0.06    0.11     1.24     1.45     31.69      0.00    4.55   2.41   0.04
sdb        0.07    0.07   0.01    0.01     0.33     0.31     59.88      0.00   19.77  15.75   0.03
sdc        0.00    2.08   9.16  104.96    81.61  1071.39     20.21      0.57    5.02   1.73  19.75

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.38    0.00    3.38   10.38    0.00   83.88

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sda1       0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdb        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdc        0.00    4.50  11.00  293.00   104.00  3768.50     25.48      7.26   23.88   1.92  58.30

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.25    0.00    2.63    3.88    0.00   90.24

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sda1       0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdb        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdc        0.00    0.50  15.50   94.50   150.00   644.25     14.44      0.42    3.79   1.95  21.50

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.00    0.00    2.88    3.38    0.00   90.75

Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sda1       0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdb        0.00    0.00   0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
sdc        0.00    4.00  13.50  241.50   134.00  1609.75     13.68      0.89    3.37   0.76  19.50



On 18 August 2011 at 18:50, Chen Bangzhong bangzh...@gmail.com wrote:
 thanks you Amos and Drunkard.

 My website hosts novels, That's, user can read novel there.

 The pages are not truely static contents, so I can only cache them for
 10 minutes.

 My squids serve both non-cachable requests (works like nginx) and
 cachable-requests (10 min cache). So 60% cache miss is reasonable.  It
 is not a good design, but we can't do more now.

 Another point is, only hot novels are read by users. Crawlers/robots
 will push many objects to cache. These objects are rarely read by user
 and will expire after 10 minutes.

 If the http response header indicates it is not cachable(eg:
 max-age=0), will squid save the response in RAM or disk? My guess is
 squid will discard the response.

 If the http response header indicates it is cachable(eg: max-age=600),
 squid will save it in the cache_mem. If the object is larger than
 maximum_object_size_in_memory, it will be written to disk.

 Can you tell me when will squid save the object to disk? When will
 squid delete the staled objects?




 2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

 Static content like pictures should cache longer, like 1 day, 86400.

 Could also be a whole year. If you control the origin website, set caching
 times as largeas reasonably possible for each object. With revalidate
 settings relevant to its likely replacement needs. And always send a correct
 ETag.

 With those details Squid and other caches will take care of reducing caching
 times to suit the network and disk needs and updates/revalidation to suit
 your needs. So please set it large.


 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
 portion of that MISS traffic is likely not storable, so will be written to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

 Likely your 10-minute age is affecting this in a big way. The cache will
 have a lot of storable object which 

Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 18/08/11 22:50, Chen Bangzhong wrote:

thanks you Amos and Drunkard.

My website hosts novels, That's, user can read novel there.

The pages are not truely static contents, so I can only cache them for
10 minutes.

My squids serve both non-cachable requests (works like nginx) and
cachable-requests (10 min cache). So 60% cache miss is reasonable.  It
is not a good design, but we can't do more now.


Oh well. Good luck wishes on that side of the problem.



Another point is, only hot novels are read by users. Crawlers/robots
will push many objects to cache. These objects are rarely read by user
and will expire after 10 minutes.

If the http response header indicates it is not cachable(eg:
max-age=0), will squid save the response in RAM or disk? My guess is
squid will discard the response.


Correct. It will discard the response AND anything it has already cached 
for that URL.


For non-hot objects this will not be a major problem. But may raise disk 
I/O a bit as the existing old stored content gets kicked out. Which 
might actually be a good thing, emptying space in the cache early. Or 
wasted I/O. It's not clear exactly which.




If the http response header indicates it is cachable(eg: max-age=600),
squid will save it in the cache_mem. If the object is larger than
maximum_object_size_in_memory, it will be written to disk.


Yes.



Can you tell me when will squid save the object to disk? When will
squid delete the staled objects?


Stale objects are deleted at the point they are detected as stale and no 
longer usable (i.e. a request has been made for one and an updated 
replacement has arrived from the web server), or if they are the oldest 
objects stored and more cache space is needed for newer objects.



Other than tuning your existing setup there are two things I think you 
may be interested in.


The first is a Measurement Factory project which involves altering Squid 
to completely bypass the cache storage when an object can't be cached or 
re-used by other clients. Makes them faster to process, and avoids 
dropping cached objects to make room. Combining this with a cache deny 
rule identifying those annoying robots as non-cacheable would allow you 
to store only the real users traffic needs.
  This is a slightly longer-term project, AFAIK it is not ready for 
production use (might be wrong). At minimum TMF are possibly needing 
sponsorship assistance to progress it faster. Contact Alex Rousskov 
about possibilities there, http://www.measurement-factory.com/contact.html



The second thing is an alternative squid configuration which would 
emulate that behaviour immediately using two Squid instances.
 Basically; configure a new second instance as a non-caching gateway 
which all requests go to first. That could pass the robots and other 
easily detected non-cacheable requests straight to the web servers for 
service. While passing the other potentially cacheable requests to your 
current Squid instance, where storage and cache fetches happen more 
often without the robots.


 The gateway squid would have a much smaller footprint since it needs 
no memory for caching or indexing, and no disk usage at all.
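
A very rough outline of such a gateway instance (the addresses, ports
and the robot ACL below are hypothetical, and http_access rules are
omitted; this is a sketch, not a tested configuration):

  # gateway squid.conf -- no caching at all
  http_port 80 accel defaultsite=www.example.com
  cache deny all

  # crude robot detection on the User-Agent header
  acl robots browser -i bot spider crawler

  # robots go straight to the web server ...
  cache_peer 192.0.2.10 parent 80 0 no-query originserver name=webserver
  # ... everything else goes through the caching squid
  cache_peer 192.0.2.20 parent 3128 0 no-query name=cachesquid
  cache_peer_access webserver allow robots
  cache_peer_access cachesquid deny robots
  cache_peer_access cachesquid allow all
  never_direct allow all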


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries
On 18/08/11 22:56, Chen Bangzhong wrote:
 Mean Object Size:   20.61 K
 maximum_object_size_in_memory 1024 KB
 
 So most objects will be save in RAM first, still can't explain why
 there are so many disk writes.
 

Well, I would check the HTTP response headers there. Make sure they
contain a Content-Length: header. If that is missing, Squid is forced to
assume the object has infinite length and to require disk backing for it
until it has finished arriving.

The "Mean Object Size:" metric is measured on completely received and
stored objects, so it does not really account for unknown-length objects
or non-cacheable previous objects.

Amos
-- 
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 18/08/11 22:53, Kaiwang Chen wrote:

2011/8/18 Amos Jeffriessqu...@treenet.co.nz:

On 18/08/11 19:40, Drunkard Zhang wrote:


2011/8/18 Chen Bangzhong:



snip



I don't know why there are so many disk writes and there are so many
objects on disk.


All traffic goes through either RAM cache or if its bigger than
maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
portion of that MISS traffic is likely not storable, so will be written to
cache then discarded immediately. Squid is overall mostly-write with its
disk behaviour.


Will a cache deny matching those non-storable objects suppress
storing them to disk?
And HTTP header 'Cache-Control: no-store' ?


The no-store header and the "cache deny" directive have the same effect 
on your Squid. Both erase existing stored objects and erase the newly 
received one _after_ it has finished transferring.


 The difference is that the header applies everywhere receiving the 
object. The cache access control is limited to that one Squid instance 
testing it.
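
For reference, a "cache deny" rule of the kind being discussed might 
look like this (the ACL patterns are made-up examples):

  # never cache responses fetched for obvious robots
  acl robots browser -i bot spider crawler
  cache deny robots

  # or: never cache a particular URL path
  acl uncacheable_paths urlpath_regex ^/search
  cache deny uncacheable_paths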


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
Thanks.

Before I try the gateway squid solution, I want to change one of my
squids to use a memory cache only. I have 16 GB RAM; cache_mem is
currently set to 5 GB.

I will try to increase it to 12 GB and set cache_dir to the null
schema. I do this because I am sure that my hot objects can be saved in
RAM; non-hot objects created by robots will go stale and the memory
will be reused.

Is that all I need to do to set Squid up as a memory-only cache?




2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 22:50, Chen Bangzhong wrote:

 thanks you Amos and Drunkard.

 My website hosts novels, That's, user can read novel there.

 The pages are not truely static contents, so I can only cache them for
 10 minutes.

 My squids serve both non-cachable requests (works like nginx) and
 cachable-requests (10 min cache). So 60% cache miss is reasonable.  It
 is not a good design, but we can't do more now.

 Oh well. Good luck wishes on that side of the problem.


 Another point is, only hot novels are read by users. Crawlers/robots
 will push many objects to cache. These objects are rarely read by user
 and will expire after 10 minutes.

 If the http response header indicates it is not cachable(eg:
 max-age=0), will squid save the response in RAM or disk? My guess is
 squid will discard the response.

 Correct. It will discard the response AND anything it has already cached for
 that URL.

 For non-hot objects this will not be a major problem. But may raise disk I/O
 a bit as the existing old stored content gets kicked out. Which might
 actually be a good thing, emptying space in the cache early. Or wasted I/O.
 It's not clear exactly which.


 If the http response header indicates it is cachable(eg: max-age=600),
 squid will save it in the cache_mem. If the object is larger than
 maximum_object_size_in_memory, it will be written to disk.

 Yes.


 Can you tell me when will squid save the object to disk? When will
 squid delete the staled objects?

 Stale objects are deleted at the point they are detected as stale and no
 longer usable (ie a request has been made for it and updated replacement has
 arrived from the web server). Or if they are the oldest object stored and
 more cache space is needed for newer objects.


 Other than tuning your existing setup there are two things I think you may
 be interested in.

 The first is a Measurement Factory project which involves altering Squid to
 completely bypass the cache storage when an object can't be cached or
 re-used by other clients. Makes them faster to process, and avoids dropping
 cached objects to make room. Combining this with a cache deny rule
 identifying those annoying robots as non-cacheable would allow you to store
 only the real users traffic needs.
  This is a slightly longer-term project, AFAIK it is not ready for
 production use (might be wrong). At minimum TMF are possibly needing
 sponsorship assistance to progress it faster. Contact Alex Rousskov about
 possibilities there, http://www.measurement-factory.com/contact.html


 The second thing is an alternative squid configuration which would emulate
 that behaviour immediately using two Squid instances.
  Basically; configure a new second instance as a non-caching gateway which
 all requests go to first. That could pass the robots and other easily
 detected non-cacheable requests straight to the web servers for service.
 While passing the other potentially cacheable requests to your current Squid
 instance, where storage and cache fetches happen more often without the
 robots.

  The gateway squid would have a much smaller footprint since it needs no
 memory for caching or indexing, and no disk usage at all.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



Re: [squid-users] squid performance tunning

2011-08-18 Thread Kaiwang Chen
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 22:53, Kaiwang Chen wrote:

 2011/8/18 Amos Jeffriessqu...@treenet.co.nz:

 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 snip

 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

  From that info report ~60% of your traffic bytes are MISS responses. A
 large
 portion of that MISS traffic is likely not storable, so will be written
 to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

 Will a cache deny matching those non-storable objects suppress
 storing them to disk?
 And HTTP header 'Cache-Control: no-store' ?

 no-store header and cache deny directive have the same effect on your
 Squid. Both erase existing stored objects and erase the newely received one
 _after_ it is finished transfer.

  The difference is that the header applies everywhere receiving the object.
 The cache access control is limited to that one Squid instance testing it.

Great. What about "Cache-Control: max-age=0" and "Cache-Control:
no-cache" responses? Does Squid store them, hoping it is cheaper to
make a validation than to fetch a whole fresh object? Which source
code files describe the logic for dealing with such cases?



 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Thanks,
Kaiwang


Re: [squid-users] squid performance tunning

2011-08-18 Thread Kaiwang Chen
On 18 August 2011 at 21:07, Amos Jeffries squ...@treenet.co.nz wrote:
 On 18/08/11 22:56, Chen Bangzhong wrote:
 Mean Object Size:   20.61 K
 maximum_object_size_in_memory 1024 KB

 So most objects will be save in RAM first, still can't explain why
 there are so many disk writes.


 Well, I would check the HTTP response headers there. Make sure they are
 containing Content-Length: header. If that is missing Squid is forced to
 assume it will have infinite length and require disk backing for the
 object until it is finished arriving.

Will Squid require disk backing regardless of the object size, even if
it is smaller than the receive buffer?
I am not sure what the default size of the receive buffer is; is it one of these?
read_ahead_gap 16 KB
tcp_recv_bufsize 0 bytes


 The Mean Object Size: metric is measured on completely received and
 stored objects. So does not really account for unknown length objects or
 non-cacheable previous objects.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Thanks,
Kaiwang


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 19/08/11 02:40, Kaiwang Chen wrote:

2011/8/18 Amos Jeffriessqu...@treenet.co.nz:

On 18/08/11 22:53, Kaiwang Chen wrote:


2011/8/18 Amos Jeffriessqu...@treenet.co.nz:


On 18/08/11 19:40, Drunkard Zhang wrote:


2011/8/18 Chen Bangzhong:



snip



I don't know why there are so many disk writes and there are so many
objects on disk.


All traffic goes through either RAM cache or if its bigger than
maximum_object_size_in_memory will go through disks.

  From that info report ~60% of your traffic bytes are MISS responses. A
large
portion of that MISS traffic is likely not storable, so will be written
to
cache then discarded immediately. Squid is overall mostly-write with its
disk behaviour.


Will a cache deny matching those non-storable objects suppress
storing them to disk?
And HTTP header 'Cache-Control: no-store' ?


no-store header and cache deny directive have the same effect on your
Squid. Both erase existing stored objects and erase the newely received one
_after_ it is finished transfer.

  The difference is that the header applies everywhere receiving the object.
The cache access control is limited to that one Squid instance testing it.


Great. What about Cache-Control: max-age=0 and Cache-Control:
no-cache responses? Does squid store them,


max-age=0, that means discard immediately. Same as no-store to Squid.

no-cache on responses is borderline. I can't seem to find anything 
relevant to no-cache kicking off a refresh. The HTTP/1.1 support results 
show it acting like no-store when last tested. So probably not usable yet.


Luckily there is an overlap with the must-revalidate response directive. 
You can send that on the reply instead.


 hoping it is cheaper to
 make a validatation than to fetch a whole fresh object? Which souce
 code files describe the logic to deal with such cases?


If the object has not actually changed, the server sends a 304 instead 
of a new object, and an ETag identifies that the object both machines 
are talking about is identical. Then yes, revalidation is much smaller.
 Squid does not (yet) send If-None-Match on revalidations (it accepts 
and relays it but does not create it), so there are a number of possible 
cases where revalidation fails to be smaller.



src/client_side_reply.cc cacheHit() handles the reply when an object is 
found in storage (to determine if it's usable, obsolete, or simply old). 
That makes use of various other process*() code, and src/refresh.cc does 
the revalidation calculations.
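
As a concrete picture of the cheap case, a revalidation on the wire 
looks roughly like this (an illustrative exchange, not captured traffic; 
note it uses If-Modified-Since rather than If-None-Match, per the above):

  GET /chapter/123 HTTP/1.1
  Host: www.example.com
  If-Modified-Since: Thu, 18 Aug 2011 04:00:00 GMT

  HTTP/1.1 304 Not Modified
  Date: Thu, 18 Aug 2011 04:41:20 GMT
  Cache-Control: max-age=600

The 304 carries no body, so the cache re-uses its stored copy and only 
the headers cross the network.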


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 19/08/11 02:10, Chen Bangzhong wrote:

thanks.

Before I try the gateway squid solution, I want to change one of my
squid to use memory cache only. I have 16GB RAM. now cache_mem is set
to 5GB.

I will try to increase it to 12GB and set cache_dir to null schma. I
do this because I am sure that my hot objects can be saved in RAM,
non-hot objects created by robots will stale  and the memory will be
reused.

Is that all I need to set squid to be a memory cache?



You have squid-3.1, so just comment out the cache_dir lines and set 
cache_mem to something large. The "null" dir schema no longer exists.


 Remember that cache_mem still has an index to account for and the 
usual active traffic buffering stays present. Also that reconfigure will 
wipe the RAM cache to empty.
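
So a memory-only variant of the earlier configuration would reduce to 
something like this (sizes are illustrative; leave some RAM headroom 
for the index and traffic buffers mentioned above):

  # no active cache_dir lines at all -- memory-only cache
  #cache_dir aufs /export/squid/cache 131072 32 256
  cache_mem 12288 MB
  maximum_object_size_in_memory 1024 KB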


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Chen Bangzhong
Amos, I want to find out what is filling my disk at 2-3 MB/s. If there
is no cache-related information in the response header, will Squid
write the response to disk?

In squid wiki, I found the following sentences:

Responses with Cache-Control: Private are NOT cachable.

Responses with Cache-Control: No-Cache are NOT cachable.

Responses with Cache-Control: No-Store are NOT cachable.

Responses for requests with an Authorization header are cachable ONLY
if the response includes Cache-Control: Public.
The following HTTP status codes are cachable:

200 OK
203 Non-Authoritative Information
300 Multiple Choices
301 Moved Permanently
410 Gone

My question is: If there is no Cache-control related information, such
as the following header

Server  nginx/0.8.54
DateThu, 18 Aug 2011 15:56:29 GMT
Content-Typeapplication/json; charset=UTF-8
Content-Length  1218
X-Cache MISS from zw12squid.my.com
X-Cache-Lookup  MISS from zw12squid.my.com:80
Via 1.0 zw12squid.my.com (squid/3.1.12)
Connection  keep-alive

will squid save it to disk?

Can you give me a detailed description about when will squid save the
object to disk?

thanks a lot for your kind help.



2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 19/08/11 02:10, Chen Bangzhong wrote:

 thanks.

 Before I try the gateway squid solution, I want to change one of my
 squid to use memory cache only. I have 16GB RAM. now cache_mem is set
 to 5GB.

 I will try to increase it to 12GB and set cache_dir to null schma. I
 do this because I am sure that my hot objects can be saved in RAM,
 non-hot objects created by robots will stale  and the memory will be
 reused.

 Is that all I need to set squid to be a memory cache?


 You have squid-3.1, so only comment out the cache_dir lines and set
 cache_mem to something large. null dir schema no longer exists.

  Remember that cache_mem still has an index to account for and the usual
 active traffic buffering stays present. Also that reconfigure will wipe the
 RAM cache to empty.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 19/08/11 02:59, Kaiwang Chen wrote:

On 18 August 2011 at 21:07, Amos Jeffries squ...@treenet.co.nz wrote:

On 18/08/11 22:56, Chen Bangzhong wrote:

Mean Object Size:   20.61 K
maximum_object_size_in_memory 1024 KB

So most objects will be save in RAM first, still can't explain why
there are so many disk writes.



Well, I would check the HTTP response headers there. Make sure they are
containing Content-Length: header. If that is missing Squid is forced to
assume it will have infinite length and require disk backing for the
object until it is finished arriving.


Will squid require disk backing despite of the object size, even it is
smaller than the receive buffer?


_require_ it. No. Do it that way due to old code, yes maybe.

The amount of data waiting to be processed does not matter much. It 
could be zero bytes, chunked encoding, and a set of follow-up pipelined 
response headers. Until it is processed and stored somewhere, Squid 
can't tell if it's some bytes that happened to appear early, or the 
whole thing.


 The packet size, read_ahead_gap, and the receive buffer size (dynamic! 
1-64KB), and cache_dir min/max values all have an effect in that area. 
I believe it picks a cache area before continuing to read more bytes 
(but not completely certain).


If the cache_dirs all have small maximum size limits and RAM looks 
bigger, it will go there. In fact, cache_dir usage for backing, which is 
practically welded in place in the 3.1 series, has on occasion shown 
signs of memory-backing instead with large cache_mem. The other devs 
have projects underway to eliminate all that confusion in 3.2 anyway.



Not sure what is the default size of receive buffer, is it one of these?
read_ahead_gap 16 KB


A sliding window of bytes to buffer unsent to the client. Mostly unrelated 
to the receive buffer. When in effect it's the minimum buffer size.



tcp_recv_bufsize 0 bytes


The tcp_recv_bufsize is the maximum amount per read cycle (0 being use 
the OS sysctl details, which is usually 4KB). Default buffer is 
hard-coded as 1KB for most of 3.1 series. 4KB for older and newer 
releases (slow-start algorithm from 1KB turned out to be bad for speed 
on MB sized objects and no benefit for small ones).
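
So, written out explicitly, the two directives being asked about would 
look like this (example overrides only, not recommendations):

  # sliding window of bytes buffered ahead of the client, per request
  read_ahead_gap 64 KB
  # cap on bytes per network read cycle (0 = use the OS sysctl default)
  tcp_recv_bufsize 65536 bytes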


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] squid performance tunning

2011-08-18 Thread Amos Jeffries

On 19/08/11 03:58, Chen Bangzhong wrote:

Amos, I want to find out what is filling my disk at 2-3MB/s. If there
is no cache related information in the response header, will squid
write the response to the disk?

In squid wiki, I found the following sentences:

Responses with Cache-Control: Private are NOT cachable.

Responses with Cache-Control: No-Cache are NOT cachable.

Responses with Cache-Control: No-Store are NOT cachable.

Responses for requests with an Authorization header are cachable ONLY
if the reponse includes Cache-Control: Public.
The following HTTP status codes are cachable:

 200 OK
 203 Non-Authoritative Information
 300 Multiple Choices
 301 Moved Permanently
 410 Gone

My question is: If there is no Cache-control related information, such
as the following header

Server  nginx/0.8.54
DateThu, 18 Aug 2011 15:56:29 GMT
Content-Typeapplication/json; charset=UTF-8
Content-Length  1218
X-Cache MISS from zw12squid.my.com
X-Cache-Lookup  MISS from zw12squid.my.com:80
Via 1.0 zw12squid.my.com (squid/3.1.12)
Connection  keep-alive

will squid save it to disk?


No. It has a small Content-Length. Will store to RAM. But your RAM cache 
is running at 100% full, so something old will be pushed out to disk and 
this fills the empty gap.


Lack of Cache-Control and Expires: headers means that on the next 
request for its URL your refresh_pattern rules will be tested against 
the URL and whichever one matches will be used to determine whether it 
is served or revalidated.
 The only things that could feed that algorithm are the Date: when it 
was produced and the current time, so Squid is unlikely to get it right 
if the two are very similar or very different. Probably leading to a 
revalidation or new request anyway.
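
For reference, that heuristic is driven by the MIN/PERCENT/MAX fields of 
refresh_pattern; with the common default rule (shown for illustration):

  #               regex  min  lm-factor  max (minutes)
  refresh_pattern .      0    20%        4320

freshness is estimated as 20% of the object's apparent age (Date minus 
Last-Modified), clamped between 0 and 4320 minutes. With no 
Last-Modified header, as in the reply above, the heuristic has almost 
nothing to work with, hence the revalidation described.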




Can you give me a detailed description about when will squid save the
object to disk?


When it can't be saved to RAM cache_mem area.
 * cache_mem is full = least-popular object goes to disk.
 * object bigger than maximum_object_size_in_memory = goes to disk
 * object smaller than minimum_object_size_in_memory AND a cache_dir 
can accept it = goes to disk

 * object unknown length = goes to disk. Maybe RAM as well.

Those are the cases I know about. There may be others.

We know disk I/O happens far more often than it reasonably should in 
Squid. The newer releases since 2.6 and 3.0 are being improved to avoid 
it and increase traffic speeds, but progress is slow and irregular.



You were going to try the memory-only caching. I think that was a good 
idea for your 88% RAM-hit vs 1% disk-hit ratios.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


[squid-users] squid performance

2010-10-22 Thread Ananth
Dear team,

I run Squid Cache version 3.1.8. I have a problem when my
client_http.requests rate is more than 200/sec: pages don't browse, but
when requests are fewer than 200/sec I don't find any problem. I don't
see any errors in /etc/var/squid/cache.log. My file descriptor limit is
32768.

Please find my configuration below and do suggest where I am wrong in
my configuration, if anywhere.

Thanks in advance.

My h/w details are as follows:
CPU: 3.00 GHZ XEON processor
RAM: 8 GB
HDD: 148 GB * 2 SAS HDD

my ulimit -n = 32768

File descriptor usage for squid:
Maximum number of file descriptors:   32768
Largest file desc currently in use:   6064
Number of file desc currently in use: 5656
Files queued for open:   0
Available number of file descriptors: 27112
Reserved number of file descriptors:   100
Store Disk files open: 119

my squid.conf:

### Start of squid.conf #created by ANANTH#
cache_effective_user squid
cache_effective_group squid

http_port 3128 transparent

# httpd_accel_host virtual
# httpd_accel_port 80
# httpd_accel_with_proxy on
# httpd_accel_uses_host_header on

# cache_dir aufs /var/spool/squid 16384 32 512
#--This has been inserted to check the cache--
#cache_dir ufs /var/spool/squid 16384 16 256
#cache_dir ufs /cache0/squid 16384 16 256
#cache_dir ufs /squid0/squid 16384 16 256
cache_dir aufs /squid1/squid 16384 32 512
#cache_dir /tmp null

cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 7
emulate_httpd_log on

cache_mem 3 GB
maximum_object_size_in_memory 256 KB
memory_replacement_policy lru
cache_replacement_policy lru
maximum_object_size 64 MB

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

hosts_file /etc/hosts

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 40% 4320

##Define your network below

#acl mynetwork src 192.168.0.0/24
acl mynetwork src 192.168.106.0/24   # cbinetwork private
acl mynetwork src 192.168.107.0/24   # cbinetwork private
acl mynetwork src 192.168.110.0/24   # cbinetwork private
acl mynetwork src 192.168.120.0/24   # cbinetwork private
acl mynetwork src 192.168.121.0/24   # cbinetwork private
acl mynetwork src 192.168.130.0/24   # cbinetwork private
acl mynetwork src 192.168.150.0/24   # cbinetwork private
acl mynetwork src 192.168.151.0/24   # cbinetwork private
acl mynetwork src 192.168.160.0/24   # cbinetwork private
acl mynetwork src 10.100.101.0/24   # cbinetwork private
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8
acl to_localhost dst ::1/128
acl purge method PURGE
acl CONNECT method CONNECT

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https

acl Safe_ports port 1025-65535 #unregistered ports

acl SSL_ports port 443 563

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access allow mynetwork
# http_access deny all
http_reply_access allow all
icp_access allow mynetwork

# icp_access deny all

visible_hostname proxy.cbinet.bi

coredump_dir /squid1/squid

#
###


Re: [squid-users] squid performance

2010-10-22 Thread Amos Jeffries

On 23/10/10 03:01, Ananth wrote:

Dear team,

I run Squid Cache version 3.1.8. I have a problem when my
client_http.requests rate is more than 200/sec: pages don't load, but
when requests are fewer than 200/sec I don't see any problem. I don't
see any errors in /var/log/squid/cache.log. My file descriptor limit is
32768.

Please find my configuration below, and do suggest where I may have
gone wrong in it.


There is nothing visibly wrong with the below config. It's essentially 
the default one which most are using happily.


I've pointed out a few bits which could be improved for overall speed, 
but the gains are not ones which would suddenly cut in like that.


What does squid -v produce? And what OS is this on, please?



Thanks in advance.

My h/w details are as follows:
CPU: 3.00 GHZ XEON processor
RAM: 8 GB
HDD: 148 GB * 2 SAS HDD

My ulimit -n = 32768

File descriptor usage for squid:
Maximum number of file descriptors:   32768
Largest file desc currently in use:   6064
Number of file desc currently in use: 5656
Files queued for open:   0
Available number of file descriptors: 27112
Reserved number of file descriptors:   100
Store Disk files open: 119

my squid.conf:

### Start of squid.conf #created by ANANTH#
cache_effective_user squid
cache_effective_group squid


effective-group is a piece of major voodoo with VERY limited real 
use-cases. The *general* recommendation is to trust the OS security 
settings and group membership of the squid user, and remove that group 
option from the config.




http_port 3128 transparent


With 3.1 this is now 'intercept', to avoid confusion with tproxy 
(transparent proxying).
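i.e., assuming the same port as above:

  http_port 3128 intercept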




# httpd_accel_host virtual
# httpd_accel_port 80
# httpd_accel_with_proxy on
# httpd_accel_uses_host_header on


Um, those should be removed.

From your choice of transparent as a replacement I'm assuming you 
want this as a transparent interception-proxy.
 If you want it as a reverse-proxy (what those old config lines did) 
that is a whole separate config now.




# cache_dir aufs /var/spool/squid 16384 32 512
#--This has been inserted to check the cache--
#cache_dir ufs /var/spool/squid 16384 16 256
#cache_dir ufs /cache0/squid 16384 16 256
#cache_dir ufs /squid0/squid 16384 16 256
cache_dir aufs /squid1/squid 16384 32 512
#cache_dir /tmp null

cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 7
emulate_httpd_log on


Drop emulate_httpd_log and cache_access_log.

Use this instead for the same output slightly faster:
  access_log /var/log/squid/access.log common



cache_mem 3 GB
maximum_object_size_in_memory 256 KB
memory_replacement_policy lru
cache_replacement_policy lru
maximum_object_size 64 MB

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


Drop the QUERY and cgi-bin stuff here. It forces your Squid to do slow 
network fetches for a lot of otherwise cacheable dynamic pages.
 There is a refresh_pattern below which fixes up the behaviour of the 
genuinely non-cacheable ones.




hosts_file /etc/hosts


Just a note:
  I've been seeing this in a lot of tutorials lately. This is not 
needed unless you have a weird location for the hosts file (ie 
/home/youraccount/hosts).
  There are ./configure options that should be used to integrate 
correctly with the OS filesystem. This fixes a lot of file and folder 
paths. Details in the squid wiki about each OS type.
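The sort of thing meant here, with illustrative paths (check the wiki 
page for your OS before copying):

  ./configure --prefix=/usr \
              --sysconfdir=/etc/squid \
              --localstatedir=/var \
              --datadir=/usr/share/squid

With those set, defaults such as the hosts file, PID file and log 
locations land in the normal OS places without per-directive overrides.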




refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440


Add this right here:
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0


refresh_pattern . 0 40% 4320

##Define your network below

#acl mynetwork src 192.168.0.0/24
acl mynetwork src 192.168.106.0/24   # cbinetwork private
acl mynetwork src 192.168.107.0/24   # cbinetwork private
acl mynetwork src 192.168.110.0/24   # cbinetwork private
acl mynetwork src 192.168.120.0/24   # cbinetwork private
acl mynetwork src 192.168.121.0/24   # cbinetwork private
acl mynetwork src 192.168.130.0/24   # cbinetwork private
acl mynetwork src 192.168.150.0/24   # cbinetwork private
acl mynetwork src 192.168.151.0/24   # cbinetwork private
acl mynetwork src 192.168.160.0/24   # cbinetwork private
acl mynetwork src 10.100.101.0/24   # cbinetwork private
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8
acl to_localhost dst ::1/128
acl purge method PURGE
acl CONNECT method CONNECT

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https

acl Safe_ports port 1025-65535 #unregistered ports

acl SSL_ports port 443 563

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge


Um, do you actually need PURGE?
 If not, remove it entirely from the config, including the ACL 
definition. Simply defining it makes Squid do more work tracking
Re: [squid-users] squid performance - requests per second

2010-04-06 Thread Amos Jeffries

饶琛琳 wrote:
I have seen the 
page (http://wiki.squid-cache.org/KnowledgeBase/Benchmarks) and want to 
ask a question about the RPS.
My LVS tells me that the ActiveConn number of one squid is more than 
200,000; the netstat command tells me the established connection number is 
6; but the RPS from squidclient is only 110.

Who can teach me the difference between them?


netstat measures total connections to the current box over the last X time period.
LVS measures total connections over all traffic routed, for a Y time period.

squidclient RPS measures only HTTP requests per second for the last 1 or 5 
minutes.
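A quick way to read that figure, assuming the cache manager is reachable 
from localhost:

  squidclient mgr:5min | grep client_http.requests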


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] squid performance - requests per second

2010-04-01 Thread 饶琛琳
I have seen the 
page (http://wiki.squid-cache.org/KnowledgeBase/Benchmarks) and want to 
ask a question about the RPS.
My LVS tells me that the ActiveConn number of one squid is more than 
200,000; the netstat command tells me the established connection number is 
6; but the RPS from squidclient is only 110.

Who can teach me the difference between them?
Tks.

On 2010-3-29 14:00, Amos Jeffries wrote:

guest01 wrote:

Hi guys,

I am sorry if this is a question which has been asked many times,
but I did not find anything current concerning the performance of
recent versions of squid.

We are trying to replace a commercial product with squid servers on
64bit linux servers (most likely red hat 5). At the moment, we have a
peak of about 6000 requests per second, which is really a lot. How
many requests can one single squid server handle? I am just talking
about caching, we also have icap servers and different forms of
authentication. What are your experiences? How many requests can you
handle with which hardware? A raw guess would be ok.

thanks, best regards


http://www.google.co.nz/search?q=squid+performance
http://www.google.co.nz/search?q=squid+benchmark
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

Amos





Re: [squid-users] squid performance - requests per second

2010-03-29 Thread Amos Jeffries

guest01 wrote:

Hi guys,

I am sorry if this is a question which has been asked many times,
but I did not find anything current concerning the performance of
recent versions of squid.

We are trying to replace a commercial product with squid servers on
64bit linux servers (most likely red hat 5). At the moment, we have a
peak of about 6000 requests per second, which is really a lot. How
many requests can one single squid server handle? I am just talking
about caching, we also have icap servers and different forms of
authentication. What are your experiences? How many requests can you
handle with which hardware? A raw guess would be ok.

thanks, best regards


http://www.google.co.nz/search?q=squid+performance
http://www.google.co.nz/search?q=squid+benchmark
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] squid performance - requests per second

2010-03-26 Thread guest01
Hi guys,

I am sorry if this is a question which has been asked many times,
but I did not find anything current concerning the performance of
recent versions of squid.

We are trying to replace a commercial product with squid servers on
64bit linux servers (most likely red hat 5). At the moment, we have a
peak of about 6000 requests per second, which is really a lot. How
many requests can one single squid server handle? I am just talking
about caching, we also have icap servers and different forms of
authentication. What are your experiences? How many requests can you
handle with which hardware? A raw guess would be ok.

thanks, best regards


Re: [squid-users] Squid performance issues

2010-01-27 Thread Amos Jeffries

Felipe W Damasio wrote:

  Hi Mr. Robertson,

2010/1/26 Chris Robertson crobert...@gci.net:

 Do you have any idea or any other data I can collect to try and
track down this?


Check your log rotation schedule.  Is it possible that logs are being
rotated  at midnight?  I think that the swap.state file is rewritten when
squid -k rotate is called.  Check the beginning of your cache.log to
verify.


  I don't use -k rotate.

  At midnight, the only thing that changes is that the traffic shaper
of the ISP lets everything run loose, i.e. it allows http requests to go
through the roof.

  The youtube requests, which are what I really care about, go from
an avg of 20Mbps of traffic (shaped) to around 50Mbps unshaped.

   This is, I suppose, what's causing squid to slow down...but squid
is able to handle this kind of traffic increase, isn't it?


It should, yes. I have a theory though...

 ... One of the major differences between 2.x and 3.x is that for 
objects in memory 2.x must walk a list of chunks 4KB in size from the 
start of the file every time it sends a bit. 3.x is able to keep its 
last read position.
This can cause 2.x to use a lot of CPU time walking the objects if Squid 
is forced to send them in small bits at a time.


Since it's youtube (4-8MB objects) you are talking about, and you only do 
memory caching, I suspect the problem is that a lot of new requests 
arrive at around the same time for large objects. Squid then has to wait 
for each bit of the reply to trickle in and ends up spending a lot of 
time switching between replies and walking the object to find the next 
bit to send out.


You can test this by trying 3.x. In particular 3.1 which has the better 
memory caching of the 3.x line. (I'm releasing 3.1.0.16 this weekend if 
you can wait that long).




   I don't think it is correct behavior that squid at this time slows
down from 0.04s to 23-40 seconds to load an HTML file

   I'll run iostat and vmstat tonight to see if I get more info to
track this down, and I'll send them to the list tomorrow.


strace may provide some more info if those are unclear.
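For example, a sketch (pidof may return more than one PID; attach to the 
busy child):

  strace -c -f -p $(pidof squid)
  # let it run through the slow period, then Ctrl-C to print the summary table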


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


Re: [squid-users] Squid performance issues

2010-01-27 Thread Felipe W Damasio
  Hi Mr. Lauro,

  Attached are the files you asked for:

  iostat -dx 1 11
  vmstat 1 11
  netstat -s

  Both with a baseline (ie, at a non-troubled time :-)), and at a
moment of pressure.

  On the baseline, /usr/bin/time squidclient "http://www.amazon.com"
took 0.03s, and on the pressure files, the same command took around 4
seconds.

  Just so you know, sda is the system, and sdb is exclusively for squid cache.

  Do these numbers indicate that I/O operations might be the
bottleneck of the slowdown?

  Thanks,

Felipe Damasio

2010/1/26 John Lauro john.la...@covenanteyes.com:
 Yup, your stats on free look fine, especially if squid has been running
 awhile.  If it was recently restarted, it might not be accurate.  Looking
 closer at the data you originally provided, it should be accurate...  You
 may want to add 1-2GB of RAM cache to reduce your disk I/O (if that turns
 out to be the bottleneck).

 The vmstat and iostat should eliminate disk I/O as a bottleneck (or point to
 it).


 You may want to check out the stats with netstat -s, maybe before and
 after the other commands so you can see the deltas.
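 A sketch of capturing those deltas (the interval is arbitrary):

   netstat -s > /tmp/ns.before
   sleep 300
   netstat -s > /tmp/ns.after
   diff /tmp/ns.before /tmp/ns.after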

 Hmmm, I just reread and noticed it was around midnight...  I wonder if a
 bunch of stuff goes invalid in the cache because of the date change?  Never
 noticed that behavior before, but not currently running on a large setup
 either.

 Should probably run the vmstat/iostat prior to the extreme slow time to get
 a bit of a baseline for normal operation.


 Given the high number of connections you have, you may want to consider:
 echo 1024 60999 > /proc/sys/net/ipv4/ip_local_port_range   (probably not an
 issue given transparent mode)

 and check to see how close you are coming to your connection tracking limit.
 Probably ok, but it could cause connections to require retries if there is a problem.
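 On a 2.6.29-era kernel with the ipv4 conntrack sysctl shown elsewhere in
 this thread, a quick check might be:

   cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
   cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
   # newer kernels expose these as net.netfilter.nf_conntrack_count/_max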




 PS: You mentioned this is in bridge mode?  I am trying to get squid working
 in bridge mode...  have it working fine in transparent mode as a router, but
 no luck with bridge mode.  Can you send your iptables/ebtables/kernel's
 .config, etc? I'm going to compile that kernel (2.6.29.6) now, in case
 something broke in the more recent kernels.



 -Original Message-
 From: Felipe W Damasio [mailto:felip...@gmail.com]
 Sent: Monday, January 25, 2010 10:06 PM
 To: John Lauro
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid performance issues

   Hi Mr. John,

 2010/1/26 John Lauro john.la...@covenanteyes.com:
  What does the following give:
  uname -a

 uname -a:

 Linux squid 2.6.29.6 #4 SMP Thu Jan 14 21:00:42 BRST 2010 x86_64
 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux

  While it's being slow, run the following to get some stats:
 
  vmstat 1 11     ;# Will run for 11 seconds
  iostat -dx 11   ;# Will run for 11 seconds, install sysstat if not
 found

   I'll run these tonight.

  My first guess is memory swapping, but could be I/O.  The above
 should help
  narrow it down.

   I thought that, but actually both top and free -m tell me the same
 thing:

              total       used       free     shared    buffers     cached
 Mem:          7979       5076       2903          0          0       4144
 -/+ buffers/cache:        931       7047
 Swap:         3812          0       3811

   Swap isn't even touched...even when slow.

   But if you think vmstat and iostat can help, I'll run them no
 problem.

   Thanks,

 Felipe Damasio



Tue Jan 26 16:47:42 BRST 2010
Linux 2.6.29.6 (hyper)   01/26/10   _x86_64_   (8 CPU)

Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.01   14.97   0.43   2.36    11.93    678.83    247.49      0.40  143.89  12.53   3.50
sdb        0.02    2.25   2.89  17.04   417.55   2127.18    127.67      7.51  376.56   9.59  19.12

Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00   0.00     0.00      0.00      0.00      0.00    0.00   0.00   0.00
sdb        0.00    0.00   2.97   0.00   261.39      0.00     88.00      0.03   10.00   9.00   2.67

Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00   0.00     0.00      0.00      0.00      0.00    0.00   0.00   0.00
sdb        0.00    0.00   3.00   2.00   768.00      9.00    155.40      0.10   19.60  19.40   9.70

Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00    1.98   0.00   6.93     0.00     73.27     10.57      0.57   81.71  30.71  21.29
sdb        0.00    5.94   0.99  98.02   253.47  11955.45    123.31     43.94  443.78   9.47  93.76

Device:  rrqm/s  wrqm

RE: [squid-users] Squid performance issues

2010-01-27 Thread John Lauro
Both your CPU and disk look ok based on these, and there's not enough
difference from baseline to explain the change in timing of the command.

I'll look at the netstats a little more later to see if I spot anything.


Can you test the equivalent outside of squid?  Maybe it's just your internet
or amazon being slow and it has nothing to do with squid...?


 -Original Message-
 From: Felipe W Damasio [mailto:felip...@gmail.com]
 Sent: Wednesday, January 27, 2010 4:55 PM

   Attached are the files you asked for:
 
   iostat -dx 1 11
   vmstat 1 11
   netstat -s
 
   Both with a baseline (ie, at a non-troubled time :-)), and at a
 moment of pressure.
 
   On the baseline, /usr/bin/time squidclient "http://www.amazon.com"
 took 0.03s, and on the pressure files, the same command took around 4
 seconds.
 
   Just so you know, sda is the system, and sdb is exclusively for squid
 cache.
 
   Do these numbers indicate that I/O operations might be the
 bottleneck of the slowdown?
 
   Thanks,
 



Re: [squid-users] Squid performance issues

2010-01-27 Thread Felipe W Damasio
  Hi Mr. Lauro,

2010/1/27 John Lauro john.la...@covenanteyes.com:
 I'll look at the netstats a little more later to see if I spot anything.

 Can you test the equivalent outside of squid?  Maybe it's just your internet
 or amazon being slow and it has nothing to do with squid...?

  I thought that too, so I tried link -dump "http://www.amazon.com",
to see if it differs from 4PM to around midnight.

  And it does, but not that much: around 4PM it stays at 0.02s, and
around midnight it goes to 0.08s-0.1s. Squid's squidclient program
goes to 4s...so something is up with squid.

  Thanks!

Felipe Damasio


Re: [squid-users] Squid performance issues

2010-01-26 Thread Chris Robertson

Felipe W Damasio wrote:

 Hi all,

 Sorry for the long email.

 I'm using squid on a 300Mbps ISP with about 10,000 users.

 I have an 8-core Intel i7 machine, with 8GB of RAM and 500GB
of HD for the cache (a dedicated SATA HD with xfs). Using aufs as
storeio.

 I'm caching mostly multimedia files (youtube and such).

 Squid usually eats around 50-70% of one core.

 But always around midnight (when a lot of users browse the internet),
my squid becomes very slow...I mean, a page that usually takes 0.04s
to load takes 23 seconds to load.

 My best guess is that the volume of traffic is making squid slow.

 I'm using a 2.6.29.6 vanilla kernel with tproxy enabled for squid.
And I'm using these /proc configurations:

echo 0 > /proc/sys/net/ipv4/tcp_ecn
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
echo 10 > /proc/sys/net/core/netdev_max_backlog
echo 409600 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 7 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 3 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 65536 > /proc/sys/vm/min_free_kbytes
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_rmem
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_wmem
echo 1024000 > /proc/sys/net/core/rmem_max
echo 1024000 > /proc/sys/net/core/wmem_max
echo 512000 > /proc/sys/net/core/rmem_default
echo 512000 > /proc/sys/net/core/wmem_default
echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
echo 3 > /proc/sys/net/ipv4/tcp_synack_retries
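For reference, the persistent form of such settings in /etc/sysctl.conf
maps each /proc/sys path to a dotted key, e.g.:

  net.ipv4.tcp_fin_timeout = 7
  net.core.netdev_max_backlog = 10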

 The machine is in bridge-mode.

 I wrote a little script that prints:

 - The date;
 - The /usr/bin/time squidclient "http://www.amazon.com" timing;
 - The number of ESTABLISHED connections (through netstat -an);
 - The number of TIME_WAIT connections;
 - The total number of netstat connections;
 - The route cache (ip route list cache);
 - The number of clients currently connected in squid (through mgr:info);
 - The number of free memory in MB (free -m);
 - The % used of the squid-running core;
 - The average time to respond to a request (from mgr:info
also) - 5 minute avg;
 - The average number of http requests / sec (5 minutes avg) - mgr:info as well.

 On any other hour, I have something like:

2010-01-25 18:48:19 ; 0.04 ; 19383 ; 9902 ; 29865 ; 96972 ; 4677 ; 131
; 59 ; 0.24524 ; 476.871718
2010-01-25 18:53:29 ; 0.04 ; 18865 ; 8593 ; 30123 ; 179570 ; 4679 ;
148 ; 62 ; 0.22004 ; 504.424207
2010-01-25 18:58:38 ; 0.04 ; 18377 ; 9056 ; 29283 ; 99038 ; 4680 ; 174
; 61 ; 0.22004 ; 466.659336
2010-01-25 19:03:49 ; 0.04 ; 18877 ; 9133 ; 28327 ; 181196 ; 4673 ;
171 ; 57 ; 0.24524 ; 483.558436

 So, it takes around 0.04s to get http://www.amazon.com.

2010-01-24 23:46:50 ; 2.53 ; 22723 ; 9861 ; 35012 ; 64752 ; 4306 ;
166; 70 ; 0.22004 ; 566.364274
2010-01-24 23:52:04 ; 3.74 ; 21173 ; 10256 ; 33242 ; 167594 ; 4309 ;
169 ; 68 ; 0.20843 ; 537.758601
2010-01-24 23:57:20 ; 0.08 ; 18691 ; 9050 ; 29590 ; 65496 ; 4312 ; 138
; 71 ; 0.20843 ; 525.119006
2010-01-25 00:02:29 ; 15.54 ; 18016 ; 8209 ; 29035 ; 149248 ; 4318 ;
160 ; 82 ; 0.25890 ; 491.615241

 As I said, it goes from 0.04 to 15.54s(!) to get a single html file.
Horrible. After 12:30, everything goes back to normal.

 From those variables, I can't seem to find any indication of what can
be causing this appalling slowdown. The number of squid users doesn't
go up that much; I just see that the avg time squid reports for
answering a request goes from 0.20s to 0.25s, and the number of http
requests/sec actually goes down from 566 to 491...which is kind of odd
to me. And the number of users using squid stays at around 4300.

 I talked to Mr. Dave Dykstra, and he thought it could be I/O delay
issues. So I tried:

cache_dir null /tmp
cache_access_log none
cache_store_log none

  But no luck, on midnight tonight again things went wild:

2010-01-25 23:57:03 ; 0.04 ; 24112 ; 11330 ; 37240 ; 74456 ; 3516 ;
160 ; 58 ; 0.25890 ; 581.047037
2010-01-26 00:02:15 ; 10.82 ; 25638 ; 11695 ; 38537 ; 177198 ; 3533 ;
149 ; 78 ; 0.27332 ; 570.312936
2010-01-26 00:07:38 ; 42.64 ; 23818 ; 11563 ; 38097 ; 88902 ; 3556 ;
171 ; 70 ; 0.30459 ; 585.880418

  From 0.04 to 42 seconds to load the main html page of amazon.com. (!)

  Do you have any idea or any other data I can collect to try and
track down this?
  


Check your log rotation schedule.  Is it possible that logs are being 
rotated  at midnight?  I think that the swap.state file is rewritten 
when squid -k rotate is called.  Check the beginning of your cache.log 
to verify.



  I'm using squid-2.7.stable7, but I'm willing to try squid-3.0 or
squid-3.1 if you guys think it could help.

  I'm using 2 gigabit Marvell Ethernet boards with sky2 driver. Don't
know if it's relevant, though.

  If you guys need any more info to try and help me figure this out, please ask.

  I'm willing to test, code or do pretty much anything to make squid
perform better on my environment Please let me know how can I help you
help me. :-)

  Thanks!

Felipe Damasio
  


Chris



Re: [squid-users] Squid performance issues

2010-01-26 Thread Felipe W Damasio
  Hi Mr. Robertson,

2010/1/26 Chris Robertson crobert...@gci.net:
  Do you have any idea or any other data I can collect to try and
 track down this?


 Check your log rotation schedule.  Is it possible that logs are being
 rotated  at midnight?  I think that the swap.state file is rewritten when
 squid -k rotate is called.  Check the beginning of your cache.log to
 verify.

  I don't use -k rotate.

  At midnight, the only thing that changes is that the traffic shaper
of the ISP lets everything run loose, i.e. it allows http requests to go
through the roof.

  The youtube requests, which are what I really care about, go from
an avg of 20Mbps of traffic (shaped) to around 50Mbps unshaped.

   This is, I suppose, what's causing squid to slow down...but squid
is able to handle this kind of traffic increase, isn't it?

   I don't think it is correct behavior that squid at this time slows
down from 0.04s to 23-40 seconds to load an HTML file.

   I'll run iostat and vmstat tonight to see if I get more info to
track this down, and I'll send them to the list tomorrow.

    Just so you know, if before midnight I run an ebtables -t broute
-F, the time to access the HTML file doesn't increase at all, so I
don't think it's the network running out of bandwidth.
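For reference, the broute rule being flushed there is typically along
these lines (per the bridged-interception examples on the squid wiki;
exact syntax varies by ebtables version):

  ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp \
    --ip-destination-port 80 -j redirect --redirect-target ACCEPT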

   Thanks,

Felipe Damasio


Re: [squid-users] Squid performance issues

2010-01-26 Thread Chris Robertson

Felipe W Damasio wrote:

  Hi Mr. Robertson,

2010/1/26 Chris Robertson crobert...@gci.net:
  

 Do you have any idea or any other data I can collect to try and
track down this?

  

Check your log rotation schedule.  Is it possible that logs are being
rotated  at midnight?  I think that the swap.state file is rewritten when
squid -k rotate is called.  Check the beginning of your cache.log to
verify.



  I don't use -k rotate.
  


Err...  Really?  Last I heard, calling squid -k rotate (aside from the 
obvious logfile rotation) prunes the swap.state file.   Not doing so 
would lead to your swap.state growing without bounds.
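If rotation is wanted, a nightly cron entry is the usual approach (path 
illustrative):

  # /etc/crontab
  0 4 * * * root /usr/local/squid/sbin/squid -k rotate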


Chris



Re: [squid-users] Squid performance issues

2010-01-26 Thread Felipe W Damasio
  Hi Mr. Robertson,

2010/1/26 Chris Robertson crobert...@gci.net:
  I don't use -k rotate.

 Err...  Really?  Last I heard, calling squid -k rotate (aside from the
 obvious logfile rotation) prunes the swap.state file.   Not doing so would
 lead to your swap.state growing without bounds.

  Should I?

  Is the swap.state file being big a big deal?

  The file is:

-rw-r-----   1 nobody nobody 991584 Jan 26 19:05 swap.state

   This is less than 1MB. Is it too big?

   And if it is, when I disabled the cache with cache_dir null /tmp,
shouldn't the response time have improved? Like I said in the previous
email, it didn't help at all. The time still went from 0.04s to 40s.

   Thanks,

Felipe Damasio


Re: [squid-users] Squid performance issues

2010-01-26 Thread Chris Robertson

Felipe W Damasio wrote:

  Hi Mr. Robertson,

2010/1/26 Chris Robertson crobert...@gci.net:
  

 I don't use -k rotate.

  

Err...  Really?  Last I heard, calling squid -k rotate (aside from the
obvious logfile rotation) prunes the swap.state file.   Not doing so would
lead to your swap.state growing without bounds.



  Should I?

  Is the swap.state file being big a big deal?
  


That depends on how well your file system handles large files.


  The file is:

-rw-r-----   1 nobody nobody 991584 Jan 26 19:05 swap.state

   This is less than 1MB. Is it too big?
  


Not at all.


   And if it is, when I disabled the cache with cache_dir null /tmp,
shouldn't the response time have improved?


Ah.  I missed that bit.  Sorry for the noise.


 Like I said in the previous
email, it didn't help at all. The time still went from 0.04s to 40s.

   Thanks,

Felipe Damasio
  


Chris



[squid-users] Squid performance issues

2010-01-25 Thread Felipe W Damasio
 Hi all,

 Sorry for the long email.

 I'm using squid on a 300Mbps ISP with about 10,000 users.

 I have an 8-core Intel i7 machine, with 8GB of RAM and 500GB
of HD for the cache (a dedicated SATA HD with xfs). Using aufs as
storeio.

 I'm caching mostly multimedia files (youtube and such).

 Squid usually eats around 50-70% of one core.

 But always around midnight (when a lot of users browse the internet),
my squid becomes very slow...I mean, a page that usually takes 0.04s
to load takes 23 seconds to load.

 My best guess is that the volume of traffic is making squid slow.

 I'm using a 2.6.29.6 vanilla kernel with tproxy enabled for squid.
And I'm using these /proc configurations:

echo 0 > /proc/sys/net/ipv4/tcp_ecn
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
echo 10 > /proc/sys/net/core/netdev_max_backlog
echo 409600 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 7 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 3 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 65536 > /proc/sys/vm/min_free_kbytes
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_rmem
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_wmem
echo 1024000 > /proc/sys/net/core/rmem_max
echo 1024000 > /proc/sys/net/core/wmem_max
echo 512000 > /proc/sys/net/core/rmem_default
echo 512000 > /proc/sys/net/core/wmem_default
echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
echo 3 > /proc/sys/net/ipv4/tcp_synack_retries

 The machine is in bridge-mode.

 I wrote a little script that prints:

 - The date;
 - The /usr/bin/time squidclient "http://www.amazon.com" timing;
 - The number of ESTABLISHED connections (through netstat -an);
 - The number of TIME_WAIT connections;
 - The total number of netstat connections;
 - The route cache (ip route list cache);
 - The number of clients currently connected in squid (through mgr:info);
 - The number of free memory in MB (free -m);
 - The % used of the squid-running core;
 - The average time to respond to a request (from mgr:info
also) - 5 minute avg;
 - The average number of http requests / sec (5 minutes avg) - mgr:info as well.

 On any other hour, I have something like:

2010-01-25 18:48:19 ; 0.04 ; 19383 ; 9902 ; 29865 ; 96972 ; 4677 ; 131
; 59 ; 0.24524 ; 476.871718
2010-01-25 18:53:29 ; 0.04 ; 18865 ; 8593 ; 30123 ; 179570 ; 4679 ;
148 ; 62 ; 0.22004 ; 504.424207
2010-01-25 18:58:38 ; 0.04 ; 18377 ; 9056 ; 29283 ; 99038 ; 4680 ; 174
; 61 ; 0.22004 ; 466.659336
2010-01-25 19:03:49 ; 0.04 ; 18877 ; 9133 ; 28327 ; 181196 ; 4673 ;
171 ; 57 ; 0.24524 ; 483.558436

 So, it takes around 0.04s to get http://www.amazon.com.

2010-01-24 23:46:50 ; 2.53 ; 22723 ; 9861 ; 35012 ; 64752 ; 4306 ;
166; 70 ; 0.22004 ; 566.364274
2010-01-24 23:52:04 ; 3.74 ; 21173 ; 10256 ; 33242 ; 167594 ; 4309 ;
169 ; 68 ; 0.20843 ; 537.758601
2010-01-24 23:57:20 ; 0.08 ; 18691 ; 9050 ; 29590 ; 65496 ; 4312 ; 138
; 71 ; 0.20843 ; 525.119006
2010-01-25 00:02:29 ; 15.54 ; 18016 ; 8209 ; 29035 ; 149248 ; 4318 ;
160 ; 82 ; 0.25890 ; 491.615241

 As I said, it goes from 0.04 to 15.54s(!) to get a single html file.
Horrible. After 12:30, everything goes back to normal.

 From those variables, I can't seem to find any indication of what can
be causing this appalling slowdown. The number of squid users doesn't
go up that much; I just see that the avg time squid reports for
answering a request goes from 0.20s to 0.25s, and the number of http
requests/sec actually goes down from 566 to 491...which is kind of odd
to me. And the number of users using squid stays at around 4300.

 I talked to Mr. Dave Dykstra, and he thought it could be I/O delay
issues. So I tried:

cache_dir null /tmp
cache_access_log none
cache_store_log none

  But no luck, on midnight tonight again things went wild:

2010-01-25 23:57:03 ; 0.04 ; 24112 ; 11330 ; 37240 ; 74456 ; 3516 ;
160 ; 58 ; 0.25890 ; 581.047037
2010-01-26 00:02:15 ; 10.82 ; 25638 ; 11695 ; 38537 ; 177198 ; 3533 ;
149 ; 78 ; 0.27332 ; 570.312936
2010-01-26 00:07:38 ; 42.64 ; 23818 ; 11563 ; 38097 ; 88902 ; 3556 ;
171 ; 70 ; 0.30459 ; 585.880418

  From 0.04 to 42 seconds to load the main html page of amazon.com. (!)

  Do you have any idea or any other data I can collect to try and
track down this?

  I'm using squid-2.7.stable7, but I'm willing to try squid-3.0 or
squid-3.1 if you guys think it could help.

  I'm using 2 gigabit Marvell Ethernet boards with sky2 driver. Don't
know if it's relevant, though.

  If you guys need any more info to try and help me figure this out, please ask.

  I'm willing to test, code or do pretty much anything to make squid
perform better on my environment Please let me know how can I help you
help me. :-)

  Thanks!

Felipe Damasio


RE: [squid-users] Squid performance issues

2010-01-25 Thread John Lauro
What does the following give:
uname -a

While it's being slow, run the following to get some stats:

vmstat 1 11 ;# Will run for 11 seconds
iostat -dx 11   ;# Will run for 11 seconds, install sysstat if not found


My first guess is memory swapping, but could be I/O.  The above should help
narrow it down.

 -Original Message-
 From: Felipe W Damasio [mailto:felip...@gmail.com]
 Sent: Monday, January 25, 2010 9:37 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid performance issues
 
  Hi all,
 
  Sorry for the long email.
 
  I'm using squid on a 300Mbps ISP with about 10,000 users.
 
  I have an 8-core Intel i7 machine, with 8GB of RAM and 500GB
  of HD for the cache (a dedicated SATA HD with xfs). Using aufs as
  storeio.
 
  I'm caching mostly multimedia files (youtube and such).
 
  Squid usually eats around 50-70% of one core.
 
  But always around midnight (when a lot of users browse the internet),
  my squid becomes very slow...I mean, a page that usually takes 0.04s
  to load takes 23 seconds to load.
 
  My best guess is that the volume of traffic is making squid slow.
 
  I'm using a 2.6.29.6 vanilla kernel with tproxy enabled for squid.
 And I'm using these /proc configurations:
 
 echo 0 > /proc/sys/net/ipv4/tcp_ecn
 echo 1 > /proc/sys/net/ipv4/tcp_low_latency
 echo 10 > /proc/sys/net/core/netdev_max_backlog
 echo 409600 > /proc/sys/net/ipv4/tcp_max_syn_backlog
 echo 7 > /proc/sys/net/ipv4/tcp_fin_timeout
 echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
 echo 3 > /proc/sys/net/ipv4/tcp_keepalive_probes
 echo 65536 > /proc/sys/vm/min_free_kbytes
 echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_rmem
 echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_wmem
 echo 1024000 > /proc/sys/net/core/rmem_max
 echo 1024000 > /proc/sys/net/core/wmem_max
 echo 512000 > /proc/sys/net/core/rmem_default
 echo 512000 > /proc/sys/net/core/wmem_default
 echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
 echo 3 > /proc/sys/net/ipv4/tcp_synack_retries
 
  The machine is in bridge-mode.
 
  I wrote a little script that prints:
 
  - The date;
  - The /usr/bin/time squidclient "http://www.amazon.com" timing;
  - The number of ESTABLISHED connections (through netstat -an);
  - The number of TIME_WAIT connections;
  - The total number of netstat connections;
  - The route cache (ip route list cache);
  - The number of clients currently connected in squid (through
 mgr:info);
  - The number of free memory in MB (free -m);
  - The % used of the squid-running core;
  - The average time to respond to a request (from mgr:info
 also) - 5 minute avg;
  - The average number of http requests / sec (5 minutes avg) - mgr:info
 as well.
 
  On any other hour, I have something like:
 
 2010-01-25 18:48:19 ; 0.04 ; 19383 ; 9902 ; 29865 ; 96972 ; 4677 ; 131
 ; 59 ; 0.24524 ; 476.871718
 2010-01-25 18:53:29 ; 0.04 ; 18865 ; 8593 ; 30123 ; 179570 ; 4679 ;
 148 ; 62 ; 0.22004 ; 504.424207
 2010-01-25 18:58:38 ; 0.04 ; 18377 ; 9056 ; 29283 ; 99038 ; 4680 ; 174
 ; 61 ; 0.22004 ; 466.659336
 2010-01-25 19:03:49 ; 0.04 ; 18877 ; 9133 ; 28327 ; 181196 ; 4673 ;
 171 ; 57 ; 0.24524 ; 483.558436
 
  So, it takes around 0.04s to get http://www.amazon.com.
 
 2010-01-24 23:46:50 ; 2.53 ; 22723 ; 9861 ; 35012 ; 64752 ; 4306 ;
 166; 70 ; 0.22004 ; 566.364274
 2010-01-24 23:52:04 ; 3.74 ; 21173 ; 10256 ; 33242 ; 167594 ; 4309 ;
 169 ; 68 ; 0.20843 ; 537.758601
 2010-01-24 23:57:20 ; 0.08 ; 18691 ; 9050 ; 29590 ; 65496 ; 4312 ; 138
 ; 71 ; 0.20843 ; 525.119006
 2010-01-25 00:02:29 ; 15.54 ; 18016 ; 8209 ; 29035 ; 149248 ; 4318 ;
 160 ; 82 ; 0.25890 ; 491.615241
 
  As I said, it goes from 0.04 to 15.54s(!) to get a single html file.
 Horrible. After 12:30, everything goes back to normal.
 
  From those variables, I can't seem to find any indication of what can
 be causing this appalling slowdown. The number of squid users doesn't
 go up that much; I just see that the avg time squid reports for
 answering a request goes from 0.20s to 0.25s, and the number of http
 requests/sec actually goes down from 566 to 491...which is kind of odd
 to me. And the number of users using squid stays at around 4300.
 
  I talked to Mr. Dave Dykstra, and he thought it could be I/O delay
 issues. So I tried:
 
 cache_dir null /tmp
 cache_access_log none
 cache_store_log none
 
   But no luck, on midnight tonight again things went wild:
 
 2010-01-25 23:57:03 ; 0.04 ; 24112 ; 11330 ; 37240 ; 74456 ; 3516 ;
 160 ; 58 ; 0.25890 ; 581.047037
 2010-01-26 00:02:15 ; 10.82 ; 25638 ; 11695 ; 38537 ; 177198 ; 3533 ;
 149 ; 78 ; 0.27332 ; 570.312936
 2010-01-26 00:07:38 ; 42.64 ; 23818 ; 11563 ; 38097 ; 88902 ; 3556 ;
 171 ; 70 ; 0.30459 ; 585.880418
 
   From 0.04 to 42 seconds to load the main html page of amazon.com. (!)
 
   Do you have any idea or any other data I can collect to try and
 track down this?
 
   I'm using squid-2.7.stable7, but I'm willing to try squid-3.0 or
 squid-3.1 if you guys think it could help.
 
   I'm using 2 gigabit

Re: [squid-users] Squid performance issues

2010-01-25 Thread Felipe W Damasio
  Hi Mr. John,

2010/1/26 John Lauro john.la...@covenanteyes.com:
 What does the following give:
 uname -a

uname -a:

Linux squid 2.6.29.6 #4 SMP Thu Jan 14 21:00:42 BRST 2010 x86_64
Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux

 While it's being slow, run the following to get some stats:

 vmstat 1 11     ;# Will run for 11 seconds
 iostat -dx 11   ;# Will run for 11 seconds, install sysstat if not found

  I'll run these tonight.

 My first guess is memory swapping, but could be I/O.  The above should help
 narrow it down.

  I thought that, but actually both top and free -m tell me the same thing:

             total       used       free     shared    buffers     cached
Mem:          7979       5076       2903          0          0       4144
-/+ buffers/cache:        931       7047
Swap:         3812          0       3811

  Swap isn't even touched...even when slow.

  But if you think vmstat and iostat can help, I'll run them no problem.

  Thanks,

Felipe Damasio


Re: [squid-users] squid performance

2009-10-18 Thread Luis Daniel Lucio Quiroz
On Monday 12 October 2009 at 10:11:03, Jason Martina wrote:
 Hello,
 
 Well, I'm looking for a better solution than MS ISA proxy. We have 3000
 users that use 4 ISA proxy servers, and it's a management nightmare, so
 I'm going to attempt to use squid+dansguardian. On the squid side of
 things I can't find anything about using it in a large organization, and
 of the users we have, about 1500-2000 hit the proxies at a time.
 They're heavily used by customer service agents, and I would like to use
 ONE server to control it all, so I'm looking for some help or a document
 dealing with larger companies!
 
Hi Jason

I have experience with squid. We've deployed it in an environment with about 
5000 users, in the 1-master / N-slaves configuration you want. If your 
question is whether it is possible with squid: yes, it is. We have achieved 
up to 25% bandwidth savings.

I'm also writing my thesis on a real-time performance algorithm for squid, to 
let it save as much as possible without prior knowledge of changes in 
internet surfing tendencies.

LD


Re: [squid-users] squid performance

2009-10-15 Thread donovan jeffrey j


On Oct 12, 2009, at 11:11 AM, Jason Martina wrote:


Hello,

 Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users that use 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find anything about using it in a large organization, and
of the users we have, about 1500-2000 hit the proxies at a time.
They're heavily used by customer service agents, and I would like to use
ONE server to control it all, so I'm looking for some help or a document
dealing with larger companies!



I run
2 primary transparent/nocache squid + squidguard
2 Authenticated squid cache + squidguard

covering 27 buildings with 2000 staff and 9000 kids, and someone decided to 
give them all laptops one day :)


squid can hang





Re: [squid-users] squid performance

2009-10-15 Thread Mike Rambo

donovan jeffrey j wrote:


On Oct 12, 2009, at 11:11 AM, Jason Martina wrote:


Hello,

 Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users that use 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find anything about using it in a large organization, and
of the users we have, about 1500-2000 hit the proxies at a time.
They're heavily used by customer service agents, and I would like to use
ONE server to control it all, so I'm looking for some help or a document
dealing with larger companies!



I run
2 primary transparent/nocache squid + squidguard
2 Authenticated squid cache + squidguard

covering 27 buildings with 2000 staff and 9000 kids, and someone decided to 
give them all laptops one day :)


squid can hang





We're also a school district (or it sounds like donovan jeffrey j is 
anyway) though a little bit larger.


We have 40 sites and a bit shy of 14000 students. Not sure of staff but 
probably in the 2000 to 3000 range.


We do cache and use squidGuard as the filter but do not authenticate.

Typical traffic is 30M to 35M bps and will burst as high as 55M - or at 
least that is the highest I've seen.


We run two boxes with dual 1.6G processors, 3GB RAM and three ultrawide 
SCSI disks. Multiprocessor boxes are of only limited advantage as the 
main squid process is still single-threaded last I knew. Additional cores 
can run the OS and other squid threads though (disk IO for example).


We have a modest degree of balancing occurring via wpad, but the majority 
of the traffic (~60-70%) is handled by the primary. We use the 2.x 
branch of squid at present. While there are things which can be done 
to optimize performance, RAM is the biggest issue IME. Be sure to have 
plenty. Fast disks for cache and logging are the second. One thing 
usually recommended is to avoid RAID, especially on the cache disks, and 
let squid handle them itself. A mirror of the system disk to ease crash 
recovery is reasonable AFAIK.
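A sketch of that no-RAID approach, one cache_dir per spindle (sizes 
illustrative):

  cache_dir aufs /cache1/squid 60000 16 256
  cache_dir aufs /cache2/squid 60000 16 256
  cache_dir aufs /cache3/squid 60000 16 256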


Note that dansguardian will impose significantly higher hardware demands 
than SG, last I heard. We experimented with DG but never deployed it, so 
that may or may not have changed in the last few years.


HTH.


--
Mike Rambo


[squid-users] squid performance

2009-10-12 Thread Jason Martina
Hello,

 Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users that use 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find anything about using it in a large organization, and
of the users we have, about 1500-2000 hit the proxies at a time.
They're heavily used by customer service agents, and I would like to use
ONE server to control it all, so I'm looking for some help or a document
dealing with larger companies!


Re: [squid-users] squid performance

2009-10-12 Thread Ralf Hildebrandt
* Jason Martina jason.mart...@gmail.com:
 Hello,
 
 Well, I'm looking for a better solution than MS ISA proxy. We have 3000
 users that use 4 ISA proxy servers, and it's a management nightmare, so
 I'm going to attempt to use squid+dansguardian. On the squid side of
 things I can't find anything about using it in a large organization, and
 of the users we have, about 1500-2000 hit the proxies at a time.
 They're heavily used by customer service agents, and I would like to use
 ONE server to control it all, so I'm looking for some help or a document
 dealing with larger companies!

We're running 4 squid + dansguardian proxies for a total of 15,000
client machines. 

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



[squid-users] Squid performance in serving video files?

2009-10-03 Thread Ryan Chan
Hello,

As far as I know, Squid is a single-process, single-threaded program.

So is Squid good for serving large video downloads (e.g. 10MB+)? Will
one download block others?

Thanks.


Re: [squid-users] Squid performance in serving video files?

2009-10-03 Thread Amos Jeffries

Ryan Chan wrote:

Hello,

As far as I know, Squid is a single-process, single-threaded program.

So is Squid good for serving large video downloads (e.g. 10MB+)? Will
one download block others?

Thanks.


No. Two or more files can be served at the same time.
Squid is built in a non-blocking design that does multi-threaded things 
internally without using OS threads.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


Re: [squid-users] Squid performance in serving video files?

2009-10-03 Thread Ryan Chan
Hey

On Sun, Oct 4, 2009 at 10:34 AM, Amos Jeffries squ...@treenet.co.nz wrote:

 No. Two or more files can be served at the same time.
 Squid is built in a non-blocking design that does multi-threaded things
 internally without using OS threads.

 Amos

Is it using epoll?


Re: [squid-users] Squid performance in serving video files?

2009-10-03 Thread Amos Jeffries

Ryan Chan wrote:

Hey

On Sun, Oct 4, 2009 at 10:34 AM, Amos Jeffries squ...@treenet.co.nz wrote:

No. Two or more files can be served at the same time.
Squid is built in a non-blocking design that does multi-threaded things
internally without using OS threads.

Amos


Is it using epoll?


If it's built with that library and none better is available.

pluggable poll, select, dev/poll, epoll, kqueue, and signals for network IO.
pluggable AIO, OIO, pthread, AIOPS and custom helpers for disk IO.

Around a core asynchronous event processing loop.
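For reference, those modules are chosen at build time; an illustrative 
2.7/3.0-era configure line might include:

  ./configure --enable-epoll --enable-storeio=aufs,ufs

Flag availability varies by version; check ./configure --help for your 
release.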

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


[squid-users] Squid Performance Tuning with 0 IO Wait

2009-03-12 Thread Ben Jonston
Hi Everyone,

I am currently doing performance testing with squid 3 and I seem to be
running into some bottlenecks.  I have done exhaustive research
through the squid mail archives, Duane Wessels' O'Reilly book (a great
resource) and other areas.

I have a dual hyperthreading Xeon machine with 8GB of Ram running
CentOS 5.2 with its 2.6.18 based kernel.  This machine is hooked up to
an I/O subsystem that essentially provides no device I/O waits. This
has been confirmed with 'top' showing effectively 0%wa during peak
load periods.

Apart from I/O tuning, what measures should I take to tune squid for
maximum requests per second and cache hit rates?

Any help or pointers would be greatly appreciated.

Best Regards,
Ben Jonston


[squid-users] Squid Performance Tuning with 0% IO Wait

2009-03-10 Thread Ben Jonston
Hi Everyone,

I am currently doing performance testing with squid 3 and I seem to be
running into some bottlenecks.  I have done exhaustive research
through the squid mail archives, Duane Wessels' O'Reilly book (a great
resource) and other areas.

I have a dual hyperthreading Xeon machine with 8GB of Ram running
CentOS 5.2 with its 2.6.18 based kernel.  This machine is hooked up to
an I/O subsystem that essentially provides no device I/O waits. This
has been confirmed with 'top' showing effectively 0%wa during peak
load periods.

Apart from I/O tuning, what measures should I take to tune squid for
maximum requests per second and cache hit rates?

Any help or pointers would be greatly appreciated.

Best Regards,
Ben Jonston


[squid-users] Squid performance regression with recent 2.6.26/2.6.28 kernels

2009-02-19 Thread Apps On The Move
Hello,

I am using a customized Web Polygraph recipe based on polymix4 to
benchmark Squid 2.7STABLE6. With Linux kernel 2.6.23.8 the benchmark
indicates that our hardware will allow for approximately 1000 requests
per second. When the kernel is switched to 2.6.28.5 the benchmark
indicates a likely sustained request rate of 500-600 requests per
second.

A different, but much shorter, benchmark shows that Linux kernels
2.6.24 and 2.6.26 also support a lower request rate than 2.6.23.8.

Has anyone else performed any benchmarks using different Linux kernel
versions, or noticed a decrease in performance with more recent
kernels? If so, are there any settings which need to be changed to
bring performance back in line with 2.6.23 (or earlier kernels).

Regards,

Matthew
--
Apps On The Move -- Applications for the iPhone and iPod Touch
www.appsonthemove.com


Re: [squid-users] Squid performance... RAM or CPU?

2008-07-03 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 18:12 -0500, Carlos Alberto Bernat Orozco wrote:

 Why am I asking this question? Because when I installed squid for 120
 users, the RAM usage went through the roof.

RAM usage is not very dependent on the number of users; it depends more on
how you configure Squid.

There is a whole chapter in the FAQ covering memory usage:
http://wiki.squid-cache.org/SquidFaq/SquidMemory

Where the most important entry is
"How much memory do I need in my Squid server?"
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991
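As a rough worked example of the rule of thumb in that FAQ (numbers are 
illustrative, for a 32-bit build):

  index overhead: ~10-14 MB of RAM per 1 GB of cache_dir
  e.g. 50 GB of disk cache     ->  ~500-700 MB for the index
  plus cache_mem (say 256 MB)  ->  RAM object cache
  plus in-transit objects and general process overhead on top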

Regards
Henrik





[squid-users] Squid performance for 600 users

2008-07-02 Thread Carlos Alberto Bernat Orozco
Hi group

I wonder if a Debian box with 1GB of RAM could run squid to block child
porn sites for approximately 600 users.

Is it possible? Would it work well? Is there a checklist of the
requirements for squid?

Thanks in advance


RE: [squid-users] Squid performance for 600 users

2008-07-02 Thread Jonathan Chretien

http://www.deckle.co.za/squid-users-guide/Installing_Squid

On my side, I run Squid on an HP VL420 P4 1.8 or 2.1GHz, 768MB of RAM and a 
20GB Seagate IDE hard drive, for approximately 125 users.

The maximum load that I got is .70 (1 minute). My CPU sometimes peaks at 40%, 
but most of the time it's running at idle. I have peak times at lunch and 
breaks. My average CPU is approximately 5%, with peaks at 40%. I saw 50-60% 
and a load of .90 at the beginning, when I had some problems with my Apple 
computers and NTLM auth. (I still have this problem but I set up a bypass for 
my Apple computers.)

If I compare with my current setup, for another 150 users, probably 1-1.5GB 
of RAM is important, and probably changing my Seagate IDE hard drive for a 
10k-15k rpm SCSI U160 or U320.

Jonathan




 Date: Wed, 2 Jul 2008 15:52:16 -0500
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid performance for 600 users

 Hi group

 I wonder if a Debian box with 1GB of RAM could run squid to block child
 porn sites for approximately 600 users.

 Is it possible? Would it work well? Is there a checklist of the requirements for squid?

 Thanks in advance

