RE: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-18 Thread Mohsen Dehghani
Ubuntu 12.04


What??
what OS are you using?



Eliezer

On 09/15/2013 09:07 AM, Mohsen Dehghani wrote:
[...]





RE: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-15 Thread Mohsen Dehghani
All workarounds failed except adding 'ulimit -n 65000' to the squid init file.

Adding 'session required pam_limits.so' to /etc/pam.d/common-session also
failed for me.
The box never reads '/etc/security/limits.conf' at boot time.

OK, so there is another thing that I have tested:
/etc/pam.d/common-session doesn't include the limits module by default, so
the admin can enable it as needed and avoid problems.

Adding this line:
session required pam_limits.so

to the common-session file forces the ulimits at PAM session startup and
end.
This forces the bash session (which goes through PAM) to use the limits that
the admin set in limits.conf.
It's not such a good idea to allow users such a thing, but that is the
admin's choice.
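For reference, a minimal sketch of that init-file workaround (a hypothetical excerpt, not the actual Ubuntu init script; the 65000 value comes from this thread, and the squid path is an assumption):

```shell
#!/bin/sh
# Sketch: raise the soft open-file (nofile) limit before the daemon starts,
# clamped to the hard limit so it also works when run as a non-root user.
hard=$(ulimit -H -n)
want=65000
if [ "$hard" != "unlimited" ] && [ "$want" -gt "$hard" ]; then
    want=$hard    # non-root processes cannot exceed the hard limit
fi
ulimit -S -n "$want"
echo "soft nofile limit is now $(ulimit -S -n)"
# ...the real init script would start squid here, e.g. (path assumed):
# /usr/local/squid/sbin/squid
```

Because ulimit applies to the current shell and is inherited by children, placing it before the daemon launch is what makes the limit stick for squid.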

Eliezer




RE: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-14 Thread Mohsen Dehghani
I don't see any logic here. Are you sure your squid is started not by root?
Does replacing 'root' with 'squid' or '*' solve the issue as well?


When I manually start the service as root, there is no file descriptor warning
and squid works normally.
But when the system boots up and starts the service automatically, squid
runs out of FDs.

I've tested the following settings without any luck. Every time
the box reboots, I have to log in and restart the service manually.

root soft nofile 65000
root hard nofile 65000
proxy soft nofile 65000
proxy hard nofile 65000
squid soft nofile 65000
squid hard nofile 65000
* soft nofile 65000
* hard nofile 65000

It seems these settings only work if the user logs in to the system.
My squid user is proxy (I configured it at compile time).

Maybe some useful info:
OS:Ubuntu 12.04

# ulimit -n
65000

# squidclient mgr:info | grep 'file descri'
Maximum number of file descriptors:   65536
Available number of file descriptors: 65527
Reserved number of file descriptors:   100
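One caveat with the checks above: `ulimit -n` in a login shell does not show what a boot-started daemon actually received. A way to inspect a running process directly (demonstrated on the current shell; substituting squid's PID, e.g. via `pidof squid`, is an assumption about the binary name):

```shell
# Read the effective limit of a specific process from /proc (Linux).
pid=$$    # demo: the current shell; for squid use its PID instead
grep 'Max open files' "/proc/$pid/limits"
```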






RE: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-14 Thread Mohsen Dehghani
Oh, no... it is 1024.
Thanks for the help.
Now I added 'ulimit -n 65000' to the squid init file and the problem is
resolved. But some questions:

1. Why is it 1024, while I set 65535 FDs at compile time and the squid user
(proxy) has that much allowed in the limits.conf file?
2. Is it OK to increase the FD limit in this way?
3. Apparently, according to 'cat /proc/sys/fs/file-max', my OS FD limit is
400577. Can I increase squid's FD limit to that?
4. What is the best FD limit for about 150 Mbps of bandwidth and 18000
requests per minute?

This could be an ugly troubleshooting practice, but you can try to modify your
init script (or upstart job; not sure how exactly squid is being started in
Ubuntu). The idea is to add 'ulimit -n > /tmp/squid.descriptors' and see if
the number is really 65k.
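A minimal sketch of that debug step, using the same file name as the thread (the redirection writes out the limit the boot environment actually grants):

```shell
# Inside the init script, record the limit the service would inherit:
ulimit -n > /tmp/squid.descriptors
# After boot, check whether it is really 65k or the default 1024:
cat /tmp/squid.descriptors
```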

On 09/14/2013 09:41 AM, Mohsen Dehghani wrote:
[...]









[squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-10 Thread Mohsen Dehghani
I have compiled and installed squid 3.3.8.
I have about 160 Mbps of bandwidth and about 18000 HTTP requests per minute.
The problem is that as soon as I redirect traffic to squid, its CPU usage
reaches 100% and it hangs; even squidclient will not work.

What is weird is that when I remove traffic from squid, CPU usage does not
go down immediately, but with a delay of about 2 or 3 minutes!
In version 3.1.19 (which I was using previously), as soon as I remove
traffic from squid, its CPU usage goes down. By the way, in 3.1.19 CPU usage
never exceeded 30%.

When I debug, I see some lines saying: WARNING! Your cache is running out
of filedescriptors

I don't know whether this is the cause or not, but I've already compiled squid
with 65536 file descriptors.

I have disabled disk swap for testing, but the problem still exists.

Any help is appreciated.

I have attached my compile options and config.

___

#squid -v
Squid Cache: Version 3.3.8
configure options:  '--prefix=/usr/local/squid' '--build=x86_64-linux-gnu'
'--enable-storeio=ufs,aufs,diskd' '--enable-follow-x-forwarded-for'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux-gnu' --enable-ltdl-convenience


###config:
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

shutdown_lifetime 3 second
wccp2_router 172.22.122.254
wccp_version 2
wccp2_rebuild_wait on
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
# wccp2_service standard 0
wccp2_service dynamic 80
wccp2_service dynamic 90
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

http_port 3129 tproxy 
qos_flows local-hit=0x18
cache_mem 2000 MB
maximum_object_size_in_memory 10 MB

access_log none

snmp_port 3401
acl snmppublic snmp_community golabi
snmp_access allow snmppublic trusted
http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128

coredump_dir /usr/local/squid/var/cache/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320




RE: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-10 Thread Mohsen Dehghani
Thanks everybody.

[The problem is resolved]
After adding the following lines to /etc/security/limits.conf:
root soft nofile 6
root hard nofile 6

But I am eager to know the rationale behind it, since squid runs as user
proxy, not root.


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Tuesday, September 10, 2013 9:10 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] 100% CPU Load problem with squid 3.3.8

It seems like an endless loop that causes this effect.
I don't think it's related in any way to squid directly, but more to the
setup.

If we can bring some real order to this and debug the problem, I will be happy
to try to assist you.

Please share your iptables rules, routes, 'ulimit -Sa', 'ulimit -Ha' and any
other information you think is relevant to the cause and the solution.

If you can, also share the output of 'uname -a', 'cat /proc/cpuinfo | grep model',
'free -m', 'vmstat', and the next script:
PID=`ps aux | grep squid-1 | grep -v grep | awk '{print $2}'`; ls /proc/$PID/fd/ | wc

which should show how many FDs are being used by the first squid process.
Let's try our best and see what the result is.

We might well find the reason on the first or second try.
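For what it's worth, the FD-counting one-liner can be written a little more defensively (assuming `pgrep` is available; the fallback to the current shell's PID is only so the sketch runs anywhere):

```shell
# Count open file descriptors of the first matching squid process.
pid=$(pgrep -o squid 2>/dev/null || echo $$)
ls "/proc/$pid/fd" | wc -l
```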

Eliezer
On 09/10/2013 11:34 AM, Mohsen Dehghani wrote:
[...]





RE: [squid-users] which version of squid is more stable and less cpu greedy?

2013-09-07 Thread Mohsen Dehghani
Um. Are you swapping between versions and using the same cache_dir?

Oh, nope.
This is the testing process:

Install Ubuntu 12.04 -> create SNAPSHOT_1 -> install 3.1.19 -> put
traffic on it (150 Mbps) => 30% CPU, without any drop in the bandwidth
graph.
Roll back to SNAPSHOT_1 -> install 3.3.8 -> put
traffic on it (150 Mbps) => 100% CPU, with a sudden drop in the bandwidth graph.
BTW, I am not using any cache_dir; I just use memory swap...

It is now clear to me that 3.3.8 cannot cope with the traffic due to:
1. my wrong configuration, 2. my wrong compilation, 3. OR some problem
in squid.
I am madly eager to solve the problem...

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Saturday, August 31, 2013 12:20 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] which version of squid is more stable and less
cpu greedy?

On 31/08/2013 7:45 p.m., Mohsen Dehghani wrote:
 I have to say that two versions will be installed on the same VM. I 
 create a snapshot and install 3.1.19=about 10% CPU. And then go back 
 to that snapshot and compile and install 3.3.8=99%CPUI don't know 
 how to figure out the cause :(

Um. Are you swapping between versions and using the same cache_dir?

3.3 does an automatic upgrade of the swap.state file when it is started with
one created by 3.1. That involves a lot of CPU scanning the disk files for a
while.

Amos





RE: [squid-users] which version of squid is more stable and less cpu greedy?

2013-08-31 Thread Mohsen Dehghani

I have to say that the two versions were installed on the same VM. I create a
snapshot and install 3.1.19 => about 10% CPU. Then I go back to that snapshot
and compile and install 3.3.8 => 99% CPU. I don't know how to figure out the
cause :(

 Hi team

 I am planning to install a new squid from scratch. which version is 
 more stable

The latest.

   and less cpu greedy?

Depends on the configuration. But the latest version can generally do more
with less CPU in most situations.

 I have installed both 3.1.19 (the default Ubuntu repository version)
 and 3.3.8.
 On the same machine with the same config and the same load (about 60 Mbps),
 I have far less CPU usage on 3.1.19 than on 3.3.8 (10% compared to 99%),
 so what is the cause?

Faster request rate in the newer one?
   more requests == more CPU spent processing them

Scanning the disk caches?
   lots of disk I/O == lots of CPU context switching

Cache garbage collection?
   lots of disk I/O == lots of CPU context switching

Those are the only known causes of high CPU load under traffic.

 There are some features in the newer versions that I need, so I had to
 move to a newer version...
 Which version of squid is squid-users' preference?

Put another way: is CPU usage enough to make anyone prefer to run a version
with a few hundred known bugs and lacking several major HTTP features?
We have enough of a cross-section of users that you will get a very mixed
response there.

Amos




[squid-users] how to use tproxy + iptables load balancing

2013-08-31 Thread Mohsen Dehghani
Hi team
I am planning to install multiple instances of squid on a machine as a
frontend. Tproxy is now working fine on a single-instance machine.
Now I want to run multiple instances and use this guide to load-balance
between them:

http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend#Frontend_Balancer_Alternative_1:_iptables

Does anybody have any experience doing this?

I think the key rule in tproxy iptables is:
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark
0x1/0x1 --on-port 3129

I have tried several ways of combining this with the example, without any
success.
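For discussion, here is one hedged sketch of how that TPROXY rule might be combined with per-instance balancing, roughly in the spirit of the wiki page above; the DIVERT chain, the 3129/3130 ports, and the two-way `statistic` split are illustrative assumptions, not a tested configuration:

```shell
# Return packets of already-tproxied connections to the owning socket.
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 0x1/0x1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

# Split new port-80 connections between two squid instances: every 2nd
# packet that reaches this rule goes to the instance on 3130, the rest
# fall through to the instance on 3129.
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -m statistic --mode nth --every 2 --packet 0 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3130
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
```

The `statistic` match here would balance roughly per connection only because TPROXY mainly sees initial SYNs once the `socket` match is diverting established flows; whether this interacts cleanly with WCCP return traffic is exactly the kind of thing that would need verifying on the list.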

thanks





[squid-users] which version of squid is more stable and less cpu greedy?

2013-08-29 Thread Mohsen Dehghani
Hi team

I am planning to install a new squid from scratch. Which version is more
stable and less CPU-greedy?
I have installed both 3.1.19 (the default Ubuntu repository version) and
3.3.8.
On the same machine with the same config and the same load (about 60 Mbps),
I have far less CPU usage on 3.1.19 than on 3.3.8 (10% compared to 99%).
So what is the cause?
There are some features in the newer versions that I need, so I had to move
to a newer version...
Which version of squid is squid-users' preference?

thanks




RE: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

2013-08-27 Thread Mohsen Dehghani
/shellinabox/ - HIER_DIRECT/74.125.236.164 text/html
1377506624.743  61038 178.173.12.70 TCP_MISS/503 4150 GET
http://www.tucny.com/favicon.ico - HIER_DIRECT/74.125.135.121 text/html
1377506625.548 240492 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/33be8eebf9ff1375eecabb6d45bb84f0/? -
HIER_DIRECT/72.233.69.5 text/html
1377506625.744 240688 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/10c08133f930b023f8a29f7aca903ade/? -
HIER_DIRECT/72.233.69.4 text/html
1377506625.744 240687 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/bbafaf9e10ccbeadb05132f0907eef62/? -
HIER_DIRECT/72.233.69.4 text/html
1377506629.328  59995 178.173.12.70 TCP_MISS_ABORTED/000 0 GET
http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10 -
1377506633.748 240973 178.173.12.70 TCP_MISS/503 7081 GET
http://cisco.112.2o7.net/b/ss/cisco-us,cisco-usprodswitches/1/H.24.3/s641795
77133309? - HIER_DIRECT/66.235.132.232 text/html
1377506674.091  0 :: TCP_DENIED/403 3788 GET
http://backend-kid2:4002/squid-internal-periodic/store_digest - HIER_NONE/-
text/html
1377506675.522  59980 178.173.12.70 TCP_MISS/503 4048 GET
http://wiki.squid-cache.org/favicon.ico - HIER_DIRECT/77.93.254.178
text/html
1377506680.531  59983 178.173.12.70 TCP_MISS/503 4053 GET
http://www.web-polygraph.org/favicon.ico - HIER_DIRECT/209.169.10.130
text/html
1377506687.797  61064 178.173.12.70 TCP_MISS/503 4920 GET
http://beacon-1.newrelic.com/1/c7e812077e? - HIER_DIRECT/50.31.164.168
text/html
1377506690.518  61188 178.173.12.70 TCP_MISS/503 4163 GET
http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10
text/html
1377506734.092  0 :: TCP_DENIED/403 3788 GET
http://backend-kid3:4003/squid-internal-periodic/store_digest - HIER_NONE/-
text/html
1377506740.804 180166 178.173.12.70 TCP_MISS/503 4044 GET
http://packages.debian.org/favicon.ico - HIER_DIRECT/82.195.75.113 text/html
1377506863.961 241103 178.173.12.70 TCP_MISS/503 4951 GET
http://code.google.com/favicon.ico - HIER_DIRECT/74.125.236.166 text/html
##

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 28, 2013 9:55 AM
To: Mohsen Dehghani
Subject: Re: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

On 24/08/2013 6:26 p.m., Mohsen Dehghani wrote:
 Thanks
 But my bandwidth is going to be extended to 2 Gbps. Do workers still
 perform better than multiple instances?

I'm not sure of the answer to that one, sorry. You are in a quite select
group at present, dealing with Gbps traffic rates.
(If you understand Eliezer's earlier response it sounds good, though I'm not
sure I understand the specifics myself yet.)

Amos




[squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

2013-08-20 Thread Mohsen Dehghani

Hi team

I have already implemented tproxy + L2 WCCP and it works perfectly except for
one thing: squid uses just one CPU core, and the other cores on a Dell R710
are wasted.
I have about 140 Mbps of traffic and it utilizes 50% of one core. So I decided
to run squid on multiple CPUs using this guide:

http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

I noticed that the backend receives the requests with the IP address of the
frontend (127.0.0.1).
As my squid machine does not have any public IP (I just used tproxy before),
it cannot fetch the request and forward it to the frontend. It means the
backend does not spoof the client IP.

My question is: how can I force the backend to use the client IP address to
fetch requests from internet servers?

My squid version is 3.3.8.
My machine does not have any public IP.