Re: [squid-users] Squid 3.3.3 issues...

2013-03-15 Thread John Doe
From: Amos Jeffries squ...@treenet.co.nz

 On 15/03/2013 2:59 a.m., John Doe wrote:
  So, I removed workers... but squid keeps dying after 1 or 2 minutes...
     assertion failed: comm.cc:1838: isOpen(fd) && !commHasHalfClosedMonitor(fd)
 
  Any idea how I can find out what caused it?
 
 A debugger backtrace listing how that line in comm.cc was reached, and a 
 printout of what the fd_table array says about the FD entry at the frame 
 above 
 the assertion. (print fd_table[fd])
 Details on how to get all that are in 
 http://wiki.squid-cache.org/SquidFaq/BugReporting
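
For reference, a gdb session along those lines might look roughly like this (a
sketch; the binary and core paths, the frame number and the fd variable name
depend on your build and on the actual backtrace):

  # gdb /usr/local/squid/sbin/squid /path/to/core
  (gdb) bt                    # full backtrace showing how comm.cc:1838 was reached
  (gdb) frame 1               # move to the frame above the failed assertion
  (gdb) print fd              # the file descriptor being checked
  (gdb) print fd_table[fd]    # dump the fd_table entry for that descriptor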

Ok, I will have to set up a test environment when I get the time...

Thx,
JD


[squid-users] which kernel is best for squid 3.xx with centos 6.3 with youtube caching ?

2013-03-15 Thread Ahmad
hi guys,
I have installed CentOS 6.3 and recompiled it with kernel 3.7.5 with TPROXY support.
I want to say that squid 3.1 with kernel 3.7.5 is good, and I can cache about 25%.
Recently I purchased a third-party product called videocache; it can be used to
cache the youtube videos.
--
But I am facing some problems when using videocache with squid.
The problems are as follows:
1- squidguard starts to be bypassed during the rush hours, although I chained it
with videocache!
2- the performance of videocache is bad; it just makes about 5 M at max.

I monitored the sessions that videocache has open, with the command
#netstat -ant  | grep 

note that the port is the port of the Apache instance that videocache uses.
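
To put numbers on that, one quick way to count connection states on that port is
something like the following (a sketch; replace 8081 with whatever port the
Apache instance used by videocache actually listens on):

#netstat -ant | grep ':8081 ' | awk '{print $6}' | sort | uniq -c

This prints how many sockets are in each state (ESTABLISHED, TIME_WAIT, ...).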


I found that most of videocache's sessions are in TIME_WAIT, and only a few are
ESTABLISHED.
I had a look at the wiki below, which says that some old kernels can cause
TIME_WAIT problems:
http://wiki.squid-cache.org/Features/Tproxy4#Linux_Kernel_

I am not sure if the problem comes from my kernel 3.7.5!
But as I mentioned, the old kernels
2.6.28 to 2.6.36 are known to have ICMP and TIME_WAIT issues.

I am not sure whether the videocache problem comes from the kernel or from
something else.
I wish someone who has used CentOS with videocache without problems would tell
me which kernel gave him the best performance and how much bandwidth he could
save.

Note that I currently have about 3000 users and I can cache about 25%; I think
that if I added videocache I could cache more?


Any suggestions on the issue above?


with my best regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/which-kernel-is-best-for-squid-3-xx-with-centos-6-3-with-youtube-caching-tp4658999.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.2 SMP Problem

2013-03-15 Thread Ahmad
hi all,
I have added the SMP configuration to my squid.conf; I am using squid 3.3.3.

My question is: how do I monitor performance and make sure that my otherwise
idle cores are being used?

here is what i modified :
smp options###
# Custom options
memory_cache_shared off
#workers 2
#
workers 4
cache_dir rock /squid-cache/rock-1 3000 max-size=31000 max-swap-rate=250
swap-timeout=350
cache_dir rock /squid-cache/rock-2 3000 max-size=31000 max-swap-rate=250
swap-timeout=350
=
I also want to ask about another issue:
can using SMP delay Squid's startup? I mean, does the cache rebuild process
take more time when using SMP?
Hoping for some help.

regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-2-SMP-Problem-tp4658906p4659000.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Squid 3.3.2 SMP Problem

2013-03-15 Thread Alexandre Chappaz
Hi,

Which OS do you recommend for running an SMP-enabled version of Squid?
Is there a particular kernel version to use or to avoid?
Do you have more specific tuning settings for shm, IPC and UDS
sockets, other than the hints from
http://wiki.squid-cache.org/Features/SmpScale#Troubleshooting

Regards
Alex

2013/3/15 Ahmad ahmed.za...@netstream.ps:
 hi all,
 I have added the SMP configuration to my squid.conf; I am using squid 3.3.3.

 My question is: how do I monitor performance and make sure that my otherwise
 idle cores are being used?

 here is what i modified :
 smp options###
 # Custom options
 memory_cache_shared off
 #workers 2
 #
 workers 4
 cache_dir rock /squid-cache/rock-1 3000 max-size=31000 max-swap-rate=250
 swap-timeout=350
 cache_dir rock /squid-cache/rock-2 3000 max-size=31000 max-swap-rate=250
 swap-timeout=350
 =
 I also want to ask about another issue:
 can using SMP delay Squid's startup? I mean, does the cache rebuild process
 take more time when using SMP?
 Hoping for some help.

 regards



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-2-SMP-Problem-tp4658906p4659000.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid process crash every day, why?

2013-03-15 Thread Feusi Remo (feus)
Hi, I am new to the mailing list and have the following issue:

Our Squid crashes every night between 01:00 and 02:00 CET.

Mar 15 01:50:33 srv-app-904 (squid-1): Bungled (null) line 8: icap_retry deny 
all
Mar 15 01:50:35 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 3592 
exited with status 1
Mar 15 01:50:38 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 4119 
started
Mar 15 01:51:32 srv-app-904 (squid-1): Bungled (null) line 8: icap_retry deny 
all
Mar 15 01:51:32 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 4119 
exited with status 1
Mar 15 01:51:35 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 4141 
started
Mar 15 02:13:43 srv-app-904 (squid-1): Bungled (null) line 8: icap_retry deny 
all
Mar 15 02:13:43 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 4141 
exited with status 1
Mar 15 02:13:46 srv-app-904 squid[3589]: Squid Parent: (squid-1) process 4193 
started

The cron log shows this:
Mar 15 01:01:01 srv-app-904 CROND[4079]: (root) CMD (run-parts /etc/cron.hourly)
Mar 15 01:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4079]: starting 0anacron
Mar 15 01:01:01 srv-app-904 anacron[4090]: Anacron started on 2013-03-15
Mar 15 01:01:01 srv-app-904 anacron[4090]: Jobs will be executed sequentially
Mar 15 01:01:01 srv-app-904 anacron[4090]: Normal exit (0 jobs run)
Mar 15 01:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4092]: finished 0anacron
Mar 15 01:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4079]: starting 
mcelog.cron
Mar 15 01:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4100]: finished 
mcelog.cron
Mar 15 01:10:01 srv-app-904 CROND[4105]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 01:20:01 srv-app-904 CROND[4108]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 01:30:01 srv-app-904 CROND[4111]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 01:40:01 srv-app-904 CROND[4114]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 01:50:01 srv-app-904 CROND[4117]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:00:01 srv-app-904 CROND[4164]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:01:01 srv-app-904 CROND[4167]: (root) CMD (run-parts /etc/cron.hourly)
Mar 15 02:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4167]: starting 0anacron
Mar 15 02:01:01 srv-app-904 anacron[4178]: Anacron started on 2013-03-15
Mar 15 02:01:01 srv-app-904 anacron[4178]: Jobs will be executed sequentially
Mar 15 02:01:01 srv-app-904 anacron[4178]: Normal exit (0 jobs run)
Mar 15 02:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4180]: finished 0anacron
Mar 15 02:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4167]: starting 
mcelog.cron
Mar 15 02:01:01 srv-app-904 run-parts(/etc/cron.hourly)[4188]: finished 
mcelog.cron
Mar 15 02:10:01 srv-app-904 CROND[4191]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:20:01 srv-app-904 CROND[4216]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:30:01 srv-app-904 CROND[4219]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:40:01 srv-app-904 CROND[4222]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 02:50:01 srv-app-904 CROND[4225]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 15 03:00:01 srv-app-904 CROND[4228]: (root) CMD (/usr/lib64/sa/sa1 1 1)

Log rotation shows this:
Mar 15 03:10:01 srv-app-904 run-parts(/etc/cron.daily)[4253]: starting logrotate
Mar 15 03:10:17 srv-app-904 run-parts(/etc/cron.daily)[4280]: finished logrotate

And here is the interesting part from cache.log (note the negative values in the
memory usage):
2013/03/15 01:50:33 kid1| Closing HTTP port 160.85.104.14:8080
2013/03/15 01:50:33 kid1| storeDirWriteCleanLogs: Starting...
2013/03/15 01:50:33 kid1| 65536 entries written so far.
2013/03/15 01:50:33 kid1| 131072 entries written so far.
2013/03/15 01:50:33 kid1| 196608 entries written so far.
2013/03/15 01:50:33 kid1| 262144 entries written so far.
2013/03/15 01:50:33 kid1| 327680 entries written so far.
2013/03/15 01:50:33 kid1| 393216 entries written so far.
2013/03/15 01:50:33 kid1| 458752 entries written so far.
2013/03/15 01:50:33 kid1| 524288 entries written so far.
2013/03/15 01:50:33 kid1| 589824 entries written so far.
2013/03/15 01:50:33 kid1| 655360 entries written so far.
2013/03/15 01:50:33 kid1| 720896 entries written so far.
2013/03/15 01:50:33 kid1| 786432 entries written so far.
2013/03/15 01:50:33 kid1|   Finished.  Wrote 799896 entries.
2013/03/15 01:50:33 kid1|   Took 0.16 seconds (5155895.89 entries/sec).
FATAL: Bungled (null) line 8: icap_retry deny all
Squid Cache (Version 3.2.8): Terminated abnormally.
CPU Usage: 1200.378 seconds = 742.532 user + 457.845 sys
Maximum Resident Size: 10759744 KB
Page faults with physical i/o: 12
Memory usage for squid via mallinfo():
total space in arena:  -1523816 KB
Ordinary blocks:   -1587068 KB  98685 blks
Small blocks:   0 KB  1 blks
Holding blocks: 40796 KB 10 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   63251 KB
Total in use:  -1546272 KB 101%
Total free:   

[squid-users] infos for outlook anywhere - linux frontend server for exchange

2013-03-15 Thread Clem
Hello,

I'm coming back to you just to tell you that I've found an alternative to Squid
for a Linux front-end server in front of an Exchange 2007 or 2010 CAS server.
It works with OWA, ActiveSync and Outlook Anywhere in NTLM, and it's free.

I was never able to make Squid work for that configuration; in fact everything
worked except RPC over HTTPS (Outlook Anywhere with NTLM auth), so I gave up
on Squid.

Recently I checked whether someone had found a solution, and I found this page:
http://www.stevieg.org/e2010haproxy/

I tried it, and it works like a charm. I don't know exactly how it works; it is
a load-balancing setup, you don't even need to install the Exchange server's
certificates, it relays all requests directly to Exchange. And that is exactly
what I wanted!

[INTERNET] ←→ [HAPROXY PRE-CONFIGURED VM - IN DMZ] ←→ [EXCHANGE SERVER -
IN LAN]

Hope that can help.

I'll leave this mailing list now. Squid is a very good program; thanks for all
you did and for helping with my research.

Kind regards


Clem



[squid-users] Re: which kernel is best for squid 3.xx with centos 6.3 with youtube caching ?

2013-03-15 Thread babajaga
25% ... is that byte-hitrate?
If YES: not very impressive.

I have a self-made cache, especially for youtube, and get between 30%-35%
byte-hitrate daily, with about 20GB/day of traffic, incl. cached data.
However, 4TB of disk space is in use.

2- the performance of videocache is bad; it just makes about 5 M at max.


What does this mean?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/which-kernel-is-best-for-squid-3-xx-with-centos-6-3-with-youtube-caching-tp4658999p4659004.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Error with squid-reports (sarg)

2013-03-15 Thread Gerardo J. Leonardo

Hello there masters, has anyone here encountered an issue like the one below?

Error:

sarg-reports today
SARG: getword backtrace:
SARG: 1:/usr/bin/sarg() [0x804e1fc]
SARG: 2:/usr/bin/sarg() [0x804e3f9]
SARG: 3:/usr/bin/sarg() [0x80541d5]
SARG: 4:/lib/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb75b0ca6]
SARG: 5:/usr/bin/sarg() [0x8049fb1]
SARG: Maybe you have a broken date in your /var/log/squid3/access.log file
SARG: getword_atoll loop detected after 0 bytes.
SARG: Line=mage/jpeg
SARG: Record=mage/jpeg
SARG: searching for 'x2f'

squid version:
Squid Cache: Version 3.2.6
configure options:  '--build=i486-linux-gnu' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' 
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3' 
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' 
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' 
'--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-icmp' 
'--enable-removal-policies=lru,heap' '--enable-x-accelerator-vary' 
'--disable-ipv6' '--enable-zph-qos' '--enable-ecap' 
'--disable-internal-dns' '--enable-ssl-crtd' '--enable-forw-via-db' 
'--enable-ssl' '--enable-stacktraces' '--enable-delay-pools' 
'--enable-cache-digests' '--enable-underscores' 
'--disable-ident-lookups' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' '--enable-esi' '--disable-translation' 
'--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' 
'--with-filedescriptors=65536' '--with-large-files' 
'--with-default-user=proxy' '--enable-linux-netfilter' 
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS=' 
'CPPFLAGS=' 'CXXFLAGS=-g -march=pentium4m -mtune=pentium4m -O2 -g -Wall -O2'


sarg version:
2.3.1-1~bpo60+1

Sample access log:
1361338483.586   8160 10.10.10.10 TCP_MISS/200 590 GET 
http://5-act.channel.facebook.com/pull? - HIER_DIRECT/69.171.246.16 
text/plain
1361338483.595  1 10.10.10.10 TCP_MEM_HIT/200 607 GET 
http://i2.ytimg.com/crossdomain.xml - HIER_NONE/- text/x-cross-domain-policy


--
Gerardo J. Leonardo



Think before you print!



[squid-users] Re: which kernel is best for squid 3.xx with centos 6.3 with youtube caching ?

2013-03-15 Thread Ahmad
babajaga wrote
 25% ... is that byte-hitrate?
 If YES: not very impressive.

 I have a self-made cache, especially for youtube, and get between 30%-35%
 byte-hitrate daily, with about 20GB/day of traffic, incl. cached data.
 However, 4TB of disk space is in use.

2- the performance of videocache is bad; it just makes about 5 M at max.

 What does this mean?

hi babajaga, thanks for the reply,
but I'm sorry to ask: what do you mean by byte-hitrate?
Again,
my question is about the kernel: can a new kernel cause problems with 3rd
parties like videocache?
I am using kernel 3.7.5.
The original kernel of centos 6.3 is 3.2.x as I remember; I mean I had to
rebuild it with TPROXY support.
I said that with this squid config I can save about 25% of bandwidth and
squidguard is excellent.
===
About the bad performance of videocache: I mean that because most of the
connections are in TIME_WAIT, not ESTABLISHED, the performance of youtube is
bad, and it seems as if I am not using videocache at all; I monitored the
videocache cache and it only makes 5 M - 10 M at maximum!

I mean that I am only caching 25% whether videocache is absent or present.

regards





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/which-kernel-is-best-for-squid-3-xx-with-centos-6-3-with-youtube-caching-tp4658999p4659007.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Dynamic content caching in Squid 3.2 vs 3.1

2013-03-15 Thread Jon Schneider
Thanks for the response Amos, 

I think I was so focused on trying to figure out why 3.2 was responding 
differently from 3.1 that I didn't take note of the odd Last-Modified date 
returned in the previous example.  I have included a clearer example below, 
with the cache settings remaining the same, just a different URL.  I hit each 
Squid server a few times to give it the opportunity to cache.  In this example 
all the dates and times make sense, and I believe the 3.2 instance should be 
caching the object, but it isn't.

Origin Server:
[root@cache1 portalsquid]# siege -b -r 1 -c 1 -g 
http://testsite.domain.com/__utm.js.aspx
HTTP/1.1 200 OK
Cache-Control: public, max-age=7200
Content-Type: text/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Fri, 15 Mar 2013 17:35:17 GMT
Last-Modified: Fri, 15 Mar 2013 15:35:17 GMT
ETag: 71B76C2B36A7E48318E27D6B5ED98F3A
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Server: IISSVR
X-Powered-By: ASP.NET
Date: Fri, 15 Mar 2013 15:35:17 GMT
Connection: close
Content-Length: 6157

Squid 3.1
[root@cache1 portalsquid]# siege -b -r 1 -c 1 -g 
http://testsite.domain.com/__utm.js.aspx
HTTP/1.0 200 OK
Cache-Control: public, max-age=7200
Content-Type: text/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Fri, 15 Mar 2013 17:35:42 GMT
Last-Modified: Fri, 15 Mar 2013 15:35:42 GMT
ETag: 71B76C2B36A7E48318E27D6B5ED98F3A
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Server: IISSVR
X-Powered-By: ASP.NET
Date: Fri, 15 Mar 2013 15:35:42 GMT
Content-Length: 6157
Age: 2
X-Cache: HIT from cache2.domain.com
X-Cache-Lookup: HIT from cache2.domain.com:80
Connection: close

Squid 3.2
[root@cache1 portalsquid]# siege -b -r 1 -c 1 -g 
http://testsite.domain.com/__utm.js.aspx
HTTP/1.1 200 OK
Cache-Control: public, max-age=7200
Content-Type: text/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Fri, 15 Mar 2013 17:34:38 GMT
Last-Modified: Fri, 15 Mar 2013 15:34:38 GMT
ETag: 71B76C2B36A7E48318E27D6B5ED98F3A
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Server: IISSVR
X-Powered-By: ASP.NET
Date: Fri, 15 Mar 2013 15:34:38 GMT
Content-Length: 6157
X-Cache: MISS from cache1.domain.com
X-Cache-Lookup: MISS from cache1.domain.com:80
Connection: close

Cache.log from 3.2 instance:

2013/03/15 09:34:38.323 kid1| TcpAcceptor.cc(190) doAccept: New connection on 
FD 12
2013/03/15 09:34:38.323 kid1| TcpAcceptor.cc(265) acceptNext: connection on 
local=192.168.5.183:80 remote=[::] FD 12 flags=9
2013/03/15 09:34:38.324 kid1| client_side.cc(2298) parseHttpRequest: HTTP 
Client local=192.168.5.183:80 remote=192.168.5.183:45831 FD 19 flags=1
2013/03/15 09:34:38.324 kid1| client_side.cc(2299) parseHttpRequest: HTTP 
Client REQUEST:
-
GET /__utm.js.aspx HTTP/1.1^M
Host: testsite.domain.com^M
Accept: */*^M
Accept-Encoding: gzip^M
User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.72)^M
Connection: close^M
^M

--
2013/03/15 09:34:38.324 kid1| client_side_request.cc(760) 
clientAccessCheckDone: The request GET http://testsite.domain.com/__utm.js.aspx 
is 1, because it matched 'all'
2013/03/15 09:34:38.324 kid1| client_side_request.cc(734) clientAccessCheck2: 
No adapted_http_access configuration. default: ALLOW
2013/03/15 09:34:38.324 kid1| client_side_request.cc(760) 
clientAccessCheckDone: The request GET http://testsite.domain.com/__utm.js.aspx 
is 1, because it matched 'all'
2013/03/15 09:34:38.324 kid1| forward.cc(103) FwdState: Forwarding client 
request local=192.168.5.183:80 remote=192.168.5.183:45831 FD 19 flags=1, 
url=http://testsite.domain.com/__utm.js.aspx
2013/03/15 09:34:38.324 kid1| peer_select.cc(271) peerSelectDnsPaths: Find IP 
destination for: http://testsite.domain.com/__utm.js.aspx' via 
origin.portal.domain.com
2013/03/15 09:34:38.325 kid1| peer_select.cc(271) peerSelectDnsPaths: Find IP 
destination for: http://testsite.domain.com/__utm.js.aspx' via 
origin.portal.domain.com
2013/03/15 09:34:38.325 kid1| peer_select.cc(298) peerSelectDnsPaths: Found 
sources for 'http://testsite.domain.com/__utm.js.aspx'
2013/03/15 09:34:38.325 kid1| peer_select.cc(299) peerSelectDnsPaths:   
always_direct = 0
2013/03/15 09:34:38.325 kid1| peer_select.cc(300) peerSelectDnsPaths:
never_direct = 0
2013/03/15 09:34:38.325 kid1| peer_select.cc(309) peerSelectDnsPaths:  
cache_peer = local=0.0.0.0 remote=192.168.5.20:80 flags=1
2013/03/15 09:34:38.325 kid1| peer_select.cc(309) peerSelectDnsPaths:  
cache_peer = local=0.0.0.0 remote=192.168.5.20:80 flags=1
2013/03/15 09:34:38.325 kid1| peer_select.cc(311) peerSelectDnsPaths:
timedout = 0
2013/03/15 09:34:38.325 kid1| http.cc(2177) sendRequest: HTTP Server 
local=192.168.5.183:51705 remote=192.168.5.20:80 FD 23 flags=1
2013/03/15 09:34:38.325 kid1| http.cc(2178) sendRequest: HTTP Server REQUEST:
-
GET /__utm.js.aspx HTTP/1.1^M
Host: testsite.domain.com^M
Accept: */*^M
Accept-Encoding: gzip^M
User-Agent: 

RE: [squid-users] Squid process crash every day, why?

2013-03-15 Thread Tim Duncan
FATAL: Bungled (null) line 8: icap_retry deny all
Squid Cache (Version 3.2.8): Terminated abnormally.


What does "squid3 -v" show?

Did you ./configure squid using --enable-icap-client?
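
A quick way to check (a sketch; adjust the binary name/path to your install):

# squid3 -v | tr ' ' '\n' | grep icap

If ICAP support was compiled in, this should print the '--enable-icap-client'
configure option.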





[squid-users] ivp6 is required to use SMP ?

2013-03-15 Thread Paul Messina
Hello,
I have IPv6 disabled, and when I added the workers option the processes are
running but none of them listens on 3128,
and I get the message "commBind: Cannot bind socket FD 12 to [::]:
(13) Permission denied" once per worker.


Paul


[squid-users] Re: which kernel is best for squid 3.xx with centos 6.3 with youtube caching ?

2013-03-15 Thread babajaga
Squid measures hitrate and byte-hitrate. Hitrate is the % of objects fetched
from cache. Byte-hitrate is the amount of bytes fetched from cache versus the
total amount of traffic.
As an example: you have 100MB of traffic, which consists of 1 video of 50MB
and 99 objects of 0.5MB each. When all the small objects are fetched from
cache, you have a hit-rate of 99%, but a byte-hitrate of only about 50%.
When only the video is in the cache, the hit-rate is only 1%, but the
byte-hitrate is also about 50%.
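
Spelling that example out with the numbers above (the totals are approximate,
since 50MB + 99 x 0.5MB = 99.5MB):

  hitrate       = objects served from cache / total objects
  byte-hitrate  = bytes served from cache   / total bytes

  small objects cached:  99/100 = 99% hitrate,  49.5/99.5 ~= 50% byte-hitrate
  only the video cached:  1/100 =  1% hitrate,  50.0/99.5 ~= 50% byte-hitrate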

So, regarding the caching of videos, my hit-rate is only about 10%. But the
byte-hitrate is between 30%-35%, which is the amount of traffic saved.
For me that is about 6GB/day, i.e. roughly 1/3 of 20GB/day.

I monitored the videocache cache and it only makes 5 M - 10 M at maximum!

You mean, only 5MBit/s-10MBit/s of traffic being handled?
That's not good.

TIME_WAIT is a connection state after closing; having a lot of connections in
this state is not unusual.

I have the impression you should rather ask the people who developed
videocache about its performance.

 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/which-kernel-is-best-for-squid-3-xx-with-centos-6-3-with-youtube-caching-tp4658999p4659011.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: ivp6 is required to use SMP ?

2013-03-15 Thread babajaga
Sounds familiar to me :-)
For a start, have a look here:

http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-crash-with-rock-during-startup-td4658281.html#a4658307



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ivp6-is-required-to-use-SMP-tp4659010p4659012.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Bypass bumping all websites in SSL transparent mode

2013-03-15 Thread Alex Rousskov
On 03/12/2013 01:00 PM, David Touzeau wrote:

 Squid bumps all websites and changes the certificate even though an ACL
 was created to deny bumping those websites.
 
 I would like to know if it is possible to do that ?

Changing server certificates without bumping SSL connections is not
possible. You may want to rephrase or detail what you want to do because
the above summary does not compute (as Alex Crow has noted).

Other than that, using https_port for bumping intercepted SSL
connections is the right approach.
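
For reference, the explicit-action form of the ssl_bump rules in Squid 3.2/3.3
looks roughly like this (a sketch only: the ACL name and .example.com are
placeholders, and on intercepted traffic a dstdomain ACL can only match
whatever name Squid can derive for the destination IP, so results vary):

acl no_bump_sites dstdomain .example.com
ssl_bump none no_bump_sites
ssl_bump server-first all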


Cheers,

Alex.


 I have set this in the squid.conf
 
 # - SSL Listen Port
 https_port 192.168.1.204:3130 intercept ssl-bump
 cert=/etc/squid3/ssl/cacert.pem key= /etc/squid3/ssl/privkey.pem
 # - SSL Rules
 ssl_bump deny all
 always_direct allow all
 
 -A PREROUTING -p tcp -m tcp --dport 3128  -j DROP
 -A PREROUTING -p tcp -m tcp --dport 3130  -j DROP
 -A PREROUTING -s 192.168.1.204/32 -p tcp -m tcp --dport 80 -j ACCEPT
 -A PREROUTING -s 192.168.1.204/32 -p tcp -m tcp --dport 443 -j ACCEPT
 -A PREROUTING -s 192.168.0.4/32 -p tcp -m tcp --dport 80  -j ACCEPT
 -A PREROUTING -s 192.168.0.4/32 -p tcp -m tcp --dport 443 -j ACCEPT
 -A PREROUTING -p tcp -m tcp --dport 80 -m comment --to-ports 3128
 -A PREROUTING -p tcp -m tcp --dport 443 -m comment -j REDIRECT
 --to-ports 3130
 -A POSTROUTING -m comment  -j MASQUERADE
 



Re: [squid-users] squid and unauthorized clients rate-blocking

2013-03-15 Thread Alex Rousskov
On 03/13/2013 04:27 AM, Eugene M. Zheganin wrote:

 I use squid mostly for internet access authorization in corporate
 network. I have a problem. Let's suppose some foobar company has
 developed a proxy-unaware update mechanism using HTTP to update their
 software. Or some internet company wrote a javascript that does execute
 outside proxy context in a browser. Such things can produce a massive
 amount of GET requests which squid answers with HTTP/407. Massive like
 thousands per seconds from just one machine.

Ouch.


 In the same time, being
 explicitly blocked with HTTP/403 answers, this madness stops. So, is
 there a mechanism that I could use for, like, send 403 after exceeding
 some rate to a client ? Or rate-block some acls ? Or something similar ?

That Javascript wonder probably uses a specific upgrade URL or two. Can
you block just those with a URL-specific ACL? If you place http_access
deny with that ACL before authentication ACLs, Squid should respond
with 403 Forbidden. I understand that this is not fully automated
because you need to maintain the black list of update URLs, but it gives
you an immediate and simple solution:

http_access deny badScriptUrl
http_access allow !needsAuthentication
http_access allow authenticated
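
The ACL definitions that go with those rules might look roughly like this (a
sketch: the regex and the 10.0.0.0/8 network are placeholders, and
needsAuthentication is interpreted here as the set of clients that must
authenticate):

acl badScriptUrl url_regex -i ^http://updates\.example\.com/
acl needsAuthentication src 10.0.0.0/8
acl authenticated proxy_auth REQUIRED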


If you really want to rate-limit, you can probably do that using an
external ACL that will compute the request rate of unauthenticated
requests and deny access based on that. AFAICT, you do not really need
to measure authentication failure rate (although that would be possible
in a custom authenticator too) -- measuring the rate of
not-yet-authenticated-but-must-be-authenticated requests would be
sufficient in practice (and might be even overall better as it will
protect your authentication code).

http_access allow !needsAuthentication
http_access allow lowRate authenticated
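
Wiring up such a rate check might look roughly like this (purely a sketch:
/usr/local/bin/rate_check is a hypothetical helper you would have to write
yourself; it would read one client IP per line and answer OK while that client
is under your request-rate threshold, and ERR once it exceeds it):

external_acl_type rate_check ttl=1 negative_ttl=1 %SRC /usr/local/bin/rate_check
acl lowRate external rate_check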


HTH,

Alex.



Re: [squid-users] Re: Squid 3.3.2 SMP Problem

2013-03-15 Thread Alex Rousskov
On 03/15/2013 05:05 AM, Ahmad wrote:
 hi all,
 I have added the SMP configuration to my squid.conf; I am using squid 3.3.3.

 My question is: how do I monitor performance and make sure that my otherwise
 idle cores are being used?

In general, you should monitor SMP Squid performance just like you
monitor non-SMP Squid performance. There are still some Squid-reported
stats that are not aggregated across workers, but most critical ones
are. The details are being documented at
http://wiki.squid-cache.org/Features/CacheManager#SMP_considerations
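
For example (assuming squidclient is installed, Squid listens on its default
port, and the manager ACLs allow requests from localhost), the aggregated
counters can be pulled with:

squidclient mgr:info
squidclient mgr:5min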

As for CPU core utilization monitoring specifically, for short-term
checks, consider using top (press '1' to see all cores) and for
long-term trends, perhaps an mpstat log would help?

After Squid has been running for some time, you could also look at the
total CPU time of each Squid worker reported by top, ps, and other
tools. This will give you an idea of how well your OS balances workers.
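
Concretely, something along these lines (a sketch; mpstat comes from the
sysstat package, and the ps keywords assume a Linux procps):

top                                             # press '1' to toggle per-core CPU usage
mpstat -P ALL 60                                # per-core utilisation sampled every 60 seconds
ps -eo pid,pcpu,cputime,args | grep '[s]quid'   # accumulated CPU time per Squid process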


 Can using SMP delay Squid's startup? I mean, does the cache rebuild process
 take more time when using SMP?

If the machine has enough resources, an SMP rebuild should be faster than
exactly the same rebuild done by a non-SMP Squid, but YMMV.


HTH,

Alex.



Re: [squid-users] Error with squid-reports (sarg)

2013-03-15 Thread Amos Jeffries

On 16/03/2013 5:20 a.m., Gerardo J. Leonardo wrote:

Hello there masters, has anyone here encountered an issue like the one below?



Perhaps you should contact the SARG help channels. That tool is not 
part of the Squid Project.


Amos


Error:

sarg-reports today
SARG: getword backtrace:
SARG: 1:/usr/bin/sarg() [0x804e1fc]
SARG: 2:/usr/bin/sarg() [0x804e3f9]
SARG: 3:/usr/bin/sarg() [0x80541d5]
SARG: 4:/lib/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb75b0ca6]
SARG: 5:/usr/bin/sarg() [0x8049fb1]
SARG: Maybe you have a broken date in your /var/log/squid3/access.log 
file

SARG: getword_atoll loop detected after 0 bytes.
SARG: Line=mage/jpeg
SARG: Record=mage/jpeg
SARG: searching for 'x2f'

squid version:
Squid Cache: Version 3.2.6
configure options:  '--build=i486-linux-gnu' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' 
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3' 
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' 
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' 
'--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-icmp' 
'--enable-removal-policies=lru,heap' '--enable-x-accelerator-vary' 
'--disable-ipv6' '--enable-zph-qos' '--enable-ecap' 
'--disable-internal-dns' '--enable-ssl-crtd' '--enable-forw-via-db' 
'--enable-ssl' '--enable-stacktraces' '--enable-delay-pools' 
'--enable-cache-digests' '--enable-underscores' 
'--disable-ident-lookups' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' '--enable-esi' 
'--disable-translation' '--with-logdir=/var/log/squid3' 
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' 
'--with-large-files' '--with-default-user=proxy' 
'--enable-linux-netfilter' 'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 
-g -Wall -O2' 'LDFLAGS=' 'CPPFLAGS=' 'CXXFLAGS=-g -march=pentium4m 
-mtune=pentium4m -O2 -g -Wall -O2'


sarg version:
2.3.1-1~bpo60+1

Sample access log:
1361338483.586   8160 10.10.10.10 TCP_MISS/200 590 GET 
http://5-act.channel.facebook.com/pull? - HIER_DIRECT/69.171.246.16 
text/plain
1361338483.595  1 10.10.10.10 TCP_MEM_HIT/200 607 GET 
http://i2.ytimg.com/crossdomain.xml - HIER_NONE/- 
text/x-cross-domain-policy






[squid-users] squid Basic authentication

2013-03-15 Thread hadi

I'm using squid-3.1.23 and trying to configure username/password
authentication against local users (getpwname_auth). It pops up the
authentication prompt, but when I supply a username and password it doesn't
work. Please help with this matter.

My squid.conf:
auth_param basic program /usr/local/squid/libexec/getpwname_auth
auth_param basic utf8 off
auth_param basic children 15 start=1 idle=1
auth_param basic realm Squid proxy Server at proxy.bigmama.com
auth_param basic credentialsttl 4 hours
auth_param basic casesensitive off
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all

# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
acl lan src 192.168.0.0/24  # my lan
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow lan
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /usr/local/squid/var/cache 1000 16 256
cache_mem 50 MB

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
visible_hostname host1.bigmama.com
cache_effective_user squid
cache_effective_group squid

access.log
1362861900.377  1 192.168.0.1 TCP_DENIED/407 4175 GET
http://www.google.com/ - NONE/- text/html
1362861903.039  1 192.168.0.1 TCP_DENIED/407 4282 GET
http://www.google.com/ hadi NONE/- text/html
1362861905.676  1 192.168.0.1 TCP_DENIED/407 4297 GET
http://www.google.com/ hadi NONE/- text/html
1362861931.381  1 192.168.0.1 TCP_DENIED/407 4318 GET
http://www.google.com/ root NONE/- text/html

More error logs from cache with set to debug_options ALL,2 29
2013/03/16 01:41:02.758| ConnStateData::swanSong: FD 12
2013/03/16 01:41:22.128| The request CONNECT www.hotmail.com:443 is DENIED,
because it matched 'auth'
2013/03/16 01:41:22.128| errorpage.cc(1075) BuildContent: No existing error
page language negotiated for ERR_CACHE_ACCESS_DENIED. Using default error
file.
2013/03/16 01:41:22.128| The reply for CONNECT www.hotmail.com:443 is
ALLOWED, because it matched 'auth'
2013/03/16 01:41:22.130| ConnStateData::swanSong: FD 14
2013/03/16 01:41:22.133| The request CONNECT www.hotmail.com:443 is DENIED,
because it matched 'auth'
2013/03/16 01:41:22.133| errorpage.cc(1075) BuildContent: No existing error
page language negotiated for ERR_CACHE_ACCESS_DENIED. Using default error
file.
2013/03/16 01:41:22.134| The reply for CONNECT www.hotmail.com:443 is
ALLOWED, because it matched 'auth'
2013/03/16 01:41:22.135| connReadWasError: FD 14: got flag -1
2013/03/16 01:41:22.135| ConnStateData::swanSong: FD 14




Re: [squid-users] Error with squid-reports (sarg)

2013-03-15 Thread Gerardo J. Leonardo

Thank you Amos, will do.

Gerardo J. Leonardo


Think before you print!

On 03/16/2013 07:25 AM, Amos Jeffries wrote:

On 16/03/2013 5:20 a.m., Gerardo J. Leonardo wrote:

Hello there masters, has anyone here encountered an issue like the one below?



Perhaps you should contact the SARG help channels. That tool is not 
part of the Squid Project.


Amos


Error:

sarg-reports today
SARG: getword backtrace:
SARG: 1:/usr/bin/sarg() [0x804e1fc]
SARG: 2:/usr/bin/sarg() [0x804e3f9]
SARG: 3:/usr/bin/sarg() [0x80541d5]
SARG: 4:/lib/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb75b0ca6]
SARG: 5:/usr/bin/sarg() [0x8049fb1]
SARG: Maybe you have a broken date in your /var/log/squid3/access.log 
file

SARG: getword_atoll loop detected after 0 bytes.
SARG: Line=mage/jpeg
SARG: Record=mage/jpeg
SARG: searching for 'x2f'

squid version:
Squid Cache: Version 3.2.6
configure options:  '--build=i486-linux-gnu' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' 
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3' 
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' 
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' 
'--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-icmp' 
'--enable-removal-policies=lru,heap' '--enable-x-accelerator-vary' 
'--disable-ipv6' '--enable-zph-qos' '--enable-ecap' 
'--disable-internal-dns' '--enable-ssl-crtd' '--enable-forw-via-db' 
'--enable-ssl' '--enable-stacktraces' '--enable-delay-pools' 
'--enable-cache-digests' '--enable-underscores' 
'--disable-ident-lookups' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' '--enable-esi' 
'--disable-translation' '--with-logdir=/var/log/squid3' 
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' 
'--with-large-files' '--with-default-user=proxy' 
'--enable-linux-netfilter' 'build_alias=i486-linux-gnu' 'CFLAGS=-g 
-O2 -g -Wall -O2' 'LDFLAGS=' 'CPPFLAGS=' 'CXXFLAGS=-g 
-march=pentium4m -mtune=pentium4m -O2 -g -Wall -O2'


sarg version:
2.3.1-1~bpo60+1

Sample access log:
1361338483.586   8160 10.10.10.10 TCP_MISS/200 590 GET 
http://5-act.channel.facebook.com/pull? - HIER_DIRECT/69.171.246.16 
text/plain
1361338483.595  1 10.10.10.10 TCP_MEM_HIT/200 607 GET 
http://i2.ytimg.com/crossdomain.xml - HIER_NONE/- 
text/x-cross-domain-policy