Re: [squid-users] Reg: Migration of squid cache

2013-08-28 Thread Ben Nichols
Whoah there cowboy, I think he's just asking how to move the cache and use it
with a new squid installation.
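
For what it's worth, a minimal sketch of such a move, assuming both
installations use the same on-disk cache format (e.g. aufs) and compatible
Squid versions; all paths here are placeholders:

squid -k shutdown                        # stop the old instance cleanly so swap.state is written out
rsync -a /var/spool/squid/ /new/cache/   # copy the whole cache_dir, swap.state included
# then point the new squid.conf at the copy, e.g.:
#   cache_dir aufs /new/cache 10000 16 256
# do not run "squid -z" on the copied directory; -z is for initialising an empty cache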



Re: [squid-users] Evaluating SQUID performance

2013-07-24 Thread Ben Nichols
You could go and do a simple demonstration: download a 300MB+ file on one
machine, then download the same file on a machine sitting next to it, and let
them see how fast it comes from the cache.
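
A sketch of that demo with curl, where the proxy address and the URL are
placeholders:

time curl -sx http://192.0.2.1:3128 -o /dev/null http://example.com/big-file.iso   # fills the cache
time curl -sx http://192.0.2.1:3128 -o /dev/null http://example.com/big-file.iso   # should return much faster

The second run's time, plus a TCP_HIT entry in access.log, makes the point.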



- Original Message - 
From: "John Joseph" 

To: 
Sent: Wednesday, July 24, 2013 3:48 AM
Subject: [squid-users] Evaluating SQUID performance




Hi
How could I run a test on the squid server and check the performance in
terms of bandwidth saved?

Is there any tool for this?
I know squid can save bandwidth, but I want to convince others with proof.
Guidance and advice requested.
thanks
Joseph John
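
One hedged way to put a number on it is to total access.log bytes by hit vs.
miss. This sketch assumes Squid's native log format, where field 4 is the
result code (e.g. TCP_MEM_HIT/200) and field 5 is the byte count:

awk '{ total += $5; if ($4 ~ /HIT/) hit += $5 }
     END { if (total) printf "hit bytes %d of %d (%.1f%% saved)\n", hit, total, 100*hit/total }' /var/log/squid/access.log

squidclient mgr:info also reports request and byte hit ratios directly.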




Re: [squid-users] Configuring Squid for windows to fight DDoS attacks

2013-07-22 Thread Ben Nichols


- Original Message - 
From: "Amos Jeffries" 

To: 
Sent: Monday, July 22, 2013 6:47 PM
Subject: Re: [squid-users] Configuring Squid for windows to fight DDoS 
attacks




On 23/07/2013 9:25 a.m., Fernando Gros Gonzalez wrote:

Hello,

We have a server (for an online game) and we are receiving DDoS
attacks. We don't know anything about Squid, but we would like
someone to explain to us how to configure the Windows version of squid to
fight DDoS attacks.

Thanks,

Fernando





If you're running a game server and the website for this service is the target
of the attack, a reverse proxy service such as cloudflare may help.
But if your IP is already known to the attacker, well, all you can do is
wait until they stop attacking, or try to get a new IP.
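
For reference, Squid itself can be put in front of a web site as a reverse
proxy (accel mode). A minimal sketch, with the site name and origin address
as placeholders:

http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site

This only shields the web site, of course; game traffic on other ports is
unaffected.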


Also if you can identify the attacker, going to http://ic3.gov and filing a 
report may help bring justice to your attacker.



Sucks when it happens. 



Re: [squid-users] Squid 2.7.x No Transparent Smartphones issues with youtube?

2013-06-28 Thread Ben Nichols

This might sound silly, but does your phone support flash?

Have you tried http://www.youtube.com/html5

You might be able to get them to load in html5 if your phone doesn't support
flash.


Also, might I ask how you managed to edit the squid.conf in pfsense?

Where is it located in pfsense?




Re: [squid-users] Squid 2.7.x No Transparent Smartphones issues with youtube?

2013-06-28 Thread Ben Nichols


- Original Message - 
From: "Beto Moreno" 

To: 
Sent: Friday, June 28, 2013 4:44 PM
Subject: Re: [squid-users] Squid 2.7.x No Transparent Smartphones issues 
with youtube?




This is a test environment under pfsense, that's all.
I have the option to move to squid 3.x if I need to.
Now, related to smartphones: does 3.x fix this kind of issue?




Running Squid under pfsense?

Pardon me if someone finds what I am about to say offensive,

But in my opinion, pfsense is a child's toy, and no sane man would put it
into production in a serious environment.


More specifically, in regard to squid proxy, I find it absurd the way they
bury the squid.conf in some obscure xml scheme. I have placed multiple
inquiries to the pfsense community over this issue, and the response I got...


Was essentially that they are purposely making direct editing of the
configuration file difficult for the end users.


I was offended by this, as I find their user interface for squid3 lacking in
configuration directives, limiting the user's ability
to truly have the freedom to configure the squid proxy under pfsense in any
sensible manner.


But then again, that is just my opinion, and maybe I am wrong, but that's how
I see it.


Hopefully some serious squid enthusiasts/experts can assist the pfsense
development team in developing a better interface for squid proxy under
pfsense, or even simply allow an option in the pfsense web gui for direct
editing of the configuration file for advanced or custom-tailored options.




Once again, I apologize if voicing my opinion on this matter was less
than productive.


Fix Nichols
http://www.squidblacklist.org 



Re: [squid-users] Squid 2.7.x No Transparent Smartphones issues with youtube?

2013-06-28 Thread Ben Nichols

Is there any particular reason you are using squid2.7?

No offense meant, but you might consider upgrading to a newer version.



- Original Message - 
From: "Beto Moreno" 

To: 
Sent: Friday, June 28, 2013 4:32 PM
Subject: [squid-users] Squid 2.7.x No Transparent Smartphones issues with 
youtube?




Hi people.

I setup squid 2.7.x

My LAN works without issue. Now I set up my WiFi network to test some
smartphones, for example an Atrix 2.

Those devices don't have the option to auto-discover proxy
settings, so I set them up manually.

I could browse the Internet, but something caught my
attention: I could not watch any video on youtube.

I can search videos and that stuff, but once I click to watch any of
them it just shows me loading...

squid logs don't show any DENIED; I haven't set up any ACL for this yet.

Any comment about your experience with smartphones?

My Setup is not Transparent-Proxy.

Thanks.





Re: [squid-users] squid 3.2 cache mechanism - not working properly compared to 3.1 series

2012-10-25 Thread Ben

Hi Amos,

Out of curiosity, I again checked the same traffic with 3.1.19 and it is
working very well.


I really feel that squid's caching performance in 3.1.19 is superb
compared to 3.2.3.


I would request that you verify the caching mechanism of 3.2.3 against
3.1.19 at some point; I suspect there are changes at the code level or something.


25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 3461 GET 
http://elitecore.com/ - NONE/- text/html
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 1324 GET 
http://elitecore.com/css/stylesheet.css - NONE/- text/css
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 3726 GET 
http://elitecore.com/js/menuscript.js - NONE/- application/javascript
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 1396 GET 
http://elitecore.com/js/menu.js - NONE/- application/javascript
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 465 GET 
http://elitecore.com/images/aarow_bullet.gif - NONE/- image/gif
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5799 GET 
http://elitecore.com/images/header_curve.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5349 GET 
http://elitecore.com/images/Telecommunication-icon.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  1 10.115.1.16 TCP_MEM_HIT/200 9673 GET 
http://elitecore.com/images/elitecore_logo.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 1200 GET 
http://elitecore.com/images/home-mod-bot.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 6606 GET 
http://elitecore.com/images/NetworkSecurity-icon.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 7398 GET 
http://elitecore.com/images/AccessGateway-icon.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  8 10.115.1.16 TCP_MEM_HIT/200 38632 GET 
http://elitecore.com/images/home_banner_new.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  9 10.115.1.16 TCP_MEM_HIT/200 46453 GET 
http://elitecore.com/images/customers-new.gif - NONE/- image/gif
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5963 GET 
http://elitecore.com/images/body_bkgd.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 15364 GET 
http://www.google-analytics.com/ga.js - NONE/- text/javascript
25/Oct/2012:16:37:31 +0530  5 10.115.1.16 TCP_MEM_HIT/200 33119 GET 
http://meltwaternews.com/magenta/xml/html/51/05/v2_374671.html - NONE/- 
text/html
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5504 GET 
http://elitecore.com/images/menu_bkgd.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5531 GET 
http://elitecore.com/images/flyout_bkgd.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 485 GET 
http://elitecore.com/images/home-mod-top-rep.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 562 GET 
http://elitecore.com/images/homepage-mod-top2.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 5526 GET 
http://elitecore.com/images/footer_bkgd.jpg - NONE/- image/jpeg
25/Oct/2012:16:37:31 +0530  5 10.115.1.16 TCP_MEM_HIT/200 57556 GET 
http://meltwaternews.com/js/jquery_1.3.js - NONE/- application/javascript
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 903 GET 
http://meltwaternews.com/ext/a3logics/APAC/Images/FBN-32333bg1.png - 
NONE/- image/png
25/Oct/2012:16:37:31 +0530    125 10.115.1.16 TCP_MISS/200 429 GET
http://www.google-analytics.com/__utm.gif? - DIRECT/74.125.236.46 image/gif
25/Oct/2012:16:37:31 +0530  0 10.115.1.16 TCP_MEM_HIT/200 21744 GET 
http://elitecore.com/favicon.ico - NONE/- image/x-icon

On 25-10-2012 14:49, Amos Jeffries wrote:

On 25/10/2012 7:24 p.m., Ben wrote:


Hi,

I upgraded my squid boxes from the 3.1 series to the 3.2 series. I noticed
that 3.2 is not better than 3.1 when it comes to caching
capabilities.

I checked a simple web site with jpg images and the like. The web
site is a standard, normal site, so squid should cache its contents. I
tried many times, but each time I saw TCP_MISS in access.log.

I checked the site elitecore.com (as an example).

redbot url: http://redbot.org/?descend=True&uri=http://elitecore.com -
it suggests that this site is cacheable.

squid version: 3.2.3

Currently there is no change in squid.conf; I am using the default one
for testing purposes.


What could be the problem with squid-3.2.3? Are there any changes required
in the 3.2.3 configuration, or any changes in the caching mechanism for the 3.2
series?


Firstly, TCP_MISS does *not* mean anything about whether the response to
a previous request or to the current request was cached. All it means is
that the current request was not serviced by anything already existing in
cache.

Both the client request headers AND the cached response headers are
taken into account when deciding whether a response can be served from
cache.

Re: [squid-users] squid 3.2 cache mechanism - not working properly compared to 3.1 series

2012-10-25 Thread Ben

On 25-10-2012 14:49, Amos Jeffries wrote:

On 25/10/2012 7:24 p.m., Ben wrote:


Hi,

I upgraded my squid boxes from the 3.1 series to the 3.2 series. I noticed
that 3.2 is not better than 3.1 when it comes to caching
capabilities.

I checked a simple web site with jpg images and the like. The web
site is a standard, normal site, so squid should cache its contents. I
tried many times, but each time I saw TCP_MISS in access.log.

I checked the site elitecore.com (as an example).

redbot url: http://redbot.org/?descend=True&uri=http://elitecore.com -
it suggests that this site is cacheable.

squid version: 3.2.3

Currently there is no change in squid.conf; I am using the default one
for testing purposes.


What could be the problem with squid-3.2.3? Are there any changes required
in the 3.2.3 configuration, or any changes in the caching mechanism for the 3.2
series?


Firstly, TCP_MISS does *not* mean anything about whether the response to
a previous request or to the current request was cached. All it means is
that the current request was not serviced by anything already existing in cache.

Both the client request headers AND the cached response headers are
taken into account when deciding whether a response can be served from
cache. It is perfectly reasonable to get a log trace like this from a
client requesting a brand new object and rejecting anything that might be
stale (max-age=0).

Since the site uses User-Agent in its Vary: header it is perfectly
possible that each client request has a different agent string and
MISS'es the previously cached entry. Even one byte of change in the
agent string will cause those objects to MISS.
  But since you said 3.1 was working okay, it is probably not this issue,
which affects all caches.
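
A quick way to see the Vary effect, sketched with curl against one of the
URLs from the log (the proxy address is a placeholder): two requests that
differ only in User-Agent should each log a MISS.

curl -sx 192.0.2.1:3128 -A "agent-one" -o /dev/null http://elitecore.com/images/body_bkgd.jpg
curl -sx 192.0.2.1:3128 -A "agent-two" -o /dev/null http://elitecore.com/images/body_bkgd.jpg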


For testing purposes, I opened the same site from the same browser and ran
the same process again to check whether caching is happening or not.





For more info on what 3.2.3 is caching you can log at debug level 11,3
and get a log entry for each response saying whether it was determined cacheable.
Look for "cacheableReply" and a YES/NO with an explained reason.



I enabled debug_options 11,3. It shows request/response headers with
meaningful information, but I could not find "cacheableReply". I am including
one request/response block from the log at the 11,3 debug level below.



-
GET http://elitecore.com/images/body_bkgd.jpg HTTP/1.1^M
Host: elitecore.com^M
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 
Firefox/15.0.1^M

Accept: image/png,image/*;q=0.8,*/*;q=0.5^M
Accept-Language: en-us,en;q=0.5^M
Accept-Encoding: gzip, deflate^M
Proxy-Connection: keep-alive^M
Referer: http://elitecore.com/css/stylesheet.css^M
Cookie: __utma=25544809.1437989579.1351159439.1351159439.1351159439.1; 
__utmb=25544809.4.10.1351159439; __utmc=25544809; 
__utmz=25544809.1351159439.1

.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)^M
If-Modified-Since: Thu, 24 Sep 2009 06:16:20 GMT^M
If-None-Match: "4005f-56a5-4744cc51a1500"^M
Cache-Control: max-age=0^M
^M

--
2012/10/25 15:45:32.794 kid1| httpStart: "GET 
http://elitecore.com/images/body_bkgd.jpg";
2012/10/25 15:45:32.794 kid1| HTTP Server local=10.115.1.230:30562 
remote=180.179.100.102:80 FD 35 flags=1

2012/10/25 15:45:32.794 kid1| HTTP Server REQUEST:
-
GET /images/body_bkgd.jpg HTTP/1.1^M
Host: elitecore.com^M
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 
Firefox/15.0.1^M

Accept: image/png,image/*;q=0.8,*/*;q=0.5^M
Accept-Language: en-us,en;q=0.5^M
Accept-Encoding: gzip, deflate^M
Referer: http://elitecore.com/css/stylesheet.css^M
Cookie: __utma=25544809.1437989579.1351159439.1351159439.1351159439.1; 
__utmb=25544809.4.10.1351159439; __utmc=25544809; 
__utmz=25544809.1351159439.1

.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)^M
If-Modified-Since: Thu, 24 Sep 2009 06:16:20 GMT^M
If-None-Match: "4005f-56a5-4744cc51a1500"^M
Via: 1.1 ns1.example.com (squid/3.2.3)^M
X-Forwarded-For: 10.115.1.16^M
Cache-Control: max-age=0^M
Connection: keep-alive^M

As per my understanding, in the request header I can see Cache-Control:
max-age=0, so I added the override-expire option to refresh_pattern to
get past this header parameter, but the result is still the same.


Is refresh_pattern for the response header, or for both request and response?
I added this refresh_pattern,

refresh_pattern -i \.(jpg|jpeg)$ 1440 40% 14400 override-expire

for testing purposes.

How do I cache such a site's contents?
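
For what it's worth, refresh_pattern freshness rules are evaluated against
the cached response; a client-sent Cache-Control: max-age=0 is a revalidation
request that override-expire alone does not suppress. The HTTP-violating
reload options are the usual lever for that; a sketch under that assumption,
not a recommendation:

refresh_pattern -i \.(jpg|jpeg)$ 1440 40% 14400 override-expire reload-into-ims

reload-into-ims downgrades client reloads to If-Modified-Since revalidations;
ignore-reload is the blunter variant.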


BR
Ben





Amos



I tested the same site with the 3.1 series and it works fine.

access.log :

1351098321.835 80 192.168.1.23 TCP_MISS/200 3414 GET
http://elitecore.com/ - HIER_DIRECT/180.179.100.102 text/html
1351098321.876 19 192.168.1.23 TCP_MISS/200 1280 GET
http://elitecore.com/css/stylesheet.css - HIER_DIRECT/180.179.100.102
text/css
1351098321.898 37 192.168.1.23 TCP_MISS/200 1352 GET
http://elitecore.com/js/menu.js - HIER_DIRECT/180.179.100.102
application/javascript
1351098321.899 

[squid-users] squid 3.2 cache mechanism - not working properly compared to 3.1 series

2012-10-24 Thread Ben


Hi,

Hi,

I upgraded my squid boxes from the 3.1 series to the 3.2 series. I noticed
that 3.2 is not better than 3.1 when it comes to caching capabilities.


I checked a simple web site with jpg images and the like. The web
site is a standard, normal site, so squid should cache its contents. I
tried many times, but each time I saw TCP_MISS in access.log.


I checked the site elitecore.com (as an example).

redbot url: http://redbot.org/?descend=True&uri=http://elitecore.com -
it suggests that this site is cacheable.


squid version: 3.2.3

Currently there is no change in squid.conf; I am using the default one for
testing purposes.



What could be the problem with squid-3.2.3? Are there any changes required
in the 3.2.3 configuration, or any changes in the caching mechanism for the 3.2 series?


I tested the same site with the 3.1 series and it works fine.

access.log :

1351098321.835 80 192.168.1.23 TCP_MISS/200 3414 GET 
http://elitecore.com/ - HIER_DIRECT/180.179.100.102 text/html
1351098321.876 19 192.168.1.23 TCP_MISS/200 1280 GET 
http://elitecore.com/css/stylesheet.css - HIER_DIRECT/180.179.100.102 
text/css
1351098321.898 37 192.168.1.23 TCP_MISS/200 1352 GET 
http://elitecore.com/js/menu.js - HIER_DIRECT/180.179.100.102 
application/javascript
1351098321.899 21 192.168.1.23 TCP_MISS/200 421 GET 
http://elitecore.com/images/aarow_bullet.gif - 
HIER_DIRECT/180.179.100.102 image/gif
1351098321.913 52 192.168.1.23 TCP_MISS/200 3682 GET 
http://elitecore.com/js/menuscript.js - HIER_DIRECT/180.179.100.102 
application/javascript
1351098321.919 57 192.168.1.23 TCP_MISS/200 5755 GET 
http://elitecore.com/images/header_curve.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098321.919 57 192.168.1.23 TCP_MISS/200 9629 GET 
http://elitecore.com/images/elitecore_logo.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098321.922 22 192.168.1.23 TCP_MISS/200 1156 GET 
http://elitecore.com/images/home-mod-bot.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098321.941  1 192.168.1.23 TCP_MEM_HIT/200 33122 GET 
http://meltwaternews.com/magenta/xml/html/51/05/v2_374671.html - 
HIER_NONE/- text/html
1351098321.973  1 192.168.1.23 TCP_MEM_HIT/200 57559 GET 
http://meltwaternews.com/js/jquery_1.3.js - HIER_NONE/- 
application/javascript
1351098322.019  0 192.168.1.23 TCP_MEM_HIT/200 906 GET 
http://meltwaternews.com/ext/a3logics/APAC/Images/FBN-32333bg1.png - 
HIER_NONE/- image/png
1351098322.145    247 192.168.1.23 TCP_MISS/200 5305 GET
http://elitecore.com/images/Telecommunication-icon.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098322.365    446 192.168.1.23 TCP_MISS/200 7354 GET
http://elitecore.com/images/AccessGateway-icon.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098322.575    662 192.168.1.23 TCP_MISS/200 6562 GET
http://elitecore.com/images/NetworkSecurity-icon.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098322.584    437 192.168.1.23 TCP_MISS/200 5460 GET
http://elitecore.com/images/menu_bkgd.jpg - HIER_DIRECT/180.179.100.102 
image/jpeg
1351098322.865    498 192.168.1.23 TCP_MISS/200 5487 GET
http://elitecore.com/images/flyout_bkgd.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098323.379    514 192.168.1.23 TCP_MISS/200 5482 GET
http://elitecore.com/images/footer_bkgd.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098323.635   1710 192.168.1.23 TCP_MISS/200 5919 GET 
http://elitecore.com/images/body_bkgd.jpg - HIER_DIRECT/180.179.100.102 
image/jpeg
1351098324.097   2235 192.168.1.23 TCP_MISS/200 38616 GET 
http://elitecore.com/images/home_banner_new.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098325.362   2786 192.168.1.23 TCP_MISS/200 441 GET 
http://elitecore.com/images/home-mod-top-rep.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098325.765   3826 192.168.1.23 TCP_MISS/200 15368 GET 
http://www.google-analytics.com/ga.js - HIER_DIRECT/173.194.36.36 
text/javascript
1351098325.861   3276 192.168.1.23 TCP_MISS/200 518 GET 
http://elitecore.com/images/homepage-mod-top2.jpg - 
HIER_DIRECT/180.179.100.102 image/jpeg
1351098326.073   4153 192.168.1.23 TCP_MISS/200 46409 GET 
http://elitecore.com/images/customers-new.gif - 
HIER_DIRECT/180.179.100.102 image/gif
1351098326.676    865 192.168.1.23 TCP_MISS/200 432 GET
http://www.google-analytics.com/__utm.gif? - HIER_DIRECT/173.194.36.36 
image/gif
1351098328.980   2291 192.168.1.23 TCP_MISS/200 21700 GET 
http://elitecore.com/favicon.ico - HIER_DIRECT/180.179.100.102 image/x-icon



BR
Ben





Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,


Hi,

On 23/10/2012 8:10 p.m., Ben wrote:

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the
error "FATAL: Bungled (null) line 192: icap_retry deny all". What
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--enable-async-io=128' 
'--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there have been no more entries for this fatal error. What
does this error mean?


I'm not exactly sure what the bungled is about. I've just patched 
latest 3.HEAD to explain "(null)" better. That means one of the 
default values built-in to Squid is broken.


This message is saying that the default value, used when you have nothing in
your squid.conf about icap_retry, cannot be defined.



What do you mean by "since last day" ...  you have a new build that 
works? or you added icap_retry to the config and it works? or no 
changes and it just started working?




Yes, no changes and it just started working.

I just got some logs now,

cat /opt/squid-3.2.3/var/logs/cache.log | grep -i fatal
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all

what do you suggest to resolve it?
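
Purely as a hedged guess: since the FATAL comes from parsing the built-in
default for icap_retry, spelling the directive out in squid.conf (so the
broken default is never applied) might sidestep it:

# hypothetical, untested workaround: state the documented default explicitly
icap_retry deny all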



One thing I noticed in 3.2.3: there are no more of the FATAL: dying issues
which I faced in the 3.1 series, about which I had mailed squid-users.



Amos

BR
Ben

BR
Ben


Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,

On 23/10/2012 8:10 p.m., Ben wrote:

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the
error "FATAL: Bungled (null) line 192: icap_retry deny all". What
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--enable-async-io=128' 
'--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there have been no more entries for this fatal error. What
does this error mean?


I'm not exactly sure what the bungled is about. I've just patched 
latest 3.HEAD to explain "(null)" better. That means one of the 
default values built-in to Squid is broken.


This message is saying that the default value, used when you have nothing in
your squid.conf about icap_retry, cannot be defined.



What do you mean by "since last day" ...  you have a new build that 
works? or you added icap_retry to the config and it works? or no 
changes and it just started working?




Yes, no changes and it just started working.

One thing I noticed in 3.2.3: there are no more of the FATAL: dying issues
which I faced in the 3.1 series, about which I had mailed squid-users.



Amos

BR
Ben


Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the
error "FATAL: Bungled (null) line 192: icap_retry deny all". What
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' '--enable-wccp' 
'--enable-wccp2' '--enable-kill-parent-hack' '--enable-http-violations' 
'--enable-async-io=128' '--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there have been no more entries for this fatal error. What
does this error mean?


Ben


[squid-users] squid 3.2.3 crashed with FATAL error

2012-10-22 Thread Ben

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the error
"FATAL: Bungled (null) line 192: icap_retry deny all". What could be the
reason behind this problem? How do I resolve it?


2012/10/22 20:46:23 kid1| Closing HTTP port 0.0.0.0:3128
2012/10/22 20:46:23 kid1| Closing HTTP port 0.0.0.0:8080
2012/10/22 20:46:23 kid1| storeDirWriteCleanLogs: Starting...
2012/10/22 20:46:23 kid1|   Finished.  Wrote 0 entries.
2012/10/22 20:46:23 kid1|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: Bungled (null) line 192: icap_retry deny 
all  <--

Squid Cache (Version 3.2.3): Terminated abnormally.
CPU Usage: 37.056 seconds = 21.337 user + 15.720 sys
Maximum Resident Size: 400768 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:   91356 KB
Ordinary blocks:88959 KB849 blks
Small blocks:   0 KB  1 blks
Holding blocks:  9048 KB  6 blks
Free Small blocks:  0 KB
Free Ordinary blocks:2396 KB
Total in use:   98007 KB 107%
Total free:  2396 KB 3%
2012/10/22 20:46:23 kid1| BUG: Orphan Comm::Connection: 
local=0.0.0.0:3401 remote=[::] FD 11 flags=9

2012/10/22 20:46:23 kid1| NOTE: 1 Orphans since last started.
2012/10/22 20:46:27 kid1| Starting Squid Cache version 3.2.3 for 
x86_64-unknown-linux-gnu...



Regards,
Ben


[squid-users] Re: squid 3.2.3 problems -- nothing comes to access.log

2012-10-22 Thread Ben

Hi,

It has been resolved after compiling squid 3.2 with different compilation
options.


BR
Ben

Hi,

Due to the FATAL: dying issue and squid crashing with the squid 3.1
series, I downloaded the latest squid 3.2.3 and compiled it on centos
6.2 64 bit. After compilation, I am facing an issue: no logs are coming
into access.log, but in cache.log I continuously get:



2012/10/22 13:02:01 kid1| NOTE: 1972 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=69.164.37.115:80 remote=180.87.250.89:2877 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1973 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2668 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1974 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2669 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1975 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2670 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1976 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2671 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1977 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=69.164.37.115:80 remote=180.87.250.89:2878 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1978 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=69.164.37.115:80 remote=180.87.250.89:2879 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1979 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2672 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1980 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2659 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1981 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=31.172.124.2:80 remote=180.87.250.101:33490 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1982 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=23.11.234.10:80 remote=180.87.250.84:2341 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1983 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=74.125.236.63:80 remote=180.87.250.42:7025 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1984 Orphans since last started.
2012/10/22 13:02:01 kid1| BUG: Orphan Comm::Connection: 
local=205.128.78.126:80 remote=180.87.250.114:2660 FD 13 flags=17

2012/10/22 13:02:01 kid1| NOTE: 1985 Orphans since last started.


What does this mean? Is there a problem with my compilation?

My compilation parameters:

./sbin/squid -v
Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-xmalloc-statistics' '--enable-storeio=aufs,ufs' 
'--enable-removal-policies=lru,heap' '--enable-icmp' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--with-filedescriptors=16384' 
'--enable-async-io=128' '--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
--enable-ltdl-convenience


squid.conf

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl localnet src "/etc/squid/localnet"


#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only

[squid-users] squid 3.2.3 problems -- nothing comes to access.log

2012-10-22 Thread Ben
# one who can access services on "localhost" is a local user

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 8080 tproxy

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /opt/squid-3.2/var/cache/squid 100 16 256

cache_dir aufs /cacheA 716800 128 512
cache_dir aufs /cacheB 716800 128 512
cache_dir aufs /cacheC 716800 128 512
cache_dir aufs /cacheD 716800 128 512

# Leave coredumps in the first cache dir
coredump_dir /cacheA

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320



wccp2_router 
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source 
priority=240 ports=80

wccp2_assignment_method hash

BR
Ben


Re: [squid-users] Re: squid crashes many times in a day

2012-10-20 Thread Ben

Hi Amos,

Since last week, I have been facing the issues below and really want to resolve
them. While looking around on Google, I found some similar
posts about the problem below in the 3.2 series as well. Can you please comment
on the problem below: is it an open bug in the 3.1 and 3.2 series? Or is it
resolved in the current 3.2 series?


Your inputs are highly appreciated.

BR
Ben

Hi Kinkie,

Thanks for your kind response. Please find below more entries
before and after the FATAL error.


What does this FATAL error mean?


2012/10/19 15:22:24| clientProcessRequest: Invalid Request
2012/10/19 15:22:39| clientProcessRequest: Invalid Request
2012/10/19 15:23:03| clientProcessRequest: Invalid Request
2012/10/19 15:23:35| clientProcessRequest: Invalid Request
2012/10/19 15:24:51| clientProcessRequest: Invalid Request
2012/10/19 16:03:02| clientProcessRequest: Invalid Request
2012/10/19 16:07:03| clientProcessRequest: Invalid Request
2012/10/19 16:08:41| clientProcessRequest: Invalid Request
2012/10/19 16:11:04| clientProcessRequest: Invalid Request
2012/10/19 16:13:50| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:15:05| clientProcessRequest: Invalid Request
2012/10/19 16:17:51| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:19:06| clientProcessRequest: Invalid Request
2012/10/19 16:23:01| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:23:07| clientProcessRequest: Invalid Request
2012/10/19 16:24:59| clientProcessRequest: Invalid Request
2012/10/19 16:25:08| clientProcessRequest: Invalid Request
2012/10/19 16:26:07| clientProcessRequest: Invalid Request
2012/10/19 16:27:01| clientProcessRequest: Invalid Request
2012/10/19 16:27:08| clientProcessRequest: Invalid Request
2012/10/19 16:27:08| WARNING: swapfile header inconsistent with 
available data

FATAL: Received Segment Violation...dying. <--
2012/10/19 16:27:08| storeDirWriteCleanLogs: Starting...
2012/10/19 16:27:08| WARNING: Closing open FD   21
2012/10/19 16:27:08| 65536 entries written so far.
2012/10/19 16:27:08|131072 entries written so far.
2012/10/19 16:27:08|196608 entries written so far.
2012/10/19 16:27:08|262144 entries written so far.
2012/10/19 16:27:08|327680 entries written so far.
2012/10/19 16:27:08|393216 entries written so far.
2012/10/19 16:27:08|   Finished.  Wrote 410417 entries.
2012/10/19 16:27:08|   Took 0.10 seconds (3937042.54 entries/sec).
CPU Usage: 367.492 seconds = 212.301 user + 155.191 sys
Maximum Resident Size: 6021760 KB
Page faults with physical i/o: 1
Memory usage for squid via mallinfo():
total space in arena:  1361484 KB
Ordinary blocks:   1361359 KB308 blks
2012/10/20 11:47:25| Finished rebuilding storage from disk.
2012/10/20 11:47:25|819651 Entries scanned
2012/10/20 11:47:25| 0 Invalid entries.
2012/10/20 11:47:25| 0 With invalid flags.
2012/10/20 11:47:25|819651 Objects loaded.
2012/10/20 11:47:25| 0 Objects expired.
2012/10/20 11:47:25| 0 Objects cancelled.
2012/10/20 11:47:25| 0 Duplicate URLs purged.
2012/10/20 11:47:25| 0 Swapfile clashes avoided.
2012/10/20 11:47:25|   Took 2.04 seconds (402003.52 objects/sec).
2012/10/20 11:47:25| Beginning Validation Procedure
2012/10/20 11:47:26|   Completed Validation Procedure
2012/10/20 11:47:26|   Validated 1639327 Entries
2012/10/20 11:47:26|   store_swap_size = 21508128
2012/10/20 11:47:26| storeLateRelease: released 0 objects
2012/10/20 11:47:31| WARNING: swapfile header inconsistent with 
available data

FATAL: Received Segment Violation...dying. <--
2012/10/20 11:47:31| storeDirWriteCleanLogs: Starting...
2012/10/20 11:47:31| WARNING: Closing open FD   21
2012/10/20 11:47:31| 65536 entries written so far.
2012/10/20 11:47:31|131072 entries written so far.
2012/10/20 11:47:31|196608 entries written so far.
2012/10/20 11:47:31|262144 entries written so far.
2012/10/20 11:47:31|327680 entries written so far.
2012/10/20 11:47:31|393216 entries written so far.
2012/10/20 11:47:31|458752 entries written so far.
2012/10/20 11:47:31|524288 entries written so far.
2012/10/20 11:47:31|589824 entries written so far.
2012/10/20 11:47:31|655360 entries written so far.
2012/10/20 11:47:31|720896 entries written so far.
2012/10/20 11:47:31|786432 entries written so far.
2012/10/20 11:47:31|   Finished.  Wrote 819742 entries.
2012/10/20 11:47:31|   Took 0.18 seconds (4520395.05 entries/sec)

Regards,
Ben

Hi Ben,
   in order to investigate we would need to know WHAT caused the
segment violation. Can you please paste the 20 lines BEFORE that
message in cache.log?

Thanks


On Fri, Oct 19, 2012 at 8:13 AM, Ben  wrote:

Hi Amos,

Any suggestions please? I tested the same setup with other 3.1.x
releases and

got the same error messages in cache.log.

What does this error message indicate? Kindly guide us to resolve it.


cat /var/log/squid/cache.log | grep -i fatal

Re: [squid-users] Re: squid crashes many times in a day

2012-10-20 Thread Ben

Hi Kinkie,

Thanks for your kind response. Please find below more entries before
and after the FATAL error.


What does this FATAL error mean?


2012/10/19 15:22:24| clientProcessRequest: Invalid Request
2012/10/19 15:22:39| clientProcessRequest: Invalid Request
2012/10/19 15:23:03| clientProcessRequest: Invalid Request
2012/10/19 15:23:35| clientProcessRequest: Invalid Request
2012/10/19 15:24:51| clientProcessRequest: Invalid Request
2012/10/19 16:03:02| clientProcessRequest: Invalid Request
2012/10/19 16:07:03| clientProcessRequest: Invalid Request
2012/10/19 16:08:41| clientProcessRequest: Invalid Request
2012/10/19 16:11:04| clientProcessRequest: Invalid Request
2012/10/19 16:13:50| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:15:05| clientProcessRequest: Invalid Request
2012/10/19 16:17:51| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:19:06| clientProcessRequest: Invalid Request
2012/10/19 16:23:01| WARNING: swapfile header inconsistent with 
available data

2012/10/19 16:23:07| clientProcessRequest: Invalid Request
2012/10/19 16:24:59| clientProcessRequest: Invalid Request
2012/10/19 16:25:08| clientProcessRequest: Invalid Request
2012/10/19 16:26:07| clientProcessRequest: Invalid Request
2012/10/19 16:27:01| clientProcessRequest: Invalid Request
2012/10/19 16:27:08| clientProcessRequest: Invalid Request
2012/10/19 16:27:08| WARNING: swapfile header inconsistent with 
available data

FATAL: Received Segment Violation...dying. <--
2012/10/19 16:27:08| storeDirWriteCleanLogs: Starting...
2012/10/19 16:27:08| WARNING: Closing open FD   21
2012/10/19 16:27:08| 65536 entries written so far.
2012/10/19 16:27:08|131072 entries written so far.
2012/10/19 16:27:08|196608 entries written so far.
2012/10/19 16:27:08|262144 entries written so far.
2012/10/19 16:27:08|327680 entries written so far.
2012/10/19 16:27:08|393216 entries written so far.
2012/10/19 16:27:08|   Finished.  Wrote 410417 entries.
2012/10/19 16:27:08|   Took 0.10 seconds (3937042.54 entries/sec).
CPU Usage: 367.492 seconds = 212.301 user + 155.191 sys
Maximum Resident Size: 6021760 KB
Page faults with physical i/o: 1
Memory usage for squid via mallinfo():
total space in arena:  1361484 KB
Ordinary blocks:   1361359 KB308 blks
2012/10/20 11:47:25| Finished rebuilding storage from disk.
2012/10/20 11:47:25|819651 Entries scanned
2012/10/20 11:47:25| 0 Invalid entries.
2012/10/20 11:47:25| 0 With invalid flags.
2012/10/20 11:47:25|819651 Objects loaded.
2012/10/20 11:47:25| 0 Objects expired.
2012/10/20 11:47:25| 0 Objects cancelled.
2012/10/20 11:47:25| 0 Duplicate URLs purged.
2012/10/20 11:47:25| 0 Swapfile clashes avoided.
2012/10/20 11:47:25|   Took 2.04 seconds (402003.52 objects/sec).
2012/10/20 11:47:25| Beginning Validation Procedure
2012/10/20 11:47:26|   Completed Validation Procedure
2012/10/20 11:47:26|   Validated 1639327 Entries
2012/10/20 11:47:26|   store_swap_size = 21508128
2012/10/20 11:47:26| storeLateRelease: released 0 objects
2012/10/20 11:47:31| WARNING: swapfile header inconsistent with 
available data

FATAL: Received Segment Violation...dying. <--
2012/10/20 11:47:31| storeDirWriteCleanLogs: Starting...
2012/10/20 11:47:31| WARNING: Closing open FD   21
2012/10/20 11:47:31| 65536 entries written so far.
2012/10/20 11:47:31|131072 entries written so far.
2012/10/20 11:47:31|196608 entries written so far.
2012/10/20 11:47:31|262144 entries written so far.
2012/10/20 11:47:31|327680 entries written so far.
2012/10/20 11:47:31|393216 entries written so far.
2012/10/20 11:47:31|458752 entries written so far.
2012/10/20 11:47:31|524288 entries written so far.
2012/10/20 11:47:31|589824 entries written so far.
2012/10/20 11:47:31|655360 entries written so far.
2012/10/20 11:47:31|720896 entries written so far.
2012/10/20 11:47:31|786432 entries written so far.
2012/10/20 11:47:31|   Finished.  Wrote 819742 entries.
2012/10/20 11:47:31|   Took 0.18 seconds (4520395.05 entries/sec)

Regards,
Ben

Hi Ben,
   in order to investigate we would need to know WHAT caused the
segment violation. Can you please paste the 20 lines BEFORE that
message in cache.log?

Thanks


On Fri, Oct 19, 2012 at 8:13 AM, Ben  wrote:

Hi Amos,

Any suggestions please? I tested the same setup with other 3.1.x releases and
got the same error messages in cache.log.

What does this error message indicate? Kindly guide us to resolve it.


cat /var/log/squid/cache.log | grep -i fatal
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.

[squid-users] Re: squid crashes many times in a day

2012-10-18 Thread Ben

Hi Amos,

Any suggestions please? I tested the same setup with other 3.1.x releases
and got the same error messages in cache.log.


What does this error message indicate? Kindly guide us to resolve it.


cat /var/log/squid/cache.log | grep -i fatal
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.
FATAL: Received Segment Violation...dying.

BR
Ben

Hi,

I noticed one serious issue in squid: squid crashed, and at that time the
log showed "FATAL: Received Segment Violation...dying.". What does it
mean?


Squid Cache: Version 3.1.19

BR
Ben




[squid-users] squid crashes many times in a day

2012-10-18 Thread Ben

Hi,

I noticed one serious issue in squid: squid crashed, and at that time the log
showed "FATAL: Received Segment Violation...dying.". What does it mean?


Squid Cache: Version 3.1.19

BR
Ben


[squid-users] Facing strange issues with squid after upgrading

2012-10-13 Thread Ben

Hi,

I have been using squid 3.1.10 for a long time and it works very well.
Thanks to the squid team.


But a few days ago I upgraded my version to squid 3.1.19, and 2-3 times
every day I face a strange issue.


The issue is that after some time squid freezes, meaning it does not accept
any more web traffic from the network; if I then reboot the server,
squid comes back to a good state for a while. If, instead of rebooting
the server, we restart the squid process, it shows the many entries below
in cache.log and takes a very long time to come back up.


2012/10/14 13:35:11|   29622272 entries written so far.
2012/10/14 13:35:13|   29687808 entries written so far.
2012/10/14 13:35:46|   29753344 entries written so far.
2012/10/14 13:36:32|   29818880 entries written so far.
2012/10/14 13:37:41|   29884416 entries written so far.

While these entries are coming into cache.log, I watch netstat at the same
time to check whether squid comes up on its port, but there is nothing from
squid's side. So I guess the internal squid database rebuild is happening.


After upgrading, my squid shows 85-90% memory usage while CPU is
normal, between 0-10%. While squid uses 90% of memory, I can see that swap
also comes into use.


Is there any way to tune squid's MEMORY use?

Total memory is 8 GB; before, I set 4 GB for squid, but now I set 2 GB for
squid.


Is there any command or configuration parameter in squid.conf to improve
the squid restart process (especially the rebuilding activity)?
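
A hedged sketch of both knobs, with example values only: cache_mem bounds the
in-memory object cache (total process usage will be higher than this), and a
clean shutdown keeps swap.state intact so the restart rebuild stays fast:

cache_mem 2048 MB                      # in-memory object cache; process RSS will exceed this
maximum_object_size_in_memory 512 KB   # keep large objects on disk only
shutdown_lifetime 10 seconds           # finish active requests, then exit cleanly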


My second server runs squid 3.1.10 and it is working awesome; not a
single pain point from it :)



Upgraded squid version : 3.1.19
OS : Centos 6.2 64 bit

BR
Ben





Re: [squid-users] Uploads not working behind squid proxy

2012-07-11 Thread Crawford, Ben
Turns out I just needed the
never_direct allow all
in the right spot, of course.

Cheers,
Ben

> well the answer is your other mail + the squid.conf
>
> "Without the cache_peer I can not get to any sites at all.  All
> internet (well, http and https) traffic on our network must go through
> the parent proxy, either directly or through a local child proxy."
>
> the proxy tries to connect to the direct upstream server to get access because
> you don't have an explicit "never_direct allow all" acl defined.
> so POST and other requests that require direct access will then be
> served by accessing the origin server.
> you must be explicit with cache_peer acl.
> replace:
> ##
>
> cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS
> ##
> with
> ##
> cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS
> name=upstream
> cache_peer_access upstream allow all
> never_direct allow all
> ##
>
> this will allow and will force all traffic through the upstream proxy
> server.
>
> Good luck,
> Eliezer
>
> 
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il
>
>


Re: [squid-users] Uploads not working behind squid proxy

2012-07-11 Thread Crawford, Ben
Sorry for double reply, sent that last one out a bit quicker than I should have.

On 12 July 2012 04:25, Eliezer Croitoru  wrote:
> two things:
> post a more detailed squid.conf to see if there is something wrong there.
>
> i am using squid3.1.19 and 3.2.16-17 and it works like for many others.
> this problem can be an issue about routing and not related to squid at all.

I understand that this could be a routing problem and have looked into
that a little.  The reason I didn't look very far into this is because
most things do work.  I would imagine that if this is a routing
problem then it would show up more often.

If I am wrong here, please let me know and I will take a harder look at routing.

> a 504 code is:
> 10.5.5 504 Gateway Timeout
>
> The server, while acting as a gateway or proxy, did not receive a timely
> response from the upstream server specified by the URI (e.g. HTTP, FTP,
> LDAP) or some other auxiliary server (e.g. DNS) it needed to access in
> attempting to complete the request.
>
>   Note: Note to implementors: some deployed proxies are known to
>   return 400 or 500 when DNS lookups time out.
>
> is there any enforcement on the usage of the cache_peer at the ip level? i.e.
> without the cache_peer proxy can you get sites fine?

Without the cache_peer I can not get to any sites at all.  All
internet (well, http and https) traffic on our network must go through
the parent proxy, either directly or through a local child proxy.

Thanks again,
Ben


>
> Eliezer
>
>
> On 7/11/2012 12:42 PM, Crawford, Ben wrote:
>>
>> Hi All,
>>
>> I have run into a problem with not being able to access a few specific
>> things on the web when running through our local proxy.
>>
>> Some details:
>> * The current setup is a Linux box running squid 3.1.19.
>> * This is being run behind a pfsense box that is load balancing our
>> two internet connections
>> * Both internet connections are behind the same proxy (we are actually
>> on a private network), which is set as the parent for our internal
>> proxy
>> * Squid is running in intercept mode
>>
>> With this setup, most things work as expected; I can visit web pages,
>> watch youtube videos, upload attachments to gmail.  However, some
>> things are not working.  The easiest example is speedtest.net.  I can
>> run the download test, but the upload test always fails.  Trying to
>> watch content on tvnz.co.nz (on demand content) does not work either.
>>
>> When running traffic without our internal proxy (ie direct to the
>> parent) everything works fine.  I'm stuck and can't find any
>> solutions.
>>
>> Here is what I have tried so far:
>> * First, I was hoping to run squid on the pfsense box, but ran into
>> similar problems, so I tried to isolate the problem by putting in the
>> Linux box.  (never a bad idea to be running more recent version of
>> squid either, it may be needed shortly for some of the newer features
>> anyway)
>> * Instead of running my full squid.conf, I am using the default
>> squid.conf with just the extra line to access the parent (cache_peer
>> 10.55.240.250 parent 3128 3130 no-query default login=PASS)
>> * I've read bits and pieces about similar problems dealing with sysctl
>> and some ipv4 settings.  None of this seemed to apply, and what I did
>> try didn't work.
>> * Checking on the specific web pages in firefox using firebug and I
>> can see some 504 errors (seemingly only on POST) - this lead me to
>> check the logs for POST with 504 errors (see logs below)
>> * Checked the problem in IE, Chrome and Firefox
>> * Lots of googling and reading of squid documentation
>>
>> Here is what is showing in the squid logs where there is a 504 with a
>> POST, you'll notice that most are for the local speedtest.net testing.
>>   I figured not much point finding lots of sites when just a few are
>> causing problems.
>>
>> 1342030821.058  59542 10.161.128.34 TCP_MISS/504 4301 POST
>> http://speedtest.worldnet.co.nz/speedtest.net/speedtest/upload.php? -
>> DIRECT/202.169.192.58 text/html
>> 1342030821.058  59536 10.161.128.34 TCP_MISS/504 4300 POST
>> http://speedtest.worldnet.co.nz/speedtest.net/speedtest/upload.php? -
>> DIRECT/202.169.192.58 text/html
>> 1342039010.134  60806 10.161.128.34 TCP_MISS/504 4285 POST
>> http://rt1403.infolinks.com/action/doq.htm? - DIRECT/64.71.153.213
>> text/html
>> 1342039947.624  59642 10.161.128.34 TCP_MISS/504 4834 POST
>> http://c.brightcove.com/services/messagebroker/amf? -
>> DIRECT/8.19.200.152 text/html
>> 1342040562.565  61340 10.161.128.34 

Re: [squid-users] Uploads not working behind squid proxy

2012-07-11 Thread Crawford, Ben
As requested, a more detailed squid.conf:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl localnet src 10.161.128.0/20
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS
http_access allow manager localhost
http_access allow localnet
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_port 10.161.128.11:3128 intercept
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320

Ben


On 12 July 2012 04:25, Eliezer Croitoru  wrote:
> two things:
> post a more detailed squid.conf to see if there is something wrong there.
>
> i am using squid3.1.19 and 3.2.16-17 and it works like for many others.
> this problem can be an issue about routing and not related to squid at all.
>
> a 504 code is:
> 10.5.5 504 Gateway Timeout
>
> The server, while acting as a gateway or proxy, did not receive a timely
> response from the upstream server specified by the URI (e.g. HTTP, FTP,
> LDAP) or some other auxiliary server (e.g. DNS) it needed to access in
> attempting to complete the request.
>
>   Note: Note to implementors: some deployed proxies are known to
>   return 400 or 500 when DNS lookups time out.
>
>
> is there any enforcement on the usage of the cache_peer at the ip level? i.e.
> without the cache_peer proxy can you get sites fine?
>
> Eliezer
>
>
> On 7/11/2012 12:42 PM, Crawford, Ben wrote:
>>
>> Hi All,
>>
>> I have run into a problem with not being able to access a few specific
>> things on the web when running through our local proxy.
>>
>> Some details:
>> * The current setup is a Linux box running squid 3.1.19.
>> * This is being run behind a pfsense box that is load balancing our
>> two internet connections
>> * Both internet connections are behind the same proxy (we are actually
>> on a private network), which is set as the parent for our internal
>> proxy
>> * Squid is running in intercept mode
>>
>> With this setup, most things work as expected; I can visit web pages,
>> watch youtube videos, upload attachments to gmail.  However, some
>> things are not working.  The easiest example is speedtest.net.  I can
>> run the download test, but the upload test always fails.  Trying to
>> watch content on tvnz.co.nz (on demand content) does not work either.
>>
>> When running traffic without our internal proxy (ie direct to the
>> parent) everything works fine.  I'm stuck and can't find any
>> solutions.
>>
>> Here is what I have tried so far:
>> * First, I was hoping to run squid on the pfsense box, but ran into
>> similar problems, so I tried to isolate the problem by putting in the
>> Linux box.  (never a bad idea to be running more recent version of
>> squid either, it may be needed shortly for some of the newer features
>> anyway)
>> * Instead of running my full squid.conf, I am using the default
>> squid.conf with just the extra line to access the parent (cache_peer
>> 10.55.240.250 parent 3128 3130 no-query default login=PASS)
>> * I've read bits and pieces about similar problems dealing with sysctl
>> and some ipv4 settings.  None of this seemed to apply, and what I did
>> try didn't work.
>> * Checking on the specific web pages in firefox using firebug and I
>> can see some 504 errors (seemingly only on POST) - this led me to
>> check the logs for POST with 504 errors (see logs below)
>> * Checked the problem in IE, Chrome and Firefox
>> * Lots of googling and reading of squid documentation
>>
>> Here is what is showing in the squid logs where there is a 504 with a
>> POST, you'll notice that most are for the local speedtest.net testing.
>>   I figured not much point finding lots of sites when just a few are
>> causing problems.
>>
>> 1342030821.058  59542 10.161.128.34 TCP_MISS/504 4301 POST
>> http:/

[squid-users] Uploads not working behind squid proxy

2012-07-11 Thread Crawford, Ben
Hi All,

I have run into a problem with not being able to access a few specific
things on the web when running through our local proxy.

Some details:
* The current setup is a Linux box running squid 3.1.19.
* This is being run behind a pfsense box that is load balancing our
two internet connections
* Both internet connections are behind the same proxy (we are actually
on a private network), which is set as the parent for our internal
proxy
* Squid is running in intercept mode

With this setup, most things work as expected; I can visit web pages,
watch youtube videos, upload attachments to gmail.  However, some
things are not working.  The easiest example is speedtest.net.  I can
run the download test, but the upload test always fails.  Trying to
watch content on tvnz.co.nz (on demand content) does not work either.

When running traffic without our internal proxy (ie direct to the
parent) everything works fine.  I'm stuck and can't find any
solutions.

Here is what I have tried so far:
* First, I was hoping to run squid on the pfsense box, but ran into
similar problems, so I tried to isolate the problem by putting in the
Linux box.  (never a bad idea to be running more recent version of
squid either, it may be needed shortly for some of the newer features
anyway)
* Instead of running my full squid.conf, I am using the default
squid.conf with just the extra line to access the parent (cache_peer
10.55.240.250 parent 3128 3130 no-query default login=PASS)
* I've read bits and pieces about similar problems dealing with sysctl
and some ipv4 settings.  None of this seemed to apply, and what I did
try didn't work.
* Checking on the specific web pages in firefox using firebug and I
can see some 504 errors (seemingly only on POST) - this led me to
check the logs for POST with 504 errors (see logs below)
* Checked the problem in IE, Chrome and Firefox
* Lots of googling and reading of squid documentation

Here is what is showing in the squid logs where there is a 504 with a
POST, you'll notice that most are for the local speedtest.net testing.
 I figured not much point finding lots of sites when just a few are
causing problems.

1342030821.058  59542 10.161.128.34 TCP_MISS/504 4301 POST
http://speedtest.worldnet.co.nz/speedtest.net/speedtest/upload.php? -
DIRECT/202.169.192.58 text/html
1342030821.058  59536 10.161.128.34 TCP_MISS/504 4300 POST
http://speedtest.worldnet.co.nz/speedtest.net/speedtest/upload.php? -
DIRECT/202.169.192.58 text/html
1342039010.134  60806 10.161.128.34 TCP_MISS/504 4285 POST
http://rt1403.infolinks.com/action/doq.htm? - DIRECT/64.71.153.213
text/html
1342039947.624  59642 10.161.128.34 TCP_MISS/504 4834 POST
http://c.brightcove.com/services/messagebroker/amf? -
DIRECT/8.19.200.152 text/html
1342040562.565  61340 10.161.128.34 TCP_MISS/504 4469 POST
http://2975c.v.fwmrm.net/ad/p/1? - DIRECT/75.98.70.31 text/html
1342040573.047  59531 10.161.128.34 TCP_MISS/504 4834 POST
http://c.brightcove.com/services/messagebroker/amf? -
DIRECT/8.19.200.152 text/html
1342040679.001  59688 10.161.128.34 TCP_MISS/504 4838 POST
http://c.brightcove.com/services/messagebroker/amf? -
DIRECT/64.152.208.202 text/html
1342040700.694  59871 10.161.128.34 TCP_MISS/504 4469 POST
http://2975c.v.fwmrm.net/ad/p/1? - DIRECT/75.98.70.31 text/html
1342040742.908  60168 10.161.128.34 TCP_MISS/504 4295 POST
http://speedtest.orcon.net.nz/speedtest/upload.php? -
DIRECT/219.88.241.70 text/html
1342040742.908  60162 10.161.128.34 TCP_MISS/504 4296 POST
http://speedtest.orcon.net.nz/speedtest/upload.php? -
DIRECT/219.88.241.70 text/html
1342042640.381  60407 10.161.128.34 TCP_MISS/504 4295 POST
http://speedtest.orcon.net.nz/speedtest/upload.php? -
DIRECT/219.88.241.70 text/html
1342042640.381  60026 10.161.128.34 TCP_MISS/504 4297 POST
http://speedtest.orcon.net.nz/speedtest/upload.php? -
DIRECT/219.88.241.70 text/html
1342042921.326  60879 10.161.128.34 TCP_MISS/504 4831 POST
http://c.brightcove.com/services/messagebroker/amf? -
DIRECT/64.152.208.202 text/html


Any suggestions about getting the rest of the web up running through
our local squid would be most appreciated.

Cheers,
Ben
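
One pattern worth noting in that log: the elapsed-time field (second
column, in milliseconds) sits at 59-61 seconds on every one of those
504s, which looks like a 60-second timer expiring somewhere on the
path.  A minimal squid.conf sketch of the timers worth ruling out
(these are real squid directives; the values shown are the usual
defaults to compare against, not a recommendation):

connect_timeout 1 minute
read_timeout 15 minutes
forward_timeout 4 minutes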


[squid-users] DSCP mark not working

2012-07-09 Thread Ben

Hi,

We have been running squid for a long time and it has been working fine. 
Recently we migrated squid to RHEL 6 to use the qos_flows DSCP marking parameter.


For testing purposes in the lab, we deployed two squid boxes, one with the 
RHEL rpm (Version 3.1.19) and the second box with squid compiled from 
source (Version 3.1.20).


On both squid boxes, we enabled '--enable-zph-qos' and configured qos_flows 
local-hit=0x30 in squid.conf


From the client pc, we set a static proxy in the browser with the squid ip 
and port. Everything is working fine but DSCP marking is not happening.


On both squid boxes, I ran tcpdump -vni eth0 | grep 'tos 0x30' in a 
terminal but nothing comes up on screen.


Kindly suggest where the mistake could be. Also let me know if anything is 
required from my side, such as logs.


As per my understanding, qos_flows local-hit marks packets with the given 
value when squid sees DISK_HIT, MEMORY_HIT or a relevant tag

Kindly correct me if I am making a mistake

Regards,
Ben
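
A hedged aside on the capture step: when tcpdump's output is piped to
grep it needs -l (line-buffered stdout), or matches can sit invisibly
in the buffer; alternatively, tcpdump's filter language can match the
TOS byte directly and skip grep.  Something along these lines (the
interface name is illustrative; ZPH marks the hit traffic squid sends
back toward the client, so capture on the client-facing interface):

tcpdump -vni eth0 '(ip[1] & 0xfc) == 0x30'

If nothing matches there either, it is worth confirming the running
binary was really built with --enable-zph-qos (squid -v prints the
configure options).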


[squid-users] Squid to get around Android proxy authentication

2012-04-30 Thread Crawford, Ben
Good Day,

I am running squid 2.7 (although switching to squid 3 is likely to
happen soon) on our local school internal proxy (Ubuntu) that is
behind a larger network proxy (which I don't have control over).

We have started allowing students to access our wireless network as the
proliferation of smart phones, tablets and laptops has been steadily
increasing.

The problem is Android does not play nice with proxies that require
authentication.  I had an idea of a way around this that would still tie
things to the individual logins.  The solution I have been looking at
is to either bind the http_port or MAC address (through arp) to a
specific cache peer.  Here is what I was thinking:

Either:
http_port 123 name=student1_port
cache_peer 10.x.x.x parent 3128 no-query login=user:my_pass name=student1_peer
cache_peer_access student1_peer allow student1_port

Or:
cache_peer 10.x.x.x parent 3128 no-query login=user:my_pass name=student1_peer
acl student1_mac arp 01:01:01:01:01:01
cache_peer_access student1_peer allow student1_mac

I was hoping that one of these solutions would allow me to point at the
local proxy and avoid having to provide details for the upstream proxy
which requires authentication (basic auth - which I continue to rail
against).  However, no such luck just yet.

I am still relatively new to squid, and searches along with trial and
error have also been unsuccessful.

Any suggestions would be greatly appreciated.

Cheers,
Ben
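
One piece looks missing from the http_port variant: cache_peer_access
matches ACLs, not port names directly, so the port name needs to be
wrapped in a myportname ACL, and cache_peer wants an ICP port field
before the options.  A sketch of how that might look (squid 3.x
syntax; all names hypothetical):

http_port 123 name=student1_port
acl on_student1_port myportname student1_port
cache_peer 10.x.x.x parent 3128 0 no-query login=user:my_pass name=student1_peer
cache_peer_access student1_peer allow on_student1_port
cache_peer_access student1_peer deny all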


Re: [squid-users] Caching in 3.2 vs 3.1

2012-03-14 Thread Ben

Hi Amos,

For the last 2-3 months I have been testing squid 3.2, version by version up 
to the current latest, and I observed that 3.1 works fantastically when we 
are looking for cache gain / cache hits.


Again, I say squid 3.1 is awesome for people who want cache hits / 
bandwidth savings. :-)


Regards,
Ben


cc'ing to squid-dev where the people who might know reside


Also, adding "debug_options 11,2" may show something useful in the 
HTTP flow for 3.2.


Amos
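
In 3.conf terms that suggestion would be (section 11 is squid's HTTP
debug section; ALL,1 keeps everything else quiet):

debug_options ALL,1 11,2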

On 15.03.2012 05:59, Erik Svensson wrote:

Hi,

Objects don't get cached in Squid 3.2. The same transactions and config
work in 3.1

I will show my problem with a simple webserver listening on 
127.0.0.1:9990
and sending transactions from curl to a squid listening on 
127.0.0.1:9993


3.1 logs first a MISS since the cache is empty and when repeating the
transaction a HIT.
3.2 logs 2 MISSes

# /opt/squid-3.1.19/sbin/squid -v
Squid Cache: Version 3.1.19
configure options:  '--prefix=/opt/squid-3.1.19' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --with-squid=/usr/local/src/squid-3.1.19
--enable-ltdl-convenience

# /opt/squid-3.2.0.16/sbin/squid -v
Squid Cache: Version 3.2.0.16
configure options:  '--prefix=/opt/squid-3.2.0.16' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --enable-ltdl-convenience

# cat 3.conf
http_port 127.0.0.1:9993
icp_port 0
cache_mem 128 mb
#cache_dir null /tmp
access_log  /tmp/3/access.log
cache_log   /tmp/3/cache.log
pid_filename/tmp/3/squid.pid
coredump_dir/tmp/3
refresh_pattern . 0 20% 4320
http_access allow all
http_reply_access allow all
shutdown_lifetime 2 seconds

# thttpd -p 9990 -d /tmp    # start thttpd webserver serving files in
directory /tmp

# echo HiHo >/tmp/x # Create a file to serve


# /opt/squid-3.1.19/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< Age: 8
< X-Cache: HIT from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# /opt/squid-3.1.19/sbin/squid -f 3.conf -k shutdown

# cat access.log
1331740179.023  2 127.0.0.1 TCP_MISS/200 339 GET
http://127.0.0.1:9990/x - DIRECT/127.0.0.1 text/plain
1331740187.003  0 127.0.0.1 TCP_MEM_HIT/200 346 GET
http://127.0.0.1:9990/x - NONE/- text/plain

# rm access.log


# /opt/squid-3.2.0.16/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:29 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.1 localhost.localdomain (squid/3.2.0.16)
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9

Re: [squid-users] Squid Processes

2012-02-26 Thread Ben



On 26/02/2012 6:59 a.m., Ben wrote:

Hi Amos,

On 22.02.2012 03:15, Steve Tatlow wrote:

Hi,

We are running squid as a transparent proxy, with dansguardian 
doing the

content filtering. All traffic will be coming from localhost and no
authentication is required.  Can someone tell me how I ensure there 
are

enough squid processes to support a large number of users (maybe 250
concurrent users)?


None of us can tell you specific numbers. It is dependent on your 
hardware and client traffic.


The thing to be aware of is that measuring in users is meaningless. 
One user can flood the proxy, or some thousands could leave it idle 
waiting for more work. Capacities are reliably measured only in 
requests per second.



To get the details you seek measure and get some idea of how many 
requests per second those users make at peak times, and how many the 
whole structure is capable of handling.
 Each Squid series has a theoretical limit which is hardware 
dependent (3.1 can do about 800 req/sec on a dual core 2.2GHz CPU 
etc). The configuration specifics you create and type of requests 
the clients will reduce the capacity limit from there.


You mean to say that a single squid instance can handle 800 req/sec on 
a dual core 2.2 GHz CPU? Can you elaborate in detail: how many hdds 
have you used, and is there any specific configuration you want to 
highlight?


We have a test machine which can reach that. Nothing special on the 
hardware, and in active use running several other services. But the 
test is a bit artificial. So I think overall it's a reasonable sort of 
result. Your mileage *will* vary.




I tested a single squid instance with 400-450 req/sec and it is 
performing fine. Currently I have squid deployed with a 175 Mbps 
bandwidth load. Now we plan to use it for 400 Mbps, which should be 
800 or 900 http req/sec. Can a single squid process handle such a 
heavy load?


The fact you got past 50Mbps easily at ~400 req/sec tells me your 
traffic might be a bit unusual. In the ISP scenarios I'm used to 
estimating with, most of the reports have needed two Squids to get over 
100Mbps. Good news for you, bad news for forecasting the limits.


You mean two instances of squid on the same h/w to handle 100 Mbps? In 
production I am using squid with 175 Mbps bandwidth usage and 450 http 
req/sec, and it seems fine. Yes, sometimes my cpu consumption is ~95% 
and memory is 85%, but generally cpu consumption is ~40% and memory is 
~70%.


So far I have tested with a single disk (10k rpm), but now I plan to 
upgrade with more hdds.




And what kind of h/w specification would you suggest for such a load?


At this point you have a Squid already running to use as a baseline. So 
you can look at the resource usage (CPU, memory, disk I/O, etc.) and 
guess (yes, guess) how much more load it can take before any one of 
those is maxed out.


Amos

Ben
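
For what it's worth, the 800-900 figure can be sanity-checked from the
numbers already in this thread, assuming the request mix stays the
same:

175 Mbps / 450 req/sec  ~= 0.39 Mbit (~49 KB) per request
400 Mbps / 0.39 Mbit    ~= 1030 req/sec

So if the traffic profile holds, the 400 Mbps target lands a little
above the 800-900 estimate.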


Re: [squid-users] Squid Processes

2012-02-25 Thread Ben

Hi Amos,

On 22.02.2012 03:15, Steve Tatlow wrote:

Hi,

We are running squid as a transparent proxy, with dansguardian doing the
content filtering. All traffic will be coming from localhost and no
authentication is required.  Can someone tell me how I ensure there are
enough squid processes to support a large number of users (maybe 250
concurrent users)?


None of us can tell you specific numbers. It is dependent on your 
hardware and client traffic.


The thing to be aware of is that measuring in users is meaningless. 
One user can flood the proxy, or some thousands could leave it idle 
waiting for more work. Capacities are reliably measured only in 
requests per second.



To get the details you seek measure and get some idea of how many 
requests per second those users make at peak times, and how many the 
whole structure is capable of handling.
 Each Squid series has a theoretical limit which is hardware dependent 
(3.1 can do about 800 req/sec on a dual core 2.2GHz CPU etc). The 
configuration specifics you create and type of requests the clients 
will reduce the capacity limit from there.


You mean to say that a single squid instance can handle 800 req/sec on a 
dual core 2.2 GHz CPU? Can you elaborate in detail: how many hdds have 
you used, and is there any specific configuration you want to 
highlight?


I tested a single squid instance with 400-450 req/sec and it is 
performing fine. Currently I have squid deployed with a 175 Mbps 
bandwidth load. Now we plan to use it for 400 Mbps, which should be 800 
or 900 http req/sec. Can a single squid process handle such a heavy load?


And what kind of h/w specification would you suggest for such a load?

Kindly suggest us.

With content filtering you can usually expect only to reach 30% of 
Squid's regular throughput due to the content processing overheads.





250 users is not large for Squid. Any of the production releases 
should be able to handle that many without causing much of a CPU bump 
on modern hardware. I think you can start with one Squid process and 
expand to more if you find it stressing the machines. More likely you 
will need more DansGuardian proxy processes though, that is where the 
heavy CPU consumption will occur.


Amos


Regards,
Ben


Re: [squid-users] squid mib information

2012-01-09 Thread Ben

Hi Amos,

Thanks for your kind response.

On 7/01/2012 8:32 p.m., Ben wrote:

Hi,

We would like to use squid snmp mibs to get statistics of squid
performance and cache gain.

We are using squid-3.1.10, by reference of 
http://wiki.squid-cache.org/Features/Snmp


We have only a one-line description for each OID. Is there any 
document which provides more details regarding each OID? Some OIDs 
are confusing, so a document with more details about them would be 
very helpful for understanding their exact meaning.


There is no better documentation than that page. Unless you are one of 
the rare people able to read MIB files without getting confused. In 
which case the MIB.txt can be found in the Squid sources.




Below are some of the OIDs which we decided to use with mrtg to get 
cache gain / bandwidth saving statistics from squid.



*.1.3.2.1.2.0 cacheHttpHits Counter32 2.0+ Number of HTTP Hits

It provides the url cache hits; is my perception right?


Yes.




*.1.3.2.1.4.0 cacheHttpInKb Counter32 2.0+ Number of HTTP KB's received

It says how much traffic comes from the internet to squid; is my 
perception right?




No. Received by Squid from clients.



*.1.3.2.1.5.0 cacheHttpOutKb Counter32 2.0+ Number of HTTP KB's 
transmitted


It says how much traffic goes out from squid to the internet; is my 
perception right?


No. Sent by Squid to clients.




*.1.3.2.1.13.0 cacheServerOutKb Counter32 2.0+ KB's of traffic sent 
to servers


what does it say?



"KB's of traffic sent to servers ". I think you already know what a 
server is.



server means squid itself, right? internet to squid ( in traffic )


*.1.3.2.1.12.0 cacheServerInKb Counter32 2.0+ KB's of traffic 
received from servers


what does it say?



"traffic received from servers".

squid to internet ( out traffic )



If I am wrong, kindly correct me.


The server metrics do not mention HTTP because this is the KB used by 
all server protocols.




Our network design :


   internet
   |
   core router -->  squid box
   |
  local network


We want to identify that how much cache gain / bandwidth saving we have
from squid?


Bandwidth Savings KB is:   cacheHttpOutKb - cacheServerInKb

Bandwidth Savings % is:  cacheRequestByteRatio.1


Amos


Ben
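
To make that formula concrete: the "*" prefix in the OID list earlier
in this thread is the squid enterprise base .1.3.6.1.4.1.3495, and the
counters can be read with the net-snmp tools.  A sketch, assuming the
default snmp_port 3401, a community string of "public", and an
snmp_access rule permitting the query:

snmpget -v2c -c public localhost:3401 .1.3.6.1.4.1.3495.1.3.2.1.5.0    # cacheHttpOutKb
snmpget -v2c -c public localhost:3401 .1.3.6.1.4.1.3495.1.3.2.1.12.0   # cacheServerInKb

Bandwidth saved in KB is then the first counter minus the second, per
the formula above.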


[squid-users] squid mib information

2012-01-06 Thread Ben

Hi,

We would like to use squid snmp mibs to get statistics of squid
performance and cache gain.

We are using squid-3.1.10, by reference of 
http://wiki.squid-cache.org/Features/Snmp

We have only a one-line description for each OID. Is there any document which 
provides more details regarding each OID? Some OIDs are confusing, so a 
document with more details about them would be very helpful for 
understanding their exact meaning.

Below are some of the OIDs which we decided to use with mrtg to get cache 
gain / bandwidth saving statistics from squid.


*.1.3.2.1.2.0 cacheHttpHits Counter32 2.0+ Number of HTTP Hits

It provides the url cache hits; is my perception right?


*.1.3.2.1.4.0 cacheHttpInKb Counter32 2.0+ Number of HTTP KB's received

It says how much traffic comes from the internet to squid; is my perception 
right?


*.1.3.2.1.5.0 cacheHttpOutKb Counter32 2.0+ Number of HTTP KB's transmitted

It says how much traffic goes out from squid to the internet; is my perception 
right?


*.1.3.2.1.13.0 cacheServerOutKb Counter32 2.0+ KB's of traffic sent to servers

what does it say?


*.1.3.2.1.12.0 cacheServerInKb Counter32 2.0+ KB's of traffic received from 
servers

what does it say?


Our network design :


   internet
   |
   core router -->  squid box
   |
  local network


We want to identify that how much cache gain / bandwidth saving we have
from squid?

If I am using a wrong oid to measure it, kindly correct me.


Regards,
Ben



Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Ben Greear

On 01/25/2011 11:14 AM, Pieter De Wit wrote:

Hi Ben,

I suspect that will do the trick :)


It seems it was a tad more tricky, but this appears to be working:

sbin/ebtables -t broute -A BROUTING -i br0 --logical-in veth2 -p IPv4 
--ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT
/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -m physdev 
--physdev-in veth2 -j REDIRECT --to-port 3128

The 'veth2' interface is the downstream port.

Thanks,
Ben




Let us know

Cheers,

Pieter

On Tue, 25 Jan 2011, Ben Greear wrote:


On 01/25/2011 10:36 AM, Ben Greear wrote:

On 01/25/2011 10:06 AM, Pieter De Wit wrote:

Hi Ben,

On 26/01/2011 06:55, Ben Greear wrote:

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the
outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?


You can, but that is not guaranteed, since the source port should be
assigned at random by the OS. Keep in mind that this will be
Chrome/IE/Firefox that makes the connection.
Having re-read your suggestion, are you not referring to the ethernet
port ?


I mean ethernet port/interface, something like '-i br0
--original-input-dev eth0'

If nothing comes to mind immediately, don't worry..I'll go read man
pages :)


Looks like '--physdev-in eth0'
might do the trick..we'll do some testing.

Thanks,
Ben



Thanks,
Ben





--
Ben Greear 
Candela Technologies Inc http://www.candelatech.com





--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Ben Greear

On 01/25/2011 10:36 AM, Ben Greear wrote:

On 01/25/2011 10:06 AM, Pieter De Wit wrote:

Hi Ben,

On 26/01/2011 06:55, Ben Greear wrote:

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?


You can, but that is not guaranteed, since the source port should be
assigned at random by the OS. Keep in mind that this will be
Chrome/IE/Firefox that makes the connection.
Having re-read your suggestion, are you not referring to the ethernet
port ?


I mean ethernet port/interface, something like '-i br0
--original-input-dev eth0'

If nothing comes to mind immediately, don't worry..I'll go read man
pages :)


Looks like '--physdev-in eth0'
might do the trick..we'll do some testing.

Thanks,
Ben



Thanks,
Ben





--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Ben Greear

On 01/25/2011 10:06 AM, Pieter De Wit wrote:

Hi Ben,

On 26/01/2011 06:55, Ben Greear wrote:

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?


You can, but that is not guaranteed, since the source port should be
assigned at random by the OS. Keep in mind that this will be
Chrome/IE/Firefox that makes the connection.
Having re-read your suggestion, are you not referring to the ethernet
port ?


I mean ethernet port/interface, something like '-i br0 --original-input-dev 
eth0'

If nothing comes to mind immediately, don't worry..I'll go read man pages :)

Thanks,
Ben


--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



Re: [squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Ben Greear

On 01/25/2011 09:48 AM, Pieter De Wit wrote:

Hi Ben,

There sure is :)

Change the IP Tables rule at the bottom to something like this:

/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp -s 192.168.0.0/24
--dport 80 -j REDIRECT --to-port 3128

Replace the 192.168 with your network. Keep in mind that you can have
multiples of these :)

In a nutshell, IP Tables was making each request (even from the outside
world) go via Squid.


Do you happen to know if it can be done based on incoming (real) port
so we don't have to care about IP addresses?


The other solution is to process those via squid, which will take some
load off the web servers.


I'm a bit out of the loop, but for whatever reason, the users don't
want this to happen.

Thanks for the quick response!

Ben


--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



[squid-users] Question on transparent proxy with web server behind proxy.

2011-01-25 Thread Ben Greear

Hello!

We have a squid + bridge + transparent proxy working pretty
well.  It seems to be properly caching and dealing with data
when requests are coming from behind the bridge to the outside
world.

But, there are some web servers behind the bridge that should
be accessible to the outside world.  When the outside attempts
to access them, squid is attempting to cache those requests
as well.

Is there any way to just have squid handle traffic originating
on the inside?

We're using firewall rules like this:

/sbin/ebtables -t broute -A BROUTING -i br0 -p IPv4 --ip-protocol 6 
--ip-destination-port 80 -j redirect --redirect-target ACCEPT
/sbin/iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT 
--to-port 3128

Thanks,
Ben

--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



[squid-users] ident authentication and follow_x_forwarded_for

2010-05-11 Thread Ben Miller
Greetings,

I am configuring a Squid/Dansguardian web proxy/content filter. The
flow of traffic looks like this:

Client --> Proxy:8080 (Dansguardian) --> 127.0.0.1:3128 (Squid running
on Proxy) --> Edge firewall

The relevant portions of squid.conf follow:

==
acl localnet src 10.0.0.0/8

# Authentication ACLs
# Allow ident lookups on internal clients
#ident_lookup_access allow localnet
ident_lookup_access allow localnet
ident_lookup_access deny all

# Allow clients with IDENT
acl ident_auth ident REQUIRED
# If they don't have ident login restrict access to authorized via ldap
acl ldap_auth proxy_auth REQUIRED

# Attempt ident, then LDAP/basic authentication. Note that Squid is
only listening on 127.0.0.1:3128, so the following lines are to
support acl_uses_indirect_client
http_access allow ip_authenticated
http_access allow ident_auth localnet
http_access allow ldap_auth localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# OPTIONS FOR X-Forwarded-For
# -

# Allow Squid to see Dansguardian IP addresses
follow_x_forwarded_for allow localhost
follow_x_forwarded_for deny all

# NETWORK OPTIONS
# -

# Listen only to Dansguardian
http_port 127.0.0.1:3128

==


I am attempting to configure Squid to authenticate with ident, but it
seems that the 'follow_x_forwarded_for allow localhost' is not being
honored by the ident authenticator. Is there any way to configure
Squid to send the ident queries to the originating client?

I have confirmed that follow_x_forwarded_for is functional for other
things (logging of client IP addresses for example), and that ident
queries are being responded to by the clients. Squid is simply never
asking for ident and is skipping directly to LDAP/Basic
authentication.

Thanks in advance for any help you may provide,

Ben Miller

6 X 9 = 42


[squid-users] squid_ldap_auth just hangs

2009-06-11 Thread Ben Stokes
Hi all,

I'm unable to get squid_ldap_auth to do anything against my LDAP
source which is a Windows 2003 native mode domain controller. Here's
my latest iteration of failed attempts, although I have also tried
many variations of the below.

/usr/lib64/squid/squid_ldap_auth \
-b "dc=corp,dc=ads" \
-h 10.11.2.48 \
-p 389 \
-D "CN=svc_squid,OU=Service Accounts,OU=Service & Admin
Accounts,DC=corp,DC=ads" \
-w password \
-f "sAMAccountName=%s"

What happens next is nothing - it just sits at a new line. Doesn't
seem to ever time out or give any kind of output, even if I try using
the -c or -t options. I can telnet to my dc on port 389 and it
connects OK so I know network/DNS are working OK. The user account is
new and the password is OK. I tried using -v 2 and -v 3 and neither
worked.

I tested using ldapsearch and it was successful, using:

ldapsearch -x \
-b "OU=Service Accounts,OU=Service & Admin Accounts,DC=corp,DC=ads" \
-h 10.11.2.48 \
-D "CN=svc_squid,OU=Service Accounts,OU=Service & Admin
Accounts,DC=corp,DC=ads" \
-W "cn=sqlmailbox"

... I get a load of information back about the user account.

# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1

What am I doing wrong with squid_ldap_auth? I've tried it on 2 servers
and the same thing happens (Red Hat x64 and Ubuntu x32), so it's not
distro related or due to a specific version of Squid. I'm guessing I
am missing some options but reading through the help file and mailing
list archive is not getting me anywhere. Any thoughts welcomed.

Yours in confusion,
Ben Stokes
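
One likely explanation for the hang: squid basic-auth helpers,
squid_ldap_auth included, read "username password" pairs on stdin and
answer OK or ERR per line, so run by hand they simply sit waiting for
input.  A quick way to exercise it (the credentials are hypothetical):

echo "jsmith userpassword" | /usr/lib64/squid/squid_ldap_auth \
  -b "dc=corp,dc=ads" -h 10.11.2.48 -p 389 \
  -D "CN=svc_squid,OU=Service Accounts,OU=Service & Admin Accounts,DC=corp,DC=ads" \
  -w password -f "sAMAccountName=%s"

It should print OK for a good login and ERR otherwise.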


Re: [squid-users] HTTP/0.0?

2009-06-10 Thread Ben Scott
On Wed, Jun 10, 2009 at 11:20 PM, George
Herbert wrote:
> The 400 code makes sense.  The HTTP/0.0 in the log (vs 1.0) doesn't, to me...

  I think that's just a consequence of the fact that Squid never got
anything that it could parse as a valid request, so it never got as
far as negotiating a protocol level.

  Consider: If you "telnet squid 8080" and type "bogus" and hit
[ENTER], what HTTP protocol version is that?

-- Ben


Re: [squid-users] restarting squid without affecting clients?

2009-05-29 Thread Ben Scott
On Fri, May 29, 2009 at 5:43 PM, John Horne  wrote:
> I would agree with others in using 'squid -k reconfigure'. However, I
> always run 'squid -k parse' beforehand, just to make sure the config
> file is valid.

  I believe "squid -k reconfigure" parses the config file and refuses
to attempt the restart if it's not valid.

-- Ben
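
That habit collapses nicely into one shell line, where the reconfigure
only runs if the parse succeeds:

squid -k parse && squid -k reconfigure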


Re: [squid-users] Squid cache cleanup

2009-05-29 Thread Ben Scott
On Fri, May 29, 2009 at 11:22 AM, Maxime Gaudreault
 wrote:
> Is there a way to delete objects in the cache that are unused for X days

  Squid manages the cache automatically.  The least recently used
objects are removed as needed.

-- Ben


Re: [squid-users] Why squid is using HTTP 1.0?

2009-05-29 Thread Ben Scott
On Fri, May 29, 2009 at 10:44 AM, Roy M.  wrote:
> I use squid as reverse proxy, however, in the backend Apache access
> log, I found that squid is using HTTP 1.0 to connect, but in the squid
> access log, it is using HTTP 1.1.

  What release of Squid?  Most releases don't do HTTP/1.1 at all.  I
guess 3.1 is supposed to have more support for HTTP/1.1.

-- Ben


Re: [squid-users] Gmail attachment

2009-05-26 Thread Ben Scott
On Tue, May 26, 2009 at 11:18 AM, Nitin Bhadauria
 wrote:
> I am using squid 3.0 with ntlm authentication. Now when a user tries to
> attach a file in gmail he gets an authentication window; even if he
> enters the user name and passwd the attachment does not upload.

  I see the same thing with Squid 2.6.STABLE6 on CentOS 5.

  Turning off the "advanced" attachment feature of Gmail works around
the problem.  (The "advanced" attachment uses Adobe Flash, if that
helps anyone figure out a cause.)

-- Ben


Re: [squid-users] CPU spikes, heap fragmentation, memory_pools_limit question

2009-03-18 Thread Ben Drees

Ben Drees wrote:
After Squid has been up for a day or two handling about 500 (mostly 
cacheable) requests per second, we start to see CPU spikes reaching 
100% and response times getting longer.  It usually recovers on its 
own, but we sometimes resort to restarting it, which always fixes the 
problem quickly.  Attaching gdb and hitting Ctrl-C randomly while it 
is in this state usually lands in malloc.  Zenoss plots (from SNMP) of 
the number of cached objects always show a decline when this is 
happening, as if a burst of requests yielding larger responses is 
displacing many more smaller responses already in the cache.


The config uses no disk cache ("cache_dir null 
/mw/data/cache/diskcache") and roughly 3GB ("cache_mem 3072 MB") of 
memory cache on an 8GB machine.  I've tried bumping memory_pools_limit 
up to 1024 MB from the default, but that doesn't seem to make a 
difference.


Here's some of the configuration:

cache_mem 3072 MB
maximum_object_size_in_memory 512 KB
cache_dir null /mw/data/cache/cache/diskcache
maximum_object_size 512 KB
log_mime_hdrs on
debug_options ALL,1 99,3
strip_query_terms off
buffered_logs on
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
store_avg_object_size 8 KB
half_closed_clients off
snmp_access allow snmppublic localhost
never_direct allow all
check_hostnames off
retry_on_error off



[squid-users] CPU spikes, heap fragmentation, memory_pools_limit question

2009-03-18 Thread Ben Drees

Hi,

I'm running Squid 2.6STABLE21 as a reverse proxy (migration to 
2.7STABLE6 in progress).


After Squid has been up for a day or two handling about 500 (mostly 
cacheable) requests per second, we start to see CPU spikes reaching 100% 
and response times getting longer.  It usually recovers on its own, but 
we sometimes resort to restarting it, which always fixes the problem 
quickly.  Attaching gdb and hitting Ctrl-C randomly while it is in this 
state usually lands in malloc.  Zenoss plots (from SNMP) of the number 
of cached objects always show a decline when this is happening, as if a 
burst of requests yielding larger responses is displacing many more 
smaller responses already in the cache.


The config uses no disk cache ("cache_dir null 
/mw/data/cache/diskcache") and roughly 3GB ("cache_mem 3072 MB") of 
memory cache on an 8GB machine.  I've tried bumping memory_pools_limit 
up to 1024 MB from the default, but that doesn't seem to make a difference.


I've modified the source in ways that use a bit more dynamic memory than 
usual - for example logging all four of the HTTP headers (client/server, 
request/response) and I've added a few fields to some core data 
structures like _request_t and _AccessLogEntry.  But I'm pretty sure 
there are no memory leaks, and it has run smoothly for extended periods 
in the past.


It seems like there is something about a newly shifting workload that's 
exposing a heap fragmentation problem, and I'm trying to get a grip on 
how memory pools work.  If my extended code uses only xmalloc() for 
dynamic memory, do those objects automatically become candidates for 
storage in a memory pool when freed? Or do I have to do something 
special to associate them with a memory pool?


Thanks,
Ben



[squid-users] Squid Performance Tuning with 0 IO Wait

2009-03-12 Thread Ben Jonston
Hi Everyone,

I am currently doing performance testing with squid 3 and I seem to be
running into some bottlenecks.  I have done exhaustive research
through the squid mail archives, Duane Wessels' O'Reilly book (a great
resource) and other areas.

I have a dual hyperthreading Xeon machine with 8GB of Ram running
CentOS 5.2 with its 2.6.18 based kernel.  This machine is hooked up to
an I/O subsystem that essentially provides no device I/O waits. This
has been confirmed with 'top' showing effectively 0%wa during peak
load periods.

Apart from I/O tuning, what measures should I take to tune squid for
maximum requests per second and cache hit rates?

Any help or pointers would be greatly appreciated.

Best Regards,
Ben Jonston
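
A few of the non-I/O knobs commonly examined for this (directive names
are from squid 3.x; the values are illustrative guesses for an 8GB
box, not recommendations):

cache_mem 4096 MB
maximum_object_size_in_memory 512 KB
memory_pools on
max_filedescriptors 16384

Beyond squid.conf, OS-level file descriptor and ephemeral port limits
tend to be the next bottleneck once disk I/O is out of the picture.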


[squid-users] Squid Performance Tuning with 0% IO Wait

2009-03-10 Thread Ben Jonston
Hi Everyone,

I am currently doing performance testing with squid 3 and I seem to be
running into some bottlenecks.  I have done exhaustive research
through the squid mail archives, Duane Wessels' O'Reilly book (a great
resource) and other areas.

I have a dual hyperthreading Xeon machine with 8GB of Ram running
CentOS 5.2 with its 2.6.18 based kernel.  This machine is hooked up to
an I/O subsystem that essentially provides no device I/O waits. This
has been confirmed with 'top' showing effectively 0%wa during peak
load periods.

Apart from I/O tuning, what measures should I take to tune squid for
maximum requests per second and cache hit rates?

Any help or pointers would be greatly appreciated.

Best Regards,
Ben Jonston


[squid-users] reverse proxy, how many origin servers?

2008-07-30 Thread Ben Drees

Hi,

We're using Squid 2.6 as a reverse proxy and load balancer.  Does anyone 
out there have experience concerning the maximum number of origin 
servers Squid can load balance before problems start to arise?  Are 
other limits likely to come into play that make this one moot?


Thanks,
Ben


Re: [squid-users] multi original servers

2008-06-09 Thread Ben Hollingsworth

Ken W. wrote:

Under squid's reverse proxy mode, if there is more than one origin
server, how do I configure it?

cache_peer InsideIP1 parent 80 0 no-query originserver  name=Myserver
round-robin
cache_peer InsideIP2 parent 80 0 no-query originserver  name=Myserver
round-robin

Is the config above right? The two lines have the same value for
'name='; is that right?
  


In my testing, I found that the names had to be slightly different.  For 
instance:


cache_peer INTERNALIP1 parent 80 0 no-query originserver login=PASS 
name=INTERNALNAME1-peer sourcehash
cache_peer INTERNALIP2 parent 80 0 no-query originserver login=PASS 
name=INTERNALNAME2-peer sourcehash

cache_peer_access INTERNALNAME1-peer allow sites_INTERNALNAME
cache_peer_access INTERNALNAME2-peer allow sites_INTERNALNAME







Re: [squid-users] Quick question on using squid as a reverse proxy

2008-04-25 Thread Ben Hollingsworth
Steven Pfister wrote:
> Besides taking away direct access to the webserver (and any vulnerabilities 
> it may have) and providing some caching for static content, what are some 
> other advantages of using squid this way? I'm trying to help put together a 
> security recommendation.
>   

Squid can terminate an SSL connection and then speak HTTP to the real
server, allowing you to secure the outside access without having to
SSL-enable all inside access.  If you do this with multiple servers, you
can use a single wildcard SSL certificate on the squid box to cover all
your inside servers, which saves money.  We do this.
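
A minimal sketch of that arrangement (the certificate paths and inside
address are illustrative):

https_port 443 cert=/etc/squid/wildcard.crt key=/etc/squid/wildcard.pem vhost
cache_peer InsideIP parent 80 0 no-query originserver login=PASS name=inside-peer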




Re: [squid-users] Rewrite http to https for owa.

2008-04-22 Thread Ben Hollingsworth
Dwyer, Simon wrote:
> One last step to have it fully working is to rewrite addresses coming in on
> http to https.  This is for OWA.  I have tried to use squirm and have some
> success.  What I need to do is redirect http://mail.domainname.com/ to
> https://mail.domainname.com/owa, for all reverse proxy requests.  Is there
> an easier way to do this?  I have googled it without much success.
>   

Here's how I do exactly that.  In squid.conf:

url_rewrite_program /usr/local/bin/rewrite-http

and then:

% cat /usr/local/bin/rewrite-http
#!/usr/bin/perl
#
# URL rewriter for squid to convert HTTP requests to HTTPS.
# Return an HTTP permanent redirect back to the browser.
# http://wiki.squid-cache.org/SquidFaq/SquidRedirectors
#
$| = 1;
while (<>) {
    s/^http:/301:https:/;   # replace "http" with "https"
    print;
}





Re: [squid-users] what to block from the proxy to speed up?

2008-04-08 Thread Ben Hollingsworth

Rakotomandimby Mihamina wrote:

Hi,
I want to speedup the websurfing of my LAN.
I have a squid running on the gateway, not transparent (people choose 
to use the proxy or not).
I would like to know if there is a list of url patterns to "block" in 
order to have smoother surfing.

For example, I already "blocked"
 - googlesyndication
 - google-analytics


A log graphing program like Calamaris will tell you definitively where 
all your bandwidth is being spent.  It's invaluable for determining what 
to block.  We block youtube, myspace, and the like here at work.



Re: [squid-users] Pegging CPU with epoll_wait

2008-04-04 Thread Ben Hollingsworth

Henrik Nordstrom wrote:
restarting squid (maybe a few hours or a few days), it starts pegging 
the CPU at 100%.  Running strace on the squid processes scrolls:


epoll_wait(3, {}, 256, 0)   = 0

as fast as my screen will scroll.  Restarting squid makes it settle down 
again for a while.



... then file a bug report and attach your cache.log file.
  


It happened again, and bug #2296 has been filed.  FYI, squidclient reports:

# squidclient -p 80 mgr:events
HTTP/1.0 200 OK
Server: squid/2.6.STABLE6
Date: Fri, 04 Apr 2008 16:41:22 GMT
Content-Type: text/plain
Expires: Fri, 04 Apr 2008 16:41:22 GMT
Last-Modified: Fri, 04 Apr 2008 16:41:22 GMT
X-Cache: MISS from revproxy.bryanlgh.org
X-Cache-Lookup: MISS from revproxy.bryanlgh.org:80
Via: 1.0 revproxy.bryanlgh.org:80 (squid/2.6.STABLE6)
Connection: close

Last event to run: storeClientCopyEvent

Operation   Next Execution  Weight  Callback Valid?
storeClientCopyEvent0.00 seconds0   yes
storeClientCopyEvent0.00 seconds0   yes
storeClientCopyEvent0.00 seconds0   yes
MaintainSwapSpace   0.449644 seconds1   N/A
ipcache_purgelru5.780025 seconds1   N/A
fqdncache_purgelru  9.699383 seconds1   N/A
storeDirClean   13.604307 seconds   1   N/A
statAvgTick 43.007845 seconds   1   N/A
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerClearRR 102.739505 seconds  0   yes
peerRefreshDNS  2868.335948 seconds 1   N/A
User Cache Maintenance  3102.819286 seconds 1   N/A
storeDigestRebuildStart 3103.252835 seconds 1   N/A
storeDigestRewriteStart 3103.278672 seconds 1   N/A
peerDigestCheck 141057.508629 seconds   1   yes




[squid-users] Pegging CPU with epoll_wait

2008-04-02 Thread Ben Hollingsworth
I'm running Squid 2.6STABLE6 (the RedHat-distributed version) on a stock 
RHEL 5.1 64-bit server with kernel 2.6.18-53.1.13.el5.  Some time after 
restarting squid (maybe a few hours or a few days), it starts pegging 
the CPU at 100%.  Running strace on the squid processes scrolls:


   epoll_wait(3, {}, 256, 0)   = 0

as fast as my screen will scroll.  Restarting squid makes it settle down 
again for a while.  This server sees only a few hits an hour.  What's 
causing this, and how do I stop it?



Re: [squid-users] Squid reverse proxy load balancing

2008-03-28 Thread Ben Hollingsworth

Ben Hollingsworth wrote:
I've got squid running as a reverse proxy, terminating HTTPS requests 
and forwarding them to HTTP(S) servers on the inside.  I've now gotten 
a request to use this same proxy to load balance requests between 
multiple internal servers.  It looks like you can do this by 
specifying two "cache_peer" lines with different IP's, and putting the 
"round-robin" flag on them, like this:


cache_peer InsideIP1 parent 80 0 no-query originserver login=PASS 
name=InsideName-peer round-robin
cache_peer InsideIP2 parent 80 0 no-query originserver login=PASS 
name=InsideName-peer round-robin



Using this setup, what will happen if one of those servers goes down?  
Will half of the requests fail, or will squid transparently resend the 
request to the working server?


Is there any way to specify automatic connection persistence, where 
all requests from a certain client will go to the same back end server 
so as to maintain session state & the like?  I don't want to split 
them up manually using ACL's; I want squid to do this for me while 
allowing for down servers (see above).


BTW, I'm running Squid 2.6.STABLE6 on RHEL 5.



[squid-users] Squid reverse proxy load balancing

2008-03-28 Thread Ben Hollingsworth
I've got squid running as a reverse proxy, terminating HTTPS requests 
and forwarding them to HTTP(S) servers on the inside.  I've now gotten a 
request to use this same proxy to load balance requests between multiple 
internal servers.  It looks like you can do this by specifying two 
"cache_peer" lines with different IP's, and putting the "round-robin" 
flag on them, like this:


cache_peer InsideIP1 parent 80 0 no-query originserver login=PASS 
name=InsideName-peer round-robin
cache_peer InsideIP2 parent 80 0 no-query originserver login=PASS 
name=InsideName-peer round-robin


Using this setup, what will happen if one of those servers goes down?  
Will half of the requests fail, or will squid transparently resend the 
request to the working server?


Is there any way to specify automatic connection persistence, where all 
requests from a certain client will go to the same back end server so as 
to maintain session state & the like?  I don't want to split them up 
manually using ACL's; I want squid to do this for me while allowing for 
down servers (see above).
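
On the persistence half of the question, cache_peer has a sourcehash
option that keys peer selection on the client's source IP (it appears
elsewhere on this list for exactly this purpose); a sketch of the two
lines rewritten that way:

cache_peer InsideIP1 parent 80 0 no-query originserver login=PASS name=inside1-peer sourcehash
cache_peer InsideIP2 parent 80 0 no-query originserver login=PASS name=inside2-peer sourcehash

On the failover half, squid should mark a peer dead after repeated
connection failures and route around it, though requests in flight at
the moment of failure may still see errors.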



Re: [squid-users] squidGuard - log but allow access

2008-03-27 Thread Ben Hollingsworth




Quoting "Dennis B. Hopp" <[EMAIL PROTECTED]>:


I've setup squidGuard and it works pretty well.  What I would like to
do is to have squidGuard log when somebody tries to go to a specific
targetgroup but allow them access rather then doing a redirect.

I can only seem to get it to either log and block access or allow
access but not log.  This is what I have in squidGuard.conf



dest Warez {
domainlist warez/domains
#   urllist warez/urls
log warez.log
}

acl {
default {
pass Porn Warez all
redirect 
http://cache1.server/cgi-bin/squidGuard.cgi?url=%u

}
}



I've tried taking out the redirect statement as well.  It allows the
access but doesn't log it.


You need to add a "verbose" flag to your log line in order to log passed 
items.


dest warez {
   domainlist blacklists/warez/domains
   urllist blacklists/warez/urls
   # The "verbose" option logs all hits -- even those that pass.
   # Without "verbose", only redirected/rewritten hits are logged.
   log verbose warez.log
}




Re: [squid-users] RAID is good

2008-03-25 Thread Ben Hollingsworth



One should also consider the difference between
simple RAID and extremely advanced RAID disk systems
(e.g. EMC and other arrays).
The external disk arrays like EMC with internal RAID5 are simply faster
than a JBOD of internal disks.


How many write-cycles does EMC use to backup data after one 
system-used write cycle?
How many CPU cycles does EMC spend figuring out which disk the 
file-slice is located on, _after_ squid has already hashed the file 
location to figure out which disk the file is located on?


Regardless of speed, unless you can provide a RAID system which has 
less than one hardware disk-io read/write per system disk-io 
read/write you hit these theoretical limits.


I can't quote disk cycle numbers, but I know that our fiber-connected HP 
EVA8000's (with ginormous caches and LUNs spread over 72 spindles, even 
at RAID5) are one hell of a lot faster than the local disks.  The 2 Gbps 
fiber connection is the limiting factor for most of our high-bandwidth 
apps.  In our shop, squid is pretty low bandwidth by comparison.  We 
normally hover around 100 req/sec with occasional peaks at 200 req/sec.


But its not so much a problem of human-noticable absolute-time as a 
problem of underlying duplicated disk-io-cycles and 
processor-io-cycles and processor delays remains.


For now the CPU half of the problem gets masked by the 
single-threadedness of squid (never thought you'd see that being a 
major benefit, eh?). If squid begins using all the CPU threads, the OS 
will lose out on its spare CPU cycles on dual-core machines and RAID 
may become a noticeable problem there.


Your arguments are valid for software RAID, but not for hardware RAID.  
Most nicer systems have a dedicated disk controller with its own 
processor that handles nothing but the onboard RAID.  A fiber-connected 
disk array is conceptually similar, but with more horsepower.  The CPU 
never has to worry about overhead in this case.  Perhaps for these 
scenarios, squid could use a config flag that tells it to put everything 
on one "disk" (as it sees it) and not bother imposing any of its own 
overhead for operations that will already be done by the array controller.





Re: [squid-users] Problem with SSL/Http and Squid in Reverse Proxy

2008-03-05 Thread Ben Hollingsworth

Kaddu, Patrick wrote:

I have set up Squid3 with SSL as a Reverse Proxy; SSL works as expected,
but when a backend server has hardcoded links inside a web application
like http://bla.bla.bla , the url changes when the user clicks on this
link and you have no more ssl, only http! 


Can you force it to use only ssl, even if there are hardcoded links inside
the application?
  


We've run into the same problem, and have only partially solved it.  For 
simple web pages, we set up squid to listen on port 80.  We then 
configured a rewriter that replaces "http://" in any URL with 
"301:https://" to send a permanent redirect back to the client (see below).


The problem comes with form submissions.  The HTTP spec prohibits 
clients from changing the URL of POST requests without confirming with 
the user (see section 10.3.2 & 10.3.4 of RFC 2616: 
http://www.ietf.org/rfc/rfc2616.txt?number=2616 ).  Neither IE nor 
Firefox bother confirming this, and instead just change the method to 
"GET," which drops all the form variables on the floor.  In short, form 
submissions that hardcode the "http://" won't work using this method.  
You can find my thread on this topic in the archives between 23 Jan - 1 
Feb 2008.  I'd love to hear any suggestions around it, as it's a deal 
breaker for us on this project.


In squid.conf:
url_rewrite_program /usr/local/bin/rewrite-http

> cat /usr/local/bin/rewrite-http
#!/usr/bin/perl
#
# URL rewriter for squid to convert HTTP requests to HTTPS.
# Return an HTTP permanent redirect back to the browser.
# http://wiki.squid-cache.org/SquidFaq/SquidRedirectors
#
$| = 1;
while (<>) {
   s/^http:/301:https:/;   # replace "http" with "https"
   print;
}




Re: [squid-users] Need help

2008-03-05 Thread Ben Hollingsworth

piyush joshi wrote:

Dear All,
  Can anyone suggest any free software to monitor squid
which will show information like CPU usage, memory usage, number of
hits, the IP addresses requests come from, top users, top sites, and
top bandwidth?  Please reply to me, I will be grateful to you.
  


We use a combination of calamaris and cacti/SNMP to get all those stats.



Re: [squid-users] Auth through HTTPS reverse proxy

2008-03-04 Thread Ben Hollingsworth

Ben Hollingsworth wrote:
I've setup Squid 2.6.STABLE6 as a reverse proxy.  It terminates SSL 
connections using a wildcard cert and then passes the connections to 
back-end servers using either HTTP or HTTPS.  All works well for 
servers that don't require any authentication (or which let the web 
application handle its own authentication).  However, when I try to 
use Apache's native authentication to restrict directory access, any 
access through the proxy always fails authentication.  Access directly 
to the server (bypassing the proxy) authenticates just fine, so it 
appears that something about my Squid setup is causing authentication 
to break.  This happens regardless of whether the back-end is running 
HTTP or HTTPS.  The squid & apache logs don't tell me anything.  I've 
looked over packet dumps (on the HTTP side, of course), but I don't 
see the user/pwd anywhere.  Any ideas what I'm doing wrong?


Squid.conf:   ("docs" is the server in question)

http_port 80 vhost
https_port 443 cert=/etc/squid/server.crt key=/etc/squid/server.pem vhost
icp_port 0
cache_peer 172.26.6.159 parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=cmaxx-app-peer

cache_peer 172.22.65.2 parent 80 0 no-query originserver name=docs-peer
cache_peer 172.22.66.208 parent 80 0 no-query originserver 
name=ocsapp-peer
cache_peer 172.22.66.206 parent 80 0 no-query originserver 
name=ocsinf-peer


OK, I fixed my problem.  I need to add "login=PASS" to the option list 
in the cache_peer lines.  Otherwise, it wasn't passing login info back 
to the real server.
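
For the archives, that means the peer lines above gain a login=PASS option, 
e.g. the docs peer becomes:

cache_peer 172.22.65.2 parent 80 0 no-query originserver login=PASS name=docs-peer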



Re: [squid-users] Auth through HTTPS reverse proxy

2008-03-04 Thread Ben Hollingsworth

Ben Hollingsworth wrote:
I've set up Squid 2.6.STABLE6 as a reverse proxy.  It terminates SSL 
connections using a wildcard cert and then passes the connections to 
back-end servers using either HTTP or HTTPS.  All works well for 
servers that don't require any authentication (or which let the web 
application handle its own authentication).  However, when I try to 
use Apache's native authentication to restrict directory access, any 
access through the proxy always fails authentication.  Access directly 
to the server (bypassing the proxy) authenticates just fine, so it 
appears that something about my Squid setup is causing authentication 
to break.  This happens regardless of whether the back-end is running 
HTTP or HTTPS.  The squid & apache logs don't tell me anything.  I've 
looked over packet dumps (on the HTTP side, of course), but I don't 
see the user/pwd anywhere.  Any ideas what I'm doing wrong?


Here's a little more info I should have included earlier.  Apache 2.0.25 
on RHEL4.  Squid runs on RHEL5.  Apache config:



ServerTokens OS
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 120
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15

<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  4000
</IfModule>

<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

Listen 80
LoadModule access_module modules/mod_access.so
LoadModule auth_module modules/mod_auth.so
LoadModule auth_anon_module modules/mod_auth_anon.so
LoadModule auth_dbm_module modules/mod_auth_dbm.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule ldap_module modules/mod_ldap.so
LoadModule auth_ldap_module modules/mod_auth_ldap.so
LoadModule include_module modules/mod_include.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule cern_meta_module modules/mod_cern_meta.so
LoadModule expires_module modules/mod_expires.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule headers_module modules/mod_headers.so
LoadModule usertrack_module modules/mod_usertrack.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule asis_module modules/mod_asis.so
LoadModule info_module modules/mod_info.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
LoadModule imap_module modules/mod_imap.so
LoadModule actions_module modules/mod_actions.so
LoadModule speling_module modules/mod_speling.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule cache_module modules/mod_cache.so
LoadModule suexec_module modules/mod_suexec.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule file_cache_module modules/mod_file_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
LoadModule cgi_module modules/mod_cgi.so
Include conf.d/*.conf
User apache
Group apache
ServerAdmin [EMAIL PROTECTED]
UseCanonicalName Off
DocumentRoot "/var/www/html"

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

<IfModule mod_userdir.c>
    UserDir disable
</IfModule>

DirectoryIndex index.html index.html.var
AccessFileName .htaccess

<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>

TypesConfig /etc/mime.types
DefaultType text/plain

<IfModule mod_mime_magic.c>
    MIMEMagicFile conf/magic
</IfModule>

HostnameLookups Off
ErrorLog logs/error_log
LogLevel warn
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" 
combined

LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
CustomLog logs/access_log combined
ServerSignature On
Alias /icons/ "/var/www/icons/"

<Directory "/var/www/icons">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

<IfModule mod_dav_fs.c>
    DAVLockDB /var/lib/dav/lockdb
</IfModule>

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

IndexOptions FancyIndexing VersionSort NameWidth=*
AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType (TXT,/icons/text.gif) text/*
AddIconByType 

[squid-users] Auth through HTTPS reverse proxy

2008-03-04 Thread Ben Hollingsworth
TP connection
Sending HTTP request.
HTTP request sent; waiting for response.
Can't Access `https://docs.bryanlgh.org/'
Alert!: Unable to access document.
lynx: Can't access startfile




Re: [squid-users] Squid Blocking non-listed websites

2008-02-04 Thread Ben Hollingsworth

Amos Jeffries wrote:

Go Wow wrote:

what's the command to get only the configuration lines from
squid.conf, leaving out the comment lines?  If I get it I will post my
config file.


grep -v -E "^#" squid.conf


Or to also remove all the empty lines and trailing comments from valid lines:

sed -e 's/ *#.*//' squid.conf | sed -e '/^$/d'
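
Or, equivalently, in a single sed invocation:

sed -e 's/ *#.*//' -e '/^$/d' squid.conf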



Re: [squid-users] Redirects changing POST to GET

2008-02-01 Thread Ben Hollingsworth

The problem I'm seeing is that whenever a CGI is called via HTTP with a
POST method, it gets converted to GET when the new request comes in on
HTTPS.  This, of course, breaks the app.
I should mention that we've experienced this with both IE 7 on WinXP and
with Firefox on Ubuntu Linux.

I did some packet dumps during the switchover.  Here's the proxy's reply 
containing the 301 redirect to the HTTPS
version of the same URL.  Content-Length is zero (is that bad at this
point?), and no method is specified.




The critical point being that it was the browser that initiated the GET
information. Last squid saw was a POST.

I've done a bit more research and found the RFC2616 section relevant to
this. It seems that this is a standards violation being committed by the
redirector (NOT a good idea to reply 301/302 to a POST request.) and the
consequences are being felt.


For the benefit of those reading this in the archives later on, I found this in 
section 10.3.2 (the 301 return code) of RFC 2616:
http://www.ietf.org/rfc/rfc2616.txt?number=2616

  If the 301 status code is received in response to a request other
  than GET or HEAD, the user agent MUST NOT automatically redirect the
  request unless it can be confirmed by the user, since this might
  change the conditions under which the request was issued.

 Note: When automatically redirecting a POST request after
 receiving a 301 status code, some existing HTTP/1.0 user agents
 will erroneously change it into a GET request.

And then from section 10.3.4 (302 code):

 Note: RFC 1945 and RFC 2068 specify that the client is not allowed
 to change the method on the redirected request.  However, most
 existing user agent implementations treat 302 as if it were a 303
 response, performing a GET on the Location field-value regardless
 of the original request method. The status codes 303 and 307 have
 been added for servers that wish to make unambiguously clear which
 kind of reaction is expected of the client.


I think the proper solution here would be to fix the form that is POSTing
to the wrong URL according to your policy. You can use the "it can't be
fixed" line (which is nearly true, the only 'fix' would be to 404 them :-(
anyway)

The exact behaviour appears to be browser-dependent with some weird
effects occurring on some non-standard ones (Netscape and IE for starters).


This does seem to indicate that my desired approach of forcing encryption 
between us and the user by redirecting all HTTP requests to HTTPS won't work.  
Changing the method used in the scripts isn't possible, either, as this is a 
shrink-wrapped app.  Back to the drawing board, I guess.  Thanks so much for 
all your help, Amos.
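
One avenue we may still test before giving up: replying 307 instead of 301.  
Per the RFC text above, a 307 tells the user agent to repeat the request 
with the same method (possibly after prompting the user), so the POST body 
should survive.  This is only a sketch, and it assumes your Squid release 
accepts a "307:" prefix from the rewriter - check your version's release 
notes before relying on it:

#!/usr/bin/perl
# Variant of the earlier rewrite-http helper, returning a 307
# (temporary redirect, method preserved) instead of a 301.
$| = 1;
while (<>) {
    s/^http:/307:https:/;
    print;
}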




Re: [squid-users] Redirects changing POST to GET

2008-01-31 Thread Ben Hollingsworth
refox/2.0.0.
   0x00f0:  3131 0d0a 4163 6365 7074 3a20 7465 7874  11..Accept:.text
   0x0100:  2f78 6d6c 2c61 7070 6c69 6361 7469 6f6e  /xml,application
   0x0110:  2f78 6d6c 2c61 7070 6c69 6361 7469 6f6e  /xml,application
   0x0120:  2f78 6874 6d6c 2b78 6d6c 2c74 6578 742f  /xhtml+xml,text/
   0x0130:  6874 6d6c 3b71 3d30 2e39 2c74 6578 742f  html;q=0.9,text/
   0x0140:  706c 6169 6e3b 713d 302e 382c 696d 6167  plain;q=0.8,imag
   0x0150:  652f 706e 672c 2a2f 2a3b 713d 302e 350d  e/png,*/*;q=0.5.
   0x0160:  0a41 6363 6570 742d 4c61 6e67 7561 6765  .Accept-Language
   0x0170:  3a20 656e 2d75 732c 656e 3b71 3d30 2e35  :.en-us,en;q=0.5
   0x0180:  0d0a 4163 6365 7074 2d45 6e63 6f64 696e  ..Accept-Encodin
   0x0190:  673a 2067 7a69 702c 6465 666c 6174 650d  g:.gzip,deflate.
   0x01a0:  0a41 6363 6570 742d 4368 6172 7365 743a  .Accept-Charset:
   0x01b0:  2049 534f 2d38 3835 392d 312c 7574 662d  .ISO-8859-1,utf-
   0x01c0:  383b 713d 302e 372c 2a3b 713d 302e 370d  8;q=0.7,*;q=0.7.
   0x01d0:  0a43 6f6f 6b69 653a 2043 4649 443d 3132  .Cookie:.CFID=12
   0x01e0:  3138 3636 3b20 4346 544f 4b45 4e3d 3333  1866;.CFTOKEN=33
   0x01f0:  3738 3939 3132 3b20 5353 4f5f 4944 3d76  789912;.SSO_ID=v
   0x0200:  312e 327e 317e 3944 3141 4239 3831 4338  1.2~1~9D1AB981C8
   0x0210:  3342 4437 3445 4142 3535 3342 4131 3442  3BD74EAB553BA14B
... (more hex dump deleted)
   0x0410:  4436 3242 3231 0d0a 5669 613a 2031 2e31  D62B21..Via:.1.1
   0x0420:  2072 6576 7072 6f78 792e 6272 7961 6e6c  .revproxy.bryanl
   0x0430:  6768 2e6f 7267 3a38 3020 2873 7175 6964  gh.org:80.(squid
   0x0440:  2f32 2e36 2e53 5441 424c 4536 290d 0a58  /2.6.STABLE6)..X
   0x0450:  2d46 6f72 7761 7264 6564 2d46 6f72 3a20  -Forwarded-For:.
   0x0460:  3139 322e 3136 382e 322e 380d 0a43 6163  192.168.2.8..Cac
   0x0470:  6865 2d43 6f6e 7472 6f6c 3a20 6d61 782d  he-Control:.max-
   0x0480:  6167 653d 3235 3932 3030 0d0a 0d0a       age=259200....

And the internal server responds with "NOT FOUND":

16:03:13.738001 IP 172.22.66.206.http > revproxy.bryanlgh.org.40293: P 
1:197(196) ack 1127 win 8192
   0x:  4500 00ec 9bcf 4000 ff06 2d6f ac16 42ce  [EMAIL PROTECTED]
   0x0010:  c0a8 0240 0050 9d65 1aef 2b43 5058 6260  [EMAIL PROTECTED]
   0x0020:  5018 2000 b548 0000 4854 5450 2f31 2e31  P....H..HTTP/1.1
   0x0030:  2034 3034 204e 6f74 2046 6f75 6e64 0d0a  .404.Not.Found..
   0x0040:  4461 7465 3a20 5468 752c 2033 3120 4a61  Date:.Thu,.31.Ja
   0x0050:  6e20 3230 3038 2032 323a 3033 3a31 3320  n.2008.22:03:13.
   0x0060:  474d 540d 0a53 6572 7665 723a 204f 7261  GMT..Server:.Ora
   0x0070:  636c 652d 4170 706c 6963 6174 696f 6e2d  cle-Application-
   0x0080:  5365 7276 6572 2d31 3067 2f31 302e 312e  Server-10g/10.1.
   0x0090:  322e 302e 3220 4f72 6163 6c65 2d48 5454  2.0.2.Oracle-HTT
   0x00a0:  502d 5365 7276 6572 0d0a 436f 6e6e 6563  P-Server..Connec
   0x00b0:  7469 6f6e 3a20 636c 6f73 650d 0a43 6f6e  tion:.close..Con
   0x00c0:  7465 6e74 2d54 7970 653a 2074 6578 742f  tent-Type:.text/
   0x00d0:  6874 6d6c 3b20 6368 6172 7365 743d 6973  html;.charset=is
   0x00e0:  6f2d 3838 3539 2d31 0d0a 0d0a            o-8859-1....



Re: [squid-users] Redirects changing POST to GET

2008-01-23 Thread Ben Hollingsworth

Amos Jeffries wrote:

I've set up a reverse proxy running Squid 2.6.STABLE6 5.el5_1.2 on RHEL5.1.
 All remote access to the proxy is supposed to be via HTTPS, but since
some of the protected apps give out absolute URL's at HTTP, I've also
setup a redirector that listens on port 80 and sends a 301 redirect back
to the client with an HTTPS version of the same URL.

The problem I'm seeing is that whenever a CGI is called via HTTP with a
POST method, it gets converted to GET when the new request comes in on
HTTPS.  This, of course, breaks the app.

When I bypass the proxy, the HTTP POST method works just fine.  Any ideas
what might be causing the method to change or how to get around this?
Every web search I try comes up empty.  I'm not sure if the variables are
getting dropped in the process, or if the app just doesn't know how to
handle GET methods, but regardless, this is a debilitating problem for
this app, so I really need a solution.  The app in question is Oracle
Collaboration Suite 10g, if it makes a difference.


Sounds like a broken CGI to me. With redirection to 301:... squid should
be actually sending the 301 back to the client for it to re-POST back to
the new URI.


I don't see how it can be the CGI that's at fault, because the conversion from 
POST to GET is happening before the CGI ever gets hit.  I think it must be 
something about the proxy, redirector, or browser that's causing the conversion.

I should mention that we've experienced this with both IE 7 on WinXP and with 
Firefox on Ubuntu Linux.

Has anybody seen this behavior before, or heard anything that would indicate 
the conversion is a security feature?



[squid-users] Redirects changing POST to GET

2008-01-23 Thread Ben Hollingsworth

I've set up a reverse proxy running Squid 2.6.STABLE6 5.el5_1.2 on RHEL5.1.  All 
remote access to the proxy is supposed to be via HTTPS, but since some of the 
protected apps give out absolute URL's at HTTP, I've also setup a redirector 
that listens on port 80 and sends a 301 redirect back to the client with an 
HTTPS version of the same URL.  My rewrite script is pretty simple:

#!/usr/bin/perl
$|=1;
while (<>) {
   s/^http:/301:https:/;
   print;
}

The problem I'm seeing is that whenever a CGI is called via HTTP with a POST 
method, it gets converted to GET when the new request comes in on HTTPS.  This, 
of course, breaks the app.  Here's a log snippet:

1200950259.294  2 192.168.2.8 TCP_MISS/301 200 POST 
http://inf.domain.org/pls/orasso/orasso.wwsso_app_admin.ls_logout - NONE/- -
1200950259.396 75 192.168.2.8 TCP_MISS/404 704 GET 
https://inf.domain.org/pls/orasso/orasso.wwsso_app_admin.ls_logout - 
FIRST_UP_PARENT/172.22.66.206 text/html

When I bypass the proxy, the HTTP POST method works just fine.  Any ideas what 
might be causing the method to change or how to get around this?  Every web 
search I try comes up empty.  I'm not sure if the variables are getting dropped 
in the process, or if the app just doesn't know how to handle GET methods, but 
regardless, this is a debilitating problem for this app, so I really need a 
solution.  The app in question is Oracle Collaboration Suite 10g, if it makes a 
difference.  My squid.conf follows.

# grep -v "^#" squid.conf | sed -e '/^$/d'
http_port 80 vhost
https_port 443 cert=/etc/squid/server.crt key=/etc/squid/server.pem vhost
icp_port 0
cache_peer 172.26.6.159 parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=server1-app-peer
cache_peer 172.22.66.208 parent 80 0 no-query originserver name=app-peer
cache_peer 172.22.66.206 parent 80 0 no-query originserver name=inf-peer
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
maximum_object_size 0 KB
access_log /var/log/squid/access.log squid
url_rewrite_program /usr/local/bin/rewrite-http
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl sites_server1-app dstdomain server1b.domain.org server1-app.domain.org
acl sites_app dstdomain app.domain.org
acl sites_inf dstdomain inf.domain.org
acl webserver dst 172.26.6.159 192.168.2.65 172.22.66.208 172.22.66.206
http_access allow webserver
miss_access allow webserver
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
cache_peer_access server1-app-peer allow sites_server1-app
cache_peer_access app-peer allow sites_app
cache_peer_access inf-peer allow sites_inf
cache_mgr [EMAIL PROTECTED]
coredump_dir /var/spool/squid



[squid-users] intermittent ERR_CANNOT_FORWARD

2007-07-25 Thread Ben Drees

Hi,

I'm running Squid 2.6 STABLE12 as a reverse proxy. It is configured to 
select one of two origin servers based on the request URL like this:


cache_peer 10.1.64.104 parent 8102 0 originserver no-query round-robin
cache_peer 10.1.64.106 parent 8105 0 originserver no-query round-robin

# always_direct = default
never_direct allow all

acl api_url urlpath_regex ^/api/*

cache_peer_access 10.1.64.104 allow api_url
cache_peer_access 10.1.64.104 deny all
cache_peer_access 10.1.64.106 allow !api_url
cache_peer_access 10.1.64.106 deny all

Currently only URLs that *do* match the api_url acl are ever sent to Squid.

After a power outage yesterday I started seeing "X-Squid-Error: 
ERR_CANNOT_FORWARD 11" in the response headers of some but not all such 
requests. I had no difficulty connecting directly to the origin server 
at 10.1.64.104:8102. The errors went away as soon as I restarted Squid.


This raises the following questions:

1) Is it inappropriate to use the 'round-robin' option in this way, 
since only one origin server peer will match the cache_peer_access rules 
for a given URL?


2) Why did some requests succeed and some fail? Is there a health test 
that blocks the opening of new connections but allows existing 
persistent connections to be reused?


Does anyone have insight on these?

Thanks,
Ben


Re: [squid-users] Transfer-Encoding support in Squid 2.6

2007-06-08 Thread Ben Drees

Ben Drees wrote:
I'm running Squid 2.6 STABLE12 as a reverse proxy. The origin servers 
are Apaches (2.0.58) configured to gzip most responses (which are all 
dynamic) with mod_deflate. The fact that Squid is an HTTP 1.0 client 
has the undesirable effect, in this scenario, that every compressed 
response results in a connection closure. If I use telnet to replay a 
forwarded request, changing only the "HTTP/1.0" to "HTTP/1.1", the 
response includes "Transfer-Encoding: chunked" and "Connection: 
Keep-Alive" rather than "Connection: Close". If only there were some 
other way to produce this outcome.


It looks like Squid includes some code that supports 
"Transfer-Encoding: chunked", so my question is: How can Apache (or 
any other origin server) be coaxed into using "Transfer-Encoding: 
chunked" in its responses if Squid advertises itself as an HTTP 1,0 
client? Is that code in Squid only as a best effort patch to deal with 
responses inappropriately transfer-encoded by origin servers?


What sorts of things would break if Squid advertised itself as an HTTP 
1.1 client?


I see now that Squid 2.6 can read Transfer-Encoded responses, but turns 
them into HTTP-1.0-style responses to the client, with "Connection: 
Close" and no Content-Length.


Can Squid 3 do HTTP 1.1?


[squid-users] Transfer-Encoding support in Squid 2.6

2007-06-08 Thread Ben Drees

Hi,

I'm running Squid 2.6 STABLE12 as a reverse proxy. The origin servers 
are Apaches (2.0.58) configured to gzip most responses (which are all 
dynamic) with mod_deflate. The fact that Squid is an HTTP 1.0 client has 
the undesirable effect, in this scenario, that every compressed response 
results in a connection closure. If I use telnet to replay a forwarded 
request, changing only the "HTTP/1.0" to "HTTP/1.1", the response 
includes "Transfer-Encoding: chunked" and "Connection: Keep-Alive" 
rather than "Connection: Close". If only there were some other way to 
produce this outcome.


It looks like Squid includes some code that supports "Transfer-Encoding: 
chunked", so my question is: How can Apache (or any other origin server) 
be coaxed into using "Transfer-Encoding: chunked" in its responses if 
Squid advertises itself as an HTTP 1.0 client? Is that code in Squid 
only as a best effort patch to deal with responses inappropriately 
transfer-encoded by origin servers?


What sorts of things would break if Squid advertised itself as an HTTP 
1.1 client?


Thanks,
Ben


[squid-users] cache_mem, restart, and memory versus disk hits

2007-05-21 Thread Ben Drees

Hi,

It has been previously reported that Squid will never serve a cached 
resource directly from memory once the resource has been written to 
disk. In other words, cache_mem is allocated to in-transit resources 
only, not "hot resources in general", with the particularly interesting 
effect that all resources cached before restart are served from disk 
after restart, no matter how popular they become.


Is this still the case in Squid 2.6 STABLE12? Comments in the config 
file seem to suggest that this problem has been dealt with:


#'cache_mem' specifies the ideal amount of memory to be used
#for:
#* In-Transit objects
#* Hot Objects
#* Negative-Cached objects

Thanks,
Ben


[squid-users] How well does squid perform under stress?

2007-04-04 Thread Ben Spencer
I did some research for an answer to this question, but the threads I found
always come back to CPU usage and tuning (though I did get some good
information from those threads also).

We have a squid appliance which is very heavy on CPU (which is
expected). My question isn't really how can I tune it or why is it using
so much CPU, but rather, how well does squid perform on a busy (CPU
wise) box?

I guess another way to ask is, does squid's performance scale linearly
as the box (CPU specifically) usage increases or does performance
actually degrade/level off once CPU usage approaches 100% utilization?

Another question is: once the system is pushed to a maximum (or beyond),
are things just slow or should abnormal behavior be expected?

Thanks
Benji

---
Benji Spencer
System Administrator
Ph: 312-329-2288



[squid-users] ~5% of cache hits very slow

2006-12-05 Thread Ben Drees

Hi,

I'm running Squid (2.5 STABLE 12) in a reverse proxy setup, caching 
mostly dynamic content with unreadably long parameterized URLs. Most 
cache hits take about 2 - 4 milliseconds (as indicated by the access 
log), but I see frequent outliers in the 200 - 800 millisecond range. 
These seem to be mostly TCP_HIT and TCP_IMS_HIT cases, not cases where 
Squid connects with an origin server. This is on a very lightly loaded 
test server. There is no fancy authorization happening. Are there any 
well known configuration issues that could be causing this? Even if the 
hits are on disk rather than in memory, the TCP_IMS_HIT case should only 
need to consult an in-memory index structure and reply with a 304, right?


Thanks,
Ben
  


[squid-users] Can Squid *be* the origin server for certain URLs?

2006-11-30 Thread Ben Drees

Hi,

I'm running several Squids (2.5 STABLE 12) in a reverse proxy setup 
behind a load balancer. I want the load balancer to be able to health 
check the Squids independent of whether the origin servers are currently 
available (there are other health checks that cover all bases). For 
various reasons, using cachemgr or "cache_object://" urls won't work. I 
would like to carve a magic URL out of the URL space, something like 
"/status", and have Squid just return 200 OK whenever that URL is 
requested. Is there a way to configure such behavior, or will I have to 
touch the code to make this work?
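
One config-only idea that may be worth trying first (untested sketch; the 
port, acl name and peer name are invented): run a trivial local responder 
on the proxy box itself and route just the magic URL to it with the same 
cache_peer machinery used for the real origins.  With Squid 2.6-style 
originserver peers that would look roughly like:

acl status_url urlpath_regex ^/status$
cache_peer 127.0.0.1 parent 8999 0 no-query originserver name=status-peer
cache_peer_access status-peer allow status_url
cache_peer_access status-peer deny all

The load balancer then health-checks /status, which succeeds as long as 
Squid and the little responder are up, independent of the real origins.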


Thanks,
Ben


Re: [squid-users] Squid takes too long to stop.

2006-09-08 Thread Ben Drees

Hello,

I have a load balancer in front of two Squids in a reverse proxy setup. 
The load balancer decides whether a Squid is healthy by opening a TCP 
connection to it periodically. During the shutdown interval governed by 
configuration parameter 'shutdown_lifetime', Squid 2.5 STABLE12 
continues accepting connection requests and responding to each request 
with status code 503. The load balancer interprets this as "healthy". I 
would prefer instead that Squid continue servicing pending requests on 
open connections, but refuse new connection requests so that the load 
balancer could do its thing and route to the other Squid for 
uninterrupted service. Is there any way to achieve this in Squid 2.5 
STABLE12 or later?


Thanks,
Ben

Henrik Nordstrom wrote:

On Fri 2006-08-25 at 09:24 -0700, Jim John wrote:
  
Hi all. We have squid set up for transparency using shorewall, but it takes 
too long to stop. Can we simply direct traffic away from squid using 
shorewall before we stop squid instead of afterwards?



Yes, assuming you are doing transparent interception using iptables NAT.

  
 Is there another way 
to stop squid faster and safer because our users lose connection while squid 
is stopping, which takes 2 minutes or so.



See shutdown_lifetime.

  

This also happens for reload when we have squidGuard child processes running 
under squid. Thanks.



What is a "reload"?

Squid know how to stop, start, rotate logs and reconfigure.

Regards
Henrik
  




Re: [squid-users] Digest Auth Problem in Reverse Proxy Setup

2006-08-15 Thread Ben Drees
If I had a load balancer mapping many incoming client connections to 
fewer backend connections to Squid, would that cause trouble for the 
digest authentication logic? In particular, if requests from two 
different authenticated users were mapped onto a single connection from 
load balancer to Squid (and interleaved?) would that cause trouble?


It seems like there is some cached auth state associated with each 
connection, and that the connection multiplexing must be interacting 
badly with that. Is there a way to suppress the caching of this auth state?


Henrik Nordstrom wrote:

On Thu 2006-08-10 at 16:52 -0700, Ben Drees wrote:

  
Users are complaining that they are challenged to re-enter their 
credentials too frequently.



Then something is wrong somewhere. They should only need to enter their
credentials once, just as for basic..

  
I figured nonce_max_duration would set the "max session time", but the 
credentials challenges still seem to happen much more frequently.



The nounce duration is not a session timer as such. It's more related to
replay attacks on the digest protocol. 

  
I notice log entries like these that seem to be correlated with the 
credentials challenges:


#1) authenticateValidateUser: Auth_user '0xb61430' is broken for it's 
scheme.

#2) authenticateValidateUser: Validating Auth_user request '(nil)'.

Are these normal sorts of log messages? What does AUTH_BROKEN mean (from 
the source generating example #1)?



Most likely Squid didn't like something of the Digest message sent by
the browser.

debug_options ALL,1 29,9
should give more insight into the Digest processing.

If you enable log_mime_hdrs and repeat the problem with a known password
then we can look into what the browser sent and if it makes sense or
not.

Or at minimum log_mime_hdrs and get the relevant /407 entries. Maybe
there is something obvious.
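
For reference, header logging is a one-line squid.conf change:

log_mime_hdrs on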

Regards
Henrik
  




[squid-users] Digest Auth Problem in Reverse Proxy Setup

2006-08-10 Thread Ben Drees

Hi,

I'm running Squid 2.6 STABLE12 as a reverse proxy.

I have digest authentication turned on:

auth_param digest program 
/.../squid/helpers/digest_auth/password/digest_pw_auth /.../passwords

auth_param digest children 5
auth_param digest realm ...
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 12 hours
auth_param digest nonce_max_count 50

I turned nonce_max_duration way up to try to get around the following 
problem (but it didn't work):


Users are complaining that they are challenged to re-enter their 
credentials too frequently.


I figured nonce_max_duration would set the "max session time", but the 
credentials challenges still seem to happen much more frequently.


Is the "max session time" predictable based on config parameters, or is 
there some dependency on the vaguaries of garbage collection? I'm 
confused about what impact nonce_garbage_interval might has on this.


Is it the case that browsers typically make users re-enter credentials 
when "stale=false" appears in a 401/WWW-Authenticate response header?


I notice log entries like these that seem to be correlated with the 
credentials challenges:


#1) authenticateValidateUser: Auth_user '0xb61430' is broken for it's 
scheme.

#2) authenticateValidateUser: Validating Auth_user request '(nil)'.

Are these normal sorts of log messages? What does AUTH_BROKEN mean (from 
the source generating example #1)?


Does "Validating Auth_user request '(nil)'" mean that no "Authorization" 
header was included in the request?


In what may or may not be a related matter, the browser credentials 
dialog box is sometimes presented three or four times in a row. I think 
this might just have to do with parallel requests from the browser all 
failing with 401s at the same time. I think this happens with a variety 
of browsers - sorry no more details are available.


Thanks,
Ben



[squid-users] over time squid slows down

2006-07-17 Thread Ben Collver
Good day,

I am running a no-cache transparent proxy using squid 2.5.13 and
ipfilter on NetBSD/i386 3.0.

About every 2 or 3 weeks, it slows down and web browsing grinds to a
halt.  Restarting the squid daemon fixes the issue for another 2 weeks.

While this is happening, top reports that the load is low, and squid is
not using much memory.

My squid configuration is at the end of this message.  Can anyone give
advice on how to trouble-shoot this or ideas on what I may be doing
wrong?

Thank you,

Ben

http_port 127.0.0.1:3128
icp_port 0
udp_incoming_address XXX.XXX.XXX.XXX
udp_outgoing_address 255.255.255.255
hierarchy_stoplist cgi-bin ?
acl never-cache src 0.0.0.0/0.0.0.0
no_cache deny never-cache
cache_dir null /tmp
cache_access_log /LOGDIR/squid/access_log
cache_log /LOGDIR/squid/cache_log
cache_store_log none
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
shutdown_lifetime 1 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl our_networks src XXX.XXX.XXX.XXX/XX
http_access allow our_networks
http_access deny all
http_reply_access allow all
icp_access deny all
tcp_outgoing_address XXX.XXX.XXX.XXX
cache_mgr root
mail_program /usr/bin/mailx
cache_effective_user squid
httpd_accel_host virtual
httpd_accel_port 0
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
logfile_rotate 60
coredump_dir /var/squid/cache


RE: [squid-users] Putting high mem objects on cache. [signed]

2006-07-17 Thread Ben Hathaway
All,

I asked this question a while back. It seems that it's not possible.
My suggested solution (although I haven't tried it yet) is to run an actual
webserver on the machine and download the known-large-files (virus updates,
etc...) as a cron job every day at 4am. Then we should be able to setup
redirection rules in squid to point to our webserver rather than the live
internet server. This is a little like re-inventing the wheel, but should
work ok. It's on my list of things to do but way down near the bottom at the
moment.
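
As a rough sketch of the cron half (the URL and paths are invented for 
illustration only):

# crontab entry: fetch the known-large files into the local
# webserver's docroot every day at 4am
0 4 * * * wget -q -O /var/www/bigfiles/av-defs.exe http://example.com/av-defs.exe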

Let me know if anyone comes up with a better solution.

Regards,

Ben Hathaway
Software Developer
http://www.spidersat.net

-Original Message-
From: Rajendra Adhikari [c] [mailto:[EMAIL PROTECTED] 
Sent: 17 July 2006 12:46
To: squid-users@squid-cache.org
Subject: [squid-users] Putting high mem objects on cache. [signed]

Hi,
I have set maximum_object_size to 4MB. But without increasing this 
value, I would like to put some objects greater than 4mb on cache, like, 
msn installation file.
How do I put it on cache explicitily?  If it can be done, what would be 
the best way to automate this task? Please give me an idea if anyone has 
done this.

thanks in advance,
Rajendra.



--





RE: [squid-users] Issues with Debian, Squid and WCCP

2006-07-13 Thread Ben Hathaway
Andrew,

This sounds very much like the problem I struggled with for several
weeks. Ha! It's good to be able to contribute positively to this mailing
list for a change!

Basically - the WCCP modules and GRE modules just don't work with
Cisco / WCCP / Debian. I have no idea why. There is a work around however :
Use a much higher version of Debian (one that has a GRE module built in that
can handle these weird WCCP packets properly) and no extra WCCP modules.
Then use a different kind of IPTables redirection method (DNAT). This DOES
work. Again - I have no idea why. I just kept messing with the different
options until something worked. Brute force - the worst kind of debugging!
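
For anyone trying to reproduce this, the DNAT redirection was of this 
general shape (the interface and addresses below are placeholders for your 
own setup, not the exact rule we run):

iptables -t nat -A PREROUTING -i gre0 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.0.10:3128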

Let me point you towards my previous post on the subject:-

http://www.webservertalk.com/archive254-2006-1-1360989.html


I hope this helps. If someone can explain this phenomenon I'd be most
appreciative!

Regards,


Ben Hathaway
Software Developer
http://www.spidersat.net


-Original Message-
From: Andrew Yoward [mailto:[EMAIL PROTECTED] 
Sent: 13 July 2006 19:17
To: squid-users@squid-cache.org
Subject: [squid-users] Issues with Debian, Squid and WCCP

Greetings,

I am wondering if you could shed some light on a rather tricky issue 
that I am having.  I have a local education authority who are 
experiencing a lot of traffic on their internet pipe and often find that 
it is used to the max.  We are wanting to introduce a transparent cache 
for http and so we thought that Squid and WCCP would be the answer to 
our prayers, but I am having great difficulty in getting any traffic to 
go through the Squid.  Here is what I am trying to do in the lab. 
My client has no setting in Firefox for a proxy and is on 
192.168.250.1/24 and gw is 192.168.250.254.  I have a Cisco 2600 router 
with two FE ports.  One is configured with 192.168.250.254/24, the other 
is configured as 10.3.65.4/24.  It is running IOS 12.3(6c).  My proxy is 
built on Debian Sarge and a 2.6.8 kernel.  Squid is version 
2.5.9-10sarge2.  The proxy has 10.3.65.3/24 and gw is 10.3.65.254.  I 
have gone through all the FAQs and other literature I can find regarding 
what I'm trying to do.  I have enabled WCCP version 1 on the 2600.  I 
have done ip wccp web-cache redirect in on the 192 side and I have 
swapped it round to redirect out on the 10 side, during my 
troubleshooting.  I know that the Squid and the router are communicating 
as I get the packet exchange on port 2048 with no trouble.  I have 
configured the squid.conf as shown in the FAQs, I have also added the 
needed prerouting line in firewall.up for IPTables to redirect port 80 
traffic to 3128.  I have compiled the WCCP module, modprobed it and it 
is listed in lsmod.  I also did all the GRE tunnelling stuff.  When I 
try from my client to reach a web page, if I watch the nat on IPTables, 
I can see the packets hitting the rule to forward to 3128, but nothing 
happens at the client.  If I use lynx on the squid, and set it's proxy 
to localhost, I can get web pages fine, so I know squid is working 
correctly.  Having run tcpdump, I can see WCCP packets coming across 
from the router, but it seems that either the encapsulation is not being 
stripped off when the packet hits, or squid doesn't know what to do with 
it when it is passed.  There is no entry in the squid access.log to tell 
me anything.  The syslog is spurious.  At first, it identified the 
source as 10.3.65.4 and destination of .3 but also complained about 
protocol 47.  After I enabled protocol 47 and port 1723 in iptables, it 
then identified the source as 192.168.250.1 but still I got no joy with 
http content being passed back.  I am at a loss now as to what I may be 
doing wrong.  Whether the GRE tunnel isn't right, whether IPtables is 
the issue, or the WCCP module.  I am hoping that someone may be able to 
shed some light.

I would of course be very grateful for any help that you could offer and 
if I can answer any questions, or if I have not given enough 
information, please let me know.

Best regards,

Andrew Yoward
YHGfL Foundation
www.yhgfl.net



[squid-users] Allow specific large files by URL

2006-07-03 Thread Ben Hathaway
Dear Group,

I have a basic 4MB limit on cache file size. This is about right for
my needs. However, there is a group of about 5 URLs that keep cropping up
that are downloads of larger files - normally software updates, virus
definitions, that sort of thing. I want to cache them regardless of their
size, but I don't want to cache anything else that it more than 4Mb.

Can I set exceptions to the Maximum File Size rules for specific URLs?
Actually, specific domain paths...?

My only other thought was a kind of transparent redirection (perhaps in the
iptables) to some other local server and then download these files in a cron
job. However, this is a little bit like re-inventing the wheel just because
you want a blue one. (shameless "Hitchhikers guide to the Galaxy" reference)

Any suggestions?

My thanks in advance,


Ben Hathaway
Software Developer
http://www.spidersat.net




RE: [squid-users] 4 second Cache Miss Service Times

2006-06-18 Thread Ben Hathaway
Dear All,

Thanks for your suggestions. It seems there were some upstream
problems (resulting in packet loss) which we are trying to iron out with our
satellite communications provider. I still think there is something
suspicious about these service times, but I will have to wait a few days for
this other issue to be resolved before I investigate further. I'll post
again later when I have had a chance to try your suggestions.

For Henrik & Chris who were concerned about DNS times - yes we have
a second DNS server sitting right next to the cache server. I made sure that
the DNS server had plenty of in-memory cache available so that it would
respond quickly to the second DNS request from SQUID. I thought that I could
disable the Squid DNS lookup completely if it proved too slow (if that is
possible?). At the moment this is not an issue.

Michael - are you saying that I can expect service times of 3-4
round-trip times even in the best situations - something like 2 seconds? And
that this is simply the way that HTTP works (and therefore unavoidable)?
Apart from tinkering with TCP window sizes (which I will certainly look
into), is there anything else I can do to improve this?
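
For the record, the TCP window tinkering I have in mind is the standard 
Linux sysctls; the values below are only a starting guess for a 
long-fat-pipe link, not tested figures:

sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576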

Many thanks,

Ben.

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 18 June 2006 00:43
To: Chris Robertson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] 4 second Cache Miss Service Times

On Fri 2006-06-16 at 12:43 -0800, Chris Robertson wrote:

> Perhaps the time required to make DNS queries is being included in the 
> Cache Miss and Near Hits time due to using WCCP (my clients are set 
> explicitly to use the proxy).

Yes DNS time is included in the service time. The service time is
measured from where Squid has read the request until it has finished
sending the response. But this does not exaplain the differences (see
below).

Squid always queries DNS for the host name, but only once/twice (it
caches the result internally).

When using WCCP the DNS service times should be a little better as the
client has just made the same DNS query and the result should be cached
in your central DNS, assuming both Squid and the client is using the
same DNS server (directly, or via DNS forwarder).

Regards
Henrik




[squid-users] 4 second Cache Miss Service Times

2006-06-16 Thread Ben Hathaway
Dear All,

I have recently set up a Squid cache using WCCP and a cisco router.
I am getting very impressive performance for my cache hits, but my cache
misses sometimes take as long as 4 seconds! We are at the end of a high
bandwidth, high latency satellite link with a normal latency (for example:
to google.com) of around 600ms round-trip. So why the 4 second delay?
Sometimes it comes down to 1.3sec but even that is a lot slower than I would
expect when my cache hits are being pumped out in 0.017s

Any ideas?

Here's my cachemanager stats (I've marked the relevant line with an
asterisk) :

Squid Object Cache: Version 2.5.STABLE9
Start Time:   Thu, 15 Jun 2006 08:43:36 GMT
Current Time: Fri, 16 Jun 2006 07:14:09 GMT

Connection information for squid:
Number of clients accessing cache:  46
Number of HTTP requests received:   556986
Number of ICP messages received:0
Number of ICP messages sent:540
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   412.4
Average ICP messages per minute since start:0.4
Select loop called: 12369540 times, 6.551 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 33.2%, 60min: 30.2%
Byte Hit Ratios:5min: 10.9%, 60min: 16.3%
Request Memory Hit Ratios:  5min: 6.2%, 60min: 8.9%
Request Disk Hit Ratios:5min: 28.7%, 60min: 27.1%
Storage Swap size:  19916692 KB
Storage Mem size:   102392 KB
Mean Object Size:   13.80 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   1.62803  1.71839
**  Cache Misses:  3.11263  2.79397
Cache Hits:0.01745  0.01745
Near Hits: 1.38447  1.24267
Not-Modified Replies:  0.01164  0.01164
DNS Lookups:   0.01535  0.02033
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:81033.414 seconds
CPU Time:   2825.424 seconds
CPU Usage:  3.49%
CPU Usage, 5 minute avg:8.78%
CPU Usage, 60 minute avg:   8.15%
Process Data Segment Size via sbrk(): 283308 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 3
Memory usage for squid via mallinfo():
Total space in arena:  283308 KB
Ordinary blocks:   282627 KB  36898 blks
Small blocks:   0 KB  0 blks
Holding blocks: 11528 KB  6 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 680 KB
Total in use:  294155 KB 100%
Total free:   680 KB 0%
Total size:294836 KB
Memory accounted for:
Total accounted:   215351 KB
memPoolAlloc calls: 79976881
memPoolFree calls: 76613850
File descriptor usage for squid:
Maximum number of file descriptors:   4096
Largest file desc currently in use:571
Number of file desc currently in use:  433
Files queued for open:   0
Available number of file descriptors: 3663
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
1446719 StoreEntries
 20863 StoreEntries with MemObjects
 20747 Hot Object Cache Items
1443710 on-disk objects


Regards,

Ben Hathaway




[squid-users] httpd_accel_uses_host_header doesn't use port?

2006-04-10 Thread Ben Drees
I have been running Squid as a reverse proxy to an Apache origin server 
on the same host. Squid and Apache use the same port number but 
different addresses, as recommended (the public routable address for 
Squid, 127.0.0.1 for Apache).


I would like to change this configuration (at least in development, if 
not in production) so that both the reverse proxy and the origin server 
use the public routable address, but different port numbers. I 
understand that one of the pitfalls to consider is that proxied response 
headers and response bodies will contain the origin server port number. 
But since I control the origin server, I can make it base any port 
numbers appearing in a response on the port number in the request 
"Host:" header.


But I can't seem to get Squid to pass the original port number through. 
Even with "httpd_accel_uses_host_header on" the origin server sees its 
own port number in the requests.


Is the behavior I seek supported?

Here are some relevant settings:

http_port 0.0.0.0:8114
httpd_accel_host 0.0.0.0
httpd_accel_port 8101
httpd_accel_single_host on
httpd_accel_uses_host_header on

I'm using "0.0.0.0" to get "public routable address" and "localhost".

To be clear, I would like Squid to accept a request on port 8114 with a 
"Host:" header like this:


Host: x.y.z:8114

Then, on cache miss, send the request to port 8101 on the same machine 
with the same "Host:" header:


Host: x.y.z:8114

It does this except that it changes the "Host": header to this:

Host: x.y.z:8101

-Ben


Re: [squid-users] Squid-cache clustering

2006-04-07 Thread Ben Drees

I am curious about a related question:

In reverse proxy scenarios, what are the options for load balancing 
cache misses among several origin server replicas?


1) Of course one could use a hardware load balancer in between squid and 
the origin servers.


2) It is my understanding that if DNS returns more than one address for 
a hostname, Squid can be configured to perform round-robin selection of 
an origin server. Are there any caveats to be aware of when persistent 
connections are used between squid and the origin servers?


3) It seems like the Redirector API could be used as a hook to do this 
kind of load balancing also, offering a convenient place to code custom 
health checks.


Are there any other options?

-Ben

[EMAIL PROTECTED] wrote:

Hi all,

I just want to know if there's another way to cluster 2 or more Squid caches/proxies. 
My idea of clustering 2 or more proxies is to use a layer 7 switch and define a
common IP on the switch that simultaneously checks multiple proxy servers.

Any Idea is welcome and highly appreciated.

Thank you very much

Wennie


  




Re: [squid-users] Reverse Proxy - Comparisons

2006-03-25 Thread Ben Drees
I am quite interested in this question, and have not yet reached a 
conclusion about which one is better for my application, but I have 
these thoughts:


Squid is much more mature, for example with respect to:
- Support for inter-cache protocols
- Support for specific client and server idiosyncrasies
- Logging and diagnostics (at least in relation to reverse proxy caching)

The Apache code is, to me, quite a bit easier to read.

Squid has a few special-purpose extensibility hooks (like for 
redirectors and authentication helpers), but Apache has more generic 
APIs that offer better opportunities for modularity.


One suspects that Apache's MPMs may be able to better exploit modern 
multiprocessor hardware under certain circumstances, but I'm not yet 
sure of which circumstances.


Squid will not bring popular disk hits into memory after it has been 
restarted. It will only cache in memory resources that it has retrieved 
from across the network. I have not yet checked Apache on this score.


In tinkering with the source, I've found it much easier to bring down 
Squid with a programmer error as compared with Apache.


Other details that I think I understand, but I'm not totally sure of:

- Apache must perform a seek to discover a disk-cache hit, whereas Squid 
consults an in-memory index of disk resources.


- It looks like "Vary" support is partially broken in Apache, and that 
it stores at most one variant.


- Apache doesn't do negative caching (of 404s, for example) like Squid.

I haven't found any direct comparisons of the two on the web.

-Ben

[EMAIL PROTECTED] wrote:
Are there any resources that compare using Squid as a reverse proxy versus 
Apache?  We currently use Apache as our reverse proxy, and do some url 
rewriting, and cookie based conditional url rewriting.  Is there an 
advantage to using Squid instead?  Disadvantages?  Thanks!


=
Scott Mace
Security Administrator
Travelcenters of America
440-808-4318
[EMAIL PROTECTED]
=

  




[squid-users] cache_mem, restarts, and low TCP_MEM_HIT

2006-03-22 Thread Ben Drees

Hi,

I am running Squid 2.5 STABLE 12 in a reverse proxy scenario.

I noticed after a restart that the percentage of memory hits was zero. I 
found an old message that says "this is expected":


http://www.squid-cache.org/mail-archive/squid-users/200407/0489.html

Is it still expected in Squid 2.5 STABLE 12? Is there a fix in STABLE 13 
or version 3.0?


Thanks,
Ben


RE: [squid-users] Re: URL's that begins with a minus

2006-02-07 Thread Ben Tanner
Aurelien Requiem wrote:
> I've got the same problem with firefox (w/o squid).
> It seems you can't resolve a hostname starting with a minus.

Werner Rost wrote:
> Come on, some of you must know the solution to this one.

You might find it useful to read RFC 1035:
Domain Names: Implementation and Specification

--
Benjamin Tanner BSc (Hons) MBCS
Senior Network Analyst
Networks & Systems Team, Computing Service
Network Operations Centre, KentMAN Ltd.
Canterbury Christ Church University
E-Mail: <[EMAIL PROTECTED]>
Phone:  <01227 782977>
Web:  


RE: [squid-users] Squid with SquidGuard

2006-01-17 Thread Ben Tanner
> If I run squidGuard on its own as root it seems to work. Is there any
> way I can try to run it as user "squid" from the command line 
> to see if
> I get any more information? Trying "su squid" obviously 
> didn't work (but
> I had to try it anyway).

Are you familiar with the sudo command?

Whilst root you should be able to do something like:

% sudo -u squid squidguard

And that will execute the command as squid.

Hope that helps,

Ben


RE: [squid-users] proxy auto configuration

2006-01-03 Thread Ben Tanner
Adam,

> I was just wondering if anybody could tell me how I'd go about making
> my proxy auto-detectable by Internet Explorer and Mozilla Firefox. I
> have read the FAQ on the subject and classed it as slightly outdated
> due to the fact that it talks about MSIE 3.0 on Windows 3.1.

Read the information here:

<http://wp.netscape.com/eng/mozilla/2.0/relnotes/demo/proxy-live.html>

It looks quite old, but it's still correct.

However, you want a web server called wpad.domain.local, and rather than
a proxy.pac file, you'll want to host a wpad.dat file on it.

You'll need to set the mime for .dat file type to:

application/x-ns-proxy-autoconfig

There should be readme's for whatever brand of webserver you're running
on how to do this.

The wpad.dat file is effectively javascript. The netscape readme covers
most of what you can do with it. Remember that DNS lookups, such as the
isResolvable() function, are costly in terms of time, and should be a
last resort within the script.

It's also possible to use DHCP to push this information out, though only
recent microsoft clients support it.
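
For ISC dhcpd that is typically DHCP option 252; a sketch (the hostname is 
a placeholder):

option wpad code 252 = text;
option wpad "http://wpad.domain.local/wpad.dat";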

If you're attempting to supply auto-config information to apple macs,
they need a proxy.pac file, so make that identical to the wpad.dat file,
and set the mime type up for the .pac in the same way.

Hope that helps.

Ben

--
Benjamin Tanner BSc (Hons) MBCS
Senior Computing Officer
Network Support Unit, Computing Service
Network Operations Centre, KentMAN Ltd.
Canterbury Christ Church University
E-Mail: <[EMAIL PROTECTED]>
Phone:  <01227 782977>
Web:<http://www.canterbury.ac.uk>


Re: [squid-users] Secure acceleration

2005-12-05 Thread Ben Sagal
You misunderstood me.  I know all communication from the squid to the
backend is unencrypted.  What I want is for squid to log whether the
client web browser connected to the HTTP port or the HTTPS port of
squid, and for this information to be sent to the redirector.



Ben

On 05/12/05, Matus UHLAR - fantomas <[EMAIL PROTECTED]> wrote:
> On 05.12 15:08, Ben Sagal wrote:
> > I have a squid server.  It is currently set up to accelerate both normal
> > and ssl pages.  I have a redirector running and, depending on which page is
> > requested, it rewrites the address for the relevant server.
> >
> > I would like the redirector to also be able to differentiate between
> > http and https pages,  ie. the redirector could send
> > http://mydomain.com/index.html and https://mydomain.com/index.html to
> > different pages/servers.  Is there any way to adjust squid so that
> > the URL that is sent to the redirector (and stored in the logs)
> > reflects whether the client connected to the standard port or the ssl port.
>
> don't you trust the network between squid and servers? Note that security of
> connections is already lower because squid can see the content. Also server
> won't see the real clients' certificates...
>
> However, for this kind of setup you need squid-3.0, or the squid SSL patch -
> squid 2.5 can't behave as https client.
>
> --
> Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Warning: I do NOT want to receive any advertising mail at this address.
> Remember half the people you know are below average.
>


[squid-users] Secure acceleration

2005-12-05 Thread Ben Sagal
I have a squid server.  It is currently set up to accelerate both
normal and ssl pages.  I have a redirector running and, depending on
which page is requested, it rewrites the address for the relevant
server.

I would like the redirector to also be able to differentiate between
http and https pages,  ie. the redirector could send
http://mydomain.com/index.html and https://mydomain.com/index.html to
different pages/servers.  Is there any way to adjust squid so that
the URL that is sent to the redirector (and stored in the logs)
reflects whether the client connected to the standard port or the ssl port.

Thank You
Ben


RE: [squid-users] autoconfig pac file

2005-11-24 Thread Ben Tanner
Hi there Toto,

I think you want to do something like this:

function FindProxyForURL(url, host)
{
    // local pages go direct, keeping load off the proxy
    if (isPlainHostName(host))
        return "DIRECT";
    // clients inside 10.2.0.0/16 use the proxy
    if (isInNet(myIpAddress(), "10.2.0.0", "255.255.0.0"))
        return "PROXY 10.1.1.13:3128";
    return "DIRECT";
}


Otherwise what you're doing is a DNS lookup against the host the client is 
attempting to fetch, and then checking to see if that destination is in the 
relevant subnet. This code will check that you're not attempting to access a 
local page (removes load off your proxy) and then check to see if the client IP 
is in the specified IP range.

Hope that helps,

Ben
--
Ben Tanner
Senior Computing Officer
Network Support Unit
E-Mail: <[EMAIL PROTECTED]>
Phone:  <2977>



Re: [squid-users] iptables + wccp

2005-10-28 Thread Ben

Hi,
   What am I doing wrong?
   I set

   iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080


   but this is what I see:

cache:~ # tcpdump  -s 1600 -n -i any -p port 80

140(0) win 64240
10:53:16.324378 IP 201.66.103.169.20011 > 66.218.71.102.80: S 1179195225:1179195225(0) win 16384
10:53:16.341145 IP 201.66.103.169.20271 > 200.56.83.44.80: S 1101813889:1101813889(0) win 16384
10:53:16.349980 IP 201.66.103.169.20006 > 207.68.177.126.80: S 1934873506:1934873506(0) win 8192
10:53:16.416353 IP 201.66.103.169.16404 > 209.172.55.4.80: . ack 2761 win 64860
10:53:16.435404 IP 201.66.103.169.20013 > 66.218.71.198.80: S 523263296:523263296(0) win 65535
10:53:16.460689 IP 201.66.103.169.57763 > 64.86.106.214.80: F 0:0(0) ack 1 win 8280


Browsing the Internet from my computers doesn't work, but if I enter an IP
address directly then I can browse.
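
That symptom (IPs work, hostnames don't) usually points at DNS rather than the REDIRECT rule; two quick checks, as a sketch:

# from a client machine: does name resolution work at all?
nslookup www.example.com
# on the Linux box: is the REDIRECT rule actually matching packets?
iptables -t nat -L PREROUTING -n -v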



Regards
Ben

- Original Message - 
From: "Nauman" <[EMAIL PROTECTED]>

To: "Ben" <[EMAIL PROTECTED]>; 
Sent: Friday, October 28, 2005 9:47 AM
Subject: Re: [squid-users] iptables + wccp



For Caching:-

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080




For Natting:-

iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -d 0/0 -j MASQUERADE




For Flushing iptable Nat Rules:-

iptables -t nat -F



Entries of "/etc/rc.local" :-

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -F

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080


iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -d 0/0 -j MASQUERADE






For Saving iptables Rules:-

iptables-save > /opt/iptables_rule



For Restore iptables Rules:-

iptables-restore < /opt/iptables_rule




Thanks and regards,
M.Nauman Habib

- Original Message - 
From: "Ben" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 28, 2005 7:39 PM
Subject: [squid-users] iptables + wccp



Hi all,
   What iptables configuration is needed to use WCCP?
Suse 9.3
Kernel 2.6
Wccp V1
IOS 12.x

Regards
Ben














[squid-users] iptables + wccp

2005-10-28 Thread Ben

Hi all,
   What iptables configuration is needed to use WCCP?
Suse 9.3
Kernel 2.6
Wccp V1
IOS 12.x

Regards
Ben
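
The usual WCCPv1 recipe on a 2.6 kernel is a GRE tunnel plus a REDIRECT rule on it; a sketch, where <router-ip> and <squid-ip> are placeholders and squid is assumed to listen on 3128:

# receive the GRE-encapsulated redirects from the router
modprobe ip_gre
iptunnel add wccp0 mode gre remote <router-ip> local <squid-ip> dev eth0
ifconfig wccp0 127.0.1.1 up
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter

# hand the de-encapsulated port-80 traffic to squid
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3128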





Re: [squid-users] any new documentation about squid?in PDF?

2005-10-24 Thread Ben

Hi Kumara,
   Check out
http://squid.visolve.com/squid/configuration_manual_24.htm


Regards 
Ben


- Original Message - 
From: "Kumara Jayaweera" <[EMAIL PROTECTED]>

To: 
Cc: 
Sent: Monday, October 24, 2005 11:13 AM
Subject: [squid-users] any new documentation about squid?in PDF?



Greetings to the list!
Is there any new documentation about Squid, in PDF?
Thanks
kumara









[squid-users] Systems Requirements

2005-10-24 Thread Ben

Hi everybody,
   What are the hardware requirements for Squid serving 10,000 clients?

thanks
Regards
Ben





Re: [squid-users] wccp

2005-10-20 Thread Ben


   OK, so I use ip_wccp with WCCP v1, but the cache doesn't see the router,
or the router doesn't see the cache.


Does your router support WCCP v1?


Yes, my router supports WCCP v1 and WCCP v2.



2005/10/19 09:26:30| Ignoring WCCP_I_SEE_YOU from X.X.X.X with non-positive number of caches


Odd..

tcpdump -X -s 1600 -n -i any -p port 2048


tcpdump -X -s 1600 -n -i any -p port 2048 shows:

13:32:07.890582 IP Y.Y.Y.Y.2048 > X.X.X.X.2048: UDP, length 52
   0x0000:  4500 0050 ae54 4000 4011 d6ea c85e 12a0  E..P.T@.@....^..
   0x0010:  c85e 1201 0800 0800 003c 3a0c 0000 0007  .^.......<:.....
   0x0020:  0000 0004 0000 0000 0000 0000 0000 0000  ................
   0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
   0x0040:  0000 0000 0000 0000 0000 0000 0000 0001  ................
13:32:07.891233 IP X.X.X.X.2048 > Y.Y.Y.Y.2048: UDP, length 64
   0x0000:  4500 005c d8c5 0000 ff11 2d6d c85e 1201  E..\......-m.^..
   0x0010:  c85e 12a0 0800 0800 0048 5eef 0000 0008  .^.......H^.....
   0x0020:  0000 0004 0000 0002 0000 0002 0000 0001  ................
   0x0030:  c85e 12a0 0000 0000 0000 0000 0000 0000  .^..............
   0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................
   0x0050:  0000 0000 0000 0000 0000 0001            ............

Y.Y.Y.Y = Squid IP
X.X.X.X = router IP

and the log router:

Oct 20 13:29:54: WCCP-EVNT: Built I_See_You msg body w/0 usable web caches, change # 0001
Oct 20 13:30:08: WCCP-EVNT: Built I_See_You msg body w/1 usable web caches, change # 0002
Oct 20 13:30:08: %WCCP-5-CACHEFOUND: Web Cache Y.Y.Y.Y acquired
Oct 20 13:31:11: WCCP-PKT: Received valid Here_I_Am packet from Y.Y.Y.Y w/rcvd_id 0007
Oct 20 13:31:11: WCCP-PKT: Sending I_See_You packet to Y.Y.Y.Y w/rcvd_id 0008
Oct 20 13:31:21: WCCP-PKT: Received valid Here_I_Am packet from Y.Y.Y.Y w/rcvd_id 0008
Oct 20 13:31:21: WCCP-PKT: Sending I_See_You packet to Y.Y.Y.Y w/rcvd_id 0009
Oct 20 13:31:32: WCCP-PKT: Received valid Here_I_Am packet from Y.Y.Y.Y w/rcvd_id 0009
Oct 20 13:31:32: WCCP-PKT: Sending I_See_You packet to Y.Y.Y.Y w/rcvd_id 000A
Oct 20 13:31:42: WCCP-PKT: Received valid Here_I_Am packet from Y.Y.Y.Y w/rcvd_id 000A
Oct 20 13:31:42: WCCP-PKT: Sending I_See_You packet to Y.Y.Y.Y w/rcvd_id 000B





may shed some additional light on the problem... but it does look like
your router is sending a very odd WCCP_I_SEE_YOU to your cache.

Regards
Henrik




Regards
Benj






Re: [squid-users] wccp

2005-10-19 Thread Ben

Rénald,

   OK, so I use ip_wccp with WCCP v1, but the cache doesn't see the router,
or the router doesn't see the cache.


My log show:

2005/10/19 09:26:22| Squid Cache (Version 2.5.STABLE3): Exiting normally.
2005/10/19 09:26:23| Starting Squid Cache version 2.5.STABLE3 for i686-redhat-linux-gnu...
2005/10/19 09:26:23| Process ID 5349
2005/10/19 09:26:23| With 1024 file descriptors available
2005/10/19 09:26:23| DNS Socket created at 0.0.0.0, port 32774, FD 4
2005/10/19 09:26:23| Adding nameserver Y.Y.Y.Y from /etc/resolv.conf
2005/10/19 09:26:23| Adding nameserver Z.Z.Z.Z from /etc/resolv.conf
2005/10/19 09:26:23| User-Agent logging is disabled.
2005/10/19 09:26:23| Referer logging is disabled.
2005/10/19 09:26:23| Unlinkd pipe opened on FD 9
2005/10/19 09:26:23| Swap maxSize 52428800 KB, estimated 4032984 objects
2005/10/19 09:26:23| Target number of buckets: 201649
2005/10/19 09:26:23| Using 262144 Store buckets
2005/10/19 09:26:23| Max Mem  size: 65536 KB
2005/10/19 09:26:23| Max Swap size: 52428800 KB
2005/10/19 09:26:23| Rebuilding storage in /var/spool/squid (CLEAN)
2005/10/19 09:26:23| Using Least Load store dir selection
2005/10/19 09:26:23| Set Current Directory to /var/spool/squid
2005/10/19 09:26:23| Loaded Icons.
2005/10/19 09:26:24| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11.
2005/10/19 09:26:24| Accepting ICP messages at 0.0.0.0, port 3130, FD 12.
2005/10/19 09:26:24| Accepting WCCP messages on port 2048, FD 13.
2005/10/19 09:26:24| Ready to serve requests.
2005/10/19 09:26:24| Store rebuilding is 22.6% complete
2005/10/19 09:26:24| Done reading /var/spool/squid swaplog (18108 entries)
2005/10/19 09:26:24| Finished rebuilding storage from disk.
2005/10/19 09:26:24| 18108 Entries scanned
2005/10/19 09:26:24| 0 Invalid entries.
2005/10/19 09:26:24| 0 With invalid flags.
2005/10/19 09:26:24| 18108 Objects loaded.
2005/10/19 09:26:24| 0 Objects expired.
2005/10/19 09:26:24| 0 Objects cancelled.
2005/10/19 09:26:24| 0 Duplicate URLs purged.
2005/10/19 09:26:24| 0 Swapfile clashes avoided.
2005/10/19 09:26:24|   Took 0.5 seconds (38958.8 objects/sec).
2005/10/19 09:26:24| Beginning Validation Procedure
2005/10/19 09:26:24|   Completed Validation Procedure
2005/10/19 09:26:24|   Validated 18108 Entries
2005/10/19 09:26:24|   store_swap_size = 258452k
2005/10/19 09:26:24| storeLateRelease: released 0 objects
2005/10/19 09:26:30| Ignoring WCCP_I_SEE_YOU from X.X.X.X with non-positive number of caches


X.X.X.X is the router's IP.
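
For reference, the squid.conf side of WCCPv1 is only a couple of directives; a sketch with the router address as a placeholder:

wccp_router X.X.X.X
# some older IOS releases only speak protocol revision 3; the default is 4
wccp_version 4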

Regards
Ben

- Original Message - 
From: "Rénald CASAGRAUDE" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 19, 2005 4:37 AM
Subject: Re: [squid-users] wccp



Ben,

On 18 oct. 05, at 23:45, Ben wrote:


--on Linux:
modprobe ip_wccp
modprobe ip_gre
iptunnel add gre1 mode gre remote X.X.X.X local X.X.X.X dev eth0
ifconfig gre1 up
modprobe ip_wccp


I don't think you have to use both ip_wccp and ip_gre; you have to choose one.
I know that WCCP v1 works well with ip_wccp, but this module isn't the
preferred method. I have never made it work with the ip_gre module.
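
For the ip_wccp route the setup reduces to something like this (a sketch; squid's port 3128 is assumed):

# ip_wccp decapsulates the redirected traffic itself, no tunnel interface needed
modprobe ip_wccp
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128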

Regards

R.










  1   2   >