Re: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-17 Thread Muhammed Shehata

1st file for large attach size

Best Regards,
*Muhammad Shehata*
IT Network Security Engineer
TEData
Building A11- B90, Smart Village
Km 28 Cairo - Alex Desert Road, 6th October, 12577, Egypt
T: +20 (2) 33 32 0700 | Ext: 1532
F: +20 (2) 33 32 0800 | M:
E: m.sheh...@tedata.net
On 12/13/2012 11:58 PM, Amos Jeffries wrote:


On 13/12/2012 9:41 p.m., Muhammed Shehata wrote:

Dear Amos,
- the interpretation:
the logs are from two similar Squid servers that differ only in
version; the client at both does not disconnect or anything. The
ABORTED may mean that Squid can't get this URL (it contains JavaScript),
but what I wonder is why the other Squid can get it successfully.
-here is the logs with time :
squid2 on Centos5.2  1355387935.418  7 x.x.x.x TCP_MISS/304 324
GET http://cdn.optimizely.com/js/128727546.js - DIRECT/23.50.196.211
text/javascript
squid3 on Centos 6.3 
13/Dec/2012:10:39:05 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET
http://cdn.optimizely.com/js/128727546.js -
HIER_DIRECT/cdn.optimizely.com -
13/Dec/2012:10:39:25 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET
http://cdn.optimizely.com/js/128727546.js -
HIER_DIRECT/cdn.optimizely.com -



Aha. Thanks, this makes more sense: 7ms with a response versus 20
seconds with nothing returned.

Although for better debugging you should get the squid-3 to log the
upstream server IP address. It could be a problem with which IP is
being connected to by Squid.

With 3.2 at debug_options 11,2 you get a cache.log HTTP trace of
what is going on between Squid, optimizely, and the client. I suspect
optimizely is not responding when a request is delivered to them - but
you need to track that down.

Amos
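[Editor's note: the trace Amos suggests can be enabled with a squid.conf fragment - a sketch for squid-3.2; note the directive name is debug_options.]

```
# Keep all debug sections at level 1, raise section 11 (HTTP) to level 2
# so cache.log records the headers exchanged with the upstream server.
debug_options ALL,1 11,2
```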






squid3.2.4 when accept js urls which make slowness.tar.gz
Description: squid3.2.4 when accept js urls which make slowness.tar.gz


Re: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-17 Thread Muhammed Shehata

2nd file for large attach size

Best Regards,
*Muhammad Shehata*
IT Network Security Engineer
TEData
Building A11- B90, Smart Village
Km 28 Cairo - Alex Desert Road, 6th October, 12577, Egypt
T: +20 (2) 33 32 0700 | Ext: 1532
F: +20 (2) 33 32 0800 | M:
E: m.sheh...@tedata.net
On 12/13/2012 11:58 PM, Amos Jeffries wrote:


[cut]

squid3.2.4 when deny js urls.tar.gz
Description: squid3.2.4 when deny js urls.tar.gz


Re: [squid-users] upload data report

2012-12-17 Thread Muhammad Yousuf Khan
[cut]


 probably?

probably, because I am not confident enough in my understanding
of the output.

[cut]



 The '<' means data sent to the client. The '>' means data received from the client.
 The format omitting these characters is the total sum of both.


the confusing part is the actual value that I am looking for, whether
in upload or download: the value always comes with '<', not
with '>'

for example, here is when I uploaded a 2.2 MB file to live.com

139811.654 485143 10.51.100.240 TCP_MISS/200 18155070 181 CONNECT
snt121.mail.live.com:443 - DIRECT/65.55.68.103


in this log I downloaded an approx 3.2 MB file

 1355739469.389  11759 10.51.100.240 TCP_MISS/200 4077890 1917 GET
http://software-files-a.cnet.com/s/software/12/84/20/79/ccsetup325.exe?
- DIRECT/92.122.213.10 application/octet-stream

however, in both logs you can see two values: 1) 18155070 181 (upload
log), 2) 4077890 1917 (download log)

so the question is: does the '<' value with method CONNECT mean data
sent to the client, and on the other hand does the '>' value with method GET
mean data received from the client? are they totally inverse of each
other when the method changes?








 Amos



[squid-users] tunnel state data

2012-12-17 Thread paulo bruck
Hello everyone

At one of my clients, Squid froze. Looking at cache.log I saw that the last
line was an error message:

Tunnel State Data Connection error FD 265 read/write= failure (32) Broken pipe

It is not the first time this has happened, but this time I could see
this message before restarting Squid.
Is it a bug, or a normal message indicating a sporadic error?

Using: 3.1.6-1.2+squeeze2 + Debian squeeze + kernel 2.6.32-5-amd64

best regards


Re: [squid-users] Request header too large & ip_conntrack

2012-12-17 Thread Shawn Wright
Hello, 


This problem continues. How can I locate where these "request header too large" 
errors are coming from? I don't see the client IP being logged. Or is it on the 
preceding line? 


2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid 
Request 
2012/12/13 20:00:20| Request header is too large (67792 bytes) 
2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes. 
2012/12/13 20:00:20| Request header is too large (67623 bytes) 
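[Editor's note: a sketch of one way to correlate the oversized-header errors with the "Invalid Request" lines that do carry a client IP, assuming the cache.log format shown above; the sample data and function name are hypothetical.]

```python
import re

# Hypothetical sample in the cache.log format shown above.
cache_log = """\
2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid Request
2012/12/13 20:00:20| Request header is too large (67792 bytes)
2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes.
2012/12/13 20:00:20| Request header is too large (67623 bytes)
"""

def suspect_ips(log_text):
    """Return the most recent 'Invalid Request' client IP for each oversized-header error."""
    last_ip = None
    hits = []
    for line in log_text.splitlines():
        m = re.search(r'clientReadRequest: FD \d+ \(([\d.]+):\d+\)', line)
        if m:
            last_ip = m.group(1)  # remember the IP from the Invalid Request line
        elif 'Request header is too large' in line and last_ip:
            hits.append(last_ip)
    return hits

print(suspect_ips(cache_log))  # -> ['10.5.0.150', '10.5.0.150']
```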


Shawn Wright 
Manager of Information Technology 
Shawnigan Lake School 

Please direct requests for support to helpd...@shawnigan.ca 



- Original Message - 

From: Shawn Wright swri...@shawnigan.ca 
To: squid-users@squid-cache.org 
Sent: Friday, 14 December, 2012 11:53:35 AM 
Subject: [squid-users] Request header too large & ip_conntrack 

Hello, 

I have been trying to track down a congestion issue we have been seeing at 8pm 
each night for several weeks, where most of our clients see slow or no 
connectivity for 20-40 minutes. 

First issue was our firewall reaching ip_conntrack_max, so I increased it, and 
began logging the conn count every 5 minutes. The problem was gone for a week. 

Then it came back, just as before. The firewall was fine, no errors this time, 
and well below the ip_conntrack_max. 

I looked at proxy, and saw an excessive number of invalid requests during peak 
times, at one point over 100/sec from a single client. Adjusting some rules on 
our wireless controller to resolve this issue, and invalid requests dropped by 
a factor of 10, but the issue at 8pm continued. I also set: 

request_header_max_size 64 KB 
reply_header_max_size 64 KB 

as we were seeing many request header too large errors. 

I enabled conntrack logging every minute on the proxy, and saw it came very 
close to it's limit last night at 8pm, and stayed there for over an hour, but 
no errors were logged. However, at the instant that ip_conntrack climbed at 8pm 
(limit was 65536, now 262144): 

2012-12-13 19:56:01 28754 
2012-12-13 19:57:01 29398 
2012-12-13 19:58:01 27449 
2012-12-13 19:59:01 25355 
2012-12-13 20:00:02 25551 
2012-12-13 20:01:01 48476 
2012-12-13 20:02:01 61525 
2012-12-13 20:03:01 58012 
2012-12-13 20:04:01 59262 
2012-12-13 20:05:01 61038 
2012-12-13 20:06:01 61023 

squid started logging this: 

2012/12/13 19:59:55| clientReadRequest: FD 1027 (10.2.120.12:61069) Invalid 
Request 
2012/12/13 20:00:00| parseHttpRequest: Requestheader contains NULL characters 
2012/12/13 20:00:00| parseHttpRequest: Can't get request method 
2012/12/13 20:00:00| clientReadRequest: FD 1901 (10.2.120.51:41435) Invalid 
Request 
2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid 
Request 
2012/12/13 20:00:20| Request header is too large (67792 bytes) 
2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes. 
2012/12/13 20:00:20| Request header is too large (67623 bytes) 
2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes. 
2012/12/13 20:00:20| Request header is too large (67487 bytes) 
... 

the above continues for 4000 lines, with 250 of them in the first second. 

squid is still servicing some requests during the outage, and things appear 
normal in the access.log, albeit lower volume. During the issue, there are very 
few other errors in cache.log - just the request header too large and a few 
invalid requests. 

MRTG shows squid hits/s drop from ~120 to ~10 for the 70 minute outage, slowly 
declining to near zero until 21:10 when the request header too large errors 
stop, and the hits/s climbs to ~100 immediately. 

The environment: 

Dual Xeon CPUs, 4Gb, Ubuntu 8.04 LTS 32bit 
Squid Cache: Version 2.6.STABLE20 
configure options: '--sysconfdir=/etc/squid' '--localstatedir=/var' 
'--enable-delay-pools' '--enable-snmp' '--enable-async-io=64' 
'--disable-ident-lookups' '--enable-auth=ntlm,basic' 
'--enable-removal-policies' '--enable-kill-parent-hack' 
'--with-filedescriptors=16384' '--with-large-files' '--enable-linux-netfilter' 

Approximately 700 active clients, most on wireless during this period. Aruba 
wireless controller DNATs all port 80 traffic to squid for transparent proxy. 

squid.conf: 
# Squid 2.6 stable 20, ubuntu 8.04 32bit 
# 26/Mar/2008 11:52 
# 5/Jan/2010 10:15 - recompile with large file support for logs 2Gb 
# 27/Aug/2010 11:32 - clone config & modify for transparent listening on 
72.2.0.12:3128 
# 5/Nov/2010 10:27 remove WCCP2 & replace with DNATs on Aruba VLANs 
5,6,80,90,100,110,120 
# 1/Oct/2012 - Disable Caching 

visible_hostname proxy.shawnigan.ca 
pid_filename /var/run/squid.pid 

append_domain .shawnigan.ca 
dns_nameservers 208.67.222.222 208.67.220.220 

# disable X-Forwarded-For: header -31/Mar/2006 8:43 
forwarded_for off 
via off 
client_db off 
#header_access Via deny all 

# increase request header to 64k as per RFC 2616 
request_header_max_size 64 KB 
reply_header_max_size 64 KB 


http_port 72.2.0.12:3128 transparent 

icp_port 0 

#wccp2_router 72.2.0.1 

[squid-users] Risposta: Re: [squid-users] Squid (using External ACL) problem with Icap

2012-12-17 Thread Roberto Galluzzi
Hi,

I would like to upgrade my Squid 3.1.16 to 3.2.5. Is the bug specified below 
(3132) still open?
I already tried authentication through an external ACL using ICAP, but it doesn't 
work. Bypassing ICAP, instead, I see the username correctly.
In the 3.1.x version I used the patch, but in 3.2.x the files' content is different. 
How can I resolve this?

Thanks
Roberto

 Amos Jeffries squ...@treenet.co.nz 02/12/2011 8.54 
On 2/12/2011 4:37 a.m., Roberto Galluzzi wrote:
 Hi,

 I'm using Squid 3.1 and SquidGuard with success. Now I want to add 
 SquidClamav 6.

 Versions 6.x need Icap and I didn't have problem to install.

 In my Squid configuration I use External ACL to get username from a script 
 but enabling Icap I can't surf because user is empty (in access.log). However 
 in my script log I see that Squid is using it.

 If I use simple authentication (auth_param basic ...) I get user and all work.

 Nevertheless I MUST use External ACL so I need help about this context.

The problem is that external_acl_type user= tag is not an 
authenticated username. Just a label for logging etc. in the current Squid.

There is a temporary workaround patch available in the existing bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=3132 

You can use that while we continue to work on redesigning the auth 
systems to handle this better.



 This is part of my configuration:

 squid.conf
 -
 (...)
 external_acl_type <name> children=15 ttl=7200 negative_ttl=60 %SRC 
 <helper> <arguments>
 (...)
 icap_enable on
 icap_send_client_ip on
 icap_send_client_username on
 icap_client_username_encode off
 icap_client_username_header X-Authenticated-User
 icap_preview_enable on
 icap_preview_size 1024
 icap_service service_req reqmod_precache bypass=1 
 icap://127.0.0.1:1344/squidclamav
 adaptation_access service_req allow all
 icap_service service_resp respmod_precache bypass=1 
 icap://127.0.0.1:1344/squidclamav
 adaptation_access service_resp allow all
 (...)
 -

 If you need other info, ask me without problem.

 Thank you

 Roberto





Re: [squid-users] Request header too large & ip_conntrack

2012-12-17 Thread Eliezer Croitoru

Hey there,

Take a look at:
http://www.squid-cache.org/Doc/config/request_header_max_size/
You don't see the client IP in the logs since it's an invalid request.

Try to make the size more than 64KB, but I would consider trying to find 
out what request is trying to use this kind of header, for security reasons.


Regards,
Eliezer

On 12/17/2012 4:52 PM, Shawn Wright wrote:

[cut]

Re: [squid-users] Request header too large & ip_conntrack

2012-12-17 Thread Shawn Wright
Hi,

Thanks - I did check the docs, and already increased the max header to 64KB as 
per the RFC. If this client is causing a DoS (which it appears to be), surely 
there must be some way to determine the client IP? A debug log?


Shawn Wright 
Manager of Information Technology 
Shawnigan Lake School 

Please direct requests for support to helpd...@shawnigan.ca 



- Original Message - 

From: Eliezer Croitoru elie...@ngtech.co.il 
To: squid-users@squid-cache.org 
Sent: Monday, 17 December, 2012 7:19:33 AM 
Subject: Re: [squid-users] Request header too large & ip_conntrack 

[cut]

Re: [squid-users] Request header too large & ip_conntrack

2012-12-17 Thread Eliezer Croitoru

Hey,

The max header is 64KB by default.
Change it from 64KB to about 80KB just to make sure it's OK.
There are debug_options that can help you with it:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections

I do not know the exact section used with "Request header is too large",
but a simple look-up in the source code will get you the section, and by
raising the verbosity of this section to more than 1 (2-3) you might
have the data you need.
If I remember right, section 33 will do what you need; if not, then 66, but
I'm not sure which of the sections has the exact data you need.
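[Editor's note: a sketch of the suggestion, assuming section 33 (client-side routines) is the relevant one.]

```
# Raise section 33 verbosity to 3; revert once the offending client is
# found, since high verbosity grows cache.log quickly.
debug_options ALL,1 33,3
```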


Try the HTTP sections as the best choice.

If you are afraid of DDoS, then this basic setting of 64KB protects you 
from it, with the only exception of squid cache.log size explosion.

Are you trying to protect the server from that?

Regards,
Eliezer

On 12/17/2012 5:52 PM, Shawn Wright wrote:

[cut]

Re: [squid-users] Request header too large & ip_conntrack

2012-12-17 Thread Shawn Wright

Hi, 

My mistake, the client IP is logged in the access.log. I have narrowed it down 
to a single client (a student) for the past several days, so I will bring the 
machine in to investigate what it is doing. It appears every time this machine 
connects, within 20 seconds of getting an IP address, it floods our proxy with 
these requests! 

Thanks 



Shawn Wright 
Manager of Information Technology 
Shawnigan Lake School 

Please direct requests for support to helpd...@shawnigan.ca 






- Original Message - 

From: Eliezer Croitoru elie...@ngtech.co.il 
To: squid-users@squid-cache.org 
Sent: Monday, 17 December, 2012 9:01:49 AM 
Subject: Re: [squid-users] Request header too large & ip_conntrack 

Hey, 

The max header is 64KB by default. 
Change it to more then 64 to about 80KB just to make sure it's OK. 
There are debug_options that can help you with it: 
http://wiki.squid-cache.org/KnowledgeBase/DebugSections 

I do not know the exact section used with Request header is too large 
but a simple look-up in the source code will get you the section and by 
making the verbosity of this section to more then 1(2-3) you will might 
have the data you need. 
If I remember right section 33 will do what you need if not then 66 but 
i'm not sure which one of the sections should have the exact data you need. 

Try the HTTP sections as the best choice. 

If you are afraid of DDOS then this basic settings of 64KB protect you 
from it with the only exception of squid cache.log size explosion. 
Are you trying to protect the server from that? 

Regards, 
Eliezer 

On 12/17/2012 5:52 PM, Shawn Wright wrote: 
 Hi, 
 
 Thanks - I did check the docs, and already increased the max header to 64K as 
 per the RFC. If this client is causing a DOS (which it appears to be) surely 
 there must be some way to determine the client IP? Debug log? 
 
 
 Shawn Wright 
 Manager of Information Technology 
 Shawnigan Lake School 
 
 Please direct requests for support to helpd...@shawnigan.ca 
 
 
 
 - Original Message - 
 
 From: Eliezer Croitoru elie...@ngtech.co.il 
 To: squid-users@squid-cache.org 
 Sent: Monday, 17 December, 2012 7:19:33 AM 
 Subject: Re: [squid-users] Request header too large  ip_conntrack 
 
 Hey there, 
 
 Take a look at: 
 http://www.squid-cache.org/Doc/config/request_header_max_size/ 
 You dont see the logs since it's invalid Request. 
 
 Try to make the size more then 64KB but I would consider trying to find 
 out what request is trying to use this kind of header for security reasons. 
 
 Regards, 
 Eliezer 
 
 On 12/17/2012 4:52 PM, Shawn Wright wrote: 
 Hello, 
 
 
 This problem continues. How can I locate where these request header too 
 large are coming from? I don't see the client IP being logged. Or is it the 
 line preceding? 
 
 
 2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid 
 Request 
 2012/12/13 20:00:20| Request header is too large (67792 bytes) 
 2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes. 
 2012/12/13 20:00:20| Request header is too large (67623 bytes) 
 
 
 Shawn Wright 
 Manager of Information Technology 
 Shawnigan Lake School 
 
 Please direct requests for support to helpd...@shawnigan.ca 
 
 
 
 - Original Message - 
 
 From: Shawn Wright swri...@shawnigan.ca 
 To: squid-users@squid-cache.org 
 Sent: Friday, 14 December, 2012 11:53:35 AM 
 Subject: [squid-users] Request header too large  ip_conntrack 
 
 Hello, 
 
 I have been trying to track down a congestion issue we have been seeing at 
 8pm each night for several weeks, where most of our clients see slow or no 
 connectivity for 20-40 minutes. 
 
 First issue was our firewall reaching ip_conntrack_max, so I increased it, 
 and began logging the conn count every 5 minutes. The problem was gone for a 
 week. 
 
 Then it came back, just as before. The firewall was fine, no errors this 
 time, and well below the ip_conntrack_max. 
 
I looked at the proxy, and saw an excessive number of invalid requests during 
peak times, at one point over 100/sec from a single client. I adjusted some 
rules on our wireless controller to resolve this, and invalid requests 
dropped by a factor of 10, but the issue at 8pm continued. I also set: 
 
 request_header_max_size 64 KB 
 reply_header_max_size 64 KB 
 
 as we were seeing many request header too large errors. 
 
I enabled conntrack logging every minute on the proxy, and saw it came very 
close to its limit last night at 8pm, and stayed there for over an hour, 
but no errors were logged. However, at the instant that ip_conntrack climbed 
at 8pm (limit was 65536, now 262144): 
 
 2012-12-13 19:56:01 28754 
 2012-12-13 19:57:01 29398 
 2012-12-13 19:58:01 27449 
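The periodic conn-count logging described above can be sketched as a small cron script; the /proc path varies by kernel, so both common locations are tried, and /tmp/conntrack.log is an arbitrary choice:

```shell
#!/bin/sh
# Append a timestamped conntrack count to a log (run from cron,
# e.g. "*/5 * * * *"). The counter file path differs across kernels,
# so try the newer location first, then the older ip_conntrack one.
CNT=""
for f in /proc/sys/net/netfilter/nf_conntrack_count \
         /proc/sys/net/ipv4/netfilter/ip_conntrack_count; do
    if [ -r "$f" ]; then CNT=$(cat "$f"); break; fi
done
STAMP=$(date '+%Y-%m-%d %H:%M:%S')
echo "$STAMP ${CNT:-unavailable}" >> /tmp/conntrack.log
tail -n 1 /tmp/conntrack.log
```

The output lines match the "2012-12-13 19:56:01 28754" format shown above, so the same log can keep growing.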

Re: [squid-users] Request header too large ip_conntrack

2012-12-17 Thread Eliezer Croitoru

Hey,

In this case you can use a more general way to defend against this kind of 
abuse: iptables rate limiting.
Just set up a small nginx server with a warning page (no logs, etc.) to 
redirect blocked computers to.
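A minimal sketch of such a rate-limit rule, assuming Squid listens on port 3128, the nginx warning page on 8080, and that the iptables hashlimit match is available (all three are assumptions). The rule is built as a string here so it can be reviewed before running it as root:

```shell
# Redirect clients opening more than 20 new connections/sec to the
# warning page instead of the proxy. Ports and thresholds are examples.
RULE='iptables -t nat -A PREROUTING -p tcp --dport 3128 \
  -m hashlimit --hashlimit-name proxyabuse --hashlimit-mode srcip \
  --hashlimit-above 20/sec --hashlimit-burst 50 \
  -j REDIRECT --to-ports 8080'
echo "$RULE"   # review first; then apply as root with: eval "$RULE"
```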


They will come to you I promise!

Regards,
Eliezer

On 12/17/2012 8:43 PM, Shawn Wright wrote:


Hi,

My mistake, the client IP is logged in the access.log. I have narrowed it down 
to a single client (a student) for the past several days, so I will bring the 
machine in to investigate what it is doing. It appears every time this machine 
connects, within 20 seconds of getting an IP address, it floods our proxy with 
these requests!
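Narrowing a flood down to a single client like this can be done straight from access.log; a sketch, assuming Squid's native log format where the client IP is the third field (the sample lines are made up):

```shell
# Count requests per client IP; the heaviest talkers come out on top.
cat > /tmp/access.sample <<'EOF'
1355387935.418      7 10.5.0.150 TCP_MISS/304 324 GET http://example.com/a - DIRECT/1.2.3.4 text/html
1355387936.120     12 10.5.0.150 TCP_MISS/000 0 GET http://example.com/b - DIRECT/1.2.3.4 -
1355387936.500      9 10.5.0.22 TCP_HIT/200 1024 GET http://example.com/c - NONE/- text/html
EOF
awk '{count[$3]++} END {for (ip in count) print count[ip], ip}' /tmp/access.sample | sort -rn
# prints "2 10.5.0.150" on the first line for this sample
```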

Thanks



Shawn Wright
Manager of Information Technology
Shawnigan Lake School

Please direct requests for support to helpd...@shawnigan.ca






- Original Message -

From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@squid-cache.org
Sent: Monday, 17 December, 2012 9:01:49 AM
Subject: Re: [squid-users] Request header too large  ip_conntrack

Hey,

The max header is 64KB by default.
Change it to more than 64KB, to about 80KB, just to make sure it's OK.
There are debug_options that can help you with it:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections

I do not know the exact debug section used for "Request header is too large",
but a simple look-up in the source code will get you the section, and by
raising the verbosity of that section above 1 (to 2-3) you might
get the data you need.
If I remember right, section 33 will do what you need; if not, then 66, but
I'm not sure which of the sections has the exact data you need.

Try the HTTP sections as the best choice.
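Putting the two suggestions together, a hedged squid.conf sketch (the 80 KB size and the section number are the guesses from this thread, not verified values):

```
# squid.conf fragment - sizes and debug sections per the discussion above
request_header_max_size 80 KB
reply_header_max_size 80 KB
# keep default verbosity everywhere, level 3 for the suspected section
debug_options ALL,1 33,3
```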

If you are afraid of a DDoS, then this basic setting of 64KB protects you
from it, with the only exception being a squid cache.log size explosion.
Are you trying to protect the server from that?

Regards,
Eliezer

On 12/17/2012 5:52 PM, Shawn Wright wrote:

Hi,

Thanks - I did check the docs, and already increased the max header to 64K as 
per the RFC. If this client is causing a DOS (which it appears to be) surely 
there must be some way to determine the client IP? Debug log?


Shawn Wright
Manager of Information Technology
Shawnigan Lake School

Please direct requests for support to helpd...@shawnigan.ca



- Original Message -

From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@squid-cache.org
Sent: Monday, 17 December, 2012 7:19:33 AM
Subject: Re: [squid-users] Request header too large  ip_conntrack

Hey there,

Take a look at:
http://www.squid-cache.org/Doc/config/request_header_max_size/
You don't see the logs since it's an invalid request.

Try to make the size more than 64KB, but for security reasons I would try to find
out what request is using this kind of header.

Regards,
Eliezer

On 12/17/2012 4:52 PM, Shawn Wright wrote:

Hello,


This problem continues. How can I locate where these "request header too large" 
errors are coming from? I don't see the client IP being logged. Or is it on the 
preceding line?


2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid 
Request
2012/12/13 20:00:20| Request header is too large (67792 bytes)
2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes.
2012/12/13 20:00:20| Request header is too large (67623 bytes)


Shawn Wright
Manager of Information Technology
Shawnigan Lake School

Please direct requests for support to helpd...@shawnigan.ca



- Original Message -

From: Shawn Wright swri...@shawnigan.ca
To: squid-users@squid-cache.org
Sent: Friday, 14 December, 2012 11:53:35 AM
Subject: [squid-users] Request header too large  ip_conntrack

Hello,

I have been trying to track down a congestion issue we have been seeing at 8pm 
each night for several weeks, where most of our clients see slow or no 
connectivity for 20-40 minutes.

First issue was our firewall reaching ip_conntrack_max, so I increased it, and 
began logging the conn count every 5 minutes. The problem was gone for a week.

Then it came back, just as before. The firewall was fine, no errors this time, 
and well below the ip_conntrack_max.

I looked at the proxy, and saw an excessive number of invalid requests during peak 
times, at one point over 100/sec from a single client. I adjusted some rules on 
our wireless controller to resolve this, and invalid requests dropped by 
a factor of 10, but the issue at 8pm continued. I also set:

request_header_max_size 64 KB
reply_header_max_size 64 KB

as we were seeing many request header too large errors.

I enabled conntrack logging every minute on the proxy, and saw it came very 
close to its limit last night at 8pm, and stayed there for over an hour, but 
no errors were logged. However, at the instant that 

Re: [squid-users] Squid 3.2.2 is available

2012-12-17 Thread Ralf Hildebrandt
* Ralf Hildebrandt ralf.hildebra...@charite.de:

  But why? Are there known performance issues with 3.2?

Turns out that it was this:
ICAP server connection leaks have been resolved.
which had been resolved in 3.2.7 - YAY!

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] Squid 3.2.2 is available

2012-12-17 Thread Eliezer Croitoru

You meant 3.2.5, right?

Hope you won't get any issues later on, but a simple netstat would have 
shown this issue with a bit of divide and conquer.
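The netstat check mentioned above can be sketched as follows; 1344 is the standard ICAP port, and SAMPLE stands in for real `netstat -ant` output on the proxy box:

```shell
# Tally connection states toward the ICAP port; a steadily growing
# ESTABLISHED or CLOSE_WAIT count over time would reveal the leak.
SAMPLE='tcp 0 0 127.0.0.1:45678 127.0.0.1:1344 ESTABLISHED
tcp 0 0 127.0.0.1:45679 127.0.0.1:1344 ESTABLISHED
tcp 0 0 127.0.0.1:45680 127.0.0.1:1344 CLOSE_WAIT'
echo "$SAMPLE" | awk '$5 ~ /:1344$/ {count[$6]++} END {for (s in count) print count[s], s}'
```

On a live box, replace the sample with `netstat -ant` piped into the same awk one-liner.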


By the way, is this ICAP server you are using on the local machine or 
on a remote server?


Regards,
Eliezer

On 12/17/2012 9:16 PM, Ralf Hildebrandt wrote:

* Ralf Hildebrandtralf.hildebra...@charite.de:


 But why? Are there known performance issues with 3.2?

Turns out that it was this:
ICAP server connection leaks have been resolved.
which had been resolved in 3.2.7 - YAY!


--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.2.2 is available

2012-12-17 Thread Ralf Hildebrandt
* Eliezer Croitoru elie...@ngtech.co.il:
 You meant 3.2.5 ? right?

Yep, but I hope it will still be fixed in 3.2.7 as well :)
 
 Hope you won't get any issues later on, but a simple netstat would have
 shown this issue with a bit of divide and conquer.
 
 By the way, is this ICAP server you are using on the local machine
 or on a remote server?

It's a local ICAP server.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


[squid-users] Help with Squid HTTPS proxy

2012-12-17 Thread Ali Jawad
Hi
I am trying to set up an HTTPS transparent proxy with the latest stable
squid, compiled with --enable-ssl. The problem is that the squid server
returns a "connection refused" error, but the thing is that it was
trying to connect to itself. I also checked using tcpdump, and
no https requests are actually leaving for the destination site; all
https traffic is happening between the browser and the squid server.

2012/12/18 02:39:56.086 kid1| url.cc(385) urlParse: urlParse: Split
URL 'https://signup.netflix.com/global' into proto='https',
host='signup.netflix.com', port='443', path='/global'
2012/12/18 02:39:56.086 kid1| Address.cc(409) LookupHostIP: Given
Non-IP 'signup.netflix.com': Name or service not known

And basically the same happens for other hostnames; however, when I
run nslookup on the squid server the lookup is correct, and the
resolv.conf file lists the Google DNS as the top DNS server.

Config below

http://pastebin.com/Cm29hmXL

Please advise


Re: [squid-users] Help with Squid HTTPS proxy

2012-12-17 Thread Joshua B.

Netflix doesn't work through Squid by default.
The only option you have to allow Netflix to work through a proxied 
environment, without adding exceptions on all your clients, is to put 
this code in your configuration file:


acl netflix dstdomain .netflix.com
cache deny netflix

That allows Netflix to fully work through the proxy.
Tested, so I know it works on my network.

On 12-12-17 06:48 PM, Ali Jawad wrote:

Hi
I am trying to set up an HTTPS transparent proxy with the latest stable
squid, compiled with --enable-ssl. The problem is that the squid server
returns a "connection refused" error, but the thing is that it was
trying to connect to itself. I also checked using tcpdump, and
no https requests are actually leaving for the destination site; all
https traffic is happening between the browser and the squid server.

2012/12/18 02:39:56.086 kid1| url.cc(385) urlParse: urlParse: Split
URL 'https://signup.netflix.com/global' into proto='https',
host='signup.netflix.com', port='443', path='/global'
2012/12/18 02:39:56.086 kid1| Address.cc(409) LookupHostIP: Given
Non-IP 'signup.netflix.com': Name or service not known

And basically the same happens for other hostnames; however, when I
run nslookup on the squid server the lookup is correct, and the
resolv.conf file lists the Google DNS as the top DNS server.

Config below

http://pastebin.com/Cm29hmXL

Please advise





Re: [squid-users] Request header too large ip_conntrack

2012-12-17 Thread Shawn Wright

Hi, 

I was wondering about rate-limiting, as we have used it for inbound connections 
to our mail server. We've avoided this so far on the proxy in an effort to keep 
it simple and maximize performance, but that isn't working so well anymore. We 
recently turned off disk caching as the hit rate was so low, and our bandwidth 
fee is flat-rate. 

I will have a look at iptables on the proxy box tonight, thanks again. Students 
leave soon, so I may not know until next year if it works... 



Shawn Wright 
Manager of Information Technology 
Shawnigan Lake School 

Please direct requests for support to helpd...@shawnigan.ca 






- Original Message - 

From: Eliezer Croitoru elie...@ngtech.co.il 
To: squid-users@squid-cache.org 
Sent: Monday, 17 December, 2012 10:52:10 AM 
Subject: Re: [squid-users] Request header too large  ip_conntrack 

Hey, 

In this case you can use a more general way to defend against this kind of 
abuse: iptables rate limiting. 
Just set up a small nginx server with a warning page (no logs, etc.) to 
redirect blocked computers to. 

They will come to you I promise! 

Regards, 
Eliezer 

On 12/17/2012 8:43 PM, Shawn Wright wrote: 
 
 Hi, 
 
 My mistake, the client IP is logged in the access.log. I have narrowed it 
 down to a single client (a student) for the past several days, so I will 
 bring the machine in to investigate what it is doing. It appears every time 
 this machine connects, within 20 seconds of getting an IP address, it floods 
 our proxy with these requests! 
 
 Thanks 
 
 
 
 Shawn Wright 
 Manager of Information Technology 
 Shawnigan Lake School 
 
 Please direct requests for support to helpd...@shawnigan.ca 
 
 
 
 
 
 
 - Original Message - 
 
 From: Eliezer Croitoru elie...@ngtech.co.il 
 To: squid-users@squid-cache.org 
 Sent: Monday, 17 December, 2012 9:01:49 AM 
 Subject: Re: [squid-users] Request header too large  ip_conntrack 
 
 Hey, 
 
 The max header is 64KB by default. 
 Change it to more than 64KB, to about 80KB, just to make sure it's OK. 
 There are debug_options that can help you with it: 
 http://wiki.squid-cache.org/KnowledgeBase/DebugSections 
 
 I do not know the exact debug section used for "Request header is too large", 
 but a simple look-up in the source code will get you the section, and by 
 raising the verbosity of that section above 1 (to 2-3) you might 
 get the data you need. 
 If I remember right, section 33 will do what you need; if not, then 66, but 
 I'm not sure which of the sections has the exact data you need. 
 
 Try the HTTP sections as the best choice. 
 
 If you are afraid of a DDoS, then this basic setting of 64KB protects you 
 from it, with the only exception being a squid cache.log size explosion. 
 Are you trying to protect the server from that? 
 
 Regards, 
 Eliezer 
 
 On 12/17/2012 5:52 PM, Shawn Wright wrote: 
 Hi, 
 
 Thanks - I did check the docs, and already increased the max header to 64K 
 as per the RFC. If this client is causing a DOS (which it appears to be) 
 surely there must be some way to determine the client IP? Debug log? 
 
 
 Shawn Wright 
 Manager of Information Technology 
 Shawnigan Lake School 
 
 Please direct requests for support to helpd...@shawnigan.ca 
 
 
 
 - Original Message - 
 
 From: Eliezer Croitoru elie...@ngtech.co.il 
 To: squid-users@squid-cache.org 
 Sent: Monday, 17 December, 2012 7:19:33 AM 
 Subject: Re: [squid-users] Request header too large  ip_conntrack 
 
 Hey there, 
 
 Take a look at: 
 http://www.squid-cache.org/Doc/config/request_header_max_size/ 
 You don't see the logs since it's an invalid request. 
 
 Try to make the size more than 64KB, but for security reasons I would try to find 
 out what request is using this kind of header. 
 
 Regards, 
 Eliezer 
 
 On 12/17/2012 4:52 PM, Shawn Wright wrote: 
 Hello, 
 
 
 This problem continues. How can I locate where these "request header too 
 large" errors are coming from? I don't see the client IP being logged. Or is it on the 
 preceding line? 
 
 
 2012/12/13 20:00:05| clientReadRequest: FD 165 (10.5.0.150:60948) Invalid 
 Request 
 2012/12/13 20:00:20| Request header is too large (67792 bytes) 
 2012/12/13 20:00:20| Config 'request_header_max_size'= 65536 bytes. 
 2012/12/13 20:00:20| Request header is too large (67623 bytes) 
 
 
 Shawn Wright 
 Manager of Information Technology 
 Shawnigan Lake School 
 
 Please direct requests for support to 

Re: [squid-users] Help with Squid HTTPS proxy

2012-12-17 Thread Eliezer Croitoru


On 12/18/2012 2:31 AM, Joshua B. wrote:

Netflix doesn't work through Squid by default.
The only option you have to allow Netflix to work through a proxied
environment, without adding exceptions on all your clients, is to put
this code in your configuration file:

acl netflix dstdomain .netflix.com
cache deny netflix

That allows Netflix to fully work through the proxy.
Tested, so I know it works on my network.


The above only makes netflix.com and all its subdomains not be 
cached; the traffic will still be proxied the same as any other connection. So it's not 
a solution.


The latest squid stable version, 3.2.5, doesn't have the ssl-bump 
server-first feature, which is supposed to help in your case and others.

That feature also uses dynamic certificate helper support, which will help a lot.

You can also remove the "hierarchy_stoplist cgi-bin ?" line from your squid.conf.

You do have another problem with your setup: you don't have 
a basic proxy socket without tproxy/intercept.
Add a line "http_port 127.0.0.1:3127" (or any other port) just to provide 
this one.
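For reference, a hypothetical squid.conf sketch of what that setup looks like in a release that does have server-first bumping and dynamic certificate generation (the port numbers and CA path are assumptions to adapt, not drop-in config):

```
# plain proxy socket, no tproxy/intercept
http_port 127.0.0.1:3127
# intercepted HTTPS with server-first bumping and generated certs
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    cert=/etc/squid/ssl_cert/proxyCA.pem
ssl_bump server-first all
```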


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Help with Squid HTTPS proxy

2012-12-17 Thread Amos Jeffries

On 18/12/2012 1:31 p.m., Joshua B. wrote:

Netflix doesn't work through Squid by default.
The only option you have to allow Netflix to work through a proxied 
environment, without adding exceptions on all your clients, is to put 
this code in your configuration file:


acl netflix dstdomain .netflix.com
cache deny netflix

That allows Netflix to fully work through the proxy.
Tested, so I know it works on my network.


All that does is prevent *caching* of Netflix objects, all the other 
proxy handling and traffic management is still operating.


That is a clear sign that your caching rules are causing problems, or 
that the site itself has very broken cache controls. A quick scan of 
Netflix shows a fair knowledge of caching control, geared towards 
non-caching of objects. Which points back at your config being the problem.


 Do you have a refresh_pattern with loose regex and ignore-* options 
forcing things to cache which are not supposed to be stored? Please 
check and remove them.


Amos


[squid-users] squid with c-icap

2012-12-17 Thread Zakharov Victor
I tried to configure squidclamav according to 
http://squidclamav.darold.net/config.html
As a result, c-icap works, but squid 3.1.19 doesn't send any requests to 
port 1344, where c-icap is listening.
My discussion with the squidclamav developer: 
https://sourceforge.net/p/squidclamav/discussion/800646/thread/3fe5a2a0/?limit=50#30d7 


Can you help me with this problem?
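For comparison, the ICAP directives squid 3.1 needs look roughly like this (the service names, bypass values, and squidclamav service URL follow the squidclamav documentation's examples; adjust them to your c-icap setup):

```
icap_enable on
icap_send_client_ip on
icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0 icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all
```

If these directives are missing (or icap_enable is off), squid never contacts port 1344 even though c-icap itself works.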