[squid-users] 3.2.5 comm_open: socket failure: (24) Too many open files

2012-12-29 Thread
Squid is configured with 16384 FDs and is only using about 1k, yet it seems it
still complains about running out of FDs. Is there a regression from
3.2.2 -> 3.2.5?

2012/12/29 04:25:53 kid1| Reserved FD adjusted from 100 to 15391 due to failures
2012/12/29 04:25:53 kid1| comm_open: socket failure: (24) Too many open files
2012/12/29 04:25:53 kid1| comm_open: socket failure: (24) Too many open files
2012/12/29 04:25:53 kid1| comm_open: socket failure: (24) Too many open files
2012/12/29 04:25:53 kid1| comm_open: socket failure: (24) Too many open files
2012/12/29 04:25:53 kid1| WARNING! Your cache is running out of filedescriptors



# squidclient -p 9444 mgr:info
HTTP/1.1 200 OK
Server: squid/3.2.5
Mime-Version: 1.0
Date: Sat, 29 Dec 2012 12:28:37 GMT
Content-Type: text/plain
Expires: Sat, 29 Dec 2012 12:28:37 GMT
Last-Modified: Sat, 29 Dec 2012 12:28:37 GMT
X-Cache: MISS from oow-ssh
X-Cache-Lookup: MISS from oow-ssh:9444
Connection: close

Squid Object Cache: Version 3.2.5
Start Time: Sat, 29 Dec 2012 12:24:37 GMT
Current Time:   Sat, 29 Dec 2012 12:28:37 GMT
Connection information for squid:
Number of clients accessing cache:  0
Number of HTTP requests received:   4257
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1064.1
Average ICP messages per minute since start:0.0
Select loop called: 1196966 times, 0.201 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 0.5%, 60min: 0.5%
Hits as % of bytes sent:5min: 0.2%, 60min: 0.2%
Memory hits as % of hit requests:   5min: 23.8%, 60min: 23.8%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
Storage Swap size:  0 KB
Storage Swap capacity:   0.0% used,  0.0% free
Storage Mem size:   32512 KB
Storage Mem capacity:   99.2% used,  0.8% free
Mean Object Size:   0.00 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min   60 min:
HTTP Requests (All):   0.13498  0.13498
Cache Misses:  0.13498  0.13498
Cache Hits:0.0  0.0
Near Hits: 0.07409  0.07409
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.02683  0.02683
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:240.038 seconds
CPU Time:   21.460 seconds
CPU Usage:  8.94%
CPU Usage, 5 minute avg:8.94%
CPU Usage, 60 minute avg:   8.94%
Process Data Segment Size via sbrk(): 111960 KB
Maximum Resident Size: 457216 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:  112092 KB
Ordinary blocks:   105455 KB   1075 blks
Small blocks:   0 KB  0 blks
Holding blocks: 17236 KB  6 blks
Free Small blocks:  0 KB
Free Ordinary blocks:6637 KB
Total in use:6637 KB 5%
Total free:  6637 KB 5%
Total size:129328 KB
Memory accounted for:
Total accounted:41168 KB  32%
memPool accounted:  41168 KB  32%
memPool unaccounted:88160 KB  68%
memPoolAlloc calls: 9
memPoolFree calls: 964366
File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use:991
Number of file desc currently in use:  611
Files queued for open:   0
Available number of file descriptors: 15773
Reserved number of file descriptors:  15391
Store Disk files open:   0
Internal Data Structures:
  1969 StoreEntries
  1969 StoreEntries with MemObjects
  1956 Hot Object Cache Items
 0 on-disk objects


Re: [squid-users] 3.2.5 comm_open: socket failure: (24) Too many open files

2012-12-29 Thread
So you are saying that even though squid is configured to use 16384 FDs, it
can't because the OS limit is 1024?

That's kind of confusing. I've now tried ulimit -n 16384 as root to raise
the FD limit; will report back.

Maybe squid could probe at startup and warn about mismatched FD limits?
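
In the meantime, checking what limits a squid launched from the current shell will inherit takes only a couple of commands; a sketch (the squid path in the comment is an example, not taken from this thread):

```shell
# Show the soft and hard open-file limits of the current shell;
# a squid process started from this shell inherits them.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft FD limit: $soft"
echo "hard FD limit: $hard"

# To raise both limits and then start squid with them (as root):
#   ulimit -HSn 16384
#   /usr/local/squid/sbin/squid -f /etc/squid/squid.conf
```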

On Sat, Dec 29, 2012 at 12:16 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey,

 It's probably the linux ulimit settings.
 run:
 ulimit -Sn
 ulimit -Hn

 If there is a limit of 1k in the list you will need to change it, since squid
 is bound to the OS limits.

 Regards,
 Eliezer



 On 29/12/2012 14:30, 叶雨飞 wrote:

 [original message and full mgr:info output quoted verbatim; snipped]

 --
 Eliezer Croitoru

Re: [squid-users] 3.2.5 comm_open: socket failure: (24) Too many open files

2012-12-29 Thread
Is this a new patch, or is it in 3.2.5? I've never seen anything less
than 16384 in the log file, even though I've run into the issue in
3.2.5.

On Sat, Dec 29, 2012 at 1:19 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I checked: squid logs the number of FDs currently available from the OS, not
 just the compile-time option.

 2012/12/29 23:16:18 kid1| With 8192 file descriptors available


 Eliezer

 On 29/12/2012 22:41, 叶雨飞 wrote:

 So you are saying even squid is configured to use 16384 fd, it
 couldn't because limit is 1024?

 That's kind of confusing,  I tried to use ulimit -n 16384 as root to
 raise FD limit now, will report back.

 Maybe squid can probe at start and warn about mis-matching fd limits?

 On Sat, Dec 29, 2012 at 12:16 PM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 Hey,

 It's probably the linux ulimit settings.
 run:
 ulimit -Sn
 ulimit -Hn

 If there is a limit of 1k in the list you will need to change it since
 squid
 bound to the OS limits.

 Regards,
 Eliezer



 On 29/12/2012 14:30, 叶雨飞 wrote:

 [original message and full mgr:info output quoted verbatim; snipped]

Re: [squid-users] 3.2.5 comm_open: socket failure: (24) Too many open files

2012-12-29 Thread
Ubuntu 10.04, and apparently /etc/security/limits.conf is not applied on it
because it isn't configured with pam_session.

I just added ulimit -n 65535 to my launch script.
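
A launch script along those lines might look like this (a sketch; the install paths are assumptions, not taken from this thread):

```shell
#!/bin/sh
# Hypothetical squid launch wrapper: raise the FD limit in this
# shell, then exec squid so the squid process inherits it.
SQUID=/usr/local/squid/sbin/squid
CONF=/etc/squid/squid.conf

# Raising the limit may need root; silence the error so the script
# still reports what squid would actually get.
ulimit -n 65535 2>/dev/null
echo "squid would inherit an FD limit of: $(ulimit -n)"

# exec replaces this shell with squid, keeping the raised limit.
if [ -x "$SQUID" ]; then
    exec "$SQUID" -f "$CONF"
fi
```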

On Sat, Dec 29, 2012 at 1:56 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I am using the trunk version and didn't check any other versions, but I
 don't think the limit checks, or the way squid handles them, have changed
 in a long time.

 The ulimits are tricky in some systems due to system policies.
 What linux distro are you using?

 Eliezer


 On 29/12/2012 23:24, 叶雨飞 wrote:

 Is this a new patch or is it in 3.2.5? I've never seen anything less
 than 16384 in the log file, even though I've run into the issue in
 3.2.5

 On Sat, Dec 29, 2012 at 1:19 PM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 I made sure and squid logs the current available FD from the OS and not
 just
 using compiled options.

 2012/12/29 23:16:18 kid1| With 8192 file descriptors available


 Eliezer

 On 29/12/2012 22:41, 叶雨飞 wrote:


 So you are saying even squid is configured to use 16384 fd, it
 couldn't because limit is 1024?

 That's kind of confusing,  I tried to use ulimit -n 16384 as root to
 raise FD limit now, will report back.

 Maybe squid can probe at start and warn about mis-matching fd limits?

 On Sat, Dec 29, 2012 at 12:16 PM, Eliezer Croitoru
 elie...@ngtech.co.il
 wrote:


 Hey,

 It's probably the linux ulimit settings.
 run:
 ulimit -Sn
 ulimit -Hn

 If there is a limit of 1k in the list you will need to change it since
 squid
 bound to the OS limits.

 Regards,
 Eliezer



  On 29/12/2012 14:30, 叶雨飞 wrote:

  [original message and full mgr:info output quoted verbatim; snipped]

Re: RE: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-11 Thread
Try lowering the MTU to 1400 on squid's system; sometimes that's a
non-obvious problem.

On Tue, Dec 11, 2012 at 1:58 AM, Muhammad Shehata m.sheh...@tedata.net wrote:
 Dears,
 Sorry, but I have an urgent case: are there any ideas about the JS issues
 in squid3?

 
 From: Muhammad Shehata
 Sent: Tuesday, December 11, 2012 9:32 AM
 To: Eliezer Croitoru; squ...@treenet.co.nz
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] Squid3 extremely slow for some website cnn.com

 Dear Amos,Eliezer

 Could you help me with this? I found squid3 fails to fetch some JavaScript
 files on some websites:
 squid3 logs: TCP_MISS_ABORTED/000 0 GET
 http://cdn.optimizely.com/js/128727546.js
 squid3 logs: TCP_MISS/200 17298 GET
 http://cdn.optimizely.com/js/128727546.js - DIRECT/23.50.196.211
 text/javascript


 Is there any patch to solve this issue in squid3, and is there any
 configuration option to speed up the response time without bad side effects?

 Mshehata
 IT NS

 Best regards,

 Muhammed Shehata
 IT Network Security Engineer
 TE Data
 Building A11- B90, 2nd floor
 Smart Village, Cairo, Alex Desert Road, 28 Km
 6th of October 12577, Egypt
 T: +20 (2) 33 32 0700 | Ext: 1532
 F: +20 (2) 33 32 0800 | M:
 E: m.sheh...@tedata.net
 www.tedata.net




Re: [squid-users] I want to install squid on Windows Server as a web server cache

2012-12-10 Thread
Please restate your question in English, and attach error logs.

2012/12/10 fireflyhoo firefly...@gmail.com:
 After installing it, I can't get it to run, and it doesn't report any errors.
 Running squid gives no output or errors at all.




 fireflyhoo


[squid-users] Squid 3.2 built-in ACLs?

2012-10-01 Thread
Hi, it looks like squid 3.2 has built-in ACLs; I'm getting these warnings:

2012/10/01 21:11:01| WARNING: (B) '127.0.0.1' is a subnetwork of (A) '127.0.0.1'
2012/10/01 21:11:01| WARNING: because of this '127.0.0.1' is ignored
to keep splay tree searching predictable
2012/10/01 21:11:01| WARNING: You should probably remove '127.0.0.1'
from the ACL named 'localhost'
2012/10/01 21:11:01| WARNING: (B) '127.0.0.1' is a subnetwork of (A) '127.0.0.1'
2012/10/01 21:11:01| WARNING: because of this '127.0.0.1' is ignored
to keep splay tree searching predictable
2012/10/01 21:11:01| WARNING: You should probably remove '127.0.0.1'
from the ACL named 'localhost'
2012/10/01 21:11:01| WARNING: (B) '127.0.0.0/8' is a subnetwork of (A)
'127.0.0.0/8'
2012/10/01 21:11:01| WARNING: because of this '127.0.0.0/8' is ignored
to keep splay tree searching predictable
2012/10/01 21:11:01| WARNING: You should probably remove '127.0.0.0/8'
from the ACL named 'to_localhost'
2012/10/01 21:11:01| WARNING: (B) '0.0.0.0' is a subnetwork of (A) '0.0.0.0'
2012/10/01 21:11:01| WARNING: because of this '0.0.0.0' is ignored to
keep splay tree searching predictable
2012/10/01 21:11:01| WARNING: You should probably remove '0.0.0.0'
from the ACL named 'to_localhost'
2012/10/01 21:11:01| WARNING: (B) '0.0.0.0' is a subnetwork of (A) '0.0.0.0'
2012/10/01 21:11:01| WARNING: because of this '0.0.0.0' is ignored to
keep splay tree searching predictable
2012/10/01 21:11:01| WARNING: You should probably remove '0.0.0.0'
from the ACL named 'to_localhost'


relevant configs are

acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16
acl to_localnet dst 10.0.0.0/8
acl to_localnet dst 172.16.0.0/12
acl to_localnet dst 192.168.0.0/16

http_access allow manager localhost
http_access deny manager

acl internal-url urlpath_regex ^/squid-internal-.*
http_access allow localnet to_localhost internal-url
http_access deny to_localhost
http_access deny to_localnet


Is this expected?
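
For what it's worth, squid 3.2 predefines localhost, to_localhost and manager internally, which is exactly what these subnetwork warnings point at: squid.conf is re-declaring ACLs that already exist. A minimal sketch of the fix (assuming the built-in definitions are acceptable) is to drop the manual localhost/to_localhost lines and keep only the custom ACLs:

```
# localhost, to_localhost and manager are built in as of squid 3.2;
# do not redefine them.
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 192.168.0.0/16
acl to_localnet dst 10.0.0.0/8
acl to_localnet dst 172.16.0.0/12
acl to_localnet dst 192.168.0.0/16
```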


[squid-users] How to get rid of this message?

2012-09-16 Thread
Using squid 3.2, I frequently see this problem:

2012/09/16 17:49:02 kid1| Failed to select source for
'http://www.googleads.g.doubleclick.net/pagead/viewthroughconversion/1033191019/?value=0label=LDO7CJ2XvwMQ6_zU7AMguid=ONscript=0'
2012/09/16 17:49:02 kid1|   always_direct = 1
2012/09/16 17:49:02 kid1|never_direct = 0
2012/09/16 17:49:02 kid1|timedout = 0

Note that this is actually an NXDOMAIN error; however, I would like squid to
fail fast without retrying, and not print these messages. What config
directive do I need?

Thanks.
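
For the logging half of the question, the directive that controls these notices is squid's debug_options; a minimal squid.conf sketch:

```
# Log only critical (level 0) messages; this suppresses the
# "Failed to select source" notices along with other level-1 output.
debug_options ALL,0
```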


Re: [squid-users] How to get rid of this message?

2012-09-16 Thread
Thanks for the explanation, Amos. However, does this domain work for you?
I tried on various networks and none returns anything besides NXDOMAIN.
I think it is just some weird webpage that has an outdated reference.

Knowing squid isn't blocking or retrying is reassuring; however, can you
help decipher this message?

2012/09/16 19:22:06 kid1| Failed to select source for '[null_entry]'
2012/09/16 19:22:06 kid1|   always_direct = 1
2012/09/16 19:22:06 kid1|never_direct = 0
2012/09/16 19:22:06 kid1|timedout = 0

Cheers.

On Sun, Sep 16, 2012 at 7:16 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/09/2012 12:52 p.m., 叶雨飞 wrote:

 Using squid 3.2 and I frequently see such problem:

 2012/09/16 17:49:02 kid1| Failed to select source for

 'http://www.googleads.g.doubleclick.net/pagead/viewthroughconversion/1033191019/?value=0label=LDO7CJ2XvwMQ6_zU7AMguid=ONscript=0'
 2012/09/16 17:49:02 kid1|   always_direct = 1
 2012/09/16 17:49:02 kid1|never_direct = 0
 2012/09/16 17:49:02 kid1|timedout = 0

 Note that this is actually an NXDOMAIN error ,


 No, this is a Failed to select source error. Squid is unable to locate
 *any* source for the request to be fetched from. DNS lookup (DIRECT request)
 is only one of several types of sources Squid tries to locate.

 The fact that you configured always_direct to be '1' (ALLOW) and NXDOMAIN
 occurred when doing so is a separate error, which caused this Failed to
 select source.



   however I would like to
 make squid not retry and fail fast and don't print these out, what
 config directive do I need?


 What you describe wanting to happen is what Squid is already doing...

 ... Squid Receives an HTTP request.
 ... Looks up destination sources where it can be retrieved (cache, peers,
 DNS records).
 ... None found.
 ... Print that message to your log (at IMPORTANT messages level)
 ... Send NXDOMAIN error page back to the client (since it is the most
 specific problem of the two).

 Things to note:
 * Retry is not happening. There are ZERO destinations which can be tried to
 start with, so nothing to *re*-try.

 * Fast failure is dependent on how fast the DNS response comes back.

 * To not print them out you set the debug_options level to ALL,0 (critical
 only messages).


 BTW you need to fix your DNS service, Google is a major service. It is
 doubtful their DNS is returning NXDOMAIN for that query.
 Use dstdomain ACL in Squid to block requests if you are trying to blacklist
 it as an advertising source.

 Amos



[squid-users] Squid 3.2.1 sometimes return webpages as a .gz download

2012-08-18 Thread
Today I noticed that sometimes when I visit
http://dwarffortresswiki.org/index.php/DF2012:Large_pot, squid returns the
webpage as a downloadable .gz file.

Any idea what's going on?


Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread
No, I just launch it with ./squid -f squid.conf, no script.

I think this is a problem with the default config; something might be
initialized wrong there.

On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:

 In your /etc/rc.d/init.d/squid file, or whatever script is starting squid, 
 put:
 ulimit -HSn 65536
 Jenny
 From: sunyuc...@gmail.com
 Date: Thu, 16 Aug 2012 20:03:05 -0700
 To: squid-users@squid-cache.org
 Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

 I found that if I include
 max_filedescriptors 16384 in the config, it will actually use the 16384 FDs.

 If I don't have this line, it will use 1024; however, the documentation and
 source code I can find don't mention 1024 at all.

 What might be the reason?

 On Thu, Aug 16, 2012 at 7:31 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com 
 wrote:
  Here's what I get from mgr:info
 
  File descriptor usage for squid:
  Maximum number of file descriptors: 1024
  Largest file desc currently in use: 755
  Number of file desc currently in use: 692
  Files queued for open: 0
  Available number of file descriptors: 332
  Reserved number of file descriptors: 100
  Store Disk files open: 0
 
 
  and here's the squid -v output
 
  Squid Cache: Version 3.2.1
  configure options: '--disable-maintainer-mode'
  '--disable-dependency-tracking' '--disable-silent-rules'
  '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs'
  '--enable-removal-policies=lru,heap' '--enable-cache-digests'
  '--enable-underscores' '--enable-follow-x-forwarded-for'
  '--disable-translation' '--with-filedescriptors=65536'
  '--with-default-user=proxy' '--enable-ssl' '--enable-ltdl-convenience'
 
  How can I get squid 3.2.1 to use more than 1024 ?
 
  I've verified that system is fine, there's no per user limit either.
 
  # cat /proc/sys/fs/file-max
  199839


Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread
As I told you, there's no limit set anywhere, so setting it again won't
solve it. It's squid that won't use more than 1024 FDs unless told to
explicitly in the config.

On Fri, Aug 17, 2012 at 2:04 AM, Jenny Lee bodycar...@live.com wrote:

 So put it before that, then:
 ulimit -HSn 65536; ./squid -f squid.conf
 Jenny
 
 From: sunyuc...@gmail.com
 Date: Fri, 17 Aug 2012 01:56:59 -0700
 Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org

 No, I just launch it with ./squid -f squid.conf , no script.

 I think this is a problem with default config , it might be
 initialized wrong in the default config.

 On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:
 
  In your /etc/rc.d/init.d/squid file, or whatever script is starting squid, 
  put:
  ulimit -HSn 65536
  Jenny
  From: sunyuc...@gmail.com
  Date: Thu, 16 Aug 2012 20:03:05 -0700
  To: squid-users@squid-cache.org
  Subject: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
 
  I found that if I include
  max_filedescriptors 16384 in the config, it will actually use the 16384 
  fds
 
  if I don't have this line, then it will use 1024, however the document
  and source code I can find doesn't say any thing like 1024 at all,
 
  what might be the reason?
 
  On Thu, Aug 16, 2012 at 7:31 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com wrote:
   [mgr:info and squid -v output quoted verbatim; snipped]


Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

2012-08-17 Thread
Ah, I see: ulimit -HSn does print a limit of 1024 in my shell; I thought it
was unlimited. Thanks for the information.

On Fri, Aug 17, 2012 at 2:20 AM, Jenny Lee bodycar...@live.com wrote:

 You said if you use max_filedescriptors 16384 it is using 16K FDs. If you
 do not use it, it is using your shell's limit, which can be increased with
 the command I gave you.
 Where is the problem, and what exactly are you trying to solve? Put
 max_filedescriptors 64K in your config and be done with it.
 Jenny

 
 From: sunyuc...@gmail.com
 Date: Fri, 17 Aug 2012 02:13:48 -0700
 To: bodycar...@live.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?

 told you there's no limit set at all anywhere, set it again won't
 solve it. it's squid that don't want to use more than 1024 unless told
 so explicitly in the config.

 On Fri, Aug 17, 2012 at 2:04 AM, Jenny Lee bodycar...@live.com wrote:
 
  So put it before that, then:
  ulimit -HSn 65536; ./squid -f squid.conf
  Jenny
  
  From: sunyuc...@gmail.com
  Date: Fri, 17 Aug 2012 01:56:59 -0700
  Subject: Re: [squid-users] Re: 3.2.1 file descriptor is locked to 1024?
  To: bodycar...@live.com
  CC: squid-users@squid-cache.org
 
  No, I just launch it with ./squid -f squid.conf , no script.
 
  I think this is a problem with default config , it might be
  initialized wrong in the default config.
 
  On Fri, Aug 17, 2012 at 1:09 AM, Jenny Lee bodycar...@live.com wrote:
   [earlier messages and mgr:info output quoted verbatim; snipped]


[squid-users] How to debug slowness ?

2012-08-16 Thread
Hi,

I compiled and started using 3.2.1 today; however, I notice that under
highly concurrent requests my server shows some slowness at request start,
much like a scenario where only X concurrent requests are allowed and the
rest block until they are scheduled. Any idea what might cause this? Is
there a knob to tune, or a way to debug such slowness?

I do have a very limited RAM setting (32 MB), though.

Cheers.


[squid-users] 3.2.1 file descriptor is locked to 1024?

2012-08-16 Thread
Here's what I get from mgr:info

File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:755
Number of file desc currently in use:  692
Files queued for open:   0
Available number of file descriptors:  332
Reserved number of file descriptors:   100
Store Disk files open:   0


and here's the squid -v output

Squid Cache: Version 3.2.1
configure options:  '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs'
'--enable-removal-policies=lru,heap' '--enable-cache-digests'
'--enable-underscores' '--enable-follow-x-forwarded-for'
'--disable-translation' '--with-filedescriptors=65536'
'--with-default-user=proxy' '--enable-ssl' '--enable-ltdl-convenience'

How can I get squid 3.2.1 to use more than 1024 ?

I've verified that system  is fine, there's no per user limit either.

# cat /proc/sys/fs/file-max
199839


[squid-users] Windows build machine

2012-07-04 Thread
Hi,

some time ago, I donated a windows vm for squid dev team to use as a
windows build machine, just want to confirm it is still in use,
otherwise I plan to decommission it. please let me know!

Cheers.


[squid-users] Squid ASN acl

2012-04-12 Thread
Hi,
I notice that Squid has the ability to specify src/dst ASN in access
lists. How does it work? How does squid look up the AS number for a
given IP?

Thanks.


[squid-users] 3.1.15 squid report ERR_SECURE_CONNECT_FAIL on peer with self-signed cert

2012-03-02 Thread
Hi,

I've been trying to use an SSL connection to a parent squid proxy, and
the child squid always fails, even though I specifically asked it to stop
verifying things.

here's the relevant config on child

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN
cache_peer x.x.x.x parent 8443 0 no-digest no-query default ssl
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN,NO_DEFAULT_CA
sslcert=ssl.pem sslkey=ssl.key

and this appears in the cache.log

2012/03/03 02:50:51| fwdNegotiateSSL: Error negotiating SSL connection
on FD 11: error::lib(0):func(0):reason(0) (5/-1/104)

I've verified the parent side works fine; in fact, the server side is
implemented using stunnel, and it works fine if I set up stunnel locally
and tunnel squid through it.

Cheers.


Re: [squid-users] Implement Tproxy on Debian squeeze

2012-03-02 Thread
I think the documentation is simply out of date: the kernel already has
TPROXY compiled in. Look in /boot/config-`uname -r` and search for
TPROXY; it should say m.

For the iptables rules you will need to use the mangle table; there is no
tproxy table anymore.

as such

iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port
proxyport  \
  --tproxy-mark 0x1/0x1
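For completeness, that single TPROXY rule usually needs a socket-match rule and policy routing alongside it. The sketch below assumes Squid listens on port 3129 with a tproxy-flagged http_port; mark 0x1 and routing table 100 are arbitrary choices:

```shell
# Divert packets belonging to already-established proxied flows.
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 0x1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# Redirect new port-80 flows to the local squid without NAT.
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
  --tproxy-mark 0x1/0x1 --on-port 3129
# Route marked packets to the local machine.
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```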


On my machine (Ubuntu 10.04 LTS, Linux fullcenter 2.6.32-37-server
#81-Ubuntu SMP Fri Dec 2 20:49:12 UTC 2011 x86_64 GNU/Linux) I have
TPROXY 4.1.0 included; not sure about Debian.

[5282830.948528] NF_TPROXY: Transparent proxy support initialized, version 4.1.0
[5282830.948533] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.


However, I do want to add an additional question: suppose my proxy
machine will be acting as the network gateway for my LAN; can I simply
achieve the same effect with
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j DNAT
127.0.0.1:  ??? Why was tproxy needed in the first place?

Thanks.

On Fri, Mar 2, 2012 at 9:33 AM, David Touzeau da...@touzeau.eu wrote:

 There is bad news: backports did not change anything regarding Tproxy.
 Only kernel 3.2.x is available in the backports repository.

 apt-get install -t squeeze-backports linux-image-3.2.0-0.bpo.1-686-pae
 apt-get install -t squeeze-backports upgrade
 reboot
 my kernel is now
 Linux squid32.localhost.localdomain 3.2.0-0.bpo.1-686-pae #1 SMP Sat Feb 11
 14:57:20 UTC 2012 i686 GNU/Linux


  iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j TPROXY
 --on-port 80
 WARNING: All config files need .conf: /etc/modprobe.d/fuse, it will be
 ignored in a future release.
 iptables v1.4.8: can't initialize iptables table `tproxy': Table does not
 exist (do you need to insmod?)
 Perhaps iptables or your kernel needs to be upgraded

 grep -i iptables /boot/config-`uname -r`
 CONFIG_IP_NF_IPTABLES=m
 CONFIG_IP6_NF_IPTABLES=m
 # iptables trigger is under Netfilter config (LED target)

 SNIF, SNIF


 Le 02/03/2012 17:03, David Touzeau a écrit :

 iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j
 TPROXY --on-port 80


Re: [squid-users] 3.1.15 squid report ERR_SECURE_CONNECT_FAIL on peer with self-signed cert

2012-03-02 Thread
On Fri, Mar 2, 2012 at 5:03 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 3/03/2012 7:57 a.m., Yucong Sun (叶雨飞) wrote:

 Hi,

 I've been trying to use a SSL connection to an parent squid proxy, and
 the child squid always fails even I specifically asked it to stop
 verifying stuff


 The child verifying the parent? or the parent verifying the child?
 SSL is designed not to allow problems to go unseen, so validation happens at
 both ends. You can only control what Squid (child) verifies from squid.conf.

It looks like the child is verifying the parent, because the server side
is a stunnel and we have other clients talking to it without issue.




 here's the relevant config on child

 sslproxy_cert_error allow all

 This makes Squid completely ignore all server errors when negotiating TLS.
 You should not need it unless the server certificate is malformed.

 sslproxy_flags DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN


 These are for controlling the DIRECT access TLS connections.

Yeah, these should not be needed, but I was desperate, so I included them here.



 cache_peer x.x.x.x parent 8443 0 no-digest no-query default ssl
 sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN,NO_DEFAULT_CA
 sslcert=ssl.pem sslkey=ssl.key


 This is what is affecting the peer.

 If we assume your ssl.pem and ssl.key are valid, it could still be the peer
 rejecting them.

It doesn't work without the cert/key either.




 and this appears in the cache.log

 2012/03/03 02:50:51| fwdNegotiateSSL: Error negotiating SSL connection
 on FD 11: error::lib(0):func(0):reason(0) (5/-1/104)

 I've verified the parent side works fine, in fact, the server side has
 been implemented using stunnel and it works fine if I setup stunnel in
 local and tunnel squid through it.


 Same ssl.pem/ssl.key certificates used by that test stunnel and this Squid?

Yes, the server is not verifying the client cert/key either.


 Second question is whether you need ssl.pem/ssl.key at all?
  SSL auto-generates random client certificates as needed if you only specify
 ssl option to cache_peer.
  It is common to only specify cache_peer options ssl
 sslflags=DONT_VERIFY_PEER   to have an auto-generated client certificate,
 and ignore self-signed certificates from the peer.

That's what I originally thought, but the config actually doesn't parse
if I don't have those two options there.

So it looks like something is missing in the SSL part that causes it to
still verify the server cert; I switched the parent to a valid cert and
it all started working. How can I trace this?
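One way to trace it, hinted at elsewhere in this archive: Squid's SSL/TLS code logs under debug section 83, so a hypothetical squid.conf line raising just that section should show the negotiation and verification steps in cache.log:

```conf
# Keep everything else at level 1, but log the SSL/TLS code (section 83)
# verbosely; expect noisy output, including SSL SESSION PARAMETERS dumps.
debug_options ALL,1 83,4
```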

Cheers.


 Amos


Re: [squid-users] Ordinal block keeps growing?

2012-02-29 Thread
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 1783 proxy 20   0  569m 529m 4132 S1 26.4 274:53.22 squid

but the system still shows it using 529 MB of memory.

I'm still on 3.1.15; I will see how it goes with 3.1.19.
Squid Cache: Version 3.1.15
configure options:  '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-cache-digests' '--enable-underscores'
'--enable-follow-x-forwarded-for' '--disable-translation'
'--with-filedescriptors=65536' '--with-default-user=proxy'
'--enable-ssl' '--enable-ltdl-convenience'
--with-squid=/tmp/squid-3.1.15/squid-3.1.15

On Wed, Feb 29, 2012 at 12:06 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 29/02/2012 6:50 p.m., Yucong Sun (叶雨飞) wrote:

 Memory usage for squid via mallinfo():
         Total space in arena:  536788 KB
         Ordinary blocks:       173203 KB   4895 blks
         Small blocks:               0 KB      0 blks
         Holding blocks:          1420 KB      3 blks
         Free Small blocks:          0 KB
         Free Ordinary blocks:  363584 KB
         Total in use:          174623 KB 32%
         Total free:            363585 KB 68%
         Total size:            538208 KB


 These are numbers provided by the operating system. Squid is using ~174 MB
 now and under peak traffic load it used ~538 MB. The difference has already
 been free'd.


 Memory accounted for:
         Total accounted:        40844 KB 8%
         memPool accounted:      40843 KB 8%
         memPool unaccounted:   497364 KB 92%
         memPoolAlloc calls:         0
         memPoolFree calls:  841528260


 That indicates the extra 500 MB as being temporary objects for processing
 client requests as they pass through Squid.

 92% unaccounted is strange though. If you have a Squid older than 3.1.19
 please try upgrading, it could be one of several memory problems which we
 have fixed already.

 I'm also aware of a patch which can be tried on top of 3.1.19 as a last
 resort. It is untested and a bit risky in 3.1 series.

 Amos


[squid-users] Ordinal block keeps growing?

2012-02-28 Thread
Hi,

I have squid 3.1 with 32 MB of cache memory, but there's something
called ordinal block that keeps growing out of control, taking about
500 MB of memory. Is there any way to restrict that growth?

Cheers.


Re: [squid-users] Ordinal block keeps growing?

2012-02-28 Thread
here's the output from mgr:info

Memory usage for squid via mallinfo():
Total space in arena:  536788 KB
Ordinary blocks:   173203 KB   4895 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1420 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  363584 KB
Total in use:  174623 KB 32%
Total free:363585 KB 68%
Total size:538208 KB

How can I get squid to release that memory?

Cheers.

On Tue, Feb 28, 2012 at 6:51 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 29.02.2012 07:09, Yucong Sun wrote:

 Hi,

 I have squid 3.1 with 32M cache memory, but there's something called
 ordinal block keeps growing out of control taking about 500M memory,
 any way we could restrict that growth?


 Where are you getting that information? in particular where does the term
 ordinal show up?

 Amos


Re: [squid-users] Ordinal block keeps growing?

2012-02-28 Thread
Memory usage for squid via mallinfo():
Total space in arena:  536788 KB
Ordinary blocks:   173203 KB   4895 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1420 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  363584 KB
Total in use:  174623 KB 32%
Total free:363585 KB 68%
Total size:538208 KB
Memory accounted for:
Total accounted:40844 KB 8%
memPool accounted:  40843 KB 8%
memPool unaccounted:   497364 KB 92%
memPoolAlloc calls: 0
memPoolFree calls:  841528260


On Tue, Feb 28, 2012 at 9:49 PM, Yucong Sun (叶雨飞) sunyuc...@gmail.com wrote:
 here's the request from mgr:info

 Memory usage for squid via mallinfo():
        Total space in arena:  536788 KB
        Ordinary blocks:       173203 KB   4895 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:          1420 KB      3 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:  363584 KB
        Total in use:          174623 KB 32%
        Total free:            363585 KB 68%
        Total size:            538208 KB

 How can I get squid to release those memory?

 Cheers.

 On Tue, Feb 28, 2012 at 6:51 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 29.02.2012 07:09, Yucong Sun wrote:

 Hi,

 I have squid 3.1 with 32M cache memory, but there's something called
 ordinal block keeps growing out of control taking about 500M memory,
 any way we could restrict that growth?


 Where are you getting that information? in particular where does the term
 ordinal show up?

 Amos


Re: [squid-users] Websites with # hash in URL

2012-02-23 Thread
The # part of the URL is local state: it is stored in the browser and
only used by the JavaScript once the page is loaded. It is never sent as
part of the request, so you can't block it in squid.

On Wed, Feb 22, 2012 at 11:34 PM, Dasd Rads dasdr...@yahoo.de wrote:
 Hello,

 how can i define a website with a hash (#) in the URL in squid.conf ?
 It's necessary for twitter.com/#!/myCompany for example.

 Therefore i must configure in the whitelist the URL with a hash # (ordinarily 
 is hash a comment).

 Example:

 acl ExampleURL url_regex .twitter.com/#!/myCompany
 http_access allow ExampleURL

 My Squid Version under Windows: 2.7/STABLE6


 Thank you for the support.


Re: [squid-users] Use squid to switch to Tor network

2012-02-14 Thread
Tor (without the browser part) basically provides a SOCKS proxy; Vidalia
translates the SOCKS proxy into an HTTP proxy, and the browser uses that
HTTP proxy.

So, to have your squid use it too, just launch Tor and Vidalia as usual,
configure squid with a cache_peer parent pointing at that proxy
(localhost:someport) plus never_direct allow all, and you will be going
through Tor in no time.
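Concretely, the squid.conf fragment could look like the sketch below; 127.0.0.1:8118 is an assumption (a commonly used HTTP-proxy front for Tor), so substitute whatever port your setup actually listens on. The shell here just writes the fragment to a local file for inspection:

```shell
# Write a hypothetical squid.conf fragment for routing through Tor.
# 127.0.0.1:8118 is an assumed HTTP-proxy port, not taken from the thread.
cat > tor-peer.conf <<'EOF'
cache_peer 127.0.0.1 parent 8118 0 no-query no-digest default
never_direct allow all
EOF
cat tor-peer.conf
```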

On Tue, Feb 14, 2012 at 10:33 PM, Nguyen Hai Nam nam...@nd24.net wrote:
 Hi Squid guys,

 We're using Squid 3.2 on Solaris 11 system smoothly, but few days ago our
 ISP has had troubles with external Internet routing so we can't reach many
 websites. I discovered that if I use Tor's browser I can open that websites
 normally (yes, it's slow btw), at least we can open the website. So I think
 we should cooperate between Squid and Tor to bring the Internet back for
 users.

 I'm not familiar with Tor network switching except via Tor's browser, so
 it's great to hear your opinion, and if you guys already know a working
 configuration, I'd highly appreciate it.

 Thank you,
 ~Neddy.


Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-10-24 Thread
As I said, there's no such setting in my config; I don't even have a
debug_options line in the config.

2011/10/23 Henrik Nordström hen...@henriknordstrom.net:
 As said earlier this is printed only if you have set debug section 83 to
 level 4 or higher.

 grep debug_options /path/to/squid.conf


 sön 2011-10-23 klockan 21:25 -0700 skrev Yucong Sun (叶雨飞):
 Hi, After a few version this still hasn't gone, my debug_options are
 default, which should be all,1 per manual. I'm compiling from the
 source on a ubuntu 10.04LTS


 Anyone else seeing this problem?

 2011/8/29 Henrik Nordström hen...@henriknordstrom.net
         sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
          Hi,  after turning on https_port , I start to have these
         logs in
          cache.log , which is meaningless to have on a production
         server,
          anyway to turn it off?
         
          -BEGIN SSL SESSION PARAMETERS-


         What are your debug_options set to? This is only printed if
         you have
         enabled debug section 83 at level 4 or above.

         Regards
         Henrik








Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-10-24 Thread
# ./squid -v
Squid Cache: Version 3.1.15
configure options:  '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-cache-digests' '--enable-underscores'
'--enable-follow-x-forwarded-for' '--disable-translation'
'--with-filedescriptors=65536' '--with-default-user=proxy'
'--enable-ssl' '--enable-ltdl-convenience'
--with-squid=/tmp/squid-3.1.15/squid-3.1.15


2011/10/23 Henrik Nordström hen...@henriknordstrom.net:
 Which Squid versions have you tried, and is these standard Squid
 versions or with any kind of patches applied?

 sön 2011-10-23 klockan 23:28 -0700 skrev Yucong Sun (叶雨飞):
 As I said, there's no such setting in my config, I don't even have a
 debug_options in the config.

 2011/10/23 Henrik Nordström hen...@henriknordstrom.net:
  As said earlier this is printed only if you have set debug section 83 to
  level 4 or higher.
 
  grep debug_options /path/to/squid.conf
 
 
  sön 2011-10-23 klockan 21:25 -0700 skrev Yucong Sun (叶雨飞):
  Hi, After a few version this still hasn't gone, my debug_options are
  default, which should be all,1 per manual. I'm compiling from the
  source on a ubuntu 10.04LTS
 
 
  Anyone else seeing this problem?
 
  2011/8/29 Henrik Nordström hen...@henriknordstrom.net
          sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
           Hi,  after turning on https_port , I start to have these
          logs in
           cache.log , which is meaningless to have on a production
          server,
           anyway to turn it off?
          
           -BEGIN SSL SESSION PARAMETERS-
 
 
          What are your debug_options set to? This is only printed if
          you have
          enabled debug section 83 at level 4 or above.
 
          Regards
          Henrik
 
 
 
 
 
 





Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-10-23 Thread
Hi, after a few versions this still hasn't gone away; my debug_options
are at the default, which should be all,1 per the manual. I'm compiling
from source on Ubuntu 10.04 LTS.

Anyone else seeing this problem?


 2011/8/29 Henrik Nordström hen...@henriknordstrom.net

 sön 2011-08-28 klockan 04:07 -0700 skrev Yucong Sun (叶雨飞):
  Hi,  after turning on https_port , I start to have these logs in
  cache.log , which is meaningless to have on a production server,
  anyway to turn it off?
 
  -BEGIN SSL SESSION PARAMETERS-

 What are your debug_options set to? This is only printed if you have
 enabled debug section 83 at level 4 or above.

 Regards
 Henrik




Re: [squid-users] Can squid cache ajax requests?

2011-10-02 Thread
Your use case is wrong: you never want to cache POST requests, nor even
some GET requests. It will break websites and leave your users (and you)
yelling.

On Sun, Oct 2, 2011 at 12:24 AM, serge spierr...@yahoo.com wrote:
 Hi,

 I'm starting from the default squid.conf. I'm trying a simplistic
 configuration where everything is cached:
 refresh_pattern  .              1440    20%     100    override-expire
 override-lastmod ignore-no-cache ignore-private ignore-auth

 It works well, except for my ajax requests (which are POST requests) are
 never cached.

 Any clue on what I'm missing?

 I'm using squid 2.7.STABLE9.

 Thanks.



Re: [squid-users] proxy over SSL

2011-09-15 Thread
You can't, because most browsers don't support HTTPS proxies.

However, you can have users set up a local squid and use your proxy as a
parent, over HTTPS.

On Thu, Sep 15, 2011 at 8:47 AM, Damien Martins doc...@makelofine.org wrote:
 Hi,

 I'd like to provide a proxy (using Squid) through SSL.
 I know how to let people access URLs in https://
 But I'd like them to connect to the proxy through an SSL connection. I didn't
 find that information, any research being polluted by how to provide proxying
 for URLs in https://)

 Thank for any tip, link, information regarding my case



[squid-users] HTTPS certificate chain

2011-09-05 Thread
Hi,

How can I tell squid to send along the certificate chain together with
the server cert? Should it be embedded in the certificate PEM?
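If embedding in one PEM is the route, the usual convention (OpenSSL's certificate-chain file format, which I assume Squid's cert= follows) is server certificate first, then intermediates. A sketch with placeholder files standing in for real certificates:

```shell
# Placeholder files stand in for the real certificates.
echo 'BEGIN SERVER CERT' > server.pem
echo 'BEGIN INTERMEDIATE CERT' > intermediate.pem
# Order matters: server certificate first, then the chain, into the file
# that https_port ... cert= points at.
cat server.pem intermediate.pem > ssl.pem
cat ssl.pem
```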

Cheers.


Re: [squid-users] Squid still visible in tproxy (kinda of)

2011-09-03 Thread
I think you need to strip the Via and X-Forwarded-For headers.
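In squid.conf terms, a sketch for Squid 2.7 (where header_access exists; Squid 3.x splits it into request_header_access/reply_header_access) might be:

```conf
# Strip the headers that reveal the proxy to origin servers.
header_access Via deny all
header_access X-Forwarded-For deny all
```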

On Sat, Sep 3, 2011 at 12:51 PM, Hasanen AL-Bana hasa...@gmail.com wrote:
 Hi,

 I configured squid 2.7 with tproxy patch and it is working as expected
 in bridge mode on linux , client IPs are spoofed just fine but
 whatismyipaddress.com still reporting my visible_hostname , is there
 anyway to make squid stop adding such headers ?

 thank you.



Re: [squid-users] realtime stats

2011-08-29 Thread
squidclient mgr:

On Tue, Aug 30, 2011 at 12:29 AM, alexus ale...@gmail.com wrote:
 Is there a way to view realtime stats from squid?

 --
 http://alexus.org/



[squid-users] SSL SESSION PARAMS poluting the cache log

2011-08-28 Thread
Hi, after turning on https_port I started getting these logs in
cache.log, which are meaningless on a production server. Is there any way
to turn them off?

-BEGIN SSL SESSION PARAMETERS-
MHECAQECAgMABAIANQQg6X7UjP5JRBfqj4Q9tBdJJ8P1q395I9+2pCEVpgKADCQE
MOH2jo0YlnTp7LV3CrzePF67/nRrUxLfhkKOC4L223Nb1PuNgHCYm/QB38RtzyVY
N6EGAgROWhpZogQCAhwgpAIEAA==
-END SSL SESSION PARAMETERS-


Re: [squid-users] SSL SESSION PARAMS poluting the cache log

2011-08-28 Thread
squid 3.1.14

https_port XXX:443 key=ssl.key cert=ssl.pem version=3

On Sun, Aug 28, 2011 at 4:56 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 28/08/11 23:07, Yucong Sun (叶雨飞) wrote:

 Hi,  after turning on https_port , I start to have these logs in
 cache.log , which is meaningless to have on a production server,
 anyway to turn it off?

 -BEGIN SSL SESSION PARAMETERS-
 MHECAQECAgMABAIANQQg6X7UjP5JRBfqj4Q9tBdJJ8P1q395I9+2pCEVpgKADCQE
 MOH2jo0YlnTp7LV3CrzePF67/nRrUxLfhkKOC4L223Nb1PuNgHCYm/QB38RtzyVY
 N6EGAgROWhpZogQCAhwgpAIEAA==
 -END SSL SESSION PARAMETERS-

 What version of Squid and how exactly did you enable https_port?

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.15
  Beta testers wanted for 3.2.0.10



[squid-users] adding Trailing newline

2011-08-24 Thread
Hi,

Some websites don't send a trailing newline in their HTTP response
(check http://ipcheckit.com/ ); that breaks my client software, which
expects it as the end of the message.

Is there something I can do in squid to have it add that for me?

Cheers.


[squid-users] Effort for port 3.1 to windows?

2011-04-25 Thread
Hi there,

Is there any effort now to port 3.1 to windows?

I know there's one for 2.7, and I've been struggling to get it to compile
with VS2010 and the Win7 SDK.

But it is so complicated, and horribly broken by the new CRT security
features (which can be fixed by adding some code) and Winsock changes. I
managed to get one build, but all internal calls got stuck with
WSAEWOULDBLOCK somehow.

I know Windows is not popular these days, but I would really hope to see
an effort to get the latest version running on Windows.

Cheers.


Re: [squid-users] Re: Effort for port 3.1 to windows?

2011-04-25 Thread
Well, thanks for the pointer, but as far as I can see that's an
installer; how did you generate the binary?

What I'm really hoping for is to compile and run 3.1 normally on Windows.
2.7 may cut it as well, since I mostly need existing features, but I
can't get it to compile correctly either, and the current 2.7 Windows
version requires compiling under VC6, which is just impossible these days.

I'm surprised no one has taken on squid on Windows seriously (or I just
didn't find it), but by far, most proxy software I've tried on Windows
has one problem or another, while squid is the best at the moment, I
think.

Cheers.

On Mon, Apr 25, 2011 at 1:38 PM, sichent sich...@mail.ru wrote:
 On 4/25/2011 9:26 PM, Yucong Sun (叶雨飞) wrote:

 Hi there,

 Is there any effort now to port 3.1 to windows?

 I know there's one for 2.7, and be struggling to get it compile on
 vs2010 and win7 sdk.

 But it is so complicated and horribly broken by new CRT security
 features (which can be fixed by adding some code) and Winsocks
 changes. I managed to get one build, but all internal calls stuck with
 WSAEWOULDBLOCK somehow.

 I know windows is not popular these days, but I would really hope to
 see a effort to get latest version run on windows.

 Cheers.


 We have an MSI project for Squid 2.7... if you need help for 3.1 with MSI
 and Wix - we can do it :)

 http://squidwindowsmsi.sourceforge.net/

 best regards,
 sich




Re: [squid-users] Re: Effort for port 3.1 to windows?

2011-04-25 Thread
Hi Amos,

You can get VS2010 Express for free at here
http://www.microsoft.com/express/Downloads/#2010-Visual-CPP

Win SDK are free as well
http://msdn.microsoft.com/en-us/windows/bb980924 but it's not required
to build.

You can get the other VC Express versions for free as well.

There are some changes in the C runtime since VS2005 which mess things
up, including some redefinitions of EAGAIN, EWOULDBLOCK, etc.:
http://msdn.microsoft.com/en-us/library/8ef0s5kh(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/ms737828(v=vs.85).aspx

I'm just generally frustrated about not having something that at least
builds; I can probably risk a couple of nights trying to get things
started.

Cheers.

On Mon, Apr 25, 2011 at 5:29 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 26/04/11 11:02, Yucong Sun (叶雨飞) wrote:

 Well, thanks for the pointer, But as far as I can see there, it's a
 installer, how did you generate the binary?

 what I really hoping for, is to compile and run 3.1 normally on windows.
 Well, 2.7 may cut it as well since I need mostly existing features,
 but I can't get it compile correctly either. And current 2.7 windows
 version requires to compile under vc6,  is just impossible these days.

 I'm surprised no one has been taking on squid on windows seriously (or
 I didn't find it), but by far, most proxy software I tried on windows
 have different problem, while squid is the best atm I think.


 There is work underway (slowly) on getting Squid up to at least build again
 on Windows.

  The more active upstream dev (myself and Francesco Chemolli) only have
 access to MinGW test machine and are blocked by a few bugs which we need to
 be diagnosed and patch created by someone with direct access to a MinGW
 setup (http://bugs.squid-cache.org/show_bug.cgi?id=3203 and
 http://bugs.squid-cache.org/show_bug.cgi?id=3043).

  Guido Serassio has better access but no time to work on it (sponsorship to
 recompense for his taking time off work will help a lot there, contact him
 about it).

  AFAIK none of us have access to recent VC versions (Guido mentioned
 something about the new IDE versions causing major pains in the build
 process).

 Patches on 3.HEAD code to get Windows going are *very* welcome.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1



Re: [squid-users] Squid 2.7STABLE8 for windows hang.

2011-04-15 Thread
Well, I have another application that spawns squid and reads status
from it; stderr is easier than constantly watching a file.

On Thu, Apr 14, 2011 at 11:25 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 15/04/11 18:12, Outofwall.com wrote:

 Ah, okay.

 I have figured it out, when I don't set cache_log to /dev/null, it
 works perfectly, but my intention is to get output to STDERR instead.

 I would never thought this would be the problem!


 To get stderr output use the -d command line option. The parameter for it is
 how deep the debug level you want displayed (usually 0 or 1).

 This begs the question: why?

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.6



Re: [squid-users] Block Facebook message page

2011-04-14 Thread
You can't do it as-is, since HTTPS traffic is tunneled through squid and
can't be filtered or cached.

In order to filter HTTPS you would need to proxy the HTTPS traffic itself
(SSL bumping) instead. I'm not sure you want to do that.

On Thu, Apr 14, 2011 at 10:07 AM, Mohammad Fattahian
mfattah...@monexgroup.com wrote:
 I found the message composer address.
 How can I block :

 https://www.facebook.com/ajax/gigaboxx/endpoint/MessageComposerEndpoint.php

 I just put bellow configuration to block messaging page (http)

 acl fb1 url_regex -i
 ^http://www.facebook.com/ajax/gigaboxx/endpoint/MessageComposerEndpoint.php
 http_access deny fb1

 but it does not work for HTTPS

 Mohammad

 -Original Message-
 From: Helmut Hullen [mailto:hul...@t-online.de]
 Sent: April-14-11 11:15 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Block Facebook message page

 Hallo, Mohammad,

 Du meintest am 14.04.11:


 I want to block message page within facebook. Any body can help me?

 Is there any way to block some pages inside a certaine sites?

 I presume that's an end user problem, no squid problem (for all users).
 With firefox I use adblock+ for such problems.

 Viele Gruesse!
 Helmut




Re: [squid-users] Block Facebook message page

2011-04-14 Thread
Joseph, there's no point in matching the https:// URL, because when the
browser uses squid as a proxy it sends a CONNECT request and then
exchanges SSL traffic, which squid can't/won't touch at all, so those
ACLs can't be applied.
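What CAN be matched for HTTPS is the CONNECT request itself, which carries only the destination host and port, not the URL path; a hedged squid.conf sketch along the lines Joseph suggests:

```conf
# The CONNECT request exposes only host:port, so match on the domain
# rather than the URL path.
acl CONNECT method CONNECT
acl fbdomain dstdomain www.facebook.com
http_access deny CONNECT fbdomain
```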

On Thu, Apr 14, 2011 at 2:23 PM, Joseph L. Casale
jcas...@activenetwerx.com wrote:
You can't do it, since HTTPS traffic is tunneled through squid, can't
be filtered or cached.

 If you followed what he was doing, you would have seen his error
 and known you can very much do what he was trying to do but
 he failed as a result of the regex.

 Your match might change to just www.facebook.com for example
 or make a case for 1 or none s...




[squid-users] Fastest peer without ICP ?

2011-03-29 Thread
Hi there,

So I have squid A configured with several remote parents, with no ICP
port available (as the HTTP port is tunneled in between).

I think ICP serves both a QUERY and a HEALTH CHECK function in squid, but
I can't tunnel UDP packets:

SQUID A -> SQUID B
        -> SQUID C
        -> SQUID D

Because the links in between are all different and possibly unstable
from time to time, I would want A to try to fetch from multiple peers at
once and use whichever returns first.

Could this be done?

An alternative is to have squid detect dead/slow peers; I know this is
probably done via ICP, but I can't tunnel UDP packets.
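Without ICP, dead-peer detection falls back on TCP connect failures, so one hedged option is shortening the per-peer connect timeout and letting Squid fail over down the list; the peer names and the 2-second value below are assumptions:

```conf
# Fail over quickly to the next parent when a tunnel link is down.
cache_peer peer-b.example parent 3128 0 no-query connect-timeout=2
cache_peer peer-c.example parent 3128 0 no-query connect-timeout=2
cache_peer peer-d.example parent 3128 0 no-query connect-timeout=2
```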


Re: [squid-users] Fastest peer without ICP ?

2011-03-29 Thread
It's a customized tunnel that can only transport TCP data.

In that diagram, squids B/C/D actually act as a service cluster for many
clients like A. The data between A and B/C/D must be encrypted, and the
latency on the link is expected to be pretty high. Flooding the
limited-bandwidth tunnel with HTCP probes is probably not a good idea
either.

However, per your reply, squid will track HTTP connect times and prefer
the faster peer by default? Can it track response time instead, meaning
the time until the header exchange is done (which pretty much equals the
link speed)?

But I don't understand how squid resets the peer preference: in this
case, if one of the peers is slow, squid will still periodically query it
just to make sure it is still slow, and that will be a very bad
experience for the end user.

Cheers.

On Tue, Mar 29, 2011 at 2:56 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 29/03/11 21:39, Yucong Sun (叶雨飞) wrote:

 Hi there,

 So I have a squid a configured with several remote parent, with no ICP
 port available (as the http port is tunneled in between)

 I think ICP serves both QUERY and HEALTH CHECK function in squid,  but
 I can't tunnel udp packets

 SQUID A -  SQUID B
                -  SQUID C
                -  SQUID D

 Due to the fact that the link in between is all different and possibly
 in-stable from time to time, I would want A to try to fetch from
 multiple peers at once and use which ever returns first.

 Could this be done?

 A alternative is to have squid detect dead/slow peer, I know this is
 porbably done in ICP, but I can't tunnel udp packets.

 What type of tunnel is this that cannot transfer a particular protocol?

 Squid takes measures of HTTP connect times, ICP response times, HTCP
 response times and if the pinger is installed ICMP active destination
 probing. All of these affect the RTT measure on a cache_peer.

 Squid *will not* fetch from multiple peers at once. HTTP does not permit it
 by design. ICP and HTCP probes are used in parallel to determine if a peer
 is worth contacting. In their absence peers are tried in a sequential order
 assigned by the selection algorithm.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5



[squid-users] Randomly selected peer ?

2011-03-26 Thread
Hi,

Is there a way to have Squid select from a list of peers randomly?
The scenario is like this:


client
 |
 v
SQUID A (client side, forward all request to B)
 |
 v
SQUID B (server side)


and B has multiple ports open, like 8080, 8081, 8082, 8083, ...

I would like A to randomly choose one of Squid B's ports when it
forwards a request, rather than using round-robin. Could this be
done?

Cheers.
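For reference, Squid of this era has no "random" peer selection; round-robin (or weighted-round-robin) is the closest built-in option. A sketch, assuming B listens on the ports named in the question (note each entry needs a unique name= when the host repeats):

```
# One cache_peer entry per Squid B port. Squid requires a unique
# name= for repeated hosts. round-robin rotates among the entries
# rather than picking one at random.
cache_peer squid-b.example.com parent 8080 0 no-query round-robin name=b8080
cache_peer squid-b.example.com parent 8081 0 no-query round-robin name=b8081
cache_peer squid-b.example.com parent 8082 0 no-query round-robin name=b8082
never_direct allow all
```

Over many requests, round-robin spreads load across the ports much as random selection would, just deterministically.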


[squid-users] Retry invalid response?

2011-03-17 Thread
Hi there,

Is there a way that Squid can be configured to retry a fetch a
configurable number of times, if the last response is not valid?

By an invalid response I mean:

1) the response is totally empty,
or 2) it is not an HTTP response,
or 3) the HTTP status code is 5xx.

Cheers.
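The closest built-in mechanism I know of is Squid's retry_on_error directive: when enabled, requests that receive certain error statuses (roughly 403/500/501/503) are retried through the remaining forwarding paths (other peers or direct). There is, as far as I can tell, no knob for a configurable retry count or for empty/non-HTTP replies. A sketch:

```
# Retry failed requests via the next available forwarding path
# when the reply carries certain error statuses. This covers
# case 3 (5xx) partially; cases 1 and 2 are not retried.
retry_on_error on
```

Whether a retry actually happens also depends on there being another peer (or direct access) left to try.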


Re: [squid-users] Connect directly if parent cache fails

2011-02-20 Thread
Hi there,

I've been trying to do the same thing, but Squid always tries too hard
on the parents.

You can do what I do:

Run a local Squid with always_direct allow all listening on another
port, and list it as the last cache_peer entry with the default option.

When Squid has tried the peers above and they have failed, it will fall
back to the last one, which runs on the same machine, so it should
(supposedly) always work as long as your local network works.
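The workaround described above might look roughly like this (ports and hostnames are illustrative, not from the original post):

```
# Main squid.conf: remote parents first, local fallback instance last.
cache_peer remote-parent.example.com parent 3128 0 no-query
cache_peer 127.0.0.1 parent 3129 0 no-query default
never_direct allow all

# Second Squid instance listening on 127.0.0.1:3129, configured with:
#   http_port 3129
#   always_direct allow all
# so the "peer" of last resort always goes direct itself.
```

The trick is that never_direct is satisfied (the request always goes to a peer), while the local peer quietly provides the direct path.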


On Sun, Feb 20, 2011 at 10:18 PM, Tom Tux tomtu...@gmail.com wrote:
 Hi

 Is my scenario in general possible to implement (connect directly, if
 the one and only cache_peer fails)?

 Thanks a lot.
 Tom

 2011/2/17 Tom Tux tomtu...@gmail.com:
 Hi Amos

 This doesn't work as expected. I removed the never_direct entry (I
 was unsure how strong it is in the configuration...) and also dropped
 the hierarchy_stoplist directive.

 But if the cache_peer fails, it either connects directly (if I have
 set nonhierarchical_direct on) or the connect will fail.

 I just want to implement the behavior, that when the cache_peer fails,
 the squid should connect directly.

 Thanks a lot for your help.
 Tom


 2011/2/17 Amos Jeffries squ...@treenet.co.nz:
 On 17/02/11 21:29, Tom Tux wrote:

 Hi

 I'm trying to configure Squid (3.1.9) to connect directly, if the one
 and only cache_peer (parent) fails:

 cache_peer xx.xx.xx.xx parent 8080 0 no-query no-digest default connect-fail-limit=5
 prefer_direct off
 never_direct allow all


 But squid will never connect directly, even if the parent isn't
 available. How can I implement this?


 WTF? (sorry but you will kick yourself in a second)

 You have set a directive called never_direct to always be true/on/working
 and are having trouble because Squid never connects directly.


 So, you need to drop that never_direct entry.
 Also drop any hierarchy_stoplist entries.

 Also take a look at nonhierarchical_direct; it operates on requests like
 CONNECT which are best handled directly. Setting it to OFF will make the peer
 handle those as well when it's available.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5
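Putting Amos' advice together, a minimal sketch of "use the parent, go direct only when it fails" (the parent address is a placeholder, as in the original post):

```
# Parent is tried first: it is the default peer and prefer_direct
# is off. After connect-fail-limit consecutive failures the parent
# is marked DEAD and Squid falls back to going direct, because no
# never_direct rule forbids it.
cache_peer xx.xx.xx.xx parent 8080 0 no-query no-digest default connect-fail-limit=5
prefer_direct off
```

The essential change from the original config is simply the absence of never_direct allow all: direct access must remain permitted for the fallback to exist.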