[squid-users] Windows caching

2016-10-07 Thread Jasper Van Der Westhuizen
Hi all

I am trying to tweak my configuration to better cache Windows 10
updates. I download the file manually: on the first log entry I see
TCP_HIT/206, then after the download has completed I get TCP_HIT/200.
But if I download the file again I get another TCP_HIT/206 and the
file is downloaded in full again.

Any idea why it's doing this?

1475843695.498 241787 10.7.48.42 TCP_HIT/206 1801411025 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream
1475844229.809 238798 10.7.48.42 TCP_HIT/200 1801413860 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream
1475844244.273  0 10.7.48.42 TCP_HIT/206 498 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream
1475844403.706 159429 10.7.48.42 TCP_HIT_ABORTED/206 1165844993 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream
1475844648.832 238010 10.7.48.42 TCP_HIT/200 1801413860 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream
1475844658.045  0 10.7.48.42 TCP_HIT/206 498 GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/09/windows10.0-kb3185614-x64_5_c3b5d298472570c878c124d05a0ef6fb3e088506.psf - HIER_NONE/- application/octet-stream


My config file contains:


--cut--
coredump_dir /var/cache/squid/
cache_dir aufs /var/cache/squid/ 20480 32 256
cache_replacement_policy heap lfuda
memory_replacement_policy lru



# CACHE TWEAKING


acl Windows_Updates dstdomain .windowsupdate.com
range_offset_limit 5 GB Windows_Updates
range_offset_limit 0
cache_mem 4096 MB
minimum_object_size 0 bytes
maximum_object_size 5 GB
quick_abort_min -1

dns_v4_first on
strip_query_terms off
ipcache_size 8192
fqdncache_size 8192

###
### DEBUG Options #
###

debug_options ALL,1 2,4

##
#   Custom Refresh patterns
##

#--
# Added to better cache windows updates
#--

refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(mp2|mp3|mid|midi|mp[234]|wav|ram|ra|rm|au|3gp|m4r|m4a)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(mpg|mpeg|mp4|m4v|mov|avi|asf|wmv|wma|dat|flv|swf)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(jpeg|jpg|jpe|jp2|gif|tiff?|pcx|png|bmp|pic|ico)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(chm|dll|doc|docx|xls|xlsx|ppt|pptx|pps|ppsx|mdb|mdbx)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(txt|conf|cfm|psd|wmf|emf|vsd|pdf|rtf|odt)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i \.(class|jar|exe|gz|bz|bz2|tar|tgz|zip|gzip|arj|ace|bin|cab|msi|rar|esd|psf)(\?.*|$) 10080 80% 43200 override-expire ignore-reload reload-into-ims ignore-private

refresh_pattern -i .windowsupdate.com/.*\.(class|exe|bin|cab|msi|esd|psf)(\?.*|$) 10080 80% 43200 reload-into-ims

refresh_pattern -i \.(htm|html|mhtml|css|js)(\?.*|$) 1440 90% 86400 override-expire ignore-reload reload-into-ims

--cut--
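
For what it is worth, this is how I understand the update-related directives
above are meant to interact (the comments are my own notes, not authoritative):

--cut--
# fetch the whole object even when the client only asks for a byte range;
# as far as I know the first matching range_offset_limit line wins, so the
# default of 0 has to stay below the Windows_Updates line
acl Windows_Updates dstdomain .windowsupdate.com
range_offset_limit 5 GB Windows_Updates
range_offset_limit 0
# keep fetching even if the client aborts mid-download
quick_abort_min -1
# must be at least as large as the biggest update file
maximum_object_size 5 GB
--cut--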



--
Kind Regards
Jasper






Re: [squid-users] Squid crash - 3.5.21

2016-10-04 Thread Jasper Van Der Westhuizen


On Mon, 2016-10-03 at 11:33 -0600, Alex Rousskov wrote:

On 10/03/2016 04:50 AM, Jasper Van Der Westhuizen wrote:


This morning I had some problems with some of our proxies. Two proxies in
cluster A crashed with the errors below, and shortly afterwards four in
cluster B did the same. Both clusters are configured to run their cache
in memory with SMP and 4 workers.

FATAL: Received Bus Error...dying.




There are at least two possible reasons:

  1. A bug in Squid and
  2. Memory overallocation by the OS kernel.

To fix the former, the developers will need a stack trace (at least). I
recommend filing a bug report after getting that trace and excluding
reason #2. Squid wiki and various system administration guides explain
how to make Squid dump core files.

To check for memory overallocation, you can temporarily start Squid v4.0
with "shared_memory_locking on". Unfortunately, that squid.conf
directive is not available in Squid v3. You may be able to emulate it
using some OS-specific sysctl or environment variables, but doing so may
be far from trivial, and I do not have instructions.
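
For reference, the squid.conf side of that would be something like the sketch
below (the locking directive exists in Squid v4 only, and core files also need
the OS limits to permit them):

--cut--
# Squid 4.x: refuse to start if the shared memory segments cannot be
# locked into RAM, which rules out kernel memory overallocation
shared_memory_locking on

# directory Squid changes into, so that any core file lands somewhere writable
coredump_dir /var/cache/squid
--cut--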




Thanks Alex. We have patched the servers to the latest version and will monitor. If it 
happens again I will file a bug report and see where it takes us.

Regards
Jasper







Re: [squid-users] Squid 3.5.21 - High CPU (100%)

2016-09-26 Thread Jasper Van Der Westhuizen



> Is there anything that you guys can suggest I do around the cache?
> Should I try a different store type? A different filesystem type
> perhaps?
>
>
>
> If your store has a configuration knob that effectively limits disk
> writing rate, then use it to limit that rate to avoid overflowing the
> queue.
>
> You can also consider using rock store that has an explicit write
> rate limiting option (but comes with other problems that may or may
> not affect your setup).
>
> Adding more physical disk spindles helps, of course.
>

Just to be clear, since you mentioned working with virtual services: by
"spindle" Alex refers to the individual physical HDD devices underneath the
whole VM storage setup. A 'disk' at the VM level, or even at the RAID level,
might refer to multiple or overlapping (shared) 'spindles'.

Unless your AUFS/UFS/diskd cache_dirs are limited to being on one
"spindle" HDD, they can cause a shared HDD controller to fill up /
overload, and adding more cache_dirs just makes that particular problem worse.

Amos


Hi Amos

At the time of my email my cache_dir was a 100GB disk assigned from the same 
datastore as the VM. I have to say that I never had problems with this 
setup. The storage is provided from an IBM XIV storage unit, so it's not the 
slowest of storage devices. After some discussions with a couple of people we 
thought that perhaps the cache was too big, so I have now made it 4 x 15GB aufs 
cache_dirs. Two disks are from the same datastore as the VM, and two are from a 
datastore assigned to the cluster from an SVC (XIV behind it), with the following 
in my squid.conf.

maximum_object_size_in_memory 256 KB
maximum_object_size 4096 MB
store_dir_select_algorithm least-load

cache_dir aufs /var/cache/squid-c1/ 10240 32 256
cache_dir aufs /var/cache/squid-c2/ 10240 32 256
cache_dir aufs /var/cache/squid-c3/ 10240 32 256
cache_dir aufs /var/cache/squid-c4/ 10240 32 256
cache_replacement_policy heap lfuda
memory_replacement_policy lru


Do you think this is optimal?
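
If I do end up trying the rock store suggestion, I assume the write-rate cap
would be set along these lines (the numbers are placeholders I would still
need to tune for our VM storage):

--cut--
# rock store with an explicit cap on disk swap-outs per second, plus a
# timeout after which a queued disk write for an object is abandoned
cache_dir rock /var/cache/squid-rock 10240 max-swap-rate=250 swap-timeout=300
--cut--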
















Re: [squid-users] Squid 3.5.21 - High CPU (100%)

2016-09-26 Thread Jasper Van Der Westhuizen


On Wed, 2016-09-21 at 08:34 -0600, Alex Rousskov wrote:

On 09/21/2016 07:53 AM, Jasper Van Der Westhuizen wrote:


I have been having some problems with Squid using
100% CPU at times which impacts my users browsing experience.



Sustained 100% CPU load at ~100/s rates with regular traffic on
reasonable hardware is a sign (albeit not a proof!) of a Squid bug
(including long searches and similar optimization problems).


I seem to have gotten better results/performance by making some config changes. 
I removed a whole bunch of refresh_pattern lines and replaced them with better 
formulated ones.








During the last time I had a proxy servers CPU reach 100% I ran a
cachemgr export and below is an extract.



Next time this happens, consider getting a stack trace or two from the
process showing sustained 100% CPU utilization. It is possible to do
that without killing the processes (at least on Linux). I do not have
step-by-step instructions, but you can find them. Make sure you run (or
at least use for getting the stack trace?) an unstripped Squid binary,
preferably built with --disable-optimizations.

If the place where Squid gets stuck is known, somebody may volunteer to
fix the corresponding code.



I do not have much experience with strace. What I have found, after making 
changes to my squid.conf file, is that my cache drive gets *hammered* at times of high 
usage, especially in the morning when all the users come online. I have huge 
amounts of writes happening, and this creates CPU wait and the user experience 
goes out of the window. My proxies are VMs with separate drives for the cache, 
running on reiserfs (previously EXT3). My L1 and L2 directory structure is 32 
and 256 respectively with the aufs store type. In my VMware cluster I see high 
write latency to the cache disks.

Is there anything that you guys can suggest I do around the cache? Should I try 
a different store type? A different filesystem type perhaps?

Kind Regards
Jasper






[squid-users] Squid 3.5.21 - High CPU (100%)

2016-09-21 Thread Jasper Van Der Westhuizen
Hi all


In my environment I have two Squid clusters that consist of 4 VMs each, load 
balanced behind an F5 LB. I will refer to them as cluster A (client facing) and 
cluster B (edge proxies). Depending on the destination, the traffic is routed 
from cluster A via ISP 1 or to cluster B via ISP 2. I have been having some 
problems with Squid using 100% CPU at times, which impacts my users' browsing 
experience. This happens on both clusters, even after upgrading to the 
latest 3.5.21 build.


I also recently saw the following errors in the cache logs:

2016/09/20 14:37:02 kid1| DiskThreadsDiskFile::openDone: (2) No such file or 
directory


After some reading up it looks like files were removed without Squid knowing 
about it. My cache directory was on an EXT3 FS with journaling enabled. To try 
and address a possible disk access/speed issue I disabled journaling on the LV.

I have also done some optimizing from an OS point of view.


In order to get the CPU utilization down, a simple squid -k reconfigure is 
enough; a full restart is not needed.


The last time one of the proxy servers' CPU reached 100%, I ran a cachemgr 
export; below is an extract.








sample_start_time = 1474463096.261275 (Wed, 21 Sep 2016 13:04:56 GMT)
sample_end_time = 1474463396.382713 (Wed, 21 Sep 2016 13:09:56 GMT)
client_http.requests = 101.722157/sec
client_http.hits = 6.623985/sec
client_http.errors = 18.505842/sec
client_http.kbytes_in = 4718.936473/sec
client_http.kbytes_out = 4148.813921/sec
client_http.all_median_svc_time = 0.898576 seconds
client_http.miss_median_svc_time = 0.321543 seconds
client_http.nm_median_svc_time = 0.008653 seconds
client_http.nh_median_svc_time = 0.220042 seconds
client_http.hit_median_svc_time = 0.013867 seconds
server.all.requests = 78.671488/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 7963.083264/sec
server.all.kbytes_out = 4704.282405/sec
server.http.requests = 31.533902/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 4451.098225/sec
server.http.kbytes_out = 45.414950/sec
server.ftp.requests = 0.00/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 0.00/sec
server.ftp.kbytes_out = 0.00/sec
server.other.requests = 47.137586/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 3511.981707/sec
server.other.kbytes_out = 4658.870787/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.002783 seconds
unlink.requests = 0.00/sec
page_faults = 0.029988/sec
select_loops = 260.427914/sec
select_fds = 4391.065859/sec
average_select_fd_period = 0.00/fd
median_select_fds = 1.00
swap.outs = 6.204155/sec
swap.ins = 2.312397/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 2.962134/sec
syscalls.disk.opens = 8.253326/sec
syscalls.disk.closes = 16.216769/sec
syscalls.disk.reads = 24.953232/sec
syscalls.disk.writes = 998.445836/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 1.426089/sec
syscalls.sock.accepts = 74.449863/sec
syscalls.sock.sockets = 59.052763/sec
syscalls.sock.connects = 59.052763/sec
syscalls.sock.binds = 0.00/sec
syscalls.sock.closes = 137.934165/sec
syscalls.sock.reads = 2467.637783/sec
syscalls.sock.writes = 1735.070988/sec
syscalls.sock.recvfroms = 24.050265/sec
syscalls.sock.sendtos = 14.464145/sec
cpu_time = 295.754484 seconds
wall_time = 300.121438 seconds
cpu_usage = 98.544938%





Here is another one, some time later, after running a squid -k reconfigure.


Cache Manager 
menu







sample_start_time = 1474465618.601004 (Wed, 21 Sep 2016 13:46:58 GMT)
sample_end_time = 1474465918.604173 (Wed, 21 Sep 2016 13:51:58 GMT)
client_http.requests = 72.862564/sec
client_http.hits = 1.216654/sec
client_http.errors = 3.693294/sec
client_http.kbytes_in = 904.270448/sec
client_http.kbytes_out = 3676.207834/sec
client_http.all_median_svc_time = 0.649683 seconds
client_http.miss_median_svc_time = 0.177113 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.034266 seconds
client_http.hit_median_svc_time = 0.009754 seconds
server.all.requests = 72.545900/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 

[squid-users] Redirect URL's to another cache

2015-09-22 Thread Jasper Van Der Westhuizen
Hi all

I have a cloud-based cache peer that I use to authenticate users, do filtering, 
etc. I have a new requirement to redirect a set list of domains to another 
cache peer. The list is defined in a file, and only these domains should be 
redirected to a peer other than my default one. The rest should still be 
sent to the default parent.
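
Roughly what I have in mind, to make the requirement concrete (the peer host
names, ports and the file path below are placeholders):

--cut--
acl special_domains dstdomain "/etc/squid/special_domains.txt"

cache_peer cloud-peer.example.com parent 8080 0 no-query default
cache_peer other-peer.example.com parent 3128 0 no-query

# send only the listed domains to the second peer, everything else to the default
cache_peer_access other-peer.example.com allow special_domains
cache_peer_access other-peer.example.com deny all
cache_peer_access cloud-peer.example.com deny special_domains
cache_peer_access cloud-peer.example.com allow all

# only if requests must never go direct
never_direct allow all
--cut--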

Any assistance will be greatly appreciated.

--
Kind Regards
Jasper








Re: [squid-users] CACHE partition fills up

2015-09-08 Thread Jasper Van Der Westhuizen

On 2/09/2015 2:35 a.m., Jasper Van Der Westhuizen wrote:
> Good day everyone
>
> I have a problem with my Squid proxy cache. On two occasions over the last 
> week the cache partitions have filled up to 100%. I have 4 load balanced 
> nodes with 100GB cache partitions each. All of them have filled up.
>
> I tried to limit the size by using the following cache_dir directive.
>
> cache_dir ufs /var/cache/squid/ 61440 128 512
>
> I have had a very large increase in traffic over the last couple of months, 
> but surely the configuration above should prevent the cache from filling up?
>

That depends on what the partition is filling up with.

If it's cache objects not being erased, it's probably bug 3553. High
traffic speed is the bug trigger. There is a fix in the latest 3.5
snapshot already if it's urgent - and it will be in the 3.5.8 I'm currently
preparing for release (ETA within 24 hrs).


For completeness: if swap.state or netdb journals are growing huge and
filling up the extra partition space, then it's probably just "squid -k
rotate" not being run often enough for the traffic volume. Regular, but
not too frequent, rotation is good for Squid's overall health and clears
up file-based outputs in all sorts of areas.
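
For example, something along these lines (a sketch only: logfile_rotate just
sets how many old generations are kept, and the rotate itself still has to be
triggered regularly, e.g. "squid -k rotate" from cron):

--cut--
logfile_rotate 10
--cut--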

Amos

Thank you Amos. I will look out for the 3.5.8 release and manage the situation 
in the meantime.

Regards
Jasper


I upgraded to 3.5.8 on Friday 04/09 and so far it has been behaving well with 
the cache directories dropping to 59%. Thanks Amos.







[squid-users] CACHE partition fills up

2015-09-01 Thread Jasper Van Der Westhuizen
Good day everyone

I have a problem with my Squid proxy cache. On two occasions over the last week 
the cache partitions have filled up to 100%. I have 4 load balanced nodes with 
100GB cache partitions each. All of them have filled up.

I tried to limit the size by using the following cache_dir directive.

cache_dir ufs /var/cache/squid/ 61440 128 512

I have had a very large increase in traffic over the last couple of months, but 
surely the configuration above should prevent the cache from filling up?
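
For reference, the direction I was thinking of going (a sketch only; 90/95 may
already be the defaults, and the main point is simply to keep the cache_dir
size well under the 100GB partition so swap.state and the logs have headroom):

--cut--
cache_dir ufs /var/cache/squid/ 61440 128 512
# start evicting objects before the store limit is actually reached
cache_swap_low 90
cache_swap_high 95
--cut--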

--
Kind Regards
Jasper








Re: [squid-users] Windows 10 Updates

2015-07-08 Thread Jasper Van Der Westhuizen

On 8/07/2015 1:57 a.m., Jasper Van Der Westhuizen wrote:
 Hi list

 I have a problem with Windows 10 updates. It seems that Microsoft will do 
 updates via https now.

 --cut--
 1436268325.765 5294 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 9899569 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
 1436268333.267 7484 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 21564261 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
 1436268430.871 147280 xxx.xxx.xxx.xxx TCP_TUNNEL/200 4267 CONNECT cp201-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/23.214.151.174 -
 1436268478.259 96621 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
 1436268786.878 78517 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
 --cut--

 To my knowledge there is no way to cache this.

Technically yes, there is no way to cache it without breaking into the
HTTPS.

 How would one handle this? Is it even possible to cache the updates?


SSL-Bump is the Squid feature for accessing HTTPS data in decrypted form
for filtering and/or caching.

However, that will depend on:
a) being able to bump the crypto (if the WU app is validating the server
cert against a known signature, it's not),
b) the content inside actually being HTTPS (they do updates via P2P now
too), and
c) the HTTP content inside being cacheable (no guarantees, but a good
chance it's about as cacheable as non-encrypted updates).
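
If you do get that far, the config side would look roughly like the sketch
below (Squid 3.5 syntax, assuming a locally trusted signing certificate;
completely untested for this traffic):

--cut--
http_port 3128 ssl-bump cert=/etc/squid/bumpCA.pem generate-host-certificates=on
acl wu_sites ssl::server_name .delivery.mp.microsoft.com
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump wu_sites
ssl_bump splice all
--cut--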

You are the first to mention it, so there is no existing info on those
requirements.

Amos

___
Thank you Amos.

As with Windows 8.1, these updates are HUGE. I will keep an eye on 
developments. Microsoft really makes things difficult. For now we will be 
shaping the bandwidth at the network layer.

Kind Regards
Jasper








[squid-users] Windows 10 Updates

2015-07-07 Thread Jasper Van Der Westhuizen
Hi list

I have a problem with Windows 10 updates. It seems that Microsoft will do 
updates via https now.

--cut--
1436268325.765   5294 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 9899569 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
1436268333.267   7484 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 21564261 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
1436268430.871 147280 xxx.xxx.xxx.xxx TCP_TUNNEL/200 4267 CONNECT cp201-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/23.214.151.174 -
1436268478.259  96621 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
1436268786.878  78517 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
--cut--

To my knowledge there is no way to cache this. How would one handle this? Is it 
even possible to cache the updates?


--
Kind Regards
Jasper








Re: [squid-users] WARNING: swapfile header inconsistent with available data

2014-06-27 Thread Jasper Van Der Westhuizen


On Thu, 2014-06-26 at 18:03 +0300, Eliezer Croitoru wrote:
 On 06/25/2014 04:06 PM, Jasper Van Der Westhuizen wrote:
  As a matter of interest, in my cache logs I see many lines like these
 
  2014/06/25 14:52:58 kid1| WARNING: swapfile header inconsistent with
  available data
  2014/06/25 14:53:28 kid1| WARNING: swapfile header inconsistent with
  available data
 What Storage are you using?
 What is the fstab content?
 
 Eliezer
 

Hi Eliezer

These proxies are VM's with the cache and logs sitting on ext3 LV's.

/dev/VG01/SQUIDCACHE  /var/cache/squid ext3   defaults1  1 

Regards
Jasper


Re: [squid-users] WARNING: swapfile header inconsistent with available data

2014-06-27 Thread Jasper Van Der Westhuizen

On Fri, 2014-06-27 at 19:06 +1200, Amos Jeffries wrote:
 On 27/06/2014 6:53 p.m., Jasper Van Der Westhuizen wrote:
  
  
  On Thu, 2014-06-26 at 18:03 +0300, Eliezer Croitoru wrote:
  On 06/25/2014 04:06 PM, Jasper Van Der Westhuizen wrote:
  As a matter of interest, in my cache logs I see many lines like these
 
  2014/06/25 14:52:58 kid1| WARNING: swapfile header inconsistent with
  available data
  2014/06/25 14:53:28 kid1| WARNING: swapfile header inconsistent with
  available data
  What Storage are you using?
  What is the fstab content?
 
  Eliezer
 
  
  Hi Eliezer
  
  These proxies are VM's with the cache and logs sitting on ext3 LV's.
  
  /dev/VG01/SQUIDCACHE  /var/cache/squid ext3   defaults1  1 
  
  Regards
  Jasper
  
 
 Do any of those cache_dir share an LV?
 
 Amos
 

Hi Amos

No, each proxy server has its own cache_dir on its own LV.

Regards
Jasper


Re: [squid-users] WARNING: swapfile header inconsistent with available data

2014-06-26 Thread Jasper Van Der Westhuizen

 
 Are you using SMP workers with an AUFS, UFS or diskd cache_dir?
  UFS/AUFS/diskd are not SMP-aware and this is how it shows up when two
 or more workers are over-writing cache disk files and corrupting each
 other's records.
 
 Amos
 

Hi Amos

No, I don't make use of multiple SMP workers. Nothing in my config
anyway. 

PS from one of the proxies.

root     25678     1  0 11:00 ?        00:00:00 /usr/local/squid/sbin/squid -sY
squid    25680 25678  8 11:00 ?        00:03:36 (squid-1) -sY
squid    25681 25680  0 11:00 ?        00:00:10 (logfile-daemon) /var/log/squid//access.log
squid    25682 25680  0 11:00 ?        00:00:06 (unlinkd)

Kind Regards
Jasper


[squid-users] WARNING: swapfile header inconsistent with available data

2014-06-25 Thread Jasper Van Der Westhuizen
Hi all

I'm running a compiled version of Squid 3.4.4 and I'm seeing some
strange behavior lately. I have a two-node cluster load balanced via an
F5 LB, and at times one of the two servers will simply not complete a
connection. Squid is running and the logs keep rolling (although much slower,
and most entries are from its sibling cache), but no connections go
through. A restart of Squid fixes this.

As a matter of interest, in my cache logs I see many lines like these

2014/06/25 14:52:58 kid1| WARNING: swapfile header inconsistent with
available data
2014/06/25 14:53:28 kid1| WARNING: swapfile header inconsistent with
available data

On one of my servers I cleaned out the cache, as well as the swap.state
file but it didn't make a difference.

squid -v output.

Squid Cache: Version 3.4.4
configure options:  '--prefix=/usr/local/squid'
'--sysconfdir=/etc/squid3.4.4/' '--enable-follow-x-forwarded-for'
'--with-logdir=/var/log/squid/' '-with-pidfile=/var/run/squid.pid'
'--with-swapdir=/var/cache/squid/' '--with-large-files'
'--disable-strict-error-checking' '--with-default-user=squid'

Can anyone shed some light on this problem?

Kind Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Jasper Van Der Westhuizen

  On Tue, 2014-04-15 at 13:11 +0100, Nick Hill wrote:
  This may the the culprit
 
  hierarchy_stoplist cgi-bin ?
 
  I believe this will prevent caching of any URL containing a ?
 
  
  Should I remove the ? and leave cgi-bin?
 
 You can remove the whole line quite safely.
 
 It prevents cache_peers being sent requests that match the regex
 patterns listed. Since it is now very rare to find a peer that cannot
 support those requests...
 
 Amos

Thanks Amos. I will remove the string and test.

Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Jasper Van Der Westhuizen


On Tue, 2014-04-15 at 14:38 +0100, Nick Hill wrote:
 URLs with query strings have traditionally returned dynamic content.
 Consequently, http caches by default tend not to cache content when
 the URL has a query string.
 
 In recent years, notably Microsoft and indeed many others have adopted
 a habit of putting query strings on static content.
 
 This could be somewhat inconvenient on days when Microsoft pushes out a
 new 4GB update for Windows 8 and you have many such devices connected
 to your nicely cached network. Each device will download exactly the
 same content, but with its own query string.
 
 The net result is a huge amount of network traffic, often for
 surprisingly minor updates.
 
 I am currently testing a new configuration for Squid which identifies
 the SHA1 hash of the Windows update in the URL, then returns the
 bit-perfect cached content, irrespective of a wide set of URL changes. I
 have it in production in a busy computer repair centre. I am
 monitoring the results. So far, very promising.

Hi Nick

As you rightly said, Windows 8 devices are becoming more and more common
now, especially in the workplace. I don't want to download the same 4GB
update multiple times. Would you mind sharing your SHA1 hash
configuration, or is it perhaps available somewhere?
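
From the description it sounds a bit like Squid's Store-ID feature (available
since 3.4), i.e. rewriting those volatile URLs to one canonical cache key
before the lookup. Something along these lines, where the helper script name
is made up:

--cut--
# helper that maps the changing r2---sn-*.c.pack.google.com URLs onto one key
store_id_program /usr/local/bin/chrome_storeid.pl
store_id_children 5
acl google_pack dstdomain .pack.google.com
store_id_access allow google_pack
store_id_access deny all
--cut--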

Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-15 Thread Jasper Van Der Westhuizen

  
  Hi Pieter
  
  No, that gives me an incorrect regular expression error.
 
 NP: regex has an implied .* prefix and suffix on patterns unless you use
 the ^ and $ endpoint anchors.
 
 
 What are the HTTP headers for these requests and replies?
  The 206 status indicates a Range request taking place. But the first
 and third log entries indicate a 283KB range out of a 1400 byte redirection
 object being delivered. Bit weird.
 
  A quick scan by redbot indicates that the top URL is a redirection (302
 as logged the first but not the second time around) to a URL which responds with
 the world's most annoying Vary header.
 "Vary: *" effectively makes this response uncacheable, in redbot's words.
 
 
 Amos

Hi Amos

Using squidclient I see a "document has moved" redirect of sorts. For the
second one I get this.

--cut--
HTTP/1.0 200 OK
Accept-Ranges: bytes
Content-Length: 530464
Content-Type: application/x-msdos-program
ETag: 3457d
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Fri, 11 Apr 2014 05:42:56 GMT
Last-Modified: Thu, 13 Dec 2012 20:00:00 GMT
Alternate-Protocol: 80:quic
X-Cache: MISS from sles-squid2
X-Cache-Lookup: MISS from sles-squid2:8080
Via: 1.0 sles-squid2 (squid/3.1.12)
Connection: close
--cut--

Not sure if this is what you are referring to?

Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-15 Thread Jasper Van Der Westhuizen


On Tue, 2014-04-15 at 12:09 +0100, Nick Hill wrote:
 Hi Jaspar
 
 I use an expression like this, which will work on almost all Linux
 machines, Cygwin on Windows, and I expect Mac OS X or a terminal in
 Android, so long as you have a version of grep similar to GNU grep.
 
 echo "http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe" | grep -Pi "pack.google.com/.*\.(exe|crx)"
 
 This shows a match, so I think your expression is OK.
 
 Do you have any refresh_pattern entries earlier in your squid.conf
 which may inadvertently match?
 
 I see some of your requests have a query string attached. Many
 squid.conf recipes have the following, which will block caching of
 URLs with query strings.
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 
 If you want to see the part of your query string after the ?  in your
 log, you will have to add this to your squid.conf:
 strip_query_terms off
 
 But for privacy reasons you may not want to leave this on.
 

Hi Nick

Thanks for the reply. 

I have the following refresh_patterns in my config file.

--cut--
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 10080 80% 43200 ignore-no-cache ignore-no-store ignore-reload reload-into-ims

refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip|psf) 10080 80% 43200 ignore-no-cache ignore-no-store ignore-reload reload-into-ims

refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 10080 80% 43200 ignore-no-cache ignore-no-store ignore-reload reload-into-ims

#---
#Google updates
#---

refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200 override-expire override-lastmod ignore-no-cache ignore-reload reload-into-ims ignore-private

#
# Apple
#

refresh_pattern -i apple.com/.*\.(pkg|dmg|dat) 10080 80% 43200 ignore-no-cache ignore-reload reload-into-ims

#---
# Cache Adobe updates better
#---

refresh_pattern armdl.adobe.com/.*\.(exe|msp|msi) 10800 80% 43200 ignore-no-store ignore-reload reload-into-ims

--cut--

I also have a list of single file type refresh patterns below this list.

The only cgi-bin entries I have in my squid conf file are:

hierarchy_stoplist cgi-bin ?

and

refresh_pattern -i (/cgi-bin/|\?) 0 0%  0

I added the strip_query_terms off setting to my QA system config and
tested again.

15/Apr/2014:13:58:29 +0200    302 xxx.xxx.xxx.xxx TCP_MISS/302 1385 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DIRECT/173.194.34.64 text/html
15/Apr/2014:13:58:30 +0200   1028 xxx.xxx.xxx.xxx TCP_MISS/200 8704709
GET
http://r2---sn-uxa3vh-j2ue.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?cms_redirect=yesexpire=1397577513ip=xxx.xxx.xxx.xxxipbits=0ir=1ms=nvhmt=1397563090mv=msparams=expire,ip,ipbitssignature=47FA99194E00E36C1FB3677DC81ACAE1F684AD12.64E6B64942EA9AFCF58B5F13EDFF452AC0D54F49key=cms1
 - DIRECT/165.165.38.13 application/x-msdos-program
15/Apr/2014:13:58:36 +0200    156 xxx.xxx.xxx.xxx TCP_MISS/302 1385 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DIRECT/173.194.34.64 text/html
15/Apr/2014:13:58:37 +0200   1363 xxx.xxx.xxx.xxx TCP_MISS/200 8704709
GET
http://r2---sn-uxa3vh-j2ue.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?cms_redirect=yesexpire=1397577519ip=xxx.xxx.xxx.xxxipbits=0ir=1ms=nvhmt=1397563090mv=msparams=expire,ip,ipbitssignature=3D4FCBE80DCCF3C6CB9689B2B5006A580FACA0F2.63E7CA08C16498F695CEB745A03E1FFF2343B0F7key=cms1
 - DIRECT/165.165.38.13 application/x-msdos-program
15/Apr/2014:13:59:04 +0200    155 xxx.xxx.xxx.xxx TCP_MISS/302 1385 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DIRECT/173.194.34.64 text/html
15/Apr/2014:13:59:05 +0200    962 xxx.xxx.xxx.xxx TCP_MISS/200 8704709
GET
http://r2---sn-uxa3vh-j2ue.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?cms_redirect=yesexpire=1397577548ip=xxx.xxx.xxx.xxxipbits=0ir=1ms=nvhmt=1397563090mv=msparams=expire,ip,ipbitssignature=4C96A3FEA0F319EEFF155C11212659D682B6F432.1ADE1DE4DA72DAB0333AC1014640C6775BCB5382key=cms1
 - DIRECT/165.165.38.13 application/x-msdos-program


-- 
Kind Regards
Jasper 


Re: [squid-users] Cache Chrome updates

2014-04-15 Thread Jasper Van Der Westhuizen
 

On Tue, 2014-04-15 at 13:11 +0100, Nick Hill wrote:
 This may the the culprit
 
 hierarchy_stoplist cgi-bin ?
 
 I believe this will prevent caching of any URL containing a ?
 

Should I remove the ? and leave cgi-bin?

Regards
Jasper


[squid-users] Cache Chrome updates

2014-04-14 Thread Jasper Van Der Westhuizen
Hi all

I'm trying to cache chrome updates, but I see it always fetches over and
over again. 

I have the following refresh pattern in my config.

refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200 override-expire override-lastmod ignore-no-cache ignore-reload reload-into-ims ignore-private

I see the following behavior in my logs. This is for the same
client(source). Multiple entries, like it gets downloaded over and over
again.
Logs:

1397459574.511    199 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
1397459579.924   4794 xxx.xxx.xxx.xxx TCP_MISS/206 141330 GET
http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx application/x-msdos-program
1397459591.067    548 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
1397459596.709   4917 xxx.xxx.xxx.xxx TCP_MISS/206 283744 GET
http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
 - DEFAULT_PARENT/xxx.xxx.xxx.xxx application/x-msdos-program

Is my refresh pattern incorrect?

-- 
Kind Regards
Jasper 


Re: [squid-users] Blank page on first load

2014-04-14 Thread Jasper Van Der Westhuizen

On Wed, 2014-04-09 at 23:57 +0300, Eliezer Croitoru wrote:
 Hey Jasper,
 
 Just to make sure I understand:
 What is the issue and is it on specific sites?
 take my site for example http://www1.ngtech.co.il/
 
 Try to browse into the main page.
 For me squid works fine.
 I had an issue where ICAP settings delayed the page loading, but what you 
 describe is not a blank page but an error page.
 Can you look at the development console of IE11 and see what is 
 happening in the network layer?
 
 Eliezer
 
 On 04/09/2014 01:05 PM, Jasper Van Der Westhuizen wrote:
  Hi Amos
 
  I built a new proxy, stock standard settings, and tested it again. With
  IE11 and SPDY/3 enabled I still get the initial page can not be loaded
  problem. When doing a refresh immediately afterward, it loads the page.
 
  On the plus side, Sharepoint sites now work.:-)
 
  Any suggestions?
 
  Kind Regards
  Jasper

Hi Eliezer

The error message I see on IE looks to be an IE-generated message. I can
see the client request (one or two entries) to Google in the logs, yet on
IE it displays "page cannot be found" almost immediately. It is not a
Squid-generated message. I tried Squid 3.2, 3.3 and 3.4 on a Fedora Core
host. All show the same results. Disabling SPDY/3 fixes the problem on
IE11. Disabling HTTP 1.1 on IE10 and 9 fixes it there, although I don't
get many complaints on versions 9 and 10.

Right now SLES 11 SP3 only supports Squid 3.1.12-8.12.1 via the SMT repo.
I'm hoping 3.2 and later will be supported by SLES 11 SP3 soon so that I
can fully support HTTP 1.1 on these proxies.

Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-14 Thread Jasper Van Der Westhuizen

  refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200 override-expire override-lastmod ignore-no-cache ignore-reload reload-into-ims ignore-private
 
  I see the following behavior in my logs. This is for the same
  client(source). Multiple entries, like it gets downloaded over and over
  again.
  Logs:
 
  1397459574.511    199 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
  http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
   - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
  1397459579.924   4794 xxx.xxx.xxx.xxx TCP_MISS/206 141330 GET
  http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
   - DEFAULT_PARENT/xxx.xxx.xxx.xxx application/x-msdos-program
  1397459591.067    548 xxx.xxx.xxx.xxx TCP_MISS/302 1400 GET
  http://cache.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe
   - DEFAULT_PARENT/xxx.xxx.xxx.xxx text/html
  1397459596.709   4917 xxx.xxx.xxx.xxx TCP_MISS/206 283744 GET
  http://r2---sn-pn-woce.c.pack.google.com/edgedl/chrome/win/34.0.1847.116_33.0.1750.154_chrome_updater.exe?
   - DEFAULT_PARENT/xxx.xxx.xxx.xxx application/x-msdos-program
 
  Is my refresh pattern incorrect?
 
 Good day Jasper :)
 
 Should it not read *pack.google
 
 Cheers,
 
 Pieter
 

Hi Pieter

No, that gives me an incorrect regular expression error.

Regards
Jasper


Re: [squid-users] Blank page on first load

2014-04-09 Thread Jasper Van Der Westhuizen

  
  You could avoid that by upgrading Squid, perferrably to the current
  supproted release (3.4.4). I have a client running many IE11 with their
  default settings behind a Squid-3.4 and not seeing problems.
  
  Amos
  
  
 
 Thank you Amos. I will go to 3.4 then.
 

Hi Amos

I built a new proxy with stock standard settings and tested it again. With
IE11 and SPDY/3 enabled I still get the initial "page cannot be loaded"
problem. When doing a refresh immediately afterwards, it loads the page.

On the plus side, Sharepoint sites now work. :-)

Any suggestions?

Kind Regards
Jasper


[squid-users] Blank page on first load

2014-04-07 Thread Jasper Van Der Westhuizen
Hi all

I have a problem with some of my users getting blank pages when loading
sites like Google and MSN. They open the site and get a blank page, but
when refreshing it loads. These users mostly use IE11, but we have also
seen it with browsers like Safari, although I have to say that 98% of the
time it is with IE10 and 11.

In my squid logs I can see the request going to the website. The client
just gets a blank page until they reload it.

My setup is 3 servers running squid 3-3.1.12-8.12.1 behind an F5 load
balancer. From there I send all traffic to a ZScaler cache peer. In my
testing I have bypassed the cache peer but without any success.

Has anyone come across this problem before?

-- 
Kind Regards
Jasper 


Re: [squid-users] Blank page on first load

2014-04-07 Thread Jasper Van Der Westhuizen

  In my squid logs I can see the request going to the website. The client
  just gets a blank page until they reload it.
 
 Do you see anything coming back *from* the webserver?
  Is anything being delivered by Squid to the client?

Hi Amos

Yes I do see traffic coming back from the server. 

What I found, though, was that when going to http://www.google.co.za or
even http://www.google.com, it redirects to https://www.google.co.za or
https://www.google.com. That then gives the user the blank page. When
refreshing the page, it then loads properly.

Regards
Jasper


Re: [squid-users] Blank page on first load

2014-04-07 Thread Jasper Van Der Westhuizen


On Mon, 2014-04-07 at 18:42 +1200, Pieter De Wit wrote:
  My setup is 3 servers running squid 3-3.1.12-8.12.1 behind an F5 load
  balancer. From there I send all traffic to a ZScaler cache peer. In my
  testing I have bypassed the cache peer but without any success.
 
  Has anyone come across this problem before?
 
 
 Hi Jasper,
 
 Have you tried bypassing the F5's ? They try and do a bunch of clever 
 things and this can mess with normal networking/caching
 
 Cheers,
 
 Pieter
 

Hi Pieter

I also thought the F5 could be the problem, but I have a QA squid server
that I have been testing with and this server does not sit behind a load
balancer. Same results as with prod.

Regards
Jasper


Re: [squid-users] Blank page on first load

2014-04-07 Thread Jasper Van Der Westhuizen

  Do you see anything coming back *from* the webserver?
   Is anything being delivered by Squid to the client?
 
 Hi Amos
 
 Yes I do see traffic coming back from the server. 
 
 What I'm found though was that when going to http://www.google.co.za or
 even http://www.google.com, it redirects to https://www.google.co.za or
 https://www.google.com. That then gives the user the blank page. When
 refreshing the page, it then loads properly.

Hi Amos

I have done more testing and found the following. It seems this problem
is HTTP 1.1 related. In IE11 Advanced settings (pic attached) there are three
settings under HTTP Settings: Use HTTP 1.1, Use HTTP 1.1 through proxy
connections, and Use SPDY/3. If we disable the first two, sites like
Google load first time. After more searching on the web we found an
article suggesting that SPDY/3 was problematic and that a patch is
available.
(http://angrytechnician.wordpress.com/2014/01/16/google-not-loading-first-time-in-ie11-via-a-web-proxy-on-windows-8-1-turn-off-spdy-support/)

With the first two options enabled in IE and SPDY/3 disabled, Google
loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling
the HTTP 1.1 settings works.

So to me it seems that HTTP 1.1 is the problem here (as well as the SPDY/3
problem). We run Squid 3.1.

Regards
Jasper
attachment: image001.png

Re: [squid-users] Blank page on first load

2014-04-07 Thread Jasper Van Der Westhuizen

  With the first two options enabled in IE and SPDY/3 disabled, google
  loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling
  the HTTP1.1 settings work.
  
  So to me it seems that HTTP1.1 is the problem here(as well as the SPDY/3
  problem). We run Squid 3.1.
 
 
 Okay. Squid-3.1 is still mostly HTTP/1.0 software and IE has problems
 using HTTP/1.1 to a 1.0 proxy.
 
 You could avoid that by upgrading Squid, preferably to the current
 supported release (3.4.4). I have a client running many IE11 with their
 default settings behind a Squid-3.4 and not seeing problems.
 
 Amos
 
 

Thank you Amos. I will go to 3.4 then.

Regards
Jasper


RE: [squid-users] Connection pinning (NTLM pass through)

2012-07-26 Thread Jasper Van Der Westhuizen


 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Sunday, May 27, 2012 1:22 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Connection pinning (NTLM pass through)
 
 On 26/05/2012 8:31 a.m., Petter Abrahamsson wrote:
  Hi,
 
  I'm trying to get NTLM pass through to work with squid 3.1.19. I have
  followed the instructions found on the wiki[1] on connection pinning
  but I just keep receiving 401 status messages.
  Below is the very simple squid.conf that I'm using for this test.
 
  acl manager proto cache_object
  acl localhost src 127.0.0.1/32 ::1
  acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
  acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
  acl SSL_ports port 443
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localnet
  http_access allow localhost
  http_access deny all
  http_port 8080 connection-auth=on
  hierarchy_stoplist cgi-bin ?
  coredump_dir /var/cache/squid
  refresh_pattern ^ftp:   144020% 10080
  refresh_pattern ^gopher:14400%  1440
  refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
  refresh_pattern .   0   20% 4320
 
  And below is the corresponding access.log entries with obfuscated ip
  addresses and host names.
 
  1337976537.852 63 192.168.12.214 TCP_MISS/401 466 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976550.714 29 192.168.12.214 TCP_MISS/401 1074 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976551.025 57 192.168.12.214 TCP_MISS/401 466 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976554.627 57 192.168.12.214 TCP_MISS/401 1074 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976558.006   3128 192.168.12.214 TCP_MISS/401 466 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976559.462 59 192.168.12.214 TCP_MISS/401 1074 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
  1337976559.760 56 192.168.12.214 TCP_MISS/401 466 GET
  http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
 
  I feel like I'm missing something obvious since the instructions on
  the wiki are quite simple.
  When I try the same website through a v2.7 squid it lets me login.
  Let me know if any other information is needed.
  Any help would be very much appreciated.
 
 Check the HTTP headers at each point before/after Squid for keep-alive.
 There is something a little strange going on with HTTP/1.1 connections to
 servers and NTLM keep-alive in 3.1.19. If you are able to do some code
 digging that would help as well.
 
 Amos


Hi Petter.

I'm having the same issues with an IIS portal site. In 3.1 it doesn't work, but 
with 2.7 it does. Have you managed to fix this problem by any chance?

Kind Regards
Jasper



RE: [squid-users] Re: FTP through squid

2012-06-26 Thread Jasper Van Der Westhuizen
Hi 

I'm trying to force all FTP connections to go direct. I have a parent cache, and at 
the moment FTP connections via a browser work fine and are sent directly, but my 
problem is that when using a client like FileZilla, the connection is sent to 
the parent cache instead of directly.

I have enabled the following settings:

acl FTP proto FTP
always_direct allow FTP
acl Safe_ports port 21
http_access allow CONNECT Safe_ports

Is there anything I missed?
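
One guess (untested): FileZilla reaches the proxy as a CONNECT request to port
21 rather than as an ftp:// URL, so the proto-based rule never matches it.
Perhaps the direct rule needs to cover that form as well, something like:

--cut--
acl ftp_ports port 21
acl CONNECT method CONNECT
always_direct allow CONNECT ftp_ports
--cut--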

Regards
Jasper


 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, May 09, 2012 12:31 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Re: FTP through squid
 
 On 09.05.2012 01:18, Hugo Deprez wrote:
  Hello,
 
  thank you for the information.
 
  If I sepcify the following parameter in my configuration file  it :
  ftp_epsv OFF
 
  FTP with firefox is working.
 
  I think users will needs to use Filezilla or something equivalent, so
  what kind of Solution Do I Have ?
 
 With Squid, only that nasty CONNECT open-proxy situation.
 
 Frox proxy is the one we usually point people at for FTP protocol proxying. It
 does FTP proxy without needing the HTTP wrapping layers like Squid.
 
 
 
  My main goal is to setup a squid in order to specify bandwith
  limitation.
 
  By the way, does delay_parameters are apply to Safe_ports ?
 
 delay_parameters applies to requests which pass the delay_access ACL set
 for each pool. Safe_ports is just an ACL test.
 
  For information I use the following ACL :
  delay_access 1 allow all
 
 Then delay pool #1 in your config is applied to *all* traffic.
 
 Amos


RE: [squid-users] Re: FTP through squid

2012-06-26 Thread Jasper Van Der Westhuizen
I don't think that is the problem. If I FTP directly from the squid server to 
my test FTP site, it works fine. Via a browser it works fine. Only when using an 
FTP client, such as FileZilla for example, does it fail.



 -Original Message-
 From: Jakob Curdes [mailto:j...@info-systems.de]
 Sent: Tuesday, June 26, 2012 10:43 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Re: FTP through squid
 
 Am 26.06.2012 10:29, schrieb Vishal Agarwal:
  There are port 21 and 22.
 You probably mean port 20? 22 is  SSH.
 
 JC


RE: [squid-users] Re: FTP through squid

2012-06-26 Thread Jasper Van Der Westhuizen
 To proxy FTP well you need an FTP proxy. + FTP was designed not to be
 proxied. There is one called frox proxy which handles FTP with some tricks.

Thank you for the tip Amos. I will have a look at Frox now. 

There is another requirement though. Ideally I would like, as is the case with 
Squid, to have users authenticate against AD when using the FTP proxy. Some of 
my users are allowed to FTP and some are not. I know this goes beyond the scope of 
Squid and this mailing list, but any suggestions would be welcome.


RE: [squid-users] IP based ACL - regex?

2012-06-23 Thread Jasper Van Der Westhuizen


 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Saturday, June 23, 2012 12:18 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] IP based ACL - regex?
 
 On 22/06/2012 11:30 p.m., Jasper Van Der Westhuizen wrote:
  Hi all
 
  Could anyone give me some pointers on how to set up an ACL for the
 following: if I want to set up an ACL that includes all hosts (different
 subnets) that end in .105, how would I go about it?
 
 Strictly speaking you *can't*. Because hosts don't end in numbers. Hosts are
 alphanumeric names.
 
 I take it you mean IPv4 addresses, (one host has at least 1 MAC address,
 2 IPv4 addresses, 3 IPv6 addresses - three of which may be used to contact
 Squid).
 
 
 A 1980-style netmask does what you are asking:
 
 acl foo src 0.0.0.105/0.0.0.255
 
 But Why? Modern networks use CIDR subnet masking /8, /16, /24, /32, /48,
 /56, /64, /128 etc.
 
 Amos
 
Thank you Amos. Yes, I should have been more clear. I have to set up the ACL based 
on the IP address, with the last octet ending in, for example, .105.

The problem with your second proposal is that I don't have a list of all the 
networks. We have various networks, and as in the example of the .105 IP 
address, every host in our enormous list of networks whose address ends in .105 
has the same function. If I want to allow these hosts access to only a certain 
list of websites, I need to set up an ACL based on the last octet.

I think your acl foo src 0.0.0.105/0.0.0.255 idea will work for me and I will 
test it. 
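
Roughly what I plan to test (the list file name is just an example):

--cut--
acl dot105_hosts src 0.0.0.105/0.0.0.255
acl dot105_sites dstdomain "/etc/squid/lists/dot105-sites.txt"
http_access allow dot105_hosts dot105_sites
http_access deny dot105_hosts
--cut--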

Thanks again.



[squid-users] IP based ACL - regex?

2012-06-22 Thread Jasper Van Der Westhuizen
Hi all

Could anyone give me some pointers on how to set up an ACL for the following: 
if I want to set up an ACL that includes all hosts (different subnets) that end 
in .105, how would I go about it?

Any help is appreciated.

Regards
Jasper



RE: [squid-users] does a match on an ACL stop or continue?

2012-04-05 Thread Jasper Van Der Westhuizen

-Original Message-
From: Greg Whynott [mailto:greg.whyn...@gmail.com] 
Sent: Wednesday, April 04, 2012 5:04 PM
To: Squid Users
Subject: [squid-users] does a match on an ACL stop or continue?

If I have a list of 10 ACLs and a client matches on ACL #4, will ACLs
#6-10 be considered, or does Squid stop evaluating the ACLs and perform the 
actions defined in ACL #4?

example: 

if someone in the network 10.101.10.0/24 attempts to load badsite.com,  will 
they be denied with the ACLs below or will the TOS be modified and the site 
loaded?

acl execnetwork src 10.10.10.0/24
tcp_outgoing_tos 0x38 execnetwork

acl badsite url_regex -i badsite
http_access deny badsite


I ask as it appears to not be consistent with my current setup.

thanks for your time,
greg

Hi Greg

As far as I know it stops at the first access rule that matches. ACLs on the 
same rule are AND'd together; separate rules act as an OR.

This is AND: 
Deny if URL is in badsite AND connection from execnetwork.

http_access deny badsite execnetwork

This is OR: 
Deny if connection from execnetwork OR URL is in badsite.

http_access deny execnetwork
http_access deny badsite 

Regards
Jasper



RE: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-04 Thread Jasper Van Der Westhuizen

 This allows my un-authenticated users access to the whitelisted domains and 
 blocks any links in the sites that are not whitelisted (like Facebook and 
 YouTube). It also allows my authenticated users access to all sites, 
 including whitelisted sites, as well as allowing linked sites like Facebook 
 etc.

 Do you perhaps see any issue with this setup?

 The only problem I foresee is that srcdomain is the client's IP rDNS record. 
 You have to encode into that what group they are in, so it's restricted to 
 clients you have control over rDNS for. In which case you may as well make 
 them static and use src IP checks.

Amos

Hi Amos

I want to change my setup to do authentication for everyone, and based on 
whether the user is in a specific group or not, allow them access to certain 
ACL's.

I have a group in AD that should have full access. All users should 
authenticate. If the user is not in my Internet group then he gets to access a 
list of sites. If the user is in the Internet group he gets a different ACL to 
access everything.

Is this possible with NTLM? I don't think it is. How would I approach this?



RE: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-04 Thread Jasper Van Der Westhuizen


-Original Message-
From: Jasper Van Der Westhuizen [mailto:javanderwesthui...@shoprite.co.za] 
Sent: Wednesday, April 04, 2012 11:13 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Allowing linked sites - NTLM and un-authenticated 
users


 This allows my un-authenticated users access to the whitelisted domains and 
 blocks any links in the sites that are not whitelisted(like facebook and 
 youtube). It also allows my authenticated users access to all sites, 
 including whitelisted sites, as well as allowing linked sites like facebook 
 etc.

 Do you perhaps see any issue with this setup?

 The only problem I foresee is that srcdomain is the client's IP rDNS record. 
 You have to encode into that what group they are in, so it's restricted to 
 clients you have control over rDNS for. In which case you may as well make 
 them static and use src IP checks.

Amos

 Hi Amos

 I want to change my setup to do authentication for everyone, and based on 
 whether the user is in a specific group or not, allow them access to certain 
 ACL's.

 I have a group in AD that should have full access. All users should 
 authenticate. If the user is not in my Internet group then he gets to access 
 a list of sites. If  the user is in the Internet group he gets a different 
 ACL to access everything.

Is this possible with NTLM? I don't think it is. How would I approach this?

What I did now was use wbinfo_group.pl to help with group ACL's.

--cut--
external_acl_type testForNTGroup1 children=5 %LOGIN /usr/sbin/wbinfo_group.pl
external_acl_type testForNTGroup2 children=5 %LOGIN /usr/sbin/wbinfo_group.pl
acl InternetUsers external testForNTGroup1 /etc/squid/group_auth_all_access
acl NonInternetUsers external testForNTGroup2 
/etc/squid/group_auth_limited_access

---

### Allow Whitelisted domains to all users
http_access allow InternetUsers
http_access allow NonInternetUsers whitelist
http_access deny NonInternetUsers !whitelist all

--cut--

I place the groups that are allowed to have full access in group_auth_all_access. 
The file group_auth_limited_access contains the group that covers my entire AD 
domain.

I think this will work fine. I've tested it successfully.
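
For completeness: the %LOGIN token in those external_acl_type lines assumes an 
authentication scheme is already configured, so that Squid has a username to 
hand to wbinfo_group.pl. A minimal sketch of the Samba/winbind NTLM pieces this 
kind of setup normally sits on top of (the helper path and children count are 
illustrative, not necessarily the exact values in use here):

--cut--
# NTLM authentication via Samba's ntlm_auth; assumes winbindd is running
# and the proxy host has been joined to the AD domain
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on

# Optional explicit login requirement; an external acl that uses %LOGIN
# will also trigger the authentication challenge on its own
acl authenticated proxy_auth REQUIRED
--cut--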



RE: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-03 Thread Jasper Van Der Westhuizen


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, April 02, 2012 9:27 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated 
users


The problem of course is that they *are* going directly to the blacklisted 
sites when they load an object from those sites, even if the object was 
embedded in some third-party whitelisted site's HTML.
The HTTP protocol makes no distinction about how HTML, XML, or Flash document 
structures group objects. All Squid sees is a request for an object on a 
non-whitelisted site.


Something else I've tried was using cache_peer_access to pass the whitelisted 
domains that everyone should have access to on to another Squid instance that 
only allows access to the whitelisted sites and nothing else. Again, it only 
kind of works: I can see that the proxy sends the request to the cache_peer, 
but it only sends the requested site there, and again not any of the sites 
that are linked within it (like facebook).

Is there a way to send the entire session to the cache_peer if a particular 
domain was requested?
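
As a rough sketch of that routing (the peer address, port and name below are 
made up for illustration), sending whitelisted domains through a dedicated 
peer would normally combine cache_peer, cache_peer_access and never_direct:

--cut--
# Hypothetical parent Squid that only serves the whitelisted domains
cache_peer 192.0.2.10 parent 3128 0 no-query name=whitelistpeer

# Route only whitelisted requests to that peer, nothing else
cache_peer_access whitelistpeer allow whitelist
cache_peer_access whitelistpeer deny all

# Stop Squid from going direct for those domains
never_direct allow whitelist
--cut--

The catch is that this still routes request by request on the destination 
domain, so the linked facebook/youtube objects never match the whitelist and 
never reach the peer; HTTP gives Squid no notion of a session tied to the 
original page.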



RE: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-03 Thread Jasper Van Der Westhuizen


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, April 03, 2012 8:43 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated 
users

 There is maybe the Referer: header. Since the evercookie attacks it has 
 become popular to erase or not send those, though. So good luck.
 You can test that with the req_header ACL type and a regex pattern.

Amos
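
For illustration, a req_header check along the lines Amos suggests might look 
like this (the ACL name and the domain pattern are placeholders, and it only 
helps for clients that actually send a Referer header):

--cut--
# Match requests whose Referer points at one of the whitelisted pages
acl from_whitelisted_page req_header Referer -i (triptrack\.co\.za|example\.com)

# Let objects linked from those pages through without an auth challenge
http_access allow from_whitelisted_page
--cut--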

I think I found a work-around. I added another ACL and used the same list of 
whitelisted domains, but instead of dstdomain the new acl is of type 
srcdomain.

So the access list looks

RE: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-01 Thread Jasper Van Der Westhuizen


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Saturday, March 31, 2012 10:11 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated 
users

 You can use the all hack and two login lines:

http_access allow whitelist
# allow authed users, but dont challenge if missing auth
http_access allow authed all
# block access to some sites unless already logged in
http_access deny blacklist
http_access deny !authed


The authed users may still have problems logging in if the first site they 
visit is one of the blacklisted ones. But if they visit another page first 
they can log in and get there.


Amos

Hi Amos

Thank you for the reply. 

I think I already tried this method but it still fails. In any case I tried 
what you suggested and the problem remains: my unauthenticated (non-internet) 
users can get to the whitelisted sites just fine, but they still get 
authentication prompts for the linked content, like facebook and youtube, that 
the site contains. An example of a site is http://www.triptrack.co.za/ and you 
will see what I mean. At the bottom right of the site there are links to 
facebook and youtube. Those links cause an authentication request for the 
unauthenticated (or non-internet) users. I can't have these prompts appear for 
these users. They have a set list of sites they can visit, and those should 
work for them without being asked to authenticate. Only when they try to go 
directly to sites that are not in the whitelist should they be prompted, and 
obviously denied, since they are not included in the AD group.

Current rules:
http_access allow whitelist
http_access allow authenticated all
http_access deny blacklist
http_access deny !authenticated

Kind Regards
Jasper



[squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-03-30 Thread Jasper Van Der Westhuizen
Hi everyone

I've been struggling to get a very specific setup going.

Some background: our users are split into Internet users and Non-Internet 
users. Everyone in a specific AD group is allowed to have full internet 
access. I have two Squid proxies with squidGuard, load balanced, with NTLM 
authentication to handle the group authentication. All traffic then also gets 
sent to a cache peer.

This is basically what I need:
1. All users (internet and non-internet) must be able to access sites in 
/etc/squid/lists/whitelist.txt
2. If a user wants to access any external site that is not in the whitelist 
then he must be authenticated. Obviously a non-internet user can try until he 
is blue in the face; it won't work.

These two scenarios are working 100%, except for one irritating bit. Most of 
the whitelisted sites have linked sites like facebook, twitter or youtube in 
them that load icons, graphics, ads etc. This causes an auth-prompt for 
non-internet users. I can see the requests in the logs being DENIED.

The only way I could think of to get rid of these errors was to implement an 
http_access deny !whitelist after the allow. This works great for 
non-internet users and it blocks all the linked sites without asking to 
authenticate, but obviously this breaks access to all other sites for 
authenticated users (access denied for all sites).

I thought of placing the whitelisted sites in squidGuard but NTLM 
authentication happens in squid, before it reaches squidGuard.

Here is an excerpt from my config file:
--cut--
### Whitelisted sites for all users
acl whitelist dstdomain "/etc/squid/lists/whitelist.txt"

acl authenticated proxy_auth REQUIRED

http_access allow whitelist
http_access allow authenticated !whitelist
http_access deny all
--cut--

What I basically need is something like http_access deny !whitelist 
!authenticated, which means: deny all sites not in the whitelist for users who 
are not authenticated. Can this be done?
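
For reference, whether Squid pops up a login box on a matching deny line 
depends on the last ACL of that line: if it is an authentication ACL, Squid 
sends a 407 challenge; if it is something like all, it simply denies. A sketch 
using the ACL names from the excerpt above (this relies on the so-called all 
hack and is only a sketch, not a drop-in fix):

--cut--
# Whitelisted sites for everyone, never challenged
http_access allow whitelist

# Already-authenticated users may go anywhere. Ending the line with 'all'
# means unauthenticated users fall through here instead of being challenged.
http_access allow authenticated all

# Everything left is a non-whitelisted request from someone not logged in.
# Because 'authenticated' is the last ACL, this deny does send the login
# challenge, which is wanted for Internet users but is still the popup the
# non-internet users will see on linked content.
http_access deny !authenticated
http_access deny all
--cut--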

Any suggestions would be greatly appreciated.

Kind Regards
Jasper