Re: [squid-users] Squid (using External ACL) problem with Icap

2011-12-01 Thread Amos Jeffries

On 2/12/2011 4:37 a.m., Roberto Galluzzi wrote:

Hi,

I'm using Squid 3.1 and SquidGuard with success. Now I want to add SquidClamav 
6.

Versions 6.x need Icap, and I didn't have any problems installing it.

In my Squid configuration I use an external ACL to get the username from a
script, but with Icap enabled I can't surf because the user is empty (in
access.log). However, in my script's log I can see that Squid is still calling it.

If I use simple authentication (auth_param basic ...) I get the user and everything works.

Nevertheless I MUST use the external ACL, so I need help in this context.


The problem is that the external_acl_type "user=" tag is not an
authenticated username in the current Squid. It is just a label for logging etc.


There is a temporary workaround patch available in the existing bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=3132

You can use that while we continue to work on redesigning the auth 
systems to handle this better.
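
For background: an external ACL helper just reads the format tokens Squid
sends (here the two %SRC values), one request per line, and answers "OK"
or "ERR", optionally with a user= label. A minimal sketch of such a
helper, with the lookup logic left hypothetical:

  #!/bin/sh
  # one request per line from Squid; the fields are the tokens from
  # the external_acl_type line (%SRC %SRC here)
  while read src src2; do
      # hypothetical lookup: map the client address to a name somehow
      echo "OK user=someuser"  # user= is a log label, not an authenticated login
  done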





This is part of my configuration:

squid.conf
-
(...)
external_acl_type  children=15 ttl=7200 negative_ttl=60 %SRC %SRC  

(...)
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all
(...)
-

If you need other info, ask me without problem.

Thank you

Roberto





Re: [squid-users] Squid losing connectivity for 30 seconds

2011-12-01 Thread Elie Merhej





 Hi,

I am currently facing a problem that I wasn't able to find a
solution for in the mailing list or on the internet.
My squid is dying for 30 seconds every hour at the same exact
time; the squid process will still be running.
I lose my wccp connectivity, the cache peers detect the squid
as a dead sibling, and the squid cannot serve any requests.
The network connectivity of the server is not affected (a ping
to the squid's ip doesn't time out).



Hi,

here is the strace result
- 

[... normal reading, DNS lookups and other network read/writes ...]

read(165, "!", 256) = 1




read(165, "!", 256) = 1
 


Squid is freezing at this point


The 1-byte read on FD #165 seems odd. Particularly suspicious being
just before a pause, and with only a constant 256-byte buffer space
available. No idea what it is yet, though.





wccp2_router x.x.x.x
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_service dynamic x
wccp2_service_info x protocol=tcp flags=src_ip_hash priority=240 
ports=80

wccp2_service dynamic x
wccp2_service_info x protocol=tcp flags=dst_ip_hash,ports_source 
priority=240 ports=80

wccp2_assignment_method mask


#icp configuration
maximum_icp_query_timeout 30
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
log_icp_queries off
miss_access allow squidFarm
miss_access deny all


So if I understand this right: you have a layer of proxies defined as
"squidFarm" which client traffic MUST pass through *first* before they
are allowed to fetch MISS requests from this proxy. Yet you are
receiving WCCP traffic directly at this proxy with both NAT and TPROXY?


This miss_access policy seems decidedly odd. Perhaps you can
enlighten me?

Hi,

Let me explain what I am trying to do (I was hoping that this is the
right setup): the squids are siblings, so my clients pass through one
squid only. That squid uses ICP to check if the object is anywhere in my
network; if not, it fetches the object from the internet.


clients -> WCCP -> squid --(if miss)--> ICP --(if miss)--> Internet -> WCCP -> squid -> clients

I have over 400Mbps of bandwidth, but one squid (3.1) cannot withstand
this kind of bandwidth (number of clients), which is why I have created a
squidFarm.
I have the following hardware: i7 xeon 8 cpus - 16GB Ram - 2 HDDs 450GB 
& 600GB no RAID
Software: Debian OS squeeze 6.0.3 with kernel 2.6.32-5-amd64 and 
iptables 1.4.8
Please note that when I only use one cache_dir (the small one: cache_dir
aufs /cache1/squid 32 480 256) I don't face this problem.

The problem starts when the cache dir size is bigger than 320 GB.
Please advise

Thank you for the advice on the refresh patterns.
Regards
Elie


RE: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread Jenny Lee

> K. first problem:
> # host download.windowsupdate.com
> ...
> download.windowsupdate.com.c.footprint.net has address 204.160.124.126
> download.windowsupdate.com.c.footprint.net has address 8.27.83.126
> download.windowsupdate.com.c.footprint.net has address 8.254.3.254
> 
> 
> Client is connecting to server 4.26.235.254 port 80. Which is clearly 
> not "download.windowsupdate.com" according to the official DNS entries I 
> can see.

Yes, welcome to the host header forgery mess. I don't know who benefited from 
this but a lot of people got bitten by it.

I mentioned this first day http://bugs.squid-cache.org/show_bug.cgi?id=3325

Anyone doing ANYCAST will be screwed (and a whole lotta people do that).

p4$ host download.windowsupdate.com
mscom-wui-any.vo.msecnd.net has address 70.37.129.251
mscom-wui-any.vo.msecnd.net has address 70.37.129.244

p12$ host download.windowsupdate.com
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.42
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.8
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.24
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.26
a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.41

Jenny 

Re: [squid-users] Squid losing connectivity for 30 seconds

2011-12-01 Thread Amos Jeffries

On 2/12/2011 3:16 a.m., Elie Merhej wrote:





 Hi,

I am currently facing a problem that I wasn't able to find a
solution for in the mailing list or on the internet.
My squid is dying for 30 seconds every hour at the same exact
time; the squid process will still be running.
I lose my wccp connectivity, the cache peers detect the squid
as a dead sibling, and the squid cannot serve any requests.
The network connectivity of the server is not affected (a ping
to the squid's ip doesn't time out).


The problem doesn't start immediately after squid is
installed on the server (the server is dedicated to squid).

It starts when the cache directories start to fill up.
I started my setup with 10 cache directories; the squid will
start having the problem when the cache directories are above
50% filled.
When I change the number of cache directories (9, 8, ...) the
squid works for a while, then the same problem recurs.

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage.
Finally I created two cache directories (one on each HDD) but
the problem persisted.


You have 2 HDD?  but, but, you have 10 cache_dir.
 We repeatedly say "one cache_dir per disk" or similar. In
particular, one cache_dir per physical drive spindle (for "disks"
made up of multiple physical spindles) wherever possible, with
physical drives/spindles mounted separately to ensure the
pairing. Squid performs a very unusual pattern of disk I/O which
stresses them down to the hardware controller level and makes this
kind of detail critical for anything like good speed. Avoiding
cache_dir object limitations by adding more UFS-based dirs to
one disk does not improve the situation.
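
On this box that would look something like the sketch below (sizes are
only examples, leaving headroom on the 450GB and 600GB disks):

  cache_dir aufs /cache1/squid 300000 140 256
  cache_dir aufs /cache2/squid 400000 140 256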


That is a problem which will be affecting your Squid all the 
time though, possibly making the source of the pause worse.


From the description I believe it is garbage collection on the
cache directories. The pauses can be visible when garbage
collecting any caches over a few dozen GB. The squid default
"swap_high" and "swap_low" values are "5" apart, with the minimum
being 0 apart. These are whole % points of the total
cache size, being erased from disk in a somewhat random-access
style across the cache area. I did mention uncommon disk I/O
patterns, right?


To be sure what it is, you can attach the "strace" tool to the
squid worker process (the second PID in current stable Squids)
and see what is running. But given the hourly regularity and
past experience with others on similar cache sizes, I'm almost
certain it's the garbage collection.
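
For example (a sketch; 12345 stands in for the worker PID):

  strace -c -p 12345                       # per-syscall summary
  strace -tt -p 12345 -o /tmp/squid.trace  # timestamped call log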


Amos



Hi Amos,

Thank you for your fast reply,
I have 2 HDD (450GB and 600GB)
df -h displays that I have 357GB and 505GB available
In my last test, my cache dirs were:
cache_swap_low 90
cache_swap_high 95


This is not ideal. For anything more than 10-20 GB I recommend setting
them no more than 1 apart, possibly to the same value if that works.
Squid has a light but CPU-intensive and possibly long garbage
removal cycle above cache_swap_low, and a much more aggressive but
faster and less CPU-intensive removal above cache_swap_high. On
large caches it is better in terms of downtime to go straight to
the aggressive removal and clear disk space fast, despite the
bandwidth cost of replacing any items the light removal would have left.
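
In config terms, something like this sketch (exact values are your choice):

  cache_swap_low  90
  cache_swap_high 91   # no more than 1 apart on large caches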



Amos


Hi Amos,

I have changed to swap_high 90 and swap_low 90 with two cache dirs
(one for each HDD); I still have the same problem.

I did an strace (when the problem occurred)
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------
 23.06    0.004769           0   8568196           write
 21.07    0.004359           0     24658         5 futex
 19.34    0.004001         800         5           open
  6.54    0.001352           0      5101      5101 connect
  6.46    0.001337           3       491           epoll_wait
  5.34    0.001104           0     51938      9453 read
  3.90    0.000806           0     39727           close
  3.54    0.000733           0     86400           epoll_ctl
  3.54    0.000732           0     32357           sendto
  2.02    0.000417           0     56721           recvmsg
  1.84    0.000381           0     24064           socket
  0.96    0.000199           0     56264           fcntl
  0.77    0.000159           0      6366       329 accept
  0.53    0.000109           0     24033           bind
  0.52    0.000108           0     30085           getsockname
  0.21    0.000044           0     11200           stat
  0.21    0.000044           0      6998       359 recvfrom
  0.09    0.000019           0      5085           getsockopt
  0.06

Re: [squid-users] ACLs - making up a multiple match requirement. (AND like)

2011-12-01 Thread Amos Jeffries

On 2/12/2011 5:43 a.m., Greg Whynott wrote:


looking for guidance on creating delay pools, something I've never done
before, and because it's a production system I'd like to minimize my
down time, or the amount of time I'd be here if I have to come in on
the weekend to do it.




It looks like you need to read this FAQ tutorial on how ACLs and access 
controls work in Squid before any of what I say below will make much sense:

   http://wiki.squid-cache.org/SquidFaq




the intent is to limit bandwidth to a list of external networks,
either by IP or URL regex, to 1000kb/sec for the entire studio during
work hours, _except_ for a list/group of excluded hosts inside, which
will have unrestricted access to the same external hosts.


i'm attempting to limit youtube bandwidth during work hours for a
particular inside network, whilst the other inside networks have full
bandwidth, with squid. At the same time, the 'limited' network has
full bandwidth to other non-youtube sites. It appears i'd need some
sort of AND logic (if src IP is youtube and dest is LAN-A then..).



I achieved this on the router using limiters/queues, but it appears
this won't work going forward with the new 'exclusion' requirement
management has asked me to implement. The source or destination
always appears to be the squid server itself from the internet
router's perspective, which is why i'm considering squid now.





Okay, one thing to be aware of before you start altering things is that
delay pools are assigned by Squid at the start of each request, and until
that request is finished or Squid restarts the pool is not changed. This
means YT videos started in the slowdown period will stay slow even if
they run into the time when fast is allowed. Vice versa, videos
started in the fast period will stay at the fast speed when and after the
slow period begins.


Since you have set up the router already with policies and limiting, you
may find TOS marking to be the easier way forward, instead of
replicating the limits and policies in Squid delay pools. All the
limiting is kept in the router and Squid only marks outgoing packets
with a TOS value depending on your criteria. For exclusions you want
http://www.squid-cache.org/Doc/config/tcp_outgoing_tos and some ACLs to
determine when and which TOS is applied to a particular request's
outgoing packets.
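
A sketch of what that could look like (ACL names, addresses and TOS
values are illustrative only):

  acl youtube dstdomain .youtube.com
  acl lan_a src 192.168.1.0/24
  acl workhours time MTWHF 9:00-17:00
  # mark YT traffic from LAN-A during work hours; router queues on 0x20
  tcp_outgoing_tos 0x20 youtube lan_a workhours
  tcp_outgoing_tos 0x00 all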



HTH
Amos


Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Amos Jeffries

On 2/12/2011 5:13 a.m., Matus UHLAR - fantomas wrote:

On 01.12.11 15:05, Josef Karliak wrote:
 I wanna use tmpfs for squid cache, is 8GB enough or too big ? We've 
about 3000 computers behind squid, for OS is 16GB sufficient, that's 
why I used 8GB for squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only 
memory cache would be much more efficient (unless you are running 
32-bit squid).


Yes, consider the purpose of a disk cache and why it is better than a RAM
cache: objects are not erased when Squid or the system restarts.


==> tmpfs data is erased when Squid or the system restarts. So why bother?

All you gain from tmpfs is a drop in speed accessing the data, from RAM
speeds down to disk speeds; whether it is SSD or HDD, it is slower
than RAM.


Amos



Re: [squid-users] CacheHierarchy - load balance, failover

2011-12-01 Thread Amos Jeffries

On 2/12/2011 2:10 p.m., Chia Wei LEE wrote:

Hi Amos

Since the child server don have internet access. so we only set it
never_direct allow all



Okay. Then the answer is yes. Traffic will only be permitted through the
cache_peers, and if one goes offline/unavailable the other will take all
the load.
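
In config terms that is simply the combination you already have (sketch):

  cache_peer p1.example.com parent 3128 3130 weighted-round-robin
  cache_peer p2.example.com parent 3128 3130 weighted-round-robin
  never_direct allow all   # forces every request via a live parent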


Amos



  Amos Jeffries wrote:


On 1/12/2011 10:49 p.m., Chia Wei LEE wrote:

Hi All

I had linked squid servers into a cache hierarchy. One Child and two
Parent.

below is the Child config
cache_peer p1.example.com parent 3128 3130 weighted-round-robin
cache_peer p2.example.com parent 3128 3130 weighted-round-robin

Here is my question:
if, let's say, p1.example.com was down, will all the traffic go thru
p2.example.com? If not, any solution to avoid the service downtime?

Maybe. What do your always_direct and never_direct and prefer_direct
configuration say about it?

Amos






Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread Amos Jeffries


Hooray progress :)


On 2/12/2011 5:49 a.m., David Touzeau wrote:


Here is the log in debug mode:

--
2011/12/01 17:49:14.106 kid1| HTTP Client local=4.26.235.254:80
remote=192.168.1.228:1074 FD 30 flags=33
2011/12/01 17:49:14.106 kid1| HTTP Client REQUEST:
-
GET /v9/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?1112011649 
HTTP/1.1
Accept: */*
User-Agent: Windows-Update-Agent
Host: download.windowsupdate.com
Connection: Keep-Alive


K. first problem:
#  host download.windowsupdate.com
...
download.windowsupdate.com.c.footprint.net has address 204.160.124.126
download.windowsupdate.com.c.footprint.net has address 8.27.83.126
download.windowsupdate.com.c.footprint.net has address 8.254.3.254


Client is connecting to server 4.26.235.254 port 80. Which is clearly 
not "download.windowsupdate.com" according to the official DNS entries I 
can see. It is likely you have another set of IPs entirely, so please 
confirm that by running "host download.windowsupdate.com" on the Squid box.


Note that transparent Squid requires the same DNS "view" as the clients
to keep the traffic flowing to the right places. Since it should be in
the same network as the clients for transparent to work anyway, this is
not usually a problem. But it can appear if you or the client is doing
anything fancy with DNS server configurations.


NP: if 4.26.235.254 happens to be a local WSUS server you need to 
configure your local DNS to pass that info on to Squid for the relevant 
WSUS hosted domains. You will also benefit from Squid helping to enforce 
that MS update traffic stays on-LAN.



Amos


Re: [squid-users] CacheHierarchy - load balance, failover

2011-12-01 Thread Chia Wei LEE
Hi Amos

Since the child server doesn't have internet access, we only set
never_direct allow all on it.


Cheers
Chia Wei





   
From: Amos Jeffries
To: squid-users@squid-cache.org
Date: 01-12-2011 08:33 PM
Subject: Re: [squid-users] CacheHierarchy - load balance, failover




On 1/12/2011 10:49 p.m., Chia Wei LEE wrote:
> Hi All
>
> I had linked squid servers into a cache hierarchy. One Child and two
> Parent.
>
> below is the Child config
> cache_peer p1.example.com parent 3128 3130 weighted-round-robin
> cache_peer p2.example.com parent 3128 3130 weighted-round-robin
>
> Here is my question,
> if, let's say, p1.example.com was down, will all the traffic go thru
> p2.example.com? If not, any solution to avoid the service downtime?

Maybe. What do your always_direct and never_direct and prefer_direct
configuration say about it?

Amos




Re: [squid-users] Filters in terminal but not in Browser

2011-12-01 Thread Amos Jeffries

On 2/12/2011 8:49 a.m., Paul Crown wrote:

On 11/30/2011 06:41 PM, Amos Jeffries wrote:

On Wed, 30 Nov 2011 17:07:54 -0600, Paul Crown wrote:

Greetings,

I feel I am missing something simple.  I have installed squid3 on
Ubuntu.  I added

acl allow_domains dstdomain "/etc/squid3/always_direct.acl"
always_direct allow allow_domains

acl denied_domains dstdomain "/etc/squid3/denied_domains.acl"
http_access deny denied_domains

and populated both files accordingly, and restarted squid3.

Now from a terminal, curl good-url and it works.  curl bad-url and it
gives me the blocked message.

Try it in firefox, and good-url and bad-url both work fine.  Neither is
blocked.

What did I forget?

Thanks.

Paul

What you are missing is two details:

Firstly, http_access and always_direct are completely unrelated controls.
  - http_access determines whether Squid is allowed to service the request.
  - always_direct determines whether Squid MUST (versus MAY) service the
request using DNS lookups and going directly to the public origin
server(s).

Also, you are missing minor details about the URL being tested, i.e.
- whether the browser is automatically adding "www." in front of the
domain, or not
- whether curl is setting the HTTP/1.1 Host: header correctly, or not
- whether the browser and terminal tools were run on the same machine, or
not
- whether you have any other access controls affecting the requests (i.e.
a browser type ACL allowing Mozilla/* agents through before these controls)

Amos


Thanks Amos.

That makes sense.

I got the browser working by configuring proxy settings in the browser
to port 3128.

I was trying to do transparent interception without changing the browser
(otherwise some employees are going to change it back). So, I am still
showing my lack of understanding regarding transparent http access.
Must I also redirect port 80 to 3128, such as with iptables, to not have
to config the browser?


To not configure the browser *at all*? Yes, you have to lie to the
browser, with NAT or TPROXY trickery.
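
A minimal sketch of the usual NAT rule, run on the Squid box itself
(eth0 stands in for the LAN-facing interface):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128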


However, there is the WPAD protocol and PAC scripts to automatically 
configure the browser (and other background software!) as needed without 
having the bother of visiting N machines to do setup. Details are 
here:   http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers


This still requires the "auto detect network settings" or similar config to
be turned on in the browser. Squid langpack now provides several error page
templates explaining to end users how to configure their browser
themselves and avoid admin work :)  (requires a 3.1 or later proxy though).




Paul

For ref:

squid3 3.0.STABLE19-1
Ubuntu 10.04.2 LTS 64-bit

/etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl denied_domains dstdomain "/etc/squid3/denied_domains.acl"
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny denied_domains
http_access allow localhost
http_access deny all


Um, depending on how you have done your NAT rules, the above http_access
rules would block everything or allow everything, with almost no control
in Squid.


Make sure NAT is being performed on the Squid box and the firewall rules 
are locked down securely to prevent other traffic arriving in the 
"transparent" flagged Squid port.

Details on how to do that can be found here:
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat


For traffic from configured browsers and your management access you 
should have a second http_port in Squid without the "transparent" flag 
set, for normal proxy access.


I recommend using 3128 for the normal proxy traffic since it is a
well-known port (meaning attackers try to scan for access to it
routinely, so it is a bit dangerous with all the security holes added by
NAT), and dedicating some randomly chosen port number to the NAT traffic.
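
That split would look something like this sketch (the intercept port
number is an arbitrary example):

  http_port 3128                # normal configured-proxy traffic
  http_port 13128 transparent   # NAT-redirected traffic only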



When this is working Squid will see all traffic arriving on the NAT port
as coming from the LAN client who made the request. Which means the
"allow localhost" rule will not match and permit them access. You will
need to alter that to "localnet" or such, with the acceptable LAN subnet
ranges permitted.
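
A sketch of that change (substitute your real LAN ranges):

  acl localnet src 192.168.0.0/16
  http_access allow localnet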



NP: if you have some other software before Squid handling the traffic, 
that is a very different setup and "transparent" is entirely the wrong way.



icp_access deny all
htcp_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acc

Re: [squid-users] Unable to access IIS site through squid3

2011-12-01 Thread Amos Jeffries

On 2/12/2011 4:23 a.m., Fredrik Eriksson wrote:

On 12/01/2011 01:13 PM, Amos Jeffries wrote:


Ah sorry. In short, I think it's a kernel bug in the TCP / IP support.


This seems to be a rather persistent kernel bug, if so.

Since there are FD leaks in the debian stable (squeeze/6.0) packaged
version of squid3 (3.1.6-1.2+squeeze1), we pull the squid3 package from
testing (wheezy/7.0). Therefore the testing repo is already added to
our squid servers, so I installed linux from testing as well (linux
version 3.1.0-1-amd64).

I tried both with IPv6 enabled and disabled, which you do by adding
this line to /etc/sysctl.d/disableipv6.conf

  net.ipv6.conf.all.disable_ipv6=1

neither case worked. Are the kernel developers aware of this bug you
mention, and is it solved in an even later version of linux?


I can't speak for what they know. I only pay attention to the details 
directly affecting Squid features on the netfilter lists.


FWIW I'm running the Wheezy kernels here with no such problems. It may
be something particular in your iptables rules affecting the checksum.
It's probably best to take this to the netfilter mailing list now and see
if anyone there has a better clue than me.


Amos


Re: [squid-users] TCP_REFRESH_UNMODIFIED instead of TCP_IMS_HIT? "Revalidation failed"

2011-12-01 Thread Amos Jeffries

On 2/12/2011 2:29 a.m., David Wojak wrote:

On 12/01/2011 01:42 PM, Amos Jeffries wrote:

On 2/12/2011 1:12 a.m., David Wojak wrote:

On 11/30/2011 01:48 PM, David Wojak wrote:





Client to Server (via proxy):

GET 
http://bla.bla.bla:8080/afdc3604/lib/commons-logging-1.1.1.jar 
HTTP/1.1

Host: tlptest.m2n.at:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) 
Gecko/20100101 Firefox/8.0
Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cache-Control: no-cache


Proxy to origin server:

GET /afdc3604/lib/commons-logging-1.1.1.jar HTTP/1.1
If-Modified-Since: Tue, 29 Nov 2011 12:21:04 GMT
Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) 
Gecko/20100101 Firefox/8.0
Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Via: 1.1 proxytest (squid/3.1.16)
Cache-Control: max-age=540
Connection: keep-alive


Origin Server to Proxy:

HTTP/1.1 304 Not Modified
Server: Jetty(6.1.16)


There is no Date header here, which may be a problem as it makes
the 304 invalid; but Squid assumes "now" and sends that to the
client...


Well, thought of that too when I read the 304 specification... 
well, I'll try that!





Proxy to client:

HTTP/1.0 200 OK
Date: Wed, 30 Nov 2011 08:27:16 GMT
Content-Type: application/java-archive
Content-Length: 66245
Last-Modified: Tue, 29 Nov 2011 12:21:04 GMT
Server: Jetty(6.1.16)
Warning: 110 squid/3.1.16 "Response is stale", 111 squid/3.1.16 
"Revalidation failed"

X-Cache: HIT from proxytest
X-Cache-Lookup: HIT from proxytest:3128
Via: 1.0 proxytest (squid/3.1.16)
Connection: keep-alive



I've been checking up. It appears the Warning bug is not fixed 
yet. It is wrong and can be ignored.


If you can, fix that Date up though.

Amos
Cool, thanks Amos! I'll try to fix the Date thingy on the jetty side.
I'll see if that makes a difference :)


Amos, can you give me an example of what a valid IMS request with a valid
304 response should look like? We changed the jetty header and tried
again, but still - revalidation failed.


Current Test:

Client to proxy:

GET http://bla.bla.bla:8080/3b8c257a/lib/commons-logging-1.1.1.jar 
HTTP/1.1

Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cache-Control: no-cache



Proxy to origin:

GET /3b8c257a/lib/commons-logging-1.1.1.jar HTTP/1.1
If-Modified-Since: Thu, 01 Dec 2011 10:25:55 GMT
Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Via: 1.1 proxytest (squid/3.1.16)
Cache-Control: no-cache
Connection: keep-alive


Response:

HTTP/1.1 304 Not Modified
Date: Thu, 01 Dec 2011 11:05:59 GMT
Server: Jetty(6.1.16)

I've read something like "if you send Cache-Control with the GET
request, you have to send back the same header with the 304
response" - but couldn't confirm that reading the RFCs... Any idea?
What does squid need as a response?




What that is about is that *if* the 304 contains one of the valid 
extra headers like Cache-Control, the Squid response is supposed to 
be updated to contain the new information. Those are optional extras 
which for now you are not using.
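
For reference, a minimal valid 304 of the kind Squid wants would be just:

  HTTP/1.1 304 Not Modified
  Date: Thu, 01 Dec 2011 11:05:59 GMT

plus, optionally, extra headers such as Cache-Control or ETag, whose new
values would then replace the ones stored with the cached object.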




Proxy to client:

HTTP/1.0 200 OK
Content-Type: application/java-archive
Content-Length: 66245
Last-Modified: Thu, 01 Dec 2011 10:25:55 GMT
Date: Thu, 01 Dec 2011 11:06:04 GMT
Server: Jetty(6.1.16)
Warning: 110 squid/3.1.16 "Response is stale", 111 squid/3.1.16 
"Revalidation failed"

X-Cache: HIT from proxytest
X-Cache-Lookup: HIT from proxytest:3128
Via: 1.0 proxytest (squid/3.1.16)
Connection: keep-alive


and again in the log:

1322737561.457  9 192.168.100.215 TCP_REFRESH_UNMODIFIED/200 
66658 GET 
http://bla.bla.bla:8080/3b8c257a/lib/commons-logging-1.1.1.jar -  
DIRECT/192.168.100.170 application/java-archive




That is a success. The client is requesting a whole new copy, so
Squid is required to send a 200 with the new copy, but it is optimizing
the backend check by using a 304.


The warning is a minor bug; Squid is adding it based on the
before-validation information about the request.


I was expecting Squid to update the Last-Modified date to the client 
to say "Thu, 01 Dec 2011 11:05:59 GMT" from the 304, but will have to 
double-check the code and RFCs about that.


Amos
I'm confused now... I guess I have a big understanding problem. Just 
to make sure: The "Da

Re: [squid-users] Filters in terminal but not in Browser

2011-12-01 Thread Paul Crown
On 11/30/2011 06:41 PM, Amos Jeffries wrote:
> On Wed, 30 Nov 2011 17:07:54 -0600, Paul Crown wrote:
>> Greetings,
>>
>> I feel I am missing something simple.  I have installed squid3 on
>> Ubuntu.  I added
>>
>> acl allow_domains dstdomain "/etc/squid3/always_direct.acl"
>> always_direct allow allow_domains
>>
>> acl denied_domains dstdomain "/etc/squid3/denied_domains.acl"
>> http_access deny denied_domains
>>
>> and populated both files accordingly, and restarted squid3.
>>
>> Now from a terminal, curl good-url and it works.  curl bad-url and it
>> gives me the blocked message.
>>
>> Try it in firefox, and good-url and bad-url both work fine.  Neither is
>> blocked.
>>
>> What did I forget?
>>
>> Thanks.
>>
>> Paul
> 
> What you are missing is two details:
> 
> Firstly, http_access and always_direct are completely unrelated controls.
>  - http_access determines whether Squid is allowed to service the request.
>  - always_direct determines whether Squid MUST (versus MAY) service the
> request using DNS lookups and going directly to the public origin
> server(s).
> 
> Also, you are missing minor details about the URL being tested, i.e.
> - whether the browser is automatically adding "www." in front of the
> domain, or not
> - whether curl is setting the HTTP/1.1 Host: header correctly, or not
> - whether the browser and terminal tools were run on the same machine, or
> not
> - whether you have any other access controls affecting the requests (i.e.
> a browser type ACL allowing Mozilla/* agents through before these controls)
> 
> Amos
> 

Thanks Amos.

That makes sense.

I got the browser working by configuring proxy settings in the browser
to port 3128.

I was trying to do transparent interception without changing the browser
(otherwise some employees are going to change it back). So, I am still
showing my lack of understanding regarding transparent http access.
Must I also redirect port 80 to 3128, such as with iptables, to not have
to config the browser?

Paul

For ref:

squid3 3.0.STABLE19-1
Ubuntu 10.04.2 LTS 64-bit

/etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl denied_domains dstdomain "/etc/squid3/denied_domains.acl"
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny denied_domains
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid3/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
error_directory /var/www/squid3
acl data_urls dstdomain "/etc/squid3/always_direct.acl"
always_direct allow data_urls
always_direct deny all
coredump_dir /var/spool/squid3

/etc/squid3/always_direct.acl
.amazonaws.com
.google.com

/etc/squid3/denied_domains.acl
.evony.com
.myspace.com
.pogo.com
.facebook.com
.twitter.com
.zynga.com



Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread David Touzeau
On Thursday 1 December 2011 at 09:58 +0100, David Touzeau wrote:
> On Wednesday 30 November 2011 at 11:14 +1300, Amos Jeffries wrote:
> > On Tue, 29 Nov 2011 22:48:39 +0100, David Touzeau wrote:
> > > Dear
> > >
> > > I'm trying to make  Squid Cache: Version 3.2.0.13-2027-r11436
> on
> > > transparent mode
> > >
> > > But squid refuse to access to some websites
> > > for example google.* is ok
> > >
> > > but microsoft is impossible.
> > >
> > > How to fix this issue ?
> > 
> >  Track down the client software which is producing the requests.
> > 
> > >
> > > On event :
> > >
> > 
> > 
> >  ... missing log line...
> > 
> > > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: By user
> agent:
> > > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > > 3.0.4506.2152; .NET CLR 3.5.30729)
> > > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: on URL:
> > > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> > 
> >  ... missing log line...
> > 
> > > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: By user
> agent:
> > > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > > 3.0.4506.2152; .NET CLR 3.5.30729)
> > > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: on URL:
> > > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> > 
> > 
> >  Which brings us back to the question of where the key log line has 
> >  disappeared to.
> > 
> >  The log line which says "Host header forgery from $C ($A does not
> match 
> >  $B)"
> > 
> >  What those $ values are is important to how to fix it. $C is the 
> >  connection details needed to isolate the machine to investigate. $A
> and 
> >  $B the details which it is getting wrong.
> > 
> >  Amos
> > 
> 
> 
> I have made other tests.
> 
> Here is the dump.
> 
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/42/72A83D0D39814D13CA15F184E71D2.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/F4/9DC6A31D2F48971E8CF184EAF3ACFF.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/B5/2BC4D612CC1DB446582EB29AD4FF0.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/B3/F358459610F7EE4285351371CB3A.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb01.s-msn.com/i/4B/9571894AD3B49F1AFBDFB6A0AB929.gif
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/98/FD8C6B5E35BB28EE6D5D7CAA46C48.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/FF/976AED20082B54679EAB83F1C3.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3.0.4506.2152; .NET CLR 3.5.30729)
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> http://db2.stb00.s-msn.com/i/48/B6F62B8F241454CD698D3CE9DB625.jpg
> Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> 3

Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread David Touzeau
On Friday 2 December 2011 at 01:12 +1300, Amos Jeffries wrote:
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13
> Date: Fri, 02 Dec 2011 01:12:40 +1300 (01/12/2011 13:12:40)
> 
> 
> On 1/12/2011 9:58 p.m., David Touzeau wrote:
> > On Wednesday 30 November 2011 at 11:14 +1300, Amos Jeffries wrote:
> >> On Tue, 29 Nov 2011 22:48:39 +0100, David Touzeau wrote:
> >>> Dear
> >>>
> >>> I'm trying to make  Squid Cache: Version 3.2.0.13-2027-r11436
> on
> >>> transparent mode
> >>>
> >>> But squid refuse to access to some websites
> >>> for example google.* is ok
> >>>
> >>> but microsoft is impossible.
> >>>
> >>> How to fix this issue ?
> >>   Track down the client software which is producing the requests.
> >>
> >>> On event :
> >>>
> >>
> >>   ... missing log line...
> >>
> >>> Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: By user
> agent:
> >>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> >>> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> >>> 3.0.4506.2152; .NET CLR 3.5.30729)
> >>> Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: on URL:
> >>> http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> >>   ... missing log line...
> >>
> >>> Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: By user
> agent:
> >>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> >>> InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> >>> 3.0.4506.2152; .NET CLR 3.5.30729)
> >>> Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: on URL:
> >>> http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> >>
> >>   Which brings us back to the question of where the key log line
> has
> >>   disappeared to.
> >>
> >>   The log line which says "Host header forgery from $C ($A does not
> match
> >>   $B)"
> >>
> >>   What those $ values are is important to how to fix it. $C is the
> >>   connection details needed to isolate the machine to investigate.
> $A and
> >>   $B the details which it is getting wrong.
> >>
> >>   Amos
> >>
> >
> > I have made other tests.
> >
> > Here is the dump.
> >
> > Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> > Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
> > http://db2.stb00.s-msn.com/i/42/72A83D0D39814D13CA15F184E71D2.jpg
> > Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> 
> Hmm, same as the last lot. Let's take another approach.
> 
> Start with checking the actual cache.log (usually
> /var/logs/squid/cache.log or /var/log/squid/cache.log). syslog is only
> a copy, and an unreliable one it appears.
> 
> If you don't have a cache.log you will need to configure one to be
> written.
> 
> If you are still getting useless data out of the cache.log you can try
> setting "debug_options 11,2" for a short period. This dumps the entire
> HTTP headers in both directions coming AND going from Squid. Which can
> be a lot of data if you have a high level of traffic. What we look for
> in that load is the "HTTP Client Request" and TCP details with the same
> URL and User-Agent that are showing up in your alerts.
> 
> Amos 


Here is the log in debug mode:

--
2011/12/01 17:49:14.106 kid1| HTTP Client local=4.26.235.254:80
remote=192.168.1.228:1074 FD 30 flags=33
2011/12/01 17:49:14.106 kid1| HTTP Client REQUEST:
-
GET /v9/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?1112011649 
HTTP/1.1
Accept: */*
User-Agent: Windows-Update-Agent
Host: download.windowsupdate.com
Connection: Keep-Alive


--
2011/12/01 17:49:14.106 kid1| HTTP Client local=4.26.235.254:80
remote=192.168.1.228:1074 FD 30 flags=33
2011/12/01 17:49:14.106 kid1| HTTP Client REPLY:
-
HTTP/1.1 409 Conflict
Server: squid/3.2.0.13-2027-r11436
Mime-Version: 1.0
Date: Thu, 01 Dec 2011 16:49:14 GMT
Content-Type: text/html
Content-Length: 4184
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from proxyweb
X-Cache-Lookup: NONE from proxyweb:3129
Via: 1.1 proxyweb (squid/3.2.0.13-2027-r11436)
Connection: keep-alive


--
2011/12/01 17:49:14.128 kid2| HTTP Client local=4.26.235.254:80
remote=192.168.1.228:1075 FD 33 flags=33
2011/12/01 17:49:14.128 kid2| HTTP Client REQUEST:
-
HEAD /v9/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?1112011649 
HTTP/1.1
Accept: */*
User-Agent: Windows-Update-Agent
Host: download.windowsupdate.com
Connection: Keep-Alive


--
2011/12/01 17:49:14

[squid-users] ACLs - making up a multiple match requirement. (AND like)

2011-12-01 Thread Greg Whynott


looking for guidance on creating delay pools, something I've never done
before, and because it's a production system I'd like to minimize my
down time, or the amount of time I'd be here if I have to come in on the
weekend to do it.



the intent is to limit bandwidth to a list of external networks, either
by IP or URL regex, to 1000kb/sec for the entire studio during work
hours, _except_ for a list/group of excluded hosts inside, which will
have unrestricted access to the same external hosts.


i'm attempting to limit youtube bandwidth during work hours for a
particular inside network, whilst the other inside networks have full
bandwidth, with squid. At the same time, the 'limited' network has
full bandwidth to other non-youtube sites. It appears i'd need some
sort of AND logic (if src IP is youtube and dest is LAN-A then..).



I achieved this on the router using limiters/queues, but it appears
this won't work going forward with the new 'exclusion' requirement
management has asked me to implement. The source or destination
always appears to be the squid server itself from the internet router's
perspective, which is why i'm considering squid now.



I looked around the documents and how-tos but they all seem to use ACLs 
which reference a set value,  without exclusions.


in my perfect world,  it would look something like this..(i know this 
syntax probably doesn't exist.. just an example of how i think it would 
look if it did..)


acl youtubelimit  dstdomain .youtube.com
acl networkA youtubelimit
acl networkB !youtubelimit

where youtubelimit would be a delay pool, I guess...


I guess the short question would be: is there a method to set up acls
with multiple criteria (an AND-like ACL)?

eg:
if src ip = 74.200.40.20 and dst ip = 192.168.1.4 then use limiter.
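
For what it's worth, ACLs listed on a single access line are ANDed
together, so the rough idea can be sketched like this (names, addresses
and rates are illustrative only):

  acl youtube dstdomain .youtube.com
  acl lan_a src 192.168.1.0/24
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 125000/125000    # ~1000kbit/s aggregate
  delay_access 1 allow youtube lan_a  # both must match
  delay_access 1 deny all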






Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Matus UHLAR - fantomas

On 01.12.11 15:05, Josef Karliak wrote:
 I wanna use tmpfs for the squid cache - is 8GB enough or too big? We've
about 3000 computers behind squid; 16GB is sufficient for the OS, which
is why I used 8GB for the squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only memory 
cache would be much more efficient (unless you are running 32-bit 
squid).

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/


[squid-users] Squid (using External ACL) problem with Icap

2011-12-01 Thread Roberto Galluzzi
Hi,

I'm using Squid 3.1 and SquidGuard with success. Now I want to add SquidClamav 
6.

Versions 6.x need Icap, and I didn't have any problems installing it.

In my Squid configuration I use an external ACL to get the username from a
script, but with Icap enabled I can't surf because the user is empty (in
access.log). However, in my script's log I can see that Squid is still calling it.

If I use simple authentication (auth_param basic ...) I get the user and everything works.

Nevertheless I MUST use the external ACL, so I need help in this context.

This is part of my configuration:

squid.conf
-
(...)
external_acl_type  children=15 ttl=7200 negative_ttl=60 %SRC %SRC 
 
(...)
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 
icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all
(...)
-

If you need other info, ask me without problem.

Thank you

Roberto



Re: [squid-users] Infinite loop when sending request to HTTPS reverse proxy

2011-12-01 Thread michael...@gmx.net
Hi Amos,

thanks for the quick response - it was of great help! When I avoid reading a 
cache digest HTTP reply by adding "no-digest" to the cache_peer line, 
everything works fine.

To quickly repeat our setup: we have squid as a reverse accelerator proxy,
talking HTTPS on both ends. The client contacts the proxy via HTTPS, and
the proxy shall talk to the application server via HTTPS.
 
OS is RHEL6, Squid is 3.1.10. We installed the regular squid.x86_64
package, and used ldd to make sure it's got the SSL libs linked in.
 
Our working squid.conf now is this:
--
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
cache_dir aufs /var/cache/squid 4096 256 256
cache_mem 2048 MB
cache_store_log none
cache_peer (our-app-server-fqdn) parent 9443 0 no-query originserver no-digest 
name=httpsAccel login=PROXY ssl sslflags=DONT_VERIFY_PEER
cache_peer_access httpsAccel allow all
coredump_dir /var/log/squid
http_access allow all
#http_port  accel vhost
https_port (our-proxy-srv-fqdn):9443 accel cert=/etc/squid/server.pem 
key=/etc/squid/privkey.pem vhost
refresh_pattern .  0   20% 4320
cachemgr_passwd disable all
maximum_object_size 1024 MB
maximum_object_size_in_memory 16 MB
buffered_logs on
visible_hostname (our-proxy-srv-fqdn)
--

Still I wonder if you are right and there is a bug, too. We don't need the 
cache digest currently, but as soon as I omit "no-digest" in the config, squid 
loops forever. I just tested it again...

Anyway, thanks for your help!

Michael

Am 01.12.2011 um 02:11 schrieb Amos Jeffries:

> On Thu, 1 Dec 2011 00:02:09 +0100, michael...@gmx.net wrote:
>> We would like to set up squid as a reverse accelerator proxy, to talk
>> HTTPS on both ends. The client shall contact the proxy via HTTPS, and
>> the proxy shall talk to the application server via HTTPS.
>> 
>> We got it all set up, but for some reason, our squid goes into an
>> infinite loop as soon the client browser sends it first request after
>> accepting the Proxy's SSL-certificate. The client session just hangs.
>> The cache.log fills with the repeating sequence pasted below. Maybe
>> the solution is obvious, but it seems too obvious for us. So - any
>> hint or help would be greatly appreciated. What can we do to further
>> dig into the root cause? Should we just compile squid from scratch?
>> 
>> Our OS is RHEL6, Squid is 3.1.10. We installed the regular
>> squid.x86_64 package, and used ldd to make sure it's got the SSL libs
>> linked in.
>> 
>> Our squid.conf is this:
>> --
>> cache_replacement_policy heap GDSF
>> memory_replacement_policy heap GDSF
>> cache_dir aufs /var/cache/squid 4096 256 256
>> cache_mem 2048 MB
>> cache_store_log none
>> cache_peer (our-app-server) parent 9443 0 no-query originserver
>> name=httpsAccel login=PROXYPASS ssl sslflags=DONT_VERIFY_PEER
> 
> The value in "(our-app-server)" is important. If it is a domain name whose IP 
> points at Squid ... oops. Use an IP address if you can, or an FQDN host name 
> which resolves only to the origin IPs for Squid to use.
> 
> "PROXYPASS" does strange things with WWW-Auth headers. In your case they are 
> coming in correctly as www-auth headers and you should use login=PASS or 
> nothing at all.
> 
> 
>> cache_peer_access httpsAccel allow all
>> coredump_dir /var/log/squid
>> http_access allow all
>> #http_port  accel vhost
>> https_port 9445 cert=/etc/squid/server.pem key=/etc/squid/privkey.pem
>> accel vhost
> 
> To avoid problems upgrading in future you should put "accel" first among the 
> options (right after the port) nowadays.
> 
> Note that with no IP address this is a wildcard port accepting all 3+ IP 
> addresses the box has. It needs a wildcard certificate to cope with the 
> multiple addresses.
> 
> The use of ports 9445 on http_port and 9443 on cache_peer is important. No 
> problems when they are the same, but when different you require one of two 
> things:
> 1) the backend needs to be capable of accepting port 9445 URLs through its 
> port 9443.
> or
> 2) squid http_port needs to contain vport=9443 to re-write the port number 
> for the backend to get its expected URL.
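> 
> A sketch of option 2, reusing the cert paths from your config:
> 
>   https_port 9445 accel vport=9443 cert=/etc/squid/server.pem key=/etc/squid/privkey.pem vhost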
> 
> 
>> refresh_pattern .  0   20% 4320
>> cachemgr_passwd disable all
>> maximum_object_size 1024 MB
>> maximum_object_size_in_memory 16 MB
>> buffered_logs on
>> visible_hostname (our-proxy-hostname)
>> --
>> 
>> The cache.log is repeating itself with this sequence. I am asking
>> myself, what the heck is it doing here?
> 
> Reading a cache digest HTTP reply from a cache_peer.
> 
> You should be able to avoid it by adding "no-digest" to the cache_peer line.
> 
> It does seem to be a bug though.
> 
> Amos
> 
>> --
>> 2011/11/30 15:40:00.000| entering storeClientCopyEvent(0x7f6aa5b2ede8*?)
>> 2011/11/30 15:40:00.000| AsyncCall

Re: [squid-users] Unable to access IIS site through squid3

2011-12-01 Thread Fredrik Eriksson

On 12/01/2011 01:13 PM, Amos Jeffries wrote:


Ah sorry. In short, I think it's a kernel bug in the TCP / IP support.


This seems to be a rather persistent kernel bug, if so.

Since there are FD leaks in the debian stable (squeeze/6.0) packaged
version of squid3 (3.1.6-1.2+squeeze1), we pull the squid3 package from
testing (wheezy/7.0). Therefore the testing repo is already added to
our squid servers, so I installed linux from testing as well (linux
version 3.1.0-1-amd64).

I tried both with IPv6 enabled and disabled, which you do by adding
this line to /etc/sysctl.d/disableipv6.conf

  net.ipv6.conf.all.disable_ipv6=1

neither case worked. Are the kernel developers aware of this bug you
mention, and is it solved in an even later version of linux?

 

I hate to say this, but if all else fails you will probably need to
--disable-ipv6 in Squid to get back to the IPv4-only behaviour Squid-2
had. That won't exactly solve the problem, but should avoid it.
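
A sketch of that rebuild (all other configure options omitted):

  ./configure --disable-ipv6
  make && make install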


This is something we would rather not have to do because of a single
site (out of several thousand requests per minute from our users)
supposedly triggering a kernel bug. Even though this is an important
site for our sales department, our environment is hard to maintain as
it is, and we'd rather stick to the software pre-packaged by the debian
project as far as we can.


Can I provide you with any other information, not yet given?


Regards
--
Fredrik


[squid-users] I need to block a connection in the squid.

2011-12-01 Thread Rolando Cañer Roblejo

Hi all,

I use sqstat to see active connections in squid. How can I stop a
connection in squid without restarting the service - for example, to
stop a large file download?


Thanks.


Re: [squid-users] Squid losing connectivity for 30 seconds

2011-12-01 Thread Elie Merhej





 Hi,

I am currently facing a problem that I wasn't able to find a
solution for in the mailing list or on the internet.
My squid is dying for 30 seconds every hour at the same exact
time; the squid process will still be running.
I lose my wccp connectivity, the cache peers detect the squid as
a dead sibling, and the squid cannot serve any requests.
The network connectivity of the server is not affected (a ping to
the squid's ip doesn't time out).


The problem doesn't start immediately after squid is
installed on the server (the server is dedicated to squid).

It starts when the cache directories start to fill up.
I started my setup with 10 cache directories; the squid will
start having the problem when the cache directories are above
50% filled.
When I change the number of cache directories (9, 8, ...) the
squid works for a while, then the same problem recurs.

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage.
Finally I created two cache directories (one on each HDD) but
the problem persisted.


You have 2 HDD?  but, but, you have 10 cache_dir.
 We repeatedly say "one cache_dir per disk" or similar. In
particular, one cache_dir per physical drive spindle (for "disks"
made up of multiple physical spindles) wherever possible, with
physical drives/spindles mounted separately to ensure the
pairing. Squid performs a very unusual pattern of disk I/O which
stresses them down to the hardware controller level and makes this
kind of detail critical for anything like good speed. Avoiding
cache_dir object limitations by adding more UFS-based dirs to one
disk does not improve the situation.


That is a problem which will be affecting your Squid all the time 
though, possibly making the source of the pause worse.


From the description I believe it is garbage collection on the
cache directories. The pauses can be visible when garbage
collecting any caches over a few dozen GB. The squid default
"swap_high" and "swap_low" values are "5" apart, with the minimum
being 0 apart. These are whole % points of the total
cache size, being erased from disk in a somewhat random-access
style across the cache area. I did mention uncommon disk I/O
patterns, right?


To be sure what it is, you can attach the "strace" tool to the squid 
worker process (the second PID in current stable Squids) and see 
what is running. But given the hourly regularity and past 
experience with others on similar cache sizes, I'm almost certain 
it's the garbage collection.
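As a sketch (the PID is a placeholder; use your worker's actual PID), 
attaching with the -c summary option during a pause and pressing 
Ctrl-C afterwards prints a per-syscall time table:

strace -c -p 12345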


Amos



Hi Amos,

Thank you for your fast reply,
I have 2 HDD (450GB and 600GB)
df -h displays that I have 357GB and 505GB available
In my last test, my cache settings were:
cache_swap_low 90
cache_swap_high 95


This is not a good gap. For anything more than 10-20 GB I recommend 
setting them no more than 1 apart, possibly to the same value if 
that works.
Squid has a light but CPU-intensive and possibly long garbage 
removal cycle above cache_swap_low, and a much more aggressive but 
faster and less CPU-intensive removal above cache_swap_high. On 
large caches it is better in terms of downtime going straight to 
the aggressive removal and clearing disk space fast, despite the 
bandwidth cost of replacing any items the light removal would have left.
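Concretely, on a cache this size that recommendation amounts to 
something like:

cache_swap_low 90
cache_swap_high 90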



Amos


Hi Amos,

I have changed cache_swap_high to 90 and cache_swap_low to 90 with two 
cache dirs (one for each HDD); I still have the same problem.

I did an strace (when the problem occurred):
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------
 23.06    0.004769           0   8568196           write
 21.07    0.004359           0     24658         5 futex
 19.34    0.004001         800         5           open
  6.54    0.001352           0      5101      5101 connect
  6.46    0.001337           3       491           epoll_wait
  5.34    0.001104           0     51938      9453 read
  3.90    0.000806           0     39727           close
  3.54    0.000733           0     86400           epoll_ctl
  3.54    0.000732           0     32357           sendto
  2.02    0.000417           0     56721           recvmsg
  1.84    0.000381           0     24064           socket
  0.96    0.000199           0     56264           fcntl
  0.77    0.000159           0      6366       329 accept
  0.53    0.000109           0     24033           bind
  0.52    0.000108           0     30085           getsockname
  0.21    0.000044           0     11200           stat
  0.21    0.000044           0      6998       359 recvfrom
  0.09    0.000019           0      5085           getsockopt
  0.06    0.000012           0      2887           lseek

[squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Josef Karliak

  Hi there,
  I want to use tmpfs for the squid cache. Is 8GB enough or too big? We've  
about 3000 computers behind squid; 16GB is sufficient for the OS, which  
is why I used 8GB for the squid tmpfs.

  Thanks for answers.
  J.K.

--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)  
policy and checks. If you have problems sending email to me, start  
using the email origin verification methods mentioned above. Thank you.








Re: [squid-users] TCP_REFRESH_UNMODIFIED instead of TCP_IMS_HIT? "Revalidation failed"

2011-12-01 Thread Amos Jeffries

On 2/12/2011 1:12 a.m., David Wojak wrote:

On 11/30/2011 01:48 PM, David Wojak wrote:





Client to Server (via proxy):

GET http://bla.bla.bla:8080/afdc3604/lib/commons-logging-1.1.1.jar 
HTTP/1.1

Host: tlptest.m2n.at:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0
Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cache-Control: no-cache


Proxy to origin server:

GET /afdc3604/lib/commons-logging-1.1.1.jar HTTP/1.1
If-Modified-Since: Tue, 29 Nov 2011 12:21:04 GMT
Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0
Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Via: 1.1 proxytest (squid/3.1.16)
Cache-Control: max-age=540
Connection: keep-alive


Origin Server to Proxy:

HTTP/1.1 304 Not Modified
Server: Jetty(6.1.16)


There is no Date header here, which may be a problem as it makes the 
304 invalid, but Squid assumes "now" and sends that to the client...
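For illustration (a sketch, not taken from this capture), a minimally 
valid 304 would carry at least a Date header:

HTTP/1.1 304 Not Modified
Date: Wed, 30 Nov 2011 08:27:16 GMT
Server: Jetty(6.1.16)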


Well, thought of that too when I read the 304 specification... well, 
I'll try that!





Proxy to client:

HTTP/1.0 200 OK
Date: Wed, 30 Nov 2011 08:27:16 GMT
Content-Type: application/java-archive
Content-Length: 66245
Last-Modified: Tue, 29 Nov 2011 12:21:04 GMT
Server: Jetty(6.1.16)
Warning: 110 squid/3.1.16 "Response is stale", 111 squid/3.1.16 
"Revalidation failed"

X-Cache: HIT from proxytest
X-Cache-Lookup: HIT from proxytest:3128
Via: 1.0 proxytest (squid/3.1.16)
Connection: keep-alive



I've been checking up. It appears the Warning bug is not fixed yet. 
It is wrong and can be ignored.


If you can, fix that Date up though.

Amos
Cool, thanks Amos! I'll try to fix the Date thing on the jetty side. 
I'll see if that makes a difference :)


Amos, can you give me an example of what a valid IMS request with a 
valid 304 response should look like? We changed the jetty header and 
tried again, but revalidation still failed.


Current Test:

Client to proxy:

GET http://bla.bla.bla:8080/3b8c257a/lib/commons-logging-1.1.1.jar 
HTTP/1.1

Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Proxy-Connection: keep-alive
Cache-Control: no-cache



Proxy to origin:

GET /3b8c257a/lib/commons-logging-1.1.1.jar HTTP/1.1
If-Modified-Since: Thu, 01 Dec 2011 10:25:55 GMT
Host: bla.bla.bla:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20100101 
Firefox/8.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Via: 1.1 proxytest (squid/3.1.16)
Cache-Control: no-cache
Connection: keep-alive


Response:

HTTP/1.1 304 Not Modified
Date: Thu, 01 Dec 2011 11:05:59 GMT
Server: Jetty(6.1.16)

I've read something along the lines of "if you send Cache-Control with 
the GET request, you have to send back the same header with the 304 
response", but couldn't confirm that reading the RFCs... Any idea? What 
does squid need as a response?




What that is about is that *if* the 304 contains one of the valid extra 
headers like Cache-Control, the Squid response is supposed to be updated 
to contain the new information. Those are optional extras which for now 
you are not using.




Proxy to client:

HTTP/1.0 200 OK
Content-Type: application/java-archive
Content-Length: 66245
Last-Modified: Thu, 01 Dec 2011 10:25:55 GMT
Date: Thu, 01 Dec 2011 11:06:04 GMT
Server: Jetty(6.1.16)
Warning: 110 squid/3.1.16 "Response is stale", 111 squid/3.1.16 
"Revalidation failed"

X-Cache: HIT from proxytest
X-Cache-Lookup: HIT from proxytest:3128
Via: 1.0 proxytest (squid/3.1.16)
Connection: keep-alive


and again in the log:

1322737561.457  9 192.168.100.215 TCP_REFRESH_UNMODIFIED/200 66658 
GET http://bla.bla.bla:8080/3b8c257a/lib/commons-logging-1.1.1.jar -  
DIRECT/192.168.100.170 application/java-archive




That is a success. The client is requesting a whole new copy, so Squid 
is required to send a 200 with the new copy, but it is optimizing the 
backend check by using a 304.


The warning is a minor bug, Squid is adding it based on the 
before-validation information about the request.


I was expecting Squid to update the Last-Modified date to the client to 
say "Thu, 01 Dec 2011 11:05:59 GMT" from the 304, but will have to 
double-check the code and RFCs about that.


Amos


Re: [squid-users] CacheHierarchy - load balance, failover

2011-12-01 Thread Amos Jeffries

On 1/12/2011 10:49 p.m., Chia Wei LEE wrote:

Hi All

I have linked squid servers into a cache hierarchy: one child and two
parents.

below is the Child config
cache_peer p1.example.com parent 3128 3130 weighted-round-robin
cache_peer p2.example.com parent 3128 3130 weighted-round-robin

Here is my question:
if, let's say, p1.example.com goes down, will all the traffic go through
p2.example.com? If not, is there any solution to avoid service downtime?


Maybe. What do your always_direct and never_direct and prefer_direct 
configuration say about it?
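For example, a sketch that forces all traffic through the parents (so 
that failover between them applies) would be:

never_direct allow all
prefer_direct off

With never_direct in force, Squid should route each request to 
whichever parent is currently alive, so the surviving parent takes all 
the traffic when the other dies.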


Amos


Re: [squid-users] How to set the IP of the real originator in HTTP requests (instead of Squid's IP)?

2011-12-01 Thread Amos Jeffries

On 1/12/2011 11:26 p.m., Leonardo wrote:

Thanks LR and AJ for your answers.

As far as I understand, I can use the tcp_outgoing_address directive
to explicitly specify a different outgoing address for each client
subnet.
However, what if I would like the Cisco ASA to directly see the
private IP address of the requesting client (kind of having a
super-transparent Squid)?  I understand this may require some hack, as
at the network layer it *is* the Squid which handles the HTTP
connections, but would it be possible?

Best regards,

L.


Squid supports transparent proxy (not the NAT interception that people 
often call by the same name).

http://wiki.squid-cache.org/Features/Tproxy4
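On the Squid side this is a one-line sketch (the Linux policy-routing 
and iptables TPROXY rules described on that wiki page are also 
required; the port number is only an example):

http_port 3129 tproxy

With TPROXY the outgoing connection is made from the client's own IP 
address, so the ASA would see the original private source IP.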

Amos


Re: [squid-users] Can't make Squid 3.2 work as Interception proxy

2011-12-01 Thread Amos Jeffries

On 1/12/2011 11:59 p.m., Nguyen Hai Nam wrote:

On 12/1/2011 5:55 AM, Amos Jeffries wrote:


Yes that is packets successfully arriving at squid and the HTTP request 
being processed fine. The "intercept" flag tells squid to accept 
origin server formatted (partial) URLs. Its absence tells Squid to 
accept proxy formatted (absolute) URLs.


The problem is that IPF-transparent NAT lookup with ioctl() is not 
working correctly. If you can find for me any kind of documentation 
on how non-kernel software like squid can do NAT table lookups in 
your OS I can probably fix that for you.


Amos

Hi Amos,

I discovered that I was missing the IPFilter header files on the OI 
box, which is why Squid did nothing in intercept mode. After installing 
the IPFilter headers and recompiling Squid 3.2, it works like a champ.


Thanks and best regards,
~ Neddie.


Sweet. And thanks for the confirmation that IPF works :)

Amos


Re: [squid-users] Unable to access IIS site through squid3

2011-12-01 Thread Amos Jeffries

On 1/12/2011 4:45 a.m., Fredrik Eriksson wrote:

On 11/30/2011 02:06 AM, Amos Jeffries wrote:


  Data packet from Squid->Server. 1085 bytes. Well under both 1160 and
  1460 sizes, even with TCP packet bits added.

  However the packet checksum is incorrect.

  This is a problem in the kernel code somewhere. Given that it works on
  the same box with older Squid it is likely something to do with the
  IPv4/IPv6 v4-mapping features of the kernel. Squid-3.1 prefers to use
  "v4-mapped" IPv6 sockets and let the kernel swap the TCP stacks around
  depending on the IP address type connected to.


I'm not entirely sure what you said here, but I tried to pursue the 
issue further on the basis that this has something to do with the 
kernel and perhaps IPv6.


Ah sorry.  In short I think it's a kernel bug in the TCP / IP support.



On the old squid2 server, running debian etch, we have linux 
2.6.18-6-686. To make sure, I installed squid 2.7.STABLE9-2.1 on the 
squid3 server, changed the http_port, to be able to run both at the 
same time, and verified that we could indeed access www.usitc.gov from 
the same host running a 2.x version of squid.


After that I have tried both changing linux from 2.6.32-5-amd64 (from 
debian stable) to 2.6.38-bpo.2-amd64 (from squeeze-backports) and 
completely disabling IPv6 on the host. We don't use IPv6 and have no 
IPv6 connections to the outside world, but the interfaces did, of 
course, have link-local addresses; however, disabling IPv6 did not 
make any difference to the state of not being able to access the site.


Is there any other information I could provide you with? To my 
knowledge, this is the only site we are having these problems with.




I hate to say this, but if all else fails you will probably need to 
rebuild Squid with --disable-ipv6 to get back to the IPv4-only 
behaviour Squid-2 had. That won't exactly solve the problem, but it 
should avoid it.
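A sketch of such a rebuild from source (keep whatever other configure 
options your current build already uses; the bracketed part is a 
placeholder):

./configure --disable-ipv6 [...your existing options...]
make && make install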


Amos


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread Amos Jeffries

On 1/12/2011 9:58 p.m., David Touzeau wrote:

On Wednesday 30 November 2011 at 11:14 +1300, Amos Jeffries wrote:

On Tue, 29 Nov 2011 22:48:39 +0100, David Touzeau wrote:

Dear

I'm trying to run Squid Cache: Version 3.2.0.13-2027-r11436 in
transparent mode

But squid refuses access to some websites
for example google.* is ok

but microsoft is impossible.

How do I fix this issue?

  Track down the client software which is producing the requests.


On event :



  ... missing log line...


Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: on URL:
http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome

  ... missing log line...


Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: on URL:
http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome


  Which brings us back to the question of where the key log line has
  disappeared to.

  The log line which says "Host header forgery from $C ($A does not match
  $B)"

  What those $ values are is important to how to fix it. $C is the
  connection details needed to isolate the machine to investigate. $A and
  $B the details which it is getting wrong.

  Amos



I have made other tests

Here is the dump.

Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/42/72A83D0D39814D13CA15F184E71D2.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)


Hmm, same as the last lot. Let's take another approach.

Start by checking the actual cache.log (usually 
/var/logs/squid/cache.log or /var/log/squid/cache.log). syslog is only 
a copy, and an unreliable one it appears.


If you don't have a cache.log you will need to configure one to be written.

If you are still getting useless data out of the cache.log you can try 
setting "debug_options 11,2" for a short period. This dumps the entire 
HTTP headers in both directions, coming AND going from Squid, which 
can be a lot of data if you have a high level of traffic. What we look 
for in that load is the "HTTP Client Request" and TCP details with the 
same URL and User-Agent that are showing up in your alerts.
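A sketch of the temporary squid.conf change (section 11 is the HTTP 
traffic trace; remember to revert it afterwards):

debug_options ALL,1 11,2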


Amos


Re: [squid-users] Can't make Squid 3.2 work as Interception proxy

2011-12-01 Thread Nguyen Hai Nam

On 12/1/2011 5:55 AM, Amos Jeffries wrote:


Yes that is packets successfully arriving at squid and the HTTP request 
being processed fine. The "intercept" flag tells squid to accept 
origin server formatted (partial) URLs. Its absence tells Squid to 
accept proxy formatted (absolute) URLs.


The problem is that IPF-transparent NAT lookup with ioctl() is not 
working correctly. If you can find for me any kind of documentation on 
how non-kernel software like squid can do NAT table lookups in your OS 
I can probably fix that for you.


Amos

Hi Amos,

I discovered that I was missing the IPFilter header files on the OI 
box, which is why Squid did nothing in intercept mode. After installing 
the IPFilter headers and recompiling Squid 3.2, it works like a champ.


Thanks and best regards,
~ Neddie.


Re: [squid-users] How to set the IP of the real originator in HTTP requests (instead of Squid's IP)?

2011-12-01 Thread Leonardo
Thanks LR and AJ for your answers.

As far as I understand, I can use the tcp_outgoing_address directive
to explicitly specify a different outgoing address for each client
subnet.
However, what if I would like the Cisco ASA to directly see the
private IP address of the requesting client (kind of having a
super-transparent Squid)?  I understand this may require some hack, as
at the network layer it *is* the Squid which handles the HTTP
connections, but would it be possible?

Best regards,

L.


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread FredB
No problem with 3.2.0.13-2029-r11445 without transparent mode.

Is there something interesting in access.log?

- Original Message -
From: "David Touzeau" 
To: squid-users@squid-cache.org
Sent: Thursday, 1 December 2011 09:58:47
Subject: Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

On Wednesday 30 November 2011 at 11:14 +1300, Amos Jeffries wrote:
> On Tue, 29 Nov 2011 22:48:39 +0100, David Touzeau wrote:
> > Dear
> >
> > I'm trying to make  Squid Cache: Version 3.2.0.13-2027-r11436 on
> > transparent mode
> >
> > But squid refuse to access to some websites
> > for example google.* is ok
> >
> > but microsoft is impossible.
> >
> > How to fix this issue ?
>
>  Track down the client software which is producing the requests.
>
> >
> > On event :
> >
>
>
>  ... missing log line...
>
> > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: on URL:
> > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
>
>  ... missing log line...
>
> > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: on URL:
> > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
>
>
>  Which brings us back to the question of where the key log line has
>  disappeared to.
>
>  The log line which says "Host header forgery from $C ($A does not match 
>  $B)"
>
>  What those $ values are is important to how to fix it. $C is the
>  connection details needed to isolate the machine to investigate. $A and 
>  $B the details which it is getting wrong.
>
>  Amos
>


I have made other tests

Here is the dump.

Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/42/72A83D0D39814D13CA15F184E71D2.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/F4/9DC6A31D2F48971E8CF184EAF3ACFF.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/B5/2BC4D612CC1DB446582EB29AD4FF0.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/B3/F358459610F7EE4285351371CB3A.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb01.s-msn.com/i/4B/9571894AD3B49F1AFBDFB6A0AB929.gif
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/98/FD8C6B5E35BB28EE6D5D7CAA46C48.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/FF/976AED20082B54679EAB83F1C3.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/48/B6F62B8F241454CD698D3CE9DB625.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.450

[squid-users] CacheHierarchy - load balance, failover

2011-12-01 Thread Chia Wei LEE

Hi All

I have linked squid servers into a cache hierarchy: one child and two
parents.

below is the Child config
cache_peer p1.example.com parent 3128 3130 weighted-round-robin
cache_peer p2.example.com parent 3128 3130 weighted-round-robin

Here is my question:
if, let's say, p1.example.com goes down, will all the traffic go through
p2.example.com? If not, is there any solution to avoid service downtime?

Thanks

Cheers
Chia Wei







Re: [squid-users] how to convert incoming url requests from users to hashed format?

2011-12-01 Thread Amos Jeffries

On 1/12/2011 12:44 a.m., Yavuz Maşlak wrote:

I use squid proxy server.

I have a url blacklist which is in sha1 format.

I want to convert URL requests coming in from users into SHA-1 hashes in
order to compare them against the list.

How can I do that?

Is there a third-party tool that will run as an addon to squid for this?



This is a very abnormal requirement, so there is no easy way built into 
Squid.


There are two options available to you.

 * You can write an ACL test for Squid-3 very easily and do a local 
Squid build to use it.
  - gain in processing speed, lose in time creating and managing the 
ACL patch.

 * You can write a script to plug into the external_acl_type interface 
for Squid ACLs (a sketch of such a helper follows below).
  - gain in creation time, lose in processing speed.
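As an illustration of the second option, here is a minimal sketch of an 
external_acl_type helper in Python. The helper name, blacklist path, 
and the exact hashing scheme (SHA-1 of the full URL, lower-case hex) 
are assumptions; adjust them to match how your list was generated:

#!/usr/bin/env python
# Hypothetical helper: reads one URL per line from Squid and answers
# OK if the URL's SHA-1 digest is on the blacklist, ERR otherwise.
import hashlib
import sys

# Assumed blacklist location: one lower-case hex SHA-1 digest per line.
with open('/etc/squid/url_sha1.blacklist') as f:
    blacklist = set(line.strip().lower() for line in f if line.strip())

for line in sys.stdin:
    url = line.strip()
    digest = hashlib.sha1(url.encode('utf-8')).hexdigest()
    # "OK" means the ACL matches (URL is blacklisted), "ERR" means no match.
    sys.stdout.write('OK\n' if digest in blacklist else 'ERR\n')
    sys.stdout.flush()  # Squid expects an immediate reply per lookup

and the matching squid.conf wiring (all names here are examples):

external_acl_type sha1_check %URI /usr/local/bin/sha1_check.py
acl hashed_blacklist external sha1_check
http_access deny hashed_blacklist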


Are you able to explain why the list is in SHA1 in the first place? 
(just to satisfy my curiosity about this).


Amos


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-01 Thread David Touzeau
On Wednesday 30 November 2011 at 11:14 +1300, Amos Jeffries wrote:
> On Tue, 29 Nov 2011 22:48:39 +0100, David Touzeau wrote:
> > Dear
> >
> > I'm trying to make  Squid Cache: Version 3.2.0.13-2027-r11436 on
> > transparent mode
> >
> > But squid refuse to access to some websites
> > for example google.* is ok
> >
> > but microsoft is impossible.
> >
> > How to fix this issue ?
> 
>  Track down the client software which is producing the requests.
> 
> >
> > On event :
> >
> 
> 
>  ... missing log line...
> 
> > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> > Nov 29 22:18:57 squid2 squid[11257]: SECURITY ALERT: on URL:
> > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> 
>  ... missing log line...
> 
> > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: By user agent:
> > Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
> > InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
> > 3.0.4506.2152; .NET CLR 3.5.30729)
> > Nov 29 22:18:59 squid2 squid[11257]: SECURITY ALERT: on URL:
> > http://www.microsoft.com/isapi/redir.dll?prd=ie&pver=6&ar=msnhome
> 
> 
>  Which brings us back to the question of where the key log line has 
>  disappeared to.
> 
>  The log line which says "Host header forgery from $C ($A does not match 
>  $B)"
> 
>  What those $ values are is important to how to fix it. $C is the 
>  connection details needed to isolate the machine to investigate. $A and 
>  $B the details which it is getting wrong.
> 
>  Amos
> 


I have made other tests

Here is the dump.

Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/42/72A83D0D39814D13CA15F184E71D2.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/F4/9DC6A31D2F48971E8CF184EAF3ACFF.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/B5/2BC4D612CC1DB446582EB29AD4FF0.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/B3/F358459610F7EE4285351371CB3A.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb01.s-msn.com/i/4B/9571894AD3B49F1AFBDFB6A0AB929.gif
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/98/FD8C6B5E35BB28EE6D5D7CAA46C48.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/FF/976AED20082B54679EAB83F1C3.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb00.s-msn.com/i/48/B6F62B8F241454CD698D3CE9DB625.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729)
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: on URL:
http://db2.stb01.s-msn.com/i/9B/BBD5BC1B0962CA282508E1A7FB4A0.jpg
Dec  1 09:56:22 squid2 squid[28798]: SECURITY ALERT: By user agent:
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
In

Re: [squid-users] squid compilation option --disable-internal-dns

2011-12-01 Thread Amos Jeffries

On 1/12/2011 6:45 p.m., Benjamin wrote:

Hi,

As per the documentation, I gather that the "--disable-internal-dns" 
option is useful when we are talking about good performance under 
heavy load.




The docs are a little bit old and now a bit misleading.

The internal component operates in a multi-threaded concurrent design 
that has no limits on the number of simultaneous active lookups, and 
operates seamlessly between HTTP processing actions in Squid.


dnsserver helper is from the early days of Squid. It handles the 
blocking system calls the OS provides to do DNS. It has a maximum DNS 
packet throughput that is very, very low (5-10%) compared to the 
internal component. On systems needing many DNS requests it acts as an 
upper speed limit; on systems needing no or few lookups it acts as a 
resource-wasting process. I recommend avoiding it unless you need some 
special OS DNS feature which your Squid version can't access via its 
internal component.




As per the documentation:

The number of processes spawn to service DNS name lookups.
For heavily loaded caches on large servers, you should
probably increase this value to at least 10.  The maximum
is 32.  The default is 5.


This would be better written as "*if* you are running dnsserver on a 
heavily loaded cache...".




You must have at least one dnsserver process.


That assumes one is running dnsserver processes at all. The internal 
DNS component uses zero helpers, naturally.





Last statement: "You must have at least one dnsserver process." Does 
that mean we need a DNS server running on the OS at compile time? 
Because when I compile with this option from SRPMs, I am facing an 
error with [ dnsserver ].


No. "dnsserver" is the Squid helper name, different from "a DNS server". 
The helper uses the OS DNS lookup routines. The DNS server they use can 
be anywhere (close is better for speed reasons only).
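For reference, the helper mode only exists in builds configured with 
--disable-internal-dns, and is tuned with directives like these (the 
path and count are examples only):

cache_dns_program /usr/lib/squid/dnsserver
dns_children 10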


Hope this helps :)

Amos