RE: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

2013-09-23 Thread Jordan Dalley
Hi Eliezer,

I must admit I skimmed through that as it appeared different to anything else 
I'd seen elsewhere.

IOS version is 15.0(1)M10

Cheers,
J.

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Tuesday, 24 September 2013 3:30 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

Before saying this or that, did you have the chance to look at:
http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2
??
What version of IOS does the 2901 have on it?

Eliezer

On 09/24/2013 02:31 AM, Jordan Dalley wrote:
> Thanks for your reply Bob,
> 
> I tried what you said - completely removed any ifcfg-gre0 config and simply 
> ran the commands:
> 
> ifconfig gre0 inet 1.1.1.1 netmask 255.255.255.0 up
> iptables -F -t nat
> iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.112.4.4:3127
> 
> On the router side (I had to modify your acl's a bit)
> 
> ip access-list standard wccp-servers
> permit host 10.112.4.4
> ip access-list extended wccp-traffic
> permit tcp 10.114.32.0 0.0.7.255 any eq www
> 
> ip wccp web-cache redirect-list wccp-traffic group-list wccp-servers
> 
> Upon inspection, I can see the router forwarding packets through the gre 
> tunnel:
> 
> [root@tsv-squid1 ~]# tcpdump -i gre0
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on gre0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
> 20:40:04.370754 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:04.370861 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:07.381696 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:07.381779 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:13.387792 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,nop,sackOK], length 0
> 20:40:13.387812 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,nop,sackOK], length 0
> 
> Here's the weird thing..
> 
> [root@tsv-squid1 ~]# ifconfig gre0
> gre0  Link encap:UNSPEC  HWaddr 
> 00-00-00-00-00-00-82-12-00-00-00-00-00-00-00-00
>   inet addr:1.1.1.1  Mask:255.255.255.0
>   UP RUNNING NOARP  MTU:1476  Metric:1
>   RX packets:143 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:7136 (6.9 KiB)  TX bytes:0 (0.0 b)
> 
> Note my GRE tunnel is not transmitting, only receiving.
> 
> I can confirm, ip forwarding is enabled.
> 
> [root@tsv-squid1 ~]# cat /proc/sys/net/ipv4/ip_forward
> 1
> [root@tsv-squid1 ~]#
> 
> Cheers,
> J.
> 
> -Original Message-
> From: Luderitz Bob [mailto:bob.luder...@niproglassamericas.com]
> Sent: Tuesday, 24 September 2013 1:38 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901
> 
>   Hey Jordan, I am running a similar config with wccp and CentOS 6.3 with 
> Cisco routers.
> 
>   Your config looks close to what I have, but I use the built-in gre0
> tunnel, so I don't have the interface explicitly set up; I just have it in
> rc.local like this:
>   ifconfig gre0 inet 1.2.3.4 netmask 255.255.255.0 up
>   echo 1 > /proc/sys/net/ipv4/ip_forward
>   (same iptables statement as you have)
>   iptables -F -t nat
>   iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j 
> DNAT --to-destination 10.80.166.227:3127
> 
>   From the router - I did not set the config up, but I do have the
> documentation, and these additional statements are defined to force http
> traffic to the squid (10.80.166.227):
>   access-list wccp-servers extended permit ip host 10.80.166.227 any
>   access-list wccp-traffic extended permit tcp object-group 
> DM_INLINE_NETWORK_7 any eq www
>   wccp web-cache redirect-list wccp-traffic group-list wccp-servers
>   
>   hope this helps
> 
> -Original Message-
> From: Jordan Dalley [mailto:jdal...@tsv.catholic.edu.au]
> Sent: Monday, September 23, 2013 6:17 AM
> To:   
> Subject: [squid-users] WCCP issues with Centos 6.3 and Cisco 2901
> 
> Hi Squid community,
> 
> I have an issue where I am struggling to find out why it won't work.
> 
> I have trawled through multiple forums, howtos, FAQs etc., but no matter
> what I do, I cannot get it to work properly.
> 
> Here is what I have done so far:
> 
> Router IP: 10.114.3.34
> Squid IP: 10.112.4.4
> WAN Subnet: 10.112.0.0 / 255.252.0.0

[squid-users] Unwanted DNS queries

2013-09-23 Thread T Ls

Hi,

Today, some users complained about poor response time of the webproxy.
Searching for a possible reason, I found that the proxy makes a DNS
request (mostly AAAA but also A) for every http request. We are behind a
firewall and resolving internet names is impossible; we have to use
parent proxies to reach the internet, and I thought I had configured squid
that way (config at the end of the mail). When I saw the DNS queries, my
first guess was a dst-ACL, but there are no dst-ACLs.


I recorded some traffic at the proxy and looked inside with wireshark:
for every http request the proxy queries both its nameservers for the
IP(v6) of the destination host, and after these queries fail the http
request is forwarded to the parent proxy, the content is fetched from the
web and delivered to the client.


Last week, I made some changes to the logformat, but switching back to 
the original format did not stop the DNS queries.


Where is the error in my config? What causes the DNS queries?

Thanks in advance
Thomas





squid.conf:
^^^
include /mnt/squid3-shared-settings/*.conf

visible_hostname proxy.my.domain.org

hierarchy_stoplist cgi-bin ?

cache_peer         parent-ip1        parent  80  7  no-query no-digest
cache_peer         parent-ip2        parent  80  7  no-query no-digest
cache_peer         proxy.domain.org  parent  7   no-query no-digest

cache_peer_access  proxy.domain.org  allow  MYDOMAINS
cache_peer_access  parent-ip-1       deny   MYDOMAINS
cache_peer_access  parent-ip-2       deny   MYDOMAINS

### MEMORY CACHE OPTIONS ...
### Disk-Cache Optionen ...



access.conf:


acl localhost src 127.0.0.1/32
acl Safe_ports port "...SafePorts.txt"
acl SSL_ports port 443 563 8443 9443
acl CONNECT method CONNECT

acl MYNET src ip-range1
acl MYNET src ip-range2
acl MYNET ...

acl MY-LOCAL-DOMAIN dstdomain .my.domain.org

acl badURLs dstdomain "...badURLs.txt"
acl goodTLDs dstdomain "...goodTLDs.txt"
acl adminPCs src "...adminPCs.txt"
acl labPcs src "...labor-pcs.txt"


acl MYDOMAINS dstdomain .domain.org
acl MYDOMAINS dstdomain .domain.net
acl MYDOMAINS dstdomain .domain.eu


http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny labPcs
http_access allow adminPCs
http_access deny  badURLs
http_access deny !goodTLDs

http_access allow MYNET
http_access deny all

htcp_access deny all
htcp_clr_access deny all

--
common-server.conf:
^^^

http_port 8080
error_directory /usr/share/squid/errors/de
log_icp_queries on
cache_effective_user squid
cache_effective_group nogroup
cache_mgr m...@my.domain.org

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


---
logging.conf:
^


logformat myformat %tl %6tr %>a %Ss/%03>Hs %%mt

cache_access_log /var/log/squid/access.log myformat
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
pid_filename /var/log/squid/squid.pid
debug_options ALL,1


request-forward.conf:
^


always_direct allow MY-LOCAL-DOMAIN

never_direct deny MY-LOCAL-DOMAIN
never_direct allow all
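
Squid generally resolves a destination itself only when DIRECT is a
candidate path for the request, so the peer-selection decision is the
thing to trace (a sketch, assuming default log locations; debug section
44 is peer selection):

debug_options ALL,1 44,3
# squid -k reconfigure, then watch cache.log while requesting a page

One guess worth ruling out (an assumption, not a confirmed diagnosis):
with hierarchy_stoplist present, non-hierarchical requests may still be
tried DIRECT unless that behaviour is switched off:

nonhierarchical_direct off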



Re: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

2013-09-23 Thread Eliezer Croitoru
Before saying this or that, did you have the chance to look at:
http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2
??
What version of IOS does the 2901 have on it?

Eliezer

On 09/24/2013 02:31 AM, Jordan Dalley wrote:
> Thanks for your reply Bob,
> 
> I tried what you said - completely removed any ifcfg-gre0 config and simply 
> ran the commands:
> 
> ifconfig gre0 inet 1.1.1.1 netmask 255.255.255.0 up
> iptables -F -t nat
> iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
> --to-destination 10.112.4.4:3127
> 
> On the router side (I had to modify your acl's a bit)
> 
> ip access-list standard wccp-servers
> permit host 10.112.4.4
> ip access-list extended wccp-traffic
> permit tcp 10.114.32.0 0.0.7.255 any eq www
> 
> ip wccp web-cache redirect-list wccp-traffic group-list wccp-servers
> 
> Upon inspection, I can see the router forwarding packets through the gre 
> tunnel:
> 
> [root@tsv-squid1 ~]# tcpdump -i gre0
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on gre0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
> 20:40:04.370754 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:04.370861 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:07.381696 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:07.381779 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
> 20:40:13.387792 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,nop,sackOK], length 0
> 20:40:13.387812 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,nop,sackOK], length 0
> 
> Here's the weird thing..
> 
> [root@tsv-squid1 ~]# ifconfig gre0
> gre0  Link encap:UNSPEC  HWaddr 
> 00-00-00-00-00-00-82-12-00-00-00-00-00-00-00-00
>   inet addr:1.1.1.1  Mask:255.255.255.0
>   UP RUNNING NOARP  MTU:1476  Metric:1
>   RX packets:143 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:7136 (6.9 KiB)  TX bytes:0 (0.0 b)
> 
> Note my GRE tunnel is not transmitting, only receiving.
> 
> I can confirm, ip forwarding is enabled.
> 
> [root@tsv-squid1 ~]# cat /proc/sys/net/ipv4/ip_forward
> 1
> [root@tsv-squid1 ~]#
> 
> Cheers,
> J.
> 
> -Original Message-
> From: Luderitz Bob [mailto:bob.luder...@niproglassamericas.com] 
> Sent: Tuesday, 24 September 2013 1:38 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901
> 
>   Hey Jordan, I am running a similar config with wccp and CentOS 6.3 with 
> Cisco routers.
> 
>   Your config looks close to what I have, but I use the built-in gre0
> tunnel, so I don't have the interface explicitly set up; I just have it in
> rc.local like this:
>   ifconfig gre0 inet 1.2.3.4 netmask 255.255.255.0 up
>   echo 1 > /proc/sys/net/ipv4/ip_forward
>   (same iptables statement as you have)
>   iptables -F -t nat
>   iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
> --to-destination 10.80.166.227:3127
> 
>   From the router - I did not set the config up, but I do have the
> documentation, and these additional statements are defined to force http
> traffic to the squid (10.80.166.227):
>   access-list wccp-servers extended permit ip host 10.80.166.227 any
>   access-list wccp-traffic extended permit tcp object-group 
> DM_INLINE_NETWORK_7 any eq www
>   wccp web-cache redirect-list wccp-traffic group-list wccp-servers 
>   
>   hope this helps
> 
> -Original Message-
> From: Jordan Dalley [mailto:jdal...@tsv.catholic.edu.au]
> Sent: Monday, September 23, 2013 6:17 AM
> To:   
> Subject: [squid-users] WCCP issues with Centos 6.3 and Cisco 2901
> 
> Hi Squid community,
> 
> I have an issue where I am struggling to find out why it won't work.
> 
> I have trawled through multiple forums, howtos, FAQs etc., but no matter
> what I do, I cannot get it to work properly.
> 
> Here is what I have done so far:
> 
> Router IP: 10.114.3.34
> Squid IP: 10.112.4.4
> WAN Subnet: 10.112.0.0 / 255.252.0.0
> 
> Squid Config:
> 
> http_port 3127 intercept
> wccp2_router 10.114.3.34
> wccp2_forwarding_method gre
> wccp2_return_method gre
> wccp2_service standard 0
> 
> Confirm I can access and use port 3127 directly without issue from any 
> location in the WAN.
> 
> Router Config:
> 
> ip wccp web-cache
> interface G0/1
> !Inside interface
> ip wccp web-cache redirect in
> 
> Added to sysctl.conf:

Re: [squid-users] Samba 4 vs Squid 3.2 - NTLM Authentication

2013-09-23 Thread Eliezer Croitoru
Hey there,

I will need to understand more about the NTLM auth process in order to
try to help you.
Can you tell me if it worked on another version of Samba?
There is a DEBUG option, if I am not wrong, for the NTLM helper.
I think that using Kerberos might solve a couple of issues.
As far as I can tell, NTLM is an older way to do things, which can be the
wrong choice.
Did you have the chance to look at:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
??
Is this a Samba-only environment or an AD one?

Take a small peek at:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5#Notes
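
As a first step, the helper can be sanity-checked outside Squid (a
sketch, assuming winbindd is running and the standard Samba client tools
are installed):

wbinfo -t                      # verify the machine trust secret with the DC
ntlm_auth --username=testuser  # prompts for a password; should print NT_STATUS_OK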

Eliezer

On 09/23/2013 03:25 PM, Aecio Alves wrote:
> Hi Eliezer!
> 
> Any ideas or suggestions on how I can proceed?
> 
> Thank you!
> 
> Aécio
> 
> 
> On 9/18/13 3:46 PM, Aecio Alves wrote:
>> Hello,
>>
>> Sorry for the delay in answering you.
>>
>> I'm using - helper-protocol = squid-2.5-ntlmssp.
>>
>> I tried to use version 3.3 of squid, but had several errors during
>> compilation.
>> But I can try again.
>>
>> I am using version 4 of Samba.
>>
>> The OS I use is CentOS 5.9.
>>
>> Thank you.
>>
>> Aécio
>>
>> On 9/16/13 8:41 PM, Eliezer Croitoru wrote:
>>> Hey there,
>>>
>>> What helper are you using?
>>> Can you test the code on the 3.3 branch rather than the 3.2?
>>> It is newer and maybe more stable than 3.2.
>>>
>>> what version of samba are you using and on what OS?
>>>
>>> Eliezer
>>>
>>> On 09/16/2013 11:55 PM, Aecio Alves wrote:
 Good afternoon.

I'm trying to make the integrated authentication work between Samba 4
and Squid 3.2.0.1. My scenario is as follows:

 - A server running Samba 4 as a Domain Controller and Squid to filter
 the users' navigation.

The domain is properly configured and squid too. With no authentication
squid works perfectly, but when I enable authentication it stops working.

Sometimes it starts loading the page, but it does not complete and stops
working.

 Could you help me?

 Thank you!

 Aecio
>>
> 



Re: [squid-users] caching for 60 minutes, ignoring any header

2013-09-23 Thread Eliezer Croitoru
Hey Ron,

I added notes near the quotes.

On 09/23/2013 12:13 PM, Ron Klein wrote:
> I'll describe the real scenario in a more detailed way, but I can't
> disclose all of it.
> 
It's OK since it's a public list.

> There are a few machines, let's name them M1 to M9, that are processing
> data.
OK
> From time to time, those machines should make HTTP requests to external
> servers, that are business partners. All of these HTTP requests are in
> the same format and have the following request headers:
> * User-Agent: undisclosed_user_agent
> * Accept-Encoding: gzip, deflate
> * Host: the_hostname_of_the_external_server
> * Expect: [nothing]
> * Pragma: [nothing]
> That's it, nothing more, nothing less.
OK, and what are the server response headers?
These matter from many angles of the problem.

> On those servers, as we agreed, there should be an xml file in a
> specific path. For instance:
> http://foo.com/bar/daily-orders.xml
Which, let's say, we can describe with a case scenario from mighty
Google, for example?
> (I can't disclose the exact path here)
NP CIA is important!
> These files are re-generated from time to time. How often? I can't tell,
> and it's not up to me.
OK
> Now, since there are a few thousands of business partners that generate
> these xml files for my business, I thought that caching these xml files
> in a single machine would be a good idea, since it should reduce
> external traffic.
That is what is called a forward proxy.
> Therefore, I installed Squid3 on a specific machine, and updated M1-M9
> HTTP clients to use the proxy server instead of directly fetching the
> xml files.
Or intercept them... all up to you.

> For business considerations, when an xml file is cached, I don't need it
> to be as fresh as possible. I want to reduce outgoing traffic as much as
> possible.
squid v3.1 and 3.3 work a bit differently, and their logs are also
somewhat different about it.
> My business partners don't care about it either. They also don't want to
> change anything at all in their web servers. That's a fact I can't
> change whatsoever.
Which is a major bad habit of many...
> 
> All I want is to have a local copy of the xml file for every external
> server, that would be considered as "fresh" from T0 to T0+60minutes. For
> my business needs, that's what I need. And if some of the xml files are
> cached somewhere else, which is a rare scenario for this case, then I
> can ignore that (business-wise)
This is what is called a "mirror" site,
and in the proxy world it would be considered a "stale" cached object or
an "offline copy".
> 
> I initially thought that the favicons example would simplify things
> (since a lot of web sites have favicons, and it's a common knowledge),
> but I wasn't aware of the special case of favicons. I apologize for the
> time wasted about my simplified example.
It's nice to simplify it, since not everyone can see the whole picture
from one favicon...
> 
> I hope I shed more light about the subject.
> 
Yes indeed.
I will give an example case that can help you understand the complexity
of the issue. Two responses that can be inspected using redbot:
http://redbot.org/?uri=http%3A%2F%2Fwww.google.co.il%2Ffavicon.ico
http://redbot.org/?uri=http%3A%2F%2Fwww.google.co.il%2Findex.html

The above tool allows a simulation of a simple request while showing the
differences between them.

The favicon.ico is a nice and simple target to cache if the server serves
it in a simple way.
When the server starts to complicate things at the application level and
to change response headers for certain clients, it's another story.
The above might be the reason your partners do not want to change their
applications.

The above also puts you in a situation where you might not be able to
cache the object (file) in the simple way squid offers out of the box,
since squid is a *general* http cache proxy which might not meet the
very deep complexity of the site developer's application.

With the above at stake, and since these XML files are only for machines
M1-M9 (right?), the basic way would be to "inject" these files into the
cache or store them on a dedicated *offline cache server*.
You are not the first to ask for the above, but squid is an *online*
cache and not a store mechanism.

Since you have a specific issue with specific clients and a specific
issue with specific servers, you will need to examine squid's logs using
the debug_options Amos suggested, which will document the above case and
make sure that the right solution for your very specific scenario is
delivered, considering the *local* business effect of the so-called
*cache* for favicon.ico.

I have posted before that a cache maintainer needs to remember that
caching is not the only option for all cases and that re-validation is
not such a bad thing.
In squid 3.3.X, if the exact simple request results in a simple response
which is cachable, a re-validation is expected, and re-download can be
the right choice, which is not a bad result since

Re: [squid-users] What to do in an imperfect world

2013-09-23 Thread Amos Jeffries

On 24/09/2013 9:04 a.m., Mark Davies wrote:

Some time ago I set connect_timeout down to 10 seconds because we were
hitting various sites that were advertising v6 addresses but not listening
on them, and it seemed that in this day and age connect times were
generally under a second. But now we've found a site
(http://inspirehep.net) that some of our users use heavily and that
regularly can take over 20 seconds to connect, so we've gone back to the
default setting.  Any way we can deal with both situations at once?

cheers
mark


The patch in here (for 3.2 and later) may be of some interest:
http://bugs.squid-cache.org/show_bug.cgi?id=3901

It is not yet in any releases because there are some questions about how 
it works with some of the special edge case uses of the 
tcp_outgoing_address code.
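
One partial mitigation sometimes suggested in the meantime (not part of
the patch above, and only relevant because the original problem is dead
v6 listeners): recent Squid 3.x can be told to try IPv4 upstream first,
which makes a long connect_timeout hurt less on broken dual-stack sites:

dns_v4_first on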


Amos



Re: [squid-users] Problems with cache peering, sourcehash, *_uses_indirect, and follow_x_forwarded_for

2013-09-23 Thread Amos Jeffries

On 24/09/2013 9:06 a.m., Martín Ferco wrote:

Hello,

I'm trying to use DansGuardian together with Squid and load-balancing
to use more than one ISP.

I've been able to achieve this by using cache_peer, and I should be
able to perform load balancing with the following two lines:

{{{
cache_peer squid-isp1 parent 13128 0 no-query round-robin sourcehash proxy-only
cache_peer squid-isp2 parent 23128 0 no-query round-robin sourcehash proxy-only
}}}

These two cache-peers run on the same box, as you can see.


Problem #1:
  round-robin is one type of peer selection, sourcehash is a different 
type. Only one method will be used to select between these peers.



I've also made sure that indirect options are set properly like this:

acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
follow_x_forwarded_for allow localhost


Problem #2:
  notice how none of these options mention cache_peer or outbound 
connections.



I'm sure that's working fine as the logs show the correct information
for different IP addresses (and not 127.0.0.1, where DansGuardian is
running as well).

Now, the problem with the original two lines is "sourcehash". It looks
like it's *NOT* using the 'indirect' feature. I've set squid debug
options to "39,2", and the following is shown in the logs:

{{{
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:21| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
}}}

So, basically, the IP where DansGuardian is running is being hashed,
instead of the original one. When looking at the sourcecode for
version 2.7.STABLE9 (the one I'm using), it looks like client_addr is
used instead of the indirect one as the key in
"src/peer_sourcehash.c":

{{{
key = inet_ntoa(request->client_addr);
}}}

This also seems to happen in the latest 3.3 version of squid.

Could this be fixed by adding the following lines to that file, after
that line shown above:

{{{
#if FOLLOW_X_FORWARDED_FOR
key = inet_ntoa(request->indirect_client_addr);
#endif /* FOLLOW_X_FORWARDED_FOR */
}}}

Are you aware of this problem, or am I doing something wrong?


It is not a problem per se.
* sourcehash is a hashing algorithm based on inbound TCP connection details.
* the "indirect client" feature is about the network state of a TCP
connection unrelated to Squid.


If round-robin is sufficient for your needs I suggest dropping the 
sourcehash entirely.
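
In other words, a sketch of that suggestion is simply the original two
lines with sourcehash removed:

{{{
cache_peer squid-isp1 parent 13128 0 no-query round-robin proxy-only
cache_peer squid-isp2 parent 23128 0 no-query round-robin proxy-only
}}}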



Also, I recommend an upgrade to the 3.3 Squid if you can. 2.7 is getting 
very outdated.


Amos


[squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

2013-09-23 Thread Jordan Dalley
Thanks for your reply Bob,

I tried what you said - completely removed any ifcfg-gre0 config and simply ran 
the commands:

ifconfig gre0 inet 1.1.1.1 netmask 255.255.255.0 up
iptables -F -t nat
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 10.112.4.4:3127

On the router side (I had to modify your acl's a bit)

ip access-list standard wccp-servers
permit host 10.112.4.4
ip access-list extended wccp-traffic
permit tcp 10.114.32.0 0.0.7.255 any eq www

ip wccp web-cache redirect-list wccp-traffic group-list wccp-servers

Upon inspection, I can see the router forwarding packets through the gre tunnel:

[root@tsv-squid1 ~]# tcpdump -i gre0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gre0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
20:40:04.370754 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
20:40:04.370861 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
20:40:07.381696 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
20:40:07.381779 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
20:40:13.387792 IP 10.114.32.51.62007 > 190.93.248.164.http: Flags [S], seq 2779756886, win 8192, options [mss 1460,nop,nop,sackOK], length 0
20:40:13.387812 IP 10.114.32.51.62008 > 190.93.248.164.http: Flags [S], seq 1665803222, win 8192, options [mss 1460,nop,nop,sackOK], length 0

Here's the weird thing..

[root@tsv-squid1 ~]# ifconfig gre0
gre0  Link encap:UNSPEC  HWaddr 
00-00-00-00-00-00-82-12-00-00-00-00-00-00-00-00
  inet addr:1.1.1.1  Mask:255.255.255.0
  UP RUNNING NOARP  MTU:1476  Metric:1
  RX packets:143 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:7136 (6.9 KiB)  TX bytes:0 (0.0 b)

Note my GRE tunnel is not transmitting, only receiving.

I can confirm, ip forwarding is enabled.

[root@tsv-squid1 ~]# cat /proc/sys/net/ipv4/ip_forward
1
[root@tsv-squid1 ~]#
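
A quick way to tell whether the DNAT rule is matching at all (a
diagnostic sketch, assuming the rule and interface names above) is to
watch the NAT packet counters and look for redirected flows on the squid
port:

iptables -t nat -L PREROUTING -v -n   # the pkts counter on the rule should grow
tcpdump -n -i any tcp port 3127       # redirected connections reaching squid

Note that RX-only counters on gre0 are not necessarily wrong by
themselves: with DNAT, squid's replies are routed back to the clients via
eth0, so nothing is ever transmitted into the tunnel.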

Cheers,
J.

-Original Message-
From: Luderitz Bob [mailto:bob.luder...@niproglassamericas.com] 
Sent: Tuesday, 24 September 2013 1:38 AM
To: squid-users@squid-cache.org
Subject: [squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

Hey Jordan, I am running a similar config with wccp and CentOS 6.3 with 
Cisco routers.

Your config looks close to what I have, but I use the built-in gre0
tunnel, so I don't have the interface explicitly set up; I just have it in
rc.local like this:
ifconfig gre0 inet 1.2.3.4 netmask 255.255.255.0 up
echo 1 > /proc/sys/net/ipv4/ip_forward
(same iptables statement as you have)
iptables -F -t nat
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 10.80.166.227:3127

From the router - I did not set the config up, but I do have the
documentation, and these additional statements are defined to force http
traffic to the squid (10.80.166.227):
access-list wccp-servers extended permit ip host 10.80.166.227 any
access-list wccp-traffic extended permit tcp object-group 
DM_INLINE_NETWORK_7 any eq www
wccp web-cache redirect-list wccp-traffic group-list wccp-servers 

hope this helps

-Original Message-
From: Jordan Dalley [mailto:jdal...@tsv.catholic.edu.au]
Sent: Monday, September 23, 2013 6:17 AM
To: 
Subject: [squid-users] WCCP issues with Centos 6.3 and Cisco 2901

Hi Squid community,

I have an issue where I am struggling to find out why it won't work.

I have trawled through multiple forums, howtos, FAQs etc., but no matter
what I do, I cannot get it to work properly.

Here is what I have done so far:

Router IP: 10.114.3.34
Squid IP: 10.112.4.4
WAN Subnet: 10.112.0.0 / 255.252.0.0

Squid Config:

http_port 3127 intercept
wccp2_router 10.114.3.34
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0

Confirm I can access and use port 3127 directly without issue from any location 
in the WAN.

Router Config:

ip wccp web-cache
interface G0/1
!Inside interface
ip wccp web-cache redirect in

Added to sysctl.conf:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth0.ip_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
net.ipv4.conf.gre0.ip_filter = 0
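
sysctl.conf is only read at boot; after editing, the values can be
applied and verified at runtime (a sketch; note the gre0 entries exist
only once ip_gre is loaded):

sysctl -p
sysctl net.ipv4.conf.gre0.rp_filter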

Added to /etc/sysconfig/network-scripts/ifcfg-gre0

DEVICE=gre0
BOOTPROTO=static
IPADDR=127.0.0.2
NETMASK=255.255.255.0
ONBOOT=YES
IPV6INIT=NO

Linux Configuration:

modprobe ip_gre

[squid-users] Problems with cache peering, sourcehash, *_uses_indirect, and follow_x_forwarded_for

2013-09-23 Thread Martín Ferco
Hello,

I'm trying to use DansGuardian together with Squid and load-balancing
to use more than one ISP.

I've been able to achieve this by using cache_peer, and I should be
able to perform load balancing with the following two lines:

{{{
cache_peer squid-isp1 parent 13128 0 no-query round-robin sourcehash proxy-only
cache_peer squid-isp2 parent 23128 0 no-query round-robin sourcehash proxy-only
}}}

These two cache-peers run on the same box, as you can see.

I've also made sure that indirect options are set properly like this:

acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
follow_x_forwarded_for allow localhost

I'm sure that's working fine as the logs show the correct information
for different IP addresses (and not 127.0.0.1, where DansGuardian is
running as well).

Now, the problem with the original two lines is "sourcehash". It looks
like it's *NOT* using the 'indirect' feature. I've set squid debug
options to "39,2", and the following is shown in the logs:

{{{
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:20| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:20| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
2013/09/23 15:10:21| peerSourceHashSelectParent: selected squid-isp1
2013/09/23 15:10:21| peerSourceHashSelectParent: Calculating hash for 127.0.0.1
}}}

So, basically, the IP where DansGuardian is running is being hashed,
instead of the original one. When looking at the sourcecode for
version 2.7.STABLE9 (the one I'm using), it looks like client_addr is
used instead of the indirect one as the key in
"src/peer_sourcehash.c":

{{{
key = inet_ntoa(request->client_addr);
}}}

This also seems to happen in the latest 3.3 version of squid.

Could this be fixed by adding the following lines to that file, after
that line shown above:

{{{
#if FOLLOW_X_FORWARDED_FOR
key = inet_ntoa(request->indirect_client_addr);
#endif /* FOLLOW_X_FORWARDED_FOR */
}}}

Are you aware of this problem, or am I doing something wrong?

Thanks,
Martín.


[squid-users] What to do in an imperfect world

2013-09-23 Thread Mark Davies
Some time ago I set connect_timeout down to 10 seconds because we were
hitting various sites that were advertising v6 addresses but not listening
on them, and it seemed that in this day and age connect times were
generally under a second. But now we've found a site
(http://inspirehep.net) that some of our users use heavily and that
regularly can take over 20 seconds to connect, so we've gone back to the
default setting.  Any way we can deal with both situations at once?

cheers
mark


[squid-users] is "nice" useful?

2013-09-23 Thread Alfredo Rezinovsky

I have a heavily loaded squid and I noticed high latency.

Using workers it seems a little faster.

I don't have 100% CPU load and iowait is also low. But clients browse
faster when I disable the tproxy and let them pass bridged.


Should running squid with a lower "nice" value improve the latency?
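
For reference, a lower niceness can be tried on the running processes
without a restart (a sketch, assuming pgrep matches only the squid
processes):

sudo renice -n -5 -p $(pgrep squid)

Whether it helps while the CPU is not saturated is doubtful, though.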

--
Alfrenovsky


[squid-users] RE: WCCP issues with Centos 6.3 and Cisco 2901

2013-09-23 Thread Luderitz Bob
Hey Jordan, I am running a similar config with wccp and CentOS 6.3 with 
Cisco routers.

Your config looks close to what I have, but I use the built-in gre0
tunnel, so I don't have the interface explicitly set up; I just have it in
rc.local like this:
ifconfig gre0 inet 1.2.3.4 netmask 255.255.255.0 up
echo 1 > /proc/sys/net/ipv4/ip_forward
(same iptables statement as you have)
iptables -F -t nat
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 10.80.166.227:3127

From the router - I did not set the config up, but I do have the
documentation, and these additional statements are defined to force http
traffic to the squid (10.80.166.227):
access-list wccp-servers extended permit ip host 10.80.166.227 any
access-list wccp-traffic extended permit tcp object-group 
DM_INLINE_NETWORK_7 any eq www
wccp web-cache redirect-list wccp-traffic group-list wccp-servers 

hope this helps

-Original Message-
From: Jordan Dalley [mailto:jdal...@tsv.catholic.edu.au] 
Sent: Monday, September 23, 2013 6:17 AM
To: 
Subject: [squid-users] WCCP issues with Centos 6.3 and Cisco 2901

Hi Squid community,

I have an issue where I am struggling to find out why it won't work.

I have trawled through multiple forums, howtos, FAQs etc., but no matter
what I do, I cannot get it to work properly.

Here is what I have done so far:

Router IP: 10.114.3.34
Squid IP: 10.112.4.4
WAN Subnet: 10.112.0.0 / 255.252.0.0

Squid Config:

http_port 3127 intercept
wccp2_router 10.114.3.34
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0

Confirm I can access and use port 3127 directly without issue from any location 
in the WAN.

Router Config:

ip wccp web-cache
interface G0/1
!Inside interface
ip wccp web-cache redirect in

Added to sysctl.conf:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth0.ip_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
net.ipv4.conf.gre0.ip_filter = 0

Added to /etc/sysconfig/network-scripts/ifcfg-gre0

DEVICE=gre0
BOOTPROTO=static
IPADDR=127.0.0.2
NETMASK=255.255.255.0
ONBOOT=YES
IPV6INIT=NO

Linux Configuration:

modprobe ip_gre
ifup gre0
iptables -t nat -F
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 10.112.4.4:3127

If I then do a tcpdump -i gre0, I can see packets flowing through this
interface with destination port 80. Unfortunately it seems as if they are
somehow not being NATted to the squid server.

I've tried various methods of doing this, but none of them seem to work.

Does anyone have any ideas?

Regards,
Jordan.



Re: [squid-users] is "nice" useful?

2013-09-23 Thread Antony Stone
On Monday 23 September 2013 at 18:01:48, Alfredo Rezinovsky wrote:

> I have a heavily loaded squid and I noticed high latency.

Please specify "heavily loaded" and "high latency":

 - what spec machine is Squid running on?  (CPU cores, speed, amount of RAM 
are the most important factors, also disk interface type might be important)

 - how many connections per second is it servicing?

 - what bandwidth is going through the machine?

 - what cache hit ratio are you getting?

 - how are you measuring latency?

 - what do you regard as "high" (and what did you previously have which seemed 
to be "low")?

> Using workers it seems a little faster.
> 
> I don't have 100% CPU load and iowait is also low. But clients browse
> faster when I disable the tproxy and let them pass bridged.

How are you measuring the speed difference, and what difference does it 
actually 
make?

> Should running squid with a lower "nice" value improve the latency?

What else is running on the machine?


Regards,


Antony.

-- 
Under UK law, no VAT is charged on biscuits and cakes - they are "zero rated".  
Chocolate covered biscuits, however, are classed as "luxury items" and are 
subject to VAT.  McVitie's classed its Jaffa Cakes as cakes, but in 1991 this 
was challenged by Her Majesty's Customs and Excise in court.

The question which had to be answered was what criteria should be used to 
class something as a cake or a biscuit.  McVitie's defended the classification 
of Jaffa Cakes as a cake by arguing that cakes go hard when stale, whereas 
biscuits go soft.  It was demonstrated that Jaffa Cakes become hard when stale 
and McVitie's won the case.

 Please reply to the list;
   please don't CC me.


[squid-users] Fwd: Problem "whitelisting" .shiprush.com

2013-09-23 Thread Chris Nighswonger
So what am I missing in the following situation?

Our mail dept uses shiprush.com. The software supplied by shiprush is
not proxy-auth friendly, so I added a

acl ShipRush dstdomain .shiprush.com

and

http_access allow campusnet ShipRush

before my http_access line requiring authentication.

Yet I still see Squid3 requesting auth [1].
What am I doing wrong?
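
One way to watch which http_access line fires for these requests (a
debugging sketch, assuming the Ubuntu squid3 paths; debug section 28 is
ACL processing):

# in /etc/squid3/squid.conf:
debug_options ALL,1 28,3
# then:
squid3 -k reconfigure
tail -f /var/log/squid3/cache.log

Also worth checking: if the ShipRush software requests by raw IP address
rather than hostname, a dstdomain ACL may not match it.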

I've supplied my squid.conf in redacted form [2]. (General comments
welcome as well as those specific to this problem.)

Kind Regards,
Chris


Misc Info:

OS: Ubuntu 10.04.4 LTS

Squid Cache: Version 3.1.6
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc'
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
'--disable-maintainer-mode' '--disable-dependency-tracking'
'--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3'
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man'
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--disable-translation'
'--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g
-Wall -O2' --with-squid=/build/buildd/squid3-3.1.6


[1] https://docs.google.com/file/d/0B5GhqVvpzpvjVE5MX2drM21HNW8/edit?usp=sharing
[2] https://docs.google.com/file/d/0B5GhqVvpzpvjWjhQUnc4UDNweUk/edit?usp=sharing


Re: [squid-users] Samba 4 vs Squid 3.2 - NTLM Authentication

2013-09-23 Thread Aecio Alves

Hi Eliezer!

Any ideas or suggestions on how I can proceed?

Thank you!

Aécio


On 9/18/13 3:46 PM, Aecio Alves wrote:

Hello,

Sorry for the delay in answering you.

I'm using - helper-protocol = squid-2.5-ntlmssp.

I tried to use version 3.3 of squid, but had several errors during 
compilation.

But I can try again.

I am using version 4 of Samba.

The OS I use is CentOS 5.9.

Thank you.

Aécio

On 9/16/13 8:41 PM, Eliezer Croitoru wrote:

Hey there,

What helper are you using?
Can you test the code on the 3.3 branch rather than the 3.2?
It is newer and maybe more stable than 3.2.

what version of samba are you using and on what OS?

Eliezer

On 09/16/2013 11:55 PM, Aecio Alves wrote:

Good afternoon.

I'm trying to make the integrated authentication work between Samba 4 and
Squid 3.2.0.1. My scenario is as follows:

- A server running Samba 4 as a Domain Controller and Squid to filter
the users' navigation.

The domain is properly configured and squid too. With no authentication
squid works perfectly, but when I enable authentication it stops working.

Sometimes it starts loading the page, but it does not complete and stops
working.


Could you help me?

Thank you!

Aecio






[squid-users] WCCP issues with Centos 6.3 and Cisco 2901

2013-09-23 Thread Jordan Dalley
Hi Squid community,

I have an issue where I am struggling to find out why it won't work.

I have trawled through multiple forums, howtos, FAQs etc., but no matter
what I do, I cannot get it to work properly.

Here is what I have done so far:

Router IP: 10.114.3.34
Squid IP: 10.112.4.4
WAN Subnet: 10.112.0.0 / 255.252.0.0

Squid Config:

http_port 3127 intercept
wccp2_router 10.114.3.34
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0

Confirm I can access and use port 3127 directly without issue from any location 
in the WAN.

Router Config:

ip wccp web-cache
interface G0/1
!Inside interface
ip wccp web-cache redirect in

Added to sysctl.conf:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth0.ip_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
net.ipv4.conf.gre0.ip_filter = 0

Added to /etc/sysconfig/network-scripts/ifcfg-gre0

DEVICE=gre0
BOOTPROTO=static
IPADDR=127.0.0.2
NETMASK=255.255.255.0
ONBOOT=YES
IPV6INIT=NO

Linux Configuration:

modprobe ip_gre
ifup gre0
iptables -t nat -F
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 10.112.4.4:3127

If I then do a tcpdump -i gre0, I can see packets flowing through this
interface with destination port 80. Unfortunately it seems as if they are
somehow not being NATted to the squid server.
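
The router side of the same question can be checked with the standard
IOS status commands (a sketch):

show ip wccp
show ip wccp web-cache detail

A registered cache whose redirect counters never grow points at the
redirect configuration; growing counters with no NATted traffic point
back at the Linux side.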

I've tried various methods of doing this, but none of them seem to work.

Does anyone have any ideas?

Regards,
Jordan.



Re: [squid-users] caching for 60 minutes, ignoring any header

2013-09-23 Thread Ron Klein
I'll describe the real scenario in a more detailed way, but I can't 
disclose all of it.


There are a few machines, let's name them M1 to M9, that are processing 
data.
From time to time, those machines should make HTTP requests to external 
servers, that are business partners. All of these HTTP requests are in 
the same format and have the following request headers:

* User-Agent: undisclosed_user_agent
* Accept-Encoding: gzip, deflate
* Host: the_hostname_of_the_external_server
* Expect: [nothing]
* Pragma: [nothing]
That's it, nothing more, nothing less.
On those servers, as we agreed, there should be an xml file in a 
specific path. For instance:

http://foo.com/bar/daily-orders.xml
(I can't disclose the exact path here)
These files are re-generated from time to time. How often? I can't tell, 
and it's not up to me.
Now, since there are a few thousands of business partners that generate 
these xml files for my business, I thought that caching these xml files 
in a single machine would be a good idea, since it should reduce 
external traffic.
Therefore, I installed Squid3 on a specific machine, and updated M1-M9 
HTTP clients to use the proxy server instead of directly fetching the 
xml files.
For business considerations, when an xml file is cached, I don't need it
to be as fresh as possible. I want to reduce outgoing traffic as much as
possible.
My business partners don't care about it either. They also don't want to
change anything at all in their web servers. That's a fact I can't
change whatsoever.


All I want is to have a local copy of the xml file for every external 
server, that would be considered as "fresh" from T0 to T0+60minutes. For 
my business needs, that's what I need. And if some of the xml files are 
cached somewhere else, which is a rare scenario for this case, then I 
can ignore that (business-wise).


I initially thought that the favicons example would simplify things 
(since a lot of web sites have favicons, and it's a common knowledge), 
but I wasn't aware of the special case of favicons. I apologize for the 
time wasted about my simplified example.


I hope I shed more light about the subject.

Thanks!

On 23-Sep-13 11:21, Amos Jeffries wrote:

On 23/09/2013 7:21 p.m., Ron Klein wrote:
My example of favicons was to simplify the question. The real case is 
different.


Then please tell us the real details. In full if possible.
favicon is one of the special-case type of URLs and like Eliezer and I 
already mentioned there are some specific usage for them which 
directly causes problems with your stated goals or even using it as a 
simplified test case. Perhaps your real case is also using similar
special-case URLs with other problems - but nobody can assist with 
that if you hide details.


So please at least avoid "favicon" references for the remainder of 
this discussion. You have indicated that they are irrelevant.


I want to cache all "favicons" (that is, other resources, internally 
used) for 60 minutes.

For a given "favicon", I'd like to have the following caching policy:


Anywho, ignoring all the protocol and UA special-case behaviour 
factoids because you said that was a fake example...


The period of 60 minutes should start when the first consumer 
consumes the favicon. Let's mark the time for that first request as 
T0 (T Zero).


Your policy assumes and requires that your proxy is the only one 
between users and the origin server. If your upstream at any stage 
has a proxy, the object age will not meet your T0 criterion - this is
why Last-Modified and Age headers are used in HTTP. To indicate an 
objects time since creation regardless of whether the object might 
have been newly generated by the origin, altered by an intermediary
or stored for some time by an intermediary or the origin itself 
(server-side caching or static archive).


FWIW: I am working with a client at present who wants to do this type
of caching for every URL in existence, but only for a few minutes. 
They have a growing list of domain names where the policy has to be 
disabled due to problems it causes to user traffic.


During T0 until T0+60minutes, this favicon should be considered as 
"fresh", in terms of caching.


The single value of 60 in the refresh_pattern line "max" field along 
with override-expire override-lastmod meets the above criteria.


However as I said earlier, freshness does not guarantee a HIT. There 
are many other HTTP features which need to be considered on top of 
that freshness to determine whether it HITs or MISSes.


After T0+60minutes, this favicon should be considered as "stale", in 
terms of caching, and should be re-fetched by Squid, upon request.


There is no such thing as a refetch in HTTP caching.
There is only MISS or REFRESH. The revalidation may happen 
transparently at any time and you never see it.


The favicon would be cached even if the original server explicitly 
instructed not to cache nor store the favicon.


The refresh_pattern ignore-private and ignore-no-store meet that
criteria in a way. The object result from the current transaction will
be left in the cache regardless of what might happen to it on any future
or past ones.

Re: [squid-users] caching for 60 minutes, ignoring any header

2013-09-23 Thread Amos Jeffries

On 23/09/2013 7:21 p.m., Ron Klein wrote:
My example of favicons was to simplify the question. The real case is 
different.


Then please tell us the real details. In full if possible.
favicon is one of the special-case type of URLs and like Eliezer and I 
already mentioned there are some specific usage for them which directly 
causes problems with your stated goals or even using it as a simplified 
test case. Perhaps your real case is also using similar special-case
URLs with other problems - but nobody can assist with that if you hide 
details.


So please at least avoid "favicon" references for the remainder of this 
discussion. You have indicated that they are irrelevant.


I want to cache all "favicons" (that is, other resources, internally 
used) for 60 minutes.

For a given "favicon", I'd like to have the following caching policy:


Anywho, ignoring all the protocol and UA special-case behaviour factoids 
because you said that was a fake example...


The period of 60 minutes should start when the first consumer consumes 
the favicon. Let's mark the time for that first request as T0 (T Zero).


Your policy assumes and requires that your proxy is the only one between 
users and the origin server. If your upstream at any stage has a proxy,
the object age will not meet your T0 criterion - this is why 
Last-Modified and Age headers are used in HTTP. To indicate an objects 
time since creation regardless of whether the object might have been 
newly generated by the origin, altered by an intermediary or stored for
some time by an intermediary or the origin itself (server-side caching 
or static archive).


FWIW: I am working with a client at present who wants to do this type of
caching for every URL in existence, but only for a few minutes. They 
have a growing list of domain names where the policy has to be disabled 
due to problems it causes to user traffic.


During T0 until T0+60minutes, this favicon should be considered as 
"fresh", in terms of caching.


The single value of 60 in the refresh_pattern line "max" field along 
with override-expire override-lastmod meets the above criteria.


However as I said earlier, freshness does not guarantee a HIT. There are 
many other HTTP features which need to be considered on top of that 
freshness to determine whether it HITs or MISSes.


After T0+60minutes, this favicon should be considered as "stale", in 
terms of caching, and should be re-fetched by Squid, upon request.


There is no such thing as a refetch in HTTP caching.
There is only MISS or REFRESH. The revalidation may happen transparently 
at any time and you never see it.


The favicon would be cached even if the original server explicitly 
instructed not to cache nor store the favicon.


The refresh_pattern ignore-private and ignore-no-store meet that 
criteria in a way. The object result from the current transaction will 
be left in the cache regardless of what might happen to it on any future 
or past ones.
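
Pulling those pieces together for the XML scenario described earlier in
the thread, a refresh_pattern matching the stated 60-minute policy might
look like this (a sketch only; the \.xml$ pattern is an assumption about
the undisclosed paths):

refresh_pattern -i \.xml$ 60 0% 60 override-expire override-lastmod ignore-private ignore-no-store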



Yes, I know it might be considered a bad practice,


As stated your caching policy is not particularly bad. The use/need of 
ignore-private and ignore-no-store is the only bad thing and the strong 
sign that you are possibly violating some law...



and perhaps illegal to some readers,


... so consulting a lawyer is recommended.

We provide those controls in Squid for specific use-cases. Yours may or 
may not be one of those it is hard to tell from a fake example.


but I assure you that the other servers (the real web servers) that 
provide the responses, are business partners and they gave me their 
approval to override their caching policy. However, they don't want to 
change their configuration and it's totally up to me to create my 
caching layer.


They may not be willing to alter their public cache controls, but 
Surrogate-Control features available in Squid offer an alternative 
targeted caching policy to be emitted by their servers for your proxy. 
This assumes they are willing to setup such alternative policy and you 
configure your proxy as a reverse-proxy for their traffic.


Your whole problem would be solved by the upstream simply sending: 
Surrogate-Control: max-age=3600;your_proxy_fqdn
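
On the Squid side that only needs the surrogate ID to match what the
servers emit (a sketch; your_proxy_fqdn is the placeholder from the line
above):

httpd_accel_surrogate_id your_proxy_fqdn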


And another thing: the clients are not web browsers. The clients 
consuming these resources ("favicons" for sake of simplicity) are 
software components using HTTP as their transport protocol.


Thanks for any advice on the subject.


Well...
 you have a set of URLs with undefined behaviour differences from the 
notably special-case ones in your example ...
 being fetched by clients with undefined but very big behaviour 
differences from the UA which would be fetching your example URLs ...


... and you want us to help with specific details about why your config 
is not working as expected?

 As the old cliche goes "insufficient data".

Amos



Re: [squid-users] caching for 60 minutes, ignoring any header

2013-09-23 Thread Ron Klein

Hi Amos,

Thanks for your thorough response.
Please refer to my response to Eliezer.

Thanks!

On 23-Sep-13 07:29, Amos Jeffries wrote:

On 23/09/2013 3:01 p.m., Ron Klein wrote:

Hi,

I'm trying to cache all favicon files, named favicon.ico, located
always in the root of the web site (when they exist, of course).
I would like to ignore any caching instruction originating from the
(real) web server response headers.
For instance, if I get the "last modified" header, I'd like to ignore 
it.

I want the caching policy to be purely "mine".


FYI: Last-Modified is not caching policy. It is a timestamp telling 
when the object was last changed. *Your* caching policy relies on such 
details in order to calculate age the same as the default ones do.


And no there is no way to replace the caching features of HTTP with 
your own. All Squid does is allow you to tune the parameters used by 
the algorithm.





I use Squid 3 on Ubuntu 12.04 .
I created the following instruction in the configuration file:
refresh_pattern -i ^http(s?)://.*/favicon.ico$   60  0% 60  
ignore-private override-expire override-lastmod ignore-no-store


Erm...

"ignore-no-store" makes everything which is forbidden to be cached go 
into the storage.
NOTE: no-store on a favicon is usually only done on private 
company internal sites where the entire domain or subdomain (its 
existence even) is legally privileged information.


"ignore-private" makes Squid ignore the privacy restrictions on marked 
content and deliver the per-user content to all users.
   NOTE: private content *can* be cached by Squid. It is the ability 
to send one user's private data to another user which is enabled by
this option.
   Consider that the favicon is sometimes used for ever-cookie style 
tagging of users or signalling of persistent authentication states. 
You just caused any server doing that to send the wrong signals to the 
wrong users.


"override-lastmod" makes Squid ignore the Last-Modified timestamp when 
applying caching policy.
  NOTE: without a last modified timestamp the caching policy is fed 
the Date header on the transaction. Effectively everything is less 
than 1 second old.


"override-expires" makes Squid ignore the Expires: header on the 
response when applying caching policy.

  NOTE: without expiry timestamp the object *never* expires.



My question:
Is this the correct instruction? I think not, since I get "HIT" 
response headers even after one hour of caching.


Are HITs somehow bad? note that HIT is not related to FRESH/STALE in 
HTTP/1.1. It just means the cached object is *able* to be sent to the 
client immediately.


You can set debug_options 22,3 to get a trace of the refresh algorithm 
tests and reasons about a response FRESH/STALE.


Amos




Re: [squid-users] caching for 60 minutes, ignoring any header

2013-09-23 Thread Ron Klein
My example of favicons was to simplify the question. The real case is 
different.
I want to cache all "favicons" (that is, other resources, internally 
used) for 60 minutes.

For a given "favicon", I'd like to have the following caching policy:
The period of 60 minutes should start when the first consumer consumes 
the favicon. Let's mark the time for that first request as T0 (T Zero).
During T0 until T0+60minutes, this favicon should be considered as 
"fresh", in terms of caching.
After T0+60minutes, this favicon should be considered as "stale", in 
terms of caching, and should be re-fetched by Squid, upon request.
The favicon would be cached even if the original server explicitly 
instructed not to cache nor store the favicon. Yes, I know it might be 
considered a bad practice, and perhaps illegal to some readers, but I 
assure you that the other servers (the real web servers) that provide 
the responses, are business partners and they gave me their approval to 
override their caching policy. However, they don't want to change their 
configuration and it's totally up to me to create my caching layer.


And another thing: the clients are not web browsers. The clients 
consuming these resources ("favicons" for sake of simplicity) are 
software components using HTTP as their transport protocol.


Thanks for any advice on the subject.


On 23-Sep-13 06:43, Eliezer Croitoru wrote:

You'd better leave it on the default since most browsers will cache it
automatically.
A HIT can be one of a variety of HITs, like TCP_IMS_HIT etc., and not
just a TCP_HIT.
You also need to understand how squid does the cache and override.
How does it go without the refresh_pattern?
Why would you want to force it on all sites when many of them have far
longer cache headers than 60 min?

Eliezer

On 09/23/2013 06:01 AM, Ron Klein wrote:

Hi,

I'm trying to cache all favicon files, named favicon.ico, located
always in the root of the web site (when they exist, of course).
I would like to ignore any caching instruction originating from the
(real) web server response headers.
For instance, if I get the "last modified" header, I'd like to ignore it.
I want the caching policy to be purely "mine".

I use Squid 3 on Ubuntu 12.04 .
I created the following instruction in the configuration file:
refresh_pattern -i ^http(s?)://.*/favicon.ico$   60  0% 60
ignore-private override-expire override-lastmod ignore-no-store

My question:
Is this the correct instruction? I think not, since I get "HIT" response
headers even after one hour of caching.

Thanks!