Re: [squid-users] WCCP configuration
Hi,

CEF: 11336 indicates that all requests ended up going direct, i.e. the router presumed your cache could not serve the request.

Increase debugging levels on the cache with "debug_options 80,3", and on the Cisco with "debug ip wccp packets" and "debug ip wccp events", and monitor both.

Off the bat I'd ask why you are trying to use tproxy and WCCP - my feeling is that it's somewhat self-defeating, but I digress.

I feel you need an ACL that explicitly denies ANYTHING from the caches from travelling via WCCP, e.g.:

  ip wccp web-cache redirect-list cache-users

  ip access-list extended cache-users
   deny   ip host 10.1.1.20 any
   permit ip any any

Where 10.1.1.20 would be the cache server.

Regards,
Regardt

vivek...@aol.in wrote:

  OS - Fedora 8
  Kernel - 2.6.20
  Cttproxy - 2.6.20
  Cisco Router - IOS 12.4

  We have compiled squid+Tproxy and it works fine. Tunnelling has been done
  between the squid machine and the router. We need to configure WCCP.

  The WCCP config in squid:

    wccp2_router xxx.xx.xxx.xxx
    wccp_version 4
    wccp2_forwarding_method 1
    wccp2_return_method 1
    wccp2_assignment_method 1
    wccp2_service dynamic 80
    wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
    wccp2_service dynamic 90
    wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

  Router: Eth0 - connected to LAN. Eth1 - connected to squid.

  Router WCCP configuration:

    Eth0 - ip wccp web-cache redirect out
           ip wccp web-cache redirect in
    Eth1 - ip wccp redirect exclude in

  We tried the above commands in every combination possible, interchanging
  the commands, but in vain.
  The Internet just doesn't work with WCCP.

  sh ip wccp

    Global WCCP information:
    Router information:
      Router Identifier:               xxx.xx.xxx.x
      Protocol Version:                2.0
    Service Identifier: web-cache
      Number of Service Group Clients: 1
      Number of Service Group Routers: 1
      Total Packets s/w Redirected:    11336
        Process:                       0
        Fast:                          0
        CEF:                           11336
      Redirect access-list:            -none-
      Total Packets Denied Redirect:   0
      Total Packets Unassigned:        9198
      Group access-list:               -none-
      Total Messages Denied to Group:  0
      Total Authentication failures:   0
      Total Bypassed Packets Received: 0

  Is there any simple way of configuring WCCP? We have been beating around
  the bush all day long trying to configure WCCP.
Re: [squid-users] HTTP/1.1 support and Chunked
Adam Squids wrote:

  Hello,

  I am using Squid 2.5 Build 10. From what I understand squid does not fully
  support HTTP/1.1 - not in this version, not in 2.6 and not in 3.0. Is this
  correct? I need my requests to reach my backend servers over HTTP/1.1 so
  I'll be able to multiplex them.

  Another issue: I've tried to disable chunked encoding in the http header
  via this workaround:

    acl broken dstdomain domain
    request_header_access Accept-Encoding deny broken

  but it failed. I am still getting chunked encoding.

  Thanks a million,
  Adam

Hi Adam,

The HTTP/1.1 efforts are still a work in progress AFAIK - but those with more knowledge may correct me ;-)

As to the chunked encoding ... squid 2.5 Build 10 almost certainly does not have support for the chunked bits. I'm sure Adrian will mention this ... but it's really recommended you run either 2.7.STABLE5 or 3.0.STABLE11 for testing/debugging support.

Regardt
Re: [squid-users] How do i update
Tarak Ranjan wrote:

  Hi List,

  I have SQUID 2.6.STABLE6 running, and I want to update to SQUID
  3.0.STABLE11 for SSL bump. Is it possible to do the upgrade?

  /\
  Tarak

Hi Tarak,

If compiling from source you could set up a new copy of squid without immediately affecting your current deployment. If using a package management system things could necessitate some downtime.

As indicated by yonghua the configs are rather similar. After the upgrade you can run "squid -k parse" to get an indication of compatibility between your current squid config and the requirements of v3.

Regardt
Re: [squid-users] load balancing
Hi Remy,

Just a couple of comments:

1) As per your response, if DNS is down squid is not going to be much
   happier, as it needs DNS resolution in order to function ;-)
2) WCCP would/could work very nicely for you in a fully transparent
   configuration. The cost of WCCP-capable routers plays a role.
3) A true load balancer front end like Cisco's Content Director could also
   do the job, but also runs into cost issues.

Methods I've used:

1) Running squid in an LVS (Linux Virtual Server) environment - works, but
   can get fun to configure.
2) Add another squid box to the configuration:
   - Set up this squid so that 10.200.1.2 and 10.200.1.1 are parent caches
     with CARP enabled.
   - Do not enable any disk storage on this front-end cache.

This gives you an environment where the parent caches will determine load between them and handle requests as needed. Setting dead_peer_timeout and peer_connect_timeout will also allow relatively quick responses to caches that die.

I know this last option is not fully redundant, but it is a cost-effective way of handling the load balancing issue cleanly.

Regardt

Mario Remy Almeida wrote:

  Hi All,

  What I mean to say is, e.g.:

    SP 1 = 10.200.2.1
    SP 2 = 10.200.2.2
    LAN USERS = 10.200.2.x

  All LAN users should connect to SP1 or SP2 depending upon the load, and
  if one of the SPs is down the other should take the load.

  One way of achieving load balance is with DNS:

    proxy1.example.com IN A 10.200.2.1
    proxy1.example.com IN A 10.200.2.2

  But what if the DNS server is down, and how to do failover?

  //Remy

  On Tue, 2008-12-23 at 09:05 -0600, Luis Daniel Lucio Quiroz wrote:

    Just remember when using load balancing: if you use digest auth, then
    you MUST use source persistence.

    On Tuesday 23 December 2008 08:38:27 Ken Peng wrote:

      Hi All, any links on how to configure load balancing of squid?

    See the default squid.conf, :)
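The CARP front-end described in option 2 might look something like this in the front-end box's squid.conf. This is a hedged sketch only: the port numbers and timeout values are illustrative, and the null cache_dir (to disable disk storage) requires the null store type to be compiled in:

```
# Front-end squid: hash requests across two CARP parents, keep no
# local disk cache. Addresses are the parents from the thread above.
cache_peer 10.200.1.1 parent 3128 0 carp
cache_peer 10.200.1.2 parent 3128 0 carp
never_direct allow all

# No disk storage on the front end (needs --enable-storeio=null)
cache_dir null /tmp

# React quickly when a parent dies
dead_peer_timeout 10 seconds
peer_connect_timeout 5 seconds
```

With CARP, each URL hashes consistently to one parent, so the parents' caches stay disjoint and a dead parent's share is redistributed rather than load-balanced blindly.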
Re: [squid-users] problema with de cache
Leonel Florín Selles wrote:

  Problem with the cache, Web friends:

  I am new on this list. I installed squid and it works OK, but it does not
  store the Web pages in squid's spool, and I know that because when I look
  at the logs it shows me TCP_MISS for each URL. Also, I checked
  /var/spool/squid and it has the spool's structure created, but it's
  empty. I also ran the command squid -z, but nothing at all. What can I
  do?

  Greetings

Hi Leonel,

Welcome to the list.

First off, which version of squid? What is your configuration? Are there ANY TCP/IMS HITS?

The squid -z should really only be used first time round to create the required swap structures.

Regardt
Re: [squid-users] transparent proxy not working!! any advice?
Roland Roland wrote:

  I've just created a new box with the following options, but WCCP with the
  router is still not working! Any advice?

  Using CentOS 5.2 and squid 2.6; firewall enabled, SELinux permissive.

  Done the following:

    yum update yum
    yum install squid
    squid -z

  gedit /etc/rc.d/init.d/rc.local
  # added:
    modprobe ip_gre
    ifconfig gre0 192.168.0.183 netmask 255.255.255.0 up
    # this is the same IP as my eth0

  gedit /etc/sysconfig/iptables
  # added:
    -A INPUT -i gre0 -j ACCEPT
    -A INPUT -i gre0 -j ACCEPT
    -A INPUT -p gre -j ACCEPT
    # my router's LAN interface is 192.168.0.1
    -A RH-Firewall-1-INPUT -s 192.168.0.1/24 -p udp -m udp --dport 2048 -j ACCEPT

  service iptables condrestart

  gedit /etc/squid/squid.conf
  # edited/added the following:
    http_port 80 transparent
    http_access allow all
    wccp2_router 192.168.0.1
    wccp_version 4
    wccp2_rebuild_wait on
    wccp2_forwarding_method 1
    wccp2_return_method 1
    wccp2_assignment_method 1
    wccp2_service dynamic 80
    wccp2_service dynamic 90
    wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
    wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

  Cisco router 2811 side:

    conf t
    ip wccp version 2
    ip wccp web-cache
    int f0/1   (LAN interface)
    ip wccp 80 redirect in
    ip wccp 90 redirect out

  service squid restart

  Then "sh ip wccp" on the router gave me all hits as 0 - no hits from
  squid to the router!
  service iptables status:

  [r...@localhost ~]# service iptables status
  Table: filter
  Chain INPUT (policy ACCEPT)
  num  target               prot  opt  source          destination
  1    RH-Firewall-1-INPUT  all   --   0.0.0.0/0       0.0.0.0/0
  2    ACCEPT               all   --   0.0.0.0/0       0.0.0.0/0
  3    ACCEPT               all   --   0.0.0.0/0       0.0.0.0/0
  4    ACCEPT               47    --   0.0.0.0/0       0.0.0.0/0

  Chain FORWARD (policy ACCEPT)
  num  target               prot  opt  source          destination
  1    RH-Firewall-1-INPUT  all   --   0.0.0.0/0       0.0.0.0/0

  Chain OUTPUT (policy ACCEPT)
  num  target  prot  opt  source  destination

  Chain RH-Firewall-1-INPUT (2 references)
  num  target  prot  opt  source          destination
  1    ACCEPT  all   --   0.0.0.0/0       0.0.0.0/0
  2    ACCEPT  icmp  --   0.0.0.0/0       0.0.0.0/0    icmp type 255
  3    ACCEPT  esp   --   0.0.0.0/0       0.0.0.0/0
  4    ACCEPT  ah    --   0.0.0.0/0       0.0.0.0/0
  5    ACCEPT  udp   --   0.0.0.0/0       224.0.0.251  udp dpt:5353
  6    ACCEPT  udp   --   0.0.0.0/0       0.0.0.0/0    udp dpt:631
  7    ACCEPT  tcp   --   0.0.0.0/0       0.0.0.0/0    tcp dpt:631
  8    ACCEPT  all   --   0.0.0.0/0       0.0.0.0/0    state RELATED,ESTABLISHED
  9    ACCEPT  tcp   --   0.0.0.0/0       0.0.0.0/0    state NEW tcp dpt:22
  10   ACCEPT  tcp   --   0.0.0.0/0       0.0.0.0/0    state NEW tcp dpt:80
  11   ACCEPT  tcp   --   0.0.0.0/0       0.0.0.0/0    state NEW tcp dpt:5900
  12   ACCEPT  udp   --   192.168.0.0/24  0.0.0.0/0    udp dpt:2048
  13   REJECT  all   --   0.0.0.0/0       0.0.0.0/0    reject-with icmp-host-prohibited

  lsmod:

  Module                   Size  Used by
  ip_conntrack_netbios_ns  6977  0
  xt_state                 6209  4
  ip_conntrack            53025  2  ip_conntrack_netbios_ns,xt_state
  nfnetlink               10713  1  ip_conntrack
  iptable_filter           7105  1
  ip_tables               17029  1  iptable_filter
  ip6table_filter          6849  1
  ip6_tables              18053  1  ip6table_filter
  nls_utf8                 6208  1
  ip_gre                  16737  0
  autofs4                 24517  2
  hidp                    23105  2
  rfcomm                  42457  0
  l2cap                   29505  10  hidp,rfcomm
  bluetooth               53797  5  hidp,rfcomm,l2cap
  sunrpc                 144893  1
  ipt_REJECT               9537  1
  ip6t_REJECT              9409  1
  xt_tcpudp                7105  15
  x_tables                17349  6  xt_state,ip_tables,ip6_tables,ipt_REJECT,ip6t_REJECT,xt_tcpudp
  dm_multipath            22089  0
  video                   21193  0
  sbs                     18533  0
  backlight               10049  1  video
  i2c_ec                   9025  1  sbs
  button                  10705  0
  battery                 13637  0
  asus_acpi               19289  0
  ac                       9157  0
  ipv6                   258273  17  ip6t_REJECT
  xfrm_nalgo              13765  1  ipv6
  crypto_api              11969  1  xfrm_nalgo
  lp                      15849  0
  floppy
Re: [squid-users] transparent proxy not working!! any advice?
Roland Roland wrote:

  Hello,

  The output of the debugging is as follows:

    *Jan 4 23:16:43.205: WCCP-EVNT:D90: Here_I_Am packet from 192.168.0.183: service not active
    *Jan 4 23:16:43.205: WCCP-EVNT:D80: Here_I_Am packet from 192.168.0.183: service not active

  What service is that?!

  --
  From: Regardt van de Vyver sq...@vdvyver.net
  Sent: Sunday, January 04, 2009 9:33 PM
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] transparent proxy not working!! any advice?

  [original message quoted in full; trimmed here - see the previous post in
  this thread]
Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception
Richard Wall wrote:

  WCCPv2 can support this feature by the Packet Return Method. (See
  http://www.cisco.com/en/US/docs/ios/12_0t/12_0t3/feature/guide/wccp.html,
  search for "Web Cache Packet Return". Also mentioned in your URL:
  http://bazaar.launchpad.net/~squid3/squid/3.1/annotate/9363?file_id=draftwilsonwccpv212o-20070417152110-s6qkuxj8uabe-1)
  But Henrik said squid hadn't implemented this feature yet. (See
  http://www.squid-cache.org/mail-archive/squid-users/200811/0130.html)

  Thanks for the links.

  -RichardW.

The issue with the WCCP Packet Return Method is that you'll now need to maintain state for each connection from the router, as it will keep sending packets related to the same HTTP request to the cache even if you reject the first packet correctly. This implies a full state engine in your WCCP return management, since you still need to serve valid HTTP traffic while rejecting invalid port 80 traffic.

As I recall, Adrian was indicating a proposed split of the WCCP code from squid directly - if/when this happens I believe the implementation of the WCCP2 return method will become a reality.

regards,
Regardt vd Vyver
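To see why follow-up packets keep arriving at the same cache, it helps to recall how WCCPv2 hash assignment works: each flow hashes into one of 256 buckets, and each bucket is owned by exactly one cache. The sketch below is illustrative only - it uses an arbitrary stand-in hash, not the actual WCCP hash function:

```python
# Illustrative sketch of WCCPv2-style hash assignment (NOT the real WCCP
# hash): a flow's source IP picks one of 256 buckets, and buckets are
# spread over the caches. Because the mapping is deterministic, every
# packet of a flow lands on the same cache - so rejecting the first
# packet does not stop the router sending the rest, which is why packet
# return needs per-connection state on the cache side.
import hashlib

NUM_BUCKETS = 256

def bucket_for_src_ip(ip: str) -> int:
    """Map a source IP to one of 256 buckets (stand-in hash)."""
    return hashlib.md5(ip.encode()).digest()[0]  # first byte: 0..255

def assign_buckets(caches):
    """Spread the 256 buckets evenly over the participating caches."""
    return {b: caches[b % len(caches)] for b in range(NUM_BUCKETS)}

table = assign_buckets(["10.1.1.20", "10.1.1.21"])
first = table[bucket_for_src_ip("192.168.0.50")]
# Deterministic: the same flow always maps to the same cache.
assert table[bucket_for_src_ip("192.168.0.50")] == first
```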
Re: [squid-users] transparent Proxy with WCCP
Roland Roland wrote:

  ...
  -- added to squid.conf: --

    acl MyNet src 192.168.0.0/24
    http_access allow MyNet   (this is set before the deny all rule)
    wccp_router 192.168.0.1
    http_port 3128 transparent

  -- connectivity --

    ip tunnel add wccp0 mode gre remote 192.168.0.1 local 192.168.0.108 dev eth0
    ip addr add 192.168.0.108/24 dev wccp0
    ip link set wccp0 up
    iptables -t nat -A PREROUTING -i wccp0 -j REDIRECT -p tcp --to-port 80
    # to direct from GRE to port 80
  ...

Hi Roland,

My experience is almost exclusively with wccp2, but off the bat the only thing that looks 'funky' to me is your iptables rule and a few /proc tweaks.

Try the following after doing the "ip link set wccp0 up":

  echo 1 > /proc/sys/net/ipv4/ip_forward
  echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter

The GRE tunnel is only there to provide decapsulation of the WCCP traffic from the router. Once that is done, the traffic is essentially still pointed at port 80. Since you're running your squid on port 3128, your iptables rule NEEDS to redirect incoming port 80 traffic to that port, so it should read:

  iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3128

regards,
Regardt vd Vyver
Re: [squid-users] Rotating squid logs
w...@msdrd.com wrote:

  Hello,

  logrotate.d is rotating my squid logs. I would like to know if it is OK
  to let it rotate them. Logically I think it is not, but I just want to be
  sure: will squid work if its log files get rotated?

  Thanks in advance, and sorry for asking too many questions.

Hi,

First stop I can recommend is:
http://wiki.squid-cache.org/SquidFaq/SquidLogs?highlight=%28logrotate%29#head-5f3d54d268734f58005e09bf16f125468ce90813

You need to set logfile_rotate 0 ...
http://www.squid-cache.org/Versions/v2/2.7/cfgman/logfile_rotate.html

Hope this helps and points you in the right direction.

Regardt vd Vyver
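For reference, a common pattern (a hedged sketch - the paths, schedule, and retention count are illustrative, not from the thread) is to disable squid's internal rotation with logfile_rotate 0 and have logrotate ask squid to reopen its logs after renaming them:

```
# In squid.conf: with logfile_rotate 0, "squid -k rotate" only closes
# and reopens the log files instead of renaming them itself.
#   logfile_rotate 0

# Illustrative /etc/logrotate.d/squid:
/var/log/squid/*.log {
    weekly
    rotate 5
    compress
    notifempty
    missingok
    sharedscripts
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```

Without the postrotate reopen, squid would keep writing into the renamed (rotated) files through its still-open file descriptors.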
Re: [squid-users] Update Accelerator, Squid and Windows Update Caching
Amos Jeffries wrote:

Richard Wall wrote:

  On Fri, Oct 10, 2008 at 12:30 PM, Amos Jeffries [EMAIL PROTECTED] wrote:

    Richard Wall wrote:

      Hi, I've been reading through the archive looking for information
      about squid 2.6 and Windows Update caching. The FAQ mentions problems
      with range offsets, but it's not really clear which versions of Squid
      this applies to.

    All versions. The FAQ was the result of my experiments mid last year,
    with some tweaks made early this year since Vista came out. We haven't
    done any intensive experiments with Vista yet.

  Hi Amos,

  I'm still investigating Windows Update caching (with 2.6.STABLE17/18).

  First of all, I have been doing some tests to try and find out the
  problem with Squid and Content-Range requests.

  * I watch the squid logs as a Vista box does its automatic updates, and I
    can see that *some* of its requests use ranges. (So far I have only
    seen these when it requests .psf files... some of which seem to be very
    large files... so the range request makes sense.) See:
    http://groups.google.hr/group/microsoft.public.windowsupdate/browse_thread/thread/af5db07dc2db9713

    # zcat squid.log.192.168.1.119.2008-10-16.gz | grep multipart/byteranges | \
        awk '{print $7}' | uniq | while read URL; do \
        echo $URL; wget --spider $URL 2>&1 | grep Length; done

    http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/10/windows6.0-kb956390-x86_2d03c4b14b5bad88510380c14acd2bffc26436a7.psf
    Length: 91,225,471 (87M) [application/octet-stream]
    http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf
    Length: 834,868 (815K) [application/octet-stream]
    http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/03/windows6.0-kb948590-x86_ed27763e42ee2e20e676d9f6aa13f18b84d7bc96.psf
    Length: 755,232 (738K) [application/octet-stream]
    http://www.download.windowsupdate.com/msdownload/update/software/crup/2008/09/windows6.0-kb955302-x86_1e40fd3ae8f95723dbd76f837ba096adb25f3829.psf
    Length: 7,003,447 (6.7M) [application/octet-stream]
    ...

  * I have found that curl can make range requests, so I've been using it
    to test how Squid behaves, and it seems to do the right thing. E.g.:

    - First ask for a range: the correct range is returned, X-Cache: MISS
    - Repeat the range request: the correct range is returned, X-Cache: MISS
    - Request the entire file: the entire file is correctly returned, X-Cache: MISS
    - Repeat the request: X-Cache: HIT
    - Repeat the previous range request: X-Cache: HIT
    - Request a different range: X-Cache: HIT

    curl --range 1000-1002 --header "Pragma:" -v -x http://127.0.0.1:3128 \
      http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf \
      > /dev/null

  Looking back through the archive I found this conversation from 2005:
  http://www.squid-cache.org/mail-archive/squid-users/200504/0669.html
  ... but the behaviour there sounds like a result of setting:
  range_offset_limit -1

  Seems to me that Squid should do a good job of Windows Update caching.
  There is another thread discussing how to override MS update cache
  control headers:
  http://www.squid-cache.org/mail-archive/squid-users/200508/0596.html
  but I don't see anything evil in the server response headers today. I
  guess the client may be sending no-cache headers... I'll double check
  that later. Is there some other case that I'm missing?

As I said, I have not seen Vista in detail. I just had to turn off my old hack to get around the SP1 hanging (that huge .psf perhaps?). Never had to do anything with headers.

When I did my testing it was with outdated Win98-WinXP machines (often needing SP1 in XP's case). The WU on them made an HTTPS request (seems to be auth-related even today) and requested one or more update indexes fine.
It then proceeded to random-access range requests out of the middle of the update *.cab files, using dynamic URLs at various update sites. This was causing bandwidth blowout with all the MISSes when I had several machines a week coming through.

I _think_, but have no confirmation, that the early patch-tuesday releases were done as large single .CAB files, and a particular machine may only need updating from individual fixes inside them.

As your test showed, after fetching the whole file squid can handle the ranges fine. It's when they are still in MISS state that ranges become trouble.

  I'm going to experiment, but if anyone has any positive or negative
  experience of Squid and Windows Update caching, I'd be really interested
  to hear from you.

  In case Squid cannot do Windows Update caching by itself, I'm also
  looking at integrating the Update Accelerator
  (http://update-accelerator.advproxy.net/) script with standard squid 2.6
  and wondered if anyone else had any experience of this. The update
  accelerator script is just a perl wrapper around wget which is configured
  as a Squid url_rewrite_program. It's not clear
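To make the curl tests above concrete, here is a minimal sketch of what serving a single-range "bytes=start-end" request involves on the origin/cache side. The helper names are hypothetical; real servers (and Squid) also have to handle suffix ranges, multiple ranges, and multipart/byteranges responses, which this sketch ignores:

```python
# Minimal single-range "bytes=start-end" handling. parse_range and
# slice_body are illustrative helpers, not Squid code.
def parse_range(header: str, size: int):
    """Parse 'bytes=start-end' into inclusive (start, end) offsets."""
    unit, _, spec = header.partition("=")
    if unit != "bytes":
        raise ValueError("unsupported range unit")
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else size - 1   # open-ended: to last byte
    return start, min(end, size - 1)

def slice_body(body: bytes, header: str):
    """Return (range bytes, Content-Range value) for a 206 reply."""
    start, end = parse_range(header, len(body))
    content_range = "bytes %d-%d/%d" % (start, end, len(body))
    return body[start:end + 1], content_range

# The curl test in the thread asked for bytes 1000-1002: three bytes back.
chunk, content_range = slice_body(b"x" * 2000, "bytes=1000-1002")
assert len(chunk) == 3
assert content_range == "bytes 1000-1002/2000"
```

The caching difficulty the thread describes follows directly: on a MISS, the cache only holds the slice it fetched, so it cannot satisfy arbitrary later ranges until the whole object has been pulled in.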
Re: [squid-users] Inelegant routing based on file size
Vernon Kennedy-Van Dam wrote:

  Thanks very much Amos. Much appreciated feedback.

    Hi All,

    I am looking to route download traffic based on the file size of
    download requests. If a user in our network downloads a 10 MB file, he
    gets routed through link 1. If a user requests a 100 MB file download,
    he gets routed through link 2. How is this achieved?

  It cannot be done. The file size is not known until after the file starts
  arriving - sometimes not even until it has finished arriving.

  The best you may possibly do is create a custom external ACL helper to
  scan store.log for previous file sizes of the requested URL, then use
  tcp_outgoing_address based on a best guess. This however breaks
  completely on:

  * new and unknown URLs,
  * changed URLs,
  * dynamic URLs (very common!),
  * and most websites updated by their webmaster between your visits
    (almost as common as dynamic URLs).

  Amos

Well, we used a rather complicated way to achieve something similar - so it seems possible.

Use url_rewrite_program to redirect the request to an inspection script. The inspection script then fetches the HTTP headers of the file to determine the file size. If it is small enough, or not indicated, the script returns the URL untouched so that the local cache processes it. If the file is larger, you can redirect to an alternate URL/script that can do the fetching.

The only thing I've not tested is possibly making the redirect send to another proxy server; I'm not entirely sure of that syntax. But you get the general idea.
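The inspection-script approach described above could be sketched roughly as follows. Everything specific here is an assumption for illustration: the 50 MB threshold, the bigfile.example.com fetcher URL, and the stubbed size lookup (a real helper would issue an HTTP HEAD request for the Content-Length):

```python
#!/usr/bin/env python
# Sketch of a url_rewrite_program helper in the spirit of the approach
# above. The size lookup is stubbed so the example is self-contained;
# THRESHOLD and the fetcher URL are made-up placeholders.
import sys

THRESHOLD = 50 * 1024 * 1024  # hypothetical cut-off, in bytes

def head_content_length(url):
    """Stub: return the Content-Length for url, or None if not indicated.
    A real helper would perform an HTTP HEAD request here."""
    return None

def rewrite(url, size):
    """Small or unknown size: pass through; large: send to the fetcher."""
    if size is None or size < THRESHOLD:
        return url
    return "http://bigfile.example.com/fetch?url=" + url

def main():
    # Squid feeds one request per line: "URL client/fqdn ident method ..."
    for line in sys.stdin:
        url = line.split()[0]
        sys.stdout.write(rewrite(url, head_content_length(url)) + "\n")
        sys.stdout.flush()  # rewrite helpers must not buffer replies

# Squid would run this file via url_rewrite_program and drive main()
# over stdin/stdout.
```

Note the flush after each reply: squid blocks waiting for the helper's answer, so a buffering helper stalls every request it handles.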