[squid-users] Squid can't cache web traffic via TPROXY
Dear all,

I just implemented a Linux box consisting of Linux 2.6.17 + tproxy + Squid 2.6 + WCCP v1 + iptables 1.3. I followed the TPROXY solution step by step, like this:

1. Recompile the kernel with the tproxy patch. ==> It's OK. When I try lsmod I see:

iptable_tproxy 23316 1
iptable_nat 13188 1 iptable_tproxy
ip_nat 29100 2 iptable_tproxy,iptable_nat
ip_conntrack 61280 3 iptable_tproxy,iptable_nat,ip_nat
ip_tables 18372 3 iptable_filter,iptable_tproxy,iptable_nat
ipt_TPROXY 6400 1
ipt_tproxy 6144 0
x_tables 19972 5 iptable_nat,ip_tables,xt_tcpudp,ipt_TPROXY,ipt_tproxy

2. Create the GRE interface:

# ifconfig gre0 127.0.0.2 up

==> It got a good result.

3. Uninstall the iptables 1.3 RPM, then recompile iptables with the tproxy patch. I use this iptables rule:

# iptables -A PREROUTING -i all -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128

==> I think it is fine; see the lsmod output above and this result of the iptables command:

# iptables -t tproxy -L -v
Chain PREROUTING (policy ACCEPT 265 packets, 41235 bytes)
 pkts bytes target prot opt in  out source   destination
    0     0 TPROXY tcp  --  all any anywhere anywhere  tcp dpt:http TPROXY redirect 0.0.0.0:3128
Chain OUTPUT (policy ACCEPT 10 packets, 771 bytes)
 pkts bytes target prot opt in  out source   destination

4. Rebuild the source RPM with the new version, squid-2.6.STABLE3-2.src.rpm, with the tproxy configure option enabled. My squid.conf looks like this:

http_port 3128 transparent tproxy vhost vport=80
always_direct allow all
http_access allow all
wccp_router x.x.x.x
wccp_version 4
wccp2_rebuild_wait off
wccp2_forwarding_method 1
wccp2_return_method 1
wccp_address 0.0.0.0

I start Squid without error.

5. Tune the kernel options: disable rp_filter, enable IP forwarding.

# sysctl -a | grep rp_filter
net.ipv4.conf.gre0.arp_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
net.ipv4.conf.eth1.arp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 1

6. Enable WCCP on the router.

7. Debug traffic with tcpdump:
- I can see port 80 traffic between the client and the web server.
- TPROXY can capture everything.

But I can't see any entries in /var/log/squid/access.log. Please help me!!! Thanks
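For reference, the rp_filter / ip_forward tuning described above can be made persistent across reboots in /etc/sysctl.conf. This is only a sketch; the interface names (eth0, eth1, gre0) are taken from the setup in the post and should be adjusted to match your box:

```conf
# /etc/sysctl.conf fragment -- persist the TPROXY-related tuning.
# Interface names are assumptions based on the setup described above.
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
```

Apply without rebooting with `sysctl -p`.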
[squid-users] redirect access to wpad.dat
How can I redirect access to wpad.dat to my own version of this file using squid? TIA, dny --- http://bloglines.com/public/bacaan --- latest news - disasters/earthquakes/tsunami/flu/etc ... they look but do not see and hear but do not listen or understand. Mat 13:13 ... but that which cometh out of the mouth, this defileth a man. Mat 15:11
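One common approach is a redirector helper: Squid hands each request to the helper on stdin (one line per request, URL first) and reads the rewritten URL back on stdout, with an empty line meaning "leave it unchanged". Below is a minimal sketch, not tested against this setup; the helper script, its path, and the replacement URL `http://proxy.example.local/wpad.dat` are all hypothetical placeholders:

```python
#!/usr/bin/env python
# Hypothetical Squid redirector: rewrite any request for .../wpad.dat
# to a locally hosted copy. LOCAL_WPAD is an assumed URL -- point it
# at wherever you serve your own wpad.dat.
import sys

LOCAL_WPAD = "http://proxy.example.local/wpad.dat"

def rewrite(line):
    # Squid passes one request per line: "URL client_ip/fqdn ident method"
    url = line.split()[0]
    if url.endswith("/wpad.dat"):
        return LOCAL_WPAD
    return ""  # empty reply tells Squid to leave the URL unchanged

if __name__ == "__main__":
    for raw in sys.stdin:
        raw = raw.strip()
        if raw:
            sys.stdout.write(rewrite(raw) + "\n")
            sys.stdout.flush()
```

Hook it in with `redirect_program` (Squid 2.5) or `url_rewrite_program` (Squid 2.6) in squid.conf.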
[squid-users] Squid 2.6 + COSS comparison
Hi everyone, The COSS code in Squid-2.6 has come quite far from its original design by Eric Stern. Steven Wilton has put an enormous amount of effort into the COSS design to fix the remaining bugs and dramatically improve its performance. I've assembled a quick webpage showing the drop in CPU usage and the negligible effect on hit-rate. Steven Wilton provided the statistics from two Squid caches he administers. You can find it here - http://www.squid-cache.org/~adrian/coss/. Steven is running a recent snapshot of squid-2.6. The latest -STABLE release of Squid-2.6 doesn't incorporate all of the COSS bugfixes (and there's at least one really nasty bug!) so if you're interested in trying COSS out please grab the latest Squid-2.6 snapshot from the website. Adrian
Re: [squid-users] [squid-users]: WARNING: Disk space over limit: 31457564 KB > 31457280 KB. How to avoid it?
On 9/18/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Mon 2006-09-18 10:31 -0700, Pranav Desai wrote:
> > I am running some polymix-4 tests with squid 2.6-S3. Every single test
> > I have run so far has these warnings. Some FAQs and other searches
> > suggest that this is caused by a corrupted swap.state, and the
> > solution is to wipe out the cache dirs and rebuild the cache.
>
> Small amounts above the limit simply mean things run a bit too quickly
> for a while, with the garbage collection not keeping up. Very large
> amounts above the limit (often many times the amount of storage
> available) indicate a corrupted swap.state.
>
> Regards Henrik

Could any of the above have any impact on performance? Does the LRU start after the disk is full, and can that have any impact? I am just thinking out loud ... will try some more tests ... Thanks for your time. -- Pranav -- http://pd.dnsalias.org
Re: [squid-users] OSX and Multi-Tentacled Creatures
useradd squid, groupadd squid if on linux. Quoting Luke <[EMAIL PROTECTED]>: > > If there is anyone on this list who is successfully using squid 2.5, and is > willing to share... > > Please share your technique for creating the squid user and group... > > I wasn't sure how to go about this... > > So i just duplicated the www user in "NetInfo Manager"... > > Then i renamed some things to make sense... > > i have a feeling that this was not the best approach... > > would someone be willing to email me screen shots of a properly created > squid user and group... ? > > or you could post it here for the benefit of the group... > > > thanks > -- Dwayne Hottinger Network Administrator Harrisonburg City Public Schools
Re: [squid-users] Caching Issue with accelerator mode
Currently not using refresh_stale_hit. Since it's a new feature in 2.6, should I try setting it to 20 seconds or so? Tried testing 2.6 on production with high traffic; still the same problem. When setting the page expiration time to 60 sec, the cache works properly: the page Age never goes above 59 secs, even during peak time. Switching collapsed_forwarding on/off doesn't seem to have an effect on it either. Regards Frank

Henrik Nordstrom wrote:
> On Tue 2006-09-12 08:00 -0700, Frank Hoang wrote:
> > During offpeak times, caching will work and the Age of the page will
> > always be under 20 seconds.
>
> This works the way it's intended. The page will always have an age under
> 20 seconds.
>
> > During peak times, when traffic is high, the Age of the page will stay
> > cached by squid and have a Page Age of ~120-300 seconds.
>
> Maybe your backend is a bit overloaded, taking time to compose the page?
> Have you perhaps set refresh_stale_hit?
>
> Regards Henrik
RE: [squid-users] "Efficiency" of around 10%, is that normal
On Mon 2006-09-18 15:23 -0700, Bas Rijniersce wrote:
> Hi,
>
> But wouldn't repeated visits by the same people have the same effect?

Not that noticeable if the site is also cached in their browser.

Regards Henrik
RE: [squid-users] "Efficiency" of around 10%, is that normal
On Mon 2006-09-18 13:43 -0700, Bas Rijniersce wrote:
> Yes, that is what I meant. I don't think the users here are very
> different from the average user. Is 10% then an expected hit rate?

How long has the cache been running? It takes a few days (up to a week) until the hit ratio is fully restored after flushing the cache.

Also, 40 users isn't very many. Hit ratio is very dependent on having multiple users with common interests. With only 40 users it isn't unlikely that they to a large extent visit different sites, with only a relatively small proportion being common-interest sites visited by more than one user.

Regards Henrik
Re: [squid-users] parent proxy
On Tue 2006-09-19 00:43 +0300, Nerijus Baliunas wrote:
> It works if I use the following setup:
>
> acl viaproxy2 dstdomain .itu.int
> cache_peer 10.10.10.6 parent 3128 3130
> cache_peer_access 10.10.10.6 allow viaproxy2
> never_direct allow viaproxy2
>
> Why does cache_peer_domain not always work? Should I have used
> some never_direct option?

Both cache_peer_domain and cache_peer_access only limit which requests may be sent via the peer in question; they say nothing about the other peers if you have other cache_peer lines. The only functional difference between the two directives is the syntax. You can do everything with cache_peer_access alone.

Also, using peers is just advisory, not forced. If Squid thinks there is no use in terms of hit ratio in using the peer, then it won't look at it. To force Squid to look for a peer you can use never_direct. Requests matching never_direct tell Squid it MUST use a peer and not attempt to go direct under any conditions.

Then there are also the prefer_direct on/off and nonhierarchical_direct options to tune this more softly.

Regards Henrik
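Putting Henrik's points together, a minimal sketch of the forced-parent setup (peer address and domain taken from this thread; the explicit "deny all" line is an addition, to keep unrelated traffic off the peer):

```conf
acl viaproxy2 dstdomain .itu.int
cache_peer 10.10.10.6 parent 3128 3130
cache_peer_access 10.10.10.6 allow viaproxy2
cache_peer_access 10.10.10.6 deny all
never_direct allow viaproxy2
```

With never_direct in place, requests for .itu.int must go via the parent even when Squid would otherwise consider going direct more efficient.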
Re: [squid-users] parent proxy
On Mon, 18 Sep 2006 23:50:48 +0300 Nerijus Baliunas <[EMAIL PROTECTED]> wrote:
> Because downloads from itu.int are very slow using one proxy server, but ok
> if using another proxy server (I'm still investigating why), I have the
> following setup (proxy1, 10.10.10.5, 2.5.STABLE14):
>
> cache_peer 10.10.10.6 parent 3128 3130
> cache_peer_domain 10.10.10.6 .itu.int
>
> I.e. access to *.itu.int should go via another proxy server proxy2
> (10.10.10.6, 2.5.STABLE5).

It works if I use the following setup:

acl viaproxy2 dstdomain .itu.int
cache_peer 10.10.10.6 parent 3128 3130
cache_peer_access 10.10.10.6 allow viaproxy2
never_direct allow viaproxy2

Why does cache_peer_domain not always work? Should I have used some never_direct option? Regards, Nerijus
Re: [squid-users] parent proxy
Nerijus Baliunas wrote:
> Hello, Because downloads from itu.int are very slow using one proxy server,
> but ok if using another proxy server (I'm still investigating why), I have
> the following setup (proxy1, 10.10.10.5, 2.5.STABLE14):
>
> cache_peer 10.10.10.6 parent 3128 3130

Try adding no-query and never_direct:

cache_peer 10.10.10.6 parent 3128 3130 no-query
acl ituInt dstdomain .itu.int
never_direct allow ituInt

...as it seems you want this traffic to use the parent proxy regardless of whether it's a cache hit or not.

> cache_peer_domain 10.10.10.6 .itu.int
>
> I.e. access to *.itu.int should go via another proxy server proxy2
> (10.10.10.6, 2.5.STABLE5). I go to http://www.itu.int/md/R00-CR-CIR-0261/en
> using proxy1.
>
> Log on proxy1:
> 1158610617.927 4611 127.0.0.1 TCP_MISS/200 17779 GET http://www.itu.int/md/R00-CR-CIR-0261/en - FIRST_PARENT_MISS/10.10.10.6 text/html
>
> Log on proxy2:
> 1158610614.296 0 10.10.10.5 UDP_MISS/000 61 ICP_QUERY http://www.itu.int/md/R00-CR-CIR-0261/en - NONE/- -
> 1158610618.905 4608 10.10.10.5 TCP_MISS/200 17702 GET http://www.itu.int/md/R00-CR-CIR-0261/en - DIRECT/156.106.192.163 text/html
>
> Then I press any doc or pdf link on that page. The download is very slow,
> and it stops after about 10 minutes; the download is incomplete and there is
> nothing in the squid logs on either server. It seems the download does not
> go via proxy2. Why?

Look into the hierarchy_stoplist directive to see why the child proxy would bypass the parent for the PDF request.

> If I download the same doc or pdf file using proxy2, it is downloaded ok and
> the log is:
> 1158611275.248 418 127.0.0.1 TCP_MISS/200 17697 GET http://www.itu.int/md/R00-CR-CIR-0261/en - DIRECT/156.106.192.163 text/html
> 1158611318.520 1984 127.0.0.1 TCP_MISS/200 243709 GET http://www.itu.int/md/dologin_md.asp? - DIRECT/156.106.192.163 application/pdf
>
> Regards, Nerijus

Chris
[squid-users] parent proxy
Hello,

Because downloads from itu.int are very slow using one proxy server, but ok if using another proxy server (I'm still investigating why), I have the following setup (proxy1, 10.10.10.5, 2.5.STABLE14):

cache_peer 10.10.10.6 parent 3128 3130
cache_peer_domain 10.10.10.6 .itu.int

I.e. access to *.itu.int should go via another proxy server proxy2 (10.10.10.6, 2.5.STABLE5). I go to http://www.itu.int/md/R00-CR-CIR-0261/en using proxy1.

Log on proxy1:
1158610617.927 4611 127.0.0.1 TCP_MISS/200 17779 GET http://www.itu.int/md/R00-CR-CIR-0261/en - FIRST_PARENT_MISS/10.10.10.6 text/html

Log on proxy2:
1158610614.296 0 10.10.10.5 UDP_MISS/000 61 ICP_QUERY http://www.itu.int/md/R00-CR-CIR-0261/en - NONE/- -
1158610618.905 4608 10.10.10.5 TCP_MISS/200 17702 GET http://www.itu.int/md/R00-CR-CIR-0261/en - DIRECT/156.106.192.163 text/html

Then I press any doc or pdf link on that page. The download is very slow, and it stops after about 10 minutes; the download is incomplete and there is nothing in the squid logs on either server. It seems the download does not go via proxy2. Why?

If I download the same doc or pdf file using proxy2, it is downloaded ok and the log is:
1158611275.248 418 127.0.0.1 TCP_MISS/200 17697 GET http://www.itu.int/md/R00-CR-CIR-0261/en - DIRECT/156.106.192.163 text/html
1158611318.520 1984 127.0.0.1 TCP_MISS/200 243709 GET http://www.itu.int/md/dologin_md.asp? - DIRECT/156.106.192.163 application/pdf

Regards, Nerijus
[squid-users] OSX and Multi-Tentacled Creatures
If there is anyone on this list who is successfully using squid 2.5, and is willing to share... Please share your technique for creating the squid user and group... I wasn't sure how to go about this... So i just duplicated the www user in "NetInfo Manager"... Then i renamed some things to make sense... i have a feeling that this was not the best approach... would someone be willing to email me screen shots of a properly created squid user and group... ? or you could post it here for the benefit of the group... thanks
RE: [squid-users] "Efficiency" of around 10%, is that normal
Hi, > Not sure what "efficiency" means, but if it means cache hit rate, that > largely can depend on your users, and on the cacheability of the content > they're fetching. Yes, that is what I meant. I don't think the users here are very different from the average user. Is 10% then an expected hitrate? Bas
Re: [squid-users] "Efficiency" of around 10%, is that normal
Not sure what "efficiency" means, but if it means cache hit rate, that can largely depend on your users, and on the cacheability of the content they're fetching. --On September 18, 2006 1:21:20 PM -0700 Bas Rijniersce <[EMAIL PROTECTED]> wrote:

Hello, I was used to getting efficiencies of around 50%. My current setup does not get past 10% as reported by Calamaris. There are around 40 people behind the proxy. After Googling around I added the following lines to squid.conf:

cache_mem 256 MB
maximum_object_size 32 MB
maximum_object_size_in_memory 128 KB

The system has 512 MB of memory and very few other tasks. Is there anything else I can do to improve efficiency? Thank you, Bas

Bas Rijniersce IT Specialist @ Seaspan Ship Management E: [EMAIL PROTECTED] P: +1 604 638 2620 M: +1 604 616 4969

-- "Genius might be described as a supreme capacity for getting its possessors into trouble of all kinds." -- Samuel Butler
[squid-users] "Efficiency" of around 10%, is that normal
Hello, I was used to getting efficiencies of around 50%. My current setup does not get past 10% as reported by Calamaris. There are around 40 people behind the proxy. After Googling around I added the following lines to squid.conf:

cache_mem 256 MB
maximum_object_size 32 MB
maximum_object_size_in_memory 128 KB

The system has 512 MB of memory and very few other tasks. Is there anything else I can do to improve efficiency? Thank you, Bas

Bas Rijniersce IT Specialist @ Seaspan Ship Management E: [EMAIL PROTECTED] P: +1 604 638 2620 M: +1 604 616 4969
Re: [squid-users] [squid-users]: WARNING: Disk space over limit: 31457564 KB > 31457280 KB. How to avoid it?
On Mon 2006-09-18 10:31 -0700, Pranav Desai wrote:
> I am running some polymix-4 tests with squid 2.6-S3. Every single test
> I have run so far has these warnings. Some FAQs and other searches
> suggest that this is caused by a corrupted swap.state, and the
> solution is to wipe out the cache dirs and rebuild the cache.

Small amounts above the limit simply mean things run a bit too quickly for a while, with the garbage collection not keeping up. Very large amounts above the limit (often many times the amount of storage available) indicate a corrupted swap.state.

Regards Henrik
Re: [squid-users] DNS client options
On Mon 2006-09-18 14:18 -0300, Alejandro wrote:
> 1) Internal DNS client feature only uses UDP protocol and external uses
> UDP and TCP ???

The internal one also uses TCP when needed in the current versions.

> 2) Is it necessary to use TCP for normal DNS queries ??? Because I know
> that ALL normal queries run over UDP except the zone transfer
> operations.

TCP is needed for any large DNS response in some special cases (i.e. when using djb dnscache, or a few other DNS resolvers).

> 3) For a LAN with 80/100 users... is it convenient to use internal or
> external DNS client mode ???

Normally the default internal one is preferable. In some corner cases it may be desirable to use the older external DNS client, but today that's mainly if you need name resolution other than DNS or /etc/hosts.

Regards Henrik
Re: [squid-users] More than one AD domains for Squid LDAP authentications
On Mon 2006-09-18 12:50 +0200, Marek Dvornik wrote:
> But we need authenticate users also to another AD. Not subdirectory of
> domain but another AD.
> For example:
> AD1 = DC=europe,DC=domain,DC=eu
> AD2 = DC=africa,DC=family,DC=com

I don't think this will work, unless all users can be indexed on a single AD. Try if the following works:

ldapsearch -b . -D cn=squid,ou=users,dc=your,dc=domain -h your.primary.adserver -W [EMAIL PROTECTED]

If that works for both domains then it's fine.

Regards Henrik
[squid-users] [squid-users]: WARNING: Disk space over limit: 31457564 KB > 31457280 KB. How to avoid it?
Hello All,

I am running some polymix-4 tests with squid 2.6-S3. Every single test I have run so far has these warnings. Some FAQs and other searches suggest that this is caused by a corrupted swap.state, and the solution is to wipe out the cache dirs and rebuild the cache. Is there a better solution than this, or does anyone know what causes the swap to get corrupted? It seems odd that it gets corrupted every single time. I am asking since most of the tests seem to fail under load close to the point where these messages start coming, and I don't want to wipe a cache store in the middle of a test.

Setup
---
polymix-4, 800 req/s
box - Dual Core AMD Opteron(tm) Processor 270 HE, 16GB RAM, single SATA disk.

squid-version
---
Squid Cache: Version 2.6.STABLE3
configure options: '--prefix=/usr/squid' '--exec-prefix=/usr/squid' '--sysconfdir=/usr/squid/etc' '--enable-snmp' '--enable-err-languages=English' '--enable-linux-netfilter' '--enable-async-io=24' '--enable-storeio=ufs,aufs,null,coss' '--enable-coss-aio-ops' '--enable-linux-tproxy' '--enable-gnuregex' '--enable-internal-dns' '--enable-epoll' '--with-maxfd=32768' 'CFLAGS=-g -O2 -pg '

squid.conf
---
visible_hostname 10.51.6.102
cache_dir aufs /var/cache/squid 30720 16 256
http_port 8080
request_body_max_size 0
snmp_port 3401
negative_ttl 0 minutes
pid_filename /var/run/squid.pid
coredump_dir /var/log/squid
cache_effective_user squid
cache_effective_group squid
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cache_swap_log /var/log/squid/swap.log
logfile_rotate 10
icp_port 3130
icp_query_timeout 2
log_icp_queries on
extension_methods SEARCH PROPPATCH
forwarded_for on
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1 10.51.6.102
acl manager proto cache_object
acl snmppublic snmp_community public
http_access allow localhost
miss_access allow all
http_access allow all
snmp_access allow snmppublic all
memory_pools on
cache_mem 1 GB

I also have the gprof profile and the test results, in case you are interested. Thanks for your time. -- pranav -- http://pd.dnsalias.org
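It is worth checking how far over the limit the warning actually is, since the diagnosis differs sharply between a small and a huge overshoot. A quick arithmetic check using the numbers from the subject line and the cache_dir directive above:

```python
# Sanity-check the numbers in the warning: the cache_dir line allows
# 30720 MB, and the warning reports 31457564 KB in use. A tiny
# overshoot suggests garbage collection briefly lagging behind writes,
# not a corrupted swap.state.
cache_dir_kb = 30720 * 1024        # limit from "cache_dir aufs ... 30720"
used_kb = 31457564                 # from the WARNING line
overshoot_kb = used_kb - cache_dir_kb
print(cache_dir_kb)                # 31457280, matches the warning's limit
print(overshoot_kb)                # 284 KB -- a tiny fraction of a percent
```

Here the overshoot is only 284 KB out of 30 GB, which matches the benign "garbage collection not keeping up" case rather than corruption.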
[squid-users] DNS client options
Dear all, I'm new to squid and now I need to know these short topics for the stable version: 1) Does the internal DNS client feature only use the UDP protocol, while the external one uses UDP and TCP? 2) Is it necessary to use TCP for normal DNS queries? Because I know that ALL normal queries run over UDP except zone transfer operations. 3) For a LAN with 80/100 users, is it convenient to use internal or external DNS client mode? Thanks a lot, alex
Re: [squid-users] 5 second delay
Hey George, Since one of my hats is working at UniMelb as a NetAdmin, I checked this on my setup and had no delay at all. I am running (after fixing my own stupid compile flag errors...) a generic configuration using transparent proxy on STABLE3 on Slackware Linux. Does not seem to be an inherent squid problem. -Michael Carmody

----- Original Message ----- From: "Henrik Nordstrom" <[EMAIL PROTECTED]> To: "George Dominguez" <[EMAIL PROTECTED]> Cc: Sent: Monday, September 18, 2006 7:19 PM Subject: Re: [squid-users] 5 second delay

On Mon, 2006-09-18 at 12:36 +1000, George Dominguez wrote:
> Good morning,
>
> It was brought to my attention that there is a 5 second delay when
> accessing the following page and their respective sub menus:
> http://cat.lib.unimelb.edu.au/

Start by checking access.log. One of the columns indicates the response time. Maybe you can find an interesting pattern in what is delaying the pages? (might be some inline object not working out well)

Regards Henrik
Re: [squid-users] block Skype with Squid
On Monday 18 September 2006 14:03, Pavel Ivanchev wrote:
> Hi there! I'm interested in how to block skype with squid. I found in the
> net some how-to and I followed it, but no result:
> acl block_skype_IPs urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
> acl connect method CONNECT
> http_access deny connect block_skype_IPs all

1. You likely don't need "all" here.
2. Try url_regex instead of urlpath_regex.

Note that I don't use Skype (and never will) so I can't test it. Christoph
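The url_regex suggestion makes sense because a CONNECT request has no URL path: its request target is just "host:port", so a urlpath_regex ACL has nothing to match against, while url_regex is matched against the whole string. A quick check of the pattern against sample CONNECT targets (a sketch; the hostnames are illustrative, and this only tests the regex, not Squid itself):

```python
# The pattern from the thread, applied the way url_regex would apply it:
# against the full CONNECT target "host:port". An IP-literal target
# matches; a normal hostname does not.
import re

ip_literal = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")

def blocked(connect_target):
    return bool(ip_literal.match(connect_target))

print(blocked("196.34.23.4:443"))      # True  -> the deny rule would fire
print(blocked("www.example.com:443"))  # False -> normal CONNECT allowed
```

With urlpath_regex the same pattern never fires on CONNECT, which is consistent with the TCP_MISS:DIRECT log entry in the original post.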
[squid-users] block Skype with Squid
Hi there! I'm interested in how to block Skype with squid. I found some how-tos on the net and followed them, but no result:

acl block_skype_IPs urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
acl connect method CONNECT
http_access deny connect block_skype_IPs all

After that I was still able to use Skype. Just for a test, I wrote https://196.34.23.4 in the browser address bar (I don't know what this IP is), but anyway the connection wasn't rejected by squid; after a while the connection failed with "Connection timed out". The log file says: "CONNECT 196.34.23.4:443 HTTP/1.0" 503 0 TCP_MISS:DIRECT instead of TCP_DENIED. Where am I wrong?
[squid-users] More than one AD domains for Squid LDAP authentications
Hello, We use a Squid proxy, and for authentication of users from Active Directory we use squid_ldap_auth. Users authenticate to one AD. It works fine. But we need to authenticate users also to another AD - not a subdirectory of the domain, but another AD. For example: AD1 = DC=europe,DC=domain,DC=eu AD2 = DC=africa,DC=family,DC=com Is it possible? Thanks Marek
Re: [squid-users] 5 second delay
On Mon, 2006-09-18 at 12:36 +1000, George Dominguez wrote:
> Good morning,
>
> It was brought to my attention that there is a 5 second delay when
> accessing the following page and their respective sub menus:
> http://cat.lib.unimelb.edu.au/

Start by checking access.log. One of the columns indicates the response time. Maybe you can find an interesting pattern in what is delaying the pages? (might be some inline object not working out well)

Regards Henrik
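The access.log check Henrik suggests can be scripted: in Squid's native log format the second field is the elapsed response time in milliseconds. A minimal sketch that flags slow requests (the sample log lines are made up for illustration, not taken from the real server):

```python
# Flag access.log entries over a response-time threshold. Native Squid
# log fields: timestamp, elapsed-ms, client, code/status, bytes,
# method, URL, ident, hierarchy/host, content-type.
THRESHOLD_MS = 5000

def slow_requests(lines, threshold_ms=THRESHOLD_MS):
    slow = []
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        elapsed_ms = int(fields[1])   # field 2: response time in ms
        if elapsed_ms >= threshold_ms:
            slow.append((elapsed_ms, fields[6]))  # field 7: URL
    return slow

sample = [
    "1158610617.927 4611 10.0.0.5 TCP_MISS/200 17779 GET http://cat.lib.unimelb.edu.au/ - DIRECT/1.2.3.4 text/html",
    "1158610620.100 5210 10.0.0.5 TCP_MISS/200 9000 GET http://cat.lib.unimelb.edu.au/search - DIRECT/1.2.3.4 text/html",
]
print(slow_requests(sample))  # [(5210, 'http://cat.lib.unimelb.edu.au/search')]
```

Running this over the real log and grouping the flagged URLs would show whether the delay comes from the pages themselves or from some inline object.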
[squid-users] Strange Mbuf Problem
Hi all, Are there any known bugs or config problems etc. that might cause mbufs on a FreeBSD box to be completely utilised? This is not under high load - it's during the down time, and they are completely used within 5 minutes, whereas the cache has been running under much higher load all day. I believe I have narrowed it down to specific traffic - a machine behind the cache was mirroring content, getting only 206 responses (partial content). Could this be the cause? Thanks in advance Dave