Re: [squid-users] load balancing
Mario Remy Almeida wrote:
> Hi All, What I mean to say is, e.g.:
>
>   SP 1      = 10.200.2.1
>   SP 2      = 10.200.2.2
>   LAN users = 10.200.2.x
>
> All LAN users should connect to SP1 or SP2 depending upon the load, and if one of the SPs is down the other should take the load. One way of achieving load balancing is with DNS:
>
>   proxy1.example.com IN A 10.200.2.1
>   proxy1.example.com IN A 10.200.2.2

Hi Remy, I agree the DNS server could do the balancing here. But to be more precise, DNS is more appropriate for load-balancing other kinds of services, like SMTP, web, etc. What I recommend is a router capable of web-traffic redirection, like WCCP on Cisco routers. If you want to know more about WCCP, this URL explains how it works: http://articles.techrepublic.com.com/5100-10878_11-6175637.html

Regards, Pritam

> And what if the DNS server is down, and also how to do failover? //Remy

On Tue, 2008-12-23 at 09:05 -0600, Luis Daniel Lucio Quiroz wrote:
> Just remember when using load balancing: if you use Digest auth, then you MUST use source persistence.

On Tuesday 23 December 2008 08:38:27 Ken Peng wrote:
> Hi All, any links on how to configure load balancing of squid?

See the default squid.conf, :)

Internal Virus Database is out of date. Checked by AVG - http://www.avg.com Version: 8.0.176 / Virus Database: 270.9.19/1857 - Release Date: 12/19/2008 10:09 AM
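The round-robin idea above can be sketched as a BIND zone-file fragment (hypothetical zone and TTL; both A records share one name, so resolvers rotate between the two proxies — but note that plain round robin does not by itself detect a dead proxy, which is Remy's failover concern):

```
; db.example.com (sketch) — two A records under one name for round robin
proxy1   300   IN   A   10.200.2.1
proxy1   300   IN   A   10.200.2.2
```

A short TTL like 300 limits how long clients keep resolving to a proxy that has gone down.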
Re: [squid-users] Internal DNS / External DNS configuraiton in squid
Kinkie wrote:
> Hello, that's not something squid can do. You can do it, with some limitations, by configuring your DNS server to use forwarding zones for your internal domains. You may want to set up a dedicated server for your proxies. Happy 2009!

On 12/31/08, Tharanga Abeyseela wrote:
> Hi folks, I am using squid 3 as my proxy and it has different ACLs. I need to use internal usernames (u...@mydomain.com) in access control lists instead of IPs. But my issue is that I am resolving the names from an external DNS server. Is there any way I can make squid use internal IPs from an internal DNS server and the others from an external DNS server? This would ease my task.

Hi Tharanga, I don't know exactly what your case is, but I would recommend you use VIEWS in your internal DNS server, matching your proxy server(s) so they read the internal DNS database while others go to the external DNS server. But again, I agree with what Kinkie said: this is not something squid can do for you.

Regards, Pritam

> many thanks, Tharanga Abeyseela
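The VIEWS suggestion can be sketched as a BIND named.conf fragment (the client range and zone-file names are hypothetical; views are matched top to bottom, so the proxies get the internal records and everyone else the external ones):

```
// named.conf sketch — split-horizon DNS via views
view "internal" {
    match-clients { 10.0.0.0/8; };      // hypothetical proxy/LAN range
    recursion yes;
    zone "mydomain.com" {
        type master;
        file "db.mydomain.internal";    // internal IPs for the ACL usernames
    };
};
view "external" {
    match-clients { any; };
    recursion no;
    zone "mydomain.com" {
        type master;
        file "db.mydomain.external";    // public-facing records
    };
};
```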
Re: [squid-users] transparent Proxy with WCCP
Regardt van de Vyver wrote:

Roland Roland wrote:
> ...
> Added to squid.conf:
>
>   acl MyNet src 192.168.0.0/24
>   http_access allow MyNet   (this is set before the deny-all rule)
>   wccp_router 192.168.0.1
>   http_port 3128 transparent
>
> Connectivity:
>
>   ip tunnel add wccp0 mode gre remote 192.168.0.1 local 192.168.0.108 dev eth0
>   ip addr add 192.168.0.108/24 dev wccp0
>   ip link set wccp0 up
>   iptables -t nat -A PREROUTING -i wccp0 -j REDIRECT -p tcp --to-port 80   <<-- to direct from GRE to port 80
> ...

Hi Roland, my experience is almost exclusively with wccp2, but off the bat the only things that look 'funky' to me are your iptables rule and a few /proc tweaks. Try the following after doing the "ip link set wccp0 up":

  echo 1 > /proc/sys/net/ipv4/ip_forward

(I guess you don't need to set ip_forward = 1 when you aren't NATing your private to public IP on the proxy — I mean, in your case, if the router is the default gw for the proxy.)

  echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter

The GRE tunnel is only there to decapsulate the WCCP traffic from the router. Once that is done, the traffic is essentially still pointing towards port 80. Since you're running your squid on port 3128, your iptables rule NEEDS to redirect incoming port-80 traffic to that port, so it should read:

  iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3128

regards, Regardt vd Vyver

It is working with the following configuration in my case: 1.
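Putting Regardt's corrections together, the full client-side sequence on the squid box would look something like this (a sketch using the addresses from the post, 192.168.0.1 = router and 192.168.0.108 = proxy; it needs root and a WCCP-speaking router, so adapt before use):

```
# decapsulate WCCP GRE traffic from the router
ip tunnel add wccp0 mode gre remote 192.168.0.1 local 192.168.0.108 dev eth0
ip addr add 192.168.0.108/24 dev wccp0
ip link set wccp0 up
# disable reverse-path filtering on the tunnel (asymmetric by design)
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
# redirect the decapsulated port-80 traffic to squid on 3128
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```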
A script to set up the GRE interface on the proxy (placeholders shown as <router-ip>/<proxy-ip> where the original message's values were stripped):

  #!/bin/bash
  case "$1" in
  up)
      echo -n "Setting gre1 UP: "
      /sbin/modprobe ip_gre
      /sbin/iptunnel add gre1 mode gre remote <router-ip> local <proxy-ip> dev eth0
      /sbin/ip addr add <proxy-ip>/32 dev gre1
      /sbin/ip link set gre1 up
      /sbin/sysctl -w net.ipv4.conf.gre1.rp_filter=0
      /sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0
      exit
      ;;
  down)
      /sbin/ip link set gre1 down
      /sbin/ip tunnel del gre1
      exit
      ;;
  esac
  exit 0

2. Configuration in my router:

  conf t
  !
  ip wccp version 1
  ip wccp web-cache redirect-list squid-acl
  !
  int fa 1/0
  ! The interface is facing towards my LAN
  ip wccp web-cache redirect in
  !
  ! But you can apply redirection in either the IN or OUT direction, and on
  ! more than one interface. This is the way I have preferred.
  ip access-list extended squid-acl
   deny ip host <proxy-ip> any
   deny ip any host <proxy-ip>
   permit ip any any
  !

Regards, Pritam
Re: [squid-users] assertion failed
Dear Henrik, thank you very much for your response. I haven't tried with 2.7-ST2, but now I will. Regards, Pritam

Henrik Nordstrom wrote:
> Do you see the same with 2.7.STABLE2? There is a suspicion that one of the changes in 2.7.STABLE3 is causing this... but nothing is confirmed yet. Regards, Henrik
[squid-users] Re: assertion failed
pritam wrote:

Hi All, knowing that it is a bug (..?), I need your help here. My squid is getting restarted often (2-3 times a day) with the following messages:

  2008/07/14 07:56:47| assertion failed: store_client.c:172: "!EBIT_TEST(e->flags, ENTRY_ABORTED)"
  2008/07/14 17:18:17| assertion failed: forward.c:109: "!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"

I have recently updated my squid to 2.7ST3 on two of my servers (one on Fedora 6, the other on CentOS 5.1) and also implemented COSS. The above problem is seen on only one of my servers (running CentOS 5.1). My questions are: Is this related to COSS? Or does it have something to do with the installed OS? Or is it related to squid 2.7ST3, because I had no such issue before with squid 2.6?

Sorry — the problem shouldn't be related to the installed OS, as my other squid box (running on Fedora) also shows the 'assertion failed:' error and gets restarted. Any suggestions for me? And what could be the best way to get rid of this problem? Your suggestions will be appreciated.

Regards, Pritam
[squid-users] assertion failed
Hi All, knowing that it is a bug (..?), I need your help here. My squid is getting restarted often (2-3 times a day) with the following messages:

  2008/07/14 07:56:47| assertion failed: store_client.c:172: "!EBIT_TEST(e->flags, ENTRY_ABORTED)"
  2008/07/14 17:18:17| assertion failed: forward.c:109: "!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"

I have recently updated my squid to 2.7ST3 on two of my servers (one on Fedora 6, the other on CentOS 5.1) and also implemented COSS. The above problem is seen on only one of my servers (running CentOS 5.1). My questions are: Is this related to COSS? Or does it have something to do with the installed OS? Or is it related to squid 2.7ST3, because I had no such issue before with squid 2.6? And what could be the best way to get rid of this problem? Your suggestions will be appreciated.

Regards, Pritam
Re: [squid-users] When worlds collide
Amos Jeffries wrote:

Paul Bertain wrote:
> What I should have said was: put an entry in /etc/hosts, and then modify /etc/nsswitch.conf on the Squid box so that it sees that same host as valid.

You could. Although, by using the internal DNS resolver for just squid, you only need to add the entry to /etc/hosts. Squid loads the hosts file to prime its internal DNS resolver. That would be the easiest way to configure it, yes. But it makes the site available to all users of Squid, not just the one client.

Hi, I think there is a trick here, and I have tested it with IE and Firefox as browsers. The user's PC first checks /etc/hosts before the DNS server. In the browser settings, use the proxy with port 3128 (like in non-transparent mode) and add the domain/host (viz. .EXAMPLE.COM/SNEAKY.EXAMPLE.COM) in the 'no proxy for:' field. This works for my clients — maybe in your scenario too.

Regards, Pritam

Amos
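The /etc/hosts side of the trick can be sketched like this (the address is a placeholder; squid primes its internal resolver from this file at startup or reconfigure, so only the Squid box needs the entry):

```
# /etc/hosts on the Squid box (sketch — 192.0.2.10 is a hypothetical address)
192.0.2.10    sneaky.example.com
```

On the client side, the equivalent is an entry such as `.example.com` in the browser's 'no proxy for:' list, so the browser goes direct for that domain while proxying everything else.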
Re: [squid-users] FW: COSS and 1Gb files...
Andy McCall wrote:
> Excellent. In that case, the setup I put in place until I worked out how to get coss working is probably what I want anyway. One thing I wasn't sure on: am I supposed to format my cache partition as aufs/ufs and specify aufs/ufs in squid.conf, or do I format it as ext2/ext3 and specify aufs/ufs in squid.conf?

You can format it as ext2 or ext3.

> At the moment, I have /cache as an ext2 file system and "cache_dir ufs 4 16 256" in my squid.conf, which I think is right.

It is better to have a different partition for coss, say /cache1, or you may want to split your /cache into two: one for aufs/ufs, the other for coss.

> If that is right, what is the view on ext2 vs ext3 as the file system? Is the choice between slower access times but quick recovery times using ext3, or quicker access times but slower recovery times using ext2?

ext2 should be better for squid caching.

Regards, Pritam
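The split layout suggested above can be sketched in squid.conf (paths and sizes are hypothetical, and the COSS options follow squid 2.7's cache_dir syntax — check the cfgman for your version):

```
# squid.conf sketch — one partition per store type, both formatted ext2
cache_dir aufs /cache1 8192 16 256                  # aufs store for larger objects
cache_dir coss /cache2/coss0 4096 max-size=65536    # COSS file for small objects
```

Objects up to max-size bytes land in the COSS store, which avoids one-file-per-object disk overhead for small, hot objects.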
Re: [squid-users] Forwarding loops...
Henrik Nordstrom wrote:

On fre, 2008-07-11 at 07:49 -0700, John Doe wrote:
> I don't use allow-miss. But I do have:
>
>   header_access Cache-Control deny all
>   header_replace Cache-Control max-age=864000
>
> I will try without it...

That explains the loops in the sibling setup. But it does not explain why your cache_peer_access rules weren't effective. Those should have worked...

> To solve this, I tried to prevent a squid from querying a sibling on behalf of another sibling. Example of squid1.conf:
>
>   cache_peer 192.168.17.12 sibling 8000 3130 proxy-only name=squid2
>   cache_peer 192.168.17.13 sibling 8000 3130 proxy-only name=squid3
>   cache_peer 192.168.17.14 sibling 8000 3130 proxy-only name=squid4
>   acl from_squids src 192.168.17.12
>   acl from_squids src 192.168.17.13
>   acl from_squids src 192.168.17.14
>   cache_peer_access squid2 deny from_squids
>   cache_peer_access squid3 deny from_squids
>   cache_peer_access squid4 deny from_squids
>
> But it is not helping...

That should definitely help. What is said in access.log?

> If I ask squid2 for a looping object, squid2's access.log shows:
>
>   1215783827.918 1 192.168.17.12 TCP_MISS/200 7188 GET http://192.168.16.23/img/spain.gif - FIRST_UP_PARENT/apache image/gif
>   1215783827.919 2 192.168.17.12 TCP_MISS/200 7233 GET http://192.168.16.23/img/spain.gif - CD_SIBLING_HIT/squid3 image/gif
>
> Why are there two requests from 17.12?

Are you still not using tcp_outgoing_address? Please add appropriate tcp_outgoing_address directives.

Hi, try commenting out (#) the tcp_outgoing_address once. Regards, Pritam

Assuming the squid2 config looks the same as squid1's, save for the rotation of the servers to squid1, 3, 4, your cache_peer_access rule won't match here, as the request is coming from 17.12, which is squid2's own address.

To answer your other question about Apache: there is no big problem with having the setup as you do, with a set of Apaches on different ports, but you may need to tweak the Apache config a little to make Apache assume port 80 in references to itself, used when sending a browser redirect etc., such as seen when requesting http://server/directory without the trailing /. Additionally, you probably only need one instance of Apache.

Regards, Henrik
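Henrik's tcp_outgoing_address suggestion can be sketched like this (the address is hypothetical for squid1; pinning the source address of squid's own fetches makes the src-based from_squids ACLs on the siblings match deterministically, instead of depending on whichever address the kernel picks):

```
# squid1.conf sketch — force peer/origin fetches out of one known address
tcp_outgoing_address 192.168.17.11
```

Each sibling would carry the analogous line with its own address.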
[squid-users] assertion failed
Hi, my squid box was restarted with the following message in cache.log:

  httpReadReply: Excess data from "HEAD http://dl_dir.qq.com/qqfile/ims/qqdoctor/tsfscan.dat"
  assertion failed: forward.c:109: "!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"
  Starting Squid Cache version 2.7.STABLE3 for i686-pc-linux-gnu...
  Process ID 22147
  With 8192 file descriptors available...

I googled the issue and couldn't really get a clear idea of what is behind it. I have squid 2.7 ST3 running on CentOS 5.1. Did I miss anything in tuning my OS to avoid this issue, or...?

Regards, Pritam
Re: [squid-users] RAM and Optimization Questions
Egi wrote:
> We see that we can save on average 18% of our total traffic (highest: 7.5 Mbit of 35 Mbit), and this server has been running for 1 week (now the disks are full). How is it possible that the RAM gets more occupied when there isn't traffic? Also, are our savings OK (18%)? Do you recommend any tuning options? Thank you!

Hi, there are two important parameters that can dramatically change the proxy's output and performance:

  ~ range_offset_limit < >
  ~ quick_abort_min < >

because after setting these parameters to 0, I had more than 4 Mbps of server in/out traffic reduced.

Hello all, can you help us here?

Regards, Pritam
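The two settings above can be sketched in squid.conf (values as in the post; with these, squid never fetches more than a client's range request asks for and aborts a transfer as soon as the client disconnects, both of which cut upstream traffic at some cost to cacheability):

```
# squid.conf sketch — the traffic-saving settings discussed above
range_offset_limit 0     # do not fetch beyond what the client's Range asked for
quick_abort_min 0 KB     # stop fetching immediately when the client aborts
```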
Re: [squid-users] Squid ZPH patch
Henrik Nordstrom wrote:
> http://www.squid-cache.org/Versions/v2/2.7/cfgman/
> http://www.squid-cache.org/Versions/v2/2.7/RELEASENOTES.html
> Regards, Henrik

Thanks for the links and your answers. I have updated my squid to 2.7 from 2.6 and have the COSS filesystem working properly, along with the new features of 2.7.

Thanks, Pritam. Kathmandu
Re: [squid-users] Squid ZPH patch
Amos Jeffries wrote:

pritam wrote:
> Amos Jeffries wrote:
>> pritam wrote:
>>> Hello, I have squid 2.6 STABLE20 and I want to update it to squid 2.7.x. Do I need to patch the squid 2.7.x source for ZPH?
>>
>> Depends on the exact ZPH functionality you want. Most of it was included. Only the pass-thru capability was left out, for architectural reasons. You will most likely need squid.conf changes to match the accepted 2.7 config options.
>
> I need to say "zph_tos_local 0x10" in my squid.conf, which I cannot do in squid 2.6.x unless I patch the source using http://zph.bratcheda.org/squid-2.6.STABLE2-ToS_Hit_ToS_Preserve.patch. Thanks

That part has been merged. The config option for it has changed to "zph_local 0x10".

Amos

Any link that can help me further before I upgrade to 2.7 from 2.6? Thanks

No virus found in this incoming message. Checked by AVG. Version: 8.0.101 / Virus Database: 270.4.3/1527 - Release Date: 6/30/2008 6:07 PM
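The merged 2.7 form of the setting can be sketched as (per the thread; zph_mode selects the marking scheme and zph_local the TOS value stamped on local cache hits, so a downstream shaper can treat hit traffic preferentially):

```
# squid.conf (2.7) sketch — mark local cache hits with TOS 0x10
zph_mode tos
zph_local 0x10
```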
Re: [squid-users] Squid ZPH patch
Amos Jeffries wrote:

pritam wrote:
>> Hello, I have squid 2.6 STABLE20 and I want to update it to squid 2.7.x. Do I need to patch the squid 2.7.x source for ZPH?
>
> Depends on the exact ZPH functionality you want. Most of it was included. Only the pass-thru capability was left out, for architectural reasons. You will most likely need squid.conf changes to match the accepted 2.7 config options.

I need to say "zph_tos_local 0x10" in my squid.conf, which I cannot do in squid 2.6.x unless I patch the source using http://zph.bratcheda.org/squid-2.6.STABLE2-ToS_Hit_ToS_Preserve.patch. Thanks

The merge for 2.7.s1 says:

  "This is a cleaned up version of the ZPH patch by Marin Stavrev, Evgeni Gechev and Venkatesh K. Reddy. It combines all three released versions of the patch, supporting
    - IP ToS / Diffserv
    - Socket priority
    - IP Option
  but excluding the ability to mirror incoming TOS on cache misses, as it's still a bit unclear how that should be done, and support is also missing in standard kernels (the way it's done in the original patch looks a little odd).

  Original patches used as input:
    http://zph.bratcheda.org/squid-2.6.STABLE2-ToS_Hit_ToS_Preserve.patch
    http://zph.bratcheda.org/squid-2.5.STABLE3-hit_prio.diff
    http://zph.bratcheda.org/squid-2.5.STABLE7-option-marking-zph.diff"

Amos
[squid-users] Squid ZPH patch
Hello, I have squid 2.6 STABLE20 and I want to update it to squid 2.7.x. Do I need to patch the squid 2.7.x source for ZPH ...? Thanks, Pritam
Re: [squid-users] Failed to select source for ...assert failure...
Roy M. wrote:

Hi, on 6/24/08, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Roy M. wrote:
>> ...
> If you are compiling, there is the ./configure --with-filedescriptors=NUMBER option in 3.0 for this. Try using --with-maxfd=N

I am using "--with-filedescriptors=6400", but it said "too large" during "make", so what value should I use? Thanks.
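A build sketch for the descriptor limit (assumptions: squid 3.0 on Linux with epoll available; "too large" errors typically come from select()'s FD_SETSIZE cap, which epoll avoids — verify the exact option names against your version's ./configure --help before relying on them):

```
# raise the shell's limit first, then configure (sketch)
ulimit -HSn 8192
./configure --with-filedescriptors=8192 --enable-epoll
./configure --help | grep -i filedesc    # confirm the option for your version
```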
Re: [squid-users] Squid with two networks ...
julian julian wrote:

You could probably use a set of static routes made with the "route" command, where you can specify a static gateway for each network, defining as "gateway" each of your public IPs.

--- On Mon, 6/23/08, Ramiro Sabastta <[EMAIL PROTECTED]> wrote:
From: Ramiro Sabastta <[EMAIL PROTECTED]>
Subject: [squid-users] Squid with two networks ...
To: squid-users@squid-cache.org
Date: Monday, June 23, 2008, 8:20 AM

> Hi!!! I've installed a Squid box in transparent mode (3STABLE7) with two network cards, and I must implement this scenario:
> - The network cards are connected to two different internal class-C networks with public IPs.

Could you illustrate your network a bit more? I mean, what is connected to the squid box?

> - If the HTTP request asks for an object that is in the cache, Squid serves the object through the same interface that the original request came in on (I think this is not a problem, because the origin IP is in the same network as squid).
> - If the HTTP request asks for an object that isn't in the cache, Squid should go direct to the public network through the same interface that the original request came in on (this is the problem).
>
> Are there some configurations in squid.conf that allow me to do that? I think the problem could be resolved externally to Squid (with iptables, for example). Thanks a lot!!! Regards!! Ramiro
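The static-route suggestion can be taken a step further with iproute2 source-policy routing, which is what actually makes traffic leave via the interface whose network it belongs to (a sketch with hypothetical addresses: eth0 = 198.51.100.10/24 with gateway .1, eth1 = 203.0.113.10/24 with gateway .1):

```
# sketch — one routing table per uplink, selected by source address
ip route add default via 198.51.100.1 dev eth0 table 100
ip route add default via 203.0.113.1  dev eth1 table 200
ip rule add from 198.51.100.10 table 100
ip rule add from 203.0.113.10  table 200
```

With these rules, replies (and direct fetches bound to a given local address) always exit through the matching interface, regardless of the main default route.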
Re: [squid-users] Failed to select source for ...assert failure...
Roy M. wrote:
> Hi, on 6/24/08, Pritam <[EMAIL PROTECTED]> wrote:
>> Try increasing the size of the file descriptor limit.
> Are there any reference links? Thanks.

If the squid binary is installed from source, then the following steps should work:

  ~ Stop the squid process.
  ~ Use ulimit to increase the file descriptor limit: # ulimit -HSn 8192 (where 8192 is the FD size)
  ~ Start the squid process. See cache.log to confirm the changed FD count.

Otherwise, this link should help you further:

  ~ http://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/

Regards
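The ulimit step can be sketched as a small shell session (the limit applies only to processes started from this shell, so squid must be launched from it; 8192 is just the example value from the post):

```shell
# show the current per-process file-descriptor limit
ulimit -n
# raise both the hard and soft limits for this shell and its children
ulimit -HSn 8192
# confirm the new limit before starting squid from this shell
ulimit -n
```

For a permanent change, the equivalent usually goes in /etc/security/limits.conf or the init script that starts squid.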
Re: [squid-users] Failed to select source for ...assert failure...
Try increasing the size of the file descriptor limit.

Roy M. wrote:
> I am using Squid as an HTTP accelerator. Occasionally my Squid3 (stable6) will log an error in cache.log, e.g.:
>
>   Failed to select source for http://www.example.com
>
> But I am sure that the URL is reachable on the backend web server, and sometimes it is immediately followed by an assert failure and a restart, e.g.:
>
>   assertion failed: comm.cc:1997: "!fd_table[fd].flags.closing"
>
> Also, sometimes I observe an error in cache.log, e.g.:
>
>   client_side.cc(2723) okToAccept: WARNING! Your cache is running out of filedescriptors
>
> Any suggestions? Thanks.