Re: [squid-users] Problem with negotiate_wrapper and ntlm authentication
On 29/10/2013 6:19 a.m., Matteo De Lazzari wrote:
> Dear all, I have a little problem trying to configure a fallback
> authentication via negotiate_wrapper.
> I'm using a precompiled 3.1.10 squid version on centos 6.4.

Please try a current Squid version (3.3 or later). There seems to be an issue in the older releases from before the wrapper was created, with the Negotiate auth module not quite supporting the helper requirements for NTLM auth.

Amos
[squid-users] Re: something not being understood in ,workers , squid proces , cores mapping
Hi Alex, I'm happy to hear that from you. Nice update on the wiki.

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/something-not-being-understood-in-workers-squid-proces-cores-mapping-tp4662942p4662990.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: Squid 3.1 with Tproxy and WCCP on Cisco 3550
Hi,

You are using TPROXY, so you can't use the well-known web-cache service in this case. You need to define two WCCP services, so your WCCP settings will be similar to:

wccp2_router x.x.x.x
wccp_version 2
wccp2_forwarding_method 2
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
wccp2_rebuild_wait off

After that, restart Squid.

Now remove the web-cache service, because we don't need it:

MLS#conf t
MLS#no ip wccp web-cache

Then, in the global config of your MLS, put:

MLS#conf t
MLS#ip wccp 80
MLS#ip wccp 90

interface FastEthernet0/15
 description PPTP-Server
 no switchport
 ip address X.X.X.X 255.255.255.252
 ip wccp 80 redirect in
 ip wccp 90 redirect out

interface GigabitEthernet0/2
 description ***Squid-Proxy***
 no switchport
 ip address X.X.X.X 255.255.255.248
 ip wccp redirect exclude in

Do as above, and give me the Squid logs at:
/var/log/squid/cache.log
/var/log/squid/access.log

Also give me the logs from your Cisco MLS:
#debug ip wccp packets
and
#sh ip wccp

Do the above and let me know. If your CPU gets high, tell me; I may find a better solution for you.

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987p4662989.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Squid 3.1 with Tproxy and WCCP on Cisco 3550
Hi,

I am working on setting up Squid 3.1 with Tproxy using WCCP on a Cisco 3550. The configs I am using are below. Router and proxy are both on public IPs, and traffic coming in from clients is also on public IPs. But for some reason the Router Identifier IP is showing as the local IP which is used to access the router from the local network.

[root@proxy squid]# cat squid.conf
##start of config
http_port 3127 tproxy
icp_port 3130
icp_query_timeout 5000
pid_filename /var/run/squid-3127.pid
cache_effective_user squid
cache_effective_group squid
visible_hostname proxy.local
unique_hostname proxy.local
cache_mgr noc@proxy.local
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 1
shutdown_lifetime 10 seconds
acl localnet src X.X.X.X/X # Public IP range for clients
acl squidlocal src 127.0.0.1
uri_whitespace strip
request_header_max_size 120 KB
dns_nameservers 127.0.0.1
cache_mem 8 GB
maximum_object_size_in_memory 1 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
max_filedesc 65500
cache_dir aufs /cache1 85 64 256 max-size=20971520
cache_dir aufs /cache2 85 64 256 max-size=20971520
cache_dir aufs /cache3 85 64 256 max-size=20971520
cache_dir aufs /cache4 85 64 256 max-size=20971520
minimum_object_size 512 bytes
maximum_object_size 100 MB
offline_mode off
cache_swap_low 98
cache_swap_high 99
# No redirector configured
wccp2_router 192.168.50.4
wccp2_rebuild_wait off
wccp2_forwarding_method 2
wccp2_return_method 1
wccp2_assignment_method 1
# Setup some default acls
acl all src all
acl localhost src 127.0.0.1/255.255.255.255
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 81 3128 3127 1025-65535
acl sslports port 443 563 81
acl manager proto cache_object
acl purge method PURGE
acl connect method CONNECT
acl dynamic urlpath_regex cgi-bin \?
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports
# Always allow localhost connections
http_access allow localhost
# Allow local network(s) on interface(s)
http_access allow localnet
http_access allow squidlocal
# Default block all to be sure
http_access deny all
qos_flows local-hit=0x30
qos_flows sibling-hit=0x31
qos_flows parent-hit=0x32
##end of config

Router config related to WCCP:

Switch-3550#sh ru
ip wccp web-cache

interface FastEthernet0/15
 description PPTP-Server
 no switchport
 ip address X.X.X.X 255.255.255.252
 ip wccp web-cache redirect in

interface GigabitEthernet0/2
 description ***Squid-Proxy***
 no switchport
 ip address X.X.X.X 255.255.255.248

Switch-3550#sh ip wccp
Global WCCP information:
    Router information:
        Router Identifier:               192.168.50.4
        Protocol Version:                2.0
    Service Identifier: web-cache
        Number of Service Group Clients: 0
        Number of Service Group Routers: 0
        Total Packets s/w Redirected:    0
          Process:                       0
          CEF:                           0
        Redirect access-list:            -none-
        Total Packets Denied Redirect:   0
        Total Packets Unassigned:        0
        Group access-list:               -none-
        Total Messages Denied to Group:  0
        Total Authentication failures:   0
        Total Bypassed Packets Received: 0
Switch-3550#

As I am new to WCCP with Squid, I do not know configuring WCCP with Squid in great detail. With the above config, I do not see any traffic being redirected to squid. Any help is greatly appreciated.

- Regards,
Mudasir Mirza

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987.html
Sent from the Squid - Users mailing list archive at Nabble.com.
RE: [squid-users] one dansguardian, multiple squids
Hello Fred,

Not sure about DansGuardian (possibly configure it as a parent proxy for all the Squids), but the ICAP server qlproxy, which does the same web filtering as DansGuardian but within Squid (coupled through the ICAP protocol), will do the job nicely. Just install it on a separate server and direct all Squids to it using the icap_* configuration directives. Having more CPUs on the server will greatly increase concurrency, as qlproxy is heavily threaded inside.

Best regards,
Rafael

-----Original Message-----
From: Fred Maranhão [mailto:fred.maran...@gmail.com]
Sent: Monday, October 28, 2013 7:46 PM
To: squid-users@squid-cache.org
Subject: [squid-users] one dansguardian, multiple squids

Is there a way to configure my dansguardian to use more than one squid as a backend proxy? I plan to have multiple dansguardians and multiple squids, but for now I want to test one dansguardian with multiple squids.
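For reference, pointing Squid at an ICAP server generally takes the following shape in squid.conf. This is a sketch only: the address, port, and the /reqmod and /respmod service paths are assumptions for illustration, not taken from qlproxy's documentation, so check its manual for the real service URIs.

```
icap_enable on
# One service for request adaptation, one for response adaptation.
# 192.0.2.10:1344 and the URI paths below are assumed example values.
icap_service filter_req reqmod_precache icap://192.0.2.10:1344/reqmod
icap_service filter_resp respmod_precache icap://192.0.2.10:1344/respmod
adaptation_access filter_req allow all
adaptation_access filter_resp allow all
```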
[squid-users] Re: why wccp config with smp must be put in backend.conf ???
Hi Amos, I note that it happens only on kid1, and I have no I/O on my rock cache dirs and no TCP_HIT!! Also I noted that it occurs on .js and .xml files.

- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/why-wccp-config-with-smp-must-be-put-in-backend-conf-tp4662961p4662984.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: why wccp config with smp must be put in backend.conf ???
Hi Amos, I've pumped about 200 users through squid, and there were a lot of frequent logs that say:

2013/10/28 21:00:10 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://www.gstatic.com/bg/cBsqIKQy8Z-2rJQ_TZ58t_YGnEGmYguE3xgHQUEDAYw.js' 'accept-encoding="gzip,%20deflate"'
2013/10/28 21:00:10 kid1| clientProcessHit: Vary object loop!
2013/10/28 21:00:10 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://www.gstatic.com/bg/cBsqIKQy8Z-2rJQ_TZ58t_YGnEGmYguE3xgHQUEDAYw.js' 'accept-encoding="gzip,%20deflate"'
2013/10/28 21:00:10 kid1| clientProcessHit: Vary object loop!

Do I need to change the config in the squid file? Or is this an SMP issue or a Squid 3.3.9 bug?

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/why-wccp-config-with-smp-must-be-put-in-backend-conf-tp4662961p4662982.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] one dansguardian, multiple squids
Is there a way to configure my dansguardian to use more than one squid as a backend proxy? I plan to have multiple dansguardians and multiple squids, but for now I want to test one dansguardian with multiple squids.
Re: [squid-users] squid 3.4.0.2 + smp + rock storage error
Amos, somehow I did something wrong with those permissions (I checked them before posting here, but I don't know why I didn't see that they were wrong). Anyway, 3.4.0.2 is now working on Slackware 14.1 (rc2) with 2 workers and rock storage. The next test will be CentOS 6.4 + NTLM authentication and the LDAP_group helper (we set permissions based on where the user is in AD groups, so support people can change users' access permissions without asking us).

--
Att...
Ricardo Felipe Klein
klein@gmail.com

On Sat, Oct 26, 2013 at 12:25 AM, Amos Jeffries wrote:
> On 26/10/2013 1:13 p.m., Ricardo Klein wrote:
>>
>> I am trying to run the latest squid (for test purposes) and even on 3.3.9
>> I always get:
>> Squid Cache (Version 3.4.0.2): Terminated abnormally.
>> CPU Usage: 0.015 seconds = 0.012 user + 0.003 sys
>> Maximum Resident Size: 24864 KB
>> Page faults with physical i/o: 0
>> FATAL: Ipc::Mem::Segment::open failed to
>> shm_open(/squid-squid-page-pool.shm): (2) No such file or directory
>>
>> Anyone know why?
>
> The SHM socket/pipe for SMP worker communications cannot be opened by Squid.
>
> Check the permissions of /var/sun/squid.
>
> NP: if you are using MacOS there is something strange about the OS not
> accepting the normal read/write flags needed to open it.
>
> Amos
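For the planned AD-group setup, the usual shape in squid.conf is an external ACL driven by the ext_ldap_group_acl helper. Everything below (DNs, helper path, hostnames, group name) is an assumed illustration, not taken from this thread; check the helper's man page for the exact options your build supports.

```
# Sketch only: all names and paths here are placeholders.
external_acl_type ad_group %LOGIN /usr/lib64/squid/ext_ldap_group_acl \
    -R -b "dc=example,dc=local" \
    -D "squidproxy@example.local" -W /etc/squid/ldappass \
    -f "(&(sAMAccountName=%v)(memberOf=CN=%g,OU=Groups,DC=example,DC=local))" \
    -h dc1.example.local
acl InternetAllowed external ad_group Internet-Allowed
http_access allow InternetAllowed
```

With this shape, support staff only move users in and out of the AD group; the squid.conf never changes.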
[squid-users] Re: transparent proxy on remote box issue
> That line above the headers is showing the problem:
>
>   HTTP Client local=:3130 remote=:65090 FD 10 flags=1
>
> local= contains the details of the www.nba.com server where the request is
> being fetched: the original dst IP:port from the TCP packets.
> remote= contains the client src IP:port from the TCP packets.
>
> Your NAT is still being done at the client end of the connection before
> it reaches the Squid box. This is THE problem. Move the NAT rules.
>
> 1) the client end of the VPN needs to contain the routing and MARK rules
> from section 6.2 of that page.
>
> 2) the VPN tunnel needs to deliver those packets directly onto the Squid
> box, avoiding any problems ECN may cause with routing the packets.
>
> 2a) at this point you should still be able to browse the web without
> problems. However your packets should be going over the VPN without any
> browser or test tool mention of the Squid box IP.
>
> 3) the Squid box needs to contain the REDIRECT rule from section 6.2 on
> that page, and probably the MASQUERADE rule from section 6.3. Squid
> needs the "intercept" http_port option.
>
> 3a) at this point you should still be able to browse the web without
> problems using *identical* tests to those made in (2a) when there was no
> proxy used. However the traffic should be logged in Squid access.log
> (logged in those lines above).
>
> Amos

Thanks for the detailed analysis. I did some tests and could not resolve the issue. First of all, I moved to an EC2 instance using VPC so all my servers are under the same subnet (10.0.1.0/24); that fixes the ip route command issue, but it didn't help. I went with this guide (http://lartc.org/howto/lartc.cookbook.squid.html) since it's close to what I want in terms of routing and it doesn't involve NAT (I don't have additional NAT in this subnet since it can access the internet directly). This guide is almost the same as the other one (same idea).

Result:
1) before anything else, I made sure my VPN client can access the internet normally: it works.
2) apply the policy-based changes and two things happen:
2a) no port 80 traffic is going to the SQUID server
2b) the client can't navigate the internet (I can ping the hostname but the browser can't load the page; tshark shows traffic going to the web site but there is nothing coming back; normally after DNS name resolution the web server talks back to the client).

Rules I used:

iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 80 -j MARK --set-mark 2
echo 202 http >> /etc/iproute2/rt_tables
ip rule add fwmark 2 table http
ip route add default via dev eth0 table http
ip route flush cache

All the apps I am using on my phone still seem to work (I assume non-port-80 traffic still works fine, great). Additionally, when I add the word "intercept", my curl to http://:3130 www.nba.com always returns access denied (deny all requests).

Thanks a lot.
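A consolidated sketch of the client-side policy routing described above. The Squid box address 10.0.1.100 is an assumption for illustration (the thread elides the gateway argument of the `ip route` command); substitute your own.

```shell
# Mark port-80 traffic arriving on eth0 (the client/VPN side).
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 -j MARK --set-mark 2

# Create a dedicated routing table for marked traffic (run once).
grep -q '^202 http' /etc/iproute2/rt_tables || echo '202 http' >> /etc/iproute2/rt_tables

# Send marked packets to the Squid box instead of the normal default route.
ip rule add fwmark 2 table http
ip route add default via 10.0.1.100 dev eth0 table http  # 10.0.1.100 = assumed Squid box IP
ip route flush cache
```

Note the Squid box itself must then carry the REDIRECT (or TPROXY) rule; if these rules run on the same host as the browser, the marked traffic will loop.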
[squid-users] Problem with negotiate_wrapper and ntlm authentication
Dear all,

I have a little problem trying to configure a fallback authentication via negotiate_wrapper. Here is the squid configuration line:

auth_param negotiate program /usr/local/bin/negotiate_wrapper -d --ntlm /usr/bin/ntlm_auth -d --helper-protocol=squid-2.5-ntlmssp --domain=PREVIDOM --kerberos /usr/lib64/squid/squid_kerb_auth -d -s HTTP/srvsquidproxy.previdom.previnet.it

The Kerberos auth runs very well, but when negotiate_wrapper identifies a type 1 NTLM token I get NT_STATUS_NO_SUCH_USER in the cache.log. The strange thing is that if I run ntlm_auth outside the squid context I get a successful auth:

/usr/bin/ntlm_auth --username=provaproxy --password=Pass1word --domain=PREVIDOM
NT_STATUS_OK: Success (0x0)

Is it possible that negotiate_wrapper doesn't "understand" the username & password from the browser correctly? What is the correct username syntax to use in the login request: user@fqdn, netbios domain\user, or the user with nothing else? In my case: provapr...@previdom.previnet.it, previdom\provaproxy, or provaproxy without the domain?

I'm using a precompiled 3.1.10 squid version on centos 6.4. Thanks to all, and sorry for my bad English.
Re: [squid-users] Re: 3x cpu usage after upgrade 3.1.20 to 3.3.8
On 10/27/2013 03:03 AM, Omid Kosari wrote:
> Everything is the same as before. I have 2 squid boxes. One of them has
> rock; the other doesn't even have rock. Just upgraded, and both boxes have
> increased cpu usage.

For each of those 2 squid boxes, can you post a "top" snapshot showing (a) all CPU cores and (b) all [non-idle] Squid processes? In most top implementations, "1" will toggle per-core statistics, "u" lets you limit the process display to the Squid user, and "c" lets you show Squid kid names.

If you can do the same for v3.1 as well, that would be awesome, but I suspect it may be too much of a hassle for you to go back to that version just to get a CPU usage snapshot.

There are many ways to interpret your "3x CPU usage" concern, and it may help a lot to see the actual CPU usage on your systems...

Thank you,
Alex.
Re: [squid-users] Re: something not being understood in ,workers , squid proces , cores mapping
On 10/28/2013 03:18 AM, Ahmad wrote:
> hi alex & amos, thanks very much for the clarification,
> but I'm wondering why the info you posted here is not found on the wiki

Primarily because nobody took the time to write it up and post it there. It is there now:

http://wiki.squid-cache.org/Features/SmpScale#Terminology
http://wiki.squid-cache.org/Features/SmpScale#How_many_processes_does_a_single_Squid_instance_have.3F

Both new entries need polishing but, hopefully, they are better than nothing.

Alex.
Re: AW: [squid-users] Vary object loop
On 29/10/2013 12:23 a.m., Ahmad wrote:
> Oh, it seems bad news for me!

Not too bad. Squid is self-correcting, since we (devs) know exactly what state is happening and can detect it. But how Squid got into that state in the first place is still a mystery. What it means for production traffic is a MISS and some lag on the transaction.

Amos
Re: AW: [squid-users] Vary object loop
Oh, it seems bad news for me!

- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Vary-object-loop-tp4662627p4662973.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: AW: [squid-users] Vary object loop
On 28/10/2013 11:37 p.m., Ahmad wrote:
> Hi, could this be a harmful problem?? I mean, can I ignore it and treat my
> squid as working normally? regards

If it is happening often, it probably should be looked into. It does happen from time to time anyway.

Amos
Re: [squid-users] Re: something not being understood in ,workers , squid proces , cores mapping
On 28/10/2013 10:18 p.m., Ahmad wrote:
> Hi Alex & Amos, thanks very much for the clarification, but I'm wondering
> why the info you posted here is not found on the wiki!!!

The features are new and volatile; not everything is documented yet. Most of these details are in the wiki, just hidden inside long descriptions of the architecture and plans for future extension etc.

> Again, I have 3 workers and 3 rock disks, so: 3 disker processes and 3
> worker processes. I have 1 coordinator process and 1 master squid process.
> Here is from my server:
>
> #ps aux | grep squid
> root   7148 0.0 0.0 1329360  1568 ? Ss 10:42 0:00 squid            <== master process
> squid  7150 0.0 0.7 1305644 30944 ? S  10:42 0:00 (squid-coord-7)  <== coordinator
> squid  7151 0.0 0.7 1339780 28036 ? S  10:42 0:01 (squid-disk-6)   <== disker
> squid  7152 0.0 0.7 1339780 30068 ? S  10:42 0:01 (squid-disk-5)   <== disker
> squid  7153 0.0 0.7 1339780 28048 ? S  10:42 0:01 (squid-disk-4)   <== disker
> squid  7154 0.0 0.7 1406764 27844 ? S  10:42 0:00 (squid-3)        <== worker
> squid  7155 0.0 0.7 1406764 27856 ? S  10:42 0:00 (squid-2)        <== worker
> squid  7156 0.0 0.8 1311244 34256 ? S  10:42 0:01 (squid-1)        <== worker
>
> I've put comments on them to make sure about what I understood!!
> If I was correct, the master process is run by the root user???

Yes, that is correct.

Amos
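The process arithmetic above can be sketched as a quick shell check. The numbers are taken from this particular setup (3 workers, 3 rock cache_dirs), not a general rule:

```shell
# SMP Squid process count for the setup described above:
# 3 workers + 3 rock cache_dirs (one disker each) + 1 coordinator kid,
# all supervised by 1 master process (the one ps shows running as root).
workers=3
diskers=3                         # one per rock cache_dir
kids=$((workers + diskers + 1))   # +1 for the coordinator
total=$((kids + 1))               # +1 for the master process
echo "$total processes"           # prints: 8 processes
```

That matches the eight PIDs (7148 through 7156) in the ps listing.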
Re: [squid-users] Re: SQUID in TPROXY - do not resolve
On 25/10/2013 2:44 a.m., Plamen wrote:
> Amos Jeffries-2 wrote:
>> On 24/10/2013 6:44 a.m., Plamen wrote:
>>> Yes, this is one of the problems I'm also experiencing. The customer is
>>> using a different DNS than the Squid, and he complains because he says:
>>> without your SQUID I can open the web page, but with your SQUID it's not
>>> opening.
>>
>> Ah. So the real problem is "Why is it not opening for Squid?"
>>
>> The current releases of Squid *do* use the client-provided destination IP.
>> The DNS resolution is only to determine whether the response is cacheable,
>> and whether alternative IPs may be tried as backup _if_ Squid cannot
>> connect to the one the client gave.
>
> Hi Amos, thanks for the valuable feedback. Do I need to do something
> specific to get this behaviour of Squid where it uses the provided dst IP,
> like some directive that has to be enabled in the config, or is it the
> default behaviour?

This is default behaviour for squid-3.2 and later.

> In this scenario, what happens if the DNS servers configured in SQUID stop
> responding for some period of time (or become unreachable)? Will the
> traffic continue to pass normally, or will the users start getting errors
> and not be able to browse anymore?

Traffic will pass to the client dst IP. There may be some small lag on the first request after DNS goes out, while Squid waits for the DNS response. But some delays are only to be expected when things on the network break.

> I will give you a real-life example that I'm trying to resolve. The ISP has
> 2 upstream providers. The SQUID is running in TPROXY mode, and the squid box
> has an IP address from Upstream 1 and accordingly uses this IP to contact
> the DNS servers. When both upstream providers are working, everything is
> smooth in terms of HTTP traffic. When Upstream 1 goes down for some period
> of time, the customers which are provisioned with IPs belonging to
> Upstream 2 also get affected, because the SQUID cannot do DNS lookups
> anymore. I'm trying to resolve this kind of issue.

This kind of issue is best fixed via other means. For example; I use IPv6 private allocation fc00::/16 IP addressing for all my network-internal traffic, including the links between Squid and its DNS server. No matter which upstream is active (even with none), these connections and lookups will continue working so long as my own network remains stable.

Another way is to configure an explicit address in udp_outgoing_address for Squid to use as its src IP on UDP packets (and thus DNS packets). This does the same thing for Squid->DNS traffic, but does not protect other internal-only traffic, so I don't favour it as much as the IPv6 method.

Amos
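The second workaround Amos describes looks roughly like this in squid.conf; both addresses are made-up examples for illustration:

```
# Use an internal DNS server that stays reachable regardless of which
# upstream is active, and pin the source IP Squid uses for its UDP
# (and therefore DNS) packets. The addresses below are assumptions.
dns_nameservers 192.168.10.53
udp_outgoing_address 192.168.10.2
```

With the outgoing address pinned to an internal interface, DNS traffic no longer depends on which upstream currently owns the box's default route.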
Re: AW: [squid-users] Vary object loop
Hi, could this be a harmful problem?? I mean, can I ignore it and treat my squid as working normally?

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Vary-object-loop-tp4662627p4662969.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: SQUID in TPROXY - do not resolve
On 28/10/2013 9:46 p.m., Ahmad wrote:
> @plamen, regarding the 1st discussion: now I tried tproxy squid. I pointed
> squid to dns1 and put dns2 on my PC. My PC resolved the site aaa.com to
> 1.1.1.1, and squid resolved aaa.com to 2.2.2.2, but my PC sees the site at
> 1.1.1.1, not 2.2.2.2??? Why? Shouldn't my PC see the site at 2.2.2.2
> because of tproxy???

No. Squid fetches from 1.1.1.1, the same place the client wanted to get it from (transparent means same in and out). All that happens in this case is that Squid blocks caching, because it cannot trust that 1.1.1.1 is authorized to be presenting content from 2.2.2.2.

Amos
[squid-users] Re: something not being understood in ,workers , squid proces , cores mapping
Hi Alex & Amos, thanks very much for the clarification, but I'm wondering why the info you posted here is not found on the wiki!!!

Again, I have 3 workers and 3 rock disks, so: 3 disker processes and 3 worker processes. I have 1 coordinator process and 1 master squid process. Here is from my server:

#ps aux | grep squid
root   7148 0.0 0.0 1329360  1568 ? Ss 10:42 0:00 squid            <== master process
squid  7150 0.0 0.7 1305644 30944 ? S  10:42 0:00 (squid-coord-7)  <== coordinator
squid  7151 0.0 0.7 1339780 28036 ? S  10:42 0:01 (squid-disk-6)   <== disker
squid  7152 0.0 0.7 1339780 30068 ? S  10:42 0:01 (squid-disk-5)   <== disker
squid  7153 0.0 0.7 1339780 28048 ? S  10:42 0:01 (squid-disk-4)   <== disker
squid  7154 0.0 0.7 1406764 27844 ? S  10:42 0:00 (squid-3)        <== worker
squid  7155 0.0 0.7 1406764 27856 ? S  10:42 0:00 (squid-2)        <== worker
squid  7156 0.0 0.8 1311244 34256 ? S  10:42 0:01 (squid-1)        <== worker

I've put comments on them to make sure about what I understood!! If I was correct, the master process is run by the root user??? Please correct me if I was wrong!

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/something-not-being-understood-in-workers-squid-proces-cores-mapping-tp4662942p4662967.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: why wccp config with smp must be put in backend.conf ???
Hi Amos,

Again, the cache.log says, with my config above and when WCCP worked with the rock type, I have logs as below!!!

clientProcessHit: Vary object loop! Oops. Not a Vary object on second attempt:

2013/10/28 10:21:31 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://maps.google.com/maps/api/js?sensor=true&libraries=places&v=3.exp' 'accept-language="en-US,en%3Bq%3D0.5"'
2013/10/28 10:21:31 kid1| clientProcessHit: Vary object loop!
2013/10/28 10:21:32 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://partner.googleadservices.com/gpt/pubads_impl_28.js' 'accept-encoding="gzip,%20deflate"'
2013/10/28 10:21:32 kid1| clientProcessHit: Vary object loop!
2013/10/28 10:21:37 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://platform.twitter.com/widgets.js' 'accept-encoding="gzip,%20deflate"'
2013/10/28 10:21:37 kid1| clientProcessHit: Vary object loop!
2013/10/28 10:27:52 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://r1---sn-25auxa-b15e.c.youtube.com/crossdomain.xml' 'accept-encoding="gzip,%20deflate"'
2013/10/28 10:27:52 kid1| clientProcessHit: Vary object loop!
2013/10/28 10:28:43 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://r2---sn-25auxa-b15e.c.youtube.com/crossdomain.xml' 'accept-encoding="gzip,%20deflate"'
2013/10/28 10:28:43 kid1| clientProcessHit: Vary object loop!
2013/10/28 10:36:23 kid1| varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://forum.tp-link.com/clientscript/vbulletin_md5.js?v=414' 'accept-encoding="gzip,%20deflate"'
2013/10/28 10:36:23 kid1| clientProcessHit: Vary object loop!

Why is there a Vary object loop? How do I solve it?!

Regards,
- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/why-wccp-config-with-smp-must-be-put-in-backend-conf-tp4662961p4662966.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: SQUID in TPROXY - do not resolve
@plamen, regarding the 1st discussion: now I tried tproxy squid. I pointed squid to dns1 and put dns2 on my PC. My PC resolved the site aaa.com to 1.1.1.1, and squid resolved aaa.com to 2.2.2.2, but my PC sees the site at 1.1.1.1, not 2.2.2.2??? Why? Shouldn't my PC see the site at 2.2.2.2 because of tproxy??? I wish to make sure.

- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SQUID-in-TPROXY-do-not-resolve-tp4662819p4662965.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: why wccp config with smp must be put in backend.conf ???
Hi Amos, with wccp2_rebuild_wait off (not wccp_rebuild_wait off), squid with WCCP works now. But I must put these configs in backend.conf!! They don't work if I put them in frontend.conf!!!?

- Dr.x

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/why-wccp-config-with-smp-must-be-put-in-backend-conf-tp4662961p4662964.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: squid with muliwan
On 26/10/2013 16:43, adamso wrote:
> Thanks for the replies, Marcello Romani. I tried it. But the problem:
> when I broke eth0:1 on the pfsense gateway, I still have a connection.
> E.g.: on my squid, yahoo mail goes to eth0:1 via tcp_outgoing_address.
> But when I break eth0:1, I can still get to yahoo mail.

What do you mean when you write "I broke eth0:1"?

--
Marcello Romani
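For context, per-destination egress selection with tcp_outgoing_address generally takes this shape in squid.conf; the ACL and the address are illustrative assumptions, not the poster's actual values:

```
# Send Yahoo Mail traffic out via the address bound to eth0:1.
# 192.0.2.2 is a made-up example address.
acl yahoo dstdomain .mail.yahoo.com
tcp_outgoing_address 192.0.2.2 yahoo
```

Note that tcp_outgoing_address only picks the source address; if the interface carrying that address goes down, the OS routing table decides what happens next, which may explain traffic still flowing after "breaking" eth0:1.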
Re: [squid-users] why wccp config with smp must be put in backend.conf ???
On 28/10/2013 8:10 p.m., Ahmad wrote:
> Hi all, I'm wondering: I have followed the example at
> http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> My question is, why doesn't the wccp config work if it is put in
> frontend.conf? Why does it only work if I put it in backend.conf???

The same reason applies to this and the other thread you started about WCCP. A small bug in rock storage system initialization with SMP workers prevents the WCCP from being activated: Squid waits for all stores to be ready for service before announcing, and the workers may not be aware of when the caches are ready.

You can work around it by configuring:

  wccp2_rebuild_wait off

Amos
[squid-users] why wccp config with smp must be put in backend.conf ???
Hi all, I'm wondering: I have followed the example at http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster. My question is, why doesn't the wccp config work if it is put in frontend.conf? Why does it only work if I put it in backend.conf??? I read about backend.conf; we put the heavy traffic and caching disks in it!!! Here is the config I did:

[root@drx ~]# cat /etc/squid/squid.conf
#
dns_nameservers x.x.x.x 8.8.8.8
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl localnet src x.x.x.x.x.x./xx
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
###
dns_v4_first on
# 3 workers, using worker #1 as the frontend is important
workers 3
if ${process_number} = 1
include /etc/squid/frontend.conf
else
include /etc/squid/backend.conf
endif
#
http_access deny all
##
refresh_pattern ^ftp:             1440   20%   10080
refresh_pattern ^gopher:          1440   0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0      0%    0
refresh_pattern .                 0      20%   4320

[root@drx ~]# cat /etc/squid/backend.conf
# each backend must listen on a unique port
# without this the CARP algorithm would be useless
http_port 127.0.0.1:400${process_number}
###
cache_dir rock /rock1 1 max-size=32768 swap-timeout=350
cache_dir rock /rock2 1 max-size=32768 swap-timeout=350
cache_dir rock /rock3 1 max-size=32768 swap-timeout=350
###
visible_hostname backend.example.com
###
cache_log /var/log/squid/backend.cache.log
access_log stdio:/var/log/squid/backend.access.log
#
cache_mem 512 MB
maximum_object_size_in_memory 10 MB
minimum_object_size 0 KB
maximum_object_size 100 MB
##
http_access allow localhost
###
wccp2_router x.x.x.x
wccp_version 2
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
wccp2_service dynamic 92
wccp2_service_info 92 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 93
wccp2_service_info 93 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80

[root@drx ~]# cat /etc/squid/frontend.conf
http_port x.x.x.x:xxx
http_port x.x.x.x:xxx tproxy
acl mysubnet src .xxx.,xxx
http_access allow mysubnet
cache_mgr x
cachemgr_passwd xx all
# add user authentication and similar options here
http_access allow manager localhost
http_access allow manager mysubnet
http_access allow mysubnet manager
http_access deny manager
###
cache_mem 2500 MB
#
maximum_object_size_in_memory 200 MB
minimum_object_size 0 KB
maximum_object_size 2 MB
##
cache_log /var/log/squid/frontend.cache.log
access_log stdio:/var/log/squid/frontend.access.log
##
visible_hostname frontend.example.com
###
cache_swap_low 95
cache_swap_high 99
###
server_persistent_connections off
client_persistent_connections off
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 95
fqdncach
[squid-users] wccp dont work with smp i i used cache dir rock !!! ?????
Hi, I'm using the latest 3.3.9 stable version, compiled! Every time I finish with one error in squid, I instantly see another; it seems squid hates me!!! The problem is: when I use a rock cache dir, WCCP doesn't work with squid!!! If I remove the rock cache dir from backend.conf, WCCP works!!! The question is why, and how to solve the problem??!! Here is my config below:

[root@drx ~]# cat /etc/squid/squid.conf
#
dns_nameservers x.x.x.x 8.8.8.8
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl localnet src x.x.x.x.x.x./xx
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
###
dns_v4_first on
# 3 workers, using worker #1 as the frontend is important
workers 3
if ${process_number} = 1
include /etc/squid/frontend.conf
else
include /etc/squid/backend.conf
endif
#
http_access deny all
##
refresh_pattern ^ftp:             1440   20%   10080
refresh_pattern ^gopher:          1440   0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0      0%    0
refresh_pattern .                 0      20%   4320

[root@drx ~]# cat /etc/squid/backend.conf
# each backend must listen on a unique port
# without this the CARP algorithm would be useless
http_port 127.0.0.1:400${process_number}
###
cache_dir rock /rock1 1 max-size=32768 swap-timeout=350
cache_dir rock /rock2 1 max-size=32768 swap-timeout=350
cache_dir rock /rock3 1 max-size=32768 swap-timeout=350
###
visible_hostname backend.example.com
###
cache_log /var/log/squid/backend.cache.log
access_log stdio:/var/log/squid/backend.access.log
#
cache_mem 512 MB
maximum_object_size_in_memory 10 MB
minimum_object_size 0 KB
maximum_object_size 100 MB
##
http_access allow localhost
###
wccp2_router x.x.x.x
wccp_version 2
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
wccp2_service dynamic 92
wccp2_service_info 92 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 93
wccp2_service_info 93 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80

[root@drx ~]# cat /etc/squid/frontend.conf
http_port x.x.x.x:xxx
http_port x.x.x.x:xxx tproxy
acl mysubnet src .xxx.,xxx
http_access allow mysubnet
cache_mgr x
cachemgr_passwd xx all
# add user authentication and similar options here
http_access allow manager localhost
http_access allow manager mysubnet
http_access allow mysubnet manager
http_access deny manager
###
cache_mem 2500 MB
#
maximum_object_size_in_memory 200 MB
minimum_object_size 0 KB
maximum_object_size 2 MB
##
cache_log /var/log/squid/frontend.cache.log
access_log stdio:/var/log/squid/frontend.access.log
##
visible_hostname frontend.example.com
###
cache_swap_low 95
cache_swap_high 99
###
server_persistent_connections off
client_