Re: [squid-users] lots of UDP connections
Bal Krishna Adhikari wrote on 6/3/2011 6:13 AM:
> Hello,
>
> I found a lot of UDP connections coming to my proxy servers. I cannot
> find the cause of this one-way traffic. A sample of the UDP traffic:
>
> 14:00:07.506612 IP 41.209.69.146.10027 > x.x.x.x.65453: UDP, length 30
> 14:00:07.518118 IP 121.218.37.254.41597 > x.x.x.x.64338: UDP, length 30
> 14:00:07.572559 IP 85.224.143.193.29978 > x.x.x.x.62782: UDP, length 30
> 14:00:07.596554 IP 183.87.200.42.36895 > x.x.x.x.15786: UDP, length 30
> 14:00:07.642820 IP 180.215.37.96.49977 > x.x.x.x.49458: UDP, length 30
> 14:00:07.653055 IP 117.195.138.64.24314 > x.x.x.x.44985: UDP, length 33
> 14:00:07.739963 IP 82.31.238.101.50534 > x.x.x.x.52750: UDP, length 30
> 14:00:07.783452 IP 86.83.107.196.41870 > x.x.x.x.62782: UDP, length 30
> 14:00:07.809677 IP 94.246.23.15.59003 > x.x.x.x.27462: UDP, length 30
> 14:00:07.837415 IP 75.156.164.147.49398 > x.x.x.x.34847: UDP, length 30
> 14:00:07.841668 IP 82.8.212.242.25931 > x.x.x.x.24869: UDP, length 30
> 14:00:07.841697 IP 89.136.112.99.42182 > x.x.x.x.52750: UDP, length 30
> 14:00:07.854215 IP 99.191.156.208.18162 > x.x.x.x.64338: UDP, length 30
> 14:00:07.885386 IP 88.147.72.252.60224 > x.x.x.x.19151: UDP, length 30
> 14:00:07.960841 IP 68.169.185.192.63480 > x.x.x.x.58638: UDP, length 30
> 14:00:08.071763 IP 79.113.242.42.31998 > x.x.x.x.33995: UDP, length 30
> 14:00:08.078260 IP 94.202.49.109.61957 > x.x.x.x.26071: UDP, length 67
> 14:00:08.101495 IP 82.169.68.179.19605 > x.x.x.x.45682: UDP, length 30
> 14:00:08.113238 IP 86.99.42.7.15086 > x.x.x.x.11706: UDP, length 67
> 14:00:08.127979 IP 62.195.70.253.45266 > x.x.x.x.37050: UDP, length 30
> 14:00:08.163992 IP 2.82.207.195.38343 > x.x.x.x.26680: UDP, length 30
> 14:00:08.183453 IP 68.81.206.57.25923 > x.x.x.x.18378: UDP, length 30
> 14:00:08.237689 IP 108.120.241.254.47249 > x.x.x.x.39433: UDP, length 30
> 14:00:08.256906 IP 99.161.157.254.41719 > x.x.x.x.26680: UDP, length 30
> 14:00:08.291885 IP 121.136.175.247.12577 > x.x.x.x.16485: UDP, length 67
> 14:00:08.315427 IP 121.144.158.120.30845 > x.x.x.x.61415: UDP, length 30
> 14:00:08.317404 IP 115.117.219.18.25817 > x.x.x.x.59936: UDP, length 30
>
> Does anyone have any idea whether the traffic is genuine or some kind
> of attack? x.x.x.x is my proxy server.
>
> --- Bal Krishna

On 04/06/11 01:16, Chad Naugle wrote:
> Check the hostnames of these IP addresses. They could be DNS replies,
> using random ports for source/destination. Squid can generate tons of
> DNS traffic.

I don't think it's genuine Squid traffic. DNS, ICP and HTCP all use a fixed well-known port at one end and a rarely changing port at the other. It could be anything else on the box, though.

There are a few CVE attacks this could be: two using DNS and one using HTCP. If you have Squid 2.7.STABLE8+, 3.0.STABLE23+ or 3.1.1+ you are safe from those; they are just annoying.

If you have Squid-3.1+ with a publicly advertised IPv6 address, this could be a sign of v6 connection attempts. Several IP tunnel protocols involve UDP handshakes.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
Re: [squid-users] ip aliased multi-instanced squid
On 04/06/11 06:25, errno wrote:
> On Thursday, June 02, 2011 01:03:06 AM Amos Jeffries wrote:
>> On 02/06/11 19:41, errno wrote:
>>> Just to confirm: if I have multiple IP aliases assigned to the same
>>> physical NIC, will there still be port conflicts on an
>>> IP-alias-based multi-instance squid server?
>>
>> There is rarely a need for the combo of IP aliasing + Squid.
>
> You know, maybe this just now actually clicked in my brain...
>
> So, let's say that we did have a few different aliased IPs (on
> different subnets), for example:
>
>   eth0   - 192.168.0.2
>   eth0:1 - 192.168.1.2
>   eth0:2 - 192.168.2.2
>   eth0:3 - 192.168.3.2
>
> Rather than setting up, say, 4 separate instances of squid - one per
> subnet - I'm thinking: why not just set up 1 single instance (say, on
> 192.168.0.2), then just use iptables to redirect any traffic hitting
> the other IPs (192.168.1.2 through 192.168.3.2) to 192.168.0.2? The
> single squid.conf would then be configured (somehow) to use the
> appropriate tcp_outgoing_address, or something. Something like:
>
>   incoming request to 192.168.2.2:80
>     -> iptables passes it to 192.168.0.2:80
>     -> squid receives the request on 192.168.0.2, but dispatches it
>        back out via 192.168.2.2
>
> Something along those lines?

Yes. Based on the myip ACL for the "incoming request to $myip" bit.

Note that myip fails if NAT is happening on the packet's arrival: Squid will get mangled IPs to test against $myip and will usually fail to make a reliable match. In that case you do need multiple http_port lines in squid.conf for the one Squid instance, and the myportname ACL for the manipulations.

> Or can I achieve the same effect without iptables - by just supplying
> multiple ip:ports to http_port?
>
> The primary concern is that if a request comes in to squid on one
> particular address, squid ensures the request leaves with the same
> tcp_outgoing_address - which is why we were (naively?) using multiple
> separate instances. Each instance had:
>
>   include /etc/squid/squid_common.conf
>   access_log /var/log/squid/access_192.168.0.2.log squid
>   auth_param basic program /usr/libexec/squid/ncsa_auth /etc/squid/passwd
>   http_port 192.168.0.2:8002
>   tcp_outgoing_address 192.168.0.2
>   pid_filename /var/run/squid_192.168.0.2.pid
>   visible_hostname 192.168.0.2

*IF* (and that is a big IF) you really need the outgoing IP to be fixed, you can run one instance with multiple copies of the above snippet. Note that visible_hostname, pid_filename and the auth_param directives are unique: only one copy of each is used per instance of Squid.

I set up this kind of thing with Squid-3.1 like so. In squid.conf:

  include /etc/squid/IPA/*
  ... blah ...

/etc/squid/IPA contains a number of files, each with the handling for one specific listening IP. e.g. /etc/squid/IPA/192.168.0.2_8002:

  http_port 192.168.0.2:8002 name=ip-2-8002
  acl ip-2-8002 myportname ip-2-8002
  tcp_outgoing_address 192.168.0.2 ip-2-8002
  access_log /var/log/squid/access_192.168.0.2_8002.log squid ip-2-8002

> Thanks for helping to clear up my confusion and possibly derive a
> much simpler and easier-to-maintain squid service; and huge thanks to
> Amos for the incredible amount of time and assistance he offers on
> this list!

Thank you :)

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
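To illustrate "multiple copies of the above snippet" in one instance, here is a hedged sketch with two listening IPs, following the same per-port naming scheme; the addresses and ACL names are examples only, not from the thread:

```
# Sketch: one Squid instance serving two aliased IPs.
# Each stanza binds a port, tags it with a name, and pins the
# outgoing address for requests that arrived on that port.

http_port 192.168.0.2:8002 name=ip-2-8002
acl ip-2-8002 myportname ip-2-8002
tcp_outgoing_address 192.168.0.2 ip-2-8002
access_log /var/log/squid/access_192.168.0.2_8002.log squid ip-2-8002

http_port 192.168.1.2:8002 name=ip-3-8002
acl ip-3-8002 myportname ip-3-8002
tcp_outgoing_address 192.168.1.2 ip-3-8002
access_log /var/log/squid/access_192.168.1.2_8002.log squid ip-3-8002
```

Since tcp_outgoing_address takes an ACL list, the myportname tag is what ties each outgoing address back to the port the request arrived on.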
Re: [squid-users] Re: Should I see a massive slowdown when chaining squid => privoxy
On 04/06/11 16:23, Eliezer Croitoru wrote:
> On 04/06/2011 02:05, sichent wrote:
>>> Setup: Gentoo linux OS on squid and privoxy home lan server
>>>   Squid-3.1.12
>>>   privoxy-3.0.17
>>>
>>> I'm not running an html server, just trying to use squid and
>>> privoxy for my own browsing.
>>
>> Why not use the ICAP or URL-rewriter functionality built into Squid
>> to achieve the same results as privoxy, instead of having this
>> chaining setup? Sorry for the possible offtopic :)
>
> What offtopic? The only problem I can think of is CPU and process
> management on this server.
>
> ICAP is a nice idea, and I can recommend one:
> http://greasyspoon.sourceforge.net/
> You must know basic programming to make it work, and they have sample
> rules. If your hardware capabilities are low, you will most likely
> get poor results from both privoxy and ICAP. I will also send later
> the settings for working with greasyspoon on squid 3.2.
>
> Another question: have you by any chance run a speed test on the
> current setup? If you get the right raw speed but the wrong speed
> getting the page processed, that is a known side effect of privoxy.
>
> Regards,
> Eliezer

Speed gain/loss/other depends on what you are moving from. MORE IMPORTANTLY: on how you define "slow"!

Keep in mind that you also now have around 2x the processing going on, with 2 proxies. The difference added by Squid can be at least 10ms. Some people call that a noticeable slowdown; some don't care about anything less than a second.

* 3.1 is about 10-20% slower than the latest 2.7 on the same config, with the older versions of 3.1 being on the slower end of that scale as we work to optimize and fix things throughout the series.

* Moving to Squid from a non-proxy setup can be a major drop, depending on the browser age. The browsers themselves drop the parallel fetch rate from hundreds down to under 10. Browser tweaking is the only way to avoid this.

* Moving from a browser-privoxy to a browser-squid-privoxy setup, you should have seen only a small drop.

Some other possibilities are Squid using slow disks (maybe RAID), the Squid box swapping, or the bandwidth being routed down the same physical links to/from Squid.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
Re: [squid-users] Should I see a massive slowdown when chaining squid => privoxy
On 04/06/11 08:16, Harry Putnam wrote:
> Setup: Gentoo linux OS on squid and privoxy home lan server
>   Squid-3.1.12
>   privoxy-3.0.17
>
> I'm not running an html server, just trying to use squid and privoxy
> for my own browsing.
>
> I'm attempting to get started with squid and privoxy. So far I'm
> using nearly original config files in both cases.
>
> First I tried just using privoxy without squid, and that seemed to
> work OK. I set the privoxy listen-address in /etc/privoxy/config
> like:
>
>   listen-address 192.168.0.2:8118
>
> (the address of a gmane box on the lan)
>
> I was not able to access gmail through a firefox gadget, but could
> get gmail directly alright. Other than that, no real noticeable
> change in browsing speed as against NO proxy at all.
>
> I then used this website to help set up squid:
>   https://www.antagonism.org/web/squid-proxy.shtml
> and did the things suggested there. Chaining in this direction:
>
>   Browser -> Squid -> Privoxy -> Privoxy's IP -> Public Internet
>
> Adding squid into the mix, I added these two lines to the end of
> squid.conf:
>
>   cache_peer 192.168.0.2 parent 8118 0 default no-query no-digest no-netdb-exchange
>   never_direct allow all
>
> And uncommented these lines in squid.conf:
>
>   acl localnet src 192.168.0.0/24 # RFC1918 possible internal network
>   header_access From deny all
>   header_access Referer deny all
>   header_access Server deny all
>   header_access User-Agent deny all
>   header_access WWW-Authenticate deny all

I kind of doubt you actually want those last two. The first (User-Agent) hides your browser from websites (okay, so you *might* want that one), which will make many of the modern browser-sniffing websites simply not work. The second (WWW-Authenticate) blocks website authentication from working. Whether you let the browser log in in the first place is a manual decision by you; preventing yourself from ever choosing Yes to that seems a bit extreme.

NOTE: with Squid-3.1 these are also split into request_header_access and reply_header_access directives.
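For reference, a sketch of what the Squid-3.1 split form of those lines could look like; which headers you actually deny is of course your choice:

```
# Squid-3.1: request and reply headers are controlled separately.
# From, Referer and User-Agent travel in requests; Server and
# WWW-Authenticate travel in replies.
request_header_access From deny all
request_header_access Referer deny all
request_header_access User-Agent deny all
reply_header_access Server deny all
reply_header_access WWW-Authenticate deny all
```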
Run "squid -k parse" to see if there are any other easily detectable problems in the config.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
Re: [squid-users] trouble with www address not resolving
On 03/06/2011 19:24, William Bakken wrote:
> Amos, is there a way to tell Squid to stop asking for records/IPv6?
> We are having problems with other sites not working in the same way.

On 04/06/11 16:37, Eliezer Croitoru wrote:
> The answer is to disable IPv6 on squid, on the linux machine, and in
> the software. But we do not know that this is the case...

Correct. If you have IPv6 _properly_ disabled in the OS - such that applications attempting to open IPv6 sockets for use get denied - Squid 3.1.10+ will pick that up at startup and not try to use IPv6 again until the next restart.

There is the small matter of a lot of garbage tutorials about how to disable IPv6 in the OS, though. It has to be done in a way where a program opening an IPv6 socket gets an error back. Not letting the app open and use the socket (spewing errors out all over the place), and not hanging (frowning mostly at some RHEL user blogs there).

Re-building Squid with --disable-ipv6 is another, more extreme, option.

> Do you have a local DNS server on the machine for caching and
> forwarding? You can set up squid to use the local dns server, and on
> the dns server set up a specific forwarding zone for this domain's
> NS. This will be a much more efficient way to get it done, and will
> make your system more reliable in any case. I recommend this way.

* Site-specific, so it allows access to all the working v6-only sites (1% of the Internet now).
* Easily reversible once the site starts working.

But the best way is to get the site fixed ASAP. From the other post by Rick from Carfax it looks like they are on the issue now.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
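For reference, one commonly used way to switch IPv6 off system-wide on Linux is the sysctl below. This is an assumption about the reader's setup, and note the caveat above: whether a given kernel then fails IPv6 socket() calls outright, rather than merely leaving the sockets address-less, varies, so test it before relying on it. Rebuilding with --disable-ipv6 remains the surest option.

```
# /etc/sysctl.conf -- disable IPv6 system-wide
# (apply with "sysctl -p" or a reboot)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```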
Re: [squid-users] Did logrotation code change between 3.1.9 and 3.1.12?
On 04/06/11 07:17, Rich Rauenzahn wrote:
> My log rotation, done by logrotated, doesn't work anymore...
>
>   /var/log/squid/*.log {
>       weekly
>       rotate 52
>       size 100M
>       compress
>       notifempty
>       missingok
>       sharedscripts
>       postrotate
>           # Ask squid to reopen its logs. (logfile_rotate 0 is set
>           # in squid.conf.) Errors redirected to make it silent if
>           # squid is not running.
>           /usr/sbin/squid -k rotate 2>/dev/null
>           # Wait a little to allow Squid to catch up before the logs
>           # are compressed
>           sleep 60
>       endscript
>   }
>
> I end up with an access.log that has the wrong number, and logrotate
> can't compress it. I think what is happening is: logrotate renames
> access.log to access.log.0, then squid -k rotate renames it to
> access.log.1, and then logrotate tries to compress access.log.0,
> which it now can't find. Or something like that. My uncompressed
> store.log is actually at .2

store.log is rarely useful. You may want to remove it from the config.

> I've worked around it by setting logfile_rotate to 0, but I'm
> wondering if this is a recent change with unexpected side effects
> when working with logrotate.

That would be the problem. "logfile_rotate 0" is a requirement when doing log rotation outside of Squid, such as with logrotate. It has been that way since Squid-2.6. The mystery is why it worked for you before, or why the squid.conf changed during the upgrade.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
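For reference, a sketch of a logrotate stanza that plays well with "logfile_rotate 0". The paths follow the original post; the "|| true" guard is an addition so that a stopped Squid does not abort the rotation run:

```
/var/log/squid/*.log {
    weekly
    rotate 52
    size 100M
    compress
    notifempty
    missingok
    sharedscripts
    postrotate
        # With logfile_rotate 0 in squid.conf, "-k rotate" only
        # closes and reopens the log files; it does not rename them,
        # so logrotate's own numbering stays consistent.
        /usr/sbin/squid -k rotate 2>/dev/null || true
        # give Squid a moment to reopen the logs before compression
        sleep 60
    endscript
}
```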
Re: [squid-users] Squid not caching, plz help
On 04/06/11 09:16, MrNicholsB wrote:
> Ok, I've had squid3 running rock solid for months. I recently
> migrated from Ubuntu 9 to 10.04 and now Squid is clearly not caching,
> but traffic IS passing through it. My conf is the same as it was
> before, but now I'm getting an error in cache.log every time squid
> gets a request. Any help would be great - I'm sure it's something
> simple I'm just not seeing. THANK YOU!!
>
> ERRORs from cache.log:
>
>   2011/06/03 13:57:32| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

You have an http_port configured with "transparent" or "intercept", telling Squid to look up NAT for the IP details. It is being sent traffic which apparently never went through NAT. Your access.log will contain lies about which client IP was making the request. *THIS IS BAD*. Your squid.conf is making you vulnerable to security attack CVE-2009-0801.

Solution:
* pick a random port number for the NAT-to-Squid packet arrival. Use a second port for regular proxy requests.
* follow the config details for the iptables mangle table: http://wiki.squid-cache.org/ConfigExamples/LinuxDnat

> # squid.conf
> visible_hostname central.server
> http_port 3128 transparent
> icp_port 0
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> dns_nameservers 127.0.0.1
> cache_swap_low 95
> cache_swap_high 98
> access_log /var/log/squid3/access.log
> cache_mem 2048 MB
> memory_pools on
> maximum_object_size_in_memory 50 MB
> log_icp_queries off
> cache_mgr ad...@meatspin.com
> cache_dir ufs /var/spool/squid3 2 32 256
> acl localhost src 127.0.0.1/32
> acl manager proto cache_object
> acl our_networks src 10.10.1.0/24
> acl localnet src 127.0.0.1/255.255.255.255
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> acl SSL_ports port 443
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 80          # http
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> acl CONNECT method CONNECT
> acl wuCONNECT dstdomain www.update.microsoft.com
> http_access allow our_networks
> http_access allow localnet

our_networks and localnet both mean "the LAN" in Squid terminology. They are the same thing; one is the Squid-2 default ACL name, the other the Squid-3 default. Though here you have configured localnet to mean IPv4-only localhost. You could alter the localhost definition to mean that instead.
> http_access allow CONNECT wuCONNECT our_networks
> http_access allow windowsupdate our_networks

The windows update config is only necessary when you have enabled features, such as authentication, which Windows Update cannot handle.

> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access allow manager localhost
> http_access deny manager
> http_access allow all

"allow all" on a proxy which intercepts traffic is amazingly unsafe. Since I'm tired of repeating myself day after day about what these default ACLs actually mean and why breaking the defaults is BAD... please read http://wiki.squid-cache.org/SquidFaq/SecurityPitfalls

In short:

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow our_networks
  http_access allow localhost
  http_access deny all

Notice how this is almost exactly the upstream default configuration. The only change you have needed is to define the LAN IP range ACL.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
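To make the "separate port for intercepted traffic" advice concrete, here is a hedged sketch in iptables-restore format; the interface name and the 3129 intercept port are example values, and squid.conf would then carry both "http_port 3128" for regular proxy requests and "http_port 3129 transparent" for the NAT'd traffic:

```
# nat table sketch: web traffic arriving from the LAN interface is
# redirected to Squid's dedicated intercept port. Interface and port
# numbers are examples only.
*nat
-A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3129
COMMIT
```

Keeping the intercept port separate (and firewalled from direct client connections) is what closes the CVE-2009-0801 spoofing hole mentioned above.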
Re: [squid-users] Don't serve cached content for some acl
On 03/06/11 20:26, Nuno Fernandes wrote:
> On Friday 03 June 2011, Amos Jeffries wrote:
>> On 03/06/11 10:46, E.S. Rosenberg wrote:
>>> If you want them to have a direct connection to the internet you
>>> could use always_direct (or never_direct) (which also exist in
>>> squid 2.x). Something like this:
>>>
>>>   acl servers src [ips/fqdns]
>>>   acl direct_sites {dst|dstdomain} {ips/fqdns|fqdns/domains}
>>>   always_direct allow servers direct_sites
>>
>> This is not relevant. always/never_direct only determine whether
>> cache_peer is used. They have no effect on bypassing Squid, as
>> implied above, OR on cached content being served up, as originally
>> asked.
>>
>>> Regards,
>>> Eli
>>>
>>> 2011/6/2 Nuno Fernandes:
>>>> Hello,
>>>> Is it possible with squid 3.1 to have some kind of acl so that
>>>> cached content doesn't get served to some client machines?
>>
>> If the client wishes to use the slow route to the origin, replacing
>> all cached content along the way, it sends "Cache-Control: no-cache"
>> in its request headers.
>>
>> Please explain why you want to force some clients to use the
>> slowest, most inefficient and wasteful source for data? All the
>> possible reasons I'm aware of have far better ways to be achieved.
>
> Ok.. let me explain. In the scenario squid -> dansguardian -> squid
> (cache), the second squid instance only does caching while the first
> does all the acl and auth work. I want to remove the second instance
> of squid and send the dansguardian requests back to the first
> instance for internet fetching and caching.

The answer then is simple: enable caching on squid1 and remove squid2 entirely from the setup. Squid1 will fetch things from DG; DG fetches from wherever globally it needs to. Only non-cached content will be fetched through DG, and the DG denial page will be cached when things are blocked - so you only test a URL with DG once.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
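A hedged sketch of what the simplified squid1 configuration might then contain; the DansGuardian address/port and the cache size below are assumptions for illustration, not values from the thread:

```
# squid1: keep the existing ACL/auth rules, re-enable caching, and
# route all misses through DansGuardian as the only parent.
# Address, port and cache size are example values.
cache_dir ufs /var/spool/squid 2000 16 256
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest default
never_direct allow all
```

With never_direct forcing everything through the parent, squid1 answers hits from its own cache and only sends misses through DG.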
[squid-users] Re: Squid 3.1.12 times out when trying to access MSDN
(First of all, I apologize profusely for top-posting; the Gmail Java Mobile client for Symbian can only top-post.)

There seems to be a glimmer of light for my situation... On the netfilter mailing list there is someone with a problem similar to mine, i.e. unable to access some websites but no problem with other websites. His web proxy was behind a firewall that was DROPping packets incoming to TCP/80. When he changed the rule to ACCEPT, those websites worked (although there wasn't *any* process listening on TCP/80).

I am going to experiment with my firewall rules on Monday.

Rgds,

On 2011-05-31, Pandu Poluan pa...@poluan.info wrote:
> On Mon, May 30, 2011 at 17:32, Pandu Poluan pa...@poluan.info wrote:
>> On Mon, May 30, 2011 at 17:25, Pandu Poluan pa...@poluan.info wrote:
>>> On Fri, May 27, 2011 at 17:47, Amos Jeffries squ...@treenet.co.nz wrote:
>>>> On 27/05/11 19:42, Pandu Poluan wrote:
>>>>> Hello list,
>>>>>
>>>>> I've been experiencing a perplexing problem. Squid 3.1.12 often
>>>>> times out when trying to access certain sites, most notably MSDN,
>>>>> but it's still very fast when accessing other, non-problematic
>>>>> sites. For instance, trying to access the following URL *always*
>>>>> results in a timeout:
>>>>>   http://msdn.microsoft.com/en-us/library/aa302323.aspx
>>>>> Trying to get the above URL using wget: no problem.
>
> [-- 8< -- snipped]
>
> I've specified "dns_v4_fallback on" explicitly (it was not specified
> previously) and even replaced the miss_access lines with "miss_access
> allow all". Still failing on those problematic pages.
>
> No other bright ideas? :-(
>
> --
> Pandu E Poluan
> ~ IT Optimizer ~
> Visit my Blog: http://pepoluan.posterous.com

--
Pandu E Poluan - IT Optimizer
My website: http://pandu.poluan.info/
Re: [squid-users] Squid not caching, plz help
Seems like Amos gave you many things to see the result in...

Eliezer

On 04/06/2011 12:08, Amos Jeffries wrote:
> [full reply quoted in the previous message - snipped]
Re: [squid-users] Squid not caching, plz help
On 04/06/2011 12:08, Amos Jeffries wrote:
> [full reply quoted earlier in the thread - snipped]
>
> * follow the config details for the iptables mangle table:
>   http://wiki.squid-cache.org/ConfigExamples/LinuxDnat

Sorry, that should have been
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
[squid-users] Re: Should I see a massive slowdown when chaining squid => privoxy
sichent sich...@mail.ru writes:
>> Setup: Gentoo linux OS on squid and privoxy home lan server
>>   Squid-3.1.12
>>   privoxy-3.0.17
>>
>> I'm not running an html server, just trying to use squid and privoxy
>> for my own browsing.
>
> Why not use the ICAP or URL-rewriter functionality built into Squid
> to achieve the same results as privoxy, instead of having this
> chaining setup?

Well, for starters, I'd be jumping off into stuff I know nothing about, and would more than likely end up spending way more time than I should on it. I don't know much about squid and privoxy either, but at least I have tinkered with them several times over the years.
[squid-users] Re: Should I see a massive slowdown when chaining squid => privoxy
Eliezer Croitoru elie...@ec.hadorhabaac.com writes:
> Another question: have you by any chance run a speed test on the
> current setup? If you get the right raw speed but the wrong speed
> getting the page processed, that is a known side effect of privoxy.

Is there some standard way to do a speed test? I'm not sure I'd know how to do one that was even semi-scientific.

Find a slow-loading page and do `time firefox URL', killing it as soon as it loads? Then clean the cache and do it again with squid and privoxy in the loop?
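One rough, repeatable way to compare, assuming curl is available; the proxy address below is an example, and a single sample proves little, so repeat each command a few times and compare the typical values:

```shell
# total fetch time, direct vs. through the squid->privoxy chain
curl -o /dev/null -s -w 'direct:  %{time_total}s\n' http://example.com/
curl -o /dev/null -s -w 'proxied: %{time_total}s\n' \
    -x http://192.168.0.2:3128 http://example.com/
```

Using the same single URL both ways avoids the browser's parallel-fetch behaviour muddying the numbers, at the cost of not measuring full page-render time.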
[squid-users] Squid Refuses to Serve Cached Content, strange cache.log errors
I've set "transparent" and removed it, and changed my OS - it doesn't matter. access.log doesn't even show half the things I'm downloading. My clients' browsers are set manually to the squid server's IP. I get internet through the proxy, just not the benefits of the cache :(

  root@katmai:/var/log/squid3# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/katmai-root   37G  1.2G   34G   4% /
  varrun                   1.5G   60K  1.5G   1% /var/run
  varlock                  1.5G     0  1.5G   0% /var/lock
  udev                     1.5G   44K  1.5G   1% /dev
  devshm                   1.5G     0  1.5G   0% /dev/shm
  /dev/sda1                236M   25M  199M  12% /boot

  Linux katmai 2.6.24-19-server #1 SMP Wed Jun 18 15:18:00 UTC 2008 i686 GNU/Linux

cache.log errors:

  2011/06/04 13:29:25| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:30:29| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:31:30| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:32:31| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:33:32| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:34:34| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:35:35| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available
  2011/06/04 13:36:36| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

squid.conf:

  visible_hostname central.server
  http_port 3128
  icp_port 0
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320
  dns_nameservers 127.0.0.1
  cache_swap_low 90
  cache_swap_high 95
  access_log /var/log/squid3/access.log
  cache_mem 2048 MB
  memory_pools on
  maximum_object_size_in_memory 50 MB
  log_icp_queries off
  cache_mgr ad...@meatspin.com
  cache_dir ufs /var/spool/squid3 2 32 256
  acl localhost src 127.0.0.1/32
  acl manager proto cache_object
  acl our_networks src 10.10.1.0/24
  acl SSL_ports port 443
  acl Safe_ports port 21          # ftp
  acl Safe_ports port 80          # http
  acl Safe_ports port 443         # https
  acl Safe_ports port 70          # gopher
  acl Safe_ports port 210         # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280         # http-mgmt
  acl Safe_ports port 488         # gss-http
  acl Safe_ports port 591         # filemaker
  acl Safe_ports port 777         # multiling http
  acl CONNECT method CONNECT
  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow our_networks
  http_access allow localhost
  http_access deny all
  cache deny localhost manager SSL_ports
  maximum_object_size 300 MB
  cache_replacement_policy heap GDSF
[squid-users] Re: Should I see a massive slowdown when chaining squid -> privoxy
Amos Jeffries squ...@treenet.co.nz writes:

[...]
> Harry wrote (summarized -ed hp):
>   Adding squid and privoxy into a proxy setup seems to really slow
>   down my browsing compared to browsing with a direct connection
>   (no proxy). And asks if this is normal.
[...]
> Speed gain/loss/other depends on what you are moving from. MORE
> IMPORTANTLY: how you define slow! Keep in mind that you also now have
> around 2x the processing going on with 2 proxies. The difference added
> by Squid can be at least 10ms. Some people call that a noticeable
> slowdown. Some don't care about anything less than a second.

I'm guessing it's more than seconds slower, but I'm not really sure how to gauge the difference reliably, so as not to give flawed information here or have the difference be due to caching or something. Can you suggest a method to arrive at a fairly good comparison?

> * 3.1 is about 10-20% slower than the latest 2.7 on the same config,
>   with the older versions of 3.1 being on the slower end of that scale
>   as we work to optimize and fix things throughout the series.

Wouldn't 20% be noticeable? So you're saying to back down a few versions for now?

> * Moving to Squid from a non-proxy setup can be a major drop depending
>   on the browser age. The browsers themselves drop the parallel fetch
>   rate from hundreds down to under 10. Browser tweaking is the only
>   way to avoid this.

I'm using Firefox 4.x on all home LAN machines (that have a GUI). Can you recommend some documentation that might help with what you called `browser tweaking'? I've never done anything special to a browser other than add or subtract add-on tools.

> * Moving from browser -> privoxy to a browser -> squid -> privoxy
>   setup you should have seen only a small drop. Some possibilities are
>   Squid using slow disks (maybe RAID), or the Squid box is swapping,
>   or the bandwidth is being routed down the same physical links
>   to/from Squid.
No RAID, and the server hardware is a P4-era Intel(R) Celeron(R) CPU at 3.06GHz with 2 GB RAM, running on oldish IDE discs.

----

I probably should have included some questions about squid.conf and privoxy/config in the first post. Maybe there are things in there that are not good left as default. I realize that both tools have several config files; I've left all but the two main ones in their default state and have posted my working /etc/squid/squid.conf and /etc/privoxy/config.

There is a lot of debris in the comments, but I thought it might be useful to leave it all in for jogging memories. I've also included a way to prune the comments: change the name of the cgi script at the end of the URL from `disp.cgi' to `strp.cgi'. Any coaching would be well appreciated.

----

squid.conf WITH comments:    www.jtan.com/~reader/sqcfg/disp.cgi
squid.conf WITHOUT comments: www.jtan.com/~reader/sqcfg/strp.cgi

privoxy's config WITH comments:    www.jtan.com/~reader/prcfg/disp.cgi
privoxy's config WITHOUT comments: www.jtan.com/~reader/prcfg/strp.cgi
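On the `browser tweaking' question above: Firefox's parallel-connection limits live in about:config and can be preset from a user.js in the profile directory. A hedged sketch -- the pref names below are from the Firefox 3/4 era and the values are purely illustrative, so verify each one in about:config before relying on it:

```
// user.js -- drop into the Firefox profile directory.
// Pref names are Firefox 3/4-era; values here are illustrative only.
user_pref("network.http.max-connections", 48);
user_pref("network.http.max-connections-per-server", 16);
// This one matters most when everything funnels through one proxy:
user_pref("network.http.max-persistent-connections-per-proxy", 16);
```

The per-proxy limit is the usual culprit: once all traffic goes through a single proxy host, the per-proxy connection cap replaces the per-server cap, which can sharply reduce parallelism on image-heavy pages.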
[squid-users] Re: Should I see a massive slowdown when chaining squid -> privoxy
Amos Jeffries squ...@treenet.co.nz writes:

[...]
> Speed gain/loss/other depends on what you are moving from. MORE
> IMPORTANTLY: how you define slow!

OK, getting down to cases here. Here is one test:

First, clear the cache the simple way (Tools/Options/Advanced/Network, `Clear Now'), then close Firefox (4.0.1). Start Firefox (it starts on Google, which is struggling to resolve) and clear the cache once more. Type in the URL http://www.ford-trucks.com, start the stopwatch, then send Firefox to that address by hitting Enter.

I get 1:32 (one minute, 32 seconds) with the chain of browser -> squid -> privoxy in place.

Now with no proxy at all: clear the cache and kill Firefox. Start Firefox (it starts on Google very quickly) and clear the cache once more for good measure. Type in http://www.ford-trucks.com, start the stopwatch, then send Firefox to that address by hitting Enter.

I get 0:09 (9 seconds) with no proxy in place.

That is something on the order of 900% faster... I think.

----

What kinds of things should I do to start tracking down what is hampering the connections so badly?
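For a comparison less noisy than a stopwatch and a browser, the same URL can be timed repeatedly with curl, with and without the proxy chain. A sketch, assuming the squid -> privoxy chain listens on localhost:3128 (an assumption -- substitute your proxy's address and port):

```shell
#!/bin/sh
# Time the same fetch three times, direct and via the proxy, and average.
# This measures only the base HTML document, which usefully isolates
# per-request proxy latency from browser parallel-fetch effects.
URL=http://www.ford-trucks.com/

echo "direct:"
for i in 1 2 3; do
    curl -o /dev/null -s -w '%{time_total}\n' "$URL"
done | awk '{s+=$1} END {printf "avg %.2fs over %d runs\n", s/NR, NR}'

echo "via proxy:"
for i in 1 2 3; do
    curl -o /dev/null -s -w '%{time_total}\n' -x http://localhost:3128 "$URL"
done | awk '{s+=$1} END {printf "avg %.2fs over %d runs\n", s/NR, NR}'
```

If the two averages are close but the browser is still an order of magnitude slower through the chain, the problem is likely in per-connection parallelism or DNS rather than in raw proxy processing time.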
Re: [squid-users] Re: Should I see a massive slowdown when chaining squid -> privoxy
On 05/06/11 13:54, Harry Putnam wrote:
> [... timing test snipped: 1:32 to load http://www.ford-trucks.com
> through the browser -> squid -> privoxy chain, versus 0:09 with no
> proxy at all ...]
>
> What kinds of things should I do to start tracking down what is
> hampering the connections so badly?

This may help...
http://www.extremetech.com/article2/0,2845,1854196,00.asp

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2
Re: [squid-users] lots of UDP connections
On 06/04/2011 12:59 PM, Amos Jeffries wrote:
> Bal Krishna Adhikari wrote on 6/3/2011 6:13 AM:
>> Hello, I found a lot of UDP connections coming to my proxy servers. I
>> can't find the cause of such one-way traffic to my servers. Sample
>> UDP traffic:
>>
>> 14:00:07.506612 IP 41.209.69.146.10027 > x.x.x.x.65453: UDP, length 30
>> 14:00:07.518118 IP 121.218.37.254.41597 > x.x.x.x.64338: UDP, length 30
>> 14:00:07.572559 IP 85.224.143.193.29978 > x.x.x.x.62782: UDP, length 30
>> [... two dozen similar lines, lengths 30-67, snipped ...]
>> 14:00:08.317404 IP 115.117.219.18.25817 > x.x.x.x.59936: UDP, length 30
>>
>> Does anyone have any idea whether this traffic is genuine or some
>> kind of attack? x.x.x.x is my proxy server.
>> ---
>> Bal Krishna
>
> On 04/06/11 01:16, Chad Naugle wrote:
>> Check the hostnames of these IP addresses. They could be DNS replies,
>> using random ports for source/destinations. Squid can generate tons
>> of DNS traffic.
>
> I don't think it's genuine Squid traffic. DNS, ICP and HTCP all use a
> fixed well-known port at one end and a rarely changing port at the
> other. It could be anything else on the box though.
>
> There are a few CVE attacks this could be, two using DNS and one HTCP.
> If you have Squid 2.7.STABLE8+, 3.0.STABLE23+ or 3.1.1+ you are safe
> from those. They are just annoying.
>
> If you have a Squid-3.1+ with an IPv6 address publicly advertised,
> this could be a sign of v6 connection attempts. Several IP tunnel
> protocols involve UDP handshakes.
>
> Amos

I'm currently using 2.7.STABLE9, and the connections seem to have increased compared to earlier. Would blocking all UDP other than DNS and SNMP from outside solve the problem?
--
Bal Krishna
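On the question of blocking UDP other than DNS and SNMP: rather than opening specific high ports, the usual approach is to let connection tracking admit replies to queries the box itself sent (which covers DNS lookups) and drop everything else unsolicited. A sketch, in which eth0 as the outside interface and 192.0.2.10 as the SNMP management station are placeholders:

```shell
# Admit replies to UDP queries this box initiated (DNS lookups, etc.);
# conntrack matches them so no high ports need to be opened inbound.
iptables -A INPUT -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT

# Admit SNMP polls, but only from the management station:
iptables -A INPUT -p udp -s 192.0.2.10 --dport 161 -j ACCEPT

# Drop all other unsolicited UDP arriving on the outside interface:
iptables -A INPUT -i eth0 -p udp -j DROP
```

This would silence the one-way traffic shown above without affecting Squid, since (as noted) Squid's own DNS/ICP/HTCP traffic uses fixed well-known ports and, in this config, ICP is disabled anyway (icp_port 0).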