[squid-users] SSL / TLS
Slightly off-topic, but am I correct in thinking that TLS supersedes SSL?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Re: [squid-users] Multiple SSL certificates on same IP
Could you:
A – forward to different ports
B – use Network Address Translation?

Thoughts…

From: squid-users On Behalf Of Patrick Chemla
Sent: 19 December 2018 18:29
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Multiple SSL certificates on same IP

Hi all,

Thanks for the great work you do/provide with squid. I have been using squid for years, I like it very much, and I am now installing an SSL load-balancing unit for about 80 domains/sub-domains.

My OS release is Fedora release 29 (Twenty Nine). My squid version and parameters are:

# squid -v
Squid Cache: Version 4.4
Service Name: squid
This binary uses OpenSSL 1.1.1-pre9 (beta) FIPS 21 Aug 2018. For legal restrictions on distribution see https://www.openssl.org/source/license.html
configure options: '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' '--disable-dependency-tracking' '--enable-eui' '--enable-follow-x-forwarded-for' '--enable-auth' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,PAM,POP3,RADIUS,SASL,SMB,SMB_LM' '--enable-auth-ntlm=SMB_LM,fake' '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group' '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-removal-policies=heap,lru' '--enable-snmp'
'--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-diskio' '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads' '--disable-arch-native' '--with-pic' '--disable-security-cert-validators' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -pie -Wl,-z,relro -Wl,-z,now -Wl,--warn-shared-textrel' 'CXXFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

The problem I have is that all these domains are actually on one IP only, on a single server running nginx with multiple SSL certificates on one single IP, and I would like to do the same with squid. I did this a few years ago with HAProxy, but I would prefer to keep squid.

3 choices:
- Have more than one IP on the server, and create SSL certificates from LetsEncrypt, each including a list of some domains and sub-domains
- Create one very big certificate and have squid use it (not the best choice, because the domains have very different content, far from one another)
- Have squid manage all the certificates on a single IP (the best, because some domains have very high encryption needs, and LetsEncrypt is not their preference)

Like a bottle in the sea: is that possible, multiple certificates with squid 4.4 on a single IP?

Thanks for your help.
Patrick
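Option A from the reply above (one listening port per certificate, all on the same IP) can be sketched roughly like this in Squid-4 syntax. This is an untested sketch: the IP, ports, certificate paths and file names below are illustrative only, not taken from the original post.

```
# One https_port per certificate, all bound to the same IP
# (illustrative address, ports and paths)
https_port 203.0.113.10:8443 accel tls-cert=/etc/squid/certs/siteA.pem tls-key=/etc/squid/certs/siteA.key
https_port 203.0.113.10:8444 accel tls-cert=/etc/squid/certs/siteB.pem tls-key=/etc/squid/certs/siteB.key
```

Clients (or an upstream NAT rule) would then have to reach each certificate's sites on its dedicated port; check the https_port documentation for your exact build before relying on this.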
Re: [squid-users] Advice - Squid Proxy
> So, Squid is installed on an Ubuntu VM, which runs on your laptop?

Correct.

> So, the phone is either - direct connection via mobile Internet access, or
> via Squid and your home Internet connection - no way for the phone to use the
> Internet connection without going via Squid?

Yeah - however I use Bitdefender on top of squid. Once the phone detects and connects to my laptop it then uses the proxy server.

> Configured it in Squid, so users have to authenticate there to get access?

Yeah - I have an ACL running in Squid.

> So, where do any other devices (phone, TV, the three VMs) get their IP
> addresses from? They must have them, otherwise they couldn't communicate
> with Squid... What do these devices have as a gateway address?

I use DHCP allocated from Ubuntu; the gateway address that's broadcast is my Ubuntu address.

I'm writing this and thinking I've gone a bit Orwellian. Still, I think I've covered the bases. I was toying with the idea of running Asterisk off my laptop too, but I figured I'd start with this project.

-----Original Message-----
From: squid-users On Behalf Of Antony Stone
Sent: 19 December 2018 16:17
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Advice - Squid Proxy

On Wednesday 19 December 2018 at 16:04:36, Squid users wrote:

> Hi,
>
> Re network diagram - Mish Mash / blended / spaghetti I think :p
>
> Squid is installed on the Ubuntu virtual machine. Sorry forgot to draw
> that on.

So, Squid is installed on an Ubuntu VM, which runs on your laptop?

> The phone connects to mobile internet when out of the house, then
> reverts back to going via squid proxy when my laptop wifi is turned
> on. The phone detects my laptop and connects accordingly. The phone
> reconfigures to go via proxy when it connects to my laptop.

So, the phone is either - direct connection via mobile Internet access, or via Squid and your home Internet connection - no way for the phone to use the Internet connection without going via Squid?

> As for the TV - yeah my laptop needs to be in the house for that to work.

Okay.

> Internet Use - I'm happy to record websites called by 'user', so for example:
> TV = user1
> Phone = user2
> Laptop = user3
> Then each family member with their own user id / password.
> I've configured this bit already.

Configured it in Squid, so users have to authenticate there to get access?

> I have set my home internet router to only allocate my laptop MAC a
> DHCP address.

So, where do any other devices (phone, TV, the three VMs) get their IP addresses from? They must have them, otherwise they couldn't communicate with Squid... What do these devices have as a gateway address?

> I'll draw a better diagram later today.

Okay.

> I may have gone a bit overboard with the control and monitoring :s

Yes, maybe :)

Antony.

--
Software development can be quick, high quality, or low cost.
The customer gets to pick any two out of three.

Please reply to the list; please *don't* CC me.
Re: [squid-users] Advice - Squid Proxy
Hi,

Re network diagram - mish-mash / blended / spaghetti, I think :p

Squid is installed on the Ubuntu virtual machine. Sorry, I forgot to draw that on.

The phone connects to mobile internet when out of the house, then reverts back to going via the squid proxy when my laptop wifi is turned on. The phone detects my laptop and connects accordingly. The phone reconfigures to go via the proxy when it connects to my laptop.

As for the TV - yeah, my laptop needs to be in the house for that to work.

Internet use - I'm happy to record websites called by 'user', so for example:
TV = user1
Phone = user2
Laptop = user3
Then each family member with their own user id / password. I've configured this bit already.

I have set my home internet router to only allocate my laptop MAC a DHCP address.

I'll draw a better diagram later today.

I may have gone a bit overboard with the control and monitoring :s

Thanks

-----Original Message-----
From: squid-users On Behalf Of Antony Stone
Sent: 19 December 2018 13:19
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Advice - Squid Proxy

On Wednesday 19 December 2018 at 13:22:57, Squid users wrote:

> The attached configuration is currently in use on my computer.

It isn't a network diagram; I'm not quite sure what to describe it as, but I don't even see where Squid is on there.

> My aim is to use my laptop while I'm out and about (libraries, work
> etc) and when I'm at home have my TV and Phone connect into the proxy server.
> This would allow caching by any device to my laptop so I'm minimising
> my connections outbound.

So, Squid runs on your laptop? What are the phone and TV supposed to do when the laptop isn't there?

> I also want it to record use by other people so I can monitor my
> internet use at home.

Define "use". What level of detail do you want to record?

> As you can see I run bitdefender parental control on my computer.
> Would it be possible for someone to manipulate the proxy server to bypass
> this?
> Could the proxy server be used to hide / obscure actual sites visited?

Show us a rather more conventional network diagram, which shows how packets get to & from the Internet, and what filters / firewalls are in place between the different bits of equipment, and we might be able to answer this.

Antony.

--
"Can you keep a secret?"
"Well, I shouldn't really tell you this, but... no."

Please reply to the list; please *don't* CC me.
[squid-users] Advice - Squid Proxy
The attached configuration is currently in use on my computer.

My aim is to use my laptop while I'm out and about (libraries, work etc.) and, when I'm at home, have my TV and phone connect into the proxy server. This would allow caching by any device to my laptop, so I'm minimising my outbound connections. I also want it to record use by other people so I can monitor my internet use at home.

As you can see, I run Bitdefender parental control on my computer. Would it be possible for someone to manipulate the proxy server to bypass this? Could the proxy server be used to hide / obscure the actual sites visited?

Can anyone point out any flaws or issues?

Thanks
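For the "record use by each family member" part of the question, the usual Squid pattern is basic authentication, since the default access.log format already records the authenticated username per request. A minimal sketch, assuming the bundled NCSA helper and a password file at /etc/squid/passwd (both the helper path and the file path vary by distribution and are illustrative here):

```
# Basic auth via the bundled NCSA helper; create the file with htpasswd(1).
# The username then appears in the standard access.log format.
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm home-proxy

acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```

Note that basic auth sends credentials in cleartext, which is usually acceptable on a home LAN but worth knowing about.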
[squid-users] Possible access via v6 when no interfaces present, fixable with dns_v4_first
Hello squid users,

I'm trying to understand a strange problem with requests to edge.icloud.com, which I think may be related to IPv6 DNS resolution.

To set the scene: we operate a large (1,000+) fleet of Squid 3.5.25 caches. Each runs on a separate LAN, connected to the internet via another upstream proxy, accessed over a wide-area network. Each local cache runs on a CentOS 6 box. For DNS resolution, each local CentOS server runs BIND, which is configured to resolve against a local Microsoft DNS server, which then resolves internet queries using a whole-of-WAN BIND service operated by the carrier. The WAN does not support IPv6, and CentOS does not have any v6 network interfaces configured.

Recently we became aware of a fault on a single cache serving requests for edge.icloud.com. Requests would time out with a TAG_NONE/503 written to the log. The error could be replicated with cURL at the CLI using this URL: https://edge.icloud.com/perf.css. This was a strange error because, at the time it happened, it was possible to connect to edge.icloud.com on port 443. The error was happening in just one site.
To isolate the fault, we stripped the Squid config at the affected site right back to the following:

# Skeleton Squid 3.5.25 config
shutdown_lifetime 2 seconds
max_filedesc 16384
coredump_dir /var/spool/squid
dns_timeout 5 seconds
error_directory /var/www/squid-errors
logfile_rotate 0
http_port 3128
cache_dir ufs /var/spool/squid 8192 16 256
maximum_object_size 536870912 bytes
cache_replacement_policy heap LFUDA
http_access allow localhost
debug_options ALL,5

Here are the messages written to the log when fetching https://edge.icloud.com/perf.css with curl:

2018/05/08 16:25:46.321 kid1| 14,3| ipcache.cc(362) ipcacheParse: 18 answers for 'edge.icloud.com'
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #0 [2403:300:a50:105::f]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #1 [2403:300:a50:105::9]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #2 [2403:300:a50:100::e]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #3 [2403:300:a50:101::5]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #4 [2403:300:a50:104::e]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #5 [2403:300:a50:104::9]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #6 [2403:300:a50:104::5]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse: edge.icloud.com #7 [2403:300:a50:101::6]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #8 17.248.155.107
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #9 17.248.155.142
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #10 17.248.155.110
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #11 17.248.155.80
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #12 17.248.155.114
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #13 17.248.155.77
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #14 17.248.155.145
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse: edge.icloud.com #15 17.248.155.89
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths: Found sources for 'edge.icloud.com:443'
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(281) peerSelectDnsPaths: always_direct = DENIED
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(282) peerSelectDnsPaths: never_direct = DENIED
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:105::f]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:105::9]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:100::e]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:101::5]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:104::e]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:104::9]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[2403:300:a50:104::5]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths: DIRECT = local=[::] remote=[24
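As the subject line of this thread suggests, the workaround that matches the symptom above (AAAA destinations tried first on a v6-less network) is a one-line squid.conf change, available in Squid 3.5:

```
# Prefer IPv4 destinations when a hostname resolves to both A and AAAA
# records; the AAAA results are still used, just tried last.
dns_v4_first on
```

This does not stop Squid resolving AAAA records; it only changes the order in which candidate addresses are attempted, so hosts with working v4 paths succeed before the unreachable v6 ones time out.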
[squid-users] Cache peer selection with duplicate host names
Hi Squid users,

I'm having some trouble understanding Squid's peer selection algorithms in a configuration where multiple cache_peer lines reference the same host. The background to this is that we wish to present cache service using multiple accounts at an upstream provider, with account selection taking place based on the local TCP port (8080, 8181, 8282) the request arrived on.

First we define the cache peers:

cache_peer proxy.myisp.net parent 8080 0 login=staffuser:abc123 no-query no-digest no-netdb-exchange connect-timeout=1 connect-fail-limit=2 name=Staff
cache_peer proxy.myisp.net parent 8080 0 login=guestuser:abc123 no-query no-digest no-netdb-exchange connect-timeout=1 connect-fail-limit=2 name=Guest
cache_peer proxy.myisp.net parent 8080 0 login=PASS no-query no-digest no-netdb-exchange connect-timeout=1 connect-fail-limit=2 name=Student

Then lock access down:

acl localport_Staff localport 8282
acl localport_Guest localport 8181
acl localport_Student localport 8080

cache_peer_access Staff allow localport_Staff !localport_Guest !localport_Student
cache_peer_access Guest allow localport_Guest !localport_Staff !localport_Student
cache_peer_access Student allow localport_Student !localport_Guest !localport_Staff

To reproduce the error, first a connection is made with wget to TCP port 8282:

http_proxy=http://10.159.192.24:8282/ wget www.monash.edu --delete-after

Squid selects the Staff profile as expected:

1492999376.993 811 10.159.192.26 TCP_MISS/200 780195 GET http://www.monash.edu/ - FIRSTUP_PARENT/Staff text/html "EDU%20%20%20en" "Wget/1.12 (linux-gnu)"

Then another connection is made, this time to port 8080:

http_proxy=http://10.159.192.24:8080/ wget www.monash.edu --delete-after

But instead of the desired Student profile being selected, the Staff profile is still used:

1492999405.953 338 10.159.192.26 TCP_MISS/200 780195 GET http://www.monash.edu/ - FIRSTUP_PARENT/Staff text/html "EDU%20%20%20en" "Wget/1.12 (linux-gnu)"

I had a look in the cache.log with debug_options 44,6 enabled. None of the messages reference the contents of the name= parameter in the cache_peer lines; only hostnames and IP addresses are mentioned. I suspect that the peer selection algorithms have changed since Squid 3.1, and that peers are now selected based on hostname (or IP address) rather than the name defined in the cache_peer line. Is this correct? If so, is there any other way to achieve the functionality outlined above (hit different usernames on an upstream peer based on which local port the request arrived on)?

Cheers
Luke
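One workaround that may be worth testing, if selection really does collapse peers by hostname: give each cache_peer line a distinct hostname that resolves to the same upstream address, so the three peers are no longer indistinguishable. This is an untested sketch; the alias names and IP below are invented for illustration.

```
# /etc/hosts (all aliases point at the real upstream proxy's IP):
#   192.0.2.1  proxy-staff.local proxy-guest.local proxy-student.local

cache_peer proxy-staff.local   parent 8080 0 login=staffuser:abc123 no-query no-digest name=Staff
cache_peer proxy-guest.local   parent 8080 0 login=guestuser:abc123 no-query no-digest name=Guest
cache_peer proxy-student.local parent 8080 0 login=PASS             no-query no-digest name=Student
```

The existing cache_peer_access rules would stay unchanged; only the hostnames differ, which should keep the peers separate even if name= is ignored during selection.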
[squid-users] [SOLVED] Re: TCP Outgoing Address ACL Problem
Thanks Garry and Amos! My problem is solved.
[squid-users] TCP Outgoing Address ACL Problem
Can anyone point out what I'm doing wrong in my config?

Squid config: https://bpaste.net/show/796dda70860d

I'm trying to use ACLs to direct incoming traffic on assigned ports to assigned outgoing addresses. But squid instead uses the first IP address assigned to the interface, one not listed in the config.

IP/Ethernet interface assignment: https://bpaste.net/show/5cf068a4ce9a

Thanks!

P.S. Sorry for that last message.
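For reference, the port-to-source-address mapping being attempted normally looks like the sketch below. The ports and addresses here are placeholders, not taken from the paste above; the real config is at the bpaste link.

```
# Map each listening port to a fixed outgoing source address.
# The chosen address must already be configured on an interface.
acl from_3128 localport 3128
acl from_3129 localport 3129

tcp_outgoing_address 192.0.2.10 from_3128
tcp_outgoing_address 192.0.2.11 from_3129
```

A common gotcha is that tcp_outgoing_address is a fast-category check, so slow ACL types (e.g. proxy_auth) cannot be used reliably here; localport/myportname ACLs are fine.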
[squid-users] Custom User Agent Per ACL
Is it possible to have a custom "request_header_replace User-Agent" assigned to a mapped acl / listening port / tcp_outgoing_address?

Examples:

acl ipv4-1 myportname 3128 src xxx.xxx.xxx.xxx/24
http_access allow ipv4-1
-> request_header_replace User-Agent "Firefox x" (for ipv4-1)
-> tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-1

acl ipv4-2 myportname 3129 src xxx.xxx.xxx.xxx/24
http_access allow ipv4-2
-> request_header_replace User-Agent "Internet Explorer x" (for ipv4-2)
-> tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-2

Thanks!
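For context, header replacement in squid.conf is normally paired with request_header_access: the replacement text is only applied to headers that an ACL has denied. A sketch of that pairing is below. Note the caveat (as I understand the directives; verify against your version's docs): the replacement *value* for a given header is a single global setting, so truly different User-Agent strings per port would likely need separate Squid instances or an ICAP/eCAP adapter.

```
# Deny (strip) the original User-Agent only for traffic matching ipv4-1,
# then substitute the single, global replacement value.
acl ipv4-1 myportname 3128
request_header_access User-Agent deny ipv4-1
request_header_replace User-Agent "Firefox x"
```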
Re: [squid-users] Introducing delay to HTTP 407 responses
Alex,

> However, there is a difference between my August tests and this thread.
> My tests were for a request parsing error response. Access denials do not
> reach the same http_reply_access checks! See "early return"
> statements in clientReplyContext::processReplyAccess(), including:
>
> > /** Don't block our own responses or HTTP status messages */
> > if (http->logType.oldType == LOG_TCP_DENIED ||
> >     http->logType.oldType == LOG_TCP_DENIED_REPLY ||
> >     alwaysAllowResponse(reply->sline.status())) {
> >     headers_sz = reply->hdr_sz;
> >     processReplyAccessResult(ACCESS_ALLOWED);
> >     return;
> > }
>
> I am not sure whether avoiding http_reply_access in such cases is a
> bug/misfeature or the right behavior. As any exception, it certainly
> creates problems for those who want to [ab]use http_reply_access as a
> delay hook. FWIW, Squid has had this exception since 2007.

Thanks, that makes sense. It would be great if there were a way to slow down 407 responses; at the moment the only workaround I can think of is to write a log-watching script to maintain a list of offending IP/domain pairs, then write a helper to use that data to introduce delay when the request is first received (via http_access and the !all trick). If anyone has a better option, I'm all ears.

Luke
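The request-side workaround described above could be wired up roughly as follows. This is an untested sketch: delay_check.pl (and whatever offenders file the log-watcher maintains) are hypothetical helpers, not existing code.

```
# Hypothetical helper: sleeps N seconds when %SRC is on the log-watcher's
# offenders list, then prints ERR. Because of the trailing "!all", the rule
# can never actually deny anything -- it exists only to stall the request.
external_acl_type delaycheck ttl=0 negative_ttl=0 cache=0 %SRC /usr/local/bin/delay_check.pl
acl delayed external delaycheck

http_access deny delayed !all
```

Since http_access is evaluated before authentication challenges are generated, the stall happens before the 407 is sent, which is the point of doing it on the request side.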
Re: [squid-users] Introducing delay to HTTP 407 responses
> > I set this up as you suggested, then triggered a 407 response from the cache.
> > It seems that way; I couldn't see aclMatchHTTPStatus or http-response-407 in the log:
>
> Strange. I was sure Alex did some tests recently and proved that even
> internally generated responses get http_reply_access applied to them.
> Yet no sign of that in your log.
>
> Is this a very old Squid version?

It's a recent Squid version - 3.5.20 on CentOS 6, built from the SRPM kindly provided by Eliezer.

> Or are the "checking http_reply_access" lines just later in the log than
> your snippet covered?

There was nothing more in the log previously posted at the point the 407 response was returned to the client. That log did have a lot of other stuff in it, though.

Using a much simpler squid.conf (attached), I tested for differences between authenticated and unauthenticated requests when "http_reply_access deny all" is in place. When credentials are supplied, an HTTP 403 (forbidden) response is returned, as you would expect. But when credentials are not supplied, an HTTP 407 response is returned. The divergence seems to start around line 31 in cache_noauth.log:

Checklist.cc(63) markFinished: 0x331e4a8 answer AUTH_REQUIRED for AuthenticateAcl exception

Perhaps when answer=AUTH_REQUIRED (line 35), http_reply_access is not checked? Another difference is that Acl.cc(158) reports async when an authenticated request is in place, but not otherwise. If someone could give me some pointers on where to look in the source, I can start digging to see if I can find out more.

Luke

Attachments: cache_auth.log, cache_noauth.log, squid.conf
Re: [squid-users] Introducing delay to HTTP 407 responses
squid/access.log:

2016/10/04 22:37:18.197 kid1| 28,5| Acl.cc(138) matches: checking (access_log /var/log/squid/access.log line)
2016/10/04 22:37:18.197 kid1| 28,3| Acl.cc(158) matches: checked: (access_log /var/log/squid/access.log line) = 1
2016/10/04 22:37:18.197 kid1| 28,3| Acl.cc(158) matches: checked: access_log /var/log/squid/access.log = 1
2016/10/04 22:37:18.197 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffcaaa6a430 answer ALLOWED for match
2016/10/04 22:37:18.197 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffcaaa6a430
2016/10/04 22:37:18.197 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a430
2016/10/04 22:37:22.738 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 checking fast rules
2016/10/04 22:37:22.738 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: list: 0x1c3da68
2016/10/04 22:37:22.738 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:22.738 kid1| 28,5| Checklist.cc(400) bannedAction: Action 'ALLOWED/0is not banned
2016/10/04 22:37:22.738 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:22.738 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:22.738 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '127.0.0.1:38013' found
2016/10/04 22:37:22.738 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:22.738 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 = 1
2016/10/04 22:37:22.738 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 1
2016/10/04 22:37:22.738 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:22.738 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:22.738 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540
2016/10/04 22:37:22.739 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffcaaa6a540 checking fast rules
2016/10/04 22:37:22.739 kid1| 28,5| Checklist.cc(346) fastCheck: aclCheckFast: list: 0x1c3da68
2016/10/04 22:37:22.739 kid1| 28,5| Acl.cc(138) matches: checking snmp_access
2016/10/04 22:37:22.739 kid1| 28,5| Checklist.cc(400) bannedAction: Action 'ALLOWED/0is not banned
2016/10/04 22:37:22.739 kid1| 28,5| Acl.cc(138) matches: checking snmp_access#1
2016/10/04 22:37:22.739 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:22.739 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '127.0.0.1:38013' found
2016/10/04 22:37:22.739 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 1
2016/10/04 22:37:22.739 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 = 1
2016/10/04 22:37:22.739 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 1
2016/10/04 22:37:22.739 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:22.739 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:22.739 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540

Luke
Re: [squid-users] Introducing delay to HTTP 407 responses
Eliezer,

Thank you for your reply. I tried the following:

> Hey Luke,
>
> Try to use the next line instead:
> external_acl_type delay ttl=1 negative_ttl=0 cache=0 %SRC %SRCPORT %URI /tmp/delay.pl
>
> And see what happens.

But it's not introducing a delay into the response. Running strace across the pid of each child helper doesn't show any activity across those processes either.

I also tried the approach suggested by Amos:

> The outcome of that was the ext_delayer_acl helper in Squid-3.5
> <http://www.squid-cache.org/Versions/v3/3.5/manuals/ext_delayer_acl.html>
> It works slightly differently to what was being discussed in the thread;
> see the man page for details on how to configure it.

Using the following config:

external_acl_type delay concurrency=10 children-max=2 children-startup=1 children-idle=1 cache=10 %URI /tmp/ext_delayer_acl -w 1000 -d
acl http-response-407 http_status 407
acl delay-1sec external delay
http_reply_access deny http-response-407 delay-1sec !all

Debug information from ext_delayer_acl is written to the cache log; I see the processes start up, but they are never hit with any requests by Squid. I also added %SRC %SRCPORT into the configuration, but that didn't seem to help either.

Would the developers be open to adding a configuration-based throttle to authentication responses, avoiding the need for an external helper? Or alternatively, is there another way to slow down auth responses? It's making up about 90% of the log volume (450,000 requests/hr) in badly affected sites at the moment.

Luke
Re: [squid-users] Question about the url rewrite before proxy out
> > If you input http://www.yahoo.com/page.html, this will be transformed
> > to http://192.168.1.1/www.yahoo.com/page.html.
>
> I got the impression that the OP wanted the rewrite to work the other way
> around.

My apologies, that does seem to be the case.

> Squid sees http://192.168.1.1/www.google.com and re-writes it to
> http://www.google.com
>
> > The helper just needs to print that out prepended by "OK rewrite-url=xxx".
> > More info at http://www.squid-cache.org/Doc/config/url_rewrite_program/
> >
> > Of course, you will need something listening on 192.168.1.1 (Apache,
> > nginx, whatever) that can deal with those rewritten requests.
>
> I got the impression that the OP wanted Squid to be listening on this
> address, doing the rewrites, and then fetching from standard origin
> servers.

Then not only the request needs to be rewritten, but probably the page content too. E.g., assets in the page will all be pointing at http://www.yahoo.com/image.png and will also need transforming to http://192.168.1.1/www.yahoo.com/image.png. If that is the case, then Squid doesn't seem like the right tool for the job. I think CGIProxy can do this (https://www.jmarshall.com/tools/cgiproxy/), or perhaps Apache's mod_proxy (https://httpd.apache.org/docs/current/mod/mod_proxy.html) would work.

Luke
Re: [squid-users] Question about the url rewrite before proxy out
> i am looking for a proxy which can "bounce" the request, which is not a > classic proxy. > > I want it works in this way. > > e.g. a proxy is running a 192.168.1.1 > and when i want to open http://www.yahoo.com, i just need call > http://192.168.1.1/www.yahoo.com > the proxy can pickup the the host "http://www.yahoo.com"; from the URI, and > retrieve the info for me, > so it need to get the new $host from $location, and remove the $host from the > $location before proxy pass it. > it is doable via squid? Yes it is doable (but unusual). First you need to tell Squid which requests should be rewritten, then send them to a rewrite program to be transformed. Identify the domains like this: acl rewrite-domains dstdomain .yahoo.com .google.com etc) Then set up a URL rewriting program, and only allow it to process requests matching the rewrite-domains ACL, like this: url_rewrite_program /tmp/rewrite-program.pl url_rewrite_extras "%>ru" url_rewrite_access allow rewrite-domains url_rewrite_access deny all The program itself can be anything. A very simple example in Perl might look like this: #!/usr/bin/perl use strict; $| = 1; # Enter loop while (my $thisline = <>) { my @parts = split(/\s+/, $thisline); my $url = $parts[0]; $url =~ s/http:\/\/(.*)/http:\/\/192.168.1.1\/$1/g; print "OK rewrite-url=\"$url\"\n"; } If you input http://www.yahoo.com/page.html, this will be transformed to http://192.168.1.1/www.google.com/page.html. The helper just needs to print that out prepended by "OK rewrite-url=xxx". More info at http://www.squid-cache.org/Doc/config/url_rewrite_program/ Of course, you will need something listening on 192.168.1.1 (Apache, nginx, whatever) that can deal with those rewritten requests. That is an unusual way of getting requests to 192.168.1.1 though, because you are effectively putting the hostname component into the URL then sending it to a web service and expecting it to deal with that. Another note. 
If you have a cache_peer defined, you might need some config to force rewritten requests to be sent to 192.168.1.1 and not your cache peer. In that case this should do the trick:

  acl rewrite-host dst 192.168.1.1
  always_direct allow rewrite-host

HtH.

Luke

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
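For comparison, the Perl helper above can be sketched equivalently in Python (a sketch, not from the original thread; the one-URL-per-line input and the "OK rewrite-url=..." reply follow the url_rewrite_program helper protocol described above):

```python
#!/usr/bin/env python3
"""Sketch of a url_rewrite helper equivalent to the Perl example above."""
import re
import sys

def rewrite(url):
    # Prefix the original host into the path of a request aimed at
    # 192.168.1.1, mirroring the Perl substitution
    # s/http:\/\/(.*)/http:\/\/192.168.1.1\/$1/
    return re.sub(r"^http://(.*)", r"http://192.168.1.1/\1", url)

def main():
    # Squid sends one request per line; the URL is the first
    # whitespace-separated field, any extras follow it.
    for line in sys.stdin:
        url = line.split()[0]
        print('OK rewrite-url="%s"' % rewrite(url), flush=True)

if __name__ == "__main__":
    main()
```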
[squid-users] Introducing delay to HTTP 407 responses
Hi Squid users,

Seeking advice on how to slow down 407 responses to broken Apple & MS clients, which seem to retry at very short intervals and quickly fill the access.log with garbage.

The problem is very similar to this:
http://www.squid-cache.org/mail-archive/squid-users/201404/0326.html

However, the config below doesn't seem to slow down the response:

  acl delaydomains dstdomain .live.net .apple.com
  acl authresponse http_status 407
  external_acl_type delay ttl=0 negative_ttl=0 cache=0 %SRC /tmp/delay.pl
  acl delay external delay
  http_reply_access deny delaydomains authresponse delay
  http_reply_access allow all

The helper is never asked by Squid to process the request. Just wondering if http_status ACLs can be used in http_reply_access?

My other thinking, if this isn't possible, was to mark 407 responses with clientside_tos so they could be delayed/throttled with tc or iptables. I.e.:

  acl authresponse http_status 407
  clientside_tos 0x20 authresponse

However, auth response packets don't get the desired TOS markings. Instead the following message appears in cache.log:

  2016/09/13 11:35:43 kid1| WARNING: authresponse ACL is used in context without an HTTP response. Assuming mismatch.

After reviewing http://lists.squid-cache.org/pipermail/squid-users/2016-May/010630.html it seems like this has cropped up before. The suggestion in that thread was to exclude 407 responses from the access log. Fortunately this works. But I'm wondering if there is a way to introduce delay into the 407 response itself? Partly to minimise load associated with serving broken clients, and also to maintain logging of actual intrusion attempts.

Any suggestions?

Luke
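For reference, a delay helper of the kind referred to as /tmp/delay.pl above can be approximated like this (a Python sketch of the same idea, not the original script; note that each lookup blocks the helper for the whole delay, so in practice several helper children would be needed):

```python
#!/usr/bin/env python3
"""Sketch of an external ACL helper that stalls before answering OK."""
import sys
import time

DELAY_SECONDS = 2  # assumption: tune to taste

def handle_line(line, delay=DELAY_SECONDS):
    # Squid sends one lookup per line (here just the %SRC address).
    # We simply sleep, then admit the request, so each lookup
    # throttles the client before the response is delivered.
    time.sleep(delay)
    return "OK"

def main():
    for line in sys.stdin:
        print(handle_line(line.rstrip("\n")), flush=True)

if __name__ == "__main__":
    main()
```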
[squid-users] Landing- Disclaimer-Page for an Exchange 2013 Reverse Proxy
Hi,

I've installed a Squid reverse proxy for an MS-Exchange test installation to reach OWA from the outside. My current environment is as follows:

- Squid version 3.4.8 with SSL on a Debian Jessie (self-compiled)
- The Squid and the Exchange system are in the internal network with private IP addresses (same network segment)
- Access to the Squid system is realized by port forwarding (tcp/80, tcp/443, tcp/22) from a public IP address
- The certificate used is from Let's Encrypt (SAN certificate, used by both servers)

Current status:

- Pre-login works
- Outlook access to OWA works (other protocols not tested yet)
- https://portal.xxx.de doesn't work (forwarding denied), which is quite normal because there is no ACL for it

How can I achieve the following:

1) Access to https://portal.xxx.de ends up on a kind of "landing page" with instructions on how to use the Exchange test installation (the web server can be the IIS on the Exchange system, Apache on the Squid system, or a third system)?
2) Is there a way to integrate the initial password dialog into that web page?
Kind regards
Bob

Squid configuration:

  # Hostname
  visible_hostname portal.xxx.de

  # External access
  https_port 192.168.xxx.21:443 accel cert=/root/letsencrypt/certs/xxx.de/cert.pem key=/root/letsencrypt/certs/xxx.de/privkey.pem cafile=/root/letsencrypt/certs/xxx.de/fullchain.pem defaultsite=portal.xxx.de

  # Internal server
  cache_peer 192.168.xxx.20 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER sslcert=/root/letsencrypt/certs/xxx.de/cert.pem sslkey=/root/letsencrypt/certs/xxx.de/privkey.pem name=ExchangeServer

  # Access to the following addresses is allowed
  acl EXCH url_regex -i ^https://portal.xxx.de$
  acl EXCH url_regex -i ^https://portal.xxx.de/owa.*$
  acl EXCH url_regex -i ^https://portal.xxx.de/Microsoft-Server-ActiveSync.*$
  acl EXCH url_regex -i ^https://portal.xxx.de/ews.*$
  acl EXCH url_regex -i ^https://portal.xxx.de/autodiscover.*$
  acl EXCH url_regex -i ^https://portal.xxx.de/rpc/.*$

  # Auth
  auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwd
  auth_param basic children 5
  auth_param basic realm Squid proxy-caching web server
  auth_param basic credentialsttl 2 hours
  auth_param basic casesensitive on

  # Rules
  acl ncsa_users proxy_auth REQUIRED
  http_access allow ncsa_users
  cache_peer_access ExchangeServer allow EXCH
  never_direct allow EXCH
  http_access allow EXCH
  http_access deny all
  miss_access allow EXCH
  miss_access deny all

  # Logging
  access_log /var/log/squid3/access.log squid
  debug_options ALL,9

  cache_mgr mailto:x...@xxx.de
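One possible direction for question 1 (an untested sketch, not from the thread): Squid's deny_info directive can redirect a denied request to an arbitrary URL, so a hit on the bare portal root could be bounced to a landing page hosted on any of the web servers mentioned above. The ACL name portal_root and the landing URL are hypothetical:

```
# Hypothetical sketch: redirect requests for the bare portal URL
# to a separately hosted landing page instead of denying them.
acl portal_root url_regex -i ^https://portal.xxx.de/?$
deny_info 302:https://landing.xxx.de/ portal_root
http_access deny portal_root
```

deny_info is matched against the last ACL name in the denying http_access rule, so portal_root must come last on that line.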
Re: [squid-users] Debugging http_access and http_reply_access
> debug_options 11,2 28,3
>
> 11,2 gives you the HTTP messages.
> 28,3 gives you the ACL processing action and results.
>
> It's a bit like quantum mechanics at the moment though. You can know
> the request message details, OR you can know the ACL matching. Not both
> at once.
>
> > Are there any other debug sections which would be more appropriate
> > to the task? If not, is there another more suitable approach?
>
> Why exactly are you doing this? What are you trying to achieve with it?

The intent is to record which rule matched each request, for accounting & historical purposes, and also to help less technical administrators find (and resolve) unexpected behaviour in their http_access policies. Ideally, it would be great to be able to get this information through observation alone, without making changes to a live config.

Luke
[squid-users] Debugging http_access and http_reply_access
Hi Squid users,

I'm seeking some guidance regarding the best way to debug the http_access and http_reply_access configuration statements on a moderately busy Squid 3.5 cache.

In cases where a number (say, 5 or more) of http_access lines are present, the goal is to find which configuration statement (if any) was found to match a given request, then write this information to a log for further processing. Example:

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager
  http_access allow localhost
  http_access deny out_working_hours
  http_access allow working_hours whitelist
  http_access allow network
  http_access deny all

Let's assume each of those lines has an index (0, 1, 2 through 8 in the example above). Is there any way to find which one matched?

Explored so far: using debug_options to look at sections 33 (Client Side Routines), 88 (Client-side Reply Routines) and 85 (Client Side Request Routines) returns useful information, but it's hard to use it to identify (programmatically) which log entries relate to which request on a busy cache. Activating debug logging on a busy cache also doesn't seem like the right approach.

Also explored: creating a pair of logformat and access_log statements corresponding to each http_access and http_reply_access statement, with the same ACL conditions as their policy counterparts. The idea is to create a log entry for each http_access and http_reply_access statement, to which Squid will write matching requests. This approach only partially achieves the goal because, although it collects matching requests, it doesn't take into account the sequential nature of policy rule processing. E.g., in the example above, even though a request to manager may be denied by rule 3, it might still have matched the conditions associated with rule 7, and thus be written to that log, even though it never hit that policy rule.
Are there any other debug sections which would be more appropriate to the task? If not, is there another more suitable approach?

Luke
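For what it's worth, the section 28 trace suggested in the reply above can be mined after the fact: the ACL checklist code logs a line containing checking '<directive> <rule>' as each rule is evaluated, so the last such line for a transaction names the rule that terminated the check. A rough Python sketch follows; the exact cache.log line format varies between Squid versions, so treat the regex (and the sample lines in any test) as assumptions:

```python
#!/usr/bin/env python3
"""Rough sketch: extract the http_access / http_reply_access rules that a
section-28 (debug_options 28,3) cache.log excerpt shows being evaluated.
The log line format is an assumption and varies between Squid versions."""
import re
import sys

# Matches e.g. ... checking 'http_access deny all'
RULE_RE = re.compile(r"checking '(http_(?:reply_)?access [^']*)'")

def rules_checked(lines):
    """Return the rules seen in a log excerpt, in evaluation order;
    the last entry is the rule that terminated the check."""
    return [m.group(1)
            for line in lines
            for m in [RULE_RE.search(line)]
            if m]

if __name__ == "__main__":
    for rule in rules_checked(sys.stdin):
        print(rule)
```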