[squid-users] Any plan for an SSL bump mode ACL?
I’m trying to figure out if there’s a way to stop those 0-byte “peeked” requests from being processed by the rest of our external ACLs etc., by allowing them early in the transaction. Unfortunately there doesn’t seem to be a way to target just those with http_access (TAG_NONE isn’t an actual method, and there’s no ACL for the bump mode) without also targeting the spliced ones. Any ideas, denizens of the mailing list?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
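One possibility worth testing (a sketch only; it assumes the at_step ACL, available in Squid 3.5+, is usable in your http_access checks, and it may not separate peeked from spliced traffic the way the question asks):

```
# Hypothetical sketch: allow the CONNECT wrappers at SslBump step 1
# early, before the external ACLs run. "at_step" matches the current
# bumping step; whether later steps can distinguish peeked vs spliced
# traffic is exactly the open question in this thread.
acl bumpStep1 at_step SslBump1
http_access allow CONNECT bumpStep1
```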
Re: [squid-users] peek all step with bump instance of proxy
On 27/08/2015 10:50 p.m., john jacob wrote:
> Hi All,
>
> I am trying to configure a squid filtering instance which serves both
> proxy and intercepted (transparent) connections. Filtering is
> accomplished by a request eCAP adapter which has something like:
>
>     if (IsDenied() && RequestMethod == "CONNECT") {
>         // Gives TAG_NONE/403 in the access log
>         hostx->blockVirgin();
>         return;
>     }
>
> I also have a requirement to bump a particular domain and peek at other
> https connections for intercepted mode. So there are 3 possible
> outcomes/filtering decisions for any https connection hitting this
> server:
>
> 1) Bumped and allowed access
> 2) Non-bumped and allowed access
> 3) Non-bumped and denied access, by the code given above in the eCAP adapter
>
> My squid (tried with v3.5.6 and v3.5.7-20150823-r13895, same outcome)
> config looks like below:
>
>     # TAG: ssl_bump
>     ssl_bump server-first
>     ssl_bump peek all
>     ssl_bump splice all

Don't use those three options together. The "server-first" option is a backward-compatibility option only. The "bump" action is usually best to use in its place, and this config looks like a case where that is true.

>     http_port : ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=
>
>     https_port : intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=
>
> Things are fine with the intercepted connections (for all 3 scenarios).
> But with the proxy connections I am encountering some peculiar
> behaviour in scenario 3 (i.e. when a non-bumped https CONNECT is denied
> by eCAP). Instead of terminating the connection, it is logged as
> TAG_NONE/200 in the access log, gets bumped (a dynamic certificate is
> generated) and then gets terminated. The behaviour disappears and
> things work if I comment out the "peek all" line.
>
> I am not sure if this is a bug or expected behaviour.
Certain very popular browsers refuse to show the user any error message output by a proxy in response to a CONNECT message; just a bland "connection failed" message of their own. To get around that we have a nasty hack that delays emitting error pages until after the bumping has been done. I think that is what you are seeing happen. There may well be bugs in the hack itself, but the behaviours you describe all seem like logical side effects given what it does:

- it delays the 403 output, so a 200 is logged for the CONNECT, then
- bumping starts and does the peek, then
- splicing starts and detects a 403 waiting to be sent back,
- but that is impossible in splice, so it terminates.

There is also a good possibility Squid is sending the 403 as plain text instead of terminating or delivering TLS handshake data, but the client/browser simply not showing it to you because, well, browsers.

> Of course the proxy bumped connection works fine if I selectively peek
> for intercepted connections (ssl_bump peek ), but in this case I am
> getting duplicate entries in the access log file (i.e. 2 CONNECT log
> messages for each https CONNECT) for intercepted-mode https
> connections. The same goes for other ACL combinations like the below,
> which also result in duplicated log messages:
>
>     ssl_bump server-first
>     ssl_bump splice intercept/transparent ip :port>
>     ssl_bump peek all
>     ssl_bump splice all
>
> Regards,
> John

Amos
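For reference, a minimal sketch of the modern-style rules Amos is suggesting (the bumpedSites ACL and its .example.com value are placeholders, and at_step requires Squid 3.5 or later):

```
# Sketch only: peek at the TLS client hello at step 1 (to obtain SNI),
# then bump the chosen domain and splice everything else.
acl bumpStep1 at_step SslBump1
acl bumpedSites ssl::server_name .example.com   # placeholder domain
ssl_bump peek bumpStep1
ssl_bump bump bumpedSites
ssl_bump splice all
```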
Re: [squid-users] peek all step with bump instance of proxy
On 08/27/2015 04:50 AM, john jacob wrote:
> with the proxy connections I am encountering some peculiar behavior
> with scenario 3 (ie when a non bumped https CONNECT is denied by eCAP).
> Instead of terminating the connection, it is logged as TAG_NONE/200 in
> the access log and getting bumped (a dynamic certificate is generated)
> and then getting terminated. The behavior disappears and works if I
> comment the "peek all" line.
>
> I am not sure if this is a bug or an expected behavior.

Most of the stuff you are describing matches my expectations, but there is not enough information in a couple of cases:

* "getting bumped and then getting terminated". Terminated without serving an error page to the bumped client? Is there a request after CONNECT? Is the client happy with the bumped connection? How many CONNECT requests does your eCAP adapter see (and deny) in this case?

* "works if I comment the 'peek all' line". Works how? I would expect Squid to bump and then serve an error page to the user on a bumped connection. How many CONNECT requests does your eCAP adapter see (and deny) in this case?

Here is some background so that you can debug this further:

Denying a CONNECT request implies serving an error page to the user. However, the user will not see that error page if Squid sends it as a response to the CONNECT request (due to lazy browser security policies; nothing to do with Squid). The only way to show that error page to the user is to bump the client connection and then serve the error page over that bumped connection, in response to the first bumped request.

If your eCAP adapter denies CONNECT during an SslBump step, Squid should bump the client connection (200 OK) and serve an error page to the user in response to the first request on that bumped connection, if any (TAG_NONE/403?).

You can find more information about this behaviour at http://bazaar.launchpad.net/~squid/squid/trunk/revision/13759 but some of that old info is probably out of date by now.
> Of course the proxy bumped connection works fine if I selectively peek
> for intercepted connections (ssl_bump peek mode>), but in this case I
> am getting duplicate entries in the access log file (ie 2 CONNECT log
> messages for each https CONNECT) for intercepted mode https
> connections.

This is expected if you peek at step 1. The second CONNECT should have SNI information (but it often does not; it is complicated and there are bugs/missing features in that area of the code). If you do not need SNI, you can make all decisions during the first CONNECT.

> The same goes for other ACL combinations like the below resulting in
> duplicated log messages
>
>     ssl_bump server-first
>     ssl_bump splice intercept/transparent ip :port>
>     ssl_bump peek all
>     ssl_bump splice all

In general, it is a bad idea to combine legacy (server-first) and modern (peek/stare/splice/bump/terminate) rules. If at all possible, use modern rules.

Alex.
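Making all decisions during the first CONNECT, as Alex describes, could look roughly like this (a sketch; the localport value 3129 is a placeholder for the actual intercept https_port):

```
# Sketch only: splice intercepted traffic outright at step 1, with no
# peek, so only one CONNECT is logged per intercepted connection.
acl interceptedTraffic localport 3129   # placeholder intercept port
ssl_bump splice interceptedTraffic
ssl_bump peek all
ssl_bump splice all
```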
[squid-users] cpu high 100%
Hi. I don't know if anyone else is facing high CPU after 6 or more hours. My server is a Xeon quad, 2 CPUs at 2.2 GHz, Ubuntu 14 latest. I never had this problem with v2.7: in normal use CPU got up to 20%, and at peak time no more than 40-50%.

Squid Object Cache: Version 3.5.6
Build Info:
Service Name: squid
Start Time: Thu, 27 Aug 2015 08:15:34 GMT
Current Time: Thu, 27 Aug 2015 22:08:38 GMT
Connection information for squid:
 Number of clients accessing cache: (client_db off)
 Number of HTTP requests received: 632823
 Number of ICP messages received: 0
 Number of ICP messages sent: 0
 Number of queued ICP replies: 0
 Number of HTCP messages received: 0
 Number of HTCP messages sent: 0
 Request failure ratio: 0.00
 Average HTTP requests per minute since start: 759.6
 Average ICP messages per minute since start: 0.0
 Select loop called: 27134859 times, 1.842 ms avg
Cache information for squid:
 Hits as % of all requests: 5min: 16.5%, 60min: 14.1%
 Hits as % of bytes sent: 5min: -6.1%, 60min: -52.5%
 Memory hits as % of hit requests: 5min: 42.8%, 60min: 34.1%
 Disk hits as % of hit requests: 5min: 29.4%, 60min: 42.5%
 Storage Swap size: 1005260424 KB
 Storage Swap capacity: 65.4% used, 34.6% free
 Storage Mem size: 8953296 KB
 Storage Mem capacity: 71.2% used, 28.8% free
 Mean Object Size: 131.49 KB
 Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
 HTTP Requests (All): 0.27332 0.42149
 Cache Misses: 0.32154 0.46965
 Cache Hits: 0.01469 0.02451
 Near Hits: 0.24524 0.37825
 Not-Modified Replies: 0.0 0.0
 DNS Lookups: 0.08717 0.07968
 ICP Queries: 0.0 0.0
Resource usage for squid:
 UP Time: 49984.156 seconds
 CPU Time: 6753.848 seconds
 CPU Usage: 13.51%
 CPU Usage, 5 minute avg: 60.63%
 CPU Usage, 60 minute avg: 51.62%
 Maximum Resident Size: 46576384 KB
 Page faults with physical i/o: 1
Memory accounted for:
 Total accounted: -1789337 KB
 memPoolAlloc calls: 455
 memPoolFree calls: 223754685
File descriptor usage for squid:
 Maximum number of file descriptors: 32768
 Largest file desc currently in use: 428
 Number of file desc currently in use: 182
 Files queued for open: 0
 Available number of file descriptors: 32586
 Reserved number of file descriptors: 100
 Store Disk files open: 5
Internal Data Structures:
 7645004 StoreEntries
 199880 StoreEntries with MemObjects
 199854 Hot Object Cache Items
 7644926 on-disk objects

Then it goes up higher:

Start Time: Thu, 27 Aug 2015 08:15:34 GMT
Current Time: Thu, 27 Aug 2015 22:27:53 GMT
Connection information for squid:
 Number of clients accessing cache: (client_db off)
 Number of HTTP requests received: 644780
 Number of ICP messages received: 0
 Number of ICP messages sent: 0
 Number of queued ICP replies: 0
 Number of HTCP messages received: 0
 Number of HTCP messages sent: 0
 Request failure ratio: 0.00
 Average HTTP requests per minute since start: 756.5
 Average ICP messages per minute since start: 0.0
 Select loop called: 27407388 times, 1.866 ms avg
Cache information for squid:
 Hits as % of all requests: 5min: 22.6%, 60min: 16.3%
 Hits as % of bytes sent: 5min: -19.7%, 60min: -26.7%
 Memory hits as % of hit requests: 5min: 53.7%, 60min: 39.0%
 Disk hits as % of hit requests: 5min: 28.9%, 60min: 40.2%
 Storage Swap size: 100556 KB
 Storage Swap capacity: 65.5% used, 34.5% free
 Storage Mem size: 9147656 KB
 Storage Mem capacity: 72.7% used, 27.3% free
 Mean Object Size: 131.47 KB
 Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
 HTTP Requests (All): 0.33943 0.32154
 Cache Misses: 0.39928 0.37825
 Cache Hits: 0.03829 0.02899
 Near Hits: 0.27332 0.30459
 Not-Modified Replies: 0.0 0.0
 DNS Lookups: 0.09535 0.08334
 ICP Queries: 0.0 0.0
Resource usage for squid:
 UP Time: 51139.181 seconds
 CPU Time: 7501.588 seconds
 CPU Usage: 14.67%
 CPU Usage, 5 minute avg: 67.96%
 CPU Usage, 60 minute avg: 58.95%
 Maximum Resident Size: 47446256 KB
 Page faults with physical i/o: 1
Memory accounted for:
Re: [squid-users] completely transparent Squid
On 28/08/2015 9:43 a.m., kuntal_ba...@bnz.co.nz wrote:
> Could you please un-subscribe me ?

Hi Kuntal,

You need to begin the process by entering the email address you want unsubscribed into the unsubscribe form at the bottom of the listinfo page whose link is here:

> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

(I have just done this step for you.) Then you need to check for the confirmation email that should arrive and follow the instructions there. If you do not do that step yourself the address will remain subscribed.

If you have email forwarded from another address in your company than the one you are posting from, then you need to arrange for that address either to stop delivering to you, or to be unsubscribed using the above process as well.

> --
> View this message in context:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/completely-transparent-Squid-tp4672904.html
> Sent from the Squid - Users mailing list archive at Nabble.com.

If that was you receiving this through Nabble, gmane or another email relay service, then you may need to unsubscribe from there separately. They are independent services that only relay our list messages.

Hope this helps.

Amos Jeffries
Squid Software Foundation
Re: [squid-users] completely transparent Squid
Could you please un-subscribe me ?

Cheers,
Kuntal

Senior Infrastructure Architecture and Design Specialist
Infrastructure Architecture and Design
Bank of New Zealand
DDI: 04-474 6722 Mobile: 021-2408034

"Success is not final, failure is not fatal: it is the courage to continue that counts." - Winston Churchill

From: Arkantos <221...@gmail.com>
To: squid-users@lists.squid-cache.org
Date: 28/08/2015 04:26 a.m.
Subject: [squid-users] completely transparent Squid
Sent by: "squid-users"

> hello everybody, my friend and I, happen to run the neighborhood cable
> and wifi network. [...]
Re: [squid-users] Zero Sized Reply
Solve what, exactly? If the site is broken, I think the only solution is for the site admin to fix the issue...

Eliezer

On 27/08/2015 22:51, Jorgeley Junior wrote:
> You're the man Amos!!! You're the man!!! Thanks!!! Thanks so much!!!
> That solved the problem, but I'm wondering: if it solved it just for
> this domain, can it happen again with other domains? Is there no way to
> solve future errors of this same type?
>
> 2015-08-27 16:03 GMT-03:00 Amos Jeffries:
>> On 28/08/2015 5:49 a.m., Jorgeley Junior wrote:
>>> Amos, thank you so much for attention, but sorry, I didn't understand what you said.
>>
>> Nevermind. The website code is broken.
>>
>> I have been looking into it from here using those request headers from your log.
>>
>> What I see happening is that the server starts responding. Then the PHP code it is running hangs for a very long time. If you wait long enough it will pop out part of a page and a PHP error message about its database connection script and some timeout.
>>
>> The best I could get was over a minute (78 seconds) of delay before anything at all happened. Usually a bit longer.
>>
>> I think something in your network is terminating the server connection after it takes too long. NAT and high-speed router systems tend to have a 30 second maximum wait between TCP packets before they close the connection.
>>
>> Either way the website server itself is very broken.
>>
>>> So, I tried to change the http for https and it showed the website, and I added the security exception for the untrusted certificate, but I really would like that squid didn't show the error. Why does http show the Zero Sized Reply and https not?
>>
>> Different protocols and ports.
>>
>> I still see the same delays, partial page and database errors when connecting with HTTPS. But I kept digging to see why you might be getting a page...
>>
>> It seems to be an IPv6 server sitting behind some form of gateway access network and only pretending to be IPv4-only.
>> When sending it an X-Forwarded-For header claiming to be an IPv6-enabled browser it seems to operate just fine.
>>
>> So, try adding this to your squid.conf:
>>
>>     acl magicXff dstdomain .grupoatuall.com.br
>>     request_header_access X-Forwarded-For deny magicXff
>>     request_header_replace X-Forwarded-For ::1
>>
>> Amos
Re: [squid-users] Zero Sized Reply
You're the man Amos!!! You're the man!!! Thanks!!! Thanks so much!!! That solved the problem, but I'm wondering: if it solved it just for this domain, can it happen again with other domains? Is there no way to solve future errors of this same type?

2015-08-27 16:03 GMT-03:00 Amos Jeffries:
> On 28/08/2015 5:49 a.m., Jorgeley Junior wrote:
>> Amos, thank you so much for attention, but sorry, I didn't understand
>> what you said.
>
> Nevermind. The website code is broken.
>
> I have been looking into it from here using those request headers from your log.
>
> What I see happening is that the server starts responding. Then the PHP code it is running hangs for a very long time. If you wait long enough it will pop out part of a page and a PHP error message about its database connection script and some timeout.
>
> The best I could get was over a minute (78 seconds) of delay before anything at all happened. Usually a bit longer.
>
> I think something in your network is terminating the server connection after it takes too long. NAT and high-speed router systems tend to have a 30 second maximum wait between TCP packets before they close the connection.
>
> Either way the website server itself is very broken.
>
>> So, I tried to change the http for https and it showed the website, and
>> I added the security exception for the untrusted certificate, but I
>> really would like that squid didn't show the error.
>> Why does http show the Zero Sized Reply and https not?
>
> Different protocols and ports.
>
> I still see the same delays, partial page and database errors when connecting with HTTPS. But I kept digging to see why you might be getting a page...
>
> It seems to be an IPv6 server sitting behind some form of gateway access network and only pretending to be IPv4-only. When sending it an X-Forwarded-For header claiming to be an IPv6-enabled browser it seems to operate just fine.
> So, try adding this to your squid.conf:
>
>     acl magicXff dstdomain .grupoatuall.com.br
>     request_header_access X-Forwarded-For deny magicXff
>     request_header_replace X-Forwarded-For ::1
>
> Amos
Re: [squid-users] Zero Sized Reply
On 28/08/2015 5:49 a.m., Jorgeley Junior wrote:
> Amos, thank you so much for attention, but sorry, I didn't understand
> what you said.

Nevermind. The website code is broken.

I have been looking into it from here using those request headers from your log.

What I see happening is that the server starts responding. Then the PHP code it is running hangs for a very long time. If you wait long enough it will pop out part of a page and a PHP error message about its database connection script and some timeout.

The best I could get was over a minute (78 seconds) of delay before anything at all happened. Usually a bit longer.

I think something in your network is terminating the server connection after it takes too long. NAT and high-speed router systems tend to have a 30 second maximum wait between TCP packets before they close the connection.

Either way the website server itself is very broken.

> So, I tried to change the http for https and it showed the website, and
> I added the security exception for the untrusted certificate, but I
> really would like that squid didn't show the error.
> Why does http show the Zero Sized Reply and https not?

Different protocols and ports.

I still see the same delays, partial page and database errors when connecting with HTTPS. But I kept digging to see why you might be getting a page...

It seems to be an IPv6 server sitting behind some form of gateway access network and only pretending to be IPv4-only. When sending it an X-Forwarded-For header claiming to be an IPv6-enabled browser it seems to operate just fine.

So, try adding this to your squid.conf:

    acl magicXff dstdomain .grupoatuall.com.br
    request_header_access X-Forwarded-For deny magicXff
    request_header_replace X-Forwarded-For ::1

Amos
[squid-users] Does anyone have a working Juniper SRX with tproxy squid?
I have been gathering information on different routing options for squid tproxy mode for quite some time. I have working settings for:

- Cisco
- Linux
- FreeBSD
- OpenBSD
- Mikrotik

The topology I have tested until now is at: http://ngtech.co.il/squidblocker/topology1.png

The edge router diverts traffic to the squid instances using routing policy. I have been reading about ways to make squid work with Juniper, but they all use intercept mode and not tproxy. A list of sources so far:

http://kb.juniper.net/InfoCenter/index?page=content&id=KB23300
https://andymillett.co.uk/2013/09/14/load-balancing-transparent-redirect-junos/
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21046
http://forums.juniper.net/t5/SRX-Services-Gateway/SRX650-routing-instance-not-working/m-p/54130
http://forums.juniper.net/t5/SRX-Services-Gateway/port-80-redirection-on-srx650-cluster/m-p/53010
http://serverfault.com/questions/442385/how-to-route-all-network-traffic-for-vlan-through-a-proxy-server-on-srx
https://forum.ivorde.com/squid-http-s-transparent-proxy-with-juniper-srx-part-3-t14191.html
http://kb.juniper.net/InfoCenter/index?page=content&id=KB23895
###END SOURCES

I know that on FreeBSD and Linux I must arrange to route each packet by itself, or to mark the connection. On Juniper SRX devices I do not know what to do exactly. I have seen an option to disable flowd, which follows the tcp/udp flows, and I am not sure whether that is a requirement.

My current vSRX settings are at: http://paste.ngtech.co.il/pdsltlobf

The connection is being redirected from the client to the proxy, and back from the proxy to the client. The issue is that the return traffic flowing in from the internet, which is supposed to be redirected into the proxy, is instead flowing straight back to the client. As I see it, there is a routing decision being made based on some routing table. The option I have seen mentioned here and there is to use a virtual router.
I am pretty sure there is some network admin here on the list who might have a clue about how to solve this reverse-path traffic flow routing issue.

Thanks,
Eliezer
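For comparison, the per-packet routing arrangement mentioned above for Linux is usually a policy-routing rule plus a TPROXY mangle chain, roughly along the lines of the Squid wiki recipe (a sketch; 3129 is a placeholder for the tproxy http_port):

```
# Route marked packets through a local lookup table so tproxy'd flows
# terminate on the squid box instead of being forwarded onward.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Mark packets belonging to existing tproxy sockets, and divert new
# port-80 flows to squid's tproxy port.
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
```

The reverse-path question in the thread is exactly what the `ip rule`/`ip route local` pair answers on Linux: return packets must be forced back through the proxy host rather than following the normal forwarding table.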
Re: [squid-users] Zero Sized Reply
On 28/08/2015 5:04 a.m., Jorgeley Junior wrote:
> more logs:

Aha! The first bit. What Squid sent to the server:

> 2015/08/27 11:43:31.301 kid1| http.cc(2217) sendRequest: HTTP Server local=192.168.25.2:43127 remote=200.98.190.9:80 FD 66 flags=1
> 2015/08/27 11:43:31.301 kid1| http.cc(2218) sendRequest: HTTP Server REQUEST:
> -
> GET / HTTP/1.1^M
> Host: www.grupoatuall.com.br^M
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:40.0) Gecko/20100101 Firefox/40.0^M
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
> Accept-Language: pt-BR,pt;q=0.8,en-US;q=0.5,en;q=0.3^M
> Accept-Encoding: gzip, deflate^M
> Via: 1.1 firewall (squid)^M
> X-Forwarded-For: 192.168.1.11^M
> Cache-Control: max-age=259200^M

Now, notice the FD number (66) on the line at the top there. Look for the very next set of headers with a matching set of local=, remote=, FD values but titled "HTTP Server RESPONSE". That is apparently the reply that failed.

* Compare the timestamps of the request/reply lines to see if the gap reminds you of any kind of timeout value you are aware of. Multiples of 30 sec or 5 min are common for timeout settings.

* Is anything visibly wrong about the reply headers?

Amos
Re: [squid-users] completely transparent Squid
On Thursday 27 Aug 2015 at 17:21, Arkantos wrote:
> the community is now wanting a caching server.
>
> i have zeroed in on CentOS+Squid+Webmin
> but we are unable to configure it as a "completely transparent cache"

If your community of users wants a caching proxy server, why make it transparent?

I'm assuming that you're in charge of both a DHCP server giving the users their IP addresses, and a local caching DNS server doing name resolution, so why not just implement PAC https://en.wikipedia.org/wiki/Proxy_auto-config and let the clients discover the caching proxy and then use it?

Even if you can't do this, why not set up a proxy (not in transparent mode), tell the users how to connect to it, and let them use it because they want to?

Configuring a proxy such as squid in transparent mode never works quite as well as configuring it for explicit mode (where the clients know they're talking to a proxy), so this would give you a better end result as well as being easier to set up.

Hope that helps,

Antony.

PS: If you feel you really do want to set up a transparent proxy, but have failed to get Squid to do what you need, please at least let us know what you have tried so far so we can help you deal with the problem. This means telling us:
- the network configuration, being especially clear about where the Squid machine is in the network setup
- the squid.conf you are trying to use (without comments or blank lines)
- how you are testing it and finding that it doesn't work
- what you are seeing (browser errors, log file messages, etc) to indicate that it doesn't work.

--
Atheism is a non-prophet-making organisation.

Please reply to the list; please *don't* CC me.
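A minimal PAC file along the lines Antony suggests could look like this (a sketch; the proxy address 192.168.0.1:3128 is a placeholder for wherever the squid box actually lives):

```javascript
// Sketch of a proxy auto-config (PAC) file. Browsers call
// FindProxyForURL() for each request and obey the returned directive.
function FindProxyForURL(url, host) {
  // Plain single-label hostnames and the 192.168.x LAN bypass the proxy.
  if (host === "localhost" ||
      host.indexOf("192.168.") === 0 ||
      host.indexOf(".") === -1) {
    return "DIRECT";
  }
  // Everything else goes via the cache, falling back to a direct
  // connection if the proxy is unreachable.
  return "PROXY 192.168.0.1:3128; DIRECT";
}
```

Serve the file with MIME type application/x-ns-proxy-autoconfig and point the browsers at its URL (or advertise it via DHCP/DNS for WPAD auto-discovery).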
Re: [squid-users] Zero Sized Reply
On 28/08/2015 4:25 a.m., Jorgeley Junior wrote:
> increasing the log level i got this:
> 2015/08/27 11:43:30.966 kid1| ipcache.cc(501) ipcache_nbgethostbyname: ipcache_nbgethostbyname: Name 'www.grupoatuall.com.br'.
> 2015/08/27 11:43:30.966 kid1| Address.cc(389) lookupHostIP: Given Non-IP 'www.grupoatuall.com.br': Name or service not known
> 2015/08/27 11:43:30.966 kid1| ipcache.cc(549) ipcache_nbgethostbyname: ipcache_nbgethostbyname: MISS for 'www.grupoatuall.com.br'
> Any other ideas???

That's not 11,2. The 11,2 level will output full HTTP protocol message headers in current Squid.

Amos
Re: [squid-users] Zero Sized Reply
Increasing the log level I got this:

2015/08/27 11:43:30.966 kid1| ipcache.cc(501) ipcache_nbgethostbyname: ipcache_nbgethostbyname: Name 'www.grupoatuall.com.br'.
2015/08/27 11:43:30.966 kid1| Address.cc(389) lookupHostIP: Given Non-IP 'www.grupoatuall.com.br': Name or service not known
2015/08/27 11:43:30.966 kid1| ipcache.cc(549) ipcache_nbgethostbyname: ipcache_nbgethostbyname: MISS for 'www.grupoatuall.com.br'

Any other ideas???

2015-08-27 13:01 GMT-03:00 Amos Jeffries:
> On 28/08/2015 2:42 a.m., Jorgeley Junior wrote:
>> Thanks Amos.
>> My squid is 3.5.6, so I can disregard the bug, right?
>
> Yes I believe so.
>
> Amos
[squid-users] completely transparent Squid
Hello everybody,

My friend and I happen to run the neighbourhood cable and wifi network. Costs are picked up by the users' community, and we get a salary for running around. We have about 35 users, we get around 60 Mb from the ISP, and we have deployed Inventum Unify Cloud MSC for user management (only for login, bandwidth control and URL logging).

The community now wants a caching server. I have zeroed in on CentOS+Squid+Webmin, but we are unable to configure it as a "completely transparent cache". Can anybody help us configure it? We can pay if wanted, but fees should not be astronomical. Please help.

Arkantos. 221184 at gmail

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/completely-transparent-Squid-tp4672904.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Zero Sized Reply
On 28/08/2015 2:42 a.m., Jorgeley Junior wrote:
> Thanks Amos.
> My squid is 3.5.6, so I can disregard the bug, right?

Yes, I believe so.

Amos
Re: [squid-users] Zero Sized Reply
Thanks Amos. My squid is 3.5.6, so I can disregard the bug, right? I'm very lost about this problem; any suggestion will be appreciated.

2015-08-27 3:04 GMT-03:00 Amos Jeffries:
> On 27/08/2015 7:48 a.m., Jorgeley Junior wrote:
>> Hi guys.
>> I'm having a weird problem, my squid is doing "ZERO SIZED REPLY" when I
>> try to connect to some addresses, like this in the log:
>> 2015/08/26 13:50:31.335 kid1| http.cc(1300) continueAfterParsingHeader: WARNING: HTTP: Invalid Response: No object data received for http://www.grupoatuall.com.br/ AKA www.grupoatuall.com.br/
>> 2015/08/26 13:50:31.335 kid1| store.cc(1755) reset: StoreEntry::reset: http://www.grupoatuall.com.br/
>> 2015/08/26 13:50:31.335 kid1| FwdState.cc(412) fail: ERR_ZERO_SIZE_OBJECT "Bad Gateway" http://www.grupoatuall.com.br/
>> Any ideas???
>
> The server connection got disconnected between sending Squid the reply
> headers and the message payload they were attached to.
>
> If you have a Squid between 3.2.0 and 3.5.5 (inclusive) which is
> processing SSL-bump, NTLM or Negotiate auth (even just relaying those
> in www-auth form), please upgrade. It is probably bug 3329 related.
>
> If you have a more current Squid, "debug_options 11,2" should show you
> the HTTP headers going through, to eyeball whether they had any kind of
> fatal syntax problem that would make Squid abandon the connection.
>
> Otherwise it would seem to be the server disconnecting. There can be a
> lot of reasons for that.
>
> Amos
[squid-users] peek all step with bump instance of proxy
Hi All,

I am trying to configure a squid filtering instance which serves both proxy and intercepted (transparent) connections. Filtering is accomplished by a request eCAP adapter which has something like:

    if (IsDenied() && RequestMethod == "CONNECT") {
        // Gives TAG_NONE/403 in the access log
        hostx->blockVirgin();
        return;
    }

I also have a requirement to bump a particular domain and peek at other https connections for intercepted mode. So there are 3 possible outcomes/filtering decisions for any https connection hitting this server:

1) Bumped and allowed access
2) Non-bumped and allowed access
3) Non-bumped and denied access, by the code given above in the eCAP adapter

My squid config (tried with v3.5.6 and v3.5.7-20150823-r13895, same outcome) looks like below:

    # TAG: ssl_bump
    ssl_bump server-first
    ssl_bump peek all
    ssl_bump splice all

    http_port : ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=

    https_port : intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=

Things are fine with the intercepted connections (for all 3 scenarios). But with the proxy connections I am encountering some peculiar behaviour in scenario 3 (i.e. when a non-bumped https CONNECT is denied by eCAP). Instead of terminating the connection, it is logged as TAG_NONE/200 in the access log, gets bumped (a dynamic certificate is generated) and then gets terminated. The behaviour disappears and things work if I comment out the "peek all" line.

I am not sure if this is a bug or expected behaviour.
Of course the proxy bumped connection works fine if I selectively peek for intercepted connections (ssl_bump peek ), but in this case I am getting duplicate entries in the access log file (i.e. 2 CONNECT log messages for each https CONNECT) for intercepted-mode https connections. The same goes for other ACL combinations, like the one below, which also results in duplicated log messages:

    ssl_bump server-first
    ssl_bump splice
    ssl_bump peek all
    ssl_bump splice all

Regards,
John

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
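[Editor's note: following Amos's advice in this thread — "server-first" is a backward-compatibility action and "bump" is usually the better choice — the rule set the poster seems to want could be sketched like this. The ACL names and the domain are hypothetical placeholders, and this is untested:]

```
# Sketch only -- acl names and domain are placeholders, not from the thread.
acl step1 at_step SslBump1
acl bumpedSites ssl::server_name .example.com

ssl_bump peek step1          # read the TLS client SNI first
ssl_bump bump bumpedSites    # decrypt only the selected domain
ssl_bump splice all          # tunnel everything else untouched
```

Peeking at step 1 keeps the client SNI available for the `bumpedSites` decision, while avoiding a blanket "peek all" that constrains what Squid may do at later steps.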
[squid-users] High-Availability in Squid
Hi all,

Bit of a strange one, but I'm wondering if it's possible to have squid redirect a site to a secondary backend server if the primary is down. I have been looking into this but haven't seen much similar to it. Currently my setup is along the lines of:

    Client -> Squid -> Backend1

but in the event that Backend1 is down, the following should happen:

    Client -> Squid -> Backend2

Is squid capable of monitoring connections to a peer, or of redirecting based on an ACL looking for some HTTP error code?

Thanks.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/High-Availability-in-Squid-tp4672899.html
Sent from the Squid - Users mailing list archive at Nabble.com.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
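[Editor's note: Squid's cache_peer machinery can cover this case — it marks a parent dead after repeated connection failures and then selects the next usable parent. A minimal reverse-proxy sketch, with hypothetical hostnames; peer-selection behaviour has several knobs, so treat this as a starting point rather than a tested config:]

```
# Failover sketch -- hostnames are placeholders.
# With no-query peers, Squid generally prefers the first alive parent
# listed, so order expresses primary vs backup here.
cache_peer backend1.example.com parent 80 0 no-query originserver name=primary
cache_peer backend2.example.com parent 80 0 no-query originserver name=backup

cache_peer_access primary allow all
cache_peer_access backup allow all
never_direct allow all          # force all traffic through the parents
```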
Re: [squid-users] Squid and compression
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

I agree in general, but there are some considerations.

1. Compressed content almost always becomes unique, which leads to a
dramatic decrease in the cache hit ratio.

2. Compression has effectively become a de facto standard on the Internet.

3. The ecap-gzip project has looked dead for over three years, like
squidguard. There are no active maintainers, and we have to dig in the
sources ourselves. The Squid team does not actually confirm or deny the
solution's compatibility, while independent tests confirm compatibility
and show good results.

4. The Squid-3 tree actually includes a third-party utility, purge. What
prevents including ecap-gzip directly in the code the same way, without
going through eCAP? This is not exotic functionality demanded by a bunch
of geeks; it is functionality with the widest possible distribution -
more than 90% of servers on the Internet use compression. This is not a
user's whim; it is the servers' default behaviour. And I believe a
caching proxy should by default use everything that is part of the
standard, to obtain the highest possible degree of caching.

WBR, Yuri

PS. Amos, I generally understand when we talk about functionality which
"will never be implemented because we do not want to do that". But you
will agree that my arguments are substantial.

27.08.15 9:49, Amos Jeffries wrote:
> On 27/08/2015 8:50 a.m., Yuri Voinov wrote:
>>
>> Btw,
>>
>> when will Squid directly support gzip/deflate compression itself?
>
> That's a tough question. "When someone does it." is the sadly true cliche.
>
> Transfer-Encoding with gzip is what Squid as a proxy is actually
> expected to do by the protocol. But neither Squid nor most other
> software implement it, so it has not got much demand. I'm working on it
> as a hobby task and a favour for a customer who can't get signoff on
> big costs for such a low-gain feature. So small steps at a time, and
> still a ways off at this rate.
> (Exactly *when* is sponsor dependent. Anyone want to front up a few
> weeks or months of full-time developer paycheck to see it happen by Jan
> 2016 or some deadline like that?)
>
> But what most of you are talking about with this gzip question is
> actually Content-Encoding: gzip as used by browsers and origin servers.
> Recoding that on the fly is content adaptation. An eCAP service plugin
> already exists and was the right way to go IMHO. I'm not sure what
> happened to the plugin's author. There's still lots of optimization
> work to be done in that area.
>
> Normalizing the Accept headers before Vary processing is a simpler
> change. But again it needs someone interested in doing it, and is
> possibly better suited to the eCAP adapter doing it in prep for its
> reply transformations. This Vary stuff does have a fair few bugs and
> missing bits, so it is not quite clear sailing or it would have
> happened years ago.
>
> So there is lots to be done. But nobody with money has enough interest
> right now in seeing it happen. What's the phrase, "free as in freedom,
> not beer".
>
> Amos

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJV3tzpAAoJENNXIZxhPexGofcIAMUA+huUxU4q+x0Q4zWQu6pf
Oy/5+xzAyYIpTwV4yptmuvn9sjn6dkeF5jfNnkERdvNvS/jWU3dVxCs5CkpWnmV8
guKCycyh+dDLsDisp2+xi46UnZQNNcpvpJgxnjNK84Mft44SNtHPX4+upXi9B276
UjXwjkOjhcBll+kRiNJKlYMyd9p/5parOG01SWPXhUBiI0ON0QS5mQ8cJdyA6Dsa
quxv8iL/3BCfT71rEgOYSHYb5JmZIBGtIIb4oVAtytBgPOTCJCJZOf8CSLd8x33t
GlfhV9nocAFjndQ1N2tSPyQ4EKQEmmLsHlHlnyGgSPMdx23et4CdMoaRh0tp49c=
=8Dx/
-----END PGP SIGNATURE-----

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
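[Editor's note: for anyone wanting to try the eCAP compression adapter being discussed, wiring an eCAP service into Squid looks roughly like the sketch below. The module path and service URI are assumptions — they depend on where the adapter is installed and the URI it registers — so check the adapter's own README:]

```
# eCAP response-adaptation sketch -- module path and service URI are assumptions.
loadable_modules /usr/local/lib/ecap_adapter_gzip.so
ecap_enable on
ecap_service gzip_service respmod_precache uri=ecap://e-cap.org/ecap/services/gzip
adaptation_access gzip_service allow all
```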