Re: [squid-users] SQL DB squid.conf backend, who was it that asked about it?
Hi Marcelo,

Is this going to be released as free and open-source software, or is it a closed project? If the former, then I might be able to help! While I wouldn't call myself a squid expert, I have to admit I have some knowledge of it. And I'm also from Brazil, I noticed your .com.br email address!

On 10/08/2022 13:25, marcelorodr...@graminsta.com.br wrote: Hi Amos, it was me indeed. We have developed a Squid-based PHP application to create VPSs and deliver proxies via a web panel. It is still in development, but phase 1 is already working: SQL user management, VPS creation and squid.conf auto-configuration. We are heading into phase 2, to use cache peers and IPv4/IPv6 routing depending on the source. squid.conf got so complex at this point that it's getting very hard to implement phase 2. Lack of deep squid knowledge is still our weak spot.

-- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br

___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
Re: [squid-users] Squid caching webpages now days?
On 01/08/2021 21:01, Amos Jeffries wrote: Leonardo, it sounds like your decades-ago decision was made before squid gained full HTTP/1.1 caching ability. 1.0-only abilities are almost useless today. Are you at least still using the memory cache? That is squid configured without cache_dir, but also without a "cache deny" rule.

Hi Amos,

You're spot on, I clearly remember having decided that (to stop caching) before the full HTTP/1.1 support days. I haven't actually tried the memory cache for a while, but not because it wasn't working as expected or wasn't effective; it's mainly because I'm not managing services on networks large enough for caching to bring real benefits.

-- Minha armadilha de SPAM, NÃO mandem email gertru...@solutti.com.br My SPAMTRAP, do not email it
Re: [squid-users] Squid caching webpages now days?
On 31/07/2021 22:48, Periko Support wrote: Hello guys. With today's ISP speeds increasing, is squid caching (caching web pages) still a good option nowadays? I have some customers that want to set up a cache server, but I have doubts about how much traffic will be saved, with most web sites running under https. I use squid+sg acl features.

But for me, caching is not a bandwidth-saving tool anymore. Of course, my experience is just MY experience and others might have completely different ones :)

Speaking for myself, and for some small to medium-sized customer networks I manage, caching has been disabled for more than a decade now. Squid is still VERY useful for applying controls and logging, but not for caching.
Re: [squid-users] issues with sslbump and "Host header forgery detected" warnings
On 07/11/2020 22:19, Eliezer Croitor wrote: Hey Leonardo, I assume the best solution for you is a simple SNI proxy. Squid does that too, and you can try to debug this issue to make sure you understand what is wrong. It clearly states that Squid doesn't see this specific address, local=216.58.222.106:443, as the "real" destination address of the domain chromesyncpasswords-pa.googleapis.com:443. Maybe Alex or Amos remember the exact and relevant debug_options: https://wiki.squid-cache.org/KnowledgeBase/DebugSections I assume section 78 would be of help. "debug_options ALL,1 78,3" is probably enough to discover what the DNS responses are and where they come from. On what OS are you running this Squid?

Hi Eliezer,

I have already tracked down the DNS side and I can confirm that squid is resolving to a different IP address than the client is, despite both using the same DNS server. It only happens for hosts with multiple A records, or for CDN hostnames that change IP very often (every 10 seconds, for example). It's not a bug in that regard, absolutely not: the client connecting to one specific IP address while squid sees another IP for the hostname caught in the TLS transaction is real.

I'm running on CentOS 8 ... and after these findings, I'm starting to realize that doing this kind of interception, even without the full-decrypt part, is not trivial at all, despite working flawlessly (and very easily) for "regular" hostnames that translate to a single IP and never change it. I will study this a little more. Thanks for your observations and recommendations!
Re: [squid-users] issues with sslbump and "Host header forgery detected" warnings
On 07/11/2020 08:42, Amos Jeffries wrote: All we can do is minimize the occurrences (sometimes not very much). This wiki page has all the details of why, and workarounds: https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery

Thanks Amos, I had already found that page and it has very good information on the subject. I also found an old thread of yours discussing the security concerns of bypassing those checks; very good information, thanks so much :)
[squid-users] issues with sslbump and "Host header forgery detected" warnings
Hello Everyone,

I'm trying to set up sslbump for the first time (on squid-4.13) and, at first, things seem to be working. After taking some time to understand the new terms (splice, bump, stare, etc), I seem to have gotten things somehow working.

Actually, I'm NOT looking to completely bump (and decrypt) the connections. During my lab studies, I found out that simply splicing the connections is enough for me. I just want to intercept https connections and have them logged, just the hostname, and that seems to be achievable without even installing my certificates on the clients, as I'm not changing anything, just taking a look at the SNI values of the connection. The connection itself remains end-to-end protected, and that's fine by me. I just want to have things logged. And that's working just fine.

However, some connections are failing with "Host header forgery detected" warnings. Example:

2020/11/06 18:04:21 kid1| SECURITY ALERT: Host header forgery detected on local=216.58.222.106:443 remote=10.4.1.123:39994 FD 73 flags=33 (local IP does not match any domain IP)
2020/11/06 18:04:21 kid1| SECURITY ALERT: on URL: chromesyncpasswords-pa.googleapis.com:443

and usually a NONE/409 (Conflict) log entry is generated for those. Refreshing once or twice will eventually make it work. I have found several discussions on this, and I can confirm it happens for hostnames that resolve to several different IPs, or hostnames that somehow keep changing IPs (CDNs or something like that). Clients are already using the same DNS server as the squid box, as recommended, but the problem is still happening quite a lot. For regular hostnames that translate to a single IP address, things are 100% working.

Questions:
- without using WPAD and without configuring a proxy on the client devices, is this somehow "fixable"? The same DNS server is already being used ...
- is there any chance the NONE/409 (Conflict) logs I'm seeing are not related to this? Maybe these are just WARNINGs and not ERRORs, or do they really cause the intercepted connection to fail?
- any other hint on this one, without having to set a proxy on the clients in any way? I just want to have the hostnames (and traffic generated) logged; no need for full decrypt (bumping) of the connections.

Thanks !!!
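For reference, the splice-only setup described above boils down to something like this (a minimal sketch; the port number and certificate path are placeholders, and note that even a splice-only https_port still needs a cert= for the ssl-bump flag to be accepted):

```conf
# Intercept TLS, peek at the SNI for logging, then splice (no decryption).
https_port 3130 intercept ssl-bump cert=/etc/squid/cert.pem

acl step1 at_step SslBump1
ssl_bump peek step1     # read the ClientHello / SNI at step 1
ssl_bump splice all     # then pass the bytes through untouched
```

With this, access.log records CONNECT entries carrying the SNI hostname while the TLS session stays end-to-end between client and server.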
Re: [squid-users] GENEVE?
On 25/08/2020 16:21, Jonas Steinberg wrote: Is there any way to definitively confirm this? Also, is this something I could submit as a feature request via github, or is it too crazy or out-of-scope for the roadmap?

And please never forget that if you need some feature that is not there yet, you can always sponsor the dev team to develop it :)
Re: [squid-users] Need help blocking an specific HTTPS website
On 04/03/2019 19:27, Felipe Arturo Polanco wrote: Hi, I have been trying to block https://web.whatsapp.com/ from squid and I have been unable to. So far: I can block other HTTPS websites fine, I can block www.whatsapp.com fine, but I cannot block web.whatsapp.com. I have HTTPS transparent interception enabled and I am bumping all TCP connections, but this one still doesn't appear to get blocked by squid. This is part of my configuration:

acl blockwa1 url_regex whatsapp\.com$
acl blockwa2 dstdomain .whatsapp.com
acl blockwa3 ssl::server_name .whatsapp.com
acl step1 at_step SslBump1

blockwa1 and blockwa2 should definitely block web.whatsapp.com; your rules seem right. Can you confirm the web.whatsapp.com accesses are actually going through squid? Do these accesses show up in your access.log with a status other than DENIED?
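If the accesses do show up in access.log, one thing worth testing (a sketch built on the ACLs quoted above; squid 3.5+ supports the terminate action) is cutting matching TLS connections as soon as the SNI is known, before any bumping decision:

```conf
# Kill whatsapp TLS connections at step 1, using the existing
# ssl::server_name ACL (blockwa3), then bump everything else.
ssl_bump peek step1
ssl_bump terminate blockwa3
ssl_bump bump all
```

terminate closes the connection outright, which avoids depending on the bumped request ever reaching the http_access rules.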
Re: [squid-users] squid on openwrt: Possible to get rid of "... SECURITY ALERT: Host header forgery detected ..." msgs ?
On 23/01/2019 06:22, reinerotto wrote: Running squid 4.4 on a very limited device, unfortunately quite a lot of messages of the form "... SECURITY ALERT: Host header forgery detected ..." show up. Unable to eliminate the real cause of this issue (even using iptables to redirect all DNS requests to one dnsmasq does not help), and these annoying messages tend to fill up cache.log, which is kept in precious RAM. Is there an "official" method to suppress these messages? Or can you please give a hint where to apply a (hopefully) simple patch?

I have some OpenWRT boxes running squid 3.5 and cache_log simply goes to null ... I do have the access log enabled, with scripts to rotate it, export it to another server (where the log analysis is done) and keep just a minimum on the box itself, as storage is a big problem on these boxes.
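The "cache_log goes to null" approach mentioned above is just a couple of lines; this is a sketch, and the access_log path on tmpfs is an assumption for an OpenWRT-style box:

```conf
# Silence cache.log entirely; keep a small access.log for export.
cache_log /dev/null
access_log /tmp/squid/access.log squid
logfile_rotate 1
```

The trade-off is that genuine startup errors also disappear, so it is worth pointing cache_log at a real file temporarily whenever squid misbehaves.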
Re: [squid-users] want to change squid name
On 01/10/18 10:08, --Ahmad-- wrote: i just need to have something not squid to run it on linux, i dont want squid

So don't run squid ?!?! If someone finding out that you're running squid is a problem for you, don't run it, period :)
Re: [squid-users] minimize squid memory usage
On 09/07/18 20:45, Gordon Hsiao wrote: Assuming I need _absolutely_ no cache whatsoever (to the point of changing compile flags to disable it, if needed), and no store-to-disk either, i.e. no objects need to be cached at all. I just need Squid to check a few ACLs with absolutely minimal memory usage for now; what else am I missing to get that working?

If you don't need everything that squid can offer, maybe using other proxy software would be a better option. There are other programs, with fewer options, that will surely have a smaller memory footprint. And since you just need ACL capabilities, maybe those can be enough. Have you tried checking that?
Re: [squid-users] Office 365 Support for Squid Proxy
I have a lot of customers who access Office 365 through squid proxies with no problem at all. Office 365 is just another website; there's absolutely no need for special configuration for it to simply work.

On 12/06/17 06:05, Blason R wrote: Hello All, can someone confirm whether squid works well with Office 365? If anyone has any documentation, could you please forward it to me? I have almost 400 Office 365 users, hence I wanted to know what configuration I might need for Office 365 traffic.
Re: [squid-users] retrieve amount of traffic by username
On 06/06/17 10:45, Janis Heller wrote: Seems like parsing would be what I need. Are the size (consumed bandwidth) and the usernames (the timestamp can be generated by my parser) written to this file? Could you show me a sample output of this file?

The already existing documentation is your friend :) http://wiki.squid-cache.org/SquidFaq/SquidLogs
Re: [squid-users] HTTPS sites specifics URL
That's correct: when not using the SSL-Bump feature (that's the one you're looking for), squid will only see the domain part. All the rest of the URL is encrypted and visible only to the client (browser) and the server on the other side, the only two parties involved in that crypto session.

To enable squid to see the whole URL and be able to do full filtering on HTTPS requests, you're looking for the SSL-Bump feature. Google for it; there are a LOT of tutorials and mailing list messages on it.

On 06/02/17 12:40, Dante F. B. Colò wrote: Hello Everyone, I have a question, probably a noob one. I'm trying to allow some https sites with specific URLs (I mean https://domain.tld/blablabla), but https sites are working only with the domain part. What do I have to do to make this work?
Re: [squid-users] Problem https logging
On 01/02/16 14:46, Yuri Voinov wrote: You can't do it without bump.

Longer answer: a transparent proxy for HTTPS (tcp/443) does not work the same way it does for HTTP (tcp/80). It can be done, but some additional configuration is needed. The name for transparent SSL proxy support in squid is ssl_bump. It's not as trivial as a transparent proxy for HTTP, but it can be done. Google for it; there's plenty of documentation on the feature, the caveats, the implementation details, etc.
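For squid 3.5+, a full-decrypt transparent setup looks roughly like this sketch; the CA certificate path is an assumption, and that CA must be generated by you and trusted by every client, which is the hard (non-squid) part:

```conf
# Intercepting https_port that mints per-site certificates signed
# by a local CA, so decrypted requests (full URLs) reach access.log.
https_port 3130 intercept ssl-bump \
    cert=/etc/squid/myCA.pem \
    generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1   # learn the SNI first
ssl_bump bump all     # then decrypt the rest of the session
```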
Re: [squid-users] Assign multiple IP Address to squid
On 29/12/15 10:43, Eugene M. Zheganin wrote: Hi. On 29.12.2015 17:05, Reet Vyas wrote: Hi, I have a working squid 3.5.4 configuration with ssl bump. I am using this squid machine as a router; it has an external IP on a leased line connection, and with the leased line I have 10 extra IP addresses. I want to NAT those external IPs to local IPs on the same network, like we do in our router, so that I can assign those IPs to my machines running webservers. Please suggest a way to configure it. / This has nothing to do with squid.

Well, it can be squid-related, as 'machines having webservers' is given by the OP. Yes, squid cannot do port-forwarding the way routers usually can, but it can work as a reverse proxy, 'publishing' internal-IP webservers to the world.

You didn't specify whether you're using squid as a general proxy or a reverse proxy. SSL-Bump, as far as I know, can be used with both. Maybe you want to google, or search the mailing list archives, for reverse proxy setups; that's what you're looking for. But keep in mind it will work only for HTTP and HTTPS connections. If you want general port-forwarding, then Eugene is right: it's not a squid-related subject.
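A minimal reverse-proxy ("publishing") setup of the kind described above could look like this sketch, where the hostname and the internal IP are placeholders:

```conf
# Accept port 80 traffic for one published site and hand it
# to an internal webserver.
http_port 80 accel defaultsite=www.example.com
cache_peer 192.168.0.10 parent 80 0 no-query originserver name=web1

acl site1 dstdomain www.example.com
http_access allow site1
cache_peer_access web1 allow site1
cache_peer_access web1 deny all
```

One such cache_peer/ACL pair per published site lets a single squid front several internal webservers on one external IP.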
Re: [squid-users] New skype version can't control by squid
On 22/12/15 23:10, fbismc wrote: Hi everyone, below is my skype control in squid.conf:

#skype
acl numeric_IPs dstdom_regex ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype
acl validUserAgent browser \S+
acl skypenet dstdomain .skype.com

After skype updated to 7.17 the control fails, and I need to grant an "allowed" permission ("allowed" meaning the privilege of general Internet surfing). How should I fix this problem? Any suggestion will be appreciated.

Well ... if you want someone to be able to help you, you can start by giving some real information on the new skype accesses that are evading your rules. You have rules for the user agent, for IP-literal access on port 443 and for the domain skype.com. Which accesses are not getting caught by these? What user agent do the new skype accesses use?

Provide real information if you want real help (which, of course, is not always guaranteed on a community mailing list). But be sure that with no real information, you won't get any useful help at all.
Re: [squid-users] logging https websites
On 09/12/15 13:11, George Hollingshead wrote: is there a simple way to log requests made to https sites? I just want to see the sites visited without having to set up tunneling and all this complex stuff i'm reading about. Hoping there's a simple way; and yes, i'm a newb, but smart enough to have your awesome program running; hehe

If you really want a SIMPLE way, then the answer is NO, that's not possible.

By simply configuring the proxy in the users' browsers, you'll be able to see the hostname, but not the full URL: a user accessing https://www.gmail.com/mail/something/INBOX will appear in the logs just as CONNECT www.gmail.com. And that's how it works ... the path is only visible to the endpoints, the browser and the server; squid just carries the encrypted tunnel between them, without knowing what's happening inside.

Is it possible to decrypt and see the full path in the logs, being able to filter on it and everything else? YES, that's ssl-bump, but that's FAR from being an easy setup ...
Re: [squid-users] TCP_MISS/200
On 17/11/15 20:18, Jens Kallup wrote: Hello, what does the log output TCP_MISS/200 mean? An error in the squid config?

HTTP response code 200 means 'OK, your request was processed fine'; it's the 'everything ok' return code. TCP_MISS means there was no cached answer for that request, so it was fetched from the origin server.

It's definitely not an error. There's absolutely nothing wrong with seeing LOTS of those in your access.log files, as you certainly face LOTS of 'everything ok' requests, and many of them will not be served from the cached objects.
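As a quick sanity check, a small script like this (a sketch; the default log path is an assumption, pass another as the first argument) tallies the result codes in access.log so you can see the HIT/MISS mix at a glance:

```shell
#!/bin/sh
# Tally Squid result codes (field 4 of the native access.log format,
# e.g. TCP_MISS/200) and print one count per code.
LOG="${1:-/var/log/squid/access.log}"
awk '{ split($4, s, "/"); codes[s[1]]++ }
     END { for (c in codes) print c, codes[c] }' "$LOG" | sort
```

Typical output is a handful of lines such as `TCP_MISS 1234` and `TCP_HIT 56`, which makes it obvious that a stream of TCP_MISS/200 entries is normal traffic, not an error.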
Re: [squid-users] big files caching-only proxy
On 22/10/15 06:08, Amos Jeffries wrote: On 22/10/2015 7:13 a.m., Leonardo Rodrigues wrote: It sounds to me that you are not so much wanting to cache only big things; you are wanting to cache only certain sites which contain mostly big things. The best way to configure that is with the cache directive. Just allow those sites you want, and deny all others. Then you don't have to worry about big vs small object size limits. Though why you would want to avoid caching everything that was designed to be cached is a bit mystifying. You might find better performance providing several cache_dir with different size ranges in each, for optimal caching to be figured out by Squid.

At first, caching only 'big' things was the idea, but when I looked into caching instagram, that really changed. I know I don't have good hardware (I/O limitation) and, with a VERY heterogeneous group of users, hits were low when caching 'everything'; in some cases access was even getting slower, as I do have a good internet pipe. But caching windows update and other 'big things' (antivirus updates, apple updates, etc) still looked interesting to me.

As you suggested, I further refined the ACLs that match 'what I want to cache' and got it working using cache rules. In some cases I have even created two ACLs, one for the dstdomain and another for the urlpath, matching just the extensions I want to cache. Maybe not perfect, but it seems to be working fine after lowering minimum_object_size to a few KBs.
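Put together, the cache-rules approach Amos suggested, paired with the size limits from the original question, is roughly this sketch (the domains and extensions are illustrative, not a tested list):

```conf
# Cache only selected update sites, and within them only big files.
acl updates dstdomain .windowsupdate.com .update.microsoft.com
acl bigext urlpath_regex -i \.(cab|exe|msu|msp|zip)$

cache allow updates bigext
cache deny all

minimum_object_size 500 KB
maximum_object_size 500 MB
```

The cache directive decides *which* responses may be stored, while the object-size limits then filter by size within that set.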
[squid-users] big files caching-only proxy
Hi,

I have a running setup for proxying only 'big' files, like Windows Update, Apple updates and some other very specific URLs. That's working just fine, no problem there. To avoid caching small things on the URLs where I want big files proxied, I set 'minimum_object_size' to 500 KB, for example. That's doing just fine, working flawlessly.

Now I'm looking into caching instagram data. That seems easy; instagram videos are already being cached, but I really don't know how to deal with the small images and thumbnails from the timeline. If I lower minimum_object_size too much, those will be cached, but so will unwanted data from the other URLs.

Question is: can minimum_object_size be paired with some ACL? Can I have a global minimum_object_size and another one for specific URLs (from an ACL), for example? I'm running squid 3.5.8.
Re: [squid-users] Monitoring Squid using SNMP.
On 20/10/15 16:26, sebastien.boulia...@cpu.ca wrote: When I try to do an snmpwalk, I get a timeout. [root@bak ~]# snmpwalk xx:3401 -c cpuread -v 1 / Does anyone monitor Squid using SNMP? Did you run into any issues?

You're not getting a timeout, you're getting no data, which is completely different from a timeout. Try giving the initial MIB number and you'll probably get the data:

[root@firewall ~]# snmpwalk -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1
SNMPv2-SMI::enterprises.3495.1.1.1.0 = INTEGER: 419756
SNMPv2-SMI::enterprises.3495.1.1.2.0 = INTEGER: 96398932
SNMPv2-SMI::enterprises.3495.1.1.3.0 = Timeticks: (77355691) 8 days, 22:52:36.91
SNMPv2-SMI::enterprises.3495.1.2.1.0 = STRING: "webmaster"
SNMPv2-SMI::enterprises.3495.1.2.2.0 = STRING: "squid"
SNMPv2-SMI::enterprises.3495.1.2.3.0 = STRING: "3.5.8"

And to make things easier, I configure the SNMP daemon that runs on UDP/161 to 'proxy' requests to squid's MIB, so I don't need to worry about specifying the correct port:

[root@firewall snmp]# grep proxy snmpd.conf
# proxying requests to squid MIB
proxy -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1

so I can snmpwalk on the default udp/161 port (note the lack of the :3401 port):

[root@firewall snmp]# snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.3495.1
SNMPv2-SMI::enterprises.3495.1.1.1.0 = INTEGER: 419964
SNMPv2-SMI::enterprises.3495.1.1.2.0 = INTEGER: 96359504
SNMPv2-SMI::enterprises.3495.1.1.3.0 = Timeticks: (77370521) 8 days, 22:55:05.21
Re: [squid-users] How to allow subdomains in my config.
On 13/10/15 18:14, sebastien.boulia...@cpu.ca wrote:

cache_peer ezproxyx.reseaubiblio.ca parent 80 0 no-query originserver name=ezproxycqlm
acl ezproxycqlmacl dstdomain ezproxycqlm.reseaubiblio.ca
http_access allow www80 ezproxycqlmacl
cache_peer_access ezproxycqlm allow www80 ezproxycqlmacl
cache_peer_access ezproxycqlm deny all

No guessing games would be awesome ... please post your ACL definitions (the www80 one, for instance) as well.
Re: [squid-users] got http2?
On 11/10/15 19:31, Linda A. Walsh wrote: Are the impacts or implementation details being thought about in squid? If it comes down to it only being supported over encrypted TUNNELS, it's not only going to be hard to cache, but it also makes it a pain to implement http/browsing controls on content, since it would all be encrypted and compressed and impossible to use directly in companies that need to filter web content as it comes in.

No different from the 'everything https' that all websites have been implementing over the last 2 years ...
Re: [squid-users] squid cache
On 30/09/15 16:35, Magic Link wrote: Hi, I configured squid to use the cache. It seems to work, because when I tested with a software download, the second download was a TCP_HIT in the access.log. The question I have is: why can't the majority of requests be cached (I have a lot of TCP_MISS/200)? I found that dynamic content is not cached, but I don't understand it very well.

That's the way the internet works ... most of the traffic is dynamically generated, which in the default squid configuration prevents the content from being cached. Nowadays, with the 'everything https' trend, HTTPS is also non-cacheable (in the default configuration). And the default configuration you must understand as the SECURE configuration. Tweaking refresh_pattern is usually not recommended, except in specific cases where you are completely clear that you're violating the HTTP protocol and can live with the consequences.

In short, the days of 20-30% byte hits are gone and will never come back. Keep your default (and secure) squid configuration; there's no need to tweak refresh_pattern except in very specific situations where you clearly understand what you're doing.
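For reference, the stock refresh_pattern rules this advice refers to, as shipped in the squid 3.x default configuration, are:

```conf
# Squid's shipped defaults: never cache cgi/query URLs, apply
# conservative freshness heuristics to everything else.
refresh_pattern ^ftp:             1440   20%   10080
refresh_pattern ^gopher:          1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?)    0    0%       0
refresh_pattern .                    0   20%    4320
```

The three numeric columns are the minimum age (minutes), the percentage of the object's age used as a freshness heuristic, and the maximum age (minutes).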
Re: [squid-users] analyzing cache in and out files
Em 30/09/15 04:13, Matus UHLAR - fantomas escreveu: the problem was iirc in caching partial objects http://wiki.squid-cache.org/Features/PartialResponsesCaching that problem could be avoided with properly setting range_offset_limit http://www.squid-cache.org/Doc/config/range_offset_limit/ but that also means that whole files instead of just their parts are fetched. it's quite possible that microsoft changed the windows updates to be smaller files, but I don't know anything about this, so I wonder if you really do cache windows updates, and how does the caching work related to informations above... yes, i'm definitely caching windows update files !! [root@firewall ~]# cd /var/squid/ [root@firewall squid]# for i in `find . -type f`; do strings $i | head -3 | grep "http://";; done | grep windowsupdate | wc -l 824 and yes, i had to configure range_offset_limit: range_offset_limit 500 MB updates minimum_object_size 500 KB maximum_object_size 500 MB quick_abort_min -1 (being 'updates' the ACL with the URLs to be cached, basically windowsupdate and avast definition updates - the second one required further tweaks with storeid_rewrite for the CDN URLs) from access.log, i see a lot of TCP_HIT/206 (and just a few TCP_HIT/200), so it seems squid is able to get the fully cached file and provide the smaller pieces requested: [root@firewall squid]# grep "TCP_HIT/" access.log | grep windowsupdate | wc -l 9860 [root@firewall squid]# bzcat access.log.20150927.bz2 | grep "TCP_HIT/" | grep windowsupdate | wc -l 38584 having squid to download the WHOLE file at the very first request (even a partial request) may be bad, but considering it will be used later to provide the data for other requests, even partial ones, make things a little better. 
(this windowsupdate caching has only been running for a few weeks; I expect HITs to grow a little more) -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] analyzing cache in and out files
On 28/09/15 17:55, Amos Jeffries wrote: The store.log is the one recording what gets added and removed from cache. It is just that there are no available tools to do the analysis you are asking for. Most admins (and thus tools aimed at them) are more concerned with whether cached files are re-used (HITs and near-HITs) or not. That is recorded in the access.log, and almost all analysis tools use that log in one format or another. That's what I was afraid of, there are no tools to analyze that data. Anyway, thanks for the answer. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
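[Editor's note] There is no dedicated analysis tool, but a quick shell pipeline can at least tally store.log actions (SWAPOUT roughly meaning an object was written to the cache, RELEASE an object removed). A sketch, with fabricated sample lines standing in for a real /var/log/squid/store.log:

```shell
# Tally the action field (2nd column) of a store.log.
# The sample lines below are made up for illustration;
# point awk at your real store.log instead.
cat <<'EOF' > /tmp/store.sample
1443620001.123 SWAPOUT 00 0000A1 ABCD 200 1443620000 -1 -1 x/y 100/100 GET http://example.com/a
1443620002.456 RELEASE 00 0000A2 ABCE 200 1443620000 -1 -1 x/y 100/100 GET http://example.com/b
1443620003.789 SWAPOUT 00 0000A3 ABCF 200 1443620000 -1 -1 x/y 100/100 GET http://example.com/c
EOF
# count each action type
awk '{print $2}' /tmp/store.sample | sort | uniq -c
```

Grep the URL field the same way to see which URLs enter and leave the cache dirs.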
Re: [squid-users] analyzing cache in and out files
On 29/09/15 10:46, Matus UHLAR - fantomas wrote: hmm, when did this change? IIRC that was a big problem, since updates use huge files and fetch only parts of them, which squid wasn't able to cache. But I've been away for a few years; maybe M$ finally fixed that up... I'm not a squid expert, but it seems things became much easier when squid became fully HTTP/1.1 compliant. The need to cache huge files hasn't changed; that's still required for caching Windows Update files. Storage space, however, becomes cheaper every year. In my setup, for example, I'm caching files up to 500 MB; I have absolutely no intention of caching ALL Windows Update files. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] analyzing cache in and out files
On 29/09/15 07:42, Matus UHLAR - fantomas wrote: On 28.09.15 15:59, Leonardo Rodrigues wrote: I have a running squid that, until some weeks ago, was not doing any kind of caching; it was just used for access control rules. Now I have enabled it for windows update and some specific URLs, and it's working just fine. windows updates are so badly designed that the only sane way to get them cached is running a windows update server (WSUS). WSUS works for corporate environments, not for everyone else. And caching Windows Update with squid is actually pretty trivial; it doesn't even need URL rewriting as other services (youtube, for example) do. And it works just fine! -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
[squid-users] analyzing cache in and out files
Hi, I have a running squid that, until some weeks ago, was not doing any kind of caching; it was just used for access control rules. Now I have enabled it for windows update and some specific URLs, and it's working just fine. I was looking, however, for a way of tracking files that get into the cache and are evicted from it. At first, I thought store_log would be the way, but the comment on cache_store_log in the default squid.conf disappointed me: "There are not really utilities to analyze this data". Which log could I enable, if there is any, to help me analyze files (and their URLs) getting into and out of the cache dirs? I'm using squid 3.5.8, btw. Thanks! -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] user agent
I personally hate using !acl ... it's the easiest way, in my opinion, of getting in trouble and getting things to NOT work the way you want. I always prefer to replace it with 4-5 'normal' rules rather than using !acl. On 18/09/15 06:32, joe wrote: hi, I need to have 3 useragent replacements and it's not working. example:

acl brs browser -i Mozilla.*Window.*
acl phone-brs browser -i Mozilla.*(Android|iPhone|iPad).*
request_header_access User-Agent deny brs !phone-brs
request_header_replace User-Agent Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101
request_header_access User-Agent deny phone-brs !brs
request_header_replace User-Agent Mozilla/5.0 (Android; iPhone; Mobile;) Gecko/18.0

-- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
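[Editor's note] A minimal sketch of the "replace !acl with ordered positive rules" idea; the ACL names here are hypothetical, not from the thread:

```
# hypothetical ACLs
acl staff proxy_auth REQUIRED
acl blocked_sites dstdomain .example.com

# instead of:  http_access allow staff !blocked_sites
# express the same intent with ordered rules and no negation:
http_access deny blocked_sites
http_access allow staff
http_access deny all
```

Because http_access rules are evaluated top to bottom and the first match wins, the deny on blocked_sites fires before the allow is ever consulted.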
Re: [squid-users] Android
Of course you can always use 'acl aclname browser' to identify some specific agents and, using that, try to match android browsers. However, it would be basically impossible to guarantee that works 100%, because software that issues HTTP requests can always send different identifications and, thus, your rule will not match. Those rules would also allow other browsers/OSs to fake their agent id and, by forging something that looks like an Android to you, get access without authentication. You can try, but I would say you can never have a fully 100% working and 100% fake-proof setup in that scenario. On 12/08/15 14:09, Jorgeley Junior wrote: Hi guys. Is there a way to work around android under squid authentication? I could make an ACL for the range of addresses that my wifi router distributes to my wifi network and deny auth for them, but I'd like to identify the Android clients and specify that just they do not need authentication. Any ideas? Thanks in advance -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] Logging of 'indirect' requests, e.g. involving NAT or VPN
On 24/06/15 15:28, Henry S. Thompson wrote: I've searched the documentation and mailing list archives w/o success, and am not competent to read the source, so asking here: what is logged as the 'remotehost' in Squid logs when a request has been encapsulated, as in from a machine on a local network behind a router implementing NAT, or from a machine accessing the proxy via a VPN connection? The logs will show the IP address that reached squid, i.e. the source address of the connection. If that was NATed, squid will never know (and thus cannot log) the original address from before the NAT. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] tos miss-mask not working at all squid 3.5.5
On 22/06/15 17:11, mohammad wrote: why is no-one answering this?!! BTW, I tried the 2.6.35 kernel patch from ZPH; it worked intermittently, and stopped working after a squid rebuild. any help is appreciated. This list is community-based; there's no guarantee someone will answer and be able to help you. Everybody does their best, but there's no guarantee at all. If you really need someone to help solve your problems, there are lots of companies/partners that offer commercial support for squid: http://www.squid-cache.org/Support/services.html -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] Migration from squid 3.1.20 to 3.4.8
On 10/06/15 06:39, Diercks, Frank (VRZ Koblenz) wrote: Hello squid-users, I migrated our proxy from 3.1.20 to 3.4.8. Here are the changes I made: why go to 3.4 if it's already 'old' code? Why not go straight to 3.5, which is the current release? -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] Saving memory cache to disk on reboot?
On 18/05/15 08:55, Yan Seiner wrote: The title says it all - is it possible to save the memory cache to disk on reboot? I reboot my systems weekly and I wonder if this would be of any advantage. First, let's say that a cache can ALWAYS be lost. Sometimes that may not be desirable, but losing a cache must not create problems; the cache will simply be repopulated, and nothing should break. Losing terabytes of cache is not a good idea, as that amount of data would take days to be repopulated and, during that time, you'd have bad hit ratios on your cache. As you say you're using a memory cache, I'll assume you're dealing with 16 GB or 32 GB of cache. We're not talking about terabytes, we're talking about a few gigabytes. In that scenario, I would not worry about losing it. Unless you're serving just a few specific pages from that cache, which is not usually the case, your hit ratio already shouldn't be too high, so losing the cache shouldn't be a problem; it will be repopulated within a few hours, depending on the number of clients and the traffic they generate. And my only question here is: why reboot weekly? Assuming you're running Linux or some Unix variant, that's absolutely unnecessary. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] i want to block images with size more than 40 KB
On 25/03/15 02:34, snakeeyes wrote: BTW, can squid block dynamically loaded images, and ajax requests which return images? I want that on yahoo and google. Is that possible? Yes snakeeyes, that's surely possible. Maybe not easy, but only a few things are not possible. However, as said in previous emails, this is the time you should stop asking for ready-to-use rules and start understanding that, as already said, blocking the way you want is not trivial: internet browsing may look trivial to the user, but it's definitely not from a technical point of view. Images can come over HTTPS connections and, thus, squid cannot see them. If you use SSL-Bump, filtering based on the content of SSL connections becomes possible. That's not a trivial configuration, but it's surely possible. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] i want to block images with size more than 40 KB
On 18/03/15 08:06, Amos Jeffries wrote: On 19/03/2015 5:57 a.m., snakeeyes wrote: I need help in blocking images that have a size less than 40 KB. Use the Squid provided access controls to manage access to things. <http://wiki.squid-cache.org/SquidFaq/SquidAcl> You should know that you cannot evaluate the response size using only the request data. So to achieve what you want, data from the reply must be considered as well, the response size for example. Images can be identified by the presence of '.jpg' or '.png' in the request URL, but images can also be generated on-the-fly by scripts, so you won't see those extensions all the time. In that case, analyzing reply MIME headers can be useful as well; a reply MIME type containing 'image' is a strong indication that we're receiving an image. Put all that together and you'll achieve the rules you want. But keep in mind that you'll probably break A LOT of sites that slice images: background images, menus and all sorts of things. I would call it a VERY bad idea, but it can be achieved with a few rules. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
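[Editor's note] A minimal sketch of the "put all that together" idea: match image replies by their MIME type and cap their size. This is illustrative only (the ACL name is made up, the 40 KB figure comes from the thread, and it will break sites that slice images, as warned above):

```
# replies whose Content-Type starts with image/
acl image_reply rep_mime_type -i ^image/

# refuse image replies larger than 40 KB
reply_body_max_size 40 KB image_reply
```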
Re: [squid-users] Calculate time spent on website (per ip address)
On 10/02/15 20:23, Yuri Voinov wrote: HTTP is a stateless protocol (in most cases, excluding persistent connections). So it is impossible to determine how much time a user spent on a site. Only very approximately. Right? In most cases, probably not even close to the real deal! -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] Order of http_access allow/deny
On 04/02/15 09:19, andreas.resc...@mahle.com wrote: Hi there, is there an order of http_access allow/deny? If I activate "http_access deny !chkglwebhttp", nobody can use the proxy; squid always asks for user and password (user and password are correct) ##

acl chkglwebhttp external LDAPLookup GGPY-LO-Web-Http
acl sellingUser external LDAPLookup GGPY-LO-Web-Allowed-Selling
acl socialUser external LDAPLookup GGPY-LO-Web-Allowed-Social
acl allforbUser external LDAPLookup GGPY-LO-Web-Allowed-All
acl ftpputUser external LDAPLookup GGPY-LO-Web-Ftp-Put
acl loggingUser external LDAPLookup GGPY-LO-Web-Log-User
acl auth proxy_auth REQUIRED
acl permitt_ips src 10.143.10.247/32
acl FTP proto FTP
acl PUT method PUT
# whitelisten
http_access allow open-sites all
http_access allow localhost
http_access allow permitt_ips !denied-sites !social-sites
http_access allow indien DAY
http_access deny indien
#http_access deny !chkglwebhttp
http_access allow selling-sites sellingUser
http_access allow social-sites socialUser

Actually, and I don't know if this is a bug or desired behavior, denying a group seems to always (at least for me) bring up the authentication popup. To avoid that and make things really work as expected, I usually add an 'all' to the denying clause. As the 'all' ACL matches anything, it won't change whether your rule denies or not. And it will make things work. (This hint was actually found in the mailing list archives.) So, instead of

http_access deny !chkglwebhttp

try using

http_access deny !chkglwebhttp all

If your 'indien' acl, which is also used in a deny rule, is also a group acl (that cannot be confirmed from the conf you posted), just add the 'all' as well. In summary, always add an 'all' to any http_access rule that involves denying by any kind of group checking.
-- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'
I have several squids authenticating users with ldap_auth and it works fine. Users are located in the 'Users' OU and my config lines are (each a single line):

auth_param basic program /usr/lib/squid/squid_ldap_auth -P -R -b "dc=myad,dc=domain" -D "cn=ProxyUser,cn=Users,dc=myad,dc=domain" -w "x" -f sAMAccountName=%s -h ad.ip.addr.ess

external_acl_type ldap_group children=3 ttl=300 %LOGIN /usr/lib/squid/squid_ldap_group -P -R -b "dc=myad,dc=domain" -D "cn=ProxyUser,cn=Users,dc=myad,dc=domain" -w "xxx" -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,cn=Users,dc=myad,dc=domain))" -h ad.ip.addr.ess

On 15/12/14 21:03, Ahmed Allzaeem wrote: Hi guys, I'm trying to use squid with Active Directory 2008 R2 as an external authentication source. On the DC, called smart.ps, I created the user squid, gave it delegation on the DC, and also put it in the admins group in the OU=proxy -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
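[Editor's note] Before touching squid.conf, it can help to verify the bind DN and credentials with ldapsearch, outside squid entirely. A sketch; the host, DN, password, and username here mirror the placeholder values above and are not real:

```
ldapsearch -x -H ldap://ad.ip.addr.ess \
    -D "cn=ProxyUser,cn=Users,dc=myad,dc=domain" -w 'xxx' \
    -b "dc=myad,dc=domain" "(sAMAccountName=someuser)"
```

If this reports "Invalid credentials" (LDAP error 49), the problem is the bind account itself, not squid.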
Re: [squid-users] WARNING: there are more than 100 regular expressions
On 27/11/14 07:59, navari.lore...@gmail.com wrote: "Consider using less REs ..." is not possible. So don't worry about this WARNING message. It is just a warning, not an error. If you're aware that using lots of REs can hit CPU usage hard, just go for it. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] how to obtain info about actual active downloads?
On 27/10/14 11:47, Antony Stone wrote: On Monday 27 October 2014 at 14:32:39 (EU time), Frantisek Hanzlik wrote: Please, what is the best way to determine which squid clients (their PC IP addresses) have downloads active? I want to determine which clients burden our slow internet line. Examining access.log does not help much in this case, because users can download large files and it may take minutes or hours (e.g. when consuming audio/video streams). I would use the tool 'iptraf', either running on your squid server, or on a machine which can sniff your internal network traffic (possibly with the use of a spanning port on the switch). That can give you real-time bandwidth measurements per IP address. I use this script: http://samm.kiev.ua/sqstat/ Set it to auto-update every 15 seconds, for example, and you'll have a great and easy way to watch active connections and spot high-bandwidth ones. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] redirect all ports to squid
On 30/09/14 18:49, James Harper wrote: Is it possible to redirect all ports to squid? Through iptables? For example port 25 smtp, 143 imap, etc... Can squid handle that, in transparent mode? So in short it works, but not as well as it could, and you might be better off finding another solution. The main reason I was interested is that Squid already has a very nice acl implementation, and there are already a number of good log analysis tools for it. Despite the fact it can work with some heavy tweaking, as you pointed out, it's important to be clear that it was NOT designed for that. Squid was designed to be an HTTP/HTTPS proxy, not a general TCP application proxy. As said, although it can work, other solutions designed for each specific protocol will surely do a much better job. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] redirect all ports to squid
On 30/09/14 13:41, hadi wrote: Is it possible to redirect all ports to squid? Through iptables? For example port 25 smtp, 143 imap, etc... Can squid handle that, in transparent mode? For the 1000th time here on the mailing list ... no, you cannot. Squid is an HTTP/HTTPS proxy and cannot handle other protocols. HTTPS can be transparently redirected, but complex configurations are needed for that and, in most cases, it will not be as 100% transparent as it is for the HTTP protocol. Other protocols (SMTP, IMAP, POP3, etc.) cannot be handled by squid. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br
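[Editor's note] For contrast, a legitimate iptables interception setup redirects only HTTP. A sketch of the firewall rule (the interface name eth0 and intercept port 3129 are assumptions for this example, and squid must have a matching `http_port 3129 intercept` line):

```
# intercept only port 80 (HTTP) from the LAN and hand it to squid's intercept port;
# ports 25/143 etc. must NOT be redirected - squid cannot speak those protocols
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129
```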