[squid-users] user agent
hi, i need to have 3 user-agent replacements and it's not working. example:

acl brs browser -i Mozilla.*Window.*
acl phone-brs browser -i Mozilla.*(Android|iPhone|iPad).*

request_header_access User-Agent deny brs !phone-brs
request_header_replace User-Agent Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101

request_header_access User-Agent deny phone-brs !brs
request_header_replace User-Agent Mozilla/5.0 (Android; iPhone; Mobile;) Gecko/18.0

what happens is: if i have 2 or more user-agent replacements, requests matching either acl get the second request_header_replace value. so both brs and phone-brs hook onto "request_header_replace User-Agent Mozilla/5.0 (Android; iPhone; Mobile;) Gecko/18.0". any idea how to fix this, or is it a bug??

my understanding is: when the first brs deny happens, the user-agent variable loads the first replacement, then it reloads the second replacement and sends that on. this is bad. it should use the first one, and the second replacement should be discarded because its acl did not match on the first deny. so here is my suggestion, please - it's important so it won't grab the wrong replacement: add an acl match at the end of request_header_replace, e.g.

request_header_access User-Agent deny brs
request_header_replace User-Agent Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 {brs}

tks

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/user-agent-tp4673284.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
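The two browser ACL regexes above can be checked outside Squid with grep -E (a sketch: grep's POSIX ERE engine is close enough to Squid's regex handling for these patterns, and the User-Agent strings are made-up examples, not from this thread):

```shell
# A desktop UA string matches the brs pattern but not the phone-brs pattern.
desktop='Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101'
phone='Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)'

echo "$desktop" | grep -Eq 'Mozilla.*Window.*'              && echo 'desktop: matches brs'
echo "$desktop" | grep -Eq 'Mozilla.*(Android|iPhone|iPad)' || echo 'desktop: does not match phone-brs'
echo "$phone"   | grep -Eq 'Mozilla.*(Android|iPhone|iPad)' && echo 'phone: matches phone-brs'
```

So the two ACLs are mutually exclusive for typical UA strings; the problem joe describes is not in the ACL matching but in which replacement value gets applied.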
Re: [squid-users] kinda confused about Peek and Splice
On 09/18/2015 01:38 PM, Marek Serafin wrote:

> 1. the only way to be absolutely sure what is transmitted over an SSL
> tunnel is bumping the connection - there is no other possibility.

Correct.

> 2. some important websites shouldn't be bumped - like banking or payment
> systems. Such pages should be spliced by a whitelist at step 2?

Whether some sites should or should not be bumped is a local policy decision. There is no one-size-fits-all answer to this question. The specifics of that local policy may affect _when_ you splice those important sites (if any) or, in other words, _how_ you identify those important sites.

> 3. some websites/services can't be bumped because of the HPKP feature. So
> if we want to allow users to use such sites/services we must splice them

Or, if you can reinstall all browsers from scratch, you can overwrite/delete the site's Public-Key-Pins headers when bumping. HPKP is a Trust On First Use feature, so you can essentially disable it if you control that "first use". Please note that I am not an expert on this -- I am just reading Mozilla's description of the feature at
https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning

There are other reasons a site may not support bumping. You will need to babysit your bumping Squid to make sure your users are as happy as can be expected.

> at step 2 (like banking systems)?

The best timing (i.e., the step number) for splicing depends on many local factors.

> My policy is: bump everything except banking systems (and some other
> important domains). My config is like this:
> --
> acl nobumpSites ssl::server_name "/etc/squid3/allowed_SSL_sites.txt"
>
> ssl_bump peek step1
> ssl_bump splice step2 nobumpSites
> ssl_bump bump all
> --

I do not see the reason for the "step2" ACL in the above. Do you?

> So tell me what's the reason for peeking at step1? I suppose getting the
> real server_name based on SNI instead of reading it from the CONNECT
> request? (remember: all browsers are proxy aware)

Yes.
Not all CONNECT requests have host names.

> I'm asking because when I change my configuration to this one:
>
> --
> acl allowed_https_sites dstdomain "/etc/squid3/allowed_SSL_sites.txt"
> ssl_bump splice allowed_https_sites
> ssl_bump bump all
> --
> It seems to work the same way.

Have you tested both configurations using a CONNECT request with an IP address? Have you tested with a CONNECT request for a foo.example.com domain when that domain responds with a bar.example.com certificate? If not, your testing is not good enough to expose [at least two] differences between the two configurations.

> Is 'ssl::server_name' more reliable than 'dstdomain'?

"Reliable" is an undefined term in this context. ssl::server_name may use SNI (where available). dstdomain does not know about SNI. There are other important documented differences as well:

> The server name is obtained during SslBump steps from such sources
> as CONNECT request URI, client SNI, and SSL server certificate CN.
> During each SslBump step, Squid may improve its understanding of a
> "true server name". Unlike dstdomain, this ACL does not perform
> DNS lookups.

> So, despite that, I'm still confused about peek & stare - for me
> it only makes sense in this order:
>
> 1. peek everything at step 1 (to get a reliable server name by SNI ???)
> 2. splice exceptions ("whitelist") at step 2
> 3. stare all at step 2 (or just bump the rest at step 2)
> 4. bump all at step 3
>
> does it make sense according to my policy assumptions?

It depends on how you want to identify whitelisted sites. For example, if you want to validate the server certificate before splicing, then the above will not work.

> what's the advantage of stare at step 2 - instead of
> bumping everything after splicing the exceptions?

I am not sure, but it is possible that bumping at step 2 will not mimic some server certificate features in the future (it does now).
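For the certificate-validation case Alex mentions, one common shape (a hedged sketch based on the general Peek-and-Splice semantics, not a config from this thread) is to peek again at step 2 for the whitelisted sites, so Squid fetches the real server certificate before the final splice decision, while staring at everything else so it can still be bumped:

```
acl nobumpSites ssl::server_name "/etc/squid3/allowed_SSL_sites.txt"

ssl_bump peek step1                # read SNI from the TLS client hello
ssl_bump peek step2 nobumpSites    # fetch the server certificate for whitelisted names
ssl_bump splice nobumpSites        # splice at step 3, after seeing the real certificate
ssl_bump stare step2               # stare at everything else, keeping the option to bump
ssl_bump bump all
```

The trade-off: once Squid peeks at step 2, that connection generally can no longer be bumped, which is why the whitelisted names must be excluded from the stare path by rule order.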
For a related discussion, please see http://bugs.squid-cache.org/show_bug.cgi?id=4327

HTH,

Alex.
Re: [squid-users] kinda confused about Peek and Splice
Hi guys,

I'm still confused about peek and stare. Correct me please if I'm wrong.

1. the only way to be absolutely sure what is transmitted over an SSL tunnel is bumping the connection - there is no other possibility.

2. some important websites shouldn't be bumped - like banking or payment systems. Such pages should be spliced by a whitelist at step 2?

3. some websites/services can't be bumped because of the HPKP feature. So if we want to allow users to use such sites/services we must splice them at step 2 (like banking systems)?

My policy is: bump everything except banking systems (and some other important domains). My config is like this:

--
acl nobumpSites ssl::server_name "/etc/squid3/allowed_SSL_sites.txt"

ssl_bump peek step1
ssl_bump splice step2 nobumpSites
ssl_bump bump all
--

So tell me, what's the reason for peeking at step1? I suppose getting the real server_name based on SNI instead of reading it from the CONNECT request? (remember: all browsers are proxy aware)

I'm asking because when I change my configuration to this one:

--
acl allowed_https_sites dstdomain "/etc/squid3/allowed_SSL_sites.txt"
ssl_bump splice allowed_https_sites
ssl_bump bump all
--

It seems to work the same way. Is 'ssl::server_name' more reliable than 'dstdomain'?

So, despite that, I'm still confused about peek & stare - for me it only makes sense in this order:

1. peek everything at step 1 (to get a reliable server name by SNI ???)
2. splice exceptions ("whitelist") at step 2
3. stare all at step 2 (or just bump the rest at step 2)
4. bump all at step 3

Does it make sense according to my policy assumptions? If yes, tell me: what's the advantage of stare at step 2, instead of bumping everything after splicing the exceptions?

I truly apologize for such a long email, but I wanted to put in as many doubts as I can :)

thanks a lot!
Marek
Re: [squid-users] user agent
again in HttpHeaderTools.cc:

    } else {
        /* It was denied, but we have a replacement. Replace the
         * header on the fly, and return that the new header
         * is allowed. */
        e->value = hm->replacement;
        retval = 1;
    }

    return retval;

does the retval flag get set back to 0 (or its normal value) after it serves the new header, so the second acl will work properly? if it stays retval = 1, the next acl match on a different request will serve the same content.
Re: [squid-users] Optimezed???
On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:

> hey guys, forgot-me? :(

Surely you can see for yourself how many connections you've had of different types? Here are the most common (all those with over 100 instances) from your list of 5240 results:

> > 290 TAG_NONE/503
> > 368 TCP_DENIED/403
> >1421 TCP_DENIED/407
> > 680 TCP_MISS/200
> > 192 TCP_REFRESH_UNMODIFIED/304
> >1896 TCP_TUNNEL/200

So:

290 (5.5%) got a 503 result (service unavailable)
368 (7%) were denied by the proxy with code 403 (forbidden)
1421 (27%) were denied by the proxy with code 407 (proxy authentication required)
680 (13%) were successfully retrieved from the remote servers but were not previously in your cache
192 (3.6%) were already cached by your browser and didn't need to be retrieved
1896 (36%) were successful HTTPS tunneled connections, simply being forwarded by the proxy

This accounts for 4847 (92.5%) of your 5240 results. As you can see, just measuring HIT and MISS is not the whole picture.

Hope that helps,

Antony.

--
Pavlov is in the pub enjoying a pint. The barman rings for last orders, and Pavlov jumps up exclaiming "Damn! I forgot to feed the dog!"

Please reply to the list; please *don't* CC me.
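Antony's percentages can be reproduced mechanically. A small sketch (the counts are taken from the thread; the awk one-liner simply divides each count by the 5240 total):

```shell
# Input lines mimic the output of: awk '{print $4}' access.log | sort | uniq -c
printf '%s\n' \
  '290 TAG_NONE/503' \
  '368 TCP_DENIED/403' \
  '1421 TCP_DENIED/407' \
  '680 TCP_MISS/200' \
  '192 TCP_REFRESH_UNMODIFIED/304' \
  '1896 TCP_TUNNEL/200' |
awk -v total=5240 '{ printf "%-26s %5.1f%%\n", $2, 100 * $1 / total }'
```

For example, the TCP_TUNNEL/200 line comes out as 36.2% (Antony rounded down to 36%).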
Re: [squid-users] Optimezed???
hey guys, forgot-me? :(

2015-09-17 8:08 GMT-03:00 Jorgeley Junior:

> thank you all for the reply, here is the result of the command:
>
>    1 TAG_NONE/500
>  290 TAG_NONE/503
>   10 TAG_NONE_ABORTED/000
>    4 TCP_CLIENT_REFRESH_MISS/200
>  368 TCP_DENIED/403
> 1421 TCP_DENIED/407
>    5 TCP_HIT/200
>    7 TCP_HIT_ABORTED/000
>    7 TCP_IMS_HIT/200
>   39 TCP_IMS_HIT/304
>    1 TCP_MEM_HIT/200
>  680 TCP_MISS/200
>   39 TCP_MISS/204
>    1 TCP_MISS/206
>    9 TCP_MISS/301
>   30 TCP_MISS/302
>   70 TCP_MISS/304
>    8 TCP_MISS/404
>   29 TCP_MISS/416
>    1 TCP_MISS/500
>    3 TCP_MISS/503
>   16 TCP_MISS_ABORTED/000
>    4 TCP_MISS_ABORTED/200
>    1 TCP_MISS_ABORTED/206
>   56 TCP_REFRESH_MODIFIED/200
>    1 TCP_REFRESH_MODIFIED/416
>   38 TCP_REFRESH_UNMODIFIED/200
>  192 TCP_REFRESH_UNMODIFIED/304
>    3 TCP_SWAPFAIL_MISS/200
>   10 TCP_SWAPFAIL_MISS/304
> 1896 TCP_TUNNEL/200
>
> 2015-09-17 2:12 GMT-03:00 Amos Jeffries:
>
>> On 17/09/2015 8:55 a.m., Eliezer Croitoru wrote:
>> > Try to run this on your access.log:
>> > cat /var/log/squid/access.log | gawk '{print $4}' | sort | uniq -c
>> >
>> > This should show a list of all the cases, which includes the 304 status code.
>> > If you can post the results, there might be another side to the
>> > whole story in the output.
>> >
>> > Eliezer
>>
>> Yes, that should clarify the story a bit. As would the Squid version
>> details.
>>
>> What is clear is that over 60% of the traffic, by both count and volume,
>> is neither HIT nor MISS. The graphing / analysis tool does not account
>> for TUNNEL or REFRESH transactions, which can happen in HTTP/1.1.
>>
>> Amos
Re: [squid-users] user agent
i personally hate using !acl ... it's the easiest way, in my opinion, of getting into trouble and getting things to NOT work the way you want them to. i always prefer to replace it with 4-5 other 'normal' rules rather than using !acl

On 18/09/15 06:32, joe wrote:

> hi i need to have 3 user-agent replacements and it's not working. example:
>
> acl brs browser -i Mozilla.*Window.*
> acl phone-brs browser -i Mozilla.*(Android|iPhone|iPad).*
>
> request_header_access User-Agent deny brs !phone-brs
> request_header_replace User-Agent Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101
>
> request_header_access User-Agent deny phone-brs !brs
> request_header_replace User-Agent Mozilla/5.0 (Android; iPhone; Mobile;) Gecko/18.0

--
Atenciosamente / Sincerely,

Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it
Re: [squid-users] user agent
i tried it with and without them - no go
Re: [squid-users] user agent
try without putting !brs in the second one, and without putting !phone-brs in the 1st one
Re: [squid-users] user agent
like what?
Re: [squid-users] Is it possible to send the connection, starting with the CONNECT, to cache-peer?
On 18.09.15 21:22, Matus UHLAR - fantomas wrote:

> from earlier e-mail:
>
>> acl tor_url url_regex "C:/Squid/etc/squid/url.tor"
>
> On 17.09.15 18:47, Yuri Voinov wrote:
>> acl NoSSLIntercept ssl::server_name_regex -i localhost \.icq\.* kaspi\.kz
>> ssl_bump splice NoSSLIntercept
>
>> # Privoxy+Tor access rules
>> never_direct allow tor_url
>
>> cache_peer_access 127.0.0.1 allow tor_url
>
> I wonder if the never_direct and cache_peer_access should not use the same
> acl as "ssl_bump splice".
> Also, the regex \.icq\.* will apparently never match, there should be
> "\.icq\..*" or simply "\.icq\."

This matches ICQ.COM HTTP over port 443.

> ...the regex should match inside the server_name, correct?
> in such case apparently kaspi\.kz should be "kaspi\.kz$"

No. This must match kaspi\.kz.* And it does match.
Re: [squid-users] Is it possible to send the connection, starting with the CONNECT, to cache-peer?
from earlier e-mail:

> acl tor_url url_regex "C:/Squid/etc/squid/url.tor"

On 17.09.15 18:47, Yuri Voinov wrote:

> acl NoSSLIntercept ssl::server_name_regex -i localhost \.icq\.* kaspi\.kz
> ssl_bump splice NoSSLIntercept
>
> # Privoxy+Tor access rules
> never_direct allow tor_url
>
> cache_peer_access 127.0.0.1 allow tor_url

I wonder if the never_direct and cache_peer_access should not use the same acl as "ssl_bump splice".

Also, the regex \.icq\.* will apparently never match, there should be "\.icq\..*" or simply "\.icq\."

...the regex should match inside the server_name, correct? In such case apparently kaspi\.kz should be "kaspi\.kz$"

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
If Barbie is so popular, why do you have to buy her friends?
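The disputed pattern can be checked outside Squid with grep -E (a sketch; POSIX ERE here approximates Squid's ssl::server_name_regex matching, and the hostnames are made-up examples). In '\.icq\.*' the '*' binds to the escaped dot, i.e. "a literal dot, then icq, then zero or more literal dots" - so it does match names containing '.icq', but it is looser than '\.icq\..*', which requires a dot right after 'icq':

```shell
# The loose pattern matches a real icq.com hostname...
echo 'login.icq.com'    | grep -Eq '\.icq\.*'  && echo 'loose: matches login.icq.com'
# ...but it also matches lookalike names that were probably not intended:
echo 'www.icqspoof.com' | grep -Eq '\.icq\.*'  && echo 'loose: matches www.icqspoof.com too'
# The stricter pattern requires ".icq." and rejects the lookalike:
echo 'www.icqspoof.com' | grep -Eq '\.icq\..*' || echo 'strict: rejects www.icqspoof.com'
echo 'login.icq.com'    | grep -Eq '\.icq\..*' && echo 'strict: still matches login.icq.com'
```

So both posters are partly right: the loose regex does match icq.com names (it does not "never match"), but tightening it to '\.icq\..*' or '\.icq\.' removes false positives.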