[squid-users] How does squid behave when caching really large files (GBs)
Hello everyone, I currently have a server which stores many terabytes of rather static files, each one tens of gigabytes in size. Right now, these files are only accessed through a local connection, but in some time this is going to change. One option to make the access acceptable is to deploy new servers at the places that will access these files the most. Each new server would keep a copy of the most accessed files so that only a LAN connection is needed, instead of wasting bandwidth on external access. I'm considering almost any solution for these new hosts, and one of them is just using a cache tool like squid to make the downloads faster, but as I haven't seen anyone caching files this big, I would like to know which problems I may run into if I adopt this kind of solution. The alternatives I've considered so far include using a distributed file system such as Hadoop, deploying a private cloud storage system to communicate between the servers, or even using BitTorrent to share the files among servers. Any comments on these alternatives too? Thank you all, Thiago Moraes - EnC 07 - UFSCar
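Should you try the Squid route, note that a stock Squid will not cache objects this large at all: the defaults cap cacheable object size at a few megabytes. A minimal squid.conf sketch of the directives involved (the sizes and cache path are illustrative assumptions, not tuned recommendations):

```
# Allow multi-GB objects into a large disk cache
cache_dir aufs /var/spool/squid 512000 16 256    # ~500 GB of disk cache
maximum_object_size 40 GB                        # default is only a few MB
maximum_object_size_in_memory 512 KB             # keep huge files out of the RAM cache
range_offset_limit -1                            # fetch whole objects even on range requests
```

Whether a single cache_dir copes gracefully with tens-of-GB objects is exactly the kind of thing worth load-testing before deployment.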
Re: [squid-users] SECURITY ALERT: Host: header forgery detected with today's BZR checkout
* Amos Jeffries squ...@treenet.co.nz: On 15/08/11 23:52, Ralf Hildebrandt wrote: With today's BZR checkout (3.2-HEAD) I'm getting a lot of SECURITY ALERT: Host: header forgery detected with everyday requests: 2011/08/15 13:50:59.016| SECURITY ALERT: Host: header forgery detected from local=141.42.1.205:8080 remote=10.43.65.227:3266 FD 1312 flags=1 (amsprd0104.outlook.com:443 does not match amsprd0104.outlook.com) We now forcibly detect CVE-2009-0801 vulnerability abuse. A few cases have been found missing from the detection. Please apply these two patches in this order: http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11647.patch http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11649.patch I tried to apply them both but: # patch -p1 < ../squid-3-11647.patch patching file ClientRequestContext.h Hunk #1 FAILED at 27. 1 out of 1 hunk FAILED -- saving rejects to file ClientRequestContext.h.rej patching file client_side_request.cc Hunk #1 FAILED at 546. Hunk #2 FAILED at 620. Hunk #3 FAILED at 638. 3 out of 3 hunks FAILED -- saving rejects to file client_side_request.cc.rej -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebra...@charite.de | http://www.charite.de
[squid-users] Skip IP6 addresses
Hello, Is it possible to set up squid 3.1.6 so that it won't use IPv6 addresses when proxying requests? Specifically, I can't access http://packages.debian.org because squid tries to do so with an IPv6 address and I have only an IPv4 connection to the Internet: The following error was encountered while trying to retrieve the URL: http://packages.debian.org/search? Connection to 2001:648:2ffc:deb:214:22ff:feb2:17e8 failed. The system returned: (110) Connection timed out The remote host or network may be down. Please try the request again. However packages.debian.org has IPv4 addresses as well: # host packages.debian.org packages.debian.org has address 87.106.64.223 packages.debian.org has address 128.31.0.49 packages.debian.org has address 194.177.211.202 packages.debian.org has IPv6 address 2001:8d8:81:1520::1 packages.debian.org has IPv6 address 2001:648:2ffc:deb:214:22ff:feb2:17e8 packages.debian.org mail is handled by 10 powell.debian.org. -- Alexei
Re: [squid-users] SECURITY ALERT: Host: header forgery detected with today's BZR checkout
On 16/08/11 20:37, Ralf Hildebrandt wrote: * Amos Jeffries: On 15/08/11 23:52, Ralf Hildebrandt wrote: With today's BZR checkout (3.2-HEAD) I'm getting a lot of SECURITY ALERT: Host: header forgery detected with everyday requests: 2011/08/15 13:50:59.016| SECURITY ALERT: Host: header forgery detected from local=141.42.1.205:8080 remote=10.43.65.227:3266 FD 1312 flags=1 (amsprd0104.outlook.com:443 does not match amsprd0104.outlook.com) We now forcibly detect CVE-2009-0801 vulnerability abuse. A few cases have been found missing from the detection. Please apply these two patches in this order: http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11647.patch http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11649.patch I tried to apply them both but: # patch -p1 < ../squid-3-11647.patch patching file ClientRequestContext.h Hunk #1 FAILED at 27. 1 out of 1 hunk FAILED -- saving rejects to file ClientRequestContext.h.rej patching file client_side_request.cc Hunk #1 FAILED at 546. Hunk #2 FAILED at 620. Hunk #3 FAILED at 638. 3 out of 3 hunks FAILED -- saving rejects to file client_side_request.cc.rej Sorry, looks like you sync'd them in from 3.2 before applying. FWIW the Firefox CONNECT case was fixed a few hours ago now too. I've had confirmation that one works and just ported it back to 3.2 right now. Should be available to you soon. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
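For anyone replaying this: the changeset files are unified diffs meant to be fed to patch(1) on stdin from the top of the matching source tree, ideally with a --dry-run first so a failing hunk leaves no half-applied tree. A self-contained demonstration with a throwaway file (the file and patch names here are illustrative, not the actual squid changesets):

```shell
set -e
cd "$(mktemp -d)"                     # scratch directory

printf 'hello old world\n' > file.txt        # stand-in for the source tree
printf 'hello new world\n' > wanted.txt      # desired content after patching
diff -u file.txt wanted.txt > demo.patch || true   # diff exits 1 when files differ
rm wanted.txt

patch --dry-run file.txt demo.patch   # verify the hunks apply before touching anything
patch file.txt demo.patch             # apply for real; file.txt now has the new text
cat file.txt
```

Hunks FAIL, as in Ralf's transcript, when the tree the patch was made against differs from the tree it is applied to; the .rej files hold the hunks that could not be placed.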
Re: [squid-users] Skip IP6 addresses
On 16/08/11 22:56, Alexei Ustyuzhaninov wrote: Hello, Is it possible to set up squid 3.1.6 so that it won't use IPv6 addresses when proxying requests? Please try the 3.1 package from the Debian Wheezy/Testing repository. It has fixes for several bugs with this symptom. Or if you have a static IP you may want to install the miredo package alongside Squid and watch yourself using IPv6 websites. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] Skip IP6 addresses
On 16.08.2011 17:09, Amos Jeffries wrote: On 16/08/11 22:56, Alexei Ustyuzhaninov wrote: Hello, Is it possible to set up squid 3.1.6 so that it won't use IPv6 addresses when proxying requests? Please try the 3.1 package from the Debian Wheezy/Testing repository. It has fixes for several bugs with this symptom. Or if you have a static IP you may want to install the miredo package alongside Squid and watch yourself using IPv6 websites. Thanks Amos. I've upgraded the squid3 package to version 3.1.14-1, but that didn't help. BTW do you think this is a debian bug? If so I will report it there. As for miredo and other IPv6-tunneling solutions, they seem like overkill for my simple task. I would rather switch to another proxy. -- Alexei
Re: [squid-users] Squid log : source from x_forwarded_for field
Hello Amos, thank you for your answer. I added the follow_x_forwarded_for allow localhost line and it did what I wanted. With regard to the security warnings, I am OK with them as all users have the same acl. Regards, Hugo On 12 August 2011 15:23, Amos Jeffries squ...@treenet.co.nz wrote: On 13/08/11 00:47, Hugo Deprez wrote: Dear community, I am trying to configure dansguardian with squid3. I am running debian squeeze. Everything is working, but I am trying to get the real source IP into squid's access.log file. I configured forwardedfor = on in dansguardian.conf. When I check the access.log file, I only see 127.0.0.1 as the source of the request. I did a network packet capture and found the X-Forwarded-For field looked like: http.x_forwarded_for == 192.168.200.1, 127.0.0.1 In squid.conf I used the following log configuration: logformat combined %a %a %A %p %la %lp %ui %un [%{%d/%b/%Y:%H:%M:%S +}tl] %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h %Ss:%Sh access_log /var/log/squid3/access.log combined But %a still returns 127.0.0.1. So is there a way to change the behaviour in order to show the real IP address? log_uses_indirect_client on Or is there a way to hide source 127.0.0.1? You define in squid.conf that 127.0.0.1 has a proxy you *trust* not to lie to you in its XFF header. Please read the security warnings about follow_x_forwarded_for http://www.squid-cache.org/Doc/config/follow_x_forwarded_for/ follow_x_forwarded_for allow localhost NP: assuming that you still have the default localhost definition configured. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
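The whole fix from this thread condenses to two squid.conf lines (assuming the stock localhost ACL definition is still present, as Amos notes):

```
# Trust the X-Forwarded-For header from the local DansGuardian only
follow_x_forwarded_for allow localhost
# Log the indirect (real) client address instead of 127.0.0.1
log_uses_indirect_client on
```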
[squid-users] reading external acl from squid.conf
is there a way to have this acl bk src XX.XXX.XX.XX/32 acl bk src XXX.XX.XXX.XX/32 in an external file and have squid.conf reference it? -- http://alexus.org/
[squid-users] Re: Kerberos authentication and WMP.
Hi João Carlos, I tested this with Windows Media Player 11 and I do not have a problem authenticating against squid using Negotiate/Kerberos. See my exchange between WMP 11 and squid. Markus

GET http://www.jhepple.com/SampleMovies/niceday.wmv HTTP/1.1
Accept: */*
User-Agent: NSPlayer/11.0.5721.5262 WMFSDK/11.0
Accept-Encoding: gzip, deflate
Range: bytes=3836-
Unless-Modified-Since: Thu, 02 Dec 2010 03:59:26 GMT
Host: www.jhepple.com
Proxy-Connection: Keep-Alive

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.1.14-BZR
Mime-Version: 1.0
Date: Tue, 16 Aug 2011 19:53:25 GMT
Content-Type: text/html
Content-Length: 3555
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Proxy-Authenticate: Negotiate
Proxy-Authenticate: NTLM
X-Cache: MISS from opensuse11
X-Cache-Lookup: NONE from opensuse11:3128
Via: 1.0 opensuse11 (squid/3.1.14-BZR)
Connection: keep-alive

[standard Squid ERR_CACHE_ACCESS_DENIED error page (HTML/CSS body) elided]

GET http://www.jhepple.com/SampleMovies/niceday.wmv HTTP/1.1
Accept: */*
User-Agent: NSPlayer/11.0.5721.5262 WMFSDK/11.0
Accept-Encoding: gzip, deflate
Range: bytes=3836-
Unless-Modified-Since: Thu, 02 Dec 2010 03:59:26 GMT
Proxy-Authorization: Negotiate
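For context, the Proxy-Authenticate: Negotiate offer in the 407 above comes from a configuration along these lines; a sketch for a 3.1-era Squid, where the helper path and the service principal are assumptions to adapt to your own site:

```
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```

The client's follow-up request then carries the Kerberos ticket in a Proxy-Authorization: Negotiate header, exactly as in the second GET above.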
Re: [squid-users] reading external acl from squid.conf
of course !!! acl bk src "/path/to/your/file.txt" file.txt would be 192.168.1.2 192.168.2.38/32 192.168.20.0/24 10.8.0.0/16 (note the /32 is not needed; if / is not specified, it is automatically /32) and after modifying the .txt file, you'll have to issue the command squid -k reconfigure to ask squid to re-read external files On 16/08/11 14:18, alexus wrote: is there a way to have this acl bk src XX.XXX.XX.XX/32 acl bk src XXX.XX.XXX.XX/32 in a external file and have squid.conf reference to it? -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br My SPAM trap, do NOT email it: gertru...@solutti.com.br
[squid-users] about the cache and CARP
I want to set up Common Address Redundancy Protocol, or CARP, with the two squid 3.0.STABLE10 servers that I have, but I ran into this question: if the main Squid with 40 GB of cache shuts down for any reason, then the 2nd squid will start up but without any cache. Is there any way to synchronize both caches, so that when this happens the 2nd one starts with all the cache? Thanks again for all your help!!!
Re: [squid-users] reading external acl from squid.conf
tried that but made a syntax error so it didn't work; tried it again using the right syntax and it works like a charm! thanks! On Tue, Aug 16, 2011 at 4:20 PM, Leonardo Rodrigues leolis...@solutti.com.br wrote: of course !!! acl bk src "/path/to/your/file.txt" file.txt would be 192.168.1.2 192.168.2.38/32 192.168.20.0/24 10.8.0.0/16 (note the /32 is not needed; if / is not specified, it is automatically /32) and after modifying the .txt file, you'll have to issue the command squid -k reconfigure to ask squid to re-read external files On 16/08/11 14:18, alexus wrote: is there a way to have this acl bk src XX.XXX.XX.XX/32 acl bk src XXX.XX.XXX.XX/32 in a external file and have squid.conf reference to it? -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br My SPAM trap, do NOT email it: gertru...@solutti.com.br -- http://alexus.org/
Re: [squid-users] about the cache and CARP
Tue 2011-08-16 at 16:54 -0400, Carlos Manuel Trepeu Pupo wrote: I want to make Common Address Redundancy Protocol or CARP with two squid 3.0 STABLE10 that I have, but here I found this question: If the main Squid with 40 GB of cache shutdown for any reason, then the 2nd squid will start up but without any cache. Why would the second Squid start up without any cache? If you are using CARP then the cache is, in effect, distributed over the available caches, and the amount of cache you lose is proportional to the amount of cache space that goes offline. However, CARP routing in Squid-3.0 only applies when you have multiple levels of caches. Still doable with just two servers, but you then need two Squid instances per server: * Frontend Squids, doing in-memory caching and CARP routing to the Cache Squids * Cache Squids, doing disk caching When request routing is done 100% by CARP you lose 50% of the cache should one of the two cache servers go down. There are also possible hybrid models where the cache gets more duplicated among the cache servers, but I am not sure 3.0 can handle those. Regards Henrik
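Henrik's two-level layout can be sketched for one of the two servers roughly like this (the IPs and ports are hypothetical, and it assumes the null cache_dir store type is compiled into your build):

```
# Frontend instance: RAM cache only, CARP-routes misses across both
# disk-cache instances (one on this server, one on the peer).
cache_mem 512 MB
cache_dir null /tmp
cache_peer 192.168.0.10 parent 4001 0 carp
cache_peer 192.168.0.11 parent 4001 0 carp
never_direct allow all     # misses must go through a CARP parent
```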
Re: [squid-users] Skip IP6 addresses
On Tue, 16 Aug 2011 17:51:54 +0600, Alexei Ustyuzhaninov wrote: I've upgraded the squid3 package to version 3.1.14-1, but that didn't help. BTW do you think that is a debian bug? If yes I will report it there. Since it's not one of the common bugs we fixed already, it may be this one: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=593815 Amos
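For later readers on newer versions: Squid 3.2 and up added a directive for exactly this preference (it does not exist in the 3.1.x releases discussed above):

```
# squid.conf, Squid 3.2+: try IPv4 (A record) destinations before IPv6
dns_v4_first on
```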
Re: [squid-users] reading external acl from squid.conf
On Tue, 16 Aug 2011 13:18:49 -0400, alexus wrote: is there a way to have this acl bk src XX.XXX.XX.XX/32 acl bk src XXX.XX.XXX.XX/32 in a external file and have squid.conf reference to it? in squid.conf: acl bk src "/etc/squid/acls-file" then in /etc/squid/acls-file, one ACL value per line: XX.XXX.XX.XX/32 XX.XXX.XXX.XX/32 Two things to note: * The file is only loaded on reconfigure and startup. If you need something more dynamic or real-time, use external_acl_type to run a script. * The /32 is not needed and will be ignored by Squid. A-B style ranges are also accepted for odd ranges. Amos
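Putting the thread's answer together as a worked example (note the double quotes around the path, which tell Squid to read values from a file rather than treat the text as a literal value):

```
# squid.conf
acl bk src "/etc/squid/acls-file"
http_access allow bk
```

and /etc/squid/acls-file, one value per line (a bare address implies /32):

```
192.168.1.2
192.168.20.0/24
10.8.0.0/16
```

After editing the file, run squid -k reconfigure so Squid re-reads it.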
Re: [squid-users] reading external acl from squid.conf
On Wed, 17 Aug 2011 09:14:44 +0930, Brett Lymn wrote: On Wed, Aug 17, 2011 at 11:30:39AM +1200, Amos Jeffries wrote: If you need something more dynamic or real-time, use external_acl_type to run a script. Out of interest, what are the performance implications of doing this? Are the external_acl_type scripts like other helpers, forked at startup? Yes. Just like auth helpers, with the same types of impact as Basic auth when credentials are given. Performance on individual requests is directly slower by the amount of lookup time, as you would expect. Results are cached for some TTL value (configurable), so the impact varies with the number of permutations of the input format passed to the helper. Overall Squid speed sees very little impact: since requests are handled in parallel, one left waiting simply frees up CPU for others. But FD and RAM usage are related to total request handling time, so their requirements are relative to the response speed of the script. You don't want it taking many seconds/minutes to respond. Then there are the malloc implementations that explode virtual memory whenever fork() is run by a large in-RAM process like Squid. Amos
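As a concrete illustration, an external_acl_type helper is just a program that reads one request per line on stdin and answers OK or ERR per line on stdout. A minimal sketch in Python, where the allow-list, the %SRC-only input format, and the squid.conf hookup shown in the comment are all illustrative assumptions:

```python
import sys

# Hypothetical squid.conf hookup for this helper:
#   external_acl_type ip_check ttl=60 %SRC /usr/local/bin/ip_check.py
#   acl allowed_ips external ip_check
#   http_access allow allowed_ips

ALLOWED = {"192.168.1.2", "10.8.0.1"}    # illustrative allow-list


def decide(line: str) -> str:
    """Map one request line (first token = client IP) to OK or ERR."""
    tokens = line.split()
    return "OK" if tokens and tokens[0] in ALLOWED else "ERR"


def main() -> None:
    for line in sys.stdin:               # one lookup per line, until EOF
        sys.stdout.write(decide(line) + "\n")
        sys.stdout.flush()               # Squid blocks waiting for each reply


if __name__ == "__main__":
    main()
```

The ttl= option is what provides the result caching Amos mentions; a slow script still hurts, because each concurrent helper slot stays occupied for the full lookup time.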
Re: [squid-users] Squid mitigation of advanced persistent tracking
On Wed, 3 Aug 2011, Amos Jeffries wrote: On Tue, 2 Aug 2011 13:39:51 -0700 (PDT), John Hardin wrote: The analysis of the APT techniques used by Kissmetrics (at http://www.wired.com/epicenter/2011/07/undeletable-cookie/) is interesting if thin, and suggests one way that Squid might be leveraged to interfere with such tracking: deleting the Etag: header from request replies. /me bows head in shame Comments? All they are doing is a server-side browsing session. But unlike Cookies, ETag are usually shared between many clients simultaneously. Middleware like Squid is able to reply to them instead of contacting the origin site. Even creates new ones the origin is not aware of when compressing on the fly. Some more details are available in the more-academic paper: http://ashkansoltani.org/docs/respawn_redux.html One example in that paper: INITIAL REQUEST HEADER: GET /i.js HTTP/1.1 Host: i.kissmetrics.com INITIAL RESPONSE HEADER: Etag: Z9iGGN1n1-zeVqbgzrlKkl39hiY Expires: Sun, 12 Dec 2038 01:19:31 GMT Last-Modified: Wed, 27 Jul 2011 00:19:31 GMT Set-Cookie: _km_cid=Z9iGGN1n1-zeVqbgzrlKkl39hiY; expires=Sun, 12 Dec 2038 01:19:31 GMT;path=/; ...has the possibly useful signature of the Etag value appearing in a cookie being set. Any comments on the utility of writing an eCAP filter to block _that_ (to either strip the cookie or block the entire response)? Give up isn't helpful. :) -- John Hardin KA7OHZhttp://www.impsec.org/~jhardin/ jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C AF76 D822 E6E6 B873 2E79 --- USMC Rules of Gunfighting #4: If your shooting stance is good, you're probably not moving fast enough nor using cover correctly. --- 8 days until the 1932nd anniversary of the destruction of Pompeii
Re: [squid-users] Squid mitigation of advanced persistent tracking
On Tue, 16 Aug 2011 18:16:38 -0700 (PDT), John Hardin wrote: On Wed, 3 Aug 2011, Amos Jeffries wrote: On Tue, 2 Aug 2011 13:39:51 -0700 (PDT), John Hardin wrote: The analysis of the APT techniques used by Kissmetrics (at http://www.wired.com/epicenter/2011/07/undeletable-cookie/) is interesting if thin, and suggests one way that Squid might be leveraged to interfere with such tracking: deleting the Etag: header from request replies. /me bows head in shame Comments? All they are doing is a server-side browsing session. But unlike Cookies, ETag are usually shared between many clients simultaneously. Middleware like Squid is able to reply to them instead of contacting the origin site. Even creates new ones the origin is not aware of when compressing on the fly. Some more details are available in the more-academic paper: http://ashkansoltani.org/docs/respawn_redux.html One example in that paper: INITIAL REQUEST HEADER: GET /i.js HTTP/1.1 Host: i.kissmetrics.com INITIAL RESPONSE HEADER: Etag: Z9iGGN1n1-zeVqbgzrlKkl39hiY Expires: Sun, 12 Dec 2038 01:19:31 GMT Last-Modified: Wed, 27 Jul 2011 00:19:31 GMT Set-Cookie: _km_cid=Z9iGGN1n1-zeVqbgzrlKkl39hiY; expires=Sun, 12 Dec 2038 01:19:31 GMT;path=/; ...has the possibly useful signature of the Etag value appearing in a cookie being set. Any comments on the utility of writing an eCAP filter to block _that_ (to either strip the cookie or block the entire response)? Give up isn't helpful. :) Could be useful. Up to you. This particular case comes under Middleware like Squid is able to reply to them instead of contacting the origin site. ** Object will clearly never expire, therefore no need to contact the origin (or tracker) until 2038. Unless the client request explicitly contains no-cache or max-age=0 to force immediate revalidation. ** No indication that the response was customized. Therefore it may be sent in response to arbitrary clients for the same object _by URL alone_. 
Also may be sent in response to client revalidations of _any_ ETag value which was older. If that is actually being used in practice I would seriously doubt any claims the tracker makes about their data accuracy, particularly regarding Asia-Pacific region data where cache farms are popular speed boosters. It would need Expires in the past or Cache-Control values to prevent caching. In which case ETag is safe to drop along with the Cookie. :) If you want to go the route of creating a filter, IMO it would be most effective to calculate the MD5 or SHA1 of the body instance (avoiding range request responses, since there the body is not the whole object instance). Then record an index of object hash versus ETag values. If you see non-identical bodies using one ETag, or vice versa, the origin is broken (and this tracking technique is regarded as such). Looks like KISSmetrics have officially given up the arms race anyway. As of 29th July. Amos
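Amos's hash-versus-ETag bookkeeping can be sketched in a few lines of Python (the class and its names are purely illustrative of the idea, not part of any Squid or eCAP API):

```python
import hashlib
from collections import defaultdict


class EtagIndex:
    """Track which ETags have been seen for which response bodies."""

    def __init__(self) -> None:
        self.hash_to_etags = defaultdict(set)   # body hash -> ETags seen
        self.etag_to_hashes = defaultdict(set)  # ETag -> body hashes seen

    def record(self, etag: str, body: bytes) -> bool:
        """Record one (ETag, body) pair.

        Returns False once an ETag maps to multiple bodies or a body
        to multiple ETags -- the per-client-ETag tracking signature.
        """
        digest = hashlib.sha1(body).hexdigest()
        self.hash_to_etags[digest].add(etag)
        self.etag_to_hashes[etag].add(digest)
        return (len(self.hash_to_etags[digest]) == 1
                and len(self.etag_to_hashes[etag]) == 1)
```

A filter built on this would skip range-request responses, as noted above, since their bodies are not the whole object instance.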
[squid-users] Mikrotik router with squid tproxy
Hi all, Currently I have a requirement to configure squid with the tproxy feature behind a Mikrotik RouterOS router. If I configure policy routing on the Mikrotik router to pass port 80 traffic to the squid box, do I need to use the tproxy feature in that case, or not? Thanks, Benjo
Re: [squid-users] Mikrotik router with squid tproxy
It is easy to forward port 80 traffic to Squid using Mikrotik; it can be done by going to IP -> Firewall -> NAT and adding a new dst-nat rule. Or you can just use your cache server as the gateway for the Mikrotik router. On Wed, Aug 17, 2011 at 8:16 AM, Benjamin benjo11...@gmail.com wrote: Hi ALL, Currently i have a requirement to configure squid for tproxy feature with microtik router os.If i will configure policy routing in microtik router for port 80 traffic pass to squid box, in that case , do i use tproxy feature or ? Thanks, Benjo
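One clarification worth adding: a Mikrotik dst-nat rule is plain NAT interception, which rewrites the client address, while tproxy exists precisely to preserve it. If spoofing the original client IP is the goal, the squid box itself needs the standard Linux TPROXY setup alongside an http_port ... tproxy line in squid.conf. A sketch of the usual recipe (the 3129 port number is an assumption; this requires root and TPROXY kernel/iptables support):

```
# Policy routing: deliver marked packets locally
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Divert packets for established sockets, TPROXY new port-80 flows to Squid
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
```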