Re: [squid-users] Squid not keeping authenticated NTLM session open
On 15/05/2012 5:28 p.m., infernalis wrote:
> Hi all, I'm having considerable trouble getting Squid to work well with NTLM/Kerberos and was hoping someone here would be able to help. My ultimate goal is to be able to connect to an IIS server through Squid using a computer that is not a member of the AD domain. I would like to enter my credentials once to the proxy, and then have Squid save the authentication token in order to use it against other servers that require authentication.

Token re-use in this form is not what happens in NTLM. It uses a code specific to the TCP connection and a hash.

> The problem I'm facing is that no matter what I've tried, I'm forced to authenticate manually six times while loading sites requiring authentication. This is much worse than the behavior prior to adding Squid.

6 times is a problem. You should be asked at most once. But there is some software (IE primarily) which is known to ask for manual authentication when it should not need to.

> First, is it possible for Squid to cache the credentials and then authenticate on behalf of the client to an upstream server? If this isn't the best way to go about doing this, what would you suggest?

Squid *does* cache the credentials, in the specific way that NTLM requires. Re-using the same credentials for other TCP connections out of a normal cache would cause a major security vulnerability with NTLM.

> Second, what could be the problem with my configuration? I'm running Squid 3.1.10.

Please try an upgrade; 3.1.19 is current, 3.1.15 at oldest is recommended. The hacks disabling certain HTTP features in order to get NTLM to operate were improved incrementally across the 3.1 series, so the later the release you can get, the better NTLM will work. Up to a point. However, be aware this multiple-login behaviour is known to still occur with IE + Squid even in the latest releases. It is IE behaviour.

> Thanks in advance!
> Here is my current config:
>
> http_port 80 accel defaultsite=webservername connection-auth=on

Ah, so by "sites" for which login is failing you mean "http://webservername/". NP: NTLM is *not* a good protocol to use for website authentication over the general Internet. It is extremely fragile, resource intensive, and not supported by most of the software spread through the middle of the Internet.

> cache_peer x.x.x.x parent 80 0 no-query login=PASS originserver connection-auth=on name=serv
> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 10
> auth_param ntlm keep_alive on

Turning this one off might help reduce your popups. It does not disable connection persistence, but enables a hack to get around some of the IE multiple-popup behaviour.

> auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
> auth_param basic children 5
> auth_param basic realm Domain Proxy Server
> auth_param basic credentialsttl 2 hours
> auth_param basic casesensitive off
> acl auth proxy_auth REQUIRED
> http_access allow auth
> http_access deny all

At this "deny all" any following http_access lines are ignored.

> acl our_sites dstdomain webservername
> proxy_auth REQUIRED
> client_persistent_connections on
> server_persistent_connections on
> debug_options ALL,2
> http_access allow our_sites
> cache_peer_access serv allow our_sites
> cache_peer_access serv deny all

Amos
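As a sketch of the ordering issue flagged above (illustrative only; the ACL names are taken from the posted config, and this is not guaranteed to be the whole fix): http_access rules are evaluated top to bottom and stop at the first match, so the catch-all deny has to come after every allow rule.

```
# Illustrative reordering: keep the catch-all deny as the LAST http_access
# line so the our_sites rule is actually evaluated.
acl our_sites dstdomain webservername
acl auth proxy_auth REQUIRED

http_access allow our_sites auth    # both ACLs must match (ANDed)
http_access deny all                # nothing below this line is consulted

cache_peer_access serv allow our_sites
cache_peer_access serv deny all
```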
Re: [squid-users] problem with logging to mysql
On 15/05/2012 5:12 p.m., Jan Malaník wrote:
> 2012/5/15 Amos Jeffries:
>> On 15/05/2012 1:56 a.m., Jan Malaník wrote:
>>> Good day, I have a problem with logging to mysql. I tried this configuration:
>>>
>>> logformat squid %tl;%>a;%>A;%ru;%un;%Ss
>>> access_log daemon:/127.0.0.1/report/surf/squid/squid squid
>>> logfile_daemon /usr/local/sbin/log_mysql_daemon.pl
>>>
>>> But during start it wrote:
>>> /etc/init.d/squid3 start
>>> Starting Squid HTTP Proxy 3.x: squid3Creating Squid HTTP Proxy 3.x cache structure ... (warning).
>>> 2012/05/14 15:16:58| cache_cf.cc(363) parseOneConfigFile: squid.conf:2309 unrecognized: 'logfile_daemon'
>>>
>>> Why does this happen? Then I found the log_db_daemon directive, but I don't know how to configure it. Please can someone help me?
>>
>> You need squid 2.7 or 3.2+ to use logfile_daemon. It looks like you are running a packaged Squid from Debian or a derivative, which would be 3.1 series Squid, not 3.2 yet.
>> Amos
>
> Yes, I use Debian stable. Do you mean that in versions other than 2.7 or 3.2 there isn't a logfile_daemon directive?

Yes.

> Please can you explain how to use log_db_daemon (http://www1.it.squid-cache.org/Versions/v3/3.HEAD/manuals/log_db_daemon)? Is it possible to use it?

Only in a version which supports logfile_daemon. For older versions you can apparently use squidtaild to send the log elsewhere. I've not used it myself though, so YMMV.

Amos
Re: [squid-users] Squid load balancing access log
On 15/05/2012 06:29, Ibrahim Lubis wrote:
> Squid guru, I do load balancing of 2 CentOS servers with ucarp and haproxy, with cache peering all squid servers as siblings. I use squid for caching. The problem is that every log line I see in the access log file shows the IP of the squid cache, not of the user who requested web access. Before I did load balancing, with only one squid box, in the access log file I saw the IP of the user who requested web access. Thx

If you post some more info and your squid.conf for the network structure, it will be easier to understand what is causing this.

In what mode are you using the proxies? tproxy? intercept? plain forward?

Is the haproxy server sitting in front of the cache proxy servers and doing the load balancing?

On which server's access log are you not getting the client IP?

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer ngtech.co.il
Re: [squid-users] Squid load balancing access log
On 15/05/2012 3:29 p.m., Ibrahim Lubis wrote:
> Squid guru, I do load balancing of 2 CentOS servers with ucarp and haproxy, with cache peering all squid servers as siblings. I use squid for caching. The problem is that every log line I see in the access log file shows the IP of the squid cache, not of the user who requested web access. Before I did load balancing, with only one squid box, in the access log file I saw the IP of the user who requested web access. Thx

Your understanding of the logged information is incorrect. What your server is logging is the *client* it received the request from. Client and user are two different things. Squid is the server's client.

Previously you had some configuration set up to pass Squid's client IP through to the web server and log that instead of the web server's client (the Squid IP).

Squid with the "forwarded_for on" directive updates the X-Forwarded-For header with its client's IP. The web server needs to process that header in order to retrieve the furthest downstream IP it can trust.

NP: the client "IP" produced by this may be one IP or a whole list of addresses, possibly the word "unknown", and is increasingly likely these days to be a different IPv4/IPv6 version than the web server expects. Be sure your server can handle any of those cases.

Amos
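To make the NP above concrete, here is a small sketch (my own illustration, not from the thread) of the parsing a log processor needs when it consumes X-Forwarded-For values: comma-separated lists, the literal token "unknown", and mixed IPv4/IPv6 literals all occur in the wild.

```python
import ipaddress
from typing import Optional

def furthest_client(xff: str) -> Optional[str]:
    """Return the first parseable IP in an X-Forwarded-For value.

    The header is a comma-separated list with the furthest-downstream
    client first.  Entries may be the word 'unknown', and may mix
    IPv4 and IPv6 literals, so each one must be validated.
    """
    for entry in xff.split(","):
        entry = entry.strip()
        try:
            return str(ipaddress.ip_address(entry))
        except ValueError:
            continue  # skip 'unknown' and other unparseable tokens
    return None

print(furthest_client("unknown, 192.0.2.7, 10.0.0.1"))  # 192.0.2.7
print(furthest_client("2001:db8::1, 192.0.2.7"))        # 2001:db8::1
```

Note that the left-most entry is also the easiest for a client to forge, which is why the reply above says "the furthest downstream IP it can trust" — a real deployment would walk the list from the right, discarding its own trusted proxies first.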
[squid-users] Unable to test HTTP PUT-based file upload via Squid Proxy
Hello,

I can upload a file to my Apache web server using Curl just fine:

echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put

However, if I put a Squid proxy server in between, then I am not able to:

echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put

Curl reports the following error:

*Note: The following error response was in HTML format, but I've removed the tags for ease of reading.*

ERROR: The requested URL could not be retrieved

While trying to retrieve the URL: http://WEB-SERVER/upload/sample.put

The following error was encountered:

    Unsupported Request Method and Protocol

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

Your cache administrator is root.

My `squid.conf` doesn't seem to have any ACL/rule that should disallow based on the `src` or `dst` IP addresses, the `protocol`, or the HTTP `method`...
**as I can do an `HTTP POST` just fine between the same client and the web server, with the same proxy sitting in between.**

In the failing `HTTP PUT` case, to see the request and response traffic that was actually occurring, I placed a `netcat` process in between Curl and Squid, and this is what I saw:

**Request:**

PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue

**Response:**

HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close

*Note: I have anonymized the IP addresses and server names throughout for readability reasons.*

*Note: I had posted this question on [StackOverflow also][1], but got no helpful response. Posting it here, in case people on StackOverflow are seeing this as a non-programming question and not taking interest.*

Regards,
/HS

[1]: http://stackoverflow.com/questions/10568655/unable-to-test-http-put-based-file-upload-via-squid-proxy
Re: [squid-users] Re: Expire time cache
Ok, so Squid will know when it is correct to delete old cache. Will this also work with music files and images?

By the size, did you mean this?

cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

Thank you

Lucas Coudures
Linux User #442566
Blog: http://lucas-coudures.blogspot.com/
"Dead is a matter of definition. Free software only dies when the last copy of the source code is erased."

On Mon, May 14, 2012 at 11:31 PM, RW wrote:
> On Mon, 14 May 2012 10:09:43 -0300, lucas coudures wrote:
>
>> Hello,
>> I installed squid 3.1 on a Debian Squeeze 2.6.32-5-686. I need to know
>> if there is a way to set the cache expiry time, or if squid already does
>> this.
>
> Squid maintains its caches, whether memory or disk, within the sizes
> you specified. If you need more or less retention then adjust the size.
AW: AW: [squid-users] Authentication problem
Image #1 appears to be a login box of some kind. Where is it coming from; the browser software or a web page?
>>> Browser

Image #2 appears to be an HTTP login which the browser is refusing to display a popup box for. Why is the browser not finding credentials somewhere or showing a popup?
>>> The popup shown in picture one doesn't appear. For some reason, some credentials are automatically used (maybe SSO) or some configuration blocks this login popup.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, 9 May 2012 03:53
To: squid-users@squid-cache.org
Subject: Re: AW: [squid-users] Authentication problem

On 09.05.2012 01:44, Fuhrmann, Marcel wrote:
> Hi Markus,
>
> sorry, but it doesn't work. :-(
>
> - Added this line in squid.conf
> - server squid3 reload
> - deleted IE cache, restarted IE and opened the website -> same error.

Err, yeah. Leaving the headers alone only works if one was already playing with erasing them in the first place. If someone else was erasing them in transit you need to kick them about the problems.

> Any other ideas?

Finding out what the problem actually is would be a better start.

Image #1 appears to be a login box of some kind. Where is it coming from; the browser software or a web page?

Image #2 appears to be an HTTP login which the browser is refusing to display a popup box for. Why is the browser not finding credentials somewhere or showing a popup?

Amos

> -Original Message-
> From: Markus Lauterbach
>
> Hi Marcel,
>
> you have to add a small piece to your config. I think it should look
> somehow like this:
>
> header_access Authorization allow all
>
> And restart your squid.
>
> Markus
>
>> -Original Message-
>> From: Fuhrmann, Marcel
>>
>> Hello,
>>
>> I am using 3.0.STABLE19-1ubuntu0.2 and I have a problem accessing a website.
>> Normally (without proxy) I get this window to log in:
>> http://ubuntuone.com/5fEJKKTenjJuAjJm9AJjSu
>>
>> With the proxy I get this error (German, but understandable):
>> http://ubuntuone.com/6zbxnmZevYWiDDqPMG24Um
>>
>> Can somebody give me advice?
>>
>> Thanks a lot!
>>
>> --
>> Marcel
[squid-users] delay pools with IP ranges
Hi, I need to create delay pools to do bandwidth control. How can I use IP ranges in acl src statements? Not netmask bits like /24. Tks in advance, Marlon
RE: [squid-users] IPv6 error prevents Squid start
Has anyone seen this error? Should I revert to 3.2.0.16? Please advise.

-Original Message-
From: Zill, Gregory (OMA-GIS)
Sent: Thursday, May 10, 2012 7:37 AM
To: 'Amos Jeffries'; squid-users@squid-cache.org
Subject: RE: [squid-users] IPv6 error prevents Squid start

Sorry for the lack of info:

Linux 32-bit 2.6.32-220.13.1.el6.i686 CentOS 6.2
Squid Cache: Version 3.2.0.17
configure options: --enable-ltdl-convenience - no patching

From ifcfg-eth0:
...
IPV4_FAILURE_FATAL=yes
IPV6INIT=no

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Thursday, May 10, 2012 7:34 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] IPv6 error prevents Squid start

On 11/05/2012 12:14 a.m., Zill, Gregory (OMA-GIS) wrote:
> I have some references from a google search, but no actual fix seen. Has anyone overcome this?
>
>    Address.cc(958) GetSockAddr: Ip::Address::GetSockAddr : Cannot convert non-IPv4 to IPv4. from [::]
>
> I appreciate your time.

That is the equivalent of failing to convert between upper and lower case letters 'A' and 'a'. There is absolutely no reason for it to fail. What Squid is this happening in? What operating system is it built for? And what has been patched into it?

Amos

This message contains information which may be confidential and privileged. Unless you are the intended recipient (or authorized to receive this message for the intended recipient), you may not use, copy, disseminate or disclose to anyone the message or any information contained in the message. If you have received the message in error, please advise the sender by reply e-mail, and delete the message. Thank you very much.
[squid-users] cache videos/bittorrent
Hi guys. I'd like to cache some p2p traffic like kazaa, emule, bittorrent, ares, and videos from Youtube, Netflix or services that use Adaptive Bitrate (ABR). Is this possible with Squid? Regards, MSC
RE: [squid-users] IPv6 error prevents Squid start
I get the same error when I revert to Squid 3.2.0.16.

squid[7989]: Address.cc(958) GetSockAddr: Ip::Address::GetSockAddr : Cannot convert non-IPv4 to IPv4. from [::]

Context from line 958 of Address.cc:

void Ip::Address::GetSockAddr(struct sockaddr_in &buf) const
{
    if ( IsIPv4() ) {
        buf.sin_family = AF_INET;
        buf.sin_port = m_SocketAddr.sin6_port;
        Map6to4( m_SocketAddr.sin6_addr, buf.sin_addr);
    } else {
        debugs(14, DBG_CRITICAL, HERE << "Ip::Address::GetSockAddr : Cannot convert non-IPv4 to IPv4. from " << *this );
        memset(&buf,0x,sizeof(struct sockaddr_in));
        assert(false);
    }

-Original Message-
From: Zill, Gregory (OMA-GIS)
Sent: Tuesday, May 15, 2012 2:49 PM
To: 'squid-users@squid-cache.org'
Cc: Amos Jeffries (squ...@treenet.co.nz)
Subject: RE: [squid-users] IPv6 error prevents Squid start

Has anyone seen this error? Should I revert to 3.2.0.16? Please advise.

-Original Message-
From: Zill, Gregory (OMA-GIS)
Sent: Thursday, May 10, 2012 7:37 AM
To: 'Amos Jeffries'; squid-users@squid-cache.org
Subject: RE: [squid-users] IPv6 error prevents Squid start

Sorry for the lack of info:

Linux 32-bit 2.6.32-220.13.1.el6.i686 CentOS 6.2
Squid Cache: Version 3.2.0.17
configure options: --enable-ltdl-convenience - no patching

From ifcfg-eth0:
...
IPV4_FAILURE_FATAL=yes
IPV6INIT=no

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Thursday, May 10, 2012 7:34 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] IPv6 error prevents Squid start

On 11/05/2012 12:14 a.m., Zill, Gregory (OMA-GIS) wrote:
> I have some references from a google search, but no actual fix seen. Has anyone overcome this?
>
>    Address.cc(958) GetSockAddr: Ip::Address::GetSockAddr : Cannot convert non-IPv4 to IPv4. from [::]
>
> I appreciate your time.

That is the equivalent of failing to convert between upper and lower case letters 'A' and 'a'. There is absolutely no reason for it to fail. What Squid is this happening in? What operating system is it built for?
And what has been patched into it?

Amos
[squid-users] Cache of port 443 with SSL Reverse Proxy
Hi,

- Is it possible to do caching of port 443 with an SSL reverse proxy?
- What is the advantage of doing SSL reverse proxy if squid does not cache SSL?
- Can you start two instances of squid, where the second instance has a squid.conf with https_port doing reverse proxy with SSL?

This is because the workstations in the environment here have two Firefox profiles (one common profile, and the other for a web application that uses port 443); for this reason two instances of squid would be deployed.

The question begs: if squid does not cache SSL as an SSL reverse proxy, what is the advantage of using squid as an SSL reverse proxy?

--
Sylvio
[squid-users] Transparent interception MTU issues
Hi,

I am accessing squid through a PPTP tunnel and have a lower MTU as a result. I am able to use squid fine as an explicit proxy; however, when trying transparent interception many pages time out and don't open. I guess this is because of MTU issues.

I have tried "http_port 3129 intercept disable-pmtu-discovery=always" but to no avail. I am using 3.2.0.17.

Any ideas?

Thanks
Daniel
Re: [squid-users] Unable to test HTTP PUT-based file upload via Squid Proxy
On 16.05.2012 00:39, Harry Simons wrote:
> **Request:**
>
> PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
> User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: WEB-SERVER
> Pragma: no-cache
> Accept: */*
> Proxy-Connection: Keep-Alive
> Transfer-Encoding: chunked
> Expect: 100-continue
>
> **Response:**
>
> HTTP/1.0 501 Not Implemented
> Server: squid/2.6.STABLE21
> Date: Sun, 13 May 2012 02:11:39 GMT
> Content-Type: text/html
> Content-Length: 1078
> Expires: Sun, 13 May 2012 02:11:39 GMT
> X-Squid-Error: ERR_UNSUP_REQ 0
> X-Cache: MISS from SQUID-PROXY-FQDN
> X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
> Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
> Proxy-Connection: close

Curl is attempting to use HTTP/1.1 features which 2.6 does not support (Expect: 100-continue, Transfer-Encoding: chunked), and is too old to even have proper workarounds for broken clients. Your request won't work due to these even if PUT was okay.

Please upgrade. squid-2.7/3.1 are still HTTP/1.0 but have some hacks to work around the HTTP/1.1 features curl is asking for. Squid-3.2 (beta) has HTTP/1.1 support.

Amos
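If upgrading the proxy is not immediately possible, one hedged client-side workaround (my own suggestion, not something tested in this thread) is to stop curl from using the two HTTP/1.1 features: upload from a real file instead of stdin so curl sends a Content-Length rather than a chunked Transfer-Encoding, and clear the Expect header.

```
# Sketch: upload from a temp file (known size => Content-Length, no chunked
# encoding) and suppress the "Expect: 100-continue" handshake.
echo "[$(date)] file contents." > /tmp/sample.put
curl -x http://SQUID-PROXY:3128 -H 'Expect:' -T /tmp/sample.put \
     http://WEB-SERVER/upload/sample.put
```

Passing `-H 'Expect:'` with an empty value is curl's documented way to remove a header it would otherwise add.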
Re: [squid-users] Re: Expire time cache
On 16.05.2012 00:41, lucas coudures wrote:
> Ok, so Squid will know when it is correct to delete old cache. Will this also work with music files and images?

Why should music and images be any different from other files? Squid is not displaying, playing or editing them.

> By the size, did you mean this?
>
> cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

Yes, that is 100 MB of cache. Consider: how many MB per second of HTTP traffic do you process?

Amos

> On Mon, May 14, 2012 at 11:31 PM, RW wrote:
>> On Mon, 14 May 2012 10:09:43 -0300, lucas coudures wrote:
>>> Hello,
>>> I installed squid 3.1 on a Debian Squeeze 2.6.32-5-686. I need to know
>>> if there is a way to set the cache expiry time, or if squid already does
>>> this.
>>
>> Squid maintains its caches, whether memory or disk, within the sizes
>> you specified. If you need more or less retention then adjust the size.
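For reference (a sketch with an assumed size, not a recommendation from the thread): in that cache_dir line the 100 is the cache size in megabytes, and 16/256 are the level-1/level-2 subdirectory counts, so growing the cache is just a matter of raising the first number.

```
# Illustrative: a 20 GB disk cache with the default directory layout.
cache_dir ufs /usr/local/squid/var/cache/squid 20000 16 256
```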
Re: [squid-users] Cache of port 443 with SSL Reverse Proxy
On 16.05.2012 09:20, Sylvio Cesar wrote:
> Hi,
>
> - Is it possible to do caching of port 443 with an SSL reverse proxy?

Yes.

> - What is the advantage of doing SSL reverse proxy if squid does not cache SSL?

SSL offloading away from the WWW server. When the WWW server is doing a lot of dynamic content generation, any reduction of CPU can benefit overall.

Plus all the other scaling advantages of reverse-proxy. There is nothing special about SSL reverse-proxy other than that the traffic arrives over a secure channel.

> - Can you start two instances of squid, where the second instance has a squid.conf with https_port doing reverse proxy with SSL?
>
> This is because the workstations in the environment here have two Firefox profiles (one common profile, and the other for a web application that uses port 443); for this reason two instances of squid would be deployed.

Yes. But why two Squid? One instance can do multiple input modes and is simpler to operate. The method the client uses to configure the proxy is largely irrelevant to the proxy.

AND, if there is a browser configured to use the proxy it is *NOT* a reverse-proxy. It is a forward-proxy. Reverse-proxy is when there are only DNS records pointing at a domain name serviced by the proxy pretending to be a web server.

> The question begs: if squid does not cache SSL as an SSL reverse proxy, what is the advantage of using squid as an SSL reverse proxy?

Question is irrelevant. Caching happens on all cacheable content. TLS/SSL does not determine cacheability.

Amos
Re: [squid-users] delay pools with IP ranges
On 16.05.2012 05:38, Marlon Bastida wrote:
> Hi, I need to create delay pools to do bandwidth control.

Have you considered the TOS or QoS functionality of Squid? It tends to work better than delay pools.

> How can I use IP ranges in acl src statements? Not netmask bits like /24.

Hmm. Question unrelated to delay pools. The ACL src and dst type syntax is: first-IP [ '-' last-IP ] [ '/' netmask ].

acl blah src 192.168.0.1-192.168.0.5/32

or

acl blah src 192.168.0.1-192.168.0.5

Amos
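To tie the two halves of the question together, a hedged sketch (the addresses and limits are made up for illustration) of a delay pool keyed on an IP-range ACL:

```
# Sketch: limit an address range to ~256 KB/s aggregate, bursting to 512 KB.
acl ranged_users src 192.168.0.10-192.168.0.50

delay_pools 1
delay_class 1 1                      # class 1 = a single aggregate bucket
delay_parameters 1 262144/524288     # restore-rate/max bytes for the aggregate
delay_access 1 allow ranged_users
delay_access 1 deny all
```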
Re: [squid-users] cache videos/bittorrent
On 16.05.2012 08:14, Mário Sérgio Candian wrote:
> Hi guys. I'd like to cache some p2p traffic like kazaa, emule, bittorrent, ares, and videos from Youtube, Netflix or services that use Adaptive Bitrate (ABR). Is this possible with Squid?
> Regards, MSC

Squid is an HTTP caching proxy. Squid is not a P2P, Torrent, or VoD cache. Squid is also not a multimedia transcoder.

Youtube and Netflix can be proxied and cached when they transfer over HTTP. But the ABR is ignored, and that can easily lead to bad client experiences.

Amos
Re: [squid-users] Cache of port 443 with SSL Reverse Proxy
Thanks Amos,

2012/5/15 Amos Jeffries:
> On 16.05.2012 09:20, Sylvio Cesar wrote:
>> Hi,
>>
>> - Is it possible to do caching of port 443 with an SSL reverse proxy?
>
> Yes.

Where can I find information about how to cache port 443 with an SSL reverse proxy?

>> - What is the advantage of doing SSL reverse proxy if squid does not cache SSL?
>
> SSL offloading away from the WWW server. When the WWW server is doing a lot of dynamic content generation, any reduction of CPU can benefit overall.
>
> Plus all the other scaling advantages of reverse-proxy. There is nothing special about SSL reverse-proxy other than that the traffic arrives over a secure channel.

>> - Can you start two instances of squid, where the second instance has a squid.conf with https_port doing reverse proxy with SSL?
>>
>> This is because the workstations in the environment here have two Firefox profiles (one common profile, and the other for a web application that uses port 443); for this reason two instances of squid would be deployed.
>
> Yes. But why two Squid? One instance can do multiple input modes and is simpler to operate. The method the client uses to configure the proxy is largely irrelevant to the proxy.

The second configuration would be to cache all SSL content of an internal application at my workplace.

> AND, if there is a browser configured to use the proxy it is *NOT* a reverse-proxy. It is a forward-proxy.
> Reverse-proxy is when there are only DNS records pointing at a domain name serviced by the proxy pretending to be a web server.
>
>> The question begs: if squid does not cache SSL as an SSL reverse proxy, what is the advantage of using squid as an SSL reverse proxy?
>
> Question is irrelevant. Caching happens on all cacheable content. TLS/SSL does not determine cacheability.
>
> Amos

Sylvio
Re: [squid-users] Cache of port 443 with SSL Reverse Proxy
On 16.05.2012 13:32, Sylvio Cesar wrote:
> Thanks Amos,
>
> 2012/5/15 Amos Jeffries:
>> On 16.05.2012 09:20, Sylvio Cesar wrote:
>>> - Is it possible to do caching of port 443 with an SSL reverse proxy?
>>
>> Yes.
>
> Where can I find information about how to cache port 443 with an SSL reverse proxy?

At the point Squid receives the traffic it gets unwrapped from SSL into plain HTTP. There is no special configuration needed for caching.

Amos
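For completeness, a hedged sketch of what the SSL reverse-proxy front-end itself looks like in squid.conf (the certificate paths, site name, and origin address are placeholders, not values from this thread); as noted above, the caching needs nothing extra once the traffic is unwrapped:

```
# Sketch only: terminate SSL at Squid, forward plain HTTP to the origin.
https_port 443 accel defaultsite=www.example.com \
    cert=/etc/squid/example.pem key=/etc/squid/example.key
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
cache_peer_access origin allow all
```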
Re: [squid-users] Transparent interception MTU issues
On 16.05.2012 09:53, Daniel Niasoff wrote:
> Hi,
> I am accessing squid through a PPTP tunnel and have a lower MTU as a result. I am able to use squid fine as an explicit proxy; however, when trying transparent interception many pages time out and don't open. I guess this is because of MTU issues.

Likely. But please check your guesses before looking for a fix to them.

ping -s 1499 ... PMTU response or lost packet?

> I have tried "http_port 3129 intercept disable-pmtu-discovery=always" but to no avail. I am using 3.2.0.17. Any ideas?

If it actually is MTU issues, fix them:
* Enable ICMP control messages to cross the network.
* Set MTU and/or MSS on the tunnel entrance to an appropriately low value.

Amos
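The second bullet is commonly done with MSS clamping on the Linux router carrying the tunnel; a hedged sketch (the interface name and MSS value are assumptions for a PPTP link, not details from this thread):

```
# Sketch: clamp TCP MSS on traffic forwarded out the PPTP interface so
# segments fit the tunnel MTU (1400 is an assumed, conservative value).
iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --set-mss 1400

# Or derive it from the discovered path MTU automatically:
iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
```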