[squid-users] Squid 3.5.6 Windows SquidTray crash
Unfortunately SquidTray still crashes with 3.5.6. This is on Server 2008 R2 x64 (as before). The minidump is shown below.

MarkJ

Description: Stopped working
Problem signature:
  Problem Event Name: CLR20r3
  Problem Signature 01: diladele.squid.tray.exe
  Problem Signature 02: 1.0.0.0
  Problem Signature 03: 559b843a
  Problem Signature 04: mscorlib
  Problem Signature 05: 2.0.0.0
  Problem Signature 06: 53a11de1
  Problem Signature 07: 123f
  Problem Signature 08: 5f
  Problem Signature 09: System.IO.FileNotFoundException
OS Version: 6.1.7601.2.1.0.272.7
Locale ID: 3081

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraries/CDNs Booster
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with Videos, Music, Images, Libraries and CDNs. New version 2.545, July 8th 2015: https://sourceforge.net/projects/squidvideosbooster/
- Apple Music - new!
- Google Music - new!
- and more ...

More details on https://svb.unveiltech.com

Enjoy
Bye Fred

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4672107.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] transparent proxy splice using dstdomain issue
On 8/07/2015 1:54 a.m., S.Kirschner wrote:
> Amos Jeffries wrote:
>> On 7/07/2015 11:45 p.m., S.Kirschner wrote:
>>> I think the issue exists because the reverse lookup does not return sparkasse.de. But why does it not use the hostname from the DNS request to the DNS server?
>> Because Squid is not a DNS server. The HTTP message details, including the URL that dstdomain comes from, are encrypted at the time you are trying to use the dstdomain ACL.
> Yes, but pfSense has a DNS server installed, so a DNS server is running on that host. I also tried using the Google DNS.

The location of the DNS resolver has nothing to do with how Squid (or DNS) operates.

> Here now the entries from the cache.log.
>
> With sparkasse.de in /etc/hosts:
> #2015/06/19 14:03:03.907 kid1| DomainData.cc(108) match: aclMatchDomainList: checking '212.34.69.3'
> #2015/06/19 14:03:03.907 kid1| DomainData.cc(113) match: aclMatchDomainList: '212.34.69.3' NOT found
> #2015/06/19 14:03:03.908 kid1| DomainData.cc(108) match: aclMatchDomainList: checking 'sparkasse.de'
> #2015/06/19 14:03:03.908 kid1| DomainData.cc(113) match: aclMatchDomainList: 'sparkasse.de' found

That is the rDNS host name from your hosts file for that IP. In DNS, hosts-file entries are authoritative and override any global registrations.
> #2015/06/19 14:03:03.908 kid1| Acl.cc(158) matches: checked: bypass = 1
> #2015/06/19 14:03:03.908 kid1| Acl.cc(158) matches: checked: (ssl_bump rule) = 1
> #2015/06/19 14:03:03.908 kid1| Acl.cc(158) matches: checked: (ssl_bump rules) = 1
>
> Without sparkasse.de in /etc/hosts:
> #2015/06/19 14:05:19.842 kid1| DomainData.cc(108) match: aclMatchDomainList: checking '212.34.69.3'
> #2015/06/19 14:05:19.842 kid1| DomainData.cc(113) match: aclMatchDomainList: '212.34.69.3' NOT found
> #2015/06/19 14:05:19.842 kid1| DomainData.cc(108) match: aclMatchDomainList: checking 'rev-212.34.69.3.rev.izb.net'
> #2015/06/19 14:05:19.842 kid1| DomainData.cc(113) match: aclMatchDomainList: 'rev-212.34.69.3.rev.izb.net' NOT found

That is the real host name registered in global rDNS for that IP. I assume you configured Squid to use the pfSense DNS resolver; that is the hostname it presents Squid with for that IP.

Note that domain name and host name are different concepts:
* one domain name DNS entry can point at multiple IPs,
* multiple domain names can point at one IP, but
* each IP's rDNS points at exactly one host name.

So, 212.34.69.3 is one of possibly many IPs for sparkasse.de. sparkasse.de is one of many names pointing at 212.34.69.3. But rev-212.34.69.3.rev.izb.net is the host name for 212.34.69.3 (which also means rev-212.34.69.3.rev.izb.net is the primary one of the many names pointing at 212.34.69.3).

Problem: since that IP has many domain names pointing at it, which one did the user look up in *forward* DNS to get to that IP address? They could just as easily have gone to https://rev-212.34.69.3.rev.izb.net/ as to https://sparkasse.de/, and the TCP connection would look identical to Squid.

When dealing with HTTP (not encrypted) the answer is to look at the HTTP message headers and see what they are requesting. dstdomain does that. BUT ... in HTTPS those headers are encrypted, and you are currently deciding whether or not it is appropriate to try to decrypt at all.
Meaning the HTTP URL domain used by dstdomain is unavailable, and thus dstdomain will not work properly.

> #2015/06/19 14:05:19.842 kid1| Acl.cc(158) matches: checked: bypass = 0
> #2015/06/19 14:05:19.842 kid1| Acl.cc(158) matches: checked: (ssl_bump rule) = 0

The proper solution for HTTPS is to use the correct ACL type (ssl::server_name), designed for use in your situation. That uses the non-encrypted TLS metadata, which provides the server hostname.

Despite popular myths, TLS is not end-to-end (user-to-origin). It is point-to-point (client-to-server) encryption, with maybe multiple hops along the way. The TLS server name metadata will only give you the hostname of the server the client was contacting. With SNI it is usually (but with no guarantee) the domain name. When SNI is not available it comes down to the TLS certificate SubjectName, which could just as easily be a TLS proxy or CDN service in front of the real server(s); in fact it usually states a whole list of alternative names, including wildcard patterns, of domain names the cert might be used for.

The definitions for Site, Domain name, host name (note the space), hostname, and X.509 SubjectName differ for good reasons. So do the ACL definitions.

HTH
Amos
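[As a concrete illustration of the ssl::server_name approach Amos describes, a minimal splice-by-SNI configuration might look like the sketch below. The acl names are invented for this example and the bank domain is taken from the thread; adapt both to your setup.]

```
# Illustrative squid.conf fragment (Squid 3.5): decide splice/bump
# from the TLS SNI rather than from rDNS of the destination IP.
acl step1 at_step SslBump1
acl banking ssl::server_name .sparkasse.de

ssl_bump peek step1     # read the ClientHello to learn the SNI
ssl_bump splice banking # tunnel matched sites without decryption
ssl_bump bump all       # decrypt the rest (policy-dependent)
```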
Re: [squid-users] Question about squid-3.5-13849.patch
Hi,

On 07-07-2015 11:05, Amos Jeffries wrote:
> On 8/07/2015 1:37 a.m., dweimer wrote:
>> System is running on FreeBSD 10.1-RELEASE-p14, using OpenSSL included in base FreeBSD.
> No, the change is automatic for all Squid built against an OpenSSL library that supports the library API option. If it is not working, then the library you are using probably does not support that option.
> AFAIK you need at least OpenSSL 0.9.8m for anything related to that vulnerability to be fixable. The latest 1.x libraries do not support the flag we use because they do the rejection internally without needing any help from Squid.

Unfortunately this seems not to be the case. I have installed FreeBSD 10.1-RELEASE-p14 in a VM for testing. Running "openssl version" reports: OpenSSL 1.0.1l-freebsd 15 Jan 2015. I was able to reproduce Dean's issue (renegotiation does not get disabled), but I have not been able to fix it so far.

For OpenSSL version comparison purposes, Debian wheezy (which the patch was able to harden) ships 1.0.1e. Debian jessie (which was already hardened out-of-the-box, without the patch) ships 1.0.1k. It is strange that FreeBSD's more recent OpenSSL version (1.0.1l) presents the issue.

The SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS define exists in FreeBSD's OpenSSL headers, the relevant code gets compiled into the squid executable, SSL_CTX_set_info_callback runs, but *the ssl_info_cb callback is never called* (I tested by inserting a debug message inside the #if defined, just after SSL_CTX_set_info_callback, and another one at the beginning of the callback).

Maybe we could try to adapt nginx's solution, but it does not seem to be trivial to do that in the current codebase: https://github.com/nginx/nginx/commit/70bd187c4c386d82d6e4d180e0db84f361d1be02

Best regards,
Paulo Matias
Re: [squid-users] Question about squid-3.5-13849.patch
On 07/08/2015 9:33 am, Paulo Matias wrote:
> [...]
> The SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS define exists in FreeBSD OpenSSL headers, the relevant code gets compiled in the squid executable, SSL_CTX_set_info_callback runs, but *the ssl_info_cb callback is never called*.
> Maybe we could try to adapt nginx's solution, but it does not seem to be trivial to do that in the current codebase: https://github.com/nginx/nginx/commit/70bd187c4c386d82d6e4d180e0db84f361d1be02

I also tried building against OpenSSL (1.0.2c 12 Jun 2015) from FreeBSD ports instead of from base. Still the same result.

--
Thanks,
Dean E. Weimer
http://www.dweimer.net/
Re: [squid-users] Fwd: Squid 3.5.5 automatically reload itself in 2h rhythm
The workaround in the mentioned 3.5.6 snapshot seems to solve these periodic restarts. Many thanks.

Tom

On Tue, Jul 7, 2015 at 10:48 AM, Amos Jeffries squ...@treenet.co.nz wrote:
> On 7/07/2015 4:27 p.m., Tom Tom wrote:
>> Hi
>> Opened a while ago, but no answer as to whether this problem is a (known) bug or is already solved in 3.5.6..?
> It's bug 4190. A workaround patch was developed a while ago, but it has some problems of its own and I forgot to apply it as a temporary workaround for the 3.5.6 cycle.
> See http://bugs.squid-cache.org/show_bug.cgi?id=4190#c28 for details.
> A new snapshot containing the workaround is building now and should be available in a few hours.
> Amos
[squid-users] Issue with Citrix sessions and squid
Dear all,

I would like to share a strange behavior. We have servers that host Citrix applications. Each Citrix server runs about 10 users/sessions, and each session runs browsers connected to squid 3.5.6 or 3.3.13.

After opening 10 tabs, the browsers generate errors about connections being broken or the connection being unavailable from the proxy, so subsequent tabs cannot be opened correctly. Both HTTP and HTTPS destination websites show this behavior. If we wait several seconds and refresh the browser tab that generated the error, the website can be opened. No Squid error page can be seen in the browser error.

- Using the same test without Squid (in direct mode) inside a Citrix session did *not* reproduce this issue.
- Using the same test on a physical machine did *not* reproduce this issue.
- Using the same test on a server without Citrix, inside a TSE session, did *not* reproduce this issue.
- Using the same test with Chrome, Internet Explorer and Firefox *reproduced* the issue.
- Using Squid without any ACL, without any caching system, with only one worker *reproduced* this issue.

This issue can only be reproduced on a server with Citrix installed, or inside a Citrix session, with Squid as the proxy. It seems that Squid refuses connections from a Citrix server.

Has anybody already reproduced/fixed this issue? Is there a Squid limitation on the number of opened browsers from one single IP (the Citrix server)?
Re: [squid-users] Issue with Citrix sessions and squid
Looks like a TCP/IP stack level issue.

On 09.07.15 0:26, David Touzeau wrote:
> We have servers that host Citrix applications. Each Citrix server runs about 10 users/sessions, and each session runs browsers connected to squid 3.5.6 or 3.3.13. After opening 10 tabs, the browsers generate errors about connections being broken or the connection being unavailable from the proxy.
> [...]
> Has anybody already reproduced/fixed this issue? Is there a Squid limitation on the number of opened browsers from one single IP (the Citrix server)?
[squid-users] Two questions about stored objects
Hello everyone,

I have been making some modifications (size, object max size) to some cache dirs and I have a couple of questions:

1) If I lower the maximum object size for a certain cache_dir and reconfigure (I did a squid -z without squid running), what happens to the files that no longer fit the cache_dir size limits but are already stored?

2) If I change the min and max times in refresh_patterns, what happens to the objects that are already stored? Were they stored with the old times, or are they going to be re-evaluated the next time they are requested by a user?

Thanks a lot
Sebastian
Re: [squid-users] Issue with Citrix sessions and squid
On 9/07/2015 7:01 a.m., David Touzeau wrote:
> Thanks Yuri,
> Any tips on how to increase the TCP/IP stack? Did you mean the TCP/IP stack on the Citrix server side, on the squid box, or both?

I'm thinking it's a problem related to TCP sockets. A rough estimate of 10 users x 10 tabs x 20 avg domains per page x 2 for Happy Eyeballs makes it somewhere up to 4k sockets in active use at any time. If the users are accessing sites with larger numbers of domains per page (e.g. Facebook has up to 100) that could be 20k concurrent sockets just from the browser. By the time that goes through Squid it becomes 40k, and if you have ICAP it becomes up to 80k (out of an available 64k sockets). Then there are all the OS background services that use HTTP through the proxy, etc.

Without Citrix the users' internal src-IPs vary, making 64k sockets available per user. That limit is harder to reach, and the browser silently limits itself when sockets start to run out.

Without Squid the Citrix connections go to N different domains with varying dst-IPs, which again raises the available port numbers per user.

If the assumption behind the above is right, you should be able to alleviate the problem by having Squid listen on multiple ports and/or IPs, then spreading the client connections out across those Squid ports.

Amos
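[For readers following the arithmetic, Amos's back-of-the-envelope figures can be reproduced as below. Every multiplier is an assumption quoted from his estimate, not a measured value.]

```python
# Rough socket-exhaustion estimate from the figures in this thread.
# All multipliers are assumptions, not measurements.
users = 10          # Citrix sessions per server
tabs = 10           # browser tabs per session
domains = 20        # average distinct domains fetched per page
happy_eyeballs = 2  # parallel IPv4/IPv6 connection attempts

browser_sockets = users * tabs * domains * happy_eyeballs
print("browser sockets:", browser_sockets)

# Every proxied request also opens a Squid->origin socket, and an
# ICAP service can roughly double the count again.
through_squid = browser_sockets * 2
with_icap = through_squid * 2
print("through squid:", through_squid, "with ICAP:", with_icap)

# All of this competes for a single ~64k port space when every
# client appears to come from the one Citrix server IP.
port_space = 2 ** 16
print("port space:", port_space)
```

With Amos's heavier assumption of 100 domains per page, the same formula gives the 20k/40k/80k figures he quotes, which is why a single shared source IP runs out of ports.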
Re: [squid-users] Question about squid-3.5-13849.patch
On 9/07/2015 2:33 a.m., Paulo Matias wrote:
> [...]
> The SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS define exists in FreeBSD OpenSSL headers, the relevant code gets compiled in the squid executable, SSL_CTX_set_info_callback runs, but *the ssl_info_cb callback is never called* (I tested by inserting a debug message inside the #if defined, just after SSL_CTX_set_info_callback, and another one at the beginning of the callback).

That would be a nasty bug in the FreeBSD OpenSSL then. (FreeBSD 10 is growing an annoying set of bugs: libpthreads not working, OS signals not working, now OpenSSL not working...)

> Maybe we could try to adapt nginx's solution, but it does not seem to be trivial to do that in the current codebase: https://github.com/nginx/nginx/commit/70bd187c4c386d82d6e4d180e0db84f361d1be02

They are using the same SSL_CTX_set_info_callback() mechanism we are to set the initial flag which triggers errors. If the callback itself is not being run, their fix will not work either.

Amos
Re: [squid-users] Two questions about stored objects
On 9/07/2015 5:17 a.m., Sebastian Goicochea wrote:
> Hello everyone,
> I have been making some modifications (size, object max size) to some cache dirs and I have a couple of questions:
>
> 1) If I lower the maximum object size for a certain cache_dir and reconfigure (I did a squid -z without squid running), what happens to the files that no longer fit the cache_dir size limits but are already stored?

Nothing. It only affects new objects being stored to disk.

There is one caveat, however. Sometimes objects get promoted from disk to memory caching. When those cycle back to disk they will not go to the original cache_dir, which may leave you with an undeleted but no-longer-indexed file on disk.

> 2) If I change the min and max times in refresh_patterns, what happens to the objects that are already stored? Were they stored with the old times, or are they going to be re-evaluated the next time they are requested by a user?

It has no effect except on active traffic. That does include "active" in the sense of being saved to disk, but not objects just sitting there already.

Overall, if you change settings like these, the state slowly migrates to the new values as cached content expires. But that is not fast; it could take minutes or weeks depending on your initial state. To make immediate administrative changes to on-disk AUFS/UFS/diskd cache content, use the squid-purge tool which is bundled with recent Squid versions.

Amos
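[For reference, the per-directory limit being discussed is the max-size option on cache_dir, working alongside the global maximum_object_size. A sketch with purely illustrative values:]

```
# Example values only. A 100 GB AUFS cache_dir whose objects may be
# at most 512 KB each; objects over that limit are not stored here.
cache_dir aufs /var/spool/squid 102400 16 256 max-size=524288

# Global ceiling applied before any cache_dir is chosen.
maximum_object_size 512 KB
```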
Re: [squid-users] Windows 10 Updates
> On 8/07/2015 1:57 a.m., Jasper Van Der Westhuizen wrote:
>> Hi list
>>
>> I have a problem with Windows 10 updates. It seems that Microsoft will do updates via https now.
>>
>> --cut--
>> 1436268325.765 5294 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 9899569 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
>> 1436268333.267 7484 xxx.xxx.xxx.xxx TCP_REFRESH_UNMODIFIED/206 21564261 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/0cbda2af-bf7d-4408-8a17-d305e378c8e5? - HIER_DIRECT/165.165.47.19 application/octet-stream
>> 1436268430.871 147280 xxx.xxx.xxx.xxx TCP_TUNNEL/200 4267 CONNECT cp201-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/23.214.151.174 -
>> 1436268478.259 96621 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
>> 1436268786.878 78517 xxx.xxx.xxx.xxx TCP_TUNNEL/200 5705 CONNECT array204-prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/64.4.54.117 -
>> --cut--
>>
>> To my knowledge there is no way to cache this.
>
> Technically yes: there is no way to cache it without breaking into the HTTPS.
>
>> How would one handle this? Is it even possible to cache the updates?
>
> SSL-Bump is the Squid feature for accessing HTTPS data in decrypted form for filtering and/or caching.
> However, that will depend on:
> a) being able to bump the crypto (if the WU app is validating the server cert against a known signature, it's not),
> b) the content inside actually being HTTPS (they do updates via P2P now too), and
> c) the HTTP content inside being cacheable (no guarantees, but a good chance it's about as cacheable as non-encrypted updates).
>
> You are the first to mention it, so there is no existing info on those requirements.
>
> Amos

Thank you Amos. Like in Windows 8.1, these updates are HUGE. I will keep an eye on developments. Microsoft really makes things difficult. For now we will be shaping the bandwidth at the network layer.

Kind Regards
Jasper
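[If bumping the update traffic does prove possible, the settings commonly circulated for caching the older non-encrypted Windows Update downloads would be the starting point. The fragment below is an illustrative, untested sketch: the domain list and sizes are examples matched to the hosts in the log above, not a verified recipe for the Windows 10 CDN.]

```
# Illustrative squid.conf fragment for caching large update downloads.
# Untested against the Windows 10 delivery hosts; adjust to taste.
acl wu dstdomain .windowsupdate.com .delivery.mp.microsoft.com

# Updates arrive as Range requests (the 206 responses in the log);
# fetch the whole object for these domains so the reply is cacheable.
range_offset_limit none wu
quick_abort_min -1 KB

# Allow very large objects into the cache.
maximum_object_size 6 GB

refresh_pattern -i \.(cab|exe|msi|msu|esd|psf)$ 43200 100% 129600 refresh-ims
```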
Re: [squid-users] Issue with Citrix sessions and squid
Thanks Yuri,

Any tips on how to increase the TCP/IP stack? Did you mean the TCP/IP stack on the Citrix server side, on the squid box, or both?

Because all computers that do not use Citrix can surf through squid and open unlimited tabs without any issue. And Citrix sessions that *do not use Squid* can surf the Internet and open unlimited tabs without any issue.

On 08/07/2015 20:48, Yuri Voinov wrote:
> Looks like a TCP/IP stack level issue.
>
> On 09.07.15 0:26, David Touzeau wrote:
>> We have servers that host Citrix applications. Each Citrix server runs about 10 users/sessions, and each session runs browsers connected to squid 3.5.6 or 3.3.13.
>> [...]
>> Has anybody already reproduced/fixed this issue? Is there a Squid limitation on the number of opened browsers from one single IP (the Citrix server)?