Re: [squid-users] R: Re: TCP_DENIED/411
On 10/11/2014 8:53 p.m., Riccardo Castellani wrote:
> I think the request is HTTP/1.1, because I captured it and in the 'Hypertext Transfer Protocol' section of the POST the field 'Request Version' is HTTP/1.1. I understand Squid 2.7 is not able to understand HTTP/1.1, but I ask myself: if the 'Content-Length' field were missing from the HTTP/1.1 request and Squid were compliant with HTTP/1.1 (a Squid 3.x version), would Squid return DENIED/411 again?

Can you produce a copy of those HTTP headers to clarify what we are discussing?

HTTP/1.1 allows the possibility of Content-Length not being present yet the request being valid. Squid-2.7 has just enough 1.1 compliance to perform checks like that and respond with the appropriate 411, or accept the request. HTTP/1.0 requires terminating the TCP connection in these circumstances.

Amos

___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
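To make the distinction concrete, here is an illustrative pair of request sketches (not from the thread; the "#" lines are annotations, not part of the messages). In HTTP/1.1 the body length can be conveyed by Transfer-Encoding: chunked, so a valid request may carry no Content-Length at all; HTTP/1.0 has no chunking, so a body-bearing request without Content-Length can only draw a 411 or a terminated connection:

```
# Valid HTTP/1.1 POST without Content-Length (chunked body):
POST /upload HTTP/1.1
Host: example.com
Transfer-Encoding: chunked

5
hello
0

# HTTP/1.0 POST with a body but no Content-Length: the server cannot
# know where the body ends, so expect "411 Length Required" (or a
# closed connection).
POST /upload HTTP/1.0
Host: example.com
```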
Re: [squid-users] sslbump working with 3.4.9 but not in intercept mode?
Can you send all ssl_bump related settings? There are some missing parts in the settings. If there is a bug/error, the full details are needed to analyze the subject. I need:
- OS details
- machine details
- network topology
- cache logs
- access logs

Eliezer

On 11/10/2014 11:17 AM, Jason Haar wrote:
> Hi there, I've googled about for this, but I think most of the Squid intercept material refers to 3.2, and things have changed since then. I have squid-3.4.9 running with ssl-bump, and when I configure my browser to use it as a proxy, it bumps the certs nicely, signing fake certs etc. I then added an iptables rule to redirect outbound tcp/80 onto port 3129 (see below) and that transparently proxies all port 80 traffic - great. I then went through the same exercise with ssl-bump, but when I put in an iptables rule to redirect outbound tcp/443 traffic onto 3127, it doesn't bump - it acts like a TCP forwarder instead. I get a "CONNECT ip.add.ress:443" log record - no sign of the hostname and no bumping.
>
> http_port 3126 ssl-bump cert=/etc/squid/squid-CA.cert capath=/etc/ssl/certs/ generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
> http_port 3129 transparent
> https_port 3127 transparent ssl-bump cert=/etc/squid/squid-CA.cert capath=/etc/ssl/certs/ generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
>
> acl SSL_nonHTTPS_sites dstdom_regex /etc/squid/SSL_nonHTTPS_sites.txt
> acl SSL_noIntercept_sites dstdom_regex /etc/squid/SSL_noIntercept_sites.txt
> ssl_bump none SSL_nonHTTPS_sites
> ssl_bump none SSL_noIntercept_sites
> ssl_bump server-first all
>
> So these older search-engine pages I came across claimed this should work with Squid, but either I am missing something, or this doesn't work in 3.4.9?
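As an aside (not necessarily the cause of the problem reported here): since Squid 3.2 the 'transparent' port flag is a deprecated synonym of 'intercept'. A minimal sketch of the interception ports in the modern spelling, reusing the values from the post (untested; it assumes the iptables REDIRECT rules for tcp/80 and tcp/443 are already in place):

```
# sketch only - same values as above, with 'intercept' instead of the
# deprecated 'transparent' flag
http_port  3129 intercept
https_port 3127 intercept ssl-bump cert=/etc/squid/squid-CA.cert \
    capath=/etc/ssl/certs/ generate-host-certificates=on \
    dynamic_cert_mem_cache_size=256MB options=ALL
```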
Thanks
[squid-users] SslBump Squid - Dropbox client does not work
Hello, I am using squid 3.4.9 and the Dropbox client does not work with the SSLBump feature of Squid. The Dropbox client gives a message that it cannot make a secure connection. Does anyone know a fix or workaround for this issue?

Thanks,
Jatin
[squid-users] Squid3 config on Ubuntu remains even after uninstall and ignores the new config
OS: Ubuntu 14.04 LTS

After I installed the squid3 package for the first time, I added a list of domains to be blocked in squid.conf:

acl myrule dstdom_regex /etc/squid3/domainblock.txt
http_access deny myrule

where domainblock.txt is:

someaddress.com
blockthis.net

This worked fine and redirected those sites to localhost, which serves the default index page of my LAMP stack ("Index of / ... Apache/2.4.7 (Ubuntu) Server at google-analytics.com Port 80").

Later I purged it with:

sudo apt-get remove --purge squid3*

and removed every file/folder that the command "locate squid" reported, including the /etc/squid3 folder, then rebooted. But I still couldn't access the websites in domainblock.txt, even though it doesn't exist anymore. Then I re-installed with "sudo apt-get install squid3", this time with the config to allow those websites in the list:

acl myrule dstdom_regex /etc/squid3/domainblock.txt
http_access allow myrule

But still no luck. I guess some configuration remains even after removing Squid from the system. So what should I do now?

Note: the default squid.conf is too huge to add here, but I just added the lines above and changed nothing else.
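A guess at what is happening (an assumption, not something the post confirms): if the old deny setup answered with a redirect to localhost, the browser may have cached that redirect, so the symptom would survive even a complete uninstall of Squid. Checking from the command line bypasses the browser cache; the hostname is the one from the post:

```
# Confirm the package is really gone and nothing is listening for proxy traffic
dpkg -l 'squid*'
sudo ss -lntp

# Fetch a formerly blocked site directly, bypassing browser cache and any proxy
curl -sI --noproxy '*' http://blockthis.net/ | head -n 1
```

If curl reaches the site while the browser does not, clearing the browser cache (or testing in a private window) should confirm the cause.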
Re: [squid-users] sslbump working with 3.4.9 but not in intercept mode?
On 11/11/14 00:06, Amos Jeffries wrote:
> Grr, strdup bites again. Backtrace please if you can.

I'm not a developer, so here's my attempt; let me know if I need to do something else.

(gdb) run
Starting program: /usr/sbin/squid -N
[Thread debugging using libthread_db enabled]
Detaching after fork from child process 29759.
Detaching after fork from child process 29760.
Detaching after fork from child process 29761.
Detaching after fork from child process 29762.
Detaching after fork from child process 29763.
Detaching after fork from child process 29764.
Detaching after fork from child process 29765.
Detaching after fork from child process 29766.
Detaching after fork from child process 29767.
Detaching after fork from child process 29768.
Detaching after fork from child process 29769.
Detaching after fork from child process 29770.
Detaching after fork from child process 29771.

Program received signal SIGABRT, Aborted.
0x003f40032625 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x003f40032625 in raise () from /lib64/libc.so.6
#1  0x003f40033e05 in abort () from /lib64/libc.so.6
#2  0x0059cbbb in fatal_dump(char const*) ()
#3  0x0082a6bb in xstrdup ()
#4  0x006b528c in ACLUrlPathStrategy::match(ACLData<char const *>*, ACLFilledChecklist*, ACLFlags) ()
#5  0x006f9478 in ACL::matches(ACLChecklist*) const ()
#6  0x006f in ACLChecklist::matchChild(Acl::InnerNode const*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >, ACL const*) ()
#7  0x006faeb3 in Acl::AndNode::doMatch(ACLChecklist*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >) const ()
#8  0x006f9478 in ACL::matches(ACLChecklist*) const ()
#9  0x006f in ACLChecklist::matchChild(Acl::InnerNode const*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >, ACL const*) ()
#10 0x006fae2e in Acl::OrNode::doMatch(ACLChecklist*, __gnu_cxx::__normal_iterator<ACL* const*, std::vector<ACL*, std::allocator<ACL*> > >) const ()
#11 0x006f9478 in ACL::matches(ACLChecklist*) const ()
#12 0x006fc474 in ACLChecklist::matchAndFinish() ()
#13 0x006fce90 in ACLChecklist::nonBlockingCheck(void (*)(allow_t, void*), void*) ()
#14 0x00635f1a in ?? ()
#15 0x005bc2b8 in FwdState::Start(RefCount<Comm::Connection> const&, StoreEntry*, HttpRequest*, RefCount<AccessLogEntry> const&) ()
#16 0x005bc706 in FwdState::fwdStart(RefCount<Comm::Connection> const&, StoreEntry*, HttpRequest*) ()
#17 0x0053c572 in ConnStateData::switchToHttps(HttpRequest*, Ssl::BumpMode) ()
#18 0x0053cde9 in ?? ()
#19 0x0054860f in ?? ()
#20 0x006fc63b in ACLChecklist::checkCallback(allow_t) ()
#21 0x0054df1a in ?? ()
#22 0x006ffa46 in AsyncCall::make() ()
#23 0x00702b02 in AsyncCallQueue::fireNext() ()
#24 0x00702e50 in AsyncCallQueue::fire() ()
#25 0x00593cf4 in EventLoop::runOnce() ()
#26 0x00593e48 in EventLoop::run() ()
#27 0x00613e48 in SquidMain(int, char**) ()
#28 0x006147d8 in main ()
(gdb) quit
A debugging session is active.
Inferior 1 [process 29756] will be killed.
Quit anyway? (y or n) y

--
Cheers
Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
[squid-users] eCap + lua integration
Hi! Is there any eCap Lua integration module available that one could use for filtering, similar to Apache mod_lua? It could then easily be used as a URL-rewrite engine, or to handle session affinity and the like, without the context switches that url_rewrite_program requires...

Thanks,
Martin
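I am not aware of a ready-made eCap/Lua adapter. For comparison, the url_rewrite_program interface Martin mentions is just a line-oriented stdin/stdout protocol, so the helper overhead he refers to is one process round-trip per (uncached) request. A minimal hypothetical helper sketch, assuming the squid-3.4 concurrent helper format ("<channel-ID> <URL> <extras...>" in, "<channel-ID> OK rewrite-url=<new-url>" or "<channel-ID> ERR" out); the hostnames are made up:

```shell
# Hypothetical url_rewrite_program helper sketch. Reads requests from
# stdin, rewrites example.org URLs to example.net, leaves the rest alone.
rewrite_helper() {
    while read chan url rest; do
        case "$url" in
            http://example.org/*)
                # Strip the old prefix and graft on the new host
                echo "$chan OK rewrite-url=http://example.net/${url#http://example.org/}"
                ;;
            *)
                # ERR means "no change" for this channel
                echo "$chan ERR"
                ;;
        esac
    done
}
```

In squid.conf one would point url_rewrite_program at a script whose body runs this loop, with url_rewrite_children configured for concurrency.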
Re: [squid-users] 3.3.x - 3.4.x: huge performance regression
Info added to the bug report.

On Sun, Nov 9, 2014 at 7:53 PM, Diego Woitasen di...@woitasen.com.ar wrote:
> Hi, I have more information. The testing environment has a few users. We switched to basic authentication and it worked for a week without any issues. A couple of days ago we enabled NTLM again and the issue reappeared. I'm on mobile now; I'll add more info to the bug report.
> Regards, Diego

On Oct 25, 2014 1:51 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
>> Hey Diego, can you take a look at the bug report and help pinpoint the issue please? http://bugs.squid-cache.org/show_bug.cgi?id=3997 I am pretty sure it's unique to auth only, but I want to verify that external_acl helpers do not affect this issue. Also, can you share the testing environment details, or can we get some help with testing from your IT testing team?
>> Thanks, Eliezer

>>> On 10/25/2014 06:17 PM, Diego Woitasen wrote: Same problem here. New users, only a few users from IT testing it, and CPU usage is really high from time to time. Switched to basic auth for a few days. Looks like everybody is having issues with NTLM/SPNEGO. Keep in touch and we'll fix it :)
>>> Regards, Diego

--
Diego Woitasen
Infrastructure Developer, DevOps Engineer, Linux and Open Source expert
http://www.woitasen.com.ar
Re: [squid-users] SslBump Squid - Dropbox client does not work
On 11/11/2014 12:08 a.m., Jatin Bhasin wrote:
> Hello, I am using squid 3.4.9 and the Dropbox client does not work with the SSLBump feature of Squid. The Dropbox client gives a message that it cannot make a secure connection. Does anyone know a fix or workaround for this issue?

Please start by finding out what the problem is. "Cannot connect" is not sufficient to diagnose anything. Your proxy logs should be able to help out there.

Amos
Re: [squid-users] SslBump Squid - Dropbox client does not work
On Nov 10, 2014, at 5:08 AM, Jatin Bhasin jbhasi...@gmail.com wrote:
> Hello, I am using squid 3.4.9 and the Dropbox client does not work with SSLBump feature of squid. Dropbox client gives a message that it cannot make a secure connection. Does anyone know fix or workaround or this issue?
> Thanks, Jatin

I've researched this and even contacted Dropbox tech support: Dropbox seems to use its own SSL library and doesn't provide a way to add a trusted root cert. I haven't found an easy way to work around this when using intercept mode. I would like to try out 3.5's peek-and-splice to see if it can help.

Guy
Re: [squid-users] FTP-Prompt-Behaviour changed between 3.3.11 and >=3.3.13
Traces show that in the 401 response from Squid which produces the FTP prompt (3.3.11), the header field 'WWW-Authenticate: Basic realm=FTP Access' exists. In the newer Squid version (e.g. 3.3.13) the prompt doesn't appear and the WWW-Authenticate header field is not present. Why do newer Squid versions eat this header field? Is there a configuration directive telling Squid not to delete this field?

On Fri, Nov 7, 2014 at 6:20 PM, Amos Jeffries squ...@treenet.co.nz wrote:
> On 8/11/2014 2:07 a.m., Tom Tom wrote:
>> Hi. Between squid 3.3.11 and 3.3.13 (and of course squid 3.3.13) something changed concerning browser behaviour while accessing FTP sites:
>>
>> squid 3.3.11: ftp://ftp.xxx.xxx - user is prompted for username/password (TCP_DENIED/401) when anonymous access is not allowed.
>>
>> squid 3.3.13 (same config as in 3.3.11): ftp://ftp.xxx.xxx - user is *not* prompted for username/password (also TCP_DENIED/401) when anonymous access is not allowed. Tested with IE and FF.
>>
>> Any hints for this behaviour? Is there a way to enforce the FTP prompt?
>
> Take a look at the HTTP response headers Squid is sending back to the browser in those 401s. And see if there is a followup request with credentials. It may just be that the browser is doing SSO with the authentication now.
>
> Amos
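One way to act on Amos's suggestion and capture those 401 headers outside the browser (the proxy host and port here are placeholders):

```
# Show the response headers Squid sends for the FTP URL; in the 401,
# look for the WWW-Authenticate header
curl -sD - -o /dev/null -x http://proxy.example.net:3128 ftp://ftp.xxx.xxx/
```

If WWW-Authenticate is genuinely absent from the 401, that by itself explains the missing prompt, since browsers only prompt when that header is present.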
Re: [squid-users] High CPU-Usage with squid 3.4.9 (and/or 3.4.4)
On 11/11/2014 4:12 a.m., Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE) wrote:
> -----Original Message-----
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of Amos Jeffries
> Sent: Monday, 10 November 2014 14:36
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] High CPU-Usage with squid 3.4.9 (and/or 3.4.4)
>
> On 7/11/2014 2:50 a.m., Tom Tom wrote:
>> Hi. After migration from squid 3.3.13 to 3.4.4, I recognized a performance issue. Squid is configured with 4 workers. They often have a CPU utilization between 50%-90% (each worker). With squid 3.3.13 (same configuration), CPU utilization was never a problem. I installed squid 3.4.9 and had the same issue. No warnings/errors in the cache.log. I saw that someone else reported a similar issue: http://www.squid-cache.org/mail-archive/squid-users/201407/0500.html Concerning the post above: yes, we have external auth helpers (ext_kerberos_ldap_group_acl) and no, we do not use delay_pools. The high CPU usage comes not from the auth helper - it comes from the 4 squid worker processes. Any hints? Is this a known problem? Probably solved in 3.5?
>
> Are you able to find out any specific details about what the workers are doing that uses so much extra CPU?
>
> Amos

> during our last tests (with 3.4.x) we also tried the worker option. it does not matter if workers are enabled or not. with more workers the CPU rise seems to be somewhat slower, so it is not connected to (SMP) workers. it is the external auth helper - although the squid process, and not the helper, consumes all the CPU...

The only difference between SMP and non-SMP mode here is that non-SMP has 1 worker doing all the work with one CPU core, whereas SMP mode has several workers. They can all hit the same issues independently for the same reason(s).

I am of the understanding that the code associated with the helper processes is using a lot of CPU doing *something* that consumes a lot of cycles. There is a bunch of code doing cache lookups on previous helper queries, queueing new lookups, generating and parsing strings in the I/O, and even sometimes running whole trees of ACL logic when the helper(s) respond. So to get anywhere on this complaint it is important to know what (from the above set of things) exactly the CPU is doing.

Amos
[squid-users] Squid Ecap - Centos 6.6 x86_64
Hi All

Has anyone come across compile issues for Squid 3+ and eCap? Following http://www.e-cap.org/Documentation, I've tried squid-3.1 to squid-3.4 with libecap-0.2.0 and libecap-1.0.0, using:

./configure --enable-ecap
./configure --enable-ecap --with-included-ltdl

When I run make I keep getting the error below (the folder versions just change when I try other versions). I have yum-installed every package I could find, just in case, and rebooted.

Host.cc: In static member function 'static void Adaptation::Ecap::Host::Register()':
Host.cc:130: error: cannot allocate an object of abstract type 'Adaptation::Ecap::Host'
../../../src/adaptation/ecap/Host.h:17: note: because the following virtual functions are pure within 'Adaptation::Ecap::Host':
/usr/local/include/libecap/host/host.h:25: note: virtual void libecap::host::Host::noteVersionedService(const char*, const std::tr1::weak_ptr<libecap::adapter::Service>&)
make[4]: *** [libsquid_ecap_la-Host.lo] Error 1
make[4]: Leaving directory `/usr/src/squid-3.4.9/src/adaptation/ecap'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/usr/src/squid-3.4.9/src/adaptation'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/squid-3.4.9/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/squid-3.4.9/src'
make: *** [all-recursive] Error 1

Regards
Garth
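For what it's worth, as far as I can tell squid 3.4 is written against the libecap 0.2.x API, while the error points at headers under /usr/local/include declaring the newer noteVersionedService() interface, which suggests a libecap-1.0 install is shadowing the 0.2.0 one (the 1.0 API is the one squid 3.5 targets). A quick way to see which copies of libecap the build can see:

```
# List installed libecap headers and the version pkg-config resolves to
ls /usr/include/libecap /usr/local/include/libecap 2>/dev/null
pkg-config --modversion libecap
```

If both versions are installed, removing or hiding the 1.0 copy before rebuilding squid 3.4 would be the thing to try.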
Re: [squid-users] High CPU-Usage with squid 3.4.9 (and/or 3.4.4)
>> during our last tests (with 3.4.x) we also tried the worker option. it does not matter if workers are enabled or not. with more workers the CPU rise seems to be somewhat slower, so it is not connected to (SMP) workers. it is the external auth helper - although the squid process, and not the helper, consumes all the CPU...
>
> The only difference between SMP and non-SMP mode here is that non-SMP has 1 worker doing all the work with one CPU core, whereas SMP mode has several workers. They can all hit the same issues independently for the same reason(s).
>
> I am of the understanding that the code associated with the helper processes is using a lot of CPU doing *something* that consumes a lot of cycles. There is a bunch of code doing cache lookups on previous helper queries, queueing new lookups, generating and parsing strings in the I/O, and even sometimes running whole trees of ACL logic when the helper(s) respond. So to get anywhere on this complaint it is important to know what (from the above set of things) exactly the CPU is doing.
>
> Amos

Indeed, but setting debug_options to ALL,9 does not work, since the log file is already too big and unmanageable even before Squid begins to do the things that consume CPU time.

I have a script for a daemon that I wrote. The script is executed when the daemon receives a fatal signal (e.g. SIGSEGV): the daemon catches SIGSEGV and executes the script, which saves a stack trace (using gdb) of all threads to a file, and then the daemon finally exits. It is a nice debug tool. Maybe we can make a similar script for Squid that does something like this: collect the PIDs of all processes, then for each PID run gdb, attach to the process, dump a stack trace and detach. The script could even do this 25 times to get a better insight into what Squid does. An administrator could then run the script at the time the CPU peaks and send the output to the Squid developers. Do you like the idea?

Another idea is to implement a user-defined signal handler: say, on receipt of SIGUSR2, Squid sets debug_options to ALL,9 without rereading the config file, and after 3 seconds sets debug_options back to the configured value. This way you get a 3-second sample of what Squid is doing at a specific time, and the log file has a reasonable size.

Marcus
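Marcus's first suggestion could be sketched roughly like this (an untested sketch, not an existing Squid tool; it assumes gdb and pgrep are available and the squid binaries carry debug symbols):

```
#!/bin/sh
# Rough sketch of the stack-sampling script described above: for each
# squid process, attach gdb in batch mode, dump all thread backtraces,
# and detach. Repeated 25 times, as suggested, to sample what the CPU
# is actually doing.
for sample in $(seq 1 25); do
    for pid in $(pgrep -x squid); do
        gdb -batch -p "$pid" \
            -ex 'set pagination off' \
            -ex 'thread apply all bt' \
            >> "/tmp/squid-stacks.$pid" 2>&1
    done
    sleep 1
done
```

Repeated stacks that keep showing the same functions across samples point at where the CPU time goes, without the log-volume problem of ALL,9.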
Re: [squid-users] sslbump working with 3.4.9 but not in intercept mode?
I applied the patch and now it works! I can transparently access port-443-based websites with ssl-bump :-) Thanks Amos :-)

On 11/11/14 02:20, Amos Jeffries wrote:
> You have an urlpath_regex ACL test depending on URIs containing paths. Which is not the case with CONNECT. The attached patch should fix the crash.
>
> Amos

--
Cheers
Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
[squid-users] Squid 3.3.12, Multiple process, requests serviced by process.
Hello,

We use a Squid cache for our robots to collect information from clients' web sites. Squid runs on FreeBSD 9.3, Squid version 3.3.13. The configuration is like this:

if ${process_number} = 1
http_port 3001
cache_peer 1.1.1.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.2 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.3 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.4 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.5 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif

if ${process_number} = 2
http_port 3001
cache_peer 1.1.1.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.2 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.3 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.4 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.1.5 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif

if ${process_number} = 3
http_port 3002
cache_peer 1.1.2.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.2 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.3 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.4 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.5 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif

if ${process_number} = 4
http_port 3002
cache_peer 1.1.2.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.2 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.3 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.4 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
cache_peer 1.1.2.5 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif

.

# COORDINATOR
if ${process_number} = 16
http_port 3099
endif

workers 15

In total 15+1 processes are running; the traffic load is over 100 Mbit, around 50K requests/min in total.

The problem: when we restart Squid, all requests to port 3001 are served only by the upstream proxies defined for that process. After a couple of hours, we see requests served by upstream caches NOT belonging to the 3001 ports (as in the example above, they can be served by 1.1.2.4). The rate depends on the load; up to 15% of all requests can be served by upstream proxies not belonging to the port. We use a Java application and our website to log all the requests we generate and pass through the cache server. This behaviour is serious trouble for us. Thanks in advance for any tips to solve it. (We think an internal request-distribution mechanism produces the fault.)
[squid-users] https intercept breaks non-HTTPS port 443 traffic?
Hi there

Now that I've got ssl-bump working with port 443 intercept, I find that non-HTTPS apps operating on port 443 no longer work. E.g. for ssl-bump in standard proxy mode I had an ACL to disable bumping when an application (like Skype, which doesn't use HTTPS) tried CONNECT-ing to IP addresses, but with intercept mode that had to be removed, as all intercepted outbound HTTPS sessions begin as connections to an IP address.

I just brought up a remote SSH server on port 443, and when I try to telnet to it, instead of getting the OpenSSH banner I see nothing, while the remote server receives an SSL transaction from Squid. All makes sense, but is there a way for bump to fail open on non-SSL traffic? I see squid 3.5 mentions peek and at_step - are those components going to be the mechanism to solve this issue? Just curious; I'm only testing/playing with intercepting port 443, but it's interesting to see where this is going.

Finally, when I attempted this connection, cache.log reported:

fwdNegotiateSSL: Error negotiating SSL connection on FD 25: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol (1/-1/0)

I guess that's it squealing about getting non-SSL content back from the server (i.e. the SSH banner). Shouldn't that be a bit more verbose, to help sysadmins figure out what was behind it? E.g.:

fwdNegotiateSSL: Error negotiating SSL connection from 192.168.22.11:44382 - 1.2.3.4:443 (FD 25): error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol (1/-1/0)

At the very least, with that I could have a cronjob grep through my cache.log to auto-create a "bump none" ACL ;-)

Thanks
--
Cheers
Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
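On the peek/at_step question: under squid-3.5 the ssl_bump rules become peek/splice/bump actions chosen per handshake step. A hedged sketch of that direction, reusing an acl name from the earlier posts (3.5 syntax; note this still decides based on the TLS handshake, so traffic that isn't TLS at all cannot fully "fail open" with these rules alone):

```
# sketch of squid-3.5 style rules
acl step1 at_step SslBump1
ssl_bump peek step1                     # read the TLS client hello (SNI)
ssl_bump splice SSL_noIntercept_sites   # tunnel these without decrypting
ssl_bump bump all
```

Peeking at step 1 recovers the SNI hostname, which addresses the "CONNECT ip.add.ress:443, no sign of the hostname" problem for genuine TLS clients.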