Re: [squid-users] Squid 2.7STABLE9 'zph_sibling' seemingly not tagging traffic
Hey Amos. I was waiting for that! We have a few requirements not yet satisfied in Squid 3.x, storeurl_rewrite features are a big one, so we're having to hold off until we're able to conjure something up. Can't get them ported across, can you? ;-)

With regard to 2.7 ZPH, are you aware of any bugs that may cause sibling_hit to be ineffective? I saw on the Lusca project that their code had an issue preventing the mark from ever being applied. I wonder if Squid suffers from a similar fault.

Nick
--
Nick Fennell
n...@tbfh.org

On 12 Feb 2013, at 02:05, Amos Jeffries squ...@treenet.co.nz wrote:

> On 12/02/2013 12:18 a.m., Nick Fennell wrote:
>> Hi.
>
> Hi Nick,
> 2.7 series Squid is no longer supported or receiving bug fixes. Is there any particular reason you have not yet upgraded to 3.2 or 3.3?
>
> Amos
Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
On 12/02/2013 8:41 p.m., Sandrini Christian (xsnd) wrote:
> Hi
>
> I have now enabled ipv6
>
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
>     link/ether 00:50:56:a6:07:27 brd ff:ff:ff:ff:ff:ff
>     inet 160.85.104.14/24 brd 160.85.104.255 scope global eth1
>     inet6 fe80::250:56ff:fea6:727/64 scope link
>        valid_lft forever preferred_lft forever
>
> When I dig for the AAAA record of ipv6.idrobot.net I don't get a timeout:
>
> dig AAAA ipv6.idrobot.net
>
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA ipv6.idrobot.net
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34596
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
>
> ;; QUESTION SECTION:
> ;ipv6.idrobot.net.  IN  AAAA
>
> ;; AUTHORITY SECTION:
> net.  900  IN  SOA  a.gtld-servers.net. nstld.verisign-grs.com. 1360654692 1800 900 604800 86400
>
> ;; Query time: 17 msec
> ;; SERVER: 160.85.192.100#53(160.85.192.100)
> ;; WHEN: Tue Feb 12 08:38:40 2013
> ;; MSG SIZE  rcvd: 107
>
> When I dig for the AAAA record of www2.zhlex.zh.ch I get one:
>
> dig AAAA www2.zhlex.zh.ch
>
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA www2.zhlex.zh.ch
> ;; global options: +cmd
> ;; connection timed out; no servers could be reached
>
> Do you have the same timeout as well with that host and ipv6 running? This is a domain which is queried a lot.

Yes. I traced it through three CNAME redirections to a pair of DNS servers which do not respond to any queries.

# dig zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
;; global options:  printcmd
;; connection timed out; no servers could be reached

# dig zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
;; global options:  printcmd
;; connection timed out; no servers could be reached

Those DNS servers lc1.djiktzh.ch and lc2.djiktzh.ch are broken.

Amos
Re: [squid-users] Squid 2.7STABLE9 'zph_sibling' seemingly not tagging traffic
On 12/02/2013 10:31 p.m., Nick Fennell wrote:
> Hey Amos. I was waiting for that! We have a few requirements not yet satisfied in Squid 3.x, storeurl_rewrite features are a big one, so we're having to hold off until we're able to conjure something up. Can't get them ported across, can you? ;-)

That feature is already ported into 3.HEAD thanks to Eliezer. You can make use of it by building that development package. As things stand today it will be in the 3.4 series. There is a bit of work and a lot of testing required to get it into 3.3, but if anyone is interested in helping out with that let me know. Eliezer has decided to concentrate on some needed further improvements now rather than back-ports.

> With regard to 2.7 ZPH, are you aware of any bugs that may cause sibling_hit to be ineffective? I saw on the Lusca project that their code had an issue preventing the mark from ever being applied. I wonder if Squid suffers from a similar fault.

I'm not aware of any bugs in the ZPH patch. They (ZPH) wrote two very different versions of the feature for 2.7 and 3.x, and we have extended and fixed the 3.x version in quite a few ways since it was merged. A lot of the bugs people have reported are either in code which was never setting TOS at all, or where they confused the up/down directionality of the packet flow.

As for Lusca vs 2.7: yes, being a fork of that version it is likely that Lusca contains any bug known in 2.7.

Amos
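[For readers following the thread: a minimal sketch of what the marking configuration looks like on each side. The directive names come from the 2.7 ZPH patch and the Squid 3.1+ qos_flows feature respectively; the TOS values are arbitrary examples, not taken from this thread.]

```
# Squid 2.7 with the ZPH patch (example TOS values):
zph_mode tos
zph_local   0x30
zph_sibling 0x31
zph_parent  0x32

# Squid 3.1 and later, the equivalent built-in feature:
qos_flows local-hit=0x30 sibling-hit=0x31 parent-hit=0x32
```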
AW: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
That is what I guessed as well. But we cannot control their DNS, and the solution so far was not to check for AAAA records. It is silly for one domain, but it is quite an important one that is used a lot. Not sure if there are any alternatives? I thought that squid 3.2 is doing parallel lookups for AAAA and A records?

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, 12 February 2013 10:54
To: squid-users@squid-cache.org
Subject: Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?

On 12/02/2013 8:41 p.m., Sandrini Christian (xsnd) wrote:
> Hi
>
> I have now enabled ipv6
>
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
>     link/ether 00:50:56:a6:07:27 brd ff:ff:ff:ff:ff:ff
>     inet 160.85.104.14/24 brd 160.85.104.255 scope global eth1
>     inet6 fe80::250:56ff:fea6:727/64 scope link
>        valid_lft forever preferred_lft forever
>
> When I dig for the AAAA record of ipv6.idrobot.net I don't get a timeout:
>
> dig AAAA ipv6.idrobot.net
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA ipv6.idrobot.net
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34596
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
> ;; QUESTION SECTION:
> ;ipv6.idrobot.net.  IN  AAAA
> ;; AUTHORITY SECTION:
> net.  900  IN  SOA  a.gtld-servers.net. nstld.verisign-grs.com. 1360654692 1800 900 604800 86400
> ;; Query time: 17 msec
> ;; SERVER: 160.85.192.100#53(160.85.192.100)
> ;; WHEN: Tue Feb 12 08:38:40 2013
> ;; MSG SIZE  rcvd: 107
>
> When I dig for the AAAA record of www2.zhlex.zh.ch I get one:
>
> dig AAAA www2.zhlex.zh.ch
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA www2.zhlex.zh.ch
> ;; global options: +cmd
> ;; connection timed out; no servers could be reached
>
> Do you have the same timeout as well with that host and ipv6 running? This is a domain which is queried a lot.

Yes. I traced it through three CNAME redirections to a pair of DNS servers which do not respond to any queries.

# dig zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
;; global options:  printcmd
;; connection timed out; no servers could be reached

# dig zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
;; global options:  printcmd
;; connection timed out; no servers could be reached

Those DNS servers lc1.djiktzh.ch and lc2.djiktzh.ch are broken.

Amos
Re: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
On 2/12/2013 2:09 AM, Amos Jeffries wrote:
> No. A bug report will not make any difference here. dns_v4_first is about sorting the results found, not the lookup order. AAAA is faster than A in most networks, so we perform that lookup first in 3.1. This was altered in 3.2 to perform happy-eyeballs parallel lookups anyway, so most bugs in the lookup code of 3.1 will be closed as irrelevant.
>
> Note that the current supported release is now 3.3.1.
>
>> Thanks, the logic seemed odd to me, but now I understand the reason for what happens.
>
> This is VERY likely to be the problem. Squid tests for IPv6 ability automatically by opening a socket on a private IP address; if that works, the socket options are noted and used. There is no way for Squid to identify, in advance of opening upstream connections, whether the NIC the kernel chooses to use will be v6-enabled or not.
>
> Notice that the method used to disable IPv6 was to simply not assign an IPv6 address to the NIC; nothing at the sockets layer was actually disabled. So every NIC needs to be checked and disabled individually as well, and any sub-system loading IPv6 functionality into the kernel also needs disabling.
>
> (Warning: soapbox) The big question is: why disable it in the first place? v6 is faster and more efficient than v4 when you get it going properly, and one he*l of a lot easier to administrate. If any of your upstreams supply native connections it is well worth taking the option up. If not, there is always 6to4 or other tunnel types that can be built right to the proxy box to get IPv6 at only a small initial latency on the SYN packet (ping 192.88.99.1 to see what 6to4 adds for you). Note that these are IPv6 connections initiated from the proxy to the Internet *only*, so firewall alterations are minimal to get Squid v6-enabled.
>
> Amos

The main problem with IPv6 is that most of the ISPs around the world don't support/provide it yet. While trying to use a 6to4 tunnel I have seen some weird stuff going on when a gateway is used. A proxy is another thing, and speed is most likely the issue in the cases in which a 6to4 tunnel is not being used.

Regards,
--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
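[Amos's ping suggestion can be scripted. A small sketch: the helper name rtt_avg and the canned summary line are illustrative only; 192.88.99.1 is the 6to4 anycast relay address from RFC 3068.]

```shell
# Estimate the extra latency 6to4 would add by pinging the 6to4
# anycast relay and extracting the average RTT from ping's summary.
rtt_avg() {
  # Parse the "rtt min/avg/max/mdev = a/b/c/d ms" line (Linux ping output).
  awk -F'/' '/^(rtt|round-trip)/ {print $5}'
}

# A live run would be:  ping -c 3 192.88.99.1 | rtt_avg
# Demonstrated here with a canned summary line:
echo "rtt min/avg/max/mdev = 11.1/12.5/14.0/1.2 ms" | rtt_avg
```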
Re: AW: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
Try to contact the DNS servers' maintainer using postmaster or any other relevant address. You can consult about it on the ISOC mailing list. BIND has very nice logging options about lazy and problematic DNS servers which can help you prevent these issues. It's a very common problem in the DNS world, not related just to IPv6.

Eliezer

On 2/12/2013 12:36 PM, Sandrini Christian (xsnd) wrote:
> That is what I guessed as well. But we cannot control their DNS, and the solution so far was not to check for AAAA records. It is silly for one domain, but it is quite an important one that is used a lot. Not sure if there are any alternatives? I thought that squid 3.2 is doing parallel lookups for AAAA and A records?

--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
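[The BIND logging Eliezer refers to can be switched on with a named.conf fragment along these lines. The channel name and file path are examples; lame-servers is a standard BIND logging category for misbehaving remote servers.]

```
logging {
    channel lame_log {
        file "/var/log/named/lame-servers.log" versions 3 size 5m;
        severity info;
    };
    // "lame-servers" collects reports about misconfigured or
    // unresponsive authoritative servers met during resolution.
    category lame-servers { lame_log; };
};
```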
[squid-users] Is squid 3.3.1 stable released?
Hello,

I am seeing squid 3.3.1 released on 9th Feb 2013, mentioned under "Stable versions" at http://www.squid-cache.org/Versions/ where it is also mentioned that "Current versions suitable for production use."

But when I look at the release notes for 3.3.1, it is written that "While this release is not deemed ready for production use, we believe it is ready for wider testing by the community."

Also I have not seen any official announcement here on the mailing list. Sorry if I missed it. So please clarify whether squid 3.3.1 is released as stable for production use, or not?

Thank you,
Amm.
AW: [squid-users] AW: any chance to optimize squid3?
Hello again,

I found out that this delay comes from squid_ldap_group and not from squid_kerb_auth. I thought it would be faster when using Kerberos auth and LDAP group check:

auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=User_Gruppen,dc=DOMAIN,dc=local))" -h DOMAINCONTROLLER

instead of my old config:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20 startup=0 idle=1
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Domain Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
authenticate_cache_garbage_interval 10 seconds
authenticate_ttl 28800 seconds
external_acl_type nt_group ttl=5 children=5 %LOGIN /usr/lib/squid3/wbinfo_group.pl

What can I do? What's the best way to authorize a specific LDAP group?

Thanks for help.
--
Marcel

-----Original Message-----
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Thursday, 7 February 2013 11:22
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hello,

at the moment some users are using my new proxy (with Kerberos auth instead of NTLM). There is still one odd thing. The first time the browser starts (start page Google) it takes several seconds until the Google page is loaded. When I continue browsing to another page, this delay isn't noticeable. I suspect it has to do with the initial authentication. Is this normal, or can I adjust some config? This is my config for Kerberos:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

Thanks for helping me.

-----Original Message-----
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Saturday, 2 February 2013 11:04
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hi Amos,

finally I've configured Kerberos auth and LDAP group check. In a few weeks I will report whether the bottlenecks are eliminated. This is now my config:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=UserGroups,dc=DOMAIN,dc=local))" -h DOMAINCONTROLLER
. (snip)
acl Terminalserver src 10.4.1.51-10.4.1.75
acl AUTH proxy_auth REQUIRED
acl InternetGroup external checkgroup internet
. (snip)
http_access deny !AUTH
http_access allow InternetGroup Terminalserver
http_access deny Terminalserver
. (snip)

Thanks for help.

Amos Jeffries wrote:
> The big issues you have are:
> * using NTLM. This seriously caps the proxy performance and capacity. Each new TCP connection (~30 per second from your graphs) requires at least two full HTTP request/reply round trips just to authenticate before the actual HTTP response can begin to be identified and fetched.
> * using groups as the basis for access permissions. Like NTLM, this caps the capacity of your Squid.
> * using a URL helper. Whether that is a big drag or not depends on what you are using it for and whether Squid can do that faster by itself.
>
> These are your big performance bottlenecks. Eliminating any of them will speed up your proxy. BUT whether it is worth doing is up to you.
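[One standard mitigation for per-request group-check latency, sketched here with example values: ttl, negative_ttl and children are documented external_acl_type options, and caching the helper's verdicts means the LDAP helper is consulted only when a cached answer expires rather than on every request.]

```
# Cache group-lookup verdicts so squid_ldap_group is rarely consulted
# (ttl/negative_ttl values are examples; tune to how quickly group
# membership changes must take effect):
external_acl_type checkgroup ttl=3600 negative_ttl=60 children=10 %LOGIN \
    /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local \
    -D ldap -w PASSWORD \
    -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=User_Gruppen,dc=DOMAIN,dc=local))" \
    -h DOMAINCONTROLLER
```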
Re: [squid-users] Is squid 3.3.1 stable released?
On 13/02/2013 12:05 a.m., Amm wrote:
> Hello, I am seeing squid 3.3.1 released on 9th Feb 2013, mentioned under "Stable versions" at http://www.squid-cache.org/Versions/ where it is also mentioned that "Current versions suitable for production use."
>
> But when I look at the release notes for 3.3.1, it is written that "While this release is not deemed ready for production use, we believe it is ready for wider testing by the community."
>
> Also I have not seen any official announcement here on the mailing list. Sorry if I missed it. So please clarify whether squid 3.3.1 is released as stable for production use, or not?

Yes, it is packaged now as production-ready so far as we know. At least as much, and slightly more so, than the 3.2 series. The announcement is delayed a few days to allow for packaging/documentation problem detection. Like the text you spotted, thanks; it looks like that change was omitted in the 3.2 series as well.

Amos
[squid-users] Squid 3.3x: UNLNK id(232) Error: no filename in shm buffer
Dear all,

I have these errors on Squid 3.3. What do they mean?

26988 UNLNK id(232) Error: no filename in shm buffer
26989 UNLNK id(547) Error: no filename in shm buffer
26988 UNLNK id(233) Error: no filename in shm buffer
26989 UNLNK id(548) Error: no filename in shm buffer
26988 UNLNK id(234) Error: no filename in shm buffer
26989 UNLNK id(549) Error: no filename in shm buffer
2013/02/12 09:12:23| Error sending to ICMPv6 packet to [2a02:13a8:102:1:40::83]. ERR: (101) Network is unreachable
26992 UNLNK id(300) Error: no filename in shm buffer
26992 UNLNK id(301) Error: no filename in shm buffer
26989 UNLNK id(550) Error: no filename in shm buffer
26991 UNLNK id(276) Error: no filename in shm buffer
26989 UNLNK id(551) Error: no filename in shm buffer
26988 UNLNK id(235) Error: no filename in shm buffer
26989 UNLNK id(552) Error: no filename in shm buffer
26991 UNLNK id(277) Error: no filename in shm buffer
26988 UNLNK id(236) Error: no filename in shm buffer
26989 UNLNK id(553) Error: no filename in shm buffer
26992 UNLNK id(302) Error: no filename in shm buffer
26992 UNLNK id(303) Error: no filename in shm buffer
26992 UNLNK id(304) Error: no filename in shm buffer
26989 UNLNK id(554) Error: no filename in shm buffer
26991 UNLNK id(278) Error: no filename in shm buffer
26989 UNLNK id(555) Error: no filename in shm buffer
26991 UNLNK id(279) Error: no filename in shm buffer
26989 UNLNK id(556) Error: no filename in shm buffer
26991 UNLNK id(280) Error: no filename in shm buffer
26989 UNLNK id(557) Error: no filename in shm buffer
26992 UNLNK id(305) Error: no filename in shm buffer
26991 UNLNK id(281) Error: no filename in shm buffer
26989 UNLNK id(558) Error: no filename in shm buffer
26991 UNLNK id(282) Error: no filename in shm buffer
26989 UNLNK id(559) Error: no filename in shm buffer
Re: [squid-users] Squid 3.3x: UNLNK id(232) Error: no filename in shm buffer
Hello,

I was going to ask about the same thing. I'm running 3.2 and I also see tons of these errors filling my cache.log.

On Tue, 12 Feb 2013 13:29:07 +0100 David Touzeau da...@articatech.com wrote:
> Dear all,
>
> I have these errors on Squid 3.3. What do they mean?
>
> 26988 UNLNK id(232) Error: no filename in shm buffer
> 26989 UNLNK id(547) Error: no filename in shm buffer
> 26988 UNLNK id(233) Error: no filename in shm buffer
> 26989 UNLNK id(548) Error: no filename in shm buffer
> 26988 UNLNK id(234) Error: no filename in shm buffer
> 26989 UNLNK id(549) Error: no filename in shm buffer
> 2013/02/12 09:12:23| Error sending to ICMPv6 packet to [2a02:13a8:102:1:40::83]. ERR: (101) Network is unreachable
> 26992 UNLNK id(300) Error: no filename in shm buffer
> 26992 UNLNK id(301) Error: no filename in shm buffer
> 26989 UNLNK id(550) Error: no filename in shm buffer
> 26991 UNLNK id(276) Error: no filename in shm buffer
> 26989 UNLNK id(551) Error: no filename in shm buffer
> 26988 UNLNK id(235) Error: no filename in shm buffer
> 26989 UNLNK id(552) Error: no filename in shm buffer
> 26991 UNLNK id(277) Error: no filename in shm buffer
> 26988 UNLNK id(236) Error: no filename in shm buffer
> 26989 UNLNK id(553) Error: no filename in shm buffer
> 26992 UNLNK id(302) Error: no filename in shm buffer
> 26992 UNLNK id(303) Error: no filename in shm buffer
> 26992 UNLNK id(304) Error: no filename in shm buffer
> 26989 UNLNK id(554) Error: no filename in shm buffer
> 26991 UNLNK id(278) Error: no filename in shm buffer
> 26989 UNLNK id(555) Error: no filename in shm buffer
> 26991 UNLNK id(279) Error: no filename in shm buffer
> 26989 UNLNK id(556) Error: no filename in shm buffer
> 26991 UNLNK id(280) Error: no filename in shm buffer
> 26989 UNLNK id(557) Error: no filename in shm buffer
> 26992 UNLNK id(305) Error: no filename in shm buffer
> 26991 UNLNK id(281) Error: no filename in shm buffer
> 26989 UNLNK id(558) Error: no filename in shm buffer
> 26991 UNLNK id(282) Error: no filename in shm buffer
> 26989 UNLNK id(559) Error: no filename in shm buffer
> 26989 UNLNK id(551) Error: no filename in shm buffer
[squid-users] Squid 3.3.1 / Solaris 10
Hi,

I've just tried a Solaris 10 compilation of the latest squid 3.3.1. Here is my configure:

CFLAGS="-std=c99" ./configure --prefix=$PREFIX --disable-strict-error-checking --localstatedir=/var/squid --with-pthreads --enable-default-err-language=French --enable-err-languages=French --with-build-environment=POSIX_V6_ILP32_OFFBIG --enable-auth-basic="LDAP NCSA" --enable-digest-auth-helpers=password --enable-external-acl-helpers="ldap_group ip_user" --enable-eui --enable-ssl --with-openssl=/usr/sfw --with-large-files LDFLAGS="-R/usr/sfw/lib"

Configure passed, but compilation failed here:

libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/include/gssapi -I/usr/include/kerberosv5 -I../../libltdl -I/usr/sfw/include -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -pipe -D_REENTRANT -pthreads -Usparc -Uunix -Ui386 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -MT ModDevPoll.lo -MD -MP -MF .deps/ModDevPoll.Tpo -c ModDevPoll.cc -fPIC -DPIC -o .libs/ModDevPoll.o
In file included from ../../compat/compat_shared.h:202,
                 from ../../compat/compat.h:80,
                 from ../../include/squid.h:66,
                 from ModDevPoll.cc:51:
/usr/include/kerberosv5/com_err.h:20: warning: ignoring #pragma ident
ModDevPoll.cc: In function `void Comm::SelectLoopInit()':
ModDevPoll.cc:224: error: `fd_open' undeclared (first use this function)
ModDevPoll.cc:224: error: (Each undeclared identifier is reported only once for each function it appears in.)
ModDevPoll.cc: In function `void Comm::SetSelect(int, unsigned int, void (*)(int, void*), void*, time_t)':
ModDevPoll.cc:252: error: `fd_table' undeclared (first use this function)
ModDevPoll.cc: In function `comm_err_t Comm::DoSelect(int)':
ModDevPoll.cc:384: error: `fd_table' undeclared (first use this function)
gmake[3]: *** [ModDevPoll.lo] Error 1
gmake[3]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src/comm'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src'
gmake: *** [all-recursive] Error 1

== Adding --disable-devpoll ==> compilation OK ==

--disable-devpoll: Disable Solaris /dev/poll support.

Oracle doc about /dev/poll (http://docs.oracle.com/cd/E19253-01/816-5177/6mbbc4g9n/index.html):

"The /dev/poll device, associated driver and corresponding manpages may be removed in a future Solaris release. For similar functionality in the event ports framework, see port_create(3C). The /dev/poll driver is a special driver that enables you to monitor multiple sets of polled file descriptors. By using the /dev/poll driver, you can efficiently poll large numbers of file descriptors. Access to the /dev/poll driver is provided through open(2), write(2), and ioctl(2) system calls."

What kind of performance impact could disabling devpoll have?

Greetings

PS. Solaris compilation of 3.2 and 3.3 needs the following patch:

diff xstrto.h xstrto.h.ori
1d0
< #if defined(__cplusplus)
32d30
< #endif
Re: [squid-users] Re: Squid round-robin to 2 Apache's
Hi,

Just to update: it worked fine. The problem was with the redirect login CGI, which had some permission issues on the other server, hence it did not fail over earlier. It now listens only on 443 and works beautifully. Thanks Amos for your help.

On Wed, Feb 6, 2013 at 10:16 AM, paramkrish mkpa...@gmail.com wrote:
> Dear Squid Users:
>
> Do you see any gross difference in my setup? What I'm trying is something very basic, in my opinion: just having two Apaches running on 8080 behind Squid, plus an http-to-https redirection. While everything works great, I am concerned why squid does not detect the failed cache_peer parent and fail the request over to the other node. What could possibly be missing in the configs, or is this some sort of bug when squid is made to work with 443/SSL? Please guide me as I am completely stalled.
>
> Thanks a lot for the wonderful work you have been doing.
>
> PK
>
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-round-robin-to-2-Apache-s-tp4658362p4658394.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Squid 2.7STABLE9 'zph_sibling' seemingly not tagging traffic
Hey Amos.

I believe we're already testing the package provided by Eliezer and have encountered some issues with the current workings. I know we'll be in touch with him shortly to discuss. Hopefully it's something fixable, but he'll know more when we do :)

Thanks for the reply/info.

Nick
--
Nick Fennell
n...@tbfh.org

On 12 Feb 2013, at 10:08, Amos Jeffries squ...@treenet.co.nz wrote:

> On 12/02/2013 10:31 p.m., Nick Fennell wrote:
>> Hey Amos. I was waiting for that! We have a few requirements not yet satisfied in Squid 3.x, storeurl_rewrite features are a big one, so we're having to hold off until we're able to conjure something up. Can't get them ported across, can you? ;-)
>
> That feature is already ported into 3.HEAD thanks to Eliezer. You can make use of it by building that development package. As things stand today it will be in the 3.4 series. There is a bit of work and a lot of testing required to get it into 3.3, but if anyone is interested in helping out with that let me know. Eliezer has decided to concentrate on some needed further improvements now rather than back-ports.
>
>> With regard to 2.7 ZPH, are you aware of any bugs that may cause sibling_hit to be ineffective? I saw on the Lusca project that their code had an issue preventing the mark from ever being applied. I wonder if Squid suffers from a similar fault.
>
> I'm not aware of any bugs in the ZPH patch. They (ZPH) wrote two very different versions of the feature for 2.7 and 3.x, and we have extended and fixed the 3.x version in quite a few ways since it was merged. A lot of the bugs people have reported are either in code which was never setting TOS at all, or where they confused the up/down directionality of the packet flow.
>
> As for Lusca vs 2.7: yes, being a fork of that version it is likely that Lusca contains any bug known in 2.7.
>
> Amos
Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
Christian,

This sounds very similar to what I have seen with a few sites. My solution was to add the problematic domains to /etc/hosts (only the IPv4 address) and restart squid. I'm not proud or happy about this solution but it does the trick for me.

Kind regards,
/petter

On Tue, Feb 12, 2013 at 5:36 AM, Sandrini Christian (xsnd) x...@zhaw.ch wrote:
> That is what I guessed as well. But we cannot control their DNS, and the solution so far was not to check for AAAA records. It is silly for one domain, but it is quite an important one that is used a lot. Not sure if there are any alternatives? I thought that squid 3.2 is doing parallel lookups for AAAA and A records?
>
> -----Original Message-----
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Tuesday, 12 February 2013 10:54
> To: squid-users@squid-cache.org
> Subject: Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
>
> On 12/02/2013 8:41 p.m., Sandrini Christian (xsnd) wrote:
>> Hi
>>
>> I have now enabled ipv6
>>
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
>>     link/ether 00:50:56:a6:07:27 brd ff:ff:ff:ff:ff:ff
>>     inet 160.85.104.14/24 brd 160.85.104.255 scope global eth1
>>     inet6 fe80::250:56ff:fea6:727/64 scope link
>>        valid_lft forever preferred_lft forever
>>
>> When I dig for the AAAA record of ipv6.idrobot.net I don't get a timeout:
>>
>> dig AAAA ipv6.idrobot.net
>> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA ipv6.idrobot.net
>> ;; global options: +cmd
>> ;; Got answer:
>> ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34596
>> ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
>> ;; QUESTION SECTION:
>> ;ipv6.idrobot.net.  IN  AAAA
>> ;; AUTHORITY SECTION:
>> net.  900  IN  SOA  a.gtld-servers.net. nstld.verisign-grs.com. 1360654692 1800 900 604800 86400
>> ;; Query time: 17 msec
>> ;; SERVER: 160.85.192.100#53(160.85.192.100)
>> ;; WHEN: Tue Feb 12 08:38:40 2013
>> ;; MSG SIZE  rcvd: 107
>>
>> When I dig for the AAAA record of www2.zhlex.zh.ch I get one:
>>
>> dig AAAA www2.zhlex.zh.ch
>> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> AAAA www2.zhlex.zh.ch
>> ;; global options: +cmd
>> ;; connection timed out; no servers could be reached
>>
>> Do you have the same timeout as well with that host and ipv6 running? This is a domain which is queried a lot.
>
> Yes. I traced it through three CNAME redirections to a pair of DNS servers which do not respond to any queries.
>
> # dig zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
> ; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc1.djiktzh.ch
> ;; global options:  printcmd
> ;; connection timed out; no servers could be reached
>
> # dig zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
> ; <<>> DiG 9.3.6-P1 <<>> zhcompublicweb1.subd.djiktzh.ch @lc2.djiktzh.ch
> ;; global options:  printcmd
> ;; connection timed out; no servers could be reached
>
> Those DNS servers lc1.djiktzh.ch and lc2.djiktzh.ch are broken.
>
> Amos
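[Petter's workaround can be sketched as a small shell helper. The address 203.0.113.10 is a documentation-range placeholder and add_host_entry is a hypothetical name; remember that Squid must be restarted afterwards so its internal DNS cache is dropped.]

```shell
# Pin a problematic domain to an IPv4-only address in the hosts file.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

add_host_entry() {
  # $1 = IPv4 address, $2 = hostname; append only if not already present.
  grep -q "[[:space:]]$2\$" "$HOSTS_FILE" || \
    printf '%s\t%s\n' "$1" "$2" >> "$HOSTS_FILE"
}

# Example (placeholder address), followed by a Squid restart:
#   add_host_entry 203.0.113.10 www2.zhlex.zh.ch
#   /etc/init.d/squid restart
```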
[squid-users] Caching URLs with a ? in them?
I have a bunch of static content with appropriate Expires headers, but the URL contains a ?serial=123456 where the serial number is dynamic. Is squid smart enough to ignore the fact that the URL looks like a dynamic request, and use the Expires headers to see that it's indeed static/cacheable content?

--
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253
Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?
Many admins will be happy to know about these domains. The admins should properly maintain and fix them, or maybe get some help in finding the culprit for the problem. As I posted before, the ISOC list is full of requests for help regarding similar problems, and of solutions for them other than the way you have used.

Eliezer

On 2/12/2013 7:01 PM, Petter Abrahamsson wrote:
> Christian,
>
> This sounds very similar to what I have seen with a few sites. My solution was to add the problematic domains to /etc/hosts (only the IPv4 address) and restart squid. I'm not proud or happy about this solution but it does the trick for me.
>
> Kind regards,
> /petter

--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
Re: [squid-users] Caching URLs with a ? in them?
On 13/02/2013 10:48 a.m., Scott Baker wrote:
> I have a bunch of static content with appropriate Expires headers, but the URL contains a ?serial=123456 where the serial number is dynamic. Is squid smart enough to ignore the fact that the URL looks like a dynamic request,

It *is* a dynamic request. Look, see... the URL is constantly changing.

> and use the expire headers to see that it's indeed static/cacheable content?

Expires is relative to the URL. So if the URL changed it's a *new* object (MISS) with new expiry details. Get the picture?

See http://wiki.squid-cache.org/ConfigExamples/DynamicContent for the configuration directives to change for caching these responses. If you have a new install of Squid-3.1 or later, the default settings will cache them.

However, once you have them cached you will probably still see a lot of MISSes happening because the URLs keep changing. For the best cache HIT rate you need to look at why those serials exist in the URL at all. They are breaking cacheability for you and everyone else on the Internet. Do you have control over the origin server generating those URLs? If you could explain what the serial is for exactly, perhaps we could point you in the direction of fixing the object cacheability.

Amos
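[For reference, the advice on the wiki page Amos links amounts to removing any old blanket ban on query URLs and relying on the modern default refresh_pattern lines, roughly as they appear in recent default squid.conf files:]

```
# OLD advice - remove lines like these, which refuse to cache
# every query URL regardless of its Expires/Cache-Control headers:
#   acl QUERY urlpath_regex cgi-bin \?
#   cache deny QUERY

# CURRENT defaults - only deny caching to CGI/query responses that
# carry no explicit freshness information:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern .                 0 20% 4320
```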
Re: [squid-users] sharepoint pinning issue?
On 13/02/2013 3:49 a.m., Alexandre Chappaz wrote:

Hi, I know this is a subject that has been put on the table many times, but I wanted to share with you my experience with squid + sharepoint.

Squid Cache: Version 3.2.7-20130211-r11781

I am having an issue with authentication: when accessing the sharepoint server I do get a login/password popup, and I can log in and see some of the pages behind it, but when doing some operations, even though I am supposed to be logged in, the authentication popup comes back. Here is what I find in the access log:

1360679927.561     43 X.X.X.X TCP_MISS/200 652 GET http://saralex.hd.free.fr/_layouts/images/selbg.png - FIRSTUP_PARENT/192.168.100.XX image/png
  URL #1. No authentication required. Non-pinned connection used.

1360679928.543     37 X.X.X.X TCP_MISS/401 542 GET http://saralex.hd.free.fr/_layouts/listform.aspx? - PINNED/192.168.100.XX -
  URL #2. Sent to upstream on an already authenticated, PINNED connection. Upstream server requires further authentication details -- an authentication challenge?

1360679928.665     58 X.X.X.X TCP_MISS/401 795 GET http://saralex.hd.free.fr/_layouts/listform.aspx? - PINNED/192.168.100.XX -
  URL #2 repeated. Sent to upstream on an already authenticated, PINNED connection. Upstream server requires further authentication details -- possibly an authentication handshake request?

1360679928.753    229 X.X.X.X TCP_MISS/200 20625 GET http://saralex.hd.free.fr/_layouts/images/fgimg.png - FIRSTUP_PARENT/192.168.100.XX image/png
  URL #3. No authentication required. Non-pinned connection used.

1360679928.788     68 X.X.X.X TCP_MISS/302 891 GET http://saralex.hd.free.fr/_layouts/listform.aspx? - PINNED/192.168.100.XX text/html
  URL #2 repeated. Sent to upstream on an already authenticated, PINNED connection. Upstream server redirects the client to another URL -- authentication credentials accepted.

1360679928.921     45 X.X.X.X TCP_MISS/401 542 GET http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? - PINNED/192.168.100.XX -
  URL #4. Sent to upstream on an already authenticated, PINNED connection. Upstream server requires further authentication details -- an authentication challenge?

1360679929.019     47 X.X.X.X TCP_MISS/401 795 GET http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? - PINNED/192.168.100.XX -
  URL #4 repeated. Sent to upstream on an already authenticated, PINNED connection. Upstream server requires further authentication details -- possibly an authentication handshake request?

1360679929.656     81 X.X.X.X TCP_MISS/200 1986 GET http://saralex.hd.free.fr/_layouts/images/loadingcirclests16.gif - FIRSTUP_PARENT/192.168.100.XX image/gif
  URL #5. No authentication required. Non-pinned connection used.

1360679930.417   1322 X.X.X.X TCP_MISS/200 130496 GET http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? - PINNED/192.168.100.XX text/html
  URL #4 repeated. Sent to upstream on an already authenticated, PINNED connection. Upstream server provides the display response -- authentication credentials accepted.

1360679934.618     53 X.X.X.X TCP_MISS/401 542 GET http://saralex.hd.free.fr/_layouts/iframe.aspx? - PINNED/192.168.100.XX -

1360679934.729     51 X.X.X.X TCP_MISS/401 795 GET http://saralex.hd.free.fr/_layouts/iframe.aspx? - PINNED/192.168.100.XX -

Could this be a pinning issue? Is 2.7 STABLE managing these things in a nicer way?

Unknown, but I doubt it. This is Squid using a PINNED connection to relay traffic to an upstream server. That upstream server is rejecting the client's delivered credentials after each object. There is no sign of proxy authentication taking place; this re-challenge business is all between the client and the upstream server.

You need to look at whether these connections are being pinned and then closed, and why that is happening. Squid-3.2 offers debug level 11,2, which will give you a trace of the HTTP headers, so you can see whether the close is a normal operation from either end, or whether the connections are kept alive by Squid and the upstream server is just constantly forcing re-auth (it happens).

Amos
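Amos's "debug level 11,2" refers to the debug_options directive in squid.conf; section 11 is the HTTP traffic section. A minimal sketch of enabling it (the ALL,1 baseline is the usual default, shown here as an assumption):

```
# squid.conf: keep all debug sections at the default level 1,
# but raise section 11 (HTTP traffic) to level 2 so the HTTP
# request/response headers are traced.
debug_options ALL,1 11,2
```

The header traces are written to cache.log; level-2 tracing is verbose, so drop back to ALL,1 once the capture is done.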
Re: [squid-users] Squid 3.3x: UNLNK id(232) Error: no filename in shm buffer
On 13/02/2013 1:29 a.m., David Touzeau wrote:

Dear

I have these errors on Squid 3.3. What do they mean?

We recently added a number of messages similar to this one, to report error codes coming out of the kernel which were previously being ignored for no documented reason. This one appears to be happening a lot, due to a massive number of empty IPC packets flowing around. We are still trying to work out exactly what that means, in order to decide the best way to silence or fix it. Meanwhile, there is a workaround patch which can be used to quiet Squid, in http://bugs.squid-cache.org/show_bug.cgi?id=3763.

PS. If you have any interest in a proper fix, or insight that helps, please comment on the bug report.

Amos
[squid-users] squid 3.3.1 - assertion failed with dstdom_regex with IP based URL
I had reported this bug earlier, in Dec 2012, but it probably went unnoticed in the squid-dev group: http://www.squid-cache.org/mail-archive/squid-dev/201212/0099.html

So I am just re-posting, as it still exists in the stable branch, 3.3.1.

Hello,

I get the following when using squid 3.3.1:

2013/02/13 08:57:33 kid1| assertion failed: Checklist.cc:287: !needsAsync !matchFinished

Squid restarts after this. The culprit acl line seems to be this:

acl noaccess dstdom_regex -i /etc/squid/noaccess

This happens only when the URL is IP based instead of domain based, i.e. http://1.2.3.4

The Squid acl reference has this note for dstdom_regex:

# For dstdomain and dstdom_regex a reverse lookup is tried if a IP
# based URL is used and no match is found. The name none is used
# if the reverse lookup fails

So I suppose 3.3.1 is trying to do the reverse lookup and some kind of assertion fails. This bug does not exist in 3.2, as I did not notice it happening there. So please fix it.

Regards,
AMM

----- Forwarded Message -----
From: Amm ammdispose-sq...@yahoo.com
To: squid-...@squid-cache.org
Sent: Thursday, 13 December 2012 1:28 PM
Subject: assertion failed with dstdom_regex with IP based URL (at least for 3.3.0.2)
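For anyone trying to reproduce this, the reported setup amounts to a squid.conf fragment along these lines (the contents of the regex file are my assumption; the original file was not posted):

```
# /etc/squid/noaccess -- one domain regex per line, for example:
#   \.example\.com$
acl noaccess dstdom_regex -i /etc/squid/noaccess
http_access deny noaccess
```

With this in place, the assertion reportedly triggers only when a client requests an IP-based URL such as http://1.2.3.4/ through the proxy, i.e. on the code path where Squid attempts the reverse lookup described in the acl reference note.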
Re: [squid-users] Squid 3.3.1 / Solaris 10
For the record: please report these types of issues through Bugzilla.

On 13/02/2013 2:07 a.m., C. Pelissier wrote:

Hi, I've just tried a Solaris 10 compilation of the latest squid, 3.3.1. Here is my configure:

CFLAGS=-std=c99 ./configure --prefix=$PREFIX --disable-strict-error-checking --localstatedir=/var/squid --with-pthreads --enable-default-err-language=French --enable-err-languages=French --with-build-environment=POSIX_V6_ILP32_OFFBIG --enable-auth-basic="LDAP NCSA" --enable-digest-auth-helpers=password --enable-external-acl-helpers="ldap_group ip_user" --enable-eui --enable-ssl --with-openssl=/usr/sfw --with-large-files LDFLAGS=-R/usr/sfw/lib

Configure passed, but compilation failed here:

libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/include/gssapi -I/usr/include/kerberosv5 -I../../libltdl -I/usr/sfw/include -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -pipe -D_REENTRANT -pthreads -Usparc -Uunix -Ui386 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -MT ModDevPoll.lo -MD -MP -MF .deps/ModDevPoll.Tpo -c ModDevPoll.cc -fPIC -DPIC -o .libs/ModDevPoll.o
In file included from ../../compat/compat_shared.h:202,
                 from ../../compat/compat.h:80,
                 from ../../include/squid.h:66,
                 from ModDevPoll.cc:51:
/usr/include/kerberosv5/com_err.h:20: warning: ignoring #pragma ident
ModDevPoll.cc: In function `void Comm::SelectLoopInit()':
ModDevPoll.cc:224: error: `fd_open' undeclared (first use this function)
ModDevPoll.cc:224: error: (Each undeclared identifier is reported only once for each function it appears in.)
ModDevPoll.cc: In function `void Comm::SetSelect(int, unsigned int, void (*)(int, void*), void*, time_t)':
ModDevPoll.cc:252: error: `fd_table' undeclared (first use this function)
ModDevPoll.cc: In function `comm_err_t Comm::DoSelect(int)':
ModDevPoll.cc:384: error: `fd_table' undeclared (first use this function)
gmake[3]: *** [ModDevPoll.lo] Error 1
gmake[3]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src/comm'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/export/home/peli/pub.d/squid-3.3.1/src'
gmake: *** [all-recursive] Error 1

Adding the following line to src/comm/ModDevPoll.cc should fix it:

#include "globals.h"

However, globals.h is one file we are trying to erase from Squid in upcoming versions. Can I enlist your assistance build-testing a longer-term fix?

Adding --disable-devpoll: compilation OK.

--disable-devpoll
  Disable Solaris /dev/poll support.

The Oracle doc about /dev/poll (http://docs.oracle.com/cd/E19253-01/816-5177/6mbbc4g9n/index.html) says:

"The /dev/poll device, associated driver and corresponding manpages may be removed in a future Solaris release. For similar functionality in the event ports framework, see port_create(3C). The /dev/poll driver is a special driver that enables you to monitor multiple sets of polled file descriptors. By using the /dev/poll driver, you can efficiently poll large numbers of file descriptors. Access to the /dev/poll driver is provided through open(2), write(2), and ioctl(2) system calls."

What kind of performance impact could disabling devpoll cause?

You drop back to the slow poll() or select() system calls. They are a noticeable one- to two-digit number of percentage points slower in bandwidth/sec under heavy load.

Greetings

PS. Solaris compilation of 3.2 and 3.3 needs the following patch:

diff xstrto.h xstrto.h.ori
1d0
< #if defined(__cplusplus)
32d30
< #endif

Checked and added. Thank you.

Amos
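Until the longer-term fix lands, either workaround from this thread can be applied at build time; a sketch (the bracketed placeholder stands in for the rest of the reporter's configure line, which is not repeated here):

```
# Option 1: patch the missing declarations into the /dev/poll module.
# In src/comm/ModDevPoll.cc, alongside the existing includes, add:
#     #include "globals.h"
# (globals.h declares fd_open and fd_table, the two undeclared
# identifiers in the errors above.)

# Option 2: build without /dev/poll support entirely; Squid then
# falls back to the slower poll()/select() event loops.
CFLAGS=-std=c99 ./configure --disable-devpoll [other options]
gmake
```

Option 1 keeps the faster /dev/poll event loop, at the cost of carrying a local patch; option 2 needs no source change but accepts the throughput penalty Amos describes.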