[squid-users] RE: squid-users Digest 11 Feb 2011 21:14:30 -0000 Issue 3732
> It does not matter where the files are generated. As long as they are
> stored on the Squid box for Squid to access.
>
> For Squid you do not have to install anything into OpenSSL, which is
> just a library.

Thanks for the pointers Amos. I'm going to attempt to do it this way:

1) Export the file from the Windows server as a .pfx file
2) Separate the private key from the .pfx file:
   openssl pkcs12 -in windows.pfx -out outputfile.txt -nodes
3) Extract the private key from outputfile.txt and store it as private.key
4) Then add this line to squid.conf:
   https_port 443 cert=/usr/newrprgate/CertAuth/verisign.cert key=/usr/newrprgate/CertAuth/private.key defaultsite=mywebsite.mydomain.com vhost

Where:
private.key = the original private key of the Windows server that generated the original request
verisign.cert = the wildcard certificate returned by Verisign

Can anybody see any immediate faults with doing it this way?

Thanks

John

This email and any files transmitted with it are intended solely for the named recipient and may contain sensitive, confidential or protectively marked material up to the central government classification of "RESTRICTED" which must be handled accordingly. If you have received this e-mail in error, please immediately notify the sender by e-mail and delete it from your system; unless you are the named recipient (or authorised to receive it for the recipient) you are not permitted to copy, use, store, publish, disseminate or disclose it to anyone else. E-mail transmission cannot be guaranteed to be secure or error-free as it could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses, and therefore the Council accepts no liability for any such errors or omissions. Unless explicitly stated otherwise, views or opinions expressed in this email are solely those of the author, do not necessarily represent those of the Council and are not intended to be legally binding.
All Council network traffic and GCSX traffic may be subject to recording and/or monitoring in accordance with relevant legislation. South Tyneside Council, Town Hall & Civic Offices, Westoe Road, South Shields, Tyne & Wear, NE33 2RL, Tel: 0191 427 1717, Website: www.southtyneside.info
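The extraction in steps 2) and 3) can be done in one pass per file: openssl pkcs12 can write the private key and the certificate into separate PEM files directly, which avoids hand-editing outputfile.txt. A minimal sketch; the throwaway key/cert generated first merely stands in for the real windows.pfx, and pass:secret is a placeholder export password:

```shell
# Create a throwaway key + self-signed wildcard cert and bundle them into a
# PKCS#12 file, standing in for the windows.pfx exported from the Windows box.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=*.mydomain.com" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -passout pass:secret -out windows.pfx

# Pull the private key and the certificate out into separate PEM files,
# ready for the https_port cert=/key= options.
openssl pkcs12 -in windows.pfx -passin pass:secret \
    -nocerts -nodes -out private.key
openssl pkcs12 -in windows.pfx -passin pass:secret \
    -clcerts -nokeys -out verisign.cert

grep -c "PRIVATE KEY" private.key    # BEGIN + END markers
grep -c "CERTIFICATE" verisign.cert  # BEGIN + END markers
```

The resulting private.key and verisign.cert are plain PEM files, which is the format Squid's cert=/key= options expect.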
RE: [squid-users] url blocking
Thanks, I have installed ufdbGuard and defined it in squid but it doesn't seem to redirect anything to ufdbGuard. The following is what I have defined in squid.conf:

url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient
url_rewrite_children 64

Please help..

> Date: Thu, 10 Feb 2011 12:36:59 -0200
> From: marcus.k...@urlfilterdb.com
> To: squ...@treenet.co.nz
> CC: squid-users@squid-cache.org; zart...@hotmail.com
> Subject: Re: [squid-users] url blocking
>
> ufdbGuard is a URL filter for Squid that does exactly what Zartash needs.
> It transforms codes like %xx to their respective characters and does
> URL matching based on the normalised/translated URLs.
> It also supports regular expressions, Google Safesearch enforcement and more.
>
> Marcus
>
> Amos Jeffries wrote:
> > On 10/02/11 18:25, Zartash . wrote:
> >>
> >> So is there any way to block %?
> >>
> >
> > If it actually exists in the URL (not just the browser display version)
> > using '%' in the pattern will match it. Block with that ACL.
> >
> > If it's encoding something then no, you can't block it directly. It's a
> > URL wire-level encoding byte.
> >
> > You could decode the %xx code and figure out what character it is
> > hiding. Match and block on that.
> >
> > Or, if you don't care what character it's encoding, use the '.' regex
> > control to match any single byte.
> >
> > Amos
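One thing worth checking: ufdbgclient is only a thin client helper; the actual filtering is done by the ufdbguardd daemon, which must be running (with its URL tables loaded) before Squid can get any verdicts back. A hedged sketch of the usual setup; the paths and options below are assumptions based on a default /usr/local/ufdbguard source install, not taken from this thread:

```
# squid.conf - hand URLs to the ufdbGuard client helper
url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient
url_rewrite_children 64

# Before (re)starting Squid, start the daemon that ufdbgclient talks to:
#   /usr/local/ufdbguard/bin/ufdbguardd -c /usr/local/ufdbguard/etc/ufdbGuard.conf
# and confirm in ufdbguardd's own log that it loaded its URL tables.
# If the daemon is down, ufdbgclient cannot block or redirect anything.
```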
FW: [squid-users] Configuring SQUID in Windows to authenticate with Active Directory
Hi Guido,

Thank you for your email. I added the .exe extension and now Squid starts without any errors. However, I have a feeling that it does not talk to Microsoft Active Directory to authenticate users - if I key in an arbitrary value for the -w "password" option, Squid still starts. I was expecting to see an error. cache.log has the following entry:

2011/01/27 16:51:09| Accepting proxy HTTP connections at 0.0.0.0, port 3128, FD 14.

Is that normal? Also, if I try to use a browser (I used Firefox), it prompts for user credentials, but if I use any usernames in Microsoft Active Directory it does not authenticate against those usernames. The browser keeps on prompting for a username and a password. access.log is filled with TCP_DENIED/407 errors.

Any assistance is much appreciated.

Thanks and Regards
Lakshman

From: Guido Serassio [guido.seras...@acmeconsulting.it]
Sent: Sunday, 13 February 2011 5:35 PM
To: Liyanage, Lakshman; squid-users@squid-cache.org
Subject: R: [squid-users] Configuring SQUID in Windows to authenticate with Active Directory

Hi,

You must add the .exe extension after squid_ldap_auth as noted in the documentation.

Regards

Guido Serassio
Acme Consulting S.r.l.
Microsoft Gold Certified Partner
Via Lucia Savarino, 1 - 10098 Rivoli (TO) - ITALY
Tel.: +39.011.9530135  Fax: +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it

> -----Original message-----
> From: Liyanage, Lakshman [mailto:lakshman.liyan...@jcu.edu.au]
> Sent: Saturday 12 February 2011 4.41
> To: squid-users@squid-cache.org
> Subject: [squid-users] Configuring SQUID in Windows to authenticate with
> Active Directory
>
> Hello All,
> I am new to SQUID and hence require some help.
> I have SQUID 2.7 Stable8 installed on a Windows Server 2008 R2. I am now
> trying to configure it to use MS Active Directory.
> I have the following lines in the .conf file:
> ----
> auth_param basic program c:/squid/libexec/squid_ldap_auth -R
>     -b "dc=ad-mycompany,dc=domain,dc=com"
>     -D "cn=admin,cn=Users,dc=ad-mycompany,dc=domain,dc=com"
>     -w "password" -f sAMAccountName=%s -h myipnumber
> auth_param basic children 5
> auth_param basic realm My_Company
> auth_param basic credentialsttl 5 minute
> ----
> When I try to start SQUID, Windows throws "Error 1067: The process
> terminated unexpectedly" at me. I have a web server/service running on
> port 80 and 443.
> What am I missing here?
> Many many thanks for your help
>
> Lakshman
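One way to narrow this down is to exercise the helper outside Squid: basic-auth helpers speak a one-line stdin/stdout protocol, so you can feed them a credential pair by hand. A sketch; the jsmith/secret account is a placeholder, and the actual invocation is commented out because it needs network reach to the AD server:

```shell
# Squid basic auth helpers read one "username password" line per request on
# stdin and answer OK or ERR on stdout. To test the helper by hand, pipe it
# a line like the one below (options copied from the squid.conf in this
# thread; run this on the proxy box with AD reachable):
#
#   printf 'jsmith secret\n' | c:/squid/libexec/squid_ldap_auth.exe -R \
#       -b "dc=ad-mycompany,dc=domain,dc=com" \
#       -D "cn=admin,cn=Users,dc=ad-mycompany,dc=domain,dc=com" \
#       -w "password" -f "sAMAccountName=%s" -h myipnumber
#
# ERR (or an LDAP error on stderr) for a known-good account means the bind
# DN, bind password or search filter is wrong. Squid never validates -w at
# startup, which is why an arbitrary value still lets it start.
printf 'jsmith secret\n'   # the exact input line format the helper expects
```

A stream of TCP_DENIED/407 entries with the browser endlessly re-prompting is exactly what a helper that always answers ERR looks like from the outside.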
[squid-users] Polygraph Kerberos patch
Hi

Here is a patch for the latest Polygraph version to perform Kerberos-based performance testing. Apply the attached patch and rebuild configure and other files with:

aclocal
autoheader
automake -a
autoreconf -f -i

Now run ./configure ...

Four new options are introduced:

1) kerberos_auth = true;
   Selects Kerberos over NTLM in Negotiate requests.
2) kerberos_config_path = "krb5_WINDOWS.conf";
   Defines the Kerberos configuration file to use.
3) kerberos_clear_cache = true;
   Do not cache credentials but re-authenticate the user for every HTTP request. Creates a high amount of Kerberos traffic to the KDC or Active Directory and is not recommended.
4) kerberos_proxy_spn = "HTTP/..." (and kerberos_server_spn = "HTTP/..." for testing web server performance)
   Setting the SPN avoids DNS resolution of the proxy or web server hostname to IP address and vice versa.

Simple Polygraph configuration:

/*
 * A very simple "Hello, World!" workload
 */

// this is just one of the simplest workloads that can produce hits
// never use this workload for benchmarking

// SimpleContent defines properties of content that the server generates;
// if you get no hits, set SimpleContent.obj_life_cycle to cntStatic, which
// is defined in workloads/include/contents.pg
Content SimpleContent = {
    size = exp(13KB); // response sizes distributed exponentially
    cachable = 80%;   // 20% of content is uncachable
};

// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
    kind = "S101";
    contents = [ SimpleContent ];
    direct_access = contents;
    addresses = [ '192.168.1.12:9090' ]; // where to create these server agents
};

DnsResolver dr = {
    servers = [ '127.0.0.1:53' ];
    timeout = 5sec;
};

AddrMap M = {
    addresses = [ '192.168.1.10', '192.168.1.11', '192.168.1.12' ];
    names = [ 'client.suse.home', 'proxy.suse.home', 'server.suse.home' ];
};

// a primitive robot
Robot R1 = {
    kind = "R101";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
    origins = S.addresses;          // where the origin servers are
    addresses = [ '192.168.1.10' ]; // where these robot agents will be created
    //kerberos_clear_cache = true;
    kerberos_auth = true;
    kerberos_config_path = "krb5_SUSE.conf";
    kerberos_proxy_spn = "HTTP/proxy.suse.home";
    credentials = [ "user1:user1" ];
    dns_resolver = dr;
};

// a primitive robot
Robot R2 = {
    kind = "R101";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
    origins = S.addresses;          // where the origin servers are
    addresses = [ '192.168.1.10' ]; // where these robot agents will be created
    //kerberos_clear_cache = true;
    kerberos_auth = true;
    kerberos_config_path = "krb5_WINDOWS.conf";
    // user can be the same as in Robot R1 as the default realm in krb5
    // will differentiate them as user1@ and user1@
    kerberos_proxy_spn = "HTTP/proxy.suse.home";
    credentials = [ "user1:user1" ];
    dns_resolver = dr;
};

// commit to using these servers and robots
use(M);
use(S, R1, R2);

Run the client with:

/opt/polygraph-4.0.11/bin/polygraph-client --proxy 192.168.1.11:3128 --config /home/markus/mysources/polygraph/simple_proxy.pg --verb_lvl 10 --log client.log

Simple Kerberos configuration file:

[libdefaults]
    default_realm = WIN2003R2.HOME
    default_keytab_name = /etc/krb5.keytab
    default_tgs_enctypes = rc4-hmac des3-cbc-sha1 des-cbc-crc des-cbc-md5
    default_tkt_enctypes = rc4-hmac des3-cbc-sha1 des-cbc-crc des-cbc-md5
    permitted_enctypes = rc4-hmac des3-cbc-sha1 des-cbc-crc des-cbc-md5
    # Heimdal settings
    default_etypes = arcfour-hmac-md5 des3-cbc-sha1 des-cbc-crc des-cbc-md5
    default_etypes_des = des-cbc-crc des-cbc-md5
    # DNS settings to reduce DNS traffic and rely on the settings below
    dns_lookup_kdc = no
    dns_lookup_realm = no

[realms]
    WIN2003R2.HOME = {
        kdc = 192.168.1.10
        admin_server = 192.168.1.10
    }

[domain_realm]
    .win2003r2.home = WIN2003R2.HOME
    win2003r2.home = WIN2003R2.HOME

[logging]

Using IP addresses reduces the load on DNS!

In the case of a high number of connections you may see error 1765328228 from krb5_get_init_creds_password. This can happen when more than FD_SETSIZE file descriptors are open. The only way to avoid this is to recompile the Kerberos library after raising the maximum file descriptor number with sysctl (on Linux) and changing the FD_SETSIZE define in the header file typesizes.h (depending on the OS it is defined in other header files).

Any feedback is appreciated.

Regards
Markus

http://www.mail-archive.com/squid-dev@squid-cache.org/msg14948/polygraph-4.0.11-kerberos-v7.patch
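Before recompiling the Kerberos library it is worth confirming where the limits actually sit on the client box. On Linux the checks look roughly like this; the header locations searched are glibc-specific assumptions, and other libcs define FD_SETSIZE elsewhere:

```shell
# Per-process file descriptor limit for the shell that will run polygraph-client
ulimit -n

# System-wide descriptor limit; raise with e.g. `sysctl -w fs.file-max=...`
cat /proc/sys/fs/file-max

# Where glibc pins the select() bitmap size (usually 1024); prints nothing
# if your libc keeps the define in a different header.
grep -rhs 'FD_SETSIZE' /usr/include/bits/typesizes.h \
    /usr/include/linux/posix_types.h || true
```

Raising ulimit -n alone does not help past FD_SETSIZE, because the select()-based code paths in the Kerberos library are compiled against the header constant.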
Re: [squid-users] Squid 3.2.0.5 beta is available
* Ralf Hildebrandt :
> I get a compilation error:
>
> make[3]: Entering directory `/usr/src/squid-3.2.0.5/src/adaptation'
> Making all in icap
> make[4]: Entering directory `/usr/src/squid-3.2.0.5/src/adaptation/icap'
> /bin/bash ../../../libtool --tag=CXX --mode=compile g++
> -DHAVE_CONFIG_H -I../../.. -I../../../include -I../../../lib
> -I../../../src -I../../../include -I/usr/include -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
> -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -c -o
> ModXact.lo ModXact.cc
> libtool: compile: g++ -DHAVE_CONFIG_H -I../../.. -I../../../include
> -I../../../lib -I../../../src -I../../../include -I/usr/include -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
> -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -c ModXact.cc
> -fPIC -DPIC -o .libs/ModXact.o
> ModXact.cc: In member function 'void
> Adaptation::Icap::ModXact::makeUsernameHeader(const HttpRequest*,
> MemBuf&)':
> ModXact.cc:1425: error: 'const class HttpRequest' has no member named
> 'auth_user_request'
> ModXact.cc:1426: error: 'const class HttpRequest' has no member named
> 'auth_user_request'
> make[4]: *** [ModXact.lo] Error 1
> make[4]: Leaving directory `/usr/src/squid-3.2.0.5/src/adaptation/icap'

That is due to --disable-auth :)
Once I remove it from my ./configure options, it compiles all the way through.

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebra...@charite.de | http://www.charite.de
Re: [squid-users] Squid 3.2.0.5 beta is available
* Amos Jeffries :
> The Squid HTTP Proxy team is very pleased to announce the
> availability of the Squid-3.2.0.5 beta release!

I get a compilation error:

make[3]: Entering directory `/usr/src/squid-3.2.0.5/src/adaptation'
Making all in icap
make[4]: Entering directory `/usr/src/squid-3.2.0.5/src/adaptation/icap'
/bin/bash ../../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../../.. -I../../../include -I../../../lib -I../../../src -I../../../include -I/usr/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -c -o ModXact.lo ModXact.cc
libtool: compile: g++ -DHAVE_CONFIG_H -I../../.. -I../../../include -I../../../lib -I../../../src -I../../../include -I/usr/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -c ModXact.cc -fPIC -DPIC -o .libs/ModXact.o
ModXact.cc: In member function 'void Adaptation::Icap::ModXact::makeUsernameHeader(const HttpRequest*, MemBuf&)':
ModXact.cc:1425: error: 'const class HttpRequest' has no member named 'auth_user_request'
ModXact.cc:1426: error: 'const class HttpRequest' has no member named 'auth_user_request'
make[4]: *** [ModXact.lo] Error 1
make[4]: Leaving directory `/usr/src/squid-3.2.0.5/src/adaptation/icap'

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebra...@charite.de | http://www.charite.de
Re: [squid-users] squid-3.2.0.5 compilation errors
On 13/02/11 20:18, Yonah Russ wrote:

Hi,

I tried to compile the latest beta this morning and the compilation failed. I'm running Ubuntu 10.04.2 LTS. I'm using the following configure string:

./configure --enable-build-info '--enable-storeio=aufs,coss,diskd,ufs' '--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-delay-pools' --enable-esi --enable-icap-client '--enable-ssl' --enable-forw-via-db '--enable-cache-digests' '--enable-follow-x-forwarded-for' --disable-ident-lookups --enable-ssl-crtd '--enable-default-hostsfile=/etc/hosts' --enable-auth --enable-auth-basic --enable-auth-ntlm --enable-auth-negotiate --enable-auth-digest '--enable-log-daemon-helpers=file' --enable-external-acl-helpers '--enable-x-accelerator-vary' --disable-translation

It fails with the following message:

/bin/bash ../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I../../libltdl -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT MyPortName.lo -MD -MP -MF .deps/MyPortName.Tpo -c -o MyPortName.lo MyPortName.cc
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I../../libltdl -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT MyPortName.lo -MD -MP -MF .deps/MyPortName.Tpo -c MyPortName.cc -fPIC -DPIC -o .libs/MyPortName.o
In file included from ../../src/ProtoPort.h:10, from MyPortName.cc:37:
../../src/ssl/gadgets.h:54: error: variable or field 'TXT_DB_free_cpp' declared void
../../src/ssl/gadgets.h:54: error: 'TXT_DB' was not declared in this scope
../../src/ssl/gadgets.h:54: error: 'a' was not declared in this scope
../../src/ssl/gadgets.h:55: error: 'TXT_DB' was not declared in this scope
../../src/ssl/gadgets.h:55: error: 'TXT_DB_free_cpp' was not declared in this scope
../../src/ssl/gadgets.h:55: error: template argument 1 is invalid
../../src/ssl/gadgets.h:55: error: template argument 2 is invalid
../../src/ssl/gadgets.h:55: error: invalid type in declaration before ';' token
make[3]: *** [MyPortName.lo] Error 1
make[3]: Leaving directory `/root/squid-3.2.0.5/src/acl'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/squid-3.2.0.5/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/squid-3.2.0.5/src'
make: *** [all-recursive] Error 1

Thanks,
Yonah

This would be a good thing to bring to the developers' attention (via the squid-dev mailing list). The guys playing with SSL don't read this users list.

It appears that your OpenSSL installation is missing or not able to be found by Squid. The bug is probably that configure seems not to have told you that earlier in the build.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4
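A quick way to test Amos's theory before digging into the configure output: check whether the OpenSSL development headers, which declare the TXT_DB type that gadgets.h wraps, are installed at all. A sketch, assuming a cc toolchain on PATH:

```shell
# Runtime library present?
openssl version

# Can the preprocessor find the header that declares TXT_DB? Without the
# OpenSSL development package (libssl-dev on Ubuntu) this reports missing.
echo '#include <openssl/txt_db.h>' | cc -E -xc - >/dev/null 2>&1 \
    && echo headers-found || echo headers-missing
```

headers-missing combined with --enable-ssl and --enable-ssl-crtd in the configure string would explain exactly this kind of failure; installing the development package and re-running ./configure is the usual fix.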
Re: [squid-users] Reverse Proxy and Externally Generated Wildcard SSL Certificates
On 13/02/11 21:12, John Gardner wrote:

Hi everyone. I've got a query about running Squid as a Reverse Proxy that I hope someone can answer. Over the past year, I've been tasked with introducing several Squid servers into our organisation; most of them so far have been internal caching proxies, but I'm now at the stage where I need to implement a Reverse Proxy (RP) in our DMZ. We're going to offload the SSL onto the RP using a wildcard SSL certificate, and during testing I used the advice here: http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate. This was great to test everything and worked well.

However, now I'm ready to put this into a production environment and I have to deal with the fact that we are fundamentally a Windows house. They have already procured wildcard SSL certificates from Verisign, where the original CSR was generated on a Windows server, sent off to the CA (Verisign), and then the wildcard certificate was returned to us. My question is quite simple: how do I import the wildcard certificate into OpenSSL on the RP server? All the examples I've seen online assume that you're generating the CSR on the proxy server itself, but I don't have that luxury unfortunately.

I know this is more of an OpenSSL question than a pure Squid question; I was just hoping that someone on the list has already done this and can give me some advice.

Thanks in advance.

John

It does not matter where the files are generated. As long as they are stored on the Squid box for Squid to access.

For Squid you do not have to install anything into OpenSSL, which is just a library.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.4
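To make Amos's point concrete: once the key and certificate are on the Squid box as PEM files, the reverse-proxy listener simply references them by path; there is no "import into OpenSSL" step. A hedged squid.conf sketch; the paths, site name and origin server address are placeholders, not details from this thread:

```
# SSL offload on the reverse proxy; cert/key are ordinary PEM files,
# regardless of which machine generated the original CSR.
https_port 443 cert=/etc/squid/wildcard.cert key=/etc/squid/wildcard.key \
    defaultsite=www.example.com vhost
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
```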
[squid-users] Reverse Proxy and Externally Generated Wildcard SSL Certificates
Hi everyone. I've got a query about running Squid as a Reverse Proxy that I hope someone can answer. Over the past year, I've been tasked with introducing several Squid servers into our organisation; most of them so far have been internal caching proxies, but I'm now at the stage where I need to implement a Reverse Proxy (RP) in our DMZ. We're going to offload the SSL onto the RP using a wildcard SSL certificate, and during testing I used the advice here: http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate. This was great to test everything and worked well.

However, now I'm ready to put this into a production environment and I have to deal with the fact that we are fundamentally a Windows house. They have already procured wildcard SSL certificates from Verisign, where the original CSR was generated on a Windows server, sent off to the CA (Verisign), and then the wildcard certificate was returned to us. My question is quite simple: how do I import the wildcard certificate into OpenSSL on the RP server? All the examples I've seen online assume that you're generating the CSR on the proxy server itself, but I don't have that luxury unfortunately.

I know this is more of an OpenSSL question than a pure Squid question; I was just hoping that someone on the list has already done this and can give me some advice.

Thanks in advance.

John