Re: [squid-users] Problem downloading files greater than 2 GB
Jacques Beaudoin wrote:
Hi, sorry for my late reply, I was doing more testing. My OS version is SUSE Enterprise 10.2, 32-bit, kernel 2.6.16, with 16 GB memory on my server.

Ouch. Yes, definitely go to a 64-bit kernel. It will already be having other problems simply addressing most of that RAM.

I have the message "preventing off_t overflow" in my squid log. Found this message after a Google search.

Squid can handle 64-bit offsets for 'large' files even if Squid itself is 32-bit, provided the build environment and underlying kernel can support the larger types. It sounds to me like the kernel Squid was built against could not support it. Going to a 64-bit kernel and rebuilding Squid may indeed be what you require.

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
Re: [squid-users] Re: Yahoo mail Display problem
Pls ignore my last email.

Best Regards,
.Goody.

----- Original Message -----
From: goody goody think...@yahoo.com
To: Amos Jeffries squ...@treenet.co.nz; squid-users@squid-cache.org
Sent: Wed, April 21, 2010 10:25:58 AM
Subject: Re: [squid-users] Re: Yahoo mail Display problem

Thanks for your help Amos. Actually, the reason behind the question was my previous experience with version 3.0.4, which I installed but which kept shutting down after running for some time. If there is no such serious problem with 3.1.1, I would definitely love to install the latest to benefit from the new features.

Best Regards,

----- Original Message -----
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Tue, April 20, 2010 6:31:58 PM
Subject: Re: [squid-users] Re: Yahoo mail Display problem

goody goody wrote:
Thanks for the reply. Please let me know which version of Squid (2.7 or 3.1.1) is most stable, i.e. bug free, because I am going to deploy it in a production environment.

Both the same by that measure: 126 bugs and enhancement requests each, with 2.7 being the oldest version still supported.

We do recommend trying 3.1 first. Coming from 2.5 you will not already be using any of the features that have locked people into 2.7 use. Be careful with the configuration file though, since there are now two full versions' worth of changes you have to leap over. If you need any help with the conversion, the release notes and this list are here.

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
Re: [squid-users] unable to bypass AUP page with local servers
Johnson, S wrote:

Hello, I've got a weird issue that I've been running into off and on; I can finally duplicate it regularly now. I'm working with a public network that we've separated from the local network. We have web resources that are on the external side of the squid box. This is what our network looks like:

    public network 65.80.133.x
          |
          |
          |       public network
      firewall---(nat) DMZ (192.168.80.x/23)
          |            (192.168.2.0/24)
          |            (web servers)
          |
    private network (10.x.x.x)

The squid server here is configured with an AUP page with a click-through to continue to the site the user was originally trying to reach. Any page outside of our network altogether works great; users get the AUP and click through it. However, if they try to access the local web server, which shares the same external subnet as the squid server, then I cannot click past the AUP.

To make this a little more complex, I'm attempting to do this through transparent proxy. I've also got DNS configured to provide a WPAD file. If I use the autoproxy config in the browser then it works just fine (which is why it was working for me). Once I turn this off in the browser I once again cannot get to the local web server, but other outside sites work just fine.

I don't see any hits in the log if I try to browse the local web server, which makes me believe the traffic isn't even hitting the proxy. However, it should, since there are no local routes on the workstation that would do otherwise. It's like the proxy server isn't picking up the packets at all...

Oh, one more weird thing: if I set myweb in the acl below at the top of the ACL list then I'm able to get to the local servers, but the AUP page never shows if their homepage is set to the local web server. I guess I would expect this behavior since I've never denied the session. I've tried moving the myweb acl around the whole list but I don't get any other results...
This is my config:

# TAG: acl
#Recommended minimum configuration:
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl to_localbox dst 192.168.80.5/32
acl myweb dst 64.80.132.1/32
follow_x_forwarded_for allow localhost
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
external_acl_type session ttl=10 children=1 negative_ttl=0 concurrency=200 %SRC /usr/lib/squid/squid_session -t 1800
acl session external session
acl localnet src 192.168.80.0/23   # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

# TAG: http_access
http_access allow to_localbox
deny_info http://192.168.80.5/index.php?url=%s session

Using the IP address in the URL like that breaks when NAT is involved. Clients outside the 192.* routable network won't ever be able to open the page directly. You need some form of publicly resolvable domain name that resolves to the relevant IP for each network.

#http_access allow myweb    #trying different locations for the session to be set
http_access deny !Safe_portshttp_access allow session

I hope that was a typo of the cut-n-paste process?

http_access allow SSL_ports
http_access allow CONNECT SSL_ports
http_access deny !session
http_access allow myweb
http_access deny !Safe_ports
http_access deny all
http_port 3128 transparent

Due to CVE-2009-0801 it is no longer safe practice to receive NAT-intercepted traffic on the same port as normal proxy traffic. Another port should be chosen and secured for the private channel between Squid and the firewall doing NAT.
Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
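A minimal sketch of the split-port advice above (the second port number is illustrative, not from the thread): keep one port for normal browser-configured traffic and a second port that only the firewall's NAT rules feed.

```
# Normal forward-proxy port, advertised via WPAD / browser settings.
http_port 3128

# Separate port that ONLY the firewall's NAT/REDIRECT rules point at.
# Secure it (e.g. with firewall rules) so clients cannot reach it directly.
http_port 3129 transparent
```

The firewall's redirect rule then targets 3129 instead of 3128, so intercepted connections with forged Host headers cannot poison responses served to regular proxy clients.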
Re: [squid-users] What's the difference between vhost and vport?
On Wed, Apr 21, 2010 at 9:07 AM, yjyj yangjing001...@gmail.com wrote:
Hi, I know that 'vhost' and 'vport' are used in reverse proxy mode. What's the difference between them?

You should check squid.conf for details. vhost: for host-based VS. vport: for port-based VS.

And what about 'accel'? It is said that 'vhost' and 'vport' imply 'accel' in the default squid.conf. Is it necessary in reverse proxy mode?

accel means reverse proxy mode; it's a required option if you are running Squid in that mode.

--
Jeff Pang
http://home.arcor.de/pangj/
Re: [squid-users] What's the difference between vhost and vport?
2010/4/21 Jeff Pang pa...@arcor.de:
On Wed, Apr 21, 2010 at 9:07 AM, yjyj yangjing001...@gmail.com wrote:
Hi, I know that 'vhost' and 'vport' are used in reverse proxy mode. What's the difference between them?

You should check squid.conf for details. vhost: for host-based VS. vport: for port-based VS.

The squid.conf says:

vhost  Accelerator mode using Host header for virtual domain support. Implies accel.
vport  Accelerator with IP based virtual host support. Implies accel.

I don't really understand what they mean. For example, what's the difference among the following lines? (Squid's IP is 192.168.1.1)

http_port 192.168.1.1:80 vhost vport
http_port 192.168.1.1:80 vhost
http_port 192.168.1.1:80 vport
Re: [squid-users] What's the difference between vhost and vport?
On Wed, Apr 21, 2010 at 3:50 PM, yjyj yangjing001...@gmail.com wrote:
The squid.conf says:
vhost  Accelerator mode using Host header for virtual domain support. Implies accel.
vport  Accelerator with IP based virtual host support. Implies accel.
I don't really understand what they mean.

For the general concept of a virtual host, see the virtual host configuration in Apache's httpd.conf, and check Apache's online documentation: http://httpd.apache.org/docs/1.3/vhosts/

--
Jeff Pang
http://home.arcor.de/pangj/
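To make the difference concrete, here is an illustrative (untested) pair of accelerator setups; the comments are my summary of the squid.conf descriptions quoted above, not from the thread:

```
# vhost: the request URL is rebuilt from the client's Host: header,
# so many domains can be served through one listening IP:port.
http_port 192.168.1.1:80 accel vhost

# vport: the request URL is rebuilt from the IP:port the client
# connected to - IP-based virtual hosting, no usable Host header needed.
http_port 192.168.1.1:80 accel vport
```

Writing accel explicitly is redundant, since both options imply it, but it makes the intent clearer. For the exact behavior of combining vhost and vport on one line, the squid.conf.documented shipped with your release is the authoritative reference.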
[squid-users] Query regarding squid-filter patch for Squid3.0STABLE9
Hi,

I am trying to use the squid-filter patch from this site:
http://sites.inka.de/bigred/devel/squid-3.0stable9-filter-0.2.patch.gz
I have also read the document mentioned on the site:
http://sites.inka.de/bigred/devel/squid-filter.html
I built Squid 3.0STABLE9. The problems I am facing are:

1) GIF animation is not working with the filter_module. In squid.conf I am using:
filter_module gifanim 1 * deny all
When I try to browse a web page with an animated GIF, only 1/10th of the image is visible, or sometimes only 1%.

2) I tried the activex and script modules; the result was that I wasn't able to browse Google. Whenever I type google.com in the browser, I get an empty page. If I remove the filter_module, then I am able to surf the web sites.

If anyone has tested this patch and found it working, it would help me proceed with it. If any more information is needed about what I have done, please let me know and I will give the details.

Thanks,
Sujith H
[squid-users] R: [squid-users] External users from Child AD domain unable to use local Squid proxy
Hi,

We have the below acl for users in the AD global group:
external_acl_type AD_global_group ttl=120 %LOGIN c:/squid/libexec/mswin_check_ad_group.exe -G
and another acl below that allows full access through the squid proxy using an AD group:
acl InetAllow external AD_global_group CLW.Squid.Full
Any ideas?

AGAIN: When using mswin_check_ad_group.exe 1.x in global mode (the -G option), the check is always done against a global group placed in the user's domain. So the question is: in which AD domain is the CLW.Squid.Full group defined?

Regards
Guido

Guido Serassio
Acme Consulting S.r.l.
Microsoft Gold Certified Partner
VMware Professional Partner
Via Lucia Savarino, 1
10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135
Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it
[squid-users] SPN case sensitivity culprit for Negotiate/Kerberos Failures +msktutil
Dear Markus/Nick/All,

After a great struggle, and with the help I got from you people, I managed to resolve the issue. However I have a few questions which I wish to ask, please.

1. First of all, I traced my problem down to SPN name case sensitivity: the case of the servicePrincipalName attribute as seen through the ADSIEDIT.msc tool was different from the value my klist -ke was showing.

According to ADSIEDIT.msc:
servicePrincipalName == HTTP/squidlhrtest.v.local
userPrincipalName == HTTP/squidlhrtest.v.lo...@v.local

Whereas klisting the SPN as stored in my keytab:
2 HTTP/squidlhrtest.v.lo...@v.local (DES cbc mode with CRC-32)
2 HTTP/squidlhrtest.v.lo...@v.local (DES cbc mode with RSA-MD5)
2 HTTP/squidlhrtest.v.lo...@v.local (ArcFour with HMAC/md5)

After diagnosing the problem I tried recreating the keytab/SPN through the msktutil utility, but to no benefit. Then I changed my hostname (the squid machine's) entirely to lowercase and recreated the keytab, and it worked; I confirmed it matched the one stored in Active Directory, and Kerberos/Negotiate was working. Although I have read that a Microsoft SPN is case insensitive, does this also mean that Microsoft will always store the SPN in lowercase, no matter how you have given the name in your msktutil command?

The second thing is: what is the role of the UPN here? I mean, why is a UPN required when creating an SPN with computer objects? I can understand that it's some kind of linkage, but I am not clear about its purpose. Also, why does the SPN attribute have no realm name appended in the output, while the UPN has a realm name appended, when viewed through ADSIEDIT.msc?

Another question: as I am using SARG configured with Apache, I am looking to set up SSO for Apache with Kerberos as well.
The keytab/SPN for squid SSO has already been created as:

msktutil -c -b CN=COMPUTERS -s HTTP/squidlhrtest.v.local -h squidlhrtest.v.local -k /etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/squidlhrtest.v.local --server vdc.v.local --verbose

To my understanding a keytab can hold keys from multiple services, so does this mean I can have the same keytab used for both squid and Apache? For example, I think the following command will append the new keys to the keytab file. I guess only the computer name needs to change and the rest of the same command will do, as far as keytab creation is concerned (the Apache-specific settings are a separate story, which is definitely out of scope here). The command which, to my understanding, will append keys to be used by Apache:

msktutil -c -b CN=COMPUTERS -s HTTP/squidlhrtest.v.local -h squidlhrtest.v.local -k /etc/squid/HTTP.keytab --computer-name apache-http --upn HTTP/squidlhrtest.v.local --server vdc.v.local --verbose

But why shouldn't Apache and squid share a single keytab? After all, they are both HTTP in the end. Isn't creating a separate key/SPN for Apache redundant, or is it a must?

Another somewhat similar question: my Active Directory setup is a single forest with one parent domain A and two child domains B and C. The internet users are only in domains A and B. What would be the way to handle SSO? I don't have much clarity; can anybody please advise? How would I point to the multiple realms? Would I duplicate the exact setup I have done for one domain and somehow (I am unclear) update squid accordingly?

I would be really thankful to all of you for guidance/help.

best regards,
Bilal Aslam
Re: [squid-users] SQUID3: Access denied connecting to one site
From: Alexandr Dmitriev alexandr.dmitr...@mos.lv
I tried to change tcp_ecn, but this did not help. Maybe some other ideas?

Just 2 things I found. When I check the page source, I see:

<metahttp-equiv=Cache-Control: max-age content=300>
<metahttp-equiv=Expires content=Tue, 20 Apr 2010 06:23:44 GMT>

The expiry is set to yesterday... is that normal? And their SSL certificate is for the .com, not the .lv...

JD
Re: [squid-users] SPN case sensitivity culprit for Negotiate/Kerberos Failures +msktutil
Hi Bilal,

Good to hear you've pinpointed the problem. I'm not one hundred percent sure of all the answers to your questions, but I'll throw in my 10 cents... it's all a learning curve!

I've just created a new computer account using msktutil and I specified the SPN as HTTP/FuNnYName.{domain}. Checking ADSI showed that the SPN was entered as HTTP/funnyname.{domain}: it was converted into lowercase.

With regards to the UPN, it depends on how it's being used. I believe that by default you won't be using it if you are just doing standard Kerberos authentication. However, I was playing around with the squid_kerb_ldap external acl the other day, and my experience was that a UPN was required - but not with the UPN specified as HTTP... Do a search on the list for my problem with it (the post is titled 'Squid_ldap_kerb make'). Not exactly an answer, but my own experience.

Re: the SPN attribute and realms - I'm not sure on this, other than the way a computer account and a user account differ in authenticating Kerberos.

As for multiple SPNs in one account... that's up to you. I haven't tried it, but I guess you could do it. As you know, you can authenticate against an account providing there is an SPN... Is there a chance your keytab would get out of sync for either? If it broke, both wouldn't work.

Nick

On 21/04/2010 11:36, GIGO . gi...@msn.com wrote:
Dear Markus/Nick/All, After a great struggle and help (i got from you people)i was managed to resolve the issue however i have few confusions which i wish you to ask please. 1. First of all I traced down my problem to SPN Names casesensitivity the case for ServicePrincipalName attribute as seen through ADSIEDIT.msc tool was different from the value my klist -ke was showing.
Re: [squid-users] Slow transfer speed over ADSL internet connection
What I can add is that when IE is not connected to the proxy it goes at 2.5 Mbps, and when I connect to the proxy it goes down to 500 Kbps. At home the speed is the same 10 Mbps on both tests. I'll check the DNS; could the Cisco 837 router be limiting speed somehow?

Thanks,
Francis.

2010/4/20 Amos Jeffries squ...@treenet.co.nz:
On Tue, 20 Apr 2010 11:49:05 -0400, francis aubut fugitif...@gmail.com wrote:
Hi, I configured Squid, first with Ubuntu Server and then on CentOS 5; the problem is the same. I get very slow speed on a network connected with an ADSL internet connection, and when I bring the computer home it goes well (I have a cable modem connection there). What could be wrong? Francis.

Your experiments as described pretty conclusively confirm that the problem is:
a) a difference in network lag (it's conceivable that your ADSL is simply slower than Cable; I know mine is, by a whole order of magnitude or two); or
b) site-specific configuration somewhere in your setup, resulting in the box going a long way to get stuff, i.e. a DNS server from the cable connection being used when on ADSL, etc.

Amos
Re: [squid-users] External users from Child AD domain unable to use local Squid proxy
So instead of the way the line is now:
acl InetAllow external AD_global_group CLW.Squid.Full
the domain would be added to the group like below:
acl InetAllow external AD_global_group NA\CLW.Squid.Full

On Wed, Apr 21, 2010 at 06:19, Guido Serassio guido.seras...@acmeconsulting.it wrote:
Hi, We have the below acl for users in the Ad global group external_acl_type AD_global_group ttl=120 %LOGIN c:/squid/libexec/mswin_check_ad_group.exe -G and another acl below that allows full access thru the squid proxy using an ad group acl InetAllow external AD_global_group CLW.Squid.Full any ideas AGAIN: When using mswin_check_ad_group.exe 1.x in global mode (-G options), the check is done always against a global group placed in the user's domain. So the question is: On which AD domain is defined the CLW.Squid.Full group ? Regards Guido
[squid-users] R: [squid-users] External users from Child AD domain unable to use local Squid proxy
Hi,

Yes, but only if you are using the 2.x version of the helper and the CLW.Squid.Full group is a group with the appropriate scope (Local, Global or Universal).

Regards
Guido Serassio
Acme Consulting S.r.l.
Microsoft Gold Certified Partner
VMware Professional Partner
Via Lucia Savarino, 1
10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135
Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it

-----Original Message-----
From: Milan [mailto:compguy030...@gmail.com]
Sent: Wednesday, 21 April 2010 14:52
To: Guido Serassio
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] External users from Child AD domain unable to use local Squid proxy

So instead of the way the line is now:
acl InetAllow external AD_global_group CLW.Squid.Full
the domain would be added to the group like below:
acl InetAllow external AD_global_group NA\CLW.Squid.Full
Re: [squid-users] External users from Child AD domain unable to use local Squid proxy
Yes, the version we are using is 2.0, on Squid 2.7.STABLE8, and clw.squid.full is a universal group.

On Wed, Apr 21, 2010 at 09:19, Guido Serassio guido.seras...@acmeconsulting.it wrote:
Hi, Yes, but only if you are using the 2.x version of the helper and the CLW.Squid.Full group is a group with the appropriate scope (Local, Global or Universal). Regards Guido
[squid-users] Primary, Secondary, Tertiary Squid proxies
Hi,

I would like to configure my proxies to route via different boxes if the primary upstream is unavailable. I have three Squid boxes, all at different sites. All three have the entry:

cache_peer upstream.isp.com parent 8080 0 no-query default

All three are the same, using Kerberos authentication and hooking into an ICAP server. All working OK: users authenticate, ICAP manipulation happens, then traffic is passed upstream.

What I want is: if the upstream at site A is unavailable, I would like to route via site B, then site C, to reach the upstream. Likewise at site B (then A, then C), and again at site C (then B, then A). I think I need to be looking at something like this (I'm not using caching, by the way):

SiteA proxy:
cache_peer upstream.isp.com parent 8080 0 no-query no-digest default
cache_peer siteb.[mydomain] sibling 8080 0 no-query no-digest
cache_peer sitec.[mydomain] sibling 8080 0 no-query no-digest

Do I also need additional conf lines to say send upstream and don't do any auth/ICAP etc., or is it as simple as getting the right lines above, after which it will automatically go upstream? Could anyone offer some pointers?

Thanks in advance,
Nick

** Please consider the environment before printing this e-mail **
The information contained in this e-mail is of a confidential nature and is intended only for the addressee. If you are not the intended addressee, any disclosure, copying or distribution by you is prohibited and may be unlawful. Disclosure to any party other than the addressee, whether inadvertent or otherwise, is not intended to waive privilege or confidentiality. Internet communications are not secure and therefore Conde Nast does not accept legal responsibility for the contents of this message. Any views or opinions expressed are those of the author.
Company Registration details: The Conde Nast Publications Ltd, Vogue House, Hanover Square, London W1S 1JU. Registered in London No. 226900
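One possible shape for the fallback described above (untested, and the peer names are placeholders taken from the question): since traffic must be forwarded *through* sites B and C when the local upstream is down, they would likely need to be parents rather than siblings, with Squid moving on to the next live parent as dead peers are detected.

```
# Site A: ISP upstream is the preferred parent; other sites are fallbacks.
cache_peer upstream.isp.com parent 8080 0 no-query no-digest default
cache_peer siteb.mydomain   parent 8080 0 no-query no-digest
cache_peer sitec.mydomain   parent 8080 0 no-query no-digest

# Never go direct - always forward through one of the parents above.
never_direct allow all
```

Whether auth/ICAP should be repeated on the inter-site hop is a separate decision; the parent-vs-sibling choice here is an assumption, not something settled in the thread.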
[squid-users] Squid as a SSL accelerator for webmail
Dear all,

I have Debian Lenny + Squid 2.7.STABLE3-4.1lenny1 (obtained with apt-get). I assume this Squid package was compiled with the --enable-ssl option, but I don't know. I want to provide access to an OWA HTTPS webmail on port 443, and to another web site over HTTP on port 80. So the main lines of my squid.conf are:

http_port 10.1.0.200:80 accel defaultsite=www.company.com vhost
https_port ip_of_squid:443 cert=/root/certs/owa.crt defaultsite=mail.company.com
cache_peer 10.1.0.103 parent 80 0 no-query originserver login=PASS front-end-https=on name=owaServer
cache_peer 10.1.0.62 parent 80 0 no-query originserver name=intranetBP
acl OWA dstdomain webmail.company.com
acl WWW dstdomain www.company.com

But it doesn't work. Does anyone know if it's possible to do this?

Really thanks,
Alejandro
Re: [squid-users] Squid as a SSL accelerator for webmail
The lenny package doesn't support SSL.

Alejandro Cabrera Obed aco1...@gmail.com wrote:
Dear all, I have Debian Lenny + 2.7.STABLE3-4.1lenny1 (obtained with apt-get). I suppose this Squid package was compiled with the enable-ssl option, but I don't know. [...]
[squid-users] SQUID 3.1 + sslBump https interception and decryption
Hi,

For testing purposes (I have to test and debug several mobile phone Java applications, some of which use https/ssl) I need to intercept and decrypt https traffic. I configured a Debian box with Squid 3.1 (compiled with SSL support), enabling the sslBump feature with a self-signed certificate. Obviously browsers and applications warn about the certificate, but everything else seems to work. Is there a way to use a trusted certificate to remove that warning? (Sorry for this dumb question, but some applications don't permit a certificate exception list the way, for example, Firefox does.)

Another question is about ICAP. I read on the Squid-cache wiki that it is possible to use an ICAP server to inspect traffic ("While decrypted, the traffic can be inspected using ICAP"). Are there any hints about which ICAP server to use (c-icap, or another), and some configuration examples? I didn't find much information about it.

Thanks for your patience.
Best Regards,
Franz
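For reference, the sslBump setup described above looks roughly like this in Squid 3.1 (the port and certificate paths are illustrative; check squid.conf.documented for your build):

```
# Listening port with SSL interception enabled, presenting our own cert.
http_port 3128 ssl-bump cert=/etc/squid/ssl/proxy.crt key=/etc/squid/ssl/proxy.key

# Which CONNECT requests to decrypt; 'allow all' bumps everything.
ssl_bump allow all
```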
Fwd: [squid-users] Slow transfer speed over ADSL internet connection

---------- Forwarded message ----------
From: francis aubut fau...@infogfa.com
Date: 2010/4/21
Subject: Re: [squid-users] Slow tranfert speed over ADSL internet connection
To: Amos Jeffries squ...@treenet.co.nz
Cc: squid-users@squid-cache.org

What I can add is that when IE is not connected to the proxy it goes at 2.5 Mbps, and when I connect to the proxy it goes down to 500 Kbps. At home the speed is the same 10 Mbps on both tests. I'll check the DNS; could the Cisco 837 router be limiting speed somehow?

Thanks,
Francis.
RE: [squid-users] SQUID 3.1 + sslBump https interception and decryption
From: Franz Angeli [mailto:franz.ang...@gmail.com]
I configured one debian box with squid 3.1 (compiling it with ssl support) enabling sslBump feature with a self signed certificate, obviously browser and applications warn about the certificate but all seems to work. Is there a way to use trusted certificate for removing that warning?

If you have the signed certificate for the URL you're developing for, then yes, you can use that certificate. For example, if your app is going to app.squid-cache.org and you have the signed certificate for app.squid-cache.org or *.squid-cache.org, then everything will be happy.

But if you're trying to intercept the traffic for a third-party domain, no, you can't. The best you can do is create your own CA and add its public certificate to the browser/application, if it even allows you to.

-Dan
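The "create your own CA" route Dan mentions can be sketched with openssl. All file names and CN values below are made-up placeholders; the resulting ca.crt is what you would import into the phone/browser trust store, and the site cert/key is what Squid would present:

```shell
# 1. Create a private CA key plus a self-signed CA certificate.
openssl req -new -x509 -days 365 -nodes -subj "/CN=Test-CA" \
    -keyout ca.key -out ca.crt

# 2. Create a key and a signing request for the hostname the app contacts.
openssl req -new -nodes -subj "/CN=app.example.com" \
    -keyout site.key -out site.csr

# 3. Sign the request with the CA to produce the server certificate.
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out site.crt

# 4. Verify the chain; "site.crt: OK" means the trust relationship works.
openssl verify -CAfile ca.crt site.crt
```

As Dan notes, whether the application will accept an extra trusted CA at all is the limiting factor; this only helps on devices where ca.crt can actually be installed.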
Re: [squid-users] SQUID3: Access denied connecting to one site
On Wed, 21 Apr 2010 03:54:33 -0700 (PDT), John Doe jd...@yahoo.com wrote: From: Alexandr Dmitriev alexandr.dmitr...@mos.lv I tried to change tcp_ecn, but this did not help. Maybe some other ideas? Just 2 things I found: When I check the page source, I see: metahttp-equiv=Cache-Control: max-age content=300 metahttp-equiv=Expires content=Tue, 20 Apr 2010 06:23:44 GMT The expire is set to yesterday... is that normal?

Well, the syntax is broken: the whitespace after the tag name meta is missing, so browsers will drop it as an unknown tag. ... and yes, there is a community of web developers who still add the old IE 3 cache-controls to their page data instead of the HTTP protocol headers. These headers will have exactly zero effect on most systems.

And their SSL certificate is for the .com, not the .lv... Also a problem. Though an SSL error should appear if it were being hit. Amos
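Since those meta tags have no effect on proxies, the caching policy has to come from the real HTTP response headers. A quick way to check what a server actually sends is to filter the header dump (shown here against a canned response so it runs offline; against a live server you would pipe `curl -sI http://host/` into the same grep):

```shell
# Meta http-equiv tags ride inside the HTML body, which Squid treats as
# opaque bytes; only real response headers like these influence caching.
printf 'HTTP/1.1 200 OK\r\nCache-Control: max-age=300\r\nExpires: Tue, 20 Apr 2010 06:23:44 GMT\r\n' \
  | grep -iE '^(cache-control|expires):'
```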
[squid-users] how to set up expires header in squid?
hi, everyone

I have a web server (Apache 2.0.x) and a Squid (2.6) in front of it as a reverse proxy. On the web server I have set up the mod_expires module to emit an Expires HTTP header of 1 year, and it works: when I visit Apache directly, here is the HTTP header info:

[shell]# curl -D- -o /dev/null http://www.mydomain.com/test.png
HTTP/1.1 200 OK
Date: Fri, 09 Apr 2010 07:52:21 GMT
Server: Apache/2.0.53 (Unix) mod_perl/1.99_14 Perl/v5.8.5 PHP/5.0.3 mod_ssl/2.0.53 OpenSSL/0.9.7e mod_fastcgi/mod_fastcgi-SNAP-0404142202
Last-Modified: Thu, 06 Apr 2006 12:03:53 GMT
Accept-Ranges: bytes
Content-Length: 512
Cache-Control: max-age=31536000, public
Expires: Sat, 09 Apr 2011 07:52:21 GMT
Content-Type: image/png

Then I put Squid in front of the web server, set up as a reverse proxy, and visited the same URL again; this time I got different headers:

[shell]$ curl -D- -o /dev/null http://www.mydomain.com/test.png
HTTP/1.0 200 OK
Date: Fri, 09 Apr 2010 08:56:00 GMT
Server: Apache/2.0.53 (Unix) mod_perl/1.99_14 Perl/v5.8.5 PHP/5.0.3 mod_ssl/2.0.53 OpenSSL/0.9.7e mod_fastcgi/mod_fastcgi-SNAP-0404142202
Last-Modified: Thu, 06 Apr 2006 12:03:53 GMT
Accept-Ranges: bytes
Content-Length: 512
Content-Type: image/png
Age: 1025
X-Cache: HIT from squid.mydomain.com
Via: 1.0 squid.mydomain.com:80 (squid/2.6.STABLE23)
Connection: close

The pic loads correctly but, as you can see, the Expires header is gone (compared with visiting Apache directly), and I need to keep it. So I'm wondering how to set up Squid to pass the Expires header through? I have tried refresh_pattern, but no luck. I totally have no idea now, so I have to look for help here. Any tips are appreciated. thank you~~
Re: [squid-users] how to set up expires header in squid?
On Thu, Apr 22, 2010 at 12:28 PM, 老邪 swansu...@gmail.com wrote: the pic loads correctly, but as you can see, the expires header is gone (compare with visit apache directly).

Squid normally doesn't discard the origin server's response headers; the max-age and Expires headers should be there, as on 126.com:

$ curl -D- -o /dev/null www.126.com
HTTP/1.0 200 OK
Date: Thu, 22 Apr 2010 04:17:04 GMT
Server: Apache
Accept-Ranges: bytes
Cache-Control: max-age=3600
Expires: Thu, 22 Apr 2010 05:17:04 GMT
Vary: Accept-Encoding
Content-Length: 26281
Content-Type: text/html; charset=GB2312
Age: 946
X-Cache: HIT from mcache.163.com
Connection: close

So you may want to check your httpd.conf to see whether mod_expires handles the HTTP/1.0 request correctly, since Squid forwards the request with the HTTP/1.0 protocol. -- Jeff Pang http://home.arcor.de/pangj/
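If mod_expires turns out to be the issue, the httpd.conf side can be compared against a minimal fragment like the following. This is a sketch, assuming the module is loaded; the content type and lifetime mirror the poster's PNG/1-year setup:

```
# Minimal mod_expires configuration (Apache 2.0.x syntax)
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
```

mod_expires computes the Expires header from this rule regardless of request protocol version, so if the header disappears only behind Squid, the HTTP/1.0 handling in the rest of the Apache config is worth a look.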
[squid-users] Yahoo Messenger
Dear All, I was browsing through Squid's config examples and got to this page about Yahoo Messenger: http://wiki.squid-cache.org/ConfigExamples/Chat/YahooMessenger From what I understand, it is there to deny access to those Yahoo IM servers. Why would we need that? What if we allow it? Sorry if this question has been answered before, in which case a pointer to it would be much appreciated. Regards, Khem
Re: [squid-users] how to set up expires header in squid?
will try, thank you!!! will update here

On Thu, Apr 22, 2010 at 12:36 PM, Jeff Pang pa...@arcor.de wrote: snip [reply quoted in full in the previous message] -- Jeff Pang http://home.arcor.de/pangj/
[squid-users] Help about iptable squid
Dear All: Linux has three card: One is 192.168.1.250 (Internet) by 192.168.1.1 The other two are: 192.168.2.1, 192.168.3.1 Client: 192.168.2.100-192.168.2.200 / IP 192.168.3.100-192.168.3.200

I have a few questions 1: I'm in the allocation of time, add squid --enable-underscore options But on a visit to the site is still has underlined 2: why Teamviever software from external links, always break, then cannot connect But, I have broken the network ,configuration files below

http_port 3128
ipcache_size 1024
ipcache_low 90
ipcache_high 95
cache_mem 128 MB
cache_dir ufs /var/spool/squid 4096 16 256
cache_effective_user squid
cache_effective_group squid
dns_nameservers 192.168.1.10
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
visible_hostname gw.efc.cory
cache_mgr ka...@everfocus.com.cn
acl 2 src 192.168.2.100-192.168.2.200/32
http_access allow 2
acl 3 src 192.168.3.100-192.168.3.200/32
http_access allow 3
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
http_access allow localhost
http_access deny all

*mangle
:PREROUTING ACCEPT [11949307:8517837757]
:INPUT ACCEPT [61863944:9774933638]
:FORWARD ACCEPT [11730595:8495305567]
:OUTPUT ACCEPT [40941:4437279]
:POSTROUTING ACCEPT [11214754:8468974725]
COMMIT

*nat
:PREROUTING ACCEPT [694231:44896066]
:POSTROUTING ACCEPT [71812:4199611]
:OUTPUT ACCEPT [1788:412902]
-A POSTROUTING -m iprange --src-range 192.168.3.100-192.168.3.200 -o eth0 -j SNAT --to-source 192.168.1.250
-A POSTROUTING -m iprange --src-range 192.168.2.100-192.168.2.200 -o eth0 -j SNAT --to-source 192.168.1.250
-A PREROUTING -i eth2 -p tcp -m iprange --src-range 192.168.3.100-192.168.3.200 --dport 80 -j REDIRECT --to-ports 3128
-A PREROUTING -i eth1 -p tcp -m iprange --src-range 192.168.2.100-192.168.2.200 --dport 80 -j REDIRECT --to-ports 3128
COMMIT

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [37276:4032229]
-A INPUT -p icmp -m limit --limit 1/s --limit-burst 10 -j ACCEPT
-A INPUT -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -m limit --limit 10/sec
-A INPUT -i lo -j ACCEPT
-A INPUT -p udp -m multiport --dports 53,123,161,162,500,1701,1194,1993 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80,53,8080,3128,9101,9102,9103 -j ACCEPT
-A INPUT -s 168.95.1.1 -j ACCEPT
-A INPUT -s 168.95.192.1 -j ACCEPT
-A INPUT -s 211.72.67.226 -j ACCEPT
-A INPUT -s 216.146.35.35 -j ACCEPT
-A INPUT -s 216.146.36.36 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 168.95.1.1 -j ACCEPT
-A FORWARD -s 168.95.192.1 -j ACCEPT
-A FORWARD -s 211.72.67.226 -j ACCEPT
-A FORWARD -s 216.146.35.35 -j ACCEPT
-A FORWARD -s 216.146.36.36 -j ACCEPT
-A FORWARD -d 168.95.1.1 -j ACCEPT
-A FORWARD -d 168.95.192.1 -j ACCEPT
-A FORWARD -d 211.72.67.226 -j ACCEPT
-A FORWARD -d 216.146.35.35 -j ACCEPT
-A FORWARD -d 216.146.36.36 -j ACCEPT
-A FORWARD -m iprange --src-range 192.168.2.100-192.168.2.200 -d 192.168.1.176 -j ACCEPT
-A FORWARD -s 192.168.10.0/24 -j ACCEPT
-A FORWARD -s 192.168.11.0/24 -j ACCEPT
-A FORWARD -d 192.168.10.0/24 -j ACCEPT
-A FORWARD -d 192.168.11.0/24 -j ACCEPT
-A FORWARD -s 10.8.0.0/24 -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -d 211.157.108.130 -j ACCEPT
-A FORWARD -d 220.128.204.167 -j ACCEPT
-A FORWARD -d 211.72.67.227 -j ACCEPT
-A FORWARD -d 211.72.67.226 -j ACCEPT
-A FORWARD -d 220.128.204.163 -j ACCEPT
-A FORWARD -d 61.66.137.4 -j ACCEPT
-A FORWARD -d 61.66.137.3 -j ACCEPT
-A FORWARD -d 61.66.137.5 -j ACCEPT
-A FORWARD -p udp -m multiport --dports 53,123,137,138 -j ACCEPT
-A FORWARD -p tcp -m multiport --dports 20,21,53,139,445,1863,5900,3128,8080 -j ACCEPT
-A FORWARD -m iprange --src-range 192.168.3.100-192.168.3.200 -p tcp -m multiport --dports 80,443,25,110 -j ACCEPT
-A FORWARD -m iprange --src-range 192.168.2.100-192.168.2.200 -p tcp -m multiport --dports 80,443,25,110 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

Hope everybody to help me to solve it Thank Kavin
Re: [squid-users] SQUID3: Access denied connecting to one site
Ok, the headers are broken, but is there a way to make Squid ignore them? About SSL - they also have another domain, www.airbaltic.com, which is not accessible either.

22.04.2010 3:45, Amos Jeffries wrote: snip [reply quoted in full earlier in this thread]

-- Alexandr Dmitrijev Head of IT Department Fashion Retail Ltd. Phone: +371 67560501 Fax: +371 67560502 GSM: +371 2771 E-mail: alexandr.dmitr...@mos.lv
Re: [squid-users] Help about iptable squid
kavin wrote: Dear All: Linux has three card: One is 192.168.1.250 (Internet) by 192.168.1.1 The other two are: 192.168.2.1, 192.168.3.1 Client: 192.168.2.100-192.168.2.200 / IP 192.168.3.100-192.168.3.200 I have a few questions 1: I'm in the allocation of time, add squid --enable-underscore options But on a visit to the site is still has underlined That made no sense at all. Can you please describe the problem another way? 2: why Teamviever software from external links, always break, then cannot connect But, I have broken the network ,configuration files below Again. Is that a question? Something called Teamviewer does not work after you broke it? Please explain some more. snip httpd_accel_host virtual That is Squid 2.5 config. Please upgrade your software. 1) We have not supported 2.5 for more than 3 years now. 2) Reverse proxy is quite difficult in that version. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
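For reference, the httpd_accel_* family Amos flags was replaced in Squid 2.6 by options on http_port. The interception setup the poster's iptables REDIRECT rules imply would look roughly like this in 2.6/2.7 (port number taken from the poster's config; this is a sketch, not a full migration):

```
# Squid 2.6/2.7 replacement for the old httpd_accel_* interception directives
# (Squid 3.1 spells this keyword "intercept")
http_port 3128 transparent
```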
Re: [squid-users] SQUID3: Access denied connecting to one site
Alexandr Dmitriev wrote: Ok, the headers are broken, but there is a way to make squid ignore them? About ssl - they also have another domain www.airbaltic.com which is not accessible either. Part of the point was that they are not even headers at all. Squid does not do anything with body data but pump it through; the HTML code bits are just some more bytes of body data to Squid. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
Re: [squid-users] Yahoo Messenger
Khemara Lyn wrote: Dear All, I was browsing through the Squid's config examples and got to this page: http://wiki.squid-cache.org/ConfigExamples/Chat/YahooMessenger about the Yahoo Messenger. From what I understand, that is to deny access to those Yahoo IM servers. Why would we need that? You may not. Many others did at one stage or that page would not exist. What if we allow it? There is nothing special needed to allow it. Adding those rules with allow will be handled in the context of how you add them. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
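The "context" Amos mentions is rule order: Squid evaluates http_access rules top-down and the first match wins, so the same ACL allows or blocks depending on where it sits relative to broader rules. A generic sketch (the ACL name and domain here are hypothetical, not from the wiki page):

```
# First match wins: this allow must come before any broader deny
acl im_servers dstdomain .example-messenger.com
http_access allow im_servers
http_access deny all
```

Swapping the two http_access lines would turn the same ACL definition into a block list for everything, including the IM servers.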
Re: [squid-users] Primary, Secondary, Tertiary Squid proxies
Nick Cairncross wrote: Hi, I would like to configure my proxies to route via different boxes if the primary upstream is unavailable. I have three Squid boxes, all at different sites. All three have the entry: cache_peer upstream.isp.com parent 8080 0 no-query default All three are the same, using Kerberos authentication and hooking into an ICAP server. All working ok - users authenticate, ICAP manipulation happens, then requests are passed upstream. What I want is: if the upstream at site A is unavailable, route to site B, then site C, to pass to the upstream. Likewise at site B (then sites A, C) and at site C (then sites B, A). I think I need to be looking at something like this (I'm not using caching, by the way):

SiteA proxy:
cache_peer upstream.isp.com parent 8080 0 no-query no-digest default
cache_peer siteb.[mydomain] sibling 8080 0 no-query no-digest
cache_peer sitec.[mydomain] sibling 8080 0 no-query no-digest

Do I also need additional conf lines to say "send upstream and don't do any auth/ICAP etc.", or is it as simple as getting the right lines above so it'll automatically go upstream? Could anyone offer some pointers?

Sibling relationships form a kind of cluster with multi-lateral fetches happening. That does not match your structured failover requirement, although it does permit a fastest-fetch setup you might like better. To meet your failover requirement each would need to be parented off each other. Unless you add some form of load-balancing setting yourself, the default is to fetch from the first, second, then third parent the way you described; simply list the cache_peer lines in priority order. Failover will happen when a) one goes down, or b) one gets overloaded. One thing to watch out for is that for this to work without looping you MUST have the Via: and X-Forwarded-For headers enabled (via on, forwarded_for on settings in squid.conf).
You also need some cache_peer_access rules preventing a request which came from one of the A, B, C sites from being passed back to one of the others. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.1
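Putting Amos's pointers together, the Site A squid.conf might look roughly like the following. The peer hostnames come from the poster's sketch and the peer-proxy source IPs are placeholders; treat this as an illustration of the shape, not a tested configuration:

```
# Priority order: the first listed parent is tried first, the rest are failover
cache_peer upstream.isp.com parent 8080 0 no-query no-digest default
cache_peer siteb.mydomain parent 8080 0 no-query no-digest
cache_peer sitec.mydomain parent 8080 0 no-query no-digest

# Loop prevention, part 1: keep these headers enabled
via on
forwarded_for on

# Loop prevention, part 2: never hand a request that arrived from another
# site's proxy back to that site (source IPs are hypothetical)
acl from_siteb src 10.0.2.2
acl from_sitec src 10.0.3.2
cache_peer_access siteb.mydomain deny from_siteb
cache_peer_access sitec.mydomain deny from_sitec
```

Sites B and C get the same file with the peer names and ACLs rotated.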