Re: [squid-users] .pac file/newbie guide request here
On Wed, 10 Dec 2003, Renato Kalugdan wrote: Hello All, I've just implemented Squid as a proxy server on a lab setup at work. So far so good. My question pertains to .pac files. Is there a guide that will allow me to comprehend this more thoroughly? Where would I put such a file? On the Squid server or on a web server?

You would put this file on a web server. Furthermore, you need to make sure that the server returns the correct content type for the URL. You can do it in Apache like this: AddType application/x-ns-proxy-autoconfig .pac Duane W.
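For reference, a minimal .pac file is just a JavaScript function named FindProxyForURL. The sketch below sends everything through a proxy, falling back to a direct connection; the host name and port are placeholders, not anything from this thread:

```javascript
// Minimal proxy auto-config sketch.
// "proxy.example.com:3128" is a placeholder -- substitute your own Squid host/port.
function FindProxyForURL(url, host) {
    // Use the proxy for all requests; if it is unreachable,
    // the browser falls back to a direct connection.
    return "PROXY proxy.example.com:3128; DIRECT";
}
```

The browser calls this function once per URL and obeys the returned directive string.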
Re: [squid-users] squid query
Duane W. -- Buy my book: http://squidbook.org/ On Thu, 11 Dec 2003, Lendra Tanujaya wrote: Hi there, I am having an issue with accessing URLs of the following format when I am using Squid: http://www.something.com:81 or http://www.something.com:6060 (or some other port numbers). The error message that I get is: * Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. Can someone please advise?

Your Squid has destination-port-based access controls. You need to add these non-standard ports to the list of allowed ports. In the default config file it looks like this:

acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http

Note that because of the unregistered ports entry, port 6060 should be allowed by the default config. Duane W.
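If the default unregistered-ports entry has been removed or narrowed in a local config, the missing ports can be added explicitly. A sketch (it assumes the stock acl name Safe_ports and the default deny rule are in use):

```
# squid.conf sketch: allow two non-standard HTTP ports explicitly.
# These acl lines must appear before the "http_access deny !Safe_ports" rule.
acl Safe_ports port 81      # non-standard http
acl Safe_ports port 6060    # non-standard http
```

After editing, a `squid -k reconfigure` picks up the change without a restart.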
Re: [squid-users] 2.5.4: WARNING: Disk space over limit: -755964 KB 817152 KB and huge memory usage
Duane W. -- Buy my book: http://squidbook.org/ On Wed, 10 Dec 2003, fire-eyes wrote: Hello, I'd first off like to thank the squid team for a fantastic product. Keep up the excellent work. Now I'd like to present the problem I have run into. I have hit google for this one, and found close matches, but never squid complaining about NEGATIVE numbers. It is not a typo. This is what shows up in my logs: WARNING: Disk space over limit: -755964 KB 817152 KB This happens every few seconds. At the same time this started yesterday, I found that squid is using 399MB of memory. It used to take approximately 20. I can still use the proxy, although it is very sluggish. The version of squid I am using is 2.5.4. I set squid up the first time around many months ago, and have only made minor modifications to squid.conf, the last of which was October 15th. Any input and ideas would be much appreciated.

Looks like the metadata has a corrupt entry such that Squid thinks there is some really huge object in the cache. If you haven't shut down and restarted Squid recently, try that first. If that doesn't work, then (1) shut down Squid, (2) remove the swap.state files in each cache_dir, and (3) restart Squid. This will rebuild the metadata from the disk files and it should pick up the correct size for each file in the cache. Duane W.
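The three-step rebuild can be sketched as commands. The cache_dir path below is a placeholder — check the cache_dir line(s) in your own squid.conf, and repeat the removal for each one:

```shell
# Sketch only: the cache_dir path varies per install.
squid -k shutdown                  # (1) stop Squid cleanly
rm /var/spool/squid/swap.state*    # (2) remove the metadata journal in each cache_dir
squid                              # (3) restart; metadata is rebuilt by scanning the disk files
```

The restart takes longer than usual while the cache_dir contents are re-scanned; progress appears in cache.log.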
Re: [squid-users] Squid web acceleration for two
Duane W. -- Buy my book: http://squidbook.org/ On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote: Thanks much! Yes, pid_filename is an odd directive, no wonder I overlooked that. ;) And that was the trick. By adding pid_filename none, two instances of squid will run, each using its own conf file with unique https_port directives. This brings me to another problem of starting squid from a script. When starting squid using -NCd1 for testing, squid prompts me for the PEM pass phrase during SSL initialization, which I type and squid runs happily along. But when starting in daemon mode (without the -N option) I am never prompted for the pass phrase; squid does not start, it dies. How can I pass the pass phrase to squid so that it starts automatically, like from a script or on reboot?

Probably you cannot without making some modifications to the source code. What you can do, however, is remove the passphrase from your (RSA) key with a command like this: openssl rsa -in private-key -out private-key.no-passphrase This is arguably a bad idea, but will allow you to start Squid as you want. Duane W.
RE: AW: AW: AW: AW: [squid-users] smb_auth
On Thu, 11 Dec 2003, melvin melvin wrote: while in the NETLOGON directory smb: \open proxyauth smb: \ nothing appears. I tried other files but the outcome is the same. I think you need to use the get command... get proxyauth - most likely the open command just tries to open the file, but I cannot even find this smbclient command in my copy of the documentation (probably because my Samba is a bit dated) Regards Henrik
Re: [squid-users] Squid web acceleration for two
On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote: This brings me to another problem of starting squid from a script. When starting squid using -NCd1 for testing, squid prompts me for the PEM pass phrase during SSL initialization, which I type and squid runs happily along. But when starting in daemon mode (without the -N option) I am never prompted for the pass phrase, squid does not start, it dies. To use SSL certificates in daemon mode you need to use unencrypted private keys without a pass phrase. openssl rsa -in your_encrypted_key.pem -out plain_key.pem Regards Henrik
Re: [squid-users] Is there a forum to talk about Squid proxy server ?
On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote: Is there a forum to talk about Squid proxy server or ask questions? Yes, this is it. Regards Henrik
[squid-users] Download Time - Out with large files
Hi, I get timeouts from the clients when downloading large files (greater than 25 MB). Does anybody know which parameter I must tune to:
- get a longer timeout period to wait, or
- bypass the cache for a non-caching download?
Thanks R. Maurer
Re: [squid-users] Squid2.4 supports persistent connection, but why Squid2.5 or Squid3.0 not.
On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote: Could you let me know how Squid2.5 or 3.0 can keep a TCP connection to allow client and server to send requests and responses in one TCP connection? Works here... But please remember that Squid is a proxy and persistent connections are a hop-by-hop feature of HTTP. There is no guarantee that the next client request will be forwarded to the same server connection. Squid simply selects the best connection to use for forwarding the request. Ah, now I realise what your problem is. You are looking at the POST method. Due to complications with non-idempotent requests in case of server-side timeouts of persistent connections, Squid-2.5.STABLE2 and later does not use persistent connections for POST or other non-idempotent requests. This is also strongly recommended in the HTTP/1.1 specification for the same reasons. url:http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE1-indempotent. Regards Henrik
Re: [squid-users] .pac file/newbie guide request here
On Wed, 10 Dec 2003, Renato Kalugdan wrote: My question pertains to .pac files. Is there a guide that will allow me to comprehend this more thoroughly?

The Netscape specification is quite usable. You can find this and other useful information in the Squid FAQ section on proxy auto-configuration. I would also recommend looking into WPAD, which you can find in almost the same place in the Squid FAQ.

Where would I put such a file? On the Squid Server or on a Web Server?

On a web server somewhere. This web server may be on the same server as Squid if you like, or some other server on your network. Or it might even be a file share accessible to all clients. To Squid it does not matter how the client finds the .pac file, just that the .pac file instructs the client to go to the Squid proxy when it should. Regards Henrik
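For WPAD, the convention is that browsers automatically fetch http://wpad.<your-domain>/wpad.dat, so all that is needed is a DNS entry named wpad pointing at the web server plus the file itself. A hedged Apache sketch (the host name and file path are placeholders):

```
# Apache sketch: serve the auto-config file under the WPAD well-known name.
# Assumes a DNS entry "wpad.example.com" pointing at this web server.
AddType application/x-ns-proxy-autoconfig .dat
Alias /wpad.dat /var/www/proxy/wpad.dat
```

The wpad.dat file has the same contents as a .pac file; only the name and discovery mechanism differ.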
Re: [squid-users] Squid web acceleration for two
On Thu, 11 Dec 2003, Duane Wessels wrote: openssl rsa -in private-key -out private-key.no-passphrase This is arguably a bad idea, but will allow you to start Squid as you want. Not worse than having the pass phrase in some script... But it is true that Squid should be extended to somehow allow entering of pass phrase on startup even when using daemon mode. Just how this is to be done with the current daemon mode implementation where Squid becomes a daemon before even reading the squid.conf is a little bit tricky. But if we change Squid to first read squid.conf and then become a daemon then there is no problem doing this. Regards Henrik
RE: AW: AW: AW: AW: [squid-users] smb_auth
if I use smb: \more proxyauth it returns me an allow. I tried smb: \get proxyauth getting file proxyauth of size 5 as proxyauth (1.0 kb/s) (average 1.0 kb/s) thanks From: Henrik Nordstrom [EMAIL PROTECTED] To: melvin melvin [EMAIL PROTECTED] CC: [EMAIL PROTECTED], [EMAIL PROTECTED] Subject: RE: AW: AW: AW: AW: [squid-users] smb_auth Date: Thu, 11 Dec 2003 09:40:55 +0100 (CET) On Thu, 11 Dec 2003, melvin melvin wrote: while in the NETLOGON directory smb: \open proxyauth smb: \ nothing appears. I tried other files but the outcome is the same. I think you need to use the get command... get proxyauth - most likely the open command just tries to open the file, but I cannot even find this smbclient command in my copy of the documentation (probably because my Samba is a bit dated) Regards Henrik
Re: [squid-users] Download Time - Out with large files
On Thu, 11 Dec 2003, Maurer Roland MKG-Bank wrote: I get timeouts from the clients when downloading large files (greater than 25 MB).

Works here... Are you using any virus scanner or the like which may delay the download?

Does anybody know which parameter I must tune to get a longer timeout period to wait?

Depends on why there is a timeout. There should not be a timeout with the default settings. If you have set client_lifetime very short then this may time out long requests.

- bypass the cache for a non-caching download.

Hard. To do this your browser needs to know, before the request is sent, whether it is a download or not, and you then need to teach your .pac script to use this to tell the browser when a proxy should be used or not. Regards Henrik
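If a timeout setting is to blame, these are the squid.conf directives worth checking first. A sketch with values I believe to be the stock 2.5 defaults (verify against your own shipped squid.conf before relying on them):

```
# squid.conf sketch: directives that can abort long transfers if set too low.
client_lifetime 1 day     # maximum lifetime of a single client connection
read_timeout 15 minutes   # abort if the origin server sends no data for this long
```

If either has been lowered in the local config, a 25 MB download over a slow link can easily exceed it.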
[squid-users] squid 2.5 and wccp
Hello Squid Users, I am trying to set up Squid Version 2.5.STABLE4 with WCCP version 2 on a Linux server running SuSE 8.2, using the SuSE kernel sources 2.4.20. I tried to find the missing information to get this working in the mailing list but I can NOT establish the communication. Let me briefly explain my actions:

- I have successfully applied the wccpv2.patch from http://squid.visolve.com/developments/wccpv2.htm to Squid Version 2.5.STABLE4. Squid and the router are communicating using the WCCP hello packets (using I_See_You and Here_I_Am packets).
- I successfully applied the ip_wccp-2_4_18.patch from http://squid.visolve.com/developments/wccpv2.htm to the SuSE kernel sources 2.4.20.

After the reboot the ip_wccp module gets loaded successfully (with a taint warning, but I think this is OK as it is not a standard kernel module). I added an appropriate iptables PREROUTING rule from port 80 to 3128 to get the kernel to decapsulate the packets. I can capture the HELLO and the GRE packets on the Linux machine. But the module does NOT get used:

sq:~ # lsmod
ip_wccp 744 0 (unused)

and that is why the kernel does NOT catch and decapsulate the incoming packets before passing them to Squid when I turned on WCCP version 2 on the router. The questions now are:
- how can I get the kernel to decapsulate the packets?
- does the kernel normally log the access of the module into /var/log/messages?
Cheers, Alexander
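For reference, the PREROUTING rule mentioned above is typically of this form. A sketch only — the interface name is an assumption, and the rule redirects decapsulated port-80 traffic to Squid rather than doing the GRE decapsulation itself (that is the ip_wccp module's job):

```
# Redirect intercepted port-80 traffic to Squid's port 3128.
# "eth0" is a placeholder for the interface facing the router.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```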
[squid-users] filtering new IE exploit
I saw a new IE exploit described as follows: http://www.secunia.com/advisories/10395/ Example displaying only "http://www.trusted_site.com" in the address bar when the real domain is malicious_site.com: http://[EMAIL PROTECTED]/malicious.html I'm trying to use an acl to prevent access to such URLs. I tried this: acl ieflaw url_regex %01@ and http_access deny ieflaw but this doesn't seem to do anything at all. Can anyone help? This problem could be serious and who knows when M$ will get it patched. DB
Re: [squid-users] filtering new IE exploit
On Thursday 11 December 2003 3:07 pm, DB wrote: I saw a new IE exploit described as follows: http://www.secunia.com/advisories/10395/ Example displaying only "http://www.trusted_site.com" in the address bar when the real domain is malicious_site.com: http://[EMAIL PROTECTED]/malicious.html I'm trying to use an acl to prevent access to such URLs. I tried this: acl ieflaw url_regex %01@ and http_access deny ieflaw but this doesn't seem to do anything at all.

This is a bit of a guess, but you might need to escape one or two of those characters? acl ieflaw url_regex \%01\@ should be safe. Also, from a discussion on another mailing list, I believe the exploit is still effective: a) with one or more characters between the %01 and the @ (I don't know if there's an upper limit to how many can be inserted) b) with certain other non-printable characters in place of the %01. Antony. -- There are two possible outcomes: If the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery. - Enrico Fermi Please reply to the list; please don't CC me.
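Given those variations, a broader (and blunter) approach is to deny any URL carrying user-info before the hostname. A sketch — note it also blocks legitimate http://user:pass@host URLs, so weigh that trade-off:

```
# squid.conf sketch: deny any URL with an "@" before the first "/" of the path.
# This blocks all user-info URLs, legitimate ones included.
acl url_userinfo url_regex -i ^[a-z]+://[^/]*@
http_access deny url_userinfo
```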
[squid-users] [OT] Buy my book?
I appreciate this list being here, but why am I seeing what appear to be automated responses from [EMAIL PROTECTED] to buy his book? Not exactly a useful response to me and others. I'm new to this list; is this normal here?
[squid-users] [OT] Appology
In regards to my last posting, I would like to apologize. All I saw was the top of his responses, and my client was not drawing scroll bars to show the rest of the message, so I thought that was the full message. My personal apologies; that was foolish.
AW: [squid-users] [OT] Buy my book?
This list and the whole world has been waiting for this book for months and years ;-) Announcing this book is good news for us. Mit freundlichem Gruß/Yours sincerely Werner Rost GM-FIR - Netzwerk ZF Boge Elastmetall GmbH Friesdorfer Str. 175, 53175 Bonn, Deutschland/Germany Telefon/Phone +49 228 3825 - 420, Telefax/Fax +49 228 3825 - 398 [EMAIL PROTECTED] -Original Message- From: fire-eyes [mailto:[EMAIL PROTECTED] Sent: Thursday, 11 December 2003 16:37 To: [EMAIL PROTECTED] Subject: [squid-users] [OT] Buy my book? Importance: Low I appreciate this list being here, but why am I seeing what appear to be automated responses from [EMAIL PROTECTED] to buy his book? Not exactly a useful response to myself and others. I'm new to this list, is this normal here?
Re: [squid-users] [OT] Buy my book?
On Thursday 11 December 2003 3:37 pm, fire-eyes wrote: I appreciate this list being here, but why am I seeing what appear to be automated responses from [EMAIL PROTECTED] to buy his book? Not exactly a useful response to myself and others. I'm new to this list, is this normal here? Not really - what you are seeing is something like a sig, but placed at the top of the email instead of the bottom. Certainly in the four emails Duane posted this morning, he has responded to the questions further down in his reply, although I can understand your confusion on seeing the advert at the top and thinking perhaps that's all there was. In my opinion top posting is bad enough, but mixing top posting and bottom posting in the same email is appalling. Antony. -- I want to build a machine that will be proud of me. - Danny Hillis, creator of The Connection Machine Please reply to the list; please don't CC me.
Re: [squid-users] [OT] Appology
On Thursday 11 December 2003 3:41 pm, fire-eyes wrote: In regards to my last posting, I would like to apologize. All I saw was the top of his responses, and my client was not drawing scroll bars to show the rest of the message, so I thought that was the full message. My personal apologies; that was foolish. A valid comment all the same. There was more to Duane's postings than you thought at first, but I don't think there's an excuse for the confusing format of them. Antony. -- Anything that improbable is effectively impossible. - Murray Gell-Mann, Nobel Prize winner in Physics Please reply to the list; please don't CC me.
[squid-users] Update on odd errors
I removed the state files as Duane had instructed, and I am seeing different problems now. This is after restarting squid and, for other reasons, the entire machine.

2003/12/11 11:05:50| Starting Squid Cache version 2.5.STABLE4 for i686-pc-linux-gnu...
2003/12/11 11:05:50| Process ID 2422
2003/12/11 11:05:50| With 1024 file descriptors available
2003/12/11 11:05:50| DNS Socket created at 0.0.0.0, port 32772, FD 4
2003/12/11 11:05:50| Adding nameserver 192.168.1.2 from /etc/resolv.conf
2003/12/11 11:05:50| User-Agent logging is disabled.
2003/12/11 11:05:50| Referer logging is disabled.
2003/12/11 11:05:50| errorTryLoadText: '/etc/squid/errors/ERR_READ_TIMEOUT': (13) Permission denied
2003/12/11 11:05:50| errorTryLoadText: '/usr/lib/squid/errors/English/ERR_READ_TIMEOUT': (13) Permission denied
FATAL: failed to find or read error text file.
Squid Cache (Version 2.5.STABLE4): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 399

The number 399 strikes me as interesting, as that is how many MB of memory it has been using. I'll be investigating further...
RE: [squid-users] .pac file/newbie guide request here
I put the .pac file on the workstations, and update it via login script. This allows me to do things like:

function FindProxyForURL(url, host) {
    if (isInNet(myIpAddress(), "10.10.14.0", "255.255.255.0"))
        return "PROXY 10.10.10.10:3128";
    return "DIRECT";
}

which allows laptops to go home and work correctly, without a proxy server, on their broadband connection. -GS -Original Message- From: Duane Wessels [mailto:[EMAIL PROTECTED] Sent: Thursday, December 11, 2003 12:35 AM To: Renato Kalugdan Cc: [EMAIL PROTECTED] Subject: Re: [squid-users] .pac file/newbie guide request here On Wed, 10 Dec 2003, Renato Kalugdan wrote: Hello All, I've just implemented Squid as a Proxy Server on a Lab setup at work. So far so good. My question pertains to .pac files Is there a guide that will allow me to comprehend this more thoroughly? Where would I put such a file? On the Squid Server or on a Web Server? You would put this file on a Web server. Furthermore you need to make sure that the server returns the correct content type for the URL. You can do it in apache like this: AddType application/x-ns-proxy-autoconfig .pac Duane W.
RES: [squid-users] filtering new IE exploit
It didn't work here. It seems the cache only receives the 2nd part of the address (as seen in access.log). Really hope M$ patches it quickly. Regards, J.T. João Tiago T. F. Silveira Baterias AJAX Ltda. Departamento de Informática [EMAIL PROTECTED] http://www.ajax.com.br -Original Message- From: Antony Stone [mailto:[EMAIL PROTECTED] Sent: Thursday, 11 December 2003 13:29 To: [EMAIL PROTECTED] Subject: Re: [squid-users] filtering new IE exploit On Thursday 11 December 2003 3:07 pm, DB wrote: I saw a new IE exploit described as follows: http://www.secunia.com/advisories/10395/ Example displaying only "http://www.trusted_site.com" in the address bar when the real domain is malicious_site.com: http://[EMAIL PROTECTED]/malicious.html I'm trying to use an acl to prevent access to such URLs. I tried this: acl ieflaw url_regex %01@ and http_access deny ieflaw but this doesn't seem to do anything at all. This is a bit of a guess, but you might need to escape one or two of those characters? acl ieflaw url_regex \%01\@ should be safe. Also, from a discussion on another mailing list, I believe the exploit is still effective: a) with one or more characters between the %01 and the @ (I don't know if there's an upper limit to how many can be inserted) b) with certain other non-printable characters in place of the %01. Antony. -- There are two possible outcomes: If the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery. - Enrico Fermi Please reply to the list; please don't CC me.
Re: RES: [squid-users] filtering new IE exploit
On Thursday 11 December 2003 4:16 pm, AJAX - João Tiago T. F. Silveira wrote: It didn't work here. It seems the cache only receives the 2nd part of the address (as seen in access.log). Probably the bug in IE6 is that it shows the first part of the URL, but actually requests the second part. Therefore Squid will only see the second part (because that's what was requested), whilst the user only sees the first part... Really hope M$ patches it quickly. Hm, where have I heard that before? Antony. -- All matter in the Universe can be placed into one of two categories: 1. Things which need to be fixed. 2. Things which need to be fixed once you've had a few minutes to play with them. Please reply to the list; please don't CC me.
[squid-users] Squid, snmp and MRTG
I am trying to configure Squid for SNMP, although when I execute mrtg pointing to my mrtg-squid.cfg I get an error: no response received. I am using the walkthrough at http://www.psychofx.com/chris/unix/mrtg/ I have edited squid.conf and configured it for SNMP on port 3401, and have uncommented the following sections; squid stops and starts without errors. Since I have the acl snmppublic, do I need to add my IP to the acl so that I can query Squid?

snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 255.255.255.255

Output below.

mrtg /var/www/mrtg/mrtg-squid.cfg
SNMP Error: no response received SNMPv1_Session (remote host: localhost [127.0.0.1].3401) community: public request ID: -2107181969 PDU bufsize: 8000 bytes timeout: 2s retries: 5 backoff: 1) at /usr/share/perl5/SNMP_util.pm line 465 SNMPGET Problem for cacheServerRequests cacheServerRequests cacheUptime cacheSoftware cacheVersionId on [EMAIL PROTECTED]:3401 at /usr/bin/mrtg line 1683 Use of uninitialized value in concatenation (.) or string at /usr/bin/mrtg line 1686. Use of uninitialized value in concatenation (.) or string at /usr/bin/mrtg line 1686. Modification of non-creatable array value attempted, subscript -2 at /usr/bin/mrtg line 1686.

Jim
[squid-users] WCCP with Squidguard?
I've been tasked with getting a network infrastructure using Cisco routers to unobtrusively start redirecting traffic to an existing Squid server. The network admin is helping on the Cisco side but some things just aren't quite there yet. Does anyone have a short howto on making sure the proxy server is set up and ready, and/or any notes on the Cisco side of things? I'm reading through the Cisco web pages now but it's going to take a while to go through all this. Robert :wq! --- Robert L. Harris | GPG Key ID: E344DA3B @ x-hkp://pgp.mit.edu DISCLAIMER: These are MY OPINIONS ALONE. I speak for no-one else. Life is not a destination, it's a journey. Microsoft produces 15 car pileups on the highway. Don't stop traffic to stand and gawk at the tragedy.
Re: [squid-users] [OT] Buy my book?
On Thu, 11 Dec 2003, fire-eyes wrote: I appreciate this list being here, but why am I seeing what appear to be automated responses from [EMAIL PROTECTED] to buy his book? Not exactly a useful response to myself and others. I'm new to this list, is this normal here? This was my mistake. I had intended for that to appear at the bottom of some of my emails (although not messages sent to this list). My email program placed the signature at the top of the message, and I didn't notice it there when replying. I won't be using that feature any more. Duane W.
Re: [squid-users] Update on odd errors
On Thu, 11 Dec 2003, fire-eyes wrote: I removed the state files as Duane had instructed, and I am seeing different problems now. This is after restarting squid and, for other reasons, the entire machine.

2003/12/11 11:05:50| Starting Squid Cache version 2.5.STABLE4 for i686-pc-linux-gnu...
2003/12/11 11:05:50| Process ID 2422
2003/12/11 11:05:50| With 1024 file descriptors available
2003/12/11 11:05:50| DNS Socket created at 0.0.0.0, port 32772, FD 4
2003/12/11 11:05:50| Adding nameserver 192.168.1.2 from /etc/resolv.conf
2003/12/11 11:05:50| User-Agent logging is disabled.
2003/12/11 11:05:50| Referer logging is disabled.
2003/12/11 11:05:50| errorTryLoadText: '/etc/squid/errors/ERR_READ_TIMEOUT': (13) Permission denied
2003/12/11 11:05:50| errorTryLoadText: '/usr/lib/squid/errors/English/ERR_READ_TIMEOUT': (13) Permission denied
FATAL: failed to find or read error text file.
Squid Cache (Version 2.5.STABLE4): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 399

You have some serious file permission problems there. Make sure that the Squid userid can read the files and parent directories mentioned in the errors.

The number 399 strikes me as interesting, as that is how many MB of memory it has been using.

It's just a coincidence. DW
Re: [squid-users] Squid, snmp and MRTG
On Thu, 11 Dec 2003 Jim_Brouse/[EMAIL PROTECTED] wrote: I am trying to configure Squid for SNMP although when I execute mrtg pointing to my mrtg-squid.cfg I get an error: no response received. I am using the walkthrough at http://www.psychofx.com/chris/unix/mrtg/ I have edited squid.conf and configured it for SNMP on port 3401 and have uncommented the following sections; squid stops and starts without errors. Since I have the acl snmppublic, do I need to add my IP to the acl so that I can query Squid?

snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 255.255.255.255

Since you are using localhost, the above 'snmp_access' line should be fine. You may want to add 49,9 to debug_options in squid.conf. Then watch cache.log. That should show you if Squid is actually receiving the SNMP queries, and whether or not they are allowed. You might also want to use tcpdump/ethereal to look for SNMP packets. Duane W.
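To be clear, 49,9 belongs in squid.conf rather than on the mrtg command line. A sketch (section 49 is, as I understand it, the SNMP code in Squid's debug-section numbering):

```
# squid.conf: keep normal logging at level 1, raise SNMP debugging (section 49) to 9.
debug_options ALL,1 49,9
```

After a `squid -k reconfigure`, the extra SNMP detail appears in cache.log.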
[squid-users] authentication problem
Hello everybody, I hope somebody can help me! I am running squid-2.5.STABLE1-2 and having problems authenticating users against a Win2000 ADS/LDAP directory. When I installed Win2000 I created the following domain: tre-pb.gov.br. I didn't create any organization unit, so the users that I created stay under the standard organization unit (Users). This is the line that I have in squid.conf to define the external helper:

auth_param basic program /usr/lib/squid/squid_ldap_auth -b ou=Users, dc=tre-pb, dc=gov, dc=br -h 10.12.1.15

The following error message appears in /var/log/squid/access.log: TCP_DENIED/407 1755 GET http://www.google.com. Am I doing something wrong? If so, please help me.
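Two things stand out in that helper line. In Active Directory the default Users branch is a container, not an organizational unit, so the base DN should normally be cn=Users rather than ou=Users; and a DN written with spaces after the commas is split into several arguments, so it must be quoted as one. A sketch, untested against this particular domain:

```
# squid.conf sketch: cn=Users (AD container), base DN quoted as a single argument.
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "cn=Users,dc=tre-pb,dc=gov,dc=br" -h 10.12.1.15
```

Testing the helper by hand (run it, type `username password`, expect OK or ERR) will confirm the base DN before involving the browser.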
Re: [squid-users] Squid, snmp and MRTG
This is the command I used: mrtg /var/www/mrtg/mrtg-squid.cfg 49,9 Is that how you meant for 49,9 to be used? Below is the output in cache.log:

2003/12/11 08:47:09| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 08:47:11| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 08:47:13| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 08:47:15| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 08:47:17| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 09:18:52| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 09:18:53| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 09:18:55| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 09:18:57| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 09:18:59| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 13:29:24| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 13:29:26| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 13:29:28| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 13:29:30| Failed SNMP agent query from : 127.0.0.1.
2003/12/11 13:29:32| Failed SNMP agent query from : 127.0.0.1.

Jim

Duane Wessels wrote: On Thu, 11 Dec 2003 Jim_Brouse/[EMAIL PROTECTED] wrote: I am trying to configure Squid for SNMP although when I execute mrtg pointing to my mrtg-squid.cfg I get an error: no response received. I am using the walkthrough at http://www.psychofx.com/chris/unix/mrtg/ I have edited squid.conf and configured it for SNMP on port 3401 and have uncommented the following sections; squid stops and starts without errors. Since I have the acl snmppublic, do I need to add my IP to the acl so that I can query Squid? snmp_port 3401 acl snmppublic snmp_community public snmp_access allow snmppublic localhost snmp_incoming_address 0.0.0.0 snmp_outgoing_address 255.255.255.255 Since you are using localhost, the above 'snmp_access' line should be fine. You may want to add 49,9 to debug_options in squid.conf. Then watch cache.log. That should show you if Squid is actually receiving the SNMP queries, and whether or not they are allowed. You might also want to use tcpdump/ethereal to look for SNMP packets. Duane W.
RE: AW: AW: AW: AW: [squid-users] smb_auth
And what does get proxyauth - return? (this is what smb_auth does) Regards Henrik On Thu, 11 Dec 2003, melvin melvin wrote: if I use smb: \more proxyauth it returns me an allow. I tried smb: \get proxyauth getting file proxyauth of size 5 as proxyauth (1.0 kb/s) (average 1.0 kb/s) thanks From: Henrik Nordstrom [EMAIL PROTECTED] To: melvin melvin [EMAIL PROTECTED] CC: [EMAIL PROTECTED], [EMAIL PROTECTED] Subject: RE: AW: AW: AW: AW: [squid-users] smb_auth Date: Thu, 11 Dec 2003 09:40:55 +0100 (CET) On Thu, 11 Dec 2003, melvin melvin wrote: while in the NETLOGON directory smb: \open proxyauth smb: \ nothing appears. I tried other files but the outcome is the same. I think you need to use the get command... get proxyauth - most likely the open command just tries to open the file, but I cannot even find this smbclient command in my copy of the documentation (probably because my Samba is a bit dated) Regards Henrik
[squid-users] Yahoo Messenger fails to authenticate through squid
Yahoo Messenger fails to authenticate through Squid version 2.5 stable1.

access.log:
1071169347.406 1 172.20.x.xxx TCP_DENIED/407 1738 POST http://shttp.msg.yahoo.com/notify/ - NONE/- text/html

Yahoo log:
On 12/11 14:19: 4| Open new internet session and connection. The error code is 0
On 12/11 14:19: 4| Failed on HttpSendRequest, restart. The error code is 12031
On 12/11 14:20:42| #Start Session::User connecting to HTTP server
On 12/11 14:21:12| Failed on HttpSendRequest, restart. The error code is 12002
On 12/11 14:25: 9| #Start Session::User connecting to HTTP server

We are using the NTLM scheme for authentication and we can browse without problems. Any suggestions? RGDS
[squid-users] SNMP + Remote query problem
Hello all, I've been befuddled by an SNMP problem. I've got a Squid box running as a transparent cache (using WCCP) which is working fine. I've tried to get SNMP going so that I can use MRTG to generate usage graphs. Where things stand now is that SNMP queries work fine from the localhost which Squid is running on, i.e.:

snmpwalk 192.168.255.120:3401 -c public .1.3.6.1.4.1.3495.1.1

returns:

enterprises.3495.1.1.1.0 = 2636
enterprises.3495.1.1.2.0 = 123976
enterprises.3495.1.1.3.0 = Timeticks: (261736) 0:43:37.36

however that same statement from another box on the same network yields a timeout, and I get Failed SNMP agent query from : 192.168.252.82. on the Squid box. I've been unable to find any similar situations on the web or in the archives of this mailing list, so I'm wondering if anyone else has seen this and/or has any thoughts on how to diagnose it. Here are the pertinent sections of my squid.conf:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl snmppub snmp_community public
acl logger src 192.168.252.82/255.255.255.255
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
acl our_networks src 192.168.252.0/22
http_access allow our_networks
http_reply_access allow all
snmp_access allow snmppub logger
snmp_access deny all

Any help at all would be greatly appreciated! Thanks Berant
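One quick way to narrow this down is a packet capture on the SNMP port, to confirm whether the remote queries arrive intact and whether any reply leaves the box. A sketch, with the interface name as an assumption:

```
# Watch for SNMP queries/replies on Squid's snmp_port (3401/udp).
# "eth0" is a placeholder for the interface the remote box reaches.
tcpdump -n -i eth0 udp port 3401
```

If queries show up here but Squid still logs "Failed SNMP agent query", the problem is in the snmp_access evaluation rather than the network path.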
Re: [squid-users] squid 2.5 and wccp
On Thu, 11 Dec 2003, Alexander Harkenthal wrote: I can capture the HELLO - and the GRE packets on the linux machine. But the module does NOT get used: That is normal. The module just magically sits there doing its work when loaded. There are no references that increase the use count. and that's why the kernel does NOT catch and decapsulate the incoming packets before passing them to Squid when I turned on wccp version 2 on the router. Do you see the decapsulated packets anywhere? - how can I get the kernel to decapsulate the packets ? This is what the ip_wccp module is supposed to be doing. Note that you need to use the correct version of the ip_wccp module to match the WCCP version you are using. - does the kernel normally log the access of the module into /var/log/messages ? Not unless you select to log things using iptables. Regards Henrik
Re: [squid-users] filtering new IE exploit
On Thursday 11 December 2003 3:07 pm, DB wrote: I saw a new IE exploit described as follows: - http://www.secunia.com/advisories/10395/ Example displaying only http://www.trusted_site.com in the address bar when the real domain is malicious_site.com: http://[EMAIL PROTECTED]/malicious.html I'm trying to use an acl to prevent access to such urls. I tried this: acl ieflaw url_regex %01@ and http_access deny ieflaw but this doesn't seem to do anything at all What do you see in access.log? Regards Henrik
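A broader variant that is sometimes used against this class of spoofed-login URLs is to deny any URL carrying a userinfo part before the hostname. This is a hypothetical sketch, not a rule from the thread, and note that it also blocks legitimate user@host URLs such as authenticated FTP:

```
# Hedged sketch: deny any URL with a userinfo part before the host
acl userinfo_url url_regex -i ^[a-z]+://[^/@]*@
http_access deny userinfo_url
```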
RE: [squid-users] Squid2.4 supports persistent connection, but why Squid2.5 or Squid3.0 not.
Thank you so much, Henrik. I really want the persistent connection between Squid and my server because my server doesn't support application-level session IDs and I use the TCP connection ID (socket) to keep talking to my client application until the server or client closes the connection. But if our customer chooses Squid as their proxy server and Squid doesn't support persistent connections, we will have trouble with the communication. Would you please let me know if there is a way to set Squid to keep a persistent connection and send multiple requests in one TCP connection if I use an HTTP method other than POST? How about GET or CONNECT or ...? GET requests cannot carry large data, but POST can. There are also concerns about the performance of my server if Squid doesn't support persistent connections. Best Regards, Chi Sun -Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] Sent: Thursday, December 11, 2003 1:06 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: [squid-users] Squid2.4 supports persistent connection, but why Squid2.5 or Squid3.0 not. On Wed, 10 Dec 2003 [EMAIL PROTECTED] wrote: Could you let me know how Squid2.5 or 3.0 can keep a TCP connection to allow client and server to send requests and responses in one TCP connection? Works here... But please remember that Squid is a proxy and persistent connections are a hop-by-hop feature of HTTP. There is no guarantee that the next client request will be forwarded on the same server connection. Squid simply selects the best connection to use for forwarding the request. Ah, now I realise what your problem is. You are looking at the POST method. Due to complications with non-idempotent requests in case of server-side timeouts of persistent connections, Squid-2.5.STABLE2 and later does not use persistent connections for POST or other non-idempotent requests. This is also strongly recommended in the HTTP/1.1 specifications for the same reasons. 
url:http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE1-indempotent Regards Henrik
Re: [squid-users] [OT] Buy my book?
On Thu, 11 Dec 2003, fire-eyes wrote: I appreciate this list being here, but why am I seeing what appear to be automated responses from [EMAIL PROTECTED] to buy his book? Not exactly a useful response to myself and others. All messages I have seen have included useful comments further down in the message. I'm new to this list, is this normal here? It is not normal, but I would say he is fully entitled to include such advertisements in any messages he sends in any manner he pleases, at least as long as the rest of the message corresponds with the guidelines for the squid-users mailing list. He is, after all, the father of Squid and the one paying for all costs around the squid-cache.org site and mailing lists (via one of his companies). For the convenience of readers I usually place advertisements in the bottom sig when I want to advertise my services, but I did not mind seeing this specific one at the top (though I suspect from the look of it that it was meant to be a bottom sig). It is fairly big news that finally, after more than 6 years, there is now a book devoted entirely to using Squid. It is a major step for Squid and should be noticed ;-) Regards Henrik
Re: [squid-users] Update on odd errors
On Thu, 11 Dec 2003, fire-eyes wrote: 2003/12/11 11:05:50| errorTryLoadText: '/etc/squid/errors/ERR_READ_TIMEOUT': (13) Permission denied This you MUST fix. Page faults with physical i/o: 399 This does not mean anything relevant. Regards Henrik
Re: RES: [squid-users] filtering new IE exploit
On Thu, 11 Dec 2003, Antony Stone wrote: Really hope M$ patches it quickly. Hm, where have I heard that before? Well, it only took them 2 years to officially patch the authentication in IE6 without breaking other aspects of authentication, but that bug only rendered the browser not very useful to users behind proxies requiring authentication; it was not a security issue and thus was not given very much attention outside the proxy communities. Bugs like this new one are likely to get widespread attention, and thus more people with support contracts demanding a fix. If you have a support contract and would like to see a bug fixed, you had better make use of your support contract rather than sit silent. The more support requests there are for a bug, the higher the likelihood that the provider gives the issue priority. If you do not have a support contract then I am afraid you have to accept what you get, or look for an alternative where the support options suit you better. Regards Henrik
Re: [squid-users] authentication problem
On Thu, 11 Dec 2003, Victor Souza Menezes wrote: following domain: tre-pb.gov.br. I didn't create any organizational unit, so the users that I created stay under the standard organizational unit (Users). This is the line that I have in squid.conf to define the external helper: auth_param basic program /usr/lib/squid/squid_ldap_auth -b ou=Users, dc=tre-pb, dc=gov, dc=br -h 10.12.1.15 You still need to use the search mode of the helper. See the squid_ldap_auth manual. You can also find a couple of MSAD examples in the squid_ldap_auth manual. Regards Henrik
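A search-mode invocation adapted from the thread might look roughly like the following. The -f filter, bind DN (-D) and password (-w) here are placeholders to adjust for your directory, and note that MSAD's default Users container is a cn=, not an ou= (check the squid_ldap_auth manual for the exact flags supported by your version):

```
auth_param basic program /usr/lib/squid/squid_ldap_auth \
    -b "cn=Users,dc=tre-pb,dc=gov,dc=br" \
    -f "(&(objectClass=user)(sAMAccountName=%s))" \
    -D "cn=squidproxy,cn=Users,dc=tre-pb,dc=gov,dc=br" -w secret \
    -h 10.12.1.15
```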
Re: [squid-users] SNMP + Remote query problem
On Thu, 11 Dec 2003, Berant Lemmenes wrote: however that same statement from another box on the same network yields a timeout and I get Failed SNMP agent query from : 192.168.252.82. on the squid box. Depending on the version of your SNMP tools you may need to specify which version of SNMP to use. The Squid SNMP agent is a little dated and only supports SNMPv1 or SNMPv2 queries. Using SNMPv1 is a safe bet. Regards Henrik
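With Net-SNMP style tools, forcing SNMPv1 from the remote box would look roughly like this (host, port, community and OID taken from the thread; older CMU-style tools put the host first, as in Berant's original command, so the exact syntax depends on your snmpwalk version):

```
snmpwalk -v 1 -c public 192.168.255.120:3401 .1.3.6.1.4.1.3495.1.1
```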
[squid-users] redirecting transparently to few different ports based on URL or domain name
I am running Squid on port 80, proxying to Apache httpd on 81, following the FAQs. I could use a redirector successfully to redirect from one URL to another on the same host and port; however, I'd like Squid and/or its redirector to be able to route requests based on URL (ideally) or at least based on the subdomain name to a specific port on the same local machine it is running on. So for example http://soaptest.parasoft.com:80/ would go to the apache on the same machine unchanged: http://soaptest.parasoft.com:81/ http://soaptest.parasoft.com/ws1 would get routed to a different HTTP server (web service) on another port like 8000 on the same machine, ie http://soaptest.parasoft.com:8000/ws1 http://soaptest.parasoft.com/ws2 would go to http://soaptest.parasoft.com:8001/ws2 etc. Or if that is not possible, perhaps with virtual hosts like: http://ws1.soaptest.parasoft.com -- http://soaptest.parasoft.com:8000 http://ws2.soaptest.parasoft.com -- http://soaptest.parasoft.com:8001 ...etc. Keep in mind that this routing needs to be transparent; the web services listening on ports 8000 and 8001 cannot take an HTML redirect page, which apache can generate. When I specify different ports in the destination URL in my redirector program to do this, it doesn't work as expected. It only seems to redirect to httpd port 81, ignoring any ports I put in the destination URL. Can Squid do what I need? I investigated iptables and other little tools like rinetd, but none of them seems to do what I need because they operate at the IP address level; they don't take domain names, right? Thank you for your time. Rami Jaamour SOAPtest http://www.parasoft.com/jsp/products/home.jsp?product=SOAP Development ParaSoft Corporation http://www.parasoft.com
RE: [squid-users] Squid2.4 supports persistent connection, but why Squid2.5 or Squid3.0 not.
On Thu, 11 Dec 2003 [EMAIL PROTECTED] wrote: I really want the persistent connection between Squid and my server because my server doesn't support application-level session IDs and I use the TCP connection ID (socket) to keep talking to my client application until the server or client closes the connection. This will by definition NOT WORK when the client is using a HTTP proxy. If you make this assumption in your application then it is not HTTP compliant, as HTTP does not guarantee there is a TCP session to end user relation, and even encourages that there should be no such relation in order to make more efficient use of network resources. See RFC 2616 section 8.1.3 Persistent Connections and Proxy Servers. Squid does support persistent connections per the specifications. HTTP persistent connections are a hop-by-hop feature of HTTP and are negotiated separately client-proxy and proxy-server. For each hop the connection is to be used as efficiently as possible, while at the same time not violating the non-idempotent request requirements. What this means is that a) A proxy may have a number of persistent connections open to the server. When a client request is to be forwarded (regardless of how this request was received by the proxy) the first available persistent connection to the requested server will be selected. This means that the server will and MUST expect to receive requests from multiple clients on the same connection, and requests from the same client connection may be forwarded on different server connections depending on the total traffic pattern, timing and whatever else may influence how the proxy selects which persistent connection to forward the request on. b) POST and other non-idempotent request methods will always be sent on a new connection to the server by the proxy. This is due to the fact that persistent connections are not reliable and may be closed by the server at any time while idle, and the fact that a proxy is not allowed to retry non-idempotent requests even if sending the request over a persistent connection fails due to the server closing the connection while the request is being sent by the proxy. Because of this the proxy can not reuse a persistent server connection for a POST request without risking failing the request in ways not acceptable by the HTTP specification. If you really need to make the above assumption about client connections then you should use https. Due to the nature of running on top of SSL, https gives a sort of guaranteed TCP connection to end user relation. (SSL guarantees this even if the HTTP which runs on top of the SSL connection does not) Regards Henrik
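The reuse rule described in (b) can be sketched as a simple predicate (an illustration of the idea, not Squid source code): a request may be forwarded on an already-open persistent connection only if it would be safe to retry, i.e. only if its method is idempotent per RFC 2616.

```python
# Illustrative sketch, not Squid source: decide whether a request may be
# forwarded on a possibly-stale persistent server connection.
# RFC 2616 treats these methods as idempotent (safe to retry).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def may_reuse_pconn(method):
    """True if the request may be retried, and hence may be sent on an
    idle persistent connection that the server might have just closed."""
    return method.upper() in IDEMPOTENT_METHODS

# POST (and other non-idempotent methods) always get a fresh connection:
assert may_reuse_pconn("GET")
assert not may_reuse_pconn("POST")
```

POST therefore always costs a new TCP connection through the proxy, which is exactly the behaviour change between Squid 2.4 and 2.5 discussed in this thread.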
Re: [squid-users] redirecting transparently to few different ports based on URL or domain name
On Thu, 11 Dec 2003, Rami Jaamour wrote: I am running Squid on port 80, proxying to Apache httpd on 81, following the FAQs. I could use a redirector successfully to redirect from one URL to another on the same host and port; however, I'd like Squid and/or its redirector to be able to route requests based on URL (ideally) or at least based on the subdomain name to a specific port on the same local machine it is running on. The redirector is free to do whatever you please as long as you do not enable httpd_accel_single_host. Via the redirector interface you can divide the URL space of a single domain (or multiple, your choice) in any way you please among different web servers, while at the same time applying any transformations of the URL-path you may need. You can for example send all requests for .jpg images to one server, requests for .nfs files to another (Notes) server, move things around if needed, etc etc. When you are not using httpd_accel_single_host then the reverse-proxying process is roughly 1. Read the request and reconstruct a full URL based on the httpd_accel_host and httpd_accel_uses_host_header settings. 2. http_access access controls 3. Send the request to the redirector 4. Fetch the URL as returned by the redirector. Regards Henrik
Re: [squid-users] redirecting transparently to few different ports based on URL or domain name
Well, yes, I see what you are saying; I thought that I could do that, but here is my redirector: #!/usr/bin/perl $|=1; while (<>) { [EMAIL PROTECTED]://soaptest.parasoft.com/[EMAIL PROTECTED]://soaptest.parasoft.com/glue/calculator-01.wsdl@; [EMAIL PROTECTED]://soaptest.parasoft.com/glue/[EMAIL PROTECTED]://soaptest.parasoft.com:8000/glue/calculator@; print; } The first one works fine, but when there is a port change like in the second one it does not work; I just get a 404 from apache on 81! Note that there is another HTTP server listening on http://soaptest.parasoft.com:8000, different from the Apache which Squid is hooked into. Rami Henrik Nordstrom wrote: On Thu, 11 Dec 2003, Rami Jaamour wrote: I am running Squid on port 80, proxying to Apache httpd on 81, following the FAQs. I could use a redirector successfully to redirect from one URL to another on the same host and port; however, I'd like Squid and/or its redirector to be able to route requests based on URL (ideally) or at least based on the subdomain name to a specific port on the same local machine it is running on. The redirector is free to do whatever you please as long as you do not enable httpd_accel_single_host. Via the redirector interface you can divide the URL space of a single domain (or multiple, your choice) in any way you please among different web servers, while at the same time applying any transformations of the URL-path you may need. You can for example send all requests for .jpg images to one server, requests for .nfs files to another (Notes) server, move things around if needed, etc etc. When you are not using httpd_accel_single_host then the reverse-proxying process is roughly 1. Read the request and reconstruct a full URL based on httpd_accel_host and httpd_accel_uses_host_header settings. 2. http_access access controls 3. Send the request to the redirector 4. Fetch the URL as returned by the redirector. 
Regards Henrik -- Rami Jaamour SOAPtest http://www.parasoft.com/jsp/products/home.jsp?product=SOAP Development ParaSoft Corporation http://www.parasoft.com
[squid-users] moving the cache
Good day, List I have added a new scsi disk to our squid server, and would like to know which is the best way to move the cache to the new scsi disk. Your advice will be highly appreciated Regards George
Re: AW: [squid-users] [OT] Buy my book?
On Thu, 11 Dec 2003 16:42:37 +0100, Werner wrote: This list and the whole world has been waiting for this book for months and years ;-) Announcing this book is good news for us. Indeed - many congratulations Duane. I saw Duane's sig and, like Henrik, just assumed it was supposed to be a sig, but like he said, if Duane wants to put it at the top, no problem for me; all his posts were/are helpful. Although this mailing list is awesome, I will be buying the book for three reasons: - my company doesn't let us send money or donations to the squid project (I've asked - I'd like to donate a Sun since that is what we use, but we can't donate). But we *can* buy books so we'll be getting at least two. - It's nice to have a reference, and I am sure it has bits that either weren't in the FAQ or on this list, or are but will be more easily findable via the TOC or index/book format. - I won't always be with this company and this will help me turn over our Squid proxies to whoever takes my place. Again, I think this is a great achievement and I am looking forward to it. And as Henrik mentions, it's a major step for Squid and should be noticed! Thanks again Duane and the whole Squid Team! adam
Re: [squid-users] moving the cache
On Fri, 12 Dec 2003, George Dominguez wrote: Good day, List I have added a new scsi disk to our squid server, and would like to know which is the best way to move the cache to the new scsi disk. First, make sure Squid is not running. Assuming that your 'mv' can move directories between filesystems (most can), you can just do: # mv /from/oldcachedir /to/newcachedir then update cache_dir in squid.conf and restart. This procedure will take a while, depending on the size of your cache and the speed of your system. If you don't want to have any interruption of service, and can wait a really long time (like days or weeks), you can use this trick: Add a new cache_dir and mark the old cache_dir as read-only. New objects will get stored to the new location, and you'll get cache hits from the old one. After some amount of time, say when the new cache dir fills up, you can remove the old one. Duane W.
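Duane's first procedure can be rehearsed safely on throwaway directories before touching the real cache. A Python sketch (the real paths, and the stop/restart steps, are as he describes; shutil.move behaves like mv and copies recursively when a plain rename across filesystems is not possible):

```python
import os, shutil, tempfile

# Stand-ins for the old cache_dir and the new disk's mount point.
old_fs = tempfile.mkdtemp()
new_fs = tempfile.mkdtemp()
old_dir = os.path.join(old_fs, "cache")

# Fake a tiny cache: L1/L2 subdirectories plus a swap.state file.
os.makedirs(os.path.join(old_dir, "00", "00"))
open(os.path.join(old_dir, "swap.state"), "w").close()

# Step 1: stop Squid (squid -k shutdown, not shown here).
# Step 2: move the whole tree, metadata and all.
new_dir = os.path.join(new_fs, "cache")
shutil.move(old_dir, new_dir)

# Step 3: point cache_dir in squid.conf at new_dir and restart Squid.
print(os.path.isfile(os.path.join(new_dir, "swap.state")))  # True
```

Moving swap.state along with the directories is what lets Squid start without a full rebuild of the cache metadata.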
Re: [squid-users] Squid, snmp and MRTG
On Thu, 11 Dec 2003 Jim_Brouse/[EMAIL PROTECTED] wrote: This is the command I used mrtg /var/www/mrtg/mrtg-squid.cfg 49,9 Is that how you meant for 49,9 to be used? Not really. I meant for this line to be in squid.conf: debug_options ALL,1 49,9 Below is the output in cache.log 2003/12/11 08:47:09| Failed SNMP agent query from : 127.0.0.1. This is helpful anyway. It implies that Squid is denying the SNMP query because it is not matching your access rules. Based on what you've shown so far, it seems like it should work. You are sending queries to 127.0.0.1 and using 'public' as the SNMP community. To see if the problem is with Squid or with MRTG, you might want to try a different SNMP client, such as snmpget from Net-SNMP (net-snmp.sourceforge.net). For example: % snmpget -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1.1.1 Duane W.
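On the MRTG side, a target entry for Squid's SNMP agent typically looks something like the sketch below. The OIDs are the aggregate client-HTTP-requests and HTTP-hits counters as commonly quoted from the Squid MIB; verify them against the mib.txt shipped with your Squid, and adjust MaxBytes to your traffic. Community, host and port match this thread:

```
Target[squid-req]: 1.3.6.1.4.1.3495.1.3.2.1.1.0&1.3.6.1.4.1.3495.1.3.2.1.2.0:public@localhost:3401
Title[squid-req]: Squid client HTTP requests and hits
Options[squid-req]: growright, nopercent
MaxBytes[squid-req]: 10000000
```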
Re: [squid-users] Re: Windows Update Problem
Hello, This isn't a squid problem. Check the time/date on your PC; an incorrect clock will cause these problems. Bart dwi amk wrote: Looks like the problem is not coming from squid. Try asking Uncle Google about the Windows Update error. I've been in the same situation lately; I just needed to go to Tools / Internet Options... in IE. Sturgis, Grant writes: Greetings All, We have experienced an interesting problem with Windows Update. Essentially, the service fails when the client (W2K / IE6) uses the proxy server and succeeds when it bypasses the proxy. After you click Scan for Updates the web server replies with something like (sorry I don't have the exact error in front of me) an unknown error has occurred. The access.log and cache.log don't show anything out of the ordinary (access.log excerpt is below). I have gotten around the problem temporarily by including: acl windowsupdate dstdomain .windowsupdate.microsoft.com no_cache deny windowsupdate in squid.conf The mailing list archives have some similar problems that point to cache_dir being too small (running out of cache space) but I don't believe that is my problem: cache_dir aufs /usr/local/squid/cache0 48000 16 256 cache_dir aufs /usr/local/squid/cache1 48000 16 256 #df -h|grep cache /dev/sdb1 67G 37G 27G 58% /usr/local/squid/cache0 /dev/sdc1 67G 37G 27G 58% /usr/local/squid/cache1 #./squid -v Squid Cache: Version 2.5.STABLE1-20030102 configure options: --enable-storeio=ufs,aufs,diskd --enable-snmp Any suggestions would be most welcome. 
Thanks, Grant - access.log excerpt: Tue Dec 2 15:30:36 2003 30 10.10.14.113 TCP_MEM_HIT/200 3592 GET http://windowsupdate.microsoft.com/ - NONE/- text/html Tue Dec 2 15:30:36 2003 32 10.10.14.113 TCP_MEM_HIT/200 2391 GET http://windowsupdate.microsoft.com/redirect.js - NONE/- application/x-javascript Tue Dec 2 15:30:36 2003 102 10.10.14.113 TCP_MISS/302 428 GET http://v4.windowsupdate.microsoft.com/default.asp - DIRECT/207.46.244.222 text/html Tue Dec 2 15:30:36 2003 174 10.10.14.113 TCP_MISS/200 8383 GET http://v4.windowsupdate.microsoft.com/en/default.asp - DIRECT/65.54.249.61 text/html Tue Dec 2 15:30:36 2003 35 10.10.14.113 TCP_MEM_HIT/200 3854 GET http://v4.windowsupdate.microsoft.com/shared/js/Redirect.js - NONE/- application/x-javascript Tue Dec 2 15:30:36 2003 129 10.10.14.113 TCP_HIT/200 22132 GET http://v4.windowsupdate.microsoft.com/shared/js/top.js - NONE/- application/x-javascript Tue Dec 2 15:30:37 2003 51 10.10.14.113 TCP_HIT/200 520 GET http://v4.windowsupdate.microsoft.com/shared/js/top.vbs - NONE/- text/vbscript Tue Dec 2 15:30:37 2003 106 10.10.14.113 TCP_MISS/200 1173 GET http://v4.windowsupdate.microsoft.com/shared/js/survey.js? - DIRECT/65.54.249.61 application/x-javascript Tue Dec 2 15:30:37 2003 136 10.10.14.113 TCP_MISS/200 1496 GET http://v4.windowsupdate.microsoft.com/en/footer.asp - DIRECT/65.54.249.61 text/html Tue Dec 2 15:30:37 2003 188 10.10.14.113 TCP_MISS/200 7109 GET http://v4.windowsupdate.microsoft.com/en/toc.asp? - DIRECT/65.54.249.61 text/html Tue Dec 2 15:30:37 2003 245 10.10.14.113 TCP_MISS/200 4351 GET http://v4.windowsupdate.microsoft.com/en/mstoolbar.asp? - DIRECT/207.46.244.222 text/html Tue Dec 2 15:30:37 2003 178 10.10.14.113 TCP_MISS/200 1872 GET http://v4.windowsupdate.microsoft.com/en/splash.asp?
- DIRECT/207.46.244.222 text/html Tue Dec 2 15:30:37 2003 71 10.10.14.113 TCP_MEM_HIT/200 558 GET http://v4.windowsupdate.microsoft.com/shared/css/footer.css - NONE/- text/css Tue Dec 2 15:30:37 2003 70 10.10.14.113 TCP_HIT/200 2656 GET http://v4.windowsupdate.microsoft.com/shared/js/mstoolbar.js - NONE/- application/x-javascript Tue Dec 2 15:30:37 2003 105 10.10.14.113 TCP_HIT/200 9547 GET http://v4.windowsupdate.microsoft.com/shared/js/toc.js - NONE/- application/x-javascript Tue Dec 2 15:30:37 2003 113 10.10.14.113 TCP_HIT/200 12615 GET http://v4.windowsupdate.microsoft.com/shared/js/content.js - NONE/- application/x-javascript Tue Dec 2 15:30:37 2003 98 10.10.14.113 TCP_HIT/200 448 GET http://v4.windowsupdate.microsoft.com/shared/images/toc_endnode.gif - NONE/- image/gif Tue Dec 2 15:30:37 2003 98 10.10.14.113 TCP_HIT/200 1578 GET http://v4.windowsupdate.microsoft.com/shared/css/hcp.css - NONE/- text/css Tue Dec 2 15:30:37 2003 139 10.10.14.113 TCP_HIT/200 1573 GET http://v4.windowsupdate.microsoft.com/shared/css/toc.css - NONE/- text/css Tue Dec 2 15:30:37 2003 51 10.10.14.113 TCP_HIT/200 5463 GET http://v4.windowsupdate.microsoft.com/shared/css/content.css - NONE/- text/css Tue Dec 2 15:30:38 2003 200 10.10.14.113 TCP_HIT/200 2054 GET http://v4.windowsupdate.microsoft.com/shared/css/mstoolbar.css - NONE/- text/css Tue Dec 2 15:30:38 2003 166 10.10.14.113 TCP_HIT/200 449 GET http://v4.windowsupdate.microsoft.com/shared/images/mstoolbar_curve.gif - NONE/- image/gif Tue Dec 2 15:30:38 2003
Re: [squid-users] redirecting transparently to few different ports based on URL or domain name
And what is your httpd_accel_single_host setting? If this redirection does not work then I suspect you have set httpd_accel_single_host to on which will force Squid to always contact httpd_accel_host:httpd_accel_port no matter what is indicated by the URL. Regards Henrik On Thu, 11 Dec 2003, Rami Jaamour wrote: well, yes, I see what you are saying, I thought that I can do that, but here is my redirector: #!/usr/bin/perl $|=1; while () { [EMAIL PROTECTED]://soaptest.parasoft.com/[EMAIL PROTECTED]://soaptest.parasoft.com/glue/calculator-01.wsdl@; [EMAIL PROTECTED]://soaptest.parasoft.com/glue/[EMAIL PROTECTED]://soaptest.parasoft.com:8000/glue/calculator@; print; } The first one work fine, but the when there are port changes like the second one it does not work, I just get a 404 from apache on 81! Note that there is another HTTP server listening on http://soaptest.parasoft.com:8000 different than the apach on 80 which Squid is hooked into. Rami Henrik Nordstrom wrote: On Thu, 11 Dec 2003, Rami Jaamour wrote: I am running Squid on port 80, proxing to Apache httpd on 81, following the FAQs. I could use a redirector successfully to redirect from one URL to another on the same host and port, however, I'd like Squid and/or it's redirector to be able to route requests based on URL (ideally) or at least based on the sub domain name to a specific port on the same local machine it is running on. The redirector is free to do whatever you please as long as you do not enable httpd_accel_single_host. Via the redirector interface you can divide the URL space of a single domain (or multiple, your choice) in any way you please among different web servers, while at the same time applying any transformations of the URL-path you may need. You can for example send all requests for .jpg images to one server, requests for .nfs files to another (Notes) server, move things around if needed, etc etc. 
When you are not using httpd_accel_single_host then the reverse-proxying process is roughly 1. Read the request and reconstruct a full URL based on httpd_accel_host and httpd_accel_uses_host_header settings. 2. http_access access controls 3. Send the request to the redirector 4. Fetch the URL as returned by the redirector. Regards Henrik
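To round off the thread: with httpd_accel_single_host off, a port-rewriting redirector along the following lines should do what Rami wants. This is a Python sketch of the same idea as his Perl helper; the URL prefixes and ports are the ones from the thread, while the function names and the demo request line are illustrative:

```python
import io
import sys

# URL-prefix -> backend rewrites (taken from the thread's example setup).
REWRITES = [
    ("http://soaptest.parasoft.com/ws1", "http://soaptest.parasoft.com:8000/ws1"),
    ("http://soaptest.parasoft.com/ws2", "http://soaptest.parasoft.com:8001/ws2"),
]

def rewrite(url):
    """Map a requested URL onto its backend; unmatched URLs pass through."""
    for prefix, target in REWRITES:
        if url.startswith(prefix):
            return target + url[len(prefix):]
    return url

def redirector_loop(infile, outfile):
    """Classic Squid redirector protocol: one request per input line (URL is
    the first field), one rewritten URL per output line, flushed immediately
    because Squid expects an unbuffered reply to every request."""
    for line in infile:
        fields = line.split()
        outfile.write((rewrite(fields[0]) if fields else "") + "\n")
        outfile.flush()

# Wire-level demo without a real Squid: feed one request line through.
out = io.StringIO()
redirector_loop(io.StringIO("http://soaptest.parasoft.com/ws2/quote GET\n"), out)
print(out.getvalue().strip())  # http://soaptest.parasoft.com:8001/ws2/quote
```

In a real deployment the loop would run over sys.stdin/sys.stdout and be wired up with redirect_program in squid.conf; the unbuffered, line-per-request behaviour is the same contract Rami's $|=1 Perl script follows.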