[squid-users] Unable to download files over 2GB of size
Hi all,

I am facing issues downloading files which are more than 2 GB in size. We tried downloading an ISO DVD image of more than 2 GB, which fails via the Squid proxy, but I am able to download the same file if I bypass the proxy server. It doesn't display any errors on the screen; instead the download ends after just 1 KB. I am using squid-2.5.STABLE13.

Regards,
Sathyan Arjunan
Unix Support | +1 408-962-2500 Extn : 22824
Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any such correspondence to ensure your email is replied to in a timely manner
[squid-users] Squid reverse proxy AND proxy
Hi,

I use SquidNT as a proxy on a Win 2000 server with no problem. Is it possible to use Squid as a proxy AND a reverse proxy on the same machine, with the same configuration file?

Thanks.
___
Jérôme
Re: [squid-users] Squid reverse proxy AND proxy
Hi, yes, you can. See the website below for more information: http://wiki.squid-cache.org/SquidFaq/ReverseProxy

Regards,
Kenny

- Original Message -
From: Jerome [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Thursday, May 03, 2007 04:42 PM
Subject: [squid-users] Squid reverse proxy AND proxy

Hi, I use SquidNT as a proxy on a Win 2000 server with no problem. Is it possible to use Squid as a proxy AND a reverse proxy on the same machine, with the same configuration file? Thanks. ___ Jérôme
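[Editorial note: for reference, a minimal sketch of how the two roles can coexist in one squid.conf, using Squid 2.6 accelerator syntax. The site name and origin address are hypothetical, and SquidNT builds of 2.5 use the older httpd_accel_* directives instead, so treat this as an outline rather than a drop-in config.]

```
# Forward-proxy port for browser clients on the LAN
http_port 3128

# Reverse-proxy (accelerator) port for the published site
# (hypothetical site name and origin server address)
http_port 80 accel defaultsite=www.example.com vhost
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
```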
[squid-users] squid reverse proxy and UTM urchin stats
I know that this is a question more related to the Urchin software, but maybe someone on this list has successfully configured Urchin (UTM enabled) with a squid reverse proxy configuration. The Urchin UTM installation is available only for Apache and IIS, http://www.google.com/support/urchin45/bin/topic.py?topic=7355 but it would be helpful to have instructions for squid as a reverse proxy too.

The squid version we are playing with is squid-2.6-Stable12, with a common logformat modified such as:

logformat common %a %ui %un [%tl] %rm %ru HTTP/%rv %{Referer}h %{User-Agent}h %{Cookie}h

But it isn't working with Urchin. Meanwhile I'll ask Urchin support for a squid configuration. Has anyone configured Urchin UTM with squid as a reverse proxy?

Thanks in advance.
Emilio C.
Re: [squid-users] Unable to download files over 2GB of size
Sathyan, Arjonan wrote:
Hi all, I am facing issues downloading files which are more than 2 GB in size. We tried downloading an ISO DVD image of more than 2 GB, which fails via the Squid proxy, but I am able to download the same file if I bypass the proxy server. It doesn't display any errors on the screen; instead the download ends after just 1 KB. I am using squid-2.5.STABLE13. Regards, Sathyan Arjunan Unix Support | +1 408-962-2500 Extn : 22824 Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any such correspondence to ensure your email is replied to in a timely manner

Hi Sathyan,

This is a known issue in all versions of squid.
http://www.squid-cache.org/bugs/show_bug.cgi?id=437

Luckily one of the other developers happens to be working on a fix for it right now. If you are interested in testing, give him a yell.

Amos
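[Editorial note: the "fails at about 2 GB" symptom is the classic signature of a file size overflowing a 32-bit signed counter, which is what the bug report above concerns. A small illustration (not Squid code) of the wrap-around, assuming a 32-bit signed size field:]

```python
import ctypes

# A download slightly larger than 2 GB no longer fits in a signed
# 32-bit byte counter: the value wraps to a negative number, and any
# code comparing sizes against it misbehaves.
size = 2 * 1024**3 + 512                 # 2 GB + 512 bytes
wrapped = ctypes.c_int32(size).value     # simulate a 32-bit signed field
print(size, wrapped)                     # 2147484160 -2147483136
```

64-bit builds (or builds with a 64-bit off_t) avoid the wrap, which is why the same download can work outside the proxy.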
Re: [squid-users] cache_peer - multiple ones
Gareth Edmondson wrote:
Henrik Nordstrom wrote:
Tue 2007-05-01 at 23:41 +0100, Gareth Edmondson wrote:
Thanks for the advice here. I read about this name= option earlier in the archives - but I got the impression from previous posters that it was in version 3 of squid and not the stable version that ships with Debian Etch. The stable version is 2.6.5-6.

It's in 2.6 and later.

cache_peer_access sslproxy allow CONNECT
cache_peer_access sslproxy deny all
cache_peer_access original upstream name deny CONNECT
cache_peer_access original upstream name allow all

I'm not sure they are in the right order.

Looks fine. The order of cache_peer_access is important, but only per peer. The order of the peers is not important.

Everything seems to be working. However, when we try and connect to the 443 website it challenges us again for the AD username and password. Upon entering this the browser challenges us again and again and again - simply not letting us through.

One more thing, have you added trust between Squid and the peer for forwarding of proxy authentication? See the login option to cache_peer.

Regards
Henrik

Here is an extract of my access.log file - what is the difference between a HIT and a MISS in this scenario?

117813.463   0 127.0.0.1 TCP_HIT/200 506 GET http://communities.rm.com/forums/skins/communities/images/message_gradient_header.gif - NONE/- image/gif
117813.515  53 127.0.0.1 TCP_MISS/404 1952 GET http://communities.rm.com/favicon.ico - DEFAULT_PARENT/webcluster.education.swansea.sch.uk text/html
117815.152 111 127.0.0.1 TCP_MISS/302 1302 GET http://communities.rm.com/forums/member/default.aspx - DEFAULT_PARENT/webcluster.education.swansea.sch.uk text/html
117815.198   3 127.0.0.1 TCP_MISS/000 3112 CONNECT communities.rm.com:443 - DEFAULT_PARENT/proxyssl -
117818.229   3 127.0.0.1 TCP_MISS/000 3112 CONNECT communities.rm.com:443 - DEFAULT_PARENT/proxyssl -
117821.481   3 127.0.0.1 TCP_MISS/000 3112 CONNECT communities.rm.com:443 - DEFAULT_PARENT/proxyssl -

You can see clearly where I have attempted to access a 443 website - yet it still asks me to authenticate against the AD with my username and password. The problem must lie with my authentication modules.

GJE

Ah, check your squid.conf very carefully. The acls are checked in order, and if any acl before the 'http_access allow CONNECT' or 'http_access allow SSL_Ports' requires auth, then auth will be checked for. To get CONNECT through without auth you will need to place any acl requiring auth _after_ the allow CONNECT.

Amos
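[Editorial note: as an illustration of the ordering Amos describes, a hedged squid.conf sketch in which CONNECT is decided before any auth-requiring acl is evaluated. The acl names here are hypothetical, not taken from the poster's config:]

```
acl CONNECT method CONNECT
acl SSL_ports port 443
acl AuthUsers proxy_auth REQUIRED

http_access deny CONNECT !SSL_ports
http_access allow CONNECT SSL_ports   # CONNECT is settled here, no auth demanded
http_access allow AuthUsers           # auth is only required from this point on
http_access deny all
```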
Re: [squid-users] Transparent proxy testing from the proxy server
Leah Kubik wrote:
On Wednesday 02 May 2007 10:20, Henrik Nordstrom wrote:
2.5:
http_port 3128
httpd_accel_host virtual
httpd_accel_uses_host_header on
httpd_accel_with_proxy on

Actually, that's perfect. I thought that I had these parameters, but somewhere in my testing I must have done something to remove them. Unfortunately, this client runs CentOS 4, there are no 2.6 packages available for CentOS 4, and they are not inclined to use a source build or to upgrade at the moment. Thanks for putting up with me, Leah

Apparently the FedoraCore packages build on CentOS; you might give them a try to keep yourself up to date.

Amos
[squid-users] squid 3.0 won't cache
hello, i have a problem: squid is not caching (storing) anything. here is my squid.conf:

http_port 86.104.87.228:8080 accel vhost vport=80
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
acl flash urlpath_regex -i \*.swf
no_cache deny QUERY
log_fqdn off
request_header_max_size 20 KB
redirect_rewrites_host_header off
maximum_object_size 50 MB
cache_dir ufs /depo/cache 35 32 215
cache_access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
cache_effective_user squid
cache_effective_group squid

the acl's work:
always_direct allow clients
http_access allow all clients
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
visible_hostname xxx.x.xx.x

anybody had this problem? 10x
Re: [squid-users] squid 3.0 won't cache
[EMAIL PROTECTED] wrote:
hello i have a problem not caching (storing) anything here is my squid.conf:
http_port 86.104.87.228:8080 accel vhost vport=80
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
acl flash urlpath_regex -i \*.swf
no_cache deny QUERY
log_fqdn off
request_header_max_size 20 KB
redirect_rewrites_host_header off
maximum_object_size 50 MB
cache_dir ufs /depo/cache 35 32 215
cache_access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
cache_effective_user squid
cache_effective_group squid
the acl's work
always_direct allow clients
http_access allow all clients
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
visible_hostname xxx.x.xx.x
anybody had this problem ? 10x

Two questions:
What permissions are set on the /depo/cache directory?
What do cache.log and store.log contain?

Amos
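[Editorial note: for the first question, a rough checklist of commands. The /depo/cache path and the "squid" user come from the posted squid.conf; run the ownership changes as root and adjust to your own layout. This is a sketch, not a verified procedure:]

```
ls -ld /depo/cache             # owner should match cache_effective_user
chown -R squid:squid /depo/cache
chmod 750 /depo/cache
squid -z                       # (re)create the swap directories once
tail /var/log/squid/cache.log  # look for permission / "Failed to verify" errors
```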
Re: [squid-users] Cant get redirect gre and wccp and cisco 3600 and 7200 and debian and centos
Wed 2007-05-02 at 17:28 -0300, Facundo Vilarnovo wrote:
packets are coming in over the gre tunnel, i am seeing them with the tcpdump command

Have you disabled rp_filter on the gre tunnel interface? (See FAQ)

Regards
Henrik

signature.asc Description: This is a digitally signed message part
Re: [squid-users] Authentication Override
Wed 2007-05-02 at 18:41 -0400, Brian Kirk wrote:
We have a need for an authentication override for NTLM

The following should work:

acl generic_user proxy_auth genericusername
http_access deny generic_user

placed after where you allow access.

Note: http_access is sensitive to ordering. The first matching rule is used, the rest ignored. So your rules (both allowing and denying) should go after the CONNECT and Safe_Ports stuff, just before the deny all.

Regards
Henrik
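[Editorial note: putting Henrik's fragments together, the overall ordering could look like this sketch. The acl names other than generic_user are hypothetical, and Safe_ports/SSL_ports are assumed to be defined as in the default config:]

```
acl generic_user proxy_auth genericusername
acl authed proxy_auth REQUIRED

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny generic_user    # block the shared account first
http_access allow authed         # everyone else authenticates as usual
http_access deny all
```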
Re: [squid-users] Reverse Proxy and Authentication
Thu 2007-05-03 at 10:39 +1200, Jo Pitts wrote:
I've googled, trolled through the squid archives, and experimented, and clearly I'm just not very clever, because I cannot seem to get authentication to work. I've found a lot of half answers but nothing that is clear enough for me (an admitted non-guru)

http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-c59962b21bb8e2a437beb149bcce3190ee1c03fd

Regards
Henrik
Re: [squid-users] Proper Access ACLs
Wed 2007-05-02 at 16:00 -0700, Michael Puckett wrote:
a path outside to reach the server external.com. Will the following configuration directives route requests to external.com ONLY through extern-proxy.mydomain while keeping all other requests inside my own domain? Is this the correct way to do this, or is there another recommendation for configuring for this case?

cache_peer extern-proxy.mydomain parent 8181 5151 no-query no-digest
acl OUTSIDE dstdomain external.com
cache_peer_access allow OUTSIDE
cache_peer_access deny all

Ok.

always_direct allow all
never_direct deny all

Not ok. Says Squid should always go direct, ignoring whatever cache_peer you have. Should probably be just

never_direct allow OUTSIDE

with no always_direct rule specified at all, or a deny all rule if you like (it's the default).

Regards
Henrik
Re: [squid-users] Unable to download files over 2GB of size
Thu 2007-05-03 at 00:09 -0700, Sathyan, Arjonan wrote:
I am facing issues downloading files which are more than 2 GB in size. We tried downloading an ISO DVD image of more than 2 GB, which fails via the Squid proxy, but I am able to download the same file if I bypass the proxy server. It doesn't display any errors on the screen; instead the download ends after just 1 KB. I am using squid-2.5.STABLE13.

Example URL? http or ftp?

Regards
Henrik
Re: [squid-users] Unable to download files over 2GB of size
Thu 2007-05-03 at 22:49 +1200, Amos Jeffries wrote:
Hi Sathyan, This is a known issue in all versions of squid. http://www.squid-cache.org/bugs/show_bug.cgi?id=437

Not all.. it was fixed in Squid-2 for the 2.5.STABLE10 release.

Luckily one of the other developers happens to be working on a fix for it right now. If you are interested in testing give him a yell out.

That's Squid-3.. there are no known issues with 2GB objects in 2.5.STABLE10 or later, but it's also not very frequently tested; additionally, my box where most Squid-2 testing is done is a 64-bit box, which gets away with most of these silly limitations..

Regards
Henrik
RE: [squid-users] Unable to download files over 2GB of size
URL: https://h20293.www2.hp.com/ecommerce/efulfillment/getReceipt.do?orderNumber=361694666

During download, use the below e-mail address: [EMAIL PROTECTED]

Regards,
Sathyan Arjunan
Unix Support | +1 408-962-2500 Extn : 22824
Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any such correspondence to ensure your email is replied to in a timely manner

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 03, 2007 5:33 PM
To: Sathyan, Arjonan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Unable to download files over 2GB of size

Thu 2007-05-03 at 00:09 -0700, Sathyan, Arjonan wrote:
I am facing issues downloading files which are more than 2 GB in size. We tried downloading an ISO DVD image of more than 2 GB, which fails via the Squid proxy, but I am able to download the same file if I bypass the proxy server. It doesn't display any errors on the screen; instead the download ends after just 1 KB. I am using squid-2.5.STABLE13.

Example URL? http or ftp?

Regards
Henrik
RE: [squid-users] Unable to download files over 2GB of size
Henrik,

there is no known issues with 2GB objects with 2.5.STABLE10 or later

But I use squid-2.5.STABLE13, which has this problem. So could you please elaborate on the parameters which we need to use to avoid this issue...

Regards,
Sathyan Arjunan
Unix Support | +1 408-962-2500 Extn : 22824
Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any such correspondence to ensure your email is replied to in a timely manner
Re: [squid-users] block https? (again)
... Help me understand. Help me understand in what context I said this was not possible.

Oops. I mixed up Squid with DansGuardian. Mea culpa.

-Chuck Kollars
Re: [squid-users] Proper Access ACLs
Henrik Nordstrom wrote:
Wed 2007-05-02 at 16:00 -0700, Michael Puckett wrote:
a path outside to reach the server external.com. Will the following configuration directives route requests to external.com ONLY through extern-proxy.mydomain while keeping all other requests inside my own domain? Is this the correct way to do this, or is there another recommendation for configuring for this case?

cache_peer extern-proxy.mydomain parent 8181 5151 no-query no-digest
acl OUTSIDE dstdomain external.com
cache_peer_access allow OUTSIDE
cache_peer_access deny all

Ok.

Thank you...

always_direct allow all
never_direct deny all

Not ok. Says Squid should always go direct, ignoring whatever cache_peer you have. Should probably be just

never_direct allow OUTSIDE

with no always_direct rule specified at all, or a deny all rule if you like (it's the default).

So this then says that OUTSIDE should never go direct, I understand, with the implication that everything else is always direct? What tells everything else to go direct? What would get the default deny all? Would that be

never_direct deny all

or

always_direct deny all

Regards
-mikep
[squid-users] NTLM + Squid - No NTLM Header being sent
I am trying to get Squid and NTLM working together. I've looked at several guides, and the same thing happens with all of them. Whenever I try to access a page (using IE6 - should support NTLM), I get a dialog box asking for my username and password - which, if provided, authenticates me and I can browse the site. I've used ethereal to capture the conversation and noticed that there isn't an NTLM authenticate header, as seen below:

HTTP/1.0 407 Proxy Authentication Required
Server: squid/2.5.STABLE12
Mime-Version: 1.0
Date: Tue, 01 May 2007 01:10:37 GMT
Content-Type: text/html
Content-Length: 1322
Expires: Tue, 01 May 2007 01:10:37 GMT
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: Basic realm=Squid proxy-caching web server
X-Cache: MISS from proxy.domain.local
X-Cache-Lookup: NONE from proxy.domain.local:3128
Proxy-Connection: close

Here are the contents of my squid.conf file (minus all the comments and blank space) - which according to the guides I've seen should be enough to do NTLM.

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hosts_file /etc/hosts
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl lcl src 192.168.0.0/16
acl SSL_ports port 443 # 563 # https, snews
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # 563 # https, snews
acl NTAuth proxy_auth REQUIRED
http_access allow lcl NTAuth
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
http_port 127.0.0.1:3128
http_port 192.168.1.241:3128
cache_mgr [EMAIL PROTECTED]
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
coredump_dir /var/spool/squid
visible_hostname proxy.domain.local
httpd_accel_host virtual

Thanks
-Mike
[squid-users] Java + NTLM Authentication through Squid question...
I've been using NTLM Authentication through squid for a long while now and love it. Recently a Java app has wanted to use it as well (using browser settings to determine proxy settings), yet when I enter data in the username, password and domain fields, it never correctly authenticates. Is Java sending different NTLM fields to Squid than what IE normally sends? Is there a different helper protocol that would work better than what I've currently got set in squid.conf:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

I've googled and found this work-around, but was wondering if there was something that wasn't so easily forgeable by the clients:

acl Java browser Java/1.4 Java/1.5
http_access allow localhost Java
[squid-users] Squid exited on signal 6 (core dumped)
Dears,

I have a squid proxy server on FreeBSD 6.2-STABLE, with kernel tuning for best squid performance. Recently I detected abnormal situations in my daemon (squid); looking at the logs, I have the messages below.

2007/05/03 14:27:53| comm_accept: FD 11: (53) Software caused connection abort
2007/05/03 14:27:53| httpAccept: FD 11: accept failure: (53) Software caused connection abort
2007/05/03 14:27:53| comm_accept: FD 11: (53) Software caused connection abort
2007/05/03 14:27:53| httpAccept: FD 11: accept failure: (53) Software caused connection abort
2007/05/03 14:27:53| comm_accept: FD 11: (53) Software caused connection abort
2007/05/03 14:27:53| httpAccept: FD 11: accept failure: (53) Software caused connection abort
2007/05/03 14:27:53| assertion failed: diskd/store_io_diskd.c:384: !diskdstate->flags.close_request
2007/05/03 14:28:24| Starting Squid Cache version 2.6.STABLE12 for i386-portbld-freebsd6.2...
2007/05/03 14:28:24| Process ID 44697
2007/05/03 14:28:24| With 11072 file descriptors available
2007/05/03 14:28:24| Using kqueue for the IO loop
2007/05/03 14:28:24| DNS Socket created at 0.0.0.0, port 63377, FD 5
2007/05/03 14:28:24| Adding nameserver 200.195.58.4 from /etc/resolv.conf
2007/05/03 14:28:24| Unlinkd pipe opened on FD 9
2007/05/03 14:28:24| Swap maxSize 5120 KB, estimated 3938461 objects
2007/05/03 14:28:24| Target number of buckets: 196923
2007/05/03 14:28:24| Using 262144 Store buckets
2007/05/03 14:28:24| Max Mem size: 524288 KB
2007/05/03 14:28:24| Max Swap size: 5120 KB
2007/05/03 14:28:24| Store logging disabled
2007/05/03 14:28:24| Rebuilding storage in /usr/local/squid/cache (DIRTY)
2007/05/03 14:28:24| Using Least Load store dir selection
2007/05/03 14:28:24| Set Current Directory to /usr/local/squid/cache/
2007/05/03 14:28:24| Loaded Icons.
2007/05/03 14:28:24| Accepting transparently proxied HTTP connections at 0.0.0.0, port 3128, FD 11.
2007/05/03 14:28:24| Accepting ICP messages at 0.0.0.0, port 3130, FD 12.
2007/05/03 14:28:24| WCCP Disabled.
2007/05/03 14:28:24| Ready to serve requests.
2007/05/03 14:28:24| Store rebuilding is 1.9% complete
2007/05/03 14:28:27| WARNING: unparseable HTTP header field {POST /qzwxec HTTP/1.0}
2007/05/03 14:28:29| comm_accept: FD 11: (53) Software caused connection abort
2007/05/03 14:28:29| httpAccept: FD 11: accept failure: (53) Software caused connection abort

In another log, I have:

pid 44470 (squid), uid 100: exited on signal 6 (core dumped)

[about signal 6]
#define SIGABRT 6 /* abort() */
The SIGABRT signal provides a way to abort a process and create a core dump. If this is caught and the signal handler does not return, however, the program will not terminate. The default action is to terminate the process and create a core dump.

Analyzing the core dump file:

srv-aol-04# gdb /usr/local/sbin/squid /usr/local/squid/cache/squid.core
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type show copying to see the conditions. There is absolutely no warranty for GDB. Type show warranty for details.
This GDB was configured as i386-marcel-freebsd...(no debugging symbols found)...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.
Reading symbols from /lib/libcrypt.so.3...(no debugging symbols found)...done.
Loaded symbols for /lib/libcrypt.so.3
Reading symbols from /lib/libm.so.4...(no debugging symbols found)...done.
Loaded symbols for /lib/libm.so.4
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols found)...done.
Loaded symbols for /libexec/ld-elf.so.1
#0 0x48206363 in kill () from /lib/libc.so.6

I made some attempts, such as:
- recompiling my squid;
- recompiling my kernel (note: i use this kernel config in several squid servers);
- cleaning my cache and re-making it.

My dmesg below:

Copyright (c) 1992-2007 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 6.2-STABLE #2: Fri Apr 27 11:46:32 BRT 2007 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/BYALNET
ACPI APIC Table: A M I OEMAPIC
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: AMD Sempron(tm) Processor 2800+ (1607.83-MHz 686-class CPU)
Origin = AuthenticAMD Id = 0x10fc0 Stepping = 0
Features=0x78bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2>
AMD Features=0xe2500800<SYSCALL,NX,MMX+,FFXSR,LM,3DNow+,3DNow>
AMD Features2=0x1<LAHF>
real memory = 1073479680 (1023 MB)
avail memory = 1041494016 (993 MB)
ioapic0 Version 1.1 irqs 0-23 on motherboard
kbd1 at kbdmux0
Re: [squid-users] cache_peer - multiple ones
Hi Amos

Thanks for that. The lines are as follows:

#TAG: cache_peer_access
cache_peer_access proxyssl allow CONNECT
cache_peer_access proxyssl deny all
cache_peer_access upstreamproxyaddress deny CONNECT
cache_peer_access upstreamproxyaddress allow all

As for the cache_peer lines, they are as follows:

#TAG: cache_peer
cache_peer upstreamproxyaddress parent 8080 7 no-digest no-query no-net-db-exchange default login=username:password
cache_peer proxyssl parent 443 no-digest no-query no-net-db-exchange default login=username:password

Where username and password are our values. proxyssl is defined in the hosts file because I don't quite understand how to use the name= tag in Squid (I must read up about it).

From some tests we have run, we can tell that the Squid proxy is not sending the proxy authorisation headers (username and password) to the upstream SSL proxy. I'm assuming this is due to a configuration error. The passwords for the two proxies (8080 and 443) are the same as they always have been.

Can anyone glean anything from that?

Cheers
Gareth
Re: [squid-users] Proper Access ACLs
Thu 2007-05-03 at 08:03 -0700, Michael Puckett wrote:
Should probably be just never_direct allow OUTSIDE with no always_direct rule specified at all, or a deny all rule if you like (it's the default).
So this then says that OUTSIDE should never go direct, I understand, with the implication that everything else is always direct? What tells everything else to go direct?

The fact that there is no peer it may go via..

What would get the default deny all? Would that be never_direct deny all or always_direct deny all

Default for both directives is deny all.

Squid first checks always_direct. If allow, it goes direct and does not look any further to find a path where to send the request.

Then it checks never_direct. If allow, then it knows going direct is not an option.

Then it selects and weights the available paths (cache_peers + direct).

Regards
Henrik
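[Editorial note: the order Henrik describes can be sketched in Python, as a simplification for illustration only - this is not Squid's actual code, and the weighting of the surviving paths is omitted:]

```python
def candidate_paths(request, peers, always_direct, never_direct):
    """Return the paths Squid would consider for a request.

    always_direct / never_direct are predicates standing in for the
    acl checks of the corresponding squid.conf directives; peers stand
    in for cache_peer entries with their cache_peer_access rules.
    """
    # 1. always_direct is checked first: on a match, Squid goes
    #    direct and looks no further.
    if always_direct(request):
        return ["DIRECT"]
    # 2. Collect peers whose access rules allow this request.
    paths = [peer for peer in peers if peer.allows(request)]
    # 3. never_direct only removes DIRECT from the candidate set.
    if not never_direct(request):
        paths.append("DIRECT")
    return paths
```

With "never_direct allow OUTSIDE" and no always_direct rule, an OUTSIDE request keeps only the cache_peer as a candidate, while every other request keeps DIRECT.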
Re: [squid-users] Squid exited on signal 6 (core dumped)
Thu 2007-05-03 at 14:44 -0300, Felippe de Meirelles Motta wrote:
2007/05/03 14:27:53| assertion failed: diskd/store_io_diskd.c:384: !diskdstate->flags.close_request

Squid release notes:

3. Known issues

There are a few known issues in this version of Squid which we hope to correct in a later release

* Bug #761: Unstable under load when using diskd
url:http://www.squid-cache.org/bugs/show_bug.cgi?id=761

Regards
Henrik
RE: [squid-users] NTLM + Squid - No NTLM Header being sent
I just tried using the same config, but commenting out the auth_param basic lines. Instead of being asked for a password this time, I only get a cache access denied page. An ethereal snoop of the http response from squid shows the following:

HTTP/1.0 407 Proxy Authentication Required
Server: squid/2.5.STABLE12
Mime-Version: 1.0
Date: Thu, 03 May 2007 18:53:16 GMT
Content-Type: text/html
Content-Length: 1322
Expires: Thu, 03 May 2007 18:53:16 GMT
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
X-Cache: MISS from proxy.domain.local
X-Cache-Lookup: NONE from proxy.domain.local:3128
Proxy-Connection: close

Notice that there aren't any Proxy-Authenticate: ... lines that tell IE what kind of authentication to attempt, even though the only configured authentication type is NTLM.

-Mike

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 03, 2007 2:45 PM
To: Mike Poublon
Subject: Re: [squid-users] NTLM + Squid - No NTLM Header being sent

On Thursday 03 May 2007 12:09 pm, Mike Poublon wrote:
Whenever I try to access a page (using IE6 - should support NTLM), I get a dialog box asking for my username and password - which if provided authenticates me and I can browse the site.

I'm pretty sure that what you did was use *basic* auth and validate the creds using NTLM. That's not the same thing as NTLM auth! See:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

All those basic auth_params are what's happening (and it's working because the basic auth program is /usr/bin/ntlm_auth).

Mordy
--
Mordy Ovits
Network Security
Bloomberg L.P.
Re: [squid-users] Authentication Override
Ok, I have been trying various configurations in my squid.conf; I am sure that I was over-complicating the issue. Here is a stripped down version. I would like to fall back to basic if NTLM fails, but it never drops down to basic authentication. I am probably putting a lot more in this than I need to get my point across, but if I log into a machine locally and try to get to the Internet it prompts me, but doesn't seem to have the realm correct or use basic authentication. We have multiple domains, and when we use auth_param basic program /opt/samba/bin/ntlm_auth --helper-protocol=squid-2.5-basic users have to know their domain, and some of our users aren't that bright:

cache_peer firewall.domain.com parent 8080 0 no-query default
emulate_httpd_log on
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of={SID of our Internet Group}
auth_param ntlm children 5
#auth_param basic program /opt/samba/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic program /opt/squid/libexec/squid_ldap_auth -R -b DC=domain,DC=com -D cn=Squid,OU=Service Accounts,DC=hdq,DC=domain,DC=com -w xx -f sAMAccountName=%s -h directory.hdq.domain.com -p 3268
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl all src 0.0.0.0/0.0.0.0
acl authenticated_users proxy_auth REQUIRED
never_direct allow all
http_access allow authenticated_users
http_access deny all
http_reply_access allow all
icp_access allow all
[squid-users] Really transparent proxy
Hello squid users!

I don't know if there's any post about this, but maybe not... Does anyone know if there's any way of making squid truly transparent to those pages that tell you what your IP is? For example, right now I am behind my transparent squid with wccp, and if I go to a site like http://www.adsl4ever.com/ip/ it shows my IP address, and also tells me that I am behind a proxy. Like I said before, I don't have any explicit configuration in my browser that points to the squid.

PS: I also tried other pages like this... the same happens!

Regards
Facundo
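[Editorial note: Squid normally reveals itself through the Via and X-Forwarded-For request headers, which is what such "you are behind a proxy" pages typically detect. A hedged sketch for Squid 2.6 - directive names differ in other versions, so check the squid.conf documentation for your release:]

```
forwarded_for off                      # don't insert the client IP in X-Forwarded-For
header_access Via deny all             # don't announce the proxy in the Via header
header_access X-Forwarded-For deny all
```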
Re: [squid-users] Java + NTLM Authentication through Squid question...
On Thu, May 03, 2007, Ian wrote:
I've been using NTLM Authentication through squid for a long while now and love it. Recently a Java app has wanted to use it as well (using browser settings to determine proxy settings), yet when I enter data in the username, password and domain fields, it never correctly authenticates. Is Java sending different NTLM fields to Squid than what IE normally sends? Is there a different helper protocol that would work better than what I've currently got set in squid.conf:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

I've googled and found this work-around, but was wondering if there was something that wasn't so easily forgeable by the clients:

acl Java browser Java/1.4 Java/1.5
http_access allow localhost Java

Could you please capture a packet trace of the failing NTLM auth, and the relevant Squid/NTLM auth helper debugging?

Thanks,
Adrian
Re: [squid-users] cache_peer - multiple ones
Gareth Edmondson wrote: Hi Amos, thanks for that. The lines are as follows:

#TAG: cache_peer_access
cache_peer_access proxyssl allow CONNECT
cache_peer_access proxyssl deny all
cache_peer_access upstreamproxyaddress deny CONNECT
cache_peer_access upstreamproxyaddress allow all

As for the cache_peer lines, they are as follows:

#TAG: cache_peer
cache_peer upstreamproxyaddress parent 8080 7 no-digest no-query no-net-db-exchange default login=username:password
cache_peer proxyssl parent 443 no-digest no-query no-net-db-exchange default login=username:password

Where username and password are our values. proxyssl is defined in the hosts file because I don't quite understand how to use the name= tag in Squid (I must read up about it).

That would be the reason you are being prompted for a password a second time. Squid has no way of knowing that these are the same upstream proxy. What you want to do is:

cache_peer upstreamproxyaddress parent 8080 7 no-digest no-query no-net-db-exchange default login=username:password name=proxy
cache_peer upstreamproxyaddress parent 443 7 no-digest no-query no-net-db-exchange default login=username:password name=proxyssl
cache_peer_access proxyssl allow CONNECT
cache_peer_access proxyssl deny all
cache_peer_access proxy deny CONNECT
cache_peer_access proxy allow all

...which informs Squid that even though both peer definitions use the same machine, they have different purposes, and defines what those purposes are.

From some tests we have run, we can tell that the Squid proxy is not sending the proxy-authorisation headers (username and password) to the upstream SSL proxy. I'm assuming this is due to a configuration error. The passwords for the two proxies (8080 and 443) are the same as they always have been. Can anyone glean anything from that? Cheers Gareth

Chris
Re: [squid-users] squid reverse proxy and UTM urchin stats
Emilio Casbas wrote: I know that this question relates more to the Urchin software, but maybe someone on this list has successfully configured Urchin (UTM enabled) with a Squid reverse proxy. The Urchin UTM installation is documented only for Apache and IIS, http://www.google.com/support/urchin45/bin/topic.py?topic=7355 but it would be helpful to have instructions for Squid as a reverse proxy too.

From http://www.google.com/support/urchin45/bin/answer.py?answer=28710...

The second important function of the UTM Sensor is to uniquely identify both sessions and unique visitors. Through a patent-pending combination of browser cookies, the Sensor detects and initializes the unique visitor and session identifiers allowing exact monitoring of new and returning visitors regardless of service provider proxy behavior. Most service providers take advantage of proxying by recycling IP addresses and clustering users behind firewalls. This can cause problems with normal logfile tracking, which typically utilizes the IP address as an identifier of the user.

...so your accelerator setup should have no effect on Urchin's analysis of your Apache logs.

The Squid version we are playing with is squid-2.6.STABLE12, with a common logformat modified as:

logformat common %a %ui %un [%tl] %rm %ru HTTP/%rv %{Referer}h %{User-Agent}h %{Cookie}h

But it isn't working with Urchin. Meanwhile I'll ask Urchin support for a Squid configuration. Has anyone configured Urchin UTM with Squid as a reverse proxy?

I have not, and given the information above, wouldn't bother trying. *shrug*

Thanks in advance. Emilio C.

Chris
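[Editor's note: one possible cause of the mismatch is that log analyzers built for Apache usually expect the NCSA combined format, with quoted request/Referer/User-Agent fields plus status and size, which the logformat above omits. A sketch of a combined-style Squid 2.6 logformat, based on the standard "combined" definition (the log path is an assumption, and the exact fields Urchin needs may differ):]

```
# NCSA combined-style log line, as Apache-oriented tools expect it:
# client ident user [time] "request" status bytes "referer" "user-agent"
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h"
access_log /var/log/squid/access.log combined
```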
Re: [squid-users] Really transparent proxy
Facundo Vilarnovo wrote: Does anyone know of a way to make Squid truly transparent to those pages that tell you what your IP is? Right now I am behind my transparent Squid with WCCP, and a site like http://www.adsl4ever.com/ip/ shows my IP address and also tells me that I am behind a proxy. [snip]

See http://www.squid-cache.org/mail-archive/squid-users/200604/0013.html and the response at http://www.squid-cache.org/mail-archive/squid-users/200604/0014.html

In short:

header_access Via deny all
header_access X-Forwarded-For deny all

Chris
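[Editor's note: the header_access directive above is the Squid 2.x spelling. In later Squid releases (3.1 and newer) that single directive was split into request- and reply-side variants, so the equivalent there would be along these lines (a sketch, not tested against every 3.x release):]

```
# Squid 3.1+ equivalent of "header_access ... deny all" on requests:
request_header_access Via deny all
request_header_access X-Forwarded-For deny all
```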
Re: [squid-users] Really transparent proxy
On Thu, May 03, 2007, Chris Robertson wrote:

Facundo Vilarnovo wrote: Does anyone know of a way to make Squid truly transparent to those pages that tell you what your IP is? [snip]

http://www.squid-cache.org/mail-archive/squid-users/200604/0013.html and the response at http://www.squid-cache.org/mail-archive/squid-users/200604/0014.html

In short:

header_access Via deny all
header_access X-Forwarded-For deny all

And check out TPROXY and Squid-2.6. It's supported in squid-3, but some features have yet to be ported. Adrian