[squid-users] Forwarding User Authentication parameters to Web GW
Dear All,

Instead of sending only the Squid proxy IP to the Web GW with the following line in squid.conf:

  cache_peer 172.xxx.yyy.zzz parent 8080 7 no-query round-robin

is it possible to also forward authentication parameters, such as the username and the user's IP, to the Web GW?

--
Regards,
Sagar Navalkar.
[squid-users] Memory and CPU usage squid-3.1.4
Hello list,

I have a question regarding the change in memory and CPU usage from 3.0 to 3.1. I have 4 forward proxies with ICAP (c-icap and clamav) and NTLMv2 authentication. All four proxies each handle about 200-400 req/sec on RedHat AS5 64-bit servers with 16GB of memory each, for about 15k to 30k users. cache_mem is set to 3.5 GB, there is no disk cache, and some ACLs are used.

With 3.0.STABLE23 the typical memory usage was about 7 GB with CPU usage of about 30-40% for the main squid process, not counting the helpers or other processes. With 3.1.4 the proxies use up to 2GB more memory, and CPU usage rose at times to 60% or 80%. After a restart of one of the proxies it has behaved normally since then.

Status (CPU usage, client HTTP out and requests per second are all from the last-5-minute info):

  Proxy1: 36% CPU, 7.2 GB mem, 3.3 MB/s client out, 341 req/s, Version 3.1.4
  Proxy2: 77% CPU, 8.6 GB mem, 2.6 MB/s client out, 384 req/s, Version 3.1.4
  Proxy3: 53% CPU, 8.8 GB mem, 6.8 MB/s client out, 402 req/s, Version 3.1.4
  Proxy4: 32% CPU, 6.8 GB mem, 2.7 MB/s client out, 405 req/s, Version 3.0.STABLE23

The configuration of the proxies is the same, apart from the changes needed for 3.1, mainly the icap config. Is there some kind of memory leak that additionally causes massive CPU usage, could it be load related, or is this normal behaviour?

Martin
Re: [squid-users] Memory and CPU usage squid-3.1.4
martin.pichlma...@continental-corporation.com wrote:
> [original question quoted in full; see the previous message]

With an explicit cache_mem there should be no difference between the two.

Maybe: ICAP needs to buffer traffic twice as much as normal. One buffer queue to the ICAP server, one to the client.

Maybe: with HTTP/1.1 being advertised now, if you have ignore_expect_100 turned on you can see the number of waiting clients rise. These are active but 'hung' connections which waste more resources until the client times out and continues.

Maybe: there are some known leaks in 3.0/3.1 auth. But that does not account for the extra CPU unless it contributes to making the box swap memory pages.

In general 3.1.4 has a lot more memory fixes than 3.0, which are supposed to cause less resource waste. Earlier 3.1.x default memory features were broken on 64-bit. Check whether turning memory pools on/off has any good effect.

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.4
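To try the memory-pools suggestion above, the toggle might look like the sketch below. This is a hedged example, not a tested config: memory_pools and memory_pools_limit are standard squid.conf directives, but the limit value shown is only a placeholder.

```
# squid.conf: disable Squid's internal memory pooling so freed
# memory is returned to the system allocator immediately
memory_pools off

# alternatively, leave pools on but cap how much idle memory
# they may hold (value here is just an example)
# memory_pools on
# memory_pools_limit 512 MB
```

After a reconfigure, the effect can be watched over time via the cache manager, e.g. `squidclient mgr:mem` or `squidclient mgr:info`.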
[squid-users] sarg index.html
I accidentally deleted the sarg index.html file located under /var/www/squid-reports/. That index.html links to the Daily, Monthly and Weekly reports. Is there a way to regenerate it?

Thanks,
Kaushal
Re: [squid-users] Forwarding User Authentication parameters to Web GW
Sagar wrote:
> [original question quoted in full; see the first message in this thread]

Passing username details:
  cache_peer ... login=PASS

Passing client IP:
  forwarded_for on
  via on

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.4
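Combined with the cache_peer line from the original question, the two suggestions might look like this in squid.conf. A sketch, not a tested config; the peer address is the masked one from the question:

```
# forward the proxy-authenticated username to the parent peer
cache_peer 172.xxx.yyy.zzz parent 8080 7 no-query round-robin login=PASS

# pass the client's IP to the peer in the X-Forwarded-For header
forwarded_for on

# keep the Via header so the peer can see the request path
via on
```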
[squid-users] Antwort: Re: [squid-users] Memory and CPU usage squid-3.1.4
Amos Jeffries squ...@treenet.co.nz wrote on 15.06.2010 10:48:33:
> [previous two messages quoted in full; see above]

Hi Amos,

thanks for your reply and suggestions!

ICAP needs about twice the memory, that is true, but that was already the case with 3.0. ignore_expect_100 is not configured, so the default (off) is used, and about the same number of ESTABLISHED and TIME_WAIT connections is seen on 3.0 and 3.1, with only a few other connection states, so I think there is no problem with hanging connections. Auth -- hmm, I have no idea how to trace memory leaks there, but at least the box itself does not swap. I am now testing with memory pools off on one of the proxies and will report later whether it changes anything.

But I have found two things worth mentioning. First, squid slowly needs more memory over time: in 3 hours the "Process Data Segment Size via sbrk()" rose by about 600MB. I will check whether it continues rising this fast and whether it changes with memory_pools off.

The other thing is that with 3.1.4 I now see many "ctx: enter level 1234: 'lost'" messages in cache.log. There were ctx messages with 3.0 as well, but normally I had "ctx: enter level 0: 'some text'" immediately followed by "ctx: exit level 0". With all three 3.1.4 proxies I see many "ctx: enter level" messages with a rising number, and no or only very few "ctx: exit level" messages. This is new with 3.1.4 -- 3.1.3 here behaved like 3.0.
Here is a snip from the cache.log:

2010/06/15 11:36:09| ctx: enter level 2007: 'lost'
2010/06/15 11:36:09| ctx: enter level 2008: 'lost'
2010/06/15 11:36:09| ctx: enter level 2009: 'lost'
[... identical "ctx: enter level NNNN: 'lost'" lines continue, one per level, through level 2029, where the snippet is cut off ...]
[squid-users] Squid NTLM and auth problems with POST
Hi list!

I have a problem with Squid and winbind auth. There are a couple of sites (internal CMS systems and external banking sites) that have the same problem: users can not send attached data files using HTML web forms (HTTP POST method). We have a Squid and Samba/winbind setup that authenticates users against the AD domain via NTLM. Everything works just fine except this mystical POST problem. It looks like this:

===
1276593195.910    256 10.1.2.20 TCP_DENIED/407 4500 POST http://www.site.com/admin.php? - NONE/- text/html
1276593195.919      7 10.1.2.20 TCP_DENIED/407 4706 POST http://www.site.com/admin.php? - NONE/- text/html
===

And if I make a hole in auth for the POST method using:

===
acl POST method POST
acl POST_whitelist dstdomain "/etc/squid/POST_whitelist.txt"
http_access allow POST POST_whitelist all
===

and try to send a file via the form, then everything works fine again:

===
1276593290.237    438 10.1.2.20 TCP_MISS/200 6752 GET http://www.site.com/admin.php? USER01 DEFAULT_PARENT/10.1.4.2 text/html
1276593290.303      2 10.1.2.20 TCP_DENIED/407 4582 GET http://www.site.com/n.php - NONE/- text/html
1276593290.307      1 10.1.2.20 TCP_DENIED/407 4788 GET http://www.site.com/n.php - NONE/- text/html
1276593290.490    180 10.1.2.20 TCP_MISS/200 413 GET http://www.site.com/n.php USER01 DEFAULT_PARENT/10.1.4.2 text/html
1276593305.751  12342 10.1.2.20 TCP_MISS/302 817 POST http://www.site.com/admin.php? - DEFAULT_PARENT/10.1.4.2 text/html
1276593305.755      1 10.1.2.20 TCP_DENIED/407 4680 GET http://www.site.com/admin.php? - NONE/- text/html
1276593305.761      1 10.1.2.20 TCP_DENIED/407 4886 GET http://www.site.com/admin.php? - NONE/- text/html
1276593306.106    344 10.1.2.20 TCP_MISS/302 722 GET http://www.site.com/admin.php? USER01 DEFAULT_PARENT/10.1.4.2 text/html
1276593306.110      0 10.1.2.20 TCP_DENIED/407 4684 GET http://www.site.com/admin.php? - NONE/- text/html
===

I Googled this and have read a lot of forums, but the only thing I have found yet is that there is some kind of brain damage in the NTLM auth scheme (it performs auth in a couple of iterations, each time sending more information about the user, which is fine for GET but bad for POST). In any case, it seems that Internet Explorer 8 (and Firefox 3 as well) do not perform additional auth iterations when they get the first 407 while POSTing data. I have been trying to overcome this problem with squid configuration directives like auth_param ntlm keep_alive on/off, no_cache, and ie_refresh on/off. Unfortunately, no luck for me :(

Is there any solution to this problem other than the acl POST hole I made? Any help is highly wanted. Thanks in advance.
Re: [squid-users] Squid NTLM and auth problems with POST
Dmitrijs Demidovs wrote:
> [original message quoted in full; see above]
>
> I have been trying to overcome this problem with squid configuration
> directives like auth_param ntlm keep_alive on/off, no_cache, and
> ie_refresh on/off. Unfortunately, no luck for me :(

keep_alive on is highly recommended for Squid older than 3.1. It should be the default in 3.1+, though I have not yet checked that.

no_cache is useless for this. The no_ part has been obsolete for many years now, and POST data is not cached anyway.

ie_refresh is a hack to get around broken refresh requests from old IE versions. It is only peripherally relevant, in that the refresh bug may by some fluke cause connections to close early sometimes. NP: persistent_after_error needs to be set as well to help catch these ie_refresh error conditions.

> Is there any solution to this problem other than the acl POST hole I made?

a) persistent_connections for both clients and servers is also required. Your proxy appears to be closing the connection, thus requiring a re-auth when a new connection is opened for each request.

b) not using NTLM. Negotiate/Kerberos works better and is recommended over NTLM.

You see this problem ONLY with IE8 and Firefox 3, not with older IE versions? Then chances are good those 'broken' IE8 and similar are sending Kerberos tokens instead of NTLM ones when challenged.

Amos
--
Please be using Current Stable Squid 2.7.STABLE9 or 3.1.4
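Taken together, the directives discussed in this reply might look like the following squid.conf fragment. A sketch only, not a tested config; all four names are standard squid.conf directives, but whether each is needed depends on the Squid version in use:

```
# keep client and server connections open across requests, so the
# multi-step NTLM handshake survives from challenge to authenticated POST
client_persistent_connections on
server_persistent_connections on

# keep the connection open across the 407 challenge
auth_param ntlm keep_alive on

# helps catch early closes triggered by broken IE refresh requests
persistent_after_error on
```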
[squid-users] empty basic/digest realm
Hello all,

I'd like to give Squid an empty realm as the realm for basic/digest authentication, but Squid quits with a message similar to this:

  FATAL: Bungled squid.conf line xxx: auth_param digest realm

Maybe I am doing something wrong, but I can't get the empty realm working. Can anyone here tell me how it's done? Thanks in advance!

Regards,
Khaled
[squid-users] Re: empty basic/digest realm
I just tried leaving the auth_param digest realm statement out entirely, and then squid used "Squid proxy-caching web server" as the realm. I am using squid 2.7. Does squid 2.7 support an empty realm at all?

2010/6/15 Khaled Blah khaled.b...@googlemail.com:
> [original question quoted in full; see the previous message]
[squid-users] Request - The Basic flow chart of squid
Hi,

Could anyone please help me with a basic flow chart of squid? It is actually for my MSc project.

Many Thanks,
Yubraj
Re: [squid-users] Antwort: Re: [squid-users] Memory and CPU usage squid-3.1.4
martin.pichlma...@continental-corporation.com wrote:
> [the previous messages in this thread, including the cache.log "ctx: enter level" snippet, are quoted here in full; see above. The digest is cut off inside the quoted log snippet, before any new reply text.]
Re: [squid-users] sarg index.html
I think this is not the right place to post this question, but I believe the lines below will help you:

# --- sarg.conf file ---
# TAG: index yes|no|only
#   Generate the main index.html.
#   only - generate only the main index.html
#
#index yes
index yes

2010/6/15 Kaushal Shriyan kaushalshri...@gmail.com:
> [original question quoted in full; see above]
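With index enabled in sarg.conf, re-running sarg over the existing access log should rebuild the reports along with the main index.html. A hedged sketch; the log and output paths are assumptions, so check sarg.conf for the real ones:

```shell
# regenerate the reports, including the main index.html,
# from the current squid access log into the report directory
sarg -l /var/log/squid/access.log -o /var/www/squid-reports
```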
Re: [squid-users] Join Squid to Windows Domain Controller : Configuring Squid for NTLM with Winbind Authentication on CentOS 5
Hello.

Below are the steps I used to get NTLM authentication working.

1. # yum -y install authconfig krb5-workstation samba-common

2. # authconfig --enableshadow --enablemd5 --passalgo=md5 \
     --krb5kdc=AD_SERVER.YOUR.FULL.DOMAIN --krb5realm=YOUR.FULL.DOMAIN \
     --smbservers=AD_SERVER.YOUR.FULL.DOMAIN --smbworkgroup=YOUR_AD_GROUP \
     --enablewinbind --enablewinbindauth --smbsecurity=ads \
     --smbrealm=YOUR.FULL.DOMAIN --smbidmapuid=16777216-33554431 \
     --smbidmapgid=16777216-33554431 --winbindtemplateshell=/bin/false \
     --enablewinbindusedefaultdomain --disablewinbindoffline \
     --winbindjoin=SOME_DOMAIN_ADMIN --disablewins --disablecache \
     --enablelocauthorize --updateall

3. # wbinfo --set-auth-user=YOUR_PROXY_USER%YOUR_PROXY_USER_PASSWORD
   This is the user the proxy will use to validate user credentials.

4. # chown root:squid /var/cache/samba/winbindd_privileged

2010/6/14 Edouard Zorrilla ezorri...@tsf.com.pe:
> Hi Guys, did anyone get this to work?
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5
> [authconfig command from the wiki quoted in full]
>
> I just want to authenticate against a Windows Domain Controller but no
> luck yet. Could someone advise how I can do that? Maybe I am going down
> the wrong path; I want to use NTLM since, as far as I have seen, this is
> the best way to do it.
>
> The error that I get is:
>
>   [2010/06/14 16:39:42, 0] libads/kerberos.c:ads_kinit_password(228)
>   kerberos_kinit_password u...@abc.xyz.com failed: Client not found in
>   Kerberos database
>
> Any help would be greatly appreciated. Thanks.
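Before wiring Squid in, the winbind/Kerberos side can be verified on its own with the standard Samba tools. A hedged sketch of the usual checks, assuming the machine has already been configured against the domain as in the steps above; the Administrator account name is an assumption:

```shell
# join the machine to the AD domain (prompts for the password)
net ads join -U Administrator

# confirm the machine account is valid
net ads testjoin

# check the winbind trust secret against the DC
wbinfo -t

# list domain users to confirm winbind lookups work
wbinfo -u
```

If `net ads join` fails with "Client not found in Kerberos database", the krb5/smb realm settings usually do not match the real AD domain.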
RE: [squid-users] Squid NTLM and auth problems with POST
Just to check, Amos: Squid 3 and above has client_persistent_connections and server_persistent_connections 'on' by default, i.e. not required in the conf file unless setting them to 'off'... Correct?

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 15 June 2010 12:51
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid NTLM and auth problems with POST

> [Amos's reply quoted in full; see above]
RE: [squid-users] Join Squid to Windows Domain Controller : Configuring Squid for NTLM with Winbind Authentication on CentOS 5
> Did anyone get this to work?
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5

Of course. It was written while being built, then retested immediately after.

> authconfig --enableshadow --enablemd5 --passalgo=md5 --krb5kdc=ads.example.local

Really? ads.example.local?

> The error that I get is:
>   [2010/06/14 16:39:42, 0] libads/kerberos.c:ads_kinit_password(228)
>   kerberos_kinit_password u...@abc.xyz.com failed: Client not found in
>   Kerberos database

Well, that's not surprising. I doubt your real domain is ads.example.local...
[squid-users] squid3 NTLM can't find the user but wbinfo does
Hi folks,

I have been fighting this problem for a few days and it's driving me nuts! :D

I'm using:
  debian 5.0.4
  squid 3.0.STABLE8 (default from the debian apt repositories)
  Samba 3.2.5
  kerberos5

I'm trying to implement automatic user authentication via ntlm_auth against Active Directory 2008. Everything appears to work (really): from my shell I get wbinfo answers, and with the ntlm_auth helper in basic mode I get a success answer (I have not tried ntlmssp mode because I could not find the correct test query on google). But when I try to browse the web I get this message in cache.out:

--[cut]
[2010/06/15 10:23:36, 3] utils/ntlm_auth.c:check_plaintext_auth(328)
  NT_STATUS_NO_SUCH_USER: No such user (0xc0000064)
2010/06/15 10:23:36| storeDirWriteCleanLogs: Starting...
2010/06/15 10:23:36| WARNING: Closing open FD 65
2010/06/15 10:23:36| Finished. Wrote 0 entries.
2010/06/15 10:23:36| Took 0.00 seconds ( 0.00 entries/sec).
FATAL: authenticateNTLMHandleReply: *** Unsupported helper response ***, 'ERR'
Squid Cache (Version 3.0.STABLE8): Terminated abnormally.
--[cut]

Here is my squid.conf ntlm line:

--[cut]
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
--[cut]

Test from a shell:

# ntlm_auth --username=douglas --nt-response
password:
NT_STATUS_OK: Success (0x0)
#
# wbinfo -u
administrator
guest
krbtgt
douglas
#

Could someone blessed with a good heart help me with this? :) thanks.

--
-- Douglas dos Santos
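For reference, the Squid helper protocols ntlm_auth speaks can be exercised directly from a shell, which separates winbind problems from Squid problems. A hedged sketch; DOMAIN, the username and the password are placeholders, and the basic-protocol helper reads one "username password" pair per line and answers OK or ERR:

```shell
# feed one credential pair to the basic-protocol helper;
# it should print OK for a valid domain user, ERR otherwise
echo 'DOMAIN\douglas secret' | \
    /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
```

If this prints ERR for a user that `wbinfo -u` lists, the problem is on the winbind side rather than in squid.conf.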
[squid-users] TPROXY squid and shorewall
Has anyone successfully set up shorewall with squid in tproxy mode? I'm having a hard time finding documentation on the shorewall side that works with Squid... Does anyone have any? Thanks.
[squid-users] Re: squid3 NTLM can't find the user but wbinfo does
Sniffing the connection I got:
--[cut]--
Proxy-Authorization: NTLM TlRMTVNTUAABB4IIogAFASgKDw==\r\n
NTLMSSP
NTLMSSP identifier: NTLMSSP
NTLM Message Type: NTLMSSP_NEGOTIATE (0x0001)
Flags: 0xa2088207
...all flags not set omitted...
Negotiate 56: Set
Negotiate 128: Set
Negotiate 0x0200: Set
Negotiate NTLM2 key: Set
Negotiate Always Sign: Set
Negotiate NTLM key: Set
Request Target: Set
Negotiate OEM: Set
Negotiate UNICODE: Set
...all flags not set omitted...
Calling workstation domain: NULL
Calling workstation name: NULL
--[cut]--
The trailing \r\n appears to be my problem. Any clue about it?
2010/6/15 Douglas Santos ml...@corelabs.com.br: [original message quoted in full; snipped] -- Douglas dos Santos
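On the "correct query" for the ntlmssp helper mode: the squid-2.5 helper protocols can be exercised by hand from a shell. Below is a hedged sketch, not a definitive diagnostic; DOMAIN, the username and the password are placeholders, and it assumes winbindd is running and the machine is joined to the domain.

```shell
# Basic mode: Squid sends "username password" per line; the helper
# answers OK or ERR.
printf 'DOMAIN\\douglas secretpassword\n' | \
    /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic

# NTLMSSP mode is a three-way handshake, so it is harder to drive by hand:
#   squid -> helper:  YR <base64 Type-1 negotiate token>
#   helper -> squid:  TT <base64 Type-2 challenge>
#   squid -> helper:  KK <base64 Type-3 response>
#   helper -> squid:  AF <username>   (or NA <reason> on failure)
```

Note, as an observation only: the cache.log above shows check_plaintext_auth firing and Squid's NTLM module receiving 'ERR', which looks like a basic-protocol reply arriving where the ntlmssp handshake was expected; double-checking that every auth_param line pairs the right scheme with the right --helper-protocol may be worthwhile.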
Re: [squid-users] Squid + Tproxy + Bridge on Kernel 2.6.34 - Workaround
On Tuesday 25 May 2010 23:21:39, senthilkumaar2021 wrote: Hi, Squid + Tproxy + Bridge setup on the latest kernel, version 2.6.34. I followed all the steps given at http://wiki.squid-cache.org/Features/Tproxy4 Kernel 2.6.34, iptables 1.4.8, ebtables 2.0.9-1. But clients were unable to browse, and there were no errors in cache.log. Error: Network Unreachable. The error was returned by the browser, not the Squid proxy. Workaround: after adding the following rules, clients are able to browse. # ip rule add dev <device name> fwmark 1 lookup 100 Example: # ip rule add dev eth0 fwmark 1 lookup 100 NOTE: Repeat the above for each interface except lo. Source: https://lists.balabit.hu/pipermail/tproxy/2010-January/001212.html Based on the above source this issue was identified on kernel version 2.6.32 but is still not fixed. I have CC'ed this mail to the netfilter mailing lists as well. Hope this helps. Thanks, Senthil -- I was about to ask if this is fixed in 2.6.33+ or whether I should stay on 2.6.31.x
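For scripting the per-interface workaround above, the rules can be applied in a loop. A hedged sketch (the interface names are examples only; run as root):

```shell
# Apply the fwmark routing rule on every non-loopback interface.
# In "ip rule", "dev" matches the incoming interface (synonym for "iif").
for dev in eth0 eth1 br0; do          # example interface names
    ip rule add dev "$dev" fwmark 1 lookup 100
done

# TPROXY also needs marked packets delivered locally via table 100,
# as in the Features/Tproxy4 wiki page:
ip route add local 0.0.0.0/0 dev lo table 100
```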
[squid-users] Help configuring Squid with delay pools
Hello, we are implementing a Squid proxy server in my office. The main idea is to limit bandwidth using delay pools and also block some websites. We have made our config and it's working, but we don't know if everything is correct, especially in the delay pools part.

**Introduction: We have dedicated broadband with: 4 MB FOR INTERNATIONAL TRAFFIC, 100 MB FOR NATIONAL. PROXY: 192.168.169.3, GATEWAY: 192.168.169.2. Users: around 150 daily.

**We want to divide our internal LAN into 5 groups with the following rules PER USER.
GROUP 1: normales, from 192.168.169.30 to 192.168.169.129 - If a user exceeds 10 MB when downloading a file, limit to 10 KB/s download speed.
GROUP 2: tecnicos, from 192.168.169.130 to 192.168.169.149 - File bigger than 50 MB, limit to 30 KB/s.
GROUP 3: administrador, 192.168.169.150 to 192.168.169.189 - File bigger than 100 MB, limit to 30 KB/s.
GROUP 4: estudio, 192.168.169.190 to 192.168.169.219 - No downloads of files, or very slow, but free web surfing including YouTube.
GROUP 5: gerencia, 192.168.169.220 to 192.168.169.252 - Everything unlimited.

**We want to block the following sites from 192.168.169.1 to 192.168.169.129: BLOCK: [CODE].facebook.com .twitter.com .doubleclick.com .fotolog.com .warez-bb.org .fotolog.cl .chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de .mediafire.com .hotfile.com .myspace.com .fotolog.terra.cl .fotologs.com .portalnet.cl .taringa.net .antro.cl .chilewarez.cl .chilebt.com .shared.cl .comparte.cl .mininova.org .torrentz.com .flickr.com .flicker.net .keepvid.com .kotteshiro.com .no-ip.org .no-ip.com .redtube.com .xnxx.com .muyzorras.com .bananacorp.cl .orgasmatrix.com .depositfiles.com[/CODE]

**From 192.168.169.130 to 192.168.169.149, BLOCK: same as above except facebook.com

**Deny from 192.168.169.1 to 192.168.169.29 DOWNLOADING THE FOLLOWING EXTENSIONS: [CODE].exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov[/CODE]

[B]We don't know if the rules per group are possible with the bandwidth we have; also everyone surfs YouTube a lot, and we need that to not eat so much bandwidth[/B]. Here is our current squid.conf:

[QUOTE]
http_port 192.168.169.3:3128 transparent
cache_dir ufs /usr/local/squid/var/cache 250 16 256
cache_effective_user squid
cache_effective_group squid
access_log /usr/local/squid/var/logs/access.log squid
acl localnet src 192.168.169.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
acl SSL_ports port 443 563
acl Safe_ports port 80	# http
acl Safe_ports port 21	# ftp
acl Safe_ports port 443	# https
acl Safe_ports port 70	# gopher
acl Safe_ports port 210	# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280	# http-mgmt
acl Safe_ports port 488	# gss-http
acl Safe_ports port 591	# filemaker
acl Safe_ports port 777	# multiling http
acl CONNECT method CONNECT
### SITIOS BLOKEADOS ###
acl restobb src 192.168.169.1-192.168.169.129
acl sucky_urls dstdomain .facebook.com .twitter.com .doubleclick.com .fotolog.com .warez-bb.org .fotolog.cl .chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de .medi$
deny_info [url]http://www.xxx.xx/error.html[/url] sucky_urls
http_access deny restobb sucky_urls
### NO DESCARGAS ###
acl resto src 192.168.169.1-192.168.169.29/32
acl descargas_negadas url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
deny_info [url]http://www.xx.xx/error.html[/url] descargas_negadas
http_access deny resto descargas_negadas
### SITIOS CASI BLOKEADOS ###
acl restobb2 src 192.168.169.130-192.168.169.149
acl sucky_urls2 dstdomain .twitter.com .doubleclick.com .fotolog.com .warez-bb.org .fotolog.cl .chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de .mediafire.com .de$
deny_info [url]http://www..xx/error.html[/url] sucky_urls2
http_access deny restobb2 sucky_urls2
http_access allow CONNECT SSL_ports
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_reply_access allow localnet
http_reply_access deny all
### REGLAS DESCARGAS ###
acl normales src 192.168.169.30-192.168.169.129/32
acl tecnicos src 192.168.169.130-192.168.169.149/32
acl administrador src 192.168.169.150-192.168.169.189/32
acl estudio src 192.168.169.190-192.168.169.219/32
acl gerencia src 192.168.169.220-192.168.169.252/32
acl descargas url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
delay_pools 5
delay_class 1 1
delay_parameters 1 10240/10485760 10240/10485760
delay_access 1 allow normales descargas
delay_access 1 deny all
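One note on the delay-pools part of the config above: class 1 pools share a single aggregate bucket across all clients, while the stated rules are PER USER, which is what class 2 pools (one bucket per client IP) provide. Delay pools cannot literally detect "a file bigger than 50 MB", but a large per-host bucket approximates it: each host gets full speed for the first N bytes, then the refill rate. A hedged sketch, not a tested drop-in config:

```
# Sketch only: class 2 = one bucket per client IP ("per user" here).
delay_pools 2

# Pool 1: "normales" -- full speed for the first 10 MB per host,
# then refilled at ~10 KB/s.
delay_class 1 2
# delay_parameters <pool> <aggregate rate/size> <per-host rate/size>
#   -1/-1 means no aggregate cap.
delay_parameters 1 -1/-1 10240/10485760
delay_access 1 allow normales descargas
delay_access 1 deny all

# Pool 2: "tecnicos" -- full speed for the first 50 MB per host,
# then ~30 KB/s.
delay_class 2 2
delay_parameters 2 -1/-1 30720/52428800
delay_access 2 allow tecnicos descargas
delay_access 2 deny all
```

The remaining groups would follow the same pattern with their own pool numbers and bucket sizes.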
Re: [squid-users] Help configuring Squid with delay pools
Did you read http://wiki.squid-cache.org/Features/DelayPools ? On Tuesday 15 June 2010 14:42:13, Jorge Perez wrote: [original message quoted in full; snipped]
RE: [squid-users] Squid NTLM and auth problems with POST
On Tue, 15 Jun 2010 15:19:15 +0100, Nick Cairncross nick.cairncr...@condenast.co.uk wrote: Just to check, Amos: Squid 3 and above has client_persistent_connections and server_persistent_connections 'on' by default, i.e. not required in the conf file unless setting them to 'off'... Correct? Yes. (I mention it because they are relevant and many still use older config files with them explicitly off.) persistent_after_error was supposed to be on by default but was recently found to be off in 3.1.4 and older releases. Chances are it's the IE8 NTLM-Kerberos transition hitting you, though. That seems to be the biggest NTLM complaint in recent times. Amos -----Original Message----- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: 15 June 2010 12:51 To: squid-users@squid-cache.org Subject: Re: [squid-users] Squid NTLM and auth problems with POST Dmitrijs Demidovs wrote: Hi list! I have a problem with Squid and winbind auth. There are a couple of sites (internal CMS systems and external banking sites) that have the same problem: users cannot send attached data files using HTML web forms (HTTP POST method). We have a Squid and Samba/winbind setup that performs auth of users against the AD domain via NTLM. Everything works just fine except this mystical POST problem. It looks like this: === 1276593195.910 256 10.1.2.20 TCP_DENIED/407 4500 POST http://www.site.com/admin.php? - NONE/- text/html 1276593195.919 7 10.1.2.20 TCP_DENIED/407 4706 POST http://www.site.com/admin.php? - NONE/- text/html === And if I make a hole in auth for the POST method using: === acl POST method POST acl POST_whitelist dstdomain "/etc/squid/POST_whitelist.txt" http_access allow POST POST_whitelist all === and try to send a file via the form, then everything works fine again: === 1276593290.237 438 10.1.2.20 TCP_MISS/200 6752 GET http://www.site.com/admin.php?
USER01 DEFAULT_PARENT/10.1.4.2 text/html 1276593290.303 2 10.1.2.20 TCP_DENIED/407 4582 GET http://www.site.com/n.php - NONE/- text/html 1276593290.307 1 10.1.2.20 TCP_DENIED/407 4788 GET http://www.site.com/n.php - NONE/- text/html 1276593290.490 180 10.1.2.20 TCP_MISS/200 413 GET http://www.site.com/n.php USER01 DEFAULT_PARENT/10.1.4.2 text/html 1276593305.751 12342 10.1.2.20 TCP_MISS/302 817 POST http://www.site.com/admin.php? - DEFAULT_PARENT/10.1.4.2 text/html 1276593305.755 1 10.1.2.20 TCP_DENIED/407 4680 GET http://www.site.com/admin.php? - NONE/- text/html 1276593305.761 1 10.1.2.20 TCP_DENIED/407 4886 GET http://www.site.com/admin.php? - NONE/- text/html 1276593306.106 344 10.1.2.20 TCP_MISS/302 722 GET http://www.site.com/admin.php? USER01 DEFAULT_PARENT/10.1.4.2 text/html 1276593306.110 0 10.1.2.20 TCP_DENIED/407 4684 GET http://www.site.com/admin.php? - NONE/- text/html === I Googled this and have read a lot of forums, but the only thing I have found yet is that there is some kind of brain damage in the NTLM auth scheme (it performs auth in a couple of iterations, each time sending more and more info about the user; this is fine for GET but bad for POST). Anyway, it seems that Internet Explorer 8 (and Firefox 3 as well) do not perform additional auth iterations when they get the first 407 while POSTing data. I have been trying to overcome this problem by using squid configuration directives like auth_param ntlm keep_alive on/off, no_cache and ie_refresh on/off. Unfortunately, no luck for me :( keep_alive on is highly recommended for Squid older than 3.1. It should be on by default in 3.1+, though I have not yet checked that. no_cache is useless for this. The no_ part has been obsolete for many years now. And POST data is not cached anyway. ie_refresh is a hack to get around broken refresh requests from old IE versions. It is only peripherally relevant, in that the refresh bug may by some fluke cause connections to close early sometimes.
NP: persistent_after_error needs to be set as well to help catch these ie_refresh error conditions. Is there any solution for this problem except acl POST hole I made? a) persistent_connections for both clients and servers is also required. Your proxy appears to be closing the connection and thus requiring a re-auth when a new connection is opened for each request. b) not using NTLM. Negotiate/Kerberos works better and is recommended over NTLM. You see this problem ONLY with IE8 and Firefox 3? not with older IE versions? Then chances are good those 'broken' IE8 and similar are sending Kerberos tokens instead of NTLM ones when challenged. Amos
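For reference, the directives discussed in this thread look like the following in squid.conf. A hedged sketch, not a tested config; the children count is an example value:

```
# On by default in Squid 3.x, shown explicitly since many older configs
# still carry them turned off:
client_persistent_connections on
server_persistent_connections on

# Meant to be on by default, but found off in 3.1.4 and older:
persistent_after_error on

# NTLM helper with keep_alive on, recommended for Squid older than 3.1:
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on
```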
Re: [squid-users] TPROXY squid and shorewall
On Tue, 15 Jun 2010 11:41:52 -0500, Johnson, S sjohn...@edina.k12.mn.us wrote: Has anyone successfully set up shorewall with squid in tproxy mode? I'm having a hard time finding documentation on the shorewall side to work with Squid... Does anyone have any? Thanks. Shorewall is just an obfuscating wrapper around the iptables command line. If you understand where shorewall lays out all the bits of each rule, it should not be too hard to map the wiki.squid-cache.org/Features/Tproxy4 iptables rules into shorewall config settings. (It's been a long while since I used shorewall, sorry, or I'd take a stab at it myself.) Amos
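For anyone attempting the mapping, these are the raw iptables/ip rules from the Features/Tproxy4 wiki page that would need translating into shorewall's own config files. A sketch under assumptions: port 3129 stands in for a tproxy-flagged http_port, and the rules must run as root:

```shell
# Divert already-established, locally-bound connections to Squid:
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

# Redirect new port-80 flows to the tproxy-flagged Squid port:
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129

# Route marked packets to the local stack:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```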
Re: [squid-users] Squid + Tproxy + Bridge on Kernel 2.6.34 - Workaround
On Tue, 15 Jun 2010 13:37:48 -0500, Luis Daniel Lucio Quiroz luis.daniel.lu...@gmail.com wrote: On Tuesday 25 May 2010 23:21:39, senthilkumaar2021 wrote: [original message quoted in full; snipped] I was about to ask if this is fixed in 2.6.33+ or whether I should stay on 2.6.31.x -- From the Squid side: I have not seen any concrete evidence that this problem was anything more than a configuration mixup. The fix is to configure the routing tables so that packets the bridge stack sends to the routing stack (ebtables ... -j DROP) actually get routed to Squid. Our wiki demo uses 127.0.0.1 and the lo interface; it seems the reporter was using a global IP and only had to configure a global interface's routing. The other two older reporters have been suspiciously silent on the lists since the same bridge/router interaction was mentioned. Amos
Re: [squid-users] Squid + Tproxy + Bridge on Kernel 2.6.34 - Workaround
Hi, The tproxy setup in bridge mode worked well as per the Squid wiki up to kernel version 2.6.30.x. When we tested tproxy in bridge mode on kernels newer than 2.6.33.x (2.6.34 as well), tproxy was not working. When the following workaround was used, tproxy worked fine:

# ip rule add dev <device name> fwmark 1 lookup 100
Example: # ip rule add dev eth0 fwmark 1 lookup 100
NOTE: Repeat the above for each interface except lo.

and also:

echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/br0/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
echo 1 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 1 > /proc/sys/net/ipv4/conf/eth0/send_redirects

We suspect the problem is not in Squid and is related to netfilter. Regards, senthil

Amos Jeffries wrote: [earlier messages quoted in full; snipped]
[squid-users] help while using squid
Hi, here is the problem: the Squid server IP is something like 211.83.105.*, the client IP is like 121.49.127.*, and the two machines can ping each other. I changed squid.conf to:

http_port 3128
acl all src 0.0.0.0/0
http_access allow all

and this simply does not work. What should I do? PS: the system I use is Ubuntu 10.04, and the Squid version is 3.0. Regards, luwening
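For what it's worth, a minimal hedged sketch of a squid.conf along the lines the post above is aiming for. The subnet name "clients" is an illustration, not from the original; the key points are that the allow rule must precede any deny rules, and the browser on 121.49.127.* must be configured to use the proxy's address and port (211.83.105.*:3128):

```
http_port 3128
acl all src 0.0.0.0/0.0.0.0
acl clients src 121.49.127.0/255.255.255.0   # hypothetical client subnet
http_access allow clients
http_access deny all
```

After editing, `squid -k parse` checks the syntax and `squid -k reconfigure` reloads it; it is also worth checking that no firewall on the server is blocking port 3128.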