Re: [squid-users] Large ACL problem
On Tue 2007-03-27 at 18:12 -0400, Chris Rosset wrote:

> Hello, I am still having an issue with high CPU usage. In a previous post it was suggested that my ACL (which is 8k+ lines)

8k+ lines of what? For both Squid and SquidGuard it's important that you use the correct acl type for structured data such as host names, domains etc., and that you only use regex patterns as a last resort. The problems with regex are:

a) It's CPU intensive to evaluate, as the whole list has to be evaluated on each request only to find that it doesn't match any of the patterns.

b) It's quite memory hungry.

The other ACL types work much more efficiently thanks to their data being structured, allowing the patterns to be sorted and searched efficiently.

Regards
Henrik
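Henrik's point can be sketched in squid.conf terms; the file names below are placeholders for illustration, not from the original mail:

```
# Last resort only: every request is checked against every pattern in the file
acl bad_re url_regex -i "/etc/squid/blocked.regex"

# Preferred: dstdomain data is structured, so Squid can sort and
# search the list instead of scanning it linearly
acl bad_domains dstdomain "/etc/squid/blocked.domains"
http_access deny bad_domains
```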
Re: [squid-users] Re: squid 3 + content encode
On Wed 2007-03-28 at 11:31 +0700, Wahyu wrote:

> I got a failure when I run bootstrap. Has anyone seen the same?
>
> proxy:~/squid-3.0.PRE5-20070328# ./bootstrap.sh
> automake : autoconf : libtool :
> Bootstrapping definition of AC_CHECK_PTH
> configure.in:34: error: possibly undefined macro: AC_LTDL_DLLIB
> If this token and others are legitimate, please use m4_pattern_allow.

Hmm.. aclocal doesn't find the libtool macros. Try the following:

  cat /usr/share/libtool/libtool.m4 >> acinclude.m4

If that file is not found, then go shopping in /usr/share to see where libtool.m4 is installed. If it's not found at all, then your libtool installation is incomplete. Then run bootstrap.sh again.

Regards
Henrik
RE: [squid-users] Re: Antivir scan big file problem with ICAP
On Wed 2007-03-28 at 10:44 +0400, Podolski wrote:

> No commercial antivirus uses the Preview method of the ICAP protocol :(

Several do use the preview to detect content which does not need to be scanned.

Regards
Henrik
Re: [squid-users] Re: squid 3 + content encode
I saw that libtool.m4 is there in /usr/share/libtool, then I tried

  cat /usr/share/libtool/libtool.m4 >> acinclude.m4

but the error still exists when I run bootstrap.sh. Is it possible that the problem is an incorrect version of a tool (automake, libtool, etc) that I use?

rgds,
why

On Wed, 28 Mar 2007 11:16:17 +0200, Henrik Nordstrom [EMAIL PROTECTED] wrote:

> On Wed 2007-03-28 at 11:31 +0700, Wahyu wrote:
> > I got a failure when I run bootstrap. Has anyone seen the same?
> >
> > proxy:~/squid-3.0.PRE5-20070328# ./bootstrap.sh
> > automake : autoconf : libtool :
> > Bootstrapping definition of AC_CHECK_PTH
> > configure.in:34: error: possibly undefined macro: AC_LTDL_DLLIB
> > If this token and others are legitimate, please use m4_pattern_allow.
>
> Hmm.. aclocal doesn't find the libtool macros. Try the following:
>
>   cat /usr/share/libtool/libtool.m4 >> acinclude.m4
>
> If that file is not found, then go shopping in /usr/share to see where libtool.m4 is installed. If it's not found at all, then your libtool installation is incomplete. Then run bootstrap.sh again.
>
> Regards
> Henrik
RE: [squid-users] Re: Antivir scan big file problem with ICAP
> > No commercial antivirus uses the Preview method of the ICAP protocol :(
>
> Several do use the preview to detect content which does not need to be scanned.
>
> Regards
> Henrik

Yep. I tested HAVP on FreeBSD. It reports:

  Mandatory locking disabled!
  KEEPBACK settings not used!

Data trickling is not used, and users panic when downloading big files. Test download:

  http://httpdl2.usw.nero.com/software/nero7/trial/Nero-7.8.5.0_rus_trial.exe

Result: Server Timeout.

havp.conf:

#REMOVETHISLINE deleteme
USER havp
GROUP havp
DAEMON true
PIDFILE /var/run/havp/havp.pid
SERVERNUMBER 24
MAXSERVERS 100
ACCESSLOG /var/log/havp/access.log
ERRORLOG /var/log/havp/havp.log
LOG_OKS false
# LOGLEVEL 0
# SCANTEMPFILE /var/tmp/havp/havp-XX
# TEMPDIR /var/tmp
DBRELOAD 60
PORT 8083
TEMPLATEPATH /usr/local/etc/havp/templates/ru
# WHITELISTFIRST true
# WHITELIST /usr/local/etc/havp/whitelist
# BLACKLIST /usr/local/etc/havp/blacklist
# FAILSCANERROR true
SCANNERTIMEOUT 10
RANGE false
SCANIMAGES true
MAXSCANSIZE 500
KEEPBACKBUFFER 20
KEEPBACKTIME 5
TRICKLING 15
# MAXDOWNLOADSIZE 0
#
# ClamAV Library Scanner (libclamav)
#
ENABLECLAMLIB true
CLAMDBDIR /var/db/clamav
# CLAMBLOCKENCRYPTED false
# CLAMBLOCKMAX false
# CLAMMAXFILES 1000
# CLAMMAXFILESIZE 10
# CLAMMAXRECURSION 8
# CLAMMAXRATIO 250
[squid-users] squid auth (cache_peer)
Hi, has anyone tried squid auth, but only for sibling/parent connections? I want my squid box to require auth if another squid wants to join as a sibling/parent.

rgds,
why
Re: [squid-users] Re: squid 3 + content encode
On Wed 2007-03-28 at 16:45 +0700, Wahyu wrote:

> I saw that libtool.m4 is there in /usr/share/libtool, then I tried cat /usr/share/libtool/libtool.m4 >> acinclude.m4, but the error still exists when I run bootstrap.sh. Is it possible that the problem is an incorrect version of a tool (automake, libtool, etc) that I use?

Hmm.. checking. AC_LTDL_DLLIB is from libltdl, the library provided by libtool. Check if you have an ltdl.m4 file and that libltdl is installed.

Regards
Henrik
Re: [squid-users] Design a delay_pools policy...
On Mon 26 Mar 2007, Henrik Nordstrom wrote:

Hi Henrik,

> So what was the hardest part to understand? (just trying to figure out how to better document this)
>
> http://wiki.squid-cache.org/SquidFaq/MiscFeatures#head-fd9b4b7ba1854a3c21796173af9d0b9aee33e376
> http://www.deckle.co.za/squid-users-guide/Access_Control_and_Access_Control_Operators#Delay_Classes

Basically the terminology (pools, buckets, classes), but I suspect that if I had read it in a quieter environment (at home, for example) I wouldn't have had any problems... Actually, now that I look at it again it makes more sense than last week :)

> Looks fine if you want a hard 100KByte/s (ca 1Mbit/s) limit per user. Usually most people use delay pools with a somewhat bigger bucket (maybe 10 times the rate) and a slightly smaller rate, to allow the burstiness which is typical in normal browsing while still limiting downloads.
>
>   delay_parameters 1 -1/-1 90000/400000
>
> This will allow users to access the Internet in bursts of up to 400Kbyte unlimited, as long as they stay within their allowed 90Kbyte/s limit on average. You could also reorder the acl lists slightly to make them easier to maintain if you need to add more exclusion criteria later on:
>
>   delay_access 1 deny fastusers
>   delay_access 1 deny local
>   delay_access 1 allow all

Right, I say. That's perhaps a better solution... I'll try this instead and see what happens.

Thanks,
Ray
--
You cannot discover new oceans unless you have the courage to lose sight of the shore
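Putting Henrik's suggestion together, a complete class-2 pool in squid.conf might look like the sketch below (the fastusers and local acls are assumed to be defined elsewhere in the original config):

```
delay_pools 1
delay_class 1 2
# aggregate unlimited; per-host bucket refills at 90000 bytes/s,
# allowing bursts of up to 400000 bytes
delay_parameters 1 -1/-1 90000/400000
delay_access 1 deny fastusers
delay_access 1 deny local
delay_access 1 allow all
```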
RE: [squid-users] Re: Antivir scan big file problem with ICAP
On Wed 2007-03-28 at 13:46 +0400, Podolski wrote:

> Data trickling is not used, and users panic when downloading big files

Data trickling (ICAP sending the response before it's fully processed) is a different thing, unrelated to preview. I'm not sure the ICAP patch to Squid-2 supports data trickling.

Regards
Henrik
Re: [squid-users] squid auth (cache_peer)
On Wed 2007-03-28 at 16:49 +0700, Wahyu wrote:

> Hi, has anyone tried squid auth, but only for sibling/parent connections? I want my squid box to require auth if another squid wants to join as a sibling/parent.

You can select access requirements per source IP or any other criteria which can be expressed in terms of acls. See http_access. However, a child cache using you as a parent is just another HTTP client, so it's hard to handle it differently unless you know about the child server beforehand.

Regards
Henrik
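A minimal sketch of what Henrik describes, for the case where the child cache's address is known beforehand. The address and the assumption that some auth scheme is already configured via auth_param are placeholders, not from the original mail:

```
# Hypothetical child-cache address; adjust to your network
acl child_cache src 192.0.2.10/32
acl authed proxy_auth REQUIRED

# Challenge only the known child for credentials; other clients
# keep whatever rules follow below
http_access deny child_cache !authed
http_access allow child_cache
```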
Re: [squid-users] Design a delay_pools policy...
On Wed 2007-03-28 at 13:20 +0300, Raymond A. Meijer wrote:

> Actually, now that I look at it again it makes more sense than last week :)

If you can think of anything we can add to clarify the concepts, making them easier to understand the first time, then you are most welcome to inform us.

Regards
Henrik
Re: [squid-users] Re: squid 3 + content encode
Henrik Nordstrom wrote:

> On Wed 2007-03-28 at 16:45 +0700, Wahyu wrote:
> > I saw that libtool.m4 is there in /usr/share/libtool, then I tried cat /usr/share/libtool/libtool.m4 >> acinclude.m4, but the error still exists when I run bootstrap.sh. Is it possible that the problem is an incorrect version of a tool (automake, libtool, etc) that I use?
>
> Hmm.. checking. AC_LTDL_DLLIB is from libltdl, the library provided by libtool. Check if you have an ltdl.m4 file and that libltdl is installed.
>
> Regards
> Henrik

Sigh, this is the same problem I encountered and posted a fix for earlier this month, and the month before. Check that you have a recent version of automake and libltdl-dev explicitly installed. On some package-based systems (Debian etc) the 'automake' package does not necessarily pull them in properly. Those systems need automake1.x installed separately. The full details I wrote up earlier are here:

http://www.mail-archive.com/squid-dev@squid-cache.org/msg05214.html

Amos
[squid-users] how to configure sourcehash persistence timeout?
Hi, how can I configure the timeout for the sourcehash persistence? And how long is the default timeout?

best regards
Oliver Voelker
[squid-users] redirection prob
Dear Folks,

I am getting a strange problem with Squid caching. The object seems to be a MISS, but my users still get some old page, or there is some other problem. When I submit this form:

  http://www.gawab.com/register.html?lng=en&privatedomain=smilepk.net

my squid logs show:

  1175080896.087   2562 XXX TCP_MISS/200 8616 POST http://www.gawab.com/register.html - DIRECT/208.21.175.132 text/html
  1175080896.896    730 XXX TCP_MISS/200 756 GET http://www.gawab.com/gawabexternal/funGen/fungen.php - DIRECT/208.21.175.132 text/html
  1175080897.894    899 XXX TCP_MISS/200 8332 GET http://www.gawab.com/gawabexternal/humancheck/codegen.php - DIRECT/208.21.175.132 text/html
  1175080904.560    946 XXX TCP_MISS/302 551 POST http://www.gawab.com/register.html - DIRECT/208.21.175.132 text/html
  1175080905.462    768 XXX TCP_MISS/404 594 GET http://www.gawab.com/main.php? - DIRECT/208.21.175.132 text/html

which means nothing was found in the cache, right? But in the browser it gives me "404 main.php not found". It tries to access main.php, which does not exist; OK, a server problem. But when I try without squid, the form completes and the user is redirected to his/her inbox, while squid shows it as a MISS. I tried with no_cache and always_direct, but nothing works. I have tried Squid 2.5.STABLE7 and 2.5.STABLE14.
[squid-users] anybody know what the session helper is??
I wish to display a web page only the first time users authenticate. Henrik suggested the session helper in Squid 2.6. Does anybody know about it? Please tell me.
[squid-users] URL Redirecting...
Hi All,

I would just like to know the easiest way, and the easiest tool, to do URL redirecting with squid. What should I use for this? I also need whatever it is to be quite easy to configure, seeing as I do not have enough time to figure out how regular expressions work (pyredir)... What I am trying to accomplish is to have squid redirect anyone that browses to 192.168.0.1 to 192.168.1.2.

Thanx everyone!!!
--
C
Re: [squid-users] URL Redirecting...
Use jesred (http://www.linofee.org/~jel/webtools/jesred/). It's easy to use.

----- Original Message -----
From: Charl Loubser [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Wednesday, March 28, 2007 6:02 PM
Subject: [squid-users] URL Redirecting...

> Hi All, I would just like to know the easiest way, and the easiest tool, to do URL redirecting with squid. What should I use for this? I also need whatever it is to be quite easy to configure, seeing as I do not have enough time to figure out how regular expressions work (pyredir)... What I am trying to accomplish is to have squid redirect anyone that browses to 192.168.0.1 to 192.168.1.2.
>
> Thanx everyone!!!
> --
> C
Re: [squid-users] how to configure sourcehash persistence timeout?
On Wed 2007-03-28 at 13:46 +0200, Oliver Voelker wrote:

> How can I configure the timeout for the sourcehash persistence? And how long is the default timeout?

There is none. sourcehash is stateless, purely a mathematical function based on the source IP and the set of active sourcehash peers. As long as the conditions are the same, the request gets forwarded along the same path, the next second, next day or next year.

Regards
Henrik
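As an illustration only (this is not Squid's actual hash function), any pure function of the client IP gives the stateless persistence Henrik describes: the same input always selects the same parent, so no timeout is needed. Here cksum stands in for the real hash:

```shell
# Stable, stateless mapping from a client IP to one of 3 parents.
# Re-running this at any later time yields the same index.
ip="192.168.1.10"
nparents=3
sum=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
echo "parent index: $((sum % nparents))"
```

The mapping only changes when the inputs change, e.g. when a peer goes away and nparents shrinks, which is exactly why there is no timeout to configure.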
Re: [squid-users] redirection prob
On Wed 2007-03-28 at 17:31 +0500, Shabbir Ahmed wrote:

> 1175080905.462    768 XXX TCP_MISS/404 594 GET http://www.gawab.com/main.php? - DIRECT/208.21.175.132 text/html
>
> which means nothing was found in the cache, right? But in the browser it gives me "404 main.php not found".

Yes, and that's what the access.log says...

> It tries to access main.php, which does not exist; OK, a server problem. But when I try without squid, the form completes and the user is redirected to his/her inbox, while squid shows it as a MISS. I tried with no_cache and always_direct, but nothing works. Squid 2.5.STABLE7 and 2.5.STABLE14.

Hard to say what could be the cause without knowing the server application, but perhaps you have enabled anonymization features blocking the use of cookies?

Regards
Henrik
Re: [squid-users] anybody know what the session helper is??
Abhishek Chavan wrote:

> I wish to display a web page only the first time users authenticate. Henrik suggested the session helper in Squid 2.6. Does anybody know about it? Please tell me.

Take a look:

  # nroff -man squid-2.6.STABLE12/helpers/external_acl/session/squid_session.8

Thanks
Emilio C.
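For reference, a sketch of wiring the session helper up as an external acl. The helper path, TTL values and splash-page URL below are assumptions to adapt to your installation, not quoted from the man page:

```
# Hypothetical paths/URLs -- adjust for your install
external_acl_type session ttl=300 negative_ttl=0 %LOGIN /usr/local/squid/libexec/squid_session
acl session external session
# Users without an active session get the splash page once,
# then the helper records the session and lets them through
http_access deny !session
deny_info http://your.server/welcome.html session
```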
RE: [squid-users] SquidNT2.6 - Active Directory
Hi,

At 23.01 27/03/2007, Sergio Gleser wrote:

My answers...

> > According to your squid.conf, the internet windows group should be a Domain Global group, is this true?
>
> True. The Internet group is a Global Security group.
>
> > And the case is correct? The helper is case sensitive; you can use the -c option for a case insensitive compare. If yes, you could try to debug the external acl helper by adding the -d option, and look into cache.log to see what happens.
>
> I send you my cache.log.

This is not correct:

  /mswin_check_lm_group.exe[976]: Valid_Global_Groups: checking group membership of 'grupoapex\sgleser'.
  /mswin_check_lm_group.exe NetServerGetInfo() failed.

The helper is not able to retrieve the group membership for the user sgleser. There is something strange in your AD environment: just today I installed a 2.6.STABLE12 on a Windows 2003 machine, member of a multi-domain Windows 2003 AD Forest, without any problem.

Regards
Guido

-------------------------------------------------------------
Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1  10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/
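Based on the options discussed above (-c for case-insensitive compare, -d for debug output to cache.log), a debugging configuration might look like the following sketch. The helper path, acl names and group name are placeholders, and -G (check Domain Global groups) should be verified against your helper's own usage text:

```
# Hypothetical path and names; -d sends helper debug output to cache.log
external_acl_type win_group %LOGIN c:/squid/libexec/mswin_check_lm_group.exe -G -c -d
acl InternetAccess external win_group Internet
http_access allow InternetAccess
```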
Re: [squid-users] Large ACL problem
Henrik Nordstrom [EMAIL PROTECTED] wrote on 3/28/2007 2:04:40 AM:

> On Tue 2007-03-27 at 18:12 -0400, Chris Rosset wrote:
> > Hello, I am still having an issue with high CPU usage. In a previous post it was suggested that my ACL (which is 8k+ lines)
>
> 8k+ lines of what? For both Squid and SquidGuard it's important that you use the correct acl type for structured data such as host names, domains etc., and only use regex patterns as a last resort. The problems with regex are: a) it's CPU intensive to evaluate, as the whole list has to be evaluated on each request only to find that it doesn't match any of the patterns; b) it's quite memory hungry. The other ACL types work much more efficiently thanks to their data being structured, allowing the patterns to be sorted and searched efficiently.
>
> Regards
> Henrik

Hi Henrik,

I remember your last email pointed me towards looking at the ACLs; it sounds like the regex ones are most intensive since they pattern match on the whole URL? For more info on what I have: we have a few ACLs, most not more than a few hundred lines, but the one big one is 8200 lines, mostly spam, spyware and porn sites:

  acl never-allow-url dstdom_regex -i /usr/local/squid/etc/FilterLists/never-allow-url
  deny_info ERR_BW_CONTENT_SUPPRESSED never-allow-url

and the entries in this acl are all like

  \.100percentcash.com$

I tried installing SquidGuard, but have had problems installing it so far; maybe I could try squirm or jesred, or just recompile with --enable-gnuregex. Or any other recommendation?

Thanks very much
-Chris
Re: [squid-users] Large ACL problem
Chris Rosset wrote:

> We have a few ACLs, most not more than a few hundred lines, but the one big one is 8200 lines, mostly spam, spyware and porn sites:
>
>   acl never-allow-url dstdom_regex -i /usr/local/squid/etc/FilterLists/never-allow-url
>   deny_info ERR_BW_CONTENT_SUPPRESSED never-allow-url
>
> and the entries in this acl are all like \.100percentcash.com$

  acl never-allow-domain dstdomain /usr/local/squid/etc/FilterLists/never-allow-domain
  deny_info ERR_BW_CONTENT_SUPPRESSED never-allow-domain

with a file content of

  .100percentcash.com
  (etc.)

... and obviously a matching http_access deny rule, would perform the same task with much lower CPU utilization.

> I tried installing SquidGuard, but have had problems installing it so far; maybe I could try squirm or jesred, or just recompile with --enable-gnuregex. Or any other recommendation?

Chris
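Since every entry in the list reportedly has the shape \.domain.tld$, the whole file can be converted mechanically. A sketch (shown on stdin for illustration; in practice you would run the same sed over the never-allow-url file into a new never-allow-domain file):

```shell
# Strip backslashes and the trailing $ anchor to turn regex entries
# into dstdomain entries:
printf '%s\n' '\.100percentcash.com$' '\.example-spam.net$' \
  | sed -e 's/\\//g' -e 's/\$$//'
# prints:
# .100percentcash.com
# .example-spam.net
```

Entries that are genuine regexes (alternation, wildcards in the host part) would need to stay in a regex acl, so it is worth grepping for leftover metacharacters after converting.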
Re: [squid-users] URL Redirecting...
Charl Loubser wrote:

> What I am trying to accomplish is to have squid redirect anyone that browses to 192.168.0.1 to 192.168.1.2.

  acl oldSite dst 192.168.0.1
  deny_info http://192.168.1.2/ oldSite
  http_access deny oldSite

Any access to 192.168.0.1 will be directed to the main page of 192.168.1.2 via a 302 (Moved Temporarily). If you have the same URL structure on both servers, you can use a %s on the deny_info line to pass the requests on to the matching URL on the new server. More details are available in the default squid.conf.

Chris
Re: [squid-users] Large ACL problem
On Wed 2007-03-28 at 15:48 -0400, Chris Rosset wrote:

> I remember your last email pointed me towards looking at the ACLs; it sounds like the regex ones are most intensive since they pattern match on the whole URL?

It's not so much the fact that it needs to match on the whole URL as the fact that it needs to evaluate each and every pattern you have (all 8K of them) on the whole URL.

> and the entries in this acl are all like \.100percentcash.com$

That kind of pattern really, really should be placed into a dstdomain acl as

  .100percentcash.com

> I tried installing SquidGuard, but have had problems installing it so far,

Begin by sorting your data into what must be regex patterns and what fits better into the structured acls (i.e. dstdomain).

Regards
Henrik
Re: [squid-users] Re: Antivir scan big file problem with ICAP
On Wed 2007-03-28 at 17:58 +0300, Henrik Krohns wrote:

> PS. There is really not much advantage in using ICAP versus a parent proxy in normal use. But I'm looking into adding ICAP support to HAVP in the future, since it seems to be a magic word. :)

The big thing with ICAP is that it doesn't change the network relations of the proxy. It can still participate with other proxies etc. even when the ICAP based scanner is plugged in.

Regards
Henrik
[squid-users] Problem with Squid on VPS
Hi Friends,

I run a Virtuozzo server and I have several VPSs on it. One of them uses Squid to surf the internet. The problem is that I see very high loads in my Virtuozzo statistics, maybe 25% of total CPU (of a dual Xeon 3.2 GHz HT). When I log into the VPS and use the top command, I see that squid is using 99% of the CPU and that the load average is 1.2! I have no idea why this happens; has anyone had the same experience?

--
Warm Regards,
Amir Mirghassemi
[squid-users] Squid not caching
We are setting up a squid/zope setup for testing, but we can't get squid to cache pages. Everything returns with a TCP_MISS. Here are some excerpts from the logs:

==> /var/log/squid/access.log <==
1175142751.362     13 64.22.224.54 TCP_MISS/200 3367 GET http://new.josmc.org/ - DEFAULT_PARENT/10.0.1.1 text/html

==> /var/log/squid/store.log <==
1175142751.362 RELEASE -1 856850571D92AC3E0C3165EAD4CA77F2 200 1175142751 -1 -1 text/html 3052/3052 GET http://new.josmc.org/

We've gone through the config and web sites, and we're pretty sure the config is good. We are running squid 2.6. Is there anything header-wise that would cause pages not to be cached, aside from no-cache being specified somewhere (which it's not)?

--
Edward Muller
Interlix, LLC Owner
Zope, Plone & Zimbra Hosting
phone: +1.417.862.0573
fax: +1.770.818.5437
RE: [squid-users] Re: Antivir scan big file problem with ICAP
Thanks. If you create ICAP support in HAVP you kill two rabbits with one stone ;) ICAP support is needed for big networks, to speed up the scanning process. Example: 5 servers scanning for viruses over ICAP and 5 squids caching objects over WCCP :) This cluster cache engine would be more powerful than a CISCO Cache Engine.

ftp://ftp.rfc-editor.org/in-notes/rfc3507.txt

-----Original Message-----
From: Henrik Krohns [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 29, 2007 9:35 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Antivir scan big file problem with ICAP

On Thu, Mar 29, 2007 at 12:44:59AM +0200, Henrik Nordstrom wrote:

> On Wed 2007-03-28 at 17:58 +0300, Henrik Krohns wrote:
> > PS. There is really not much advantage in using ICAP versus a parent proxy in normal use. But I'm looking into adding ICAP support to HAVP in the future, since it seems to be a magic word. :)
>
> The big thing with ICAP is that it doesn't change the network relations of the proxy. It can still participate with other proxies etc. even when the ICAP based scanner is plugged in.

You can easily create a sandwich which loops back to the same Squid instance; I recommend that with HAVP. It doesn't create much additional load, since with ICAP you usually transfer the same network traffic back and forth anyway. But I agree that with ICAP you can get a slightly more clear setup.

Cheers,
Henrik
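The simple (non-ICAP) form of the setup discussed here is chaining Squid through HAVP as a parent proxy. A sketch, assuming HAVP listens on localhost:8083 as in the havp.conf posted earlier in the thread, and that Squid's own port is 3128 (an assumption):

```
# squid.conf: forward all requests through the HAVP scanner
cache_peer 127.0.0.1 parent 8083 0 no-query no-digest default
never_direct allow all

# havp.conf: where HAVP should send scanned traffic onward.
# Omit these to let HAVP go direct; pointing them back at the same
# Squid (the "sandwich") needs care to avoid a forwarding loop.
PARENTPROXY 127.0.0.1
PARENTPORT 3128
```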