[squid-users] compile error 9-20050502|03
Does anyone know about this?

Making all in icons
Making all in errors
Making all in doc
sed "
[EMAIL PROTECTED]@%/usr/local/squid/etc/squid.conf%g;
[EMAIL PROTECTED]@%/usr/local/squid/etc/cachemgr.conf%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/errors/Portuguese%g;
[EMAIL PROTECTED]@%/usr/local/squid/etc/mime.conf%g;
" < > squid.8
Syntax error: redirection unexpected
*** Error code 2
Stop in /usr/local/squid/squid-2.5.STABLE9-20050503/doc.
*** Error code 1

on FreeBSD 4.11 with perl 5.8

compile options:
--enable-default-err-language=Portuguese --enable-storeio=diskd,null
--enable-removal-policies=heap --enable-underscores --disable-ident-lookups

Hans

[This message was scanned by the e-mail system and can be considered safe. Service provided by the Datacenter Matik https://datacenter.matik.com.br]
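For what it's worth, the "Syntax error: redirection unexpected" comes from /bin/sh, not from sed: the input-file name after `<` is missing (most likely a Makefile variable expanded to nothing), so the shell sees `< >` and rejects the command before sed ever runs. A minimal reproduction of the shell behaviour (the sed expression is only a placeholder):

```shell
# An input redirection with no file name is a /bin/sh syntax error,
# so the whole command fails before sed even starts.
sh -c 'sed "s/x/y/" < > squid.8' 2>&1 || echo "shell rejected the command"
```
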
Re: [squid-users] Zero sized reply and other recent access problems
On Saturday 05 March 2005 23:41, Reuben Farrelly wrote:
> I think you've misunderstood something quite fundamental about how squid
> works:

Maybe I did not use the exact expressions you like to see, but as you wrote, you did get it. Anyway, my intention, as I said in my mail, was not to attack anybody.

> * Strict HTTP header parsing - implemented in the most recent STABLE
> releases of squid, you can turn this off via a squid.conf directive
> anyway (but it is useful to have it set to log bad pages).

What do you mean? relaxed_header_parser? I think this is on by default, not off; turning it off makes the parsing strict, or am I wrong here?

> * ECN on with Linux can cause 'zero sized reply' responses, although
> usually you'll get a timeout. I have ECN on on my system and very few
> sites fail because of this, but there are a small number. Read the
> squid FAQ for information about how to turn this off if it is a problem.

FYI, it does not happen only on Linux. Again, the problem and a possible solution are not the point here. The point is that for the end user the site opens using "the other ISP", so for him it is an ISP problem; he does not care whether it is squid, the remote site, network congestion or anything else.

Anyway, IMO the error message is obscure for the user. It starts by saying "The URL:" followed by a blank. The user obviously complains that he typed the URL correctly, yet in the error message it is blank, so this causes misunderstandings between the support staff and the user.

Then it does not help to send him off to read FAQs, because what I am speaking about is the user, not the administrator. The user does not need to learn squid, but what he gets should be understandable enough, and most importantly he should get the same result he gets without squid. I mean that a site should be accessible behind squid when it opens normally with a browser without squid. It is not interesting here whether there is a wrong header or whatever.
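For reference, the directive in question is set in squid.conf; a sketch, assuming the syntax documented for Squid releases of this era (double-check squid.conf.default for your version):

```
# "on" (the default) tolerates certain common header violations;
# "off" enforces strict parsing.  Check your release's
# squid.conf.default for the exact accepted values.
relaxed_header_parser on
```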
> * NTLM authentication, some uninformed site admins require or request

NO, I was not speaking about any authentication at all.

> Can you give some examples of specific sites which you need to bypass
> squid for that you cannot get to display using the items I mentioned above?

First, some banking and other secure sites which need the GRE protocol, for example, but I was not speaking about those. Lots of Blogger sites are giving errors. Sure, there are a lot of underscore and whitespace problems, but the latter often are not resolvable by squid settings. On the other hand they open normally with MSIE. At work I can check for more; one specific example follows. Other errors are like this one, even if this specific site is now working after we contacted them. The site gave problems with squid > 2.5-S4, if I am not wrong here.

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint, application/x-shockwave-flash, */*
Accept-Language: pt-br
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)
Host: www.redecard.com.br
Connection: Keep-Alive

Hans

> Reuben

--
___
Infomatik (18)8112.7007 http://info.matik.com.br
Mensagens não assinadas com GPG não são minhas.
Messages without GPG signature are not from me.
___

pgpKoxMMuk7RR.pgp
Description: PGP signature
[squid-users] Zero sized reply and other recent access problems
Recently all of us have been having problems with squid not serving certain pages/objects anymore. We know that squid most probably detects correct or incorrect html code and reports it via its error messages. But I am not so sure this should be a squid task. Squid, IMO, should cache and serve what it gets from the server. The code check should be done by the browser - incorrect code is a browser problem or a web server problem, so it should be reported by the browser, not by anything in the middle. Even if the page code is buggy, the page could contain objects to be cached, and that is what squid should do.

I say so because whoever uses squid is an ISP or a system admin of some kind of network. So it should not become this man's problem that somebody is coding his server's html pages incorrectly. He, with his squid, only serves his customers or the people on his network. IMO this strict html code checking is complicating network support for end customers, which already was, or is, not so easy sometimes.

We use transparent squid here on lots of sites, and as soon as someone complains about this kind of problem we rewrite our forwarding rules so that the traffic does not go through squid anymore. Even if we know that the remote site owner has no interest in somebody being unable to access his site, we do not have the time to talk to him. Indeed it is not our problem, and we are not an html coding school teaching how to correct errors. So here we simply give up and bypass squid for such sites.

IMO it might be better for squid not to check the code. Customers say: "Without your cache I can access the site, with your cache I cannot. I do not want to know why, and if you do not resolve this problem for me I will not use your service anymore, but another where it works." So even if "I" lose my customer first, second they do not use squid anymore. I believe this could be considered and thought about.
I would like to add that we have been using squid here since 97/98, and what I wrote is not in any way meant as offending criticism of the developers, but as a point to think about. So, what do you think about this?

Hans
Re: [squid-users] :Direct connection without DNS lookup
On Tuesday 22 February 2005 21:08, [EMAIL PROTECTED] wrote:
> Hi henric
>
> I already have cache_peer running for this box, default
> set to a master squid box.
> Yes, I attempted an /etc/hosts entry for this specific intranet site
> and also checked /etc/nsswitch.conf.
> Still squid is looking for DNS when resolving.
> Is there any specific entry to ask squid to use /etc/hosts
> when resolving?
>
> Chanaka

Hi, is this a NAT problem, or what is your concern here?

Hans

> > On Tue, 22 Feb 2005 [EMAIL PROTECTED] wrote:
> > > Is there a method where you can tell squid to directly connect to a
> > > specific web site by providing its IP address without DNS lookups?
> >
> > cache_peer
> >
> > or /etc/hosts
> >
> > or requesting the site by IP.
> >
> > Regards
> > Henrik
Fwd: Re: [squid-users] Two squid instances based on file types? Is it good?
On Tuesday 22 February 2005 09:13, you wrote:
> > this should work, you add other extensions as you need
> >
> > acl bf urlpath_regex \.mpg
> > acl bf urlpath_regex \.avi
> > acl bf urlpath_regex \.wmv
> >
> > never_direct allow bf
> > always_direct deny bf
>
> As I wrote to Henrik, I should use never_direct to be sure that the
> front end squid asks the back end squid, right?!

You may even add

always_direct allow !bf

in order not to query it for other file types. If you do not use always|never_direct the thing will not work correctly.

I am not sure if talking about back|front-end caches is good here; let's stay with "large_object" and "small_object" cache. We suppose here that the small_object cache is the one exposed to and used by your users. If you do not use always|never_direct in the small_object cache you probably never get a hit: the small_object cache will pull the large objects without ever storing them, since you limit it with max_object_size. That means you need to force it to call the large_object cache, which then stores the object, and you get a hit on the next call.

> Suppose the backend squid has its own cache_peer parent and suppose I
> want that, whenever the frontend squid asks the backend squid for a
> multimedia file, if the backend squid doesn't have it, it asks its
> own parent; when it gets the file back it sends it back to the
> frontend squid... do you think this could be possible with such a
> configuration?!?!

This is pretty confusing.

Hans

> > > But, what about staleness? Can I set up the refresh time in
> > > squid... with which directive?!?!
> >
> > you can use refresh_pattern
>
> Do you know if I can match on the cache_dir using refresh_pattern!?!?
>
> Thank you so much Hans! ;)
>
> Marco
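Pulling the pieces together, a sketch of the small_object (user-facing) cache's squid.conf for this scheme - the peer address is hypothetical, and cache_peer_access is the directive Marco mentioned for steering only these types to the peer:

```
# Route the listed types through the large-object peer only
acl bf urlpath_regex \.mpg
acl bf urlpath_regex \.avi
acl bf urlpath_regex \.wmv

never_direct allow bf          # bf objects must come via a peer
always_direct allow !bf        # everything else goes direct

cache_peer 192.168.0.10 8080 3130 parent proxy-only   # large-object cache (hypothetical IP)
cache_peer_access 192.168.0.10 allow bf
cache_peer_access 192.168.0.10 deny all
```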
Re: [squid-users] Two squid instances based on file types? Is it good?
On Monday 21 February 2005 07:24, Marco Crucianelli wrote:
> On Fri, 2005-02-18 at 16:10 -0200, H Matik wrote:
> > On Friday 18 February 2005 08:34, Marco Crucianelli wrote:
> > > On Thu, 2005-02-17 at 12:52 -0600, Kevin wrote:
> > > > What mechanism are you using to set expire times?
>
> Do you mean using max_object_size=512K for the small_object squid?

yes

> I was thinking about using an ACL urlpath_regex to direct avi, mp3, iso etc. from
> the small_object (front-end) squid to the big_object (back-end) squid
> together with the directive cache_peer_access... do you think I can do it
> this way?

this should work, you add other extensions as you need:

acl bf urlpath_regex \.mpg
acl bf urlpath_regex \.avi
acl bf urlpath_regex \.wmv

never_direct allow bf
always_direct deny bf

> But, what about staleness? Can I set up the refresh time in squid... with
> which directive?!?!

you can use refresh_pattern

Hans

> Once again, many thanks
Re: [squid-users] Two squid instances based on file types? Is it good?
On Friday 18 February 2005 08:34, Marco Crucianelli wrote:
> On Thu, 2005-02-17 at 12:52 -0600, Kevin wrote:
> > What mechanism are you using to set expire times?
>
> Well, I'm still not sure what I shall use! I mean: should I use
> refresh_pattern!? Or what? I mean, refresh_pattern can let me change the
> refresh period based on site url, right? What else could I use?!

When I suggested the choice of two caches, one for small objects and one for large objects, the focus was not on refresh patterns.

The goal here is that, as the very first priority, you can use the OS and especially the disk system tuned for serving either small or large files. This certainly would not be possible running two squids on one machine.

The second point is that you can use max|min_object_size in order to limit the file size each server will store. My experience showed best results breaking at 512K on modern PCs.

The third step is to use cache_replacement_policy LFUDA/GDSF accordingly, and if using diskd you may play with Q1 and Q2, which will make the difference. And for this to make sense you serve from the large_obj_cache with proxy-only set. To achieve this correctly you may, or rather should, set additional always|never_direct rules for known mpg, avi, wmv, mp3, iso and other types, so that the small_obj_cache really pulls them from the large_obj_cache.

Hans
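As a sketch, the points above might translate into something like this for the large-object cache (sizes follow the 512K break mentioned; path, size and Q values are placeholders to tune per the FAQ):

```
# Large-object cache: keep only big files, evict with LFUDA
minimum_object_size 512 KB
maximum_object_size 512000 KB
cache_replacement_policy heap LFUDA
cache_dir diskd /cache 100000 16 256 Q1=64 Q2=72

# The small-object cache would instead cap object size and use GDSF:
# maximum_object_size 512 KB
# cache_replacement_policy heap GDSF
```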
Re: [squid-users] Customise Squid's pages
On Tuesday 15 February 2005 07:24, Henrik Nordstrom wrote:
> On Tue, 15 Feb 2005, H Matik wrote:
> > BTW, they are overwritten by upgrading squid; maybe you could some time
> > check if it is possible to handle them as with squid.conf, that would be nice
>
> Just copy them to your own directory and tell this to squid via
> squid.conf.

I know, I just mentioned that it could be easier if they were not overwritten.

> > as an additional wish you may consider that one could create a private error
> > msg for the ACL he uses, that would be great.
>
> See squid.conf and/or the FAQ.
>
> Regards
> Henrik
Re: [squid-users] Customise Squid's pages
On Tuesday 15 February 2005 06:29, Henrik Nordstrom wrote:
> On Tue, 15 Feb 2005, Roger wrote:
> > I want to ask you if it is possible to customize the error pages of
> > squid,
>
> Yes, see the FAQ for details, or peek into the errors directory.

BTW, they are overwritten by upgrading squid; maybe you could some time check if it is possible to handle them as with squid.conf, that would be nice.

As an additional wish you may consider that one could create a private error msg for the ACL he uses; that would be great.

Hans

> Regards
> Henrik
Re: [squid-users] Squid, storage size and other questions...
On Wednesday 09 February 2005 13:49, you wrote:
> On Wed, 2005-02-09 at 12:41 -0200, H Matik wrote:
> > On Wednesday 09 February 2005 11:22, Elsen Marc wrote:
>
> What do you mean by "using two front ends"? Maybe I didn't understand,
> but how can I use two frontends? I can make the client (browser) use only one
> proxy at a time! Moreover, what do you mean by "using squid.conf
> technics to get the best out of it"? Maybe using ICP as I was thinking
> (but in that case I have a problem of wasted storage space when one
> squid gets a file from the other via ICP and caches the file for itself
> too!)!?

You can use one cache server for your clients, but this one can have one, two or more parents. That means the backend server, which is the visible cache for your clients, queries the frontend caches.

So you could set up one with

minimum_object_size 2048 KB
maximum_object_size 512000 KB

which means this one will not store on disk any object smaller than 2MB or larger than 512MB. The other you configure with

maximum_object_size 2048 KB

Then, with the proper cache_peer settings, the backend server gets ICP replies about what each peer has on disk, and so you can have fast-rotating content on the small-object cache and slow-rotating large objects on the other. So you may set something like this:

cache_peer large_object_ip 8080 3130 parent proxy-only   (proxy-only: for not storing the files again)
cache_peer small_object_ip 8080 3130 parent

Depending on your client numbers and/or network size you may use a backend server that caches nothing at all, setting both peers to proxy-only and a null cache_dir, if the three talk fast enough between them.

You may also consider tweaking the quick_abort values on the large-object cache, and sure, you need to set refresh_patterns to your convenience. You should set always_direct and never_direct in order to make this combination work well.

With such setups, which are easy and cheap, we get considerable impact in terms of bandwidth reduction; sometimes you need to watch the trio for some days to get the best out of it. Like someone said before, the small-object cache does not need to be so big, and being too big may even be bad, but you may consider some GB as a base number, or whatever your network pushes in 10-20 days; you need to figure it out on your hardware for your network.

Hans

> Many thanks for your answer!
>
> Marco
>
> PS: please, do pardon my bad english! :P
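The cache-nothing variant of the backend described above might look like this as a sketch (IPs and ports are placeholders; cache_dir null requires a squid built with null in --enable-storeio):

```
# Backend visible to clients: stores nothing itself, relays to the
# two stores via ICP; proxy-only prevents duplicating their objects
cache_peer 192.168.0.10 8080 3130 parent proxy-only   # large-object store
cache_peer 192.168.0.11 8080 3130 parent proxy-only   # small-object store
never_direct allow all
cache_dir null /tmp
```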
Re: [squid-users] Squid, storage size and other questions...
On Wednesday 09 February 2005 11:22, Elsen Marc wrote:
> > I was wondering how good it can be to have a huge storage size
> > for caching.
>
> Not good, probably even bad.
> Let alone the RAM requirements (see FAQ).
> I guess "huge" is a bad number to discuss.
>
> Staying with the first part of your question:
> squid efficiency will not increase with a randomly chosen big cache size.
>
> The 'average advice' in terms of maximising efficiency is to choose
> a cache size corresponding to one week of traffic generated by your user
> community.

IMO this is not sooo easy, and cache size does matter. When speaking about dynamic content and small objects, well, I agree. But you can cache large objects from ftp servers, as well as avis and mpegs and isos, which can drastically improve your cache performance and still drastically reduce your link usage; such objects probably do not change in months, so why refresh them?

When using only one cache server this is probably hard to do, but you can use two frontends: you limit one of them to store small objects as found on normal pages, the other caches only large objects, and then you can use squid.conf technics to get the best out of it.

Hans

> M.
Re: [squid-users] log_fqdn only for external addresses
On Tuesday 08 February 2005 21:28, Henrik Nordstrom wrote:
> If you on the other hand build Squid with --disable-internal-dns then it
> will use the OS resolver functions with all their nsswitch/hosts.conf
> magic, but at a significant performance penalty due to the API limitations
> of the OS resolver functions.

If configuring with --disable-internal-dns, the external dns lookup program will interact with the OS - is that right, and do I need to define dns_children? This was the standard in older squid versions, or am I wrong?

So, if I understood you, the internal DNS resolver is better because it gives better performance? Then is it even better to define "hosts_file none", in order not to lose time querying the file before DNS?

Hans

> Regards
> Henrik
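For reference, the directives discussed above, as I understand them for Squid 2.5 (a sketch - worth double-checking against your release's squid.conf documentation, including whether "none" is an accepted hosts_file value):

```
# With the default internal DNS client, dns_children is not used;
# it only applies to builds made with --disable-internal-dns.
hosts_file none            # skip the hosts file before DNS lookups
# dns_children 5           # only meaningful with the external dnsserver helper
```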
Re: [squid-users] log_fqdn only for external addresses
On Tuesday 08 February 2005 18:14, H Matik wrote:
> > Does anything else need to be done to tell squid to
> > read the hosts file?
>
> this should be done at OS level, check your /etc/host.conf for priority

It seems I answered this wrong here, sorry.

Hans

> but you even could tweak your DNS server responses locally with private
> names for reverse lookup using view and address_match_list in your
> named.conf
>
> Hans
>
> > --- [EMAIL PROTECTED] wrote:
> > > You can put local machines in the hosts file. Try it!
> > >
> > > > I have log_fqdn set to on but due to the location of
> > > > squid, it cannot resolve client ip addresses
> > > > connecting to it. This generates a lot of unnecessary
> > > > traffic on the dns servers. Is there a solution to
> > > > resolve only the external ip addresses but not have
> > > > squid attempt to resolve the clients?
> > > >
> > > > Or another possibility, is it possible to put static
> > > > entries for the few clients connecting to squid?
> > > >
> > > > Shawn
Re: [squid-users] log_fqdn only for external addresses
On Tuesday 08 February 2005 18:00, Henrik Nordstrom wrote:
> On Tue, 8 Feb 2005, Alexander Varga wrote:
> > nsswitch.conf ???
>
> Not used by Squid.
>
> Squid loads /etc/hosts (or the hosts file specified in squid.conf) on
> startup, and uses DNS for the rest.

I never needed this, and only now has it come to my attention. If hosts_file is not in squid.conf, it still tries /etc/hosts, right? But anyway, squid does not respect OS settings as in /etc/host.conf?

Hans

> Regards
> Henrik
Re: [squid-users] cluster solution
On Saturday 05 February 2005 22:25, you wrote:
> LVS is useful in load balancing both servers and proxies, including
> transparently intercepting proxies if you like. It can even run on the
> same nodes as the servers, eliminating the need for extra hardware.

Hmm, for server balancing OK, but do you think LVS is better than parent weight and some other squid configs for walking through several frontend caches?

Hans

> Regards
> Henrik
Re: [squid-users] log_fqdn only for external addresses
On Tuesday 08 February 2005 16:00, shawn reed wrote:
> I had already tried that, but it didn't work. It looks
> like squid just queries the dns server.

Sure it does, but it should be your OS which first queries the hosts file and then the dns server.

> Does anything else need to be done to tell squid to
> read the hosts file?

This should be done at OS level; check your /etc/host.conf for the priority.

But you could even tweak your DNS server responses locally with private names for reverse lookup, using view and address_match_list in your named.conf.

Hans

> --- [EMAIL PROTECTED] wrote:
> > You can put local machines in the hosts file. Try it!
> >
> > > I have log_fqdn set to on but due to the location of
> > > squid, it cannot resolve client ip addresses
> > > connecting to it. This generates a lot of unnecessary
> > > traffic on the dns servers. Is there a solution to
> > > resolve only the external ip addresses but not have
> > > squid attempt to resolve the clients?
> > >
> > > Or another possibility, is it possible to put static
> > > entries for the few clients connecting to squid?
> > >
> > > Shawn
Re: [squid-users] cluster solution
On Saturday 05 February 2005 15:24, Askar wrote:
> hi list
> what is the best clustering solution for squid cache servers ?
>
> LVS ?
>
> LVS tunneling or routing.

Do you serve users or serve content with your cache? What OS do you want to use? And maybe you have some more details: links, bandwidth, size, disks, number of servers? And what is your priority - performance, link problems, server problems? What do you want to get out of this?

LVS (I may be wrong) is probably only a load balancer, not the cluster itself, and is probably designed for serving content, not users (access users).

Load balancing you can probably achieve easier and cheaper (depending on your project size) using only squid on several servers for different content types, but maybe you answer my first question first.

Hans

> we are thinking about this http://dragon.linux-vs.org/~dragonfly/
> solution based on LVS
>
> however i will be kinda glad to get some advice from the gurus over here :)
>
> regards
Re: [squid-users] protecting my network from known viruses.
On Tuesday 01 February 2005 22:38, Daniel Navarro wrote:
> May I create rules in squid to protect my network from
> known viruses, blocking for example .jpg.vbs files or
> .htm.exe files?
>
> how will the lines be?

Maybe something like this can help you out:

acl danger urlpath_regex \.exe
acl danger urlpath_regex \.scr$
acl danger urlpath_regex \.pif$
acl danger urlpath_regex fotosde2004
acl danger urlpath_regex cartao\.exe
acl danger urlpath_regex serasa\.exe
acl danger urlpath_regex Serasa\.exe
acl danger urlpath_regex \.bat$
acl danger urlpath_regex \.cmd$
acl danger urlpath_regex \.com$
acl danger urlpath_regex \.chm$
acl danger urlpath_regex \.wsf$
acl danger urlpath_regex \.vbe$
acl danger urlpath_regex \.vbs$
acl danger urlpath_regex \.shs$
acl danger urlpath_regex \.cpl$
acl danger urlpath_regex \.reg$
acl danger urlpath_regex fotos\.zip
acl danger urlpath_regex \.ida$
acl danger urlpath_regex readme\.eml

> how can it affect squid performance?

It should not have any reasonable impact on performance.

H

> Regards, Daniel Navarro
> Maracay, Venezuela
> www.csaragua.com/ecodiver
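One detail worth adding: the acl lines above only define the class - they take effect when referenced from an http_access rule. A sketch:

```
# Deny anything matching the "danger" patterns; order matters in
# http_access, so keep this above the rules that allow your clients.
http_access deny danger
```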
Re: [squid-users] Blocking download video.
On Tuesday 01 February 2005 14:25, Renato Policani wrote:
> Hi everybody
> I am blocking video in the configuration file named deny_music and in
> squidGuard in blacklist/audio-video. But some users have discovered a way
> to download this extension using "?" before the extension. Example:
>
> http://www.xyz.com/video.wmv -> Squid blocks !! OK !!
>
> http://www.xyz.com/video.wmv? -> Squid doesn't block.. Why ???

Before? Are you sure? Or do you mean the other before? ;)

You are not sending your acls, and you also do not say if you try it in the domains, urls or expressions lists of squidGuard; maybe try it in expressions. Or try a squid acl such as

urlpath_regex video\.wmv

which should catch "video.wmv" in any place of the URL except the host part. That means

urlpath_regex \.wmv

should catch any ".wmv" anywhere in the url, but not if it is part of the host (example: www.wmv.com goes through). If you want to catch wmv at the end of the URL string you can try

urlpath_regex \.wmv$

H

Please do not read further, and if you do, do not hit or flame me on this list, I just couldn't hold myself back ...

> Attention: This message was sent for exclusive use of the addressees above
> identified, being able to contain information and or
> privileged/confidential documents and law protects its secrecies.

Sorry for ignoring your advice, reading and replying without being exclusively addressed here :))

> In case that you it has received for deceit, please, it informs the shipper
> and erases it of your system.

Maybe you should claim a better grammar from Babelfish ... Tell me, how would people know if they got your message by deceit? I would say you may have sent it by deceit, but how would I know?

> We notify that law forbids its retention, dissemination, distribution, copy
> or use without express authorization.

Must be jungle law, since you sent this to a public list ... even your Portuguese text does not make very much sense ...
Our (Brazilian) constitution gives us the right to say anything as long as we identify ourselves, so you may advise that you do not agree or authorize, but you're not the law, so anybody who got this can send it wherever he wants and there is no law forbidding it ... maybe it turns into spam then, but that is another issue.

> Personal opinions of the shipper do not reflect, necessarily, the point of
> view of the CETIP, which is only divulged by authorized people.

Pois é ... if only it was the "sender's" personal note, but it seems to be the company e-mail footer, and so IMO you guys should produce some better stuff (especially after seeing what big cetip.com.br pretends to be).
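Back on the regex question above, the matching behaviour can be checked outside squid. A quick sketch in Python, whose re.search approximates how urlpath_regex scans the URL path (squid actually uses POSIX regular expressions, so treat this as an illustration only):

```python
import re

path = "/video.wmv?"  # the trick URL's path, with the trailing "?"

# anchored pattern: no match, because "?" follows ".wmv"
print(bool(re.search(r"\.wmv$", path)))

# unanchored pattern from the reply: matches ".wmv" anywhere
print(bool(re.search(r"\.wmv", path)))

# a pattern allowing either end-of-string or a following "?"
print(bool(re.search(r"\.wmv(\?|$)", path)))
```

So an unanchored `\.wmv` (or an alternation like `\.wmv(\?|$)`) catches the "?" trick, at the cost of also matching ".wmv" mid-path.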
Re: [squid-users] MULTI UPLINK CACHE
On Tuesday 25 January 2005 10:43, RAHUL T. KARTHA wrote:
> HAS ANY ONE HERE TRIED TO DO A MULTI UPLINK CACHE I.E

If you are interested in this, we have several ISPs running such setups, but look first: what and how many internal networks you run does not matter at this stage.

If you are running BGP, you do it at router level and you can use one front-end cache.

If you are running some load balancing with Cisco CEF or similar, you also do it at router level and can use one front-end cache.

If you are otherwise "pseudo-multi-homed", as lots of people are, with one IP link and one or several ADSL lines where you need NAT, you had better put one cache on each link and a child as the main cache for your network. You can then use parent weight, or some policy routing at OS level, to get what you want, and with some good ideas you get a certain balance and even redundancy, since squid does not query a dead parent if you configure it right.

We tried linux iproute2 and policy routing on BSD for single caches, and the performance of the former is really better. BTW, both gave bad results when one link died and even stopped serving correctly, but sure, it depends on what you want and how much you can spend on this (work and watch time ;) and money).

Hans
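A sketch of the parent-weight idea mentioned above, on the child (main) cache, with hypothetical parent addresses - weight biases parent selection, and a parent that stops answering is skipped:

```
# One parent cache per uplink
cache_peer 10.0.1.1 3128 3130 parent weight=2   # cache on the faster link
cache_peer 10.0.2.1 3128 3130 parent weight=1   # cache on the ADSL link
never_direct allow all    # the child itself has no direct route out
```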
[squid-users] assertion failed: cbdata.cc:402: "c->locks > 0"
I get this error:

assertion failed: cbdata.cc:402: "c->locks > 0"

on FreeBSD 5.2.1R and 5.3R with squid-3.0-PRE3-20040116.

Squid compiles fine with any compile options and starts as well, but as soon as the first request comes in it exits with this error. It does not matter which store_io I use for the cache_dir; it also doesn't matter whether I compile with any of storeio, disk-io, kqueue or not.

Both servers run absolutely fine with squid 2.5 and diskd.

Hans