Re: [squid-users] how squid act on caching same file on dif url
On Mon, Jun 18, 2007, [EMAIL PROTECTED] wrote:
> Should be possible (but tricky, and maybe slow) to use a hash/MD5 or
> similar on the file binary instead of the URI. Good backup programs do it
> so accel configs should be easy.
> Problem (maybe the blocker) would be getting it out of the webserver, or
> long-term linking and caching of such maps.

Someone brought it up on the IETF HTTP working group list a few weeks ago, but no one could come up with any particularly good reason for it. This'd be a good reason, but establishing actual URI forms for distributed content would be much better. The trouble is implementing MD5s of each object, and all the varying forms; especially on dynamically generated content which is "actually" static. All an MD5 hash of an object here is, when you think about it, is an informal URI anyway..

Adrian
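[Editor's note] The content-hash idea discussed in this thread can be sketched in a few lines. This is a toy illustration only, not anything Squid implements; the class and method names are hypothetical:

```python
import hashlib

class HashKeyedStore:
    """Toy cache that keys objects by the MD5 of their bytes,
    so identical files fetched from different URLs share one stored copy."""

    def __init__(self):
        self._objects = {}   # md5 hex digest -> body bytes
        self._by_url = {}    # url -> md5 hex digest

    def put(self, url, body):
        digest = hashlib.md5(body).hexdigest()
        self._objects[digest] = body      # stored once per unique content
        self._by_url[url] = digest
        return digest

    def get(self, url):
        digest = self._by_url.get(url)
        return None if digest is None else self._objects[digest]

    def unique_objects(self):
        return len(self._objects)

store = HashKeyedStore()
store.put("http://www.xxx.xxx.com/file.exe", b"same bytes")
store.put("http://www.yyy.yyy.com/file.exe", b"same bytes")
print(store.unique_objects())  # -> 1: both URLs map to one stored copy
```

The digest acts exactly as the "informal URI" Adrian describes: two names, one storage key.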
Re: [squid-users] FTP error with squid
Try: ftp://user:[EMAIL PROTECTED]/ or enable anonymous access on your ftp server!

On 6/18/07, Indunil Jayasooriya <[EMAIL PROTECTED]> wrote:
> Hi, when I try to browse an ftp site with squid, I get the below error. How
> can I solve this? This is the way I tried:
>
> ftp://192.168.102.2
>
> The requested URL could not be retrieved
>
> An FTP authentication failure occurred while trying to retrieve the URL:
> ftp://192.168.102.2
>
> Squid sent the following FTP command:
> PASS
> and then received this reply:
> User anonymous cannot log in.
>
> Your cache administrator is root.
>
> Generated Mon, 18 Jun 2007 04:57:21 GMT by box.domain.com (squid/2.5.STABLE6)
>
> --
> Thank you
> Indunil Jayasooriya

--
Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net
Re: [squid-users] FTP error with squid
Hi,

On Mon, 2007-06-18 at 10:25 +0530, Indunil Jayasooriya wrote:
> Hi,
>
> when I try to browse an ftp site with squid, I get the below error. How
> can I solve this?
>
> This is the way I tried:
>
> ftp://192.168.102.2
>
> The requested URL could not be retrieved
>
> An FTP authentication failure occurred while trying to retrieve the
> URL: ftp://192.168.102.2
>
> Squid sent the following FTP command:
> PASS
> and then received this reply:
> User anonymous cannot log in.
>
> Your cache administrator is root.
>
> Generated Mon, 18 Jun 2007 04:57:21 GMT by box.domain.com
> (squid/2.5.STABLE6)

If you don't have an account there, you can't browse, as anonymous access is denied. If you do have an account you can pass that information through to the server using the standard format:

ftp://[username[:password]@]host/

The "[]" pairs denote optional items.

Colin

--
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec +61 7 3227 6334
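[Editor's note] When building such a URL programmatically, the credentials should be percent-encoded so characters like "@" or ":" in a password don't break the URL. A small sketch (the helper name and credential values here are hypothetical):

```python
from urllib.parse import quote

def ftp_url(host, username=None, password=None):
    """Build an ftp:// URL in the ftp://[username[:password]@]host/ form,
    percent-encoding the userinfo parts."""
    if username is None:
        return f"ftp://{host}/"
    cred = quote(username, safe="")
    if password is not None:
        cred += ":" + quote(password, safe="")
    return f"ftp://{cred}@{host}/"

print(ftp_url("192.168.102.2"))                  # ftp://192.168.102.2/
print(ftp_url("192.168.102.2", "user", "p@ss"))  # ftp://user:p%40ss@192.168.102.2/
```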
Re: [squid-users] how squid act on caching same file on dif url
> On Sun, Jun 17, 2007, Andreas Pettersson wrote:
>> Alexandre Correa wrote:
>>> Hello squid-users !!!
>>>
>>> i have one question about caching the same file on different urls !! how
>>> does squid act in this situation:
>>>
>>> www.xxx.xxx.com/file.exe
>>> www.yyy.yyy.com/file.exe
>>>
>>> same file...
>>>
>>> does squid cache one file ?
>>
>> No, if they are cachable they will be cached as two separate files.
>
> But if you'd like a fun project, figure out how to patch Squid to consider
> those equivalent for storage and retrieval; youtube caching would be
> possible with something like that.
>
> Adrian

Should be possible (but tricky, and maybe slow) to use a hash/MD5 or similar on the file binary instead of the URI. Good backup programs do it so accel configs should be easy. Problem (maybe the blocker) would be getting it out of the webserver, or long-term linking and caching of such maps.

Amos
[squid-users] FTP error with squid
Hi,

when I try to browse an ftp site with squid, I get the below error. How can I solve this?

This is the way I tried:

ftp://192.168.102.2

The requested URL could not be retrieved

An FTP authentication failure occurred while trying to retrieve the URL: ftp://192.168.102.2

Squid sent the following FTP command:
PASS
and then received this reply:
User anonymous cannot log in.

Your cache administrator is root.

Generated Mon, 18 Jun 2007 04:57:21 GMT by box.domain.com (squid/2.5.STABLE6)

--
Thank you
Indunil Jayasooriya
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
On Sun, Jun 17, 2007, Michel Santos wrote:
> anyway, I wonder, why then the swap.state problem does not appear when
> using ufs. On FreeBSD you can tear off the power cable twice and thrice and
> squid with ufs cache_dir comes up fine after fsck corrected the errors -
> but - diskd goes wild

Have you tried aufs?

> it is clearly a diskd-isolated problem since the cache_dir and swap.state
> are the same for both, so I guess it has nothing to do with the swap.state
> file, it is only wrongly rebuilt when running diskd

Try aufs; it should run fine under FreeBSD-6.

Adrian
Re: [squid-users] how squid act on caching same file on dif url
On Sun, Jun 17, 2007, Andreas Pettersson wrote:
> Alexandre Correa wrote:
>> Hello squid-users !!!
>>
>> i have one question about caching the same file on different urls !! how
>> does squid act in this situation:
>>
>> www.xxx.xxx.com/file.exe
>> www.yyy.yyy.com/file.exe
>>
>> same file...
>>
>> does squid cache one file ?
>
> No, if they are cachable they will be cached as two separate files.

But if you'd like a fun project, figure out how to patch Squid to consider those equivalent for storage and retrieval; youtube caching would be possible with something like that.

Adrian
[squid-users] Can i Limit Upload traffic?
Hi again. Is there any type of "delay pools" for upload traffic?

Regards.
Emiliano Vazquez
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
Henrik Nordstrom wrote in the last message:
> It means it's the first time we have heard of this problem.
>
> And yes, I don't care much for diskd. Never have. My main focus is on
> aufs and ufs. But how swap.state is maintained should be the same in all
> three "ufs" based cache_dir types. And with current FreeBSDs also fully
> capable of using aufs...

that is too sad to hear ... capable is one thing but the fast thing is what diskd is, especially on smp machines

anyway, I wonder, why then the swap.state problem does not appear when using ufs. On FreeBSD you can tear off the power cable twice and thrice and squid with ufs cache_dir comes up fine after fsck corrected the errors - but - diskd goes wild

it is clearly a diskd-isolated problem since the cache_dir and swap.state are the same for both, so I guess it has nothing to do with the swap.state file, it is only wrongly rebuilt when running diskd

Michel

...
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
[squid-users] Squid Rate-Limiting TCP_HIT Downloads
Sir,

I have compiled Squid 2.6.13 on Fedora Core 5 with delay pools enabled. Squid 2.5.14 never rate-limited local TCP_HIT traffic at the delay pools rate; local TCP_HIT traffic worked at LAN speed. But Squid 2.6.13 is rate-limiting the TCP_HIT traffic at the delay pools rate as well. Is this expected?

My Squid config: /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.6.STABLE13
configure options: '--build=i686-redhat-linux-gnu' '--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr/local/squid' '--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru' '--enable-storeio=aufs,null,ufs' '--enable-ssl' '--with-openssl=/usr/kerberos' '--enable-delay-pools' '--enable-linux-netfilter' '--with-pthreads' '--enable-useragent-log' '--enable-referer-log' '--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost' '--enable-underscores' '--enable-cache-digests' '--enable-ident-lookups' '--with-large-files' '--enable-fd-config' '--enable-follow-x-forwarded-for' 'build_alias=i686-redhat-linux-gnu' 'host_alias=i686-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu'

cat /usr/local/squid/etc/squid.conf | grep -v ^$ | grep -v ^#

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
http_port 10.172.198.9:3128 transparent
visible_hostname MOLMTM
cache_mem 16 MB
acl all src 0.0.0.0/0.0.0.0
acl siti src 10.172.196.0/24 10.172.197.0/24 10.172.198.0/24 10.172.199.0/24
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager siti
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow siti
http_access deny all
icp_access allow all
cache_log none
cache_access_log /cache1/log/access.log
cache_store_log none
cache_dir aufs /cache1/squidpool 36000 32 256
cache_dir aufs /cache2/squidpool 36000 32 256
half_closed_clients off
maximum_object_size 40 MB
maximum_object_size_in_memory 16 KB
cache_swap_high 100%
cache_swap_low 80%
acl normalday time 07:00-22:00
acl nightdouble time 22:00-23:59
acl midnight time 00:00-07:00
delay_pools 2
delay_class 1 3
delay_parameters 1 192000/212000 -1/-1 9000/12000
delay_access 1 allow siti normalday
delay_access 1 deny all
delay_class 2 3
delay_parameters 2 162000/182000 -1/-1 29000/35000
delay_access 2 allow siti nightdouble
delay_access 2 allow siti midnight
delay_access 2 deny all

[EMAIL PROTECTED] ~]#

I expect Squid to serve TCP_HIT traffic at LAN speed...

Regards,
Rayudu.
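[Editor's note] For readers puzzling over the delay-pool numbers in the config above: a class 3 pool takes three restore/max byte pairs, in aggregate / network / individual order, with -1 meaning unlimited (field meanings per squid.conf.default). Annotating the first pool:

```conf
delay_class 1 3
# delay_parameters <pool> <aggregate> <network> <individual>
# each pair is restore_bytes_per_second/max_bucket_bytes
delay_parameters 1 192000/212000 -1/-1 9000/12000
# 192000/212000 = cap on the whole pool
# -1/-1         = per-/24-network cap (unlimited here)
# 9000/12000    = per-client-IP cap
```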
Re: [squid-users] COSS unusable on FreeBSD?
Adrian Chadd wrote:
> On Sun, Jun 17, 2007, Tek Bahadur Limbu wrote:
>> Output of: squid -v
>>
>> Squid Cache: Version 2.6.STABLE9
>> configure options: '--bindir=/usr/local/sbin' '--sysconfdir=/usr/local/etc/squid' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/usr/local/squid' '--enable-removal-policies=lru heap' '--enable-async-io' '--with-pthreads' '--enable-storeio=coss,ufs diskd null aufs' '--enable-delay-pools' '--enable-snmp' '--enable-carp' '--enable-cache-digests' '--enable-underscores' '--enable-useragent-log' '--enable-poll' '--enable-select' '--enable-kqueue' '--enable-time-hack' '--enable-arp-acl' '--with-large-files' '--enable-large-cache-files' '--prefix=/usr/local' '--enable-follow-x-forwarded-for' '--disable-http-violations' '--enable-forward-log' '--enable-kill-parent-hack'
>
> Why've you got enable-poll, enable-select, and enable-kqueue? :) Anyway.

Hi Adrian,

Thanks for your reply and suggestions as always. This was one of my first experimental FreeBSD proxy servers. I had enabled all 3 of the above compilation options thinking that I would have a choice of 3 instead of 1! :)

But my question is: does it degrade the performance of Squid? If I just use enable-kqueue, will it give a performance boost?

Thanking you...

>> By the way, I don't seem to have /etc/libthr.conf in my FreeBSD 6.0 box. Is this normal?
>
> You won't have one by default.
>
>> If I just let Squid run continuously, which I am, then COSS does its job
>> quite well and I have low median service times.
>>
>> If I have to stop and restart Squid on rare occasions, only then it takes
>> Squid at least 1 hour to rebuild my COSS storage which is about 20 GB in size.
>
> Squid rebuilds COSS async but it will do it by reading the entire COSS file in from start to finish. It's not efficient at all. Try running iostat or systat -vmstat 1 whilst it's rebuilding and see if you're saturating the disk IO.
>
> I'm sorry guys, I just don't have the time to work on fixing up COSS past its current state at this present time.
>
> Adrian
Re: [squid-users] how squid act on caching same file on dif url
Alexandre Correa wrote:
> Hello squid-users !!!
>
> i have one question about caching the same file on different urls !! how
> does squid act in this situation:
>
> www.xxx.xxx.com/file.exe
> www.yyy.yyy.com/file.exe
>
> same file...
>
> does squid cache one file ?

No, if they are cachable they will be cached as two separate files.

--
Andreas
[squid-users] how squid act on caching same file on dif url
Hello squid-users !!!

i have one question about caching the same file on different urls !! how does squid act in this situation:

www.xxx.xxx.com/file.exe
www.yyy.yyy.com/file.exe

same file...

does squid cache one file ?

thanks,

Regards !!
--
Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
On Sun, 2007-06-17 at 13:12 -0300, Michel Santos wrote:
> you say so but diff counts 111+/90- on store_dir_diskd.c and 50+/22- on
> store_io_diskd.c between 2.5.S14-20060721 and 2.6.S13 but "not much" is
> pretty relative

In the area of how swap.state is maintained.

>> Should not happen, and have not heard of it happening from anyone else.
>
> first part I agree but hum :)
> I am not sure if this means not true until someone else calls it or not
> true coming from me ... either way, strange answer, does it mean you don't
> care and I'm on my own with this?

It means it's the first time we have heard of this problem.

And yes, I don't care much for diskd. Never have. My main focus is on aufs and ufs. But how swap.state is maintained should be the same in all three "ufs" based cache_dir types. And with current FreeBSDs also fully capable of using aufs...

Regards
Henrik
Re: [squid-users] Fwd: Squid historical cache?
On Sun, 2007-06-17 at 02:05 -0500, Tim Alexander wrote:
> Here's what I'm wondering. I know that Squid caches pages so that
> browsing is sped up. I also know that it can be set to cache even
> dynamic pages and such. Is there a program out there that
> can view this cache? (see what is in the cache, like on a "last visited"
> basis?)

The purge utility can inspect the ufs/aufs/diskd cache_dir caches, but not in the manner you are looking for, I think.

HTTP does not operate in pages, just objects, where the HTML content is one of many objects making up a page. Also, Squid only caches information which makes sense to cache, not information which can not be given out as a cache hit on a second request.

From your description it sounds more like you want something that records the traffic.

Regards
Henrik
Re: [squid-users] Dynamic caching
Squid doesn't know what's dynamic or static content. It's decided by you. By default, Squid thinks 'cgi-bin' and '?' in the urlpath are dynamic, so it wouldn't cache them, as shown by these 2 lines:

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

Given the case you have .php or .jsp pages, but you didn't specify them in the above config directives, then Squid thinks those pages are also static. How long would they be cached? It's decided by the default refresh_pattern (that line of ".") if you didn't specify them distinctly.

2007/6/17, Monah Baki <[EMAIL PROTECTED]>:
> Hi all,
>
> Where can I get information about dynamic caching in squid and how to
> enable it, and after a certain period of time go see if the content has
> changed and cache the new content.
>
> Thank you
>
> BSD Networking, Microsoft Notworking
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
Henrik Nordstrom wrote in the last message:
> Not much if anything has changed in this area since 2.5. At least not
> after the 2GB changes in 2.5.STABLE10.

you say so but diff counts 111+/90- on store_dir_diskd.c and 50+/22- on store_io_diskd.c between 2.5.S14-20060721 and 2.6.S13 but "not much" is pretty relative

>> 2.6 now calculates the % wrong and crashes and never comes back because
>> the disk gets full
>
> Should not happen, and have not heard of it happening from anyone else.

first part I agree but hum :)
I am not sure if this means not true until someone else calls it or not true coming from me ... either way, strange answer, does it mean you don't care and I'm on my own with this?

Michel

...
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
Re: [squid-users] Dynamic caching
On Sun, 2007-06-17 at 09:30 -0400, Monah Baki wrote:
> Where can I get information about dynamic caching in squid and how to
> enable it

Squid by default caches any cachable content.

> and after a certain period of time go see if the content
> has changed and cache the new content.

Squid automatically goes out and checks freshness when seeing a request for a stale cached object. You can tune the details via the refresh_pattern directive.

To learn more about HTTP caching see the Cacheability check engine http://www.mnot.net/cacheability/ and the Caching Tutorial for Web Authors and Webmasters http://www.mnot.net/cache_docs/

Regards
Henrik
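[Editor's note] A sketch of the refresh_pattern tuning mentioned above. The values are illustrative only, not a recommendation; check squid.conf.default for the exact syntax of your version:

```conf
# refresh_pattern <regex> <min minutes> <percent> <max minutes>
# Objects matching the regex that carry no explicit expiry information
# are considered fresh for at least <min>, at most <max>, or <percent>
# of the object's age since last modification.
refresh_pattern -i \.jpg$   1440  50%  10080   # hypothetical: images fresh up to a week
refresh_pattern .           0     20%  4320    # the default "." catch-all line
```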
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
On Sun, 2007-06-17 at 10:28 -0300, Michel Santos wrote:
> reducing rc_shutdown time in order to kill the running processes does the
> same harm to squid's cache_dirs

Odd. Should not happen.

> of course but the fs should be recovered by the system's fsck which
> definitely happens in cases after a power or hardware failure, so I mean
> the cache_dirs and their content as well as swap.state are in perfect
> condition (no file corruption I mean) when squid starts

Not all OSes guarantee consistent file contents on a sudden power failure. Some may cause garbage from other files to show up at the end of the file, and this will make Squid very very unhappy. It should handle situations where swap.state is truncated just fine.

> so that was so on 2.5

Not much if anything has changed in this area since 2.5. At least not after the 2GB changes in 2.5.STABLE10.

> 2.6 now calculates the % wrong and crashes and never comes back because
> the disk gets full

Should not happen, and have not heard of it happening from anyone else.

Regards
Henrik
[squid-users] Dynamic caching
Hi all,

Where can I get information about dynamic caching in squid and how to enable it, and how to have squid, after a certain period of time, go see if the content has changed and cache the new content?

Thank you

BSD Networking, Microsoft Notworking
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
Henrik Nordstrom wrote in the last message:
> On Sun, 2007-06-17 at 07:22 -0300, Michel Santos wrote:
>
>> the problem is not unique to this particular situation, it is repeatable
>> easily by shutting down the machine without waiting for squid closing the
>> files and bang ...
>
> Shutting down hard by pulling the power, or by the shutdown command?

actually both

reducing rc_shutdown time in order to kill the running processes does the same harm to squid's cache_dirs

> Squid will be very unhappy if swap.state contains garbage, which might
> happen if you suddenly pull the power and your OS is using a filesystem
> which doesn't guarantee file integrity in such conditions..

of course but the fs should be recovered by the system's fsck which definitely happens in cases after a power or hardware failure, so I mean the cache_dirs and their content as well as swap.state are in perfect condition (no file corruption I mean) when squid starts

so let's say that because of a power shortage swap.state is not written perfectly, so in my opinion when squid rebuilds the cache_dir it should be rebuilt to the latest correctly written transaction. You must have a kind of check point in it or not? So let's say cache_dir state up to a minute before the power off or so. Then squid discards the overhead in cache_dir - but squid actually deletes the complete cache content within a day or so, that can not be ok

In my opinion this is a problem of the system time and how diskd handles it, because sometimes the above process initiates when going into summertime automatically in a bad moment

so that was so on 2.5

2.6 now calculates the % wrong and crashes and never comes back because the disk gets full

it certainly seems wrong to me that squid builds a 20GB swap.state on a 1GB cache_dir (under whatever condition)

Michel

...
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
Re: [squid-users] COSS unusable on FreeBSD?
On Sun, Jun 17, 2007, Tek Bahadur Limbu wrote:
> Output of: squid -v
>
> Squid Cache: Version 2.6.STABLE9
> configure options: '--bindir=/usr/local/sbin' '--sysconfdir=/usr/local/etc/squid' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/usr/local/squid' '--enable-removal-policies=lru heap' '--enable-async-io' '--with-pthreads' '--enable-storeio=coss,ufs diskd null aufs' '--enable-delay-pools' '--enable-snmp' '--enable-carp' '--enable-cache-digests' '--enable-underscores' '--enable-useragent-log' '--enable-poll' '--enable-select' '--enable-kqueue' '--enable-time-hack' '--enable-arp-acl' '--with-large-files' '--enable-large-cache-files' '--prefix=/usr/local' '--enable-follow-x-forwarded-for' '--disable-http-violations' '--enable-forward-log' '--enable-kill-parent-hack'

Why've you got enable-poll, enable-select, and enable-kqueue? :) Anyway.

> By the way, I don't seem to have /etc/libthr.conf in my FreeBSD 6.0 box. Is this normal?

You won't have one by default.

> If I just let Squid run continuously, which I am, then COSS does its job
> quite well and I have low median service times.
>
> If I have to stop and restart Squid on rare occasions, only then it takes
> Squid at least 1 hour to rebuild my COSS storage which is about 20 GB in size.

Squid rebuilds COSS async but it will do it by reading the entire COSS file in from start to finish. It's not efficient at all. Try running iostat or systat -vmstat 1 whilst it's rebuilding and see if you're saturating the disk IO.

I'm sorry guys, I just don't have the time to work on fixing up COSS past its current state at this present time.

Adrian
Re: [squid-users] First external_acl by myself : FATAL: the myexternalacl helpers are crashing too rapidly, need help!
On Sat, 2007-06-16 at 11:50 -0300, [EMAIL PROTECTED] wrote:
> Hi,
>
> Sorry about the last incomplete message. Too many keyboards here. :-(
>
> I am trying to make my first external_acl helper, but I am having
> problems.
>
> My script (based on examples from the squid-users mail list archive):
>
> #!/usr/bin/perl
> $|=1;
> while ( my $parms = <STDIN> )
> {
> print "OK\n";
> }
>
> squid.conf
>
> external_acl_type myexternalacl %SRC %LOGIN /usr/local/squid/libexec/script.pl
> acl testing external myexternalacl
> (There is no ACL using the helper, for a while, because it is not
> working anyway)

Can you execute the helper from the command line as your cache_effective_user?

> cache.log
>
> WARNING: myexternalacl #10 (FD 45) exited
> (there are 5 lines, changing just # and FD)

Anything relevant before that?

Regards
Henrik
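[Editor's note] For reference, the same always-OK helper loop sketched in Python. This is an editor's stand-in for the Perl script above, not part of the thread; the function name is hypothetical, and the flush plays the role of Perl's $|=1 (squid needs unbuffered replies):

```python
def run_helper(lines, out=None):
    """Minimal external_acl helper loop: read one request line at a time
    and answer OK for each, unconditionally, like the Perl example."""
    answers = []
    for line in lines:
        # A real helper would inspect the %SRC / %LOGIN fields in `line` here.
        answers.append("OK")
        if out is not None:
            print("OK", file=out, flush=True)  # unbuffered reply per request
    return answers

# In the real helper the lines come from stdin and answers go to stdout:
#   import sys; run_helper(sys.stdin, out=sys.stdout)
print(run_helper(["10.0.0.1 user1\n", "10.0.0.2 user2\n"]))  # -> ['OK', 'OK']
```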
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
On Sun, 2007-06-17 at 07:22 -0300, Michel Santos wrote:
> the problem is not unique to this particular situation, it is repeatable
> easily by shutting down the machine without waiting for squid closing the
> files and bang ...

Shutting down hard by pulling the power, or by the shutdown command?

Squid will be very unhappy if swap.state contains garbage, which might happen if you suddenly pull the power and your OS is using a filesystem which doesn't guarantee file integrity in such conditions.. having swap.state cut short is fine. Having it contain garbage is not.

Regards
Henrik
[squid-users] lookup host in http_access more often
Hi!

I am running a little proxy server. To prevent others from accessing it, I set up an http_access allow directive for my dyndns account, which resolves to my IP (my IP changes every 24 hours). The problem is that Squid does not look up the IP as frequently as I would like it to. How can I change this?

Martin
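[Editor's note] If the delay comes from Squid's internal DNS result cache, the time it keeps lookups for can be capped in squid.conf; the values below are illustrative. Note, though, that some ACL types resolve hostnames only at (re)configure time rather than per request, in which case a `squid -k reconfigure` after the IP change may still be needed:

```conf
# Cap how long Squid caches successful and failed DNS lookups.
positive_dns_ttl 1 minutes
negative_dns_ttl 1 minutes
```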
Re: [squid-users] RE: Using squid through a ipsec-isakmp tunnel
On Fri, 2007-06-15 at 17:41 +0100, Darren Goulden wrote:
> Hello,
>
> I need some help using squid browsing through an ipsec-isakmp tunnel.
>
> We have been using squid internally now for quite a while and recently
> implemented an ipsec-isakmp tunnel to manage one of our customers'
> services remotely (via http), the way it's set up is as follows;

If you draw ascii diagrams, make sure to use Courier or another monospace font, and disable linewrap.. Impossible to read otherwise.

Regarding your problem: the speed impact of Squid should be negligible. If you see a drastic difference then something is wrong. It's very hard to say from the data you have provided what might be wrong, but it smells more network-related than Squid.

Simple test: Can you from the Squid server access the customer network fine without using Squid? (Squid will only run as fine as the host Squid is running on..)

Regards
Henrik
Re: [squid-users] COSS unusable on FreeBSD?
Adrian Chadd wrote in the last message:
> On Sun, Jun 17, 2007, Michel Santos wrote:
>>
>> squid needs about two hours to build a 8GB coss_dir on a clean partition
>> while it is building the cache_dir the service is practically unusable slow
>
> Nope, not meant to be that bad.
> Whats squid -v say, and whats your /etc/libthr.conf say?

Squid Cache: Version 2.6.STABLE13-20070603
configure options: '--enable-default-err-language=Portuguese' '--enable-storeio=diskd,ufs,aufs,coss,null' '--enable-removal-policies=heap,lru' '--enable-underscores' '--disable-ident-lookups' '--disable-hostname-checks' '--enable-large-files' '--disable-http-violations' '--enable-truncate' '--disable-wccp' '--disable-wccpv2' '--enable-follow-x-forwarded-for' '--disable-linux-tproxy' '--disable-linux-netfilter' '--disable-epoll'

I tried w/o '--enable-large-files' '--enable-truncate' but it makes no difference

I don't use libthr.conf but libmap.conf:

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so libthr.so

Michel

...
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
Re: [squid-users] wbinfo_group.pl not responding correctly
On Fri, 2007-06-15 at 12:53 -0300, Isnard Jaquet wrote:
> Hello all,
>
> I'm facing a weird problem using wbinfo_group.pl to validate windows
> groups. I'm used to installing and configuring this often, so I don't think
> I'm doing anything wrong, but here goes:
>
> SISOP = FreeBSD 6.2-STABLE
> Samba = 3.0.25a
> Squid = 2.6.STABLE13
>
> ##
> Squid settings:
>
> Debug:
> debug_options ALL,1 82,9
>
> External ACL:
> external_acl_type NT_global_group concurrency=5 %LOGIN /usr/local/libexec/squid/wbinfo_group.pl

concurrency should be children above.. concurrency is a quite different thing. See squid.conf.default for details.

Regards
Henrik
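[Editor's note] A sketch of the corrected directive, per the advice above: children=n starts n helper processes, while concurrency= is only for helpers written to multiplex several requests over one channel. Verify against squid.conf.default for your version:

```conf
# children=5: run five wbinfo_group.pl processes in parallel
external_acl_type NT_global_group children=5 %LOGIN /usr/local/libexec/squid/wbinfo_group.pl
```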
RE: [squid-users] Redirecting to different server based on URL pattern
On Fri, 2007-06-15 at 15:15 +0000, Jacobson, Martin wrote:
> http_port 8080 accel defaultsite=googleapp2o.ismc.intelink.gov
>
> Now when I go to http://lindev2o.lab.ismc.intelink.gov:8080/, I get an
> access denied when trying to retrieve
> http://googleapp2o.ismc.intelink.gov/

Have you configured http_access to allow access to the accelerated sites?

Note: The URL is what you have told in http_port. The defaultsite part of http_port SHOULD be the host:port part of the URL you intend to put in the web browser. The server names are given by cache_peer. It might differ if you need to have the URL sent differently to the backend server, but that is generally a bad idea.

Regards
Henrik
Re: [squid-users] Time to serve request
On Fri, 2007-06-15 at 22:58 +0800, squid squid wrote:
> If this is the case, is there any parameter or means by which I can find the
> total round trip time for the access?

That has to be measured at the client I am afraid, or perhaps by a packet analyzer looking at the ACKs sent by the client as an approximation. To get quite good approximations you can reduce the socket send buffer size.

Regards
Henrik
Re: [squid-users] COSS unusable on FreeBSD?
On Sun, 17 Jun 2007 18:41:05 +0800 Adrian Chadd <[EMAIL PROTECTED]> wrote:
> On Sun, Jun 17, 2007, Michel Santos wrote:
>>
>> squid needs about two hours to build a 8GB coss_dir on a clean partition
>> while it is building the cache_dir the service is practically unusable slow
>
> Nope, not meant to be that bad.
> Whats squid -v say, and whats your /etc/libthr.conf say?

Hi Adrian,

I am also facing the same problem as Michel regarding rebuilding the COSS storage system.

Output of: squid -v

Squid Cache: Version 2.6.STABLE9
configure options: '--bindir=/usr/local/sbin' '--sysconfdir=/usr/local/etc/squid' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/usr/local/squid' '--enable-removal-policies=lru heap' '--enable-async-io' '--with-pthreads' '--enable-storeio=coss,ufs diskd null aufs' '--enable-delay-pools' '--enable-snmp' '--enable-carp' '--enable-cache-digests' '--enable-underscores' '--enable-useragent-log' '--enable-poll' '--enable-select' '--enable-kqueue' '--enable-time-hack' '--enable-arp-acl' '--with-large-files' '--enable-large-cache-files' '--prefix=/usr/local' '--enable-follow-x-forwarded-for' '--disable-http-violations' '--enable-forward-log' '--enable-kill-parent-hack'

By the way, I don't seem to have /etc/libthr.conf in my FreeBSD 6.0 box. Is this normal?

If I just let Squid run continuously, which I am, then COSS does its job quite well and I have low median service times.

If I have to stop and restart Squid on rare occasions, only then does it take Squid at least 1 hour to rebuild my COSS storage which is about 20 GB in size.

Thanking you.

--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu
(TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np
Re: [squid-users] Squid cache manipulation
Fri 2007-06-15 at 10:15 -0400, Lukasz Koszanski wrote:
> Hi, I wonder if it is possible to manipulate squid cache content, for
> example I want to convert all the png files in my cache to jpeg images.

Not easily. But with Squid-3 you can in theory plug in an ICAP server doing online conversion from PNG to JPEG if you like.

Regards Henrik

signature.asc Description: This is a digitally signed message part
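Henrik's suggestion would look roughly like this in a Squid-3.0-era squid.conf. This is only a sketch: the ICAP directive syntax changed between Squid-3 releases, the ICAP service itself would have to be written separately, and the service name, port, and URL below are made up:

```
# Hand responses to an external ICAP service (hypothetical, listening
# on icap://127.0.0.1:1344/png2jpeg) that would do the PNG -> JPEG
# transcoding before the object is cached.
icap_enable on
icap_service png2jpeg respmod_precache 0 icap://127.0.0.1:1344/png2jpeg
icap_class img_convert png2jpeg
icap_access img_convert allow all
```

Note that the heavy lifting (decoding PNG, encoding JPEG, rewriting Content-Type and Content-Length) all happens in the ICAP server, not in Squid.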
Re: [squid-users] squid+ldap
Fri 2007-06-15 at 09:56 -0300, pauloric wrote:
> c) from squid.conf:
> auth_param basic program /usr/lib/squid/ldap_auth -b
> "dc=xxx,dc=com,dc=br" -f "uid=%s" -h 130.0.150.2
> auth_param basic children 10
> auth_param basic
> program /usr/lib/squid/ncsa_auth /etc/admwebuser/squidusers.passwd
> auth_param basic children 10

You can only have one set of auth_param basic settings. The second overwrites the first.

Regards Henrik
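One common workaround is to point the single auth_param basic line at a wrapper script that tries both backends. The sketch below is untested and inefficient (it spawns a helper per attempt); the paths and options are copied from the quoted config, and the wrapper filename is made up:

```sh
#!/bin/sh
# auth_wrapper.sh -- hypothetical fallback helper for Squid.
# Basic-auth helper protocol: read "user password" per line on stdin,
# answer OK or ERR on stdout. Try LDAP first, then the NCSA file.
while read user pass; do
  if printf '%s %s\n' "$user" "$pass" | \
      /usr/lib/squid/ldap_auth -b "dc=xxx,dc=com,dc=br" -f "uid=%s" -h 130.0.150.2 | \
      grep -q '^OK'; then
    echo OK
  elif printf '%s %s\n' "$user" "$pass" | \
      /usr/lib/squid/ncsa_auth /etc/admwebuser/squidusers.passwd | \
      grep -q '^OK'; then
    echo OK
  else
    echo ERR
  fi
done
```

Then squid.conf needs only one auth_param basic program line pointing at the wrapper.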
Re: [squid-users] squid cache dns?
Fri 2007-06-15 at 20:01 +0800, Snow Wolf wrote:
> No. I run squid as a reverse proxy. Clients' DNS queries are nothing to me. I
> just care about the DNS queries from my squids.

There should be nearly no DNS traffic, if any at all, in a reverse proxy setup, at least not using Squid-2.6 or later with cache_peer based forwarding.

Regards Henrik
Re: [squid-users] squid cache dns?
Fri 2007-06-15 at 08:49 -0300, Leonardo Rodrigues Magalhães wrote:
> So your clients will do DNS queries to the DNS server they are configured
> to query. If DNS resolves fine, the HTTP query will be made and this one
> will be forwarded to squid in transparent-proxy fashion.
>
> And when squid receives the HTTP query, it will have to do the DNS query
> again

And that is why it's preferable to have the clients and Squid use the same DNS server. This way Squid will benefit from the cached data kept by the DNS server, avoiding a full lookup of the requested server again.

Regards Henrik
Re: [squid-users] squid cache dns?
Fri 2007-06-15 at 14:16 +0800, Snow Wolf wrote:
> When Squid is running in transparent mode, it makes lots of DNS queries.
> Would Squid cache those DNS query results for some time?

Yes, it caches the DNS queries for as long as allowed by the DNS server, using the TTL provided in the DNS protocol. There are also a couple of squid.conf options to tune this if needed, but there is normally no need to look into those.

Regards Henrik
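The tuning knobs Henrik alludes to are the DNS TTL limits in squid.conf. The values shown here are just the usual defaults, for illustration:

```
# Upper bound on how long a successful lookup may be cached,
# regardless of the TTL the DNS server handed back.
positive_dns_ttl 6 hours

# How long a failed lookup is remembered before it is retried.
negative_dns_ttl 1 minute
```

Lowering positive_dns_ttl trades extra DNS traffic for faster pickup of changed records; raising negative_dns_ttl can mask transient resolver failures for longer.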
Re: [squid-users] strange lines in cache.log
Fri 2007-06-15 at 10:44 +0500, rihad wrote:
> Here's an example of one typical erroneous TCP stream flow processed by
> Ethereal/Wireshark, and shown here in all its ASCII glory. Any tips
> appreciated.
>
> |.2.|j.s|..?|..=...K.H...~..|Z.#|v..|<.m.n.G.t...J...p.1.f...(.I.^.5.:.a.V.O...[...y.N...T.e.*.3.P...F...>.W.6.-.x...4..|
> ...0...&._.l.]...k|..C|..!.r..|X.9...g...%...Q.L...b.i|..U...o|R2{|8.S.,...B.+|...|..w|d..|`.M.2.'|...|.0}.".)[EMAIL PROTECTED]/.|.;...Y..
>
> E...q
> HTTP/1.0 400 Bad Request
> Server: squid/2.6.STABLE13
> Date: Wed, 13 Jun 2007 08:45:51 GMT
> Content-Type: text/html
> Content-Length: 1180
> Expires: Wed, 13 Jun 2007 08:45:51 GMT
> X-Squid-Error: ERR_INVALID_REQ 0
> X-Cache: MISS from cache.net
> Via: 1.0 cache.net:8080 (squid/2.6.STABLE13)
> Proxy-Connection: close

Are you running transparent interception? I would guess so. The above indicates there is some application on your network trying to use the HTTP port 80 for non-HTTP traffic, and Squid, being an HTTP proxy, dislikes seeing non-HTTP traffic sent to it.

Regards Henrik
Re: [squid-users] COSS unusable on FreeBSD?
On Sun, Jun 17, 2007, Michel Santos wrote:
>
> squid needs about two hours to build an 8GB coss_dir on a clean partition
> while it is building the cache_dir the service is practically unusable slow

Nope, not meant to be that bad.
What's squid -v say, and what's your /etc/libthr.conf say?

Adrian
[squid-users] COSS unusable on FreeBSD?
squid needs about two hours to build an 8GB coss_dir on a clean partition, and while it is building the cache_dir the service is practically unusably slow. Even a 1GB dir needs more than 30 minutes, and the same time is spent each time squid starts. This is on a dual-CPU machine with U320 disks, FreeBSD RELENG_6 amd64, UFS2. While it is building there is no high CPU usage or disk usage; it seems to be simply slow by itself. Once the cache_dir is rebuilt it works OK with good performance, but until getting there it is a burden. Is it meant to be that slow?

Michel

...
Datacenter Matik http://datacenter.matik.com.br E-Mail e Data Hosting Service para Profissionais.
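For context, a COSS store of the size being discussed is declared with a single cache_dir line in Squid-2.6 squid.conf. The path and limits below are placeholders, not taken from Michel's setup:

```
# COSS packs many small objects into one pre-allocated stripe file,
# so it is normally capped with max-size; anything larger goes to a
# second cache_dir of another type (e.g. aufs or diskd).
cache_dir coss /cache1/coss 8192 max-size=524288 block-size=4096
```

The slow startup complained about here is the rebuild/scan of that stripe file, which happens every time Squid starts against an existing COSS store.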
Re: [squid-users] 2.6-S13 + diskd is freaky bugged
Tek Bahadur Limbu wrote in the last message:
>> After an unclean reboot squid builds a monster swap.state which fills up
>> the disk in seconds (graph attached)
>>
>> funny is that until the disk is full it logs
>>
>> Store rebuilding is -0.3% complete
>> Store rebuilding is -0.4% complete
>> Store rebuilding is -0.4% complete
>> Store rebuilding is -0.3% complete
>> Store rebuilding is -0.4% complete
>> Store rebuilding is -0.3% complete
>> Store rebuilding is -0.4% complete
>>
>> until suddenly ...
>>
>> Store rebuilding is 1291.7% complete
>> Store rebuilding is 743.5% complete
>> Store rebuilding is 1240.4% complete
>> Store rebuilding is 725.0% complete
>> Store rebuilding is 1194.1% complete
>> Store rebuilding is 1150.4% complete
>> Store rebuilding is 707.9% complete
>>
>> what shall I do with this?
>
> Hi Michel,
>
> Have you tried stopping Squid and deleting your swap.state file and
> restarting Squid again?

No, because that does not solve anything. I deleted the partitions (newfs) and squid -z'ed the cache_dirs, which is faster.

The problem is not unique to this particular situation; it is easily repeatable by shutting down the machine without waiting for squid to close its files, and bang ...

diskd already had a problem after unclean shutdowns before, but it "only" slowly unlinked the cache_dir content, which btw was already annoying but did not kill the service.

Michel

...
Datacenter Matik http://datacenter.matik.com.br E-Mail e Data Hosting Service para Profissionais.
[squid-users] Fwd: Squid historical cache?
I had a question about squid, and I don't really know where to go to get the answer (been on Google, been on IRC, looked through the docs, etc.). Here's what I'm wondering. I know that Squid caches pages so that browsing is sped up. I also know that it can be set to cache even dynamic pages and such. Is there a program out there that can view this cache (see what is in it, on a "last visited" basis, say)? If so, that would make my job a lot easier.

I am also looking to see if there is a way I can do historical caching. Basically, I don't want to discard cache, because I want to be able to view everything that people on the network did (the actual page itself, the data itself, etc.). This would incorporate squid into a group of applications that I am putting together as a parental monitoring suite. I know this is kind of changing what squid is actually made to do (OK, totally changing what it is supposed to do), but as far as I know squid is one of the best caching proxies out there, and caching proxies are where I'm working from as far as my needs go. It also has great support from the Linux community and from people actually working with it in the field.

In short, I want to know if I can browse the cache with some kind of pre-existing interface, and if there is some way I can make squid keep a page, or move it off somewhere, instead of deleting it from cache. Thank you in advance for your help.
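Squid itself has no "history browser", but every request is already recorded in access.log, which covers the "see what people did" half of the question (though not the page bodies). A sketch of pulling that out, assuming the default native log format; the sample line and IPs are made up:

```python
# List who fetched what, from Squid's native access.log.
# Native format fields (whitespace-separated): timestamp elapsed client
# result/status bytes method URL user hierarchy/peer content-type.
def visited_urls(log_lines):
    """Yield (client_ip, method, url) for each request in the log."""
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or truncated lines
        yield fields[2], fields[5], fields[6]

sample = [
    "1182141321.123  45 192.168.1.10 TCP_MISS/200 1024 "
    "GET http://example.com/index.html - DIRECT/93.184.216.34 text/html",
]
for client, method, url in visited_urls(sample):
    print(client, method, url)  # -> 192.168.1.10 GET http://example.com/index.html
```

Keeping the actual page bodies is the harder half: the on-disk cache store is an internal format that objects are evicted from, so an archive would mean copying responses out as they pass through (e.g. via an ICAP hook in Squid-3) rather than mining the cache_dir afterwards.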