Re: [squid-users] Preparing squid training
On 19/08/11 07:35, Jorge Armando Medina wrote: On 08/18/2011 02:03 PM, squidbob wrote: Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards

Over the last months I have been writing a big manual for squid proxy implementations. It is in Spanish; I use it for courseware here at the company. It is almost everything I know about squid implementation. The document is here: http://tuxjm.net/docs/Manual_de_Instalacion_de_Servidor_Proxy_Web_con_Ubuntu_Server_y_Squid/html-multiples/

Jorge Armando Medina: Is this at a stable location I can add to the non-English documentation index? Taking a read through some early pages, I find it's talking about 3.0.STABLE1. You may want to update the build examples to the current Ubuntu Lucid supported release, 3.0.STABLE19, and mention why it's not documenting a current 3.x release. I'd also advise using the squidclient package instead of squid3-client, and squid-cgi instead of squid3-cgi. They are not related to the main squid version, and these *3 alternative packages have been dropped in current Debian/Ubuntu versions. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] Preparing squid training
On 19/08/11 07:52, squidbob wrote: Thanks for the comments coming in. My motivation for this is: 1) make my brain work a little while preparing and giving the training; 2) get more practice in both the hands-on and theory sides of squid/proxies (I myself need to know/learn more); 3) I want to give more training about IT security, so this can help me warm up for it; 4) I always like to share and get knowledge to/from others; 5) yes, I need to earn extra money :-)

We have a 3.1-series beginners' guide available through the main website for purchase in various formats. You might like to use it as an available course book. It has everything you are seeking to teach in the concepts and feature-use areas. There are some simple hands-on pieces in there, but you will want to write more complex tutorial tasks yourself or source them elsewhere. Skills such as how to data-mine the living documentation at the wiki.squid-cache.org and www.squid-cache.org websites for specific problems will be useful for early beginners finding new things. Most of what I do here is point people at this info or re-write it to suit their particular situation.

On 18.08.2011 22:38, Ron Wheeler wrote: Are you going to charge for this training? Ron On 18/08/2011 3:28 PM, Benjamin wrote: On 08/19/2011 12:33 AM, squidbob wrote: Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards Hi, Yes, that's good. Even go with advanced-level training of squid, like tproxy, high cache gain, etc., and also share your sessions with the community.
If you are going to include TCP instruction, IPv6 basics are also required these days. Squid-3.1+ is one of the tools designed to make the addition of IPv6 easier and more comfortable by gatewaying between the IP networks. "How do I enable it for just HTTP but not everything else?" is one of the questions beginners to IPv6 still worry over needlessly. There are multiple safe answers besides Squid; awareness is the key. Thank you for your focus on Squid. We are happy to assist with advertising of Squid-related services and products on the squid-cache website. If you want a potentially global spread of clients, let me know when you are ready to go. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] tproxy and "disable-pmtu-discovery=always"
On 19/08/11 07:36, Ritter, Nicholas wrote: Back when I first set up TPROXY/Squid, I was told to use "disable-pmtu-discovery=always" after the http_port tproxy config entry in squid.conf. Is "disable-pmtu-discovery=always" still needed? Depends on the kernel. ICMP linking was one of the things fixed last, around 2.6.35/.36. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
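For context, a sketch of the two configurations being discussed; the port number and the exact kernel boundary are assumptions based only on this thread, so adjust to your own deployment:

```
# squid.conf (illustrative): TPROXY interception port.
# On kernels before roughly 2.6.35/.36, broken ICMP handling meant Path
# MTU discovery could stall TPROXY'd connections, hence the workaround:
http_port 3129 tproxy disable-pmtu-discovery=always

# On newer kernels with the ICMP fixes in place, the extra option can
# usually be dropped:
# http_port 3129 tproxy
```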
Re: [squid-users] Preparing squid training
Thanks for the comments coming in. My motivation for this is: 1) make my brain work a little while preparing and giving the training; 2) get more practice in both the hands-on and theory sides of squid/proxies (I myself need to know/learn more); 3) I want to give more training about IT security, so this can help me warm up for it; 4) I always like to share and get knowledge to/from others; 5) yes, I need to earn extra money :-)

On 18.08.2011 22:38, Ron Wheeler wrote: Are you going to charge for this training? Ron On 18/08/2011 3:28 PM, Benjamin wrote: On 08/19/2011 12:33 AM, squidbob wrote: Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards Hi, Yes, that's good. Even go with advanced-level training of squid, like tproxy, high cache gain, etc., and also share your sessions with the community. Thanks, Benjamin
Re: [squid-users] Preparing squid training
Are you going to charge for this training? Ron On 18/08/2011 3:28 PM, Benjamin wrote: On 08/19/2011 12:33 AM, squidbob wrote: Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards Hi, Yes, that's good. Even go with advanced-level training of squid, like tproxy, high cache gain, etc., and also share your sessions with the community. Thanks, Benjamin -- Ron Wheeler President Artifact Software Inc email: rwhee...@artifact-software.com skype: ronaldmwheeler phone: 866-970-2435, ext 102
[squid-users] tproxy and "disable-pmtu-discovery=always"
Back when I first set up TPROXY/Squid, I was told to use "disable-pmtu-discovery=always" after the http_port tproxy config entry in squid.conf. Is "disable-pmtu-discovery=always" still needed?
Re: [squid-users] Preparing squid training
On 08/18/2011 02:03 PM, squidbob wrote: > Hi, > > I'm planning to prepare squid training which I'll first give for > local requests, then maybe to remote sites or online. Basically it may > include theoretical knowledge (prerequisites: TCP/IP, squid introduction, > etc.) and hands-on work (installation, configuration, different deployment > theories, troubleshooting/cases, etc.). It may also be divided into > separate trainings, like basic and advanced. I'd like your opinion > about a squid training and any recommendations or feedback for preparing > it (include this, do that, etc.). > > Regards Over the last months, I have been writing a big manual for squid proxy implementations. It is in Spanish; I use it for courseware here at the company. It is almost everything I know about squid implementation. The document is here: http://tuxjm.net/docs/Manual_de_Instalacion_de_Servidor_Proxy_Web_con_Ubuntu_Server_y_Squid/html-multiples/ Best regards. -- Jorge Armando Medina Computación Gráfica de México Web: http://www.e-compugraf.com Tel: 55 51 40 72, Ext: 124 Email: jmed...@e-compugraf.com GPG Key: 1024D/28E40632 2007-07-26 GPG Fingerprint: 59E2 0C7C F128 B550 B3A6 D3AF C574 8422 28E4 0632
RE: [squid-users] Re: squid tproxy problem
I have had this problem. I have found that part of the problem is that when the iptables rules are entered at the CLI, they are not added in the order required for functioning. I have also seen cases where the client web surfing keeps timing out, and either after the timeout or after the client clicks the stop button, the access shows up in access.log. I find that I have to add the iptables rules via the CLI, do a "service iptables save", then "vim /etc/sysconfig/iptables" and rearrange the rules. -Original Message- From: Benjamin [mailto:benjo11...@gmail.com] Sent: Thursday, August 18, 2011 2:11 PM To: squid-users@squid-cache.org Subject: Re: [squid-users] Re: squid tproxy problem On 08/18/2011 08:19 PM, Amos Jeffries wrote: > On 19/08/11 01:43, Benjamin wrote: >> Hi Amos, >> >> Thanks for your kind response. I am going to try with the latest kernel >> 3.0.3 and update you with the final status. >> >> Is kernel 3.0.3 OK for tproxy with squid version 3.1.10? >> > > I have no information about it. But I expect so. > > Amos Hi Amos, I tried with kernel 2.6.38.8, but I face the same issue. When I see packets in the iptables tproxy rule, I cannot see any requests in access.log and customers are not able to browse sites. And then, when I swap the interfaces in the ebtables rules, browsing works from the customer side, but there are no packets in the tproxy rule and no requests in access.log. I can't find where the mistake is. Regards, Benjamin
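For reference, the rule ordering that matters here follows the usual TPROXY recipe on the squid wiki: the socket-match/DIVERT rule must come before the TPROXY rule, so that packets belonging to established flows are diverted instead of being handed to TPROXY again. A sketch, assuming Squid listens on port 3129 (the mark value and table number are the conventional ones, not requirements):

```
# mangle-table rules; order is significant.
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

# 1) packets matching an existing local socket get diverted first
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# 2) only then is new port-80 traffic handed to TPROXY
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129

# route the marked packets to the local stack
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

Entering the TPROXY rule before the socket-match rule (as can happen when adding rules ad hoc at the CLI) produces exactly the kind of timeout behaviour described above.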
Re: [squid-users] Preparing squid training
On 08/19/2011 12:33 AM, squidbob wrote: Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards Hi, Yes, that's good. Even go with advanced-level training of squid, like tproxy, high cache gain, etc., and also share your sessions with the community. Thanks, Benjamin
Re: [squid-users] Re: squid tproxy problem
On 08/18/2011 08:19 PM, Amos Jeffries wrote: On 19/08/11 01:43, Benjamin wrote: Hi Amos, Thanks for your kind response. I am going to try with the latest kernel 3.0.3 and update you with the final status. Is kernel 3.0.3 OK for tproxy with squid version 3.1.10? I have no information about it. But I expect so. Amos Hi Amos, I tried with kernel 2.6.38.8, but I face the same issue. When I see packets in the iptables tproxy rule, I cannot see any requests in access.log and customers are not able to browse sites. And then, when I swap the interfaces in the ebtables rules, browsing works from the customer side, but there are no packets in the tproxy rule and no requests in access.log. I can't find where the mistake is. Regards, Benjamin
[squid-users] Preparing squid training
Hi, I'm planning to prepare squid training which I'll first give for local requests, then maybe to remote sites or online. Basically it may include theoretical knowledge (prerequisites: TCP/IP, squid introduction, etc.) and hands-on work (installation, configuration, different deployment theories, troubleshooting/cases, etc.). It may also be divided into separate trainings, like basic and advanced. I'd like your opinion about a squid training and any recommendations or feedback for preparing it (include this, do that, etc.). Regards
Re: [squid-users] Downloading Mailarchive for offline use
On 19/08/11 03:10, Tarek Kilani wrote: Hi, I wanted to know if there is a way to download the archive for offline use, so that I have something to read and skim through while I'm on my flight. Thank you. The mail archive is online at http://www.squid-cache.org/mail-archive/. It's a few GB of repetitive Q&A though. You might like the wiki instead, essentially a condensed version. Or one of the Squid books, both available in electronic forms (free too, if you look in the right places, but I'm not allowed to say where). Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] squid performance tunning
On 19/08/11 03:58, Chen Bangzhong wrote: Amos, I want to find out what is filling my disk at 2-3MB/s. If there is no cache-related information in the response header, will squid write the response to disk? In the squid wiki, I found the following sentences:

Responses with Cache-Control: Private are NOT cachable. Responses with Cache-Control: No-Cache are NOT cachable. Responses with Cache-Control: No-Store are NOT cachable. Responses for requests with an Authorization header are cachable ONLY if the response includes Cache-Control: Public. The following HTTP status codes are cachable: 200 OK, 203 Non-Authoritative Information, 300 Multiple Choices, 301 Moved Permanently, 410 Gone.

My question is: if there is no Cache-Control related information, such as in the following header:

Server: nginx/0.8.54
Date: Thu, 18 Aug 2011 15:56:29 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 1218
X-Cache: MISS from zw12squid.my.com
X-Cache-Lookup: MISS from zw12squid.my.com:80
Via: 1.0 zw12squid.my.com (squid/3.1.12)
Connection: keep-alive

will squid save it to disk? No. It has a small Content-Length, so it will be stored in RAM. But your RAM cache is running 100% full, so something old will be pushed out to disk to make the empty gap. The lack of Cache-Control and Expires: headers means that on the next request for its URL your refresh_pattern rules will be tested against the URL, and whichever one matches will be used to determine whether it is served or revalidated. The only things that can feed that algorithm are the Date: when produced and the current time, so Squid is unlikely to get it right if the two are very similar or very different; probably leading to a revalidation or new request anyway. Can you give me a detailed description of when squid will save an object to disk? When it can't be saved to the RAM cache_mem area: * cache_mem is full => least-popular object goes to disk.
* object bigger than maximum_object_size_in_memory => goes to disk * object smaller than minimum_object_size_in_memory AND a cache_dir can accept it => goes to disk * object unknown length => goes to disk. Maybe RAM as well. Those are the cases I know about. There may be others. We know disk I/O happens far more often than it reasonably should in Squid. The newer releases since 2.6 and 3.0 are being improved to avoid it and increase traffic speeds, but progress is slow and irregular. You were going to try the memory-only caching. I think that was a good idea for your 88% RAM-hit vs 1% disk-hit ratios. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
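The size thresholds driving the RAM-vs-disk decisions above are all squid.conf directives. A sketch with illustrative values only (the numbers are assumptions, not recommendations; tune them to your own traffic):

```
# squid.conf (illustrative values)
cache_mem 5 GB                          # RAM cache area
maximum_object_size_in_memory 1024 KB   # larger objects bypass cache_mem
maximum_object_size 4 MB                # larger objects are not cached at all
cache_dir aufs /var/spool/squid 20000 16 256
```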
Re: [squid-users] squid performance tunning
On 19/08/11 02:59, Kaiwang Chen wrote: On 2011-08-18 at 21:07, Amos Jeffries wrote: On 18/08/11 22:56, Chen Bangzhong wrote: Mean Object Size: 20.61 K maximum_object_size_in_memory 1024 KB So most objects will be saved in RAM first; that still can't explain why there are so many disk writes. Well, I would check the HTTP response headers there. Make sure they contain a Content-Length: header. If that is missing, Squid is forced to assume the object will have infinite length and require disk backing for it until it has finished arriving. Will squid require disk backing regardless of the object size, even if it is smaller than the receive buffer? _Require_ it? No. Do it that way due to old code, yes, maybe. The amount of data waiting to be processed does not matter much. It could be zero bytes chunked-encoded and a set of follow-up pipelined response headers. Until it is processed and stored somewhere, Squid can't tell if it's some bytes that happened to appear early, or the whole thing. The packet size, read_ahead_gap, the receive buffer size (dynamic! 1->64KB), and the cache_dir min/max values all have an effect in that area. I believe it picks a cache area before continuing to read more bytes (but I am not completely certain). If the cache_dirs all have small maximum size limits and RAM looks bigger, it will go there. In fact, cache_dir usage for backing, while practically welded in for the 3.1 series, has on occasion been showing signs of memory-backing instead with a large cache_mem. The other devs have projects underway to eliminate all that confusion in 3.2 anyway. I am not sure what the default size of the receive buffer is; is it one of these? read_ahead_gap 16 KB: a sliding window of bytes buffered unsent to the client. Mostly unrelated to the receive buffer; when in effect it is the minimum buffer size. tcp_recv_bufsize 0 bytes: the maximum amount per read cycle (0 meaning use the OS sysctl defaults, which is usually 4KB). The default buffer is hard-coded as 1KB for most of the 3.1 series.
It is 4KB for older and newer releases (the slow-start algorithm from 1KB turned out to be bad for speed on MB-sized objects and of no benefit for small ones). Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
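For reference, the two directives under discussion look like this in squid.conf; the values shown are the defaults quoted in this thread:

```
# squid.conf defaults being discussed
read_ahead_gap 16 KB      # data buffered ahead of what the client has taken
tcp_recv_bufsize 0 bytes  # 0 = use the OS sysctl value (often ~4 KB per read)
```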
Re: [squid-users] squid performance tunning
Amos, I want to find out what is filling my disk at 2-3MB/s. If there is no cache-related information in the response header, will squid write the response to disk? In the squid wiki, I found the following sentences:

Responses with Cache-Control: Private are NOT cachable. Responses with Cache-Control: No-Cache are NOT cachable. Responses with Cache-Control: No-Store are NOT cachable. Responses for requests with an Authorization header are cachable ONLY if the response includes Cache-Control: Public. The following HTTP status codes are cachable: 200 OK, 203 Non-Authoritative Information, 300 Multiple Choices, 301 Moved Permanently, 410 Gone.

My question is: if there is no Cache-Control related information, such as in the following header:

Server: nginx/0.8.54
Date: Thu, 18 Aug 2011 15:56:29 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 1218
X-Cache: MISS from zw12squid.my.com
X-Cache-Lookup: MISS from zw12squid.my.com:80
Via: 1.0 zw12squid.my.com (squid/3.1.12)
Connection: keep-alive

will squid save it to disk? Can you give me a detailed description of when squid will save an object to disk? Thanks a lot for your kind help. 2011/8/18 Amos Jeffries : > On 19/08/11 02:10, Chen Bangzhong wrote: >> >> thanks. >> >> Before I try the gateway squid solution, I want to change one of my >> squids to use a memory cache only. I have 16GB RAM; cache_mem is now set >> to 5GB. >> >> I will try to increase it to 12GB and set cache_dir to the null schema. I >> do this because I am sure that my hot objects can be kept in RAM; >> non-hot objects created by robots will go stale and the memory will be >> reused. >> >> Is that all I need to set squid to be a memory cache? >> > > You have squid-3.1, so only comment out the cache_dir lines and set > cache_mem to something large. The null dir schema no longer exists. > > Remember that cache_mem still has an index to account for, and the usual > active traffic buffering stays present. Also, a reconfigure will wipe the > RAM cache to empty.
> > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 >
Re: [squid-users] squid performance tunning
On 19/08/11 02:10, Chen Bangzhong wrote: Thanks. Before I try the gateway squid solution, I want to change one of my squids to use a memory cache only. I have 16GB RAM; cache_mem is now set to 5GB. I will try to increase it to 12GB and set cache_dir to the null schema. I do this because I am sure that my hot objects can be kept in RAM; non-hot objects created by robots will go stale and the memory will be reused. Is that all I need to set squid to be a memory cache? You have squid-3.1, so only comment out the cache_dir lines and set cache_mem to something large. The null dir schema no longer exists. Remember that cache_mem still has an index to account for, and the usual active traffic buffering stays present. Also, a reconfigure will wipe the RAM cache to empty. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
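A minimal sketch of the memory-only setup being described for squid-3.1. The sizes follow the numbers given in this thread; the path in the commented-out line is illustrative:

```
# squid.conf: memory-only caching on squid-3.1
# (no cache_dir lines at all; the old "null" schema is gone)
cache_mem 12 GB
maximum_object_size_in_memory 1024 KB

# previous disk cache, now disabled:
# cache_dir aufs /var/spool/squid 100000 16 256
```

Note that a `squid -k reconfigure` empties the RAM cache, so in this setup a reconfigure means starting cold.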
Re: [squid-users] squid performance tunning
On 19/08/11 02:40, Kaiwang Chen wrote: 2011/8/18 Amos Jeffries: On 18/08/11 22:53, Kaiwang Chen wrote: 2011/8/18 Amos Jeffries: On 18/08/11 19:40, Drunkard Zhang wrote: 2011/8/18 Chen Bangzhong: I don't know why there are so many disk writes and there are so many objects on disk. All traffic goes through either the RAM cache or, if it is bigger than maximum_object_size_in_memory, through the disks. From that info report, ~60% of your traffic bytes are MISS responses. A large portion of that MISS traffic is likely not storable, so it will be written to cache then discarded immediately. Squid is overall mostly-write in its disk behaviour. Will a "cache deny" matching those non-storable objects suppress storing them to disk? And the HTTP header 'Cache-Control: no-store'? The "no-store" header and the "cache deny" directive have the same effect on your Squid: both erase existing stored objects and erase the newly received one _after_ it has finished transfer. The difference is that the header applies everywhere the object is received, while the cache access control is limited to the one Squid instance testing it. Great. What about "Cache-Control: max-age=0" and "Cache-Control: no-cache" responses? Does squid store them? max-age=0 means discard immediately; same as no-store to Squid. no-cache on responses is borderline. I can't seem to find anything relevant to no-cache kicking off a refresh. The HTTP/1.1 support results show it acting like no-store when last tested, so it is probably not usable yet. Luckily there is an overlap with the must-revalidate response directive; you can send that on the reply instead. > hoping it is cheaper to > make a validation than to fetch a whole fresh object? Which source > code files describe the logic to deal with such cases? > If the object has not actually changed, the server sends a 304 instead of a new object, and there is an ETag to confirm that the object both machines are talking about is identical. Then yes, revalidation is much smaller.
Squid does not (yet) send If-None-Match on revalidations (it accepts and relays it but does not create it), so there are a number of possible cases where revalidation fails to be smaller. src/client_side_reply.cc cacheHit() handles the reply when an object is found in storage (to determine whether it is usable, obsolete, or simply old). That makes use of various other process*() code, and src/refresh.cc does the revalidation calculations. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
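A minimal sketch of the "cache deny" usage discussed above. The ACL is a hypothetical one (matching common crawler User-Agent substrings); as noted in the thread, the directive only stops this one Squid instance from storing the matched responses:

```
# squid.conf: refuse to cache responses for robot traffic (illustrative)
# "browser" ACLs are regex matches against the User-Agent header
acl robots browser -i crawler spider bot
cache deny robots
```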
RE: [squid-users] Re: squid tproxy problem
I have one CentOS v6 box running the CentOS v6-supplied 2.6.32-71.29.1.el6 kernel and iptables-1.4.7-3.el6. I am using a recompiled squid-3 rpm that I popped 3.1.14 into, and the combination seems to be working fine. I am also testing a CentOS v6 install with the kernel source rpm from RHEL 6 (kernel-2.6.32-131.6.1.el6), the iptables source rpm from RHEL6 (iptables-1.4.7-4), and the squid 3.1.14 rpm I made. I am testing this because there were TPROXY fixes made in an upstream kernel release that RedHat back-patched. The only issue I have run into thus far is a higher-than-normal occurrence of TCP_MISS/502 errors in squid. I am not sure if the error is in squid/tproxy/kernel or on the network, but I suspect it is on my network. Nick -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Thursday, August 18, 2011 9:49 AM To: squid-users@squid-cache.org Subject: Re: [squid-users] Re: squid tproxy problem On 19/08/11 01:43, Benjamin wrote: > Hi Amos, > > Thanks for your kind response. I am going to try with the latest kernel > 3.0.3 and update you with the final status. > > Is kernel 3.0.3 OK for tproxy with squid version 3.1.10? > I have no information about it. But I expect so. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
[squid-users] Downloading Mailarchive for offline use
Hi, I wanted to know if there is a way to download the archive for offline use so that I have something to read and skim through while I'm on my flight. Thank you.
Re: [squid-users] squid performance tunning
On 2011-08-18 at 21:07, Amos Jeffries wrote: > On 18/08/11 22:56, Chen Bangzhong wrote: >> Mean Object Size: 20.61 K >> maximum_object_size_in_memory 1024 KB >> >> So most objects will be saved in RAM first; that still can't explain why >> there are so many disk writes. >> > > Well, I would check the HTTP response headers there. Make sure they > contain a Content-Length: header. If that is missing, Squid is forced to > assume the object will have infinite length and require disk backing for > it until it has finished arriving. Will squid require disk backing regardless of the object size, even if it is smaller than the receive buffer? I am not sure what the default size of the receive buffer is; is it one of these? read_ahead_gap 16 KB tcp_recv_bufsize 0 bytes > > The "Mean Object Size:" metric is measured on completely received and > stored objects, so it does not really account for unknown-length objects or > non-cacheable previous objects. > > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 > Thanks, Kaiwang
Re: [squid-users] Re: squid tproxy problem
On 19/08/11 01:43, Benjamin wrote: Hi Amos, Thanks for your kind response. I am going to try with the latest kernel 3.0.3 and update you with the final status. Is kernel 3.0.3 OK for tproxy with squid version 3.1.10? I have no information about it. But I expect so. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] How does squid behave when caching really large files (GBs)
On 16/08/11 20:33, Thiago Moraes wrote: Hello everyone, I currently have a server which stores many terabytes of rather static files, each one having tenths of gigabytes. Right now, these files are only accessed through a local connection, but in some time this is going to change. One option to make the access acceptable is to deploy new servers at the places that will access these files most. The new server would keep a copy of the most-accessed ones so that only a LAN connection is needed, instead of wasting bandwidth on external access. I'm considering almost any solution for these new hosts, and one of them is just using a cache tool like squid to make the downloads faster. But as I haven't seen anyone caching files this big, I would like to know which problems I may find if I adopt this kind of solution. You did mean "tenths" right, as in 100-900 MB files? That seems slightly larger than most traffic, but not huge. Even old Squid installs limited to 32-bit file sizes should have no problem handling that as traffic. Most Squid installs won't store them locally to the clients though. The default limit is 4MB, to cache the bulk of web-page traffic and avoid rarer large objects like yours pushing much out of cache. Most of the bumping-up mentioned around here is for YouTube and similar video media content, only increasing it to tens or hundreds of MB, then stopping there for the same caching reasons as the 4MB limit. Occasionally we hear from an ISP or CDN bumping it enough to cache CDs or DVDs. And OS distribution mirrors, although those also tend to have smaller package caches, mostly tens-of-MB objects. The CERN Frontier network admins are pushing multiple TB around via Squids. They sound like a scale above what you want to do, but if you want operational experience with big data, they could be the best people to talk to.
The alternatives I've considered so far include using a distributed file system such as Hadoop, deploying a private cloud storage system to communicate between the servers, or even using bittorrent to share the files among servers. Any comments on these alternatives too? No opinion on them as such. AFAIK these don't really sit in the same type of service area as Squid. If you are after distributed _storage_, Squid is definitely not the right solution. Squid's design is more about fast delivery of the data than storage. Caches being distributed stores is a side effect of that model being very efficient for delivery, rather than any effort to spread the locations of things. Cache storage is fundamentally a giant /tmp directory: persistent, but liable for erasure at any given second. A chunk of it is often found only in volatile RAM too. Bittorrent is perhaps closest, in being delivery-oriented rather than storage-oriented, with one authority source and a hierarchy of intermediaries doing the delivery. That's where the similarities end as well. If what you are after is a scalable delivery mechanism that can minimize bandwidth consumption, Squid is definitely an option there. You can layer a whole distributed background set of storage servers behind a gateway layer of Squid, using the various peering algorithms and ACL rules for source selection. Those background-layer servers can in turn use any of the actual storage-oriented methods you mention to store the content, if they still need scale, with web services providing the files as HTTP objects from each location to the Squid layer. WikiMedia have some nice CDN network diagrams published if you want to see what I mean: http://meta.wikimedia.org/wiki/Wikimedia_servers Sorry, talked you round in a circle there. But I hope it's of some help, at least on where and whether Squid can fit into things for you. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
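If Squid were used this way, the default 4MB ceiling mentioned above would need raising. A sketch with purely illustrative values for caching multi-hundred-MB objects (paths, sizes, and the replacement policy choice are assumptions, not recommendations from this thread):

```
# squid.conf (illustrative): allow caching of very large objects
maximum_object_size 2 GB
cache_dir aufs /var/spool/squid 500000 16 256   # ~500 GB of disk cache
cache_replacement_policy heap LFUDA             # tends to retain large
                                                # popular objects longer
```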
Re: [squid-users] squid performance tunning
2011/8/18 Amos Jeffries : > On 18/08/11 22:53, Kaiwang Chen wrote: >> >> 2011/8/18 Amos Jeffries: >>> >>> On 18/08/11 19:40, Drunkard Zhang wrote: 2011/8/18 Chen Bangzhong: > > > I don't know why there are so many disk writes and there are so many > objects on disk. >>> >>> All traffic goes through either the RAM cache or, if it is bigger than >>> maximum_object_size_in_memory, through the disks. >>> >>> From that info report, ~60% of your traffic bytes are MISS responses. A >>> large >>> portion of that MISS traffic is likely not storable, so it will be written >>> to >>> cache then discarded immediately. Squid is overall mostly-write in its >>> disk behaviour. >> >> Will a "cache deny" matching those non-storable objects suppress >> storing them to disk? >> And the HTTP header 'Cache-Control: no-store'? > > The "no-store" header and the "cache deny" directive have the same effect on your > Squid. Both erase existing stored objects and erase the newly received one > _after_ it has finished transfer. > > The difference is that the header applies everywhere the object is received. > The cache access control is limited to the one Squid instance testing it. Great. What about "Cache-Control: max-age=0" and "Cache-Control: no-cache" responses? Does squid store them, hoping it is cheaper to make a validation than to fetch a whole fresh object? Which source code files describe the logic to deal with such cases? > > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 > Thanks, Kaiwang
Re: [squid-users] squid performance tunning
Thanks. Before I try the gateway squid solution, I want to change one of my squids to use a memory cache only. I have 16GB RAM; cache_mem is now set to 5GB. I will try to increase it to 12GB and set cache_dir to the null schema. I do this because I am sure that my hot objects can be kept in RAM; non-hot objects created by robots will go stale and the memory will be reused. Is that all I need to set squid to be a memory cache? 2011/8/18 Amos Jeffries : > On 18/08/11 22:50, Chen Bangzhong wrote: >> >> Thank you Amos and Drunkard. >> >> My website hosts novels. That is, users can read novels there. >> >> The pages are not truly static content, so I can only cache them for >> 10 minutes. >> >> My squids serve both non-cachable requests (working like nginx) and >> cachable requests (10-minute cache), so 60% cache miss is reasonable. It >> is not a good design, but we can't do more now. > Oh well. Good luck wishes on that side of the problem. > >> >> Another point is, only hot novels are read by users. Crawlers/robots >> will push many objects into the cache. These objects are rarely read by users >> and will expire after 10 minutes. >> >> If the http response header indicates it is not cachable (e.g. >> max-age=0), will squid save the response in RAM or on disk? My guess is >> squid will discard the response. > > Correct. It will discard the response AND anything it has already cached for > that URL. > > For non-hot objects this will not be a major problem, but it may raise disk I/O > a bit as the existing old stored content gets kicked out. Which might > actually be a good thing, emptying space in the cache early. Or wasted I/O. > It's not clear exactly which. > >> >> If the http response header indicates it is cachable (e.g. max-age=600), >> squid will save it in cache_mem. If the object is larger than >> maximum_object_size_in_memory, it will be written to disk. > > Yes. > >> >> Can you tell me when squid will save the object to disk? When will >> squid delete the stale objects?
> > Stale objects are deleted at the point they are detected as stale and no > longer usable (i.e. a request has been made for it and an updated replacement has > arrived from the web server). Or if they are the oldest object stored and > more cache space is needed for newer objects. > > > Other than tuning your existing setup there are two things I think you may > be interested in. > > The first is a Measurement Factory project which involves altering Squid to > completely bypass the cache storage when an object can't be cached or > re-used by other clients. Makes them faster to process, and avoids dropping > cached objects to make room. Combining this with a "cache deny" rule > identifying those annoying robots as non-cacheable would allow you to store > only the real users' traffic. > This is a slightly longer-term project; AFAIK it is not ready for > production use (might be wrong). At minimum TMF are possibly needing > sponsorship assistance to progress it faster. Contact Alex Rousskov about > possibilities there: http://www.measurement-factory.com/contact.html > > > The second thing is an alternative Squid configuration which would emulate > that behaviour immediately using two Squid instances. > Basically: configure a new second instance as a non-caching gateway which > all requests go to first. That could pass the robots and other easily > detected non-cacheable requests straight to the web servers for service, > while passing the other potentially cacheable requests to your current Squid > instance, where storage and cache fetches happen more often without the > robots. > > The gateway Squid would have a much smaller footprint since it needs no > memory for caching or indexing, and no disk usage at all. > > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 >
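A minimal sketch of what the "memory cache only" setup asked about above could look like in squid.conf. The 12 GB figure comes from the message; the directive names are real, but treat the exact values as assumptions to tune. On Squid 3.1+ simply omitting every cache_dir line leaves only the RAM cache, while 2.x builds with the null store use an explicit "cache_dir null" line:

```
# Memory-only cache: no cache_dir lines at all (Squid 3.1+)
cache_mem 12 GB
maximum_object_size_in_memory 1024 KB
memory_replacement_policy lru

# On Squid 2.x built with --enable-storeio=null you would instead add:
# cache_dir null /tmp
```

With 16GB of RAM, leaving a few GB of headroom for Squid's index and the OS is the safer choice.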
RE: [squid-users] RE: Squid NTLM - Don't want users to have to enter domain
>IIRC the Samba ntlm_auth provides "--domain=DOMAIN" option to force >verification of all users against a certain domain (enabling no domain >on the popup). Thanks Amos, that did the trick :) -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: 18 August 2011 12:48 To: squid-users@squid-cache.org Subject: Re: [squid-users] RE: Squid NTLM - Dont want users to have to enter domain On 18/08/11 21:52, Almighty wrote: > Hi, > > Transparent NTLM authentication works great on our site and running on 5 > proxy servers. > > However we are having an increasing number of clients who are not on the > domain (E.g. Mac labs). > Is there any way that these non-AD end users could get prompted for just > their "username& password" instead of "DOMAIN\username& password". > > Many thanks in advance, > Well, considering that NTLM is a protocol which operates by authenticating that users are members of a domain. How do you expect that would work? IIRC the Samba ntlm_auth provides "--domain=DOMAIN" option to force verification of all users against a certain domain (enabling no domain on the popup). It is up to the client software to obtain the right security tokens that domains DC will accept. Squid cannot do anything about that. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
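For reference, the helper line that worked here might look like the following in squid.conf (the helper path and MYDOMAIN are placeholders; --helper-protocol=squid-2.5-ntlmssp is the usual Samba ntlm_auth mode for Squid):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain=MYDOMAIN
auth_param ntlm children 10
```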
Re: [squid-users] Re: squid tproxy problem
On 08/18/2011 05:50 PM, Amos Jeffries wrote: On 18/08/11 22:51, Benjamin wrote: I tested interception in bridge mode with the current setup; that is working fine. But when I configure tproxy, it is not working. Please guide me on that. Thanks, Benjo Hi, Any suggestions please. My current network setup: WAN ROUTER (114.30.XX.1 --- public IP) | | | SWITCH | | | SQUID BOX (114.30.XX.19 gw: 114.30.XX.1) (bridge mode) | | | BANDWIDTH MGMT. LINUX BOX (114.30.XX.10 gw: 114.30.XX.1) | | | END USERS (mix of private IPs and public IPs) At the squid box: eth0 -> internet (cable from switch), eth1 -> cable connected to the BANDWIDTH MGMT. LINUX BOX ... ebtables -t broute --list Bridge table: broute Bridge chain: BROUTING, entries: 2, policy: ACCEPT -p IPv4 -i eth0 --ip-proto tcp --ip-dport 80 -j redirect -p IPv4 -i eth1 --ip-proto tcp --ip-sport 80 -j redirect Unless you changed the config between posts, that means port-80 traffic _from_ the Internet is being passed to the proxy. Same for traffic received _from_ internal web servers. According to the cabling diagram that should be: -i eth0 --ip-sport 80 -i eth1 --ip-dport 80 ... or plug the cables the other way around. Alternatively, and at least for testing: drop the -i NIC parameters entirely and route everything to or from port 80. iptables -L -nvx -t mangle Chain PREROUTING (policy ACCEPT 959157 packets, 79545939 bytes) pkts bytes target prot opt in out source destination 10993 689414 DIVERT tcp -- * * 0.0.0.0/0 0.0.0.0/0 socket 16765 1000259 TPROXY tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1 ... OS: CentOS 6, 64-bit; Squid: 3.1.4; kernel: 2.6.32-71.29.1.el6.x86_64 Indeed this shows some packets that should be showing up in Squid's logs. As TCP_DENIED visitors, if my assessment of the ebtables rules is correct. But either way, showing up. This looks a LOT like the problem Debian Lenny and Ubuntu Lucid have. They also had kernels from early 2.6.3n numbers.
Indeed going back to my notes (in the wiki): "2.6.32 to 2.6.34 have bridging issues on some systems. Please use 2.6.30 or 2.6.31 for production machines, they seem to work properly." I wrote that while monitoring TPROXY-related patches going into the kernel, about the time 2.6.36 came out. So if you can, 2.6.35 or later should work (the later the better). Most people working with Debian Squeeze (kernel 2.6.37+) have had no problems AFAICT. That success should be mirrored in other distros on similar kernel versions. Amos Hi Amos, Thanks for your kind response. I am going to try the latest kernel, 3.0.3, and will update you with the final status. Is kernel 3.0.3 OK for tproxy with Squid version 3.1.10? Thanks, Benjamin
Re: [squid-users] squid performance tuning
On 18/08/11 22:53, Kaiwang Chen wrote: 2011/8/18 Amos Jeffries: On 18/08/11 19:40, Drunkard Zhang wrote: 2011/8/18 Chen Bangzhong: I don't know why there are so many disk writes and there are so many objects on disk. All traffic goes through either the RAM cache or, if it's bigger than maximum_object_size_in_memory, will go through the disks. From that info report ~60% of your traffic bytes are MISS responses. A large portion of that MISS traffic is likely not storable, so will be written to cache then discarded immediately. Squid is overall mostly-write in its disk behaviour. Will a "cache deny" matching those non-storable objects suppress storing them to disk? And the HTTP header 'Cache-Control: no-store'? The "no-store" header and the "cache deny" directive have the same effect on your Squid. Both erase existing stored objects and erase the newly received one _after_ it has finished transferring. The difference is that the header applies everywhere the object is received; the cache access control is limited to the one Squid instance testing it. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
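A small sketch of the "cache deny" idea discussed above, in squid.conf terms. The `browser` ACL type matches the request's User-Agent header against regular expressions; the patterns below are illustrative examples only, not a complete robot list:

```
# Do not store responses fetched for common crawlers (example patterns)
acl robots browser -i googlebot baiduspider bingbot crawler spider
cache deny robots
```

Note this only stops the responses being stored on this Squid; unlike a 'Cache-Control: no-store' header from the origin, it has no effect on other caches along the path.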
Re: [squid-users] squid performance tuning
On 18/08/11 22:56, Chen Bangzhong wrote: > Mean Object Size: 20.61 K > maximum_object_size_in_memory 1024 KB > > So most objects will be saved in RAM first; that still can't explain why > there are so many disk writes. > Well, I would check the HTTP response headers there. Make sure they contain a Content-Length: header. If that is missing, Squid is forced to assume the object may have infinite length and requires disk backing for it until it has finished arriving. The "Mean Object Size:" metric is measured on completely received and stored objects, so it does not really account for unknown-length objects or non-cacheable previous objects. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
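One way to check is to fetch just the headers from the origin, e.g. `curl -s -D- -o /dev/null http://your-site/some/page`, and look for Content-Length. As an offline illustration of the distinction Amos describes (the sample headers below are hypothetical stand-ins for a real origin response):

```shell
# Hypothetical response headers, as 'curl -D-' might show them
hdrs='HTTP/1.1 200 OK
Cache-Control: max-age=600
Content-Type: text/html'

if printf '%s\n' "$hdrs" | grep -qi '^content-length:'; then
  echo "length known: object can complete in RAM (subject to maximum_object_size_in_memory)"
else
  echo "no Content-Length: Squid must assume unknown size until the body finishes arriving"
fi
```

Responses generated with chunked encoding or streamed output are the usual culprits for a missing Content-Length.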
Re: [squid-users] squid performance tuning
On 18/08/11 22:50, Chen Bangzhong wrote: Thank you, Amos and Drunkard. My website hosts novels; that is, users can read novels there. The pages are not truly static content, so I can only cache them for 10 minutes. My squids serve both non-cacheable requests (working like nginx) and cacheable requests (10-minute cache), so a 60% cache miss rate is reasonable. It is not a good design, but we can't do more now. Oh well. Good luck wishes on that side of the problem. Another point is, only hot novels are read by users. Crawlers/robots will push many objects into the cache. These objects are rarely read by users and will expire after 10 minutes. If the HTTP response header indicates it is not cacheable (e.g. max-age=0), will Squid save the response in RAM or on disk? My guess is Squid will discard the response. Correct. It will discard the response AND anything it has already cached for that URL. For non-hot objects this will not be a major problem. But it may raise disk I/O a bit as the existing old stored content gets kicked out. Which might actually be a good thing, emptying space in the cache early. Or wasted I/O. It's not clear exactly which. If the HTTP response header indicates it is cacheable (e.g. max-age=600), Squid will save it in cache_mem. If the object is larger than maximum_object_size_in_memory, it will be written to disk. Yes. Can you tell me when Squid will save the object to disk? When will Squid delete the stale objects? Stale objects are deleted at the point they are detected as stale and no longer usable (i.e. a request has been made for it and an updated replacement has arrived from the web server). Or if they are the oldest object stored and more cache space is needed for newer objects. Other than tuning your existing setup there are two things I think you may be interested in. The first is a Measurement Factory project which involves altering Squid to completely bypass the cache storage when an object can't be cached or re-used by other clients.
Makes them faster to process, and avoids dropping cached objects to make room. Combining this with a "cache deny" rule identifying those annoying robots as non-cacheable would allow you to store only the real users' traffic. This is a slightly longer-term project; AFAIK it is not ready for production use (might be wrong). At minimum TMF are possibly needing sponsorship assistance to progress it faster. Contact Alex Rousskov about possibilities there: http://www.measurement-factory.com/contact.html The second thing is an alternative Squid configuration which would emulate that behaviour immediately using two Squid instances. Basically: configure a new second instance as a non-caching gateway which all requests go to first. That could pass the robots and other easily detected non-cacheable requests straight to the web servers for service, while passing the other potentially cacheable requests to your current Squid instance, where storage and cache fetches happen more often without the robots. The gateway Squid would have a much smaller footprint since it needs no memory for caching or indexing, and no disk usage at all. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
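A rough sketch of the gateway instance described above, in squid.conf terms. Everything here is an assumption to adapt: the ports, the example robot patterns, and the idea that the existing caching instance listens on 3129 on the same host. The directives themselves (cache_peer, cache_peer_access, always_direct, never_direct) are standard:

```
# --- gateway instance: clients connect here, nothing is cached ---
http_port 3128
cache deny all                              # never store anything in this instance

acl robots browser -i bot crawler spider    # example User-Agent patterns only

# Potentially cacheable traffic goes to the existing caching Squid
cache_peer 127.0.0.1 parent 3129 0 no-query no-digest
cache_peer_access 127.0.0.1 allow !robots

# Robots bypass the cache entirely and go straight to the web servers
always_direct allow robots
never_direct allow !robots
```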
Re: [squid-users] Re: squid tproxy problem
On 18/08/11 22:51, Benjamin wrote: I tested interception in bridge mode with the current setup; that is working fine. But when I configure tproxy, it is not working. Please guide me on that. Thanks, Benjo Hi, Any suggestions please. My current network setup: WAN ROUTER (114.30.XX.1 --- public IP) | | | SWITCH | | | SQUID BOX (114.30.XX.19 gw: 114.30.XX.1) (bridge mode) | | | BANDWIDTH MGMT. LINUX BOX (114.30.XX.10 gw: 114.30.XX.1) | | | END USERS (mix of private IPs and public IPs) At the squid box: eth0 -> internet (cable from switch), eth1 -> cable connected to the BANDWIDTH MGMT. LINUX BOX ... ebtables -t broute --list Bridge table: broute Bridge chain: BROUTING, entries: 2, policy: ACCEPT -p IPv4 -i eth0 --ip-proto tcp --ip-dport 80 -j redirect -p IPv4 -i eth1 --ip-proto tcp --ip-sport 80 -j redirect Unless you changed the config between posts, that means port-80 traffic _from_ the Internet is being passed to the proxy. Same for traffic received _from_ internal web servers. According to the cabling diagram that should be: -i eth0 --ip-sport 80 -i eth1 --ip-dport 80 ... or plug the cables the other way around. Alternatively, and at least for testing: drop the -i NIC parameters entirely and route everything to or from port 80. iptables -L -nvx -t mangle Chain PREROUTING (policy ACCEPT 959157 packets, 79545939 bytes) pkts bytes target prot opt in out source destination 10993 689414 DIVERT tcp -- * * 0.0.0.0/0 0.0.0.0/0 socket 16765 1000259 TPROXY tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1 ... OS: CentOS 6, 64-bit; Squid: 3.1.4; kernel: 2.6.32-71.29.1.el6.x86_64 Indeed this shows some packets that should be showing up in Squid's logs. As TCP_DENIED visitors, if my assessment of the ebtables rules is correct. But either way, showing up. This looks a LOT like the problem Debian Lenny and Ubuntu Lucid have. They also had kernels from early 2.6.3n numbers.
Indeed going back to my notes (in the wiki): "2.6.32 to 2.6.34 have bridging issues on some systems. Please use 2.6.30 or 2.6.31 for production machines, they seem to work properly." I wrote that while monitoring TPROXY related patches going into the kernel. About the time 2.6.36 came out. So if you can, 2.6.35 or later should work (the later the better). Most people working with Debian Squeeze (kernel 2.6.37+) have had no problems AFAICT. That success should be mirrored in other distros on the similar kernel versions. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
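Amos's ebtables correction, written out as commands. This is a sketch only: it assumes, per the diagram in this thread, that eth0 faces the Internet and eth1 faces the clients, and it keeps the plain `-j redirect` form used in the original rules:

```
# Intercept client->server port-80 traffic arriving on the client-side NIC (eth1),
# and the server->client replies arriving on the Internet-side NIC (eth0).
ebtables -t broute -F BROUTING
ebtables -t broute -A BROUTING -p IPv4 -i eth1 --ip-proto tcp --ip-dport 80 -j redirect
ebtables -t broute -A BROUTING -p IPv4 -i eth0 --ip-proto tcp --ip-sport 80 -j redirect
```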
RE: [squid-users] RE: Squid NTLM - Don't want users to have to enter domain
Hi Amos, Thanks for your reply. I was hoping that I could inject the domain name somehow when the credentials are being submitted. I can see now it's very much a Samba related query, Regards, -Original Message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: 18 August 2011 12:48 To: squid-users@squid-cache.org Subject: Re: [squid-users] RE: Squid NTLM - Dont want users to have to enter domain On 18/08/11 21:52, Almighty wrote: > Hi, > > Transparent NTLM authentication works great on our site and running on 5 > proxy servers. > > However we are having an increasing number of clients who are not on the > domain (E.g. Mac labs). > Is there any way that these non-AD end users could get prompted for just > their "username& password" instead of "DOMAIN\username& password". > > Many thanks in advance, > Well, considering that NTLM is a protocol which operates by authenticating that users are members of a domain. How do you expect that would work? IIRC the Samba ntlm_auth provides "--domain=DOMAIN" option to force verification of all users against a certain domain (enabling no domain on the popup). It is up to the client software to obtain the right security tokens that domains DC will accept. Squid cannot do anything about that. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] RE: Squid NTLM - Don't want users to have to enter domain
On 18/08/11 21:52, Almighty wrote: Hi, Transparent NTLM authentication works great on our site, running on 5 proxy servers. However we are having an increasing number of clients who are not on the domain (e.g. Mac labs). Is there any way that these non-AD end users could get prompted for just their "username & password" instead of "DOMAIN\username & password"? Many thanks in advance, Well, considering that NTLM is a protocol which operates by authenticating that users are members of a domain, how do you expect that would work? IIRC the Samba ntlm_auth provides a "--domain=DOMAIN" option to force verification of all users against a certain domain (enabling no domain on the popup). It is up to the client software to obtain the right security tokens that the domain's DC will accept. Squid cannot do anything about that. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
Re: [squid-users] squid performance tuning
Mean Object Size: 20.61 K maximum_object_size_in_memory 1024 KB So most objects will be saved in RAM first; that still can't explain why there are so many disk writes.

avg-cpu:  %user %nice %system %iowait %steal %idle
           1.52  0.00    1.63    6.95   0.00 89.91
Device: rrqm/s wrqm/s   r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   0.01  0.06   0.13   1.24    1.45    28.96     0.00  4.16  2.20  0.04
sda1      0.00   0.01  0.06   0.11   1.24    1.45    31.69     0.00  4.55  2.41  0.04
sdb       0.07   0.07  0.01   0.01   0.33    0.31    59.88     0.00 19.77 15.75  0.03
sdc       0.00   2.08  9.16 104.96  81.61 1071.39    20.21     0.57  5.02  1.73 19.75

avg-cpu:  %user %nice %system %iowait %steal %idle
           2.38  0.00    3.38   10.38   0.00 83.88
Device: rrqm/s wrqm/s   r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   4.50 11.00 293.00 104.00 3768.50    25.48     7.26 23.88  1.92 58.30

avg-cpu:  %user %nice %system %iowait %steal %idle
           3.25  0.00    2.63    3.88   0.00 90.24
Device: rrqm/s wrqm/s   r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   0.50 15.50  94.50 150.00  644.25    14.44     0.42  3.79  1.95 21.50

avg-cpu:  %user %nice %system %iowait %steal %idle
           3.00  0.00    2.88    3.38   0.00 90.75
Device: rrqm/s wrqm/s   r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   4.00 13.50 241.50 134.00 1609.75    13.68     0.89  3.37  0.76 19.50

On 18 August 2011 at 18:50, Chen Bangzhong wrote: > Thank you, Amos and Drunkard. > > My website hosts novels; that is, users can read novels there. > > The pages are not truly static content, so I can only cache them for > 10 minutes.
> > My squids serve both non-cacheable requests (working like nginx) and > cacheable requests (10-minute cache), so a 60% cache miss rate is reasonable. It > is not a good design, but we can't do more now. > > Another point is, only hot novels are read by users. Crawlers/robots > will push many objects into the cache. These objects are rarely read by users > and will expire after 10 minutes. > > If the HTTP response header indicates it is not cacheable (e.g. > max-age=0), will Squid save the response in RAM or on disk? My guess is > Squid will discard the response. > > If the HTTP response header indicates it is cacheable (e.g. max-age=600), > Squid will save it in cache_mem. If the object is larger than > maximum_object_size_in_memory, it will be written to disk. > > Can you tell me when Squid will save the object to disk? When will > Squid delete the stale objects? > > > > > 2011/8/18 Amos Jeffries : >> On 18/08/11 19:40, Drunkard Zhang wrote: >>> >>> 2011/8/18 Chen Bangzhong: My cached objects will expire after 10 minutes. Cache-Control: max-age=600 >>> >>> Static content like pictures should cache longer, like 1 day, 86400. >> >> Could also be a whole year. If you control the origin website, set caching >> times as large as reasonably possible for each object. With revalidate >> settings relevant to its likely replacement needs. And always send a correct >> ETag. >> >> With those details Squid and other caches will take care of reducing caching >> times to suit the network and disk needs, and updates/revalidation to suit >> your needs. So please set it large. >> >>> I don't know why there are so many disk writes and there are so many objects on disk. >> >> All traffic goes through either the RAM cache or, if it's bigger than >> maximum_object_size_in_memory, will go through the disks. >> >> From that info report ~60% of your traffic bytes are MISS responses. A large >> portion of that MISS traffic is likely not storable, so will be written to >> cache then discarded immediately.
Squid is overall mostly-write with its >> disk behaviour. >> >> Likely your 10-minute age is affecting
Re: [squid-users] squid performance tuning
2011/8/18 Amos Jeffries : > On 18/08/11 19:40, Drunkard Zhang wrote: >> >> 2011/8/18 Chen Bangzhong: >>> >>> My cached objects will expire after 10 minutes. >>> >>> Cache-Control: max-age=600 >> >> Static content like pictures should cache longer, like 1 day, 86400. > > Could also be a whole year. If you control the origin website, set caching > times as large as reasonably possible for each object. With revalidate > settings relevant to its likely replacement needs. And always send a correct > ETag. > > With those details Squid and other caches will take care of reducing caching > times to suit the network and disk needs, and updates/revalidation to suit > your needs. So please set it large. > >> >>> I don't know why there are so many disk writes and there are so many >>> objects on disk. > > All traffic goes through either the RAM cache or, if it's bigger than > maximum_object_size_in_memory, will go through the disks. > > From that info report ~60% of your traffic bytes are MISS responses. A large > portion of that MISS traffic is likely not storable, so will be written to > cache then discarded immediately. Squid is overall mostly-write in its > disk behaviour. Will a "cache deny" matching those non-storable objects suppress storing them to disk? And the HTTP header 'Cache-Control: no-store'? > > Likely your 10-minute age is affecting this in a big way. The cache will > have a lot of storable objects which are stale. Next request they will be > fetched into memory, then replaced by a revalidation REFRESH (near-HIT) > response, which writes new data back to disk later. > >>> >>> In addition, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9% >>> is very low. >> >> Maybe caused by disk read timeouts. You used too much disk space; you >> can shrink it little by little, until the disk busy percentage is reduced >> to 80% or lower. > > Your Squid version is one which will promote HIT objects from disk and > service repeat HITs from memory.
Which reduces that disk-hit % a lot compared with what > earlier Squid versions would show. > >> >>> Can I increase the cache_mem? or not use disk cache at all? >> >> I used all memory I can use :-) > > Indeed, the more the merrier. Unless it is swapping under high load. If that > happens, Squid's speed turns terrible almost immediately. > > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 > Thanks, Kaiwang
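Following Amos's advice in this thread (long lifetimes with revalidation, plus a correct ETag, set at the origin), the response headers for a static image might look like the following. The values are illustrative examples, not a recommendation for any particular site:

```
Cache-Control: public, max-age=31536000, must-revalidate
ETag: "5f3e-4a8b9c"
Last-Modified: Thu, 18 Aug 2011 02:00:00 GMT
```

Downstream caches can then serve the object for a long time and cheaply revalidate it with a conditional request instead of re-fetching the body.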
[squid-users] Re: squid tproxy problem
I tested interception in bridge mode with the current setup; that is working fine. But when I configure tproxy, it is not working. Please guide me on that. Thanks, Benjo Hi, Any suggestions please. My current network setup: WAN ROUTER (114.30.XX.1 --- public IP) | | | SWITCH | | | SQUID BOX (114.30.XX.19 gw: 114.30.XX.1) (bridge mode) | | | BANDWIDTH MGMT. LINUX BOX (114.30.XX.10 gw: 114.30.XX.1) | | | END USERS (mix of private IPs and public IPs) At the squid box: eth0 -> internet (cable from switch), eth1 -> cable connected to the BANDWIDTH MGMT. LINUX BOX. I am using CentOS 6 and the Squid version is 3.1.10. I can see traffic hitting the tproxy iptables rules but I cannot get any requests into access.log. Kindly guide me to solve this problem. Regards, Benjamin On Wed, Aug 17, 2011 at 7:15 PM, benjamin fernandis wrote: Hi, I configured Squid for the tproxy feature in my network in bridge mode. I followed http://wiki.squid-cache.org/Features/Tproxy4 but I'm not getting requests in Squid's access.log. My configuration: cat /etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 acl to_localhost dst ::1/128 # Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing # should be allowed acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT acl mynetwork src '/etc/squid/mynetwork' acl cache_deny dst '/etc/squid/deny1' cache deny cache_deny # cache_mem 1024 MB # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow mynetwork http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 3128 http_port 3129 tproxy # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. cache_dir aufs /cache/squid 25600 32 512 # Leave coredumps in the first cache dir coredump_dir /cache/squid httpd_suppress_version_string on # Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 ip rule list 0: from all lookup local 32765: from all fwmark 0x1 lookup 100 32766: from all lookup main 32767: from all lookup default iptables -L -nvx -t mangle Chain PREROUTING (policy ACCEPT 959157 packets, 79545939 bytes) pkts bytes target prot opt in out source destination 10993 689414 DIVERT tcp -- * * 0.0.0.0/0 0.0.0.0/0 socket 16765 1000259 TPROXY tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1 Chain INPUT (policy ACCEPT 15122 packets, 1149717 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 959996 packets, 79295677 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 28272 packets, 10090599 bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 988265 packets, 89386044 bytes) pkts bytes target prot opt in out source destination Chain DIVERT (1 references) pkts bytes target prot opt in out source destination
Re: [squid-users] squid performance tuning
Thank you, Amos and Drunkard. My website hosts novels; that is, users can read novels there. The pages are not truly static content, so I can only cache them for 10 minutes. My squids serve both non-cacheable requests (working like nginx) and cacheable requests (10-minute cache), so a 60% cache miss rate is reasonable. It is not a good design, but we can't do more now. Another point is, only hot novels are read by users. Crawlers/robots will push many objects into the cache. These objects are rarely read by users and will expire after 10 minutes. If the HTTP response header indicates it is not cacheable (e.g. max-age=0), will Squid save the response in RAM or on disk? My guess is Squid will discard the response. If the HTTP response header indicates it is cacheable (e.g. max-age=600), Squid will save it in cache_mem. If the object is larger than maximum_object_size_in_memory, it will be written to disk. Can you tell me when Squid will save the object to disk? When will Squid delete the stale objects? 2011/8/18 Amos Jeffries : > On 18/08/11 19:40, Drunkard Zhang wrote: >> >> 2011/8/18 Chen Bangzhong: >>> >>> My cached objects will expire after 10 minutes. >>> >>> Cache-Control: max-age=600 >> >> Static content like pictures should cache longer, like 1 day, 86400. > > Could also be a whole year. If you control the origin website, set caching > times as large as reasonably possible for each object. With revalidate > settings relevant to its likely replacement needs. And always send a correct > ETag. > > With those details Squid and other caches will take care of reducing caching > times to suit the network and disk needs, and updates/revalidation to suit > your needs. So please set it large. > >> >>> I don't know why there are so many disk writes and there are so many >>> objects on disk. > > All traffic goes through either the RAM cache or, if it's bigger than > maximum_object_size_in_memory, will go through the disks. > > From that info report ~60% of your traffic bytes are MISS responses.
A large > portion of that MISS traffic is likely not storable, so will be written to > cache then discarded immediately. Squid is overall mostly-write with its > disk behaviour. > > Likely your 10-minute age is affecting this in a big way. The cache will > have a lot of storable object which are stale. Next request they will be > fetched into memory, then replaced by a revalidation REFRESH (near-HIT) > response, which writes new data back to disk later. > >>> >>> In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9% >>> is very low. >> >> Maybe cause by disk read timeout. You used too much disk space, you >> can shrink it a little by a little, until disk busy percentage reduced >> to 80% or lower. > > Your Squid version is one which will promote HIT objects from disk and > service repeat HITs from memory. Which reducing that disk-hit % a lot more > than earlier squid versions would show it as. > >> >>> Can I increase the cache_mem? or not use disk cache at all? >> >> I used all memory I can use :-) > > Indeed, the more the merrier. Unless it is swapping under high load. If that > happens Squid speed goes terrible almost immediately. > > Amos > -- > Please be using > Current Stable Squid 2.7.STABLE9 or 3.1.14 > Beta testers wanted for 3.2.0.10 >
Re: [squid-users] anyone describe the model of how Squid manages memory?
On 18/08/11 19:26, Raymond Wang wrote: Hi, all: I am new to Squid, and I have been assigned to learn how Squid manages memory, in order to make the best use of it. I have some questions about Squid's memory management: 1. If two files have the same content, such as two JavaScript files, how does Squid deal with the two files in memory? Does it treat them as one file and keep the two file names somewhere? 2. How does Squid define the level of hot data? What is the distribution strategy for hot data like, and how can I affect that strategy? Welcome to the world of caching. :) Introducing the Squid FAQ, Knowledge Base and How-To collection: http://wiki.squid-cache.org/ It's quite big and contains all of your answers, buried somewhere. Enjoy. Hint: http://wiki.squid-cache.org/SquidFaq/SquidMemory (2) "hot" data to Squid is the set of URLs (a) currently being transferred, plus (b) the N last-requested URL objects permitted to stay stored in RAM. cache_mem and maximum_object_size_in_memory control the RAM cache space and object size limits. (1) Squid deals with URLs and where to find them. That is all. Things like the content of objects at those URLs are completely under the control and responsibility of the webmasters authoring the objects. If they have different URLs they are different "URL objects". (Technical warning) Notice how I don't say "file" in any of the above. "File objects" is not really the right idea to be applying if you want to understand the web properly. * Sometimes one URL object is not a whole file object. * Sometimes one URL object is multiple file objects. * Sometimes the URL object can only be described as a "stream of data" or "tunnel", not related to the concept of "file" in any way. * URL object size ranges from zero to infinite (inclusive). * Sometimes multiple unique URL objects share a URL; the HTTP header metadata then affects the potential storage location as well. On disk, cacheable things may look like files.
In memory they are structured objects with snippets of HTTP headers and other meta data attached. Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
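The two RAM-cache limits Amos names can be sketched as a squid.conf fragment; the values here are illustrative examples only, to be tuned to the available RAM, not recommendations:

```conf
# Total RAM set aside for the in-memory object cache
cache_mem 256 MB

# Objects larger than this are never kept in the RAM cache;
# they are serviced through the disk cache instead
maximum_object_size_in_memory 512 KB
```

Together these bound which of the "hot" URL objects can stay resident in memory between requests.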
[squid-users] Re: squid tproxy problem
Hi, Any suggestions please?

My current network setup:

WAN ROUTER (114.30.XX.1 --- public IP)
        |
     SWITCH
        |
SQUID BOX (114.30.XX.19, gw: 114.30.XX.1) (bridge mode)
        |
BANDWIDTH MGMT. LINUX BOX (114.30.XX.10, gw: 114.30.XX.1)
        |
END USERS (mix of private IPs and public IPs)

At the squid box: eth0 -> internet (cable from switch), eth1 -> cable connected to the BANDWIDTH MGMT. LINUX BOX. I am using CentOS 6 and the squid version is 3.1.10. I can see traffic in the tproxy iptables rules but I cannot get any requests into access.log. Kindly guide me to solve this problem. Regards, Benjamin

On Wed, Aug 17, 2011 at 7:15 PM, benjamin fernandis wrote:
> Hi,
>
> I configured squid for the tproxy feature in my network in bridge mode.
>
> I followed http://wiki.squid-cache.org/Features/Tproxy4
>
> But I'm not getting requests in squid's access.log.
>
> My configuration:
>
> cat /etc/squid/squid.conf
>
> #
> # Recommended minimum configuration:
> #
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl localhost src ::1/128
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
> acl to_localhost dst ::1/128
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
>
> acl SSL_ports port 443
> acl Safe_ports port 80          # http
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> acl CONNECT method CONNECT
> acl mynetwork src '/etc/squid/mynetwork'
> acl cache_deny dst '/etc/squid/deny1'
>
> cache deny cache_deny
>
> cache_mem 1024 MB
>
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
>
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow mynetwork
> http_access allow localhost
>
> # And finally deny all other access to this proxy
> http_access deny all
>
> # Squid normally listens to port 3128
> http_port 3128
> http_port 3129 tproxy
>
> # We recommend you to use at least the following line.
> hierarchy_stoplist cgi-bin ?
>
> # Uncomment and adjust the following to add a disk cache directory.
> cache_dir aufs /cache/squid 25600 32 512
>
> # Leave coredumps in the first cache dir
> coredump_dir /cache/squid
> httpd_suppress_version_string on
>
> # Add any of your own refresh_pattern entries above these.
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?)  0    0%      0
> refresh_pattern .               0       20%     4320
>
> ip rule list
> 0:      from all lookup local
> 32765:  from all fwmark 0x1 lookup 100
> 32766:  from all lookup main
> 32767:  from all lookup default
>
> iptables -L -nvx -t mangle
> Chain PREROUTING (policy ACCEPT 959157 packets, 79545939 bytes)
>   pkts   bytes  target  prot opt in  out  source     destination
>  10993  689414  DIVERT  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   socket
>  16765 1000259  TPROXY  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1
>
> Chain INPUT (policy ACCEPT 15122 packets, 1149717 bytes)
>   pkts   bytes  target  prot opt in  out  source     destination
>
> Chain FORWARD (policy ACCEPT 959996 packets, 79295677 bytes)
>   pkts   bytes  target  prot opt in  out  source     destination
>
> Chain OUTPUT (policy ACCEPT 28272 packets, 10090599 bytes)
>   pkts   bytes  target  prot opt in  out  source     destination
>
> Chain POSTROUTING (policy ACCEPT 988265 packets, 89386044 bytes)
>   pkts   bytes  target  prot opt in  out  source     destination
>
> Chain DIVERT (1 references)
>   pkts   bytes  target  prot opt in  out
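[Editor's note: for a bridge-mode deployment, the Features/Tproxy4 wiki additionally requires ebtables rules so that bridged port-80 frames are handed up to the local IP stack where the TPROXY rule can intercept them; without them, packets cross the bridge untouched and never reach Squid. A sketch, assuming the eth0 (internet-facing) and eth1 (client-facing) bridge ports from the diagram above:]

```shell
# Ensure bridged traffic traverses the IP-layer netfilter hooks
modprobe br_netfilter 2>/dev/null || true

# Divert client->web port-80 frames arriving on the internal bridge
# port up to the local IP stack (DROP here means "stop bridging and
# route locally", not "discard")
ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-protocol tcp \
    --ip-destination-port 80 -j redirect --redirect-target DROP

# Divert the web->client return traffic arriving on the external port
ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-protocol tcp \
    --ip-source-port 80 -j redirect --redirect-target DROP
```

If traffic shows up in the mangle-table counters but never in access.log, missing BROUTING rules are a common culprit on bridges.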
Re: [squid-users] Whatismyip response behind squid
Hello, a, you wrote on 18.08.11:
> I have several squid boxes running. There is one which, when I set it
> in the proxy configuration on my client PCs' browsers and then open
> www.whatismyip.com, not only brings up its real NAT IP, but also
> the information below. What makes the site get this information and
> how can I prevent or change this banner?

Then try another server/service, e.g. myip.it or myip.nl. And then you need a script for extracting the IP address ...

Best regards! Helmut
Re: [squid-users] squid performance tunning
2011/8/18 Amos Jeffries :
> On 18/08/11 19:40, Drunkard Zhang wrote:
>>
>> 2011/8/18 Chen Bangzhong:
>>>
>>> My cached objects will expire after 10 minutes.
>>>
>>> Cache-Control:max-age=600
>>
>> Static content like pictures should cache longer, like 1 day, 86400.
>
> Could also be a whole year. If you control the origin website, set caching
> times as large as reasonably possible for each object, with revalidate
> settings relevant to its likely replacement needs. And always send a correct
> ETag.
>
> With those details Squid and other caches will take care of reducing caching
> times to suit the network and disk needs, and of updates/revalidation to suit
> your needs. So please set it large.
>
>>> I don't know why there are so many disk writes and there are so many
>>> objects on disk.
>
> All traffic goes through either the RAM cache or, if it's bigger than
> maximum_object_size_in_memory, through the disks.
>
> From that info report ~60% of your traffic bytes are MISS responses. A large
> portion of that MISS traffic is likely not storable, so it will be written to
> cache then discarded immediately. Squid is overall mostly-write in its
> disk behaviour.
>
> Likely your 10-minute age is affecting this in a big way. The cache will
> have a lot of storable objects which are stale. On the next request they will
> be fetched into memory, then replaced by a revalidation REFRESH (near-HIT)
> response, which writes new data back to disk later.
>
>>> In addition, "Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%"
>>> is very low.
>>
>> Maybe caused by disk read timeout. You used too much disk space; you
>> can shrink it little by little, until the disk busy percentage drops
>> to 80% or lower.
>
> Your Squid version is one which will promote HIT objects from disk and
> service repeat HITs from memory, which reduces that disk-hit % a lot more
> than earlier squid versions would show.
>
>>> Can I increase the cache_mem? or not use disk cache at all?
>>
>> I used all the memory I can use :-)
>
> Indeed, the more the merrier. Unless it is swapping under high load. If that
> happens Squid's speed becomes terrible almost immediately.

Actually I disabled swap entirely, and use a script to restart the squid process immediately when it is killed by the OS. The OS will kill squid when it runs out of memory (OOM).
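[Editor's note: Amos's "large max-age plus revalidation plus ETag" advice translates to response headers set on the origin server; a hypothetical sketch, not output captured from any real server in this thread:]

```http
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=31536000
ETag: "logo-v2"
Last-Modified: Mon, 15 Aug 2011 09:00:00 GMT
```

With a strong ETag, a cache holding a stale copy revalidates with a cheap conditional request (If-None-Match) and gets a tiny 304 back instead of re-fetching the whole object, which is what lets the max-age be set so large.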
Re: [squid-users] squid performance tunning
On 18/08/11 19:40, Drunkard Zhang wrote: 2011/8/18 Chen Bangzhong: My cached objects will expire after 10 minutes. Cache-Control:max-age=600 Static content like pictures should cache longer, like 1 day, 86400. Could also be a whole year. If you control the origin website, set caching times as large as reasonably possible for each object, with revalidate settings relevant to its likely replacement needs. And always send a correct ETag. With those details Squid and other caches will take care of reducing caching times to suit the network and disk needs, and of updates/revalidation to suit your needs. So please set it large. I don't know why there are so many disk writes and there are so many objects on disk. All traffic goes through either the RAM cache or, if it's bigger than maximum_object_size_in_memory, through the disks. From that info report ~60% of your traffic bytes are MISS responses. A large portion of that MISS traffic is likely not storable, so it will be written to cache then discarded immediately. Squid is overall mostly-write in its disk behaviour. Likely your 10-minute age is affecting this in a big way. The cache will have a lot of storable objects which are stale. On the next request they will be fetched into memory, then replaced by a revalidation REFRESH (near-HIT) response, which writes new data back to disk later. In addition, "Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%" is very low. Maybe caused by disk read timeout. You used too much disk space; you can shrink it little by little, until the disk busy percentage drops to 80% or lower. Your Squid version is one which will promote HIT objects from disk and service repeat HITs from memory, which reduces that disk-hit % a lot more than earlier squid versions would show. Can I increase the cache_mem? or not use disk cache at all? I used all the memory I can use :-) Indeed, the more the merrier. Unless it is swapping under high load. If that happens Squid's speed becomes terrible almost immediately. 
Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
[squid-users] RE: Squid NTLM - Don't want users to have to enter domain
Hi, Transparent NTLM authentication works great on our site and running on 5 proxy servers. However we are having an increasing number of clients who are not on the domain (E.g. Mac labs). Is there any way that these non-AD end users could get prompted for just their "username & password" instead of "DOMAIN\username & password". Many thanks in advance,
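[Editor's note: the usual approach here is to add a basic-auth fallback after the NTLM helper, so clients that cannot do NTLM get a plain username/password popup. A sketch assuming Samba's ntlm_auth helper; the paths are typical Debian/Ubuntu locations and the child counts are illustrative:]

```conf
# NTLM first: domain-joined clients authenticate transparently
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

# Basic fallback: non-domain clients (e.g. the Mac labs) get a plain
# username/password prompt instead of DOMAIN\username
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Proxy
```

Browsers negotiate the strongest scheme they support, so domain members keep single sign-on while everyone else falls through to basic.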
Re: [squid-users] Whatismyip response behind squid
On 18/08/11 18:35, a bv wrote: Hi, I have several squid boxes running. There is one which, when I set it in the proxy configuration on my client PC's browser and then open www.whatismyip.com, not only brings up its real NAT IP, but also

"real NAT IP". So you have a fake NAT IP? Unplug your phone then try to make a phone call. Works, yes? Call a friend then tell them to call you back at a number you make up in your head during the phone call. Works, yes? Your IP is your contact point _for that one transaction_. There is no guarantee the next transaction will use the same one. Unless your ISP is selling you a static IP.

the information below too. What makes the site get this information and how can I prevent or change this banner?

They have that information because:
-> You visited them and your browser tried to hand your PC's information over.
-> Squid erased pieces of that and replaced it with Squid's information.
-> Your NAT box erased pieces of Squid's information and handed its own over instead.

So what they see is a visit from one machine (squid) at your external IP address. It's not exactly rocket science to detect that a machine calling itself "squid" is *possibly* a proxy. You can doctor the config and make Squid show your "real" internal IPs and information. You want that? Or would you rather have this composite external "view" of you visible?

Regards

What Is My IP Address - WhatIsMyIP.com Your IP Address Is: x.y.z.t Possible Proxy Detected: 1.0 myproxyhostname.mydomain.com :8080 (squid/2.6.STABLE6)

You can suppress the particular squid version details with: httpd_suppress_version_string on

Most people trying to be anonymous also turn off the "via" directive. This only hides the proxy's HTTP/1.0 version details, so it can screw up websites which rely on it to disable certain HTTP/1.1-only features. Up to you.

Nothing can hide the NAT details. They are your public IP address, used at the packet level to receive the webpage. IP addresses in the 192.168.* or 10.* "private" ranges are shared by so many people there is nothing unique about them. As anonymous as you can get. Similar to everyone naming themselves by only the first two letters of their surname. How many millions of people have the same two letters?

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
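[Editor's note: the two directives Amos mentions, as a squid.conf fragment:]

```conf
# Hide the Squid version string from the Via header and error pages
httpd_suppress_version_string on

# Drop the Via header entirely; as noted above, this also hides the
# proxy's HTTP/1.0 version details and can break sites that rely on
# Via to disable HTTP/1.1-only features
via off
```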
Re: [squid-users] Website is not displayed correctly
Thanks. Can you show me some sample code please? Regards, Malvin

On 8/18/2011 5:09 PM, bilalma...@gmail.com wrote: You can make a no-cache site list, and add this website to the list. --Original Message-- From: Malvin Rito To: squid-users@squid-cache.org ReplyTo: mr...@mail.altcladding.com.ph Subject: [squid-users] Website is not displayed correctly Sent: Aug 18, 2011 12:03 PM Hi List, We are running Squid Proxy in transparent mode and we have recently encountered a problem accessing the http://www.grasshopper3d.com/ website, where the site is not displayed correctly: images on that website are not displayed and text is not formatted. I also tried accessing the site through my extra router and the site displayed correctly. What do you think is causing the problem? Regards, Malvin Best Regards ~ Bilal J.Mahdi Sat-Link Inc
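[Editor's note: a sketch of the no-cache site list Bilal describes, as a hypothetical squid.conf fragment; the ACL name is made up, the domain is taken from the original question:]

```conf
# ACL matching the problem site and all of its subdomains
acl nocache_sites dstdomain .grasshopper3d.com

# Never store responses for those sites; every request is
# fetched fresh from the origin server
cache deny nocache_sites
```

Reconfigure or restart Squid after adding this, and note it only stops caching; if the breakage is caused by interception itself rather than a stale cached object, the site will still render incorrectly.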
[squid-users] Website is not displayed correctly
Hi List, We are running Squid Proxy in transparent mode and we have recently encountered a problem accessing the http://www.grasshopper3d.com/ website, where the site is not displayed correctly: images on that website are not displayed and text is not formatted. I also tried accessing the site through my extra router and the site displayed correctly. What do you think is causing the problem? Regards, Malvin
Re: [squid-users] Installing Squid from Binary
On 18/08/11 17:09, Justin Lawler wrote: Hi, We want to upgrade squid to a greater number of FDs. We want to do a build in an off-line environment to do testing on, and then deploy that executable in production. Is this possible? From all the articles I've seen so far, the only way to install squid is to rebuild on the same machine, then do a 'make install'.

The machine is not a limit. Otherwise OS distributors like Microsoft, Apple or Debian would not be able to provide binary packages. The only fixed requirement is that the same CPU architecture and a compatible software environment are used for the build. For example you can't build an i686 CPU version of Squid and run it on an ARM CPU. The CPU requirement is more flexible at build time than most people think, though. If you don't have a suitable machine for building on, look up cross-compiling. It's slightly tricky with Squid (due to bugs in our code) but compilers often have options to build code for other CPUs.

The software environment requirement is rather rigid. It's so that the libraries etc. you will use on the destination machine can be detected properly by ./configure during the build. You can eliminate most features, but not add any unless the right build dependencies are present.

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.14 Beta testers wanted for 3.2.0.10
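[Editor's note: a sketch of the build-elsewhere, deploy-to-production workflow Amos describes, assuming matching CPU architecture and compatible library versions on both hosts; the prefix, FD count, paths and host name are illustrative:]

```shell
# On the build machine (same arch and compatible libs as production):
./configure --prefix=/usr/local/squid --with-filedescriptors=16384
make

# Install into a staging directory instead of the live filesystem
make install DESTDIR=/tmp/squid-stage

# Package the staged tree and ship it to the production host
tar -C /tmp/squid-stage -czf squid-build.tar.gz .
scp squid-build.tar.gz production:/tmp/

# On the production host, unpack onto the real filesystem:
#   tar -C / -xzf /tmp/squid-build.tar.gz
```

DESTDIR keeps the build machine clean while preserving the configured --prefix paths that the binary expects to find at runtime on production.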
Re: [squid-users] squid performance tunning
2011/8/18 Chen Bangzhong :
> My cached objects will expire after 10 minutes.
>
> Cache-Control:max-age=600

Static content like pictures should cache longer, like 1 day, 86400.

> I don't know why there are so many disk writes and there are so many
> objects on disk.
>
> In addition, "Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%"
> is very low.

Maybe caused by disk read timeout. You used too much disk space; you can shrink it little by little, until the disk busy percentage drops to 80% or lower.

> Can I increase the cache_mem? or not use disk cache at all?

I used all the memory I can use :-)
[squid-users] anyone describe the model of how Squid manage the memory?
hi, all: I am new to Squid, and I was assigned to learn how Squid manages memory, in order to make the best use of Squid. I have some questions about Squid's memory management: 1, if two files have the same content, such as two Javascript files, how does Squid deal with the two files in memory? Does it treat them as one file and keep the two file names somewhere? 2, how does Squid define the level of hot data? What does the distribution strategy of hot data look like, and how can I affect that strategy? thanks.