Re: [squid-users] Re: Effort for port 3.1 to windows?
On 26/04/11 14:57, Yucong Sun (叶雨飞) wrote:
> Hi Amos,
>
> You can get VS2010 Express for free here:
> http://www.microsoft.com/express/Downloads/#2010-Visual-CPP
> The Win SDK is free as well (http://msdn.microsoft.com/en-us/windows/bb980924)
> but it's not required to build. You can get the other VC Express versions
> free as well.
>
> There have been some changes in the C runtime since VS2005 which mess
> things up, including redefinitions of EAGAIN, EWOULDBLOCK, etc.:
> http://msdn.microsoft.com/en-us/library/8ef0s5kh(v=vs.80).aspx
> http://msdn.microsoft.com/en-us/library/ms737828(v=vs.85).aspx

Thank you for the thought. It's not a matter of having the IDE itself, though; the main problem is that I have no hardware capable of running it. The boxes here with enough resources to run recent versions of Windows (and the IDE) are all running Debian or Ubuntu as CDN servers and for media editing. Guido provides a VM for us to test syntax bugs with, but those two blocker bugs are in the way of doing more work.

> I'm just generally frustrated about not having something that at least
> builds; I can probably risk a couple of nights to try to get things started.

Anything will help. Thank you.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On Mon, 25 Apr 2011, Alex Rousskov wrote:
> On 04/25/2011 06:14 PM, da...@lang.hm wrote:
>>>> if that regains the speed and/or scalability it would point fingers
>>>> fairly conclusively at the DNS components.
>>>>
>>>> this is the only thing that I can think of that should be shared between
>>>> multiple workers processing ACLs
>>>
>>> but it is _not_ currently shared from Squid point of view.
>>
>> Ok, I was assuming from the description of things that there would be
>> one DNS process that all the workers would be accessing. from the way
>> it's described in the documentation it sounds as if it's already a
>> separate process
>
> I would like to fix that documentation, but I cannot find what phrase led
> you to the above conclusion. The SmpScale wiki page says:
>
>> Currently, Squid workers do not share and do not synchronize other
>> resources or services, including:
>>
>> * DNS caches (ipcache and fqdncache);
>
> So that seems to be correct and clear. Which documentation are you
> referring to?

ahh, I missed that. I was going by the description of the config options that configure and disable the DNS cache (they don't say anything about SMP mode, but I read them to imply that the squid-internal DNS cache was a separate thread/process).

David Lang
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On 04/25/2011 06:14 PM, da...@lang.hm wrote:
>>> if that regains the speed and/or scalability it would point fingers
>>> fairly conclusively at the DNS components.
>>>
>>> this is the only thing that I can think of that should be shared between
>>> multiple workers processing ACLs
>>
>> but it is _not_ currently shared from Squid point of view.
>
> Ok, I was assuming from the description of things that there would be
> one DNS process that all the workers would be accessing. from the way
> it's described in the documentation it sounds as if it's already a
> separate process

I would like to fix that documentation, but I cannot find what phrase led you to the above conclusion. The SmpScale wiki page says:

> Currently, Squid workers do not share and do not synchronize other
> resources or services, including:
>
> * DNS caches (ipcache and fqdncache);

So that seems to be correct and clear. Which documentation are you referring to?

Thank you, Alex.
Re: [squid-users] Re: Effort for port 3.1 to windows?
Hi Amos,

You can get VS2010 Express for free here:
http://www.microsoft.com/express/Downloads/#2010-Visual-CPP
The Win SDK is free as well (http://msdn.microsoft.com/en-us/windows/bb980924) but it's not required to build. You can get the other VC Express versions free as well.

There have been some changes in the C runtime since VS2005 which mess things up, including redefinitions of EAGAIN, EWOULDBLOCK, etc.:
http://msdn.microsoft.com/en-us/library/8ef0s5kh(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/ms737828(v=vs.85).aspx

I'm just generally frustrated about not having something that at least builds; I can probably risk a couple of nights to try to get things started.

Cheers.

On Mon, Apr 25, 2011 at 5:29 PM, Amos Jeffries wrote:
> On 26/04/11 11:02, Yucong Sun (叶雨飞) wrote:
>>
>> Well, thanks for the pointer, but as far as I can see there, it's an
>> installer; how did you generate the binary?
>>
>> What I'm really hoping for is to compile and run 3.1 normally on Windows.
>> Well, 2.7 may cut it as well since I need mostly existing features,
>> but I can't get it to compile correctly either. And the current 2.7
>> Windows version requires compiling under VC6, which is just impossible
>> these days.
>>
>> I'm surprised no one has been taking on squid on Windows seriously (or
>> I didn't find it), but by far most proxy software I tried on Windows
>> has different problems, while squid is the best atm I think.
>
> There is work underway (slowly) on getting Squid up to at least build again
> on Windows.
>
> The more active upstream devs (myself and Francesco Chemolli) only have
> access to a MinGW test machine and are blocked by a few bugs which need to
> be diagnosed and patched by someone with direct access to a MinGW
> setup (http://bugs.squid-cache.org/show_bug.cgi?id=3203 and
> http://bugs.squid-cache.org/show_bug.cgi?id=3043).
> Guido Serassio has better access but no time to work on it (sponsorship to
> recompense for his taking time off work will help a lot there, contact him
> about it).
>
> AFAIK none of us have access to recent VC versions (Guido mentioned
> something about the new IDE versions causing major pains in the build
> process).
>
> Patches on 3.HEAD code to get Windows going are *very* welcome.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.12
> Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?
On 26/04/11 05:27, Jenny Lee wrote:
> HALF-BAKED:
>
> acl OFFICE src 1.1.1.1
> request_header_access User-Agent allow OFFICE
> request_header_access User-Agent deny all
> request_header_replace User-Agent BOGUS AGENT
>
> [DIRECT works as expected for OFFICE -- no modifications. However, UA for
> OFFICE is replaced as soon as the connection is forwarded to a peer]
>
> HALF-BAKED:
>
> acl OFFICE src 1.1.1.1
> cache_peer 2.2.2.2 parent 2 0 proxy-only no-query name=PEER2
> acl PEER2 peername PEER2
> request_header_access User-Agent allow PEER2 OFFICE
> request_header_access User-Agent deny PEER2 !OFFICE
> request_header_access User-Agent deny all
> request_header_replace User-Agent BOGUS AGENT
>
> [all and every combination of ALLOW/DENY/PEER2/OFFICE... does not work]
>
> WORKS WHEN GOING THROUGH A PEER:
>
> request_header_access User-Agent allow PEER2
> request_header_access User-Agent deny all
> request_header_replace User-Agent BOGUS AGENT
>
> It seems to me that ACL SRC is NEVER checked when going to a peer.
>
> WHAT I WANT TO DO:
>
> acl OFFICE src 1.1.1.1
> request_header_access User-Agent allow OFFICE
> request_header_access User-Agent deny all
> request_header_replace User-Agent BOGUS AGENT
>
> [OFFICE UA should not be modified whether going direct or through a peer]
>
> Thanks,
> Jenny
>
> PS: Running 3.2.0.7 on production and it works well and reliably. The UA
> issue above is present on both 3.2.0.1 and 3.2.0.7.

Okay, this is going to need a cache.log trace for "debug_options 28,9" to see what is being tested where.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
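[Editor's note: for reference, a minimal sketch of how the trace Amos asks for could be enabled. Section 28 is Squid's access-control debug section; the `ALL,1` part keeps the rest of cache.log quiet and is my addition, adjust to taste:]

```
# squid.conf sketch: verbose ACL tracing written to cache.log
debug_options ALL,1 28,9
```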
Re: [squid-users] Reverse Proxy on Squid to port 8080
On 26/04/11 01:11, Ali Jawad wrote:
> Hi,
> I have got a reverse proxy that is working just fine. It accepts requests
> on port 443 and port 80 and ONLY sends traffic upstream to port 80 to the
> apache server listening on localhost. I use the following config:
>
> https_port 10.14.1.72:443 cert=/etc/squid/self_certs/site.crt
> key=/etc/squid/self_certs/site.key defaultsite=site vhost
> cache_peer 127.0.0.1 parent 443 80 no-query originserver login=PASS
> http_port 10.14.1.72:80 vhost

This configuration does not match what you stated above, and is broken.

It does accept requests on port 443 and port 80. However, it sends non-encrypted HTTP traffic upstream to port *443* on the apache server listening on localhost. It also sends UDP packets with ICP queries to port 80 on the apache server (which does not handle ICP).

> My problem is the following: the site should act differently in some
> occasions based on whether http or https was requested. So my idea is to
> set up a second http vhost on apache listening on port 8080, and on that
> vhost I would serve the https code. So is it possible to use Squid to:
>
> send traffic destined for port 443 to localhost:8080, and
> send traffic destined for port 80 to localhost:80?
>
> Any hints/comments are highly appreciated.

acl HTTP proto HTTP
acl HTTPS proto HTTPS

Then use cache_peer_access with those ACLs to test the protocol and choose which peer each request is sent to.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
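[Editor's note: Amos's hint could be fleshed out roughly as below. This is an untested sketch: the ports and certificate paths are taken from the thread, the peer names and the cache_peer_access wiring are my reading of his suggestion.]

```
# sketch: two origin peers, selected by the protocol the client used
https_port 10.14.1.72:443 cert=/etc/squid/self_certs/site.crt \
    key=/etc/squid/self_certs/site.key defaultsite=site vhost
http_port 10.14.1.72:80 vhost

cache_peer 127.0.0.1 parent 80   0 no-query originserver login=PASS name=plain
cache_peer 127.0.0.1 parent 8080 0 no-query originserver login=PASS name=secure

acl HTTP  proto HTTP
acl HTTPS proto HTTPS
cache_peer_access plain  allow HTTP
cache_peer_access plain  deny  all
cache_peer_access secure allow HTTPS
cache_peer_access secure deny  all
```

Note the ICP port is 0 with no-query, so no ICP packets are sent to apache.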
Re: [squid-users] squid 3.1 android ebuddy
On 26/04/11 12:19, Gerson Barreiros wrote:
> Hi, I'm using Squid 3.1.12.1 (Amos ppa-maverick) and I've got a weird
> problem. Users with Android 2 can't get 'ebuddy' to work, but for iPhone
> users it works. (?)
>
> I've made an exception on the firewall (for 38.99.73.0/24) so ebuddy
> connections skip squid; now it works for both.
>
> Anyone know anything related? My squid.conf doesn't block anything
> related to ebuddy.

Can you diagnose anything about it from cache.log and/or access.log? From the app error message?

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: [squid-users] Re: Effort for port 3.1 to windows?
On 26/04/11 11:02, Yucong Sun (叶雨飞) wrote:
> Well, thanks for the pointer, but as far as I can see there, it's an
> installer; how did you generate the binary?
>
> What I'm really hoping for is to compile and run 3.1 normally on Windows.
> Well, 2.7 may cut it as well since I need mostly existing features, but I
> can't get it to compile correctly either. And the current 2.7 Windows
> version requires compiling under VC6, which is just impossible these days.
>
> I'm surprised no one has been taking on squid on Windows seriously (or I
> didn't find it), but by far most proxy software I tried on Windows has
> different problems, while squid is the best atm I think.

There is work underway (slowly) on getting Squid up to at least build again on Windows.

The more active upstream devs (myself and Francesco Chemolli) only have access to a MinGW test machine and are blocked by a few bugs which need to be diagnosed and patched by someone with direct access to a MinGW setup (http://bugs.squid-cache.org/show_bug.cgi?id=3203 and http://bugs.squid-cache.org/show_bug.cgi?id=3043).

Guido Serassio has better access but no time to work on it (sponsorship to recompense for his taking time off work will help a lot there, contact him about it).

AFAIK none of us have access to recent VC versions (Guido mentioned something about the new IDE versions causing major pains in the build process).

Patches on 3.HEAD code to get Windows going are *very* welcome.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
[squid-users] squid 3.1 android ebuddy
Hi, I'm using Squid 3.1.12.1 (Amos ppa-maverick) and I've got a weird problem.

Users with Android 2 can't get 'ebuddy' to work, but for iPhone users it works. (?)

I've made an exception on the firewall (for 38.99.73.0/24) so ebuddy connections skip squid; now it works for both.

Anyone know anything related? My squid.conf doesn't block anything related to ebuddy.
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On Mon, 25 Apr 2011, Alex Rousskov wrote:
> On 04/25/2011 05:31 PM, da...@lang.hm wrote:
>> On Mon, 25 Apr 2011, da...@lang.hm wrote:
>>> On Mon, 25 Apr 2011, Alex Rousskov wrote:
>>>> On 04/14/2011 09:06 PM, da...@lang.hm wrote:
>>>>> In addition, there seems to be some sort of locking between the
>>>>> multiple worker processes in 3.2 when checking the ACLs
>>>>
>>>> There are pretty much no locks in the current official SMP code. This
>>>> will change as we start adding shared caches in a week or so, but even
>>>> then the ACLs will remain lock-free. There could be some internal
>>>> locking in the 3rd-party libraries used by ACLs (regex and such), but I
>>>> do not know much about them.
>>>
>>> what are the 3rd party libraries that I would be using?
>
> See "ldd squid". Here is a sample based on a randomly picked Squid:
> libnsl, libresolv, libstdc++, libgcc_s, libm, libc, libz, libepol
>
> Please note that I am not saying that any of these have problems in an SMP
> environment. I am only saying that Squid itself does not lock anything at
> runtime, so if our suspect is SMP-related locks, they would have to reside
> elsewhere. The other possibility is that we should suspect something else,
> of course. IMHO, it is more likely to be something else: after all, Squid
> does not use threads, where such problems are expected.
>
> BTW, do you see more-or-less even load across CPU cores? If not, you may
> need a patch that we find useful on older Linux kernels. It is discussed
> in the "Will similar workers receive similar amount of work?" section of
> http://wiki.squid-cache.org/Features/SmpScale

the load is pretty even across all workers.

with the problems described on that page, I would expect uneven utilization at low loads, but at high loads (with the workers busy servicing requests rather than waiting for new connections) I would expect the work to even out (and the types of hacks described in that section to end up costing performance, but not in a way that would scale with the ACL processing load)

>> one thought I had is that this could be locking on name lookups. how
>> hard would it be to create a quick patch that would bypass the name
>> lookups entirely and only do the lookups by IP?
>
> I did not realize your ACLs use DNS lookups. Squid internal DNS code does
> not have any runtime SMP locks. However, the presence of DNS lookups
> increases the number of suspects.

they don't; everything in my test environment is by IP. But I've seen other software that still runs everything through name lookups, even if what's presented to the software (both in what's requested and in the ACLs) is all done by IPs. It's an easy way to bullet-proof the input (if it's a name it gets resolved; if it's an IP, the IP comes back as-is, and it works for IPv4 and IPv6 with no need for logic that looks at the value and tries to figure out whether the user intended to type a name or an IP). I don't know how squid works internally (it's a pretty large codebase, and I haven't tried to really dive into it), so I don't know if squid does this or not.

> A patch you propose does not sound difficult to me, but since I cannot
> contribute such a patch soon, it is probably better to test with ACLs that
> do not require any DNS lookups instead.
>
>> if that regains the speed and/or scalability it would point fingers
>> fairly conclusively at the DNS components.
>>
>> this is the only thing that I can think of that should be shared between
>> multiple workers processing ACLs
>
> but it is _not_ currently shared from Squid point of view.

Ok, I was assuming from the description of things that there would be one DNS process that all the workers would be accessing. from the way it's described in the documentation it sounds as if it's already a separate process, so I was thinking that it was possible that if each ACL IP address is being put through a single DNS process, I could be running into contention on that process (and having to do name lookups for IPv6 and then falling back to IPv4 would explain the severe performance hit far more than the difference between IPs being 128-bit values instead of 32-bit values)

David Lang
Re: [squid-users] Squid and Splash page
On 26/04/11 02:54, Daniel Shelton wrote:
> Hello again all,
>
> First of all, thanks to Amos and Andrew for replying to my previous
> question. I have set up squid_session with the following in squid.conf.
> The result is attached below also. For whatever reason the squid sessions
> are crashing and I am not sure why. The goal would be to display a splash
> page to the user and then release them after that. ("Catch and Release")
>
> Does anyone know why the sessions are exiting?
>
> Thanks,
>
> -- squid.conf --
> external_acl_type session ttl=60 %SRC /usr/lib64/squid/squid_session -t 7200 -b /etc/squid/session.db

The -d option should record the helper actions in cache.log to confirm what is going on inside it.

I suspect it is permission to write under /etc (which is not safe for an app to do). Try with /var/run/squid/session.db or similar.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
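[Editor's note: a sketch of the suggested change, with the splash-page wiring filled in along the lines of the standard session-helper pattern. The db path, ACL name, and splash URL here are illustrative assumptions, not from the thread.]

```
# move the session db out of /etc to a location squid's user can write;
# add -d while debugging so the helper logs its actions to cache.log
external_acl_type session ttl=60 %SRC /usr/lib64/squid/squid_session \
    -t 7200 -b /var/run/squid/session.db

acl existing_session external session
http_access deny !existing_session

# first-time clients (no session yet) get redirected to the splash page
deny_info http://example.local/splash.html existing_session
```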
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On 04/25/2011 05:31 PM, da...@lang.hm wrote:
> On Mon, 25 Apr 2011, da...@lang.hm wrote:
>> On Mon, 25 Apr 2011, Alex Rousskov wrote:
>>> On 04/14/2011 09:06 PM, da...@lang.hm wrote:
>>>> In addition, there seems to be some sort of locking between the
>>>> multiple worker processes in 3.2 when checking the ACLs
>>>
>>> There are pretty much no locks in the current official SMP code. This
>>> will change as we start adding shared caches in a week or so, but even
>>> then the ACLs will remain lock-free. There could be some internal
>>> locking in the 3rd-party libraries used by ACLs (regex and such), but I
>>> do not know much about them.
>>
>> what are the 3rd party libraries that I would be using?

See "ldd squid". Here is a sample based on a randomly picked Squid:
libnsl, libresolv, libstdc++, libgcc_s, libm, libc, libz, libepol

Please note that I am not saying that any of these have problems in an SMP environment. I am only saying that Squid itself does not lock anything at runtime, so if our suspect is SMP-related locks, they would have to reside elsewhere.

The other possibility is that we should suspect something else, of course. IMHO, it is more likely to be something else: after all, Squid does not use threads, where such problems are expected.

BTW, do you see more-or-less even load across CPU cores? If not, you may need a patch that we find useful on older Linux kernels. It is discussed in the "Will similar workers receive similar amount of work?" section of
http://wiki.squid-cache.org/Features/SmpScale

> one thought I had is that this could be locking on name lookups. how
> hard would it be to create a quick patch that would bypass the name
> lookups entirely and only do the lookups by IP?

I did not realize your ACLs use DNS lookups. Squid internal DNS code does not have any runtime SMP locks. However, the presence of DNS lookups increases the number of suspects.

A patch you propose does not sound difficult to me, but since I cannot contribute such a patch soon, it is probably better to test with ACLs that do not require any DNS lookups instead.

> if that regains the speed and/or scalability it would point fingers
> fairly conclusively at the DNS components.
>
> this is the only thing that I can think of that should be shared between
> multiple workers processing ACLs

but it is _not_ currently shared from Squid point of view.

Cheers, Alex.
Re: [squid-users] Zero Sized Reply went trying FTP
On 25/04/11 23:45, Javier wrote:
> Hello:
>
> when I navigate to: ftp://novapublishers.com/... squid3 sends me a:
> Zero Sized Reply
>
> my acl is:
> acl ftp proto ftp
> http_access allow ftp
>
> my squid3 version is: 3.1.6

Please try the 3.1.12 version. There have been two causes of Zero Sized Reply fixed recently. (A third known cause remains at present.)

Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On Mon, 25 Apr 2011, da...@lang.hm wrote:
> On Mon, 25 Apr 2011, Alex Rousskov wrote:
>> On 04/14/2011 09:06 PM, da...@lang.hm wrote:
>>> In addition, there seems to be some sort of locking between the
>>> multiple worker processes in 3.2 when checking the ACLs
>>
>> There are pretty much no locks in the current official SMP code. This
>> will change as we start adding shared caches in a week or so, but even
>> then the ACLs will remain lock-free. There could be some internal
>> locking in the 3rd-party libraries used by ACLs (regex and such), but I
>> do not know much about them.
>
> what are the 3rd party libraries that I would be using?

one thought I had is that this could be locking on name lookups. how hard would it be to create a quick patch that would bypass the name lookups entirely and only do the lookups by IP?

if that regains the speed and/or scalability it would point fingers fairly conclusively at the DNS components.

this is the only thing that I can think of that should be shared between multiple workers processing ACLs

David Lang
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On Mon, 25 Apr 2011, Alex Rousskov wrote:
> On 04/14/2011 09:06 PM, da...@lang.hm wrote:
>> Ok, I finally got a chance to test 2.7STABLE9
>>
>> it performs about the same as squid 3.0, possibly a little better.
>>
>> with my somewhat stripped down config (smaller regex patterns, replacing
>> CIDR blocks and names that would need to be looked up in /etc/hosts with
>> individual IP addresses)
>>
>> 2.7 gives ~4800 requests/sec
>> 3.0 gives ~4600 requests/sec
>> 3.2.0.6 with 1 worker gives ~1300 requests/sec
>> 3.2.0.6 with 5 workers gives ~2800 requests/sec
>
> Glad you did not see a significant regression between v2.7 and v3.0. We
> have heard rather different stories. Every environment is different, and
> many lab tests are misguided, of course, but it is still good to hear
> positive reports.
>
> The difference between v3.2 and v3.0 is known and has been discussed on
> squid-dev. A few specific culprits are also known, but more need to be
> identified. We are working on identifying these performance bugs and
> reducing that difference.

let me know if there are any tests that I can run that will help you.

> As for the 1 versus 5 worker difference, it seems to be specific to your
> environment (as discussed below).
>
>> the numbers for 3.0 are slightly better than what I was getting with the
>> full ruleset, but the numbers for 3.2.0.6 are pretty much exactly what I
>> got from the last round of tests (with either the full or simplified
>> ruleset)
>>
>> so 3.1 and 3.2 are a very significant regression from 2.7 or 3.0, and
>> the ability to use multiple worker processes in 3.2 doesn't make up for
>> this.
>>
>> the time taken seems to almost all be in the ACL evaluation, as
>> eliminating all the ACLs takes 1 worker with 3.2 up to 4200 requests/sec.
>
> If ACLs are the major culprit in your environment, then this is most
> likely not a problem in Squid source code. AFAIK, there are no locks or
> other synchronization primitives/overheads when it comes to Squid ACLs.
> The solution may lie in optimizing some 3rd-party libraries (used by
> ACLs) or in optimizing how they are used by Squid, depending on what ACLs
> you use. As far as Squid-specific code is concerned, you should see
> nearly linear ACL scaling with the number of workers.

given that my ACLs are IP/port matches or regex matches (and I've tested replacing the regex matches with IP matches with no significant change in performance), what components would be used?

>> one theory is that even though I have IPv6 disabled on this build, the
>> added space and more expensive checks needed to compare IPv6 addresses
>> instead of IPv4 addresses account for the single-worker drop of ~66%.
>> that seems rather expensive, even though there are 293 http_access lines
>> (and one of them uses external file contents in its acls, so it's a
>> total of ~2400 source/destination pairs; however, due to the ability to
>> shortcut the comparison, the number of tests that need to be done should
>> be <400)
>
> Yes, IPv6 is one of the known major performance regression culprits, but
> IPv6 ACLs should still scale linearly with the number of workers, AFAICT.
> Please note that I am not an ACL expert. I am just talking from the
> overall Squid SMP design point of view and from our testing/deployment
> experience point of view.

that makes sense and is what I would have expected, but in my case (lots of ACLs) I am seeing a definite problem with more workers not completing more work, and beyond about 5 workers I am seeing the total work being completed drop. I can't think of any reason besides locking that this may be the case.

>> In addition, there seems to be some sort of locking between the multiple
>> worker processes in 3.2 when checking the ACLs
>
> There are pretty much no locks in the current official SMP code. This
> will change as we start adding shared caches in a week or so, but even
> then the ACLs will remain lock-free. There could be some internal locking
> in the 3rd-party libraries used by ACLs (regex and such), but I do not
> know much about them.

what are the 3rd party libraries that I would be using?

David Lang

> HTH,
> Alex.

On Wed, 13 Apr 2011, Marcos wrote:
> Hi David,
>
> could you run and publish your benchmark with squid 2.7???
> i'd like to know if there is any regression between 2.7 and the 3.x series.
>
> thanks.
>
> Marcos
>
> ----- Original message -----
> From: "da...@lang.hm"
> To: Amos Jeffries
> Cc: squid-users@squid-cache.org; squid-...@squid-cache.org
> Sent: Saturday, 9 April 2011 12:56:12
> Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
>
> On Sat, 9 Apr 2011, Amos Jeffries wrote:
>> On 09/04/11 14:27, da...@lang.hm wrote:
>>> A couple more things about the ACLs used in my test
>>>
>>> all of them are allow ACLs (no deny rules to worry about precedence of)
>>> except for a deny-all at the bottom
>>>
>>> the ACL line that permits the test source to the test destination has
>>> zero overlap with the rest of the rules
>>>
>>> every rule has an IP based restriction (even the ones with url_regex
>>> are source -> URL regex)
>>>
>>> I moved the ACL that allows my test from the bottom of the ruleset to
>>> the top and
Re: Res: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On Mon, 25 Apr 2011, Marcos wrote:
> thanks for your answer David.
>
> I'm seeing a lot of features being included in squid 3.x, but it's
> getting slower as new features are added.

that's unfortunately fairly normal.

> i think squid 3.2 with 1 worker should be as fast as 2.7, but it's
> getting slower and hungrier.

that's one major problem, but the fact that the ACL matching isn't scaling with more workers I think is what's killing us.

1 3.2 worker is ~1/3 the speed of 2.7, but with the easy availability of 8+ real cores (not hyperthreaded 'fake' cores), you should still be able to get ~3x the performance of 2.7 by using 3.2. unfortunately that's not what's happening, and we end up topping out around 1/2-2/3 the performance of 2.7

David Lang

> Marcos
>
> ----- Original message -----
> From: "da...@lang.hm"
> To: Marcos
> Cc: Amos Jeffries; squid-users@squid-cache.org; squid-...@squid-cache.org
> Sent: Friday, 22 April 2011 15:10:44
> Subject: Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
>
> ping, I haven't seen a response to this additional information that I
> sent out last week.
>
> squid 3.1 and 3.2 are a significant regression in performance from squid
> 2.7 or 3.0
>
> David Lang
>
> On Thu, 14 Apr 2011, da...@lang.hm wrote:
>> Subject: Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
>>
>> Ok, I finally got a chance to test 2.7STABLE9
>>
>> it performs about the same as squid 3.0, possibly a little better.
>>
>> with my somewhat stripped down config (smaller regex patterns, replacing
>> CIDR blocks and names that would need to be looked up in /etc/hosts with
>> individual IP addresses)
>>
>> 2.7 gives ~4800 requests/sec
>> 3.0 gives ~4600 requests/sec
>> 3.2.0.6 with 1 worker gives ~1300 requests/sec
>> 3.2.0.6 with 5 workers gives ~2800 requests/sec
>>
>> the numbers for 3.0 are slightly better than what I was getting with the
>> full ruleset, but the numbers for 3.2.0.6 are pretty much exactly what I
>> got from the last round of tests (with either the full or simplified
>> ruleset)
>>
>> so 3.1 and 3.2 are a very significant regression from 2.7 or 3.0, and
>> the ability to use multiple worker processes in 3.2 doesn't make up for
>> this.
>>
>> the time taken seems to almost all be in the ACL evaluation, as
>> eliminating all the ACLs takes 1 worker with 3.2 up to 4200 requests/sec.
>>
>> one theory is that even though I have IPv6 disabled on this build, the
>> added space and more expensive checks needed to compare IPv6 addresses
>> instead of IPv4 addresses account for the single-worker drop of ~66%.
>> that seems rather expensive, even though there are 293 http_access lines
>> (and one of them uses external file contents in its acls, so it's a
>> total of ~2400 source/destination pairs; however, due to the ability to
>> shortcut the comparison, the number of tests that need to be done should
>> be <400)
>>
>> In addition, there seems to be some sort of locking between the multiple
>> worker processes in 3.2 when checking the ACLs, as the test with almost
>> no ACLs scales close to 100% per worker while with the ACLs it scales
>> much more slowly, and above 4-5 workers actually drops off dramatically
>> (to the point where with 8 workers the throughput is down to about what
>> you get with 1-2 workers)
>>
>> I don't see any conceptual reason why the ACL checks of the different
>> worker threads should impact each other in any way, let alone in a way
>> that limits scalability to ~4 workers before adding more workers is a
>> net loss.
>>
>> David Lang
>>
>> On Wed, 13 Apr 2011, Marcos wrote:
>>> Hi David,
>>>
>>> could you run and publish your benchmark with squid 2.7???
>>> i'd like to know if there is any regression between 2.7 and the 3.x
>>> series.
>>>
>>> thanks.
>>>
>>> Marcos
>>>
>>> ----- Original message -----
>>> From: "da...@lang.hm"
>>> To: Amos Jeffries
>>> Cc: squid-users@squid-cache.org; squid-...@squid-cache.org
>>> Sent: Saturday, 9 April 2011 12:56:12
>>> Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
>>>
>>> On Sat, 9 Apr 2011, Amos Jeffries wrote:
>>>> On 09/04/11 14:27, da...@lang.hm wrote:
>>>>> A couple more things about the ACLs used in my test
>>>>>
>>>>> all of them are allow ACLs (no deny rules to worry about precedence
>>>>> of) except for a deny-all at the bottom
>>>>>
>>>>> the ACL line that permits the test source to the test destination
>>>>> has zero overlap with the rest of the rules
>>>>>
>>>>> every rule has an IP based restriction (even the ones with url_regex
>>>>> are source -> URL regex)
>>>>>
>>>>> I moved the ACL that allows my test from the bottom of the ruleset
>>>>> to the top and the resulting performance numbers were up as if the
>>>>> other ACLs didn't exist. As such it is very clear that 3.2 is
>>>>> evaluating every rule.
>>>>>
>>>>> I changed one of the url_regex rules to just match one line rather
>>>>> than a file containing 307 lines to see if that made a difference,
>>>>> and it made no significant difference. So this indicates to me that
>>>>> it's not having to fully evaluate every rule (it's able to skip
>>>>> doing the regex if the IP match doesn't work)
>>>>>
>>>>> I then changed all the acl lines that used hostnames to have IP
>>>>> addresses in them, and this also made no significant difference
>>>>>
>>>>> I then chan
Re: [squid-users] Re: Effort for port 3.1 to windows?
Well, thanks for the pointer, but as far as I can see there, it's an installer; how did you generate the binary?

What I'm really hoping for is to compile and run 3.1 normally on Windows. Well, 2.7 may cut it as well since I need mostly existing features, but I can't get it to compile correctly either. And the current 2.7 Windows version requires compiling under VC6, which is just impossible these days.

I'm surprised no one has been taking on squid on Windows seriously (or I didn't find it), but by far most proxy software I tried on Windows has different problems, while squid is the best atm I think.

Cheers.

On Mon, Apr 25, 2011 at 1:38 PM, sichent wrote:
> On 4/25/2011 9:26 PM, Yucong Sun (叶雨飞) wrote:
>>
>> Hi there,
>>
>> Is there any effort now to port 3.1 to windows?
>>
>> I know there's one for 2.7, and I've been struggling to get it to
>> compile on vs2010 and the win7 sdk.
>>
>> But it is so complicated and horribly broken by new CRT security
>> features (which can be fixed by adding some code) and Winsock
>> changes. I managed to get one build, but all internal calls got stuck
>> with WSAEWOULDBLOCK somehow.
>>
>> I know windows is not popular these days, but I would really hope to
>> see an effort to get the latest version running on windows.
>>
>> Cheers.
>
> We have an MSI project for Squid 2.7... if you need help for 3.1 with MSI
> and Wix - we can do it :)
>
> http://squidwindowsmsi.sourceforge.net/
>
> best regards,
> sich
Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
On 04/14/2011 09:06 PM, da...@lang.hm wrote:
> Ok, I finally got a chance to test 2.7STABLE9
>
> it performs about the same as squid 3.0, possibly a little better.
>
> with my somewhat stripped down config (smaller regex patterns, replacing
> CIDR blocks and names that would need to be looked up in /etc/hosts with
> individual IP addresses)
>
> 2.7 gives ~4800 requests/sec
> 3.0 gives ~4600 requests/sec
> 3.2.0.6 with 1 worker gives ~1300 requests/sec
> 3.2.0.6 with 5 workers gives ~2800 requests/sec

Glad you did not see a significant regression between v2.7 and v3.0. We have heard rather different stories. Every environment is different, and many lab tests are misguided, of course, but it is still good to hear positive reports.

The difference between v3.2 and v3.0 is known and has been discussed on squid-dev. A few specific culprits are also known, but more need to be identified. We are working on identifying these performance bugs and reducing that difference.

As for the 1 versus 5 worker difference, it seems to be specific to your environment (as discussed below).

> the numbers for 3.0 are slightly better than what I was getting with the
> full ruleset, but the numbers for 3.2.0.6 are pretty much exactly what I
> got from the last round of tests (with either the full or simplified
> ruleset)
>
> so 3.1 and 3.2 are a very significant regression from 2.7 or 3.0, and
> the ability to use multiple worker processes in 3.2 doesn't make up for
> this.
>
> the time taken seems to almost all be in the ACL evaluation, as
> eliminating all the ACLs takes 1 worker with 3.2 up to 4200 requests/sec.

If ACLs are the major culprit in your environment, then this is most likely not a problem in Squid source code. AFAIK, there are no locks or other synchronization primitives/overheads when it comes to Squid ACLs. The solution may lie in optimizing some 3rd-party libraries (used by ACLs) or in optimizing how they are used by Squid, depending on what ACLs you use.
As far as Squid-specific code is concerned, you should see nearly linear ACL scaling with the number of workers.

> one theory is that even though I have IPv6 disabled on this build, the
> added space and more expensive checks needed to compare IPv6 addresses
> instead of IPv4 addresses account for the single-worker drop of ~66%.
> that seems rather expensive, even though there are 293 http_access lines
> (and one of them uses external file contents in its acls, so it's a
> total of ~2400 source/destination pairs; however, due to the ability to
> shortcut the comparison, the number of tests that need to be done should
> be <400)

Yes, IPv6 is one of the known major performance regression culprits, but IPv6 ACLs should still scale linearly with the number of workers, AFAICT. Please note that I am not an ACL expert. I am just talking from the overall Squid SMP design point of view and from our testing/deployment experience point of view.

> In addition, there seems to be some sort of locking between the multiple
> worker processes in 3.2 when checking the ACLs

There are pretty much no locks in the current official SMP code. This will change as we start adding shared caches in a week or so, but even then the ACLs will remain lock-free. There could be some internal locking in the 3rd-party libraries used by ACLs (regex and such), but I do not know much about them.

HTH,

Alex.

>> On Wed, 13 Apr 2011, Marcos wrote:
>>
>>> Hi David,
>>>
>>> could you run and publish your benchmark with squid 2.7?
>>> i'd like to know if there is any regression between 2.7 and the 3.x series.
>>>
>>> thanks.
>>> Marcos
>>>
>>> ----- Original Message -----
>>> From: "da...@lang.hm"
>>> To: Amos Jeffries
>>> Cc: squid-users@squid-cache.org; squid-...@squid-cache.org
>>> Sent: Saturday, 9 April 2011 12:56:12
>>> Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
>>>
>>> On Sat, 9 Apr 2011, Amos Jeffries wrote:
>>> On 09/04/11 14:27, da...@lang.hm wrote:
> A couple more things about the ACLs used in my test
>
> all of them are allow ACLs (no deny rules to worry about precedence
> of)
> except for a deny-all at the bottom
>
> the ACL line that permits the test source to the test destination has
> zero overlap with the rest of the rules
>
> every rule has an IP based restriction (even the ones with
> url_regex are
> source -> URL regex)
>
> I moved the ACL that allows my test from the bottom of the ruleset to
> the top and the resulting performance numbers were up as if the other
> ACLs didn't exist. As such it is very clear that 3.2 is evaluating
> every
> rule.
>
> I changed one of the url_regex rules to just match one line rather
> than
> a file containing 307 lines to see if that made a difference, and it
> made no significant difference. So this indicates to me that it's not
> having to fully evaluate every rule (it's able to skip doing the regex
>
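David's finding that moving his allow rule to the top restored throughput is consistent with how http_access works: lines are evaluated top-down and evaluation stops at the first matching allow or deny. A minimal sketch of the reordering he describes (the ACL names and addresses here are hypothetical, not from his config):

```
# http_access lines are evaluated top-down; the first matching
# allow/deny ends the search. Putting the highest-volume allow
# rule first keeps the url_regex rules below it off the hot path.
acl testsrc src 192.0.2.0/24          # hypothetical test client range
acl testdst dst 198.51.100.10         # hypothetical test destination
http_access allow testsrc testdst     # hot path: matched first

acl badurls url_regex -i "/etc/squid/badurls.txt"   # e.g. a 307-pattern file
http_access deny badurls
http_access deny all                  # the deny-all at the bottom
```

Note that this only changes which rules are *reached*, not the per-rule cost; the regression David measures shows up even when the later rules are never matched.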
Re: [squid-users] How to diagnose race condition?
On Mon, Apr 25, 2011 at 2:41 PM, Steve Snyder wrote:
> I just upgraded from CentOS 5.5 to CentOS 5.6, while running Squid v3.1.12.1
> in both environments, and somehow created a race condition in the process.
> Besides updating the 200+ software packages that are the difference between
> 5.5 and 5.6, I configured and enabled DNSSEC on my nameserver.
>
> What I see now is that Squid started at boot time uses 100% CPU, with no
> traffic at all, and will stay that way seemingly forever. If I shut down
> Squid and restart it, all is well. So: Squid started at boot time = bad,
> Squid started post-boot = good. There is nothing unusual in either the
> system or Squid logs to suggest what the problem is.
>
> Can anyone suggest how to diagnose what Squid is doing/waiting for?
>
> Thanks.

Not precisely sure, however in general...

If you have a viable console at the time, you can trace the process activity and see what it's waiting on (what file, network port, etc). Figure out what process ID squid is, and then strace -p <pid>.

If that's not working, modify the init start script temporarily. Where it normally runs squid, modify it to log instead: strace -o /tmp/squid-strace <normal squid command>.

Quick and dirty solution to try first - move its init script to S99squid from whatever number it is now. And if you're starting it at runlevel 2, move it to the end of runlevel 3...

More generally, look at what init scripts got moved around from 5.5 to 5.6.

--
-george william herbert
george.herb...@gmail.com
[squid-users] How to diagnose race condition?
I just upgraded from CentOS 5.5 to CentOS 5.6, while running Squid v3.1.12.1 in both environments, and somehow created a race condition in the process. Besides updating the 200+ software packages that are the difference between 5.5 and 5.6, I configured and enabled DNSSEC on my nameserver. What I see now is that Squid started at boot time uses 100% CPU, with no traffic at all, and will stay that way seemingly forever. If I shut down Squid and restart it, all is well. So: Squid started at boot time = bad, Squid started post-boot = good. There is nothing unusual in either the system or Squid logs to suggest what the problem is. Can anyone suggest how to diagnose what Squid is doing/waiting for? Thanks.
[squid-users] Re: Effort for port 3.1 to windows?
On 4/25/2011 9:26 PM, Yucong Sun (叶雨飞) wrote:

Hi there,

Is there any effort now to port 3.1 to windows?

I know there's one for 2.7, and I've been struggling to get it to compile on vs2010 and the win7 sdk.

But it is so complicated and horribly broken by new CRT security features (which can be fixed by adding some code) and Winsock changes. I managed to get one build, but all internal calls got stuck with WSAEWOULDBLOCK somehow.

I know windows is not popular these days, but I would really hope to see an effort to get the latest version running on windows.

Cheers.

We have an MSI project for Squid 2.7... if you need help for 3.1 with MSI and Wix - we can do it :)

http://squidwindowsmsi.sourceforge.net/

best regards,
sich
RE: [squid-users] Re: SSLBump+DynamicSSL not working in Squid 3.2.0.7?
I experience the same problem with 3.2.0.7 on FreeBSD 8.0. On an https request to a site, the CONNECT request is sent for reqmod, but after receiving the reqmod reply squid does not proceed to make the connection to the web server. Here are the logs with debug options 93 and 28 on.

2011/04/25 15:19:15.303 kid1| ModXact.cc(696) parseHeaders: parse ICAP headers
2011/04/25 15:19:15.303 kid1| ModXact.cc(1026) parseHead: have 405 head bytes to parse; state: 0
2011/04/25 15:19:15.303 kid1| ModXact.cc(1041) parseHead: parse success, consume 405 bytes, return true
2011/04/25 15:19:15.303 kid1| ModXact.cc(1119) stopParsing: will no longer parse [FD 39;rG/RwP(ieof) job269]
2011/04/25 15:19:15.303 kid1| Adaptation::Icap::ModXact still cannot be repeated because preparing to echo content [FD 39;G/RwP(ieof)rp job269]
2011/04/25 15:19:15.303 kid1| ModXact.cc(667) disableBypass: not protecting group bypass because preparing to echo content
2011/04/25 15:19:15.304 kid1| Xaction.cc(459) setOutcome: ICAP_ECHO
2011/04/25 15:19:15.304 kid1| ModXact.cc(890) prepEchoing: cloning virgin message 0x801fd1800
2011/04/25 15:19:15.304 kid1| ModXact.cc(927) prepEchoing: cloned virgin message 0x801fd1800 to 0x801fd1f00
2011/04/25 15:19:15.304 kid1| ModXact.cc(946) prepEchoing: no virgin body to echo
2011/04/25 15:19:15.304 kid1| ModXact.cc(561) stopSending: Enter stop sending
2011/04/25 15:19:15.304 kid1| ModXact.cc(564) stopSending: Proceed with stop sending
2011/04/25 15:19:15.304 kid1| ModXact.cc(576) stopSending: will not start sending [FD 39;/RwP(ieof)rp job269]
2011/04/25 15:19:15.304 kid1| HttpRequest.cc(428) adaptHistory: made 0x802b1ba40*1 for 0x801fd1f00
2011/04/25 15:19:15.304 kid1| Adaptation::Icap::ModXact still cannot be repeated because sent headers [FD 39;/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Answer.cc(23) Forward: forwarding: 0x801fd1f00
2011/04/25 15:19:15.304 kid1| The AsyncCall Initiator::noteAdaptationAnswer constructed, this=0x802b949c0 [call49851]
2011/04/25 15:19:15.304 kid1| Initiate.cc(54) will call Initiator::noteAdaptationAnswer(0) [call49851]
2011/04/25 15:19:15.304 kid1| ModXact.cc(494) readMore: returning from readMore because reader or doneReading()
2011/04/25 15:19:15.304 kid1| Xaction.cc(305) callEnd: Adaptation::Icap::ModXact done with I/O [FD 39;/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Xaction.cc(192) closeConnection: pushing pconn [FD 39;/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Adaptation::Icap::ModXact still cannot be retried [FD 39;/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Adaptation::Icap::Xaction::noteCommRead(FD 39, data=0x801fd1118, size=405, buf=0x802a55000) ends job [/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| ModXact.cc(1189) swanSong: swan sings [/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| ModXact.cc(561) stopSending: Enter stop sending
2011/04/25 15:19:15.304 kid1| Initiate.cc(36) swanSong: swan sings [/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Initiate.cc(43) swanSong: swan sang [/RwP(ieof)rpS job269]
2011/04/25 15:19:15.304 kid1| Adaptation::Icap::ModXact destructed, this=0x801fd1118 [icapxjob269]
2011/04/25 15:19:15.304 kid1| HttpRequest.cc(67) ~HttpRequest: destructed, this=0x801fd0a00
2011/04/25 15:19:15.304 kid1| AsyncJob destructed, this=0x801fd1728 type=Adaptation::Icap::ModXact [job269]
2011/04/25 15:19:15.304 kid1| AsyncJob.cc(138) callEnd: Adaptation::Icap::Xaction::noteCommRead(FD 39, data=0x801fd1118, size=405, buf=0x802a55000) ended 0x801fd1728
2011/04/25 15:19:15.304 kid1| leaving Adaptation::Icap::Xaction::noteCommRead(FD 39, data=0x801fd1118, size=405, buf=0x802a55000)
2011/04/25 15:19:15.304 kid1| entering Initiator::noteAdaptationAnswer(0)
2011/04/25 15:19:15.304 kid1| AsyncCall.cc(32) make: make call Initiator::noteAdaptationAnswer [call49851]
2011/04/25 15:19:15.304 kid1| Adaptation::Icap::ModXactLauncher status in: [ job268]
2011/04/25 15:19:15.304 kid1| Launcher.cc(56) noteAdaptationAnswer: launches: 1 answer: 0
2011/04/25 15:19:15.304 kid1| The AsyncCall Initiator::noteAdaptationAnswer constructed, this=0x802b94c00 [call49854]
2011/04/25 15:19:15.304 kid1| Initiate.cc(54) will call Initiator::noteAdaptationAnswer(0) [call49854]
2011/04/25 15:19:15.304 kid1| Initiator::noteAdaptationAnswer(0) ends job [ job268]
2011/04/25 15:19:15.304 kid1| ModXact.cc(1875) swanSong: swan sings
2011/04/25 15:19:15.304 kid1| Initiate.cc(36) swanSong: swan sings [ job268]
2011/04/25 15:19:15.304 kid1| Initiate.cc(43) swanSong: swan sang [ job268]
2011/04/25 15:19:15.304 kid1| AsyncJob destructed, this=0x8029978b0 type=Adaptation::Icap::ModXactLauncher [job268]
2011/04/25 15:19:15.304 kid1| AsyncJob.cc(138) callEnd: Initiator::noteAdaptationAnswer(0) ended 0x8029978b0
2011/04/25 15:19:15.304 kid1| leaving Initiator::noteAdaptationAnswer(0)
2011/04/25 15:19:15.304 kid1| entering Initiator::noteAdaptationAnswer(0)
2011/04/25 15:19:15.304 kid1| AsyncCall.cc(32) make: make c
[squid-users] Effort for port 3.1 to windows?
Hi there,

Is there any effort now to port 3.1 to windows?

I know there's one for 2.7, and I've been struggling to get it to compile on vs2010 and the win7 sdk.

But it is so complicated and horribly broken by new CRT security features (which can be fixed by adding some code) and Winsock changes. I managed to get one build, but all internal calls got stuck with WSAEWOULDBLOCK somehow.

I know windows is not popular these days, but I would really hope to see an effort to get the latest version running on windows.

Cheers.
Res: Res: [squid-users] squid 3.2.0.5 smp scaling issues
thanks for your answer David.

i'm seeing too many features being included in squid 3.x, but it's getting slower as new features are added. i think squid 3.2 with 1 worker should be as fast as 2.7, but it's getting slower and hungrier.

Marcos

----- Original Message -----
From: "da...@lang.hm"
To: Marcos
Cc: Amos Jeffries; squid-users@squid-cache.org; squid-...@squid-cache.org
Sent: Friday, 22 April 2011 15:10:44
Subject: Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues

ping, I haven't seen a response to this additional information that I sent out last week.

squid 3.1 and 3.2 are a significant regression in performance from squid 2.7 or 3.0

David Lang

On Thu, 14 Apr 2011, da...@lang.hm wrote:
> Subject: Re: Res: [squid-users] squid 3.2.0.5 smp scaling issues
>
> Ok, I finally got a chance to test 2.7STABLE9
>
> it performs about the same as squid 3.0, possibly a little better.
>
> with my somewhat stripped down config (smaller regex patterns, replacing CIDR
> blocks and names that would need to be looked up in /etc/hosts with individual
> IP addresses)
>
> 2.7 gives ~4800 requests/sec
> 3.0 gives ~4600 requests/sec
> 3.2.0.6 with 1 worker gives ~1300 requests/sec
> 3.2.0.6 with 5 workers gives ~2800 requests/sec
>
> the numbers for 3.0 are slightly better than what I was getting with the full
> ruleset, but the numbers for 3.2.0.6 are pretty much exactly what I got from the
> last round of tests (with either the full or simplified ruleset)
>
> so 3.1 and 3.2 are a very significant regression from 2.7 or 3.0, and the
> ability to use multiple worker processes in 3.2 doesn't make up for this.
>
> the time taken seems to almost all be in the ACL evaluation, as eliminating all
> the ACLs takes 1 worker with 3.2 up to 4200 requests/sec.
>
> one theory is that even though I have IPv6 disabled on this build, the added
> space and more expensive checks needed to compare IPv6 addresses instead of IPv4
> addresses account for the single worker drop of ~66%.
> that seems rather expensive, even though there are 293 http_access lines
> (and one of them uses external file contents in its acls, so it's a total
> of ~2400 source/destination pairs; however, due to the ability to shortcut
> the comparison, the number of tests that need to be done should be <400)
>
> In addition, there seems to be some sort of locking between the multiple
> worker processes in 3.2 when checking the ACLs, as the test with almost no
> ACLs scales close to 100% per worker, while with the ACLs it scales much
> more slowly, and above 4-5 workers actually drops off dramatically (to the
> point where with 8 workers the throughput is down to about what you get
> with 1-2 workers). I don't see any conceptual reason why the ACL checks of
> the different worker threads should impact each other in any way, let
> alone in a way that limits scalability to ~4 workers before adding more
> workers is a net loss.
>
> David Lang
>
>> On Wed, 13 Apr 2011, Marcos wrote:
>>
>>> Hi David,
>>>
>>> could you run and publish your benchmark with squid 2.7?
>>> i'd like to know if there is any regression between 2.7 and the 3.x series.
>>>
>>> thanks.
>>> Marcos
>>>
>>> ----- Original Message -----
>>> From: "da...@lang.hm"
>>> To: Amos Jeffries
>>> Cc: squid-users@squid-cache.org; squid-...@squid-cache.org
>>> Sent: Saturday, 9 April 2011 12:56:12
>>> Subject: Re: [squid-users] squid 3.2.0.5 smp scaling issues
>>>
>>> On Sat, 9 Apr 2011, Amos Jeffries wrote:
>>> On 09/04/11 14:27, da...@lang.hm wrote:
> A couple more things about the ACLs used in my test
>
> all of them are allow ACLs (no deny rules to worry about precedence of)
> except for a deny-all at the bottom
>
> the ACL line that permits the test source to the test destination has
> zero overlap with the rest of the rules
>
> every rule has an IP based restriction (even the ones with url_regex are
> source -> URL regex)
>
> I moved the ACL that allows my test from the bottom of the ruleset to
> the top and the resulting performance numbers were up as if the other
> ACLs didn't exist. As such it is very clear that 3.2 is evaluating every
> rule.
>
> I changed one of the url_regex rules to just match one line rather than
> a file containing 307 lines to see if that made a difference, and it
> made no significant difference. So this indicates to me that it's not
> having to fully evaluate every rule (it's able to skip doing the regex
> if the IP match doesn't work)
>
> I then changed all the acl lines that used hostnames to have IP
> addresses in them, and this also made no significant difference
>
> I then changed all subnet matches to single IP address (just nuked /##
> throughout the config file) and this also made no significant difference.

Squid has always worked this way. It will *tes
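For reference, the multi-worker setup being benchmarked in this thread needs very little configuration in squid 3.2. A sketch using the `workers` and `cpu_affinity_map` directives (the core numbers here are hypothetical and machine-specific, not taken from David's test box):

```
# squid.conf: SMP operation in squid 3.2
workers 5

# optionally pin each worker (kid) to its own core so the scaling
# measurements are not muddied by process migration
cpu_affinity_map process_numbers=1,2,3,4,5 cores=1,2,3,4,5
```

With `workers` above 1, each worker accepts connections and evaluates its ACLs independently, which is why Alex expects near-linear ACL scaling in the reply above.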
Re: [squid-users] Re: Squid Cache flush
Have you tested to see whether any of these concerns can in fact happen? It is my understanding that Squid will ask the app server whether the content is new or not, and if the app server says that the text is new and the photo is old, then squid will ask for a new copy of the text and show the old photo.

http://wiki.squid-cache.org/SquidFaq/InnerWorkings#How_does_Squid_decide_when_to_refresh_a_cached_object.3F

Ron

On 25/04/2011 2:37 PM, Jawahar Balakrishnan (JB) wrote:

The problem is not refreshing content from the CMS. Our deployment will be Squid reverse proxying an app server that in turn talks to the CMS for the content and adds the look and feel to the content. So squid will be caching the final url. The challenge is to figure out how to get squid to be aware of any changes that might have happened to any object in that page. If it is a few objects, it will be an easy thing, but when there are large-scale changes I would like to be able to flush the cache without having to restart.

On Thu, Apr 21, 2011 at 8:24 PM, Ron Wheeler wrote:

On 21/04/2011 5:29 PM, Ron Wheeler wrote:

On 21/04/2011 1:46 PM, Jawahar Balakrishnan (JB) wrote:

If you are thinking that it is dynamic content with query strings, then it's not the case. The urls will look like directory-structured static content, but the back-end app server will translate the url and fetch the appropriate content from the CMS (alfresco).

Very few CMS or portals use query strings to select content. Our portal does not.

What software are you using? Perhaps you can get some actual experience from a current squid user.

You might get more help in the Alfresco forum. There seems to be a specific Alfresco problem for which there seems to be a solution.

http://forums.alfresco.com/en/viewtopic.php?t=11412

Ron

Have you tried a test with squid?

Ron

On Thu, Apr 21, 2011 at 1:30 PM, Ron Wheeler wrote:

If you google "squid dynamic content" you will find that by default squid does not cache dynamic content.
If it did, it would be useless as a proxy server, since that would make almost all dynamic sites unusable. There are lots of instructions about how to trick squid into caching content that it (and the web servers it proxies) think is dynamic but you know is not. Youtube videos are one example where the web server says the content is dynamic but in fact humans know that it is not.

I think that a simple test will allow you to see that your CMS content will get handled correctly. What are you using for CMS servers? Perhaps someone can give you first-hand experience or a web site to visit. I have never had to do anything to Apache and Wordpress to get it to work properly.

Don't forget that Squid and the web server can talk to each other without actually shipping content. The HTTP protocol has lots of different messages that can be quickly exchanged to make decisions about whether squid actually needs new content.

Ron

On 21/04/2011 12:31 PM, Jawahar Balakrishnan (JB) wrote:

It is all dynamic content going forward.

Scenarios where a cache flush would be required:
1) an article is updated
2) a category is updated with a list of articles.

We syndicate content to about 150 partners and will have the same article/category with a different URL. Doesn't squid cache based on the url? When you update content on your cms, how does squid know to update its cache?

JB

On Thu, Apr 21, 2011 at 12:10 PM, Ron Wheeler wrote:

Are you sure that you need to do this? Squid should be able to tell the difference between static and dynamic content.

We have a dynamic JSR-168/268 portal based on Tomcat and Jetspeed sitting behind Apache and Squid, and we have never had to intervene with Squid for 3 years. We also have lots of Wordpress CMS sites.

The user gets the latest information on every page load regardless of the URL being the same.

What exactly would cause you to trigger a flush of the cache?
Ron

On 21/04/2011 11:30 AM, Jawahar Balakrishnan (JB) wrote:

I would rather not do a restart of anything unless absolutely required.

Here are the challenges we face:

1) We are trying to deploy Squid as a reverse-proxy in front of a CMS.
2) We are trying to find a balance between keeping the content fresh and not hurting performance by frequently expiring content.

Our current reverse proxy solution allows us to flush the entire cache without having to restart, but in limited testing Squid seemed to perform much better, and we would prefer to use Squid while still retaining the ability to flush the entire cache periodically via cron or in case of an emergency.

Cache-control headers are fine and will work in the case of a limited number of objects.

Thanks
JB

On Tue, Apr 19, 2011 at 7:27 PM, Amos Jeffries wrote:

On Tue, 19 Apr 2011 11:14:55 -0400, Jawahar Balakrishnan (JB) wrote:

I am looking to deploy Squid as a reverse proxy and I had a couple of questions. We currently use Bluecoat and Sun Web proxy and i am able to d
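Squid has no single "flush the whole cache" command short of stopping squid and recreating the cache directories, but individual URLs can be invalidated on demand with the PURGE method once it is allowed in squid.conf. A sketch of the relevant access rules:

```
# squid.conf: accept PURGE requests, but only from the proxy host itself
acl Purge method PURGE
http_access allow localhost Purge
http_access deny Purge
```

With that in place, a cron job or emergency script can walk a list of known URLs and issue `squidclient -m PURGE http://www.example.com/article.html` (the URL here is hypothetical) for each object that must be dropped, which covers JB's "few objects" case without a restart.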
Re: [squid-users] Re: Squid Cache flush
The problem is not refreshing content from the CMS. Our deployment will be Squid reverse proxying an app server that in turn talks to the CMS for the content and adds the look and feel to the content. So squid will be caching the final url. The challenge is to figure out how to get squid to be aware of any changes that might have happened to any object in that page. If it is a few objects, it will be an easy thing, but when there are large-scale changes I would like to be able to flush the cache without having to restart.

On Thu, Apr 21, 2011 at 8:24 PM, Ron Wheeler wrote:
> On 21/04/2011 5:29 PM, Ron Wheeler wrote:
>>
>> On 21/04/2011 1:46 PM, Jawahar Balakrishnan (JB) wrote:
>>>
>>> If you are thinking that it is dynamic content with query strings then
>>> it's not the case. the urls will look like a directory structured
>>> static content but the back-end app server will translate the url and
>>> fetch the appropriate content from the CMS (alfresco)
>>>
>> Very few CMS or portals use query strings to select content.
>> Our portal does not.
>>
>> What software are you using? Perhaps you can get some actual experience
>> from a current squid user.
>>
> You might get more help in the Alfresco forum.
> There seems to be a specific Alfresco problem for which there seems to be a
> solution.
>
> http://forums.alfresco.com/en/viewtopic.php?t=11412
>
> Ron
>
>> Have you tried a test with squid?
>>
>> Ron
>>>
>>> On Thu, Apr 21, 2011 at 1:30 PM, Ron Wheeler wrote:
>>>> If you google "squid dynamic content" you will find that by default
>>>> squid does not cache dynamic content.
>>>> If it did, it would be useless as a proxy server since that would make
>>>> almost all dynamic sites unusable.
>>>> There are lots of instructions about how to trick squid into caching
>>>> content that it (and the web servers it proxies) think is dynamic but
>>>> you know is not.
>>>> Youtube videos are one example where the web server says the content
>>>> is dynamic but in fact humans know that it is not.
I think that a simple test will allow you to see that your CMS content will get handled correctly. What are you using for CMS servers? Perhaps someone can give you first-hand experience or a web site to visit. I have never had to do anything to Apache and Wordpress to get it to work properly.

Don't forget that Squid and the web server can talk to each other without actually shipping content. The HTTP protocol has lots of different messages that can be quickly exchanged to make decisions about whether squid actually needs new content.

Ron

On 21/04/2011 12:31 PM, Jawahar Balakrishnan (JB) wrote:
>
> It is all dynamic content going forward
>
> scenarios where a cache flush would be required
>
> 1) an article is updated
> 2) a category is updated with a list of articles.
>
> we syndicate content to about 150 partners and will have the same
> article/category with a different URL. doesn't squid cache based on the
> url?
>
> when you update content on your cms - how does squid know to update its
> cache?
>
> JB
>
> On Thu, Apr 21, 2011 at 12:10 PM, Ron Wheeler wrote:
>>
>> Are you sure that you need to do this?
>> Squid should be able to tell the difference between static and dynamic
>> content.
>>
>> We have a dynamic JSR-168/268 portal based on Tomcat and Jetspeed
>> sitting behind Apache and Squid and we have never had to intervene with
>> Squid for 3 years.
>> We also have lots of Wordpress CMS sites.
>>
>> The user gets the latest information on every page load regardless of
>> the URL being the same.
>>
>> What exactly would cause you to trigger a flush of the cache?
>>
>> Ron
>>
>> On 21/04/2011 11:30 AM, Jawahar Balakrishnan (JB) wrote:
>>>
>>> I would rather not do a restart of anything unless absolutely
>>> required
>>>
>>> Here are the challenges we face
>>>
>>> 1) We are trying to deploy Squid as a reverse-proxy in front of a CMS
>>> 2) We are trying to find a balance between keeping the content fresh
>>> and not hurting performance by frequently expiring content.
>>>
>>> Our current reverse proxy solution allows us to flush the entire cache
>>> without having to restart, but in limited testing Squid seemed to
>>> perform much better and we would prefer to use Squid but still retain
>>> the functionality of being able to flush the entire cache periodically
>>> via cron or in case of an emergency.
>>>
>>> Cache-control headers are fine and will work in the case of a limited
>>> number of objects.
>>>
>>> Thanks
>>> JB
>>>
>>> On Tue, Apr 19, 2011 at 7:27 PM, Amos Jef
RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?
> I'm a little confused by this scenario and your statement "It would be
> nice if the crawler identified itself".
> Is it spoofing an agent name identical to that on your OFFICE machines?
> Even the absence of a U-A header is identification in a way.

That was just an example. In its simplest form: do not modify the UA of the SRC ACL OFFICE machines; change the UA of everything else to a fixed value.

> AFAIK it *should* only require that config you have. If we can figure
> out whats going wrong the bug can be fixed.

I have submitted close to 20 bugs over the years (not all are from this email) and all of them have been fixed over time. I am positive this issue does not arise because of my config.

HALF-BAKED:

acl OFFICE src 1.1.1.1
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT

[DIRECT works as expected for OFFICE -- no modifications. However, the UA for OFFICE is replaced as soon as the connection is forwarded to a peer]

HALF-BAKED:

acl OFFICE src 1.1.1.1
cache_peer 2.2.2.2 parent 2 0 proxy-only no-query name=PEER2
acl PEER2 peername PEER2
request_header_access User-Agent allow PEER2 OFFICE
request_header_access User-Agent deny PEER2 !OFFICE
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT

[each and every combination of ALLOW/DENY/PEER2/OFFICE... does not work]

WORKS WHEN GOING THROUGH A PEER:

request_header_access User-Agent allow PEER2
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT

It seems to me that a src ACL is NEVER checked when going to a peer.

WHAT I WANT TO DO:

acl OFFICE src 1.1.1.1
request_header_access User-Agent allow OFFICE
request_header_access User-Agent deny all
request_header_replace User-Agent BOGUS AGENT

[OFFICE UA should not be modified whether going direct or through a peer]

Thanks,
Jenny

PS: Running 3.2.0.7 in production and it works well and reliably. The UA issue above is present on both 3.2.0.1 and 3.2.0.7.
[squid-users] Squid and Splash page
Hello again all,

First of all, thanks to Amos and Andrew for replying to my previous question. I have set up squid_session with the following in squid.conf; the result is attached below as well. For whatever reason the squid sessions are crashing and I am not sure why. The goal is to display a splash page to the user and then release them after that ("catch and release"). Does anyone know why the sessions are exiting?

Thanks,

-- squid.conf --
external_acl_type session ttl=60 %SRC /usr/lib64/squid/squid_session -t 7200 -b /etc/squid/session.db
acl new_users external session
deny_info http://172.23.1.2/main.html new_users
http_access deny !new_users

-- cache.log with some debugging --
2011/04/25 09:17:57.602| ACLList::matches: checking !new_users
2011/04/25 09:17:57.602| ACL::checklistMatches: checking 'new_users'
2011/04/25 09:17:57.602| aclMatchExternal: session("10.140.43.227") = lookup needed
2011/04/25 09:17:57.602| aclMatchExternal: "10.140.43.227": entry=@0, age=0
2011/04/25 09:17:57.602| aclMatchExternal: "10.140.43.227": queueing a call.
2011/04/25 09:17:57.602| aclMatchExternal: "10.140.43.227": return -1.
2011/04/25 09:17:57.602| ACL::ChecklistMatches: result for 'new_users' is -1
2011/04/25 09:17:57.602| ACLList::matches: result is false
2011/04/25 09:17:57.602| aclmatchAclList: 0xe1a9348 returning false (AND list entry failed to match)
2011/04/25 09:17:57.602| ACLChecklist::asyncInProgress: 0xe1a9348 async set to 1
2011/04/25 09:17:57.602| externalAclLookup: lookup in 'session' for '10.140.43.227'
2011/04/25 09:17:57.602| externalAclLookup: looking up for '10.140.43.227' in 'session'.
2011/04/25 09:17:57.602| The AsyncCall SomeCommWriteHander constructed, this=0xe1aaf70 [call35]
2011/04/25 09:17:57.602| comm_write: FD 11: sz 14: asynCall 0xe1aaf70*1
2011/04/25 09:17:57.602| helperDispatch: Request sent to session #2, 14 bytes
2011/04/25 09:17:57.602| externalAclLookup: will wait for the result of '10.140.43.227' in 'session' (ch=0xe1a9348).
2011/04/25 09:17:57.602| aclmatchAclList: async=1 nodeMatched=0 async_in_progress=1 lastACLResult() = 0 finished() = 0
2011/04/25 09:17:57.602| WARNING: session #1 (FD 9) exited
2011/04/25 09:17:57.602| leaving SomeCloseHandler(FD 9, data=0xdf80008)
2011/04/25 09:17:57.602| entering comm_close_complete(FD 9)
2011/04/25 09:17:57.603| AsyncCall.cc(32) make: make call comm_close_complete [call34]
2011/04/25 09:17:57.603| fd_close FD 9 squid_session #1
2011/04/25 09:17:57.603| leaving comm_close_complete(FD 9)
2011/04/25 09:17:57.603| commHandleWrite: FD 11: off 0, sz 14.
2011/04/25 09:17:57.603| commHandleWrite: write() returns 14
2011/04/25 09:17:57.603| commio_finish_callback: called for FD 11 (0, 0)
2011/04/25 09:17:57.603| comm.cc(165) will call SomeCommWriteHander(FD 11, data=0xdf822d8, size=14, buf=0xe1a9c80) [call35]
2011/04/25 09:17:57.603| entering SomeCommWriteHander(FD 11, data=0xdf822d8, size=14, buf=0xe1a9c80)
2011/04/25 09:17:57.603| AsyncCall.cc(32) make: make call SomeCommWriteHander [call35]
2011/04/25 09:17:57.603| leaving SomeCommWriteHander(FD 11, data=0xdf822d8, size=14, buf=0xe1a9c80)
2011/04/25 09:17:57.603| comm_read_try: FD 11, size 8191, retval 0, errno 0
2011/04/25 09:17:57.603| commio_finish_callback: called for FD 11 (0, 0)
2011/04/25 09:17:57.603| comm.cc(165) will call SomeCommReadHandler(FD 11, data=0xdf822d8, size=0, buf=0xdf823a0) [call8]
2011/04/25 09:17:57.603| entering SomeCommReadHandler(FD 11, data=0xdf822d8, size=0, buf=0xdf823a0)
2011/04/25 09:17:57.603| AsyncCall.cc(32) make: make call SomeCommReadHandler [call8]
2011/04/25 09:17:57.603| helperHandleRead: 0 bytes from session #2
2011/04/25 09:17:57.603| comm_close: start closing FD 11
2011/04/25 09:17:57.603| The AsyncCall comm_close_start constructed, this=0xdf889b0 [call36]
2011/04/25 09:17:57.603| comm.cc(1611) will call comm_close_start(FD 11) [call36]
2011/04/25 09:17:57.603| comm.cc(1195) commSetTimeout: FD 11 timeout -1
2011/04/25 09:17:57.603| comm.cc(1206) commSetTimeout: FD 11 timeout -1
2011/04/25 09:17:57.603| commCallCloseHandlers: FD 11
2011/04/25 09:17:57.603| commCallCloseHandlers: ch->handler=0xdf7fe90*1
2011/04/25 09:17:57.603| comm.cc(1460) will call SomeCloseHandler(FD 11, data=0xdf822d8) [call7]
2011/04/25 09:17:57.603| The AsyncCall comm_close_complete constructed, this=0xdf6d870 [call37]
2011/04/25 09:17:57.603| comm.cc(1643) will call comm_close_complete(FD 11) [call37]
2011/04/25 09:17:57.603| leaving SomeCommReadHandler(FD 11, data=0xdf822d8, size=0, buf=0xdf823a0)
2011/04/25 09:17:57.603| entering comm_close_start(FD 11)
2011/04/25 09:17:57.603| AsyncCall.cc(32) make: make call comm_close_start [call36]
2011/04/25 09:17:57.603| leaving comm_close_start(FD 11)
2011/04/25 09:17:57.603| entering SomeCloseHandler(FD 11, data=0xdf822d8)
2011/04/25 09:17:57.603| AsyncCall.cc(32)
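For reference (editorial note, not from the thread): the "WARNING: session #1 (FD 9) exited" line fires whenever a helper process terminates, and the external_acl_type helper protocol itself is simple to sketch. Below is a hypothetical, minimal stand-in for a non-concurrent helper in Python; the real squid_session uses a Berkeley DB file rather than an in-memory set, so this is only an illustration of the read-key/answer-OK-or-ERR loop that the helper must keep running forever.

```python
import sys

SEEN = set()  # in-memory stand-in for the session.db of the real squid_session helper


def handle(key: str) -> str:
    """Return the helper verdict for one lookup key (here, the %SRC value)."""
    if key in SEEN:
        return "OK"   # active session: ACL matches, client passes http_access
    SEEN.add(key)
    return "ERR"      # new client: ACL does not match, deny_info splash page is shown


def main() -> None:
    # A real helper would run this loop until Squid closes the pipe.
    # Returning from it (crash, read error, EOF) is exactly what produces
    # the "session #N ... exited" warning seen in the cache.log above.
    for line in sys.stdin:
        key = line.strip()
        if not key:
            continue
        sys.stdout.write(handle(key) + "\n")
        sys.stdout.flush()  # replies must not sit in a stdio buffer
```

A real deployment would invoke main() as the script entry point; the verdict logic is separated out only so it is easy to test.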
[squid-users] Reverse Proxy on Squid to port 8080
Hi,

I have got a reverse proxy that is working just fine: it accepts requests on port 443 and port 80 and ONLY sends traffic upstream to port 80, to the Apache server listening on localhost. I use the following config:

https_port 10.14.1.72:443 cert=/etc/squid/self_certs/site.crt key=/etc/squid/self_certs/site.key defaultsite=site vhost
cache_peer 127.0.0.1 parent 443 80 no-query originserver login=PASS
http_port 10.14.1.72:80 vhost

My problem is the following: the site should act differently in some situations based on whether http or https was requested. So my idea is to set up a second http vhost on Apache listening on port 8080, and on that vhost I would serve the https code. So, is it possible to use Squid to send traffic destined for port 443 to localhost:8080, and traffic destined for port 80 to localhost:80? Any hints/comments are highly appreciated.
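One possible approach (a sketch only, untested; the peer names are made up, and it assumes the `myport` ACL type available in Squid 2.6+ and 3.x) is to define two origin peers and route between them based on the port the client connected to:

```
# Hypothetical sketch: two origin peers, selected by the local port
# the request arrived on. Peer names "apache-ssl"/"apache-plain" are invented.
https_port 10.14.1.72:443 cert=/etc/squid/self_certs/site.crt key=/etc/squid/self_certs/site.key defaultsite=site vhost
http_port 10.14.1.72:80 vhost

cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache-ssl login=PASS
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=apache-plain login=PASS

# myport matches the local port the client connected to
acl came_in_https myport 443
cache_peer_access apache-ssl allow came_in_https
cache_peer_access apache-ssl deny all
cache_peer_access apache-plain deny came_in_https
cache_peer_access apache-plain allow all
```

The key idea is that cache_peer_access, driven by an ACL on the accepting port, decides which upstream each request may use.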
[squid-users] Zero Sized Reply went trying FTP
Hello: when I navigate to ftp://novapublishers.com/... squid3 sends me a "Zero Sized Reply".

My acl is:
ftp proto ftp
http_access allow ftp

My squid3 version is 3.1.6. Please help me.

Thanks, Javier
Re: [squid-users] ACL::checklistMatches WARNING: 'http_err_log' ACL is used but there is no HTTP reply -- not matching.
On 23/04/11 19:11, Edward Ting wrote: Hi Amos, you mentioned in the post below that this is "one of the design flaws we have not yet removed from Squid". Is there a bug ID already?

This could be from many places for many reasons. Care to track down where the problem is and see if it's mentioned in the bug reports? The results of your checking are likely to also help speed up a fix.

http://www.squid-cache.org/mail-archive/squid-users/201011/0432.html

acl http_err_log http_status 301-307 400-406 408-417 500-
access_log /usr/local/squid/var/logs/access.log squid http_err_log

Edward

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?
On 21/04/11 13:04, Jenny Lee wrote: I have 3.2.0.1 and unfortunately this does not work either. I will check on 3.2.0.7 (would that make a difference?).

It may. I don't recall changing anything there directly, but the passing around of request details has been fixed in a few places earlier which may affect it.

Also, do you have this part which I forgot to add? cache_peer name=X

Yes I do, Amos. What I am trying to do here is brand our connections. Suppose we have a crawler. It would be nice if the crawler identified itself as such. On the other hand, I do not want to modify the UA of our OFFICE users. They should be passed as-is.

I'm a little confused by this scenario and your statement "It would be nice if the crawler identified itself". Is it spoofing an agent name identical to that on your OFFICE machines? Even the absence of a U-A header is identification, in a way.

I thought this would be relatively easy to accomplish in Squid; after all it is very able and comes with the whole shebang and the kitchen sink, but unfortunately I have had no success so far.

AFAIK it *should* only require the config you have. If we can figure out what's going wrong, the bug can be fixed.

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
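For what it's worth, a sketch of the per-ACL branding being attempted (the crawler source range is a made-up example; note that request_header_replace itself takes no ACL, which may be part of the design flaw at play here):

```
# Hypothetical sketch: strip the User-Agent only for requests from the
# crawler's address range, leaving OFFICE users untouched.
acl crawler src 192.0.2.0/24              # example range, substitute your own
request_header_access User-Agent deny crawler

# Caution: request_header_replace applies globally to every request whose
# header was denied above -- it cannot carry its own ACL.
request_header_replace User-Agent "ExampleCrawler/1.0 (+http://example.com/bot)"
```

Because the replacement value is global per header name, the deny/replace pair only works cleanly when a single branded identity is wanted for all denied requests.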
Re: [squid-users] I would like to use Squid for caching but it is imperative that all files be cached.
On 22/04/11 04:57, Sheridan "Dan" Small wrote: First I will explain what I am trying to do. I have a number of tests (executables and scripts) which run on resources downloaded via HTTP, FTP etc. Some of these tests are third-party compiled executables which would be problematic to change. The resources can potentially be any type of file and have different file extensions. Some URLs for these files have query strings. Tests can download resources in any order; there is no way to tell which test will download any given file first. I have no control at all over the resources tested. The tests run on a server which is used for nothing else but running these tests (no human web browsing). It is imperative that all tests are run on identical files for each URL. If a file changes, the tests will be inconsistent.

As I read that, it appears that your tests are fatally broken. Abusing real live content and warping the traffic behaviour *breaks* its reliability. Using the resulting unreliable traffic for testing passes that breakage right back up to the test results.

Therefore it is imperative that all files be cached regardless of anything. I would like to use Squid for this caching.

No. It is imperative that the tests run on the correct content. Dynamic content *cannot* be tested this way. Static content has cache controls to be correctly cached without any intervention on your part.

The only things that should not be cached are HTTP error responses (500 Internal Error, 502 Service Temporarily Overloaded, and suchlike), where it is better to have some tests run rather than none in the case of a temporary server error. I guess it would be too much to ask to be able to cache over HTTPS.

What are your tests testing, and under what scenarios would they be run? (i.e. are they a test suite for some internal work, or public tests like CPAN has that can be run from anywhere online?) 
Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
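As an editorial footnote for readers searching the archives: the force-everything-into-cache knobs being asked about look roughly like the sketch below. These are hypothetical values; the caveats above about unreliable tests still stand, and HTTPS cannot be cached by a forward proxy at all.

```
# Hypothetical sketch: trade HTTP correctness for hit ratio by overriding
# origin cache controls on every URL (the "." pattern), keeping objects
# for up to a year (525600 minutes).
refresh_pattern . 525600 100% 525600 override-expire override-lastmod ignore-reload ignore-no-store ignore-private

# Keep error replies (500, 502, ...) from lingering as cached negatives.
negative_ttl 0 seconds
```

Even with these overrides, genuinely dynamic responses can differ between fetches, which is exactly the inconsistency the reply above warns about.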
Re: [squid-users] forwarded_for ? in 3.2.x
On 22/04/11 02:08, jeffrey j donovan wrote: Greetings, I have a transparent squid in a private net with a 1-1 NAT. I'm trying to get a good understanding of what my clients look like to the outside. What is the default setting for "forwarded_for" if my system is running intercept?

"forwarded_for on" is the default for all modes. The client IP *as seen by Squid* is added to the header.

To my understanding, if I leave the X-Forwarded-For header, my NATted client's IP will be the visible requestor?

Whatever the client IP making the request was will be noted as the original requestor. The internal "private" IP ranges have no meaning to external viewers. They simply indicate that there was a NAT step.

In the past did we strip that out, or is it something new?

Nothing has changed in Squid. Maybe your config or something outside Squid was playing with it.

Is there a way to have the final request return the global NAT IP of the client?

There is no such global IP for the client, at least for port 80. The client never touches the Internet when intercepted into Squid. This is one of the few benefits of interception. The Squid box is the only public TCP/IP address touching the Internet.

Currently squid seems to be the final hop, I think. Can someone clarify this option for me? Thanks, -j

192.168.1.2 ---> 192.168.1.1 [squid] 10.10.10.1 -- 10.10.10.2 [IP NAT] -- GLOBAL

Correct.

forwarded_for now has additional setting options: transparent, truncate, delete. If set to "transparent", Squid will not alter the X-Forwarded-For header in any way. If set to "delete", Squid will delete the entire X-Forwarded-For header. If set to "truncate", Squid will remove all existing X-Forwarded-For entries, and place itself as the sole entry.

... as you cut-n-pasted from the documentation, that is what it does. The "place itself as the sole entry" was incorrect. 
Fixed in recent releases to be "place the client IP as the sole entry".

Going back to your initial goal, "get a good understanding of what my clients look like to the outside"... The "outside" sees only Squid's global IP connecting to them and making requests. Smart web services that attempt to use advanced transfer features see the Via: header indicating the client and Squid capabilities, so nothing breaks on the way back. Smart security systems that attempt IP-based security (the ones that do it well, anyway) see the X-Forwarded-For header with a group of identifiers that can be combined to tell different end clients apart.

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
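As a hypothetical illustration of that last point, for the topology above (192.168.1.2 -> [squid] -> NAT -> Internet) and the default "forwarded_for on", an origin server would receive something like the following. The hostname in the Via header is invented for the example; the TCP connection itself arrives from the NAT global address.

```
GET / HTTP/1.1
Host: example.com
X-Forwarded-For: 192.168.1.2
Via: 1.1 squid.example.local (squid/3.1.12)
```

So the private 192.168.1.2 address is visible in the headers (identifying the end client relative to the proxy), while the routable source address the server logs is the NAT global IP.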
Re: [squid-users] Dependencies to build squid-3.1.12
On 22/04/11 07:42, Pablo Hrbacek wrote: Hello! I want to compile squid 3.1.12 on Debian Lenny but I don't know what packages I need. Please help me.

The Debian source package for 3.1.12 in the Unstable or Testing source repositories requires these: http://packages.debian.org/source/unstable/squid3

Any extra features you are adding may have their own dependencies, which may or may not be available on Debian.

Note: Lenny (aka "oldstable" Debian) does not contain recent enough versions of several libraries (libkrb*, libcap being the main ones) and so requires that they also be upgraded. Many of the Squid features will also not be available as advertised due to missing system components.

Configure options:
./configure --prefix=/usr/local/squid/ --enable-removal-polices=heap,lru --enable-delay-pools --disable-wccp --disable-wccpv2 --disable-snmp --enable-arp-acl --disable-htcp --enable-default-err-language=Spanish --enable-err-languages='English Spanish' --disable-http-violations --enable-linux-netfilter --disable-ident-lookups --disable-internal-dns --enable-auth --enable-auth-basic --enable-auth-digest --enable-external-acl-helpers=ldap_group --disable-translation --with-default-user=squid --with-logdir=/var/log/squid/ --with-pidfile=/var/run/squid.pid --with-large-files --with-filedescriptors=8192 --sysconfdir=/etc/squid/ --disable-loadable-modules

I have installed cpp4.3, gcc4.3, g++4.3 and binutils2.18. The configure step works fine, but I have a lot of undefined references at make time.

Make-time errors mean configure has failed even if it self-claimed "Success". See the above dependency list; g++4.3 is a good base to work on.

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: [squid-users] Default TCP/IP parameters of squid and linux and recommendations
On 22/04/11 19:32, a bv wrote: Hi, for a Linux box which runs squid and will act as a proxy, what parameters would you recommend changing, why, and how (on both Linux and squid)? Especially around the TCP/IP stack: TIME_WAIT, maximum connection limits, and so on. Regards

It would be helpful if you could estimate the amount of traffic this proxy will see (particularly new-connections-per-second sort of info), the OS this will run on, etc. If you have an existing setup the proxy is going to move into, you can glean a relatively good estimate by counting SYN packets on port 80 at peak load.

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
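For readers wanting concrete starting points: the Linux-side parameters usually involved are sysctls like the ones below (hypothetical values, not recommendations; the right numbers depend entirely on the load estimate asked for above), plus the file-descriptor limit on the Squid side.

```
# /etc/sysctl.conf -- hypothetical starting points for a busy proxy
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for upstream connections
net.ipv4.tcp_fin_timeout = 30               # shorter FIN-WAIT-2 hold time
net.core.somaxconn = 1024                   # deeper accept() backlog
fs.file-max = 65536                         # system-wide file descriptor ceiling

# and in squid.conf (Squid 3.x), raise Squid's own FD limit to match:
# max_filedescriptors 8192
```

Apply with `sysctl -p` and verify the effect under load before tuning further; each knob trades memory or connection-reuse latency for capacity.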
Re: [squid-users] How to redirect / preserve header between source and destination?
On 23/04/11 17:23, Andreas Braathen wrote: Hi, I've noticed that squid manipulates the headers/traffic from a source towards a destination. The squid is acting like a mediator with my config. How is it possible to forward the exact header received from a client without squid changing it?

There is no "retrieved" from the client. It is *sent* by the client. All headers are passed unchanged unless RFC 2616 explicitly states that they SHOULD or MUST be changed. The changes performed match RFC requirements. To make Squid do otherwise is an RFC violation and requires manual configuration. "squid -k parse" should complain/warn about all "violation" settings you have added.

To make an example: |source| <-> |squid| <-> |destination|. Source is sending a GET request to destination: "http://domain.com:443/path". Squid sees that the URL is not an HTTP request but port 443 (i.e. HTTPS), and therefore sends a SYN packet to the destination to establish an SSL connection.

Yes. IANA has reserved port 443 for the HTTPS protocol. http://www.iana.org/assignments/port-numbers

What Squid does depends on the traffic "mode":
* Forward-proxy mode should see the "http://" and label the request for HTTP outgoing.
* The various other modes will never see the "http://" part of the URL and must assume the protocol flowing over port 443 is the protocol which is supposed to be there.

I think this _only_ applies to HTTP -> HTTPS traffic and not HTTP -> HTTP.

Andreas

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
Re: [squid-users] How to tell if Squid is in Reverse or Transparent mode
On 25/04/11 02:20, Nick wrote: Hi all, I'm using someone's Squid service and I want to find out if the squid is configured in reverse mode or transparent mode. I don't have access to the configuration. Is there any way to find out?

Make a telnet connection on port 80 to another random website. If that traffic goes through the same proxy, or through another one identifiable as part of the same group, then it's an interception proxy in your ISP. If only the website in question goes through the proxy, then it's probably a reverse proxy in the website's CDN.

There is no difference from the client's perspective; only from the proxy admin's perspective (as to which controls are available for use).

Amos -- Please be using Current Stable Squid 2.7.STABLE9 or 3.1.12 Beta testers wanted for 3.2.0.7 and 3.1.12.1
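The decision rule in that reply boils down to a couple of lines. A sketch (an editorial illustration, assuming you record from the telnet probes whether responses carried proxy markers such as Via or X-Cache headers naming the same proxy):

```python
def classify_proxy(target_via_proxy: bool, others_via_proxy: bool) -> str:
    """Classify a proxy from two telnet-style port-80 probes.

    target_via_proxy: the site in question showed proxy markers.
    others_via_proxy: unrelated random sites also showed the same markers.
    """
    if others_via_proxy:
        # Unrelated sites are relayed too: the proxy sits on the client/ISP side.
        return "interception proxy"
    if target_via_proxy:
        # Only the one site is relayed: the proxy sits in front of that site.
        return "reverse proxy"
    return "no proxy observed"
```

The same two-probe logic works with any HTTP client that can show raw response headers, not just telnet.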