RE: superfluous DNS lookups
Henrik wrote (quoting Andrew):

> > With interception proxying, is the DNS lookup that is performed by
> > Squid necessary? Would it not be more efficient, possibly even more
> > reliable, to use the destination IP address in the original
> > intercepted request?
>
> Both yes and no. By discarding the original destination IP address,
> caching is made more effective: the cache can key on the requested
> hostname. If the original destination IP address is used instead,
> caching must be keyed on the requested hostname + IP address, because
> of the security implications of trusting a client-provided destination
> IP. The drawback is initial request latency from the double DNS lookup
> (client and proxy) and, as you say, differences in resolution between
> client and proxy.

Doing the DNS lookup in the cache has many benefits; to list a few:

- the cache is able to intelligently retry and maintain an alive/dead
  list for major sites based on all customers' usage; each customer
  doesn't have to individually wait for a TCP timeout to discover that
  one address of a round-robin IP set is down

- the cache is able to cache on a URL basis rather than an IP+URL basis
  (as Henrik commented, you can't trust the client to supply a correct
  IP, due to the security problem of forcing people to incorrect pages);
  for major sites with round-robin DNS, IP+URL keying would result in
  multiple copies (up to 10 for some sites) of the site being cached,
  which would be a performance hit

- customers with modem "accelerator" software which caches outdated,
  incorrect DNS don't notice the problem, as the transparent proxy
  corrects it for them

The double DNS lookup may not be as much of an issue as you expect; the
customer's DNS lookup primes your DNS caches, so your cache's DNS lookup
should then be near-instant (which helps web cache performance).
However, it would be very nice to avoid the latency of doing a DNS
lookup over the client connection. If a customer's browser knows it is
using a proxy (by static configuration, WPAD or similar), it will skip
the DNS lookup step.
David.
RE: 2.5 and delay pools
Adrian wrote:

> -fd_set slowfds;
> +char slowfds[SQUID_MAXFD];
>
> -static fd_set delay_no_delay;
> +static int delay_no_delay[SQUID_MAXFD];

Firstly, decide on either an int array or a char array to replace the
current bitmask. On a typical 8k-FD cache, that's either 8KB or 32KB
rather than the current 1KB for the bitmask. That may not seem like much
memory, but remember that on the one hand people are running caches on
systems with pretty small CPU caches, while on the other hand ints may
be faster to access than bitmask operations on some faster/larger
machines - so it's a tradeoff either way.

Secondly, as per Henrik, confirm that use of the fd_set really is the
cause of your breakage.

Thirdly, you could consider using your own bitmask structure together
with the FD_SET/FD_CLR/etc. functions. Only FD_ZERO won't work on a
differently sized structure, as it references the calculated size of an
fd_set. If you look on Linux, you'll find these are defined to
something like:

# define __FD_SET(fd, fdsp) \
  __asm__ __volatile__ ("btsl %1,%0" \
                        : "=m" (__FDS_BITS (fdsp)[__FDELT (fd)]) \
                        : "r" (((int) (fd)) % __NFDBITS) \
                        : "cc","memory")

ie - rather fast opcodes to access an array of longs as a bitmask.

David.
HTTP/1.1 non-compliance on POST request handling?
Guys,

HTTP/1.1 (RFC 2616) section 4.4 states:

   For compatibility with HTTP/1.0 applications, HTTP/1.1 requests
   containing a message-body MUST include a valid Content-Length header
   field unless the server is known to be HTTP/1.1 compliant. If a
   request contains a message-body and a Content-Length is not given,
   the server SHOULD respond with 400 (bad request) if it cannot
   determine the length of the message, or with 411 (length required)
   if it wishes to insist on receiving a valid Content-Length.

A customer of ours is complaining that they have an embedded device
which attempts to connect to a server it knows to be HTTP/1.1 compliant
(the device and server are from the same vendor), and our transparent
proxy is intercepting the connection and rejecting it due to the lack
of a Content-Length header. From what I can see, their reading of the
RFC is correct: the proxy is enforcing the HTTP/1.0 behavior, and this
is a problem because their client software has not been coded to work
with HTTP/1.0.

Comments? The code in question is around line 3070 of client_side.c
(and the if statement directly above that one is also relevant).

David.
Re: changes to make mib.txt work with current net-snmp
You can make that more compact as:

SQUID-MIB DEFINITIONS ::= BEGIN

enterprises  OBJECT IDENTIFIER ::=
    { iso org(3) dod(6) internet(1) private(4) 1 }
nlanr        OBJECT IDENTIFIER ::= { enterprises 3495 }

(you can merge those two lines also, but that's going a bit far)

Although you shouldn't have to do it at all, due to:

IMPORTS
    enterprises [...]
    FROM SNMPv2-SMI

And in fact, if you are defining enterprises yourself, you shouldn't
also import it. Are you sure your MIB path includes SNMPv2-SMI.txt?

David.

--
David Luyer                                     Phone: +61 3 9674 7525
Network Development Manager    P A C I F I C    Fax: +61 3 9699 8693
Pacific Internet (Australia)   I N T E R N E T  Mobile: +61 4 BYTE
http://www.pacific.net.au/                      NASDAQ: PCNTF

----- Original Message -----
From: Duane Wessels [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, February 07, 2003 4:50 PM
Subject: changes to make mib.txt work with current net-snmp

> I find that a recent installation of net-snmp apparently does not
> parse Squid's mib.txt file until I make this change:
>
> --- SQUID-MIB.txt     Tue May 16 01:06:05 2000
> +++ SQUID-MIB.txt.new Thu Feb  6 22:40:59 2003
> @@ -1,9 +1,14 @@
> -SQUID-MIB { iso org(3) dod(6) internet(1) private(4) enterprises(1) 3495 }
> -
> -DEFINITIONS ::= BEGIN
> +SQUID-MIB DEFINITIONS ::= BEGIN
>  --
>  -- $Id: mib.txt,v 1.25 2000/05/16 07:06:05 wessels Exp $
>  --
> +
> +org          OBJECT IDENTIFIER ::= { iso 3 }  -- iso = 1
> +dod          OBJECT IDENTIFIER ::= { org 6 }
> +internet     OBJECT IDENTIFIER ::= { dod 1 }
> +private      OBJECT IDENTIFIER ::= { internet 4 }
> +enterprises  OBJECT IDENTIFIER ::= { private 1 }
> +
>  IMPORTS
>      enterprises, Unsigned32, TimeTicks, Gauge32, Counter32,
>
> Any SNMP-knowledgeable folks have an opinion on this change?