On Dec 9, 2009, at 3:37 AM, Grayhat wrote:

>> My local resolver is going to be the fastest for local
>> email lookups, especially cached ones.
>
> same here; as long as the local DNS resolver(s) are
> correctly configured, they are in most (if not all) cases
Really, did you just say that :) Kidding, but I get contracted to fix this so often that I would have to say DNS is pretty busted. I am amazed at how broken it usually is: when I look at their named.conf, 99% of the time they are running a world-usable recursive resolver, and I just cringe. And that is one of many problems; god, how much abuse can $ORIGIN be given :) I think the problem is that people hear "DNS is easy, just set up a new server for us" and assume it is a 30-minute job. I have had clients at large companies we all know the names of tell me that their colo's slaves will somehow magically know when to add a slave zone, and magically know when to delete one. Deletion can be scripted, but slave addition? I really doubt a large colo provider is grepping their logs, if they are even saving a query log long enough to do so.

> faster than any external one and btw you have some
> BIG pluses then since not only you can directly control
> the values for cache and other parameters but using a
> local resolver you'll also be able to keep *local* copies
> of DNSBLs/URIBLs and speed up lookups a lot

And you can also fix things right away, without waiting for any propagation nonsense.

My mail is my mail, and your mail is your mail. The point is, we both have different uses, but the end result is the same, and within those uses there are strong patterns for me that are totally different from yours. We may also share some patterns, but with DNSBLs, while there will be strong caching patterns, no two MTAs that are not within the same org are going to share them. This is why OpenDNS is not going to help you much. It works for the web because of popularity and so on. Outside of gmail, hotmail, aol, msn, and the rest of the top 10, most email is just not that cacheable for a public DNS option; there is too much variety. Now, if the whole world used the same public DNS, that would be a different story.
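To make the caching point concrete: a DNSBL check is just an ordinary DNS query for the connecting client's IP, octet-reversed, under the list's zone. Here is a minimal Python sketch of how that query name is formed (zen.spamhaus.org is used purely as the example zone; any DNSBL works the same way):

```python
# Sketch: building a DNSBL query name from a client IP.
# A listed IP conventionally answers with an A record in 127.0.0.0/8.

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Reverse the IPv4 octets and append the DNSBL zone, so a
    listing check for 192.0.2.1 against zen.spamhaus.org becomes
    an A query for 1.2.0.192.zen.spamhaus.org."""
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and int(o) <= 255 for o in octets):
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.1"))  # 1.2.0.192.zen.spamhaus.org
```

Since every query name embeds the connecting client's IP, two unrelated MTAs build almost disjoint cache footprints, which is exactly why a shared public resolver buys so little for mail.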
But take it at face value, which is the reason it was suggested: users were having slow DNS lookup issues, and it was making ASSP behave in some bad ways (I do not know the history of this). Slow DNS lookups probably should not hurt ASSP, and that hopefully has been resolved (resolved, ha ha). Switching to a public recursive resolver is not a solution to the underlying issue. The issue is that slow DNS responses should eventually time out, and ASSP should move on.

Regardless, we have a problem: ASSP users apparently experienced problems that were traced back to slow DNS resolution. We are dealing with tens of milliseconds here. If they were having real problems, those problems had to be far more severe than just slow resolution; their entire DNS setup was messed up. I am sure PTRs were not even in place, TTLs were all at something like 100 from the one time they moved and never bothered to change them back, etc. So we have someone who does not know how to admin DNS in charge of doing just that, now being offered a solution by a reputable source. Of course they are going to take the suggestion, but do they realize the repercussions?

With OpenDNS:
1) Must have an OpenDNS account
2) Possible TOS violations
3) No longer in control of your DNS; you are at someone else's network's mercy
4) Policy can and will change at any time. Does any company ever add a new preference with the default set to off, or do they set it to on and make you log in to turn it off? Yeah, staring right at you, facebook.

On #1: in this discussion thread, I am the only one mentioning that aspect, and it is the most important one. Who knows what happens with typo correction and all the rest they offer. I see no docs on how OpenDNS handles lookups for MX records at all; they mostly talk about A and AAAA records.

On #4: that is not a chance I am willing to wait around for. Not a phone call I want to deal with from a client.
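On the earlier point that a slow lookup should simply time out so the mail flow can move on: the guard is straightforward to sketch. This is not ASSP's actual code, just a minimal Python illustration with a deliberately slow dummy resolver standing in for the real DNS call:

```python
# Sketch: bound a (possibly slow) resolver call with a timeout so the
# caller treats "too slow" the same as "no answer" and moves on.
import concurrent.futures
import time

def lookup_with_timeout(resolve, name, timeout=0.5):
    """Run resolve(name) in a worker thread; give up after `timeout`
    seconds and return None instead of stalling the caller."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(resolve, name)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return None  # treat as "no answer"; do not block the queue

def slow_resolver(name):
    """Stand-in for a broken upstream: answers, but far too late."""
    time.sleep(2)
    return "192.0.2.1"

print(lookup_with_timeout(slow_resolver, "example.com"))  # None
```

A fast resolver passes straight through; only the pathological one gets cut off, which is the behavior a scanner in the SMTP path needs.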
OpenDNS has bugs, they are aware of them, and the bugs are harmful to other servers; one could potentially be used as a small DoS attack. I did my research, made sure it was a bug, made sure it was RFC non-compliant behavior, and made sure to report it. I had to fight my way to get someone to even care. It came down to making noise on the ISC mailing list, and then OpenDNS got in touch. After months of losing contact, a patch was rolled out. It does not fix the problem. A second round: months later I emailed asking what was up, got an apology, and was told it would be worked on in a week or so. And here we are today, I think about 1.5 years later, with no fix, and I do not have the time or desire to help someone who cannot even keep a dialog open. My query log shows the problem is still happening.

OpenDNS is good, even great, for home users; for servers, no way. If you are running servers, you need to know about DNS just to troubleshoot, and setting up your own DNS is one way to learn. Try a +trace against OpenDNS; how helpful is that? OpenDNS does some very strange stuff with caching that, from an MTA's perspective, I would call bad. They do some other strange things with non-authoritative hosts in regard to how hard they will beat on the primary. The users already did not understand DNS, and now they are pushed into a DNS service full of bells and whistles; because at first glance it works without even making an account, people love it. Spell checking has no place in DNS :)

This is not a resolution speed issue, this is an end user issue. There are a lot of DNS resolvers out there for public use. Google makes it a point to state that they do not mess with the results, period (http://code.google.com/speed/public-dns/). I am sure they analyze the hell out of the logs; they publish log retention timeframes.
They are being open and transparent: you do not need an account, there are no settings to change, and it is reliable DNS, done with peering agreements using anycast so you are not 50 hops away from the Googleplex when you are in some other country. I am not sure how OpenDNS deals with that; given that they announce they are bringing more machines online in other countries, they have a somewhat harder road to travel than Google does in that regard. But my goodness, how often a person's internet "does not work" and simply switching them to OpenDNS fixes it does show just how valuable they are. I wish comcast would get out of the DNS business, along with every other broadband provider, and just partner with OpenDNS.

If a suggestion is to be made to use a public DNS for a server, I would go in this order:
1) Self-maintained DNS
2) Level3 / AT&T and their Tier 1 DNS
3) Your colo facility

* #2 and #3 are usually pretty interchangeable, if your provider is somewhat competent. While #2 does not publicly talk about it much, they are a Tier 1; there is not a lot of messing around they can do without literally breaking the internet. 8.8.8.8 is pretty darn simple to remember, but 4.2.2.1 through 4.2.2.6 have long since been committed to memory as well.

> For example, imagine having a box running BIND as the
> recursive resolver; you may start by improving the whole
> resolution process by just setting up a slave copy of the
> root zones and sparing a lookup hop, this way
>
> // forward root zone
> zone "." {
>     type slave;
>     file "root.db";
>     notify yes;
>     masters {
>         192.5.5.241;
>         192.228.79.201;
>         192.33.4.12;
>     };
> };

Very solid advice, and exactly what I do when I bring on a new service that I know will be DNS heavy, i.e. an MTA that is going to use BLs and WLs. Why did you choose the b, f, and c roots, just curious?
> // reverse root zone (v4)
> zone "in-addr.arpa" {
>     type slave;
>     file "inaddr.db";
>     notify yes;
>     masters {
>         192.5.5.241;
>         192.33.4.12;
>     };
> };
>
> the above means that YOUR BIND will keep local copies of both the
> forward and reverse root zones so sparing a hop during lookups and
> speeding up things; then, having a second box, you may install a copy
> of "rbldnsd" (http://www.corpit.ru/mjt/rbldnsd.html) on it and host
> LOCAL copies of some DNSBLs, for example, assuming the rbldnsd box is
> at IP 192.168.1.100 your BIND config may contain something like

How come you mention a second box? I usually do the same, and light up a second IP on the same interface unless I have multiple ethernet ports, but is there any technical reason you can see not to run this all on one machine and not even pass through the switch?

> zone "zen.spamhaus.org" {
>     type forward;
>     forward first;
>     forwarders { 192.168.1.100; };
> };
>
> zone "dul.dnsbl.sorbs.net" {
>     type forward;
>     forward first;
>     forwarders { 192.168.1.100; };
> };
>
> //.... add more as needed ...

Great advice as well; I do the same or similar for some clients. I got tired of building out rbldnsd all the time on Mac OS X, so I just bit the bullet and became maintainer of the MacPorts installer for it: http://trac.macports.org/browser/trunk/dports/net/rbldnsd/Portfile

There probably is not a simpler way to get it up and running on Mac OS X, and it should build out as universal no problem. Maybe even 64-bit; I need to look into that now that Snow is out. And man, is that one fast resolver. Testing on my dual core MacBook yields queries in the 5000s per second. I should bench it on the 8-core Xserve I just deployed. I cannot even imagine how fast it will be when they rev bump to the next Intel stuff. You and I seem to very much be on the same DNS page with our setups.
I run an rbldnsd zone for my internal private BLs and WLs as well. Though they only have a few thousand entries and named probably would suffice, I figured what the heck; I had bigger plans when I started. rbldnsd is amazing and simple, and a lot of care has clearly gone into efficiency. Being the backbone of just about every DNSBL out there shows it can handle some serious load. I do not think it has seen significant software updates in years; it just works, and well.

> the above means that the DNSBL lookups will be lightning fast and
> they won't "bash" on the DNSBL servers so allowing you to carry on
> a whole lot of queries w/o any "bandwidth capping"; by the way you
> will have to arrange things with the various DNSBLs to be allowed to
> transfer zones from them, but this isn't a problem, most zones will
> allow that for free or for a decent fee and, as I wrote, your DNSBL
> or URIBL lookups will be a greased lightning... and all this isn't
> possible if you aren't running your OWN DNS resolvers

Well said. Try to donate to some of your favorite ones if you can. I was going to toss a bit the way of rbldnsd, but since that project does not seem to need it, given how stable it is, I ended up spreading what little I have around to other sources: wikipedia, and a small handful of the DNSBLs that are in dire need of funding. Remember, we all use them as a front-line defense; most people are not going to rsync the zones over or set up forwards, so most are going to hit the BLs' servers directly, and that costs CPU and bandwidth. Far too many great DNSBLs have gone under over the years. It is one of those layers we all use that is so valuable, yet we forget how bad email would be without those services.
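For anyone who wants to try the same thing, here is a rough sketch of what such a private list can look like on disk. The zone name bl.local.example, the file path, and the listing text are all made-up placeholders; check the rbldnsd documentation for the full dataset syntax:

```
# /var/lib/rbldnsd/bl.local.example -- a hypothetical ip4set data file
# Default A record and TXT text returned for every listed entry:
:127.0.0.2:listed by local policy
# Entries are single hosts or CIDR ranges, one per line:
192.0.2.7
198.51.100.0/24
```

rbldnsd would then be started with something along the lines of `rbldnsd -r /var/lib/rbldnsd -b 192.168.1.100/53 bl.local.example:ip4set:bl.local.example`, and BIND pointed at it with a forward zone exactly like the zen.spamhaus.org ones quoted earlier.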
> By the way, using an external resolver is still ok in case you run a
> small shop or are running a toy server, but other than that, if you
> are taking things seriously, then having a decent local resolver
> infrastructure and local copies of DNSBL/URIBL is the way to go

Thanks for this email. It helps lay out a set of steps for someone who wants to start learning about arguably one of the most important layers of their infrastructure. If you run a "toy server", I would just point the MX at your registrar to Google for domains and get out of the business of DNS/MTA sysadmin. If you deploy your business around these services, then pay attention to what GrayHat has to say; he is spot on.

The main point I take from all this is that if a user is DNS ignorant, OpenDNS is the worst answer to the problem. Again, running tests and looking at the millisecond response times is not really what this choice is about. Look what happened to twitter.com just yesterday: users sitting around thinking that twitter got hacked, when in reality I do not think a single twitter machine was even touched. I am pretty sure someone managed to pop the domain's A records at the registrar and ran with that.

OpenDNS has a purpose. It is great, maybe even awesome, for the home end user who does not know that paypal.com.accounts.someting.cn is bad, or that google.cmo may be easier to have corrected for them on the fly. If you run ASSP, you should long ago have gotten beyond that, and you want to be able to see a real NXDOMAIN, not some ads page or even a fake NXDOMAIN page. Not to mention, the internet is not just port 80.

Sorry it got so long; you brought up some great points, and I wanted to try to make sure others see the entire picture here. If not, go buy http://oreilly.com/catalog/9780596001582, a year-2001 book that is still one of the most relevant DNS books there is.
Thank you again for the email; I am sure it helped many people on the list understand the importance of thinking out your deployment ahead of time.

--
Scott
* If you contact me off list replace talklists@ with scott@ *

_______________________________________________
Assp-test mailing list
Assp-test@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/assp-test