Your post is a little difficult to respond to, due to the lack of detail on
each of your enumerated points.

For instance, "DNSSEC" is a fairly broad topic, and all major DNS
implementations support the record types (RRSIG, DNSKEY, DS and so forth)
necessary for validation and the "chain of trust". But, the devil is in the
details, e.g. how flexible and well-performing are the mechanisms and/or tools
provided for validation, for on-line signing, for key rollover, etc.? Answering
that requires a fairly detailed understanding of, and experience with, DNSSEC
operational details, and I'm not sure you'll find many people who have this, as
applied to multiple product suites like BIND, unbound, PowerDNS and so forth. 
It's possible, I suppose.
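
To give a flavor of what that looks like in BIND specifically -- just a
sketch, with a made-up zone name and paths, and knobs that vary a bit by
version -- validation on a recursive server and inline signing on an
authoritative master boil down to something like:

    // named.conf on a validating recursive server: use the built-in root
    // trust anchor and validate answers.
    options {
        dnssec-validation auto;
    };

    // named.conf on an authoritative master: named signs the zone itself,
    // re-signing and picking up new keys from key-directory as their
    // timing metadata dictates.
    zone "example.internal" {
        type master;
        file "db.example.internal";
        key-directory "/etc/bind/keys";
        inline-signing yes;
        auto-dnssec maintain;
    };

The hard parts don't show up in the config at all: KSK rollovers, getting DS
records to the parent (if there is one), monitoring signature expiry, and so
on.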

For "HA", Anycast has already been mentioned as a way to provide robust DNS 
service, with automatic failover. Most private enterprises, if they use Anycast 
at all, only do so for *recursive* service, since that's where the bulk of the 
DNS query traffic occurs, and stub-resolver failover incurs a significant 
performance penalty, which implementors seek to negate. But, there's no reason 
why Anycast can't be used for *authoritative* service as well, just as is done 
for root and/or gTLD servers on the public Internet. The big downside to 
Anycast, of course, is that it requires close co-ordination with your routing 
infrastructure and the people who run it. Depending on the organization, this 
may be a significant challenge. In the absence of Anycast, at least some level 
of transparent failover can be achieved, for DNS as it is for other services 
(typically HTTP/HTTPS) using LSLB (Local Server Load Balancing). Note that GSLB 
(Global Server Load Balancing) relies on DNS itself, so that's a
chicken-and-egg situation, which means you *cannot* use it for
providing high-availability to DNS services. But, with LSLB, you can at least 
eliminate the "failover penalty" associated with a single-node failure. I think 
this is what you were hinting at by "clustering", but the term "clustering" 
usually implies a server-based methodology, whereas LSLB is a more generic 
term, and the function can be provided by dedicated network devices (if 
available).
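
If it helps to visualize, the per-node Anycast setup is conceptually simple;
the real work is in the routing and health-check side, which I'm waving my
hands over here (the addresses below are placeholders):

    # On each anycast node (Linux syntax): the shared service address lives
    # on a loopback, and the routing layer only announces the /32 while
    # named is actually answering queries.
    ip addr add 192.0.2.53/32 dev lo

    // named.conf: listen on the node's own address *and* the anycast address.
    options {
        listen-on { 127.0.0.1; 198.51.100.11; 192.0.2.53; };
    };

The routing daemon (BGP or your IGP) withdrawing that route when the health
check fails is what gives you the automatic failover.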

As others have mentioned, however, unless you have a really small number of
really busy subnets, it likely wouldn't be economically feasible to have
multiple DNS instances on every subnet. You could easily end up with dozens of
DNS instances that way, and how would you manage all of that? (I expand
further on that below.)

You mentioned having one master and multiple slaves in each of these per-subnet
"clusters", and I'm wondering why you think you would need that. Are you
envisioning that each subnet has its own dedicated zone? That too seems like 
overkill. You may not have thought through your DNS namespace hierarchy yet, 
but generally speaking, location-based naming schemes go down to the "campus" 
level, if that. One zone per subnet seems rather excessive, for "forward" 
names, although it may make sense for your "reverse" namespace, especially
with IPv4 where your subnets are divvied up on 8-bit boundaries (e.g. /24 or
/16). Just because a zone may be subnet-specific, though, doesn't imply that
the master server for the zone needs to be *on* the subnet. Usually,
it doesn't hurt performance much for a DNS caching resolver to be a few network 
hops away from its stub-resolver clients, or for a DNS master server to be a 
few hops away from a client that updates it via Dynamic Update. In the DNS
realm, one usually gets better performance results by focusing on
*availability* (because of the aforementioned stub-resolver "failover
penalty"), rather than nickel-and-diming a few network hops here
and there (caveat: this assumes relatively fast links; I suppose, if your 
network hops are _really_ slow, you might need to focus on minimizing them. If 
network hops *really* matter, then, again, you should look long and hard at 
Anycast as the most efficient way to provide service, despite the steep initial 
learning curve and the possible co-ordination challenges). Does replication of 
DNS data towards the clients which query it -- via the classic master/slave 
replication or something else -- also help performance? You bet it does. So, I 
think you're on the right track to be thinking about it. Just be aware that 
maintaining a bunch of master/slave configuration elements, or the equivalent 
in non-standards-defined replication mechanisms, can be a chore in and of
itself, especially if you want to set things up so that changes made at the
master replicate quickly (using the NOTIFY extension or something else).
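
For reference, the per-zone plumbing for classic master/slave replication
with prompt NOTIFY-driven refresh is short -- but you'll be maintaining some
variation of it for every zone on every server (names and addresses below
are placeholders):

    // On the master:
    zone "example.internal" {
        type master;
        file "db.example.internal";
        allow-transfer { 198.51.100.21; 198.51.100.22; };
        also-notify    { 198.51.100.21; 198.51.100.22; };
    };

    // On each slave:
    zone "example.internal" {
        type slave;
        masters { 198.51.100.20; };
        file "slaves/db.example.internal";
    };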

Next, you mention "DNS-SD". I'm assuming you're referring to the 
Apple-oriented "Bonjour" stuff specified in RFC 6763 (?) Apparently you have a 
lot of Apple gear. I gather that there aren't a lot of "special" DNS-server
requirements for that -- standard record types like PTR, SRV and TXT are used.
Again, like DNSSEC, the devil is in the details, though: do you expect your 
Apple stuff to be able to update Bonjour resources directly? If so, then you 
need a way to secure those Dynamic Updates. I can't really speak to this, 
having not had much exposure to Bonjour, and I don't think BIND has any special 
accommodations in this regard. Maybe some other DNS packages do (?)
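
For what it's worth, wide-area DNS-SD per RFC 6763 is just ordinary records
in an ordinary zone, along these lines (the zone, service instance and
addresses are invented for illustration):

    ; Zone-file excerpt advertising one IPP printer via DNS-SD.
    _services._dns-sd._udp       IN PTR  _ipp._tcp
    _ipp._tcp                    IN PTR  Office\032Printer._ipp._tcp
    Office\032Printer._ipp._tcp  IN SRV  0 0 631 printer1.example.internal.
    Office\032Printer._ipp._tcp  IN TXT  "rp=printers/office" "note=2nd floor"
    printer1                     IN A    192.0.2.55

The interesting question is whether your devices will be registering records
like these themselves via Dynamic Update, and how you authenticate that.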

A requirement you *didn't* mention, however, and probably *should*, is: what 
are the mechanisms and tools for maintaining the DNS data and configurations in 
the environment? BIND is an open-source package, but it doesn't really provide 
its own GUI, or console, for instance, for managing DNS data and 
configurations. Were you thinking you'd just manually edit zone files and 
named.conf? That can get old real fast. A lot of commercial products are built 
on BIND (e.g. Infoblox and BlueCat, to name a couple), and BIND can be
front-ended with open-source tools too (Webmin has been mentioned, although
I've never used it personally); these tools provide such functionality. I
think you'll find that, while the initial implementation of a DNS environment 
can be rather manpower-intensive, in the long term, the care and feeding of a 
DNS environment will take a lot more of your time, unless you have powerful 
tools to help manage it. So, you should be looking not only at the core DNS
package to use on your network, but also at the management layer you're going
to
use with that core, since different combinations of core-software and 
management layer work better or worse with each other.
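
To be concrete about what "manual" means: every change in a hand-edited
environment looks something like this, multiplied by however many zones and
servers you end up with (zone name and paths are placeholders):

    # Edit the zone file: make the change *and* bump the SOA serial.
    vi /var/named/db.example.internal
    # Sanity-check it before named sees it.
    named-checkzone example.internal /var/named/db.example.internal
    # Tell the running server to pick up the change (and NOTIFY the slaves).
    rndc reload example.internal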

Lastly, another thing you didn't mention is integration between DNS and DHCP. 
Don't your clients use DHCP, and don't you want records populated automatically 
in DNS when they get a lease, and removed when the lease is gone? I suppose 
it's _possible_ that you don't use DHCP, or you don't care about having your 
client names automatically populate DNS, but this would be rather atypical. 
It's worth noting, I think, that the prevailing DHCP server software comes from
the same organization (ISC) that maintains BIND, so these pieces work well
together (which is not to suggest that ISC's DHCP server *can't* work with, 
say, PowerDNS and so forth, but just that there are more examples and 
"mindshare" out there for getting the ISC stuff to work together). Some of the 
aforementioned commercial products, e.g. Infoblox, provide both DNS and DHCP 
services, among others, and DNS and DHCP can be very tightly integrated with 
each other, within those products.
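
The classic ISC-on-ISC glue for that is a shared TSIG key: dhcpd sends
Dynamic Updates signed with the key, and named accepts them. Roughly like
this (key name, secret and zone are placeholders -- generate a real secret
with something like ddns-confgen, and you'd normally configure the matching
reverse zone too):

    # dhcpd.conf: add/remove client records on lease events.
    ddns-update-style interim;
    key ddns-key {
        algorithm hmac-md5;
        secret bm90LWEtcmVhbC1zZWNyZXQ=;
    }
    zone example.internal. {
        primary 198.51.100.20;
        key ddns-key;
    }

    // named.conf on the master: accept updates signed with the same key.
    key "ddns-key" {
        algorithm hmac-md5;
        secret "bm90LWEtcmVhbC1zZWNyZXQ=";
    };
    zone "example.internal" {
        type master;
        file "db.example.internal";
        allow-update { key ddns-key; };
    };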

                                                                                
- Kevin

P.S. Re-reading my message, I realize it may sound like a bit of an 
advertisement for a commercial DDI (DNS, DHCP and IPAM) solution, e.g. 
Infoblox. But, having made the transition from a fairly "classic" BIND 
installation, with a significant amount of custom programming wrapped around 
it, to a commercial-DDI solution, with the remaining custom programming just a 
front-end to it, I can attest to the manageability and integration benefits. 
It's especially useful if one wants to run more advanced features like DNSSEC, 
Anycast, reputation-based blacklisting of C&C sites, DoS mitigation strategies, 
meaningful mining of query stats, integration with Active Directory's "sites 
and subnets" mechanism, etc. I shudder to think how much time and effort would 
be involved in creating those things from scratch, and/or cobbling together 
open source tools to make all of that work.

-----Original Message-----
From: bind-users-boun...@lists.isc.org 
[mailto:bind-users-boun...@lists.isc.org] On Behalf Of David Li
Sent: Sunday, January 17, 2016 12:34 AM
To: bind-users@lists.isc.org
Subject: Newbie's BIND Questions on DNSSEC, HA and SD

Hi,

I am new to BIND. I am researching for a DNS server that can meet a list of 
requirements to be used in  a distributed system. They are:

1. Security (DNSSEC)
2. High Availability (HA)
3. Service Discovery (DNS-SD)

So I think BIND might be my best choice so far. Others I have looked at include 
dnsmasq, unbound, PowerDNS etc.

Because I don't have real experience with BIND yet and our architecture hasn't 
been finalized, I am asking the community experts for validations on my 
conclusion.

Another question I haven't quite figured out is the HA architecture.
Is it possible to set up a cluster of BIND servers (> 2) for each VLAN subnet 
with one of them as master the rest as slaves?

Thanks!

David