Re: Moving dynamic zones to new master+slave pair without interruptions
On Wed, 2016-01-06 at 18:04 +, Darcy Kevin (FCA) wrote:
> I'd just like to note in passing that the "separate authoritative and
> recursive" herd mentality reaches the ultimate point of absurdity
> when you only have 2 servers and you're going to create single points
> of failure (apparently, unless I'm misinterpreting "stand alone") to
> conform to this so-called "best practice". [...]

I'm not religious about either model, but in this case the load on the
recursive caching servers merits them being their own instances. We are
not splitting the functions based on security concerns.

> Needless to say, I don't subscribe to the (apparently popular) notion
> that the roles need to exist on separate *hardware*. [...]

One of the two authoritative servers and two of the three recursing
servers will be virtual servers, so it's not as much a waste of
hardware as it could be. :-)

> View-level separation is, in my opinion, sufficient to meet the
> security requirements. [...]

Certainly. We use views on the resolvers for our public "guest" network
and have had no concerns about this. [...]

> Speaking of availability, as your network evolves, you might want to
> consider running recursive service on Anycast addresses [...]

We already use anycasting on the recursive servers and would prefer a
simple configuration that can easily be replicated to new instances. As
part of this pending transition we will introduce an extra recursing
server. Keeping things simple, even if that means running more servers,
helps me sleep at night. It also helps my colleagues handle things
without having to call me. :-)

-- 
Peter Rathlev

___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to
unsubscribe from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users
Re: Moving dynamic zones to new master+slave pair without interruptions
Hi Tony,

Thank you for the suggestions!

On Wed, 2016-01-06 at 16:05 +, Tony Finch wrote:
> * Set up a new hidden master, with copies of your zones. (See below)
>
> * Change your existing servers to slave from the new hidden master
>   instead of the old master. Reconfigure the old master to be a slave
>   of the new one.

Wouldn't this ruin dynamic updates from the DHCP servers? These updates
need to be sent to the master. I could of course configure
"allow-update-forwarding". Manually specifying the hidden master in the
DHCP configuration seems clumsy.

> You don't need to worry about the data on disk on your existing
> slaves. They will continue to serve the same data, they will just
> xfer changes from a different master.

This made me think... Maybe I could just AXFR from the running slave
and use the output as zone files on the master. As far as I can see
this should Just Work™.

> My program nsdiff (http://dotat.at/prog/nsdiff) is useful for copying
> dynamic zones from an existing master to a new master without
> faffing around with `rndc freeze`.

Nice. :-) Perfect for copying changes without touching the files. I'll
take a thorough look at it.

-- 
Peter Rathlev
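[For reference, a minimal sketch of what "allow-update-forwarding" on a
slave might look like, so DHCP servers can keep sending DDNS updates to
any nameserver without knowing about the hidden master. The zone name,
addresses, and file paths are placeholders, not taken from the thread:]

```
// On each slave of the dynamic zone: accept DDNS updates from the
// DHCP servers and forward them upstream to the zone's master.
// (Hypothetical zone name and addresses.)
zone "example.internal" {
    type slave;
    masters { 192.0.2.1; };            // the hidden master
    file "slaves/example.internal.db";
    // Forward dynamic updates instead of refusing them:
    allow-update-forwarding { 10.0.0.0/8; };
};
```

Note that the slave only forwards; the master still enforces its own
allow-update policy on the forwarded request.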
Re: dnskey algorithm update
On Wed, 6 Jan 2016, Carl Byington wrote:
> My zones are currently using algorithm 5 (RSASHA1), with two KSKs and
> two ZSKs with overlapping timers. In preparation for updating to
> algorithm 8 (RSASHA256), I read:
>
>   The bind-users thread "KSK signing all records; NSEC3 algorithm status?"
>   https://tools.ietf.org/html/rfc6781#page-31
>   https://labs.ripe.net/Members/anandb/dnssec-algorithm-roll-over
>
> Is there a more authoritative document that describes the algorithm
> rollover procedure? It seems that I need to:
>
>   generate new ZSK and KSKs using algorithm 8
>   sign the zone with all the keys
>   wait one TTL cycle, then publish a new DNSKEY RRset
>   wait one TTL cycle, then upload the new DS RRset
>   ... eventually, remove the old KSKs from the DNSKEY RRset, but
>   still use them to sign the zone
>   wait one TTL cycle, then resign the zone without the old KSKs.

Carl:

When I did that algorithm upgrade, I used something close to this
process (based on the dual-signature method):

   # generate new RSASHA256 ZSK & KSK (active & published)
   dnssec-keygen -K $KEY_DIR -a RSASHA256 -b 1024 -n ZONE $ZONE
   dnssec-keygen -K $KEY_DIR -a RSASHA256 -b 4096 -n ZONE -f KSK $ZONE

   # re-sign the zone, using smart signing to pick up all keys
   dnssec-signzone -K $KEY_DIR -d $KEY_DIR -S -o $ZONE $DIR/$ZONE

   # re-load the zone (add any other required rndc args)
   rndc reload $ZONE

   # add DS record(s) for new KSK in parent zone;
   # left as an exercise for the reader

   # wait at least 1 TTL cycle (minimum of that for $ZONE & that for the
   # DS records in the parent zone) to let new DNSKEY, RRSIG, & DS
   # records propagate

   # move old keys out of key dir so they don't get used
   mv $KEY_DIR/K$ZONE.+005+* $TMP_DIR

   # re-sign the zone (with just new keys)
   dnssec-signzone -K $KEY_DIR -d $KEY_DIR -S -o $ZONE $DIR/$ZONE

   # re-load the zone (add any other required rndc args)
   rndc reload $ZONE

   # delete DS record(s) for old KSK in parent zone;
   # left as an exercise for the reader

   # if all good, delete old keys
   rm $TMP_DIR/K$ZONE.+005+*

where:
   $ZONE is the zone being upgraded
   $KEY_DIR contains the key files
   $DIR contains the zone files
   $TMP_DIR contains old keys temporarily

You can get by with activating the new (RRSIG, DNSKEY, DS) set as a
group immediately after creation & re-signing because the old set is
still present as the basis for validation while the new set propagates
around. Likewise, after the TTL cycle you can delete the old (RRSIG,
DNSKEY, DS) set as a group because by then the new set is present as
the basis for validation.

It worked for me. As always, your experience might vary. I recommend
working through this for a zone which:

   o doesn't matter
   o has the parent under your direct control

These tools are extremely useful:

   http://dnsviz.net/
   http://dnssec-debugger.verisignlabs.com/

Use them to view & verify things at each stage. To really have some
fun, purposefully break some part of your test zone & see how the
above tools show it.

Jay Ford, Network Engineering Group, Information Technology Services
University of Iowa, Iowa City, IA 52242
email: jay-f...@uiowa.edu, phone: 319-335-
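[Between the steps above, a couple of command-line spot checks can
complement those web tools. A hedged sketch, with zone and server names
made up for illustration; during the dual-signature window the DNSKEY
RRset should contain keys of both algorithm 5 and algorithm 8, and each
RRset should carry RRSIGs for both algorithms:]

```
# Hypothetical zone and server names. During the overlap window,
# expect DNSKEY records for both algorithm 5 (RSASHA1) and
# algorithm 8 (RSASHA256):
dig +dnssec +multiline DNSKEY example.com @ns1.example.com

# Check that ordinary RRsets (e.g. the SOA) are signed by both
# algorithms:
dig +dnssec +multiline SOA example.com @ns1.example.com

# Check which DS record(s) the parent zone is currently publishing:
dig +dnssec DS example.com @a.gtld-servers.net
```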
dnskey algorithm update
My zones are currently using algorithm 5 (RSASHA1), with two KSKs and
two ZSKs with overlapping timers. In preparation for updating to
algorithm 8 (RSASHA256), I read:

   The bind-users thread "KSK signing all records; NSEC3 algorithm status?"
   https://tools.ietf.org/html/rfc6781#page-31
   https://labs.ripe.net/Members/anandb/dnssec-algorithm-roll-over

Is there a more authoritative document that describes the algorithm
rollover procedure? It seems that I need to:

   generate new ZSK and KSKs using algorithm 8
   sign the zone with all the keys
   wait one TTL cycle, then publish a new DNSKEY RRset
   wait one TTL cycle, then upload the new DS RRset
   ... eventually, remove the old KSKs from the DNSKEY RRset, but still
   use them to sign the zone
   wait one TTL cycle, then resign the zone without the old KSKs.

For that to work, I need to get dnssec-signzone to sign a zone without
publishing the keys (activate < publish) and (inactivate > delete).
'man dnssec-signzone', under -S smart signing, talks about the
following timers: publication, activation, revocation, unpublication,
deletion. That man page implies that dnssec-signzone will always
publish keys that it has used to sign the zone. The use of
'unpublication' and the lack of any mention of 'inactivate' seems to be
an oversight.

'man dnssec-settime' documents the following timers: P publication,
A activation, R revocation, I retired (inactive?), D deleted.
'dnssec-settime -p all' uses the names Created, Publish, Activate,
Revoke, Inactive, Delete.
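[A hedged sketch of driving those timers directly with dnssec-settime,
which the smart-signing tools then honor; the key filename and dates
below are invented for illustration:]

```
# Hypothetical key file name and dates. Schedule the old algorithm-5
# ZSK to go inactive (stop signing) and later be deleted, while
# leaving it published in the meantime:
dnssec-settime -I 20160301000000 -D 20160315000000 Kexample.com.+005+12345

# Print all timing metadata for the key to verify the change:
dnssec-settime -p all Kexample.com.+005+12345
```

With the metadata set, `dnssec-signzone -S` can pick keys based on
their state rather than needing them listed explicitly.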
RE: Moving dynamic zones to new master+slave pair without interruptions
I'd just like to note in passing that the "separate authoritative and
recursive" herd mentality reaches the ultimate point of absurdity when
you only have 2 servers and you're going to create single points of
failure (apparently, unless I'm misinterpreting "stand alone") to
conform to this so-called "best practice".

Needless to say, I don't subscribe to the (apparently popular) notion
that the roles need to exist on separate *hardware*. View-level
separation is, in my opinion, sufficient to meet the security
requirements. (Bear in mind, views can be matched by TSIG key, if one
doesn't consider match-clients or match-destinations to be sufficiently
rigorous; while this may not be practical for typical
stub-resolver-to-BIND-instance communication, it is something to
consider at or near the apex of a forwarding hierarchy.)

If match-clients-based or TSIG-based view-level separation isn't
considered rigorous enough, then you could spin up additional IP
addresses and run authoritative on one set, and recursive on another
set. Even the eponymous Mr. Bernstein, one of the leading proponents of
auth/recursive separation (in his DNS software package, they are
totally separate programs), takes care to say to not run auth and
recursive "on the same IP address". See
https://cr.yp.to/djbdns/separation.html. Never does he say -- as others
do -- that the roles have to be on separate *hardware* (or, in the
modern era, these might actually be separate virtual instances).

Now, whether you actually run separate named processes, with specific
listen-on's, for those IPs, or take the view approach, with
match-destinations, is, again, dependent on how much rigor you want to
apply to your separation. But, hopefully, I've given you some other
options to consider besides the most extreme, hardware-based separation
approach.
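[A minimal sketch of the view approach with match-destinations that
Kevin describes: one named process, authoritative service on one
address and recursion on another. All addresses, zone names, and paths
are hypothetical:]

```
// One named instance, two service addresses on the same host.
// 192.0.2.10 answers authoritatively; 192.0.2.11 recurses.
view "authoritative" {
    match-destinations { 192.0.2.10; };
    recursion no;
    zone "example.internal" {
        type master;
        file "master/example.internal.db";
    };
};

view "resolver" {
    match-destinations { 192.0.2.11; };
    match-clients { 10.0.0.0/8; };   // internal clients only
    recursion yes;
};
```

The alternative with separate named processes would instead use
per-process listen-on statements bound to each address.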
Remember that "availability" is one of the pillars of information
security, and if you sacrifice availability to conform to a "best
practice", you might not be improving your overall information
security.

Speaking of availability, as your network evolves, you might want to
consider running recursive service on Anycast addresses (see
http://ddiguru.com/blog/118-introduction-to-anycast-dns or Cricket's
informative video). When implemented, this largely moots the whole
"recursive versus authoritative" debate, because recursive service now
runs on IP addresses that are "virtual" at a network-routing level and
do not intersect with the IP addresses used for authoritative service.
(If one wants to implement Anycast for *authoritative* service, like
the Public Internet does, those would typically be a *separate* set of
Anycast addresses from the recursive ones.)

- Kevin

-----Original Message-----
From: bind-users-boun...@lists.isc.org
[mailto:bind-users-boun...@lists.isc.org] On Behalf Of Peter Rathlev
Sent: Wednesday, January 06, 2016 8:17 AM
To: bind-users@lists.isc.org
Subject: Moving dynamic zones to new master+slave pair without interruptions

We currently have two internal DNS servers that are both authoritative
for a range of internal zones and caching resolvers for our clients. We
would like to split this so authoritative and caching roles exist on
different servers. And we would like to do this with as little down
time as possible, also for dynamic zones.

Moving static zones is of course trivial. Moving dynamic zones is what
I cannot quite wrap my head around.

I think I want to set up a new slave and AXFR from the existing master.
Then I can point delegations and "forwarders" at this new slave only.
Together with having the configured "masters" pointing at a not yet
running master server this would make it "stand alone". Next step in my
head would be to re-create the master from this slave.
I thought that I could just copy the zone files from the slave, since
that slave would not have made any changes, seeing as it is only the
master that can do that. (I am fine with rejecting changes to the
dynamic zones during the move exercise.) However, I see that the
current slave also has ".jnl" files for the dynamic zones, and "rndc
freeze " is invalid except on the zone master. With journal files
present I guess that I cannot trust the zone files to actually be
valid/complete.

So... what do I do then? Is there another way of committing the journal
to disk on a slave? Is there a "best practice" for re-creating a lost
master when dealing with dynamic zones?

I may of course have started out completely wrong. If there are better
ways to achieve what I want then I am all ears! :-) This is all a
thought exercise right now; I have not actually tried to move anything
yet.

If BIND versions are relevant then we plan on using the CentOS 6
default, which is BIND 9.8.2 (with some patches, so it's bind-9.8.2-
0.37.rc1.el6_7.5.x86_64) on the new servers. Building from sources is
Re: Moving dynamic zones to new master+slave pair without interruptions
Peter Rathlev wrote:
> We currently have two internal DNS servers that are both
> authoritative for a range of internal zones and caching resolvers for
> our clients. We would like to split this so authoritative and caching
> roles exist on different servers. And we would like to do this with
> as little down time as possible, also for dynamic zones.
>
> Moving static zones is of course trivial. Moving dynamic zones is
> what I cannot quite wrap my head around.

I suggest the following process:

* Set up a new hidden master, with copies of your zones. (See below)

* Change your existing servers to slave from the new hidden master
  instead of the old master. Reconfigure the old master to be a slave
  of the new one.

* Add new slaves which will be your new authoritative-only servers.

* Change your delegations to point to your new authoritative-only
  servers.

You don't need to worry about the data on disk on your existing
slaves. They will continue to serve the same data; they will just xfer
changes from a different master.

My program nsdiff (http://dotat.at/prog/nsdiff) is useful for copying
dynamic zones from an existing master to a new master without faffing
around with `rndc freeze`. On the new master, run

   nsdiff -m oldmaster -s localhost myzone | nsupdate -l

and it will AXFR the zone from the old master and copy it into the new
master using dynamic updates. (If you are changing your DNS
infrastructure then nsdiff can be useful for verifying that the zone
data is consistent between old and new.)

Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Southwest Forties, Cromarty, Forth: Southeasterly 6 to gale 8,
occasionally severe gale 9 later. Rough or very rough, occasionally
high later. Rain at times. Moderate, occasionally poor.
Moving dynamic zones to new master+slave pair without interruptions
We currently have two internal DNS servers that are both authoritative
for a range of internal zones and caching resolvers for our clients. We
would like to split this so authoritative and caching roles exist on
different servers. And we would like to do this with as little down
time as possible, also for dynamic zones.

Moving static zones is of course trivial. Moving dynamic zones is what
I cannot quite wrap my head around.

I think I want to set up a new slave and AXFR from the existing master.
Then I can point delegations and "forwarders" at this new slave only.
Together with having the configured "masters" pointing at a not yet
running master server this would make it "stand alone". Next step in my
head would be to re-create the master from this slave.

I thought that I could just copy the zone files from the slave, since
that slave would not have made any changes, seeing as it is only the
master that can do that. (I am fine with rejecting changes to the
dynamic zones during the move exercise.) However, I see that the
current slave also has ".jnl" files for the dynamic zones, and "rndc
freeze " is invalid except on the zone master. With journal files
present I guess that I cannot trust the zone files to actually be
valid/complete.

So... what do I do then? Is there another way of committing the journal
to disk on a slave? Is there a "best practice" for re-creating a lost
master when dealing with dynamic zones?

I may of course have started out completely wrong. If there are better
ways to achieve what I want then I am all ears! :-) This is all a
thought exercise right now; I have not actually tried to move anything
yet.

If BIND versions are relevant then we plan on using the CentOS 6
default, which is BIND 9.8.2 (with some patches, so it's bind-9.8.2-
0.37.rc1.el6_7.5.x86_64) on the new servers. Building from sources is a
hassle we would rather avoid, but since we are already doing this with
ISC DHCP we could also do it with BIND if necessary.
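[One hedged way around the journal-file worry: instead of copying the
on-disk files, transfer the zone as served, since an AXFR response
reflects the zone with all journal changes already applied. Zone name,
address, and paths below are placeholders:]

```
# Hypothetical zone name and slave address. The transferred copy
# already includes everything in the .jnl files, so it can seed the
# new master's zone file directly:
dig @192.0.2.53 example.internal AXFR > /var/named/master/example.internal.db

# Sanity-check the file before loading it on the new master:
named-checkzone example.internal /var/named/master/example.internal.db
```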
Current master is _quite_ old, BIND 9.3.6 (bind-9.3.6-25.P1.el5_11.5).
So the setup is really in need of a refresh. :-)

Thank you in advance!

-- 
Peter Rathlev