John Levine wrote:

While I think it would be good to publish some best practices in this area,
this draft still seems scattered and makes some assertions that seem to me
to be somewhere between unsupported and mistaken.

> I think we agree on the goal: there are two parties, call them the
> owner and the verifier, and the verifier gives the owner a random token
> that the owner puts in its DNS to show it owns the domain.  There are
> a bunch of different aspects that one can look at independently.

Two (or three) parties, yes.

> One is where the token goes, in the name or contents of the owner's
> record. I think we agree that putting the token at the owner's host
> name is a bad idea, but either of these can work, with a1b2c3 being
> the random token:
>
> _a1b2c3.example.com IN ... "whatever"
> _crudco.example.com IN ... "a1b2c3"

Adding cryptographically strong/long strings in the prefix seems
unwieldy and prone to problems - especially if the user has to enter
these via a web GUI of mediocre quality. It also makes manual
checking harder (eg with the dig command). But most importantly, you
already need a good prefix to identify the vendor and service, and
mucking up random strings there just seems the wrong thing to recommend
as BCP. So the only good reason for these is if there is a three-party
system (vendor/service, client, proof-provider).
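To make the comparison concrete, here is a minimal sketch (Python, with
"crudco" as a hypothetical provider label) of the two forms side by side;
the unwieldiness of the token-in-name form becomes obvious once a real
128-bit token is substituted for a1b2c3:

```python
import secrets

def verification_records(domain: str, provider: str) -> tuple[str, str]:
    """Build the two candidate record forms for a fresh random token.

    `provider` is a hypothetical vendor label, e.g. "crudco"; the zone
    file syntax here is illustrative, not taken from any draft.
    """
    token = secrets.token_hex(16)  # 128 random bits as 32 hex chars

    # Form 1: the token lives in the owner name (the unwieldy option)
    in_name = f'_{token}.{domain}. IN TXT "whatever"'

    # Form 2: a fixed provider prefix, token in the RDATA
    in_rdata = f'_{provider}.{domain}. IN TXT "{token}"'
    return in_name, in_rdata
```

Printing both makes it easy to see which one a user can plausibly paste
into a mediocre web GUI without mangling it.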

> If you use a fixed prefix, _crudco in this case, you should register it
> in the RFC8552 attrleaf registry.

Since we are recommending these be easily recognisable as belonging to
the vendor and service, they would be different for each vendor. Sure,
we could add another prefix in between and put that in the attrleaf
registry, but that doesn't add any real value.

> Another is what record type to use. I find the arguments against CNAME
> unpersuasive, basically saying that if you do something dumb, it won't
> work, which is true, but it's always true.

I thought a BCP was about recommending that people don't do dumb things?

> I realize that RFC1034 says
> not to chain CNAMEs, but we all know that people chain CNAMEs and it
> works, e.g. www.microsoft.com goes through three CNAMEs and it works
> fine.

I was not allowed this line of argumentation with you when talking
about the LHS of an email address :)  So let's stick with not
recommending things that violate existing RFCs?

> If you use a _name in the attrleaf registry or a _random prefix
> I would think the chances of a CNAME colliding with something else are
> low, and a verifier presumably controls its own DNS and can keep CNAME
> chains short.

People make mistakes with CNAMEs; let's not recommend CNAMEs, so the
chance that people are affected by mistakes is reduced.
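For what it's worth, the "keep CNAME chains short" rule is easy to
enforce mechanically on the verifier's side. A toy sketch over an
in-memory name-to-CNAME mapping (not a real resolver, and the hop limit
is an arbitrary choice here):

```python
def cname_chain(records: dict[str, str], name: str, limit: int = 3) -> list[str]:
    """Follow CNAMEs through a toy zone mapping.

    `records` maps an owner name to its CNAME target; any name absent
    from the mapping is treated as the final (non-CNAME) target.
    Raises if the chain exceeds `limit` hops.
    """
    chain = []
    while name in records:
        chain.append(name)
        if len(chain) > limit:
            raise ValueError(f"CNAME chain longer than {limit}: {chain}")
        name = records[name]
    chain.append(name)  # the terminal, non-CNAME name
    return chain
```

A verifier that controls its own zones could run a check like this in CI
to guarantee the chains it hands out stay short.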

> For the length of time the token is valid, there seem to be only two:
> five minutes for a one-off verification like for ACME, or forever for
> someone who is doing continuing analysis of something in your domain,

Five minutes seems optimistic and only true for geeks running their own
webservers and nameservers. In real life, you have different departments,
timezones, ticketing-system response times, etc. So these records will
stay in the zone for days or, as we often see now, forever, because
people like to err on the side of "I didn't cause that outage" over
"I cleaned up our DNS".

> typically web analytics. While I can see aesthetic reasons to get rid
> of expired one-off tokens, I don't see the point of putting an
> expiration time in them, nor any particular harm in leaving them there
> if they are at _name and not the main host name.

You might be missing operational experience from working in larger
companies. For example, when I started my current $dayjob, I found 7
records, and 4 of these could not be traced as to who needed them or
whether they were still needed. Some "digging" around showed us
cases where large companies had 20+ entries related to these at their
APEX. So yes, cleanup is very important and one of the main
reasons for this BCP - to facilitate cleanup of records by getting
them issued in a better, self-documenting syntax.

Yes, using/recommending prefixes helps with this, which is why we
recommend them. But having an expiry is also obviously useful for cleanup.

From a security point of view, it is also good to know that a certain
record has no remaining value, so your audit reports can state that no
access of some kind is still being granted via some DNS record placed
10 years ago that might still be in the zone. Without an expiry, we
don't know whether the vendor still honours the record or whether it
has expired and is just junk.
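As an illustration only - the draft's actual self-documenting syntax may
well differ - a record value carrying a hypothetical "expiry=" attribute
could be audited mechanically like this:

```python
from datetime import datetime, timezone

def token_expired(txt_rdata: str, now=None) -> bool:
    """Return True if a TXT value carrying a hypothetical
    "expiry=YYYY-MM-DD" attribute is past its expiry date.

    Example rdata: "token=a1b2c3; expiry=2024-06-30"
    This attribute syntax is illustrative, not from any RFC or draft.
    """
    now = now or datetime.now(timezone.utc)
    for part in txt_rdata.split(";"):
        key, _, value = part.strip().partition("=")
        if key == "expiry":
            expiry = datetime.strptime(value, "%Y-%m-%d")
            return now > expiry.replace(tzinfo=timezone.utc)
    # No expiry attribute: we cannot tell, so treat it as still live.
    return False
```

An auditor could then walk the zone and flag every verification record
whose embedded expiry has passed, instead of guessing who still needs it.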

> something about TTLs, e.g., not to have a 12 hour TTL for a token you
> plan to remove in a few minutes.

We could say something about TTLs, but here I really don't see much
point. Having one TXT record linger in some recursive cache for 12
hours? Meh.


> There are some other minor points. You say to generate 128 random
> bits, but then to hash them to 256, which I don't understand. (As RFC
> 4086 sec 6.2 notes, strong random bits often are already hash output.)
> If 128 bits is enough, use 128 bits; if you need 256, generate 256.
> Either way I'd do base32 rather than base64 encoding to make the
> result more robust against helpful software that does you a favor by
> case folding.

We will look into this and clarify that section.
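A sketch of what the clarified recommendation might boil down to:
generate exactly the number of random bits you need and base32-encode
them (the function name and default are illustrative):

```python
import base64
import secrets

def make_token(bits: int = 128) -> str:
    """Generate a random verification token, base32-encoded.

    base32 uses a single-case alphabet (A-Z, 2-7), so the token
    survives "helpful" software that case-folds it; a base64 token
    would be corrupted by the same treatment.
    """
    raw = secrets.token_bytes(bits // 8)  # cryptographically strong
    return base64.b32encode(raw).decode("ascii").rstrip("=")
```

128 bits come out as 26 base32 characters, and even a fully lowercased
copy still decodes to the same bytes.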


> Overall, I'd lay out the options, and point out the advantages or
> disadvantages of each rather than just saying "do this" without a
> strong basis not to do it the other ways that work fine in practice.

I disagree that there is a lack of "strong basis". This is actually
driven by people's (bad) operational experience at various
organizations. Giving people CNAME rope to hang themselves with, or
unwieldy long random prefixes, are bad ideas that should not be
recommended - nor "explained but not recommended" either.

Paul

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
