On 11/9/17, 00:01, "DNSOP on behalf of Paul Wouters" <dnsop-boun...@ietf.org on 
behalf of p...@nohats.ca> wrote:

>"prerogative of the implementer" canont apply to how one defines trust
>in a protocol. You can't have two implementations make different
>trust decisions, especially as most endusers don't control their
>resolver.

The first split I make when considering this is the difference between general 
purpose implementations - such as BIND, Unbound, Microsoft's, and others that 
are distributed to a general population - and turn-key implementations that are 
purpose-built for a specific need.  The latter do exist but are generally not 
considered when it comes to standards because, for them, conformance to open 
standards just doesn't matter.

For general purpose implementations, it is up to the distributor of the 
software to decide what to implement and how.  Whether to faithfully implement 
a Request for Comments is a choice.  I have been witness to an implementation 
that tried to fix itself to be more compliant, only to find the relying parties 
balking and requesting a return to the non-conformant behavior.

When it comes to DNS resolution, there is the basic protocol to implement - the 
messages, the formats, the timing, and so on.  If these are not done correctly, 
a resolution engine may suffer rcode=FORMERR responses and get nowhere.  Beyond 
that, though, an implementer has a great deal of choice while still remaining 
interoperable with other software builds.
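To make the FORMERR point concrete, here is a minimal sketch (names are illustrative, not from any particular implementation) of pulling the 4-bit RCODE out of the flags field of a DNS message header, per the wire format in RFC 1035 section 4.1.1:

```python
# Sketch: extract the RCODE from a raw DNS response header.
# The header is 12 bytes: ID (16 bits), flags (16 bits), then four
# 16-bit section counts.  The low 4 bits of the flags word are the RCODE.
import struct

FORMERR = 1  # "Format Error" - the server could not interpret the query

def rcode_of(message: bytes) -> int:
    """Return the 4-bit RCODE from a raw DNS message."""
    if len(message) < 12:
        raise ValueError("DNS header is 12 bytes; message too short")
    _msg_id, flags = struct.unpack("!HH", message[:4])
    return flags & 0x000F

# A header whose flags word (0x8181) carries RCODE=1, i.e. FORMERR.
header = struct.pack("!HHHHHH", 0x1234, 0x8181, 0, 0, 0, 0)
assert rcode_of(header) == FORMERR
```

A resolver that keeps receiving FORMERR here has nowhere left to go; everything beyond getting this much right is implementation choice.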

One feature is what I've often called the "aggressive nature" of searching for 
an answer.  Neglecting DNSSEC for the moment: in 2012 I tried to write a basic 
resolver to find information about a popular ranking engine's top zones.  I 
discovered that many of the zones were "broken" in the sense that the servers 
operating them did not answer queries for SOA, NS, etc. - errors that should 
have made the zones unreachable.  Yet these zones were ranked as popular.  I 
realized that general purpose resolvers went beyond the basic strategy for 
getting an answer; I mentally tagged this as "aggressive resolution", in the 
same sense that an experienced librarian may find a reference book that an 
inexperienced one cannot.

DNSSEC offers more choice in terms of aggressive resolution.  For example, if a 
resolver is suffering a forgery attack (à la what Kaminsky described long ago 
now), the first response the resolver gets is the forgery.  If the resolver 
waited longer, the real answer would likely arrive.  Early resolvers - and, as 
far as I know, this is still true of current ones - close their listener after 
getting the forgery.  It is possible to continue listening (but for how long?) 
to catch the real answer.  With DNSSEC proving the first answer wrong, the real 
answer could then be validated - if the listener is not torn down.  I 
understand why this more aggressive implementation was not chosen.
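The trade-off above can be illustrated with a toy simulation (this is not a real resolver, and validate() merely stands in for full DNSSEC validation): a resolver that tears down its listener on the first response accepts the forgery, while one that keeps listening until a validating answer arrives - bounded by some budget, since the "how long?" question must be answered - can recover.

```python
# Toy illustration of the two listener strategies described above.
from typing import Callable, Iterable, Optional

def first_response(responses: Iterable[str]) -> Optional[str]:
    """Typical behavior: accept the first response, close the listener."""
    for r in responses:
        return r
    return None

def first_validating(responses: Iterable[str],
                     validate: Callable[[str], bool],
                     budget: int = 5) -> Optional[str]:
    """Aggressive behavior: keep the listener open, discard responses
    that fail validation, up to a bounded number of attempts."""
    for i, r in enumerate(responses):
        if i >= budget:
            break  # "but for how long?" - give up eventually
        if validate(r):
            return r
    return None

# A forged response races ahead of the real, signed one.
wire = ["forged-answer", "real-signed-answer"]
is_valid = lambda r: r == "real-signed-answer"
assert first_response(wire) == "forged-answer"
assert first_validating(wire, is_valid) == "real-signed-answer"
```

The budget parameter is the crux: keeping the socket open longer costs latency and resources on every query, which is presumably why the less aggressive design won out.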

There is much leeway in what an implementer chooses to encode.  The goal of the 
RFC series, early on at least, was to give guidance on how to achieve 
interoperability.  The documents are not requirements nor design guides.  I 
spent a long time with "The Algorithm" in section 4.3.2 of "Domain Names - 
Concepts and Facilities" (part of STD 13) to come to know what that means.  
(Thanks to Rob Austein.)

Implementers, like any kind of writer, have to put their audience foremost in 
mind.  This is why there are many extra-standards features (and bloat) in 
software today.  (Round-robin delivery of answers worked its way into 
mainstream code that way.)  Implementers have to balance simplicity of tools 
against functionality via complex configurations.  I've been told that, many 
times, the operator will just go with the default settings - so defaults are 
important.

>It's up to neither. You cannot retroactively argue that hardcoding trust 
>anchors is an option can that be ignored by an implementation.

I don't understand this point.  If you are referring to the practice of general 
purpose code shipping with a set of trust anchors, then at least in BIND one 
can turn DNSSEC validation off - or remove/override the packaged defaults.  
That is useful when deploying code on a network that is not part of the global 
public Internet.
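For illustration, a BIND 9 configuration might do either of the following (statement names vary somewhat across BIND versions, and the key material here is a placeholder, not a real anchor):

```
options {
    // Disable DNSSEC validation entirely, e.g. on a closed network
    // that is not part of the global public Internet.
    dnssec-validation no;
};

// Or: replace the packaged root anchor with a locally managed one.
trust-anchors {
    example. static-key 257 3 8 "AwEAAa...";  // placeholder key material
};
```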
    
>knot had a bug some people thought would be a nice fallback recovery feature 
>to a failed rollover procedure. knot should just be fixed. This really does 
>not require an RFC update.

From this statement it's not clear whether the "bug" was a "feature you didn't 
like" or a coding error, so I don't know what the latter sentence is saying.

I realize I'm writing a lot, and I'm not sure I'm making a strong enough case 
for why, "for the sake of resiliency," any chain that validates ought to be 
taken as a good thing.  This might need a different approach, as trying to talk 
about "trust" is too ill-defined when discussing a protocol whose basic design 
did not consider "trust" in the data delivered.  "Trustworthiness" in 
"Clarifications to the DNS Specification" (RFC 2181) relied on what section of 
a response the data appeared in and from what IP address it came.  Forgeries 
weren't considered even in that round of clarification.

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop