This discussion comes up in every security WG. The proposal tries to help a 
client who wants to always use the strongest algorithm without actually having 
to understand why a different algorithm is weaker. This makes the semantics 
pretty intractable.

On Mar 17, 2014, at 8:50 AM, Viktor Dukhovni <[email protected]> wrote:

> On Mon, Mar 17, 2014 at 11:26:58AM -0400, Paul Wouters wrote:
> 
>> On Sat, 15 Mar 2014, Viktor Dukhovni wrote:
>> 
>>> Goal:
>>> 
>>>  * It should be possible for servers to publish TLSA records
>>>    employing multiple digest algorithms allowing clients to
>>>    choose the best mutually supported digest.
>> 
>> Isn't that already possible?
> 
> Not based on RFC 6698 alone.  With RFC 6698 the client trusts all
> TLSA records, whether "weak" or "strong".

Can you point to the specific text for that? It was not my intention, and I 
doubt it was the intention of the WG.

> My proposal is essentially the same.  The client uses the strongest
> acceptable digest algorithm.  The *client* decides what "strongest"
> means.  It never chooses an unsupported algorithm.

Again, that was at least my intention for 6698. If we need to clarify that, 
that would be much better than adding another layer of protocol grease.
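The selection rule both sides seem to agree on ("the client uses the strongest acceptable digest algorithm, and the client decides what strongest means") can be sketched roughly like this. This is a hypothetical illustration, not text from RFC 6698; the record shape and the `select_records` helper are invented for the example, and only the matching-type codes (1 = SHA2-256, 2 = SHA2-512) come from the spec:

```python
# Client's own preference order, strongest first. The codes are TLSA
# "matching type" values: 2 = SHA2-512, 1 = SHA2-256. An algorithm the
# client does not list here is simply never chosen.
CLIENT_PREFERENCE = [2, 1]

def select_records(tlsa_rrset):
    """Return only the TLSA records using the strongest digest that both
    the server published and this client supports; weaker published
    records are ignored entirely."""
    for matching_type in CLIENT_PREFERENCE:
        chosen = [r for r in tlsa_rrset if r["matching_type"] == matching_type]
        if chosen:
            return chosen
    return []  # no mutually supported digest: authentication cannot proceed

rrset = [
    {"matching_type": 1, "data": "aa..."},  # SHA2-256 (the "weak" one in the example)
    {"matching_type": 2, "data": "bb..."},  # SHA2-512 (the "strong" one)
]
print(select_records(rrset))  # only the SHA2-512 record survives selection
```

Under this rule a stronger client never even looks at the weaker records, which is exactly the behavior being debated below.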

>> If a certain digest is so weak it is basically broken, it should not be
>> left in a published TLSA record.
> 
> Weak digests (say SHA2-256 if/when broken) cannot be easily removed
> from RRsets until all clients support stronger ones.  The idea is
> to publish stronger digests and deploy stronger clients, then remove
> weak digests later.  

Yes.

> Stronger clients will never use the published
> weak records.  

I strongly doubt that is the desired outcome. If so, lots of zones will go 
invisible when the "later" in "remove weak digests later" stretches to a decade.

Instead, a stronger client can have a setting that says "I'm going to abort 
when seeing a weaker digest, and I will alert you". The latter part is 
important.

> Otherwise there's an Internet-wide flag-day.

Which will never happen, so bringing it up is just hyperbole.

>> If the most preferred TLSA record fails validation, the client should try
>> another TLSA record.
> 
> This works poorly.  While the weak algorithm is being phased out
> (years) even clients that support stronger algorithms are at risk.

At risk of what? Seriously: DANE is additional security over non-TLS, so a 
"weak" algorithm is still better than "no TLS". Reduction to absurdity is not 
helpful here.
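For contrast, the fallback behavior being discussed here (try the most preferred record; on failure, fall back to the next) might look like the sketch below. This is an illustrative assumption, not RFC text: the record shape and the `match` callback (which would wrap the actual certificate-digest comparison) are hypothetical. The point of contention is visible in the code: a forged chain that satisfies any published record, including a weak one, is accepted.

```python
def verify_with_fallback(tlsa_rrset, cert, match):
    """Try TLSA records from strongest to weakest matching type; accept
    the certificate if any record matches. This is the behavior Viktor
    argues is risky while a weak digest is still published."""
    ordered = sorted(tlsa_rrset, key=lambda r: r["matching_type"], reverse=True)
    for record in ordered:
        if match(record, cert):
            return True
    return False  # no record matched: reject
```

A client that matches only the weak (matching type 1) record still authenticates under this scheme, which is why the fallback keeps weak-only clients working during a phase-out, at the cost Viktor describes.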

>> Perhaps there is text in the DS record RFC to look at that describes
>> this better than I just did.
> 
> Perhaps Wes can chime in.  His comment to me was that the proposed
> DAA (digest algorithm agility) is essentially the only possible
> approach, and largely analogous to the DNSSEC approach.

I believe he is talking about RFC 6975. I do not believe that it attracted any 
significant interest.

--Paul Hoffman
_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane