On Sat, 15 Mar 2014, Viktor Dukhovni wrote:

Goal:

   * It should be possible for servers to publish TLSA records
     employing multiple digest algorithms allowing clients to
     choose the best mutually supported digest.

Isn't that already possible?

    * The client SHOULD employ digest algorithm agility by ignoring
      all but the strongest non-zero digest for each usage/selector
      combination.  Note, records with matching type zero play no
      role in digest algorithm agility.

I don't think that is a proper assumption. For example, a zone might
need to publish a GOST-based digest for legal reasons (e.g. not trusting
US-based digests) but might publish a FIPS-approved digest for everyone
else. The client's local policy may involve a more complicated reduction
scheme than simply "strongest".

Traditionally, for instance with the DS record, we allow publishing
multiple digests, and the client's task is just to find one that is
"acceptable". It would be nice if the client starts with what it
believes is the "strongest".
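A minimal sketch of that reduction, in Python. The preference table and
record layout here are purely illustrative local policy, not anything
specified in the draft or an RFC:

```python
# Hypothetical local policy: TLSA matching types the client accepts,
# strongest first (2 = SHA2-512, 1 = SHA2-256 in the TLSA registry).
DIGEST_PREFERENCE = [2, 1]

def order_tlsa_records(records):
    """Return acceptable TLSA records ordered strongest-first.

    Each record is a (usage, selector, matching_type, data) tuple.
    Records whose matching type local policy does not accept are
    dropped; matching type 0 (full data, no digest) is kept, but
    sorted last since no digest strength applies to it.
    """
    rank = {mt: i for i, mt in enumerate(DIGEST_PREFERENCE)}
    acceptable = [r for r in records if r[2] in rank or r[2] == 0]
    return sorted(acceptable, key=lambda r: rank.get(r[2], len(rank)))
```

The point is only that "acceptable" and the ordering are both decided by
the client's own table, not by a single global "strongest" rule.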

If a certain digest is so weak it is basically broken, it should not be
left in a published TLSA record.

If the most preferred TLSA record fails validation, the client should try
another TLSA record. The order in which it does so could be written down
in the RFC if we think there is one true way of doing so.  Otherwise,
it should just be referenced in the RFC as "according to local policy".
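The fallback behaviour described above can be sketched as a simple loop;
the `matches` callback standing in for the actual chain check is a
hypothetical name, for illustration only:

```python
def authenticate(records, matches):
    """Try TLSA records in locally-policed order; accept the first match.

    `records` is assumed to be pre-ordered (e.g. strongest digest first
    per local policy); `matches(record)` is a caller-supplied check of
    one record against the presented certificate chain.
    """
    for record in records:
        if matches(record):
            return record  # first record that validates wins
    return None  # no record validated: authentication fails
```

So a broken SHA2-256 record simply falls through to the next record
rather than failing the connection outright.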

This also gives the server admin some more protection. If they publish
digests using SHA2-256 and SHA1, and it turns out their tool generates
bad SHA2-256 digests, then the clients still have a valid SHA1 to fall
back to.

Perhaps there is text in the DS record RFC to look at that describes
this better than I just did.

Paul

_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane
