On 11/2/17, 11:04, "Bob Harold" <rharo...@umich.edu> wrote:

>I generally agree with you, but wonder if there is a performance penalty to 
>searching every possible path before failing.  Is that a reasonable concern?

There are a bunch of tradeoffs to consider.  (I'll start with "it is a concern," 
but I'm not sure it rises to "something to be afraid of.")

For one, in terms of latency (time to answer the question), if the attempts 
are made in parallel then there's arguably less impact, since network latency 
might be greater than compute latency.  ("Might" is the operative word.)
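
To make the latency point concrete, here's a minimal sketch in Go (since Go 
comes up below).  Everything here - validateChain, Anchor, Result, the fake 
sleep - is an illustrative stand-in, not any real resolver's API:

    package main

    import (
        "fmt"
        "time"
    )

    type Anchor struct{ Name string }
    type Result struct{ Secure bool }

    // validateChain stands in for "fetch the records and verify the
    // signatures" from one trust anchor; the sleep is fake latency.
    func validateChain(a Anchor, qname string) Result {
        time.Sleep(50 * time.Millisecond)
        return Result{Secure: a.Name == "root"}
    }

    // Serial: worst-case latency is the sum of every attempt.
    func validateSerial(anchors []Anchor, qname string) Result {
        for _, a := range anchors {
            if r := validateChain(a, qname); r.Secure {
                return r
            }
        }
        return Result{}
    }

    // Parallel: latency is roughly that of the first attempt to
    // succeed, at the cost of doing all the work every time.
    func validateParallel(anchors []Anchor, qname string) Result {
        results := make(chan Result, len(anchors))
        for _, a := range anchors {
            go func(a Anchor) { results <- validateChain(a, qname) }(a)
        }
        for range anchors {
            if r := <-results; r.Secure {
                return r
            }
        }
        return Result{}
    }

    func main() {
        anchors := []Anchor{{"corp-internal"}, {"root"}}
        fmt.Println(validateSerial(anchors, "example.com."))
        fmt.Println(validateParallel(anchors, "example.com."))
    }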

One of the old debates between Olafur and me was whether the cost of a 
network retrieval was greater than or less than the cost of a cryptographic 
operation.  That debate is so old I've forgotten which of us took which side.  
I do know that we never resolved it, knew it wasn't generally resolvable, and 
that the answer would probably change over time anyway.

For two, in terms of load on the network (more traffic), it's hard to know.  
It depends on what is cached, how often the work is done, and whether the 
work is repeated for each validation or not.  (The question only arises when 
you have a trust anchor besides the root to consider - the frequency might 
depend on whether split-DNS is an issue, and so on.)

For three, performing actions in parallel complicates code writing.  If you 
were educated in the days before thread packages existed, you are tempted to 
do things serially.  (pthreads, IEEE Std 1003.1c-1995, came along too late in 
the game for me.)  Younger kids are apparently able to multi-task better 
these days, I hear, so I hope that makes for better code (à la Go).

There's always the "one branch gives me this result, the other branch seems 
to be hitting a timeout" case, meaning sometimes you have to estimate when 
it's time to give up and get on with it.  It's that case that makes coders 
implement "first result wins" more often than is probably wise.

This is what makes "robustness" a tricky thing (and why I like to dive into 
this again now and then).  If "robust" means returning as much data for a 
query as possible (like finding the one working secure chain), you risk 
breaking a different definition of "robust" for the relying application, 
which has a deadline to meet.
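
One way to hold both definitions at once, still in the toy Go above (the 
context plumbing is my assumption, not anyone's shipping resolver): let the 
application's deadline ride along in a context, and search for chains only as 
long as it allows:

    // robustWithin explores every chain in parallel but stops the
    // moment the relying application's deadline (carried in ctx)
    // expires - the app's notion of "robust" bounds the resolver's.
    // (Add "context" to the imports of the first sketch.)
    func robustWithin(ctx context.Context, anchors []Anchor,
        qname string) (Result, error) {
        results := make(chan Result, len(anchors))
        for _, a := range anchors {
            go func(a Anchor) { results <- validateChain(a, qname) }(a)
        }
        for range anchors {
            select {
            case r := <-results:
                if r.Secure {
                    return r, nil
                }
            case <-ctx.Done():
                return Result{}, ctx.Err()
            }
        }
        return Result{}, fmt.Errorf("no secure chain before the deadline")
    }

    // e.g.:
    //   ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    //   defer cancel()
    //   r, err := robustWithin(ctx, anchors, "example.com.")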

In short - it's a tradeoff between time (latency) and maximizing validation 
of answers (robustness), one that depends a lot on how the code is 
implemented and the workload it is given.  It's like looking for a needle in 
a haystack - the longer you look, the higher the chance you will find it, 
with speed depending on how you look, the size of the haystack, the number of 
needles present, etc.
