Eric,

Thanks for your comments. Responses are inline.
...
1 - In the draft, there is discussion of the global agreement to move to algorithm B. Who ensures the global agreement of B, and who chooses
and ensures agreement of the various dates?

The IETF is responsible for the alg selection, just as it has been for other algs used with all other IETF standard protocols. Based on Terry's comments, I think we will state that the RFC defining the transition dates will be coordinated with the NRO and IANA.

2 - (Double checking that I have read this right), if the motivation for an algorithm roll is discovery of a weakness in the current algo, no CA can roll until this top-down process reaches them, right (months or years)? I see this is broached in Section 11, but it doesn't seem to be answered there? It sounds like the authors don't intend to address this any further than acknowledging the suboptimality of this
approach?

The motivation for alg transition is anticipated weakness in the current alg suite, more so than a sudden discovery of a vulnerability. Although there have been major headlines about alg breaks, these are usually FUD, and do not motivate an immediate transition to a new alg suite. So, no, we are not proposing a process that deals with a sudden alg break.

3 - Section 11 also prompted another question I had throughout: what happens if a CA doesn't meet these deadlines? It seems like that CA is simply orphaned and cannot participate in routing anymore (until they catch back up)?

It's easier to discuss this if you pick a specific phase. Which one did you have in mind?

From these three questions, I came to the following clarification suggestions: 1 - I see the phases in this draft as defining a formal process. However, I don't see any error-legs (i.e. what happens if there needs to be an abort, rollback, whatever you want to call it). I think it is important to outline how this process can react if there are any unforeseen failures at each phase. I'm not sure that we need to be terribly specific, but perhaps we can agree that _something_ could go wrong and cause the need for an abort? I think this is quite common in process specifications, unless we think nothing will ever go wrong in this process? :)

What one would do is phase-specific. But, in general, the timeline could be pushed back if there is a good reason to do so. I think Terry's suggestion helps in this regard. If we view the NRO as representing the RIRs, and the RIRs as representing ISPs, then there is a path for a CA or RP that has a problem to make that problem known, and addressed.

2 - Related to the above, I would imagine (but maybe this is just me?) that in the event of a failure at one phase or another,
there may need to be a rollback procedure specified.

I'm not sure that there is a need for a rollback, per se. Pick a phase and a failure mode as an example and we can explore that issue.

3 - I think a lot of complexity in the overall draft (and my above comments) could be addressed by allowing CAs to choose their own algorithms and their own schedules. Could this be considered? I recall we discussed how this might negatively affect the complexity of the current design. It's possible that we will simply come to loggerheads here, but (design issues aside) do people think CA operators should have the ability to protect themselves as soon as they can move to a new algo?

One cannot allow each CA to choose its own set of algs, because that local choice has an impact on ALL RPs. That's what economists call an externality, and it's a bad thing. Having each CA choose its own schedule is also a non-starter. Geoff Huston observed that unless we adopt a top-down transition plan, the repository could grow exponentially! That's an unreasonable burden. With a top-down plan, CAs already have limits imposed on them, i.e., a lower-tier CA cannot switch to a new alg until its parent supports the new alg.
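Geoff's exponential-growth point can be made concrete with a back-of-the-envelope sketch (my own toy model, not from the draft): if every CA could independently pick from k algorithm suites, then for any RP to validate a CA at depth d, signed products would have to exist for every combination of suite choices along the certification path, i.e. up to k**d variants.

```python
# Toy model (illustrative numbers only): count the product variants a CA
# at a given depth would need to publish so that every RP, whatever
# suites its ancestors chose, could validate it.

def variants(k: int, depth: int) -> int:
    """Variants needed at `depth` when each of the `depth` CAs on the
    path independently chooses one of `k` algorithm suites."""
    return k ** depth

for d in range(1, 6):
    print(f"depth {d}: {variants(2, d)} variants with 2 suites")
```

With just two suites the count doubles at every tier, which is exactly the repository blow-up a coordinated top-down transition avoids: everyone is on at most two suites at once, regardless of depth.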

4 - Finally, there is a note that all algorithms must be specified in I-D.ietf-sidr-rpki-algs. While I am not challenging that, I would like to point out that an analogous requirement in DNSSEC made it challenging to add new algos (specifically GOST) without a lot of people trying to assess the algo's worthiness within the IETF. I thought, though I could be mistaken, that several people lamented having that requirement. So, perhaps it would make sense to soften it here?

DNSSEC was initially less rigorous in its alg criteria, and the result was not great. We are avoiding those problems.

Steve
_______________________________________________
sidr mailing list
sidr@ietf.org
https://www.ietf.org/mailman/listinfo/sidr