Shane,

That is true. However, I feel the need to point out a subtle yet important difference from today's environment.

First, today the ISP has direct knowledge before this 'event' occurs, and given the direct contractual relationship it has with the customer, the ISP would (IMO) be more likely to have a significant business interest (i.e., getting paid) in ensuring the order was valid, applicable, etc. /before/ it was carried out. IOW, if the ISP is not providing service to the customer, then the ISP may be unable to bill for all or part of the service provided (one simple example: usage-based billing). OTOH, if a third party (an RIR?) higher up the allocation hierarchy were to "whack" a cert, thus invalidating the cert for a resource, can the same be said?
Fair question. In the case of RIPE, where there was a LEO action that was somewhat analogous to ROA whacking, RIPE fought the action (in court). Only time will tell if other RIRs adopt a similar approach.
Second, consider the 'scope' over which such a change will be propagated, and the time-to-restore back to "normalcy" after the matter is rectified. Today -- for better or worse -- it's the immediate upstream ISP(s) of the targeted entity that, in your example, would be curtailing the propagation of routes and/or forwarding to/from the 'targeted entity' and the rest of the Internet. IOW, if (or when) the matter is appropriately resolved, then (I suspect) the restoration times to get the customer back online are minimal today, given we are largely talking about BGP propagation times, i.e., likely minutes. Ultimately, it's just those immediate upstream ISPs that need to 'act' in order to propagate the route to the entire Internet.
I think that's a fair characterization of an example that I gave.
OTOH, in a fully RPKI-enabled world, SPs the world over are performing origin filtering using information from the RPKI at all of their customer and peering interconnections ... this could make for some rather long 'restoration' times for the 'targeted entity', based on several of the models I've seen tossed around in SIDR. And the longer the restoration times, again, the more noticeable the impact could be on the ISPs who provide service to/from the 'targeted entity' (again: usage-based billing).
When a targeted entity detects that its ROA has been whacked, it can try to notify RPs (via out-of-band means). The easiest action for RPs to take, if they choose not to act on the whacking, is to retain the old data that they have cached. So, the time it would take to deal with the whacking is determined by the time it takes to get the word out to RPs (after detection of the whacking).
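The two mechanisms discussed above -- origin filtering against ROAs, and an RP retaining cached data rather than acting on a suspicious removal -- can be sketched as follows. This is an illustrative sketch only, not any particular validator's API; all names are hypothetical, and the validation logic follows the RFC 6811 valid/invalid/notfound classification.

```python
import ipaddress

def validate_origin(prefix, origin_as, roas):
    """RFC 6811-style classification: 'valid', 'invalid', or 'notfound'.

    roas is a list of (roa_prefix, max_length, asn) tuples (hypothetical
    representation of validated ROA payloads)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the route if the route's prefix falls within the
        # ROA prefix and is no longer than maxLength.
        if net.version == roa_net.version and net.subnet_of(roa_net) \
                and net.prefixlen <= max_len:
            covered = True
            if origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "notfound"

class RoaCache:
    """An RP cache that can retain stale ROAs instead of acting on a
    whacking (the fallback behavior described above)."""
    def __init__(self):
        self.roas = []

    def refresh(self, fetched, trust_removals=True):
        if trust_removals or not self.roas:
            self.roas = list(fetched)
        else:
            # Don't honor disappearances: keep everything already cached
            # and merge in anything newly published.
            self.roas = sorted(set(self.roas) | set(fetched))
```

Under this sketch, an RP that has been notified out of band would call `refresh(fetched, trust_removals=False)` until the matter is resolved, so a route that validated yesterday keeps validating today.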
Nonetheless, I will be interested to see if the paper you are writing will include these types of details.
Yes, it does.

if an RIR accidentally allocated the same prefix to two different entities, there would be a problem for one or both of them when they tried to advertise the twice-allocated space.

While theoretically a possibility, I'd be curious to know whether the above is a common occurrence. If the RIR were concerned about this, one quick way to ensure it did not happen would be for the RIR to consult a "live" routing table to see if the space they are (re-)allocating is already being announced somewhere. In fact, I'd be surprised if they did not do this already. (Granted, this does not prevent the same space from ever being twice allocated, because the first allocation could be sitting dormant/not-announced at that moment.)
I think this is a very, very unlikely failure case. I just mentioned it because others have cited this as a concern.
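The "consult a live routing table" check described above amounts to an overlap test between the candidate allocation and currently announced prefixes. A minimal sketch, using a static snapshot in place of a real BGP feed (the snapshot data and function name here are hypothetical):

```python
import ipaddress

def already_announced(candidate, announced_prefixes):
    """Return the announced prefixes that overlap the candidate block.

    An RIR would run this against a current route feed before handing
    out the space; a non-empty result flags a potential double
    allocation (or a hijack of unallocated space)."""
    cand = ipaddress.ip_network(candidate)
    return [p for p in announced_prefixes
            if cand.version == ipaddress.ip_network(p).version
            and cand.overlaps(ipaddress.ip_network(p))]

# Stand-in for a routing-table snapshot (documentation prefixes):
snapshot = ["192.0.2.0/24", "198.51.100.0/22"]

# A /24 carved from an announced /22 would be flagged;
# untouched space would come back clean.
already_announced("198.51.100.0/24", snapshot)
already_announced("203.0.113.0/24", snapshot)
```

As noted above, this only catches space that is announced at the moment of the check; a dormant earlier allocation would still slip through.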

Steve
_______________________________________________
sidr mailing list
sidr@ietf.org
https://www.ietf.org/mailman/listinfo/sidr