true, but the repository conversation stops at: "all gatherers in the
system have the data"

inside each ASN it's really up to the ASN operator to get that data from
gatherer -> cache -> router in a 'timely fashion'.

If you're signing a route with something, and your upstreams are signing their secure path elements of that route with something, and my routers haven't been provisioned (from data obtained from the RPKI) to validate with the corresponding public versions of those somethings (for whatever reason - e.g., latency in fetching, DoS, failure, etc.), then this *tight coupling* between the routing system and the public key repositories (i.e., the RPKI) means my router, as a relying party, doesn't have the information it needs to validate those signed somethings, and I therefore "downgrade" or break for that route.
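Roughly, in sketch form (hypothetical names, not the BGPSEC draft's exact validation procedure), the relying party ends up in something like this position:

    # Illustrative only (hypothetical names, not the BGPSEC spec's exact
    # procedure): what a relying party is left with when the public keys for a
    # signed path haven't made it from the RPKI into the router yet.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable, Dict, List

    class Outcome(Enum):
        VALID = "valid"
        INVALID = "invalid"
        DOWNGRADED = "treated as unsigned"   # accepted, without the security benefit
        BROKEN = "rejected"                  # reachability lost for that route

    @dataclass
    class SignedSegment:
        signer_ski: str      # key identifier of the AS that signed this segment
        signature: bytes
        signed_data: bytes

    def evaluate(path: List[SignedSegment],
                 local_keys: Dict[str, Callable[[bytes, bytes], bool]],
                 downgrade_on_missing_key: bool = True) -> Outcome:
        for seg in path:
            # verifier provisioned (or not) from data fetched from the RPKI
            verify = local_keys.get(seg.signer_ski)
            if verify is None:
                # The signer can't know we're in this state; whether this
                # becomes a downgrade or a broken route is decided by local
                # policy, not by anything in the signatures themselves.
                return (Outcome.DOWNGRADED if downgrade_on_missing_key
                        else Outcome.BROKEN)
            if not verify(seg.signature, seg.signed_data):
                return Outcome.INVALID
        return Outcome.VALID

Note the missing-key branch: neither outcome is something the signer can anticipate or control.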

Since you have no way of knowing when I have the information to validate something (I may not have fetched it from the RPKI yet, or ingested it into my routers yet, for whatever reason), you can't send a route with the new signatures and expect it to validate; it will fail, and the result is either a downgrade attack or something breaking for that route.

This "looseness" in BGPSEC (an expanding array of validation states and acceptable downgrade attacks because the RPKI may not be available, or local systems may be operating on stale data) continues to give me pause, given the overhead we're introducing and the expanding set of residual vulnerabilities some are willing to accept.

-danny

