[Catching up on back mail while in transit to Hiroshima...]
At Tue, 27 Oct 2009 16:28:57 -0400, Matt Lepinski wrote:
Here, I understand that everyone hitting the repository system at once
is a bad outcome regardless of the frequency that we recommend. That is,
regardless of whether we
Matt,
I think this is sensible.
Terry
On 6/11/09 6:08 AM, Matt Lepinski mlepin...@bbn.com wrote:
Note that since relying parties will perform these operations
regularly, it is more efficient for the relying party to request from
the repository system only those objects that have
I can understand these concerns.
Would you rather see the wg try:
(a) concentrated redesign (to what, I have no idea)
(b) operational guidance to operators of how to manage this risk - get new
certs out &lt;insert timeframe&gt; before the route that relies on it, etc.
(There's already been discussion
On Oct 31, 2009, at 10:07 AM, Danny McPherson da...@tcb.net wrote:
One might suspect there are some things we can learn from
this:
http://tools.ietf.org/html/rfc4641
In particular, when *contrasting* even initial operational
practices with preceding recommendations made in theoretical
sorry. late here.
as ggm said, probably better than i can, a week or two after philly.
if we have a parent-child chain of length L and each runs as a batch at
some time interval T, then the mean time to propagate is (T/2)*(N-1)
s/N/L/
randy
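Randy's estimate can be sanity-checked in a few lines of Python. This is a sketch, not anything from the thread itself: it applies his s/N/L/ correction, and the chain depths (3-6) and 24-hour interval come from the numbers discussed downthread.

```python
# Hop-by-hop propagation through a parent-child CA chain where each
# level runs as a batch at interval T: per Randy's formula (with the
# s/N/L/ correction), the mean time to propagate is (T/2) * (L - 1).

def mean_propagation_hours(T_hours, L):
    """Expected propagation time for a chain of length L, batch interval T."""
    return (T_hours / 2) * (L - 1)

def worst_case_hours(T_hours, L):
    """Worst case: every hop just misses the previous level's batch."""
    return T_hours * (L - 1)

# Chain depths of 3-6 with a 24-hour cycle, as mentioned in the thread:
for L in range(3, 7):
    print(f"L={L}: mean {mean_propagation_hours(24, L):5.1f}h, "
          f"worst {worst_case_hours(24, L):5.1f}h")
```

For L=6 the mean works out to 60 hours, consistent with the "three day delay" remark about deeper hierarchies.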
Randy Bush wrote:
Randy, could you elaborate in which case does this transitive property
apply, that makes something longer to propagate on a deeper hierarchy? Is it
rekey, re-issue, revocation, or some kind of relying party check?
as ggm said, probably better than i can, a week or two after
That is only true if the thing you're propagating has to travel hop by
hop to the bottom of the hierarchy. So the question still stands: what
is this thing that you think propagates slowly, and why does it have
to propagate hop by hop?
incorrect or missing cert high in chain which needs
Hi George/Matt,
On Oct 27, 2009, at 10:44 PM, George Michaelson wrote:
The goal set earlier on in the life of this project was to stabilize
the system in two complete 24h work cycles of the repository system
as a whole. And yes, this was predicated on a MINIMUM of one fetch
per 24h per RP.
folk going down this rathole might consider two things
rpki-rtr suggests that the number of global fetchers will be radically
less than the number of global asns
there might be ca chain depth of 3-6 for which a 24 hour cycle would
mean a three day delay, making operators remember curtis
Regardless of the adopted cycle time, the repositories are just going to
have to scale to meet it, be that 24hrs, 12hrs, 3hrs, or 30 mins (in the
extreme).
... and there will always be jokers who put * * * * * in their crontab
and run it every minute...
However, if the suggested best
Geoff,
I'm happy to accept that the new wording is poor, but I'm pretty sure
the old wording was also bad, and I think this discussion is important.
The old wording could easily be interpreted to suggest that once per day
was the correct frequency for pulling from a repository. (That is, I
WG Co-Chair Hat OFF
Hi Matt,
entities who are actually using RPKI data for routing SHOULD be
fetching fresh data from the repositories at least once every three
hours.
3 hours?
At a first pass that seems very frequent.
From a server's perspective if there are 30,000 AS's out there
WG Chair Hat Off
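As a rough illustration of the aggregate-load question raised here: the 30,000 figure and the fetch intervals are the numbers from this thread, but the uniform-spread assumption is mine (real relying parties cluster on cron boundaries, as the crontab joke above suggests).

```python
# Back-of-envelope: average fetch rate the repository system would see
# if every relying party polls once per interval, spread uniformly in
# time. Assumes one fetcher per AS, which Randy's rpki-rtr point
# suggests is a substantial overestimate.

def avg_fetches_per_second(num_rps, interval_hours):
    return num_rps / (interval_hours * 3600)

for hours in (24, 3):
    rate = avg_fetches_per_second(30_000, hours)
    print(f"every {hours:2d}h: ~{rate:.2f} fetches/sec on average")
```

Even at the 3-hour interval this averages under 3 fetches per second globally; the harder problem is the clustering, not the mean.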
On 27/10/2009, at 10:52 AM, Sandra Murphy wrote:
Geoff, I thought the reason for using (and mandating) rsync was
precisely to avoid re-load of the whole data space on each
synchronization.
I thought so too.
If so, what estimates would you use of how much of the space