On Fri, 12 Mar 2010, Tony Li wrote:

> Are you referring to Geoff's recent APRICOT talk on "BGP in 2009"?

Yes.

> If so, that would seem to indicate that convergence times for single prefixes are relatively stable and roughly proportional to AS path length.

Maybe. Though I also got the following from his presentation:

- the diameter of the internet is fairly stable (not unexpected,
  given we know the internet topology tends to be scale-free in
  its degree distribution - there's a quick sketch of this just
  after this list)

- despite the continued growth in the number of ASes, the
  convergence time has been fairly stable too
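
As a toy illustration of the first point, here's a quick Python sketch - using networkx's Barabasi-Albert generator, which is only a loose stand-in for the real AS-level topology, and certainly not Geoff's methodology - showing how slowly the diameter grows as such a graph gets bigger:

# A rough, self-contained sketch (my own toy, not Geoff's measurements):
# grow preferential-attachment ("scale-free") graphs and watch how slowly
# the diameter increases with size.  Needs networkx; the Barabasi-Albert
# model is only a loose stand-in for the real AS-level topology.
import random
import networkx as nx

for n in (1_000, 10_000, 100_000):
    g = nx.barabasi_albert_graph(n, m=3, seed=42)
    # The exact diameter is expensive at these sizes, so sample eccentricities
    # from a few random nodes to get a lower bound on it.
    sample = random.sample(list(g.nodes), 20)
    approx_diam = max(nx.eccentricity(g, v=v) for v in sample)
    print(f"n={n:>7}  diameter >= {approx_diam}")

If the scale-free literature is right, that bound should only creep up roughly logarithmically as n grows, which is the "small, fairly stable diameter" behaviour Geoff's data shows.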

This latter point is very interesting because, if I'm understanding Geoff's results and the previous academic work on BGP convergence correctly (Labovitz, Griffin), it means that the common-case convergence of BGP on the actual internet topology scales a lot more nicely than the worst-case scaling that work predicts.

I could be completely misunderstanding this, and we'd also need more data beyond Geoff's, but his presentation seemed to say "the BGP sky isn't falling anywhere near like we thought it should", and based on his charts I'd have to agree.

> It is not terribly surprising that these are constant, as a significant fraction of changes are due to tail circuit changes where only a small number of prefixes are actually going to shift paths.

> Since BGP convergence at a single node is going to take time that is linear in the number of affected prefixes, and only a few prefixes are changing, there would seem to be nearly a constant amount of work to do.

Well, the per-prefix work ought to be increasing as the number of prefixes advertised increases, surely? Unless somehow the growth in new prefixes is not distributed evenly - i.e. all the new prefixes are in the core, while the failures are mostly at the edges (tail circuits).

However, I wonder if Geoff is measuring per-UPDATE alone, rather than breaking UPDATE activity out by the number of NLRIs? That would be consistent with what you say.
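
To be clear about why that distinction matters: a single UPDATE can carry many NLRIs, so "UPDATEs seen" and "prefixes touched" can tell quite different stories. A trivial Python sketch of the two counts, over an invented in-memory representation (nothing to do with Geoff's tooling, and not a real MRT/BGP parser):

# Toy illustration only (invented data structure, not Geoff's tooling, not a
# real BGP/MRT parser): the same stream of UPDATEs gives different numbers
# depending on whether you count messages or the NLRIs packed inside them.
from dataclasses import dataclass, field

@dataclass
class Update:
    announced: list = field(default_factory=list)   # NLRI prefixes announced
    withdrawn: list = field(default_factory=list)   # NLRI prefixes withdrawn

updates = [
    Update(announced=["192.0.2.0/24"]),
    Update(announced=["198.51.100.0/24", "203.0.113.0/24"],
           withdrawn=["192.0.2.0/24"]),
]

msg_count = len(updates)
nlri_count = sum(len(u.announced) + len(u.withdrawn) for u in updates)
print(f"{msg_count} UPDATE messages, {nlri_count} NLRIs touched")

A per-message count would stay flat in that example even if routers started packing far more prefixes into each UPDATE.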

Whatever, his findings are that there isn't any evidence of a BGP meltdown, surely? To the contrary, internet routing is doing surprisingly well.

> That same talk reinforces the fact that we're still on a quadratic growth curve, for both v4 and v6. ;-(

I can't remember where I saw it (another one of Geoff's lovely graph-filled presentations, maybe? :) ) but didn't someone do a comparison of growth in memory/CPU resources versus growth in DFZ prefixes? IIRC they found RAM and compute resources were increasing at a sufficient rate to keep ahead of DFZ growth.

I.e. the exponential Moore's law growth in transistors-per-device beats the quadratic DFZ growth.
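
Back-of-the-envelope, that comparison is easy to play with. The constants below are purely illustrative assumptions of mine (not actual DFZ sizes or hardware figures), but the shape of the result doesn't depend on them:

# Purely illustrative constants, not real DFZ or hardware data: compare a
# quadratic-ish table-size curve against Moore's-law-style exponential
# capacity (doubling roughly every two years).
BASE_PREFIXES = 300_000      # assumed DFZ size at year 0
BASE_CAPACITY = 1_000_000    # assumed affordable FIB/RIB entries at year 0

for year in range(0, 21, 5):
    prefixes = BASE_PREFIXES * (1 + 0.1 * year) ** 2   # quadratic-style growth
    capacity = BASE_CAPACITY * 2 ** (year / 2)         # doubling every 2 years
    print(f"year {year:>2}: prefixes ~{prefixes:>13,.0f}"
          f"  capacity ~{capacity:>14,.0f}"
          f"  headroom x{capacity / prefixes:.1f}")

An exponential eventually dominates any polynomial, so as long as both trends hold the headroom only grows.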

The growth in prefixes is simply to be expected, obviously - to some extent it's natural and good.

> What I didn't see in there was any discussion of the convergence time when there is a point failure in the network, such as when a router reboots. These are the cases where the BGP convergence time is going to be linear in the number of prefixes.

I think what Geoff was suggesting is that it's linear in the diameter of the internet, which is a lot better than the "linear in the size of the internet topology" O(|V|+|E|) worst-case scaling from the literature - particularly when the internet topology grows in such a way as to keep diameter growth very limited.
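
To make that intuition concrete, here's a deliberately crude, synchronous path-vector toy of my own - nothing like real BGP (no MRAI timers, no policy, none of the path-exploration pathologies Labovitz describes) - in which the number of rounds needed to propagate one new announcement is bounded by the hop distance from the origin, i.e. by the diameter, rather than by the size of the graph:

# A deliberately crude, synchronous path-vector model (my own sketch, not real
# BGP): each round, every node advertises its current best loop-free path to
# its neighbours, and shorter paths win.  It only illustrates the
# "rounds ~ diameter" intuition for propagating a single announcement.
def converge(adjacency, origin):
    best = {origin: (origin,)}   # best[v] = shortest loop-free path from v to origin
    rounds = 0
    while True:
        changed = False
        announcements = dict(best)   # everyone advertises last round's best path
        for node, neighbours in adjacency.items():
            for nbr in neighbours:
                path = announcements.get(nbr)
                if path is None or node in path:
                    continue                     # nothing heard yet, or a loop
                candidate = (node,) + path
                if node not in best or len(candidate) < len(best[node]):
                    best[node] = candidate
                    changed = True
        if not changed:
            return best, rounds
        rounds += 1

# A small line of "ASes": 1 - 2 - 3 - 4 - 5 (diameter 4)
adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
paths, rounds = converge(adjacency, origin=1)
print(f"converged in {rounds} rounds (graph diameter is 4)")
for node, path in sorted(paths.items()):
    print(node, "->", path)

In a toy like that, the rounds needed are set by how far the announcement has to travel, not by how many nodes or prefixes exist in total - which is why slow diameter growth matters so much.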

> Or, put another way, Geoff's work suggests that routers in the middle of the net aren't rebooting frequently. Whew. ;-)

Maybe :)

FWIW, I'm not claiming that BGP scalability should not be improved. However, I don't see evidence of any terrible scaling problem that must urgently be solved. Without evidence of such a problem, I would argue we should be careful about picking overly extreme solutions (extreme in the disruptive sense).

I.e., in trying to find a way to insulate the core of the internet from growth at the edges, we should be careful not to rush towards a solution that is worse than the null solution (e.g. because it has a mapping system that adds latency and/or instability).

It's possible that the current architecture of the internet is holding back growth that would otherwise be there. I guess we could test for that by comparing growth today with growth in earlier days: if it has slowed down significantly, then that must be the case. I haven't checked, but my vague memory of the address-space reports is that this is not so and that the internet is growing normally (how this changes as/when v4 runs out and v6 is the only space that allows growth will be interesting to see).

In short, I haven't seen compelling evidence to suggest that the current routing architecture cannot cope with the historical and current modes of growth.

Anyway, I could be completely under- and/or mis-informed, in which case apologies in advance :).

regards,
--
Paul Jakma      p...@jakma.org  Key ID: 64A2FF6A
Fortune:
According to the latest official figures, 43% of all statistics are
totally worthless.