[EMAIL PROTECTED] (Sean Doran) writes:
> | My own feeling is that we're just going to have to accept the notion
> | of our routers having millions of routes in them and go for algorithms
> | that scale better than distance vector or path vector so we don't
> | drive them into the ground while doing the computations.
> 
> While I think that you're not exactly using the term "algorithm" right,
> I do think you have a point: a system more desirable than RIP with
> funny non-scalar metrics would be welcome.  Unfortunately, all the ones
> I know about have annoying problems with algorithms that scale poorly
> as one increases the amount of state known by any node in the
> distributed computation.  In other words, switching to LS or any other
> known routing system does not help us: we take a LONG time to compute
> when there is a lot of information, and while the side-effects of that
> vary from system to system, they are pretty much universally unpleasant.

I think "algorithm" is the right word to use here, because we're down at
the level of the complexity theory of the algorithms, and graph
algorithms are the core of the whole problem. One of the interesting
questions is how much preserving partial results of earlier computations
helps in the face of incremental updates arriving. We need scaling that
behaves well not only in the number of nodes but also in the number of
arriving updates. Path vector is badly unstable as the updates come in.
You seem to dismiss link state offhand, but it isn't clear that link
state couldn't help out a lot here. (Neither is it clear that it could,
but it appears to be an interesting area for some experiments.)
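
To make that concrete, here's a rough sketch in Python (the topology,
costs, and update stream are invented for illustration) of the naive
link-state behaviour, which throws away all prior work and reruns SPF on
every update. The experiment I'm curious about is how much of that work
an incremental recomputation could actually keep:

    import heapq

    def dijkstra(graph, source):
        # Plain single-source shortest paths, recomputed from scratch.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in graph.get(u, {}).items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Invented topology: adjacency dict of link costs.
    graph = {"A": {"B": 1, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}

    # Invented stream of link-cost updates (u, v, new cost).
    for u, v, new_cost in [("B", "D", 2), ("A", "C", 2)]:
        graph[u][v] = new_cost
        # Naive behaviour: discard all earlier results and recompute.  An
        # incremental SPF would try to reuse most of the previous 'dist'
        # table; the open question is whether that helps the scaling in
        # the number of arriving updates or just the constant factor.
        print(dijkstra(graph, "A"))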

> Meanwhile, please accept that separating identity from location is
> a means to allow one to aggressively constrain the amount of global
> knowledge by hiding the topological state of most distant network
> elements, at the cost of maintaining a mapping between _what_ and _where_.

Ah, but then what's your method for mapping between the two, how fast
must *it* run, and how well does *it* scale? I've yet to see something
that sounds like it would work well. Nor have I seen a proposed system
that actually reduces the complexity of the routing portion of the
problem.

I'm not saying such a thing doesn't exist. I'm just saying that I
haven't seen a worked example I can believe in.
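
To be clear about what I mean by the mapping step, here's a toy sketch
in Python; the names and the cache policy are mine, not anything from an
actual proposal. Everything hard lives in the parts this glosses over:
how the authoritative map is distributed and kept current, how fast
resolve() has to run on the forwarding path, and what a miss costs:

    # Toy identifier-to-locator map; illustrative only.  The names here
    # ("LocatorMap", "site-42") are invented, not from any proposal.
    class LocatorMap:
        def __init__(self, authoritative):
            # Stand-in for whatever global service holds the real mapping.
            self.authoritative = authoritative
            self.cache = {}

        def resolve(self, identifier):
            if identifier in self.cache:
                return self.cache[identifier]
            # Cache miss: the scaling question moves here.  How big is the
            # authoritative map, how is it distributed, and how fast must
            # this lookup run on the forwarding path?
            locators = self.authoritative[identifier]
            self.cache[identifier] = locators
            return locators

    # Hypothetical end site reachable through two providers.
    m = LocatorMap({"site-42": ["provider-a-prefix", "provider-b-prefix"]})
    print(m.resolve("site-42"))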

> | We can't get
> | rid of the desire to have huge numbers of routes so we have to find
> | ways to avoid nuking ourselves when we have huge numbers of routes.
> 
> You haven't been paying attention to multi6.  That is EXACTLY the desire
> there, and it cuts across provider/researcher/vendor/implementor boundaries.

I have been paying attention, actually. I know what the desire is; I
just don't yet know that it is realizable. Even if we can reduce the
pressure to inject routes for networks with multiple connection points,
we can't eliminate it, and as the number of carriers, interconnects, and
"big" end customers rises, it feels like one is ultimately going to have
to deal with huge numbers of routes. Multi6 and the like seem like they
will at best take a constant factor off of "huge". We still have to plan
for "huge".
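
Back of the envelope, with numbers invented purely for illustration:
even if such a mechanism cut the table in half, the table still grows
linearly with the number of multihomed sites, so the asymptotics don't
change:

    # Numbers invented purely to illustrate "a constant factor off of
    # huge": growth stays linear in the number of multihomed sites.
    for sites in (100000, 500000, 1000000):
        full_table = sites * 2            # assume roughly two routes per site today
        with_multi6 = full_table // 2     # assume the mechanism halves the table
        print(sites, full_table, with_multi6)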

> Your noisemaking here on the main IETF list is counter-productive
> and childish.

Your opinion is noted.

--
Perry E. Metzger                [EMAIL PROTECTED]
--
NetBSD Development, Support & CDs. http://www.wasabisystems.com/
