>
>
> It is a mistake to equate LSP flooding with a set of independent P2P
> “connections” – each of which can operate at a rate independent of the
> other.
>
>
>
>
At least my experience very much disagrees with that, and such a proposal
seems to steer towards a slowest-receiver-in-the-whole-network problem, so
I'll wait for others to chime in.

Then, to clarify on Tony's mail: the "problem" I mentioned anecdotally
yesterday as behavior I saw in things I did back in the day was of course
observed when processors were still well under 1 GHz and links were in Gigs,
not the 10s and 100s of Gigs we have today. But yes, the limiting factor
was the flooding rate (or rather the effective processing rate of the
receiver, AFAIR, before it started to drop its RX queues or fell far enough
behind to cause re-TX on the senders) in terms of the losses/retransmissions
that were causing transients, to the point that the cure looked to me worse
than the disease (while the disease was likely just a flu then compared to
today, given we didn't have the massively dense meshes we steer towards
today). The base spec and its mandated flooding numbers didn't change, but
what is possible in terms of rates when breaking the spec did of course
change with CPU and link speeds, albeit most ISIS implementations still hark
back to megahertz processors ;-) And the dinner was great BTW ;-)
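
Just to illustrate the sender-vs-receiver rate point (purely a made-up
sketch in Python, not what any implementation actually does): pacing LSP TX
per adjacency with a token bucket is roughly the knob that was missing, e.g.

# Hypothetical sketch: per-adjacency token-bucket pacing of LSP transmission,
# just to make the "sender flooding rate vs. receiver processing rate" point
# concrete. Rates and names are made up; no real implementation is implied.
import time
from collections import deque

class PacedFlooder:
    def __init__(self, lsps_per_second: float, burst: int):
        self.rate = lsps_per_second      # sustained rate the receiver is believed to handle
        self.burst = burst               # short bursts allowed above the sustained rate
        self.tokens = float(burst)
        self.last_refill = time.monotonic()
        self.queue = deque()             # LSPs waiting to be flooded on this adjacency

    def enqueue(self, lsp) -> None:
        self.queue.append(lsp)

    def transmit_ready(self, send) -> None:
        """Call periodically; sends as many queued LSPs as tokens allow."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        while self.queue and self.tokens >= 1.0:
            send(self.queue.popleft())   # actual TX hook supplied by the caller
            self.tokens -= 1.0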

So yes, I do think that anything that floods at a reasonable rate without
excessive losses will work well on a well-computed, doubly flood-reduced
graph. The question is how to get the "reasonable" in place, both in terms
of numbers and in terms of mechanism, for which we saw tons of lively
discussions/proposals yesterday, the most obvious being of course going
around and manually bumping everyone's implementation to the desired (? ;-)
value ....  Another consideration is having the computation always try to
keep more than 2 links in the minimal cut of the graph, which should
alleviate any bottleneck or rather make the cut less likely. Given the
quality of max-disjoint-node/link graph computation algorithms, that should
be doable, by gut feeling. If e.g. the flood rate per link is available, the
algorithms should do even better in the centralized case.
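
To make the more-than-2-links-in-the-minimal-cut idea concrete, here's a
rough sketch (Python/networkx, all names made up, not anybody's algorithm)
of checking a candidate flood-reduced subgraph and naively adding links back
until every minimal cut has at least 3 links:

# Minimal sketch: verify a candidate flood-reduced topology keeps more than
# 2 links in every minimal cut, and naively add links back until it does.
# Assumes networkx and a connected subgraph; names are hypothetical.
import networkx as nx

MIN_CUT_LINKS = 3  # "more than 2 links in the minimal cut"

def harden_flood_graph(full_topology: nx.Graph, flood_subgraph: nx.Graph) -> nx.Graph:
    """Add links from the full topology back into the flooding subgraph
    until its global minimum edge cut has at least MIN_CUT_LINKS links."""
    hardened = flood_subgraph.copy()
    spare_links = [e for e in full_topology.edges if not hardened.has_edge(*e)]
    while nx.edge_connectivity(hardened) < MIN_CUT_LINKS and spare_links:
        # Naive strategy: put back a spare link touching the current minimum
        # cut; a real algorithm would also weigh per-link flood rate.
        cut = nx.minimum_edge_cut(hardened)
        cut_nodes = {n for edge in cut for n in edge}
        candidate = next((e for e in spare_links
                          if e[0] in cut_nodes or e[1] in cut_nodes),
                         spare_links[0])
        hardened.add_edge(*candidate)
        spare_links.remove(candidate)
    return hardened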

BTW, with all that experience (MANET did its share of flood reduction in a
different space, as we know), in RIFT we chose a solution based on a MANET
derivative where every source chooses a different set of trees to flood on
using Fisher-Yates hashes, but that seems possible only if you have
directionality on the graph (that's what I said once at the mike: doing
flood reduction on a lattice [a partially rank-ordered graph with upper &
lower bounds] is fairly trivial; on generic graphs not necessarily so). But
maybe Pascal reads this and gives it a think ;-)
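
For the curious, the general idea in a toy sketch (Python, not the actual
RIFT procedure; node names and the K parameter are made up): each source
seeds a Fisher-Yates shuffle with a hash of its own ID over its candidate
next-hops and floods only to the first K, so different sources spread load
over different subsets of links:

# Toy sketch of the general idea (not RIFT's exact procedure): each flooding
# source runs a Fisher-Yates shuffle seeded by a hash of its own node ID over
# its candidate next-hops, then floods only to the first K. This presumes
# directionality, e.g. a known set of "northbound" parents.
import hashlib
import random

def select_flood_nexthops(source_id: str, candidates: list[str], k: int) -> list[str]:
    """Pick k flooding next-hops for this source, deterministically but
    differently per source, via a seeded Fisher-Yates shuffle."""
    seed = int.from_bytes(hashlib.sha256(source_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = sorted(candidates)              # canonical order so all nodes agree on the input
    for i in range(len(shuffled) - 1, 0, -1):  # classic Fisher-Yates
        j = rng.randint(0, i)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled[:k]

# Example: two sources with the same parents typically flood towards
# different subsets.
parents = ["spine1", "spine2", "spine3", "spine4"]
print(select_flood_nexthops("leaf1", parents, 2))
print(select_flood_nexthops("leaf2", parents, 2))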

as usual, 2 cents to improve the internet ;-)

--- tony