On 25/Oct/17 21:41, Luis Balbinot wrote:
> Never underestimate your reference-bandwidth!
>
> We recently set all our routers to 1000g (1 Tbps) and it was not a
> trivial task. And now I feel like I'm going to regret that in a couple
> years. Even if you work with smaller circuits, having larger numbers
> will give you more range to play around.
On 25/Oct/17 20:47, Saku Ytti wrote:
> There are two principal IGP designs. One is role-based, where the
> metric is static depending on the roles of the A and B ends of the
> connection, such as P-P, P-PE, PE-PE. This strategy is well suited for
> networks where hop count roughly tracks geographic distance.
On 27 October 2017 at 02:05, Pavel Lunin wrote:
> Basically I agree with your concept, but it's worth noting that you assume
> the current traffic as "given" and that links can't be saturated. This
> assumption only holds when there is a great number of short-lived,
> low-bandwidth sessions, like
Well, in fact I meant a different way of setting role-based costs,
rather than treating real link bandwidth as 1/cost.
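A role-based scheme like the one Saku describes can be sketched as a static lookup, where the metric depends only on the roles of the two endpoints and never on link bandwidth (role names and values below are illustrative assumptions, not from the thread):

```python
# Hypothetical sketch of role-based IGP metrics: the cost depends only on
# the roles of the A and B ends of the link, not on its bandwidth.
ROLE_METRIC = {
    frozenset(["P", "P"]): 10,      # core-to-core
    frozenset(["P", "PE"]): 100,    # core-to-edge
    frozenset(["PE", "PE"]): 1000,  # edge-to-edge
}

def link_metric(role_a: str, role_b: str) -> int:
    # frozenset makes the lookup symmetric: P-PE and PE-P are the same link type.
    return ROLE_METRIC[frozenset([role_a, role_b])]

print(link_metric("PE", "P"))  # 100, whether the link is 10G or 100G
```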
>For LAG you should set minimum links to a number which allows you to
>carry traffic you need.
OK, good point. Though I am not sure that all vendors allow doing it this way.
How different does the bandwidth topology end up being from a strict
hop-count topology? How many changes would you need in the hop-count
topology to make them the same?
What I'm trying to say is that if the bandwidth topology works, a
static number would probably work just as well, as the options between
any two nodes are all reasonable.
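This claim can be checked mechanically: compute the shortest paths under bandwidth-derived metrics and under uniform (hop-count) metrics and compare. A minimal sketch with a made-up four-node topology (all numbers are assumptions for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    # Plain Dijkstra; graph maps node -> {neighbor: cost}.
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr, path + [nbr]))
    return None

# Made-up topology: link bandwidths in Gbps.
bandwidth = {("A", "B"): 100, ("B", "D"): 100, ("A", "C"): 10, ("C", "D"): 10}

def build(metric):
    g = {n: {} for ab in bandwidth for n in ab}
    for (a, b), bw in bandwidth.items():
        g[a][b] = g[b][a] = metric(bw)
    return g

bw_graph  = build(lambda bw: max(1, 1000 // bw))  # bandwidth-derived cost
hop_graph = build(lambda bw: 1)                   # uniform cost (strict hop count)

print(shortest_path(bw_graph, "A", "D"))   # ['A', 'B', 'D'] -- prefers the 100G path
print(shortest_path(hop_graph, "A", "D"))  # with uniform costs both 2-hop paths tie
```

If the two SPTs differ only on paths that are all acceptable anyway, the bandwidth-derived metrics are not buying anything over a static number, which is the point being made above.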
Well, for the 99% of us who only do basic stuff with a TE tunnel every now
and then, that works fine. Those with extremely demanding customers and
critical services will need some sort of external controller to manage all
that anyway, and then we are basically replaced by scripts ;-)
I disagree. Either traffic fits or it does not fit on SPT path,
bandwidth is irrelevant.
For LAG you should set minimum links to a number which allows you to
carry traffic you need.
Ideally you have capacity redundancy at the SPT level: if the best path
goes down, you know the redundant path, and you know it fits.
Reference bandwidth might, however, be useful for LAGs, where you may want
to lower the cost of a link if some members go down (though I prefer ECMP
in the core for most cases).
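The interaction between these two ideas, a LAG cost that rises as members fail and a minimum-links threshold that takes the bundle down once it can no longer carry the traffic, can be sketched as follows (assumed numbers; real routers derive the cost from the configured reference bandwidth automatically):

```python
# Hypothetical sketch: a LAG's IGP cost rises as member links fail, and the
# bundle is taken down entirely once it drops below minimum-links.
def lag_cost(reference_bw_gbps, member_bw_gbps, active_members, minimum_links):
    if active_members < minimum_links:
        return None  # bundle down: it can no longer carry the required traffic
    aggregate = member_bw_gbps * active_members
    return max(1, reference_bw_gbps // aggregate)

# 4 x 100G LAG, 1 Tbps reference bandwidth, minimum-links 3:
print(lag_cost(1000, 100, 4, 3))  # 2    (400G aggregate)
print(lag_cost(1000, 100, 3, 3))  # 3    (300G aggregate, cost rises)
print(lag_cost(1000, 100, 2, 3))  # None (below minimum-links, LAG goes down)
```

Setting minimum-links to the capacity you actually need means a degraded bundle fails outright instead of silently becoming a congestion point, which is the argument made earlier in the thread.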
And you can combine the role/latency approach with automatic reference
bandwidth-based cost, if you configure 'bandwidth' p
Hey,
This only matters if you are letting the system assign metrics
automatically based on bandwidth. The whole notion of preferring
interfaces with the most bandwidth is fundamentally broken. If you are
using this design, you might as well assign the same number to every
interface and use strict hop count.
Never underestimate your reference-bandwidth!
We recently set all our routers to 1000g (1 Tbps) and it was not a
trivial task. And now I feel like I'm going to regret that in a couple
years. Even if you work with smaller circuits, having larger numbers
will give you more range to play around.
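The extra range is easy to see with the usual OSPF-style formula, cost = reference bandwidth / interface bandwidth, floored at 1 (the speeds below are illustrative, not from the thread): with a 100G reference, 100G and 400G links collapse to the same cost, while a 1 Tbps reference keeps every speed distinct.

```python
# OSPF-style automatic cost: reference bandwidth divided by link bandwidth,
# with a floor of 1 -- links faster than the reference become indistinguishable.
def ospf_cost(reference_bw_gbps: int, link_bw_gbps: int) -> int:
    return max(1, reference_bw_gbps // link_bw_gbps)

links = [10, 40, 100, 400]  # Gbps

# 100G reference: 100G and 400G links both get cost 1.
print([ospf_cost(100, bw) for bw in links])   # [10, 2, 1, 1]

# 1 Tbps (1000G) reference: every speed keeps a distinct cost.
print([ospf_cost(1000, bw) for bw in links])  # [100, 25, 10, 2]
```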
Luis
Hey Alexander,
> we're currently redesigning our backbone with multiple data centers and
> PoPs, and are looking for a best practice or a recommendation for
> configuring the metrics.
> What we have for now is a full-meshed backbone with underlying ISIS. IBGP
> exports routes without any metric. LSPs are in loose mode.
Hello,
we're currently redesigning our backbone with multiple data centers and PoPs,
and are looking for a best practice or a recommendation for configuring the
metrics. What we have for now is a full-meshed backbone with underlying ISIS.
IBGP exports routes without any metric. LSPs are in loose mode.