
From: [email protected] <[email protected]>
Date: Tuesday, April 4, 2023 at 4:19 AM
To: Colby Barth <[email protected]>, Robert Raszuk <[email protected]>, Tony Li <[email protected]>
Cc: RTGWG <[email protected]>
Subject: Re: Re: TTE

Hi, Colby


<cb> You are correct that several nodes could activate TTE at the same time.  I can't remember if that question was asked during the preso; I think it was … we will address this in a revision, as we have given it some thought.


    Can we make an assumption that TTE only happens on the ingress nodes, so that it only takes place once?


    Or, after TTE happens on any node, we mark the packets and make sure it will not take place again.

<cb> Yes, this is what we had thought; i.e., a packet should only be TTE'd once.

--cb
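
A minimal sketch of that "TTE'd at most once" rule, assuming a hypothetical per-packet mark (e.g. a reserved codepoint or special label); the field and structure names below are illustrative, not from the draft:

def forward(packet, fib_entry, tte_active):
    """Pick a next hop, applying TTE at most once per packet."""
    # "tte_marked" is a hypothetical mark; a real encoding would need
    # a bit in the header or a special label on the stack.
    if tte_active and not packet.get("tte_marked", False):
        packet["tte_marked"] = True          # downstream nodes skip TTE
        return fib_entry["backup_next_hop"]  # divert onto the TTE tunnel
    return fib_entry["primary_next_hop"]     # normal forwarding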


Best Regards
Zongpeng Du

________________________________
[email protected]<mailto:[email protected]> & 
[email protected]

From: Colby Barth <[email protected]>
Date: 2023-04-03 06:38
To: Robert Raszuk <[email protected]>; Tony Li <[email protected]>
CC: RTGWG <[email protected]>
Subject: Re: TTE
Robert-

Tony already responded to several points, but a few more inline <cb>

From: Robert Raszuk <[email protected]>
Date: Sunday, April 2, 2023 at 5:31 PM
To: Tony Li <[email protected]>
Cc: Colby Barth <[email protected]>, RTGWG <[email protected]>
Subject: Re: TTE

Hi Tony,

I think you can agree that a link/node failure will be detected by directly attached PLRs only. My point is that congestion, on the other hand, is caused by traffic flow(s) which are not sourced in the P node(s) in the middle of the network. They usually enter the network at the edge (ingress) and go to the other side (egress).
Agreed.  How is this relevant?


It does seem relevant, as in such a case your trigger will likely fire for the same overload of traffic on more than one node in the path.

In fact, if your FRR is just link protection, the packets will take a bypass detour and land on the next node, so you need this bypass activated there as well.

And on all links in the path if, say, interface bandwidth is equal.

<cb> You seem to be putting a microscope on one FRR mechanism only, while ignoring that in section 3.3 we discuss many and allude to possibly others.  We chose backup paths for a few reasons:


*  They are loop free (packets will not come back to the PLR; more on this in a moment)

*  No new mechanism is required to instantiate them

*  We have them for nearly all transport technologies

*  If a link that they are protecting fails, the network is likely to be able to absorb the load

<cb> As Himanshu suggested, there may be better methods to compute a TTE tunnel that take the current load of the path into account, if the congested node has such visibility or if this computation is off-loaded.  The further the computation is from the congested node, though, the less likely we are to provide fast congestion relief.
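
For illustration only, one load-aware variant could be a widest-path computation over residual link capacity, assuming the computing node has per-link utilization visibility; the graph format and names below are assumptions, not the draft's mechanism:

import heapq

def widest_backup_path(graph, src, dst):
    """graph: {node: [(neighbor, residual_capacity_bps), ...]}.
    Returns the path maximizing the minimum residual capacity."""
    best = {src: float("inf")}
    heap = [(-best[src], src, [src])]      # max-heap via negated widths
    while heap:
        neg_width, node, path = heapq.heappop(heap)
        if node == dst:
            return path, -neg_width        # path and its bottleneck capacity
        for nbr, residual in graph.get(node, []):
            width = min(-neg_width, residual)
            if width > best.get(nbr, 0):
                best[nbr] = width
                heapq.heappush(heap, (-width, nbr, path + [nbr]))
    return None, 0.0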

<cb> You are correct that several nodes could activate TTE at the same time.  I can't remember if that question was asked during the preso; I think it was … we will address this in a revision, as we have given it some thought.

Do you plan to support PE-CE link congestion by use of PE-CE protection in the case of multihomed customer sites?
We support this on all links.

<cb> We support it on any link, but it will only work [be activated] if a TTE tunnel exists.  The current implementation leverages TI-LFA backup routes and is thus bounded by the IGP domain.

Very interesting...

You know what this means for PE-CE ... you need to take traffic back to your core, carry it to another PE, and then see if it fits the PE-CE links.

And what if it does not?

And do you swap a service label (for example, L3VPN)?

As I've tried to explain: TTE cannot create bandwidth. In a case where the network is uniformly congested, TTE cannot address the issue. That's not its use case. No mechanism is going to be able to fix that: if the load exceeds the capacity of the network, even an oracle is not going to provide path placement that avoids loss.

That's obvious.

But my point is that, when doing this automation, you just do not know whether there is sufficient capacity on the bypass path.

<cb> This seems to be a circular argument at this point.  As I said in the preso, and as Tony has stated multiple times, if a network employs protection (FRR, LFA, TI-LFA, …) we assume sufficient capacity is available for the backups.  If it is not, the operator should not enable TTE … and they probably shouldn't use protection either.

<cb> It's also worth noting that some implementations can and do account for bandwidth along the protected path during bypass placement.

Sorry, I tend to be a little bit haphazard in my terminology.  TTE activates specific prefixes and labels.  Upon activation of the backup path, it forms (or adds to) an ECMP group for the prefix or label.  This ECMP group will direct some flows onto the backup path.

Strictly speaking, we should be talking about prefix (and label) selection.
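
A minimal sketch of the behavior described above, with illustrative structure names (not an actual implementation): on activation, the backup next hop joins the prefix's ECMP group, and the per-flow hash then steers some flows onto it.

ecmp_groups = {}  # prefix -> list of next hops

def activate_tte(prefix, primary_next_hops, backup_next_hop):
    """On TTE activation, add the backup path to the prefix's ECMP group."""
    group = ecmp_groups.setdefault(prefix, list(primary_next_hops))
    if backup_next_hop not in group:
        group.append(backup_next_hop)  # some flows now hash onto the backup

def select_next_hop(prefix, flow_hash):
    """Per-flow hashing keeps all packets of one flow on one member."""
    group = ecmp_groups[prefix]
    return group[flow_hash % len(group)]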


Then my feedback is that this is way too coarse for the vast majority of real networks. The minimum granularity here is per egress PE any time encapsulation is in use.

<cb> noted.

--Colby

Conceptually, this can equally be done on MPLS LSPs.  Entropy labels are recommended.
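
A rough sketch of why entropy labels help here, per RFC 6790: the ingress derives an entropy label (EL) from the flow, and transit nodes can hash on the label stack without parsing beneath the tunnel. The function names and values below are illustrative only:

import zlib

def push_entropy_label(flow_tuple, label_stack):
    """Ingress computes a 20-bit EL from the flow and pushes ELI + EL."""
    el = zlib.crc32(repr(flow_tuple).encode()) % (1 << 20)
    return label_stack + [7, el]  # 7 = Entropy Label Indicator (ELI)

def transit_hash(label_stack, num_paths):
    """Transit node load-balances on the label stack alone."""
    return zlib.crc32(bytes(l % 256 for l in label_stack)) % num_paths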

Entropy on the FRR path?


If the only traffic on the link is elephant flows without any kind of entropy or distribution, then TTE is not recommended. The prefix/label selection algorithm at this point is random and relies on the Law of Large Numbers to select prefixes/labels. Pragmatically, that suggests a link should be carrying at least 50 flows with a Gaussian distribution of bandwidth demands.
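
A rough illustration of that Law-of-Large-Numbers argument (the helper and numbers below are illustrative, not from the draft): with ~50 flows whose demands are roughly Gaussian, randomly selecting half of the prefixes moves close to half of the load, while a handful of elephant flows makes the outcome erratic.

import random

def offloaded_fraction(num_flows=50, mean=1.0, stddev=0.3, select=0.5):
    """Fraction of total load moved when `select` of flows are chosen at random."""
    demands = [max(0.0, random.gauss(mean, stddev)) for _ in range(num_flows)]
    chosen = random.sample(range(num_flows), int(num_flows * select))
    return sum(demands[i] for i in chosen) / sum(demands)

# With num_flows=50 this typically lands near 0.5; with num_flows=3
# (elephant flows), the result swings wildly between runs.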

Again, if you were doing this for plain Internet IP forwarding, I would perhaps give it some chance of helping. But for all tunneled networks I remain quite sceptical here - sorry.

Many thx,
Robert



