Thank you, Linda.  I trimmed the agreements, including the acceptable text from your reply, leaving the two points that could benefit from a little more tuning.

New comments are marked <jmh2></jmh2>

Yours,

Joel

On 8/22/2023 12:12 AM, Linda Dunbar wrote:


    Similarly, section 3.2 looks like it could apply to any operator.
    The reference to the presence or absence of IGPs seems largely
    irrelevant to the question of how partial failures of a facility
    are detected and dealt with.

    [Linda] Two reasons that the site failure described in Section 3.2
    does not apply to other networks:

      * One DC can have many server racks concentrated in a small area,
        all of which can fail from a single event. In contrast, a regular
        network failure at one location only impacts the routers at that
        location, which quickly triggers switching the affected services
        to protection paths.
      * Regular networks run an IGP, which can quickly propagate internal
        failures such as fiber cuts to the edge, whereas many DCs don’t
        run an IGP.

<jmh>Given that even a data center has to deal with internal failures, and that even traditional ISPs have to deal with partitioning failures, I don't think the distinction you are drawing in this section really exists.  If it does, you need to provide stronger justification.  Also, not all public DCs have chosen to use just BGP, although I grant that many have. I don't think you want to argue that the folks who have chosen to use BGP are wrong.  </jmh>
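
<jmh2> To make that concrete, here is a toy sketch, not anything from your draft: the topology, router names, and prefixes below are invented and the model is deliberately oversimplified.  It shows that in a BGP-only Clos fabric (in the spirit of RFC 7938), a failed rack's prefixes are withdrawn hop by hop, so the DC edge still learns about the internal failure without any IGP.

    # Toy model (assumptions only): a BGP-only Clos fabric as a graph.
    # A failed rack's prefix is withdrawn hop by hop until the DC edge
    # has no path left, i.e. the failure reaches the edge without an IGP.

    from collections import defaultdict, deque

    # eBGP sessions, listed in the direction announcements flow upward:
    # leaf -> spines -> edge.  Names and shape are invented.
    sessions = {
        "leaf1": ["spine1", "spine2"],
        "leaf2": ["spine1", "spine2"],
        "spine1": ["edge"],
        "spine2": ["edge"],
    }

    # RIB: router -> prefix -> set of neighbors the prefix was learned from
    rib = defaultdict(lambda: defaultdict(set))

    def announce(origin, prefix):
        """Flood an announcement hop by hop along the eBGP sessions."""
        queue = deque((origin, peer, prefix) for peer in sessions.get(origin, []))
        while queue:
            sender, receiver, pfx = queue.popleft()
            if sender in rib[receiver][pfx]:
                continue                      # already learned from this peer
            rib[receiver][pfx].add(sender)
            for peer in sessions.get(receiver, []):
                queue.append((receiver, peer, pfx))

    def withdraw(origin, prefix):
        """Propagate a withdraw; a router that loses its last path withdraws upstream."""
        queue = deque((origin, peer, prefix) for peer in sessions.get(origin, []))
        while queue:
            sender, receiver, pfx = queue.popleft()
            if sender not in rib[receiver][pfx]:
                continue
            rib[receiver][pfx].discard(sender)
            if not rib[receiver][pfx]:        # no path left: tell the next tier
                for peer in sessions.get(receiver, []):
                    queue.append((receiver, peer, pfx))

    announce("leaf1", "10.1.0.0/24")          # rack behind leaf1
    announce("leaf2", "10.2.0.0/24")          # rack behind leaf2
    print("edge knows:", {p for p, via in rib["edge"].items() if via})

    withdraw("leaf1", "10.1.0.0/24")          # leaf1 / its rack fails
    print("edge knows:", {p for p, via in rib["edge"].items() if via})

That propagation is the same kind of thing an IGP would give you, just carried in withdraws instead of link-state flooding, which is why I don't see the presence or absence of an IGP as the distinguishing factor. </jmh2>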

<ld> Are you referring to Network-Partitioning Failures in Cloud Systems?

Traditional ISPs don’t host end services; they are responsible for transporting packets, so a protection path can reroute the packets. But a Cloud DC site/PoD failure causes all of the hosts (prefixes) there to become unreachable. </ld>

<jmh2> If a DC site fails, the services fail too.  Yes, the DC operator has to reinstantiate them, but that is well outside our scope.  To the degree that they can recover by rerouting to other instances (whether using anycast or some other trick), it looks just like routing around failures in other cases, which BGP and IGPs can do.  I am still not seeing how this justifies any special mechanisms. </jmh2>
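
<jmh2> A similarly rough sketch of the anycast point (site names, AS-path lengths, and the prefix are assumptions, not from the draft): when two sites advertise the same anycast prefix, ordinary best-path selection moves traffic to the surviving site as soon as the failed site's advertisement is withdrawn, with no special mechanism involved.

    # Toy best-path selection for an anycast prefix advertised from two sites.
    # All names and numbers are invented for illustration.
    paths = {
        "dc-east": {"prefix": "198.51.100.0/24", "as_path_len": 2},
        "dc-west": {"prefix": "198.51.100.0/24", "as_path_len": 3},
    }

    def best_path(paths):
        """Pick the advertisement with the shortest AS path, if any remain."""
        if not paths:
            return None
        return min(paths, key=lambda site: paths[site]["as_path_len"])

    print("traffic goes to:", best_path(paths))   # dc-east (shorter AS path)

    del paths["dc-east"]                          # dc-east fails; its route is withdrawn
    print("traffic goes to:", best_path(paths))   # dc-west takes over automatically

That is the behavior BGP already provides today. </jmh2>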

    Figure 1 in section 4.1 could use some clarification.  It is
    unclear if the two TN-1 are the same networks, or are intended to
    be different parts of the tenant network.  And similarly for the
    two TN-2.  It is also unclear why the top portion is even included
    in the figure, since it does not seem to have anything to do with
    the data center connectivity task.  Wouldn't it be simpler to just
    note that the diagram only shows part of the tenant
    infrastructure, and leave out irrelevancies?

    [Linda] The two TN-1 are intended to be different parts of one
    single tenant network.  Is adding the following good enough?

    “TN: Tenant Network. One TN (e.g., TN-1) can be attached to both
    vR1 and vR2.”

<jmh>While that at least makes the meaning of the figure clear, I am still left confused as to why the upper part of the figure is needed.</jmh>

<ld> Mainly to show that one Tenant can have some routes reachable via the Internet GW and others reachable via the Virtual GW (IPsec), and that routes belonging to one Tenant can be connected by vRouters. </ld>

<jmh2>You may want to think about ways to better explain your point, since I missed it. </jmh2>