Les,

Please see [GS2] inline.

Guillaume –

Please see LES2: inline.

*From:* guillaume.solig...@orange.com
*Sent:* Thursday, July 29, 2021 10:34 AM
*To:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>; bruno.decra...@orange.com; lsr-cha...@ietf.org
*Cc:* lsr@ietf.org
*Subject:* Re: [Lsr] draft-decraene-lsr-isis-flooding-speed & IETF 111

Les,

Thanks for taking the time to answer.

On 29/07/2021 at 18:26, Les Ginsberg (ginsberg) wrote:

    Guillaume –

    Thanx for the thoughtful response.

    Responses inline.

    *From:* guillaume.solig...@orange.com
    *Sent:* Thursday, July 29, 2021 3:20 AM
    *To:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>; bruno.decra...@orange.com; lsr-cha...@ietf.org
    *Cc:* lsr@ietf.org
    *Subject:* Re: [Lsr] draft-decraene-lsr-isis-flooding-speed & IETF 111

    Hello Les,

    Jumping in since I have some insight as well.

    On 29/07/2021 at 08:51, Les Ginsberg (ginsberg) wrote:

        Bruno –

        Resuming this thread…

        I assure you that we have the same goals.

        We are not yet in agreement on the best way to achieve those
        goals.

    Your slides indeed show that we have the same goal, and we agree
    on one way to deal with the matter (congestion control).


        Looking forward to the WG discussion on Friday.

        To get some discussion going in advance – if you have time to
        do so (which I know is challenging especially during IETF
        week) – I call your attention to slides 15-18 in the
        presentation we have prepared:

        
https://datatracker.ietf.org/meeting/111/materials/slides-111-lsr-21-isis-flood-scale-00

        I do not intend to present these slides during my portion of
        the presentation time – but I included them as potential
        points of discussion during the discussion portion of the
        meeting (though the WG chairs will decide how best to direct
        that portion of the time).

        I call your attention specifically to Slide 16, which
        discusses the functional elements in the input path typically
        seen on router platforms.

        Each of these elements has controls associated with it – from
        queue sizes to punt rates, etc. that play a significant role
        in delivery of incoming IS-IS PDUs to the protocol running in
        the control plane.

        Your slides –
        
https://datatracker.ietf.org/meeting/111/materials/slides-111-lsr-22-flow-congestion-control-00


         focus only on the direct input queue to IS-IS in the control
        plane. I do not see how the state of the other staging
        elements on the path from PDU ingress to IS-IS implementation
        reading the PDUs in the control plane is known and/or used in
        determining the flow control state signaled to the
        transmitting neighbor. If, for example, PDUs were being
        dropped on ingress and never made it to IS-IS in the control
        plane, how would your algorithm be aware of this and react to
        this?

        In my experience, the state of these lower level staging
        elements plays a significant role.

        I can imagine that some form of signaling from the dataplane
        to the control plane about the state of these lower level
        elements could be possible – and that signaling could be used
        as input to a receiver based control algorithm. However, given
        the differences as to how individual platforms implement these
        lower level elements,  I see it as challenging to get each
        platform to provide the necessary signaling in real time and
        to normalize it in a way so that IS-IS flow control logic in
        the control plane can use the data in a platform independent
        way. I believe this represents a significant bar to successful
        deployment of receiver-based flow control.

    That's one point we want to clarify: the flow control algorithm
    does not focus on the "IO path" between the line card and the
    control plane. There is no magic there; it does not directly deal
    with congestion on the IO path. It happens to have some nice
    properties even in case of drops before reaching the control
    plane, but that is arguably not sufficient. That's why we also
    propose a congestion control algorithm. While it is not necessary
    to standardize it, since it is only local, it helps to have a
    baseline if one does not want to spend time re-developing their
    own algorithm.

    Our slides also show the result of our congestion control
    algorithm, which is the part that deals with IO path losses. Very
    much like your algorithm, ours sees this IO path as a black box.

    */[LES:] This is an aspect on which I need further clarification./*

    */From the POV of the control plane, in the absence of enhanced
    signaling (which I believe is problematic to implement – and you
    seem to agree) you simply have no knowledge as to whether incoming
    PDUs have been dropped or are simply delayed./*

[GS] From the receiver side, absolutely.

*/[LES2:] This is also true from the TX side. At a given moment all the TX side knows is that an ACK for a given LSP has yet to be received. /*

[GS2] I misread your statement, my bad. A corollary is that for congestion control the TX side has to make a guess with that information to decide to increase or reduce speed. This is what I call "Congestion Signal".




    */On Slide 6 you say: “Sender will never send more than RWin
    unacknowledged LSPs”. On Slide 7 you describe how to choose RWIN.
    But all of these methods are fixed in size – not adaptive. And
    since the input queue of packets to IS-IS is not “per interface”,
    the number you choose for RWIN seems to have nothing at all to do
    with current state/neighbor./*

[GS] If the input queue is the last buffer before processing (what I refer to as the socket buffer), then we indeed assumed we have one socket per neighbor. If that is not the case and the input queue is shared among all neighbors, you can split your available memory between neighbors (in that case, the advertised value can indeed change, but not very often). The idea is that each sender knows the amount of space it can use on the receiver side without packets being lost.

*/[LES2:] Implementations that I have worked on (and/or are familiar with) do NOT have a separate input buffer/interface – so it isn’t safe to assume that is the case. If you do not do pre-processing-inspection to detect how many PDUs came from a particular neighbor (and I am not suggesting that you should) then you cannot adjust your window/neighbor – you are left with one-size-for-all. This limitation does not apply to Tx based solution since standard operation of the IS-IS Update process requires us to track unacknowledged LSPs on a per neighbor basis./*

[GS2] Thanks for the insight. The answer still holds. The receiver knows how much space it has for PDUs, it can make an informed decision on what memory it can guarantee to hold the LSPs of every sender. It does not need to be smart about how to allocate for every neighbor; splitting equally is enough.

Maybe with an example we can agree on this point. Let's assume you want a buffer of 10 LSPs for 100 neighbors. You need 10 * 100 * 1500 (MTU) * 2 (to leave room for other PDUs) = 3 MB. Now if a platform has 3 MB for its PDU input queue, it can safely advertise 10 to each of its 100 neighbors. Do you think this could be an issue with the implementations you have in mind? I recognize that I lack some real-world values here.

As we show, with faster PSNP sending, for an RTT of 10 ms (as an example), this allows reaching 10 LSPs/10 ms = 1000 LSPs/s.
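The arithmetic in this example can be written out as a minimal sketch (the function names, the MTU value, and the headroom factor are taken from the example above, not from any draft):

```python
# Sketch of the buffer-sizing example above: guarantee `rwin` LSPs of
# space per neighbor, with worst-case MTU-sized LSPs and a 2x headroom
# factor for other PDUs. Values mirror the example, not any draft text.

MTU = 1500    # bytes, worst-case LSP size in the example
HEADROOM = 2  # leave room for other PDUs

def required_buffer_bytes(rwin: int, neighbors: int) -> int:
    """Input-queue memory needed to advertise `rwin` to every neighbor."""
    return rwin * neighbors * MTU * HEADROOM

def max_lsp_rate(rwin: int, rtt_ms: float) -> float:
    """Throughput ceiling when at most `rwin` LSPs are in flight per RTT."""
    return rwin * 1000.0 / rtt_ms

print(required_buffer_bytes(10, 100))  # 3000000 bytes, i.e. ~3 MB
print(max_lsp_rate(10, 10.0))          # 1000.0 LSPs/s for RWin=10, RTT=10 ms
```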




    */You then go on to discuss RTT (which seems to be a configured or
    pre-determined value??) and LPP (LSPs acked/PSNP) – which is only
    meaningful for the LSPs that have actually made their way to IS-IS
    – no way to account for those that have been dropped or delayed./*

[GS] RTT is useful for analysis purposes, not for the RWin algorithm itself: if you bound the number of unacked LSPs, you naturally bound your bandwidth.

The dropped LSPs are actually taken into account.

  * Implicitly for RWin: since dropped LSPs are unacked LSPs from the
    sender's POV, they limit the number of sent LSPs afterwards.
  * Explicitly for congestion control, since they will arrive late or
    never, and a congestion signal will get triggered on the sender
    side.

*/[LES2:] Sorry, what “signal” is Tx sending?? I assume there is no such signal. Both solutions leverage the fact that Tx side always knows how many unacknowledged LSPs there are per neighbor./*

[GS2] Sorry for the aliasing of names. For me, Congestion signal is what triggers the decision to decrease the sending rate: if your algorithm is the one in the previous version of your draft, your congestion signal is when a threshold of unacked LSPs that you consider dangerous is reached. In our case, for congestion control, this congestion signal is the "unusual" lateness of an ACK.
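To make the terminology concrete, here is a toy sketch of the two "congestion signals" being contrasted: a threshold on unacked LSPs versus an unusually late ACK. The function names, threshold, and lateness factor are invented for illustration and appear in neither draft.

```python
# Toy illustration of the two "congestion signals" discussed above.
# Names, the threshold, and the lateness factor are invented for this
# example and are not taken from either draft.

def signal_by_threshold(unacked_lsps: int, threshold: int) -> bool:
    """Tx-side signal: too many LSPs are outstanding."""
    return unacked_lsps >= threshold

def signal_by_late_ack(ack_delay_s: float, expected_delay_s: float,
                       factor: float = 2.0) -> bool:
    """Signal when an ACK arrives much later than the running estimate."""
    return ack_delay_s > factor * expected_delay_s

# A burst of unacked LSPs trips the first signal; a slow ACK the second.
print(signal_by_threshold(12, 10))     # True
print(signal_by_late_ack(0.05, 0.01))  # True: 50 ms >> 2 * 10 ms
```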

    */So it is difficult for me to understand how you are actually
    accounting for the current state of the I/O path in real time on a
    per neighbor basis./*

        This is one major reason why we prefer a Tx based flow control
        mechanism. Tx based flow control simply focuses on the
        relative rates of LSP transmission and reception of
        Acknowledgments. Whether slower acks are due to PDU drops at
        ingress, slow punt path operation, lack of CPU time for IS-IS
        to process the incoming packets, etc. matters not. Whatever
        the reason, if acks are not keeping pace with transmission we
        know we have to slow down.

    As you can see in the slides & draft, we also have a congestion
    control algorithm and we show the results. This congestion control
    algorithm only works with the ACKs (like yours), and gives results
    in the case of "IO congestion" (like yours).

    */[LES:] As best I understand it, ACKs in your case are from the
    receiver’s POV – which makes it dependent on what LSPs the
    receiver has actually seen./*

    */In the TX based algorithm, we don’t care/know whether the
    receiver has seen anything – we just know whether we have got
    timely ACKs or not. And since it is possible that drops/delays
    could occur on the Tx side as well as the Rx side, this approach
    seems much more robust./*

[GS] I need some clarification here. I am indeed talking about Sender to Receiver LSPs and Receiver to Sender PSNPs.

If you talk about timely ACKs, they also come from the Receiver, right? So it needs to see the LSPs as well. I don't think our approaches are different on that particular point. The difference is that you control bandwidth, while we limit the number of unacked LSPs.

*/[LES:] The Tx solution looks at the rate at which we are sending LSPs and the rate at which acknowledgments are being received. If the two do not match then the Tx side adjusts the Tx Rate (up or down) as appropriate./*
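That control loop can be sketched roughly as follows; the adjustment factors and rate bounds here are assumptions for illustration, not values from the draft:

```python
# Hedged sketch of Tx-based adaptation: compare LSPs sent with ACKs
# received over a measurement interval and adjust the transmit rate.
# The multiplicative factors and rate bounds are invented for the example.

def adjust_tx_rate(tx_rate: float, sent: int, acked: int,
                   up: float = 1.1, down: float = 0.5,
                   min_rate: float = 10.0, max_rate: float = 5000.0) -> float:
    """Return the new LSP transmit rate (LSPs/s) for the next interval."""
    if acked >= sent:
        new_rate = tx_rate * up    # ACKs keeping pace: cautiously speed up
    else:
        new_rate = tx_rate * down  # ACKs lagging: slow down aggressively
    return max(min_rate, min(max_rate, new_rate))

print(adjust_tx_rate(100.0, sent=50, acked=50))  # ~110.0: speed up
print(adjust_tx_rate(100.0, sent=50, acked=30))  # 50.0: back off
```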

[GS2] Ok. Thanks for the clarification.

    Flow and congestion control are not mutually exclusive; in fact it
    is almost certain it will be necessary to have both at some point.
    The main benefit of limiting the number of unacked packets
    in flight is to avoid losing packets in case of CPU contention. As
    this should be a common situation (in part for reasons in your
    slide 15), flow control as we propose seems very relevant.

    For example, in your Slowing Down scenario, if the slowing down
    occurs at the Control Plane, a congestion control algorithm will
    lose packets. If on top of your algorithm, you limit the number of
    unacked LSPs (flow control), these losses cannot occur anymore as
    the sender will stop sending before overflowing the socket buffer.
    It's an (almost) free win.

    */[LES:] In both approaches it is the control plane that is
    adapting. And Tx based approach does react to the number of
    unacked packets./*

[GS] Your approach does track but does not limit the number of unacked packets, thus allowing for the losses in the above scenario. If you add a maximum number of unacked LSPs based on the knowledge of the receiver's input queue, you avoid losing packets in the above scenario.

*/[LES2:] There is some truth here – but I don’t think this is the full picture./*

*/If the lack of acknowledgment is due to packet drops (e.g., perhaps a short burst of noise on the media corrupting some packets), your solution will assume that the packets are somewhere in the receiver's processing queue and stop sending LSPs altogether until some timer fires (the retransmit timer???). Our solution will slow down, but continue to send at a slower rate. It is possible under a given set of circumstances that our solution may trigger more retransmissions due to buffer overflow – but this will not persist since we will continue to slow down – and do so aggressively – until the receiver catches up. It is also possible under the right set of circumstances that your solution will introduce unnecessarily long delays./*

*/Both scenarios may be seen in the real world./*

[GS2] It is true that losing packets will throttle the sender, inciting it to reduce its sending rate. This is what we show on slide 19. It is even a good property in some cases: if a bottleneck in the IO path is reached, lost LSPs will reduce the sender's rate. The timer is indeed the retransmit timer.

However, for it to stop sending completely, it would need to drop RWin LSPs. In the event that a whole window is lost (which is far less likely than losing just some LSPs), there is a fallback minimum rate to ensure that we do not fall below what was previously achieved.




    In addition, slide 17 talks about signalling in real time; I am
    unsure of your point. As the socket size is static (or at least
    long-lived), there is no need to change the advertised value in
    real time. Maybe the previous explanations helped clarify the
    proposed changes. I don't really understand the point of slide 18
    either. I would be interested in more details.

    */[LES:] The point of Slide 17 is that if we want the algorithm to
    react to transient changes in the receiver’s capability (e.g., due
    to bursts unrelated to IS-IS LSP activity), to be effective we
    have to do so quickly. And since this coincides with the high
    input of IS-IS PDUs, the likelihood of delays in processing the
    PDUs used to send the signal is higher than normal. Look at our
    Slide #8 Row #2 for an example of the consequences of not adapting
    quickly. /*

    */If, as you say, there is no need to signal in real time, this
    tells me that you are simply advertising conservative values not
    based on the actual real time performance, in which case the
    issues highlighted on our Slide #15 are relevant. You seem to be
    acknowledging you have no intent to adapt to transient changes –
    you are simply going to limit things to something you have
    determined via offline evaluation should be safe. But such values
    have to account for the “worst case” in terms of # of neighbors, #
    of LSPs in the network, …all the things listed on our Slide #15 –
    so either they are overly conservative, or the customer somehow
    has to determine what value to configure based on the network. So
    you aren’t actually proposing anything adaptive at all it seems. ???/*

[GS] For RWin, even though the advertised value (in the proposed new TLV) is static, the algorithm still reacts to ACKs, and will naturally pace the sending to the Receiver ACKs.

Let's take a simple case: a receiver advertises an RWin of 10 and, to simplify, sends one PSNP as soon as it has processed an LSP. The sender uses only RWin (for now). It has a large number of LSPs to send. It sends its first 10 LSPs, then stops.

The receiver processes the first LSP, sends a PSNP, then the second LSP etc. The PSNPs are paced to the processing rate of the receiver control plane (since it sends them as soon as it can).

The sender then sees the first PSNP coming; it knows LSP #1 has been acked: it can safely assume that this LSP is no longer inside the input queue, and it sends another one. The same goes for the following LSPs, and since the PSNPs were paced by the receiver control plane, the sender automatically adapts to this rate.

If the receiver is busy at some point, it won't send PSNPs anymore, thus halting the sending of LSPs as well. Exactly what is needed for the cases you describe.

This is why this algorithm performs particularly well under CPU contention. This is also why this algorithm is very dependent on sending PSNPs as fast as possible.
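This self-pacing can be seen in a toy simulation (entirely illustrative; one "step" stands in for one receiver processing interval, and the names are invented): after the initial window of RWin LSPs, the send rate collapses to exactly the receiver's ACK rate.

```python
# Toy simulation of RWin self-pacing: the sender keeps at most RWIN
# unacked LSPs; the receiver processes and acks one LSP per step.
# Purely illustrative; timing and units are invented.

RWIN = 10

def simulate(total_lsps: int, steps: int) -> list:
    sent = acked = 0
    sends_per_step = []
    for _ in range(steps):
        # Receiver processes (and acks) one pending LSP per step.
        if acked < sent:
            acked += 1
        # Sender fills the window: never more than RWIN unacked.
        can_send = min(RWIN - (sent - acked), total_lsps - sent)
        sent += can_send
        sends_per_step.append(can_send)
    return sends_per_step

history = simulate(total_lsps=50, steps=15)
print(history)  # [10, 1, 1, ...]: initial burst, then paced 1 LSP/step
```

If the receiver stalls (stops acking), `can_send` stays at 0 and the sender halts, which is the busy-receiver behavior described above.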

*/[LES2:] We both agree that sending PSNPs quickly is an aid to faster flooding – this has been demonstrated. But, we also would like to safely do faster flooding even with neighbors who have not been upgraded to send PSNPs more quickly. And I believe (validated by testing) that implementations today can support significantly faster flooding with no changes whatsoever. I therefore would like a flow control/congestion control solution that works both with upgraded and non-upgraded neighbors./*

[GS2] +1 for faster PSNPs. I am in no position at all to prevent you from deploying your algorithm (especially since it's local).

IMO, not having the additional guarantee forces you to be more cautious. Variations in CPU availability will either cause losses (cf. the Slowing Down scenario) or a slower sending rate from being overly cautious. I think it is safer to have as many safeguards as possible, but again it is not my decision to make.

You can also include the advertised window in your algorithm _when available_ to benefit from the additional guarantees it provides. If a router advertises this TLV, it should be safe to assume it also supports faster sending of PSNPs.

One way to see it with your testbed would be to implement this limit on the number of unacked LSPs, ensure a buffer is present on Rx, and run your scenarios again.

(_The signal is not in our TLV; it comes implicitly in the PSNPs._)

*/[LES2:] AFAICT you are not adjusting the window advertised to the Tx side dynamically. It is simply set to some conservative value at startup. In which case all of the concerns mentioned in our slide #15 are relevant./*

[GS2] As my example (vaguely) and our experiments show, each of your issues that can be classified as CPU contention is dealt with thanks to the advertised window size. The others are tackled by congestion control (locally, on the sender side).

Very happy to have your insight. Looking forward to tomorrow's session.

Guillaume


*/   Les/*

It does not work well when losses occur elsewhere (i.e., not due to CPU contention). In that case, there is absolutely a need to control the rate, either explicitly (as you do) or implicitly (as we do, by adapting the number of unacked LSPs we allow). But I don't think the congestion control algorithm itself should be the focus of this discussion.

    */Slide 18 is highlighting the differences between operation of a
    TCP session and operation of IS-IS Update process. One of the
    arguments used for the Rx based approach has been “this is the way
    TCP has done it for years”. We are just highlighting why that
    isn’t a very good analogy. To be fair, I note you have
    acknowledged some (but not all) of this in your presentation as
    well e.g., on your Slide 25 you acknowledge that “packet
    reordering” isn’t applicable to IS-IS./*

[GS] It's true that the reasoning is not that simple, but I stand by the result. There are still lots of things to take from TCP, it being the /de facto/ playground of congestion control algorithms IRL...

    Thanks for your remarks,

    */[LES:] Thank you for your response. I hope we can continue this
    dialogue./*

[GS] Agreed!

Guillaume

    */   Les/*

    Guillaume

        Please comment on these points as you have time.

        Thanx.

           Les

        *From:* bruno.decra...@orange.com
        *Sent:* Monday, July 12, 2021 1:48 AM
        *To:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>; lsr-cha...@ietf.org
        *Cc:* lsr@ietf.org
        *Subject:* RE: draft-decraene-lsr-isis-flooding-speed & IETF 111

        Les,

        Faster flooding may be achieved without protocol extension.
        But if we are changing flooding, it would be reasonable to
        try to make it good (rather than just faster than today).

        In particular some goals are:

        - faster flooding when the receiver has free cycles

        - slower flooding when the receiver is busy/congested (either
        by flooding, or any CPU computation including not coming from
        IS-IS)

        - avoiding/minimizing the parameters that the network operator
        is asked to tune

        - avoiding/minimizing the loss of LSPs

        - robust to a wide variety of conditions (good ones and bad ones)

        You seem to agree on changing the flooding behaviour on both
        the sender and the receiver so that they can better cooperate.
        That's a protocol extension to me (and IMHO much bigger than
        the sending of info in one TLV).

        Bruno

        *From:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>
        *Sent:* Friday, July 9, 2021 7:49 PM
        *To:* DECRAENE Bruno INNOV/NET <bruno.decra...@orange.com>; lsr-cha...@ietf.org
        *Cc:* lsr@ietf.org
        *Subject:* RE: draft-decraene-lsr-isis-flooding-speed & IETF 111

        Bruno –

        Neither of us has presented anything new of substance in the
        last few IETFs.

        There were two presentations recently - one by Arista and one
        by Huawei – each of which simply demonstrated that it is
        possible to flood faster - and that in order to do so it is
        helpful to send acks faster - both points on which there is no
        disagreement.

        To have a productive discussion we both need to present new
        data - which is why having the discussion as part of the
        meeting at which the presentations occur makes sense to me.

        We removed the example(sic) algorithm from our draft because
        it was only an example, is not normative, and we did not want
        the discussion of our approach to be bogged down in a debate
        on the specifics of the example algorithm.

        Based on your response, seems like we were right to remove the
        algorithm. 😊

        Regarding WG adoption, one of the premises of our draft is
        that faster flooding can be achieved w/o protocol extensions
        and so there is no need for a draft at all. I am sure we do
        not yet agree on this - but I do hope that makes clear why
        adopting either draft at this time is premature.

           Les

        *From:* bruno.decra...@orange.com
        *Sent:* Friday, July 9, 2021 9:15 AM
        *To:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>; lsr-cha...@ietf.org
        *Cc:* lsr@ietf.org
        *Subject:* RE: draft-decraene-lsr-isis-flooding-speed & IETF 111

        Les,

        > *From:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>

        […]

        > I also think it would be prudent to delay WG adoption

        For how long exactly would it be “prudent to delay WG
        adoption”? (in addition to the past two years)

        Until what condition?

        It’s been two years now since
        draft-decraene-lsr-isis-flooding-speed brought this subject to
        the WG (and even more in private discussions).

        Two years during which we have presented our work to the WG,
        discussed your comments/objections, been asked to provide more
        data and consequently worked harder to implement it and obtain
        evaluation results.

        What’s precisely the bar before a call for WG adoption be
        initiated?

        We have data proving the benefit, so after those two years,
        what are your clear and precise _/technical/_ objections to
        the mechanism proposed in draft-decraene-lsr-isis-flooding-speed?

        Coming back to draft-decraene-lsr-isis-flooding-speed,

        we have a specification and the flow control part is stable.

        We have an implementation and many evaluations demonstrating
        that flow control alone is very effective in typical conditions.

        we have an additional congestion control part which is still
        being refined, but this part is a local behavior which doesn't
        necessarily need to be standardized and which is mostly useful
        when the receiver of the LSPs is not CPU-bound, which does not
        seem to be the case from what we have seen (in most cases,
        receivers are CPU-bound; in fact, we needed to artificially
        create I/O congestion in order to evaluate the congestion
        control part).

        Regarding your draft: in the latest version, published
        yesterday, you have removed the specification of your proposed
        congestion control algorithm… Based on this, I don't see how
        technical discussion and comparison of the specifications can
        be achieved.

        You have an implementation. This is good to know and we are
        ready to evaluate it under the same conditions as our
        implementation, so that we can compare the data. Could you
        please send us an image? We may be able to have data for the
        interim.

        --Bruno

        *From:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>
        *Sent:* Friday, July 9, 2021 5:00 PM
        *To:* DECRAENE Bruno INNOV/NET <bruno.decra...@orange.com>; lsr-cha...@ietf.org; lsr@ietf.org
        *Subject:* RE: draft-decraene-lsr-isis-flooding-speed & IETF 111

        As is well known, there are two drafts in this problem space:

        https://datatracker.ietf.org/doc/draft-decraene-lsr-isis-flooding-speed/

        and

        https://datatracker.ietf.org/doc/draft-ginsberg-lsr-isis-flooding-scale/

        Regarding the latter, we also have a working implementation
        and we also have requested a presentation slot for IETF 111
        LSR WG meeting.

        I agree with Bruno that the time available in the WG meeting
        will likely be inadequate to present full updates for both
        drafts. In addition, I think it is important that the WG have
        an opportunity to discuss publicly, in an interactive way, the
        merits of each proposal. The likelihood that time will be
        available in the scheduled WG meeting for that discussion as
        well seems low.

        I therefore join w Bruno in suggesting that an interim meeting
        dedicated to the flooding speed topic be organized.

        Given the short time available before IETF 111, I would
        suggest that we look at scheduling an interim meeting after
        IETF 111 - but I leave it to the WG chairs to decide when to
        schedule this.

        I also think it would be prudent to delay WG adoption calls
        for either draft until after such an interim meeting is held.
        In that way the WG can make a more informed decision.

           Les

        *From:* Lsr <lsr-boun...@ietf.org> *On Behalf Of* bruno.decra...@orange.com
        *Sent:* Friday, July 9, 2021 2:01 AM
        *To:* lsr-cha...@ietf.org; lsr@ietf.org
        *Subject:* [Lsr] draft-decraene-lsr-isis-flooding-speed & IETF 111

        Hi chairs, WG,

        Over the last two years, we have presented, and the WG has
        discussed, draft-decraene-lsr-isis-flooding-speed at IETF 105
        and “107”.

        IETF 105:
        https://datatracker.ietf.org/meeting/105/proceedings#lsr
        Note that the presentation is in the first slot/video but a
        large part of the discussion is in the second one.

        IETF 107/interim:
        https://datatracker.ietf.org/meeting/interim-2020-lsr-02/materials/agenda-interim-2020-lsr-02-lsr-01-07.html

        The goal is to improve flooding performance and robustness, to
        make it both faster when the receiver has free cycles and
        slower when the receiver is congested.

        In addition to the technical discussions, one piece of
        feedback was that an implementation and tests/evaluations
        would be good in order to evaluate the proposal.

        We are reporting that we have an implementation of [1] based
        on the open source Free Range Routing implementation.

        We are now ready to report the evaluation to the WG. We have a
        lot of data so ideally would need around an hour in order to
        cover the whole picture.

        We have requested a slot for the IETF 111 LSR meeting. If the
        IETF 111 slot is short, we'd like to request an interim
        meeting. In order to keep the context, the sooner/closer to
        IETF 111, the better.

        Since we have an implementation, we have requested a code
        point, in order to avoid squatting on one. This is currently
        under review by the designated experts.

        Finally, given the two years' work, the specification, the
        implementation and the extensive evaluation, we'd like to ask
        for WG adoption.

        Thanks,

        Regards,

        --Bruno

        [1]
        https://datatracker.ietf.org/doc/html/draft-decraene-lsr-isis-flooding-speed

        
_________________________________________________________________________________________________________________________

        Ce message et ses pieces jointes peuvent contenir des
        informations confidentielles ou privilegiees et ne doivent donc

        pas etre diffuses, exploites ou copies sans autorisation. Si
        vous avez recu ce message par erreur, veuillez le signaler

        a l'expediteur et le detruire ainsi que les pieces jointes.
        Les messages electroniques etant susceptibles d'alteration,

        Orange decline toute responsabilite si ce message a ete
        altere, deforme ou falsifie. Merci.

        This message and its attachments may contain confidential or
        privileged information that may be protected by law;

        they should not be distributed, used or copied without
        authorisation.

        If you have received this email in error, please notify the
        sender and delete this message and its attachments.

        As emails may be altered, Orange is not liable for messages
        that have been modified, changed or falsified.

        Thank you.

        



        _______________________________________________

        Lsr mailing list

        Lsr@ietf.org  <mailto:Lsr@ietf.org>

        https://www.ietf.org/mailman/listinfo/lsr  
<https://www.ietf.org/mailman/listinfo/lsr>

    


