Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Ketan Talaulikar (ketant)
Hi Tony,

Please check inline below.

From: Tony Li  On Behalf Of Tony Li
Sent: 25 May 2021 00:15
To: Ketan Talaulikar (ketant) 
Cc: Acee Lindem (acee) ; lsr@ietf.org; 
draft-hegde-lsr-flex-algo-bw-...@ietf.org
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02


Hi Ketan,

In general, I support the adoption of this document. There is, however, one 
specific point, (8) below, which is not clear to me and on which I would 
appreciate some clarity before adoption.


As the chairs have noted, adoption is binary and not contingent upon rough 
consensus on the content, just on rough consensus on the interest.
[KT] I believe the WG has (or must have) the ability to determine which 
portions of a proposal to adopt and/or to split documents. I have seen this 
in the recent past in other WGs. In any case, it is not a point that I wish 
to argue or debate – especially in the context of this document. My point (8) 
was clarified, and hence I fall in the binary YES in this instance.



  1.  Why is the Generic Metric type in ISIS limited to a 3-byte size? OSPF 
allows a 4-byte size, so why not the same for ISIS? Elsewhere in the document, 
I do see MAX METRIC being referred to as 4,261,412,864.


Because I’m a lazy sod.

It’s far easier to detect metric overflow on three-byte values than four-byte 
values. True, four bytes is not impossible, but it’s just quick and easy with 
three-byte values.  Adding a fourth byte would add range to the metric space, 
but in practice this seemed not really relevant: most areas do not have a very 
high diameter, and the need for detailed metric distinctions has not been that 
high.  Thus, we went with a 3-byte metric in RFC 5305 (sec 3.7), and that 
seems to work.
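
For concreteness, a minimal sketch (not from the thread) of why detection is 
easy with 3-byte values: if the running path metric is clamped at RFC 5305's 
MAX_PATH_METRIC (4,261,412,864), adding a 24-bit link metric can never wrap a 
32-bit register, so a single comparison suffices; with 4-byte link metrics the 
intermediate sum itself can wrap.

    MAX_PATH_METRIC = 0xFE000000  # 4,261,412,864, per RFC 5305 (sec 3.7)

    def add_link_metric(path_metric: int, link_metric: int) -> int:
        """Accumulate a 3-byte link metric into a clamped path metric."""
        assert link_metric < 2**24           # 3-byte (24-bit) link metric
        total = path_metric + link_metric    # <= 0xFE000000 + 0xFFFFFF,
                                             # which is below 2^32: the add
                                             # cannot wrap a 32-bit register
        return min(total, MAX_PATH_METRIC)   # one compare detects overflow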
[KT] The Generic Metric is by definition something that will get extended in 
the future, and we don’t know what use-cases might arise. It doesn’t seem right 
to follow in the steps of an administratively assigned metric type like the TE 
metric. Therefore, I suggest making its size variable.


[KT] Regarding metric overflow, I think it would be better to leave it to 
implementations how to deal with it. Guidance similar to the text below (from 
draft-ietf-lsr-flex-algo) would help handle the condition in a manner that does 
not cause interop issues. Theoretically, this is independent of the size of 
the metric.

   During the route computation, it is possible for the Flex-Algorithm
   specific metric to exceed the maximum value that can be stored in an
   unsigned 32-bit variable.  In such scenarios, the value MUST be
   considered to be of value 4,294,967,295 during the computation and
   advertised as such.
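
As a sketch of the quoted rule (assuming unsigned 32-bit accumulation):

    UINT32_MAX = 4_294_967_295  # 2^32 - 1

    def accumulate_flex_metric(path_metric: int, link_metric: int) -> int:
        # Per the quoted text: if the sum exceeds what an unsigned 32-bit
        # variable can hold, treat (and advertise) it as 4,294,967,295.
        return min(path_metric + link_metric, UINT32_MAX)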


  2.  It would be good to cover the max-metric considerations for the Generic 
Metric, similar to 
https://datatracker.ietf.org/doc/html/draft-ietf-lsr-flex-algo-15#section-15.3


Fair




  3.  Since the draft is covering FlexAlgo, I would have expected that the 
Generic Metric is carried only in the ASLA and that this document specifies 
usage only for the FA application. Later this can also be used/extended for 
other applications, but still within ASLA. Keeping the option of advertising 
both outside and within the ASLA is problematic – we will need precedence 
rules and such. I prefer we avoid this complication.


We preferred avoiding ASLA.
[KT] The text today does not avoid ASLA and in fact requires the use of ASLA 
(quite correctly) for the FlexAlgo application. Peter has confirmed the same. 
I am simply asking to avoid the complications of using the Generic Metric both 
within ASLA and outside it. Whatever new “application” we invent to use this 
generic metric type can use ASLA, so that the Generic Metric can be very 
cleanly shared between applications. The encoding allows for using the same 
value – sharing a single advertisement across applications – or doing a 
different one per application.
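
To illustrate that sharing model, a hypothetical lookup sketch (the precedence 
rule and application names here are illustrative assumptions, not text from 
RFC 8919 or the draft): an ASLA advertisement with an empty application bit 
mask is usable by any application, while one that names an application 
explicitly applies to it specifically.

    from dataclasses import dataclass
    from typing import FrozenSet, List, Optional

    @dataclass
    class AslaEntry:
        applications: FrozenSet[str]  # empty mask = usable by any application
        generic_metric: int

    def generic_metric_for(app: str, entries: List[AslaEntry]) -> Optional[int]:
        # Assumed precedence: an advertisement naming the application
        # explicitly wins over a shared (empty-mask) advertisement.
        for e in entries:
            if app in e.applications:
                return e.generic_metric
        for e in entries:
            if not e.applications:
                return e.generic_metric
        return None  # no usable advertisement for this application

    entries = [AslaEntry(frozenset(), 100),               # shared
               AslaEntry(frozenset({"flex-algo"}), 250)]  # flex-algo specific
    assert generic_metric_for("flex-algo", entries) == 250
    assert generic_metric_for("rsvp-te", entries) == 100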


  4.  For the newly proposed FAD b/w constraints, I would suggest the following 
names for the constraint sub-TLVs, where the b/w value signalled by each is 
compared with the Maximum Link Bandwidth attribute. This is just to make the 
meaning, at least IMHO, more clear (see the pruning sketch after this list).

 *   Exclude Higher Bandwidth Links
 *   Exclude Lower Bandwidth Links
 *   Include-Only Higher Bandwidth Links
 *   Include-Only Lower Bandwidth Links

  5.  Similar naming for the FAD delay constraints would help as well. Though I 
can only think of the use of “exclude” for links above a certain delay 
threshold as the more practical one, perhaps others might eventually be 
required as well?


Thank you for the suggestions.




  6.  For the Maximum Link Bandwidth attribute and its comparison with the FAD 
b/w constraints, I see the reference to ASLA. While in OSPF max-bandwidth is 
not allowed in ASLA - https://datatracker.ietf.org/doc/html/rfc8920#section-7 - 
in the case of ISIS, it is also not really appropriate for use within ASLA 
-https://datatracker.ietf.org/doc/ht

Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Acee Lindem (acee)
Speaking as a WG member:

I think the argument for delays < 1 usec is very weak, and I haven’t heard any 
compelling arguments.

Thanks,
Acee

From: Lsr  on behalf of Anoop Ghanwani 

Date: Tuesday, May 25, 2021 at 6:08 PM
To: Tony Li 
Cc: lsr , "gregory.mir...@ztetx.com" 
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

>>>
That’s not a big deal, but when we make the base more precise, we lose range.  
If we go with 32 bits of nanoseconds, we limit ourselves to a link delay of ~4 
seconds. Tolerable, but it will certainly disappoint Vint and his 
inter-planetary Internet. :-)
>>>
The current 24 bits of usec delay isn't that much better at ~16 sec.  (Unless 
there is a proposal to change that to 32 bits?)

On Tue, May 25, 2021 at 9:53 AM Tony Li <tony...@tony.li> wrote:

Hi Greg,

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
nanoseconds or 100 nanoseconds.

Ok.  The specific precision isn’t particularly relevant to me.  The real 
questions are whether microseconds are the right base or not, and whether we 
should shift to floating point for additional range or add more bits.



To Tony's question, the delay is usually calculated from timestamps collected 
at measurement points (MP). There are several timestamp formats, but most 
protocols I'm familiar with, e.g., NTP or PTP, use 64-bit timestamps, where 32 
bits represent the seconds and 32 bits a fraction of a second. As you can see, 
nanosecond-level resolution is well within the capability of protocols like 
OWAMP/TWAMP/STAMP. As for use cases that may benefit from higher resolution of 
the packet delay metric, I can think of URLLC in the MEC environment. I was 
told that some applications have an RTT budget in the tens-of-microseconds 
range.

It’s very true that folks have carried around nanosecond timestamps for a long 
time now.  No question there. My question is whether it is actually useful. 
While NTP has that precision in its timestamps, the actual precision of NTP’s 
synchronization algorithms isn’t quite that strong.  In effect, many of those 
low-order bits are wasted.

That’s not a big deal, but when we make the base more precise, we lose range.  
If we go with 32 bits of nanoseconds, we limit ourselves to a link delay of ~4 
seconds. Tolerable, but it will certainly disappoint Vint and his 
inter-planetary Internet. :-)

We could go with 64 bits of nanoseconds, but then we’ll probably only rarely 
use the high order bits, so that seems wasteful of bandwidth.

Or we can go to floating point. This will greatly increase the range, at the 
expense of having fewer significant bits in the mantissa.

Personally, I would prefer to stay with 32 bits, but I’m flexible after that.

Tony



Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Anoop Ghanwani
>>>
That’s not a big deal, but when we make the base more precise, we lose
range.  If we go with 32 bits of nanoseconds, we limit ourselves to a link
delay of ~4 seconds. Tolerable, but it will certainly disappoint Vint and
his inter-planetary Internet. :-)
>>>
The current 24 bits of usec delay isn't that much better at ~16 sec.
(Unless there is a proposal to change that to 32 bits?)

On Tue, May 25, 2021 at 9:53 AM Tony Li  wrote:

>
> Hi Greg,
>
> Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps,
> 10 nanoseconds or 100 nanoseconds.
>
>
> Ok.  The specific precision isn’t particularly relevant to me.  The real
> questions are whether microseconds are the right base or not, and whether
> we should shift to floating point for additional range or add more bits.
>
> To Tony's question, the delay is usually calculated from timestamps
> collected at measurement points (MP). There are several timestamp formats,
> but most protocols I'm familiar with, e.g., NTP or PTP, use 64-bit
> timestamps, where 32 bits represent the seconds and 32 bits a fraction of a
> second. As you can see, nanosecond-level resolution is well within the
> capability of protocols like OWAMP/TWAMP/STAMP. As for use cases that may
> benefit from higher resolution of the packet delay metric, I can think of
> URLLC in the MEC environment. I was told that some applications have an RTT
> budget in the tens-of-microseconds range.
>
>
> It’s very true that folks have carried around nanosecond timestamps for a
> long time now.  No question there. My question is whether it is actually
> useful. While NTP has that precision in its timestamps, the actual
> precision of NTP’s synchronization algorithms isn’t quite that strong.
> In effect, many of those low-order bits are wasted.
>
> That’s not a big deal, but when we make the base more precise, we lose
> range.  If we go with 32 bits of nanoseconds, we limit ourselves to a link
> delay of ~4 seconds. Tolerable, but it will certainly disappoint Vint and
> his inter-planetary Internet. :-)
>
> We could go with 64 bits of nanoseconds, but then we’ll probably only
> rarely use the high order bits, so that seems wasteful of bandwidth.
>
> Or we can go to floating point. This will greatly increase the range, at
> the expense of having fewer significant bits in the mantissa.
>
> Personally, I would prefer to stay with 32 bits, but I’m flexible after
> that.
>
> Tony


Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread gregory.mirsky
Hi Tony,


thank you for clarifying your view on this. Please find my notes in-line below 
under the GIM>> tag.

Regards,
Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division
E: gregory.mir...@ztetx.com 
www.zte.com.cn

Original Mail

Sender: Tony Li
To: gregory mirsky10211915;
CC: lsr;
Date: 2021/05/25 09:52
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02
Hi Greg,

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
nanoseconds or 100 nanoseconds.

Ok.  The specific precision isn’t particularly relevant to me.  The real 
questions are whether microseconds are the right base or not, and whether we 
should shift to floating point for additional range or add more bits.


To Tony's question, the delay is usually calculated from timestamps collected 
at measurement points (MP). There are several timestamp formats, but most 
protocols I'm familiar with, e.g., NTP or PTP, use 64-bit timestamps, where 32 
bits represent the seconds and 32 bits a fraction of a second. As you can see, 
nanosecond-level resolution is well within the capability of protocols like 
OWAMP/TWAMP/STAMP. As for use cases that may benefit from higher resolution of 
the packet delay metric, I can think of URLLC in the MEC environment. I was 
told that some applications have an RTT budget in the tens-of-microseconds 
range.




It’s very true that folks have carried around nanosecond timestamps for a long 
time now.  No question there. My question is whether it is actually useful. 
While NTP has that precision in its timestamps, the actual precision of NTP’s 
synchronization algorithms isn’t quite that strong.  In effect, many of those 
low-order bits are wasted.

GIM>> What I see from deployments of active measurement protocols, e.g., TWAMP 
and STAMP, is strong interest in using the PTP, i.e., IEEE 1588v2, timestamp 
format in the data plane. And the requirement (actually, there are different 
profiles) for the quality of clock synchronization for 5G is, as I understand 
it, achievable with PTP. I have no information on whether that is the case 
with NTP.
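
For reference, a small sketch (illustrative, not from the thread) of the 
resolution inherent in the 64-bit NTP-format timestamps that OWAMP/TWAMP/STAMP 
carry: the 32-bit fraction field resolves 2^-32 s, roughly 0.23 ns, so 
microsecond advertisement is a protocol choice, not a wire-format limit.

    def ntp_to_seconds(ts: int) -> float:
        """64-bit NTP-format timestamp: high 32 bits are seconds, low 32
        bits a fraction of a second (resolution 2**-32 s ~ 0.23 ns)."""
        return (ts >> 32) + (ts & 0xFFFFFFFF) / 2**32

    def one_way_delay_us(tx_ts: int, rx_ts: int) -> float:
        """One-way delay between transmit and receive timestamps, in usec."""
        return (ntp_to_seconds(rx_ts) - ntp_to_seconds(tx_ts)) * 1e6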


That’s not a big deal, but when we make the base more precise, we lose range.  
If we go with 32 bits of nanoseconds, we limit ourselves to a link delay of ~4 
seconds. Tolerable, but it will certainly disappoint Vint and his 
inter-planetary Internet. :-)

GIM>> Agree. I would propose to consider 100 nsec as a unit, which with a 
32-bit field gets the range close to 7 minutes.
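
Checking that arithmetic:

    2^{32} \times 100\,\mathrm{ns} \approx 4.295 \times 10^{9} \times 10^{-7}\,\mathrm{s} \approx 429.5\,\mathrm{s} \approx 7.16\,\mathrm{min}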


We could go with 64 bits of nanoseconds, but then we’ll probably only rarely 
use the high order bits, so that seems wasteful of bandwidth.

Or we can go to floating point. This will greatly increase the range, at the 
expense of having fewer significant bits in the mantissa.




Personally, I would prefer to stay with 32 bits, but I’m flexible after that.

GIM>> I think that we can stay with a 32-bit field and get better resolution 
at the same time.


Tony


Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread gregory.mirsky
Hi Anoop,

thank you for sharing your thoughts. I agree, it is very likely that in most 
cases the potential inaccuracy of 1 usec per link would not affect the 
construction of a route. But for cases that require very low bounded latency, 
e.g., DetNet, such a level of uncertainty about the actual value of the delay 
might cause the computation to fail to find a route altogether. Consider a 
case where the measured e2e delay of N usec is well within the required bound, 
but the sum of the advertised Maximum Link Delays is above that threshold.
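
A small numeric sketch of that concern (illustrative values only):

    import math

    # Rounding each per-hop delay up to the next microsecond can make the
    # advertised end-to-end sum exceed a budget the measured path meets.
    hop_delays_100ns = [102, 97, 113, 104, 99, 106, 101]  # measured, 100 ns units
    budget_us = 74.0

    measured_e2e_us = sum(hop_delays_100ns) / 10                       # 72.2 usec
    advertised_1us = sum(math.ceil(d / 10) for d in hop_delays_100ns)  # 76 usec

    assert measured_e2e_us <= budget_us  # the real path meets the budget,
    assert advertised_1us > budget_us    # but computation on 1 usec values
                                         # rejects it; a 100 ns unit avoids
                                         # this false negative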

I think that using 100 nsec as the unit of the Maximum Link Delay is a 
reasonable compromise between range and accuracy.

Regards,
Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division
E: gregory.mir...@ztetx.com 
www.zte.com.cn

Original Mail

Sender: Anoop Ghanwani
To: gregory mirsky10211915;
CC: lsr@ietf.org;
Date: 2021/05/25 00:11
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02
Greg,
One thing to keep in mind is that even though we can measure latency at a 
precision of tens or hundreds of nanoseconds, does it hurt to round the link 
delay up to the nearest microsecond?  One way to look at this is that by doing 
such rounding up, we add at most 1 usec worth of additional delay per hop.  
That was my rationale for thinking it's probably OK to leave the resolution at 
1 usec.

I don't have a strong opinion either way.

Anoop 




On Mon, May 24, 2021 at 7:32 PM  wrote:

Dear All,

thank you for the discussion of my question on the unit of the Maximum Link 
Delay parameter.

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
nanoseconds or 100 nanoseconds.

To Tony's question, the delay is usually calculated from timestamps collected 
at measurement points (MP). There are several timestamp formats, but most 
protocols I'm familiar with, e.g., NTP or PTP, use 64-bit timestamps, where 32 
bits represent the seconds and 32 bits a fraction of a second. As you can see, 
nanosecond-level resolution is well within the capability of protocols like 
OWAMP/TWAMP/STAMP. As for use cases that may benefit from higher resolution of 
the packet delay metric, I can think of URLLC in the MEC environment. I was 
told that some applications have an RTT budget in the tens-of-microseconds 
range.




Shraddha, you've said

"The measurement mechanisms and advertisements in ISIS support micro-second 
granularity (RFC 8570)."

Could you direct me to the text in RFC 8570 that defines the measurement 
method or protocol that limits the resolution to a microsecond?




To Acee, regarding

"Any measurement of delay would include both components of delay"

I think it depends on where the MP is located (yes, it is another "It depends" 
situation).




I agree with Anoop that it could be beneficial to have text in the draft that 
explains the three types of delays a packet experiences and how the location 
of an MP affects the accuracy of the measurement and the metric.






Best regards,
Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division
E: gregory.mir...@ztetx.com 
www.zte.com.cn



Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread gregory.mirsky
Hi Shraddha,


thank you for pointing out the text. Though it mentions that the value is the 
average delay calculated over a configured, i.e., pre-defined, interval, it 
seems to leave out some important aspects of the measurement method, e.g., the 
number of measurements in the set over which the average is calculated. Also, 
I'd note that the average is not the most informative or robust metric, as it 
is more sensitive to extremes (spikes and drops) than, for example, the median 
or a high percentile (the most often used are the 95th, 99th, and 99.9th).
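
A quick illustration of that point (made-up samples, using Python's statistics 
module):

    import statistics

    # One delay spike moves the average far more than the median, while a
    # high percentile still captures the tail.
    samples_us = [100, 101, 99, 100, 102, 98, 100, 101, 99, 5000]

    mean_us = statistics.mean(samples_us)      # 590.0 usec: skewed by the spike
    median_us = statistics.median(samples_us)  # 100.0 usec: robust to it
    p99_us = statistics.quantiles(samples_us, n=100)[98]  # ~99th pct: ~5000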








Regards,
Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division
E: gregory.mir...@ztetx.com 
www.zte.com.cn

Original Mail

Sender: Shraddha Hegde
To: gregory mirsky10211915; lsr@ietf.org;
Date: 2021/05/25 02:46
Subject: RE: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02
Snipped …


 

>Shraddha, you've said

>"The measurement mechanisms and advertisements in ISIS support micro-second 
>granularity (RFC 8570)."

>Could you direct me to the text in RFC 8570 that defines the measurement 
>method or protocol that limits the resolution to a microsecond?

 

Pls refer to RFC 8570, Section 4.1; the link delay is encoded in microseconds:

   "Delay:  This 24-bit field carries the average link delay over a
   configurable interval in microseconds, encoded as an integer
   value.  When set to the maximum value 16,777,215
   (16.777215 seconds), then the delay is at least that value and may
   be larger."

The exact same text can be found in OSPF RFC 7471, Section 4.1.5, as well.

 

Rgds
Shraddha

Juniper Business Use Only

From: Lsr  On Behalf Of gregory.mir...@ztetx.com
Sent: Tuesday, May 25, 2021 8:03 AM
To: lsr@ietf.org
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02
Dear All,

thank you for the discussion of my question on the unit of the Maximum Link 
Delay parameter.

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
nanoseconds or 100 nanoseconds.

To Tony's question, the delay is usually calculated from timestamps collected 
at measurement points (MP). There are several timestamp formats, but most 
protocols I'm familiar with, e.g., NTP or PTP, use 64-bit timestamps, where 32 
bits represent the seconds and 32 bits a fraction of a second. As you can see, 
nanosecond-level resolution is well within the capability of protocols like 
OWAMP/TWAMP/STAMP. As for use cases that may benefit from higher resolution of 
the packet delay metric, I can think of URLLC in the MEC environment. I was 
told that some applications have an RTT budget in the tens-of-microseconds 
range.

 

Shraddha, you've said

"The measurement mechanisms and advertisements in ISIS support micro-second 
granularity (RFC 8570)."

Could you direct me to the text in RFC 8570 that defines the measurement 
method or protocol that limits the resolution to a microsecond?

 

To Acee, regarding

"Any measurement of delay would include both components of delay"

I think it depends on where the MP is located (yes, it is another "It depends" 
situation).

 

I agree with Anoop that it could be beneficial to have text in the draft that 
explains the three types of delays a packet experiences and how the location 
of an MP affects the accuracy of the measurement and the metric.

 

Best regards,
Greg Mirsky

Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division
E: gregory.mir...@ztetx.com 
www.zte.com.cn


Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Tony Li

Hi Greg,
> Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
> nanoseconds or 100 nanoseconds.
> 

Ok.  The specific precision isn’t particularly relevant to me.  The real 
questions are whether microseconds are the right base or not, and whether we 
should shift to floating point for additional range or add more bits.

> To Tony's question, the delay is usually calculated from timestamps
> collected at measurement points (MP). There are several timestamp formats,
> but most protocols I'm familiar with, e.g., NTP or PTP, use 64-bit
> timestamps, where 32 bits represent the seconds and 32 bits a fraction of a
> second. As you can see, nanosecond-level resolution is well within the
> capability of protocols like OWAMP/TWAMP/STAMP. As for use cases that may
> benefit from higher resolution of the packet delay metric, I can think of
> URLLC in the MEC environment. I was told that some applications have an RTT
> budget in the tens-of-microseconds range.
> 

It’s very true that folks have carried around nanosecond timestamps for a long 
time now.  No question there. My question is whether it is actually useful. 
While NTP has that precision in its timestamps, the actual precision of NTP’s 
synchronization algorithms isn’t quite that strong.  In effect, many of those 
low-order bits are wasted.

That’s not a big deal, but when we make the base more precise, we lose range.  
If we go with 32 bits of nanoseconds, we limit ourselves to a link delay of ~4 
seconds. Tolerable, but it will certainly disappoint Vint and his 
inter-planetary Internet. :-)

We could go with 64 bits of nanoseconds, but then we’ll probably only rarely 
use the high order bits, so that seems wasteful of bandwidth.

Or we can go to floating point. This will greatly increase the range, at the 
expense of having fewer significant bits in the mantissa.

Personally, I would prefer to stay with 32 bits, but I’m flexible after that.

Tony
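
Summarizing the ranges discussed in this sub-thread (simple arithmetic, not 
from the draft):

    # Maximum representable link delay for the encodings discussed here.
    encodings = {
        "24-bit, 1 usec unit (RFC 8570)": (2**24 - 1) * 1e-6,  # ~16.8 s
        "32-bit, 1 usec unit":            (2**32 - 1) * 1e-6,  # ~71.6 min
        "32-bit, 100 ns unit":            (2**32 - 1) * 1e-7,  # ~7.2 min
        "32-bit, 1 ns unit":              (2**32 - 1) * 1e-9,  # ~4.3 s
        "64-bit, 1 ns unit":              (2**64 - 1) * 1e-9,  # ~585 years
    }
    for name, max_seconds in encodings.items():
        print(f"{name}: {max_seconds:.6g} s")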



Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Tony Li

Hi Aijun,

> My suggestion is still to not introduce such a non-cumulative metric into 
> the cumulative-based SPF calculation process.

Again, what we’re proposing is cumulative.

Tony



Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Shraddha Hegde
Snipped …


>Shraddha, you've said

>"The measurement mechanisms and advertisements in ISIS support micro-second 
>granularity (RFC 8570)."

>Could you direct me to the text in RFC 8570 that defines the measurement 
>method or protocol that limits the resolution to a microsecond?



Pls refer to RFC 8570, Section 4.1; the link delay is encoded in microseconds:

   "Delay:  This 24-bit field carries the average link delay over a
   configurable interval in microseconds, encoded as an integer
   value.  When set to the maximum value 16,777,215
   (16.777215 seconds), then the delay is at least that value and may
   be larger."

The exact same text can be found in OSPF RFC 7471, Section 4.1.5, as well.
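
A sketch of the quoted encoding rule (rounding up is this sketch's assumption; 
the RFC text leaves the rounding mode unspecified):

    import math

    MAX_DELAY_US = 16_777_215  # 2^24 - 1 usec, i.e., 16.777215 seconds

    def encode_link_delay(avg_delay_us: float) -> int:
        # Round up to a whole microsecond and saturate; the maximum value
        # means "the delay is at least that value and may be larger".
        return min(math.ceil(avg_delay_us), MAX_DELAY_US)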



Rgds

Shraddha





Juniper Business Use Only
From: Lsr  On Behalf Of gregory.mir...@ztetx.com
Sent: Tuesday, May 25, 2021 8:03 AM
To: lsr@ietf.org
Subject: Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, 
Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02



Dear All,

thank you for the discussion of my question on the unit of the Maximum Link 
Delay parameter.

Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps, 10 
nanoseconds or 100 nanoseconds.

To Tony's question, the delay is usually calculated from timestamps collected 
at measurement points (MP). There are several timestamp formats, but most 
protocols I'm familiar with, e.g., NTP or PTP, use 64-bit timestamps, where 32 
bits represent the seconds and 32 bits a fraction of a second. As you can see, 
nanosecond-level resolution is well within the capability of protocols like 
OWAMP/TWAMP/STAMP. As for use cases that may benefit from higher resolution of 
the packet delay metric, I can think of URLLC in the MEC environment. I was 
told that some applications have an RTT budget in the tens-of-microseconds 
range.



Shraddha, you've said

"The measurement mechanisms and advertisements in ISIS support micro-second 
granularity (RFC 8570)."

Could you direct me to the text in RFC 8570 that defines the measurement 
method or protocol that limits the resolution to a microsecond?



To Acee, regarding

"Any measurement of delay would include both components of delay"

I think it depends on where the MP is located (yes, it is another "It depends" 
situation).



I agree with Anoop that it could be beneficial to have text in the draft that 
explains the three types of delays a packet experiences and how the location 
of an MP affects the accuracy of the measurement and the metric.



Best regards,

Greg Mirsky



Sr. Standardization Expert
预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D 
Institute/Wireline Product Operation Division


E: gregory.mir...@ztetx.com
www.zte.com.cn



Re: [Lsr] LSR WG Adoption Poll for "Flexible Algorithms: Bandwidth, Delay, Metrics and Constraints" - draft-hegde-lsr-flex-algo-bw-con-02

2021-05-25 Thread Anoop Ghanwani
Greg,

One thing to keep in mind is that even though we can measure latency at a
precision of tens or hundreds of nanoseconds, does it hurt to round the link
delay up to the nearest microsecond?  One way to look at this is that by
doing such rounding up, we add at most 1 usec worth of additional delay per
hop.  That was my rationale for thinking it's probably OK to leave the
resolution at 1 usec.

I don't have a strong opinion either way.

Anoop

On Mon, May 24, 2021 at 7:32 PM  wrote:

> Dear All,
>
> thank you for the discussion of my question on the unit of the Maximum
> Link Delay parameter.
>
> Firstly, I am not suggesting it be changed to a nanosecond, but, perhaps,
> 10 nanoseconds or 100 nanoseconds.
>
> To Tony's question, the delay is usually calculated from timestamps
> collected at measurement points (MP). There are several timestamp formats,
> but most protocols I'm familiar with, e.g., NTP or PTP, use 64-bit
> timestamps, where 32 bits represent the seconds and 32 bits a fraction of a
> second. As you can see, nanosecond-level resolution is well within the
> capability of protocols like OWAMP/TWAMP/STAMP. As for use cases that may
> benefit from higher resolution of the packet delay metric, I can think of
> URLLC in the MEC environment. I was told that some applications have an RTT
> budget in the tens-of-microseconds range.
>
>
> Shraddha, you've said
>
> "The measurement mechanisms and advertisements in ISIS support
> micro-second granularity (RFC 8570)."
>
> Could you direct me to the text in RFC 8570 that defines the measurement
> method or protocol that limits the resolution to a microsecond?
>
>
> To Acee, regarding
>
> "Any measurement of delay would include both components of delay"
>
> I think it depends on where the MP is located (yes, it is another "It
> depends" situation).
>
>
> I agree with Anoop that it could be beneficial to have text in the draft
> that explains the three types of delays a packet experiences and how the
> location of an MP affects the accuracy of the measurement and the metric.
>
>
> Best regards,
>
> Greg Mirsky
>
>
> Sr. Standardization Expert
> 预研标准部/有线研究院/有线产品经营部 Standard Preresearch Dept./Wireline Product R&D
> Institute/Wireline Product Operation Division
>
>
>
> E: gregory.mir...@ztetx.com
> www.zte.com.cn