Dear All,
Thank you for the discussion of my question on the unit of the Maximum Link
Delay parameter.
Firstly, I am not suggesting it be changed to one nanosecond, but perhaps to
10 or 100 nanoseconds.
To Tony's question: the delay is usually calculated from timestamps collected
at measurement points (MPs). There are several timestamp formats, but most
protocols I'm familiar with, e.g., NTP or PTP, use a 64-bit format in which
32 bits represent seconds and the other 32 bits represent a fraction of a
second. As you can see, nanosecond-level resolution is well within the
capability of protocols like OWAMP/TWAMP/STAMP. As for use cases that may
benefit from a higher resolution of the packet delay metric, I can think of
URLLC in the MEC environment. I was told that some applications have an RTT
budget in the tens-of-microseconds range.
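To illustrate the point, here is a minimal sketch in Python (the timestamp
values are hypothetical) showing the native resolution that falls out of a
64-bit NTP-format timestamp:

# Minimal sketch: one-way delay from two 64-bit NTP-format timestamps
# (32-bit seconds + 32-bit fraction of a second), as carried in
# OWAMP/TWAMP/STAMP test packets. The timestamp values are hypothetical.

TICK_NS = 1e9 / 2**32  # one fractional tick is ~0.2328 ns

def to_ticks(seconds: int, fraction: int) -> int:
    """Whole 2**-32 s ticks represented by an NTP (seconds, fraction) pair."""
    return (seconds << 32) | fraction

tx = to_ticks(3_900_000_000, 0x10000000)  # sender MP, hypothetical
rx = to_ticks(3_900_000_000, 0x10001000)  # receiver MP, hypothetical

print(f"one-way delay: {(rx - tx) * TICK_NS:.1f} ns")  # ~953.7 ns
print(f"native resolution: {TICK_NS:.3f} ns per tick")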
Shraddha, you've said
"The measurement mechanisms and advertisements in ISIS support micro-second
granularity (RFC 8570)."
Could you point me to the text in RFC 8570 that defines the measurement
method or protocol that limits the resolution to one microsecond?
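As I read it, the microsecond granularity comes from the advertisement
encoding rather than from any measurement protocol. A rough sketch of the
distinction in Python (the measured value is hypothetical; the 24-bit
microsecond field is my reading of the RFC 8570 delay sub-TLVs):

# A nanosecond-resolution measurement survives the measurement protocol
# intact; it is the microsecond-granularity advertisement field that
# rounds it. The measured value below is hypothetical.

measured_delay_ns = 12_345                      # e.g., from STAMP timestamps

# Encode into a 24-bit field carrying microseconds, as I understand
# the RFC 8570 delay sub-TLVs to do.
advertised_delay_us = min(measured_delay_ns // 1000, 2**24 - 1)

print(advertised_delay_us)                      # 12; the 345 ns are lost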
To Acee, on the point that
"Any measurement of delay would include the both components of delay"
I think it depends on where the MP is located (yes, it is another "it
depends" situation).
I agree with Anoop that it could be beneficial to have text in the draft that
explains the three types of delay a packet experiences and how the location of
an MP affects the accuracy of the measurement and the metric.
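For illustration only, a toy sketch of what I mean (the breakdown into
serialization, propagation, and queuing, and all the numbers, are my
assumptions, not text from the draft):

# Toy breakdown of per-link delay components and the effect of MP
# placement. All figures are hypothetical.

PACKET_BITS = 1500 * 8          # one full-size Ethernet payload
LINK_RATE_BPS = 10e9            # 10GE link
FIBER_M = 2_000                 # 2 km of fiber
PROP_S_PER_M = 5e-9             # ~5 ns per meter in fiber

serialization_s = PACKET_BITS / LINK_RATE_BPS   # ~1.2 us, fixed per rate
propagation_s = FIBER_M * PROP_S_PER_M          # ~10 us, fixed per path
queuing_s = 3e-6                                # varies with load

# An MP downstream of the egress queue sees all three components; an MP
# upstream of it misses the queuing component entirely.
print(f"MP after the queue:  {(serialization_s + propagation_s + queuing_s) * 1e6:.1f} us")
print(f"MP before the queue: {(serialization_s + propagation_s) * 1e6:.1f} us")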
Best regards,
Greg Mirsky
Sr. Standardization Expert
Standard Preresearch Dept./Wireline Product R&D Institute/Wireline Product
Operation Division
E: gregory.mir...@ztetx.com
www.zte.com.cn