Perhaps one last motivation point.

We just captured traffic on a 100G link, with all packets time-stamped in hardware by a Mellanox CX5. Software time-stamping fails to reveal queueing delays and, since multiple queues are needed to write 100G of traffic to multiple NVMe drives, it cannot recover the original packet ordering that multi-queuing mixes up.

Here, we timestamped the traffic in hardware (FYI, the value is given in ticks of the internal CX5 clock, not in a unit of time) and, thanks to the new API, converted it to a real time value (through a frequency and a base).

But precision is not the only improvement. As DPDK runs at user level, calling gettimeofday for millions of packets pretty much kills the capture. Here we do simple per-packet math to convert the packet's timestamp in ticks to real clock time, which is (very) much cheaper than even a vDSO syscall.
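
For illustration, the per-packet math boils down to something like the following sketch (hypothetical names, not our actual capture code; hz is the device clock frequency in ticks per second, and (base_ticks, base_ns) is a reference pair sampled once at startup):

#include <stdint.h>

/* Sketch: convert a raw device tick count to wall-clock nanoseconds
 * using a frequency and a base pair established once at startup. */
static inline uint64_t
ticks_to_ns(uint64_t ticks, double hz, uint64_t base_ticks, uint64_t base_ns)
{
        return base_ns + (uint64_t)((ticks - base_ticks) * (1e9 / hz));
}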

Tom


On 2019-05-02 14:11, Tom Barbette wrote:
Some NICs can timestamp packets but do not support the full
PTP synchronization process. Hence, the value set in the mbuf
timestamp field is only the raw value of an internal clock.

To make sense of this value, one at least needs to be able to query
the current hardware clock value. This patch series adds a new API to do
so, rte_eth_read_clock. As with the TSC, a frequency can then be
derived by querying the current value of the internal clock multiple
times with a known delay between the queries (an example is provided
in the API doc).
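
As a minimal sketch of that derivation (the TSC serves as the delay
reference; rte_rdtsc, rte_get_tsc_hz and rte_delay_ms are existing DPDK
helpers, and the 100 ms delay is an arbitrary choice):

#include <rte_ethdev.h>
#include <rte_cycles.h>

/* Estimate the device clock frequency, in ticks per second, by sampling
 * rte_eth_read_clock() twice with a known delay between the queries. */
static double
estimate_dev_clock_hz(uint16_t port_id)
{
        uint64_t c0, c1, t0, t1;

        rte_eth_read_clock(port_id, &c0);
        t0 = rte_rdtsc();
        rte_delay_ms(100); /* known delay between the two queries */
        rte_eth_read_clock(port_id, &c1);
        t1 = rte_rdtsc();

        /* device ticks elapsed / seconds elapsed */
        return (double)(c1 - c0) / ((double)(t1 - t0) / rte_get_tsc_hz());
}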

This patch series adds read_clock support for MLX5.

An example is provided in the rxtx_callbacks application.
It has been updated to display, on top of the software latency
in cycles, the total latency since the packet was received in hardware.
The API is used to compute a delta in the Tx callback. The raw amount of
ticks is converted to cycles using a variation of the technique described
above.
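
Roughly, the Tx callback does something along these lines (a simplified
sketch, not the exact sample code):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch of a Tx callback: read the current device clock and subtract
 * the Rx hardware timestamp carried in the mbuf; the resulting tick
 * delta is then converted to cycles with the factor derived above. */
static uint16_t
hw_latency_cb(uint16_t port, uint16_t qidx __rte_unused,
              struct rte_mbuf **pkts, uint16_t nb_pkts,
              void *arg __rte_unused)
{
        uint64_t now;
        uint16_t i;

        if (rte_eth_read_clock(port, &now) == 0) {
                for (i = 0; i < nb_pkts; i++) {
                        uint64_t hw_ticks = now - pkts[i]->timestamp;
                        (void)hw_ticks; /* accumulate/convert as needed */
                }
        }
        return nb_pkts;
}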

Aside from offloading timestamping, which relieves the
software of a few operations, this allows much more precision
when studying the sources of latency in a system.
E.g., in our 100G CX5 setup, the rxtx_callbacks application shows that
SW latency is around 74 cycles (with a 3.2 GHz TSC), but the latency
including NIC processing, PCIe, and queueing is around 196 cycles.

One may at first think this API overlaps with rte_eth_timesync_read_time.
However, rte_eth_timesync_read_time is clearly identified as part of a set
of functions for PTP synchronization.
The device's raw clock is not "synced" in any way. More importantly, the
returned value is not a timespec, but an amount of ticks. We could have a
cast-based solution, but on top of being ugly, some people seeing the
timespec type of rte_eth_timesync_read_time could use it blindly.

Changes in v2:
   - Rebase on current master

Changes in v3:
   - Address comments from Ferruh Yigit

Changes in v4:
   - Address comments from Keith Wiles and Andrew Rybchenko
   - Use "clock" as the argument name everywhere.
   - Expand the API description to make clear that read_clock gives an
     amount in ticks, and that it has no unit.

Tom Barbette (3):
   rte_ethdev: Add API function to read dev clock
   mlx5: Implement support for read_clock
   rxtx_callbacks: Add support for HW timestamp

  doc/guides/nics/features.rst                |  1 +
  doc/guides/sample_app_ug/rxtx_callbacks.rst |  9 ++-
  drivers/net/mlx5/mlx5.c                     |  1 +
  drivers/net/mlx5/mlx5.h                     |  1 +
  drivers/net/mlx5/mlx5_ethdev.c              | 30 +++++++
  drivers/net/mlx5/mlx5_glue.c                |  8 ++
  drivers/net/mlx5/mlx5_glue.h                |  2 +
  examples/rxtx_callbacks/Makefile            |  3 +
  examples/rxtx_callbacks/main.c              | 87 ++++++++++++++++++++-
  examples/rxtx_callbacks/meson.build         |  3 +
  lib/librte_ethdev/rte_ethdev.c              | 12 +++
  lib/librte_ethdev/rte_ethdev.h              | 47 +++++++++++
  lib/librte_ethdev/rte_ethdev_core.h         |  6 ++
  lib/librte_ethdev/rte_ethdev_version.map    |  1 +
  lib/librte_mbuf/rte_mbuf.h                  |  2 +
  15 files changed, 208 insertions(+), 5 deletions(-)
