> -----Original Message-----
> From: Harman Kalra <hka...@marvell.com>
> Sent: Wednesday, June 17, 2020 18:58
> To: Slava Ovsiienko <viachesl...@mellanox.com>
> Cc: dev@dpdk.org; Thomas Monjalon <tho...@monjalon.net>; Matan
> Azrad <ma...@mellanox.com>; Raslan Darawsheh
> <rasl...@mellanox.com>; Ori Kam <or...@mellanox.com>;
> olivier.m...@6wind.com; Shahaf Shuler <shah...@mellanox.com>
> Subject: Re: [EXT] RE: [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
> 
> On Wed, Jun 10, 2020 at 03:16:12PM +0000, Slava Ovsiienko wrote:
> 
Hi, Harman

Sorry for the delay - I missed your reply.

[..skip..]
> > Should we waste CPU cycles to wait for the desired moment of time? Can we
> > guarantee stable interrupt latency if we choose the schedule-on-interrupts
> > approach?
> >
> > This RFC splits the responsibility - application should prepare the
> > data and specify when it desires to send, the rest is on PMD.
> 
> I agree that we cannot guarantee the delay between the tx burst call and the
> data on the wire, hence the PMD should take care of it.
> However, if the PMD busy-waits it wastes CPU cycles, and if we set up an alarm
> instead, interrupt latency might be a concern for achieving precise timing. So
> how are you planning to address both of these issues in the PMD?

It is offloaded to the HW. A special WAIT descriptor carrying the timestamp
is pushed to the queue and the hardware just waits for the appropriate moment.
This is exactly the PMD's task - convert the timestamp into hardware-related
entities and program the requested operation on the hardware. Thus we do not
wait on the CPU in any way - no loops, no interrupts, etc. Let the NIC do it for us.
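
Just to illustrate the intended usage from the application point of view -
a rough sketch only, the lookup/flag details are not finalized in this RFC,
and taking the reference clock from rte_eth_read_clock() is my assumption here:

/*
 * Rough application-side sketch, assuming the dynamic field
 * "rte_dynfield_timestamp" holds an absolute timestamp in the device
 * clock domain (the same clock returned by rte_eth_read_clock()).
 * The final API may also require setting a per-packet dynamic Tx flag;
 * this is omitted here for brevity.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int
tx_scheduled(uint16_t port, uint16_t queue,
	     struct rte_mbuf **pkts, uint16_t nb, uint64_t gap)
{
	uint64_t now;
	uint16_t sent = 0, i;
	int ts_off;

	/* Offset of the dynamic timestamp field inside rte_mbuf. */
	ts_off = rte_mbuf_dynfield_lookup("rte_dynfield_timestamp", NULL);
	if (ts_off < 0)
		return -1; /* scheduling is not available */

	if (rte_eth_read_clock(port, &now) != 0)
		return -1;

	/* Set the desired send moment for every packet in the burst. */
	for (i = 0; i < nb; i++)
		*RTE_MBUF_DYNFIELD(pkts[i], ts_off, uint64_t *) =
			now + (uint64_t)(i + 1) * gap;

	/*
	 * Retry loop: while the queue holds a WAIT descriptor for the
	 * scheduled moment it may accept no more packets, so tx_burst()
	 * returns less than requested and we simply retry the tail.
	 */
	while (sent < nb)
		sent += rte_eth_tx_burst(port, queue, pkts + sent, nb - sent);

	return 0;
}

The WAIT descriptor conversion itself stays completely inside the PMD.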
 
> >
> > > >
> > > > PMD reports the ability to synchronize packet sending on a timestamp
> > > > with a new offload flag:
> > > >
> > > > This is a palliative solution and is going to be replaced with a new
> > > > eth_dev API for reporting/managing the supported dynamic flags and
> > > > their related features. That API would break ABI compatibility and
> > > > can't be introduced at the moment, so it is postponed to 20.11.
> > > >
> > > > For testing purposes it is proposed to update the testpmd "txonly"
> > > > forwarding mode routine. With this update the testpmd application
> > > > generates the packets and sets the dynamic timestamps according to the
> > > > specified time pattern if it sees that "rte_dynfield_timestamp" is
> > > > registered.
> > >
> > > So what I understand here is that "rte_dynfield_timestamp" will
> > > provide information about three parameters:
> > > - timestamp at which TX should start
> > > - intra-packet gap
> > > - intra-burst gap.
> > >
> > > If it's about the "intra-packet gap" then the PMD can take care of it,
> > > but if it is about the intra-burst gap, the application can take care of it.
> >
> > Not sure - the intra-burst gap might be pretty small.
> > Intra-burst is supposed to be handled in the same way - by specifying the
> > timestamps. Waiting is supposed to be implemented on tx_burst() retry.
> > Prepare the packets with timestamps and call tx_burst - if not all packets
> > are sent, it means the queue is waiting for the schedule; retry with the
> > remaining packets.
> > As an option, we can implement the intra-burst wait based on rte_eth_read_clock().
> 
> Yeah, I think the app can make use of rte_eth_read_clock() to implement the
> intra-burst gap.
> But my actual doubt was: what information will the app provide as part of
> "rte_dynfield_timestamp"? One, I understand, will be the timestamp at which
> packets should be sent out. What else? The intra-packet gap?

The intra-packet gap is just a testpmd parameter to provide some
preliminary feature testing. If the intra-packet gap is too small, even the
hardware might not support it. In mlx5 we are going to support the
scheduling with a minimal granularity of 250 nanoseconds, so the minimal
supported gap is 250ns; at 100Gbps line speed that means at least two
1500B packets.
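
Just the back-of-the-envelope arithmetic behind that number (an illustration
only, not part of the RFC):

#include <stdio.h>

int main(void)
{
	const double link_bps = 100e9;          /* 100Gbps line rate */
	const double frame_bits = 1500.0 * 8.0; /* 1500B frame, L2 payload only */
	const double wire_ns = frame_bits / link_bps * 1e9;

	printf("1500B frame wire time: %.0f ns\n", wire_ns);       /* ~120 ns */
	printf("frames per 250 ns slot: %.1f\n", 250.0 / wire_ns); /* ~2 */
	return 0;
}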

With best regards, Slava
