On Wed, Jun 23, 2021 at 11:04 AM Hu, Jiayu <jiayu...@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: dev <dev-boun...@dpdk.org> On Behalf Of Bruce Richardson
> > Sent: Thursday, June 17, 2021 1:31 AM
> > To: fengchengwen <fengcheng...@huawei.com>
> > Cc: tho...@monjalon.net; Yigit, Ferruh <ferruh.yi...@intel.com>;
> > dev@dpdk.org; nipun.gu...@nxp.com; hemant.agra...@nxp.com;
> > maxime.coque...@redhat.com; honnappa.nagaraha...@arm.com;
> > jer...@marvell.com; david.march...@redhat.com; jerinjac...@gmail.com
> > Subject: Re: [dpdk-dev] [RFC PATCH] dmadev: introduce DMA device library
> >
> > On Wed, Jun 16, 2021 at 05:41:45PM +0800, fengchengwen wrote:
> > > On 2021/6/16 0:38, Bruce Richardson wrote:
> > > > On Tue, Jun 15, 2021 at 09:22:07PM +0800, Chengwen Feng wrote:
> > > >> This patch introduces 'dmadevice' which is a generic type of DMA
> > > >> device.
> > > >>
> > > >> The APIs of the dmadev library expose some generic operations which
> > > >> can enable configuration and I/O with the DMA devices.
> > > >>
> > > >> Signed-off-by: Chengwen Feng <fengcheng...@huawei.com>
> > > >> ---
> > > > Thanks for sending this.
> > > >
> > > > Of most interest to me right now are the key data-plane APIs. While
> > > > we are still in the prototyping phase, below is a draft of what we
> > > > are thinking for the key enqueue/perform_ops/completed_ops APIs.
> > > >
> > > > Some key differences I note in below vs your original RFC:
> > > > * Use of void pointers rather than iova addresses. While using iova's
> > > >   makes sense in the general case when using hardware, in that it can
> > > >   work with both physical addresses and virtual addresses, if we change
> > > >   the APIs to use void pointers instead it will still work for DPDK in
> > > >   VA mode, while at the same time allowing use of software fallbacks in
> > > >   error cases, and also a stub driver that uses memcpy in the background.
> > > >   Finally, using iova's makes the APIs a lot more awkward to use with
> > > >   anything but mbufs or similar buffers where we already have a
> > > >   pre-computed physical address.
> > >
> > > The iova is a hint to the application, and is widely used in DPDK.
> > > If we switch to void pointers, which address is passed (iova or just va)?
> > > This may introduce implementation dependencies here.
> > >
> > > Or we always pass the va and the driver performs address translation,
> > > but I think this translation may cost too much CPU.
> > >
> >
> > On the latter point, about driver doing address translation I would agree.
> > However, we probably need more discussion about the use of iova vs just
> > virtual addresses. My thinking on this is that if we specify the API using
> > iovas it will severely hurt usability of the API, since it forces the user
> > to take more inefficient codepaths in a large number of cases. Given a
> > pointer to the middle of an mbuf, one cannot just pass that straight as an
> > iova but must instead do a translation into an offset from the mbuf pointer
> > and then re-add the offset to the mbuf base address.
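
Just to illustrate the translation burden described above, here is a rough
sketch contrasting the two styles given a pointer into an mbuf. The two
enqueue functions are hypothetical placeholders, not the proposed API:

    #include <rte_mbuf.h>

    /* demo only: calls both API styles back-to-back purely to contrast them */
    static inline void
    copy_from_mbuf_sketch(uint16_t dev_id, struct rte_mbuf *m, uint32_t off,
                          void *dst, rte_iova_t dst_iova, uint32_t len)
    {
            /* iova-based API: the pointer must be converted back into an
             * offset and re-added to the mbuf's buffer iova */
            rte_iova_t src_iova = rte_pktmbuf_iova_offset(m, off);
            hypothetical_dmadev_copy_iova(dev_id, src_iova, dst_iova, len);

            /* va-based API (IOVA-as-VA mode): the data pointer itself is
             * passed straight through, just like memcpy */
            void *src = rte_pktmbuf_mtod_offset(m, void *, off);
            hypothetical_dmadev_copy_va(dev_id, src, dst, len);
    }
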
>
> Agree. Vhost is one consumer of DMA devices. To support SW fallback
> in case of DMA copy errors, vhost needs to pass VA for both DPDK mbuf
> and guest buffer to the callback layer (a middle layer between vhost and
> dma device). If DMA devices use iova, it will require the callback layer to
> call rte_mem_virt2iova() to translate va to iova in the data path, even if
> iova is va in some cases. But if DMA devices claim to use va, device
> differences can be hidden inside the driver, which makes the DMA callback
> layer simpler and more efficient.
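
To make that cost concrete, with an iova-based API the middle layer would need
something like the following per buffer, even when the translation is an
identity in VA mode (sketch only; the enqueue call and variable names are
hypothetical placeholders):

    #include <rte_memory.h>

    /* per-buffer translation the callback layer must do for an iova API */
    rte_iova_t src_iova = rte_mem_virt2iova(guest_va);   /* guest buffer VA */
    rte_iova_t dst_iova = rte_mem_virt2iova(mbuf_va);    /* mbuf data VA */
    hypothetical_dmadev_copy(dev_id, src_iova, dst_iova, len);

    /* with a va-based API the two pointers would be passed as-is */
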

+1 to Bruce's suggestion. I think we can make void * work by:

- Adding RTE_PCI_DRV_NEED_IOVA_AS_VA in our driver
and
- I think we need a capability to say that DMA addresses must come from
hugepages or memory mapped by the IOMMU, not from random heap and stack
areas, i.e. a capability to say that the
https://www.kernel.org/doc/html/latest/x86/sva.html feature is not
supported for those devices. A rough sketch of the first point is below.
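
A minimal sketch of that driver flag, assuming a PCI-based dmadev PMD (all
names below are placeholders, not a real driver):

    #include <rte_bus_pci.h>

    /* hypothetical PCI id table and probe/remove callbacks */
    static const struct rte_pci_id hypothetical_dma_pci_ids[] = {
            { .vendor_id = 0, /* sentinel */ },
    };

    static struct rte_pci_driver hypothetical_dma_pmd = {
            .id_table  = hypothetical_dma_pci_ids,
            /* refuse to start unless DPDK runs with IOVA as VA (IOMMU or
             * equivalent present), so the data-path API can take plain
             * pointers into hugepage memory */
            .drv_flags = RTE_PCI_DRV_NEED_IOVA_AS_VA,
            .probe     = hypothetical_dma_probe,
            .remove    = hypothetical_dma_remove,
    };
    RTE_PMD_REGISTER_PCI(dma_hypothetical, hypothetical_dma_pmd);
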


>
> Thanks,
> Jiayu
>
> >
> > My preference therefore is to require the use of an IOMMU when using a
> > dmadev, so that it can be a much closer analog of memcpy. Once an iommu
> > is present, DPDK will run in VA mode, allowing virtual addresses to our
> > hugepage memory to be sent directly to hardware. Also, when using
> > dmadevs on top of an in-kernel driver, that kernel driver may do all iommu
> > management for the app, removing further the restrictions on what memory
> > can be addressed by hardware.
> >
> > > > * Use of id values rather than user-provided handles. Allowing the
> > > >   user/app to manage the amount of data stored per operation is a
> > > >   better solution, I feel, than prescribing a certain amount of
> > > >   in-driver tracking. Some apps may not care about anything other than
> > > >   a job being completed, while other apps may have significant metadata
> > > >   to be tracked. Taking the user-context handles out of the API also
> > > >   makes the driver code simpler.
> > >
> > > The user-provided handle was mainly used to simplify application
> > > implementation; it provides the ability to quickly locate contexts.
> > >
> > > The "use of id values" seems like the dma_cookie of the Linux DMA engine
> > > framework: the user gets a unique dma_cookie after calling
> > > dmaengine_submit(), and can then use it to call
> > > dma_async_is_tx_complete() to get the completion status.
> > >
> >
> > Yes, the idea of the id is the same - to locate contexts. The main
> > difference is that if we have the driver manage contexts, or pointers to
> > contexts, as well as giving more work to the driver, it complicates the
> > APIs for measuring completions. If we use an ID-based approach, where the
> > app maintains its own ring of contexts (if any), it avoids the need to have
> > an "out" parameter array for returning those contexts, which needs to be
> > appropriately sized. Instead we can just report that all ids up to N are
> > completed. [This would be similar to your suggestion that N jobs be
> > reported as done, in that no contexts are provided; it's just that knowing
> > the ID of what is completed is generally more useful than the number (which
> > can obviously be got by subtracting the old value)]
> >
> > We are still working on prototyping all this, but would hope to have a
> > functional example of all this soon.
> >
> > > How about defining the copy prototype as follows:
> > >   dma_cookie_t rte_dmadev_copy(uint16_t dev_id, xxx)
> > > where dma_cookie_t is an int32 and is monotonically increasing; a return
> > > value >= 0 means the enqueue succeeded, otherwise it failed.
> > > On completion the dmadev will return the latest completed dma_cookie, and
> > > the application can use that dma_cookie to quickly locate its contexts.
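
To make the "locate contexts" part concrete, a rough sketch of how an
application might use such a monotonically increasing cookie. The parameters
of rte_dmadev_copy() after dev_id are elided as "xxx" above, so the ones used
here, and the helper names like app_process(), are purely illustrative:

    /* app-managed ring of per-op contexts, indexed by the cookie */
    #define CTX_RING_SZ 1024                /* power of two, app's choice */
    static void *ctx_ring[CTX_RING_SZ];
    static dma_cookie_t last_done = -1;     /* last cookie seen completed */

    static inline int
    enqueue_with_ctx(uint16_t dev_id, void *src, void *dst, uint32_t len,
                     void *user_ctx)
    {
            /* illustrative parameter list; the proposal leaves it as "xxx" */
            dma_cookie_t cookie = rte_dmadev_copy(dev_id, src, dst, len);

            if (cookie < 0)
                    return -1;              /* enqueue failed */
            ctx_ring[cookie & (CTX_RING_SZ - 1)] = user_ctx;
            return 0;
    }

    static inline void
    handle_completions(dma_cookie_t latest_done)
    {
            /* every cookie in (last_done, latest_done] has completed */
            for (dma_cookie_t c = last_done + 1; c <= latest_done; c++)
                    app_process(ctx_ring[c & (CTX_RING_SZ - 1)]);
            last_done = latest_done;
    }
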
> > >
> >
> > If I understand this correctly, I believe this is largely what I was
> > suggesting - just with the typedef for the type? In which case it obviously
> > looks good to me.
> >
> > > > * I've kept a single combined API for completions, which differs from
> > > >   the separate error handling completion API you propose. I need to
> > > >   give the two function approach a bit of thought, but likely both
> > > >   could work. If we (likely) never expect failed ops, then the
> > > >   specifics of error handling should not matter that much.
> > >
> > > The rte_ioat_completed_ops API is too complex, and considering that some
> > > applications may never see a copy fail, we split them into two APIs.
> > > It's indeed not friendly to other scenarios that always require error
> > > handling.
> > >
> > > I prefer using the number of completed operations as the return value
> > > rather than the ID, so that the application can simply judge whether
> > > there are new completed operations. The new prototype:
> > >   uint16_t rte_dmadev_completed(uint16_t dev_id, dma_cookie_t *cookie,
> > >                                 uint32_t *status, uint16_t max_status,
> > >                                 uint16_t *num_fails);
> > >
> > > 1) for the normal case which never expects failed ops, just call:
> > >    ret = rte_dmadev_completed(dev_id, &cookie, NULL, 0, NULL);
> > > 2) for the other case:
> > >    ret = rte_dmadev_completed(dev_id, &cookie, &status, max_status, &fails);
> > >    at this point fails <= ret <= max_status
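
Reading the prototype literally, the two modes would look roughly like this
from the application side (a sketch only, with the status array sized by the
app; nothing here is a settled API):

    #define MAX_STATUS 32                   /* application's choice */

    uint16_t dev_id = 0;                    /* example device */
    uint16_t ret, fails;
    uint32_t status[MAX_STATUS];
    dma_cookie_t cookie;

    /* 1) normal case: no interest in per-op error detail */
    ret = rte_dmadev_completed(dev_id, &cookie, NULL, 0, NULL);
    /* ret ops completed; cookie identifies the last completed op */

    /* 2) error-aware case */
    ret = rte_dmadev_completed(dev_id, &cookie, status, MAX_STATUS, &fails);
    /* ret ops completed, of which 'fails' failed; per-op codes in status[] */
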
> > >
> > Completely agree that we need to plan for the happy-day case where all is
> > passing. Looking at the prototypes you have above, I am ok with returning
> > number of completed ops as the return value with the final completed cookie
> > as an "out" parameter.
> > For handling errors, I'm ok with what you propose above, just with one small
> > adjustment - I would remove the restriction that ret <= max_status.
> >
> > In case of zero-failures, we can report as many ops succeeding as we like,
> > and even in case of failure, we can still report as many successful ops as
> > we like before we start filling in the status field. For example, if 32 ops
> > are completed, and the last one fails, we can just fill in one entry into
> > status, and return 32. Alternatively if the 4th last one fails we fill in 4
> > entries and return 32. The only requirements would be:
> > * fails <= max_status
> > * fails <= ret
> > * cookie holds the id of the last entry in status.
> >
> > A further possible variation is to have separate "completed" and
> > "completed_status" APIs, where "completed_status" is as above, but
> > "completed" skips the final 3 parameters and returns -1 on error. In that
> > case the user can fall back to the completed_status call.
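
That split would let the fast path stay minimal and only pay for status
reporting when something actually went wrong. A sketch of the fallback
pattern, reusing the variables from the sketch above (both function names are
still under discussion, so treat them as placeholders):

    /* fast path: no status array at all */
    int n = rte_dmadev_completed(dev_id, &cookie);

    if (n < 0) {
            /* at least one op failed since the last poll: switch to the
             * status-reporting variant to find out which and how many */
            n = rte_dmadev_completed_status(dev_id, &cookie, status,
                                            MAX_STATUS, &fails);
    }
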
> >
> > > >
> > > > For the rest, the control / setup APIs are likely to be rather
> > > > uncontroversial, I suspect. However, I think that rather than xstats
> > > > APIs, the library should first provide a set of standardized stats
> > > > like ethdev does. If driver-specific stats are needed, we can add
> > > > xstats later to the API.
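
On the stats point, a minimal sketch of what a standardized (non-xstats)
counter set in the spirit of rte_eth_stats might look like; the struct and
function names here are invented for illustration only:

    #include <stdint.h>

    /* hypothetical basic counters, fetched per device (or per queue) */
    struct rte_dmadev_stats {
            uint64_t enqueued;      /* ops accepted for processing */
            uint64_t completed;     /* ops reported as finished */
            uint64_t errors;        /* ops that finished with an error */
    };

    int rte_dmadev_stats_get(uint16_t dev_id, struct rte_dmadev_stats *stats);
    int rte_dmadev_stats_reset(uint16_t dev_id);
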
> > >
> > > Agree, will fix in v2
> > >
> > Thanks. In parallel, we will be working on our prototype implementation too,
> > taking in the feedback here, and hopefully send it as an RFC soon.
> > Then we can look to compare and contrast and arrive at an agreed API. It
> > might also be worthwhile to set up a community call for all interested
> > parties in this API to discuss things with a more rapid turnaround. That was
> > done in
> > the past for other new device class APIs that were developed, e.g. eventdev.
> >
> > Regards,
> > /Bruce
