I received initial comments from Ray and Stephen on this RFC [1]. Thanks for the comments.
Is anyone else planning an architecture-level or API-usage-level review, or a review of any other top-level aspects? I believe the low-level aspects of the code can be taken care of from the v1 series onwards. I am just wondering what would be an appropriate time for sending v1. If someone is planning to review at the top level, I can wait until that review is complete, so please let us know. If there are no other comments, I would like to request tech board approval for the library at the 26 Feb meeting.

[1] http://mails.dpdk.org/archives/dev/2020-January/156765.html

On Sat, Feb 1, 2020 at 11:14 AM Jerin Jacob <jerinjac...@gmail.com> wrote:
>
> On Sat, Feb 1, 2020 at 12:05 AM Ray Kinsella <m...@ashroe.eu> wrote:
> >
> > Hi Jerin,
>
> Hi Ray,
>
> > Much kudos on a huge contribution to the community.
>
> All the authors of this patch set spent at least the last 3/4 months
> bringing up this RFC, including performance data and an l3fwd-graph
> example application. We hope it will be useful for the DPDK community.
>
> > Look forward to spending more time looking at it in the next few days.
>
> That would be very helpful.
>
> > I'll bite and ask the obvious question - why would I use rte_graph over
> > FD.io VPP?
>
> I have not had the opportunity to work day to day on FD.io projects, so
> my understanding of FD.io is very limited.
> I do think it is NOT one vs. the other. VPP is quite a mature project,
> and its developers are pioneers in graph architecture.
>
> VPP is an entirely separate framework in its own right and provides an
> alternate data plane environment.
> The objective of rte_graph is to add a graph subsystem to DPDK as a
> foundational element.
> This will allow the DPDK community to use the powerful graph
> architecture concept in a fundamental way with purely DPDK-based
> applications.
>
> That boils down to:
> 1) Provision to use a pure, native-mbuf-based DPDK application with
> graph architecture, i.e. avoid the cost of packet format conversion
> for good.
> 2) Use of the rte_mempool, rte_flow, rte_tm, rte_cryptodev,
> rte_eventdev and rte_regexdev HW-accelerated APIs in the data plane
> application.
> 3) Based on our experience, NPU HW accelerators are very different from
> one vendor to another.
> Going forward, we believe API abstraction alone may not be enough to
> abstract the differences in HW.
> Vendor-specific nodes can abstract the HW differences and reuse the
> generic nodes as needed.
> This would help both silicon vendors and DPDK end users avoid writing
> capability-based APIs and vendor-specific fast path routines.
> Such vendor plugins can be part of DPDK, to help both vendors and
> end users of DPDK.
> 4) Provision for multiprocess support in the graph architecture.
> 5) Contribution to dpdk.org.
> 6) Use of Linux coding standards.
> 7) Finally, one may consider using rte_graph _if_ a specific workload
> performs better in this model due to the framework and/or the HW
> acceleration attached to it.
>
> > Ray K