On Fri, Jun 13, 2014 at 02:56:47PM +0000, Doherty, Declan wrote:
> > -----Original Message-----
> > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > Sent: Friday, June 6, 2014 3:54 PM
> > To: Doherty, Declan
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2 0/4] Link Bonding Library
> >
> > On Fri, Jun 06, 2014 at 08:23:31AM +0000, Doherty, Declan wrote:
> > > > -----Original Message-----
> > > > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > > Sent: Thursday, May 29, 2014 12:34 PM
> > > > To: Doherty, Declan
> > > > Cc: dev at dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/4] Link Bonding Library
> > > >
> > > > On Thu, May 29, 2014 at 10:33:00AM +0000, Doherty, Declan wrote:
> > > > > -----Original Message-----
> > > > > > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > > > > Sent: Wednesday, May 28, 2014 6:49 PM
> > > > > > To: Doherty, Declan
> > > > > > Cc: dev at dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] Link Bonding Library
> > > > > >
> > > > > > On Wed, May 28, 2014 at 04:32:00PM +0100, declan.doherty at intel.com wrote:
> > > > > > > From: Declan Doherty <declan.doherty at intel.com>
> > > > > > >
> > > > > > > Initial release of the Link Bonding Library (lib/librte_bond) with
> > > > > > > support for the following bonding modes:
> > > > > > > 0 - Round Robin
> > > > > > > 1 - Active Backup
> > > > > > > 2 - Balance l2 / l23 / l34
> > > > > > > 3 - Broadcast
> > > > > > >
> > > > > > Why make this a separate library? That requires exposure of yet
> > > > > > another API to applications. Instead, why not write a PMD that can
> > > > > > enslave other PMDs and treat them all as a single interface? That
> > > > > > way this all works with the existing API.
> > > > > > Neil
> > > > >
> > > > > Hi Neil,
> > > > > the link bonding device is essentially a software PMD, and as such
> > > > > supports all the standard PMD APIs. The only new APIs which the link
> > > > > bonding library introduces are for the control operations of the
> > > > > bonded device which are currently unsupported by the standard PMD
> > > > > API. Operations such as creating the device, adding/removing slaves,
> > > > > and configuring its modes of operation have no analogous APIs in the
> > > > > current PMD API and required new ones to be created.
> > > >
> > > > That's really only true in spirit, in the sense that this library
> > > > transmits and receives frames like a PMD does. In practice it doesn't
> > > > work and isn't architected the same way. You don't register the driver
> > > > using the same method as the other PMDs, which means that using --vdev
> > > > on the command line won't work for this type of device. It also
> > > > implies that applications have to be made specifically aware of the
> > > > fact that they are using a bonded interface (i.e. they need to call
> > > > the bonding setup routines to create the bond). I would recommend:
> > > >
> > > > 1) Register the PMD using the PMD_DRIVER_REGISTER macro, like other
> > > >    PMDs.
> > > > 2) Use the kvargs library to support configuration via the --vdev
> > > >    command line option, so bonds can be created administratively,
> > > >    rather than just programmatically.
> > > > 3) Separate the command API from the PMD functionality into separate
> > > >    libraries (use control mbufs to communicate configuration changes
> > > >    to the PMD).
> > > >    This will allow users to dynamically load the PMD functionality
> > > >    (without compile or run time linking requirements), and then
> > > >    optionally use the programmatic interface (or not, if they want to
> > > >    save memory).
> > > >
> > > > Regards
> > > > Neil
> > > >
> > > > -----Original Message-----
> > > > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > > Sent: Thursday, June 5, 2014 12:04 PM
> > > > To: Doherty, Declan
> > > > Cc: dev at dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2 0/4] Link Bonding Library
> > > >
> > > > On Wed, Jun 04, 2014 at 04:18:01PM +0100, declan.doherty at intel.com wrote:
> > > > > From: Declan Doherty <declan.doherty at intel.com>
> > > > >
> > > > > v2 patch additions:
> > > > > - fix for tx burst broadcast, incrementing the reference count on
> > > > >   each mbuf by the number of slaves - 1
> > > > > - add/remove slave behavior change to fix primary slave port
> > > > >   assignment
> > > > > - checkpatch code fixes
> > > > >
> > > > This doesn't address any of the comments I made previously.
> > > > Neil
> > >
> > > Hi Neil,
> > > sorry for not replying regarding this earlier. In relation to your
> > > recommendations 1 and 2, I'm currently working on the implementation of
> > > both and should have an additional patch within the next couple of days
> > > to add this functionality.
> > >
> > > Regarding your third recommendation, I have no problem splitting the
> > > library into separate API and PMD libraries, although this does seem to
> > > break convention with the other non-hardware-based PMDs, such as
> > > pmd_pcap,
> >
> > I'm not at all sure what you mean here. pmd_pcap registers a driver with
> > the registration mechanism, allowing for the use of the --vdev option on
> > the command line, same as all the other virtual drivers. The physical
> > drivers do the same thing using the --whitelist and --blacklist options.
> > How is the bonding driver following convention with pmd_pcap?
> > > so would prefer not to go down that route. Also, I don't see the
> > > ability to dynamically load the PMD as justification for the use of a
> > > control mbuf interface; this would require either that additional
> > > queues be created to handle control messages, or that the data TX
> > > queues have the ability to identify and process control mbufs, and
> > > either of these options will have a performance hit,
> >
> > Well, you're correct in that the driver would have to identify control
> > messages, but I don't see how that's a performance hit at all. If you
> > create a control channel queue, then all you have to introduce to the
> > datapath is a single branch to see which queue is transmitting, and
> > you're done. If control mbufs are just that hurtful to performance, it
> > seems that's an area that really needs improvement then.
> >
> > Another alternative is to add a control packet method to the rte_eth_dev
> > structure, allowing you to pass control mbufs directly to a PMD out of
> > line with the datapath. That could avoid any additional branches in the
> > datapath, and still allow you to separate the user API from the driver
> > implementation.
> >
> > > either for the core handling the separate control messages or directly
> > > in tx burst performance. I am not currently aware of any functional
> > > requirements or use cases for bonded devices to be used over multiple
> > > processes, which is the only reason I can see for using an interface
> > > based on control mbufs,
> >
> > The reason is administrative. It's got nothing to do with how many
> > processes want to use an interface; it has to do with an interface
> > knowing if it's using the bonding driver or not (it shouldn't ever have
> > to, but with your patch set, it's required to use the bonding API to
> > create the interface and add slaves).
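To make the "single branch" point concrete, a bonded PMD could reserve one queue id as a control channel and peel those mbufs off before the data path, costing one comparison per burst. All of the types and names below are hypothetical stubs, not the real rte_eth_dev/rte_mbuf definitions.

```c
#include <stdint.h>

/* Hypothetical reserved queue id for the control channel. */
#define CTRL_QUEUE_ID 0xFFFF

/* Stub standing in for struct rte_mbuf. */
struct fake_mbuf { int pad; };

static unsigned ctrl_handled; /* control mbufs consumed */
static unsigned data_sent;    /* data mbufs transmitted */

/* Out-of-line handler for configuration messages. */
static void
handle_ctrl(struct fake_mbuf *bufs, uint16_t nb)
{
    (void)bufs;
    ctrl_handled += nb;
}

/* The one extra branch on the hot path: compare the queue id. */
static uint16_t
bond_tx_burst(uint16_t queue_id, struct fake_mbuf *bufs, uint16_t nb)
{
    if (queue_id == CTRL_QUEUE_ID) {
        /* Control channel: divert to configuration handling. */
        handle_ctrl(bufs, nb);
        return nb;
    }
    /* Normal datapath: would fan out to the slave devices here. */
    data_sent += nb;
    return nb;
}
```

Whether even this branch is acceptable is exactly the trade-off being debated; Neil's alternative of a dedicated control method on rte_eth_dev would remove it from the burst path entirely.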
> > > and by providing --vdev configuration options the user can still write
> > > their applications without any knowledge of the additional link bonding
> > > programmatic interface. If in the future a use case does emerge which
> > > requires multi-process control of bonded devices, I think it would make
> > > sense then to add an additional library to provide this functionality.
> >
> > How exactly does that happen? At what point does the bonding library in
> > its current form register a driver with rte_ethdev without the
> > application first calling some function in the bonding library to do so?
> > Because until that happens, --vdev is non-functional for this PMD.
> >
> > FWIW, the concern for me here is one of packaging. I don't want users to
> > have to pull in a library that they don't absolutely need. The way you
> > have this put together right now, they need the entire bonding library
> > even if they just want a statically configured bond interface.
> >
> > Neil
> >
> > > Regards
> > > Declan
> --------------------------------------------------------------
> Hi Neil,
> I waited to reply to this until I had v3 of the patch ready to submit. I
> think you mistook the intent of my last response, but anyway I think v3 of
> the patch addresses the majority of your concerns regarding the bonded
> device, which can now be initialized using the --vdev option and supports
> both virtual and physical devices being specified as slaves, and no
> knowledge of the bonding API is required by the user application if the
> bonding ports are configured at runtime.

Sounds great, I look forward to seeing it.
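For reference, an EAL invocation of the kind described for v3 might look something like the following. The device name and argument keys here are assumptions based on this thread, not confirmed syntax from the v3 patch.

```shell
# Hypothetical example: create a bonded device at EAL init time in
# mode 1 (active backup), enslaving two physical ports by PCI address,
# with no calls into the bonding API from the application itself.
./app -c 0xf -n 4 \
    --vdev 'eth_bond0,mode=1,slave=0000:02:00.0,slave=0000:02:00.1' \
    -- [application options]
```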
Thanks!
Neil

> Regards
>