This discussion requires more opinions. Please everybody, READ and COMMENT. Thanks
If it is not visible enough, a new thread could be started later.

2016-05-04 07:43, Neil Horman:
> On Wed, May 04, 2016 at 10:24:18AM +0200, David Marchand wrote:
> > On Tue, May 3, 2016 at 1:57 PM, Neil Horman <nhorman at tuxdriver.com> wrote:
> > >> This approach has a few pros and cons:
> > >>
> > >> pros:
> > >> 1) It's simple, and doesn't require extra infrastructure to implement.
> > >> E.g. we don't need a new tool to extract driver information and emit
> > >> the C code to build the binary data for the special section, nor do we
> > >> need a custom linker script to link said special section in place.
> > >>
> > >> 2) It's stable. Because the marker symbols are explicitly exported,
> > >> this approach is resilient against stripping.

It is a good point. We need something resilient against stripping.

> > >> cons:
> > >> 1) It creates an artifact in that PMD_REGISTER_DRIVER has to be used
> > >> in one compilation unit per DSO. As an example, em and igb effectively
> > >> merge two drivers into one DSO, and the uses of PMD_REGISTER_DRIVER
> > >> occur in two separate C files for the same single linked DSO. Because
> > >> of the use of the __COUNTER__ macro, we get multiple definitions of
> > >> the same marker symbols.
> > >>
> > >> I would make the argument that the downside of the above artifact
> > >> isn't that big a deal. Traditionally in other projects, a unit like a
> > >> module (or DSO in our case) only ever codifies a single driver (e.g.
> > >> module_init() in the Linux kernel is only ever used once per built
> > >> module). If we have code like igb/em that shares some core code, we
> > >> should build the shared code to object files and link them twice, once
> > >> into an em.so PMD and again into an igb.so PMD.

It is also a problem for compilation units having PF and VF drivers.

> > >> But regardless, I thought I would propose this to see what you all
> > >> thought of it.
Thanks for proposing.

> > - This implementation does not support binaries, so it is not suitable
> > for people who don't want dso, this is partially why I used bfd rather
> > than just dlopen.
>
> If you're statically linking an application, you should know what
> hardware you support already. It's going to be very hard, if not
> impossible, to develop a robust solution that works with static binaries
> (the prior solutions don't work consistently with static binaries
> either). I really think the static solution needs to just be built into
> the application (i.e. the application needs to add a command line option
> to dump out the PCI ids that are registered).

No, we need a tool to know what the supported devices are before running
the application (e.g. for binding). This tool should not behave
differently depending on how DPDK was compiled (static or shared).

[...]
> > - How does it behave if we strip the dsos ?
>
> I described this above, it's invariant to stripping, because the symbols
> for each pmd are explicitly exported, so strip doesn't touch the symbols
> that pmdinfo keys off of.

[...]
> > - The tool output format is not script friendly from my pov.
>
> Don't think it really needs to be script friendly, it was meant to
> provide human readable output, but script friendly output can be added
> easily enough if you want.

Yes, it needs to be script friendly.

It appears that we must agree on a set of requirements first.
Please let's forget the implementation until we have collected enough
feedback on the needs.

I suggest these items to start the list:
- query all drivers in a static binary or shared library
- stripping resiliency
- human friendly
- script friendly
- show driver name
- list supported device ids / names
- list driver options
- show driver version if available
- show DPDK version
- show kernel dependencies (vfio / uio_pci_generic / etc.)
- room for extra information?

Please ack or comment on the items of this list, thanks.