On Wed, Mar 24, 2021 at 1:55 PM Jerin Jacob <jerinjac...@gmail.com> wrote:
> > > IMO, we don't need to make it configurable and have each platform set
> > > its value. That scheme won't work, as a generic distribution build will
> > > fail to run.
> > > Since the PCIe specification defines this value and there is no
> > > performance impact in increasing it, IMO we can change the default to
> > > 2048.
> >
> > It probably breaks the rte_intr_* ABI.
>
> Yes. Even though all APIs take it as a pointer (i.e. "struct
> rte_intr_handle *"), the definition is kept in the header file.
>
> > struct rte_intr_handle {
> >         ...
> >         int efds[RTE_MAX_RXTX_INTR_VEC_ID];  /**< intr vectors/efds mapping */
> >         struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> >                                              /**< intr vector epoll event */
> >         ...
> >
> > I see you need this for octeontx2, so wondering if you could handle
> > this differently in the octeontx2 drivers?
>
> This is an issue with any PCIe device that has more than 512 MSI-X
> interrupts.
>
> The PCIe spec defines the max as 2K.
>
> CN10K drivers have 1K interrupt lines per PCIe device.
>
> I think the following are the options:
> 1) To avoid ABI breakage in the default configuration, use the existing
>    patch.
> 2) In 21.11, break the ABI and either
>    a) change RTE_MAX_RXTX_INTR_VEC_ID to 1024,
>    or
>    b) make the allocation fully dynamic, based on the PCI device MSI-X size
>       at probe time.
>       That brings some kind of dependency of rte_intr on the PCI device.
>       Need to understand how it can cleanly be abstracted out and whether
>       it is worth the trouble for the amount of memory.
>       Looks like the cost of one entry is 40B, so an additional 512 entries
>       is 40B * 512 = ~20KB of virtual memory.
Since you mentioned performance is not impacted, I guess this is control
path only, and there is no need to expose this.

So:
c) Rework the API so that we don't expose such details.


--
David Marchand
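
A rough sketch of what (c) could look like: keep the handle definition
private to EAL and size the per-vector tables at allocation time, e.g. from
the MSI-X count read at PCI probe. All names below are illustrative only,
not the existing rte_intr API.

#include <stdint.h>
#include <stdlib.h>

/* Stand-in for struct rte_epoll_event; the real one comes from EAL. */
struct epoll_event_sketch {
        uint32_t status;
        int fd;
};

/*
 * The layout stays in a private header, so the per-device vector count is
 * no longer baked into the ABI via RTE_MAX_RXTX_INTR_VEC_ID.
 */
struct intr_handle_sketch {
        uint32_t nb_intr;                  /* from the device MSI-X count */
        int *efds;                         /* nb_intr entries */
        struct epoll_event_sketch *elist;  /* nb_intr entries */
};

/* Allocate a handle sized for the device, e.g. at PCI probe time. */
struct intr_handle_sketch *
intr_handle_alloc(uint32_t nb_intr)
{
        struct intr_handle_sketch *h = calloc(1, sizeof(*h));

        if (h == NULL)
                return NULL;
        h->nb_intr = nb_intr;
        h->efds = calloc(nb_intr, sizeof(*h->efds));
        h->elist = calloc(nb_intr, sizeof(*h->elist));
        if (h->efds == NULL || h->elist == NULL) {
                free(h->efds);
                free(h->elist);
                free(h);
                return NULL;
        }
        return h;
}

/* Callers go through accessors and never see the layout. */
int
intr_handle_efd_set(struct intr_handle_sketch *h, uint32_t vec, int fd)
{
        if (h == NULL || vec >= h->nb_intr)
                return -1;
        h->efds[vec] = fd;
        return 0;
}

A device with 1K MSI-X vectors then only pays for the vectors it actually
exposes, and applications never depend on the structure size.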