> > +/**
> > + * Mempool event type.
> > + * @internal
>
> Shouldn't @internal go first?
>
> > + */
> > +enum rte_mempool_event {
It really should, but I had to keep it this way
because otherwise Doxygen fails on multiple systems:
[3110/3279] Generating doxygen with a custom command
FAILED: doc/ [...]
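For context, here is the ordering the patch keeps (brief line first, @internal after it), shown with the complete enum this series adds; a sketch only, and the enumerator comments paraphrase the patch rather than quote it:

/**
 * Mempool event type.
 * @internal
 */
enum rte_mempool_event {
	/** Occurs after a mempool is fully populated. */
	RTE_MEMPOOL_EVENT_READY = 0,
	/** Occurs before the destruction of a mempool begins. */
	RTE_MEMPOOL_EVENT_DESTROY = 1,
};

Swapping the brief line and @internal is what reportedly trips Doxygen on some systems.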
On Fri, Oct 15, 2021 at 01:07:42PM, Dmitry Kozlyuk wrote:
[...]
> > > +static void
> > > +mempool_event_callback_invoke(enum rte_mempool_event event,
> > > + struct rte_mempool *mp)
> > > +{
> > > + struct mempool_callback_list *list;
> > > + struct rte_tailq_entry *te;
[...]
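For readers of the archive, a self-contained sketch of what the elided body plausibly does; the bookkeeping names (callback_tailq, mempool_callback_data, mempool_callback_list) are assumptions mirroring the quoted declarations, not necessarily the merged code:

#include <rte_eal_memconfig.h>
#include <rte_mempool.h>
#include <rte_tailq.h>

/* Assumed bookkeeping; the real definitions live in rte_mempool.c. */
struct mempool_callback_data {
	rte_mempool_event_callback *func;
	void *user_data;
};
TAILQ_HEAD(mempool_callback_list, rte_tailq_entry);

static struct rte_tailq_elem callback_tailq = {
	.name = "RTE_MEMPOOL_CALLBACK",
};
EAL_REGISTER_TAILQ(callback_tailq)

static void
mempool_event_callback_invoke(enum rte_mempool_event event,
			      struct rte_mempool *mp)
{
	struct mempool_callback_list *list;
	struct rte_tailq_entry *te;
	void *tmp_te;

	rte_mcfg_tailq_read_lock();
	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
	/* Safe iteration: a callback may unregister itself. */
	RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
		struct mempool_callback_data *cb = te->data;

		/* Drop the lock around the user callback so it can call
		 * tailq-locking APIs without self-deadlock. */
		rte_mcfg_tailq_read_unlock();
		cb->func(event, mp, cb->user_data);
		rte_mcfg_tailq_read_lock();
	}
	rte_mcfg_tailq_read_unlock();
}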
> > @@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
> > 	mp->nb_mem_chunks++;
> >
> > +	/* Report the mempool as ready only when fully populated. */
> > +	if (mp->populated_size >= mp->size)
> > +		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
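To make the effect of this hunk concrete, a minimal usage sketch; the app_* names are invented, only the rte_* identifiers come from this series:

#include <stdio.h>

#include <rte_mempool.h>

static void
app_mempool_event_cb(enum rte_mempool_event event,
		     struct rte_mempool *mp, void *user_data)
{
	(void)user_data;
	if (event == RTE_MEMPOOL_EVENT_READY)
		printf("mempool %s fully populated (%u objects)\n",
		       mp->name, mp->size);
	else if (event == RTE_MEMPOOL_EVENT_DESTROY)
		printf("mempool %s about to be destroyed\n", mp->name);
}

static int
app_track_mempools(void)
{
	/* READY fires once per mempool, when the last chunk brings
	 * populated_size up to size, per the hunk above. */
	return rte_mempool_event_callback_register(app_mempool_event_cb,
						   NULL);
}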
Hi Dmitry,
On Wed, Oct 13, 2021 at 02:01:28PM +0300, Dmitry Kozlyuk wrote:
> Data path performance can benefit if the PMD knows which memory it will
> need to handle in advance, before the first mbuf is sent to the PMD.
> It is impractical, however, to consider all allocated memory for this
> purpose. [...]
> -----Original Message-----
> From: Andrew Rybchenko
> [...]
> With the review notes below processed
>
> Reviewed-by: Andrew Rybchenko
>
Thanks for the comments, I'll fix them all; just a small note below, FYI.
> > + rte_mcfg_mempool_read_lock();
> > + rte_mcfg_tailq_write_lock();
> > +
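A sketch of where those two acquisitions plausibly sit in the registration path; only the locking skeleton is shown, the list insertion is elided, and the rationale in the comment is my reading rather than a quote from the patch:

#include <errno.h>

#include <rte_eal_memconfig.h>
#include <rte_errno.h>
#include <rte_mempool.h>

int
rte_mempool_event_callback_register(rte_mempool_event_callback *func,
				    void *user_data)
{
	if (func == NULL) {
		rte_errno = EINVAL;
		return -rte_errno;
	}

	/* The mempool list lock is taken before the tailq lock; every
	 * path taking both must keep this order to avoid deadlock, and
	 * holding the mempool lock keeps registration from racing with
	 * a concurrent populate/free that would invoke callbacks. */
	rte_mcfg_mempool_read_lock();
	rte_mcfg_tailq_write_lock();

	/* ... allocate an entry for (func, user_data) and insert it
	 * into the callback tailq ... */
	(void)user_data;

	rte_mcfg_tailq_write_unlock();
	rte_mcfg_mempool_read_unlock();
	return 0;
}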
On 10/13/21 2:01 PM, Dmitry Kozlyuk wrote:
> Data path performance can benefit if the PMD knows which memory it will
> need to handle in advance, before the first mbuf is sent to the PMD.
> It is impractical, however, to consider all allocated memory for this
> purpose. Most often mbuf memory comes from mempools that can come and
> go. PMD can enumerate [...]
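Putting the cover letter's point into code: a hedged sketch of the start-up pattern it implies, enumerating existing mempools with rte_mempool_walk() and relying on the new callbacks afterwards; all dev_* names are invented for illustration:

#include <rte_mempool.h>

static void
dev_map_mempool(struct rte_mempool *mp, void *dev_priv)
{
	/* ... set up device DMA mappings for mp's memory ... */
	(void)mp;
	(void)dev_priv;
}

static void
dev_mempool_event(enum rte_mempool_event event,
		  struct rte_mempool *mp, void *dev_priv)
{
	if (event == RTE_MEMPOOL_EVENT_READY)
		dev_map_mempool(mp, dev_priv);
	/* RTE_MEMPOOL_EVENT_DESTROY would undo the mapping here. */
}

static int
dev_start(void *dev_priv)
{
	int ret;

	/* Register first so no mempool created from now on is missed... */
	ret = rte_mempool_event_callback_register(dev_mempool_event,
						  dev_priv);
	if (ret != 0)
		return ret;
	/* ...then catch up on mempools that already exist. */
	rte_mempool_walk(dev_map_mempool, dev_priv);
	return 0;
}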