On 09.07.2019 10:14, David Marchand wrote:
> Hello Ilya,
>
> On Mon, Jul 8, 2019 at 4:55 PM Ilya Maximets <i.maxim...@samsung.com> wrote:
> > On 04.07.2019 14:59, David Marchand wrote:
> > > pmd reloads are currently serialised in each step calling
> > > reload_affected_pmds.
> > > Any pmd processing packets, waiting on a mutex, etc. will make other
> > > pmd threads wait for a delay that can be non-deterministic when
> > > syscalls add up.
> > >
> > > Switch to a little busy loop on the control thread using the existing
> > > per-pmd reload boolean.
> > >
> > > The memory order on this atomic is rel-acq to have an explicit
> > > synchronisation between the pmd threads and the control thread.
> > >
> > > Signed-off-by: David Marchand <david.march...@redhat.com>
> > > ---
> > > Changelog since v2:
> > > - remove unneeded synchronisation on pmd thread join (Ilya)
> > > - "inlined" synchronisation loop in reload_affected_pmds
> > >
> > > Changelog since v1:
> > > - removed the introduced reloading_pmds atomic and reuse the existing
> > >   pmd->reload boolean as a single synchronisation point (Ilya)
> > >
> > > Changelog since RFC v1:
> > > - added memory ordering on 'reloading_pmds' atomic to serve as a
> > >   synchronisation point between pmd threads and control thread
> > >
> > > ---
> > >  lib/dpif-netdev.c | 26 ++++++++++++--------------
> > >  1 file changed, 12 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > > index 8eeec63..415107b 100644
> > > --- a/lib/dpif-netdev.c
> > > +++ b/lib/dpif-netdev.c
> > > @@ -649,9 +649,6 @@ struct dp_netdev_pmd_thread {
> > >      struct ovs_refcount ref_cnt;    /* Every reference must be refcount'ed. */
> > >      struct cmap_node node;          /* In 'dp->poll_threads'. */
> > >
> > > -    pthread_cond_t cond;            /* For synchronizing pmd thread reload. */
> > > -    struct ovs_mutex cond_mutex;    /* Mutex for condition variable. */
> > > -
> > >      /* Per thread exact-match cache.  Note, the instance for cpu core
> > >       * NON_PMD_CORE_ID can be accessed by multiple threads, and thusly
> > >       * need to be protected by 'non_pmd_mutex'.  Every other instance
> > > @@ -1758,11 +1755,8 @@ dp_netdev_reload_pmd__(struct dp_netdev_pmd_thread *pmd)
> > >          return;
> > >      }
> > >
> > > -    ovs_mutex_lock(&pmd->cond_mutex);
> > >      seq_change(pmd->reload_seq);
> > >      atomic_store_explicit(&pmd->reload, true, memory_order_release);
> > > -    ovs_mutex_cond_wait(&pmd->cond, &pmd->cond_mutex);
> > > -    ovs_mutex_unlock(&pmd->cond_mutex);
> > >  }
> > >
> > >  static uint32_t
> > > @@ -4655,6 +4649,17 @@ reload_affected_pmds(struct dp_netdev *dp)
> > >          if (pmd->need_reload) {
> > >              flow_mark_flush(pmd);
> > >              dp_netdev_reload_pmd__(pmd);
> > > +        }
> > > +    }
> > > +
> > > +    CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
> > > +        if (pmd->need_reload) {
> > > +            bool reload;
> > > +
> > > +            do {
> > > +                atomic_read_explicit(&pmd->reload, &reload,
> > > +                                     memory_order_acquire);
> > > +            } while (reload);
> >
> > If we ever set 'need_reload' for the non-PMD thread, there will be a
> > kind of deadlock. We never waited for the non-PMD thread previously,
> > because dp_netdev_reload_pmd__() has a special case for it.
>
> We can skip the non-PMD thread in this loop for consistency with
> dp_netdev_reload_pmd__() if this is what you meant.
>
> But for a non-PMD thread, pmd->reload is set to false at init and is never
> touched after this.
> So in this case, this loop is supposed to break right away.
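Before replying, let me restate the handshake this patch relies on, to make
sure I read it right: the control thread publishes its changes, sets
'pmd->reload' with a release store, and then spins with acquire loads until
the pmd thread clears the flag again with its own release store.  A minimal
standalone sketch of that pattern, using plain C11 atomics and made-up names
rather than the real OVS wrappers and structs:

    /* Standalone illustration only; build with e.g. "gcc -pthread". */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool reload;      /* Stands in for pmd->reload.           */
    static int config;              /* Stands in for the pmd polling state. */

    /* Control thread: publish the new state, request a reload and busy-wait
     * until the pmd thread acknowledges it. */
    static void
    request_reload(void)
    {
        config = 42;
        atomic_store_explicit(&reload, true, memory_order_release);
        while (atomic_load_explicit(&reload, memory_order_acquire)) {
            /* Busy loop. */
        }
    }

    /* Pmd thread: notice the request, apply the new state, then acknowledge
     * with a release store so that the control thread's acquire load above
     * also observes everything done here. */
    static void *
    pmd_main(void *arg)
    {
        (void) arg;
        while (!atomic_load_explicit(&reload, memory_order_acquire)) {
            /* The real pmd would be polling its rx queues here. */
        }
        printf("pmd reloading, config=%d\n", config);
        atomic_store_explicit(&reload, false, memory_order_release);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t pmd;

        pthread_create(&pmd, NULL, pmd_main, NULL);
        request_reload();
        pthread_join(pmd, NULL);
        return 0;
    }

The release store on one side paired with the acquire load on the other is
what guarantees that the pmd thread sees the new configuration before it
starts reloading, and that the control thread sees the pmd's work as done
once the flag reads false.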
Hmm. Yes, it seems that you're right. However, I think it's better not to
wait for the non-PMD thread anyway, for consistency and better readability.

Best regards, Ilya Maximets.
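P.S. Roughly what I have in mind for the wait loop, mirroring the early
return for the non-PMD thread in dp_netdev_reload_pmd__() (untested, just a
sketch on top of your v3):

    CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
        bool reload;

        /* Skip the non-PMD thread: it never clears 'reload' itself. */
        if (pmd->core_id == NON_PMD_CORE_ID || !pmd->need_reload) {
            continue;
        }

        do {
            atomic_read_explicit(&pmd->reload, &reload,
                                 memory_order_acquire);
        } while (reload);
    }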