Hi Maxime,

Are there any comments on this patch series?
Please let me know, and thank you for helping to review it.

Regards,
Li Zhang

> -----Original Message-----
> From: Maxime Coquelin <maxime.coque...@redhat.com>
> Sent: Thursday, June 16, 2022 5:02 PM
> To: Li Zhang <l...@nvidia.com>; Ori Kam <or...@nvidia.com>; Slava
> Ovsiienko <viachesl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>;
> Shahaf Shuler <shah...@nvidia.com>
> Cc: dev@dpdk.org; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <tho...@monjalon.net>; Raslan Darawsheh <rasl...@nvidia.com>; Roni
> Bar Yanai <ron...@nvidia.com>
> Subject: Re: [PATCH v2 00/15] mlx5/vdpa: optimize live migration time
> 
> On 6/16/22 09:24, Maxime Coquelin wrote:
> > Hi Li,
> >
> > On 6/16/22 04:29, Li Zhang wrote:
> >> Allow the driver to use internal threads to speed up configuration.
> >> All the threads are created on the same core as the event completion
> >> queue scheduling thread.
> >>
> >> Add a max_conf_threads parameter to configure the maximum number of
> >> internal threads in addition to the caller thread (8 is suggested).
> >> These internal threads pipeline vDPA tasks and are shared among all
> >> vDPA devices in the system.
> >> The default is 0, i.e. no internal threads are used for configuration.
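> >>
> >> For illustration, a minimal sketch of probing the device with this
> >> devarg through the standard EAL hotplug API (the PCI address is a
> >> placeholder, error handling trimmed):
> >>
> >>   #include <stdio.h>
> >>   #include <rte_eal.h>
> >>   #include <rte_dev.h>
> >>
> >>   int
> >>   main(int argc, char **argv)
> >>   {
> >>           int ret = rte_eal_init(argc, argv);
> >>
> >>           if (ret < 0)
> >>                   return -1;
> >>           /*
> >>            * class=vdpa selects the vDPA class of the mlx5 device,
> >>            * and max_conf_threads=8 requests 8 internal configuration
> >>            * threads in addition to the caller thread (the default 0
> >>            * keeps all configuration on the caller thread).
> >>            */
> >>           ret = rte_eal_hotplug_add("pci", "0000:01:00.0",
> >>                           "class=vdpa,max_conf_threads=8");
> >>           if (ret < 0)
> >>                   printf("mlx5 vDPA probe failed: %d\n", ret);
> >>           rte_eal_cleanup();
> >>           return ret;
> >>   }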
> >>
> >> Depends-on: series=21868 ("vdpa/mlx5: improve device shutdown time")
> >> http://patchwork.dpdk.org/project/dpdk/list/?series=21868
> >>
> >> RFC ("Add vDPA multi-threads optimization")
> >> https://patchwork.dpdk.org/project/dpdk/cover/20220408075606.33056-1-l...@nvidia.com/
> >>
> >
> > I just noticed there was an RFC that was sent on time; because I was
> > not cc'ed, I thought v1, which arrived on June 6th, was targeting
> > v22.11.
> 
> OK, so after checking with Thomas, the get_maintainer.pl script does not
> return me for vDPA driver patches, which explains why I'm not cc'ed
> automatically.
> 
> Also, the auto-delegation script in patchwork seems to have assigned it to
> Andrew, which is why I did not see it.
> 
> I'll try to review it tomorrow.
> 
> > Given how late we are in the schedule for v22.07, this series will be
> > postponed to v22.11.
> >
> > Regards,
> > Maxime
> >
> >> V2:
> >> * Drop the EAL device removal patch from the series.
> >> * Add a release note in release_22_07.rst.
> >>
> >> Li Zhang (12):
> >>    vdpa/mlx5: fix usage of capability for max number of virtqs
> >>    common/mlx5: extend virtq modifiable fields
> >>    vdpa/mlx5: pre-create virtq in the probe
> >>    vdpa/mlx5: optimize datapath-control synchronization
> >>    vdpa/mlx5: add multi-thread management for configuration
> >>    vdpa/mlx5: add task ring for MT management
> >>    vdpa/mlx5: add MT task for VM memory registration
> >>    vdpa/mlx5: add virtq creation task for MT management
> >>    vdpa/mlx5: add virtq LM log task
> >>    vdpa/mlx5: add device close task
> >>    vdpa/mlx5: add virtq sub-resources creation
> >>    vdpa/mlx5: prepare virtqueue resource creation
> >>
> >> Yajun Wu (3):
> >>    vdpa/mlx5: support pre create virtq resource
> >>    common/mlx5: add DevX API to move QP to reset state
> >>    vdpa/mlx5: support event qp reuse
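> >>
> >> To picture the task-ring patch above ("vdpa/mlx5: add task ring for
> >> MT management"): the internal threads can be fed through a
> >> single-producer/multi-consumer rte_ring, with the caller enqueueing
> >> tasks and internal threads draining them. The sketch below is purely
> >> illustrative; the struct layout and names are hypothetical, not the
> >> actual driver code:
> >>
> >>   #include <stdint.h>
> >>   #include <rte_ring.h>
> >>   #include <rte_lcore.h>
> >>
> >>   /* Hypothetical task descriptor. */
> >>   struct vdpa_task {
> >>           void *priv;    /* per-device private data */
> >>           uint32_t type; /* e.g. virtq setup, memory registration */
> >>           uint32_t idx;  /* index of the virtq the task applies to */
> >>   };
> >>
> >>   static struct rte_ring *task_ring;
> >>
> >>   static int
> >>   task_ring_init(void)
> >>   {
> >>           /* Single producer (the caller thread) enqueues; the
> >>            * multi-consumer dequeue default lets any internal
> >>            * thread pick up a task.
> >>            */
> >>           task_ring = rte_ring_create("vdpa_task_ring", 1024,
> >>                           rte_socket_id(), RING_F_SP_ENQ);
> >>           return task_ring == NULL ? -1 : 0;
> >>   }
> >>
> >>   static int
> >>   task_submit(struct vdpa_task *task)
> >>   {
> >>           return rte_ring_enqueue(task_ring, task);
> >>   }
> >>
> >>   static int
> >>   task_poll(struct vdpa_task **task)
> >>   {
> >>           /* Called from an internal thread; multi-consumer safe. */
> >>           return rte_ring_dequeue(task_ring, (void **)task);
> >>   }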
> >>
> >>   doc/guides/rel_notes/release_22_07.rst |   5 +
> >>   doc/guides/vdpadevs/mlx5.rst           |  25 +
> >>   drivers/common/mlx5/mlx5_devx_cmds.c   |  77 ++-
> >>   drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +-
> >>   drivers/common/mlx5/mlx5_prm.h         |  30 +-
> >>   drivers/vdpa/mlx5/meson.build          |   1 +
> >>   drivers/vdpa/mlx5/mlx5_vdpa.c          | 270 ++++++++--
> >>   drivers/vdpa/mlx5/mlx5_vdpa.h          | 152 +++++-
> >>   drivers/vdpa/mlx5/mlx5_vdpa_cthread.c  | 360 ++++++++++++++
> >>   drivers/vdpa/mlx5/mlx5_vdpa_event.c    | 160 ++++--
> >>   drivers/vdpa/mlx5/mlx5_vdpa_lm.c       | 128 ++++-
> >>   drivers/vdpa/mlx5/mlx5_vdpa_mem.c      | 270 ++++++----
> >>   drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |  22 +-
> >>   drivers/vdpa/mlx5/mlx5_vdpa_virtq.c    | 654 ++++++++++++++++++-------
> >>   14 files changed, 1776 insertions(+), 384 deletions(-)
> >>   create mode 100644 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
> >>
