Hi Xiaolong,

Thanks for the update. Two small comments below.

> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Monday, September 24, 2018 4:43 PM
> To: dev@dpdk.org; Maxime Coquelin <maxime.coque...@redhat.com>; Bie,
> Tiwei <tiwei....@intel.com>; Wang, Zhihong <zhihong.w...@intel.com>
> Cc: Wang, Xiao W <xiao.w.w...@intel.com>; Rami Rosen
> <roszenr...@gmail.com>; Wang, Haiyue <haiyue.w...@intel.com>; Ye,
> Xiaolong <xiaolong...@intel.com>
> Subject: [PATCH v4 2/2] examples/vdpa: introduce a new sample for vDPA
> 
> The vdpa sample application creates vhost-user sockets by using the
> vDPA backend. vDPA stands for vhost Data Path Acceleration which utilizes
> virtio ring compatible devices to serve virtio driver directly to enable
> datapath acceleration. As vDPA driver can help to set up vhost datapath,
> this application doesn't need to launch dedicated worker threads for vhost
> enqueue/dequeue operations.
> 
> Signed-off-by: Xiao Wang <xiao.w.w...@intel.com>
> Signed-off-by: Xiaolong Ye <xiaolong...@intel.com>
> ---
>  MAINTAINERS                            |   2 +
>  doc/guides/rel_notes/release_18_11.rst |   8 +
>  doc/guides/sample_app_ug/index.rst     |   1 +
>  doc/guides/sample_app_ug/vdpa.rst      | 118 +++++++
>  examples/Makefile                      |   2 +-
>  examples/vdpa/Makefile                 |  32 ++
>  examples/vdpa/main.c                   | 466 +++++++++++++++++++++++++
>  examples/vdpa/meson.build              |  16 +
>  8 files changed, 644 insertions(+), 1 deletion(-)
>  create mode 100644 doc/guides/sample_app_ug/vdpa.rst
>  create mode 100644 examples/vdpa/Makefile
>  create mode 100644 examples/vdpa/main.c
>  create mode 100644 examples/vdpa/meson.build
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 5967c1dd3..5656f18e8 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -683,6 +683,8 @@ F: doc/guides/sample_app_ug/vhost.rst
>  F: examples/vhost_scsi/
>  F: doc/guides/sample_app_ug/vhost_scsi.rst
>  F: examples/vhost_crypto/
> +F: examples/vdpa/
> +F: doc/guides/sample_app_ug/vdpa.rst
> 
>  Vhost PMD
>  M: Maxime Coquelin <maxime.coque...@redhat.com>
> diff --git a/doc/guides/rel_notes/release_18_11.rst
> b/doc/guides/rel_notes/release_18_11.rst
> index bc9b74ec4..dd53a9ecf 100644
> --- a/doc/guides/rel_notes/release_18_11.rst
> +++ b/doc/guides/rel_notes/release_18_11.rst
> @@ -67,6 +67,14 @@ New Features
>    SR-IOV option in Hyper-V and Azure. This is an alternative to the previous
>    vdev_netvsc, tap, and failsafe drivers combination.
> 
> +* **Add a new sample for vDPA**
> +
> +  The vdpa sample application creates vhost-user sockets by using the
> +  vDPA backend. vDPA stands for vhost Data Path Acceleration which utilizes
> +  virtio ring compatible devices to serve virtio driver directly to enable
> +  datapath acceleration. As vDPA driver can help to set up vhost datapath,
> +  this application doesn't need to launch dedicated worker threads for vhost
> +  enqueue/dequeue operations.
> 
>  API Changes
>  -----------
> diff --git a/doc/guides/sample_app_ug/index.rst
> b/doc/guides/sample_app_ug/index.rst
> index 5bedf4f6f..74b12af85 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -45,6 +45,7 @@ Sample Applications User Guides
>      vhost
>      vhost_scsi
>      vhost_crypto
> +    vdpa
>      netmap_compatibility
>      ip_pipeline
>      test_pipeline
> diff --git a/doc/guides/sample_app_ug/vdpa.rst
> b/doc/guides/sample_app_ug/vdpa.rst
> new file mode 100644
> index 000000000..d05728a37
> --- /dev/null
> +++ b/doc/guides/sample_app_ug/vdpa.rst
> @@ -0,0 +1,118 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018 Intel Corporation.
> +
> +Vdpa Sample Application
> +=======================
> +
> +The vdpa sample application creates vhost-user sockets by using the
> +vDPA backend. vDPA stands for vhost Data Path Acceleration which utilizes
> +virtio ring compatible devices to serve virtio driver directly to enable
> +datapath acceleration. As vDPA driver can help to set up vhost datapath,
> +this application doesn't need to launch dedicated worker threads for vhost
> +enqueue/dequeue operations.
> +
> +Testing steps
> +-------------
> +
> +This section shows the steps to start VMs with the vDPA vhost-user
> +backend and to verify network connectivity and live migration.
> +
> +Build
> +~~~~~
> +
> +To compile the sample application see :doc:`compiling`.
> +
> +The application is located in the ``vdpa`` sub-directory.
> +
> +Start the vdpa example
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +.. code-block:: console
> +
> +        ./vdpa [EAL options]  -- [--client] [--interactive|-i] or [--iface SOCKET_PATH]
> +
> +where
> +
> +* --client means running the vdpa app in client mode; in client mode, QEMU needs
> +  to run as the server and take charge of socket file creation.
> +* --iface specifies the path prefix of the UNIX domain socket files, e.g.
> +  /tmp/vhost-user-; the socket files will then be named /tmp/vhost-user-<n>
> +  (n starts from 0).
> +* --interactive means running the vdpa sample in interactive mode; currently 4
> +  internal commands are supported:
> +
> +  1. help: show help message
> +  2. list: list all available vdpa devices
> +  3. create: create a new vdpa port with socket file and vdpa device address
> +  4. quit: unregister vhost driver and exit the application
> +
> +Take the IFCVF driver as an example:
> +
> +.. code-block:: console
> +
> +        ./vdpa --log-level=9 -c 0x6 -n 4 --socket-mem 1024,1024 \
> +                -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
> +                -- --interactive

To demonstrate that the app doesn't need to launch dedicated worker threads for
vhost enqueue/dequeue operations, we can use "-c 0x2" to show that no extra
cores need to be allocated for worker threads.
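For example, the interactive command above could simply become (just a sketch
of the doc change, only the coremask is different, assuming one core is enough
for the control path):

        ./vdpa --log-level=9 -c 0x2 -n 4 --socket-mem 1024,1024 \
                -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
                -- --interactive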

> +
> +.. note::
> +    We need to bind vfio-pci to the VFs before running the vdpa sample.
> +
> +    * modprobe vfio-pci
> +    * ./usertools/dpdk-devbind.py -b vfio-pci 06:00.3 06:00.4
> +
> +Then we can create 2 vdpa ports in the interactive command line.
> +
> +.. code-block:: console
> +
> +        vdpa> list
> +        device id       device address  queue num       supported features
> +        0               0000:06:00.3    1               0x5572362272
> +        1               0000:06:00.4    1               0x5572362272
> +
> +        vdpa> create /tmp/vdpa-socket0 0000:06:00.3
> +        vdpa> create /tmp/vdpa-socket1 0000:06:00.4
> +
> +.. _vdpa_app_run_vm:
> +
> +Start the VMs
> +~~~~~~~~~~~~~
> +
> +.. code-block:: console
> +
> +       qemu-system-x86_64 -cpu host -enable-kvm \
> +       <snip>
> +       -mem-prealloc \
> +       -chardev socket,id=char0,path=<socket_file created in above steps> \
> +       -netdev type=vhost-user,id=vdpa,chardev=char0 \
> +       -device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee \
> +
> +After the VMs launch, we can log in to the VMs, configure the IP addresses, and
> +verify the network connection via ping or netperf.
> +
> +.. note::
> +    It is suggested to use QEMU 3.0.0, which extends vhost-user for vDPA.

[...]

> +
> +/* *** List all available vdpa devices *** */
> +struct cmd_list_result {
> +     cmdline_fixed_string_t action;
> +};
> +
> +static void cmd_list_vdpa_devices_parsed(
> +             __attribute__((unused)) void *parsed_result,
> +             struct cmdline *cl,
> +             __attribute__((unused)) void *data)
> +{
> +     int did;
> +     uint32_t queue_num;
> +     uint64_t features;
> +     struct rte_vdpa_device *vdev;
> +     struct rte_pci_addr addr;
> +
> +     cmdline_printf(cl, "device id\tdevice address\tqueue num\tsupported features\n");
> +     for (did = 0; did < dev_total; did++) {
> +             vdev = rte_vdpa_get_device(did);
> +             if (!vdev)
> +                     continue;
> +             if (vdev->ops->get_queue_num(did, &queue_num) < 0) {
> +                     RTE_LOG(ERR, VDPA,
> +                             "failed to get vdpa queue number "
> +                             "for device id %d.\n", did);
> +                     continue;
> +             }
> +             if (vdev->ops->get_features(did, &features) < 0) {
> +                     RTE_LOG(ERR, VDPA,
> +                             "failed to get vdpa features "
> +                             "for device id %d.\n", did);
> +                     continue;
> +             }
> +             addr = vdev->addr.pci_addr;
> +             cmdline_printf(cl,
> +             "%d\t\t"PCI_PRI_FMT"\t%"PRIu32"\t\t0x%"PRIu64"\n", did,
> +                     addr.domain, addr.bus, addr.devid,
> +                     addr.function, queue_num, features);

Use PRIx64 instead of PRIu64 for features, since the format string already
prints a "0x" prefix and the value is a feature bit mask.
You can also add a blank space between "PRIx64" and the adjacent string literals
to make it more readable.
Refer to:
        lib/librte_vhost/vhost_user.c:  "guest memory region %u, size: 0x%" PRIx64 "\n"
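
Something like the below is what I have in mind (just a sketch of the suggested
change, keeping all the other arguments as they are):

        cmdline_printf(cl,
                "%d\t\t" PCI_PRI_FMT "\t%" PRIu32 "\t\t0x%" PRIx64 "\n", did,
                addr.domain, addr.bus, addr.devid,
                addr.function, queue_num, features);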

BRs,
Xiao
