The latest Mellanox adapters/SoCs support virtio operations to accelerate the 
vhost data path.

A new net PMD will be added to implement the vdpa driver requirements on top 
of the Mellanox devices that support it, such as ConnectX-6 and BlueField.

Points:

  *   The mlx5_vdpa PMD runs on top of PCI devices, both VFs and PFs.
  *   A Mellanox PCI device can be configured either as an ethdev device or 
as a vdpa device.
  *   One physical device can contain ethdev VFs/PF driven by the mlx5 PMD 
and vdpa VFs/PF driven by the new mlx5_vdpa PMD in parallel.
  *   The user selects which driver probes a Mellanox PCI device via the 
PCI device devargs (see the sketch after this list).
  *   mlx5 and mlx5_vdpa both depend on the rdma-core lib, so some code may 
be shared between them; therefore a new mlx5 common directory will be added 
under drivers/common for code reuse (a possible layout is sketched after 
this list).
  *   All the guest physical memory of the virtqs will be translated to host 
physical memory by the HW.
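
For illustration, a minimal sketch of the devargs-based driver selection 
follows. The devarg key and values ("class=net" / "class=vdpa"), the 
application name and the PCI addresses are hypothetical placeholders and not 
part of this proposal:

  # hypothetical devarg: probe the PF as an ethdev device with the mlx5 PMD
  <dpdk-app> -w 0000:03:00.0,class=net ...

  # hypothetical devarg: probe a VF as a vdpa device with the new mlx5_vdpa PMD
  <dpdk-app> -w 0000:03:00.2,class=vdpa ...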

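A possible source tree layout for the shared code, assuming the common 
directory is named mlx5 and the new PMD lives under drivers/net (both are 
assumptions of this sketch, not decisions of this proposal):

  drivers/common/mlx5/    - code shared by the two PMDs (e.g. rdma-core glue 
                            and device probing helpers)
  drivers/net/mlx5/       - existing ethdev PMD, reusing drivers/common/mlx5
  drivers/net/mlx5_vdpa/  - new vdpa PMD, reusing drivers/common/mlx5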