Hi all:
   AFAIK, there is currently only one way to use DPDK inside Docker
containers, which is to pass a physical NIC through to the application.
   I'm now working on another solution that combines DPDK and OVS via
vhost-net; I call it the "vhost_net pmd driver".
   The detailed solution is as follows:
   1 Similar to the qemu<->vhost_net handshake, we use a series of ioctl
commands to make the virtqueues visible to both vhost_net and the vhost_net
pmd driver (a rough sketch follows this list).
   2 In a kvm guest, the tx/rx queues are filled with GPA addresses, which
vhost_net translates into HVA addresses so that the tap device can copy the
datagrams. A container, however, does not need GPAs to fill its tx/rx
queues. So we fill the tx/rx queues with HVA addresses directly and pass an
(HVA, HVA) identity map table to vhost_net via the VHOST_SET_MEM_TABLE
ioctl during initialization. This way *the vhost_net code stays
untouched*.
   3 The packet transmit/receive path is exactly the same as in the virtio
pmd driver.
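
   To make steps 1 and 2 concrete, here is a rough, simplified sketch of
how the pmd could program /dev/vhost-net. This is not the actual patch:
function and parameter names are placeholders, most error handling is
omitted, and tap_fd is assumed to be an already opened tap device
(IFF_VNET_HDR). The only real difference from what qemu does is the memory
table: a single region whose guest_phys_addr equals its userspace_addr,
covering the shared area that holds the vrings and the packet buffers.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/*
 * Program /dev/vhost-net the way qemu does for a guest, except that the
 * single memory region is an (HVA, HVA) identity map, so vhost_net's
 * GPA->HVA translation becomes a no-op.
 */
static int vhostnet_setup(void *shm_base, uint64_t shm_size,
                          struct vhost_vring_addr *vring0_addr, int tap_fd)
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);
    if (vhost_fd < 0)
        return -1;

    if (ioctl(vhost_fd, VHOST_SET_OWNER, NULL) < 0) {
        close(vhost_fd);
        return -1;
    }

    uint64_t features = 0;              /* no optional features in this sketch */
    ioctl(vhost_fd, VHOST_SET_FEATURES, &features);

    /* The (HVA, HVA) map table: "guest physical" base == userspace base. */
    struct vhost_memory *mem =
        calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
    mem->nregions = 1;
    mem->regions[0].guest_phys_addr = (uintptr_t)shm_base;
    mem->regions[0].userspace_addr  = (uintptr_t)shm_base;
    mem->regions[0].memory_size     = shm_size;
    ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
    free(mem);

    /* Per-vring setup, shown for vring 0 only; vring 1 is identical. */
    struct vhost_vring_state num  = { .index = 0, .num = 256 };
    struct vhost_vring_state base = { .index = 0, .num = 0 };
    ioctl(vhost_fd, VHOST_SET_VRING_NUM,  &num);
    ioctl(vhost_fd, VHOST_SET_VRING_BASE, &base);

    /* desc/avail/used_user_addr in here are plain HVAs inside the region. */
    ioctl(vhost_fd, VHOST_SET_VRING_ADDR, vring0_addr);

    struct vhost_vring_file kick = { .index = 0, .fd = eventfd(0, 0) };
    struct vhost_vring_file call = { .index = 0, .fd = eventfd(0, 0) };
    ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);
    ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);

    /* Finally attach the tap device as this vring's backend. */
    struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
    ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);

    return vhost_fd;
}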

   The demo is already working: DPDK accesses vhost_net directly to do L2
forwarding.
     clients  |                      host                   |    container
      ping    |                                             |
vm0   ----- > |ixgbe:enp131s0f0 <-> ovs:br0  <-> vhost:tap0 |<-> vhost-net pmd
              |                                             |         |
              |                                             |      testpmd
              |                                             |         |
vm1  <------  |ixgbe:enp131s0f1 <-> ovs:br1  <-> vhost:tap1 |<-> vhost-net pmd
              |                                             |
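
   Inside the container, testpmd simply calls the pmd's normal rx/tx burst
functions. Since the mem table is an identity map, the rx side can put the
mbuf data HVAs straight into the descriptors. A rough sketch (again with
made-up names; virtio-net header handling and the initial ring fill are
omitted):

#include <stdint.h>
#include <unistd.h>
#include <linux/virtio_ring.h>
#include <rte_atomic.h>
#include <rte_mbuf.h>

struct vhostnet_rxq {                   /* hypothetical per-queue context */
    struct vring vr;                    /* desc/avail/used shared with vhost_net */
    struct rte_mbuf *sw_ring[256];      /* mbuf backing each descriptor slot */
    uint16_t used_cons;                 /* last used-ring entry consumed */
    int kick_fd;                        /* eventfd given to VHOST_SET_VRING_KICK */
    struct rte_mempool *mp;
};

/* Post free mbufs to the ring: their data HVAs go straight into the
 * descriptors because the mem table passed to vhost_net is an identity map. */
static void vhostnet_rx_refill(struct vhostnet_rxq *q, uint16_t count)
{
    uint16_t avail = q->vr.avail->idx;

    for (uint16_t i = 0; i < count; i++) {
        struct rte_mbuf *m = rte_pktmbuf_alloc(q->mp);
        if (m == NULL)
            break;
        uint16_t slot = avail & (q->vr.num - 1);
        q->sw_ring[slot] = m;
        q->vr.desc[slot].addr  = (uint64_t)(uintptr_t)rte_pktmbuf_mtod(m, void *);
        q->vr.desc[slot].len   = rte_pktmbuf_tailroom(m);
        q->vr.desc[slot].flags = VRING_DESC_F_WRITE;  /* vhost_net writes here */
        q->vr.avail->ring[slot] = slot;
        avail++;
    }
    rte_smp_wmb();                      /* descriptors before the index bump */
    q->vr.avail->idx = avail;

    uint64_t one = 1;                   /* kick the vhost_net worker thread */
    write(q->kick_fd, &one, sizeof(one));
}

/* Harvest completed buffers from the used ring, virtio-pmd style. */
static uint16_t vhostnet_rx_burst(struct vhostnet_rxq *q,
                                  struct rte_mbuf **pkts, uint16_t max)
{
    uint16_t n = 0;

    while (n < max && q->used_cons != q->vr.used->idx) {
        struct vring_used_elem *e =
            &q->vr.used->ring[q->used_cons & (q->vr.num - 1)];
        struct rte_mbuf *m = q->sw_ring[e->id];

        m->data_len = (uint16_t)e->len; /* length filled in by vhost_net */
        m->pkt_len  = e->len;
        pkts[n++] = m;
        q->used_cons++;
    }
    vhostnet_rx_refill(q, n);           /* replace what was handed out */
    return n;
}

The tx side mirrors this with device-readable descriptors, just like the
virtio pmd.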

     I don't know whether this solution is acceptable here. Are there any
blueprints for combining containers with DPDK? Any suggestions or advice?
Thanks in advance.


---
Ann
