Re: [Qemu-devel] Sending packets up to VM using vhost-net User.

2014-11-19 Thread Anshul Makkar
Any suggestions here..

Thanks
Anshul Makkar

On Tue, Nov 18, 2014 at 5:34 PM, Anshul Makkar 
anshul.mak...@profitbricks.com wrote:

 Sorry, forgot to mention: I am using "git clone -b vhost-user-v5
 https://github.com/virtualopensystems/qemu.git" for the vhost-user backend
 implementation,

 and "git clone https://github.com/virtualopensystems/vapp.git" for the
 reference implementation.

 Anshul Makkar


 On Tue, Nov 18, 2014 at 5:29 PM, Anshul Makkar 
 anshul.mak...@profitbricks.com wrote:

 Hi,

 I am developing an application that is using vhost-user backend for
 packet transfer.

 The architecture:

 1) VM1 is using Vhost-user and executing on server1.

 qemu-system-x86_64 -m 1024 -mem-path
 /dev/hugepages,prealloc=on,share=on -drive
 file=/home/amakkar/test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
 -device
 virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
 -vga std -vnc 0.0.0.0:3 -netdev
 type=vhost-user,id=net0,file=/home/amakkar/qemu.sock -device
 virtio-net-pci,netdev=net0

 2) App1 on server1: executing in user mode, it connects to the vhost-user
 backend over qemu.sock. As expected, initialization is done and the guest
 addresses, including the addresses of the descriptor ring, available ring and
 used ring, are mapped into my userspace app so I can directly access them.

 I launch packETH on VM1 and transfer some packets out of eth0 on VM1
 (packet transfer uses the virtio-net backend; ifconfig eth0 on VM1 shows
 correct TX stats).

 In App1 I directly access the avail ring, consume the packet, and then do an
 RDMA transfer to server 2.

 3) VM2 and App2 executing on server2 and again using VHost-User.

 App2: Vring initializations are successfully done and vring buffers are
 mapped. I get the buffer from App1 and now *I want to transfer this
 buffer (Raw packet) to VM2.*

 To transfer the buffer from App2 to VM2, I directly access the descriptor
 ring, place the buffer in it, update the available index, and then issue
 the kick.

 code snippet for it:

 dest_buf = (void *)handler->map_handler(handler->context,
 desc[a_idx].addr);
 memcpy(dest_buf + hdr_len, buf, size);
 avail->ring[avail->idx % num] = a_idx;
 avail->idx++;
 fprintf(stdout, "put_vring, synching memory\n");
 sync_shm(dest_buf, size);
 sync_shm((void *)(avail), sizeof(struct vring_avail));

 kick(vhost_user->vring_table, rx_idx);

 But the buffer never reaches VM2 (ifconfig eth0 in VM2 shows RX stats of
 0).

 Please can you tell me whether my approach to transferring the packet from
 App2 to the VM is correct? Can I directly place the buffer in the descriptor
 ring and issue a kick to notify virtio-net that a packet is available, or do
 you see an implementation problem?

 Thanks
 Anshul Makkar




