[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-20 Thread Amit Tomer
Hello,

> For this case, please use the --single-file option; otherwise many more
> than 8 fds are created, which is more than vhost-user sendmsg() can handle.
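(For context on that limit: vhost-user passes one fd per memory region as
SCM_RIGHTS ancillary data on the unix socket, and the protocol caps the
region count; VHOST_MEMORY_MAX_NREGIONS is 8 in the implementations of this
era. A rough, untested sketch of that fd-passing path, with MAX_REGIONS
standing in for the protocol constant:)

#include <string.h>
#include <sys/socket.h>

#define MAX_REGIONS 8   /* mirrors VHOST_MEMORY_MAX_NREGIONS */

static ssize_t send_fds(int sock, void *payload, size_t len,
                        int *fds, int nfds)
{
        struct msghdr msgh;
        struct iovec iov = { .iov_base = payload, .iov_len = len };
        char control[CMSG_SPACE(MAX_REGIONS * sizeof(int))];
        struct cmsghdr *cmsg;

        if (nfds > MAX_REGIONS)
                return -1;      /* more fds than the protocol allows */

        memset(&msgh, 0, sizeof(msgh));
        msgh.msg_iov = &iov;
        msgh.msg_iovlen = 1;
        msgh.msg_control = control;
        msgh.msg_controllen = CMSG_SPACE(nfds * sizeof(int));

        cmsg = CMSG_FIRSTHDR(&msgh);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(nfds * sizeof(int));
        memcpy(CMSG_DATA(cmsg), fds, nfds * sizeof(int));

        return sendmsg(sock, &msgh, 0);
}

With --single-file the whole memory is backed by a single fd, so the
message always stays within that limit.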

Thanks, I'm able to verify it by sending an ARP packet from the container to
the host on arm64. But sometimes I see the following message while running
l2fwd in the container (as pointed out by Rich):

EAL: Master lcore 0 is ready (tid=8a7a3000;cpuset=[0])
EAL: lcore 1 is ready (tid=89cdf050;cpuset=[1])
Notice: odd number of ports in portmask.
Lcore 0: RX port 0
Initializing port 0... PANIC in kick_all_vq():
TUNSETVNETHDRSZ failed: Inappropriate ioctl for device

How could it be avoided?
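(For reference: TUNSETVNETHDRSZ is the TUN/TAP ioctl that sets the
virtio-net header size on a tap fd, so "Inappropriate ioctl for device"
suggests it is being issued on an fd that is not a tap device at all. A
minimal sketch of a successful call, with the interface name made up:)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int open_tap_with_vnet_hdr(void)
{
        struct ifreq ifr;
        int hdr_sz = 12;   /* sizeof(struct virtio_net_hdr_mrg_rxbuf) */
        int fd = open("/dev/net/tun", O_RDWR);

        if (fd < 0)
                return -1;
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_VNET_HDR;
        strncpy(ifr.ifr_name, "tap0", sizeof(ifr.ifr_name)); /* name made up */
        if (ioctl(fd, TUNSETIFF, &ifr) < 0 ||
            ioctl(fd, TUNSETVNETHDRSZ, &hdr_sz) < 0) {
                perror("tap ioctl");
                return -1;
        }
        return fd;
}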

Thanks,
Amit.


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-14 Thread Amit Tomer
Hello,

>
> Not necessary. But if you want to use hugepages inside Docker, use the -v
> option to map a hugetlbfs mount into the container.

I modified the Docker command line in order to make use of hugetlbfs:

CMD ["/usr/src/dpdk/examples/l2fwd/build/l2fwd", "-c", "0x3", "-n",
"4","--no-pci", "--socket-mem","512",
"--vdev=eth_cvio0,queue_num=256,rx=1,tx=1,cq=0,path=/var/run/usvhost",
"--", "-p", "0x1"]

Then I run docker:

 docker run -i -t --privileged  -v /dev/hugepages:/dev/hugepages  -v
/home/ubuntu/backup/usvhost:/var/run/usvhost  l6

But this is what I see:

EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 48 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Failed to find phys addr for 2 MB pages
PANIC in rte_eal_init():
Cannot init memory
1: [/usr/src/dpdk/examples/l2fwd/build/l2fwd(rte_dump_stack+0x20) [0x48ea78]]
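(For context: EAL resolves a page's physical address by reading
/proc/self/pagemap, roughly like the sketch below, which is not the exact
EAL code. One common cause of "Failed to find phys addr" in containers is
that this lookup is not permitted, or the kernel hides the PFN bits from
the process, so the lookup yields 0 and EAL gives up.)

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

uint64_t virt2phys(const void *virt)
{
        long pgsz = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0, pfn;
        FILE *f = fopen("/proc/self/pagemap", "rb");

        if (f == NULL)
                return 0;
        if (fseek(f, (uintptr_t)virt / pgsz * sizeof(entry), SEEK_SET) != 0 ||
            fread(&entry, sizeof(entry), 1, f) != 1)
                entry = 0;
        fclose(f);
        pfn = entry & ((1ULL << 55) - 1);   /* bits 0-54 hold the PFN */
        return pfn * pgsz + (uintptr_t)virt % pgsz;
}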

This is from the host:

# mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
none on /dev/hugepages type hugetlbfs (rw,relatime)

# cat /proc/meminfo | grep Huge
AnonHugePages:    548864 kB
HugePages_Total:    4096
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

What am I doing wrong here?

Thanks,
Amit


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-14 Thread Amit Tomer
Hello,

> Can you send out how you start this l2fwd program?

This is how I run the l2fwd program:

CMD ["/usr/src/dpdk/examples/l2fwd/build/l2fwd", "-c", "0x3", "-n",
"4","--no-pci",
,"--no-huge","--vdev=eth_cvio0,queue_num=256,rx=1,tx=1,cq=0,path=/usr/src/dpdk/usvhost",
"--", "-p", "0x1"]

I tried passing "-m 1024" to it, but it causes l2fwd to be killed even
before it can connect to the usvhost socket.

Do I need to create hugepages from inside the Docker container to make use
of hugepages?

Thanks,
Amit.


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-13 Thread Amit Tomer
Hello,

>
> You can use the patch below for l2fwd to send out an ARP packet when it
> starts.

I tried to send out an ARP packet using this patch, but the buffer
allocation for the ARP packet itself fails:

 m = rte_pktmbuf_alloc(mp);

returns a NULL value.
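(For anyone reproducing this, a rough sketch of what such a startup ARP
patch looks like; it is not the actual patch. The pool name comes from the
l2fwd example, the IP addresses are made up, and a NULL from
rte_pktmbuf_alloc() here usually means the mempool was created without any
usable memory behind it:)

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_arp.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

static void send_one_arp(uint8_t port, struct rte_mempool *mp)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        struct ether_hdr *eth;
        struct arp_hdr *arp;

        if (m == NULL) {        /* the failure seen above */
                printf("rte_pktmbuf_alloc failed\n");
                return;
        }

        eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
        memset(&eth->d_addr, 0xff, ETHER_ADDR_LEN);      /* broadcast */
        rte_eth_macaddr_get(port, &eth->s_addr);
        eth->ether_type = rte_cpu_to_be_16(ETHER_TYPE_ARP);

        arp = (struct arp_hdr *)(eth + 1);
        arp->arp_hrd = rte_cpu_to_be_16(ARP_HRD_ETHER);
        arp->arp_pro = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
        arp->arp_hln = ETHER_ADDR_LEN;
        arp->arp_pln = 4;
        arp->arp_op  = rte_cpu_to_be_16(ARP_OP_REQUEST);
        ether_addr_copy(&eth->s_addr, &arp->arp_data.arp_sha);
        arp->arp_data.arp_sip = rte_cpu_to_be_32(IPv4(192, 168, 1, 1)); /* made up */
        memset(&arp->arp_data.arp_tha, 0xff, ETHER_ADDR_LEN);
        arp->arp_data.arp_tip = rte_cpu_to_be_32(IPv4(192, 168, 1, 2)); /* made up */

        m->pkt_len = m->data_len = sizeof(*eth) + sizeof(*arp);
        rte_eth_tx_burst(port, 0, &m, 1);
}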

Thanks,
Amit.


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-12 Thread Amit Tomer
Hello,

> In vhost-switch, it judges whether a virtio device is ready for processing
> after receiving a pkt from the virtio device. So you'd better construct a
> pkt and send it out first in l2fwd.

I tried to ping the socket interface from the host for the same purpose,
but it didn't work.

Could you please suggest some other approach for achieving the same (how a
pkt can be sent out from l2fwd)?
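(What I understood the suggestion to be, as a rough, untested sketch using
l2fwd's own names, l2fwd_pktmbuf_pool and portid:)

/* Transmit one frame right after port init so vhost-switch sees traffic
 * from the virtio device and marks it ready. The frame contents are
 * arbitrary here; a proper ARP frame is cleaner. */
struct rte_mbuf *m = rte_pktmbuf_alloc(l2fwd_pktmbuf_pool);
if (m != NULL) {
        m->pkt_len = m->data_len = 64;           /* minimal frame */
        if (rte_eth_tx_burst(portid, 0, &m, 1) == 0)
                rte_pktmbuf_free(m);             /* not queued; avoid a leak */
}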

Also, before trying this, I have verified that vhost-switch works fine
with testpmd.

Thanks,
Amit.


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-12 Thread Amit Tomer
Hello,

>  Have you applied all three fixes discussed here?

I am running it with only the RFC patches applied, and with "--no-huge" in l2fwd.

Thanks
Amit.


[dpdk-dev] [PATCH 0/4] virtio support for container

2016-01-12 Thread Amit Tomer
Hello,

I run l2fwd from inside docker with the following logs, but I don't see the
port statistics getting updated:

#/home/ubuntu/dpdk# sudo docker run -i -t -v
/home/ubuntu/dpdk/usvhost:/usr/src/dpdk/usvhost l4
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Detected lcore 7 as core 7 on socket 0
EAL: Detected lcore 8 as core 8 on socket 0
EAL: Setting up physically contiguous memory...
EAL: TSC frequency is ~9 KHz
EAL: Master lcore 1 is ready (tid=b5968000;cpuset=[1])
Notice: odd number of ports in portmask.
Lcore 1: RX port 0
Initializing port 0... done:
Port 0, MAC address: F6:9F:7A:47:A4:99

Checking link status ...done
Port 0 Link Up - speed 1 Mbps - full-duplex
L2FWD: entering main loop on lcore 1
L2FWD:  -- lcoreid=1 portid=0


Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent:                        0
Packets received:                    0
Packets dropped:                     0
Aggregate statistics ===============================
Total packets sent:                  0
Total packets received:              0
Total packets dropped:               0


Host-side logs after running:

# ./vhost-switch -c 0x3 -n 4 --socket-mem 2048 --huge-dir
/dev/hugepages -- -p 0x1 --dev-basename usvhost

PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x1528
pf queue num: 0, configured vmdq pool num: 64, each vmdq pool has 2 queues
VHOST_PORT: Max virtio devices supported: 64
VHOST_PORT: Port 0 MAC: d8 9d 67 ee 55 f0
VHOST_PORT: Skipping disabled port 1
VHOST_DATA: Procesing on Core 1 started
VHOST_CONFIG: socket created, fd:20
VHOST_CONFIG: bind to usvhost
VHOST_CONFIG: new virtio connection is 21
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:22 to 0x7f3400 sz:0x400 off:0x0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:23
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:24
VHOST_CONFIG: virtio isn't ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:25
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:26
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (0) Device has been added to data core 1

Could anyone please point out how it can be tested further (how can
traffic be sent between host and container)?

Thanks,
Amit.

On Tue, Jan 12, 2016 at 4:18 PM, Pavel Fedin  wrote:
>  Hello!
>
>> Your guess makes sense because the current implementation does not
>> support multiple queues.
>>
>>  From your log, only 0 and 1 are "ready for processing"; others are "not
>> ready for processing".
>
>  Yes, and if we study it even more carefully, we see that we initialize all
> tx queues but only a single rx queue (#0).
>  After some more code browsing and comparing the two patchsets I figured out
> that the problem is caused by an inappropriate VIRTIO_NET_F_CTRL_VQ flag. In
> your RFC you used a different capability set, while in v1 you seem to have
> forgotten about this.
>  I suggest temporarily moving the hw->guest_features assignment out of
> virtio_negotiate_features() into the caller, where we have
> eth_dev->dev_type and can choose the right set depending on it.
>
>  With all the mentioned fixes I've got ping running.
>  Tested-by: Pavel Fedin 
>
> Kind regards,
> Pavel Fedin
> Expert Engineer
> Samsung Electronics Research center Russia
>
>
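
(A rough sketch of the change Pavel describes, with names assumed from the
virtio PMD of that period; the feature macros are illustrative, not the
actual patch:)

/* In the caller (eth_virtio_dev_init), where eth_dev->dev_type is known,
 * pick the feature set before negotiating, instead of hardcoding it
 * inside virtio_negotiate_features(). */
if (eth_dev->dev_type == RTE_ETH_DEV_PCI)
        hw->guest_features = VIRTIO_PMD_GUEST_FEATURES;
else    /* vdev/container case: no control queue */
        hw->guest_features = VIRTIO_PMD_GUEST_FEATURES &
                             ~(1u << VIRTIO_NET_F_CTRL_VQ);
virtio_negotiate_features(hw);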