Hey Liping,
Following up on this issue: I have configured SR-IOV and I am no longer
seeing any packet loss or latency issues.
On Mon, Sep 17, 2018 at 1:27 AM Liping Mao (limao) wrote:
>
> > Question: I have the br-vlan interface mapped to bond0 to run my VM (VLAN
>
> > traffic), so do I need to do anything
Liping,
For the last two days I have been running tests with hping3 and found the
following behavior. As you can see from my results, UDP does very badly if I
increase the number of queues; do you know why?
UDP:
If I set "ethtool -L eth0 combined 1" then the UDP pps rate is 100kpps;
if I set "ethtool -L eth0 combined 8" then UDP
Thanks Liping,
I will try to reach out or open a new thread to get SR-IOV info.
By the way, what version of OpenStack are you using, and what hardware,
especially which NIC? Just trying to see if it's hardware related.
I'm running kernel 3.10.x; do you think it could be something kernel related?
> Question: I have the br-vlan interface mapped to bond0 to run my VM (VLAN
traffic), so do I need to do anything on bond0 to enable the VF/PF
function? I am just confused because currently my VM NIC maps to the compute
node's br-vlan bridge.
I have not actually used SR-IOV in my environment; maybe others could help.
Thanks Liping,
I will check the bug for tx/rx queue size and see if I can make it work,
but it looks like my 10G NIC supports SR-IOV, so I am trying that path
because it will be better in the long run.
I have deployed my cloud using openstack-ansible, so now I need to figure
out how to wire that up with OpenStack
Hi Satish,
There are hard limitations in nova's code; I have not actually used more than 8
queues:
    def _get_max_tap_queues(self):
        # NOTE(kengo.sakai): In kernels prior to 3.0,
        # multiple queues on a tap interface is not supported.
        # In kernels 3.x, the number o
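From what I recall of that limit (hedged: the exact numbers below are my
recollection of nova's behavior, not quoted from the truncated snippet above),
the cap depends on the kernel major version. A minimal sketch of that logic:

```python
# Sketch of the tap multiqueue cap by kernel major version.
# The specific limits (1 / 8 / 256) are my recollection of nova's
# behavior, not an exact copy of nova's code.
def max_tap_queues(kernel_release: str) -> int:
    major = int(kernel_release.split(".")[0])
    if major < 3:
        return 1    # pre-3.0 kernels: no multiqueue on tap devices
    if major == 3:
        return 8    # 3.x kernels: a tap interface is limited to 8 queues
    return 256      # 4.0+ kernels: the limit is raised to 256

print(max_tap_queues("3.10.0"))  # a 3.10.x host caps out at 8 queues
```

So on your 3.10.x kernel, 8 queues would be the ceiling regardless of flavor size.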
Update on my last email.
I am able to achieve 150kpps with queue=8, and my goal is to do 300kpps
because some of our voice applications use 300kpps.
Here I am trying to increase rx_queue_size & tx_queue_size but it is not
working somehow. I have tried the following:
1. add rx/tx size in /etc/nova/nova.conf
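For reference, on a nova release that does support them, those options live in
the [libvirt] section of nova.conf; a sketch (the 1024 values are assumptions,
and older releases such as Queens do not have these options):

```ini
# /etc/nova/nova.conf -- sketch; only valid on nova releases that
# support these [libvirt] options, and 1024 is an assumed value.
[libvirt]
rx_queue_size = 1024
tx_queue_size = 1024
```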
I successfully reproduced this error with the hping3 tool, and it looks like
multiqueue is our solution :) but I have a few questions you may have
answers to.
1. I have created two instances (vm1.example.com & vm2.example.com)
2. I flooded traffic from vm1 using "hping3 vm2.example.com
--flood" and I ha
Hi Liping,
>> I think multi queue feature should help.(be careful to make sure the ethtool
>> update queue number action also did after reboot the vm).
Is there a way I can automate this last task (updating the queue number
after rebooting the VM)? :) Otherwise I can use cloud-init to make sure
all VMs
I am currently playing with those settings and trying to generate
traffic with the hping3 tool. Do you have any tool to test traffic
performance, especially for UDP-style small packets?
I am going to share all my results and see what you think, because I
have noticed you went through this pain :) I will
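hping3 itself can generate small UDP packets; a command sketch (the target
name, port 5060, and the 64-byte payload are assumptions):

```shell
# Flood small UDP packets at the target (run as root).
# vm2.example.com, port 5060 and -d 64 (payload bytes) are assumptions;
# adjust them to match your VoIP traffic profile.
hping3 --udp -p 5060 -d 64 --flood vm2.example.com
```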
I think the multi-queue feature should help. (Be careful to make sure the
ethtool queue number update is also done after rebooting the VM.)
NUMA CPU pinning and queue length will be a plus, in my experience. You may
need to do performance tests in your situation; in my case CPU NUMA pinning
helped the app get very sta
Thanks Liping,
I am using libvirtd 3.9.0, so it looks like I am eligible to take
advantage of that feature. Phew!
[root@compute-47 ~]# libvirtd -V
libvirtd (libvirt) 3.9.0
Let me tell you how I am running instances on my OpenStack: my compute node
has 32 cores / 32G of memory, and I have created two instances
It is still possible to update the rx and tx queue lengths if your qemu and
libvirt versions are higher than the versions recorded in [3]. (You should be
able to update it directly in the libvirt configuration, if my memory is
correct.)
We also have some similar use cases which run audio/video services. They are
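If I understood that correctly, in the domain XML (via virsh edit) it would
look roughly like this; the queue count, queue sizes, and interface type here
are all assumptions:

```xml
<!-- Sketch of a virtio NIC in the libvirt domain XML (virsh edit);
     all values are assumptions, and qemu/libvirt must be new enough
     for the rx/tx queue size attributes to be accepted. -->
<interface type='bridge'>
  <model type='virtio'/>
  <driver name='vhost' queues='8' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>
```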
Hi Liping,
Thank you for your reply.
We noticed packet drops during high load. I tried txqueue and it didn't help,
so I believe I am going to try multiqueue.
For SR-IOV I have to check whether my NIC supports it.
We are using Queens, so I think the queue size option is not possible :(
We are using VoIP
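A quick sysfs check for SR-IOV support (sketch; the interface name eth0 is an
assumption):

```shell
# If the NIC supports SR-IOV, this sysfs file exists and reports the
# maximum number of virtual functions ("eth0" is an assumption):
cat /sys/class/net/eth0/device/sriov_totalvfs

# Cross-check the PCI capability list for the SR-IOV capability:
lspci -vvv | grep -i "Single Root I/O Virtualization"
```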
Hi Satish,
Does your packet loss happen all the time, or only under heavy load?
AFAIK, if you do not tune anything, the VM tap can process about 50kpps before
the tap device starts to drop packets.
If it happens under heavy load, there are a couple of things you can try:
1) increase the tap queue length,
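For (1), the transmit queue length can be raised per tap device; a sketch (the
tap name and the value 10000 are assumptions; find the instance's tap with
"virsh domiflist <instance>"):

```shell
# Raise the tx queue length on the instance's tap device
# (the tap name and 10000 are assumptions):
ip link set dev tap5af7f525-5f txqueuelen 10000

# Verify the new qlen:
ip link show dev tap5af7f525-5f | grep qlen
```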
[root@compute-33 ~]# ifconfig tap5af7f525-5f | grep -i drop
RX errors 0 dropped 0 overruns 0 frame 0
TX errors 0 dropped 2528788837 overruns 0 carrier 0 collisions 0
Noticed the tap interface dropping TX packets; even after increasing
txqueue from 1000 to 1 nothing changed, still
Folks,
I need some advice or suggestions to figure out what is going on with my
network. We have noticed high packet loss on an OpenStack instance and are not
sure what is going on; at the same time, if I check on the host machine, it
has zero packet loss. This is what I did to test:
ping 8.8.8.8
from instance: