Hi, Bhanu,

I can still observe the issue. Please keep me in the loop if you have any 
updates.


Thanks, An

________________________________
From: awan...@masonlive.gmu.edu
Sent: August 4, 2016 1:14:57
To: Bodireddy, Bhanuprakash; discuss@openvswitch.org
Subject: Re: help - ovs 2.5 flow statistics issues


Hi, Bhanu,


In my case, I don't think it is a rate issue. I sent the traffic at a very 
low rate (~0.7 Mbps), and the statistics are still incorrect.

Also, in my test trace, traffic matching 10.0.0.0/8 -> 81.0.0.0/8 makes up 
only a small portion of the total.

The combined totals, however, are correct. For example, here are the results I observed:


NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2.183s, table=0, n_packets=12, n_bytes=2083, idle_age=2, priority=50000,ip,in_port=1,nw_src=10.0.0.0/8 actions=output:2
 cookie=0x0, duration=10.829s, table=0, n_packets=2328320, n_bytes=216980958, idle_age=10, in_port=1 actions=output:2

The sum of 12 and 2328320 matches the total packet count. However, 2075 of 
those packets belong to 10.0.0.0/8 -> 81.0.0.0/8 and should have been counted 
by the higher-priority rule.
After looking into the kernel flow table, I found that in the beginning the 
masked key 10.0.0.0/255.0.0.0 -> 81.0.0.0/255.0.0.0 was installed and packets 
were processed fine.
But after a while, this masked key is removed by the revalidator thread due 
to mismatches of the flows in the kernel table.
So my guess is that either I misconfigured the kernel flow table or there is 
something wrong in the kernel.
Could you please help verify this further? I appreciate your help!
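
For reference, a sketch of the commands that can be used to watch the kernel 
flow table and the revalidators (these are standard OVS tools; the log module 
name is an assumption based on where the revalidator code lives):

  # dump the datapath (kernel) flows, including the installed masked keys
  ovs-dpctl dump-flows
  # summarize the revalidator threads and the flows they manage
  ovs-appctl upcall/show
  # raise the revalidator module's log level to see why flows are evicted
  ovs-appctl vlog/set ofproto_dpif_upcall:file:dbg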

Thanks,
An




________________________________
From: Bodireddy, Bhanuprakash <bhanuprakash.bodire...@intel.com>
Sent: August 3, 2016 17:55:44
To: awan...@masonlive.gmu.edu; discuss@openvswitch.org
Subject: RE: help - ovs 2.5 flow statistics issues

>-----Original Message-----
>From: awan...@masonlive.gmu.edu [mailto:awan...@masonlive.gmu.edu]
>Sent: Monday, August 1, 2016 3:49 PM
>To: Bodireddy, Bhanuprakash <bhanuprakash.bodire...@intel.com>;
>discuss@openvswitch.org
>Subject: Re: help - ovs 2.5 flow statistics issues
>
>Thank you, Bhanu, for your verification. I really appreciate it!
>Please keep me updated if you figure out the root cause of this problem.

Can you reduce the packet rate and re-verify the stats? I don't see a problem 
with the stats after I adjusted my packet rate.
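
For reference, a rough sketch of how each run can be re-verified ("br0" is an 
assumed bridge name):

  ovs-ofctl del-flows br0
  ovs-ofctl add-flow br0 "in_port=1 actions=output:2"
  # send a known number of packets, then check n_packets:
  ovs-ofctl dump-flows br0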

Regards,
Bhanu Prakash.

>
>Thanks, An
>________________________________________
>From: Bodireddy, Bhanuprakash <bhanuprakash.bodire...@intel.com>
>Sent: August 1, 2016 22:43:37
>To: awan...@masonlive.gmu.edu; discuss@openvswitch.org
>Subject: RE: help - ovs 2.5 flow statistics issues
>
>>-----Original Message-----
>>From: awan...@masonlive.gmu.edu
>[mailto:awan...@masonlive.gmu.edu]
>>Sent: Monday, August 1, 2016 12:52 PM
>>To: Bodireddy, Bhanuprakash <bhanuprakash.bodire...@intel.com>;
>>discuss@openvswitch.org
>>Subject: Re: help - ovs 2.5 flow statistics issues
>>
>>Hi, Bhanu,
>>
>>Thank you for your response! I tried it with the master branch and it works, 
>>so I guess it is a problem with version 2.5.0.
>>But I noticed that the statistics are actually wrong in my case, even though 
>>I could see the counts incrementing.
>
>You are right, the stats don't seem accurate in my case either.
>I tested by sending 100k packets in each of 5 runs; below is the 'n_packets' 
>count from the dump-flows output.
>
> Reported | Actual
> 97206    | 100,000
> 97833    | 100,000
> 97954    | 100,000
> 98334    | 100,000
> 98017    | 100,000
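>
>One way to cross-check these counts (a sketch; "br0" is an assumed bridge 
>name) is against the per-port counters, which are maintained independently 
>of the per-flow stats:
>
>  ovs-ofctl dump-ports br0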
>
>Regards,
>Bhanu Prakash.
>
>>Could you please verify whether you obtained accurate results in your case?
>>
>>Thanks,
>>An
>>________________________________________
>>From: Bodireddy, Bhanuprakash <bhanuprakash.bodire...@intel.com>
>>Sent: August 1, 2016 15:59:13
>>To: awan...@masonlive.gmu.edu; discuss@openvswitch.org
>>Subject: RE: help - ovs 2.5 flow statistics issues
>>
>>>-----Original Message-----
>>>From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of
>>>awan...@masonlive.gmu.edu
>>>Sent: Wednesday, July 27, 2016 3:14 PM
>>>To: discuss@openvswitch.org
>>>Subject: [ovs-discuss] help - ovs 2.5 flow statistics issues
>>>
>>>Hi, All,
>>>
>>>I am running Open vSwitch 2.5.0 on Ubuntu 14.04 LTS with kernel 
>>>3.13.0-92-generic.
>>>I installed the following two rules on the switch:
>>>
>>>NXST_FLOW reply (xid=0x4):
>>> cookie=0x0, duration=2.183s, table=0, n_packets=0, n_bytes=0, idle_age=2, priority=50000,ip,in_port=1,nw_src=10.0.0.0/8,nw_dst=81.0.0.0/8 actions=output:2
>>> cookie=0x0, duration=10.829s, table=0, n_packets=0, n_bytes=0, idle_age=10, in_port=1 actions=output:2
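>>>
>>>For reference, rules like the above can be installed with something like 
>>>the following (a sketch; "br0" is an assumed bridge name):
>>>
>>>  ovs-ofctl add-flow br0 "priority=50000,ip,in_port=1,nw_src=10.0.0.0/8,nw_dst=81.0.0.0/8 actions=output:2"
>>>  ovs-ofctl add-flow br0 "in_port=1 actions=output:2"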
>>>
>>>Another host machine connected to the switch generates traffic (including 
>>>10.0.0.0/8 -> 81.0.0.0/8) that goes through the switch.
>>>However, the statistics were not updated correctly: n_packets for the 
>>>higher-priority rule remained 0, while all the traffic was credited to the 
>>>lower-priority rule.
>>>But if I modify the rules to be:
>>>
>>>NXST_FLOW reply (xid=0x4):
>>> cookie=0x0, duration=2.183s, table=0, n_packets=0, n_bytes=0, idle_age=2, priority=50000,ip,in_port=1,nw_src=10.0.0.0/8 actions=output:2
>>> cookie=0x0, duration=10.829s, table=0, n_packets=0, n_bytes=0, idle_age=10, in_port=1 actions=output:2
>>>
>>>Then it works fine. So it seems that statistics do not work properly when 
>>>matching on both nw_src and nw_dst?
>>
>>I don't see an issue when matching on both nw_src and nw_dst; the 
>>'n_packets' and 'n_bytes' counters increment for all matching packets. I 
>>quickly verified this on the master branch.
>>
>>NXST_FLOW reply (xid=0x4):
>> cookie=0x0, duration=430.063s, table=0, n_packets=2109130027, n_bytes=126547801620, idle_age=9, priority=50000,ip,in_port=1,nw_src=2.0.0.0/8,nw_dst=3.0.0.0/8 actions=output:2
>> cookie=0x0, duration=428.842s, table=0, n_packets=0, n_bytes=0, idle_age=428, in_port=1 actions=output:2
>>
>>Regards,
>>Bhanu Prakash.
>>
>>>Or did I misconfigure something?
>>>I'd really appreciate your help!
>>>
>>>Thanks,
>>>An