Hi Hongjun,

Yes, sure. I will try the patch and share my results as soon as possible.


Regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Ni, Hongjun
Sent: Tuesday, May 7, 2019 6:40 PM
To: Abeeha Aqeel; Singh, Jasvinder
Cc: b...@xflowresearch.com; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] HQoS

Hi Abeeha,

A patch has been submitted to fix the API change:
https://gerrit.fd.io/r/#/c/16839/
Could you give it a try?

Thanks,
Hongjun

From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Abeeha Aqeel
Sent: Monday, May 6, 2019 6:32 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: b...@xflowresearch.com; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] HQoS

Hi Hongjun,

I have tested the HQoS plugin with iperf3. Below is the simple topology I have 
implemented:

[topology diagram]

First, I tested the default profile (profile 0) provided by VPP and assigned it 
to all 4096 pipes (users), which should give each user approximately 2.4 Mbps 
(10 Gbps / 4096). It worked fine for 5 users, i.e. each user was assigned about 
2.4 Mbps, but with a larger number of users it did not behave the same way. I 
changed the token bucket rate, token bucket size and tc_period parameters in the 
hqos.c code, which did not give me any conclusive results. The results for 10 
clients with different combinations of token bucket size, token bucket rate and 
tc_period are in the table below.
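
For reference, the "Expected" column in the table below follows directly from 
the pipe token bucket rate of the default profile. A quick sanity check of that 
arithmetic (plain standalone C, no VPP APIs involved; the constants are the 
ones used in the tests):

#include <stdio.h>

int main(void)
{
    /* Values from the test runs below (default pipe profile). */
    const double pipe_tb_rate_Bps = 305175.0; /* pipe token bucket rate, bytes/s */
    const unsigned n_sessions = 10;           /* PPPoE sessions under test */

    /* bytes/s -> Mbps */
    double per_pipe_mbps = pipe_tb_rate_Bps * 8.0 / 1e6;  /* ~2.44 Mbps per user */
    double cumulative_mbps = per_pipe_mbps * n_sessions;  /* ~24.4 Mbps total */

    printf("per pipe:   %.2f Mbps\n", per_pipe_mbps);
    printf("cumulative: %.2f Mbps\n", cumulative_mbps);
    return 0;
}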

Any ideas what the problem might be?

Also, in what proportion should tb_rate, tb_size, tc_rate and tc_period be 
assigned?
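
On that proportion question, the sketch below shows the usual relationship 
between the four parameters in a DPDK-style hierarchical scheduler token 
bucket: credits accrue at tb_rate and are capped at tb_size, while each traffic 
class may send at most tc_rate * tc_period bytes per tc_period, so tb_size 
normally needs to be at least on the order of tb_rate * tc_period plus room for 
the largest burst. This is only an illustration, not the actual rte_sched/hqos 
code, and the field and function names are made up for the example:

#include <stdio.h>
#include <stdint.h>

/* Illustrative token-bucket state for one pipe (descriptive names,
 * not the actual VPP/DPDK structure fields). */
typedef struct
{
  uint64_t tb_rate;      /* bytes credited per second           */
  uint64_t tb_size;      /* bucket depth in bytes (burst limit) */
  uint64_t tb_credits;   /* tokens currently available          */
  uint64_t tc_rate;      /* per-traffic-class rate, bytes/s     */
  uint64_t tc_period_ms; /* traffic-class refill period, ms     */
  uint64_t tc_credits;   /* TC credits left in this period      */
} tb_sketch_t;

/* Called once per tc_period. */
static void
tb_refill (tb_sketch_t *tb)
{
  /* The bucket gains tb_rate * tc_period worth of tokens, capped at tb_size. */
  tb->tb_credits += tb->tb_rate * tb->tc_period_ms / 1000;
  if (tb->tb_credits > tb->tb_size)
    tb->tb_credits = tb->tb_size;

  /* Each traffic class may send at most tc_rate * tc_period bytes per period. */
  tb->tc_credits = tb->tc_rate * tb->tc_period_ms / 1000;
}

/* Returns 1 if a packet of pkt_len bytes may be sent now. */
static int
tb_send (tb_sketch_t *tb, uint32_t pkt_len)
{
  if (pkt_len > tb->tb_credits || pkt_len > tb->tc_credits)
    return 0;
  tb->tb_credits -= pkt_len;
  tb->tc_credits -= pkt_len;
  return 1;
}

int
main (void)
{
  /* Pipe values from row 1 of the table below; tc_rate is assumed equal to
   * the pipe rate here, purely for illustration. */
  tb_sketch_t pipe = { .tb_rate = 305175, .tb_size = 1000000,
                       .tc_rate = 305175, .tc_period_ms = 40 };

  tb_refill (&pipe);
  printf ("credits gained per tc_period: %llu bytes\n",
          (unsigned long long) (pipe.tb_rate * pipe.tc_period_ms / 1000));
  printf ("1500-byte packet allowed now: %s\n",
          tb_send (&pipe, 1500) ? "yes" : "no");
  return 0;
}

With the row-1 pipe values from the table below (tb_rate = 305175 B/s, 
tc_period = 40 ms), one tc_period is worth roughly 12 kB of credit, well below 
the 1,000,000-byte tb_size used in the tests.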


(Rate = token bucket rate in bytes/s; TB = token bucket size in bytes; TC = 
tc_period in ms; Sessions = total number of PPPoE sessions; Expected/Actual = 
cumulative average bandwidth in Mbps; Sub = subport level, Pipe = pipe level.)

S# | Sub Rate   | Pipe Rate | Sub TB   | Pipe TB  | Sub TC | Pipe TC | Sessions | Expected | Actual | Comments
 1 | 1250000000 | 305175    | 1000000  | 1000000  | 10     | 40      | 10       | 24.4     | 8.1    | vpp remains stable
 2 | 1250000000 | 305175    | 1000000  | 1000000  | 10     | 60      | 10       | 24.4     | 12.72  | vpp remains stable
 3 | 1250000000 | 305175    | 1000000  | 1000000  | 10     | 100     | 10       | 24.4     | 11.4   | vpp remains stable
 4 | 1250000000 | 305175    | 1000000  | 10000000 | 10     | 40      | 10       | 24.4     | 12.1   | vpp remains stable
 5 | 1250000000 | 305175    | 10000000 | 10000000 | 10     | 40      | 10       | 24.4     | 12.03  | vpp remains stable
 6 | 1250000000 | 305175    | 10000000 | 10000000 | 10     | 100     | 10       | 24.4     | 2.21   | vpp crashes after a few seconds
 7 | 1250000000 | 305175    | 1000000  | 1000000  | 10     | 60      | 10       | 24.4     | 12.1   | vpp remains stable



Regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Ni, Hongjun
Sent: Monday, May 6, 2019 12:52 PM
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

For downstream bandwidth limiting, we leveraged the HQoS plugin in the OpenBRAS 
solution.
In our previous integration test, it supported 64K subscribers with HQoS.

Thanks,
Hongjun

From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Abeeha Aqeel
Sent: Monday, May 6, 2019 3:28 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: b...@xflowresearch.com; vpp-dev@lists.fd.io
Subject: [vpp-dev] HQoS

Hi Hongjun,  

I have been trying to implement downstream bandwidth limiting using the HQoS 
plugin in VPP. It works fine for a small number of clients (fewer than 5) but 
does not assign the proper bandwidth for a larger number of clients.

Can you please elaborate on which method the OpenBRAS solution uses for 
bandwidth-limiting traffic? Are you using the token-bucket algorithm as well?


Regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com




