Dear Abeeha,

I’m afraid you will not have much luck here. The DPDK HQoS support code was 
contributed a long time ago; it is incomplete and not actively developed. It 
also seems to be unused: I disabled it in master 3 months ago (due to some 
API changes) and have not heard any complaints since.

Personally, I would be keen to remove it completely unless somebody is 
willing to pick it up and actively work on it.

— 
Damjan


> On 2 May 2019, at 14:47, Abeeha Aqeel <abeeha.aq...@xflowresearch.com> wrote:
> 
> Hi everyone, 
>  
> I’ve been testing the behavior of the HQoS plugin in VPP. For now there is 
> only a single subport (Subport 0), with 4096 pipes in each subport. I tested 
> HQoS with the default profile (profile 0), which should assign approximately 
> 2-3 Mbps to each user on a 10 Gbps link. With two iperf3 clients 
> simultaneously sending traffic to VPP, HQoS works fine and limits traffic to 
> 2 Mbps per client, but with a greater number of clients it does not behave 
> the same way.
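>  
> (As a quick sanity check on those numbers: the expected figure follows 
> directly from the pipe token bucket rate used in the table below; a minimal 
> sketch, assuming one HQoS pipe per PPPoE session:)
>  
>     /* cumulative expected bandwidth: pipe tb_rate is in bytes per second */
>     #include <stdio.h>
>  
>     int main(void)
>     {
>         double pipe_tb_rate = 305175.0;                /* bytes/s, from the table */
>         double per_pipe_mbps = pipe_tb_rate * 8 / 1e6; /* ~2.44 Mbps per session */
>         int sessions = 10;
>         printf("expected cumulative: %.1f Mbps\n",
>                per_pipe_mbps * sessions);              /* ~24.4 Mbps */
>         return 0;
>     }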
>  
> Below is a simple topology I implemented to test the HQoS functionality. I 
> tried changing the token bucket rate, token bucket size, and tc_period 
> values, but still without much luck. I have attached results for 10 clients 
> with different combinations of token bucket size, token bucket rate, and 
> tc_period.
>  
> Any ideas what the problem might be?
>  
> Also, in what proportion should the tb_rate, tb_size, tc_rate and tc_period 
> be set?
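>  
> (For reference, this is how I understand these parameters map onto the DPDK 
> rte_sched pipe profile that the VPP dpdk plugin configures; a minimal 
> sketch, assuming the pre-19.08 rte_sched API, with the values taken from 
> row 1 of the table below rather than given as a recommendation:)
>  
>     #include <rte_sched.h>
>  
>     /* One pipe profile. Rates are in bytes per second, tb_size is in bytes, 
>        tc_period is in milliseconds. Setting every tc_rate equal to the pipe 
>        tb_rate is an assumption here; my reading of rte_sched is only that 
>        each tc_rate must not exceed tb_rate. */
>     static struct rte_sched_pipe_params pipe_profile = {
>         .tb_rate     = 305175,        /* ~2.44 Mbps per subscriber */
>         .tb_size     = 1000000,       /* token bucket depth, bytes */
>         .tc_rate     = { 305175, 305175, 305175, 305175 },
>         .tc_period   = 40,            /* ms */
>         .wrr_weights = { 1, 1, 1, 1,  1, 1, 1, 1,
>                          1, 1, 1, 1,  1, 1, 1, 1 },
>     };
>  
> (If there is documented guidance on how tb_size and tc_period should scale 
> with tb_rate, a pointer would be much appreciated.)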
> [attachment: topology diagram used for the HQoS test (PNG)]
>  
> S# | Rate (bytes/s)       | Token Bucket Size (bytes) | TC Period (ms) | PPPoE    | Expected | Actual | Comments
>    | Subport    | Pipe    | Subport    | Pipe         | Subport | Pipe | sessions | (Mbps)   | (Mbps) |
> 1  | 1250000000 | 305175  | 1000000    | 1000000      | 10      | 40   | 10       | 24.4     | 8.1    | vpp remains stable
> 2  | 1250000000 | 305175  | 1000000    | 1000000      | 10      | 60   | 10       | 24.4     | 12.72  | vpp remains stable
> 3  | 1250000000 | 305175  | 1000000    | 1000000      | 10      | 100  | 10       | 24.4     | 11.4   | vpp remains stable
> 4  | 1250000000 | 305175  | 1000000    | 10000000     | 10      | 40   | 10       | 24.4     | 12.1   | vpp remains stable
> 5  | 1250000000 | 305175  | 10000000   | 10000000     | 10      | 40   | 10       | 24.4     | 12.03  | vpp remains stable
> 6  | 1250000000 | 305175  | 10000000   | 10000000     | 10      | 100  | 10       | 24.4     | 2.21   | vpp crashes after a few seconds
> 7  | 1250000000 | 305175  | 1000000    | 1000000      | 10      | 60   | 10       | 24.4     | 12.1   | vpp remains stable
>  
> (Expected/Actual = cumulative average bandwidth in Mbps; "PPPoE sessions" = 
> total number of PPPoE sessions.)
>  
> Regards,
>  
> Abeeha Aqeel
> Network Design Engineer, 
> xFlow Research Inc. 
> abeeha.aq...@xflowresearch.com <mailto:abeeha.aq...@xflowresearch.com>
>  
