Re: [vpp-dev] TCP Proxy Through-put

2018-03-07 Thread Florin Coras
Hi Shaun, 

Glad to see you’re experimenting with the proxy code. Note, however, that the 
implementation is just a proof of concept; we haven’t spent any time optimizing 
it. Nonetheless, it would be interesting to understand why this happens. 

Does the difference between Apache Traffic Server and vpp diminish if you 
increase the file size (10 or 100 times)? That is, could the difference be 
attributable to vpp being slow to ramp up throughput? For now, our TCP 
implementation uses NewReno as opposed to Cubic. 
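
For example, something along these lines on the client should make a ramp-up 
effect visible (the docroot path is a guess; the proxy address and port are 
taken from your session dump):

  # create a 10x larger object on the Apache server
  dd if=/dev/urandom of=/var/www/html/test-200M.bin bs=1M count=200

  # fetch it through the vpp proxy and report total time and average rate
  curl -s -o /dev/null -w 'time_total=%{time_total}s speed=%{speed_download} B/s\n' \
      http://192.168.11.5:12000/test-200M.bin

If the gap to the ATS setup shrinks as the transfer gets longer, slow ramp-up 
is the likely explanation.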

Also, does “show error” indicate that lots of duplicate acks are exchanged? 
Note that the drop counters you see are actually packets that have been 
consumed by the stack and have afterwards been discarded. To see if the NICs 
have issues keeping up, try “sh hardware” and check the drop counters there. 
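
Roughly, from the host shell (the grep patterns are only a starting point, 
adjust as needed):

  vppctl show error | grep -i -e dup -e retrans
  vppctl show hardware-interfaces | grep -i -e miss -e drop -e error

Non-zero miss/drop counters in the hardware output would point at the NIC or 
rx ring rather than the stack itself.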

From the output you’ve pasted, I can only conclude that vpp has no problems 
receiving the traffic but it has issues pushing it out. 
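
If you want to dig further, a short packet trace taken while the transfer runs 
should show whether the client-facing side is busy retransmitting, e.g. (the 
node name assumes the dpdk driver, which your GigabitEthernet interfaces 
suggest):

  vppctl clear trace
  vppctl trace add dpdk-input 100
  # let part of the transfer run, then:
  vppctl show trace

Duplicate acks arriving on Gig0/6/0 and retransmitted segments going back out 
would confirm drops on the path toward the client.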

Cheers, 
Florin

> On Mar 7, 2018, at 1:54 PM, Shaun McGinnity wrote:
> 
> Hi,
>
> We are doing some basic testing using the TCP proxy app in stable-18.01 
> build. When I proxy a single HTTP request for a 20MB file through to an 
> Apache web server I get less than one-tenth of the throughput compared to 
> using a Linux TCP Proxy (Apache Traffic Server) on exactly the same setup. 
> The latency also varies significantly between 1 and 3 seconds.
>
> I’ve modified the fifo-size and rcv-buf-size and also increased the TCP 
> window scaling, which helps a bit, but I still see a large difference.
>
> Here is a sample output when running the test. The drops on the client-side 
> interface are a lot higher than on the server side – is that significant?
>
> Is there any other tuning that I could apply?
>
> show int addr
> GigabitEthernet0/6/0 (up):
>   192.168.11.5/24
> GigabitEthernet0/7/0 (up):
>   172.16.11.5/24
> local0 (dn):
>
> show int
>               Name               Idx       State          Counter          Count
> GigabitEthernet0/6/0              1         up       rx packets                  5012
>                                                      rx bytes                  331331
>                                                      tx packets                 14813
>                                                      tx bytes                21997981
>                                                      drops                       5008
>                                                      ip4                         5010
> GigabitEthernet0/7/0              2         up       rx packets                 14688
>                                                      rx bytes                21941197
>                                                      tx packets                 14680
>                                                      tx bytes                  968957
>                                                      drops                         12
>                                                      ip4                        14686
> local0                            0        down
>
> show session verbose 2
> Thread 0: 2 active sessions
> [#0][T] 192.168.11.5:12000->192.168.11.4:37270 ESTABLISHED
>  flags:  timers: [RETRANSMIT]
> snd_una 18309357 snd_nxt 18443589 snd_una_max 18443589 rcv_nxt 96 rcv_las 96
> snd_wnd 705408 rcv_wnd 524288 snd_wl1 96 snd_wl2 18309357
> flight size 134232 send space 620 rcv_wnd_av 524288
> cong none cwnd 134852 ssthresh 33244 rtx_bytes 0 bytes_acked 2856
> prev_ssthresh 0 snd_congestion 5101145 dupack 0 limited_transmit 4135592144
> tsecr 1408587266 tsecr_last_ack 1408587266
> rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 1408587268 rtt_seq 177798749
> tsval_recent 1403556248 tsval_recent_age 1
> scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
> last_bytes_delivered 0 high_sacked 164476297 snd_una_adv 0
> cur_rxt_hole 4294967295 high_rxt 164406209 rescue_rxt 0
> Rx fifo: cursize 0 nitems 524288 has_event 0
> head 95 tail 95
> ooo pool 0 active elts newest 4294967295
> Tx fifo: cursize 272884 nitems 524288 has_event 1
> head 483564 tail 232160
> ooo pool 0 active elts newest 4294967295
> [#0][T] 172.16.11.5:26485->192.168.200.123:80 ESTABLISHED   
>  flags:  timers: []
> snd_una 96 snd_nxt 96 snd_una_max 96 rcv_nxt 18582241 rcv_las 18582241
> snd_wnd 29056 rcv_wnd 229984 snd_wl1 18582185 snd_wl2 96
> flight size 0 send space 4385 rcv_wnd_av 229984
> cong none cwnd 4385 ssthresh 28960 rtx_bytes 0 bytes_acked 0
> prev_ssthresh 0 snd_congestion 4132263094 dupack 0 limited_transmit 4132263094
> tsecr 1408587264 tsecr_last_ack 1408587264
> rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 0 rtt_seq 162704298
> tsval_recent 1403552275 tsval_recent_age 2
> scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
> last_bytes_delivered 0 high_sacked 0 snd_una_adv 0
> cur_rxt_hole 4294967295 high_rxt 0 rescue_rxt 0
> Rx fifo: cursize 272884 nitems 524288 has_event 1
> head 483564 tail 232160
> ooo pool 0 active elts newest 4294967295
> Tx fifo: cursize 0 nitems 524288 has_event 0
> head 95 tail 95
> ooo pool 0 active elts newest 4294967295

[vpp-dev] TCP Proxy Through-put

2018-03-07 Thread Shaun McGinnity
Hi,

We are doing some basic testing using the TCP proxy app in stable-18.01 build. 
When I proxy a single HTTP request for a 20MB file through to an Apache web 
server I get less than one-tenth of the throughput compared to using a Linux 
TCP Proxy (Apache Traffic Server) on exactly the same setup. The latency also 
varies significantly between 1 and 3 seconds.

I've modified the fifo-size and rcv-buf-size and also increased the TCP 
window scaling, which helps a bit, but I still see a large difference.

Here is a sample output when running the test. The drops on the client-side 
interface are a lot higher than on the server side - is that significant?

Is there any other tuning that I could apply?

show int addr
GigabitEthernet0/6/0 (up):
  192.168.11.5/24
GigabitEthernet0/7/0 (up):
  172.16.11.5/24
local0 (dn):

show int
              Name               Idx       State          Counter          Count
GigabitEthernet0/6/0              1         up       rx packets                  5012
                                                     rx bytes                  331331
                                                     tx packets                 14813
                                                     tx bytes                21997981
                                                     drops                       5008
                                                     ip4                         5010
GigabitEthernet0/7/0              2         up       rx packets                 14688
                                                     rx bytes                21941197
                                                     tx packets                 14680
                                                     tx bytes                  968957
                                                     drops                         12
                                                     ip4                        14686
local0                            0        down

show session verbose 2
Thread 0: 2 active sessions
[#0][T] 192.168.11.5:12000->192.168.11.4:37270 ESTABLISHED
 flags:  timers: [RETRANSMIT]
snd_una 18309357 snd_nxt 18443589 snd_una_max 18443589 rcv_nxt 96 rcv_las 96
snd_wnd 705408 rcv_wnd 524288 snd_wl1 96 snd_wl2 18309357
flight size 134232 send space 620 rcv_wnd_av 524288
cong none cwnd 134852 ssthresh 33244 rtx_bytes 0 bytes_acked 2856
prev_ssthresh 0 snd_congestion 5101145 dupack 0 limited_transmit 4135592144
tsecr 1408587266 tsecr_last_ack 1408587266
rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 1408587268 rtt_seq 177798749
tsval_recent 1403556248 tsval_recent_age 1
scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 164476297 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 164406209 rescue_rxt 0
Rx fifo: cursize 0 nitems 524288 has_event 0
head 95 tail 95
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 272884 nitems 524288 has_event 1
head 483564 tail 232160
ooo pool 0 active elts newest 4294967295
[#0][T] 172.16.11.5:26485->192.168.200.123:80 ESTABLISHED
 flags:  timers: []
snd_una 96 snd_nxt 96 snd_una_max 96 rcv_nxt 18582241 rcv_las 18582241
snd_wnd 29056 rcv_wnd 229984 snd_wl1 18582185 snd_wl2 96
flight size 0 send space 4385 rcv_wnd_av 229984
cong none cwnd 4385 ssthresh 28960 rtx_bytes 0 bytes_acked 0
prev_ssthresh 0 snd_congestion 4132263094 dupack 0 limited_transmit 4132263094
tsecr 1408587264 tsecr_last_ack 1408587264
rto 200 rto_boff 0 srtt 1 rttvar 1 rtt_ts 0 rtt_seq 162704298
tsval_recent 1403552275 tsval_recent_age 2
scoreboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 0 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 0 rescue_rxt 0
Rx fifo: cursize 272884 nitems 524288 has_event 1
head 483564 tail 232160
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 0 nitems 524288 has_event 0
head 95 tail 95
ooo pool 0 active elts newest 4294967295


Regards,

Shaun