Hi Juraj,
Could you please try the attached patch?
Thanks.
-----Original Message-----
From: Juraj Linkeš
Sent: December 4, 2019 18:12
To: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) ;
Benoit Ganne (bganne) ; Maciek Konstantynowicz (mkonstan)
; vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak
Hi Jon,
Apologies for the delay.
Is this what you’re after:
https://gerrit.fd.io/r/c/vpp/+/23808
/neale
From: on behalf of "Jon Loeliger via Lists.Fd.Io"
Reply to: "j...@netgate.com"
Date: Thursday 7 November 2019 at 06:28
To: vpp-dev
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] FIB
Hi Jerome,
Thanks for the clarification
Regards,
Nitin
> -----Original Message-----
> From: Jerome Tollet (jtollet)
> Sent: Wednesday, December 4, 2019 11:30 PM
> To: Nitin Saxena ; Thomas Monjalon
> ; Damjan Marion
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] efficient use of
Hi Dom,
I would actually recommend testing with iperf because it should not be slower
than the builtin echo server/client apps. Remember to add fifo-size to your
echo app CLI commands (something like fifo-size 4096 for 4 MB) to increase the
fifo sizes.
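For illustration only (the URI and sizes below are made up, and option names
may differ slightly between VPP versions), the builtin apps would be started
roughly like this, with fifo-size given in kB:
    vpp# test echo server uri tcp://10.0.0.1/9000 fifo-size 4096
    vpp# test echo clients uri tcp://10.0.0.1/9000 fifo-size 4096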
Also note that you’re trying
Hi Dom,
I suspect your client/server are really bursty in sending/receiving and your
fifos are relatively small. So probably the delay in issuing the cli in the two
vms is enough for the receiver to drain its rx fifo. Also, whenever the rx fifo
on the receiver fills, the sender will most
Hi Florin,
Those are tcp echo results. Note that the "show session verbose 2" command was
issued while there was still traffic being sent. Interesting that on the client
(sender) side the tx fifo is full (cursize 65534 nitems 65534) and on the
server (receiver) side the rx fifo is empty
Hi Dom,
[traveling so a quick reply]
For some reason, your rx/tx fifos (see nitems), and implicitly the snd and rcv
wnd, are 64 kB in the logs below. Is this the tcp echo or iperf result?
Regards,
Florin
> On Dec 4, 2019, at 7:29 AM, dch...@akouto.com wrote:
>
> Hi,
>
> Thank you Florin
It turns out I was using DPDK virtio; with help from Mohsin I changed the
configuration and tried to repeat the tests using VPP native virtio. The results
are similar, but there are some interesting new observations; sharing them here
in case they are useful to others or trigger any ideas.
After
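In case it helps anyone repeating this, a rough sketch of the switch to the
native driver (the PCI address is hypothetical and exact options vary by VPP
version): keep the device out of the dpdk { dev ... } section of startup.conf
so the DPDK plugin does not grab it, then create the interface from the CLI:
    vpp# create interface virtio 0000:00:06.0
    vpp# set interface state virtio-0/0/6/0 up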
04/12/2019 16:29, Jerome Tollet (jtollet):
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
>
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at
> the cost of performance. (It's actually the opposite ie AVF driver)
I mean performance
Hi Nitin,
I am not necessarily speaking about Inline IPSec. I was just saying that VPP
leaves you the choice to do both inline and lookaside types of offload.
Here is a public example of inline acceleration:
Hi Jerome,
I have a query unrelated to the original thread.
>> There are other examples (lookaside and inline)
By inline do you mean "Inline IPSEC"? Could you please elaborate what you meant
by inline offload in VPP?
Thanks,
Nitin
> -----Original Message-----
> From: vpp-dev@lists.fd.io On
Are you using VPP native virtio or DPDK virtio?
Jerome
From: on behalf of "dch...@akouto.com"
Date: Wednesday, 4 December 2019 at 16:29
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] VPP / tcp_echo performance
Hi,
Thank you Florin and Jerome for your time, very much appreciated.
· For
Actually, native drivers (like Mellanox or AVF) avoid the buffer conversion and
tend to be faster than when used through DPDK. I suspect VPP is not the only
project to report this extra cost.
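As a hypothetical illustration (the PCI address is made up, and the VF must
already exist and be bound to vfio-pci), a native AVF interface is created
straight from the VPP CLI, with no DPDK involved:
    vpp# create interface avf 0000:3b:02.0
    vpp# set interface state avf-0/3b/2/0 up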
Jerome
On 04/12/2019 15:43, "Thomas Monjalon" wrote:
03/12/2019 22:11, Jerome Tollet
Hi Thomas,
I strongly disagree with your conclusions from this discussion:
1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at
the cost of performance. (It's actually the opposite ie AVF driver)
2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
Hi,
Thank you Florin and Jerome for your time, very much appreciated.
* For VCL configuration, FIFO sizes are 16 MB (a vcl.conf sketch is included after this list)
* "show session verbose 2" does not indicate any retransmissions. Here are the
numbers during a test run where approx. 9 GB were transferred (the difference
in values between
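For reference, a minimal vcl.conf sketch for 16 MB fifos (parameter names as I
remember the VCL config parser, so double-check against your version):
    vcl {
      rx-fifo-size 16777216
      tx-fifo-size 16777216
    }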
03/12/2019 22:11, Jerome Tollet (jtollet):
> Thomas,
> I am afraid you may be missing the point. VPP is a framework where plugins
> are first class citizens. If a plugin requires leveraging offload (inline or
> lookaside), it is more than welcome to do it.
> There are multiple examples including
I looked into this and there are some problems.
The first problem is the inability to fine-tune any parameters we might want
for the target CPU/microarchitecture (for Arm, that would be building packages
with specifics for ThunderX, McBin, Raspberry Pi etc.). I'm not sure how Qemu
does the
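For the sake of discussion, the kind of per-target tuning meant here (assuming
the top-level Makefile still forwards extra CMake arguments; the flags are just
an example for ThunderX2) would look something like:
    make build-release VPP_EXTRA_CMAKE_ARGS="-DCMAKE_C_FLAGS='-march=armv8.1-a -mtune=thunderx2t99'"
which of course yields packages usable only on that one microarchitecture.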
04/12/2019 15:25, Ole Troan:
> Thomas,
>
> > 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> > performance cost)
>
> Do you have any examples/features where a DPDK/offload solution would be
> performing better than VPP?
> Any numbers?
No sorry, I am not benchmarking
03/12/2019 20:56, Ole Troan:
> Interesting discussion.
>
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If an user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> >
> > Anyway real performance benefits are in hardware
Thomas,
> 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> performance cost)
Do you have any examples/features where a DPDK/offload solution would be
performing better than VPP?
Any numbers?
Best regards,
Ole
03/12/2019 20:01, Damjan Marion:
> On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > 03/12/2019 13:12, Damjan Marion:
> >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> >>> 03/12/2019 00:26, Damjan Marion:
> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > VPP has a buffer
Coverity run failed today.
Current number of outstanding issues is 3
Newly detected: 0
Eliminated: 0
More details can be found at
https://scan.coverity.com/projects/fd-io-vpp/view_defects
Thanks for the report, will fix. Can’t reply to the @github address.
From: Ivan Shvedunov
Sent: Wednesday, December 4, 2019 5:56 AM
To: FDio/site
Cc: Subscribed
Subject: [FDio/site] Typo in socksvr config parameter name (#50)
This
Yuri,
> NAT44 does not use all addresses uniformly
The address and port allocation algorithm is pluggable.
Contributions of alternative/better algorithms would be very welcome!
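For example (from memory, so the exact CLI syntax may differ slightly), the
built-in alternatives can already be selected with:
    nat addr-port-assignment-alg default
    nat addr-port-assignment-alg port-range 1024 - 65535
and a new algorithm would come in as another allocation callback in the NAT
plugin.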
Best regards,
Ole
>
> Hi,
>
> I use nat output address space:
> nat44 add address 19.246.159.5 - 19.246.159.100
>
Hi Ben, Lijian, Honnappa,
The issue is reproducible after the second invocation of show pci:
DBGvpp# show pci
Address     Sock  VID:PID    Link Speed   Driver  Product Name  Vital Product Data
:11:00.0    2     8086:10fb  5.0 GT/s x8  ixgbe
NAT44 does not use all addresses uniformly
Hi,
I use nat output address space:
nat44 add address 19.246.159.5 - 19.246.159.100
Configuration:
nat {
translation hash buckets 1048576
translation hash memory 268435456
user hash buckets 250
max translations per user 2
}
Looks like
Hi Florin,
Thanks for your patient reply. I still have some doubts inline.
The XL710 does not show up in the list from "show int".
Any suggestion?
[root@localhost ~]# dmesg | grep XL710
[1.152341] i40e: Intel(R) Ethernet Connection XL710 Network Driver -
version 2.1.14-k
[root@localhost ~]#
[root@localhost ~]# dpdk-devbind --status
Network devices using DPDK-compatible driver
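In case the port simply is not bound/whitelisted, the usual sequence (the PCI
address below is hypothetical, take yours from dpdk-devbind --status) is to
bind it to a DPDK-compatible driver and list it in the dpdk section of
startup.conf:
    modprobe vfio-pci
    dpdk-devbind --bind=vfio-pci 0000:82:00.0
    # startup.conf
    dpdk {
      dev 0000:82:00.0
    }
After restarting VPP the port should appear in "show int", for an XL710
typically as FortyGigabitEthernet82/0/0.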
Hi Dom,
In addition to Florin’s questions, can you clarify what you mean by
“…interfaces are assigned to DPDK/VPP”? What driver are you using?
Regards,
Jerome
From: on behalf of Florin Coras
Date: Wednesday, 4 December 2019 at 02:31
To: "dch...@akouto.com"
Cc: "vpp-dev@lists.fd.io"
Subject: Re: