Hi Qian and Gilad,
Thanks for your reply. We are using dpdk-2.0.0 and mlnx-en-2.4-1.0.0.1 on a
Mellanox ConnectX-3 EN with a single 40G port.
I ran testpmd on the server with the following command:
sudo ./testpmd -c 0xff -n 4 -- -i --portmask=0x1 --port-topology=chained \
    --rxq=4 --txq=4 --nb-cores=4
According to the 82599 and x540 HW specifications, the RS bit *must* be
set in the last descriptor of *every* packet.
This patch fixes the Tx hang we were constantly hitting with a
seastar-based application on x540 NIC.
Signed-off-by: Vlad Zolotarov
---
drivers/net/ixgbe/ixgbe_ethdev.c | 9 +
d
Hi Vlad
I don't think the changes are needed. It says in the datasheet that the RS bit
should be set on the last descriptor of every packet ONLY WHEN TXDCTL.WTHRESH
equals ZERO.
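For illustration, that condition would look roughly like this in the tx path
(a simplified sketch only, not the actual ixgbe PMD code; field and macro
names are approximate):

    /* Sketch: decide whether to request write-back on the packet's last descriptor. */
    uint32_t cmd_type_len = IXGBE_TXD_CMD_EOP;        /* end of packet */
    if (txq->wthresh == 0 ||                          /* WTHRESH == 0: RS is required */
        txq->nb_tx_used >= txq->tx_rs_thresh)         /* else only every tx_rs_thresh packets */
        cmd_type_len |= IXGBE_TXD_CMD_RS;             /* request descriptor write-back */
    txd->read.cmd_type_len |= rte_cpu_to_le_32(cmd_type_len);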
Regards,
Helin
> -Original Message-
> From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> Sent: Th
I am running DPDK in a KVM-based virtual machine. Two ports were bound to the
igb_uio driver and KNI generated two ports, vEth0 and vEth1. I was trying to
use ethtool to get information about these two ports, but failed to do so. It
reported "Operation not supported".
How can I address this issue?
Thanks
Any comments on this question?
Thanks
-Avinash
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Yeddula, Avinash
Sent: Wednesday, August 12, 2015 3:04 PM
To: dev at dpdk.org
Subject: [dpdk-dev] Lookup mechanism in DPDK HASH table.
Hello All,
I'm using DPDK extenda
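For reference, the basic add/lookup flow with librte_hash is along these lines
(a minimal sketch; the exact parameter fields can differ slightly between DPDK
releases):

#include <rte_hash.h>
#include <rte_jhash.h>

struct rte_hash_parameters params = {
        .name = "example_hash",
        .entries = 1024,
        .key_len = sizeof(uint32_t),
        .hash_func = rte_jhash,
        .hash_func_init_val = 0,
        .socket_id = 0,                    /* or rte_socket_id() */
};
struct rte_hash *h = rte_hash_create(&params);

uint32_t key = 42;
int pos = rte_hash_add_key(h, &key);       /* index of the entry, negative on error */
int idx = rte_hash_lookup(h, &key);        /* same index on hit, negative on miss */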
Hi John,
I got PTP working and was able to transmit a valid PTPv2 packet over the
DPDK network card.
Every time a PTP packet arrives I get the following message in the testpmd
application: Port 0 Received PTP packet not filtered by hardware
However, the hardware does not change the timestamp, when
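For reference, the receive-side check I understand testpmd's ieee1588 mode to
make is roughly the following (a sketch based on the DPDK 2.1 timesync API;
error handling omitted):

    struct timespec ts;

    rte_eth_timesync_enable(port_id);                        /* enable IEEE1588/PTP timestamping */
    /* ... after rte_eth_rx_burst() returns a packet mb ... */
    if ((mb->ol_flags & PKT_RX_IEEE1588_PTP) == 0)
            printf("PTP packet not filtered by hardware\n"); /* L2 ethertype filter missed it */
    else if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
            printf("RX timestamp: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);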
Hi John,
> +
> +* **Added additional hotplug support.**
> +
> + Port hotplug support was added to the following PMDs:
> +
> + * e1000/igb.
> + * ixgbe.
> + * i40e.
> + * fm10k.
> + * Ring.
> + * Bonding.
> + * Virtio.
ring, bonding and virtio should probably be all lowercase.
> +
> +
Renamed the function to comply with the coding standard.
Signed-off-by: Maciej Gajdzica
---
app/test/test_table.c |2 +-
app/test/test_table_acl.c |2 +-
app/test/test_table_acl.h |2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/app/test/test_table.c b/app/test/
Added release notes for the DPDK R2.1 release.
Signed-off-by: John McNamara
---
doc/guides/rel_notes/release_2_1.rst | 980 ++-
1 file changed, 970 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_1.rst
b/doc/guides/rel_notes/release_2_
Please review the DPDK 2.1 release notes for omissions or errors.
John McNamara (1):
doc: updated release notes for r2.1
doc/guides/rel_notes/release_2_1.rst | 980 ++-
1 file changed, 970 insertions(+), 10 deletions(-)
--
1.8.1.4
v2 changes:
1. Create an SVG picture.
2. Add a section on how to check the memory channels with dmidecode -t memory
   (example command below).
3. Add the command for checking a PCIe slot's speed (example command below).
4. Some doc updates according to the comments.
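Example invocations for items 2 and 3 (the PCI address below is only a
placeholder for the NIC's actual bus:device.function):

    sudo dmidecode -t memory | grep -iE 'locator|speed'   # which slots are populated, per-DIMM speed
    sudo lspci -s 03:00.0 -vv | grep -i lnksta            # negotiated PCIe link speed and width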
Add a new guide doc under the guides folder. This document is a
step-by-step guide about
Xiaozhou,
Following Qian's answer: 2 Mpps is VERY (VERY) low and far below what we see
even with a single core.
Which version of DPDK and PMD are you using? Are you using MLNX optimized libs
for PMD? Can you provide more details on the exact setup?
Can you run a simple test with testpmd and see if y
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Maciej Gajdzica
> Sent: Wednesday, August 12, 2015 2:41 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] [PATCH 1/1] test_table: fixed failing unit tests checking
> offset
>
> In commit: 1129992baa61d72c5 checking
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Gajdzica, MaciejX T
> Sent: Wednesday, August 12, 2015 2:58 PM
> To: Thomas Monjalon
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/1] test_table: added ACL table test to the
> list
>
>
>
> >
On Wed, Aug 12, 2015 at 04:02:37PM +0800, Ouyang Changchun wrote:
> Each virtio device could have multiple queues, say 2 or 4, at most 8.
> Enabling this feature allows a virtio device/port on the guest to use
> different vCPUs to receive/transmit packets from/to each queue.
>
> In multip
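For context, virtio-net multiqueue is typically enabled along these lines
(illustrative values only, not taken from this patch set):

    # QEMU side: 4 queue pairs, vectors = 2 * queues + 2
    -netdev tap,id=net0,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=10

    # guest side: enable the extra queue pairs
    ethtool -L eth0 combined 4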
Xiaozhou
So it seems the performance bottleneck is not at the core. Have you checked
the Mellanox NIC's configuration? How many queues per port are you using? Could
you try the l3fwd example with Mellanox to check if the performance is good
enough? I'm not familiar with the Mellanox NIC, but if you ha
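For reference, a typical l3fwd run spreading one port across four queues/lcores
looks like this (core and queue numbers are illustrative; each --config entry
is (port,queue,lcore)):

    sudo ./l3fwd -c 0xf -n 4 -- -p 0x1 --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)"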