Update RTE_VERIFY macro to make it possible to use complex expressions
in RTE_ASSERT.
Signed-off-by: Ilya V. Matveychikov
Fixes: 148f963fb532 ("xen: core library changes")
Cc: bruce.richard...@intel.com
---
Currently it is not possible to use complex expressions in an assertion
like RTE_ASSERT((1 + 2) == 3).
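As an illustration of the intent (a minimal sketch, not the actual rte_debug.h
change), a verify-style macro that parenthesizes the expression when testing it
and stringifies it for the error message handles compound expressions such as
(1 + 2) == 3:

#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal sketch of a verify/assert macro pair (illustrative only, not
 * the actual RTE_VERIFY patch): the expression is evaluated inside its
 * own parentheses and stringified with #exp for the failure message.
 */
#define MY_VERIFY(exp) do { \
        if (!(exp)) { \
                fprintf(stderr, "line %d\tassert \"%s\" failed\n", \
                        __LINE__, #exp); \
                abort(); \
        } \
} while (0)

#define MY_ASSERT(exp) MY_VERIFY(exp)

int main(void)
{
        MY_ASSERT((1 + 2) == 3);        /* compound expression: passes */
        return 0;
}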
Without this patch, the number of queues per i40e VF is defined as 4
by CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4 in config/common_base.
It is a fixed value determined at build time and can't be changed
at run time.
With this patch, the number of queues per i40e VF can be determined
during
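As a rough sketch of the difference between the two approaches (the
"queue-num-per-vf" key and helper below are illustrative assumptions, not taken
from the patch), a build-time constant versus a runtime devargs-style parameter
could look like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the build-time CONFIG_ value: fixed when compiling. */
#define QUEUE_NUM_PER_VF_DEFAULT 4

/* Hypothetical runtime override parsed from a devargs-style string. */
static int queue_num_per_vf(const char *devargs)
{
        const char *key = "queue-num-per-vf=";
        const char *p = devargs ? strstr(devargs, key) : NULL;

        if (p == NULL)
                return QUEUE_NUM_PER_VF_DEFAULT;  /* compile-time value */
        return atoi(p + strlen(key));             /* run-time value */
}

int main(void)
{
        printf("%d\n", queue_num_per_vf(NULL));                  /* 4 */
        printf("%d\n", queue_num_per_vf("queue-num-per-vf=8"));  /* 8 */
        return 0;
}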
> On Nov 19, 2017, at 12:18 PM, Ilya Matveychikov wrote:
>
> Update RTE_VERIFY macro to make it possible to use complex expressions
> in RTE_ASSERT.
>
> Signed-off-by: Ilya V. Matveychikov
>
> Fixes: 148f963fb532 ("xen: core library changes")
> Cc: bruce.richard...@intel.com
>
> ---
> No
Hi Shippen,
DPDK is a BSD-licensed project, unlike the Linux kernel. BSD is a very
permissive license.
I am not a lawyer; I am just concerned about whether including a proprietary license
has any implications for the DPDK project. We are planning to move to SPDX-based
license identifiers to clear
A warning is issued when the argument passed to the likely() or unlikely()
builtins evaluates to a pointer value, as __builtin_expect() expects a
'long int' type for its first argument. With this fix,
a pointer value is converted to an integer with the value 0 or 1.
Signed-off-by: Aleksey Bau
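A sketch of the usual form of this fix, assuming the commit applies the common
!!(x) normalization (the text above only says the value is converted to 0 or 1):

#include <stdio.h>

/*
 * Sketch of the idea (assumed form of the fix): __builtin_expect() takes
 * a long, so normalize the argument with !! before handing it over.
 * A pointer argument then becomes 0 or 1 instead of being converted
 * implicitly and triggering a warning.
 */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static void print_if_set(const char *p)
{
        if (unlikely(p == NULL))   /* integer expression: always fine */
                return;
        if (likely(p))             /* pointer expression: !! makes it 0/1 */
                puts(p);
}

int main(void)
{
        print_if_set("hello");
        print_if_set(NULL);
        return 0;
}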
L3fwd failed to start on the PF because the tx_q check failed.
That occurred when SRIOV is active and tx_q > rx_q.
tx_q is equal to nb_q_per_pool, and nb_q_per_pool
should equal the maximum number of queues supported by the HW, not nb_rx_q.
Fixes: 27b609cbd1c6 (ethdev: move the multi-queue mode check to
The VF can't run in multi-queue mode if nb_q_per_pool is set to 1.
The value of nb_q_per_pool is passed through to max_rx_q and max_tx_q in the VF.
So if nb_q_per_pool is equal to 1, max_rx_q and max_tx_q can't be more
than 1 and VF multi-queue mode will fail.
Fixes: 27b609cbd1c6 (ethdev: move the multi
The VF can't run in multi-queue mode if nb_q_per_pool is set to 1.
nb_q_per_pool is passed through to max_rx_q and max_tx_q in the VF.
So if nb_q_per_pool is equal to 1, max_rx_q and max_tx_q can't be more
than 1 and VF multi-queue mode will fail.
Fixes: 27b609cbd1c6 (ethdev: move the multi-queue mode c
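A schematic of the dependency described above (plain C illustration, not the
ethdev or driver code):

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: the VF's queue limits are derived from
 * nb_q_per_pool, so a pool size of 1 caps both RX and TX at a single
 * queue and multi-queue configuration in the VF must fail.
 */
struct vf_limits {
        uint16_t max_rx_q;
        uint16_t max_tx_q;
};

static struct vf_limits vf_limits_from_pool(uint16_t nb_q_per_pool)
{
        struct vf_limits lim = {
                .max_rx_q = nb_q_per_pool,   /* passed through unchanged */
                .max_tx_q = nb_q_per_pool,
        };
        return lim;
}

int main(void)
{
        struct vf_limits lim = vf_limits_from_pool(1);

        /* Requesting 4 RX/TX queues in the VF now exceeds the limit. */
        printf("multi-queue allowed: %s\n",
               (4 <= lim.max_rx_q && 4 <= lim.max_tx_q) ? "yes" : "no");
        return 0;
}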
L3fwd failed to start on the PF because the tx_q check failed.
That occurred when SRIOV is active and tx_q > rx_q.
tx_q is equal to nb_q_per_pool, and nb_q_per_pool
should equal the maximum number of queues supported by the HW, not nb_rx_q.
Fixes: 27b609cbd1c6 (ethdev: move the multi-queue mode check t
Hi Yanglong,
You should write something after the SoB line, which makes it easier for reviewers
to know what has changed since the last version.
Thanks
Zhiyong
> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Yanglong Wu
> Sent: Monday, November 20, 2017 10:26 AM
> To: dev@dpdk.or
L3fwd failed to start on the PF because the tx_q check failed.
That occurred when SRIOV is active and tx_q > rx_q.
tx_q is equal to nb_q_per_pool, and nb_q_per_pool
should equal the maximum number of queues supported by the HW, not nb_rx_q.
Fixes: 27b609cbd1c6 (ethdev: move the multi-queue mode check t
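Sketching the intended check (hypothetical names, not the actual driver code):
with SR-IOV active, the per-pool budget should come from the hardware maximum,
so a configuration with tx_q > rx_q is still accepted:

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical sketch of the check being fixed: deriving the per-pool
 * queue budget from nb_rx_q rejects configurations with tx_q > rx_q,
 * while deriving it from the hardware maximum does not.
 */
static int
check_queue_config(uint16_t hw_max_q_per_pool, uint16_t nb_rx_q,
                   uint16_t nb_tx_q)
{
        /* Buggy variant: uint16_t nb_q_per_pool = nb_rx_q; */
        uint16_t nb_q_per_pool = hw_max_q_per_pool;

        if (nb_rx_q > nb_q_per_pool || nb_tx_q > nb_q_per_pool) {
                fprintf(stderr, "tx/rx queue count exceeds per-pool limit\n");
                return -1;
        }
        return 0;
}

int main(void)
{
        /* 1 RX queue, 2 TX queues, HW allows 4 per pool: should pass. */
        printf("check: %d\n", check_queue_config(4, 1, 2));
        return 0;
}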
The datasheet says that when using MSI-X mode, the PBA support
bit of the GPIE register must be set to 1.
DPDK uses polling mode, so we cannot hit this issue in the
DPDK PF + DPDK VF scenario. If we use DPDK PF + kernel VF,
as the kernel driver uses interrupt mode, the VF may hit an RX hang
after running for hours.
Fi
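For illustration only (the register offset and bit position below are
placeholders, not the device's real definitions), the change amounts to OR-ing
the PBA support bit into GPIE when MSI-X interrupts are used:

#include <stdint.h>

/* Placeholder values for illustration; not the device's real layout. */
#define GPIE_REG_OFFSET  0x0898u
#define GPIE_PBA_SUPPORT (1u << 31)

static inline uint32_t reg_read(volatile uint8_t *base, uint32_t off)
{
        return *(volatile uint32_t *)(base + off);
}

static inline void reg_write(volatile uint8_t *base, uint32_t off,
                             uint32_t val)
{
        *(volatile uint32_t *)(base + off) = val;
}

/* In MSI-X mode the datasheet requires the PBA support bit to be set. */
static void gpie_enable_pba(volatile uint8_t *hw_base)
{
        uint32_t gpie = reg_read(hw_base, GPIE_REG_OFFSET);

        reg_write(hw_base, GPIE_REG_OFFSET, gpie | GPIE_PBA_SUPPORT);
}

int main(void)
{
        static uint32_t fake_regs[0x1000];      /* fake BAR for the sketch */

        gpie_enable_pba((volatile uint8_t *)fake_regs);
        return 0;
}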
Depending on the loopback mode, set up the loopback link or not.
If the loopback link is set, packets will be sent to
rx_q from tx_q directly. Loopback mode can be used to
support testing.
Signed-off-by: Yanglong Wu
---
drivers/net/i40e/base/i40e_adminq_cmd.h | 1 +
drivers/net/i40e/i40e_ethdev.c
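A toy sketch of the described behaviour (not the i40e admin-queue command
itself): when the loopback link is set, a transmitted packet is appended
straight to the receive queue:

#include <stdbool.h>
#include <stdio.h>

/* Toy port model: in loopback mode TX is appended straight to RX. */
#define Q_SIZE 8

struct port {
        bool loopback;
        const char *rx_q[Q_SIZE];
        unsigned int rx_count;
};

static void tx_burst(struct port *p, const char *pkt)
{
        if (p->loopback) {
                /* Loopback link set: packet goes from tx_q to rx_q directly. */
                if (p->rx_count < Q_SIZE)
                        p->rx_q[p->rx_count++] = pkt;
        }
        /* Otherwise the packet would leave on the wire (not modeled here). */
}

int main(void)
{
        struct port p = { .loopback = true };

        tx_burst(&p, "test-packet");
        printf("rx_q[0] = %s\n", p.rx_count ? p.rx_q[0] : "(empty)");
        return 0;
}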
Ping. Awaiting feedback/comments.
Thanks
Shally
> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Verma, Shally
> Sent: 31 October 2017 17:09
> To: dev@dpdk.org; Trahe, Fiona ; Athreya,
> Narayana Prasad ; Challa, Mahipal
>
> Subject: [dpdk-dev] [RFC v1] doc com
Hi Fiona
Could you give some expected timeframe for next comp API spec patch?
Thanks
Shally
> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Verma, Shally
> Sent: 10 November 2017 17:35
> To: Trahe, Fiona ; dev@dpdk.org
> Cc: Athreya, Narayana Prasad ;
> Challa
From: Jan Wickbom
Issue:
Vhost user socket files are left behind in /var/run/openvswitch.
This leads to a failure to add vhost user ports with names that
already exist in this directory.
When there is a failure to add a vhost user socket file descriptor to
the file descriptor set using fdset_add() i
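A minimal sketch of the cleanup being discussed (fdset_add() is named in the
message; everything else here is illustrative): if registering the listener fd
fails, the socket file that was just bound should be unlinked so the port name
can be reused:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Stand-in for the real fdset_add(); always fails here to show cleanup. */
static int fdset_add(int fd)
{
        (void)fd;
        return -1;
}

/* Create a unix-domain server socket; unlink the path if setup fails. */
static int vhost_server_create(const char *path)
{
        struct sockaddr_un un = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;
        snprintf(un.sun_path, sizeof(un.sun_path), "%s", path);
        if (bind(fd, (struct sockaddr *)&un, sizeof(un)) < 0 ||
            listen(fd, 1) < 0 ||
            fdset_add(fd) < 0) {
                close(fd);
                unlink(path);   /* don't leave the socket file behind */
                return -1;
        }
        return fd;
}

int main(void)
{
        if (vhost_server_create("/tmp/vhost-test.sock") < 0)
                puts("setup failed, socket file cleaned up");
        return 0;
}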
Hi all,
In the function bond_mode_8023ad_enable(), the variable i is used as the
second parameter to pass the slave device's DPDK port id to the function
bond_mode_8023ad_activate_slave().
I think this variable is only an index into the array internals->active_slaves, so
it needs to be fixed and chang
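To illustrate the concern (simplified stand-ins for the bonding internals, with
made-up port ids): passing the loop index instead of active_slaves[i] is only
correct when the port ids happen to be 0, 1, 2, ...:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the bonding internals in the message. */
struct bond_internals {
        uint16_t active_slaves[4];  /* DPDK port ids of active slaves */
        uint16_t active_slave_count;
};

static void activate_slave(uint16_t port_id)
{
        printf("activating slave port %u\n", port_id);
}

static void bond_enable(struct bond_internals *internals)
{
        for (uint16_t i = 0; i < internals->active_slave_count; i++) {
                /* Buggy: activate_slave(i); would pass the array index. */
                activate_slave(internals->active_slaves[i]);  /* port id */
        }
}

int main(void)
{
        struct bond_internals internals = {
                .active_slaves = { 5, 7 },  /* port ids need not equal 0, 1 */
                .active_slave_count = 2,
        };

        bond_enable(&internals);
        return 0;
}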
On 11/17/2017 6:42 AM, Mohammad Abdul Awal wrote:
> V2:
> Update rte_bus_vdev.h header file instead of rte_vdev.h file.
>
> V1:
> Representor PMD is a virtual PMD which provides a logical representation
> in DPDK for ports of a multi port device. This supports configuration,
> management, and moni
On 11/17/2017 6:42 AM, Mohammad Abdul Awal wrote:
> +struct eth_dev_ops i40e_representor_dev_ops = {
> + .link_update = i40e_representor_link_update,
> + .dev_infos_get = i40e_representor_dev_infos_get,
> +
> + .stats_get= i40e_representor_stats_get,
> +