Re: [dpdk-dev] [PATCH v6 07/16] ethdev: flatten RSS configuration in flow API
Hi, Adrien Mazarguil,

There is a bug with the queue region feature in 18.05-rc1. The test steps are as below:

./usertools/dpdk-devbind.py -b igb_uio 05:00.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 1 -n 4 -- -i --rxq=16 --txq=16

testpmd> port config all rss all
Configuration of RSS hash at ethernet port 0 failed with error (22): Invalid argument.
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
testpmd> set port 0 queue-region region_id 0 queue_start_index 1 queue_num 1
testpmd> set port 0 queue-region region_id 0 flowtype 31
testpmd> set port 0 queue-region flush on
testpmd>
port 0/queue 0: received 1 packets
  src=00:13:3B:0C:21:95 - dst=00:00:00:00:01:00 - type=0x0800 - length=90 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD

The packet should be distributed to queue 1 instead of queue 0, and the command "port config all rss all" failed to execute.

Could you help to check whether this is related to your flow API patches?

Thanks.
Yuan.

-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Peng, Yuan
Sent: Saturday, April 28, 2018 1:29 PM
To: Zhao1, Wei ; Adrien Mazarguil ; dev@dpdk.org
Cc: Xu, Qian Q ; Liu, Yu Y ; Lu, Wenzhuo ; Wu, Jingjing
Subject: Re: [dpdk-dev] [PATCH v6 07/16] ethdev: flatten RSS configuration in flow API

Hi, Adrien Mazarguil,

There is a bug in 18.05-rc1 when I test the feature "Move RSS to rte_flow" on an i40e NIC. The test steps are as below:

./usertools/dpdk-devbind.py -b igb_uio 05:00.0 05:00.1
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1fffe -n 4 -- -i --nb-cores=8 --rxq=8 --txq=8 --port-topology=chained

testpmd> set fwd rxonly
Set rxonly packet forwarding mode
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd> start
testpmd> flow create 0 ingress pattern end actions rss queues 0 4 7 end / end
Caught error type 16 (specific action): cause: 0x7fff84e33658, RSS hash key too large

The RSS rule could be created successfully when I tested it yesterday with an older DPDK version without this patch.

The NIC information is:
driver: i40e
version: 2.4.3
firmware-version: 6.01 0x80003205 1.1691.0

There is another problem with an ixgbe NIC:

./usertools/dpdk-devbind.py -b igb_uio 07:00.0 07:00.1
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1fffe -n 4 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss --port-topology=chained

testpmd> flow create 0 ingress pattern end actions rss queues 5 6 7 end / end
Caught error type 2 (flow rule (handle)): Failed to create flow.

The rule creation command can be executed successfully with an older DPDK version.

Could you help to check whether there is a relationship between these bugs and this patch?

Thank you.
Yuan.

-----Original Message-----
From: Zhao1, Wei
Sent: Saturday, April 28, 2018 11:46 AM
To: Adrien Mazarguil ; dev@dpdk.org
Cc: Peng, Yuan ; Xu, Qian Q ; Liu, Yu Y ; Lu, Wenzhuo ; Wu, Jingjing
Subject: RE: [dpdk-dev] [PATCH v6 07/16] ethdev: flatten RSS configuration in flow API

Hi, Adrien Mazarguil,

We have just tried the new RC1 code with the flow RSS API feature and found some abnormal behavior. After checking the code again, I find that it was introduced by this patch:

SHA-1: ac8d22de2394e03ba4a77d8fd24381147aafb1d3
* ethdev: flatten RSS configuration in flow API
Signed-off-by: Adrien Mazarguil

This abnormal behavior affects both the i40e and ixgbe NICs; neither bug was present before this patch was merged.
It was first found by yuan.p...@intel.com; she can tell you how to reproduce this abnormal behavior with the RSS flow API.

Thank you.

> -----Original Message-----
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Adrien Mazarguil
> Sent: Wednesday, April 25, 2018 11:28 PM
> To: Thomas Monjalon ; Yigit, Ferruh ; dev@dpdk.org
> Cc: Xueming Li ; Lu, Wenzhuo ; Wu, Jingjing ; Xing, Beilei ; Zhang, Qi Z ; Ananyev, Konstantin ; Nelio Laranjeiro ; Yongseok Koh ; Andrew Rybchenko ; Pascal Mazon ; Nicolau, Radu ; Akhil Goyal
> Subject: [dpdk-dev] [PATCH v6 07/16] ethdev: flatten RSS configuration in flow API
>
> Since its inception, the rte_flow RSS action has been relying in part on
> external struct rte_eth_rss_conf for compatibility with the legacy RSS API.
> This structure lacks parameters such as the hash algorithm to use, and more
> recently, a method to tell which layer RSS should be performed on [1].
>
> Given struct rte_eth_rss_conf will never be flexible enough to represent a
> complete RSS configuration (e.g. RETA table), this patch supersedes it by
> extending the rte_flow RSS action directly.
>
> A subsequent patch will add a field to use a non-default RSS hash
> algorithm. To that end, a field named "types" replaces the field formerly
> known as "rss_hf" and standing for "RSS hash functions" as it was
> confusing. Actual RSS hash function types are defined by enum
> rte_
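For context, here is a minimal sketch of what an application-side RSS flow rule looks like once the configuration is flattened into the rte_flow action itself, instead of going through struct rte_eth_rss_conf. This is illustrative only; the field names follow the 18.05 rte_flow_action_rss layout as described in the patch above and should be treated as assumptions if your tree differs.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: spread all ingress IP traffic of a port over three queues using
 * the flattened RSS action (no struct rte_eth_rss_conf involved). */
static int
create_rss_flow(uint16_t port_id)
{
	static const uint16_t queues[] = { 0, 4, 7 };
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT, /* hash algorithm is now part of the action */
		.level = 0,                /* hash on the outermost encapsulation level */
		.types = ETH_RSS_IP,       /* "types" replaces the former rss_hf */
		.key_len = 0,              /* 0/NULL: let the PMD use its default key */
		.key = NULL,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	if (rte_flow_create(port_id, &attr, pattern, actions, &err) == NULL) {
		printf("flow creation failed: %s\n",
		       err.message ? err.message : "(no error message)");
		return -1;
	}
	return 0;
}

The "RSS hash key too large" error reported above is consistent with the PMD rejecting a key_len larger than it supports, which is why this sketch passes 0/NULL to request the PMD's default key.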
Re: [dpdk-dev] [PATCH v3 2/2] mem: revert to using flock() and add per-segment lockfiles
On 04/25/2018 01:36 PM, Anatoly Burakov wrote:

The original implementation used flock() locks, but was later switched to using fcntl() locks for page locking, because fcntl() locks allow locking parts of a file, which is useful for single-file segments mode, where locking the entire file isn't as useful because we still need to grow and shrink it.

However, according to fcntl()'s Ubuntu manpage [1], semantics of fcntl() locks have a giant oversight:

    This interface follows the completely stupid semantics of System V and
    IEEE Std 1003.1-1988 (“POSIX.1”) that require that all locks associated
    with a file for a given process are removed when any file descriptor for
    that file is closed by that process. This semantic means that applications
    must be aware of any files that a subroutine library may access.

Basically, closing *any* fd with an fcntl() lock (which we do because we don't want to leak fd's) will drop the lock completely.

So, in this commit, we will be reverting back to using flock() locks everywhere. However, that still leaves the problem of locking parts of a memseg list file in single file segments mode, and we will be solving it with creating separate lock files per each page, and tracking those with flock().

We will also be removing all of this tailq business and replacing it with a simple array - saving a few bytes is not worth the extra hassle of dealing with pointers and potential memory allocation failures. Also, remove the tailq lock since it is not needed - these fd lists are per-process, and within a given process, it is always only one thread handling access to hugetlbfs.

So, first one to allocate a segment will create a lockfile, and put a shared lock on it. When we're shrinking the page file, we will be trying to take out a write lock on that lockfile, which would fail if any other process is holding onto the lockfile as well. This way, we can know if we can shrink the segment file. Also, if no other locks are found in the lock list for a given memseg list, the memseg list fd is automatically closed.

One other thing to note is, according to flock() Ubuntu manpage [2], upgrading the lock from shared to exclusive is implemented by dropping and reacquiring the lock, which is not atomic and thus would have created race conditions. So, on attempting to perform operations in hugetlbfs, we will take out a writelock on hugetlbfs directory, so that only one process could perform hugetlbfs operations concurrently.

[1] http://manpages.ubuntu.com/manpages/artful/en/man2/fcntl.2freebsd.html
[2] http://manpages.ubuntu.com/manpages/bionic/en/man2/flock.2.html

Fixes: 66cc45e293ed ("mem: replace memseg with memseg lists")
Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Fixes: a5ff05d60fc5 ("mem: support unmapping pages at runtime")
Fixes: 2a04139f66b4 ("eal: add single file segments option")

Cc: anatoly.bura...@intel.com

Signed-off-by: Anatoly Burakov
Acked-by: Bruce Richardson

We have a problem with the changeset if the EAL option -m or --socket-mem is used. EAL initialization hangs just after

EAL: Probing VFIO support...
strace points to

flock(7, LOCK_EX

List of file descriptors:

# ls /proc/25452/fd -l
total 0
lrwx------ 1 root root 64 Apr 28 10:34 0 -> /dev/pts/0
lrwx------ 1 root root 64 Apr 28 10:34 1 -> /dev/pts/0
lrwx------ 1 root root 64 Apr 28 10:32 2 -> /dev/pts/0
lrwx------ 1 root root 64 Apr 28 10:34 3 -> /run/.rte_config
lrwx------ 1 root root 64 Apr 28 10:34 4 -> socket:[154166]
lrwx------ 1 root root 64 Apr 28 10:34 5 -> socket:[154158]
lr-x------ 1 root root 64 Apr 28 10:34 6 -> /dev/hugepages
lr-x------ 1 root root 64 Apr 28 10:34 7 -> /dev/hugepages

I guess the problem is that there are two /dev/hugepages and it hangs on the second. Ideas how to solve it?

Andrew.
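For reference, a minimal sketch of the per-page flock() scheme described in the commit message above (hypothetical helper names, not the actual EAL code): a process holds a shared lock on a page's lockfile for as long as it uses the page, and the page may only be removed when a non-blocking exclusive lock can be taken.

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Take a shared lock on the page's lockfile; keep the returned fd open for
 * as long as this process uses the page. */
static int
lock_page(const char *lockfile_path)
{
	int fd = open(lockfile_path, O_CREAT | O_RDWR, 0600);

	if (fd < 0)
		return -1;
	if (flock(fd, LOCK_SH) < 0) { /* only blocks on a conflicting LOCK_EX */
		close(fd);
		return -1;
	}
	return fd;
}

/* Before shrinking/removing the page file, try to convert the lock to an
 * exclusive one without blocking: failure (EWOULDBLOCK) means another
 * process still holds a shared lock, i.e. the page is still in use.
 * flock() performs this conversion by dropping and reacquiring the lock,
 * which is exactly why the commit message also serializes hugetlbfs
 * operations with a directory-level write lock. */
static int
page_can_be_removed(int lock_fd)
{
	return flock(lock_fd, LOCK_EX | LOCK_NB) == 0;
}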
[dpdk-dev] [PATCH v2] net/i40e: fix missing some offload capabilities
The MULTI_SEGS and JUMBO_FRAME offload capabilities should be exposed for both the VF and the PF, since i40e does support them.

Fixes: c3ac7c5b0b8a ("net/i40e: convert to new Rx offloads API")
Fixes: 7497d3e2f777 ("net/i40e: convert to new Tx offloads API")

Signed-off-by: Yanglong Wu
---
v2: rework fix line
---
 drivers/net/i40e/i40e_ethdev.c    | 3 ++-
 drivers/net/i40e/i40e_ethdev_vf.c | 6 ++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index d869add95..2e454566a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3316,7 +3316,8 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
 		DEV_TX_OFFLOAD_GRE_TNL_TSO |
 		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 48e7ac21e..dba28bb90 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2200,7 +2200,8 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_CRC_STRIP |
-		DEV_RX_OFFLOAD_SCATTER;
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
 	dev_info->tx_queue_offload_capa = 0;
 	dev_info->tx_offload_capa =
@@ -2215,7 +2216,8 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
 		DEV_TX_OFFLOAD_GRE_TNL_TSO |
 		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
--
2.11.0
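For context, applications typically gate these offloads on what the port reports in its device info. A minimal, illustrative sketch of checking the capability bits this patch adds before enabling them:

#include <rte_ethdev.h>

/* Enable multi-segment Tx and jumbo-frame Rx only when the port reports
 * them in its device info (which is what this patch makes i40e/i40evf do). */
static void
enable_if_supported(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
		conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}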
[dpdk-dev] [PATCH v5] net/i40e: fix disabling promiscuous mode
v5 updates:
===
- Modify some comments for this patch

v4 updates:
===
- Add some comments for this patch

v3 updates:
===
- Move the modification from device close to device disable
- i40evf_reset_vf() causes the kernel driver to enable promiscuous mode for
  all VLANs, so unicast/multicast promiscuous mode must be disabled before
  the reset.

v2 updates:
===
- Add more comments

In the scenario where the kernel driver runs on the PF and the PMD runs on the VF, the PMD exit path does not disable promiscuous mode. This causes the VLAN filter set by the kernel driver to not take effect. This patch fixes it by disabling promiscuous mode when the device is disabled.

Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: sta...@dpdk.org

Signed-off-by: Rosen Xu
Acked-by: Qi Zhang
---
 drivers/net/i40e/i40e_ethdev_vf.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 48e7ac2..b8977a6 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2288,6 +2288,14 @@ static int eth_i40evf_pci_remove(struct rte_pci_device *pci_dev)
 	i40evf_dev_stop(dev);
 	i40e_dev_free_queues(dev);
+	/*
+	 * disable promiscuous mode before resetting the VF;
+	 * this fixes promiscuous mode not being disabled
+	 * when working with the kernel driver
+	 */
+	i40evf_dev_promiscuous_disable(dev);
+	i40evf_dev_allmulticast_disable(dev);
+
 	i40evf_reset_vf(hw);
 	i40e_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
--
1.8.3.1
Re: [dpdk-dev] [PATCH v1] net/mlx5: fix flow director rule deletion crash
Friday, April 27, 2018 9:55 AM, Nélio Laranjeiro:
> Subject: Re: [PATCH v1] net/mlx5: fix flow director rule deletion crash
>
> On Thu, Apr 26, 2018 at 06:17:46PM +0200, Adrien Mazarguil wrote:
> > Flow director rules matching traffic properties above layer 2 do not
> > target a fixed hash Rx queue (HASH_RXQ_ETH), it actually depends on
> > the highest protocol layer specified by each flow rule.
> >
> > mlx5_fdir_filter_delete() makes this wrong assumption and causes a
> > crash when attempting to destroy flow rules with L3/L4 specifications.
> >
> > Fixes: 4c3e9bcdd52e ("net/mlx5: support flow director")
> > Cc: Nelio Laranjeiro
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Adrien Mazarguil
>
> Acked-by: Nelio Laranjeiro
>
> > ---
> >  drivers/net/mlx5/mlx5_flow.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> > index 8f5fcf4d6..05def2b14 100644
> > --- a/drivers/net/mlx5/mlx5_flow.c
> > +++ b/drivers/net/mlx5/mlx5_flow.c
> > @@ -3409,13 +3409,13 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev,
> >  	if (parser.drop) {
> >  		struct ibv_flow_spec_action_drop *drop;
> >
> > -		drop = (void *)((uintptr_t)parser.queue[HASH_RXQ_ETH].ibv_attr +
> > -				parser.queue[HASH_RXQ_ETH].offset);
> > +		drop = (void *)((uintptr_t)parser.queue[parser.layer].ibv_attr +
> > +				parser.queue[parser.layer].offset);
> >  		*drop = (struct ibv_flow_spec_action_drop){
> >  			.type = IBV_FLOW_SPEC_ACTION_DROP,
> >  			.size = sizeof(struct ibv_flow_spec_action_drop),
> >  		};
> > -		parser.queue[HASH_RXQ_ETH].ibv_attr->num_of_specs++;
> > +		parser.queue[parser.layer].ibv_attr->num_of_specs++;
> >  	}
> >  	TAILQ_FOREACH(flow, &priv->flows, next) {
> >  		struct ibv_flow_attr *attr;
> > @@ -3426,8 +3426,8 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev,
> >  		void *flow_spec;
> >  		unsigned int specs_n;
> >
> > -		attr = parser.queue[HASH_RXQ_ETH].ibv_attr;
> > -		flow_attr = flow->frxq[HASH_RXQ_ETH].ibv_attr;
> > +		attr = parser.queue[parser.layer].ibv_attr;
> > +		flow_attr = flow->frxq[parser.layer].ibv_attr;
> >  		/* Compare first the attributes. */
> >  		if (memcmp(attr, flow_attr, sizeof(struct ibv_flow_attr)))
> >  			continue;
> > --
> > 2.11.0

Applied to next-net-mlx, thanks.

> --
> Nélio Laranjeiro
> 6WIND
Re: [dpdk-dev] [PATCH v2 1/3] net/mlx4: fix Rx resource leak in case of error
Thursday, April 26, 2018 7:26 PM, Adrien Mazarguil:
> Subject: [PATCH v2 1/3] net/mlx4: fix Rx resource leak in case of error
>
> When creation of a flow rule fails during dev_start(), the usage count of the
> common RSS context is not decremented, which triggers an assertion failure
> in debug mode during dev_close().
>
> This is addressed by tracking the initialization status of the common RSS
> context in order to add missing cleanup code.
>
> A similar issue exists in mlx4_rxq_attach(), where usage count is
> incremented on a Rx queue but not released in case of error. This may lead
> to the above issue since RSS contexts created by flow rules attach
> themselves to Rx queues, incrementing their usage count.
>
> Fixes: 5697a4142107 ("net/mlx4: relax Rx queue configuration order")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Adrien Mazarguil

Series applied to next-net-mlx, thanks.
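For context, the pattern the fix enforces can be sketched generically as follows. This is a hypothetical structure and helper, not the actual mlx4 code: any usage count taken early in an attach path must be released on every error path, otherwise debug-mode assertions on a zero reference count fail at dev_close().

#include <stdint.h>

/* Hypothetical resource with a usage count (illustrative only). */
struct resource {
	uint32_t refcnt;
};

/* Stub standing in for hardware/flow setup that may fail. */
static int
setup_hw(struct resource *res)
{
	(void)res;
	return 0; /* or a negative errno on failure */
}

/* Bump the usage count, but undo it if anything afterwards fails, so the
 * count stays balanced on both success and error paths. */
static int
resource_attach(struct resource *res)
{
	int ret;

	res->refcnt++;
	ret = setup_hw(res);
	if (ret != 0)
		goto error;
	return 0;
error:
	res->refcnt--; /* release the reference taken above */
	return ret;
}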
Re: [dpdk-dev] [PATCH v9 10/10] devtools: prevent new instances of rte_panic and rte_exit
> So will it fail checkpatch in patchwork? I agree with Aaron and Anatoly
> that patches with rte_panic/exit should flag some warning message, but
> the maintainer should have final say. I don't think failing checkpatch
> is the solution for that.

Ok, will leave that to the maintainers and change it not to fail.

> > I would rather keep the word 'prevents' as an intention declaration that
> > puts the fact that a panic is undesired first, and the technical ability
> > of a maintainer to allow it second.
>
> It's only words, but to me 'prevents' does not indicate that it is
> undesirable - it indicates it is not allowed.

They say words create reality, but since we agreed on reality, I'll reword :)

> > Not sure I understand what you meant here. Can you please elaborate?
>
> Sure, I made a patch with an rte_panic and ran checkpatch on a bunch of
> patches. It gave me the name of all the patches except the one with the
> rte_panic.

Got it now. I wasn't using -v. Will fix.

Thanks
/Arnon
--