Re: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration

2021-05-27 Thread Zhang, AlvinX
Hi Beilei,

We will try it ASAP.

BRs,
Alvin Zhang

From: Xing, Beilei 
Sent: Friday, May 28, 2021 9:24 AM
To: Vishal Mohan ; users@dpdk.org; Zhang, 
AlvinX 
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

+ Alvin.

Could you please help on it? Thanks.

BR,
Beilei

From: Vishal Mohan <vishal.mo...@tatacommunications.com>
Sent: Thursday, May 27, 2021 5:45 PM
To: Xing, Beilei <beilei.x...@intel.com>; users@dpdk.org
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

Hi Beilei,

Thanks for the pointer. By using l3-src-only, I was able to run testpmd in
1-tuple mode, but had no success when configuring it manually. Please find the
snippet of my rte_eth_conf and flow conf below for your kind perusal:

static struct rte_eth_conf port_conf_default = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,
            .rss_key_len = 40,
            .rss_hf = ETH_RSS_NONFRAG_IPV4_UDP,
        },
    }
};

struct rte_flow_item pattern[] = {
    [0] = {
        .type = RTE_FLOW_ITEM_TYPE_ETH,
    },
    [1] = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
    },
    [2] = {
        .type = RTE_FLOW_ITEM_TYPE_UDP,
    },
    [3] = {
        .type = RTE_FLOW_ITEM_TYPE_END,
    }
};

struct rte_flow_action_rss action_rss = {
    .types = ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
    .queue_num = 10,
    .queue = queue_ids,
};

struct rte_flow_action action[] = {
    [0] = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &action_rss
    },
    [1] = {
        .type = RTE_FLOW_ACTION_TYPE_END,
    }
};

struct rte_flow_attr attr = {
    .egress = 0,
    .ingress = 1
};

struct rte_flow_error err;

retval = rte_flow_validate(portid, &attr, pattern, action, &err);
printf("retval %d %d\n", retval, -ENOTSUP);

if (!retval) {
    struct rte_flow *flow = rte_flow_create(portid, &attr, pattern, action, &err);
}

The above flow validates and is added successfully, but it has no effect on RSS
hashing. Also, I did not set .spec and .mask for the pattern items, assuming
ETH_RSS_L3_SRC_ONLY would take care of selecting the fields to hash on.
Can you please point out if I'm missing anything here?
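
For reference, a more fully populated RSS action for 2-tuple (src + dst IP)
hashing might look like the sketch below; the func, level and key settings here
are assumptions on my side, not a confirmed fix:

/* Sketch only: hash non-fragmented IPv4/UDP traffic on source + destination IP.
 * Everything except .types and .queue is an assumed default. */
uint16_t queue_ids[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

struct rte_flow_action_rss action_rss = {
    .func = RTE_ETH_HASH_FUNCTION_DEFAULT,  /* let the PMD pick its default */
    .level = 0,                             /* outermost headers */
    .types = ETH_RSS_NONFRAG_IPV4_UDP |
             ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, /* 2 tuple: src + dst IP */
    .key = NULL,                            /* keep the default RSS key */
    .key_len = 0,
    .queue = queue_ids,
    .queue_num = 10,
};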

Thanks & Regards,
Vishal Mohan

-Original Message-
From: Xing, Beilei <beilei.x...@intel.com>
Sent: 27 May 2021 01:42 PM
To: Vishal Mohan <vishal.mo...@tatacommunications.com>; users@dpdk.org
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

Hi,

I remember the legacy API rte_eth_dev_filter_ctrl() is not supported in 20.11.
Please refer to the RSS Flow part in i40e.rst:

Enable hash and set input set for ipv4-tcp.
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
  actions rss types ipv4-tcp l3-src-only end queues end / end

BR,
Beilei

> -Original Message-
> From: users  On Behalf Of Vishal Mohan
> Sent: Thursday, May 27, 2021 3:40 PM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration
>
> I'm trying to implement RSS with 2 tuple (src ip, dst ip) hashing with
> X710 - quad port in DPDK 20.11 with no success. I was able to
> implement the same in DPDK 17.11 with a combination of RSS flags
> given below and
> rte_eth_dev_filter_ctrl():
>
> .rss_hf = (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |
> ETH_RSS_NONFRAG_IPV4_TCP)
>
> and selecting input fields as dst and src ip for every rss_hf flag
> using rte_eth_dev_filter_ctrl().
>
> In DPDK 20.11, I believe rte_eth_dev_filter_ctrl() is no longer available;
> instead we can configure the hashing with the generic rte_flow API. I did
> validate and create such a flow, but the hashing is not working as expected.
> Without the flags ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP no
> hashing takes place, and with those flags included in .rss_hf, 5-tuple
> hashing takes place.
>
> When using the rte_flow API, any flags given in rte_flow_action_rss.types
> have no effect on the final RSS hash result. Also, RSS hashing in testpmd
> isn't working when it is configured in "ip" (2-tuple) mode.
>
> Any input on configuring RSS hashing for a 2-tuple is much appreciated.
>
>
> Thanks & Regards,
>  Vishal Mohan



Re: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration

2021-05-27 Thread Xing, Beilei
+ Alvin.

Could you please help on it? Thanks.

BR,
Beilei

From: Vishal Mohan 
Sent: Thursday, May 27, 2021 5:45 PM
To: Xing, Beilei ; users@dpdk.org
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

Hi Beilei,

Thanks for the pointer. By using l3-src-only, I was able to run testpmd in
1-tuple mode, but had no success when configuring it manually. Please find the
snippet of my rte_eth_conf and flow conf below for your kind perusal:

static struct rte_eth_conf port_conf_default = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,
            .rss_key_len = 40,
            .rss_hf = ETH_RSS_NONFRAG_IPV4_UDP,
        },
    }
};

struct rte_flow_item pattern[] = {
    [0] = {
        .type = RTE_FLOW_ITEM_TYPE_ETH,
    },
    [1] = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
    },
    [2] = {
        .type = RTE_FLOW_ITEM_TYPE_UDP,
    },
    [3] = {
        .type = RTE_FLOW_ITEM_TYPE_END,
    }
};

struct rte_flow_action_rss action_rss = {
    .types = ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
    .queue_num = 10,
    .queue = queue_ids,
};

struct rte_flow_action action[] = {
    [0] = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &action_rss
    },
    [1] = {
        .type = RTE_FLOW_ACTION_TYPE_END,
    }
};

struct rte_flow_attr attr = {
    .egress = 0,
    .ingress = 1
};

struct rte_flow_error err;

retval = rte_flow_validate(portid, &attr, pattern, action, &err);
printf("retval %d %d\n", retval, -ENOTSUP);

if (!retval) {
    struct rte_flow *flow = rte_flow_create(portid, &attr, pattern, action, &err);
}

The above flow validates and is added successfully, but it has no effect on RSS
hashing. Also, I did not set .spec and .mask for the pattern items, assuming
ETH_RSS_L3_SRC_ONLY would take care of selecting the fields to hash on.
Can you please point out if I'm missing anything here?

Thanks & Regards,
Vishal Mohan

-Original Message-
From: Xing, Beilei <beilei.x...@intel.com>
Sent: 27 May 2021 01:42 PM
To: Vishal Mohan <vishal.mo...@tatacommunications.com>; users@dpdk.org
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

Hi,

I remember the legacy API rte_eth_dev_filter_ctrl() is not supported in 20.11.
Please refer to the RSS Flow part in i40e.rst:

Enable hash and set input set for ipv4-tcp.
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
  actions rss types ipv4-tcp l3-src-only end queues end / end

BR,
Beilei

> -Original Message-
> From: users  On Behalf Of Vishal Mohan
> Sent: Thursday, May 27, 2021 3:40 PM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration
>
> I'm trying to implement RSS with 2 tuple (src ip, dst ip) hashing with
> X710 - quad port in DPDK 20.11 with no success. I was able to
> implement the same in DPDK 17.11 with a combination of RSS flags
> given below and
> rte_eth_dev_filter_ctrl():
>
> .rss_hf = (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |
> ETH_RSS_NONFRAG_IPV4_TCP)
>
> and selecting input fields as dst and src ip for every rss_hf flag
> using rte_eth_dev_filter_ctrl().
>
> In DPDK 20.11, I believe rte_eth_dev_filter_ctrl() is no longer available;
> instead we can configure the hashing with the generic rte_flow API. I did
> validate and create such a flow, but the hashing is not working as expected.
> Without the flags ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP no
> hashing takes place, and with those flags included in .rss_hf, 5-tuple
> hashing takes place.
>
> When using the rte_flow API, any flags given in rte_flow_action_rss.types
> have no effect on the final RSS hash result. Also, RSS hashing in testpmd
> isn't working when it is configured in "ip" (2-tuple) mode.
>
> Any input on configuring RSS hashing for a 2-tuple is much appreciated.
>
>
> Thanks & Regards,
>  Vishal Mohan



Re: [dpdk-users] Issue with UDP based fragmented packets on Azure cloud

2021-05-27 Thread Stephen Hemminger
On Thu, 27 May 2021 15:40:57 +
Raslan Darawsheh  wrote:

> Hi,
> 
> > -Original Message-
> > From: users  On Behalf Of madhukar mythri
> > Sent: Thursday, May 27, 2021 5:58 PM
> > To: users@dpdk.org
> > Subject: [dpdk-users] Issue with UDP based fragmented packets on Azure
> > cloud
> > 
> > Hi,
> > 
> > We are facing an issue with UDP/IP fragmented packets on the Azure cloud
> > platform with Accelerated-Networking-enabled ports.
> > 
> > Fragmented UDP Rx packets are received fine on media ports. But when a
> > packet arrives as two fragments, the first fragment is received on Queue-0
> > and the second fragment is received on Queue-1. Ideally, all the fragments
> > of a single large packet should be received on a single queue based on RSS,
> > so that we can reassemble them into a single packet and process it; this
> > works well on other platforms on KVM hypervisors (with i40evf NICs).
> > 
> > I think that, as per the RSS hash calculation, all the fragments should
> > reach a single queue (because the 5-tuple hash value will be the same), but
> > this is not happening in the case of Azure VMs. Why?
> > 
> > Has anybody faced a similar issue? Please let me know your suggestions.  
> I guess it depends on the fragments themselves.
> If your first fragment contains a UDP header (the first frag in the list),
> then the RSS hash will be on the full 5-tuple:
> src/dst IP and src/dst UDP.
> But for the other frags you'll not get src/dst UDP, since they are not
> present in the pkt.
> I guess you should be using RSS on the IP header only, to make all frags go
> to the same queue.
> > 

Yes, and this is not unique to Azure or even to DPDK.
Fragmented packets do not have enough information (no UDP header in the second
fragment) to do L4 RSS.
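
A port configuration that hashes only on the IP header, so that all fragments
of a datagram land on the same queue, could look like the sketch below; the
exact rss_hf combination is an assumption about what the NIC/PMD will accept:

/* Sketch: RSS on L3 only, so UDP fragments (which carry no UDP header in the
 * non-first fragments) still hash to the same queue as the first fragment. */
static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,                        /* default key */
            .rss_hf  = ETH_RSS_IPV4 |
                       ETH_RSS_FRAG_IPV4 |
                       ETH_RSS_NONFRAG_IPV4_OTHER,  /* no L4 types */
        },
    },
};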



Re: [dpdk-users] Unable to install DPDK on Centos 8-Stream

2021-05-27 Thread Thomas Monjalon
27/05/2021 18:10, Gabriel Danjon:
> Hello,
> 
> I am having difficulties compiling and installing DPDK from source on the
> latest CentOS 8-Stream.

Did you compile and install DPDK successfully?
Where is it installed?

> 
> After having installed the required drivers and libraries, following the 
> documentation and the DPDK build (meson build && cd build && ninja && 
> ninja install && ldconfig), I tried to compile the helloworld example 
> without success:
> 'Makefile:12: *** "no installation of DPDK found".  Stop.'
> 
> 
> Please find attached to this mail some logs.

The log is way too long to be read.
Please copy only what is relevant.

> Could you provide help please ?

It looks to be a basic issue with the library installation.
Did you read the doc?
http://doc.dpdk.org/guides/linux_gsg/build_dpdk.html

Especially this note:
"
On some linux distributions, such as Fedora or Redhat, paths in /usr/local are 
not in the default paths for the loader. Therefore, on these distributions, 
/usr/local/lib and /usr/local/lib64 should be added to a file in 
/etc/ld.so.conf.d/ before running ldconfig.
"





Re: [dpdk-users] Issue with UDP based fragmented packets on Azure cloud

2021-05-27 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: users  On Behalf Of madhukar mythri
> Sent: Thursday, May 27, 2021 5:58 PM
> To: users@dpdk.org
> Subject: [dpdk-users] Issue with UDP based fragmented packets on Azure
> cloud
> 
> Hi,
> 
> We are facing an issue with UDP/IP fragmented packets on the Azure cloud
> platform with Accelerated-Networking-enabled ports.
> 
> Fragmented UDP Rx packets are received fine on media ports. But when a
> packet arrives as two fragments, the first fragment is received on Queue-0
> and the second fragment is received on Queue-1. Ideally, all the fragments
> of a single large packet should be received on a single queue based on RSS,
> so that we can reassemble them into a single packet and process it; this
> works well on other platforms on KVM hypervisors (with i40evf NICs).
> 
> I think that, as per the RSS hash calculation, all the fragments should
> reach a single queue (because the 5-tuple hash value will be the same), but
> this is not happening in the case of Azure VMs. Why?
> 
> Has anybody faced a similar issue? Please let me know your suggestions.
I guess it depends on the fragments themselves.
If your first fragment contains a UDP header (the first frag in the list), then
the RSS hash will be on the full 5-tuple: src/dst IP and src/dst UDP.
But for the other frags you'll not get src/dst UDP, since they are not present
in the pkt.
I guess you should be using RSS on the IP header only, to make all frags go to
the same queue.
> 
> Thanks,
> Madhukar.
 

Kindest regards,
Raslan Darawsheh


[dpdk-users] Issue with UDP based fragmented packets on Azure cloud

2021-05-27 Thread madhukar mythri
Hi,

We are facing an issue with UDP/IP fragmented packets on the Azure cloud
platform with Accelerated-Networking-enabled ports.

Fragmented UDP Rx packets are received fine on media ports. But when a
packet arrives as two fragments, the first fragment is received on Queue-0
and the second fragment is received on Queue-1. Ideally, all the fragments
of a single large packet should be received on a single queue based on RSS,
so that we can reassemble them into a single packet and process it; this
works well on other platforms on KVM hypervisors (with i40evf NICs).

I think that, as per the RSS hash calculation, all the fragments should
reach a single queue (because the 5-tuple hash value will be the same), but
this is not happening in the case of Azure VMs. Why?

Has anybody faced a similar issue? Please let me know your suggestions.

Thanks,
Madhukar.


Re: [dpdk-users] DPDK-20.11 and HUGEPAGES

2021-05-27 Thread Templin (US), Fred L
Thank you for your message. I will post the log after I return from visiting
the lab where the test environment is hosted later today.

Fred

> -Original Message-
> From: Huai-En Tseng [mailto:t...@csie.io]
> Sent: Wednesday, May 26, 2021 7:05 PM
> To: Templin (US), Fred L 
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] DPDK-20.11 and HUGEPAGES
> 
> 
> Hi, DPDK can run with only 2MB hugepages; 1GB hugepages are not necessary.
> 
> Could you paste the log?
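> 
> In the meantime, a minimal EAL setup that uses only the 2MB hugepage pool
> might look like this sketch (the mount point and per-socket sizes below are
> just illustrative assumptions):
> 
> /* Sketch: point EAL at the standard 2MB hugetlbfs mount and reserve memory
>  * from it per NUMA socket, so no 1GB pages are needed. */
> char *eal_args[] = {
>     "app",
>     "--huge-dir", "/dev/hugepages",   /* default 2MB hugepage mount */
>     "--socket-mem", "512,512",        /* MB to reserve on socket 0 and 1 */
>     NULL
> };
> 
> if (rte_eal_init(5, eal_args) < 0)
>     rte_exit(EXIT_FAILURE, "EAL init failed\n");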
> 
> >
> > Message: 2
> > Date: Wed, 26 May 2021 18:12:40 +
> > From: "Templin (US), Fred L" 
> > To: "users@dpdk.org" 
> > Subject: [dpdk-users] DPDK-20.11 and HUGEPAGES
> > Message-ID: 
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hi, I have a test environment based on RHEL VMs that are run from a 
> > hypervisor
> > that I have no administrative or physical access to. My only administrative 
> > control
> > is by ssh into the running VMs over the network where at least I have sudo 
> > access.
> >
> > When I run my DPDK-20.11 application, it crashes because hugepages are not
> > configured. So, I allocated 2MB hugepages at runtime but the app still 
> > crashes
> > because there are no 1GB hugepages which on RHEL can only be set at boot 
> > time.
> >
> > I found instructions for setting RHEL boot parameters that will allocate 1GB
> > hugepages at boot time, but I have not tried it because I am concerned that
> > if I mess something up and reboot the VM it may never come back.
> >
> > So, I am wondering if DPDK-20.11 supports a "semi-huge" mode of operation
> > that allows it to run with only 2MB hugepages configured and no 1GB pages?
> > If so, what would be the way to set that up?
> >
> > I have also tried invoking DPDK with "--no-huge". My application starts 
> > fine,
> > but at runtime it crashes out of a DPDK API call while processing an mbuf
> > for a received packet. I can give more details about this if there is a 
> > chance
> > it could be debugged - or, is "--no-huge" problematic in general?
> >
> > Thanks - Fred
> >



Re: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration

2021-05-27 Thread Vishal Mohan
Hi Beilei,

Thanks for the pointer. By using l3-src-only, I was able to run testpmd in
1-tuple mode, but had no success when configuring it manually. Please find the
snippet of my rte_eth_conf and flow conf below for your kind perusal:

static struct rte_eth_conf port_conf_default = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,
            .rss_key_len = 40,
            .rss_hf = ETH_RSS_NONFRAG_IPV4_UDP,
        },
    }
};

struct rte_flow_item pattern[] = {
    [0] = {
        .type = RTE_FLOW_ITEM_TYPE_ETH,
    },
    [1] = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
    },
    [2] = {
        .type = RTE_FLOW_ITEM_TYPE_UDP,
    },
    [3] = {
        .type = RTE_FLOW_ITEM_TYPE_END,
    }
};

struct rte_flow_action_rss action_rss = {
    .types = ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
    .queue_num = 10,
    .queue = queue_ids,
};

struct rte_flow_action action[] = {
    [0] = {
        .type = RTE_FLOW_ACTION_TYPE_RSS,
        .conf = &action_rss
    },
    [1] = {
        .type = RTE_FLOW_ACTION_TYPE_END,
    }
};

struct rte_flow_attr attr = {
    .egress = 0,
    .ingress = 1
};

struct rte_flow_error err;

retval = rte_flow_validate(portid, &attr, pattern, action, &err);
printf("retval %d %d\n", retval, -ENOTSUP);

if (!retval) {
    struct rte_flow *flow = rte_flow_create(portid, &attr, pattern, action, &err);
}

The above flow validates and is added successfully, but it has no effect on RSS
hashing. Also, I did not set .spec and .mask for the pattern items, assuming
ETH_RSS_L3_SRC_ONLY would take care of selecting the fields to hash on.
Can you please point out if I'm missing anything here?

Thanks & Regards,
Vishal Mohan

-Original Message-
From: Xing, Beilei 
Sent: 27 May 2021 01:42 PM
To: Vishal Mohan ; users@dpdk.org
Subject: RE: DPDK 20.11 - i40e 2 tuple RSS configuration

Hi,

I remember the legacy API rte_eth_dev_filter_ctrl() is not supported in 20.11.
Please refer to the RSS Flow part in i40e.rst:

Enable hash and set input set for ipv4-tcp.
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
  actions rss types ipv4-tcp l3-src-only end queues end / end

BR,
Beilei

> -Original Message-
> From: users  On Behalf Of Vishal Mohan
> Sent: Thursday, May 27, 2021 3:40 PM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration
>
> I'm trying to implement RSS with 2 tuple (src ip, dst ip) hashing with
> X710 - quad port in DPDK 20.11 with no success. I was able to
> implement the same in DPDK 17.11 with a combination of RSS flags
> given below and
> rte_eth_dev_filter_ctrl():
>
> .rss_hf = (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |
> ETH_RSS_NONFRAG_IPV4_TCP)
>
> and selecting input fields as dst and src ip for every rss_hf flag
> using rte_eth_dev_filter_ctrl().
>
> In DPDK 20.11, I believe rte_eth_dev_filter_ctrl() is no longer available;
> instead we can configure the hashing with the generic rte_flow API. I did
> validate and create such a flow, but the hashing is not working as expected.
> Without the flags ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP no
> hashing takes place, and with those flags included in .rss_hf, 5-tuple
> hashing takes place.
>
> When using the rte_flow API, any flags given in rte_flow_action_rss.types
> have no effect on the final RSS hash result. Also, RSS hashing in testpmd
> isn't working when it is configured in "ip" (2-tuple) mode.
>
> Any input on configuring RSS hashing for a 2-tuple is much appreciated.
>
>
> Thanks & Regards,
>  Vishal Mohan



Re: [dpdk-users] Issues with rte_flow_destroy

2021-05-27 Thread Xing, Beilei
Hi,

What's the DPDK version you used? With the latest DPDK version, there's no such 
issue.
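
If upgrading is not an immediate option, it may also help to confirm the old
rule is really gone before creating the new one, for example along these lines
(a sketch only, not a confirmed workaround for the input-set conflict):

struct rte_flow_error error;

/* Check that the destroy itself did not fail silently. */
if (rte_flow_destroy(port_id, flow, &error) != 0)
    printf("destroy failed: %s\n",
           error.message ? error.message : "(no stated reason)");

/* Heavier hammer: drop every flow rule still attached to the port. */
if (rte_flow_flush(port_id, &error) != 0)
    printf("flush failed: %s\n",
           error.message ? error.message : "(no stated reason)");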

BR,
Beilei

> -Original Message-
> From: users  On Behalf Of Antoine POLLENUS
> Sent: Tuesday, May 25, 2021 8:34 PM
> To: users@dpdk.org
> Subject: [dpdk-users] Issues with rte_flow_destroy
> 
> Hi,
> 
> I'm experiencing some issues using the flow API and an Intel XXV710 (i40e).
> 
> I managed to reproduce it in the flow filtering sample.
> 
> I'm creating one flow, then deleting it, and then creating another with a
> basic change:
> 
> #define SRC_IP ((0<<24) + (0<<16) + (0<<8) + 0)        /* src ip = 0.0.0.0 */
> #define SRC_IP_1 ((192<<24) + (168<<16) + (1<<8) + 3)  /* src ip = 192.168.1.3 */
> #define DEST_IP ((192<<24) + (168<<16) + (1<<8) + 1)   /* dest ip = 192.168.1.1 */
> #define DEST_IP_1 ((192<<24) + (168<<16) + (1<<8) + 2) /* dest ip = 192.168.1.2 */
> 
> flow = generate_ipv4_flow(port_id, selected_queue,
>         SRC_IP, EMPTY_MASK,
>         DEST_IP, FULL_MASK, &error);
> if (!flow) {
>         printf("Flow can't be created %d message: %s\n",
>                 error.type,
>                 error.message ? error.message : "(no stated reason)");
>         rte_exit(EXIT_FAILURE, "error in creating flow");
> }
> 
> // Deleting the rule
> int returned;
> returned = rte_flow_destroy(port_id, flow, &error);
> if (returned < 0)
> {
>         printf("destroy %d message: %s\n",
>                 error.type,
>                 error.message ? error.message : "(no stated reason)");
> }
> 
> // Generating another rule
> flow1 = generate_ipv4_flow(port_id, selected_queue,
>         SRC_IP_1, FULL_MASK,
>         DEST_IP_1, FULL_MASK, &error);
> if (!flow1) {
>         printf("Flow can't be created %d message: %s\n",
>                 error.type,
>                 error.message ? error.message : "(no stated reason)");
>         rte_exit(EXIT_FAILURE, "error in creating flow");
> }
> 
> When doing that I always get an error on the second flow I want to add.
> 
> Flow can't be created 13 message: Conflict with the first rule's input set.
> 
> The rule is indeed in conflict because it is the same as the previous one,
> but with the source IP and the destination IP changed.
> 
> The strange thing is that a destroy has been called on the previous rule, so
> it should not be there anymore.
> 
> Am I doing something wrong, or is there a bug in the destroy function?
> 
> Thank you in advance for your answer,
> 
> Regards,
> 
> Antoine Pollenus


Re: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration

2021-05-27 Thread Xing, Beilei
Hi,

I remember the legacy API rte_eth_dev_filter_ctrl() is not supported in 20.11.
Please refer to the RSS Flow part in i40e.rst:

Enable hash and set input set for ipv4-tcp.
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
  actions rss types ipv4-tcp l3-src-only end queues end / end
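
For anyone doing the same thing from code rather than testpmd, a rough rte_flow
equivalent of the rule above might look like this (a sketch, not verified on
hardware):

/* Sketch: pattern eth / ipv4 / tcp / end,
 * actions rss types ipv4-tcp l3-src-only end queues end. */
struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
    { .type = RTE_FLOW_ITEM_TYPE_TCP },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};

struct rte_flow_action_rss rss = {
    .types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
    .queue = NULL,      /* empty queue list, like "queues end" in testpmd */
    .queue_num = 0,
};

struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};

struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_error error;

if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
    rte_flow_create(port_id, &attr, pattern, actions, &error);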

BR,
Beilei

> -Original Message-
> From: users  On Behalf Of Vishal Mohan
> Sent: Thursday, May 27, 2021 3:40 PM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration
> 
> I'm trying to implement RSS with 2 tuple (src ip, dst ip) hashing with X710 -
> quad port in DPDK 20.11 with no success. I was able to implement the same
> in DPDK 17.11 with a combination of RSS flags  given below and
> rte_eth_dev_filter_ctrl():
> 
> .rss_hf = (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER |
> ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |
> ETH_RSS_NONFRAG_IPV4_TCP)
> 
> and selecting input fields as dst and src ip for every rss_hf flag using
> rte_eth_dev_filter_ctrl().
> 
> In DPDK 20.11, I believe rte_eth_dev_filter_ctrl() is no longer available;
> instead we can configure the hashing with the generic rte_flow API. I did
> validate and create such a flow, but the hashing is not working as expected.
> Without the flags ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP no
> hashing takes place, and with those flags included in .rss_hf, 5-tuple
> hashing takes place.
> 
> When using the rte_flow API, any flags given in rte_flow_action_rss.types
> have no effect on the final RSS hash result. Also, RSS hashing in testpmd
> isn't working when it is configured in "ip" (2-tuple) mode.
> 
> Any input on configuring RSS hashing for a 2-tuple is much appreciated.
> 
> 
> Thanks & Regards,
>  Vishal Mohan


[dpdk-users] DPDK 20.11 - i40e 2 tuple RSS configuration

2021-05-27 Thread Vishal Mohan
I'm trying to implement RSS with 2 tuple (src ip, dst ip) hashing with X710 - 
quad port in DPDK 20.11 with no success. I was able to implement the same in 
DPDK 17.11 with a combination of RSS flags  given below and 
rte_eth_dev_filter_ctrl():

.rss_hf = (ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP)

and selecting input fields as dst and src ip for every rss_hf flag using 
rte_eth_dev_filter_ctrl().

In DPDK 20.11, I believe rte_eth_dev_filter_ctrl() is no longer available;
instead we can configure the hashing with the generic rte_flow API. I did
validate and create such a flow, but the hashing is not working as expected.
Without the flags ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_TCP no
hashing takes place, and with those flags included in .rss_hf, 5-tuple hashing
takes place.

When using the rte_flow API, any flags given in rte_flow_action_rss.types have
no effect on the final RSS hash result. Also, RSS hashing in testpmd isn't
working when it is configured in "ip" (2-tuple) mode.

Any input on configuring RSS hashing for a 2-tuple is much appreciated.


Thanks & Regards,
 Vishal Mohan


Re: [dpdk-users] MLX5 configuration and installation

2021-05-27 Thread Asaf Penso
Hello Alberto,

Is this a specific issue with the helloworld?
Does testpmd work ok for you?

Regards,
Asaf Penso

>-Original Message-
>From: users  On Behalf Of Alberto Perro
>Sent: Thursday, May 20, 2021 12:53 PM
>To: users@dpdk.org
>Subject: [dpdk-users] MLX5 configuration and installation
>
>Good morning,
>
>I want to evaluate DPDK on my servers, which are equipped with 4 Mellanox
>ConnectX-5 Ex 2x100G cards.
>I have installed MLNX_OFED 5.3-1.0.0 from nvidia, compiled from source with
>`--upstream-libs --dpdk` flags.
>I downloaded DPDK 20.11 LTS and compiled following the quick start guide.
>I have allocated 1024 2MB hugepages for each NUMA node.
>When I try to run dpdk-helloworld I get:
>
>```
>[aperro@ebstortest02 examples]$ sudo ./dpdk-helloworld
>EAL: Detected 64 lcore(s)
>EAL: Detected 2 NUMA nodes
>EAL: Detected static linkage of DPDK
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: Selected IOVA mode 'PA'
>EAL: No available hugepages reported in hugepages-1048576kB
>EAL: Probing VFIO support...
>EAL: VFIO support initialized
>EAL: DPDK is running on a NUMA system, but is compiled without NUMA
>support.
>EAL: This will have adverse consequences for performance and usability.
>EAL: Please use --legacy-mem option, or recompile with NUMA support.
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :81:00.0 (socket 1)
>mlx5_pci: probe of PCI device :81:00.0 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :81:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :81:00.1 (socket 1)
>mlx5_pci: probe of PCI device :81:00.1 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :81:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :a1:00.0 (socket 1)
>mlx5_pci: probe of PCI device :a1:00.0 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :a1:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :a1:00.1 (socket 1)
>mlx5_pci: probe of PCI device :a1:00.1 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :a1:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :c1:00.0 (socket 1)
>mlx5_pci: probe of PCI device :c1:00.0 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :c1:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :c1:00.1 (socket 1)
>mlx5_pci: probe of PCI device :c1:00.1 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :c1:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :c2:00.0 (socket 1)
>mlx5_pci: probe of PCI device :c2:00.0 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :c2:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :c2:00.1 (socket 1)
>mlx5_pci: probe of PCI device :c2:00.1 aborted after encountering an
>error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :c2:00.1 cannot be used
>EAL: No legacy callbacks, legacy socket not created
>hello from core 1 ...
>```


Re: [dpdk-users] Unable to recognize master/representor on the multiple IB devices

2021-05-27 Thread Asaf Penso
Hello Sankalpa,

Are you seeing this with testpmd? If so can you provide the command line?
If not, can you provide the EAL parameters and the devargs used?

Regards,
Asaf Penso

>-Original Message-
>From: users  On Behalf Of Sankalpa Timilsina
>Sent: Saturday, May 22, 2021 8:58 PM
>To: users@dpdk.org
>Subject: [dpdk-users] Unable to recognize master/representor on the
>multiple IB devices
>
> Hi, I am getting a DPDK MLX5 probing issue.
>
>   1. I have installed the mlx5/ofed driver
>   2. I have loaded the kernel modules.
>
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: Selected IOVA mode 'PA'
>EAL: No available hugepages reported in hugepages-2048kB
>EAL: Probing VFIO support...
>EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: :5e:00.0 (socket 0)
>mlx5_pci: unable to recognize master/representors on the multiple IB devices
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device :5e:00.0 cannot be used
>EAL: Bus (pci) probe failed.
>
>As for the 'failing to load mlx5_pci' driver, I can see that the mlx5_core
>driver is loaded.
>
>dpdk-devbind.py -s
>
>Network devices using kernel driver
>===
>:5e:00.0 'MT27700 Family [ConnectX-4]' if=enp94s0 drv=mlx5_core
>unused=
>
>What does failing to recognize master/representors on multiple IB devices
>mean?
>
>My configuration is: CentOS 7.9, Linux Kernel 5.12, OFED 4.9 (LTS)