[dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

2016-06-28 Thread Wang, Zhihong
Thanks Nelio and Pablo!

> -Original Message-
> From: Nélio Laranjeiro [mailto:nelio.laranjeiro at 6wind.com]
> Sent: Tuesday, June 28, 2016 4:34 PM
> To: De Lara Guarch, Pablo 
> Cc: Wang, Zhihong ; dev at dpdk.org; Ananyev,
> Konstantin ; Richardson, Bruce
> ; thomas.monjalon at 6wind.com
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> 
> Hi Pablo,
> 
> On Mon, Jun 27, 2016 at 10:36:38PM +, De Lara Guarch, Pablo wrote:
> > Hi Nelio,
> >
> > > -Original Message-
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Nélio Laranjeiro
> > > Sent: Monday, June 27, 2016 3:24 PM
> > > To: Wang, Zhihong
> > > Cc: dev at dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch,
> > > Pablo; thomas.monjalon at 6wind.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> > >
> > > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > > > This patch removes the constraint in rxq handling when multiqueue is
> > > > enabled, so that all rxqs are handled.
> > > >
> > > > Currently testpmd forces a dedicated core for each rxq; some rxqs may be
> > > > ignored when the core count is less than the rxq count, which causes
> > > > confusion and inconvenience.
> > > >
> > > > One example: a Red Hat engineer was doing a multiqueue test with 2 ports
> > > > in the guest, each with 4 queues, and testpmd as the forwarding engine in
> > > > the guest. As usual he used 1 core for forwarding, and as a result he only
> > > > saw traffic from port 0 queue 0 to port 1 queue 0. A lot of emails and
> > > > quite some time were spent to root-cause it, and of course it was caused
> > > > by this unreasonable testpmd behavior.
> > > >
> > > > Moreover, even if we understand this behavior, testing the above case
> > > > still requires 8 cores for a single guest to poll all the rxqs, which is
> > > > obviously too expensive.
> > > >
> > > > We have met quite a lot of cases like this; one recent example:
> > > > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> > > >
> > > >
> > > > Signed-off-by: Zhihong Wang 
> > > > ---
> > > >  app/test-pmd/config.c | 8 +---
> > > >  1 file changed, 1 insertion(+), 7 deletions(-)
> > > >
> > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > > > index ede7c78..4719a08 100644
> > > > --- a/app/test-pmd/config.c
> > > > +++ b/app/test-pmd/config.c
> > > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> > > > cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > > > cur_fwd_config.nb_fwd_streams =
> > > > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > > > -   if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > > > -   cur_fwd_config.nb_fwd_streams =
> > > > -   (streamid_t)cur_fwd_config.nb_fwd_lcores;
> > > > -   else
> > > > -   cur_fwd_config.nb_fwd_lcores =
> > > > -   (lcoreid_t)cur_fwd_config.nb_fwd_streams;
> > > >
> > > > /* reinitialize forwarding streams */
> > > > init_fwd_streams();
> > > >
> > > > setup_fwd_config_of_each_lcore(&cur_fwd_config);
> > > > rxp = 0; rxq = 0;
> > > > -   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > > > +   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> > > > struct fwd_stream *fs;
> > > >
> > > > fs = fwd_streams[lc_id];
> > > > --
> > > > 2.5.0
> > >
> > > Hi Zhihong,
> > >
> > > It seems this commit introduces a bug in pkt_burst_transmit(); it only
> > > occurs when the number of cores present in the coremask is greater than
> > > the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> > >
> > >   Port 0 Link Up - speed 4 Mbps - full-duplex
> > >   Port 1 Link Up - speed 4 Mbps - full-duplex
> > >   Done
> > >   testpmd> start tx_first

[dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

2016-06-28 Thread Nélio Laranjeiro
Hi Pablo,

On Mon, Jun 27, 2016 at 10:36:38PM +, De Lara Guarch, Pablo wrote:
> Hi Nelio,
> 
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Nélio Laranjeiro
> > Sent: Monday, June 27, 2016 3:24 PM
> > To: Wang, Zhihong
> > Cc: dev at dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch,
> > Pablo; thomas.monjalon at 6wind.com
> > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> > 
> > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > > This patch removes the constraint in rxq handling when multiqueue is
> > > enabled, so that all rxqs are handled.
> > >
> > > Currently testpmd forces a dedicated core for each rxq; some rxqs may be
> > > ignored when the core count is less than the rxq count, which causes
> > > confusion and inconvenience.
> > >
> > > One example: a Red Hat engineer was doing a multiqueue test with 2 ports
> > > in the guest, each with 4 queues, and testpmd as the forwarding engine in
> > > the guest. As usual he used 1 core for forwarding, and as a result he only
> > > saw traffic from port 0 queue 0 to port 1 queue 0. A lot of emails and
> > > quite some time were spent to root-cause it, and of course it was caused
> > > by this unreasonable testpmd behavior.
> > >
> > > Moreover, even if we understand this behavior, testing the above case
> > > still requires 8 cores for a single guest to poll all the rxqs, which is
> > > obviously too expensive.
> > >
> > > We have met quite a lot of cases like this; one recent example:
> > > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> > >
> > >
> > > Signed-off-by: Zhihong Wang 
> > > ---
> > >  app/test-pmd/config.c | 8 +---
> > >  1 file changed, 1 insertion(+), 7 deletions(-)
> > >
> > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > > index ede7c78..4719a08 100644
> > > --- a/app/test-pmd/config.c
> > > +++ b/app/test-pmd/config.c
> > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> > >   cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > >   cur_fwd_config.nb_fwd_streams =
> > >   (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > > - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > > - cur_fwd_config.nb_fwd_streams =
> > > - (streamid_t)cur_fwd_config.nb_fwd_lcores;
> > > - else
> > > - cur_fwd_config.nb_fwd_lcores =
> > > - (lcoreid_t)cur_fwd_config.nb_fwd_streams;
> > >
> > >   /* reinitialize forwarding streams */
> > >   init_fwd_streams();
> > >
> > >   setup_fwd_config_of_each_lcore(&cur_fwd_config);
> > >   rxp = 0; rxq = 0;
> > > - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > > + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> > >   struct fwd_stream *fs;
> > >
> > >   fs = fwd_streams[lc_id];
> > > --
> > > 2.5.0
> > 
> > Hi Zhihong,
> > 
> > It seems this commit introduces a bug in pkt_burst_transmit(); it only
> > occurs when the number of cores present in the coremask is greater than
> > the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> > 
> >   Port 0 Link Up - speed 4 Mbps - full-duplex
> >   Port 1 Link Up - speed 4 Mbps - full-duplex
> >   Done
> >   testpmd> start tx_first
> > io packet forwarding - CRC stripping disabled - packets/burst=64
> > nb forwarding cores=10 - nb forwarding ports=2
> > RX queues=4 - RX desc=256 - RX free threshold=0
> > RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> > TX queues=4 - TX desc=256 - TX free threshold=0
> > TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> > TX RS bit threshold=0 - TXQ flags=0x0
> >   Segmentation fault (core dumped)
> > 
> > 
> > If I start testpmd with a coremask with at most as many cores as queues,
> > everything works well (i.e. coremask=0xff0, or 0xf00).
> > 
> > Are you able to reproduce the same issue?
> > Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).
> 
> Thanks for reporting this. I was able to reproduce this issue and
> sent a patch that should fix it. Could you verify it?
> http://dpdk.org/dev/patchwork/patch/14430/


I have tested it and it works; I will add a test report to the
corresponding email.

Thanks
> 
> 
> Thanks
> Pablo
> > 
> > Regards,

-- 
Nélio Laranjeiro
6WIND


[dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

2016-06-27 Thread De Lara Guarch, Pablo
Hi Nelio,

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Nélio Laranjeiro
> Sent: Monday, June 27, 2016 3:24 PM
> To: Wang, Zhihong
> Cc: dev at dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch,
> Pablo; thomas.monjalon at 6wind.com
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> 
> On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > This patch removes the constraint in rxq handling when multiqueue is
> > enabled, so that all rxqs are handled.
> >
> > Currently testpmd forces a dedicated core for each rxq; some rxqs may be
> > ignored when the core count is less than the rxq count, which causes
> > confusion and inconvenience.
> >
> > One example: a Red Hat engineer was doing a multiqueue test with 2 ports
> > in the guest, each with 4 queues, and testpmd as the forwarding engine in
> > the guest. As usual he used 1 core for forwarding, and as a result he only
> > saw traffic from port 0 queue 0 to port 1 queue 0. A lot of emails and
> > quite some time were spent to root-cause it, and of course it was caused
> > by this unreasonable testpmd behavior.
> >
> > Moreover, even if we understand this behavior, testing the above case
> > still requires 8 cores for a single guest to poll all the rxqs, which is
> > obviously too expensive.
> >
> > We have met quite a lot of cases like this; one recent example:
> > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> >
> >
> > Signed-off-by: Zhihong Wang 
> > ---
> >  app/test-pmd/config.c | 8 +---
> >  1 file changed, 1 insertion(+), 7 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index ede7c78..4719a08 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> > cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > cur_fwd_config.nb_fwd_streams =
> > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > -   if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > -   cur_fwd_config.nb_fwd_streams =
> > -   (streamid_t)cur_fwd_config.nb_fwd_lcores;
> > -   else
> > -   cur_fwd_config.nb_fwd_lcores =
> > -   (lcoreid_t)cur_fwd_config.nb_fwd_streams;
> >
> > /* reinitialize forwarding streams */
> > init_fwd_streams();
> >
> >   setup_fwd_config_of_each_lcore(&cur_fwd_config);
> > rxp = 0; rxq = 0;
> > -   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > +   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> > struct fwd_stream *fs;
> >
> > fs = fwd_streams[lc_id];
> > --
> > 2.5.0
> 
> Hi Zhihong,
> 
> It seems this commit introduces a bug in pkt_burst_transmit(); it only
> occurs when the number of cores present in the coremask is greater than
> the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> 
>   Port 0 Link Up - speed 4 Mbps - full-duplex
>   Port 1 Link Up - speed 4 Mbps - full-duplex
>   Done
>   testpmd> start tx_first
> io packet forwarding - CRC stripping disabled - packets/burst=64
> nb forwarding cores=10 - nb forwarding ports=2
> RX queues=4 - RX desc=256 - RX free threshold=0
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX queues=4 - TX desc=256 - TX free threshold=0
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX RS bit threshold=0 - TXQ flags=0x0
>   Segmentation fault (core dumped)
> 
> 
> If I start testpmd with a coremask with at most as many cores as queues,
> everything works well (i.e. coremask=0xff0, or 0xf00).
> 
> Are you able to reproduce the same issue?
> Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).

Thanks for reporting this. I was able to reproduce this issue and
sent a patch that should fix it. Could you verify it?
http://dpdk.org/dev/patchwork/patch/14430/
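
One way to keep handling all the streams without tripping over the spare
lcores (sketch only, with simplified stand-in types; see the patch itself
for what was actually changed) is to cap the number of forwarding lcores
rather than the number of streams:

#include <stdio.h>

typedef unsigned int lcoreid_t;   /* local stand-ins for the testpmd typedefs */
typedef unsigned int streamid_t;

struct fwd_config {
	lcoreid_t  nb_fwd_lcores;
	streamid_t nb_fwd_streams;
};

int main(void)
{
	/* 10 cores in the coremask, 2 ports x 4 rxqs = 8 streams */
	struct fwd_config cfg = { .nb_fwd_lcores = 10, .nb_fwd_streams = 8 };

	/* cap the lcores, not the streams, so every rxq keeps its stream */
	if (cfg.nb_fwd_lcores > cfg.nb_fwd_streams)
		cfg.nb_fwd_lcores = (lcoreid_t)cfg.nb_fwd_streams;

	printf("forwarding lcores used: %u, streams: %u\n",
	       cfg.nb_fwd_lcores, cfg.nb_fwd_streams);
	return 0;
}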


Thanks
Pablo
> 
> Regards,
> 
> --
> Nélio Laranjeiro
> 6WIND


[dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

2016-06-27 Thread Nélio Laranjeiro
On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> This patch removes the constraint in rxq handling when multiqueue is
> enabled, so that all rxqs are handled.
>
> Currently testpmd forces a dedicated core for each rxq; some rxqs may be
> ignored when the core count is less than the rxq count, which causes
> confusion and inconvenience.
>
> One example: a Red Hat engineer was doing a multiqueue test with 2 ports
> in the guest, each with 4 queues, and testpmd as the forwarding engine in
> the guest. As usual he used 1 core for forwarding, and as a result he only
> saw traffic from port 0 queue 0 to port 1 queue 0. A lot of emails and
> quite some time were spent to root-cause it, and of course it was caused
> by this unreasonable testpmd behavior.
>
> Moreover, even if we understand this behavior, testing the above case
> still requires 8 cores for a single guest to poll all the rxqs, which is
> obviously too expensive.
>
> We have met quite a lot of cases like this; one recent example:
> http://openvswitch.org/pipermail/dev/2016-June/072110.html
> 
> 
> Signed-off-by: Zhihong Wang 
> ---
>  app/test-pmd/config.c | 8 +---
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index ede7c78..4719a08 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
>   cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
>   cur_fwd_config.nb_fwd_streams =
>   (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> - cur_fwd_config.nb_fwd_streams =
> - (streamid_t)cur_fwd_config.nb_fwd_lcores;
> - else
> - cur_fwd_config.nb_fwd_lcores =
> - (lcoreid_t)cur_fwd_config.nb_fwd_streams;
>  
>   /* reinitialize forwarding streams */
>   init_fwd_streams();
>  
>   setup_fwd_config_of_each_lcore(&cur_fwd_config);
>   rxp = 0; rxq = 0;
> - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
>   struct fwd_stream *fs;
>  
>   fs = fwd_streams[lc_id];
> -- 
> 2.5.0

Hi Zhihong,

It seems this commit introduces a bug in pkt_burst_transmit(); it only
occurs when the number of cores present in the coremask is greater than
the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.

  Port 0 Link Up - speed 4 Mbps - full-duplex
  Port 1 Link Up - speed 4 Mbps - full-duplex
  Done
  testpmd> start tx_first
io packet forwarding - CRC stripping disabled - packets/burst=64
nb forwarding cores=10 - nb forwarding ports=2
RX queues=4 - RX desc=256 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=4 - TX desc=256 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0x0
  Segmentation fault (core dumped)


If I start testpmd with a coremask with at most as many cores as queues,
everything works well (i.e. coremask=0xff0, or 0xf00).

Are you able to reproduce the same issue?
Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).

Regards,

-- 
Nélio Laranjeiro
6WIND


[dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

2016-06-14 Thread Zhihong Wang
This patch removes the constraint in rxq handling when multiqueue is
enabled, so that all rxqs are handled.

Currently testpmd forces a dedicated core for each rxq; some rxqs may be
ignored when the core count is less than the rxq count, which causes
confusion and inconvenience.

One example: a Red Hat engineer was doing a multiqueue test with 2 ports
in the guest, each with 4 queues, and testpmd as the forwarding engine in
the guest. As usual he used 1 core for forwarding, and as a result he only
saw traffic from port 0 queue 0 to port 1 queue 0. A lot of emails and
quite some time were spent to root-cause it, and of course it was caused
by this unreasonable testpmd behavior.

Moreover, even if we understand this behavior, testing the above case
still requires 8 cores for a single guest to poll all the rxqs, which is
obviously too expensive.

We have met quite a lot of cases like this; one recent example:
http://openvswitch.org/pipermail/dev/2016-June/072110.html


Signed-off-by: Zhihong Wang 
---
 app/test-pmd/config.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ede7c78..4719a08 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
cur_fwd_config.nb_fwd_streams =
(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-   if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-   cur_fwd_config.nb_fwd_streams =
-   (streamid_t)cur_fwd_config.nb_fwd_lcores;
-   else
-   cur_fwd_config.nb_fwd_lcores =
-   (lcoreid_t)cur_fwd_config.nb_fwd_streams;

/* reinitialize forwarding streams */
init_fwd_streams();

setup_fwd_config_of_each_lcore(&cur_fwd_config);
rxp = 0; rxq = 0;
-   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+   for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
struct fwd_stream *fs;

fs = fwd_streams[lc_id];
-- 
2.5.0
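
As a side note, here is a standalone sketch (not the testpmd code itself;
the names and the paired-port topology are local simplifications) of the
rxq-to-stream mapping the patched loop aims for, i.e. one stream per
(port, rxq) pair no matter how many forwarding cores are available:

#include <stdio.h>

int main(void)
{
	const unsigned int nb_ports = 2, nb_q = 4;   /* the guest example above */
	const unsigned int nb_streams = nb_ports * nb_q;
	unsigned int rxp = 0, rxq = 0;

	for (unsigned int sm_id = 0; sm_id < nb_streams; sm_id++) {
		unsigned int txp = rxp ^ 1;   /* peer of a paired port */

		printf("stream %u: rx port %u queue %u -> tx port %u queue %u\n",
		       sm_id, rxp, rxq, txp, rxq);

		/* walk every rxq of a port, then move on to the next port */
		if (++rxq == nb_q) {
			rxq = 0;
			rxp++;
		}
	}
	return 0;
}

Run against the 2 ports x 4 queues case above, it lists all 8 (port,
queue) pairs, which a single forwarding core can now cover.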