[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-28 Thread Richardson, Bruce
> -Original Message-
> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> Sent: Tuesday, May 27, 2014 11:33 PM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] New library: rte_distributor
> 
> Hi Bruce,
> 
> As for rte_acl, I have some formatting comments.
> 
> 2014-05-20 11:00, Bruce Richardson:
> > This adds a new library to the Intel DPDK whereby a set of packets can be
> > distributed one-at-a-time to a set of worker cores, with dynamic load
> > balancing being done between those workers. Flows are identified by a tag
> > within the mbuf (currently the RSS hash field, 32-bit value), which is used
> > to ensure that no two packets of the same flow are processed in parallel,
> > thereby preserving ordering.
> >
> >  app/test/Makefile  |   2 +
> >  app/test/commands.c|   7 +-
> >  app/test/test.h|   2 +
> >  app/test/test_distributor.c| 582 +
> >  app/test/test_distributor_perf.c   | 274 
> >  config/defconfig_i686-default-linuxapp-gcc |   5 +
> >  config/defconfig_i686-default-linuxapp-icc |   5 +
> >  config/defconfig_x86_64-default-bsdapp-gcc |   6 +
> >  config/defconfig_x86_64-default-linuxapp-gcc   |   5 +
> >  config/defconfig_x86_64-default-linuxapp-icc   |   5 +
> >  lib/Makefile   |   1 +
> >  lib/librte_distributor/Makefile|  50 +++
> >  lib/librte_distributor/rte_distributor.c   | 417 ++
> >  lib/librte_distributor/rte_distributor.h   | 173 
> >  lib/librte_eal/common/include/rte_tailq_elem.h |   2 +
> >  mk/rte.app.mk  |   4 +
> >  16 files changed, 1539 insertions(+), 1 deletion(-)
> 
> As you are introducing a new library, you need to update
> doxygen configuration and start page:
> doc/doxy-api.conf
> doc/doxy-api-index.md

I didn't know those needed updating; I'll add that in the v2 patch set.

> 
> I've run checkpatch.pl from kernel.org on these distributor patches
> and it reports some code style issues.
> Could you have a look at it please?

Yep. I've downloaded and run that script myself in preparation for a v2 patch 
set (due really soon), so hopefully all should be well second time round.



[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-28 Thread Thomas Monjalon
Hi Bruce,

As for rte_acl, I have some formatting comments.

2014-05-20 11:00, Bruce Richardson:
> This adds a new library to the Intel DPDK whereby a set of packets can be
> distributed one-at-a-time to a set of worker cores, with dynamic load
> balancing being done between those workers. Flows are identified by a tag
> within the mbuf (currently the RSS hash field, 32-bit value), which is used
> to ensure that no two packets of the same flow are processed in parallel,
> thereby preserving ordering.
> 
>  app/test/Makefile  |   2 +
>  app/test/commands.c|   7 +-
>  app/test/test.h|   2 +
>  app/test/test_distributor.c| 582 +
>  app/test/test_distributor_perf.c   | 274 
>  config/defconfig_i686-default-linuxapp-gcc |   5 +
>  config/defconfig_i686-default-linuxapp-icc |   5 +
>  config/defconfig_x86_64-default-bsdapp-gcc |   6 +
>  config/defconfig_x86_64-default-linuxapp-gcc   |   5 +
>  config/defconfig_x86_64-default-linuxapp-icc   |   5 +
>  lib/Makefile   |   1 +
>  lib/librte_distributor/Makefile|  50 +++
>  lib/librte_distributor/rte_distributor.c   | 417 ++
>  lib/librte_distributor/rte_distributor.h   | 173 
>  lib/librte_eal/common/include/rte_tailq_elem.h |   2 +
>  mk/rte.app.mk  |   4 +
>  16 files changed, 1539 insertions(+), 1 deletion(-)

As you are introducing a new library, you need to update
doxygen configuration and start page:
doc/doxy-api.conf
doc/doxy-api-index.md

I've run checkpatch.pl from kernel.org on these distributor patches
and it reports some code style issues.
Could you have a look at it please?

Thanks
-- 
Thomas


[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-20 Thread Richardson, Bruce


> -Original Message-
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Tuesday, May 20, 2014 6:14 PM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] New library: rte_distributor
> 
> On Tue, May 20, 2014 at 11:02:15AM +, Richardson, Bruce wrote:
> > > -Original Message-
> > > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > > Sent: Tuesday, May 20, 2014 11:39 AM
> > > To: Richardson, Bruce
> > > Cc: dev at dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/4] New library: rte_distributor
> > >
> > > This sounds an awful lot like the team and bonding drivers.  Why implement
> > > this as a separate application accessible api, rather than a stacked PMD?
> > > If you do the latter then existing applications could conceivably change
> > > their configurations to use this technology and gain the benefit of load
> > > distribution without having to alter the application to use a new api.
> > >
> > I'm not sure I see the similarity with the bonded driver, which merges
> > multiple ports into a single logical port, i.e. you pull packets from a
> > single source which is actually pulling packets from possibly multiple
> > sources behind the scenes, whereas this takes packets from an unknown source
> > and distributes them among a set of workers one packet at a time. (While
> > handling single packets is slower than handling packet bursts, it is
> > something that is sometimes needed to support existing code which may not be
> > written to work with packet bursts.)
> >
> Ah, my bad, I was looking at the API as a way of multiplexing locally
> generated data to multiple workers for transmission over multiple network
> interfaces, not to demultiplex received data to multiple workers.  That makes
> more sense.  Sorry for the noise.  I've got a few more comments inline with
> the rest of your patches.
> Neil

No problem, thanks for the feedback, I'll work through it and submit a v2 patch 
as soon as I can.

/Bruce


[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-20 Thread Neil Horman
On Tue, May 20, 2014 at 11:02:15AM +, Richardson, Bruce wrote:
> > -Original Message-
> > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > Sent: Tuesday, May 20, 2014 11:39 AM
> > To: Richardson, Bruce
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/4] New library: rte_distributor
> > 
> > >
> > This sounds an awful lot like the team and bonding drivers.  Why implement 
> > this
> > as a separate application accessible api, rather than a stacked PMD?  If 
> > you do
> > the latter then existing applications could conceivably change their
> > configurations to use this technology and gain the benefit of load 
> > distribution
> > without having to alter the application to use a new api.
> > 
> 
> I'm not sure I see the similarity with the bonded driver, which merges 
> multiple ports into a single logical port, i.e. you pull packets from a 
> single source which is actually pulling packets from possibly multiple 
> sources behind the scenes, whereas this takes packets from an unknown source 
> and distributes them among a set of workers one packet at a time. (While 
> handling single packets is slower than handling packet bursts, it is 
> something that is sometimes needed to support existing code which may not be 
> written to work with packet bursts.) 
> 
Ah, my bad, I was looking at the API as a way of multiplexing locally generated
data to multiple workers for transmission over multiple network interfaces, not
to demultiplex received data to multiple workers.  That makes more sense.  Sorry
for the noise.  I've got a few more comments inline with the rest of your
patches.
Neil



[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-20 Thread Richardson, Bruce
> -Original Message-
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Tuesday, May 20, 2014 11:39 AM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] New library: rte_distributor
> 
> >
> This sounds an awful lot like the team and bonding drivers.  Why implement 
> this
> as a separate application accessible api, rather than a stacked PMD?  If you 
> do
> the latter then existing applications could conceivably change their
> configurations to use this technology and gain the benefit of load 
> distribution
> without having to alter the application to use a new api.
> 

I'm not sure I see the similarity with the bonded driver, which merges multiple 
ports into a single logical port, i.e. you pull packets from a single source 
which is actually pulling packets from possibly multiple sources behind the 
scenes, whereas this takes packets from an unknown source and distributes them 
among a set of workers one packet at a time. (While handling single packets is 
slower than handling packet bursts, it is something that is sometimes needed to 
support existing code which may not be written to work with packet bursts.) 

The load balancing is also more dynamic than that done by existing mechanisms, 
since no calculation is done on the packets or the packet metadata to assign a 
packet to a worker - instead, if a particular flow tag is not already in-flight 
with a worker, the next packet with that tag goes to the next available worker. 
In this way, the library also takes care of ensuring that packets from a single 
flow are maintained in order, and provides a mechanism to have the packets 
passed back to the distributor thread when done, for further processing there, 
e.g. rescheduling a second time, or other actions. 
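To make the dispatch rule above concrete, here is a deliberately simplified toy 
model in Python - not the library's actual implementation, just a sketch of the 
tag-affinity logic: a packet whose flow tag is already in-flight with a worker 
is appended to that worker's backlog, otherwise it goes to the next free worker, 
so packets of one flow are never processed in parallel and stay in order.

```python
from collections import deque

class ToyDistributor:
    """Toy model of tag-based dispatch: packets sharing a flow tag
    always follow the worker that currently holds that tag."""

    def __init__(self, num_workers):
        self.backlogs = [deque() for _ in range(num_workers)]
        self.inflight = [None] * num_workers  # flow tag each worker holds

    def process(self, packets):
        """Assign each (tag, payload) packet to a worker backlog."""
        for tag, payload in packets:
            if tag in self.inflight:
                # Flow already in-flight: must go to the same worker,
                # preserving per-flow ordering.
                w = self.inflight.index(tag)
            else:
                # New flow: pick the next worker with no in-flight tag
                # (real code would block/backlog if none is free).
                w = next((i for i, t in enumerate(self.inflight)
                          if t is None), 0)
                self.inflight[w] = tag
            self.backlogs[w].append((tag, payload))

d = ToyDistributor(num_workers=2)
d.process([(0xAA, "p1"), (0xBB, "p2"), (0xAA, "p3")])
# Both 0xAA packets land on worker 0 in arrival order; 0xBB on worker 1.
```

Note the contrast with hash-based schemes: no computation over packet contents 
decides the worker; assignment falls out of whichever worker happens to be free 
when a new tag first appears.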

While in certain circumstances an ethdev rx/tx API could be used (and it is 
something we have thought about and may well add to this library in future), 
there are certain requirements that cannot be met by just making this a stacked 
ethdev/PMD:
* not all packets come from an rx_burst call on another PMD, especially where 
the tags on the packets need to be computed by software
* the rx_burst API call provides no way to pass back packets to the source when 
finished.
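The second point - workers handing finished packets back to the distributor - 
is the part an rx_burst-style API has no vocabulary for. The toy Python sketch 
below illustrates that request/return handshake; the method names loosely echo 
the patch's worker-side entry points (rte_distributor_get_pkt() and 
rte_distributor_return_pkt()), but the single-queue, single-worker model is 
purely illustrative.

```python
from collections import deque

class ToyReturningDistributor:
    """Toy model of the worker handshake: workers pull one packet at a
    time and can hand finished packets back to the distributor thread."""

    def __init__(self):
        self.pending = deque()   # packets waiting for a worker
        self.returned = deque()  # packets handed back when done

    def process(self, packets):
        """Distributor core: enqueue new work."""
        self.pending.extend(packets)

    def get_pkt(self):
        """Worker core: fetch the next packet, or None if idle."""
        return self.pending.popleft() if self.pending else None

    def return_pkt(self, pkt):
        """Worker core: pass a finished packet back."""
        self.returned.append(pkt)

    def returned_pkts(self):
        """Distributor core: collect packets returned by workers."""
        out = list(self.returned)
        self.returned.clear()
        return out

d = ToyReturningDistributor()
d.process(["pkt-a", "pkt-b"])
while (p := d.get_pkt()) is not None:   # worker loop (one worker here)
    d.return_pkt(p.upper())             # "work" = uppercase the payload
# The distributor can now reschedule or otherwise handle the returns.
```

The key structural difference from an ethdev: the flow of packets is a round 
trip through the workers, not a one-way rx path, which is why the return leg 
needs its own API.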


[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-20 Thread Bruce Richardson
This adds a new library to the Intel DPDK whereby a set of packets can be 
distributed one-at-a-time to a set of worker cores, with dynamic load balancing 
being done between those workers. Flows are identified by a tag within the mbuf 
(currently the RSS hash field, 32-bit value), which is used to ensure that no 
two packets of the same flow are processed in parallel, thereby preserving 
ordering.

Bruce Richardson (4):
  eal: add tailq for new distributor component
  distributor: new packet distributor library
  distributor: add distributor library to build
  distributor: add unit tests for distributor lib

 app/test/Makefile  |   2 +
 app/test/commands.c|   7 +-
 app/test/test.h|   2 +
 app/test/test_distributor.c| 582 +
 app/test/test_distributor_perf.c   | 274 
 config/defconfig_i686-default-linuxapp-gcc |   5 +
 config/defconfig_i686-default-linuxapp-icc |   5 +
 config/defconfig_x86_64-default-bsdapp-gcc |   6 +
 config/defconfig_x86_64-default-linuxapp-gcc   |   5 +
 config/defconfig_x86_64-default-linuxapp-icc   |   5 +
 lib/Makefile   |   1 +
 lib/librte_distributor/Makefile|  50 +++
 lib/librte_distributor/rte_distributor.c   | 417 ++
 lib/librte_distributor/rte_distributor.h   | 173 
 lib/librte_eal/common/include/rte_tailq_elem.h |   2 +
 mk/rte.app.mk  |   4 +
 16 files changed, 1539 insertions(+), 1 deletion(-)
 create mode 100644 app/test/test_distributor.c
 create mode 100644 app/test/test_distributor_perf.c
 create mode 100644 lib/librte_distributor/Makefile
 create mode 100644 lib/librte_distributor/rte_distributor.c
 create mode 100644 lib/librte_distributor/rte_distributor.h

-- 
1.9.0



[dpdk-dev] [PATCH 0/4] New library: rte_distributor

2014-05-20 Thread Neil Horman
On Tue, May 20, 2014 at 11:00:53AM +0100, Bruce Richardson wrote:
> This adds a new library to the Intel DPDK whereby a set of packets can be 
> distributed one-at-a-time to a set of worker cores, with dynamic load 
> balancing being done between those workers. Flows are identified by a tag 
> within the mbuf (currently the RSS hash field, 32-bit value), which is used 
> to ensure that no two packets of the same flow are processed in parallel, 
> thereby preserving ordering.
> 
> Bruce Richardson (4):
>   eal: add tailq for new distributor component
>   distributor: new packet distributor library
>   distributor: add distributor library to build
>   distributor: add unit tests for distributor lib
> 
>  app/test/Makefile  |   2 +
>  app/test/commands.c|   7 +-
>  app/test/test.h|   2 +
>  app/test/test_distributor.c| 582 +
>  app/test/test_distributor_perf.c   | 274 
>  config/defconfig_i686-default-linuxapp-gcc |   5 +
>  config/defconfig_i686-default-linuxapp-icc |   5 +
>  config/defconfig_x86_64-default-bsdapp-gcc |   6 +
>  config/defconfig_x86_64-default-linuxapp-gcc   |   5 +
>  config/defconfig_x86_64-default-linuxapp-icc   |   5 +
>  lib/Makefile   |   1 +
>  lib/librte_distributor/Makefile|  50 +++
>  lib/librte_distributor/rte_distributor.c   | 417 ++
>  lib/librte_distributor/rte_distributor.h   | 173 
>  lib/librte_eal/common/include/rte_tailq_elem.h |   2 +
>  mk/rte.app.mk  |   4 +
>  16 files changed, 1539 insertions(+), 1 deletion(-)
>  create mode 100644 app/test/test_distributor.c
>  create mode 100644 app/test/test_distributor_perf.c
>  create mode 100644 lib/librte_distributor/Makefile
>  create mode 100644 lib/librte_distributor/rte_distributor.c
>  create mode 100644 lib/librte_distributor/rte_distributor.h
> 
> -- 
> 1.9.0
> 
> 
This sounds an awful lot like the team and bonding drivers.  Why implement this
as a separate application accessible api, rather than a stacked PMD?  If you do
the latter then existing applications could conceivably change their
configurations to use this technology and gain the benefit of load distribution
without having to alter the application to use a new api.

Neil