Re: [lng-odp] Classification Queue Group

2016-11-09 Thread Bill Fischofer
On Mon, Nov 7, 2016 at 5:16 AM, Bala Manoharan wrote:

> Hi,
>
> This mail thread discusses the design of classification queue group
> RFC. The same can be found in the google doc whose link is given
> below.
> Users can provide their comments either in this mail thread or in the
> google doc as per their convenience.
>
> https://docs.google.com/document/d/1fOoG9WDR0lMpVjgMAsx8QsMr0YFK9
> slR93LZ8VXqM2o/edit?usp=sharing
>
> The basic issues with queues being a single target for a CoS are
> twofold:
>
> Queues must be created and deleted individually. This imposes a
> significant burden when queues are used to represent individual flows
> since the application may need to process thousands (or millions) of
> flows.
> A single PMR can only match a packet to a single queue associated with
> a target CoS. This prohibits efficient capture of subfield
> classification.
> To solve these issues, Tiger Moth introduces the concept of a queue
> group. A queue group is an extension to the existing queue
> specification in a Class of Service.
>
> Queue groups solve the classification issues associated with
> individual queues in three ways:
>
> * The odp_queue_group_create() API can create a large number of
> related queues with a single call.
> * A single PMR can spread traffic to many queues associated with the
> same CoS by assigning packets matching the PMR to a queue group rather
> than a queue.
> * A hashed PMR subfield is used to distribute individual queues within
> a queue group for scheduling purposes.
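For illustration only, a minimal sketch of how an application might combine the proposed APIs (odp_queue_group_create() from the RFC plus the CoS fields in the diff below). The parameter structure, its init helper and the field names used here are assumptions, not an agreed API; only odp_cls_cos_param_init() and odp_cls_cos_create() are existing ODP calls.

#include <odp_api.h>

/* Hypothetical sketch: odp_queue_group_param_t, odp_queue_group_param_init(),
 * num_queue and hash are assumed names taken from the RFC text. */
static odp_cos_t create_flow_cos(void)
{
	odp_queue_group_param_t grp_param;
	odp_cls_cos_param_t cos_param;

	odp_queue_group_param_init(&grp_param);   /* assumed init helper */
	grp_param.num_queue = 1024;               /* many flow queues, one call */
	grp_param.hash.proto.ipv4_udp = 1;        /* spread flows by IPv4/UDP tuple */

	odp_queue_group_t grp = odp_queue_group_create("udp-flows", &grp_param);

	odp_cls_cos_param_init(&cos_param);       /* existing API */
	cos_param.type = ODP_QUEUE_GROUP_T;       /* discriminator from the diff below */
	cos_param.queue_group = grp;              /* a single PMR now feeds 1024 queues */

	return odp_cls_cos_create("udp-flows-cos", &cos_param);
}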
>
>
> diff --git a/include/odp/api/spec/classification.h
> b/include/odp/api/spec/classification.h
> index 6eca9ab..cf56852 100644
> --- a/include/odp/api/spec/classification.h
> +++ b/include/odp/api/spec/classification.h
> @@ -126,6 +126,12 @@ typedef struct odp_cls_capability_t {
>
> /** A Boolean to denote support of PMR range */
> odp_bool_t pmr_range_supported;
> +
> + /** A Boolean to denote support of queue group */
> + odp_bool_t queue_group_supported;
>

To date we've not introduced optional APIs into ODP so I'm not sure if we'd
want to start here. If we are adding queue groups, all ODP implementations
should be expected to support queue groups, so this flag shouldn't be
needed. Limits on the support (e.g., max number of queue groups supported,
etc.) are appropriate, but there shouldn't be an option to not support them
at all.
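A sketch of that direction (limits rather than feature flags), purely illustrative and not an agreed API; the field names are assumptions:

/* Illustrative only: express queue group support as capability limits
 * instead of odp_bool_t feature flags. */
typedef struct odp_cls_capability_t {
	/* ... existing fields ... */
	odp_bool_t pmr_range_supported;

	/** Maximum number of queue groups supported by the implementation */
	unsigned max_queue_groups;

	/** Maximum number of queues within a single queue group */
	unsigned max_queues_per_group;
} odp_cls_capability_t;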


> +
> + /** A Boolean to denote support of queue */
> + odp_bool_t queue_supported;
>

Not sure what the intent is here. Is this anticipating that some
implementations might only support sending flows to queue groups and not to
individual queues? That would be a serious functional regression and not
something we'd want to encourage.


> } odp_cls_capability_t;
>
>
> /**
> @@ -162,7 +168,18 @@ typedef enum {
>  * Used to communicate class of service creation options
>  */
> typedef struct odp_cls_cos_param {
> - odp_queue_t queue; /**< Queue associated with CoS */
> + /** If type is ODP_QUEUE_T, odp_queue_t is linked with CoS,
> + * if type is ODP_QUEUE_GROUP_T, odp_queue_group_t is linked with CoS.
>

Perhaps not the best choice of discriminator names. Perhaps
ODP_COS_TYPE_QUEUE and ODP_COS_TYPE_GROUP might be simpler?


> + */
> + odp_queue_type_e type;
>

We already have an odp_queue_type_t defined for the queue APIs so this name
would be confusingly similar. We're really identifying what the type of the
CoS is so perhaps odp_cos_type_t might be better here? That would be
consistent with the ODP_QUEUE_TYPE_PLAIN and ODP_QUEUE_TYPE_SCHED used in
the odp_queue_type_t enum.
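A sketch of what that naming could look like, illustrative only (note that in the RFC diff the union would be anonymous, not a typedef):

/* Illustrative sketch, not an agreed API. */
typedef enum odp_cos_type_t {
	ODP_COS_TYPE_QUEUE, /**< CoS delivers packets to a single queue */
	ODP_COS_TYPE_GROUP  /**< CoS delivers packets to a queue group */
} odp_cos_type_t;

typedef struct odp_cls_cos_param_t {
	odp_cos_type_t type;    /**< Selects which union member is valid */
	union {
		odp_queue_t queue;             /**< Valid when type == ODP_COS_TYPE_QUEUE */
		odp_queue_group_t queue_group; /**< Valid when type == ODP_COS_TYPE_GROUP */
	};
	odp_pool_t pool;                /**< Pool associated with CoS */
	odp_cls_drop_t drop_policy;     /**< Drop policy associated with CoS */
} odp_cls_cos_param_t;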


> +
> + typedef union {
> + /** Queue associated with CoS */
> + odp_queue_t queue;
> +
> + /** Queue Group associated with CoS */
> + odp_queue_group_t queue_group;
> + };
> odp_pool_t pool; /**< Pool associated with CoS */
> odp_cls_drop_t drop_policy; /**< Drop policy associated with CoS */
> } odp_cls_cos_param_t;
>
>
> diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h
> index 51d94a2..7dde060 100644
> --- a/include/odp/api/spec/queue.h
> +++ b/include/odp/api/spec/queue.h
> @@ -158,6 +158,87 @@ typedef struct odp_queue_param_t {
> odp_queue_t odp_queue_create(const char *name, const odp_queue_param_t
> *param);
>
> +/**
> + * Queue group capability
> + * This capability structure defines system Queue Group capability
> + */
> +typedef struct odp_queue_group_capability_t {
> + /** Number of queues supported per queue group */
> + unsigned supported_queues;
>

We usually specify limits with a max_ prefix, so max_queues would be better
than supported_queues here.


> + /** Supported protocol fields for hashing*/
> + odp_pktin_hash_proto_t supported;
>

"supported" by itself is unclear. Perhaps hashable_fields might be better?


> +}
> +
> +/**
> + * ODP Queue Group parameters
> + * Queue group supports only schedule queues 
>

I thought we decided that this would be the case since the notion of
polling the individual queues within a queue 

Re: [lng-odp] [PATCHv4] linux-generic: add vlan insertion test

2016-11-09 Thread Maxim Uvarov

Merged,
Maxim.

On 11/08/16 21:45, Mike Holmes wrote:



On 8 November 2016 at 09:55, Maxim Uvarov wrote:


linux-generic packet mmap has a separate function to put back
VLAN tags which are stripped out by the Linux kernel. This test
adds code coverage for this function by receiving VLAN
traffic from a veth device.
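As background, an editorial sketch (not the linux-generic code under test): with AF_PACKET mmap the kernel strips the 802.1Q tag and reports the TCI out of band in the tpacket header, so the receive path has to splice the 4-byte tag back in after the MAC addresses, roughly like this:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define ETH_ALEN 6

/* Re-insert a VLAN tag (TPID 0x8100 + host-order TCI) into 'frame' of
 * 'len' bytes; the buffer must have at least 4 bytes of room after 'len'.
 * Returns the new frame length. */
static size_t vlan_reinsert(uint8_t *frame, size_t len, uint16_t tci)
{
	uint16_t tpid = htons(0x8100);
	uint16_t tci_be = htons(tci);

	/* Shift everything after the two MAC addresses right by 4 bytes. */
	memmove(frame + 2 * ETH_ALEN + 4, frame + 2 * ETH_ALEN,
		len - 2 * ETH_ALEN);
	memcpy(frame + 2 * ETH_ALEN, &tpid, 2);
	memcpy(frame + 2 * ETH_ALEN + 2, &tci_be, 2);
	return len + 4;
}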

Signed-off-by: Maxim Uvarov


Reviewed-by: Mike Holmes


---

 v4: add check for root

 test/linux-generic/Makefile.am|   4 +
 test/linux-generic/m4/configure.m4|  1 +
 test/linux-generic/mmap_vlan_ins/.gitignore   |  2 +
 test/linux-generic/mmap_vlan_ins/Makefile.am  | 15 ++
 test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c  | 232 ++
 test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.sh | 82 
 test/linux-generic/mmap_vlan_ins/pktio_env| 120 +++
 test/linux-generic/mmap_vlan_ins/vlan.pcap| Bin 0 -> 9728 bytes
 8 files changed, 456 insertions(+)
 create mode 100644 test/linux-generic/mmap_vlan_ins/.gitignore
 create mode 100644 test/linux-generic/mmap_vlan_ins/Makefile.am
 create mode 100644 test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c
 create mode 100755 test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.sh
 create mode 100644 test/linux-generic/mmap_vlan_ins/pktio_env
 create mode 100644 test/linux-generic/mmap_vlan_ins/vlan.pcap

diff --git a/test/linux-generic/Makefile.am
b/test/linux-generic/Makefile.am
index 4660cf0..998ee56 100644
--- a/test/linux-generic/Makefile.am
+++ b/test/linux-generic/Makefile.am
@@ -37,11 +37,15 @@ TESTS = validation/api/pktio/pktio_run.sh \

 SUBDIRS += validation/api/pktio\
   validation/api/shmem\
+  mmap_vlan_ins\
   pktio_ipc\
   ring

 if HAVE_PCAP
 TESTS += validation/api/pktio/pktio_run_pcap.sh
+
+TESTS +=   mmap_vlan_ins/mmap_vlan_ins.sh
+SUBDIRS += mmap_vlan_ins
 endif
 if netmap_support
 TESTS += validation/api/pktio/pktio_run_netmap.sh
diff --git a/test/linux-generic/m4/configure.m4
b/test/linux-generic/m4/configure.m4
index 6b92201..8746dab 100644
--- a/test/linux-generic/m4/configure.m4
+++ b/test/linux-generic/m4/configure.m4
@@ -3,6 +3,7 @@ m4_include([test/linux-generic/m4/performance.m4])
 AC_CONFIG_FILES([test/linux-generic/Makefile
 test/linux-generic/validation/api/shmem/Makefile
 test/linux-generic/validation/api/pktio/Makefile
+test/linux-generic/mmap_vlan_ins/Makefile
 test/linux-generic/pktio_ipc/Makefile
 test/linux-generic/ring/Makefile
 test/linux-generic/performance/Makefile])
diff --git a/test/linux-generic/mmap_vlan_ins/.gitignore
b/test/linux-generic/mmap_vlan_ins/.gitignore
new file mode 100644
index 000..755fa2e
--- /dev/null
+++ b/test/linux-generic/mmap_vlan_ins/.gitignore
@@ -0,0 +1,2 @@
+*.pcap
+plat_mmap_vlan_ins
diff --git a/test/linux-generic/mmap_vlan_ins/Makefile.am
b/test/linux-generic/mmap_vlan_ins/Makefile.am
new file mode 100644
index 000..2641556
--- /dev/null
+++ b/test/linux-generic/mmap_vlan_ins/Makefile.am
@@ -0,0 +1,15 @@
+include $(top_srcdir)/test/Makefile.inc
+TESTS_ENVIRONMENT += TEST_DIR=${top_builddir}/test/validation
+
+dist_check_SCRIPTS = vlan.pcap \
+mmap_vlan_ins.sh \
+pktio_env
+
+test_SCRIPTS = $(dist_check_SCRIPTS)
+
+bin_PROGRAMS = plat_mmap_vlan_ins$(EXEEXT)
+plat_mmap_vlan_ins_LDFLAGS = $(AM_LDFLAGS) -static
+plat_mmap_vlan_ins_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+# Clonned from example odp_l2fwd simple
+dist_plat_mmap_vlan_ins_SOURCES = mmap_vlan_ins.c
diff --git a/test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c
b/test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c
new file mode 100644
index 000..0682d2d
--- /dev/null
+++ b/test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c
@@ -0,0 +1,232 @@
+/* Copyright (c) 2016, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#define POOL_NUM_PKT 8192
+#define POOL_SEG_LEN 1856
+#define MAX_PKT_BURST 32
+#define MAX_WORKERS 1
+
+static int exit_thr;
+static int g_ret;
+
+struct {
+   odp_pktio_t if0, if1;
+   odp_pktin_queue_t if0in, if1in;
+   odp_pktout_queue_t if0out, if1out;

Re: [lng-odp] [API-NEXT PATCH 0/3] _ishm: multiple blocks using identical names

2016-11-09 Thread Bill Fischofer
For this series:

Reviewed-by: Bill Fischofer 

On Tue, Nov 8, 2016 at 3:49 AM, Christophe Milard <
christophe.mil...@linaro.org> wrote:

> NOTE: must be applied on top of: "using _ishm as north API mem allocator"
>
> This patch series allows the usage of the same name when reserving
> different
> memory blocks. The lookup function returns any handle of the set of blocks
> using the common name if a lookup is attempted on a name being used
> multiple
> times.
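A minimal sketch of the resulting behaviour, using the existing odp_shm_* API (error handling omitted); which of the same-named blocks the lookup returns is left unspecified by the series:

#include <odp_api.h>

static void shm_same_name_example(void)
{
	odp_shm_t a = odp_shm_reserve("myname", 4096, 64, 0);
	odp_shm_t b = odp_shm_reserve("myname", 8192, 64, 0); /* same name, now allowed */

	/* Returns either 'a' or 'b'; which one is implementation-defined. */
	odp_shm_t found = odp_shm_lookup("myname");
	(void)found;

	odp_shm_free(a);
	odp_shm_free(b);
}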
>
> Christophe Milard (3):
>   linux-gen: _ishm: accept multiple usage of same block name
>   test: api: shm: test using the same block name multiple times
>   doc: shm: defining behaviour when blocks have same name
>
>  doc/users-guide/users-guide.adoc  | 8 ++--
>  platform/linux-generic/_ishm.c| 8 
>  test/common_plat/validation/api/shmem/shmem.c | 9 +
>  3 files changed, 15 insertions(+), 10 deletions(-)
>
> --
> 2.7.4
>
>


Re: [lng-odp] driver interface thinkings....

2016-11-09 Thread Francois Ozog
To deal with drivers we need additional "standard" (helper or API?)
objects to handle MAC addresses.

ODP could be made to handle SDH, SDLC, FrameRelay or ATM packets. A
packet_io is pretty generic...

But as soon as we talk with drivers and initialization, we need to deal
with L1 initialization (link up/ synchronized...) and L2 initialization
(DLCI, MAC (Ethernet, FDDI, TokenRing...)...).

Shall we add those L1 and L2 concepts?

FF

On 9 November 2016 at 09:38, Francois Ozog  wrote:

>
>
> On 9 November 2016 at 09:09, Christophe Milard <
> christophe.mil...@linaro.org> wrote:
>
>>
>>
>> On 8 November 2016 at 18:19, Francois Ozog 
>> wrote:
>>
>>> Hi Christophe,
>>>
>>> We are focusing on network devices which have at least network ports.
>>> Inventory of devices and ports are two distinct topics as there is no
>>> guarantee that there is one device per port. (Intel and Mellanox have this
>>> policy: one device per PCI port, but many others have one PCI device for
>>> multiple ports (Chelsio...)
>>>
>>> 1) device discovery
>>>
>>> So let's focus first on device discovery.
>>>
>>> Devices can be found from:
>>> - static "configuration" based on ODP implementation
>>> - ACPI
>>> - PCI
>>> - DPAA2 NXP bus
>>> - VMBus (VMs running on Microsoft Hyper-V hypervisor)
>>> - others ?
>>>
>>> So buses are somewhat limited (ACPI is NOT a bus). I prefer to talk about
>>> device enumerators.
>>>
>>
>> OK.
>>
>>
>>>
>>> so we can loop through all devices using your proposed:
>>> for each 
>>> for each probed device
>>>
>>
>> Not sure what a "probed device" is here, I was thinking:
>>  for each 
>> for each driver
>> driver.probe(device)
>>
>> Is that what you meant?
>>
>>
>>> [FF] no:
>>  for each 
>>      for each enumerated device
>>          for each registered driver
>>              driver.probe(device)
>
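For concreteness, a hypothetical sketch of the nested loop above; all type and function names are invented for illustration, nothing here exists in ODP:

/* enumerator -> device -> driver probe loop. */
typedef struct device_t { void *enumerator_data; } device_t;

typedef struct enumerator_t {
	device_t **devices;   /* devices found by this enumerator */
	int num_devices;
} enumerator_t;

typedef struct driver_t {
	/* returns 0 when the driver accepts and binds to the device */
	int (*probe)(device_t *dev);
} driver_t;

static void probe_all(enumerator_t *enums[], int num_enum,
		      driver_t *drivers[], int num_drv)
{
	for (int e = 0; e < num_enum; e++)
		for (int d = 0; d < enums[e]->num_devices; d++)
			for (int i = 0; i < num_drv; i++)
				if (drivers[i]->probe(enums[e]->devices[d]) == 0)
					break; /* device bound, move to next device */
}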
>>
>>> But that is only for static discovery at the program startup.
>>>
>>
>> yes
>>
>>
>>>
>>> It would be way better to allow hotplugs (card insertion in a chassis,
>>> addition of one virtual interface in a VM because of a scale-up
>>> operation).
>>>
>>
>> yes: we can probe new hotplugged devices (against registered drivers) when
>> a new device is hot plugged, and probe registered (unassigned) devices
>> against any hot-plugged driver.
>>
>>
>>>
>>> So in addition to initial synchronous probing we need asynchronous
>>> events handling.
>>>
>>
>> yes
>>
>>
>>>
>>> Once we have found the device, we need to check if we have a driver for
>>> it and if the "user" has decided if he wants to use that device
>>> (configuration aspects)
>>>
>>
>> I guess the fact that a user loads a driver is an act of configuration...?
>>  [FF] I guess not, load all drivers
>>
>>>
>>> 2) driver assignment
>>> From this device we may want to enumerate ports and other "things" (like
>>> embedded switch) that are present on the device.
>>>
>>
>> hmmm... so a device driver could act as a new enumerator? I am not fully
>> following your proposal here... Let's say 2 PCI boards each contain a
>> switch (one switch per board). The PCI enumerator will detect 2 devices. So
>> according to your proposal, we would also have switch enumerators? How many
>> such switch enumerators do we have? Where are they located?
>> [FF]: Let's take a simpler example: 2 PCI devices, each with four 10 Gbps
>> Ethernet ports. There are two enumerated PCI devices but there are 8
>> packet_ios. Packet I/Os should keep a link to a device, the device
>> should have a link back to the enumerator. No upper-layer metadata saved in
>> each "object", just a link.
>>
>>>
>>> Accessing devices is bus dependent, os dependent and processor
>>> dependent.
>>> A few problems to tackle:
>>> - On Linux, a device can be accessed through uio, vfio or through a
>>> direct library (cf bifurcated drivers)
>>>
>>
>> Yes. So a device enumerator will give many interfaces to play with the
>> devices it enumerates. In my first example, I called BUS what you call
>> enumerator, and DRIVER_TYPE was the key to select the device interface (like
>> PCI_VFIO would select the vfio interface). If we keep the enumerator
>> view (which I like), shall each enumerator give a choice of
>> "device-interface" for each of its devices? I.e.:
>> my BUS = your enumerator (OK)
>> my DRIVER_TYPE = device_interface?
>>  [FF] not sure I understand
>>
>
>
>> - if uio, it also depends on the processor architecture (Intel has a
>>> special way of exposing uio, PPC64 and ARM64 are different)
>>>
>>
>> Haven't dug into uio yet ...
>>
>>
>>
>>> - if bifurcated driver, the "odp/dpdk driver" SHOULD not deal directly
>>> with the PCI device but rather with the device specific library
>>>
>>
>> "odp/dpdk driver"...Are you planning for a new driver structure for dpdk
>> too, which we would share ?
>> Not sure I understand what you meant there. I thought bifurcated drivers
>> were just drivers getting their data from userland 

Re: [lng-odp] driver interface thinkings....

2016-11-09 Thread Francois Ozog
On 9 November 2016 at 09:09, Christophe Milard  wrote:

>
>
> On 8 November 2016 at 18:19, Francois Ozog 
> wrote:
>
>> Hi Christophe,
>>
>> We are focusing on network devices which have at least network ports.
>> Inventory of devices and ports are two distinct topics as there is no
>> guarantee that there is one device per port. (Intel and Mellanox have this
>> policy: one device per PCI port, but many others have one PCI device for
>> multiple ports (Chelsio...)
>>
>> 1) device discovery
>>
>> So let's focus first on device discovery.
>>
>> Devices can be found from:
>> - static "configuration" based on ODP implementation
>> - ACPI
>> - PCI
>> - DPAA2 NXP bus
>> - VMBus (VMs running on Microsoft Hyper-V hypervisor)
>> - others ?
>>
>> So buses are somewhat limited (ACPI is NOT a bus). I prefer to talk about
>> device enumerators.
>>
>
> OK.
>
>
>>
>> so we can loop through all devices using your proposed:
>> for each 
>> for each probed device
>>
>
> Not sure what a "probed device" is here, I was thinking:
>  for each 
> for each driver
> driver.probe(device)
>
> Is that what you meant?
>
>
>> [FF] no:
>  for each 
>      for each enumerated device
>          for each registered driver
>              driver.probe(device)

>
>> But that is only for static discovery at the program startup.
>>
>
> yes
>
>
>>
>> It would be way better to allow hotplugs (card insertion in a chassis,
>> addition of one virtual interface in a VM because of a scale-up
>> operation).
>>
>
> yes: we can probe new hotplugged devices (against registered drivers) when a
> new device is hot plugged, and probe registered (unassigned) devices
> against any hot-plugged driver.
>
>
>>
>> So in addition to initial synchronous probing we need asynchronous events
>> handling.
>>
>
> yes
>
>
>>
>> Once we have found the device, we need to check if we have a driver for
>> it and if the "user" has decided if he wants to use that device
>> (configuration aspects)
>>
>
> I guess the fact that a user loads a driver is an act of configuration...?
>  [FF] I guess not, load all drivers
>
>>
>> 2) driver assignment
>> From this device we may want to enumerate ports and other "things" (like
>> embedded switch) that are present on the device.
>>
>
> hmmm... so a device driver could act as a new enumerator? I am not fully
> following your proposal here... Let's say 2 PCI boards each contain a
> switch (one switch per board). The PCI enumerator will detect 2 devices. So
> according to your proposal, we would also have switch enumerators? How many
> such switch enumerators do we have? Where are they located?
> [FF]: Let's take a simpler example: 2 PCI devices, each with four 10 Gbps
> Ethernet ports. There are two enumerated PCI devices but there are 8
> packet_ios. Packet I/Os should keep a link to a device, the device
> should have a link back to the enumerator. No upper-layer metadata saved in
> each "object", just a link.
>
>>
>> Accessing devices is bus dependent, os dependent and processor dependent.
>> A few problems to tackle:
>> - On Linux, a device can be accessed through uio, vfio or through a
>> direct library (cf bifurcated drivers)
>>
>
> Yes. So a device enumerator will give many interfaces to play with the
> devices it enumerates. In my first example, I called BUS what you call
> enumerator, and DRIVER_TYPE was the key to select the device interface (like
> PCI_VFIO would select the vfio interface). If we keep the enumerator
> view (which I like), shall each enumerator give a choice of
> "device-interface" for each of its devices? I.e.:
> my BUS = your enumerator (OK)
> my DRIVER_TYPE = device_interface?
>  [FF] not sure I understand
>


> - if uio, it also depends on the processor architecture (Intel has a
>> special way of exposing uio, PPC64 and ARM64 are different)
>>
>
> Haven't dug into uio yet ...
>
>
>
>> - if bifurcated driver, the "odp/dpdk driver" SHOULD not deal directly
>> with the PCI device but rather with the device specific library
>>
>
> "odp/dpdk driver"...Are you planning for a new driver structure for dpdk
> too, which we would share ?
> Not sure I understand what you meant there. I thought bifurcated drivers
> were just drivers getting their data from a userland application but
> controlled by kernel tools. I guess my view is not complete here.
> [FF] when you bind a driver to uio, the netdev disappears. When using
> bifurcated drivers it does not.
>
>>
>> 3) created objects in ODP
>> odp_pktio_t somewhat corresponds to a port "driver". It does
>> properly adapt to a device with multiple ports.
>>
>
> Not sure of the match when you started talking about the embedded
> switches...
>
[FF] I keep hw discovery/inventory/representation (hierarchy of connected
elements) separate from the functional aspects (drivers, ports,
controllable hw blocks such as classifier, scheduler...). I guess we can
represent the switch as if it was a 

Re: [lng-odp] driver interface thinkings....

2016-11-09 Thread Christophe Milard
On 8 November 2016 at 18:19, Francois Ozog  wrote:

> Hi Christophe,
>
> We are focusing on network devices which have at least network ports.
> Inventory of devices and ports are two distinct topics as there is no
> guarantee that there is one device per port. (Intel and Mellanox have this
> policy: one device per PCI port, but many others have one PCI device for
> multiple ports (Chelsio...)
>
> 1) device discovery
>
> So let's focus first on device discovery.
>
> Devices can be found from:
> - static "configuration" based on ODP implementation
> - ACPI
> - PCI
> - DPAA2 NXP bus
> - VMBus (VMs running on Microsoft Hyper-V hypervisor)
> - others ?
>
> So buses are somewhat limited (ACPI is NOT a bus). I prefer to talk about
> device enumerators.
>

OK.


>
> so we can loop through all devices using your proposed:
> for each 
> for each probed device
>

Not sure what a "probed device" is here, I was thinking:
 for each 
for each driver
driver.probe(device)

Is that what you meant?


>
> But that is only for static discovery at the program startup.
>

yes


>
> It would be way better to allow hotplugs (card insertion in a chassis,
> addition of one virtual interface in a VM because of a scale-up
> operation).
>

yes: we can probe new hotplugged devices (against registered drivers) when a
new device is hot plugged, and probe registered (unassigned) devices
against any hot-plugged driver.


>
> So in addition to initial synchronous probing we need asynchronous events
> handling.
>

yes


>
> Once we have found the device, we need to check if we have a driver for it
> and if the "user" has decided if he wants to use that device (configuration
> aspects)
>

I guess the fact that a user loads a driver is an act of configuration...?


>
> 2) driver assignment
> From this device we may want to enumerate ports and other "things" (like
> embedded switch) that are present on the device.
>

hmmm... so a device driver could act as a new enumerator? I am not fully
following your proposal here... Let's say 2 PCI boards each contain a
switch (one switch per board). The PCI enumerator will detect 2 devices. So
according to your proposal, we would also have switch enumerators? How many
such switch enumerators do we have? Where are they located?


> Accessing devices is bus dependent, os dependent and processor dependent.
> A few problems to tackle:
> - On Linux, a device can be accessed through uio, vfio or through a direct
> library (cf bifurcated drivers)
>

Yes. So a device enumerator will give many interfaces to play with the
devices it enumerates. In my first example, I called BUS what you call
enumerator, and DRIVER_TYPE was the key to select the device interface (like
PCI_VFIO would select the vfio interface). If we keep the enumerator
view (which I like), shall each enumerator give a choice of
"device-interface" for each of its devices? I.e.:
my BUS = your enumerator (OK)
my DRIVER_TYPE = device_interface?
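A hypothetical sketch of that mapping (BUS maps to enumerator, DRIVER_TYPE to device-interface); all names are invented for illustration, none of this exists in ODP:

typedef enum {
	DEV_IF_UIO,        /* device mapped through uio */
	DEV_IF_VFIO,       /* device mapped through vfio */
	DEV_IF_VENDOR_LIB  /* bifurcated: vendor library, kernel keeps the netdev */
} device_interface_t;

typedef struct device_t {
	device_interface_t interfaces[4]; /* interfaces this device can offer */
	int num_interfaces;
} device_t;

typedef struct enumerator_t {
	const char *name;                     /* "pci", "vmbus", ... */
	int (*scan)(device_t *devs, int max); /* fill 'devs', return count found */
} enumerator_t;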


> - if uio, it also depends on the processor architecture (Intel has a
> special way of exposing uio, PPC64 and ARM64 are different)
>

Haven't dug into uio yet ...



> - if bifurcated driver, the "odp/dpdk driver" SHOULD not deal directly
> with the PCI device but rather with the device specific library
>

"odp/dpdk driver"...Are you planning for a new driver structure for dpdk
too, which we would share ?
Not sure I understand what you meant there. I thought bifurcated drivers
were just drivers getting their data from a userland application but
controlled by kernel tools. I guess my view is not complete here.


> 3) created objects in ODP
> odp_pktio_t somewhat corresponds to a port "driver". It does properly
> adapt to a device with multiple ports.
>

Not sure of the match when you started talking about the embedded switches...

Christophe.

>
> FF
>
>
>
>
>
> On 7 November 2016 at 17:49, Christophe Milard <
> christophe.mil...@linaro.org> wrote:
>
>> Bonsoir Francois,
>>
>> I'll take that in English thereafter so that other can read (copy to the
>> list).
>>
>> I have looked at that:
>>
>> https://dpdksummit.com/Archive/pdf/2016Userspace/Day02-
>> Session03-ShreyanshJain-Userspace2016.pdf
>>
>> I guess that is what you referred to, Francois, when talking at the
>> SYNC meeting earlier today. I missed a lot of was you said, though, my
>> BJ being extremely uncooperative today.
>>
>> I have not seen the presentation, but my understanding is that PCI
>> drivers would "require" PCI bus drivers. As SocX driver could require
>> Soc bus scan.
>> I have to admit that I did not follow 100%  the DPDK proposal, but
>> this approach to have a "BUS driver" is not very clear to me, when it
>> comes to ODP:
>>
>> 1)Because SoC constructors will probably accelerate more than the
>> interface. If those constructors have, for instance, different
>> odp-packet implementation (matching their HW), they will likely have
>> their own ODP implementation anyway, so making the bus