Here are the code snippets I promised:

/* ========================= */
/* Code snippets           */


/* --------------- */
/* enumerator stuff */

/* PCI enumerator module */

pci_enumerator_class_init()
{
    enumclass_register("pci", ...);
    /* browse /sys/bus/pci to create pci_enumerators for each bus */
}
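
To make the browse step concrete, here is a minimal sketch. The
pci_enumerator_create() helper is hypothetical; only the
/sys/bus/pci/devices path is standard Linux, and for simplicity the
sketch creates one enumerator per device entry rather than per bus:

#include <dirent.h>

/* hypothetical helper, assumed provided by the PCI enumerator module */
extern void pci_enumerator_create(const char *pci_addr);

/* sketch: walk /sys/bus/pci/devices and create an enumerator per entry */
static int pci_browse_sysfs(void)
{
    DIR *dir = opendir("/sys/bus/pci/devices");
    struct dirent *entry;

    if (dir == NULL)
        return -1; /* no PCI bus visible (e.g. a minimal VM) */

    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue; /* skip "." and ".." */
        /* d_name is a PCI address such as "0000:00:04.0" */
        pci_enumerator_create(entry->d_name);
    }

    closedir(dir);
    return 0;
}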

pci_enumerator_init()
{
    /* Do nothing */
}

/* netmdev enumerator module */

netmdev_enumerator_class_init()
{
    enumclass_register("netmdev", ...);
}

netmdev_enumerator_init()
{
    /* Do nothing */
}
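
For reference, here is a minimal sketch of the enumclass API the
snippets in this section assume. Every name and signature below is an
assumption made for illustration, not an agreed interface:

typedef struct pktio pktio;
typedef struct enumclass enumclass;

/* driver probe callback: returns a pktio if the driver accepts devname */
typedef pktio *(*probe_fn)(enumclass *clazz, const char *devname);

struct enumclass {
    const char *name; /* "pci", "netmdev", ... */
    pktio *(*pktio_probe)(enumclass *clazz, const char *devname);
    void (*add_device)(enumclass *clazz, void *dev_info);
};

/* register a new enumerator class ("pci", "netmdev", ...) */
int enumclass_register(const char *name, ...);

/* look up a previously registered class by name */
enumclass *enumclass_get(const char *name);

/* register a driver probe function with a class */
int enumclass_driver_register(enumclass *clazz, probe_fn probe);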


/* --------------- */
/* driver stuff    */

virtionet_probe()
{
    /* see the pktio_open("pci:...") execution flow below */
}

virtionet_init()
{
    enumclass clazz = enumclass_get("pci");
    enumclass_driver_register(clazz, virtionet_probe);
}

r8169_probe()
{
    /* see the pktio_open("netmdev:...") execution flow below */
}

r8169_init()
{
    enumclass clazz = enumclass_get("netmdev");
    enumclass_driver_register(clazz, r8169_probe);
}


/* --------------- */
/* ODP application */

/* ========================= */
/* pktio_open("pci:0000:04.0") execution flow */

/* in pktio_open */
-> enumclass clazz = enumclass_get("pci"); /* "pci" obtained from all
   chars before ":" */
-> if clazz == NULL (because there is no ":" or because no enumclass was
   found), browse through all known pktios (this is the current behavior)

-> clazz->pktio_probe(clazz, "pci:0000:04.0")

/* in pci_pktio_probe: */
--> try all registered drivers:
--> driver->probe(clazz, "pci:0000:04.0")

/* in virtionet_probe: */
---> check if the specified string designates a virtio_net PCI device;
     let's assume yes
---> clazz->add_device(clazz, ....)
---> return pktio structure
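
A minimal sketch of this dispatch, building on the hypothetical
enumclass API above (pktio_open_legacy() is a made-up stand-in for the
current browse-all-pktios behavior):

#include <stddef.h>
#include <string.h>

extern pktio *pktio_open_legacy(const char *name); /* current behavior */

pktio *pktio_open(const char *name)
{
    char class_name[32];
    const char *sep = strchr(name, ':');
    enumclass *clazz = NULL;

    if (sep != NULL && (size_t)(sep - name) < sizeof(class_name)) {
        /* the class name is everything before the first ":" */
        memcpy(class_name, name, sep - name);
        class_name[sep - name] = '\0';
        clazz = enumclass_get(class_name);
    }

    if (clazz == NULL) /* no ":" or unknown class */
        return pktio_open_legacy(name); /* browse all known pktios */

    /* the class probe tries each registered driver in turn */
    return clazz->pktio_probe(clazz, name);
}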


/* ========================= */
/* pktio_open("netmdev:enp4s0") execution flow */

/* in pktio_open */
-> enumclass clazz = enumclass_get("netmdev"); /* "netmdev" obtained from
   all chars before ":" */
-> if clazz == NULL (because there is no ":" or because no enumclass was
   found), browse through all known pktios (this is the current behavior)

-> clazz->pktio_probe(clazz, "netmdev:enp4s0")

/* in netmdev_pktio_probe: */
--> check that enp4s0 is a netmdev-capable device
/* we may want to introduce an additional parameter to
   enumclass_driver_register with a match string to avoid browsing */
/* for PCI this may be useless, as one driver may handle many different
   PCI IDs, so we may ignore this */
--> if yes, try all registered drivers:
--> driver->probe(clazz, "netmdev:enp4s0")

/* in r8169_probe: */
---> check if the specified string designates an r8169 device (readlink
     the driver symlink in sysfs); let's assume yes
---> clazz->add_device(clazz, ....)
---> return pktio structure
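
The sysfs check mentioned above could look like the sketch below. Only
the /sys/class/net/<if>/device/driver symlink is standard Linux; the
helper itself is illustrative:

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* sketch: is the interface bound to the r8169 kernel driver? */
static int is_r8169(const char *ifname)
{
    char path[PATH_MAX];
    char target[PATH_MAX];
    ssize_t len;

    snprintf(path, sizeof(path), "/sys/class/net/%s/device/driver", ifname);

    len = readlink(path, target, sizeof(target) - 1);
    if (len < 0)
        return 0; /* no driver symlink: not a PCI-backed interface */
    target[len] = '\0';

    /* the symlink target ends with the driver name, e.g. ".../drivers/r8169" */
    const char *base = strrchr(target, '/');
    base = base ? base + 1 : target;
    return strcmp(base, "r8169") == 0;
}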



An ODP application that wants hotplug support would call:

pktio_open("*:*") or pktio_open("pci:*")

This call could be made blocking so that the underlying ODP API does not
change while an application can still deal with hotplug; the blocking
call would be issued from an odp_thread. Alternatively, we may want an
event-based mechanism, in which case the pktio_open above would actually
result in subscribing to events.
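
One possible shape for the event-based variant is sketched below; all
names are hypothetical and are only meant to make the idea concrete:

/* hypothetical hotplug event types and subscription API */
typedef enum {
    DDF_DEV_ADDED,
    DDF_DEV_REMOVED
} ddf_event_type;

typedef struct {
    ddf_event_type type;
    const char *devname; /* e.g. "pci:0000:00:04.0" */
} ddf_event;

typedef void (*ddf_event_cb)(const ddf_event *ev, void *arg);

/* subscribe to device events matching a pattern such as "pci:*";
 * in the blocking model, an odp_thread would instead loop on a
 * blocking pktio_open("pci:*") call */
int ddf_subscribe(const char *pattern, ddf_event_cb cb, void *arg);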


A "device" will have multiple interfaces and additional meta data not found
in ODP objects:
- NUMA node,
- pktio_interface
- MII management interface
- xyz interface

The non-pktio interfaces are required if we can't go the netmdev route:
we'll need control over things beyond packet I/O ops, and we don't want a
pktio_ops structure that grows to 200 callbacks.
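
A device descriptor along these lines could look like the sketch below;
every name is an assumption, meant only to show the shape of the
metadata and the per-interface handles:

/* hypothetical DDF device descriptor: one device, several interfaces,
 * plus metadata that has no home in existing ODP objects */
typedef struct ddf_device {
    const char *name;  /* e.g. "pci:0000:00:04.0" */
    int numa_node;     /* NUMA node the device is attached to */

    struct pktio_ops *pktio; /* packet I/O interface */
    struct mii_ops *mii;     /* MII management interface */
    /* further interfaces (the "xyz" above) hang off here instead of
     * growing pktio_ops into a 200-callback structure */
} ddf_device;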

For those who did not attend the DDF presentation (or who forgot): a
pluggable NIC/SmartNIC may have multiple ports and even a configurable
switch.

ODP may consider the multiple ports as totally independent entities, but in
doing so we fail to leverage NIC/SmartNIC features. The classifier of such a
NIC/SmartNIC may be programmed to autonomously forward, say, non-HTTP
traffic from port A to port B while sending all HTTP traffic to host queues.
But this classifier may not be able to forward to port C of another SmartNIC.

So information about devices and their structures will be required. That is
the role of the DDF.

FF

On 28 October 2017 at 00:01, Bill Fischofer <bill.fischo...@linaro.org>
wrote:

>
>
> On Fri, Oct 27, 2017 at 4:54 PM, Francois Ozog <francois.o...@linaro.org>
> wrote:
>
>>
>> On Fri, 27 Oct 2017 at 23:51, Bill Fischofer <bill.fischo...@linaro.org>
>> wrote:
>>
>>> On Fri, Oct 27, 2017 at 4:24 PM, Francois Ozog <francois.o...@linaro.org
>>> > wrote:
>>>
>>>>
>>>> On Fri, 27 Oct 2017 at 23:05, Honnappa Nagarahalli <
>>>> honnappa.nagaraha...@linaro.org> wrote:
>>>>
>>>>> On 27 October 2017 at 13:35, Bill Fischofer <bill.fischo...@linaro.org
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 27, 2017 at 10:45 AM, Francois Ozog <
>>>>>> francois.o...@linaro.org> wrote:
>>>>>>
>>>>>>>
>>>>>>> On Fri, 27 Oct 2017 at 17:17, Bill Fischofer <
>>>>>>> bill.fischo...@linaro.org> wrote:
>>>>>>>
>>>>>>>> The problem with scanning, especially in a VNF environment, is that
>>>>>>>> (a) the application probably isn't authorized to do that
>>>>>>>>
>>>>>>>
>>>>>>> Nothing prevents scanning what is available in the VM. "Scale up"
>>>>>>> events are triggered when increasing resources (memory, CPUs, Ethernet
>>>>>>> ports...)
>>>>>>>
>>>>>>>> and (b) the application certainly doesn't have real visibility into
>>>>>>>> what the actual device topology is.
>>>>>>>>
>>>>>>>
>>>>>>> Qemu allows selecting whether the VM has visibility of the real NUMA
>>>>>>> topology or a virtual one
>>>>>>>
>>>>>>
>>>>>> Qemu may itself be running under a hypervisor and have limited
>>>>>> visibility as to the real HW configuration. The whole point of
>>>>>> virtualization/containerization is to limit what the VM/application
>>>>>> can see and do to provide management controls over isolation and
>>>>>> portability.
>>>>>>
>>>>>
>>>>> In this case, the 'platform' refers to what is available in the VM.
>>>>> There is no need to know what is available in the underlying hardware.
>>>>>
>>>>
>>>> Yes. That said, there is a corner case with FPGAs. When a VM loads a
>>>> bitstream to an FPGA slice, it is expected that a new PCI device to deal
>>>> with the configured FPGA is dynamically passed to the VM. The VM needs to
>>>> know the underlying FPGA chip to load the proper bitstream (Altera,
>>>> Xilinx, Microsemi...).
>>>>
>>>
>>> In that case either the VM has access to the "unconfigured" FPGA (which
>>> is still capable of identifying itself while unconfigured) and writes this
>>> directly, or else this is done at the host level and the result is made
>>> available to the VM.
>>>
>>
>> Nope. The idea is to have a generic interface with virtio to load
>> bitstreams. This was specified by Altera. One of the concerns was to
>> enhance security and the availability of SR-IOV on existing chips.
>>
>
> There are two separate questions:
>
> - Constructing a bitstream that can be used by a target FPGA
> - Getting that bitstream into the FPGA to configure it.
>
> For the first, do we have a "universal bitstream" that is essentially an
> intermediate representation that gets translated into whatever the specific
> model needs at load time, or must an "absolute bitstream" be used? My
> understanding is that Xilinx was developing strategies for the former.
>
> Virtio is as good a means as any for transporting the bitstream from where
> it resides to the device. Again, however, the trend seems to be for
> applications to load "modules" that cover their specific application area
> that patch into an already operational FPGA, similar to the way
> applications load under an OS. Loading a full FPGA image (which can't be
> shared with anyone else) is sort of the older way of doing things.
>
>
>
>>
>>
>>
>>>
>>>
>>>>
>>>>>>
>>>>>>>
>>>>>>> It only knows what it "needs to know" (as determined by higher-level
>>>>>>>> functions) so there's no point trying to second-guess that. 
>>>>>>>> Applications
>>>>>>>> will be told "use these devices" and that should be our design 
>>>>>>>> assumption
>>>>>>>> in this support.
>>>>>>>>
>>>>>>>
>>>>>>> The VNF manager will know precisely the host's view of things (use
>>>>>>> this vhost-user interface) but won't know how it may end up being
>>>>>>> named in the guest. This is a key problem that has been identified by
>>>>>>> British Telecom.
>>>>>>>
>>>>>>
>>>>>> Sounds like we should follow whatever recommended solution OPNFV
>>>>>> comes up with in this area. In any event, that solution will be outside 
>>>>>> of
>>>>>> ODP and would still result in the application being told what devices to
>>>>>> use with names that are meaningful in the environment it's running in.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> On Fri, Oct 27, 2017 at 10:10 AM, Honnappa Nagarahalli <
>>>>>>>> honnappa.nagaraha...@linaro.org> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 27 October 2017 at 09:50, Bill Fischofer <
>>>>>>>>> bill.fischo...@linaro.org> wrote:
>>>>>>>>>
>>>>>>>>>> ODP 2.0 assumes Linux system services are available so the
>>>>>>>>>> question of how to operate in bare metal environments is a separate 
>>>>>>>>>> one and
>>>>>>>>>> up to those ODP implementations. Again the application will provide a
>>>>>>>>>> sufficiently-qualified device name string to identify which device 
>>>>>>>>>> it wants
>>>>>>>>>> to open in an unambiguous manner. How it does that is again outside 
>>>>>>>>>> the
>>>>>>>>>> scope of ODP, so this isn't something we need to worry about. All
>>>>>>>>>> ODP needs to do is identify which driver needs to be loaded, pass
>>>>>>>>>> it the rest of the device name, and then that driver handles the
>>>>>>>>>> rest.
>>>>>>>>>>
>>>>>>>>> By baremetal, I meant a host with a Linux OS.
>>>>>>>>> I agree, it is applications responsibility to provide the device
>>>>>>>>> string, how it does that is outside the scope of ODP.
>>>>>>>>> We will continue the discussion on the slides. But in the
>>>>>>>>> meantime, I am thinking of a possible design if we want to avoid
>>>>>>>>> completely scanning the platform during initialization. In the
>>>>>>>>> current packet I/O framework, all the function pointers of the
>>>>>>>>> driver are known before the odp_pktio_open API is called. If we
>>>>>>>>> have to stick to this design, the drivers for PCI devices should
>>>>>>>>> have registered their functions with the packet I/O framework
>>>>>>>>> before odp_pktio_open is called.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Fri, Oct 27, 2017 at 2:36 AM, Francois Ozog <
>>>>>>>>>> francois.o...@linaro.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> Well, we do not need to scan the whole platform, because the
>>>>>>>>>>> pktio_open string contains enough information to target the right
>>>>>>>>>>> device. This is almost true, as we need an additional "selector"
>>>>>>>>>>> for the port on multiport NICs that are controlled by a single
>>>>>>>>>>> PCI ID: <enumerator>:<enumerator specific string>:<port> or
>>>>>>>>>>> something like that.
>>>>>>>>>>>
>>>>>>>>>>> Not all ports will typically be handled by ODP. For instance, the
>>>>>>>>>>> management network will most certainly be a native Linux one.
>>>>>>>>>>>
>>>>>>>>>>> The DPAA2 bus may force us into a full scan if we have to go the
>>>>>>>>>>> full driver route (vfio-pci), but this will not be necessary if
>>>>>>>>>>> we can have net-mdev. In that case, the DPAA2 bus can be handled
>>>>>>>>>>> the same way as PCI.
>>>>>>>>>>>
>>>>>>>>>>> Dynamic bus events (hotplug) are a nice-to-have and may be
>>>>>>>>>>> dropped from coding, yet we can talk about it. When it comes to
>>>>>>>>>>> this, it is not about dealing with the bus controllers directly
>>>>>>>>>>> but about tapping into the Linux event framework.
>>>>>>>>>>>
>>>>>>>>>>> I know we can simplify things, and I am very flexible about what
>>>>>>>>>>> we can decide not to do. That said, I have been dealing with
>>>>>>>>>>> operational issues on that very topic since 2006, when I designed
>>>>>>>>>>> my first kernel-based fast packet I/O. So there will be topics
>>>>>>>>>>> where I'll push hard to explain (not impose) why we should go a
>>>>>>>>>>> harder route. This slows things down but I think it is worth it.
>>>>>>>>>>>
>>>>>>>>>>> FF
>>>>>>>>>>>
>>>>>>>>>>> On Fri, 27 Oct 2017 at 06:23, Honnappa Nagarahalli <
>>>>>>>>>>> honnappa.nagaraha...@linaro.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On 26 October 2017 at 16:34, Bill Fischofer <
>>>>>>>>>>>> bill.fischo...@linaro.org> wrote:
>>>>>>>>>>>> > I agree with Maxim. Best to get one or two working drivers and
>>>>>>>>>>>> > see what else is needed. The intent here is not for ODP to
>>>>>>>>>>>> > become another OS, so I'm not sure why we need to concern
>>>>>>>>>>>> > ourselves with bus walking and similar arcana. Linux has
>>>>>>>>>>>> > already long solved this problem. We should leverage what's
>>>>>>>>>>>> > there, not try to reinvent it.
>>>>>>>>>>>> >
>>>>>>>>>>>> > ODP applications are told what I/O interfaces to use, either
>>>>>>>>>>>> > through an application configuration file, command line, or
>>>>>>>>>>>> > other means outside the scope of ODP itself. ODP's job is
>>>>>>>>>>>> > simply to connect applications to these configured I/O
>>>>>>>>>>>> > interfaces when they call odp_pktio_open(). The name used for
>>>>>>>>>>>> > interfaces is simply a string that we've defined to have the
>>>>>>>>>>>> > format:
>>>>>>>>>>>> >
>>>>>>>>>>>> > class:device:other-info-needed-by-driver
>>>>>>>>>>>> >
>>>>>>>>>>>> > We've defined a number of classes already:
>>>>>>>>>>>> >
>>>>>>>>>>>> > - loop
>>>>>>>>>>>> > - pcap
>>>>>>>>>>>> > - ipc
>>>>>>>>>>>> > - dpdk
>>>>>>>>>>>> > - socket
>>>>>>>>>>>> > - socket_mmap
>>>>>>>>>>>> > - tap
>>>>>>>>>>>> > etc.
>>>>>>>>>>>> >
>>>>>>>>>>>> > We simply need to define new classes (e.g., ddf, mdev) and the
>>>>>>>>>>>> > names they need to take to identify a specific device and the
>>>>>>>>>>>> > associated driver to use to operate that device. The driver is
>>>>>>>>>>>> > then loaded if necessary and its open() interface is called.
>>>>>>>>>>>> >
>>>>>>>>>>>>
>>>>>>>>>>>> Coincidentally, internally we were discussing this exactly.
>>>>>>>>>>>>
>>>>>>>>>>>> Why do we need to scan and understand the complete platform
>>>>>>>>>>>> during initialization? I would think that, mostly, ODP will run
>>>>>>>>>>>> on a platform (baremetal, VM, container) where all the devices
>>>>>>>>>>>> are supposed to be used by ODP (i.e. ODP will not run on a
>>>>>>>>>>>> platform that it shares with other applications). So why not
>>>>>>>>>>>> scan and identify the devices/drivers and create the packet I/O
>>>>>>>>>>>> ops during initialization? The packet I/O framework assumes that
>>>>>>>>>>>> the various methods (open, close, send, recv, etc.) of the
>>>>>>>>>>>> device driver are known when the odp_pktio_open API is called.
>>>>>>>>>>>> None of that has to change.
>>>>>>>>>>>>
>>>>>>>>>>>> Another solution I can think of (a tweak to reduce the time
>>>>>>>>>>>> spent scanning, etc.) is that, instead of scanning all the
>>>>>>>>>>>> devices, the DDF initialization function can be provided with
>>>>>>>>>>>> all the ports the user has requested.
>>>>>>>>>>>>
>>>>>>>>>>>> If scanning for the device and identifying the driver have to be
>>>>>>>>>>>> triggered through the odp_pktio_open API, the current packet I/O
>>>>>>>>>>>> framework needs to change.
>>>>>>>>>>>>
>>>>>>>>>>>> > On Thu, Oct 26, 2017 at 3:59 PM, Maxim Uvarov <
>>>>>>>>>>>> maxim.uva...@linaro.org>
>>>>>>>>>>>> > wrote:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Hello Honnappa,
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> I think we also need to take a look from the bottom, i.e.
>>>>>>>>>>>> >> from the exact drivers to implement. Then it will be clearer
>>>>>>>>>>>> >> which interface needs to be created.
>>>>>>>>>>>> >> Do you have a list of drivers which need to be implemented?
>>>>>>>>>>>> >> With PCI drivers I think we are on a good path, but non-PCI
>>>>>>>>>>>> >> drivers are an open question.
>>>>>>>>>>>> >> I think we should not over-complicate ODP itself with huge
>>>>>>>>>>>> >> discovery and enumeration of devices. Let's take a look at
>>>>>>>>>>>> >> what the minimal interface to support devices is.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Best regards,
>>>>>>>>>>>> >> Maxim.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> On 26 October 2017 at 23:35, Honnappa Nagarahalli <
>>>>>>>>>>>> >> honnappa.nagaraha...@linaro.org> wrote:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> > Hi,
>>>>>>>>>>>> >> >    Agree, we have taken 2 hours and the progress has been
>>>>>>>>>>>> >> > slow. But the discussions have been good and helpful to us
>>>>>>>>>>>> >> > at Arm. The goal is to identify the gaps and work items. I
>>>>>>>>>>>> >> > am not sure if it has been helpful to others; please let me
>>>>>>>>>>>> >> > know.
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > To speed this up, I propose a few options below; let me
>>>>>>>>>>>> >> > know your opinion:
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > 1) Have 2 additional (along with the regular ODP 2.0) calls
>>>>>>>>>>>> >> > next week - we can do Tuesday 7:00am and then another on
>>>>>>>>>>>> >> > Thursday 7:00am (Austin, TX time, GMT-6, one hour before
>>>>>>>>>>>> >> > the regular ODP-2.0)
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > 2) Resolve the pending PRs on emails
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > 3) Discuss the DDF slides on email - not sure how effective
>>>>>>>>>>>> >> > it will be
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > Any other solutions?
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > Thank you,
>>>>>>>>>>>> >> > Honnappa
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >
>>>>>>>>>>>> >
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> François-Frédéric Ozog | *Director Linaro Networking Group*
>>>>>>>>>>> T: +33.67221.6485
>>>>>>>>>>> francois.o...@linaro.org | Skype: ffozog
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>> --
>>>>>>> François-Frédéric Ozog | *Director Linaro Networking Group*
>>>>>>> T: +33.67221.6485
>>>>>>> francois.o...@linaro.org | Skype: ffozog
>>>>>>>
>>>>>>>
>>>>>> --
>>>> François-Frédéric Ozog | *Director Linaro Networking Group*
>>>> T: +33.67221.6485
>>>> francois.o...@linaro.org | Skype: ffozog
>>>>
>>>> --
>> François-Frédéric Ozog | *Director Linaro Networking Group*
>> T: +33.67221.6485
>> francois.o...@linaro.org | Skype: ffozog
>>
>>
>


-- 
François-Frédéric Ozog | *Director Linaro Networking Group*
T: +33.67221.6485
francois.o...@linaro.org | Skype: ffozog
