On 3 October 2017 at 08:51, Francois Ozog <francois.o...@linaro.org> wrote:
> Hi Maxim,
>
> Modularization is configurable so that the produced output can be either:
> - runtime selection of modules based on whatever discovery mechanism is
> available
> - fixed compilation of a set of modules
>
> As for the discovery, I would favor an approach similar to packetio: give
> each module an opportunity to say whether it is appropriate for the
> environment. This avoids relying on a standardized way to detect the
> platform, which may not even exist:
> - A ThunderX module may check for PCI bus existence and, if present, for
> certain PCI entries, or use any other method (device tree? ACPI?)
> - A DPAA2 module may check for DPAA2 bus existence and for particular
> objects for each board, or use any other method (device tree? ACPI?)
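
If I understand correctly, each module would then carry something like the
sketch below (the odp_module_ops_t type, the probe hook and the Cavium
vendor-ID check are purely illustrative, not the actual DDF interface):

/* Illustrative sketch only: a per-module "does this environment fit me?"
 * hook, in the spirit of how pktios register today. The type and function
 * names below are made up, not the actual DDF interface. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    int (*probe)(void);   /* 1 if usable on this platform, 0 otherwise */
    int (*init)(void);
} odp_module_ops_t;

/* Example probe: walk the PCI bus and look for a Cavium (vendor 0x177d)
 * device. A real module could just as well consult device tree or ACPI. */
static int thunderx_probe(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;
    char path[300], vendor[16];
    int found = 0;

    if (!d)
        return 0;  /* no PCI bus at all */

    while (!found && (e = readdir(d)) != NULL) {
        FILE *f;

        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/vendor", e->d_name);
        f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(vendor, sizeof(vendor), f) &&
            strncmp(vendor, "0x177d", 6) == 0)
            found = 1;
        fclose(f);
    }
    closedir(d);
    return found;
}

static odp_module_ops_t thunderx_module = {
    .name  = "odp-thunderx",
    .probe = thunderx_probe,
    .init  = NULL,  /* real init omitted in this sketch */
};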
>
I was thinking that the platform discovery could be at a higher level:
identify a processor string via /proc or a similar entry, then use that
string to map to a set of libraries for that platform (this mapping
could be done through config files). The libraries in turn should handle
any variations within the platform.
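
Purely as an illustration of that idea (none of this is existing ODP code,
and /etc/odp/platforms.conf is a hypothetical file), the mapping could be as
simple as:

/* Illustration only: read a CPU identification string from /proc/cpuinfo
 * and map it, through a (hypothetical) /etc/odp/platforms.conf file, to
 * the implementation library to use. */
#include <stdio.h>
#include <string.h>

/* Fills 'lib' with e.g. "libodp-thunderx.so" and returns 0 on success. */
static int platform_to_library(char *lib, size_t len)
{
    char line[256], cpu[128] = "";
    FILE *f = fopen("/proc/cpuinfo", "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        /* On arm64 the interesting field is "CPU implementer" (plus
         * "CPU part"); on x86 it would be "model name". */
        if (sscanf(line, "CPU implementer : %127s", cpu) == 1)
            break;
    }
    fclose(f);
    if (cpu[0] == '\0')
        return -1;

    /* Hypothetical config file: one "<cpu-id> <library>" pair per line,
     * e.g. "0x43 libodp-thunderx.so". */
    f = fopen("/etc/odp/platforms.conf", "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        char id[128], name[128];

        if (sscanf(line, "%127s %127s", id, name) == 2 &&
            strcmp(id, cpu) == 0) {
            snprintf(lib, len, "%s", name);
            fclose(f);
            return 0;
        }
    }
    fclose(f);
    return -1;
}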

>
> FF
>
>
> On 3 October 2017 at 15:33, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>
>>
>>
>> On 3 October 2017 at 15:59, Bill Fischofer <bill.fischo...@linaro.org>
>> wrote:
>>
>>> Good summary. The key is that RedHat and others want:
>>>
>>> 1. They build the distribution from the source we provide; we don't get to
>>> provide any binaries
>>> 2. There is a single distribution they will support per ISA (e.g.,
>>> AArch64)
>>>
>>> The modular framework is designed to accommodate these requirements by
>>> allowing a "universal core" to discover the microarchitecture at run time
>>> and select the most appropriate pluggable modules to exploit that
>>> microarchitecture's acceleration capabilities. These modules may be
>>> precompiled along with the core if they are part of the single ODP
>>> distribution, or they may be packaged and distributed separately, as the
>>> supplier of these modules wishes.
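
As a rough sketch of what that run-time loading could look like (dlopen
based; the library path handling and the "odp_module_init" entry point are
only assumed names, not something that exists today):

/* Sketch of the run-time side: the universal core loads the module picked
 * by discovery. The "odp_module_init" symbol name is an assumption. */
#include <dlfcn.h>
#include <stdio.h>

int load_platform_module(const char *lib)
{
    int (*mod_init)(void);
    void *handle = dlopen(lib, RTLD_NOW | RTLD_GLOBAL);

    if (!handle) {
        fprintf(stderr, "cannot load %s: %s\n", lib, dlerror());
        return -1;
    }

    /* Hypothetical well-known entry point each module would export. */
    mod_init = (int (*)(void))dlsym(handle, "odp_module_init");
    if (!mod_init) {
        dlclose(handle);
        return -1;
    }
    return mod_init();  /* module registers its implementations here */
}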
>>>
>>> At the same time, this universal core can be statically linked for a
>>> specific target platform to accommodate the needs of embedded
>>> applications.
>>> In this case the discovery and call indirection go away, so there should
>>> be no more overhead than exists in today's ODP when switching between
>>> --enable-abi-compat=[yes|no]
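
And a sketch of how the indirection can compile away in the statically
linked case (all names here are made up, just to show the mechanism):

/* Sketch only: how the call indirection can vanish in a static build.
 * All names are illustrative, not the actual ODP internals. */
int thunderx_sched_dequeue(void);        /* platform-specific version   */
extern int (*sched_dequeue_fn)(void);    /* installed at discovery time */

#ifdef ODP_STATIC_PLATFORM
/* Build fixed to one platform: direct call, fully inlinable/LTO-friendly. */
#define sched_dequeue()  thunderx_sched_dequeue()
#else
/* Universal build: call through the pointer set up by module discovery. */
#define sched_dequeue()  sched_dequeue_fn()
#endif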
>>>
>>>
>> I have nothing against modularizing components in linux-gen, but can we be
>> more consistent? Either we talk about runtime discovery or we talk about
>> configuration files; one day I hear people talk about discovery, the next
>> day about configuration files.
>> Does anybody understand how that discovery will work? How will ODP in a
>> guest figure out during discovery what to use? Scan PCI devices or some
>> drivers? With drivers that is more clear, as with other ODP components.
>>
>> I think people are more afraid that changes to the current master will
>> bring side effects such as more complexity and lower performance.
>> Maintaining the new work in a separate branch is complex, but I think we
>> need to define requirements for when the code can be merged to master.
>> From my side these requirements could be:
>> - no performance degradation compared to the current code;
>> - feature completeness (which means the module framework works, we can
>> change the scheduler, do discovery on at least some platforms, and
>> some pktio from the module DDF works);
>> - the code is formed as clean patches on top of the current master and
>> passes merge review (I'm not sure how many people review the current
>> cloud-dev branch work);
>> - at least several platforms are merged together with the framework to
>> show its effectiveness.
>>
>> In my understanding if all 4 items are met nobody will have objections.
>>
>> Another option might be to shift priorities a little bit. Instead of
>> changing everything to modules, change one or two things, merge a second
>> platform and make it work. Then send clean patches for review.
>>
>> Any thoughts on that?
>>
>> Thank you,
>> Maxim.
>>
>>
>>
>>> On Tue, Oct 3, 2017 at 5:12 AM, Francois Ozog <francois.o...@linaro.org>
>>> wrote:
>>>
>>> > Thanks Ola and Petri.
>>> >
>>> > Let's talk about use cases first.
>>> >
>>> > The go-to-market for ODP applications may be:
>>> >
>>> >    - A product composed of software and hardware (typically a NEP
>>> >      approach such as Nokia)
>>> >    - Software to be installed by a system administrator of an
>>> >      enterprise
>>> >    - A "service" to be part of a cloud offering (say an AWS image)
>>> >    - A VNF to be deployed as a VM on a wide, a priori unknown,
>>> >      variety of hardware
>>> >    - An image to be deployed on bare metal clouds (packet.net or OVH,
>>> >      for instance) with hardware diversity
>>> >
>>> > As a result, an ODP application may be:
>>> >
>>> >    1. Deployed as a single installed image and instantiated in different
>>> >       virtualized or bare metal clouds
>>> >    2. Live migrated as a VM between two asymmetric compute nodes
>>> >    3. Installed on a specific machine
>>> >    4. Deployed as an image that is to be instantiated on a single
>>> >       hardware platform
>>> >
>>> > Irrespective of commercial Linux distribution acceptance, cases 3 and 4
>>> > can accommodate a static deployment paradigm where the hardware-dependent
>>> > package is selected at installation time. Those cases correspond to a
>>> > system integrator or a network equipment provider that builds a product
>>> > for a specific hardware platform.
>>> >
>>> > On the other hand, cases 1 and 2 need a run-time adaptation of the
>>> > framework. Case 2 may in fact be mostly between platforms of the same
>>> > type but with different PLUGGED NICs and accelerators. While technically
>>> > feasible (yet very complex), I don't expect to deal with live migration
>>> > between Cavium and NXP, or even between Cavium ThunderX and Cavium
>>> > ThunderX/2.
>>> > So case 1 is essentially addressing the needs of ISVs that do NOT sell a
>>> > bundle of software and hardware as a product. You can call it a software
>>> > appliance.
>>> >
>>> > Ola, on the Xorg thing: yes, it says that xorg.conf is now determined at
>>> > runtime... But if you actually try changing the graphics card, or
>>> > supporting CPU-integrated graphics in addition to an external GPU, you
>>> > will face trouble and find a lot of literature about achieving the
>>> > results or recovering from depressing situations...
>>> >
>>> >
>>> > The modular framework allows one ODP implementation to be considered as
>>> > a single module and loaded at runtime to solve cases 1, 3 and 4. Those
>>> > modules may still be deployed as separate packages. The initial idea was
>>> > to split the implementation into more modules, but that may not make much
>>> > sense after giving it more thought. Additional native drivers and the DDF
>>> > itself may be considered separate modules and also distributed as
>>> > separate packages.
>>> > So we would have:
>>> > - odp-core
>>> > - odp-linux: required module that provides AF_PACKET and other packetios;
>>> >   depends on odp-core
>>> > - odp-ddf: optional module that provides dynamic pluggable hardware
>>> >   support; depends on odp-core
>>> > - odp-<NIC>: optional modules for the various native NIC support;
>>> >   depends on odp-ddf
>>> > - odp-<platform>: optional modules to deal with SoC-specific arch
>>> >   (ThunderX, ThunderX/2, DPAA2...); depends on odp-core
>>> >
>>> > The odp-<platform> module is derived from the current native SoC
>>> > implementation but needs to leverage odp-mbuf and the new mempool
>>> > facilities to allow a diversity of packetios to live together in a
>>> > single platform; the rest is entirely proprietary.
>>> >
>>> > The static and dynamic approaches are not mutually exclusive. I highly
>>> > recommend that the static (cases 3 and 4) approach be driven by
>>> > individual members, should they need it, while we collectively solve the
>>> > broader cloud (case 1) deployment.
>>> >
>>> > Cheers
>>> >
>>> > FF
>>> >
>>> > On 3 October 2017 at 10:12, Savolainen, Petri (Nokia - FI/Espoo) <
>>> > petri.savolai...@nokia.com> wrote:
>>> >
>>> > > > -----Original Message-----
>>> > > > From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf
>>> Of
>>> > Ola
>>> > > > Liljedahl
>>> > > > Sent: Friday, September 29, 2017 8:47 PM
>>> > > > To: lng-odp@lists.linaro.org
>>> > > > Subject: [lng-odp] generic core + HW specific drivers
>>> > > >
>>> > > > olli@vubuntu:~$ dpkg --get-selections | grep xorg
>>> > > > xorg install
>>> > > > xorg-docs-core install
>>> > > > xserver-xorg install
>>> > > > xserver-xorg-core install
>>> > > > xserver-xorg-input-all install
>>> > > > xserver-xorg-input-evdev install
>>> > > > xserver-xorg-input-libinput install
>>> > > > xserver-xorg-input-synaptics install
>>> > > > xserver-xorg-input-wacom install
>>> > > > xserver-xorg-video-all install
>>> > > > xserver-xorg-video-amdgpu install
>>> > > > xserver-xorg-video-ati install
>>> > > > xserver-xorg-video-fbdev install
>>> > > > xserver-xorg-video-intel install
>>> > > > xserver-xorg-video-mach64 install
>>> > > > xserver-xorg-video-neomagic install
>>> > > > xserver-xorg-video-nouveau install    <<<Nvidia
>>> > > > xserver-xorg-video-openchrome install
>>> > > > xserver-xorg-video-qxl install
>>> > > > xserver-xorg-video-r128 install
>>> > > > xserver-xorg-video-radeon install    <<< AMD
>>> > > > xserver-xorg-video-savage install
>>> > > > xserver-xorg-video-siliconmotion install
>>> > > > xserver-xorg-video-sisusb install
>>> > > > xserver-xorg-video-tdfx install
>>> > > > xserver-xorg-video-trident install
>>> > > > xserver-xorg-video-vesa install
>>> > > > xserver-xorg-video-vmware install    <<< virtualised GPU?
>>> > > >
>>> > > > So let's rename ODP Cloud to ODP Core.
>>> > > >
>>> > > > -- Ola
>>> > >
>>> > >
>>> > > The DPDK 17.05 packages in Ubuntu artful (https://packages.ubuntu.com/artful/dpdk)
>>> > > include many HW-dependent packages
>>> > >
>>> > > ...
>>> > > librte-pmd-fm10k17.05 (= 17.05.2-0ubuntu1) [amd64, i386]  <<< Intel
>>> > > Red Rock Canyon net driver, provided only for x86
>>> > > librte-pmd-i40e17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-ixgbe17.05 (= 17.05.2-0ubuntu1) [not ppc64el]
>>> > > librte-pmd-kni17.05 (= 17.05.2-0ubuntu1) [not i386]
>>> > > librte-pmd-lio17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-nfp17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-null-crypto17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-null17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-octeontx-ssovf17.05 (= 17.05.2-0ubuntu1)   <<< OcteonTX SSO
>>> > > eventdev driver files as a package
>>> > > librte-pmd-pcap17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-qede17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-ring17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-sfc-efx17.05 (= 17.05.2-0ubuntu1) [amd64]
>>> > > librte-pmd-skeleton-event17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-sw-event17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-tap17.05 (= 17.05.2-0ubuntu1)
>>> > > librte-pmd-thunderx-nicvf17.05 (= 17.05.2-0ubuntu1)  <<< ThunderX net
>>> > > driver files as a package
>>> > > ...
>>> > >
>>> > >
>>> > > So, we should be able to deliver ODP as a set of HW-independent and
>>> > > HW-specific packages (libraries). For example, a minimal install would
>>> > > include only odp, odp-linux and odp-test-suite, but on arm64 (and
>>> > > especially on ThunderX) odp-thunderx would also be installed. The trick
>>> > > would be how to select the odp-thunderx installed files (also headers)
>>> > > as the default when an application is built or run on ThunderX (and not
>>> > > on another arm64).
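
One possible run-time flavour of that selection (purely illustrative; the
library names and the odp_impl_probe hook are assumptions, not something
the packages export today) would be to try implementations from most to
least specific and let each one's probe decide:

/* Purely illustrative: walk installed implementations from most to least
 * specific; each library's (hypothetical) odp_impl_probe hook decides
 * whether it really matches the hardware it is running on. */
#include <dlfcn.h>
#include <stddef.h>

void *open_best_odp(void)
{
    static const char *candidates[] = {
        "libodp-thunderx.so",  /* SoC specific, may not be installed */
        "libodp-dpdk.so",
        "libodp-linux.so",     /* generic fallback */
        NULL
    };

    for (int i = 0; candidates[i]; i++) {
        void *h = dlopen(candidates[i], RTLD_NOW | RTLD_GLOBAL);
        int (*probe)(void);

        if (!h)
            continue;
        probe = (int (*)(void))dlsym(h, "odp_impl_probe");
        if (!probe || probe())
            return h;  /* no probe hook, or probe says it fits */
        dlclose(h);
    }
    return NULL;
}

This does not answer the build-time (headers) part of the question, only the
run-time one.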
>>> > >
>>> > > Package:
>>> > > * odp (only generic folders and documentation, no implementation)
>>> > >   * depends on: odp-linux, odp-test-suite
>>> > >   * recommends: odp-linux, odp-dpdk, odp-thunderx, odp-dpaa2, ...
>>> > > * odp-linux
>>> > >   * depends on: odp, openssl
>>> > >   * recommends: dpdk, netmap, ...
>>> > > * odp-dpdk
>>> > >   * depends on: odp, dpdk
>>> > > * odp-thunderx [arm64]
>>> > >   * depends on: odp, ...
>>> > > * odp-test-suite
>>> > >   * depends on: odp
>>> > >
>>> > >
>>> > > -Petri
>>> > >
>>> > >
>>> > >
>>> >
>>> >
>>> > --
>>> > [image: Linaro] <http://www.linaro.org/>
>>> > François-Frédéric Ozog | *Director Linaro Networking Group*
>>> > T: +33.67221.6485
>>> > francois.o...@linaro.org | Skype: ffozog
>>> >
>>>
>>
>>
>
>
> --
> [image: Linaro] <http://www.linaro.org/>
> François-Frédéric Ozog | *Director Linaro Networking Group*
> T: +33.67221.6485
> francois.o...@linaro.org | Skype: ffozog
