Hi Richard,

Thank you for your reply; you gave me some very interesting points to
think about. I'll reply in reverse order of importance.

On 2022-09-15 14:16, Richard Purdie wrote:

> For the source issues above, it basically comes down to how much
> "pain" we want to push onto all users for the sake of adding in this
> data. Unfortunately it is data which many won't need or use, and
> different legal departments do have different requirements.

We didn't paint the overall picture sufficiently well, so our
requirements may come across as coming from a particularly pedantic
legal department; my fault :)

Oniro is not "yet another commercial Yocto project", and we are not a
legal department (even if we are experienced FLOSS lawyers and
auditors, the most prominent of whom is Carlo Piana -- cc'ed -- former
general counsel of the FSFE and member of the OSI Board).

Our rather ambitious goal is not limited to Oniro: it consists of doing
compliance the open source way, both setting an example and providing
guidance and material for others to benefit from our effort. Our work
will therefore be shared (and possibly improved by others) not only
with Oniro-based projects but also with any Yocto project. Among other
things, the most relevant piece of work we want to share is **fully
reviewed license information** and other legal metadata about a whole
bunch of open source components commonly used in Yocto projects.

To do that in a **scalable and fully automated way**, we need Yocto to
collect some information that is currently thrown away (or simply not
collected) at build time.

Oniro Project Leader, Davide Ricci -- cc'ed -- strongly encouraged us
to seek feedback from you in order to find the best way to do it.

Maybe organizing a call would be more convenient than discussing
background and requirements here, if you (and others) are available.


> Experience with archiver.bbclass shows that multiple codepaths doing
> these things are a nightmare to keep working, particularly for corner
> cases which do interesting things with the code (externalsrc, gcc
> shared workdir, the kernel and more).
>
> I had a look at this and was a bit puzzled by some of it.
>
> I can see the issues you'd have if you want to separate the unpatched
> source from the patches and know which files had patches applied, as
> that is hard to track. There would be significant overhead in trying
> to process and store that information in the unpack/patch steps, and
> the archiver class does some of that already. It is messy, hard and
> doesn't perform well. I'm reluctant to force everyone to do it as a
> result, but that can also result in multiple code paths and when you
> have that, the result is that one breaks :(.
>
> I also can see the issue with multiple sources in SRC_URI, although
> you should be able to map those back if you assume subtrees are
> "owned" by given SRC_URI entries. I suspect there may be an SPDX
> format limit in documenting that piece?

I'm replying in reverse order:

- there is an SPDX format limit, but it is by design: an SPDX package
  entity is a single software distribution unit, so it may have only
  one downloadLocation; if you have more than one downloadLocation, you
  must have more than one SPDX package, according to the SPDX spec (see
  the example after this list);

- I understand that my solution is a bit hacky; but IMHO any other
  *post-mortem* solution would be far hackier; the real solution would
  be collecting the required information directly in do_fetch and
  do_unpack;

- I also understand that we should reduce the pain, otherwise nobody
  would use our solution; the simplest and cleanest way I can think of
  is collecting just package (in the SPDX sense) files' relative paths
  and checksums at every stage (fetch, unpack, patch, package) -- see
  the sketch after this list -- and leaving data processing (i.e.
  mapping upstream source packages -> recipe's WORKDIR package -> debug
  source package -> binary packages -> binary image) to a separate
  tool, which might use (just a thought) a graph database to process
  things more efficiently.
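
To make the first point concrete, here is a minimal sketch (as Python
data, with made-up names and URLs) of how a recipe with two SRC_URI
entries would have to be modeled as two SPDX packages, each with its
own downloadLocation:

    # Hypothetical example: one recipe, two SRC_URI entries.
    # SPDX 2.x allows exactly one downloadLocation per package,
    # so each upstream source becomes its own SPDX package.
    packages = [
        {
            "SPDXID": "SPDXRef-foo",
            "name": "foo",
            "downloadLocation": "https://example.com/foo-1.0.tar.gz",
        },
        {
            "SPDXID": "SPDXRef-foo-extras",
            "name": "foo-extras",
            "downloadLocation": "git://example.com/foo-extras.git",
        },
    ]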
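
And for the third point, this is roughly the kind of per-stage
collection I have in mind (a sketch only: snapshot() is a hypothetical
helper, not an existing OE API; it records the relative path and sha256
of every file under a directory tree, so that a later tool can
correlate the snapshots across stages):

    import hashlib
    import json
    import os

    def snapshot(tree, outfile):
        """Record relative path and sha256 of every file under 'tree'."""
        entries = {}
        for root, _, files in os.walk(tree):
            for name in files:
                path = os.path.join(root, name)
                if os.path.islink(path):
                    continue  # symlinks have no content of their own
                with open(path, "rb") as f:
                    entries[os.path.relpath(path, tree)] = \
                        hashlib.sha256(f.read()).hexdigest()
        with open(outfile, "w") as f:
            json.dump(entries, f, indent=2)

    # Called (hypothetically) at the end of do_unpack, do_patch and
    # do_package, each time writing a stage-specific file list.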



> Where I became puzzled is where you say "Information about debug
> sources for each actual binary file is then taken from
> tmp/pkgdata/<machine>/extended/*.json.zstd". This is the data we
> added and use for the spdx class so you shouldn't need to reinvent
> that piece. It should be the exact same data the spdx class uses.


You're right, but in the context of a POC it was easier to extract the
data directly from the json files than from the SPDX documents :) It's
just a POC to show that the required information can be retrieved
somehow; implementation details do not matter.
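
For reference, this is more or less all the POC does with those files
(a sketch assuming the zstandard Python module; I deliberately don't
hardcode any JSON keys here, since whatever do_package emitted should
be inspected first):

    import glob
    import json
    import zstandard

    # Decompress and load every extended pkgdata file, then dump the
    # top-level keys to see what do_package actually recorded.
    for path in glob.glob("tmp/pkgdata/*/extended/*.json.zstd"):
        with open(path, "rb") as f:
            reader = zstandard.ZstdDecompressor().stream_reader(f)
            pkgdata = json.load(reader)
        print(path, sorted(pkgdata.keys()))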

> I was also puzzled about the difference between rpm and the other
> package backends. The exact same files are packaged by all the
> package backends so the checksums from do_package should be fine.


Here I may be missing some piece of information. I looked at the files
in tmp/pkgdata but I couldn't find package file checksums anywhere:
that is why I parsed rpm packages. But if such checksums were already
available somewhere in tmp/pkgdata, it wouldn't be necessary to parse
rpm packages at all... Could you point me to what I'm (maybe) missing
here? Thanks!
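
For context, parsing the rpm packages boils down to something like
this (a sketch of what the POC does, shelling out to the rpm CLI;
"rpm -qp --dump" prints one line per packaged file, with path, size,
mtime and file digest among the fields):

    import subprocess

    def rpm_file_checksums(rpm_path):
        """Map each file packaged in an rpm to its recorded digest."""
        out = subprocess.check_output(
            ["rpm", "-qp", "--dump", rpm_path], text=True)
        checksums = {}
        for line in out.splitlines():
            # --dump fields: path size mtime digest mode owner group ...
            # (naive split: paths with spaces would need more care)
            fields = line.split()
            checksums[fields[0]] = fields[3]
        return checksums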

In any case, thank you so much for all your insights, they were
super-useful!

Cheers,

Alberto