On Wed, 2020-03-11 at 11:31 +0000, Paul Barker wrote:
> On Tue, 10 Mar 2020 23:16:38 +0000
> Richard Purdie <richard.pur...@linuxfoundation.org> wrote:
> 
> > On Mon, 2020-03-09 at 14:21 +0000, Paul Barker wrote:
> > > To fully archive a `gitsm://` entry in SRC_URI we need to also capture
> > > the submodules recursively. If shallow mirror tarballs are found, they
> > > must be temporarily extracted so that the submodules can be determined.
> > > 
> > > Signed-off-by: Paul Barker <pbar...@konsulko.com>
> > > ---
> > >  meta/classes/archiver.bbclass | 31 ++++++++++++++++++++++++++-----
> > >  1 file changed, 26 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/meta/classes/archiver.bbclass b/meta/classes/archiver.bbclass
> > > index 013195df7d..fef7ad4f62 100644
> > > --- a/meta/classes/archiver.bbclass
> > > +++ b/meta/classes/archiver.bbclass
> > > @@ -306,7 +306,7 @@ python do_ar_configured() {
> > >  }
> > > 
> > >  python do_ar_mirror() {
> > > -    import subprocess
> > > +    import shutil, subprocess, tempfile
> > > 
> > >      src_uri = (d.getVar('SRC_URI') or '').split()
> > >      if len(src_uri) == 0:
> > > @@ -337,12 +337,10 @@ python do_ar_mirror() {
> > > 
> > >      bb.utils.mkdirhier(destdir)
> > > 
> > > -    fetcher = bb.fetch2.Fetch(src_uri, d)
> > > -
> > > -    for url in fetcher.urls:
> > > +    def archive_url(fetcher, url):
> > >          if is_excluded(url):
> > >              bb.note('Skipping excluded url: %s' % (url))
> > > -            continue
> > > +            return
> > > 
> > >          bb.note('Archiving url: %s' % (url))
> > >          ud = fetcher.ud[url]
> > > @@ -376,6 +374,29 @@ python do_ar_mirror() {
> > >              bb.note('Copying source mirror')
> > >              cmd = 'cp -fpPRH %s %s' % (localpath, destdir)
> > >              subprocess.check_call(cmd, shell=True)
> > > +
> > > +        if url.startswith('gitsm://'):
> > > +            def archive_submodule(ud, url, module, modpath, workdir, d):
> > > +                url += ";bareclone=1;nobranch=1"
> > > +                newfetch = bb.fetch2.Fetch([url], d, cache=False)
> > > +
> > > +                for url in newfetch.urls:
> > > +                    archive_url(newfetch, url)
> > > +
> > > +            # If we're using a shallow mirror tarball it needs to be unpacked
> > > +            # temporarily so that we can examine the .gitmodules file
> > > +            if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
> > > +                tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
> > > +                subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
> > > +                ud.method.process_submodules(ud, tmpdir, archive_submodule, d)
> > > +                shutil.rmtree(tmpdir)
> > > +            else:
> > > +                ud.method.process_submodules(ud, ud.clonedir, archive_submodule, d)
> > > +
> > > +    fetcher = bb.fetch2.Fetch(src_uri, d, cache=False)
> > > +
> > > +    for url in fetcher.urls:
> > > +        archive_url(fetcher, url)
> > >  }
> > 
> > I can't help feeling that this is basically a sign the fetcher is
> > broken.
> > 
> > What should really happen here is that there should be a method in the
> > fetcher we call into.
> > 
> > Instead we're teaching code how to hack around the fetcher. Would it be
> > possible to add some API we could call into here and maintain the
> > integrity of the fetcher API?
> 
> This is gitsm-specific so the process_submodules method is probably the
> correct fetcher API. We need to call back into an archiver-supplied function
> for each submodule that is found.
> 
> I guess process_submodules could do the temporary unpacking of the shallow
> archive and then this code would be simplified. Is that what you had in mind?
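For concreteness, the simplification Paul is asking about might look roughly
like the sketch below. This is not existing bitbake code: the helper name
submodules_from_local_mirror is invented for illustration, and only the APIs
already used by the patch above (GitSM.process_submodules(ud, workdir,
function, d), ud.shallow, ud.fullshallow, ud.clonedir, ud.method.need_update())
are real.

import os
import shutil
import subprocess
import tempfile

def submodules_from_local_mirror(ud, function, d):
    """Run 'function' once per submodule of 'ud', whichever local form exists.

    Sketch only: if a shallow mirror tarball is all we have, unpack it to a
    throw-away directory just long enough to read .gitmodules; otherwise use
    the bare clone directly. This is the logic the archiver patch currently
    open-codes.
    """
    if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
        tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
        try:
            subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
            ud.method.process_submodules(ud, tmpdir, function, d)
        finally:
            shutil.rmtree(tmpdir)
    else:
        ud.method.process_submodules(ud, ud.clonedir, function, d)

With something like that living next to (or folded into) process_submodules,
the gitsm branch in do_ar_mirror() would reduce to a single call passing
archive_submodule.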
Nearly. The "operation" here is similar to "download" or "unpack" but amounts
to "make a mirror copy". Should the fetcher have such a method, which would
then keep the fetcher implementation details in the fetchers themselves?

Cheers,

Richard
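A rough sketch of the kind of fetcher-level API being suggested is below.
None of this exists in bitbake today: FetchMethod.mirror(), GitSM.mirror()
and the stand-in classes are hypothetical names used purely to illustrate
where a "make a mirror copy" operation could live.

import subprocess

class FetchMethod:
    """Stand-in for bb.fetch2.FetchMethod (hypothetical addition only)."""

    def mirror(self, ud, destdir, d):
        # Default: copy the already-fetched artefact into the mirror
        # directory, preserving permissions and symlinks (same cp invocation
        # the archiver uses today).
        subprocess.check_call("cp -fpPRH %s %s" % (ud.localpath, destdir), shell=True)


class GitSM(FetchMethod):
    """Stand-in for bb.fetch2.gitsm.GitSM."""

    def mirror(self, ud, destdir, d):
        # Mirror the top-level repository first, then recurse into
        # submodules. The shallow-tarball handling from the patch above
        # would live here, inside the fetcher, not in archiver.bbclass.
        super().mirror(ud, destdir, d)

        def mirror_submodule(ud, url, module, modpath, workdir, d):
            # Construct a new Fetch for the submodule URL and mirror it too.
            ...

        self.process_submodules(ud, ud.clonedir, mirror_submodule, d)

With that in place, do_ar_mirror() would only need to build the Fetch and call
something like ud.method.mirror(ud, destdir, d) per URL, with no gitsm
knowledge of its own.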