There are some old Epics around this from a year (?) ago.

Ultimately we need a model for keeping build requirements outside of the
production container without invalidating all Dockerfiles, breaking the
developer experience, or moving to a completely different approach. Oh,
and we need to avoid hacks like removing content in higher layers and
then squashing (did anyone ever try that?).

In earlier discussions we had asked for an out-of-tree version of yum
(today that would be dnf) that could be seamlessly mounted into the
context of the container during build, and then the same for other
common tools.
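
Just to sketch what I mean (the path, the image name, and the assumption
of a self-contained dnf tree are all made up): today you can approximate
it at run time with a bind mount; the missing piece is getting the same
thing during 'docker build':

  # hypothetical self-contained dnf tree living on the host,
  # bind-mounted read-only so it never becomes part of the image
  docker run --rm \
      -v /opt/build-tools:/opt/build-tools:ro \
      -e PATH=/opt/build-tools/bin:/usr/sbin:/usr/bin:/sbin:/bin \
      my-runtime-base \
      dnf -y install gcc make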

I am not sure where it came up, but the idea progressed to a model of a
Docker patch that looks at a number of 'sidecar' or 'plugin' containers,
each providing a specific set of tools, mounting them, and adding them
to the path. At the time, Docker kept bringing up 'plugins' as the
solution for this.

So the yum plugin-container would enable the yum command in the
Dockerfile. There are a number of reasons why this should be a separate
tool container and not the host tool.
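
Something like the following is what I picture, using today's volume
containers as a stand-in (all the names here are hypothetical, and again
this only works for 'docker run', not 'docker build' - which is exactly
the gap):

  # 'yum-tools-image' is a hypothetical image that declares
  # VOLUME /opt/yum-tools and ships a self-contained yum there
  docker create --name yum-tools yum-tools-image true

  # the container being provisioned borrows yum via --volumes-from
  # instead of carrying it in its own image
  docker run --rm --volumes-from yum-tools \
      -e PATH=/opt/yum-tools/bin:/usr/sbin:/usr/bin:/sbin:/bin \
      my-runtime-base \
      yum -y install gcc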

I believe we have progressed to a point where this is much more
realistic to achieve, but I think the original outline is still correct:
we need the ability to mount tools in a volume-container-like model.

Regards,

Daniel

On Tuesday, October 20, 2015, Pavel Odvody <[email protected]> wrote:

> On Tue, 2015-10-20 at 12:09 +0200, Nick Coghlan wrote:
> > On 15 October 2015 at 17:56, Pavel Odvody <[email protected]> wrote:
> > > Hello,
> > >
> > > at [0] is a rationale and description of the process, whereas at
> > > [1] is the final code.
> > > The example uses the host's DNF but could easily be extrapolated to
> > > use DNF via another (spc) container (Atomic use-case).
> > >
> > > [0]: https://docs.google.com/document/d/1dsStqcuZTeeu3BgwZwmX2zRsuYJoD9Qt9I3SQA9F7Lc/pub
> > > [1]: https://github.com/shaded-enmity/dnf-container-update
> >
> > Could this be adapted to do container builds in a way where the build
> > container is separate from the container being built? (At the moment,
> > s2i still has the two merged, so you end up with build tools and
> > artifacts in your runtime container by default)
> >
> > Regards,
> > Nick.
> >
>
> Creating a fully-working container chroot can be done with DNF alone [0]:
>
> dnf -y --releasever=21 --nogpg --installroot=/srv/mycontainer \
>   --disablerepo='*' --enablerepo=fedora \
>   install systemd passwd dnf fedora-release vim-minimal
>
> But I guess that is still slightly different from what you're asking.
> Shameless plug of docker-hica [1] - one specific use case I had in mind
> when building HICA was that I wanted to use bleeding-edge LLVM to test
> how good the code coming out of its optimizer is. Here's how I tackle
> it:
>   * create an F22-based image with the latest LLVM compiled from an
>     SVN checkout
>   * cd into the directory with the project I want to compile, and
>     execute:
>       docker run -v $(pwd):$(pwd) -w $(pwd) /build llvm-builder \
>         bash -c './configure && make'
>
> Of course I don't write that ugly Docker command by hand all the time,
> but use the label 'io.hica.bind_pwd'; then it can be launched as:
>
>    docker-hica llvm-builder -- bash -c "./configure && make"
>
> While this may not be the exact answer either, I think we're getting
> close.
>
>
> [0]: http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
> [1]: https://github.com/shaded-enmity/docker-hica
>
> --
> Pavel Odvody <[email protected]>
> Software Engineer - EMEA ENG Developer Experience
> 5EC1 95C1 8E08 5BD9 9BBF 9241 3AFA 3A66 024F F68D
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno
>
>
>

-- 
Daniel Riek <[email protected]>
* Sr. Director Systems Design & Engineering
* Red Hat Inc, Tel. +1-617-863-6776
