Re: s6 in production on Ubuntu - yeah!

2020-11-04 Thread Dreamcat4
Yep. I have been using s6-overlay inside docker containers, on top of
the ubuntu 20.04 base image. It works well enough...

Perhaps you could speak to some Canonical people about your usage of
s6? Maybe it would be helpful to them in some broader sense - like
becoming a little bit less reliant upon systemd?

On Wed, Nov 4, 2020 at 11:01 AM Oliver Schad
 wrote:
>
> Hi everybody,
>
> we're proud to announce that we have s6 in production in the context of
> platform as a service for our customers.
>
> We started with the rollout on our container hypervisors and will
> extend that to all of our LXC containers.
>
> We use Ubuntu 16 for now and will migrate that to Ubuntu 20. The reasons
> we use that distro are:
>
> - good package support from community
> - canonical maintains LXC/LXD
> - co-maintainers of ZFS
>
> So these are important reasons for us to stay on Ubuntu.
>
> Thanks Laurent for supporting us in developing the integration of s6 in
> ubuntu 16 and 20. We can definitely recommend engaging Laurent for
> integration questions.
>
> The reasons to migrate away from systemd are well known but to recap
> that in short:
>
> - buggy
> - bad support from development team (go-away mentality)
> - over complex in every dimension
> - really limited because of the DSL/config approach, yet a really big
>   config language at the same time - more than 200 config statements -
>   have fun learning them all
> - tries to enforce itself everywhere as dependency
> - linux only
> - tightly bundled to kernel interfaces, which might be dangerous in
>   container business (container's systemd might depend on specific
>   kernel interfaces of the host)
> - cgroup massacre (mi-mi-mi that is my cgroup and nobody else is
>   allowed to use it)
>
> And I guess some more. The pain we had with systemd, journald and so on
> was too much.
>
> Best Regards
> Oli
>
> --
> Automatic-Server AG •
> Oliver Schad
> Geschäftsführer
> Turnerstrasse 2
> 9000 St. Gallen | Schweiz
>
> www.automatic-server.com | oliver.sc...@automatic-server.com
> Tel: +41 71 511 31 11 | Mobile: +41 76 330 03 47


Re: [s6] debian packaging

2015-08-11 Thread Dreamcat4
Will it work on ubuntu?

I ask b/c I have built packages the other way around: on 14.04 trusty,
which it then turned out also worked on Jessie (8.x).

On Tue, Aug 11, 2015 at 7:40 PM, Claes Wallin (韋嘉誠) <
skar...@clacke.user.lysator.liu.se> wrote:

> [repost with correct sender]
>
> On 11-Aug-2015 7:51 pm, "Buck Evan"  wrote:
> > On Mon, Aug 10, 2015 at 11:38 AM, Laurent Bercot <
> > ska-supervis...@skarnet.org> wrote:
>
> > >  That's perfectly reasonable.
> > >  Is this Debian policy that /lib/*.so is in the -dev while
> > > /lib/*.so.* is in the runtime package ?
> >
> > Yes. It's quite explicit.
>
> [ . . . ]
>
> > > If you're developing
> > > and want to link against the .so, you need the shared object
> > > at compile time anyway, you can't do with just the .so symlink
> > > (or can you ?) - so, what's the rationale for separating just
> > > that link instead of having all the .so stuff in the runtime
> > > package ?
> >
> > As you say, you want the .so if you're developing.
> > If you're "just a user" though, none of your binaries will link directly
> to
> > that symlink.
> > That's the rule of thumb for moving things to the -dev package.
> > Possibly the bit you're missing is that x-dev almost always depends on x.
>
> Also, putting the .so in -dev means that libfoo2 and libfoo3 can coexist,
> even though libfoo2-dev and libfoo3-dev can't, because they both provide
> /usr/lib/libfoo.so.
>
> --
>/c
>


Re: Arch Linux derivative using s6?

2015-04-19 Thread Dreamcat4
On Sun, Apr 19, 2015 at 3:17 PM, Laurent Bercot  wrote:
>
>  If I ever have to install a distribution again, I'll probably go
> with Alpine, unless something even better comes along.


I also would use Alpine, except they just don't comprehensively support
up-to-date packages. ATM, no other linux distro touches ubuntu + PPAs for
package support: for nearly anything you want to do, a package is pretty
much guaranteed to exist. Therefore, sorry to say, I use ubuntu debootstrap
minbase for all server stuff, and ubuntu desktop for a GUI environment.

There's nothing wrong with extolling the virtues of these respective
distros. However I am a practical person, and when packages are lacking it
is not practical / time-efficient to fill in so many missing gaps.

I do agree with the sentiments here and hope that package support may
continue to improve for alpine linux in particular. But I myself simply
cannot justify changing over until then. Being realistic, the timescale is
probably more towards several years than several months.

You can disagree with me on the respective levels of package support. But
I'll probably just laugh at you all.
Kind Regards


-- 
>  Laurent
>


Re: Thoughts on S6 and Docker

2015-03-23 Thread Dreamcat4
On Mon, Mar 23, 2015 at 10:27 AM, Aristomenis Pikeas  wrote:
> 3) I understand the reasons behind breaking S6 up into a large set of simple
> utilities, but the current state of affairs is pretty extreme. ls /usr/bin
> | grep s6-* | wc -l reports 55 different utilities. Have you considered
> breaking s6 up into multiple packages, for

I agree with this point. However it seems most of those little
programs don't hurt if they are left unused. It means fewer problems
to create just one full and complete release of 's6-overlay' than to
also have to maintain a 'lite' version with a smaller minimum set of
tools.

> 5) For major distributions and usage mediums (Ubuntu, docker, OS X, and 
> perhaps Alpine/busybox), the docs should provide clearer steps for 
> installation and usage.

Nobody has really had time to document 's6-overlay' yet. But you
can try it by adding a few extra lines to your dockerfile. And please
help start testing it:

# Install s6-overlay
ADD https://github.com/just-containers/s6-overlay-builder/releases/download/v1.8.4/s6-overlay-linux-amd64.tar.gz /tmp/
RUN tar zxf /tmp/s6-overlay-linux-amd64.tar.gz -C / && rm /tmp/s6-overlay-linux-amd64.tar.gz
ENTRYPOINT ["/init"]

> 9) This has already been discussed a bit, but I'd like to add another vote 
> for allowing s6-svscan to run a command. As others have

Gorka and Laurent have implemented that feature especially for docker
in the 's6-overlay' project. It works well. Please try it.


Re: process supervisor - considerations for docker

2015-03-09 Thread Dreamcat4
On Sun, Mar 1, 2015 at 6:54 PM, John Regan  wrote:
> Quick FYI, busybox tar will extract a tar.gz, you just need to add the z

Ah right. It turns out that the default official busybox image
("latest") does not have the z option yet, because it is too old a
version of busybox. I have asked nicely on their dockerhub page to
update it.

> flag - tar xvzf /path/to/file.tar.gz
>
>
> On March 1, 2015 11:59:33 AM CST, Dreamcat4  wrote:
>>
>> On Sun, Mar 1, 2015 at 5:27 PM, John Regan  wrote:
>>>
>>>  Hi all -
>>>
>>>  Dreamcat4,
>>>  I think I got muddled up a few emails ago and didn't realize what you
>>>  were getting at. An easy-to-use, "extract this and now you're cooking
>>>  with gas" type tarball that works for any distro is an awesome idea!
>>>  My apologies for misunderstanding your idea.
>>>
>>>  The one "con" I foresee (if you can really call it that) you can't
>>>  list just a tarball on the Docker Hub. Would it be worth coming up
>>>  with a sort of "flagship image" that makes use of this? I guess we
>>
>>
>> Yeah I see the value in that. Good idea. In the documentation for such
>> example / showcase image, it can include the instruction for general
>> ways (any image).
>>
>>
>> ===
>> I've started playing
>> around with gorka's new tarball now. Seems that
>> that ADD isn't decompressing the tarball (when fetched from remote
>> URL). Which is pretty annoying. So ADD is currently 'broken' for want
>> it to do.
>>
>> Official Docker people will eventually improve ADD directive to take
>> optional arguments --flagX --flagY etc to let people control the
>> precise behaviour of ADD. Here is an open issue on docker, can track
>> it here:
>>
>> https://github.com/docker/docker/issues/3050
>> ===
>>
>>
>> Until then, these commands will work for busybox image:
>>
>> FROM busybox
>>
>> ADD
>> https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz
>> /s6-overlay.tar.gz
>> RUN gunzip -c
>> /s6-overlay.tar.gz | tar -xvf - -C / && rm /s6-overlay.tar.gz
>>
>> COPY test.sh /test.sh
>>
>> ENTRYPOINT ["/init"]
>> CMD ["/test.sh"]
>>
>> ^^ Where busybox has a very minimal 'tar' program included. Hence the
>> slightly awkward way of doing things.
>>
>>
>>>  could just start using it in our own images? In the end, it's not a
>>>  big deal - just thought it'd be worth figuring out how to maximize
>>>  exposure.
>>>
>>>  Laurent, Gorka, and Dreamcat4: this is awesome. :)
>>>
>>>  -John
>>>
>>>  On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:
>>>>
>>>>  Hi guys,
>>>>
>>>>  I haven't had much time this week
>>>> due to work and now I am overwhelmed!
>>>>
>>>>  Yesterday, as Dreamcat4 has noticed, I've been working in a version
>>>> that
>>>>  gathers all the ideas covered here.
>>>>
>>>>  All,
>>>>  * I already converted bash init scripts into execline and make use of
>>>>  s6-utils instead of 'linux' ones to facilitate usage in another base
>>>> images.
>>>>  * It's important to have just _one_ codebase, this would help focusing
>>>>  improvements and problems in one place. I extracted all the elements I
>>>>  thought would be useful in a container environment. So, if you all feel
>>>>  comfortable we could start discussing bugs, improvements or whatever
>>>> there.
>>>>  I called this project/repo container-s6-overlay-builder (
>>>>  https://github.com/glerchundi/container-s6-overlay-builder).
>>>>  * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a
>>>> matter
>>>>  of extracting a tarball. container-base is using
>>>> it already:
>>>>
>>>> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
>>>>  * To sum up, we all agree with this. It is already implemented in the
>>>>  overlay:
>>>>- Case #1: Common case, start supervision tree up.
>>>>  docker run image
>>>>- Case #2: Would start a shell without the supervision tree running
>>>>  docker run -ti --entrypoint="" base

Re: process supervisor - considerations for docker

2015-03-02 Thread Dreamcat4
On Sun, Mar 1, 2015 at 9:13 AM, Gorka Lertxundi  wrote:
> Hi guys,
>
> I haven't had much time this week due to work and now I am overwhelmed!
>
> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
> gathers all the ideas covered here.
>
> All,
> * I already converted bash init scripts into execline and make use of
> s6-utils instead of 'linux' ones to facilitate usage in another base images.
> * It's important to have just _one_ codebase, this would help focusing
> improvements and problems in one place. I extracted all the elements I
> thought would be useful in a container environment. So, if you all feel
> comfortable we could start discussing bugs, improvements or whatever there.
> I called this project/repo container-s6-overlay-builder (
> https://github.com/glerchundi/container-s6-overlay-builder).
> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
> of extracting a tarball. container-base is using it already:
> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
> * To sum up, we all agree with this. It is already implemented in the
> overlay:
>   - Case #1: Common case, start supervision tree up.
> docker run image
>   - Case #2: Would start a shell without the supervision tree running
> docker run -ti --entrypoint="" base /bin/sh
>   - Case #3: Would start a shell with the supervision tree up.
> docker run -ti image /bin/sh
>
> Dreamcat4,
> * Having a tarball with all the needed base elements to get s6 working is
> the way to go!
>
> Laurent,
> * Having a github mirror repo is gonna help spreading the word!
> * Although three init phases are working now I need your help with those
> scripts, probably a lot of mistakes were done...

Gorka,
Thank you for doing so much of this. I have been testing it since
yesterday. It is pretty good. Especially for:

* The passing of CMD arguments - works well.
* Receiving a TERM - orphan reaping (docker stop) - works well.
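
(For anyone who wants to repeat those two checks, the steps are roughly
as follows - the image name is only a placeholder:

docker run -d --name s6test gorka-s6-overlay-image /bin/sh -c 'sleep 600'
docker stop s6test

The first line exercises passing a CMD through /init; the second should
shut the container down cleanly, with any children reaped.)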


For the rough edges: each item is raised as a separate Github issue on
your repo, Gorka. Laurent, please check them also if you can. They are
here vv

Open issues on Gorka's s6-overlay repo:

https://github.com/glerchundi/container-s6-overlay-builder/issues

Subscribe to an issue to track it.
Many thanks.

>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
> * I've chosen /etc/s6/.s6-init as the destination folder for the init
> scripts, would you like me to change?

^^ Laurent

>
> John,
> About github organization, I think this is not the place to discuss about
> it. I really like the idea and I'm open to discuss it but first things
> first, lets focus on finishing this first approach! Still, simple-d and
> micro-d are good names but are tightly coupled to docker *-d, and rocket
> being the relatively the new buzzword (kubernetes is going to support it)
> maybe we need to reconsider them.
>
> rgds,
>
> 2015-02-28 18:57 GMT+01:00 John Regan :
>
>> Sweet. And yeah, as Laurent mentioned in the other email, it's the
>> weekend. Setting dates for this kind of stuff is hard to do, I just
>> work on this in my free time. It's done when it's done.
>>
>> I also agree that s6 is *not* a docker-specific tool, nor should it
>> be. I'm thankful that Laurent's willing to listen to any ideas we
>> might have re: s6 development, but like I said, the goal is *not*
>> "make s6 a docker-specific tool"
>>
>> There's still a few high-level decisions to be made, too, before we
>> really start any work:
>>
>> 1. Goals:
>>   * Are we going to make a series of s6 baseimages (like one
>>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>>   * Should we pick a base distro and focus on creating a series of
>>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>>   NodeJS image, etc)?
>>   * Or should be focus on creating a series of service-oriented
>>   images, ie, an image for running GitLab, an image for running an
>>   XMPP server, etc?
>>
>> Figuring out the overall, high-level focus early will be really
>> helpful in the long run.
>>
>> Options 2 and 3 are somewhat related - you can't really get to 3
>> (create service-oriented images) without getting through 2 (make
>> platform-oriented images) anyway.
>>
>> It's not like a

Re: process supervisor - considerations for docker

2015-03-01 Thread Dreamcat4
On Sun, Mar 1, 2015 at 5:27 PM, John Regan  wrote:
> Hi all -
>
> Dreamcat4,
> I think I got muddled up a few emails ago and didn't realize what you
> were getting at. An easy-to-use, "extract this and now you're cooking
> with gas" type tarball that works for any distro is an awesome idea!
> My apologies for misunderstanding your idea.
>
> The one "con" I foresee (if you can really call it that) you can't
> list just a tarball on the Docker Hub. Would it be worth coming up
> with a sort of "flagship image" that makes use of this? I guess we

Yeah I see the value in that. Good idea. In the documentation for such
an example / showcase image, we can include the instructions for the
general method (which works with any image).


===
I've started playing around with Gorka's new tarball now. It seems
that ADD isn't decompressing the tarball when it is fetched from a
remote URL, which is pretty annoying. So ADD is currently 'broken' for
what we want it to do.

The official Docker people will eventually improve the ADD directive to
take optional arguments --flagX --flagY etc to let people control the
precise behaviour of ADD. There is an open issue on docker; you can track
it here:

https://github.com/docker/docker/issues/3050
===


Until then, these commands will work for the busybox image:

FROM busybox

ADD https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz /s6-overlay.tar.gz
RUN gunzip -c /s6-overlay.tar.gz | tar -xvf - -C / && rm /s6-overlay.tar.gz

COPY test.sh /test.sh

ENTRYPOINT ["/init"]
CMD ["/test.sh"]

^^ Where busybox has a very minimal 'tar' program included. Hence the
slightly awkward way of doing things.
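
For completeness, test.sh above is just whatever command you want run
under supervision; a trivial placeholder would be something like:

#!/bin/sh
# placeholder workload, purely for testing the supervision setup
echo "hello from under s6"
exec sleep 60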


> could just start using it in our own images? In the end, it's not a
> big deal - just thought it'd be worth figuring out how to maximize
> exposure.
>
> Laurent, Gorka, and Dreamcat4: this is awesome. :)
>
> -John
>
> On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:
>> Hi guys,
>>
>> I haven't had much time this week due to work and now I am overwhelmed!
>>
>> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
>> gathers all the ideas covered here.
>>
>> All,
>> * I already converted bash init scripts into execline and make use of
>> s6-utils instead of 'linux' ones to facilitate usage in another base images.
>> * It's important to have just _one_ codebase, this would help focusing
>> improvements and problems in one place. I extracted all the elements I
>> thought would be useful in a container environment. So, if you all feel
>> comfortable we could start discussing bugs, improvements or whatever there.
>> I called this project/repo container-s6-overlay-builder (
>> https://github.com/glerchundi/container-s6-overlay-builder).
>> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
>> of extracting a tarball. container-base is using it already:
>> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
>> * To sum up, we all agree with this. It is already implemented in the
>> overlay:
>>   - Case #1: Common case, start supervision tree up.
>> docker run image
>>   - Case #2: Would start a shell without the supervision tree running
>> docker run -ti --entrypoint="" base /bin/sh
>>   - Case #3: Would start a shell with the supervision tree up.
>> docker run -ti image /bin/sh
>>
>> Dreamcat4,
>> * Having a tarball with all the needed base elements to get s6 working is
>> the way to go!
>>
>> Laurent,
>> * Having a github mirror repo is gonna help spreading the word!
>> * Although three init phases are working now I need your help with those
>> scripts, probably a lot of mistakes were done...
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
>> * I've chosen /etc/s6/.s6-init as the destination folder for the init
>> scripts, would you like me to change?
>>
>> John,
>> About github organization, I think this is not the place to discuss about
>> it. I really like the idea and I'm open to discuss it but first things
>> first, lets focus on finishing this first approach! Still, simple-d and
>> micro-d are good names but are tightly coupled to docker *-d, and rocket
>>

Re: process supervisor - considerations for docker

2015-02-28 Thread Dreamcat4
Right,
Glad that others (maybe Laurent and Gorka) want to do this - guys who
all have more experience than me with the 's6' process supervisor. I
was just offering to do it in case, so as not to impose extra work on
anybody who is already busy.

Gorka is now working on some changes today. I don't quite know what
those are. It seems like we should check his repos again after he
finishes whatever he's currently doing ATM. Fabulous.


In respect of recent replies:

Laurent,
Of course I don't mind any suitable changes being upstreamed (those
that can be upstreamed). And things which are not docker-specific
should not be presented that way (preferably all of them).

Provided that the functionality is added and can be used with docker
too, amongst other tools. Docker is not the only tool which uses linux
namespaces for containerization, and that is the underlying kernel
feature (available in all modern linux kernels). I missed saying "try
to upstream everything". Sorry - I agree it is a good idea to do that.

The major requirement I am interested to see fulfilled (one way or
another) is a new single-tarball build product / build target that
combines the 2 or more separate s6 packages into a single tarball, so
that the docker ADD mechanism can work.

That unified tarball does not need to be named 's6-docker' or anything
that is docker-specific. Nor does it need to be officially sanctioned /
provided by upstream…

Upstream would be better of course, but we don't wish to impose our
ways upon you!

We could also debate what exactly should or should not be in that
single tarball. Or just not bother arguing specifics, since there is
always the option to make a 'light' and a 'full' variant: all the
optional tools included in the 'full' version, and just the minimum in
the 'light'. The 'light' appears to be 2 of your current official
tarballs.


John,
Sorry, I don't see the point (anymore) of writing s6-specific base
images when there is no longer any pressing technical reason to do so.
The drawback of that approach is that by publishing some s6 base
images, we will always end up neglecting other ones and leaving them
unsupported. Whereas ADD (from a single tarball source) supports all
base images in one swoop without any effort whatsoever on our part.
And that is a big plus.

Of course, once the universal ADD mechanism is working, it becomes a
completely trivial matter to roll new base images that include s6, for
whichever distros you wish, with just a single extra line in the
Dockerfile. But then it hardly seems worth telling people to use them
rather than ADD, which other users can just do directly themselves?

I just get the general impression that most Docker citizens strongly
prefer to use official base images whenever possible. Because it is
what they know and what they trust. Even if a user writes their own
base, they will almost always be FROM: ubuntu or FROM: debian (or
'alpine' now, whichever one they like the most).

Meaning that: it is harder to get the general docker population to
trust and switch over to some new 's6-*' base image coming from
somewhere which is effectively deemed as being a 3rd party repo.

If we instead tell them to use the ADD (tarball URL) method… they
will trust it more, because they all still get to keep their preferred
"FROM: ubuntu" line at the top as always…

At least, that's my own thought process / reasoning behind this
opinion, and where I am coming from with the whole 'maybe we should
not do base images anymore' idea.




On Sat, Feb 28, 2015 at 5:57 PM, John Regan  wrote:
> Sweet. And yeah, as Laurent mentioned in the other email, it's the
> weekend. Setting dates for this kind of stuff is hard to do, I just
> work on this in my free time. It's done when it's done.
>
> I also agree that s6 is *not* a docker-specific tool, nor should it
> be. I'm thankful that Laurent's willing to listen to any ideas we
> might have re: s6 development, but like I said, the goal is *not*
> "make s6 a docker-specific tool"
>
> There's still a few high-level decisions to be made, too, before we
> really start any work:
>
> 1. Goals:
>   * Are we going to make a series of s6 baseimages (like one
>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>   * Should we pick a base distro and focus on creating a series of
>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>   NodeJS image, etc)?
>   * Or should be focus on creating a series of service-oriented
>   images, ie, an image for running GitLab, an image for running an
>   XMPP server, etc?
>
> Figuring out the overall, high-level focus early will be really
> helpful in the long run.
>
> Options 2 and 3 are somewhat related - you can't really get to 3
> (create service-oriented images) without getting through 2 (make
> platform-oriented images) anyway.
>
> It's not like a goal would be set in stone, either. If more guys want
> to get on board and help, we could alway sit down and re

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
Okay! All great then.

Now I want us to iterate upon the suggested plan / 'road-map'. I have
hinted that we can use ADD. That is another kind of improvement. I
think it will save us considerable overhead, if we move our code around
a bit, and mean we don't actually need to make any special s6 base
images for the remaining distros.

The idea is that a docker-targeted s6 tarball should work universally,
on top of any / all base images - if what Laurent says is true, that
the s6 packages are entirely self-contained. So it should 'just work'
if we put the right things in the tarball.

It can be centrally maintained in one place. There is no need for us to
duplicate the same thing across multiple distros, and across multiple
versions of each distro (e.g. sid, wheezy). Yet we can still support
every one of them.

I am also dropping (removing) those previous points which John didn't like.
So the revised roadmap would look something like this:

* re-write /init from bash ---> execline
* add the argv[] support for CMD/ENTRYPOINT arguments
* move /init from ubuntu base image --> s6-builder
* create a new 's6docker' build product in s6-builder (rough packaging
  sketch after this list), which contains:
  * execline.tar.gz
  * s6.tar.gz
  * /init (probably renamed to be 's6-init' or something like that)
* replace the 'moved' bits in Gorka's pre-existing ubuntu base image
* document how to use it
* blog about it
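
As promised above, here is a rough sketch of how that combined
's6docker' tarball could be assembled from the existing pieces (all
file names and version numbers are only illustrative):

mkdir -p staging
tar -xzf execline-2.1.1.0-linux-amd64.tar.gz -C staging
tar -xzf s6-2.1.1.2-linux-amd64.tar.gz -C staging
cp init staging/s6-init
tar -czf s6-docker-2.1.1.2-linux-amd64.tar.gz -C staging .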


How we use it:

When s6-builder runs its builds, it can spit out 1 new extra build
target to this location:

https://github.com/glerchundi/container-s6-builder/releases

1 single unified tarball called "s6-docker-2.1.1.2-linux-amd64.tar.gz"
(or whatever we should call it).

That 1 tarball contains all the necessary s6 files required for use in docker.
Then we can document it just by telling people to put this in their Dockerfile:

FROM debian:wheezy
ADD https://github.com/<...>/s6docker.tar.gz /
ENTRYPOINT ["/s6-init", … ]

All Done!

We can just as easily specify any other official image in the FROM line, e.g.

FROM alpine
ADD https://github.com/<...>/s6docker.tar.gz /
ENTRYPOINT ["/s6-init", … ]

Or busybox, arch linux, centos, etc. It should not matter one cent or
change the procedure…

This approach helps us a lot by reducing the labor. Otherwise I guess we
are giving ourselves the rather daunting task of maintaining multiple
different base images, each of which has multiple versions etc. - and
then, whenever upstream changes them, we have to as well…

Please consider the revised plan.
Many thanks

* Very happy to have a crack at all of the needed development work on
that list, and submit it to Gorka's repos. Although I'm not so
technically adept / familiar with this stuff yet, so it may take me a
while to do all that, learning as I go, etc.

* Can't blog about it (I have no blog myself). So someone else can do
that later on (e.g. John), after everything is done.

* Can probably start work on the first bullet point (convert "/init"
to execline) this weekend, unless anyone else would rather jump in
before me and do it. But it seems not.



Possible Github Organization:

So this is another point of discussion.

A new github organisation would not be so essential anymore - at least
as "a place to house the various s6 base images", since there would
not be any.

Yet a github organisation can still be used in 2 other ways:

* To have an official-sounding name (and downloads URL) that never needs to change.

* If Laurent wants to push his core s6 releases (including the docker
specific one) onto Github, then it would be great for him to make a
"github/s6" org with Gorka, as a new home for 's6', or else a git mirror
of the official skarnet.org.

Given the reduced complexity, I don't really care now whether we actually
have an 's6' organisation at all. But I am happy to leave that choice
entirely up to Laurent and Gorka. I mean, if they want to do it, then
they are the official people to decide upon that. I have no personal
opinion myself (either way), as I only contribute a relatively minor
part of the work to help improve it.


Kind Regards
dreamcat4


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 5:29 PM, John Regan  wrote:
> Quick preface:
>
> I know I keep crapping on some parts of your ideas here, but I would
> to reiterate the core idea is absolutely great - easy-to-use images
> based around s6. Letting the user quickly prototype a service by just
> running `docker run imagename command some arguments` is actually a
> *great* idea, since that lets people start making images without
> needing to know a lot of details about the underlying process
> supervisor. It's flexible, I know a lot of people like to initially
> try things "on-the-fly" before sitting down to write a Dockerfile, and
> this allows for that.
>
> You're just getting caught up in some of the finer
> implementation details, like the usage of ENTRYPOINT vs CMD vs
> environment variables and so on, how to handle dealing with shell
> arguments, these parts are solved problems.
>
> So, onwards:
>
>> * Convert Gorak's current "/init" script from bash --> execline
>> * Testing on Gorak's ubuntu base image
>>
>> * Add support for spawning the argv[], for cmd and entry point.
>> * Testing on Gorak's ubuntu base image
>>
>> * Create new base image for Alpine (thanks for mentioning it earlier John)
>> * Possibly create other base image(s). e.g. debian, arch, busy box and so on.
>> * Test them, refine them.
>
> Everything up to here sounds fine and dandy - make some images and get
> them out there, I'm all about that.
>
>>
>> * Document the new set of s6 base images
>> * Blog about them
>
> Also awesome.
>
>> * Inform the fusion people we have created a successor
>
> Hmm, I don't think that'll go over well, nor do I think it's really an
> appropriate thing to do. Some people like the phusion images, for one.
> The whole reason they exist is because a lot of guys keep trying to
> use Docker as VM, and get upset when they can't SSH into the
> container.
>
> I don't see these as a "successor", rather an "alternative."
>
> Besides, none of us would even be concerned about this if Phusion
> hadn't made something in the first place!
>
>> * Inform 1(+) member(s) of Docker Inc to make them aware of new s6-images.
>
> Docker Inc isn't *that* concerned about what types of images are being
> made. They'd probably get that email and say "..alright cool?"
>
> The right approach (in my opinion) is to just build a really cool
> product, then let the product speak for itself.
>
>> Of course the proper channel ATM is to open issues and PR's on Gorak's
>> current s6-ubuntu base image. So we can open those fairly soon and
>> move discussion to there.
>
> It might be worth starting up a github organization or something and
> creating new images under that namespace. I can't speak for Gorak, but
> the proposed image is sufficiently different enough from my existing
> one that I'd have to make a new image anyways. My existing ones don't
> allow for that `docker run imagename command arguments` use-case, for
> me this constitutes a breaking change. I'd rather just deprecate my
> existing ones, and either join in on Gorak's project, or start a new
> one, either way.
>
>>
>>
>> Another thing we don't need to worry about right now (but may become a
>> consideration later on):
>>
>> * Once there are 2+ similar s6 images.
>>   * May be worth to consult Docker Inc employees about official / base
>> image builds on the hub.
>>   * May be worth to consult base image writers (of the base images we
>> are using) e.g. 'ubuntu' etc.
>
> I wouldn't ever really worry about that. The base images don't have
> any kind of process supervisor, and they shouldn't. They're meant to
> be minimal installs for building stuff off of.
>
>>
>>  * Is a possibile to convert to github organisation to house multiple images
>>  * Can be helpful for others to grow support and other to come on
>> board later on who add new distros.
>>   * May be worth to ensure uniform behaviour of common s6 components
>> across different disto's s6 base images.
>>  * e.g. Central place of structured and consistent documentation
>> that covers all similar s6 base images together.
>
> Yep, see my comments above. I'm all about that.
>
>>
>> Again I'm not mandating that we need to do any of those things at all.
>> As it should not be anything of my decision whatsoever. But good idea
>> to keep those possibilities in mind when doing near-term work. "Try to
>> keep it general" basically.
>>
>> For example:
>>
>> I see a lot of good ideas in Gorak's base image about fixing APT. It
>> maybe that some of those great ideas can be fed back upstream to the
>> official ubuntu base image itself. Then (if they are receptive
>> upstream) it can later be removed from Gorak's s6-specific ubuntu base
>> image (being a child of that). Which generally improves the level
>> standardization, granularity (when people choose decide s6 or not),
>> etc.
>
> They probably won't be that receptive - Gorak isn't changing the base
> Ubuntu image *drastically*, but it still deviates from how a normal,
> plain-jane

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 5:08 PM, John Regan  wrote:
>> Let me explain my point by an example:
>>
>> I am writing an image for tvheadend server. The tvheadend program has
>> some default arguments, which almost always are:
>>
>> -u hts -g video -c /config
>>
>> So then after that we might append user-specifig flags. Which for my
>> personal use are:
>>
>> --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218
>>
>> So those user flags become set in CMD. As a user I set from my
>> orchestration tool, which is crane. In a 'crane.yml' YAML user
>> configuration file. Then I type 'crane lift', which does the appending
>> (and overriding) of CMD at the end of 'docker run...'.
>>
>> Another user comes along. Their user-specific (last arguments) will be
>> entirely different. And they should naturally use CMD to set them.
>> This is all something you guys have already stated.
>>
>> BUT (as an image writer). I don't want them to wipe out (override) the
>> first "default" part:
>>
>> -u hts -g video -c /config
>>
>> Because the user name, group name, and "/config" configuration dir (a
>> VOLUME). Those choices were all hard-coded and backed into the
>> tvheadend image's Dockerfile. HOWEVER if some very few people do want
>> to override it, they can by setting --entrypoint=. For example to
>> start up an image as a different user.
>>
>> BUT because they are almost never changed. Then that's why they are
>> tacked onto the end of entrypoint instead. As that is the best place
>> to put them. It says every user repeating unnecessarily the same set
>> of default arguments every time in their CMD part. So as an image
>> writer of the tvheadend image, the image's default entry point and cmd
>> are:
>>
>> ENTRYPOINT ["/tvheadend","-u","hts","-g","video","-c","/config"]
>> CMD *nothing* (or else could be "--help")
>>
>> after converting it to use s6 will be:
>>
>> ENTRYPOINT ["/init", "/tvheadend","-u","hts","-g","video","-c","/config"]
>> CMD *nothing* (or else could be "--help")
>>
>> And it that that easy. After making such change then no user of my
>> tvheadend image will be affected… because users are only meant to
>> override CMD. And if they choose to override entrypoint (and
>> accidentally remove '/init') then they are entirely on their own.
>>
>
> Making the ENTRYPOINT have a bunch of defaults like this is actually
> exactly what you *shouldn't* do. Ideally, once you've created your
> baseimage, you shouldn't touch the ENTRYPOINT and CMD ever again in
> any derived image.
>
> So, first thing's first - if you're building an image with TVHeadEnd
> installed, you may as well take the time to write a script to run it
> in s6, which eliminates the need to change the ENTRYPOINT and CMD

Well, you are then overlooking 2 other benefits which I neglected
to mention anywhere previously:

1) Anyone can just look in my Dockerfile and see the default
tvheadend arguments as-is, without needing to go chasing them
embedded in some script (only the Dockerfile is published to
DockerHub, not such start scripts).

2) When they run my image, they can do a 'docker ps' and see
the FULL tvheadend arguments, including the default ones in the
entrypoint component - which then behaves much more like the regular
'ps' command.

I actually like those benefits, and as the image writer I have the
responsibility to ensure such things work properly, and to my own
liking.

> arguments in the first place. Your ENTRYPOINT just remains "/init",
> which will launch s6-svscan, which will launch TVHeadEnd. Your CMD
> array remains null.
>
> Now, conceptually, you can think of TVHeadEnd as requiring "system
> arguments" and "user arguments" -- "system arguments" being those
> defaults you mentioned.

Yeah - except I don't actually want to do that, and Docker Inc are
officially giving me the freedom to use both entrypoint + cmd in
whatever way I personally see fit.

>
> Now just make the TVHeadEnd `run` script something like:

vv That start script won't be published in the 'Dockerfile' tab on
Dockerhub. Therefore, the more functionality I push from the
Dockerfile into that script, the harder my Dockerfile will be for
other people to understand.

It's a completely valid way of doing things. I just don't think you
should try to enforce it as a hard rule in other people's images.

>
> ```
> #!/bin/sh
>
> DEFAULT_TVHEADEND_SYSTEM_ARGS="-u hts -g video -c /config"
> TVHEADEND_SYSTEM_ARGS=${TVHEADEND_SYSTEM_ARGS:-$DEFAULT_TVHEADEND_SYSTEM_ARGS}
> # the above line will use TV_HEADEND_SYSTEM_ARGS if defined, otherwise
> # set TVHEADEND_SYSTEM_ARGS = DEFAULT_TVHEADEND_SYSTEM_ARGS
>
> exec tvheadend $TVHEADEND_SYSTEM_ARGS $TVHEADEND_USER_ARGS
> ```
>
> That's shell script, I'm sure you could do an execline
> implementation, that fact doesn't particularly matter.
>
> So now, the user can run:
>
> `docker run -e TVHEADEND_USER_ARGS="--satip_xml http:..." tvheadendimage`
>
> If they need to change the default/system argumen

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 1:10 PM, Dreamcat4  wrote:
>> * Once there are 2+ similar s6 images.
>>   * May be worth to consult Docker Inc employees about official / base
>> image builds on the hub.
>
> Here is an example of why we might benefit from seeking help from Docker Inc:
>
> * Multiple FROM images (multiple inheritance).
>
> There should already be an open ticket for this feature (which does
> not exist in Docker). And it seems relevant to our situation.
>
> Or they could make a feature called "flavours" as a way to "tweak"
> base images. Then that would save us some unnecessary duplication of
> work.
>
> For example:
>
> FROM: ubuntu
> FLAVOUR: s6
>
> People could instead do:
>
> FROM: alpine
> FLAVOUR: s6

Oh wait a minute: I'm being a little slow. We can already use ADD
to achieve that sort of thing - it's just that the entry would point
to a github URL to get a single tarball from. Gorka is sort-of already
doing this… just with 2 separate ones, and without his /init included
within, which is copied from a local directory etc.
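
In other words, roughly this (the URL here is only a placeholder for
wherever the single combined tarball would live):

FROM alpine
ADD https://example.com/s6-flavour.tar.gz /tmp/
RUN tar -xzf /tmp/s6-flavour.tar.gz -C / && rm /tmp/s6-flavour.tar.gz
ENTRYPOINT ["/init"]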

> Where FLAVOR: s6 is just a separate auks layer (added ontop of the
> base) at the time the image is build. So s6 is just the s6-part, kept
> independent and separated out from the various base images.
>
> Then we would only need to worry about maintaining an 's6' flavour,
> which is self-contained. Bringing everything it needs with it - it's
> own 'execline' and other needed s6 support tools. So not depending
> upon anything that may or may-not be in the base image (including busy
> box).
>
> Such help from Docker Inc would save us having to maintain many
> individual copies of various base images. So we should tell them about
> it, and let them know that!
>
> The missing capability of multiple FROM: base images (which I believe
> is how is described in current open ticket(s) on docker/docker) is
> essentially exactly the same idea as this FLAVOR keyword I have used
> above ^^. They are interchangeable concepts. I've just called it
> something else for the sake of being awkward / whatever.


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
> * Once there are 2+ similar s6 images.
>   * May be worth to consult Docker Inc employees about official / base
> image builds on the hub.

Here is an example of why we might benefit from seeking help from Docker Inc:

* Multiple FROM images (multiple inheritance).

There should already be an open ticket for this feature (which does
not exist in Docker). And it seems relevant to our situation.

Or they could make a feature called "flavours" as a way to "tweak"
base images. Then that would save us some unnecessary duplication of
work.

For example:

FROM: ubuntu
FLAVOUR: s6

People could instead do:

FROM: alpine
FLAVOUR: s6

Where FLAVOR: s6 is just a separate aufs layer (added on top of the
base) at the time the image is built. So s6 is just the s6 part, kept
independent and separated out from the various base images.

Then we would only need to worry about maintaining an 's6' flavour,
which is self-contained, bringing everything it needs with it - its
own 'execline' and other needed s6 support tools. So it would not
depend upon anything that may or may not be in the base image
(including busybox).

Such help from Docker Inc would save us having to maintain many
individual copies of various base images. So we should tell them about
it, and let them know that!

The missing capability of multiple FROM: base images (which I believe
is how it is described in the current open ticket(s) on docker/docker) is
essentially exactly the same idea as this FLAVOR keyword I have used
above ^^. They are interchangeable concepts. I've just called it
something else for the sake of being awkward / whatever.


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 10:19 AM, Gorka Lertxundi  wrote:
> Dreamcat4, pull request are always welcomed!
>
> 2015-02-27 0:40 GMT+01:00 Laurent Bercot :
>
>> On 26/02/2015 21:53, John Regan wrote:
>>
>>> Besides, the whole idea here is to make an image that follows best
>>> practices, and best practices state we should be using a process
>>> supervisor that cleans up orphaned processes and stuff. You should be
>>> encouraging people to run their programs, interactively or not, under
>>> a supervision tree like s6.
>>>
>>
>>  The distinction between "process" and "service" is key here, and I
>> agree with John.
>>
>> 
>>  There's a lot of software out there that seems built on the assumption
>> that
>> a program should do everything within a single executable, and that
>> processes
>> that fail to address certain issues are incomplete and the program needs to
>> be patched.
>>
>>  Under Unix, this assumption is incorrect. Unix is mostly defined by its
>> simple and efficient interprocess communication, so a Unix program is best
>> designed as a *set* of processes, with the right communication channels
>> between them, and the right control flow between those processes. Using
>> Unix primitives the right way allows you to accomplish a task with minimal
>> effort by delegating a lot to the operating system.
>>
>>  This is how I design and write software: to take advantage of the design
>> of Unix as much as I can, to perform tasks with the lowest possible amount
>> of code.
>>  This requires isolating basic building blocks, and providing those
>> building
>> blocks as binaries, with the right interface so users can glue them
>> together on the command line.
>>
>>  Take the "syslogd" service. The "rsyslogd" way is to have one executable,
>> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
>> several tools to implement syslogd; the functionality already exists, even
>> if it's not immediately apparent. This command line should do:
>>
>>  pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody
>> s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" s6-envuidgid syslog
>> s6-applyuidgid -Uz s6-log /var/log/syslogd
>>
>>
> I love puzzles.
>
>
>>  Yes, that's one unique command line. The syslogd implementation will take
>> the form of two long-running processes, one listening on /dev/log (the
>> syslogd socket) as user nobody, and spawning a short-lived ucspilogd
>> process
>> for every connection to syslog; and the other writing the logs to the
>> /var/log/syslogd directory as user syslog and performing automatic
>> rotation.
>> (You can configure how and where things are logged by writing a real s6-log
>> script at the end of the command line.)
>>
>>  Of course, in the real world, you wouldn't write that. First, because s6
>> provides some shortcuts for common operations so the real command lines
>> would be a tad shorter, and second, because you'd want the long-running
>> processes to be supervised, so you'd use the supervision infrastructure
>> and write two short run scripts instead.
>>
>>  (And so, to provide syslogd functionality to one client, you'd really have
>> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
>> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
>> insane as it sounds. Processes are not a scarce resource on Unix; the
>> scarce resources are RAM and CPU. The s6 processes have been designed to
>> take *very* little of those, so the total amount of RAM and CPU they all
>> use is still smaller than the amount used by a single rsyslogd process.)
>>
>>  There are good reasons to program this way. Mostly, it amounts to writing
>> as little code as possible. If you look at the source code for every single
>> command that appears on the insane command line above, you'll find that
>> it's
>> pretty short, and short means maintainable - which is the most important
>> quality to have in a codebase, especially when there's just one guy
>> maintaining it.
>>  Using high-level languages also reduces the source code's size, but it
>> adds the interpreter's or run-time system's overhead, and a forest of
>> dependencies. What is then run on the machine is not lightweight by any
>> measure. (Plus, most of those languages are total crap.)
>>
>>  Anyway, my point is that it often takes several proces

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Thu, Feb 26, 2015 at 11:40 PM, Laurent Bercot
 wrote:
> On 26/02/2015 21:53, John Regan wrote:
>>
>> Besides, the whole idea here is to make an image that follows best
>> practices, and best practices state we should be using a process
>> supervisor that cleans up orphaned processes and stuff. You should be
>> encouraging people to run their programs, interactively or not, under
>> a supervision tree like s6.
>
>
>  The distinction between "process" and "service" is key here, and I
> agree with John.
>
> 
>  There's a lot of software out there that seems built on the assumption that
> a program should do everything within a single executable, and that
> processes
> that fail to address certain issues are incomplete and the program needs to
> be patched.
>
>  Under Unix, this assumption is incorrect. Unix is mostly defined by its
> simple and efficient interprocess communication, so a Unix program is best
> designed as a *set* of processes, with the right communication channels
> between them, and the right control flow between those processes. Using
> Unix primitives the right way allows you to accomplish a task with minimal
> effort by delegating a lot to the operating system.
>
>  This is how I design and write software: to take advantage of the design
> of Unix as much as I can, to perform tasks with the lowest possible amount
> of code.
>  This requires isolating basic building blocks, and providing those building
> blocks as binaries, with the right interface so users can glue them
> together on the command line.
>
>  Take the "syslogd" service. The "rsyslogd" way is to have one executable,
> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
> several tools to implement syslogd; the functionality already exists, even
> if it's not immediately apparent. This command line should do:
>
>  pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody
> s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" s6-envuidgid syslog
> s6-applyuidgid -Uz s6-log /var/log/syslogd
>
>  Yes, that's one unique command line. The syslogd implementation will take
> the form of two long-running processes, one listening on /dev/log (the
> syslogd socket) as user nobody, and spawning a short-lived ucspilogd process
> for every connection to syslog; and the other writing the logs to the
> /var/log/syslogd directory as user syslog and performing automatic rotation.
> (You can configure how and where things are logged by writing a real s6-log
> script at the end of the command line.)
>
>  Of course, in the real world, you wouldn't write that. First, because s6
> provides some shortcuts for common operations so the real command lines
> would be a tad shorter, and second, because you'd want the long-running
> processes to be supervised, so you'd use the supervision infrastructure
> and write two short run scripts instead.
>
>  (And so, to provide syslogd functionality to one client, you'd really have
> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
> insane as it sounds. Processes are not a scarce resource on Unix; the
> scarce resources are RAM and CPU. The s6 processes have been designed to
> take *very* little of those, so the total amount of RAM and CPU they all
> use is still smaller than the amount used by a single rsyslogd process.)
>
>  There are good reasons to program this way. Mostly, it amounts to writing
> as little code as possible. If you look at the source code for every single
> command that appears on the insane command line above, you'll find that it's
> pretty short, and short means maintainable - which is the most important
> quality to have in a codebase, especially when there's just one guy
> maintaining it.
>  Using high-level languages also reduces the source code's size, but it
> adds the interpreter's or run-time system's overhead, and a forest of
> dependencies. What is then run on the machine is not lightweight by any
> measure. (Plus, most of those languages are total crap.)
>
>  Anyway, my point is that it often takes several processes to provide a
> service, and that it's a good thing. This practice should be encouraged.
> So, yes, running a service under a process supervisor is the right design,
> and I'm happy that John, Gorka, Les and other people have figured it out.
>
>  s6 itself provides the "process supervision" service not as a single
> executable, but as a set of tools. s6-svscan doesn't do it all, and it's
> by design. It's just another basic building block. Sure, it's a bit special
> because it can run as process 1 and is the root of the supervision tree,
> but that doesn't mean it's a turnkey program - the key lies in how it's
> used together with other s6 and Unix tools.
>  That's why starting s6-svscan directly as the entrypoint isn't such a
> good idea. It's much more flexible to run a script as the entrypoint
> that performs a few basic initialization steps t

Re: process supervisor - considerations for docker

2015-02-26 Thread Dreamcat4
I think you guys are kinda on the right track. But you are trying to
impose some unneeded restrictions with regard to CMD vs ENTRYPOINT
usage.

To docker, the final combination of ENTRYPOINT + CMD is all that
matters. So long as the first arg is /init, any remaining args
should be shifted off and used to determine what program is run
under supervision.

So basically all the user has to care about (to get process
supervision) is to prepend "/init" as the first argument, before
argv[0] (the command to run) and any optional subsequent args.

If there are no arguments after "/init", then it's the multi-managed
process case, and things proceed in the conventional way (inspect the
config directories for each service).

You CANNOT enforce specific ENTRYPOINT + CMD usages amongst docker
users. It will never work because too many people use docker in too
many different ways. And it does not matter from a technical
perspective for the solution I have been quietly thinking of (but not
had an opportunity to share yet).


It's best to think of ENTRYPOINT (in conventional docker learning,
before throwing in any /init system) as being "the interpreter", such
as the "/bin/sh -c" bit that sets up the environment - like the shebang
line. Or it could be the python interpreter instead, etc.

It's best to understand CMD as being the docker image's user-facing
presentation of its exposed "command interface". So for example if you
ran docker inside of docker, its overridable CMD part would be the
docker subcommands such as 'run', 'stop' etc. And then users can
understand that "docker run docker (subcommand) (args)" is all they
need to care about from a user perspective - the --help usage of the
docker image and so on.
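
To make that combination concrete (the names here are only placeholders):

ENTRYPOINT ["/init"]
CMD ["some-default-command", "--default-flag"]

# docker run image                      -> /init some-default-command --default-flag
# docker run image other-command --foo  -> /init other-command --foo

Docker always appends CMD (or the run-time arguments that replace it)
after ENTRYPOINT, and that combined array is exactly the argv[] that
/init receives.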

But technically, /init should only ever receive an argv[] array which
is the combined ENTRYPOINT + CMD appended after it (whether the user
has overridden the entrypoint, or has no entrypoint set, is entirely
their business - and inside of /init it cannot know such things anyway).

My suggestion:

* /init is launched by docker as the first argument.
* init checks "$@". If there are any arguments:

 * create (from a simple template) an s6 run script
   * the run script launches $1 (the first arg) as the command to run
 * the run script template is written with the remaining args passed to $1

 * proceed normally (inspect the s6 config directory as usual!)
   * so there should be no breakage of any existing functionality

* Provided there is no VOLUME sitting on top of the /etc/s6 config directory,
  the run script is temporary - it will only last while the container is
  running.
   * So it won't be there to clean up afterwards, and future 'docker run'
     invocations with different arguments will simply regenerate it.

(A rough shell sketch of this follows below.)

The main thing I'm concerned about is preserving proper shell
quoting, because sometimes args can be like --flag='some thing'.

One simple way to get proper quoting (in conventional shells like bash)
may be to use 'set -x' to echo out the line, as that output is meant to
be re-executable by the interpreter. Although even if that takes care
of the quotes, it would still not be good to have accidental variable
expansion, interpretation of $ ! etc. Maybe I'm thinking a bit too far
ahead. But we already know that Gorka's '/init' script is written in
bash.
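
To make the suggestion above concrete, here is a rough, untested shell
sketch of that /init behaviour (the /etc/s6/app service directory and
all paths are just placeholders, and this is not Gorka's actual /init),
including one way to keep --flag='some thing' style arguments intact in
the generated run script:

#!/bin/sh
# hypothetical sketch only
if [ "$#" -gt 0 ]; then
  mkdir -p /etc/s6/app
  {
    printf '#!/bin/sh\nexec'
    for a in "$@"; do
      # single-quote each argument, escaping any embedded single quotes,
      # so the original quoting survives into the generated run script
      printf " '%s'" "$(printf '%s' "$a" | sed "s/'/'\\\\''/g")"
    done
    printf '\n'
  } > /etc/s6/app/run
  chmod 0755 /etc/s6/app/run
fi
exec s6-svscan /etc/s6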

Sorry for coming at it from a completely different angle!
:)


On Thu, Feb 26, 2015 at 5:03 PM, John Regan  wrote:
>
>
>>  I think you're better off with:
>>
>>  * Case 1 : docker run --entrypoint="" image commandline
>>(with or without -ti depending on whether you need an interactive
>>terminal)
>>  * Case 2 : docker run image
>>  * Case 3: docker run image commandline
>>(with or without -ti depending on whether you need an interactive
>>terminal)
>>
>>  docker run --entrypoint="" -ti image /bin/sh
>>would start a shell without the supervision tree running
>>
>>  docker run -ti image /bin/sh
>>would start a shell with the supervision tree up.
>>
>
> After reading your reasoning, I agree 100% - let -ti drive whether it's 
> interactive, and --entrypoint drive whether there's a supervision tree.
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: process supervisor - considerations for docker

2015-02-26 Thread Dreamcat4
* run a custom script and supervise it: docker run your-image /init
> /your-custom-script
>
>>
>> > Would appreciate coming back to how we can do this later on. After I
>> > have made a more convincing case for why it's actually needed. My
>> > naive assumption, not knowing any of s6 yet: It should be that simply
>> > passing on an argv[] array aught to be possible. And perhaps without
>> > too many extra hassles or loops to jump through.
>
>
> Would appreciate that use-cases! :-)

To give an overview:

* Containers that provide development tools / dev environments - often
that category of docker images takes direct cmd line args.
  * Here are some examples of complex single-shot commands that often
take command line arguments:
    * To run a complex build of something (which may spawn out to many cores)
      * It is very desirable to build software in docker containers.
    * To launch a dev sandbox local web server (e.g. when hacking on a
Ruby on Rails website, or nodejs / python / whatever)
      * Like I run 'ruhoh serve -9595' for my web development (where
the argument specifies the TCP port number).

  * Those examples above ^^ are not an isolated case - that is a whole
category of docker images.
  * It is not uncommon for people to just make a shell `alias
myalias=docker run  image name` shortcut for these
single-shot commands.
    * Then whatever is passed to 'myalias ' just gets
appended to the full `docker run imagename …` command.
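
For example, that alias pattern might look like this (a made-up alias;
the image name and bind mount are only illustrative):

  # single-shot build container hidden behind a shell alias
  alias mvn='docker run --rm -it -v "$PWD":/src -w /src example/maven mvn'

  # whatever the user types after the alias just lands on the end of
  # the docker run command line:
  mvn clean package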

Another, entirely different example:

* If a new docker user even just wants to write the simplest of shell
scripts in a container.
  * And they decide to make it POSIX compliant (#!/bin/sh shebang line)
  * And they decide to use the ubuntu base image.
    * Then their default /bin/sh shell will point to DASH (not bash).
      * Because ubuntu officially decided upon dash, which is 2x
faster than 'bash' for POSIX compliant scripts.
    * Well guess what I found out guys? Dash subshells are ALWAYS
orphaned children.
      * Everything except for & backgrounding
      * So if the user's script is busy or hung inside `` backticks
or $() or a pipe | (a toy example follows after this list)
        * Then docker stop won't kill it - 'error: no such
process'. Buggers up the docker daemon.
  * This may also be true of other images that have a /bin/sh such
as debian and busybox. I haven't investigated.
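
A toy example of the kind of script meant above (the build command is
made up - the point is only that the long-running work happens inside a
command substitution rather than as the shell's direct foreground job):

  #!/bin/sh
  # long-running work captured via $( ... )
  output=$(make -j"$(nproc)" 2>&1)
  echo "$output"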

^^ And that was just a simple system shell script. Never mind whatever
hideously complex java, python and so on tools other people try
to use.

My point being: a good way to save new users grief, when they are not
aware of things like this, is to make it possible to convert any
existing docker image to use FROM: baseimage-s6 (or whatever) and
Gorka's init. And this is something even the simplest of docker
containers deserves.

Process supervision can be needed even on the simplest of docker
containers, never mind those running persistent services like mysql
etc. Some people just want to run a single-shot command. However that
command may be unknown to the user and may shell out, etc. So let them
change their FROM: line and prepend the s6 init to their existing
CMD/ENTRYPOINT combination. And they can make their existing
containers safe.

Even non-service things, and the simplest docker images (which may be
single-shot commands), deserve to be able to have this as a simple
modification - to get the .s6 stop script which will TERM any zombies.

It is something we can do more generally for the docker community,
rather than being only concerned with our own direct and immediate
needs, since such a feature makes it easier for other people to make
use of this solution universally. So I can recommend it to people
without having to first ask them: oh, but let me check, BTW how
actually do you use CMD and ENTRYPOINT in your docker images? It
sounds like we should not have to impose that restriction.

And like Gorka said, it's possible to have it both ways and not impose
a restriction of one way over the other.

Anyway, to make my point more pointedly, here are some real-world examples…

These docker images already use docker's CMD feature for letting users
pass in cmdline args. But they are also all sufficiently complex
software to benefit from being run under a process supervisor such as
s6.

* maven: https://registry.hub.docker.com/_/maven/

* consul: https://github.com/progrium/docker-consul

* pipework: 
https://github.com/dreamcat4/docker-images/blob/master/pipework/2.%20Usage.md#run-modes

* tvheadend: (not published yet)
  dockerfile: tvheadend
  run:
    cmd: --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.2


Now we could do what John suggests, and try to modify all those images
to use -e ENV vars instead. But overall (for the community) I believe
it is less work to try to implement the proposed feature once in s6
base image / 

Re: process supervisor - considerations for docker

2015-02-25 Thread Dreamcat4
Really overwhelmed by all this. It is a much more positive response
than expected. And many good things - I am very grateful. There are
some bits I would still like us to continue discussing.

But Gorka - I must say that your new ubuntu base image really seems *a
lot* better than the phusion/baseimage one. It is fantastic, an
excellent job you have done there, and you continue to update it with
new versions of s6, etc. Can't really say thank you enough for that.

Anyway, back to the discussion:

On Wed, Feb 25, 2015 at 3:57 PM, John Regan  wrote:
> On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
>> Hello,
>>
>> After that great post of John's, I tried to solve exactly your same problems. I
>> created my own base image based primarily on John's and Phusion's base
>> images.
>
> That's awesome - I get so excited when I hear somebody's actually
> read, digested, and taken action based on something I wrote. So cool!
> :)
>
>>
>> See my thoughts below.
>>
>> 2015-02-25 12:30 GMT+01:00 Laurent Bercot :
>>
>> >
>> >  (Moving the discussion to the supervision@list.skarnet.org list.
>> > The original message is quoted below.)
>> >
>> >  Hi Dreamcat4,
>> >
>> >  Thanks for your detailed message. I'm very happy that s6 found an
>> > application in docker, and that there's such an interest for it!
>> > skaw...@list.skarnet.org is indeed the right place to reach me and
>> > discuss the software I write, but for s6 in particular and process
>> > supervisors in general, supervision@list.skarnet.org is the better
>> > place - it's full of people with process supervision experience.
>> >
>> >  Your message gives a lot of food for thought, and I don't have time
>> > right now to give it all the attention it deserves. Tonight or
>> > tomorrow, though, I will; and other people on the supervision list
>> > will certainly have good insights.
>> >
>> >  Cheers!
>> >
>> > -- Laurent
>> >
>> >
>> > On 25/02/2015 11:55, Dreamcat4 wrote:
>> >
>> >> Hello,
>> >> Now there is someone (John Regan) who has made s6 images for docker.
>> >> And written a blog post about it. Which is a great effort - and the
>> >> reason I've come here. But it gives me a taste of wanting more.
>> >> Something a bit more foolproof, and simpler, to work specifically
>> >> inside of docker.
>> >>
>> >>  From that blog post I get a general impression that s6 has many
>> >> advantages. And it may be a good candidate for docker. But I would be
>> >> remiss not to ask the developers of s6 themselves to try to take
>> >> some kind of a personal interest in considering how s6 might best
>> >> work inside of docker specifically. I hope that this is the right
>> >> mailing list to reach s6 developers / discuss such matters. Is this
>> >> the correct mailing list for s6 dev discussions?
>> >>
>> >> I've read and read around the subject of process supervision inside
>> >> docker. Various people explain how or why they use various different
>> >> process supervisors in docker (not just s6). None of them really quite
>> >> seem ideal. I would like to be wrong about that but nothing has fully
>> >> convinced me so far. Perhaps it is a fair criticism to say that I
>> >> still have a lot more to learn in regards to process supervisors. But
>> >> I have no interest in getting bogged down by that. To me, I already
>> >> know more-or-less enough about how docker manages (or rather
>> >> mis-manages!) its container processes to have an opinion about what
>> >> is needed, from a docker-sided perspective. And know enough that
>> >> docker project itself won't fix these issues. For one thing because of
>> >> not owning what's running on the inside of containers. And also
>> >> because of their single-process viewpoint on things. Anyway.
>> >> That kind of political nonsense doesn't matter for our discussion. I
>> >> just want to have a technical discussion about what is needed, and how
>> >> might be the best way to solve the problem!
>> >>
>> >>
>> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
>> >>
>> >> In regard to s6 only, these are my currently perceived
>> >> shortcomings when using it in docker:
>> >>
>> >> * i