Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts

2017-09-13 Thread Stefano Stabellini
On Wed, 13 Sep 2017, Rajiv Ranganath wrote:
> On Tue, Sep 12 2017 at 01:36:04 AM, Stefano Stabellini wrote:
> 
> [...]
> 
> > Fortunately, from the stage1-xen code point of view, there is very
> > little difference between PVHv2 and PV. Switching from one to the
> > other should be a matter of adding one line to the xl config file.
> 
> There is a related use-case here that I think will be important to
> users.
> 
> In stage1-xen we are packaging a Dom-U kernel. When this kernel crashes
> we would want to capture its crash log. Depending on the nature of the
> issue, users can then work with their own kernel team, vendor (who is
> open to supporting LTS kernels) or upstream.
> 
> We might also want to consider supporting two LTS kernel versions on a
> rolling basis. Users can then use something like labels [1] or
> annotations [2] to toggle the kernel version. That way if their
> containers start crashing under a newer Dom-U kernel, they can roll back
> to a working kernel.
> 
> [...]

Yes, I agree. I think it makes sense to allow users to change the DomU
kernel version, at least at build time. At runtime it would be useful
too. One day we might even support kernels other than Linux.



> > You have a good point. I think we should be clear about the stability
> > of the project and the backward compatibility in the README. We should
> > openly say that it is still a "preview" and there is no "support" or
> > "compatibility" yet.
> 
> Sounds good. I'll update the README to reflect this.
>
> > Choosing Xen 4.9 should not be seen as a statement of support. I think
> > we should choose the Xen version based only on the technical merits.
> >
> > In the long term it would be great to support multiple stable versions
> > and a development version of Xen. As of now, I think it makes sense to
> > have an "ad-hoc approach": I would use Xen 4.9 just because it is the
> > best choice at the moment. Then, I would update to other versions when
> > it makes sense, manually. I don't think that building against a changing
> > target ("master") is a good idea, because we might end up stumbling
> > across confusing and time-consuming bugs that have nothing to do with
> > stage1-xen. However, we could pick a random commit on the Xen tree if
> > that's convenient for us, because at this stage there is no support
> > really. For example, PVCalls will require some tools changes in Xen.
> > Once they are upstream, we'll want to update the Xen version to the
> > latest with PVCalls support.
> >
> > Does it make sense?
> 
> Yes, it does. I'll switch to xen-4.9, qemu-2.10 and rkt-1.28 in the next
> version of the patchset.

Thanks!


> Best,
> Rajiv
> 
> [1] https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
> [2] https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts

2017-09-12 Thread Rajiv Ranganath

On Tue, Sep 12 2017 at 01:36:04 AM, Stefano Stabellini wrote:

[...]

> Fortunately, from the stage1-xen code point of view, there is very
> little difference between PVHv2 and PV. Switching from one to the
> other should be a matter of adding one line to the xl config file.

There is a related use-case here that I think will be important to
users.

In stage1-xen we are packaging a Dom-U kernel. When this kernel crashes
we would want to capture its crash log. Depending on the nature of the
issue, users can then work with their own kernel team, vendor (who is
open to supporting LTS kernels) or upstream.

We might also want to consider supporting two LTS kernel versions on a
rolling basis. Users can then use something like labels [1] or
annotations [2] to toggle the kernel version. That way if their
containers start crashing under a newer Dom-U kernel, they can roll back
to a working kernel.
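For illustration, the toggle could look something like the following pod annotation (the annotation key here is purely hypothetical, not an existing stage1-xen interface):

```
metadata:
  annotations:
    # hypothetical key; stage1-xen would map this to a packaged Dom-U kernel
    stage1-xen.xen.org/kernel-version: "4.14"
```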

[...]

>> 3. Multiboot2 - One of the reasons why I documented using EFI is because
>> I could not get multiboot2 to work. It looks like the fix for it is on
>> its way. I anticipate using multiboot2 would be easier for users.
>
> That's for the host, right? I didn't have that problem, but maybe
> because I am not using Fedora.

That's correct! I ran into this issue on a Fedora host.

[...]

> You have a good point. I think we should be clear about the stability
> of the project and the backward compatibility in the README. We should
> openly say that it is still a "preview" and there is no "support" or
> "compatibility" yet.

Sounds good. I'll update the README to reflect this.

> Choosing Xen 4.9 should not be seen as a statement of support. I think
> we should choose the Xen version based only on the technical merits.
>
> In the long term it would be great to support multiple stable versions
> and a development version of Xen. As of now, I think it makes sense to
> have an "ad-hoc approach": I would use Xen 4.9 just because it is the
> best choice at the moment. Then, I would update to other versions when
> it makes sense, manually. I don't think that building against a changing
> target ("master") is a good idea, because we might end up stumbling
> across confusing and time-consuming bugs that have nothing to do with
> stage1-xen. However, we could pick a random commit on the Xen tree if
> that's convenient for us, because at this stage there is no support
> really. For example, PVCalls will require some tools changes in Xen.
> Once they are upstream, we'll want to update the Xen version to the
> latest with PVCalls support.
>
> Does it make sense?

Yes, it does. I'll switch to xen-4.9, qemu-2.10 and rkt-1.28 in the next
version of the patchset.

Best,
Rajiv

[1] https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[2] https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/



Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts

2017-09-11 Thread Stefano Stabellini
On Sat, 9 Sep 2017, Rajiv Ranganath wrote:
> On Thu, Sep 07 2017 at 12:29:54 AM, Stefano Stabellini wrote:
> 
> [...]
> 
> >> +QEMU_BRANCH = 'master'
> >
> > I am not sure we want to always check out the latest QEMU. It is a
> > moving target. Would it make sense to use one of the latest releases
> > instead, such as v2.10.0?
> >
> 
> [...]
> 
> I feel that once we have an understanding of what a stable xen container
> experience for our users should look like, it makes a lot of sense to
> support two stable versions (on a rolling basis) along with
> unstable/devel versions of xen, qemu and rkt.

Yes, I think that would be ideal too.


> I am hoping we can include the following before adding support for
> stable versions.
> 
> 1. Kernel - PV Calls backend support will be in 4.14, which is a few
> months away.
> 
> 2. PVHv2 - xl and PVHv2 support is in flight for 4.10. I would like to
> see xen container users start off with PVHv2 and PV Calls networking.
> Therefore I am a bit hesitant about adding support for Xen 4.9.

Yes, that would be fantastic. Fortunately, from the stage1-xen code point
of view, there is very little difference between PVHv2 and PV. Switching
from one to the other should be a matter of adding one line to the xl
config file.
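For illustration, the difference amounts to a config fragment along these lines (a sketch; the exact keyword depends on the Xen/xl version, e.g. the `type` option introduced for xl in the 4.10 timeframe):

```
# guest type selection in the xl config (illustrative)
type = "pvh"    # PVHv2 guest; use type = "pv" for a plain PV guest
```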

Regarding statements of support, see below.


> 3. Multiboot2 - One of the reasons why I documented using EFI is because
> I could not get multiboot2 to work. It looks like the fix for it is on
> its way. I anticipate using multiboot2 would be easier for users.

That's for the host, right? I didn't have that problem, but maybe
because I am not using Fedora.


> 4. Rkt - Support for Kubernetes CRI and OCI image format will be of
> importance to our users. Rkt is working on it but I'm not sure of their
> progress. There are other projects that are also incubating in CNCF -
> cri-o and cri-containerd.
> 
> PV Calls networking is new to me, and I wanted to do some prototyping to
> understand how it would integrate with the rest of the container
> ecosystem after landing this series.
> 
> By adding support for xen-4.9, qemu-2.10 or rkt-1.28.1, I feel we should
> not yet set any kind of stability or backward compatibility expectations
> around stage1-xen.

I agree we should not set any kind of backward compatibility
expectations yet. See below.


> My preference would be to keep things on master (albeit deliberately)
> till we can figure out a good xen container experience for our users.
> 
> Please let me know what you think.

You have a good point. I think we should be clear about the stability of
the project and the backward compatibility in the README. We should
openly say that it is still a "preview" and there is no "support" or
"compatibility" yet.

Choosing Xen 4.9 should not be seen as a statement of support. I think
we should choose the Xen version based only on the technical merits.

In the long term it would be great to support multiple stable versions
and a development version of Xen. As of now, I think it makes sense to
have an "ad-hoc approach": I would use Xen 4.9 just because it is the
best choice at the moment. Then, I would update to other versions when
it makes sense, manually. I don't think that building against a changing
target ("master") is a good idea, because we might end up stumbling
across confusing and time-consuming bugs that have nothing to do with
stage1-xen. However, we could pick a random commit on the Xen tree if
that's convenient for us, because at this stage there is no support
really. For example, PVCalls will require some tools changes in Xen.
Once they are upstream, we'll want to update the Xen version to the
latest with PVCalls support.

Does it make sense?



Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts

2017-09-08 Thread Rajiv Ranganath

On Thu, Sep 07 2017 at 12:29:54 AM, Stefano Stabellini wrote:

[...]

>> +QEMU_BRANCH = 'master'
>
> I am not sure we want to always check out the latest QEMU. It is a
> moving target. Would it make sense to use one of the latest releases
> instead, such as v2.10.0?
>

[...]

I feel that once we have an understanding of what a stable xen container
experience for our users should look like, it makes a lot of sense to
support two stable versions (on a rolling basis) along with
unstable/devel versions of xen, qemu and rkt.

I am hoping we can include the following before adding support for
stable versions.

1. Kernel - PV Calls backend support will be in 4.14, which is a few
months away.

2. PVHv2 - xl and PVHv2 support is in flight for 4.10. I would like to
see xen container users start off with PVHv2 and PV Calls networking.
Therefore I am a bit hesitant about adding support for Xen 4.9.

3. Multiboot2 - One of the reasons why I documented using EFI is because
I could not get multiboot2 to work. It looks like the fix for it is on
its way. I anticipate using multiboot2 would be easier for users.

4. Rkt - Support for Kubernetes CRI and OCI image format will be of
importance to our users. Rkt is working on it but I'm not sure of their
progress. There are other projects that are also incubating in CNCF -
cri-o and cri-containerd.

PV Calls networking is new to me, and I wanted to do some prototyping to
understand how it would integrate with the rest of the container
ecosystem after landing this series.

By adding support for xen-4.9, qemu-2.10 or rkt-1.28.1, I feel we should
not yet set any kind of stability or backward compatibility expectations
around stage1-xen.

My preference would be to keep things on master (albeit deliberately)
till we can figure out a good xen container experience for our users.

Please let me know what you think.

>> +if p.returncode != 0:
>> +    sys.exit(1)
>
> Is this the same as
>   #!/bin/bash
>   set -e
> ?

That's right. 
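Concretely, the equivalence can be sketched in plain Python (a sketch, not the patch's code; `subprocess.check_call` raises on a non-zero exit status, which, if left uncaught, aborts the script much like `set -e` aborts a shell script):

```python
import subprocess

# Exit status 0: check_call returns normally.
subprocess.check_call(["true"])

# Exit status 1: check_call raises CalledProcessError. Uncaught, this
# would terminate the script, mirroring `set -e` behaviour.
try:
    subprocess.check_call(["false"])
    failed_status = None
except subprocess.CalledProcessError as e:
    failed_status = e.returncode

print("command failed with status", failed_status)
```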

> Please add a few words in the commit message about the benefit of this
> approach of writing scripts.
>

I'll update the commit message in the next version of the series.

Best,
Rajiv




Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts

2017-09-06 Thread Stefano Stabellini
On Sun, 27 Aug 2017, Rajiv Ranganath wrote:
> From: Rajiv M Ranganath 
> 
> Signed-off-by: Rajiv Ranganath 
> ---
>  build/fedora/components/qemu |   50 
>  build/fedora/components/rkt  |   58 
> ++
>  build/fedora/components/xen  |   46 +
>  build/fedora/run |   56 +
>  4 files changed, 210 insertions(+)
>  create mode 100755 build/fedora/components/qemu
>  create mode 100755 build/fedora/components/rkt
>  create mode 100755 build/fedora/components/xen
>  create mode 100755 build/fedora/run
> 
> diff --git a/build/fedora/components/qemu b/build/fedora/components/qemu
> new file mode 100755
> index 000..6c89e2c
> --- /dev/null
> +++ b/build/fedora/components/qemu
> @@ -0,0 +1,50 @@
> +#!/usr/bin/python2
> +
> +import shlex
> +import subprocess
> +import sys
> +import os
> +
> +# Modify this if you would like to install Qemu elsewhere on your filesystem 
> or
> +# a different version of Qemu
> +QEMU_PREFIX = '/opt/qemu-unstable'
> +QEMU_BRANCH = 'master'

I am not sure we want to always check out the latest QEMU. It is a
moving target. Would it make sense to use one of the latest releases
instead, such as v2.10.0?


> +# This should correspond to your Xen install prefix
> +XEN_PREFIX = '/opt/xen-unstable'
> +
> +
> +# helper function to capture stdout from a long running process
> +def subprocess_stdout(cmd, cwd, env):
> +    p = subprocess.Popen(
> +        shlex.split(cmd), cwd=cwd, env=env, stdout=subprocess.PIPE)
> +    while p.poll() is None:
> +        l = p.stdout.readline()
> +        sys.stdout.write(l)
> +    if p.returncode != 0:
> +        sys.exit(1)

Is this the same as
  #!/bin/bash
  set -e
?

Please add a few words in the commit message about the benefit of this
approach of writing scripts.


> +env = os.environ.copy()
> +
> +# build and install qemu
> +print "Cloning qemu..."
> +cmd = "git clone --branch %(branch)s git://git.qemu.org/qemu.git" % {
> +    'branch': QEMU_BRANCH
> +}
> +subprocess.check_output(shlex.split(cmd), cwd='/root')
> +
> +steps = [
> +    "./configure --prefix=%(qemu_prefix)s --enable-xen --target-list=i386-softmmu --extra-cflags=\"-I%(xen_prefix)s/include\" --extra-ldflags=\"-L%(xen_prefix)s/lib -Wl,-rpath,%(xen_prefix)s/lib\" --disable-kvm --enable-virtfs --enable-linux-aio"
> +    % {
> +        'qemu_prefix': QEMU_PREFIX,
> +        'xen_prefix': XEN_PREFIX
> +    }, 'make', 'make install'
> +]
> +for cmd in steps:
> +    cwd = '/root/qemu'
> +    subprocess_stdout(cmd, cwd, env)
> +
> +cmd = "cp i386-softmmu/qemu-system-i386 %(xen_prefix)s/lib/xen/bin/qemu-system-i386" % {
> +    'xen_prefix': XEN_PREFIX
> +}
> +subprocess.check_output(shlex.split(cmd), cwd='/root/qemu')
> diff --git a/build/fedora/components/rkt b/build/fedora/components/rkt
> new file mode 100755
> index 000..edfdd1c
> --- /dev/null
> +++ b/build/fedora/components/rkt
> @@ -0,0 +1,58 @@
> +#!/usr/bin/python2
> +
> +import shlex
> +import subprocess
> +import sys
> +import os
> +
> +# `rkt` is installed in the same prefix as `stage1-xen`. Modify this if you
> +# would like to install rkt elsewhere on your filesystem.
> +STAGE1_XEN_PREFIX = '/opt/stage1-xen'
> +RKT_PREFIX = STAGE1_XEN_PREFIX
> +RKT_BRANCH = 'master'
> +
> +# Adjust this according to what RKT_BRANCH generates
> +RKT_BUILD_VER = 'rkt-1.28.1+git'

I think it would be best to git-checkout the tag (v1.28.1) so that we
are sure there are no version mismatches. In fact, I would remove
RKT_BRANCH and just use a single variable to specify the version to
clone and build.
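The suggested single-variable scheme could be sketched like this (variable names are illustrative, not taken from the patch):

```python
# Single source of truth for the rkt version (illustrative name).
RKT_VERSION = 'v1.28.1'

# Clone the release tag directly, avoiding any branch/version mismatch.
clone_cmd = "git clone --branch %(version)s --depth 1 https://github.com/rkt/rkt.git" % {
    'version': RKT_VERSION
}

# The build directory name can then be derived from the same variable,
# instead of hard-coding RKT_BUILD_VER separately.
build_dir = 'build-rkt-%s+git' % RKT_VERSION.lstrip('v')
print(build_dir)
```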


> +# helper function to capture stdout from a long running process
> +def subprocess_stdout(cmd, cwd, env):
> +    p = subprocess.Popen(
> +        shlex.split(cmd), cwd=cwd, env=env, stdout=subprocess.PIPE)
> +    while p.poll() is None:
> +        l = p.stdout.readline()
> +        sys.stdout.write(l)
> +    if p.returncode != 0:
> +        sys.exit(1)
> +
> +
> +env = os.environ.copy()
> +
> +# build rkt
> +print "Cloning rkt..."
> +cmd = "git clone --branch %(branch)s https://github.com/rkt/rkt.git" % {
> +    'branch': RKT_BRANCH
> +}
> +subprocess.check_output(shlex.split(cmd), cwd='/root')
> +
> +steps = [
> +    './autogen.sh', './configure --disable-tpm --with-stage1-flavors=coreos',
> +    'make'
> +]
> +for cmd in steps:
> +    cwd = '/root/rkt'
> +    subprocess_stdout(cmd, cwd, env)
> +
> +# install rkt build artifacts to RKT_PREFIX
> +steps = [
> +    "mkdir -p %(prefix)s/bin" % {
> +        'prefix': RKT_PREFIX
> +    },
> +    "cp /root/rkt/build-%(build_ver)s/target/bin/rkt %(prefix)s/bin/rkt" % {
> +        'build_ver': RKT_BUILD_VER,
> +        'prefix': RKT_PREFIX
> +    }
> +]
> +for cmd in steps:
> +    cwd = '/root/rkt'
> +    subprocess_stdout(cmd, cwd, env)
> diff --git a/build/fedora/components/xen