Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
On Tue, 2015-09-08 at 18:38 +, Antti Kantee wrote:
> On 08/09/15 16:15, Ian Campbell wrote:
> > On Tue, 2015-09-08 at 15:03 +, Antti Kantee wrote:
> > > For unikernels, the rump kernel project provides Rumprun, which can
> > > provide you with a near-full POSIX'y interface.
> >
> > I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
> > sounds like it builds but the name suggests otherwise.
>
> For all practical purposes, Rumprun is an OS, except that you always
> cross-compile for it. So, I'd say "yes", but it depends on how you want
> to interpret the situation. We could spend days writing emails back and
> forth, but there's really no substitute for an hour of hands-on
> experimentation.
>
> (nb. the launch tool for launching Rumprun instances is currently called
> rumprun. It's on my todo list to propose changing the name of the tool
> to e.g. rumprunner or runrump or something which is distinct from the OS
> name, since similarity causes some confusion)

Thanks, I think I get it...

> > Do these wrappers make a rump kernel build target look just like any
> > other cross build target? (I've just got to the end and found my
> > answer, which was yes. I've left this next section in since I think
> > it's a nice summary of why it matters that the answer is yes)
> >
> > e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to
> > build aarch64 binaries on my x86_64 host, including picking up aarch64
> > libraries and headers from the correct arch-specific path.
> >
> > Do these rumprun-provided wrappers provide
> > x86_64-rumpkernel-{gcc,as,ld,ar,etc} ?
>
> No, like I said and which you discovered later,
> x86_64-rumprun-netbsd-{gcc,as,ld,ar,etc}. aarch64 would be
> aarch64-rumprun-netbsd-{...}.

Sorry, I used an explicit example when really I just meant "some triplet" without saying "such as" or "e.g.". So the answer to the question I wanted to ask (rather than the one I did) is "yes", which is good!
> > > If the above didn't explain the grand scheme of things clearly, have
> > > a look at http://wiki.rumpkernel.org/Repo and especially the picture.
> > > If things are still not clear after that, please point out matters of
> > > confusion and I will try to improve the explanations.
> >
> > I think that wiki page is clear, but I think it's orthogonal to the
> > issue with distro packaging of rump kernels.
>
> Sure, but I wanted to get the concepts right. And they're still not
> right. We're talking about packaging for *Rumprun*, not rump kernels in
> general.

Right.

> > > However, since a) nobody (else) ships applications as relocatable
> > > static objects b) Rumprun does not support shared libraries, I don't
> > > know how helpful the fact of ABI compatibility is. IMO, adding
> > > shared library support would be a backwards way to go: increasing
> > > runtime processing and memory requirements to solve a build problem
> > > sounds plain weird. So, I don't think you can leverage anything
> > > existing.
> >
> > This is an interesting point, since not building a shared library is
> > already therefore requiring packaging changes which are going to be at
> > least a little bit rumpkernel specific.
> >
> > Is it at all possible (even theoretically) to take a shared library
> > (which is relocatable as required) and to do a compile time static
> > linking pass on it? i.e. use libfoo.so but still do static linking?
>
> But shared libraries aren't "relocatable", that's the whole point of
> shared libraries! ;) ;)

Hrm, perhaps I'm confusing PIC with relocatable, but AIUI a shared library can be loaded at any address (subject to some constraints) in a process and may be loaded at different addresses in different processes, which is what you actually need to do...

> I guess you could theoretically link shared libs with a different ld,

...this.
(assuming you meant "link an app against shared libs")

> and I don't think it would be very different from prelinking shared
> libs,

Indeed.

> but as Samuel demonstrated, it won't work at least with an
> out-of-the-box ld.

Right, I thought it probably wasn't, which is why I said "even theoretically".

> I think it's easier to blame Solaris for the world going bonkers with
> shared libs, bite the bullet, and start adding static linking back where
> it's been ripped out from. Shared libs make zero sense for unikernels
> since you don't have anyone to share them with, so you're just paying
> extra for PIC for absolutely no return. (dynamically loadable code is a
> separate issue, if you even want to go there ... I wouldn't)

The issue, and the reason I mentioned it, is that distros (at least Linux distros) have, for better or worse, gone in heavily for the use of shared libraries in their application packaging norms. Actually, distros might be (e.g. Debian is) quite good at always providing a .a as well as the .so when packaging libraries.
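To illustrate that last point, here is a minimal sketch (the library name libfoo and the files are hypothetical, standing in for any distro-packaged library): when both libfoo.a and libfoo.so are installed, GNU ld prefers the shared one by default, but `-Wl,-Bstatic` forces it to take the archive, so the application ends up with libfoo linked in statically while libc stays dynamic.

```shell
# Hypothetical example: build both variants of a trivial libfoo.
cat > foo.c <<'EOF'
int foo(void) { return 42; }
EOF
cat > main.c <<'EOF'
int foo(void);
int main(void) { return foo() == 42 ? 0 : 1; }
EOF
gcc -c foo.c -o foo.o
ar rcs libfoo.a foo.o                  # the distro's static archive
gcc -shared -fPIC foo.c -o libfoo.so   # the distro's shared library

# -Bstatic makes ld pick libfoo.a even though libfoo.so is also present;
# -Bdynamic switches back afterwards so libc is still linked dynamically.
gcc main.c -L. -Wl,-Bstatic -lfoo -Wl,-Bdynamic -o myapp
./myapp
```

Of course this only helps on targets where the dynamic loader exists at all; for Rumprun the .a alone would be the input, but the point is the distro already ships it.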
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
Ian,

Thank you for the explanations. Hmm. (not replying to anything specific)

My guess is that shared libs won't be the biggest problem. I'd find it extremely surprising if you can take a Linux (or any other !NetBSD) packaging system and discover the dozens of dependencies of QEMU to not contain any "'isms" which do not apply when building for what is essentially a NetBSD target.

I don't know how Wei Liu built his QEMU, but I assume it was by farming a lot of --disable-stuff. That's what I'd do, and somehow find a way to ship the result. The requirements for stubdom QEMU and /usr/bin/qemu are IMO too dissimilar to be stuffed into the same box. By my judgement (which may be wrong), almost none of the dynamic dependencies of QEMU on a regular Linux system apply for the Xen stub domain use. If the goal is to get a reduced-footprint qemu, bundling unnecessary clutter is in direct conflict with the goal. Besides, trying to get e.g. libsystemd to build or work with [a NetBSD-API'd] Rumprun will most likely earn you a ticket to the loony bin.

So while I think I understand your predicament (you need to sell the Xen improvement to distros, which means playing by their rules) and the temptation of reusing existing packages for the job, I seriously doubt the approach will lead to a sensible result. That, of course, shouldn't stop you from trying. If the result of your experimentation matches your hypothesis and shared libs is the main problem, I'll figure out a way to make it work on the Rumprun side of things.

If you truly decide you need to use existing Linux infra, I'd start down that track by bolting a Linux userspace environment on top of the "kernel only" Rumprun stack (which could/should more or less work thanks to syscall emulation). Of course, you'd need to do the same for FreeBSD and every other system you want to support, so it's not a free ticket either, but by my guess at least a cheaper one.

Anyway, I only have guesses.
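For the record, the kind of "--disable farming" mentioned above looks roughly like the following. This is a sketch only: the exact flag set Wei used is not known, and QEMU configure flags vary between versions, so check `./configure --help` for your tree before relying on any of them. The block just assembles and prints the candidate flags; inside an unpacked QEMU source tree you would pass them to `./configure`.

```shell
# Sketch of a reduced-footprint QEMU configure invocation.
# Flags are illustrative; verify against ./configure --help for your version.
QEMU_CONFIGURE_FLAGS="
    --target-list=x86_64-softmmu
    --static
    --disable-gtk
    --disable-sdl
    --disable-vnc
    --disable-tools
    --disable-guest-agent
"
# Inside a QEMU source tree you would then run:
#   ./configure $QEMU_CONFIGURE_FLAGS && make
echo "$QEMU_CONFIGURE_FLAGS"
```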
- antti

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
On 08/09/15 16:15, Ian Campbell wrote:
> On Tue, 2015-09-08 at 15:03 +, Antti Kantee wrote:
> > For unikernels, the rump kernel project provides Rumprun, which can
> > provide you with a near-full POSIX'y interface.
>
> I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
> sounds like it builds but the name suggests otherwise.

For all practical purposes, Rumprun is an OS, except that you always cross-compile for it. So, I'd say "yes", but it depends on how you want to interpret the situation. We could spend days writing emails back and forth, but there's really no substitute for an hour of hands-on experimentation.

(nb. the launch tool for launching Rumprun instances is currently called rumprun. It's on my todo list to propose changing the name of the tool to e.g. rumprunner or runrump or something which is distinct from the OS name, since similarity causes some confusion)

> Do these wrappers make a rump kernel build target look just like any
> other cross build target? (I've just got to the end and found my answer,
> which was yes. I've left this next section in since I think it's a nice
> summary of why it matters that the answer is yes)
>
> e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to
> build aarch64 binaries on my x86_64 host, including picking up aarch64
> libraries and headers from the correct arch-specific path.
>
> Do these rumprun-provided wrappers provide
> x86_64-rumpkernel-{gcc,as,ld,ar,etc} ?

No, like I said and which you discovered later, x86_64-rumprun-netbsd-{gcc,as,ld,ar,etc}. aarch64 would be aarch64-rumprun-netbsd-{...}.

> Appearing as a regular cross-compilation target is, I think, going to be
> important to being able to create rumpkernel based versions of distro
> packages. I think that package maintainers ideally won't want to have to
> include a bunch of rumpkernel specific code in their package, they just
> want to leverage the existing cross-compilability of their package.

Yes, that is critical. We bled to achieve that goal.
It looks obvious now, but I can assure you it wasn't obvious a year ago.

> $ ldd /usr/bin/qemu-system-x86_64 | wc -l
> 87
> $

Heh, that's quite a lot.

> > If the above didn't explain the grand scheme of things clearly, have a
> > look at http://wiki.rumpkernel.org/Repo and especially the picture. If
> > things are still not clear after that, please point out matters of
> > confusion and I will try to improve the explanations.
>
> I think that wiki page is clear, but I think it's orthogonal to the
> issue with distro packaging of rump kernels.

Sure, but I wanted to get the concepts right. And they're still not right. We're talking about packaging for *Rumprun*, not rump kernels in general.

> > However, since a) nobody (else) ships applications as relocatable
> > static objects b) Rumprun does not support shared libraries, I don't
> > know how helpful the fact of ABI compatibility is. IMO, adding shared
> > library support would be a backwards way to go: increasing runtime
> > processing and memory requirements to solve a build problem sounds
> > plain weird. So, I don't think you can leverage anything existing.
>
> This is an interesting point, since not building a shared library is
> already therefore requiring packaging changes which are going to be at
> least a little bit rumpkernel specific.
>
> Is it at all possible (even theoretically) to take a shared library
> (which is relocatable as required) and to do a compile time static
> linking pass on it? i.e. use libfoo.so but still do static linking?

But shared libraries aren't "relocatable", that's the whole point of shared libraries! ;) ;)

I guess you could theoretically link shared libs with a different ld, and I don't think it would be very different from prelinking shared libs, but as Samuel demonstrated, it won't work at least with an out-of-the-box ld.

I think it's easier to blame Solaris for the world going bonkers with shared libs, bite the bullet, and start adding static linking back where it's been ripped out from.
Shared libs make zero sense for unikernels since you don't have anyone to share them with, so you're just paying extra for PIC for absolutely no return. (dynamically loadable code is a separate issue, if you even want to go there ... I wouldn't)

> > I don't really have good solutions for the packaging problem. Building
> > a "full distro" around rump kernels certainly sounds interesting,
>
> FWIW I don't think we need a full distro, just sufficient build
> dependencies for the actual useful things (maybe that converges on to a
> full distro though).

By "full distro" I meant "enough to get a majority of the useful services going". Seems like once qemu works, we're 99% there ;)

> Debian are (most likely) not going to accept a second copy of the QEMU
> source in the archive and likewise they wouldn't want a big source
> package which was "qemu + all its build dependencies" or anything like
> that, especially when "all its build dependencies" is duplicating the
> source of dozens of
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
On Tue, 2015-09-08 at 18:26 +0200, Samuel Thibault wrote:
> Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> > Is it at all possible (even theoretically) to take a shared library
> > (which is relocatable as required) and to do a compile time static
> > linking pass on it? i.e. use libfoo.so but still do static linking?
>
> € gcc test.c -o libtest.so -shared -Wl,--relocatable
> /usr/bin/ld.bfd.real: -r and -shared may not be used together

Sorry, my suggestion was a bit garbled, to say the least... I meant more "link an application against it statically even though it is a shared library":

$ gcc main.c -o myapp.elf -static libfoo.so

Where myapp.elf would be statically linked and include the libfoo code directly.

Ian.
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> Is it at all possible (even theoretically) to take a shared library
> (which is relocatable as required) and to do a compile time static
> linking pass on it? i.e. use libfoo.so but still do static linking?

€ gcc test.c -o libtest.so -shared -Wl,--relocatable
/usr/bin/ld.bfd.real: -r and -shared may not be used together

Samuel
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
Hi,

Wei Liu hinted that I should "chime in and / or provide corrections" (his words). I'll attempt to do exactly that by not really replying to anything specific. For the record, when I say "we" in this mail, I mean "people who have contributed to the rump kernel project" (as also indicated by the email-hat).

First of all, there's a difference between a rump kernel (driver bundle built out of unmodified kernel components) and any unikernel you construct out of rump kernels ... sort of like how there's a difference between Linux and GNU/Linux. For unikernels, the rump kernel project provides Rumprun, which can provide you with a near-full POSIX'y interface. Rumprun also provides toolchain wrappers so that you can compile existing programs as Rumprun unikernels. Rumprun also recently regrew the ability to run without the POSIX'y bits; some people found it important to be able to make a tradeoff between running POSIX'y applications and more compact "kernel plane" unikernels such as routers and firewalls. But, for brevity and simplicity, I'll assume the POSIX'y mode for the rest of this email, since that's what the QEMU stubdom will no doubt use.

If the above didn't explain the grand scheme of things clearly, have a look at http://wiki.rumpkernel.org/Repo and especially the picture. If things are still not clear after that, please point out matters of confusion and I will try to improve the explanations.

Also for simplicity, I'll be talking about rump kernels constructed from the NetBSD kernel, and the userspace environment of Rumprun being NetBSD-derived. Conceptually, there's nothing stopping someone from plugging a GNU layer on top of NetBSD-derived rump kernels (a bit like Debian kXBSD?) or constructing rump kernels out of Linux. But for now, let's talk about the only working implementation.

As far as I know, the API/ABI of the application environment provided by Rumprun is the same as the one provided by standard NetBSD.
Granted, I didn't perform the necessary experiments to verify that, so take the following with a few pinches of salt. In theory, you could take application objects built for NetBSD and link them against Rumprun libs. However, since a) nobody (else) ships applications as relocatable static objects b) Rumprun does not support shared libraries, I don't know how helpful the fact of ABI compatibility is. IMO, adding shared library support would be a backwards way to go: increasing runtime processing and memory requirements to solve a build problem sounds plain weird. So, I don't think you can leverage anything existing.

We do have most of the Rumprun cross-toolchain figured out at this point. First, we don't ship any backend toolchain(s), but rather bolt wrappers and specs on top of any toolchain (*) you provide. That way we don't have to figure out where to get a toolchain which produces binaries for every target that everyone might want. Also, it makes bootstrapping Rumprun convenient, since you just say "hey give me the components and application wrappers for CC=foocc" and off you go.

*) as long as it's gcc-derived, for now (IIRC gcc 4.8 - 5.1 are known to work well; older than that, at least C++ won't work). clang doesn't support specs files at least AFAIK, so someone would have to figure out how to move the contents of the specs into the wrappers, or whatever equivalent clang uses. (patches welcome ;)

The produced wrappers look exactly like a normal cross-toolchain. The tuple is the same as what NetBSD uses, except with rumprun added in the middle, so e.g. x86_64-rumprun-netbsd or arm-rumprun-netbsdelf-eabihf. That naming scheme means that most GNU-using software compiles nicely for Rumprun just by running configure as ./configure --host=x86_64-rumprun-netbsd followed by "make". Sometimes you additionally need things like --disable-shared, but all in all everything works pretty well.
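As a concrete sketch of that flow (guarded so it only attempts the build when the rumprun wrappers are actually on your PATH and run from a package's source tree; the triplet is the one given above, everything else is standard autotools):

```shell
# Cross-compile an autotools package for Rumprun, assuming the
# x86_64-rumprun-netbsd wrapper toolchain has been built and installed.
HOST=x86_64-rumprun-netbsd
if command -v ${HOST}-gcc >/dev/null 2>&1; then
    # Run from the package's source directory; --disable-shared is often
    # needed since Rumprun has no shared library support.
    ./configure --host=$HOST --disable-shared && make
    status="built"
else
    status="rumprun toolchain not installed; illustration only"
fi
echo "$status"
```

The point, as above, is that nothing here is Rumprun-specific from the package's perspective: it is the same `--host=` cross-build dance as for aarch64-linux-gnu.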
See http://repo.rumpkernel.org/rumprun-packages for a bunch of "case studies", not limited to just GNU autotools.

After "make", before launch we have an additional step called "bake", which links the specific kernel components onto the binary. So for example, you can make the compiled binary run on Xen or KVM depending on which kernel components you bake onto it. As a crude analogy, it's like scp'ing a binary to a Xen or KVM or bare metal system, but since the Rumprun unikernel is missing exec, we use the linker to perform "system selection".

So for shipping, one option is to ship the binary after "make", but then you also need to ship the toolchain. The other option is to ship the baked binary, but then you lose some of your possibilities on how to target the binary. I'm not sure either option is right for all cases.

We're still trying to figure out the exact form and figure of bake+launch. In the original implementation we assumed that at launch-time we could cheaply control all of the details of
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
Ian Campbell, on Tue 08 Sep 2015 17:37:21 +0100, wrote:
> On Tue, 2015-09-08 at 18:26 +0200, Samuel Thibault wrote:
> > Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> > > Is it at all possible (even theoretically) to take a shared library
> > > (which is relocatable as required) and to do a compile time static
> > > linking pass on it? i.e. use libfoo.so but still do static linking?
> >
> > € gcc test.c -o libtest.so -shared -Wl,--relocatable
> > /usr/bin/ld.bfd.real: -r and -shared may not be used together
>
> Sorry, my suggestion was a bit garbled, to say the least... I meant more
> "link an application against it statically even though it is a shared
> library":
>
> $ gcc main.c -o myapp.elf -static libfoo.so

Yes, that's what I understood, but the answer is the same: AIUI, once the library is linked, you can not link again with it, because the code has already been specialized.

Samuel