Re: Debian and our frenemies of containers and userland repos
On 05.10.19 19:03, Bernd Zeimetz wrote:

> For that, developers also need or want the latest shiniest software -
> which is something a distribution can't provide.

It can, but that needs different workflows and a higher grade of automation (and of course wouldn't be so well tested). Actually, for the Python world I already did something: fully automatic import and debianization of PyPI packages. It's still experimental and part of another tool (which I'm using for building customer-specific backport and extra repos):

https://github.com/metux/deb-pkg/tree/wip/pypi

> I'm wondering if there is something Debian can do to be even more
> successful in the container world.

You could use dck-buildpackage --create-baseimage to do that. Feel free to create some target configs, and I'll be happy to add them.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Debian and our frenemies of containers and userland repos
On 07.10.19 13:17, Shengjing Zhu wrote:

> Why not have a repository for it, like dockerhub. So this becomes
> "pull latest build env", which saves lots of time ("re-bootstrap" is
> still slow nowadays).

No idea how sbuild works these days (I turned away from it aeons ago, as I found it too complicated to set up), but dck-buildpackage can do both: it can try to pull an existing image for the given target, but will bootstrap one when there isn't any. IMHO, the best idea is treating images as nothing but a cache, and having the build machinery bootstrap automatically when needed.

One thing on my todo list for dck-buildpackage is keeping cache images for dependency sets (eg. if the same package is rebuilt many times during development) - installing dependencies can eat up a lot of time. (For now, this can be achieved manually, by configuring a target with the dependencies already installed - but I don't like manual things :o)

BTW: one important point w/ dck-buildpackage for me was being able to specify what's in the image. I really prefer to have it really minimal.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
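[Editor's note: the "image is only a cache" policy described above can be sketched roughly as follows. This is a minimal sketch, not dck-buildpackage's actual code; `dck_image_exists`, `dck_pull` and `dck_bootstrap` are hypothetical stand-ins for the real docker/debootstrap calls.]

```shell
#!/bin/sh
# Sketch of the pull-or-bootstrap policy: reuse a cached build image if
# present, try to pull one otherwise, and fall back to bootstrapping.
# dck_image_exists/dck_pull/dck_bootstrap are hypothetical helpers.
ensure_baseimage() {
    target="$1"
    if dck_image_exists "$target"; then
        echo "cache hit: $target"
    elif dck_pull "$target"; then
        echo "pulled: $target"
    else
        dck_bootstrap "$target"
        echo "bootstrapped: $target"
    fi
}
```

The point of the structure is that the image store never becomes authoritative: losing all images only costs a re-bootstrap.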
Re: Debian and our frenemies of containers and userland repos
On 05.10.19 18:25, Bernd Zeimetz wrote:

Hi,

> Having something that works with git-buildpackage would be really nice,

:)

> though. Even better if it would allow to use the k8s API to build things...

Patches are always welcome :)

There are some problems to be solved for remote hosts (IMHO, k8s only on the local node doesn't make so much sense ;-)): dck-buildpackage currently mounts some host directories (eg. the local apt repo and a reflink'ed copy of the source tree) into the container. While one could put docker nodes onto a shared filesystem, that probably wouldn't be so nice w/ k8s ...

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Debian and our frenemies of containers and userland repos
On 05.10.19 03:31, Paul Wise wrote:

> On Fri, Oct 4, 2019 at 10:49 PM Enrico Weigelt wrote:
>> On 24.07.19 08:17, Marc Haber wrote:
>>
>>> Do we have a build technology that uses containers instead of chroots
>>> yet?
>>
>> Something like docker-buildpackage ?
>
> AFAICT, docker-buildpackage doesn't exist

I'm pretty sure it does exist, since I wrote it :p

https://github.com/metux/docker-buildpackage

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Debian and our frenemies of containers and userland repos
On 24.07.19 08:17, Marc Haber wrote:

> Do we have a build technology that uses containers instead of chroots
> yet?

Something like docker-buildpackage ?

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Debian and our frenemies of containers and userland repos
On 11.07.19 17:25, Yao Wei wrote:

Hi,

> It can be a "solid base" of container images and barebone systems, but
> the days are numbered, as operating systems that are free and focused on
> their mission (like Google COOS, Yocto, Alpine etc.) are evolving
> steadily. Could it be a disaster for us?

And more importantly, do users care? I don't think so.

COOS: just yet another special-purpose distro, in this case for docker hosts. Neither the first nor the last one to come.

Yocto: just yet another compile-yourself distro, focused on embedded, that happens to be hyped by certain corporations. (For small/embedded devices, I'd really recommend ptxdist.)

Alpine: yet another distro, optimized for running in small containers.

BTW: the idea of building small payload/application-specific containers/chroots is anything but new. I did it somewhere back in the '90s. But nowadays, these so-called "small" containers tend to be bigger than whole machines of the '90s.

Containerization is a valid approach for some kinds of workloads (eg. specific in-house applications) that can be easily isolated from the rest. But it comes at the price of huge redundancies (depending on how huge some application stacks are). And unless everybody wants to go back to maintaining everything on their own, we still need distros.

If different applications need to interact deeply (eg. various plugin stuff, applications calling each other, etc.), containerization doesn't help much. (Eg: how can you have a pure texlive in one container and extra things like fonts, document classes, etc. in separate ones? :o)

The whole point of containerization isn't packaging and deployment of individual applications - it's automating the rollout of fully-configured installations.

One thing seems to be right: folks who have always been hostile towards the whole concept of distros now have a better excuse.
--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Backports needed for Firefox/Thunderbird ESR 68 in Stretch
On 02.07.19 22:45, Moritz Mühlenhoff wrote:

Hi,

> ESR 68 will require an updated Rust/Cargo toolchain and build dependencies not
> present in Stretch (nodejs 8, llvm-toolchain-7, cbindgen and maybe more).
> Stretch was already updated wrt Rust/Cargo for ESR 60, so there's at least no
> requirement to bootstrap stage0 builds this time.

A few days ago I had a try with a newer rust/cargo version from unstable on stretch. Unfortunately, it failed miserably: eg. certain libs were missing from the source tree, and some of them couldn't even be found in the upstream git repo anymore. It seems that rust/cargo needs a lot more attention.

> If we want to continue to have Firefox/Thunderbird supported in oldstable-security
> after October, someone needs to step up to take care of backports to a Stretch point
> release before October 22nd (or in case of poor timing, we can also release build
> dependency updates via stretch-security).

ACK. I haven't had a chance to take a deeper look at the rust/cargo issue yet (currently too occupied with other things). If anybody could come forward with a solution, I'd be really glad.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: OpenCL / handling of hardware-specific packages
On 01.07.19 23:59, Rebecca N. Palmer wrote:

Hi,

>> So, installing an opencl-based package pulls in *all* cl driver stacks ?
>
> If we do the above, yes by default, but the user can prevent this by
> explicitly installing any one.

Ok, that's fine, as long as it doesn't cause the already mentioned problems.

>> Please don't do that. This IMHO clearly belongs into the operator's
>> hands
>
> Do you mean "not as long as it would cause the above bugs" or "not
> ever"? If the latter, is it because of wasted storage/bandwidth or
> something else?

Bandwidth and install time are one issue, storage is another (yes, some folks actually care about storage, eg. w/ containers :p), and yet another is reducing code and therefore potential bugs and attack surface. In general, I'd like to keep my systems as minimal as possible.

> Do you also believe that the existing hardware-related -all packages
> (printer-driver-all(-enforce), va-driver-all, vdpau-driver-all,
> xserver-xorg-input-all, xserver-xorg-video-all) should not exist /
> should not be installed by default?

I don't have a problem with those packages as such, but there shouldn't be dependencies on them, so that suddenly hundreds of packages come in when just one is needed. Maybe there should be some selection mechanism (not sure yet how that could be implemented).

>> (or possibly some installer magic).
>
> Do we have such a mechanism? I agree this would be better if it existed.

Honestly, I don't know. Perhaps that deserves a more detailed discussion. At that point it would also be nice if we could configure things like which editor or mailer to install. Maybe this could be done w/ some virtual package + equivs magic. I'll have to think about this ...

> The AppStream metadata format includes a field for "hardware this works
> with", and beignet-opencl-icd has one, but I don't know if any existing
> tools use this field.
I don't like the idea of making driver packages depend on some desktop stuff :o IMHO, it should be handled at the dpkg/apt level.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
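[Editor's note: the "virtual package + equivs magic" idea mentioned above could start from something like the following. This is a purely hypothetical sketch of an equivs-build(1) input file; the package name and the vim dependency are made up for illustration - the real selection mechanism would need more thought.]

```
Section: metapackages
Priority: optional
Standards-Version: 4.3.0
Package: local-editor-choice
Version: 1.0
Depends: vim
Provides: editor
Description: site-local editor selection
 Dummy package recording the operator's editor choice, so that
 dependencies on the "editor" virtual package don't drag in a default.
```

Feeding this file to equivs-build produces an installable .deb that satisfies the virtual package locally.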
Re: ZFS in Buster
On 01.07.19 21:06, Enrico Weigelt, metux IT consult wrote:

Hi,

> IIRC the whole thing is actually about crypto stuff. Why don't zfs folks
> just use the standard linux crypto api (potentially introduce a new algo
> if the existing ones aren't sufficient) ?

Addendum: I just had a quick scan through the code and found a completely home-grown AES implementation. Seriously?

Completely redundant cipher implementations from a likely understaffed project are something I wouldn't like to have on my machines, certainly not in the kernel. There are just too many pitfalls in crypto programming (especially w/ buggy x86 CPUs) - I wouldn't dare to implement that myself and run it on production systems. And for performance, many folks like to use hw acceleration. The Linux crypto api already provides that.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: ZFS in Buster
On 07.06.19 10:16, Philipp Kern wrote:

Hi,

> This would not be the case here. But crippling the performance would
> indeed be an option, even though this would make Debian much less
> relevant for ZFS deployments and people would just go and use Ubuntu
> instead.

Is it really necessary to have some competition w/ Ubuntu over who's got the larger user base? In which way is that relevant for the progress of Debian? For me personally, when working on FOSS it never mattered how many users are out there - I don't need to "sell" anything.

> I personally wonder why a kernel which provides a module interface does
> not provide a way to save FPU state, but alas, they made their decision.

Because that's really low-level, arch-specific stuff. I don't even recall any platform driver that ever cared about such things. From a kernel hacker/maintainer pov, the idea of an arch-specific filesystem driver sounds really weird.

This function IIRC was just a workaround for kvm, which had always been suboptimal and was eventually replaced by a better solution. Since nobody had used it for quite some time, it got removed. And regarding LTS, I don't recall that Greg ever made any commitment to not removing obsolete and unused stuff (he's just reluctant to put too much extra work into that for his lts trees). Of course, as users of kernel-internal APIs, only the in-tree stuff matters - this has always been the policy in Linux development (at least as far as I remember).

IIRC the whole thing is actually about crypto stuff. Why don't zfs folks just use the standard linux crypto api (potentially introduce a new algo if the existing ones aren't sufficient)?

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: AMDGPU+OpenCL with Debian?
On 19.06.19 09:09, Rebecca N. Palmer wrote:

Hi,

> I proposed [0] fixing this by creating a metapackage for "all OpenCL
> drivers" (similar to the ones for graphics). However, having unusable
> OpenCL drivers installed can trigger bugs: [1] in llvm, and some
> applications that treat "no hardware for this driver" as "fatal error"
> instead of "try the next driver".

So, installing an opencl-based package pulls in *all* cl driver stacks? Please don't do that. This IMHO clearly belongs in the operator's hands (or possibly some installer magic).

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: scratch buildds
On 15.06.19 11:20, Helmut Grohne wrote:

Hi,

> Unlike the ${dist}-proposed variant, the scratch distribution can be set up
> entirely outside Debian. It only needs someone doing the work with no
> involvement of DSA. Wait, this reminds me of something. Luca Falavigna
> put up debomatic-${arch}.debian.net. And it has piuparts and lintian!

As somebody who does backports stuff and project/client-specific repos, I've created something of my own, which can build whole stacks of packages and create apt repos. It also allows fine control over what is in the base image, extra repos, etc.

The bad thing for me is: I've only got limited computing power and am very limited in available archs (just x86 and some older arm). So, having a CI that can build for all the Debian-supported archs, allows using extra repos and tailored base images, works on git directly (fully debianized branch) and publishes the repos to the outside world would be a really cool thing for me. IIRC that should also cover this 'scratch builds' use case. I admit, I haven't checked whether gitlab-ci can already do that.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 01.07.19 15:09, Andrey Rahmatullin wrote:

> On Mon, Jul 01, 2019 at 03:04:26PM +0200, Enrico Weigelt, metux IT consult wrote:
>> On 29.05.19 17:41, Andrey Rahmatullin wrote:
>>
>>>> Perhaps we should update policy to say that the .orig tarball may (or
>>>> even "should") be generated from an upstream release tag where
>>>> applicable.
>>> This conflicts with shipping tarball signatures.
>>
>> Does that really need to be the upstream's tarballs ?
> The idea is checking the sig that the upstream made, with the key the
> upstream published.

Okay, but is that actually used (by anybody except the maintainers)?

>> If it's about validating the source integrity all along the path from
>> upstream to deb-src repo, we could do that by an auditable process
>> (eg. fully automatic, easily reproducible transformations)
> Sounds very complicated.

I don't think so, at least if we're considering the whole workflow. In the end, it's just a matter of trust chains:

* upstreams should use signed tags - we can collect their pubkeys in some suitable place (which we should do anyway).
* if an upstream doesn't sign, the maintainer has to trust them blindly, or needs to verify the code anyway. We could use some half-automated process for verifying the diff between the upstream tarball and the scm repo (and add our own signatures here).
* finally, the maintainer signs his final tree (the one that's used for actually building the final packages).

I believe that 99% of this can be done automatically, with a little bit of tooling.

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
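[Editor's note: the first link of the trust chain sketched above - checking whether an upstream release tag is signed at all before trying to verify it - could look like this. `check_tag` is a hypothetical helper, not an existing tool, and it assumes the upstream pubkey is already in the local keyring.]

```shell
# Minimal sketch: flag unsigned upstream release tags before importing
# them; verify the signature when one is embedded in the tag object.
check_tag() {
    tag="$1"
    if git cat-file tag "$tag" | grep -q 'BEGIN PGP SIGNATURE'; then
        # the annotated tag object embeds a signature -- verify it
        git verify-tag "$tag" && echo "verified: $tag"
    else
        echo "UNSIGNED: $tag -- maintainer must review manually"
    fi
}
```

A CI job could run this over all imported tags and refuse to build from anything that fails verification.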
Re: Survey results: git packaging practices / repository format
On 28.06.19 23:42, Ian Jackson wrote:

> https://wiki.debian.org/GitPackagingSurvey
>
> Unfortunately due to bugs in the Debian and Wiki CSS styles, it is
> displayed very suboptimally unless you log into the wiki and set your
> style to "Classic". (I have filed bug reports.)

Very nice work. For the next stage I think it would be nice to assign some kind of canonical names to the individual workflows and try to create some precise documentation. Once we have that, we could add a machine-readable file to the d/ trees stating the exact workflow and possibly extra metadata (eg. where to find certain repos/branches, etc).

> [1] This does *not* include the one response from a Debian downstream.

Talking about me? ;-)

> The task of being a Debian downstream is rather different and it
> doesn't make sense to try to represent that in the same table.

I don't think it's so different in that regard. IMHO the only difference is that I'm not an official Debian maintainer and not using buildd. But I believe the same approach could be used by any actual Debian maintainer if he also creates dsc's and pushes them onto buildd. Maybe it would be better to just differentiate the statistics between official Debian and others (such as downstreams).

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 29.05.19 23:39, David Bremner wrote:

>> Also, how do you move to a new upstream version ?
>
> use git merge, typically from an upstream tag, or from a debian specific
> upstream branch with tarballs imported on top of upstream history.

Uh, that creates a pretty ugly, unreadable git repo and makes interacting w/ upstream (eg. submitting patches) unnecessarily hard. That's something I regularly see w/ crappy vendor kernels, which then take lots of time to bring into some somewhat usable state :o

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 29.05.19 17:41, Andrey Rahmatullin wrote:

>> Perhaps we should update policy to say that the .orig tarball may (or
>> even "should") be generated from an upstream release tag where
>> applicable.
> This conflicts with shipping tarball signatures.

Does it really need to be the upstream's tarballs? Why not just automatically generate the orig tarballs and fingerprint *them* (not caring about the upstream's tarball at all)?

If it's about validating the source integrity all along the path from upstream to deb-src repo, we could do that by an auditable process (eg. fully automatic, easily reproducible transformations).

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 29.05.19 15:14, Ben Hutchings wrote:

> Perhaps we should update policy to say that the .orig tarball may (or
> even "should") be generated from an upstream release tag where
> applicable.

ACK. But there should also be some definition, or at least a guideline, on what is considered "applicable" (or better: when it is okay not to do that), and some rules on how the generation process shall be done. (Maybe having some script with a defined name?)

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
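[Editor's note: such a generation script - here under the hypothetical name debian/gen-orig; the name is exactly the kind of thing that would need standardizing - could be as simple as the following sketch, which produces a deterministic tarball from a release tag.]

```shell
# Hypothetical debian/gen-orig: build the .orig tarball reproducibly
# from an upstream release tag.  git archive uses the commit timestamp
# for file mtimes, and gzip -n omits the name/timestamp header, so the
# result is bit-for-bit reproducible by anyone with the same tag.
gen_orig() {
    pkg="$1"; version="$2"; tag="$3"
    git archive --format=tar --prefix="${pkg}-${version}/" "$tag" \
        | gzip -n > "../${pkg}_${version}.orig.tar.gz"
}
```

Fingerprinting the output (as suggested in the previous message) then becomes a plain sha256sum over a reproducible artifact.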
Re: Survey: git packaging practices / repository format
On 02.06.19 16:22, Sam Hartman wrote:

>>>>>> "Nikolaus" == Nikolaus Rath writes:
>
> Yes, but the lack of similarity is the thing I find interesting.
> In git-dpm (and git-debrebase), you retain all the rebases and stitch
> them together with various pseudo-merges into a combined history.
>
> If you could avoid that and have a more pure rebase workflow, I think it
> would be nice.
> As Ian points out, we don't know how to do that because we don't know
> how to figure out whether you have the latest rebase.

Could you give me some more details on the intended workflow? Why does one need that information at all?

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 02.06.19 00:57, Ian Jackson wrote:

Hi,

> The difficulty with this as a collaboration approach is that you can't
> tell whether a rebase is "the newest", at least without a lot of
> additional information. That additional information is the "clutter"
> if you like which the "cleaner" history doesn't contain.

Depends on what you actually call "new". I think for each supported upstream version there should be a separate maintenance branch (we can track the maintenance status in some other place, maybe some extra metadata branch). These individual branches are never rebased, just created once on the corresponding upstream tag. (Well, there could be extra devel branches that indeed are rebased on upstream's master, but that has to be explicitly communicated.)

> Both git-debrebase and git-dpm use a special history structure to
> record what the most recent rebase is. Obviously I prefer
> git-debrebase since I wrote it - using a different data model - even
> after I knew about git-dpm and its data model. But maybe this isn't
> the thread for that advocacy conversation.

What exactly does this tool do?

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 29.05.19 14:47, Sam Hartman wrote:

Hi,

> I'm certainly going to look at dck-buildpackage now, because what he
> describes is a workflow I'd like to be using within Debian.

:)

Maybe you'd also find this useful: https://github.com/metux/deb-pkg

It's a little tool for automatically creating whole repos, eg. automatically cloning repos (even w/ multiple remotes), building whole dependency trees (deps are explicitly configured instead of fetched from the packages, intentionally), etc. Note: the "master" branch is pretty hackish, basically an example - the idea is to branch off from "base" for each project and do the necessary changes directly in git (from time to time it's worth rebasing).

> For some projects I want to ignore orig tarballs as much as I can. I'm
> happy with native packages, or 3.0 quilt with single-debian-patch.

A single patch isn't so nice for interacting w/ upstreams.

> I don't want merge artifacts from Debian packaging on my branches.
> I'm happy to need to give the system an upstream tag.

I'd prefer always having a debian branch on top of the upstream release tag and doing all the debianization there, possibly per upstream release, distro release, flavour, etc.

> I'm happy for a dsc to fall out the bottom, and so long as it
> corresponds to my git tree I don't care how that happens.

ACK. I see the dsc just as an autogenerated intermediate stage for certain build systems (eg. buildd) or for providing src repos.

> I have a slight preference for 3.0 format over 1.0 format packages. 3.0
> makes it possible to deal with binaries, better compression and a couple
> of things like that. The quilt bits are (in this workflow) an annoyance
> to be conquered, not a value.

ACK. That's why I do everything in git only. I don't really care what the src packages look like, as long as I've got an easy and fully automatic way of getting a clean git tree with all the necessary changes already applied as readable (and documented) git commits.
> The thing his approach really seems to have going for it is that he
> gives up on the debian history fast forwarding and instead rebases a lot
> for a cleaner history.

ACK. Personally, I don't see any actual value in a separate Debian history, or even a history of the text-based patches. git-rebase is one of my primary daily tools.

> If we could figure out a way to collaborate on something like that well,
> it might be a very interesting tool to have.

ACK. I believe we should set some computable policies on how orig trees are generated from actual upstream repos and how patches are handled, so we can do imports/transformations fully automatically.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
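[Editor's note: the "debian branch on top of the upstream release tag, rebased per release" workflow discussed above can be sketched as below. This is a minimal sketch of the branch mechanics only; `new_upstream` is a hypothetical helper and the debian/* branch naming is an assumption, not an established convention.]

```shell
# Sketch: all debianization lives as commits on a debian/<tag> branch
# directly on top of the upstream release tag; moving to a new upstream
# release is a plain rebase of those commits onto the new tag.
new_upstream() {
    old_tag="$1"; new_tag="$2"
    # start the new packaging branch from the old one ...
    git checkout -b "debian/${new_tag}" "debian/${old_tag}"
    # ... and replay only the packaging commits onto the new release
    git rebase --onto "$new_tag" "$old_tag"
}
```

Each per-release branch is created once and never rebased afterwards, matching the "separate maintenance branch per upstream version" idea from the earlier message in this thread.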
Re: Survey: git packaging practices / repository format
On 29.05.19 13:50, Ian Jackson wrote:

Hi,

> Oh. I think I have misunderstood. I think you are describing a git
> workflow you use as a *downstream* of Debian, not as a maintainer
> *within* Debian.

Yes, something like that. I'm maintaining additional repos for certain projects, usually backports or packages that aren't in Debian at all. Pretty often it boils down to rebasing the debianization onto the latest upstream releases. A very annoying aspect for me is the way upstream sources and patches are managed, eg.:

* I need to reproduce how exactly orig trees are constructed from the actual upstream (git) trees. We often have autogenerated stuff in here (eg. autotools stuff); often files are missing, stuff is moved around, etc.
  --> here we should at least have some fully automatic transformation system within git; probably not within the actual source packages themselves, possibly as a cross-package project.
* text-based patches are costly to reproduce into a git repository
* many just don't apply via git-am
  --> at least we should fix them and add a policy that all patches need to be git-am compatible. (No, quilt isn't so helpful here, and I find it pretty complex to use - compared with git - and I need rebase.)
* we don't have any clear (machine-readable) distinction between types of patches, eg. whether they're generic or really debian-specific
* sometimes we even have patches against autogenerated stuff
* many patches lack any clear description
* sometimes we even have weird transformations on the source tree (usually on the orig tree, but sometimes also within the rules file)

A few days ago, I tried to rebuild a recent rustc (which I need for tbird), but got a lot of strange failures. It also lacked the source code for some library whose specific version doesn't even exist in the upstream git repo anymore. I know that rust is an ugly beast, but those things just should not happen.
The rust toolchain seems to be a good candidate for creating a fully automatic git transformation (eg. transform submodules into merges, etc).

I'd like to propose some additions to the packaging policies:

* there shall be no source tree transformations in the build process; all necessary changes shall be done by patches
* the upstream build process shall be used as-is whenever possible; if necessary, patch it (eg. no manual build or install rules, etc)
* there shall be no conditional applying of patches - the queue shall always be applied as a whole. If certain code changes are only applicable under certain build conditions (eg. different flavours, like with the kernel), proper build-time config flags shall be introduced.
* all patches shall be git-am compatible and have a clear description of what they're actually doing
* patches shall be written in a generic/upstreamable way if possible (eg. introduce new build-time config flags if necessary)
* patches shall be grouped into generic/upstreamable and distro-specific ones; the differentiation shall be easily machine-readable (eg. message headers), and the generic ones shall come first in the queue
* no patching of autogenerated files
* autogenerated files shall be removed from the source and always regenerated within the build process
* the debian/ directory shall contain a machine-readable file identifying the exact upstream source tree (eg. canonical version, url to tarball and git repo, tag name + commit id, etc)
* the minimum required debhelper version shall be the one present in stable (or better, oldstable) - unless there's really no other sane way

> And I think what you are saying is that you don't use source packages
> (.dsc) except maybe in the innards somewhere of your machinery.
> I think that is a good way for a downstream to work. Certainly when I
> modify anything locally I don't bother with that .dsc stuff.

Right, I've always found that very complicated and hard to maintain.
Actually, I'd like to propose phasing out that old relic. With git we just don't need it anymore.

> But my aim in this thread was to capture how people work *within*
> Debian, where a maintainer is still required to produce a .dsc.

I don't think that the .dsc really makes the picture so different. It can always be generated automatically. IMHO, it's only needed as an output format for creating src repos and as an intermediate transport for traditional tools like buildd.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
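[Editor's note: the "all patches shall be git-am compatible" rule proposed above is checkable mechanically. A minimal sketch of such a round-trip check - `roundtrip_patches` is a hypothetical helper, and exporting straight into debian/patches is an assumption for illustration:]

```shell
# Sketch: export the packaging commits as a patch queue and verify the
# queue re-applies cleanly with git-am on a pristine upstream checkout.
# Patches produced by format-patch are git-am compatible by construction.
roundtrip_patches() {
    upstream_tag="$1"; packaging_branch="$2"
    git checkout -q "$packaging_branch"
    git format-patch -o debian/patches \
        "${upstream_tag}..${packaging_branch}" >/dev/null
    git checkout -q --detach "$upstream_tag"
    git am debian/patches/*.patch >/dev/null && echo "queue is git-am clean"
}
```

Running this in CI would catch both non-applying patches and patches that lack proper author/description headers.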
Re: ZFS in Buster
On 28.05.19 18:43, Dan wrote:

> Sadly the Linux Kernel has introduced a commit in kernel 4.19 and 5.0 that
> prevents ZFS from using SIMD. The result is that ZFS won't be usable in
> Buster. See the following issue:
> https://github.com/zfsonlinux/zfs/issues/8793

We recently had this discussion on lkml - yet another case of 3rdparty folks who just don't follow the license rules. It's not the kernel that broke zfs, it's zfs that broke itself. The kernel is GPL, and they just have to follow the rules or go away.

OOT modules are conceptually messy in the first place. It's sometimes okay as a temporary workaround, until things get mainlined. But intentionally keeping things oot for a long time is just silly and creates lots more problems than it solves. And they're now even using *deeply* arch-internal functions directly.

> NixOS reverted that particular commit:
> https://www.phoronix.com/scan.php?page=news_item&px=NixOS-Linux-5.0-ZFS-FPU-Drop

Intentional license violation. Not funny.

> Debian is the "Universal Operating System" and gives the user the option to
> choose. It provides "vim and emacs", "Gnome and KDE",

If you want to have something new included, you'll have to sit down and do the actual work. At the end of the day, it's that simple.

> Would it be possible to provide an alternative patched linux kernel
> that works with ZFS?

You mean patching against the license?

> The ZFS developers proposed the Linux developers to rewrite the whole
> ZFS code and use GPL, but surprisingly the linux developers didn't
> accept. See below:
> https://github.com/zfsonlinux/zfs/issues/8314

Wait, no. It's not that we refused anything (actually, I don't even recall any decent discussion on that @lkml). There wasn't even anything to accept or refuse - except the existing code, which is nowhere near a quality where any maintainer would like to even take a closer look. The major problem is that ZoL has always been oot on purpose, which is the wrong approach to begin with.
That also leads to bad code quality (eg. lots of useless wrappers, horrible maintenance, ...).

What ZoL folks could do is rewrite it step by step to use mainline functionality wherever technically feasible, and work closely with upstream to introduce missing functionality. Obviously, their current proprietary userland interface can't be accepted for mainline - it has to be reworked to conform with the standard uapi (eg. we already have one for things like snapshots, deduplication, quotas, ...).

But it's up to the ZoL developers to do the actual work and post patches to lkml. There won't be anybody else doing that.

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 28.05.19 19:31, Simon McVittie wrote:

Hi,

> Debian Linux kernel
> ===
>
> Tree contains: an incomplete debian/ directory, notably without d/control,
> and no upstream source
> Changes to upstream source are: d/patches only
> Baseline upstream: changelog version => .orig tarball
> Patches managed by: ???
> Special build tool: there is a pre-build step to generate d/control

I'm handling the kernel very differently (actually, the official packages never got built at my site), similar to what I've described in my other mails - layered branches:

* layer 0: upstream tag (linus or greg)
* layer 1: generic patches for making upstream's 'make deb-pkg' work with the usual debian workflows (eg. not creating debian/rules from there anymore, but using a generic one instead)
* layer 2: dist- and target-specific customizations (changelogs, .config, etc ...)

The whole thing again is built via dck-buildpackage (dpkg-buildpackage should also work, but I never call it manually anymore, since I wrote dck-buildpackage). Note that I don't even try to create some one-size-fits-all superpackage for all archs, flavours, etc. - instead I'm using separate layer-2 branches for that. (For maintaining lots of kernel configs based on some meta config, I've got a separate tool.)

--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 29.05.19 01:39, Simon McVittie wrote:

Hi,

> You might reasonably assume that, but no, they are not. mesa (and probably
> other xorg-team packages) uses v1.0 dpkg-source format combined with
> dh --with quilt, so deliberate Debian changes can be either direct
> changes to the upstream source code, or quilt patches in d/patches,
> or a mixture. Additionally, mesa uses d/source/local-options to ignore
> files that only exist in the upstream git tag (which is what gets merged
> into the packaging git branch), but not in the upstream `make dist` output
> produced from that tag (which is used as the .orig tarball).

hmm, sounds quite complicated ... anyone here who could explain why exactly they're doing it that way ?

by the way, that's IMHO an important piece of information we should also collect: why exactly some particular workflow was picked.

> My understanding is that this unusual difference between the .orig
> tarball and what's in git is an attempt to "square the circle" between
> two colliding design principles: "the .orig tarball should be upstream's
> official binary artifact" (in this case Automake `make dist` output,
> including generated files like Makefile.in but not non-critical source
> files like .gitignore) and "what's in git should match upstream's git
> repository" (including .gitignore but not usually Makefile.in).

Since we have git, I've completely given up on the orig tarball - I'm just basing on their release tags. And, of course, there shouldn't be anything autogenerated in the git repo - always recreate everything (*especially* autotools-generated stuff). The orig tarball, IMHO, is a long-obsolete ancient relic.

For upstreams that don't have a git repo yet, I set up an importer first, and call that my upstream.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 28.05.19 22:08, Ian Jackson wrote:

Hi,

> Please can we leave aside discussion of the merits or otherwise of
> each of these formats/workflows.
>
> Perhaps we can talk about that (again!) at some point, but it tends to
> derail any conversation about git packaging stuff and I don't want
> this thread derailed.

I understand your point, but I believe we really should discuss this. (maybe based on some specific examples)

OTOH, I'll only participate in such discussions if I see that they're really going forward ... I've already tried that several times in recent years, with no success, so I just gave up :(

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 28.05.19 22:30, Ian Jackson wrote:

> Hi, thanks for replying. You have an interesting workflow which I
> think I need to ask some questions about before I can document it
> fully.

I'd call it the 'git-only-workflow' ;-)

The main reasons behind it are:

* i wanna be able to easily rebase onto upstream anytime
* i wanna keep generic changes separate from the distro-specific stuff (usually I try to keep them very generic, so they can go into mainline - eg. instead of directly patching things like paths, etc, I'm adding new build options, ...)
* i wanna easily bring generic changes upstream
* i don't ever wanna cope w/ text-based patches anymore (all these apply/unapply cycles really suck :p) - git is much easier to handle, IMHO
* i wanna have exactly the build tree in my git repo
* i don't wanna version patches (diffs of diffs are not quite useful :o)

Actually, the workflow is a tiny bit more complex: i'm using layered branches (regularly rebasing):

* layer 0: upstream releases
* layer 1: per-release maintenance branches w/ generic (hopefully upstreamable) fixes - based on the corresponding upstream release tags (or potentially their maint branches)
* layer 2: per-distro and per-release debianized branches (sometimes some layer 1.5 for really generic deb stuff)

Branches and tags have a canonical naming - ref name prefixes, canonical version numbers, ... (eg. anything for debian stretch is prefixed 'stretch/' ...)

Years ago, I already tried to form layer 1 into a greater, cross-distro community, where stabilization efforts are shared between many distros (kinda like send-patches.org, but w/ a high grade of normalization and automation). It was called the 'oss-qm' project (github org with the same name). But the interest from dist maintainers was asymptotically approaching zero, from below.
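To make the layering concrete, here's a minimal sketch in a throwaway repository - all names (v1.0, fixes/*, stretch/*) are hypothetical placeholders for the canonical naming scheme, and the "upstream" is simulated by local commits:

```shell
# Throwaway demo of the layered-branch workflow; run in a scratch dir.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo

# layer 0: upstream releases (simulated here by local commits + tags)
echo 'hello 1.0' > main.c
git add main.c; git commit -qm 'release 1.0'; git tag v1.0

# layer 1: generic, hopefully upstreamable fixes, based on the release tag
git checkout -qb fixes/1.0 v1.0
echo 'generic fix' > fix.txt
git add fix.txt; git commit -qm 'generic fix'

# layer 2: per-distro debianized branch, on top of layer 1
git checkout -qb stretch/1.0 fixes/1.0
mkdir debian && echo 10 > debian/compat
git add debian; git commit -qm 'debianize for stretch'

# a new upstream release appears ...
git checkout -q master
echo 'hello 1.1' > main.c
git commit -qam 'release 1.1'; git tag v1.1

# ... and each layer is rebased onto the one below it
git checkout -qb fixes/1.1 fixes/1.0
git rebase -q --onto v1.1 v1.0
git checkout -qb stretch/1.1 stretch/1.0
git rebase -q --onto fixes/1.1 fixes/1.0
```

After that, stretch/1.1 is a clean linear history (release 1.1, generic fix, debianize) - exactly the kind of tree that can be fed to a build tool directly.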
> Enrico Weigelt, metux IT consult writes ("Re: Survey: git packaging practices / repository format"):
>> I'm always cloning the upstream repo, branch off at their release tag
>> and add all necessary changes as individual git commits - first come
>> the generic (non-deb specific) patches, then the deb specific ones.
>> No text-based patches, or even magic rewriting within the build process.
>> The HEAD is exactly the debianized source tree,
>
> What source format do you use ? What is in debian/source/format, if
> anything ?

Usually "3.0 (quilt)", but I actually don't really care so much. Just picked that at some point, as it just worked, and never really thought about it anymore :p

> Do you handle orig tarballs at all ?

No. I'm exclusively using docker-buildpackage, which directly operates on the final source tree - no intermediate steps like unpacking, patching, etc. One of the fine things (besides simplicity) is that if anything goes wrong, I can just jump into the container (it intentionally doesn't clean up failing containers) and directly work from there (the git repo is also there).

> When you go to a new upstream, you make a new git branch, then ?

git checkout -b
git rebase

And then see if it works, fixing things, etc. Of course, I also care about self-consistent and understandable commits - git history is documentation, not a rotating backup ;-)

> Do you publish this git branch anywhere ?

https://github.com/oss-qm (from time to time I also send patches upstream)

>> which is then fed to dck-buildpackage.
>
> What is that ?

https://github.com/metux/docker-buildpackage

It's a little tool that sets up build containers (it also creates base images on-demand), including build tools, extra repos, etc, runs the build in the container and finally pulls out the debs. The main audience are folks that maintain extra repos (eg. customizations, backports, etc) - that's one of the things I'm regularly doing for my clients.
I've got another toolkit on top of that, which helps maintaining whole repos, including managing git repos and their remotes, dependency handling, etc. It's actually not a standalone tool, but a foundation for easily setting up your own customized build environment. I'm using it for all my customers who get apt repos, but also for backports and de-poetterization.

(Note: the 'master' branch currently is crappy, more of a playground, w/ lots of things that still have to be cleaned up ... for production use, fork from the 'base' branch.)

> manpages.debian.org wasn't any help.

It's not in official Debian. I announced it long ago, but nobody here really cared. I also tried to convince debian maintainers of slightly less insane scm workflows (just look at the kernel :p), but failed, so I don't waste my time on that anymore - instead I just clean up the mess for those packages that I actually need.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Survey: git packaging practices / repository format
On 28.05.19 17:51, Ian Jackson wrote:

> While trying to write the dgit FAQ, and some of the relevant docs, it
> has become even more painfully obvious that we lack a good handle on
> what all the different ways are that people use git to do their Debian
> packaging, and what people call these formats/workflows, and what
> tools they use.
>
> Can you please look through the table below and see if I have covered
> everything you do ?

I'm always cloning the upstream repo, branching off at their release tag and adding all necessary changes as individual git commits - first come the generic (non-deb specific) patches, then the deb specific ones. No text-based patches, or even magic rewriting within the build process. The HEAD is exactly the debianized source tree, which is then fed to dck-buildpackage.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Introducting Debian Trends: historical graphs about Debian packaging practices, and "packages smells"
On 13.04.19 10:20, Lucas Nussbaum wrote:

> TL;DR: see https://trends.debian.net and
> https://trends.debian.net/#smells
>
> Hi,
>
> Following this blog post[1] I did some work on setting up a proper
> framework to graph historical trends about Debian packaging practices.
> The result is now available at [2], and I'm confident that I will be
> able to update this on a regular basis (every few months).

Just a quick idea: for packages using git, can you also trace how they're using it ?

There are several approaches in use: some only track the debian/ subtree, some track the source tree plus debian/, some w/ extra text-based patches, some w/ patches already applied in git.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
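To sketch what such tracing could look like - a rough, purely illustrative heuristic over a checked-out packaging repo (the function name and category labels are made up; real detection would also need to inspect the history, not just the working tree):

```shell
# Guess which of the git packaging layouts listed above a repo uses,
# by looking at what is tracked next to debian/.
classify_repo() {
    dir=$1
    if [ ! -d "$dir/debian" ]; then
        echo "no-debian-dir"
    elif [ "$(ls -A "$dir" | grep -Ecv '^(debian|\.git)$')" -eq 0 ]; then
        # nothing but debian/ (and .git) is tracked
        echo "debian-dir-only"
    elif [ -s "$dir/debian/patches/series" ]; then
        # full source plus extra text-based patches
        echo "source+text-based-patches"
    else
        # full source, changes applied directly in git
        echo "source+patches-applied-in-git"
    fi
}
```

Usage would simply be `classify_repo /path/to/checkout`, aggregated over the archive to get the trend numbers.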
Re: PPAs (Re: [Idea] Debian User Repository? (Not simply mimicing AUR))
On 11.04.19 09:44, Mo Zhou wrote:

> Different from that, duprkit's design don't hope to limit the user with
> any pre-defined "sequence", but enable the users to selectively call
> the functions they need. In other words, the user can define how to
> deal with the prepared source+debian directories, after all the .durpkg
> header is a shell script. That said, I think some more helper functions
> would be nice: [1].

I'm still struggling to understand why simply using an old-fashioned ./debian/rules file (possibly w/o using debhelper+friends) isn't sufficient here. AFAIK, all that tools like buildd do for the actual build is call that rules file w/ some specific target names (eg. 'binary'). You can put in anything you like here - it's just a makefile. Theoretically, you could also use a shell script instead.

If you drop the idea of having everything in a single file in favour of debian trees (= something that has the 'debian' subdirectory with a 'rules' file in it), the existing debian toolchains could be used.

> My blueprint includes a very light-weight/simple dependency tracking
> mechanism. And I assume the project don't need to handle complex dep
> trees like apt. Because:
>
> 1. If a package is common and useful enough, such that it has been
>    adopted by many other projects, why don't I package it for the
>    official Debian release? So, I expect that most packages that DUPR
>    will deal with, are actually leaf or near-leaf packages on the
>    dependency tree.

Okay, that's a different topic. We have three options here:

a) put it into the official debian repo. that would go the usual ways, but it takes pretty long until the next release is out and the desired audience actually uses it.

b) add it to the backports repos. i'm not sure how the actual workflows and release timelines look here.

c) go the PPA route. here we'd need some repo-level dependency handling (not sure what tools exist here), and we'd have to coordinate between several PPAs.

> 2.
> Some of my targeted upstreams do sourceless binary tarball releases.
> They seldom get into the dependency trouble...

When I have to touch that stuff, I basically always run into trouble. Many subtle breaks, that are extremely hard to resolve (or even to track down). Such stuff I'm only handling in containers. Binary-only stuff is not trustworthy at all, so it really should be strictly confined. Those vendors (eg. Microsoft/Skype) also like to mess w/ the package manager configuration, have implicit dependencies like silly Lennartware, etc. I never ever run such crap outside a strictly confined container.

One of the worst things I've ever seen comes from National Instruments (which doesn't support Debian anyway, just ancient RHEL). Traditionally they only provided ridiculous installer programs (just like they're used to from the dilettantish Windows world) that do lots of really weird things, even messing w/ the kernel (yeah, they still insist on binary-only kernel modules - that's always broken-by-design). Sometime last summer they learned what package repos are for - well, just partially learned. They messed w/ the repo configs and installed a globally trusted package source with explicitly disabled authentication and plain http. Boom - 0day !

Due to their long history of hostility, total bullshit and censorship in their own "community", I posted that @full-disclosure (even government institutions like the BSI called for interviews on that matter - their products also run in critical infrastructure like power plants). Again, it took several months for the issue to be mitigated by NI.

> 3. Inserting a DUPR package into the near-root part of the Debian
>    dependency tree is, generally speaking, a terrible idea.
>    Only those who are aware of what they are doing will do this.

ACK. That stuff belongs in throwaway containers.

> The `bin/duprCollector` will collect meta information from a collection
> (and will form a dependency tree in the future).
> I have no plan to rethink about the "get-orig-source" target since
> there are ... lots of weird upstreams in my list...

Maybe we should talk about some of these cases, to get a better idea.

In general, we IMHO should rethink the workflows for creating the actual buildable debian tree from upstream releases (in many packages that's still pretty manual and hackish).

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
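To illustrate the earlier point that a plain ./debian/rules (no debhelper) is often all a buildable debian tree needs - a minimal sketch for a hypothetical package 'hello'; this is an illustration of the idea, not a policy-compliant reference:

```make
#!/usr/bin/make -f
# Hypothetical minimal debian/rules without debhelper.  buildd and
# dpkg-buildpackage only ever invoke the targets below - it's just make.

PKG     := hello
DESTDIR := $(CURDIR)/debian/$(PKG)

build: build-arch build-indep
build-arch:
	$(MAKE)
build-indep:

binary: binary-arch binary-indep
binary-arch: build-arch
	mkdir -p $(DESTDIR)/DEBIAN
	$(MAKE) install DESTDIR=$(DESTDIR)
	dpkg-gencontrol -p$(PKG) -P$(DESTDIR)
	dpkg-deb --build $(DESTDIR) ..
binary-indep:

clean:
	-$(MAKE) clean
	rm -rf $(CURDIR)/debian/$(PKG) debian/files

.PHONY: build build-arch build-indep binary binary-arch binary-indep clean
```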
Re: PPAs (Re: [Idea] Debian User Repository? (Not simply mimicing AUR))
On 10.04.19 16:56, Helmut Grohne wrote:

Hi,

> I looked into this. Your reasons are sound and you are scratching your
> itch. This is great.

ACK. It's always good when people get their hands dirty and work on solving actual problems. Even if the actual output (=code, etc) finally doesn't get wide use or is even thrown away completely, we still learn a lot that way. When I look back at my earlier years, I've written lots of things that never were actually used, but I learned a lot that way.

> Your implementation goes straight from .durpkg -> .deb. I question this
> decision: We already have lots of tools to go from .dsc to .deb. Your
> implementation replicates part of that and I think, this is bad as it
> makes it harder to collaborate.

I made a similar decision w/ dck-buildpackage, because I came to the conclusion that intermediate steps via dsc are just unnecessary complexity and slowdown. But the motivation of dck-buildpackage was getting rid of complicated and cumbersome things like pbuilder. So, I can understand his decision - he probably doesn't need anything from the dsc-based tools, as he's operating in a completely different scope.

> Let me propose a rather intrusive interface change to duprkit. What if
> the result of building a .durpkg was a .dsc rather than a .deb? Then you
> could split duprkit into two tools:
>
> * One tool to build source packages from .durpkg files on demand.
> * One tool to build a specific binary package from a given deb-src
>   repository.

Let me propose an even more radical approach: let it operate one step earlier in the pipeline, by just generating a debianized source tree. You could then use the tool of your choice to create a dsc from that and put it in whatever kind of pipeline you prefer. My personal choice here would be dck-buildpackage, and my infrastructure on top of that.

By the way, this opens up another common topic: how do we get from an upstream tree (in a git repo) to a debianized source tree w/ minimal manual effort ?
> Now in principle, the latter is simply sbuild or pbuilder, but there is
> more to it:
> * Given the binary package name, figure out which source package must
>   be built.

Yet another tricky issue. The primary data source for that usually are the control files. But they also sometimes are autogenerated. Could we invent some metadata language for this, that can also handle tricky cases like the kernel ?

> * Given that source package's Build-Depends, figure out what other
>   binary packages need to be built.
> * Recurse.
> * Build them in a suitable order.

You're talking about building whole overlay repos ? Then I might have something for you: https://github.com/metux/deb-pkg

Note: it's still pretty hackish and needs some local per-project customizations. I haven't had the time to make a general purpose standalone package of it - I'm just using it for building private extra repos for my customers. If anybody likes to join in and turn it into some general purpose package, let's talk about that in a different thread. The first step would be creating a decent command line interface (for now, the run-* scripts are just project-specific dirty hacks to save me from typing too much ;-)).

> (Some people will observe that this is the "bootstrap" problem. ;)

Not really the bootstrap problem, but a dependency problem. Easier to solve :p

> There is one major difficulty here (and duprkit doesn't presently solve
> that either): If you figure that some binary package is missing, you
> have no way of knowing which .durpkg file to build to get the relevant
> source package.

Yes, he'd have to reinvent the dependency handling. This is one of the points that let me question the whole approach and favour completely different approaches like classic containers.

> Now let's assume that you do want to allow complex dependencies in this
> user repository.
> In this case, it would make sense to trade .durpkg files for plain
> "debian" directories with an additional debian/rules target to obtain
> the source. (We removed "get-orig-source" from policy a while ago, but
> maybe this is what you want here.)

Sounds like a good idea. Maybe we should put this up for a broader discussion, along w/ the control file generation problem.

My desired outcome of that would be a generic way for fully automatically building everything from a debianized source tree (eg. a git repo) within a minimal container/jail, w/o any other extra configuration outside that source tree - even for those cases where the control file needs to be generated, which again needs some deps.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
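On the "build them in a suitable order" point, by the way: plain tsort(1) already gets you surprisingly far. A sketch with hypothetical package names - real input would be "dependency dependent" pairs extracted from each package's Build-Depends:

```shell
# Each input line is "dependency dependent"; tsort prints a valid
# build order, dependencies first.
build_order() {
    tsort <<'EOF'
libbase libfoo
libbase libbar
libfoo app
libbar app
EOF
}
build_order
```

In this toy graph, any valid topological order necessarily starts with libbase and ends with app; cycles (the genuinely hard part) make tsort complain rather than produce an order.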
Re: [Idea] Debian User Repository? (Not simply mimicing AUR)
On 10.04.19 03:53, Russ Allbery wrote:

Hi,

> Possibly my least favorite thing about RPMs is the spec files, because
> by smashing everything together into the same file, the syntax of that
> file is absurd. This bit is a shell script! This bit is a configuration
> file! This bit is human-readable text! This bit is a patch list! This
> bit is a file manifest! This bit is a structured changelog! This bit is
> a bunch of preprocessor definitions! Aie.

Same for me. Certainly, deb and debian methods also have their downsides:

#1: text-based patches inside debian/ make everything unnecessarily complex, as soon as you're working w/ a decent VCS (eg. git). their historical purpose has been obsolete for over a decade.

#2: for many common usecases, a full-blown makefile is too much complexity, and even w/ debhelper, knowing which variables have to be set in which exact way isn't entirely trivial. some purely declarative rule file (eg. yaml) would make those very common usecases much easier.

#3: when you have to generate the control file on the fly, things easily get messy - i'm currently fighting here w/ packaging the kernel. the problem is that this file contains the source package name and source dependencies, which need to be known before the control file can be generated. circular dependency. I'm currently working around that by having a separate control file (debian/control.bootstrap) which is used by my build machinery (dck-buildpackage) in a separate preparation step, when the control file is missing.

But still, IMHO, the debian packaging toolstack is much superior to anything else I've ever encountered.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
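The workaround from #3 boils down to a tiny preparation step. Roughly (file names as in the mail; the function itself is only a simplified sketch of what the build machinery would do, not its actual code):

```shell
# Seed debian/control from the static bootstrap copy when it's missing,
# so the source package name and build-deps are resolvable before the
# real control file can be generated.
prepare_control() {
    srctree=$1
    if [ ! -f "$srctree/debian/control" ] &&
       [ -f "$srctree/debian/control.bootstrap" ]; then
        cp "$srctree/debian/control.bootstrap" "$srctree/debian/control"
    fi
}
```

An existing debian/control is left untouched, so the step is safe to run unconditionally before every build.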
[prototype] Debian User Repository Toolkit 0.0a release
On 09.04.19 05:32, Mo Zhou wrote:

Hi,

> I drafted a 0.0 alpha release[1] for the toolkit, and created a logo for
> the DUPR project. From now on I'll try to add more packaging scripts
> (maybe I should call them recipes) to the default collection[2].
> Packaging plans are tracked here[3], and maybe further discussion about
> the DUPR (DUR, whatever.) should be redirected to a dedicated issue[4].
> And, I hope someone could put forward a better name for these prototypes
> (naming issue tracked here: [6]).

it seems that you're trying to package crap software. I, personally, only touch such stuff when a customer inserts lots of coins for that. And in these cases, I make it absolutely clear to them that we can't expect quality and stability - relying on binary-only crap is always playing Russian roulette. The mentioned cuda stuff (remember that Nvidia is a pretty hostile and fraudulent corporation) is just the tip of the iceberg - so-called "professional" software in the industrial world (eg. Xilinx studio, Sigasi, etc) is even more crappy. (Xilinx is also criminal - eg. *deliberately* violating the GPL). That's the kind of software you seriously don't wanna install outside a well-confined container anyways.

In some cases, I just write usual debian/rules files, or go the q&d ansible way. Usually, the job is to provide whole container environments for the customer's daily work - deb packaging is just one element here, which isn't necessarily economically efficient (for that stuff, I've already given up on the idea of quality - it's just about making the customer happy enough, so he inserts more coins :p).

A major challenge here is retrieving the original media in a *reliable* and fully automatic way, even in the customer's often pretty weird network setups. One just cannot rely on the original media remaining online - expect them to vanish over night, w/o any single notice.
Therefore you also need your own local mirror, if that stuff shall come anywhere near production use.

I would be open to collaborating on the maintenance of install stuff for such crapware (some of that I've already got @github), but there's a lot more to do than just yet another way to build debs. OTOH, I'm a bit reluctant to publish some fancy solution, as the vendors of that crapware - as well as their customers, who even pay for that crap - should feel the pain. That pain has to be increased much more, before anybody there even considers learning the fundamental basics of software build and deployment. (as long as they plague us with their ridiculous installers, they haven't learned a single bit). Maybe we should pick a different license here, which mandates the customers to squeeze the vendor's balls, make them feel a lot of pain ;-) (eg. the customer could tell the vendor something like "this is the last time we tolerate that - next purchases only if you provide properly long-term maintained apt repos for the distros and archs we use"). Okay, it's just a dream, but a very nice one ;-)

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Conflict over /usr/bin/dune
On 18.01.19 01:43, Andreas Beckmann wrote:

> On Tue, 18 Dec 2018 17:48:06 + Ian Jackson wrote:
>> Ian Jackson writes ("Re: Conflict over /usr/bin/dune"):
>>> https://www.google.com/search?q=dune+software
>>> https://en.wikipedia.org/wiki/Dune_(software)
>>> https://www.google.com/search?q=%2Fusr%2Fbin%2Fdune
>>>
>>> Under the circumstances it seems obvious that, at the very least, the
>>> ocaml build tool should not be allowed the name /usr/bin/dune.

By the way: there's also the game "Dune". IMHO not in the official Debian repo, but I've got it hanging around in some 3rdparty repo ...

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Limiting the power of packages
On 19.11.18 20:24, gregor herrmann wrote:

> On Mon, 19 Nov 2018 17:29:37 +0100, Enrico Weigelt, metux IT consult wrote:
>
> (OT, but since I noticed it too:)
>
>> Anyways, Skype doesn't work since 8.30 as it crashes directly on
>> startup.
>
> Apparently it needs (e)logind since that version.

That didn't help either. A few days ago, I tried their unofficial preview build, which doesn't seem to crash anymore. Let's see what happens when the official release comes.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: git vs dfsg tarballs
On 21.11.18 04:22, Paul Wise wrote:

> I don't think Andreas was talking about applying the DFSG but about
> files we don't have permission to distribute at all.

Have there been any cases where those files have been in the upstream VCS ? I don't recall any such case.

For the case where certain parts shouldn't be built/shipped due to policy, this can - and IMHO should - be handled with changes within the VCS, instead of having tarballs lying around w/o any clear history and no indication of how exactly they were created from upstream.

Actually, for about a decade now, I haven't been doing any code changes outside git, and I'm building packages only directly from git. Frankly, I don't see any reason why that can't be the standard case.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: git vs dfsg tarballs
On 19.11.18 17:36, Ian Jackson wrote:

Hi,

> I am saying that for packages whose Debian maintainers follow those
> recommendations, much of what you want would be straightforward - or,
> anyway, a lot easier. So I was plugging my recommendations.

Unfortunately, those packages I'm dealing w/ don't really follow that. Kodi is a really unpleasant example:

* unclear orig<->dfsg relationship (I'll have to analyze them one by one and adapt my import scripts)
* very non-linear history (eg. new upstream trees, sometimes even completely unrelated branches, directly merged down into the deb branch)
* lots of patches against non-existing files
* rules trying to touch missing source files/directories.

>> Here're some examples on how my deb branches look like:
>
> Not sure what you mean by `your deb branches',

Those which add the debian/* stuff, and possibly other patches. In my model, they're always linear descendants of the corresponding upstream release tag.

> but looking at what Debian gives you:
>
>> * canonical ref names
>
> dgit (dgit clone, dgit fetch) will give you this, regardless of the
> maintainer's behaviour.

hmm, looks like a good start. But it doesn't really look easy to clone from different distros and specific or yet-unreleased versions. And one of my main problems remains unresolved: linear history on top of the upstream's release tag.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: Limiting the power of packages
On 07.10.18 21:20, Adrian Bunk wrote:

> For leaf software like Skype or Chrome, approaches like flatpak where
> software can be installed by non-root users and then runs confined
> have a more realistic chance of becoming a good solution.

I'd rather put such non-trustworthy code into a minimal container w/ fine-tuned minimal permissions.

Anyways, Skype doesn't work since 8.30, as it crashes directly on startup.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: git vs dfsg tarballs
On 19.11.18 13:52, Ian Jackson wrote:

> Clearly the transformation on the *tree* can't be reversible because
> in the usual case it is deleting things. So you'll need the history.

It certainly can be, if you know the exact orig commit. Maybe I wasn't really clear here: I wanna do a fully automatic import into a git history (optimally, by just having the package name and version).

> With most gitish workflows, the corresponding pre-dfsg upstream
> *commit* can be found with `git-merge-base', assuming you have some
> uploaded (or pushed) Debian commit and a suitable upstream branch.

It's not entirely trivial if the maintainers are doing wild merges (eg. w/ kodi). Even worse: reconstructing the change history on top of some given upstream release is pretty complicated and manual. Merging down from upstream into the packaging branch (instead of just a simple rebase) turns out to be a bad idea here.

>> My preferred way (except for rare cases where upstream history is
>> extremely huge - like mozilla stuff) would be just branching at the
>> upstream's release tag and adding commits for removing the non-dfsg
>> files ontop of that. From that branching the debianized branch,
>> where all patches are directly applied in git.
>
> I think that most of the workflows recommended in these manpages
>
> https://manpages.debian.org/stretch-backports/dgit/dgit-maint-gbp.7.en.html
> https://manpages.debian.org/stretch-backports/dgit/dgit-maint-merge.7.en.html
> https://manpages.debian.org/stretch-backports/dgit/dgit-maint-debrebase.7.en.html

Still too complicated for me (especially regarding automation/CI).
Here're some examples of how my deb branches look like:

https://github.com/oss-qm/flatbuffers/commits/debian/maint-1.9.0
https://github.com/oss-qm/go/commits/debian/maint-1.11.1

* canonical ref names
* always based on the corresponding upstream's release tag
* changes directly as git commits - no text-based patches whatsoever
* generic changes below the deb-specific ones

While gbp can help a bit here and there, it's still far away from a fully-automated process. I'm currently helping myself w/ lots of mappings and import scripts, but I'd like to get rid of maintaining all these little pieces.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
git vs dfsg tarballs
Hi folks,

I'm often seeing packagers directly putting dfsg'ed trees into their git repos, w/o any indication of how the tree was actually created from the original releases. As I'm doing all patching exclusively via git (no text-based patches anymore - adding my changes ontop of the upstream release tag and then rebasing for new releases), this (amongst other problems like wild merges) is quite a challenge for efficient (heavily automated) handling.

Can we agree on some automatically reproducible (and invertible) transformation process from the orig to the dfsg tree ?

My preferred way (except for rare cases where the upstream history is extremely huge - like mozilla stuff) would be just branching at the upstream's release tag and adding commits for removing the non-dfsg files ontop of that. From that, branching the debianized branch, where all patches are directly applied in git.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
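For illustration, a throwaway-repo sketch of such a reproducible (and, via the tags, invertible) transformation - all names and the simulated upstream content are hypothetical:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com
git config user.name demo

# simulated upstream release containing a non-dfsg file
mkdir nonfree
echo 'free code' > main.c
echo 'blob' > nonfree/firmware.bin
git add .; git commit -qm 'release 2.0'; git tag v2.0

# dfsg branch: ordinary commits removing the offending files,
# directly ontop of the upstream release tag
git checkout -qb dfsg/2.0 v2.0
git rm -q -r nonfree
git commit -qm 'dfsg: remove non-free firmware blobs'
git tag v2.0+dfsg
```

The transformation is then fully documented by the history itself: v2.0+dfsg is a linear descendant of v2.0, the removal commits are reviewable diffs, and getting back the pristine tree is just a checkout of v2.0.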
Re: A message from CMake upstream: announcing dh-cmake
On 06.07.18 22:00, Colin Watson wrote:

> If the libraries in question are DFSG-free themselves, there's no DFSG
> issue and you don't need to remove them from the tarball (and we'd
> generally encourage not modifying the upstream tarball unnecessarily for
> upload to Debian). The policy about bundling is separate from the DFSG.
> Of course it'd be incumbent on whoever's doing the Debian upload to
> actually check the licensing status.

The last time I packaged vtk, I removed them (at least those that either had already been packaged or were easy to package), in order to make sure that nothing in that really complex cmake file could even try to build/use any piece of them.

The package was just meant for an inhouse installation for my client, so I didn't care much about policies and orig tarball handling - I just patched directly in the git repo.

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287
Re: You are not seriously considering dropping the support of sysVinit?!
On 17.10.18 08:55, free...@tango.lu wrote: > Dropping sysvinit would also put an enormous amount of work on the > Devuan project (the only future for Debian) by making them fork more > packages. Well, in that case we can also completely drop other Lennartware dependencies in the affected packages. --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
Re: Limiting the power of packages
On 04.10.2018 01:19, Carl-Valentin Schmitt wrote: > It would be a possibility, for safety to create a new directory only for > brandy 3rd-party-software like skype, Google Chrome, Swift, and else > Software where huge companies are Sponsors. > > This would then mean, to create a second sources list for 3rd-party-links. We don't need to add anything to dpkg/apt for that - there's a simpler solution: automatically fetch those packages from the vendor and collect them into our own repo, but run a strict analysis before accepting anything. Rules could include strict limits on certain filename patterns and file modes (eg. forbid suid, or limit it to certain owners), no maintainer scripts, etc. We could either filter out anything suspicious or reject the package completely (maybe even automatically filing upstream bugs :p). Yes, that would have to be customized per package, but we're only talking about a handful of packages anyway. What's really important to me: don't add more complexity on the target apt/deb side for these few cases, unless *absolutely* *necessary*. By the way: we can put aside the whole Skype issue for the next few months, as it's been completely broken and unusable anyway - for several months now. We could reconsider once upstream (Microsoft) manages to get it at least running w/o segfaulting. --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
Re: Limiting the power of packages
On 03.10.2018 19:19, Lars Wirzenius wrote: > Sometimes what they do is an unwelcome surprise to the user. For > example, the Microsoft Skype .deb and the Google Chrome .deb add to > the APT sources lists and APT accepted signing keys. Some users do not > realise this, and are unpleasantly surprise. https://seclists.org/fulldisclosure/2018/Sep/53 > (Note that I'm not saying Microsoft or Google are doing something > nefarious here: But I do think that. If they really wanted to do that in a reasonably secure and safe way (assuming they're not completely incompetent), they'd split off the sources.list part from the actual package (there're many good ways to do that) and add proper pinning to reduce the attack surface. And they would have talked to the distros about a proper process of bringing Skype into distro repos. OTOH, considering the tons of other bugs and design flaws, I'm not really sure whether they're nefarious or incompetent - maybe a mix of both ... > they're trying to make sure security updates for their > packages will be deployed to user's system; this seems like a worthy > goal. But it's a surprise to some users.) The goal is nice, but that's what distros are for. But it's been the same for aeons: commercial vendors tend to work against the distros. > I don't think it's good enough to say the user shouldn't install > third-party packages. Actually, I do think so (unless the user knows exactly what he's doing). It's not about proprietary software in general - that can be (and is) handled by distros as well. But the distro (or some other neutral project that provides an extra repo) is needed as a quality gate. > It's not even good enough to say the user should > use flatpaks or snaps instead: not everything can be packaged that > way. Debian's own packages can have equally unwelcome surprises. I haven't really looked deeper into flatpak, but I'm doing a lot with docker and lxc containers.
As those proprietary vendors tend to be completely overwhelmed by the whole concept of package management (no idea why, but I've seen this a thousand times w/ my clients), it seems the most pragmatic solution is to put everything into strictly isolated containers. Those packages are only a few special cases anyway. (For the average end-user I don't see many more candidates besides Skype, but there are still a lot of very special business applications, each having a pretty tiny user base.) That way, the vendors could just pick some minimal base system (maybe alpine or devuan based) and step by step learn how to use package management, in their own confined microcosmos. At least they wouldn't have to cope w/ many different distros, as long as they haven't understood the whole concept behind it (if they had, it would be pretty trivial for them, and we wouldn't need this discussion). > Imagine a package that accidentally removes /var, but only under > specific conditions. You'd hope that Debian's testing during a release > cycle would catch that, but there's not guarantee it will. (That's a > safety issue more than a security issue.) Did this ever happen? And why would anybody write such things into a maintainer script in the first place? > A suggestion: we restrict where packages can install files and what > maintainer scripts can do. The default should be as safe as we can > make it, and packages that need to do things not allowed by the > default should declare they that they intend to do that. Rebuild flatpak+friends? The point is: maintainer scripts can do anything they want. What we can (and should) do is handle most things in a purely declarative way - at deployment time, instead of autogenerating maintainer scripts or calling predefined functions - so we can concentrate on more careful reviews of the remaining special cases.
By the way: a lot of the demand for maintainer scripts, IMHO, comes from upstream's bad sw architecture (interestingly, GUI stuff again tends to be the ugliest area). Usually, good packages should be fine with just unpacking some files into the proper places. > This could be done, for example, by having each package labelled with > an installation profile, which declares what the package intends to do > upon installation, upgrade, or removal. Who defines these labels? The packager? Is there any extra quality gate (before the user)? > * default: install files in /usr only That's bad enough if the package is of bad quality or even malicious. Finally, I'd really like to reduce complexity, not introduce even more. --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
Announce: docker-buildpackage
Hi folks, I've written a tool for isolated deb builds in docker containers. It's a little bit like pbuilder, but using docker for isolation. https://github.com/metux/docker-buildpackage Everything is written in shellscript, with simple configs as sh includes. Not debianized yet, as it might require some local customizations (planned for future releases). I'm also hacking on another tool which automatically clones repos and calls dck-buildpackage for building whole pipelines - but that's still experimental and hackish: https://github.com/metux/deb-pkg --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
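Since the configs are plain sh includes, a target config might look roughly like this (the variable names below are purely illustrative assumptions - the actual config keys are whatever dck-buildpackage defines, see the repo):

```shell
# hypothetical target config, sourced as a plain sh snippet;
# key names here are examples, not the tool's real interface
TARGET_NAME="debian-stretch-amd64"
TARGET_DISTRO="debian"
TARGET_SUITE="stretch"
TARGET_ARCH="amd64"
TARGET_MIRROR="http://deb.debian.org/debian"
```

The idea being that one such file per target is enough for the tool to bootstrap (or pull) a matching base image.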
Re: FHS: Where to store user specific plugins / code
On 09.03.2018 14:23, Georg Faerber wrote: Hi, > I guess we'll go with /usr/local/lib/schleuder then? Does this sound > like a reasonable choice? That would be my choice. OTOH, it might be nice to have a helper that automatically creates deb packages. (Would also be nice for other applications, eg. moz.) --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
Re: MIA ? Miriam Ruiz
On 24.10.2017 05:02, Norbert Preining wrote: > I am trying to contact Miriam Ruiz (uid=miriam) but I haven't seen any sign of life/answer. All recent uploads of her packages are from other people, her own uploads are from 2015. Her last blog entry is also from 2015. I'll try to talk to her. --mtx -- Enrico Weigelt, metux IT consult Free software and Linux embedded engineering i...@metux.net -- +49-151-27565287
substvars in *.install + friends
Hi folks, is it possible to use the substvars mechanism for the *.install and similar files, just like w/ the control file? For multi-version installations, I'm keeping the whole package in a prefix w/ the version number (see my other mail - nodejs). I don't want to have to change lots of files with each version number. thx --mtx
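Not substvars proper, but one workaround I'm aware of: debhelper .install files can be made executable filters via dh-exec, which does variable expansion at build time. A sketch (package and library names are placeholders; requires a build-dependency on dh-exec):

```
#!/usr/bin/dh-exec
# debian/mypkg.install - marked executable, so dh_install runs it
# through dh-exec; variables like ${DEB_HOST_MULTIARCH} are expanded
# from the build environment.
usr/lib/${DEB_HOST_MULTIARCH}/libfoo.so.*
```

Whether a version number can be injected the same way depends on exporting it into the build environment from debian/rules, which would keep the version in one place.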
Re: Packaging nodejs-7.9
On 04.05.2017 09:26, Jérémy Lal wrote: > At the moment, in debian, /usr/lib/nodejs is there to store all node > modules installed from debian packages. hmm, would that conflict w/ having certain "nodejs-$version" subdirs there w/ the actual engines (the whole tree - not yet split out into the several FHS parts)? Meanwhile I've also added some update-alternatives support (I still have to add the version into the package name). But this will conflict w/ current versions, as they directly install /usr/bin/nodejs. Can we make a minor update of 0.10.* for update-alternatives? > Are you talking about installing modules depending on their > compatibility with node engines (as found in package.json) ? Actually, I'm not sure whether that's really required. Are there any known (already packaged) modules that break w/ newer nodejs? If not, I guess just adding depends on newer engines where needed should be enough. --mtx
Packaging nodejs-7.9
Hi folks, I'm currently packaging nodejs-7.9 for various deb distros. I'll have to maintain some applications that use the fanciest new features, and precompiled binaries from untrusted sources (eg. nvm+friends) of course are not an option. Before I go through all of this alone - is there anybody here who has already done this? Or anything I should consider? My current plan is: * install in a similar way as the jvm (/usr/lib/nodejs/nodejs-$version) * for now I'll just directly symlink - update-alternatives support comes in a later step (or maybe someone here would like to help?) * the actual nodejs package will be named "nodejs-$version", the symlinks go in package "nodejs". The tricky part will be a safe upgrade path from the current 0.10 and npm's dependencies. What do you folks think about that? --mtx
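For the later update-alternatives step, the registration could be a maintainer-script fragment along these lines (paths and the priority value are illustrative assumptions, not a settled convention):

```shell
# hypothetical postinst snippet for the nodejs-7.9 package:
# register the versioned binary under the generic /usr/bin/nodejs name
update-alternatives --install /usr/bin/nodejs nodejs \
    /usr/lib/nodejs/nodejs-7.9/bin/node 79

# and the matching prerm would deregister it on removal:
#   update-alternatives --remove nodejs /usr/lib/nodejs/nodejs-7.9/bin/node
```

Giving each versioned package its own priority lets the highest installed version win in auto mode, while the admin can still pin one manually.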
Re: Bug#857394: Debian Policy violation -- libgegl-dev contains duplicate copy of openCL library files
On 14.04.2017 14:34, ian_br...@mail.ru wrote: > I was right -- it IS a Debian Policy violation: > > * 4.13 Convenience copies of code * I've got a similar problem while packaging a recent webkit (the latest surf needs a newer one). Their git repo is >GB (!). No idea how much I'll have to cut out here yet (still pulling) ... By the way: is there any automatic way of creating the -dfsg trees out of the upstream? (I prefer working directly w/ git repos instead of additional patching.) --mtx
Re: init system agnosticism [WAS: how to remove libsystemd0 from a live-running debian desktop system]
On 13.04.2017 11:27, Vincent Danjean wrote: > For me, the first argument explain in the first mail is not this one. > systemd is not portable on lots of system (hurd, kFreeBSD, ...), This is just one of many arguments for not making applications depend on it (and they shouldn't depend on any other init system either). Regarding service status reporting, the systemd folks indeed make a good point. There is some demand for that, and they solved the problem for their audience (unfortunately only for *their* audience) and moved on to the next topic. For a prototype that's really fine, but not for long-term maintenance across dozens of different platforms. Now stating that everybody should just implement their interfaces is like asking everybody to implement things the Windows way (NT came up with its entirely own service management system, which works quite well, as long as you're confined within the Windows world). > systemd is not interested in making its code portable, nor to stabilize > its interfaces so that other system init can easily implement them, Well, that's their choice, and I respect that. It's just not mine. I don't wanna be forced into their ways (as I wouldn't ever try to force them into mine). So I'm looking for a *generic solution* to the actual problem which the functions in libsystemd aimed to solve, so applications can just use it w/o ever having to care which init system might be installed (or whether there even is one at all). > lots of applications are now using libsystemd to get 'classical' information > (status, ...) because they do not want to have to deal with several init > system Exactly. They're just looking for some API for that stuff, not caring what it actually does under the hood. And systemd just happens to provide one. From the application developer's pov, systemd is filling a gap, and they don't even wanna care about the consequences.
So it's up to us to provide a better solution - just telling everybody how bad systemd is just isn't enough (from their perspective). > and porters of platforms not supported by systemd have a really hard > work to follow systemd developments and patch all things. Exactly. For some arbitrary application developer (who usually doesn't even know much about packaging, etc), it's hard to understand the underlying problem - they just want something they can put their code on top of (that's also the reason why all these strange proprietary platforms can even exist). So it's up to us, who know better, to give them something they can work with, and that doesn't cause all the trouble that Lennartware does. > From your mail, you seems to deny this issue ("everybody can be pleased > with systemd" and/or "this is not a general problem, just a problem > from people that dislike systemd"). For what I see, it seems a problem > also for people that like systemd but cannot use it on their plate-form > (Hurd, ...) Right, it's basically the same old "shut up and go away" attitude. Actually, many people have already gone away, and more will follow. If it goes on that way, we'll end up w/ an own OS called systemd, which is as far away from GNU/Linux as Android. Do you folks really want that, or did you just run out of better ideas? > I'm persuaded that ignoring this issue will lead to an unmaintanable > Debian distribution on platforms that do not support systemd in the > middle/long term. But, perhaps, it is what the project wants. That, in turn, would lead to Debian step by step defeating its own original goals. I'm pretty sure that it won't take long until lots of other things (beginning w/ other kernels) are dropped, just because nobody is willing to keep them compatible w/ systemd. > Enrico is proposing something else. I'm not sure if his proposal is > good and doable (ie with enough support from various parties and > manpower).
If we get out of the ideological war (including the upstreams, too), it wouldn't be such a big deal. A minimal implementation of the proposed library is quite simple and small. We'd just have to touch a bunch of applications and rewrite a few lines there - and once it works and is included in a major distro, we have good chances of convincing upstreams to take our patches in. And I'm sure the Devuan folks, who had been driven out of Debian, will help here, too. Yes, somebody needs to maintain the systemd version/branch - but the library interface will be stable (its scope is quite limited, so there won't be much desire to add anything new). So we at least have the overhead of keeping up w/ systemd minimized and centralized in one small lib. Maybe someday even the systemd folks will have a moment of insight and take that part into their own hands. --mtx
init system agnosticism [WAS: how to remove libsystemd0 from a live-running debian desktop system]
On 17.02.2015 18:49, The Wanderer wrote: Hi folks, just digging out an older thread that was still lying around in my inbox - w/ about 2 yrs of distance, I hope there was enough cool-down time for us to discuss it more objectively. > libsystemd0 is not a startup method, or an init system. It's a shared > library which permits detection of whether systemd (and the > functionality which it provides) is present. From a sw architect's pov, I've got a fundamental problem w/ that approach: we'll have lots of sw that somehow 'magically' gains additional functionality if some other sw (in that case systemd) happens to be running. The official description is: "The libsystemd0 library provides interfaces to various systemd components." But what does that mean? Well, more or less a catchall for anything that somehow wants to communicate w/ systemd. What this is actually for isn't clear at all at that point - you'll have to read the code yourself to find out. And new functionality can be added at any time, and sooner or later some application will start using it. So at least anybody who maintains a systemd-free environment (eg. platforms that don't even have it) needs to run behind them and keep up. Certainly, systemd has a lot of fancy features that many people like, but also many people dislike (even for exactly the same reasons). The current approach adds a lot of extra load on the community and causes unnecessary conflicts. So why don't we just ask what kind of functionality applications really want (and what the actual goal behind it is), and then define open interfaces that can easily be implemented anywhere? After looking at several applications, the most interesting part seems to be service status reporting. Certainly an interesting issue that deserves some standardization (across all unixoid OSes). There are lots of ways to do that under the hood - even without having to talk to some central daemon (eg. extending the classical pidfile approach to statfiles, etc).
All we still need is an init-system/service-monitor-agnostic API that can be implemented easily, w/o extra hassle. A simple reference implementation would probably just write some statfiles and/or log to syslog; others could talk to some specific service monitor. Having such an API (in its own library), we'd already have most of the problems here out of the way. Each init system / service monitor setup comes with some implementation of that API, and applications just depend on the corresponding package - everything else can easily be handled by the existing package management infrastructure. No need for recompiles (perhaps even no need to opt out in all the individual packages). The same can be done for all the other features currently used from libsystemd, step by step. Maintenance of these APIs (specification and reference implementation) should be settled in an open community (perhaps similar to freedesktop.org for the DEs), not in an individual init system / service monitor project. I really wonder why people spend so much time on init system wars, instead of thinking clearly about the actual root problem to solve. --mtx
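To make the statfile variant concrete, a minimal sketch of what such a reference implementation could boil down to (all names - sv_notify, sv_status, the statfile directory - are hypothetical; a real spec would have to nail down location and format):

```shell
#!/bin/sh
# Minimal sketch of init-agnostic status reporting via statfiles:
# a service writes its current state to a one-line file; any monitor
# can poll or inotify-watch it. No daemon, no IPC, no init dependency.
STATDIR=$(mktemp -d)   # demo only; a real spec might mandate eg. /run/svstat

sv_notify() {  # usage: sv_notify <service> <state>
    printf '%s\n' "$2" > "$STATDIR/$1"
}

sv_status() {  # usage: sv_status <service>; prints last state or "unknown"
    cat "$STATDIR/$1" 2>/dev/null || echo unknown
}

# a daemon announcing its lifecycle:
sv_notify mydaemon starting
sv_notify mydaemon ready
sv_status mydaemon    # prints "ready"
```

A systemd backend of the same API could forward to sd_notify() instead, while other init systems get the plain-file behaviour - which is exactly the "implementation per setup, applications stay agnostic" split argued for above.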
Re: What's a safe way to have extensions in chromium in Debian?
On 11.04.2017 10:22, Andrey Rahmatullin wrote: > On Tue, Apr 11, 2017 at 04:22:40AM +0200, Enrico Weigelt, metux IT consult > wrote: >>>> >>>> >>>> could anyone please give me some insight, was the security problems >>>> are here exactly ? >>> Extension auto-updating is considered "phoning home". >> >> Isn't there a way to just disable part ? > Disabling extension auto-updating is wrong from several perspectives, > including the security one. hmm, I'd actually feel better w/ manual updates (on user request) for the unpackaged ones (the packaged ones of course go via apt). --mtx -- mit freundlichen Grüßen -- Enrico, Sohn von Wilfried, a.d.F. Weigelt, metux IT consulting +49-151-27565287
Re: What's a safe way to have extensions in chromium in Debian?
On 09.04.2017 22:58, Andrey Rahmatullin wrote: > On Sat, Apr 08, 2017 at 08:28:38AM +0200, Enrico Weigelt, metux IT consult > wrote: >> >> >> could anyone please give me some insight, was the security problems >> are here exactly ? > Extension auto-updating is considered "phoning home". Isn't there a way to just disable that part? --mtx
Re: What's a safe way to have extensions in chromium in Debian?
Could anyone please give me some insight into what exactly the security problems here are? --mtx -- mit freundlichen Grüßen -- Enrico, Sohn von Wilfried, a.d.F. Weigelt, metux IT consulting +49-151-27565287
Re: dpkg packaging problems
On 02.01.2015 17:08, Martin Pitt wrote: Hi, > Yes, man dh_fixperms. Shared libraries don't need to and should not be > executable. Oh, I wasn't aware of that - just used to it, as gcc sets that flag. Is it a bug in gcc, or are there platforms where +x is required? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287 -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: https://lists.debian.org/54a6d261.9070...@gr13.net
dpkg packaging problems
Hi folks, I'm just packaging some library for various deb distros using pbuilder + git-buildpackage. Unfortunately, the .so's lose the +x flag in the package (while a usual 'make install' is okay) - it seems that some of the dh stuff drops that flag :( Maybe some of you guys have an idea? See: https://github.com/metux/fskit/tree/jessie/master https://github.com/metux/fskit/tree/trusty/master The build process is driven by: https://github.com/metux/packaging cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 25.11.2014 16:29, Philip Hands wrote: > How is it that Debian changing the default for something on some of What about the forced replacement on dist-upgrade, which at least produces lots of extra work and can easily leave systems unbootable? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Summary:Re: Bug#762194: Proposal for upgrades to jessie (lendows 1)
On 29.11.2014 19:15, Svante Signell wrote: > Since there is no interest in adding a debconf message on new installs, > I wish for a menu entry in the advanced part of the installer to be able > to install a new system with sysvinit-core or upstart! +1 -- mit freundlichen Grüßen -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 27.11.2014 11:53, Matthias Urlichs wrote: > Yes, the logind-related parte _could_ be provided elsewhere, but part of > the features logind needs is already implemented in systemd. Can you understand that this approach is exactly one of the major reasons why many people don't like the systemd faction? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 27.11.2014 11:18, Martin Steigerwald wrote: >> Desktops (not only GNOME) use a very tiny bit of systemd, interfaces >> that could be provided elsewhere. The real purpose of systemd is to >> provide a modern init system. > > I still wonder why there are provided within systemd then. Same for me. If there really is some functionality which some DEs really need, why not have an entirely separate tool for that? Anyway, I still don't understand why udev is bundled within systemd. cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 27.11.2014 00:29, Noel Torres wrote: > manpower required to maintain a distribution with more than one init > system widey installed, manpower to perform the required changes to > support multiple init systems in Jessie, centered about the most > important question: our users. Just curious: how large is the overhead for that, actually? For most packages, that IMHO should just mean still writing/updating init scripts in parallel to systemd service descriptors. I haven't had the time for a deeper analysis (systemd's specifications aren't entirely precise and complete ;-o), but maybe we could even generate them from a common primary source, at least for a large portion of the cases. But there are other cases like GNOME (and IIRC KDE), which now seem to rely on systemd. I haven't done a deeper analysis of what exactly the big deal about it is, and why we now need a new init system (or parts of it) for that. The most common argument I've heard from systemd folks is the multi-seat issue. Well, I'm maybe a bit old-fashioned - such setups aren't anything new to me (actually, I did that 20 years ago), and I wonder what all that has to do with the init system. The primary aspect here is a proper Xserver configuration. We'll always have to support various unusual setups, like multi-screen composition, multiple input devices, etc, so just having multiple Xservers on separate screens seems a rather simple sub-case. Hardcoded magic like systemd-logind does it (eg. generating its own xserver configs on the fly) sounds like a pretty bad idea to me. It might work for a large number of users, but it also limits the whole stack to those rather simple scenarios. The big question I'd ask the systemd and gnome folks is: why do these things all have to be so deeply interdependent? I would even question why each DE needs its own display manager. What's so wrong with all the other DMs?
Certain DEs (like GNOME and KDE) seem to be trying to build their own operating system - I really fail to understand why. cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 25.11.2014 18:30, Stephen Gran wrote: > Excellent. I'm sure that if they can create a deb, they can install > sysvinit, or runit, or some BSD, or whatever else they want. A default > is only a default, after all. Just curious about the term "default": can I still install a system w/o systemd ever getting onto it (eg. via some option in the installer), instead of having to replace it later? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
mass hosting + cgi [WAS: Technical committee acting in gross violation of the Debian constitution]
On 04.12.2014 22:23, Christoph Anton Mitterer wrote: > Apart from that, when you speak of "non-trivial" quantities - I'd > probably say that running gazillion websites from different entities on > one host is generally a really bad idea. No, it's not - and it's pretty cheap, if done right. Several years ago, I was working for a large ISP (probably the largest in Germany), hosting more than 1000 sites per box, several million in total (yes, most of them pretty small and low-traffic). IIRC, at that time they were using cgiexec. I just don't recall why they didn't use my muxmpm (maybe because apache upstream was too lazy to pick it up, even though it had been shipped by several large distros). A few years earlier, I had developed muxmpm for exactly that purpose: a derivative of worker/perchild, running individual sites under their own UID, spawning on demand. This approach worked not just for CGI, but also for built-in content processors like mod_php, mod_perl, etc. >>> FastCGI is just a slightly more fancy way of doing this. >> FastCGI is another thing that almost nobody can afford when hosting >> a significant number of web sites. > Why not? It adds additional complexity, especially when you're going to manage a _large_ number (several k) of users per box. In such scenarios you want to be careful about system resources like sockets, fds, etc. I'm not up to date on whether there's meanwhile an efficient solution for fully on-demand startup (and auto-cleanup) of fcgi slaves with arbitrary UIDs, and how much overhead copying between processes (compared to socket-passing) produces on modern systems (back when I wrote muxmpm, it was still quite significant). OTOH, for high-volume scenarios, apache might not be the first choice anyway. cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
gnome depending on apache [WAS: Technical committee acting in gross violation of the Debian constitution]
On 02.12.2014 06:01, Paul Wise wrote: >> gnome depends on apache ? > > gnome-user-share uses apache2 to share files on the local network via WebDAV. Is this a purely optional program, or does gnome itself depend on it? >> seriously ? > > Sharing files with other computers on the local network seems like > perfectly reasonable and useful feature to me. Okay. But WebDAV would be one of the last protocols I'd consider for that (maybe for the wide internet, but not for local networks). cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 28.11.2014 19:09, Christoph Anton Mitterer wrote: > For many things, CGI is actually the only way to run them securely, > since it's the only way to run foreign processes in a container > environment (chroots, etc.) or with user privilege separation. Not entirely true. About a decade ago, I wrote muxmpm, which ran individual sites under their own uid/gid, chroot, etc. That made things like cgiexec, php's safe_mode etc. practically obsolete. It was even shipped by several large distros, eg. suse (the original one, not novell). cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 29.11.2014 20:45, Ivan Shmakov wrote: > As for Systemd being the default (on Debian GNU/Linux, > specifically), – I guess I shouldn’t bother. GNOME is also the > default, but I cannot readily recall ever having it running on > my Debian installs. By the way: didn't GNOME originally have the intention of being cross-platform, not Linux-only? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287
Re: Technical committee acting in gross violation of the Debian constitution
On 29.11.2014 20:43, Svante Signell wrote: > The best for kFreeBSD and Hurd would be to abandoning the Debian ship. > It is sinking :( (just let the devuan people get things in order first) Well, I'll also steer my projects for getting rid of polkit in that direction. Why ? Because I've got the impression that these guys still value traditional unix concepts, like using the filesystem for simple hierarchical data structures and access control, tiny and easily composable servers and tools, etc. cu -- Enrico Weigelt, metux IT consulting +49-151-27565287 -- Archive: https://lists.debian.org/5480782f.3050...@gr13.net
Re: Technical committee acting in gross violation of the Debian constitution
On 27.11.2014 02:18, Josh Triplett wrote: > gnome Depends: gnome-core, which Depends: gnome-user-share, which > Depends: apache2-bin (or apache2.2-bin in stable, which is a > transitional package depending on apache2-bin in unstable). gnome depends on apache ? seriously ? cu -- Enrico Weigelt, metux IT consulting +49-151-27565287 -- Archive: https://lists.debian.org/547d44b3.6030...@gr13.net
Re: Technical committee acting in gross violation of the Debian constitution
On 22.11.2014 02:13, Troy Benjegerdes wrote: > Someone will find a hole in something, and there will be fire when sysadmins > have to upgrade in the middle of the night and now are running systemd > instead of what they are used to. Well, in that case, I'd say a rain of fire isn't entirely what's going to happen here ... it would be more like a rain of transphasic torpedoes ... I think the latest decision was really bad. Not because I personally don't like Lennartware, but because we should leave people the choice. A lot of people have lots of reasons why they will never let systemd onto their machines, and would even switch whole datacenters to Gentoo, LFS or BSD before accepting systemd. Most of the people I know personally (and that is quite a lot), many of them traditional *nix operators, integrators and developers from embedded to enterprise, people who maintain mission-critical systems, large datacenters, etc., give a clear and absolute NO to systemd. I can't tell how representative that is, but my gut tells me Debian will immediately lose 30..50% of its user base if systemd becomes mandatory (or, even worse, is silently injected via an upgrade). That would be disastrous, and would lead directly to a fork (in fact, the preparations for that are already under way). I think it would be very wise to have a fundamental decision that: a) individual (usual) packages do _not_ depend on a specific init system (e.g. the systemd-specific stuff has to be optional) b) we will continue to provide the existing alternatives, including for fresh installations (choosable at installation time, or via separate installer images) c) the init system will never be switched w/o an _explicit_ order by the operator d) this decision stands until explicitly revoked cu -- Enrico Weigelt, metux IT consulting +49-151-27565287 -- Archive: https://lists.debian.org/547495e0.7040...@gr13.net
incomplete de translation of maintainer guide
Hi folks, I've just seen that the German translation of the maintainer guide is quite incomplete. Perhaps I could find some time to fix it, if anybody explains to me how to do that ;-) cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120612171543.gd25...@mailgate.onlinehome-server.info
Packaging entirely via git
Hi folks, is there already a way of maintaining Debian packages entirely with git - without the orig tarball ? As more and more projects are moving to git, this IMHO would make things easier for those projects. I'm currently (re)packaging wwwoffle, and ideally I'd just like to push a signed tag (to the debianized branch) and let the machinery run ;-) cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120612171034.gc25...@mailgate.onlinehome-server.info
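git-buildpackage can come fairly close to this workflow. A minimal debian/gbp.conf sketch - the branch and tag names are assumptions about the repo layout, and the option names are those of reasonably recent gbp versions:

```ini
# sketch of a debian/gbp.conf for tarball-less package maintenance
# (branch/tag names are assumptions; adjust to the actual repository)
[DEFAULT]
debian-branch = debian
upstream-branch = upstream
upstream-tag = v%(version)s

[buildpackage]
# generate the upstream source directly from the tag named above,
# instead of expecting a pre-existing orig tarball
upstream-tree = TAG
```

With something like this in place, running `gbp buildpackage` after pushing a signed tag would be the whole release step.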
Re: wwwoffle
* Paul Wise schrieb: Hi, > Please read through some of these pages: > > http://www.debian.org/doc/manuals/maint-guide/ > http://mentors.debian.net/intro-maintainers Thanks. I'll dig into it. Meanwhile I'm almost finished with the git import and patch sorting, and have created a .deb via git-buildpackage. I uploaded it to github; maybe somebody would like to have a look at it: https://github.com/metux/wwwoffle/ cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120612164238.gb25...@mailgate.onlinehome-server.info
wwwoffle
Hi folks, I've seen that wwwoffle was dropped from Debian and Ubuntu. As I really need it, I'm willing to step in as maintainer. I'm currently in the process of importing the available releases into a git repo and adding the latest patches. I've never really contributed to Debian yet, so please let me know what should be done here. thx -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120612011246.ga25...@mailgate.onlinehome-server.info
Re: Packaging best practice when upstream git contains more directory levels than the upstream tarball?
* Axel Beckert schrieb: > Upstream tarballs are preferable because: > > * It's use is recommended in the Developer Reference recommended essentially means optional. > * It's clear how the tarball was generated -- built by upstream and > downloaded > * Distributions which build the software on package/port installation > time (like e.g. FreeBSD and Gentoo) rely a lot on the Debian > mirrors -- but only if we use the original upstream tarball.(*) FreeBSD and Gentoo have their own mirror infrastructures; I don't think they rely on Debian in any way. > Tarballs built from a git repo which includes the upstream git repo as > a remote git repo are preferable because: > > * They may include more files you possibly need for using > automake/autoconf foo or to rebuild other stuff which is prebuilt in > the official upstream tarball.(**) That's one of the reasons I don't like upstream tarballs at all if a usable VCS repo is available. I usually regenerate everything (at least the autocrap stuff) and therefore want all those files out of the way first. Can't we build packages *directly* from a git tag, without having an upstream/orig tarball at all ? cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120104012211.gd9...@mailgate.onlinehome-server.info
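Half of this is already possible today: `git archive` can derive a deterministic orig tarball from a tag, which the normal tooling then treats like an upstream tarball. A minimal sketch (the package name, tag, and file contents here are made up for illustration):

```shell
# sketch: derive an orig tarball from an upstream git tag
# (all names below are example values, not from any real package)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q pkg && cd pkg
git config user.email demo@example.org
git config user.name demo
echo 'hello' > README
git add README && git commit -qm 'initial release'
git tag v1.0
# the actual step: a reproducible tarball built straight from the tag
# (-n keeps the gzip header timestamp-free, so rebuilds are bit-identical)
git archive --format=tar --prefix=pkg-1.0/ v1.0 | gzip -n > ../pkg_1.0.orig.tar.gz
ls -l ../pkg_1.0.orig.tar.gz
```

The remaining gap is purely in the packaging machinery accepting the tag as the source of truth instead of insisting on a downloaded file.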
dokuwiki and /usr [WAS: from / to /usr/: a summary]
* Tanguy Ortolo schrieb: > Enrico Weigelt, 2011-12-31 03:55+0100: > > IMHO this is completely wrong, those files should be under > > /usr/lib/... or maybe even /usr/share/... as they're not > > dynamic data. > > Well, when people install new plugins or new themes, they get installed > on the same directory, so I decided that it was less surprising to have > packaged files that people will not touch under /var than to have user > files under /usr. Well, *I* would be *very* surprised by having packaged files under /var. So I would have changed the app to (additionally) look for plugins under /usr (in this case /usr/share/dokuwiki/plugin/) cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120104011036.gc9...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Russ Allbery schrieb: > That experience aside, we're not talking about patches here, assuming > Marco's description of the situation is correct. We're talking about a > full-blown fork and a need for a new udev upstream. Maybe a downstream-branch is enough. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120104010205.gb9...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Fernando Lemos schrieb: > Are you guys applying for maintainership for this huge delta > you want to introduce between upstream and us? Actually, yes. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20120104005611.ga9...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Adam Borowski schrieb: > On Fri, Dec 30, 2011 at 02:47:38PM +0100, Marco d'Itri wrote: > > On Dec 30, Carlos Alberto Lopez Perez wrote: > > > > > I think that stephan is right here. Every package using files in /etc > > It DOES NOT MATTER who is right, some upstreams have decided otherwise. > > At least udev, systemd and next month module-init-tools do override the > > configuration files in /usr/lib/ with the configuration files in /etc/. > > Deal with it. > > Then udev and systemd are broken. > > You're introducing a dangerous inconsistency that's going to bite people. > ACK. Sometimes upstreams do really strange things (maybe because they don't have any package management in mind) that should be fixed. If upstream doesn't do those fixes, distros have to step in. Look, the purpose of distros is to provide a consistent, robust and maintainable environment. Sometimes the distro maintainers have to bite into the bitter apple and clean up upstream's dirt. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111231025922.gd30...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Tanguy Ortolo schrieb: > Enrico Weigelt, 2011-12-30 06:21+0100: > > Which packages ship files in /var ? > > Many install directories there. And one of my packages, dokuwiki, also > installs files under /var/lib/dokuwiki/lib/{plugins,tpl}. These files > define the default set of bundled plugins and templates, and I install > them there so that the user can add other plugins or templates, which is > a very common case for DokuWiki administrators since one of the major > advantages of this wiki engine is its rich set of available plugins. IMHO this is completely wrong, those files should be under /usr/lib/... or maybe even /usr/share/... as they're not dynamic data. I'd split off package install and instance deployment (as soon as you want several dokuwiki instances on one host, that will be needed anyways). cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111231025510.gc30...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Carlos Alberto Lopez Perez schrieb: > I think that stephan is right here. Every package using files in /etc > should ship this files in the package in order to let the admin know > what package each file belongs to. Its very ugly to do a "dpkg -S > /etc/xxx" and get a no path found. > > If some package is not doing this I think is a mistake and should be fixed. ACK. And it's then the job of the package manager to handle the config files properly (e.g. not simply overwriting them on updates, but instead putting them in some proper location where the admin or an admin tool can pick them up) cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111231024734.gb30...@mailgate.onlinehome-server.info
Re: Conffiles (was: from / to /usr/: a summary)
* Tanguy Ortolo schrieb: > I think having the default configuration values written in a default > configuration file under /usr is better than having them harcoded, since > it makes it really easier to determine what these defaults are. But not > shipping the user configuration file, I do not know, that seems ugly > somehow. At least the possibility to write a configuration file should > be documented, ideally with a manpage. I have a general objection against putting (default) configs outside /etc at all. The main problem is that, on updates, defaults might silently change without being noticed, since operators are used to looking at /etc and comparing the current config with the new defaults there. Not sure how Debian handles this, but Gentoo's etc-update mechanism has proven to be very handy. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111231024024.ga30...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Josh Triplett schrieb: > Well aware of that; just trying to fill in the full picture of how the > top-level directories would look after a move from / to /usr. Also, the > FHS says nothing about the current approach of overriding files in /usr > with files in /etc, which allows packages to stop shipping configuration > files in /etc that just consist of the default settings. Actually, from an operating perspective, I'm opposed to putting default config files anywhere else. Having the initial configs in /etc makes installing and then reconfiguring a package much easier (it's already obvious where to look for the config file, and with proper comments you easily know what you might have to adapt). Not sure how Debian handles this, but Gentoo has a wonderful tool called etc-update for managing config file updates. > Again, well aware of that; just trying to fill in the full picture of > how the top-level directories would look after a move from / to /usr. > Also, nothing in the FHS states that packages shouldn't ship files in > /var. Which packages ship files in /var ? > /bin, /sbin, /lib, /lib32, /lib64, and package-managed files in /var and > /etc make these things significantly less convenient. More to the > point, all of those directories (as well as /usr) exist as top-level > directories right next to /home, /tmp, /lost+found, /media, and others > which often require different treatment. Are there any packages installing something in /home, /tmp, /lost+found or /media ? > People have consistently argued that sharing /usr makes no sense without > also sharing all the other package-managed bits that live outside /usr, > such as /bin, /sbin, /lib, /lib32, /lib64, and so on. However, > consolidating all the package-managed bits in /usr would make it > entirely sensible to share /usr as a single consistent pile of packaged > bits. I personally haven't seen an installation where /usr is actually shared between separate hosts, so I don't have a real position on that. 
But: /usr is meant for things that are not needed for a minimal boot-up, e.g. single-user mode or fundamental networking only (IP stack + sshd), so that they can be split off onto separate media, e.g. for emergency cases. For completely fresh installations there are probably better ways of providing remote recovery (e.g. large hosters have rescue boot), maybe even using containers etc. But the big problem is the uncountable existing systems which might become troublemakers with that change. We need practical and reliable migration strategies first. In the end, I'm curious whether it's really worth all of this. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111230052144.ga25...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Russell Coker schrieb: > On Wed, 21 Dec 2011, Tanguy Ortolo wrote: > > I tend to agree. At least, this is how I interpret the FHS, and it seems > > appropriate to me. Although it may not be useful in most cases, I do not > > see it as harmful. > > The harm is if it takes us extra development time because other distributions > don't support it, provide configuration options for it, or test it. Hmm, AFAIK Gentoo is still FHS compliant in this regard. Ubuntu too. No idea what SuSE does, but who really cares what these windows imitator jerks do ? ;-o > Things have changed a lot since the FSSTD first came out. True. But the need for quick and easy system maintenance remains. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111222081705.gb1...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Russell Coker schrieb: > On Wed, 21 Dec 2011, Stephan Seitz wrote: > > True, but / as emergency system is still a valid reason. That's why > > I keep / and /boot outside LVM, so that I can repair/rename/change the > > LVM system. I did this more than once. > > When was the last time you needed to do that? Wrong question. It should instead be: what's the risk that you'll need to do that, and how large are the costs of losing that option (e.g. by having to drive several hundred kilometers to sit physically in front of the box) ? > Probably the best thing we could do in this regard is to move some stuff from > /usr to /usr/share or some other tree that is more convenient for placing on > a > different filesystem. What exactly do you intend to put into /usr/share ? cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111222081024.ga1...@mailgate.onlinehome-server.info
Re: from / to /usr/: a summary
* Darren Salt schrieb: > I fully intend to continue with lilo, separate /usr and no initramfs/initrd. > I *may* decide to stop using a separate /usr should I need to replace > hardware - but probably not before then. > > I will NOT use an initramfs just to have /usr mounted early enough. Seconded. The reasons for that separation in the FHS are far more than just the historical slowness and expense of larger spindles. / should only contain the things needed to boot up into single-user mode; /usr should hold all the rest. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20111221043514.ga19...@mailgate.onlinehome-server.info
Re: Best practice for cleaning autotools-generated files?
* Neil Williams schrieb: > Just as there are packages which cannot be automatically built just > from the VCS checkout and ./autogen.sh, hence the need for tarballs > where that work has already been done. The interesting question is: why aren't those packages buildable directly from VCS ? Perhaps some concrete examples ? > > At least, for autoconf packages, which follow the rules I've written > > down here: > > > > > > http://www.metux.de/index.php/de/component/content/article/1-software-entwicklung/57-rules-for-distro-friendly-packages.html > > > > that would be the case. > > Those aren't rules, those are your preferences which you wrote > yourself. These *are* rules (more precisely: requirements), applied to my OSS-QM releases. When upstream doesn't meet them, I fix it there. > Try that with any of my upstream packages and I'll laugh at you, then > ignore you and then I'll add you to the killfile. Please stop this > one-man campaign. It's already sounding tired and repetitive. You wanna tell me to stop my projects ?! > > Actually, once I fixed packages to this, the individual distro > > builder pkg configuration is reduced to nothing more than the > > package name, list of available features and their enable/disable > > flags, and the individual target config for the package just > > tells the version and selected features. That's it. > > > > (see attached files) > > Gentoo have been trying that for quite some time but it still needs a > wide range of specialist tools to keep it working. No, they didn't really. They're essentially going the same way as virtually all other distros: manual creation of text-based patches and working around bad upstreams in the package management's build files, instead of fixing the actual source. > > That's not necessary. 
Having a project, where people like distro > > package maintainers come together (at least for a bunch of packages > > and growing it later) and maintain stabelized and distro-friendly > > downstream branches. (oh, that's what OSS-QM is meant for ;-o) > > Then I'm glad I've got nothing to do with OSS-QM and I have no > intention of modifying any of my upstream packages to meet your > personal preferences, even if you continue to dress them up as > requirements. Seems like you still didn't get the idea. As upstream, you don't need to do *anything*. That's the whole purpose of OSS-QM: leave upstream alone and do the redundant fixes (which distros are still doing alone, just for themselves, again and again) in one dedicated project that distros can then pull from. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110508223716.GM25423@nibiru.local
Re: Crypto consolidation in debian ?
* Arthur de Jong schrieb: > Although switching SSL/TLS library to something different may be a good > idea, I don't think it will fix the problem for NSS (Name Service Switch > here) modules. Having the whole SSL/TLS handling in a separate daemon would be a fine idea. Maybe even as a synthetic filesystem. The interesting question is how this behaves in high-load scenarios. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110508224757.GO25423@nibiru.local
Re: Software quality metrics in Debian?
* Gunnar Wolf schrieb: > Many authors, true, do not provide a test suite at all... So we could > have a three(?)-state definition here: > > Runs-tests: (Yes|No|NotAvailable) Maybe a 4th state: Skipped. (where it had to be disabled) -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110516092841.GA14996@nibiru.local
Re: anyone interested in adopting ICU?
* Jay Berkenbilt schrieb: > ** > > Please reply to me directly (or CC me on replies) as I am not > currently subscribed to debian-devel. > > ** > > I'm looking for someone who might like to take over the icu package. > This is ICU4C (C/C++), not to be confused with ICU4J (Java), which is a > separate package maintained by someone else. ICU is "International > Components for Unicode". If it's not urgent, I'd like to take the job (I need some time to set up a recent Debian build machinery again, as I haven't used one for almost a year, and I'm a little bit busy right now :(). ICU4C is a bit of a tricky case, as it tends to break ABI (sometimes even API) between releases, sometimes even with semantic changes (at least that has been the case in recent years). So I'd go for a full MVCc installation (IMHO better than Gentoo's slotting + revdep-rebuild approach ;-p). I'll have to pull it through my own embedded QM process and build machinery first, so we'll also get cross-compile fixups etc. as a by-product here ;-) cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110516094323.GB14996@nibiru.local
Re: Best practice for cleaning autotools-generated files?
* Tollef Fog Heen schrieb: > ]] Enrico Weigelt > > | Autoconf (w/o automake) offers no means to tell additional m4 > | include pathes (eg. in configure.ac), so that just calling > | autoreconf (w/o any additional parameters!) can do a full > | regeneration all on its own. > > What's wrong with the suggestion in > http://lists.debian.org/debian-devel/2011/05/msg00442.html ? > > If that doesn't work, that sounds like a bug in autoconf. Ah, I probably misread that posting as saying that automake is required. Thanks for the tip, I'll investigate. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110509074419.GA2467@nibiru.local
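For reference, the usual way to make a plain-autoconf tree regenerable with a bare `autoreconf` is to declare the local macro directory in configure.ac. A minimal sketch (`m4/` is an assumed location, and exact behaviour varies a bit across autoconf/automake versions):

```m4
dnl minimal configure.ac sketch: declare where local .m4 macros live,
dnl so a plain `autoreconf` can find them without extra -I flags
AC_INIT([example], [1.0])
AC_CONFIG_MACRO_DIR([m4])
AC_OUTPUT
```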
Re: Crypto consolidation in debian ?
* Arthur de Jong schrieb: > Another solution (that Joss already pointer out) is libnss-sss which has > a slightly broader scope. In the long run, IMHO, it would be best to move everything (besides reading local flat files) into its own daemon and remove the whole plugin stuff from glibc and pam. That would also solve the static linking problem. Perhaps something like Plan9's factotum ? cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110508224557.GN25423@nibiru.local
Re: Using -Werror in CFLAGS for a debian package build
* Russ Allbery schrieb: > If -Werror had not been disabled for this warning, my guess is that nearly > every package using -Wall -Werror not previously tested with 4.6 would > FTBFS. I, personally, consider all of those warnings bugs. Well, unused variables aren't problems per se, but they often give good hints about where there *might* be a bug. So, IMHO, maintainers should always enable these warnings for testing and try to fix the reported problems. cu -- Enrico Weigelt, metux IT service -- http://www.metux.de/ phone: +49 36207 519931 email: weig...@metux.de mobile: +49 151 27565287 icq: 210169427 skype: nekrad666 -- Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme -- Archive: http://lists.debian.org/20110520205351.GC14996@nibiru.local
Re: Best practice for cleaning autotools-generated files?
* Bernhard R. Link schrieb:
> Given that aclocal is part of automake that is not very surprising.
> If you want to use it without automake, you could just add a single-line
> Makefile.am with the needed data (without AM_INIT_AUTOMAKE in configure.ac
> autoreconf will not call automake), but best simply use automake.

So, essentially, autoconf-only packages should be converted to automake packages that way?

> Promoting everything to add some script (which people will always have
> to look at first because you can never be sure what it does), just
> because some projects prefer to have their own kind of build-system
> really seems kind of far-fetched.

Well, I'm doing such things in the OSS-QM project (where packages also undergo lots of other tests, mass builds on various cross targets, etc), so consumers of OSS-QM can be sure that these scripts do exactly what I described there. Of course, we cannot ever enforce that on upstreams, and we don't need to. That's why I founded the OSS-QM project: it acts as an intermediary between upstreams and consumers (distros, etc), and does those things (also including hotfixes, etc) which upstreams tend not to do.

cu
--
Enrico Weigelt, metux IT service

Archive: http://lists.debian.org/20110508204400.GL25423@nibiru.local
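The single-line Makefile.am mentioned above would presumably look like this (assuming the package keeps its macros in ./m4); aclocal reads ACLOCAL_AMFLAGS from Makefile.am when autoreconf runs it:

```makefile
# Makefile.am -- sketch of the one-liner discussed above. Without
# AM_INIT_AUTOMAKE in configure.ac, autoreconf will not run automake
# itself, but aclocal still picks up this include path.
ACLOCAL_AMFLAGS = -I m4
```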
Re: Best practice for cleaning autotools-generated files?
* Neil Williams schrieb:

(forgot attachments) ...

@package: expat
@version: 2.0.1.999

feature-enable=namespaces: on
feature-enable=tools: on

feature:namespaces
  on:  --enable-namespaces
  off: --disable-namespaces
--
feature:tools
  on:  --enable-tools
  off: --disable-tools
--

@style: class/autotools-2-features
@style: source/oss-qm-git-2

builder-supports-install-strip: off
!!postinstall-remove-la: on
Re: Best practice for cleaning autotools-generated files?
* Neil Williams schrieb:
> ./autogen.sh does not clean anything.

Clarification: with "clean state" I meant a state of the tree where all the generated files (from autotools and friends) have been reliably regenerated afresh (no leftovers from previous runs). Removing temporary files from different stages (eg. a configure or make run) is a completely different issue.

> > It *is*, as soon as this cannot run through without special
> > flags (eg. if some features have to be switched off, etc).
>
> The options are just passed on unchanged. No problem with that. Can
> still call ./configure directly. Maybe it wastes a little bit of time
> but if you're cross-building, you're using a really fast machine so
> this is hardly of concern.

That requires that you can really pass all required options *and* environment variables reliably; I've seen packages where this wasn't the case. It also requires the distro packaging system to pass the configure flags twice (to autogen.sh and to configure), yet another piece of additional complexity.

> > Adds additional complexity to add proper parameters here, for each
> > individual package. (and, of course, find out all this first).
> >
> > This way, you add unnecessary burden to all the maintainers of all
> > the distros out there (that might be interested in your package).
>
> and all the other build systems out there are suddenly compatible
> across all packages?? Dream on.

At least for autoconf packages which follow the rules I've written down here:
http://www.metux.de/index.php/de/component/content/article/1-software-entwicklung/57-rules-for-distro-friendly-packages.html
that would be the case. Actually, once I had fixed packages to follow them, the individual distro builder package configuration was reduced to nothing more than the package name and the list of available features with their enable/disable flags, and the individual target config for the package just gives the version and the selected features. That's it.
(see attached files)

> That's not technical and there is no one organisation which bridges
> all of the various upstream teams. If you want to change things, you
> must persuade each team to adopt your preferences and you have

That's not necessary. It suffices to have a project where people like distro package maintainers come together (at least for a bunch of packages, growing it later) and maintain stabilized and distro-friendly downstream branches. (Oh, that's what OSS-QM is meant for ;-o)

> > Exactly the same reason why things like AC plugs, voltages and
> > frequencies are standardized.
>
> Not true. Those things MUST work together in every permutation within a
> specific jurisdiction or people can die. Debian and autotools are
> nowhere near that level of importance.

Probably not. But why not learn from the other areas of engineering?

cu
--
Enrico Weigelt, metux IT service

Archive: http://lists.debian.org/20110508203620.GJ25423@nibiru.local
Re: Best practice for cleaning autotools-generated files?
* Simon Josefsson schrieb:
> >> The problem is that autoreconf offers NO command line options for you to
> >> pass the required -I parameters for aclocal, nor is there a way to encode
> >> that information in the one place where it could conveniently live
> >> (configure.ac) AFAIK.
> >
> > So, more precisely: autoreconf lacks an important feature I would like
> > to see.
>
> I don't think this is the case -- others have already explained how to
> achieve the goals above. If you still believe something is missing in
> autoreconf, please report it to upstream so it can be fixed.

I was just summing up what I had learned from this thread so far. Let me repeat it (correct me if I'm wrong):

Autoconf (w/o automake) offers no means to tell additional m4 include paths (eg. in configure.ac), so that just calling autoreconf (w/o any additional parameters!) can do a full regeneration all on its own.

Right?

cu
--
Enrico Weigelt, metux IT service

Archive: http://lists.debian.org/20110508194331.GI25423@nibiru.local
Re: Best practice for cleaning autotools-generated files?
> [...] patches for an
> embedded distribution, it is pointless to modify the sources
> themselves. Provide patches in your own build system (i.e. debian/) and
> let people have the assurance that your sources match everyone else's
> by keeping the original tarball unchanged and putting your changes into
> either a .diff.gz or a .debian.tar.gz.

You'll have to patch anyway, no matter whether you manually create a diff and let the build system apply it, or let it fetch the finished source tree directly from some VCS. As modern VCSes like git already provide the necessary operations for easily doing the whole SCM there, I don't see any point in keeping the whole patch management complexity on the build system and package maintenance side. In the end, both approaches want the same thing: a clean source tree which can be built automatically.

> > Obviously, you almost never have to change the input files.
> > Or need to fix/tailor autotools. For mainline distros this might
> > work well, but in the embedded world you're easily out of luck
> > with this approach.
>
> Hah! Nonsense. If so, it's a problem with your embedded build system.
> I've done several thousand embedded builds of packages across all of
> Debian and the result?
>
> 99.99% of those packages ONLY needed changes in the debian/ directory
> to be able to cross-build and mostly any changes we needed to make to
> select different ./configure options were all done via debian/rules.

Aha. How do you then, for example, repair broken autoconf inputs (things like AC_TRY_RUN, AC_CHECK_FILE, ...)? And to avoid mixing things up: are you sure you just didn't encounter certain upstream bugs because they were already fixed in standard Debian?

> > And it *IS* necessary, as soon as you have to change some of the
> > input files. In my daily projects, this happens on the majority
> > of the packages.
>
> Nope. Not true. Not even slightly. I have several years of embedded
> experience with some 800 packages in Debian which prove the opposite.

Now you really make me curious: how do you cope with things like AC_TRY_RUN or AC_CHECK_FILE without touching the source?

> > a) everybody does the required fixups and workarounds all on their own
> > b) collaborate and provide a stabilized midstream together, and so
> >    save a lot of human workforce.
>
> ... or just use the tarballs which exist in Debian where others have
> already done all that work for us.

We're running in circles. I was talking about situations where upstream (be it a tarball or a VCS tag) is buggy and somebody has to fix those bugs. Maybe in your case, sitting on top of standard Debian, other Debian folks have already fixed most of them; but in my case, having nothing to do with Debian on this front, that doesn't help in any way. (Okay, I can manually take patches from Debian, but that's a completely different issue.)

> > Why don't you just tag your releases properly?
>
> Why should I? Who defines "properly"?

A tag is just a pointer (or name) to a specific version of the tree (the whole tree, not just individual files).

> > And did you ever consider the fact that certain downstreams would
> > like to use their VCS to rebase their changes onto newer releases?
>
> I doubt you can actually assert that this is a fact vs mere opinion
> with respect to my actual upstream packages.

Well, as soon as you maintain patches/changesets on top of some upstream's package version (thus being a downstream) and are going to port them to subsequent versions, you're essentially doing a rebase operation, either manually or with the help of a VCS. And modern VCSes like git can do that quite automatically (yes, you'll sometimes still have to resolve some conflicts manually, but quite few compared to applying text-based patches). And the number of people using their VCS for rebasing is growing.

> > You're aware of the fact that you're making life harder for downstreams?
>
> I doubt it. If you want to make that assertion, prove it in relation to
> my upstream packages. You are making universal claims which are
> manifestly individual preferences.

I don't know your packages in particular, so I don't even know whether I would need to do any downstream work on them. But for the majority of packages I have to deal with, I need to maintain my own downstream branches (yes, for some I could work around that by moving all the complexity into the distro build system, but I don't want to open that pandora's box anymore, so I fix the source). And for those packages, having the source in a modern VCS (eg. git) is a big, big bonus.

> > > Indeed, in a few upstream projects, the only files which ever get
> > > tagged are the debian/ files. ;-)
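One conventional answer to the AC_TRY_RUN question above (not necessarily what either poster uses) is to pre-seed autoconf's result cache for cross builds, e.g. via a config.site file. A sketch with example cache variables; the exact names depend on the package's configure.ac:

```shell
# config.site sketch: pre-seed answers that AC_TRY_RUN / AC_CHECK_FILE
# cannot determine when cross-compiling. The variable names here are
# examples: AC_FUNC_MALLOC caches ac_cv_func_malloc_0_nonnull, and
# AC_CHECK_FILE([/dev/random]) caches ac_cv_file__dev_random.
ac_cv_func_malloc_0_nonnull=yes
ac_cv_file__dev_random=yes
```

Usage would then be something like `CONFIG_SITE=$PWD/config.site ./configure --host=arm-linux-gnueabi`, which avoids touching the source at the price of maintaining the answers in the distro build descriptors.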
Re: Best practice for cleaning autotools-generated files?
* Simon McVittie schrieb:
> As much as I wish this had been the convention, it isn't - the convention is
> that autogen.sh *does* call ./configure (often with options suitable for
> developers of the project, whereas the ./configure defaults are more suitable
> for packagers).

Actually, I don't see that there's any convention on that. Some packages do call configure, some don't, others even use different script names. It's quite unlikely that we'll ever have a real standard here, so I've decided to set my own policies, which I think are best for distros in general (not just a specific one), and to fix packages within the OSS-QM project. I've written down a few lines on my policies, JFYI:
http://www.metux.de/index.php/de/component/content/article/1-software-entwicklung/57-rules-for-distro-friendly-packages.html

> For many (most? all?) autoconf/automake projects, running "autoreconf"
> is enough; that's essentially what dh_autoreconf does.

Yes, but just most of them, not all. That still leaves a lot of extra logic for those which don't. I prefer to keep those things out of the distro's packaging system, handle them in the individual package itself, and provide a common interface, which (for autoconf'ed packages) looks like this:

#1: ./autogen.sh
#2: CC=.. LD=.. ... ./configure
#3: make
#4: make DESTDIR=... install

cu
--
Enrico Weigelt, metux IT service

Archive: http://lists.debian.org/20110508192518.GB25222@nibiru.local
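The four-step interface above can be sketched as a trivial wrapper. This is a hypothetical helper (the name `build_install` and the two-argument signature are illustrative, not part of any tool mentioned in the thread); real packages may need extra configure flags passed through:

```shell
# Sketch of the uniform four-step interface described above.
# build_install PKGDIR DESTDIR -- hypothetical helper; runs in a
# subshell so the cd does not leak into the caller.
build_install() (
  set -e
  cd "$1"                      # $1 = package source tree
  ./autogen.sh                 # 1: regenerate autotools output
  ./configure                  # 2: configure (CC/LD etc. via environment)
  make                         # 3: build
  make DESTDIR="$2" install    # 4: staged install into $2
)
```

The point of the uniformity is that a distro builder only needs this one recipe plus per-package feature flags, instead of bespoke build logic per package.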
Re: Best practice for cleaning autotools-generated files?
* Henrique de Moraes Holschuh schrieb:
> > I'm (as upstream) using several macros in their own .m4 files (eg.
> > in ./m4/, maybe even sorted into subdirs). Can autoreconf figure
> > out the required search paths all on its own?
>
> The problem is that autoreconf offers NO command line options for you to
> pass the required -I parameters for aclocal, nor is there a way to encode
> that information in the one place where it could conveniently live
> (configure.ac) AFAIK.

So, more precisely: autoreconf lacks an important feature I would like to see. Until this feature is available, we'll still need another way to cope with those situations. Sure, most packages could probably be changed to be fine with a simple autoreconf call, but I've seen several cases where it's not that easy. In the end we have two options:

a) maintain specific build rules for virtually every package in virtually
   every distro (yes, I'm not just looking at one single distro), indefinitely
b) fix it in the source (the package itself) once and for all

> I sure hope it will NEVER decide to actually search for .m4 files at
> non-standard directories on its own, that would make things much worse.
>
> Anyway, you have to work around it using something like:
>
> ACLOCAL='aclocal -I foo -I bar' autoreconf

Well, that again is a package-specific workaround which has to be maintained in the distro's build descriptors (whichever distro you're looking at). And if it changes, the distro's package maintainer has to find out early enough to (manually) adapt. That leaves too much manual work and too many chances of breakage for my taste.

cu
--
Enrico Weigelt, metux IT service

Archive: http://lists.debian.org/20110507195421.GA25222@nibiru.local