On 27/06/2016 15:55, Bernhard Rosenkraenzer wrote:
Hi,

On 2016-06-26 23:17, Maik Wagner wrote:
I noticed a couple of discussions on German Linux news sites such
as heise.de or pro-linux.de suggesting that package management is
changing. Instead of the rpm/yum or dpkg/apt-get discussions there
now seem to be some new contenders: Snappy, Flatpak, AppImage etc.

From where I stand, those things are a major step backward.
Essentially they come down to a Windows-ism: "bundle every library with every application", as opposed to "have one system-wide copy of every library and make everything use it".

The idea behind that is great from the perspective of an ISV (particularly a non-free one): you always ship the exact version of a library you've tested against, and you can patch every library or dependent service to death, regardless of what the user is running on.

But from the perspective of a system maintainer it is a horrible idea: a security bug in glibc means replacing every single package, and you keep getting bug reports about something that was fixed in a commonly used shared library ages ago because some applications still bundle the old version, ...

From the perspective of a malware developer, it's a great idea: you get to hide malware/spyware/... in any bundled library instead of having to deal with the fact that distro developers look at libraries and patch out malicious behavior if any. As long as you can get one package developer to use your copy of the library, your stuff goes in.

From an end user perspective, you get more bugs, more bloat and far worse memory efficiency.
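The maintenance-cost point above (one glibc fix vs. rebuilding every bundle) can be put in toy numbers. This is a minimal sketch with invented figures, not real distro data:

```python
# Toy model of the shared-vs-bundled update cost.
# All counts are illustrative assumptions, not measurements.
apps = 2000          # hypothetical number of applications in a distro
bundle_rate = 1.0    # fraction of apps bundling their own copy of the lib

# Shared-library model: one rebuilt library package fixes everyone.
shared_rebuilds = 1

# Bundled model: every bundling application must be rebuilt and reshipped.
bundled_rebuilds = int(apps * bundle_rate)

print(shared_rebuilds, bundled_rebuilds)  # → 1 2000
```

The gap is linear in the number of bundling applications, which is why a single CVE in a widely bundled library is so expensive under the bundled model.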

It probably can't hurt to allow installing those new package formats so people can use whatever non-free stuff gets released there - but I think adopting them beyond the point of "compatibility with other people's packages" would be a really bad idea.

ttyl
bero


You are right, but you are emphasizing only the dark side. It's true that this "snap" model has a higher memory footprint, and that in case of a security problem you need to upgrade every single component, but other ecosystems have followed and pursued this model: think of the .APK format on Android, or Windows portable apps. So where is the problem? Lack of disk space? Bandwidth?

Indeed, it is the current distro paradigm that is becoming weak and obsolete; it is not a matter of free software vs. proprietary software. That building/compiling paradigm has been unchanged for almost 15-20 years. Every library/tool/whatever out there contributes a piece of the distro, which becomes bigger and bigger in its total number of components. But every shared library or tool shares both the good and the bad stuff. So for every new library that becomes part of the distro (security aside) you need to check that it is the right version, that it is compatible with a given program, that it compiles with all the rest of the tools, and so on, in a never-ending cycle of "quality testing": testing/packaging/upgrading/patching/compiling. Think of a particular version of Python that breaks everything, or a new Perl, or a new compiler with which applications no longer compile, etc.
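The version-compatibility cycle described above can be illustrated with a toy solver. All package names and version ranges here are invented; the point is only that under the shared model every application's requirement must be satisfied by one single installed version:

```python
# Toy illustration of the shared-library constraint problem:
# every app's version requirement on a shared lib must be met
# by ONE installed version. Names and ranges are invented.

def pick_shared_version(requirements, available):
    """Return the newest version satisfying every (lo, hi) range, or None."""
    for v in sorted(available, reverse=True):
        if all(lo <= v < hi for lo, hi in requirements.values()):
            return v
    return None

available = [1, 2, 3]
reqs = {"app-a": (2, 4),   # needs lib >= 2, < 4
        "app-b": (1, 3)}   # needs lib >= 1, < 3
print(pick_shared_version(reqs, available))  # → 2 (one version fits both)

reqs["app-c"] = (3, 4)     # needs lib >= 3: now no single version fits
print(pick_shared_version(reqs, available))  # → None
```

Under the bundled model each application simply ships the version it wants, which is exactly the trade-off the thread is debating: no shared constraint to solve, at the cost of duplication.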

But as the total number of applications, tools and libraries grows, the amount of "quality testing" needed to ship something acceptable also grows exponentially (or, to be precise, combinatorially). In an ideal world this would be irrelevant; in the real world it matters a lot, because you need far more manpower to resolve conflicts, yet packaging manpower does not grow exponentially (or combinatorially) but linearly. As a side effect: a) tons of application upgrades are released upstream far faster than a distro packager could ever upgrade or package them in his lifetime (in other words, "the world is moving faster"); b) average quality decreases: a single application crashes because a library was upgraded somewhere in the distro, or a new compiler landed, or some combination of library and application was poorly tested, or not tested at all, and the breakage propagates all the way up to your application - and maybe that's the only application you need.

This is very important: for professional use of a single OSS application (think of professional use of LibreOffice, or a video editor), features and the availability of the latest version come first. It is not acceptable, for instance, to have a system where you can't use LibreOffice, or where you click some button and get cells blocked because of some libinput change. Nor, if you use a video editor professionally, is it acceptable that a codec you were using in production is suddenly unsupported because someone was unsure about a license and decided to disable it everywhere. That is part of what caused the slow adoption of free distro applications in the commercial world. On the server side it is the opposite: you need safety first and features later.
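The growth argument above can be made concrete with a toy calculation. Assuming, as a simplification, that every pair of components can potentially interact, the number of pairwise combinations to test grows quadratically while manpower grows linearly:

```python
from math import comb

# Toy model: if every pair of components can interact, pairwise test
# combinations grow as C(n, 2) ~ n^2/2, while packaging manpower
# (here naively modeled as proportional to n) grows linearly.
for n in (100, 1000, 10000):       # component counts, purely illustrative
    pairs = comb(n, 2)             # potential pairwise interactions
    print(f"{n:>6} components -> {pairs:>12,} pairwise combinations")
```

Even this simplification (real interactions are not limited to pairs) shows the gap: going from 100 to 10,000 components multiplies the packagers by 100 but the combinations by roughly 10,000.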

ISVs already provide their software their own way (usually outside distro packaging), and this system certainly won't change much for them, considering that they already ship all the libraries they need (everything but libc). And where the license permits, they can compile everything statically; where it doesn't, they won't.

Providing all the tools and libraries needed by a single package, or even static linking, has always been possible. It was never widely accepted as a "building paradigm" because it requires much more hand fine-tuning, and because the shared model is more convenient from a building point of view. But IMHO, if a standardized tool could help packagers and make it possible to build applications that don't propagate breakage to the whole distro, then it would be welcome and should be looked into.

Bye
Giuseppe


_______________________________________________
OM-Cooker mailing list
[email protected]
http://ml.openmandriva.org/mailman/listinfo/om-cooker_ml.openmandriva.org
