Hi Luca,

On Sun, May 07, 2023 at 12:51:21PM +0100, Luca Boccassi wrote:
> The local/external aspect is already covered in Ansgar's reply and
> subthread.
I hope that we can at least agree that we don't have consensus on this
view. And the more I think about it, the more it becomes clear to me
that this non-consensus is part of the larger disagreement we have
about this whole transition. Do you see any way towards getting to
common ground here?

> Sure, but adding changes that are (seemingly) unnecessary for a large
> percentage of affected packages also brings uncertainty. Every
> software has bugs, thus it follows that injecting more software in
> the way of a package being installed will likely also inject bugs.
> Which doesn't mean we shouldn't consider it, however, it should be
> weighted appropriately.

Let me put this into perspective. In this scenario, we will have a few
packages with versioned Pre-Depends on usr-is-merged. The seemingly
unnecessary change here is adding more Pre-Depends of the same kind to
many more packages. It seems very likely to me that one of the few
Pre-Depends will cause usr-is-merged to be upgraded early, and thus
those possibly unnecessary Pre-Depends will be harmless. Do you
actually have some scenario in mind that would warrant judging this as
risky beyond suspicion? (Which is not to say that there is no risk, as
the whole affair bears quite some risk.)

> Packages that need special handling will need special handling for
> backporting too. This is nothing new, there was never a project-wide
> guarantee that a package uploaded to testing can apply 1:1 to
> backports, it is common enough to require changes/reverts/adjustments,
> and if it's fine to require that in other cases, it's fine for this
> case too.

It seems that you missed my argument, and it likely wasn't spelled out
explicitly enough, so let me retry. Yes, you may need to adapt packages
that are being backported. We don't disagree about that (and hope
people get it right, which they won't, but so be it). The really bad
thing here is that a backports upload may require changes to the
package in unstable!
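For concreteness, a versioned Pre-Depends of the kind mentioned above
would appear in a package's debian/control stanza roughly like this
(the package name "foo" and the version are purely illustrative):

```
Package: foo
Architecture: any
Pre-Depends: usr-is-merged (>= 35)
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example package that needs merged /usr at unpack time
```

Since dpkg requires a Pre-Depends to be satisfied and configured before
the depending package is unpacked, a handful of such declarations is
enough to pull usr-is-merged forward early in an upgrade.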
Say we packaged foo version 1 in stable and it puts everything in /bin.
Then we update foo to version 2 in unstable and foo gains a new
/bin/bar. Due to the debhelper addon, this is actually shipped as
/usr/bin/bar. Great. Then we backport foo version 2 to stable. Given
that the stable debhelper does not perform the move, it'll be /bin/bar.
Then we notice that foo is not laid out nicely, so we split a bar
package from it in version 3 and move /usr/bin/bar into bar.

Now a user may install stable, install foo version 1, install the foo
version 2 backport, and then upgrade to nextstable. In that upgrade,
bar version 3 may be unpacked before foo version 3, and as a result
/usr/bin/bar goes missing when the backported foo version 2 gets
upgraded to the regular foo version 3: removing the old foo deletes
/bin/bar, which on a merged-/usr system is the very same file as the
freshly unpacked /usr/bin/bar.

So when we backport a package, the package in unstable may need to be
modified to avoid such unpack file loss scenarios. In a simple case, we
may be able to just add Conflicts, but the takeaway is that backporting
a package may now break upgrades to nextstable in a way that requires
fixes in nextstable to accommodate such upgrades.

> If the majority of packages are simply converted, with no manual
> handling and no diversion, then it should be simple to handle: the
> debhelper in stable will not perform the conversion by definition as
> the logic won't be present, and any dh upload to backports will have
> such logic disabled, so that other packages that get uploaded to
> backports and built with either the stable or the backports debhelper
> won't have any change performed on them.

As much as I'd like to trust you on things actually being simple,
we've seen over and over again that the simple approaches have
non-trivial flaws. If you were to highlight resulting problems (and
propose solutions), that would be more convincing to me than
continuously labeling it simple.

> Or to put it in another way: I think our defaults should prioritize
> the Debian native use case.
> Given we ship our loader in /usr/lib/ld*
> now, it makes sense to me that the default in GCC is to point to
> /usr/lib/ld*. Callers can override that as needed for
> third-party/external/foreign use cases.

I guess you'll have a hard time convincing the toolchain maintainers of
this change, but my other point was that this is unnecessary when we
can use patchelf after the fact.

> > How about the long-term vision of this? Elsewhere you indicated
> > that you'd like the aliasing symlinks to not be shipped by any
> > data.tar. Does that imply that we'd keep patching the interpreter
> > and using /usr/bin/sh forever in the essential set? If adding the
> > links to base-files, it would be of temporary nature only.
> >
> > If adding the symlinks to base-files, how about /lib64? Would we
> > ship it for all architectures or just for those that need it (e.g.
> > amd64, loong64, mips64el, ppc64, ppc64el)?
> > https://wiki.debian.org/ArchitectureSpecificsMemo has a list of
> > dynamic loaders. We also need /libx32 for x32 at least. If making
> > this architecture-dependent, would base-files become Multi-Arch:
> > same?
> > ...
>
> I think we should leave the long term vision for another day, and
> focus on your requirements for the essential set unpacking right now.

Knowing the target state of a transition seems fairly fundamental to
implementing it, and base-files is part of the essential set. To me, it
is a significant difference whether we temporarily or permanently
modify the ELF interpreter in the essential set. For these reasons, I
do think the answers to these questions matter at this time. As long as
we do not have answers here, we must not move ld.so nor /bin/sh,
regardless of whether we patch dpkg or not.

Helmut