On 22/01/11 00:43, Xyne wrote:
Allan McRae wrote:

I pointed out that hard rules are not good. E.g. coreutils should (and
does) depend on glibc, as it is not guaranteed that glibc is installed at
the time when you first install coreutils (which is likely the initial
install). But there is no point putting glibc in the depends list for
(e.g.) openoffice-base, as it will be installed by that stage.

That's irrelevant to this discussion because it's a bootstrapping issue. I
don't know how cyclical dependencies and other initialization problems should
be handled, but they constitute a special case that should be detected and
dealt with separately.

Isn't this exactly the issue here? The original question was whether we should include glibc in the dependency list for a package. I pointed out a case where including glibc in the depends is critical and a case where it is a waste of time, indicating there is no one answer to such a question.


Two points to consider:
1) How much more complicated would it be to list all dependencies?

  >  readelf -d $(pacman -Qql openoffice-base) 2>/dev/null | grep NEEDED | sort | uniq | wc -l
150

That is a lot of libraries... although some will be in the same package,
so that is an upper estimate. But that is only libraries; the complete
dependency list will be longer than that.
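
For what it's worth, here is a rough sketch of how that upper estimate
could be tightened by mapping each linked soname to the package that owns
it. This is untested and assumes the libraries all resolve under /usr/lib:

  readelf -d $(pacman -Qql openoffice-base) 2>/dev/null \
      | awk '/NEEDED/ { gsub(/[][]/, "", $NF); print $NF }' | sort -u \
      | while read -r so; do
            # map each soname to its owning package; skip unresolved ones
            pacman -Qoq "/usr/lib/$so" 2>/dev/null
        done | sort -u | wc -l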

I agree that is a lot. Of course we can't reasonably expect a packager to
manually enter 150 libraries into a PKGBUILD, but are all of those direct
dependencies? Maybe this is a silly question due to my ignorance of linking,
but are any of those libraries linked via other packages? For example, if bar
provides code that links to baz, and foo builds against bar, would baz turn up
in the readelf output for foo? If the answer is yes, then baz would not be a
dep of foo, even if it shows up as linked, because the linking was established
"indirectly", i.e. bar could have used something else.
Of course, in that case, baz would be a strict runtime dependency (unless
sodeps could resolve this, but again, my understanding here is limited), but
from a graph-theory point of view, foo would only depend on bar. Such a
situation would
only require a rebuild, just as it would now (i.e. if baz were replaced by
something else).

The answer is no. "readelf -d" only lists directly linked libraries. "ldd" gives the entire link chain.
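
You can see the difference on any binary (ls is just an arbitrary example here):

  # DT_NEEDED entries: only the libraries the binary itself was linked against
  readelf -d /usr/bin/ls | grep NEEDED

  # resolved link chain: the above plus everything pulled in transitively
  ldd /usr/bin/ls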


2) Is it worth the effort? We have very few bug reports about missing
dependencies, and most (all?) of those fall into the category of missed
soname bumps or packages not built in chroots. I.e. these are because of
poor packaging and not because we make assumptions about what packages
are installed or about the dependencies of dependencies.


So I see making a change to the current approach as making things (1)
more complicated for (2) no real benefit.

The answer depends on the answer to my previous question. The current system
does indeed work, but it provides no strict guarantees. I think good practice
in general is to make something that is critical as reliable and future-proof
as possible, and I see that as a true benefit. It's like wanting to agree upon
an open specification instead of just letting everyone do it their own way and
hoping for a triumph of common sense.

Admittedly, I doubt it would be a problem in the future and I'm discussing this
idealistically.

Idealistically, I might even agree with you. I just think it is not a practical thing to do. And if we move away from just using binary libraries as examples, we can find situations where we can guarantee that a dep of a dep will never be removed.

e.g. I have some Perl software that depends on the perl-foo module. If we list all dependencies, I would need to list perl and perl-foo. But I can guarantee that perl-foo will always depend on perl, so do I really need to list perl as a dep?
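
In PKGBUILD terms (perl-foo being a made-up name, of course), the two
approaches would look like:

  # relying on perl-foo to pull in perl:
  depends=('perl-foo')

  # listing every dependency explicitly:
  depends=('perl' 'perl-foo')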

As for complication, even if there were a large number of deps to consider,
there would likely be ways to generate at least a tentative list using simple
tools. It could then be refined through feedback.
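
Even something as simple as comm(1) could provide that feedback. A sketch,
assuming the generated list and the PKGBUILD's depends array are both
sorted, one package name per line:

  # linked.txt: owners of the linked sonames; declared.txt: current depends
  comm -23 linked.txt declared.txt   # linked against but not declared
  comm -13 linked.txt declared.txt   # declared but not linked against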

Again, I think this is too idealistic. Our current dependency checking tool (namcap) has a lot of issues determining the dependencies of a package, and it has not been refined much at all...

Allan
