Peter Tribble writes:

> It was my intention that reducing the number of packages would
> make minimization easier. It should certainly reduce the dependency
> problem, and the new package structure should be along more
> meaningful boundaries. It's not *just* lumping them all together.

IMO, small packages *are* a meaningful boundary: consider e.g. the current
packaging of network drivers.  Each driver goes into its own package.
This way, I can simply pkgrm SUNWbge on my laptop and instead pkgadd
BRCMbcme (until I make bge(7D) support my Broadcom NX 5901 card, that is :-).
If you start lumping things together, this becomes impossible or at
least a lot harder.
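
For the record, a minimal sketch of that driver swap with the stock SVR4
tools; the pkgadd source directory is only a placeholder, and the exact
layout of the BRCMbcme package may of course differ:

    # remove the bundled Broadcom driver package
    pkgrm SUNWbge

    # install the vendor driver instead; -d names the directory (or
    # datastream) holding the BRCMbcme package -- path is a placeholder
    pkgadd -d /path/to/brcm BRCMbcme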

> Does splitting gnome into over 200 packages help anybody? Or
> CDE into over 20? Many other packages deliver 1 or 2 files,
> and the package overhead is significant.

It depends on the user's default view of the packages: if the default is to
show clusters only, with the ability to look at the package level if
desired, you get the best of both worlds: a high-level meaningful view by
default, but you retain the control that is sometimes necessary.
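
As a rough illustration of the two views with today's tools (assuming the
stock pkginfo output, where the first field is the package category):

    # coarse view: just the categories the installed packages fall into
    pkginfo | awk '{ print $1 }' | sort -u

    # fine-grained view: full details for a single package
    pkginfo -l SUNWbge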

> Besides, if you want to get down to individual files you can do
> that as well. For example, I use removef to get rid of /usr/ucb/cc.
> (It's a shame this isn't persistent across updates.)

It's neither persistent nor manageable except perhaps for a single file
like this.
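
For completeness, this is the removef dance that has to be repeated after
every upgrade; SUNWscpu is only an assumed owner of /usr/ucb/cc here, so
check first:

    # find the package that actually delivers the file
    pkgchk -l -p /usr/ucb/cc

    # mark the file for removal in the package database
    removef SUNWscpu /usr/ucb/cc
    # actually delete it
    rm /usr/ucb/cc
    # commit the change to the package database
    removef -f SUNWscpu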

> Is the package the correct fundamental unit of granularity?

It can be a useful unit if there are higher-level units available as
first-class citizens (and the default/primary ones) as well.

        Rainer

-----------------------------------------------------------------------------
Rainer Orth, Faculty of Technology, Bielefeld University
