Peter Tribble wrote:
> As I see it, allowing users to use pkg*/patch* can take 3 forms:
>
> 1. Non-root users can manipulate the system packages. (Can be done
> today with privileges, supposedly. It didn't work for me when I tried it.)
>
> 2. Non-root users can install packages in the conventional way but
> in a private location using the -R flag.
>
> 3. Users can set up personal software repositories in a manner that
> goes beyond the simplistic view in style 2.
>
> As I understand it, this proposal is aimed at style 3.
Yeah, but it doesn't go very far beyond style 2. The main differences
between this and just using -R are:
- the underlying OS package repository is checked for dependencies
- the package database ("/var/sadm/install/contents" and friends)
can relocate independently of the install root. (So, if
you have a package that installs "bin/foo" you can put that in
~/bin/foo without necessarily having the contents file be
~/var/sadm/install/contents.)
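To make that second point concrete (the extra -D switch and the FOOpkg
package below are made up, purely to illustrate):

    # today: the package database follows the install root
    pkgadd -R $HOME -d foo.pkg FOOpkg
    # -> files under ~/..., metadata forced into ~/var/sadm/install/contents

    # proposed: database location decoupled from the install root
    pkgadd -R $HOME -D ~/.sunw -d foo.pkg FOOpkg
    # -> files under ~/..., metadata under ~/.sunw/...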
>
> There's also a fourth possibility:
>
> 4. There is a central repository into which users can install
> software without privileges, and the user environment mechanism
> knows how to pick up software from that repository.
>
> This would be more about rules for creating, distributing,
> and accessing such software packages rather than the installation
> process.
>
I'm not sure what you mean; this seems very different indeed from the
current software distribution methods. Do you have an example of a
current system that is like what you want?
>> Summary:
>>
>> This project modifies the Solaris packaging and patching
>> utilities to allow non-root users to install and manage
>> packages and patches, separate from the system-level
>> packages, package database, and patch database.
>
> The packaging system is not renowned for being user-friendly.
> Not a view I personally subscribe to, but even I would regard
> it as requiring some level of expertise. Is it wise to inflict
> it upon all users?
Even now for administrators with root privs, anything bigger than
a package or two has a higher-level install script or program to direct
things; I'd expect that to continue. (In fact, one of the main aims is
to make it so we can have the same high-level installer for both
system-level and user-level installs; right now applications end up
having to make one that uses packages for the system install, and a
completely separate one (using tarballs, etc.) for user installs.)
>
> What I'm really after is a mechanism to - for users, not the
> system - eliminate any use of software management tools at
> all. Users shouldn't have to install applications at all,
> they should just work.
I'm having a tough time seeing how this fits with any kind of reasonable
change control procedures. For example, I don't want the compilers
changing under me the moment the compiler folks release a new
version. This seems like a classic version control/configuration
management question. If applications were all so simple that all one
needed was access to the bits, maybe something like Mercurial would
work. ClearCase (and, cough, NSE) tried to do that ages ago.
However, applications almost always need some kind of script "packing"
to integrate with the menu system, migrate/initialize preferences, etc.,
so until we get everybody to start doing that dynamically somehow it's
hard to see how to get around the classic "install" scenario.
>
>> Benefits:
>>
>> - Remove barrier to using native packaging on Solaris. Many
>> software vendors spend effort producing parallel distributions
>> not based on Solaris packages, or do not produce package-based
>> distributions at all. This requires the product group to re-invent
>> technology already available in the packaging tools.
>
> Is this actually a benefit? What I can't see is why an end-user would
> wish to use the system administration tools to maintain their own
> personal files.
It seems to me that users have the same concerns with their personal
software collections that administrators have with the system software;
they need to know what versions they have, they need to be able to apply
updates reliably, they need to be able to remove parts with at least
some assurance that they aren't breaking anything else.
A class of "user" we care about might be better described as
"application" or "middleware" administrators. In very large data
centers, one group of administrators takes care of the base OS, while
other groups take care of "middleware" like application servers and the
applications themselves -- the actual "users" of the applications of
course don't get involved in this kind of administration at all. These
large data centers don't like to give all the application admins root
access. The inventory maintenance job being done by the application
admin is very like the job done by the OS admin, so it seems reasonable
to have them use the same tools.
>
> The primary use I would have for using pkgadd as myself is to be able
> to install a piece of software distributed as a package into a temporary
> location so I can repackage it in some more suitable format (such as
> a tar file).
>> Users will install packages and patches using similar interfaces
>> as the root user. The default install locations will be relative
>> to $HOME.
>
> It isn't clear to me that using $HOME as a default necessarily makes
> sense. For a single-machine setup, it doesn't matter where it goes;
> for a multi machine setup it makes more sense to me to associate it
> with a system rather than a user.
We want the default to be something that has a very good chance of
working; for classic UNIX users $HOME's the only persistent place you
can count on being able to write into. Somebody building a system- or
network-wide software collection will have to override the default, but
they're going to be doing some pretty careful planning anyway.
> Besides, I really want to avoid
> every user installing their own copy.
The case of a user installing some software for themselves is very
important to us, since we want people to be able to download new things
and try them out without needing any special privileges.
>
> I would also not expect (and certainly wouldn't expect users to
> understand) that some files in my filespace cannot be managed
> using the same tools as others.
How is this different from managing system files now?
>> Each domain has a "Domain-Home", under which all software belonging
>> to that domain is installed. This will by default be within $HOME
>> for non-root users.
>
> A normal package might install files under /usr, /opt, or wherever -
> how are these to be mapped under a user's home directory?
A package that actually specifies absolute paths ("this file must go in
/etc/foo") generally won't be installable as non-root. A saner package
that sets basedir to /opt/PACKAGE and uses paths like bin/foo might end
up in $HOME/opt/PACKAGE/bin/foo. There are details yet to be worked out;
it might be better if the Domain-Home overrode the default basedir spec, so
the default result would be <Domain-Home>/bin/foo in this case. (I
expect higher level installers to always override the basedir anyway.)
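For reference, a relocatable package of that shape looks something like
this in standard pkginfo/prototype form (FOOpkg is a made-up example;
where its files would land in a domain is the open part):

    # pkginfo
    PKG=FOOpkg
    BASEDIR=/opt/FOOpkg

    # prototype -- relative paths, hence relocatable
    i pkginfo
    d none bin 0755 root bin
    f none bin/foo 0755 root bin

    # installed as root today:              /opt/FOOpkg/bin/foo
    # in a user domain, possibly:           $HOME/opt/FOOpkg/bin/foo
    # or, if Domain-Home wins over basedir: <Domain-Home>/bin/foo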
>
>> Each domain has a sparse package registry of only those packages
>> installed on that domain. The registry shares the format of the
>> contents(4) file and by default is installed into ~/.sunw, but is
>> overrideable with a CLI switch.
>
> So the package doesn't follow the normal filesystem hierarchy
> of a package. What about all the other stuff in /var/sadm -
> where would that go? And what if you had many software domains
> - where do the package files associated with them go?
I've been assuming that the other /var/sadm stuff would go in the same
spot; i.e. if you look in ~/.sunw after pkgadds with the defaults, you'd
see something like
~/.sunw/
    XXX/contents
    XXX/pkg/WHATever
where XXX is not yet determined. It could look just like the current
structure under /var/sadm, or some of the paths could be simplified.
The key is that *all* the metadata goes under the domain's registry path.
>
>> Inter-domain dependencies are resolved by following a per-user
>> Domain-Path variable (environment variable, or CLI override).
>> The first domain able to satisfy the dependency is used.
>
> That's install-time dependencies, I presume. What about run-time
> dependencies? How do you make sure they follow the same rules?
We don't. An application that wanted to leverage this would have to
pull the basedirs out with pkgparam, the way one does it now if one
wants to deal with packages that have been relocated away from their
default basedir. In reality very few things work that way now; I'd
expect that most applications would take a very simple model: things are
either resolved out of the base system (from the normal locations like
/usr/lib) or via a relative path.
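For the few that do, today's lookup is roughly the following (pkgparam
and its -R option exist today; whether a domain-aware pkgparam would take
the domain root the same way is one of the details to be worked out):

    # ask the package database where FOOpkg (a made-up package) landed
    BASE=`pkgparam -R $HOME FOOpkg BASEDIR`
    exec $BASE/bin/foo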
>
>> The "root" domain (the one that exists today on each Solaris
>> OS instance) is always part of everyone's Domain-Path and is
>> used to satisfy package dependencies when no other domain can.
>
> But a user may use their files on multiple machines, on which
> the root domain is different.
Right, if you install on one machine and run on another, then it might
work and it might not, same as today. The case where the install system
isn't actually suitable for execution (you want to install on the file
server but never run there) would be handled by simply turning off
dependency checking in the admin file.
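Concretely, that's a couple of admin(4) settings, something like:

    # nodeps: admin file fragment for the file-server case
    idepend=nocheck
    rdepend=nocheck

    pkgadd -a ./nodeps -R /export/install/image -d foo.pkg FOOpkg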
>
>> Restrictions:
>>
>> - Packages might need to be re-generated to indicate
>> ready-for-UBI. Products not ready for UBI must be flagged
>
> Surely it's the other way round? All packages aren't UBI
> ready unless explicitly indicated.
In general I agree, though we might be able to identify a restricted
subset of packages that could be trusted to be UBI-compatible:
- purely relocatable (no absolute paths)
- no scripts
- no setuid or other special file perms
I don't know whether there are actually any existing packages that simple.
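A rough screen of an existing package's pkgmap against those criteria
might look like this (just a sketch, not a real tool; some of the "i"
entries it prints, like copyright or depend, are harmless information
files rather than scripts):

    nawk '$2 ~ /^[fedvx]$/ && $4 ~ /^\//    { print "absolute path:", $4 }
          $2 ~ /^[fedvx]$/ && $5 ~ /^[246]/ { print "special mode: ", $4, $5 }
          $2 == "i" && $3 != "pkginfo"      { print "script/info:  ", $3 }' pkgmap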
>
>> - Users should not be able to use domains as basis of DOS
>> attack
>
> All you're proposing is that users can unpack file archives.
> I don't see how making it fractionally easier allows a user
> to do anything they couldn't otherwise do anyway.
I think James was referring to the idea that it would be dangerous to
let regular users declare dependencies on system components that would
be enforced by the system tools.
>
>> - Domains cannot exist within other domains. This restriction
>> may be lifted in future work but for now, this simplifies
>> logic and removes the need for recursion.
>
> I don't see where "within" is a useful term. What do you mean
> here?
This just means that at first we're going to have two "levels" of
searching. When trying to satisfy a dependency, we look in the "active"
domain, and in the "system" one.
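In pseudocode, the lookup amounts to (names made up):

    for domain in "$ACTIVE_DOMAIN" "$SYSTEM_ROOT_DOMAIN"; do
        dependency_satisfied_in "$domain" && break
    done

A full hierarchy would turn that two-entry list into an arbitrary
Domain-Path.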
>
> I would (if using such a system) want the ability to create
> a hierarchy of domains
What would it keep you from doing if you didn't have it? Some
deployment models would require you to turn off dependency checking, but
since the current alternative is just to use bare files anyway, it's at
least no worse.
Admittedly looping through multiple levels isn't much more code (if any)
than doing two, but we've been looking for ways to trim this project
down, not gussy it up.
>
>> - Domains might not always be "up" (i.e. $HOME inaccessible) -
>> this mustn't cause hangs
>
> This presumes that $HOME is independent of the system.
The ability to mount home directories over NFS is something we don't
want to give up.
> But if it's
> personal software then $HOME being inaccessible is going to stop
> you logging in so not being able to run a program is secondary.
The case we're worried about is having to have all the home directories
mounted just to do system maintenance (the "DOS" mentioned above.)
>
>> - Packages can specify architecture and dependencies. This becomes
>> difficult to fully validate at install time, given that the domain
>> may exist on a shared filesystem ($HOME). Therefore, dependency
>> failures and mismatched architectures are flagged as warnings.
>> pkgchk can always be used to check the architecture and
>> dependencies
>> at any time, so this is not seen as a major issue.
>
> So it's fine to install an x86 package on a sparc system?
Works right now, I believe.
>
> But generally I would see turning off validation as simply giving
> the user too much rope.
Since the competing available technology is "tar -xf" which does no
checking at all, it's hard to see this as fatal.
>
>> Interface changes:
>>
>> - package/patch utilities CLI changes for software domains
>> - package/patch creation utilities metadata for support of
>> software domains.
>
> Ouch. The pkg system is complex enough as it is, and has already
> been pushed way beyond its original design boundaries I suspect.
> Making the system more complex, and making packages (and package
> creation) more complex, doesn't seem to be good.
Really, all we're doing is making a couple of currently-hard-coded paths
variable, taking out the current check that requires uid=0, and
iterating the current dependency check over the domain search path.
That's hardly a massive amount of difference.
>
>> Related Bugs/RFEs:
>>
>> 1249015 pkgadd requires root access
>> 1165888 allow non-root users to install software using the
>> package mechanism
>
> But these are basically addressable with RBAC or allowing any user to
> use the -R flag to relocate software anywhere they have write access
> to.
No, "non-root user" really means "user with only Basic Solaris User";
when that RFE was filed 12 years ago we didn't have the current
privilege model. It's a developer capture issue -- we want a
prospective developer to be able to download packaged applications,
install them, and try them out, without having to dicker with her local
sysadmin (who may just hoot at her when she asks for root access, or
even for the Software Installation profile, since, as Casper pointed out,
that immediately escalates to full root access.)
We did think of just using -R but we found it inadequate for several
reasons:
- it overly restricts the placement of the software relative to the
package data. If I have a package that wants to create, say, ~/bin/foo,
which uses a library in ~/lib/libfoo.so.1, I'm going to have to set
"-R ~", and my package data now must live in ~/var/sadm/... (see the
example after this list)
- There are packages that assume that -R being set means that the
install is not to the running system, and therefore there's no need to
stop/restart servers, etc. While it's probably true that such
packages will need to be updated to work as UBI, it seems bad to
break them knowingly.
- One might want to set -R and the domain home at the same time, perhaps
when jumpstarting a system with an initial set of apps that are to have
their future maintenance delegated to an "application admin".
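For example (FOOpkg is a stand-in), that first point looks like:

    pkgadd -R $HOME -d foo.pkg FOOpkg
    # delivers ~/bin/foo and ~/lib/libfoo.so.1 as intended, but the
    # metadata is forced into ~/var/sadm/install/contents and
    # ~/var/sadm/pkg/FOOpkg/, right in the middle of the home directory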
> It's tricky enough to make software work reliably in an alternate
> root. (So A links against B - how does it know where to find B.)
> I think it's going to be very difficult to get everyone producing
> packages to build them in an arbitrarily relocatable fashion.
We don't need to get everybody to do that. It's pretty clear that
significant applications can be structured to work this way (anything
that can install now just by unpacking a tar ball is an example.)
Rich