All,

There are a number of questions concerning the scope of this proposal.
Rich McAllister and I have tried to address them in the outline of
issues below, which I hope covers everything discussed thus far.  We
hope this clears up the rationale for the current scope, and that it
can frame discussion of modifications so that the proposal can move
forward and I can generate a new version incorporating the results of
the discussion.

I will try to be more prompt in responding as well, although, as
many of you know, US-based Sun employees will be on break next week.

-jhf-

------

* What use cases are we trying to address?

** Developer install

    A product like Java Studio Creator provides a way for developers to create
    applications that run in web containers like the Sun Java System
    Application Server.  To that end, Creator bundles a copy of the
    Application Server, so that after installing Creator the developer
    can immediately create an application and run it.

    Many application developers in big companies just have a normal UNIX
    account, no root access, and have a lot of trouble getting any attention
    from the harried system administrators who do have root access.  This
    means that if root access is required to install things like Creator (and
    thus the Application Server) then Creator just won't get tried, since
    alternatives like Eclipse and JBoss will install without root access.

** Application Administrator

    In large data centers, the responsibility of administering application
    "stacks" may be divided from the responsibility of administering the
    underlying system.  This means that the administrators who install,
    upgrade, and migrate the applications don't have full root access.
    While it's true that there are a few operations needed for complete
    deployment of a server, like adding and enabling an SMF service (or
    manipulating /etc/init.d and /etc/rc?.d for older Solaris
    releases), those operations aren't always needed, and when they are,
    they are generally easy to accomplish under typical data center
    change control procedures.  Getting an authorized user to run a few
    specified commands at a specific scheduled time is much more likely
    than getting an authorized user to run through an often lengthy and
    somewhat trial-and-error procedure like the Java ES component
    installations.
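
    For concreteness, the privileged steps for deploying a typical
    server amount to just a couple of commands (a sketch only; the
    manifest path and service name below are hypothetical):

        # Run by an authorized administrator at the scheduled time:
        svccfg import /opt/appserver/lib/svc/appserver.xml
        svcadm enable svc:/application/appserver:default

        # Or, on pre-SMF Solaris releases, install a start script
        # and a run-level link instead:
        cp appserver.init /etc/init.d/appserver
        ln /etc/init.d/appserver /etc/rc3.d/S90appserver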

* What use cases are we not addressing, and why?

   It's clear that there are lots of interesting problems in the
   packaging and application distribution area that could be
   tackled. We're not tackling them now, but we don't want to make it
   harder for others to make progress later.  Some of the more
   interesting problems we're not trying to solve:

** Self service software store

    This is the idea of a network-accessible software collection where
    developers and packagers can deposit packages and users can obtain
    and easily install the packages.  Current efforts like the Debian
    distribution, the very similar Ubuntu "universe" and "multiverse"
    distributions, and the Blastwave collection of software for
    Solaris are part of the answer, but they are much too hard for
    packagers to use, and all restrict installation to users with root
    privilege.  Any service like this needs the basic
    inventory management and dependency checking functions provided by
    a packaging system, but needs to add more (like the additional
    functions that pkg-get/apt-get provide over pkgadd/dpkg.)  I know of
    very little work so far to make things easier for packagers; working
    on this area is part of the strategy of the Purple Haze project but
    it's something we haven't gotten to yet.
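
    To make the gap concrete: the lower-level tools install a package
    you already have in hand, while the higher-level tools also locate
    it, and its dependencies, in a network repository (the package
    names below are just illustrative):

        # Lower level: install a local package file; dependencies
        # are checked, but not fetched for you.
        pkgadd -d ./CSWfoo.pkg           # Solaris
        dpkg -i foo_1.0_i386.deb         # Debian

        # Higher level: fetch the package and its dependencies
        # from a network repository, then install them.
        pkg-get -i foo                   # Blastwave
        apt-get install foo              # Debian/Ubuntu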

** User's ad hoc perfectly general software collection

    For system components and applications that install in the
    traditional "system" directories like /usr and /opt, there's a
    long-standing set of guidelines for developers to follow in laying
    down the installed objects (Solaris filesystem(5) man page.)  Some
    details, like when to create a directory under /usr/lib and when
    not to, are necessarily fuzzy, but in general it's pretty clear
    where things should go.  There isn't any similar set of guidelines
    for software installed by regular users in their home
    directories.  Since most applications of any significant size that
    can be installed by regular users are distributed as tarballs,
    application developers generally arrange everything below an
    "application root" in whatever way they find pleasing, and the user
    has to either continually add fairly random things to $PATH or
    spend time writing shell scripts and creating GUI launcher objects.
    It would clearly be better if there were guidelines for a layout
    for user-installed software, and such guidelines should explain how
    users can have the same home directory on different architectures
    and operating systems.
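
    As a sketch of what such guidelines might say (the layout below is
    invented for illustration, not an existing convention), a
    per-architecture tree would let one home directory serve several
    platforms:

        # Hypothetical layout: shared files stored once, binaries
        # stored per platform:
        #   $HOME/sw/share/...          architecture-independent
        #   $HOME/sw/sunos-sparc/bin    per-platform binaries
        #   $HOME/sw/sunos-i386/bin
        #   $HOME/sw/linux-i386/bin
        #
        # A login script then picks the right bin directory:
        ARCH=`uname -s | tr '[A-Z]' '[a-z]'`-`uname -p`
        PATH=$HOME/sw/$ARCH/bin:$PATH; export PATH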

    Since the applications we're most interested in are middleware,
    which would be installed in home directories only for application
    developer/tool use, we're not addressing this.  Java ES is working
    on how to arrange things in a "middleware stack".

* Can't we already do what is needed?

   It's never a bad idea to question whether a proposed new feature is
   really needed.

** Why not just use tarballs?

    As mentioned above, the way application developers deal with our
    use cases now is to create tarballs (or functional equivalents like
    cpio archives.)  This clearly works, at least for initial software
    installation. Where it is lacking is in the inventory management
    and dependency checking provided by a packaging system. If you
    don't know what you have, it's very hard to tell if you need
    updates, or to automatically apply them if you do.  One of the main
    reasons we want to use the Solaris packaging system in particular
    is that the Solaris patch tools and the higher-level tools (smpatch,
    Sun Update Manager) can then be extended straightforwardly to maintain
    "middleware stacks" as well as operating system instances.

    It would clearly be possible to layer some extra versioning
    information into tarballs by convention, but that seems like a
    duplication of effort when there's already a part of the system
    that does it.
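
    For instance, a convention might bury a version file in every
    tarball (the file and package names here are made up), giving us
    by hand what pkginfo(1) already provides:

        # Hypothetical convention: every tarball carries metadata
        # that each update tool must be taught to find and parse...
        tar xf appserver-8.1.tar appserver-8.1/.version

        # ...whereas the packaging system answers the same question
        # uniformly for anything it installed:
        pkginfo -l SUNWfoo | grep VERSION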

    The other main problem with the current use of tarballs is that
    middleware developers end up packaging their products twice: once
    for installation as "part of the system" by authorized
    administrators, and once for installation by regular users or
    application administrators.  This is extra work that we'd like to
    eliminate for middleware developers.

** Why not use zones?

    Zones definitely provide a way to delegate administration for part
    of a system, but they don't help in every case.  It's still
    necessary to get an authorized administrator to create the zone.
    This is particularly problematic currently, since every zone that
    uses the network needs its own IP address, and in the large data
    centers that have delegated application administration, IP address
    allocation is typically the responsibility of a separate "network"
    group, meaning that the attention of yet another overworked person
    is needed to get the install done.
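
    Concretely, creating even a minimal networked zone requires an
    authorized administrator and a pre-allocated address (the zone
    name, path, address, and interface below are examples):

        # Must be run by an authorized administrator; the address
        # has to come from the network group first.
        zonecfg -z appzone
        zonecfg:appzone> create
        zonecfg:appzone> set zonepath=/zones/appzone
        zonecfg:appzone> add net
        zonecfg:appzone:net> set address=192.168.1.50
        zonecfg:appzone:net> set physical=bge0
        zonecfg:appzone:net> end
        zonecfg:appzone> commit
        zonecfg:appzone> exit
        zoneadm -z appzone install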

    Again from the application developer productivity perspective, an
    approach which works very similarly (or identically) on multiple
    Solaris releases and other operating systems is desirable.  rpm
    either can be, or can easily be adapted to be, run by a regular
    user, and Windows MSIs already have this capability (though so far
    very few Windows applications allow installation without
    administrator privileges; this is expected to change in Vista.)

** Why not just use RBAC?

    There's already a "Software Installation" profile in Solaris which
    allows execution of the package tools.  It runs the package tools
    with a uid of 0, which means there's a simple path to escalate this
    privilege into all privileges: just create a package containing a
    setuid root program.  So no system administrator who is unwilling
    to give the application administrator full privileges will grant
    this profile either.
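
    The escalation really is that simple; a single entry in a
    malicious package's prototype(4) file is enough (the package name
    and path are of course made up):

        # Installed with uid 0 by the profile, this becomes a
        # root-owned setuid shell that anyone can run:
        f none opt/EVILpkg/bin/rootshell 4755 root sys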

    There's an interesting idea that it might be "safe" to delegate
    installation this way if the packages that were installed were
    restricted to ones cryptographically signed by "trusted"
    suppliers.  The trouble with this is that it requires the entire
    certificate distribution and administration problem to be solved
    well, and despite the obvious advantages to doing that, nobody has
    managed to do so yet.  I don't want to be held up behind one of the
    Big Problems.

* What are the risks?

   Whenever we introduce a new capability into the system, we need to
   consider whether there might be negative effects; a complete project
   must address the foreseeable ones.  Here are the ones that have been
   suggested.

** Not "everything" in the system is visible to the SA

    Up to now, everything installed on a particular operating system
    instance through the packaging system has been registered in the
    same place.  This means that a system administrator could
    reasonably trust that, when removing software, there would be a
    notification that other parts of the system were depending on it.

    It's clear that this has only ever worked in a very limited
    way.  First of all, users have always been able to install software
    outside the packaging system, with no protection whatsoever; and
    ever since the advent of NFS, it hasn't even been possible for a
    sysadmin to run "find" to locate everything.

    So it seems we're not actually making the world worse; but are we
    missing a chance to make it better?  By introducing inventory
    management for non-root installed software, we at least make it
    possible to cross-check proposed changes to a platform against the
    registered uses of it in other domains.  If there were some way to
    keep an inventory of domains, it would be quite simple to have
    checking tools check against each domain in turn (similar
    operations already exist to handle zones.)
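
    A checking tool could then be little more than a loop.  (The
    registry file below is entirely hypothetical; nothing like it
    exists today.)

        # Hypothetical registry: one registered domain root per line.
        while read domain; do
            # Run the ordinary package consistency check against
            # that domain's own package database.
            pkgchk -R "$domain" || echo "problem in $domain"
        done < /var/sadm/domains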

    The hard problem here is basically referential integrity.  Once a
    domain is registered, nothing keeps the domain creator from
    destroying the domain without updating the registry.  This means
    that a typical check would generate more noise about missing
    domains than useful information.  This is another Hard Problem that
    I don't want to get stuck behind.  (There were a lot of proposed
    and even implemented hypertext systems in the 1980s that tried to
    address a similar referential integrity / "missing link" problem;
    they all failed.  The World Wide Web, which completely punted on
    the problem -- just give 'em a 404 -- was wildly successful.)

    One way to preserve referential integrity is to control domain
    creation and registration by having both done by the same process.
    People who want to do this already can -- the N1 System
    Provisioning System is a product which does it, and many big data
    centers have developed their own provisioning systems.  But that
    approach is completely incompatible with the ad-hoc nature of our
    developer install use case.

** Should we make the nuisance less attractive?

    Many of the problematic omissions from the current proposal
    (breakage of user software, not addressing the multi-architecture
    layout issues) seem to come from the possible use of the new
    capability to maintain per-user "private" software
    repositories.  That's not really the main issue we're interested in
    solving now -- the developer install use case is close to it, but
    there the packages are embedded rather deeply in other products and
    are not visible to the installing developer at all.  What
    if we addressed this by removing the $HOME default for
    installation?  We could even introduce a "preformatting" command to
    allocate and set up a domain, so that more random use of the
    capability is discouraged until we are confident we've solved the
    layout and registration problems.
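
    Something like this, perhaps (the command name and arguments are
    invented, purely to make the idea concrete):

        # Hypothetical "preformatting" step: explicitly allocate and
        # register a domain before anything can be installed in it.
        pkgdomain create /export/home/jhf/sw

        # Only then does installing into the domain become possible:
        pkgadd -R /export/home/jhf/sw -d ./creator.pkg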

----

Bart Smaalders wrote:
> Darren J Moffat wrote:
>> Vasiliy wrote:
>>> I talked to Bart and he raised a serious concern about installation
>>> in $HOME or any shared area (he asked me to post it here).  Home
>>> directories are shared over the network and can be accessible from
>>> any architecture and any Solaris version.  So installation of
>>> software in a shared area needs special attention, and a special
>>> procedure to make it multiplatform.
>>
>> No arguments about the existence of the issue.  However, that is for
>> the creator of the software and the packages to fix.
>>
>> For example Firefox assumes a single architecture/platform and allows 
>> you to place binaries for the plugins in your home directory.  This is 
>> completely independent of where and how the firefox binaries 
>> themselves are stored.
>>
>> There is a workaround available: set an environment variable to say
>> where the plugins are.  BTW this predates even Mozilla as a project,
>> if I remember correctly.
>>
>> I've seen other software that writes out binary files and has basic 
>> endianness issues.
>>
>> None of these can be solved by the packaging software.
>>
>>> I think that Shared Software is another big project.
>>
>> In my opinion it isn't a project at all; it is just about good design
>> of the software in the first place.  SunOS has long encouraged the
>> separation of binaries from sharable data; this is what things like
>> /usr/share are all about.
>>
>> I don't see how the packaging tools can actually help with this at all.
>>
> 
> I think adding features to support user-level software install without
> making sure the appropriate engineering and documentation has been done
> to support using it correctly is not a good idea.  In particular, our
> failure to correctly describe how to build shared libraries led to all
> sorts of LD_LIBRARY_PATH hacks which still plague us...
> 
> Note that users of user-level configs will have to use either $ORIGIN
> or per-application ld.config files in order to find their shared 
> libraries; they'll need per-architecture libraries and $ARCH
> directives as well.  Add to this info on 64-bit libraries and
> HWCAP support and there's a lot more to this project than just
> enhancing pkgadd.
> 
> 
> - Bart
> 
