Warning: another long post...

What we are looking for, I think, is "software virtualization", and it
just is not there yet.
It's about giving each participant the illusion of having the entire
thing to themselves, while under the covers you take advantage of the
architecture to use fewer resources than you would if you duplicated
the entire thing N times.

The virtualization is never complete; it has dents and cracks in
places where the participants do not notice or do not care. For
example, you can virtualize the CPU, but it means that the same
sequence of instructions takes longer on one run than on another. That
is good enough for most situations, but not if you're trying to do
timing by counting instructions (so we do not use our virtual CPU to
make the speaker beep) or when you try to account for used CPU time by
measuring the unused CPU time.

What Rick describes is an effort to implement some aspects of software
virtualization by using available technology and by drastically
simplifying and restricting the function provided. Some of that will
be good enough for specific applications.
It's no surprise that Rick builds on techniques we know from CMS,
because CMS gets close to this. There's the separation between private
components and things that can be shared, which allows the shared
components to be upgraded without touching the private part. An
important function of having your own software (the ability to modify
it) is provided through defined exit points (the various PROFILEs) and
the hierarchy in the software stack (e.g. loading nucleus extensions,
invoking an exec instead of the command, etc.).

In order to do this virtualization, we need to define the function
that must be virtualized. When that definition is high level, it is
easier to maintain the illusion. Right now you upgrade software by
placing objects on disk with the same primitives that applications use
to manage their private data. That could be implemented by something
like unionfs, but then you lose the ability to also upgrade things
under the covers.
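
Just to illustrate the union idea: a private (writable) layer sits in
front of one or more shared layers, and a lookup falls through them in
order. A minimal Python sketch, with made-up layer paths:

    # Sketch of the layered lookup that unionfs provides: try the private
    # layer first, then fall back to the shared layers.  The paths are
    # purely illustrative.
    import os

    LAYERS = ["/private/bin", "/shared/bin"]   # highest priority first

    def resolve(name):
        """Return the path a union view of LAYERS would present."""
        for layer in LAYERS:
            candidate = os.path.join(layer, name)
            if os.path.exists(candidate):
                return candidate
        return None

    print(resolve("ls"))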

So we need to virtualize on a higher level, say the rpm command. With
virtualized software, a specific rpm command upgrades the package, and
after you have done that you execute the new package. But it does not
necessarily mean that if you did not use the rpm command you will run
the old level forever (not a serious dent, because in real life a
co-worker could have done the upgrade without you knowing it). Clearly
RPM, with its single database, would need to change. It may be as
simple as implementing the database as snippets located in the
directory with the software components.
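
To make the snippet idea a bit more concrete, here is a rough Python
sketch. The pkginfo file name and the directory layout are invented
for illustration; the point is only that the inventory is computed by
walking the component directories instead of kept in one monolithic
database:

    # Rough sketch of an rpm-like inventory built from per-directory
    # snippets instead of one central database.  The "pkginfo" file name
    # and /opt/packages layout are assumptions, not an existing format.
    import os

    SOFTWARE_ROOT = "/opt/packages"

    def inventory():
        """Collect name/version pairs from snippet files next to the code."""
        packages = {}
        for entry in os.listdir(SOFTWARE_ROOT):
            snippet = os.path.join(SOFTWARE_ROOT, entry, "pkginfo")
            if not os.path.isfile(snippet):
                continue
            with open(snippet) as f:
                fields = dict(line.strip().split("=", 1)
                              for line in f if "=" in line)
            packages[fields.get("name", entry)] = fields.get("version", "?")
        return packages

    if __name__ == "__main__":
        for name, version in sorted(inventory().items()):
            print(name, version)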

What I have been thinking about is the combination of autofs, LDAP and
XIP. The LDAP database would be your software inventory (rpm database)
that can be changed locally and centrally. When you reach for delicate
parts of the file system, autofs would look into LDAP, find which
version you have, and mount the right files in the place where you
went to look. With XIP in the picture, you can make it attach the
correct DCSS (if available) and share the code rather than load it
yourself. It would not be realistic to have autofs treat each binary
separately, but I think unionfs would help because it provides the
illusion of a single directory without physically relocating the
objects.
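
To sketch what the autofs part could look like: autofs supports
program maps, where the automounter runs an executable with the lookup
key as its argument and expects a map entry on stdout. Something like
the Python below could sit in between; the LDAP server, base DN,
attribute name and the on-disk layout are all invented for
illustration, and ldap3 is just one way to talk to the directory:

    #!/usr/bin/env python
    # Sketch of an autofs program map: autofs calls this script with the
    # key (e.g. a package name) as argv[1] and expects a map entry on
    # stdout.  Server, base DN, attribute and paths are assumptions.
    import sys
    from ldap3 import Server, Connection

    key = sys.argv[1]
    conn = Connection(Server("ldap.example.com"), auto_bind=True)
    conn.search("ou=software,dc=example,dc=com",
                "(cn=%s)" % key, attributes=["installedVersion"])
    if not conn.entries:
        sys.exit(1)                      # unknown package: no map entry
    version = conn.entries[0].installedVersion.value
    # Bind-mount the versioned copy onto the place the user looked at.
    print("-fstype=bind :/srv/software/%s-%s" % (key, version))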

We will get there. But not today.
--
Rob
