On Thu, Feb 14, 2008 at 10:30:58AM -0800, Philip Brown wrote:

> It seems like you are limiting your view, to OS level "databases".
> What about actual "databases", or "database-like" applications that may be 
> packaged, that may benefit from database-specific postinstall type actions?
> Or, database-USING applications.
> 
> For example; some web application that uses a database; When upgrading from 
> version 1.0.3, to version 1.0.5, it is required to run 
> "upgrade_from_old_ver.sh", otherwise, the application becomes 
> non-functional, or worse yet, corrupts the data.

It sounds like this is a case where both 1.0.3 and 1.0.5 need to be on the
system at the same time, so that upgrade_from_old_ver.sh has access to both
the old bits and the new bits, yes?

That's going to require special care on the part of the packager to make
sure that both versions can co-exist, and likely special instructions to
make sure that the user makes a conscious decision to move from the old
version to the new, runs the upgrade script in between, and never moves
back to the old version.

I don't think that kind of operation is appropriate for automation in a
packaging script, even with the current tools.

Ideally, 1.0.5 would be able to read the data from a 1.0.3 installation,
optimize the data for its own use, and continue from there, but that's not
always feasible.  When it is feasible, but upgrading from 1.0.3 directly
to, say, 2.1 is impossible (because 2.1 drops support for reading 1.x
data, while 2.0 had such support), then we have the notion of a critical
version, through which one must upgrade.
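To make the critical-version idea concrete, here's a rough sketch of the
kind of guard an upgrade driver could apply.  The version numbers and the
check_upgrade_path function are made up for illustration; this isn't an
existing tool:

```shell
#!/bin/sh
# Hypothetical sketch of a critical-version check: 2.0 is the last
# release that can read 1.x data, so any 1.x installation must pass
# through 2.0 before moving on to 2.1 or later.
CRITICAL=2.0

major() { echo "$1" | cut -d. -f1; }

check_upgrade_path() {
    installed=$1
    target=$2
    # Direct upgrades are fine when no major-version boundary is
    # crossed, or when the target is the critical version itself.
    if [ "$(major "$installed")" = "$(major "$target")" ] ||
       [ "$target" = "$CRITICAL" ]; then
        echo "ok"
    elif [ "$(major "$installed")" -lt "$(major "$CRITICAL")" ]; then
        echo "upgrade to $CRITICAL first"
    else
        echo "ok"
    fi
}

check_upgrade_path 1.0.3 2.1   # old data, skipping the critical version
check_upgrade_path 1.0.3 2.0   # upgrading through the critical version
check_upgrade_path 2.0 2.1     # already past the boundary
```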

> To make sure I'm understanding what you are saying;
> 
> it seems like you are suggesting a variant on the old hack of,
> 
> "new program provides /etc/rc3.d/S99runmeonce",
> 
> except that rather than dropping a file into /etc/rc3.d, you are saying they 
> now need to put it in the new init framework, aka SMF.

That kind of rc script tended to delete itself after running once, which is
not what I'm suggesting should happen here.  The service would remain for
as long as the subsystem that uses the data it collects is on the system,
and data provided to that subsystem by various packages would stay on the
system as long as those packages remain installed.  The service would be
responsible for determining what's new, and the package would be
responsible for telling it to go look.
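As a rough sketch of that division of labor (all paths and names here are
hypothetical): packages drop fragment files into a well-known directory,
and the service's method rebuilds the combined database from whatever
fragments are present.  Re-running it is harmless, and a removed package's
fragment simply drops out on the next rebuild:

```shell
#!/bin/sh
# Sketch of the model: a rebuild that works only from what's on disk
# right now.  Running it twice gives the same result, and it never
# needs to know which package delivered which fragment.
rebuild() {
    fragdir=$1
    db=$2
    tmp=$(mktemp) || exit 1
    # Sort so the output is stable regardless of delivery order.
    for f in $(ls "$fragdir" 2>/dev/null | sort); do
        cat "$fragdir/$f" >> "$tmp"
    done
    mv "$tmp" "$db"
}

# Demo with a throwaway directory standing in for the real location
# (something like /var/lib/mydb/fragments in practice -- an assumption).
demo=$(mktemp -d)
mkdir "$demo/fragments"
printf 'entry from pkgA\n' > "$demo/fragments/pkgA"
printf 'entry from pkgB\n' > "$demo/fragments/pkgB"
rebuild "$demo/fragments" "$demo/combined.db"
rebuild "$demo/fragments" "$demo/combined.db"   # second run: same result
cat "$demo/combined.db"
```

A package's install action then reduces to dropping its fragment in place
and poking the service (with something like svcadm refresh), rather than
carrying the rebuild logic around itself.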

> I personally don't see that as any big advantage over "run a postinstall 
> script". And I actually see some disadvantages.

What are those disadvantages?

> I read your summary of advantages for this; however,  I think you are 
> oversimplifying things, and not allowing for easy software management for 
> the user in complex cases, where the people doing the software packaging are 
> actually intelligent enough to write good scripts.
> In other words; what you have stated works great for simple cases. But it 
> fails for complex cases, if you don't allow for arbitrary execution.
> (It wasn't quite clear to me whether your "put stuff in SMF" writeup 
> included a more liberal allowance for arbitrary code, or not.)

Like Bart said, the packaging operations need to be idempotent.  The SMF
scripts can be arbitrarily complex if you choose.
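By idempotent I mean that running the operation a second time leaves the
system exactly as it was after the first run.  A trivial sketch, with a
made-up config entry and a temporary file standing in for the real one:

```shell
#!/bin/sh
# A naive "echo entry >> file" in a postinstall script adds a duplicate
# line on every reinstall.  The guarded version below is idempotent:
# the second run is a no-op.  File name and entry are made up.
conf=$(mktemp)

add_entry() {
    # Only append if the exact entry is not already present.
    grep -qx "$2" "$1" || echo "$2" >> "$1"
}

add_entry "$conf" "myapp: enabled"
add_entry "$conf" "myapp: enabled"   # second run changes nothing
grep -c '^' "$conf"                  # prints 1
```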

But I'd like to hear more about these complex cases you don't think are
covered.  Can you give an example or two?

> >   - Packages delivering such data can now depend on a core package
> >     delivering the service that updates the database, rather than
> >     carrying around what's potentially a stale script.  Or they can
> >     even choose not to depend on that package, allowing someone to
> >     install the service after much of the data is already installed.
> 
> This paragraph was confusing to me. The "stale script" part, did not seem
> to follow a normal cause and effect chain.
> 
> New upgrades/data, REQUIRE NEW scripts to handle the upgrades/data.
> Seems as though it is more likely that the new package, will have the
> newer (and thus up to date) script, rather than a stale script.

Not necessarily.  See, for instance, the old version of i.rbac that's
floating around in a number of packages.  Because you can package your own
class-action scripts, people do, in case there isn't one on the system
already, but that means that if the one on the system is newer and more
correct than the one in the package, it doesn't get used.  It's the same
issue as with static linking, and I believe that the dynamic aspect of
using what's on the system is better in this case, too.

And when it comes to other install scripts, they must currently be provided
by the package, rather than allowing the script to exist on the system
already.  See, for instance, all the postinstall script variants out there
that do driver installation.  They're unable to rely on new features (like
update_drv) of the underlying OS because of the multitude of execution
contexts they need to work in.  If all such packages are part of the core
OS, then we can kind of make it work, with a lot of effort, but for all the
unbundled drivers delivered by Blastwave or any other provider, it's
impossible (or at least not worthwhile), because you're targeting multiple
OS releases.  It also means that the OS provider is less flexible in making
changes to how these things work, since there's a lot of grotty knowledge
embedded in these scripts.
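To illustrate the dynamic alternative: a package that feels it must bundle
a copy of i.rbac for old systems could at least prefer the copy already on
the system, which the OS keeps current, and fall back to the bundled one
only when the system lacks it.  A sketch with hypothetical paths:

```shell
#!/bin/sh
# Prefer the system's class-action script over the bundled copy --
# the "dynamic linking" approach.  Paths are hypothetical stand-ins.
pick_script() {
    system_copy=$1
    bundled_copy=$2
    if [ -x "$system_copy" ]; then
        echo "$system_copy"
    else
        echo "$bundled_copy"
    fi
}

demo=$(mktemp -d)
touch "$demo/i.rbac.bundled"
pick_script "$demo/i.rbac.system" "$demo/i.rbac.bundled"  # no system copy yet
printf '#!/bin/sh\n' > "$demo/i.rbac.system"
chmod +x "$demo/i.rbac.system"
pick_script "$demo/i.rbac.system" "$demo/i.rbac.bundled"  # system copy wins
```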

You're right that the Oracle 15.0 package would have a fresh script to
upgrade all your Oracle 14.x databases correctly, but like I said above, I
think that's the wrong way of tackling that particular kind of problem.

Danek
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
