On Wed, Feb 16, 2005 at 09:01:13PM -0500, Neil Joseph Schelly wrote:
> Similarly, most packages don't rely on more packages.  So another maintainer 
> responsible for another package means he or she will do what is necessary to 
> keep track of its dependencies and that will be the same number of 
> dependencies as most apps, namely just one or two.  Your example assumes that 
> all packages interfere or interact with all others and that's unnecessary 
> complexity.  Anyway, I'm not a math guy and this is a null argument here 
> anyway. 

It isn't a null argument; you're missing the point.  It isn't that the
package depends on all the other packages; clearly that's not the
case.  The point isn't even that it does or does not interfere with
some other package.  The point is that it *MAY* interfere with other
packages unexpectedly, and you have to test them all to be certain
that it doesn't.  That slows the testing process down, and it's a big
part of the reason it takes 3 years to get a stable release out.
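
(Back-of-the-envelope, purely to illustrate the scale: with N packages
in the archive, the number of possible pairwise interactions grows as
N*(N-1)/2.  Even a few thousand packages gives you millions of pairs,
and you don't know in advance which of them matter.)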

> >   Exactly my point.  testing and unstable are moving targets.  It's in
> > flux. To test something, it needs to be *unchanging*.  
[SNIP]
> Testing doesn't change significantly that fast.  And by the time stable is 
> outdated, testing is good enough that it can be safely used instead.  

My experience has been different.  I once installed testing on my
workstation at work, and nothing worked.  Granted, this situation isn't
normal, but it illustrates the point.  That hypothetical example I
gave about glibc wasn't hypothetical at all...  Though it may not have
been glibc specifically; I don't remember.  Something made my system
unusable.  I didn't have time to mess with it, so I promptly
re-installed RH...

> feel fine with Testing running in production.  

You shouldn't; and if you keep doing it and running regular updates,
I'd bet big money that eventually you'll get bitten by it.
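
For what it's worth, if you're determined to run testing anyway, APT's
pinning mechanism gives you a middle ground: default everything to
stable and pull individual packages from testing only when you ask.
A minimal sketch (the priorities are illustrative, and this assumes
both stable and testing are in your sources.list):

    # /etc/apt/preferences
    Package: *
    Pin: release a=stable
    Pin-Priority: 700

    Package: *
    Pin: release a=testing
    Pin-Priority: 650

Then "apt-get -t testing install foo" upgrades just the one package
instead of dragging your whole system along for the ride.  It doesn't
eliminate the risk, but it contains it.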

> And when Testing is unreliable, that means a new Stable has just
> been released that will be modern enough for at least a year for all
> intents and purposes... especially in a business environment where
> the latest/greatest toys aren't necessary.

Newer software may not strictly speaking be necessary, but it's often
desirable, because it's just plain better.  Faster.  Less buggy.
Nicer features that make life easier.  What have you.  

If performance is a factor, newer software usually performs better,
because the developers have had the chance to do more optimizing
(though notable exceptions abound).  Newer software has often done a
lot more than just plug up old security holes; sometimes the entire
security model has been re-designed to make it inherently better.
Sometimes, newer software just has happy bells and whistles that make
managing it a lot easier than in old versions...

> >   Right, but now I just can't type "apt-get install foo" and magically have
> > everything work.  And one will quite quickly get into the "dependency hell"
> > that people are all too quick to blame on RPM.
> I do this all the time for this or that package on my KnoppMyth install and 
> haven't run into a problem yet.  

That doesn't mean you won't; it only means you've been lucky thus far.

I have done similar things and been bitten by them.
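
For what it's worth, apt-get can at least tell you what it *would* do
before you commit to anything; I've made a habit of it ("foo" below is
just a placeholder, obviously):

    # -s/--simulate: print the install/upgrade/remove actions
    # apt-get would take, without changing anything.
    apt-get -s install foo

It won't catch a package that installs cleanly and then misbehaves,
but it does catch the surprise removals before they happen.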

> >   Cool.  Wanna tell me how I use it?  I've got Debian 3.0r2 images on my
> > hard disk.  (I see 3.0r4 is out now, but they keep telling me not much has
> > changed...)  I've attempted installs of this Debian before, but my HD is
> When you get to the bootup, there's a choice of kernels and you choose the 
> bf24 one for a 2.4 kernel rather than a 2.2 kernel.  

My shiny new (hypothetical) server hardware is only supported by the
2.6 kernel...  What do I do?

> >   The Debian zealots I know have been telling me the installer is going to
> > get much better Real Soon Now for over five years.  You'll pardon me if I
> > don't hold my breath.  :)
> It is.  It's not coming soon - it's here.  Download a Sarge ISO and see for 
> yourself.  

I have...  I admit it was much better than the potato installer, but
that didn't take much.  It still seemed to me like it was a bit behind
the times...  As for X being configured in a grossly sub-optimal
state, that seems absurd.  All the other major distros have been
getting that pretty much right for a LONG time now.  If nothing else,
Debian could just steal code and have it working tomorrow...

> If you're looking for a GUI, then you'll still be disappointed, 
> but I don't care about eye candy for something I see so rarely.  

If you're a sysadmin for a large site, you tend to see it quite often.
I don't care about the eye candy that much anyway, but I still found
it to be, um, let's say my least favorite installer of all the major
distros.  :)

> You could... I'd just download a Sarge ISO.

Historically, IIRC, just downloading an ISO was not easy to do.  If it
is now, that's a welcome change.  But I still don't want to spend 4
hours downloading a bunch of software that's 3 years old...

> > > I don't really see anyone doing anything better than APT, even
> > > on a large scale here.
> >
> >   Read my keystrokes: It's not the frelling package manager.  :-)
> >
> >   Configuration management is completely hopeless if one's configuration
> > varies depending on when you happened to pull your package set from
> > testing/unstable/sarge/sid/pixar/whatever.
>
> Why does it depend on that?  Configuration is very reliable in all
> releases of Debian I've found over the years.  It doesn't change and
> often makes a lot more sense than other distros I've used.  Although
> that is likely as much a "what you're used to" thing than anything
> else.

You don't seem to understand what we mean by "configuration
management."  It refers to maintaining the software, and the
configurations of that software, on a group of machines in some known
state.  Ideally, the state should be the SAME state across all the
machines, unless there is a specific technical reason why it isn't.
APT does not and cannot do this for you.

The reasons to do this are many, but they boil down to consistency
and ease of management.  For example, if I have a script which I use
to automate some task that needs to be performed regularly on all of
the machines I manage, I need to know that if it works properly on my
test machine, it works on ALL the machines.  I can't guarantee that if
the last perl update has a subtle difference in some function
implementation, or if a package has moved its configuration file to a
different location (or worse still, changed its syntax), or if the
groff command now needs the -c option to produce the output it
formerly generated without it...

APT does not and cannot do this for you.  At least, not all by
itself.  That's why configuration management doesn't depend on
the package manager.
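
To make the idea concrete, here's a deliberately minimal sketch of the
sort of thing I mean, using nothing beyond dpkg and standard tools
(the manifest path is just illustrative).  Real configuration
management tools (cfengine and friends) do far more, but the principle
is the same:

    # On the reference machine: record every package at its exact version.
    dpkg-query -W -f '${Package} ${Version}\n' > manifest.good

    # On each managed machine: any diff output means the machine has
    # drifted from the known state and needs a human to look at it.
    dpkg-query -W -f '${Package} ${Version}\n' | diff manifest.good -

APT keeps each individual box internally consistent; it's keeping a
hundred boxes consistent with *each other* that you have to arrange
yourself.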

> > > As for deploying hundreds of machines, I have no idea how that's
> > > connected to choice of distro ...                ^^^^^^^^^^^^^^
> >
> >   Exactly my point.  :-)

> Well, I'm lost then.  What was your point?  You brought up this idea
> of Debian being inappropriate for large installations, presumably
> because something else does that kind of thing better?  I don't
> imagine there's a better solution than maintaining a local
> repository and letting everyone (non-mission-critical anyway, which
> I would never do automatically) auto-update from it.

I already answered this in a previous message, but here it is again.
For many if not most purposes, stable is too old.  Yes, even on the
day it's released, and yes, even on servers, depending on the hardware
being used and its intended purpose.  Testing and unstable are not
stable enough for an organization to depend on.  Debian is not a good
choice for large enterprise installations.  Not because the software
sucks (it doesn't), but because, from a carefully considered
management perspective, there are better choices available.

Disclaimer: Ben and I are both from the "old school" of system
administration...  Keep everything the same, do things once, and do
them right (as much as management will let you).  Change NOTHING
unless not doing so will result in catastrophe (OK, I'm exaggerating
here :-).  Do as little work as possible to keep things running well.
Not because you're lazy, but because it shouldn't be necessary and
your time is better spent working on something cool.  ;-)

There are other schools of thought.  Some sysadmins like to play
things fast and loose, which is fine if it works at your site.
However, I will say that I've been in a number of such shops, and
every single one of them was a complete maintenance nightmare...
Invariably those shops (the ones I worked in, YMMV) have spent all
their time fighting fires, so they never got to the cool stuff.
And that's no fun.

-- 
Derek D. Martin    http://www.pizzashack.org/   GPG Key ID: 0xDFBEAD02
-=-=-=-=-
This message is posted from an invalid address.  Replying to it will result in
undeliverable mail.  Sorry for the inconvenience.  Thank the spammers.
