On Dec 29, 2014, at 8:02 AM, James B. Byrne <byrn...@harte-lyne.ca> wrote:

> In many instances in government and business seven
> years is a typical time-frame in which to get a major software system built
> and installed.  And I have witnessed longer.

As a software developer, I think I can speak to both halves of that point.

First, the world where you design, build, and deploy The System is disappearing 
fast.

The world is moving toward incrementalism, where the first version of The 
System is the smallest thing that can possibly do anyone any good.  That is 
deployed ASAP, and is then built up incrementally over years.

Though you spend the same amount of time, you will not end up in the same place 
because the world has changed over those years.  Instead of building on top of 
an increasingly irrelevant foundation, you track the actual evolving needs of 
the organization, so that you end up where the organization needs you to be 
now, instead of where you thought it would need to be 7 years ago.

Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver 
new functionality to production every 1-4 weeks, achieving 100% of the desired 
feature set over the course of years.

This isn’t pie-in-the-sky theoretical BS.  This is the way I’ve been developing 
software for decades, as have a great many others.  Waterfall is dead, 
hallelujah!

Second, there is no necessary tie between OS and software systems built on top 
of it.  If your software only runs on one specific OS version, you’re doing it 
wrong.

I don’t mean that glibly.  I mean you have made a fundamental mistake if your 
system breaks badly enough due to an OS change that you can’t fix it within an 
iteration or two of your normal development process.  The most likely mistake 
is staffing your team entirely with people who have never been through a 
platform shift before.

Again, this is not theoretical bloviation.  The software system I’ve been 
working on for the past 2 decades has been through several of these platform 
changes.  It started on x86 SVR4, migrated to Linux, bounced around several 
distros, and occasionally gets updated for whatever version of OS X or FreeBSD 
someone is toying with at the moment.

Unix is about 45 years old now.  It's been through shifts that make my 
personal experience look trivial.  (We have yet to get off x86, after all.  How 
hard could it have been, really?)  The Unix community knows how to do 
portability.

If you aren’t planning for platform shift, you aren’t planning.

We have plenty of technology for coping with platform shift: the autotools, 
platform-independence libraries (Qt, APR, Boost…), portable language platforms 
(Perl, Java, .NET…), and on and on.

Everyone’s moaning about systemd, and how it’s taking over the Linux world, as 
if it would be better if Red Hat kept on with systemd and all the other Linux 
distro providers shunned it.  Complain about its weaknesses if you like, but 
at least it’s looking to be a real de facto standard going forward.

> So, seven, even ten, years of stability is really nothing at all.  And as
> Linux seeks to enter into more and more profoundly valuable employment the
> type of changes that we witnessed from v6 to v7 are simply not going to be
> tolerated.

Every other OS provider does this.

(Those not in the process of dying, at any rate.  A corpse is stable, but 
that’s no basis for recommending the widespread assumption of ambient 
temperature.)

Windows?  Check.  (Vista, Windows 8, Windows CE/Pocket PC/Windows 
Mobile/Windows RT/Windows Phone)

Apple?  Check.  (OS 9->X, Lion, Mavericks, Yosemite, iOS 6, iOS 7, iOS 8…)

And when all these breakages occurred, what was the cry heard throughout the 
land of punditry?  “This is Linux’s chance!  Having forced everyone to rewrite 
their software [bogus claim], Bad OS will make everyone move to Linux!”  Except 
it doesn’t happen.  Interesting, no?

Could it be that software for these other platforms *also* manages to ride 
through major breaking changes?

> What enterprise can afford to rewrite all of its software
> every ten years?

Straw man.

If you have to rewrite even 1% of your system to accommodate the change from 
EL6 to EL7, you are doing it wrong.

If you think EL6 to EL7 is an earth-shaking change, you must not have been 
through something actually serious, like Solaris to Linux, or Linux to BSD, or 
(heaven forfend) Linux to Windows.  Here you *might* crest the 1% rewrite 
level, but if you handle it right, you’ve also made it much easier to port to 
a third new platform later.

> What enterprise can afford to retrain all of its personnel to
> use different tools to accomplish the exact same tasks every seven years?

Answer: Every enterprise that wants to remain an enterprise.

This is exactly what happens with Windows and Apple, only at a somewhat 
swifter pace, typically.

(The long dragging life of XP is an exception.  Don’t expect it to occur ever 
again.)

> The
> desktop software churn that the PC has inured in people simply does not scale
> to the enterprise.

Tell that to Google.

What, you think they’re still building Linux boxes based on the same kernel 2.2 
custom distro they were using when they started in the late 1990s?

We don’t have to guess, they’ve told us how they coped:

http://events.linuxfoundation.org/sites/events/files/lcjp13_merlin.pdf

Check out the slide titled “How did that strategy of patching Red Hat 7.1 work 
out?”

Read through the rest of it, for that matter.

If you come away from it with “Yeah, that’s what I’m telling you, this is a 
hard problem!” you’re probably missing the point, which is that while your 
resources aren’t as extensive as Google’s, your problem isn’t nearly as big as 
Google’s, either.

Bottom line: This is the job.  This is what you get paid to do.
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos