Andre Poenitz wrote:
> On Tue, Nov 18, 2008 at 03:42:52PM -0500, Michael Wojcik wrote:
>> Andre Poenitz wrote:
>>> On Mon, Nov 17, 2008 at 11:07:05AM -0500, Paul A. Rubin wrote:
>> I've worked on many projects that maintained backward compatibility
>> with new releases of the API, and seen a great many more.
> 
> Just for my curiosity: Which projects, which scope? 

Hmm. Off the top of my head, in roughly chronological order:

- Various IBM internal-only projects, such as the E editor.

- Early versions of Windows. The Windows 1.x to Windows 2.0 and
Windows/286 transition maintained compatibility in the Windows API;
Windows 1.x applications ran unchanged in the 2.0 family.

- X11R3. The X11 API was layered correctly: as long as the server
follows the protocol spec, it doesn't matter what it does under the
covers. I added support for new hardware to the ddx layer; wrote new
window managers with completely different look-and-feel from the
standard ones; added extensions to X11 itself. None of that interfered
with existing clients one bit.

- The 4.3 BSD kernel. Extended multihead support in the console driver
and wrote some drivers for new hardware. Enhanced the shared memory
kernel option. Nothing that didn't want to use the new features needed
to be recompiled.

- A number of Micro Focus commercial products and components thereof:
AAI, CSB, CCI, MFCC ... These are commercial APIs used by paying
customers to build in-house and ISV commercial applications. Changing
them and breaking existing mission-critical applications isn't good
for business. But we release updates a few times a year for most of them.

Take AAI, for example. AAI 1.0 came out in 1988, and had major new
releases for the next 10 years. Typical AAI purchases were in the $10K
to $300K range, with yearly maintenance fees. The 1998 release had a
feature set probably five times as large as in the 1988 release and
ran on a dozen more platforms (from 16-bit Windows to big iron). We
still shipped, as one of the demos, the original 1988 demo source -
unchanged. The *binaries* from 1988 still ran, unchanged. The 1988 AAI
clients and servers interoperated with the 1998 ones, with no user
intervention (just a bit of automatic protocol negotiation).

Maintaining backward compatibility simply is not that hard.

> I am still pretty convinced that "compatibility" and "progress" are
> fairly incompatible notions when it comes to the development of _usable_
> libraries.

And I'll say that my experience as a professional software developer
for 20 years, and as a hobbyist for a number of years prior to that,
shows me otherwise.

> you try to provide everything and the kitchen sink, and end up with
> design and implementation decisions that need to be re-evaluated from
> time to time in the presence of new environments. Java and Python, or
> anything including a "GUI" comes to mind.

I'll offer X11 as a counterexample.

>> And in this case, we're talking C and C++ runtimes, which should
>> conform to the ISO standard anyway.
> 
> Ah... should they conform to the Standard or should they be compatible with
> older versions?

To the standard.

> What is supposed to happen if an existing version does
> _not_ conform to the Standard?

Since the standards attempt to codify existing practice, that rarely
happens. The only case that comes to mind of an incompatible change in
the C standard, between C90 (ISO/IEC 9899:1990) and C99, is the choice of
return code semantics for snprintf when it was added to the standard.
There were two implementations with different semantics; the committee
chose the sensible one. The only significant broken implementations by
that point were HP's and Microsoft's, and Microsoft's doesn't really
count because 1) the canonical name of the function in the Microsoft
libraries was _snprintf, an identifier reserved to the implementation,
and 2) Microsoft wasn't inclined to follow the standard anyway.
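For reference, the semantics the committee chose are the ones that make
two-pass buffer sizing work: snprintf reports the length the full output
would have had, even when the buffer is too small. A minimal C99 sketch
(not tied to any particular implementation):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Pass 1: with a null buffer and size 0, C99 snprintf returns
         * the number of characters the output would need. */
        int needed = snprintf(NULL, 0, "pi is roughly %.5f", 3.14159);
        if (needed < 0)
            return EXIT_FAILURE;

        /* Pass 2: allocate exactly enough and format for real. */
        char *buf = malloc((size_t)needed + 1);
        if (buf == NULL)
            return EXIT_FAILURE;
        snprintf(buf, (size_t)needed + 1, "pi is roughly %.5f", 3.14159);

        puts(buf);
        free(buf);
        return EXIT_SUCCESS;
    }

The losing semantics returned -1 (or the truncated count) on overflow,
which makes that idiom impossible.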

> Also: What am I supposed to do in case there is no obvious standard to
> adhere to? I have e.g. a few hundred kLOC of pre-1998 C++ code (done
> well before 1998...) around that's uncompilable with today's compilers.
> Who is to blame here? Should g++ have stuck to 2.95's view of the
> world?

That's not a dynamic-runtime issue, which is what we were discussing.
It's another problem entirely - the overly large and loose definition
of the C++ language.

>>> In particular that would mean not only source and binary but also
>>> behavioural compatibility including keeping buggy behaviour.
>> No it doesn't. Undefined behavior is undefined; an application that
>> relies on it is broken.
> 
> What is an application supposed to do when it lives in an environment
> where only buggy libraries are available? 

Suck it up? Might as well ask what a car is supposed to do in an
environment with no roads. That's not a design failure in the car, nor
a mistake on the part of the car's engineers; and neither does it mean
that cars are a bad idea.

>> And for the rare application that does, there are other Windows
>> mechanisms for tying it to the old version of the DLL.
> 
> I obviously dispute "rare", otherwise Wikipedia would not know about
> "DLL hell"

DLL hell exists because Microsoft likes to release incompatible
versions of its libraries, and because they failed to implement a good
versioning policy in Windows, and because application developers often
wrote really poor installers. None of that means that applications
needed to rely on undefined behavior.

> and I have to admit that I am not aware of a lot of "other
> Windows mechanisms" that scale from, say, Win 3.11^H95 through Vista.
> What exactly are you referring to?

First, Win9x is dead. There's little reason to target anything that's
not in the NT family. There's certainly no reason to expect to use the
same techniques on Win9x and the NT family.

As for Windows mechanisms for supporting multiple library versions,
the most obvious, prior to the half-assed linker-manifest disaster
from 2005, is colocation. If you have an application that absolutely
needs a specific version of a common DLL, then you drop that version
into the application's private binary directory with the executable.

Or, better, you explicitly load the DLL, and first check if the public
one is a version you're compatible with; if not, you load your version
from a private application directory. PE files have binary version
information, so use it.
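By way of illustration, here's a rough sketch of that pattern. The
foo.dll name and the version-2 requirement are hypothetical stand-ins,
but the version-resource and loader calls are the real Win32 APIs (link
against version.lib):

    #include <windows.h>
    #include <stdlib.h>

    /* Check a DLL's PE version resource (VS_FIXEDFILEINFO) against the
     * major version we need. */
    static int dll_version_ok(const char *path, WORD want_major)
    {
        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeA(path, &handle);
        if (size == 0)
            return 0;

        void *data = malloc(size);
        if (data == NULL)
            return 0;

        VS_FIXEDFILEINFO *ffi = NULL;
        UINT len = 0;
        int ok = GetFileVersionInfoA(path, handle, size, data)
              && VerQueryValueA(data, "\\", (void **)&ffi, &len)
              && HIWORD(ffi->dwFileVersionMS) >= want_major;
        free(data);
        return ok;
    }

    HMODULE load_foo(void)
    {
        /* Try the public copy first; a real loader would resolve the
         * same full path that LoadLibrary would search. */
        if (dll_version_ok("foo.dll", 2))
            return LoadLibraryA("foo.dll");

        /* Otherwise fall back to the private copy shipped alongside
         * the application. */
        return LoadLibraryA("private\\foo.dll");
    }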

Or, if you can't be bothered to use PE versioning, just change the DLL
filename when you absolutely have to come out with an incompatible
update. That's what Microsoft did for years (up through MSVC 6), and
it would've done the job if they had been consistent.

That's for public, third-party DLLs. For DLLs you control - as we do
with our COBOL runtime, for example - it's really not hard to do
proper versioning, with a combination of explicit loading (handled by
a small loader statically linked to the application binary), separate
load directories for each version, and version path information in the
registry.
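In outline, the loader side looks something like this. The registry key,
value name, and DLL name are hypothetical stand-ins, but
RegGetValueA/LoadLibraryA are the actual mechanism (link against
advapi32):

    #include <windows.h>
    #include <stdio.h>

    /* Look up the install directory for a given runtime version in the
     * registry, then load that version's DLL explicitly. */
    HMODULE load_runtime(const char *version)
    {
        char subkey[256], dir[MAX_PATH], path[MAX_PATH];
        DWORD size = sizeof dir;

        snprintf(subkey, sizeof subkey,
                 "SOFTWARE\\ExampleCo\\Runtime\\%s", version);

        if (RegGetValueA(HKEY_LOCAL_MACHINE, subkey, "InstallDir",
                         RRF_RT_REG_SZ, NULL, dir, &size) != ERROR_SUCCESS)
            return NULL;

        snprintf(path, sizeof path, "%s\\runtime.dll", dir);
        return LoadLibraryA(path);
    }

The statically linked stub stays tiny, and each runtime version lives in
its own directory, so installing a new version never overwrites the one
an existing application was tested against.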

-- 
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University
