On Sun, Nov 23, 2008 at 11:21:27AM -0500, Michael Wojcik wrote:
> Andre Poenitz wrote:
> > On Tue, Nov 18, 2008 at 03:42:52PM -0500, Michael Wojcik wrote:
> >> Andre Poenitz wrote:
> >>> On Mon, Nov 17, 2008 at 11:07:05AM -0500, Paul A. Rubin wrote:
> >> I've worked on many projects that maintained backward compatibility
> >> with new releases of the API, and seen a great many more.
> > 
> > Just for my curiosity: Which projects, which scope? 
> 
> Hmm. Off the top of my head, in roughly chronological order:
> 
> - Various IBM internal-only projects, such as the E editor.
>
> - Early versions of Windows. The Windows 1.x to Windows 2.0 and
> Windows/286 transition maintained compatibility in the Windows API;
> Windows 1.x applications ran unchanged in the 2.0 family.

Windows 2.0 was released almost exactly two years after 1.0, and Windows 3.0
broke the API completely another 2 1/2 years later.  So, at best, that's a
period of 4.5 years of "API stability".  That's close to a joke, especially
when you take into account that anything before 3.11 was not usable for any
reasonable practical purpose...

> - X11R3. The X11 API was layered correctly: as long as the server
> follows the protocol spec, it doesn't matter what it does under the
> covers. I added support for new hardware to the ddx layer; wrote new
> window managers with completely different look-and-feel from the
> standard ones; added extensions to X11 itself. None of that interfered
> with existing clients one bit.

X11R3: End of 88, X11R4: End of 89.

In any case, this is a nice example of something that was "finished" at
some point in time.  Nobody has changed 7-bit ASCII for a while either, for
that matter.  If a feature set is closed at some point, it is easy to
"outsource" the problems to "extensions" and "toolkits".

Supposedly, the last person who used plain X died around 1990.
[No, that was not me *cough*]  SCNR ;-)

> - The 4.3 BSD kernel. Extended multihead support in the console driver
> and wrote some drivers for new hardware. Enhanced the shared memory
> kernel option. Nothing that didn't want to use the new features needed
> to be recompiled.

Spring (?) 2001 - January 2002.

I can't/won't comment on the others.

> Maintaining backward compatibility simply is not that hard.

We are _not_ talking about _two_ years here.  I can maintain compatibility
over two years simply by ignoring advancements in the outside world for that
long and releasing an "incompatible version x+1" afterwards.

> > I am still pretty convinced that "compatibility" and "progress" are
> > fairly incompatible notions when it comes to the development of _usable_
> > libraries.
> 
> And I'll say that my experience as a professional software developer
> for 20 years, and as a hobbyist for a number of years prior to that,
> shows me otherwise.

Fine. My experience so far shows that one has a choice between
stagnation and breaking compatibility. And making that choice is 
neither obvious nor easy.

> > you try to provide everything and the kitchen sink, and end up with
> > design and implementation decisions that need to be re-evaluated from
> > time to time in the presence of new environments. Java and Python, or
> > anything including a "GUI" comes to mind.
> 
> I'll offer X11 as a counterexample.

X11 certainly has its merits and is time-proven.  Still, it puts a lot of
burden on the application developer or, at the very least, on the toolkit
developer.  Many of the initial design decisions that do not scale well
into the 21st century are only bearable because of the "outsourcing"
mentioned above.  Plain X11 does _not_ come with kitchen sinks.
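Just to illustrate what "no kitchen sink" means in practice: this is
roughly the bare minimum a plain-Xlib client has to do just to get a
window with one line of text on the screen (a minimal sketch, no toolkit
involved; build with "cc hello_x.c -lX11"):

    /* Bare-Xlib "hello": everything a toolkit would normally do for you. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose)   /* redraw on every expose ourselves */
                XDrawString(dpy, win, DefaultGC(dpy, scr), 20, 50,
                            "no kitchen sink here", 20);
            if (ev.type == KeyPress) /* any key quits */
                break;
        }
        XCloseDisplay(dpy);
        return 0;
    }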
 
> >> And in this case, we're talking C and C++ runtimes, which should
> >> conform to the ISO standard anyway.
> > 
> > Ah... should they conform to the Standard or should they be compatible to
> > older versions?
> 
> To the standard.

That either rules out fixing bugs or breaks compatibility whenever a fix
changes behaviour somebody relied on.  I don't say that's a bad choice - in
fact it's what I'd do in most cases - but it is at odds with your claim that
maintaining compatibility is possible _and easy_.

> > What is supposed to happen if an existing version does
> > _not_ conform to the Standard?
> 
> Since the standards attempt to codify existing practice, that rarely
> happens.

Hear, hear.

How come ISO 14882 codified "export" for templates when not a single
compiler was able to handle it in 1998 (nor for a few years after that)?

Apart from that, the point is not how often it happens but that it happens
at all.  You just admitted that it does.

> The only case that comes to mind of an incompatible change in
> the C standard, between C90 (ISO 9899-1990) and C99, is the choice of
> return code semantics for snprintf when it was added to the standard.
> There were two implementations with different semantics; the committee
> chose the sensible one. The only significant broken implementations by
> that point were HP's and Microsoft's, and Microsoft's doesn't really
> count because 1) the canonical name of the function in the Microsoft
> libraries was _snprintf, an identifier reserved to the implementation,
> and 2) Microsoft wasn't inclined to follow the standard anyway.
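For reference, the semantics the committee picked are the ones where the
return value tells you how long the complete output would have been, so a
truncated attempt lets you resize and retry.  A small sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char small[8];

        /* C99: the return value is the length the full output would have
         * had, even if it did not fit.  (The broken implementations
         * returned -1 on truncation, which tells you nothing useful.) */
        int needed = snprintf(small, sizeof small, "release %d.%d", 3, 11);

        if (needed >= (int) sizeof small) {          /* it was truncated */
            char *big = malloc((size_t) needed + 1); /* +1 for the NUL */
            if (!big)
                return 1;
            snprintf(big, (size_t) needed + 1, "release %d.%d", 3, 11);
            printf("%s\n", big);
            free(big);
        }
        return 0;
    }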

And C is suitable for _application development_ without resorting to tons
of external, non-standardized libraries covering a multitude of things the
Standard does not even mention?
 
> > Also: What am I supposed to do in case there is no obvious standard to
> > adhere to? I have e.g. a few hundred kLOC of pre-1998 C++ code (done
> > well before 1998...) around that's uncompilable with today's compilers.
> > Who is to blame here? Should g++ have stuck to 2.95's view of the
> > world?
> 
> That's not a dynamic-runtime issue, which is what we were discussing.
> It's another problem entirely - the overly large and loose definition
> of the C++ language.

I smell an attempt at C++ bashing here.  But C is not much different:
look at K&R vs. ANSI style function declarations.
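For the record, the same function in both styles (a minimal example; both
forms are still accepted by C89/C99 compilers, but the K&R form has been
obsolescent since C89 and was finally removed in C23):

    #include <stdio.h>

    /* K&R style: parameter types declared separately, no prototype. */
    int add_knr(a, b)
        int a;
        int b;
    {
        return a + b;
    }

    /* ANSI/ISO style: the prototype carries the types. */
    int add_ansi(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
        printf("%d %d\n", add_knr(1, 2), add_ansi(1, 2));
        return 0;
    }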

> > and I have to admit that I am not aware of a lot of "other
> > Windows mechanisms" that scale from, say, Win 3.11^H95 through Vista.
> > What exactly are you referring to?
> 
> First, Win9x is dead. There's little reason to target anything that's
> not in the NT family. There's certainly no reason to expect to use the
> same techniques on Win9x and the NT family.
>
> As for Windows mechanisms for supporting multiple library versions,
> the most obvious, prior to the half-assed linker-manifest disaster
> from 2005, is colocation. If you have an application that absolutely
> needs a specific version of a common DLL, then you drop that version
> into the application's private binary directory with the executable.

And how is that conceptually different from static linking?

> Or, better, you explicitly load the DLL, and first check if the public
> one is a version you're compatible with; if not, you load your version
> from a private application directory. PE files have binary version
> information, so use it.
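Concretely, that dance presumably looks something like this (a rough
sketch; the DLL name and the required version numbers are made up, and
a real program would resolve its entry points with GetProcAddress after
the load):

    /* Check the public DLL's version, else fall back to a private copy.
     * Build with: cl loaddll.c version.lib */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    static HMODULE load_checked(const char *path, DWORD major, DWORD minor)
    {
        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeA(path, &handle);
        if (size == 0)
            return NULL;

        void *data = malloc(size);
        if (!data || !GetFileVersionInfoA(path, 0, size, data)) {
            free(data);
            return NULL;
        }

        VS_FIXEDFILEINFO *ffi = NULL;
        UINT len = 0;
        HMODULE mod = NULL;
        if (VerQueryValueA(data, "\\", (void **) &ffi, &len) && ffi
            && HIWORD(ffi->dwFileVersionMS) == major
            && LOWORD(ffi->dwFileVersionMS) >= minor)
            mod = LoadLibraryA(path);   /* version acceptable, load it */

        free(data);
        return mod;
    }

    int main(void)
    {
        /* Public copy first, then the one shipped with the application. */
        HMODULE dll = load_checked("common.dll", 4, 2);
        if (!dll)
            dll = load_checked("private\\common.dll", 4, 2);
        if (!dll) {
            fprintf(stderr, "no usable common.dll found\n");
            return 1;
        }
        FreeLibrary(dll);
        return 0;
    }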

That's something I can do more directly and more safely with static
linking: I just link the version I tested against.  No further assumptions
about which subminor versions are compatible and which are not...

> Or, if you can't be bothered to use PE versioning, just change the DLL
> filename when you absolutely have to come out with an incompatible
> update. That's what Microsoft did for years (up through MSVC 6), and
> it would've done the job if they had been consistent.

Sure... one just has to drop the habit of calling things '32' when one
means some 32-bit version and '42' when version 4.2 is referred to ;-}
 
> That's for public, third-party DLLs.

Our use case so far, if I may remind you...

> For DLLs you control - as we do
> with our COBOL runtime, for example - it's really not hard to do
> proper versioning, with a combination of explicit loading (handled by
> a small loader statically linked to the application binary), separate
> load directories for each version, and version path information in the
> registry.
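Which presumably boils down to something like this (a rough sketch; the
registry key, value, and file names are invented for illustration):

    /* Small loader: look up the install directory of the wanted runtime
     * version in the registry, then load that DLL explicitly. */
    #include <windows.h>
    #include <stdio.h>

    static HMODULE load_runtime(const char *version)
    {
        char key[256];
        char dir[MAX_PATH];
        DWORD len = sizeof dir;
        HKEY hkey;

        snprintf(key, sizeof key,
                 "SOFTWARE\\ExampleVendor\\Runtime\\%s", version);
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, key, 0, KEY_READ, &hkey)
                != ERROR_SUCCESS)
            return NULL;

        LONG rc = RegQueryValueExA(hkey, "InstallDir", NULL, NULL,
                                   (LPBYTE) dir, &len);
        RegCloseKey(hkey);
        if (rc != ERROR_SUCCESS)
            return NULL;
        if (len >= sizeof dir)          /* make sure it's NUL-terminated */
            len = sizeof dir - 1;
        dir[len] = '\0';

        char path[MAX_PATH];
        snprintf(path, sizeof path, "%s\\runtime.dll", dir);
        return LoadLibraryA(path);      /* exactly that version, nothing else */
    }

    int main(void)
    {
        HMODULE rt = load_runtime("4.2");
        if (!rt) {
            fprintf(stderr, "runtime 4.2 not installed\n");
            return 1;
        }
        /* resolve entry points with GetProcAddress() and go on from here */
        FreeLibrary(rt);
        return 0;
    }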

[Of course not.  And that scales well across half a dozen platforms, five
of which you can't directly access for testing... No.]

Andre'
