At 03:33 PM 2/8/00 -0600, you wrote:

>MS _software_ isn't all bad...

You dirty, lousy, no-good Microsoft apologist.


:-)

>it's the fact that they're still using clunky DOS code in their operating
>system that makes it suck.

Here are the problems:
1. 16-bit code here and there. Lots of gnarly stuff gluing 16-bit and 32-bit
   components together.
2. Terrible coding practices. Tons of code full of exposed, non-encapsulated
   void pointers and other horrors. Lots of failures to test pointers for
   null.
3. So much stuff occasionally dereferences null that they figured the easiest
   way to "fix" the problem was to hack the system not to trap and app-fault
   on null dereferences -- so programs that do it don't fail cleanly right
   away, but silently shoot themselves in the foot and stumble on for a
   while, corrupting data here and there before eventually collapsing. And
   developers don't get any SEGVs or other indications of null-dereference
   problems in *their* code, and wind up producing lots of code that also
   doesn't properly test for and handle null. (See the first sketch after
   this list.)
4. They had this horrid idea known as "resources". Instead of just allocating
   needed stuff on the heap in the normal fashion, they also have three
   fixed-size pools of "system", "user", and "gdi" resources, and if one of
   them runs out of space, there's no error indication for the failed
   allocation -- things just start to come crashing down around the user's
   ears with, to them, no apparent reason or trigger. Whatever objects live
   in these three pools should have just been allocated on the heap as
   needed, so that the more RAM and swap are available, the less likely
   these crashes become, and so that the usual tests for allocation failure,
   and the usual ways apps handle it gracefully, would work uniformly for
   everything. (See the second sketch after this list.)
5. Windows Security. What a joke.
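
To make point 3 concrete, here's a minimal C sketch. (make_widget is a
hypothetical stand-in for any routine that can hand back a null pointer;
this isn't Windows code.) On Unix, omitting the null test gets you an
immediate SIGSEGV at the point of the bug, which is exactly the loud,
early failure that Windows' hack suppresses:

    #include <stdio.h>
    #include <stdlib.h>

    struct widget { int size; };

    /* Hypothetical allocator: returns NULL on failure, like malloc. */
    struct widget *make_widget(void)
    {
        return malloc(sizeof(struct widget));
    }

    int main(void)
    {
        struct widget *w = make_widget();

        /* Right: test for null and fail cleanly, right away. */
        if (w == NULL) {
            fprintf(stderr, "make_widget failed\n");
            return EXIT_FAILURE;
        }
        w->size = 42;

        /* Wrong: skip the test. If w were NULL, then on Unix the
         * assignment above would raise SIGSEGV on the spot -- a loud,
         * immediate failure at the site of the bug. An OS that
         * quietly "survives" the dereference instead lets the
         * program stumble on with corrupt state. */
        free(w);
        return EXIT_SUCCESS;
    }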
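
And to make point 4 concrete, a toy sketch of fixed pools versus the heap.
(fixed_pool and pool_alloc are my own stand-ins, not actual Windows
internals.) The pool runs dry at the same fixed point no matter how much
RAM and swap are free, while malloc scales with the machine and reports
failure through the NULL return every app already knows how to test for:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-in for a fixed "resource" pool: 64 KB, period,
     * regardless of how much memory the machine has. (Ignores
     * alignment and freeing -- it's only an illustration.) */
    static char fixed_pool[64 * 1024];
    static size_t pool_used = 0;

    static void *pool_alloc(size_t n)
    {
        if (pool_used + n > sizeof(fixed_pool))
            return NULL;                    /* pool exhausted */
        void *p = fixed_pool + pool_used;
        pool_used += n;
        return p;
    }

    int main(void)
    {
        /* Heap: capacity grows with RAM + swap, and failure is
         * reported uniformly, so the usual graceful handling works. */
        void *h = malloc(1024);
        if (h == NULL)
            fprintf(stderr, "heap exhausted -- handle it gracefully\n");
        free(h);

        /* Pool: hits its fixed ceiling no matter how much memory is
         * free. Under Windows the caller gets no indication at all;
         * here, at least, we can print something. */
        void *p = pool_alloc(128 * 1024);   /* bigger than the pool */
        if (p == NULL)
            fprintf(stderr, "pool exhausted at its fixed size\n");
        return 0;
    }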

And FYI, NT is not much better than Win9x; Unix runs circles around it.
It's still crash-prone, and the security, though better, still leaves a lot
to be desired. On a 9x box with the multiuser stuff "secured" I can crack
root in ten seconds if I have physical access to the box; it's harder on an
NT machine, but there are still lots of holes, especially in the network
stuff. Go to a WWW server running Apache on Unix, especially on Linux. See
how stable, quick to respond, and reliable it is over a period of weeks of
frequent access. Then go to a WWW server running an M$ Web server on NT --
any Microsoft Web site will do, e.g. msnbc.com. It'll be down half the
time, slow the rest of the time, produce strange "HTTP 500 -- Internal
Server Error" messages, misbehave, and so forth. For example, the M$ httpd
sometimes spuriously sends an RST packet in the middle of a GET, so the
user sees a half-displayed Web page with a dialog on top saying the page
can't be loaded because the connection was "unexpectedly reset". I mean,
there is an HTTP error code, 500, that is assigned for an error message
that basically means "The server has a bug and your HTTP request happened
to trigger it; better luck next time; fortunately, these are usually
POM-dependent (phase-of-the-moon) bugs, so chances are if you click the
link again it will magically work perfectly!" Undoubtedly, it was
Microsoft's httpds that led the guys who maintain the HTTP spec to decide
there was a need for an HTTP error code specifically to report a bug in
the server.
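
The "unexpectedly reset" symptom is easy to see even from a trivial client.
A minimal POSIX-sockets sketch (the host is hardcoded purely for
illustration): a clean close shows up as read() returning 0, while an RST
in mid-transfer shows up as a read() error with errno set to ECONNRESET --
which a browser then reports as the connection having been reset:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_socktype = SOCK_STREAM;

        /* Host hardcoded for illustration only. */
        if (getaddrinfo("www.example.com", "80", &hints, &res) != 0)
            return 1;

        int fd = socket(res->ai_family, res->ai_socktype, 0);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return 1;

        const char req[] = "GET / HTTP/1.0\r\n\r\n";
        write(fd, req, sizeof req - 1);

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        if (n == 0)
            fprintf(stderr, "clean close (server sent FIN)\n");
        else if (errno == ECONNRESET)
            fprintf(stderr, "connection reset (server sent RST)\n");

        close(fd);
        freeaddrinfo(res);
        return 0;
    }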

>Win NT isn't too bad. It's pretty stable in most cases...

Yep. I'm sure that, by itself, it is. Install some apps and begin to
actually *work* on it, though...

>They might possibly get their act together with Win2k...

There was similar misguided optimism when everyone was using Windows 3.1
and the rumors of Win95 and NT began to circulate. Now we have Win98 and
NT, and things are no better in Winworld than with 3.1. There is too much
legacy code, there are too many bad programming practices, and there are
too many rationalizations used to excuse them and hacks used to work
around the bugs, for Windows to recover without a total rewrite. They
attempted that with NT, but the philosophical and architectural problems
in the minds of the developers led them to make many of the same mistakes
again anyway, plus plenty of new ones. And of course, a rewrite makes an
essentially new OS that breaks compatibility with the old. NT breaks most
DOS apps -- they won't work from the NT command prompt. It also breaks
plenty of 'doze apps, including many 32-bit ones. To keep as much
compatibility as possible with legacy apps and 32-bit 95/98 apps, they
compromised on making NT a full rewrite by building it around the same
APIs used for Win9x, and those APIs contained encapsulation errors as well
as design flaws that encourage bad design decisions in implementations --
including forcing the same three fixed pools of "resources" on top of the
usual heap memory.

For a real fresh start, someone started hacking away nearly a decade ago
at making a free POSIX-compliant OS, choosing POSIX because apps based on
the POSIX APIs were already numerous and because the POSIX specification
didn't include serious design flaws and encapsulation problems. The
result, of course, should be familiar to everyone on this list: Linux. :-)
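
As one small illustration of what those POSIX APIs get right (a sketch,
nothing definitive): the file descriptor is an opaque little int rather
than an exposed structure full of void pointers, and failure is reported
the same way everywhere -- a -1 return with the reason in errno -- so
every app tests for it uniformly:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* The descriptor is opaque: an int the kernel hands out.
         * There are no internals for the app to poke at or corrupt. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) {
            /* Uniform error convention: -1 return, reason in errno. */
            fprintf(stderr, "open: %s\n", strerror(errno));
            return 1;
        }

        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0)
            fprintf(stderr, "read: %s\n", strerror(errno));
        else
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }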


-- 
   .*.  "Clouds are not spheres, mountains are not cones, coastlines are not
-()  <  circles, and bark is not smooth, nor does lightning travel in a
   `*'  straight line."    -------------------------------------------------
        -- B. Mandelbrot  |http://surf.to/pgd.net [EMAIL PROTECTED]
_____________________ ____|________                          Paul Derbyshire
Programmer & Humanist|ICQ: 10423848|
