Linux-Advocacy Digest #580, Volume #28           Tue, 22 Aug 00 23:13:06 EDT

Contents:
  Re: Windows stability: Alternate shells? (R.E.Ballard ( Rex Ballard ))
  Re: Would a M$ Voluntary Split Save It? (Chad Irby)

----------------------------------------------------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Subject: Re: Windows stability: Alternate shells?
Date: Wed, 23 Aug 2000 02:31:30 GMT

In article <VIno5.7319$[EMAIL PROTECTED]>,
  "Erik Funkenbusch" <[EMAIL PROTECTED]> wrote:
> "R.E.Ballard ( Rex Ballard )" <[EMAIL PROTECTED]> wrote in message
> news:8nqin2$fj5$[EMAIL PROTECTED]...
> > In article <yr0n5.6505$[EMAIL PROTECTED]>,
> >   "Erik Funkenbusch" <[EMAIL PROTECTED]> wrote:
> > > "R.E.Ballard ( Rex Ballard )" <[EMAIL PROTECTED]>
> > > wrote in message
> > > news:8nhokh$v4f$[EMAIL PROTECTED]...
> > > > > You do realize that AT&T and the Bells have always used
> > > > > redundancy.  Switches fail, but they've always had
> > > > > extensive cutovers and the
> > > > > ability to re-route around failures.
> > > >
> > > > Yes.  I was a developer on one of
> > > > the first computerized Directory
> > > > Assistance systems to go nationwide.
> > >
> > > Then why did you insinuate that Unix stability
> > > was the reason phone systems
> > > generally have zero to little downtime?
> >
> > Because modern telephone switching systems are controlled
> > by UNIX-based switches such as the #5ESS and similar switches
> > from Northern Telecom.
> >
> > Nearly every carrier and regional uses UNIX in some flavor for its
> > mission-critical systems.
>
> You didn't answer the question.
> You claimed that the telephone network was
> stable because it uses Unix.

You snipped my original comment.  I did say that UNIX was a key
ingredient in the stability of the telephone system.

>  I said no, it's not.  It's stable because it
> has redundancy, and redundancy on the redundancy.
>  The stability of the OS that's used is virtually irrelevant
>  to how stable the telephone networks are as a whole.

Keep in mind UNIX was developed for and by AT&T within Bell Labs
for years before it became publicly available as a commercial product.

AT&T designed redundancy support into UNIX back when it was still
a baby.  One of the biggest problems in benchmarking UNIX against
other systems was that UNIX would start running processes in parallel,
which gave misleading performance numbers.  When additional processors
were added to an SMP system, the load automatically distributed itself
through what is now called "trivial parallelization".  In addition,
the use of pipelines actually improved performance.  For example:
  cat < file1 | cat > file2
was faster (when file1 and file2 were on different drives) than
  cat < file1 > file2
because each cat was able to read/write without waiting for the
other device to complete.
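
For anyone who hasn't watched the shell at work, here is a rough
sketch in C of what it builds for that pipeline: two processes
joined by a pipe, each free to run while the other blocks.  Error
checking is omitted for brevity, and file1/file2 are just the names
from the example above.

  #include <fcntl.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      int pfd[2];
      char buf[4096];
      ssize_t n;

      pipe(pfd);
      if (fork() == 0) {               /* child plays the first cat  */
          int in = open("file1", O_RDONLY);
          close(pfd[0]);
          while ((n = read(in, buf, sizeof buf)) > 0)
              write(pfd[1], buf, n);   /* fill the pipe ...          */
          close(pfd[1]);
          _exit(0);
      }
      close(pfd[1]);                   /* parent plays the second cat */
      int out = open("file2", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      while ((n = read(pfd[0], buf, sizeof buf)) > 0)
          write(out, buf, n);          /* ... while we drain it       */
      wait(NULL);
      return 0;
  }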

In my earlier post, I pointed out that UNIX was forged "in the fires
of hell" - particularly in high stress environments such as telephone
systems.

NT evolved from a workstation environment, and Microsoft had little
experience dealing with high-reliability, high-performance servers.

Windows 2000 brings the experience Microsoft has gained since 1997
to bear on its design.  Essentially, Windows 2000 is comparable in
quality and experience to AT&T System III.  I would even say that
Windows 2000 is better than AT&T System III or Berkeley 2.1.

By the time AT&T sold UNIX to Novell, UNIX had been forged by
competition from BSD, OSF, and Linux AS WELL AS by deployment in
AT&T's own switching systems and Nortel's switching systems.

> They could be using AmigaOS and still have the same levels of overall
> network stability.

Perhaps, but if you use 100 unreliable machines when you could use
3-5 reliable machines, and the price of each unreliable machine is
only 30% less than that of a reliable machine, it's more cost-effective
to run the 3-5 reliable machines, especially once you add the costs
of staffing and support.

> Unix is used because AT&T defined the market,
> and AT&T owned the OS.
> Now it's just legacy network effects.

> They could have used any number of other
> stable OS's as well.

They could have, and at one time they did.  And when they did,
they made major enhancements to that system as well.

AT&T would have needed a multitasking system capable of running
at least 100 concurrent processes (24 channels in each direction
times two trunks).  They would have needed efficient memory management
(the ability to pass nearly 2 megabytes of data per second with mapped
routing).  They would have needed minimal latency (each delay becomes
more perceptible to telephone users) and consistent timing (jitter
would result in clicks or jerky sounds).

They would need to be able to remotely manage the system over a very
low-bandwidth line (via a modem on one of the T1 channels).  They would
need to be able to do EVERYTHING (including rebooting the system)
without requiring direct manual intervention or interaction with
the physical computer, since many of these switches are in remote
locations which may even be completely under snow during part of the
year, or in deserts during the summer.

Finally, they would have wanted to monitor the servers using remote
management tools that sent statistics back to consoles that could
display rapidly changing information in near-real-time.  And since
we already have an OS, we might as well use UNIX for the monitoring
package as well (hence X11).

Had AT&T adopted MS-DOS back in 1982, MS-DOS would have ended up
looking, acting, and performing much the same as UNIX does today.
The closest anybody came was VMS, but since DEC wasn't letting AT&T
get at the source code (which AT&T needed to achieve that level of
reliability), AT&T stayed with UNIX.  Had K&R known about Multics,
they wouldn't have bothered with UNIX - but Multics would probably
have been more like Open Source too.

Here's the irony.  Bill Gates's first operating system was actually
Xenix, a variant of Version 6 UNIX.  When Gates offered IBM an OS,
he already had Xenix in his back pocket (he'd been selling it to
Tandy for almost 2 years).

Only a few people at IBM and Bill Gates will ever know whether IBM
was actually hoping to get their hands on a UNIX clone without paying
royalties to AT&T (whom they viewed as a competitor), or whether IBM
wanted to keep the PC as dumb as a rock to prevent Xenix machines from
displacing the Series/1, System/360, and possibly even the DOS/VS
and System/370 market.

> > > With that kind of redundancy, you could run a 24/7/365 shop
> > > on Dos 4 (the most unstable version ever released).
> > >  What OS you're using is irrelevant to
> > > the overall reliability in this case.

Again, you would have needed more staff, better monitoring systems,
more resources, more electricity, more communications channels, and
features that MS-DOS 4.0 didn't offer.  Essentially, you would have
had to graft UNIX onto some meta-application.  This was actually tried
by Quarterdeck with its DESQview and DESQview/X, and by DRI with GEM.
Unfortunately, as you pointed out, MS-DOS 4.0 was so unstable that
the only cure was switching to DR-DOS, which Microsoft went to
extraordinary lengths to prevent (Caldera v. Microsoft, 1998).


> > Not entirely.  The one critical requirement was that the entire OS
> > had to be available in Source code format because problems needed
> > to be FIXED in a very short period of time.  When a bug affected
> > one machine, it usually hit all of them very quickly.  The bug
> > had to be fixed before multiple machines were lost.
>
> And you don't think MS provides source code
> liscenses to strategic partners?

I know that they did provide source code to DEC (which may have
had something to do with Gates admitting in an interview that he
found the source code to BASIC in the dumpster of his former
employer, CCC - code which was copyrighted by DEC and for which DEC
was never paid royalties).

At one point, HP got a 'port kit' and found the process of porting
NT such an ordeal that they decided to stick with UNIX on PA-RISC
and sell NT on Intel/AMD/Cyrix boxes only.

Attempts to port NT 3.51 to MIPS, Alpha, PPC, and 68000 went so
badly (for lack of ISV support) that NT 4.0 was only ported to the
Alpha.  And now, Windows 2000 isn't even available on the Alpha.

Meanwhile, back at the ranch, Linux ports to all of the above chips
have actually created markets that made up for some of the lost NT
market share.

> They do.

They do, they have, and for enough of an incentive (like a huge
percentage, say 20%, of your company) they might do it again.
Even Paul Allen is betting against Microsoft (Transmeta, LinuxWorld,
and other Linux-friendly ZDNet publications).  And he's dropping
Microsoft stock like soap in a cold shower.

> > In one case, after an operator had taken down the system 3 times
> > (we had ways of finding these things out) we paid him $500 cash
> > to show us what he did.  We fixed the bug in 7 hours.  At no time
> > during this "test phase" was the entire system down for more than
> > a few minutes, but in production, with money due back to the client
> > if downtime exceeded 15 minutes/year, this was a bug that had to
> > be fixed.  On numerous occasions, it was only because we had
> > the source code to *everything* that we could quickly handle the
> > problems that did come up.
>
> Source availability is irrelevant in such high
> dollar projects, since you
> can most certainly get it.

True.  We paid DEC nearly 1/2 million for source to RT-11.

We paid BSD $25,000 and AT&T another $25,000 for commercial
use of source code that was already available at MIT, CMU, UCB,
and numerous other top schools in the country.

> > > Really?  3 years ago vendors were
> > > not providing Linux drivers.

> > > They were all written by Linux enthusiasts.
> > > Drivers being supplied by vendors is a
> > > very recent thing.
> >
> > Actually, the KA9Q drivers and the
> > Linux drivers were among the first
> > ported to Winsock.  It was easier
> > to plug a simple veneer between the
> > working driver and the winsock API.
> > You could pretty easily slip NDIS,
> > ODI, or KA9Q over any of the packet drivers;
> > Winsock needed NDIS.
> >
> > I believe the NE2000 driver was running on
> > Linux before it was available under Windows 95.
>
> Microsoft shipped a generic NE2000 driver with Windows 95.  Sorry.

But Windows 95 shipped the last weekend of August 1995.  Linux
had a working 32-bit NE2000 driver for Slackware, Yggdrasil, and
SoftLanding as early as mid-1994.

And by 1995, there was another new company - Caldera.  It was formed
by some of the Novell people who were upset that Novell had agreed
to cancel its workstation initiative if Microsoft agreed not to
deploy NT as a server.

Keep in mind this was NT 3.5/3.51.  Some of those folks were backing
Linux as a workstation platform back in 1994.

> ???? Lower level MFC????  Gezus Rex,
> stop while you've got SOME credibility.
> MFC is not an API in any sense of
> Windows (it's a class framework) and
> there's no such thing as a "low level MFC API"
> even if you stretch the
> terminology to loosely accept MFC
> as some sort of API at all.

I have some old C++ manuals that were published in 1992 that read
quite a bit differently from anything you've read.

You see, back in the "really old days", you had to code your windows
and objects at an incredibly low level.  You had to do your own
initialization of all of the variables, you had to do your own
message passing, you even had to create your own queues.  In fact,
you didn't really even have threads.  This was back in the days
of Windows 386 and Windows 3.0.

Of course, UNIX and the X Consortium had just come out with the
R3 release of X11, which contained "Widgets".  Widgets made programming
GUIs much easier.  About the time Windows 3.1 came out, X11R4 was
touting not only X11 but also the resource database (like the registry)
that could be initialized using app-defaults files.  Microsoft
responded by including the .ini files for a new application programmer
interface called the Microsoft Foundation Classes.

Under X11R4, communication between clients such as text editors or
drafting tools and clients such as window managers had been
standardized.  Microsoft came out with its own higher-level API
for Windows 3.1 under Visual C++ version 2 that included OLE objects.
You could still code to MFC, but you were strongly encouraged to
use the OLE interfaces instead.  Eventually, documentation for MFC
was dropped entirely.

When Microsoft was preparing for the release of Windows NT (3.xx),
they were hoping to eliminate the use of shared memory in DLLs (since
NT would give each program its own memory).  This is when COM was
introduced.  It was also introduced in Visual C++ version 3.

Microsoft realized that too many people were making "portability"
toolkits for C++, and decided to offer their own programming language
that would help them create a captive developer base that could be
herded by ignorance instead of by incentives.  They came out with
Visual Basic.  It was in Visual Basic that COM was formalized.

> IPC cannot cause deadlocks in and of itself.
> If your apps don't use any kind of synchronization
> (such as a semaphore or mutex), then the deadlock is
> your own fault.

Precisely!  This is exactly the problem.  The OS does so little
that the applications begin depending on shared memory to communicate
between threads and between processes.

Again, a bit of history:

With MS-DOS, you had only one process.  That process could create
a Terminate and Stay Resident program that would copy executable code
into the top of the memory area, map an interrupt to that code, and
allow the system to execute this "hidden" code by firing the interrupt.

At first, TSRs were trivial programs, like print spoolers.  Eventually,
they became secondary applications such as Borland's Sidekick.

When the 80286 came out, Intel had asked DRI to come out with a
multitasking operating system, and they made a number of changes
at DRI's request.  Unfortunately, the masks on the first million
chips had a defect that prevented the use of protected virtual memory.

Later AMD corrected the fault, but by then, DRI was more interested
in the 80386 and Microsoft was focused on OS/2 (or diverting OS/2 money
into Microsoft Windows).

Although OS/2 had a number of excellent interprocess communications
features for supporting communication between independent processes,
Microsoft had opted to go with a simpler solution (since they needed
to be able to run on 80286 machines as well as 80386 machines).
As a result, Microsoft ignored most of the interprocess communications
problem and left ISVs to create their own.  Most of the ISVs solved
the problem by using shared memory (static variables within a DLL)
to pass information between separately started applications.  Under
Windows 3.1 this worked because each Windows application was
essentially a subthread of the main MS-DOS Win31 process.
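
For contrast, the UNIX equivalent of that trick is an explicit,
OS-managed facility rather than a side effect of how DLLs are loaded.
A minimal sketch using System V shared memory (illustrative only;
real code checks every return value, and the wait() is standing in
for proper synchronization):

  #include <stdio.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      /* one page of memory that both processes can map */
      int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
      char *shared = shmat(id, NULL, 0);

      if (fork() == 0) {                       /* "application #1" */
          strcpy(shared, "hello from the child");
          _exit(0);
      }
      wait(NULL);                              /* "application #2" */
      printf("parent read: %s\n", shared);
      shmdt(shared);
      shmctl(id, IPC_RMID, NULL);
      return 0;
  }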

NT 3.5 broke so many of these applications that Microsoft had to
tear down the walls between the processes.  The problem was two-fold.
First, NT 3.5x protected ALL memory, so that NO memory could be shared
between applications.  Second, context switching was so slow
(originally only about 180 process-level switches per second) that
even UNIX applications didn't port well.

Windows 95 dropped some of the memory protection (which hurt
reliability, but for a game/school machine, who cares?).  This
made it possible to run more of the old Windows 3.1 applications,
while at the same time, giving Microsoft the market share it needed
to push ISVs into rewriting their code for NT 4.0 compatibility.

NT 4.0 provided strict memory management, but also allowed threads
of Windows 95 applications to use the "shared memory trick" through
special coding.  Although it was strongly discouraged, developers
could create the equivalent of their shared memory (now implemented
in VxDs on Win95 but not available under NT's protected memory)
by creating a "pseudo-driver" as an "executable library".
Unfortunately, each OCX was still "home grown", which meant that
there was still no standard for IPC.

At this point, Microsoft proposed the use of DCOM as a means of
communicating between fully memory-protected processes (as you said,
using a loopback through localhost).  The performance was so bad
that most developers either stuck with home-grown IPC, used third-party
IPC (MQSeries), or just linked everything into DLLs tied together by
a small exe, usually written in Visual Basic.

Certainly the biggest player in this IPC market has been MQSeries.
They did very well because they provided a means to quickly get
messages passed between processes on the same machine (local queue
manager), or between machines, with little more than an alias.
This allowed developers and vendors to split large, complex
applications into smaller cooperating processes.
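
MQSeries itself is proprietary, but the basic pattern is easy to
show with the stock System V message queues that every UNIX of this
era ships.  This is only a sketch of the idea, NOT the MQSeries API,
and the message text is made up:

  #include <stdio.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct msg { long type; char text[128]; };

  int main(void)
  {
      /* sender and receiver share nothing but this queue id */
      int q = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
      struct msg m = { 1, "reconcile order #42" };

      if (fork() == 0) {                      /* producer process */
          msgsnd(q, &m, sizeof m.text, 0);
          _exit(0);
      }
      msgrcv(q, &m, sizeof m.text, 0, 0);     /* consumer process */
      printf("got: %s\n", m.text);
      wait(NULL);
      msgctl(q, IPC_RMID, NULL);
      return 0;
  }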


> Not with the OS.  Sorry, you just don't know what you're
> talking about here.
>
> All of this is irrelevant though, since you claimed that these were
> added later.  You're backpedaling, Rex.
>
> > Eventually, the ISVs revised all of their applications to support
> > the new APIs, and usually had to distribute them freely.  Many
> > vendors also opted to load the older versions of some of the DLLs.
>
> And which specific API's were those?  Come on, name them.
>
> > > DCOM is much slower because it's used between machines,
> > > over a network.  Not between processes on the same machine.
> >
> > It's not practical to use DCOM between processes on the same
> > machine, but it is possible.  When you have two processes which
> > are independent, they can be connected via in-process, out-of-process,
> > or networked objects.  When you wanted independence to the point of
> > being able to change the components at run-time, you had to at least
> > go out-of-process.  Behind the covers, you were running DCOM over
> > Winsock.
>
> You can change components at runtime with in-proc DLL's as well.
> Ever heard of CoCreateObject?
>
> > > > Again, Microsoft is aware of these problems in NT, and has made
> > > > a number of changes to Windows 2000 to eliminate them.  In some
> > > > cases, they preserved the Windows APIs while cleaning up the
> > > > back-end.  In other cases, they introduced new APIs like COM+,
> > > > MTS, and MSMQ.
> > >
> > > Again, you have no idea what COM+, MTS, or MSMQ are, do you?
> > > They're transactioning technologies for use in large scale
> > > database applications.  They have nothing to do with IPC in
> > > general.
> >
> > MSMQ can be used for IPC, similar to MQSeries (which I know a GREAT
> > DEAL about).  MTS provides protection between the threads when
> > hundreds of connections are made to a single server process, similar
> > to the way UNIX or Linux IP daemons fork processes and let the child
> > handle the accepted connection.
> >
> > COM+ is an abstraction of COM for C++ that hides most of the
> > ugliness of thread management, object management, and DCOM from the
> > application programmer.  Essentially it makes the C++ version look
> > more like the friendlier VB and J++ versions.
> >
> > At least that's what this here book from Alex Homer and David
> > Sussman says (boiling down a 493-page book into 3 paragraphs).
>
> COM+ provides those abstractions, but that's not what it *IS*.  COM+
> is basically the integration of COM, MTS, and MSMQ into a single API,
> with extensions for C++ as well.  How else could you use COM+ from VB
> then, if it was only a C++ technology?
>
> > > Incompatible with any other platform?  I guess that's why COM and
> > > DCOM exist for Solaris and HP/UX.  You should know that almost
> > > nothing is incompatible with another platform, it just takes
> > > someone to write it.
> >
> > Only a very limited subset of COM and DCOM exists for these
> > platforms.  Microsoft first farmed it out, and later took it back,
> > partly because the third-party vendor was providing too many of the
> > hooks needed for CORBA Client GUI objects.
>
> CORBA Client GUI objects... CORBA didn't even have a component
> architecture until CORBA 3, which came very recently.
>
> > Many of the features of COM, such as drag-and-drop, DDE, and
> > embedding, are not supported for X11 interfaces.
>
> Because those are desktop features, not distributed features.  They
> don't work in DCOM at all, even between Windows machines.
>
> > Gnome and KDE provide similar
> > functions that can be mapped via CORBA, but not without more than
> > Microsoft makes available.  Otherwise, it would be trivial to run
> > Microsoft COM/DCOM applications via Linux/UNIX or X11 clients.
>
> Oh yeah, sure.  (that's sarcasm, btw).
>
> > > > In UNIX a much higher percentage of the executable and static
> > > > memory is shared by all processes.  Furthermore, a larger
> > > > percentage of the buffer memory is also shared.
> > >
> > > And how do you quantify this statement?
> >
> > Compare memory utilization and the number of processes on UNIX vs.
> > the number of processes on Windows; the number of threads on UNIX
> > vs. the number of threads on Windows.  Compare the memory
> > utilization for UNIX vs. the memory utilization for Windows for the
> > same number of processes.  It's not at all unusual to see 200 to
> > 300 processes on a Linux workstation, and 2000 to 3000 processes on
> > a Linux server.  The statistics on both machines can be easily
> > monitored.
>
> And are completely irrelevant.  Windows applications are much more
> feature-rich than their typical Unix counterparts, and strictly
> ported applications use similar amounts of memory.
>
> > Of course, we're comparing apples and oranges.  Linux runs
> > thousands of processes that all call a very small number of highly
> > optimised (to reduce thrashing, swapping, and cache misses) library
> > routines that are rarely paged out.  Each process has only a few
> > kbytes of unique code.  Many contain only a few hundred.  The
> > "biggies" are X (nearly 12 meg) and ld-linux.so (the sharable
> > version of the Motif library used by Netscape).
>
> I guess you've never heard of DLL's under Windows.
>
> > > NT had "Unix way" pipes from day one.  It didn't "eventually" do
> > > it that way.  My first edition "Inside Windows NT" by Helen Custer
> > > (1992) describes them in detail.  You're confusing DOS and Win32.
> > > Windows 95 was not the first Win32 architecture.
> >
> > Correct.  NT 3.51, if you considered that a viable and successful
> > product (which didn't sell very well as anything other than a
> > file-server), then you *could* say that NT has had UNIX-style pipes
> > since 1994 - after over 10 years of making pipes unworkable.  The
> > key is that it wasn't considered a strategic means of communicating
> > between processes.
>
> No.  The first version of NT was 3.1, not 3.51.  And that was 1993.
> 3.51 was released in 1995, btw.
>
> I wish I had a dime for every matter-of-fact statement you've made,
> only to back down when challenged.
>
> > Microsoft discouraged the use of direct access to pipes and sockets.
> > They discouraged the use of streams.  They preferred that programmers
> > call their proprietary APIs which would grab values from the stream,
> > or put values onto the stream, but which gave the programmer little
> > or no control of the data being streamed.  Even ISAPI tries to hide
> > the formats of HTTP and HTML.
>
> While MS provides these features, they're there to help the developer
> out.  Nothing stops you from doing the work yourself (as you would
> have to do under Unix).
>
> > Compare this to UNIX, where parsers and scanners like Yacc and PERL
> > give the application programmer direct access and control over both
> > input streams and output streams.  Compare this to CORBA, where
> > backward compatibility is carefully protected.
>
> Yacc and PERL exist for Windows, and have for quite some time.  Such
> scripts have always been slow, even under Unix.  In fact, this is one
> of the main reasons Apache provides modules, to prevent that.
>
> > Backward compatibility of Windows is an oxymoron, and it's contrary
> > to the revenue and marketing interests of Microsoft's current
> > revenue structure.  If Microsoft DOES start selling service
> > contracts, backward compatibility will suddenly become a VERY
> > strategic benefit.  Personally, as someone who does still have to
> > deal with NT and Microsoft pretty regularly, I'd LOVE to see
> > Microsoft adopt a support-based revenue model.  It would make my
> > job as an integrator and architect MUCH easier.
>
> Microsoft spends an inordinate amount of time making broken apps work
> on new versions of their OS's.  Check out the compatibility section
> of your local Windows 9x machine.
>
> > > > Actually, the "UNIX way" is to have two blocks of memory.  This
> > > > way, the sender can fill blocks WHILE the receiver is draining
> > > > them.
> > > >
> > > > Unfortunately, unless you have the NT resource kit, most Windows
> > > > programs are still written to a paradigm based on huge
> > > > monolithic objects that must be read into memory in their
> > > > entirety before the methods of the object can be invoked to
> > > > modify the object.  If you have a large object such as a BMP
> > > > file, or a Word document, this can involve megabytes between
> > > > the processes.
> > >
> > > What the hell are you talking about?
> > > What does the NT resource kit have to do with anything?
> >
> > I first found out about it when investigating the possibility of
> > getting an MCSE.  Everyone I talked to, online and offline, said
> > that the first step was to read the NT resource kit cover to cover
> > (pretty good reading, some of Microsoft's best).
> >
> > Of course, the NT resource kit also has all of the POSIX tools, and
> > a bunch of UNIX-like tools that I also found VERY Interesting.
> > This is ONE Microsoft product I was GLAD I purchased.  To me,
> > NT isn't complete without the standard NT resource kit installation.
>
> That doesn't explain your statement that:
> > > > Unfortunately, unless you have the NT resource kit, most Windows
> > > > programs are still written to a paradigm based on huge
> > > > monolithic objects that must be read into memory in their
> > > > entirety before the methods of the object can be invoked to
> > > > modify the object.
>
> Please explain how you came to this conclusion.
>
> > > Please, pray tell, let us know how Unix changes all this.
> >
> > UNIX used delimited streams from the very beginning, and established
> > a set of standards for creating, passing, and parsing delimited
> > streams that rarely required more than a few hundred bytes of
> > memory.  I've created stream processors that handled reconciliation,
> > auditing, and summarization of multiple 30-gigabyte tables with
> > only a few hundred lines of PERL code.
>
> All of which is doable under NT.
>
> > Even though PERL is fully available on NT and Linux (though the NT
> > version is a bit crippled by the lack of many UNIX functions),
> > Microsoft has strongly discouraged the use of it, promoting VBScript
> > instead.
>
> Please provide one reference.  Just one, of Microsoft discouraging
> Perl's use.
>
> HINT:  Not advocating something is not discouraging it.
>
> > > > Sure, some of the files are actually compressed binary
> > > > executables.  More building blocks.  Most of the packages,
> > > > however, are essentially a combination of simple components
> > > > combined with some PYTHON, PERL, or TCL scripts along with
> > > > some BASH scripts.
> > >
> > > I'm not talking about the packages.
> > > I'm talking about the database files used by RPM.
> >
> > Notice that these are orthogonal.  You can dump the database in text
> > format if you'd like.
>
> Totally irrelevant.  I said, if text files are so much superior on
> Unix, why are the Red Hat package databases stored in binary format?

--
Rex Ballard - I/T Architect, MIS Director
Linux Advocate, Internet Pioneer
http://www.open4success.com
Linux - 42 million satisfied users worldwide
and growing at over 5%/month! (recalibrated 8/2/00)


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Chad Irby <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.os.os2.advocacy,comp.sys.mac.advocacy
Subject: Re: Would a M$ Voluntary Split Save It?
Date: Wed, 23 Aug 2000 02:28:58 GMT

[EMAIL PROTECTED] wrote:

> Said Chad Irby in comp.os.linux.advocacy; 
>    [...]
> >And in many areas, they have a monopoly.  There are large areas of the 
> >US where Coke is the only major soft drink you can get.
> 
> "Major".  That one word is what prevents Coke from *being* a monopoly,
> which is a federal offense. 

I deleted the entire rest of your post, because it's based on this one 
fallacy (that you keep repeating, for some reason, even after a 
half-dozen people have given you examples of how it's just plain not 
true).

> 'Monopoly' is *not* a measure of market share.  It is, I
> think, most comprehensively translated as "not willing to compete", to
> be perfectly honest.  Sure, the root is in the word 'one', and the board
> game ends when someone owns *all* the property.  But recall that the
> term in that board game "monopoly" was used, not to refer to the winner,
> but to anyone who owned all three of one color property.  Why?  Because
> if they "had a monopoly", they could jack the price up.

Right.  In that local market, they had a monopoly.  And by jacking that 
price up, they could abuse that monopoly power.  In the board game, it's 
okay.  In the real world, that's illegal.

Thanks for finally figuring that out.

-- 

Chad Irby         \ My greatest fear: that future generations will,
[EMAIL PROTECTED]   \ for some reason, refer to me as an "optimist."

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
