Linux-Advocacy Digest #642, Volume #25           Wed, 15 Mar 00 19:13:04 EST

Contents:
  Linux Technology Valuation ("ax")
  Re: Open Software Reliability (R.E.Ballard ( Rex Ballard ))

----------------------------------------------------------------------------

From: "ax" <[EMAIL PROTECTED]>
Subject: Linux Technology Valuation
Date: Wed, 15 Mar 2000 21:29:50 GMT

I am trying to figure out how Linux-related technologies are currently
valued. I thought the P/E ratio of Linux stocks might give me some clue,
but I didn't find P/E information for Linux stocks such as RHAT, LNUX,
etc.  Any suggestions on where I can find such information?  Is there
other useful information for Linux technology valuation?



------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Subject: Re: Open Software Reliability
Date: Wed, 15 Mar 2000 22:59:47 GMT

In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Terry Murphy) wrote:
> On Tue, 14 Mar 2000 03:03:48 GMT, R.E.Ballard ( Rex Ballard )
> <[EMAIL PROTECTED]> wrote:
>
> >Actually, more than a claim. UNIX, and subsequently Linux
> >have set unprecedented levels of reliability. Much of this
> >due to the combination of AT&T culture and Open Source culture.
>
> Unix is one of the most notoriously unreliable operating systems
> in the history of computing.

It depends on what period of history we are discussing.  Sure, in
1982, when Version 7 UNIX on a PDP-11/70 was running with 75-100
users, it could get pretty bumpy.  We'd have 2-3 unscheduled breaks
every week.

When BSD 4.0 came out for the VAX-11/780, we'd still have 2-3 outages
each week.

> Its initial implementations did not
> even run on hardware with memory protection.

True of the earliest versions.  By 1978, Version 6 and Version 7
had memory protection, mostly via segmentation.  This protection
state was "swapped" at each context switch.

Some UNIX-like systems that didn't have memory management included
OS-9 (not really UNIX, but similar), Mark Williams Coherent, and
Minix.  They were good laboratory experiments, but not very popular.

> For the first 20 years
> of its existence it had to be
> constantly rebooted and its reliability
> was ridiculed compared to the mainframes.

Actually, it was only the first 15 years.  UNIX version 1 came out
in 1968 (internal use only).  By 1984 - with BSD 4.2, running on
a VAX 11/780, UNIX was getting pretty robust.  By 1988, there were
several versions of UNIX, including BSD 4.3, SysVR4, and OSF/1, and
hybrids such as SunOS and AIX.  In 1984, AT&T was using UNIX in its
#5ESS switch, and Northern Telecom was using UNIX in most of its
switches.  MCI used UNIX in much of its network as well.

Mainframes still have a reputation for reliability.  In VM systems,
it's quite easy to IPL a region without rebooting the system. CICS
systems recover almost automatically.  Of course, the biggest selling
point for MVS is that you get excellent service from IBM.  If MVS
had to stand on the kind of service that most UNIX or PC vendors
provide, it would be out of the shop in a jiffy.  Mainframe systems
are still the primary platform for clearing cash transactions of any
kind.  IBM has been acutely aware of the need to provide real-time
service and has offered integration tools such as MQSeries to provide
a more real-time interface as opposed to the batch interfaces and
dedicated terminal interfaces that have traditionally been the mainstay
of mainframes.  Much of the mainframe's reliability comes from a series
of subsystems that control the load through the system.

UNIX tends to work best where loads are real-time and change
frequently.  UNIX can handle very unpredictable loads quite
effectively; usually the worst that happens is that things slow down.

> Even today, the unreliability stands.

Can you quantify this unreliability?

> eBay has lost _b_illions in market capitalization due to bugs in Unix.

Do you have details?  I know that a number of sites have failed a number
of times.  Many of these sites fail due to integration issues.  Often
the problems result from 20%/month traffic growth.  This has always
been one of the big issues for UNIX.

> Where I work, our Linux servers need to be constantly rebooted
> because the NFS implementation is so riddled with bugs,

It sounds like you may have some configuration problems.  I realise
that NFS3 (available on the FreeBSD distributions) is a bit more
reliable, but NFS is normally pretty robust unless you have an
overloaded network.  If you have too many lost packets, the clients
keep reconnecting and the NFS servers run out of file handles.

> and there is no way to fix things besides reboot.

You might need to put some clean-up into a cron job.  You might
also need to implement ulimits.  Linux does have a nasty habit
of letting a process grab all the physical and virtual memory
available.

Sometimes you can simply send a SIGHUP to the NFS daemon.
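
As a minimal sketch of both ideas from C (the 512 MB cap and the PID
1234 are made-up illustrative values, not NFS-specific settings, and
RLIMIT_AS assumes a Linux-style resource limit):

    /* Sketch: cap a process's address space the way ulimit does,
     * then nudge a daemon with SIGHUP.  512 MB and PID 1234 are
     * placeholder values only. */
    #include <sys/resource.h>
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;
        rl.rlim_cur = rl.rlim_max = 512L * 1024 * 1024;  /* 512 MB */

        /* Applies to this process and anything it exec()s next. */
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit");

        /* The C equivalent of "kill -HUP 1234". */
        if (kill(1234, SIGHUP) != 0)
            perror("kill");

        return 0;
    }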

> At least Unix reboots fast - it
> needs to be rebooted so often that it better be.



> AT&T culture, and open source culture, aren't engineering entities,

Let me see, you are saying that AT&T - during the 1980s and 1990s
was a research company and wasn't interested in providing commercial
services?  Did they donate all of that long distance time?  Perhaps
they shouldn't have been divested? :-)

I've done work for AT&T and the Baby Bells on a number of occasions
and I can only say that they have some of the highest quality standards
in the industry.

As I said in my earlier posting (which you clipped to suit your
purposes), the "Open Source Culture" consisted primarily of system
administrators, service providers, and corporate IT managers.  These
were mostly users, not vendors.

Remember, back in the 1980s the prevailing thinking came from "In
Search of Excellence", which suggested that large corporations "stick
to the knitting".  This meant that a telephone company shouldn't get
into the software business, and an insurance company should stick to
insurance and stay out of selling software.

The problem was that very often, system administrators and operators
had problems that couldn't be solved with commercial "off the shelf"
solutions.  In many cases, the problem was "ad hoc", or addressed
such a small niche market that big companies like Microsoft, Lotus,
or dBase didn't even want to pursue the market.  Hardware vendors
such as IBM, DEC, and HP/Apollo wanted to offer "floor to ceiling"
solutions - we can fix your problem, but we will have total control
of your IT environment.

> they are research entities, and are more interested in creating
> research vehicles than products.

Actually, they were operations groups, more interested in keeping
the system running than in creating high-gloss, 256-color graphical
interfaces to "grep".

> The products are not engineered to
> be bulletproof or reliable.

Actually, they aren't engineered to be "pretty".  By the time
UNIX had a common graphical X11 interface, there were thousands
of command line components (programs) that used STDIO.  It wasn't
until Linux that there was any serious attempt to create pretty
interfaces and GUI access to the controls.  Even today, tools like
Linuxconf and KDE Config tools are essentially glorified editors.

> _Today_, VMS has the following markets:

VMS is a really great operating system.  It was created by DEC
to compete with UNIX back in 1982-4.  In many cases, DEC went
to extraordinary lengths to drive customers back to VMS and away
from UNIX.  There were even cases where the DEC representative
would actually come in and reformat the hard drives and reinstall
VMS over a formerly UNIX disk-pack.  Eventually, companies put
armed guards at the entrances and would switch packs so that the
DEC rep could apply his patches and leave - after which the
administrator would replace the UNIX pack.  I remember several
months when we had unscheduled outages due to DEC service calls.

> 90% of worldwide microprocessor production
> Runs 17 of the world's 20 largest stock
> exchanges (and over 100 worldwide)

VMS or MVS?  Most of the largest stock exchanges negotiate
the trades on UNIX systems and the confirmed trade is sent
to MVS in batch mode.  In the U.S., ADP uses MVS systems to
settle the transactions.  Until ADP sends the confirmations
to the clearing houses, the trade is merely a promise, not
an actual movement of cash.

Again, with day-trading and other forms of speculation becoming
more and more common, we are seeing more pressure to have real-time
clearing systems.  In many cases, the clearing houses are fronting
the interface to the ADP systems with UNIX.

> Handles 60% of electronic bank-to-bank transacations

This would definitely be MVS.  MVS is still the leader in bank-to-bank
transactions.  AS/400 is a close second.  UNIX is generally not used
because most of the bank-to-bank transaction clearing network was based
on APPC instead of TCP/IP.

> Runs 30 top telecommunications billing systems worldwide

This could be VMS.  MVS has the lion's share of the billing systems
in the world because billing is primarily a batch job environment.
VMS is a good platform for collecting summary records.  But UNIX
is generally used to control the switches, monitor the routing,
duration, and rate of the calls.  The UNIX system then sends the
summary records to the VMS or MVS system.  Like it or not, COBOL
is very good for programming some of the really weird business
rules while at the same time not getting lost in structure
manipulation.

> Source: Wall Street Journal 20-FEB-1999

Dow Jones gets its information from other sources as well, usually
from PR Newswire, and these articles are usually press releases.  I
suppose it's possible that Compaq was trying to promote VMS, but
the remainder of the quote appears to be more true of MVS.

> And so on and so forth. You can throw around all of your trendy,
> teeny-bopper, "open source is the future" nonsense, but that does
> not change what real companies are using in serious installations.
> And it is not open source. Heck, open source can barely keep afloat
> with Microsoft let alone the important things.

UNIX has been a very effective competitor in nearly every niche
it has entered.  Today, 90% of all minicomputers run UNIX (the
remainder run VMS), 80% of all supercomputers run UNIX (the remainder
run MVS or OS/390), and 70% of all microcomputer servers run variants
of UNIX (the remainder run a combination of NetWare and NT).  About 70%
of all engineering workstations run UNIX.  Linux is the first serious
UNIX entry into the PC workstation market.  It has only been in that
market since July of 1999, and it has already created a distinct niche
market that represents about 5% of the end-user market.

What makes all of this even more remarkable is that UNIX has very
little vendor support.  UNIX is almost entirely user supported.
For many years companies like DEC, IBM, and HP nominally offered
UNIX systems, but the commission structures, the service structures,
and the profit structures were all geared to the proprietary products.

As more of these companies shift away from a product- and
license-oriented structure, there is more focus on UNIX.  Even IBM
likes UNIX, because it's easy for Global Services people to whip up
solutions and integrations in a relatively short period of time.

I still tend to view Linux primarily as a prototyping and pilot
project tool.  I can quickly implement a solution without the
encumbrances of network connectivity, security configurations, and
permissioning, and can prepare a solution that can be rapidly moved
into the more structured environment with less effort.

> >This is the first big myth that needs to be exploded. For the most
> >part, open source software is developed by system administrators
> >for system administrators.
>
> And it is preferable to have software which controls nuclear reactors
> written by sysadmins (as opposed to professional engineers) for what
> reason? These makes Unix more reliable than the more robust operating
> systems, such as VMS, why?

It's nice to have components written by engineers combined to meet
the custom needs of an organization.  Too often, all that's needed
is some very minor customization of a relatively simple and complete
solution.  You could bring in a vendor team to create custom software
that will solve the custom application problem in a shrink-wrapped
solution - but the cost can become prohibitive.

I've had a number of situations where VMS programs developed to
meet the needs of a company in 1982 were still being "tweaked and
tuned" almost daily, to meet the needs of the same company in 1994.
The problem was that the back-end systems had changed from BSC to
TCP/IP over Frame Relay, the content had been changed from proprietary
tagged VT100 screens to HTML, and the editors were moving from VT100
emulation to SGML.

The poor guy who was responsible for managing the VMS system couldn't
be promoted for 7 years because he was the only one who understood
the programs, the applications, and the customer's needs.  Worse,
every time the editors changed the output format, this guy had to
deal with irate customers who had to change their HLLAPI screen
scrapers to capture the new format.

Ironically, it was cheaper to reimplement everything on UNIX than it
was to continue to try to add yet another band-aid to the legacy
system.  Even DEC didn't want to deal with the losses of yet another
service contract.

I also think the Tandem system is pretty cool, but again you have
a single vendor with a single support source, and they still have
to make a profit for their investors.  As a result, you pay several
times more for the hardware, the software, and the service contracts
than you do in a UNIX shop.

> >The original "open source project" was AT&T UNIX. AT&T donated
> >version 6 UNIX, in source code format to these Universities and
> >colleges. Prior to this type of publication it was considered too
> >complex to be reliable.
>
> Other operating systems which are considerably
> more complex than Unix have
> no problem being ultra-reliable.

In 1968, the most complex operating systems were OS/360, DOS/VS,
and RSTS.  If Brian Kernighan had known about Multics he never would
have bothered to write UNIX.

By 1983, when AT&T released Version 7 as a commercial product,
Smalltalk was much more sophisticated, but it was also too tightly
integrated to the applications.

My personal favorite was FORTH.  FORTH was the operating system,
the programming language, the application infrastructure, and
even the GUI if you put it on a PC.  The CREATE/DOES> construct
was very powerful.  For many years FORTH was used to control
robots, and even to control the ignition and fuel injection in
automobiles.  The biggest problem was that only a few thousand
programmers were ever needed.  I would guess that there are still
only a few thousand in the world, and they don't need to advertise.
If you wanted to count deployments, there have probably been 4 times
more FORTH systems deployed - in everything from VCRs to microwaves
to automobiles to washing machines to telephones - than all of the
Microsoft PCs ever sold.  Even PostScript printers use a FORTH-like
language.

Unfortunately, FORTH is even harder to learn and deal with than
UNIX.  The language itself is easy - only about 64 primitives -
but the libraries vary for each system, and it can take several
weeks or months to learn them.  Besides, it's so easy to burn a
FORTH program into ROM that most people never even think about
debugging it.

> Why did Unix have so many problems being
> reliable circa Version 6 (when it was EXTREMELY primitive -
> it didn't even support networking

Actually, Version 6 did support a very primitive form of networking
in the form of RS-232 connections.  It also supported UUCP and UUX.

It's interesting to note that the Version 6 source and commentary
read very much like later descriptions of objects and object-oriented
design.  The bdevsw and cdevsw tables were the original "objects"
whose entries were called like methods.
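
To make the analogy concrete, here is a simplified sketch in modern C
of how a device switch dispatches through function pointers.  The
struct layout and names are illustrative, not the actual V6
declarations:

    /* Sketch of the device-switch idea: a table of function pointers,
     * indexed by major device number, dispatching "method" calls.
     * Names and layout are illustrative, not the real V6 code.  */
    #include <stdio.h>

    struct devsw {
        const char *name;
        int (*open)(int minor);
        int (*close)(int minor);
    };

    static int rk_open(int minor)  { printf("rk%d open\n", minor);  return 0; }
    static int rk_close(int minor) { printf("rk%d close\n", minor); return 0; }

    static struct devsw bdevsw[] = {
        { "rk", rk_open, rk_close },    /* major 0: a disk driver, say */
    };

    int main(void)
    {
        int major = 0, minor = 1;
        bdevsw[major].open(minor);      /* a "method call" via the table */
        bdevsw[major].close(minor);
        return 0;
    }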

> and advanced features such as clustering were
> and still are -- a huge way off)?

Actually, CCI had UNIX clustering in its Power 5/55 and Power 6/55
systems back in 1984.  They used dual ethernets, put routers
on each box, and put a semaphore server on one dedicated box.

As I understand it, CCI gave DEC much of its clustering technology
as part of its effort to get its DAS/C system (based on the PDP-11).

One of the requirements that CCI had was that they needed the source
code to the Operating System to implement clustering.  CCI got source
and DEC got clustering.

> This is because Unix was an undesigned,
> hack, with poor/no engineering principles.

Actually, this might have been true of Version 6.  The BSD
project was dominated by MSEE and BSEE students and PhDs in
electrical engineering.  This included people at MIT, RIT,
Berkeley, Stanford, Harvard, and dozens of other institutions.

One of the biggest groups of contributors was the alumni, who
had brought UNIX into the IT shops of companies whose IT needs
had gone beyond the capabilities of batch systems and DCL.  They
were using UNIX to enhance customer service organizations, to
provide real-time access to customer records, and to provide
answers to problems.  They were also using UNIX to control
near-real-time environments like telecommunications systems,
transportation systems, and distribution systems.

> VMS was much more complex in 1983 than Unix is today

If you are comparing kernel to kernel, you are probably correct.
Part of the power of UNIX is the relative simplicity of the kernel.
Many chided Linus because he made his kernel too complicated.  Of
course, Linus needed to be able to debug his kernel using cheap PC
hardware, because he didn't have access to the logic analysers, digital
oscilloscopes, and protocol analysers used to debug multithreaded
microkernels.

The power of UNIX is in the libraries, the stdio functionality,
and the use of datastreams that can be used in a "flow".
Even the X11 interface is a stream of commands that are pipelined
to the server.  This adds some real power when dealing with
distributed environments.
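
For example, here is the whole pattern in miniature - a classic stdio
filter that can be dropped anywhere into a pipeline (the pipeline in
the comment is just an example; the names are placeholders):

    /* Minimal stdio filter: uppercase stdin onto stdout.  Any
     * program built this way composes in a "flow", e.g.:
     *     cat report.txt | ./upper | grep UNIX                   */
    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }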

> (and perhaps about 10x as feature-ful as Version
> 6), but it had/has no problems being dominantly
> more reliable than Unix is.

Probably true.  VMS had RT-11 emulation, which meant that RT-11
programs, including those written in DEC BASIC, could be run on VMS.

As for reliability: UNIX bugs aren't something you can sweep
under the table and hide from the public with nondisclosure agreements.
In fact, UNIX bugs are announced on a newsgroup, and fixes come back
via the same newsgroup.

With VMS, the customer was only allowed to have one qualified person
make a call to a qualified service person, and everything was treated
as confidential and proprietary.  Without a court subpoena granting
broad and sweeping authority to examine the customer service records
since 1980, it would be hard to substantiate whether there were more
or fewer bugs.  We just have to take the vendor's word for it.

Of course, there is also the stock, the takeover, and the layoffs,
which don't indicate that VMS was a smashing commercial success in
spite of the fact that DEC promoted VMS every way that it could.

> If the original development of Unix had any serious engineering
> done it would have been reliable, but it wasn't, and that
> is why it is so unreliable today.

UNIX unreliable today?  Again, you would need to publish private
and confidential availability records for VMS in order to make
a comparison.  Even what to measure would be up for grabs.  In
some cases, the user interface is down while the batch jobs are
running.

> >Keep it mind that even today, the average UNIX or Linux based server
> >supports no less than 100 concurrent users. Many support as many as
> >1000 concurrent connections per processor. Even the most trivial bug
> >can become incredibly costly. As a result, the control has become
> >quite sophisticated.
>
> Proof please? Which companies deploy Linux servers which supports
> 1000 concurrent users (i.e. logins, not HTTP requests)?

I do know that Linux has been used by ISPs.  Typically, you could
run 100 users on a T1 circuit on a 486/50, with each running
a shell account.  This really isn't that impressive.  That VAX-11/780
ran 1 MIPS, had 1 MB of RAM, and held 1.2 GB in four 300-megabyte
drives on 14-inch platters with access times of 65 ms.  A modern Linux
PC with an AMD K6/2-300 processor runs 600 MIPS, has 128 MB of RAM,
and runs dual 10 GB 7200 RPM drives with built-in cache for 6 ms
access times, and 100 mbit ethernet connected to a 40 mbit OC3 line.

> >In 1984, the military tried to get all government programmers to
> >use ADA. Their hope was that the software produced would be so
> >reliable that they could use it to guide nuclear missles from space.
> >
> >Eventually, the military began to see that the open source community
> >was achieving - for a fraction of the cost, what the military had
> >spent nearly $1 trillion over several years to achieve.
>
> Proof please? Please show me documentation that the US military is
> using open source software to guide nuclear missiles from space (and,
> I mean the software running on the missile, or control centers, not
> some print server in the back room of a design center).

How much demand do you see for ADA programmers?

How much demand do you see for VMS programmers?

It's possible that the ADA/VMS programmers are like the FORTH
programmers.  There are a few thousand left in the country and
they do all the work.  Last I heard, DEC/Compaq had laid off most
of their VMS people, and most of the VMS people were frantically
trying to learn UNIX.

The demand for C, C++, and Perl programmers is still pretty good.
Java is a much easier language to learn, but thread management is
more difficult.  Many companies are using Java compilers that
can generate binaries that can be fork()ed.

> >Keep in mind, that UNIX (all that code that get included with the
> >Linux kernel) has been used to control Nuclear Reactors,
> > manage nearly
> >all telecommunications traffic, provide the services of the Web,
> >distributed financial information, and even clear real-time financial
> >transactions such as those conducted on the stock exchanges.
>
> Proof please? Please tell me which Nuclear Reactors,
> which telecomunnications traffic centers,

BBN, MCI, Sprint, USWest, ...

All of the real-time feedback is monitored via UNIX systems and
most of the controllers are managed via UNIX systems.

> and which stock exchanges run on any
> brand of Unix (and, no, a print server in a back room doesn't count).

The NASD specifies UNIX as the platform to be used for posting
trades to the exchange.  Most smaller sites use SCO and
larger offices use Solaris.

Personally, I'm not crazy about SCO, but they do a good job of
providing remote maintenance and management tools.

> As for "the services of the Web", yes, Unix does indeed rule that,
> but it is also one of the most UNRELIABLE computer services in the
> history.

The servers are quite reliable.  Most often the problem has been
that the growth rate has exceeded the capacity of the networking
infrastructure.

> C.f. "the world wide wait",

This was at its peak when the bulk of the internet was still X.25
systems such as Tymnet, Telenet, and SprintNet.  The first set of
problems came when they started doing PPP over X.3 PADs.  The reply
packets had to time out.  Eventually, they replaced the PADs with
BSD UNIX powered PCs, and performance began to improve.

The second round of problems hit when the X.25 lines became saturated
and the switches would lock up.  Eventually, the BSD boxes were wired
directly to the frame-relay network.

More recently, the problem has been that the T1 links get saturated,
especially in corporate environments, usually because you had 200
users, connected via 8-10 10-megabit ethernet strings, trying to
concurrently download a megabyte's worth of GIFs - effectively, the
average throughput available to each user was only about 7.2
kbits/second.  They actually got better response via dial-up.  Worse,
if a TCP/IP packet times out and gets dropped by the router, it can
take 15 seconds for TCP to demand a retry.
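
The back-of-envelope arithmetic, as a sketch (assuming a 1.544 Mbit/s
T1 line rate and all 200 users downloading at once):

    /* Share of a saturated T1: 1544 kbit/s split 200 ways is about
     * 7.7 kbit/s per user - the same ballpark as the 7.2 kbit/s
     * figure above. */
    #include <stdio.h>

    int main(void)
    {
        double t1_kbits = 1544.0;   /* T1 line rate, kbit/s (approx.) */
        int    users    = 200;

        printf("%.1f kbit/s per user\n", t1_kbits / users);
        return 0;
    }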

I've even been seeing corporate offices where 2000 people, connected
to the router via multiple 100 mbit ethernets, try to get through
an OC3 connection - an average of 20 kbits/second/user - and wonder
why the net is so slow (when they're used to the much faster internal
100 mbit/second environment).

Note that the X.25 network was controlled by Tandem systems, which
are supposedly much more reliable than UNIX.

> repeated frustrations with various
> servers (almost all of which run Unix) randomly crashing.

Again, there aren't too many systems that can handle 20%/month
growth rates.  Often, the money handlers will try to stretch
the budget and get more revenue (which they will spend on advertising)
by delaying purchases of new equipment.  Another problem is an
excess of "creativity" in web site design.  Switching from a site
of primarily static pages and static GIFs and JPEGs to an
ASP/DHTML/Animated web-site when you are connected to a 1.5 gigabit
OC/5 connection can often get "exciting".  Performance engineering
of these sites is always more difficult.  Back in the days when all
you had to do was simulate 10 megabyte/second traffic, things were
pretty simple.  Simulating a 100 megabyte/second load across 20,000
simulated users can get tricky.  A common mistake in performance
management of these types of sites is to assume that the user will
cache most of the objects.  The nature of ASP and DHTML sites is that
you will cache very little.

> Look at the
> recent report that said something like 25% of potential e-commerce
> transactions due to server problems - this is Unix unreliability
> costing companies billions of dollars. All to blame on Unix.

What's even more interesting is that a 2 hour outage caused damages
of $4 billion.  Sure, everything should have stayed up,
but much of this was because the sites in question underestimated
the Christmas rush load.  Many sites were doing peak-hour loads that
were higher than their average weekly load only 6 months prior.

You may also find that some of the sites in question were actually
straight ports from NT sites, or were implemented by Windows-trained
programmers.  I always get a bit nervous when someone starts telling
me that he wants to know how to do threads for his UNIX web site.  It
shows a lack of understanding of UNIX capabilities and a lack of
awareness of the issues of thread management.  With NT there was
good reason to use threads.  Even Windows 2000 has tried to eliminate
the use of shared-memory threads.  They are promoting the use of
fabrics - pools of apartment-threaded workers that are remarkably
similar to the UNIX fork() model, except that they rely on pools to
eliminate the overhead of NT kernel process creation.  UNIX
doesn't have that overhead, so fork() can be quite efficient.  Not
as efficient as shared-memory threads, but since most processes
sharing more than two or three threads require more management
overhead in the application, it's a reasonable trade-off to use
fork() instead of pthreads.
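
For illustration, here is a minimal sketch of the fork()-per-connection
pattern I'm describing.  The port number and the trivial echo loop are
placeholders, and error handling is trimmed:

    /* Sketch of a fork()-per-connection server: the parent only
     * accepts; each child handles one client and exits.           */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        signal(SIGCHLD, SIG_IGN);           /* auto-reap children      */

        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080); /* placeholder port        */

        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 16);

        for (;;) {
            int c = accept(s, NULL, NULL);
            if (c < 0)
                continue;
            if (fork() == 0) {              /* child: one client only  */
                char buf[512];
                ssize_t n;
                close(s);
                while ((n = read(c, buf, sizeof buf)) > 0)
                    write(c, buf, n);       /* echo stands in for work */
                close(c);
                _exit(0);
            }
            close(c);                       /* parent: back to accept() */
        }
    }

There is no thread pool and no shared state to manage; the kernel's
cheap fork() is the whole concurrency model.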

> Unix users are proud of the World Wide Web,

Think of it: how many businesses or industries have supported
personalized service and growth rates of 20%/month?

The television industry grew at 10%/month, the radio industry grew
at 12%/month, and the VCR industry grew at 8%/month.  The CB Radio
industry grew at 15%/month but saturated the bandwidth in 2 years.

And when you think of the supply side of television, there were 3
broadcast networks in 1949, 3 in 1955, and 4 in 1965.  It wasn't
until the television market was completely saturated - many families
had 3-4 sets, including sets in the bedrooms - that cable actually
increased the number of channels to 35.  Today, digital satellite
gives us the promise of 500 channels, but my PrimeStar unit uses
100 of those channels to show 20 movies for a whole month.  Digital
cable isn't much better.  With the combination of a VCR and digital
cable, you'd think you had your choice of 6,000 two-hour programs/day
or 180,000 two-hour programs/month.  Unfortunately, there have only
been about 8000 hours of television produced in the last 30 years,
and only about 3000 hours worth of movies in the last 50 years.

Simply put, there are more producers offering more goods and services
via the internet and more buyers buying than ever in any other
industry.  The scary thing is that we still aren't finished growing.
We are even having to deal with international commerce in a whole
new way.

> but in fact, I would be
> extremely ashamed if my OS choice controlled
> that (and if it was still as
> unreliable as it is now), and would try to downplay its dominance..

So you point to 3 incidents among 6 million servers in which capacities
were exceeded and claim that this is a failure.  I'd call that a
miracle.

> Please point to a Unix SUCCESS, not a failure.

How do you measure success?

I suppose you consider Windows 3.1 a success because Bill's net
worth went from $2 billion to $20 billion during this time.

While Bill was keeping 25% of the net worth of Microsoft in his
hip pocket, and growing Microsoft from $2 billion to $60 billion,
UNIX was creating and driving the infrastructure that led to a
network of companies whose combined net worth has climbed from
$4 billion to nearly $1 trillion.

> >As a result, the same code used to manage powerplants, simulate
> >the airfoils of 747s, and control the worlds largest global networks
> >ran transparently on Linux.
>
> Proof please?

That this software is written for UNIX?

> Please prove that Linux controls any of these applications.

I said the same code was ported to Linux.  TCP/IP, X11, CORBA,
MQSeries, RPC, - same code.

> Regards,
>
> Terry Murphy
>
--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
