Linux-Advocacy Digest #632, Volume #25           Tue, 14 Mar 00 20:13:09 EST

Contents:
  Re: Disproving the lies. (R.E.Ballard ( Rex Ballard ))
  Re: A Linux server atop Mach? (John Jensen)
  Re: What might really help Linux (a developer's perspective) ("Netway")
  Comparison between Linux and FreeBSD! (Justin)
  Re: Disproving the lies. (R.E.Ballard ( Rex Ballard ))
  Re: Disproving the lies. (R.E.Ballard ( Rex Ballard ))
  Re: What might really help Linux (a developer's perspective) ("Netway")

----------------------------------------------------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Disproving the lies.
Date: Tue, 14 Mar 2000 23:44:45 GMT

In article <8a7ivr$s0b$[EMAIL PROTECTED]>,
"2 + 2" <[EMAIL PROTECTED]> wrote:
>
> R.E.Ballard ( Rex Ballard ) wrote in message
> <8a6phv$dpt$[EMAIL PROTECTED]>...
> >In article <v%sx4.6$897.392@client>,
> <Snip>
>
> >
> >I should point out that if you put enough redundant servers
> >into parallel, you can make a system appear to be much more
> >reliable. Dell and Barnes & Noble both use this approach, as does
> >the Microsoft site. I've heard of some systems which use load
> >balancing routers and firewalls (actually UNIX systems disguised as
> >appliances) to distribute the traffic across as many as 300 servers.
> >
> >IBM's WebSphere provides the same type
> >of load balancing across multiple
> >UNIX systems or NT systems. The company
> >I discussed above replicated
> >multiple copies of each NOTES database
> >so that even massive NT failures
> >requiring "reengineering" of the server
> >(reinstallation of all software)
> >didn't result in a catastrophic loss.
>
> You are a visionary!
>
> Both Windows 2000 and Linux are going to use this approach with
> off-the-shelf hardware to get "high availability."

Hardly a visionary.  From 1982-1987, I worked at a company called
Computer Consoles.  They strapped together 8 PDP-11 processor cabinets
along with some custom drive controllers (similar to SCSI - in fact
Anita Freeman, a CCI employee, contributed to the SCSI standard,
especially the RAID concepts), custom communications interfaces, and
custom terminal group controllers (only the keyboard and CRT were in
front of the operator, making it possible to quickly "replace" a
faulty controller board).  Each cage controlled 24 terminals, each PDP
connected to 32 terminal controllers, and each terminal controller
connected to 8 PDP-11 machines.

Back in the days when a 3090 cranked out 400 transactions/minute,
we were serving 8,000 transactions per hour.  They did this 24/7
with an average unscheduled down-time of 15 minutes/year across
300 customer sites.  Keep in mind that the PDP 11/44 was actually
slower than a PC/AT and sported 1 meg of memory in a 64 megabyte
segmented memory space.  The routers were based on 1 megahertz
8085 processors - up to 25 processors per cage in a hybrid SMP/MPP
environment.  We had discovered the limitations of SMP back in 1978.

The same company eventually ported their technology to UNIX systems
that weren't even as powerful as single-processor 486/50 PCs.  These
servers were eventually used for the MCI billing system and cranked
out 1,000 transactions per SECOND.  Eventually, CCI was sold to
British Telecom, which sold it to Northern Telecom, which became
Nortel.  The same building still houses the directory services
division of Nortel.


> It's the wave of the future. The technologies
> are not new, but the effect of
> COMMODITY clustering on the market is unprecedented.

Very true.  Beowulf clusters, including the Avalon and some of the
larger clusters, have been solving problems ranging from
faster-than-real-time weather simulations to cryptography to genome
mapping.

The fact that a very powerful Beowulf cluster can be built using as
few as 5 Pentium 90 Linux machines makes it a very scalable solution.
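
To make that concrete, here is a minimal sketch of the kind of job
such a cluster runs, using the standard MPI C bindings (the node
count and the work being summed are hypothetical - this just shows
the shape of the thing; compile with mpicc, launch with mpirun):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nodes;
        long i, local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &nodes);  /* how many nodes?  */

        /* Each node sums its own slice of 0..999999. */
        for (i = rank; i < 1000000; i += nodes)
            local += i;

        /* Combine the partial sums on node 0. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %ld (computed on %d nodes)\n", total, nodes);

        MPI_Finalize();
        return 0;
    }

Run it with "mpirun -np 5 sum" across the five Pentium 90s and the
wall-clock time drops roughly with the node count - which is the
whole point.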

> Whether the top 500 corporations use it is irrelevant.

Remember.  The top 500 corporations have to compete with the companies
that run Linux powered server clusters.

Keep in mind that when the Internet had grown to 30 million people,
Dow Jones was one of the few publishers that had a working web site.
AOL touted 1 million users, Prodigy about 300,000, and Compuserve about
400,000.  Believe it or not, in early 1994, the "smart money" was
betting on AOL, Compuserve, and Prodigy.  Even MSN was supposed to be
a single product single vendor company.

The competition to these huge companies, backed by Sprint, Tymnet,
and Telenet (huge X.25 networks), was a bunch of ex-Fido and
ex-Wildcat servers connected together with 9600 baud PPP links and
X.25 links that eventually connected to T1 circuits.  TCP/IP was a
dirty word in the corporate environment.  You used NetWare, LAN
Manager, or DECnet, but TCP/IP was a "toy".

By January of 1995, Netware stock had fallen from 80 to 20, Prodigy
had stopped growing, and Compuserve was actually losing subscribers.
Meanwhile, every local newspaper was beginning to publish its content
directly to the public via a web site.  They even formed a cooperative
to pool advertising revenue, called New Century Network, which
collected from the top Madison Avenue advertising firms and
distributed to the publishers based on the number of ads shown and
the number of referrals that resulted.  Eventually, NCN was no longer
necessary and the web portals began to direct millions of users to
millions of servers.

Behind this whole phenomenon, unseen by any but the most savvy
administrators, were UNIX and Linux servers, routers, firewalls,
monitoring tools, and POPs.  Microsoft tried to "corner the market"
by recruiting top executives, and in some cases placing them in top
positions where they could marshal all available resources into
NT-based server strategies.  Meanwhile, it was the tiny dot-com
UNIX-based companies that were cleaning up.

> You know, with dot.com money coming out the
> ears, efficiency does not seem to matter.

Competition is fierce.  For every Yahoo or Infoseek, there are two
hundred well-used but lesser-known search engines that haven't
advertised on television.  Meanwhile, a single outage can cost as
much as $100 million/hour, and an NT-based solution increases the
risk substantially.

Ask any brokerage firm - a single outage during a market rise can
result in lost customers, lost orders, and lost commissions.  Even
the best Sun boxes have to be run redundantly to prevent a
catastrophic loss of revenue.

> But the effect of the "screen" on,
> for example, online trading has sent
> low-cost Schwab to the top of the hill.

Yep.  And we're seeing the same thing happening in wholesaling
(business-to-business auction companies), retailing (Priceline,
eBay), and supply chain management.

> Brokers commissions have dropped
> like a rock. If it's any consolation to you,
> the last time I looked, their
> stock trading is on a mainframe. :)

I believe the trades themselves are processed using a UNIX system,
but they are cleared through mainframes.  ADP still uses ADP mainframes.

> "Screen" is a term used in opposition
> to "open outcry" in stock trading.

> Guess which one is winning. The world
> wide financial markets have been
> effected. Of course these changes have
> been happening for quite some time
> before the web.

Not just the world financial markets.  The world economy is
shifting in the same way.  Once upon a time, a foreign company
had to set up a very complex network of importers and exporters
and dummy corporations and delivery was very difficult.  Today,
the combination of international air-freight and the web has
made it possible to work with suppliers across all 7 continents
to create "Domestic" products with order response times of a few
seconds and delivery times of a few days.

> The financial industry worldwide has
> been impacted tremendously. The Russian
> financial scam was aided by some banking software.

Part of what makes the whole thing so interesting is that while
there are higher concentrations of people in communication, managing
and policing this huge population is remarkably easy.  When you
compare the amount of crime on the Internet to a place like Newark,
New Jersey, or even New York City, the Internet is remarkably
crime-free.  Part of the reason for this is that it is much easier
to identify perpetrators and track crime back to its source.  Much
of this is a result of open and public standards that allow
administrators to identify potentially fraudulent traffic at its
source.

> How do you keep the
> banking classes, and now the masses, down.

You don't.  There are still networks of trust.  When those trusts
are breached, you have a problem.  When you consider the amount of
speculation and financial traffic that occurs on the Internet, and
you compare this to the number of "railroad banks" that simply closed
up shop and took the money - leaving the homesteaders cash-broke and
mortgaged to the hilt (for some reason, the mortgages were sold to
the railroad before the banker left town) - we have a remarkably
stable global economic system.

Of course, there are companies like Microsoft that want you to trust
their "little black box" that may be capable of doing anything to
your hard drive or communications links - without anyone knowing.

By the way, I have equal misgivings about UNIX "little black boxes"
too.  It doesn't really matter whether it's an encryption scheme or
a cash-transfer program: trade secrets are the worst form of security
(since they are nearly always breached - and first by criminals).

> >>pops over a shrinkwrapped banking
> >> software ("Instant Offshore Bank"), puts
> the CD-ROM in.
>
> "Let's see. Clicks on Aruba."
>
> >>Uploads to the server.
>
> "Deposits anyone?"

A legitimate concern.  If the authentication, verification, and
transactions are done without regulatory measures, such as Treasury
Department wire-transfer policies, you could create a real mess.
When Microsoft wanted to give us "Billy Bucks", part of the problem
was that Microsoft didn't want to let the government monitor and
regulate the cash flow.  When this was combined with the fact that
no one could track the financial transaction, you had the potential
for massive fraud.

> But I digress.
>
> There is coming a relentless downward pressure in prices.
> The web "screen" economy has very low cost of entry.

This is partially true.  Of course, if you are dealing in a regulated
market such as loans, insurance, medical practice, or brokerage, you
must still adhere to these regulatory requirements whether you run
your business out of an office, a phone bank, or a Linux powered web
site.  In some cases, the coding to comply with these regulations,
which seem to change every day, can be the most expensive part of the
on-line business.

Microsoft works very hard to avoid getting into the middle of custom
code that could be regulated by federal or international law.  A
word processor or spreadsheet is trivial code that can be sold cheap.
A spreadsheet template that generates an SEC-compliant 10-K form is
a whole different matter.  A word processor that generates LEXIS- and
Library of Congress-compliant mark-up codes is much more
sophisticated - and it's not Microsoft Word (it's usually
WordPerfect).

> Mass advertising has not had the impact on web users.
> It has its own culture.

The key is that there are tiers of interest.  If you are new to the
web, that $1 billion Super Bowl ad might entice you to try Yahoo.
But if you are a seasoned web user, you probably have 5-6 favorite
"esoteric" search engines.

> While there will always be a high end market,

But even this distinction is being blurred.  It's getting easier
and easier to get custom made products as a high end user, and
often you can get very good service from a low-volume provider.
VA Linux sells custom configured PCs with Linux and other user
requested software.  It doesn't do the volume of Dell, but it
gets a substantially higher price for providing a more tailored
solution.

> but the web market is a relentless price driven
> competition in the end.

More and more, we're seeing a blend of price and custom service
creating and defining the web market.  Even items that used to be
commodities are being offered as "tailor-made".  You can click
together a custom-made car - complete with your choice of options -
and have it delivered to your door.  You can buy furniture where you
choose the fabric, the style, and the accessories, and even
coordinate the drapes and the carpet, for only a modest percentage
over discount retail.

> The TPC-C benchmark is a shot across the bow.
> Sun and IBM have responded
> with "great deals" on packages,
> if only the customer will use their now much
> more overpriced hardware.

IBM has embraced Linux and is supporting it quite well.  They
realize that people aren't going to jump directly from an NT-powered
Aptiva to an SP/2 or an OS/390.  On the other hand, Linux lets them
introduce the customer to the UNIX paradigm at Thinkpad 600E or
Aptiva prices and migrate through Netfinity into RS/6000 B50
processors on up to 400-processor SP/2 systems.

> The clustering approach to high availability is a conscious business
> decision on the part of Microsoft.

A business decision which UNIX has successfully implemented for many
years.  Microsoft has still not effectively demonstrated
cost-effective clustering.  There is a huge difference between using
a cluster to get a nice benchmark and creating a production-hardened
cluster.

Even today, the mainframe - now in the form of OS/390 - persists.
The reason is that when you are dealing with transactions worth
thousands or millions of dollars, it is unacceptable to "lose a few".
One man embezzled several million dollars by having the "rounding
errors" credited to his personal account.

> 2 + 2
>
> >Rex Ballard - Open Source Advocate, Internet
> >I/T Architect, MIS Director
> >http://www.open4success.com
> >Linux - 60 million satisfied users worldwide
> >and growing at over 1%/week!
> >
> >
> >Sent via Deja.com http://www.deja.com/
> >Before you buy.
>
>
--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: John Jensen <[EMAIL PROTECTED]>
Crossposted-To: comp.sys.next.advocacy
Subject: Re: A Linux server atop Mach?
Date: 15 Mar 2000 00:07:35 GMT

Charles W. Swiger <[EMAIL PROTECTED]> writes:

: > This only matters to people who want to run Darwin on Intel for some
: > reason.  I think you've demonstrated above that the several levels of
: > portablity between MacOS X and Linux, and the lack of a Mac-ish GUI on
: > Intel, make Darwin on Intel a questionable goal.  Why, beyond the
: > hackerish fun in helping create it?

: I would expect Darwin to have much better support and integration for
: Apple-specific technologies which are of value to some people but
: perhaps not terribly useful to the rest of the world.

: For example, Darwin will handle HFS+, Mac-style forks, Unicode
: filenames, and all of that crap which a Mac user might find essential
: but most Linux users could care less about.  Darwin will support Mach
: messaging and NetInfo and various AppleTalk/EtherTalk/AppleShare
: networking protocols.  Again, very useful if you have those already on
: your network, and not of interest to people who don't.

: But Darwin also is supposed to be a plug-n-chug replacement layer for
: the rest of MacOS X-- so a Linux user who does want to add ELF support
: and have Linux binary compatibility has the Mach kernel sources
: available and can do so.

I can understand these benefits on Apple hardware.  It still seems to
me that a mixed network, with the _full_ MacOS X on Apple hardware(*)
and Linux on x86 PCs, would be the best general-purpose solution.

(If Mach is important, perhaps a mixed network of Mac OS X on Apple
hardware and Hurd(**) on x86?)

John

* - if the full OS is free (bundled with every CPU) one benefit of free
software fades.

** - for those (not you Chuck) who forget:

    http://www.gnu.org/software/hurd/hurd.html

------------------------------

From: "Netway" <[EMAIL PROTECTED]>
Subject: Re: What might really help Linux (a developer's perspective)
Date: Tue, 14 Mar 2000 19:15:00 -0500


JEDIDIAH <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Tue, 14 Mar 2000 13:23:02 -0500, Rich Cloutier
> <[EMAIL PROTECTED]> wrote:
> >"Donal K. Fellows" <[EMAIL PROTECTED]> wrote in message
> >news:8algc7$qhg$[EMAIL PROTECTED]...
> >> In article <38cdfa06@news>, Rich Cloutier <[EMAIL PROTECTED]>
> >wrote:
> >> > As far as standards go, [DnD] needs to be done at the LOWEST COMMON
> >> > DENOMINATOR ie, XFree86, so that every graphical program can conform
> >> > to the standards, whether it be KDE, Gnome, or Fred's Desktop
> >> > Environment.
> >>
> >> No, it goes in at the toolkit level so that no matter what display
> >> hosts your Linux session, you can use DnD!  Furthermore, supporting a
> >> DnD protocol, especially one as rich as Xdnd (which is used by both
> >> KDE and Gnome,) takes quite a lot of work to do even after you handle
> >> the basics of actually talking the protocol, since you need to deal
> >> with all the user activity during the drag, etc.  Hence it is doubly a
> >> natural for the toolkit level, e.g. Qt and GTK[-+]*.
> >>
> >> Donal.
> >
> >Then you've got to make sure that EVERY toolkit is DnD compatible.
> >To me, as
>
> No you don't. DnD is somewhat orthogonal to the core function
> of a gui widgetset. It's perfectly feasable to exploit DnD
> functionality quite independent of what widget library you
> are using.
>
Yes, but WHERE do you do that?  If, as Donal says, it goes in at the
toolkit level, then doesn't the implementation depend on the toolkit?
And if it is implemented elsewhere, doesn't every 'parallel' set of
functions have to support it in the same way?  That's why I made my
point that it should be done at the lowest common denominator:
XFree86.  That way, no matter what you use to design your app, or
what environment you design it for, DnD will still work with it, as
long as it goes on top of X (and you build in the support, of
course).
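
For what it's worth, here is roughly what "on top of X" looks like
today under the Xdnd convention that KDE and Gnome share: the X
server itself only carries window properties and ClientMessage
events, and a window opts in by setting an "XdndAware" property.
(A minimal Xlib sketch; "win" is assumed to be an existing top-level
window, and error handling is omitted.)

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    /* Advertise that a window accepts Xdnd drops.  Everything after
     * this - XdndEnter, XdndPosition, XdndDrop - arrives as ordinary
     * ClientMessage events that the application (or its toolkit)
     * must interpret; the X server never looks inside them. */
    void announce_xdnd_support(Display *dpy, Window win)
    {
        Atom xdnd_aware = XInternAtom(dpy, "XdndAware", False);
        Atom version = 5;        /* highest protocol version spoken */

        XChangeProperty(dpy, win, xdnd_aware, XA_ATOM, 32,
                        PropModeReplace,
                        (unsigned char *) &version, 1);
    }

Whether that protocol logic belongs in XFree86 itself or in the
toolkits is exactly the question.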

--
Rich C.
"Great minds discuss ideas.
Average minds discuss events.
Small minds discuss people."



------------------------------

Date: Tue, 14 Mar 2000 19:17:27 -0500
From: Justin <[EMAIL PROTECTED]>
Subject: Comparison between Linux and FreeBSD!

Hello all,

I'd like to get the "Linux" opinion on this issue.  What are your
opinions?

Thanks,
Justin



------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Disproving the lies.
Date: Wed, 15 Mar 2000 00:11:30 GMT

In article <8a7ueb$jb$[EMAIL PROTECTED]>,
"2 + 2" <[EMAIL PROTECTED]> wrote:
>
> R.E.Ballard ( Rex Ballard ) wrote in message
> <8a6phv$dpt$[EMAIL PROTECTED]>...
> >In article <v%sx4.6$897.392@client>,
>
> <Snip>
>
> >Oracle, Sybase, Informix, and IBM are
> >offering their flagship databases
> >on Linux and Linux compatible systems
> >(the BSD variants). BEA is
> >offering Tuxedo, and IBM has MQSeries in Beta.
>
> Isn't Tuxedo a Transaction Processing monitor?
>
> Also, the whole industry is going to componentised TP monitors,
> sometimes called OTMs or Object Transaction Monitors.

Yes.  Most of the industry is moving to the CORBA standards.
CORBA has transaction services.  However, there are still many
legacy systems that require the coordination of XA databases,
CICS mainframe transactions, and single UOW transactions that
must all be managed concurrently.  Tuxedo, Encina, and CICS are
still very much alive.

> I believe nearly all of the non Microsoft
> variety are EJB based.

EJB is simply a subset of the CORBA standard.  EJB is the Java API to
the CORBA Transaction and Messaging services.  Again, the challenge
comes when you have to combine object-oriented applications with
legacy applications and third-party (B2B) applications.  You rarely
have control over what happens at the far end.  You have to manage
the protocol, not the API.

> Since these are first generation products and
> MTS is a third generation product, then
> they have had the usual developmental problems.

Unfortunately, MTS is neither a transaction monitor nor an OTM.  MTS
mostly provides the capability to manage a "session" of sessionless
transactions (web transactions).  This has been handled by UNIX
services for years, but Microsoft is only now realizing the
difficulty of managing the shared-process, multi-threaded model.

Other OTM and TP systems provide recovery, load balancing, load
transfer (to bypass a failed component), and real-time recovery
mechanisms.  MTS provides limited support for two-phase commit.
Don't get me wrong, MTS is a great thing and has been desperately
needed for years (since about 1982), but any delusions that MTS is
somehow the same as CICS or Tuxedo should be squashed immediately.

> Tuxedo is an older product.
> It is proven, yet the development for it is difficult.

This is especially true in a multithreaded (NT) environment.
Managing transactions across multiple multithreaded servers is
very difficult.

The use of messaging - such as MQSeries or UNIX data streams - is
much simpler, but only in an unthreaded environment.  MQSeries
provides some capability to hide the multithreaded model from the
programmer by using the reply-to Queue Manager, Queue, and
Correlation ID to manage the flow, which simplifies the programming
model.  Each client can use a simple request/response model, the
servers can use a simple pipe model, and the number of server
processes can be managed using the trigger thresholds.  Some of these
features have been moved from MSMQ to MTS.  This may complicate,
rather than simplify, the programming, auditing, and management of
both systems.
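
To show what that request/response flow looks like to the programmer,
here is a rough sketch using the MQI C API (error handling omitted;
the queue manager and queue names are made up).  The client tags the
message as a request, names its reply-to queue, and then waits for
the reply whose CorrelId matches the request's MsgId - the queue
manager and the server do the rest:

    #include <string.h>
    #include <cmqc.h>

    void request_reply(char *qmgr, char *request,
                       char *reply, MQLONG replylen)
    {
        MQHCONN hconn;
        MQHOBJ  hreq, hrep;
        MQLONG  cc, rc, gotlen;
        MQOD    od_req = {MQOD_DEFAULT}, od_rep = {MQOD_DEFAULT};
        MQMD    md     = {MQMD_DEFAULT}, rmd    = {MQMD_DEFAULT};
        MQPMO   pmo    = {MQPMO_DEFAULT};
        MQGMO   gmo    = {MQGMO_DEFAULT};

        MQCONN(qmgr, &hconn, &cc, &rc);

        /* Open the server's request queue and our own reply queue. */
        strncpy(od_req.ObjectName, "SERVER.REQUEST.QUEUE",
                MQ_Q_NAME_LENGTH);
        MQOPEN(hconn, &od_req, MQOO_OUTPUT, &hreq, &cc, &rc);
        strncpy(od_rep.ObjectName, "CLIENT.REPLY.QUEUE",
                MQ_Q_NAME_LENGTH);
        MQOPEN(hconn, &od_rep, MQOO_INPUT_AS_Q_DEF, &hrep, &cc, &rc);

        /* Mark the message as a request and say where to reply. */
        md.MsgType = MQMT_REQUEST;
        strncpy(md.ReplyToQ, "CLIENT.REPLY.QUEUE", MQ_Q_NAME_LENGTH);
        memcpy(md.Format, MQFMT_STRING, MQ_FORMAT_LENGTH);
        MQPUT(hconn, hreq, &md, &pmo, strlen(request), request,
              &cc, &rc);

        /* By convention the server copies the request's MsgId into
         * the reply's CorrelId, so match on that and wait 30 sec. */
        memcpy(rmd.CorrelId, md.MsgId, sizeof(rmd.CorrelId));
        gmo.Options = MQGMO_WAIT;
        gmo.WaitInterval = 30000;
        MQGET(hconn, hrep, &rmd, &gmo, replylen, reply, &gotlen,
              &cc, &rc);

        MQDISC(&hconn, &cc, &rc);
    }

The client never sees a thread; it just sees a queue.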

> That is the reason the whole industry is going to components.

You assume this to be true.  There seems to be an advantage to
components in the GUI world, but in the world of transformations
and transaction management, pushing components down to the lowest
levels adversely affects performance.  You still end up with thousands
of objects waiting on spinlocks to get access to critical resources.
MTS simplifies the programming model, but the overhead still exists.

Moving to a message-server paradigm, or a pipe-and-processes paradigm
simplifies the processing model substantially.  Each message can be
passed to a forked child which can confirm the operation and return
the result efficiently.
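
A bare-bones sketch of that model, using nothing more exotic than
Berkeley sockets and fork() (the port number and the trivial handler
are made up - the point is the structure: one process per unit of
work, no shared state, no spinlocks):

    #include <string.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void handle_request(int fd)
    {
        char buf[1024];
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* one request */

        if (n > 0)
            write(fd, "OK\n", 3);       /* confirm the operation */
    }

    int main(void)
    {
        struct sockaddr_in addr;
        int listener = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8888);
        bind(listener, (struct sockaddr *) &addr, sizeof(addr));
        listen(listener, 16);

        signal(SIGCHLD, SIG_IGN);   /* let the kernel reap children */

        for (;;) {
            int conn = accept(listener, NULL, NULL);
            if (conn < 0)
                continue;
            if (fork() == 0) {      /* child: handle one message   */
                close(listener);
                handle_request(conn);
                close(conn);
                _exit(0);
            }
            close(conn);            /* parent: back to accepting   */
        }
    }

Each child lives for exactly one transaction, so a crash takes out
one request, not the server.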

> With components, you can chose the
> language you want to develop in.

This is true with CORBA.  This is less true with DCOM.  With DCOM
or COM+, you are pretty much tied to the Microsoft development
system and platforms.

With CORBA, you have a much wider choice of platforms, programming
languages, and CORBA implementations.

> Except EJB is limited to Java and some related minor
> languages like Dylan and Eiffel.

Again, the thing to remember is that EJB is a direct mapping to
CORBA.  You can use Orbix, Visigenic (Inprise), or any of the
source-available ORBs such as MICO (used in KDE) or ORBit (used in
GNOME).  Generally, there is a point of demarcation where you let a
simple object call a transaction service object.

> The great strength of EJB is that it is cross-platform,
> although not necessarily cross-vendor.
>
> These technologies will greatly strengthen Linux.

Remember, Linux has had CORBA services integrated into its
infrastructure for almost two years, and has supported CORBA since
the earliest versions of MICO.  I remember putting CORBA on Linux
back in early 1997.

> 2 + 2
>
--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Disproving the lies.
Date: Wed, 15 Mar 2000 00:20:11 GMT

In article <[EMAIL PROTECTED]>,
2:1 <[EMAIL PROTECTED]> wrote:
>
> > UNIX compatible TCP/IP, Berkeley Sockets, RPC, NIS, NFS, DCE,
> > CORBA, MQSeries, X11, datastreams of ascii text, kosher XML,
> > IRC and IRC-II, PVM, and MPI. Not to mention programming
> > languages like PERL, ANSI standard C++, PYTHON, interactive
> > shells, cron, sed, grep, awk, lex, yacc, and COBOL.
> >
> > Sure, you can spend a few grand and get all these goodies, but
> > they aren't part of the standard package. Windows 2000 comes
> > with qbasic, vbscript, and XML/ActiveX. Even the JVM is so
> > dependent on ActiveX and Microsoft-only APIs that it isn't useful
> > as an integration tool.
>
> No!!! It cant!!!!
>
> Does NT *really* come with QBasic?
> If it does, then that in itself is reason enough to shun it.

Yup.  I have NT 4.0 with Service Pack 5.  Bring up the MS-DOS
prompt and type qbasic - and up it comes, help menus and all.

> I HATE qbasic. I recently used it to write a Tetris game.
> (FYI, a friend had a broken Win installation, but working DOS. She
> was *really* pissed off at having no games (tetris is soothing,
> don't ya know). Having no C compiler for DOS, I had to write the
> game in (compiled) QBasic.)
> It is foul, limited and hacked. I hate it. It's nasty.
> The width command changes the /height/ of the screen. Aaaaarrgh.
> This is one of my pet hates at the moment. Qbasic is useless for
> most things (although it is a big improvement on batch files for
> scripting :-)

Careful - you're talking about Billy's BABY!  You can insult his
daughter, even his wife, but don't insult his BASIC!  :-)

> I thought M$ abandoned QB anyway?
>
> -Ed
>
--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Netway" <[EMAIL PROTECTED]>
Subject: Re: What might really help Linux (a developer's perspective)
Date: Tue, 14 Mar 2000 19:29:15 -0500


Bob Hauck <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Tue, 14 Mar 2000 03:37:55 -0500, Rich Cloutier
> <[EMAIL PROTECTED]> wrote:
>
> >"Bob Hauck" <[EMAIL PROTECTED]> wrote in message
> >news:[EMAIL PROTECTED]...
[snip]
>
> >Also, most people really only use one app at a time, partly because
> >they're too afraid that Windows will crash and they'll lose TWO
> >unsaved documents,
>
> Yes, there is that, although this applies more to Win9x than to NT.

Agreed. I was referring more to Win9x than NT anyway, since that's where my
experience lies. :o)

[snip]
>
> >As a programmer, do you just start using version control, or do you
> >manually double check what the program is doing for the first two
> >dozen times you use it?
>
> I'll admit to double-checking when I first started using it.  But that was
> years ago and long forgotten <g>.
>
> Your point is well taken though, that people don't trust new technology
> at first.  My point is slightly off to the side of that and is that people
> can't trust new technology that hides its workings and doesn't provide
> tools to assure oneself that things are ok.

There are two categories of users here, I think.  You are in the
category you mention, that doesn't trust a technology if you can't
figure out how it is doing its thing.  But there is another category
that won't trust anything new until they get up the nerve to jump in
and try it, and as long as they can make it work and it doesn't burn
them, they "trust" it.

[snip]
>
> >That's where OSS, IMO, is superior. The features that get added (or don't
> >get added) to software are driven by the people who need them, not by
> >marketeers who think they can sell more units if they have them.
>
> That is a very important difference.  Saying that is _not_ the same as
> saying "to hell with new users", as is so often alleged.

Unfortunately, people judge how "good" or "technologically up to date" a
product is by how many of these fancy features it has.

--
Rich C.
"Great minds discuss ideas.
Average minds discuss events.
Small minds discuss people."



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
