Linux-Advocacy Digest #620, Volume #25           Mon, 13 Mar 00 23:13:05 EST

Contents:
  Re: Top 10 reasons why Linux sux (Osugi Sakae)
  Re: Open Software Reliability (R.E.Ballard ( Rex Ballard ))
  Re: Top 10 reasons why Linux sux (Donovan Rebbechi)
  Re: Linux Sucks************************* (Mark S. Bilk)
  Re: Giving up on NT (Bob shows his lack of knowledge yet again) ([EMAIL PROTECTED])
  Re: Giving up on NT (Bob shows his lack of knowledge yet again) ([EMAIL PROTECTED])

----------------------------------------------------------------------------

Subject: Re: Top 10 reasons why Linux sux
From: Osugi Sakae <[EMAIL PROTECTED]>
Date: Mon, 13 Mar 2000 19:01:21 -0800

In article <gimy4.1700$[EMAIL PROTECTED]>, "Jim Ross"
<[EMAIL PROTECTED]> wrote:
>
>Marada C. Shradrakaii <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> >10. X-Windows fonts look like shit. Go "borrow" true-type fonts and
>> >they still suck. Mac looks great. Windows looks good. Linux looks like
>> >shit. Not to mention X-Windows is slow as shit.
>> >
>>
>> Tried XF86 4.0 yet?  It's supposed to improve performance, and it's out NOW.
>
>It's missing driver support from the XF86 3.3 series.
>And that wasn't perfect either.
>
>>
>> >9. Sound Blaster Live is supported in an abortive manner, if you can
>> >even make it work at all.
>
>Many people do, as the original poster was saying.
>Thus it's called "popular".
>Meaning "important to allow one to use easily."
>
>>
>> Not everyone owns one, so not everyone cares.
>
>I don't own one and I care.
>My friend dumped Linux for this reason.
>And he has a dual processor system (which supposedly no one has either)
>and the SB Live driver SB provides doesn't work in that case.
>
>>
>> >
>> >8. Postscript printers are really the only ones that fully function
>> >easily under Linux.
>>
>> My HP 612C works just fine, cost 100USD, and was fairly easily set up.  My
>> Panasonic 2135 dot-matrix was also easily configured. (on Slack 4.0/7.0
>> and RH 5.1 respectively)
>
>I'm 50% there.
>One printer works, one doesn't.
>Both work under NT.
>
>>
>> >6. Dial-ups and free ISPs, as well as AOL.
>>
>> They should follow standards.  That's why they're standards.
>>
>> If you write your messages in backwards Kilrathi, would you be surprised if
>> nobody can read them?  Similarly, AOL/<insert free isp> breaks the standards
>> of dialup connection handling; it's not our business to futz with that when
>> there are better uses of resources.
>
>Either way Linux doesn't provide a way for those AOL users to use AOL in
>Linux.
>Score:  Subtract many possible users.  Extra bonus if you own a winmodem.
>
<snip>

>> >How about begging a software or hardware
>> >manufacturer to support Linux.
>
>There are too many to convince one by one, unfortunately.
>There are so many OSes that they often don't bother.

How many operating systems are there? Windows has 90%+ of the
desktop market, so of course everyone writes windows drivers for
their hardware. After windows there is what? OS/2, BeOS, the various
BSDs, and linux. Do any others have more than a fraction of a
percent of the desktop market? Unix vendors usually supply the
hardware as well, right? Same for mac, so drivers are not an
issue. So please don't say that vendors don't bother because
there are too many operating systems. Vendors will write drivers
for the operating systems that are popular with the people who
use their hardware. If linux gets a 50% share in the next two
years, every company will write linux drivers for their hardware.


>>
>> Buy with compatibility in mind.

>I do this.  Often it means buying older, slower, more expensive hardware
>though.

I do this too, and it hasn't meant that for me. It does sometimes
mean not getting full performance out of new hardware because the
drivers are not as polished as those available for windows. So it is
a driver issue again.

>>
>> >It also looks crude and boxy, like most Linux applications.
>>
>> So?  You're not buying the looks.  You're buying the functionality.  If you
>> bought the looks, we'd all be using AbiWord under X11 with an elaborate GTK
>> theme.
>
>If AbiWord didn't have so many incomplete dialogs.
>It lacks many features in Word.
>The formats it supports must be 10-20% of those that Word 97 supports.

Supported formats? I thought everyone using windows used MS
office or at least Word. Where is the need for supported formats
in a windows environment? In linux, there are open standards
that can be used, but it depends on what kind of data you're
talking about.


>Linux is nice on the server, but let's be honest, it sucks on the desktop.

Doesn't suck on my desktop.


>
>First example, I would like to be able to copy a URL in KEDIT and paste that
>URL in the Netscape Location Bar.  Doesn't work.  Well, that says it all.
>
>Second example, I would like every GUI app to install a program entry and
>icon into my default desktop environment.
>As of now, it's 50-50.  Way too low.
>
>Jim
>

Through much of your post, you speak of linux as though it were
a company that is competing with MS. Linux should do this, linux
won't attract new users if (whatever). Linux is an operating
system that is not maintained or developed by a single company.
There is no business plan or strategy beyond making a high
quality os. If some company wants to find a way to help AOL
users use linux and still connect to AOL, that company can do
it, but it still isn't linux, it is that company. If that
company goes out of business, tough, but linux doesn't. I think
you know all of this, but I wonder if you understand all of
this. Linux is not responsible for driver development. Linux is
not responsible for supported file formats. MS is the company,
Windows is the os. MS has a strategy (prolly) for windows. Linux
is the os, there is no company.

I am starting to repeat myself, which is usually a sign that it
is time to stop.

Osugi Sakae



* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
The fastest and easiest way to search and participate in Usenet - Free!


------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Subject: Re: Open Software Reliability
Date: Tue, 14 Mar 2000 03:03:48 GMT

In article <8a2uvc$23mf$[EMAIL PROTECTED]>,
"Frank Mayer" <[EMAIL PROTECTED]> wrote:
> I wonder if someone could help me understand
> a claim that the development
> paradigm for open source software in
> general (and for Linux in particular)
> yields higher reliability, maintainability and stability.

Actually, it's more than a claim.  UNIX, and subsequently Linux,
have set unprecedented levels of reliability.  Much of this is due
to the combination of AT&T culture and Open Source culture.

> I understand that open source software
> is developed by a large decentralised
> group as a labour of love.

This is the first big myth that needs to be exploded.  For the most
part, open source software is developed by system administrators
for system administrators.  While it is true that many of the UNIX
utilities were initially written by students at Berkeley, MIT, and
other top universities, these projects were supervised by teachers
and managers who were training these students to become professional
systems administrators of systems supporting hundreds or thousands
of concurrent users.

The original "open source project" was AT&T UNIX.  AT&T donated
version 6 UNIX, in source code format to these Universities and
colleges.  Prior to this type of publication it was considered to
complex to be reliable.

By 1983, when the AT&T divestiture freed AT&T to begin marketing UNIX
directly, the graduates of the universities had enhanced Berkeley UNIX
almost beyond recognition.  The differences between Version 6 and
BSD 4.2 are quite substantial.

Eventually, AT&T realized that it needed BSD code to be acceptable
to the market.  As a result, AT&T and Berkeley worked out a deal where
Berkeley could continue to publish their code along with AT&T code if
AT&T could publish their code along with the BSD code.

The primary form of support for UNIX was through the UUCP net.  This
was the predecessor to usenet and, when joined with ARPANET in 1985,
became known as the Internet.  There was no commercial use other than
the sharing of research and mutual support of UNIX systems.

As a result of this network, revisions to the code were coming in
from numerous sources, and UNIX developers quickly established practices
for managing this code.  Eventually, proprietary software vendors tried
to codify this into what became ISO 9000.  RCS, SCCS and CVS became
critical tools for managing software that often included numerous
branches.  Processes for managing roll-out and deployment were also
put into place.

Keep in mind that even today, the average UNIX or Linux based server
supports no less than 100 concurrent users.  Many support as many as
1000 concurrent connections per processor.  Even the most trivial bug
can become incredibly costly.  As a result, the controls have become
quite sophisticated.

The Free Software Foundation was working on a fully Open Source version
of UNIX called HURD when Linus submitted his fully operational kernel.
It eliminated many of the problems of AT&T proprietary code and as a
result was quickly adopted by the user community as the baseline for
a new kernel.  We liked the name and it stuck.

> It is hard to imagine that the developers would
> voluntairly submit themselves to the irksome
> quality requirements of, for
> instance, software generated using ISO-9000 standards.

As I mentioned before, many of the ISO-9000 principles were actually
codifications of practices that had been going on with BSD and FSF
software since 1984.  In fact, rigorous quality control and source
change management have been the norm in open source for decades.

> On the other hand, I imagine (?) that
> when a central authority pays for an
> operation system (or other) development,
> they can institute and enforce
> demanding quality standards.

Actually, there is more of a tendency to scuttle the standards
and release procedures in the name of expedience, budget, and
closing the contract.  Nearly any project that hasn't been grossly
overbudgeted usually ends up in "crunch mode", with the revenue for
a quarter - and consequently the price of the stock - riding on the
ability to get a signature of acceptance from the customer by a
certain date.

When you stand to lose your investors hundreds of millions of
dollars if you are late, you tend to cut corners.  The first
corners cut are the release and source control procedures.

> So I would expect that the code generated under
> the centrally controlled paradigm to be more easily maintainable.

The problem is that you have several layers of indifferent people
between users - often with their careers on the line - and developers
capable of effecting the critical code change required to render a
useful system.  Furthermore, the developers available are very
few, and often, with new projects, the top people who knew why
and how the system worked have been reassigned to other projects.

In the open source project, you have fewer layers between the
administrator and the developer.  Furthermore, you have more qualified
"mechanics" who can take the thing apart and put it back together for
you if necessary.  And when they are done, it will work right.

Finally, because there are fewer nondisclosure agreements, even the
smallest bugs can quickly be raised to an embarrassing profile.  When
a bug is discovered, and fixed, by third parties, the organization
that manages the production release is almost humiliated into releasing
the fix as quickly as possible.

Some of the bug fixes to NT that came out in Windows 2000 would, in an
open source environment, have been published in service pack one.

> The Linux community claims that this is not so.

Not just the Linux community.  The Linux community shares 90% of
the distribution code with the BSD community, the UNIX community,
the OSF community, and the user community.  Again, the entire
infrastructure evolved to assure the highest levels of quality.

In 1984, the military tried to get all government programmers to
use Ada.  Their hope was that the software produced would be so
reliable that they could use it to guide nuclear missiles from space.

Eventually, the military began to see that the open source community
was achieving, for a fraction of the cost, what the military had
spent nearly $1 trillion over several years to achieve.

Keep in mind that UNIX (all that code that gets included with the
Linux kernel) has been used to control nuclear reactors, manage nearly
all telecommunications traffic, provide the services of the Web,
distribute financial information, and even clear real-time financial
transactions such as those conducted on the stock exchanges.

> Am I missing something?

Yes.  You heard - probably from some Microsoft advocate - that Linux
was a bunch of hobbyist software cobbled together by some hobbyist
so that he could run his toy computer as a web server.

As romantic as it is to think that this college freshman from
Helsinki released the entire Linux distribution on November 27, 1991,
that would be a fairy tale, to say the least.

After Linus placed Linux in the TSX-11 Archives (FSF archives),
hundreds of highly trained, highly experienced developers,
administrators, and kernel designers modified it almost beyond
recognition.  Since none of the Linux code was patented, and the
GPL prevented anyone from doing the "Embrace/Extend" that Microsoft
did with Mosaic, nearly 10,000 developers contributed to the kernel.

More importantly, these developers worked very hard to make Linux
more "UNIX compatible than UNIX".  The source code written for BSD
with the GCC compiler could be ported to Linux without modification.
What was going on under the covers was spectacularly different, but
the interface between the applications and the kernel was nearly
a perfect match.

As a result, the same code used to manage power plants, simulate
the airfoils of 747s, and control the world's largest global networks
ran transparently on Linux.

Finally, Linus was familiar with the release procedures, and the
"holy 7" (the people who helped him coordinate releases) worked very
hard to enforce the same controls that had been used for the national
defense systems code.


Had Berkeley released a version of BSD 4.4 complete with AT&T
constructs under the GPL, we might never have heard of Linux.
Unfortunately, the politics and economics weren't ironed out
until Novell took control of UNIX in late 1994.  And the final
nail in the coffin was an agreement that Novell would keep UNIX
off the desktop if Microsoft would keep NT out of the server room.
By the time Novell realized that Microsoft had no intention of
honoring that agreement, they didn't have the resources to get
Univel to the desktop in time to compete with Windows NT and
Windows 95.

Keep in mind that SCO still uses UNIX code to control thousands
of franchise branches including Burger King, Pizza Hut, Kentucky
Fried Chicken, and several others - without on-site personnel.

> Frank

--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (Donovan Rebbechi)
Subject: Re: Top 10 reasons why Linux sux
Date: 14 Mar 2000 03:17:35 GMT

On Mon, 13 Mar 2000 17:10:46 -0500, Jim Ross wrote:
>

>I got it working.
>Even though I believe it supports the CTRL-C and  CTRL-V commands, WRT to

On UNIX (Motif, to be precise) it's ALT-C and ALT-V. Only recently are we
seeing KDE and GNOME move towards the "Windows way" to make life easier
for new users.

-- 
Donovan

------------------------------

From: [EMAIL PROTECTED] (Mark S. Bilk)
Subject: Re: Linux Sucks*************************
Date: 14 Mar 2000 03:26:33 GMT

In article <[EMAIL PROTECTED]>,
 <[EMAIL PROTECTED],net> wrote:

Now it all makes sense: Steve/Mike/pickle_pete/mcswain/heather, 
etc. -- the Proctologist of Borg -- is a transvestite!  (Or a 
transsexual; maybe his peter really *is* pickled.  Ouch!)

>On Mon, 13 Mar 2000 15:10:54 -0600, John Sanders wrote:
>>[EMAIL PROTECTED] wrote:
>>> 
>>> Subject says it all***************************

>Trust my data to that crap?
>
> HELL NO!
>
>Free software is just that....Free and full of comprimises...

At least we have a good spell-checker...

>>Gosh.  I envy you.  Some day I'll buy a real OS.  Then I'll 
>>be cool, too.  Am I right?
>
>I doubt you would be cool driving a 2000 canary yellow Vette....
>You sound more like an AMC Pacer guy to me. Maybe Daddy's station
>wagon complete with fake wood sides?
>
>I dated a guy once with a car like that. Geek city!

But it had really soft seats, which was a major advantage 
when Steve was coming home after his hot date.

>I'll bet he's running Linux these days. 

>Heather and Steve....
>
>Easily reached via [EMAIL PROTECTED]
>
>And yes, there is a Heather and there is a Steve and surprise 
>they are real names :)

With Multiple Personality Disorder, you're never lonely!



------------------------------

Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.os.os2.advocacy,comp.sys.mac.advocacy
From: [EMAIL PROTECTED]
Subject: Re: Giving up on NT (Bob shows his lack of knowledge yet again)
Date: Tue, 14 Mar 2000 03:38:54 GMT

 [EMAIL PROTECTED] (Jason Bowen) said:

>In article <38cd408e$2$yrgbherq$[EMAIL PROTECTED]>,
> <[EMAIL PROTECTED]> wrote:
>>Dave <[EMAIL PROTECTED]> said:
>>
>>>In article <38ccfde2$2$obot$[EMAIL PROTECTED]>, Bob Germer 
>>><[EMAIL PROTECTED]> wrote:
>>
>>
>>>> As others have pointed out, OS/2 can and does use ALL the memory thanks 
>>>> to
>>>> its cacheing methods which are far superior to what idiots who run any
>>>> Windows operating system experience.
>>
>>>No, others have pointed out that the way OS/2 uses memory (bottom up  instead
>>>of top down) may make the fact that the memory over 64 megs on a  430TX
>>>chipset MB IS NOT CACHED less of a problem, but it is STILL NOT  CACHED.  No
>>>OS can overcome this!!!!!!!!!!!!!!!!!!!!!!!
>>
>>This is true, but it is limited to the value of the chip cache.  OS2 does
>>overcome it in the sense that it does keep track of files and other things,
>>which do increase the overall performance levels to something beyond what
>>we see from Winwhatever.

>You really are dense, just like Boob.  Prove what you are saying with real
>knowledge, not some bs statement without proof.  The whole issue at hand was
>the cacheable limit of the 430TX chipset, Boob showed his complete ignorance
>on the issue.  We are speaking of the amount of memory that the CPU cache(L1,
>L2, L3) can address. You act as if OS/2 can address uncached memory faster
>than anything else. Tell me how it does that?  Does it speed up chip access
>times?  Does it circumvent the laws of physics?


No, I'm not dense, but you are a pathetic whiner who wouldn't believe the proof
if it knocked you over.  Go away with your wincrap -- because you know what: I
really don't give a shit what you and your windoze buddies think.



>>>You really *are* dense, aren't you.  Windows and/or OS/2's "cacheing 
>>>methods" have *nothing* to do with this problem.  It's a *chipset* 
>>>limitation.

>>You fellows are talking past each other.  


_____________
Ed Letourneau <[EMAIL PROTECTED]>


------------------------------

Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.os.os2.advocacy,comp.sys.mac.advocacy
From: [EMAIL PROTECTED]
Subject: Re: Giving up on NT (Bob shows his lack of knowledge yet again)
Date: Tue, 14 Mar 2000 03:38:58 GMT

Dave <[EMAIL PROTECTED]> said:

>[EMAIL PROTECTED] wrote:

>> This is true, but it is limited to the value of the chip cache.  OS2 does
>> overcome it in the sense that it does keep track of files and other things,
>> which do increase the overall performance levels to something beyond what
>> we see from Winwhatever.
>> 

>OSes have no bearing on this.  This a hardware, chipset level
>limitation.  

I didn't say it wasn't.

>All OSes "keep track of files and other things" via buffers and OS-level
>caches.  These buffers and caches are stored in main ram memory.  This has
>nothing to do with the *hardware* CPU cache, which is what we are discussing
>here.  The limitation is that the Intel 430TX Pentium motherboard chipset
>does *not* cache main ram above 64 megs.  You can install as much ram as you
>want, and it all will be used by whatever OS you are running (well, all
>except plain old DOS I suppose!), it's just that the memory over 64 megs will
>not be cached by the hardware
>motherboard static ram cache.

I never said it had anything to do with the hardware cache. I simply tried to
say that OS2 does a very good job of minimizing the performance hit.  

Borrowing from an archived file from Ron Higgin (I hope he does not mind),
here is his primer on OS2 memory management:

Unlike DOS (with or without Windows) OS/2 is a VIRTUAL rather than REAL memory
operating system. In many ways OS/2's memory management functions are as
complex as multimillion dollar IBM mainframe operating systems such as MVS and
VM/370.

Virtual memory systems manage REAL memory (RAM in PC jargon) as a global
system resource rather than on an application by application basis. Normal
(OS/2, DOS, and WinOS2) applications, and indeed most integrated OS/2
functions such as the Workplace Shell, do NOT actually "see" REAL memory.  The
memory they "see" is referred to as VIRTUAL memory.

Both REAL and VIRTUAL memory are managed in small (4K) blocks called pages. A
page of REAL memory is often referred to as a "page frame" whereas a VIRTUAL
memory page is usually referred to as simply a "page".

The operating system creates and manipulates internal memory management
tables. These tables associate a (virtual) page with the (real) frame that
"backs" it. A (virtual) page may reside in one of two places:

   1. A REAL page frame (that is, in RAM)

   2. The "SWAPPER.DAT" file on your hard drive


Now for the real magic of virtual storage ... NOT all pages need to have a
real (RAM) page frame associated with them. How is this possible?

Programs, and the data used by those programs, that comprise both applications
and most OS/2 functions reside in virtual storage pages. Almost all
applications, and OS/2 itself, consist of hundreds if not thousands of
individual program modules. Each module is designed to perform some specific
function and is therefore executed (and the data it operates on accessed) ONLY
when the function it represents is requested. For this reason a relatively
small number of the program modules residing in virtual memory are actually
executed. This is also true of the data those programs access.

When the CPU attempts to access a page, it uses the page's virtual address
as a kind of index into the virtual memory tables associated with the currently
executing session (virtual machine in OS/2 terms). The lookup will end in one
of two ways:

    1. The real memory (RAM) address associated with the (virtual) page
       address will be located.

    2. A hardware signal called a "program exception" will interrupt
       the CPU (and hence the executing thread).

In the first case program execution continues as though the lookup never
occurred. The program "thinks" it is executing an instruction, or accessing
data, at location "x" in real memory (RAM) when in fact it is actually
referencing a totally different address in REAL memory.

In the second case the CPU will automatically start executing an OS/2 provided
exception handling function that will in turn give control to the OS/2 memory
management function to resolve the problem, often referred to as a "page
fault". To understand OS/2's handling of REAL memory it is necessary to learn
a bit about page fault resolution.
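
In rough C terms - an illustrative sketch only, with made-up names and
layout, not OS/2's actual internals - the lookup just described amounts
to this:

    /* Illustrative sketch only -- not OS/2's real data structures. */
    #define PAGE_SHIFT 12                  /* 4K pages                   */
    #define PAGE_MASK  0xFFFUL

    typedef struct {
        int           present;             /* 1 = backed by a real frame */
        unsigned long frame;               /* real frame number, if so   */
        long          swap_slot;           /* slot in SWAPPER.DAT if not */
    } pte_t;

    /* Returns a real (RAM) address, or -1 to stand in for the
     * "program exception" that invokes the page fault handler. */
    long translate(pte_t *table, unsigned long vaddr)
    {
        pte_t *pte = &table[vaddr >> PAGE_SHIFT];
        if (!pte->present)
            return -1;                     /* case 2: page fault         */
        return (long)((pte->frame << PAGE_SHIFT) | (vaddr & PAGE_MASK));
    }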

Each time a real memory page frame is accessed (through the OS/2 managed
virtual memory lookup tables) the hardware automatically sets an indicator for
that frame that tells OS/2 the frame has been referenced. Likewise, each time a
frame is updated (that is, an instruction is executed that updates the page
frame contents), the hardware sets another indicator for that frame that tells
OS/2 the frame contents have been changed.

These reference/change indicators allow OS/2 to efficiently manage the REAL
storage resource on a global (system wide) basis.

When a page fault occurs, the event is in effect telling OS/2 that there is no
real storage (RAM) assigned to the virtual address being referenced. OS/2 can
resolve this problem in one of two ways:

   1. Assign an unused frame to "back" the virtual page.
   2. Steal a frame backing some other virtual page and use it to
      "back" the virtual page currently being accessed.

The first case is a "no brainer". As long as unused RAM exists OS/2 will quite
happily assign it to the interrupted process.  THIS IS WHY MEMORY MONITORS
SHOW THE STORAGE UTILIZATION AS BEING VERY HIGH IMMEDIATELY AFTER OS/2 IS
BOOTED. The OS/2 boot (actually the initialization) process accesses large
quantities of virtual storage to hold the programs that actually initialize
the operating system, and a high percentage of these programs are indeed
executed resulting in large quantities of RAM being needed for "backing" the
virtual storage.

The second case is much more complex to handle. In this case OS/2 needs
to find a frame to steal. It would not be very efficient if it were to steal a
frame that needed to be accessed very soon after the current fault was
resolved. To avoid such "thrashing" the system looks for a frame that has not
been accessed for the longest period of time; the so-called "least recently
used", or LRU, rule.

A frame that has neither the changed nor the referenced indicator set on can
be immediately assigned to the interrupted process WITHOUT having to write its
contents to the swap file.  A frame that has the referenced, but NOT the
changed, indicator on can be immediately stolen IF its contents have been
previously written to the swap file.  A frame that has both the referenced and
changed indicators on must always be written to the swap file before it can be
stolen and is therefore the lowest priority on the totem pole for resolving
page faults.
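
Put as a sketch (again illustrative C of my own, not OS/2's real code),
that priority rule boils down to a small cost function:

    /* Illustrative only.  'ref'/'chg' mirror the hardware indicators
     * described above; 'on_swap' means a valid copy of the frame
     * already exists in SWAPPER.DAT. */
    struct frame { int ref, chg, on_swap; };

    static int steal_cost(const struct frame *f)
    {
        if (!f->ref && !f->chg)    return 0; /* steal immediately       */
        if (!f->chg && f->on_swap) return 1; /* swap copy is still valid */
        return 2;                            /* must write to swap first */
    }

    /* Pick the cheapest victim; among equals a real system would
     * prefer the least recently used frame (the LRU rule). */
    int pick_victim(const struct frame *f, int n)
    {
        int i, best = 0;
        for (i = 1; i < n; i++)
            if (steal_cost(&f[i]) < steal_cost(&f[best]))
                best = i;
        return best;
    }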

OS/2 searches for a frame that represents the least amount of work (overhead)
to resolve the page fault. Clearly the least amount of work is in case 1 where
there is an unused frame to satisfy the request. For this reason OS/2
periodically rearranges memory to automatically steal frames that have not
been used for a relatively long time. This allows the system to maintain a
pool of unused frames by simply adding the stolen pages (now written to the
swap file) to the list of available (unused) real storage (RAM) pages. THIS IS
WHY OS/2 MEMORY USAGE IS SO UNPREDICTABLE WHEN VIEWED FROM A MEMORY MONITOR.

So what good is a memory monitor?

A memory monitor, depending on monitor implementation, gives you an
instantaneous, or averaged, view of the number of real page frames (times 4 to
get kilobytes, or KB) assigned to an OS/2 process or session. If you (or the
monitor) take a sufficient number of samples, the resulting data can be used
to determine the amount of storage ACTIVELY being used by a process or
application session. This is called the "working set" (of pages) for the
application.

The "working set" for a given application depends not only on the application
itself, but also on which functions offered by the application are being
actively utilized. THIS IS THE REASON WHY IT IS NOT ALL THAT EASY TO DETERMINE
HOW MUCH REAL STORAGE AN OS/2 APPLICATION NEEDS TO RUN.
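
As a back-of-the-envelope illustration - the sample counts below are
invented - the working set estimate is just an average of the monitor
samples times the 4K page size:

    /* Made-up frame counts from a hypothetical monitor; multiply the
     * average by 4 to convert 4K frames to KB. */
    #include <stdio.h>

    int main(void)
    {
        int  samples[] = { 812, 790, 845, 803, 820 };      /* frames */
        int  n = sizeof samples / sizeof samples[0];
        long total = 0;
        int  i;

        for (i = 0; i < n; i++)
            total += samples[i];
        printf("working set ~ %ld KB\n", (total / n) * 4); /* 3256 KB */
        return 0;
    }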

If you've made it to this point in my tutorial ... I commend your tenacity. I
apologize if I've offended the more technical users who may read this and
probably already know how virtual storage systems work. I simply do not know
how to adequately explain OS/2's behaviour with respect to RAM usage without
at least peeking under the covers.

Ron Higgin [OS/2 Advisor] 



>Now it may be true that OS/2 uses memory differently than Win9x (bottom up
>instead of top down).  I don't know enough about OS/2 memory management to
>know.  I have read in several places that Win9x does in fact use memory from
>the top down.  So OS/2 may in fact *minimize the impact* of this hardware
>limitation, but in no way can it get around it.  As you use more memory in
>OS/2 by loading more/bigger apps, etc. it will eventually show the effects of
>no cacheing as the used memory overflows into the beyond 64 meg range.

>Having said all this, I've run Win98SE and Linux on my 430TX chipset Pentium
>233 MB with 96 megs of ram and didn't notice any slowdown.  All other things
>being equal, more ram means less paging to the swapfile, and ram access is
>always faster than disk access, cache or no cache!

I've run Windows machines with 128mb and I didn't see a slowdown -- which does
not mean it couldn't be measured with the proper tools. What I have seen is
Windows run slower than OS2 doing essentially the same tasks with the same
software.

Example: A few years ago I set up a series of tests with Textbridge OCR, with
the Win95 version and the Win3.1 version running under OS2.  At the time I was
beta testing for XEROX (and had been for a few years, so I knew the developers
by name). I asked if there was a difference between the versions. They assured
me that the W95 version was a true 32bit application. The W31 was 16bit.  I ran
tests using the same set of scanned pages to eliminate any differences in
scanner performance under either system.

In the first set, using files on a FAT partition, the W31 version ran 25% faster
than the W95 version.  When I moved the files to a HPFS partition, the W31
version ran 30% faster than the W95 version. (XEROX's own testing had the W95
version 15% faster than the W31 version running on W31.)

Now one can go on all day about hardware caching, but I really doubt that
hardware caching could account for the 25%-30% speed differences.
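
To put the percentages in elapsed-time terms (a throwaway calculation of
mine; the 600 second batch time is invented, only the ratios come from
the tests above):

    /* If the W31-under-OS2 run is 25% faster, it does 1.25 units of
     * work in the time the W95 run does 1, so its elapsed time is
     * t / 1.25. */
    #include <stdio.h>

    int main(void)
    {
        double w95_secs = 600.0;       /* hypothetical W95 batch time */
        printf("W31 on FAT:  %.0f s\n", w95_secs / 1.25); /*  480 s  */
        printf("W31 on HPFS: %.0f s\n", w95_secs / 1.30); /* ~462 s  */
        return 0;
    }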


_____________
Ed Letourneau <[EMAIL PROTECTED]>


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
