Linux-Advocacy Digest #474, Volume #28           Fri, 18 Aug 00 11:13:06 EDT

Contents:
  Re: Anonymous Wintrolls and Authentic Linvocates - Re: R.E. Ballard says Linux 
growth stagnating (Donal K. Fellows)
  Re: Would a M$ Voluntary Split Save It? (Joe Ragosta)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Donal K. Fellows)
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Anonymous Wintrolls and Authentic Linvocates - Re: R.E. Ballard says 
Linux growth stagnating
Date: 18 Aug 2000 14:08:47 GMT

In article <8nhbj1$ll6$[EMAIL PROTECTED]>,
Mike Byrns <[EMAIL PROTECTED]> wrote:
> "Donal K. Fellows" <[EMAIL PROTECTED]> wrote in message
>> I would imagine that the lack of reentrancy refers to poorly
>> implemented libraries on the platform; I have heard stories
>> (unconfirmed by myself) that the platform uses a lot of spinlocks
>> instead of using something more reentrant (like context parameters
>> which can be stored on the local stack.)
> 
> Spin locks are restricted to kernel mode device driver development on
> multiprocessor machines.  They are not used in user mode libraries.
> http://www.microsoft.com/DDK/DDKdocs/Win2k/16issues_4qhz.htm

They've got better about that than they used to be?  I suppose it is
to be expected.  (Some of us have enough trouble following all the
latest developments of the fields we are closely involved in, without
searching for more stuff to track...)

>> The lack of memory protection almost certainly refers to the way
>> that all servers are implemented (in C/C++, languages not known for
>> their safety)
> 
> What language are the servers in linux implemented in?

C or C++.  But that's not my point.  My point is that because the
servers are usually implemented in C/C++ (on both platforms
admittedly) they will tend to have stability problems for at least the
first few revisions - writing a perfect bug-free server first time is
not impossible, but it doesn't happen very often - and this means that
strong isolation between service contexts (those things that are
handling a particular client's accesses at the moment) is almost
universally extremely important.  Processes offer greater isolation,
and because Unix processes are relatively cheap, they tend to be used
fairly often, giving a corresponding increase in overall system
stability.  Sure, processes are not as cheap as threads on any
platform.  But if the cost isn't that much more (and the combined
effects of fork() semantics and copy-on-write paging keep this cost
down for most common cases) then the loss in performance of any one
service context is offset by the gain in overall system stability and
availability.  (Crashed computers/services aren't very available!)
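
To put the isolation point in concrete terms, the classic Unix shape
is a fork-per-connection loop along these lines (a bare sketch, typed
straight into the newsreader; most error handling and the imaginary
serve_client() are left out).  Each child starts as a copy-on-write
image of the parent, so a crash or a wild pointer in one service
context cannot take down its siblings or the acceptor:

    #include <signal.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    extern void serve_client(int fd);   /* the actual protocol work */

    void accept_loop(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);       /* don't accumulate zombies */
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                continue;
            switch (fork()) {
            case 0:                     /* child: own address space,    */
                close(listen_fd);       /* shared copy-on-write pages   */
                serve_client(fd);
                _exit(0);               /* a crash here kills only this */
            case -1:                    /* fork failed: shed the client */
                close(fd);
                break;
            default:                    /* parent: back to accept()     */
                close(fd);
                break;
            }
        }
    }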

I suspect that a fair part of the stability of Unix userland is in
reality a historical accident.  It makes it no less valuable though.

[...'Doze does provide the basic tools, but...]
> In many cases programmers forgo implementing memory protection between
> threads but that's not the fault of the OS.  It's the fault of the
> programmer.  It's there if you want to take advantage of it.

It does raise the question of why there is a culture of not using such
techniques within the Windows programmer community.  A possible
hypothesis (I've not thought about it a lot, so this could be nothing
but hot air :^) is that this protection is awkward and/or tiresome to
use.  It could also be the curse of the "bare metal" programmer - some
coders believe that since they know precisely the context in which all
code is executing, they can take all sorts of short-cuts to improve
performance.  Alas, determining when you can use such tricks is a lot
harder than it looks, and many BMPs get it wrong.  (The games industry
seems to be particularly full of this type of thinking, along with a
large dose of NIH just to stir things up even more.  <sigh>)
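
For what it's worth, the nearest user-mode tool I know of for that is
the VirtualProtect() family: once a shared structure is set up, you
can flip its pages to read-only so that a stray write from any thread
traps immediately rather than silently corrupting things.  It is
page-granular and applies to the whole process rather than to
individual threads, which may be part of why people don't bother.  A
sketch (untested, Win32, and the structure is made up):

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical shared configuration, set up once and then frozen. */
    typedef struct {
        int  max_clients;
        char name[64];
    } config_t;

    int main(void)
    {
        DWORD old_protect;
        config_t *cfg = VirtualAlloc(NULL, sizeof *cfg,
                                     MEM_RESERVE | MEM_COMMIT,
                                     PAGE_READWRITE);
        if (cfg == NULL)
            return 1;

        cfg->max_clients = 64;              /* writable while initialising */
        strcpy(cfg->name, "example");

        /* Freeze it: every thread in the process now sees it read-only. */
        if (!VirtualProtect(cfg, sizeof *cfg, PAGE_READONLY, &old_protect))
            return 1;

        printf("%d clients\n", cfg->max_clients);   /* reads still fine */
        /* cfg->max_clients = 128;   <- would now raise an access
                                        violation instead of corrupting */
        return 0;
    }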

>> Unix servers tend to have more memory protection ('cos it is much
>> cheaper to use processes, and they offer much more isolation) so
>> the problem bites far less.
> 
> How much cheaper?  Both OSs do essentially the same things when creating
> processes.  I'd be interested to learn just how many processor cycles it
> takes each to create a simple "Hello, world." process.  I'll wager that it
> is cheaper to create a thread on Windows than it is to create a process on
> UNIX.  Since Windows threads CAN offer a similar degree of memory protection
> it would seem the more efficient route.

Well, Unix process creation is not linked to program execution in any
way, and your example is therefore not really a fair one.  This is
really an area where the impedance mismatch between the two
platforms' APIs makes comparison tricky.  Plus, we have to be careful
to distinguish between whether we are describing the theoretical limit
of the throughput and the practical experience of it.  Theoretically,
threads can beat processes.  Practically, it's harder, since in both
cases you need to allocate a new page (initially) for the stack and
the other pages can be left pretty-much alone.  The only differences
are in the extra changes Unix-class systems make to the memory
management hardware's configuration; I don't know the actual extra
cost of doing that, but it is likely to be relatively small.
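
If anyone fancies putting numbers on it, the measurement is easy
enough to do at home.  Here's a rough sketch (untested, and the
results will vary wildly between boxes) that times a bare
fork()/_exit()/wait cycle against pthread_create()/pthread_join(),
with no exec() anywhere in sight, since that is a separate cost on
Unix:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define ITERS 1000

    static void *noop(void *arg) { return arg; }        /* thread body */

    static double seconds(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
    }

    int main(void)        /* build with: cc bench.c -lpthread */
    {
        struct timeval t0, t1;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < ITERS; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);                   /* child: no exec() at all */
            waitpid(pid, NULL, 0);
        }
        gettimeofday(&t1, NULL);
        printf("fork/exit/wait:     %6.1f us each\n",
               seconds(t0, t1) / ITERS * 1e6);

        gettimeofday(&t0, NULL);
        for (i = 0; i < ITERS; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, noop, NULL);
            pthread_join(tid, NULL);
        }
        gettimeofday(&t1, NULL);
        printf("thread create/join: %6.1f us each\n",
               seconds(t0, t1) / ITERS * 1e6);
        return 0;
    }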

>> Given that you have a system that is less reentrant (forcing more
>> spinlocks to avoid trouble) and less stable (due to reduced levels of
>> memory protection) you reach the inescapable conclusion that you need
>> more hardware to serve an equivalent load.  That's just plain obvious.
>> Most of these things are not required to be necessarily so on NT; they
>> could be fixed.  It just hasn't happened yet AFAIK...
> 
> I think your conclusion is based on flawed data.  As I explained, spinlocks
> are used only in the driver code, not in the service code nor in the core
> DLLs, and thread memory protection is there if you want to use it.  If anything
> the lower overhead of protected threads vs. multiple processes should allow
> the same hardware running Windows 2000 to serve a greater load with the same
> degree of memory protection.

With perfect server implementations, maybe.  However that assumption
reminds me of a quote from somewhere (I don't remember where) that ran
like this:  "Assume a system of perfect spherical consumers..."  :^)
While the theoretical limits of a system are interesting, how the
system degrades in the presence of failures is usually even more
interesting.  The evidence from the field doesn't seem to support your
(only partially explicit) thesis all that well.

(As I've noted before, the spinlocks stuff was based on out-of-date
and flawed info.  I won't address the points arising therefrom, as they
are really not worthy of the electrons.  Withdrawing part of an
article is difficult though...  :^)

>> (Another factor that makes NT less capable at serving heavy loads
>> is the degree of OO technology used.  The problem with OO
>> (particularly where you have all your methods virtual, as is
>> necessary with COM for good technical reasons) is that it has
>> poorer code and data locality, which makes both disk- and
>> RAM-caching less effective, degrading performance.  Sure, OO has
>> loads of advantages but locality of reference isn't one of them.
>> Unix uses more procedural C, and that is easier for hardware to
>> speed-boost.)
> 
> Having pages mapped from the disk cache to the paging file allows
> both to be used more efficiently and can increase your cache
> effectiveness.  Consider a COM DLL that has been initialized and
> freed.  Its VM image gets released and under normal circumstances
> you'd have to reload the image from the disk or cache.  Windows just
> maps the disk cache pages as VM and accesses the image right out of
> the disk cache -- no copy required.  This easily offsets any
> performance hit taken due to differences in locality associated with
> OO.

Provided you can fit all the resources that you actually need into
cache.  (L1 caches L2, which caches main RAM, which caches swap
space.)  The smaller
your effective working-set, the better.  However, the biggest gains
come when you can move from using one level to only needing the next
one in.  Swap is *S*L*O*W*, so if you can keep everything in main
memory, so much the better.  L2 cache is much faster than main memory
(and at the prices charged for it, it darn well ought to be) so if you
can fit as many of the commonly used bits of the system in there as
possible, you win again.  L1 is usually so small that only stupid
little meaningless
benchmarks fit entirely inside.  :^)
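
You can see the cliff edges for yourself with a toy loop that walks
buffers of increasing size: the time per access jumps each time the
working set falls out of another level of the hierarchy.  A sketch
(untested; the stride and sizes are just guesses for a typical box):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define MAX_SIZE (16 * 1024 * 1024)   /* bigger than any L2 around */
    #define STRIDE   64                   /* roughly one cache line    */
    #define PASSES   64

    int main(void)
    {
        volatile char *buf = malloc(MAX_SIZE);
        size_t size, i;
        int pass;

        if (buf == NULL)
            return 1;

        for (size = 4 * 1024; size <= MAX_SIZE; size *= 2) {
            clock_t t0 = clock();
            long touches = 0;
            for (pass = 0; pass < PASSES; pass++)
                for (i = 0; i < size; i += STRIDE) {
                    buf[i]++;             /* touch one byte per line */
                    touches++;
                }
            printf("%8lu KB: %6.2f ns/access\n",
                   (unsigned long)(size / 1024),
                   (double)(clock() - t0) / CLOCKS_PER_SEC / touches * 1e9);
        }
        free((void *)buf);
        return 0;
    }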

The reason why OO tends to lose out is that caches work on a page or
cache-line basis (depending on the exact nature of the cache) and not
a method/function basis.  Procedural code tends to have better
locality of reference since all the operations on one kind of data
structure are usually collected together.  OO code instead tends to
collect functions along class lines, which means that the operations
that work on a particular object are distributed between that class
and all its superclasses.  Which is harder to optimise for cache
locality.  (Harder still when some of the classes are library
classes...)  This is one of those times when OO is *not* as good as
procedural code, and is just yet another demonstration of the fact
that silver bullets are really made out of mild steel, and tarnish
quickly...
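
Here is a caricature of the dispatch half of that, in C since that is
what we keep coming back to (the names are made up, and the example
deliberately extreme): the procedural version walks one array straight
through, while the COM-ish version goes through a per-object vtable
pointer, so the instructions actually executed are scattered across
whichever classes happen to implement the method:

    #include <stddef.h>

    /* Procedural style: one function, one array, walked in order. */
    struct point { double x, y; };

    double sum_x_procedural(const struct point *pts, size_t n)
    {
        double total = 0.0;
        size_t i;
        for (i = 0; i < n; i++)
            total += pts[i].x;            /* sequential, cache-friendly */
        return total;
    }

    /* COM-ish style: every object drags a vtable around and is reached
       through an indirect call, so the code run per element lives
       wherever the implementing "class" happens to be. */
    struct shape_vtbl { double (*get_x)(const void *self); };
    struct shape      { const struct shape_vtbl *vtbl; double x, y; };

    static double shape_get_x(const void *self)
    {
        return ((const struct shape *)self)->x;
    }
    /* a caller would point each shape's vtbl at something like this: */
    const struct shape_vtbl point_vtbl = { shape_get_x };

    double sum_x_virtual(const struct shape *shapes, size_t n)
    {
        double total = 0.0;
        size_t i;
        for (i = 0; i < n; i++)
            total += shapes[i].vtbl->get_x(&shapes[i]);  /* indirect call */
        return total;
    }

With real C++ virtual functions and a deep hierarchy the effect is the
same, just less visible in the source.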

It's a shame really.  Not much you can do about it though beyond
trying to keep your inheritance hierarchy shallow and the number of
parent classes of any class down.  And that's non-trivial.  Life
sucks, but so do DustBusters.

Donal.
-- 
Donal K. Fellows    http://www.cs.man.ac.uk/~fellowsd/    [EMAIL PROTECTED]
-- US citizens?  Remember, I rule the world in this scenario.  They aren't
   citizens of the US, unless that stands for United Stevenland.
                                         -- Steven Odhner <[EMAIL PROTECTED]>

------------------------------

From: Joe Ragosta <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.os2.advocacy,comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy
Subject: Re: Would a M$ Voluntary Split Save It?
Date: Fri, 18 Aug 2000 14:20:46 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
wrote:

> Said Joe Ragosta in comp.os.linux.advocacy; 
> >In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
>    [...]
> >> In other words, your company won't make a product if it can make a
> >> profit on it; it has to be able to profiteer (restrict access to it in
> >> order to charge exorbitant profits) or it isn't worth the investment.
> >> This is the standard mode of business today, and rather than being
> >> responsible for the wonders of the modern world, it merely takes
> >> advantage of it, and purports to take responsibility for it.
> >
> >Thanks for proving beyond any doubt that you don't have an ounce of 
> >business experience.
> 
> I don't have an MBA, if that's what you mean.  But I was part of a three
> person partnership with two employees in 1992, when we were acquired by
> a 40 person private corporation, which was subsequently purchased by a
> larger public corporation, in which I am now (through a series of
> dealings originating from the partnership) an extremely minor stock
> holder.  This is the extent of my business experience to date.

And with all that, you still haven't learned anything. Amazing.

> 
> >Sometimes having intellectual property protection is the only way to 
> >guarantee _any_ profit.
> 
> And when that still isn't enough, and you need two different forms of
> "intellectual property protection", namely trade secret *and* copyright,
> to apply simultaneously in order to restrict access to your product
> sufficiently that you can profiteer?  What then?

I don't know. Just what's your point?

I'm just curious. Who's the one to define when a profit is acceptable 
and when it's so high that it's profiteering? You? 

The only reasonable mechanism is a free market -- which you're trying to 
interfere with.

> 
> >Furthermore, you're assuming that there are infinite resources 
> >available. This is clearly false. If I have so many staff members, 
> >they're going to work on projects that are the most profitable. If I can 
> >justify adding people, I will do so, but only if the profits are high 
> >enough to justify it.
> 
> And if the profits aren't high enough, and you bet wrong, you lose
> money.  No company comes with a guarantee that what you spent your hard
> time and money developing is going to be valuable enough to the market
> that you will earn any profit at all.  Pretending you have to guarantee
> extraordinary profits in order to take the risk simply proves you don't
> have what it takes to compete.

Wrong. You keep using stupid terms like "extraordinary profits" and 
"profiteering". 

A business charges what they can for their product. The price is set by 
the market. If the price is too high, people won't buy.

They then pay their bills. If there's money left over, they have a 
profit. If they are smart, hard working, or just plain lucky, they have 
a lot of profit.

The objective is to increase profits -- at least under a free market 
system. Under U.S. law, a company is legally obligated to maximize 
shareholder return, as well.

> 
> >Do yourself a favor--learn a bit about how business works before 
> >spouting off.
> 
> Do me a favor; stop blowing smoke up my ass.  The rest of the business
> world generally survives on reasonable profits; why should anything
> involving IP be any different?  What makes work for hire so special in
> comparison to any other cost of goods that it should merit special
> ability to extort thousands of times its "real value" because of its
> "perceived value" when that "perceived value" is intentionally
> manipulated by the producer?

You don't have a clue what you're talking about. Just what is an 
"acceptable profit"? 

As for anything involving intellectual property having a higher level of 
profit, that seems completely reasonable. If you're developing something 
new (which warrants IP protection), most reasonable people would concede 
that it's OK to charge more than someone who's making a simple copycat 
product which doesn't get IP protection.

> 
>    [...]
> >> Its about proprietary trade secrets from every company that sells
> >> commercial software.  It has nothing whatsoever to do with 
> >> intellectual
> >> property.  If you can't publish it and still earn a profit on it
> >> (relying on copyright law and value-for-cost to prevent piracy), then 
> >> it
> >> isn't worth anything to begin with.  Software is text, not machinery.
> >
> >You don't think that trade secrets are intellectual property?
> 
> Apparently, I was contrasting trade secrets to copyright; trade secrets
> have a much much lesser requirement for originality.  Additionally,
> state law, rather than federal law, generally has more of a controlling
> authority than with copyright, though I don't really think that's
> relevant to this point.

IOW (speaking of blowing smoke.....), you don't know what you're talking 
about. You said trade secrets aren't intellectual property. When called 
on it, you don't admit your mistake, you just drift off into your 
alternative reality.

> 
>    [...]
> >> No, it isn't; these companies (except probably Stac, and any software
> >> products the others may sell) do not sell trade secret licenses to
> >> copyrighted information.
> >
> >Just about as far off topic as it could be.
> >
> >Stac's business plan was selling software. If Microsoft had been allowed 
> >to copy their algorithms freely (or if you got your way and Stac lost 
> >intellectual property protection), they would never have been in 
> >business long.
> 
> I didn't say Stac would lose intellectual property protection.  I said
> they have a choice of intellectual property protection.  They can
> copyright their software (so that anyone can read their algorithms, but
> still be unable to copy them) or keep it as a trade secret (so that only
> those who agree not to use the knowledge other than to provide the
> function can read them), but applying both is, even for Stac, and
> certainly for Microsoft, overreaching.

Only in your poor deluded reasoning.

> 
>    [...]
> >> So long as *your* expectations are that a business has no obligation 
> >> to
> >> act ethically, there is no way for ethical companies to compete.  
> >> That
> >> ought to tell you something about why there aren't many ethical
> >> companies these days.
> >
> >Only in the eyes of people like you who have bizarre definitions of 
> >what's ethical.
> 
> Perhaps, but at least I have a definition.  And it certainly isn't as
> bizarre as you seem to think; *open* competition, unfettered by *anyone*
> (including other competitors) is not really all that strange an idea.
> That it also means that you cannot fetter your competitors is an aspect
> of a free market that some people just can't grasp.  But consider the
> Sega v. Accolade case, often cited on other grounds.  This seems to make
> clear the point: if you claim copyright protection *in order to prevent
> competition*, it won't work.


That's nonsense and even you should know it.

The whole reason for copyright protection is to prevent competitors from 
stealing your idea.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
