Linux-Advocacy Digest #678, Volume #27           Fri, 14 Jul 00 17:13:03 EDT

Contents:
  Re: Linsux as a desktop platform (T. Max Devlin)
  Re: Linux is blamed for users trolling-wish. (T. Max Devlin)
  Re: Linsux as a desktop platform (T. Max Devlin)
  Re: Linsux as a desktop platform (T. Max Devlin)

----------------------------------------------------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.sys.mac.advocacy,comp.os.ms-windows.advocacy,comp.unix.advocacy
Subject: Re: Linsux as a desktop platform
Date: Fri, 14 Jul 2000 16:34:41 -0400
Reply-To: [EMAIL PROTECTED]

Said ZnU in comp.os.linux.advocacy; 
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
   [...]
>> Then other than buggy applications, there's no benefit to PMT, right?
>
>Wrong.

Couldn't you just say "you're mistaken"?  Or maybe skip it entirely and
merely address the point, as you do below?

>> Except its easier for the engineers, and doesn't work the way I want
>> when I *don't* have any idle time.  What happens in PMT if I *don't*
>> have idle time?
>
>That's where a PMT system really excels.
>
>CPU time is still dealt out according to task priority, and, as has been 
>pointed out, user interface processes tend to accumulate priority 
>because they don't do anything most of the time. And what you have to 
>realize here is that we're working on such small time scales that even 
>if you're typing away like mad, from the computer's point of view the 
>time between keystrokes is so long that you're not doing anything most 
>of the time. Why shouldn't it be using that time to get other things 
>done?

It should, but the assumption that any set of algorithms is the same
thing as "smart" is what makes a lot of the crap available in computers
today.  I recognize that process scheduling is a VERY low-level issue,
one which certainly doesn't seem an appropriate place for this kind of
double-checking, but obviously I'm not the only one who thinks so, as
the link that Ed sent me in email will show:

http://www.uk.research.att.com/~dmi/linux-srt/wm.html

There's at least one engineer at AT&T who isn't blinded by assumptions,
and realizes that the value of a desktop computer, regardless of *all*
other considerations, is entirely based on the user's ability to make it
work the way he wants it to work.

Even PMT systems give a nod to what I've been talking about by providing
re-prioritization via nice or the NT Task Manager.  The idea that this
issue is dealt with by, again, an automatic algorithm using a rule
that user interface processes should "tend to" accumulate priority
because they don't do anything most of the time is not enough to
disabuse me of the notion that there is an issue here.  Apparently these
scheduling algorithms are deep magic, as this is the closest I've ever
seen to a cogent explanation of their behavior for non-engineers.
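
For the curious, the gist of what's being described can be sketched in
a few lines of C.  This is a toy decay-usage scheduler in the spirit of
the classic 4.3BSD one, NOT any particular kernel's actual algorithm;
the task names and constants are invented for illustration:

/* Toy decay-usage scheduler.  Each tick the chosen task is charged
 * "recent CPU"; every task's charge then decays.  Effective priority
 * is base minus recent charge, so a task that mostly sleeps (an editor
 * waiting on keystrokes) drifts back to the top of the queue, while a
 * compute-bound task sinks. */
#include <stdio.h>

#define NTASKS 2

struct task {
    const char *name;
    int base;     /* static base priority        */
    int recent;   /* decaying recent-CPU charge  */
    int runnable; /* wants the CPU this tick?    */
};

int main(void)
{
    struct task t[NTASKS] = {
        { "editor (mostly idle)", 20, 0, 0 },
        { "compiler (cpu-bound)", 20, 0, 1 },
    };

    for (int tick = 0; tick < 12; tick++) {
        t[0].runnable = (tick % 4 == 0);   /* a keystroke every 4th tick */

        int best = -1;                     /* runnable task, best priority */
        for (int i = 0; i < NTASKS; i++)
            if (t[i].runnable &&
                (best < 0 || t[i].base - t[i].recent >
                             t[best].base - t[best].recent))
                best = i;

        if (best >= 0) {
            t[best].recent += 4;           /* charge it for the slice */
            printf("tick %2d: %s runs\n", tick, t[best].name);
        }
        for (int i = 0; i < NTASKS; i++)
            t[i].recent /= 2;              /* decay everyone's charge */
    }
    return 0;
}

Run it and the mostly-idle "editor" wins the CPU whenever it wakes up,
because its recent-CPU charge has decayed away while the compiler's has
not.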

Your "forever between keystrokes" idea is also disconcerting.  Because
it hints that the inverse is also true; while I'm sitting their for the
CPU to get something *which requires CPU processing* done, the reason it
is only at 30% utilization is because it is wasting more than two thirds
of the time not getting it done.  Please let me know if this is not a
valid assumption.  I am aware that CPU is not the only bottleneck.  I am
also aware, most specifically, that engineers who work with the
technical details of a system can be surprisingly blind to the way that
system actually behaves in the real world.  The situation seems similar,
for instance, to a discussion between a frame relay and a router guy,
with both sides being able to explicitly "prove" that its the other guy
who is the bottleneck, and not them.  And neither of them are even
aware, often, until I explain it, that when a "network" is at 50%
utilization, YOU CANNOT TELL if it is because you only needed 50% of the
bandwidth, or if it is because you could only get 50% of the bandwidth.
Because both are dealing only with their system, and not with its
interaction, end-to-end, with the human beings who ultimately decide if
something is efficient or if it works.
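
One thing you *can* measure, with nothing more than standard POSIX
calls, is how much of a single process's wall-clock life was actually
spent computing; a low ratio means it was waiting on something other
than the CPU.  A minimal sketch, with a made-up busy-loop and a sleep
standing in for a real workload:

/* Compare a process's CPU time against its wall time using times(2).
 * The workload below is a stand-in: some computation, then a second
 * of blocking.  A low cpu/wall percentage means the process spent
 * most of its life waiting, which is exactly the ambiguity a single
 * utilization number hides. */
#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

int main(void)
{
    struct tms cpu0, cpu1;
    long hz = sysconf(_SC_CLK_TCK);     /* clock ticks per second */
    clock_t wall0 = times(&cpu0);

    volatile double x = 0;              /* stand-in for real work */
    for (long i = 0; i < 50000000L; i++)
        x += i * 0.5;
    sleep(1);                           /* stand-in for blocking on I/O */

    clock_t wall1 = times(&cpu1);
    double wall = (double)(wall1 - wall0) / hz;
    double cpu  = (double)((cpu1.tms_utime - cpu0.tms_utime) +
                           (cpu1.tms_stime - cpu0.tms_stime)) / hz;

    printf("wall %.2fs, cpu %.2fs (%.0f%% of wall)\n",
           wall, cpu, wall > 0 ? 100.0 * cpu / wall : 0.0);
    return 0;
}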

>Additionally, GUI PMT systems typically give priority bonuses to the 
>foreground app. This means everything tends to stay nice and responsive.
>
>In a CMT system with no idle time, you end up with apps fighting over 
>the CPU. A heavily loaded CMT system can literally take _minutes_ to 
>respond to a user interface event as simple as registering a mouse click.

As can a bad PMT system, at least.  I've had Unix boxes behave that
way, too, though I've no idea what in particular caused the issue; only
that rebooting fixed it.  ;-)

--
T. Max Devlin
Manager of Research & Educational Services
Managed Services
ELTRAX Technology Services Group 
[EMAIL PROTECTED]
-[Opinions expressed are my own; everyone else, including
   my employer, has to pay for them, subject to
    applicable licensing agreement]-



------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: alt.sad-people.microsoft.lovers,alt.destroy.microsoft
Subject: Re: Linux is blamed for users trolling-wish.
Date: Fri, 14 Jul 2000 16:34:48 -0400
Reply-To: [EMAIL PROTECTED]

Said [EMAIL PROTECTED] in alt.destroy.microsoft; 
>Better yet, make up something absurd and describe how you feel the
>failure/problem is related to it.
>
>Ex:
>       I feel the BSOD's are coming from the "double-flush" buffer in
>the ball-cock module. I've seen this happen before when a new memory
>simm is installed and the user doesn't re-install Windows so it can
>recognize the dual piping system now. 
>
>
>Or something along those lines.
>
> It's kind of like the "muffler bearings" joke amongst auto mechanics.
>
>You have to be convincing though and do it with a straight face :)

Yup, that's the trick.  MS has figured out it works very successfully,
enormously profitably, even, if the goof has convinced himself that
technology is incomprehensible to begin with.  It's a simple matter of
providing "training" which is little more than lists of these kinds of
excuses, together with an absolute lack of any technical consideration
of the software requirements of the operations being performed.

In the open systems world, obviously, it is a bit tougher to get away
with, but not much, because you'd never do it to someone who is clueful,
and there are many of them.  The manager may be mollified with such
claptrap, though, saving you from having to admit you need someone to
cast some runes.

It's my own personal back yard, so maybe I'm seeing things that others
don't, or maybe I'm just seeing things (and we'll leave it for the
reader to try to guess if they could tell which was true, if it were
true).  I guess that just means I need to research it further, and I
will when I get the chance.  My example is their OSI stack thing.

I hate every vendor's OSI stack thing; if you want the full spiel,
check Deja; there's tons of detail on my entire professional argument
in dozens of posts.  I won't bore anyone yet again with the details.
Suffice it to say it is a 7-layer "model of the network".

Now, maybe every vendor does it this way, to some extent, but Microsoft
seems to be the most extreme.  What they do is essentially teach this
model in a way particularly suited (and thus, apparently, intended) to
provide seven _other_ things that might be wrong, so Windows can be
held blameless for a random failure.  In contrast, I find the 7-layer
model quite useful in identifying seven potential things which Windows
may have glitched to cause a random failure, and I have often pointed
out that this is more appropriate to the purpose and use of the OSI model.
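
To make the contrast concrete, here is the diagnostic use of the model
reduced to a trivial C checklist.  The questions are my own
illustrations, not anything out of a vendor's training material:

/* Walk the seven layers asking, at each one, what the local system
 * might have gotten wrong -- rather than treating the layers as a
 * list of other parties to blame. */
#include <stdio.h>

int main(void)
{
    static const char *check[7][2] = {
        { "1 physical",     "is the interface actually up?"             },
        { "2 data link",    "right framing, duplex, MTU?"               },
        { "3 network",      "address, mask, gateway configured sanely?" },
        { "4 transport",    "connections refused, reset, timing out?"   },
        { "5 session",      "name resolution and session setup OK?"     },
        { "6 presentation", "data formats and encodings agreed on?"     },
        { "7 application",  "is the application itself misbehaving?"    },
    };

    for (int i = 0; i < 7; i++)
        printf("%-16s %s\n", check[i][0], check[i][1]);
    return 0;
}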

And on the subject of random failures, those who have followed this
thread may have noticed that while I disagreed with both Nathan and
"mjcr's" definitions of "random failure", I didn't provide a considered
alternative.  This was intentional at the time, so as not to confuse the
issue further.  A random failure is not, as "mjcr" explicitly
identified, an intermittent failure, nor is it an indeterminate failure
as Nathan more generally used the term.  A failure is said to be
"random" (throughout the world of technology and science, I'll point
out) if its occurrence, not its location, cannot be pre-determined.  In
a sample, the frequency of occurrences, and even their character, can
be identified.

What cannot be pre-determined, even by these means, is which of them, or
a potentially novel one, will occur in any given instance of
implementation, or when it will occur.  By seeking to minimize the
failures of a known, frequent character, we build "guidelines" for
trying to avoid these unavoidable, almost inevitable, yet still random,
failures.
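
That distinction is easy to demonstrate.  Here's a toy C simulation
which models failures as a Poisson process (a standard modeling
assumption, not a claim about any particular system): over a sample the
failure *rate* is measurable and fairly stable, but the gap before the
next failure is all over the map.

/* Exponential inter-arrival times: the rate is recoverable from the
 * sample, the timing of any individual failure is not.  The rate and
 * seed are arbitrary.  Compile with -lm. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const double rate = 0.01;   /* one failure per 100 hours, on average */
    double total = 0.0;
    srand(12345);               /* arbitrary seed */

    for (int i = 1; i <= 20; i++) {
        /* exponential inter-arrival time: -ln(U) / rate, U in (0,1) */
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double gap = -log(u) / rate;
        total += gap;
        printf("failure %2d after %7.1f hours   (avg so far %6.1f)\n",
               i, gap, total / i);
    }
    return 0;
}

The running average tends toward the 100-hour mean; any individual gap
is anyone's guess.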

For every failure blamed on using old drivers, there's one that occurs
because of using new drivers.  As characterizations placed almost
arbitrarily on the population's observation of random failures, these
distinctions are illusory to begin with.  If you add a new driver to a
system containing an old driver, and one fails, are you actually in a
position to define one or the other as literally having a bug in it?
The illusion that specifications are pristinely perfect and completely
cover all contingencies, able without judgement to declare right from
wrong, is just the kind of artificial thinking, I guess I'd call it,
that I accused David Petticord of practicing in his approach to statute
and law.  There is not a single divine standard for all compatibility
problems between all software, which can arbitrarily place blame for
the two not being able to function correctly.  And even if there were,
the end user, the implementor, and the consumer involved in the system
are not in a position to give a flying fig either way.  Particularly if
these three roles are filled by a single individual, the question
becomes "which can I replace most easily", and that is the end of it.

It isn't so much that you can't tell what went wrong.  Many times you
can (whether you characterize it as a Windows glitch or a
misconfiguration or something else is irrelevant), but that isn't what
makes the label "random" appropriate in anything but the most casual
(read: amateurish, even ingenuous) sense.  The thing that is randomized
is your ability to predict when *a* failure will occur, or of which
character it will be when it does, not your ability to recognize or
even forestall or avoid any one particular failure after it happens.
The question of 'if' a random failure occurs is meaningless.  If one
hasn't happened yet, it is because they occur randomly, and one simply
hasn't happened *yet*.  It is this very character of randomness which
causes the "sometimes it's stable, sometimes it's not" observations,
which lead people to assume that they can predict, or even identify,
which is which.  Even as they give advice, in the same breath, on how
to "prevent" future failures (of that one particular variety) from
happening again, they remark on how oddly incongruous such behavior is
with their non-Windows troubleshooting experience.

So now I think I've made my entire case, for all to read and consider at
their leisure.  Am I a fool with delusions of grandeur, trying to tell
millions of competent and highly trained and intelligent people that
they are dumb?  Or have I noticed that everybody seems to be making an
assumption which doesn't actually turn out to be valid, no matter how
many people are convinced of it?  Feedback would be greatly appreciated,
and I'll try not to simply use it as an excuse to keep rambling.  I'm a
whore for compliments, but I also would be more than grateful to anyone
who can point out where I might be mistaken or making an inappropriate
assumption.

--
T. Max Devlin
Manager of Research & Educational Services
Managed Services
[A corporation which does not wish to be identified]
[EMAIL PROTECTED]
-[Opinions expressed are my own; everyone else, including
   my employer, has to pay for them, subject to
    applicable licensing agreement]-



------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.sys.mac.advocacy,comp.os.ms-windows.advocacy,comp.unix.advocacy
Subject: Re: Linsux as a desktop platform
Date: Fri, 14 Jul 2000 16:35:12 -0400
Reply-To: [EMAIL PROTECTED]

Said Bob Hauck in comp.os.linux.advocacy; 
>On Thu, 13 Jul 2000 21:06:38 -0400, T. Max Devlin <[EMAIL PROTECTED]> wrote:
>
>>Because there are too many engineers who get too wacked out at the idea
>>of supporting *cooperative* application implementations on a user's
>>desktop system and can't figure out how to provide the necessary
>>functionality without going to PMT.
>
>There are applications where CMT is the right thing.  For example, some
>real-time systems use CMT in order to improve determinism (at the cost
>of more complex software designs).  Systems where you mainly only need
>task switching rather than true multitasking and have limited system
>resources (like the Palm).  And so on.  
>
>But for general desktop use, PMT is more robust, easier to program for,
>and with modern CPU's and schedulers there is no significant downside
>to PMT systems.

Yes, I am willing to concede the point.  I wouldn't have gone so far in
my arguments if more engineers had simply said "it was appropriate for
the 'toy client' mentality of the Mac, where single-tasking is really
all you need because the user could only do one thing at a time in that
context, but even a client system requires a more robust scheduling
system, although your point about weighting performance towards user
interaction is a valid one for the subject of Linux as a desktop
platform; thank you for your input."  Needless to say, that wasn't the
reaction I got, was it?
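
For anyone who hasn't seen one, the cooperative model being conceded
here fits in a dozen lines of C: a round-robin loop over tasks that
each run briefly and voluntarily return control, which is roughly what
the Palm and the classic Mac OS rely on.  The tasks are made up; the
fragility is the point, since one task that fails to return starves
everything else:

#include <stdio.h>

typedef int (*task_fn)(void);   /* runs one slice; 0 means finished */

static int ui_left = 3, work_left = 6;

static int ui_task(void)   { printf("ui: handle an event\n");  return --ui_left   > 0; }
static int work_task(void) { printf("work: crunch a chunk\n"); return --work_left > 0; }

int main(void)
{
    task_fn tasks[] = { ui_task, work_task };
    int alive[]     = { 1, 1 };
    int remaining   = 2;

    while (remaining > 0)                  /* round-robin over live tasks */
        for (int i = 0; i < 2; i++)
            if (alive[i] && !tasks[i]()) { /* each slice MUST return...   */
                alive[i] = 0;              /* ...or everything starves    */
                remaining--;
            }
    return 0;
}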

>The reason Win9x sometimes has lousy responsiveness is not because it
>is has PMT, but because it has bad PMT. [...]

I concur on that point, as well.  A lurker who frequently (and to great
appreciation) emails me some very incisive comments (why don't you
post, Ed?) made the point better in two paragraphs than most of these
"engineers" did in 200, essentially by responding appropriately, as I
described above.  You yourself did the same, by opening your response
with the comment "CMT makes sense in some cases...", because however
limited those cases are, thinking that engineers are developing
technology based on the assumption that something is 'stupid' does not
inspire confidence in the people who desire to advocate their solution.
The Palm Pilot, by the way, is the perfect case for everything I've
been saying, is it not?  And obviously that is an important case in at
least some regards.  I would say the Palm is the Macintosh of the new
millennium, in a way, and that seems appropriate on many, many
different levels.  But I would never suggest the Palm as a desktop
platform, to be sure.

--
T. Max Devlin
Manager of Research & Educational Services
Managed Services
ELTRAX Technology Services Group 
[EMAIL PROTECTED]
-[Opinions expressed are my own; everyone else, including
   my employer, has to pay for them, subject to
    applicable licensing agreement]-



------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.sys.mac.advocacy,comp.os.ms-windows.advocacy,comp.unix.advocacy
Subject: Re: Linsux as a desktop platform
Date: Fri, 14 Jul 2000 16:35:14 -0400
Reply-To: [EMAIL PROTECTED]

Said Bob Hauck in comp.os.linux.advocacy; 
>On Thu, 13 Jul 2000 21:13:47 -0400, T. Max Devlin <[EMAIL PROTECTED]> wrote:
>
>>So why isn't everyone using a realtime OS on their client desktop PCs?
>
>Because most current desktop systems were not designed for multimedia. 
>They originated before multimedia became popular.  Realtime features
>are not easily "tacked on" after the fact, although there are packages
>for both NT and Linux that "slide a realtime OS underneath".
>
>There are also disadvantages to realtime systems for desktop use that
>are similar to the disadvantages of CMT systems.  Realtime systems
>allow high-priority tasks to monopolize the system.  For many business
>applications this is not a good thing.  Something like what you call a
>"workstation" is in fact more appropriate.
>
>For home entertainment, it certainly would be appropriate to have
>realtime features in the OS.

But I don't think it is appropriate to have home entertainment in the
OS.  Fantasies of consolidating all home electronics equipment into the
PC itself are counter-productive.  I want my PMT (which should look like
CMT to me, both magically and automagically) computer to interface with
my RT DVD player, not *be* my DVD player.

I'll go further to say that I think anyone who maintains the fantasy of
wanting their OS to be realtime for home entertainment purposes has
simply based their expectations on an assumption proffered by the
manufacturers that this would be a good thing.  This illustrates the
ongoing (I hope) discussion in alt.destroy.microsoft concerning the
ramifications of the Microsoft decision and anti-trust law.  The theory
is that a Supreme Court ruling consistent with a judgment that Microsoft
was illegally tying (as opposed to 'merely' monopolizing) when they
integrated IE into Windows would have the effect of potentially
preventing manufacturers from pushing this view on the consumer.  The
purpose of such tying is to drive up profits without increasing (and,
in fact, while massively decreasing) the efficiency of the market in
delivering goods from producers to consumers.

There's a difference between "I want my MTV", and "I want my MTV along
with 14 other channels in a package with bloated a la carte pricing, if
any at all is available to begin with".  The second should, due to the
value to the producer of having channels delivered versus their
*variable* cost of producing them, be cheaper in *total* cost than
merely MTV by itself.  Do you see what I'm saying?

If you want to know if Linux makes a good desktop platform, look to the
*real* market requirements, not the technical ones, or the *marketing*
requirements.  That's always been what it's really all about.  Some do it
idealistically, like RMS and his politics.  Some do it practically, as
Apple has, which is why I refuse to call their decisions "stupid" even
if I don't agree with them, and some do it by figuring out how to
exploit it in order to feed their egotistical dreams of being the king
of the world.

--
T. Max Devlin
Manager of Research & Educational Services
Managed Services
ELTRAX Technology Services Group 
[EMAIL PROTECTED]
-[Opinions expressed are my own; everyone else, including
   my employer, has to pay for them, subject to
    applicable licensing agreement]-



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
