Linux-Advocacy Digest #95, Volume #31            Thu, 28 Dec 00 10:13:08 EST

Contents:
  Re: New to Linux, and I am not satisfied. (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Conclusion (T. Max Devlin)
  Re: Is Windows an operating system like Linux? (T. Max Devlin)
  Re: Is Windows an operating system like Linux? (T. Max Devlin)

----------------------------------------------------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Subject: Re: New to Linux, and I am not satisfied.
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:28 GMT

Said Bob Hauck in comp.os.linux.advocacy on Tue, 26 Dec 2000 16:43:44
GMT; 
>On Sun, 24 Dec 2000 15:45:28 GMT, T. Max Devlin <[EMAIL PROTECTED]> wrote:
>>Said Bob Hauck in comp.os.linux.advocacy on Sat, 23 Dec 2000 23:43:42 
>>>On Sat, 23 Dec 2000 19:40:31 GMT, T. Max Devlin <[EMAIL PROTECTED]> wrote:
>>>>Said Bob Hauck in comp.os.linux.advocacy on Sat, 23 Dec 2000 15:43:43
>>>>GMT; 
>
>> The primary reason for avoiding an ISP-based news service is
>> they require that you be connected through their dial-up box in order to
>> access news.  Since I always am going to be using a variety of business
>> and personal systems and dial-up accounts, I'm not satisfied with this
>> arrangement.
>
>Ok, that's legitimate.  Your ISP could provide authenticated NNTP so you 
>could log in from anywhere, but if they don't then they have to restrict 
>by IP, otherwise the spammers will be pounding them into the ground in a 
>week.

Spammers aren't the problem; on a modern news server you can, I would
expect, restrict posting alone, and I wouldn't mind that.
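
For what it's worth, here's a minimal sketch of what an authenticated
reader session looks like from the client side, using Python's standard
nntplib; the host name and account are hypothetical.  A server run this
way can accept logins from any network and still refuse posting for a
read-only account, independent of source IP.

import nntplib

# Hypothetical server and credentials; authentication happens over the
# NNTP connection itself, so the client's IP address doesn't matter.
srv = nntplib.NNTP("news.example-isp.net", user="subscriber",
                   password="secret", readermode=True)

resp, count, first, last, name = srv.group("comp.os.linux.advocacy")
print("%s: %d articles (%d-%d)" % (name, count, first, last))

# Posting is a separate command (srv.post); a read-only account simply
# gets a "posting not allowed" response, while reading still works.
srv.quit()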

>In the "old days", there used to be a lot of open NNTP servers.  The
>spammers and software pirates and idiots have pretty much killed off
>that breed.  That's too bad, but I think it is the price you pay for the 
>growth of the Internet.

No, it's the carriers who have killed off that breed, by keeping
bandwidth so expensive.

>>>> So how come you had to pay by the number of simultaneous users?
>>>
>>>How else would you price it?  You could do flat-rate, but that would
>>>mean the little customers subsidize the big ones.
>>
>>Actually, the big customers subsidize the little ones; that's what
>>economies of scale are supposed to provide.  Get it?
>
>No, I don't, because we are comparing two different pricing models with
>different effects regarding cross-subsidy.
>
>*If* they charge a flat rate, ISP's with 10 simultaneous users pay the
>same dollar amount as ones with 500.  That sounds to me like the small
>guy is subsidizing the big guy, since at least some of the NNTP service's 
>costs are related to per-user resources.

This would only be true if the cost per user were linear, and it's not.
That is why I say that "economies of scale" don't work the way you
expect them to.  "At least some" isn't enough, particularly when it is
this cost which increments least. 
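
To hang some rough numbers on that (entirely made-up figures, just to
illustrate the shape of the curve): if the cost of running the service
is a fixed base plus a per-user term that grows sub-linearly, the cost
per user falls as the customer gets bigger, which is the economy of
scale at issue.

# Made-up cost model, for illustration only: a fixed base cost plus a
# per-user term that grows sub-linearly with the number of users.
def monthly_cost(users, base=2000.0):
    return base + 15.0 * users ** 0.6

for users in (10, 100, 500):
    total = monthly_cost(users)
    print("%4d users: total $%8.2f, per user $%7.2f"
          % (users, total, total / users))
# The per-user figure falls from roughly $206 at 10 users to about $5
# at 500, so a flat per-user price calibrated to the small customer
# over-charges the big one.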

>OTOH, *if* they charge per-user, then it might be as you suggest where
>the big guy subsidizes the little guy, depending on what kind of volume
>breaks are given.

No, they charge per ISP, scaled by the size of the ISP, not by the
number of users who use the Usenet service.  That way, all the ISP's
customers who don't use NNTP "subsidize" the service for the few that
do.  If they charge per simultaneous connection of a user through the
ISP, they're charging the wrong people, and subsidizing the heavy users
by charging them as little (no scalability, no discounts, no reason not
to use it as heavily as possible) as the people who only use the system
a little bit.

>>>The economies of scale I spoke about are related to the fact that a
>>>small ISP doing their own news server will be carrying thousands of
>>>groups that literally none of their customers read.
>>
>>Actually, they won't be carrying any groups.  How is that economies of
>>scale?
>
>I write "the small ISP doing their own news server".  They do exist you
>know.  That is, ISP's that handle their own news feeds (and you normally
>want more than one to get reliability and full coverage).  They get
>feeds from a couple of other ISP's, and maybe feed some too.  That's how
>it used to be done, and still is by a shrinking minority.  You might ask
>yourself why they are a minority and why there are fewer and fewer of
>them doing it themselves and more are contracting with Supernews and the
>like.

Because fools chase easy money, rather than honest profit.  Voicenet
still does it, AFAIK, and I'm happy to have signed on with them.  I
don't care about the ISPs that contract with Supernews; I care about the
contracts they have with Supernews, because they're shaving costs at
*my* expense, whenever possible, from both sides.

>>I know, I know.  You're trying to say it's cheaper for a large company to
>>run servers for bunches of ISPs.  Which leaves my question, "How is it
>>'economies of scale' that they charge the ISPs per simultaneous user?"
>>You asked "how else", and that's really beside the point.  However they
>>do it, apparently they aren't benefitting from 'economies of scale'.
>
>There are economies of scale if the big service can provide NNTP for
>less than what an ISP would pay for equipment, bandwidth, labor, and the
>cost of the feeds to do it on their own.  There can still be per-user
>costs in this scenario, but they would need to be lower than what they
>would be on a smaller scale.

There are no per-user costs on the server (administration of
newsgroups); it's all hardware and bandwidth.  There are no separate
economies of scale for a Usenet service.  Networks don't work on a
per-user basis; that's just a rough (and generally extremely erroneous)
number which can only be averaged.

>>> So they are paying for disk space and bandwidth to download groups
>>> that nobody reads.  And that's a lot of disk space and a lot of
>>> bandwidth these days.
>
>>Disk space and bandwidth are both entirely replenishable.
>
>Disk space is cheap, but not free, and the high-performance variants
>needed for a good news server are especially non-free.  Bandwidth is not
>particularly cheap to an ISP either because they can't (well...shouldn't)
>buy a consumer pipe that's already been oversold 30 to 1.  And labor is
>very expensive.

Indeed, and ISPs make money on all of this that they successfully
resell.  I am unconcerned with how they arrange it, so long as they
offer me a fair price for acceptable quality.  It has nothing to do
with business *models*; it's the business operations which are lacking.
Any moderate-sized ISP can support a Usenet server, if they're not
brain-dead.  The costs are minimal, and if you don't make money, you
charge more.  Or, you take the easy way out, and resell a lower-quality
and higher-priced service, subsidizing the heavy users with the light
ones, since the consumer has no scalable cost.  If you run your own
service, you subsidize the light users with the heavy ones, and this
makes good sense because the heavy users are generally much more
willing to pay higher prices for greater value.

>> Your argument seems to ignore the fact that the entire basis of Usenet
>> is servers which deal with these issues precisely.  Perhaps it is too
>> much work to handle it well, but it's really a matter of how short your
>> aging is.
>
>You seem to forget that I actually ran a Usenet server with a full feed
>for about four years.  I know what I'm talking about.

You don't do it anymore.  And four years is a long time on the Internet.

>How short your aging is affects how much disk space you need, which does
>indirectly affect how hard you work (if you try to run close to capacity
>in order to make maximum use of your disks, sometimes you'll get hit by
>a blizzard of spam or Grateful Dead tunes and run out of space).  How
>much work you do is also affected by how reliable your equipment is, how
>reliable your upstream feeds are, and how often you need to upgrade your
>hardware in order to keep up with the ever-growing volume of binary crap
>and spam.

Oh, the horrors of having to work for a living.

>News admins spend a lot of time tuning things and recovering from
>various failures (e.g. one of your feeds went down for a day and is now
>spewing gigabytes of old news at you as fast as it can).  They are also
>continuously spending money on more of those cheap LVD SCSI disks.

You seem to be under the impression that I've said that it doesn't cost
anything to run a news server.  I don't know where you got that idea.

>Being a full-time news admin isn't so bad, actually, but trying to do it
>part time is a pain in the ass and most small ISP's can't afford a
>full-time admin for news.  They would rather spend the money on things
>like tech support or web admins, things that more of their customers
>care about.

Maybe they're too small to be decent ISPs, too.  I'm still up in the air
about the efficiencies involved in the free market for Internet stuff.
It's still sorting out, but I don't really think there's much of an
inevitable move towards larger or smaller.  LebaNet survives still, but
I've recently and gratefully dropped them for Voicenet.  I had to,
though, since I moved.  Voicenet is much bigger than LebaNet, but it's
still considered a small ISP.

>> I don't like what Deja or Supernews have become, and I don't like my
>> ISP "giving me" NNTP access only when I'm dialed in through their
>> links.  
>
>Yes, that is one complaint a few of our more sophisticated users had
>about Supernews.  The alternative is to use authenticated NNTP, which
>old news clients don't support.  That is less of a problem now, and
>maybe they should change their policy.
>
>Most of the users who complained about this were also clued enough to
>understand how to read news with slrn from a shell account.  Giving
>these users shell accounts solved the majority of the complaints.

Shell accounts are extremely rare these days.

>> Turns out, it's not very popular, even though I know it is the most
>> reliable, powerful, scalable, and flexible way to run discussion
>> forums and article archives and such.  That sucks.
>
>Yes, it is the most scalable way to run forums.  Yes, it is more
>flexible than web-based ones.  Yes, it does suck that it seems to be
>getting shunted aside.  There are lots of reasons for what's happening
>though, only a few of which have to do with the avarice of Supernews.

Not avarice; incompetence and ignorance.  I have no interest in passing
moral judgements on anyone, so whether that is spawned by greed,
laziness (my personal favorite) or stupidity is not for me to say.  And
it is, generally, the reason for "what's happening", not at all limited
to Supernews.

>One thing is that the spammers discovered Usenet.  Yes, the cancelbots
>sort of keep it under control, but at the cost of further increasing the
>already insane volume of messages.  Even a few years ago I was getting
>tens of thousands of cancels a day.  
>
>Another thing is piracy and porn.  Not that they exist, but the sheer
>volume they create.  The volumes on binary groups are just out of
>control.  Worse, since the volume in binary groups is created by
>relatively few irresponsible people, it changes wildly from day to day,
>which makes it hard to optimize disk space and creates a lot more work
>and expense for admins.  Many smaller ISP's simply don't carry binaries
>any more, but too much of that results in binaries showing up in other
>groups, creating a new problem.  A lot of this could be eliminated if
>everyone used authenticated NNTP, but that goes against the grain of
>many Usenetters.
>
>It is the volume of messages that's driving the consolidation toward
>providers like Supernews among small to medium ISP's.

Like it's entirely impossible to even imagine not supporting every single
useless binary group that everyone else does?  For Christ's sake,
there's no reason whatsoever (save those mentioned above, incompetence
or ignorance) that this isn't a local problem.

>Another problem with NNTP is the "web mentality" of new users aided and
>abetted by the "lock in mentality" of vendors.  New users generally
>don't "get" Usenet and since it isn't a proprietary technology most
>software vendors aren't interested in explaining it.  They are more
>interested in selling proprietary versions like Exchange and Domino or
>closed web-based solutions.  Here's where the avarice of Supernews comes
>in.

Well, you've roughly traveled through quite a bit of the silliness I'm
referring to.  Why is it you were disagreeing with me, again?  ;-)

[Don't worry, Bob.  I know damn well it's my supercilious tone.]

>> >> A bigger operation avoids this waste, but they still have to
>> >> provide per-user resources too.
>
>> >What per-user resources?  That's the fallacy; there are no per-user
>> >resources.  
>
>Yes there are.  Each NNTP reader creates a connection and a process on
>the server.  This uses memory and cpu time and creates IO traffic to the
>disks and network.  All of these represent costs in that the systems
>that support readers must be sized according to the expected number of
>users (as opposed to feeder machines that must be sized by the volume of 
>traffic and expiration policy).

My newsreader has the ability to connect many times to the server, thus
replicating, with a single user, the problem you insist is per-user.
These are not per-user costs; they are resource limitations which
roughly scale to how *busy* a server is, not how many users it has.
Most of the bandwidth is on the feed side, and that uses the storage
resources as well.  The per-user costs are almost negligible in
comparison.  It's Usenet which is big; ISPs are small (and still they
generally support only two or three users who use NNTP for every
hundred subscribers they have).
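
As a concrete example of what that looks like from the reader side,
here's a sketch (hypothetical host name) of one client opening several
NNTP sessions at once; the server's load tracks the number of open
connections and how busy they are, not the number of subscribers on the
books.

import nntplib

HOST = "news.example-isp.net"   # hypothetical server

# One user, four simultaneous server sessions -- each one costs the
# server a socket and some memory, exactly the "per-user" resources
# in question, except they scale with connections, not with users.
sessions = [nntplib.NNTP(HOST, readermode=True) for _ in range(4)]

for i, s in enumerate(sessions):
    resp, count, first, last, name = s.group("comp.os.linux.advocacy")
    print("session %d: %s has %d articles" % (i, name, count))

for s in sessions:
    s.quit()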

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:30 GMT

Said Ayende Rahien in comp.os.linux.advocacy on Sun, 24 Dec 2000 
>"Shane Phelps" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>>
>>
>> Adam Warner wrote:
>> >
>> > Hi Ayende,
>> >
>> > > > > > Anyone else encountered users doing this rm /tmp ?
>> > > >
>> > > > Now would be a good time to ask: what's wrong with deleting files in
>the
>> > > > /tmp directory?
>> > >
>> > > The wrong thing is when you delete the directory itself.
>> >
>> > OK, thanks. That would obviously cause problems for all programs trying
>to
>> > access/create temporary files.
>> >
>> > Adam
>>
>> Pedant point:
>>
>> rm /tmp isn't going to do anything at all unless /tmp is a symbolic link.
>>
>> The rm command won't remove directories unless it's given the -r
>> (recursive) option
>
>Murphy's law dictates that morons will always be able to learn how to
>make life miserable, but not enough to stop them doing it.
>
>> The DEL command in MS-DOS used to remove directories as well as files,
>> but I don't think that was recursive. Wasn't there a DELTREE command
>> for recursive removal?
>
>deltree is for recursive removal, indeed.
>But del is not capable of removing directories.
>You need rd to remove a directory, and it needs to be empty (no files
>or sub-directories) for it to work.

Well, that certainly brought us full circle, eh, what?

Until DOS 6, I believe, there was no way to remove both a directory and
its files, nor to do recursive removal of files or directories, with one
command.  This was a major reason people saw Windows as 'superior', back
in the days when any current user knew DOS (provisionally).  You could
delete a whole subtree with point-and-click ease (after the appropriate
number of excessive warnings).  In DOS, you needed a combination of del
and rd, or deltree, which, as I mentioned, didn't appear until DR-DOS
provided a little (very little) competitive development.  Couldn't just
fix del to support recursive and directory operations; that would be too
hard.  Better to add another command.
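
For anyone following along today, the same three behaviours can be
demonstrated with Python's standard library, which makes a handy
neutral stand-in for del/rm, rd/rmdir, and deltree/rm -r:

import os, shutil, tempfile

top = tempfile.mkdtemp()                       # throwaway directory
open(os.path.join(top, "scratch.txt"), "w").close()

try:
    os.remove(top)       # like del or plain rm: refuses a directory
except OSError as e:
    print("remove:", e)

try:
    os.rmdir(top)        # like rd or rmdir: only an *empty* directory
except OSError as e:
    print("rmdir:", e)

shutil.rmtree(top)       # like deltree or rm -r: the whole subtree
print("rmtree removed it:", not os.path.exists(top))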

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:31 GMT

Said Tom Wilson in comp.os.linux.advocacy on Sun, 24 Dec 2000 13:56:08 
>"Chad C. Mulligan" <[EMAIL PROTECTED]> wrote in message
>news:ELw06.15239$[EMAIL PROTECTED]...
>
><snippage>
>
>> Mine are designed that way too.
>>
>> > The internal structure will typically be quite complex, with access
>> > to the "public" servers, internal web servers etc in their own DMZ(s)
>> > and so on - I'm more concerned with the external view for this example.
>>
>> And another reason Netcraft numbers should be taken with a grain of salt.
>> Hear that, Matt?
>
>I think this whole debate is meaningless. Netcraft statistics, like any
>other statistics, can be manipulated or outright distorted to fit ANY point
>of view and are therefore next to useless. The only yardstick any of us have
>is our own experiences with the platforms in question.

Another in the ongoing attempts by many contributors to advance an
argument from ignorance.  In fact, the Netcraft statistics cannot be
"manipulated" to fit the point of view that NT is a reliable server.
They are, in fact, rather useful on that question.  Outright
distortions, of course, we must leave to those who wish that the data
said something different.  Indeed, personal anecdotal experience is a
far less reliable yardstick, depending on the person involved and how
much regard they have for empirical facts.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:33 GMT

Said Erik Funkenbusch in comp.os.linux.advocacy on Mon, 25 Dec 2000 
>"Jim Richardson" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> If the OS is detected correctly, and the uptime returned for that system
>is
>> accurate. Then what diff does it make whether it is listed as a webserver
>or
>> firewall if what you are after  is uptime ?
>
>You're making the same mistake you've been making all along.  You're
>assuming that Netcraft will identify a web server with a firewall as the
>firewall, but that's not what happens in many cases (including my own).

And in your own case, Netcraft did not provide any uptime, correct?  It
identified that you were using a firewall, somehow, and this has
something to do, we must therefore presume, with how Netcraft gets its
figures.  Yet we must also presume that it depends on how the server or
firewall is configured, and you, who would seem to be in the best
position to do so, have not yet reverse-engineered precisely how they
derive uptime values by determining how your system managed to foil it.
For all those systems for which it does provide a web server OS
identification, we must again presume that at least a good number of
them are behind firewalls, and that it still got uptime figures for
them.

You are making the same mistake you have yourself been making all along;
you believe an argument from ignorance can settle the matter.  No matter
how many times you assume that it is not possible to know that the
uptime numbers match the OS designations provided by these statistics,
this has not been shown to be true, or even reasonable.

>Netcraft reports the server and OS as Linux, but it's getting its uptime
>data from my firewall, which is neither Linux nor Unix based (actually
>it's getting no uptime at all because my firewall doesn't give out that
>data).

You seem to think that the fact that no uptime number was reported is
insignificant.  Believe me, nobody's firewall "gives out" any data at
all, not even uptime.  You simply don't know how the uptime value is
derived AT ALL to begin with, and wish most strongly to assume that this
means that the empirical data is false.  It is a simple inductive
failure, not uncommon.  It should be easily corrected if you can get
past your "NT is a good server" conceptual glitch and think a little
harder.
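
For the record, the usual way a remote observer estimates uptime at all
is the TCP timestamp option (RFC 1323): the timestamp counter ticks at
an OS-dependent rate and, on many systems, starts near zero at boot, so
two probes give you both the rate and the time since boot.  A sketch of
the arithmetic, with made-up sample values (I'm not claiming this is
exactly Netcraft's recipe, only the commonly understood technique):

# Two probes of the same host, ten minutes apart; tsval is the remote
# TCP timestamp observed in each reply.  All numbers are invented.
t1_wall, tsval1 = 0.0,   103680000
t2_wall, tsval2 = 600.0, 103740000

rate = (tsval2 - tsval1) / (t2_wall - t1_wall)   # ticks per second
uptime_days = tsval2 / rate / 86400.0

print("estimated tick rate: %.0f Hz" % rate)           # ~100 Hz
print("estimated uptime:    %.1f days" % uptime_days)  # ~12 days

# A host or firewall that disables (or randomizes) TCP timestamps
# simply yields no estimate, which is one way a box shows up with an
# OS guess but no uptime figure.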

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:34 GMT

Said Erik Funkenbusch in comp.os.linux.advocacy on Mon, 25 Dec 2000 
>That's exactly the point.  Without inside knowledge of the site, you don't
>know if the statistics are correct or not.  My point is that if a firewall
>can interfere with providing uptimes, then it could also give inaccurate
>ones.  Or are you suggesting that this is impossible?

Hardly.  We are not assuming it is correct, though, and so it does not,
with a wave of the hand, invalidate Netcraft's numbers!  You mistake
knowing whether the statistics are correct for wanting them to agree
with an arbitrary dataset.  We know the statistics are correct because
they are consistent; your own testing and the variety of other
explanations you've been provided (including how to transcend counter
roll-over, how to recognize invalid data due to clustering, and why
mistaking the role of the system is irrelevant) indicate they don't
misreport which OS they're collecting statistics for.  Granted, it is
not an entirely conclusive test, but so far it is only your
foot-stomping which leads any of us to believe that Netcraft's numbers
might be questionable to begin with.  Unless you can show they are
inaccurate, there is no reason to assume they are.  You haven't shown
any inconsistency, inaccuracy, or other misrepresentative
characteristics of these values at all; you've just repeated the same
argument from ignorance for weeks now, grinding through a number of
silly arguments in which you've shown your propensity to grasp any
straw to insist that NT is a reliable, robust, or even competitive
operating system.

>I can't give an example, because I don't have inside information on all the
>sites that netcraft is reporting, only my own.

So try a small handful; we hardly need information on all the sites
Netcraft is reporting on.  Figure out how it's getting uptime, and then
see if you can "fool" it.  We'll be happy to consider whether that is a
likely natural occurrence, and temper our reliance (none, to speak of,
outside this argument, as our low opinion of NT is not merely based on
Netcraft's statistics) on these numbers accordingly.  Better yet, strike
a blow for freedom of information and describe their method in detail.
Unless you had to agree not to do that to get on their poll list; I
don't know, but it's possible.


-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:36 GMT

Said Erik Funkenbusch in comp.os.linux.advocacy on Tue, 26 Dec 2000
12:18:35 -0600; 
>"Adam Ruth" <[EMAIL PROTECTED]> wrote in message
>news:92afhd$23sm$[EMAIL PROTECTED]...
>> But of course the burden of proof is on you and your claim, because of the
>> impossibility of proving a negative proposition (which is that there are
>no
>> sites with the problem you suggest).  Since you claim to have much
>> experience with complex web server setups, perhaps you could tell us about
>> some so we could see if Netcraft's numbers are right.
>
>You're proving my point.  By the inability to prove that Netcraft is wrong,
>you are proving that you don't know if they're right.
>
>
>

Holy christ.  That post ought to be put in a museum.  Really; "since you
can't prove them wrong, you don't know if they're right."  Yow.  That's
a friggen' masterpiece.  Convoluted, indirect, utterly irrefutable, and
completely meaningless.  Undoubtedly the most complete argument from
ignorance possible.  The post-modernists might even give you a grant so
you can publish more baloney.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.advocacy,comp.os.ms-windows.nt.advocacy
Subject: Re: Conclusion
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:37 GMT

Said Stuart Fox in comp.os.linux.advocacy on Wed, 27 Dec 2000 18:12:05 
>In article <92c13p$m11$[EMAIL PROTECTED]>,
>  sfcybear <[EMAIL PROTECTED]> wrote:
>>
>> Here is independent evidence that supports my claim that w2k is
>unstable:
>>
>> http://uptime.netcraft.com/up/graph?site=www.microsoft.com
>>
>> Even MS has problems keeping W2K up and running! Just over 12 days
>> average uptime!
>
>Or is that 12 days average of their firewall?  Or could it be set of
>clustered servers?

Well, whatever it is, it's running W2K, so it doesn't really matter.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Subject: Re: Is Windows an operating system like Linux?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:38 GMT

Said Tim Smith in comp.os.linux.advocacy on 24 Dec 2000 16:47:40 -0800; 
>On Sun, 24 Dec 2000 15:45:33 GMT, T. Max Devlin <[EMAIL PROTECTED]> wrote:
>>middleware.  Your impulsive assignment of everything to "Windows"
>>because that's the name on the box and in the Microsoft markitecture
>>diagrams is quite illustrative of the problem.
>
>No, I assign those to Windows because (1) I've read the results of
>people who have disassembled all those components and reported on how
>they work, 

() Who all referenced as "Windows" what would more credibly be called
'DOS', simply because that is how Microsoft identified it.  Several of
them even mentioned this.

>and (2) as part of developing Windows block device drivers,
>I've had occasion myself to trace through most of those parts of Windows
>in SoftICE.

Are you sure that was Windows?  What makes it Windows, and not DOS 7?
The fact that it wasn't in DOS 4?  Or the fact that you can't tell the
difference between the name on the box and the codebase and APIs inside?

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Subject: Re: Is Windows an operating system like Linux?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 28 Dec 2000 14:51:39 GMT

Said John W. Stevens in comp.os.linux.advocacy on Tue, 26 Dec 2000
17:05:40 -0700; 
>"T. Max Devlin" wrote:
>> 
>> Said Tim Smith in comp.os.linux.advocacy on 23 Dec 2000 16:37:31 -0800;
>> >Let's check Win95 against these:
>> >
>> >Scheduling:    Handled by Windows, not DOS.
>> 
>> DOS doesn't have scheduling.
>
>Actually, it did.
>
>DOS was designed to be a single tasking OS.  IOW, DOS scheduled one and
>only one task (or, "process", if you prefer), allocating it 100% of all
>non-kernel cycles.

That is not scheduling.

>The fact that some bright programmers later wrote "task scheduling
>extensions" for DOS, including TSR's, and (*ahem*) my own MPI
>executive, does not invalidate the point; it simply illustrates a very
>basic point: at the lowest, most basic level, an OS is a library.

And at the highest, most practical level, it is a box with a name, and
possibly a price tag.  Everything in between is the point of contention,
which would include an understanding that the nature of "software" is
somewhat amorphous, but that does not invalidate all discussion of
rational engineering and product design.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***

Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to comp.os.linux.advocacy.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
