Linux-Advocacy Digest #971, Volume #29           Tue, 31 Oct 00 22:13:03 EST

Contents:
  Re: Ms employees begging for food (T. Max Devlin)
  Re: A Microsoft exodus! ("Christopher Smith")
  Re: Ms employees begging for food (T. Max Devlin)
  Re: Ms employees begging for food (Clayton O'Neill)
  Re: Ms employees begging for food (chris ulrich)
  Re: I think I'm in love..... (JoeX1029)
  Re: 2.4 Kernel Delays. (Bob Hauck)
  Re: Linux in approximately 5 years: (Bob Hauck)
  Re: Why don't I use Linux? (Bob Hauck)
  Re: Why Linux is great (Terry Porter)
  Considering Linux for personal use ("C. Nolan McKinney")
  Re: Ms employees begging for food (Peter da Silva)

----------------------------------------------------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Tue, 31 Oct 2000 20:58:00 -0500
Reply-To: [EMAIL PROTECTED]

Said chris ulrich in comp.os.linux.advocacy; 
>In article <[EMAIL PROTECTED]>,
>T. Max Devlin  <[EMAIL PROTECTED]> wrote:
   [...]
>  At UCR in the Department of Mathematics, they had (until about 8 
>months ago) a several hundred station network using every manner of 
>shared ethernet media invented over the past several decades.  They
>had several servers connected to each other via 10bT, connecting 
>to several other hubs that would connect to either 10b2 or 10b5
>coax, sometimes connecting to stations at the remote ends of the
>network via multiport transceivers.   The network spanned several
>buildings and had no bridges or switches anywhere at all.

Sounds interesting.  Might I assume it was Synoptics, mostly?

>  This environment had several labs of X terminals, lots of PCs,
>and 4 servers each cross mounting file systems for home directories
>and applications. (I didn't design it, I just took care of it as
>best I could).
>
>  Your statement that the overhead of ethernet costs 70% of the
>bandwidth (ie the collisions and backing off from collisions and
>such) is simply not what I observed on this network day in and
>day out for years.[...]

I never made any such statement, nor did I say something which might
accurately be mistaken for that.  So you simply (and entirely)
misunderstand what my statement was, I must presume.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***


======USENET VIRUS=======COPY THE URL BELOW TO YOUR SIG==============

Sign the petition and keep Deja's archive alive!

http://www2.PetitionOnline.com/dejanews/petition.html


====== Posted via Newsfeeds.Com, Uncensored Usenet News ======
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
=======  Over 80,000 Newsgroups = 16 Different Servers! ======

------------------------------

From: "Christopher Smith" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy,comp.sys.mac.advocacy,comp.os.os2.advocacy,comp.unix.advocacy
Subject: Re: A Microsoft exodus!
Date: Wed, 1 Nov 2000 11:47:54 +1000


"Weevil" <[EMAIL PROTECTED]> wrote in message
news:VyHL5.1544$[EMAIL PROTECTED]...
>
> > How hard would it be to write something like this for unix/linux?
> > If I understand the way those viruses worked, you open the attached file,
> > it reads the address book and sends it to the first 50 people there, right?
> > Can't you do the same in unix? (I'm asking, not insulting. I want to know
> > the answer.)
>
> It used to be a truism that you can't get a virus merely by reading your
> email.  Microsoft changed all that (to the shock of most security-conscious
> observers) when they added Visual Basic macro capability to their email
> program.  Any of their apps which use such macros (e.g. Word) are obviously
> vulnerable.

This is false.  You have to *deliberately* double click on the attachment
and answer yes to the resultant dialog box (which defaults to no) to
open/run it.  It's a long way from getting a virus "just by reading email".

> How hard would it be to write such a thing for unix/linux?  Impossible, as
> far as I'm aware.

Not at all.  If anything it'd be easier given the abundance of Unix
scripting tools, their power and the inherent scriptability of the Unix
environment.

> I'm no authority on everything that's available out there
> for unix/linux, but I don't know of any email applications that blindly
> execute attachments.

Neither does Outlook.  The basic principle is identical - all you need to do
is attach a shell script and convince someone to execute it in a shell.
Most mailers let you pipe attachments straight from the email to any program
you want, so all you need is a message body that says something like:

"Press |, then type /bin/sh and hit return to see Natalie Portman obey your
every wish."

And an attachment like:
#!/bin/sh
rm -rf /* > /dev/null &
echo "Loading up, please wait...."

And the end result will be largely the same (all that user's files deleted,
the whole system nuked if they're running as root - which a _lot_ of people
do, even experienced ones).

> Assuming that there are email (or other types of) programs for Linux that do
> blindly execute attachments, there is still another layer of security the
> trojan must get past.  Unless the user is running as root, the only damage
> that can be done to the system is to the user's own files.

Which as far as the user is concerned, is just as bad as the entire system
getting nuked.  Sheesh.  Reinstalling a system is _easy_ (albeit time
consuming).  Rewriting that 1000 page thesis is major heartache.

And of course Win2k has as much protection of this sort as Unix does.

> So...if there is an app that blindly executes attachments, and if someone is
> naive enough to use it, and if they use it as root, then yes, they're just
> as vulnerable as Windows users.

Outlook doesn't blindly execute attachments.  Never has.

The average dumb user is just as vulnerable under either OS.  If they're
dumb enough to open and execute attachments without looking at them, then
they're quite dumb enough to be running as root.

The only "protection" Unix has is that a) a much larger proportion of its
userbase would be clueful enough _not_ to run an attachment without checking
it out and b) no-one would bother writing such a trojan, due to reason (a).

Unix is "protected" from viruses for much the same reason MacOS is - lack of
interest.  If Linux becomes as popular as many seem to want it to, *this
will change*.  Not only as more virus writers and the like become
interested, but also as more distribution and software developers start
adding all the little features, loved by end-users, that make Windows and
MacOS quicker, easier and simpler to use.  Good (ie effective) security is a
major pain in the arse, which is why so few people practice it.  It's
forever getting in the way, making simple tasks more difficult, etc.

[chomp]



------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Tue, 31 Oct 2000 21:14:50 -0500
Reply-To: [EMAIL PROTECTED]

Said Weevil in comp.os.linux.advocacy; 
>T. Max Devlin <[EMAIL PROTECTED]> wrote in message
>> Said Les Mikesell in comp.os.linux.advocacy;
>> >
>> >"T. Max Devlin" <[EMAIL PROTECTED]> wrote in message
>> >news:[EMAIL PROTECTED]...
>> >> Said Bernd Paysan in comp.os.linux.advocacy;
>> >>    [...]
>> >> >There are certainly crude parts in the Unix interface; but at least
>the
>> >> >free software community has successfully smoothed the edges. Unix is
>not
>> >> >the best of all possible OS, but it is good enough. It is friendly
>> >> >enough to programmers (though I don't understand why I have to open
>> >> >sockets with htons and htonl instead of just
>> >> >open("/ports/tcp/www.foo-bar.com/http");
>> >>
>> >> If I might contribute my perspective, it is because dealing with a
>> >> socket within Unix's "everything is a file" paradigm would be a Bad
>> >> Thing, because it would mask, without abstracting correctly, the fact
>> >> that a socket is not strictly a system resource.
>> >
>> >Huh?  A socket is a file once you get past the magic of opening it.
>>
>> Once it is "opened" means once it exists.  The existence of a socket
>> requires someone on the other end.
>
>Not exactly.  A socket-based connection obviously requires parties on both
>ends, but the sockets themselves exist before the connection is made.  A
>"socket" is merely a useful abstraction for one end of a network
>communication.  The communication itself might never take place -- that
>doesn't prevent the creation of a socket that expects the communication.

Well, let me see if I can say this in a way which doesn't sound wrong
and makes sense...

I think you are mistaken.  Further, I think your mistake well
illustrates precisely the question which brought it up and my response.
From a system programmer's point of view, it makes perfect sense, all
that you said, about what sockets are and how they behave and are used.
From anyone else's perspective, particularly actual *network* (as
opposed to "network system") people *and* end-users, if they ever become
concerned, you are way off base.

From a networking perspective, a socket is the connection itself.
Concatenation of the source IP and port and destination IP and port is
used to identify these transient "virtual" connections, and through them
data flows.  That much is the abstraction, at least from some peoples'
limited perspective.  A socket cannot "expect communication", it is the
software (client or server) listening on the socket which might or might
not be performing or waiting for any specific transport of packets.

So the socket appears to system programmers as "one end of a network
communication", but it is abstracted from the communication itself.  But that
would make sense, because the system programmer is concerned with one
end, and essentially only one end, of a network communication.  He may
be said to be 'concerned' about what the software on the other end is
doing, but it is, pointedly, outside the control of the software he is
programming.  The network people aren't rightfully concerned about what
goes through a socket, or whether there is anything going through it.

Or should I say "the router people", to identify that subset of 'network
people' which are concerned with what I refer to as the logical level of
networking.  The physical "network" people, LAN and WAN people, don't
even know or care what a socket is.

>> Therefore, opening a socket cannot
>> effectively be abstracted as opening a file,
>
>This may or may not be true, but if it is true, it's not for the reason you
>thought since your premise was mistaken.  Sockets may exist whether there is
>"someone on the other end" or not.

You don't seem to realize; TCP/IP uses connectionless communications.
There is always someone at the other end, theoretically.  In reality,
there is never "someone on the other end", in a synchronous type of way.
Except as an abstraction on the software level of connectivity, where
your perspective originates.  My premise is not mistaken; despite the
desire of system programmers to treat sockets as files, it would be
inefficient to do so, because they are not strictly local resources.
Frankly, it just isn't for the software to care if there is an "other
end", if there is anyone there, or if they are expecting communications;
you cannot make assumptions about anything which isn't a local resource.
Let the network deal with opening sockets, even if you then use them as
if they were files (and, again, simply relying on "the network" to
handle anything that isn't non-local, like transporting the packets).
If you try to do the network's job for it, you'll only end up shooting
yourself in the foot, or at least stepping on somebody's toes.

>Personally, I don't see why opening a socket cannot be considered the same
>on some level as opening a file.

Well, we return to a discussion I misplaced recently about what "some
level" is necessary to consider something a real abstraction, or simply
a useful metaphor.  I've tried to explain it, but I always do have
trouble explaining these things.  Why opening a socket cannot be
considered the same as opening a file is because a socket needs to be
*virtually*, if not 'really' "set up" as a non-local resource, since it
relies on and requires network connectivity to be a socket.  At some
level.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***



------------------------------

From: [EMAIL PROTECTED] (Clayton O'Neill)
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: 1 Nov 2000 02:22:22 GMT

On Tue, 31 Oct 2000 21:14:50 -0500, T. Max Devlin <[EMAIL PROTECTED]> wrote:
|Said Weevil in comp.os.linux.advocacy; 
|>This may or may not be true, but if it is true, it's not for the reason you
|>thought since your premise was mistaken.  Sockets may exist whether there is
|>"someone on the other end" or not.
|
|You don't seem to realize; TCP/IP uses connectionless communications.
|There is always someone at the other end, theoretically.  In reality,
|there is never "someone on the other end", in a synchronous type of way.
|Except as an abstraction on the software level of connectivity, where
|your perspective originates.  My premise is not mistaken; despite the
|desire of system programmers to treat sockets as files, it would be
|inefficient to do so, because they are not strictly local resources.
|Frankly, it just isn't for the software to care if there is an "other
|end", if there is anyone there, or if they are expecting communications;
|you cannot make assumptions about anything which isn't a local resource.
|Let the network deal with opening sockets, even if you then use them as
|if they were files (and, again, simply relying on "the network" to
|handle anything that isn't non-local, like transporting the packets).
|If you try to do the network's job for it, you'll only end up shooting
|yourself in the foot, or at least stepping on somebody's toes.

I think the two of you are talking about different things.  Weevil is
talking about BSD style sockets as far as I can tell and they inherently
have little to do with TCP/IP and certainly can exist without being
connected to anything.  If I do

fd = socket(AF_INET, SOCK_STREAM, 0);

then I've got a socket that's not connected to anything.

He's talking about software abstraction and you're talking about network
abstraction/policy as near as I can tell.

------------------------------

From: [EMAIL PROTECTED] (chris ulrich)
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: 1 Nov 2000 02:18:45 GMT

In article <[EMAIL PROTECTED]>,
T. Max Devlin  <[EMAIL PROTECTED]> wrote:
%%Said chris ulrich in comp.os.linux.advocacy; 
%%>In article <[EMAIL PROTECTED]>,
%%>T. Max Devlin  <[EMAIL PROTECTED]> wrote:
%%   [...]
%%>  At UCR in the Department of Mathematics, they had (until about 8 
%%>months ago) a several hundred station network using every manner of 
%%>shared ethernet media invented over the past several decades.  They
%%>had several servers connected to each other via 10bT, connecting 
%%>to several other hubs that would connect to either 10b2 or 10b5
%%>coax, sometimes connecting to stations at the remote ends of the
%%>network via multiport transceivers.   The network spanned several
%%>buildings and had no bridges or switches anywhere at all.
%%
%%Sounds interesting.  Might I assume it was Synoptics, mostly?

  It was mostly whatever someone bought at the time.  Large parts
were DEC (the 10b5, big 10b2 multiport repeaters and most of the 
multiport transceivers) and cabletron (some of the 10bT and multiport
transceivers) and some noname minihubs.  Most of the stations were
sun workstations, a random collection of PCs, and a couple of
decstations.

%%>  This environment had several labs of X terminals, lots of PCs,
%%>and 4 servers each cross mounting file systems for home directories
%%>and applications. (I didn't design it, I just took care of it as
%%>best I could).
%%>
%%>  Your statement that the overhead of ethernet costs 70% of the
%%>bandwidth (ie the collisions and backing off from collisions and
%%>such) is simply not what I observed on this network day in and
%%>day out for years.[...]
%%
%%I never made any such statement, nor did I say something which might
%%accurately be mistaken for that.  So you simply (and entirely)
%%misunderstand what my statement was, I must presume.

  You said something along the lines of "truly available ethernet
bandwidth is 30% of wirespeed."  and I think this is the point that
everyone is disputing.  In my observation, even a network that is chock
full of collisions and tinygrams is still able to dish out 800-900k
transfers to those that want to move big files around, all the while
the smaller stuff happens without any problem at all (telnet sessions,
billions of arps, random LAT traffic, printing, bootp crap, etc).  
  To quote you:

"T. Max Devlin" wrote:
> That would depend on what you consider "ethernet speeds".  The correct
> throughput rate to measure on an Ethernet is comparable to arcnet.
> Ethernet's CSMA/CD relies on statistical access to the media, and is
> only really efficient at nominally 10% of the "bandwidth speed".

and 
> You are under the impression that throughput tests would involve
> measuring the utilization of the Ethernet.  That is specifically what I
> meant to point out would be of dubious value.   I have no interest what
> an ethernet can do in isolation or in thought experiments.  If you're
> going to design a [complex] network which includes Ethernet, you should,
> as Robert Metcalfe intended when he designed the thing, consider your
> "bandwidth" to be a nominal 10% of the bit rate.  Unless you're not
> using shared ethernet, and hardly anyone uses shared media these days.
> And then they wonder why their shared switches don't make all of their
> "network" problems magically disappear, like the sales droid promised.

and
> Another reason I would call 10% *nominal throughput*, in contrast to
> that 30% figure, is the observation that at the time of the creation of
> Ethernet, microcircuit components for LAN-style transceivers generally
> used a cost/performance break point of 1 to 2.5 Megabits per second.  In
> contrast, the CSMA/CD method required very fast transmission bit rate,
> and the components became much more expensive.  Yet, with the
> logarithmic response curve, shared media ethernet wasn't expected to
> reach even 50% utilization.  It appears that in order to ensure a
> nominal throughput on the order found in similar designs, a 10 Megabit
> NIC and media were necessary.

  So it seems to me that you are saying that 'although the wire speed is
10 megabit/sec, the actual, usable bandwidth on a real-world network with
lots of traffic is something dramatically less than that'. 

  This has simply not been my experience, even in very complex, very
physically large, *very* poorly planned ethernet networks with lots of 
real world traffic and users.  This isn't to say that there were no
problems in the network I took care of (the cable over the flickering
fluorescent lamp did not generate 802.3 compliant packets), but for the
most part, when you make the above listed statements, I have to say that
they are flat out wrong.

  It is entirely possible that I misunderstand your assertions, but if
that is the case I am not only not alone, but in the company of everyone
else who has responded to you.  If I have misunderstood you, you should
clarify what you mean by things like "Yet, with the logarithmic response
curve, shared media ethernet wasn't expected to reach even 50%
utilization." and other similar assertions.
chris

------------------------------

From: [EMAIL PROTECTED] (JoeX1029)
Subject: Re: I think I'm in love.....
Date: 01 Nov 2000 02:39:51 GMT

>Now if I can just find some helpful people that like to shoot skeet.
>I've got a whole stack of MS CD's that we could use.

Hey *I* shoot skeet.....

------------------------------

From: [EMAIL PROTECTED] (Bob Hauck)
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy
Subject: Re: 2.4 Kernel Delays.
Reply-To: bobh{at}haucks{dot}org
Date: Wed, 01 Nov 2000 02:43:42 GMT

On Tue, 31 Oct 2000 06:30:07 GMT, R.E.Ballard ( Rex Ballard )
<[EMAIL PROTECTED]> wrote:

>USB, Firewire, interrupt binding, process binding, and large file
>support (for video via broadband).

The iPaq has multiple CPU's and multiple NIC's?  News to me.  The large
file support is starting to become an issue, but I don't think that it
is a showstopper for an appliance.


>If there were an official version of 2.2 that supported these
>features, it would by more time for 2.4.

Well, I guess you'd have to ask Alan about that.


>If Linus decided to release a version of 2.4 that supported these
>features with the ability to support other promised features later on,
>that would be acceptable.

Maybe you could talk him into that.  A willingness to compromise might
go a long way.  Not issuing press releases designed to increase the
pressure might also be a good idea.

The thing I'm finding objectionable in all of this is the idea that
because some corporation "needs" some feature to protect the stock
price, that it must therefore be released whether Linus thinks it is
ready or not.  That the "more important" the needer the more
cooperative Linus is expected to be.  Maybe 2.4 is "good enough", but
setting a precedent that "pressure works" is a recipe for disaster in
the long run.


>Backports are, by nature, unsupported by anyone.

Huh?  The distributions that have such backports support them.


>Here's the question.  Would you borrow $1 billion from you friends
>and family, knowing that you might blow the entire amount because
>Linus didn't deliver within the market window.

I would hope that I'm not dumb enough to do that.  I don't control
Linus, so it would be idiotic to do what you propose.  OTOH, if I had a
fallback position of shipping a 2.2 kernel, then it wouldn't be as much
of a risk, now would it?


>I very much hope you are wrong.  I would hope that there are enough
>people participating in Linux who wanted to see Open Source accepted
>commercially and displacing proprietary and exclusive technology
>that they would actually be concerned about the needs of corporate
>customers including OEMs.

I certainly don't mind seeing Open Source accepted commercially.  That
isn't why I started using it, but it can be a good thing.  As long as
we don't get to a situation where the stock prices of companies start
determining release schedules.  That way lies disaster.  Release
schedules should be determined by when things are ready, period.  We
can all argue over when that is, that's part of the process too. 
Expecting more than that, trying to set dates in advance and having a
freak-out when they slip, just isn't going to work in the long run.  It
doesn't even work for commercial software where you have actual
authority over the developers, just ask Microsoft.

Everyone also needs to understand that "release early and often" is
going to mean that we have things happen like the NFS problems in 2.2.
It may therefore be a bad idea to base high-pressure products on those
early releases.  This is true of commercial products too of course, but
it is easier to pretend it isn't when there's only limited public
disclosure.


-- 
 -| Bob Hauck
 -| To Whom You Are Speaking
 -| http://www.haucks.org/

------------------------------

From: [EMAIL PROTECTED] (Bob Hauck)
Subject: Re: Linux in approximately 5 years:
Reply-To: bobh{at}haucks{dot}org
Date: Wed, 01 Nov 2000 02:43:46 GMT

On Tue, 31 Oct 2000 18:06:06 +1100, Bennetts family
<[EMAIL PROTECTED]> wrote:

>Even though almost all Winvocates hate the paperclip, this creature is a rare
>exception.

They should kill Clippy, but keep the search engine he fronts for. 
That actually does a pretty decent job.


-- 
 -| Bob Hauck
 -| To Whom You Are Speaking
 -| http://www.haucks.org/

------------------------------

From: [EMAIL PROTECTED] (Bob Hauck)
Subject: Re: Why don't I use Linux?
Reply-To: bobh{at}haucks{dot}org
Date: Wed, 01 Nov 2000 02:43:48 GMT

On Tue, 31 Oct 2000 22:32:29 GMT, Pete Goodwin
<[EMAIL PROTECTED]> wrote:

>Delphi has a fairly powerful editor, which you can switch to a number of 
>emulations of other editors.

Emacs has that.  I betcha it has more features than Delphi's too.


>Delphi has an integrated compiler, linker, resource compiler and debugger.

Emacs can run external compilers and collect the error messages, which
you can then click on to visit the file.  It also has a debugger
interface, although I'm partial to standalone debuggers.


>Delphi uses coloured syntax editing.

Emacs has that.  For dozens of languages.  It also auto-indents (and I
mean it understands syntax, not just "indent to previous level").


>Delphi can handle resource name changes - it updates the code where it 
>knows how to.

I'm not clear on what you mean here.  If you're talking about Windows
resource files, that would be a Windows-specific thing don't you think?


-- 
 -| Bob Hauck
 -| To Whom You Are Speaking
 -| http://www.haucks.org/

------------------------------

From: [EMAIL PROTECTED] (Terry Porter)
Subject: Re: Why Linux is great
Reply-To: No-Spam
Date: 01 Nov 2000 02:55:16 GMT

On Tue, 31 Oct 2000 23:57:37 GMT, George Richard Russell
 <[EMAIL PROTECTED]> wrote:
<snip>

>Every few years, Unix gets another GUI. Its a shame the cli isn't
>replaced / improved as often.
>
<snip>

There are reasons for the cli George, low overhead making the remote
admin of Unix easy, is one.

Your lovely high-overhead GUIs will always be less effective that way (in
os's where remote admin is possible at all; this naturally excludes any
Windows os except NT and Win2k, and the Mac) until we are all using 2000 MHz
cpus and have fibre everywhere.



>George Russell


-- 
Kind Regards
Terry
--
****                                              ****
   My Desktop is powered by GNU/Linux, and has been   
 up 2 weeks 2 days 22 hours 22 minutes
** Registration Number: 103931,  http://counter.li.org **

------------------------------

From: "C. Nolan McKinney" <[EMAIL PROTECTED]>
Subject: Considering Linux for personal use
Date: Wed, 01 Nov 2000 03:00:18 GMT

I have always been a big fan of DOS.  I used a 286 computer for ten years
before I broke down and bought a 586 winbox.  I expected a quantum leap, but
the new computer is way slower.  I am disappointed with Windows98; I think I
just don't like graphical interfaces, and I especially hate that the mouse
(dirty rat!) has to be used all the time.  I also have a 386 laptop that I
use to type and email when I'm away from home.  I'm thinking about switching
both computers to linux just to get away from Microsoft, because as far as
Microsoft is concerned, my laptop doesn't even exist.  I have heard that
linux uses a lot less computing power than windows, so I am very interested.

I would like both computers to boot up to something like a DOS shell, a
simple menu where I can scroll to the program I want to use and hit enter
and be there.  Can I do this? Is there linux software that will do word
processing and email lightning fast on my 386 laptop?  Will it be harder to
set up than a DOS shell?  Unfortunately my knowledge of the inner workings of
computers is limited.

Thanks,
Nolan



--
_____________________________________
C. Nolan McKinney
http://home.att.net/~c.nolan.mckinney/



------------------------------

From: [EMAIL PROTECTED] (Peter da Silva)
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: 1 Nov 2000 02:54:39 GMT

In article <[EMAIL PROTECTED]>,
T. Max Devlin  <[EMAIL PROTECTED]> wrote:
> No, it is the results of my research and experience, which I'm going to
> have to point out is not limited to only examining the ethernet itself,
> but dealing with the "whole network".

Even the crappy ethernets we built using thicknet segments and Intel's
crummy 911 repeater boxes between Xenix-286 systems and VAXes got better
than 1 Mbps throughput on a shared ethernet. I don't care what your
research and experience says, it's completely out of line with my
experience.

-- 
 `-_-'   In hoc signo hack, Peter da Silva.
  'U`    "Milloin halasit viimeksi suttasi?"

         Disclaimer: WWFD?

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
