Linux-Advocacy Digest #963, Volume #29 Tue, 31 Oct 00 13:13:07 EST
Contents:
Re: Ms employees begging for food (T. Max Devlin)
Re: Ms employees begging for food (T. Max Devlin)
Re: Pros and Cons of MS Windows Dominated World? (T. Max Devlin)
----------------------------------------------------------------------------
From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Tue, 31 Oct 2000 12:26:07 -0500
Reply-To: [EMAIL PROTECTED]
Said Les Mikesell in comp.os.linux.advocacy;
>
>"T. Max Devlin" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>
>> >It is just the simple trade-off between efficiency and generality that
>> >we see in every product. If you are very short-sighted you go
>> >for immediate efficiency and it ends up being wrong in the long run.
>>
>> Well, it wouldn't have been, if Novell and the LAN server market had
>> not been stymied by illegal monopolization, resulting in the common
>> attribution of Novell's hard times to generally vague and often nameless
>> "mistakes". It was the most efficient mechanism, and so by all rights
>> should have been a) profitable, and b) competitive, in encouraging
>> market opportunities for more optimized and/or flexible alternatives.
>
>Replace 'efficient' with 'limited' and you will see why this view is
>wrong.
Wrong for whom? A pie-in-the-sky pundit more than a decade later? If
you replace 'efficient' with 'limited', aren't you just *saying*,
without any support, "this view is wrong"?
>> >The difference in on-the-wire efficiency might be measurable on
>> >something as slow as arcnet, but makes no real difference at
>> >ethernet speeds.
>>
>> That would depend on what you consider "ethernet speeds". The correct
>> throughput rate to measure on an Ethernet is comparable to arcnet.
>> Ethernet's CSMA/CD relies on statistical access to the media, and is
>> only really efficient at nominally 10% of the "bandwidth speed".
>
>Please try this on an in-spec ethernet before making claims like
>that. Ethernet speed is wire speed.
Please stop trying to treat a single ethernet as if it is the entirety
of the network, and offering nothing but naked truisms to supposedly
make your point. "Ethernet speed is wire speed" if you define "Ethernet
speed" as "wire speed", and presuming you define those two terms
individually in any accurate, consistent, or practical way. I think
what you meant to say was "the signaling rate is the same as the bit
rate", maybe? In that case you would be wrong, though, so I suppose you
probably meant "the bit rate is the bandwidth".
The question isn't what happens on an "in-spec ethernet", but what
happens across one thousand ethernets, both in and out of spec, and each
being part of an arbitrary network which is comprised of far more than
the ethernet itself. The idea is to know how to design the thing, not
how to impress your friends if you're lucky enough to not have something
choke on you. How do you avoid making an ethernet into a choke point,
without any bit-twiddling or exorbitant monitoring necessary? Simply
recognize the fact that Ethernet is most efficient at a nominal 10%
utilization rate for provisioning purposes.
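To put rough numbers on that provisioning rule, here is a
back-of-the-envelope sketch in Python; the 10 Mbps segment and the
per-host load figure are purely illustrative assumptions, not
measurements:

  # Size a shared Ethernet segment so that its average offered load
  # stays near a nominal 10% of the raw bit rate.
  RAW_BIT_RATE_BPS = 10_000_000        # shared 10 Mbps segment
  PROVISIONING_TARGET = 0.10           # nominal utilization ceiling
  AVG_LOAD_PER_HOST_BPS = 40_000       # assumed average load per host

  budget_bps = RAW_BIT_RATE_BPS * PROVISIONING_TARGET
  max_hosts = int(budget_bps // AVG_LOAD_PER_HOST_BPS)
  print(f"budget {budget_bps/1e6:.1f} Mbps, hosts per segment {max_hosts}")

The point is not the particular numbers; it is that the design target
is the offered load on the segment, not the signaling rate.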
It's a hell of a lot cheaper, in the end, than replacing all the
equipment to go from shared to switched, though it is, I'll admit, only
a slightly different way to "throw bandwidth at the problem". Or,
rather, to ensure you have enough bandwidth not to have a problem. The
key is that it is a whole lot cheaper and easier way to get there.
>> A competitive company, like Novell was, and might still be (who can tell
>> these days?) relies intrinsically on alternatives and competition to
>> lead the way in what is necessary to expand their market opportunities.
>> Novell could have easily built a decent cross-protocol proxy engine into
>> Netware, and opened up all Novell LANs to Internet scalability with a
>> whole raft of good new products. Instead, they hobbled along trying to
>> get the shell-game based tunnelling crap they kept trying to develop,
>> because it was all the market could bear.
>
>I don't think there was a lack of customers for a native-IP-only
>netware, it just didn't happen.
My comments concern why it didn't happen. You also misunderstood what I
meant by "expand their market opportunities", slightly, if you
translated it to refer to "lack of customers".
>> >Again, the network implications are irrelevant compared to the
>> >memory footprint.
>>
>> To you, maybe. But the solution was a failure if it didn't account for
>> network implications, as much as any other, just as it wouldn't have
>> worked had the memory footprint been considered irrelevant. You confuse
>> your perspective with the priority. No requirement has priority, unless
>> you're misrepresenting the meaning of "requirement".
>
>Make up a bunch of ipx size packets and the equivalent ip size packets.
>Time them going over a wire.
I have passingly little interest in theoretical results. I understand
why it seems counter-productive to ignore whether IPX or IP packets will
"go over a wire" quicker, but I'm not willing to rely on thought
experiments to explain why this would be a limiting metric, considering
that in comparison to this timing difference, the overall variation in
real throughput makes this statistic picayune, to say the very least.
[...]
>> On the contrary. The idea that a host number is not arbitrary, or that a
>> network number is not arbitrary, likewise, is counter-productive.
>
>Routers can only deal with very small numbers of arbitrary numbers.
Routers don't deal with host numbers; router interfaces do, and they
have never been a bottleneck to any particularly great degree. In fact,
the IPX method is far more efficient for larger numbers of arbitrary
host addresses, since no ARP is necessary. Algorithmic translation of
physical to logical addresses is much more efficient in this regard.
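A minimal sketch of what I mean by algorithmic translation, assuming
the usual IPX convention of using the 48-bit MAC as the node number
(the network number and MAC below are made-up values):

  # The IPX node part is the interface MAC address itself, so mapping
  # the logical address to the physical one needs no ARP exchange.
  def ipx_address(network_number: int, mac: bytes) -> str:
      assert len(mac) == 6, "IPX node numbers are 48-bit MAC addresses"
      return f"{network_number:08X}:{mac.hex().upper()}"

  mac = bytes.fromhex("00A0C9112233")    # hypothetical interface MAC
  print(ipx_address(0x0000BEEF, mac))    # -> 0000BEEF:00A0C9112233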
[...]
>What I mean is that there is no structure to the cable numbers and
>routers had to learn them all. For IP, you assign addresses to the
>hosts and the network (cable) numbers just fall out as a result.
You're just playing a shell-game, though. In point of fact, you are
assigning both network and host number for IP; nothing "just falls out".
The two separate numbers all routing systems use, network number and
host address, are simply jammed into one representation with IP, using a
variable length subnet mask to identify where the network part stops and
the host part starts. This is efficient in many ways, but it certainly
places a load on the routers which is not necessary in IPX, where the
two numbers are simply shown as network:host. The IP subnet mask method
is more flexible and elegant. But routers have to learn all the network
addresses to begin with. It would even be possible to build into IPX
some method of "aggregating" network numbers, just as CIDR/VLSM and
"route summarization" did for IP, in 1997.
>For
>IPX you have to assign cable numbers and there is no way to
>make them aggregate sensibly. I'm not talking about an arbitrary
>number of addresses, but how to physically route a large number
>of addresses that are assigned arbitrarily.
Until 1997, there was no way to make IP network numbers aggregate
sensibly, either. Your understanding of routing is valid, for the most
part, but not entirely correct. IP simply has a smaller address space;
that is the only real issue which intersects with your comments.
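To make the earlier network:host point concrete, here is a small
sketch using Python's ipaddress module; all the addresses are
illustrative:

  import ipaddress

  # IP: one 32-bit value plus a variable-length mask to split it.
  iface = ipaddress.ip_interface("192.168.40.17/22")
  net = iface.network
  print("network part:", net.network_address, "prefix:", net.prefixlen)
  print("host part:", int(iface.ip) - int(net.network_address))

  # IPX-style: network and node are simply two separate fields.
  ipx_network, ipx_node = 0x0000BEEF, 0x00A0C9112233
  print(f"IPX network {ipx_network:08X}  node {ipx_node:012X}")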
>> Variable length subnet masks weren't themselves
>> really effectively implemented until 1998, if you could delude yourself
>> into thinking there are any effective implementations of variable length
>> subnet masking.
>
>Variable length masks weren't needed until it became obvious that
>the wastefulness of using the original class designations was going
>to be a problem and that aggregation had to happen in more flexible
>units.
That "wastefulness of the original class designations" was an efficiency
which allowed global routing with 1970s technology. It was the ISP
model of the Internet which was the problem, not the mechanisms used to
allow high performance routing. With the class-based system, core
routers did not have to spend the huge amount of time in overhead
checking every single destination address, down to the last bit,
potentially, against a subnet mask. Merely examining the first few bits
of any address was sufficient to tell the router how far into the
address to go to extract the network number.
Modern hardware, obeying Moore's Law, made it more efficient to expand
the illusion that IP addressing was a hierarchical routing system by
dropping the class system. It is more efficient to have backbone
routers using subnet masks for every routing decision than to have
routing tables that list every individual "class C" subnet on the
planet.
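Here is a small sketch of that class-based shortcut, using the
pre-CIDR rules; the destination addresses are illustrative:

  import ipaddress

  def classful_network_bits(addr: str) -> int:
      # The leading bits alone say how much of the address is network
      # number, so no per-route subnet mask has to be consulted.
      first_octet = int(ipaddress.IPv4Address(addr)) >> 24
      if first_octet < 0b10000000:     # 0xxxxxxx -> class A
          return 8
      if first_octet < 0b11000000:     # 10xxxxxx -> class B
          return 16
      if first_octet < 0b11100000:     # 110xxxxx -> class C
          return 24
      raise ValueError("class D/E addresses are not unicast networks")

  for dest in ("10.1.2.3", "172.16.9.200", "192.0.2.77"):
      bits = classful_network_bits(dest)
      print(dest, "->", ipaddress.ip_interface(f"{dest}/{bits}").network)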
>> >If cable numbers were really supposed
>> >to be a workable routing concept, a global registry would have been
>> >needed (as already existed for the MAC numbers).
>>
>> You mean "global uniqueness", I presume, which isn't the same thing at
>> all as a "global registry".
>
>No, I mean a system that permits/encourages/enforces aggregation of
>units so backbone routers can maintain a single entry for a large number
>of host addresses that may be reached through that hop. Global
>uniqueness is also needed, but the inclusion of the MAC address
>takes care of that for ethernet end points.
You are confusing the aggregation of network numbers with the
aggregation of host numbers, which is not at all the same thing. We are
either talking about routing or we are talking about transmitting. You
can't use transmission characteristics (like MAC address) to deal with
routing decisions, whether the end points might usually "match up" or
not.
The difference between your view, which I realize is rather
straightforward and conventionally accepted, and reality, is subtle, so
I'm not sure if there's any need to keep quibbling about it. Your
statements are based, however, on an apparent understanding of how
"networks work" which is extremely common, but unfortunately imprecise.
>> But its apparent that you are referring to
>> the flexibility of not having a fixed division between network and host
>> number, as in IP. But with address spaces as large as IPX, who needs
>> them?
>
>How do you route for large numbers of connected hosts?
You don't. All routing decisions are performed one number at a time.
The fact that the network and host number are *apparently* combined in
IP addressing has led you to the inevitable presumption that the host
number is an "extension" of the network number. This isn't really the
case. And, again, your position seems to reduce to the fact that IPX
simply has a much larger address space than IP, and you expect that this
would make routing decisions more costly. But you're measuring the
wrong scalability; it is cheaper to have a larger addressing space which
is bound by known limits than it is to have a smaller address space with
arbitrary and indeterminate limits. This last concept refers to the
requirement in IP for a subnet mask, and the resulting processing
overhead necessary to use it. IPX doesn't waste time dealing with
subnet masks. And, no, this neither makes route aggregation a
definitive advantage for IP, nor impossible in IPX.
>> IP's 32 bits hardly compares to IPX's 16 byte segment number
>> *plus* 16 byte (twelve digits of hexadecimal values is 16 bytes, isn't
>> it, or is it 8?). The only difference it would make in the end is to
>> people who would find the routing of a packet through 24 different
>> routers to be offensive in comparison to 17 routers, to get from one end
>> to the other on the Internet. Routing still works whether you use a
>> subnet mask or discrete numbers to identify the destination network
>> (and, in fact, using subnet masks is more efficient in some cases and
>> less in others).
>
>Subnet numbers can be aggregated, arbitrary discrete numbers can't.
Subnet numbers couldn't be aggregated until 1997, when the entire routing
infrastructure was modified to take advantage of Moore's Law and
mitigate a bottleneck which never would have occurred if the Internet
hadn't been changed from a host-based to an ISP-based system.
>> TCP/IP isn't really a hierarchical routing system, or at least it wasn't
>> until CIDR was introduced in 1997. In August of that year, the term
>> "class" became obsolete in TCP/IP, though it is still often used to mean
>> the equivalent "N bits of subnet mask".
>
>Yes. Do you understand why arbitrary-width CIDR aggregation
>was necessary?
Intimately. I'm not really sure that you do, to be honest. I don't
mean that as an insult; I just mean I'm not sure what you take the
reason for CIDR to be. Most of the things you've been saying are
'bad' are not only left unaddressed by CIDR itself, but CIDR actually
makes them much worse. CIDR, for instance, is not what makes
aggregation possible except on backbone links (individual autonomous
authorities have been able to provide "route summaries" since OSPF was
developed). Obviously, arbitrary-length subnet masks are related to
route aggregation, but they aren't the same thing.
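For what it's worth, the aggregation itself is simple enough to sketch
in a few lines of Python; the prefixes are illustrative, contiguous
networks reachable through the same next hop:

  import ipaddress

  # Four contiguous "class C"-sized prefixes behind one next hop
  # collapse to a single /22 entry in the backbone table.
  routes = [ipaddress.ip_network(p) for p in
            ("198.51.100.0/24", "198.51.101.0/24",
             "198.51.102.0/24", "198.51.103.0/24")]
  print(list(ipaddress.collapse_addresses(routes)))
  # -> [IPv4Network('198.51.100.0/22')]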
>> Segment IDs ("cable numbers")
>> at least allow for a more rational definition of the "network
>> interface".
>
>Unless you are a router trying to store them all.
Storing them isn't a problem. Looking through the list whenever you
have to route a packet is a problem. That problem is decreased
tremendously when you don't have any "variable length" to the network
number to begin with, but present it simply as a separate value from the
host number. Combining the two as IP does certainly has major
efficiencies. Just not the ones you're referring to.
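The difference in lookup styles can be sketched quickly; both tables
below are toy data for illustration only:

  import ipaddress

  # IPX-style: the network number arrives as its own field, so the
  # routing decision is a plain exact-match lookup.
  ipx_table = {0x0000BEEF: "if0", 0x0000CAFE: "if1"}
  print(ipx_table[0x0000CAFE])                  # -> if1

  # IP-style: the network number is hidden inside the address, so every
  # candidate prefix is tested and the longest match wins.
  ip_table = {ipaddress.ip_network("10.0.0.0/8"): "if0",
              ipaddress.ip_network("10.20.0.0/16"): "if1"}

  def route(dest: str) -> str:
      addr = ipaddress.ip_address(dest)
      return max((net.prefixlen, hop)
                 for net, hop in ip_table.items() if addr in net)[1]

  print(route("10.20.30.40"))                   # -> if1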
[...]
>> >Yes there was, as far as email and usenet were concerned. There
>> >was even some effort to provide ftp-over-email services to everyone.
>>
>> You mistake "ability to" for "convergence with."
>
>You sent/received email and worked whether you were on the internet
>or not. How is that different from converging? Many systems had
>internal IP LANs without being connected to the internet and used
>uucp to batch-transfer email/news among each other and their
>better-connected gateway. A host on this kind of net would have
>local services just like an internet-connected box - it just took longer
>to transfer to off-site locations.
Yes, that is how it is different from converging. A host on this kind
of network was using UUCP to access email and usenet, not "the
Internet". Not every use of IP is "the Internet", and "the Internet"
isn't limited to a single application or connection type.
>> >In the late 80's and early 90's it was difficult to impossible for
>> >companies not involved in defense research to get directly on the
>> >internet. However, UUNET and some universities provided uucp
>> >dial-up connectivity. Anyone could register a domain name and
>> >have full internet email connectivity regardless of the fact that they
>> >weren't directly connected.
>>
>> True; I understand your point. It is the fact that they were not
>> directly connected which makes the difference from my perspective. I
>> guess the servers were connected through the Internet, rather than UUCP,
>> but the majority of Usenet was still servicing 'direct access' UUCP,
>> even server-to-server.
>
>There was an attempt to map the arbitrary mesh of ad-hoc uucp
>connections so you could email anyone without knowing the
>route, but basically anyone who cared about reliability got
>their own domain name and a directly connected gateway like
>uunet.
Whether the leading edge or the high-water mark of a migration should be
used to designate the change seems to me to be quibbling.
>> >Once it became possible to connect directly, uucp was no longer needed
>> >or desirable and the considerable effort needed to route email over
>> >many unorganized hops was dropped.
>>
>> Yes, I guess this did occur in the later 80s, didn't it.
>
>It was probably early 90's before it became easy.
Indeed. Interesting how in the previous exchange you pegged the
leading edge while I pointed to the bulk of the curve, whereas in this
one we switch. :-)
[...]
>> If the limitations are gone, I can't see any reason to limit anything to
>> one network protocol and routing concept. ;-)
>
>Administer an all-IP net vs. one where an assortment of protocols
>are all running and you will see.
I have. I don't see. Sure, "administering" becomes easier the less you
have to administer. But networks aren't designed to make administration
easy; they are designed to provide services. If more services make
administration more difficult, is that any reason to say it is a good
idea to limit services, without even examining the benefits?
>You basically multiply the
>things that can go wrong (and Murphy wins, your downtime)
>by the number of protocols running because each has its own
>failure mode. Or just run a sniffer on a multi-protocol net when
>it should be idle and look at all the useless chatter from each
>one. For example, every Windows box will be broadcasting
>its netbios name using every protocol you have active every
>few minutes. Netware is even worse, and appletalk the worst.
And here is where we come, pointedly, to the reason I am so apparently
unreasonable (and possibly unreasoning) on these subjects. No, you
*don't* multiply the things that can go wrong with any one component
because each has its own failure mode. That is the 'simplex'
understanding of the network as a single complicated transmission
system. It is not. The more practical understanding of a "complex
network", which would include all internetworks, their transmission
systems, and their software services, leads to a different approach.
You divide the things that can go wrong (ultimately, reducing them to
one: a service is not available) by the number of alternatives you have
on each of the three levels of connectivity
(physical transmission [links], logical route [path], and software
client/server [system]). So, the more alternatives you have on any one
level, the more reliable the system becomes. Murphy does always win,
which is why simplex networking has gone away (though most people are
still taught to represent all networks with this archaic model) and been
replaced with complex networking. I didn't mention anything about
adding components which are recognized as sub-standard (netbios), but if
your network supports both IPX and IP, it will be more reliable! This
can be illustrated in a variety of ways. If your routers can handle
both protocols, then you can use one in place of the other if there is
some design inhibition (future-proofing, so to speak; there are still
some implementations which can take advantage of IPX). You can always
use the dual stack to verify connectivity: if the IP works but the IPX
doesn't, you have gone quite a ways toward isolating the problem. Systems
designed to provide multiple protocols will inherently be more robust,
and will not have the tendency to lock future operations into features
only supported by one particular protocol (protecting against the
'de-commoditization' methods which certain vendors routinely employ).
Running two protocols can also be used as a form of control; utilization
breakdowns can be instrumented, processed, and displayed much more
efficiently by simply comparing IPX to IP. The known (or easily
characterized) determination of what "kind of traffic" each is used for
is much more easily used than having to try to sniff every datagram on
every link to determine what kind of services are using resources in
what proportions.
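The arithmetic behind that reliability claim is nothing more exotic
than this back-of-the-envelope sketch, which assumes independent
failures and uses purely illustrative probabilities:

  from math import prod

  def unavailability(failure_probs):
      # Probability that every alternative fails at the same time.
      return prod(failure_probs)

  print(unavailability([0.05]))         # one protocol path:  0.05
  print(unavailability([0.05, 0.05]))   # two alternatives:   0.0025

Every added alternative shrinks the chance that the service is
unreachable, which is the whole point.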
All in all, other than the job of "administration" being somewhat easier
(and less flexible, and less capable), there really isn't any reason to
reduce services to only one protocol. Similar to the idea of
general-purpose versus special-purpose computing appliances. A network
which only supports one protocol is an appliance; cheap, but inflexible,
and you might easily end up supporting a whole bunch of different
appliances, when what you really need is a general purpose computer.
Complex networks are general purpose networks. It's no surprise that
even intelligent and cognizant businesses, operations, and
administrators want to "save the expense" of implementing a general
purpose network. But those who have what is known as "enlightened self
interest" might find that "having" to support two protocols, or more,
would be much more efficient, effective, and ultimately even more
expedient, because it makes it more obvious what is wheat and what is
chaff in the matter of trying to run a network. Yes, the additional
details of the additional protocols might seem a burden which provides
no compensating value. But that's because of the old simplex model,
where the details of the protocol were entirely controlling. In a
complex network, where "a routing protocol" becomes something of a
commodity (and excludes non-routable protocols), "having" to support
more than one means you only have to worry about how a routing protocol
works, and don't even need to be aware of the details of any specific
routing protocol, necessarily.
You only need to be an expert in a component when you have to try to
reverse-engineer a failure. When you can just "swap it out", somehow,
to verify the rest of the system, you don't even need to know if the
component failed. All you need to know is that replacing it will
restore service.
>> >IPX can't work
>> >as the only one so it is out of the picture if you remember
>> >that the only reason for using it in the first place was simplicity.
>>
>> IPX works just fine for quick local connectivity. One might say we
>> don't need it, but it's still there, as is (gasp) NetBIOS/NetBEUI. As
>> carefully (or not) tunneled as it might be, the efficiencies of having
>> local and long-haul communications use the same routing mechanisms seems
>> somewhat pointless, in some respects, don't you think?
>
>NetBIOS is a software layer. NetBEUI is unroutable and can't handle
>more than 254 nodes. IPX usually requires more expensive router
>software plus the extra time to set it up. There is a point to using the
>same protocol for everything if you are the person who would have to
>keep the multiple versions working and provide the extra bandwidth
>for duplication. And, in every box it is that much more that can
>and will go wrong.
I am the person who would have to keep multiple protocols working. In
fact, I am a highly paid consultant to global carriers, service
providers, and enterprises who is routinely asked how to keep a great
variety of network things, including protocols, working. I will give to
you for free what Sprint often pays $2000/day for:
It is easier to keep two protocols working than it is to keep one
protocol working, from the perspective of the real world, where the
protocol is just one component in a complex network. Just as in the
discussion about Ethernet which also resulted from my earlier comments,
the principle of networking on which the Internet itself, and all
successful internetworks, is based is the perspective of the whole
network. It is complex, but it is not complicated. When you only have
one protocol, you spend a very large amount of time administering it, and
a preposterous amount of time troubleshooting it. When you have two
protocols, you spend a large amount of time administering them, and a
nominal amount of time fixing them when they break. The cost of
administration is linear, not exponential. The more protocols you have
to administer, the more restricted the amount of administration which
needs to be performed per protocol, and the more benefit, in terms of
optimization and reliability, that administration time provides.
Yes, every added component is just another thing that can go wrong, in
the simple view of networks, where complicated technology is
interconnected in complicated ways. In the complex view of networks,
where simple technology is interconnected in simple ways, there is a
counter-force dubbed "the network effect" by Dr. Robert Metcalfe,
inventor of Ethernet and founder of 3Com. Every added component makes
every other component already added even more valuable.
An apt analogy for these different views, Murphy versus Network Effect,
might be found in the Standard Model of physics. Gravity is many orders
of magnitude weaker than the electroweak force, just as the network
effect might be considered much weaker than Murphy's Law. If you
examine only very small specific things, as in particle physics, you can
all but ignore gravity, while the electroweak force is so powerful that
it would seem laughable to say that it is not the most important and
controlling force in any system. But when you are interested in the
"real world", while a simple refrigerator magnet can still out-pull the
combined mass of the entire planet Earth and win in a tug-of-war for a
small iron nail, saying that gravity is weak and unimportant is
obviously a counter-intuitive view, no matter how correct it might seem
in a pedantic sense.
The network effect might be as weak as gravity in comparison to Murphy's
Law. But the larger the network, the more important it becomes to deal
with, and the more value there is, therefore, in taking advantage of it.
So the "common sense" approach which leads you to think that adding
another protocol is just asking for headaches might just be too narrow a
view, in the end. It would make more sense to consider the "real world"
approach, and recognize that adding another protocol can only make the
job of keeping the network running easier and more efficient.
--
T. Max Devlin
*** The best way to convince another is
to state your case moderately and
accurately. - Benjamin Franklin ***
======USENET VIRUS=======COPY THE URL BELOW TO YOUR SIG==============
Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html
------------------------------
From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Tue, 31 Oct 2000 12:28:48 -0500
Reply-To: [EMAIL PROTECTED]
Said Les Mikesell in comp.os.linux.advocacy;
>
>"T. Max Devlin" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> Said Bernd Paysan in comp.os.linux.advocacy;
>> [...]
>> >There are certainly crude parts in the Unix interface; but at least the
>> >free software community has successfully smoothed the edges. Unix is not
>> >the best of all possible OS, but it is good enough. It is friendly
>> >enough to programmers (though I don't understand why I have to open
>> >sockets with htons and htonl instead of just
>> >open("/ports/tcp/www.foo-bar.com/http");
>>
>> If I might contribute my perspective, it is because dealing with a
>> socket within Unix's "everything is a file" paradigm would be a Bad
>> Thing, because it would mask, without abstracting correctly, the fact
>> that a socket is not strictly a system resource.
>
>Huh? A socket is a file once you get past the magic of opening it.
Once it is "opened" means once it exists. The existence of a socket
requires someone on the other end. Therefore, opening a socket cannot
effectively be abstracted as opening a file, even though using a socket
once it exists can be abstracted to using a file.
>Otherwise inetd would be unable to start programs that know
>nothing about sockets with sockets as their stdio connections.
>Taking a bit more of the magic out of opening one by using
>standard string representations as their names would allow
>all languages that know about opening named files to use them,
>and thus makes exactly as much sense as adding named pipes
>which does the same thing for pipe i/o.
It would also allow, I presume, software which botched the job rather
badly because opening sockets is not a matter of simply allocating local
resources.
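To make the distinction concrete, here is a sketch in Python; the
open_port() helper and its path-style argument are made up to mirror
the proposal quoted above, and the host name is illustrative:

  import socket

  def open_port(path: str):
      # Hypothetical open("/ports/tcp/host/service") equivalent.
      _, _, proto, host, service = path.split("/")
      if proto != "tcp":
          raise ValueError("only tcp is shown in this sketch")
      port = socket.getservbyname(service)
      sock = socket.create_connection((host, port))
      return sock.makefile("rwb")    # from here on it really is file-like

  try:
      f = open_port("/ports/tcp/www.example.com/http")
      f.write(b"HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
      f.flush()
      print(f.readline().decode().strip())
  except OSError as exc:             # the "other end" part can always fail
      print("connection could not be established:", exc)

Using the thing once it exists is ordinary file I/O; getting it to
exist in the first place depends on a willing peer, which is exactly
the part an open() on a path cannot honestly abstract.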
--
T. Max Devlin
*** The best way to convince another is
to state your case moderately and
accurately. - Benjamin Franklin ***
======USENET VIRUS=======COPY THE URL BELOW TO YOUR SIG==============
Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html
------------------------------
From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To:
comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy,comp.os.ms-windows.advocacy
Subject: Re: Pros and Cons of MS Windows Dominated World?
Date: Tue, 31 Oct 2000 12:33:19 -0500
Reply-To: [EMAIL PROTECTED]
Said JS/PL in comp.os.linux.advocacy;
[...]
>I was a pollster once, back in 1980. I can testify from actually witnessing
>the process, don't trust polls taken by low paid workers in a practically
>unsupervised environment. I answered about 99.9% of the questions myself,
>sitting on my couch, eating and watching TV, instead of walking door to door
>with the huge printout of registered voters I was given. I also had the
>feeling I wasn't the only one in the group with the same idea.
>
><dink wad snipped>
>
He just admitted to being a dishonest couch potato, and *I'm* a 'dink
wad'? :-D
--
T. Max Devlin
*** The best way to convince another is
to state your case moderately and
accurately. - Benjamin Franklin ***
======USENET VIRUS=======COPY THE URL BELOW TO YOUR SIG==============
Sign the petition and keep Deja's archive alive!
http://www2.PetitionOnline.com/dejanews/petition.html
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.advocacy) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Advocacy Digest
******************************