Linux-Advocacy Digest #894, Volume #29           Sat, 28 Oct 00 14:13:05 EDT

Contents:
  Re: Ms employees begging for food (T. Max Devlin)
  Re: The BEST ADVICE GIVEN. (Peter Hayes)
  Re: Pros and Cons of MS Windows Dominated World? (T. Max Devlin)

----------------------------------------------------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Sat, 28 Oct 2000 13:55:16 -0400
Reply-To: [EMAIL PROTECTED]

Said R.E.Ballard in comp.os.linux.advocacy; 
   [...]
>Unfortunately,
>the UNIX community had already reached the point where Novell
>address limitations were blown.

Interesting that you should put it this way.  I'm not sure what "address
limitations" translates to in the abstraction your familiar with, but
the development of Novell's IPX is as interesting an examination of
development in network technology as I think anyone could present.

Novell developed their technology, possibly unknowingly, along the same
fundamental paradigm which drove the Internet.  I refer to this as the
"complex model of networks" (formerly the Three Level Connectivity
Model, also the Best Practices Framework, also the Enhanced Reference
Model.)

Basically, Novell needed three things to make their file-and-print
server product work.  They needed a) physical transmission systems, b) a
logical transport system (to provide end-to-end connectivity across one
or more transmission systems), and c) a software data transfer system.

The mechanisms they used to provide these three were almost text-book
cases of competitive efficiency.  The software network was first; the
NCP (the Netware Core Protocols) were the essence of efficiency.  By
recognizing that logins must be persistent, but that routine interactions
(such as short-term "keep alives" or assumptions of data integrity) need
not be, NCP set a high-water mark of efficiency in providing file and
print services that still stands.  The physical transmission system was
already available,
in theory, with Ethernet, as Netware was designed for local access
(shared office resources).  The ethernet technology of PCs was still
nascent, but was rapidly developed by Novell, including the practical
commoditization allowed by NE2000 compatibility.  Novell initially
manufactured NIC cards, but were more than happy to encourage other
hardware producers to "clone" their driver interface and hardware specs.

The tricky part was how to put the two together.  Ethernet technology
doesn't really allow for a "connection oriented" approach (what I call a
simple, or simplex, network) that binds the software directly to the
interconnection hardware (much the way NetBIOS/NetBEUI still does).
They decided that the system would be most flexible and efficient by
using the routing model developed in the late seventies.  They took XNS,
a somewhat rudimentary routing technology, and based the development of
IPX on that.  Every server was a "router", enabling multiple servers to
easily inter-connect multiple Ethernets, thus allowing scalable (but
still local) access to a number of servers through a single physical
interconnection.  Now is where the innovative part of the development
came into play, though.

While TCP/IP was superior to XNS in allowing "wide area network"
connectivity, it is not extremely efficient (though hardly burdensome,
at least on more modern systems) for purely local area connections
through one or more bridged/routed LANs.  Novell optimized IPX to be, in
effect, *blazingly fast*, so long as certain presumptions remained
inviolate.  As it turned out, without the unbeatable speed of IPX+NCP,
Novell might well have not ruled the "server market" they had developed.
Unfortunately, though, one of those presumptions was local area network
interconnection, which is what eventually stymied Novell when the
Internet happened.  Some of the efficiencies that this presumption
allowed was the aforementioned use of "connectionless mechanisms", even
in the transport.  IPX itself is the functional equivalent of IP plus
UDP; transport services (fragmentation, re-assembly and sequencing, and
reliability) are only really necessary when the path includes
indeterminate transmission systems, including the low fidelity of
long-haul bandwidth and the long propagation time of long-distance
circuits.  Novell developed SPX, the equivalent of TCP, for when
"reliability" was essential, but rarely used it for anything outside
control 'channels', typically admin interfaces running on client PCs to
control the server configuration.  I suppose it may have been necessary
at the time to ensure enhanced reliability for such connections, as the
bindery (the precursor to NDS) might well become corrupted if 'raw' IPX
were used.
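
To make the "IP plus UDP" comparison concrete, here is a rough sketch of
the 30-byte IPX header layout (in Python, which is anachronistic and
purely illustrative; the field names and the example values are mine,
not Novell's):

    import struct

    # Minimal, illustrative packer for the 30-byte IPX header: checksum,
    # length, transport control (hop count), packet type, then a
    # network/node/socket triple for destination and source.
    def pack_ipx_header(dst_net, dst_node, dst_sock,
                        src_net, src_node, src_sock,
                        payload_len, packet_type=0):
        checksum = 0xFFFF              # checksumming disabled by convention
        length = 30 + payload_len      # header plus data, in bytes
        transport_control = 0          # incremented by each router hop
        return (struct.pack(">HHBB", checksum, length,
                            transport_control, packet_type)
                + struct.pack(">I", dst_net) + dst_node
                + struct.pack(">H", dst_sock)
                + struct.pack(">I", src_net) + src_node
                + struct.pack(">H", src_sock))

    # Hypothetical client-to-server packet; 0x0451 is the NCP socket.
    hdr = pack_ipx_header(0x00000001, bytes.fromhex("00001B2C3D4E"), 0x0451,
                          0x00000001, bytes.fromhex("00001B654321"), 0x4003,
                          payload_len=512)
    assert len(hdr) == 30

Notice there is nothing in there for sequencing or windows; that is the
whole point, and it is what SPX had to add back for the rare cases that
needed it.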

The efficiencies Novell implemented included the "ping-pong protocol"
aspect of IPX; each packet was individually acknowledged.  With no
"sliding window", IPX prevented the 'global communications' which TCP/IP
allowed, but also saved on overhead when the delay from client
transceiver to server transceiver was restricted to microseconds.
Likewise, error correction was abjured, as a LAN link does not really
receive many "bit hits", and the software level (client, the DOS
redirector, and server, the Netware system) handled checksums on the
data block (file?) level, rather than on the transmission system layer.
This provides yet another lesson in competitive design (or mis-design,
as the case may be) in the problems with ENET-II, 802.2, and 802.3
"frame types" (more packet types, actually, as the difference between
the three was almost entirely within the 'data' portion of the frame)
they later contended with.
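
A back-of-the-envelope calculation shows why the ping-pong design was
harmless on a LAN and deadly over long-haul circuits.  This is a sketch
with illustrative numbers, not measurements:

    # Effective throughput of a stop-and-wait ("ping-pong") exchange:
    # only one packet is outstanding, so each packet pays its own
    # transmission time plus a full round trip before the next is sent.
    def stop_and_wait_bps(packet_bytes, link_bps, rtt_seconds):
        tx_time = packet_bytes * 8 / link_bps
        return packet_bytes * 8 / (tx_time + rtt_seconds)

    PKT = 1024   # bytes per packet (illustrative)
    lan = stop_and_wait_bps(PKT, 10_000_000, 0.0002)  # 10 Mb/s, ~200 us RTT
    wan = stop_and_wait_bps(PKT, 1_544_000, 0.060)    # T1, ~60 ms RTT
    print(f"LAN: {lan / 1e6:.1f} Mb/s out of 10 Mb/s")
    print(f"WAN: {wan / 1e3:.0f} kb/s out of 1544 kb/s")

On the LAN the per-packet ack costs almost nothing; across sixty
milliseconds of propagation delay the same design leaves the circuit
idle most of the time.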

When Novell was developing their NIC-card drivers, the "right" way to
build an ethernet frame was using 802.3 rules.  The older, original,
ENET-II (or Ethernet_DIX) rules used a 'type' field to inform the
receiver's NIC driver which protocol stack was needed to decode the
packet within the frame's data field.  In 802.3, they cleaned up this
mechanism (the type of packet would be identified in an additional layer
of encapsulation, known as LLC, in the frame's data field, thus avoiding
the possibility that a particular NIC card and driver would be required
to transmit and receive a particular protocol packet) by substituting a
"length" field for the type information.  The length was used in the CRC
(Cyclic Redundancy Check; a checksum method of verifying fidelity of
byte-wise transmissions) value, but was actually more of a "place
holder" to allow an 802.3 frame to have the same basic format as an
ENET_II frame.  The length of a received packet, after all, can easily
be determined by examining the received packet!
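
The trick that lets a receiver cope with both formats on the same wire
is that the two interpretations of that 16-bit field cannot collide: no
valid 802.3 length exceeds 1500 (0x05DC), and the registered type codes
all sit at 0x0600 and above.  A sketch:

    # Interpret the 16-bit field that follows the destination and source
    # MAC addresses in an Ethernet frame.
    def ethertype_or_length(value):
        if value <= 0x05DC:
            return ("802.3 length", value)
        if value >= 0x0600:
            return ("Ethernet II type", hex(value))
        return ("undefined", value)   # 1501-1535 is deliberately unused

    print(ethertype_or_length(0x8137))   # Novell's registered IPX type code
    print(ethertype_or_length(0x0400))   # a 1024-byte 802.3 payload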

The problem was that this LLC level, which was designated 802.2, was
specified in the 802.3 spec as necessary, but had not yet been defined
completely.  Novell, needing to put solutions on the market and not
being able to afford to wait for the IEEE to finalize the 802.2 spec,
re-arranged things a bit, based on their LAN-communications presumption.
Novell wrote their drivers to start the IPX header directly after the
length field, and used the IPX header's own unused checksum field, fixed
at 0xFFFF, to identify "the contents of this frame are an IPX packet".
This was necessary, from their perspective, because
Enet_II provided type information, but 802.3 didn't; instead, 802.3
relied on 802.2 to provide type information, but that was not available
to Novell at the time.

So Novell wrote their software to understand three different types of
frame/packet combinations:

ENET_II: An Ethernet DIX (stands for Digital/Intel/Xerox, the developers
of the revised pre-802 spec) frame, designating the packet contents as
an IPX packet using the type field.
802.3: An 802.3 Ethernet (IEEE standard) frame, designating the packet
contents as an IPX packet via the 0xFFFF in the unused IPX checksum field.
802.2: An 802.3 frame, using the IEEE standard 802.2 LLC encapsulation,
including a type field, to designate the contents as an IPX packet.
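
Put together, the receiver's decision over those three framings is a
short decision tree.  This sketch assumes the marker values I described
above (0x8137 as the registered IPX type code, 0xFFFF in the unused IPX
checksum field for "raw" 802.3, and 0xE0 as the IPX service access point
in the 802.2 LLC header); the later Ethernet_SNAP variant is omitted:

    # Classify which IPX framing an Ethernet frame uses, given the 16-bit
    # type/length field and the bytes that follow it.  Sketch only.
    def ipx_framing(type_or_length, payload):
        if type_or_length >= 0x0600:
            return "ENET_II" if type_or_length == 0x8137 else "not IPX"
        if payload[0:2] == b"\xff\xff":
            # "Raw" 802.3: the IPX header starts immediately, and its
            # unused checksum field (0xFFFF) doubles as the marker.
            return "802.3"
        if payload[0] == 0xE0 and payload[1] == 0xE0:
            # 802.2 LLC header: DSAP/SSAP 0xE0 designate Netware/IPX.
            return "802.2"
        return "something else"

    print(ipx_framing(0x8137, b""))                           # ENET_II
    print(ipx_framing(0x0200, b"\xff\xff" + b"\x00" * 28))    # 802.3
    print(ipx_framing(0x0200, b"\xe0\xe0\x03" + b"\xff\xff")) # 802.2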

Anyone familiar with Netware is probably aware of the problems this
scenario imposed, but few are cognizant of the performance and
market-oriented reasons for adopting it.  You don't really need a
checksum field for a local transmission most of the time, particularly
when your client/server system is set up to deal with reliability
directly (as made possible, again, by the presumption that there is no
long delay between transmitter and receiver, so having to ack each
packet, and to re-transmit any packet not acked before the next is sent,
does not compound the problem).  It does, however, add yet another
burden if you try to run this stuff over WAN lines.

The "single packet fixed window" of IPX was fixed with something called
"burst mode", developed in the mid-to-late 90s.  This allowed
sequencing, to some extent, so that multiple packets could be sent
without an ack, and also made the maximum packet size much larger.  But
only if 802.2 (802.3 + LLC) was implemented would both reliability and
"protocol multiplexing" (using both IP and IPX on a single physical
end-system) be supported.  It also didn't do anything to fix the real
problem, which was the reliance of NCP on IPX.  To this day, (despite
claims to the contrary), Netware does not provide a 'native interface'
to a TCP/IP stack.  NCP was simply designed, specifically, with IPX in
mind, and I'm of the opinion that in the present market, made somewhat
dysfunctional by the presence of an anti-competitive monopoly, Novell
does not have the programming wherewithal to re-engineer NCP so that it
works on TCP/IP, nor could they keep their decisive performance
advantage, which is predicated on the client and server being
interconnected exclusively with local links.
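
The same back-of-the-envelope arithmetic as before shows what a burst
window buys over a long link: letting N packets fly per acknowledgement
spreads the round-trip penalty across all N.  Again a sketch, same
illustrative figures:

    # Effective throughput with a window of N packets outstanding per
    # acknowledgement, as with NCP packet burst (figures illustrative).
    def windowed_bps(packet_bytes, window, link_bps, rtt_seconds):
        bits = packet_bytes * 8 * window
        return bits / (bits / link_bps + rtt_seconds)

    for window in (1, 8, 32):
        wan = windowed_bps(1024, window, 1_544_000, 0.060)
        print(f"window {window:2d}: ~{wan / 1e3:4.0f} kb/s over a 60 ms T1 path")

That fixes the throughput problem, but as noted, it does nothing about
NCP's reliance on IPX itself.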

As far as "address space" goes, another efficiency of the presumptions
made by Novell is that no ARP mechanism was necessary.  Unlike the 'any
to any' software model of the TCP/IP world, in which the host may very
well run multiple client and multiple server programs simultaneously,
the Netware model presumed that PCs (at the time, non-multitasking)
would act as either a client or a server, never both.  No Unix
mini-computer type systems necessary!  This meant that it could be
presumed that client computers
would only talk to server computers.  Clients would never talk to each
other, and for the most part, neither would servers.  (Server-to-server
communications were actually the routing of packets from one LAN to
another through the server; the two NCP server processes need not
interact to provide services.)  So the logical address used by IPX host
and routing software was made up of an arbitrary network number (the
"segment ID" in IPX-speak) and the MAC address of the NIC card for that
host.  The server would broadcast its address on the LAN, making its
logical address mapping available to all clients (and other servers, if
any).  These were known as Service Advertising Protocol broadcasts
(SAPs).  SAPs were sent out routinely (initially, every ten seconds).
The router/server learned the MAC address of the client by examining its
logical address, identified in the packet which logged the client in to
a server.  While not nearly as flexible as TCP/IP's class/subnet-mask
scenario, it was much more easily configured (because it essentially
required no configuration, save designating a segment ID for each LAN)
and actually provided *more* host addresses, since each MAC address was
globally unique, but a Novell server could actually handle "duplicate
MACs" just fine if they were on separate segments.

The lack of ARPs never really was a problem; address space in IPX is
much larger than in IP, at least potentially.  But the SAPs, again,
caused a great deal of difficulty when you try to use long-distance
links to support an NCP network.
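
The SAP overhead is easy to put rough numbers on.  This sketch takes
roughly 64 bytes per advertised service and up to seven services per
broadcast packet as given, and treats the broadcast interval as a
parameter (all figures approximate):

    # Rough background load from periodic SAP broadcasts on one link,
    # assuming ~64 bytes per service entry, up to 7 entries per packet,
    # and ~48 bytes of IPX-plus-Ethernet overhead per packet.
    def sap_load_bps(services, interval_seconds):
        packets = -(-services // 7)              # ceiling division
        bytes_per_cycle = services * 64 + packets * 48
        return bytes_per_cycle * 8 / interval_seconds

    for services in (10, 200, 1000):
        print(f"{services:4d} services: ~{sap_load_bps(services, 60) / 1e3:.1f}"
              f" kb/s of steady broadcast traffic")

Harmless on a 10 Mb/s Ethernet; a real bite out of a 56 kb/s leased
line, before any actual work gets done.
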
>[...]In fact, both IBM and the other
>big UNIX server vendors have reached the point where they depend
>very little on royalties and very heavily on consulting.  You can
>get the server software for a few thousand dollars, and spend a few
>million in consulting and support services.

I believe this is more a sign of the dysfunction in the market caused by
the "ripple effect" of monopolization, to be honest.  That IBM is a
paragon of such an approach only supports that point.

>Ironically, it took Linux to actually tap the real market potential
>of UNIX.

I think this is quite true, in a lot of ways.

>Sun Microsystems discovered that almost all of their
>Enterprise series server customers had done the prototyping and pilot
>on Linux.  Linux was not only selling Suns, but it was selling the
>consulting services required to tune and tailor the system to the
>application and environment.  Some companies such as Pyramid
>could increase performance 10-fold by knowing both the OS and
>Database source code.

Here you show your Linux bias, Rex.  AFAIK, Sun still does not directly
support Linux on their boxes, and cannot provide any "tuning and
tailoring" services, though I may be a year or so behind in that
assessment.  Certainly since they bought StarOffice, they might have
changed their tune, but I haven't heard anything to indicate that they
support StarOffice directly on anything but Solaris.

   [...]
>> IBM was under a similar decree that
>> they wouldn't enter the phone business.
>
>Partially correct.  IBM couldn't carry voice, but
>they could carry data on their SNA links.

Entirely fabricated, to my knowledge.  IBM didn't have "their SNA
links", if you mean any specific telecommunications circuits.  IBM is
not a carrier, and did not provide any voice services because voice and
data were not then considered "compatible" in any way that would allow
both to be dealt with interchangeably.

   [...]
>>  UNIX seems to be the beast that just won't die, no
>> matter what internal roadblocks come up.
>
>This is largely because it's a customer driven market.  Even today,
>it is the expectations of Windows users converting to Linux that
>drive current desktop development efforts.

I think of it more as "Unix is the beast that won't die" because it was
mandatorily fragmented from the outset.  The modern sensibility which
confuses a market with a product is evident here, I think.  Unix would
be nothing but something which Microsoft "bested", as they "bested" all
other technology which threatened their monopoly, if not for the fact
that Unix is not a product; it is a market.  A market only exists when
you can buy substantially the same thing from more than one vendor.
This is what makes Microsoft's position illegal; as long as only they
sell Windows, they are monopolizing the PC OS market.  If there were one
or two Windows clones, even bad ones, then it would be a Windows market,
and even if MS had 90% market share it would be almost impossible to
convict them of monopolization.

>One of the key ingredients is the entire telecommunications network.
>UNIX system administrators used "UNIX to UNIX COPY Protocol" (UUCP)
>to move files between machines.  Very quickly this evolved into
>e-mail and newsgroups.  In about 1982, the DOD/ARPA did an
>interoperability test called "Project Dahlgren".  For years,
>those involved had to keep secret the fact that TCP/IP had
>made it possible for numerous computers of numerous brands
>to communicate with each other using inexpensive hardware and
>free software.  This was the actual "Birth" of what we now
>know as "The Internet", which was the merger of the UUCP network
>known as usenet, and the DOD network known as ARPANet.

I'm afraid I'll have to disagree with you again.  UUCP may have
prototyped email and newsgroups, but there was never a true convergence
of UUCP and the Internet in the way you describe.  I wasn't around at
the time, I'll admit, but UUCP was more or less dropped, once (and only
after) ARPANET became NSFNET became The Internet.  RFC 822 (email) and
NNTP (newsgroups) may well owe a great deal to UUCP-based systems, but
they are certainly not the _result_ of any convergence; they _are_ the
convergence between the UUCP and TCP/IP methods.

>I was one of several hundred involved in this project, and most
>of us worked ridiculous hours, often performing more traditional
>tasks during the day and then working well into the wee hours
>of the morning.  People like Henry Spencer, who architected
>the first international e-mail distribution schemes, and Bill Joy,
>who literally came up with the "dotted name" notation used in DNS
>(he literally put "the dot in dot com").  I worked with a number of
>people, including Oded Feingold, Don Black, and Vicky Stuart, and
>a number of other interesting people to deal with the commercial
>interests.  This included creating a number of legal structures
>INCLUDING helping Richard Stallman with his "General Public License",
>establishing software and license terms that made it possible for
>businesses using UNIX to get the real-time support they needed while
>protecting the interests of the creative elements that the vendors
>needed.
>
>It took almost 10 years to create a legal, cultural, and economic
>model that made it possible for Linux to become what Red Hat, Caldera,
>and the others have made it today.  From that very modest beginning
>in 1982, to the extraordinary growth to over 1/2 billion internet
>users in 2000, growing to 1 billion by 2001, it has taken thousands
>of volunteers, some eager, some reluctant, to create this thing we
>call the Internet.

With all due respect, Rex, you make it seem all too purposeful for me to
buy this perspective.  I certainly appreciate the background, and I'm
not doubting the facts you present.  Merely your interpretation of them.
:-)

>And it has taken almost 20 years to transform UNIX from a "laboratory
>rat" to the invisible conduit through which nearly 90% of the world's
>information eventually passes.  Many, like John Postel, didn't live
>to see their dreams bloom into full reality (John's vision was a
>global network in which even the poorest members of the smallest
>and remote village could communicate with the rest of the world
>via the Internet.

I certainly must dispute this notion.  John Postel most certainly lived
to see his dream come true, to some extent.  The more fanciful
description aside, John Postel did more to build the Internet than
everyone you described in Project Dahlgren combined.  Unix wasn't
"transformed" at all; it was merely used because it was the de facto
standard.  As for becoming the backbone of global datacommunications,
that took, in my estimation, about seven years, between 1982 and 1989.
That was when all the CS grads who learned Unix so intimately in the
academic world brought it out to the commercial enterprises when
thousands of bosses said 'we need this built' and thousands of former
students said 'I can do that; just give me Unix'.

>Others like Vint Cerf and Vicky Stuart have had to stay in the shadows
>while others took the limelight. Vint is deaf and nearly mute, barely
>understandable when he speaks at a podium.  Vicky is a transexual.
>Oded was a holocaust survivor.  And Don Black was the Grand Dragon
>of the KKK.  Other unsung heroes include a man falsely accused of
>molesting his daughter, several are fathers who lost their wives and
>children, partly in pursuit of this higher goal.  Some are "old
>hippies" who never shaved off the beard, never cut off the pony tail,
>and never stopped challenging the "established order".  Several
>had used illegal drugs at one time or another.

The amount of colorful background you provide continues to know no
bounds.  Personally, I'm re-growing my ponytail even as we speak.

>Whatever their contributions, whatever their personal lives, these
>were the people who dedicated their lives to making the "New Economy"
>possible.

But that's the thing, Rex.  None of these people were trying to make a
'new economy' possible (though I am not trying to impugn their acumen,
goals, or foresight).  They did an excellent job doing whatever it is
they were doing, but nobody ever planned to "invent the Internet", I'm
afraid.  And even those who built a global TCP/IP network generally
don't even comprehend what that is.

>>  Over the years we've seen PWB vs. V7,
>>  AT&T vs. BSD, OSF vs. AT&T/SUN, Linux & *BSD vs.
>> the commercial vendors.
>
>This I respect!  Not only does he sign his name and provide his
>email, but he even includes his phone number and corporate identity.
>
>We know that you aren't an "official spokesman of Red Hat", but at
>the same time, it's great to see you making the presence of Red Hat
>felt and known.

Congratulations to you both, and everyone else, including the guys at
Novell who revolutionized the world of computing enough for the Internet
to become possible, even though they weren't revolutionizing the
Internet, but instead focusing on LAN services.  Fastest and easiest
damn file-and-print services around, bar none, even now.  Sucks for
everything else, though.  ;-)

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***


------------------------------

From: Peter Hayes <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy
Subject: Re: The BEST ADVICE GIVEN.
Date: Sat, 28 Oct 2000 18:50:58 +0100
Reply-To: [EMAIL PROTECTED]

On Sat, 28 Oct 2000 04:22:09 GMT, "Chad Myers"
<[EMAIL PROTECTED]> wrote:

> 
> "Charlie Ebert" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Considering today's events about Microsoft and
> > their being HACKED into,
> 
> I wouldn't say "hacked"; an employee was deceived
> into opening an email.
> 
> Unfortunately, the weakest link in any good security
> plan is humans. Note that the "hackers" or "crackers"
> weren't able to actually "hack" into MS, they had to
> deceive an employee to run an app on their system.

The real question is why M$ were daft enough to have their source code on
any machine(s) that were in any way connected to the outside world.

If they'd any sense, their development machines would have their own
network isolated from the rest of the world, then no matter how many
trojans they ran no "hacker" could "steal" their code.

But I guess this incident just confirms M$'s attitude to security - heaven
help us if .Net ever gets off the ground.
 
> MS is no different than any other corporation in this
> regards. I'm sure any major business with employees
> who are non technical and who receive emails have
> caused incidents like this.
> 
> > their W2K source code stolen
> 
> documentation please. All the reports I've read
> (including the ones that Slashdot even posted) said that
> either nothing was stolen except passwords, or that only
> a few projects had things stolen and that those projects
> didn't include Windows or Office.

It's reasonable to suppose that "hackers" with access to M$'s inner
workings would download all they could, and worry about what they'd got
later.

> > their not detecting the break-in for weeks,
> 
> Do you monitor your network for every outbound email?
> 
> > and their total lack of security in operating system
> > development.
> 
> It wasn't in the OS development, twit, did you even read
> the articles?

Whatever was "stolen" there's no doubt that business and commercial
security is now severely compromised. Get your business off the net now...

Peter
-- 

Microsoft:   This company has performed an illegal operation
                      and will be shut down.

------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy,comp.os.ms-windows.advocacy
Subject: Re: Pros and Cons of MS Windows Dominated World?
Date: Sat, 28 Oct 2000 13:56:33 -0400
Reply-To: [EMAIL PROTECTED]

Said Weevil in comp.os.linux.advocacy; 
   [...]
>Anyway, this game hinges on the fact that neither player knows what the
>other will do.  In the real world, this may or may not be the case, and it's
>certainly not always the case with monopolies.  In any case, I'm not sure
>game theory in general can be accurately applied to situations as complex as
>economic markets.  Far too many variables.

I've started to toy with an idea along these lines.  It revolves around
political polling, though, not economic markets.  I call it "Citizen's
Dilemma", and I suspect it may explain the recent "activity" or lack
thereof in the Presidential elections.

For several months now, the race has been reported as "neck and neck".
The poll figures are generally both candidates having mid-40s in polls
identified to include "likely voters".  It wouldn't surprise me,
however, if this is a complete delusion, caused by the effects of
Citizen's Dilemma game theory (or game hypothesis, I should say).

When a citizen is asked whom they prefer, they have several iterative
choices.  First, they can disqualify their input for poll numbers by
saying they are unlikely to vote or are undecided.  Then, if they have
stated they will vote, they are provided the option of naming an
individual candidate.  It must be said that, apart from the desires of
the pollsters to have their survey numbers considered accurate, there is
no abstract reason for the citizen to tell the truth, nor to lie.  BUT,
if they are aware of their options (consider that they are under no
particular obligation to *not* lie), there is some small chance that
they will say they'll vote for the other guy, simply to throw off the
tracking.  Likewise, they could say they will vote for their chosen
candidate, even though they are actually unlikely to vote.  Finally,
they may say something which is effectively arbitrary, because whether
true or false it does not reflect what their true vote will actually be.

Add to this the fact that the numbers provided by the polls themselves
guide the opinions of the citizen in their choice of what to say.  They
may want to back "the underdog" in the poll, regardless of their
prospective true voting behavior, or they may say they are more
convinced of their vote simply because the poll numbers don't give them
any reason to want to modify the status quo of opinions with their own
statistical contribution to the way that status quo is reflected.

In the end, I think the more "dead heat" the polling numbers are, the
more non-indicative they are.  On top of all this 'second guessing'
prompted by an inherent grasp of game theory amongst the players of
Citizen's Dilemma, those taking the polls encourage people to pay more
attention to the polls if the election is considered to be "up for
grabs".

A related issue, something of an extension of Citizen's Dilemma, is the
old "third party 'vote cutting'" which has been expressed.  Again, in an
effort to raise sufficient alarm to a) make their reporting seem
important and valuable, and b) contribute, unknowingly, to the
'Citizen's Dilemma' effect, the major media outlets have expressed the
requisite "concern" that Nader will steal votes from Gore, and
secure the election for Bush, just as some say Perot did years ago.
This seems to be based on an inherently self-conflicted fallacy.  The
more voters for Gore who become concerned by this possibility, the more
people will vote for both Gore and Nader.

I know it all sounds like meaningless jabbering, and it mostly is up to
this point.  As I've said, I've only begun playing with the idea.  But
it seems to me that Citizen's Dilemma is dependent on studied ignorance
of the number (the quite overwhelming number, in fact) of voters who
plan not to vote, say they will not vote or are undecided, or end up not
voting or not voting for either Bush or Gore.  When the "margin of
error" includes the "depths of ignorance", leading a 12% preference (a
number pulled from my ass, I assure you) to be reported as "45% of
likely voters prefer Bush", the game of Citizen's Dilemma makes it even
more likely that any individual polled will decide to choose an
arbitrary option, rather than simply stating their opinion (presuming
they actually have one).
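
The arithmetic is worth writing out, using that confessedly made-up 12%
figure:

    # How a 12% raw preference becomes "45% of likely voters": the
    # pollster divides by the self-declared likely-voter pool rather
    # than by everyone asked.  All figures here are placeholders.
    adults_polled = 1000
    prefer_bush   = 120      # 12% of everyone asked
    likely_voters = 267      # the pool the headline number is computed over

    print(f"headline: {prefer_bush / likely_voters:.0%} of likely voters")
    print(f"straight: {prefer_bush / adults_polled:.0%} of everyone polled")

Same data, two very different impressions.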

I think the results of the election may provide, in the end, some small
support for the possibility that Citizen's Dilemma is a real effect.  My
personal conviction is that, no matter how close the voting itself may
even be (and now, we see the ultimate result of exit polling, as the
east coast decisions will at least marginally influence the west coast
returns), Bush has no chance of winning at all.  So if Bush wins,
obviously my thinking is random, even if some of the ideas I've
presented might be valid.  But if, as I expect, Gore wins, and
especially if he wins by a "surprising margin" in comparison to the
popular wisdom being reported, I think it may well be worth
investigating this further.  If nothing else, it should encourage poll
reporting to stop disguising its ignorance of the majority's preference,
and its extrapolation from a small sample to a larger, but still
decisively minority, one.  Hopefully, in future elections, the
numbers will reflect reality, which is that any one voter *may* vote,
and so the number who will "likely" vote for one candidate based on
their currently expressed preference will be reported as a 'straight
percentage'.  Wouldn't everyone think and act a little differently if we
knew that it is, in fact, only 12-15% of the populace who strongly
prefers either candidate, instead of the "snow job" forty percent
numbers we're being supplied today?

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
