Linux-Advocacy Digest #66, Volume #30 Sun, 5 Nov 00 18:13:03 EST
Contents:
Re: Ms employees begging for food (T. Max Devlin)
Re: Definition of WIndows 95: (spicerun)
Re: Windoze 2000 - just as shitty as ever (Giuliano Colla)
Re: Should I use GNOME/KDE or Motif? (zhou)
----------------------------------------------------------------------------
From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Sun, 05 Nov 2000 18:03:58 -0500
Reply-To: [EMAIL PROTECTED]
Said Joe Doupnik in comp.os.linux.advocacy;
>>> In a word, no.
>>> This raving is costing lots of bandwidth. May I humbly suggest running
>>>those experiments to see first hand the true state of affairs. Try several
>>>simultaneous FTP streams amongst a collection of stations in the same
>>>collision domain. The wire is fully occupied, the throughput is rather evenly
>>>divided amongst transmitters, the aggregate throughput is about 90% of total
>>>wire capacity. Borrow a hub and please give it a try; no model is involved.
>>
>> I don't understand what this has to do with the quoted text. Did I say
>> an Ethernet could not "go" over 30% utilization?
>
> I merely included enough to pinpoint the message, not include all
>the verbiage.
Then I'm afraid I must complain that you did not accomplish your
objective, as it did not "pinpoint the message" enough for me to
understand what that had to do with the quoted text.
>>> While the test runs, flip through the Boggs et al. paper on Myths
>>>and Reality concerning Ethernet, location cited previously and a copy is
>>>also on netlab2.usu.edu in directory misc as file ethercap.zip, same file
>>>in pub/mirror/misc on netlab1.usu.edu.
>>
>> Well, thank you for the link. I'll have to read it to find out where
>> the confusion is coming from. Consider while I do that, as it will be a
>> couple days before I will be back, this question:
>>
> The paper is well worth reading. It debunks a lot of muddled
>thinking and urban legend.
When I finally got around to checking it, I found that it was a text I'd
originally read at least four years ago.
>> How much "bandwidth" does each ftp session have when the aggregate
>> throughput is 90%?
>
> They share the link, rather fairly as I mentioned above.
I am not interested in "rather fairly", I'm afraid.
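To put numbers on that question, here is a minimal sketch in Python;
the 10 Mbit/s wire rate and the stream counts are illustrative
assumptions, not measurements:

    # Even sharing of a classic 10 Mbit/s Ethernet running at 90%
    # aggregate throughput: each of N streams gets 0.9 * 10 / N Mbit/s.
    WIRE_MBPS = 10.0
    AGGREGATE = 0.90   # the ~90% figure quoted above

    def per_stream_mbps(n_streams):
        return WIRE_MBPS * AGGREGATE / n_streams

    for n in (2, 4, 8):
        print(n, "streams ->", per_stream_mbps(n), "Mbit/s each")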
>Please do run the tests.
No. Tests are tests, not real life. I am only concerned with how
things work in real life, not tests. On this point in particular, since
I have a way of identifying and describing why tests of a physical
transmission technology are not deterministic predictors of operational
network performance, I'm not even going to briefly consider spending
time on such tests.
>As a favorite book says "The data shall set
>you free" meaning there are facts available rather than speculation
>and opinions.
Welcome to the post-post-modern world, where I'm afraid we must confront
the fact that distinguishing facts from speculation or opinions is
something of an academic exercise. I can understand why someone would
suppose that I was not more-or-less familiar with these facts, since, as
I mentioned above, I have already learned not to bother addressing them
for the most part. But the data did set me free, and now I'm trying to
set you free. Stop cracking me in the head while I'm trying to take
your blindfold off. Once you can see where you are, I'm sure you'll
understand the practicality of my network model, whether you agree with
its accuracy or consistency or not.
> Once you get the machines set up you can also try timing moderate
>length files which are sent while much longer ones are on the wire to
>other stations. That's a "sensitivity" test about sharing. Length of
>time to do the tests is a lot less than pounding out yet more NEWS
>messages on the matter.
I'm afraid the entire reason this conversation led to where it has is
because I simply don't have time for going over such trivial ground;
the gains or losses due to Ethernet's transmission scheme, whatever
understanding anyone might have of them, are most efficiently dealt
with by ignoring them, apart from taking care to provision per-demand
loading for 10% of the channel's bit rate.
Here's another, somewhat similar issue. This is a real story, though
admittedly just a personal experience.
A large federal mortgage company used an FDDI-based campus backbone as a
MAN/WAN hub. So they had this huge counter-rotating 100 Megabit per
second token-passing LAN, and they hooked their routers and servers up
to it directly, with routers/brouters feeding to hubs for desktop
Ethernet. The backbone was originally built in the mid-90s; in 1998 I
got a call to come take a look at their management metrics. The
"performance reporting"
package they'd implemented was giving entirely different utilization
statistics than their old "network management" system, which was
probe-based. One faction within the organization recognized the
heightened value of the statistical reporting methods of the new
package, and was trying to promote it within the company.
Unfortunately, this stepped on some people's toes, since the manual
statistical analysis methods they had been using before were familiar
and considered correct by the staff who supplied them. So here we had
an organization with two different groups, each trying to show that the
other group's utilization statistics were "wrong." I was asked to
figure out which set of numbers was "right."
Of course, even after I'd analyzed the situation, and explained it to
several people, nobody at all wanted to hear my results: not the
customer trying to promote the performance reporting solution, not the
network infrastructure people collecting the other set of statistics,
not the "capacity planning" people who used those numbers in their
manual analysis, and least of all the various consultants and vendors
who had supplied either system. You would think that, considering what
those results were, the vendor of the performance reporting system
would be leaping to talk about it with anyone they could, but the fact
is, the issue was so arcane, so technical, and not just
counter-intuitive but downright scary in its ramifications, even for
them, that nobody seemed to think it was a big deal. Or maybe, despite
my efforts, they just never understood what the issue was.
Here are the details. The probes were reporting backbone utilization
figures of ~45% all night, every night, as every one of their servers,
with a tremendous amount of critical data, was backed up every night.
The number of servers and the volume of data had grown over the
previous couple of years; I'm not sure what the average utilization
was initially, but it was consistently ~45% for as long as any of the
existing staff had been looking. This load generally petered out
sometime between 6:00 AM and 8:30 AM. About
the same time, the daily office demand was starting. This was very
bursty, averaging ~10% all day, but spiking at various times to 70%+,
more or less randomly.
The performance reporting system, however, was showing a somewhat
different picture. While the general pattern was the same, solid and
continuous loading all night and extremely bursty utilization all day,
the specific numbers were confusingly different. Of course, the bursts
during "interactive" use would only coincidentally or statistically
match up, given the different instrumentation that the systems were
using to get the raw numbers, and the difference in polling rates and
sample times. But that was trivial; the real problem that everyone had
was that the performance reporting system stated unequivocally that the
FDDI backbone was at 90% utilization all night, stretching into the
morning hours, consistently. How could one product say the ring was at
saturation, and the other indicate that only half the available
bandwidth was being used? It was my job to find out.
A bit of investigation into the MIB parameters being polled for the
utilization statistics showed something a bit odd with the way the
performance reporting package was calculating its numbers. It took me
two days to track down specifically what the problem was. I won't bore
anyone with the details, except to say that the performance reporting
package was able to take advantage of the fact that the hub vendor,
whose management module we were getting numbers from, was the one vendor
who did things the *right* way, in providing utilization statistics for
an FDDI ring. The probes, and every other management system that I know
of, even the hallowed Network General Sniffer, used the standard
mechanism for reporting utilization. You take the bit rate of the link,
divide it by eight, and compare it in a ratio to the actual number of
bytes received and transmitted as reported by standard MIB-II interface
statistics. This gives you the percentage of utilization, as expressed
in how many of the 100 million bits per second that an FDDI ring can
theoretically carry were actually transmitted in any specific sample
period of time. You average these out, and you get the "N% utilization"
that is generally reported for any and every transmission channel,
regardless of technology or application.
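To put that arithmetic in one place, here is a minimal sketch in Python
of the standard calculation, assuming two polls of the MIB-II octet
counters; all of the specific numbers are illustrative:

    # Standard "% utilization" from MIB-II interface statistics, as
    # described above. Wrap-around of the 32-bit octet counters is
    # ignored here for brevity.
    LINK_BIT_RATE = 100_000_000          # FDDI: 100 Mbit/s
    LINK_BYTE_RATE = LINK_BIT_RATE / 8   # 12.5 Mbyte/s

    def utilization_pct(octets_start, octets_end, sample_seconds):
        """Percent of theoretical byte capacity moved in the sample,
        using the sum of the ifInOctets and ifOutOctets deltas."""
        moved = octets_end - octets_start
        capacity = LINK_BYTE_RATE * sample_seconds
        return 100.0 * moved / capacity

    # e.g. 937,500,000 octets in a 150-second sample -> 50.0
    print(utilization_pct(0, 937_500_000, 150))

Average such samples over the night and you get figures like the ~45%
the probes were reporting.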
Except in this one case. The vendor who built the FDDI management
module for the hub had tried to support the "high end" for FDDI, and had
gone to the (considerable) expense of doing things the way the FDDI
specifications say you should do them. FDDI was designed (whether it's
ever been used this way or not, and it generally has not) to carry both
'synchronous' and 'asynchronous' traffic. In today's
terms, that would be "voice and data". In order to support a token
passing mechanism that could supply a "streaming" channel as well as a
"packet" transmission system, FDDI uses a "target token rotation time"
mechanism, rather than the "token holding timer" which 802.5 uses.
To cut through the technical detail, the problem reduces to this: FDDI
rings (and all LANs, in fact) do not transmit bits. They do not
transmit bytes. They transmit *frames*. And the way an FDDI ring
works, if a station gets the token but there's no time left on the
token rotation timer, it is not allowed to do anything but pass the
token. It was designed this way in case there is synchronous data that
needs low transmission latency further down the ring. Now, we might
second-guess the people who built FDDI, or FDDI equipment, for not
finding a way around the results of this limitation. But the fact of
the matter is, that's wishful thinking, not best practices. So we're
left with trying to understand the numbers, rather than insisting they
should be different than they are.
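As a much-simplified sketch of that rule, in Python (this is nothing
like the full FDDI timer state machine, and the TTRT value is purely
illustrative):

    # The late-token rule described above: a station that sees the
    # token come back only after the target token rotation time (TTRT)
    # has elapsed may not send its queued asynchronous frames; it must
    # simply pass the token along.
    TTRT = 0.008  # target token rotation time, seconds (illustrative)

    def may_transmit_async(seconds_since_token_last_seen):
        """True only if the token returned with time still left on
        the rotation timer."""
        return seconds_since_token_last_seen < TTRT

    print(may_transmit_async(0.005))  # token early -> True, may send
    print(may_transmit_async(0.009))  # token late  -> False, pass token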
The standard utilization statistic, calculated the way all other such
numbers are, presumed that because, on average, only 50 megabits per
second of throughput occurred on the channel, the LAN was at 50%
utilization.
When we look at the way the system actually works, which we just
happened to luck out and be able to do because, almost by accident, the
hub vendor and the management software vendor had both done things "the
right way", it became obvious that this was a delusion. What made this
seem rather poignant to me, but actually made the results unpopular
enough to be ignored, is that the organization had been using
these statistics for their "capacity planning" for at least four years.
When I explained to the cognizant staff that their numbers were not
"wrong", they just weren't measuring what they thought they were, I was,
shall we say, nonplussed by the response. "Your backbone is actually
saturated, and but for the fact that the hub vendor you used, unique
amongst all FDDI equipment vendors, and the management software
delivered exactly the benefits they promised, you would not even be
aware of it. As you add new servers (they were at the time testing,
laboriously and at great expense, an Internet application, and
generally expected the number of servers they were using to double in
the following five years, at least), the backups will crunch your
network later and later each day. They have already extended to at
least an hour and a half after the start of the business day, and there
is no nighttime on the Internet."
"The ring does not carry bits, so measuring bits only tells you how much
capacity you are using. Since rings actually carry frames, and FDDI has
a mechanism which is designed to prevent transmission of frames if the
target token rotation time is intact, it is only by using the 'frame
latency' method the hub vendor implements that you can use the
utilization figure for what you actually expect it means: how much
capacity you have remaining. While the MIB-II statistics indicate you
have 50% capacity remaining, the frame latency statistics are far more
important: they show you have just about zero capacity remaining, so
long as the backups are running."
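To make that contrast concrete, a sketch in Python under assumed,
purely illustrative numbers; the point is only that the two methods
answer different questions:

    # Byte counters measure capacity used; the token-rotation view
    # measures transmission opportunities left. If the token almost
    # always comes back late, there is ~zero usable capacity remaining
    # no matter what the byte counters say.
    TTRT = 0.008  # seconds, illustrative

    def remaining_by_bytes(byte_utilization_pct):
        return 100.0 - byte_utilization_pct

    def remaining_by_token(rotation_times):
        """Share of token arrivals that still had time left on the
        rotation timer, i.e. chances to actually send async frames."""
        early = sum(1 for t in rotation_times if t < TTRT)
        return 100.0 * early / len(rotation_times)

    print(remaining_by_bytes(50.0))  # MIB-II view: half the ring free
    # During backups the token is nearly always late -> almost nothing:
    observed = [0.0081, 0.0085, 0.0079, 0.0083, 0.0088, 0.0084]
    print(remaining_by_token(observed))  # -> ~16.7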
"It doesn't even matter if you try to monkey with the target token
rotation time. It doesn't matter what you try to set it to (and setting
each individual FDDI transciever/NIC in the entire network would be a
monstrous and possibly damaging exercise), eventually it will run out,
and all the rest of the stations around the ring will be unable to
transmit, even though you've got quite a bit (in fact, the situation is
worst when the frames are small, and there is a very large bit, pun
intended) of "bandwidth" still theoretically available."
The 'capacity planning' guy insisted, of course, that he knew all this,
and that the answer was to configure the applications to use large
packets. Disingenuously, I asked him, "Well, I'm not sure how that is
done, because there are simply too many different applications to deal
with; I can't keep up. What kind of configuration changes do your
standard applications allow?" In fact I knew full well that he was
thinking of setting the packet size in the net.cfg for the IPX
interfaces, and that this would be a waste of time: the application,
the backup software, would still use small datagrams, as they are most
efficient for its purposes, and those would still cause problems on the
FDDI backbone if anyone expected to add more traffic while it was
running at "45%" without running into a great deal of
performance-related problems.
"Once you do start noticing that "the network is slow" for most of the
day, and you've got more trouble getting your replication system to
move data from your Internet data servers to your data warehouse, and
you start looking
to see where the bottleneck is, you're going to skip right over the
backbone, assuming there's plenty of "bandwidth", so that can't be the
reason for the poor throughput."
The lesson I had been trying to teach since I first started working with
the performance reporting software involved was well illustrated by
this experience: you cannot differentiate between lack of load and lack
of demand on a complex network.
It takes me about a day and a half to explain this, and only about 30%
of the people show any signs of understanding it after a week. But it
might save this company a couple million dollars, at least, if they
should happen to remember it when somebody tells them they have to rip
out all of their equipment and replace it, simply because they don't
understand how their technology actually works. There are a lot of
different ways they could approach the 'problem' of "high utilization",
but none of them have much chance of doing anything beneficial if they
don't even know they have high utilization, because the "bandwidth"
statistic doesn't mean what they think it means.
--
T. Max Devlin
*** The best way to convince another is
to state your case moderately and
accurately. - Benjamin Franklin ***
------------------------------
From: spicerun <[EMAIL PROTECTED]>
Subject: Re: Definition of WIndows 95:
Date: Sun, 05 Nov 2000 23:02:03 GMT
Chris Applegate wrote:
> That was funny five years ago.
>
> CDA
Which is the current age of the innovations in MS products.
------------------------------
From: Giuliano Colla <[EMAIL PROTECTED]>
Crossposted-To: alt.destroy.microsoft,comp.os.ms-windows.advocacy,alt.linux.sucks
Subject: Re: Windoze 2000 - just as shitty as ever
Date: Sun, 05 Nov 2000 23:04:24 GMT
Ayende Rahien wrote:
>
> "T. Max Devlin" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Said Chris Ahlstrom in alt.destroy.microsoft;
> > [...]
> > >I haven't seen wsh, but I'd guess up front that it's a half-assed
> > >implementation, unless a third-party wrote it.
> >
> > I suspect you read "WSH" as "Win shell", rather than "Windows Scripting
> > Host". WSH is that oh-so-convenient service in Windows which runs
> > scripts for you from, say, email attachments. This, along with the
> > access to the operating system which VB gives you (and anyone else), as
> > you mentioned, is what makes it possible to so easily say ILOVEYOU to
> > all your friends (and everyone else in your address book) and delete
> > files randomly from your hard drive at the same time.
> >
> > How convenient.
> >
> > I'd prefer batch files. ;-\
>
> Batch files?
>
> echo format c: /q /y >> c:\autoexec.bat
>
> Guess what happens when you reboot?
>
> Not very secure.
You've just shown how crappy the "security model" of Windows 9x is.
However, your example won't work in Win NT 4. Would it work with Win
2000?
------------------------------
From: zhou <[EMAIL PROTECTED]>
Crossposted-To: linux.redhat,linux.redhat.misc
Subject: Re: Should I use GNOME/KDE or Motif?
Date: Mon, 06 Nov 2000 12:08:01 +1300
Raul Sainz wrote:
> Jeff Jeffries <[EMAIL PROTECTED]> wrote in the news message
> [EMAIL PROTECTED]
> > I need to develop an app for analyzing data and graphically (3D)
>> displaying results, preferably with C++, X Windows, and OpenGL.
>
> KDE and QT are your choice.
Motif is not as convenient as QT/KDE, and may not be as powerful either.
>
>
> > I need to be able to move my development efforts, and the app itself, to
>> as many platforms as possible, as I'll be moving and don't know what kind of
> > platform will be at my next locale.
>
> I think any Unix will run fine, as well as Windows. I do not know
> anything about any other platform.
>
> > Second priority is speed, as it will be extremely computational- and
>> disk-I/O-intensive.
>
> Computational tasks are independent of the user interface, so I do not
> think their speed depends on which one you use.
>
> > Should I use Motif, or the newer GNOME (or KDE)?
>
> I would use QT and KDE since you want Unix, X11 and C++, and
> given that those are good APIs. Motif is really dead.
Motif may possibly come back because of open-motif and lesstif.
When you develop commercial packages, Motif may even possibly be the
better choice.
QT/KDE will ask you for some money when included in commercial packages.
But open-motif or lesstif possibly will not, and may possibly become
really free in the future, as GNU currently is!
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.advocacy) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Advocacy Digest
******************************