Linux-Advocacy Digest #996, Volume #29            Thu, 2 Nov 00 00:13:05 EST

Contents:
  Re: 2.4 Kernel Delays. ("Aaron R. Kulkis")
  Re: 2.4 Kernel Delays. ("Aaron R. Kulkis")
  Re: 2.4 Kernel Delays. ("Aaron R. Kulkis")
  Re: Ms employees begging for food (T. Max Devlin)
  Re: Ms employees begging for food (Patrick Farrell)

----------------------------------------------------------------------------

From: "Aaron R. Kulkis" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy
Subject: Re: 2.4 Kernel Delays.
Date: Wed, 01 Nov 2000 23:59:24 -0500

Erik Funkenbusch wrote:
> 
> "Aaron R. Kulkis" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > > microsoft.com alone transfers 6 Terrabytes of data day.  Executes
> millions
> > > of queries, and creates dynamic content by the 100's of millions.  I'm
> > > sorry, but a mainframe can't deal with that kind of data throughput on a
> > > single machine.
> > >
> > > Even if it could, you're putting all your eggs in one basket.  What
> happens
> > > if you have a power outage,
> >
> > The same way financial businesses do:  a HUGE lead-acid battery (often
> > consisting of dozens of car batteries) as battery backup AND a big old
> > diesel-engine-powered generator activated by the battery backup.
> 
> Which will run a data center for at most, a few hours.

What part of diesel-engine-powered generator do you not understand?

Hey, shit-for-brains....the fuel tank CAN be topped off without shutting
the thing down.


>  What happens if an
> earthquake hits, or a plane crashes into the data center, or any number of
> other natural catastrophies that might occur to a single site.

Who ever said the redundant system is on the same site?

You really are a dipshit, you know that?



> 
> > > a failed router,
> >
> > simple redundancy.
> 
> Not so simple, especially if it's the main trunk that dies.  A few years
> back, some bums were lighting fires under a bridge to keep warm and cut off
> the entire internet access to the state of MN due to heat melting the
> fiber-optic lines.
> 
> > > or any number of other failures
> > > that can cause a localized failure.
> >
> > simple redundancy
> 
> Simple redundancy doesn't solve the problem of a localized failure (plane
> crash, earthquake, etc..).
> 
> > >  Even if you have redundancy in the
> > > mainframe, it might not be enough.  If you need 24x7, you need
> distributed
> > > servers with load balancing run across multiple data centers.
> >
> > simple redundancy
> 
> What exactly are you answering here?

Machine A, meet your twin, Machine B.

Complicated, isn't it?

> 
> > > On top of that, Mainframes don't run as a single fast server.  An IBM
> > > mainframe typically runs as something like 500 seperate servers
> virtually.
> > > That means no single subsystem can gain anywhere near the kind of
> > > performance the full machine is capable of.
> >
> > But much of the load is off-loaded onto smart disk-cabinets like EMC.
> 
> Oh yes, the disk cabinets are capable of executing a SQL query.  Right.

Actually, they are quite intelligent.

They understand the database formats and can do data transfers
between databases.  They understand multiple filesystem formats.
If you have an IBM mainframe, and a Sun, and an HP, and a Losedows
NT box all sharing parts of the same EMC box...the EMC box itself
can do file transfers between the various disks.


> 
> > > A single IBM mainframe costs millions of dollars.  That's not counting
> the
> > > proprietary maintenance agreements you need to sign, and the cost of the
> > > periphials.  You have to add disk arrays, network arrays, etc..  And
> they
> > > cost a pretty penny.  Microsofts tpc cost is much much lower than a
> typical
> > > IBM mainframe, which tells me that mainframes aren't as cheap as you
> claim.
> >
> > You're fucking on drugs, you know?
> >
> > It takes a couple THOUSAND NT-machines to imitate an IBM mainframe.
> >
> > Not to mention the administrative headings of that fucking mess.
> 
> Not if they're 32 processor boxes.  It would only take 15 or so to equal the
> power of that mainframe.


-- 
Aaron R. Kulkis
Unix Systems Engineer
ICQ # 3056642

http://directedfire.com/greatgungiveaway/directedfire.referrer.fcgi?2632


H: "Having found not one single carbon monoxide leak on the entire
    premises, it is my belief, and Willard concurs, that the reason
    you folks feel listless and disoriented is simply because
    you are lazy, stupid people"

I: Loren Petrich's 2-week stubborn refusal to respond to the
   challenge to describe even one philosophical difference
   between himself and the communists demonstrates that, in fact,
   Loren Petrich is a COMMUNIST ***hole

J: Other knee_jerk reactionaries: billh, david casey, redc1c4,
   The retarded sisters: Raunchy (rauni) and Anencephielle (Enielle),
   also known as old hags who've hit the wall....

A:  The wise man is mocked by fools.

B: Jet Silverman plays the fool and spews out nonsense as a
   method of sidetracking discussions which are headed in a
   direction that she doesn't like.
 
C: Jet Silverman claims to have killfiled me.

D: Jet Silverman now follows me from newgroup to newsgroup
   ...despite (C) above.

E: Jet is not worthy of the time to compose a response until
   her behavior improves.

F: Unit_4's "Kook hunt" reminds me of "Jimmy Baker's" harangues against
   adultery while concurrently committing adultery with Tammy Hahn.

G:  Knackos...you're a retard.

------------------------------

From: "Aaron R. Kulkis" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy
Subject: Re: 2.4 Kernel Delays.
Date: Thu, 02 Nov 2000 00:00:28 -0500

Ayende Rahien wrote:
> 
> "Colin R. Day" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Erik Funkenbusch wrote:
> >
> >
> > >
> > > Which will run a data center for at most, a few hours.  What happens if
> an
> > > earthquake hits, or a plane crashes into the data center, or any number
> of
> > > other natural catastrophies that might occur to a single site.
> > >
> >
> > Great. And what happens if a nuclear missile hits Redmond?
> > Linux, on the other hand, you'd have to nuke half the planet.
> 
> I don't think so, all you need to do is to kill one man, Linus, and then the
> linux compunity is going to be:
> A> In shock
> B> Un-unified.
> 
> Very soon there will be no official kernel, no one with the autority to
> release it, Linux will split up to various groups which will be totally
> incompatible. Reasonable people will move to BSD.

A new leader will be appointed, and unification will resume.


> 
> (Yes, that is the worst case scenario, but a lot of people has already
> expressed worry about Linus being the center on Linux.)


-- 
Aaron R. Kulkis
Unix Systems Engineer
ICQ # 3056642

http://directedfire.com/greatgungiveaway/directedfire.referrer.fcgi?2632



------------------------------

From: "Aaron R. Kulkis" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.os.ms-windows.advocacy
Subject: Re: 2.4 Kernel Delays.
Date: Thu, 02 Nov 2000 00:01:17 -0500

Bruce Schuck wrote:
> 
> "Aaron R. Kulkis" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Erik Funkenbusch wrote:
> > >
> > > "Shannon Hendrix" <[EMAIL PROTECTED]> wrote in message
> > > news:8tobps$cq$[EMAIL PROTECTED]...
> > > > In article <iAbL5.5023$[EMAIL PROTECTED]>,
> > > > Erik Funkenbusch <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > How many multi-server load balanced Linux sites can you come up
> with?
> > > > > Google is a good one, but it's a rarity.
> > > >
> > > > The main reason you don't see as many clustered UNIX sites is that a
> > > > single machine can do the work of many NT machines.
> > >
> > > No, it's that web sites are not just serving up static pages anymore.
> > >
> > > > Microsoft's own site is a virtual masterpiece example of throwing
> > > > resources at a problem instead of using your head.  A single IBM
> > > > mainframe could handle that load, and be cheaper.  It would, of
> > > > course, be running UNIX.  Or Linux even.
> > >
> > > microsoft.com alone transfers 6 Terrabytes of data day.  Executes
> millions
> > > of queries, and creates dynamic content by the 100's of millions.  I'm
> > > sorry, but a mainframe can't deal with that kind of data throughput on a
> > > single machine.
> > >
> > > Even if it could, you're putting all your eggs in one basket.  What
> happens
> > > if you have a power outage,
> >
> > The same way financial businesses do:  a HUGE lead-acid battery (often
> > consisting of dozens of car batteries) as battery backup AND a big old
> > diesel-engine-powered generator activated by the battery backup.
> >
> >
> > > a failed router,
> >
> > simple redundancy.
> >
> > > or any number of other failures
> > > that can cause a localized failure.
> >
> > simple redundancy
> >
> > >  Even if you have redundancy in the
> > > mainframe, it might not be enough.  If you need 24x7, you need
> distributed
> > > servers with load balancing run across multiple data centers.
> >
> > simple redundancy
> >
> > >
> > > On top of that, Mainframes don't run as a single fast server.  An IBM
> > > mainframe typically runs as something like 500 seperate servers
> virtually.
> > > That means no single subsystem can gain anywhere near the kind of
> > > performance the full machine is capable of.
> >
> > But much of the load is off-loaded onto smart disk-cabinets like EMC.
> >
> > >
> > > > Microsoft has probably the largest.  I'm amazed by it really.
> > > >
> > > > In all fairness to Microsoft, this is a less a problem with them than
> > > > it is our current idea of lot's of little machines to do computation.
> > > > It is not always a win.  Sometimes big iron works better, and cheaper.
> > >
> > > A single IBM mainframe costs millions of dollars.  That's not counting
> the
> > > proprietary maintenance agreements you need to sign, and the cost of the
> > > periphials.  You have to add disk arrays, network arrays, etc..  And
> they
> > > cost a pretty penny.  Microsofts tpc cost is much much lower than a
> typical
> > > IBM mainframe, which tells me that mainframes aren't as cheap as you
> claim.
> >
> > You're fucking on drugs, you know?
> >
> > It takes a couple THOUSAND NT-machines to imitate an IBM mainframe.
> 
> The amazingly ignorant unix geek spews bullsh*t again.
> 
> False. Absolutely false.

Prove it.

NT is incredibly slow.

-- 
Aaron R. Kulkis
Unix Systems Engineer
ICQ # 3056642

http://directedfire.com/greatgungiveaway/directedfire.referrer.fcgi?2632



------------------------------

From: T. Max Devlin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Thu, 02 Nov 2000 00:07:13 -0500
Reply-To: [EMAIL PROTECTED]

Said Maynard Handley in comp.os.linux.advocacy; 
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
   [...]
>As someone watching this thread and with no idea what you are trying to
>say, how about we approach this from a different angle? I have no great
>interest in whether T. Max is correct or the rest of the world. Neither do
>I care if Novell was or was not the world's best network software, ort how
>many vendors make 100 base T chipsets. I am, however, interested in a more
>useful discussion.
>
>Suppose we have a a single ethernet not connected to the outside world. 

I thought you said this was going to be a useful discussion?  ;-)

>Suppose we have two computers connected, and all we want is for computer A
>to pump data at computer B as fast as possible using TCP (a single ftp
>connection of a multi gigabyte file). Are you willing to agree that in
>this case ethernet is close to 100% efficient, in the sense that the time
>it takes for the transfer to occur is pretty close to
>T=fileSize/(100Mb/s)? (Modulo some ignorable constant factor for header
>overhead, acks and suchlike).
>(1) 

Shall we quibble?  Ethernet has no acks, and how ignorable the header
overhead is depends on the average packet size.
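To put a number on that quibble, here is a minimal sketch of per-frame
framing cost using the standard 10/100 Mb/s Ethernet figures (8-byte
preamble, 14-byte header, 4-byte FCS, 12-byte inter-frame gap); it
deliberately ignores collisions and higher-layer acks, so treat it as an
illustration of why overhead depends on average packet size, not a
throughput model:

```python
# Per-frame Ethernet overhead as a fraction of wire time, for a few
# payload sizes.  38 bytes of non-payload wire time accompany every
# frame: preamble + header + FCS + inter-frame gap.
OVERHEAD = 8 + 14 + 4 + 12  # bytes per frame that carry no payload

def efficiency(payload):
    # Fraction of wire time spent on actual payload bytes.
    return payload / (payload + OVERHEAD)

for size in (64, 512, 1500):
    print(f"{size:5d}-byte payloads: {efficiency(size):.1%} of wire time is payload")
```

With full-size 1500-byte frames the overhead is a couple of percent;
with small frames it eats a third of the channel, which is the sense in
which it is only "ignorable" for large transfers like the ftp example.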

>Alright, now lets have a situation where we have say 20 of these pairs of
>computers engaged in ftp. In a "perfect" network, each user would now see
>see a transfer time of 20T.
>
>My understanding of what you are trying to say is that you claim the time
>each user will see for the transfer is more like (100/10%)*20 T, ie 10x as
>large
>
>whereas other the (supposedly wrong IBM study) claimed a time like
>(100/30%)*20 T, ie about 3.3x as large
>
>whereas others on this group are saying the real time is actually more
>like (100/70%)*20 T. ie about 1.4x as large.
>(2)

Well, yes and no.

I am not claiming these gedanken results, though I am saying that using
the 10% figure allows you to take the ethernet out into the real world
and still have no problems whatsoever: not just on the LAN itself, but
on the larger network that the LAN contributes to.

IBM's study showed that the "break even point" for ethernet loading was
typically 30%.  So yes, as a gedanken experiment, your premise
concerning their position is accurate.

I don't think anyone else in the group has seriously proposed a "70%
efficiency" figure for ethernet.
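For the record, the multipliers in the thought experiment above follow
mechanically from the quoted efficiency figures; this sketch just makes
the arithmetic explicit (the labels are the positions being argued in
the thread, not measured results):

```python
# 20 simultaneous transfers on one shared segment: ideal time is 20*T,
# and an assumed channel "efficiency" e stretches that to (1/e) * 20*T.
def slowdown(efficiency):
    # Multiplier over the ideal 20*T transfer time.
    return 1.0 / efficiency

for label, e in (("10% target", 0.10), ("30% (IBM study)", 0.30), ("70%", 0.70)):
    print(f"{label}: about {slowdown(e):.1f}x the ideal time")
```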

>So, do (1) and (2) accurately match what you mean? If so, this is surely
>easily testable empirically.
>
>If (1) and (2) do NOT match what you mean, can you give us an example in
>the same style which DOES match what you mean? That means, a concrete
>example with real numbers, something I could go out, set up, and measure,
>and free of any obfuscation about the "third network way" or "this is my
>definition of a word" or any other such which, I believe, T. Max, are
>making you look more and more like a crank with each posting, and doing
>less and less to actually clarify your point.

Well, I do appreciate the feedback, seriously.  But my explanations of
this "complex network model" are not obfuscation, by any means.  Your own
complaints seem to echo the point I've been trying to clarify.  Let me
try to be succinct (for once).

The reason people have problems with networks in the real world is
because when you have such a large collection of component systems all
needing to interconnect to provide services, the isolated and easily
testable empirical verification of how these systems inter-react is not
practical.  In other words, you're *staring* at all the "concrete
examples with real numbers you could go out set up and measure".  Go set
it up.  Then try to figure out if it agrees or diverges from the theory.
Then give me a call, because I don't have time for that.  There are real
networks that have to work, for real businesses.

I am hired by these real businesses to find out how to run their
networks so that they don't have problems, and when they do have
problems, they can find out about that, figure out what they are, and
fix them.  This is an important part of modern business, and it's worth
noting that the "good old campus Ethernet" is a minuscule issue.  The
cost of even *caring* whether it is switched or shared (what I've called
"configuration costs" in one of my fits of conceptual creativity) is not
an issue.  When you're talking about *the network*, you should base
whatever formulation you have of how it works, or why it doesn't, on
planning for, in fact *trying to have*, about 10% utilization on your
ethernet segments.

Now, I don't have a few million dollars from major vendors to build a
lab to provide examples that are both illustrative of these principles
and "real world" enough to be anything but a dog-and-pony show*.  The
real world laboratories I deal with are the networks which make the
networks.  Watch 30 minutes of CNN; you'll see at least six commercials
from my customer list.  I know I sound like a crank, sometimes, and
maybe I am.  But if anything I am a crank who knows much more about
networks, including most of the components, such as Ethernet, than just
about anybody you'll ever have the dubious pleasure of hearing from
directly.  Don't let this playful jabber upset you; I know where these
LAN Admin guys are coming from, but I do know what I'm talking about.

So did IBM.  They didn't write out any "30% rule of thumb."  But they
were entirely correct in recognizing Ethernet's logarithmic response
curve.  It is a natural result of the CSMA/CD mechanism used for channel
access.  Ethernet actually has better performance at low utilizations
than deterministic LANs, where a station spends a lot of time waiting
for a token, and the ring spends time passing the token even when a
station doesn't need it.  But this advantage evaporates quickly, in
terms of the latency increase incurred waiting to access the channel
when it's busy.

With token passing, and frankly with all other transmission systems,
including WANs, the response curve is linear.  When others are
transmitting, you can't (it's shared media), and the busier the channel,
the longer you wait, statistically:

#      |             *
       +           * 
o      |         *
f      |       *
       +     *
s      |   *
t      | *
's     L_______________

         latency

With CSMA/CD, where each transceiver just "jumps in" and tries to
transmit, and if it works it works, and if it collides, they try again,
the response curve is logarithmic:

#      |            *
       +            * 
o      |           *
f      |          *
       +         *
s      |       *
t      |  *
's     L_______________

         latency
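The shape of those two curves can be sketched with a toy model.  The
formulas and parameters below are my own illustrative assumptions, not
measured Ethernet or Token Ring behavior: the ring pays a fixed
token-rotation cost plus a delay linear in load, while treating load as
a per-attempt collision probability makes the CSMA/CD delay blow up near
a knee, which is the logarithmic-looking response the IBM figure points
at:

```python
# Toy channel-access delay vs. offered load (rho), in arbitrary slot
# units.  Illustrative approximations only, not faithful simulations.

def token_ring_delay(rho, stations=20, token_time=1.0, frame_time=10.0):
    # Average wait: half a token rotation (paid even when idle), plus
    # time occupied by other stations' frames, which grows linearly.
    return stations / 2 * token_time + rho * frame_time

def csma_cd_delay(rho, slot_time=1.0):
    # Treat rho as the per-attempt collision probability.  Summing the
    # geometric series of doubling backoffs gives slot/(1 - 2*rho),
    # which diverges as rho approaches 0.5 -- the "knee" in the curve.
    if rho >= 0.5:
        return float("inf")
    return slot_time / (1 - 2 * rho)

for load in (0.05, 0.10, 0.30, 0.49):
    print(f"load {load:.2f}: token {token_ring_delay(load):6.1f}  "
          f"csma/cd {csma_cd_delay(load):6.1f}")
```

At light loads the CSMA/CD station barely waits while the ring station
always pays the token rotation; near the knee the relationship inverts,
which is the trade-off the two graphs above are drawing.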

So when you're talking about whether or not you will have the bandwidth
you need for any one transmission, and you want the latency to be as low
as possible, and you want to ensure the LAN never becomes a bottleneck
which could limit throughput, it makes sense to take advantage of the
performance edge which the "30% rule" illustrates.  LAN bandwidth is
ultra cheap, especially if you use shared media.  Routinely loading
ethernets beyond 10% might seem to make sense to people forced to do it
because that's what they had to work with, but it's no way to run a
modern internetwork.

If I were to see any single Ethernet channel, shared or switched,
running at more than 10% utilization routinely, I'd have to wonder what
seemingly unrelated problems this underprovisioning might cause.  It's
always possible there's a correlation between that and that client in
upstate New York not getting the service they are expecting.  Maybe
unlikely, but there are a lot of clients, and a lot of ethernets; it
makes sense to be a bit cautious.  If nothing else, you know where to
target the next upgrade, or what should be closer to the backbone.

But I must admit, I'm used to looking at much more switched ethernet
than shared.  If a link has more than 5% utilization, it typically has
more than 30% during peak hours.

-- 
T. Max Devlin
  *** The best way to convince another is
          to state your case moderately and
             accurately.   - Benjamin Franklin ***


======USENET VIRUS=======COPY THE URL BELOW TO YOUR SIG==============

Sign the petition and keep Deja's archive alive!

http://www2.PetitionOnline.com/dejanews/petition.html


====== Posted via Newsfeeds.Com, Uncensored Usenet News ======
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
=======  Over 80,000 Newsgroups = 16 Different Servers! ======

------------------------------

From: Patrick Farrell <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy,comp.arch,comp.os.netware.misc
Subject: Re: Ms employees begging for food
Date: Wed, 01 Nov 2000 22:57:18 -0600
Reply-To: [EMAIL PROTECTED]

They are common all over.  The argument that switches outnumber hubs is
absurd.  Large companies do not run switched ports to every endpoint;
most of them run 100 Mb hubs into switches.  Most individual users don't
need a 100 Mb pipe.  In my case I have a 12-port 100 Mb switch with 10
100 Mb hubs hooked to it, and 2 servers going directly to the switch.
This is by far a more common setup than all switched ports.

Patrick


Clayton O'Neill wrote:
> 
> On Tue, 31 Oct 2000 14:53:34 -0500, T. Max Devlin <[EMAIL PROTECTED]> wrote:
> |I realize that is what you meant to imply.  I do not believe your
> |contention alone is enough to make the case, however.  I'm not saying
> |you're wrong; you are simply mistaken.  There were hardly any shared
> |media 100BaseT hubs ever marketed, nor does your claim that you know of
> |three refute my number of five, nor, I must admit, would I take your
> |word for it that you are correctly identifying either 100BaseT or a hub.
> |I've known intelligent professionals who've been working with a system
> |for years and years to mis-identify either one, and much more serious
> |lapses in terminology.
> |
> |Perhaps you simply got confused and thought by "boxes" I meant
> |individual units, as opposed to a particular model of a device put into
> |production.  When I said "five different boxes", I meant five different
> |devices available from various vendors, not five metal enclosures with
> |electronics inside.
> 
> A search on www.necxdirect.com turned up about 100 different models of
> 10/100 autosensing "hubs" from about a dozen different manufacturers.  Check
> any online site that offers networking equipment or your local computer
> store.  Not only are 100baseT hubs made, they're common and popular with the
> SOHO market.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
