On Tue, 17 Nov 1998, Brendan Miller wrote:

> > I don't think that this ends up being a problem with cards that generate
> > few interrupts, and video cards should be in that category, but I'm not
> > sure.  Mixing ISA ethernet or ISA SCSI with PCI ethernet or SCSI is
> > probably a bad idea, though, in a server.  And it may be that having ANY
> > ISA cards causes conservative assumptions to be automatically made that
> > effectively increase latencies.
> 
> Does this mean that pulling mp3s off the network (PCI net card) to play on 
> my oldish (ISA) SB16 could be slowing my machine down?  Do I understand
> correctly that there is a performance hit, but not a reliability hit--that
> the machine automatically corrects to maintain the slow ISA bus?
> 
> In terms of quantity of interrupts, here is my /proc/interrupts (uptime
> reports 9+ days):
> 
> [root@xenon]# cat /proc/interrupts 
>            CPU0       CPU1       
>   0:   39136593   39090719    IO-APIC-edge  timer
>   1:      71515      71586    IO-APIC-edge  keyboard
>   2:          0          0          XT-PIC  cascade
>   5:     341235     348383    IO-APIC-edge  soundblaster
>   8:          0          0    IO-APIC-edge  rtc
>  12:     816610     814963    IO-APIC-edge  PS/2 Mouse
>  13:          1          0          XT-PIC  fpu
>  16:    1989136    1989007   IO-APIC-level  3c905 Boomerang 100baseTx
>  17:     445756     445591   IO-APIC-level  BusLogic BT-958
> NMI:          0
> IPI:          0
> 
> Almost as many soundblaster interrupts as local disk controller interrupts!!
> But over one-sixth as many soundblaster interrupts as network interrupts!!
> 
> Hmmm...  This would be an interesting analysis to do on several systems
> with several card combinations...  But hey, why did you think I bought
> my second CPU?  To play MP3s!!! :)
> 

OK, before everybody panics about this, we need a reality check (before
Alan or somebody who really knows what they're talking about bops me a
good one:-).  First, yes, the system automatically slows things down to
accommodate the pace of whatever bus is being used, although I believe
that it is possible to override some of this in at least some BIOSes
(and basically destabilize your system).  This kind of thing "can" slow
things down considerably and/or destabilize certain subsystems -- the 3c905
used to have a hideous time with busmastering and would only work if its
per-interrupt latency was set to some incredibly large value.

Second, I believe that the issue regarding performance comes down to who
is generating the interrupts.  In the case of a video controller, all
the activity is controlled by the kernel.  Video controllers don't
generate a lot of interrupts that have to be handled by the system, so
latency is relatively unimportant -- the kernel schedules video updates
and writes to suit itself and the worst penalty one might encounter is
the 2.0.x blocking of other pending interrupts while this occurs.  Since
the system doesn't typically drop network packets or the like even while
using a video card intensively (e.g. -- playing doom or just switching
virtual screens), I can only presume that this is handled flawlessly and
efficiently in nearly all cases, although we do see a few rare postings
about a few rare system configurations that crash when somebody is using
video and the network hard at the same time.

A sound card is also in this category; you grab the mp3 files from the
network and play them, but the player controls the timing of the device.
Again, the worst that might happen is that in 2.0.x with a single kernel
lock, the SB driver might hold the lock and delay the processing of e.g.
network packets.  However, with upper-half and lower-half handlers, I'll
bet that even with the slowness of the ISA bus affecting the UHH a bit,
UHH latency on high-interrupt-density devices is rarely affected.
Remember, there aren't THAT many interrupts required to use your SB -
you're only accumulating around 5x as many SB interrupts as you are
typing keystrokes, and I doubt that you're too worried about the effects
of typing on your system responsiveness.  To put it another way, in a
given second your system might handle 0-45,000 network interrupts.  Your
SB might (if it were playing a song just as this burst of net traffic
came in) have to handle 100 or so.  Even if they were ten times more
expensive to upper-half handle than network interrupts (which I doubt),
you might delay as many as 1000 network packets by a bit, but the
network driver would >>catch up<< as soon as it got the lock.  So
performance would probably not be measurably affected, let alone
noticeably.
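The back-of-envelope reasoning above can be checked against the
/proc/interrupts dump quoted at the top.  A sketch (the 777,600-second
uptime is an assumption from "uptime reports 9+ days", and the 4 KB DMA
fragment size for the SB is purely hypothetical):

```python
# Rough per-second interrupt rates from the quoted /proc/interrupts dump,
# summing both CPU columns, assuming ~9 days of uptime as the poster reports.
uptime_s = 9 * 24 * 3600  # ~777,600 seconds (assumed)

counts = {
    "timer":        39136593 + 39090719,
    "keyboard":     71515 + 71586,
    "soundblaster": 341235 + 348383,
    "3c905":        1989136 + 1989007,
    "BusLogic":     445756 + 445591,
}

for name, total in counts.items():
    print(f"{name:12s} {total / uptime_s:8.2f} int/s average")

# Sanity check: the timer works out to ~100 int/s, the standard i386 HZ,
# so the 9-day uptime guess is about right.  The SB averages under 1 int/s
# over the whole uptime, and its total is indeed about 5x the keyboard's.
# While actually playing 16-bit 44.1 kHz stereo with (hypothetically)
# 4 KB DMA fragments, it would run at roughly 176400/4096 ~ 43 int/s --
# still tiny next to a busy 100 Mbps NIC.
playing_rate = 44100 * 2 * 2 / 4096
print(f"SB while playing: ~{playing_rate:.0f} int/s")
```

The numbers bear out the argument: even in the worst case the SB's
interrupt stream is two to three orders of magnitude below the network's.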

The situation is different for ISA network cards mixed with PCI network
cards.  First of all, they share a kernel spinlock, I would guess in
both 2.0.x (where everything shares one spinlock) and 2.1.x (right,
2.1.x persons?).  Second, they generate "a lot" of asynchronous
interrupts (controlled by the timing of the sender, not the receiver)
that cannot be conveniently scheduled by the kernel -- it has to stop
what it is doing and ensure that the packets are not lost for each and
every one (although it tries to do lots of packets in a single pause, of
course).

Here is a case where the large latency on the ISA bus might well slow
down the competing PCI devices.  While the kernel is locked to handle
incoming ISA packets, packets arriving on the PCI bus cannot be serviced,
I think (again, please correct me, real experts).  The converse is also
true.  I believe that busmastering (one of the tricks that permits the
efficient handling of high density interrupt streams on the PCI bus) is
negatively affected by the presence of the ISA devices because the bus
masters have to make immensely conservative assumptions about the
latency of the attached ISA devices.

All of this could be incorrect analysis, of course.  It is based on what
I've read in the aforementioned ATM-PCI article, what I've read on this
list and other lists in the past, what I've seen in reading some of the
kernel docs, but it isn't based on actually reading the kernel source.
I did once have both ISA ethernet and PCI ethernet devices in the same
system and did some performance tests but they were inconclusive --
there was certainly a performance hit on the PCI channel, but SOME of
that is to be expected just from the loading of the CPU and I lack(ed)
the time or energy to do the necessary matrix of tests to deduce just
what part of the deficit arose from this phenomenon.

I can give you probable upper bounds for the performance deficit,
though.  Even with a 10 Mbps ISA network adapter being driven full
tilt boogie against a 100 Mbps PCI network adapter, the performance
deficit is (as I recall) "small", on the order of 10%.  This is basically
negligible unless you are building a beowulf -- networked workstations
will not have interactive performance noticeably affected because,
after all, the ISA card in this admittedly stupid configuration (now
that PCI cards cost $20-30) is bound to be carrying low traffic density
most of the time.
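To see why even a "mere" 10 Mbps card matters at full tilt, here is a
worst-case sketch assuming 1500-byte Ethernet frames and (pessimistically)
one interrupt per frame -- early drivers without interrupt mitigation
behaved roughly this way:

```python
# Upper-bound frame (and hence interrupt) rates at full wire speed,
# assuming 1500-byte frames and one interrupt per frame.  Preamble and
# inter-frame gaps are ignored, so these are slight overestimates.
FRAME_BYTES = 1500

def frames_per_sec(mbps):
    return mbps * 1_000_000 / 8 / FRAME_BYTES

print(f"10 Mbps ISA :  {frames_per_sec(10):7.0f} int/s worst case")
print(f"100 Mbps PCI:  {frames_per_sec(100):7.0f} int/s worst case")

# ~833 expensive ISA interrupts/s (programmed I/O, long bus holds)
# competing with ~8333 cheap busmastered PCI interrupts/s is at least
# consistent in magnitude with the ~10% deficit recalled above.
```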

To conclude, I >>think<< that one could put an ISA video controller on a
beowulf with lots of PCI NICs and impact theoretically achievable
network performance by at most a percent or two.  I also think that one
could put an ISA SB on a workstation and not be able to detect any
difference at all in measured network speed (this one I can actually
verify from experiment as I have this in my own desktop).  I won't say
there isn't any, but it is sure well within sigma for a half dozen
measurements, and that's good enough for me.  I would hesitate to mix
ISA NICs with PCI NICs.  I would avoid ISA SCSI adapters.  Neither is any
particular burden these days -- why bother with ISA, one might say, in both
cases, even ASIDE from the possibility of delaying PCI NICs or SCSI
controllers.

The issue is simple to resolve though -- just try it.  An experiment
beats even reading the kernel sources, as it reveals problems that a
nice theoretical analysis doesn't.  It also gives you real-world numbers
for what is otherwise "some delay".  If the "cost" is 0.3% diminishment
of average peak speed, few of us could care.  If the "cost" is 30%
diminishment, most would care even if the actual reason for the delay
had nothing to do with the ISA latencies per se but had to do with the
lack of DMA on the ISA bus controllers and the consequent loading of the
CPU, or the like.
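In practice "just try it" means running something like ttcp or netperf
between two machines, once with the suspect ISA device idle and once with
it busy, and comparing.  As a self-contained illustration of the shape of
such a test, here is a minimal TCP throughput measurement over loopback
(all names and parameters are my own, not from any standard tool):

```python
# Minimal throughput test sketch: stream a fixed number of bytes over a
# local TCP connection and report Mbit/s.  For a real comparison, run the
# sender and sink on separate hosts, one per NIC under test.
import socket
import threading
import time

PAYLOAD = b"x" * 65536
TOTAL_BYTES = 16 * 1024 * 1024  # 16 MB per run

def sink(srv):
    # Accept one connection and drain it until the sender closes.
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass
    conn.close()

def measure():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # any free port on loopback
    srv.listen(1)
    t = threading.Thread(target=sink, args=(srv,))
    t.start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    sent = 0
    start = time.perf_counter()
    while sent < TOTAL_BYTES:
        cli.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    cli.close()
    t.join()
    srv.close()
    return sent * 8 / (time.perf_counter() - start) / 1e6  # Mbit/s

if __name__ == "__main__":
    # Run once with the system idle, once with the ISA device busy
    # (e.g. an mp3 playing on the SB), and compare the two numbers.
    print(f"throughput: {measure():.0f} Mbit/s")
```

The interesting number is not the absolute rate but the ratio between the
idle run and the busy run.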

In a sane universe, we'd stop buying systems with ISA buses at all.
We'd use our old ISA cards to prop up windows.  PCI sound cards, PCI
video cards, PCI pretty much anything cards are all cheap and plentiful
and are MUCH friendlier to operating systems.  Of course, it would be
nice if systems manufacturers would compensate for the elimination of
the ISA bus by adding a PCI slot or two, or if PCI card manufacturers
started making "multifunction" cards like the ones that used to flood
the PC market -- why not put e.g. S3 video and sound/multimedia on a
single card, for example?  There is plenty of bandwidth and tolerant
latencies on both devices, and both devices are basically single chips,
some support memory, and glue -- plenty of room.


   rgb

Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:[EMAIL PROTECTED]

