Matt Dillon wrote:
> :What "this", exactly?
> :
> :That "virtual wire" mode is actually a bad idea for some
> :applications -- specifically, high speed networking with
> :multiple gigabit ethernet cards?
> 
>     All the cpu's don't get the interrupt, only one does.

I think that you will end up taking an IPI (Inter Processor
Interrupt) to shoot down the cache line during an invalidate
cycle, when moving an interrupt processing thread from one
CPU to another.  For multiple high speed interfaces (disk or
network; doesn't matter), you will end up burning a *lot*
of time, without a lockdown.
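
To put a concrete shape on what keeping the work on one CPU buys
you, here's a rough sketch of per-CPU, cache-line-aligned
accounting.  The names (CACHE_LINE, MAXCPU, curcpu()) are invented
for the example, not real kernel interfaces:

#include <stdint.h>

#define CACHE_LINE  64      /* assumed line size */
#define MAXCPU      4

/* hypothetical: returns the CPU this handler is running on */
extern int curcpu(void);

struct intr_stats {
    uint64_t    interrupts;
    uint64_t    packets;
    /* pad so two CPUs never share a line */
    char        pad[CACHE_LINE - 2 * sizeof(uint64_t)];
} __attribute__((aligned(CACHE_LINE)));     /* gcc-style alignment */

static struct intr_stats stats[MAXCPU];

void
intr_account(void)
{
    struct intr_stats *st = &stats[curcpu()];

    /*
     * Only this CPU ever writes this line, so the store stays in
     * its own cache; nothing has to be invalidated on the other
     * CPUs.  The moment the handler migrates, the line starts
     * bouncing, and that is the cost I'm talking about.
     */
    st->interrupts++;
}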

You might be able to avoid this by doing some of the tricks
I've discussed with Alfred to ensure that there is no lock
contention in the non-migratory case for KSEs (or kernel
interrupt threads) to handle per CPU scheduling, but I
think that the interrupt masking will end up being very hard
to manage, and you will get the same effect as locking the
interrupt to a particular CPU... if you are lucky.
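
The sort of thing I mean is a run queue per CPU, each with its own
lock, so the common (non-migratory) case never contends.
Everything here (struct kse, runq_local, runq_steal) is made up
for illustration; it's not the real KSE code:

#include <pthread.h>
#include <stddef.h>

#define MAXCPU 4

struct kse {
    struct kse *next;
};

struct cpu_runq {
    pthread_mutex_t lock;   /* only ever contended on migration */
    struct kse     *head;
};

static struct cpu_runq runq[MAXCPU];

void
runq_init(void)
{
    for (int i = 0; i < MAXCPU; i++)
        pthread_mutex_init(&runq[i].lock, NULL);
}

/*
 * Common case: a CPU queues work for itself.  Its own lock is
 * almost never contended, so this is just a local atomic op --
 * no IPI, no line bouncing with the other CPUs.
 */
void
runq_local(int cpu, struct kse *k)
{
    pthread_mutex_lock(&runq[cpu].lock);
    k->next = runq[cpu].head;
    runq[cpu].head = k;
    pthread_mutex_unlock(&runq[cpu].lock);
}

/*
 * Rare case: pushing work to (or stealing it from) another CPU is
 * the only time two CPUs touch the same lock, and that is where
 * the barrier/stall cost discussed below actually shows up.
 */
struct kse *
runq_steal(int victim)
{
    struct kse *k;

    pthread_mutex_lock(&runq[victim].lock);
    k = runq[victim].head;
    if (k != NULL)
        runq[victim].head = k->next;
    pthread_mutex_unlock(&runq[victim].lock);
    return (k);
}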

Any case which _did_ invoke a lock and resulted in contention
would require at least a barrier instruction; I guess you
could do it in a non-cacheable page to avoid the TLB
interaction, and another IPI for an update or invalidate
cycle for the lock, but then you are limited to memory speed,
which these days (a 133MHz bus) is around a factor of 10 slower
than CPU speed, and that's actually one heck of a stall hit to
take.
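
For reference, even the cheapest lock you can roll yourself has
that barrier in it.  A minimal test-and-test-and-set spinlock in
plain C11 atomics (nothing FreeBSD-specific) looks about like
this:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool slock = false;

void
spin_lock(void)
{
    for (;;) {
        /*
         * Spin on a plain load first, so a waiting CPU does not
         * keep yanking the line away from the lock holder.
         */
        while (atomic_load_explicit(&slock, memory_order_relaxed))
            ;
        /*
         * The exchange is the locked cycle plus the barrier;
         * uncontended, with the line already exclusive in our
         * cache, it is nearly free -- contended, this is where
         * the memory-speed stall lands.
         */
        if (!atomic_exchange_explicit(&slock, true,
            memory_order_acquire))
            return;
    }
}

void
spin_unlock(void)
{
    atomic_store_explicit(&slock, false, memory_order_release);
}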


> :That Microsoft demonstrated that wiring down interrupts
> :to a particular CPU was a good idea, and kicked both Linux'
> :and FreeBSD's butt in the test at ZD Labs?
> 
>     Well, if you happen to have four NICs and four CPUs, and
>     you are running them all full bore, I would say that
>     wiring the NICs to the CPUs would be a good idea.  That
>     seems like a rather specialized situation, though.

I don't think so.  These days, interrupt overhead can come
from many places, including intentional denial of service
attacks.  If you have an extra box around, I'd suggest that
you install QLinux, and benchmark it side by side against
FreeBSD, under an extreme load, and watch the FreeBSD system's
performance fall off when interrupt overhead becomes so high
that NETISR effectively never gets a chance to run.
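
The mitigation I have in mind is the usual rate clamp: past some
interrupt rate, mask the card and drain it from a scheduled
polling pass, so the rest of the stack gets CPU time again.  A
sketch, with every name (nic_rx_drain, nic_mask_intr, sched_poll)
and the threshold invented for illustration:

#include <stdint.h>

#define INTR_RATE_LIMIT 20000   /* interrupts/sec we will tolerate */

extern void nic_rx_drain(void);   /* hypothetical: empty the rx ring */
extern void nic_mask_intr(void);  /* hypothetical: mask the NIC's IRQ */
extern void sched_poll(void);     /* hypothetical: queue a polling pass */

static uint32_t intrs_this_second;  /* cleared by a 1Hz timer (not shown) */

void
nic_intr(void)
{
    nic_rx_drain();

    if (++intrs_this_second > INTR_RATE_LIMIT) {
        /*
         * We are spending all our cycles up here; stop taking
         * interrupts and fall back to polling so NETISR-level
         * processing gets a chance to run again.
         */
        nic_mask_intr();
        sched_poll();
    }
}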

I also suggest using 100Base-T cards, since the interrupt
coalescing on Gigabit cards could prevent you from observing
the livelock from interrupt overload, unless you could load
your machine to full wire speed (~950 Mbit/s) so that your
PCI bus transfer rate becomes a barrier.
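
By coalescing I mean the card holding off the host interrupt until
it has some number of packets queued or a hold-off timer expires,
roughly like this (the parameters are made-up examples, not any
particular card's defaults):

#define COALESCE_PKTS  32    /* assumed: interrupt every 32 packets... */
#define COALESCE_USEC  100   /* assumed: ...or after 100us, whichever first */

extern void raise_host_intr(void);   /* hypothetical */
extern unsigned now_usec(void);      /* hypothetical free-running us counter */

static unsigned pending_pkts;
static unsigned first_pkt_usec;

void
on_packet_received(void)
{
    if (pending_pkts++ == 0)
        first_pkt_usec = now_usec();

    if (pending_pkts >= COALESCE_PKTS ||
        now_usec() - first_pkt_usec >= COALESCE_USEC) {
        raise_host_intr();
        pending_pkts = 0;
    }
}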

I know you were involved in some of the performance tuning
that was attempted immediately after the ZD Labs tests, so I
know you know this was a real issue; I think it still is.

-- Terry
