From: Roland Dreier <[EMAIL PROTECTED]>
Date: Mon, 30 Jul 2007 19:01:55 -0700
> Are you saying that netif_rx_reschedule() itself is racy?
Nevermind, you're right.
I'll see if I can resurrect something akin to netif_rx_reschedule()
when I get back to the NAPI patches. Probably what I'll do is le
> It is not a feature of these changes.
> These races exist in the current code, it is a bug fix.
Not sure I follow. What is the race in the current code? Are you
saying that netif_rx_reschedule() itself is racy? Otherwise the code
in ipoib looks OK to me.
- R.
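For context, the pattern being referred to here, where the driver pays nothing extra unless an event actually lands in the race window, looked roughly like the following under the pre-napi_struct interface. This is an editorial sketch, not ipoib code: the foo_* helpers are invented placeholders, and only the netif_rx_*() calls are real interfaces of that era.

#include <linux/kernel.h>
#include <linux/netdevice.h>

int  foo_process_events(struct net_device *dev, int limit);  /* placeholder */
void foo_arm_event_notification(struct net_device *dev);     /* placeholder */
int  foo_events_missed(struct net_device *dev);              /* placeholder */

static int foo_poll(struct net_device *dev, int *budget)
{
        int max = min(*budget, dev->quota);
        int done = 0;

repoll:
        done += foo_process_events(dev, max - done);

        if (done >= max) {
                *budget -= done;
                dev->quota -= done;
                return 1;       /* budget exhausted, stay on the poll list */
        }

        netif_rx_complete(dev);
        foo_arm_event_notification(dev);

        /* Only if an event slipped in between the final check and the
         * re-arm above is any extra work done: netif_rx_reschedule()
         * puts the device back on the poll list without waiting for a
         * hardware interrupt. */
        if (foo_events_missed(dev) && netif_rx_reschedule(dev, 0))
                goto repoll;

        *budget -= done;
        dev->quota -= done;
        return 0;
}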
From: "Michael Chan" <[EMAIL PROTECTED]>
Date: Thu, 26 Jul 2007 14:38:09 -0700
> If the driver wants a simple solution, it can do what you did in the
> patch: wrap the tx cleanup code with netif_tx_lock() and
> netif_tx_unlock().
>
> If a NAPI driver wants to be more clever, it can do something
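A minimal sketch of the "simple solution" described above: the TX cleanup pass of ->poll() wrapped in netif_tx_lock()/netif_tx_unlock(), written against the then-current ->poll() interface. The bar_* helpers are invented placeholders, not code from any real driver.

#include <linux/kernel.h>
#include <linux/netdevice.h>

void bar_clean_tx(struct net_device *dev);            /* placeholder */
int  bar_clean_rx(struct net_device *dev, int limit); /* placeholder */
int  bar_work_pending(struct net_device *dev);        /* placeholder */
void bar_enable_irq(struct net_device *dev);          /* placeholder */

static int bar_poll(struct net_device *dev, int *budget)
{
        int done;

        /* TX completion under the TX lock: serialized against
         * ->hard_start_xmit() and against netpoll, which takes the
         * same lock in order to transmit. */
        netif_tx_lock(dev);
        bar_clean_tx(dev);
        netif_tx_unlock(dev);

        done = bar_clean_rx(dev, min(*budget, dev->quota));
        *budget -= done;
        dev->quota -= done;

        if (bar_work_pending(dev))
                return 1;       /* more work, stay on the poll list */

        /* Interrupt re-enabling is elided; closing the completion race
         * at this point is what the rest of the thread is about. */
        netif_rx_complete(dev);
        bar_enable_irq(dev);
        return 0;
}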
From: Roland Dreier <[EMAIL PROTECTED]>
Date: Mon, 30 Jul 2007 08:04:24 -0700
> IPoIB can cope but it really seems like an unfortunate feature of
> these changes that we can't do something like what we have today,
> which imposes no overhead unless an event actually lands in the race
> window.
It
> If you have a means in the device (like tg3, bnx2, e1000, and a score
> of others do) to force the device to trigger a HW interrupt, that's
> what you do if you detect that events are pending after re-enabling
> interrupt in the ->poll() handler.
It is possible to trigger an interrupt for IP
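In outline, the idiom described in the quoted text looks something like this. The baz_* helpers and the notion of a "force interrupt" register are invented placeholders inspired by that description; this is not tg3, bnx2, or e1000 code.

#include <linux/netdevice.h>

void baz_enable_irq(struct net_device *dev);     /* placeholder: unmask in HW    */
void baz_force_irq(struct net_device *dev);      /* placeholder: self-interrupt  */
int  baz_events_pending(struct net_device *dev); /* placeholder: re-check status */

static void baz_poll_done(struct net_device *dev)
{
        netif_rx_complete(dev);
        baz_enable_irq(dev);

        /* An event that arrived between the last status check and the
         * unmask above would otherwise sit unnoticed until the next
         * interrupt; forcing a hardware interrupt closes that window. */
        if (baz_events_pending(dev))
                baz_force_irq(dev);
}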
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Sat, 28 Jul 2007 14:08:44 -0400
> That's a performance/parallelization regression from current NAPI :(
That's not true since current NAPI will only run on one cpu, the
one that the interrupt triggers on.
The existing cases that are not guarding the seq
From: Roland Dreier <[EMAIL PROTECTED]>
Date: Sat, 28 Jul 2007 08:27:18 -0700
> > Most drivers are in good shape, although some still have very
> > questionable netif_rx_complete() handling, in that racy area that
> > Rusty and myself were discussing today.
> >
> > My inclination is to wrap
David Miller wrote:
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 22:00:31 -0400
David Miller wrote:
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 21:55:08 -0400
I don't see any logic to your request, only added overhead for no reason.
There may be some flawed
> Most drivers are in good shape, although some still have very
> questionable netif_rx_complete() handling, in that racy area that
> Rusty and myself were discussing today.
>
> My inclination is to wrap those sequences around with an IRQ
> safe spinlock to fix the race provably, and then if
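As a rough illustration of the quoted idea (an editorial sketch, not the actual patch being discussed): the poll-completion sequence and the interrupt handler's scheduling decision take the same IRQ-safe lock, so no event can land unnoticed between the final status check and netif_rx_complete(). struct qux_priv and the qux_* helpers are invented placeholders.

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>

struct qux_priv {
        spinlock_t lock;        /* initialized with spin_lock_init() at probe time */
        /* ... device state ... */
};

int  qux_events_pending(struct qux_priv *qp); /* placeholder */
void qux_enable_irq(struct qux_priv *qp);     /* placeholder */
void qux_disable_irq(struct qux_priv *qp);    /* placeholder */

/* End-of-poll helper: returns nonzero if polling was completed, in which
 * case the caller's ->poll() returns 0; otherwise it keeps polling. */
static int qux_poll_done(struct net_device *dev, struct qux_priv *qp)
{
        unsigned long flags;
        int done = 0;

        spin_lock_irqsave(&qp->lock, flags);
        if (!qux_events_pending(qp)) {
                netif_rx_complete(dev);
                qux_enable_irq(qp);
                done = 1;
        }
        spin_unlock_irqrestore(&qp->lock, flags);
        return done;
}

static irqreturn_t qux_interrupt(int irq, void *dev_id)
{
        struct net_device *dev = dev_id;
        struct qux_priv *qp = netdev_priv(dev);

        spin_lock(&qp->lock);
        if (qux_events_pending(qp) && netif_rx_schedule_prep(dev)) {
                qux_disable_irq(qp);
                __netif_rx_schedule(dev);
        }
        spin_unlock(&qp->lock);
        return IRQ_HANDLED;
}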
On Thu, 2007-07-26 at 00:15 -0700, David Miller wrote:
> The netpoll code has to take that anyways in order to call
> into ->hard_start_xmit() to send out the packet it has
> pending, I'm leveraging that as a synchronization mechanism
> in the drivers because the locking options are limited
> give
From: "Michael Chan" <[EMAIL PROTECTED]>
Date: Thu, 26 Jul 2007 00:05:47 -0700
> David Miller wrote:
>
> > So that ->poll_controller() can process TX acks by just having
> > the TX lock and interrupts disabled.
> >
> > Can you think of another way to process TX acks from absolutely
> > any execution context whatsoever?
David Miller wrote:
> So that ->poll_controller() can process TX acks by just having
> the TX lock and interrupts disabled.
>
> Can you think of another way to process TX acks from absolutely
> any execution context whatsoever? That's what we need and
> preferably in some generic way, and the ab
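One way such a ->poll_controller() could look, as an editorial sketch rather than the actual patch: with local interrupts already disabled by the netpoll caller, it only needs the TX lock to reap completions and reopen the queue. The quux_* helpers are invented placeholders; the trylock is a defensive choice in case the TX lock is already held on this CPU.

#include <linux/netdevice.h>

void quux_clean_tx(struct net_device *dev); /* placeholder: reap TX completions */
int  quux_tx_avail(struct net_device *dev); /* placeholder: free TX descriptors */

static void quux_poll_controller(struct net_device *dev)
{
        /* If someone else holds the TX lock they are in the TX path and
         * will reap completions themselves, so just back off. */
        if (!netif_tx_trylock(dev))
                return;

        quux_clean_tx(dev);
        if (netif_queue_stopped(dev) && quux_tx_avail(dev))
                netif_wake_queue(dev);

        netif_tx_unlock(dev);
}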
From: "Michael Chan" <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 23:39:18 -0700
> David Miller wrote:
>
> > As a result there is no more messing around with fake NAPI polls and
> > all that other crap, instead ->poll_controller() merely has to try to
> > process TX queue acks to free up space and
David Miller wrote:
> As a result there is no more messing around with fake NAPI polls and
> all that other crap, instead ->poll_controller() merely has to try to
> process TX queue acks to free up space and wake up the transmit
> queue(s) of the device.
>
I think we also need to take care of li
From: "Michael Chan" <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 23:33:03 -0700
> This will lead to more and longer lock contentions between
> ->hard_start_xmit() and ->poll() in the normal fast path. Why
> do we need to widen the scope of netif_tx_lock() in this case?
So that ->poll_controller() can process TX acks by just having
the TX lock and interrupts disabled.
David Miller wrote:
>  	/* run TX completion thread */
>  	if (sblk->idx[0].tx_consumer != tp->tx_cons) {
> -		tg3_tx(tp);
> +		netif_tx_lock(netdev);
> +		__tg3_tx(tp);
> +		netif_tx_unlock(netdev);
>
This will lead to more and longer lock contentions between
->hard_start_xmit() and ->poll() in the normal fast path.
From: David Miller <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 01:31:54 -0700 (PDT)
> Besides that the only major issue is netpoll and I have some
> ideas on how to handle that, which I'll try to implement
> tonight and tomorrow.
Ok, here is how I'm trying to crack this nut.
The way I'll solve th
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 22:00:31 -0400
> David Miller wrote:
> > From: Jeff Garzik <[EMAIL PROTECTED]>
> > Date: Wed, 25 Jul 2007 21:55:08 -0400
> >
> >> I don't see any logic to your request, only added overhead for no reason.
> >
> > There may be some flawed logic in what Stephen stated, but
> > the change really is needed.
David Miller wrote:
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 21:55:08 -0400
I don't see any logic to your request, only added overhead for no reason.
There may be some flawed logic in what Stephen stated, but
the change really is needed.
It must be atomic to execute the:
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 21:55:08 -0400
> I don't see any logic to your request, only added overhead for no reason.
There may be some flawed logic in what Stephen stated, but
the change really is needed.
It must be atomic to execute the:
enable_interr
Stephen Hemminger wrote:
The usage of NAPI on 8139cp and 8139too seems dodgy;
these drivers expect this to work:
        local_irq_save(flags);
        cpw16_f(IntrMask, cp_intr_mask);
        __netif_rx_complete(dev);
        local_irq_restore(flags);
It works o
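An editorial annotation of that sequence (not from the thread): local_irq_save() only shuts out an interrupt on this CPU, so on SMP there is a window once IntrMask is written and before __netif_rx_complete() runs.

        local_irq_save(flags);
        cpw16_f(IntrMask, cp_intr_mask);  /* device interrupts unmasked again */
        /* window: the interrupt handler can run on another CPU here, and
         * its netif_rx_schedule_prep() fails because the device is still
         * marked as scheduled for polling, so nothing gets rescheduled */
        __netif_rx_complete(dev);         /* polling now marked as finished */
        local_irq_restore(flags);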
From: Stephen Hemminger <[EMAIL PROTECTED]>
Date: Wed, 25 Jul 2007 09:56:54 +0100
> The usage of NAPI on 8139cp and 8139too seems dodgy;
> these drivers expect this to work:
>
> local_irq_save(flags);
> cpw16_f(IntrMask, cp_intr_mask);
> __netif_rx_complete(dev);
On Wed, 2007-25-07 at 01:31 -0700, David Miller wrote:
> We're getting there, slowly...
>
> 1) netif_napi_init() is added, the workqueue/requeue stuff
>as discussed is not needed so you won't see that here
..
> Another thing that's really apparent now is all the wacky
> napi->weight value
On Wed, 25 Jul 2007 01:31:54 -0700 (PDT)
David Miller <[EMAIL PROTECTED]> wrote:
>
> We're getting there, slowly...
>
> 1) netif_napi_init() is added, the workqueue/requeue stuff
>as discussed is not needed so you won't see that here
>
> 2) all cases where netif_rx_complete() are invoked in