On Fri, 15 Sep 2000, Bogdan Costescu wrote:
> On Fri, 15 Sep 2000, jamal wrote:
>
> > Only the timer runs at HZ granularity ;-<
>
> Some cards provide their own high resolution timers; latest 3Com cards
> provide several with different purposes (none currently used). The
> question is how many of these also provide the Rx early interrupts.
On Fri, 15 Sep 2000, Bogdan Costescu wrote:
> On Fri, 15 Sep 2000, jamal wrote:
> > You use the period (5-10 microseconds), while waiting
> > for full packet arrival, to make the route decision (lookup etc.).
> > i.e. this will allow for a better FF; it will not offload things.
>
> Just that you span
On Fri, 15 Sep 2000, Bogdan Costescu wrote:
> On Thu, 14 Sep 2000, jamal wrote:
> > If I remember correctly some of the 3Coms still give this 'mid-interrupt',
> > no? It could be useful to just quickly read the header and make routing
> > decisions as in fast routing, but not under heavy load.
>
> The 3Com cards can generate this interrupt, however this is not used in
> current 3c59x.c. I suggested this to Andrew, but he is already worried
> about the current interrupt rate and unhappy that 3Com cards do not
On Thu, Sep 14, 2000 at 10:26:08PM -0400, jamal wrote:
> One of the things we still need to measure is the latency. The scheme
> currently used, dynamically adjusting the mitigation parameters, might
> not affect latency much -- simply because the adjustment is based on the
> load.
On Thu, 14 Sep 2000, Donald Becker wrote:
> No, because I know I sound like a broken record.
;->
> What we measured is that the cache impact of allocating and initializing our
> (ever-larger) skbuffs is huge. So we pay some CPU time getting a new
> skbuff, and some more CPU time later
On Thu, 14 Sep 2000, Andrew Morton wrote:
> But for 3c59x (which is not a very efficient driver (yet)), it takes 6
> usecs to even get into the ISR, and around 4 usecs to traverse it.
> Guess another 4 to leave the ISR, guess half as much again for whoever
> got interrupted to undo the resulting
What Alexey's code does is _not_ preallocation -- it does recycling.
On tx completion, the skb is put back on a recycle queue unless the
queue is full (the bound is a tunable parameter), in which case it is
freed. This is more sensible than doing preallocation during idle time
or other smart
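The recycling scheme described above can be sketched in userspace Python (a model of the idea, not the kernel code; `RECYCLE_MAX`, `tx_complete` and `rx_refill` are made-up names, with the constant standing in for the tunable bound):

```python
from collections import deque

RECYCLE_MAX = 64   # hypothetical value for the tunable queue bound

recycle_q = deque()

def tx_complete(skb):
    """Tx completion: recycle the skb unless the queue is full."""
    if len(recycle_q) < RECYCLE_MAX:
        recycle_q.append(skb)   # keep the buffer (and its cache lines) warm
        return "recycled"
    return "freed"              # bound reached: hand the memory back

def rx_refill(alloc_fresh):
    """Rx refill: prefer a recycled skb, fall back to a fresh allocation."""
    if recycle_q:
        return recycle_q.popleft()
    return alloc_fresh()
```

The point of the bound is that memory is only held hostage up to a fixed limit; anything beyond it is freed immediately rather than hoarded.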
Yes!
The FF experiments with 2.1.X indicated an improvement factor of about 2-3x
with skb recycling. With the combination of FF and skb recycling we could
reach fast Ethernet wire-speed forwarding on a 400 MHz CPU: ~147 KPPS.
As jamal reported, the improvement is much less today, but the
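The ~147 KPPS figure is essentially the theoretical minimum-frame rate of fast Ethernet, which can be checked with the standard back-of-envelope calculation:

```python
# A minimum-size Ethernet frame occupies 64 bytes of frame plus 8 bytes
# of preamble and a 12-byte inter-frame gap on the wire: 84 bytes total.
LINK_BPS = 100_000_000            # fast Ethernet
WIRE_BYTES_PER_FRAME = 64 + 8 + 12

pps = LINK_BPS / (WIRE_BYTES_PER_FRAME * 8)
print(round(pps))  # 148810 -> matches the "~147 KPPS" wire speed above
```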
On Thu, Sep 14, 2000 at 11:59:32PM +1100, Andrew Morton wrote:
> That's 20 usec per interrupt, of which 1 usec could be saved by skb
> pooling.
FF usually runs with interrupt mitigation at higher rates (8-16 or even
more packets / interrupt). I agree though that it probably does not
make too
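The mitigation point can be made concrete: once the ~20 usec per-interrupt cost quoted above is amortized over a burst, the fixed per-packet cost is already small, which is why a ~1 usec saving from skb pooling looks marginal. A quick sketch using the thread's own numbers:

```python
# Amortizing a ~20 usec per-interrupt cost over a mitigated burst
# (figures taken from the messages above).
COST_PER_IRQ_US = 20.0

per_packet = {pkts: COST_PER_IRQ_US / pkts for pkts in (1, 8, 16)}
print(per_packet)  # {1: 20.0, 8: 2.5, 16: 1.25} usec per packet
```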
jamal wrote:
>
> The FF code of the tulip does have skb recycling code.
> And I believe Jes' acenic code does or did at some point.

But this isn't preallocation. Unless you got cute, this scheme would
limit the "preallocation" to the DMA ring size.
For network-intensive applications, a larger
> "jamal" == jamal <[EMAIL PROTECTED]> writes:
jamal> The FF code of the tulip does have skb recycling code. And I
jamal> believe Jes' acenic code does or did at some point. Robert
jamal> Olson and I were thinking of taking that code out of the
jamal> tulip for reasons such as you talk about (and
Date: Thu, 14 Sep 2000 06:53:37 -0400 (EDT)
From: jamal <[EMAIL PROTECTED]>
Dave, would a scheme with an aging of the skbs in the recycle queue
and an upper bound of the number of packets sitting on the queue be
acceptable?
This sounds more reasonable, certainly. Perhaps you and
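jamal's proposal, an aged and bounded recycle queue, can be sketched as follows (a userspace model; `MAX_QUEUED` and `MAX_AGE` are hypothetical values, and the `now` parameter exists only to make the sketch testable):

```python
import time
from collections import deque

MAX_QUEUED = 64   # hypothetical upper bound on queued skbs
MAX_AGE = 2.0     # hypothetical age limit, in seconds

recycle_q = deque()   # entries are (timestamp, skb) pairs, oldest first

def recycle(skb, now=None):
    """Queue an skb for reuse, honouring the upper bound."""
    if len(recycle_q) >= MAX_QUEUED:
        return False   # bound hit: the caller frees the skb instead
    recycle_q.append((time.time() if now is None else now, skb))
    return True

def age_out(now=None):
    """Free skbs that have sat unused for longer than MAX_AGE."""
    now = time.time() if now is None else now
    freed = 0
    while recycle_q and now - recycle_q[0][0] > MAX_AGE:
        recycle_q.popleft()   # stale: return the memory to the system
        freed += 1
    return freed
```

The aging pass addresses the objection about holding memory on assumptions: skbs that are not reused promptly go back to the general allocator.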
On Thu, 14 Sep 2000, David S. Miller wrote:
> > Does anyone think that allocating skbs during system idle time
> > would be useful?
>
> I really don't like these sorts of things, because it makes an
> assumption as to what memory is about to be used for.

I agree. Surely The Linux Way (tm) would
On Thu, 14 Sep 2000, David S. Miller wrote:
> Date: Thu, 14 Sep 2000 04:44:53 -0400
> From: Jeff Garzik <[EMAIL PROTECTED]>
>
> Does anyone think that allocating skbs during system idle time
> would be useful?
>
> I really don't like these sorts of things, because it makes an
> assumption as to what memory is about to be used for.
Does anyone think that allocating skbs during system idle time would be
useful?
Net drivers (well, Ethernet at least) often wind up allocating
maximum-sized skbs for use in Rx descriptors. It seems to me that it
would be useful at interrupt time to have an skb already allocated,
falling back
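Jeff's idea reads as a pool topped up at idle time, with a fast path at interrupt time that falls back to a normal allocation when the pool runs dry. A minimal sketch, with made-up names (`POOL_TARGET`, `idle_refill`, `rx_get_skb`) and a `bytearray` standing in for `alloc_skb()`:

```python
from collections import deque

POOL_TARGET = 32      # hypothetical number of skbs kept ready
MAX_SKB_SIZE = 1536   # a typical maximum-sized Ethernet Rx buffer

pool = deque()

def idle_refill():
    """Run when the system is idle: top the pool up to POOL_TARGET."""
    while len(pool) < POOL_TARGET:
        pool.append(bytearray(MAX_SKB_SIZE))  # stand-in for alloc_skb()

def rx_get_skb():
    """Interrupt-time fast path: take a ready skb, falling back to a
    normal allocation if the pool has run dry."""
    if pool:
        return pool.pop()
    return bytearray(MAX_SKB_SIZE)
```

The objection raised in the replies applies directly to `idle_refill`: it commits memory on an assumption about what it will be used for, which is what makes recycling (driven by actual traffic) the preferred alternative in this thread.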