On Fri, Jan 4, 2013 at 5:41 AM, Eric Dumazet wrote:
> On Thu, 2013-01-03 at 11:41 -0800, Rick Jones wrote:
>
>> In terms of netperf overhead, once you specify P99_LATENCY, you are
>> already in for the pound of cost but only getting the penny of output
>> (so to speak). While it would clutter the
On Fri, 2013-01-04 at 11:14 +0400, Oleg A.Arkhangelsky wrote:
> It leads to many context switches when softirq processing is deferred to
> the ksoftirqd kthreads, which can be very expensive. Here is some evidence
> of the effects of ksoftirqd activation:
>
> http://marc.info/?l=linux-netdev&m=124116262916969&
On Fri, 2013-01-04 at 06:31 +0100, Sedat Dilek wrote:
>
> Will you send a v2 with this change...?
>
> -#define MAX_SOFTIRQ_TIME min(1, (2*HZ/1000))
> +#define MAX_SOFTIRQ_TIME max(1, (2*HZ/1000))
I will; I was planning to do this after waiting for other
comments/reviews.
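
For reference, a minimal sketch of how the suggested macro behaves at common
HZ values (this is the form from the diff above, not necessarily the final
upstream definition, and the "end" variable is only for illustration):

/* With HZ=1000, 2*HZ/1000 evaluates to 2 jiffies (~2 ms).
 * With HZ=100, integer division gives 2*100/1000 = 0, so the max(1, ...)
 * form keeps a budget of at least one jiffy instead of none. */
#define MAX_SOFTIRQ_TIME  max(1, (2*HZ/1000))

/* Used as a coarse jiffies deadline for the softirq loop: */
unsigned long end = jiffies + MAX_SOFTIRQ_TIME;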
On Fri, 2013-01-04 at 14:16 +0900, Namhyung Kim wrote:
> Probably a silly question:
>
> Why not use ktime rather than jiffies for this?
ktime is too expensive on some hardware.
Here we only want a safety belt, no need for high time resolution.
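
To illustrate the "safety belt" point, a simplified sketch of a time-bounded
softirq loop using jiffies (the names and the exact restart condition are
illustrative, not the verbatim patch): reading jiffies is essentially a memory
load, whereas ktime_get() may hit a slow clocksource read on some hardware.

unsigned long end = jiffies + MAX_SOFTIRQ_TIME;	/* coarse deadline */
int max_restart = MAX_SOFTIRQ_RESTART;

restart:
	/* ... run handlers for all currently pending softirqs once ... */

	if (local_softirq_pending()) {
		/* The jiffies check is cheap; no high resolution is needed
		 * for a simple cap on time spent here. */
		if (time_before(jiffies, end) && --max_restart)
			goto restart;
		/* Budget exhausted: hand the remaining work to ksoftirqd. */
		wakeup_softirqd();
	}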
Hi,
On Thu, 03 Jan 2013 14:41:15 -0800, Eric Dumazet wrote:
> On Thu, 2013-01-03 at 12:46 -0800, Andrew Morton wrote:
>> Can this change cause worsened latencies in some situations? Say there
>> are a large number of short-running actions queued. Presently we'll
>> dispatch ten of them and return
On Thu, 2013-01-03 at 11:41 -0800, Rick Jones wrote:
> In terms of netperf overhead, once you specify P99_LATENCY, you are
> already in for the pound of cost but only getting the penny of output
> (so to speak). While it would clutter the output, one could go ahead
> and ask for the other late
On Thu, 2013-01-03 at 12:46 -0800, Andrew Morton wrote:
> On Thu, 03 Jan 2013 04:28:52 -0800
> Eric Dumazet wrote:
>
> > From: Eric Dumazet
> >
> > In various network workloads, __do_softirq() latencies can be up
> > to 20 ms if HZ=1000, and 200 ms if HZ=100.
> >
> > This is because we iterate
On Thu, 2013-01-03 at 22:08 +, Ben Hutchings wrote:
> On Thu, 2013-01-03 at 04:28 -0800, Eric Dumazet wrote:
> > From: Eric Dumazet
> >
> > In various network workloads, __do_softirq() latencies can be up
> > to 20 ms if HZ=1000, and 200 ms if HZ=100.
> >
> > This is because we iterate 10 times in the softirq dispatcher,
On Thu, 2013-01-03 at 04:28 -0800, Eric Dumazet wrote:
> From: Eric Dumazet
>
> In various network workloads, __do_softirq() latencies can be up
> to 20 ms if HZ=1000, and 200 ms if HZ=100.
>
> This is because we iterate 10 times in the softirq dispatcher,
> and some actions can consume a lot of
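
For context on where the 10 comes from, a rough sketch of the pre-patch
dispatcher loop: it restarts purely on a fixed count, with no elapsed-time
check, so ten passes that each run close to their own per-pass budget (for
example NET_RX's roughly two-jiffy limit) can add up to ~20 jiffies, i.e.
20 ms at HZ=1000 or 200 ms at HZ=100. Simplified illustration, not the exact
kernel source of the time:

#define MAX_SOFTIRQ_RESTART 10	/* fixed iteration count, no time bound */

int max_restart = MAX_SOFTIRQ_RESTART;

restart:
	/* ... run handlers for all currently pending softirqs once ... */

	if (local_softirq_pending() && --max_restart)
		goto restart;	/* may loop the full 10 times regardless of time spent */

	if (local_softirq_pending())
		wakeup_softirqd();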
On Thu, 03 Jan 2013 04:28:52 -0800
Eric Dumazet wrote:
> From: Eric Dumazet
>
> In various network workloads, __do_softirq() latencies can be up
> to 20 ms if HZ=1000, and 200 ms if HZ=100.
>
> This is because we iterate 10 times in the softirq dispatcher,
> and some actions can consume a lot
On 01/03/2013 05:31 AM, Eric Dumazet wrote:
> A common network load is to launch ~200 concurrent TCP_RR netperf
> sessions like the following:
>
>   netperf -H remote_host -t TCP_RR -l 1000
>
> And then you can launch some netperf asking for P99_LATENCY results:
>
>   netperf -H remote_host -t TCP_RR -- -k P99_LATENCY
On Thu, 2013-01-03 at 14:12 +0100, Sedat Dilek wrote:
> Hi Eric,
>
> your patch from [2] applies cleanly on top of Linux v3.8-rc2.
> I would like to test it.
> In [1] you were talking about benchmarks you did.
> Can you describe them or provide a testcase (script etc.)?
> Did you only run network tests?