On 7/17/19 11:52 PM, Thomas Gleixner wrote:
> Sudip,
> 
> On Wed, 17 Jul 2019, Sudip Mukherjee wrote:
>> On Wed, Jul 17, 2019 at 9:53 PM Thomas Gleixner <t...@linutronix.de> wrote:
>>> You can hack ksoftirq_running() to return always false to avoid this, but
>>> that might cause application starvation and a huge packet buffer backlog
>>> when the amount of incoming packets makes the CPU do nothing else than
>>> softirq processing.
>>
>> I tried that now; it is better, but still not as good as v3.8.
>> Now I am getting 375.9 usec as the maximum time between raising the softirq
>> and it starting to execute, and the packet drops are still there.
>>
>> And just a thought, do you think there should be a CONFIG_ option for
>> this feature of ksoftirqd_running() so that it can be disabled if needed
>> by users like us?
> 
> If at all then a sysctl to allow runtime control.
>  
>> Can you please think of anything else that might have changed which I
>> still need to change to make the time comparable to v3.8?
> 
> Something with in that small range of:
> 
>  63592 files changed, 13783320 insertions(+), 5155492 deletions(-)
> 
> :)
> 
> Seriously, that can be anything.
> 
> Can you please test with Linus' head of tree and add some more
> instrumentation, so we can see what holds off softirqs from being
> processed. If the ksoftirqd enforcement is disabled, then the only reason
> can be a long lasting softirq disabled region. Tracing should tell.

ksoftirqd might be spuriously scheduled from the tx path, when
__qdisc_run() also reacts to need_resched().

By raising NET_TX while we are processing NET_RX (say we send a TCP ACK
in response to an incoming packet), we force __do_softirq() to perform
another loop; but before doing another round, it will also check
need_resched() and eventually call wakeup_softirqd().

I wonder if the following patch makes any difference.

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 11c03cf4aa74b44663c74e0e3284140b0c75d9c4..ab736e974396394ae6ba409868aaea56a50ad57b 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -377,6 +377,8 @@ void __qdisc_run(struct Qdisc *q)
        int packets;
 
        while (qdisc_restart(q, &packets)) {
+               if (qdisc_is_empty(q))
+                       break;
                /*
                 * Ordered by possible occurrence: Postpone processing if
                 * 1. we've exceeded packet quota
