On Tue, 28 Jul 2020 22:29:02 +0530 Rakesh Pillai wrote:
> > -----Original Message-----
> > From: David Laight <david.lai...@aculab.com>
> > Sent: Sunday, July 26, 2020 4:46 PM
> > To: 'Sebastian Gottschall' <s.gottsch...@dd-wrt.com>; Hillf Danton
> > <hdan...@sina.com>
> > Cc: Andrew Lunn <and...@lunn.ch>; Rakesh Pillai <pill...@codeaurora.org>;
> > net...@vger.kernel.org; linux-wirel...@vger.kernel.org;
> > linux-ker...@vger.kernel.org; ath10k@lists.infradead.org;
> > diand...@chromium.org; Markus Elfring <markus.elfr...@web.de>;
> > evgr...@chromium.org; k...@kernel.org; johan...@sipsolutions.net;
> > da...@davemloft.net; kv...@codeaurora.org
> > Subject: RE: [RFC 0/7] Add support to process rx packets in thread
> >
> > From: Sebastian Gottschall <s.gottsch...@dd-wrt.com>
> > > Sent: 25 July 2020 16:42
> > > >> I agree. I can only say that I tested this patch recently because of
> > > >> this discussion, and it can be toggled via sysfs. But it doesn't work
> > > >> for wifi drivers, which mainly use dummy netdev devices. For those I
> > > >> made a small patch to get them working, calling napi_set_threaded
> > > >> manually, hardcoded in the drivers. (See the patch below.)
> >
> > > > With CONFIG_THREADED_NAPI, there is no need to consider what you did
> > > > here in the NAPI core, because device drivers know better and are
> > > > responsible for it before calling napi_schedule(n).
> >
> > > Yeah, but that approach will not work in all cases. Some drivers take
> > > locks in the NAPI poll function, and in that case performance degrades
> > > badly. I discovered this with the mvneta Ethernet driver (Marvell) and
> > > with mt76 tx polling (rx works). For mvneta it causes very high
> > > latencies and packet drops; for mt76 it stalls packet processing
> > > entirely (though in all cases there are no crashes). So threading will
> > > only work for drivers that are compatible with that approach.
> > > It cannot be used as a drop-in replacement, from my point of view.
> > > It is all a question of the driver design.
> >
> > Why should it make (much) difference whether the NAPI callbacks (etc.)
> > are run in the context of the interrupted process or in that of a
> > specific kernel thread? The process flags (or whatever) can even be set
> > so that it appears to be the expected 'softirq' context.
> >
> > In any case, running NAPI from a thread will just expose the next piece
> > of code that runs for ages in softirq context. I think I've seen the
> > tail end of memory being freed under RCU finally happening in softirq
> > context and taking absolutely ages.
> >
> > David
>
> Hi All,
> Has the threaded NAPI change been posted to the kernel?
https://lore.kernel.org/netdev/20200726163119.86162-1-...@nbd.name/
https://lore.kernel.org/netdev/20200727123239.4921-1-...@nbd.name/

> Is the conclusion of this discussion that "we cannot use threads for
> processing packets"?

No such conclusion has been reached, and your question is hard to answer
TBH. OTOH I'm wondering in which context device driver developers would
prefer to handle tx/rx (IRQ, BH, or user context on available idle CPUs),
and what is preventing them from doing that. Does it make any sense, for
instance on a system with 32 CPU cores or more, to set napi::weight to 3
and spin up 30 kworkers to process a ten-minute packet flood hitting the
hardware?

_______________________________________________
ath10k mailing list
ath10k@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/ath10k