I have found that with ath10k under heavy iperf load (both TCP and UDP;
with UDP it is much easier to trigger, due to the lack of congestion
control), the kernel runs out of memory and starts invoking the OOM
killer.

The scenario in which the problem reproduces is as follows:
20 stations associated with an ath10k radio, each receiving a download
test of 5 Mbps: AP ----(5 Mbps)---> client.
The test starts with around 60000 kB of MemFree and MemAvailable.
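For reference, the per-station streams are generated with something like
the following (flags quoted from memory; the station address and duration
are placeholders, adjust to your setup; one sender per station, running on
the AP side so traffic flows AP -> client):

  # on each station (UDP case):
  iperf -s -u
  # on the AP, one instance per associated station:
  iperf -c <station_ip> -u -b 5M -t 600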

I have also been able to reproduce it with different radios, a
QCA9888 and a QCA9880.

The OOM trace:

Mon Jul 30 23:20:11 2018 kern.warn kernel: [25365.649038] observer
invoked oom-killer:
gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), nodemask=0,
order=1, oom_score_adj=0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649048] CPU: 1 PID:
2873 Comm: observer Not tainted 4.9.17 #0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649050] Call Trace:
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649059] [c2441d00]
[c03d1798] dump_stack+0x84/0xb0 (unreliable)
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649075] [c2441d10]
[c03cfae0] dump_header.isra.4+0x54/0x180
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649082] [c2441d50]
[c0097aec] oom_kill_process+0x88/0x3f0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649087] [c2441d90]
[c0098350] out_of_memory+0x37c/0x3b0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649094] [c2441dc0]
[c009bbf0] __alloc_pages_nodemask+0x904/0x9cc
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649105] [c2441e70]
[c001d09c] copy_process.isra.6.part.7+0xdc/0x13c0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649111] [c2441f00]
[c001e4ec] _do_fork+0xc4/0x280
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649120] [c2441f40]
[c000cff8] ret_from_syscall+0x0/0x3c
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649126] ---
interrupt: c00 at 0xfd41948
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649126] LR = 0xfe1e100
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649178] Mem-Info:
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192]
active_anon:5586 inactive_anon:696 isolated_anon:0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192]
active_file:15 inactive_file:24 isolated_file:0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192]
unevictable:19098 dirty:0 writeback:0 unstable:0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192]
slab_reclaimable:840 slab_unreclaimable:2398
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192] mapped:2483
shmem:786 pagetables:208 bounce:0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649192] free:6289
free_pcp:51 free_cma:0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649203] Node 0
active_anon:22344kB inactive_anon:2784kB active_file:60kB
inactive_file:96kB unevictable:76392kB isolated(anon):0kB
isolated(file):0kB mapped:9932kB dirty:0kB writeback:0kB shmem:3144kB
writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649216] DMA
free:25156kB min:16384kB low:20480kB high:24576kB active_anon:22344kB
inactive_anon:2784kB active_file:60kB inactive_file:96kB
unevictable:76392kB writepending:0kB present:262144kB managed:200856kB
mlocked:0kB slab_reclaimable:3360kB slab_unreclaimable:9592kB
kernel_stack:1000kB pagetables:832kB bounce:0kB free_pcp:204kB
local_pcp:84kB free_cma:0kB
Mon Jul 30 23:20:11 2018 kern.emerg kernel: lowmem_reserve[]: 0 0 0 0
Mon Jul 30 23:20:11 2018 kern.debug kernel: [25365.649224] DMA:
333*4kB (UMEH) 160*8kB (UME) 89*16kB (UMEH) 60*32kB (UMEH) 30*64kB
(UMEH) 19*128kB (UMEH) 10*256kB (UMEH) 6*512kB (MEH) 1*1024kB (E)
2*2048kB (UE) 1*4096kB (M) 0*8192kB 0*16384kB = 25156kB

I have tried to work around it by reducing the number of MSDU descriptors
to 400 (in hw.h), and even to smaller values, but it keeps happening:
-#define TARGET_10X_NUM_MSDU_DESC (1024 + 400)
+#define TARGET_10X_NUM_MSDU_DESC 400
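
For context, if I am reading the driver right, for 10.x firmware this
constant ends up both in the host-side pending-tx limit and in the WMI
resource config sent to the firmware (paraphrased from memory of core.c
and wmi.c, so the exact lines may differ in your tree):

  /* core.c, ath10k_core_init_firmware_features(), 10.x branch */
  ar->htt.max_num_pending_tx = TARGET_10X_NUM_MSDU_DESC;

  /* wmi.c, 10.x init/resource config */
  config.num_msdu_desc = __cpu_to_le32(TARGET_10X_NUM_MSDU_DESC);

My expectation was that lowering it would also cap the memory pinned by
queued tx MSDUs, but as said above the OOMs keep happening.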

Any ideas?

thanks, EG
