Vincenzo Maffione wrote:
> In general netmap adapters (i.e. netmap ports) may support
> NS_MOREFRAG. But in practice this is mainly supported on VALE ports.
> So if you don't want to add the missing support by yourself you can
> simply change the netmap buffer size by tuning the sysctl
> dev.netmap.buf_size, and increase it to 9600.
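The tuning suggested above can be sketched as shell commands. The exact knob locations are assumptions on my part: FreeBSD exposes the value as the dev.netmap.buf_size sysctl named in the quote, while on Linux the same value is passed as the buf_size module parameter mentioned later in this thread.

```shell
# FreeBSD: raise the netmap buffer size so one slot holds a 9600-byte frame
sysctl dev.netmap.buf_size=9600

# Linux: pass the equivalent value when (re)loading the netmap module
# (parameter name as used elsewhere in this thread)
modprobe -r netmap
modprobe netmap buf_size=9600

# Linux: verify the value actually in effect (sysfs path is an assumption)
cat /sys/module/netmap/parameters/buf_size
```

Note the setting must be applied before any netmap port is opened, since the buffer pool is sized when the memory allocator is finalized.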
I added the buf_size=9600 parameter to the Linux netmap module. The TX path works properly and sends 9600-byte frames, but my app receives no RX frames from the loopback. I presume this is why:

[  856.820521] 618.970159 [ 716] virtio_netmap_config      virtio config txq=1, txd=256 rxq=1, rxd=256
[  857.972334] daemon_exe: page allocation failure: order:7, mode:0x2088020
[  857.972345] CPU: 1 PID: 2289 Comm: daemon_exe Tainted: G OE 4.4.86rt-rt99 #0
[  857.972346] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
[  857.972350]  0000000000000286 36d1c578027e2926 ffff88003d3df938 ffffffff813ecf3e
[  857.972352]  0000000002088020 0000000000000000 ffff88003d3df9c8 ffffffff811a250a
[  857.972352]  ffff88003ffde078 0000000000000000 ffff88003ffdc020 ffff88003d3df990
[  857.972353] Call Trace:
[  857.972384]  [<ffffffff813ecf3e>] dump_stack+0x63/0x85
[  857.972395]  [<ffffffff811a250a>] warn_alloc_failed+0xfa/0x150
[  857.972397]  [<ffffffff811a6672>] __alloc_pages_nodemask+0x352/0xb80
[  857.972409]  [<ffffffff811f0abd>] alloc_pages_current+0x8d/0x110
[  857.972428]  [<ffffffffc0022c52>] netmap_mem2_finalize+0x142/0x590 [netmap]
[  857.972443]  [<ffffffffc00a3c9c>] ? virtio_netmap_config+0xcc/0x100 [virtio_net]
[  857.972446]  [<ffffffffc0024173>] netmap_mem_finalize+0x83/0x2c0 [netmap]
[  857.972449]  [<ffffffffc0031b69>] netmap_do_regif+0x89/0x2e0 [netmap]
[  857.972452]  [<ffffffffc003365b>] netmap_ioctl+0x5fb/0xa50 [netmap]
[  857.972462]  [<ffffffff810aa07a>] ? migrate_enable+0x7a/0x150
[  857.972473]  [<ffffffff81833197>] ? rt_spin_unlock+0x27/0x40
[  857.972479]  [<ffffffff813fbb85>] ? lockref_put_or_lock+0x25/0x30
[  857.972488]  [<ffffffff8123875c>] ? mntput_no_expire+0x2c/0x1b0
[  857.972489]  [<ffffffff81238904>] ? mntput+0x24/0x40
[  857.972493]  [<ffffffff81221b6b>] ? terminate_walk+0x6b/0xe0
[  857.972496]  [<ffffffffc0034d89>] linux_netmap_ioctl+0xa9/0x120 [netmap]
[  857.972497]  [<ffffffff81227eb5>] ? do_filp_open+0xa5/0x100
[  857.972499]  [<ffffffff810aa07a>] ? migrate_enable+0x7a/0x150
[  857.972500]  [<ffffffff8122b188>] do_vfs_ioctl+0x298/0x490
[  857.972502]  [<ffffffff81235867>] ? __fget+0x77/0xb0
[  857.972503]  [<ffffffff8122b3f9>] SyS_ioctl+0x79/0x90
[  857.972504]  [<ffffffff818335f2>] entry_SYSCALL_64_fastpath+0x16/0x71
[  857.972511] Mem-Info:
[  857.972522] active_anon:16975 inactive_anon:684 isolated_anon:0 active_file:13080 inactive_file:24152 isolated_file:0 unevictable:1466 dirty:3 writeback:0 unstable:0 slab_reclaimable:6202 slab_unreclaimable:3621 mapped:13590 shmem:794 pagetables:748 bounce:0 free:1403 free_pcp:274 free_cma:0
[  857.972528] Node 0 DMA free:3776kB min:64kB low:80kB high:96kB active_anon:1992kB inactive_anon:64kB active_file:1164kB inactive_file:2408kB unevictable:176kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:176kB dirty:0kB writeback:0kB mapped:1460kB shmem:76kB slab_reclaimable:576kB slab_unreclaimable:388kB kernel_stack:32kB pagetables:76kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[  857.972534] lowmem_reserve[]: 0 926 926 926
[  857.972536] Node 0 DMA32 free:1836kB min:3860kB low:4824kB high:5788kB active_anon:65908kB inactive_anon:2672kB active_file:51156kB inactive_file:94200kB unevictable:5688kB isolated(anon):0kB isolated(file):0kB present:1032060kB managed:998044kB mlocked:5688kB dirty:12kB writeback:0kB mapped:52900kB shmem:3100kB slab_reclaimable:24232kB slab_unreclaimable:14096kB kernel_stack:3072kB pagetables:2916kB unstable:0kB bounce:0kB free_pcp:1096kB local_pcp:228kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[  857.972548] lowmem_reserve[]: 0 0 0 0
[  857.972550] Node 0 DMA: 1*4kB (M) 3*8kB (UME) 6*16kB (UE) 4*32kB (UE) 1*64kB (U) 3*128kB (UME) 2*256kB (ME) 1*512kB (U) 2*1024kB (UE) 0*2048kB 0*4096kB = 3772kB
[  857.972560] Node 0 DMA32: 1*4kB (U) 1*8kB (U) 6*16kB (UME) 2*32kB (EH) 2*64kB (UE) 2*128kB (UH) 1*256kB (H) 2*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 1836kB
[  857.972570] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  857.972571] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  857.972572] 39070 total pagecache pages
[  857.972578] 0 pages in swap cache
[  857.972580] Swap cache stats: add 0, delete 0, find 0/0
[  857.972581] Free swap  = 0kB
[  857.972582] Total swap = 0kB
[  857.972582] 262013 pages RAM
[  857.972583] 0 pages HighMem/MovableOnly
[  857.972584] 8525 pages reserved
[  857.972584] 0 pages cma reserved
[  857.972585] 0 pages hwpoisoned
[  857.972591] 620.122243 [1316] netmap_finalize_obj_allocator Unable to create cluster at 44352 for 'netmap_buf' allocator
[  857.981963] 620.131618 [ 202] virtio_netmap_clean_used_rings got 1 used bufs on queue tx-0
[  857.982582] 620.132238 [ 215] virtio_netmap_clean_used_rings got 0 used bufs on queue rx-0
[  857.983264] 620.132920 [ 670] virtio_netmap_init_buffers   added 255 inbufs on queue 0
[  857.985600] 620.135256 [ 202] virtio_netmap_clean_used_rings got 0 used bufs on queue tx-0
[  857.986201] 620.135857 [ 215] virtio_netmap_clean_used_rings got 0 used bufs on queue rx-0
[  857.986802] 620.136458 [ 241] virtio_netmap_reclaim_unused  detached 0 pending bufs on queue tx-0
[  857.987467] 620.137123 [ 253] virtio_netmap_reclaim_unused  detached 256 pending bufs on queue rx-0
[  857.988255] 620.137910 [ 716] virtio_netmap_config      virtio config txq=1, txd=256 rxq=1, rxd=256
[  857.988938] 620.138594 [ 202] virtio_netmap_clean_used_rings got 0 used bufs on queue tx-0
[  857.989543] 620.139199 [ 215] virtio_netmap_clean_used_rings got 0 used bufs on queue rx-0
[  857.990177] 620.139833 [ 670] virtio_netmap_init_buffers   added 255 inbufs on queue 0

Joe Buehler
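A note on the "order:7" failure in the trace above, as back-of-the-envelope arithmetic (my reading of the generic kernel message, not taken from the netmap allocator source): an order-N page allocation asks for 2^N physically contiguous 4 KiB pages, so order 7 means one contiguous 512 KiB chunk, and with 9600-byte buffers each such chunk holds only a few dozen slots, so the allocator must satisfy many high-order requests that a fragmented DMA32 zone (largest usable free blocks shown are far smaller) cannot provide.

```shell
# Rough arithmetic behind the "order:7" page allocation failure.
page_size=4096   # base page size in bytes
order=7          # allocation order from the kernel message

# Bytes of physically contiguous memory one order-7 allocation needs
chunk=$(( page_size << order ))
echo "order-7 chunk: ${chunk} bytes"

# How many 9600-byte netmap buffers fit in one such chunk
echo "9600-byte buffers per chunk: $(( chunk / 9600 ))"
```

This suggests why the TX path (buffers already allocated) kept working while re-registering the RX side failed: the new, larger buffer clusters simply could not be carved out of fragmented physical memory.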