Hi Gregory,

2016-11-24 16:01 GMT+01:00 Gregory CLEMENT <gregory.clem...@free-electrons.com>:
> Hi Arnd,
>
> On Thu., Nov. 24 2016, Arnd Bergmann <a...@arndb.de> wrote:
>
>> On Thursday, November 24, 2016 4:37:36 PM CET Jisheng Zhang wrote:
>>> solB (a SW shadow cookie) perhaps gives better performance: in the hot
>>> path, such as mvneta_rx(), the driver accesses buf_cookie and
>>> buf_phys_addr of rx_desc, which is allocated by dma_alloc_coherent and
>>> is noncacheable if the device isn't cache-coherent. I didn't measure
>>> the performance difference, because in fact we take solA as well
>>> internally. From your experience, does the performance gain justify
>>> the more complex code?
>>
>> Yes, a read from uncached memory is fairly slow, so if you have a chance
>> to avoid that it will probably help. When adding complexity to the code,
>> it probably makes sense to take a runtime profile anyway to quantify how
>> much it gains.
>>
>> On machines that have cache-coherent DMA, accessing the descriptor
>> should be fine, as you already have to load the entire cache line
>> to read the status field.
>>
>> Looking at this snippet:
>>
>>                 rx_status = rx_desc->status;
>>                 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN +
>>                                                  MVNETA_MH_SIZE);
>>                 data = (unsigned char *)rx_desc->buf_cookie;
>>                 phys_addr = rx_desc->buf_phys_addr;
>>                 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
>>                 bm_pool = &pp->bm_priv->bm_pools[pool_id];
>>
>>                 if (!mvneta_rxq_desc_is_first_last(rx_status) ||
>>                     (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
>> err_drop_frame_ret_pool:
>>                         /* Return the buffer to the pool */
>>                         mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
>>                                               rx_desc->buf_phys_addr);
>> err_drop_frame:
>>
>>
>> I think there is more room for optimizing once you start: you read
>> the status field twice (the second one in MVNETA_RX_GET_BM_POOL_ID)
>> and you can cache the buf_phys_addr along with the virtual address
>> once you add that.
>
> I agree we can optimize this code, but it is not related to the 64-bit
> conversion. Indeed, this part runs when we use the HW buffer management,
> and currently it is not ready at all for 64 bits. The virtual address is
> handled directly by the hardware, but it only has 32 bits to store it in
> the cookie. So if we want to use the HWBM on 64 bits we need to redesign
> the code (maybe by storing the virtual address in an array and passing
> the index in the cookie).
>
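If I understand the lookup-table idea correctly, it would amount to
something like the sketch below (illustration only; the structure and
helper names are made up, not existing mvneta/mvneta_bm code): the
descriptor cookie carries just a 32-bit index, and the virtual addresses
live in an ordinary cacheable array on the side.

#include <linux/types.h>

/* Sketch only: invented names, assumes the HW passes the cookie
 * through unchanged. */
struct mvneta_bm_virt_tbl {
        void **virt;            /* one slot per buffer in the pool */
        u32 size;
};

/* On refill: remember the virtual address, put the index in the cookie. */
static inline u32 mvneta_bm_virt_to_cookie(struct mvneta_bm_virt_tbl *tbl,
                                           u32 index, void *virt)
{
        tbl->virt[index] = virt;
        return index;           /* written to rx_desc->buf_cookie */
}

/* In mvneta_rx(): the cookie is just an index into cacheable memory. */
static inline void *mvneta_bm_cookie_to_virt(struct mvneta_bm_virt_tbl *tbl,
                                             u32 cookie)
{
        return tbl->virt[cookie];
}

That would work, but it adds one more table to allocate and keep in sync
with the pool.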
How about, instead, storing that data (the virtual address and maybe
other per-buffer metadata) as part of the data buffer itself and using
rx_packet_offset? That offset has to be used for a3700 anyway, and there
would be no need for additional rings or tables whatsoever.
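A minimal sketch of the layout I have in mind (again illustration only;
the structure, macro and helper names are invented, and it assumes the
driver can get from the DMA address back to the buffer, e.g. with
phys_to_virt() on a linear-map buffer):

#include <linux/cache.h>
#include <linux/kernel.h>
#include <linux/types.h>

/* Sketch only: a small header at the start of each RX buffer keeps the
 * per-buffer metadata in cacheable memory; the HW writes the packet at
 * rx_packet_offset, past the header. */
struct mvneta_rx_buf_hdr {
        void *virt;             /* buffer virtual address */
        dma_addr_t dma;         /* matching DMA address */
        /* room for more per-buffer state if ever needed */
};

/* Packet data starts after the header, kept cache-line aligned. */
#define MVNETA_RX_PKT_OFFSET \
        ALIGN(sizeof(struct mvneta_rx_buf_hdr), L1_CACHE_BYTES)

/* On refill: stash the addresses in the buffer headroom. */
static inline void mvneta_rx_buf_hdr_init(void *buf, dma_addr_t dma)
{
        struct mvneta_rx_buf_hdr *hdr = buf;

        hdr->virt = buf;
        hdr->dma = dma;
}

/* In mvneta_rx(): once the buffer is reached from the DMA address,
 * everything else is an ordinary cacheable read. */
static inline struct mvneta_rx_buf_hdr *mvneta_rx_buf_hdr_get(void *buf)
{
        return buf;
}

The hot path would then read the descriptor once for the status, size
and DMA address, and get everything else from cacheable memory.

Best regards,
Marcin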