On Monday, May 21, 2018 2:17 PM, Jon Rosen (jrosen) <jro...@cisco.com> wrote:
> On Monday, May 21, 2018 1:07 PM, Willem de Bruijn
> <willemdebruijn.ker...@gmail.com> wrote:
>> On Mon, May 21, 2018 at 8:57 AM, Jon Rosen (jrosen) <jro...@cisco.com> wrote:
...snip...
>>
>> A setsockopt for userspace to signal a stricter interpretation of
>> tp_status to elide the shadow hack could then be considered.
>> It's not pretty. Either way, no full new version is required.
>>
>>> As much as I would like to find a solution that doesn't require
>>> the spin lock, I have yet to do so.  Maybe the answer is that
>>> existing applications will need to suffer the performance impact,
>>> but a new version or option for TPACKET_V1/V2 could be added to
>>> indicate strict adherence to the TP_STATUS_USER bit, and then the
>>> original diffs could be used.

It looks like adding new socket options is pretty rare, so I wonder
if a better option might be to define a new TP_STATUS_XXX bit which
a userspace application could use to signal to the kernel that it
strictly interprets the TP_STATUS_USER bit to determine ownership.

Today's applications set tp_status = TP_STATUS_KERNEL (0) for the
kernel to pick up the entry.  We could define a new value to pass
ownership, as well as one to indicate to other kernel threads that
an entry is in use:

    #define TP_STATUS_USER_TO_KERNEL    (1 << 8)
    #define TP_STATUS_INUSE             (1 << 9)

If the kernel sees tp_status == TP_STATUS_KERNEL then it should use
the shadow method for tracking ownership.  If it sees tp_status ==
TP_STATUS_USER_TO_KERNEL then it can use the TP_STATUS_INUSE method.

>>> There is another option I was considering but have yet to try
>>> which would avoid needing a shadow ring by using counter(s) to
>>> track the maximum sequence number queued to userspace vs. the
>>> next sequence number to be allocated in the ring.  If the
>>> difference is greater than the size of the ring then the ring can
>>> be considered full and the allocation would fail.  Of course this
>>> may create an additional hotspot between cores; I'm not sure
>>> whether that would be significant or not.
>>
>> Please do have a look, but I don't think that this will work in this
>> case in practice. It requires tracking the producer tail. Updating
>> the slowest writer requires probing each subsequent slot's status
>> byte to find the new tail, which is a lot of (by then cold) cacheline
>> reads.
>
> I've thought about it a little more and am not convinced it's
> workable, but I'll spend a little more time on it before giving
> up.

I've given up on this method. I just don't see how to make it work.
Rough sketches of the status-bit handshake and of the counter scheme
follow below, for reference.
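
To make the handshake concrete, the userspace side would look roughly
like this.  The loop itself is just the standard TPACKET_V2 RX
pattern; the only change is the value written back when releasing a
slot.  TP_STATUS_USER_TO_KERNEL is of course hypothetical, it doesn't
exist in any kernel today:

    /* Standard TPACKET_V2 RX loop; only the release value changes.
     * TP_STATUS_USER_TO_KERNEL is a proposed bit, not upstream. */
    #include <linux/if_packet.h>

    #define TP_STATUS_USER_TO_KERNEL (1 << 8)  /* proposed, hypothetical */

    static void rx_ring_consume(struct tpacket2_hdr **frames,
                                unsigned int nframes, unsigned int *idx)
    {
        while (frames[*idx]->tp_status & TP_STATUS_USER) {
            /* packet data lives tp_mac bytes into the frame:
             * ... process (char *)frames[*idx] + frames[*idx]->tp_mac ... */

            __sync_synchronize();       /* finish reading before release */

            /* today this would be TP_STATUS_KERNEL (0) */
            frames[*idx]->tp_status = TP_STATUS_USER_TO_KERNEL;

            *idx = (*idx + 1) % nframes;
        }
    }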
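
On the kernel side, the per-slot decision might then look something
like this.  All names here are made up for illustration
(shadow_test_and_set() stands in for whatever the shadow-ring patch
does), and the claim would happen under the existing spin lock just
as it does today:

    /* Illustrative only: names and structure are hypothetical. */
    static void *packet_claim_slot(struct tpacket2_hdr *hdr,
                                   struct shadow_ring *shadow,
                                   unsigned int idx)
    {
        u32 status = READ_ONCE(hdr->tp_status);

        if (status == TP_STATUS_USER_TO_KERNEL) {
            /* strict app: mark in-use in the ring entry itself */
            WRITE_ONCE(hdr->tp_status, TP_STATUS_INUSE);
            return hdr;
        }

        if (status == TP_STATUS_KERNEL &&
            !shadow_test_and_set(shadow, idx)) {
            /* legacy app: in-use state lives in the shadow ring */
            return hdr;
        }

        return NULL;    /* owned by userspace or already claimed */
    }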
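
And for completeness, since I'm abandoning it anyway, the counter
scheme I was describing amounted to roughly the following (again,
entirely made-up names).  The part I could never solve is advancing
max_user_seq when slots complete out of order, which is exactly the
cold status-byte probing Willem points out above:

    /* Sketch of the abandoned counter scheme; not working code. */
    struct ring_seq {
        atomic_long_t next_alloc_seq;  /* next sequence to allocate */
        atomic_long_t max_user_seq;    /* highest seq handed to userspace */
    };

    static bool ring_would_overrun(struct ring_seq *s, long ring_size)
    {
        long alloc = atomic_long_read(&s->next_alloc_seq);
        long user  = atomic_long_read(&s->max_user_seq);

        /* Allocation more than a full ring ahead of what userspace
         * has been handed means the next slot would land on a live
         * entry, so treat the ring as full and fail the allocation. */
        return alloc - user > ring_size;
    }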