Hi, Mark,

Glad to hear that your segfault issue has gone away :) even though it sounds 
frustrating not to understand why :(.  Here are some additional responses for 
you:

> On Dec 15, 2020, at 21:00, Mark Ruzindana <ruziem...@gmail.com> wrote:
> 
> I'm taking note of the following change for documentation purposes. It's not 
> the reason for my issue. Feel free to ignore or comment on it. This change 
> was made before and remained after I observed the segfault issue. To flush 
> the packets in the port before the thread is run, I am using 
> "p_frame=hashpipe_pktsock_recv_udp_frame_nonblock(p_ps, bindport)" instead of 
> "p_frame=hashpipe_pktsock_recv_frame_nonblock(p_ps, bindport)" in the while 
> loop; otherwise, there's an infinite loop because packets with other 
> protocols are constantly being captured on the port.

Looping until hashpipe_pktsock_recv_udp_frame_nonblock() returns NULL will only 
discard the initial UDP packets and one non-UDP packet.  What sort of packet 
rate is the interface receiving?  I find it hard to imagine packets being 
received so fast that the discard loop never completes.
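
For reference, here is the sort of flush loop I have in mind; a minimal sketch 
assuming the hashpipe_pktsock API from hashpipe_pktsock.h, with p_ps and 
bindport set up as in your code:

    unsigned char *p_frame;

    /* Drain whatever UDP frames are already queued for bindport, then
     * stop as soon as the ring is empty.  Every frame returned by the
     * recv call must be released back to the kernel, or the ring fills
     * up and stays full. */
    while((p_frame = hashpipe_pktsock_recv_udp_frame_nonblock(p_ps, bindport))) {
        hashpipe_pktsock_release_frame(p_frame);
    }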

> Okay, so now, I'm still experiencing dropped packets. Given a kernel page 
> size of 4096 bytes and a frame size of 16384 bytes, I have tried buffer 
> parameters ranging from 480 to 128000 total frames and 60 to 1000 blocks, 
> respectively, with improvements in throughput in one instance but not in the 
> other three that I have running. The one instance with improvements, on 
> the upper end of that range, exceeds the number of packets expected in a 
> hashpipe shared memory buffer block (the ring buffers in between threads), 
> but only for about four or so of them at the very beginning of a scan; no 
> packets are dropped for the rest of the scan. The other instances, with no 
> recognizable improvements, drop packets throughout the scan, with one of them 
> dropping significantly more than the other two.

If you are running four instances on the same host, do they each bind to a 
different interface?  Multiple instances binding to the same interface will 
not improve performance, because each instance will receive copies of all 
packets that arrive at the interface.  This is almost certainly not what you 
want.  The way the packet socket buffer is specified by frames and blocks is a 
bit unusual, and the rationale for it could be better explained in the kernel 
docs IMHO.
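
For what it's worth, here is how I read the frames/blocks relationship from 
the kernel's packet_mmap documentation; the numbers below just mirror the low 
end of the range you quoted (480 frames, 60 blocks, 16384-byte frames) and are 
not a recommendation:

    #include <linux/if_packet.h>

    struct tpacket_req req;
    req.tp_frame_size = 16384;  /* must be a multiple of TPACKET_ALIGNMENT */
    req.tp_frame_nr   = 480;    /* total frames in the ring */
    req.tp_block_nr   = 60;     /* blocks of physically contiguous memory */
    /* 8 frames per block => 131072-byte blocks, a multiple of the 4096-byte
     * page size.  The kernel insists that
     * tp_frame_nr == tp_block_nr * (tp_block_size / tp_frame_size). */
    req.tp_block_size = req.tp_frame_size * (req.tp_frame_nr / req.tp_block_nr);
    /* setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)); */

In other words, tp_frame_nr is fully determined by the other three values, so 
frames and blocks cannot be varied independently.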

> I'm currently trying a few things to debug this, but I figured that I would 
> ask sooner rather than later. Is there a configuration or step that I may 
> have missed in the implementation of packet sockets? My understanding is that 
> it should handle my current data rates with no problem. So with multiple 
> instances running (four in my case), I should be able to capture data with 0 
> dropped packets (100% data throughput).

What is the incoming packet rate and data rate?  What packet and data rate are 
your instances achieving?  Obviously both of the latter have to keep up with 
both of the former or things won't work.
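
For example (numbers assumed purely for illustration): 8168-byte packets 
arriving at line rate on a 10 GbE link amount to roughly 
10e9 / (8168 * 8) ≈ 153,000 packets per second, or about 1.25 GB/s, and that 
is the sustained rate the receiving instance(s) would have to keep up with to 
get zero drops.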

>  Just a note: with a packet size of 8168 bytes and a frame size of 8192 
> bytes, hashpipe was crashing, but in a way completely unrelated to how it did 
> before. It was not a segfault after capturing the exact number of packets 
> that corresponds to the number of frames in the packet socket ring buffer, as 
> I described in previous emails. The crashes were more inconsistent, and I 
> think it's because the frame size needs to be considerably larger than the 
> packet size. A factor of 2 seemed to be enough. I currently have the frame 
> size set to 16384 (also a multiple of the kernel page size) and do not have 
> an issue with hashpipe crashing.

The frame size used when sizing the buffers needs to be large enough to hold 
the entire packet (including network headers) plus TPACKET_HDRLEN.  A 
frame_size of 8192 bytes and a packet size of 8168 bytes leaves just 24 bytes, 
which is definitely less than TPACKET_HDRLEN.  You could probably use 12288 
bytes (3*4096) instead of 16384 for the frame size if you really want/need to 
minimize memory usage.  I'm not sure what happens if the frame size is not 
large enough.  At best the packets will get truncated, but that's still not 
good.
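
As a rough sanity check, assuming your 8168 bytes is the UDP payload carried 
over plain Ethernet/IPv4 (TPACKET_HDRLEN and TPACKET_ALIGN come from 
<linux/if_packet.h>):

    #include <stdio.h>
    #include <linux/if_packet.h>

    int main(void)
    {
        /* Ethernet (14) + IPv4 (20) + UDP (8) headers ahead of the payload */
        unsigned min_frame = TPACKET_ALIGN(TPACKET_HDRLEN + 14 + 20 + 8 + 8168);
        printf("minimum frame size: %u bytes\n", min_frame);
        return 0;
    }

On a typical 64-bit kernel that works out to a bit over 8200 bytes, i.e. 
already past an 8192-byte frame, which is consistent with the trouble you saw 
at that size.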

Dave
