On Thursday, July 31, 2003, at 3:07 PM, David Chang wrote:


I've been reading several old postings about how Linux's (among other OSes') implementation of pcap_open_live() doesn't respect the timeout value. That is, pcap_dispatch() (hence pcap_read()) blocks while waiting for a packet to arrive.

Even if the OS *does* respect the timeout value *and* implements it as a timeout that expires regardless of whether any packets have arrived, the underlying OS call that "pcap_read()" (which is what "pcap_dispatch()" and "pcap_loop()" call) makes will block while waiting for a packet to arrive. The only difference is that, on OSes where there's a timeout that expires regardless of whether any packets have arrived, the call doesn't block any longer than the timeout.
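As a concrete sketch of where that timeout fits (the device name "eth0" and the other parameters here are placeholders, and error handling is minimal):

```
#include <pcap.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
	printf("packet of length %u\n", h->len);
}

int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];

	/*
	 * The 4th argument is the read timeout, in milliseconds -
	 * the value whose meaning varies from platform to platform
	 * as described below.
	 */
	pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_open_live: %s\n", errbuf);
		return 1;
	}

	/*
	 * This makes the underlying OS read call, and may block
	 * regardless of the timeout, depending on the platform.
	 */
	if (pcap_dispatch(p, -1, handler, NULL) == -1)
		fprintf(stderr, "pcap_dispatch: %s\n", pcap_geterr(p));
	pcap_close(p);
	return 0;
}
```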


The way that underlying OS call works is different on different platforms.

On some platforms, it blocks until a packet arrives, and then immediately delivers the packet. Systems on which it works this way include:

Linux;

IRIX (I suspect);

HP-UX and, as far as I know, any other systems that use DLPI, with the exception of Solaris (e.g., AIX if it's using DLPI).

On some platforms, it blocks until "enough" packets arrive or until the timeout expires; the timer starts when the call is made. "Enough" is platform-dependent; the intent here is to batch up packets so that only one call is needed to retrieve several packets, but to have a timeout so that you don't wait an indefinite period of time for "enough" packets to arrive if, for example, traffic is arriving very slowly. Systems on which it works this way include:

the BSDs (including Mac OS X);

Digital/Tru64 UNIX;

Win32 with WinPcap.

On some platforms, it blocks until "enough" packets arrive or until the timeout expires; *however*, the timer starts when the first packet arrives, rather than when the call is made, so the call could block indefinitely - it doesn't block indefinitely waiting for "enough" packets to arrive, but it will block until at least one packet has arrived. Solaris (at least Solaris 7) works this way.

On AIX, it's like the BSDs except that the timeout apparently doesn't actually work - the read doesn't complete until "enough" packets arrive. Libpcap works around this by turning BIOCIMMEDIATE on in AIX.

The old postings go on to mention something about the BIOCIMMEDIATE ioctl call to eliminate the blocking, but then say that doing this causes pcap not to batch the packets, delivering them one by one.

BIOCIMMEDIATE - which is BPF-only and doesn't exist on Linux - doesn't eliminate the blocking.


It causes 1 to be interpreted as "enough", so that the read blocks until *a* packet arrives or until the timeout expires. As noted, this gets rid of the batching.
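For what it's worth, turning it on looks roughly like this - a sketch for BPF systems only (the "net/bpf.h" header and the BIOCIMMEDIATE ioctl don't exist on Linux), with "set_immediate" a hypothetical helper name:

```
#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/bpf.h>	/* BPF-only; not present on Linux */
#include <pcap.h>

/*
 * Turn on "immediate mode" on the BPF device underlying a pcap_t,
 * so reads complete as soon as one packet arrives (no batching).
 */
int set_immediate(pcap_t *p)
{
	u_int on = 1;

	return ioctl(pcap_fileno(p), BIOCIMMEDIATE, &on);
}
```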

My question is: what's the proper way to get Linux to not block pcap_read?

If you literally mean "not block", as in "return *immediately* if no packets have arrived", the answer is "put the descriptor in non-blocking mode"; recent versions of libpcap have an API, "pcap_setnonblock()", to do this, and, in older versions, you can just use "fcntl()" on the descriptor that "pcap_fileno()" returns.


If, however, you mean "block, but limit the time that it blocks", the first question is "why do you want to limit the time that it blocks?" I.e., what is your software going to do if "pcap_dispatch()" returns 0?

If it's going to check for input from some other source - i.e., the timeout is a means to an end, not an end in itself, and the end is multiplexing input from various sources - one way to do this, if the other source has a "select()"able file descriptor, is to use a call intended for that purpose, namely "select()" or "poll()". On most platforms, you can do a "select()" or "poll()" on the "pcap_fileno()" descriptor along with your other descriptors, and handle input from whichever of them is ready. On many BSDs, "select()" doesn't work on those descriptors, but, except in, I think, one of the FreeBSD 4.x versions (4.5 or so), you can work around this by

1) put the pcap descriptor into non-blocking mode (see above);

2) use, as the timeout in the "select()" or "poll()", the same time amount used in "pcap_open_live()";

3) call "pcap_dispatch()" on the pcap_t regardless of whether "select()" says you can read from it or not.
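Those three steps might look something like the following sketch, where "other_fd" and "run_loop" are names I've made up for your other input source and the driving loop, and "timeout_ms" is the same timeout you passed to "pcap_open_live()":

```
#include <pcap.h>
#include <sys/select.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
	printf("packet of length %u\n", h->len);
}

int run_loop(pcap_t *p, int other_fd, int timeout_ms)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	int pcap_fd = pcap_fileno(p);

	/* 1) put the pcap descriptor into non-blocking mode */
	if (pcap_setnonblock(p, 1, errbuf) == -1)
		return -1;

	for (;;) {
		fd_set readfds;
		struct timeval tv;

		FD_ZERO(&readfds);
		FD_SET(pcap_fd, &readfds);
		FD_SET(other_fd, &readfds);

		/* 2) use the same timeout as in pcap_open_live() */
		tv.tv_sec = timeout_ms / 1000;
		tv.tv_usec = (timeout_ms % 1000) * 1000;

		if (select((pcap_fd > other_fd ? pcap_fd : other_fd) + 1,
		    &readfds, NULL, NULL, &tv) == -1)
			return -1;

		/* 3) call pcap_dispatch() regardless of what select() said */
		if (pcap_dispatch(p, -1, handler, NULL) == -1)
			return -1;

		if (FD_ISSET(other_fd, &readfds)) {
			/* handle input from the other source here */
		}
	}
}
```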

We have some Solaris machines here and the code runs fine (it doesn't block).

As noted above, at least on Solaris 7, the timer starts when the first packet arrives, so "pcap_dispatch()" can - and *does* - block indefinitely. Note that "arrive", in "'enough' packets arrive", is defined as "arrives at the kernel's packet capture mechanism"; on Solaris, as there is no BPF support in the kernel and no code in libpcap to attempt to generate code for Solaris's less-capable kernel packet filtering mechanism, packets that match the filter *and* packets that don't match the filter both "arrive".

So if you're sniffing on a busy network, the time it takes for a packet to "arrive" might be short enough that "pcap_dispatch()" won't block for long, even if the filter is one that few packets pass. If, however, you run on a very quiet network - as I did at home - you will see "pcap_dispatch()" block a *very* long time.


-
This is the TCPDUMP workers list. It is archived at
http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:[EMAIL PROTECTED]