[tcpdump-workers] savefile.c patch
I feel it's high time we clean up some of the sources. I'd start with savefile.c. Currently it doesn't work for offline data from stdin.

--gv

--- libpcap-2004.05.20/savefile.c	Tue Mar 23 21:18:08 2004
+++ savefile.c	Wed Mar 24 16:29:06 2004
@@ -52,6 +52,12 @@
 #define TCPDUMP_MAGIC 0xa1b2c3d4
 #define PATCHED_TCPDUMP_MAGIC 0xa1b2cd34
 
+#if defined(WIN32) || defined(MSDOS)
+#define SETMODE(file,mode) setmode(file,mode)
+#else
+#define SETMODE(file,mode) ((void)0)
+#endif
+
 /*
  * We use the receiver-makes-right approach to byte order,
  * because time is at a premium when we are writing the file.
@@ -587,6 +593,7 @@
 {
 	if (p->sf.rfile != stdin)
 		(void)fclose(p->sf.rfile);
+	else	SETMODE(fileno(stdin), O_TEXT);
 	if (p->sf.base != NULL)
 		free(p->sf.base);
 }
@@ -607,15 +614,12 @@
 	}
 	memset((char *)p, 0, sizeof(*p));
-
-	if (fname[0] == '-' && fname[1] == '\0')
+	if (fname[0] == '-' && fname[1] == '\0') {
 		fp = stdin;
+		SETMODE(fileno(fp), O_BINARY);
+	}
 	else {
-#ifndef WIN32
-		fp = fopen(fname, "r");
-#else
 		fp = fopen(fname, "rb");
-#endif
 		if (fp == NULL) {
 			snprintf(errbuf, PCAP_ERRBUF_SIZE, "%s: %s", fname,
 			    pcap_strerror(errno));
@@ -726,13 +730,15 @@
 		break;
 	}
 
-#ifndef WIN32
+#if !defined(WIN32) && !defined(MSDOS)
 	/*
 	 * You can do select() and poll() on plain files on most
 	 * platforms, and should be able to do so on pipes.
 	 *
 	 * You can't do select() on anything other than sockets in
 	 * Windows, so, on Win32 systems, we don't have "selectable_fd".
+	 * But one could use 'WaitForSingleObject()' on a HANDLE obtained
+	 * from '_get_osfhandle(p->selectable_fd)'.
 	 */
 	p->selectable_fd = fileno(fp);
 #endif
@@ -748,8 +754,10 @@
 	return (p);
 bad:
-	if (fp)
+	if (fp && fp != stdin)
 		fclose(fp);
+	if (fp == stdin)
+		SETMODE(fileno(stdin), O_TEXT);
 	free(p);
 	return (NULL);
 }
@@ -973,6 +981,7 @@
 pcap_dump_open(pcap_t *p, const char *fname)
 {
 	FILE *f;
+	pcap_dumper_t *pd;
 	int linktype;
 
 	linktype = dlt_to_linktype(p->linktype);
@@ -985,26 +994,23 @@
 	if (fname[0] == '-' && fname[1] == '\0') {
 		f = stdout;
-#ifdef WIN32
-		_setmode(_fileno(f), _O_BINARY);
-#endif
+		SETMODE(fileno(f), O_BINARY);
 	} else {
-#ifndef WIN32
-		f = fopen(fname, "w");
-#else
 		f = fopen(fname, "wb");
-#endif
-		if (f == NULL) {
+		setbuf(f, NULL);	/* XXX - why? */
+	}
+
+	pd = (pcap_dumper_t *)f;
+
+	if (!pd || sf_write_header(f, linktype, p->tzoff, p->snapshot) < 0) {
 		snprintf(p->errbuf, PCAP_ERRBUF_SIZE, "%s: %s",
 		    fname, pcap_strerror(errno));
-		return (NULL);
+		if (pd)
+			pcap_dump_close(pd);
+		pd = NULL;
+		f = NULL;
 	}
-#ifdef WIN32
-	setbuf(f, NULL);	/* XXX - why? */
-#endif
-	}
-	(void)sf_write_header(f, linktype, p->tzoff, p->snapshot);
-	return ((pcap_dumper_t *)f);
+	return (pd);
 }
 
 FILE *
@@ -1026,11 +1032,15 @@
 void
 pcap_dump_close(pcap_dumper_t *p)
 {
+	FILE *fil = (FILE *)p;
 #ifdef notyet
-	if (ferror((FILE *)p))
+	if (ferror(fil))
 		return-an-error;
 	/* XXX should check return from fclose() too */
 #endif
-	(void)fclose((FILE *)p);
+	if (fil == stdin || fil == stdout)
+		SETMODE(fileno(fil), O_TEXT);
+	else
+		fclose(fil);
 }

- This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
[tcpdump-workers] Minor error in print-ipx.c
gcc, at least on my FreeBSD 4.9 box, doesn't like an active statement before the first declaration (compiling tcpdump-2004.05.20):

./print-ipx.c: In function `ipx_print':
./print-ipx.c:60: syntax error before `const'

I fixed it as shown below.

Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]

--- print-ipx.c.orig	Sat May  1 12:06:55 2004
+++ print-ipx.c	Wed May 26 12:18:29 2004
@@ -54,10 +54,10 @@
 void
 ipx_print(const u_char *p, u_int length)
 {
+	const struct ipxHdr *ipx = (const struct ipxHdr *)p;
+
 	if (!eflag)
 		printf("IPX ");
-
-	const struct ipxHdr *ipx = (const struct ipxHdr *)p;
 
 	TCHECK(ipx->srcSkt);
 	(void)printf("%s.%04x ",
Re: [tcpdump-workers] single packet capture time w/pcap vs. recvfrom()
Brandon,

For curiosity's sake (I have a similar app and am seriously interested in performance):

- What platform (OS/processor) are you on?
- How did you measure the time to call recvfrom()? Or perhaps an even more relevant question is how do you use recvfrom() (what's the surrounding code like, do you select() first, etc.)?

I ask because I'm seeing substantially better numbers for recvfrom (0.021 ms); granted, it is for a fairly short message, but that doesn't explain the 1000X performance delta. recvfrom itself is relatively cheap; select() is VERY expensive in comparison (<100K clock cycles for recvfrom versus ~23M clock cycles for select, or ~0.021 ms versus ~9.7 ms on a 2.4 GHz Xeon). Note also that this is for an empty select set with a 0 timeout, so that is JUST the calling overhead; if it's actually checking fd's it will be greater. If you're on a slower platform (embedded perhaps, given your company?) the numbers will obviously be worse; recvfrom seems to scale almost directly proportionally to clock speed.

Based on the numbers I've seen, I would expect recvfrom to be at LEAST as fast if not faster than libpcap, since libpcap (often) uses select() (on some platforms :) to check if the capture device has data ready. libpcap will benefit somewhat from its ability to bundle multiple packets into a buffer (on platforms that support that), but not, I suspect, enough to make up a ~250X performance delta.

I'm also going to agree with Guy. If you have checksum problems and are still seeing the packets, you are doing something I would seriously like to find out how you accomplished. More likely you shouldn't see the packets at all.

Below is a short sample of results and a short overview of my methodology. This is on a single-CPU 2.4 GHz Xeon running RH 9.0 with a stock 2.4.20 kernel.
rdtsc: 1009667286648918 - 1009667286584434 = 64484 ( / one_second) = 0.21 size 36
rdtsc: 1009679499010456 - 1009679498987684 = 22772 ( / one_second) = 0.07 size 36
rdtsc: 1009691711003018 - 1009691710985706 = 17312 ( / one_second) = 0.06 size 36
rdtsc: 1009703923467532 - 1009703923448944 = 18588 ( / one_second) = 0.06 size 36
rdtsc: 1009716135791420 - 1009716135773620 = 17800 ( / one_second) = 0.06 size 36
rdtsc: 1009752772553556 - 1009752772447452 = 106104 ( / one_second) = 0.35 size 36
rdtsc: 1009764984789778 - 1009764984748314 = 41464 ( / one_second) = 0.14 size 36
rdtsc: 1009777197311290 - 1009777197270442 = 40848 ( / one_second) = 0.13 size 36
rdtsc: 1009789410262486 - 1009789410220774 = 41712 ( / one_second) = 0.14 size 36
rdtsc: 1009801622233478 - 1009801622192834 = 40644 ( / one_second) = 0.13 size 36
rdtsc: 1009813840971578 - 1009813840931126 = 40452 ( / one_second) = 0.13 size 36
rdtsc: 1009826046989322 - 1009826046959894 = 29428 ( / one_second) = 0.10 size 36
rdtsc: 1009838259554966 - 1009838259526818 = 28148 ( / one_second) = 0.09 size 36
rdtsc: 1009850472114698 - 1009850472085270 = 29428 ( / one_second) = 0.10 size 36

Here is a code snippet that shows how I do my timings:

	while (1) {
		// We wouldn't actually use select in the real app;
		// we're using it here to make sure we're timing the
		// call to recvfrom for a live socket instead of
		// counting the time recvfrom blocks waiting for a packet.
		FD_ZERO(&rfds);
		FD_SET(recv_s, &rfds);
		select(recv_s + 1, &rfds, NULL, NULL, NULL);

		// time the actual recvfrom call
		rdtsc_ret[index] = rdtsc();
		index = !index;
		size = recvfrom(recv_s, buf, sizeof(buf), 0, NULL, NULL);
		rdtsc_ret[index] = rdtsc();

		printf("rdtsc: %lld - %lld = %lld ( / one_second) = %f size %d\n",
		    rdtsc_ret[index], rdtsc_ret[!index],
		    rdtsc_ret[index] - rdtsc_ret[!index],
		    (double)(rdtsc_ret[index] - rdtsc_ret[!index]) / (double)one_second,
		    size);
	}

rdtsc is defined as an assembly snippet that reads the processor clock register on i386 architectures.
Other architectures are obviously different. The overhead for calling { rdtsc(); index = !index; rdtsc(); } is 84-96 clock cycles, so I just ignored it here since it's way less than the noise.

	extern __inline__ unsigned long long int rdtsc()
	{
		unsigned long long int x;
		__asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
		return x;
	}

one_second is defined as the real processor clock speed in Hz. You need to figure this out yourself (dmesg, "cat /proc/cpuinfo" on Linux, etc.):

	double one_second = 3050.905*100;

We use rdtsc because gettimeofday doesn't have enough resolution to accurately measure a single call like this (resolution on the same machine as above for gettimeofday is slightly worse than 1 ms).

On Sun, May 23, 2004 at 06:37:40PM -0700, Brandon Stafford wrote:

> Hello,
>
> I'm writing a server that captures UDP packets and, after some
> manipulation, sends the data out the serial port. Right now, I'm using
> recvfrom(), but it takes 20 ms to execute for each packet
Re: [tcpdump-workers] savefile.c patch
On May 26, 2004, at 1:55 AM, Gisle Vanem wrote:

> I feel it's high time we clean up some of the sources. I'd start with
> savefile.c. Currently it doesn't work for offline data from stdin. --gv
>
> --- libpcap-2004.05.20/savefile.c	Tue Mar 23 21:18:08 2004
> +++ savefile.c	Wed Mar 24 16:29:06 2004
> @@ -52,6 +52,12 @@
>  #define TCPDUMP_MAGIC 0xa1b2c3d4
>  #define PATCHED_TCPDUMP_MAGIC 0xa1b2cd34
>
> +#if defined(WIN32) || defined(MSDOS)
> +#define SETMODE(file,mode) setmode(file,mode)
> +#else
> +#define SETMODE(file,mode) ((void)0)
> +#endif
> +

Perhaps it should just define SETMODE() as nothing - that would cause

	SETMODE(file, mode);

to expand to

	;

which, as a null statement, should be a valid statement.

Also, is setmode() sufficient with all the compilers that could be used to compile libpcap/WinPcap on Windows (MSVC++, MinGW, etc.), or is _setmode() needed with some compilers? (The code currently uses _setmode().)

> -#ifndef WIN32
> -	fp = fopen(fname, "r");
> -#else
> 	fp = fopen(fname, "rb");
> -#endif

Presumably there are no interesting UN*X platforms left that wouldn't ignore the "b" (Ethereal's library for reading capture files unconditionally uses "rb"), so that should be OK.

> -	if (fp)
> +	if (fp && fp != stdin)
> 		fclose(fp);
> +	if (fp == stdin)
> +		SETMODE(fileno(stdin), O_TEXT);

Why not just

	if (fp) {
		if (fp == stdin)
			SETMODE(fileno(stdin), O_TEXT);
		else
			fclose(fp);
	}

> @@ -973,6 +981,7 @@
>  pcap_dump_open(pcap_t *p, const char *fname)
>  {
>  	FILE *f;
> +	pcap_dumper_t *pd;
>  	int linktype;
>
>  	linktype = dlt_to_linktype(p->linktype);
> @@ -985,26 +994,23 @@
>  	if (fname[0] == '-' && fname[1] == '\0') {
>  		f = stdout;
> -#ifdef WIN32
> -		_setmode(_fileno(f), _O_BINARY);
> -#endif
> +		SETMODE(fileno(f), O_BINARY);
>  	} else {
> -#ifndef WIN32
> -		f = fopen(fname, "w");
> -#else
> 		f = fopen(fname, "wb");
> -#endif
> +		setbuf(f, NULL);	/* XXX - why? */
> +	}

I'm not sure why we're setting the output unbuffered on Windows; even if there's a legitimate reason to do so, I don't see any reason to do so on UN*X - we really don't want to have the standard I/O library routines make a separate write() call for every fwrite() etc. call to the file.
Re: [tcpdump-workers] single packet capture time w/pcap vs. recvfrom()
Guy Harris wrote:

> If a received packet has a bad IP header or UDP checksum, it should get
> discarded at the IP or UDP layer, so that your application would *never*
> see it, not just see it after 20 ms.

I should have been more clear-- the UDP packets have checksums of 0x00, not bad checksums. I believe that this means the checksum is ignored. This still does not explain the 20 ms execution time.

> If that's what you want - i.e., if the server is supposed to log all the
> UDP packets in question, even if they have a checksum error - then it
> should use libpcap.

I have rewritten the application using libpcap, and it is definitely faster. The receive time now varies between 1 ms and ~12 ms. I haven't tested it very thoroughly yet, so I don't know how it does under load. Also, I'm using pcap_next() to grab one packet at a time. I could probably do better grabbing a few at once. I'll keep you posted.

Thanks for the response and the software.

Brandon
Re: [tcpdump-workers] single packet capture time w/pcap vs. recvfrom()
Ryan Mooney wrote:

> Brandon,
>
> For curiosity's sake (I have a similar app and am seriously interested
> in performance):
>
> - What platform (OS/processor) are you on?

OpenBSD 3.4 on an HP Kayak XA, Pentium II, 233 MHz (ugh!)

> - How did you measure the time to call recvfrom()? Or perhaps an even
>   more relevant question is how do you use recvfrom() (what's the
>   surrounding code like, do you select() first, etc.)?

I am measuring execution time using gettimeofday(). I am not using select(), though I did write a version that used select() previously-- it was about the same speed, but I didn't measure it too precisely. I'll post a code snippet later tonight.

> I ask because I'm seeing substantially better numbers for recvfrom
> (0.021 ms), although granted it is for a fairly short message, but that
> doesn't explain the 1000X performance delta.

0.021 ms! This is what I need! A glimmer of hope!

> If you're on a slower platform (embedded perhaps, given your company?)
> the numbers will obviously be worse; recvfrom seems to scale almost
> directly proportionally to clock speed.

The devices sending the packets are slow embedded devices, but the receiver is relatively fast. I also tried the same code on a Via Epia running at 600 MHz with no speed improvement, which is what makes me suspect that there is a timeout involved, rather than just a lot of instructions to execute.

> rdtsc is defined as an assembly snippet that reads the processor clock
> register on i386 architectures. Other architectures are obviously
> different. The overhead for calling { rdtsc(); index = !index; rdtsc(); }
> is 84-96 clock cycles, so I just ignored it since it's way less than
> the noise.
>
> extern __inline__ unsigned long long int rdtsc()
> {
> 	unsigned long long int x;
> 	__asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
> 	return x;
> }

Interesting. I'll post some code snippets with timing results later. I'll also try the same stuff on a 2 GHz Linux machine running the 2.6.3 kernel to see if OpenBSD is a dog.

Thanks for the comments,

Brandon