[tcpdump-workers] Re: HP-UX support and portability
--- Begin Message ---
I meant to include an "only" in there :) Wasn't meaning to suggest any heroic efforts should be made.

happy benchmarking,

rick

On Tue, Mar 12, 2024 at 4:08 PM Guy Harris wrote:
> On Mar 12, 2024, at 2:07 PM, Rick Jones via tcpdump-workers <tcpdump-workers@lists.tcpdump.org> wrote:
>
> > If https://en.wikipedia.org/wiki/HP-UX#Version_history is any indication,
> > there are ~21 months left on HP's (er, sorry, HPE's) own support for HP-UX.
>
> As far as I know, now that Itania are no longer being manufactured and
> shipped, and given that HPE haven't, as far as I know, shown any sign of
> plans to port HP-UX to x86-64, the future is something like "no more HP-UX,
> just the ability to run HP-UX Itanium binaries on x86-64 Linux with
> binary-to-binary translation and either HP-UX system call emulation or
> HP-UX shared library call emulation".
>
> I can't find much to indicate the details of the strategy, except that it
> involves "Linux containers" in some fashion; if one of those particular
> "Linux containers" won't run native Linux/x86-64 applications and emulated
> HP-UX/Itanium apps in parallel, maybe there'd be some demand for the HP-UX
> tcpdump running in a container; otherwise, running a Linux tcpdump using
> Linux libpcap would probably be the future.
--- End Message ---
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org
[tcpdump-workers] Re: HP-UX support and portability
--- Begin Message ---
If https://en.wikipedia.org/wiki/HP-UX#Version_history is any indication, there are ~21 months left on HP's (er, sorry, HPE's) own support for HP-UX.

happy benchmarking,

rick jones

On Tue, Mar 12, 2024 at 1:48 PM Denis Ovsienko wrote:
> Hello all.
>
> HP-UX is one of the OSes nominally supported by libpcap and tcpdump,
> but it is rather difficult (or expensive) to find a live HP-UX host with
> shell for testing and development. So before I forget again, this is
> far from ideal, but is the best reference material I managed to find for
> reasoning about HP-UX portability:
>
> http://hpux.connect.org.uk/hppd/hpux/Networking/Admin/libpcap-1.10.4/
> http://hpux.connect.org.uk/hppd/hpux/Networking/Admin/tcpdump-4.99.4/
>
> The "source code" tarballs are not the pristine release archives, but
> archives of working copies after applying the patches and running
> ./configure, so there is a config.h with (or without) the various
> HAVE_ macros.
>
> The binary packages also include a description of the changes applied
> before building; perhaps some of that could be upstreamed.
>
> --
> Denis Ovsienko
--- End Message ---
Re: [tcpdump-workers] proposed change: make tcpdump -n and tcpdump -nn behave differently
My proverbial two cents is that this is changing the semantics of an option, semantics which go back literally decades. New semantics should be associated with new options.

On Tue, Oct 30, 2018 at 2:48 AM Denis Ovsienko wrote:
> Hello list.
>
> At https://github.com/the-tcpdump-group/tcpdump/pull/702 there is a
> simple proposed change, which seems to be an improvement:
> -
> Subject: Introduce -nn option
>
> This changes the semantics of the -n option so only name lookups are
> skipped. Port numbers *are* translated to their string representations.
> Option -nn then has the same semantics as -n had originally.
>
> This is a partial upstreaming of tcpdump-4.9.2-3 used in CentOS 7.5.
> -
>
> If anybody sees how this change isn't an improvement, please make your
> point on the list.
>
> Thank you.
>
> --
> Denis Ovsienko
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers
Re: [tcpdump-workers] Time to enable GUESS_TSO by default?
How does this look?

On Fri, Apr 13, 2018 at 3:56 PM Michael Richardson wrote:
> Rick Jones wrote:
> > It has been a few years since GUESS_TSO was added. Might it be time to
> > enable it by default?
>
> send pull request... update documentation :-)
>
> --
> ] Never tell me the odds!                      | ipv6 mesh networks [
> ] Michael Richardson, Sandelman Software Works | network architect  [
> ] m...@sandelman.ca  http://www.sandelman.ca/  | ruby on rails      [
[tcpdump-workers] Time to enable GUESS_TSO by default?
It has been a few years since GUESS_TSO was added. Might it be time to enable it by default?

rick jones
Re: [tcpdump-workers] posix_fadvise()
On 04/19/2016 10:02 AM, Denis Ovsienko wrote:
> Hello list.
>
> As far as documentation goes, some sense may be presumed in libpcap
> calling posix_fadvise() with POSIX_FADV_SEQUENTIAL. This way the OS would
> be able to adjust the buffers better to read the .pcap file.
>
> Does anybody have any practical experience with posix_fadvise() to
> comment if the gain is ever visible with present-day hardware and OSes?

Well, in the long ago and far away when web server benchmarking was all the rage, I and others with whom I was working found that getting a web server to use posix_fadvise() to tell the file system the web server was no longer interested in trailing sections of its access log was a clear win for keeping less-frequently but still accessed URLs in the file cache.

If nothing else, if tcpdump isn't doing so already, it would be a worthwhile thing to add to the writing of the capture file. And, for that matter, just about anything generating logfiles...

happy benchmarking,

rick jones

[Note to moderator: I've updated my subscription to my new email and cancelled the post that got moderated.]
Re: [tcpdump-workers] How to capture the data at transport layer (not on interface)
On 03/11/2015 02:30 AM, srinivasarao...@bel.co.in wrote:
> Hai,
>
> How to capture the data at transport layer (not on interface)

Apart from nettl in HP-UX, I've not come across an OS/application which enables such a thing.

rick jones

every extra byte in a message is another microgram of CO2 released... (OK, I made that up but still :)
Re: [tcpdump-workers] Libpcap performance problem
On 01/28/2015 06:57 AM, Giray Simsek wrote:
> Hi,
>
> We are currently working on testing Linux network performance. We have
> two Linux machines in our test setup. Machine1 is the attacker machine
> from which we are sending SYN packets to Machine2 at a rate of 3 million
> pps. We are able to receive these packets on Machine2's external
> interface and forward them through the internal interface without
> dropping any packets. So far no problems.
>
> However, when we start another app that captures traffic on Machine2's
> external interface using libpcap, the amount of traffic that is forwarded
> drops significantly. Obviously, this second libpcap app becomes a
> bottleneck. It can capture only about 800Kpps of traffic and only about
> 800Kpps can be forwarded in this case. This drop in the amount of
> forwarded traffic is not acceptable for us.
>
> Is there any way we can overcome this problem? Are there any settings on
> the OS, ixgbe driver or libpcap that will allow us to forward all the
> traffic? Both machines are running Linux kernel 3.15.

TCP SYN segments would be something like 66 bytes each (I'm assuming some options being set in the SYN). At 3 million packets per second, that would be 198 million bytes per second. Perhaps overly paranoid of me, but can the storage on Machine2 keep up with that without, say, the bulk of the RAM being taken over by buffer cache and perhaps inhibiting skb allocations?

If you aren't trying to forward the SYNs and just let them bit-bucket, is the packet capture able to keep up?

rick jones
Re: [tcpdump-workers] What's the point of "oui Unknown"?
On 10/22/2014 10:29 AM, Michael Richardson wrote:
> Rick Jones wrote:
> > > It seems to me that without more robust support this is just annoying
> > > noise and, at the very least, the Unknown oui printing should be
> > > removed.
> > >
> > > Thoughts?
> >
> > What would removing it do to scripts attempting to parse tcpdump
> > output?
>
> I'm thinking that we leave the () there, and just make it blank when we
> don't know rather than say "oui unknown".

I suppose that might be marginally less annoying, but then I don't use -e all that often in the first place. Still, when I have, I've not been bothered by the oui unknown messages.

rick
Re: [tcpdump-workers] What's the point of "oui Unknown"?
On 10/12/2014 01:00 PM, John Hawkinson wrote:
> It seems to me that without more robust support this is just annoying
> noise and, at the very least, the Unknown oui printing should be removed.
>
> Thoughts?

What would removing it do to scripts attempting to parse tcpdump output?

rick jones
[tcpdump-workers] Prospects for per-receive queue tracing or perhaps tagging packets with their receive queue for later filtering?
I find myself looking at the likes of ethtool -S output on a Linux system, for a multi-queue NIC, and seeing drops reported for a specific receive queue. I thus find myself wishing I could know which packets/flows were arriving on that receive queue so I could, presumably, figure out who the "top talker" through that receive queue might be.

I can find the top talker overall for traffic arriving on the NIC of course, but that requires ass-u-me-ing that the top talker overall would be the top talker on the queue with the drops, and given Murphy's Law and differing IRQ assignments, that probably isn't a good assumption.

thoughts?

rick jones
Re: [tcpdump-workers] ICMP echo reply
Please keep the discussion on the list - I don't have a monopoly on knowledge in this area.

On 07/23/2014 05:50 PM, Christ French wrote:
> *) i have tcpdump traces from both the client and the server
> *) assume yes their clocks are synchronized
> *) these are the ICMP packets:
>
> 15:51:58.844673 IP 192.168.0.1 > 192.168.0.2: ICMP echo request, id 27396, seq 1, length 64
> 15:51:58.844881 IP 192.168.0.2 > 192.168.0.1: ICMP echo reply, id 27396, seq 1, length 64
>
> *) the client and the server are both VM(s) on the same server

If you have tcpdump traces from both the client and the server I would expect to see a total of four lines of trace: two from the trace on the client and two from the trace on the server.

Exactly *how* are the VMs' clocks synchronized? If you are going to want to know the time it took to get from the server back to the client, using tcpdump timestamps, those clocks are going to have to be rather well synchronized indeed - down to some small number of microseconds.

Are the client and the server running the same versions of the same operating system, and using the same NIC emulation etc etc?

Why is it important to know how long it took in this case, given it is clear it didn't take very long at all?

Given the likely symmetry of the path between client and server, were I pressed for an answer, I would probably start by ass-u-me-ing that the time from server to client was 1/2 the total round-trip time.

rick jones

> that is all
>
> Thanks A Lot
>
> On Thursday, July 24, 2014 1:52 AM, Rick Jones wrote:
> > On 07/19/2014 09:20 AM, French_christ wrote:
> > > I just have a question and i am supposed to answer it.
> > > The question is: an ICMP echo request was sent by the client, then an
> > > ICMP echo reply was received by the client; both have timestamps in
> > > the tcpdump output. The question is how long the ICMP echo reply took
> > > to be sent from the server to the client.
> >
> > Questions, the answers to which will perhaps help lead you to the/an
> > answer.
> >
> > *) Do you have just the one tcpdump trace or do you have tcpdump traces
> > from both the client and the server?
> > *) Do the client and the server synchronize their clocks?
> > *) How large is the latency as reported by ping (I'm assuming ping is
> > the source of these ICMP Echo Requests and so triggers the ICMP echo
> > replies)?
> > *) What do you know about the network path from the client to the server?
> > *) What do you know about the network path from the server to the client?
> >
> > Answers to at least some, if not all, those questions will go a long way
> > towards being able to say something about how long it took the ICMP Echo
> > Reply to travel from the server to the client.
> >
> > rick jones
Re: [tcpdump-workers] ICMP echo reply
On 07/19/2014 09:20 AM, French_christ wrote:
> I just have a question and i am supposed to answer it.
> The question is: an ICMP echo request was sent by the client, then an
> ICMP echo reply was received by the client; both have timestamps in the
> tcpdump output. The question is how long the ICMP echo reply took to be
> sent from the server to the client.

Questions, the answers to which will perhaps help lead you to the/an answer.

*) Do you have just the one tcpdump trace or do you have tcpdump traces from both the client and the server?
*) Do the client and the server synchronize their clocks?
*) How large is the latency as reported by ping (I'm assuming ping is the source of these ICMP Echo Requests and so triggers the ICMP echo replies)?
*) What do you know about the network path from the client to the server?
*) What do you know about the network path from the server to the client?

Answers to at least some, if not all, those questions will go a long way towards being able to say something about how long it took the ICMP Echo Reply to travel from the server to the client.

rick jones
Re: [tcpdump-workers] "not vlan" filter expression brokencatastrophically!
On 02/04/2013 01:29 AM, David Laight wrote:
> > I agree. Honestly I think a perfectly reasonable stance to take is
> > requesting that the filters get packets *as seen on the wire/nic*. I
> > think that's the mental model everyone uses, and any deviation from
> > that model is prone to bugs in the kernel, libpcap, and for the end
> > user.
>
> TX and RX segmentation offload also confuse matters here. I think Linux
> can give libpcap large TCP fragments even when the hardware isn't doing
> segmentation offload. This also breaks the mental model.

That would be "GRO" - Generic Receive Offload. NIC-based "LRO" (Large Receive Offload) would as well. Might also include GSO - Generic Segmentation Offload - rather than TSO. TX Checksum Offload (CKO) also breaks the mental model in that it tricks tcpdump into reporting false invalid-checksum warnings.

I don't think that Linux is particularly alone in the matter of stateless offloads taking what someone running tcpdump sees farther from the stated philosophy. Other stacks may or may not have GSO and GRO (I think Solaris has something like GSO called Multi-Data Transmit) but do have CKO and TSO.

I do not know what to suggest about the matter with vlans and those headers being/not being stripped automagically, but I suspect that the likelihood of getting Linux (or another stack) to toast the stateless offloads in the name of packet capture purity is epsilon.

It may be unpleasant, but if the goal is to see traffic as seen on the wire/NIC, packet capture in a general-purpose end system isn't going to achieve it any longer. Nor has it really for many years. There are too many demands on performance to make the stateless offloads go away, particularly since individual cores have ceased getting faster. Seeing just what traffic looks like "on the wire" will require interposing a "dedicated" capture device of some sort - perhaps implemented in a general-purpose system with all the stateless stuff disabled, and the affiliated performance issues.

But on the end-system involved in the conversations? Nope. The stateless offloads and their effect on what one sees via packet capture are here to stay.

rick jones
Re: [tcpdump-workers] Decoding the unencrypted part(s) of SSL/TLS?
On 12/11/2012 05:58 AM, Wesley Shields wrote:
> On Mon, Dec 10, 2012 at 11:38:29PM -0500, Michael Richardson wrote:
> > >>>>> "Rick" == Rick Jones writes:
> > Rick> Is there a version of tcpdump in the works which will decode
> > Rick> the unencrypted portions of an SSL/TLS session? Or do I need
> > Rick> to look elsewhere?
> >
> > Are you asking if there is a decoder for the SSL/TLS handshakes or are
> > you asking if there is something that will, given a private key,
> > decrypt the SSL?

The Client/Server Hellos are sufficient for my present purposes.

> > Yes/no. You have, in general, to do TCP reassembly as TLS blocks might
> > span TCP segments. Fortunately, you can use:
> >
> > http://www.rtfm.com/ssldump/
> >
> > to do exactly that.
>
> There are some problems with ssldump when building on newer-ish systems
> (at least I think there were last time I tried to use it). If you can
> get it to work it is good.

I've given it a quick try and it seems to be giving me what I need, though it may not be all that up-to-date on compression method IDs. I did an apt-get so didn't have to build from source - though I may if I need to go in and enhance its knowledge of IDs.

thanks all,

rick jones
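For anyone following along, typical ssldump invocations look something like the below; file names are placeholders, and availability/behaviour of the flags may vary with the ssldump build in use.

```shell
# Decode the TLS record and handshake structure (ClientHello,
# ServerHello, certificates, ...) from an existing capture file.
ssldump -r capture.pcap

# With the server's RSA private key available, -d additionally
# decrypts and displays the application data.
ssldump -r capture.pcap -k server.key -d
```

ssldump performs the TCP reassembly itself, which is why it can handle TLS records that span segment boundaries where a per-packet decoder could not.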
[tcpdump-workers] Decoding the unencrypted part(s) of SSL/TLS?
Is there a version of tcpdump in the works which will decode the unencrypted portions of an SSL/TLS session? Or do I need to look elsewhere?

thanks,

rick jones
Re: [tcpdump-workers] incorrect tcp checksum on Linux tun interfaces?
On 12/04/2012 02:24 PM, Gert Doering wrote:
> Hi,
>
> On Tue, Dec 04, 2012 at 11:09:43AM -0500, Michael Richardson wrote:
> > What's curious to me is that the checksum is not zero. If it was being
> > "offloaded" into a step after the PF_PACKET interface, it would be
> > zero, right?
>
> I'm not sure. I find this highly irritating, and I'm fairly sure that
> *here* are the folks that have seen all the funnies when tcpdumping on
> specific interfaces...

If I recall correctly, the TCP checksum is "seeded" with the pseudo-header checksum. That could be passed down separately, but I suspect it is still functionally correct if that is simply shoved into the TCP checksum field.

rick jones
[tcpdump-workers] is "not multicast" supposed to include broadcast?
I'm messing about with some link-layer stuff, and in the testing thereof I've experimented with sending to a broadcast address. However, when I say:

tcpdump -i eth0 not ip and not multicast and not arp

I do not see my broadcast frame being sent (which is neither IP, nor ARP, but is an 802.2 TEST frame). However, if I drop "and not multicast" I will see my broadcast frame.

Given there is also an alias for broadcast, that behaviour seems incorrect - "not multicast" should still show broadcast. Thoughts?

This is with tcpdump 4.1.1 and libpcap 1.1.1.

rick jones

-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Multiple interface capture and thread safety
On 05/11/2012 06:26 AM, Wiener Schnitzel wrote:
> I see. As I said, I might need to merge the data coming from the
> interfaces, so I need an algorithm to compare the age of packets with
> different sources.

I don't think you will be able to arrive at that goal with perfect accuracy. Can it be like the game of horseshoes and be "close enough"?

In addition to packets from even the same interface taking different paths up the stack, there is also the matter of different interfaces providing notification of packet arrival at the host at different times - mechanisms like interrupt avoidance/coalescing mean that if Packet 1 arrived on NIC A a microsecond before Packet 2 arrived on NIC B, NIC B may still tell the host about Packet 2 before NIC A told the host about Packet 1.

You could, I suppose, disable interrupt coalescing, and perhaps even get NIC HW timestamping going, but even then I suspect there will be some skew between NIC A's concept of time and NIC B's.

I trust there isn't any assumption being made about the relative send times of packets based on their arrival times - certainly in different flows, but depending on the nature of the transport(s) carrying the flows, perhaps even within a single flow (eg a flow of UDP traffic).

rick jones
Re: [tcpdump-workers] why I'm capturing packets larger than MTU size
On 02/23/2012 06:31 AM, Andriy Tylychko wrote:
> I capture network traffic on Debian 5 and 6 with libpcap v. 1.2.1
> compiled from sources. Then I send this traffic by pcap_sendpacket().
> Sometimes there are packets (both TCP and UDP) larger than the default
> MTU size (1500 bytes). I cannot send these packets; the error is: "send
> error: packetSendPacket failed".
>
> Found this post: http://seclists.org/tcpdump/2007/q2/112 "[Patch] libpcap
> support for IP fragment reassembly", but I didn't enable such reassembly.
> How can I disable this reassembling?

The NIC might support LRO (Large Receive Offload), in which case it could be coalescing consecutive TCP segments. It may also support IP fragment reassembly. Even if the NIC does not support LRO, in "new enough" Linux kernels there is GRO, a segment coalescing just above the driver in the networking stack.

Disabling GRO is done via ethtool -K. LRO too, though go back far enough and it may need to be done at the module parameter level. I don't know if UFO (UDP Fragmentation Offload) is both directions or not, but that too is an ethtool -K thing.

rick jones
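For reference, the knobs mentioned above look something like this. The interface name is a placeholder, and exact feature-name support varies by driver and kernel version (lro and ufo in particular may not be toggleable everywhere).

```shell
# show the current offload settings for the interface
ethtool -k eth0

# disable generic receive offload (software coalescing above the driver)
ethtool -K eth0 gro off

# disable NIC-based large receive offload, where supported
ethtool -K eth0 lro off

# and, if UDP fragmentation offload is implicated:
ethtool -K eth0 ufo off
```

With these off, the capture should see individual on-the-wire-sized segments again - at some cost in forwarding/receive performance.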
Re: [tcpdump-workers] twice past the taps, thence out to net?
> > but is that getting a little close?
> >
> > rick jones
>
> Sure ! I only pointed out a possible problem, and not gave a full patch,
> since we also need to change the opposite threshold (when we XON the
> queue at TX completion)
>
> You can see its not even consistent with the minimum for a single TSO
> frame ! Most probably your high requeue numbers come from this too low
> value given the real requirements of the hardware (4 + nr_frags
> descriptors per skb)
>
> /* How many Tx Descriptors do we need to call netif_wake_queue ? */
> #define IGB_TX_QUEUE_WAKE 16
>
> Maybe we should CC Intel guys
>
> Could you try following patch ?

I would *love* to. All my accessible igb-driven hardware is in an environment locked to the kernels already there :(

Not that it makes it more possible for me to do it, but I suspect it does not require 30 receivers to reproduce the dups with netperf TCP_STREAM. Particularly if the tx queue len is at 256 it may only take 6 or 8. In fact let me try that now... Yep, with just 8 destinations/concurrent TCP_STREAM tests from the one system one can still see the duplicates in the packet trace taken on the sender.

Perhaps we can trouble the Intel guys to try to reproduce what I've seen?

rick

> Thanks !
>
> diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
> index c69feeb..93ce118 100644
> --- a/drivers/net/ethernet/intel/igb/igb.h
> +++ b/drivers/net/ethernet/intel/igb/igb.h
> @@ -51,8 +51,8 @@ struct igb_adapter;
>  /* TX/RX descriptor defines */
>  #define IGB_DEFAULT_TXD     256
>  #define IGB_DEFAULT_TX_WORK 128
> -#define IGB_MIN_TXD         80
> -#define IGB_MAX_TXD         4096
> +#define IGB_MIN_TXD         max_t(unsigned, 80U, IGB_TX_QUEUE_WAKE * 2)
> +#define IGB_MAX_TXD         4096
>
>  #define IGB_DEFAULT_RXD     256
>  #define IGB_MIN_RXD         80
> @@ -121,8 +121,11 @@ struct vf_data_storage {
>  #define IGB_RXBUFFER_16384  16384
>  #define IGB_RX_HDR_LEN      IGB_RXBUFFER_512
>
> -/* How many Tx Descriptors do we need to call netif_wake_queue ? */
> -#define IGB_TX_QUEUE_WAKE   16
> +/* How many Tx Descriptors should be available
> + * before calling netif_wake_subqueue() ?
> + */
> +#define IGB_TX_QUEUE_WAKE   (MAX_SKB_FRAGS * 4)
> +
>  /* How many Rx Buffers do we bundle into one write to the hardware ? */
>  #define IGB_RX_BUFFER_WRITE 16 /* Must be power of 2 */
Re: [tcpdump-workers] twice past the taps, thence out to net?
> This may be of help.
>
> http://www.tcptrace.org/faq_ans.html#FAQ%2021

Given the behaviour seems to be (at least for the foreseeable future) a "feature", is there someplace in tcptrace/tcpdump to mention this? The tcptrace FAQ seems to have stopped growing sometime in 2003 - I could, I suppose, mention this behaviour to the Debian maintainer for tcptrace, but is there a writeup to be enhanced for tcpdump?

rick jones
Re: [tcpdump-workers] twice past the taps, thence out to net?
On 12/15/2011 11:00 AM, Eric Dumazet wrote:
> > Device's work better if the driver proactively manages
> > stop_queue/wake_queue. Old devices used TX_BUSY, but newer devices tend
> > to manage the queue themselves.
>
> Some 'new' drivers like igb can be fooled in case skb is gso segmented ?
>
> Because igb_xmit_frame_ring() needs skb_shinfo(skb)->nr_frags + 4
> descriptors, igb should stop its queue not at MAX_SKB_FRAGS + 4, but
> MAX_SKB_FRAGS*4
>
> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> index 89d576c..989da36 100644
> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> @@ -4370,7 +4370,7 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
>  	igb_tx_map(tx_ring, first, hdr_len);
>
>  	/* Make sure there is space in the ring for the next send. */
> -	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 4);
> +	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS * 4);
>
>  	return NETDEV_TX_OK;

Is there a minimum transmit queue length here? I get the impression that MAX_SKB_FRAGS is at least 16 and is 18 on a system with 4096-byte pages. The previous addition then would be OK so long as the TX queue was always at least 22 entries in size, but now it would have to always be at least 72?

I guess things are "OK" at the moment:

raj@tardy:~/net-next/drivers/net/ethernet/intel/igb$ grep IGB_MIN_TXD *.[ch]
igb_ethtool.c:	new_tx_count = max_t(u16, new_tx_count, IGB_MIN_TXD);
igb.h:#define IGB_MIN_TXD	80

but is that getting a little close?

rick jones
Re: [tcpdump-workers] twice past the taps, thence out to net?
> More exactly, we call dev_queue_xmit_nit() from dev_hard_start_xmit()
> _before_ giving skb to device driver.
>
> If device driver returns NETDEV_TX_BUSY, and a qdisc was setup on the
> device, packet is requeued. Later, when queue is allowed to send again
> packets, packet is retransmitted (and traced a second time in
> dev_queue_xmit_nit())

Is this then an unintended-consequence bug, or a known feature?

rick

> You can see the 'requeues' counter from "tc -s -d qdisc" output :
>
> qdisc mq 0: dev eth2 root
> Sent 29421597369 bytes 20301716 pkt (dropped 0, overlimits 0 requeues 371)
> backlog 0b 0p requeues 371

Sure enough:

$ tc -s -d qdisc
qdisc mq 0: dev eth0 root
Sent 2212158799862 bytes 1938268098 pkt (dropped 0, overlimits 0 requeues 4975139)
backlog 0b 0p requeues 4975139

rick jones
[tcpdump-workers] twice past the taps, thence out to net?
While looking at "something else" with tcpdump/tcptrace, tcptrace emitted lots of notices about hardware-duplicated packets being detected (same TCP sequence number and IP datagram ID). Sure enough, if I go into the tcpdump trace (taken on the sender) I can find instances of what it was talking about, separated in time by rather less than I would expect to be the RTO, and often as not with few if any intervening arriving ACKs to trigger anything like fast retransmit. And besides, those would have a different IP datagram ID, no?

I did manage to reproduce the issue with plain netperf TCP_STREAM tests. I had one sending system with 30 concurrent netperf TCP_STREAM tests to 30 other receiving systems. There are "hardware duplicates" in the sending trace, but no duplicate segments (that I can find thus far) in the two receiver-side traces I took. Of course that doesn't mean "conclusively" there were two actual sends, but it suggests there weren't.

While I work through the "obtain permission" path to post the packet traces (don't ask...) I thought I would ask if anyone else has seen something similar. In this case, all the systems are running a 2.6.38-8 Ubuntu kernel (the same sorts of issues which delay my just putting the traces up on netperf.org preclude a later kernel, and I've no other test systems :( ), with Intel 82576 interfaces being driven by:

$ sudo ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 1.8-2
bus-info: :05:00.0

All the systems were connected to the same switch. It is projecting, but given that the interface was fully saturated, and there were 30 concurrent streams making 64K TSO sends, it "feels" like some sort of "go past the packet tap and be captured, find a queue/resource past the tap unavailable, get re-queued above the tap, get captured again when resent" sort of thing.

Where in the Linux stack does the tap used by libpcap 1.1.1 reside?

rick jones
[tcpdump-workers] trimming bytes from already captured packets
I have some packet captures which were taken with a snaplen of 128 bytes, but I would like to convert them to captures with a snaplen of, say, 66 bytes. The existing tcpdump/libpcap does not *seem* to do that - is there already a utility out there which can, or is this an "enhancement opportunity?"

happy benchmarking,

rick jones
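One option outside tcpdump/libpcap itself, assuming Wireshark's command-line tools are available: editcap can rewrite an existing capture with a smaller snaplen. The file names below are placeholders.

```shell
# editcap ships with Wireshark; -s sets a new (smaller) snaplen,
# truncating each packet record in the copy it writes out
editcap -s 66 capture-128.pcap capture-66.pcap
```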
Re: [tcpdump-workers] questions on -B, performance, mbufs, and
On 09/28/2011 03:57 PM, Sanjay Sundaresan wrote:
> What is the meaning of "dropped by interface"? "Dropped by kernel" means
> packets dropped due to lack of memory at the kernel; in the same way,
> what does an interface drop signify?

If the numbers for "dropped by interface" correlate with the packet drops being reported by ethtool -S over the length of time tcpdump was running, then it means packets dropped by the NIC.

So, do:

ethtool -S > before
tcpdump ...
ethtool -S > after

and then "subtract" before from after and compare that with what tcpdump reports for dropped by interface. 99 times out of 10 the stats reported by ethtool -S are statistics as measured by the NIC itself.

rick jones
Re: [tcpdump-workers] questions on -B, performance, mbufs, and
On 09/27/2011 07:32 PM, Jon Schipp wrote:
> Hello Guy, I'm now doing testing with tcpdump on an Ubuntu machine. One difference I noticed was that in addition to "dropped by kernel", tcpdump on Ubuntu also reports "dropped by interface". Is this specific to Linux? (I haven't experienced this on FreeBSD.) Is this an Ubuntu distro addendum, or has this been added by the tcpdump team? Where do the numbers come from for "dropped by interface"? You've already explained "dropped by kernel"; I was just wondering how this differs. Would this be the number reported by ifconfig?

If, as the name suggests, those are drops reported by the NIC, presumably the value you see being emitted by tcpdump would track rather closely with the stats reported for the interface via ethtool -S.

rick jones
Re: [tcpdump-workers] Suggestion: Pcap-over-IP client support in
What are the issues/benefits/downfalls one way or t'other between the two schemes - over ssh and a specific connection - when it comes to making certain that this thing forwarding captured traffic isn't simply chasing its own tail, forwarding captures of its forwarding of captures of its forwarding of captures...

rick jones
Re: [tcpdump-workers] post-commit emailing
On 09/13/2011 07:22 AM, Michael Richardson wrote:
> >>>>> "Rick" == Rick Jones writes:
>   >> Guy and I were discussing adding post-commit hooks to the repos
>   >> to send out summaries of activities.
>   >>
>   >> Is there an objection if they go to this list? Or do people
>   >> prefer a new list?
>   >>
>   >> I note that the github.com/mcr/{tcpdump,libpcap} is pushed every
>   >> night, and you can also get RSS feeds from there.
>
>   Rick> What sort of frequency of email are you expecting?
>
> Well, I was going to point at ohloh.net: https://www.ohloh.net/p/tcpdump but they don't let me write a URL directly to the right page. They say:
>
>   30-Day Commit Activity, Aug 15 - Sep 13:
>     4 committers made 11 commits
>     11 files modified
>     87 lines added
>     70 lines removed
>
> I think that this is a bit low, so double it.

While that would be considerably higher than the current tcpdump-workers email rate (as I perceive it, not actually measured) it does not strike me as an onerous level of emails.

rick jones
Re: [tcpdump-workers] post-commit emailing
On 09/07/2011 10:02 AM, Michael Richardson wrote: Guy and I were discussing adding post-commit hooks to the repos to send out summaries of activities. Is there an objection if they go to this list? Or do people prefer a new list? I note that the github.com/mcr/{tcpdump,libpcap} is pushed every night, and you can also get RSS feeds from there. What sort of frequency of email are you expecting? - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] [PATCH] ifSpeed in sFlow is 64 bits not 32
On Thu, 2011-06-02 at 10:57 -0700, Guy Harris wrote: > On May 27, 2011, at 5:59 PM, Rick Jones wrote: > > > The ifSpeed field of a generic interface counter in sFlow is 64 bits. > > The "overlay" definition in print-sflow.c is correct, but the actual > > extract for printing is using EXTRACT_32BITS rather than EXTRACT_64BITS, > > which leads to an incorrect report for speed. > > Checked into the trunk and 4.2 branches Excellent. > (with a fix to the format string to use PRIu64). Oops - I keep forgetting that all my compiles are 64 bit and so don't remember to check for such things. rick > - > This is the tcpdump-workers list. > Visit https://cod.sandelman.ca/ to unsubscribe. - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
[tcpdump-workers] [PATCH] ifSpeed in sFlow is 64 bits not 32
The ifSpeed field of a generic interface counter in sFlow is 64 bits. The "overlay" definition in print-sflow.c is correct, but the actual extract for printing is using EXTRACT_32BITS rather than EXTRACT_64BITS, which leads to an incorrect report for speed.

Signed-off-by: Rick Jones

diff --git a/print-sflow.c b/print-sflow.c
index f27370a..c7e5bc0 100644
--- a/print-sflow.c
+++ b/print-sflow.c
@@ -316,7 +316,7 @@ print_sflow_counter_generic(const u_char *pointer, u_int len
     printf("\n\t      ifindex %u, iftype %u, ifspeed %u, ifdirection %u (%s)",
            EXTRACT_32BITS(sflow_gen_counter->ifindex),
            EXTRACT_32BITS(sflow_gen_counter->iftype),
-           EXTRACT_32BITS(sflow_gen_counter->ifspeed),
+           EXTRACT_64BITS(sflow_gen_counter->ifspeed),
            EXTRACT_32BITS(sflow_gen_counter->ifdirection),
            tok2str(sflow_iface_direction_values, "Unknown",
                    EXTRACT_32BITS(sflow_gen_counter->ifdirection)));

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap capture performance drop
On Fri, 2011-05-27 at 10:39 -0700, Guy Harris wrote: > On May 27, 2011, at 10:16 AM, Rick Jones wrote: > > > Is this new libpcap going to be guaranteed that the underlying NIC HW > > isn't doing Large Receive Offload, or that the tracepoint in the stack > > is below any stack's attempt to do Generic Receive Offload? > > If > > 1) your kernel supports the ethtool ioctls and libpcap was built with > headers that support those ioctls > > and > > 2) the code below (which I checked in a few days ago) can detect all > the forms of offloading that could cause large "packets" to be delivered > > then, yes, the new libpcap will not trust the MTU value if any of the forms > of offloading are enabled. (It also checks for TCP segmentation offloading > and UDP fragmentation offloading, so that large *transmitted* "packets" won't > cause a problem.) Excellent! And that makes me wonder if I should add similar offload checking (and reporting) code to netperf for its omni tests... rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap capture performance drop
On Fri, 2011-05-27 at 14:20 +0200, ri...@happyleptic.org wrote:
> -[ Mon, May 23, 2011 at 12:38:43AM -0700, Guy Harris ]
> >
> > On May 23, 2011, at 12:31 AM, ri...@happyleptic.org wrote:
> >
> > > Which brings the question: how could one find out the MTU of a pcap_handle in order not to set caplen to 65535?
> >
> > See pcap-linux.c in the top of the trunk or of the 1.2 branch. (Short answer: SIOCGIFMTU, as in the iface_get_mtu() routine.)
>
> If I understand this code correctly, in the next release of the libpcap, if a client program asks for a capture length bigger than the MTU then the size allocated for each frame in the ring buffer will be sized down to avoid wasting space?
>
> If so then I just have to wait for the new libpcap :-)

Is this new libpcap going to be guaranteed that the underlying NIC HW isn't doing Large Receive Offload, or that the tracepoint in the stack is below any stack's attempt to do Generic Receive Offload?

rick jones
Re: [tcpdump-workers] libpcap capture performance drop
On Fri, 2011-05-20 at 12:02 -0700, Guy Harris wrote:
> On Sep 6, 2010, at 11:45 AM, Doktor Bernd wrote:
>
> > If I recompile with the HAVE_PACKET_RING stuff *not* commented out I get the bad performance as with the packaged versions from Ubuntu. So the performance drop is caused by that part of libpcap.
>
> The packet-ring stuff has fixed-length slots, which means that the number of slots is the buffer size divided by the size of the slots.
>
> The slot size is calculated from the snapshot length; what snapshot length are you using? If, for example, this is on Ethernet, and your snapshot length is > 1518 (1518 just in case the CRC is delivered as part of the packet; it is with BPF in Mac OS X, for example, and I think on some other BPF platforms, but it might not be on Linux), that might reduce the number of ring buffer slots and thus increase the number of packet drops, especially if the snapshot length is, for example, the tcpdump/Wireshark default of 65535.

Are there alignment differences for the different buffer sizes? For example, when one would use 1518, would one be better off using 1520 to end on a 4-byte boundary and so begin on a 4-byte boundary if these buffers are carved one after the other?

rick jones
[tcpdump-workers] first pass at verbose printing of host sFlow PDUs
Folks - Attached are three files - first the diff against top-of-trunk (well, I snapped it a few days ago, so i'm ass-u-me-ing no one else has modifed the file) for print-sflow.c which adds a first pass at printing host sflow PDUs. The second is a pcap file with 10 such PDUs and the third is an example of the output as it presently stands. As I'm not sure I am (or anyone else is) 100% happy with the format I'm not suggesting the output be checked-in at this time, the .pcap and .cooked files are simply for others' enjoyment and evaluation. One thing I'm never really sure about is when to do everything on one line and when to split it up... happy benchmarking, rick jones diff --git a/print-sflow.c b/print-sflow.c index f27370a..82c3706 100644 --- a/print-sflow.c +++ b/print-sflow.c @@ -61,6 +61,8 @@ static const char rcsid[] _U_ = * */ +#define IPV6_AGENT_OFFSET 12 + struct sflow_datagram_t { u_int8_t version[4]; u_int8_t ip_version[4]; @@ -213,6 +215,12 @@ struct sflow_expanded_counter_sample_t { #define SFLOW_COUNTER_BASEVG4 #define SFLOW_COUNTER_VLAN 5 #define SFLOW_COUNTER_PROCESSOR 1001 +#define SFLOW_COUNTER_HOST_DESC 2000 +#define SFLOW_COUNTER_HOST_ADAPTORS 2001 +#define SFLOW_COUNTER_HOST_CPU 2003 +#define SFLOW_COUNTER_HOST_MEMORY 2004 +#define SFLOW_COUNTER_HOST_DISC 2005 +#define SFLOW_COUNTER_HOST_NET_IO 2006 static const struct tok sflow_counter_type_values[] = { { SFLOW_COUNTER_GENERIC, "Generic counter"}, @@ -221,6 +229,12 @@ static const struct tok sflow_counter_type_values[] = { { SFLOW_COUNTER_BASEVG, "100 BaseVG counter"}, { SFLOW_COUNTER_VLAN, "Vlan counter"}, { SFLOW_COUNTER_PROCESSOR, "Processor counter"}, +{ SFLOW_COUNTER_HOST_DESC, "Host Description"}, +{ SFLOW_COUNTER_HOST_ADAPTORS, "Host Adaptors"}, +{ SFLOW_COUNTER_HOST_CPU, "Host CPU"}, +{ SFLOW_COUNTER_HOST_MEMORY, "Host Memory"}, +{ SFLOW_COUNTER_HOST_DISC, "Host Disc"}, +{ SFLOW_COUNTER_HOST_NET_IO, "Host Network I/O"}, { 0, NULL} }; @@ -251,7 +265,7 @@ struct sflow_generic_counter_t { 
u_int8_tifinbroadcastpkts[4]; u_int8_tifindiscards[4]; u_int8_tifinerrors[4]; -u_int8_tifinunkownprotos[4]; +u_int8_tifinunknownprotos[4]; u_int8_tifoutoctets[8]; u_int8_tifoutunicastpkts[4]; u_int8_tifoutmulticastpkts[4]; @@ -303,6 +317,136 @@ struct sflow_vlan_counter_t { u_int8_tdiscards[4]; }; +#define SFLOW_OS_NAME_UNKNOWN 0 +#define SFLOW_OS_NAME_OTHER 1 +#define SFLOW_OS_NAME_LINUX 2 +#define SFLOW_OS_NAME_WINDOWS 3 +#define SFLOW_OS_NAME_DARWIN 4 +#define SFLOW_OS_NAME_HPUX5 +#define SFLOW_OS_NAME_AIX 6 +#define SFLOW_OS_NAME_DRAGONFLY 7 +#define SFLOW_OS_NAME_FREEBSD 8 +#define SFLOW_OS_NAME_NETBSD 9 +#define SFLOW_OS_NAME_OPENBSD10 +#define SFLOW_OS_NAME_OSF11 +#define SFLOW_OS_NAME_SOLARIS12 + +static const struct tok sflow_os_name_values[] = { +{ SFLOW_OS_NAME_UNKNOWN, "Unknown"}, +{ SFLOW_OS_NAME_OTHER, "Other"}, +{ SFLOW_OS_NAME_LINUX, "Linux"}, +{ SFLOW_OS_NAME_WINDOWS, "Windows"}, +{ SFLOW_OS_NAME_DARWIN, "Darwin"}, +{ SFLOW_OS_NAME_HPUX, "HP-UX"}, +{ SFLOW_OS_NAME_AIX, "AIX"}, +{ SFLOW_OS_NAME_DRAGONFLY, "DRAGONFLY"}, +{ SFLOW_OS_NAME_FREEBSD, "FreeBSD"}, +{ SFLOW_OS_NAME_NETBSD, "NetBSD"}, +{ SFLOW_OS_NAME_OPENBSD, "OpenBSD"}, +{ SFLOW_OS_NAME_OSF, "OSF"}, +{ SFLOW_OS_NAME_SOLARIS, "Solaris"}, +{ 0, NULL} +}; + + +#define SFLOW_MACH_TYPE_UNKNOWN 0 +#define SFLOW_MACH_TYPE_OTHER1 +#define SFLOW_MACH_TYPE_X86 2 +#define SFLOW_MACH_TYPE_X86_64 3 +#define SFLOW_MACH_TYPE_IA64 4 +#define SFLOW_MACH_TYPE_SPARC5 +#define SFLOW_MACH_TYPE_ALPHA6 +#define SFLOW_MACH_TYPE_POWERPC 7 +#define SFLOW_MACH_TYPE_M68K 8 +#define SFLOW_MACH_TYPE_MIPS 9 +#define SFLOW_MACH_TYPE_ARM 10 +#define SFLOW_MACH_TYPE_HPPA 11 +#define SFLOW_MACH_TYPE_S390 12 + +static const struct tok sflow_mach_type_values[] = { +{ SFLOW_MACH_TYPE_UNKNOWN , "Unknown"}, +{ SFLOW_MACH_TYPE_OTHER , "Other"}, +{ SFLOW_MACH_TYPE_X86 , "x86"}, +{ SFLOW_MACH_TYPE_X86_64 , "x86_64"}, +{ SFLOW_MACH_TYPE_IA64 , "ia64"}, +{ SFLOW_MACH_TYPE_SPARC , "SPARC"}, +{ SFLOW_MACH_TYPE_ALPHA , "Alpha"}, +{ 
SFLOW_MACH_TYPE_POWERPC , "PowerPC"}, +{ SFLOW_MACH_TYPE_M68K , "M68K"}, +{ SFLOW_MACH_TYPE_MIPS , "MIPS"}, +{
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Wed, 2011-04-27 at 15:21 -0400, Michael Richardson wrote:
> Rick, I've committed your pcap file and .out file. I edited the out file to remove the dates (-t option), and I suggest you want to generate one file for each -v level.
>
> It's pretty important for me to have the .pcap and .out file. You can run things directly as:
>   cd tests
>   ./TESTonce sflow_multiple_counter_30_pdus.pcap sflow_multiple_counter_30_pdus.out "-t -v"
>
> The raw output goes into NEW/foo, and DIFF/foo should be zero length if things are okay. If you like what is in NEW/foo, then cp NEW/foo.out foo.out.

Cool. I will try to be good about including updated .pcap and .out files with subsequent fixes. And there likely will be subsequent fixes if I ever get more time :)

Looking at the code, there is a bug lurking if ever an actual IPv6 agent id is used - right now it is ass-u-me-ing IPv4.

I still think that since all the routines being called are checking lengths against their structure sizes, some of the additional length checks are redundant, but I haven't worked it through completely and may still be confused about some things there. But extra length checks aren't nearly as bad as missing ones, so I'm not going to sweat it too much.

Also, I may be using the wrong masks and shifts for type and index in counter samples and am just getting lucky.

rick
[tcpdump-workers] output style question
I have an output style question before I continue hacking at print-sflow.c. Some of the fields in the PDUs are encoded - the top two bits are a format which changes the meaning of the remaining 30 bits. My question is whether the printing should simply emit the format and the value as separate items, or change the "description" text based on the format. For example, sflowtool from InMon does the latter mostly, but also a bit of the former:

switch(sample->inputPortFormat) {
  case 3: sf_log("inputPort format==3 %u\n", sample->inputPort); break;
  case 2: sf_log("inputPort multiple %u\n", sample->inputPort); break;
  case 1: sf_log("inputPort dropCode %u\n", sample->inputPort); break;
  case 0: sf_log("inputPort %u\n", sample->inputPort); break;
}

switch(sample->outputPortFormat) {
  case 3: sf_log("outputPort format==3 %u\n", sample->outputPort); break;
  case 2: sf_log("outputPort multiple %u\n", sample->outputPort); break;
  case 1: sf_log("outputPort dropCode %u\n", sample->outputPort); break;
  case 0: sf_log("outputPort %u\n", sample->outputPort); break;
}

What is/should be the way that is done in tcpdump?

rick jones
Re: [tcpdump-workers] [PATCH] pay attention to the enterprise
On Fri, 2011-04-15 at 10:02 -0700, Guy Harris wrote:
> On Apr 14, 2011, at 2:59 PM, Rick Jones wrote:
>
> > Thanks to some traces sent my way by Gavin McCullagh, and a comparison against the output of inMon's sflowtool, I can confidently say "Yes Virginia, there is an enterprise other than zero." Which means lest we start trying to decode something as what it is not, we best actually look at the enterprise field and make sure it is one we recognize.
>
> Checked into the trunk and 4.2 branches and pushed.

Excellent. So, does anyone know what sFlow enterprise ID 8800 might be all about? And within that context, what an 8 byte, type 2 flow record is? :-)

rick jones
[tcpdump-workers] [PATCH] pay attention to the enterprise field of the sflow flow and counter record format
Thanks to some traces sent my way by Gavin McCullagh, and a comparison against the output of inMon's sflowtool, I can confidently say "Yes Virginia, there is an enterprise other than zero." Which means lest we start trying to decode something as what it is not, we best actually look at the enterprise field and make sure it is one we recognize. Signed-off-by: Rick Jones diff --git a/print-sflow.c b/print-sflow.c index c508824..f27370a 100644 --- a/print-sflow.c +++ b/print-sflow.c @@ -170,6 +170,13 @@ struct sflow_expanded_flow_raw_t { u_int8_theader_size[4]; }; +struct sflow_ethernet_frame_t { +u_int8_t length[4]; +u_int8_t src_mac[8]; +u_int8_t dst_mac[8]; +u_int8_t type[4]; +}; + struct sflow_extended_switch_data_t { u_int8_t src_vlan[4]; u_int8_t src_pri[4]; @@ -468,6 +475,7 @@ sflow_print_counter_records(const u_char *pointer, u_int len, u_int records) { u_int tlen; u_int counter_type; u_int counter_len; +u_int enterprise; const struct sflow_counter_record_t *sflow_counter_record; nrecords = records; @@ -480,10 +488,13 @@ sflow_print_counter_records(const u_char *pointer, u_int len, u_int records) { return 1; sflow_counter_record = (const struct sflow_counter_record_t *)tptr; - counter_type = EXTRACT_32BITS(sflow_counter_record->format); + enterprise = EXTRACT_32BITS(sflow_counter_record->format); + counter_type = enterprise & 0x0FFF; + enterprise = enterprise >> 20; counter_len = EXTRACT_32BITS(sflow_counter_record->length); - printf("\n\t%s (%u) length %u", - tok2str(sflow_counter_type_values,"Unknown",counter_type), + printf("\n\tenterprise %u, %s (%u) length %u", + enterprise, + (enterprise == 0) ? 
tok2str(sflow_counter_type_values,"Unknown",counter_type) : "Unknown", counter_type, counter_len); @@ -492,36 +503,37 @@ sflow_print_counter_records(const u_char *pointer, u_int len, u_int records) { if (tlen < counter_len) return 1; - - switch (counter_type) { - case SFLOW_COUNTER_GENERIC: - if (print_sflow_counter_generic(tptr,tlen)) - return 1; - break; - case SFLOW_COUNTER_ETHERNET: - if (print_sflow_counter_ethernet(tptr,tlen)) - return 1; - break; - case SFLOW_COUNTER_TOKEN_RING: - if (print_sflow_counter_token_ring(tptr,tlen)) - return 1; - break; - case SFLOW_COUNTER_BASEVG: - if (print_sflow_counter_basevg(tptr,tlen)) - return 1; - break; - case SFLOW_COUNTER_VLAN: - if (print_sflow_counter_vlan(tptr,tlen)) - return 1; - break; - case SFLOW_COUNTER_PROCESSOR: - if (print_sflow_counter_processor(tptr,tlen)) - return 1; - break; - default: - if (vflag <= 1) - print_unknown_data(tptr, "\n\t\t", counter_len); - break; + if (enterprise == 0) { + switch (counter_type) { + case SFLOW_COUNTER_GENERIC: + if (print_sflow_counter_generic(tptr,tlen)) + return 1; + break; + case SFLOW_COUNTER_ETHERNET: + if (print_sflow_counter_ethernet(tptr,tlen)) + return 1; + break; + case SFLOW_COUNTER_TOKEN_RING: + if (print_sflow_counter_token_ring(tptr,tlen)) + return 1; + break; + case SFLOW_COUNTER_BASEVG: + if (print_sflow_counter_basevg(tptr,tlen)) + return 1; + break; + case SFLOW_COUNTER_VLAN: + if (print_sflow_counter_vlan(tptr,tlen)) + return 1; + break; + case SFLOW_COUNTER_PROCESSOR: + if (print_sflow_counter_processor(tptr,tlen)) + return 1; + break; + default: + if (vflag <= 1) + print_unknown_data(tptr, "\n\t\t", counter_len); + break; + } } tptr += counter_len; tlen -= counter_len; @@ -613,6 +625,22 @@ print_sflow_raw_packet(const u_char *pointer, u_int len) { return 0; } +static int +print_sflow_ethernet_frame(const u_char *pointer, u_int len) { + +const struct sflow_ethernet_frame_t *sflow_ethernet_frame; + +if (len < sizeof(struct sflow_ethernet_frame_t)) + 
return 1; + +sflow_ethernet_frame = (const struct sflow_ethernet_frame_t *)pointer; + +printf("\n\t frame len %u, type %u", + EXTRACT_32BITS(sflow_ethernet_frame->length), + EXTRACT_32BITS(sflow_ethernet_frame-
Re: [tcpdump-workers] [PATCH] replacement print-sflow.c to include
On Thu, 2011-04-14 at 11:35 -0700, Guy Harris wrote:
> On Apr 13, 2011, at 1:02 PM, Rick Jones wrote:
>
> > To enable printing of non-expanded samples I've shuffled a bunch of code around and created a bunch of smaller routines to more easily support printing of both expanded and non-expanded counter and flow samples. I've done simple testing of non-expanded counter and flow, and expanded counter, but I don't have expanded flow at present with which to test. So, that part of the change is only compile/eyeball tested.
>
> Checked in, with some tweaks (making variables unsigned, adding _U_ on unused parameters to stub routines, and adding some additional checks) and pushed on the trunk and 4.2 branch.

Cool. BTW, I just found a bug in the print_counters routines - I was manipulating based on the size of the pointer rather than the struct to which it pointed. In 64-bit (where I'm running) the two were the same, but in 32-bit they were not. So, it should be:

tptr += sizeof(struct sflow_counter_record_t);
tlen -= sizeof(struct sflow_counter_record_t);

and a similar change for the flow_counter printing routine.

rick
[tcpdump-workers] [PATCH] replacement print-sflow.c to include "non-expanded" counter and flow samples
To enable printing of non-expanded samples I've shuffled a bunch of code around and created a bunch of smaller routines to more easily support printing of both expanded and non-expanded counter and flow samples. I've done simple testing of non-expanded counter and flow, and expanded counter, but I don't have expanded flow at present with which to test. So, that part of the change is only compile/eyeball tested. The diff would likely be as large as the file itself so I'm simply sending the file itself. I suspect it doesn't matter, but this is from a git clone from the 12th. I'm not terribly git-savvy or I could give the various ids... Signed-off-by: Rick Jones rick jones /* * Copyright (c) 1998-2007 The TCPDUMP project * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that: (1) source code * distributions retain the above copyright notice and this paragraph * in its entirety, and (2) distributions including binary code include * the above copyright notice and this paragraph in its entirety in * the documentation or other materials provided with the distribution. * THIS SOFTWARE IS PROVIDED ``AS IS'' AND * WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT * LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE. 
* * The SFLOW protocol as per http://www.sflow.org/developers/specifications.php * * Original code by Carles Kishimoto * * Expansion and refactoring by Rick Jones */ #ifndef lint static const char rcsid[] _U_ = "@(#) $Header: /tcpdump/master/tcpdump/print-sflow.c,v 1.1 2007-08-08 17:20:58 hannes Exp $"; #endif #ifdef HAVE_CONFIG_H #include "config.h" #endif #include #include #include #include #include "interface.h" #include "extract.h" #include "addrtoname.h" /* * sFlow datagram * * 0 1 2 3 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | Sflow version (2,4,5) | * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | IP version (1 for IPv4 | 2 for IPv6)| * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | IP Address AGENT (4 or 16 bytes) | * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | Sub agent ID | * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | Datagram sequence number | * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * | Switch uptime in ms | * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * |num samples in datagram| * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ * */ struct sflow_datagram_t { u_int8_t version[4]; u_int8_t ip_version[4]; u_int8_t agent[4]; u_int8_t agent_id[4]; u_int8_t seqnum[4]; u_int8_t uptime[4]; u_int8_t samples[4]; }; struct sflow_sample_header { u_int8_t format[4]; u_int8_t len[4]; }; #define SFLOW_FLOW_SAMPLE 1 #define SFLOW_COUNTER_SAMPLE 2 #define SFLOW_EXPANDED_FLOW_SAMPLE 3 #define SFLOW_EXPANDED_COUNTER_SAMPLE 4 static const struct tok sflow_format_values[] = { { SFLOW_FLOW_SAMPLE, "flow sample" }, { SFLOW_COUNTER_SAMPLE, "counter sample" }, { SFLOW_EXPANDED_FLOW_SAMPLE, "expanded flow sample" }, { SFLOW_EXPANDED_COUNTER_SAMPLE, "expanded counter sample" }, { 0, NULL} }; struct sflow_flow_sample_t { 
u_int8_tseqnum[4]; u_int8_ttypesource[4]; u_int8_trate[4]; u_int8_tpool[4]; u_int8_tdrops[4]; u_int8_tin_interface[4]; u_int8_tout_interface[4]; u_int8_trecords[4]; }; struct sflow_expanded_flow_sample_t { u_int8_tseqnum[4]; u_int8_ttype[4]; u_int8_tindex[4]; u_int8_trate[4]; u_int8_tpool[4]; u_int8_tdrops[4]; u_int8_tin_interface_format[4]; u_int8_tin_interface_value[4]; u_int8_tout_interface_format[4]; u_int8_tout_interface_value[4]; u_int8_trecords[4]; }; #define SFLOW_FLOW_RAW_PACKET 1 #define SFLOW_FLOW_ETHERNET_FRAME 2 #define SFLOW_FLOW_IPV4_DATA 3 #define SFLOW_FLOW_IPV6_DATA 4 #define SFLOW_FLOW_EXTENDED_SWITCH_DATA 1001 #define SFLOW_FLOW_EXTENDED_ROUTER_DATA 1002 #define SFLOW_FLOW_EXTENDED_GATEWAY_DATA 1003 #define SFLOW_FLOW_EXTENDED_USER_DATA 1004 #define SFLOW_FLOW_EXTENDED_URL_DATA 1005 #define SFLOW_FLOW_EXTENDED_MPLS_DATA 1006 #define SFLOW_FLOW_EXTENDED_NAT_DATA 10
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Fri, 2011-04-08 at 17:04 -0700, Rick Jones wrote:
> Either I fumbled trying the patch or something else has gone amiss, because with a freshly cloned tcpdump, and a new set of sflows, I get output like:

What has happened is I have switched switches, and the switch I'm using is not sending the expanded format. So, I'm busily adding support for that... I have something working (*) for both counters and flows and will send it on its way before the end of the week - I am also doing some refactoring to make the code a bit easier to follow, because I kept getting lost in all the switch statements :)

rick jones

(*) defined as "the output looks sane"
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
Either I fumbled trying the patch or something else has gone amiss, because with a freshly cloned tcpdump, and a new set of sflows, I get output like:

raj@tardy:~/tcpdump$ ./tcpdump -r /tmp/sflow.pcap -vvv
reading from file /tmp/sflow.pcap, link-type EN10MB (Ethernet)
16:45:18.468863 IP (tos 0x0, ttl 64, id 48091, offset 0, flags [none], proto UDP (17), length 1232)
    the-switch.54321 > z400.sflow: [udp sum ok] sFlowv5, IPv4 agent the-switch, agent-id 0, seqnum 5908, uptime 2294190, samples 6, length 1204
	flow sample (1), length 208,
	flow sample (1), length 148,
	flow sample (1), length 208,
	flow sample (1), length 148,
	flow sample (1), length 208,
	flow sample (1), length 208,
...
16:47:41.409631 IP (tos 0x0, ttl 64, id 49088, offset 0, flags [none], proto UDP (17), length 1348)
    the-switch.54321 > z400.sflow: [udp sum ok] sFlowv5, IPv4 agent the-switch, agent-id 0, seqnum 6903, uptime 2437130, samples 7, length 1320
	flow sample (1), length 208,
	flow sample (1), length 148,
	flow sample (1), length 208,
	counter sample (2), length 168,
	counter sample (2), length 168,
	counter sample (2), length 168,
	counter sample (2), length 168,

when I was expecting something rather more verbose. I've uploaded the pcap file to ftp://ftp.netperf.org/netperf/misc/sflow.pcap.gz . This time it is from a completely private switch doing nothing but sending sflow counters and configured for some flow samples while I was running a single instance of netperf through it.

rick jones
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Mon, 2011-04-04 at 19:06 -0700, Guy Harris wrote:
> On Apr 4, 2011, at 12:15 PM, Rick Jones wrote:
>
> > The former is easy enough - attached is a compressed pcap file with 30 captured PDUs which can be used for testing. They are all just counter samples, there are no flow samples. Also attached is a compressed "cooked" file with the correct output based on Guy's patch. I've given it a quick once-over to verify that it looks sane.
>
> OK, I've checked my change in. I presume it's OK to check the capture file and the output into the tests directory; I'll do that once we decide whether to uuencode the capture files or not.

Yes. I did secure permission for the capture file to become part of the test suite. If I'm able to get a completely synthetic setup going I can see about trying to include some flow samples and send that along if you like. No committed timeframe on that though.

rick
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Mon, 2011-04-04 at 18:49 -0700, Guy Harris wrote: > On Apr 4, 2011, at 12:15 PM, Rick Jones wrote: > > > As for the latter, I don't have some of the pre-reqs installed: > > > > raj@tardy:~/tcpdump$ make check > > uudecode --help || (echo "No uudecode program found, not running tests"; > > echo "apt-get/rpm install sharutils?"; exit 1) > > /bin/sh: uudecode: not found > > No uudecode program found, not running tests > > apt-get/rpm install sharutils? > > make: *** [check] Error 1 > > > > so unless it is particularly important to have I'd rather not bother. > > Well, either > > 1) you don't have uudecode installed It is that. If you really need make check output for this situation I can go ahead and install uudecode and whatever else make check wants, but if it isn't really necessary I'd rather not add the clutter. rick - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Sun, 2011-04-03 at 20:27 +0200, Michael Richardson wrote: > >>>>> "Rick" == Rick Jones writes: > Rick> tcpdump 4.1.1, and 4.3.0-PRE-GIT_2011_04_01 prints just one > Rick> expanded counter sample per captured PDU because it mistakenly > Rick> skips forward sflow_sample_len when it has already adjusted > Rick> tprt and tlen while it was printing the sample contents. This > > Can you send pcap file with reference output as well for 'make check'? The former is easy enough - attached is a compressed pcap file with 30 captured PDUs which can be used for testing. They are all just counter samples, there are no flow samples. Also attached is a compressed "cooked" file with the correct output based on Guy's patch. I've given it a quick once-over to verify that it looks sane. As for the latter, I don't have some of the pre-reqs installed: raj@tardy:~/tcpdump$ make check uudecode --help || (echo "No uudecode program found, not running tests"; echo "apt-get/rpm install sharutils?"; exit 1) /bin/sh: uudecode: not found No uudecode program found, not running tests apt-get/rpm install sharutils? make: *** [check] Error 1 so unless it is particularly important to have I'd rather not bother. happy benchmarking, rick sflow_multiple_counter_30_pdus.pcap.gz Description: GNU Zip compressed data sflow_multiple_counter_30_pdus.cooked.gz Description: GNU Zip compressed data - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] [PATCH] print-sflow.c - actually print more
On Fri, 2011-04-01 at 20:11 -0700, Guy Harris wrote: > On Apr 1, 2011, at 6:03 PM, Rick Jones wrote: > > > tcpdump 4.1.1, and 4.3.0-PRE-GIT_2011_04_01 prints just one expanded > > counter sample per captured PDU because it mistakenly skips forward > > sflow_sample_len when it has already adjusted tprt and tlen while it was > > printing the sample contents. This then leaves it confused about what it > > is seeing. Shifting the adjustment to the "default sample" case where > > the sample wasn't printed appears to fix this, though there is still > > some question as to whether it should advance by sflow_sample_len or > > some adjustment thereof. > > Actually, it should probably be checking whether sflow_sample_len is > too small for the sample; if it decrements sflow_sample_len as it > goes, after doing that check, that should also fix the same problem. > > Does this do it? (It also makes some white-space changes to make the > "tptr += ..." and "tlen -= ..." stuff consistent, and makes various > counts, types, and lengths unsigned.) It seems to do it. rick - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
[tcpdump-workers] [PATCH] print-sflow.c - actually print more than one extended counter sample
tcpdump 4.1.1, and 4.3.0-PRE-GIT_2011_04_01 prints just one expanded counter sample per captured PDU because it mistakenly skips forward sflow_sample_len when it has already adjusted tptr and tlen while it was printing the sample contents. This then leaves it confused about what it is seeing. Shifting the adjustment to the "default sample" case where the sample wasn't printed appears to fix this, though there is still some question as to whether it should advance by sflow_sample_len or some adjustment thereof.

Signed-off-by: Rick Jones

raj@tardy:~/tcpdump$ diff print-sflow.c.orig print-sflow.c
559a560,565
>             /* since we didn't know about it, we haven't advanced
>                through it and need to move-on to the next one. what
>                isn't clear is if we should adjust by the full
>                sflow_sample_len or not */
>             tptr += sflow_sample_len;
>             tlen -= sflow_sample_len;
562,563c568,571
<         tptr += sflow_sample_len;
<         tlen -= sflow_sample_len;
---
>         /* if we are here, it means we successfully decoded our way
>            through the sample, and we do *not* want to actually skip
>            forward by sflow_sample_len like we used to, because we've
>            already advanced through the counters. */

rick jones

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
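The double-advance bug described in the patch can be sketched as a toy model. The function below is illustrative only (the names, sample lengths, and the `double_skip` flag are made up, not the real print-sflow.c parser): decoding a sample already moves the cursor through its bytes, so also skipping by the sample length afterward runs off the end of the PDU after the first sample.

```c
#include <assert.h>

/* Toy model of the print-sflow.c loop discussed above (illustrative names,
 * not the real parser).  Decoding a sample advances the cursor through its
 * bytes; the old code then skipped forward by the sample length a second
 * time, so only one decoded sample was seen per PDU. */
static unsigned
samples_visited(const unsigned *sample_len, unsigned nsamples, int double_skip)
{
    unsigned total = 0, off = 0, seen = 0, i;

    for (i = 0; i < nsamples; i++)
        total += sample_len[i];          /* bytes in the whole PDU */

    for (i = 0; i < nsamples && off < total; i++) {
        seen++;
        off += sample_len[i];            /* decoding consumed these bytes */
        if (double_skip)
            off += sample_len[i];        /* the bug: skip the sample again */
    }
    return seen;
}
```

With two equal-length samples, the fixed walk visits both while the buggy walk stops after the first, which matches the "prints just one expanded counter sample per captured PDU" symptom.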
Re: [tcpdump-workers] Best OS / Distribution for gigabit capture?
Fabian Schneider wrote: Hi, Regarding the OS we have done testing on this some five years ago. Back then we found that FreeBSD performed better than Linux. Yet there have been improvements proposed for both Linux (memory mapping, and Luca Deri's work) and FreeBSD (zero-copy BPF and Alexandre Fiveg's work). To get details just google all this. Yet, experience from operating large-scale packet capturing systems shows that the biggest challenge usually is to have a disk system that is fast enough to write the stream of packets to disk. You might want to check this first. (e.g. you can run Bonnie++ to see how fast your disk system is.)

And be certain to beat on the filesystem/disc with I/Os of the size that will be coming from your packet capturing...

rick jones

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] HUGE packet-drop
The best way I found to do this was to have the application that is receiving the packets running on the same cores that the kernel is pulling them off the NIC. Since my machine had two chips each with 4-6 cores (8-12 logical cores), I limited my application to run on the same chip as the NIC was receiving its interrupts.

Interesting - because at least for a time there, before LRO/GRO etc, when it was near impossible to achieve link-rate with full-size frames on 10GbE, I found that in netperf testing I could get higher throughput when netserver was bound to a core *other* than the one taking interrupts from the NIC - preferably though one sharing the last level cache. I guess in the case of "plain" networking (with full-sized segments) the number of cache lines ping-ponged from one cache to another is less than the packet capture case.

Rereading the rest though I see you weren't binding the app explicitly to the same core, but simply to the same processor, so perhaps the same thing is happening with your app as with my netserver :)

rick jones

PS - don't forget that some NICs have multiple IRQs... :)

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] tcp sequence and ack number with libpcap
Can you provide some examples of those "weird seq and ack numbers"?

Thanks for your reply. With weird I meant different than obtained with "tcpdump -vv". The numbers are much too high:

seq 101688001 ack 580300460
seq 103252140 ack 276497601
seq 101689793 ack 580300460
seq 101592513 ack 580300460
seq 102902956 ack 276497601
seq 102902700 ack 276497601
seq 101689281 ack 580300460
seq 101689025 ack 580300460
seq 102902444 ack 276497601
seq 101688769 ack 580300460

With "tcpdump -r -n -vv tcp" I get:

17:53:35.347343 IP (tos 0x10, ttl 64, id 40919, offset 0, flags [DF], proto TCP (6), length 92) 193.34.150.174.22 > 83.247.48.159.52238: Flags [P.], seq 949215706:949215758, ack 3908965070, win 80, length 52

absolute sequence numbers reported above

17:53:35.347348 IP (tos 0x10, ttl 64, id 40920, offset 0, flags [DF], proto TCP (6), length 156) 193.34.150.174.22 > 83.247.48.159.52238: Flags [P.], seq 52:168, ack 1, win 80, length 116

17:53:35.367017 IP (tos 0x0, ttl 122, id 8778, offset 0, flags [DF], proto TCP (6), length 40) 83.247.48.159.52238 > 193.34.150.174.22: Flags [.], cksum 0xb0f5 (correct), seq 1, ack 52, win 16356, length 0

almost certainly relative sequence numbers reported there - for any given four-tuple of local/remote IP, local/remote port, tcpdump will report the "raw" sequence numbers on the first segment it sees and then will subtract those values from the sequence numbers in subsequent segments it sees. Are you printing out any other characteristics of the TCP segments to act as a sanity check - say to make sure you are dealing with the correct offsets?

rick jones

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
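The relative-sequence-number convention described above can be sketched in a few lines: tcpdump remembers the first raw sequence number it sees on a flow and reports later segments relative to it (the function name here is illustrative, not tcpdump's actual code).

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of tcpdump's relative sequence numbers: subtract the first raw
 * sequence number seen on the flow.  Unsigned 32-bit arithmetic handles
 * sequence-number wraparound for free. */
static uint32_t
relative_seq(uint32_t raw_seq, uint32_t initial_seq)
{
    return raw_seq - initial_seq;   /* modulo 2^32 */
}
```

For example, the second segment in the trace above starts at raw seq 949215758; against the initial 949215706 that is reported as 52, which is exactly the "seq 52:168" line tcpdump printed.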
Re: [tcpdump-workers] [RFC PATCH 0/2]: hw timestamp support
Guy Harris wrote: Is there ever any reason *NOT* to use the hardware timestamp if it's available? Only if it and the host time are not sufficiently in sync and you want to correlate with other things timestamped with host time. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Libpcap performance under VMWare guest OSes
Mark Bednarczyk wrote: I hope you don't mind if I submit ttcp numbers instead of netperf.

in and of itself, no, but ttcp doesn't give pps figures, it gives bulk transfer figures - which is one of the reasons netperf was created in the first place :) the next "wonder" is how much CPU is left over when the guest(s) are running at these rates.

rick jones

The following tests are between 3 physical machines on a 10-meg ethernet network all connected to the same subnet.

System #1: win_32 and ubuntu1_32 (dual CPU)
System #2: fedora core 64-bit (quad CPU)
System #3: debian_32 and ubuntu2_32 (dual CPU)

(I also have various other OSes - solaris, freebsd, etc - but I think this mix is more than enough for our purposes.)

For file copies via sftp I get:

host(win_32) to guest(ubuntu1_32) (same physical machine and disk): 20.8Mbps
host(win_32) to host(fc_64) (different machines over 10 meg eth): 6.8Mbps
host(win_32) to guest(debian_32) (different machines over ethernet): 6.5Mbps

ttcp transfers:

host(win_32) to host(fc_64) (different physical machine): 8Mbps
host(fc_64) to guest(ubuntu2_32) (different machine): 7.8Mbps
host(fc_64) to guest(debian_32) (different machine): 7.7Mbps
guest(debian_32) to guest(ubuntu2_32) (same physical machine): 624Mbps

So it looks like the network drivers on VMs are able to handle quite a bit of traffic - up to 600Mbps - when a physical network is not involved, and a respectable 7-8 Mbps on a 10-meg ethernet network. Somehow libpcap, when it taps into this captured traffic, is not able to handle a fraction of the actual traffic. Cheers, mark...

-Original Message- From: tcpdump-workers-ow...@lists.tcpdump.org [mailto:tcpdump-workers-ow...@lists.tcpdump.org] On Behalf Of Rick Jones Sent: Thursday, December 10, 2009 4:43 PM To: tcpdump-workers@lists.tcpdump.org Subject: Re: [tcpdump-workers] Libpcap performance under VMWare guest OSes What is the delta in "plain" packet per second performance between the VMguest and bare iron?
I'd expect that to correlate with libpcap performance. Ie, if the VMguest cannot do well on plain packet per second stuff (like say burst mode/aggregate netperf TCP_RR) then it won't do well on libpcap. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Libpcap performance under VMWare guest OSes
What is the delta in "plain" packet per second performance between the VMguest and bare iron? I'd expect that to correlate with libpcap performance. Ie, if the VMguest cannot do well on plain packet per second stuff (like say burst mode/aggregate netperf TCP_RR) then it won't do well on libpcap. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
[tcpdump-workers] libpcap-1.0.0 configure error on HP-UX 11.11 and 11.31
In a fit of curiosity I downloaded libpcap-1.0.0 and tried to do a ./configure on HP-UX 11.11 (aka 11iv1) and 11.31 (aka 11iv3) and in both cases I get:

checking if --disable-protochain option is specified... enabled
./configure[6236]: Syntax error at line 6659 : `newline or ;' is not expected.

It appears that this:

for ac_header in
do
as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh`

is what it complains about - I'm guessing that the HP-UX shell doesn't like there to not be anything after "in", for if I add a '""' after the "in", the configure script appears to complete on 11.11 and 11.31. Is this a known problem?

happy benchmarking,

rick jones
cannot recall if he is still subscribed to the list, so please cc on replies

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] [Patch] tcpdump probabilistic sampling
Jesse Kempf wrote: Hi, So tcpdump tends to jam up the terminal a bit when you try to dump on a saturated gigabit link. I've added a -P option to tcpdump that lets you specify a probability for tcpdump to print each packet. It uses drand48() to figure out whether each packet captured should be printed. Obviously this isn't the same thing as saying "print every Nth packet" since this is a Bernoulli process and the expected value of the number of printed packets is different. The wording won't sound right... but what's the point? Just wanting to watch pseudo-random subsets of the traffic? I'd think that if one wanted to be tracing a gigabit link one would trace to a binary file and post-process, or have a rather specific filter in place? rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Should we enable IPv6 support by default?
Gert Doering wrote: Hi, On Wed, Feb 06, 2008 at 10:09:33AM -0800, Rick Jones wrote: What is the reason for having optional IPv6 in the first place (besides OSes that don't provide all necessary header files)? Memory savings? ISTR there were some "funnies" on some OSes where IPv6 was "pre-enabled" but not actually enabled unless something else was added. tcpdump is not actually *doing* any IPv6. All it does is "read packets, print their contents". Which is completely independent of the question "will the operating system provide IPv6 support?". It's not like tcpdump is trying to actually *use* IPv6. (I can do tcpdump to look at "spanning tree" packets. Or Cisco CDP. Neither is supported in my operating system...) Right - there was just some sort of assumption made in (at this point anyway) much older tcpdump (perhaps libpcap) source that if a certain IPv6ish thing was around then the rest of it would be there. My dimm memory from back then has many multi-bit errors. rick - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Should we enable IPv6 support by default?
Gert Doering wrote: Hi, On Wed, Feb 06, 2008 at 03:37:05AM -0800, Guy Harris wrote: It's 2008. Should we enable IPv6 support by default in libpcap and tcpdump (as long as the OS supports IPv6 to a sufficient extent that we can compile the support in), and let users do "--disable-ipv6" if, for whatever reason, they don't want it? I would certainly appreciate that. What is the reason for having optional IPv6 in the first place (besides OSes that don't provide all necessary header files)? Memory savings? ISTR there were some "funnies" on some OSes where IPv6 was "pre-enabled" but not actually enabled unless something else was added. HP-UX 11.0 comes to mind in that regard. At this point though, given that HP-UX 11.0 is well past its EOL date, so long as there is a --disable-ipv6 and it is reasonably well documented in places someone would be likely to look if a compile failed, IPv6 by default should be fine. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Secret of great tcpdump performance ..
Michael Krueger wrote: On Tue, 22 Jan 2008 19:47:24 +0100, Rick Jones <[EMAIL PROTECTED]> wrote: How many processors do you have, are interrupts from each NIC going to separate processors/cores/whatever (show us the output of /proc/interrupts), and have you bound each tcpdump to its corresponding NIC's interrupt CPU? That seems to be a great thing to investigate. I did find a lot of material about process and interrupt affinity. I will give it a try and see if that does help fix my problem. Any other idea worth looking at? CPU profiles of your application. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Secret of great tcpdump performance ..
Michael Krueger wrote: On Tue, 22 Jan 2008 19:47:24 +0100, Rick Jones <[EMAIL PROTECTED]> wrote: How many processors do you have, are interrupts from each NIC going to separate processors/cores/whatever (show us the output of /proc/interrupts), and have you bound each tcpdump to its corresponding NIC's interrupt CPU? I have a single CPU / dual core machine. I think you are right. All of the interrupts are going to a single core:

lxvoipmon05:~/perftest # cat /proc/interrupts
           CPU0       CPU1
  0:        113  168298887   IO-APIC-edge   timer
  7:          1          1   IO-APIC-edge   parport0
  8:          0          2   IO-APIC-edge   rtc
  9:          0          0   IO-APIC-level  acpi
 14:          0    6034770   IO-APIC-edge   ide0
 50:          0   75531924   IO-APIC-level  libata, eth1
 58:          0     848554   IO-APIC-level  libata, ehci_hcd:usb1
 66:          0        394   IO-APIC-level  libata, ohci_hcd:usb2
 74:          0          3   IO-APIC-level  ohci1394
 82:          0   79599257   IO-APIC-level  eth0
177:          0    4140498   IO-APIC-level  eth2
NMI:          0          0
LOC:  168325561  168325456
ERR:          0
MIS:          0

This was without binding tcpdump to a specific CPU. Anyway, tcpdump was able to capture the traffic on both NICs without dropped packets. My own app almost immediately reports dropped packets. What do I have to do so that interrupts are handled on both cores? Will the interrupts move with the process if I bind them to separate cores?

First, you make certain that there is no irqbalance daemon running. Then you do:

echo M > /proc/irq/N/smp_affinity

where M is 1 << CPUnum (ie CPU0 would use 1, CPU1 would use 2, CPU2 would use 4 etc etc) and N is from the table above.

rick jones

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
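The mask arithmetic in that recipe is easy to get wrong, so here it is in code form. This is an illustrative helper (the function name is made up): the smp_affinity value is a hex bitmask with bit N set to steer the IRQ to CPU N, and IRQ 82 (eth0 in the table above) is used as the example.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the "echo M > /proc/irq/N/smp_affinity" command for a given IRQ
 * and CPU.  M is the affinity bitmask, 1 << cpu, written in hex as the
 * smp_affinity file expects. */
static void
affinity_cmd(char *buf, size_t buflen, int irq, int cpu)
{
    unsigned long mask = 1UL << cpu;
    snprintf(buf, buflen, "echo %lx > /proc/irq/%d/smp_affinity", mask, irq);
}
```

So to move eth0's interrupts (IRQ 82) to CPU1 the mask is 2, and the generated command is "echo 2 > /proc/irq/82/smp_affinity".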
Re: [tcpdump-workers] Secret of great tcpdump performance ..
How many processors do you have, are interrupts from each NIC going to separate processors/cores/whatever (show us the output of /proc/interrupts), and have you bound each tcpdump to its corresponding NIC's interrupt CPU? rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Running multiple instances of tcpdump
Because HP-UX doesn't support more than one process putting a particular interface into promiscuous mode at a time, so you can't have two instances of tcpdump (or any other application using promiscuous mode) running on the same interface at the same time. I am reliably informed that this limitation is relaxed in 11iv3 (aka 11.31) with the installation of patch PHNE_36857 which was released in December of 2007. I have no information on that being back-ported to 11iv2 or 11iv1 and would assume it would be based entirely on demand made known through official channels. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] capturing only wrong checksum packets
Guy Harris wrote: Mohan Lal Jangir wrote: How can I capture "only wrong checksum packets" using tcpdump (specially wrong udp checksum)? Unfortunately, there's no way to do so with an unmodified tcpdump. And even if there were, if you happened to be taking the trace on a system with CKO (ChecKsum Offload) enabled, you would probably see incorrect checksums for all outbound traffic. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
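Since the capture filter language cannot test checksums, a post-capture tool has to recompute the RFC 1071 Internet checksum itself. The sketch below is a simplified illustration (pseudo-header construction and odd-length padding are omitted): summing a packet's 16-bit words *including* its stored checksum yields zero when the stored checksum is valid.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum over an array of 16-bit words (simplified:
 * no pseudo-header, no odd trailing byte).  Verifying a packet: run this
 * over the words including the stored checksum field; a valid packet
 * produces 0. */
static uint16_t
in_cksum(const uint16_t *words, size_t nwords)
{
    uint32_t sum = 0;

    while (nwords--)
        sum += *words++;
    while (sum >> 16)
        sum = (sum & 0xffffu) + (sum >> 16);   /* fold carries back in */
    return (uint16_t)~sum;
}
```

The self-consistency property makes this easy to check: compute the checksum with the checksum field zeroed, store it, and re-running the sum over the whole packet comes out zero. Note the caveat in the reply above still applies: on a sender doing CKO, outbound packets will fail this check even though they are fine on the wire.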
Re: [tcpdump-workers] false checksum failure reports
ronnie sahlberg wrote: On Nov 7, 2007 12:54 PM, Rick Jones <[EMAIL PROTECTED]> wrote: Harley Stenzel wrote: On Nov 6, 2007 2:03 PM, Rick Jones <[EMAIL PROTECTED]> wrote: Any thoughts as to how to deal with false checksum failure reports for outbound traffic being sniffed on a system with ChecKsum Offload (CKO)? It seems that linux has a flag they can set when capturing the packet that would tell us, not sure what other platforms might have Love it. It would be very nice to know if a packet's checksum will be calculated in a CKO card. These are some of the specifics courtesy of folks over in the linux netdev mailing list: The thing to check is "TP_STATUS_CSUMNOTREADY". When using mmap(), it will be provided in the descriptor. When using recvmsg() it will be provided via a PACKET_AUXDATA control message when enabled via the PACKET_AUXDATA socket option. I have tried to take a quick first look at the pcap code for linux but wasn't in the correct frame of mind and so got lost rather quickly. Without a corresponding change to the binary file format (as in find a spare bit somewhere) the change would initially be limited to "live" tracing. Not a complete solution, but a step in the right direction. Meanwhile, making certain that the docs/manpage etc call-out that tracing on a CKO capable system/NIC will result in false checksum failure reports for outbound traffic would be goodness. I suppose I should have checked if that was already there before typing the previous sentence, but there you go :) It should call out that "ON SOME SYSTEMS" this will result in the checksum being reported as invalid. Some popular systems put 0x in the checksum field when CKO is used. This allows tools such as wireshark to heuristically detect : checksum is wrong, but the packet contains 0x which is what several popular implementations store in the packet when CKO is used, so no need to flag it with checksum invalid. We could be more specific about which systems, sure. 
If pcap on linux would detect CKO and modify the packet to clear the tcp checksum field to 0x before passing it to the application this would make tools such as wireshark work correctly when capturing and also when reading files without the need to modify the file format. Well, many (most? all?) of the CKO implementations in the NICs call for the pseudo-header checksum to be in the checksum field. So, for a stack to modify it for the purposes of tracing implies that the stack is making a copy of the packet being traced before handing that to the user. I'm not sure if Linux is doing that copy. I hope it isn't because packet tracing is expensive enough as it is and knuth only knows what it would be like for a 10 Gig NIC. Making a copy just to communicate one bit of information doesn't seem like a very efficient way to do things. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] false checksum failure reports
Harley Stenzel wrote: On Nov 6, 2007 2:03 PM, Rick Jones <[EMAIL PROTECTED]> wrote: Any thoughts as to how to deal with false checksum failure reports for outbound traffic being sniffed on a system with ChecKsum Offload (CKO)? It seems that linux has a flag they can set when capturing the packet that would tell us, not sure what other platforms might have Love it. It would be very nice to know if a packet's checksum will be calculated in a CKO card. These are some of the specifics courtesy of folks over in the linux netdev mailing list: The thing to check is "TP_STATUS_CSUMNOTREADY". When using mmap(), it will be provided in the descriptor. When using recvmsg() it will be provided via a PACKET_AUXDATA control message when enabled via the PACKET_AUXDATA socket option. I have tried to take a quick first look at the pcap code for linux but wasn't in the correct frame of mind and so got lost rather quickly. Without a corresponding change to the binary file format (as in find a spare bit somewhere) the change would initially be limited to "live" tracing. Not a complete solution, but a step in the right direction. Meanwhile, making certain that the docs/manpage etc call-out that tracing on a CKO capable system/NIC will result in false checksum failure reports for outbound traffic would be goodness. I suppose I should have checked if that was already there before typing the previous sentence, but there you go :) rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
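The Linux mechanism named in the thread can be sketched as follows. This is a Linux-only sketch, not a libpcap patch: with the PACKET_AUXDATA socket option enabled on a PF_PACKET socket, each recvmsg() carries a struct tpacket_auxdata control message, and the TP_STATUS_CSUMNOTREADY bit in tp_status marks outbound packets whose checksum computation has been deferred to the NIC, so a "bad" checksum should not be reported for them.

```c
#include <assert.h>
#include <linux/if_packet.h>

/* Given the auxdata for a captured packet, report whether its checksum
 * has not been computed yet (i.e. the packet is outbound on a CKO NIC),
 * so an apparent checksum failure is a false alarm. */
static int
csum_not_ready(const struct tpacket_auxdata *aux)
{
    return (aux->tp_status & TP_STATUS_CSUMNOTREADY) != 0;
}
```

In a real capture loop the struct would come from a PACKET_AUXDATA control message on recvmsg() (or from the descriptor when using the mmap ring), exactly as described in the message above.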
[tcpdump-workers] false checksum failure reports
Any thoughts as to how to deal with false checksum failure reports for outbound traffic being sniffed on a system with ChecKsum Offload (CKO)? It seems that linux has a flag they can set when capturing the packet that would tell us, not sure what other platforms might have - there it might be necessary to "guess" based on the source IP matching one of the system's local IPs. rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] RFC: Add multicast reception API to libpcap
Bruce M. Simpson wrote: Rick Jones wrote: So this is meant to enable receipt of specific multicasts and not receipt of all multicasts right? Is that a particularly "pcappy" thing? Correct. I believe it logically belongs with pcap, as it is something which may well be required if using pcap as the link-layer API. I'm just stuck in the stoneage when pcap was just for packet capture, so things added to make it useful as a link-layer API always seem odd to me :) rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] RFC: Add multicast reception API to libpcap
So this is meant to enable receipt of specific multicasts and not receipt of all multicasts right? Is that a particularly "pcappy" thing? Anyway, for HP-UX and Solaris, I suspect the receive all multicasts would be a DL_PROMISC_MULTI rather than the (i suspect) current DL_PROMISC_PHYS rick jones - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Byte count instead of packet count?
Olof Backing wrote: Well, I've tried to search the 'net but came up fairly empty handed with any decent solution for trying to use a byte counter instead of a packet ditto. Since we all know that the packet length varies a lot, I thought that I wanted to have a byte size delimiter on my savefile. In that way I could say "give me 100GB of ethernet frames" and I would get close to that. Now I can only say "give me 100M of packets". On the other hand - should I even bother? Perhaps not.

Typically when examining packet traces, only the first N bytes of the packet are particularly interesting - protocol and application headers - the rest isn't all that interesting. Hence the snaplen option rather than saving an entire packet. If the desire is to be certain you don't get too large a trace file, the combination of snaplen and packet count already bounds its size. If you are interested in full packet contents then the packet count already bounds your trace file size. And if you know or can guess at the average packet size, you can still get rather close to the 100GB of ethernet frames with the packet count and a full size snaplen.

Having said that, I suspect that the changes to add a byte count would be pretty straightforward if you wanted to implement them. The only question outstanding I would think would be whether that should be a byte count based on snaplen captured, or the sum of the actual packet lengths.

rick jones

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
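The byte-count delimiter being discussed could be sketched as a small accounting helper. Everything here is hypothetical (the struct is a stand-in for libpcap's struct pcap_pkthdr, whose caplen field is the bytes captured and len the bytes on the wire): a per-packet callback spends the budget and signals when to stop, and the use_wire_len flag is exactly the open question in the reply - budget against snaplen-captured bytes or actual packet lengths.

```c
#include <assert.h>

/* Stand-in for libpcap's struct pcap_pkthdr: caplen is what was captured
 * (bounded by snaplen), len is the packet's length on the wire. */
struct fake_pkthdr { unsigned caplen; unsigned len; };

static unsigned long long budget_left;   /* bytes still allowed in savefile */

/* Account one packet against the byte budget.  Returns 1 to keep
 * capturing, 0 when the budget is spent; a real tcpdump patch would call
 * pcap_breakloop() at that point. */
static int
account_packet(const struct fake_pkthdr *h, int use_wire_len)
{
    unsigned n = use_wire_len ? h->len : h->caplen;

    if (n > budget_left) {
        budget_left = 0;
        return 0;
    }
    budget_left -= n;
    return 1;
}
```

With a 100-byte budget, a 1500-byte packet captured with a 60-byte snaplen fits once under caplen accounting but not at all under wire-length accounting, which shows why the two choices give noticeably different file sizes.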
Re: [tcpdump-workers] print-tcp.c: remove commas from output, to
Kevin Steves wrote: commas aren't used in tcp fields so remove these that are before and after cksum. i'm not necessarily trying to stop the patch, but while it may not be consistent with other output, maintaining "consistency" with previous versions has the nice property of being less likely to break someone's scripts, no?

rick jones

Index: print-tcp.c
===
RCS file: /tcpdump/master/tcpdump/print-tcp.c,v
retrieving revision 1.126
diff -u -r1.126 print-tcp.c
--- print-tcp.c 2 Nov 2006 08:56:16 - 1.126
+++ print-tcp.c 17 Jan 2007 19:45:14 -
@@ -411,12 +411,12 @@
 if (TTEST2(tp->th_sport, length)) {
 sum = tcp_cksum(ip, tp, length);
-(void)printf(", cksum 0x%04x",EXTRACT_16BITS(&tp->th_sum));
+(void)printf(" cksum 0x%04x",EXTRACT_16BITS(&tp->th_sum));
 if (sum != 0) {
 tcp_sum = EXTRACT_16BITS(&tp->th_sum);
- (void)printf(" (incorrect -> 0x%04x),",in_cksum_shouldbe(tcp_sum, sum));
+ (void)printf(" (incorrect -> 0x%04x)",in_cksum_shouldbe(tcp_sum, sum));
 } else
- (void)printf(" (correct),");
+ (void)printf(" (correct)");
 }
 }
 #ifdef INET6
@@ -424,12 +424,12 @@
 u_int16_t sum,tcp_sum;
 if (TTEST2(tp->th_sport, length)) {
 sum = tcp6_cksum(ip6, tp, length);
-(void)printf(", cksum 0x%04x",EXTRACT_16BITS(&tp->th_sum));
+(void)printf(" cksum 0x%04x",EXTRACT_16BITS(&tp->th_sum));
 if (sum != 0) {
 tcp_sum = EXTRACT_16BITS(&tp->th_sum);
- (void)printf(" (incorrect (-> 0x%04x),",in_cksum_shouldbe(tcp_sum, sum));
+ (void)printf(" (incorrect (-> 0x%04x)",in_cksum_shouldbe(tcp_sum, sum));
 } else
- (void)printf(" (correct),");
+ (void)printf(" (correct)");
 }
 }

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap : Reading from kernel interface
[EMAIL PROTECTED] wrote: Hello friends, I wish to use hostap (a driver for wireless cards) + libpcap + tcpdump to bring some extra details of the packets in the user space. I wish to know how libpcap reads the packet from the kernel/interfaces.

That varies by platform. Probably best done as a "Use the Source, Luke" (Star Wars "Use the Force" take-off) kind of thing. On HP-UX/Solaris libpcap uses dlpi, on others a /dev/bpf etc etc. They are all _pretty_ much variations on the read/write/ioctl theme.

For my purpose, I have to implement an interface in the hostap code (which I shall be registering with the kernel). Then how shall I modify the libpcap code to read from this interface too.

Mimic what the closest "pcap_mumble.c" file does.

Also I don't need libpcap to filter any packets collected from this new interface as they have already been filtered in the driver code itself. A quick help will be much appreciated. Thanks a lot in advance -madhuresh

- This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap
[EMAIL PROTECTED] wrote: Hello friends, I am a newbie in the group. Please let me know if the clarifications regarding libpcap should also be posted here or there is some separate mailing list for it. Yes, this group is used for questions about libpcap. rick jones anyone heard from guy harris recently? - This is the tcpdump-workers list. Visit https://cod.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] HP-UX crash on inject while receiving
Harley Stenzel wrote: On 7/28/06, Guy Harris <[EMAIL PROTECTED]> wrote: On Jul 28, 2006, at 12:51 PM, Harley Stenzel wrote: > Show that this happens when 2 threads use pcap_t at the same time: libpcap is, for better or worse, not thread-safe, Good to know, thanks. Using *different* pcap_t's in two threads should work, although pcap-dlpi.c has static variables that it uses on HP-UX (ctlbuf and ctl), which is a clear botch unless getmsg() is guaranteed not to modify ctl. Also good to know. Although with the one promiscuous STREAMS accessor per device on HP-UX, this doesn't suggest a solution.

I thought that one thread was send/recv and the other was send. The "send only" thread ostensibly would not need to be in promiscuous mode right? Still, do feel free to exercise your UX support contract and submit an ER against Streams to enable support for multiple promiscuous streams per interface.

rick jones

- This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap 0.9.4 on HP-UX: "Device busy"
Justin E wrote: Hi Harley, Thanks for the response. As I mentioned in my post, I used lsof to check that no other process has the device open. In addition, using an earlier version of libpcap (modified in-house but based on 0.5 or thereabouts) works fine on the same box. Is there any reason why earlier versions of libpcap would allow devices to be opened more than once, but the newest version wouldn't? I'm pretty sure there isn't (as it seems to just be a limitation of dlpi), which makes me pretty sure that there's no other processes interfering. Which SAPs are being bound in each case? rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap 0.9.4 on HP-UX: "Device busy"
Harley Stenzel wrote: On 6/8/06, Justin E <[EMAIL PROTECTED]> wrote: Hello, I've been trying to get libpcap 0.9.4 to work on several HP-UX boxes, and am unfortunately having some trouble when I try to open an interface using pcap_open_live. The error I receive is: recv_ack: promisc_phys: UNIX error - Device busy It sounds like some other process has the device open. On HP-UX, unlike Linux and (iirc) Solaris, only one process may have an interface open at a time. It would be more accurate to say that it allows only one promiscuous mode stream per interface. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] compiling problem
[EMAIL PROTECTED] wrote: Hi guys, I haven't heard from anyone and I really need a solution to this problem. I was able to successfully create the makefile. When I tried to run the 'make' command with this option in the makefile:

#CCOPT = -O2
CCOPT = -g

I get this error:

print-dhcp6.o:./print-dhcp6.c:445: more undefined references to `__ntohs' follow
print-dhcp6.o: In function `dhcp6opt_print':
./print-dhcp6.c:465: undefined reference to `__ntohl'
./print-dhcp6.c:469: undefined reference to `__ntohl'
./print-dhcp6.c:489: undefined reference to `__ntohl'
./print-dhcp6.c:551: undefined reference to `__ntohl'
./print-dhcp6.c:563: undefined reference to `__ntohs'
./print-dhcp6.c:573: undefined reference to `__ntohl'
./print-dhcp6.c:574: undefined reference to `__ntohl'
./print-dhcp6.c:575: undefined reference to `__ntohl'
./print-dhcp6.c:596: undefined reference to `__ntohl'
./print-dhcp6.c:598: undefined reference to `__ntohl'

I wonder if the manpage for ntohl and/or ntohs shows an include file that print-dhcp6.c is not including?

rick jones

- This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] guessing when TSO is present
Guy Harris wrote: On Apr 7, 2006, at 10:08 AM, Rick Jones wrote: As for checking against the normal MTU, does tcpdump/libpcap have that information? Only to the extent that it could infer that from the link-layer type. (Jumbo frames might make that tricky.) And since TSO _really_ works on the MSS of the *connection* not the MTU of the link, knowing the link MTU doesn't really tell you much anyway. I think the aforesuggested addition of the "is the checksum 0" would be a sufficient addition to the existing heuristic. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] guessing when TSO is present
ronnie sahlberg wrote: large segment offload (LSO) can be easily detected by TCP checksum==0 and being incorrect, and the segment being much larger than the normal mtu.

I like the idea of there being a few additional sanity checks; like I said, what I did there was a WAG :) So, adding a check for a ULP checksum of 0 sounds like goodness. As for checking against the normal MTU, does tcpdump/libpcap have that information? Also, IIRC, the MSS for the connection could be rather smaller than the link-local MTU, and there could still be (IIRC) a "large send" that was less than the MTU but still multiple MSS segments. Does libpcap/tcpdump have any way of knowing that this packet originated on the system where libpcap/tcpdump was running? We should "never" see an IP len of zero on traffic we receive. The problem report suggests HP-UX 11.11. I cannot remember if HP offered TSO on 11.11 or if it was strictly 11.23 and later. Asking the person who submitted the bug if they have set a VMTU on the interface being traced would be goodness.

rick jones

On 4/7/06, Guy Harris <[EMAIL PROTECTED]> wrote: Hannes Gredler wrote: checked in - thanks for the submission - /hannes On Wed, Jan 19, 2005 at 05:35:13PM -0800, Rick Jones wrote:
| A while back I think I posted something asking about what to do about TSO
| (large send) and how it generated "IP bad-len 0" output when tracing on a
| TSO-enabled sender.
| I had a couple spare cycles, so I decided to just take a WAG at what might
| be done, which was to say that if the IP len was zero, just go ahead and
| guess that this was a TSO and set the len to the length parm passed in to
| print-ip and hope. [ ... ]
| basically, if the IP len is zero, ass-u-me that the segment is TSO and wing
| it.
| rick jones
Should we make that the default, or would that be too risky? See https://sourceforge.net/tracker/index.php?func=detail&aid=1437110&group_id=53066&atid=469573 which I suspect is caused by TSO. - This is the tcpdump-workers list. 
Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] transmitting packets
Jan Allman wrote: Rick, I'm trying to see how much data I can send over a point to point Ethernet link using UDP (i.e. as quickly as possible). I had a repeating loop of "sendto" calls. I was using Ethereal to report statistics on the throughput I was achieving. Repeating your test on my machines (between one dual 2.8GHz machine and one 3GHz machine)...

netperf -t UDP_STREAM -H 192.168.0.6 -c -C -- -m 1472
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.6 (192.168.0.6) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec  % SS   us/KB
 65535    1472   10.00       811659      0       956.0    15.61  5.410
 65535           10.00       802851              946.6    49.18  4.201

Is this telling me I am able to get 950Mbps of throughput? (I've never used netperf before) Yes - it is saying that the netperf side (the send_udp_stream() routine) believed it was sending at a rate of 956 Mbit/s, and that the netserver side (the recv_udp_stream() routine) received a subset of that data at a rate of 946.6 Mbit/s. Of the 811659 1472-byte sends netperf made, 802851 of them were received by netserver - the rest were lost somewhere between the two. You can use the -f option to change the output units - k, m and g give power of ten bits per second; K, M and G give power of two bytes per second. When I use Ethereal it calculated an "Avg. MBit/sec" of 177.592 (linux kernel 2.4.24, libpcap 0.8.3, ethereal 0.10.7). Do you know why this is so much less than what netperf reports? Is Ethereal reporting a low rate because it is dropping data due to having to maintain its GUI? Is Ethereal displaying a low throughput rate due to libpcap limitations for 1472 byte sized frames? Well, you could check if it was ethereal's GUI by using tcpdump - I'd suggest a tcpdump -w that you then post-process with ethereal. If it then reports closer to what netperf reported, you can ass-u-me it was the GUI overhead. 
If it still reports what ethereal was reporting, you can ass-u-me it is related to libpcap and below not keeping up. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] pcap_inject() fails with rc 0 on HP-UX
WRT the return of zero as the number of bytes - perhaps some of the code in pcap-dlpi is ass-u-me-ing the return value from a call will give that? I've not looked at that source in a very long time though... 4) What is the expected interaction of multiple libpcap instances on HP-UX? I can't use my program and tcpdump at the same time; something I can do on other OSes. I believe this is a long-standing limitation of promiscuous mode support in HP-UX - only one process may have a promiscuous stream open on an interface at one time. I believe that is the case for DL_PROMISC_PHYS (give me everything that reaches the NIC). I cannot recall whether the same holds for the lesser "give me everything that the NIC gives the host" DL_PROMISC_SAP. If you need/want that limitation lifted, definitely get in touch with the HP Response Centre and exercise your support contract(s) to have an enhancement request opened against DLPI. Does your program actually require promiscuous mode to function? rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] transmitting packets
Jan Allman wrote: Is it possible to send standard Ethernet packets at near Gigabit throughput using UDP packets and libpcap? Has anyone tried this before? I am wondering whether libpcap is a better approach than the standard Linux "sendto" function. It could very well be, assuming you are willing to craft your own UDP headers, but it raises the question "Why?" Most CPUs these days (well, most "decent" CPUs :) can use send()/sendto() to generate traffic at gigabit speeds. Between a pair of 2x1GHz Itanium2 systems running a 2.6.12 kernel:

loiter:/opt/netperf2_work# src/netperf -t UDP_STREAM -H 192.168.4.215 -c -C -- -m 1472
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.4.215 (192.168.4.215) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec  % SS   us/KB
135168    1472   10.00       812799      0       957.1    19.02  3.256
135168           10.00       812799              957.1    23.12  3.958

They weren't even breaking a sweat :) What sort of problem are you looking to solve? For just blasting frames onto the wire, there is the in-kernel pktgen stuff under linux. sincerely, rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Checksum
And keep in mind that if you are tracing on a system with a NIC doing Checksum Offload, the outbound traffic from that system will probably not have the checksum calculated yet, since the NIC will be doing it... rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] fragmented packets
Luis Del Pino wrote: Hello, I have a question. I am filtering UDP segments by port. In fragmented packets, I only capture the UDP segment and I can't capture the other fragments. My questions are: could the fragments be lost? Or if a fragment is lost in the network, is the UDP segment entirely lost? I'm sorry for my English. When IP fragments the datagram containing the UDP _datagram_ (TCP sends segments, UDP sends datagrams :) there is no replication of the UDP header in each IP datagram fragment - the UDP datagram, after all, is _data_ as far as IP is concerned. So, short of actually doing IP fragment reassembly, which would be a trifle too expensive to do in tcpdump, there is no way for tcpdump to know to which UDP datagrams those IP datagrams belong, and so it cannot match them to your filter. So, there is no way to know from tcpdump if the other fragments are lost, short of taking-in _all_ packets and doing the work yourself by hand. As for the second question, IP does not retransmit datagrams (or datagram fragments), so if any of the IP datagram fragments are lost, the IP datagram cannot be reassembled on the receiver and so will be dropped. This is one of the reasons sending UDP datagrams >= the MTU is discouraged - packet loss compounds into datagram loss. If we consider a probability of packet loss of p, then the probability of any one _packet_ making it across the network is (1-p) (e.g. if the packet loss rate is 1%, p would be 0.01 and 1-p would be 0.99). Since all packets have to make it across, if there are N fragments that means the chances of all of them making it is (1-p)^N. (1-p)^N gets very small very rapidly as N increases. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Multi process sniffing and dropped packets
I would choose threads but my "boss" prefers processes; he said the computation-parallelizing mechanism (in his cluster) doesn't work as well with threads as it does with processes; I don't know if that is true. For now I'll implement my software with processes and then with POSIX threads, so I'll "taste" the performance differences. _Clusters_?!? That is a rather important detail... Somehow I seriously doubt that a libpcap application can span nodes in a computational cluster. At least not the stuff doing the promiscuous mode bits. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Multi process sniffing and dropped packets
[EMAIL PROTECTED] wrote: Hi people! I'm writing a sniffer with libpcap 0.9.3 that gets packets and does some cpu-intensive work on them. I want to use a multi-process architecture (rather than a multi-threaded one) because I want to distribute work across multiple processors; so I thought of two ways to do this: Do you really have to do the cpu-intensive work on those packets in real time? Why don't you want to use threads to distribute work across the CPUs? Specifically, on _which_ platform do you want to do this? rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] libpcap make failing on AIX
Guy Harris wrote: Rebekah Taylor wrote: I have attached the output of the config and make runs Do I need some prerequisite libraries or something? It appears that Bison isn't correctly installed on your machine - configure finds it: checking for bison... bison and it is, in fact, present, but one of its data files isn't installed: bison -y -p pcap_ -d ./grammar.y bison: /usr/local/share/bison.simple: No such file or directory If you installed it, you might try building and installing it again, and make sure that it's completely installed. If it came with AIX, ask IBM about it; if somebody else installed it, ask them about it. Might the yacc that comes with AIX succeed? One way to try might be to make the bison binary not executable (remove the x perms), reconfigure, and see what happens. lex/yacc work for libpcap on HP-UX, perhaps they will on AIX as well.

rick jones
portable adj, code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
- This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] CVS down? Daily snapshot script broken?
Albert Chin wrote: Is CVS down? $ cvs up cvs [update aborted]: connect to cvs.tcpdump.org(205.150.200.186):2401 failed: Connection refused Looking at the daily snapshots on http://tcpdump.org/daily/, 2005.10.10 is the latest. Is the script to generate these things running? Interesting - last time I looked, 10.09 was the latest. Ironic coincidence that today happens to be the 10th of November - is the machine's date off by a month? rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] changes to scanner.l for HP-UX 11.11 lex
Guy Harris wrote: On Nov 9, 2005, at 2:53 PM, Rick Jones wrote: Shouldn't that have then appeared in the 11/09 "current" tar? I just grabbed that and it doesn't seem to have been there. Operator error on my part? Grabbing the wrong tar file or something? It was checked in, but perhaps the tarball was created before it was checked in. the "newest" tarball I saw under dailies was from October 9th - is the daily tarball bit actually operating? That's just as well, because I deleted the "%n 2000" line when applying the changes by hand (I guess it was quicker than saving the mail message as a patch), so that tarball wouldn't have been right. I added the line back. cool - if there is a more official way I should forward patches, feel free to hit me over the head with a clue bat - nerf version please :) rick - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
[tcpdump-workers] compilation warnings from HP-UX 11.23 IA64 (32-bit compile)
For fun and excitement :) I tried compiling the 10.09 bits on an IA64 HP-UX 11.23 system. I'm not _entirely_ certain of the lineage of the compiler I'm using, having gotten it from an internal depot, but it did emit some warnings that may or may not be meaningful: the first batch look mostly like someone using a shorthand of "-1" to try to set something to all ones: cc -O -I. -I/usr/local/include -DHAVE_CONFIG_H -D_U_="__attribute__((u nused))" -c ./gencode.c "./gencode.c", line 777: warning #2068-D: integer conversion resulted in a change of sign off_vpi = -1; ^ "./gencode.c", line 778: warning #2068-D: integer conversion resulted in a change of sign off_vci = -1; ^ "./gencode.c", line 779: warning #2068-D: integer conversion resulted in a change of sign off_proto = -1; ^ "./gencode.c", line 780: warning #2068-D: integer conversion resulted in a change of sign off_payload = -1; ^ "./gencode.c", line 785: warning #2068-D: integer conversion resulted in a change of sign off_sio = -1; ^ "./gencode.c", line 786: warning #2068-D: integer conversion resulted in a change of sign off_opc = -1; ^ "./gencode.c", line 787: warning #2068-D: integer conversion resulted in a change of sign off_dpc = -1; ^ "./gencode.c", line 788: warning #2068-D: integer conversion resulted in a change of sign off_sls = -1; ^ "./gencode.c", line 795: warning #2068-D: integer conversion resulted in a change of sign orig_linktype = -1; ^ "./gencode.c", line 796: warning #2068-D: integer conversion resulted in a change of sign orig_nl = -1; ^ "./gencode.c", line 826: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 833: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 1028: warning #2068-D: integer conversion resulted in a change of sign off_mac = -1; /* LLC-encapsulated, so no MAC-layer header */ ^ "./gencode.c", line 1036: warning #2068-D: integer conversion resulted in a change of 
sign off_linktype = -1; ^ "./gencode.c", line 1053: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 1079: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 1094: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 1095: warning #2068-D: integer conversion resulted in a change of sign off_nl = -1; ^ "./gencode.c", line 1096: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; ^ "./gencode.c", line 1103: warning #2068-D: integer conversion resulted in a change of sign off_linktype = -1; ^ "./gencode.c", line 1104: warning #2068-D: integer conversion resulted in a change of sign off_nl = -1; ^ "./gencode.c", line 1105: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; ^ "./gencode.c", line 1129: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; /* no 802.2 LLC */ ^ "./gencode.c", line 1156: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; /* no 802.2 LLC */ ^ "./gencode.c", line 1162: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; /* no 802.2 LLC */ ^ "./gencode.c", line 1167: warning #2068-D: integer conversion resulted in a change of sign off_nl = -1;/* not really a network layer but raw IP adresses */ ^ "./gencode.c", line 1168: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; /* no 802.2 LLC */ ^ "./gencode.c", line 1174: warning #2068-D: integer conversion resulted in a change of sign off_nl_nosnap = -1; /* no 802.2 LLC */ ^ "./gencode.c", line 1179: war
Re: [tcpdump-workers] changes to scanner.l for HP-UX 11.11 lex
Shouldn't that have then appeared in the 11/09 "current" tar? I just grabbed that and it doesn't seem to have been there. Operator error on my part? Grabbing the wrong tar file or something? Part of it might be that there is no 11/09 tar - the last "current" appears to be 10/09 rick - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] changes to scanner.l for HP-UX 11.11 lex
Guy Harris wrote: On Nov 7, 2005, at 1:08 PM, Rick Jones wrote: The following change bumps a few limits in scanner.l so it will be processed by the lex which ships with HP-UX. It is based on libpcap-2005.10.09. While I was here, I went through to make sure that utilization of these things was no more than ~80%: Checked into the main and x.9 branches. Shouldn't that have then appeared in the 11/09 "current" tar? I just grabbed that and it doesn't seem to have been there. Operator error on my part? Grabbing the wrong tar file or something? rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] Libpcap compile
Vossie wrote: Sorry guys. I was typing too fast. I mean HTTP packets (that transfer the data) and not the TCP ACK's :-) Looking at a stream of HTTP carried in TCP segments without looking at the ACKs seems a bit odd, but if you really don't want to see the bare ACKs, you could probably filter on packet size - a bare ACK would just be the TCP, IP and link-level headers. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
[tcpdump-workers] changes to scanner.l for HP-UX 11.11 lex
The following change bumps a few limits in scanner.l so it will be processed by the lex which ships with HP-UX. It is based on libpcap-2005.10.09. While I was here, I went through to make sure that utilization of these things was no more than ~80%:

$ lex -t scanner.l > /dev/null
6056/7600 nodes(%e), 22089/27600 positions(%p), 1312/2000 (%n), 48325 transitions, 3621/4550 packed char classes(%k), 14716/18400 packed transitions(%a), 17206/21500 output slots(%o)

$ diff -c scanner.l.orig scanner.l
*** scanner.l.orig      Mon Sep  5 02:07:01 2005
--- scanner.l   Mon Nov  7 13:05:31 2005
***************
*** 81,91 ****
  B ([0-9A-Fa-f][0-9A-Fa-f]?)
  W ([0-9A-Fa-f][0-9A-Fa-f]?[0-9A-Fa-f]?[0-9A-Fa-f]?)
! %a 16000
! %o 19000
! %e 6000
! %k 4000
! %p 25000
  %n 2000
  V680 {W}:{W}:{W}:{W}:{W}:{W}:{W}:{W}
--- 81,91 ----
  B ([0-9A-Fa-f][0-9A-Fa-f]?)
  W ([0-9A-Fa-f][0-9A-Fa-f]?[0-9A-Fa-f]?[0-9A-Fa-f]?)
! %a 18400
! %o 21500
! %e 7600
! %k 4550
! %p 27600
  %n 2000
  V680 {W}:{W}:{W}:{W}:{W}:{W}:{W}:{W}

- This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
[tcpdump-workers] HP-UX 11.11 - print-dccp.c compile fails
cc -O -DHAVE_CONFIG_H -I./../libpcap-2005.10.09 -I/usr/local/include -I/usr//include -I./missing -D_U_="" -I. -I./../libpcap-2005.10.09 -I/usr/local/include -I/usr//include -I./missing -c ./print-dccp.c
cc: "print-dccp.c", line 149: error 1539: Cannot do arithmetic with pointers to objects of unknown size.
*** Error exit code 1

the offensive code :)

static u_int64_t dccp_seqno(const struct dccp_hdr *dh)
{
	u_int32_t seq_high = DCCPH_SEQ(dh);
	u_int64_t seqno = EXTRACT_24BITS(&seq_high) & 0xFF;

	if (DCCPH_X(dh) != 0) {
		const struct dccp_hdr_ext *dhx = (void *)dh + sizeof(*dh);
		u_int32_t seq_low = dhx->dccph_seq_low;
		seqno &= 0x00;	/* clear reserved field */
		seqno = (seqno << 32) + EXTRACT_32BITS(&seq_low);
	}

	return seqno;
}

specifically the "const struct dccp_hdr_ext..." line. There seems to be a dccp.h, and it has that field:

/**
 * struct dccp_hdr_ext - the low bits of a 48 bit seq packet
 *
 * @dccph_seq_low - low 24 bits of a 48 bit seq packet
 */
struct dccp_hdr_ext {
	u_int32_t	dccph_seq_low;
};

It seems the compiler I have didn't like the + sizeof(*dh) in the declaration. If I change that to be on a separate line it appears to compile:

$ diff -c print-dccp.c.orig print-dccp.c
*** print-dccp.c.orig   Mon Sep 19 23:25:20 2005
--- print-dccp.c        Mon Nov  7 13:20:14 2005
***************
*** 146,152 ****
  	u_int64_t seqno = EXTRACT_24BITS(&seq_high) & 0xFF;
  
  	if (DCCPH_X(dh) != 0) {
! 		const struct dccp_hdr_ext *dhx = (void *)dh + sizeof(*dh);
  		u_int32_t seq_low = dhx->dccph_seq_low;
  		seqno &= 0x00;	/* clear reserved field */
  		seqno = (seqno << 32) + EXTRACT_32BITS(&seq_low);
--- 146,153 ----
  	u_int64_t seqno = EXTRACT_24BITS(&seq_high) & 0xFF;
  
  	if (DCCPH_X(dh) != 0) {
! 		const struct dccp_hdr_ext *dhx = (void *)dh;
! 		dhx += sizeof(*dh);
  		u_int32_t seq_low = dhx->dccph_seq_low;
  		seqno &= 0x00;	/* clear reserved field */
  		seqno = (seqno << 32) + EXTRACT_32BITS(&seq_low);

probably a bug in the compiler - perhaps even one that has been fixed in a compiler patch or later version, but I thought I might send-along the patch just the same. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] 3.9.3 on HP-UX 11i
Guy Harris wrote: On Aug 16, 2005, at 4:29 PM, Ebright, Don wrote: I have some information regarding your question to Albert Chin. If I try to run more than one tcpdump at the same time on HP-UX, it doesn't work for me. ... It appears that the second pcap_open_live() fails with one of two errors. If promiscuous mode is specified on the second pcap_open_live(), it fails with this error: recv_ack: promisc_phys: UNIX error - Invalid argument If promiscuous mode is not specified, pcap_open_live() fails with this error: recv_ack: promisc_sap: UNIX error - Invalid argument I don't know much about DLPI, but if you have any ideas Having just checked the HP-UX documentation, the idea I have is "HP-UX DLPI isn't as friendly as we'd like": http://docs.hp.com/en/B2355-90139/ch01s03.html "Note: Each LAN interface currently allows only one stream to enable the promiscuous mode service. This restriction will be removed with a future release of the DLPI provider." That's the edition for HP-UX 10.x, 11.0, and 11i v1.6. http://docs.hp.com/en/B2355-90871/ch01s03.html "Note: Each LAN interface currently allows only one unbound stream to enable the promiscuous mode service." That's the edition for 11i v2 and 11i v2 September 2004; I guess that release isn't the "future release" to which the older manual referred, and the lack of the "future release" item in the newer manual appears to suggest that they don't want to make any such promises. If we're not in DL_PROMISC_PHYS mode, I think we want to be in DL_PROMISC_SAP, so we at least see all traffic being sent to or from the machine running the libpcap application, not just traffic for some particular SAP. I guess that means that, even though the changes not to use a hardwired SAP mean that we can now run when other software is using the SAP to which we used to be hardwired, we can't run if some other software using libpcap (or otherwise enabling DL_PROMISC_PHYS or DL_PROMISC_SAP) is running. Rick, any ideas, or are we just out of luck? 
Well, that is a good question - notice how the wording changed and "unbound" was added. Maybe there is hope there. Otherwise, I can see about asking the DLPI folks. One other thing though - HP does distribute a port of ethereal. I've no idea though if anything was/could be done at that level to work around the limitation, but if there was something done there, the ethereal sources HP modified should be around - software.hp.com should be a decent starting point to find the "internet express" bits, which are the bits with ethereal in them. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] 3.9.3 on HP-UX 11i
Guy Harris wrote: On Aug 17, 2005, at 2:16 PM, Albert Chin wrote: Capture->Interfaces->Capture (on lan0 interface) results in: The capture session could not be initiated (recv_ack: promisc_phys: UNIX error - Device busy). Please check to make sure you have sufficient permissions, and that you have the proper interface or pipe specified. The "Capture" buttons in the Capture->Interfaces window require that it be possible to open the same interface twice; it appears that HP-UX doesn't support that, at least not if the same flavor of promiscuity is being requested in both opens, and possibly if *any* flavor of promiscuity is being requested in both opens (libpcap requests SAP promiscuity if it doesn't request physical promiscuity). Indeed, the dim recesses of my mind recall such limitations in HP-UX promiscuous mode support. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] 3.9.3 on HP-UX 11i
Albert Chin wrote: I've built libpcap-0.9.3 and tcpdump-3.9.3 on HP-UX 11i:

# tcpdump
tcpdump: no suitable device found

libpcap-0.8.3/tcpdump-3.8.3 works fine. How can I help debug this? FWIW, there are/were _several_ 11i releases:

HP-UX 11.11 aka 11i v1.0            PA-RISC only
HP-UX 11.20 aka 11i v1.5            Itanium only
HP-UX 11.22 aka 11i v1.6            Itanium2 (?) only
HP-UX 11.23 aka 11i v2.0            Itanium2 only
HP-UX 11.23 aka 11i v2.0 Update 2   Itanium2 and PA-RISC

rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.
Re: [tcpdump-workers] BPF vs DLPI performance
alexander medvedev wrote: Hallo, Which of the two (BPF or DLPI) will generally give you better performance? Particularly, I am looking to reduce the number of dropped packets. Will DLPI capture even report captured/dropped packet counts? Which piece of string is longer?-) Saying "DLPI" in and of itself isn't quite enough - it could have stuff pushed onto it like a bufmod or even (one of these days I _really_ have to revisit it) a "bpfmod". However, in broad handwaving terms, when one uses DLPI, the filtering is done in user-space, and there is no aggregation of captured traffic in the kernel. There may be some DLPI or more likely Streams-specific stats (not quite sure what they are) that would imply dropped packets, but promiscuous mode via DLPI will not explicitly tell you. I would have to guess that on the same box, modulo some implementation screw-up, a "pure" BPF interface would give better performance than DLPI. DLPI though may still give "sufficient" performance. rick jones - This is the tcpdump-workers list. Visit https://lists.sandelman.ca/ to unsubscribe.