On Mon, Jul 14, 2003 at 01:36:12AM +0100, Marcos Paredes Farrera wrote:
> This is my first post in the List, I'm using tcpdump to monitor packets,
> however I would like to know exactly how tcpdump is doing the time stamp
It's doing so by getting it from libpcap. Libpcap is, on almost all platforms, doing it by getting it from the OS's packet capture mechanism (HP-UX is, I think, the only exception).

> I would like to know if tcpdump assigns the time stamp when the first bit
> of the packet is just about to be delivered or when the packet has already
> delivered the last bit. And, in the other case, if a packet is received,
> how is it time-stamped - when the first bit enters or when the last bit
> is received?

I infer that your first question there is about packets being transmitted by the machine running tcpdump (or another libpcap-based application), and the second question is about packets being received by that machine.

In the first case, the time stamp is assigned whenever the packet is supplied, by the networking stack, to whatever piece of code time-stamps the packet; that's before the first bit is even put onto the network.

In the second case, it's assigned whenever the packet is supplied, by the device driver or networking stack, to whatever piece of code time-stamps the packet; that's after the last bit is received.

In other words, the time stamps aren't extremely precise measurements of when the first, or last, bit of the packet was put onto the network; the imprecision includes time spent passing the packet through the networking code and the driver code, as well as, possibly, through the networking card. It might also, for incoming packets, include interrupt latency.

-
This is the TCPDUMP workers list.
It is archived at http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:[EMAIL PROTECTED]
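Whatever point in the stack the time stamp was taken at, it ends up stored as a seconds/microseconds pair in each packet record of a pcap savefile. A minimal sketch of decoding that pair (in Python rather than libpcap's C, purely for illustration; the header values below are made up, and a little-endian file magic is assumed):

```python
import struct

# Classic pcap savefile per-record header (follows the 24-byte global header):
#   ts_sec, ts_usec, incl_len, orig_len -- four 32-bit unsigned integers.
# Byte order is determined by the file's magic number; little-endian assumed here.
RECORD_HDR = struct.Struct("<IIII")

def record_timestamp(buf):
    """Return (timestamp in seconds as a float, captured length, original length)."""
    ts_sec, ts_usec, incl_len, orig_len = RECORD_HDR.unpack(buf[:16])
    return ts_sec + ts_usec / 1e6, incl_len, orig_len

# Hypothetical record header for demonstration only.
hdr = RECORD_HDR.pack(1058142972, 250000, 60, 60)
print(record_timestamp(hdr))  # (1058142972.25, 60, 60)
```

Note that the microsecond field only gives the *resolution* of the stored value; as described above, its *accuracy* is limited by where in the stack the stamp was actually taken.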
