> I infer that your first question there is about packets being
> transmitted by the machine running tcpdump (or another libpcap-based
> application) and the second question is about packets being received
> from that machine.
>
> In the first case, it's assigned whenever the packet is supplied, by the
> networking stack, to whatever piece of code time-stamps the packet;
> that's before the first bit is even put onto the network.

So that means that for transmitted packets the timestamp corresponds to the
beginning of the packet.

> In the second case, it's assigned whenever the packet is supplied, by
> the device driver or networking stack, to whatever piece of code
> time-stamps the packet; that's after the last bit is received.

And for received packets the timestamp corresponds to the end of the packet.
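To make the asymmetry concrete, here is a minimal sketch of normalizing both kinds of timestamps to a common reference point (the first bit on the wire). The function names and the assumed 100 Mb/s link rate are illustrative, not from this thread, and the correction ignores the stack/driver delays mentioned below:

```python
LINK_RATE_BPS = 100_000_000  # assumed link speed, bits per second


def serialization_time(frame_len_bytes):
    """Time needed to put the whole frame on the wire at LINK_RATE_BPS."""
    return frame_len_bytes * 8 / LINK_RATE_BPS


def first_bit_time(ts, frame_len_bytes, direction):
    """Estimate when the first bit was on the wire.

    Outgoing packets are stamped before the first bit is sent, so their
    timestamp is already (roughly) the first-bit time.  Incoming packets
    are stamped after the last bit arrives, so subtract the serialization
    time to get back to the first bit.
    """
    if direction == "in":
        return ts - serialization_time(frame_len_bytes)
    return ts  # "out": stamped before transmission begins


# Example: a 1500-byte frame received at t = 10.0 s;
# its serialization time at 100 Mb/s is 1500 * 8 / 100e6 = 120 microseconds.
print(first_bit_time(10.0, 1500, "in"))
```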

> In other words, the time stamps aren't extremely precise measurements of
> when the first, or last, bit of the packet was put onto the network; the
> imprecision includes time spent passing the packet through the
> networking code and the driver code, as well as, possibly, through the
> networking card.  It might also, for incoming packets, include interrupt
> latency.

So if we have tcpdump output with information on every packet on an
interface, the timestamping differs between outgoing and incoming packets
on that interface. I really need to confirm that, because I've been running
performance tests on instantaneous throughput, and I've noticed that the
way tcpdump does its timestamping is critical for this analysis; moreover,
under high traffic the effect of how packets are timestamped becomes even
more pronounced.
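As a rough sketch of why the timestamp placement matters for this analysis: instantaneous throughput is typically computed from consecutive (timestamp, length) pairs, and the packet data below is made up for illustration.

```python
def instantaneous_throughput(packets):
    """Bits per second between consecutive packets.

    Each entry is (timestamp_seconds, frame_len_bytes).  Because tcpdump
    stamps incoming packets after their last bit, the inter-packet gap
    already absorbs each packet's serialization time; mixing timestamps
    taken before the wire time (outgoing) with ones taken after it
    (incoming) skews these figures, and the skew matters more as the
    gaps shrink under high traffic.
    """
    rates = []
    for (t0, _), (t1, length) in zip(packets, packets[1:]):
        rates.append(length * 8 / (t1 - t0))
    return rates


# Three 1500-byte packets, 1 ms apart -> 12 Mb/s in each gap
pkts = [(0.000, 1500), (0.001, 1500), (0.002, 1500)]
print(instantaneous_throughput(pkts))
```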

Regards Marcos

-
This is the TCPDUMP workers list. It is archived at
http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:[EMAIL PROTECTED]
