Hi,
I am trying to measure the time when a frame is inserted into the queue
(in ath_tx_process_buffer(), and also in ath_drain_txq_list(), which is not
actually required) and the time when the ACK is received from the hardware,
as reported in the ath_tx_status descriptor in ath_tx_process_buffer().
The final timestamp difference is computed by my code in ath_tx_complete_buf().

But when I compute the airtime of a frame as frame size (in bits) divided
by the bitrate, the airtime comes out greater than the difference between
the two TSF values.

This happens for about 22% of the frames in my traces.
For example, for a frame of 1542 bytes transmitted at 18 Mbps, the
calculated airtime is 685 us [1542 * 8 / 18.0], while
ACK_time - enqueue_time == 318 us. Both values are in microseconds.
Is there some scale factor between the hardware clock time and the airtime
I calculate?
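
For reference, this is roughly how I compute the expected airtime (just a
sketch of the arithmetic, not the exact code; it only covers the data
portion and ignores preamble/PLCP, SIFS and the ACK itself):

    /* expected airtime of the payload: bits / (Mbit/s) == microseconds */
    static u32 airtime_us(u32 frame_bytes, u32 rate_mbps)
    {
            return (frame_bytes * 8) / rate_mbps;
    }

    /* airtime_us(1542, 18) == 685 */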


I am unable to understand why this happens:
https://github.com/abnarain/mac80211-ath9k/blob/master/drivers/net/wireless/ath/ath9k/xmit.c#L2075

The code linked above is documented and does the following: it reads the
hardware TSF in ath_tx_process_buffer() [also in ath_drain_txq_list(),
which is hardly ever called] and stores it in bf->timestamp_temp. This
value is then combined with the 32-bit ath_tx_status->ts_tstamp field to
get the final TSF value, in the same way the TSF is calculated in the
receive path (ath9k/recv.c).
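
In case it helps, the combination is essentially the following (a sketch
mirroring the recv.c logic, not the literal code from the link;
timestamp_temp is a field I added to struct ath_buf):

    /* bf->timestamp_temp holds the 64-bit TSF read with
     * ath9k_hw_gettsf64() in ath_tx_process_buffer();
     * ts->ts_tstamp is the 32-bit completion timestamp from the
     * descriptor.  Combine them the way recv.c builds mactime. */
    u64 tsf = bf->timestamp_temp;
    u32 tsf_lower = tsf & 0xffffffff;
    u64 complete_tsf = (tsf & ~0xffffffffULL) | ts->ts_tstamp;

    /* correct for a 32-bit wrap between the two timestamps */
    if (ts->ts_tstamp > tsf_lower &&
        (ts->ts_tstamp - tsf_lower) > 0x10000000)
            complete_tsf -= 0x100000000ULL;
    if (ts->ts_tstamp < tsf_lower &&
        (tsf_lower - ts->ts_tstamp) > 0x10000000)
            complete_tsf += 0x100000000ULL;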

It would be great if someone could help.
-
Abhinav
