Tim,
thanks for your reply.

I thought the packets always had the same size (specified by U2_MAX_SAMPLES), 
or am I wrong here? I also get the very same timestamp diff when running 
rx_streaming_samples with N = 400, 4000, 6000, 60000, etc.

I don't quite follow what you mean about the handler being called multiple 
times. I print the timestamps in usrp2_impl's handle_data_packet, which I 
thought is run exactly once every time a new ethernet packet arrives that 
isn't a control packet?

(And yes the timestamps are treated as uint32's.)
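
In case it helps to see exactly what I'm computing, this is roughly what my 
printout boils down to (a minimal sketch with made-up names, not the exact 
code I put in handle_data_packet):

#include <cstdint>
#include <cstdio>

static uint32_t ts_last = 0;

// Print the timestamp taken from the packet metadata and its distance to the
// previous one. Unsigned subtraction keeps the diff sensible even when the
// 32-bit counter wraps around (as it does between some packets in the log).
static void print_timestamp(uint32_t ts_in)
{
    uint32_t diff = ts_in - ts_last;
    std::printf("ts_in = %u, ts_last = %u, diff = %u\n", ts_in, ts_last, diff);
    ts_last = ts_in;
}

int main()
{
    // First few values from the run quoted below, just to exercise the printout.
    const uint32_t ts[] = {1435221596u, 2560802396u, 3367616092u,
                           4174429788u, 686341724u};
    for (unsigned i = 0; i < sizeof(ts) / sizeof(ts[0]); ++i)
        print_timestamp(ts[i]);
    return 0;
}

Fed with those values it reproduces the same diffs as in the log, including 
the one where the 32-bit counter wraps.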

Regards,
/Ulrika

________________________________
From: Tim Pearce [mailto:timothy.pea...@gmail.com]
Sent: Thursday, February 18, 2010 9:35 PM
To: Ulrika Uppman
Cc: Discuss-gnuradio@gnu.org
Subject: Re: [Discuss-gnuradio] Timestamp value

Ulrika,

I agree with how you think the timestamps are generated -- it seems to work for 
me that way anyway!

I did it with a custom source block that added counter * decimation rate 
after the first sample; the trap I fell into there is that (particularly at 
lower decimation rates) rx_*_handler() can be called multiple times per 
instance of it.
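
To illustrate the trap (a made-up sketch, not the real handler API): if the 
handler fires more than once for what you treat as one packet, bumping an 
expected-timestamp counter by a fixed packet size counts the packet twice, 
whereas advancing it by the samples actually delivered in each call does not:

#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t decim = 16;        // decimation rate, just an example value
    uint32_t expected_ts = 0;

    // Pretend one packet's worth of samples is delivered across two handler
    // calls (hypothetical split, purely for illustration).
    const uint32_t nsamples[2] = {185, 186};

    for (int call = 0; call < 2; ++call) {
        // Trap:   expected_ts += SAMPLES_PER_PACKET * decim;  // over-counts
        // Better: advance by what this call actually delivered.
        expected_ts += nsamples[call] * decim;
        std::printf("after handler call %d: expected_ts = %u\n",
                    call + 1, expected_ts);
    }
    return 0;
}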

Are the timestamps being treated as UINT32's?

400 samples is quite low; I think the packets are usually bigger than that (I 
might be wrong, it's been a while since I looked into that).

Cheers,

Tim

On Thu, Feb 18, 2010 at 4:10 PM, Ulrika Uppman <ulrika.upp...@foi.se> wrote:
Hi,
I wonder how the timestamps are generated for each ethernet packet sent from 
the USRP2 to the host. My initial idea was that timestamps are generated at 
100 MHz (the same rate as the samples) and that the timestamp associated with 
the first sample in an ethernet data packet is put in the metadata, which can 
then be unpacked on the host. I would then expect each packet after the first 
to have a timestamp that has increased by the number of samples per packet 
times the decimation rate. However, I get timestamp values that increase much, 
much more for each received packet, so I wonder if my idea of how the 
timestamps are generated is wrong?
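
As a rough sanity check of that expectation (the samples-per-packet number is 
only my guess, worked back from the 2968 bytes the copy handler reports below 
over two calls, assuming 4 bytes per complex sample):

#include <cstdint>
#include <cstdio>

int main()
{
    // Assumed: ~371 samples per ethernet packet and a 100 MHz timestamp clock.
    const uint32_t samples_per_packet = 371;
    const uint32_t decim = 16;                  // decimation used in the run below
    const uint32_t expected_diff = samples_per_packet * decim;   // 5936 ticks

    std::printf("expected diff: %u ticks (about %.1f us at 100 MHz)\n",
                expected_diff, expected_diff / 100.0);
    std::printf("observed diff: about 806813696 ticks (about %.2f s at 100 MHz)\n",
                806813696 / 100e6);
    return 0;
}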

I run the stable 3.2 version of gnuradio on Ubuntu 9.04 and have a USRP2 with 
the RFX2400. (I was also going to try the gnuradio trunk, but I ran into build 
problems; see my other post "Error on make from git development trunk".) I 
tried both an old version of the fpga bin-file and one that I downloaded 
recently, but both gave the same result.

I put some printouts in the handle_data_packet function in usrp2_impl.cc, and 
the output when I run rx_streaming_samples looks like this:
./rx_streaming_samples -f 2457e6 -d 16 -N 400 -v
...................................................................................
Daughterboard configuration:
 baseband_freq=2456250000.000000
      ddc_freq=-750000.000000
 residual_freq=-0.016764
      inverted=no

USRP2 using decimation rate of 16
Receiving 400 samples

ts_in = 1435221596, ts_last = 0, diff = 1435221596
ts_in = 2560802396, ts_last = 1435221596, diff = 1125580800
ts_in = 3367616092, ts_last = 2560802396, diff = 806813696
ts_in = 4174429788, ts_last = 3367616092, diff = 806813696
ts_in = 686341724, ts_last = 4174429788, diff = 806879232
ts_in = 1493155420, ts_last = 686341724, diff = 806813696
ts_in = 2283192156, ts_last = 1493155420, diff = 790036736
ts_in = 3090005852, ts_last = 2283192156, diff = 806813696
ts_in = 3896819548, ts_last = 3090005852, diff = 806813696
ts_in = 408731484, ts_last = 3896819548, diff = 806879232

Copy handler called 2 times.
Copy handler called with 2968 bytes.

Elapsed time was 0.000 seconds.
Packet rate was 100000 pkts/sec.
Approximate throughput was 148.40 MB/sec.
Total instances of overruns was 0.
Total missing frames was 0.
...................................................................................

ts_in is the timestamp found in the metadata of the packet just received, 
ts_last is the one from the previous packet, and diff is just the difference 
between them. Since there seem to be no missing frames, I'm guessing the big 
diff values can't be related to lost packets?
If I try different decimation rates, I see no obvious relation between the 
decimation rate and the difference between two timestamps...

Does anyone know why the difference in timestamp value between received 
packets is so big? What am I missing here?

Thanks,
/Ulrika

_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
http://lists.gnu.org/mailman/listinfo/discuss-gnuradio
