Shouldn’t you be scaling your num_tx_samples by the time per sample when 
calculating the expectedTime?
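
For example, here is an untested sketch (reusing your startTime, num_tx_samps,
and usrp handle) that scales the sample count by the sample period explicitly,
keeping the conversion in ticks at the TX rate:

====
// Untested sketch: convert the running sample count into elapsed transmit
// time by scaling it by the sample period (equivalently, ticks / tick_rate).
const double tx_rate = usrp->get_tx_rate();
uhd::time_spec_t expectedTime = startTime
    + uhd::time_spec_t::from_ticks((long long)num_tx_samps, tx_rate);
====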

Sent from my iPhone

> On Mar 9, 2021, at 10:03 PM, Doug Blackburn via USRP-users 
> <usrp-users@lists.ettus.com> wrote:
> 
> 
> Hello --
> 
> I've got some questions re: latency with the x300 over the 10GigE interface.  
> 
> If I use the latency_test example operating at a rate of 50 MSPS, I have no 
> issues with a latency of 1ms.  The latency test receives data, examines the 
> time stamp, and transmits a single packet. 
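> 
> Roughly, that flow looks like the following (my paraphrase, not the exact 
> example code; rx_stream, tx_stream, and nsamps are set up elsewhere):
> 
> ====
> std::vector<std::complex<float>> buff(nsamps);
> uhd::rx_metadata_t rx_md;
> 
> // Receive one buffer and note the hardware timestamp it carries.
> rx_stream->recv(&buff.front(), buff.size(), rx_md, 0.1);
> 
> // Send a single packet scheduled a fixed offset after that timestamp.
> uhd::tx_metadata_t tx_md;
> tx_md.start_of_burst = true;
> tx_md.end_of_burst   = true;
> tx_md.has_time_spec  = true;
> tx_md.time_spec      = rx_md.time_spec + uhd::time_spec_t(0.001); // ~1 ms later
> tx_stream->send(&buff.front(), buff.size(), tx_md);
> ====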
> 
> I have an application where I'd like to run the transmitter continuously, and 
> I got curious about the latency involved in that operation.  My application 
> is similar to the benchmark_rate example.  I added the following lines to the 
> benchmark_rate example at line 256, immediately after the line:
> 
> md.has_time_spec = false; 
> 
> ====
> // Log, roughly once per second at 50 MSPS, the device time when send()
> // returns and the time the first sample of this buffer should go out.
> if ( (num_tx_samps % 50000000) < 4*max_samps_per_packet )
> {
>     uhd::time_spec_t expectedTime = startTime
>         + (double)num_tx_samps / (double)usrp->get_tx_rate();
>     uhd::time_spec_t timeAtLog = usrp->get_time_now();
>     std::cerr << "==== Actual time ====" << std::endl;
>     std::cerr << "     " << timeAtLog.get_full_secs() << " / "
>               << timeAtLog.get_frac_secs() << std::endl;
>     std::cerr << "==== Expected time ====" << std::endl;
>     std::cerr << "     " << expectedTime.get_full_secs() << " / "
>               << expectedTime.get_frac_secs() << std::endl;
> }
> ====
> 
> The intent of this insertion is to log the time at which we return from 
> tx_stream->send() and the time at which the first sample of that sent data 
> should be transmitted, logging approximately once per second when running 
> at 50 MSPS.
> 
> After the first second, I consistently saw the following results:
> 
> ==== Actual time ====
>      1 / 0.10517
> ==== Expected time ====
>      1 / 0.27253
> 
> ==== Actual time ====
>      1 / 0.105419
> ==== Expected time ====
>      1 / 0.27255
> 
> This indicates to me that there is a latency of approximately 167 ms when 
> transmitting data; that is, send() returns roughly 167 ms earlier than I 
> expect the data to actually be transmitted.  If I halve the sample rate to 
> 25 MSPS, the latency doubles.
> 
> What is the source of the latency when running in continuous mode?  
> Initially, I thought this might be related to the send_buffer_size, but 
> changing send_buffer_size does not seem to have an effect.  FWIW, 167 ms at 
> 50 MSPS is suspiciously close to the value for wmem_max (33554432) 
> suggested in the x300 system configuration ... but neither changing that 
> value nor changing send_buffer_size seems to make a difference.
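> 
> For reference, the arithmetic behind "suspiciously close" (assuming the 
> default sc16 wire format, i.e. 4 bytes per sample):
> 
> ====
> #include <cstdio>
> 
> int main()
> {
>     const double buffer_bytes     = 33554432; // suggested wmem_max
>     const double bytes_per_sample = 4;        // sc16: 16-bit I + 16-bit Q
>     const double sample_rate      = 50e6;     // 50 MSPS
> 
>     // A buffer of this size holds ~168 ms of samples at 50 MSPS; halving
>     // the rate doubles that, which matches what I see.
>     std::printf("%.3f s\n", buffer_bytes / (bytes_per_sample * sample_rate));
>     return 0;
> }
> ====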
> 
> Is this latency tunable?  
> 
> Thank you for your help!
> 
> Best Regards,
> Doug Blackburn
> 
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
