A quick update ...

I added
#include <uhd/device3.hpp>

to my includes and the following code to UHD_SAFE_MAIN:

=================
    uhd::device3::sptr usrp3 = usrp->get_device3();
    uhd::rfnoc::dma_fifo_block_ctrl::sptr dmafifo_block_ctrl =
        usrp3->get_block_ctrl<uhd::rfnoc::dma_fifo_block_ctrl>(
            uhd::rfnoc::block_id_t(0, "DmaFIFO"));

    const uint32_t fifoSize = 4 * 1024 * 1024; // 4 MiB per channel
    const size_t numChannels = usrp->get_tx_num_channels();
    for (size_t chanIdx = 0; chanIdx < numChannels; ++chanIdx)
    {
        // uint32_t currDepth = dmafifo_block_ctrl->get_depth(chanIdx);
        // uint32_t currBaseAddr = dmafifo_block_ctrl->get_base_addr(chanIdx);
        // std::cerr << "DMA chan " << chanIdx << ": base / depth : " <<
        //     currBaseAddr << " / " << currDepth << std::endl;
        std::cerr << "Attempting to resize channel " << chanIdx << std::endl;
        // Give each channel its own non-overlapping slice of the DRAM.
        dmafifo_block_ctrl->resize(chanIdx * fifoSize, /* base address */
                                   fifoSize,           /* depth */
                                   chanIdx);           /* channel */
    }
=================

I started with 16 MB, then 8 MB, etc ...

At 4 MB, latency is 1/8 of what I see at 32 MB, as expected: about 21.33
ms.  I'm sure I'll need to tune this a little more once I apply it to my
application, but I can now control it.
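
As a sanity check on those numbers -- a back-of-envelope sketch, assuming
sc16 wire format (4 bytes per sample) at my 50 MSPS rate -- the buffering
latency should be roughly depth / (bytes_per_sample * sample_rate).  That
gives ~168 ms for the default 32 MiB FIFO (which lines up with the ~167 ms
floor I mentioned in my earlier message below) and ~21 ms for 4 MiB:

=================
#include <cstdint>
#include <initializer_list>
#include <iostream>

int main()
{
    // Assumptions: sc16 samples on the wire (4 bytes each) at 50 MSPS.
    const double sample_rate    = 50e6;
    const double bytes_per_samp = 4.0;

    for (const uint32_t depth : {32u * 1024 * 1024, 4u * 1024 * 1024}) {
        const double latency_ms = depth / (bytes_per_samp * sample_rate) * 1e3;
        std::cout << (depth >> 20) << " MiB -> " << latency_ms
                  << " ms of buffering" << std::endl;
    }
    return 0;
}
=================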

I greatly appreciate the help, Brian!

Best,
Doug


On Wed, Mar 10, 2021 at 2:46 PM Doug Blackburn <doug...@gmail.com> wrote:

> Brian --
>
> Thanks so much!  I sprinkled my comments in below:
>
> On Wed, Mar 10, 2021 at 1:42 PM Brian Padalino <bpadal...@gmail.com>
> wrote:
>
>> On Wed, Mar 10, 2021 at 12:39 PM Doug Blackburn <doug...@gmail.com>
>> wrote:
>>
>>> Brian,
>>>
>>> I've seen this using UHD-3.14 and UHD-3.15.LTS.
>>>
>>
>> The DMA FIFO block default size is set here in the source code for
>> UHD-3.15.LTS:
>>
>>
>> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/lib/rfnoc/dma_fifo_block_ctrl_impl.cpp#L25
>>
>> And the interface in the header file provides a way to resize it:
>>
>>
>> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/include/uhd/rfnoc/dma_fifo_block_ctrl.hpp#L33
>>
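>> For reference, the resize() interface there takes a base address and a
>> depth (both in bytes, as far as I can tell) plus the channel index --
>> paraphrasing the header, roughly:
>>
>>     void resize(const uint32_t base_addr, const uint32_t depth, const size_t chan);
>>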
>> I'd probably resize it before sending any data to it.
>>
>> That should help with your latency question, I think.
>>
>
> This is super helpful.  I'll give it a shot and see what happens!
>
>
>>
>>
>>>
>>> I have performed some follow-on testing that raises more questions,
>>> particularly about the usage of end_of_burst and start_of_burst.  I talk
>>> through my tests and observations below; the questions that these generated
>>> are at the end ...
>>>
>>> I thought it would be interesting to modify benchmark_rate.cpp to
>>> attempt to place a timestamp on each buffer that was sent out, to see if I
>>> could observe the same behavior.  I haven't seen a thorough explanation of
>>> what exactly the start_of_burst and end_of_burst metadata fields do at a
>>> low level beyond this posting --
>>> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
>>> and a note about start_of_burst resetting the CORDICs (I'd appreciate being
>>> pointed in the right direction if I've missed it, thank you!) -- so I
>>> wanted to test the effect on timing when has_time_spec is true and the SOB
>>> and EOB fields are set to true or false in various combinations.  I
>>> initially set my test up in the following way (I hope the pseudocode makes
>>> sense) to make observations easy: I watched for the LO on a spectrum
>>> analyzer.  Per the code below, I would expect a burst every 2 seconds if
>>> the time_spec was followed ...
>>>
>>> ======
>>> max_samps_per_packet = 5e6; // 100 ms at 50 MSPS
>>> start_of_burst = <true,false>
>>> end_of_burst = <true,false>
>>> has_time_spec = true;
>>> while( not burst_timer_elapsed )
>>> {
>>>     tx_stream->send();
>>>     start_of_burst = <true,false>
>>>     end_of_burst = <true,false>
>>>     time_spec = time_spec + 2.0;
>>> }
>>> ======
>>>
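>>> For the archive: here's that loop spelled out with actual UHD calls -- a
>>> minimal sketch with assumed names (usrp, tx_stream, num_samps,
>>> burst_timer_elapsed) and no error handling:
>>>
>>> ======
>>> uhd::tx_metadata_t md;
>>> md.start_of_burst = true;  // toggled <true,false> across test runs
>>> md.end_of_burst   = true;  // toggled <true,false> across test runs
>>> md.has_time_spec  = true;
>>> md.time_spec      = usrp->get_time_now() + uhd::time_spec_t(1.0);
>>>
>>> std::vector<std::complex<short>> buff(num_samps); // 5e6 samps = 100 ms @ 50 MSPS
>>> while (not burst_timer_elapsed) {
>>>     tx_stream->send(&buff.front(), buff.size(), md);
>>>     md.time_spec = md.time_spec + uhd::time_spec_t(2.0); // next burst 2 s later
>>> }
>>> ======
>>>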
>>
>> A few things.  First, I'd expect a burst every 2 seconds if you set sob =
>> true and eob = true outside the loop, never change them, and only change
>> the time_spec for every send.  Does that not work for you?
>>
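>> A sketch of what I mean, with the same assumed names as the pseudocode
>> above -- set the flags once and only advance the time:
>>
>>     md.start_of_burst = true;
>>     md.end_of_burst   = true;
>>     md.has_time_spec  = true;
>>     while (not burst_timer_elapsed) {
>>         tx_stream->send(&buff.front(), buff.size(), md);
>>         md.time_spec = md.time_spec + uhd::time_spec_t(2.0);
>>     }
>>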
>>
> Yes -- that does work, too.  I tried all the different combinations ... So,
> for example, if sob/eob were true/true outside the loop and false/false
> inside the loop, I'd see a two-second pause after the first burst and then
> we'd roll through the rest of them contiguously.
>
>
>> Next, the sizing of packets can be really important here.  The way the
>> DUC works is a little unintuitive: it creates N output packets from 1
>> input packet.  To this end, if you have an extra 1 sample, it will
>> repeat that small 1-sample packet N times - very processing-inefficient.
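>>
>> For example (a sketch; buff, total_samps, and md are assumed from
>> context), rounding each send down to a whole number of packets avoids
>> that ragged tail:
>>
>>     const size_t spp = tx_stream->get_max_num_samps(); // samples per packet
>>     const size_t nsamps = (total_samps / spp) * spp;   // whole packets only
>>     tx_stream->send(&buff.front(), nsamps, md);        // leftovers go in the next send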
>>
>> Furthermore, when I tried doing this I would run into weird edge cases
>> with the DMA FIFO where the send() call would block indefinitely.  My
>> workaround was to manually zero-stuff and keep the transmit FIFO constantly
>> going - not using any eob flags at all.  My system would actually use a
>> software FIFO for bursts that wanted to go out, and I had a software thread
>> in a tight loop that would check whether the FIFO had anything in it.  If it
>> didn't, it would zero-stuff a small number of transmit samples (1 packet,
>> I believe).  If it did, it would send the burst.  You may want to do
>> something similar even with a synchronized system and counting outgoing
>> samples.
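>>
>> Roughly the shape of that thread -- a minimal sketch where sw_fifo, Burst,
>> and running are assumed names, and the real thing needs locking around the
>> software FIFO:
>>
>>     // Keep the hardware FIFO fed: real bursts when available, zeros otherwise.
>>     std::vector<std::complex<short>> zeros(tx_stream->get_max_num_samps());
>>     uhd::tx_metadata_t md; // sob/eob left false; the stream runs continuously
>>
>>     while (running) {
>>         if (not sw_fifo.empty()) {
>>             Burst burst = sw_fifo.pop();             // assumed software FIFO type
>>             tx_stream->send(burst.data(), burst.size(), md);
>>         } else {
>>             // Zero-stuff one packet's worth of samples to keep TX going.
>>             tx_stream->send(&zeros.front(), zeros.size(), md);
>>         }
>>     }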
>>
>
> :) This is what led me here; the application I was working on essentially
> did that.  I'd have some data I'd want to send at a specific time.  I'd
> translate that time to a number of buffers past the start of my transmit
> (with some extra bookkeeping and buffer magic to align samples, etc.), and
> found that I could only get this to work if my requested transmit time was
> at least 167 ms in the future.  This didn't quite reconcile with the ~1 ms
> of latency I could demonstrate with 'latency_test' -- which uses a single
> packet -- hence my trip down the rabbit hole.  If I can lower that number a
> little by modifying the FIFO block, I think I'll be happy, but ...
>
>
>>
>>
>>>
>>> My observations were as follows: if end_of_burst for the prior burst was
>>> set to true, my code adhered to the time_spec.  The value of start_of_burst
>>> had no effect on whether or not the expected timing was followed.  If
>>> end_of_burst was set to false, the time_spec for the following burst was
>>> ignored and the packet was transmitted as soon as possible.
>>>
>>> I then followed this up with another test -- I replaced
>>>       time_spec = time_spec + 2.0;
>>> with the equivalent of
>>>       time_spec = time_spec + 0.100;
>>>
>>> And set end_of_burst and start_of_burst to true.
>>>
>>> I figured that, since I can run this continuously by setting has_time_spec
>>> to 'false' after the first burst and can easily push data into the FIFO
>>> buffer, doing this should not be a problem ... but I was presented with a
>>> stream of lates and no actual transmission.
>>>
>>> I understand that 100 ms is not an integer multiple of the packet size
>>> returned by get_max_num_samps() -- so I tried an integer multiple of the
>>> packet size, too, with an appropriately updated time_spec.  This also
>>> resulted in lates through the entire transmit.
>>>
>>> So .... here are my additional questions:
>>>
>>> Is the only effect of "start_of_burst = true" to cause the CORDICs to
>>> reset?
>>> What is end_of_burst doing to enable a following time_spec to be used?
>>> What additional work is being performed when I set end_of_burst and
>>> has_time_spec to 'true' such that I get lates throughout the entire
>>> attempted transmission?
>>>
>>
>> I don't know the answer to these questions.  Try the suggestions above
>> and see if they help you out or not.
>>
>> Good luck!
>>
>>
> ... I would love to know the answers to these questions if anyone knows
> them, or could point me towards where they are documented.
>
> Thanks again!
>
>
>> Brian
>>
>
> Best, Doug
>
>
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
