Yes, there were multiple issues in the thread, and what I said only applies
to TX.  On the receive side there should certainly be no latency issues:
every block does all the computation it can on whatever data it receives,
which is exactly the behavior that trips up the TX guys :)

Matt

On Fri, Oct 17, 2014 at 10:25 AM, John Malsbury <
jmalsbury.perso...@gmail.com> wrote:

> Matt,
>
> BTW - in this particular case, this was *all* in the receive direction -
> it's a low-rate demodulator.  So I was a bit surprised to see the issue
> at all, because I've never seen such high latency on a receive flowgraph.
> We have a closed-loop approach to managing transmit latency through the
> USRP without timestamps or relying on a source stream for throttling.
> But I do think the advice you provide is useful for others who are
> trying to solve *transmitter latency* by shrinking buffers.
>
> Per the email above - it turned out that buffer sizes were not an issue.
> Something else weird was happening - see (2) in my previous email if
> you're interested.
>
> Thanks for following up on this thread.
>
> -John
>
> On Fri, Oct 17, 2014 at 10:16 AM, Matt Ettus <m...@ettus.com> wrote:
>
>>
>> We see this issue a lot with applications that only transmit, and which
>> transmit continuously.  The problem is that you end up generating samples
>> far in advance of when you really know what you want to transmit, because
>> there is no rate-limiting on the production side.
>>
>> Some general principles -- Large buffers *allow* you to deal with high
>> latency.  Large buffers do not *create* high latency unless the application
>> is not designed properly.  A properly designed application will work with
>> infinitely large buffers as well as it does with minimally sized ones.
>>
>> Shrinking buffers may allow your application to work, but that isn't
>> really the best way to solve this problem.  The best way to solve the
>> problem is to modify your head-end source block to understand wall-clock
>> time.  The easiest way to do that if you are using a USRP is to instantiate
>> a UHD source (i.e. a receiver) at a relatively low sample rate and feed it
>> into the source you have created.
>>
>> Your source block should then look at timestamps on the incoming samples
>> (it can throw the samples themselves away).  It should generate only enough
>> samples to cover the maximum latency you want, and it should timestamp
>> those transmit samples.  For example, if it receives samples timestamped
>> with T1, it should generate samples with timestamps from T1+L1 to T1+L1+L2,
>> where L1 is the worst-case flowgraph and device latency, and L2 is the
>> worst-case reaction time you are looking for.  Then, if you suddenly get
>> a command from your operator to send a message, you know that you will
>> never need to wait more than L2 seconds: your worst-case reaction time
>> is bounded.
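>>
>> A rough sketch of such a block in Python (an illustration, not code from
>> this thread; it assumes the pacing RX stream runs at the same sample
>> rate as the TX stream, so a 1:1 sync block suffices, and l1/l2 are the
>> L1/L2 above):
>>
>>     import numpy as np
>>     import pmt
>>     from gnuradio import gr
>>
>>     class paced_source(gr.sync_block):
>>         """Discards the RX samples it is fed, but uses their timestamps
>>         to pace and timestamp the TX samples it produces."""
>>
>>         def __init__(self, samp_rate, l1, l2):
>>             gr.sync_block.__init__(self, name="paced_source",
>>                                    in_sig=[np.complex64],
>>                                    out_sig=[np.complex64])
>>             self.samp_rate = samp_rate
>>             self.l1 = l1      # worst-case flowgraph + device latency, s
>>             self.l2 = l2      # worst-case reaction time you accept, s
>>             self.t0 = None    # USRP time of absolute stream item 0
>>             self.tagged = False
>>
>>         def work(self, input_items, output_items):
>>             nin = len(input_items[0])
>>             # UHD attaches an rx_time tag (full_secs, frac_secs) at
>>             # stream start and after overruns; that is our wall clock.
>>             for tag in self.get_tags_in_window(0, 0, nin,
>>                                                pmt.intern("rx_time")):
>>                 full = pmt.to_uint64(pmt.tuple_ref(tag.value, 0))
>>                 frac = pmt.to_double(pmt.tuple_ref(tag.value, 1))
>>                 self.t0 = full + frac - tag.offset / self.samp_rate
>>             if self.t0 is None:
>>                 return 0  # no time reference yet: produce nothing
>>             # T1 = USRP time of the newest sample in this input window.
>>             t1 = self.t0 + (self.nitems_read(0) + nin) / self.samp_rate
>>             n = min(nin, len(output_items[0]))
>>             output_items[0][:n] = 0  # stand-in for a real modulator
>>             if not self.tagged:
>>                 # Schedule transmission to start at T1 + L1 + L2; the
>>                 # 1:1 coupling to the RX stream then keeps production
>>                 # from running more than L1 + L2 ahead of the clock.
>>                 t_tx = t1 + self.l1 + self.l2
>>                 self.add_item_tag(
>>                     0, self.nitems_written(0), pmt.intern("tx_time"),
>>                     pmt.make_tuple(pmt.from_uint64(int(t_tx)),
>>                                    pmt.from_double(t_tx - int(t_tx))))
>>                 self.tagged = True
>>             return n
>>
>> The UHD sink understands the tx_time tag, so it holds the stream until
>> its scheduled start instead of transmitting as fast as samples arrive.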
>>
>> It should be noted that in a two-way application like GSM or LTE, you
>> would never run into these problems; they are naturally avoided because
>> you don't generate samples until you have seen what you received.  It is
>> only an issue in TX-only apps.
>>
>> I think we should generate an example app to do this, because the issue
>> comes up periodically, especially among the space communications crowd.  It
>> is a design pattern we really should document.
>>
>> Matt
>>
>>
>> On Fri, Oct 10, 2014 at 5:20 PM, Vanush Vaswani <van...@gmail.com> wrote:
>>
>>> I ran into this problem when doing 57.6 kbps BPSK AX.25 decoding.  The
>>> only way I was able to fix it was to reduce GR_FIXED_BUFFER_SIZE in
>>> flat_flowgraph.cc.  This is regarded as a dodgy hack by all the GR
>>> developers here, but it worked for me (and I had read the article on
>>> latency).  I believe the guy who wrote GMSKSpacecraftGroundstation had
>>> the same problem; I found it discussed in one of his old threads.
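>>>
>>> (For what it's worth, on GNU Radio 3.7 and later the same effect is
>>> available per block, without patching, through the output-buffer
>>> setters.  An untested sketch with placeholder block names:
>>>
>>>     # Cap the output buffers of the slow tail-end blocks before the
>>>     # flowgraph starts; sizes are in items, and the scheduler rounds
>>>     # them to an allocator-friendly granularity.
>>>     for blk in (delay_blk, xor_blk, sync_blk):
>>>         blk.set_max_output_buffer(512)
>>>     tb.start()
>>>
>>> GRC exposes the same knobs as the per-block min/max output buffer
>>> options.)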
>>>
>>> On Sat, Oct 11, 2014 at 5:55 AM, Dan CaJacob <dan.caja...@gmail.com>
>>> wrote:
>>> > Hey John,
>>> >
>>> > I am way out of my depth here, but while working on a custom Python
>>> > block the other day, I saw some weird behavior in 3.7.5 that was
>>> > similar.  Setting the global max output had no effect, but setting
>>> > the just-upstream block(s)' min/max output buffer size(s) low fixed
>>> > my apparent slowness.
>>> >
>>> > Very Respectfully,
>>> >
>>> > Dan CaJacob
>>> >
>>> > On Fri, Oct 10, 2014 at 2:14 PM, John Malsbury
>>> > <jmalsbury.perso...@gmail.com> wrote:
>>> >>
>>> >> Default scheduler.
>>> >>
>>> >> tb.start(1024), with different values, etc, etc.
>>> >>
>>> >> Most of the downstream blocks are stock GNU Radio blocks - a delay
>>> >> block (max delay is 1 sample), logical operators, etc.  I guess I'll
>>> >> add some printf debugging?
>>> >>
>>> >> -John
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On Fri, Oct 10, 2014 at 11:07 AM, Marcus Müller
>>> >> <marcus.muel...@ettus.com> wrote:
>>> >>>
>>> >>> Hi John,
>>> >>> On 10.10.2014 19:33, John Malsbury wrote:
>>> >>>
>>> >>> Toward the end of the receive chain, there are a multitude of
>>> >>> blocks that are used for Viterbi node synchronization.  I've found
>>> >>> that the number of blocks in series (3-5), combined with the low
>>> >>> data rates at this point in the flowgraph, leads to latencies on
>>> >>> the order of 1-2 minutes.  That is to say, once node
>>> >>> synchronization is accomplished, it takes 1-2 minutes to flush
>>> >>> these blocks and get the fresh, good data through.  This is
>>> >>> measured with function probes on the state of the sync process, and
>>> >>> BERT analysis of the demodulator output [through a TCP/IP socket].
>>> >>>
>>> >>> I see you found the hidden interplanetary signal delay simulator.
>>> >>> Congrats! Watch out for the red shift in downstream samples.
>>> >>>
>>> >>> No, seriously, that sounds like a lot.
>>> >>> You are using 3.6.4.1 with the default scheduler, tpb?
>>> >>>
>>> >>>    - I've tried messing around with the output buffer size option
>>> >>>    in the flowgraph, but this seems to have a negligible impact.
>>> >>>
>>> >>> That surprises me.  How did you mess around?  top_block->run(1024)?
>>> >>> Do your blocks really get called with smaller input item sizes?
>>> >>> (Do a little printf-debugging.)
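>>> >>>
>>> >>> A pass-through probe is enough for that check.  An illustrative
>>> >>> sketch (not from this thread), to splice in wherever you want to
>>> >>> see the chunk sizes:
>>> >>>
>>> >>>     import numpy as np
>>> >>>     from gnuradio import gr
>>> >>>
>>> >>>     class chunk_probe(gr.sync_block):
>>> >>>         """Prints how many items each work() call actually gets."""
>>> >>>         def __init__(self):
>>> >>>             gr.sync_block.__init__(self, name="chunk_probe",
>>> >>>                                    in_sig=[np.complex64],
>>> >>>                                    out_sig=[np.complex64])
>>> >>>         def work(self, input_items, output_items):
>>> >>>             n = len(input_items[0])
>>> >>>             print("chunk_probe: %d items" % n)
>>> >>>             output_items[0][:n] = input_items[0][:n]
>>> >>>             return n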
>>> >>>
>>> >>>    - I can write some custom blocks to reduce the overall block
>>> >>>    count, but I have demonstrated that this provides a linear
>>> >>>    improvement, rather than the two-orders-of-magnitude improvement
>>> >>>    I need.
>>> >>>
>>> >>> Any general advice anyone can offer?  It feels like the right
>>> >>> solution is to force small buffer sizes on the relevant blocks...
>>> >>>
>>> >>> Agreed.  But still, that sounds *bad*.  Are you sure none of the
>>> >>> blocks demands a large input/output multiple?
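>>> >>>
>>> >>> One way to audit that from Python, assuming the suspect blocks are
>>> >>> reachable from the top block (block names here are placeholders):
>>> >>>
>>> >>>     # A large output multiple or history on any block forces the
>>> >>>     # scheduler to accumulate items before calling work(), which
>>> >>>     # shows up as latency at low sample rates.
>>> >>>     for blk in (delay_blk, corr_blk, viterbi_blk):
>>> >>>         print("%s: output_multiple=%d history=%d"
>>> >>>               % (blk.name(), blk.output_multiple(), blk.history()))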
>>> >>>
>>> >>>
>>> >>> Greetings,
>>> >>> Marcus
>>> >>>
>>> >>> -John
>>> >>>
>>> >>
>>> >
>>
>
_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
