Hi Jack,

that is really peculiar, as 400k is really not a rate at which I'd
expect you to run into trouble. I'd agree with your interpretation of
the netstat observations – but really, why on earth would the kernel be
so short on resources that it has to drop packets?

This calls for an investigation of the run-time behaviour of your flow
graph – something is eating up your resources. Would you mind
installing "perf" (under Fedora/Red Hat/CentOS: dnf install perf; under
Debian/Ubuntu/Mint...: apt-get install linux-tools), then

sudo sysctl kernel.perf_event_paranoid=-1
perf record -ag python /path/to/your/flow/graph
[let run for a minute or so, then stop operation]
perf report

That should give you a text UI that shows where in your program the
CPU was each time perf sampled.
If you want to share that, an easy way is to write the call tree to a
file, e.g. "perf report --stdio > report.txt", and share that report.txt.

Thanks!
Marcus

On 08/03/2017 12:06 PM, Jack White via USRP-users wrote:
> For the moment it's just a UHF satellite telemetry downlink (attached)
> with GMSK. The initial behaviour was that after maybe a minute, I
> would suddenly get a load of dropped packets (D in the log) - maybe a
> hundred or so - and then 'normal service' would resume. This process
> repeated for about an hour, and then communication would stop: the
> processor load for python would drop to 0% and the GUI plots would
> remain static in their last state. However, it's not that the
> programme has frozen, because I can still rescale all the plots - it's
> the communication with the X310 that's stopped.
>
> Following the guidance of someone's post somewhere, I used netstat to
> monitor the kernel receive buffer errors (RcvbufErrors), which spiked
> while the flowgraph was running. The poster said that this indicates
> that packets are being dropped in the kernel, not on the network card
> used for interfacing with the X310. So I installed a low-latency
> kernel. This has almost entirely got rid of the dropped packets, but
> the freeze now happens after maybe five minutes. GNU Radio doesn't log
> an error. I can ping the X310, but uhd_find_devices and uhd_usrp_probe
> don't find it, though maybe that's because there's still an open
> connection to it.
>
> Cheers,
>
> Jack
>
>
>
> On Thu, Aug 3, 2017 at 10:19 AM, Marcus Müller
> <marcus.muel...@ettus.com> wrote:
>
>     Ah! You've set the data type of the QT GUI sink to "complex
>     message"; I wasn't aware that this feature had made it to the
>     master branch :)
>
>>     The point of moving to C++ was that the flowgraph I /really/ want
>>     to use is just causing me huge problems - most notably that there
>>     are periods of a few seconds every now and again when the USRP
>>     drops a load of packets and then, after a while, the flow just
>>     freezes up. I find it difficult to follow how GNU Radio really
>>     works and I thought it would be a better bet to be directly in
>>     control of my samples all the way.
>
>     Hm, yeah, I know it's not trivial, but I'm pretty sure this isn't
>     the way to go. What kind of flow graph was giving you so much trouble?
>
>
>     Best regards,
>
>     Marcus
>
>
>     On 08/03/2017 11:15 AM, Jack White wrote:
>>     Hi Marcus,
>>
>>     Thanks for the response. I attach the flowgraph I am using for
>>     this test, and for which I got the "unknown data type of samples"
>>     error.
>>
>>     I wasn't aware that the metadata was included in the PDUs, so
>>     that makes more sense now.
>>
>>     The point of moving to C++ was that the flowgraph I /really/ want
>>     to use is just causing me huge problems - most notably that there
>>     are periods of a few seconds every now and again when the USRP
>>     drops a load of packets and then, after a while, the flow just
>>     freezes up. I find it difficult to follow how GNU Radio really
>>     works and I thought it would be a better bet to be directly in
>>     control of my samples all the way.
>>
>>     Jack
>>
>>
>>
>>     On Wed, Aug 2, 2017 at 10:45 PM, Marcus Müller via USRP-users
>>     <usrp-users@lists.ettus.com> wrote:
>>
>>         Hi Jack,
>>
>>         PDUs are not just samples one after the other – they contain
>>         metadata. I can't really imagine what your flow graph looks
>>         like, so I'd be grateful for a screenshot (File->Screen Capture).
>>
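>>         To illustrate: a PDU, as GNU Radio's message blocks pass it
>>         around, is a PMT pair of a metadata dictionary and a data
>>         vector. A minimal C++ sketch (the sample buffer here is
>>         hypothetical):
>>
>>             #include <pmt/pmt.h>
>>             #include <complex>
>>             #include <vector>
>>
>>             // hypothetical buffer of received samples
>>             std::vector<std::complex<float>> samples(1024);
>>             // a PDU: pair of (metadata dict, c32 data vector)
>>             pmt::pmt_t pdu = pmt::cons(
>>                 pmt::make_dict(),   // empty metadata
>>                 pmt::init_c32vector(samples.size(), samples));
>>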
>>         Anyway, there'd be no obvious reason your UDP detour would
>>         make things faster – maybe the intermediate socket buffering
>>         might help, but you'd probably get the same result by
>>         extending a UHD USRP Source's Output Buffer Size.
>>
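>>         For instance – a minimal C++ sketch of both knobs, with a
>>         hypothetical device address (recv_buff_size asks UHD for a
>>         larger kernel socket buffer; set_min_output_buffer is, to my
>>         understanding, what the block's Output Buffer Size maps to):
>>
>>             #include <gnuradio/uhd/usrp_source.h>
>>
>>             auto src = gr::uhd::usrp_source::make(
>>                 uhd::device_addr_t(
>>                     "addr=192.168.10.2,recv_buff_size=50000000"),
>>                 uhd::stream_args_t("fc32"));
>>             // ask for at least 2^20 items of output buffering
>>             src->set_min_output_buffer(1 << 20);
>>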
>>         So, I'm not sure where we should take this – from a gut
>>         feeling, we should maybe move on to the discuss-gnuradio
>>         mailing list and discuss which part of your GNU Radio
>>         application isn't performing well enough – as I'm currently
>>         assuming your approach wasn't born out of an in-depth
>>         analysis, but might be more of a trial-and-error iteration?
>>
>>         Best regards,
>>
>>         Marcus
>>
>>
>>         On 02.08.2017 13:10, Jack White via USRP-users wrote:
>>>         Hi,
>>>
>>>         I've been having some difficulty getting reliable data flow
>>>         from my USRP X310 with a GRC flowgraph, so I'm trying out
>>>         writing my system in C++ with the UHD driver API. My first
>>>         step has been to retrieve samples from the X310, forward
>>>         them to a UDP port, pick them up with a GRC Socket PDU
>>>         block, and then plot them. The C++ programme, so far,
>>>         follows Ettus's example rx_samples_to_udp almost exactly and
>>>         uses the std::complex<float> data type.
>>>
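>>>         The core of it looks roughly like this – a sketch along the
>>>         lines of rx_samples_to_udp, with hypothetical address and
>>>         port values, and error handling omitted:
>>>
>>>             #include <uhd/usrp/multi_usrp.hpp>
>>>             #include <uhd/transport/udp_simple.hpp>
>>>             #include <boost/asio/buffer.hpp>
>>>             #include <complex>
>>>             #include <vector>
>>>
>>>             // hypothetical device address and UDP destination
>>>             auto usrp = uhd::usrp::multi_usrp::make(
>>>                 uhd::device_addr_t("addr=192.168.10.2"));
>>>             auto rx_stream =
>>>                 usrp->get_rx_stream(uhd::stream_args_t("fc32"));
>>>             auto udp = uhd::transport::udp_simple::make_connected(
>>>                 "127.0.0.1", "52001");
>>>
>>>             // start streaming continuously
>>>             uhd::stream_cmd_t cmd(
>>>                 uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
>>>             cmd.stream_now = true;
>>>             rx_stream->issue_stream_cmd(cmd);
>>>
>>>             std::vector<std::complex<float>> buff(
>>>                 rx_stream->get_max_num_samps());
>>>             uhd::rx_metadata_t md;
>>>             while (true) {
>>>                 // one buffer of samples; md.error_code should be checked
>>>                 size_t n = rx_stream->recv(
>>>                     &buff.front(), buff.size(), md, 3.0);
>>>                 udp->send(boost::asio::buffer(
>>>                     buff, n * sizeof(buff.front())));
>>>             }
>>>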
>>>         When the data enters the running flowgraph from the UDP
>>>         transport, I get this error:
>>>
>>>         thread[thread-per-block[1]: <block freq_sink_c (1)>]:
>>>         freq_sink_c: unknown data type of samples; must be complex.
>>>
>>>         Can anyone offer insight into why this should occur?
>>>
>>>         Many thanks,
>>>
>>>         -- 
>>>         Jack White
>>>         white.n.j...@googlemail.com
>>>         07875 813 745