Re: [USRP-users] Overflows at flowgraph start using gr-uhd

2020-11-19 Thread Josh via USRP-users
Starting to get an idea of what is going on with the startup overflows.

First, suppose I have a raw UHD application that does the following:
{
  instantiate usrp object; set freq, gain, samp_rate
  issue stream cmd to start some time in the future (2 sec)
  while (1)
  {
    call _rx_stream->recv()
    print the error code
  }
}

The output of this will be [timeout, timeout, ..., none, none, ...]; that
is, it times out until the stream starts, then gets valid samples.
If I then put a very small delay (0.1 seconds in my case) after each
timeout, I see the exact same behavior as in gr-uhd, which is:
[timeout, ..., overflow, overflow(seq), overflow(seq), none, ...] --> the
"ODD"

This corresponds to the behavior of gr-uhd, where inside work(), when
recv() returns a timeout, work() returns 0, and the scheduler calls
work() again whenever it gets around to it.
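A quick back-of-the-envelope check shows why even a 0.1-second gap between recv() calls guarantees an overflow at this rate. This is a sketch: the 5 MS/s rate and num_recv_frames=128 come from this thread, but the ~2000 samples-per-frame figure is an assumed round number, not a measured B210 value.

```python
# Why a 0.1 s pause between recv() calls overflows the receive buffering.
samp_rate = 5e6                  # samples per second, per channel (from this thread)
pause = 0.1                      # seconds during which work() consumes nothing
backlog = samp_rate * pause      # 500,000 samples pile up

num_recv_frames = 128            # from this thread
samples_per_frame = 2000         # assumed for illustration
capacity = num_recv_frames * samples_per_frame   # 256,000 samples of buffering

print(backlog > capacity)        # prints True: the buffers cannot absorb the pause
```

Under these assumptions, any pause longer than capacity / samp_rate, about 51 ms, will overflow.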

So if I modify the work() function of usrp_source_impl.cc to be:
case ::uhd::rx_metadata_t::ERROR_CODE_TIMEOUT:
    // its ok to timeout, perhaps the user is doing finite streaming
    // return 0;
    return work(noutput_items, input_items, output_items);

Then the work() function keeps retrying with no delay, and there are no
overflows.  Obviously this is not desirable behavior for the general case
(it is an unbounded recursive call inside work()), but it is what "made
the problem go away" for me.

Josh
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] Overflows at flowgraph start using gr-uhd

2020-11-19 Thread Josh via USRP-users
Same deal - with "num_recv_frames=128,master_clock_rate=" +
str(samp_rate*4) I still get "ODD", just about every time.



Re: [USRP-users] Overflows at flowgraph start using gr-uhd

2020-11-19 Thread Ron Economos via USRP-users
The automatic setting of the master clock rate seems to be getting in the way
after the PPS transition. Try explicitly setting the master clock rate:


"num_recv_frames=128,master_clock_rate=" + str(samp_rate*4)

Ron
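For reference, the device-argument expression above is Python from the GRC flowgraph; expanded with the 5 MS/s sample rate used in this thread it yields:

```python
# Expanded device-argument string for samp_rate = 5 MS/s (from this thread).
samp_rate = 5e6
dev_args = "num_recv_frames=128,master_clock_rate=" + str(samp_rate * 4)
print(dev_args)   # num_recv_frames=128,master_clock_rate=20000000.0
```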



Re: [USRP-users] Overflows at flowgraph start using gr-uhd

2020-11-19 Thread Josh via USRP-users
Marcus,

This is naked hardware - a B210 over USB into a pretty beefy laptop running
Ubuntu 20.04, GNU Radio latest master (3.9).
Even with num_recv_frames=128, I am still getting the "ODD" at startup of
the flowgraph.

Any other optimizations I should be tuning?  I get no overruns in the
steady state, just at startup.

Flowgraph is attached.

Josh

test_usrp_rx.grc
Description: application/gnuradio-grc


Re: [USRP-users] Overflows at flowgraph start using gr-uhd

2020-11-18 Thread Marcus D. Leech via USRP-users

On 11/18/2020 07:27 AM, Josh via USRP-users wrote:

I'm seeing a difference in behavior between gr-uhd and the plain UHD C++ API:

Setup:
B210, 2 channels, 5 MS/s, master_clock_rate 20 MS/s, PPS sync
Receive-only flowgraph

With gr-uhd, there is always an "ODD" when the flowgraph first starts.

But if I replicate the setup in a simple compiled program using the UHD
API with all the same settings, this never occurs.

So my question is: is the GR scheduler doing something at the beginning of
the flowgraph that delays the work() calls and causes overflows, and are
there settings I can use to mitigate this?  My problem is that once these
overflows occur, I can't trust the timing synchronization on the received
samples (or I have to do further calculations on the rx_time tags).


Thanks,
Josh


___


Try specifying "num_recv_frames=128" in your device arguments.

Also, are you running this on naked hardware or through a VM?




Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Marcus D. Leech via USRP-users

On 07/26/2018 01:30 PM, Андрій Хома wrote:

> All of the above really helps and works! But when you just run, for
> example, leafpad... "OOO" 😂
> Nobody ever met this?

Yes.  This is due to the subtleties of scheduling and interrupt handling
in general-purpose operating systems.  When you're streaming at high
rates, it doesn't take much in the way of "not paying attention to that
I/O while I do this other thing" to cause you to fill up buffers very
quickly.

How much physical memory do you have?
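The arithmetic behind that is stark. A sketch, using the 45 MS/s rate and num_recv_frames=150 reported in this thread; the 10 ms stall length and the per-frame sample count are illustrative assumptions:

```python
# How much data piles up during a brief scheduling stall.
samp_rate = 45e6                     # samples/s, from this thread
stall = 0.010                        # an assumed 10 ms "not paying attention" window
backlog = samp_rate * stall          # 450,000 samples

samples_per_frame = 2000             # assumed for illustration
capacity = 150 * samples_per_frame   # num_recv_frames=150, from this thread
print(backlog > capacity)            # prints True: one hiccup overruns the buffers
```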






Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Keith k via USRP-users
Ahh, sorry dude.  That is what used to happen to me before I tried the
stuff I listed.  The only other thing I can think of is maybe playing
around with the MTU.  I found my best performance at 4000, but I'm not
sure that will help.



-- 
-Keith Kotyk


Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Андрій Хома via USRP-users
> Have you changed your cpu governor to performance?

Yes.

> Have you tuned your network interface profile with ethtool -g?

When I work with a USRP X310, yes.

> I found that maxing out that buffer size helped lots.

I'm playing with it.

> You may also have to manually schedule your threads if using
> isolcpus/numactl/taskset. I noticed that the linux scheduler did a poor job
> of distributing threads to different processors.

I allocated the USRPs to a completely separate processor (the motherboard
supports two).

All of the above really helps and works! But when you just run, for
example, leafpad... "OOO" 😂
Nobody ever met this?



Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Keith k via USRP-users
Have you changed your CPU governor to performance?  Have you tuned your
network interface profile with ethtool -g?  I found that maxing out that
buffer size helped lots.  You may also have to manually schedule your
threads if using isolcpus/numactl/taskset.  I noticed that the Linux
scheduler did a poor job of distributing threads to different processors.



-- 
-Keith Kotyk


Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Андрій Хома via USRP-users
Yes, thank you, I've tried this before: I allocated 10 or more cores purely
for the USRPs.  Overflows are generally fewer, but when starting any
application, one or two "O"s are guaranteed to be printed.
Therefore, I suspect that maybe it's a matter of cache or something else.

I was playing with num_recv_frames, but the problem is that I do not know
how to determine the correct value for it.  Right now it's
num_recv_frames=150 and recv_frame_size=8000.

In general, while my application is running, a lot of processes start and
get killed, and that is what causes the overflows.  If you do not touch
anything and do not run anything, everything is fine, even without
allocating cores via isolcpus and numactl :)
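To put a number on num_recv_frames=150 / recv_frame_size=8000: each frame holds recv_frame_size bytes of samples, so the total buffering is only a few milliseconds at this rate. A sketch; it assumes the sc16 wire format (4 bytes per complex sample) and ignores per-packet header overhead:

```python
# Approximate buffering provided by the device args quoted above,
# at the 45 MS/s rate from this thread.
num_recv_frames = 150
recv_frame_size = 8000                   # bytes per USB frame
samp_rate = 45e6

samples_per_frame = recv_frame_size / 4  # 2000.0 complex sc16 samples (assumed format)
slack = num_recv_frames * samples_per_frame / samp_rate
print(round(slack * 1e3, 2))             # prints 6.67 -- only ~6.7 ms of slack
```

So any scheduling hiccup longer than about 7 ms (an application starting, for instance) overruns the buffering, which matches the symptom described above.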



Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Marcus D. Leech via USRP-users
Make sure that you’re increasing the num_recv_frames in the device args as well




Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Keith k via USRP-users
How many CPU cores do you have?  I've also found this to be a problem with
multi_usrp and high data rates.  The solution for me was to isolate CPU
cores and then use taskset to run my program on the isolated cores.  This
drastically reduced the number of overflows to almost none.  It will
probably require an 8-core or higher machine, though: UHD spawns 2 threads
for every USRP you have, so you can only schedule so many threads on an
isolated core before they starve each other.
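From a shell this looks like booting with an `isolcpus=` kernel parameter and launching via `taskset -c <cores> ./rx_app`; the same pinning can also be done from inside the program. A Linux-only sketch; the core numbers in the example are placeholders:

```python
import os

def pin_to_cores(cores):
    # Pin the calling process to the given CPU cores (Linux-only).
    # Equivalent to `taskset -cp <list> <pid>` from the shell.
    os.sched_setaffinity(0, set(cores))
    return sorted(os.sched_getaffinity(0))

# Example (placeholder core IDs -- use cores reserved with isolcpus):
# pin_to_cores([10, 11])
```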



-- 
-Keith Kotyk


Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Андрій Хома via USRP-users
Motherboard: Z10PE-D8 WS
Processor: Intel Xeon E5-2630 v4
Sample rate: 45 MS/s

In general, the overflows are not regular.  If the computer is not loaded
with any other tasks, everything is almost fine.  But as soon as I run some
application (a leafpad, for example), or even just browse the Internet, I
immediately see an overflow.
This happens even if I assign real-time priority to the threads responsible
for the USRPs (the libusb thread and the read threads).
Why is that?
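For scale, the USB 3.0 load here is substantial. A sketch; it assumes each of the four B205minis runs at the full 45 MS/s (the thread gives a single rate figure) and uses the sc16 wire format (4 bytes per complex sample):

```python
# Aggregate USB 3.0 load implied by this thread's setup.
samp_rate = 45e6           # samples/s, from this thread
bytes_per_sample = 4       # sc16 wire format (assumed)
devices = 4                # four B205minis, from this thread

per_device_mbps = samp_rate * bytes_per_sample / 1e6   # 180.0 MB/s each
total_mbps = per_device_mbps * devices                 # 720.0 MB/s aggregate
print(per_device_mbps, total_mbps)
```

At that sustained load, even a brief stall by the host shows up as an "O" on the console.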

чт, 26 июл. 2018 г. в 17:01, Neel Pandeya :

> In general, CPU clock speed is more important.
>
> What sampling rate are you using?
>
> --Neel Pandeya


Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Neel Pandeya via USRP-users
Depending on what your application is doing and your sampling rate, disk
I/O can also be a major factor.

--Neel Pandeya




Re: [USRP-users] OVERFLOWS

2018-07-26 Thread Neel Pandeya via USRP-users
In general, CPU clock speed is more important.

What sampling rate are you using?

--Neel Pandeya





Re: [USRP-users] Overflows (D) when receiving nsamps more than once

2017-07-15 Thread Pope, Adrian P via USRP-users
Hi Marcus,

I am calling recv_to_file() multiple times, which in turn calls recv()
multiple times. It is only on consecutive calls to recv_to_file() that I see
the overflow (D). As far as I can tell, the destruction of the rx_streamer
object causes a sample to be left in the buffer.

After reading through other listserv postings, it seems that it's better
practice and much faster to keep the streamer object alive across multiple
collections, as opposed to creating and destroying it each time. Now that I
am doing that instead, the overflow is no longer an issue for me.
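For anyone hitting the same thing, a minimal sketch of that reuse pattern:
one rx_streamer created up front, one NUM_SAMPS_AND_DONE command per
collection. The device args, sample counts, and timing here are hypothetical
placeholders, and this is untested without hardware:

```cpp
#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

int main()
{
    // Create the device and ONE rx_streamer up front, then reuse it.
    auto usrp = uhd::usrp::multi_usrp::make("type=x300"); // hypothetical args
    uhd::stream_args_t stream_args("fc32", "sc16");
    auto rx_stream = usrp->get_rx_stream(stream_args);

    std::vector<std::complex<float>> buff(10000); // hypothetical size
    uhd::rx_metadata_t md;

    for (int capture = 0; capture < 5; capture++) {
        // One NUM_SAMPS_AND_DONE command per collection; the streamer
        // object itself is never torn down between captures.
        uhd::stream_cmd_t cmd(
            uhd::stream_cmd_t::STREAM_MODE_NUM_SAMPS_AND_DONE);
        cmd.num_samps  = buff.size();
        cmd.stream_now = false;
        cmd.time_spec  = usrp->get_time_now() + uhd::time_spec_t(0.1);
        rx_stream->issue_stream_cmd(cmd);

        size_t received = 0;
        while (received < buff.size()) {
            received += rx_stream->recv(
                buff.data() + received, buff.size() - received, md, 0.5);
            if (md.error_code != uhd::rx_metadata_t::ERROR_CODE_NONE)
                break; // handle timeouts/overflows as needed
        }
        // ... write buff to file here ...
    }
    return 0;
}
```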

Still, why does this error occur when using my X310s, but not when I run the
same code on my B-series hardware?

Thank you,
Adrian




Message: 15
Date: Sat, 15 Jul 2017 15:13:14 +0200
From: Marcus Müller 
To: usrp-users@lists.ettus.com
Subject: Re: [USRP-users] Overflows (D) when receiving nsamps more
than once
Message-ID: <21d13c13-f9b3-7e5b-fd2c-1ee124d04...@ettus.com>
Content-Type: text/plain; charset="windows-1252"

Hi Adrian,

in your modified version, are you calling recv() repeatedly, or are you
trying to get all the samples you want at once?

Best regards,

Marcus






Re: [USRP-users] Overflows (D) when receiving nsamps more than once

2017-07-15 Thread Marcus Müller via USRP-users
Hi Adrian,

in your modified version, are you calling recv() repeatedly, or are you
trying to get all the samples you want at once?

Best regards,

Marcus



On 07/09/2017 02:34 AM, Pope, Adrian P via USRP-users wrote:
>
> Hello,
>
>  
>
> I have several x310s equipped with TwinRxs, and I’m having an issue
> with consecutive receives using STREAM_MODE_NUM_SAMPS_AND_DONE.
>
>  
>
> To illustrate my issue, I will refer to the uhd examples provided on
> github. I have built the original “rx_samples_to_file” example and can
> run it with no problem. However, if I modify it by simply duplicating
> the “recv_to_file” call or put it inside of a for loop, I get an
> overflow (D) on every consecutive call.  I saw a previous post,
> “[USRP-users] Overflows when doing repeated captures with X300
> <http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2017-March/024174.html>”,
> in which the same problem was reported, but there was some ambiguity
> as to whether the poster was using continuous or num-samps-and-done
> mode. I USE THE “NSAMPS” ARGUMENT AND NO “DURATION” OR “TIME” ARGUMENT.
>
>  
>
> After some investigating it seems like a single packet is being left
> in the buffer. Can this be fixed? Or at least in the meantime, is
> there a way to avoid the delay that is caused by the out of order
> packet D overflow?
>
>  
>
> Thanks in advance!
>
> Adrian
