... lock() unlock() in a loop).
Thanks,
Matt.
-- Forwarded message --
From: Matt Mills mmi...@2bn.net
Date: Fri, Dec 9, 2011 at 9:02 AM
Subject: Re: [Discuss-gnuradio] Gnuradio locking up
To: Don Ward don2387w...@sprynet.com
On 12/18/2011 11:18 AM, Achilleas Anastasopoulos wrote:
Matt,
I wanted to test your python file.
I first commented out the lock/unlock and the program continuously prints noflow. Is this the expected behavior?
My understanding is that after every flushing of the message queue and sleeping of 1 msec, the message queue should always be non-empty.
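For context, the loop being described amounts to roughly this sketch (the queue setup, the 1 msec sleep, and the "noflow" print are reconstructed from the description above, not taken from the actual script):

import time
from gnuradio import gr

msgq = gr.msg_queue()                # assume some block feeds this queue

while True:
    while not msgq.empty_p():        # flush everything queued so far
        msgq.delete_head()
    time.sleep(0.001)                # sleep 1 msec
    if msgq.empty_p():               # expectation: non-empty if data is flowing
        print("noflow")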
Subject: Re: [Discuss-gnuradio] Gnuradio locking up
To: Don Ward don2387w...@sprynet.com
Don,
File is attached. If you watch CPU use, on my system it goes to about 40% for 5-10 seconds and then drops to 0%. Once it has dropped to 0% it has locked up.
-MM
On Fri, Dec 9, 2011 at 8:32 AM, Don Ward don2387w...@sprynet.com wrote:
On Wed, Dec 7, 2011 at 3:27 PM, Matt Mills mmi...@2bn.net wrote:
*frowns at ubuntu*
After updating to Ubuntu 11.10 (which has boost 1.46) I still experience
the lockup.
linux; GNU C++ version 4.6.1; Boost_104601; UHD_003.004.000-7dc76db
On Dec 8, 2011, at 1:05 PM, Matt Mills wrote:
After updating to Ubuntu 11.10 (which has boost 1.46) I still experience the
lockup.
For the record, I was testing on this:
morbo:~$ cat /etc/slackware-version; uname -a
Slackware 13.37.0
Linux morbo 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011
On Thu, Dec 8, 2011 at 2:05 PM, Matt Mills mmi...@2bn.net wrote:
After updating to Ubuntu 11.10 (which has boost 1.46) I still experience
the lockup.
Also, here's the python I've been using to reproduce:
http://pastebin.com/at0FdzXp
and the GDB backtrace after it's locked up: http://pastebin.com/vx9cgSzp
On Thu, Dec 8, 2011 at 21:47, Matt Mills mmi...@2bn.net wrote:
Also, here's the python I've been using to reproduce:
http://pastebin.com/at0FdzXp
and the GDB backtrace after it's locked up: http://pastebin.com/vx9cgSzp
The unlock() code, when the number of calls to unlock() reaches the same number as the preceding calls to lock(), tears down and restarts the flow graph scheduler.
On Thu, Dec 8, 2011 at 11:02 PM, Johnathan Corgan
jcor...@corganenterprises.com wrote:
It's possible that whatever thread is being interrupted is somewhere in an
uninterruptible state, though I'm not sure what that could be. If you
could do an info threads in gdb, that might shed some light.
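To make the unlock() call counting mentioned above concrete, a small sketch (assuming tb is a gr.top_block; the comments are an interpretation of the behavior described in this thread, not quoted from the source):

tb.lock()      # lock count: 1 -- reconfiguration deferred
tb.lock()      # lock count: 2 -- nested calls just increment the count
tb.unlock()    # lock count: 1 -- not yet balanced, nothing happens
tb.unlock()    # lock count: 0 -- balanced: scheduler threads are interrupted,
               # joined, and a new scheduler is created (the suspected hang point)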
On Thu, Dec 8, 2011 at 22:15, Matt Mills mmi...@2bn.net wrote:
With an even simpler version of the app (signal -> null sink) just running lock() unlock() in the loop it still locks up; using that version gives this info from gdb (info threads, and bt on both threads included)
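That simpler version isn't attached here, but a minimal sketch of an app with that shape (block parameters are arbitrary, not from the original) would be:

from gnuradio import gr

tb = gr.top_block()
src = gr.sig_source_f(32000, gr.GR_SIN_WAVE, 1000, 1.0)  # rate, wave, freq, ampl
dst = gr.null_sink(gr.sizeof_float)
tb.connect(src, dst)
tb.start()

while True:        # on affected setups this reportedly wedges eventually
    tb.lock()
    tb.unlock()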
Has anyone had time to look into the unlock() lockup that Rachel reproduced below further? I seem to be running into it left and right for some reason, and sadly my C++ isn't anywhere near good enough to go seeking the cause myself.
On Tue, Nov 22, 2011 at 9:02 AM, Rachel Kroll wrote:
Matt Mills wrote:
Has anyone had time to look into the unlock() lockup that Rachel reproduced below further? I seem to be running into it left and right for some reason, and sadly my C++ isn't anywhere near good enough to go seeking the cause myself.
On Wed, Dec 7, 2011 at 10:00 AM, Don Ward don2387w...@sprynet.com wrote:
This looks like an old boost problem (
https://svn.boost.org/trac/boost/ticket/2330). Is there any chance you
are using a version of boost older than 1.45?
*frowns at ubuntu*
ii  libboost1.40-dev  1.40.0-4ubuntu4
On Tue, Nov 22, 2011 at 10:21 PM, Matt Mills mmi...@2bn.net wrote:
On Tue, Nov 22, 2011 at 11:28 AM, Philip Balister phi...@balister.org wrote:
You can use the single threaded scheduler by setting an environment
variable:
export GR_SCHEDULER=STS
Gave this a shot; app runs for a while (at 100% CPU with quite a few overruns) then segfaults...
It appears UHD does not like to be watched; reproducibly, if I run with GDB attached, UHD eventually stops sending data to the upstream blocks and my screen fills up with:
thread[single-threaded-scheduler]: RuntimeError: Control channel send error
thread[single-threaded-scheduler]: RuntimeError:
On Tue, Nov 22, 2011 at 12:29 PM, Philip Balister phi...@balister.org wrote:
On 11/21/2011 10:24 PM, Matt Mills wrote:
Hello all,
I seem to be having an issue that, after about 30-45 minutes of running normally, my gnuradio based python app will just lock up. It won't respond to Ctrl-C; it holds all of its existing file handles open but doesn't do anything with them.
On 22/11/11 10:18 AM, Matt Mills wrote:
Curiously at startup python begins consuming ~1950M of VIRT, but only 46M of RES and 23M of SHR... No signs of any of those numbers increasing more than +/- 5% (although VIRT occasionally drops momentarily to ~160-180 MB before returning to ~1950 MB, which seems awfully strange).
About 15 minutes
On Tue, Nov 22, 2011 at 6:52 AM, Mark Steward markstew...@gmail.com wrote:
I've seen lockups of this sort when multi-threaded python processes exit.
You might also like to take a look at what each thread is up to in gdb.
I'm not really sure how to get around in GDB, but I've captured a
On 22/11/11 10:30 AM, Matt Mills wrote:
Ubuntu 10.04 LTS (x86) on a physical desktop (Intel G6950 dual core CPU), 2 GB physical RAM, 6 GB swap space, OS is up to date per apt. Gnuradio and UHD are both built from git as of yesterday.
Linux -hostname- 2.6.32-35-generic-pae #78-Ubuntu SMP Tue Oct 11 17:01:12 UTC 2011 i686 GNU/Linux
On 22/11/11 10:44 AM, Matt Mills wrote:
This graph doesn't have any unlock/locks in the code itself; it does use valve blocks (which I believe use unlock/lock internally) which are used to mute/unmute streams (there is probably an average of 2-4 valve state changes per second across the graph's 19 valves).
And this is still the flow-graph that has lock/unlock() in it? From the report of very-high rescheduling interrupts, I wonder if there's a subtle bug in the Gnu Radio block scheduler around lock()/unlock() that causes horrible thrashing.
On 22/11/11 10:48 AM, Rachel Kroll wrote:
It's pretty easy to get wedged forever if you call lock and unlock a lot in
conjunction with connect and disconnect. Sooner or later, you'll hit a race
and things will get stuck.
I have a simple reproduction case if anyone is interested. It'll
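Rachel's actual reproduction case isn't shown in this archive, but the pattern she describes (lock/unlock in conjunction with connect/disconnect, repeated until a race hits) reduces to something like this sketch; the blocks and parameters are placeholders:

from gnuradio import gr

tb = gr.top_block()
src = gr.sig_source_f(32000, gr.GR_SIN_WAVE, 1000, 1.0)
dst = gr.null_sink(gr.sizeof_float)
tb.connect(src, dst)
tb.start()

while True:                    # sooner or later this is said to hit a race
    tb.lock()                  # defer reconfiguration
    tb.disconnect(src, dst)    # change the topology while locked
    tb.connect(src, dst)
    tb.unlock()                # scheduler restarts here -- the suspected hang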
How do you compile this? I put it in a file and made a couple of quick stabs at it.
My Makefile is just:

grlock: grlock.cc
	g++ -g -Wall -I/usr/local/include/gnuradio -o grlock grlock.cc \
	-lgnuradio-core -Xlinker -rpath /usr/local/lib64

You probably won't need the -Xlinker
I may have also neglected to mention that this graph, by my count, has about 197 blocks in it...
So is there anything I could look at further in my app (aside from trying to eliminate the valve blocks, which I'm attempting to do) that would let me positively determine the cause of the lockups (and if
On 11/22/2011 11:31 AM, Rachel Kroll wrote:
How do you compile this? I put it in a file and made a couple of quick stabs at it.
I can duplicate the hang. Also, it looks like it does not hang using the single threaded scheduler. (Which I guess we expect.)
You can use the single threaded scheduler by setting an environment variable:
export GR_SCHEDULER=STS
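For anyone scripting this, the same setting can also be applied from inside the Python app, under the assumption that GR_SCHEDULER is read when the scheduler threads are created, i.e. it must be set before the flow graph is built and started:

import os
os.environ["GR_SCHEDULER"] = "STS"   # select the single-threaded scheduler

from gnuradio import gr              # import after setting the variable
tb = gr.top_block()
# ... build and start the graph as usual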
Just so y'all know what lock/unlock is doing:
When you unlock the flow graph it basically interrupts and joins all
scheduler threads. Then it creates an entirely new scheduler.
wait() is a good candidate for the cause of lockups, that is,
interrupted threads are not exiting. This may be a sign
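A rough Python-style sketch of that description (a paraphrase of the explanation above, not the actual gr_top_block source; all names are illustrative):

def unlock(self):
    self._lock_count -= 1
    if self._lock_count == 0:
        for t in self._scheduler_threads:
            t.interrupt()        # ask every block thread to stop
        for t in self._scheduler_threads:
            t.join()             # the wait() -- suspected hang: a thread in an
                                 # uninterruptible state never exits
        self._scheduler = self._make_scheduler()  # entirely new scheduler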
On Tue, Nov 22, 2011 at 11:28 AM, Philip Balister phi...@balister.org wrote:
You can use the single threaded scheduler by setting an environment
variable:
export GR_SCHEDULER=STS
Gave this a shot; app runs for a while (at 100% CPU with quite a few
overruns) then segfaults...
[20077.594080]
On Tue, Nov 22, 2011 at 8:30 PM, Marcus D. Leech mle...@ripnet.com wrote:
It's a big flow-graph (you'd mentioned 197 blocks), so it will be a pain
to whittle down exactly *which* block is causing the segfault--and it's
provoking it from inside libc, which makes it even less fun.
While
Hello all,
I seem to be having an issue that, after about 30-45 minutes of running normally, my gnuradio based python app will just lock up. It won't respond to Ctrl-C, it holds all of its existing file handles open but doesn't do anything with them, and an strace attach shows only: