Re: GNURadio Leaking memory during repeated simulations

2020-02-18 Thread Geof Nieboer
It's possible, I suppose, but the IDE wouldn't have been my first guess,
unless it's running in some kind of debugging mode where it can't
garbage-collect freed memory?  I haven't used PyCharm, so I can't comment
specifically on how it works.  I would try running the Python example
script from a run_gr.bat command line, with nothing else involved at all,
and see what happens.  I ran the sample using VS 2015 Community (which I
start ["devenv.exe"] from the run_gr.bat command line so that not only the
Python environment variables but all the $PATH changes are picked up as
well).

Geof


Re: GNURadio Leaking memory during repeated simulations

2020-02-18 Thread Roman A Sandler
Hi Geof,

I am running GR using the standard install but from PyCharm (I imported all
the necessary environment variables from run_gr.bat into PyCharm myself). I
installed tqdm on top of the GR python via pip. I also confirmed that the
issue occurs even without tqdm. Could PyCharm be causing these problems?

-Roman


Re: GNURadio Leaking memory during repeated simulations

2020-02-13 Thread Geof Nieboer
Roman,

I was able to run your code, and got a consistent 150-160 it/s through the
whole run, with about 65 MB of memory in use (as reported by Task
Manager).  This was on Windows 10 running GR 3.8.0.0.

I noticed there was another package installed (tqdm) that's not part of the
GR install.  So I want to confirm... did you add that package to the python
that installed as part of GR, or are you using a different python?

If you are using a different Python install, a possible explanation is
compilers: the GR packages and binaries are all built with VS 2015, but the
Python 2.7 standard is VS 2008 (which won't build GR any more), and mixing
(VS) compilers can cause odd issues.  So for GR, the entire Python 2.7
distribution and every package is built from scratch with VS 2015.  tqdm
doesn't seem to be affected; I added it to the GR Python install using a
plain pip install, nothing fancy, and it seems to have worked.

If you are, however, running the script with the GR-installed python after
opening with run_gr.bat, then I'm scratching my head what's happening.

Geof
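One quick way to check which interpreter (and compiler toolchain) is
actually in use — a minimal sketch, not from the original messages: on
Windows CPython, `sys.version` carries an `MSC v.xxxx` tag identifying the
Visual Studio toolchain (v.1900 corresponds to VS 2015), and
`sys.executable` shows whether the GR-bundled Python is the one running.

```python
# Check which Python is running and what compiler built it.
import sys

# Path of the running interpreter: reveals whether this is the
# GR-bundled Python or some other install (e.g. a system Python).
print(sys.executable)

# On Windows CPython this string includes a tag like "MSC v.1900",
# which identifies the Visual Studio toolchain (v.1900 = VS 2015).
print(sys.version)
```

Running this both from a run_gr.bat shell and from inside the IDE would
show immediately whether the two environments use the same interpreter.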


Re: GNURadio Leaking memory during repeated simulations

2020-02-13 Thread Roman A Sandler
Hi Marcus,

Thanks for the reply!

My GNURadio version: *3.8.0.0 (Python 2.7.10)*
It is the Windows 3.8.0.0 version downloaded from:
http://www.gcndevelopment.com/gnuradio/downloads.htm

Complete reproducible code is below. I use the tqdm package to monitor
iterations per second. On my PC, the it/s declines precipitously: it
starts at 85 it/s, drops to 22 it/s after 40 s, and keeps falling,
eventually to as low as 1 it/s.


```
import numpy as np
from gnuradio import gr, gr_unittest
from gnuradio import blocks
from gnuradio import digital
from gnuradio.filter import firdes
from gnuradio import channels
from tqdm import tqdm


def decode(channel_sigs):
    tb = gr.top_block()

    ##
    # Variables
    ##
    nfilts = 32
    sps = 4
    timing_loop_bw = 3 * 6.28 / 100.0  # NOTE: this controls convergence speed!
    constellation_scheme = digital.constellation_8psk().base()
    rrc_taps = firdes.root_raised_cosine(nfilts, nfilts, 1.0 / float(sps),
                                         0.35, 11 * sps * nfilts)

    # Actual blocks
    channel_src = blocks.vector_source_c(channel_sigs, False)
    digital_pfb_clock_sync = digital.pfb_clock_sync_ccf(sps, timing_loop_bw,
                                                        rrc_taps, nfilts,
                                                        nfilts / 2, 1.5, 1)
    constellation_soft_decoder = digital.constellation_soft_decoder_cf(
        constellation_scheme)
    binary_slicer = digital.binary_slicer_fb()
    blocks_char_to_float = blocks.char_to_float(1, 1)
    recovered_bits_dst = blocks.vector_sink_f()

    ##
    # Connections
    ##
    tb.connect((channel_src, 0), (digital_pfb_clock_sync, 0))
    tb.connect((digital_pfb_clock_sync, 0), (constellation_soft_decoder, 0))
    tb.connect((constellation_soft_decoder, 0), (binary_slicer, 0))
    tb.connect((binary_slicer, 0), (blocks_char_to_float, 0))
    tb.connect((blocks_char_to_float, 0), (recovered_bits_dst, 0))

    tb.run()
    recovered_bits = np.array(recovered_bits_dst.data())

    return recovered_bits


if __name__ == '__main__':
    n_trls = 1
    n_samples = 
    sig = (np.random.normal(size=(n_samples,)) +
           1j * np.random.normal(size=(n_samples,)))

    for n in tqdm(range(n_trls)):
        decode(sig)
```
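One way to quantify the growth on the Python side — a generic diagnostic
sketch, not from the thread: `tracemalloc` (Python 3 standard library, so
not directly usable in the GR-bundled 2.7) reports net allocated bytes
after each call. Note that it only sees allocations made through Python's
allocator; memory held inside GNU Radio's C++ blocks would not show up
here. The `leaky` workload below is a hypothetical stand-in for `decode()`.

```python
import gc
import tracemalloc

def measure_growth(fn, n_iters=5):
    """Call fn() repeatedly; return net Python-allocated bytes after each call.

    A steadily rising series means objects survive each iteration (a leak
    on the Python side); a flat series means memory is being reclaimed.
    """
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    growth = []
    for _ in range(n_iters):
        fn()
        gc.collect()  # discount garbage that is merely awaiting collection
        current, _ = tracemalloc.get_traced_memory()
        growth.append(current - baseline)
    tracemalloc.stop()
    return growth

# Toy leaky workload standing in for decode(): each call appends to a
# module-level list, so its allocations survive the call.
_residue = []

def leaky():
    _residue.append([0.0] * 10000)

print(measure_growth(leaky))  # steadily increasing byte counts
```

If the real `decode()` shows flat numbers here while Task Manager still
climbs, the retention is below the Python layer.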



Re: GNURadio Leaking memory during repeated simulations

2020-02-13 Thread CEL
Hi,

huh. That looks hard to debug; also, the slow down is suspicious (as
long as there's memory available, it shouldn't take significantly
longer to get some – usually, memory fragmentation isn't *that* bad,
and this shouldn't be doing *that* much memory allocation).

Could you put all your code in one .py file (or a set of these) that
one can simply execute right away? That would allow us to reproduce.
Also, could you tell us your specific GNU Radio version (all four
digits of it?).

Best regards,
Marcus





GNURadio Leaking memory during repeated simulations

2020-02-11 Thread Roman A Sandler
Hi,

I am using GNURadio to decode a large number of 1024-sample complex vectors
of different modulation schemes. Thus, I have a for loop which runs
gr.top_block.run() at each iteration and uses a vector sink to collect the
results. The issue is that as the simulation keeps going, each iteration
takes longer (e.g. it starts off at 120 it/s, and then after 5000
iterations slows down to 10 it/s). I can see in Task Manager (I am on
Windows) that memory is increasing, so clearly there is a memory leak where
somehow the results of the iterations aren't being deleted.

Is there an explicit way to delete runs, or is this a bug?

CODE:

calling code:
```
for _ in range(1):
    decode(sig)
```

decode func:
```
def decode(channel_sigs):
    tb = gr.top_block()

    ##
    # Variables
    ##
    nfilts = 32
    sps = 4
    timing_loop_bw = 3 * 6.28 / 100.0  # NOTE: this controls convergence speed!
    constellation_scheme, bps = get_constellation_scheme(scheme)
    rrc_taps = firdes.root_raised_cosine(nfilts, nfilts, 1.0 / float(sps),
                                         0.35, 11 * sps * nfilts)
    phase_bw = 6.28 / 100.0
    eq_gain = 0.01
    arity = 4

    # Actual blocks
    channel_src = blocks.vector_source_c(channel_sigs, False)
    digital_pfb_clock_sync = digital.pfb_clock_sync_ccf(sps, timing_loop_bw,
                                                        rrc_taps, nfilts,
                                                        nfilts / 2, 1.5, 1)
    constellation_soft_decoder = digital.constellation_soft_decoder_cf(
        constellation_scheme)
    binary_slicer = digital.binary_slicer_fb()
    blocks_char_to_float = blocks.char_to_float(1, 1)
    recovered_bits_dst = blocks.vector_sink_f()

    ##
    # Connections
    ##
    tb.connect((channel_src, 0), (digital_pfb_clock_sync, 0))
    tb.connect((digital_pfb_clock_sync, 0), (constellation_soft_decoder, 0))
    tb.connect((constellation_soft_decoder, 0), (binary_slicer, 0))
    tb.connect((binary_slicer, 0), (blocks_char_to_float, 0))
    tb.connect((blocks_char_to_float, 0), (recovered_bits_dst, 0))

    tb.run()

    recovered_bits = np.array(recovered_bits_dst.data())
    return recovered_bits
```
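On the explicit-deletion question: in plain CPython, dropping the last
reference and forcing a collection is as explicit as it gets. The sketch
below demonstrates that pattern with a hypothetical stand-in class and
verifies reclamation with a weakref; whether GNU Radio's `gr.top_block`
wrapper actually releases its underlying C++ buffers at that point is an
assumption to verify, not something this thread establishes.

```python
import gc
import weakref

class FakeTopBlock:
    """Hypothetical stand-in for gr.top_block(); holds a large buffer."""
    def __init__(self):
        self.buffers = [0.0] * 100000

def run_once():
    tb = FakeTopBlock()
    ref = weakref.ref(tb)
    # ... tb.run() and result collection would go here ...
    del tb        # explicitly drop the last Python reference
    gc.collect()  # break any reference cycles right away
    return ref() is None  # True if the object really was reclaimed

print(run_once())  # → True
```

If adding `del tb; gc.collect()` at the end of the real `decode()` still
leaves Task Manager climbing, the retained memory is being held on the
C++ side rather than by Python references.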