[Discuss-gnuradio] CC Decoder with punctured code = errors

2017-05-24 Thread Justin Hamilton
Hi everyone,

Is it possible to modify the *traceback depth* of the built-in *CC Decoder
Definition*? It appears unable to recover from the errors I introduce during
puncturing/depuncturing, and returns data with bit errors still present.

I am trying to decode a PDU I originally convolutionally encoded using the *FEC
Async Encoder* block with the default 1/2 rate, K=7, 79/109 *CC Encoder
Definition* in 'terminated mode'. I then perform async puncturing using a
custom block I wrote to achieve 2/3 and 3/4 code rates, followed by the
corresponding async depuncturing block, which inserts zeroes in previously
punctured positions (see the sketch below). I then attempt to decode the
depunctured stream using the *FEC Async Decoder* block with the default 1/2
rate, K=7, 79/109 *CC Decoder Definition* in 'terminated mode'.
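
To make the depuncturing step concrete, it boils down to something like the
following (a minimal NumPy sketch; the 2/3-rate pattern shown here is only
illustrative, the actual pattern lives in my custom block):

    import numpy as np

    # Illustrative 2/3-rate pattern over the rate-1/2 mother code: keep three
    # of every four encoded soft symbols and drop the fourth.
    PUNCTURE_MASK = np.array([1, 1, 1, 0], dtype=bool)

    def puncture(symbols):
        """Drop the positions where the mask is 0."""
        mask = np.resize(PUNCTURE_MASK, len(symbols))
        return np.asarray(symbols)[mask]

    def depuncture(punctured, original_len):
        """Re-insert zeros ('no information') at the punctured positions."""
        punctured = np.asarray(punctured)
        mask = np.resize(PUNCTURE_MASK, original_len)
        out = np.zeros(original_len, dtype=punctured.dtype)
        out[mask] = punctured
        return out

So the decoder sees a full-rate stream again, just with zero-valued soft
symbols in the punctured slots, which is exactly where I'd expect the
traceback depth to start mattering.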

Without puncturing (rate = 1/2), decoding works perfectly. I have confirmed
that the puncture/depuncture behaviour is correct, so it appears the decoder
just isn't looking back far enough when performing Viterbi decoding. I
haven't been able to find an obvious way to modify the decoder's traceback
depth... the *SSE2 Viterbi* decoders used in *DVB-T* and *802.11* already
modify their traceback depth to accommodate puncturing.

Anyone have any advice? Ideally I'd make the default CC encoders/decoders
work, since they already perform the necessary bit flushing for fixed-length
packets, rather than modify the SSE2 versions to do the same.

Big thanks in advance for any help!

Cheers,
Justin


[Discuss-gnuradio] 64QAM Repack Bits Misalignment

2017-05-08 Thread Justin Hamilton
Hi everyone,

I am using the *Repack Bits* block to unpack bytes into 6-bit chunks (i.e.
k=8 to l=6) and then repack that output back into full bytes (i.e. k=6 to
l=8). The input is a stream of tagged packets, say of length 5 bytes (6.667
64QAM symbols), meaning the number of bits isn't exactly divisible by 6. As
such, I have the unpacking block aligned to the input and the packing block
aligned to the output (as per the documentation). What I find - in both LSB
and MSB modes - is that the first packet is recovered correctly while every
second one is misaligned and incorrect.

If the packet length were 7 bytes (9.333 symbols), I would expect every
third packet to be correct. A similar thing occurs with 8PSK, which unpacks
8 bits to 3 bits and vice versa.
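
For reference, my two blocks are set up roughly like this (a sketch assuming
the 3.7 repack_bits_bb signature, with 'packet_len' standing in for my
length-tag key):

    from gnuradio import blocks, gr

    pkt_key = "packet_len"   # stand-in for my actual length-tag key

    # Byte stream -> 6-bit chunks, aligned to the input packet boundary
    unpack = blocks.repack_bits_bb(8, 6, pkt_key, False, gr.GR_LSB_FIRST)

    # 6-bit chunks -> full bytes, aligned to the output packet boundary
    repack = blocks.repack_bits_bb(6, 8, pkt_key, True, gr.GR_LSB_FIRST)

My understanding is that with the length tag supplied the blocks should
re-align at every packet boundary, which is what makes the every-second-packet
misalignment confusing.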

How do I fix this alignment issue to get 64QAM/8PSK repacking working?
Thanks!

Regards,
Justin


Re: [Discuss-gnuradio] OFDM Channel Equalisation not unity for perfect channel

2017-05-07 Thread Justin Hamilton
Hi again,

My current suspicion is that there is a triggering delay produced by the
*Schmidl & Cox OFDM Sync* block that isn't accounted for by the *delay*
block running in parallel to it in the flowgraph (the one that delays the
samples going to the header/payload demux until the trigger point is found
in the sync words). With the default 64 FFT + 16 cyclic prefix arrangement,
for example, changing the delay block from 80 to 73 results in a unity OFDM
channel estimation and I am able to decode successfully using my comb pilot
interpolation equalisation technique (a delay of 89 also gives unity,
alternating between the real and imaginary parts). It does appear, however,
that the first estimation is off before correctly reaching unity on the
second packet. This effect disappears if I bypass my *channel model* block
(epsilon = 1, taps = 1.0, noise voltage = 0, frequency offset = 0). This
raises the question of whether the triggering delay is somehow variable and
depends on which blocks are currently in the flowgraph...?
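
For clarity, the comb pilot interpolation equalisation I'm referring to
amounts to the following per OFDM symbol (a minimal NumPy sketch, with
pilot_carriers and pilot_symbols standing in for whatever the flowgraph
actually uses):

    import numpy as np

    def equalize_symbol(rx_sym, pilot_carriers, pilot_symbols):
        """Estimate the channel on the pilot carriers, linearly interpolate
        across the remaining carriers, then divide the estimate out."""
        pilot_carriers = np.asarray(pilot_carriers)   # assumed sorted ascending
        h_pilots = rx_sym[pilot_carriers] / np.asarray(pilot_symbols)
        carriers = np.arange(len(rx_sym))
        h = (np.interp(carriers, pilot_carriers, h_pilots.real)
             + 1j * np.interp(carriers, pilot_carriers, h_pilots.imag))
        return rx_sym / h

Over a perfect channel I'd expect h to come out as all ones, which is why the
arbitrary per-subcarrier estimates break this approach while the single-tap
simpledfe-style equalisation still works.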

For comparison, when I changed the header/payload demux to be triggered by a
tag instead of the trigger port, I achieved a unity channel estimation.

Is there something I'm misinterpreting about the Schmidl & Cox method's
triggering or is this a known issue? Thanks for your help.

Regards,
Justin

On Mon, May 8, 2017 at 1:01 PM, Justin Hamilton <justham...@gmail.com>
wrote:

> Hi everyone,
>
> I've been working on a coded-OFDM system (looks similar to the standard
> OFDM flowgraphs) and have come to the stage where I am trying to improve
> channel estimation by implementing LS, STA and comb pilot interpolation
> methods similar to gr-ieee802.11. If I follow the default technique
> outlined in *ofdm_equalizer_simpledfe* and equalise just a single
> subcarrier using its estimated tap calculated by the *OFDM Channel
> Estimation* block, everything behaves as usual and I can decode my OFDM
> packets. If however I start using neighbouring subcarriers as per STA or
> when interpolating pilots, I am unable to decode any symbols.
>
> Suspecting something strange was going on, I began investigating my
> flowgraph. Since I am running the system in simulation across a perfect
> channel, I would have expected the channel taps derived by the *OFDM
> Channel Estimation* block to all be equal to 1. Instead each channel
> appeared to be experiencing its own arbitrary value (which explains why
> using multiple subcarriers wouldn't work). Since they are not equal to 1, I
> figured some other part of the flowgraph must be applying some unintended
> modulation.
>
> At first I suspected irreversibility between the FFT/IFFT blocks, but
> after ensuring both were using *fft.window.rectangular(fft_len)* as the
> selected window function and correctly rescaling after the FFT, I was able
> to successfully recover the original signal directly after the IFFT. Next I
> took a look at the *cyclic prefixer* and *header/payload demux* combo,
> but after using a *Keep M in N* block after the *cyclic prefixer* I was
> again able to recover the signal.
>
> This left only the *Schmidl & Cox OFDM Sync* block, *analog frequency
> modulation* block and the subsequent multiplication. Since I was on a
> perfect channel, it was no surprise that the fine frequency offset was 0Hz,
> meaning the frequency mod block resulted in a simple multiplication by 1.
> However, replacing the input to the frequency mod block with a constant
> source of zero changed the resulting channel estimations further down the
> line and meant they were correct (unity) for at least one packet (and again
> on every third packet for some reason...). I can't really explain what's
> going on...?
>
> Now I might be on the totally incorrect path here, so if anyone has any
> suspicions or could recommend something for me to try out, it would be
> greatly appreciated. I could be misinterpreting the Schmidl/Cox technique
> for example or the role of channel equalisation altogether. Thanks in
> advance for your help!
>
> Regards,
> Justin
>


[Discuss-gnuradio] OFDM Channel Equalisation not unity for perfect channel

2017-05-07 Thread Justin Hamilton
Hi everyone,

I've been working on a coded-OFDM system (looks similar to the standard
OFDM flowgraphs) and have come to the stage where I am trying to improve
channel estimation by implementing LS, STA and comb pilot interpolation
methods similar to gr-ieee802.11. If I follow the default technique
outlined in *ofdm_equalizer_simpledfe* and equalise just a single
subcarrier using its estimated tap calculated by the *OFDM Channel
Estimation* block, everything behaves as usual and I can decode my OFDM
packets. If, however, I start using neighbouring subcarriers (as in STA) or
interpolated pilots, I am unable to decode any symbols.

Suspecting something strange was going on, I began investigating my
flowgraph. Since I am running the system in simulation across a perfect
channel, I would have expected the channel taps derived by the *OFDM
Channel Estimation* block to all be equal to 1. Instead, each subcarrier's
tap appeared to take its own arbitrary value (which explains why using
multiple subcarriers wouldn't work). Since they are not equal to 1, I
figured some other part of the flowgraph must be applying some unintended
modulation.
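
For anyone wanting to reproduce the check: I'm inspecting the taps with a Tag
Debug block filtered on the tag the channel-estimation block attaches
("ofdm_sync_chan_taps", if I'm reading the ofdm_chanest_vcvc source
correctly), roughly:

    from gnuradio import blocks, gr

    fft_len = 64
    # Dump the channel-taps tag attached by the OFDM Channel Estimation block;
    # the key name is worth double-checking against your GNU Radio version.
    tap_probe = blocks.tag_debug(gr.sizeof_gr_complex * fft_len, "chan_taps",
                                 "ofdm_sync_chan_taps")
    # tb.connect((chanest, 0), (tap_probe, 0))  # chanest = digital.ofdm_chanest_vcvc(...)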

At first I suspected irreversibility between the FFT/IFFT blocks, but after
ensuring both were using *fft.window.rectangular(fft_len)* as the selected
window function and correctly rescaling after the FFT, I was able to
successfully recover the original signal directly after the IFFT. Next I
took a look at the *cyclic prefixer* and *header/payload demux* combo, but
after using a *Keep M in N* block after the *cyclic prefixer* I was again
able to recover the signal.
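
The reversibility check boiled down to something like this (a sketch assuming
the 3.7 fft_vcc signature; the 1/fft_len rescale after the forward FFT is the
part that's easy to miss):

    from gnuradio import blocks, fft

    fft_len = 64
    win = fft.window.rectangular(fft_len)

    ifft_blk = fft.fft_vcc(fft_len, False, win, True)   # reverse FFT (transmit side)
    fft_blk = fft.fft_vcc(fft_len, True, win, True)     # forward FFT (receive side)
    rescale = blocks.multiply_const_vcc((1.0 / fft_len,) * fft_len)
    # tb.connect(ifft_blk, fft_blk, rescale)  # output should match ifft_blk's input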

This left only the *Schmidl & Cox OFDM Sync* block, the *analog frequency
modulation* block and the subsequent multiplication. Since I was on a
perfect channel, it was no surprise that the fine frequency offset was 0 Hz,
meaning the frequency mod block resulted in a simple multiplication by 1.
However, replacing the input to the frequency mod block with a constant
source of zero changed the resulting channel estimations further down the
line and meant they were correct (unity) for at least one packet (and again
on every third packet, for some reason...). I can't really explain what's
going on.

Now I might be on the totally incorrect path here, so if anyone has any
suspicions or could recommend something for me to try out, it would be
greatly appreciated. I could be misinterpreting the Schmidl/Cox technique
for example or the role of channel equalisation altogether. Thanks in
advance for your help!

Regards,
Justin


Re: [Discuss-gnuradio] uhd_packet_rx example freezes - Header/Payload Demux not releasing payload

2017-04-06 Thread Justin Hamilton
Hi Marcus,

Thanks for your response!

So I spent the last week testing out the changes you recommended. First I
had a look at the FLL block with the waterfall plots but couldn't
immediately spot anything strange, except that during pure simulation over a
perfect channel it actually shifted things off centre and ruined
synchronisation, which was unexpected. The same doesn't happen when using
multipath channel taps or my real channel. I also increased/decreased the
sps, eb and loop bandwidths on the various sync blocks, which led to more
packets being decoded, and the channel no longer locks up as before. To
tune them I watched the receiver constellations and observed how readily
the system adapted versus sticking with the current synchronisation.
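
For reference, the knobs I've been sweeping live in the FLL band-edge and PFB
clock sync blocks, roughly as below (a sketch with placeholder values, not my
exact settings):

    import math
    from gnuradio import digital
    from gnuradio.filter import firdes

    sps, eb, nfilts = 2, 0.35, 32      # placeholder values -- these are what I sweep
    loop_bw = 2 * math.pi / 100.0

    # Coarse frequency sync: FLL band-edge
    fll = digital.fll_band_edge_cc(sps, eb, 45, loop_bw)

    # Timing sync: polyphase clock sync with an RRC matched-filter bank
    rrc_taps = firdes.root_raised_cosine(nfilts, nfilts * sps, 1.0, eb,
                                         11 * sps * nfilts)
    timing = digital.pfb_clock_sync_ccf(sps, loop_bw, rrc_taps, nfilts,
                                        nfilts // 2, 1.5, 1)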

It appears that the faster I send packets, the easier synchronisation is,
but if I send packets too often (< 1ms period) I get async buffer overflows
and am more prone to crashes. I'm not experiencing any under- or overruns.
Ideally I'd be sending packets every 1-10ms, though since the end
implementation is completely async (connected to a TUN/TAP), I could
realistically get packets only every 1s, which would be horrendous for sync
in its current state.

In general, however, I'm still only getting very few packets through. With
*header_format_default* I am able to receive quite a lot of packets, while
*header_format_counter* successfully decodes far fewer and additionally
crashes with a segfault while performing a CRC check. Are there any
additional sync steps I could take advantage of to improve my performance?

Thanks for your help.

On Fri, Mar 31, 2017 at 12:15 AM, Marcus Müller <marcus.muel...@ettus.com>
wrote:

> Hi Justin,
>
> sorry for the delayed response. So:
>
> indeed, the uhd_packet_rx.grc flow graph has two different sync elements:
> 1. the FLL band-edge
> 2. PFB Clock Sync within the packet_rx hier block,
>
> as you've noticed.
>
> In your case, it's possible the FLL doesn't attack fast enough; you would
> verify that by comparing waterfalls / spectra of the signal before and
> after.
> Maybe you'd alternatively/additionally want to increase the bandwidth of
> the PFB Clock Sync. In a first attempt: increase the sps (from default 2 ->
> 3, for example); for that, increase the USRP source's sampling rate
> accordingly, adapt the FLL to that to still lock tightly onto your signal,
> and make sure the sps in the packet_rx reflects the new sps value.
>
> Best regards,
> Marcus
>
> On 03/28/2017 04:58 AM, Justin Hamilton wrote:
>
> Hi everyone,
>
> I'm trying to develop a packet-based, single-carrier system (BPSK, QPSK,
> QAM) with FEC (CC) similar to that implemented in packet_rx.grc and
> packet_tx.grc. I am using two Ettus B205mini-i as my USRP devices,
> connected, for now, via a 50 Ω, 30 dB attenuator and SMA cable (antennas
> also at hand). I am using 64-bit Ubuntu 14.04 LTS, a Xeon E3-1575M and 32 GB RAM.
>
> When testing uhd_packet_rx.grc with the default BPSK signal I find that
> the receiver immediately locks up after I enable it using the "on"
> checkbox. After taking a look into the packet_rx hierarchical block, I
> found that the correlation estimator was indeed finding a peak indicating
> the transmitted packet. The generated tags were then used to trigger the
> Header/Payload Demux block to release the header as expected. This block
> doesn't seem to be getting back a valid decoded header however. This
> results in the payload never being released and causes buffer back pressure
> which leads back to the USRP source and ultimately locks up the system.
>
> I have noticed a frequency offset between my two radios due to the
> receiver constellation spinning, but hand-tuning it proved quite difficult.
> The difference might be outside the acceptable limits for the
> synchronisation blocks used?
>
> Possibly related: I am able to run packet_loopback_hier.grc without issue,
> except that if I add considerable noise to the system it often never
> recovers, either returning the message "gr::log :INFO:
> header_payload_demux0 - Parser returned #f" or flat out crashing. Could it
> possibly be the case that the noise added by my 'over-the-air' radio system
> is too much for the Polyphase Clock Sync and Costas Loop blocks to
> compensate for?
>
> Has anyone experienced this issue before or figured out how to solve it?
> If I can get these example flowgraphs working it'll be a great help for my
> custom flowgraph. Surely sending OTA packets with modulation and coding
> can't be this difficult :)
>
> Also if anyone has any tips on modifying the synchronisation (Costas Loop)
> to support QAM constellations, that would be great!
>
> Thanks for your help!
> Justin
>
>

Re: [Discuss-gnuradio] Derive OOT header format from header_format_default

2017-04-06 Thread Justin Hamilton
Hi everyone,

I was able to solve this issue by running *nm -C -u
libgnuradio-cognitiveSDR.so | grep header_format_cognitive* in my custom
module's folder "*gr-cognitiveSDR/build/lib*", which identified a function
that was accidentally declared in my class's header file but never
implemented in the source, while attempting to override the base class's
corresponding function. I simply removed the declaration and
rebuilt/reinstalled.

To make my new header format work with the *protocol_formatter_async* block
I also had to remove *typedef boost::shared_ptr<header_format_cognitive>
sptr* from my header file, so that my class instead inherits the required
sptr from *header_format_base* and satisfies the *protocol_formatter_async*
block's make method.

The issue I am facing now is making my format's custom setter and getter
functions accessible to Python GUI elements and variables inside GRC.
Currently it appears that after creating the formatter object (*hdr_format*)
and using it in the protocol formatter, I can't, for example, call
*hdr_format.setter_function()* to update some property in the header. I get
the following error:


*Value "hdr_format.set_bps(2)" cannot be evaluated:
'header_format_base_sptr' object has no attribute 'set_bps'*
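
For context, the usage I'm after is roughly the following (a Python sketch of
what the GRC variables boil down to; the constructor arguments and set_bps()
are just illustrative stand-ins for my own formatter's interface):

    from gnuradio import digital
    from gnuradio.digital import packet_utils
    import cognitiveSDR   # my OOT module

    preamble_b = packet_utils.default_access_code   # stand-in for my preamble

    # Build the custom formatter and hand it to the async protocol formatter
    hdr_format = cognitiveSDR.header_format_cognitive(preamble_b, 3, 2)
    formatter = digital.protocol_formatter_async(hdr_format)

    # This is the call that fails when made from GRC: the object comes back
    # typed as header_format_base_sptr, which doesn't expose the derived setters.
    hdr_format.set_bps(2)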

Currently all my functions are declared virtual both in
"*gr-cognitiveSDR/include/cognitiveSDR/header_format_cognitive.h*" and
"*gr-cognitiveSDR/lib/header_format_cognitive.h*" (not pure virtual, i.e.
"*bool setter_function() = 0*", as that prevents me from even instantiating
the object), and defined in the source
"*gr-cognitiveSDR/lib/header_format_cognitive.cc*".

Any ideas? Do I need to add something to the SWIG config to make these
callback functions accessible?

Cheers.

On Fri, Mar 31, 2017 at 3:56 PM, Justin Hamilton <justham...@gmail.com>
wrote:

> Hi everyone,
>
> I'm creating a custom header format *'header_format_cognitive*' derived
> from *'header_format_default*'. I'm adding extra info to the default
> header, very similar to the way the child class *'header_format_counter*'
> adds bits/symbol to the default. I'm creating my class as an OOT module
> named *'cognitiveSDR'*.
>
> I'm running into what appears to be a common issue, with people previously
> creating spin-offs of *packet_header_default*, including gr-ieee-802.11.
> https://www.mail-archive.com/discuss-gnuradio@gnu.org/msg50789.html
> http://gnuradio.4.n7.nabble.com/Trouble-with-SWIG-for-
> packet-formatter-default-child-class-td52446.html
>
> I was able to compile, install and import my module in GRC, but once I try
> to create the class using a variable with 
> "cognitiveSDR.header_format_cognitive(..)",
> I get the message:
>
>
>
> *"Value "cognitiveSDR.header_format_cognitive(..)" cannot be
> evaluated: 'module' object has no attribute 'header_format_cognitive'"*
> To give some context, normally when creating a standard header format
> object using a GRC variable I call *"digital.header_format_counter(preamble_b,
> 3, Const_PLD.bits_per_symbol())"*
>
> *Steps taken so far:*
> 1. Created my new module *cognitiveSDR* with gr_modtool
> 2. Added *header_format_cognitive* (c++) using gr_modtool as type
> 'noblock'
> 3. Fleshed out the .cc and .h files using a combination of
> *header_format_counter*, *header_format_crc* and my own functions
> 4. Made sure to *#include <gnuradio/digital/header_format_default.h>* in
> the header file
> 5. Used *digital::header_format_default* as the parent class in the class
> declaration, since it's now in a separate namespace from 'digital'
> 6. Modified *set(GR_REQUIRED_COMPONENTS RUNTIME DIGITAL)* in
> CMakeLists.txt
> 7. Modified the SWIG file as shown below
>
> Are there any modifications that I should make to the default
> CMakeLists.txt file generated for the module or additional changes to the
> SWIG file in order to make my class accessible inside GRC? I assume the
> parent and child classes are linking up properly since cmake, make and
> install succeed.
>
> Thanks for your help! Cheers.
> Justin
>
> *cognitiveSDR_swig.i*
>
> *---*
> /* -*- c++ -*- */
>
> #define COGNITIVESDR_API
> #define DIGITAL_API
>
> %include "gnuradio.i"// the common stuff
>
> //load generated python docstrings
> %include "cognitiveSDR_swig_doc.i"
>
> %{
> #include "cognitiveSDR/header_format_cognitive.h"
> %}
>
> %include "gnuradio/digital/header_format_base.h"
> %include "gnuradio/digital/header_format_default.h"
>
> %include "cognitiveSDR/header_format_cognitive.h"
>
> %template(header_format_cognitive_sptr) boost::shared_ptr<cognitiveSDR::header_format_cognitive>;
> %pythoncode %{
> header_format_cognitive_sptr.__repr__ = lambda self: "<header_format_cognitive>"
> header_format_cognitive = header_format_cognitive.make;
> %}
>


[Discuss-gnuradio] Derive OOT header format from header_format_default

2017-03-30 Thread Justin Hamilton
Hi everyone,

I'm creating a custom header format *'header_format_cognitive*' derived
from *'header_format_default*'. I'm adding extra info to the default
header, very similar to the way the child class *'header_format_counter*'
adds bits/symbol to the default. I'm creating my class as an OOT module
named *'cognitiveSDR'*.

I'm running into what appears to be a common issue, with people previously
creating spin-offs of *packet_header_default*, including gr-ieee-802.11.
https://www.mail-archive.com/discuss-gnuradio@gnu.org/msg50789.html
http://gnuradio.4.n7.nabble.com/Trouble-with-SWIG-for-packet-formatter-default-child-class-td52446.html

I was able to compile, install and import my module in GRC, but once I try
to create the class using a variable with
"cognitiveSDR.header_format_cognitive(..)", I get the message:



*"Value "cognitiveSDR.header_format_cognitive(..)" cannot be
evaluated: 'module' object has no attribute 'header_format_cognitive'"*
To give some context, normally when creating a standard header format
object using a GRC variable I call *"digital.header_format_counter(preamble_b,
3, Const_PLD.bits_per_symbol())"*

*Steps taken so far:*
1. Created my new module *cognitiveSDR* with gr_modtool
2. Added *header_format_cognitive* (c++) using gr_modtool as type 'noblock'
3. Fleshed out the .cc and .h files using a combination of
*header_format_counter*, *header_format_crc* and my own functions
4. Made sure to *#include <gnuradio/digital/header_format_default.h>* in
the header file
5. Used *digital::header_format_default* as the parent class in the class
declaration, since it's now in a separate namespace from 'digital'
6. Modified *set(GR_REQUIRED_COMPONENTS RUNTIME DIGITAL)* in CMakeLists.txt
7. Modified the SWIG file as shown below

Are there any modifications that I should make to the default
CMakeLists.txt file generated for the module or additional changes to the
SWIG file in order to make my class accessible inside GRC? I assume the
parent and child classes are linking up properly since cmake, make and
install succeed.

Thanks for your help! Cheers.
Justin

*cognitiveSDR_swig.i*
*---*
/* -*- c++ -*- */

#define COGNITIVESDR_API
#define DIGITAL_API

%include "gnuradio.i"// the common stuff

//load generated python docstrings
%include "cognitiveSDR_swig_doc.i"

%{
#include "cognitiveSDR/header_format_cognitive.h"
%}

%include "gnuradio/digital/header_format_base.h"
%include "gnuradio/digital/header_format_default.h"

%include "cognitiveSDR/header_format_cognitive.h"

%template(header_format_cognitive_sptr)
boost::shared_ptr<cognitiveSDR::header_format_cognitive>;
%pythoncode %{
header_format_cognitive_sptr.__repr__ = lambda self: "<header_format_cognitive>"
header_format_cognitive = header_format_cognitive.make;
%}


[Discuss-gnuradio] uhd_packet_rx example freezes - Header/Payload Demux not releasing payload

2017-03-27 Thread Justin Hamilton
Hi everyone,

I'm trying to develop a packet-based, single-carrier system (BPSK, QPSK,
QAM) with FEC (CC) similar to that implemented in packet_rx.grc and
packet_tx.grc. I am using two Ettus B205mini-i units as my USRP devices,
connected, for now, via a 50 Ω, 30 dB attenuator and SMA cable (antennas also
at hand). I am running 64-bit Ubuntu 14.04 LTS on a Xeon E3-1575M with 32 GB
of RAM.

When testing uhd_packet_rx.grc with the default BPSK signal I find that the
receiver immediately locks up after I enable it using the "on" checkbox.
After taking a look into the packet_rx hierarchical block, I found that the
correlation estimator was indeed finding a peak indicating the transmitted
packet. The generated tags were then used to trigger the Header/Payload
Demux block to release the header as expected. However, this block doesn't
seem to be getting back a valid decoded header. As a result, the payload is
never released, causing buffer back pressure that propagates back to the
USRP source and ultimately locks up the system.

I have noticed a frequency offset between my two radios (the receiver
constellation spins), but hand-tuning it out proved quite difficult. Could
the offset be outside the acceptable range of the synchronisation blocks
being used?

Possibly related: I am able to run packet_loopback_hier.grc without issue,
except that if I add considerable noise to the system it often never
recovers, either returning the message "gr::log :INFO:
header_payload_demux0 - Parser returned #f" or flat out crashing. Could it
possibly be the case that the noise added by my 'over-the-air' radio system
is too much for the Polyphase Clock Sync and Costas Loop blocks to
compensate for?
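
For reference, the noise goes in through the example's standard Channel Model
block, roughly like this (parameter values are placeholders, not my exact
settings):

    from gnuradio import channels

    # Channel model used to stress packet_loopback_hier.grc; noise_voltage is
    # the value I crank up until the header parser starts returning #f.
    chan = channels.channel_model(
        noise_voltage=0.2,
        frequency_offset=0.0,
        epsilon=1.0,
        taps=(1.0 + 0.0j, ),
        noise_seed=0)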

Has anyone experienced this issue before or figured out how to solve it? If
I can get these example flowgraphs working it'll be a great help for my
custom flowgraph. Surely sending OTA packets with modulation and coding
can't be this difficult :)

Also, if anyone has any tips on modifying the synchronisation (Costas Loop)
to support QAM constellations, that would be great!

Thanks for your help!
Justin