Daniel,
RS and similar block codes are fairly efficient and have some nice properties,
especially when mixed (layered) with other types of FEC. E.g. Viterbi
(convolutional) type decoders work well with random errors, but these types
of codes perform poorly in burst error situations. Sometimes, if the timing
permits, bursts can be reduced by interleaving, but there are limits, especially
when you don’t want to delay the signal much in time, as in the case of digital
voice. In such cases adding a block code “outside” the convolutional code can
be effective in many systems.
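For the curious, here is a rough C sketch of the classic block interleaver
idea: write by rows, read by columns, so a burst of consecutive channel errors
lands in different codewords. The 4 x 8 dimensions are purely illustrative,
not taken from any particular modem.

/* Block interleaver sketch: write ROWS x COLS bytes in by rows, read
   out by columns, so a burst of consecutive channel errors is spread
   across ROWS different codewords. Dimensions are illustrative only. */
#include <stdio.h>

#define ROWS 4
#define COLS 8

void interleave(const unsigned char in[ROWS * COLS],
                unsigned char out[ROWS * COLS])
{
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            *out++ = in[r * COLS + c];   /* read down the columns */
}

int main(void)
{
    unsigned char in[ROWS * COLS], out[ROWS * COLS];
    for (int i = 0; i < ROWS * COLS; i++)
        in[i] = (unsigned char)i;        /* value = original position */
    interleave(in, out);
    /* A burst hitting out[0..3] on the air now touches 4 different rows */
    for (int i = 0; i < 4; i++)
        printf("out[%d] came from in[%d]\n", i, out[i]);
    return 0;
}

Note the deinterleaver has to buffer a full ROWS x COLS block before it can
release anything, which is exactly the delay problem mentioned above.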
Reed Solomon adds two “parity” characters for each character correction desired.
(A character can be any number of bits from 2 on up, although it is often done
with 8-bit characters.) E.g. if I were sending 8-bit characters and wanted to
correct up to 8 character errors, I would have to add 16 8-bit “parity”
characters to my data, and these could correct up to 8 character errors in the
entire block of data (data + parity). The amount of parity you need is a
function of the channel and the error rates, so the “efficiency” of an RS type
code is a function of how many errors you wish to be able to correct. For
example, if I were sending a block of 100 data characters and wished to correct
up to 8 erroneous characters, I would need to add 16 parity characters, so
my overall efficiency would be 100/116 or about 86%, which is reasonable. As
the number of corrected characters desired increases, the efficiency of course
goes down but the robustness increases. Correcting up to 32 characters would
require 64 characters of parity, so the efficiency would be 100/164 or about
61%. The beauty of RS and other similar block codes is that they are tolerant
of block errors. RS corrects by character, whether one bit or all 8 bits of the
character are in error, so it complements the convolutional code nicely by
correcting the bursts the convolutional code can’t correct.
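The overhead arithmetic above is easy to tabulate; here is a tiny C sketch of
nothing more than the rate calculation k / (k + 2t). (An 8-bit RS code also
caps the total block at 255 characters, so both examples fit.)

/* RS overhead arithmetic: correcting t character errors costs 2t
   parity characters, so efficiency (code rate) is k / (k + 2t). */
#include <stdio.h>

int main(void)
{
    int k = 100;                    /* data characters per block    */
    int t_values[] = { 8, 32 };     /* character errors to correct  */

    for (int i = 0; i < 2; i++) {
        int t = t_values[i];
        int parity = 2 * t;         /* parity characters needed     */
        int n = k + parity;         /* total block (data + parity)  */
        printf("t=%2d: %2d parity, block %3d, efficiency %.0f%%\n",
               t, parity, n, 100.0 * k / n);
    }
    return 0;
}

This prints the 86% and 61% figures worked out above.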
This is why layering two types of FEC is effective and why, for example, NASA
often uses layered block coding on top of convolutional coding in very weak
signal transmission.
In the end the amount and selection of FEC is a function of the channel and the
type and number of errors expected. But in general, adding the correct type and
level of FEC can usually provide a net improvement over uncoded systems in
terms of net BER. This is usually expressed as an equivalent dB improvement
(the amount of improvement you would get by increasing the S/N by the same
number of dB).
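To make “equivalent dB improvement” concrete, here is a small C sketch using
the textbook formula for uncoded BPSK on AWGN, BER = 0.5 * erfc(sqrt(Eb/N0)).
The 9.6 dB figure for BER 1e-5 is the standard textbook number; the 4 dB coded
requirement is only a hypothetical to show the subtraction.

/* "Equivalent dB improvement" (coding gain): if the uncoded link needs
   Eb/N0 = A dB for a target BER and the coded link needs B dB for the
   same BER, the coding gain is A - B dB. */
#include <stdio.h>
#include <math.h>

double bpsk_ber(double ebno_db)
{
    double ebno = pow(10.0, ebno_db / 10.0);  /* dB -> linear ratio */
    return 0.5 * erfc(sqrt(ebno));            /* uncoded BPSK, AWGN */
}

int main(void)
{
    double uncoded_db = 9.6;  /* ~BER 1e-5 uncoded (textbook value) */
    double coded_db   = 4.0;  /* hypothetical coded requirement     */

    printf("uncoded BER at %.1f dB: %.2e\n",
           uncoded_db, bpsk_ber(uncoded_db));
    printf("coding gain: %.1f dB\n", uncoded_db - coded_db);
    return 0;
}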
In digital voice we have another issue in that, unlike binary message
transmission, some level of uncorrected errors may be acceptable (resulting in
some speech artifacts which may not severely impact intelligibility).
I have experimented with this a lot in the development of WINMOR, and often the
optimization is the result of iterating different promising FEC codings and
evaluating them over specific (sometimes standardized CCIR) channels using an
HF channel simulator. Prediction from theory alone is possible on white
Gaussian noise channels, but we seldom see those channels in HF propagation.
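On AWGN, that theory is easy to confirm by brute force. Below is a minimal
Monte Carlo sketch for uncoded BPSK; the parameters are illustrative, and a
real HF simulator adds the Watterson-style multipath and Doppler of the CCIR
models, which this does not attempt.

/* Monte Carlo BER check for uncoded BPSK on AWGN. On this channel the
   result should match 0.5*erfc(sqrt(Eb/N0)); fading HF channels need a
   proper simulator (Watterson/CCIR models), not this. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Crude Gaussian noise via Box-Muller (good enough for a sketch) */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
}

int main(void)
{
    double ebno_db = 6.0;                      /* illustrative Eb/N0  */
    double ebno    = pow(10.0, ebno_db / 10.0);
    double sigma   = sqrt(1.0 / (2.0 * ebno)); /* noise std dev, Eb=1 */
    long   nbits   = 1000000, errs = 0;

    for (long i = 0; i < nbits; i++) {
        int    bit = rand() & 1;
        double tx  = bit ? 1.0 : -1.0;         /* BPSK symbol         */
        double rx  = tx + sigma * gauss();     /* AWGN channel        */
        if ((rx > 0.0) != bit) errs++;
    }
    printf("simulated BER %.2e, theory %.2e\n",
           (double)errs / nbits, 0.5 * erfc(sqrt(ebno)));
    return 0;
}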
73,
Rick Muething, KN6KB