Yes, of course: there are block codes and convolutional codes.

Convolutional codes have memory equal to the constraint length.  E.g. a 
K=7 rate 1/2 code (NASA Voyager) has a constraint length of 7, so each 
data bit affects the current coder output and the next 6 outputs.  In 
practice this means that you need to supply 6 "flush" bits and send the 
corresponding additional coded bits (or use some other tail-biting scheme) 
to get the full effectiveness of the Viterbi (maximum-likelihood) decoder. 
That is additional overhead, but it can sometimes be "buried" in the 
leader or inter-frame gap so it has minimal effect.
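To make the flush-bit overhead concrete, here is a minimal sketch of a K=7 rate 1/2 encoder using the standard NASA generator polynomials (171 and 133 octal).  The helper names are mine; the point is just that K-1 = 6 zero bits are appended so the decoder can finish in a known state.

```python
# Sketch of a K=7 rate-1/2 convolutional encoder (NASA polynomials
# 171/133 octal), appending K-1 = 6 zero "flush" bits so the shift
# register ends in the all-zero state for the Viterbi decoder.

G1, G2 = 0o171, 0o133   # generator polynomials, 7 taps each
K = 7                   # constraint length; memory = K - 1 = 6

def parity(x):
    """XOR of all bits of an integer."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def conv_encode(bits):
    """Rate-1/2 encode a bit list, with K-1 flush bits appended."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):   # the 6 flush bits add 12 channel bits
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(parity(state & G1))
        out.append(parity(state & G2))
    return out

coded = conv_encode([1, 0, 1, 1])
# 4 data bits + 6 flush bits -> 20 channel bits at rate 1/2
```

For short voice frames that fixed 12-channel-bit tail is a noticeable fraction of the frame, which is why hiding it in the leader or inter-frame gap matters.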

Very often, when the most powerful FEC is desired, it is done in layers 
with different (complementary) codes: e.g. a convolutional inner code 
(Viterbi decoded) surrounded by a block code (e.g. Reed-Solomon).  These 
can be very effective in some channels.  Modern HF data modes (e.g. 
Pactor 2, 3, 4, WINMOR) often use layered convolutional and block codes 
for maximum efficiency and performance.
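The overhead of such layering multiplies through the code rates.  A quick back-of-the-envelope for the classic deep-space arrangement (RS(255,223) outer code around a K=7 rate 1/2 convolutional inner code, as used with Voyager):

```python
# Overall rate of a concatenated scheme: RS(255,223) outer code
# around a rate-1/2 convolutional inner code.

rs_rate   = 223 / 255        # outer Reed-Solomon block code rate
conv_rate = 1 / 2            # inner convolutional code rate
overall   = rs_rate * conv_rate

print(f"overall rate = {overall:.3f}")   # ~0.437, i.e. ~2.3 channel bits per data bit
```

So the layered scheme more than doubles the transmitted bits, which is the price paid for performance on the channels where it shines.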

But this whole coding and FEC question has to be evaluated against the following:
1) What is the nature of the channel we are trying to code for?  E.g. 
white Gaussian noise on VHF/UHF, or poor multipath on HF with deep fading, 
burst errors, etc.
2) What is the information content of the source?  This is complicated 
with CODEC2, since some bits may be more "important" than others (not 
usually the case with text).  Dave has done a good job of squeezing down 
the bits, so it may be a reasonable approximation that they are all now of 
"equal" importance.
3) Since we are dealing with an audio signal, the type and frequency of 
"acceptable" errors may not be simply related to standard bit error rates. 
What is desired, of course, is FEC coding that maximizes the 
intelligibility of the voice over a particular propagation channel and S/N.
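Point 2 suggests that, if the bits do turn out to differ in importance, a perceptually weighted error measure may track intelligibility better than raw BER.  A hedged sketch (the weights here are purely illustrative, not actual CODEC2 values):

```python
# Sketch: weighted error rate where each bit position carries an
# "importance" weight.  The weights below are illustrative only;
# real CODEC2 bit sensitivities would have to be measured.

def weighted_error_rate(tx_bits, rx_bits, weights):
    """Weighted fraction of errored bits (sum of weights normalizes)."""
    errs = sum(w for t, r, w in zip(tx_bits, rx_bits, weights) if t != r)
    return errs / sum(weights)

tx = [1, 0, 1, 1, 0, 0]
rx = [1, 1, 1, 1, 0, 1]          # two bit errors
w  = [4, 4, 2, 2, 1, 1]          # hypothetical importance weights

print(weighted_error_rate(tx, rx, w))  # 5/14 ~= 0.357, vs raw BER of 2/6
```

With equal weights this reduces to ordinary BER, which is consistent with treating all bits as equally important if Dave's bit allocation really has flattened their sensitivity.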

I would suggest that more testing be done on an HF channel simulator to 
evaluate the most effective coding scheme.  E.g. a block scheme may be 
effective, but a convolutional code (using flush bits where needed) might 
actually give better speech intelligibility even though it passes a net 
higher bit error rate, due to the way it is decoded.  Although ideally one 
would select the best code for the current channel, we really need to 
standardize on one (or perhaps two) FEC schemes.  (E.g. it is very likely 
that different FEC schemes and different modulation schemes will be 
optimum for HF versus VHF/UHF.)  Because of the complexity of CODEC2 in 
forming the bits, and the human factor in determining the intelligibility 
of speech at a given error rate, we probably can't use some of the 
conventional WGN channel and BER analysis tools.

The good news is that all the tools are available and coding up test 
implementations and using the HF channel simulator are pretty 
straightforward exercises.
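For anyone who wants to start on that exercise, here is a very rough sketch of a two-path HF channel in the spirit of the Watterson model: two independently Rayleigh-fading paths, the second delayed, plus additive white Gaussian noise.  The delay and fading parameters are illustrative only, not calibrated CCIR/ITU test-channel values, and a real simulator would also vary the path gains over time with a Doppler-spread process.

```python
# Rough two-path HF channel sketch (Watterson-style): two Rayleigh-
# fading paths, the second delayed, plus AWGN.  Parameters are
# illustrative; path gains are held constant over the block here,
# whereas a real simulator would filter them for Doppler spread.

import random

def fade():
    """One complex Gaussian sample (Rayleigh-distributed magnitude)."""
    return complex(random.gauss(0, 0.707), random.gauss(0, 0.707))

def hf_channel(samples, delay=2, noise_sigma=0.1):
    """Apply two fading paths and AWGN to a list of complex samples."""
    g1, g2 = fade(), fade()              # quasi-static path gains
    out = []
    for n, s in enumerate(samples):
        echo = samples[n - delay] if n >= delay else 0
        noise = complex(random.gauss(0, noise_sigma),
                        random.gauss(0, noise_sigma))
        out.append(g1 * s + g2 * echo + noise)
    return out

rx = hf_channel([1 + 0j] * 16)
```

Running candidate FEC schemes through something like this (with realistic delay/Doppler settings) and then listening to the decoded speech is exactly the kind of test I am suggesting.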

73,

Rick Muething, KN6KB

