I've been investigating potential improvements to our FEC encoding in
my spare time (in particular the use of LDPC codes and their
relatives, but the following is generally applicable).  I'd like to
ask for opinions on what assumptions I should be making about
acceptable levels of CPU time, memory usage, and disk usage.

We care both about how well our FEC codes work and how fast they are.
How well they work is a surprisingly nuanced question, but for this
purpose we can assume it's completely described by the block-level
loss rate the code can recover from, for a given file size and target
success rate.  As I see it, there are four fundamental metrics: disk
space usage, disk I/O (in particular seeks), RAM usage, and CPU
usage.  Different FEC schemes have different characteristics and
allow different tradeoffs to be made.

Our current FEC (simple segments, with Reed-Solomon encoding of each
segment) does very well on disk performance.  I haven't examined what
it actually does, but it could be made to alternately read and write
sequential 4 MiB blocks, making one pass over the whole file, without
needing any additional space; that is as good as possible.  It does
fairly well on memory usage: it needs to hold a whole 4 MiB segment
in RAM at a time, plus a small amount of overhead for lookup tables,
Vandermonde matrices, and such.  CPU performance is poor: decoding
each block requires operations on the entire segment, and those
operations are table lookups rather than simple math.  (Decode CPU
cost is O(n^2) in the segment size, and our segments are big enough
that this is relevant.)
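
To make the CPU point concrete, here is a rough Python sketch of the
kind of arithmetic involved.  This is purely illustrative, not our
actual codec: the GF(256) tables, the 0x11d polynomial, and the
gf_mul / recover_block names are just assumptions for the example.

# Hypothetical sketch: why per-segment RS decode cost grows as O(n^2).
# Each missing block is rebuilt as a GF(256) linear combination of the
# surviving blocks, and every byte of every surviving block goes
# through a table-lookup multiply.

# Build GF(256) log/exp tables for the common 0x11d polynomial.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    """Multiply in GF(256) via log/exp table lookups, not plain math."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def recover_block(coeffs, surviving_blocks):
    """Rebuild one missing block from n surviving blocks.

    coeffs: one row of the inverted Vandermonde matrix (n bytes).
    surviving_blocks: n byte strings, one block each.
    Cost: n table-lookup multiplies per output byte; recovering up to
    n missing blocks is O(n^2 * block_size) lookups, i.e. quadratic in
    the segment size for a fixed block size.
    """
    block_size = len(surviving_blocks[0])
    out = bytearray(block_size)
    for c, block in zip(coeffs, surviving_blocks):
        for i in range(block_size):
            out[i] ^= gf_mul(c, block[i])
    return bytes(out)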

Other schemes will likely make different tradeoffs.  A naive LDPC
implementation will use very little RAM and CPU, but it will do lots
of disk seeks and need space to store (potentially) all data and
check blocks of the file on disk while decoding (that is, double the
file size, where the current RS codes only need space equal to the
final file size).  However, it also allows trading more memory usage
and CPU time for less disk I/O and (I think) less disk space usage.
An interleaved-segments code based on RS codes (like the CIRC code
used on CDs) would be worse than our current scheme: equivalent
memory usage, poor CPU performance, slightly more disk space
required, and a moderate number of disk seeks.  (Both LDPC and
interleaved segments are more effective than our current scheme for
large files.)
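
To illustrate the LDPC access pattern, here is a rough sketch of the
sort of peeling decoder I have in mind.  It's a toy, not a proposal:
the peel_decode name, the graph representation, and the read_block /
write_block callbacks are invented for the example.  The point is
that each step only XORs a handful of blocks (cheap on CPU and RAM),
but those blocks are scattered, so a disk-backed store pays a seek
per block read.

# Hypothetical peeling decoder for an XOR-based sparse (LDPC-style)
# erasure code.  Each check block is the XOR of a few data blocks;
# decoding repeatedly finds a check with exactly one missing
# neighbour and XORs the known neighbours back out to recover it.

def peel_decode(checks, known, read_block, write_block):
    """checks: list of (check_id, [data_ids]) edges in the sparse graph.
    known: set of block ids currently available (data or check).
    read_block/write_block: callbacks into a (possibly disk-backed) store."""
    progress = True
    while progress:
        progress = False
        for check_id, neighbours in checks:
            if check_id not in known:
                continue
            missing = [d for d in neighbours if d not in known]
            if len(missing) != 1:
                continue
            # XOR the check block with every known neighbour to recover
            # the single missing one -- a handful of scattered reads.
            acc = bytearray(read_block(check_id))
            for d in neighbours:
                if d in known:
                    other = read_block(d)
                    for i in range(len(acc)):
                        acc[i] ^= other[i]
            write_block(missing[0], bytes(acc))
            known.add(missing[0])
            progress = True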

So, given that the tradeoffs will be complex, and that the decoder is
likely to have some flexibility (e.g. more memory usage for fewer
seeks), what baseline assumptions should I be making?  Do we care
more about reducing the number of seeks, even if that has an
increased cost in memory usage or CPU time?  How much memory is it
safe to assume will always be available?  Is it OK to need disk space
beyond the file size?  What if avoiding that has a significant cost
in CPU time?

I realize these are fairly vague questions; vague and opinion-based
answers are certainly welcome.  Hopefully it won't be too long before I
can toss some example numbers into the discussion.

Evan Daniel
