On Wednesday 14 April 2010 17:52:10 Evan Daniel wrote:
> I've been investigating potential improvements to our FEC encoding in
> my spare time (in particular the use of LDPC codes and their
> relatives, but the following is generally applicable).  I'd like to
> ask for opinions on what assumptions I should be making about
> acceptable levels of CPU time, memory usage, and disk usage.
> 
> We care both about how well our FEC codes work, and how fast they are.
>  How well they work is a surprisingly nuanced question, but for this
> we can assume it's completely described by what block-level loss rate
> the code can recover from, for a specified file size and success rate.
>  As I see it, there are four fundamental metrics: disk space usage,
> disk io (in particular seeks), ram usage, and CPU usage.  Different
> FEC schemes have different characteristics, and allow different
> tradeoffs to be made.
> 
> Our current FEC (simple segments, with Reed-Solomon encoding of each
> segment) does very well on the disk performance.  I haven't examined
> what it actually does, but it could be made to alternately read and
> write sequential 4 MiB blocks, making one pass over the whole file,
> without needing any additional space; this is as good as possible.  It
> does fairly well on memory usage: it needs to hold a whole 4 MiB
> segment in ram at a time, plus a small amount of overhead for lookup
> tables and Vandermonde matrices and such.  CPU performance is poor:
> decoding each block requires operations on the entire segment, and
> those operations are table lookups rather than simple math.  (Decode
> CPU cost is O(n^2) with segment size, and our segments are big enough
> that this is relevant.)

We can deploy the 64-bit native FEC to improve CPU usage by a significant 
factor (IIRC around 3x). And, as I understand it, next year's new CPUs will 
let us avoid the lookup tables entirely.
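
For reference, the tables in question are the usual log/exp tables for 
multiplication in GF(256). A minimal sketch of the table-based multiply - 
illustrative only; the field polynomial 0x11d is an assumption and this is 
not our actual codec:

// Table-based multiplication in GF(256), the kind of per-byte lookup Evan
// refers to. The field polynomial 0x11d is chosen purely for illustration.
final class GF256 {
    private static final int[] EXP = new int[512];
    private static final int[] LOG = new int[256];
    static {
        int x = 1;
        for (int i = 0; i < 255; i++) {
            EXP[i] = x;
            LOG[x] = i;
            x <<= 1;
            if ((x & 0x100) != 0) x ^= 0x11d; // reduce mod the field polynomial
        }
        for (int i = 255; i < 512; i++) EXP[i] = EXP[i - 255]; // avoids a modulo in mul()
    }
    /** Two lookups and an add per multiply. */
    static int mul(int a, int b) {
        if (a == 0 || b == 0) return 0;
        return EXP[LOG[a] + LOG[b]];
    }
    private GF256() {}
}

Reconstructing a missing block combines every surviving block in the segment 
with one such multiply per byte, which is where the table lookups and the 
O(n^2) cost Evan mentions come from.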

With regard to disk usage, our current code does not interleave as you 
suggest; each block is stored separately, and it creates new buckets for newly 
decoded blocks. Worse, we stripe to avoid memory usage - this is left over 
from the 0.5 days and we should get rid of it. Interleaving would of course 
give better performance and disk usage, but it would take significant work. A 
complicating factor is healing, which requires us to encode after decoding, 
although this doesn't prevent interleaving, since check blocks we have already 
fetched can safely be discarded. A further complicating factor is binary 
blobs, but again this can be managed.

Bugs need to be filed for all these optimisations - IF our current FEC remains 
an important part of Freenet. But it won't. It will only be used for small 
files, because for large files segmentation makes redundancy performance very 
poor.
> 
> Other schemes will likely make different tradeoffs.  A naive LDPC
> implementation will use very little RAM and CPU, but do lots of disk
> seeks and need space to store (potentially) all data and check blocks
> of the file on disk during that time (that is, double the file size,
> where the current RS codes only need space equal to the final file
> size).  However, it also allows ways to trade off more memory usage
> and CPU time for less disk io and (I think) less disk space usage.  

Currently we use a lot of disk space. This can be greatly optimised, but I'm 
not sure using up to 2x the file size temporarily during decode (most of which 
will have to happen at the end) is a big deal.

What is a big deal is that while LDPC allows for some partial decoding, most of 
the decode will have to be done at the end. Whereas with simple segmented RS 
codes, segments are decoded independently while the download goes on. And 
presumably interleaved segments, while they will involve some cascading, will 
have a significant amount of the decoding happen in parallel with downloading - 
especially for popular files. Decodes that are not on the critical path are 
not a problem IMHO - it's the big, blocking burst of heavy disk I/O at the end 
that is the larger issue.
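
To make the cascading concrete, here is a minimal sketch of an incremental 
"peeling" decoder for an XOR-based (LDPC-style) erasure code. The names and 
structure are invented for illustration; this is not a proposal for the 
actual decoder:

import java.util.*;

// Each check block is the XOR of a small set of data blocks. As blocks
// arrive during the download, any check equation reduced to one unknown
// recovers a data block, which may in turn reduce other equations.
class PeelingDecoder {
    private final byte[][] data;               // recovered data blocks; null = still missing
    private final List<Set<Integer>> unknown;  // per check: data indices not yet folded in
    private final byte[][] acc;                // per check: check block XOR all known data; null until it arrives

    PeelingDecoder(int dataBlocks, List<Set<Integer>> checkEquations) {
        data = new byte[dataBlocks][];
        acc = new byte[checkEquations.size()][];
        unknown = new ArrayList<Set<Integer>>();
        for (Set<Integer> eq : checkEquations) unknown.add(new HashSet<Integer>(eq));
    }

    void onDataBlock(int index, byte[] block) {
        if (data[index] != null) return;
        data[index] = block;
        Deque<Integer> ready = new ArrayDeque<Integer>();
        substitute(index, ready);
        drain(ready);
    }

    void onCheckBlock(int check, byte[] block) {
        acc[check] = block.clone();
        // Fold in any data blocks we already hold.
        for (Iterator<Integer> it = unknown.get(check).iterator(); it.hasNext();) {
            int i = it.next();
            if (data[i] != null) { xorInto(acc[check], data[i]); it.remove(); }
        }
        Deque<Integer> ready = new ArrayDeque<Integer>();
        if (unknown.get(check).size() == 1) ready.add(check);
        drain(ready);
    }

    // Fold a newly known data block into every equation whose check block has arrived.
    private void substitute(int index, Deque<Integer> ready) {
        for (int c = 0; c < acc.length; c++) {
            Set<Integer> u = unknown.get(c);
            if (acc[c] == null || !u.remove(index)) continue;
            xorInto(acc[c], data[index]);
            if (u.size() == 1) ready.add(c);
        }
    }

    // Solve every equation left with a single unknown; each solution may enable more.
    private void drain(Deque<Integer> ready) {
        while (!ready.isEmpty()) {
            int c = ready.poll();
            if (unknown.get(c).size() != 1) continue;
            int missing = unknown.get(c).iterator().next();
            unknown.get(c).clear();
            if (data[missing] != null) continue;
            data[missing] = acc[c];            // the remaining XOR is the missing block
            substitute(missing, ready);
        }
    }

    private static void xorInto(byte[] dst, byte[] src) {
        for (int k = 0; k < dst.length; k++) dst[k] ^= src[k];
    }
}

The point being that onDataBlock() fires as each block is fetched, so whatever 
decoding is possible happens off the critical path; only the blocks that never 
arrive are left for the burst at the end.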

Specifically, the ongoing decodes involve seeking, but not a lot of it - an 
average of 5 blocks, each of which can be read sequentially, one of which we 
already have buffered (at the OS level if not at the Freenet level) because we 
just downloaded it or we are cascading in some small way. Okay, this means 6 
seeks (2 of them writes) instead of 1 per downloaded block, and that might 
become a problem with cheap disks on faster network connections - but it is 
interesting to note that the current datastore does 5 seeks on writing a block 
(though this can also be radically optimised).

Of course, segmentation solves many of these problems, as well as making 
streaming easier - but segmentation is unacceptable because it dramatically 
reduces success rates. Unless we could have some sort of hybrid scheme where 
most but not all of the blocks we need are within the pseudo-segment...

An interesting side-issue: LDPC decoding can be implemented in such a way that 
writes are sequential and reads are random. This is important for slower flash 
devices (USB keys, flash in phones, etc).
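
The I/O pattern, roughly - an invented helper, with an assumed 32KiB block 
size and the parents simply XORed together for simplicity, not actual Freenet 
code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Parent blocks are read at random offsets; the recovered block is appended
// sequentially to the output channel, which suits cheap flash media.
final class SequentialWriteDecode {
    static final int BLOCK_SIZE = 32 * 1024; // assumed block size

    static void recoverOne(FileChannel parents, long[] parentOffsets,
                           FileChannel sequentialOut) throws IOException {
        ByteBuffer acc = ByteBuffer.allocate(BLOCK_SIZE);   // starts zeroed
        for (long off : parentOffsets) {                    // random reads
            ByteBuffer tmp = ByteBuffer.allocate(BLOCK_SIZE);
            while (tmp.hasRemaining()) {
                if (parents.read(tmp, off + tmp.position()) < 0)
                    throw new IOException("unexpected EOF");
            }
            for (int i = 0; i < BLOCK_SIZE; i++)            // XOR the parent in
                acc.put(i, (byte) (acc.get(i) ^ tmp.get(i)));
        }
        acc.rewind();
        while (acc.hasRemaining()) sequentialOut.write(acc); // sequential append
    }
}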

The broader question is memory usage, of course... We need Freenet to run on 
systems with relatively low memory. We can take advantage of more memory where 
it exists, but there is a question of whether LDPC will be unacceptably slow, 
or cause unacceptable disk wear or noise, on systems with lowish amounts of 
memory - and whether we care.

> An 
> interleaved segments code based on RS codes (like the CIRC code used
> on CDs) would be worse than our current scheme (equivalent memory
> usage, poor CPU performance, slightly more disk space required, a
> moderate number of disk seeks required).  (Both LDPC and interleaved
> segments are more effective than our current scheme for large files.)
> 
> So, given that the tradeoffs will be complex, and that the decoder is
> likely to have some flexibility (eg more memory usage for fewer
> seeks), what baseline assumptions about these should I be making?  Do
> we care more about reducing the number of seeks, even if it has an
> increased cost in memory usage or CPU time?  How much memory is it
> safe to assume will always be available?  Is it ok to need disk space
> beyond the file size?  What if avoiding that has a significant cost in
> CPU time?

Generally seeks are more expensive than memory. But memory is a hard limit - if 
we need 200MB to do an operation, then on less than that it will either not 
work at all or be unreasonably slow. Hence IMHO we should detect and scale to 
whatever memory is available. What the lower bound for acceptable performance 
should be is unclear. Bloom filter sharing will eventually require a lot more 
RAM than our current minimum, at least for real-time request answering (if 
there are no real-time requests, e.g. on sneakernet, we can get away with 
periodically checking a much larger on-disk bloom filter). Hostile environments 
are usually developing countries and tend to have oldish computers with very 
limited RAM - but Moore's Law covers over a multitude of inefficiencies.
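
Detecting the ceiling is cheap from Java, for what it's worth. In the sketch 
below only the Runtime call is real; the 1/8 fraction and the 4-64MiB clamp 
are arbitrary illustrative numbers:

// Size the FEC working set from the JVM's memory ceiling.
final class FecMemoryBudget {
    static int decodeBufferBytes() {
        long max = Runtime.getRuntime().maxMemory();  // -Xmx, or Long.MAX_VALUE if unlimited
        long budget = (max == Long.MAX_VALUE) ? 16L << 20 : max / 8;
        return (int) Math.max(4L << 20, Math.min(64L << 20, budget));
    }
}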

Other issues with memory:
- Db4o works a lot better (fewer seeks) if it has enough memory to cache a lot 
of objects. We will dramatically optimise database usage (especially for 
downloads of older files), but this is still an issue, especially if connection 
speeds improve.
- In-RAM plaintext temp buckets are much faster than on-disk encrypted temp 
files (see the sketch after this list). This will be less true with next year's 
CPU-level hardware crypto acceleration, but the operating system calls will 
probably still be a noticeable slowdown.
- Freetalk, Library etc may use significant amounts of memory. Not really 
quantified yet.
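
On the second point, the trade-off is roughly the following - a sketch with 
invented names and an arbitrary 128KiB threshold, not our actual bucket 
factory:

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.IvParameterSpec;

// Small temp data stays as a plaintext buffer on the heap; larger data goes
// to an AES-encrypted temp file, paying for crypto and syscalls on every write.
final class TempStorageSketch {
    static OutputStream openTemp(long expectedSize) throws Exception {
        if (expectedSize <= 128 * 1024)
            return new ByteArrayOutputStream();               // in RAM, plaintext, no syscalls
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                KeyGenerator.getInstance("AES").generateKey(), // ephemeral key, discarded with the file
                new IvParameterSpec(new byte[16]));
        File f = File.createTempFile("fec-temp-", ".dat");
        f.deleteOnExit();
        return new CipherOutputStream(new FileOutputStream(f), cipher);
    }
}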

I would like Freenet to run in 128MB all-in with minimal disk access, but I am 
not sure how realistic this is. The current default limit is 192MB, although 
autodetection is being worked on. 256MB may be more realistic, especially with 
bloom filter sharing and a largish datastore.

And RAM is pretty cheap. But people aren't going to upgrade for Freenet; they 
want Freenet to use a small enough fraction of their existing resources that it 
doesn't slow down everything else, and their existing resources are likely to 
be barely enough for everything else. Ultra-portables typically have 1GB, cheap 
laptops often have 2GB or even 1GB, and these typically run Windows 7. 
Expensive stuff has 8GB. And in most hostile environments you'd be lucky to 
have 256MB with Windows XP or 1GB with Windows 7. On the other hand, in 5 
years' time your phone will probably have 2GB of RAM and your laptop will have 
8GB. Meaning we can comfortably chew 512MB or so, and "low profile" means "few 
disk I/Os", or on flash, "few random writes".

IMHO it is acceptable to use more disk space than the file size. We currently 
use way more; we can improve on that, but needing somewhat more than the file 
size is probably fine. It's unintuitive, but acceptable.
> 
> I realize these are fairly vague questions; vague and opinion-based
> answers are certainly welcome.  Hopefully it won't be too long before I
> can toss some example numbers into the discussion.

Numbers would be very helpful!
> 
> Evan Daniel