Phako wrote:
I probably don't grasp all the implications of this feature, but as it is
today, fec files are useless in practice for large files. It takes 20 times
longer to create 2% of recovery data than it takes to make another copy of
the archive, and you need an unreasonable amount of RAM when individual
files are big.

The FEC implementation in lziprecover 1.25 is tuned to run faster on files that fit in RAM, but it can process files (much) larger than the available RAM. For example, I have just computed a 2% fec file (460 blocks, 15 MB) of gcc-6.4.0.tar (753 MB) on a machine with just 384 MiB of RAM, of which it used about 220 MiB.
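
To sketch why memory use need not grow with the file size, here is an
illustrative C fragment (the names, the 1 MiB block size, the 8 recovery
blocks, and the Vandermonde-style coefficients are made up for the example;
this is not lziprecover's actual code). The data file is read one block at
a time and folded into every recovery block, so RAM use stays near
(number of recovery blocks + 1) * block size no matter how large the file is:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  enum { block_size = 1 << 20 };     /* 1 MiB blocks; an arbitrary choice */

  /* Multiply two elements of GF(2^8) using the reduction polynomial
     0x11D (x^8+x^4+x^3+x^2+1). A real encoder would use log/exp tables. */
  static uint8_t gf256_mul( uint8_t a, uint8_t b )
    {
    uint8_t p = 0;
    while( b )
      {
      if( b & 1 ) p ^= a;
      b >>= 1;
      a = (uint8_t)( ( a << 1 ) ^ ( ( a & 0x80 ) ? 0x1D : 0 ) );
      }
    return p;
    }

  /* Fold the data file 'name' into 'nk' recovery blocks, reading one data
     block at a time so that RAM use stays near ( nk + 1 ) * block_size no
     matter how large the file is. Recovery block i accumulates
     r_i ^= x_i^j * d_j over all data blocks d_j, with x_i = i + 1; a
     textbook Vandermonde-style choice, only sound for small block counts
     and not the generator matrix lziprecover uses. */
  static int make_recovery_blocks( const char * name, uint8_t ** fec,
                                   const unsigned nk )
    {
    FILE * f = fopen( name, "rb" );
    uint8_t * data = malloc( block_size );
    uint8_t * coef = malloc( nk );       /* current power of each x_i */
    size_t n;
    if( !f || !data || !coef ) return 1;
    for( unsigned i = 0; i < nk; ++i )
      { memset( fec[i], 0, block_size ); coef[i] = 1; }   /* x_i^0 == 1 */
    while( ( n = fread( data, 1, block_size, f ) ) > 0 )
      {
      if( n < block_size ) memset( data + n, 0, block_size - n );  /* pad */
      for( unsigned i = 0; i < nk; ++i )
        {
        for( size_t k = 0; k < block_size; ++k )
          fec[i][k] ^= gf256_mul( coef[i], data[k] );
        coef[i] = gf256_mul( coef[i], (uint8_t)( i + 1 ) );  /* next power */
        }
      }
    free( coef ); free( data ); fclose( f );
    return 0;
    }

  int main( int argc, char * argv[] )
    {
    enum { nk = 8 };           /* 8 recovery blocks, like the default */
    uint8_t * fec[nk];
    if( argc != 2 ) return 1;
    for( unsigned i = 0; i < nk; ++i )
      if( !( fec[i] = malloc( block_size ) ) ) return 1;
    if( make_recovery_blocks( argv[1], fec, nk ) != 0 ) return 1;
    /* A real tool would now write the recovery blocks plus the metadata
       needed to locate and verify them. */
    for( unsigned i = 0; i < nk; ++i ) free( fec[i] );
    return 0;
    }

In a scheme like this the time to create the recovery data grows with the
number of recovery blocks times the file size, which gives an idea of why
the lower fragmentation levels are so much faster for a fixed percentage
of recovery data.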

This is a first implementation, and surely the performance on "large" files can be improved, but I need more information to understand the problem. How large are the files to protect? How much RAM does lziprecover use? How much RAM and how many processors does your machine have? Why 2% instead of the default 8 blocks? What FEC fragmentation levels have you tried? (-0 is 40 to 80 times faster than the default -9).

http://www.nongnu.org/lzip/manual/lziprecover_manual.html#Invoking-lziprecover
-0 .. -9
FEC fragmentation level. Defaults to -9. Level -0 is the fastest; it creates FEC data using GF(2^8), maybe with large blocks. Levels -1 to -9 use GF(2^8) or GF(2^16) as required, with increasing numbers of smaller blocks.
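
As for the choice between GF(2^8) and GF(2^16): the field has to be large
enough to give every block its own coefficient, and multiplications in
GF(2^16) are noticeably more expensive than in GF(2^8). A rough
illustration of that decision, assuming the textbook Reed-Solomon limit of
2^m - 1 blocks per field (lziprecover's exact rule may differ):

  enum gf_field { GF2_8, GF2_16 };

  /* Pick the smallest field able to give every block a distinct nonzero
     coefficient, assuming the classical limit of 2^m - 1 blocks. */
  static enum gf_field choose_field( const unsigned long data_blocks,
                                     const unsigned long fec_blocks )
    {
    return ( data_blocks + fec_blocks <= 255 ) ? GF2_8 : GF2_16;
    }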

Best regards,
Antonio.
