This is a first implementation, and the performance on "large" files can surely be improved, but I need more information to understand the problem. How large are the files to protect? How much RAM does lziprecover use? How much RAM and how many processors does your machine have? Why 2% instead of the default 8 blocks? What FEC fragmentation levels have you tried? (-0 is 40 to 80 times faster than the default -9.)

I created a 2% FEC file for a 14.5 GB tar.lz on a NAS server (4-core Xeon E3-1220 V2 with 8 GB of RAM and 8 GB of swap). I didn't time it, but it took more than 2 hours. After that I tried to test the resulting files (lziprecover -Ft) on a VM with 8 GB of RAM but no swap, and the test failed immediately with a memory allocation error.

    FEC fragmentation level. Defaults to -9. Level -0 is the fastest; it creates FEC data using GF(2^8), maybe with large blocks. Levels -1 to -9 use GF(2^8) or GF(2^16) as required, with increasing amounts of smaller blocks.
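A rough back-of-the-envelope sketch of why the field choice matters for a file this size: a Reed-Solomon code over GF(2^q) can have at most 2^q - 1 blocks, so GF(2^8) forces large blocks on a 14.5 GB file, while GF(2^16) allows far smaller ones. These are the mathematical field limits only; the exact block sizes lziprecover actually chooses per level are internal details not stated here.

```python
def min_block_size(file_size: int, q: int) -> int:
    """Smallest possible block size (bytes) for a file of file_size
    bytes when at most 2**q - 1 blocks fit in GF(2**q).
    Field-limit arithmetic only, not lziprecover's actual policy."""
    max_blocks = 2 ** q - 1
    return -(-file_size // max_blocks)  # ceiling division

size = 14_500_000_000  # the ~14.5 GB tar.lz from the report above
print(min_block_size(size, 8))   # GF(2^8): blocks of at least ~57 MB
print(min_block_size(size, 16))  # GF(2^16): blocks can be ~220 KB
```

This is consistent with the description above: level -0 (GF(2^8) only) necessarily works with large blocks on large files, while higher levels can switch to GF(2^16) to get many smaller blocks.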

Does the memory requirement also change with the fragmentation level?

Regards.
