Ian Clarke wrote:
> On Mon, Dec 14, 2009 at 5:25 AM, Florent Daigniere
> <nextgens at freenetproject.org> wrote:
>> Attempting to compress the file with the same compression algorithm is likely
>> to be fruitless, yes... I had a patch somewhere which tried to
>> use file extensions to make educated guesses... but it never got merged
>> because of conflicts (saces was working on metadata) and lack of interest
>> on my side.
>>
>> Anyway, how do you determine whether a file is already compressed without
>> actually compressing it? Did you do the maths? In most cases, even though the
>> data is already compressed, it still makes sense (in terms of wall-clock time)
>> to recompress it with another algorithm before sending it over the (slow) wire.
> 
> I think by looking at the filetype you can make an educated guess.
> Also, if the file is larger than 1MB, there is a good chance that it's
> already been compressed.
> 
> I don't think you'll gain anything by re-compressing an already
> compressed file unless the original compression mechanism was really
> dumb.

You will, in all cases, if you trust
http://compressionratings.com/comp.cgi. And anyway, if the compressed
result is bigger, we just discard it.

Don't forget that the size gain (even if small) gets multiplied by the
redundancy we introduce (FEC)... and that you insert once but expect
people to download MANY times.
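To put rough (purely illustrative) numbers on it: shaving 5% off a 10MB
file saves 500KB of payload. If we assume FEC roughly doubles the number
of blocks inserted, that is about 1MB less to insert; and if a hundred
people fetch the file, that is tens of megabytes less traffic across the
network.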

Bottom line: you invest some CPU cycles, you end up inserting faster
(fewer blocks to insert), and people enjoy faster downloads (fewer
blocks to download)!
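
As for guessing from the filetype: the patch I mentioned did something
along these lines (a rough sketch only; the class name, extension list
and 1MB cut-off here are illustrative, not what the patch shipped):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CompressionGuesser {
    // Formats that are almost always compressed already.
    private static final Set<String> COMPRESSED_EXTENSIONS =
        new HashSet<String>(Arrays.asList(
            "zip", "gz", "bz2", "7z", "rar",  // archives
            "jpg", "jpeg", "png", "gif",      // images
            "mp3", "ogg", "flac",             // audio
            "avi", "mkv", "mp4"));            // video

    private static final long ONE_MEGABYTE = 1024L * 1024L;

    /** Educated guess: is this file probably compressed already? */
    public static boolean probablyCompressed(String filename, long sizeBytes) {
        int dot = filename.lastIndexOf('.');
        if (dot >= 0 && COMPRESSED_EXTENSIONS.contains(
                filename.substring(dot + 1).toLowerCase()))
            return true;
        // Ian's heuristic: big files tend to be media or archives.
        return sizeBytes > ONE_MEGABYTE;
    }
}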

Florent
