During this month I refactored the code used for the tests and kept running
them on the same base mentioned above (about 92,000 files with an average
size of 2 KB), but changed the procedure: I ran the compression and
decompression 50 times on eight different computers.
The results were not
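For concreteness, a repeated-runs harness along these lines would give a mean and spread per codec; this is only a minimal sketch, where run_once() is a hypothetical placeholder for one full compression+decompression pass over the corpus (the actual test code is not shown in this thread):

    import statistics
    import time

    def run_once():
        # Hypothetical placeholder: one full compression + decompression pass
        # over the ~92,000-file corpus would go here.
        pass

    def benchmark(runs=50):
        # Repeat the whole test `runs` times and summarise wall-clock time.
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            run_once()
            timings.append(time.perf_counter() - start)
        return statistics.mean(timings), statistics.stdev(timings)

    mean_s, stdev_s = benchmark()
    print("mean %.3f s, stdev %.3f s over 50 runs" % (mean_s, stdev_s))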
Those are _tiny_. It seems likely to me that you're spending most of your
time on I/O related to metadata (disk seeks, directory
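One way to put numbers on that is to time the read and the compression of each file separately; a minimal sketch, using Python's zlib purely as a stand-in codec and a hypothetical corpus directory, would look roughly like this:

    import os
    import time
    import zlib

    def profile_corpus(root):
        # Split wall-clock time between file I/O and in-memory compression,
        # to see whether the tiny files make disk/metadata access dominate.
        io_time = 0.0
        cpu_time = 0.0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                t0 = time.perf_counter()
                with open(path, "rb") as f:
                    data = f.read()
                t1 = time.perf_counter()
                zlib.compress(data)  # stand-in for the codec under test
                t2 = time.perf_counter()
                io_time += t1 - t0
                cpu_time += t2 - t1
        print("I/O: %.2f s   compression: %.2f s" % (io_time, cpu_time))

    profile_corpus("corpus/")  # hypothetical path to the test files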
During the tests the only programs running are Eclipse and Chromium, and I
don't believe they affect the results because they are running during the
entire test.
The permissions issue has been fixed.
Thanks.
--
Jose Vinicius Pimenta Coletto
Question: Are you 100% sure that nothing else was running on that
system during the tests?
No cron jobs, no makewhatis or updatedb?
P.S. There is a permission issue with downloading one of the files.
2011/3/2 José Vinícius Pimenta Coletto jvcole...@gmail.com:
Hi,
I'm making a comparison
I think some profiling is in order: claiming LZO decompresses at 1.0 MB/s and is
more than 3x faster at compression than decompression (especially when it's a
well-known asymmetric algorithm that favors decompression speed) is somewhat
unbelievable.
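One quick sanity check is to time both directions on a buffer that is already in memory, so disk I/O cannot skew the ratio. A minimal sketch, with zlib standing in for LZO (a real test would use an LZO binding and the actual corpus data):

    import time
    import zlib

    def mb_per_s(fn, payload, out_bytes, repeats=200):
        # Average MB/s over `repeats` calls; size is counted on the
        # uncompressed side for both directions.
        start = time.perf_counter()
        for _ in range(repeats):
            fn(payload)
        elapsed = time.perf_counter() - start
        return out_bytes * repeats / elapsed / 1e6

    data = b"some moderately compressible sample text " * 50000
    packed = zlib.compress(data)

    print("compress  : %.1f MB/s" % mb_per_s(zlib.compress, data, len(data)))
    print("decompress: %.1f MB/s" % mb_per_s(zlib.decompress, packed, len(data)))

With any codec of this family, decompression should come out well ahead of compression, which is why the reported numbers look suspect.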
I see that you use small files. Maybe
Slightly off-topic for this conversation, but I thought it worth
mentioning: LZO is splittable, which makes it a good fit for Hadoop-y things.
Just something to remember when you do get some final results on performance.
Cheers
James.
On 2011-03-02, at 8:12 PM, Brian Bockelman wrote: