Yes, please file a bug.
There are file systems with different block sizes out there, on Linux as well as on Solaris.

Thanks,
--Konstantin

Martin Traverso wrote:
I think I found the issue. The class org.apache.hadoop.fs.DU assumes
1024-byte blocks when reporting usage information:

   this.used = Long.parseLong(tokens[0])*1024;

This works fine on Linux, but on Solaris and Mac OS the reported number of
blocks is based on 512-byte blocks, so multiplying by 1024 makes DU report
twice the actual usage.

The solution is simple: DU should run "du -sk" instead of "du -s", since the
-k flag forces the output to be in kilobytes on every platform.
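For illustration, here is a minimal sketch of the fixed parsing, assuming the
command is changed to "du -sk" (with -k, du reports kilobytes on Linux,
Solaris, and Mac OS alike). The class and method names below are made up for
the example; this is not the actual org.apache.hadoop.fs.DU code:

   import java.io.BufferedReader;
   import java.io.IOException;
   import java.io.InputStreamReader;

   // Sketch only: run "du -sk" and parse the kilobyte count.
   public class DuSketch {
     public static long usedBytes(String path)
         throws IOException, InterruptedException {
       Process p = Runtime.getRuntime().exec(new String[] {"du", "-sk", path});
       try (BufferedReader r = new BufferedReader(
           new InputStreamReader(p.getInputStream()))) {
         String line = r.readLine();               // e.g. "1234\t/some/path"
         String[] tokens = line.split("\\s+");
         p.waitFor();
         return Long.parseLong(tokens[0]) * 1024L; // -k makes the unit KB everywhere
       }
     }
   }

With -k the multiplier of 1024 is correct on all three platforms, so no
OS-specific logic is needed.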

Should I file a bug for this?

Martin
