How many files do you have per node? What I find is that my
inodes/dentries are almost always cached, so even on a host with hundreds
of thousands of files the 'du -sk' generally only causes high I/O for a
couple of seconds. I am using 2 TB disks too.

 Sridhar



On Fri, Apr 8, 2011 at 12:15 AM, Edward Capriolo <edlinuxg...@gmail.com>wrote:

> I have a 0.20.2 cluster. I notice that our nodes with 2 TB disks waste
> tons of disk I/O doing a 'du -sk' of each data directory. Instead of
> 'du -sk', why not just do this with java.io.File? How is this going to
> work with 4 TB, 8 TB disks and up? It seems like calculating used and
> free disk space could be done a better way.
>
> Edward
>
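For what it's worth, java.io.File does expose space figures directly
without forking a process, so something like the following is possible (a
minimal sketch; the path is just a stand-in for a DataNode data
directory):

```java
import java.io.File;

public class DiskSpace {
    // Used bytes on the partition containing the given path, computed
    // from the JVM's built-in calls instead of forking 'du' or 'df'.
    public static long usedBytes(File dir) {
        return dir.getTotalSpace() - dir.getUsableSpace();
    }

    public static void main(String[] args) {
        File dataDir = new File("/");  // stand-in for a real data directory
        System.out.println("total  = " + dataDir.getTotalSpace());
        System.out.println("usable = " + dataDir.getUsableSpace());
        System.out.println("used   = " + usedBytes(dataDir));
    }
}
```

The caveat is that these calls report usage for the whole partition, not
for one directory tree the way 'du -sk' does; a per-directory total still
requires walking the tree, which is where the cached-dentry point above
comes in.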
