[
https://issues.apache.org/jira/browse/HDDS-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mark Gui resolved HDDS-5380.
----------------------------
Resolution: Fixed
> Get more accurate space info for DedicatedDiskSpaceUsage
> --------------------------------------------------------
>
> Key: HDDS-5380
> URL: https://issues.apache.org/jira/browse/HDDS-5380
> Project: Apache Ozone
> Issue Type: Sub-task
> Reporter: Mark Gui
> Assignee: Mark Gui
> Priority: Minor
> Labels: pull-request-available
>
> Java has two APIs to get available space info for a mounted fs:
> `getFreeSpace()` and `getUsableSpace()`; the latter is more accurate (see its
> Javadoc).
> Let's do an experiment:
> {code:java}
> // java
> import java.io.File;
>
> public class FsSpace {
>     public static void main(String[] args) {
>         File file = new File(args[0]);
>         System.out.println("FreeSpace: " + file.getFreeSpace());
>         System.out.println("UsableSpace: " + file.getUsableSpace());
>     }
> }{code}
> Compile it, run it against my root /, and compare with the Linux command 'df'.
> {code:java}
> // linux command line
> $ java FsSpace /
> FreeSpace: 49653022720 <-- A
> UsableSpace: 45126942720 <-- B
> $ df / -B1
> Filesystem 1B-blocks Used Available Use% Mounted on
> overlay 105553784832 55900872704 45126832128 56% /
> ^
> |
> C{code}
> So B and C are close to each other, while A is larger.
> That is roughly a 4 GB difference between the two Java APIs on a 100 GB disk.
> I think `getFreeSpace()` accounts for all unallocated space but does not take
> the space reserved for system use into consideration, while
> `getUsableSpace()` does.
> So we should be more conservative to prevent over-consuming disk space.
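> To make the conservative choice concrete, here is a minimal sketch (a
> hypothetical helper, not the actual Ozone code) that bases capacity decisions
> on `getUsableSpace()` rather than `getFreeSpace()`:
> {code:java}
> import java.io.File;
>
> public class ConservativeSpace {
>     // Hypothetical helper: report only the space this process can actually
>     // write, excluding blocks reserved for system use.
>     static long availableForWrites(File dir) {
>         return dir.getUsableSpace();
>     }
>
>     public static void main(String[] args) {
>         File dir = new File(args.length > 0 ? args[0] : ".");
>         System.out.println("Available for writes: " + availableForWrites(dir));
>     }
> }{code}
> Using this value for admission decisions means we may slightly underestimate
> capacity, but we never count reserved blocks as writable space.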
> P.S.
> HDFS has an implementation that uses the two APIs for different purposes, see:
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]