Ravi, I will definitely put the results together into a short, handy document and post it here.

Also, @JoeJulian on IRC suggested that I run this test on XFS bricks with inode sizes of 256 B and 1 KiB (see the formatting sketch after the log below):

===
22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes might look like for that. And, by the same token, 1k inodes.
22:39 < post-factum> JoeJulian: should I try 1k inodes instead?
22:41 <@JoeJulian> post-factum: Doesn't hurt to try. My expectation is that disk usage will go up despite inode usage going down.
22:41 < post-factum> JoeJulian: ok, will check that
22:41 <@JoeJulian> post-factum: and with 256, I'm curious if inode usage will stay close to the same while disk usage goes down.
===
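
In case anyone wants to reproduce this, here is a minimal sketch (in Python, with a placeholder device path and a hypothetical helper name) of how a brick with a specific inode size can be formatted; the actual bricks in this thread may have been prepared differently:

===
import subprocess

def make_xfs_brick(device, inode_size):
    """Format `device` as XFS with the given inode size in bytes."""
    # mkfs.xfs takes the inode size via "-i size=N"; -f overwrites any
    # existing filesystem signature on the device.
    subprocess.run(["mkfs.xfs", "-f", "-i", "size=%d" % inode_size, device],
                   check=True)

# The two variants discussed above:
# make_xfs_brick("/dev/sdX1", 256)
# make_xfs_brick("/dev/sdX1", 1024)
===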

Here are the results for 1 KiB inodes:

(1171336 - 33000) / (1066036 - 23) == 1068 bytes per inode.

Disk usage is indeed higher (1.2G), but inode usage is the same.
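
For reference, the figure above is just the delta in disk usage divided by the delta in used inodes. Assuming the first pair of numbers is 1K-block usage from df and the second pair is used-inode counts from df -i, a quick check in Python:

===
# Deltas from the numbers above. Assumption: disk usage is in 1K blocks
# (df output) and inode counts are used-inode figures from "df -i".
used_k_after, used_k_before = 1171336, 33000
inodes_after, inodes_before = 1066036, 23

k_per_inode = (used_k_after - used_k_before) / (inodes_after - inodes_before)
print("%.3fK per inode" % k_per_inode)  # ~1.068, i.e. roughly 1068 bytes
===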

Will test with 256 B inodes now.

17.03.2016 06:28, Ravishankar N wrote:
> Looks okay to me Oleksandr. You might want to make a github gist of
> your tests+results as a reference for others.