On Mon, 18 May 2020 at 12:39, Xavi Hernandez <jaher...@redhat.com> wrote:

>> > ### User stories
>> > * [Hari] Users are hesitant to upgrade. A good number of issues in
>> > release-7 (crashes, flooding of logs, self heal). Need to look into this.
>> > * [Sunil] Increase in inode size:
>> > https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html
>> > Looks like it can have a perf benefit.
>> >
>>
>> Is there work underway to ascertain whether there are indeed any
>> performance-related benefits? What kinds of tests would be
>> appropriate?
>
>
> Rinku has done some tests downstream to validate that the change doesn't
> cause any performance regression. Initial results don't show any regression
> at all, and the change even provides a significant benefit for 'ls -l' and
> 'unlink' workloads. I'm not sure yet why this happens, as the xattrs for
> these tests should already fit inside 512-byte inodes, so no significant
> differences were expected.

Can we not consider putting together an update or a blog (as part of
release 8 content) that summarizes the environment, workload and results
for these tests? I understand that the test rig may not be publicly
available - however, given enough detail, others could attempt to
replicate the results.
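
As a starting point for replication, below is a minimal Python sketch of the
kind of metadata-heavy workload mentioned above ('ls -l' style stats over many
small files, then unlinks). The mount point, file count and file size are
placeholders, not the parameters actually used in the downstream runs:

    #!/usr/bin/env python3
    # Rough metadata workload: create many small files on a Gluster mount,
    # time an 'ls -l'-like lstat() pass over them, then time unlinking them.
    import os, time

    MOUNT = "/mnt/glusterfs/inode-size-test"   # assumed mount point
    NUM_FILES = 100000                         # assumed file count

    os.makedirs(MOUNT, exist_ok=True)
    for i in range(NUM_FILES):
        with open(os.path.join(MOUNT, f"f{i:07d}"), "wb") as f:
            f.write(b"x" * 64)                 # small payload

    start = time.monotonic()
    with os.scandir(MOUNT) as it:
        for entry in it:
            entry.stat(follow_symlinks=False)  # what 'ls -l' does per entry
    print("stat pass:   %.2fs" % (time.monotonic() - start))

    start = time.monotonic()
    for i in range(NUM_FILES):
        os.unlink(os.path.join(MOUNT, f"f{i:07d}"))
    print("unlink pass: %.2fs" % (time.monotonic() - start))

Running it once against bricks formatted with 512-byte inodes and once with
1024-byte inodes (dropping caches in between) would roughly mirror the
comparison described above.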

>
> The real benefit would be with volumes that use at least geo-replication or
> quotas. In this case the xattrs may not fit inside 512-byte inodes, so
> 1024-byte inodes will reduce the number of disk requests when xattr data is
> not cached (and it's not always cached, even if the inode is in cache). This
> testing is pending.
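
For anyone who wants to sanity-check this on their own bricks, here is a small
hypothetical sketch that totals the xattr payload of a brick file (run as root
directly on a brick path, not on the client mount). It only sums name + value
bytes and ignores XFS attribute header and alignment overhead, so it is just a
rough indicator of whether the xattrs could fit in the in-inode literal area:

    #!/usr/bin/env python3
    # Print each xattr of a brick file and the approximate total payload size.
    import os, sys

    # Hypothetical brick file; pass the real path as the first argument.
    path = sys.argv[1] if len(sys.argv) > 1 else "/bricks/brick1/vol/somefile"

    total = 0
    for name in os.listxattr(path, follow_symlinks=False):
        value = os.getxattr(path, name, follow_symlinks=False)
        total += len(name) + len(value)   # ignores per-entry header bytes
        print(f"{name}: {len(value)} bytes")

    print(f"approx. xattr payload: {total} bytes")
    print("compare against the free literal area of a 512- vs 1024-byte inode")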
>
> From the functional point of view, we also need to test that bigger inodes
> don't cause weird inode allocation problems when available space is small.
> XFS allocates inodes in contiguous chunks on disk, so it could happen that
> even though there's apparently enough space on disk, an inode cannot be
> allocated due to fragmentation. Given that the inode size is bigger, the
> required chunk will also be bigger, which could make this problem worse. We
> should try to fill a volume with small files (with an fsync per file and
> without it) and see if we get ENOSPC errors much earlier than expected.
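
A minimal sketch of that fill test (mount point and file size are assumptions;
run once with --fsync and once without) could look like this:

    #!/usr/bin/env python3
    # Fill a volume with tiny files until ENOSPC, optionally fsync-ing each
    # file, and report how many files fit. Comparing the count (and the free
    # space reported at that point) between 512-byte and 1024-byte inode
    # bricks shows whether ENOSPC arrives much earlier than expected.
    import errno, os, sys

    MOUNT = "/mnt/glusterfs/enospc-test"        # assumed mount point
    FILE_SIZE = 4096                            # assumed small-file size
    FSYNC_PER_FILE = "--fsync" in sys.argv

    os.makedirs(MOUNT, exist_ok=True)
    count = 0
    try:
        while True:
            name = os.path.join(MOUNT, f"f{count:09d}")
            fd = os.open(name, os.O_CREAT | os.O_WRONLY | os.O_EXCL, 0o644)
            try:
                os.write(fd, b"x" * FILE_SIZE)
                if FSYNC_PER_FILE:
                    os.fsync(fd)
            finally:
                os.close(fd)
            count += 1
    except OSError as e:
        if e.errno != errno.ENOSPC:
            raise
        print(f"hit ENOSPC after {count} files "
              f"(fsync per file: {FSYNC_PER_FILE})")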
>
> Any help validating our results or doing the remaining tests would be 
> appreciated.
>

It seems to me that we need to have a broader conversation around
these tests and paths - perhaps on a separate thread.


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!