Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-18 Thread sankarshan
On Mon, 18 May 2020 at 12:39, Xavi Hernandez  wrote:

>> > ### User stories
>> > * [Hari] Users are hesitant to upgrade. A good number of issues in
>> > release-7 (crashes, flooding of logs, self-heal); we need to look into this.
>> > * [Sunil] Increase in inode size:
>> > https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html
>> > Looks like it can have a perf benefit.
>> >
>>
>> Is there work underway to ascertain whether there are indeed any
>> performance-related benefits? What kinds of tests would be appropriate?
>
>
> Rinku has done some tests downstream to validate that the change doesn't
> cause any performance regression. Initial results don't show any regression
> at all, and the change even provides a significant benefit for 'ls -l' and
> 'unlink' workloads. I'm not sure yet why this happens, as the xattrs for
> these tests should already fit inside 512-byte inodes, so no significant
> differences were expected.

Can we consider putting together an update or a blog post (as part of the
release-8 content) which summarises the environment, workload, and results
for these tests? I understand that the test rig may not be publicly
available; however, given enough detail, others could attempt to replicate
the results.
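
For anyone wanting to attempt that ahead of such a write-up, a minimal
sketch of an 'ls -l'/'unlink' style run might look like this (the mount
point and file count are assumptions, not the actual downstream rig):

    import os
    import tempfile
    import time

    MOUNT = "/mnt/testvol"  # hypothetical FUSE mount of the volume
    N = 10000

    workdir = tempfile.mkdtemp(dir=MOUNT)
    for i in range(N):
        with open(os.path.join(workdir, "f%06d" % i), "wb") as f:
            f.write(b"x")

    t0 = time.time()
    for entry in os.scandir(workdir):
        entry.stat()  # the per-entry stat that 'ls -l' performs
    t1 = time.time()
    for name in os.listdir(workdir):
        os.unlink(os.path.join(workdir, name))
    t2 = time.time()

    print("stat: %.2fs, unlink: %.2fs" % (t1 - t0, t2 - t1))
    os.rmdir(workdir)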

>
> The real benefit would be with volumes that use geo-replication or
> quotas. In this case the xattrs may not fit inside 512-byte inodes, so
> 1024-byte inodes will reduce the number of disk requests when xattr data is
> not cached (and it's not always cached, even if the inode is in cache). This
> testing is pending.
>
> From the functional point of view, we also need to test that bigger inodes
> don't cause weird inode allocation problems when available space is small.
> XFS allocates inodes in contiguous chunks on disk, so it could happen that
> even though there is (apparently) enough space on disk, an inode cannot be
> allocated due to fragmentation. Given that the inode size is bigger, the
> required chunk will also be bigger, which could make this problem worse. We
> should try to fill a volume with small files (with fsync per file and without
> it) and see if we get ENOSPC errors much earlier than expected.
>
> Any help validating our results or doing the remaining tests would be 
> appreciated.
>

It seems to me that we need to have a broader conversation around
these tests and paths - perhaps on a separate thread.


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!



Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-18 Thread Xavi Hernandez
Hi Sankarshan,

On Sat, May 16, 2020 at 9:15 AM sankarshan  wrote:

> On Fri, 15 May 2020 at 10:59, Hari Gowtham  wrote:
>
> > ### User stories
> > * [Hari] Users are hesitant to upgrade. A good number of issues in
> > release-7 (crashes, flooding of logs, self-heal); we need to look into this.
> > * [Sunil] Increase in inode size:
> > https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html
> > Looks like it can have a perf benefit.
> >
>
> Is there work underway to ascertain whether there are indeed any
> performance-related benefits? What kinds of tests would be appropriate?
>

Rinku has done some tests downstream to validate that the change doesn't
cause any performance regression. Initial results don't show any regression
at all, and the change even provides a significant benefit for 'ls -l' and
'unlink' workloads. I'm not sure yet why this happens, as the xattrs for
these tests should already fit inside 512-byte inodes, so no significant
differences were expected.
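
As a quick sanity check on that assumption, a small sketch like the
following (the brick path is hypothetical) totals the xattr bytes stored on
a brick file. It ignores XFS's per-entry attribute headers, so the real
in-inode footprint is somewhat larger than the sum it prints:

    import os

    # Sum the xattr name/value bytes of a file as stored on the brick.
    # This is a lower bound: XFS adds per-entry headers on top of this.
    def xattr_footprint(path):
        total = 0
        for name in os.listxattr(path, follow_symlinks=False):
            value = os.getxattr(path, name, follow_symlinks=False)
            total += len(name.encode()) + len(value)
        return total

    # Hypothetical brick path; point it at a real file on a brick.
    path = "/bricks/brick1/testvol/file-0001"
    print(path, "->", xattr_footprint(path), "bytes of xattrs")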

The real benefit would be with volumes that use geo-replication or
quotas. In this case the xattrs may not fit inside 512-byte inodes, so
1024-byte inodes will reduce the number of disk requests when xattr data
is not cached (and it's not always cached, even if the inode is in cache).
This testing is pending.
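
For the pending tests, a brick with the larger inodes can be formatted with
mkfs.xfs's -i size= option before creating the volume. A minimal sketch,
where the scratch device and mount point are placeholders:

    import subprocess

    # Format a scratch device with 1024-byte inodes (-f overwrites any
    # existing filesystem); the device name is a placeholder.
    subprocess.run(["mkfs.xfs", "-f", "-i", "size=1024", "/dev/sdb1"],
                   check=True)

    # Once mounted, xfs_info reports the inode size in its 'isize=' field.
    subprocess.run(["xfs_info", "/bricks/brick1"], check=True)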

From the functional point of view, we also need to test that bigger inodes
don't cause weird inode allocation problems when available space is small.
XFS allocates inodes in contiguous chunks on disk, so it could happen that
even though there is (apparently) enough space on disk, an inode cannot be
allocated due to fragmentation. Given that the inode size is bigger, the
required chunk will also be bigger, which could make this problem worse. We
should try to fill a volume with small files (with fsync per file and
without it) and see if we get ENOSPC errors much earlier than expected.
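
A minimal sketch of that fill test, assuming the mount point is passed on
the command line and everything else is a placeholder:

    import errno
    import os
    import sys

    def fill_small_files(dirpath, size=4096, fsync_per_file=True):
        """Create small files until the filesystem returns ENOSPC."""
        buf = b"x" * size
        count = 0
        try:
            while True:
                path = os.path.join(dirpath, "f%08d" % count)
                with open(path, "wb") as f:
                    f.write(buf)
                    if fsync_per_file:
                        os.fsync(f.fileno())
                count += 1
        except OSError as e:
            if e.errno != errno.ENOSPC:
                raise
        return count

    if __name__ == "__main__":
        mount = sys.argv[1]  # e.g. a FUSE mount of the test volume
        st = os.statvfs(mount)
        print("free before: %d bytes" % (st.f_bavail * st.f_frsize))
        n = fill_small_files(mount, fsync_per_file="--fsync" in sys.argv)
        st = os.statvfs(mount)
        print("created %d files; free after: %d bytes"
              % (n, st.f_bavail * st.f_frsize))

If ENOSPC shows up while statvfs still reports plenty of free blocks, a
failed inode-chunk allocation is the likely culprit.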

Any help validating our results or doing the remaining tests would be
appreciated.

Regards,

Xavi


>
> > * Any release updates?
> > * 6.9 is done and announced
> > * [Sunil] Can we take this in for release-8:
> > https://review.gluster.org/#/c/glusterfs/+/24396/
> > * [Rinku] Yes, we need to ask the patch owners to port this to
> > release-8 after merging it to master. This is possible until we tag
> > release-8; after that it will be difficult, and it would have to go
> > into release-8.1.
> > * [Csaba] This is necessary as well:
> > https://review.gluster.org/#/c/glusterfs/+/24415/
> > * [Rinku] We need the release notes to be reviewed and merged; release-8
> > is blocked on this. https://review.gluster.org/#/c/glusterfs/+/24372/
>
> Have the small set of questions on the notes been addressed? Also, do
> we have plans to move this workflow over to GitHub issues? In other
> words, how long are we planning to continue to work with dual systems?
>
>
> > ### RoundTable
> > * [Sunil] Do we support CentOS 8 and Gluster?
> > * [sankarshan] Please highlight the concerns on the mailing list.
> > The developers who do the manual testing can review and provide their
> > assessment on where the project stands.
> > * We do have packages; how are we testing them?
> > * [Sunil] The CentOS 8 regression machines are having issues and are
> > not being used for regression testing.
> > * [Hari] For packages, Shwetha and Sheetal are manually testing the bits
> > with CentOS 8. The basics work fine, but this testing isn't enough.
> > * Send out a mail to sort this out.
>
> I am guessing that this was on Sunil to send out the note to the list.
> Will be looking forward to that.
>
> > * [Amar] Kadalu 0.7, based on GlusterFS 7.5, has recently been
> > released (Release Blog: https://kadalu.io/blog/kadalu-storage-0.7)
> > * [Rinku] How to test?
> > * [Aravinda]
> > https://kadalu.io/docs/k8s-storage/latest/quick-start
>
>
>
>
> --
> sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
> kadalu.io : Making it easy to provision storage in k8s!



Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-16 Thread sankarshan
On Fri, 15 May 2020 at 10:59, Hari Gowtham  wrote:

> ### User stories
> * [Hari] Users are hesitant to upgrade. A good number of issues in release-7
> (crashes, flooding of logs, self-heal); we need to look into this.
> * [Sunil] Increase in inode size:
> https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html Looks
> like it can have a perf benefit.
>

Is there work underway to ascertain whether there are indeed any
performance-related benefits? What kinds of tests would be appropriate?


> * Any release updates?
> * 6.9 is done and announced
> * [Sunil] Can we take this in for release-8:
> https://review.gluster.org/#/c/glusterfs/+/24396/
> * [Rinku] Yes, we need to ask the patch owners to port this to release-8
> after merging it to master. This is possible until we tag release-8; after
> that it will be difficult, and it would have to go into release-8.1.
> * [Csaba] This is necessary as well:
> https://review.gluster.org/#/c/glusterfs/+/24415/
> * [Rinku] We need the release notes to be reviewed and merged; release-8 is
> blocked on this. https://review.gluster.org/#/c/glusterfs/+/24372/

Have the small set of questions on the notes been addressed? Also, do
we have plans to move this workflow over to GitHub issues? In other
words, how long are we planning to continue to work with dual systems?


> ### RoundTable
> * [Sunil] Do we support CentOS 8 and Gluster?
> * [sankarshan] Please highlight the concerns on the mailing list. The
> developers who do the manual testing can review and provide their assessment
> on where the project stands.
> * We do have packages; how are we testing them?
> * [Sunil] The CentOS 8 regression machines are having issues and are not
> being used for regression testing.
> * [Hari] For packages, Shwetha and Sheetal are manually testing the bits with
> CentOS 8. The basics work fine, but this testing isn't enough.
> * Send out a mail to sort this out.

I am guessing that this was on Sunil to send out the note to the list.
Will be looking forward to that.

> * [Amar] Kadalu 0.7, based on GlusterFS 7.5, has recently been released
> (Release Blog: https://kadalu.io/blog/kadalu-storage-0.7)
> * [Rinku] How to test?
> * [Aravinda] https://kadalu.io/docs/k8s-storage/latest/quick-start




-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel