Re: [Gluster-devel] Release 3.12.10: Scheduled for the 13th of July

2018-06-12 Thread Jiffin Tony Thottan

Two typo corrections to the mail below: "3.12.7" should read "3.12.10",
and "08-03-2018" should read "13-06-2018".


On Tuesday 12 June 2018 12:15 PM, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of
each month, and hence would be 08-03-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.10? If so, mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail.

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the required review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
them moving.

In addition, I have cc'ed the owners of patches that are candidates for
3.12 but have failed regressions. Please have a look into those.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.10

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2018-06-12-9647f0c6 (master branch)

2018-06-12 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-06-12-9647f0c6/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-12 Thread Shyam Ranganathan
On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> As brick-mux tests were failing (and still are on master), this was
> holding up the release activity.
> 
> We now have a final fix [1] for the problem, and the situation has
> improved over a series of fixes and reverts on the 4.1 branch as well.
> 
> So we hope to branch RC0 today, and give a week for package and upgrade
> testing, before getting to GA. The revised calendar stands as follows,
> 
> - RC0 Tagging: 31st May, 2018
> - RC0 Builds: 1st June, 2018
> - June 4th-8th: RC0 testing
> - June 8th: GA readiness callout
> - June 11th: GA tagging

GA has been tagged today, and is off to packaging.

> - +2-4 days release announcement

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-12 Thread Niels de Vos
On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > As brick-mux tests were failing (and still are on master), this was
> > holding up the release activity.
> > 
> > We now have a final fix [1] for the problem, and the situation has
> > improved over a series of fixes and reverts on the 4.1 branch as well.
> > 
> > So we hope to branch RC0 today, and give a week for package and upgrade
> > testing, before getting to GA. The revised calendar stands as follows,
> > 
> > - RC0 Tagging: 31st May, 2018
> > - RC0 Builds: 1st June, 2018
> > - June 4th-8th: RC0 testing
> > - June 8th: GA readiness callout
> > - June 11th: GA tagging
> 
> GA has been tagged today, and is off to packaging.

The glusterfs packages should land in the testing repositories from the
CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
Please test with the instructions from
http://lists.gluster.org/pipermail/packaging/2018-June/000553.html

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Disabling use of anonymous fds in open-behind

2018-06-12 Thread Vijay Bellur
On Mon, Jun 11, 2018 at 7:44 PM, Raghavendra Gowdappa 
wrote:

> All,
>
> This is an option in open-behind, which lets fuse native mounts to use
> anonymous fds. The reasoning being since anonymous fds are stateless,
> overhead of open is avoided and hence better performance. However, bugs
> filed [1][2] seemed to indicate contrary results.
>
> Also, using anonymous fds affects other xlators which rely on per fd state
> [3].
>
> So, this brings to the point do anonymous-fds actually improve performance
> on native fuse mounts? If not, we can disable them. May be they are useful
> for light weight metadata operations like fstat, but the workload should
> only be limited to them. Note that anonymous fds are used by open-behind by
> only two fops - readv and fstat. But, [1] has shown that they actually
> regress performance for sequential reads.
>


Perhaps a more intelligent open-behind based on size could help? IIRC,
open-behind was originally developed to improve latency for small file
operations. For large files, it is unnecessary and can affect read-ahead
behavior as observed in the referenced bugs. Could we alter the behavior to
disable open-behind for those files which are bigger than a configurable
size threshold?
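To make the idea concrete, here is a minimal sketch of such a gate
(hypothetical names and option; this is not the actual open-behind xlator
code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: a hypothetical per-volume open-behind configuration. */
    struct ob_conf {
        bool     open_behind_on;  /* whether the feature is enabled at all */
        uint64_t size_threshold;  /* bytes; larger files get a real open */
    };

    /* Decide whether the open can be delayed ("performed behind").
     * file_size would come from the iatt returned by lookup. */
    bool
    ob_can_delay_open(const struct ob_conf *conf, uint64_t file_size)
    {
        if (!conf->open_behind_on)
            return false;   /* feature off: wind the open immediately */
        if (file_size > conf->size_threshold)
            return false;   /* large file: open now so read-ahead behaves */
        return true;        /* small file: skip/delay the open */
    }

    int
    main(void)
    {
        struct ob_conf conf = { .open_behind_on = true,
                                .size_threshold = 64 * 1024 };
        printf("4KB file -> delay open: %d\n",
               ob_can_delay_open(&conf, 4096));
        printf("1GB file -> delay open: %d\n",
               ob_can_delay_open(&conf, (uint64_t)1 << 30));
        return 0;
    }

The threshold itself could then be exposed as a volume option, with large
files always getting an immediate open.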

Thanks,
Vijay


> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1419807
> [2] https://bugzilla.redhat.com/1489513, "read-ahead underperrforms
> expectations"
>   open-behind   without patch (MiB/s)   with patch (MiB/s)
>   on            132.87                  133.51
>   off           139.70                  139.77
>
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1084508
>
> PS: Anonymous fds are stateless fds for which a client, such as a native
> fuse mount, doesn't do an explicit open. Instead, bricks do the open
> on-demand during fops which need an fd (like readv, fstat, etc.).
>
> regards,
> Raghavendra
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Disabling use of anonymous fds in open-behind

2018-06-12 Thread Poornima Gurusiddaiah
On Wed, Jun 13, 2018, 5:22 AM Vijay Bellur  wrote:

>
>
> On Mon, Jun 11, 2018 at 7:44 PM, Raghavendra Gowdappa  > wrote:
>
>> All,
>>
>> This is an option in open-behind, which lets fuse native mounts to use
>> anonymous fds. The reasoning being since anonymous fds are stateless,
>> overhead of open is avoided and hence better performance. However, bugs
>> filed [1][2] seemed to indicate contrary results.
>>
>> Also, using anonymous fds affects other xlators which rely on per fd
>> state [3].
>>
>> So, this brings to the point do anonymous-fds actually improve
>> performance on native fuse mounts? If not, we can disable them. May be they
>> are useful for light weight metadata operations like fstat, but the
>> workload should only be limited to them. Note that anonymous fds are used
>> by open-behind by only two fops - readv and fstat. But, [1] has shown that
>> they actually regress performance for sequential reads.
>>
>
>
> Perhaps a more intelligent open-behind based on size could help? IIRC,
> open-behind was originally developed to improve latency for small file
> operations. For large files, it is unnecessary and can affect read-ahead
> behavior as observed in the referenced bugs. Could we alter the behavior to
> disable open-behind for those files which are bigger than a configurable
> size threshold?
>
+1, this sounds like a good solution: it keeps the benefits (except perhaps
in a few cases) while not reducing small-file read performance. We could
enable open-behind only for fds opened read-only, and only if the file size
is less than or equal to the quick-read file-size limit.
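As a sketch of that combined check (again hypothetical names, not the
actual xlator code; quick_read_max is assumed to hold quick-read's
configured file-size limit):

    #include <fcntl.h>    /* O_ACCMODE, O_RDONLY */
    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch only: delay the open just for read-only fds on files no
     * larger than the quick-read file-size limit. */
    bool
    ob_should_delay_open(int open_flags, uint64_t file_size,
                         uint64_t quick_read_max)
    {
        bool rdonly = ((open_flags & O_ACCMODE) == O_RDONLY);
        return rdonly && (file_size <= quick_read_max);
    }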

Regards,
Poornima


> Thanks,
> Vijay
>
>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1419807
>> [2] https://bugzilla.redhat.com/1489513, "read-ahead underperrforms
>> expectations"
>>   open-behind   without patch (MiB/s)   with patch (MiB/s)
>>   on            132.87                  133.51
>>   off           139.70                  139.77
>>
>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1084508
>>
>> PS: Anonymous fds are stateless fds, where a client like native fuse
>> mount doesn't do an explicit open. Instead, bricks do the open on-demand
>> during fops which need an fd (like readv, fstat etc).
>>
>> regards,
>> Raghavendra
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Disabling use of anonymous fds in open-behind

2018-06-12 Thread Raghavendra Gowdappa
On Wed, Jun 13, 2018 at 7:21 AM, Poornima Gurusiddaiah 
wrote:

>
>
> On Wed, Jun 13, 2018, 5:22 AM Vijay Bellur  wrote:
>
>>
>>
>> On Mon, Jun 11, 2018 at 7:44 PM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> All,
>>>
>>> This is an option in open-behind, which lets fuse native mounts to use
>>> anonymous fds. The reasoning being since anonymous fds are stateless,
>>> overhead of open is avoided and hence better performance. However, bugs
>>> filed [1][2] seemed to indicate contrary results.
>>>
>>> Also, using anonymous fds affects other xlators which rely on per fd
>>> state [3].
>>>
>>> So, this brings to the point do anonymous-fds actually improve
>>> performance on native fuse mounts? If not, we can disable them. May be they
>>> are useful for light weight metadata operations like fstat, but the
>>> workload should only be limited to them. Note that anonymous fds are used
>>> by open-behind by only two fops - readv and fstat. But, [1] has shown that
>>> they actually regress performance for sequential reads.
>>>
>>
>>
>> Perhaps a more intelligent open-behind based on size could help? IIRC,
>> open-behind was originally developed to improve latency for small file
>> operations.
>>
>
It looks like quick-read accounts for a larger share of the performance
impact than open-behind does. Milind is scheduling some runs with the
combination of "open-behind off, quick-read on" to assess the actual perf
impact of open-behind for small-file read workloads. I hope we'll have
enough data by then to judge the usefulness of open-behind.

>> For large files, it is unnecessary and can affect read-ahead behavior as
>> observed in the referenced bugs. Could we alter the behavior to disable
>> open-behind for those files which are bigger than a configurable size
>> threshold?
>>
> +1, this sounds like a perfect solution which doesn't give out the
> benefits (may be in few cases) but also doesn't reduce the performance in
> small file read. We could enable open behind only for fd with rd-only, and
> if the size is less than or equal to the quick-read file size.
>

Yes, that's one solution we thought of too (aligning open-behind's
decision with the quick-read size). But a slight variation of this approach
(opening early for files larger than 256KB, tested on files several GBs in
size) was tried and didn't yield the expected results [1].

[1] https://review.gluster.org/#/c/17377/


> Regards,
> Poornima
>
>
>> Thanks,
>> Vijay
>>
>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1419807
>>> [2] https://bugzilla.redhat.com/1489513, "read-ahead underperrforms
>>> expectations"
>>>   open-behind   without patch (MiB/s)   with patch (MiB/s)
>>>   on            132.87                  133.51
>>>   off           139.70                  139.77
>>>
>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1084508
>>>
>>> PS: Anonymous fds are stateless fds, where a client like native fuse
>>> mount doesn't do an explicit open. Instead, bricks do the open on-demand
>>> during fops which need an fd (like readv, fstat etc).
>>>
>>> regards,
>>> Raghavendra
>>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel