[Gluster-Maintainers] Announcing Gluster release 6.1

2019-04-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
6.1 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 6.1:
https://download.gluster.org/pub/gluster/glusterfs/6/6.1/

[2] Release notes for 6.1:
https://docs.gluster.org/en/latest/release-notes/6.1/



[Gluster-Maintainers] Announcing Gluster release 5.6

2019-04-18 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
5.6 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Release 5.x had a long-standing issue where network bandwidth usage
was much higher than in prior releases. This issue has been addressed in
this release. Bug 1673058 has more details regarding the issue [3].

Thanks,
Gluster community

[1] Packages for 5.6:
https://download.gluster.org/pub/gluster/glusterfs/5/5.6/

[2] Release notes for 5.6:
https://docs.gluster.org/en/latest/release-notes/5.6/

[3] Bandwidth usage bug: https://bugzilla.redhat.com/show_bug.cgi?id=1673058
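
For users on the 5.x line affected by the bandwidth issue, a minimal update
sketch for CentOS 7 (assuming the Storage SIG repo, as used elsewhere in this
digest; follow the usual rolling-upgrade sequencing across nodes):

  # yum install centos-release-gluster5
  # yum update glusterfs-server glusterfs-fuse
  # systemctl restart glusterd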


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-6.1 released

2019-04-18 Thread Shyam Ranganathan
On 4/18/19 3:40 AM, Niels de Vos wrote:
> On Wed, Apr 17, 2019 at 06:30:46PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/86/artifact/glusterfs-6.1.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/86/artifact/glusterfs-6.1.sha512sum
> 
> Packages are available in the testing repository of the CentOS Storage
> SIG. Please try them out and let us know if they are working as
> expected.

Tested, works as expected. Please tag them for a release.

> 
>   # yum install centos-release-gluster6
>   # yum --enablerepo=centos-gluster6-test install glusterfs-server
> 
> Thanks,
> Niels


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.6 released

2019-04-17 Thread Shyam Ranganathan
On 4/11/19 9:57 AM, Shyam Ranganathan wrote:
> On 4/9/19 9:37 AM, Niels de Vos wrote:
>> On Tue, Apr 09, 2019 at 11:56:31AM +, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/85/artifact/glusterfs-5.6.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/85/artifact/glusterfs-5.6.sha512sum
>>
>> Packages from the CentOS Storage SIG should be landing in the testing
>> repository very soon from now. Please let us know any test results.
>>
>> CentOS 7:
>>
>> # yum install centos-release-gluster5
>> # yum --enablerepo=centos-gluster5-test install glusterfs-server
> 
> Poornima, are you testing these out?

Niels, tested out CentOS packages, 5.6 installs and works as expected.
Please mark them for a release.

> 
>>
>> CentOS 6:
>>
>> # yum install centos-release-gluster5
>> # yum --enablerepo=centos-gluster5-test install glusterfs-fuse
>>
>> Thanks,
>> Niels


Re: [Gluster-Maintainers] [Gluster-devel] Release 6.1: Tagged!

2019-04-17 Thread Shyam Ranganathan
Release 6.1 is now tagged and is being packaged. If anyone gets a chance,
please test the packages from the CentOS SIG, as I am unavailable for the
next 4 days.
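
For reference, the CentOS Storage SIG test commands for these packages (the
same commands Niels posted for the 6.1 packages elsewhere in this digest; a
sketch for CentOS 7):

  # yum install centos-release-gluster6
  # yum --enablerepo=centos-gluster6-test install glusterfs-server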

Thanks,
Shyam
On 4/16/19 9:53 AM, Shyam Ranganathan wrote:
> Status: Tagging pending
> 
> Waiting on patches:
> (Kotresh/Atin) - glusterd: fix loading ctime in client graph logic
>   https://review.gluster.org/c/glusterfs/+/22579
> 
> Following patches will not be taken in if CentOS regression does not
> pass by tomorrow morning Eastern TZ,
> (Pranith/KingLongMee) - cluster-syncop: avoid duplicate unlock of
> inodelk/entrylk
>   https://review.gluster.org/c/glusterfs/+/22385
> (Aravinda) - geo-rep: IPv6 support
>   https://review.gluster.org/c/glusterfs/+/22488
> (Aravinda) - geo-rep: fix integer config validation
>   https://review.gluster.org/c/glusterfs/+/22489
> 
> Tracker bug status:
> (Ravi) - Bug 1693155 - Excessive AFR messages from gluster showing in
> RHGSWA.
>   All patches are merged, but none of the patches adds the "Fixes"
> keyword; I assume this is an oversight and that the bug is fixed in this
> release.
> 
> (Atin) - Bug 1698131 - multiple glusterfsd processes being launched for
> the same brick, causing transport endpoint not connected
>   No work has occurred post the logs upload to the bug; restarting bricks and
> possibly glusterd is the existing workaround when the bug is hit. Moving
> this out of the tracker for 6.1.
> 
> (Xavi) - Bug 1699917 - I/O error on writes to a disperse volume when
> replace-brick is executed
>   This is a very recent bug (15th April) and does not seem to involve any
> critical data corruption or service availability issues, so we plan not to
> wait for the fix in 6.1.
> 
> - Shyam
> On 4/6/19 4:38 AM, Atin Mukherjee wrote:
>> Hi Mohit,
>>
>> https://review.gluster.org/22495 should get into 6.1 as it’s a
>> regression. Can you please attach the respective bug to the tracker Ravi
>> pointed out?
>>
>>
>> On Sat, 6 Apr 2019 at 12:00, Ravishankar N <ravishan...@redhat.com> wrote:
>>
>> Tracker bug is https://bugzilla.redhat.com/show_bug.cgi?id=1692394, in
>> case anyone wants to add blocker bugs.
>>
>>
>> On 05/04/19 8:03 PM, Shyam Ranganathan wrote:
>> > Hi,
>> >
>> > Expected tagging date for release-6.1 is on April, 10th, 2019.
>> >
>> > Please ensure required patches are backported and also are passing
>> > regressions and are appropriately reviewed for easy merging and
>> tagging
>> > on the date.
>> >
>> > Thanks,
>> > Shyam
>>
>> -- 
>> - Atin (atinm)
>>


Re: [Gluster-Maintainers] [gluster-packaging] Requesting for an early 5.6 release

2019-04-11 Thread Shyam Ranganathan
On 4/11/19 9:58 AM, Patrick Matthäi wrote:
> 
> Am 08.04.2019 um 16:04 schrieb Shyam Ranganathan:
>> On 4/6/19 1:47 AM, Poornima Gurusiddaiah wrote:
>>> Hi,
>>>
>>> We had a critical bug [1], that got introduced in gluster release 5.
>>> There are users waiting on an update with the fix. Hence requesting for
> an out-of-band release for 5.6. Jiffin and I volunteer to do some of the
> release tasks - tagging(?), testing. But we would need help with the
> builds and other tasks.
> 
> Hi,
> 
> Debian Buster 10 is now in freeze and I was able to pull in version 5.5
> instead of 5.4 for it. Pulling a new upstream version into buster again
> could be difficult.
> 
> Could you point me to the bug report and the diff for the critical fix,
> so that I can upload this explicit change to buster?

Here you go,
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1673058
Fix/patch: https://review.gluster.org/c/glusterfs/+/22404
commit ID on release-5 branch: 1fba86169b0850c3fcd02e56d0ddbba3efe9ae78

> 
> 


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.6 released

2019-04-11 Thread Shyam Ranganathan
On 4/9/19 9:37 AM, Niels de Vos wrote:
> On Tue, Apr 09, 2019 at 11:56:31AM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/85/artifact/glusterfs-5.6.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/85/artifact/glusterfs-5.6.sha512sum
> 
> Packages from the CentOS Storage SIG should be landing in the testing
> repository very soon from now. Please let us know any test results.
> 
> CentOS 7:
> 
> # yum install centos-release-gluster5
> # yum --enablerepo=centos-gluster5-test install glusterfs-server

Poornima, are you testing these out?

> 
> CentOS 6:
> 
> # yum install centos-release-gluster5
> # yum --enablerepo=centos-gluster5-test install glusterfs-fuse
> 
> Thanks,
> Niels


Re: [Gluster-Maintainers] Requesting for an early 5.6 release

2019-04-08 Thread Shyam Ranganathan
On 4/6/19 1:47 AM, Poornima Gurusiddaiah wrote:
> Hi,
> 
> We had a critical bug [1], that got introduced in gluster release 5.
> There are users waiting on an update with the fix. Hence requesting for
> an out-of-band release for 5.6. Jiffin and I volunteer to do some of the
> release tasks - tagging(?), testing. But we would need help with the
> builds and other tasks.

Tagging and building a tarball requires rights on the release branch and
takes about 10 minutes to do, hence I can take care of that.
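
For context, the tagging and tarball step amounts to roughly the following (a
sketch only; the tag message and the Jenkins job parameters are assumptions,
based on the release-new job referenced elsewhere in this digest):

  $ git checkout release-5 && git pull --ff-only origin release-5
  $ git tag -a v5.6 -m "GlusterFS 5.6"     # annotated tag on the branch head
  $ git push origin v5.6
  # then trigger the release-new job on build.gluster.org against the new tag
  # to produce the tarball and sha512sum artifacts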

Packaging is still done by just 2 folks, so I have added the packaging list
to this mail to understand whether they need any assistance and what their
time constraints are.

Post packaging, we test that the RPMs are fine and that an upgrade from an
older version works. I use containers to test this, as detailed here [1].
Helping out with this step would be useful, and I will assume, till I hear
otherwise, that you will be taking care of this for this minor release.
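
As a rough illustration of that container-based check (a sketch only; [1]
documents the actual procedure, and the repo/package names below are the ones
quoted elsewhere in this digest):

  $ docker run --rm -it centos:7 bash
  # inside the container:
  # yum -y install centos-release-gluster5
  # yum -y --enablerepo=centos-gluster5-test install glusterfs-server
  # rpm -q glusterfs-server        # confirm the expected 5.x build landed
  # glusterfs --version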

Other activities include,
- prepare release notes
- Upload release notes to the doc site
- Announce the release
- Close bugs fixed in the release post the announcement

Again, something I can take care of, and should take about 30 minutes
overall.

Shyam

[1] Package testing: https://hackmd.io/-yC3Ol68SwaRWr8bzaL8pw#


[Gluster-Maintainers] Announcing Gluster release 4.1.8

2019-04-05 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.8 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 4.1.8:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/

[2] Release notes for 4.1.8:
https://docs.gluster.org/en/latest/release-notes/4.1.8/





Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.1.8 released

2019-03-28 Thread Shyam Ranganathan
On 3/28/19 8:05 AM, Niels de Vos wrote:
> On Wed, Mar 27, 2019 at 10:12:28PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/84/artifact/glusterfs-4.1.8.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/84/artifact/glusterfs-4.1.8.sha512sum
>>
>> This release is made off jenkins-release-84
> 
> Packages from the CentOS Storage SIG (el6 + el7) should land in the
> testing repository within the next hour or so. Please check them out and
> provide any results of the testing:

Tested an install plus a short volume-creation and data-addition test;
the packages look fine and can be marked for release.
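
For anyone repeating that kind of smoke test, it is roughly the following (a
sketch; the volume name, brick path, and mount point are placeholders):

  # gluster volume create testvol $(hostname):/bricks/test/b1 force
  # gluster volume start testvol
  # mkdir -p /mnt/testvol
  # mount -t glusterfs localhost:/testvol /mnt/testvol
  # dd if=/dev/zero of=/mnt/testvol/file1 bs=1M count=100
  # gluster volume info testvol    # sanity-check volume state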

> 
># yum install centos-release-gluster41
># yum install --enablerepo=centos-gluster*-test glusterfs-server
> 
> Thanks,
> Niels
> 
> 


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-6.0 released

2019-03-25 Thread Shyam Ranganathan
On 3/21/19 3:03 AM, Kaleb Keithley wrote:
> glusterfs-6.0 packages for:
> 
> * CentOS Storage SIG are now available for testing at [1]. Please try
> them out and report test results on this list. After someone reports
> that they have tested them then they will be tagged for release.

Tested the CentOS bits, things work as expected and can be marked for a
release.

Also, the LATEST symlink on d.g.o can be moved to point to 6.0, as release
notes and upgrade guides are in place; the release announcement will follow
sometime later today.
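
Mechanically, moving that pointer is just re-targeting a symlink on the
download server (a sketch; the directory shown is an assumption about the
d.g.o layout):

  $ cd /srv/download.gluster.org/pub/gluster/glusterfs   # hypothetical path
  $ ln -sfn 6 LATEST
  $ readlink LATEST    # should now print: 6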

> 
> * Fedora 30 and Fedora 31/rawhide are in the Fedora Updates-Testing and
> rawhide repos. Use `dnf` to install. Fedora packages will move to the
> Fedora Updates repo after a nominal testing period. Fedora 28 and Fedora
> 29 are at [2].
> 
> * RHEL 8 Beta are at [2].
> 
> * Debian Stretch/9 and Debian buster/10 are at [2] (arm64 packages
> coming soon.)
> 
> * Bionic/18.04, Cosmic/18.10, and Disco/19.04 are on Launchpad at [3].
> 
> * SUSE SLES12SP4, Tumbleweed, SLES15, and Leap15.1 are on OpenSUSE Build
> Service at [4].
> 
> I have _NOT_ updated the top-level LATEST symlink. I will wait for the
> public announcement before changing it from 5 to 6.
> 
> [1]
> https://buildlogs.centos.org/centos/[67]/storage/{x86_64,ppc64le,aarch64}/gluster-6
> [2] https://download.gluster.org/pub/gluster/glusterfs/6
> 
> [3] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-6
> 
> [4] https://build.opensuse.org/project/subprojects/home:glusterfs
> 
> On Tue, Mar 19, 2019 at 8:38 PM  > wrote:
> 
> SRC:
> https://build.gluster.org/job/release-new/83/artifact/glusterfs-6.0.tar.gz
> HASH:
> 
> https://build.gluster.org/job/release-new/83/artifact/glusterfs-6.0.sha512sum
> 
> This release is made off
> jenkins-release-83___
> packaging mailing list
> packag...@gluster.org 
> https://lists.gluster.org/mailman/listinfo/packaging
> 
> 


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.5 released

2019-03-21 Thread Shyam Ranganathan
On 3/19/19 11:28 PM, Kaleb Keithley wrote:
> It seems I neglected to tag the packages for testing.
> 
> I've fixed that. Hopefully the packages land on buildlogs.centos.org
> <http://buildlogs.centos.org> soon.

Tested, passed and can be released.

Thanks Kaleb!

> 
> On Tue, Mar 19, 2019 at 10:58 AM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> On 3/16/19 2:03 AM, Kaleb Keithley wrote:
> > Packages for the CentOS Storage SIG are now available for testing.
> > Please try them out and report test results on this list.
> >
> >   # yum install centos-release-gluster
> >   # yum install --enablerepo=centos-gluster5-test glusterfs-server
> 
> The buildlogs servers do not yet have the RPMs for 5.5 to test. I did
> try to use the build artifacts from
> https://cbs.centos.org/koji/buildinfo?buildID=25417 but, as there is no
> repo file, I was unable to install pointing to this source as the repo.
> 
> Can this be fixed, or some alternate provided, so that the packages can
> be tested and reported back for publishing?
> 
> Thanks,
> Shyam
> 


[Gluster-Maintainers] Release 6: Tagged and ready for packaging

2019-03-19 Thread Shyam Ranganathan
Hi,

RC1 testing is complete and blockers have been addressed. The release is
now tagged for a final round of packaging and package testing before
release.

Thanks for testing out the RC builds and reporting issues that needed to
be addressed.

As packaging and final package testing finish up, we will also be writing
the upgrade guide for the release, before announcing the release for
general consumption.

Shyam


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.5 released

2019-03-19 Thread Shyam Ranganathan
On 3/16/19 2:03 AM, Kaleb Keithley wrote:
> Packages for the CentOS Storage SIG are now available for testing.
> Please try them out and report test results on this list.
> 
>   # yum install centos-release-gluster
>   # yum install --enablerepo=centos-gluster5-test glusterfs-server

The buildlogs servers do not yet have the RPMs for 5.5 to test. I did
try to use the build artifacts from
https://cbs.centos.org/koji/buildinfo?buildID=25417 but, as there is no
repo file, I was unable to install pointing to this source as the repo.
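
For reference, the kind of stop-gap repo file that would have made those
artifacts installable (a sketch only; the CBS tag and baseurl are assumptions
and would need to match the real repo layout):

  # /etc/yum.repos.d/gluster-5.5-cbs-test.repo (hypothetical file)
  [gluster-55-cbs-test]
  name=GlusterFS 5.5 CBS test build
  # baseurl below is an assumed CBS tag/path; adjust to the actual location
  baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/x86_64/os/
  enabled=1
  gpgcheck=0

  # with that in place:
  # yum install --enablerepo=gluster-55-cbs-test glusterfs-server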

Can this be fixed, or some alternate provided, so that the packages can
be tested and reported back for publishing?

Thanks,
Shyam


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-15 Thread Shyam Ranganathan
On 3/13/19 10:44 AM, Shyam Ranganathan wrote:
> On 3/13/19 9:09 AM, Kaleb Keithley wrote:
>> The v5.4 tag was made and a release job was run which gave us
>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz.
>> If the v5.4 tag is moved then there's a logical disconnect between the
>> tag and _that_ tar file, or more accurately the files in that tar file. 
>>
>> Shyam and I discussed the merits of releasing v5.5 versus respinning
>> builds with patches.  Respinning builds with patches isn't uncommon. The
>> difference in the amount of work between one or the other is negligible.
>> In the end Shyam (mainly) decided to go with respinning with patches
>> because a full up "release" for him is a lot more work. (And we both
>> have other $dayjob things we need to be working on instead of endlessly
>> spinning releases and packages.)
> 
> Considering all comments/conversations, I think I will tag a v5.5 with
> the required commits and update the 5.4 release-notes to call it 5.5
> with the added changes.
> 
> Give me a couple of hours :)

Well, that took longer than expected (sorry, I was out sick for some time).

5.5 is now tagged and the release tarball generated for packaging.

> 
>>
>>
>> On Wed, Mar 13, 2019 at 8:52 AM Amar Tumballi Suryanarayan
>> <atumb...@redhat.com> wrote:
>>
>> I am totally fine with v5.5, my suggestion for moving the tag was if
>> we consider calling 5.4 with these two patches.
>>
>> Calling the release as 5.5 is totally OK, and we call it out
>> specifically in our version numbering scheme, as if something is
>> very serious, we can break 'release date' train.
>>
>> -Amar
>>
>> On Wed, Mar 13, 2019 at 6:13 PM Kaleb Keithley <kkeit...@redhat.com> wrote:
>>
>> The Version tag should be (considered) immutable. Please don't
>> move it.
>>
>> If you want to add another tag to help us remember this issue
>> that's fine.
>>
>> The other option which Shyam and I discussed was tagging v5.5.
>>
>>
>> On Wed, Mar 13, 2019 at 8:32 AM Amar Tumballi Suryanarayan
>> <atumb...@redhat.com> wrote:
>>
>> We need to tag different commit may be? So the 'git checkout
>> v5.4' points to the correct commit?
>>
>> On Wed, 13 Mar, 2019, 4:40 PM Shyam Ranganathan
>> <srang...@redhat.com> wrote:
>>
>> Niels, Kaleb,
>>
>> We need to respin 5.4 with the 2 additional commits as
>> follows,
>>
>> commit a00953ed212a7071b152c4afccd35b92fa5a682a (HEAD ->
>> release-5,
>>     core: make compute_cksum function op_version compatible
>>
>> commit 8fb4631c65f28dd0a5e0304386efff3c807e64a4
>>     dict: handle STR_OLD data type in xdr conversions
>>
>> As the current build breaks rolling upgrades, we had
>> held back on
>> announcing 5.4 and are now ready with the fixes that can
>> be used to
>> respin 5.4.
>>
>> Let me know if I need to do anything more from my end
>> for help with the
>> packaging.
>>
>> Once the build is ready, we would be testing it out as
>> usual.
>>
>> NOTE: As some users have picked up 5.4 the announce
>> would also carry a
>> notice, that they need to do a downserver upgrade to the
>> latest bits
>> owing to the patches that have landed in addition to the
>> existing content.
>>
>> Thanks,
>> Shyam
>>
>> On 3/5/19 8:59 AM, Shyam Ranganathan wrote:
>> > On 2/27/19 5:19 AM, Niels de Vos wrote:
>> >> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org wrote:
>> >>> SRC:
>> 
>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
>> >>> HASH:
>> 
>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
>> 

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-13 Thread Shyam Ranganathan
On 3/13/19 9:09 AM, Kaleb Keithley wrote:
> The v5.4 tag was made and a release job was run which gave us
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz.
> If the v5.4 tag is moved then there's a logical disconnect between the
> tag and _that_ tar file, or more accurately the files in that tar file. 
> 
> Shyam and I discussed the merits of releasing v5.5 versus respinning
> builds with patches.  Respinning builds with patches isn't uncommon. The
> difference in the amount of work between one or the other is negligible.
> In the end Shyam (mainly) decided to go with respinning with patches
> because a full up "release" for him is a lot more work. (And we both
> have other $dayjob things we need to be working on instead of endlessly
> spinning releases and packages.)

Considering all comments/conversations, I think I will tag a v5.5 with
the required commits and update the 5.4 release-notes to call it 5.5
with the added changes.

Give me a couple of hours :)

> 
> 
> On Wed, Mar 13, 2019 at 8:52 AM Amar Tumballi Suryanarayan
> <atumb...@redhat.com> wrote:
> 
> I am totally fine with v5.5, my suggestion for moving the tag was if
> we consider calling 5.4 with these two patches.
> 
> Calling the release as 5.5 is totally OK, and we call it out
> specifically in our version numbering scheme, as if something is
> very serious, we can break 'release date' train.
> 
> -Amar
> 
> On Wed, Mar 13, 2019 at 6:13 PM Kaleb Keithley <kkeit...@redhat.com> wrote:
> 
> The Version tag should be (considered) immutable. Please don't
> move it.
> 
> If you want to add another tag to help us remember this issue
> that's fine.
> 
> The other option which Shyam and I discussed was tagging v5.5.
> 
> 
> On Wed, Mar 13, 2019 at 8:32 AM Amar Tumballi Suryanarayan
> <atumb...@redhat.com> wrote:
> 
> We need to tag different commit may be? So the 'git checkout
> v5.4' points to the correct commit?
> 
> On Wed, 13 Mar, 2019, 4:40 PM Shyam Ranganathan
> <srang...@redhat.com> wrote:
> 
> Niels, Kaleb,
> 
> We need to respin 5.4 with the 2 additional commits as
> follows,
> 
> commit a00953ed212a7071b152c4afccd35b92fa5a682a (HEAD ->
> release-5,
>     core: make compute_cksum function op_version compatible
> 
> commit 8fb4631c65f28dd0a5e0304386efff3c807e64a4
>     dict: handle STR_OLD data type in xdr conversions
> 
> As the current build breaks rolling upgrades, we had
> held back on
> announcing 5.4 and are now ready with the fixes that can
> be used to
> respin 5.4.
> 
> Let me know if I need to do anything more from my end
> for help with the
> packaging.
> 
> Once the build is ready, we would be testing it out as
> usual.
> 
> NOTE: As some users have picked up 5.4 the announce
> would also carry a
> notice, that they need to do a downserver upgrade to the
> latest bits
> owing to the patches that have landed in addition to the
> existing content.
> 
> Thanks,
> Shyam
> 
> On 3/5/19 8:59 AM, Shyam Ranganathan wrote:
> > On 2/27/19 5:19 AM, Niels de Vos wrote:
> >> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org wrote:
> >>> SRC:
> 
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
> >>> HASH:
> 
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
> >>
> >> Packages for the CentOS Storage SIG are now available
> for testing.
> >> Please try them out and report test results on this list.
> >>
> >>   # yum install centos-release-gluster
> >>   # yum install --enablerepo=centos-gluster5-test
> glusterfs-server
> >
> > Due to patch [1] upgrades are broken, so we are
> awaitin

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-13 Thread Shyam Ranganathan
Niels, Kaleb,

We need to respin 5.4 with the 2 additional commits as follows,

commit a00953ed212a7071b152c4afccd35b92fa5a682a (HEAD -> release-5,
core: make compute_cksum function op_version compatible

commit 8fb4631c65f28dd0a5e0304386efff3c807e64a4
dict: handle STR_OLD data type in xdr conversions

As the current build breaks rolling upgrades, we had held back on
announcing 5.4 and are now ready with the fixes that can be used to
respin 5.4.
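
Before requesting the respin, the branch state can be sanity-checked locally
(a sketch; the two commits are the ones listed above):

  $ git fetch origin
  $ git log --oneline -2 origin/release-5
  # expect the two commits above (core: make compute_cksum ... and
  # dict: handle STR_OLD ...) at the head of the branch; if so, the respin
  # is just a new tag plus a release-new job run against it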

Let me know if I need to do anything more from my end for help with the
packaging.

Once the build is ready, we would be testing it out as usual.

NOTE: As some users have picked up 5.4, the announcement will also carry a
notice that they need to do an offline (down-server) upgrade to the latest
bits, owing to the patches that have landed in addition to the existing content.

Thanks,
Shyam

On 3/5/19 8:59 AM, Shyam Ranganathan wrote:
> On 2/27/19 5:19 AM, Niels de Vos wrote:
>> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
>>
>> Packages for the CentOS Storage SIG are now available for testing.
>> Please try them out and report test results on this list.
>>
>>   # yum install centos-release-gluster
>>   # yum install --enablerepo=centos-gluster5-test glusterfs-server
> 
> Due to patch [1] upgrades are broken, so we are awaiting a fix or revert
> of the same before requesting a new build of 5.4.
> 
> The current RPMs should hence not be published.
> 
> Sanju/Hari, are we reverting this patch so that we can release 5.4, or
> are we expecting the fix to land in 5.4 (as in [2])?
> 
> Thanks,
> Shyam
> 
> [1] Patch causing regression: https://review.gluster.org/c/glusterfs/+/22148
> 
> [2] Proposed fix on master: https://review.gluster.org/c/glusterfs/+/22297/


Re: [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Shyam Ranganathan
On 3/5/19 1:17 PM, Shyam Ranganathan wrote:
> Hi,
> 
> Release-6 was to be an early March release, and due to finding bugs
> while performing upgrade testing, is now expected in the week of 18th
> March, 2019.
> 
> RC1 builds are expected this week, to contain the required fixes, next
> week would be testing our RC1 for release fitness before the release.

RC1 is tagged, and will mostly be packaged for testing by tomorrow.

Expect package details in a day or two, to aid with testing the release.

> 
> As always, request that users test the RC builds and report back issues
> they encounter, to help make the release a better quality.
> 
> Shyam


Re: [Gluster-Maintainers] [Gluster-users] Release 6: Release date update

2019-03-07 Thread Shyam Ranganathan
Bug fixes are always welcome, features or big ticket changes at this
point in the release cycle are not.

I checked the patch and it is a 2 liner in readdir-ahead, and hence I
would backport it (once it gets merged into master).

Thanks for checking,
Shyam
On 3/7/19 6:33 AM, Raghavendra Gowdappa wrote:
> I just found a fix for
> https://bugzilla.redhat.com/show_bug.cgi?id=1674412. Since its a
> deadlock I am wondering whether this should be in 6.0. What do you think?
> 
> On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> Hi,
> 
> Release-6 was to be an early March release, and due to finding bugs
> while performing upgrade testing, is now expected in the week of 18th
> March, 2019.
> 
> RC1 builds are expected this week, to contain the required fixes, next
> week would be testing our RC1 for release fitness before the release.
> 
> As always, request that users test the RC builds and report back issues
> they encounter, to help make the release a better quality.
> 
> Shyam


Re: [Gluster-Maintainers] [Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-05 Thread Shyam Ranganathan
On 3/4/19 12:33 PM, Shyam Ranganathan wrote:
> On 3/4/19 10:08 AM, Atin Mukherjee wrote:
>>
>>
>> On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
>> <atumb...@redhat.com> wrote:
>>
>> Thanks to those who participated.
>>
>> Update at present:
>>
>> We found 3 blocker bugs in upgrade scenarios, and hence have marked
>> release
>> as pending upon them. We will keep these lists updated about progress.
>>
>>
>> I’d like to clarify that upgrade testing is blocked. So just fixing
>> these test blocker(s) isn’t enough to call release-6 green. We need to
>> continue and finish the rest of the upgrade tests once the respective
>> bugs are fixed.
> 
> Based on fixes expected by tomorrow for the upgrade fixes, we will build
> an RC1 candidate on Wednesday (6-Mar) (tagging early Wed. Eastern TZ).
> This RC can be used for further testing.

There have been no backports for the upgrade failures; I request folks
working on them to post a list of bugs that need to be fixed, so they can
be tracked. (Also, ensure they are marked against the release-6
tracker [1].)

Also, we need to start writing out the upgrade guide for release-6, any
volunteers for the same?

Thanks,
Shyam

[1] Release-6 tracker bug:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0


[Gluster-Maintainers] Release 6: Release date update

2019-03-05 Thread Shyam Ranganathan
Hi,

Release-6 was to be an early March release, and due to finding bugs
while performing upgrade testing, is now expected in the week of 18th
March, 2019.

RC1 builds are expected this week, to contain the required fixes, next
week would be testing our RC1 for release fitness before the release.

As always, request that users test the RC builds and report back issues
they encounter, to help make the release a better quality.

Shyam


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-05 Thread Shyam Ranganathan
On 3/5/19 10:10 AM, Sanju Rakonde wrote:
> 
> 
> On Tue, Mar 5, 2019 at 7:29 PM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> On 2/27/19 5:19 AM, Niels de Vos wrote:
> > On Tue, Feb 26, 2019 at 02:47:30PM +,
> jenk...@build.gluster.org <mailto:jenk...@build.gluster.org> wrote:
> >> SRC:
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
> >> HASH:
> 
> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
> >
> > Packages for the CentOS Storage SIG are now available for testing.
> > Please try them out and report test results on this list.
> >
> >   # yum install centos-release-gluster
> >   # yum install --enablerepo=centos-gluster5-test glusterfs-server
> 
> Due to patch [1] upgrades are broken, so we are awaiting a fix or revert
> of the same before requesting a new build of 5.4.
> 
> The current RPMs should hence not be published.
> 
> Sanju/Hari, are we reverting this patch so that we can release 5.4, or
> are we expecting the fix to land in 5.4 (as in [2])?
> 
> 
> Shyam, I need some more time (approximately 1 day) to provide the fix. If
> we have 1 more day with us, we can wait. Or else we can revert the
> patch[1] and continue with the release.

We can wait a day, let me know tomorrow regarding the status. Thanks.

> 
> 
> Thanks,
> Shyam
> 
> [1] Patch causing regression:
> https://review.gluster.org/c/glusterfs/+/22148
> 
> [2] Proposed fix on master:
> https://review.gluster.org/c/glusterfs/+/22297/
> 
> 
> 
> -- 
> Thanks,
> Sanju


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.4 released

2019-03-05 Thread Shyam Ranganathan
On 2/27/19 5:19 AM, Niels de Vos wrote:
> On Tue, Feb 26, 2019 at 02:47:30PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/80/artifact/glusterfs-5.4.sha512sum
> 
> Packages for the CentOS Storage SIG are now available for testing.
> Please try them out and report test results on this list.
> 
>   # yum install centos-release-gluster
>   # yum install --enablerepo=centos-gluster5-test glusterfs-server

Due to patch [1] upgrades are broken, so we are awaiting a fix or revert
of the same before requesting a new build of 5.4.

The current RPMs should hence not be published.

Sanju/Hari, are we reverting this patch so that we can release 5.4, or
are we expecting the fix to land in 5.4 (as in [2])?

Thanks,
Shyam

[1] Patch causing regression: https://review.gluster.org/c/glusterfs/+/22148

[2] Proposed fix on master: https://review.gluster.org/c/glusterfs/+/22297/


Re: [Gluster-Maintainers] [Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 10:08 AM, Atin Mukherjee wrote:
> 
> 
> On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
> <atumb...@redhat.com> wrote:
> 
> Thanks to those who participated.
> 
> Update at present:
> 
> We found 3 blocker bugs in upgrade scenarios, and hence have marked
> release
> as pending upon them. We will keep these lists updated about progress.
> 
> 
> I’d like to clarify that upgrade testing is blocked. So just fixing
> these test blocker(s) isn’t enough to call release-6 green. We need to
> continue and finish the rest of the upgrade tests once the respective
> bugs are fixed.

Based on fixes expected by tomorrow for the upgrade fixes, we will build
an RC1 candidate on Wednesday (6-Mar) (tagging early Wed. Eastern TZ).
This RC can be used for further testing.

> 
> 
> 
> -Amar
> 
> On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com > wrote:
> 
> > Hi all,
> >
> > We are calling out our users, and developers to contribute in
> validating
> > ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> > upgrade, stability, and performance.
> >
> > Some of the key highlights of the release are listed in the release-notes
> > draft.
> > Please note that there are some of the features which are being
> dropped out
> > of this release, and hence making sure your setup is not going to
> have an
> > issue is critical. Also the default lru-limit option in fuse mount for
> > Inodes should help to control the memory usage of client
> processes. All the
> > good reason to give it a shot in your test setup.
> >
> > If you are developer using gfapi interface to integrate with other
> > projects, you also have some signature changes, so please make
> sure your
> > project would work with latest release. Or even if you are using a
> project
> > which depends on gfapi, report the error with new RPMs (if any).
> We will
> > help fix it.
> >
> > As part of test days, we want to focus on testing the latest upcoming
> > release i.e. GlusterFS-6, and one or the other gluster volunteers
> would be
> > there on #gluster channel on freenode to assist the people. Some
> of the key
> > things we are looking as bug reports are:
> >
> >    -
> >
> >    See if upgrade from your current version to 6.0rc is smooth,
> and works
> >    as documented.
> >    - Report bugs in process, or in documentation if you find mismatch.
> >    -
> >
> >    Functionality is all as expected for your usecase.
> >    - No issues with actual application you would run on production
> etc.
> >    -
> >
> >    Performance has not degraded in your usecase.
> >    - While we have added some performance options to the code, not
> all of
> >       them are turned on, as they have to be done based on usecases.
> >       - Make sure the default setup is at least same as your current
> >       version
> >       - Try out few options mentioned in release notes (especially,
> >       --auto-invalidation=no) and see if it helps performance.
> >    -
> >
> >    While doing all the above, check below:
> >    - see if the log files are making sense, and not flooding with some
> >       “for developer only” type of messages.
> >       - get ‘profile info’ output from old and now, and see if
> there is
> >       anything which is out of normal expectation. Check with us
> on the numbers.
> >       - get a ‘statedump’ when there are some issues. Try to make
> sense
> >       of it, and raise a bug if you don’t understand it completely.
> >
> >
> > Process expected on test days.
> >
> >    -
> >
> >    We have a tracker bug
> >    [0]
> >    - We will attach all the ‘blocker’ bugs to this bug.
> >    -
> >
> >    Use this link to report bugs, so that we have more metadata around
> >    given bugzilla.
> >    - Click Here [1]
> >
> >    The test cases which are to be tested are listed here in this sheet [2],
> >    please add, update, and keep it up-to-date to reduce duplicate
> efforts
> 
> -- 
> - Atin (atinm)
> 
> ___
> Gluster-devel mailing list

Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 8:09 AM, Hari Gowtham wrote:
> On Mon, Mar 4, 2019 at 6:18 PM Shyam Ranganathan  wrote:
>>
>> On 3/4/19 7:29 AM, Amar Tumballi Suryanarayan wrote:
>>> Thanks for testing this Hari.
>>>
>>> On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham <hgowt...@redhat.com> wrote:
>>>
>>> Hi,
>>>
>>> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
>>> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
>>>
>>> The above patch is available in release 6 and has been back-ported
>>> to 4.1 and 5.
>>> Though there isn't any release made with this patch on 4.1 and 5, if
>>> made there are a number of scenarios that will fail. Few are mentioned
>>> below:
>>>
>>>
>>> Considering there is no release with this patch in, lets not consider
>>> backporting at all.
> 
> It has been back-ported to 4 and 5 already.
> Regarding 5 we have decided to revert and make the release.
> Are we going to revert the patch for 4 or wait for the fix?

The next minor release on release-4.1 is slated for the week of 20th March,
2019. Hence we have time to get the fix in place, but I would revert the
patch anyway, so that tracking does not have to account for a possible late
arrival of the fix.

> 
>>
>> Current 5.4 release (yet to be announced and released on the CentOS SIG
>> (as testing is pending) *has* the fix. We need to revert it and rebuild
>> 5.4, so that we can make the 5.4 release (without the fix).
>>
>> Hari/Sanju are you folks already on it?
> 
> Yes, Sanju is working on the patch.

Thank you!

> 
>>
>> Shyam
> 
> 


Re: [Gluster-Maintainers] Various upgrades are Broken

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 7:29 AM, Amar Tumballi Suryanarayan wrote:
> Thanks for testing this Hari.
> 
> On Mon, Mar 4, 2019 at 5:42 PM Hari Gowtham  > wrote:
> 
> Hi,
> 
> With the patch https://review.gluster.org/#/c/glusterfs/+/21838/ the
> upgrade from 3.12 to 6, 4.1 to 6 and 5 to 6 is broken.
> 
> The above patch is available in release 6 and has been back-ported
> to 4.1 and 5.
> Though there isn't any release made with this patch on 4.1 and 5, if
> made there are a number of scenarios that will fail. Few are mentioned
> below:
> 
> 
> Considering there is no release with this patch in, lets not consider
> backporting at all. 

The current 5.4 build (yet to be announced and released on the CentOS SIG,
as testing is pending) *has* the patch. We need to revert it and rebuild
5.4, so that we can make the 5.4 release without the patch.
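
Mechanically, the revert amounts to something like this (a sketch;
<bad-commit> is a placeholder for the offending change, which is not named in
this message):

  $ git checkout release-5 && git pull --ff-only origin release-5
  $ git revert <bad-commit>    # creates a revert commit for the offending change
  # submit the revert through the usual Gerrit review flow, merge it,
  # then re-run the release job to rebuild the 5.4 packages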

Hari/Sanju are you folks already on it?

Shyam


Re: [Gluster-Maintainers] Release 5.4: Earlier than Mar-10th

2019-02-26 Thread Shyam Ranganathan
Thank you all, merging the patches and tagging the release today.

Shyam
On 2/26/19 6:50 AM, Atin Mukherjee wrote:
> Milind - response inline.
> 
> Shyam - I'm not sure if you're tracking this. But at this point any
> blocker bugs attached to 5.4 should also have clones of release-6 and
> attached to the release tracker, otherwise we'll regress!
> 
> On Tue, Feb 26, 2019 at 5:00 PM Milind Changire <mchan...@redhat.com> wrote:
> 
> On Fri, Feb 8, 2019 at 8:26 PM Shyam Ranganathan
> mailto:srang...@redhat.com>> wrote:
> 
> Hi,
> 
> There have been several crashes and issues reported by users on the
> latest 5.3 release. Around 10 patches have been merged since 5.3
> and we
> have a few more addressing the blockers against 5.4 [1].
> 
> The original date was March 10th, but we would like to
> accelerate a 5.4
> release to earlier this month, once all critical blockers are fixed.
> 
> Hence there are 3 asks in this mail,
> 
> 1) Packaging team, would it be possible to accommodate an 5.4
> release
> around mid-next week (RC0 for rel-6 is also on the same week)?
> Assuming
> we get the required fixes by then.
> 
> 2) Maintainers, what other issues need to be tracked as
> blockers? Please
> add them to [1].
> 
> 3) Current blocker status reads as follows:
> - Bug 1651246 - Failed to dispatch handler
>   - This shows 2 patches that are merged, but there is no patch that
> claims this is "Fixed" hence bug is still in POST state. What other
> fixes are we expecting on this?
>   - @Milind request you to update the status
> 
> Bug 1651246 has been addressed.
> Patch has been merged on master
> <https://review.gluster.org/c/glusterfs/+/1> as well as
> release-5 <https://review.gluster.org/c/glusterfs/+/22237> branches.
> Above patch addresses logging issue only.
> 
> 
> Isn't this something which is applicable to release-6 branch as well? I
> don't find https://review.gluster.org/#/c/glusterfs/+/1/ in
> release-6 branch which means we're going to regress this in release 6 if
> this isn't backported and marked as blocker to release 6.
> 
> 
> 
> - Bug 1671556 - glusterfs FUSE client crashing every few days with
> 'Failed to dispatch handler'
>   - Awaiting fixes for identified issues
>   - @Nithya what would be the target date?
> 
> - Bug 1671603 - flooding of "dict is NULL" logging & crash of
> client process
>   - Awaiting a fix, what is the potential target date for the same?
>   - We also need the bug assigned to a person
> 
> Bug 1671603 has been addressed.
> Patch has been posted on master
> <https://review.gluster.org/c/glusterfs/+/22126> and merged on
> release-5 <https://review.gluster.org/c/glusterfs/+/22127> branches.
> 
> 
> Are you sure the links are correct? Patch posted against release 5
> branch is abandoned? And also just like above, same question for
> release-6 branch, I don't see a patch?
> 
> Above patch addresses logging issue only.
> 
> 
> Thanks,
> Shyam
> 
> [1] Release 5.4 tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.4
> 
> 
> 
> -- 
> Milind
> 


Re: [Gluster-Maintainers] Release 5.4: Earlier than Mar-10th

2019-02-25 Thread Shyam Ranganathan
Time to reset and understand where we are here. The dates are
slipping, but the tracker still has the following blockers.

We need to either understand when these will be fixed and take a call
on releasing 5.4 with the known blocker issues unaddressed, or wait
for the fixes. Responses to the bugs and updates to Bugzilla would help
in deciding.

1) Title: glustereventsd does not start on Ubuntu 16.04 LTS
https://bugzilla.redhat.com/show_bug.cgi?id=1649054

@aravinda, any updates? Should we wait, of will this take more time?

2) Title: glusterfs FUSE client crashing every few days with 'Failed to
dispatch handler'
https://bugzilla.redhat.com/show_bug.cgi?id=1671556

All patches against the bug are merged, but bug remains in POST state,
as none of the patches claim that the issue "Fixes" the reported problem.

Are we awaiting more patches for the same? Du/Milind/Nithya?

3) Title: flooding of "dict is NULL" logging & crash of client process
https://bugzilla.redhat.com/show_bug.cgi?id=1671603

Still in NEW state, Du/Amar, do we have a date for resolution?

Thanks,
Shyam
On 2/18/19 10:13 AM, Shyam Ranganathan wrote:
> We have more blockers against 5.4 now marked. Would like to ask the list
> and understand what we want to get done with by 5.4 and what the
> timeline looks like.
> 
> The bug list:
> https://bugzilla.redhat.com/buglist.cgi?bug_id=1667103_id_type=anddependson=tvp
> 
> 1) https://bugzilla.redhat.com/show_bug.cgi?id=1649054
> Title: glustereventsd does not start on Ubuntu 16.04 LTS
> Assignee: Aravinda
> @aravinda can we understand target dates here?
> 
> 2) https://bugzilla.redhat.com/show_bug.cgi?id=1671556
> Title: glusterfs FUSE client crashing every few days with 'Failed to
> dispatch handler'
> Assignee: Nithya (working on Du and Milind)
> @du/@milind can we get an update on this bug and how far away we are?
> 
> 3) https://bugzilla.redhat.com/show_bug.cgi?id=1671603
> Title: flooding of "dict is NULL" logging & crash of client process
> Assignee: None (Amar has most of the comments)
> @amar, do we need 2 bugs here? Also, how far away is a fix?
> 
> 4) https://bugzilla.redhat.com/show_bug.cgi?id=1676356
> Title: glusterfs FUSE client crashing every few days with 'Failed to
> dispatch handler'
> Assignee: Du
> @du, we are still waiting to get
> https://review.gluster.org/c/glusterfs/+/22189 merged, right?
> 
> Shyam
> On 2/13/19 4:09 AM, Raghavendra Gowdappa wrote:
>>
>>
>> On Wed, Feb 13, 2019 at 2:24 PM Nithya Balachandran <nbala...@redhat.com> wrote:
>>
>> Adding Raghavendra G and Milind who are working on the patches so
>> they can update on when they should be ready.
>>
>> On Fri, 8 Feb 2019 at 20:26, Shyam Ranganathan <srang...@redhat.com> wrote:
>>
>> Hi,
>>
>> There have been several crashes and issues reported by users on the
>> latest 5.3 release. Around 10 patches have been merged since 5.3
>> and we
>> have a few more addressing the blockers against 5.4 [1].
>>
>> The original date was March 10th, but we would like to
>> accelerate a 5.4
>> release to earlier this month, once all critical blockers are fixed.
>>
>> Hence there are 3 asks in this mail,
>>
>> 1) Packaging team, would it be possible to accommodate an 5.4
>> release
>> around mid-next week (RC0 for rel-6 is also on the same week)?
>> Assuming
>> we get the required fixes by then.
>>
>> 2) Maintainers, what other issues need to be tracked as
>> blockers? Please
>> add them to [1].
>>
>> 3) Current blocker status reads as follows:
>> - Bug 1651246 - Failed to dispatch handler
>>   - This shows 2 patches that are merged, but there is no patch that
>> claims this is "Fixed" hence bug is still in POST state. What other
>> fixes are we expecting on this?
>>   - @Milind request you to update the status
>>
>> - Bug 1671556 - glusterfs FUSE client crashing every few days with
>> 'Failed to dispatch handler'
>>   - Awaiting fixes for identified issues
>>   - @Nithya what would be the target date?
>>
>>
>> Fix @ https://review.gluster.org/#/c/glusterfs/+/22189/
>>
>> waiting on reviews. I am hoping to get this merged by end of this week
>> and send a backport.
>>
>>
>> - Bug 1671603 - flooding of "dict is NULL" logging & crash of
>> client process

Re: [Gluster-Maintainers] glusterfs-6.0rc0 released

2019-02-25 Thread Shyam Ranganathan
Hi,

Release-6 RC0 packages are built (see mail below). This is a good time
to start testing the release bits and reporting any issues on Bugzilla.
Do post to the lists any testing done and the feedback from it.

We have about 2 weeks to GA of release-6, barring any major blockers
uncovered during the test phase. Please take this time to help improve
the release quality by testing it.

Thanks,
Shyam

NOTE: CentOS StorageSIG packages for the same are still pending and
should be available in due course.
On 2/23/19 9:41 AM, Kaleb Keithley wrote:
> 
> GlusterFS 6.0rc0 is built in Fedora 30 and Fedora 31/rawhide.
> 
> Packages for Fedora 29, RHEL 8, RHEL 7, and RHEL 6* and Debian 9/stretch
> and Debian 10/buster are at
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/6.0rc0/
> 
> Packages are signed. The public key is at
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> 
> * RHEL 6 is client-side only. Fedora 29, RHEL 7, and RHEL 6 RPMs are
> Fedora Koji scratch builds. RHEL 7 and RHEL 6 RPMs are provided here for
> convenience only, and are independent of the RPMs in the CentOS Storage SIG.
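
For EL7 testers, a minimal sketch of a repo file for pulling the RC0 bits from
the location above (the sub-directory under 6.0rc0/ is an assumption and
should be adjusted to the published layout; the signing key URL is the one
given above):

  # /etc/yum.repos.d/gluster-6.0rc0.repo (hypothetical file)
  [gluster-60rc0-test]
  name=GlusterFS 6.0rc0 QA packages
  # sub-path under 6.0rc0/ is assumed; adjust to the actual directory layout
  baseurl=https://download.gluster.org/pub/gluster/glusterfs/qa-releases/6.0rc0/RHEL/el-7/x86_64/
  enabled=1
  gpgcheck=1
  gpgkey=https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub

  # then:
  # yum install glusterfs-server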


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Branched and next steps

2019-02-20 Thread Shyam Ranganathan
On 2/20/19 7:45 AM, Amar Tumballi Suryanarayan wrote:
> 
> 
> On Tue, Feb 19, 2019 at 1:37 AM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> In preparation for RC0 I have put up an intial patch for the release
> notes [1]. Request the following actions on the same (either a followup
> patchset, or a dependent one),
> 
> - Please review!
> - Required GD2 section updated to latest GD2 status
> 
> 
> I am inclined to drop the GD2 section for 'standalone' users. As the
> team worked with goals of making GD2 invisible with containers (GCS) in
> mind. So, should we call out any features of GD2 at all?

This is fine; we possibly need to add a note in the release notes on
the GD2 future and where it will land, so that we can inform users
about the continued use of GD1 in non-GCS use cases.

I will add some text around this in the release notes.

> 
> Anyways, as per my previous email on GCS release updates, we are
> planning to have a container available with gd2 and glusterfs, which can
> be used by people who are trying out options with GD2.
>  
> 
> - Require notes on "Reduce the number or threads used in the brick
> process" and the actual status of the same in the notes
> 
> 
> This work is still in progress, and we are treating it as a bug fix for
> 'brick-multiplex' usecase, which is mainly required in scaled volume
> number usecase in container world. My guess is, we won't have much
> content to add for glusterfs-6.0 at the moment.

Ack!

>  
> 
> RC0 build target would be tomorrow or by Wednesday.
> 
> 
> Thanks, I was testing a few upgrade and mixed-version cluster scenarios.
> With 4.1.6 and the latest release-6.0 branch, things work fine. I
> haven't done much load testing yet.

Awesome! This helps write out the upgrade guide as well, as this time the
content there should cover how to upgrade if any of the deprecated
xlators are in use by a deployment.
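
For those helping with upgrade testing, the per-server sequence of a rolling
upgrade is roughly this (a sketch assuming an RPM-based install; <volname> is
a placeholder, and setups using deprecated xlators should plan an offline
upgrade instead):

  # systemctl stop glusterd
  # pkill glusterfs; pkill glusterfsd      # stop any remaining gluster processes
  # yum update glusterfs-server
  # systemctl start glusterd
  # gluster peer status                    # all peers should show connected
  # gluster volume heal <volname> info     # wait for heals to finish before the next node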

> 
> Requesting people to support in upgrade testing. From different volume
> options, and different usecase scenarios.
> 
> Regards,
> Amar
> 
>  
> 
> Thanks,
> Shyam
> 
> [1] Release notes patch: https://review.gluster.org/c/glusterfs/+/6
> 
> On 2/5/19 8:25 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Release 6 is branched, and tracker bug for 6.0 is created [1].
> >
> > Do mark blockers for the release against [1].
> >
> > As of now we are only tracking [2] "core: implement a global
> thread pool
> > " for a backport as a feature into the release.
> >
> > We expect to create RC0 tag and builds for upgrade and other testing
> > close to mid-week next week (around 13th Feb), and the release is
> slated
> > for the first week of March for GA.
> >
> > I will post updates to this thread around release notes and other
> > related activity.
> >
> > Thanks,
> > Shyam
> >
> > [1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0
> >
> > [2] Patches tracked for a backport:
> >   - https://review.gluster.org/c/glusterfs/+/20636
> 
> 
> 
> 
> -- 
> Amar Tumballi (amarts)


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Branched and next steps

2019-02-18 Thread Shyam Ranganathan
In preparation for RC0 I have put up an intial patch for the release
notes [1]. Request the following actions on the same (either a followup
patchset, or a dependent one),

- Please review!
- Required GD2 section updated to latest GD2 status
- Require notes on "Reduce the number or threads used in the brick
process" and the actual status of the same in the notes

RC0 build target would be tomorrow or by Wednesday.

Thanks,
Shyam

[1] Release notes patch: https://review.gluster.org/c/glusterfs/+/6

On 2/5/19 8:25 PM, Shyam Ranganathan wrote:
> Hi,
> 
> Release 6 is branched, and tracker bug for 6.0 is created [1].
> 
> Do mark blockers for the release against [1].
> 
> As of now we are only tracking [2] "core: implement a global thread pool
> " for a backport as a feature into the release.
> 
> We expect to create RC0 tag and builds for upgrade and other testing
> close to mid-week next week (around 13th Feb), and the release is slated
> for the first week of March for GA.
> 
> I will post updates to this thread around release notes and other
> related activity.
> 
> Thanks,
> Shyam
> 
> [1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0
> 
> [2] Patches tracked for a backport:
>   - https://review.gluster.org/c/glusterfs/+/20636


Re: [Gluster-Maintainers] Release 5.4: Earlier than Mar-10th

2019-02-18 Thread Shyam Ranganathan
More blockers are now marked against 5.4. I would like to ask the list
what we want to get done by 5.4 and what the timeline looks like.

The bug list:
https://bugzilla.redhat.com/buglist.cgi?bug_id=1667103_id_type=anddependson=tvp

1) https://bugzilla.redhat.com/show_bug.cgi?id=1649054
Title: glustereventsd does not start on Ubuntu 16.04 LTS
Assignee: Aravinda
@aravinda can we understand target dates here?

2) https://bugzilla.redhat.com/show_bug.cgi?id=1671556
Title: glusterfs FUSE client crashing every few days with 'Failed to
dispatch handler'
Assignee: Nithya (working on Du and Milind)
@du/@milind can we get an update on this bug and how far away we are?

3) https://bugzilla.redhat.com/show_bug.cgi?id=1671603
Title: flooding of "dict is NULL" logging & crash of client process
Assignee: None (Amar has most of the comments)
@amar, do we need 2 bugs here? Also, how far away is a fix?

4) https://bugzilla.redhat.com/show_bug.cgi?id=1676356
Title: glusterfs FUSE client crashing every few days with 'Failed to
dispatch handler'
Assignee: Du
@du, we are still waiting to get
https://review.gluster.org/c/glusterfs/+/22189 merged, right?

Shyam
On 2/13/19 4:09 AM, Raghavendra Gowdappa wrote:
> 
> 
> On Wed, Feb 13, 2019 at 2:24 PM Nithya Balachandran  <mailto:nbala...@redhat.com>> wrote:
> 
> Adding Raghavendra G and Milind who are working on the patches so
> they can update on when they should be ready.
> 
> On Fri, 8 Feb 2019 at 20:26, Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> Hi,
> 
> There have been several crashes and issues reported by users on the
> latest 5.3 release. Around 10 patches have been merged since 5.3
> and we
> have a few more addressing the blockers against 5.4 [1].
> 
> The original date was March 10th, but we would like to
> accelerate a 5.4
> release to earlier this month, once all critical blockers are fixed.
> 
> Hence there are 3 asks in this mail,
> 
> 1) Packaging team, would it be possible to accommodate an 5.4
> release
> around mid-next week (RC0 for rel-6 is also on the same week)?
> Assuming
> we get the required fixes by then.
> 
> 2) Maintainers, what other issues need to be tracked as
> blockers? Please
> add them to [1].
> 
> 3) Current blocker status reads as follows:
> - Bug 1651246 - Failed to dispatch handler
>   - This shows 2 patches that are merged, but there is no patch that
> claims this is "Fixed" hence bug is still in POST state. What other
> fixes are we expecting on this?
>   - @Milind request you to update the status
> 
> - Bug 1671556 - glusterfs FUSE client crashing every few days with
> 'Failed to dispatch handler'
>   - Awaiting fixes for identified issues
>   - @Nithya what would be the target date?
> 
> 
> Fix @ https://review.gluster.org/#/c/glusterfs/+/22189/
> 
> waiting on reviews. I am hoping to get this merged by end of this week
> and send a backport.
> 
> 
> - Bug 1671603 - flooding of "dict is NULL" logging & crash of
> client process
>   - Awaiting a fix, what is the potential target date for the same?
>   - We also need the bug assigned to a person
> 
> Thanks,
> Shyam
> 
> [1] Release 5.4 tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.4
> ___
> maintainers mailing list
> maintainers@gluster.org <mailto:maintainers@gluster.org>
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 5.4: Earlier than Mar-10th

2019-02-08 Thread Shyam Ranganathan
Hi,

There have been several crashes and issues reported by users on the
latest 5.3 release. Around 10 patches have been merged since 5.3 and we
have a few more addressing the blockers against 5.4 [1].

The original date was March 10th, but we would like to accelerate a 5.4
release to earlier this month, once all critical blockers are fixed.

Hence there are 3 asks in this mail,

1) Packaging team, would it be possible to accommodate a 5.4 release
around mid-next week (RC0 for rel-6 is also in the same week)? Assuming
we get the required fixes by then.

2) Maintainers, what other issues need to be tracked as blockers? Please
add them to [1].

3) Current blocker status reads as follows:
- Bug 1651246 - Failed to dispatch handler
  - This shows 2 patches that are merged, but there is no patch that
claims this is "Fixed", hence the bug is still in the POST state. What
other fixes are we expecting on this?
  - @Milind request you to update the status

- Bug 1671556 - glusterfs FUSE client crashing every few days with
'Failed to dispatch handler'
  - Awaiting fixes for identified issues
  - @Nithya what would be the target date?

- Bug 1671603 - flooding of "dict is NULL" logging & crash of client process
  - Awaiting a fix, what is the potential target date for the same?
  - We also need the bug assigned to a person

Thanks,
Shyam

[1] Release 5.4 tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.4
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 6: Branched and next steps

2019-02-05 Thread Shyam Ranganathan
Hi,

Release 6 is branched, and tracker bug for 6.0 is created [1].

Do mark blockers for the release against [1].

As of now we are only tracking [2] "core: implement a global thread
pool" as a feature backport into the release.

We expect to create RC0 tag and builds for upgrade and other testing
close to mid-week next week (around 13th Feb), and the release is slated
for the first week of March for GA.

I will post updates to this thread around release notes and other
related activity.

Thanks,
Shyam

[1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0

[2] Patches tracked for a backport:
  - https://review.gluster.org/c/glusterfs/+/20636
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Kick off!

2019-01-24 Thread Shyam Ranganathan
On 1/24/19 3:23 AM, Soumya Koduri wrote:
> Hi Shyam,
> 
> Sorry for the late response. I just realized that we had two more new
> APIs glfs_setattr/fsetattr which uses 'struct stat' made public [1]. As
> mentioned in one of the patchset review comments, since the goal is to
> move to glfs_stat in release-6, do we need to update these APIs as well
> to use the new struct? Or shall we retain them in FUTURE for now and
> address in next minor release? Please suggest.

So the goal in 6 is to not return stat but glfs_stat in the modified
pre/post stat return APIs (instead of making this a 2-step for
application consumers).

To reach glfs_stat everywhere, we have a few more things to do. I had
this patch on my radar, but just like pub_glfs_stat returns stat (hence
we made glfs_statx private), I am seeing this as "fine for now". In
the future we only want to return glfs_stat.

So for now, we let this API be. The next round of converting stat to
glfs_stat would take into account clearing up all such instances, so
that all application consumers need to modify code as required in one
shot only.

Does this answer the concern? And thanks for bringing this to notice.

> 
> Thanks,
> Soumya
> 
> [1] https://review.gluster.org/#/c/glusterfs/+/21734/
> 
> 
> On 1/23/19 8:43 PM, Shyam Ranganathan wrote:
>> On 1/23/19 6:03 AM, Ashish Pandey wrote:
>>>
>>> Following is the patch I am working and targeting -
>>> https://review.gluster.org/#/c/glusterfs/+/21933/
>>
>> This is a bug fix, and the patch size at the moment is also small in
>> lines changed. Hence, even if it misses branching the fix can be
>> backported.
>>
>> Thanks for the heads up!
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Kick off!

2019-01-23 Thread Shyam Ranganathan
On 1/23/19 6:03 AM, Ashish Pandey wrote:
> 
> Following is the patch I am working and targeting - 
> https://review.gluster.org/#/c/glusterfs/+/21933/

This is a bug fix, and the patch size at the moment is also small in
lines changed. Hence, even if it misses branching, the fix can be backported.

Thanks for the heads up!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Kick off!

2019-01-23 Thread Shyam Ranganathan
On 1/23/19 5:52 AM, RAFI KC wrote:
> There are three patches that I'm working for Gluster-6.
> 
> [1] : https://review.gluster.org/#/c/glusterfs/+/22075/

We discussed mux for shd in the maintainers meeting, and decided that
this would be for the next release, as the patchset is not ready
(branching is today, if I get the time to get it done).

> 
> [2] : https://review.gluster.org/#/c/glusterfs/+/21333/

Ack! In case this is not in by branching, we can backport the same.

> 
> [3] : https://review.gluster.org/#/c/glusterfs/+/21720/

Bug fix, can be backported post branching as well, so again ack!

Thanks for responding.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Kick off!

2019-01-18 Thread Shyam Ranganathan
On 12/6/18 9:34 AM, Shyam Ranganathan wrote:
> On 11/6/18 11:34 AM, Shyam Ranganathan wrote:
>> ## Schedule
> 
> We have decided to postpone release-6 by a month, to accommodate for
> late enhancements and the drive towards getting what is required for the
> GCS project [1] done in core glusterfs.
> 
> This puts the (modified) schedule for Release-6 as below,
> 
> Working backwards on the schedule, here's what we have:
> - Announcement: Week of Mar 4th, 2019
> - GA tagging: Mar-01-2019
> - RC1: On demand before GA
> - RC0: Feb-04-2019
> - Late features cut-off: Week of Jan-21st, 2019
> - Branching (feature cutoff date): Jan-14-2019
>   (~45 days prior to branching)

We are slightly past the branching date. I would like to branch early
next week, so please respond with a list of patches that need to be part
of the release and are still pending a merge. This will help focus
reviews on the same, and also help track them down and branch the release.

Thanks, Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-5.3 released

2019-01-18 Thread Shyam Ranganathan
On 1/17/19 11:09 AM, Niels de Vos wrote:
> On Wed, Jan 16, 2019 at 08:48:16PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/77/artifact/glusterfs-5.3.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/77/artifact/glusterfs-5.3.sha512sum
> 
> Packages for CentOS 6 & 7 are available in the testing repository.
> Please try them out and let me know when I can mark them for release to
> the CentOS mirrors.

Tested, good to release. Thanks!

> 
>   # yum install centos-release-gluster
>   # yum install glusterfs-server
> 
> Thanks,
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-4.1.7 released

2019-01-18 Thread Shyam Ranganathan
On 1/17/19 11:08 AM, Niels de Vos wrote:
> On Wed, Jan 16, 2019 at 08:49:38PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/78/artifact/glusterfs-4.1.7.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/78/artifact/glusterfs-4.1.7.sha512sum
> 
> Packages for CentOS 6 & 7 are available in the testing repository.
> Please try them out and let me know when I can mark them for release to
> the CentOS mirrors.

Tested, good to release. Thanks!

> 
>   # yum install centos-release-gluster41
>   # yum install glusterfs-fuse
> 
> Thanks,
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.2 released

2018-12-13 Thread Shyam Ranganathan
On 12/13/18 6:05 AM, Niels de Vos wrote:
> On Thu, Dec 13, 2018 at 02:42:17AM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/76/artifact/glusterfs-5.2.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/76/artifact/glusterfs-5.2.sha512sum
> 
> Packages for CentOS 6 & 7 have been built and should become available
> for testing shortly. Please try them out and  report success/failures
> back. Thanks!
> 
> # yum install centos-release-gluster
> # yum install glusterfs-server

Tested client and server bits, works fine (including upgrade from 5.1).
Please tag them for GA.

> 
> 
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 6: Kick off!

2018-12-06 Thread Shyam Ranganathan
On 11/6/18 11:34 AM, Shyam Ranganathan wrote:
> ## Schedule

We have decided to postpone release-6 by a month, to accommodate
late enhancements and the drive towards getting what is required for the
GCS project [1] done in core glusterfs.

This puts the (modified) schedule for Release-6 as below,

Working backwards on the schedule, here's what we have:
- Announcement: Week of Mar 4th, 2019
- GA tagging: Mar-01-2019
- RC1: On demand before GA
- RC0: Feb-04-2019
- Late features cut-off: Week of Jan-21st, 2019
- Branching (feature cutoff date): Jan-14-2019
  (~45 days prior to branching)
- Feature/scope proposal for the release (end date): *Dec-12-2018*

So the first date is the feature/scope proposal end date, which is next
week. Please send in enhancements that you are working on that will meet
the above schedule, so that we can track them and better ensure they get
in on time.

> 
> ## Volunteers
> This is my usual call for volunteers to run the release with me or
> otherwise, but please do consider. We need more hands this time, and
> possibly some time sharing during the end of the year owing to the holidays.

Also, taking this opportunity to call for volunteers to run the release
again. Anyone interested please do respond.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.1 released

2018-12-05 Thread Shyam Ranganathan
(top posting)

As Kaleb pointed out, packages for 5.1 are present in the CentOS
mirrors; I went ahead and created my own repo file to test the package
contents.

Packages for 5.1 worked fine and can be tagged for release in the
Storage SIG.

Thanks,
Shyam

On 11/14/18 3:52 PM, Kaleb S. KEITHLEY wrote:
> On 11/14/18 1:02 PM, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/75/artifact/glusterfs-5.1.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/75/artifact/glusterfs-5.1.sha512sum
>>
>>
>> This release is made off jenkins-release-75
> 
> glusterfs-5.1 packages for:
> 
> * Fedora 29 and 30 are in the Fedora Updates-Testing or Rawhide
> repo. Use `dnf` to install. Fedora packages will move to the Fedora
> Updates repo after a nominal testing period. Fedora 28 packages are at [1].
> 
> * RHEL X (X>7) Beta packages are at [1].
> 
> * Debian Stretch/9 and Buster/10(Sid) are on download.gluster.org at [1]
> (arm64 packages coming soon.)
> 
> * Xenial/16.04, Bionic/18.04, Cosmic/18.10, and Disco/19.04 are on
> Launchpad at [2].
> 
> * SUSE SLES12SP4, Tumbleweed, SLES15, and Leap15 will be on OpenSUSE
> Build Service at [3] soon.
> 
> I will update the .../LATEST symlinks soon.
> 
> [1] https://download.gluster.org/pub/gluster/glusterfs/5
> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-5
> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> 
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.1.6 released

2018-11-19 Thread Shyam Ranganathan
On 11/15/2018 05:53 AM, Niels de Vos wrote:
> On Wed, Nov 14, 2018 at 06:00:57PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/74/artifact/glusterfs-4.1.6.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/74/artifact/glusterfs-4.1.6.sha512sum
> 
> Packages for the CentOS Storage SIG are ready for testing. Please let me
> know when these can be marked for releasing to the mirrors.
> 
>   # yum install centos-release-gluster41
>   # yum install glusterfs-server

Tested 4.1.6 (upgrade from 4.1.5 and basic FUSE tests). Packages are
fine, and can be marked for release.
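
For anyone repeating this kind of package verification, a minimal FUSE
smoke test along the following lines is usually enough. This is only a
sketch; the server, brick path, volume name and mount point below are
made-up examples, not the exact ones used in this test run:

  # create and start a small test volume, then exercise it over FUSE
  gluster volume create testvol server1:/bricks/brick1/testvol
  gluster volume start testvol
  mkdir -p /mnt/testvol
  mount -t glusterfs server1:/testvol /mnt/testvol
  echo "smoke test" > /mnt/testvol/hello.txt
  cat /mnt/testvol/hello.txt
  umount /mnt/testvol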

> 
> Note that syncing the builds to the mirrors will need to wait until
> CentOS 7.6. is released. ETA is not known yet, but is expected to be
> 'soonish'.

Niels, I guess release-5 is still not available, right?

> 
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Fwd: Bug#913677: glusterfs-client: Hard link limit?

2018-11-14 Thread Shyam Ranganathan
Are they hitting the setting introduced in 4.0 that limits the # of
hardlinks per file? [1]

To check if that is true, the brick logs should contain the log message
[2] when they hit this option-induced limit. Also, inspecting the "Links"
output for the file in question would help determine how many links it
already has.
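
A quick way to check both sides of this, assuming access to a brick and
to the gluster CLI (the brick path and volume name below are
placeholders, and storage.max-hardlinks is the option name as I recall
it from the 4.0 release notes):

  # on a brick, see how many hard links the file already has
  stat -c '%h %n' /bricks/brick1/vol/path/to/full.jpg
  # query the current posix hard link limit (default is 100)
  gluster volume get <volname> storage.max-hardlinks
  # raise it if the workload legitimately needs more links per inode
  gluster volume set <volname> storage.max-hardlinks 200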

If it is local-FS specific, they may hit this message instead [3].

Although, it could also be an incarnation of the extended attribute size
limit, since we store hard link names for GFID backpointers in xattrs, as
in this bug [4]. This should not occur on XFS (which is what the user
reports using), but it is useful to keep in check.

Hth,
Shyam

[1] option introduced to control max hardlinks:
https://github.com/gluster/glusterfs/blob/release-4.0/doc/release-notes/4.0.0.md#5-add-option-in-posix-to-limit-hardlinks-per-inode

[2] Error message when posix xlator detects the max-hardlink is being
exceeded on the brick logs:
https://github.com/gluster/glusterfs/blob/release-4.1/xlators/storage/posix/src/posix-entry-ops.c#L1921

[3] Error message if creating the hardlink is failing on the local-FS on
the brick logs:
https://github.com/gluster/glusterfs/blob/release-4.1/xlators/storage/posix/src/posix-entry-ops.c#L1948

[4] Running out of xattr space:
https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c5
On 11/14/2018 07:10 AM, Patrick Matthäi wrote:
> Hello,
> 
> can anyone help with this issue here?:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=913677
> 
> 
> 
>  Forwarded Message 
> Subject:  Bug#913677: glusterfs-client: Hard link limit?
> Resent-Date:  Wed, 14 Nov 2018 11:27:02 +
> Resent-From:  Dean Hamstead 
> Resent-To:    debian-bugs-d...@lists.debian.org
> Resent-CC:    Patrick Matthäi 
> Date: Wed, 14 Nov 2018 22:09:12 +1100
> From: Dean Hamstead 
> Reply-To:     Dean Hamstead , 913...@bugs.debian.org
> To:   pmatth...@debian.org, Dean Hamstead ,
> 913...@bugs.debian.org
> 
> 
> 
> xfs in this case
> 
> On 14/11/18 7:18 pm, Patrick Matthäi wrote:
>> On 14.11.2018 at 01:09, Dean Hamstead wrote:
>>> Package: glusterfs-client
>>> Version: 4.1.5-1~bpo9+1
>>> Severity: normal
>>>
>>> Dear Maintainer,
>>>
>>> I've not been able to determine if this is a fuse limit, a gluster
>>> limit, or something else entirely.
>>>
>>> Anyway, I have been rsync'ing from a $commercial NFS over to a new shiny
>>> gluster volume.
>>>
>>> However it seems there is hard link limit I have hit.
>>>
>>>  From rsync:
>>>
>>> rsync: link
>>> "/opt/nximages/pi/ae/266c042d7ec532c9fb8bf9f81d1012512e86b5-26253/.full.jpg.23309"
>>> => 02/f6b1b71161fbd3900b5b04cc0eca8ba5ec5efd-26253/full.jpg failed: Too
>>> many links (31)
>>> rsync: link
>>> "/opt/nximages/pi/ae/266c042d7ec532c9fb8bf9f81d1012512e86b5-26253/.merch.jpg.23309"
>>> => 02/f6b1b71161fbd3900b5b04cc0eca8ba5ec5efd-26253/merch.jpg failed: Too
>>> many links (31)
>> Hi,
>>
>> do you use xfs or ext4 as filesystem?
>>
> 
> 
> 
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 4.1.7 & 5.2

2018-11-14 Thread Shyam Ranganathan
Hi,

As 4.1.6 and 5.1 are now tagged and off to packaging, announcing the
tracker and dates for the next minor versions of these stable releases.

4.1.7:
- Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-4.1.7
- Deadline for fixes: 2019-01-21

5.2:
- Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.2
- Deadline for fixes: 2018-12-10

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-06 Thread Shyam Ranganathan
On 11/06/2018 12:22 PM, Shyam Ranganathan wrote:
> On 11/06/2018 10:27 AM, Kaleb S. KEITHLEY wrote:
>> On 11/6/18 9:27 AM, Shyam Ranganathan wrote:
>>> Here is the way I see it,
>>> - If you find a bug on master and want to know if it is
>>> present/applicable for a release, you chase it's clone against the release
>>> - The state of the cloned bug against the release, tells you if is is
>>> CURRENTRELEASE/NEXTRELEASE/or what not.
>>>
>>> So referring to the bug on master, to determine state on which
>>> release(s) it is fixed in is not the way to find fixed state.
>>>
>>> As a result,
>>> - A bug on master with NEXTRELEASE means next major release of master.
>>>
>>> - A Bug on a release branch with NEXTRELEASE means, next major/minor
>>> release of the branch.
>>>
>>
>> For the record: I'm not in love with that.
>>
>> I'd prefer that a BZ for a release branch get closed with CURRENTRELEASE
>> (meaning 4.1, 5, 6, etc.) and Fixed In: set to the specific version,
>> 4.1.6 or 5.1, etc.
> 
> Yes, when the release is made, it will be closed CURRENTRELEASE with the
> fixed in set as the release, as it happens today. (also see similar
> response to Atin's question)
> 
>>
>> If a BZ on a release branch never gets fixed during the lifetime of that
>> branch (e.g. 4.1) it could/should be set to CLOSED/NEXTRELEASE (meaning
>> 5 or later) when 4.1 reaches EOL. It could also be set to CLOSED/EOL,
>> but that implies, perhaps, that it won't be ever be fixed. I'd reserve
>> CLOSED/EOL for new bugs filed against versions that have already reached
>> EOL. (Clone the BZ to an active version if the bug exists there.)

Adding some more on EOL.

What happened for EOLs before 3.12 was that all bugs that were not
CLOSED were marked CLOSED-EOL. This was a problem, as some bugs were not
even triaged or looked at.

From 3.12 onwards we are looking at bugs that are still OPEN and moving
them to a "found in" of master if they were not triaged or followed up to
satisfaction. Bugs that have workarounds, or requests for data, etc., are
closed EOL with a request to reproduce on a supported release and
reopen the bug as appropriate. As is obvious, there is some manual work
involved here.

So, in short, CLOSED-EOL is used for older bugs as well, when they go
nowhere in terms of root-causing.

> 
> The definition of NEXTRELEASE here is a question as I see it.
> 
> I assumed next release of the found in version, I think you mean next
> release than the found in version. IOW, if bug is against 4.1 and marked
> NEXTRELEASE it means fixed in 4.1.next or above as I understand it,
> whereas your understanding is 5.x or above, right?
> 
> I do not know which is the right interpretation, not finding
> documentation for the same at present.
> 
>>
>> I thought we were following kernel semantics. What are the kernel
>> semantics? Where are they described?
>>
>> --
>>
>> Kaleb
>>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-06 Thread Shyam Ranganathan
On 11/06/2018 10:27 AM, Kaleb S. KEITHLEY wrote:
> On 11/6/18 9:27 AM, Shyam Ranganathan wrote:
>> Here is the way I see it,
>> - If you find a bug on master and want to know if it is
>> present/applicable for a release, you chase it's clone against the release
>> - The state of the cloned bug against the release, tells you if is is
>> CURRENTRELEASE/NEXTRELEASE/or what not.
>>
>> So referring to the bug on master, to determine state on which
>> release(s) it is fixed in is not the way to find fixed state.
>>
>> As a result,
>> - A bug on master with NEXTRELEASE means next major release of master.
>>
>> - A Bug on a release branch with NEXTRELEASE means, next major/minor
>> release of the branch.
>>
> 
> For the record: I'm not in love with that.
> 
> I'd prefer that a BZ for a release branch get closed with CURRENTRELEASE
> (meaning 4.1, 5, 6, etc.) and Fixed In: set to the specific version,
> 4.1.6 or 5.1, etc.

Yes, when the release is made, it will be closed CURRENTRELEASE with the
fixed-in set to the release, as happens today (also see the similar
response to Atin's question).

> 
> If a BZ on a release branch never gets fixed during the lifetime of that
> branch (e.g. 4.1) it could/should be set to CLOSED/NEXTRELEASE (meaning
> 5 or later) when 4.1 reaches EOL. It could also be set to CLOSED/EOL,
> but that implies, perhaps, that it won't be ever be fixed. I'd reserve
> CLOSED/EOL for new bugs filed against versions that have already reached
> EOL. (Clone the BZ to an active version if the bug exists there.)

The definition of NEXTRELEASE here is a question as I see it.

I assumed the next release of the found-in version; I think you mean the
next release after the found-in version. IOW, if a bug is against 4.1 and
marked NEXTRELEASE, it means fixed in 4.1.next or above as I understand
it, whereas your understanding is 5.x or above, right?

I do not know which is the right interpretation; I am not finding
documentation for the same at present.

> 
> I thought we were following kernel semantics. What are the kernel
> semantics? Where are they described?
> 
> --
> 
> Kaleb
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-06 Thread Shyam Ranganathan
On 11/06/2018 11:44 AM, Atin Mukherjee wrote:
> 
> 
> On Tue, 6 Nov 2018 at 19:57, Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 11/06/2018 09:20 AM, Atin Mukherjee wrote:
> >
> >
> > On Tue, Nov 6, 2018 at 7:16 PM Shyam Ranganathan
> mailto:srang...@redhat.com>
> > <mailto:srang...@redhat.com <mailto:srang...@redhat.com>>> wrote:
> >
> >     On 11/05/2018 07:00 PM, Atin Mukherjee wrote:
> >     > Bit late to this, but I’m in favour of the proposal.
> >     >
> >     > The script change should only consider transitioning the bug
> >     status from
> >     > POST to CLOSED NEXTRELEASE on master branch only. What’d be
> also ideal
> >     > is to update the fixed in version in which this patch will land.
> >
> >     2 things, based on my response to this thread,
> >
> >     - Script will change this bug state for all branches, not just
> master. I
> >     do not see a reason to keep master special.
> >
> >     - When moving the state to NEXTRELEASE I would not want to put
> in a
> >     fixed in version yet, as that may change/morph, instead it
> would be
> >     added (as it is now) when the release is made and the bug
> changed to
> >     CURRENTRELEASE.
> >
> >
> > I can buy in the point of having the other branches also follow
> the same
> > rule of bug status moving to NEXTRELEASE from POST (considering we're
> > fine to run a script during the release of mass moving them to
> > CURRENTRELEASE) but not having the fixed in version in the bugs which
> > are with mainline branch may raise a question/concern on what exact
> > version this bug is being addressed at? Or is it that the post release
> > bug movement script also considers all the bugs fixed in the master
> > branch as well?
> 
> Here is the way I see it,
> - If you find a bug on master and want to know if it is
> present/applicable for a release, you chase it's clone against the
> release
> - The state of the cloned bug against the release, tells you if is is
> CURRENTRELEASE/NEXTRELEASE/or what not.
> 
> So referring to the bug on master, to determine state on which
> release(s) it is fixed in is not the way to find fixed state.
> 
> 
> Question : With this workflow what happens when a bug is just filed &
> fixed only in master and comes as a fix to the next release as part of
> branch out? So how would an user understand what release version is the
> fix in if we don’t have a fixed in version?

I think the workflow is explained in my other longer mail, but for this
question, the bug is moved from NEXTRELEASE->CURRENTRELEASE and the
fixed in milestone is set. This happens even today, with bugs fixed in
master that stay at MODIFIED and get CLOSED-CURRENTRELEASE with the
fixed in milestone set to the release.

> 
> 
> 
> As a result,
> - A bug on master with NEXTRELEASE means next major release of master.
> 
> - A Bug on a release branch with NEXTRELEASE means, next major/minor
> release of the branch.
> 
> >
> >
> >     In all, the only change is the already existing script moving
> a bug from
> >     POST to CLOSED-NEXTRELEASE instead of MODIFIED.
> >
> >     >
> >     > On Mon, 5 Nov 2018 at 21:39, Yaniv Kaul  <mailto:yk...@redhat.com>
> >     <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>
> >     > <mailto:yk...@redhat.com <mailto:yk...@redhat.com>
> <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>>> wrote:
> >     >
> >     >
> >     >
> >     >     On Mon, Nov 5, 2018 at 5:05 PM Sankarshan Mukhopadhyay
> >     >      <mailto:sankarshan.mukhopadh...@gmail.com>
> >     <mailto:sankarshan.mukhopadh...@gmail.com
> <mailto:sankarshan.mukhopadh...@gmail.com>>
> >     >     <mailto:sankarshan.mukhopadh...@gmail.com
> <mailto:sankarshan.mukhopadh...@gmail.com>
> >     <mailto:sankarshan.mukhopadh...@gmail.com
> <mailto:sankarshan.mukhopadh...@gmail.com>>>> wrote:
> >     >
> >     >         On Mon, Nov 5, 2018 at 8:14 PM Yaniv Kaul
> >     mailto:yk...@redhat.com>
> <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>
> >     >         <mailto:yk...@redhat.com <mailto:yk..

[Gluster-Maintainers] Release 6: Kick off!

2018-11-06 Thread Shyam Ranganathan
Hi,

With release-5 out of the door, it is time to start some activities for
release-6.

## Scope
It is time to collect and determine scope for the release, so as usual,
please send in features/enhancements that you are working towards
reaching maturity for this release to the devel list, and mark/open the
github issue with the required milestone [1].

At a broader scale, in the maintainers meeting we discussed the
enhancement wish list as in [2].

Other than the above, we are continuing with our quality focus and would
want to see a downward trend (or near-zero) in the following areas,
- Coverity
- clang
- ASAN

We would also like to tighten our nightly testing health, and would
ideally not want to have tests retry and pass in the second attempt on
the testing runs. Towards this, we would send in reports of retried and
failed tests, that need attention and fixes as required.

## Schedule
NOTE: Schedule is going to get heavily impacted due to end of the year
holidays, but we will try to keep it up as much as possible.

Working backwards on the schedule, here's what we have:
- Announcement: Week of Feb 4th, 2019
- GA tagging: Feb-01-2019
- RC1: On demand before GA
- RC0: Jan-02-2019
- Late features cut-off: Week of Dec-24th, 2018
- Branching (feature cutoff date): Dec-17-2018
  (~45 days prior to branching)
- Feature/scope proposal for the release (end date): Nov-21-2018

## Volunteers
This is my usual call for volunteers to run the release with me or
otherwise, but please do consider. We need more hands this time, and
possibly some time sharing during the end of the year owing to the holidays.

Thanks,
Shyam

[1] Release-6 github milestone:
https://github.com/gluster/glusterfs/milestone/8

[2] Release-6 enhancement wishlist:
https://hackmd.io/sP5GsZ-uQpqnmGZmFKuWIg#
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-06 Thread Shyam Ranganathan
On 11/06/2018 09:20 AM, Atin Mukherjee wrote:
> 
> 
> On Tue, Nov 6, 2018 at 7:16 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 11/05/2018 07:00 PM, Atin Mukherjee wrote:
> > Bit late to this, but I’m in favour of the proposal.
> >
> > The script change should only consider transitioning the bug
> status from
> > POST to CLOSED NEXTRELEASE on master branch only. What’d be also ideal
> > is to update the fixed in version in which this patch will land.
> 
> 2 things, based on my response to this thread,
> 
> - Script will change this bug state for all branches, not just master. I
> do not see a reason to keep master special.
> 
> - When moving the state to NEXTRELEASE I would not want to put in a
> fixed in version yet, as that may change/morph, instead it would be
> added (as it is now) when the release is made and the bug changed to
> CURRENTRELEASE.
> 
> 
> I can buy in the point of having the other branches also follow the same
> rule of bug status moving to NEXTRELEASE from POST (considering we're
> fine to run a script during the release of mass moving them to
> CURRENTRELEASE) but not having the fixed in version in the bugs which
> are with mainline branch may raise a question/concern on what exact
> version this bug is being addressed at? Or is it that the post release
> bug movement script also considers all the bugs fixed in the master
> branch as well?

Here is the way I see it,
- If you find a bug on master and want to know if it is
present/applicable for a release, you chase its clone against the release
- The state of the cloned bug against the release tells you if it is
CURRENTRELEASE/NEXTRELEASE/or what not.

So referring to the bug on master to determine which release(s) it is
fixed in is not the way to find the fixed state.

As a result,
- A bug on master with NEXTRELEASE means next major release of master.

- A Bug on a release branch with NEXTRELEASE means, next major/minor
release of the branch.

> 
> 
> In all, the only change is the already existing script moving a bug from
> POST to CLOSED-NEXTRELEASE instead of MODIFIED.
> 
> >
> > On Mon, 5 Nov 2018 at 21:39, Yaniv Kaul  <mailto:yk...@redhat.com>
> > <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>> wrote:
> >
> >
> >
> >     On Mon, Nov 5, 2018 at 5:05 PM Sankarshan Mukhopadhyay
> >      <mailto:sankarshan.mukhopadh...@gmail.com>
> >     <mailto:sankarshan.mukhopadh...@gmail.com
> <mailto:sankarshan.mukhopadh...@gmail.com>>> wrote:
> >
> >         On Mon, Nov 5, 2018 at 8:14 PM Yaniv Kaul
> mailto:yk...@redhat.com>
> >         <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>> wrote:
> >         > On Mon, Nov 5, 2018 at 4:28 PM Niels de Vos
> mailto:nde...@redhat.com>
> >         <mailto:nde...@redhat.com <mailto:nde...@redhat.com>>> wrote:
> >         >>
> >         >> On Mon, Nov 05, 2018 at 05:31:26PM +0530, Pranith Kumar
> >         Karampuri wrote:
> >         >> > hi,
> >         >> >     When we create a bz on master and clone it to the
> next
> >         release(In my
> >         >> > case it was release-5.0), after that release happens
> can we
> >         close the bz on
> >         >> > master with CLOSED NEXTRELEASE?
> >         >
> >         >
> >         > Since no one is going to verify it (right now, but I'm
> hopeful
> >         this will change in the future!), no point in keeping it open.
> >         > You could keep it open and move it along the process,
> and then
> >         close it properly when you release the next release.
> >         > It's kinda pointless if no one's going to do anything
> with it
> >         between MODIFIED to CLOSED.
> >         > I mean - assuming you move it to ON_QA - who's going to
> do the
> >         verification?
> >         >
> >         > In oVirt, QE actually verifies upstream bugs, so there is
> >         value. They are also all appear in the release notes, with
> their
> >         status and so on.
> >
> >         The Glusto framework is intended to accomplish this end,
> is it not?
> >
> >
> >     If the developer / QE engineer developed a test case for that BZ -
> >     that would b

Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-06 Thread Shyam Ranganathan
On 11/05/2018 07:00 PM, Atin Mukherjee wrote:
> Bit late to this, but I’m in favour of the proposal.
> 
> The script change should only consider transitioning the bug status from
> POST to CLOSED NEXTRELEASE on master branch only. What’d be also ideal
> is to update the fixed in version in which this patch will land.

Two things, based on my response to this thread:

- The script will change this bug state for all branches, not just master.
I do not see a reason to keep master special.

- When moving the state to NEXTRELEASE, I would not want to put in a
fixed-in version yet, as that may change/morph; instead it would be
added (as it is now) when the release is made and the bug is changed to
CURRENTRELEASE.

In all, the only change is the already existing script moving a bug from
POST to CLOSED-NEXTRELEASE instead of MODIFIED.

> 
> On Mon, 5 Nov 2018 at 21:39, Yaniv Kaul  > wrote:
> 
> 
> 
> On Mon, Nov 5, 2018 at 5:05 PM Sankarshan Mukhopadhyay
>  > wrote:
> 
> On Mon, Nov 5, 2018 at 8:14 PM Yaniv Kaul  > wrote:
> > On Mon, Nov 5, 2018 at 4:28 PM Niels de Vos  > wrote:
> >>
> >> On Mon, Nov 05, 2018 at 05:31:26PM +0530, Pranith Kumar
> Karampuri wrote:
> >> > hi,
> >> >     When we create a bz on master and clone it to the next
> release(In my
> >> > case it was release-5.0), after that release happens can we
> close the bz on
> >> > master with CLOSED NEXTRELEASE?
> >
> >
> > Since no one is going to verify it (right now, but I'm hopeful
> this will change in the future!), no point in keeping it open.
> > You could keep it open and move it along the process, and then
> close it properly when you release the next release.
> > It's kinda pointless if no one's going to do anything with it
> between MODIFIED to CLOSED.
> > I mean - assuming you move it to ON_QA - who's going to do the
> verification?
> >
> > In oVirt, QE actually verifies upstream bugs, so there is
> value. They are also all appear in the release notes, with their
> status and so on.
> 
> The Glusto framework is intended to accomplish this end, is it not?
> 
> 
> If the developer / QE engineer developed a test case for that BZ -
> that would be amazing!
> Y.
> ___
> maintainers mailing list
> maintainers@gluster.org 
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
> -- 
> - Atin (atinm)
> 
> 
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Bug state change proposal based on the conversation on bz 1630368

2018-11-05 Thread Shyam Ranganathan
On 11/05/2018 09:43 AM, Yaniv Kaul wrote:
> 
> 
> On Mon, Nov 5, 2018 at 4:28 PM Niels de Vos  > wrote:
> 
> On Mon, Nov 05, 2018 at 05:31:26PM +0530, Pranith Kumar Karampuri wrote:
> > hi,
> >     When we create a bz on master and clone it to the next
> release(In my
> > case it was release-5.0), after that release happens can we close
> the bz on
> > master with CLOSED NEXTRELEASE?
> 
> 
> Since no one is going to verify it (right now, but I'm hopeful this will
> change in the future!), no point in keeping it open.
> You could keep it open and move it along the process, and then close it
> properly when you release the next release.
> It's kinda pointless if no one's going to do anything with it between
> MODIFIED to CLOSED.
> I mean - assuming you move it to ON_QA - who's going to do the verification?

The link provided by Niels is the "proper" process, but there are a few
gotchas here (which are noted in the comments provided in this mail as
well),

- Moving from MODIFIED to ON_QA assumes/needs packages to be made
available; these are made available only when we prepare for the
release, else bug reporters or QE need to use nightly builds to verify
the fix

- Further, once in ON_QA, these bugs are not getting verified, as Yaniv
states, so moving them out of the ON_QA state would not happen, and the
bug would stay in limbo there till the release is made with the
unverified(?) fix

Here is what happens automatically at present,

- Bugs move to POST and MODIFIED states as patches against the same are
posted and then merged (with the patch commit message stating it "Fixes"
and not just "Updates" the bug)

- From here on, when the bug lands in a release and the release notes
are prepared to notify that the said bugs are fixed, these bugs are
moved to CLOSED-CURRENTRELEASE (using the release tools scripts [2])

The tool moving the bug to the CLOSED state is, in reality, there to
catch any bugs that are not in the right state. Ideally it would be
correct to only move bugs that are VERIFIED to the closed state, but
again, as stated, the current manner of dealing with bugs does not
include a verification step.

So the time a bug spends between MODIFIED and CLOSED indicates that the
fix is merged (into the branch against which the bug is filed) and is
awaiting a release.

Instead the suggestion is to reflect that state more clearly as
CLOSED-NEXTRELEASE.

The automation hence can change to the following,

- Do not move to MODIFIED when the patch is merged, but move it to
CLOSED-NEXTRELEASE

- The release tools would change these to CLOSED-CURRENTRELEASE with the
"fixed in" version set right, when the release is made

The change would be consistent for bugs against master and against
release branches. If we need to specialize this for bugs on master to
move to only MODIFIED till the fix is merged into a release branch, that
calls for more/changed automation and also a definition of what
NEXTRELEASE means when a bug is filed against a branch.

IMO, a bug on master marked NEXTRELEASE means the fix lands when the next
major release is made, and a bug on a branch marked NEXTRELEASE means it
lands when the next major (in the time between branching and GA/.0 of the
branch) or minor release of the branch is made.

If we go with the above, the only automation change is to not move bugs
to MODIFIED, but just push them to CLOSED-NEXTRELEASE instead.

Given the current lack of verification, this change is possible.
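
As a rough illustration of what the script change amounts to (this uses
the python-bugzilla CLI; the bug number and comment text are made up,
and the exact invocation used by the release tools may differ):

  # when a patch carrying "Fixes: bz#1234567" is merged on a branch
  bugzilla modify 1234567 --close NEXTRELEASE \
      --comment "Patch merged on release-5; will ship in the next release"
  # later, when the release tooling runs at release time
  bugzilla modify 1234567 --close CURRENTRELEASE --fixed_in glusterfs-5.2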

Thoughts?

> 
> In oVirt, QE actually verifies upstream bugs, so there is value. They
> are also all appear in the release notes, with their status and so on.
> Y.
> 
> 
> Yes, I think that can be done. Not sure what the advantage is, an
> explanation for this suggestion would be nice :)
> 
> I am guessing it will be a little clearer for users that find the
> CLOSED/NEXTRELEASE bug? It would need the next major version in the
> "fixed in version" field too though (or 'git describe' after merging).
> 
> If this gets done, someone will need to update the bug report lifecycle
> at
> 
> https://docs.gluster.org/en/latest/Contributors-Guide/Bug-report-Life-Cycle/
> 
> Uhmm, actually, that page already mentions CLOSED/NEXTRELEASE!
> 
> Niels
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA tomorrow!

2018-10-22 Thread Shyam Ranganathan
On 10/17/2018 07:47 PM, Shyam Ranganathan wrote:
> On 10/15/2018 02:29 PM, Shyam Ranganathan wrote:
>> On 10/11/2018 11:25 AM, Shyam Ranganathan wrote:
>>> So we are through with a series of checks and tasks on release-5 (like
>>> ensuring all backports to other branches are present in 5, upgrade
>>> testing, basic performance testing, Package testing, etc.), but still
>>> need the following resolved else we stand to delay the release GA
>>> tagging, which I hope to get done over the weekend or by Monday 15th
>>> morning (EDT).
>>>
>>> 1) Fix for libgfapi-python related blocker on Gluster:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1630804
>>>
>>> @ppai, who needs to look into this?
>> Du has looked into this, but resolution is still pending, and release
>> still awaiting on this being a blocker.
> Fix is backported and awaiting regression scores, before we merge and
> make a release (tomorrow!).
> 
> @Kaushal, if we tag GA tomorrow EDT, would it be possible to tag GD2
> today, for the packaging team to pick the same up?
> 

@GD2 team, can someone tag/branch GD2 for release-5? Else we are stuck
with the RC1 tag for the same.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.0 released

2018-10-21 Thread Shyam Ranganathan
On 10/19/2018 05:01 AM, Niels de Vos wrote:
> On Thu, Oct 18, 2018 at 02:14:42PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/73/artifact/glusterfs-5.0.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/73/artifact/glusterfs-5.0.sha512sum
> 
> Packages for CentOS 6 and 7 are ready for testing. Please pass along any
> results of your tests. If al goes well, we might be able to release (and
> announce to CentOS lists) early next week.
> 
> 1. install centos-release-gluster5:
>- for CentOS-6: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el6.centos/noarch/centos-release-gluster5-0.9-1.el6.centos.noarch.rpm
>- for CentOS-7: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el7.centos/noarch/centos-release-gluster5-0.9-1.el7.centos.noarch.rpm
> 
># yum install ${CENTOS_RELEASE_GLUSTER5_URL}
> 
> 2. the centos-gluster5-test repository should be enabled by default, so
> 
># yum install glusterfs-fuse
> 
> 3. report back to this email

Tested, installs the required version as needed, Thanks.

> 
> 
> Thanks!
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA Tagged and release tarball generated

2018-10-18 Thread Shyam Ranganathan
GA tagging done and release tarball is generated.

5.1 release tracker is now open for blockers against the same:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.1

5.x minor releases are set to release on the 10th of every month, jFYI
(the release schedule page on the website has been updated accordingly).

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA tomorrow!

2018-10-17 Thread Shyam Ranganathan
On 10/15/2018 02:29 PM, Shyam Ranganathan wrote:
> On 10/11/2018 11:25 AM, Shyam Ranganathan wrote:
>> So we are through with a series of checks and tasks on release-5 (like
>> ensuring all backports to other branches are present in 5, upgrade
>> testing, basic performance testing, Package testing, etc.), but still
>> need the following resolved else we stand to delay the release GA
>> tagging, which I hope to get done over the weekend or by Monday 15th
>> morning (EDT).
>>
>> 1) Fix for libgfapi-python related blocker on Gluster:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1630804
>>
>> @ppai, who needs to look into this?
> 
> Du has looked into this, but resolution is still pending, and release
> still awaiting on this being a blocker.

Fix is backported and awaiting regression scores, before we merge and
make a release (tomorrow!).

@Kaushal, if we tag GA tomorrow EDT, would it be possible to tag GD2
today, for the packaging team to pick the same up?

> 
>>
>> 2) Release notes for options added to the code (see:
>> https://lists.gluster.org/pipermail/gluster-devel/2018-October/055563.html )
>>
>> @du, @krutika can we get some text for the options referred in the mail
>> above?
> 
> Inputs received and release notes updated:
> https://review.gluster.org/c/glusterfs/+/21421

Last chance to add review comments to the release notes!

> 
>>
>> 3) Python3 testing
>> - Heard back from Kotresh on geo-rep passing and saw that we have
>> handled cliutils issues
>> - Anything more to cover? (@aravinda, @kotresh, @ppai?)
>> - We are attempting to get a regression run on a Python3 platform, but
>> that maybe a little ways away from the release (see:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1638030 )
>>
>> Request attention to the above, to ensure we are not breaking things
>> with the release.
>>
>> Thanks,
>> Shyam
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> https://lists.gluster.org/mailman/listinfo/maintainers
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Fwd: [Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-17 Thread Shyam Ranganathan
Packaging team,

The arbiter stale lock issue seems to be biting a few folks (as per the
traffic on the lists).

Can we make a one-off 4.1 release in October for the same?

Thanks,
Shyam


 Forwarded Message 
Subject: Re: [Gluster-users] Announcing Glusterfs release 3.12.15 (Long
Term Maintenance)
Date: Wed, 17 Oct 2018 09:18:25 -0400
From: Shyam Ranganathan 
To: Dmitry Melekhov , gluster-us...@gluster.org

On 10/17/2018 12:21 AM, Dmitry Melekhov wrote:
> #1637989 <https://bugzilla.redhat.com/1637989>: data-self-heal in
> arbiter volume results in stale locks.
> 
> 
> Could you tell me, please, when 4.1 with fix will be released?

Tagging the release is on the 20th of the month [1]; packages should be
available 2-3 days after that.

Also, after the first 3-4 minor releases, the release schedule was
changed to a minor release every 2 months [2]. This puts the next 4.1
release in November [3].

This bug seems critical enough to make a release earlier, so we may make
an out-of-band release next week (the 20th of October), after discussing
the same with the packaging team (we will update the list once we make a
decision).

[1] Release schedule: https://www.gluster.org/release-schedule/
[2] Release cadence announce:
https://lists.gluster.org/pipermail/announce/2018-July/000103.html
[3] Next 4.1 release in the release notes:
https://docs.gluster.org/en/latest/release-notes/4.1.5/

> 
> Thank you!
> 
> 
> 
> 
> On 16.10.2018 at 19:41, Jiffin Tony Thottan wrote:
>>
>> The Gluster community is pleased to announce the release of Gluster
>> 3.12.15 (packages available at [1,2,3]).
>>
>> Release notes for the release can be found at [4].
>>
>> Thanks,
>> Gluster community
>>
>>
>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.15/
>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
>> <https://launchpad.net/%7Egluster/+archive/ubuntu/glusterfs-3.12>
>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>> [4] Release notes:
>> https://gluster.readthedocs.io/en/latest/release-notes/3.12.15/
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS Project Update - Week 1&2 of Oct

2018-10-16 Thread Shyam Ranganathan
This is a once-in-two-weeks update on activities around the glusterfs
project [1]. It is intended to provide the community with updates on
progress around key initiatives and also to reiterate the current goals
that the project is working towards.

It should also help contributors pick and address key areas that are in
focus, and help the community provide feedback and raise flags on items
that need attention.

1. Key highlights of the last 2 weeks:
- Patches merged [2]
  Key patches:
- Coverity fixes
- Python3 related fixes
- ASAN fixes (trickling in)
- Patch to handle a case of hang in arbiter
  https://review.gluster.org/21380
- Fixes in cleanup sequence
  https://review.gluster.org/21379
- Release updates:
  - Release 5 has a single blocker before GA, all other activities are
complete
- Blocker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1630804
  - Release 6 scope call out to happen this week!
- Interesting devel threads
  - “Gluster performance updates”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055484.html
  - “Update of work on fixing POSIX compliance issues in Glusterfs”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055488.html
  - “Compile Xlator manually with lib 'glusterfs'”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055560.html

2. Bug trends in the last 2 weeks
  - Bugs and status for the last 2 weeks [3]
- 14 bugs are still in the NEW state and need assignment

3. Key focus areas for the next 2 weeks
  - Continue coverity, clang, ASAN focus
- Coverity how to participate [4]
- Clang issues that need attention [5]
- ASAN issues:
  See https://review.gluster.org/c/glusterfs/+/21300 on how to
effectively use ASAN builds, and use the same to clear up ASAN issues
appearing in your testing (a generic build sketch follows after this list).

  - Improve on bug backlog reduction (details to follow)

  - Remove unsupported xlators from the code base:
https://bugzilla.redhat.com/show_bug.cgi?id=1635688

  - Prepare xlators for classification assignment, to enable selective
volume graph topology for GCS volumes
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/xlator-classification.md

  - Adapt all xlators (and options) to the new registration function as in,
https://review.gluster.org/c/glusterfs/+/19712
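
On the ASAN item above, a generic way to get an instrumented build for
local testing looks roughly like the following. This is only a sketch
using plain autotools flags; the review linked above documents the
project's own preferred switches, so treat the exact flags here as
assumptions:

  # build glusterfs from a source checkout with AddressSanitizer enabled
  ./autogen.sh
  ./configure CFLAGS='-g -O1 -fsanitize=address -fno-omit-frame-pointer' \
              LDFLAGS='-fsanitize=address'
  make -j4
  # send ASAN reports to a predictable location while running tests
  export ASAN_OPTIONS=log_path=/var/tmp/glusterfs-asan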

4. Next release focus areas
  - Deprecate xlators as announced in the lists
  - Complete implementation of xlator classification for all xlators
  - Cleanup sequence with brick-mux
  - Fencing infrastructure for gluster-block
  - Fuse Interrupt Syscall Support
  - Release 6 targeted enhancements [6] (Needs to be populated)

5. Longer term focus areas (possibly beyond the next release)
  - Reflink support, extended to snapshot support for gluster-block
  - Client side caching improvements

- Amar, Shyam and Xavi

Links:

[1] GlusterFS: https://github.com/gluster/glusterfs/

[2] Patches merged in the last 2 weeks:
https://review.gluster.org/q/project:glusterfs+branch:master+until:2018-10-14+since:2018-10-01+status:merged

[3] Bug status for the last 2 weeks:
https://bugzilla.redhat.com/report.cgi?x_axis_field=bug_status_axis_field=component_axis_field=_redirect=1_format=report-table_desc_type=allwordssubstr_desc==Community=GlusterFS_type=allwordssubstr=_file_loc_type=allwordssubstr_file_loc=_whiteboard_type=allwordssubstr_whiteboard=_type=allwords===_id=_id_type=anyexact=_type=greaterthaneq=substring==substring==substring==%5BBug+creation%5D==2018-10-01=2018-10-14_top=AND=component=notequals=project-infrastructure=noop=noop==table=wrap

[4] Coverity reduction and how to participate:
https://lists.gluster.org/pipermail/gluster-devel/2018-August/055155.html

[5] CLang issues needing attention:
https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/

[6] Release 6 targeted enhancements:
https://github.com/gluster/glusterfs/milestone/8
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Maintainer meeting minutes : 15th Oct, 2018

2018-10-15 Thread Shyam Ranganathan
### BJ Link
* Bridge: https://bluejeans.com/217609845
* Watch: 

### Attendance
* Nigel, Nithya, Deepshikha, Akarsha, Kaleb, Shyam, Sunny

### Agenda
* AI from previous meeting:
  - Glusto-Test completion on release-5 branch - On Glusto team
  - Vijay will take this on.
  - He will be focusing on it next week.
  - Glusto for 5 may not be happening before release, but we'll do
it right after release it looks like.

- Release 6 Scope
- Will be sending out an email today/tomorrow for scope of release 6.
- Send a biweekly email with focus on glusterfs release focus areas.

- GCS scope into release-6 scope and get issues marked against the same
- For release-6 we want a thinner stack. This means we'd be removing
xlators from the code that Amar has already sent an email about.
- Locking support for gluster-block. Design still WIP. One of the
big ticket items that should make it to release 6. Includes reflink
support and enough locking support to ensure snapshots are consistent.
- GD1 vs GD2. We've been talking about it since release-4.0. We need
to call this out and understand if we will have GD2 as default. This is
a call out for a plan for when we want to make this transition.

- Round Table
- [Nigel] Minimum build and CI health for all projects (including
sub-projects).
- This was primarily driven for GCS
- But, we need this even otherwise to sustain quality of projects
- AI: Call out on lists around release 6 scope, with a possible
list of sub-projects
- [Kaleb] SELinux package status
- Waiting on testing to understand if this is done right
- Can be released when required, as it is a separate package
- Release-5 the SELinux policies are with Fedora packages
- Need to coordinate with Fedora release, as content is in 2
packages
- AI: Nigel to follow up and get updates by the next meeting

___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA and what are we waiting on

2018-10-11 Thread Shyam Ranganathan
On 10/11/2018 11:25 AM, Shyam Ranganathan wrote:
> 1) Fix for libgfapi-python related blocker on Gluster:
> https://bugzilla.redhat.com/show_bug.cgi?id=1630804


@du @krutika, the root cause for the above issue is from the commit,

commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac
https://review.gluster.org/c/glusterfs/+/20639

performance/readdir-ahead: keep stats of cached dentries in sync with
modifications

I have updated the bug with the required findings, please take a look
and let us know if we can get a fix in time for release-5.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 5: GA and what are we waiting on

2018-10-11 Thread Shyam Ranganathan
So we are through with a series of checks and tasks on release-5 (like
ensuring all backports to other branches are present in 5, upgrade
testing, basic performance testing, Package testing, etc.), but still
need the following resolved, else we stand to delay the release GA
tagging, which I hope to get done over the weekend or by Monday 15th
morning (EDT).

1) Fix for libgfapi-python related blocker on Gluster:
https://bugzilla.redhat.com/show_bug.cgi?id=1630804

@ppai, who needs to look into this?

2) Release notes for options added to the code (see:
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055563.html )

@du, @krutika can we get some text for the options referred in the mail
above?

3) Python3 testing
- Heard back from Kotresh on geo-rep passing and saw that we have
handled cliutils issues
- Anything more to cover? (@aravinda, @kotresh, @ppai?)
- We are attempting to get a regression run on a Python3 platform, but
that may be a little ways away from the release (see:
https://bugzilla.redhat.com/show_bug.cgi?id=1638030 )

Request attention to the above, to ensure we are not breaking things
with the release.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-11 Thread Shyam Ranganathan
On 10/10/2018 11:20 PM, Atin Mukherjee wrote:
> 
> 
> On Wed, 10 Oct 2018 at 20:30, Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> The following options were added post 4.1 and are part of 5.0 as the
> first release for the same. They were added in as part of bugs, and
> hence looking at github issues to track them as enhancements did not
> catch the same.
> 
> We need to document it in the release notes (and also the gluster doc.
> site ideally), and hence I would like a some details on what to write
> for the same (or release notes commits) for them.
> 
> Option: cluster.daemon-log-level
> Attention: @atin
> Review: https://review.gluster.org/c/glusterfs/+/20442
> 
> 
> This option has to be used based on extreme need basis and this is why
> it has been mentioned as GLOBAL_NO_DOC. So ideally this shouldn't be
> documented.
> 
> Do we still want to capture it in the release notes?

This is an interesting catch-22: when we want users to use the option
(say, to provide better logs for troubleshooting), we have nothing to
point to, and it ends up as instructions repeated over mails over the
course of time.
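
(To illustrate the kind of instruction that gets repeated over mail, a
sketch; this assumes the usual 'volume set all' syntax for global
options applies here and that DEBUG is an accepted level.)

    # raise the log level of the glusterd-managed daemons while debugging
    gluster volume set all cluster.daemon-log-level DEBUG
    # reproduce the issue, collect logs from /var/log/glusterfs/, revert
    gluster volume set all cluster.daemon-log-level INFO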

I would look at adding this into an options section in the docs, but the
best I can find in there is
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/

I would say we need to improve the way we deal with options and the
required submissions around the same.

Thoughts?

> 
> <https://review.gluster.org/c/glusterfs/+/20442>
> 
> Option: ctime-invalidation
> Attention: @Du
> Review: https://review.gluster.org/c/glusterfs/+/20286
> 
> Option: shard-lru-limit
> Attention: @krutika
> Review: https://review.gluster.org/c/glusterfs/+/20544
> 
> Option: shard-deletion-rate
> Attention: @krutika
> Review: https://review.gluster.org/c/glusterfs/+/19970
> 
> Please send in the required text ASAP, as we are almost towards the end
> of the release.
> 
> Thanks,
> Shyam
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-10 Thread Shyam Ranganathan
On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> 3. Upgrade testing
>   - Need *volunteers* to do the upgrade testing as stated in the 4.1
> upgrade guide [3] to note any differences or changes to the same
>   - Explicit call out on *disperse* volumes, as we continue to state
> online upgrade is not possible, is this addressed and can this be tested
> and the documentation improved around the same?

Completed upgrade testing using RC1 packages against a 4.1 cluster.
Things held up fine (replicate type volumes).

I have not attempted a rolling upgrade of disperse volumes, as we still
lack instructions to do so. @Pranith/@Xavi is this feasible this release
onward?

Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-5.0rc1 released

2018-10-10 Thread Shyam Ranganathan
On 10/10/2018 03:58 AM, Niels de Vos wrote:
> On Fri, Oct 05, 2018 at 03:04:41PM -0400, Kaleb S. KEITHLEY wrote:
>> On 10/5/18 12:44 PM, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/71/artifact/glusterfs-5.0rc1.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/71/artifact/glusterfs-5.0rc1.sha512sum
>>>
>>> This release is made off jenkins-release-71
>>>
>> GlusterFS 5.0rc1 Packages for:
>>
>>   * el6, el7, el8 (CentOS, RHEL)...
>>   * Fedora 27, 28, 29...
>>   * Debian stretch/9, buster/10
> 
> And since Monday the CentOS Storage SIG packages are also available.
> Sorry for forgetting to send out a note.
> 
> 1. install centos-release-gluster5:
>- for CentOS-6: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el6.centos/noarch/centos-release-gluster5-0.9-1.el6.centos.noarch.rpm
>- for CentOS-7: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el7.centos/noarch/centos-release-gluster5-0.9-1.el7.centos.noarch.rpm
> 
># yum install ${CENTOS_RELEASE_GLUSTER5_URL}
> 
> 2. the centos-gluster5-test repository should be enabled by default, so
> 
># yum install glusterfs-fuse
> 
> 3. report back to this email

Tested install, upgrade procedure with heal and other client IO traffic
as well. All tests passed as required.

> 
> 
> Thanks!
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-10 Thread Shyam Ranganathan
The following options were added post 4.1 and are part of 5.0 as the
first release for the same. They were added in as part of bugs, and
hence looking at github issues to track them as enhancements did not
catch the same.

We need to document these in the release notes (and ideally also on the
gluster doc site), and hence I would like some details on what to write
for each of them (or release notes commits).

Option: cluster.daemon-log-level
Attention: @atin
Review: https://review.gluster.org/c/glusterfs/+/20442

Option: ctime-invalidation
Attention: @Du
Review: https://review.gluster.org/c/glusterfs/+/20286

Option: shard-lru-limit
Attention: @krutika
Review: https://review.gluster.org/c/glusterfs/+/20544

Option: shard-deletion-rate
Attention: @krutika
Review: https://review.gluster.org/c/glusterfs/+/19970

Please send in the required text ASAP, as we are almost towards the end
of the release.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Last minor release for 3.12?

2018-10-10 Thread Shyam Ranganathan
Hi,

3.12 goes EOL with release-5, which is in about a week as most things
are settled at present.

We have the following open patches against 3.12:
https://review.gluster.org/q/project:glusterfs+branch:release-3.12+status:open

My thinking is that we do a final bug fix release for 3.12 (as the
release is on the 10th of each month), before we EOL it.

Thoughts?

@Jiffin I would suggest we wait a day for any inputs, else tag and
generate the 3.12.next release.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-05 Thread Shyam Ranganathan
On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
> On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:
>> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
>>> RC1 would be around 24th of Sep. with final release tagging around 1st
>>> of Oct.
>> RC1 now stands to be tagged tomorrow, and patches that are being
>> targeted for a back port include,
> We still are awaiting release notes (other than the bugs section) to be
> closed.
> 
> There is one new bug that needs attention from the replicate team.
> https://bugzilla.redhat.com/show_bug.cgi?id=1636502
> 
> The above looks important to me to be fixed before the release, @ravi or
> @pranith can you take a look?
> 

RC1 is tagged and release tarball generated.

We still have 2 issues to work on,

1. The above messages from AFR in self heal logs

2. We need to test with Py3, else we risk putting out packages there on
Py3 default distros and causing some mayhem if basic things fail.

I am open to suggestions on how to ensure we work with Py3, thoughts?

I am thinking we run a regression on F28 (or a platform that defaults to
Py3) and ensure regressions are passing at the very least. For other
Python code that regressions do not cover,
- We have a list at [1]
- How can we split ownership of these?
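
As a crude first pass (a sketch; it only catches syntax-level breakage,
not bytes/str or other runtime differences), a sweep like the following
over the tree can flag files that will not even compile under Py3:

    # from a glusterfs checkout, on a machine with python3 available
    find . -name '*.py' -print0 | xargs -0 -n1 python3 -m py_compile
    # files that print a SyntaxError need attention; the rest still
    # need the regression runs and the ownership split called out above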

@Aravinda, @Kotresh, and @ppai, looking to you folks to help out with
the process and needs here.

Shyam

[1] https://github.com/gluster/glusterfs/issues/411
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-05 Thread Shyam Ranganathan
On 10/05/2018 11:04 AM, Atin Mukherjee wrote:
> >
> > 3) Release notes review and updates with GD2 content pending
> >
> > @Kaushal/GD2 team can we get the updates as required?
> > https://review.gluster.org/c/glusterfs/+/21303
> 
> Still awaiting this.
> 
> 
> Kaushal has added a comment into the patch providing the content today
> morning IST. Any additional details are you looking for?

Saw this now; this should be fine. I did not read the comments this
morning (my bad), instead saw there was no activity on the patch itself,
and missed this.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-05 Thread Shyam Ranganathan
On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:
> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
>> RC1 would be around 24th of Sep. with final release tagging around 1st
>> of Oct.
> 
> RC1 now stands to be tagged tomorrow, and patches that are being
> targeted for a back port include,

We still are awaiting release notes (other than the bugs section) to be
closed.

There is one new bug that needs attention from the replicate team.
https://bugzilla.redhat.com/show_bug.cgi?id=1636502

The above looks important to me to be fixed before the release, @ravi or
@pranith can you take a look?

> 
> 1) https://review.gluster.org/c/glusterfs/+/21314 (snapshot volfile in
> mux cases)
> 
> @RaBhat working on this.

Done

> 
> 2) Py3 corrections in master
> 
> @Kotresh are all changes made to master backported to release-5 (may not
> be merged, but looking at if they are backported and ready for merge)?

Done, release notes amend pending

> 
> 3) Release notes review and updates with GD2 content pending
> 
> @Kaushal/GD2 team can we get the updates as required?
> https://review.gluster.org/c/glusterfs/+/21303

Still awaiting this.

> 
> 4) This bug [2] was filed when we released 4.0.
> 
> The issue has not bitten us in 4.0 or in 4.1 (yet!) (i.e the options
> missing and hence post-upgrade clients failing the mount). This is
> possibly the last chance to fix it.
> 
> Glusterd and protocol maintainers, can you chime in, if this bug needs
> to be and can be fixed? (thanks to @anoopcs for pointing it out to me)

Release notes to be corrected to call this out.

> 
> The tracker bug [1] does not have any other blockers against it, hence
> assuming we are not tracking/waiting on anything other than the set above.
> 
> Thanks,
> Shyam
> 
> [1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
> [2] Potential upgrade bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1540659
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-05 Thread Shyam Ranganathan
On 10/04/2018 02:46 PM, Kotresh Hiremath Ravishankar wrote:
> 2) Py3 corrections in master
> 
> @Kotresh are all changes made to master backported to release-5 (may not
> be merged, but looking at if they are backported and ready for merge)?
> 
> 
> All changes made to master are backported to release-5. But py3 support
> is still not complete.
> 

So if run with Py3 the code may not work as intended? Looking for some
clarification around "not complete" so that release notes can be amended
accordingly.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Shyam Ranganathan
On 10/04/2018 12:01 PM, Atin Mukherjee wrote:
> 4) This bug [2] was filed when we released 4.0.
> 
> The issue has not bitten us in 4.0 or in 4.1 (yet!) (i.e the options
> missing and hence post-upgrade clients failing the mount). This is
> possibly the last chance to fix it.
> 
> Glusterd and protocol maintainers, can you chime in, if this bug needs
> to be and can be fixed? (thanks to @anoopcs for pointing it out to me)
> 
> 
> This is a bad bug to live with. OTOH, I do not have an immediate
> solution in my mind on how to make sure (a) these options when
> reintroduced are made no-ops, especially they will be disallowed to tune
> (with out dirty option check hacks at volume set staging code) . If
> we're to tag RC1 tomorrow, I wouldn't be able to take a risk to commit
> this change.
> 
> Can we actually have a note in our upgrade guide to document that if
> you're upgrading to 4.1 or higher version make sure to disable these
> options before the upgrade to mitigate this?

Yes, adding this to the "Major Issues" section in the release notes as
well as noting it in the upgrade guide is possible. I will go with this
option for now, as we do not have complaints around this from 4.0/4.1
releases (which have the same issue as well).
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Shyam Ranganathan
On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> RC1 would be around 24th of Sep. with final release tagging around 1st
> of Oct.

RC1 now stands to be tagged tomorrow, and patches that are being
targeted for a back port include,

1) https://review.gluster.org/c/glusterfs/+/21314 (snapshot volfile in
mux cases)

@RaBhat working on this.

2) Py3 corrections in master

@Kotresh are all changes made to master backported to release-5 (may not
be merged, but looking at if they are backported and ready for merge)?

3) Release notes review and updates with GD2 content pending

@Kaushal/GD2 team can we get the updates as required?
https://review.gluster.org/c/glusterfs/+/21303

4) This bug [2] was filed when we released 4.0.

The issue has not bitten us in 4.0 or in 4.1 (yet!) (i.e. the options
missing and hence post-upgrade clients failing the mount). This is
possibly the last chance to fix it.

Glusterd and protocol maintainers, can you chime in, if this bug needs
to be and can be fixed? (thanks to @anoopcs for pointing it out to me)

The tracker bug [1] does not have any other blockers against it, hence
assuming we are not tracking/waiting on anything other than the set above.

Thanks,
Shyam

[1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
[2] Potential upgrade bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1540659
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Lock down period merge process

2018-10-03 Thread Shyam Ranganathan
On 10/03/2018 11:32 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Wed, Oct 3, 2018 at 8:50 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 10/03/2018 11:16 AM, Pranith Kumar Karampuri wrote:
> >     Once we have distributed tests running, such that overall
> regression
> >     time is reduced, we can possibly tackle removing retries for
> tests, and
> >     then getting to a more stringent recheck process/tooling. The
> reason
> >     being, we now run to completion and that takes quite a bit of
> time, so
> >     at this juncture removing retry is not practical, but we
> should get
> >     there (soon?).
> >
> >
> > I agree with you about removing retry. I didn't understand why recheck
> > nudging developers has to be post-poned till distributed regression
> > tests comes into picture. My thinking is that it is more important to
> > have it when tests take longer.
> 
> Above is only retry specific, not recheck specific, as in "we can
> possibly tackle removing retries for tests"
> 
> But also reiterating this is orthogonal to the lock down needs discussed
> here.
> 
> 
> As per my understanding the reason why lock down is happening because no
> one makes any noise about the failures that they are facing as and when
> it happens, and it doesn't get conveyed on gluster-devel. So is there
> any reason why you think it is orthogonal considering it is contributing
> directly to the problem that we are discussing on this thread?

Taking steps to ensure quality is maintained is going to reduce
instances of lock down, hence orthogonal.

> 
> -- 
> Pranith
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Lock down period merge process

2018-10-03 Thread Shyam Ranganathan
On 10/03/2018 11:16 AM, Pranith Kumar Karampuri wrote:
> Once we have distributed tests running, such that overall regression
> time is reduced, we can possibly tackle removing retries for tests, and
> then getting to a more stringent recheck process/tooling. The reason
> being, we now run to completion and that takes quite a bit of time, so
> at this juncture removing retry is not practical, but we should get
> there (soon?).
> 
> 
> I agree with you about removing retry. I didn't understand why recheck
> nudging developers has to be post-poned till distributed regression
> tests comes into picture. My thinking is that it is more important to
> have it when tests take longer.

Above is only retry specific, not recheck specific, as in "we can
possibly tackle removing retries for tests"

But also reiterating this is orthogonal to the lock down needs discussed
here.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Lock down period merge process

2018-10-03 Thread Shyam Ranganathan
On 10/03/2018 05:36 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Thu, Sep 27, 2018 at 8:18 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 09/27/2018 10:05 AM, Atin Mukherjee wrote:
> >         Now does this mean we block commit rights for component Y till
> >         we have the root cause?
> >
> >
> >     It was a way of making it someone's priority. If you have another
> >     way to make it someone's priority that is better than this, please
> >     suggest and we can have a discussion around it and agree on it
> :-).
> >
> >
> > This is what I can think of:
> >
> > 1. Component peers/maintainers take a first triage of the test
> failure.
> > Do the initial debugging and (a) point to the component which needs
> > further debugging or (b) seek for help at gluster-devel ML for
> > additional insight for identifying the problem and narrowing down to a
> > component. 
> > 2. If it’s (1 a) then we already know the component and the owner. If
> > it’s (2 b) at this juncture, it’s all maintainers responsibility to
> > ensure the email is well understood and based on the available details
> > the ownership is picked up by respective maintainers. It might be also
> > needed that multiple maintainers might have to be involved and this is
> > why I focus on this as a group effort than individual one.
> 
> In my thinking, acting as a group here is better than making it a
> sub-groups/individuals responsibility. Which has been put forth by Atin
> (IMO) well. Thus, keep the merge rights out for all (of course some
> still need to have it), and get the situation addressed is better.
> 
> 
> In my experience, it has been rather difficult for developers without
> domain expertise to solve the problem (at least on the components I am
> maintaining), so the reality is that not everyone may be able to solve
> the issues on all the components where the problem is observed. May be
> you mean we need more participation  when you say we need to act as a
> group, so with that assumption one way to make that happen is to change
> the workflow around 'recheck centos'. In my thinking following the tools
> shouldn't lead to less participation on gluster-devel where developers
> can just do recheck-centos until the test passes and be done. So maybe
> tooling should encourage participation. Maybe something like 'recheck
> centos ' This is
> just an idea, thoughts are welcome.

I agree, any recheck should have enough reason behind it to state why
the recheck is being attempted, and what the failures were, which are
deemed spurious or otherwise to require a recheck.

The manner of enforcing the same is not present yet, and is possibly an
orthogonal discussion to the one here.

Recheck stringency (and, I would add, removing the retry of a test when
it fails once) will aid in getting to less frequent breakage in the
nightly runs, as more effort is put into correcting the tests or fixing
the code around the same.

Once we have distributed tests running, such that overall regression
time is reduced, we can possibly tackle removing retries for tests, and
then getting to a more stringent recheck process/tooling. The reason
being, we now run to completion and that takes quite a bit of time, so
at this juncture removing retry is not practical, but we should get
there (soon?).

>  
> 
> ___
> maintainers mailing list
> maintainers@gluster.org <mailto:maintainers@gluster.org>
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
> 
> 
> -- 
> Pranith
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-02 Thread Shyam Ranganathan
On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> 1. Release notes (Owner: release owner (myself), will send out an
> initial version for review and to solicit inputs today)


Please find the initial commit here [1].

@Kaushal/GD2 team, request updation of the Management section with
relevant notes.

@others, reviews welcome. Also, if any noted feature still has to update
the gluster documentation to call out its options or their use, now
would be a good time to close the same, as that can aid users better
than just the release notes and what is written in there.

Shyam

[1] https://review.gluster.org/c/glusterfs/+/21303
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [for discussion] suggestions around improvements in bug triage workflow

2018-10-02 Thread Shyam Ranganathan
On 09/27/2018 11:33 AM, Atin Mukherjee wrote:
> 
> 
> On Thu, 27 Sep 2018 at 20:37, Sankarshan Mukhopadhyay
>  > wrote:
> 
> The origin of this conversation is a bit of a hall-way discussion with
> Shyam. The actual matter should be familiar to maintainers. For what
> it is worth, it was also mentioned at the recent Community meeting.
> 
> As the current workflows go, once a release is made generally
> available, a large swathe of bugs against an EOLd release are
> automatically closed citing that "the release is EOLd and if the bug
> is still reproducible on later releases, please reopen against those".
> However, there is perhaps a better way to handle this:
> 
> 
> I will play a devil’s advocate role here, but one of the question we
> need to ask ourselves additionally:
> 
> - Why are we getting into such state where so many bugs primarily the
> ones which haven’t got development’s attention get auto closed due to EOL?
> - Doesn’t this indicate we’re actually piling up our backlog with
> (probable) genuine defects and not taking enough action?
> 
> Bugzilla triage needs to be made as a habit by individuals to ensure new
> bugs get attention. Technically this will no longer be a problem.

Agreed. Further, when the triage is done, if the problem is release
specific then we can let the bug be; else, cloning it against master
will ensure the bug is tracked, and even if we EOL bugs against a
release, the bug report survives against master till it is fixed.
 
> 
> However, for now I think this workflow sounds a right measure atleast to
> ensure we don’t close down a genuine defect.
> 
> 
> 
> [0] clone the bug into master so that it continues to be part of a
> valid bug backlog
> 
> [1] validate per release that the circumstances described by the bug
> are actually resolved and hence CLOSED CURRENTRELEASE them
> 
> I am posting here for discussion around this as well as being able to
> identify whether tooling/automation can be used to handle some of
> this.

As part of a release EOL, such a job needs to be taken up. Till
triaging improves, cloning the bug against master and closing the
release bug would be a viable option to proceed with.

If the quantum of bugs is within a reasonable number (say 20 odd) then
we can even triage them at that point and take action; else tooling
needs to be in place.
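
If tooling is needed, something along the lines of the python-bugzilla
CLI could be a starting point (a sketch; exact flag names can differ
across python-bugzilla versions, and the component/summary values below
are placeholders):

    # list open bugs filed against the to-be-EOLd version
    bugzilla query --product GlusterFS --version 3.12 --bug_status OPEN \
        --outputformat "%{id} | %{component} | %{summary}"
    # re-file the ones worth keeping against mainline before closing them
    bugzilla new --product GlusterFS --version mainline --component core \
        --summary "..." --comment "Cloned from the 3.12 bug before EOL"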

I will keep a watch out when we EOL 3.12 as release-5 goes out to see
how much of an issue this is to do manually.

> 
> 
> 
> -- 
> sankarshan mukhopadhyay
> 
> ___
> maintainers mailing list
> maintainers@gluster.org 
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
> -- 
> - Atin (atinm)
> 
> 
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Performance comparisons

2018-10-02 Thread Shyam Ranganathan
On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> 4. Performance testing/benchmarking
>   - I would be using smallfile and FIO to baseline 3.12 and 4.1 and test
> RC0 for any major regressions
>   - If we already know of any please shout out so that we are aware of
> the problems and upcoming fixes to the same


Managed to complete this yesterday; attached are the results. The
comparison is between 4.1.5 and 5.0 to understand if there are any major
regressions. The tests themselves need to be tuned better in the future,
but they help provide an initial look into comparing the releases.
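
(For context, the runs were shaped along these lines; illustrative
invocations only, with mount path, thread counts and sizes as
placeholders rather than the exact parameters behind the attached
numbers.)

    # smallfile: metadata and small-file operations against a mounted volume
    python smallfile_cli.py --top /mnt/testvol --threads 8 \
        --files 10000 --file-size 64 --operation create
    # fio: sequential and random IO against the same mount
    fio --name=seqwrite --directory=/mnt/testvol --rw=write \
        --bs=1M --size=2G --numjobs=4 --group_reporting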

Observations (just some top ones, not details, considering at least a 5%
delta from prior release numbers):
- ls -l tests with smallfile on dist-arbiter volumes seem to have regressed
- create tests with smallfile on dist-arbiter volumes seem to have improved
- FIO sequential write performance remains mostly the same across volume
types
- FIO sequential read performance seems to degrade on certain volume types
- FIO random write performance seems to degrade on certain volume types
- FIO random read performance seems to have improved across volume types

Goof-ups:
- The volume creation ansible play just laid out bricks in host order,
hence for tests like dist-dispers-4x(4+2) all 6 bricks of the same
subvolume ended up on the same host. Interestingly this happened across
both versions compared, and hence the pattern was the same, allowing
some base comparisons.

Next steps:
- I will be running the tests that gave inconsistent results with RC1
when we build the same
- It would be useful for component owners to take a look and call out
possible causes for some of the degrades, if already known

Shyam


gbench-Summary-4.1.5-to-5.0.ods
Description: application/vnd.oasis.opendocument.spreadsheet
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-5.0rc0 released

2018-10-01 Thread Shyam Ranganathan
On 09/25/2018 05:36 PM, Niels de Vos wrote:
> On Tue, Sep 18, 2018 at 11:55:46AM -0400, Kaleb S. KEITHLEY wrote:
>> On 09/17/2018 08:44 PM, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/69/artifact/glusterfs-5.0rc0.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/69/artifact/glusterfs-5.0rc0.sha512sum
>>>
>>> This release is made off jenkins-release-69
> 
> Packages from the CentOS Storage SIG can now be tested from the testing
> repository. Please let me know if any dependencies are missing or when
> there are issues with any of the components.
> 
> 1. install centos-release-gluster5:
>- for CentOS-6: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el6.centos/src/centos-release-gluster5-0.9-1.el6.centos.src.rpm
>- for CentOS-7: 
> http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el7.centos/src/centos-release-gluster5-0.9-1.el7.centos.src.rpm

The above should be:
http://cbs.centos.org/kojifiles/packages/centos-release-gluster5/0.9/1.el7.centos/noarch/centos-release-gluster5-0.9-1.el7.centos.noarch.rpm

IOW, not from src, but from arch directory, right Niels (cross checking)?

Corrected the same and installed (and upgraded from 4.1.5) client and
server bits for FUSE testing. Packages and dependencies look good.

> 
># yum install ${CENTOS_RELEASE_GLUSTER5_URL}
> 
> 2. the centos-gluster5-test repository should be enabled by default, so
> 
># yum install glusterfs-fuse
> 
> 3. report back to this email
> 
> Thanks!
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Memory overwrites due to processing vol files???

2018-09-28 Thread Shyam Ranganathan
We tested with ASAN and without the fix at [1], and it consistently
crashes at the mdcache xlator when brick mux is enabled.
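
(For anyone wanting to reproduce, roughly what an instrumented run looks
like; this is a generic autotools sketch from a source checkout, not
necessarily the exact flags or steps used for the run above.)

    ./autogen.sh
    CFLAGS='-g -O0 -fsanitize=address' LDFLAGS='-fsanitize=address' ./configure
    make -j"$(nproc)" && make install
    # run the failing test; ASAN aborts with a report at the first bad
    # access instead of letting the allocator state get corrupted
    ./run-tests.sh tests/bugs/snapshot/bug-1275616.t
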
On 09/28/2018 03:50 PM, FNU Raghavendra Manjunath wrote:
> 
> I was looking into the issue and  this is what I could find while
> working with shyam.
> 
> There are 2 things here.
> 
> 1) The multiplexed brick process for the snapshot(s) getting the client
> volfile (I suspect, it happened
>      when restore operation was performed).
> 2) Memory corruption happening while the multiplexed brick process is
> building the graph (for the client
>      volfile it got above)
> 
> I have been able to reproduce the issue in my local computer once, when
> I ran the testcase tests/bugs/snapshot/bug-1275616.t
> 
> Upon comparison, we found that the backtrace of the core I got and the
> core generated in the regression runs was similar.
> In fact, the victim information shyam mentioned before, is also similar
> in the core that I was able to get.  
> 
> On top of that, when the brick process was run with valgrind, it
> reported following memory corruption
> 
> ==31257== Conditional jump or move depends on uninitialised value(s)
> ==31257==    at 0x1A7D0564: mdc_xattr_list_populate (md-cache.c:3127)
> ==31257==    by 0x1A7D1903: mdc_init (md-cache.c:3486)
> ==31257==    by 0x4E62D41: __xlator_init (xlator.c:684)
> ==31257==    by 0x4E62E67: xlator_init (xlator.c:709)
> ==31257==    by 0x4EB2BEB: glusterfs_graph_init (graph.c:359)
> ==31257==    by 0x4EB37F8: glusterfs_graph_activate (graph.c:722)
> ==31257==    by 0x40AEC3: glusterfs_process_volfp (glusterfsd.c:2528)
> ==31257==    by 0x410868: mgmt_getspec_cbk (glusterfsd-mgmt.c:2076)
> ==31257==    by 0x518408D: rpc_clnt_handle_reply (rpc-clnt.c:755)
> ==31257==    by 0x51845C1: rpc_clnt_notify (rpc-clnt.c:923)
> ==31257==    by 0x518084E: rpc_transport_notify (rpc-transport.c:525)
> ==31257==    by 0x123273DF: socket_event_poll_in (socket.c:2504)
> ==31257==  Uninitialised value was created by a heap allocation
> ==31257==    at 0x4C2DB9D: malloc (vg_replace_malloc.c:299)
> ==31257==    by 0x4E9F58E: __gf_malloc (mem-pool.c:136)
> ==31257==    by 0x1A7D052A: mdc_xattr_list_populate (md-cache.c:3123)
> ==31257==    by 0x1A7D1903: mdc_init (md-cache.c:3486)
> ==31257==    by 0x4E62D41: __xlator_init (xlator.c:684)
> ==31257==    by 0x4E62E67: xlator_init (xlator.c:709)
> ==31257==    by 0x4EB2BEB: glusterfs_graph_init (graph.c:359)
> ==31257==    by 0x4EB37F8: glusterfs_graph_activate (graph.c:722)
> ==31257==    by 0x40AEC3: glusterfs_process_volfp (glusterfsd.c:2528)
> ==31257==    by 0x410868: mgmt_getspec_cbk (glusterfsd-mgmt.c:2076)
> ==31257==    by 0x518408D: rpc_clnt_handle_reply (rpc-clnt.c:755)
> ==31257==    by 0x51845C1: rpc_clnt_notify (rpc-clnt.c:923)
> 
> Based on the above observations, I think the below patch  by Shyam
> should fix the crash.

[1]

> https://review.gluster.org/#/c/glusterfs/+/21299/
> 
> But, I am still trying understand, why a brick process should get a
> client volfile (i.e. the 1st issue mentioned above). 
> 
> Regards,
> Raghavendra
> 
> On Wed, Sep 26, 2018 at 9:00 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> > 2. Testing dashboard to maintain release health (new, thanks Nigel)
> >   - Dashboard at [2]
> >   - We already have 3 failures here as follows, needs attention from
> > appropriate *maintainers*,
> >     (a)
> >
> 
> https://build.gluster.org/job/regression-test-with-multiplex/871/consoleText
> >       - Failed with core:
> ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
> >     (b)
> >
> 
> https://build.gluster.org/job/regression-test-with-multiplex/873/consoleText
> >       - Failed with core: ./tests/bugs/snapshot/bug-1275616.t
> >       - Also test ./tests/bugs/glusterd/validating-server-quorum.t
> had to be
> > retried
> 
> I was looking at the cores from the above 2 instances, the one in job
> 873 is been a typical pattern, where malloc fails as there is internal
> header corruption in the free bins.
> 
> When examining the victim that would have been allocated, it is often
> carrying incorrect size and other magic information. If the data in
> victim is investigated it looks like a volfile.
> 
> With the crash in 871, I thought there maybe a point where this is
> detected earlier, but not able to make headway in the same.
> 
> So, what could be corrupting this memory and is it when the graph is
> being processed? Can we run this with ASAN or such (I have not tried,
> but 

Re: [Gluster-Maintainers] Lock down period merge process

2018-09-27 Thread Shyam Ranganathan
On 09/27/2018 10:05 AM, Atin Mukherjee wrote:
> Now does this mean we block commit rights for component Y till
> we have the root cause? 
> 
> 
> It was a way of making it someone's priority. If you have another
> way to make it someone's priority that is better than this, please
> suggest and we can have a discussion around it and agree on it :-).
> 
> 
> This is what I can think of:
> 
> 1. Component peers/maintainers take a first triage of the test failure.
> Do the initial debugging and (a) point to the component which needs
> further debugging or (b) seek for help at gluster-devel ML for
> additional insight for identifying the problem and narrowing down to a
> component. 
> 2. If it’s (1 a) then we already know the component and the owner. If
> it’s (2 b) at this juncture, it’s all maintainers responsibility to
> ensure the email is well understood and based on the available details
> the ownership is picked up by respective maintainers. It might be also
> needed that multiple maintainers might have to be involved and this is
> why I focus on this as a group effort than individual one.

In my thinking, acting as a group here is better than making it a
sub-group's/individual's responsibility, which has been put forth well
by Atin (IMO). Thus, keeping merge rights locked out for all (of course
some still need to have it) and getting the situation addressed is the
better approach.
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release 5: New option noatime

2018-09-27 Thread Shyam Ranganathan
On 09/27/2018 09:08 AM, Shyam Ranganathan wrote:
> Writing this to solicit opinions on merging this [1] change that
> introduces an option late in the release cycle.
> 
> I went through the code, and most changes are standard option handling
> and basic xlator scaffolding, other than the change in posix xlator code
> that handles the flag to not set atime and the code in utime xlator that
> conditionally sets the flag. (of which IMO the latter is more important
> than the former, as posix is just acting on the flag).
> 
> The option if enabled would hence not update atime for the following
> FOPs, opendir, open, read, and would continue updating atime on the
> following FOPs fallocate and zerofill (which also update mtime, so the
> AFR self heal on time change would kick in anyways).
> 
> As the option suggests, with it enables atime is almost meaningless and
> hence it almost does not matter where we update it and where not. Just
> considering the problem where atime changes cause AFR to trigger a heal,
> and the FOPs above that strictly only change atime handled with this
> option, I am looking at this as functionally workable.
> 
> So IMO we can accept this even though it is late, but would like to hear
> from others if this needs to be deferred till release-6.
> 
> Shyam

[1] Patch under review: https://review.gluster.org/c/glusterfs/+/21281
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 5: New option noatime

2018-09-27 Thread Shyam Ranganathan
Writing this to solicit opinions on merging this [1] change that
introduces an option late in the release cycle.

I went through the code, and most changes are standard option handling
and basic xlator scaffolding, other than the change in posix xlator code
that handles the flag to not set atime and the code in utime xlator that
conditionally sets the flag. (of which IMO the latter is more important
than the former, as posix is just acting on the flag).

The option if enabled would hence not update atime for the following
FOPs, opendir, open, read, and would continue updating atime on the
following FOPs fallocate and zerofill (which also update mtime, so the
AFR self heal on time change would kick in anyways).

As the option suggests, with it enabled atime is almost meaningless,
and hence it matters little where we do or do not update it. Considering
just the problem where atime changes cause AFR to trigger a heal, and
that the FOPs above which strictly change only atime are handled by this
option, I am looking at this as functionally workable.
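
(Once it lands, a quick sanity check on a mount would be along these
lines; a sketch with plain coreutils, the path is a placeholder, and how
the option gets enabled is per the patch under review, so not shown
here.)

    stat -c 'atime=%X' /mnt/testvol/f1
    cat /mnt/testvol/f1 > /dev/null     # open + read, which the option covers
    stat -c 'atime=%X' /mnt/testvol/f1  # expect the same value as before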

So IMO we can accept this even though it is late, but would like to hear
from others if this needs to be deferred till release-6.

Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Memory overwrites due to processing vol files???

2018-09-26 Thread Shyam Ranganathan
On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> 2. Testing dashboard to maintain release health (new, thanks Nigel)
>   - Dashboard at [2]
>   - We already have 3 failures here as follows, needs attention from
> appropriate *maintainers*,
> (a)
> https://build.gluster.org/job/regression-test-with-multiplex/871/consoleText
>   - Failed with core: 
> ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
> (b)
> https://build.gluster.org/job/regression-test-with-multiplex/873/consoleText
>   - Failed with core: ./tests/bugs/snapshot/bug-1275616.t
>   - Also test ./tests/bugs/glusterd/validating-server-quorum.t had to be
> retried

I was looking at the cores from the above 2 instances; the one in job
873 shows a typical pattern, where malloc fails as there is internal
header corruption in the free bins.

When examining the victim that would have been allocated, it often
carries an incorrect size and other magic information. If the data in
the victim is investigated, it looks like a volfile.

With the crash in 871, I thought there may be a point where this is
detected earlier, but I was not able to make headway on the same.

So, what could be corrupting this memory and is it when the graph is
being processed? Can we run this with ASAN or such (I have not tried,
but need pointers if anyone has run tests with ASAN).
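
(One option that needs no rebuild is valgrind; a rough sketch follows.
Bricks are spawned by glusterd, so child tracing is needed, and the run
will be slow.)

    systemctl stop glusterd
    # -N keeps glusterd in the foreground; spawned glusterfsd bricks are traced
    valgrind --trace-children=yes --track-origins=yes \
        --log-file=/var/log/glusterfs/valgrind-%p.log /usr/sbin/glusterd -N
    # from another shell, run the reproducer and check the per-pid logs
    # for invalid reads/writes around graph init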

Here is the (brief) stack analysis of the core in 873:
NOTE: we need to start avoiding flushing the logs when we are dumping
core, as that leads to more memory allocations and causes a sort of
double fault in such cases.

Core was generated by `/build/install/sbin/glusterfsd -s
builder101.cloud.gluster.org --volfile-id /sn'.
Program terminated with signal 6, Aborted.
#0  0x7f23cf590277 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x7f23cf590277 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f23cf591968 in __GI_abort () at abort.c:90
#2  0x7f23cf5d2d37 in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7f23cf6e4d58 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x7f23cf5db499 in malloc_printerr (ar_ptr=0x7f23bc20,
ptr=, str=0x7f23cf6e4ea8 "free(): corrupted unsorted
chunks", action=3) at malloc.c:5025
#4  _int_free (av=0x7f23bc20, p=, have_lock=0) at
malloc.c:3847
#5  0x7f23d0f7c6e4 in __gf_free (free_ptr=0x7f23bc0a56a0) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/mem-pool.c:356
#6  0x7f23d0f41821 in log_buf_destroy (buf=0x7f23bc0a5568) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:358
#7  0x7f23d0f44e55 in gf_log_flush_list (copy=0x7f23c404a290,
ctx=0x1ff6010) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:1739
#8  0x7f23d0f45081 in gf_log_flush_extra_msgs (ctx=0x1ff6010, new=0)
at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:1807
#9  0x7f23d0f4162d in gf_log_set_log_buf_size (buf_size=0) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:290
#10 0x7f23d0f41acc in gf_log_disable_suppression_before_exit
(ctx=0x1ff6010) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/logging.c:444
#11 0x7f23d0f4c027 in gf_print_trace (signum=6, ctx=0x1ff6010) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/common-utils.c:922
#12 0x0040a84a in glusterfsd_print_trace (signum=6) at
/home/jenkins/root/workspace/regression-test-with-multiplex/glusterfsd/src/glusterfsd.c:2316
#13 
#14 0x7f23cf590277 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#15 0x7f23cf591968 in __GI_abort () at abort.c:90
#16 0x7f23cf5d2d37 in __libc_message (do_abort=2,
fmt=fmt@entry=0x7f23cf6e4d58 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/unix/sysv/linux/libc_fatal.c:196
#17 0x7f23cf5dcc86 in malloc_printerr (ar_ptr=0x7f23bc20,
ptr=0x7f23bc003cd0, str=0x7f23cf6e245b "malloc(): memory corruption",
action=) at malloc.c:5025
#18 _int_malloc (av=av@entry=0x7f23bc20, bytes=bytes@entry=15664) at
malloc.c:3473
#19 0x7f23cf5df84c in __GI___libc_malloc (bytes=15664) at malloc.c:2899
#20 0x7f23d0f3bbbf in __gf_default_malloc (size=15664) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/mem-pool.h:106
#21 0x7f23d0f3f02f in xlator_mem_acct_init (xl=0x7f23bc082b20,
num_types=163) at
/home/jenkins/root/workspace/regression-test-with-multiplex/libglusterfs/src/xlator.c:800
#22 0x7f23b90a37bf in mem_acct_init (this=0x7f23bc082b20) at
/home/jenkins/root/workspace/regression-test-with-multiplex/xlators/performance/open-behind/src/open-behind.c:

Re: [Gluster-Maintainers] Lock down period merge process

2018-09-26 Thread Shyam Ranganathan
This was discussed in the maintainers meeting (see notes [1]), and the
conclusion is as follows,

- Merge lock down would be across the code base, and not component
specific, as a component level decision falls more into the 'good faith'
category and requires more tooling to avoid the same.

- Merge lock down would kick in closer to when repeated failures are
noticed, rather than as it stands now (looking for failures across the
board), as we strengthen the code base

In all, testing health maintained at always GREEN is where we want to
reach over time, taking a step back to correct any anomalies when we
detect them so as to retain the said health.

Shyam

[1] Maintainer meeting notes:
https://lists.gluster.org/pipermail/maintainers/2018-September/005054.html
(see Round table section)
On 09/03/2018 01:47 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Wed, Aug 22, 2018 at 5:54 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 08/18/2018 12:45 AM, Pranith Kumar Karampuri wrote:
> >
> >
> > On Tue, Aug 14, 2018 at 5:29 PM Shyam Ranganathan
> mailto:srang...@redhat.com>
> > <mailto:srang...@redhat.com <mailto:srang...@redhat.com>>> wrote:
> >
> >     On 08/09/2018 01:24 AM, Pranith Kumar Karampuri wrote:
> >     >
> >     >
> >     > On Thu, Aug 9, 2018 at 1:25 AM Shyam Ranganathan
> >     mailto:srang...@redhat.com>
> <mailto:srang...@redhat.com <mailto:srang...@redhat.com>>
> >     > <mailto:srang...@redhat.com <mailto:srang...@redhat.com>
> <mailto:srang...@redhat.com <mailto:srang...@redhat.com>>>> wrote:
> >     >
> >     >     Maintainers,
> >     >
> >     >     The following thread talks about a merge during a merge
> >     lockdown, with
> >     >     differing view points (this mail is not to discuss the view
> >     points).
> >     >
> >     >     The root of the problem is that we leave the current process
> >     to good
> >     >     faith. If we have a simple rule that we will not merge
> >     anything during a
> >     >     lock down period, this confusion and any future
> repetitions of
> >     the same
> >     >     would not occur.
> >     >
> >     >     I propose that we follow the simpler rule, and would
> like to hear
> >     >     thoughts around this.
> >     >
> >     >     This also means that in the future, we may not need to
> remove
> >     commit
> >     >     access for other maintainers, as we do *not* follow a good
> >     faith policy,
> >     >     and instead depend on being able to revert and announce
> on the
> >     threads
> >     >     why we do so.
> >     >
> >     >
> >     > I think it is a good opportunity to establish guidelines and
> >     process so
> >     > that we don't end up in this state in future where one needs
> to lock
> >     > down the branch to make it stable. From that p.o.v.
> discussion on this
> >     > thread about establishing a process for lock down probably
> doesn't add
> >     > much value. My personal opinion for this instance at least
> is that
> >     it is
> >     > good that it was locked down. I tend to forget things and not
> >     having the
> >     > access to commit helped follow the process automatically :-).
> >
> >     The intention is that master and release branches are always
> maintained
> >     in good working order. This involves, tests and related checks
> passing
> >     *always*.
> >
> >     When this situation is breached, correcting it immediately is
> better
> >     than letting it build up, as that would entail longer times
> and more
> >     people to fix things up.
> >
> >     In an ideal world, if nightly runs fail, the next thing done
> would be to
> >     examine patches that were added between the 2 runs, and see if
> they are
> >     the cause for failure, and back them out.
> >
> >     Hence calling to backout patches is something that would
> happen more
> >     regularly in the future if things are breaking.
> >
> >
> > I'm with you till here.
> >  
> >
> >
> >     Lock down may happen if 2 consecutive nightly builds 

Re: [Gluster-Maintainers] Release 5: Branched and further dates

2018-09-26 Thread Shyam Ranganathan
Hi,

Updates on the release and a shout out for help are as follows:

RC0 release packages for testing are available; see the thread at [1].

The following activities need to be completed before calling the
release GA (i.e., with no major regressions):

1. Release notes (Owner: release owner (myself), will send out an
initial version for review and to solicit inputs today)

2. Testing dashboard to maintain release health (new, thanks Nigel)
  - Dashboard at [2]
  - We already have 3 failures here as follows, which need attention
from the appropriate *maintainers*:
(a)
https://build.gluster.org/job/regression-test-with-multiplex/871/consoleText
- Failed with core: 
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
(b)
https://build.gluster.org/job/regression-test-with-multiplex/873/consoleText
- Failed with core: ./tests/bugs/snapshot/bug-1275616.t
- Also test ./tests/bugs/glusterd/validating-server-quorum.t had to be
retried
(c)
https://build.gluster.org/job/regression-test-burn-in/4109/consoleText
- Failed with core: ./tests/basic/mgmt_v3-locks.t

3. Upgrade testing
  - Need *volunteers* to do the upgrade testing as stated in the 4.1
upgrade guide [3] and to note any differences or changes to the same (a
rough per-server sketch follows after this list)
  - Explicit call out on *disperse* volumes: as we continue to state
that online upgrade is not possible, is this addressed, and can this be
tested and the documentation improved around the same?

4. Performance testing/benchmarking
  - I would be using smallfile and FIO to baseline 3.12 and 4.1 and test
RC0 for any major regressions
  - If we already know of any please shout out so that we are aware of
the problems and upcoming fixes to the same

5. Major testing areas
  - Py3 support: Need *volunteers* here to test out the Py3 support
around changed python files, if there is not enough coverage in the
regression test suite for the same
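
For item 3 above, the per-server loop is roughly the following (a sketch
condensed from the 4.1 upgrade guide flow [3]; package and repo steps
depend on the distribution, and <volname> is a placeholder):

    # on each server, one at a time
    systemctl stop glusterd
    killall glusterfs glusterfsd            # stop bricks and local daemons
    yum -y install centos-release-gluster5  # repo for the release under test
    yum -y update glusterfs-server
    systemctl start glusterd
    gluster volume heal <volname> info      # wait for pending heals to drain
    # only after heals complete, move to the next server; clients last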

Thanks,
Shyam

[1] Packages for RC0:
https://lists.gluster.org/pipermail/maintainers/2018-September/005044.html

[2] Release testing health dashboard:
https://build.gluster.org/job/nightly-release-5/

[3] 4.1 upgrade guide:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/

On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> Hi,
> 
> Release 5 has been branched today. To backport fixes to the upcoming 5.0
> release use the tracker bug [1].
> 
> We intend to roll out RC0 build by end of tomorrow for testing, unless
> the set of usual cleanup patches (op-version, some messages, gfapi
> version) land in any form of trouble.
> 
> RC1 would be around 24th of Sep. with final release tagging around 1st
> of Oct.
> 
> I would like to encourage everyone to test out the bits as appropriate
> and post updates to this thread.
> 
> Thanks,
> Shyam
> 
> [1] 5.0 tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.1.5 released

2018-09-24 Thread Shyam Ranganathan
On 09/21/2018 10:55 AM, Niels de Vos wrote:
> On Fri, Sep 21, 2018 at 02:14:45PM +, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/70/artifact/glusterfs-4.1.5.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/70/artifact/glusterfs-4.1.5.sha512sum
> 
> Packages for the CentOS Storage SIG are being built and should land in
> the testing repositories soon. Please let me know if these packages work
> OK for you.

Tested, works fine and can be released. Thanks.

> 
> # yum -y install centos-release-gluster
> # yum -y --enablerepo=centos-gluster-test* install glusterfs-server
> (run your tests)
> 
> Thanks,
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-5.0rc0 released

2018-09-18 Thread Shyam Ranganathan
We need the following projects tagged as well,

1) Glusterd2:
@Kaushal I see https://github.com/gluster/glusterd2/tree/v5.0.0-rc0 is
this the one?

We need to version this 5.0rc0 as we are now in a different versioning
scheme for GlusterFS.

2) gfapi-python: This needs to be py3 compliant as otherwise Fedora
packaging will drop the same. @ppai request you look into this.

Shyam

On 09/17/2018 08:44 PM, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/69/artifact/glusterfs-5.0rc0.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/69/artifact/glusterfs-5.0rc0.sha512sum
> 
> This release is made off jenkins-release-69
> 
> 
> 
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 5: Branched and further dates

2018-09-13 Thread Shyam Ranganathan
Hi,

Release 5 has been branched today. To backport fixes to the upcoming 5.0
release use the tracker bug [1].

We intend to roll out the RC0 build by end of tomorrow for testing, unless
the set of usual cleanup patches (op-version, some messages, gfapi
version) run into any trouble landing.

RC1 would be around 24th of Sep. with final release tagging around 1st
of Oct.

I would like to encourage everyone to test out the bits as appropriate
and post updates to this thread.

Thanks,
Shyam

[1] 5.0 tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.14 released

2018-09-13 Thread Shyam Ranganathan
On 09/13/2018 10:10 AM, Niels de Vos wrote:
> Anyone that can provide me with an ansible playbook, or even scripts
> that need to run on server and client systems is strongly encouraged to
> share them. We can then include them in the CentOS CI where client and
> server systems can get reserved with different CentOS releases.
> 
> At the moment I do not have any automation to easily run tests locally.
> I will rather invest time in setting up jobs in a real CI.

This [1] is what I do, and I have been posting the link (almost) every
time I test this. It does use docker containers, because that gives me a
clean environment quickly, but the steps would remain the same on a
plain host as well.

Does this help?

[1] Package testing: https://hackmd.io/-yC3Ol68SwaRWr8bzaL8pw#
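
For illustration, a rough sketch of the kind of smoke test involved,
assuming a throwaway CentOS 7 container and the generic Storage SIG test
repo (this is not the exact content of the notes above, and functional
testing needs more setup than a bare container):

    # start a disposable environment for package installation checks
    docker run -it centos:7 /bin/bash

    # inside the container: enable the SIG test repo, install, sanity check
    yum -y install centos-release-gluster
    yum -y --enablerepo=centos-gluster-test* install glusterfs-server glusterfs-fuse
    glusterfs --version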
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.14 released

2018-09-13 Thread Shyam Ranganathan
On 09/11/2018 04:44 AM, Niels de Vos wrote:
> On Thu, Sep 06, 2018 at 08:39:50PM +0200, Niels de Vos wrote:
>> On Thu, Sep 06, 2018 at 05:07:21PM +, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/68/artifact/glusterfs-3.12.14.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/68/artifact/glusterfs-3.12.14.sha512sum
>>
>> These packages have been built in the CentOS Storage SIG. Please enable
>> the testing repository and try them out (might take a few hours for the
>> packages to become available).
>>
>> Once done testing, let me know and I'll mark them as stable. There are
>> normally no pushes to the CentOS mirrors done on Friday, so Monday is
>> the earliest next slot.
> 
> There has been no confirmation that these packages work as expected.
> Could someone try them out and update the lists?

@Jiffin, are you taking care of this? (same for 4.1.4)

@Niels, can we make this part of the CentOS package maintainer's
responsibility? As it is mostly me doing this testing, which amounts to
waiting for the build and then exercising the bits, it would be more
efficient if done at the source, IMO. Thoughts?

Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Release calendar and status updates

2018-09-10 Thread Shyam Ranganathan
On 08/22/2018 02:03 PM, Shyam Ranganathan wrote:
> On 08/14/2018 02:28 PM, Shyam Ranganathan wrote:
>> 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)
> 
> We are postponing branching to 2nd week of September (10th), as the
> entire effort in this release has been around stability and fixing
> issues across the board.

This is delayed for the following reasons (a minimal core-inspection
sketch follows the list):
- Stability of mux regressions
  There have been a few cores in the last week, and we need at least an
analysis of them before branching. Mohit, Atin and I have looked at them
and will post a broader update later today or tomorrow.

NOTE: Branching is not being withheld for the above, as we would
backport the required fixes; post branching there is also work to do in
terms of cleaning up the branch (gfapi, versions, etc.) that takes some time.

- "Gluster 5.0" not being available as a "found in version" value in Bugzilla
  This has been resolved with the Bugzilla team today, so it is no
longer a blocker.
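
Regarding the core analysis above, a minimal first-pass inspection could
look like the following (paths are placeholders; the binary to hand to gdb
depends on which process dumped the core):

    # pull backtraces from all threads of the crashed process
    gdb /usr/sbin/glusterfsd /path/to/core -ex 'thread apply all bt' -ex 'quit'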

(read on as I still need information for some of the asks below)

> 
> Thus, we are expecting no net new features from here on till branching;
> the features that are already part of the code base, along with their
> details, are listed below.
> 



> 1) Changes to options tables in xlators (#302)
> 
> @Kaushal/GD2 team, can we call this complete? There may be no real
> release notes for this, as these changes are internal in nature, but
> checking nevertheless.

@Kaushal or GD2 contributors, ping!

> 5) Turn on Dentry fop serializer by default in brick stack (#421)
> 
> @du, the release note for this can be short, as other details are
> captured in 4.0 release notes.
> 
> However, in 4.0 release we noted a limitation with this feature as follows,
> 
> "Limitations: This feature is released as a technical preview, as
> performance implications are not known completely." (see section
> https://docs.gluster.org/en/latest/release-notes/4.0.0/#standalone )
> 
> Do we now have better data regarding the same that we can use when
> announcing the release?

@Du ping!
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.1.3 released

2018-08-28 Thread Shyam Ranganathan
On 08/28/2018 10:50 AM, Niels de Vos wrote:
> On Tue, Aug 28, 2018 at 08:45:13AM -0400, Kaleb S. KEITHLEY wrote:
>> On 08/28/2018 08:19 AM, Shyam Ranganathan wrote:
>>> On 08/27/2018 04:58 PM, Niels de Vos wrote:
>>>> Done! CentOS Storage SIG packages are built and should become available
>>>> for testing very soon (if not there yet).
>>>
>>> Niels, request you to check on this once more, I cannot see the packages
>>> in https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.1/ yet.
>>>
>>
>> Missing testing tags?
>>
>> I added tags storage7-gluster-41-testing and storage6-gluster-41-testing
>> for el7 and el6 respectively.
> 
> Thanks! Probably the conference wifi disconnected me before the builds
> were finished and tagging was done.
> 
>> I don't recall how long it takes for them to appear on
>> buildlogs.centos.org after tagging.
> 
> The packaes are available now.

Tested, works fine. Good to ship.

Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.1.3 released

2018-08-28 Thread Shyam Ranganathan
On 08/27/2018 04:58 PM, Niels de Vos wrote:
> Done! CentOS Storage SIG packages are built and should become available
> for testing very soon (if not there yet).

Niels, request you to check on this once more, I cannot see the packages
in https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.1/ yet.

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: Release calendar and status updates

2018-08-22 Thread Shyam Ranganathan
On 08/14/2018 02:28 PM, Shyam Ranganathan wrote:
> 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)

We are postponing branching to 2nd week of September (10th), as the
entire effort in this release has been around stability and fixing
issues across the board.

Thus, we are expecting no net new features from here on till branching;
the features that are already part of the code base, along with their
details, are listed below.

> 
> 3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
> branching)

As stated above, there is no late feature back port.

The features that are part of master since the 4.1 release are as follows,
with some questions for the authors:

1) Changes to options tables in xlators (#302)

@Kaushal/GD2 team, can we call this complete? There may be no real
release notes for this, as these changes are internal in nature, but
checking nevertheless.

2) CloudArchival (#387)

@susant, what is the status of this feature? Is it complete?
I am missing user documentation, and code coverage from the tests is
very low (see:
https://build.gluster.org/job/line-coverage/485/Line_20Coverage_20Report/ )

3) Quota fsck (#390)

@Sanoj, there is documentation in the github issue, but I would prefer
that the user-facing documentation move to glusterdocs instead.

Further, I see no real test coverage for the tool provided here; any
thoughts on that?

The script is not part of the tarball, and hence not in the distribution
RPMs either; what is the plan for distributing it?

4) Ensure python3 compatibility across code base (#411)

@Kaleb/others, the last patch needed to call this issue done (sans real
testing at the moment) is https://review.gluster.org/c/glusterfs/+/20868.
Request review and votes there, to get this merged before branching.

5) Turn on Dentry fop serializer by default in brick stack (#421)

@du, the release note for this can be short, as other details are
captured in 4.0 release notes.

However, in 4.0 release we noted a limitation with this feature as follows,

"Limitations: This feature is released as a technical preview, as
performance implications are not known completely." (see section
https://docs.gluster.org/en/latest/release-notes/4.0.0/#standalone )

Do we now have better data regarding the same that we can use when
announcing the release?

Thanks,
Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Test health report (week ending 19th Aug. 2018)

2018-08-20 Thread Shyam Ranganathan
Although tests have stabilized quite a bit, and from the maintainers
meeting we know that some tests have patches coming in, here is a
readout of other tests that needed a retry. We need to reduce retry
failures as well, so that test runs are free of spurious and other
failures.

Tests being worked on (from the maintainers meeting notes):
- bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t

For the other retries and failures listed below, component maintainers are
requested to look at the test cases and the resulting failures, and to post
any findings back to the lists so we can take things forward (a sketch for
re-running a single test locally follows the list):

https://build.gluster.org/job/line-coverage/481/console
20:10:01 1 test(s) needed retry
20:10:01 ./tests/basic/distribute/rebal-all-nodes-migrate.t

https://build.gluster.org/job/line-coverage/483/console
18:42:01 2 test(s) needed retry
18:42:01 ./tests/basic/tier/fops-during-migration-pause.t
18:42:01
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
(fix in progress)

https://build.gluster.org/job/regression-test-burn-in/4067/console
18:27:21 1 test(s) generated core
18:27:21 ./tests/bugs/readdir-ahead/bug-1436090.t

https://build.gluster.org/job/regression-test-with-multiplex/828/console
18:19:39 1 test(s) needed retry
18:19:39 ./tests/bugs/glusterd/validating-server-quorum.t

https://build.gluster.org/job/regression-test-with-multiplex/829/console
18:24:14 2 test(s) needed retry
18:24:14 ./tests/00-geo-rep/georep-basic-dr-rsync.t
18:24:14 ./tests/bugs/glusterd/quorum-validation.t

https://build.gluster.org/job/regression-test-with-multiplex/831/console
18:20:49 1 test(s) generated core
18:20:49 ./tests/basic/ec/ec-5-2.t
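
As referenced above, a hedged sketch of how one of these failures might be
re-run locally from a glusterfs source checkout (this assumes the tree is
built and installed on the test machine; the test path is just one example
from the list above):

    # run a single regression test via the project's wrapper script
    cd /path/to/glusterfs
    ./run-tests.sh tests/bugs/glusterd/validating-server-quorum.t

    # or run it directly under prove for verbose TAP output
    prove -vf tests/bugs/glusterd/validating-server-quorum.t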

Shyam
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Release 5: Release calendar and status updates

2018-08-14 Thread Shyam Ranganathan
This mail is to solicit the following:

Features/enhancements planned for Gluster 5 need the following from
contributors:
  - Open/use the relevant issue
  - Mark the issue with the "Release 5" milestone [1]
  - Post the issue details to the devel lists, requesting that it be
tracked for the release

NOTE: We are ~7 days from branching, and I do not have any issues marked
for the release. Please respond with the issues that are going to be part
of this release as you read this.

Calendar of activities look as follows:

1) master branch health checks (weekly, till branching)
  - Expect a status update every Monday on the various test runs

2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)

3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
branching)

4) Initial release notes readiness: (Monday) Aug-27-2018

5) RC0 build: (Monday) Aug-27-2018



6) RC1 build: (Monday) Sep-17-2018



7) GA tagging: (Monday) Oct-01-2018



8) ~week later release announcement

Per-phase go/no-go decisions will be discussed on the maintainers list.


[1] Release milestone: https://github.com/gluster/glusterfs/milestone/7
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Lock down period merge process

2018-08-14 Thread Shyam Ranganathan
On 08/09/2018 01:24 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Thu, Aug 9, 2018 at 1:25 AM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> Maintainers,
> 
> The following thread talks about a merge during a merge lockdown, with
> differing view points (this mail is not to discuss the view points).
> 
> The root of the problem is that we leave the current process to good
> faith. If we have a simple rule that we will not merge anything during a
> lock down period, this confusion and any future repetitions of the same
> would not occur.
> 
> I propose that we follow the simpler rule, and would like to hear
> thoughts around this.
> 
> This also means that in the future, we may not need to remove commit
> access for other maintainers, as we do *not* follow a good faith policy,
> and instead depend on being able to revert and announce on the threads
> why we do so.
> 
> 
> I think it is a good opportunity to establish guidelines and process so
> that we don't end up in this state in future where one needs to lock
> down the branch to make it stable. From that p.o.v. discussion on this
> thread about establishing a process for lock down probably doesn't add
> much value. My personal opinion for this instance at least is that it is
> good that it was locked down. I tend to forget things and not having the
> access to commit helped follow the process automatically :-).

The intention is that master and release branches are always maintained
in good working order. This means tests and related checks passing
*always*.

When this situation is breached, correcting it immediately is better
than letting it build up, as that would require more time and more
people to fix things up.

In an ideal world, if nightly runs fail, the next thing done would be to
examine the patches that were added between the two runs, see if they are
the cause of the failure, and back them out.
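
For illustration, the mechanics of that could look roughly like the
following (the dates and commit id are placeholders, not values from any
actual run):

    # list what landed on master between two nightly runs
    git fetch origin
    git log --oneline --since="2018-08-13 00:00" --until="2018-08-14 00:00" origin/master

    # back out a suspect patch for review
    suspect_commit=0123abcd   # placeholder commit id
    git revert --no-edit "$suspect_commit"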

Hence, calls to back out patches are something that will happen more
regularly in the future if things are breaking.

A lock down may happen if 2 consecutive nightly builds fail, so as to
rectify the situation ASAP and then move on to other work.

In short, what I wanted to say is that preventing lock downs in the
future is not a state we aspire to. Lock downs may/will happen; they are
done to get the branches stable quicker, rather than spending a long time
trying to find what caused the instability in the first place.

>  
> 
> 
> Please note, if there are extraneous situations (say reported security
> vulnerabilities that need fixes ASAP) we may need to loosen up the
> stringency, as that would take precedence over the lock down. These
> exceptions though, can be called out and hence treated as such.
> 
> Thoughts?
> 
> 
> This is again my personal opinion. We don't need to merge patches in any
> branch unless we need to make an immediate release with that patch. For
> example if there is a security issue reported we *must* make a release
> with the fix immediately so it makes sense to merge it and do the release.

Agreed, that keeps the rule simple during lock down and not open to
interpretation.

>  
> 
> 
> Shyam
> 
> PS: Added Yaniv to the CC as he reported the deviance
> 
>  Forwarded Message 
> Subject:        Re: [Gluster-devel] Release 5: Master branch health
> report
> (Week of 30th July)
> Date:   Tue, 7 Aug 2018 23:22:09 +0300
> From:   Yaniv Kaul <yk...@redhat.com>
> To:     Shyam Ranganathan <srang...@redhat.com>
> CC:     GlusterFS Maintainers <maintainers@gluster.org>, Gluster Devel
> <gluster-de...@gluster.org>, Nigel Babu <nig...@redhat.com>
> 
> 
> 
> 
> 
> On Tue, Aug 7, 2018, 10:46 PM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
>     On 08/07/2018 02:58 PM, Yaniv Kaul wrote:
>     >     The intention is to stabilize master and not add more patches
>     that my
>     >     destabilize it.
>     >
>     >
>     > https://review.gluster.org/#/c/20603/ has been merged.
>     > As far as I can see, it has nothing to do with stabilization and
>     should
>     > be reverted.
> 
>     Posted this on the gerrit review as well:
> 
>     
>     4.1 does not have nightly tests, those run on master only.
> 
> 
> That should change of course. We cannot strive for stability otherwise,
> AFAIK. 
> 
> 
>     Stability of master does not (will not), in

Re: [Gluster-Maintainers] Lock down period merge process

2018-08-14 Thread Shyam Ranganathan
On 08/09/2018 12:29 AM, Nigel Babu wrote:
> I would trust tooling that prevents merges rather than good faith. I
> have worked on projects where we trust good faith, but still enforce
> that with tooling[1]. It's highly likely for one or two committers to be
> unaware of an ongoing lock down. As the number of maintainers increase,
> the chances of someone coming back from PTO and accidentally merging
> something is high.

Agreed, I would also go with only a few people having merge rights, to
prevent the above cases.

> 
> The extraneous situation exception applies even now. I expect the
> janitors who have commit access in the event of a lock down to use their
> judgment to merge such patches.
> 
> [1]: https://mozilla-releng.net/treestatus
> 
> 
> On Thu, Aug 9, 2018 at 1:25 AM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
> Maintainers,
> 
> The following thread talks about a merge during a merge lockdown, with
> differing view points (this mail is not to discuss the view points).
> 
> The root of the problem is that we leave the current process to good
> faith. If we have a simple rule that we will not merge anything during a
> lock down period, this confusion and any future repetitions of the same
> would not occur.
> 
> I propose that we follow the simpler rule, and would like to hear
> thoughts around this.
> 
> This also means that in the future, we may not need to remove commit
> access for other maintainers, as we do *not* follow a good faith policy,
> and instead depend on being able to revert and announce on the threads
> why we do so.
> 
> Please note, if there are extraneous situations (say reported security
> vulnerabilities that need fixes ASAP) we may need to loosen up the
> stringency, as that would take precedence over the lock down. These
> exceptions though, can be called out and hence treated as such.
> 
> Thoughts?
> 
> Shyam
> 
> PS: Added Yaniv to the CC as he reported the deviance
> 
>  Forwarded Message 
> Subject:        Re: [Gluster-devel] Release 5: Master branch health
> report
> (Week of 30th July)
> Date:   Tue, 7 Aug 2018 23:22:09 +0300
> From:   Yaniv Kaul <yk...@redhat.com>
> To:     Shyam Ranganathan <srang...@redhat.com>
> CC:     GlusterFS Maintainers <maintainers@gluster.org>, Gluster Devel
> <gluster-de...@gluster.org>, Nigel Babu <nig...@redhat.com>
> 
> 
> 
> 
> 
> On Tue, Aug 7, 2018, 10:46 PM Shyam Ranganathan <srang...@redhat.com> wrote:
> 
>     On 08/07/2018 02:58 PM, Yaniv Kaul wrote:
>     >     The intention is to stabilize master and not add more patches
>     that my
>     >     destabilize it.
>     >
>     >
>     > https://review.gluster.org/#/c/20603/ has been merged.
>     > As far as I can see, it has nothing to do with stabilization and
>     should
>     > be reverted.
> 
>     Posted this on the gerrit review as well:
> 
>     
>     4.1 does not have nightly tests, those run on master only.
> 
> 
> That should change of course. We cannot strive for stability otherwise,
> AFAIK. 
> 
> 
>     Stability of master does not (will not), in the near term guarantee
>     stability of release branches, unless patches that impact code
> already
>     on release branches, get fixes on master and are back ported.
> 
>     Release branches get fixes back ported (as is normal), this fix
> and its
>     merge should not impact current master stability in any way, and
> neither
>     stability of 4.1 branch.
>     
> 
>     The current hold is on master, not on release branches. I agree that
>     merging further code changes on release branches (for example
> geo-rep
>     issues that are backported (see [1]), as there are tests that fail
>     regularly on master), may further destabilize the release
> branch. This
>     patch is not one of those.
> 
> 
> Two issues I have with the merge:
> 1. It just makes comparing master branch to release branch harder. For
> example, to understand if there's a test that fails on master but
> succeeds on release branch, or vice versa. 
> 2. It means we are not focused on stabilizing master branch.
> Y.
> 
> 
>     Merging patches on release branches are allowed by release
> owners o
