Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-30 Thread Kaushal M
On Tue, Oct 30, 2018 at 11:50 AM Kaushal M  wrote:
>
> On Tue, Oct 30, 2018 at 2:20 AM Niels de Vos  wrote:
> >
> > Hi,
> >
> > not sure what is going wrong when building GD2 for the CentOS Storage
> > SIG, but it seems to fail with some golang import issues:
> >
> >   https://cbs.centos.org/kojifiles/work/tasks/5141/595141/build.log
> >
> >   + cd glusterd2-v5.0-0
> >   ++ pwd
> >   + export GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
> >   + GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
> >   + mkdir -p src/github.com/gluster
> >   + ln -s ../../../ src/github.com/gluster/glusterd2
> >   + pushd src/github.com/gluster/glusterd2
> >   ~/build/BUILD/glusterd2-v5.0-0/src/github.com/gluster/glusterd2 
> > ~/build/BUILD/glusterd2-v5.0-0
> >   + /usr/bin/make PREFIX=/usr EXEC_PREFIX=/usr BINDIR=/usr/bin 
> > SBINDIR=/usr/sbin DATADIR=/usr/share LOCALSTATEDIR=/var/lib LOGDIR=/var/log 
> > SYSCONFDIR=/etc FASTBUILD=off glusterd2
> >   Plugins Enabled
> >   Building glusterd2 v5.0-0
> >   # github.com/gluster/glusterd2/vendor/github.com/coreos/etcd/clientv3
> >   vendor/github.com/coreos/etcd/clientv3/client.go:346: cannot use 
> > c.tokenCred (type *authTokenCredential) as type 
> > credentials.PerRPCCredentials in argument to grpc.WithPerRPCCredentials:
> > *authTokenCredential does not implement 
> > credentials.PerRPCCredentials (wrong type for GetRequestMetadata method)
> > have GetRequestMetadata("context".Context, ...string) 
> > (map[string]string, error)
> > want 
> > GetRequestMetadata("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
> >  ...string) (map[string]string, error)
> >   vendor/github.com/coreos/etcd/clientv3/client.go:421: cannot use 
> > client.balancer (type *healthBalancer) as type grpc.Balancer in argument to 
> > grpc.WithBalancer:
> > *healthBalancer does not implement grpc.Balancer (wrong type for 
> > Get method)
> > have Get("context".Context, grpc.BalancerGetOptions) 
> > (grpc.Address, func(), error)
> > want 
> > Get("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context, 
> > grpc.BalancerGetOptions) (grpc.Address, func(), error)
> >   vendor/github.com/coreos/etcd/clientv3/retry.go:145: cannot use 
> > retryKVClient literal (type *retryKVClient) as type etcdserverpb.KVClient 
> > in return argument:
> > *retryKVClient does not implement etcdserverpb.KVClient (wrong type 
> > for Compact method)
> > have Compact("context".Context, 
> > *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> > (*etcdserverpb.CompactionResponse, error)
> > want 
> > Compact("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
> >  *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> > (*etcdserverpb.CompactionResponse, error)
> >   ...
> >
> > Did anyone else try to build this on CentOS-7 (without EPEL)?
>
> This occurs when Go<1.9 is used to build GD2. The updated etcd version
> we vendor (etcd 3.3) requires Go>=1.9 to compile.
> But the failure here is strange, because CentOS-7 has golang-1.9.4 in
> its default repositories.
> Don't know what's going wrong here.

Looked at the logs again. This is an aarch64 build. It seems that
CentOS-7 for aarch64 is still on go1.8.
So, we could disable aarch64 for GD2 until the newer Go compiler is available.
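
If it helps, a small guard in the spec's build section would make the requirement explicit instead of failing deep inside the vendored etcd code. A sketch only, assuming `go` is on the PATH of the buildroot; this is not part of the current spec:

```
# Refuse to build GD2 if the buildroot's Go is older than 1.9, which the
# vendored etcd 3.3 client needs. Version handling here is illustrative.
required=1.9
have=$(go version | awk '{print $3}' | sed 's/^go//')
if [ "$(printf '%s\n' "$required" "$have" | sort -V | head -n1)" != "$required" ]; then
    echo "go ${have} found, but >= ${required} is required to build GD2" >&2
    exit 1
fi
```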

>
> >
> > Thanks,
> > Niels
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-30 Thread Kaushal M
On Tue, Oct 30, 2018 at 2:20 AM Niels de Vos  wrote:
>
> Hi,
>
> not sure what is going wrong when building GD2 for the CentOS Storage
> SIG, but it seems to fail with some golang import issues:
>
>   https://cbs.centos.org/kojifiles/work/tasks/5141/595141/build.log
>
>   + cd glusterd2-v5.0-0
>   ++ pwd
>   + export GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
>   + GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
>   + mkdir -p src/github.com/gluster
>   + ln -s ../../../ src/github.com/gluster/glusterd2
>   + pushd src/github.com/gluster/glusterd2
>   ~/build/BUILD/glusterd2-v5.0-0/src/github.com/gluster/glusterd2 
> ~/build/BUILD/glusterd2-v5.0-0
>   + /usr/bin/make PREFIX=/usr EXEC_PREFIX=/usr BINDIR=/usr/bin 
> SBINDIR=/usr/sbin DATADIR=/usr/share LOCALSTATEDIR=/var/lib LOGDIR=/var/log 
> SYSCONFDIR=/etc FASTBUILD=off glusterd2
>   Plugins Enabled
>   Building glusterd2 v5.0-0
>   # github.com/gluster/glusterd2/vendor/github.com/coreos/etcd/clientv3
>   vendor/github.com/coreos/etcd/clientv3/client.go:346: cannot use 
> c.tokenCred (type *authTokenCredential) as type credentials.PerRPCCredentials 
> in argument to grpc.WithPerRPCCredentials:
> *authTokenCredential does not implement credentials.PerRPCCredentials 
> (wrong type for GetRequestMetadata method)
> have GetRequestMetadata("context".Context, ...string) 
> (map[string]string, error)
> want 
> GetRequestMetadata("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
>  ...string) (map[string]string, error)
>   vendor/github.com/coreos/etcd/clientv3/client.go:421: cannot use 
> client.balancer (type *healthBalancer) as type grpc.Balancer in argument to 
> grpc.WithBalancer:
> *healthBalancer does not implement grpc.Balancer (wrong type for Get 
> method)
> have Get("context".Context, grpc.BalancerGetOptions) 
> (grpc.Address, func(), error)
> want 
> Get("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context, 
> grpc.BalancerGetOptions) (grpc.Address, func(), error)
>   vendor/github.com/coreos/etcd/clientv3/retry.go:145: cannot use 
> retryKVClient literal (type *retryKVClient) as type etcdserverpb.KVClient in 
> return argument:
> *retryKVClient does not implement etcdserverpb.KVClient (wrong type 
> for Compact method)
> have Compact("context".Context, 
> *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> (*etcdserverpb.CompactionResponse, error)
> want 
> Compact("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
>  *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> (*etcdserverpb.CompactionResponse, error)
>   ...
>
> Did anyone else try to build this on CentOS-7 (without EPEL)?

This occurs when Go<1.9 is used to build GD2. The updated etcd version
we vendor (etcd 3.3) requires Go>=1.9 to compile.
But the failure here is strange, because CentOS-7 has golang-1.9.4 in
its default repositories.
Don't know what's going wrong here.

>
> Thanks,
> Niels
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Kaushal M
On Fri, Oct 5, 2018 at 9:05 AM Raghavendra Gowdappa  wrote:
>
>
>
> On Fri, Oct 5, 2018 at 8:53 AM Amar Tumballi  wrote:
>>
>> Can you try below diff in your rfc, and let me know if it works?
>
>
> No. it didn't. I see the same error.
>  [rgowdapp@rgowdapp glusterfs]$ ./rfc.sh
> + rebase_changes
> + GIT_EDITOR=./rfc.sh
> + git rebase -i origin/master
> [detached HEAD e50667e] cluster/dht: clang-format dht-common.c
>  1 file changed, 10674 insertions(+), 11166 deletions(-)
>  rewrite xlators/cluster/dht/src/dht-common.c (88%)
> [detached HEAD 0734847] cluster/dht: fixes to unlinking invalid linkto file
>  1 file changed, 1 insertion(+), 1 deletion(-)
> [detached HEAD 7aeba07] rfc.sh: test - DO NOT MERGE
>  1 file changed, 8 insertions(+), 3 deletions(-)
> Successfully rebased and updated refs/heads/1635145.
> + check_backport
> + moveon=N
> + '[' master = master ']'
> + return
> + assert_diverge
> + git diff origin/master..HEAD
> + grep -q .
> ++ git log -n1 --format=%b
> ++ grep -ow -E 
> '([fF][iI][xX][eE][sS]|[uU][pP][dD][aA][tT][eE][sS])(:)?[[:space:]]+(gluster\/glusterfs)?(bz)?#[[:digit:]]+'
> ++ awk -F '#' '{print $2}'
> + reference=1635145
> + '[' -z 1635145 ']'
> ++ clang-format --version
> + clang_format='LLVM (http://llvm.org/):
>   LLVM version 3.4.2
>   Optimized build.
>   Built Dec  7 2015 (09:37:36).
>   Default target: x86_64-redhat-linux-gnu
>   Host CPU: x86-64'

This is a pretty old version of clang. Maybe this is the problem?
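
Independent of the clang-format version, the interaction between `set -e` and the command substitutions in that section is worth poking at. A debugging sketch only, not a proposed change to rfc.sh:

```
# Under `set -e`, an assignment whose command substitution exits non-zero
# aborts the script. Both substitutions in the clang-format section can do
# that: `clang-format --version` itself, or the grep/egrep pipeline when the
# commit touches no .c/.h files.
set -e
ver=$(clang-format --version)      # script dies here if clang-format exits non-zero
files=$(git show --pretty="format:" --name-only |
        grep -v "contrib/" | egrep --color=never "*\.[ch]$")   # or here if nothing matches
echo "still running; matched files: ${files}"
```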

>
>>
>> ```
>>>
>>> diff --git a/rfc.sh b/rfc.sh
>>> index 607fd7528f..4ffef26ca1 100755
>>> --- a/rfc.sh
>>> +++ b/rfc.sh
>>> @@ -321,21 +321,21 @@ main()
>>>  fi
>>>
>>>  # TODO: add clang-format command here. It will after the changes are 
>>> done everywhere else
>>> +set +e
>>>  clang_format=$(clang-format --version)
>>>  if [ ! -z "${clang_format}" ]; then
>>>  # Considering git show may not give any files as output matching 
>>> the
>>>  # criteria, good to tell script not to fail on error
>>> -set +e
>>>  list_of_files=$(git show --pretty="format:" --name-only |
>>>  grep -v "contrib/" | egrep --color=never 
>>> "*\.[ch]$");
>>>  if [ ! -z "${list_of_files}" ]; then
>>>  echo "${list_of_files}" | xargs clang-format -i
>>>  fi
>>> -set -e
>>>  else
>>>  echo "High probability of your patch not passing smoke due to 
>>> coding standard check"
>>>  echo "Please install 'clang-format' to format the patch before 
>>> submitting"
>>>  fi
>>> +set -e
>>>
>>>  if [ "$DRY_RUN" = 1 ]; then
>>>  drier='echo -e Please use the following command to send your 
>>> commits to review:\n\n'
>>
>> ```
>> -Amar
>>
>> On Fri, Oct 5, 2018 at 8:09 AM Raghavendra Gowdappa  
>> wrote:
>>>
>>> All,
>>>
>>> [rgowdapp@rgowdapp glusterfs]$ ./rfc.sh
>>> + rebase_changes
>>> + GIT_EDITOR=./rfc.sh
>>> + git rebase -i origin/master
>>> [detached HEAD 34fabdd] cluster/dht: clang-format dht-common.c
>>>  1 file changed, 10674 insertions(+), 11166 deletions(-)
>>>  rewrite xlators/cluster/dht/src/dht-common.c (88%)
>>> [detached HEAD 4bbcbf9] cluster/dht: fixes to unlinking invalid linkto file
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>> [detached HEAD c5583ea] rfc.sh: test - DO NOT MERGE
>>>  1 file changed, 8 insertions(+), 3 deletions(-)
>>> Successfully rebased and updated refs/heads/1635145.
>>> + check_backport
>>> + moveon=N
>>> + '[' master = master ']'
>>> + return
>>> + assert_diverge
>>> + git diff origin/master..HEAD
>>> + grep -q .
>>> ++ git log -n1 --format=%b
>>> ++ grep -ow -E 
>>> '([fF][iI][xX][eE][sS]|[uU][pP][dD][aA][tT][eE][sS])(:)?[[:space:]]+(gluster\/glusterfs)?(bz)?#[[:digit:]]+'
>>> ++ awk -F '#' '{print $2}'
>>> + reference=1635145
>>> + '[' -z 1635145 ']'
>>> ++ clang-format --version
>>> + clang_format='LLVM (http://llvm.org/):
>>>   LLVM version 3.4.2
>>>   Optimized build.
>>>   Built Dec  7 2015 (09:37:36).
>>>   Default target: x86_64-redhat-linux-gnu
>>>   Host CPU: x86-64'
>>>
>>> Looks like the script is exiting right after it completes clang-format 
>>> --version. Nothing after that statement gets executed (did it crash? I 
>>> don't see any cores). Any help is appreciated
>>>
>>> regards,
>>> Raghavendra
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>> --
>> Amar Tumballi (amarts)
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-18 Thread Kaushal M
On Mon, Jun 18, 2018 at 11:30 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/18/2018 12:03 PM, Kaushal M wrote:
> >
> > GD2 packages have been built for Fedora 28 and are available from the
> > updates-testing repo, and will soon be in the updates repo.
> > Packages are also available for Fedora 29/Rawhide.
> >
> I built GD2 rpms for Fedora 27 using the -vendor tar file. They are
> available at [1].
>
> Attempts to build from the non-vendor tar file failed. Logs from one of
> the failed builds are at [2] for anyone who cares to examine them to see
> why they failed.

It's because a few of the updated packages that are required were/are
still in updates-testing.
I've found that koji uses dependencies from updates-testing when building packages.
So when I built the package previously, I didn't notice that the
packages weren't actually available in the updates repo.
This has now been corrected, the dependencies in updates-testing have
moved to updates, and one additional missing dependency has been added
to updates-testing, which should move into updates soon.
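
As an aside, it is easy to check which repo a given build dependency currently comes from; the package name below is only an example:

```
# A hit only in the second query means koji's buildroot could see the package
# while a plain `dnf install` from the stable repos could not.
dnf repoquery --repo=updates golang-github-gorilla-mux-devel
dnf repoquery --repo=updates-testing golang-github-gorilla-mux-devel
```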

>
>
> [1] https://download.gluster.org/pub/gluster/glusterd2/4.1/
> [2] https://koji.fedoraproject.org/koji/taskinfo?taskID=27705828
>
>
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-18 Thread Kaushal M
On Fri, Jun 15, 2018 at 6:26 PM Niels de Vos  wrote:
>
> On Fri, Jun 15, 2018 at 05:03:38PM +0530, Kaushal M wrote:
> > On Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
> > >
> > > On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> > > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > > As brick-mux tests were failing (and still are on master), this was
> > > > > holding up the release activity.
> > > > >
> > > > > We now have a final fix [1] for the problem, and the situation has
> > > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > > >
> > > > > So we hope to branch RC0 today, and give a week for package and 
> > > > > upgrade
> > > > > testing, before getting to GA. The revised calendar stands as follows,
> > > > >
> > > > > - RC0 Tagging: 31st May, 2018
> > > > > - RC0 Builds: 1st June, 2018
> > > > > - June 4th-8th: RC0 testing
> > > > > - June 8th: GA readiness callout
> > > > > - June 11th: GA tagging
> > > >
> > > > GA has been tagged today, and is off to packaging.
> > >
> > > The glusterfs packages should land in the testing repositories from the
> > > CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
> > > Please test with the instructions from
> > > http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
> > >
> > > Thanks!
> > > Niels
> >
> > GlusterD2-v4.1.0 has been tagged and released [1].
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0
>
> Packages should become available in the CentOS Storage SIGs
> centos-gluster41-test repository (el7 only) within an hour or so.
> Testing can be done with the description from
> http://lists.gluster.org/pipermail/packaging/2018-June/000553.html, the
> package is called glusterd2.
>
> Please let me know if the build is functioning as required and I'll mark
> if for release.

GD2 packages have been built for Fedora 28 and are available from the
updates-testing repo, and will soon be in the updates repo.
Packages are also available for Fedora 29/Rawhide.

>
> Thanks,
> Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-15 Thread Kaushal M
On Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
>
> On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > As brick-mux tests were failing (and still are on master), this was
> > > holding up the release activity.
> > >
> > > We now have a final fix [1] for the problem, and the situation has
> > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > >
> > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > testing, before getting to GA. The revised calendar stands as follows,
> > >
> > > - RC0 Tagging: 31st May, 2018
> > > - RC0 Builds: 1st June, 2018
> > > - June 4th-8th: RC0 testing
> > > - June 8th: GA readiness callout
> > > - June 11th: GA tagging
> >
> > GA has been tagged today, and is off to packaging.
>
> The glusterfs packages should land in the testing repositories from the
> CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
> Please test with the instructions from
> http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
>
> Thanks!
> Niels

GlusterD2-v4.1.0 has been tagged and released [1].

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0

> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 10:55 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/04/2018 11:32 AM, Kaushal M wrote:
>
> >
> > We have a proper release this time. Source tarballs are available from [1].
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
>
> I didn't wait for you to do COPR builds.
>
> There are rpm packages for RHEL/CentOS 7, Fedora 27, Fedora 28, and
> Fedora 29 at [1].
>
> If you really want to use COPR builds instead, let me know and I'll
> replace the ones I built with your COPR builds.
>
> I think you will find (as I did) that Fedora 28 (still) doesn't have all
> the dependencies and you'll need to build from the -vendor tar file.
> Ditto for Fedora 27. If you believe this should not be the case please
> let me know.

I did find this as well. Builds failed for F27 and F28, but succeeded
on rawhide.
What makes this strange is that our dependency versions haven't
changed since 4.0,
and I was able to build on all Fedora versions then.
I'll need to investigate this.

>
> [1] https://download.gluster.org/pub/gluster/glusterd2/qa-releases/4.1rc0/
>
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 8:54 PM Kaushal M  wrote:
>
> On Mon, Jun 4, 2018 at 8:39 PM Kaushal M  wrote:
> >
> > On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
> > >
> > > On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> > > wrote:
> > > >
> > > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > > As brick-mux tests were failing (and still are on master), this was
> > > > > holding up the release activity.
> > > > >
> > > > > We now have a final fix [1] for the problem, and the situation has
> > > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > > >
> > > > > So we hope to branch RC0 today, and give a week for package and 
> > > > > upgrade
> > > > > testing, before getting to GA. The revised calendar stands as follows,
> > > > >
> > > > > - RC0 Tagging: 31st May, 2018
> > > >
> > > > RC0 Tagged and off to packaging!
> > >
> > > GD2 has been tagged as well. [1]
> > >
> > > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > I've just realized I've made a mistake. I've pushed just the tags,
> > without updating the branch.
> > And now, the branch has landed new commits without my additional commits.
> > So, I've unintentionally created a different branch.
> >
> > I'm planning on deleting the tag, and updating the branch with the
> > release commits, and tagging once again.
> > Would this be okay?
>
> Oh well. Another thing I messed up in my midnight release-attempt.
> I forgot to publish the release-draft once I'd uploaded the tarballs.
> But this makes it easier for me. Because of this no one has the
> mis-tagged release.
> I'll do what I planned above, and do a proper release this time.

We have a proper release this time. Source tarballs are available from [1].

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

>
> >
> > >
> > > >
> > > > > - RC0 Builds: 1st June, 2018
> > > > > - June 4th-8th: RC0 testing
> > > > > - June 8th: GA readiness callout
> > > > > - June 11th: GA tagging
> > > > > - +2-4 days release announcement
> > > > >
> > > > > Thanks,
> > > > > Shyam
> > > > >
> > > > > [1] Last fix for mux (and non-mux related):
> > > > > https://review.gluster.org/#/c/20109/1
> > > > >
> > > > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > > > >> Here is the current release activity calendar,
> > > > >>
> > > > >> - RC0 tagging: May 14th
> > > > >> - RC0 builds: May 15th
> > > > >> - May 15th - 25th
> > > > >>   - Upgrade testing
> > > > >>   - General testing and feedback period
> > > > >> - (on need basis) RC1 build: May 26th
> > > > >> - GA readiness call out: May, 28th
> > > > >> - GA tagging: May, 30th
> > > > >> - +2-4 days release announcement
> > > > >>
> > > > >> Thanks,
> > > > >> Shyam
> > > > >>
> > > > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > > > >>> Hi,
> > > > >>>
> > > > >>> Release 4.1 has been branched, as it was done later than 
> > > > >>> anticipated the
> > > > >>> calendar of tasks below would be reworked accordingly this week and
> > > > >>> posted to the lists.
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Shyam
> > > > >>>
> > > > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > > > >>>> Hi,
> > > > >>>>
> > > > >>>> As we have completed potential scope for 4.1 release (reflected 
> > > > >>>> here [1]
> > > > >>>> and also here [2]), it's time to talk about the schedule.
> > > > >>>>
> > > > >>>> - Branching date (and hence feature exception date): Apr 16th
> > > > >>>> - Week of Apr 16th release notes updated for all features in the 
> > > > >>>> release
> > > > >>>> - RC0 tagging: Apr 23rd
> > > > >>>> -

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 8:39 PM Kaushal M  wrote:
>
> On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
> >
> > On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> > wrote:
> > >
> > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > As brick-mux tests were failing (and still are on master), this was
> > > > holding up the release activity.
> > > >
> > > > We now have a final fix [1] for the problem, and the situation has
> > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > >
> > > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > > testing, before getting to GA. The revised calendar stands as follows,
> > > >
> > > > - RC0 Tagging: 31st May, 2018
> > >
> > > RC0 Tagged and off to packaging!
> >
> > GD2 has been tagged as well. [1]
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
>
> I've just realized I've made a mistake. I've pushed just the tags,
> without updating the branch.
> And now, the branch has landed new commits without my additional commits.
> So, I've unintentionally created a different branch.
>
> I'm planning on deleting the tag, and updating the branch with the
> release commits, and tagging once again.
> Would this be okay?

Oh well. Another thing I messed up in my midnight release-attempt.
I forgot to publish the release-draft once I'd uploaded the tarballs.
But this makes it easier for me. Because of this no one has the
mis-tagged release.
I'll do what I planned above, and do a proper release this time.

>
> >
> > >
> > > > - RC0 Builds: 1st June, 2018
> > > > - June 4th-8th: RC0 testing
> > > > - June 8th: GA readiness callout
> > > > - June 11th: GA tagging
> > > > - +2-4 days release announcement
> > > >
> > > > Thanks,
> > > > Shyam
> > > >
> > > > [1] Last fix for mux (and non-mux related):
> > > > https://review.gluster.org/#/c/20109/1
> > > >
> > > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > > >> Here is the current release activity calendar,
> > > >>
> > > >> - RC0 tagging: May 14th
> > > >> - RC0 builds: May 15th
> > > >> - May 15th - 25th
> > > >>   - Upgrade testing
> > > >>   - General testing and feedback period
> > > >> - (on need basis) RC1 build: May 26th
> > > >> - GA readiness call out: May, 28th
> > > >> - GA tagging: May, 30th
> > > >> - +2-4 days release announcement
> > > >>
> > > >> Thanks,
> > > >> Shyam
> > > >>
> > > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > > >>> Hi,
> > > >>>
> > > >>> Release 4.1 has been branched, as it was done later than anticipated 
> > > >>> the
> > > >>> calendar of tasks below would be reworked accordingly this week and
> > > >>> posted to the lists.
> > > >>>
> > > >>> Thanks,
> > > >>> Shyam
> > > >>>
> > > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > > >>>> Hi,
> > > >>>>
> > > >>>> As we have completed potential scope for 4.1 release (reflected here 
> > > >>>> [1]
> > > >>>> and also here [2]), it's time to talk about the schedule.
> > > >>>>
> > > >>>> - Branching date (and hence feature exception date): Apr 16th
> > > >>>> - Week of Apr 16th release notes updated for all features in the 
> > > >>>> release
> > > >>>> - RC0 tagging: Apr 23rd
> > > >>>> - Week of Apr 23rd, upgrade and other testing
> > > >>>> - RCNext: May 7th (if critical failures, or exception features 
> > > >>>> arrive late)
> > > >>>> - RCNext: May 21st
> > > >>>> - Week of May 21st, final upgrade and testing
> > > >>>> - GA readiness call out: May, 28th
> > > >>>> - GA tagging: May, 30th
> > > >>>> - +2-4 days release announcement
> > > >>>>
> > > >>>> and, review focus. As in older releases, I am starring reviews that 
> > > >>>> are
> > > >>>

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
>
> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
> >
> > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > As brick-mux tests were failing (and still are on master), this was
> > > holding up the release activity.
> > >
> > > We now have a final fix [1] for the problem, and the situation has
> > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > >
> > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > testing, before getting to GA. The revised calendar stands as follows,
> > >
> > > - RC0 Tagging: 31st May, 2018
> >
> > RC0 Tagged and off to packaging!
>
> GD2 has been tagged as well. [1]
>
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

I've just realized I've made a mistake. I've pushed just the tags,
without updating the branch.
And now, the branch has landed new commits without my additional commits.
So, I've unintentionally created a different branch.

I'm planning on deleting the tag, and updating the branch with the
release commits, and tagging once again.
Would this be okay?
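
For the record, the cleanup being proposed would look roughly like this; the branch name is assumed and the commit to re-apply is left as a placeholder:

```
# Sketch of the proposed re-tagging, not a record of commands actually run.
git push origin --delete v4.1.0-rc0       # drop the mis-placed tag from GitHub
git checkout release-4.1                  # release branch name assumed
git cherry-pick <release-commit-sha>      # re-apply the missing release commits
git push origin release-4.1
git tag -a v4.1.0-rc0 -m 'glusterd2 v4.1.0-rc0'
git push origin v4.1.0-rc0
```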

>
> >
> > > - RC0 Builds: 1st June, 2018
> > > - June 4th-8th: RC0 testing
> > > - June 8th: GA readiness callout
> > > - June 11th: GA tagging
> > > - +2-4 days release announcement
> > >
> > > Thanks,
> > > Shyam
> > >
> > > [1] Last fix for mux (and non-mux related):
> > > https://review.gluster.org/#/c/20109/1
> > >
> > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > >> Here is the current release activity calendar,
> > >>
> > >> - RC0 tagging: May 14th
> > >> - RC0 builds: May 15th
> > >> - May 15th - 25th
> > >>   - Upgrade testing
> > >>   - General testing and feedback period
> > >> - (on need basis) RC1 build: May 26th
> > >> - GA readiness call out: May, 28th
> > >> - GA tagging: May, 30th
> > >> - +2-4 days release announcement
> > >>
> > >> Thanks,
> > >> Shyam
> > >>
> > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > >>> Hi,
> > >>>
> > >>> Release 4.1 has been branched, as it was done later than anticipated the
> > >>> calendar of tasks below would be reworked accordingly this week and
> > >>> posted to the lists.
> > >>>
> > >>> Thanks,
> > >>> Shyam
> > >>>
> > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > >>>> Hi,
> > >>>>
> > >>>> As we have completed potential scope for 4.1 release (reflected here 
> > >>>> [1]
> > >>>> and also here [2]), it's time to talk about the schedule.
> > >>>>
> > >>>> - Branching date (and hence feature exception date): Apr 16th
> > >>>> - Week of Apr 16th release notes updated for all features in the 
> > >>>> release
> > >>>> - RC0 tagging: Apr 23rd
> > >>>> - Week of Apr 23rd, upgrade and other testing
> > >>>> - RCNext: May 7th (if critical failures, or exception features arrive 
> > >>>> late)
> > >>>> - RCNext: May 21st
> > >>>> - Week of May 21st, final upgrade and testing
> > >>>> - GA readiness call out: May, 28th
> > >>>> - GA tagging: May, 30th
> > >>>> - +2-4 days release announcement
> > >>>>
> > >>>> and, review focus. As in older releases, I am starring reviews that are
> > >>>> submitted against features, this should help if you are looking to help
> > >>>> accelerate feature commits for the release (IOW, this list is the watch
> > >>>> list for reviews). This can be found handy here [3].
> > >>>>
> > >>>> So, branching is in about 4 weeks!
> > >>>>
> > >>>> Thanks,
> > >>>> Shyam
> > >>>>
> > >>>> [1] Issues marked against release 4.1:
> > >>>> https://github.com/gluster/glusterfs/milestone/5
> > >>>>
> > >>>> [2] github project lane for 4.1:
> > >>>> https://github.com/gluster/glusterfs/projects/1#column-1075416
> > >>>>
> > >>>> [3] Review focus dashboard:
> > >>>> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
> > >>>> ___
> > >>>> maintainers mailing list
> > >>>> maintain...@gluster.org
> > >>>> http://lists.gluster.org/mailman/listinfo/maintainers
> > >>>>
> > >>> ___
> > >>> maintainers mailing list
> > >>> maintain...@gluster.org
> > >>> http://lists.gluster.org/mailman/listinfo/maintainers
> > >>>
> > >> ___
> > >> Gluster-devel mailing list
> > >> Gluster-devel@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >>
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > maintainers mailing list
> > maintain...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 5:29 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/02/2018 07:47 AM, Niels de Vos wrote:
> > On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
> >> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> >> wrote:
> >>>
> >>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> >>>> As brick-mux tests were failing (and still are on master), this was
> >>>> holding up the release activity.
> >>>>
> >>>> We now have a final fix [1] for the problem, and the situation has
> >>>> improved over a series of fixes and reverts on the 4.1 branch as well.
> >>>>
> >>>> So we hope to branch RC0 today, and give a week for package and upgrade
> >>>> testing, before getting to GA. The revised calendar stands as follows,
> >>>>
> >>>> - RC0 Tagging: 31st May, 2018
> >>>
> >>> RC0 Tagged and off to packaging!
> >>
> >> GD2 has been tagged as well. [1]
> >>
> >> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> > directly on CentOS, or does it need additional dependencies? (Note that
> > CentOS does not allow dependencies from Fedora EPEL.)
> >
>
> I checked, and was surprised to see that gd2 made it into Fedora[1]. I
> guess I missed the announcement.

This happened in time for the 4.0 release. I did send out an announcement IIRC.

>
> But I was disappointed to see that packages have only been built for
> Fedora29/rawhide. We've been shipping glusterfs-4.0 in Fedora28 and even
> if [2] didn't say so, I would think it would be obvious that we should
> have packages for gd2 in F28 too.
>

The package got accepted during the F28 freeze. So I was only able to
request a branch for F27 when 4.0 happened.
I should have gotten around to requesting a branch for F28, but I forgot.

> And it's good that RC0 was tagged in a timely matter. Who is building
> those packages?

I can build the RPMs. I'll build them on the COPR I've been
maintaining. But I don't believe that those can be used as the
official RPM sources.
So where should I build and how should they be distributed?

>
> [1] https://koji.fedoraproject.org/koji/packageinfo?packageID=26508
> [2] https://docs.gluster.org/en/latest/Install-Guide/Community_Packages/
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 7:05 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/02/2018 07:47 AM, Niels de Vos wrote:
> > On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
> >> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> >> wrote:
> >>>
> >>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> >>>> As brick-mux tests were failing (and still are on master), this was
> >>>> holding up the release activity.
> >>>>
> >>>> We now have a final fix [1] for the problem, and the situation has
> >>>> improved over a series of fixes and reverts on the 4.1 branch as well.
> >>>>
> >>>> So we hope to branch RC0 today, and give a week for package and upgrade
> >>>> testing, before getting to GA. The revised calendar stands as follows,
> >>>>
> >>>> - RC0 Tagging: 31st May, 2018
> >>>
> >>> RC0 Tagged and off to packaging!
> >>
> >> GD2 has been tagged as well. [1]
> >>
> >> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> > directly on CentOS, or does it need additional dependencies? (Note that
> > CentOS does not allow dependencies from Fedora EPEL.)
> >
>
> My recollection of how this works is that one would need to build from
> the "bundled vendor" tarball.
>
> Except when I tried to download the vendor bundle tarball I got the same
> bits as the unbundled tarball.
>
> ISTR Kaushal had to do something extra to generate the vendor bundled
> tarball. It doesn't appear that that occured.

That is right. For CentOS/EL, the default in the spec is to use the
vendored tarball. Using this, the only requirement to build GD2 is
golang>=1.8.
Are you sure you're downloading the right tarball [1]?

[1]: 
https://github.com/gluster/glusterd2/releases/download/v4.1.0-rc0/glusterd2-v4.1.0-rc0-vendor.tar.xz
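
For reference, building from the vendored tarball boils down to the steps below. A sketch of what the spec does, with the extracted directory name inferred from the tarball name:

```
# Build GD2 from the -vendor tarball; only golang >= 1.8 is needed, no
# network access and no unbundled Go dependencies.
curl -LO https://github.com/gluster/glusterd2/releases/download/v4.1.0-rc0/glusterd2-v4.1.0-rc0-vendor.tar.xz
tar -xf glusterd2-v4.1.0-rc0-vendor.tar.xz
cd glusterd2-v4.1.0-rc0
export GOPATH="$(pwd):/usr/share/gocode"
mkdir -p src/github.com/gluster
ln -s ../../../ src/github.com/gluster/glusterd2
cd src/github.com/gluster/glusterd2
make glusterd2
```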
>
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-01 Thread Kaushal M
On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
>
> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > As brick-mux tests were failing (and still are on master), this was
> > holding up the release activity.
> >
> > We now have a final fix [1] for the problem, and the situation has
> > improved over a series of fixes and reverts on the 4.1 branch as well.
> >
> > So we hope to branch RC0 today, and give a week for package and upgrade
> > testing, before getting to GA. The revised calendar stands as follows,
> >
> > - RC0 Tagging: 31st May, 2018
>
> RC0 Tagged and off to packaging!

GD2 has been tagged as well. [1]

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

>
> > - RC0 Builds: 1st June, 2018
> > - June 4th-8th: RC0 testing
> > - June 8th: GA readiness callout
> > - June 11th: GA tagging
> > - +2-4 days release announcement
> >
> > Thanks,
> > Shyam
> >
> > [1] Last fix for mux (and non-mux related):
> > https://review.gluster.org/#/c/20109/1
> >
> > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> >> Here is the current release activity calendar,
> >>
> >> - RC0 tagging: May 14th
> >> - RC0 builds: May 15th
> >> - May 15th - 25th
> >>   - Upgrade testing
> >>   - General testing and feedback period
> >> - (on need basis) RC1 build: May 26th
> >> - GA readiness call out: May, 28th
> >> - GA tagging: May, 30th
> >> - +2-4 days release announcement
> >>
> >> Thanks,
> >> Shyam
> >>
> >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> >>> Hi,
> >>>
> >>> Release 4.1 has been branched, as it was done later than anticipated the
> >>> calendar of tasks below would be reworked accordingly this week and
> >>> posted to the lists.
> >>>
> >>> Thanks,
> >>> Shyam
> >>>
> >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
>  Hi,
> 
>  As we have completed potential scope for 4.1 release (reflected here [1]
>  and also here [2]), it's time to talk about the schedule.
> 
>  - Branching date (and hence feature exception date): Apr 16th
>  - Week of Apr 16th release notes updated for all features in the release
>  - RC0 tagging: Apr 23rd
>  - Week of Apr 23rd, upgrade and other testing
>  - RCNext: May 7th (if critical failures, or exception features arrive 
>  late)
>  - RCNext: May 21st
>  - Week of May 21st, final upgrade and testing
>  - GA readiness call out: May, 28th
>  - GA tagging: May, 30th
>  - +2-4 days release announcement
> 
>  and, review focus. As in older releases, I am starring reviews that are
>  submitted against features, this should help if you are looking to help
>  accelerate feature commits for the release (IOW, this list is the watch
>  list for reviews). This can be found handy here [3].
> 
>  So, branching is in about 4 weeks!
> 
>  Thanks,
>  Shyam
> 
>  [1] Issues marked against release 4.1:
>  https://github.com/gluster/glusterfs/milestone/5
> 
>  [2] github project lane for 4.1:
>  https://github.com/gluster/glusterfs/projects/1#column-1075416
> 
>  [3] Review focus dashboard:
>  https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
>  ___
>  maintainers mailing list
>  maintain...@gluster.org
>  http://lists.gluster.org/mailman/listinfo/maintainers
> 
> >>> ___
> >>> maintainers mailing list
> >>> maintain...@gluster.org
> >>> http://lists.gluster.org/mailman/listinfo/maintainers
> >>>
> >> ___
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Kaushal M
On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi  wrote:
> Meeting date: 03/07/2018 (March 7th, 2018. 19:30 IST, 14:00 UTC, 09:00 EST)
>
> BJ Link
>
> Bridge: https://bluejeans.com/205933580
> Download : https://bluejeans.com/s/mOGb7
>
> Attendance
>
> [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de Vos,
> Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh
>
> Agenda
>
> AI from previous meeting:
>
> Email on version numbers: Still pending - Amar/Shyam
>
> Planning to do this by Friday (9th March)
>
> can we run regression suite with GlusterD2
>
> OK with failures, but can we run?
> Nigel to run tests and give outputs

Apologies for not attending this meeting.

I can help get this up and running.

But I also wanted to set up a smoke job to run GD2 CI against glusterfs patches.
This will help us catch changes that adversely affect GD2, in
particular changes to the option_t and xlator_api_t structs.
It will not be a particularly long test to run. On average, the current
GD2 centos-ci jobs finish in under 4 minutes.
I expect that building glusterfs will add about 5 minutes more.
This job should be simple enough to set up, and I'd like it if we can
set it up first.
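
A job along these lines might look like the following. This is a hedged sketch; the checkout layout, configure flags and make targets are assumptions, not the actual centos-ci job definition:

```
#!/bin/bash
# Hypothetical smoke job: build the glusterfs change under test, install it,
# then build and test GD2 against the freshly installed bits.
set -e

cd glusterfs
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make -j"$(nproc)"
sudo make install

cd ../glusterd2
make glusterd2
make test
```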

>
> Line coverage tests:
>
> SIGKILL was sent to processes, so the output was not proper.
> Patch available, Nigel to test with the patch and give output before
> merging.
> [Nigel] what happens with GD2 ?
>
> [Shyam] https://github.com/gojp/goreportcard
> [Shyam] (what I know)
> https://goreportcard.com/report/github.com/gluster/glusterd2
>
> Gluster 4.0 is tagged:
>
> Retrospect meeting: Can this be google form?
>
> It usually is, let me find and paste the older one:
>
> 3.10 retro:
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
> 3.11 retro: https://www.gluster.org/3-11-retrospectives/
>
> [Nigel] Can we do it a less of form, and keep it more generic?
> [Shyam] Thats what mostly the form tries to do. Prefer meeting & Form
>
> Gluster Infra team is testing the distributed testing framework contributed
> from FB
>
> [Nigel] Any issues, would like to collaborate
> [Jeff] Happy to collaborate, let me know.
>
> Call out for features on 4-next
>
> should the next release be LTM and 4.1 and then pick the version number
> change proposal later.
>
> Bugzilla Automation:
>
> Planning to test it out next week.
> AI: send the email first, and target to take the patches before next
> maintainers meeting.
>
> Round Table
>
> [Kaleb] space is tight on download.gluster.org
> * may we delete, e.g. purpleidea files? experimental (freebsd stuff from
> 2014)?
> * any way to get more space?
> * [Nigel] Should be possible to do it, file a bug
> * AI: Kaleb to file a bug
> *
>
> yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not owned
> by root:root. They were rsync_aide:rsync_aide. Was there an aborted rsync
> job or something that left them like that?
>
> most glusterfs 4.0 packages are on download.g.o now. Starting on gd2
> packages now.
>
> el7 packages on on buildroot if someone (shyam?) wants to get a head start
> on testing them
>
> [Nigel] Testing IPv6 (with IPv4 on too), only 4 tests are consistently
> failing. Need to look at it.
>
>
>
> --
> Amar Tumballi (amarts)
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-04 Thread Kaushal M
On Sat, Mar 3, 2018 at 12:34 AM, Shyam Ranganathan <srang...@redhat.com> wrote:
> On 03/02/2018 04:24 AM, Kaushal M wrote:
>> I was able to create libglusterfsd, with just the pmap_signout and
>> autoscale functions.
>> Turned out to be easy enough to do in the end.
>> I've pushed a patch for review [1] on master.
>
> Thanks!
>
>>
>> I've also opened new bugs to track the fixes for master[2] and
>> release-4.0[3]. They have been made blockers to the glusterfs-4.0.0
>> tracker bug [4].
>>
>> Shyam,
>> To backport the fix from master to release-4.0, also requires
>> backporting one more change [5].
>> Would you be okay with backporting that as well, in a single patch?
>
> I am *not* in favor of taking patch [5], it is a feature change, and so
> late (considering the current build stability as well, if you are
> following other threads).
>
> Is there a way for your patch to land without the dependency?
>

The autoscale function had the dependency. But it is trivial to update
the patch to move the function, even without the dependency.
Later backports from master will be slightly harder though.

>>
>> [1]: https://review.gluster.org/19657
>> [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1550895
>> [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1550894
>> [4]: https://bugzilla.redhat.com/show_bug.cgi?id=1539842
>> [5]: https://review.gluster.org/19337
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-04 Thread Kaushal M
On Sat, Mar 3, 2018 at 4:25 AM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> On 03/02/2018 04:24 AM, Kaushal M wrote:
>> [snip]
>> I was able to create libglusterfsd, with just the pmap_signout and
>> autoscale functions.
>> Turned out to be easy enough to do in the end.
>> I've pushed a patch for review [1] on master.
>>
>> I've also opened new bugs to track the fixes for master[2] and
>> release-4.0[3]. They have been made blockers to the glusterfs-4.0.0
>> tracker bug [4].
>
> I really don't like creating this libglusterfsd.so with just two
> functions to get around this. It feels like a quick-and-dirty hack.
> (There's never time to do it right, but there's always time to do it
> over. Except there isn't.)
>
> I've posted a change at https://review.gluster.org/19664 that moves
> those two functions to libgfrpc.so. It works on my f28/rawhide box and
> the various centos and fedora smoke test boxes. No tricky linker flags,
> or anything else, required. Regression is running now.
>
> (And truth be told I'd like to also move glusterfs_mgmt_pmap_signin()
> into libgfrpc.so too. Just for (foolish) consistency/symmetry.)

Moving a specific RPC client implementation into the RPC library doesn't
seem right to me.
But otherwise, this change is okay with me.

>
>>
>> Shyam,
>> To backport the fix from master to release-4.0, also requires
>> backporting one more change [5].
>> Would you be okay with backporting that as well, in a single patch?
>>
>> [1]: https://review.gluster.org/19657
>> [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1550895
>> [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1550894
>> [4]: https://bugzilla.redhat.com/show_bug.cgi?id=1539842
>> [5]: https://review.gluster.org/19337
>>
>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Kaleb
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>>>
>>>
>>> --
>>> Milind
>>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-02 Thread Kaushal M
On Thu, Mar 1, 2018 at 6:22 PM, Milind Changire <mchan...@redhat.com> wrote:
> Maybe: gcc -Wl,--start-group foo.o bar.o -Wl,--end-group
>
> quote from man ld:
> It is best to use it only when there are unavoidable circular references
> between two or more archives.
>
>
> On Thu, Mar 1, 2018 at 6:18 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> On Thu, Mar 1, 2018 at 6:14 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> > On Thu, Mar 1, 2018 at 12:52 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> >> On Wed, Feb 28, 2018 at 9:50 PM, Kaleb S. KEITHLEY
>> >> <kkeit...@redhat.com> wrote:
>> >>> On 02/28/2018 10:49 AM, Kaushal M wrote:
>> >>>> We have a GlusterD2-4.0.0rc1 release.
>> >>>>
>> >>>> Aravinda, Prashanth and the rest of the GD2 developers have been
>> >>>> working hard on getting more stuff merged into GD2 before the 4.0
>> >>>> release.
>> >>>>
>> >>>> At the same time I have been working on getting GD2 packaged for
>> >>>> Fedora.
>> >>>> I've been able to get all the required dependencies updated and have
>> >>>> submitted to the package maintainer for merging.
>> >>>> I'm now waiting on the maintainer to accept those updates. Once the
>> >>>> updates have been accepted, the GD2 spec can get accepted [2].
>> >>>> I expect this to take at least another week on the whole.
>> >>>>
>> >>>> In the meantime, I've been building all the updated dependencies and
>> >>>> glusterd2-v4.0.0rc1, on the GD2 copr [3].
>> >>>>
>> >>>> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
>> >>>> release from [4]. And this is where I hit the blocker.
>> >>>>
>> >>>> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
>> >>>> opened an issue on the GD2 issue tracker for it [5].
>> >>>> In short, GD2 fails to read options from xlators, as dlopen fails
>> >>>> with
>> >>>> a missing symbol error.
>> >>>>
>> >>>> ```
>> >>>> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
>> >>>>
>> >>>> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
>> >>>> failed; dlerror =
>> >>>> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
>> >>>> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
>> >>>
>> >>>
>> >>> see https://review.gluster.org/#/c/19225/
>> >>>
>> >>>
>> >>> glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd
>> >>> dlopens
>> >>> server.so the run-time linker can resolve the symbol — for now.
>> >>>
>> >>> Tighter run-time linker semantics coming in, e.g. Fedora 28, means
>> >>> this
>> >>> will stop working in the near future even when RTLD_LAZY is passed as
>> >>> a
>> >>> flag. (As I understand the proposed changes.)
>> >>>
>> >>> It should still work, e.g., on Fedora 27 and el7 though.
>> >>>
>> >>> glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads())
>> >>> really
>> >>> need to be moved to libglusterfs. ASAP. Doing that will resolve this
>> >>> issue.
>> >>
>> >> Thanks for the pointer Kaleb!
>> >>
>> >> But, I'm testing on Fedora 27, where this shouldn't theoretically
>> >> happen.
>> >> So then, why am I hitting this. Is it something to do with the way the
>> >> packages are built?
>> >> Or is there some runtime ld configuration that has been set up.
>> >>
>> >> In any case, we should push and get the offending functions moved into
>> >> libglusterfs.
>> >> That should solve the problem for us.
>> >
>> > I took a shot at this, and it's not as simple as it appeared.
>> > I ended up in a recursive linking situation with libglusterfs,
>> > libgfxdr and libgfrpc.
>> > Looks like the solution is to create a libglusterfsd.
>>
>> I see two ways to do this.
>> 1. Make a library out of the whole glusterfsd.
>> Rename `main` to `init`
>> And then crea

Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-01 Thread Kaushal M
On Thu, Mar 1, 2018 at 6:14 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Mar 1, 2018 at 12:52 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Wed, Feb 28, 2018 at 9:50 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> 
>> wrote:
>>> On 02/28/2018 10:49 AM, Kaushal M wrote:
>>>> We have a GlusterD2-4.0.0rc1 release.
>>>>
>>>> Aravinda, Prashanth and the rest of the GD2 developers have been
>>>> working hard on getting more stuff merged into GD2 before the 4.0
>>>> release.
>>>>
>>>> At the same time I have been working on getting GD2 packaged for Fedora.
>>>> I've been able to get all the required dependencies updated and have
>>>> submitted to the package maintainer for merging.
>>>> I'm now waiting on the maintainer to accept those updates. Once the
>>>> updates have been accepted, the GD2 spec can get accepted [2].
>>>> I expect this to take at least another week on the whole.
>>>>
>>>> In the meantime, I've been building all the updated dependencies and
>>>> glusterd2-v4.0.0rc1, on the GD2 copr [3].
>>>>
>>>> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
>>>> release from [4]. And this is where I hit the blocker.
>>>>
>>>> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
>>>> opened an issue on the GD2 issue tracker for it [5].
>>>> In short, GD2 fails to read options from xlators, as dlopen fails with
>>>> a missing symbol error.
>>>>
>>>> ```
>>>> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
>>>> 
>>>> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
>>>> failed; dlerror =
>>>> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
>>>> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
>>>
>>>
>>> see https://review.gluster.org/#/c/19225/
>>>
>>>
>>> glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
>>> server.so the run-time linker can resolve the symbol — for now.
>>>
>>> Tighter run-time linker semantics coming in, e.g. Fedora 28, means this
>>> will stop working in the near future even when RTLD_LAZY is passed as a
>>> flag. (As I understand the proposed changes.)
>>>
>>> It should still work, e.g., on Fedora 27 and el7 though.
>>>
>>> glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
>>> need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.
>>
>> Thanks for the pointer Kaleb!
>>
>> But, I'm testing on Fedora 27, where this shouldn't theoretically happen.
>> So then, why am I hitting this. Is it something to do with the way the
>> packages are built?
>> Or is there some runtime ld configuration that has been set up.
>>
>> In any case, we should push and get the offending functions moved into
>> libglusterfs.
>> That should solve the problem for us.
>
> I took a shot at this, and it's not as simple as it appeared.
> I ended up in a recursive linking situation with libglusterfs,
> libgfxdr and libgfrpc.
> Looks like the solution is to create a libglusterfsd.

I see two ways to do this.
1. Make a library out of the whole glusterfsd.
Rename `main` to `init`
And then create a simple executable which loads this library and calls `init`.

Or,
2. Create a very small library with just the pmap_signout and
autoscale functions. And use that instead.

If anyone else has a better idea about how to do this, please let me know.

>
>>
>>>
>>> --
>>>
>>> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-01 Thread Kaushal M
On Thu, Mar 1, 2018 at 12:52 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Wed, Feb 28, 2018 at 9:50 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> 
> wrote:
>> On 02/28/2018 10:49 AM, Kaushal M wrote:
>>> We have a GlusterD2-4.0.0rc1 release.
>>>
>>> Aravinda, Prashanth and the rest of the GD2 developers have been
>>> working hard on getting more stuff merged into GD2 before the 4.0
>>> release.
>>>
>>> At the same time I have been working on getting GD2 packaged for Fedora.
>>> I've been able to get all the required dependencies updated and have
>>> submitted to the package maintainer for merging.
>>> I'm now waiting on the maintainer to accept those updates. Once the
>>> updates have been accepted, the GD2 spec can get accepted [2].
>>> I expect this to take at least another week on the whole.
>>>
>>> In the meantime, I've been building all the updated dependencies and
>>> glusterd2-v4.0.0rc1, on the GD2 copr [3].
>>>
>>> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
>>> release from [4]. And this is where I hit the blocker.
>>>
>>> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
>>> opened an issue on the GD2 issue tracker for it [5].
>>> In short, GD2 fails to read options from xlators, as dlopen fails with
>>> a missing symbol error.
>>>
>>> ```
>>> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
>>> 
>>> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
>>> failed; dlerror =
>>> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
>>> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
>>
>>
>> see https://review.gluster.org/#/c/19225/
>>
>>
>> glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
>> server.so the run-time linker can resolve the symbol — for now.
>>
>> Tighter run-time linker semantics coming in, e.g. Fedora 28, means this
>> will stop working in the near future even when RTLD_LAZY is passed as a
>> flag. (As I understand the proposed changes.)
>>
>> It should still work, e.g., on Fedora 27 and el7 though.
>>
>> glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
>> need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.
>
> Thanks for the pointer Kaleb!
>
> But, I'm testing on Fedora 27, where this shouldn't theoretically happen.
> So then, why am I hitting this. Is it something to do with the way the
> packages are built?
> Or is there some runtime ld configuration that has been set up.
>
> In any case, we should push and get the offending functions moved into
> libglusterfs.
> That should solve the problem for us.

I took a shot at this, and it's not as simple as it appeared.
I ended up in a recursive linking situation with libglusterfs,
libgfxdr and libgfrpc.
Looks like the solution is to create a libglusterfsd.
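
To make the circular-archive situation concrete, here is a tiny self-contained example of the case `-Wl,--start-group` (as Milind suggested above) exists for. The file and symbol names are made up and have nothing to do with the real libraries:

```
# liba.a and libb.a reference each other. With plain left-to-right linking
# the linker never returns to liba.a to pick up c.o, so the link fails;
# grouping the archives makes it rescan them until everything resolves.
cat > a.c <<'EOF'
void b(void);
void a(void) { b(); }
EOF
cat > c.c <<'EOF'
void c(void) {}
EOF
cat > b.c <<'EOF'
void c(void);
void b(void) { c(); }
EOF
cat > main.c <<'EOF'
void a(void);
int main(void) { a(); return 0; }
EOF
gcc -c a.c b.c c.c main.c
ar rcs liba.a a.o c.o
ar rcs libb.a b.o
gcc main.o -L. -la -lb -o demo || echo "plain ordering fails: c() is never pulled in"
gcc main.o -L. -Wl,--start-group -la -lb -Wl,--end-group -o demo && ./demo
```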

>
>>
>> --
>>
>> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaushal M
On Wed, Feb 28, 2018 at 9:50 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> On 02/28/2018 10:49 AM, Kaushal M wrote:
>> We have a GlusterD2-4.0.0rc1 release.
>>
>> Aravinda, Prashanth and the rest of the GD2 developers have been
>> working hard on getting more stuff merged into GD2 before the 4.0
>> release.
>>
>> At the same time I have been working on getting GD2 packaged for Fedora.
>> I've been able to get all the required dependencies updated and have
>> submitted to the package maintainer for merging.
>> I'm now waiting on the maintainer to accept those updates. Once the
>> updates have been accepted, the GD2 spec can get accepted [2].
>> I expect this to take at least another week on the whole.
>>
>> In the meantime, I've been building all the updated dependencies and
>> glusterd2-v4.0.0rc1, on the GD2 copr [3].
>>
>> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
>> release from [4]. And this is where I hit the blocker.
>>
>> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
>> opened an issue on the GD2 issue tracker for it [5].
>> In short, GD2 fails to read options from xlators, as dlopen fails with
>> a missing symbol error.
>>
>> ```
>> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
>> 
>> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
>> failed; dlerror =
>> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
>> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
>
>
> see https://review.gluster.org/#/c/19225/
>
>
> glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
> server.so the run-time linker can resolve the symbol — for now.
>
> Tighter run-time linker semantics coming in, e.g. Fedora 28, means this
> will stop working in the near future even when RTLD_LAZY is passed as a
> flag. (As I understand the proposed changes.)
>
> It should still work, e.g., on Fedora 27 and el7 though.
>
> glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
> need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.

Thanks for the pointer Kaleb!

But I'm testing on Fedora 27, where this theoretically shouldn't happen.
So why am I hitting it? Is it something to do with the way the
packages are built?
Or is there some runtime ld configuration that has been set up?

In any case, we should push and get the offending functions moved into
libglusterfs.
That should solve the problem for us.

>
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaushal M
We have a GlusterD2-4.0.0rc1 release.

Aravinda, Prashanth and the rest of the GD2 developers have been
working hard on getting more stuff merged into GD2 before the 4.0
release.

At the same time I have been working on getting GD2 packaged for Fedora.
I've been able to get all the required dependencies updated and have
submitted to the package maintainer for merging.
I'm now waiting on the maintainer to accept those updates. Once the
updates have been accepted, the GD2 spec can get accepted [2].
I expect this to take at least another week on the whole.

In the meantime, I've been building all the updated dependencies and
glusterd2-v4.0.0rc1, on the GD2 copr [3].

I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
release from [4]. And this is where I hit the blocker.

GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
opened an issue on the GD2 issue tracker for it [5].
In short, GD2 fails to read options from xlators, as dlopen fails with
a missing symbol error.

```
FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
failed; dlerror =
/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"

```
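
For anyone who wants to poke at this outside of GD2, here is a minimal,
hypothetical sketch (not GD2's actual loader code) of loading an xlator
shared object from Go via cgo; a missing symbol shows up the same way,
through dlerror():

```
package main

/*
#cgo LDFLAGS: -ldl
#include <dlfcn.h>
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"os"
	"unsafe"
)

func main() {
	// Example path; pass any xlator .so as the first argument instead.
	path := "/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	cpath := C.CString(path)
	defer C.free(unsafe.Pointer(cpath))

	// RTLD_NOW resolves all symbols at load time, so an undefined
	// glusterfs_mgmt_pmap_signout is reported immediately by dlerror().
	handle := C.dlopen(cpath, C.RTLD_NOW)
	if handle == nil {
		fmt.Fprintf(os.Stderr, "dlopen(%s) failed: %s\n", path, C.GoString(C.dlerror()))
		os.Exit(1)
	}
	defer C.dlclose(handle)
	fmt.Println("dlopen succeeded:", path)
}
```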

My preliminary investigation points to the problem being in the
glusterfs packages.
An externally built GD2 binary that works against a source install of
glusterfs-v4.0.0rc1, fails with the same error with the packaged
install.
I'll continue investigating and try to find the cause. Any help with
this is appreciated.

~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0.0rc1
[2]: https://bugzilla.redhat.com/1540553
[3]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[4]: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/Fedora/fedora-27/x86_64/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterD2 v4.0rc0 tagged

2018-01-31 Thread Kaushal M
Hi all,

GlusterD2 v4.0rc0 has been tagged and a release made in anticipation
of GlusterFS-v4.0rc0. The release and source tarballs are available
from [1].

There aren't any specific release-notes for this release.

Thanks.
~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0rc0
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] configure fails due to failure in locating libxml2-devel

2018-01-21 Thread Kaushal M
Did you run autogen.sh after installing libxml2-devel?

On Mon, Jan 22, 2018 at 11:10 AM, Raghavendra G
 wrote:
> All,
>
> # ./configure
> 
> configure: error: libxml2 devel libraries not found
>
> # ls /usr/lib64/libxml2.so
> /usr/lib64/libxml2.so
>
> # ls /usr/include/libxml2/
> libxml
>
> # yum install libxml2-devel
> Package libxml2-devel-2.9.1-6.el7_2.3.x86_64 already installed and latest
> version
> Nothing to do
>
> Looks like the issue is very similar to one filed in:
> https://bugzilla.redhat.com/show_bug.cgi?id=64134
>
> Has anyone encountered this? How did you workaround this?
>
> regards,
> --
> Raghavendra G
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [GD2] New release - GlusterD2 v4.0dev-10

2018-01-12 Thread Kaushal M
We have a new GD2 release!!

This has been a while coming. The last release happened around the
time of the Gluster Summit, and we have been working hard for the last
2 months.

There have been a lot of changes, most of them aimed at getting GD2 in
shape for release.  We also have new commands and CLIs implemented for
you to try.

The release is available from [1].
RPMs are available from the COPR repository at [2]. The RPMs require
the nightly builds of GlusterFS master, currently only available for
EL [3].
There is a quick start guide available at [4].

We're working on implementing more commands and we hope to have some
more preview releases before GlusterFS-4.0 lands.

Thanks!
GD2 developers

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-10
[2]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[3]: http://artifacts.ci.centos.org/gluster/nightly/master.repo
[4]: 
https://github.com/gluster/glusterd2/blob/v4.0dev-10/doc/quick-start-user-guide.md
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-10 Thread Kaushal M
On Thu, Jan 11, 2018 at 1:56 AM, Kaleb S. KEITHLEY  wrote:
> comments inline
>
> On 01/10/2018 02:08 PM, Shyam Ranganathan wrote:
>
> Hi, (GD2 team, packaging team, please read)
>
> Here are some things we need to settle so that we can ship/release GD2
> along with Gluster 4.0 release (considering this is a separate
> repository as of now).
>
> 1) Generating release package (read as RPM for now) to go with Gluster
> 4.0 release
>
> Proposal:
>   - GD2 makes github releases, as in [1]
>
>   - GD2 Releases (tagging etc.) are made in tandem to Gluster releases
> - So, when a beta1/RC0 is tagged for a gluster release, this will
> receive a coordinated release (if required) from the GD2 team
> - GD2 team will receive *at-least* a 24h notice on a tentative
> Gluster tagging date/time, to aid the GD2 team to prepare the required
> release tarball in github
>
> This is a no-op. In github creating a tag or a release automatically creates
> the tar source file.

While true, this tarball isn't enough. The GD2 build scripts look up
the version from git tags or from a VERSION file (same as glusterfs).
Neither of these is present in the tarball GitHub generates.
The GD2 release script generates tarballs that have everything
required to build a properly versioned GD2.
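
To illustrate why that matters: a common Go pattern (a sketch only, GD2's
actual variable names and Makefile wiring may differ) is to stamp the
version into the binary at build time, and that needs a version string
from git tags or a VERSION file to exist in the first place:

```
// version.go - a minimal sketch of build-time version stamping.
package main

import "fmt"

// Overridden at build time, e.g.:
//   go build -ldflags "-X main.version=$(git describe --tags --always)"
// With a bare GitHub tarball there are no git tags (and no VERSION
// file), so there is nothing meaningful to substitute here.
var version = "unknown"

func main() {
	fmt.Println("glusterd2 version:", version)
}
```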

>
>   - Post a gluster tag being created, and the subsequent release job is
> run for gluster 4.0, the packaging team will be notified about which GD2
> tag to pick up for packaging, with this gluster release
> - IOW, a response to the Jenkins generated packaging job, with the
> GD2 version/tag/release to pick up
>
>   - GD2 will be packaged as a sub-package of the glusterfs package, and
> hence will have appropriate changes to the glusterfs spec file (or other
> variants of packaging as needed), to generate one more package (RPM) to
> post in the respective download location
>
>   - The GD2 sub-package version would be the same as the release version
> that GD2 makes (it will not be the gluster package version, at least for
> now)
>
> IMO it's clearer if the -glusterd2 sub-package has the same version as the
> rest of the glusterfs-* packages.
>

+1. We will follow glusterfs versioning not just for the packages, but
for the source itself.

> The -glusterd2 sub-package's Summary and/or its %description can be used to
> identify the version of GD2.
>
> Emphasis on IMO. It is possible for the -glusterd sub-package to have a
> version that's different than the parent package(s).
>
>   - For now, none of the gluster RPMs would be dependent on the GD2 RPM
> in the downloads, so any user wanting to use GD2 would have to install
> the package specifically and then proceed as needed
>
>   - (thought/concern) Jenkins smoke job (or other jobs) that builds RPMs
> will not build GD2 (as the source is not available) and will continue as
> is (which means there is enough spec file magic here that we can specify
> during release packaging to additionally build GD2)
>
> 2) Generate a quick start or user guide, to aid using GD2 with 4.0
>
> @Kaushal if this is generated earlier (say with beta builds of 4.0
> itself) we could get help from the community to test drive the same and
> provide feedback to improve the guide for users by the release (as
> discussed in the maintainers meeting)
>
> One thing not covered above is what happens when GD2 fixes a high priority
> bug between releases of glusterfs.
>
> Once option is we wait until the next release of glusterfs to include the
> update to GD2.
>
> Or we can respin (rerelease) the glusterfs packages with the updated GD2.
> I.e. glusterfs-4.0.0-1 (containing GD2-1.0.0) -> glusterfs-4.0.0-2
> (containing GD2-1.0.1).
>
> Or we can decide not to make a hard rule and do whatever makes the most
> sense at the time. If the fix is urgent, we respin. If the fix is not urgent
> it waits for the next Gluster release. (From my perspective though I'd
> rather not do respins, I've already got plenty of work doing the regular
> releases.)
>
> The alternative to all of the above is to package GD2 in its own package.
> This entails opening a New Package Request and going through the packaging
> reviews. All in all it's a lot of work. If GD2 source is eventually going to
> be moved into the main glusterfs source though this probably doesn't make
> sense.
>
> --
>
> Kaleb
>
>
>
>
> ___
> packaging mailing list
> packag...@gluster.org
> http://lists.gluster.org/mailman/listinfo/packaging
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-10 Thread Kaushal M
On Thu, Jan 11, 2018 at 12:38 AM, Shyam Ranganathan  wrote:
> Hi, (GD2 team, packaging team, please read)
>
> Here are some things we need to settle so that we can ship/release GD2
> along with Gluster 4.0 release (considering this is a separate
> repository as of now).
>
> 1) Generating release package (read as RPM for now) to go with Gluster
> 4.0 release
>
> Proposal:
>   - GD2 makes github releases, as in [1]
>
>   - GD2 Releases (tagging etc.) are made in tandem to Gluster releases
> - So, when a beta1/RC0 is tagged for a gluster release, this will
> receive a coordinated release (if required) from the GD2 team
> - GD2 team will receive *at-least* a 24h notice on a tentative
> Gluster tagging date/time, to aid the GD2 team to prepare the required
> release tarball in github

Sounds good.

>
>   - Post a gluster tag being created, and the subsequent release job is
> run for gluster 4.0, the packaging team will be notified about which GD2
> tag to pick up for packaging, with this gluster release
> - IOW, a response to the Jenkins generated packaging job, with the
> GD2 version/tag/release to pick up
>
>   - GD2 will be packaged as a sub-package of the glusterfs package, and
> hence will have appropriate changes to the glusterfs spec file (or other
> variants of packaging as needed), to generate one more package (RPM) to
> post in the respective download location
>
>   - The GD2 package version would be the same as the release version
> that GD2 makes (it will not be the gluster package version, at least for
> now)

I'd prefer that GD2 follow gluster versioning. It keeps things simpler.
Anyone packaging will just have to pick the same version of GD2.
We already version our preview releases as v4.0dev.

>
>   - For now, none of the gluster RPMs would be dependent on the GD2 RPM
> in the downloads, so any user wanting to use GD2 would have to install
> the package specifically and then proceed as needed

Yes. The glusterfs-server package will not depend on GD2 right now.
This will be changed later when GD2 becomes the default.

>
>   - (thought/concern) Jenkins smoke job (or other jobs) that builds RPMs
> will not build GD2 (as the source is not available) and will continue as
> is (which means there is enough spec file magic here that we can specify
> during release packaging to additionally build GD2)

The glusterfs spec file can be updated to include building GD2 from
its release tarball. I don't remember exactly, but rpmbuild might have
ways to automatically download sources/dependencies. We can check
whether this is true.

>
> 2) Generate a quick start or user guide, to aid using GD2 with 4.0
>
> @Kaushal if this is generated earlier (say with beta builds of 4.0
> itself) we could get help from the community to test drive the same and
> provide feedback to improve the guide for users by the release (as
> discussed in the maintainers meeting).

We will do this.

>
> Thanks,
> Shyam
>
> [1] github GD2 releases: https://github.com/gluster/glusterd2/releases
> ___
> packaging mailing list
> packag...@gluster.org
> http://lists.gluster.org/mailman/listinfo/packaging
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-05 Thread Kaushal M
On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <ajneil.t...@gmail.com> wrote:
> Just so I am clear the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this point)
>
> start GD2 ( necessary or does the upgrade script do this?)
>
>
> I assume that once the cluster had been migrated to GD2 the glusterd startup
> script will be smart enough to start the correct version?
>

This should be the process, mostly.

The upgrade script needs GD2 to be running on all nodes before it can
begin the migration.
But the nodes don't need to have a cluster formed; the script should
take care of forming the cluster.


> -Thanks
>
>
>
>
>
> On 3 November 2017 at 04:06, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <bu...@onholyground.com>
>> wrote:
>> > Will the various client packages (centos in my case) be able to
>> > automatically handle the upgrade vs new install decision, or will we be
>> > required to do something manually to determine that?
>>
>> We should be able to do this with CentOS (and other RPM based distros)
>> which have well split glusterfs packages currently.
>> At this moment, I don't know exactly how much can be handled
>> automatically, but I expect the amount of manual intervention to be
>> minimal.
>> The least minimum amount of manual work needed would be enabling and
>> starting GD2 and starting the migration script.
>>
>> >
>> > It’s a little unclear that things will continue without interruption
>> > because
>> > of the way you describe the change from GD1 to GD2, since it sounds like
>> > it
>> > stops GD1.
>>
>> With the described upgrade strategy, we can ensure continuous volume
>> access to clients during the whole process (provided volumes have been
>> setup with replication or ec).
>>
>> During the migration from GD1 to GD2, any existing clients still
>> retain access, and can continue to work without interruption.
>> This is possible because gluster keeps the management  (glusterds) and
>> data (bricks and clients) parts separate.
>> So it is possible to interrupt the management parts, without
>> interrupting data access to existing clients.
>> Clients and the server side brick processes need GlusterD to start up.
>> But once they're running, they can run without GlusterD. GlusterD is
>> only required again if something goes wrong.
>> Stopping GD1 during the migration process, will not lead to any
>> interruptions for existing clients.
>> The brick process continue to run, and any connected clients continue
>> to remain connected to the bricks.
>> Any new clients which try to mount the volumes during this migration
>> will fail, as a GlusterD will not be available (either GD1 or GD2).
>>
>> > Early days, obviously, but if you could clarify if that’s what
>> > we’re used to as a rolling upgrade or how it works, that would be
>> > appreciated.
>>
>> A Gluster rolling upgrade process, allows data access to volumes
>> during the process, while upgrading the brick processes as well.
>> Rolling upgrades with uninterrupted access requires that volumes have
>> redundancy (replicate or ec).
>> Rolling upgrades involves upgrading servers belonging to a redundancy
>> set (replica set or ec set), one at a time.
>> One at a time,
>> - A server is picked from a redundancy set
>> - All Gluster processes are killed on the server, glusterd, bricks and
>> other daemons included.
>> - Gluster is upgraded and restarted on the server
>> - A heal is performed to heal new data onto the bricks.
>> - Move onto next server after heal finishes.
>>
>> Clients maintain uninterrupted access, because a full redundancy set
>> is never taken offline all at once.
>>
>> > Also clarification that we’ll be able to upgrade from 3.x
>> > (3.1x?) to 4.0, manually or automatically?
>>
>> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe,
>> gdeploy has playbooks to automate it.
>> At the end of this you will be left with a 4.0 cluster, but still be
>> running GD1.
>> Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
>> that automates this is planned only for 4.1.
>>
>> >
>> >
>> > 
>> > From: Kaushal M <kshlms...@gmail.com>
>> > Subject: [Gluster-users

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Kaushal M
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <bu...@onholyground.com> wrote:
> Will the various client packages (centos in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?

We should be able to do this with CentOS (and other RPM based distros)
which have well split glusterfs packages currently.
At this moment, I don't know exactly how much can be handled
automatically, but I expect the amount of manual intervention to be
minimal.
The minimum amount of manual work needed would be enabling and
starting GD2, and then running the migration script.

>
> It’s a little unclear that things will continue without interruption because
> of the way you describe the change from GD1 to GD2, since it sounds like it
> stops GD1.

With the described upgrade strategy, we can ensure continuous volume
access for clients during the whole process (provided volumes have been
set up with replication or ec).

During the migration from GD1 to GD2, any existing clients still
retain access, and can continue to work without interruption.
This is possible because gluster keeps the management (glusterds) and
data (bricks and clients) parts separate.
So it is possible to interrupt the management parts, without
interrupting data access to existing clients.
Clients and the server side brick processes need GlusterD to start up.
But once they're running, they can run without GlusterD. GlusterD is
only required again if something goes wrong.
Stopping GD1 during the migration process will not lead to any
interruptions for existing clients.
The brick processes continue to run, and any connected clients continue
to remain connected to the bricks.
Any new clients which try to mount the volumes during this migration
will fail, as a GlusterD will not be available (either GD1 or GD2).

> Early days, obviously, but if you could clarify if that’s what
> we’re used to as a rolling upgrade or how it works, that would be
> appreciated.

A Gluster rolling upgrade process allows data access to volumes
during the process, while the brick processes are upgraded as well.
Rolling upgrades with uninterrupted access requires that volumes have
redundancy (replicate or ec).
Rolling upgrades involve upgrading the servers belonging to a redundancy
set (replica set or ec set), one at a time.
For each server,
- A server is picked from a redundancy set
- All Gluster processes are killed on the server, glusterd, bricks and
other daemons included.
- Gluster is upgraded and restarted on the server
- A heal is performed to heal new data onto the bricks.
- Move on to the next server after the heal finishes.

Clients maintain uninterrupted access, because a full redundancy set
is never taken offline all at once.

> Also clarification that we’ll be able to upgrade from 3.x
> (3.1x?) to 4.0, manually or automatically?

Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe
gdeploy has playbooks to automate it.
At the end of this you will be left with a 4.0 cluster, but still be
running GD1.
Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
that automates this is planned only for 4.1.

>
>
> 
> From: Kaushal M <kshlms...@gmail.com>
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-us...@gluster.org; Gluster Devel
>
> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, and allow properly setup Gluster
> volum

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P <amudha...@gmail.com> wrote:
> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?

Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.

RIO (as DHT2 is now known) developers in CC can provide more
information on this. But in short, RIO will not be replacing DHT. It
was renamed to make this clear.
Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
that exist will continue to use DHT, and continue to work as they
always have.
You will only be able to create new RIO volumes, and will not be able
to migrate DHT to RIO.

>
>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can still continue to work. New clients will
>> not be possible during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions. without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improv

[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
We're fast approaching the time for Gluster-4.0. And we would like to
set out the expected upgrade strategy and try to polish it to be as
user friendly as possible.

We're getting this out here now, because there was quite a bit of
concern and confusion regarding the upgrades between 3.x and 4.0+.

---
## Background

Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
which is backwards incompatible with the GlusterD (GD1) in
GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
established, rolling upgrades are not possible. This meant that
upgrades from 3.x to 4.0 would require a volume downtime and possible
client downtime.

This was a cause of concern among many during the recently concluded
Gluster Summit 2017.

We would like to keep pains experienced by our users to a minimum, so
we are trying to develop an upgrade strategy that avoids downtime as
much as possible.

## (Expected) Upgrade strategy from 3.x to 4.0

Gluster-4.0 will ship with both GD1 and GD2.
For fresh installations, only GD2 will be installed and available by default.
For existing installations (upgrades) GD1 will be installed and run by
default. GD2 will also be installed simultaneously, but will not run
automatically.

GD1 will allow rolling upgrades, and allow properly setup Gluster
volumes to be upgraded to 4.0 binaries, without downtime.

Once the full pool is upgraded, and all bricks and other daemons are
running 4.0 binaries, migration to GD2 can happen.

To migrate to GD2, all GD1 processes in the cluster need to be killed,
and GD2 started instead.
GD2 will not automatically form a cluster. A migration script will be
provided, which will form a new GD2 cluster from the existing GD1
cluster information, and migrate volume information from GD1 into GD2.

Once migration is complete, GD2 will pick up the running brick and
other daemon processes and continue. This will only be possible if the
rolling upgrade with GD1 happened successfully and all the processes
are running with 4.0 binaries.

During the whole migration process, the volume would still be online
for existing clients, who can continue to work. New client mounts will
not be possible during this time.

After migration, existing clients will connect back to GD2 for
updates. GD2 listens on the same port as GD1 and provides the required
SunRPC programs.

Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
versions, without volume downtime, will be possible.

### FAQ and additional info

 Both GD1 and GD2? What?

While both GD1 and GD2 will be shipped, the GD1 shipped will
essentially be the GD1 from the last 3.x series. It will not support
any of the newer storage or management features being planned for 4.0.
All new features will only be available from GD2.

 How long will GD1 be shipped/maintained for?

We plan to maintain GD1 in the 4.x series for at least a couple of
releases, at least 1 LTM release. Current plan is to maintain it till
4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
then upgrade to newer releases.

 Migration script

The GD1 to GD2 migration script and the required features in GD2 are
being planned only for 4.1. This would technically mean most users
will only be able to migrate from 3.x to 4.1. But users can still
migrate from 3.x to 4.0 with GD1 and get many bug fixes and
improvements. They would only be missing any new features. Users who
live on the edge should be able to do the migration manually in 4.0.

---

Please note that the document above gives the expected upgrade
strategy, and is not final, nor complete. More details will be added
and steps will be expanded upon, as we move forward.

To move forward, we need your participation. Please reply to this
thread with any comments you have. We will try to answer and solve any
questions or concerns. If there are good new ideas/suggestions, they
will be integrated. If you like it as is, let us know anyway.

Thanks.

Kaushal and Gluster Developers.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-10-11

2017-10-11 Thread Kaushal M
We had a quick meeting today, with 2 main topics.

We have a new community issue tracker [1], which will be used to track
community initiatives. Amye will be sharing more information about
this in another email.

To co-ordinate people travelling to the Gluster Community Summit
better, a spreadsheet [2] has been setup to share information.

Apart from the above 2 topics, Shyam shared that he is on the lookout
for a partner to manage the 4.0 release.

For more information, meeting logs and minutes are available at the
links below. [3][4][5]

The meeting scheduled for 25 Oct is being skipped, as a lot of the
attendees will be travelling to the Gluster Summit at the time. The
next meeting is now scheduled for 8 Nov.

See you then.

~kaushal

[1]: https://github.com/gluster/community
[2]: 
https://docs.google.com/spreadsheets/d/1Jde-5XNc0q4a8bW8-OmLC2w_jiPg-e53ssR4wanIhFk/edit#gid=0
[3]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.html
[4]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.txt
[5]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.log.html
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GD2 integration - Commands

2017-09-22 Thread Kaushal M
Hey all,

We are continuing our development of the core of GD2, and will soon
come to the stage where we can begin integrating and developing other
commands for GD2.

As the GD2 team, we are developing and implementing the core commands
for managing Gluster. These include the basic volume management
commands and cluster management commands. To implement the other
Gluster feature commands (snapshot, geo-rep, quota, NFS, gfproxy etc.)
we will be depending on the feature maintainers.

We are not yet ready to begin full fledged implementation of the new
commands. But we can begin getting ready by designing out the various
requirements and parts of command implementation. We have prepared a
document [1] to help begin this design process. This document explains
what is involved in implementing a new command and how it will be
structured. This will help developers come up with a skeleton design
for their command, and its operation. Which will help speed up
implementation later.

We request everyone to start this process soon, so we can deliver on
target. If there are any questions, feel free to ask on this thread or
reach out to us.

Thanks.

[1]: https://github.com/gluster/glusterd2/wiki/New-Commands-Guide
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-09-13 - Volunteers to host?

2017-09-13 Thread Kaushal M
Hi all,

We need a volunteer to stand in as the host of today's community meeting,
as I will not be able to host or attend it.

If anyone wants to volunteer, let the rest of us know by replying here.
Also reply if you're willing to regularly host or co-host the meetings.

Who's ready to host?

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Changing Submit Type on review.gluster.org

2017-09-07 Thread Kaushal M
On 7 Sep 2017 6:25 pm, "Niels de Vos"  wrote:

On Thu, Sep 07, 2017 at 04:41:54PM +0530, Nigel Babu wrote:
> On Thu, Sep 07, 2017 at 12:43:28PM +0200, Niels de Vos wrote:
> >
> > Q: Can patches of a series be merged before all patches in the series
> > have a +2? Initial changes that prepare things, or add new (unused) core
> > functionalities should be mergable so that follow-up patches can be
> > posted against the HEAD of the branch.
> >
> > A: Nigel?
> >
>
> If you have patches that are dependent like this:
>
> A -> B -> C -> D
>
> where A is the first patch and B is based on top of A and so forth.
>
> Merging A is not dependent on B. It can be merged any time you have Code
> Review and Regression votes.
>
> However, you cannot merge B until A is ready or merged. If A is unmerged,
> but is ready to merge, when you merge B, Gerrit will merge them in order,
> i.e. first merge A, and then B automatically.
>
> Does this answer your question? If it helps, I can arrange for staging to
> be online so more people can test this out.

That answers my question, I don't need to try it out myself.

Thanks!
Niels
___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



Gerrit still provides all the meta information about patches in a special
branch as git-notes. Git can be configured to display these notes along
with commit messages. You would still effectively get the same experience
as before.

More information is available at [1]. This depends on a gerrit plugin, but
I believe it's enabled by default.

[1]
https://gerrit.googlesource.com/plugins/reviewnotes/+/master/src/main/resources/Documentation/refs-notes-review.md
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GD2 demo

2017-09-05 Thread Kaushal M
Hi all,

We had a GD2 demo in the Red Hat Bangalore office, aimed mainly at
developers. The demo was recorded and is available at [1]. This
isn't the best possible demo, but it should give an idea of how
integration with GD2 will happen. Questions and comments are welcome.

~kaushal

[1]: https://bluejeans.com/s/z970R
Requires Flash to view. If Flash isn't available, the video can be
downloaded for viewing as well.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Call for help: Updating xlator options for GD2 (round one)

2017-09-05 Thread Kaushal M
We are beginning the integration of GlusterFS and GD2. The first part
of this process is to update xlator options to include some more
information that GD2 requires.

I've written down a guide to help with this process in a hackmd
document [1]. We will be using this document to track progress with the
changes as well.

Please follow the guidelines in [1] and make your changes. As you make
your changes, also keep updating the document (editable link [2]).

If there are questions or comments, let us know here or in the document.

Thanks,
Kaushal

[1]: https://hackmd.io/s/Hy87Y2oYW
[2]:https://hackmd.io/IYDgrA7AbFAMAsBaATBAnAI0fDBGMiGAZlFgCYCmEAxsGchbAMxVA===?both
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting 2017-08-30

2017-08-30 Thread Kaushal M
On Wed, Aug 30, 2017 at 3:07 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This a reminder about today's meeting. The meeting will start later
> today at 1500UTC.
> Please add topics and updates to the meeting pad at
> https://bit.ly/gluster-community-meetings
>
> Thanks.

The meeting minutes and logs can be found at the links below. [1][2][3][4]

In this meeting we had updates on our docs site and the website.
The docs site is now accessible via the docs.gluster.org [5] address
and has improved search. There are still one more change in progress
that will complete the move to the docs.gluster.org domain. In the
longer term, the plan is to host docs on our own infrastructure.
The new community website is currently being staged at [6]. This
combines our previous website, blog and blog aggregator (planet) into
a single Wordpress instance with consistent look and feel. Please
report any bugs you find in the website at [7].

The next meeting is scheduled for 13th September. Add your updates and
topics to the meeting pad at [8].

Thanks.

~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-08-30
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.log.html
[5]: http://docs.gluster.org
[6]: https://gluster.wpengine.com/
[7]: https://github.com/gluster/glusterweb/issues
[8]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-08-30

2017-08-30 Thread Kaushal M
Hi All,

This is a reminder about today's meeting. The meeting will start later
today at 1500UTC.
Please add topics and updates to the meeting pad at
https://bit.ly/gluster-community-meetings

Thanks.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-17 Thread Kaushal M
I've created an issue [1] to request the changes to the xlators.

I've also posted a patch for review [2] which adds the new fields to
the xlator options. The patch is on the experimental branch for now,
but I could just as well post it on master. It doesn't affect any
operations of GD1 or xlators yet.

~kaushal

[1]: https://github.com/gluster/glusterfs/issues/302
[2]: https://review.gluster.org/18050

On Thu, Aug 17, 2017 at 12:46 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Aug 3, 2017 at 2:12 PM, Milind Changire <mchan...@redhat.com> wrote:
>>
>>
>> On Thu, Aug 3, 2017 at 12:56 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos <nde...@redhat.com> wrote:
>>> > On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote:
>>> >> Hi all,
>>> >>
>>> >> The ongoing work on glusterd2 necessitates following non-breaking and
>>> >> non-exhaustive list of changes to glusterfs source code:
>>> >>
>>> >> Port management
>>> >> - Remove hard-coding of glusterd's port as 24007 in clients and
>>> >> elsewhere.
>>> >>   Glusterd2 can be configured to listen to clients on any port (still
>>> >> defaults to
>>> >>   24007 though)
>>> >> - Let the bricks and daemons choose any available port and if needed
>>> >> report
>>> >>   the port used to glusterd during the "sign in" process. Prasanna has
>>> >> a
>>> >> patch
>>> >>   to do this.
>>> >> - Glusterd <--> brick (or any other local daemon) communication should
>>> >>   always happen over Unix Domain Socket. Currently glusterd and brick
>>> >>   process communicates over UDS and also port 24007. This will allow us
>>> >>   to set better authentication and rules for port 24007 as it shall
>>> >> only be
>>> >> used
>>> >>   by clients.
>>> >
>>> > I prefer this last point to be configurable. At least for debugging we
>>> > should be able to capture network traces and display the communication
>>> > in Wireshark. Defaulting to UNIX Domain Sockets is fine though.
>>>
>>> This is the communication between GD2 and bricks, of which there is
>>> not a lot happening, and not much to capture.
>>> But I agree, it will be nice to have this configurable.
>>>
>>
>> Could glusterd start attempting port binding at 24007 and progress on to
>> higher port numbers until successful and register the bound port number with
>> rpcbind ? This way the setup will be auto-configurable and admins need not
>> scratch their heads to decide upon one port number. Gluster clients could
>> always talk to rpcbind on the nodes to get glusterd service port whenever a
>> reconnect is required.
>
> 24007 has always been used as the GlusterD port. There was a plan to
> have it registered with IANA as well.
> Having a well defined port is useful to allow proper firewall rules to be 
> setup.
>
>>
>>>
>>> >
>>> >
>>> >> Changes to xlator options
>>> >> - Xlator authors do not have to modify glusterd2 code to expose new
>>> >> xlator
>>> >>   options. IOW, glusterd2 will not contain the "glusterd_volopt_map"
>>> >> table.
>>> >>   Most of its fields will be moved to the xlator itself. Glusterd2 can
>>> >> load
>>> >>   xlator's shared object and read it's volume_options table. This also
>>> >> means
>>> >>   xlators have to adhere to some naming conventions for options.
>>> >> - Add following additional fields (names are indicative) to
>>> >> volume_option_t:
>>> >> - Tag: This is to enable users to list only options having a
>>> >> certain
>>> >> tag.
>>> >>  IOW, it allows us to filter "volume set help" like output.
>>> >>  Example of tags: debug, perf, network etc.
>>> >> - Opversion: The minimum (or a range) op-version required by the
>>> >> xlator.
>>> >> - Configurable: A bool to indicate whether this option is
>>> >> user-configurable.
>>> >>   This may also be clubbed with DOC/NO_DOC
>>> >> functionality.
>>> >
>>> > This is something I have been thinking about to do

Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-17 Thread Kaushal M
On Thu, Aug 3, 2017 at 2:12 PM, Milind Changire <mchan...@redhat.com> wrote:
>
>
> On Thu, Aug 3, 2017 at 12:56 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos <nde...@redhat.com> wrote:
>> > On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote:
>> >> Hi all,
>> >>
>> >> The ongoing work on glusterd2 necessitates following non-breaking and
>> >> non-exhaustive list of changes to glusterfs source code:
>> >>
>> >> Port management
>> >> - Remove hard-coding of glusterd's port as 24007 in clients and
>> >> elsewhere.
>> >>   Glusterd2 can be configured to listen to clients on any port (still
>> >> defaults to
>> >>   24007 though)
>> >> - Let the bricks and daemons choose any available port and if needed
>> >> report
>> >>   the port used to glusterd during the "sign in" process. Prasanna has
>> >> a
>> >> patch
>> >>   to do this.
>> >> - Glusterd <--> brick (or any other local daemon) communication should
>> >>   always happen over Unix Domain Socket. Currently glusterd and brick
>> >>   process communicates over UDS and also port 24007. This will allow us
>> >>   to set better authentication and rules for port 24007 as it shall
>> >> only be
>> >> used
>> >>   by clients.
>> >
>> > I prefer this last point to be configurable. At least for debugging we
>> > should be able to capture network traces and display the communication
>> > in Wireshark. Defaulting to UNIX Domain Sockets is fine though.
>>
>> This is the communication between GD2 and bricks, of which there is
>> not a lot happening, and not much to capture.
>> But I agree, it will be nice to have this configurable.
>>
>
> Could glusterd start attempting port binding at 24007 and progress on to
> higher port numbers until successful and register the bound port number with
> rpcbind ? This way the setup will be auto-configurable and admins need not
> scratch their heads to decide upon one port number. Gluster clients could
> always talk to rpcbind on the nodes to get glusterd service port whenever a
> reconnect is required.

24007 has always been used as the GlusterD port. There was a plan to
have it registered with IANA as well.
Having a well-defined port is useful to allow proper firewall rules to be set up.

>
>>
>> >
>> >
>> >> Changes to xlator options
>> >> - Xlator authors do not have to modify glusterd2 code to expose new
>> >> xlator
>> >>   options. IOW, glusterd2 will not contain the "glusterd_volopt_map"
>> >> table.
>> >>   Most of its fields will be moved to the xlator itself. Glusterd2 can
>> >> load
>> >>   xlator's shared object and read it's volume_options table. This also
>> >> means
>> >>   xlators have to adhere to some naming conventions for options.
>> >> - Add following additional fields (names are indicative) to
>> >> volume_option_t:
>> >> - Tag: This is to enable users to list only options having a
>> >> certain
>> >> tag.
>> >>  IOW, it allows us to filter "volume set help" like output.
>> >>  Example of tags: debug, perf, network etc.
>> >> - Opversion: The minimum (or a range) op-version required by the
>> >> xlator.
>> >> - Configurable: A bool to indicate whether this option is
>> >> user-configurable.
>> >>   This may also be clubbed with DOC/NO_DOC
>> >> functionality.
>> >
>> > This is something I have been thinking about to do as well. libgfapi
>> > users would like to list all the valid options before mounting (and
>> > receiving the .vol file) is done. Similar to how many mount options are
>> > set over FUSE, the options should be available through libgfapi.
>> > Hardcoding the options is just wrong, inspecting the available xlators
>> > (.so files) seems to make more sense. Each option would have to describe
>> > if it can be client-side so that we can apply some resonable filters by
>> > default.
>> >
>>
>> Looks like we'd missed this. All the fields available in the vol opt
>> map will move to xlator option tables, including the client flag.
>>
>> > A GitHub Issue with this feature request is at
>> > https:

[Gluster-devel] Community Meeting 2017-08-{02,16}

2017-08-16 Thread Kaushal M
This is a combined update for the last two community meetings (because I
forgot to send out the update for the earlier meeting, my bad).

# Community Meeting 2017-08-02

There weren't any explicit topics of discussion, but we had updates on
action items and releases. The logs and minutes are available at
[1][2][3]. The minutes are also available at the end of this mail.

# Community Meeting 2017-08-16

Shyam called out 3.12rc0 and wanted to remind everyone to test it.

Without any other topics up for discussion, the meeting ended early.
Logs and minutes are available at [4][5][6].


The next meeting is scheduled for the 30th of August. Do attend, and
don't forget to add any topics/updates to the meeting pad at [7].

Thanks.

~kaushal

[1]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.html
[2]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.txt
[3]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.log.html
[4]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.html
[5]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.txt
[6]: Log:
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.log.html
[7]: https://bit.ly/gluster-community-meetings


Meeting summary Community Meeting 2017-08-02
---
* ndevos will check with Eric Harney about the Openstack Gluster efforts
  (kshlm, 15:07:09)

* JoeJulian to invite Harsha to next meeting to discuss Minio  (kshlm,
  15:16:20)

* shyam will edit release pages and milestones to reflect 4.0 is STM
  (kshlm, 15:23:41)
  * LINK: https://www.gluster.org/community/release-schedule/# still
says LTM  (kkeithley, 15:26:47)

Meeting ended at 15:45:32 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (63)
* ndevos (13)
* kkeithley (10)
* JoeJulian (8)
* zodbot (4)
* amye (2)
* loadtheacc (1)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-03 Thread Kaushal M
On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos  wrote:
> On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote:
>> Hi all,
>>
>> The ongoing work on glusterd2 necessitates following non-breaking and
>> non-exhaustive list of changes to glusterfs source code:
>>
>> Port management
>> - Remove hard-coding of glusterd's port as 24007 in clients and elsewhere.
>>   Glusterd2 can be configured to listen to clients on any port (still
>> defaults to
>>   24007 though)
>> - Let the bricks and daemons choose any available port and if needed report
>>   the port used to glusterd during the "sign in" process. Prasanna has a
>> patch
>>   to do this.
>> - Glusterd <--> brick (or any other local daemon) communication should
>>   always happen over Unix Domain Socket. Currently glusterd and brick
>>   process communicates over UDS and also port 24007. This will allow us
>>   to set better authentication and rules for port 24007 as it shall only be
>> used
>>   by clients.
>
> I prefer this last point to be configurable. At least for debugging we
> should be able to capture network traces and display the communication
> in Wireshark. Defaulting to UNIX Domain Sockets is fine though.

This is the communication between GD2 and the bricks; there is not a
lot of it, and not much to capture.
But I agree, it will be nice to have this configurable.
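
As a rough sketch of making this configurable (the flag name and socket
path below are made up for illustration), GD2 could default to a UNIX
domain socket and fall back to TCP for debugging, so the traffic can be
captured and inspected in Wireshark:

```
package main

import (
	"flag"
	"fmt"
	"net"
	"os"
)

func main() {
	// Hypothetical flag; the default is the UNIX domain socket transport.
	useTCP := flag.Bool("local-tcp", false,
		"talk to local daemons over TCP (capturable) instead of a UNIX socket")
	flag.Parse()

	var conn net.Conn
	var err error
	if *useTCP {
		conn, err = net.Dial("tcp", "127.0.0.1:24007")
	} else {
		conn, err = net.Dial("unix", "/var/run/glusterd2.socket")
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect failed:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("connected over", conn.RemoteAddr().Network())
}
```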

>
>
>> Changes to xlator options
>> - Xlator authors do not have to modify glusterd2 code to expose new xlator
>>   options. IOW, glusterd2 will not contain the "glusterd_volopt_map" table.
>>   Most of its fields will be moved to the xlator itself. Glusterd2 can load
>>   xlator's shared object and read it's volume_options table. This also means
>>   xlators have to adhere to some naming conventions for options.
>> - Add following additional fields (names are indicative) to volume_option_t:
>> - Tag: This is to enable users to list only options having a certain
>> tag.
>>  IOW, it allows us to filter "volume set help" like output.
>>  Example of tags: debug, perf, network etc.
>> - Opversion: The minimum (or a range) op-version required by the xlator.
>> - Configurable: A bool to indicate whether this option is
>> user-configurable.
>>   This may also be clubbed with DOC/NO_DOC
>> functionality.
>
> This is something I have been thinking about to do as well. libgfapi
> users would like to list all the valid options before mounting (and
> receiving the .vol file) is done. Similar to how many mount options are
> set over FUSE, the options should be available through libgfapi.
> Hardcoding the options is just wrong, inspecting the available xlators
> (.so files) seems to make more sense. Each option would have to describe
> if it can be client-side so that we can apply some resonable filters by
> default.
>

Looks like we'd missed this. All the fields available in the vol opt
map will move to xlator option tables, including the client flag.
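
For illustration, the extra fields being discussed (tags, op-version,
configurability, client flag) could be represented on the GD2 side
roughly like this once they are read out of an xlator's options table.
The field names are only a sketch, not the final glusterfs or GD2
definitions:

```
// Package options - a rough sketch, not actual GD2 code.
package options

// Option describes a single xlator option as GD2 might see it after
// loading the xlator's shared object and reading its options table.
type Option struct {
	Key          string   // e.g. "changelog-dir"
	DefaultValue string
	Description  string
	Tags         []string // e.g. "debug", "perf", "network"; used to filter "volume set help"
	OpVersion    []uint32 // minimum (or range of) op-version(s) that support the option
	Configurable bool     // whether the option is user-settable (DOC/NO_DOC style)
	Client       bool     // whether the option applies on the client side
}
```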

> A GitHub Issue with this feature request is at
> https://github.com/gluster/glusterfs/issues/263. I appreciate additional
> comments and ideas about it :-)
>

We need to open an issue for our requested changes as well, which
will be a superset of this request. We'll make sure to mention this
feature request in it.
Or we could use a single issue as a tracker for all the xlator option
changes, in which case I'd prefer we update the existing issue.

>
>> - Xlators like AFR, changelog require non-static information such as brick
>> path
>>   to be present in its options in the volfile. Currently, xlator authors
>> have
>>   to modify glusterd code to get it.
>>   This can rather be indicated by the xlator itself using
>> templates/placeholders.
>>   For example, "changelog-dir" can be set in the xlator's options as
>>   <>/.glusterfs/changelogs and then glusterd2 will ensure to
>> replace
>>   <> with actual path during volfile generation.
>
> I suggest to stick with whatever is a common syntax for other
> configuration files that uses placeholders. Maybe just {variable} or
> $VARIABLE, the <> looks a bit awkward.

The exact syntax for these variables hasn't been decided yet. But I'm
leaning towards '{{ variable }}' used in the Go template package,
which is what we'll mostly end up using to implement this
functionality.
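To make the idea concrete, here is a minimal sketch using Go's standard text/template package to fill a brick path into a volfile fragment. The placeholder names and the volfile snippet are made up for the example; the real volgen templates may look quite different:

package main

import (
    "os"
    "text/template"
)

// An illustrative volfile fragment using Go template syntax for the
// non-static values that glusterd2 would substitute at volfile
// generation time.
const changelogFragment = `volume {{.VolumeName}}-changelog
    type features/changelog
    option changelog-dir {{.BrickPath}}/.glusterfs/changelogs
end-volume
`

func main() {
    tmpl := template.Must(template.New("changelog").Parse(changelogFragment))
    data := struct {
        VolumeName string
        BrickPath  string
    }{
        VolumeName: "testvol",
        BrickPath:  "/bricks/brick1",
    }
    // Render the fragment with the actual brick path filled in.
    if err := tmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}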

>
>
>> We'd like to hear your thoughts, suggestions and comments to these proposed
>> changes.
>
> Thanks for sharing!
> Niels
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-02 Thread Kaushal M
On Wed, Aug 2, 2017 at 5:03 PM, Prashanth Pai  wrote:
> Hi all,
>
> The ongoing work on glusterd2 necessitates the following non-breaking and
> non-exhaustive list of changes to the glusterfs source code:
>
> Port management
> - Remove hard-coding of glusterd's port as 24007 in clients and elsewhere.
>   Glusterd2 can be configured to listen to clients on any port (still
> defaults to
>   24007 though)
> - Let the bricks and daemons choose any available port and if needed report
>   the port used to glusterd during the "sign in" process. Prasanna has a
> patch
>   to do this.
> - Glusterd <--> brick (or any other local daemon) communication should
>   always happen over Unix Domain Socket. Currently glusterd and brick
>   processes communicate over UDS and also over port 24007. This will allow us
>   to set better authentication and rules for port 24007 as it shall only be
> used
>   by clients.
>
> Changes to xlator options
> - Xlator authors do not have to modify glusterd2 code to expose new xlator
>   options. IOW, glusterd2 will not contain the "glusterd_volopt_map" table.
>   Most of its fields will be moved to the xlator itself. Glusterd2 can load
>   xlator's shared object and read its volume_options table. This also means
>   xlators have to adhere to some naming conventions for options.
> - Add following additional fields (names are indicative) to volume_option_t:
> - Tag: This is to enable users to list only options having a certain
> tag.
>  IOW, it allows us to filter "volume set help" like output.
>  Example of tags: debug, perf, network etc.
> - Opversion: The minimum (or a range) op-version required by the xlator.
> - Configurable: A bool to indicate whether this option is
> user-configurable.
>   This may also be clubbed with DOC/NO_DOC
> functionality.
> - Xlators like AFR, changelog require non-static information such as brick
> path
>   to be present in it's options in the volfile. Currently, xlator authors
> have
>   to modify glusterd code to get it.
>   This can rather be indicated by the xlator itself using
> templates/placeholders.
>   For example, "changelog-dir" can be set in the xlator's options as
>   <>/.glusterfs/changelogs and then glusterd2 will ensure to
> replace
>   <> with actual path during volfile generation.

One more change in this regard would be that xlators would now need to
ensure that all options have default values. There are cases where
certain xlator options had default values only in the volopt_map and
not in their own options table. This will also remove the possibility of
the defaults differing between the volopt_map and the xlator options table.
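As a small illustration of how such a requirement could be enforced, here is a hedged Go sketch of a check that flags options without defaults. The types are placeholders invented for the example, not actual GD2 or glusterfs structures:

package main

import "fmt"

// xlatorOption is a stand-in for the information read from an xlator's
// own options table.
type xlatorOption struct {
    Name         string
    DefaultValue string
}

// missingDefaults reports options that carry no default value, something
// the glusterd volopt_map used to paper over.
func missingDefaults(opts []xlatorOption) []string {
    var missing []string
    for _, o := range opts {
        if o.DefaultValue == "" {
            missing = append(missing, o.Name)
        }
    }
    return missing
}

func main() {
    opts := []xlatorOption{
        {Name: "cache-size", DefaultValue: "32MB"},
        {Name: "cache-timeout"}, // no default: should be flagged
    }
    fmt.Println("options missing defaults:", missingDefaults(opts))
}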

>
> We'd like to hear your thoughts, suggestions and comments to these proposed
> changes.
>
> - Glusterd2 team
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Update] GD2 - what's been happening

2017-08-02 Thread Kaushal M
Hello!
We're restarting regular GD2 updates. This is the first one, and I
expect to send these out every other week.

In the last month, we've identified a few core areas that we need to
focus on. With solutions in place for these, we believe we're ready to
start deeper integration with glusterfs, which will require
changes in the rest of the code base.

As of release v4.0dev-7, GD2 provides the following features that
are required for writing and developing Gluster management commands.

- A transaction framework to orchestrate actions across the cluster
- A ReST framework for adding new ReST endpoints
- A central store based on etcd, to store cluster information
- An auto-forming and auto-scaling etcd cluster.

Using these features, we can currently form and run basic GlusterFS
clusters and volumes. While this works, it is not very usable
yet, nor is it ready for further integration with the rest of GlusterFS.
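For readers unfamiliar with the central store mentioned above, here is a minimal sketch of storing and reading a piece of cluster state with the etcd v3 client library. The endpoint and key layout are assumptions for illustration, not GD2's actual store schema:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/coreos/etcd/clientv3"
)

func main() {
    // Connect to the (possibly auto-formed) etcd cluster that GD2 manages.
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"http://127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        panic(err)
    }
    defer cli.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Illustrative key layout; the real store schema may differ.
    key := "/volumes/testvol/info"
    if _, err := cli.Put(ctx, key, `{"name":"testvol","replica":3}`); err != nil {
        panic(err)
    }
    resp, err := cli.Get(ctx, key)
    if err != nil {
        panic(err)
    }
    for _, kv := range resp.Kvs {
        fmt.Printf("%s = %s\n", kv.Key, kv.Value)
    }
}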

We've identified and begun working on 3 specific features that will
make GD2 more usable and ready for integration.

- CLI and ReST client packages [1][2][3]
  Aravinda has begun working on a CLI application for GD2
that talks to GD2 over ReST. As a related effort, he's also creating
a GD2 rest-client Go package. With this available, users will be able
to form and use a GD2 cluster more easily. The client package will
also help us write tests using the end-to-end test framework.
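As a rough idea of what such a client enables, here is a hedged sketch that lists volumes from a GD2 endpoint using only the standard library. The URL, port and response shape are assumptions made for the example and may not match the eventual GD2 ReST API or client package:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// volume is a guess at a subset of the fields a volume-list response
// might carry; the real API may differ.
type volume struct {
    Name  string `json:"name"`
    State string `json:"state"`
}

func main() {
    client := &http.Client{Timeout: 5 * time.Second}

    // Assumed endpoint for listing volumes on a local GD2 instance.
    resp, err := client.Get("http://127.0.0.1:24007/v1/volumes")
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()

    var vols []volume
    if err := json.NewDecoder(resp.Body).Decode(&vols); err != nil {
        fmt.Println("decode failed:", err)
        return
    }
    for _, v := range vols {
        fmt.Println(v.Name, v.State)
    }
}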

- Volume set [3][4][5]
  Prashanth has been working on implementing the volume set
functionality. This is necessary because it allows volumes to be
customized after creation. Xlator options will be read directly from
the xlators, instead of having a mapping table in GD2. This
means that xlator developers will not need any changes in GD2 to add
new options to their xlator. It also means that we will
require some changes to the xlator options table to add some
information that used to be available in the GD options table. We will
be detailing the required changes soon.

- Volgen [6]
  I've been working on getting a volgen package and framework ready.
We had a very ambitious design earlier [7] involving a dynamic graph
generator with dependency resolution. Work was done on this a while back
[8], but was stopped as it turned out to be too complex. The new plan
is much simpler, with graph order described using template
files, and different template files for different volume types.
While this will not be as flexible for the end user, it is much easier
for developers to add new xlators to graphs. As with volume set,
xlator developers will not need to change anything in GD2. But there
will be changes necessary in the xlators themselves to make the
xlators ready to be automatically picked up and used by GD2. We are
still figuring out the changes needed and finalizing the design. I
will update the wiki with more details on the design, and share
details on the changes required.
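A toy sketch of the template-file idea, assuming hypothetical per-volume-type templates; it only shows how the xlator order of a graph could be described in a template and selected by volume type, and is not the actual volgen design:

package main

import (
    "fmt"
    "os"
    "text/template"
)

// Illustrative per-volume-type templates describing a client graph order.
// In the real design these would live in separate template files.
var graphTemplates = map[string]string{
    "replicate": `{{range .Bricks}}client: {{.}}
{{end}}cluster/replicate -> performance/write-behind -> performance/io-cache -> debug/io-stats
`,
    "distribute": `{{range .Bricks}}client: {{.}}
{{end}}cluster/distribute -> performance/write-behind -> debug/io-stats
`,
}

func main() {
    volType := "replicate"
    tmpl := template.Must(template.New(volType).Parse(graphTemplates[volType]))
    data := struct{ Bricks []string }{Bricks: []string{"node1:/bricks/b1", "node2:/bricks/b1"}}
    if err := tmpl.Execute(os.Stdout, data); err != nil {
        fmt.Println("volgen failed:", err)
    }
}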

In addition to the above, we've had bug fixes to our store and etcd
packages that make cluster scaling more reliable. Aravinda has also
started initial work on a geo-replication plugin [9], that will help
us develop our plugin infrastructure and be a demo/example of a GD2
plugin for developers.

This concludes the updates since the last update. Thanks for reading.

~kaushal

[1] https://github.com/gluster/glusterd2/pull/334
[2] https://github.com/gluster/glusterd2/pull/337
[3] https://github.com/gluster/glusterd2/pull/335
[4] https://github.com/gluster/glusterd2/pull/339
[5] https://github.com/gluster/glusterd2/pull/345
[6] https://github.com/gluster/glusterd2/pull/351
[7] 
https://github.com/gluster/glusterd2/wiki/Flexible-Volgen-(Old)#systemd-units-style-1
[8] https://github.com/kshlm/glusterd2-volgen/tree/volgen-systemd-style
[9] https://github.com/gluster/glusterd2/pull/349
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting 2017-07-19

2017-07-24 Thread Kaushal M
On Wed, Jul 19, 2017 at 8:08 PM, Kaushal M <kshlms...@gmail.com> wrote:
> This is a (late) reminder about today's meeting. The meeting begins in
> ~20 minutes from now.
>
> The meeting notepad is at https://bit.ly/gluster-community-meetings
> and currently has no topics for discussion. If you have anything to be
> discussed please add it to the pad.
>
> ~kaushal

Apologies for the late update.

The last community meeting happened with good participation. I hope to
see the trend continue.

The meeting minutes and logs are available at the links below.

The next meeting is scheduled for 2nd August. The meeting notepad is
at [4] for your updates and topics for discussion.

See you at the next meeting.

~kaushal

[1]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.html
[2]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.txt
[3]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.log.html
[4]: https://bit.ly/gluster-community-meetings

Meeting summary
---
* Should we build 3.12 packages for old distros  (kshlm, 15:06:23)
  * AGREED: 4.0 will drop support for EL6 and other old distros. Will
see what can be done if and when someone wants to do it anyway.
(kshlm, 15:21:48)

* Is 4.0 LTM or STM?  (kshlm, 15:24:04)
  * AGREED: 4.0 is STM. Will take call on 4.1 and beyond later.  (kshlm,
15:38:54)
  * ACTION: shyam will edit release pages and milestones to reflect 4.0
is STM.  (kshlm, 15:39:59)

Meeting ended at 16:02:12 UTC.




Action Items

* shyam will edit release pages and milestones to reflect 4.0 is STM.




Action Items, by person
---
* shyam
  * shyam will edit release pages and milestones to reflect 4.0 is STM.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (91)
* bulde (30)
* ndevos (27)
* amye (22)
* shyam (20)
* nigelb (17)
* Snowman (16)
* kkeithley (13)
* vbellur (8)
* zodbot (3)
* jstrunk (3)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Kaushal M
After nearly 3 months, we have another preview release for GlusterD-2.0.

The highlights for this release are,
- GD2 now uses an auto scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end to end functional testing framework is now available
- And RPMs are available for Fedora >= 25 and EL7.

This release still doesn't provide a CLI. The HTTP ReST API is the
only access method right now.

Prebuilt binaries are available from [1]. RPMs have been built in
Fedora Copr and available at [2]. A Docker image is also available
from [3].

Try this release out and let us know if you face any problems at [4].

The GD2 development team is re-organizing and kicking off development
again, so regular updates can be expected.

Cheers,
Kaushal and the GD2 developers.

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-7
[2]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[3]: https://hub.docker.com/r/gluster/glusterd2-test/
[4]: https://github.com/gluster/glusterd2/issues
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Community Meeting 2017-06-07

2017-06-07 Thread Kaushal M
Today's meeting didn't happen due to low turnout. The next meeting is
on 2017-06-21.

~kaushal

On Wed, Jun 7, 2017 at 6:06 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Just a reminder. The community meeting is scheduled to happen in about
> 2.5 hours. Please add topics you want to discuss and any updates you
> have to the meeting notepad at [1].
>
> ~kaushal
>
> [1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Community Meeting 2017-06-07

2017-06-07 Thread Kaushal M
Just a reminder. The community meeting is scheduled to happen in about
2.5 hours. Please add topics you want to discuss and any updates you
have to the meeting notepad at [1].

~kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-05-24

2017-05-24 Thread Kaushal M
Very poor turnout today, just 3 attendees including me.

But, we actually did have a discussion and came out with a couple of AIs.

The logs and minutes are available at the links below.

Archive: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-05-24
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.log.html


The next meeting will be held on 7-June. The meeting pad is at
https://bit.ly/gluster-community-meetings as always for your updates
and topics.

~kaushal

Meeting summary
---
* Roll Call  (kshlm, 15:08:19)

* Openstack Cinder glusterfs support has been removed  (kshlm, 15:17:35)
  * LINK:
https://wiki.openstack.org/wiki/ThirdPartySystems/RedHat_GlusterFS_CI
shows BharatK and deepakcs  (JoeJulian, 15:30:46)
  * LINK:

https://github.com/openstack/cinder/commit/16e93ccd4f3a6d62ed9d277f03b64bccc63ae060
(kshlm, 15:38:52)
  * ACTION: ndevos will check with Eric Harney about the Openstack
Gluster efforts  (kshlm, 15:39:49)
  * ACTION: JoeJulian will share his conversations with Eric Harney
(kshlm, 15:40:24)

Meeting ended at 15:42:15 UTC.




Action Items

* ndevos will check with Eric Harney about the Openstack Gluster efforts
* JoeJulian will share his conversations with Eric Harney




Action Items, by person
---
* JoeJulian
  * JoeJulian will share his conversations with Eric Harney
* ndevos
  * ndevos will check with Eric Harney about the Openstack Gluster
efforts
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (37)
* ndevos (31)
* JoeJulian (30)
* zodbot (6)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-05-10

2017-05-18 Thread Kaushal M
Once again, I couldn't send out this mail quickly enough. Sorry for that.

The meeting minutes and logs for this meeting are available at the links below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html

The meeting pad has been archived at
https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-05-10
.

The next meeting is on 24th May. The meeting pad is available at
https://bit.ly/gluster-community-meetings to add updates and topics
for discussion.

~kaushal

==
#gluster-meeting: Gluster Community Meeting 2017-05-10
==


Meeting started by kshlm at 15:00:50 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html
.



Meeting summary
---
* Roll Call  (kshlm, 15:05:28)

* Github issues  (kshlm, 15:10:06)

* Coverity progress  (kshlm, 15:23:51)

* Good build?  (kshlm, 15:30:41)
  * LINK:

https://software.intel.com/en-us/articles/intel-c-compiler-170-for-linux-release-notes-for-intel-parallel-studio-xe-2017
(kkeithley, 15:38:55)

* External Monitoring of Gluster performance / metrics  (kshlm,
  15:40:32)

* What is the status on getting gluster-block into Fedora?  (kshlm,
  15:53:31)

Meeting ended at 16:04:13 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (85)
* JoeJulian (21)
* kkeithley (21)
* jdarcy (17)
* BatS9 (12)
* amye (12)
* vbellur (8)
* zodbot (5)
* sanoj (5)
* ndevos (4)
* rafi (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-16 Thread Kaushal M
On 16 May 2017 06:16, "Shyam"  wrote:

Hi,

Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been
quite a few discussions around this in various meetups.

Here is what we are hearing (or have heard), so if you are working on any
of these items, do put up your github issue, and let us know which release
you are targeting these for.

If you are working on something that is not represented here, shout out,
and we can get that added to the list of items in the upcoming releases.

Once we have a good collection slotted into the respective releases (on
github), we can further announce the same in the users list as well.

3.12:
1. Geo-replication to cloud (ie, s3 or glacier like storage target)
2. Basic level of throttling support on server side to manage the self-heal
processes running.
3. Brick Multiplexing (Better support, more control)
4. GFID to path improvements
5. Resolve issues around disconnects and ping-timeouts
6. Halo with hybrid mode was supposed to be with 3.12
7. Procedures and code for +1 scaling the cluster?
8. Lookup-optimized turned on by default.
9. Thin client (or server side clustering) - phase 1.


We also have the IPV6 patch from FB. This was supposed to go into 3.11 but
didn't make it. The main thing blocking it is having an actual IPV6 environment
to test it in.


4.0: (more thematic than actual features at the moment)
1. Separation of Management and Filesystem layers (aka GlusterD2 related
efforts)
2. Scaling Distribution logic
3. Better consistency with rename() and link() operations
4. Thin client || Clustering Logic on server side - Phase 2
5. Quota: re-look at optimal support
6. Improvements in debug-ability and more focus on testing coverage based
on use-cases.

Components moving out of support in possibly 4.0
- Stripe translator
- AFR with just 2 subvolumes (either use Arbiter or 3-way replication instead)
- Re-validate the presence of a few performance translators.

Thanks,
Shyam

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Community meeting 2017-05-10

2017-05-10 Thread Kaushal M
Hi all,

Today's meeting is scheduled to happen in 6 hours at 1500UTC. The
meeting pad is at https://bit.ly/gluster-community-meetings . Please
add your updates and topics for discussion.

I had forgotten to send out the meeting minutes and logs for the last
meeting which happened on 2017-04-26. There wasn't a lot of discussion
in the meeting, but the logs and minutes are available at the links
below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.log.html

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Kaushal M
On Thu, May 4, 2017 at 6:40 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
>> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>>> > <pkara...@redhat.com> wrote:
>>> > >
>>> > >
>>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>>> > >>
>>> > >> Hi,
>>> > >>
>>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>>> > >>
>>> > >> We have ~4weeks to release of 3.11, and a week to backport features 
>>> > >> that
>>> > >> slipped the branching date (May-5th).
>>> > >>
>>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. 
>>> > >> Request
>>> > >> that any bug that is determined as a blocker for the release be noted
>>> > as a
>>> > >> "blocks" against this bug.
>>> > >>
>>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>>> > >> identified that should prevent the release, need to be tracked against
>>> > this
>>> > >> tracker bug.
>>> > >>
>>> > >> We are not building beta1 packages, and will build out RC0 packages 
>>> > >> once
>>> > >> we cross the backport dates. Hence, folks interested in testing this
>>> > out can
>>> > >> either build from the code or wait for (about) a week longer for the
>>> > >> packages (and initial release notes).
>>> > >>
>>> > >> Features tracked as slipped and expected to be backported by 5th May
>>> > are,
>>> > >>
>>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>> > >>
>>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>> > >>   - Needs a +2 on https://review.gluster.org/13762
>>> > >>
>>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>>> > >> dirents #174 (@skoduri)
>>> > >>
>>> > >> 4) Halo - Initial version (@pranith)
>>> > >
>>> > >
>>> > > I merged the patch on master. Will send out the port on Thursday. I have
>>> > to
>>> > > leave like right now to catch train and am on leave tomorrow, so will be
>>> > > back on Thursday and get the port done. Will also try to get the other
>>> > > patches fb guys mentioned post that preferably by 5th itself.
>>> >
>>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>>> > patch. This shouldn't have happened.
>>> > The IPv6 patch is currently stalled because it depends on an internal
>>> > FB library. The IPv6 bits that made it in pull this dependency.
>>> > This would have led to a -2 on the HALO patch by me, but as I wasn't
>>> > aware of it, the patch was merged.
>>> >
>>> > The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
>>> > to affect anything HALO. So they should be easily removable and should
>>> > be removed.
>>> >
>>>
>>> As per the configure.ac the macro is enabled only when we are building
>>> gluster with "--with-fb-extras", which I don't think we do anywhere, so
>>> didn't think they are important at the moment. Sorry for the confusion
>>> caused because of this. Thanks to Kaushal for the patch. I will backport
>>> that one as well when I do the 3.11 backport of HALO. So will wait for the
>>> backport until Kaushal's patch is merged.
>>
>> Note that there have been discussions about preventing special vendor
>> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
>> is not acceptable. Someone was interested in providing a "site.h"
>> configuration file that different vendors can use to fine-tune certain
>> things that are too detailed for ./configure options.
>>
>> We should remove the --with

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Kaushal M
On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>> > <pkara...@redhat.com> wrote:
>> > >
>> > >
>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>> > >>
>> > >> We have ~4weeks to release of 3.11, and a week to backport features that
>> > >> slipped the branching date (May-5th).
>> > >>
>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> > >> that any bug that is determined as a blocker for the release be noted
>> > as a
>> > >> "blocks" against this bug.
>> > >>
>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>> > >> identified that should prevent the release, need to be tracked against
>> > this
>> > >> tracker bug.
>> > >>
>> > >> We are not building beta1 packages, and will build out RC0 packages once
>> > >> we cross the backport dates. Hence, folks interested in testing this
>> > out can
>> > >> either build from the code or wait for (about) a week longer for the
>> > >> packages (and initial release notes).
>> > >>
>> > >> Features tracked as slipped and expected to be backported by 5th May
>> > are,
>> > >>
>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>> > >>
>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>> > >>   - Needs a +2 on https://review.gluster.org/13762
>> > >>
>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>> > >> dirents #174 (@skoduri)
>> > >>
>> > >> 4) Halo - Initial version (@pranith)
>> > >
>> > >
>> > > I merged the patch on master. Will send out the port on Thursday. I have
>> > to
>> > > leave like right now to catch train and am on leave tomorrow, so will be
>> > > back on Thursday and get the port done. Will also try to get the other
>> > > patches fb guys mentioned post that preferably by 5th itself.
>> >
>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>> > patch. This shouldn't have happened.
>> > The IPv6 patch is currently stalled because it depends on an internal
>> > FB library. The IPv6 bits that made it in pull this dependency.
>> > This would have led to a -2 on the HALO patch by me, but as I wasn't
>> > aware of it, the patch was merged.
>> >
>> > The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
>> > to affect anything HALO. So they should be easily removable and should
>> > be removed.
>> >
>>
>> As per the configure.ac the macro is enabled only when we are building
>> gluster with "--with-fb-extras", which I don't think we do anywhere, so
>> didn't think they are important at the moment. Sorry for the confusion
>> caused because of this. Thanks to Kaushal for the patch. I will backport
>> that one as well when I do the 3.11 backport of HALO. So will wait for the
>> backport until Kaushal's patch is merged.
>
> Note that there have been discussions about preventing special vendor
> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
> is not acceptable. Someone was interested in providing a "site.h"
> configuration file that different vendors can use to fine-tune certain
> things that are too detailed for ./configure options.
>
> We should remove the --with-fb-extras as well, specially because it is
> not useful for anyone that does not have access to the forked fbtirpc
> library.
>
> Kaushal mentioned he'll update the patch that removed the IPv6 default
> define, to also remove the --with-fb-extras and related bits.

The patch removing IPV6 and fbextras is at
https://review.gluster.org/17174 waiting for regression tests to run.

I've merged the Selinux backports, https://review.gluster.org/17159
and https:

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-03 Thread Kaushal M
On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
>>
>> Hi,
>>
>> Release 3.11 for gluster has been branched [1] and tagged [2].
>>
>> We have ~4weeks to release of 3.11, and a week to backport features that
>> slipped the branching date (May-5th).
>>
>> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> that any bug that is determined as a blocker for the release be noted as a
>> "blocks" against this bug.
>>
>> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> weeks need not be reflected against the blocker, *only* blocker bugs
>> identified that should prevent the release, need to be tracked against this
>> tracker bug.
>>
>> We are not building beta1 packages, and will build out RC0 packages once
>> we cross the backport dates. Hence, folks interested in testing this out can
>> either build from the code or wait for (about) a week longer for the
>> packages (and initial release notes).
>>
>> Features tracked as slipped and expected to be backported by 5th May are,
>>
>> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>
>> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>   - Needs a +2 on https://review.gluster.org/13762
>>
>> 3) Enhance handleops readdirplus operation to return handles along with
>> dirents #174 (@skoduri)
>>
>> 4) Halo - Initial version (@pranith)
>
>
> I merged the patch on master. Will send out the port on Thursday. I have to
> leave like right now to catch train and am on leave tomorrow, so will be
> back on Thursday and get the port done. Will also try to get the other
> patches fb guys mentioned post that preferably by 5th itself.

Niels found that the HALO patch has pulled in a little bit of the IPv6
patch. This shouldn't have happened.
The IPv6 patch is currently stalled because it depends on an internal
FB library. The IPv6 bits that made it in pull this dependency.
This would have led to a -2 on the HALO patch by me, but as I wasn't
aware of it, the patch was merged.

The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
to affect anything HALO. So they should be easily removable and should
be removed.

>
>>
>>
>> Thanks,
>> Kaushal, Shyam
>>
>> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
>>
>> [2] Tag for 3.11.0beta1 :
>> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
>>
>> [3] Tracker BZ for 3.11.0 blockers:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting 2017-03-01

2017-03-06 Thread Kaushal M
 carefully going forward

 Others

- _None_

## Announcements

### New announcements

- Reminder: Community cage outage on 14th and 15th March

### Regular announcements

- If you're attending any event/conference please add the event and
yourselves to Gluster attendance of events:
http://www.gluster.org/events (replaces
https://public.pad.fsfe.org/p/gluster-events)
- Put (even minor) interesting topics/updates on
https://bit.ly/gluster-community-meetings

---
* Roll call  (kshlm, 15:00:30)

* Discuss backport tracking via gerrit Change-ID  (kshlm, 15:06:03)
  * ACTION: shyam to notify devel list about the backport whine job and
gather feedback  (kshlm, 15:19:14)
  * ACTION: nigelb will implement the backports whine job after feedback
is obtained  (kshlm, 15:19:33)
  * ACTION: amye  to work on revised maintainers draft with vbellur to
get out for next maintainer's meeting. We'll approve it 'formally'
there, see how it works for 3.11.  (kshlm, 15:54:54)

Meeting ended at 15:57:32 UTC.




Action Items

* shyam to notify devel list about the backport whine job and gather
  feedback
* nigelb will implement the backports whine job after feedback is
  obtained
* amye  to work on revised maintainers draft with vbellur to get out for
  next maintainer's meeting. We'll approve it 'formally' there, see how
  it works for 3.11.




Action Items, by person
---
* amye
  * amye  to work on revised maintainers draft with vbellur to get out
for next maintainer's meeting. We'll approve it 'formally' there,
see how it works for 3.11.
* nigelb
  * nigelb will implement the backports whine job after feedback is
obtained
* shyam
  * shyam to notify devel list about the backport whine job and gather
feedback
* vbellur
  * amye  to work on revised maintainers draft with vbellur to get out
for next maintainer's meeting. We'll approve it 'formally' there,
see how it works for 3.11.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (77)
* nigelb (55)
* shyam (35)
* amye (27)
* vbellur (22)
* zodbot (3)
* sankarshan (1)
* atinm (1)

On Wed, Mar 1, 2017 at 12:32 PM, Kaushal M <kshlms...@gmail.com> wrote:
> This is a reminder about today's meeting. The meeting will start at
> 1500UTC in #gluster-meeting on Freenode.
>
> The meeting notepad is available at
> https://bit.ly/gluster-community-meetings for you to add your updates
> and topics for discussion.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting 2017-03-01

2017-02-28 Thread Kaushal M
This is a reminder about today's meeting. The meeting will start at
1500UTC in #gluster-meeting on Freenode.

The meeting notepad is available at
https://bit.ly/gluster-community-meetings for you to add your updates
and topics for discussion.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Community Meeting 2017-02-15

2017-02-16 Thread Kaushal M
We had a well attended and active (and productive) meeting this time.
Thank you everyone for your attendance.

We discussed 3.10, 3.8.9 and an upcoming infra downtime.

shyam once again reminded everyone about the very close release date
for 3.10. 3.10.0 is still expected on the 21st. A RC1 release is
planned, which is expected to be tested by maintainers. shyam has been
reaching out to maintainers to get feedback.

ndevos brought up the problem of packaging 3.8.9. Our one-man
packaging superman, kkeithley, is on vacation. As a result, the
download.gluster.org, Ubuntu PPA and SuSE packages haven't been built. We
decided to postpone building these packages for 3.8.9 till kkeithley
returns, but still go ahead with the 3.8.9 announcement. We ended up
in a long discussion about how we can fix this problem for the future.
More details are in the logs and minutes.

nigelb brought to notice a planned infra downtime of an expected
72 hours, due to a data center migration. The exact date is still not
set, but the earliest should be March 14.


Our next meeting is scheduled for 1500UTC on March 1, 2017. I've
attached a calendar invite for this. The meeting pad is available for
your updates and topics of discussion at
https://bit.ly/gluster-community-meetings

See you all later.

~kaushal

## Logs
- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-15/community_meeting_2017-02-15.2017-02-15-15.01.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-15/community_meeting_2017-02-15.2017-02-15-15.01.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-15/community_meeting_2017-02-15.2017-02-15-15.01.log.html

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Reminder: Last week before 3.10 release
- Post your test results
- Post any blocker bugs ASAP, so that we can build RC1
- Check the release notes if you have any features delivered in this release
- Package maintainers gear up for RC1 (if any gear up is needed)
- glusterfs-3.8.9 has been tagged, release (hopefully) later today
- Kaleb is on holidays, other packagers for community .rpm and .deb?
- ndevos can take care of the Fedora packages
- debs and rpms on download.gluster.org need help
- no documentation exists, and not clear that required permissions
can be obtained without kkeithley
- nigelb will begin documenting the process when kkeithley is back
- helps with automating everything
- 3.8.9 packages for dependent distros delayed till kkeithley returns
- 3.8.9 will be announced later today as planned
- We need to fix this problem before the march release cycle
- Get a rotating set of packagers who know how to build the packages
- Probably have the release-maintainers package
- Too much work to package for all distributions
- What all packages do we build?
- 
https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
- This page is not informative enough
- Docs need fixing
- Docs hackday in Malta?
- Infra Outage in March
- Details still being finalized
- Possibly around March 14th for 72h.
- RH Community Cage is being moved to a different DC.
- review.g.o, build.g.o, fstat.g.o, centos-ci will be offline
-  list of servers/VM in the cage:
https://github.com/gluster/gluster.org_ansible_configuration/blob/master/hosts#L27


### Next edition's meeting host

- kshlm

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from the last meeting

- jdarcy will work with nigelb to make the infra for reverts easier.
- _No updates_
- shyam will send out a note on 3.10 deadlines and get someone in BLR to
  nag in person.
- Done.

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2:
  - FOSDEM 2017 presentation -
https://fosdem.org/2017/schedule/event/glusterd2/
  - Prashanth is refactoring some of the sunrpc code
  - Aravinda started a discussion and a POC around plugins
  - https://github.com/gluster/glusterd2/pull/241

 GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Next release : 3.10.0
- Target date: February 21, 2017
- Release tracker : https://github.com/gluster/glusterfs/milestone/1
- Updates:
  - Awaiting test updates to github issues
  - Link to testing issues: http://bit.ly/2kDCR8M
  - Awaiting final release blocker fixes
  - Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=1416031
  - Merge pending dashboard:
  - http://bit.ly/2ky6VPI

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Next release : 3.9.2
  

[Gluster-devel] Stop sending patches for and merging patches on release-3.7

2017-02-01 Thread Kaushal M
Hi all,

GlusterFS-3.7.20 is intended to be the final release for release-3.7.
3.7 enters EOL with the expected release of 3.10 in about 2 weeks.

Once 3.10 is released I'll be closing any open bugs on 3.7 and
abandoning any patches on review.

So as the subject says, developers please stop sending changes to
release-3.7, and maintainers please don't merge any more changes onto
release-3.7.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterD2 v4.0dev-5

2017-02-01 Thread Kaushal M
We have a new development release of GD2.

GD2 now supports volfile fetch and portmap requests, so clients are
finally able to mount volumes using the mount command. Portmap doesn't
work reliably yet, so there might be failures.

GD2 was refactored to clean up the main function and standardize the
various servers it runs.

More details about the release and downloads can be found at [1].

We also have a docker image ready for testing [2]. More information on
how to use this image can be found at [3].

Cheers!

~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-5
[2]: 
https://hub.docker.com/r/gluster/glusterd2-test/builds/bqecolrgfsx8damioi3uyas/
[3]: https://github.com/gluster/glusterd2/wiki/Testing-releases
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Community Meeting 2017-02-01

2017-01-31 Thread Kaushal M
Hi all,

This is a reminder for tomorrow's community meeting. The meeting is
scheduled to be held in #gluster-meeting on Freenode at 1500UTC.

Please add your updates for the last two weeks and any topics for
discussion into the meeting pad at [1].

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing GlusterFS-3.7.20

2017-01-31 Thread Kaushal M
GlusterFS-3.7.20 has been released. This is regular bug fix release
for GlusterFS-3.7, and is currently the last planned release of
GlusterFS-3.7. GlusterFS-3.10 is expected next month, and
GlusterFS-3.7 [enters EOL][5] once it is
released. The community will be notified of any changes to the EOL schedule.

The release-notes for GlusterFS-3.7.20 can be read [here][1].

The release tarball and [community provided packages][2] can be obtained
from [download.gluster.org][3]. The CentOS [Storage SIG][4] packages
have been built and should be available soon from the
`centos-gluster37` repository.

[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.20.md
[2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
[3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.20/
[4]: https://wiki.centos.org/SpecialInterestGroup/Storage
[5]: https://www.gluster.org/community/release-schedule/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS-3.7.20 will be tagged later today

2017-01-30 Thread Kaushal M
I'll be tagging this release later today. This should be the final 3.7
release after over a year and a half of updates.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting 2017-01-18

2017-01-18 Thread Kaushal M
On Wed, Jan 18, 2017 at 9:43 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This meeting was the first following our new schedule - 1500UTC on
> Wednesday once every 2 weeks.
>
> This week we had one major discussion on fixing up and improving our
> regression test suite. More information on is available below and in
> the meeting logs.
> There was also a small discussion on conference attendance at
> DevConf.cz 2017, FOSDEM 2017, FAST'17 and Scale15x, all of which will
> have some Gluster community members in attendance. In particular, if
> you don't already know, Gluster will have a stand at FOSDEM, and will
> be a part of the software defined storage devroom.
>
> The next meeting will be on Feb 1 2017, 2 weeks from now, at 1500UTC.
> I've attached a calendar invite to make it easier to remember the
> meeting. The meeting pad[1] is open for discussion topics and updates.
>
> See you all later.
>
> Thanks,
> Kaushal
>
> [1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-01-18
> [2] Minutes: 
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.html
> [3] Minutes (text):
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.txt
> [4] Log: 
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.log.html
> [5] https://bit.ly/gluster-community-meetings
>
> ---
>
> ## Topics of Discussion
>
> The meeting is an open floor, open for discussion of any topic entered below.
>
> - Regression test suite hardening
> - or Changes to testing
> - Discussions happened on the topic of reducing test runtime
> - [vbellur] next steps regarding this is not clear
> - [rastar] nigelb is working on reducing test runtime
> - [rastar] fstat.gluster.org being updated to get stats by release
> - [jdarcy & rastar] tests parallelization is being worked on,
> using containers possibly
> - [jdarcy] I have a list of bad tests
> - https://github.com/gluster/glusterfs/wiki/Test-Clean-Up has more details
> - [vbellur] Test-Clean-up looks good
> - [rastar] additional proposals
> - blocker test bugs for 3.10
> - stop merges to modules with known bad tests over a watermark
> - [vbellur] Need to discuss blocker test bugs for 3.10
> - [rastar] will start a conversation
> - Upcoming conference attendance
> - Lots of people in FOSDEM. Stand + Storage Devroom
> - Lots of people in DevConf
> - Some going to FAST
> - At least one person at Scale
>
> ### Next edition's meeting host
>
> - kshlm
>
> ## Updates
>
>> NOTE : Updates will not be discussed during meetings. Any important or 
>> noteworthy update will be announced at the end of the meeting
>
> ### Action Items from last meeting
>
> - None
>
> ### Releases
>
>  GlusterFS 4.0
>
> - Tracker bug :
> https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
> - Roadmap : https://www.gluster.org/community/roadmap/4.0/
> - Updates:
>   - GD2
>   - ppai finished exploring/prototyping SunRPC for Go.
>   - ppai is exploring multiplexing services onto a single port
>   - kshlm is preparing his presentation for FOSDEM.
>
>  GlusterFS 3.10
>
> - Maintainers : shyam, kkeithley, rtalur
> - Next release : 3.10.0
> - Target date: February 14, 2017
> - Release tracker : https://github.com/gluster/glusterfs/milestone/1
> - Updates:
>   - Branching to be done today (i.e 18th Jan)
>   - Some exceptions are noted for features that will be backported and
> land in 3.10 post the branching, within about a week of branching
>   - gfapi statedump support
>   - Brick multiplexing
>   - Trash can directory creation
>   - DHT rebalance estimation
>   - some remaining storhaug work
>
>  GlusterFS 3.9
>
> - Maintainers : pranithk, aravindavk, dblack
> - Current release : 3.9.1
> - Release date: 20 December 2016
> - actual release date 17 Jan 2017
> - Next release : 3.9.2
>   - Release date : 20 Feb 2017 if 3.10 hasn't shipped by then.
> - Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.2
> - Open bugs : 
> https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.9.1_resolved=1
> - Updates:
>   - 3.9.1 has been tagged
>   - Packages have been built and are on d.g.o
>   - LATEST symlink has been moved
>   - CentOS Storage SIG will be available soon
>   - Release announcement
> https://lists.gluster.org/pipermail/gluster-devel/2017-January/051931.ht

[Gluster-devel] Community Meeting 2017-01-18

2017-01-18 Thread Kaushal M
Hi All,

This meeting was the first following our new schedule - 1500UTC on
Wednesday once every 2 weeks.

This week we had one major discussion on fixing up and improving our
regression test suite. More information is available below and in
the meeting logs.
There was also a small discussion on conference attendance at
DevConf.cz 2017, FOSDEM 2017, FAST'17 and Scale15x, all of which will
have some Gluster community members in attendance. In particular, if
you don't already know, Gluster will have a stand at FOSDEM, and will
be a part of the software defined storage devroom.

The next meeting will be on Feb 1 2017, 2 weeks from now, at 1500UTC.
I've attached a calendar invite to make it easier to remember the
meeting. The meeting pad[1] is open for discussion topics and updates.

See you all later.

Thanks,
Kaushal

[1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-01-18
[2] Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.html
[3] Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.txt
[4] Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.log.html
[5] https://bit.ly/gluster-community-meetings

---

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Regression test suite hardening
- or Changes to testing
- Discussions happened on the topic of reducing test runtime
- [vbellur] next steps regarding this is not clear
- [rastar] nigelb is working on reducing test runtime
- [rastar] fstat.gluster.org being updated to get stats by release
- [jdarcy & rastar] tests parallelization is being worked on,
using containers possibly
- [jdarcy] I have a list of bad tests
- https://github.com/gluster/glusterfs/wiki/Test-Clean-Up has more details
- [vbellur] Test-Clean-up looks good
- [rastar] additional proposals
- blocker test bugs for 3.10
- stop merges to modules with known bad tests over a watermark
- [vbellur] Need to discuss blocker test bugs for 3.10
- [rastar] will start a conversation
- Upcoming conference attendance
- Lots of people in FOSDEM. Stand + Storage Devroom
- Lots of people in DevConf
- Some going to FAST
- At least one person at Scale

### Next edition's meeting host

- kshlm

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from last meeting

- None

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2
  - ppai finished exploring/prototyping SunRPC for Go.
  - ppai is exploring multiplexing services onto a single port
  - kshlm is preparing his presentation for FOSDEM.

 GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Next release : 3.10.0
- Target date: February 14, 2017
- Release tracker : https://github.com/gluster/glusterfs/milestone/1
- Updates:
  - Branching to be done today (i.e 18th Jan)
  - Some exceptions are noted for features that will be backported and
land in 3.10 post the branching, within about a week of branching
  - gfapi statedump support
  - Brick multiplexing
  - Trash can directory creation
  - DHT rebalance estimation
  - some remaining storhaug work

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Release date: 20 December 2016
- actual release date 17 Jan 2017
- Next release : 3.9.2
  - Release date : 20 Feb 2017 if 3.10 hasn't shipped by then.
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.2
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.9.1_resolved=1
- Updates:
  - 3.9.1 has been tagged
  - Packages have been built and are on d.g.o
  - LATEST symlink has been moved
  - CentOS Storage SIG will be available soon
  - Release announcement
https://lists.gluster.org/pipermail/gluster-devel/2017-January/051931.html

 GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.8
- Next release : 3.8.9
  - Release date : 10 February 2017
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.8
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.8.8_resolved=1
- Updates:
  - GlusterFS-3.8.8 has been released
  - 
https://lists.gluster.org/pipermail/gluster-users/2017-January/029695.html
  - http://blog.nixpanic.net/2017/01/gluster-388-ltm-update.html

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.19
- Next release : 3.7.20
  - Release date : 30 January 2017
- 

[Gluster-devel] Weekly Community meeting 2017-01-18

2017-01-16 Thread Kaushal M
Hi everyone,

If you haven't already heard, we will be moving to a new schedule for
the weekly community meeting, starting with tomorrow's meeting.

The meeting will be held at 1500UTC tomorrow, in #gluster-meeting on Freenode.

Add your updates and topics for discussion at
https://bit.ly/gluster-community-meetings

See you all tomorrow.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] New schedule for community meetings - 1500UTC every alternate Wednesday.

2017-01-16 Thread Kaushal M
Hi All,

We recently changed the community meeting format to make it more
lively and are pleased with the nature of discussions happening since
the change. In order to foster more participation in our meetings, we
will be trying out a bi-weekly cadence and moving the meeting to 1500UTC
on alternate Wednesdays. The meeting will continue to happen in
#gluster-meeting on Freenode.


We intend for the new schedule to be effective from Jan 18. The next
community meeting will be held in #gluster-meeting on Freenode, at
1500UTC on Jan 18.

If you are a regular attendee of the community meetings and will be
inconvenienced by the altered schedule, please let us know.

Thanks,
Kaushal and Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Weekly Community Meeting - 20170111

2017-01-11 Thread Kaushal M
ug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.8
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.8.8_resolved=1
- Updates:
  - None

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.19
- Next release : 3.7.20?
  - Release date : 30 Jan 2017?
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.19
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.7.19_resolved=1
- Updates:
  - 3.7.19 released
  - https://www.gluster.org/pipermail/gluster-devel/2017-January/051882.html

### Related projects and efforts

 Community Infra

- Gerrit OS upgrade on 21st Jan.

 Testing

- https://github.com/gluster/glusterfs/wiki/Test-Clean-Up


Meeting summary
---
* Roll call  (kshlm, 12:02:44)

* Is 3.7.20  required?  (kshlm, 12:08:32)
  * AGREED: 3.7.20 will be the (hopefully) last release of release-3.7
(kshlm, 12:10:54)

* Testing discussion update:
  https://github.com/gluster/glusterfs/wiki/Test-Clean-Up  (kshlm,
  12:12:43)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2017-January/051859.html
(kshlm, 12:13:30)

* Do we need a wiki page for meeting minutes?  (kshlm, 12:23:04)
  * AGREED: Meetings need to email minutes/notes to the mailing list.
Optionally add notes to wiki.  (kshlm, 12:41:27)
  * Weekly Community Meeting will do both.  (kshlm, 12:41:43)

* Discuss participation in the meetings  (kshlm, 12:41:57)

Meeting ended at 13:08:07 UTC.


People Present (lines said)
---
* kshlm (88)
* ndevos (47)
* nigelb (27)
* shyam (18)
* kkeithley (11)
* sankarshan (10)
* jdarcy (7)
* Saravanakmr (6)
* zodbot (3)
* skoduri (1)
* anoopcs (1)
* partner (1)

On Wed, Jan 11, 2017 at 3:27 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Today's meeting is due to start in 2 hours from now at 1200UTC in
> #gluster-meeting on Freenode.
>
> The meeting agenda and update pad is at [1]. Add updates and topics
> for discussion here.
>
> See you all in 2 hours.
>
> ~kaushal
>
> [1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS 3.7.19 released

2017-01-11 Thread Kaushal M
On Wed, Jan 11, 2017 at 3:43 PM, Kaushal M <kshlms...@gmail.com> wrote:
> GlusterFS 3.7.19 is a regular bug fix release for GlusterFS-3.7. The
> release-notes for this release can be read here[1].
>
> The release tarball and community provided packages[2] can be obtained
> from download.gluster.org[3]. The CentOS Storage SIG[4] packages have
> been built and should be available soon from the centos-gluster37
> repository.
>
> A reminder to everyone, GlusterFS-3.7 is scheduled[5] to be EOLed with
> the release of GlusterFS-3.10, which should happen sometime in
> February 2017.
>
> ~kaushal
>

The links have been corrected. Thanks Niels for noticing this.

 [1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.19.md
 [2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
 [3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.19/
 [4]: https://wiki.centos.org/SpecialInterestGroup/Storage
 [5]: https://www.gluster.org/community/release-schedule/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS 3.7.19 released

2017-01-11 Thread Kaushal M
GlusterFS 3.7.19 is a regular bug fix release for GlusterFS-3.7. The
release-notes for this release can be read here[1].

The release tarball and community provided packages[2] can be obtained
from download.gluster.org[3]. The CentOS Storage SIG[4] packages have
been built and should be available soon from the centos-gluster37
repository.

A reminder to everyone, GlusterFS-3.7 is scheduled[5] to be EOLed with
the release of GlusterFS-3.10, which should happen sometime in
February 2017.

~kaushal

[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md
[2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
[3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.18/
[4]: https://wiki.centos.org/SpecialInterestGroup/Storage
[5]: https://www.gluster.org/community/release-schedule/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting - 20170111

2017-01-11 Thread Kaushal M
Today's meeting is due to start in 2 hours from now at 1200UTC in
#gluster-meeting on Freenode.

The meeting agenda and update pad is at [1]. Add updates and topics
for discussion here.

See you all in 2 hours.

~kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Weekly Community Meeting - 20170104

2017-01-04 Thread Kaushal M
* shyam will file a bug to get arequal included in glusterfs packages




Action Items, by person
---
* shyam
  * shyam will file a bug to get arequal included in glusterfs packages
* **UNASSIGNED**
  * Need to find out when 3.9.1 is happening




People Present (lines said)
---
* nigelb (92)
* kshlm (60)
* ndevos (39)
* shyam (29)
* rastar (15)
* jdarcy (12)
* atinmu (5)
* Saravanakmr (4)
* zodbot (3)
* sankarshan (1)

On Tue, Jan 3, 2017 at 12:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Happy New Year everyone!
>
> This is a reminder for the resumption of the weekly community meeting
> after nearly a month of not being held. I hope everyone enjoyed their
> holidays, and are now ready to get this restarted.
>
> The meeting agenda and updates document is at [1] as always. I expect
> there are a lot of updates to add after this long break, so make sure
> to add your updates and topics to this before the meeting starts.
>
> The meeting will start, as always, at 1200UTC in #gluster-meeting on Freenode.
>
> See you all tomorrow!
>
> ~kaushal
>
> [1] https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS-3.7.19 tagging approaching

2017-01-02 Thread Kaushal M
3.7.19 was supposed to be tagged on 30th December, but didn't happen
due to the holidays.

I'll be tagging the release before the end of this week. To do this,
no more patches will be merged after tomorrow's meeting. If anyone has
anything to get merged before then, please bring it to my notice.

Right now, there have been 6 commits since .18 and the current
release-3.7 HEAD is at 2892fb430.

Thanks.

Kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting - 20170104

2017-01-02 Thread Kaushal M
Happy New Year everyone!

This is a reminder for the resumption of the weekly community meeting
after nearly a month of not being held. I hope everyone enjoyed their
holidays, and are now ready to get this restarted.

The meeting agenda and updates document is at [1] as always. I expect
there are a lot of updates to add after this long break, so make sure
to add your updates and topics to this before the meeting starts.

The meeting will start, as always, at 1200UTC in #gluster-meeting on Freenode.

See you all tomorrow!

~kaushal

[1] https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing GlusterFS-3.7.18

2016-12-12 Thread Kaushal M
Hi all,

GlusterFS-3.7.18 has been released. This is a regular bug fix release.
This release fixes 13 bugs. The release-notes can be found at [1].

Packages have been built at the CentOS Storage SIG [2] and
download.gluster.org [3]. The tarball can be downloaded from [3].

The next release might be delayed (more than normal) due to it being
the end of the year. The tracker for 3.7.19 is at [4], mark any bugs
that need to be fixed as dependencies.

See you all in the new year.

~kaushal

[1] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md
[2] https://wiki.centos.org/SpecialInterestGroup/Storage
[3] https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.18/
[4] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.19
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Cancelled: Weekly Community Meeting 2016-12-07

2016-12-07 Thread Kaushal M
Hi All,

This week's meeting has been cancelled. There was neither enough
attendance (is it the holidays already?), nor any topics for
discussion.

The next meeting is still on track for next week. The meeting pad [1]
will be carried over to the next week. Please add updates and topics
to it.

Thanks,
Kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Etherpads and archiving

2016-12-07 Thread Kaushal M
Hi all,

We use etherpads a lot in this community. We use them to arrange meetings,
collect notes, collaborate on designs, and for a lot of other purposes.

Our current preferred etherpad, hosted by the FSFE, is being shut down
[1]. This is an issue for us, as a lot of community documents are on
this etherpad instance and are in danger of being lost forever.
(Don't worry, there is an effort underway to archive documents from
here.)

To ensure that we don't face such a situation in the future, we have
decided that etherpads (or alternatives) should only be used for live
collaboration on text documents, after which they must be archived in
the GlusterFS GitHub wiki [2].

This ensures that we are not dependent on a particular etherpad
instance, and leaves us free to use or move to better alternatives when
needed. All etherpad users are encouraged to follow this.

This guideline is already being followed for the weekly community
meetings. The pad (or hackmd.io [3] in this case) is used for
collecting updates and topics through the week, and used to add
minutes during the meeting. It is then archived at the end of the
meeting [4][5], and a new pad is created for the next meeting.

Also, as a part of this, we've begun an effort to find and archive all
existing community etherpads on the FSFE etherpad. Saravana sent out a
mail [6] regarding this. We need help from the community to do this.
Let us know of any etherpads you were using that need to be
archived, and help archive them.

Thanks,
Kaushal

[1] https://wiki.fsfe.org/Teams/System-Hackers/Decommissioning-Pad
[2] https://github.com/gluster/glusterfs/wiki
[3] https://bit.ly/gluster-community-meetings
[4] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-02
[5] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting 2016-11-30

2016-11-30 Thread Kaushal M
Hi all,

This week's meeting was a short one, with just one topic discussed
and few attendees. This left us wondering what led to the low
attendance. We moved on quickly though, so if anyone has any ideas
about this, please let us know.

The meeting agenda and updates for the week are archived at [1]. The
meeting minutes and logs are available at [2],[3] and [4].

Next week's meeting agenda is at [5]. Please add your updates to
this before the next meeting. Also, everyone (and I mean EVERYONE) is
welcome to add their own topics for discussion.

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-30
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.log.html
[5]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Last week in GD2

2016-11-29 Thread Kaushal M
Not a lot has happened in GD2 since the dev3 release.

Prashanth has worked on getting a working aggregation mechanism for
transactions. [1]
We switched to requiring go1.6 as a minimum, as go1.5 is no longer
supported.
I've not had a lot of progress on volgen in the last week.

~kaushal

[1] https://github.com/gluster/glusterd2/pull/180
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Weekly Community Meeting - 2016-11-23

2016-11-23 Thread Kaushal M
On Tue, Nov 22, 2016 at 6:26 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This is a reminder to add your status updates and topics to the
> meeting agenda at [1].
> Ensure you do this before the meeting tomorrow.
>
> Thanks,
> Kaushal
>
> [1]: https://bit.ly/gluster-community-meetings

Thank you everyone who attended today's meeting.

4 topics were discussed today, the major one among them being the
release of 3.9 and the beginning of the 3.10 cycle. More information can
be found in the meeting minutes and logs at [1][2][3][4].

The agenda for next week's meeting is available at [5]. Please add your
updates and topics for discussion to the agenda. Everyone is welcome
to add their own topics for discussion.

I'll be hosting next week's meeting, at the same time and same place.

Thanks.
~kaushal

[1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-23
[2] Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.html
[3] Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.txt
[4] Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.log.html
[5] https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting - 2016-11-23

2016-11-22 Thread Kaushal M
Hi All,

This is a reminder to add your status updates and topics to the
meeting agenda at [1].
Ensure you do this before the meeting tomorrow.

Thanks,
Kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 3:29 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>>
>>>> I made a mistake.
>>>>
>>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>>> not correct.
>>>> So I corrected it with a new commit, c11131f, directly on top of my
>>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>>> tagged this commit as 3.7.17.
>>>>
>>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>>> updated branch to release-3.7. Because of this I inadvertently created
>>>> a new (virtual) branch.
>>>> Any new changes merged in release-3.7 since have happened on top of
>>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>>> v3.7.17 exists as a virtual branch now.
>>>>
>>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>>
>>>> | release-3.7 CURRENT HEAD
>>>> |
>>>> | new commits
>>>> |   | c11131f (tag: v3.7.17)
>>>> 8b95eba /
>>>> |
>>>> | old commits
>>>>
>>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>>> push this as the new release-3.7.
>>>>
>>>>  | release-3.7 NEW HEAD
>>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>>> ||
>>>> | new commits*   |
>>>> || c11131f (tag: v3.7.17)
>>>> | 8b95eba ---/
>>>> |
>>>> | old commits
>>>>
>>>> I'd like to avoid doing a rebase because it would lead to changed
>>>> commit-ids, and break any existing clones.
>>>>
>>>> The actual commands I'll be doing on my local system are:
>>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>>> to the 3.7.17 branch in the picture above)
>>>> ```
>>>> $ git fetch origin # fetch latest origin
>>>> $ git checkout release-3.7 # checking out my local release-3.7
>>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>>> local release-3.7. This will create a merge commit.
>>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>>> commit.
>>>> ```
>>>>
>>>> After this users with existing clones should get changes done on their
>>>> next `git pull`.
>>>
>>> I've tested this out locally, and it works.
>>>
>>>>
>>>> I'll do this in the next couple of hours, if there are no objections.
>>>>
>>
>> I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
>> this and bringing attention to it.
>
> I'm going ahead with the plan. I've not gotten any bad feedback. Only
> JoeJulian and Niels said it looks okay.

This is now done. A merge commit 94ba6c9 was created which merges back
v3.7.17 into release-3.7. The head of release-3.7 now points to this
merge commit.
Future pulls of release-3.7 will not be affected. If anyone faces
issues, please let me know.
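
For anyone who wants to double-check their own clone after pulling, something
like the following should confirm that the tag is now reachable from the
branch (a quick sketch; it assumes your remote is named 'origin'):

```
$ git fetch origin --tags                      # get the updated branch and the tag
$ git merge-base --is-ancestor v3.7.17 origin/release-3.7 && echo "v3.7.17 is on release-3.7"
$ git branch -r --contains v3.7.17             # should list origin/release-3.7
```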

>
>>
>>>> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>
>>> I made a mistake.
>>>
>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>> not correct.
>>> So I corrected it with a new commit, c11131f, directly on top of my
>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>> tagged this commit as 3.7.17.
>>>
>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>> updated branch to release-3.7. Because of this I inadvertently created
>>> a new (virtual) branch.
>>> Any new changes merged in release-3.7 since have happened on top of
>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>> v3.7.17 exists as a virtual branch now.
>>>
>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>
>>> | release-3.7 CURRENT HEAD
>>> |
>>> | new commits
>>> |   | c11131f (tag: v3.7.17)
>>> 8b95eba /
>>> |
>>> | old commits
>>>
>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>> push this as the new release-3.7.
>>>
>>>  | release-3.7 NEW HEAD
>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>> ||
>>> | new commits*   |
>>> || c11131f (tag: v3.7.17)
>>> | 8b95eba ---/
>>> |
>>> | old commits
>>>
>>> I'd like to avoid doing a rebase because it would lead to changed
>>> commit-ids, and break any existing clones.
>>>
>>> The actual commands I'll be doing on my local system are:
>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>> to the 3.7.17 branch in the picture above)
>>> ```
>>> $ git fetch origin # fetch latest origin
>>> $ git checkout release-3.7 # checking out my local release-3.7
>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>> local release-3.7. This will create a merge commit.
>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>> commit.
>>> ```
>>>
>>> After this users with existing clones should get changes done on their
>>> next `git pull`.
>>
>> I've tested this out locally, and it works.
>>
>>>
>>> I'll do this in the next couple of hours, if there are no objections.
>>>
>
> I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
> this and bringing attention to it.

I'm going ahead with the plan. I've not gotten any bad feedback. Only
JoeJulian and Niels said it looks okay.

>
>>> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>
>> I made a mistake.
>>
>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>> not correct.
>> So I corrected it with a new commit, c11131f, directly on top of my
>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>> tagged this commit as 3.7.17.
>>
>> Unfortunately, when pushing I just pushed the tags and didn't push my
>> updated branch to release-3.7. Because of this I inadvertently created
>> a new (virtual) branch.
>> Any new changes merged in release-3.7 since have happened on top of
>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>> v3.7.17 exists as a virtual branch now.
>>
>> The current branching for release-3.7 and v3.7.17 looks like this.
>>
>> | release-3.7 CURRENT HEAD
>> |
>> | new commits
>> |   | c11131f (tag: v3.7.17)
>> 8b95eba /
>> |
>> | old commits
>>
>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>> push this as the new release-3.7.
>>
>>  | release-3.7 NEW HEAD
>> |release-3.7 CURRENT HEAD -->| Merge commit
>> ||
>> | new commits*   |
>> || c11131f (tag: v3.7.17)
>> | 8b95eba ---/
>> |
>> | old commits
>>
>> I'd like to avoid doing a rebase because it would lead to changed
>> commit-ids, and break any existing clones.
>>
>> The actual commands I'll be doing on my local system are:
>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>> to the 3.7.17 branch in the picture above)
>> ```
>> $ git fetch origin # fetch latest origin
>> $ git checkout release-3.7 # checking out my local release-3.7
>> $ git merge origin/release-3.7 # merge updates from origin into my
>> local release-3.7. This will create a merge commit.
>> $ git push origin release-3.7:release-3.7 # push my local branch to
>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>> commit.
>> ```
>>
>> After this users with existing clones should get changes done on their
>> next `git pull`.
>
> I've tested this out locally, and it works.
>
>>
>> I'll do this in the next couple of hours, if there are no objections.
>>

I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
this and bringing attention to it.

>> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>
> I made a mistake.
>
> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
> not correct.
> So I corrected it with a new commit, c11131f, directly on top of my
> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
> tagged this commit as 3.7.17.
>
> Unfortunately, when pushing I just pushed the tags and didn't push my
> updated branch to release-3.7. Because of this I inadvertently created
> a new (virtual) branch.
> Any new changes merged in release-3.7 since have happened on top of
> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
> v3.7.17 exists as a virtual branch now.
>
> The current branching for release-3.7 and v3.7.17 looks like this.
>
> | release-3.7 CURRENT HEAD
> |
> | new commits
> |   | c11131f (tag: v3.7.17)
> 8b95eba /
> |
> | old commits
>
> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
> push this as the new release-3.7.
>
>  | release-3.7 NEW HEAD
> |release-3.7 CURRENT HEAD -->| Merge commit
> ||
> | new commits*   |
> || c11131f (tag: v3.7.17)
> | 8b95eba ---/
> |
> | old commits
>
> I'd like to avoid doing a rebase because it would lead to changed
> commit-ids, and break any existing clones.
>
> The actual commands I'll be doing on my local system are:
> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
> to the 3.7.17 branch in the picture above)
> ```
> $ git fetch origin # fetch latest origin
> $ git checkout release-3.7 # checking out my local release-3.7
> $ git merge origin/release-3.7 # merge updates from origin into my
> local release-3.7. This will create a merge commit.
> $ git push origin release-3.7:release-3.7 # push my local branch to
> remote and point remote release-3.7 to my release-3.7 ie. the merge
> commit.
> ```
>
> After this users with existing clones should get changes done on their
> next `git pull`.

I've tested this out locally, and it works.

>
> I'll do this in the next couple of hours, if there are no objections.
>
> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
IMPORTANT: Till this is fixed please stop merging changes into release-3.7

I made a mistake.

When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
not correct.
So I corrected it with a new commit, c11131f, directly on top of my
local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
tagged this commit as 3.7.17.

Unfortunately, when pushing I just pushed the tags and didn't push my
updated branch to release-3.7. Because of this I inadvertently created
a new (virtual) branch.
Any new changes merged in release-3.7 since have happened on top of
8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
v3.7.17 exists as a virtual branch now.

The current branching for release-3.7 and v3.7.17 looks like this.

| release-3.7 CURRENT HEAD
|
| new commits
|   | c11131f (tag: v3.7.17)
8b95eba /
|
| old commits

The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
push this as the new release-3.7.

 | release-3.7 NEW HEAD
|release-3.7 CURRENT HEAD -->| Merge commit
||
| new commits*   |
|| c11131f (tag: v3.7.17)
| 8b95eba ---/
|
| old commits

I'd like to avoid doing a rebase because it would lead to changed
commit-ids, and break any existing clones.

The actual commands I'll be doing on my local system are:
(NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
to the 3.7.17 branch in the picture above)
```
$ git fetch origin # fetch latest origin
$ git checkout release-3.7 # checking out my local release-3.7
$ git merge origin/release-3.7 # merge updates from origin into my
local release-3.7. This will create a merge commit.
$ git push origin release-3.7:release-3.7 # push my local branch to
remote and point remote release-3.7 to my release-3.7 ie. the merge
commit.
```

After this users with existing clones should get changes done on their
next `git pull`.

I'll do this in the next couple of hours, if there are no objections.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 12:37 PM, Humble Devassy Chirammal
<humble.deva...@gmail.com> wrote:
> Good going !!
>
> Does this embedded etcd capable of connecting or working with external etcd
> if its availble ?

Nope. This is private to GD2. But GD2 itself will become capable of
connecting to external stores at some point in the future.
External etcd clients apart from GD2 could connect to this, but this may change.

>
> --Humble
>
>
> On Thu, Nov 17, 2016 at 8:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> I'm pleased to announce the third development release of GD2.
>>
>> The big news in this release is the move to embedded etcd. You no
>> longer need to install etcd separately. You just install GD2, and do
>> you work, just like old times with GD1.
>>
>> Prashanth was all over this release, in addition to doing the
>> embedding work, he also did a lot of minor cleanup and fixes, and a
>> lot of linter fixes.
>>
>> Prebuilt binaries for Linux x86-64 are available from the release page
>> [1]. A docker image gluster/glusterd2-test [2] has also been created
>> with this release.
>>
>> Please refer to the 'Testing releases' wiki [3] page for more
>> information on how to go about installing, running and testing GD2.
>> This also contains instructions on how to make use of the docker image
>> with Vagrant to set up a testing environment.
>>
>> Thanks,
>> Kaushal
>>
>> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
>> [2]:
>> https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
>> [3]: https://github.com/gluster/glusterd2/wiki/Testing-releases
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Kaushal M
I'm pleased to announce the third development release of GD2.

The big news in this release is the move to embedded etcd. You no
longer need to install etcd separately. You just install GD2, and do
your work, just like old times with GD1.

Prashanth was all over this release, in addition to doing the
embedding work, he also did a lot of minor cleanup and fixes, and a
lot of linter fixes.

Prebuilt binaries for Linux x86-64 are available from the release page
[1]. A docker image gluster/glusterd2-test [2] has also been created
with this release.

Please refer to the 'Testing releases' wiki [3] page for more
information on how to go about installing, running and testing GD2.
This also contains instructions on how to make use of the docker image
with Vagrant to set up a testing environment.

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
[2]: 
https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
[3]: https://github.com/gluster/glusterd2/wiki/Testing-releases
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meetings - Feedback on new meeting format

2016-11-17 Thread Kaushal M
Hi All,

We have begun following a new format for the weekly community meetings
for the past 4 weeks.

The new format is just a massive Open floor for everyone to discuss a
topic of their choice. The old boring weekly updates have been
relegated to just being notes in the meeting agenda. The meetings are
being captured into wiki [1][2][3], and give a good picture of what's
been happening in the community in the past week.

We trialed the format for 3 weeks (we actually did an
extra week, and will follow it next week as well). We'd like to hear
feedback about this from the community. It'll be good if your feedback
covers the following,
1. What did you like or not like about the new format?
2. What could be done better?
3. Should we continue with the format?

---
I'll begin with my feedback.

This has resulted in several good changes,
a. Meetings are now livelier, with more people speaking up and
making themselves heard.
b. Each topic in the open floor gets a lot more time for discussion.
c. Developers are sending out weekly updates of works they are doing,
and linking those mails in the meeting agenda.

Though the response and attendance for the initial 2 meetings was
good, it dropped for the last 2. This week in particular didn't have a
lot of updates added to the meeting agenda. It seems like interest has
dropped already.

We could probably do a better job of collecting updates to make it
easier for people to add their updates, but the current format of
adding updates to etherpad(/hackmd) is simple enough. I'd like to know
if there is anything else preventing people from providing updates.

I vote we continue with the new format.
---

Everyone please provide your feedback by replying to this mail. We'll
be going over the feedback in the next meeting.

Thanks.
~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-16
[2]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
[3]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-02
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting - 2016-11-16

2016-11-16 Thread Kaushal M
I forgot to send this reminder out earlier. Please add your updates
and any topics of discussion to
https://public.pad.fsfe.org/p/gluster-community-meetings .

The meeting starts in ~3 hours from now.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] gfid generation

2016-11-15 Thread Kaushal M
On Tue, Nov 15, 2016 at 11:33 PM, Ankireddypalle Reddy
 wrote:
> Pranith,
>
>  Thanks for getting back on this. I am trying to see how
> gfid can be generated programmatically. Given a file name how do we generate
> gfid for it. I was reading some of the email threads about it where it was
> mentioned that gfid is generated based upon parent directory gfid and the
> file name. Given a same parent gfid and file name do we always end up with
> the same gfid.

You're probably confusing the hash generated for the elastic hash
algorithm in DHT with the UUID. That is a combination of

I always thought that the GFID was a UUID, which was randomly
generated. (The random UUID might be modified a little to allow
some leeway with directory listing, IIRC.)

Adding gluster-devel to get more eyes on this.
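
For reference, the stored gfid can be inspected directly on a brick; a
minimal sketch (the brick path below is just a placeholder):

```
# On a server node, against a brick path (not the client mount):
$ getfattr -n trusted.gfid -e hex /bricks/brick1/testvol/path/to/file
# Prints trusted.gfid=0x<32 hex digits>, i.e. the 16 raw bytes of the UUID.
```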

>
>
>
> Thanks and Regards,
>
> ram
>
>
>
> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> Sent: Tuesday, November 15, 2016 12:58 PM
> To: Ankireddypalle Reddy
> Cc: gluster-us...@gluster.org
> Subject: Re: [Gluster-users] gfid generation
>
>
>
> Sorry, didn't understand the question. Are you saying give a file on gluster
> how to get gfid of the file?
>
> #getfattr -d -m. -e hex /path/to/file shows it
>
>
>
> On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy 
> wrote:
>
> Hi,
>
> Is the mapping from file name to gfid an idempotent operation.  If
> so please point me to the function that does this.
>
>
>
> Thanks and Regards,
>
> Ram
>
> ***Legal Disclaimer***
>
> "This communication may contain confidential and privileged material for the
>
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
>
> by others is strictly prohibited. If you have received the message by
> mistake,
>
> please advise the sender by reply email and delete the message. Thank you."
>
> **
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> Pranith
>
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Kaushal M
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:
>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri
>  wrote:
>>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and we
>> are in agreement that they are this severe, we can definitely take them in.
>> Otherwise next release is better IMO. Thoughts?
>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>

If a workaround is present, I don't consider it a blocker for the release.

> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>
>>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri
>>>  wrote:

 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. Best way to solve it is using compound-fops. So for
 now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's
 http://review.gluster.org/#/c/15778 before going ahead with the release. If
 we missed any other crucial patch please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintain...@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
>
> ~ Atin (atinm)
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Weekly Community Meeting - 2016-11-09

2016-11-09 Thread Kaushal M
On Wed, Nov 9, 2016 at 9:55 AM, Kaushal M <kshlms...@gmail.com> wrote:
> On Wed, Nov 9, 2016 at 9:54 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> Hi all,
>> This a reminder to everyone to add the updates to the meeting etherpad
>> [1] before the meeting starts at 1200UTC today.
>
> Also, add any topics you want discussed to the Open floor.
>
>>
>> Thanks.
>>
>> ~kaushal
>>
>> [1]: https://public.pad.fsfe.org/p/gluster-community-meetings

Attendance at this week's meeting was quite poor initially, but picked
up later on.
Two topics were discussed today: the impending shutdown of the
FSFE etherpad and the trial run of the new format meetings.
We'll be running the same format next week as well.

The meeting pad has been archived at [1]. The logs can be found at [2],[3],[4].

I'll be hosting next week's meeting, same time (people in NA, remember
to come an hour early), same place.
Don't forget, add your topics and updates to the agenda at [5].

Thanks.

~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.log.html


## Topics of Discussion

### Next week's meeting host

- kkeitheley volunteered last week.
- kkeithley can no longer host, thanks to DST.
- kshlm will host again

### Open floor

- [kshlm] FSFE etherpad is shutting down
- Need to find all existing Gluster etherpads on it
- archive whatever can be archived in the gh-wiki
(https://github.com/gluster/glusterfs/wiki)
- Find alternative for others
- https://github.com/ether/etherpad-lite/wiki/Sites-that-run-Etherpad-Lite
- Saravanakmr volunteered to lead the effort to find and collect
existing pads on FSFE etherpad
- https://hackmd.io suggested by post-factum
- New meeting format trial ending. Should we continue?
- The format was under trial for 3 weeks.
- +1 to continue from hchirram, post-factum, Saravanakmr, samikshan, rastar
- No changes suggested
- kshlm will ask for feedback on mailing lists.

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Releases

#### GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
- _Add updates here_
- GD2
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051421.html
- Brick Multiplexing
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051364.html
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051389.html

#### GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.0rc2
- Next release : 3.9.0
  - Release date : End of Sept 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.0
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.0&hide_resolved=1
- Roadmap : https://www.gluster.org/community/roadmap/3.9/
- Updates:
  - _None_

#### GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.5
- Next release : 3.8.6
  - Release date : 10 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.6
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.6&hide_resolved=1
- Updates:
  - Release is planned for the weekend
 - https://www.gluster.org/pipermail/maintainers/2016-November/001659.html

#### GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.17
- Next release : 3.7.18
  - Release date : 30 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.18
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.7.18&hide_resolved=1
- Updates:
- samikshan sent out a release announcement
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051414.html

### Related projects and efforts

#### Community Infra

- _None_

#### Samba

- Fedora updates for Samba v4.3.12, v4.4.7 and v4.5.1 were created and
pushed following the regression encountered with GlusterFS integration,
tracked by https://bugzilla.samba.org/show_bug.cgi?id=12404

#### Ganesha

- _None_

#### Containers

- _None_

#### Testing

- [loadtheacc] 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051369.html

#### Others

- [atinm] GlusterD-1.0 updates
https://www.gluster.org/pipermail/gluster-devel/2016-November/051432.html


### Action Items from last week

- nigelb, kshlm, will document and start the practice of recording
etherpads into Github wikis.
- Meet

[Gluster-devel] Weekly Community Meeting - 2016-11-09

2016-11-08 Thread Kaushal M
Hi all,
This a reminder to everyone to add the updates to the meeting etherpad
[1] before the meeting starts at 1200UTC today.

Thanks.

~kaushal

[1]: https://public.pad.fsfe.org/p/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Last week in GD2

2016-11-08 Thread Kaushal M
Hi all,

Not a lot has happened in the last week in GD2 land.

- Prashanth made changes to move GD2 from using etcdv2 client API to
the etcdv3 client api.
  - This helps a lot with the embedded etcd work.
- Prashanth also did a lot of cleanups of the codebase to satisfy golint
- I've been continuing work on Volgen-2.0. This week I've been working
on dependency resolution
  - I'll be sharing more details about the algorithm later this week
with the devel list.

In the upcoming week, I'll continue working on  volgen-2.0. Prashanth
is on vacation this week.

I'm thinking of doing a hangouts session on volgen-2.0 sometime soon.
I'll let everyone know of the details when I plan it.

Thanks.

~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Help needed: NFS Debugging for Glusto tests and Glusto help in general

2016-11-06 Thread Kaushal M
On Mon, Nov 7, 2016 at 11:55 AM, Nigel Babu  wrote:
> Hello,
>
> I've been working on getting the Glusto tests to work and it appears that 
> we're
> stuck in a situation which Shewtha and Jonathan haven't been able to fully
> arrive at a solution. Here are the two problems:
>
> 1. Originally, we ran into issues with NFS with an error that looked like 
> this:
>
> if 'nfs' in cls.mount_type:
> cmd = "showmount -e localhost"
> _, _, _ = g.run(cls.mnode, cmd)
>
> cmd = "showmount -e localhost | grep %s" % cls.volname
> ret, _, _ = g.run(cls.mnode, cmd)
>>   assert (ret == 0), "Volume %s not exported" % cls.volname
> E   AssertionError: Volume testvol_replicated not exported
> E   assert 1 == 0
>

We stopped exporting volumes over the internal nfs server by default
for new volumes in 3.8 and all volumes in 3.9 and beyond. You will
need to enable nfs on volumes by setting the option 'nfs.disable' to
'off'.
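
Something along these lines should get the showmount check passing again,
using the volume name from the traceback above (a quick sketch):

```
$ gluster volume set testvol_replicated nfs.disable off
$ gluster volume status testvol_replicated nfs   # check that the gNFS server came up
$ showmount -e localhost                         # the volume should now be listed
```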

> bvt/test_bvt_lite_and_plus.py:88: AssertionError
>
> Entries in glustomain.log:
>
> 2016-11-07 06:17:06,007 INFO (run) root@172.19.2.69 (cp): gluster volume info 
> | egrep "^Brick[0-9]+" | grep -v "ss_brick"
> 2016-11-07 06:17:06,058 ERROR (get_servers_used_bricks_dict) error in getting 
> bricklist using gluster v info
> 2016-11-07 06:17:06,059 INFO (run) root@172.19.2.69 (cp): gluster volume info 
> testvol_replicated --xml
> 2016-11-07 06:17:06,111 INFO (run) root@172.19.2.69 (cp): gluster volume 
> create testvol_replicated replica 3   
> 172.19.2.69:/mnt/testvol_replicated_brick0 
> 172.19.2.15:/mnt/testvol_replicated_brick1 172.19.2.3
> 8:/mnt/testvol_replicated_brick2 --mode=script force
> 2016-11-07 06:17:08,272 INFO (run) root@172.19.2.69 (cp): gluster volume 
> start testvol_replicated --mode=script
> 2016-11-07 06:17:19,066 INFO (run) root@172.19.2.69 (cp): gluster volume info 
> testvol_replicated
> 2016-11-07 06:17:19,125 INFO (run) root@172.19.2.69 (cp): gluster vol status 
> testvol_replicated
> 2016-11-07 06:17:19,189 INFO (run) root@172.19.2.69 (cp): showmount -e 
> localhost
> 2016-11-07 06:17:19,231 INFO (run) root@172.19.2.69 (cp): showmount -e 
> localhost | grep testvol_replicated
> 2016-11-07 06:17:19,615 INFO (main) Ending glusto via main()
> 2016-11-07 06:20:23,713 INFO (main) Starting glusto via main()
>
> Today I tried to comment out the NFS bits and run the test again. Here's what
> that got me:
>
> # Setup Volume
> ret = setup_volume(mnode=cls.mnode,
>all_servers_info=cls.all_servers_info,
>volume_config=cls.volume, force=True)
>>   assert (ret == True), "Setup volume %s failed" % cls.volname
> E   AssertionError: Setup volume testvol_distributed-replicated failed
> E   assert False == True
>
> bvt/test_bvt_lite_and_plus.py:73: AssertionError
>
> Entries in glustomain.log:
> 2016-11-07 06:20:34,994 INFO (run) root@172.19.2.69 (cp): gluster volume info 
> | egrep "^Brick[0-9]+" | grep -v "ss_brick"
> 2016-11-07 06:20:35,048 ERROR (form_bricks_list) Not enough bricks available 
> for creating the bricks
> 2016-11-07 06:20:35,049 ERROR (setup_volume) Number_of_bricks is greater than 
> the unused bricks on servers
>

This seems to be an error with provisioning or Glusto. Glusto seems to
be expecting more bricks than have been provisioned.
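
A quick way to compare is to count the bricks already in use (the same
command Glusto runs in the log above) against what the test's
all_servers_info provides per server; a rough sketch:

```
# Bricks already consumed by existing volumes on this cluster:
$ gluster volume info | egrep "^Brick[0-9]+" | grep -v "ss_brick" | wc -l
# If the distributed-replicated volume in the BVT needs more bricks than are
# left unused in all_servers_info, setup_volume will fail exactly like this.
```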

> Does this make sense to anyone in terms of whether it's an error at Glusto-end
> or an error in Gluster that's being caught?
>
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Reminder to add meeting updates

2016-11-01 Thread Kaushal M
Hi all,

This is a reminder to all to add their updates to weekly meeting pad
[1]. Please make sure you find some time to add updates about your
components, features or things you are working on.

Thanks,
Kaushal

[1] https://public.pad.fsfe.org/p/gluster-community-meetings
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Kaushal M
I've finished my testing of GlusterD, and everything is working as
expected. I'm giving an ACK for GlusterD.

I've tested mainly the core of GlusterD and CLI. I've not tested
features like snapshots, tier, bit-rot, quota, ganesha etc.

On Fri, Oct 28, 2016 at 1:45 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Oct 28, 2016 at 1:28 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> I'm continuing testing GlusterD for 3.9.0rc2. I wasted a lot of my
>> time earlier this morning testing 3.8.5 because of an oversight.
>>
>> I have one issue so far: the cluster.op-version defaults to 4.
>> This isn't how it's supposed to be; it needs to be set to
>> 39000 for 3.9.0.
>>
>> I'll send out a patch to fix this.
>
> I've opened a bug to track this.
> https://bugzilla.redhat.com/show_bug.cgi?id=1389675
>
>>
>> On Fri, Oct 28, 2016 at 11:29 AM, Raghavendra Gowdappa
>> <rgowd...@redhat.com> wrote:
>>> Thanks to "Tirumala Satya Prasad Desala" <tdes...@redhat.com>, we were able 
>>> to run tests for Plain distribute and didn't see any failures.
>>>
>>> Ack Plain distribute.
>>>
>>> - Original Message -
>>>> From: "Kaleb S. KEITHLEY" <kkeit...@redhat.com>
>>>> To: "Aravinda" <avish...@redhat.com>, "Gluster Devel" 
>>>> <gluster-devel@gluster.org>, "GlusterFS Maintainers"
>>>> <maintain...@gluster.org>
>>>> Sent: Thursday, October 27, 2016 8:51:36 PM
>>>> Subject: Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - 
>>>>Release 3.9
>>>>
>>>>
>>>> Ack on nfs-ganesha bits. Tentative ack on gnfs bits.
>>>>
>>>> Conditional ack on build, see:
>>>>http://review.gluster.org/15726
>>>>http://review.gluster.org/15733
>>>>http://review.gluster.org/15737
>>>>http://review.gluster.org/15743
>>>>
>>>> There will be backports to 3.9 of the last three soon. Timely reviews of
>>>> the last three will accelerate the availability of backports.
>>>>
>>>> On 10/26/2016 10:34 AM, Aravinda wrote:
>>>> > Gluster 3.9.0rc2 tarball is available here
>>>> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz
>>>> >
>>>> > regards
>>>> > Aravinda
>>>> >
>>>> > On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:
>>>> >> Hi,
>>>> >>
>>>> >> Since Automated test framework for Gluster is in progress, we need
>>>> >> help from Maintainers and developers to test the features and bug
>>>> >> fixes to release Gluster 3.9.
>>>> >>
>>>> >> In last maintainers meeting Shyam shared an idea about having a Test
>>>> >> day to accelerate the testing and release.
>>>> >>
>>>> >> Please participate in testing your component(s) on Oct 27, 2016. We
>>>> >> will prepare the rc2 build by tomorrow and share the details before
>>>> >> Test day.
>>>> >>
>>>> >> RC1 Link:
>>>> >> http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
>>>> >> Release Checklist:
>>>> >> https://public.pad.fsfe.org/p/gluster-component-release-checklist
>>>> >>
>>>> >>
>>>> >> Thanks and Regards
>>>> >> Aravinda and Pranith
>>>> >>
>>>> >
>>>> > ___
>>>> > maintainers mailing list
>>>> > maintain...@gluster.org
>>>> > http://www.gluster.org/mailman/listinfo/maintainers
>>>>
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS-3.9.0 - Delete or disable experimental features

2016-10-28 Thread Kaushal M
Jeff & Shyam,

We need you opinions on this as these are your components.

3.9 is still building and shipping experimental features. The packages
being built currently include these. We shouldn't be doing this.

I have 2 changes under review [1] & [2], which disable and delete
these respectively.

Niels informed me that for 3.8 they were deleted, so if we want to follow
precedent, I'd prefer [2].

[1] isn't complete as it requires changes to the spec file. The change
attempts to make experimental features disabled by default, while
allowing them to be enabled if required. There might be more things to
fix to make it completely correct. Even if we don't choose [1] now, I
will be forward porting it to master to allow selective building of
experimental features.

So what would you guys prefer?

~kaushal

[1]: https://review.gluster.org/15748
[2]: https://review.gluster.org/15750
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Kaushal M
I'm continuing testing GlusterD for 3.9.0rc2. I wasted a lot of my
time earlier this morning testing 3.8.5 because of an oversight.

I have one issue so far: the cluster.op-version defaults to 4.
This isn't how it's supposed to be; it needs to be set to
39000 for 3.9.0.

I'll send out a patch to fix this.
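
For anyone verifying this on an rc2 cluster, a quick way to see the
op-version glusterd is currently running with is to check glusterd's
working directory on each node (a rough sketch; the path is the default
glusterd state directory):

```
$ grep operating-version /var/lib/glusterd/glusterd.info
# Should report the 3.9.0 op-version rather than 4.
```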

On Fri, Oct 28, 2016 at 11:29 AM, Raghavendra Gowdappa
 wrote:
> Thanks to "Tirumala Satya Prasad Desala" , we were able 
> to run tests for Plain distribute and didn't see any failures.
>
> Ack Plain distribute.
>
> - Original Message -
>> From: "Kaleb S. KEITHLEY" 
>> To: "Aravinda" , "Gluster Devel" 
>> , "GlusterFS Maintainers"
>> 
>> Sent: Thursday, October 27, 2016 8:51:36 PM
>> Subject: Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday -   
>>  Release 3.9
>>
>>
>> Ack on nfs-ganesha bits. Tentative ack on gnfs bits.
>>
>> Conditional ack on build, see:
>>http://review.gluster.org/15726
>>http://review.gluster.org/15733
>>http://review.gluster.org/15737
>>http://review.gluster.org/15743
>>
>> There will be backports to 3.9 of the last three soon. Timely reviews of
>> the last three will accelerate the availability of backports.
>>
>> On 10/26/2016 10:34 AM, Aravinda wrote:
>> > Gluster 3.9.0rc2 tarball is available here
>> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz
>> >
>> > regards
>> > Aravinda
>> >
>> > On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:
>> >> Hi,
>> >>
>> >> Since Automated test framework for Gluster is in progress, we need
>> >> help from Maintainers and developers to test the features and bug
>> >> fixes to release Gluster 3.9.
>> >>
>> >> In last maintainers meeting Shyam shared an idea about having a Test
>> >> day to accelerate the testing and release.
>> >>
>> >> Please participate in testing your component(s) on Oct 27, 2016. We
>> >> will prepare the rc2 build by tomorrow and share the details before
>> >> Test day.
>> >>
>> >> RC1 Link:
>> >> http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
>> >> Release Checklist:
>> >> https://public.pad.fsfe.org/p/gluster-component-release-checklist
>> >>
>> >>
>> >> Thanks and Regards
>> >> Aravinda and Pranith
>> >>
>> >
>> > ___
>> > maintainers mailing list
>> > maintain...@gluster.org
>> > http://www.gluster.org/mailman/listinfo/maintainers
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

