Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Amar Tumballi Suryanarayan
On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:

> On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna
> Murthy wrote:
> > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
> >
> > > > Comparing the changes between revisions is something
> > > that GitHub does not support...
> > >
> > > It does support that,
> > > actually.
> > >
> >
> > Yes, it does support. We need to use Squash merge after all review is
> done.
>
> Squash merge would also combine multiple commits that are intended to
> stay separate. This is really bad :-(
>
>
We should treat 1 patch in Gerrit as 1 PR in GitHub; then squash merge
works the same way reviews are done in Gerrit.  Or we can come up with a
label, upon which we can use the 'rebase and merge' option, which
preserves the commits as is.

-Amar


> Niels
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [RFC] alter inode table lock from mutex to rwlock

2019-08-22 Thread Amar Tumballi Suryanarayan
Hi Changwei Ge,

On Thu, Aug 22, 2019 at 5:57 PM Changwei Ge  wrote:

> Hi,
>
> Now inode_table_t->lock is a mutex, which I think we can replace with
> 'pthread_rwlock' for better concurrency.
>
> pthread_rwlock allows more than one thread to access the inode table
> at the same time.
> Moreover, the critical section the lock protects won't take many
> CPU cycles, and no I/O or CPU fault/exception is involved, going by a quick
> glance at the glusterfs code.
> I hope I didn't miss something.
> If I get an ACK from a major glusterfs developer, I will try to do it.
>
>
You are right. I believe this is possible. No harm in trying this out.

Xavier, Raghavendra, Pranith, Nithya, do you think this is possible?
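
To make the proposal concrete, here is a minimal, hypothetical sketch of the
kind of change being discussed. It is not the actual GlusterFS inode table
code; the structure fields and function names are illustrative assumptions
only.

/* Sketch: replace the inode table mutex with a pthread_rwlock_t so that
 * read-only lookups can run in parallel, while mutations still take the
 * lock exclusively. */
#include <pthread.h>
#include <stddef.h>

typedef struct sketch_inode_table {
    pthread_rwlock_t lock; /* was: pthread_mutex_t lock */
    /* ... hash buckets, LRU list, counters ... */
} sketch_inode_table_t;

/* Read-mostly path: many threads may search the table concurrently. */
static void *
sketch_inode_find(sketch_inode_table_t *table)
{
    void *found = NULL;

    pthread_rwlock_rdlock(&table->lock);
    /* ... hash lookup only, no table mutation ... */
    pthread_rwlock_unlock(&table->lock);

    return found;
}

/* Mutating path: linking/unlinking an inode still needs exclusive access. */
static void
sketch_inode_link(sketch_inode_table_t *table)
{
    pthread_rwlock_wrlock(&table->lock);
    /* ... insert into the hash table and LRU list ... */
    pthread_rwlock_unlock(&table->lock);
}

One thing to verify while attempting this: any fields that the lookup path
currently updates under the mutex (for example refcounts or LRU list
position) would need atomics or a write lock, since a read lock alone would
no longer protect them.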

Regards,



> Thanks.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [Announcement] Gluster Community Update

2019-07-09 Thread Amar Tumballi Suryanarayan
Hello Gluster community,



Today marks a new day in the 26-year history of Red Hat. IBM has finalized
its acquisition of Red Hat, which will operate as a distinct unit within
IBM moving forward.



What does this mean for Red Hat’s contributions to the Gluster project?



In short, nothing.



Red Hat always has and will continue to be a champion for open source and
projects like Gluster. IBM is committed to Red Hat’s independence and role
in open source software communities so that we can continue this work
without interruption or changes.



Our mission, governance, and objectives remain the same. We will continue
to execute the existing project roadmap. Red Hat associates will continue
to contribute to the upstream in the same ways they have been. And, as
always, we will continue to help upstream projects be successful and
contribute to welcoming new members and maintaining the project.



We will do this together, with the community, as we always have.



If you have questions or would like to learn more about today’s news, I
encourage you to review the list of materials below. Red Hat CTO Chris
Wright will host an online Q&A session in the coming days where you can ask
questions you may have about what the acquisition means for Red Hat and our
involvement in open source communities. Details will be announced on the Red
Hat blog.



   - Press release

   - Chris Wright blog - Red Hat and IBM: Accelerating the adoption of open
     source

   - FAQ on Red Hat Community Blog



Amar Tumballi,

Maintainer, Lead,

Gluster Community.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Migration of the builders to Fedora 30

2019-07-09 Thread Amar Tumballi Suryanarayan
On Thu, Jul 4, 2019 at 9:55 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

>
>
> On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer 
> wrote:
>
>> Hi,
>>
>> I have upgraded some of the builders to F30 for testing (because F28 is
>> EOL and people did request newer versions of stuff), and I was a bit
>> surprised to see the results of testing the jobs.
>>
>> So we have 10 jobs that run on those builders.
>>
>> 5 jobs run without trouble:
>> - python-lint
>> - clang-scan
>> - clang-format
>> - 32-bit-build-smoke
>> - bugs-summary
>>
>> 1 is disabled, tsan. I didn't try to run it.
>>
>> 4 fails:
>> - python-compliance
>>
>
> OK to run, but skip voting, so we can eventually (soonish) fix this.
>
>
>> - fedora-smoke
>>
>
> Ideally we should soon fix it. Effort is ON. We have a bug for this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
>

Can we re-run this on latest master? I think we are ready for fedora-smoke
on Fedora 30 with the latest master.


>
>> - gluster-csi-containers
>> - glusterd2-containers
>>
>>
> OK to drop for now.
>
>
>> The python-compliance job fails like this:
>> https://build.gluster.org/job/python-compliance/5813/
>>
>> The fedora-smoke job, which builds on newer Fedora (so newer gcc),
>> is failing too:
>> https://build.gluster.org/job/fedora-smoke/6753/console
>>
>> Gluster-csi-containers is having trouble running
>> https://build.gluster.org/job/gluster-csi-containers/304/console
>> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
>> but before, it did fail with "out of space":
>> https://build.gluster.org/job/gluster-csi-containers/303/console
>>
>> and it also fails (well, should fail) with this:
>> 16:51:07 make: *** No targets specified and no makefile found.  Stop.
>>
>> which is indeed not present in the git repo, so this seems like the job
>> is unmaintained.
>>
>>
>> The last one to fail is glusterd2-containers:
>>
>> https://build.gluster.org/job/glusterd2-containers/323/console
>>
>> This one is fun, because it fails but appears as OK on Jenkins. It fails
>> because of some Ansible issue, due to newer Fedora.
>>
>> So, since we need to switch, here is what I would recommend:
>> - switch the working jobs to F30
>> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30.
>> This will force someone to fix the problem.
>> - drop the unfixed containers jobs, unless someone fixes them, in 1 month.
>>
>
> Looks like a good plan.
>
>
>>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure
>>
>>
>>
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Migration of the builders to Fedora 30

2019-07-04 Thread Amar Tumballi Suryanarayan
On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer  wrote:

> Hi,
>
> I have upgraded some of the builders to F30 for testing (because F28 is
> EOL and people did request newer versions of stuff), and I was a bit
> surprised to see the results of testing the jobs.
>
> So we have 10 jobs that run on those builders.
>
> 5 jobs run without trouble:
> - python-lint
> - clang-scan
> - clang-format
> - 32-bit-build-smoke
> - bugs-summary
>
> 1 is disabled, tsan. I didn't try to run it.
>
> 4 fails:
> - python-compliance
>

OK to run, but skip voting, so we can eventually (soonish) fix this.


> - fedora-smoke
>

Ideally we should soon fix it. Effort is ON. We have a bug for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5


> - gluster-csi-containers
> - glusterd2-containers
>
>
OK to drop for now.


> The python-compliance job fails like this:
> https://build.gluster.org/job/python-compliance/5813/
>
> The fedora-smoke job, which builds on newer Fedora (so newer gcc),
> is failing too:
> https://build.gluster.org/job/fedora-smoke/6753/console
>
> Gluster-csi-containers is having trouble running
> https://build.gluster.org/job/gluster-csi-containers/304/console
> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
> but before, it did fail with "out of space":
> https://build.gluster.org/job/gluster-csi-containers/303/console
>
> and it also fails (well, should fail) with this:
> 16:51:07 make: *** No targets specified and no makefile found.  Stop.
>
> which is indeed not present in the git repo, so this seems like the job is
> unmaintained.
>
>
> The last one to fail is glusterd2-containers:
>
> https://build.gluster.org/job/glusterd2-containers/323/console
>
> This one is fun, because it fails but appears as OK on Jenkins. It fails
> because of some Ansible issue, due to newer Fedora.
>
> So, since we need to switch, here is what I would recommend:
> - switch the working jobs to F30
> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30. This
> will force someone to fix the problem.
> - drop the unfixed containers jobs, unless someone fixes them, in 1 month.
>

Looks like a good plan.


>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
On Thu, Jun 20, 2019 at 2:35 PM Niels de Vos  wrote:

> On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan wrote:
> > On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:
> >
> > > On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan
> wrote:
> > > > Considering python3 is anyways the future, I vote for taking the
> patch we
> > > > did in master for fixing regression tests with python3 into the
> release-6
> > > > and release-5 branch and getting over this deadlock.
> > > >
> > > > Patch in discussion here is
> > > > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> > > notices, it
> > > > changes only the files inside 'tests/' directory, which is not
> packaged
> > > in
> > > > a release anyways.
> > > >
> > > > Hari, can we get the backport of this patch to both the release
> branches?
> > >
> > > When going this route, you still need to make sure that the
> > > python3-devel package is available on the CentOS-7 builders. And I
> > > don't know if installing that package is already sufficient, maybe the
> > > backport is not even needed in that case.
> > >
> > >
> > I was thinking, having this patch makes it compatible with both python2
> and
> > python3, so technically, it allows us to move to Fedora30 if we need to
> run
> > regression there. (and CentOS7 with only python2).
> >
> > The above patch made it compatible, not mandatory to have python3. So,
> > treating it as a bug fix.
>
> Well, whatever Python is detected (python3 has preference over python2),
> needs to have the -devel package available too. Detection is done by
> probing the python executable. The matching header files from -devel
> need to be present in order to be able to build glupy (and others?).
>
> I do not think compatibility for python3/2 is the problem while
> building the tarball.


Got it! True, compatibility is not the problem for building the tarball.

I noticed the smoke issue is coming only from the strfmt-errors job, which
uses the 'epel-6-i386' mock and fails right now.

The backport might become relevant while running
> tests on environments where there is no python2.
>
>
The backport is very important if we are running on a system that has only
python3. Hence my proposal to include it in the releases.

But we are stuck with the strfmt-errors job right now, and looking at what it
was intended to catch in the first place, our
https://build.gluster.org/job/32-bit-build-smoke/ job would mostly be doing
the same. If that is the case, we can remove the job altogether. Also note,
this job is known to fail many smoke runs with 'Build root is locked by
another process' errors.

It would be great if disabling strfmt-errors is an option.

Regards,

> Niels
>
>
> >
> >
> > > Niels
> > >
> > >
> > > >
> > > > Regards,
> > > > Amar
> > > >
> > > > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  >
> > > wrote:
> > > >
> > > > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > > > kkeit...@redhat.com> wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi
> Suryanarayan <
> > > > > > > > > atumb...@redhat.com> wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > We recently noticed that in one of the package update on
> > > > > > > > > > builder (ie,
> > > > > > > > > > centos7.x machines), python3.6 got installed as a
> dependency.
> > > > > > > > > > So, yes, it
> > > > > > > > > > is possible to have python3 in centos7 now.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > EPEL updated from python34 to python36 recently, but C7
> doesn't
> > > > > > > > > have
> > > > > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > > > > packages 

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:

> On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan wrote:
> > Considering python3 is anyways the future, I vote for taking the patch we
> > did in master for fixing regression tests with python3 into the release-6
> > and release-5 branch and getting over this deadlock.
> >
> > Patch in discussion here is
> > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> notices, it
> > changes only the files inside 'tests/' directory, which is not packaged
> in
> > a release anyways.
> >
> > Hari, can we get the backport of this patch to both the release branches?
>
> When going this route, you still need to make sure that the
> python3-devel package is available on the CentOS-7 builders. And I
> don't know if installing that package is already sufficient, maybe the
> backport is not even needed in that case.
>
>
I was thinking that having this patch makes it compatible with both python2
and python3, so technically it allows us to move to Fedora 30 if we need to
run regression there (and CentOS 7 with only python2).

The above patch made the code compatible with python3; it does not make
python3 mandatory. So I am treating it as a bug fix.


> Niels
>
>
> >
> > Regards,
> > Amar
> >
> > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer 
> wrote:
> >
> > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > kkeit...@redhat.com> wrote:
> > > > > >
> > > > > > >
> > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > > > atumb...@redhat.com> wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > We recently noticed that in one of the package update on
> > > > > > > > builder (ie,
> > > > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > > > So, yes, it
> > > > > > > > is possible to have python3 in centos7 now.
> > > > > > > >
> > > > > > >
> > > > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > > > have
> > > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > > packages for
> > > > > > > building.
> > > > > > >
> > > > > > > And GlusterFS-5 isn't python3 ready.
> > > > > > >
> > > > > >
> > > > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > > > ready.  FWIW,
> > > > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > > > Collection Library (SCL), and python34 and now python36 are
> > > > > > available from
> > > > > > EPEL.
> > > > > >
> > > > > > But packages built for the CentOS Storage SIG have never used the
> > > > > > SCL or
> > > > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > > > converted
> > > > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > > > %prep stage.
> > > > > > All the python dependencies for the packages remain the python2
> > > > > > flavors.
> > > > > > AFAIK the centos-regression machines ought to be building the
> > > > > > same way.
> > > > >
> > > > > Indeed, there should not be a requirement on having EPEL enabled on
> > > > > the
> > > > > CentOS-7 builders. At least not for the building of the glusterfs
> > > > > tarball. We still need to do releases of glusterfs-4.1 and
> > > > > glusterfs-5,
> > > > > until then it is expected to have python2 as the (only?) version
> > > > > for the
> > > > > system. Is it possible to remove python3 from the CentOS-7 builders
> > > > > and
> > > > > run the jobs that require python3 on the Fedora builders instead?
> > > >
> > > > Actually, if the python-devel package for python3 is 

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
Considering python3 is the future anyway, I vote for taking the patch we
did in master for fixing regression tests with python3 into the release-6
and release-5 branches and getting over this deadlock.

Patch in discussion here is
https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone notices, it
changes only the files inside 'tests/' directory, which is not packaged in
a release anyways.

Hari, can we get the backport of this patch to both the release branches?

Regards,
Amar

On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  wrote:

> Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > kkeit...@redhat.com> wrote:
> > > >
> > > > >
> > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > atumb...@redhat.com> wrote:
> > > > >
> > > > > >
> > > > > > We recently noticed that in one of the package update on
> > > > > > builder (ie,
> > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > So, yes, it
> > > > > > is possible to have python3 in centos7 now.
> > > > > >
> > > > >
> > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > have
> > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > packages for
> > > > > building.
> > > > >
> > > > > And GlusterFS-5 isn't python3 ready.
> > > > >
> > > >
> > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > ready.  FWIW,
> > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > Collection Library (SCL), and python34 and now python36 are
> > > > available from
> > > > EPEL.
> > > >
> > > > But packages built for the CentOS Storage SIG have never used the
> > > > SCL or
> > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > converted
> > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > %prep stage.
> > > > All the python dependencies for the packages remain the python2
> > > > flavors.
> > > > AFAIK the centos-regression machines ought to be building the
> > > > same way.
> > >
> > > Indeed, there should not be a requirement on having EPEL enabled on
> > > the
> > > CentOS-7 builders. At least not for the building of the glusterfs
> > > tarball. We still need to do releases of glusterfs-4.1 and
> > > glusterfs-5,
> > > until then it is expected to have python2 as the (only?) version
> > > for the
> > > system. Is it possible to remove python3 from the CentOS-7 builders
> > > and
> > > run the jobs that require python3 on the Fedora builders instead?
> >
> > Actually, if the python-devel package for python3 is installed on the
> > CentOS-7 builders, things may work too. It still feels like some sort
> > of
> > Frankenstein deployment, and we don't expect to this see in
> > production
> > environments. But maybe this is a workaround in case something
> > really,
> > really, REALLY depends on python3 on the builders.
>
> To be honest, people would be surprised at what happens in production
> (sysadmins tend to talk among themselves; we all have horror stories of
> stuff that was supposed to be cleaned up and wasn't, etc.)
>
> After all, a "frankenstein deployment now" is better than "perfect
> later", especially since lots of IT departments are under constant
> pressure (so that's more like "perfect never").
>
> I can understand that we want clean and simple code (who doesn't), but
> real life is much messier than we want to admit, so we need something
> robust.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Release 7: Gentle Reminder, Regression health for release-6.next and release-7

2019-06-18 Thread Amar Tumballi Suryanarayan
On Tue, Jun 18, 2019 at 12:07 PM Rinku Kothiya  wrote:

> Hi Team,
>
> We need to branch for release-7, but nightly builds failures are blocking
> this activity. Please find test failures and respective test links below :
>
> The top tests that are failing are as below and need attention,
>
>
> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>

Still an issue with many tests.


> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>

Looks like this got fixed after https://review.gluster.org/22847


> ./tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t
> ./tests/features/subdir-mount.t
>

Got fixed with https://review.gluster.org/22877


> ./tests/basic/ec/self-heal.t
> ./tests/basic/afr/tarissue.t
>

I see random failures on this; not yet sure if this is a setup issue or an
actual regression issue.


> ./tests/basic/all_squash.t
> ./tests/basic/ec/nfs.t
> ./tests/00-geo-rep/00-georep-verify-setup.t
>

Most of the time, it fails if the 'setup' to run geo-rep is not complete.


> ./tests/basic/quota-rename.t
> ./tests/basic/volume-snapshot-clone.t
>
> Nightly build for this month :
> https://build.gluster.org/job/nightly-master/
>
> Gluster test failure tracker :
> https://fstat.gluster.org/summary?start_date=2019-06-15&end_date=2019-06-18
>
> Please file a bug against the test case if needed and report it here;
> in case a problem is already addressed, then do send back the
> patch details that address the issue as a response to this mail.
>
>
Thanks!


> Regards
> Rinku
>
>
> On Fri, Jun 14, 2019 at 9:08 PM Rinku Kothiya  wrote:
>
>> Hi Team,
>>
>> As part of branching preparation next week for release-7, please find
>> test failures and respective test links here.
>>
>> The top tests that are failing are as below and need attention,
>>
>> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>> ./tests/basic/uss.t
>> ./tests/basic/volfile-sanity.t
>> ./tests/basic/quick-read-with-upcall.t
>> ./tests/basic/afr/tarissue.t
>> ./tests/features/subdir-mount.t
>> ./tests/basic/ec/self-heal.t
>>
>> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
>> ./tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
>> ./tests/basic/afr/split-brain-favorite-child-policy.t
>> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>> ./tests/bugs/protocol/bug-1433815-auth-allow.t
>> ./tests/basic/afr/arbiter-mount.t
>> ./tests/basic/all_squash.t
>>
>> ./tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t
>> ./tests/basic/volume-snapshot-clone.t
>> ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
>> ./tests/basic/gfapi/upcall-register-api.t
>>
>>
>> Nightly build for this month :
>> https://build.gluster.org/job/nightly-master/
>>
>> Gluster test failure tracker :
>>
>> https://fstat.gluster.org/summary?start_date=2019-05-15&end_date=2019-06-14
>>
>> Please file a bug against the test case if needed and report it here;
>> in case a problem is already addressed, then do send back the
>> patch details that address the issue as a response to this mail.
>>
>> Regards
>> Rinku
>>
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Project Update: Containers-based distributed tests runner

2019-06-17 Thread Amar Tumballi Suryanarayan
This is a nice way to validate the patch for us.

The question I have is: did we measure the time benefit of running them in
parallel with containers?

It would be great to see the results of testing this in a cloud environment,
with 5 parallel threads and 10 parallel threads.

-Amar


On Fri, Jun 14, 2019 at 7:44 PM Aravinda  wrote:

> **gluster-tester** is a framework to run existing "*.t" test files in
> parallel using containers.
>
> Install and usage instructions are available in the following
> repository.
>
> https://github.com/aravindavk/gluster-tester
>
> ## Completed:
> - Create a base container image with all the dependencies installed.
> - Create a tester container image with requested refspec(or latest
> master) compiled and installed.
> - SSH setup in containers required to test Geo-replication
> - Take `--num-parallel` option and spawn the containers with ready
> infra for running tests
> - Split the tests based on the number of parallel jobs specified.
> - Execute the tests in parallel in each container and watch for the
> status.
> - Archive only failed tests(Optionally enable logs for successful tests
> using `--preserve-success-logs`)
>
> ## Pending:
> - NFS related tests are not running since the required changes are
> pending while creating the container image. (To know the failures run
> gluster-tester with `--include-nfs-tests` option)
> - Filter support while running the tests(To enable/disable tests on the
> run time)
> - Some Loop based tests are failing(I think due to shared `/dev/loop*`)
> - A few tests are timing out(Due to this overall test duration is more)
> - Once tests are started, showing real-time status is pending(Now
> status is checked in `/regression-.log` for example
> `/var/log/gluster-tester/regression-3.log`
> - If the base image is not built before running tests, it gives an
> error. Need to re-trigger the base container image step if not built.
> (Issue: https://github.com/aravindavk/gluster-tester/issues/11)
> - Creating an archive of core files
> - Creating a single archive from all jobs/containers
> - Testing `--ignore-from` feature to ignore the tests
> - Improvements to the status output
> - Cleanup(Stop test containers, and delete)
>
> I opened an issue to collect the details of failed tests. I will
> continue to update that issue as and when I capture failed tests in my
> setup.
> https://github.com/aravindavk/gluster-tester/issues/9
>
> Feel free to suggest any feature improvements. Contributions are
> welcome.
> https://github.com/aravindavk/gluster-tester/issues
>
> --
> Regards
> Aravinda
> http://aravindavk.in
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Quick Update: a cleanup patch and subsequent build failures

2019-06-14 Thread Amar Tumballi Suryanarayan
I just merged a larger cleanup patch, which I felt was good to get in, but
due to the order of its parents when it passed regression and smoke, and
the other patches which got merged at the same time, we hit a compile error
for 'undefined functions'.

Below patch fixes it:

glfs: add syscall.h after header cleanup

in one of the recent patches, we cleaned up the unnecessary header
file includes. In the order of merging the patches, there cropped
up a compile error.

updates: bz#1193929
Change-Id: I2ad52aa918f9c698d5273bb293838de6dd50ac31
Signed-off-by: Amar Tumballi 

diff --git a/api/src/glfs.c b/api/src/glfs.c
index b0db866441..0771e074d6 100644
--- a/api/src/glfs.c
+++ b/api/src/glfs.c
@@ -45,6 +45,7 @@
 #include 
 #include "rpc-clnt.h"
 #include 
+#include 

 #include "gfapi-messages.h"
 #include "glfs.h"

-
The patch has been pushed to the repository, as the issue is causing a
critical compile error right now. If you have a build error, please fetch
the latest master to fix the issue.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Seems like Smoke job is not voting

2019-06-14 Thread Amar Tumballi Suryanarayan
OK, I have guessed the possible cause.

The same possible DNS issue with review.gluster.org could have prevented
the patch fetching in smoke, and hence would not have triggered the job.

Those of you who have patches not getting a smoke run, please trigger
'recheck smoke' through a comment.

-Amar

On Fri, Jun 14, 2019 at 5:16 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> I see patches starting from 10:45 AM IST (7 hrs ago) are not getting
> smoke votes.
>
> For one of my patches, the smoke job was not triggered at all, IMO.
>
> https://review.gluster.org/#/c/22863/
>
> Would be good to check it.
>
> Regards,
> Amar
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Seems like Smoke job is not voting

2019-06-14 Thread Amar Tumballi Suryanarayan
I see patches starting from 10:45 AM IST (7 hrs ago) are not getting
smoke votes.

For one of my patches, the smoke job is not triggered at all, IMO.

https://review.gluster.org/#/c/22863/

Would be good to check it.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-12 Thread Amar Tumballi Suryanarayan
On Wed, Jun 12, 2019 at 8:42 PM Niels de Vos  wrote:

> On Wed, Jun 12, 2019 at 07:54:17PM +0530, Hari Gowtham wrote:
> > We haven't sent any patch to fix it.
> > Waiting for the decision to be made.
> > The bz: https://bugzilla.redhat.com/show_bug.cgi?id=1719778
> > The link to the build log:
> >
> https://build.gluster.org/job/strfmt_errors/1/artifact/RPMS/el6/i686/build.log
> >
> > The last few messages in the log:
> >
> > config.status: creating xlators/features/changelog/lib/src/Makefile
> > config.status: creating xlators/features/changetimerecorder/Makefile
> > config.status: creating xlators/features/changetimerecorder/src/Makefile
> > BUILDSTDERR: config.status: error: cannot find input file:
> > xlators/features/glupy/Makefile.in
> > RPM build errors:
> > BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> > BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> > Child return code was: 1
> > EXCEPTION: [Error()]
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py",
> > line 96, in trace
> > result = func(*args, **kw)
> >   File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736,
> > in do_with_status
> > raise exception.Error("Command failed: \n # %s\n%s" % (command,
> > output), child.returncode)
> > mockbuild.exception.Error: Command failed:
> >  # bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps
> > /builddir/build/SPECS/glusterfs.spec
>
> Those messages are caused by missing files. The 'make dist' that
> generates the tarball in the previous step did not included the glupy
> files.
>
> https://build.gluster.org/job/strfmt_errors/1/console contains the
> following message:
>
> configure: WARNING:
>
> -
> cannot build glupy. python 3.6 and python-devel/python-dev
> package are required.
>
> -
>
> I am not sure if there have been any recent backports to release-5 that
> introduced this behaviour. Maybe it is related to the builder where the
> tarball is generated. The job seems to detect python-3.6.8, which is not
> included in CentOS-7 for all I know?
>
>
We recently noticed that in one of the package updates on the builders (i.e.,
centos7.x machines), python3.6 got installed as a dependency. So, yes, it
is possible to have python3 on centos7 now.

-Amar


> Maybe someone else understands how this can happen?
>
> HTH,
> Niels
>
>
> >
> > On Wed, Jun 12, 2019 at 7:04 PM Niels de Vos  wrote:
> > >
> > > On Wed, Jun 12, 2019 at 02:44:04PM +0530, Hari Gowtham wrote:
> > > > Hi,
> > > >
> > > > Due to the recent changes we made. we have a build issue because of
> glupy.
> > > > As glupy is already removed from master, we are thinking of removing
> > > > it in 5.7 as well rather than fixing the issue.
> > > >
> > > > The release of 5.7 will be delayed as we have send a patch to fix
> this issue.
> > > > And if anyone has any concerns, do let us know.
> > >
> > > Could you link to the BZ with the build error and patches that attempt
> > > fixing it?
> > >
> > > We normally do not remove features with minor updates. Fixing the build
> > > error would be the preferred approach.
> > >
> > > Thanks,
> > > Niels
> >
> >
> >
> > --
> > Regards,
> > Hari Gowtham.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Linux 5.2-RC regression bisected, mounting glusterfs volumes fails after commit: fuse: require /dev/fuse reads to have enough buffer capacity

2019-06-11 Thread Amar Tumballi Suryanarayan
Thanks for the heads up! We will see how to revert / fix the issue properly
for the 5.2 kernel.
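
For anyone debugging this, a rough, hypothetical sketch of what the 5.2
kernel now expects from a /dev/fuse reader follows: the buffer handed to
read() must be large enough for the largest possible request (roughly the
negotiated max_write plus header room), otherwise the read is rejected with
EINVAL, which matches the warnings quoted below. This is not glusterfs's
fuse-bridge code, and the sizes are assumptions for illustration only.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative values only; glusterfs negotiates its own max_write during
 * FUSE INIT, and the header room requirement is approximate. */
#define SKETCH_MAX_WRITE (128 * 1024)
#define SKETCH_HEADER_ROOM 4096

static void
sketch_fuse_read_loop(int fuse_fd)
{
    /* On Linux 5.2+ (commit d4b13963), a read() whose buffer is smaller
     * than the largest request the kernel may deliver fails with EINVAL
     * before any request is returned -- that is what produces the
     * repeated "read from /dev/fuse returned -1 (Invalid argument)"
     * warnings in the report below. */
    size_t bufsize = SKETCH_MAX_WRITE + SKETCH_HEADER_ROOM;
    char *buf = malloc(bufsize);

    if (!buf)
        return;

    for (;;) {
        ssize_t res = read(fuse_fd, buf, bufsize);
        if (res < 0) {
            if (errno == EINTR)
                continue;
            fprintf(stderr, "read from /dev/fuse returned -1 (errno=%d)\n",
                    errno);
            break;
        }
        /* ... dispatch the FUSE request contained in buf[0..res) ... */
    }

    free(buf);
}

One likely direction for a glusterfs-side fix is to size the read buffer
from the negotiated max_write rather than a smaller fixed value.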

-Amar

On Tue, Jun 11, 2019 at 4:34 PM Sander Eikelenboom 
wrote:

> L.S.,
>
> While testing a linux 5.2 kernel I noticed it fails to mount my glusterfs
> volumes.
>
> It repeatedly fails with:
>[2019-06-11 09:15:27.106946] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106955] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106963] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106971] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>etc.
>etc.
>
> Bisecting turned up as culprit:
> commit d4b13963f217dd947da5c0cabd1569e914d21699: fuse: require
> /dev/fuse reads to have enough buffer capacity
>
> The glusterfs version i'm using is from Debian stable:
> ii  glusterfs-client3.8.8-1
> amd64clustered file-system (client package)
> ii  glusterfs-common3.8.8-1
> amd64GlusterFS common libraries and translator modules
>
>
> A 5.1.* kernel works fine, as does a 5.2-rc4 kernel with said commit
> reverted.
>
> --
> Sander
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Regression failure continues: 'tests/basic/afr/split-brain-favorite-child-policy.t`

2019-06-10 Thread Amar Tumballi Suryanarayan
Fails with:

20:56:58 ok 132 [  8/ 82] < 194> 'gluster --mode=script --wignore volume heal patchy'
20:56:58 not ok 133 [  8/  80260] < 195> '^0$ get_pending_heal_count patchy' -> 'Got "2" instead of "^0$"'
20:56:58 ok 134 [ 18/  2] < 197> '0 echo 0'


Looks like when the error occurred, it took 80 seconds.

I see 2 different patches fail on this; it would be good to analyze this
further.


Regards,

Amar


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] CI failure - NameError: name 'unicode' is not defined (related to changelogparser.py)

2019-06-08 Thread Amar Tumballi Suryanarayan
Update:

The issue happened because python3 got installed on the centos7.x series of
builders due to other package dependencies. And since GlusterFS picks
python3 as the priority even if python2 is the default, the tests started to
fail. We had completed the work of migrating the code to work smoothly with
python3 by the glusterfs-6.0 release, but had not noticed issues with the
regression framework, as it was running only on centos7 (python2) earlier.

With this event, our regression tests are also now compatible with python3
(thanks to the below-mentioned patch from Kotresh). We were able to mark a
few spurious failures as BAD_TEST and fix all the python3-related issues in
regression by EOD Friday, and after watching regression tests for 1 more
day, we can say that the issues are now resolved.

Please resubmit (or rebase in the Gerrit web UI) before triggering
'recheck centos' on the submitted patch(es).

Thanks to everyone who responded quickly once the issue was noticed; we are
back to GREEN again.

Regards,
Amar



On Fri, Jun 7, 2019 at 10:26 AM Deepshikha Khandelwal 
wrote:

> Hi Yaniv,
>
> We are working on this. The builders are picking up python3.6 which is
> leading to modules missing and such undefined errors.
>
> Kotresh has sent a patch https://review.gluster.org/#/c/glusterfs/+/22829/
> to fix the issue.
>
>
>
> On Thu, Jun 6, 2019 at 11:49 AM Yaniv Kaul  wrote:
>
>> From [1].
>>
>> I think it's a Python2/3 thing, so perhaps a CI issue additionally
>> (though if our code is not Python 3 ready, let's ensure we use Python 2
>> explicitly until we fix this).
>>
>> 00:47:05.207 ok  14 [ 13/386] <  34> 'gluster --mode=script --wignore volume start patchy'
>> 00:47:05.207 ok  15 [ 13/ 70] <  36> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-id=patchy --volfile-server=builder208.int.aws.gluster.org /mnt/glusterfs/0'
>> 00:47:05.207 Traceback (most recent call last):
>> 00:47:05.207   File "./tests/basic/changelog/../../utils/changelogparser.py", line 233, in <module>
>> 00:47:05.207     parse(sys.argv[1])
>> 00:47:05.207   File "./tests/basic/changelog/../../utils/changelogparser.py", line 221, in parse
>> 00:47:05.207     process_record(data, tokens, changelog_ts, callback)
>> 00:47:05.207   File "./tests/basic/changelog/../../utils/changelogparser.py", line 178, in process_record
>> 00:47:05.207     callback(record)
>> 00:47:05.207   File "./tests/basic/changelog/../../utils/changelogparser.py", line 182, in default_callback
>> 00:47:05.207     sys.stdout.write(u"{0}\n".format(record))
>> 00:47:05.207   File "./tests/basic/changelog/../../utils/changelogparser.py", line 128, in __str__
>> 00:47:05.207     return unicode(self).encode('utf-8')
>> 00:47:05.207 NameError: name 'unicode' is not defined
>> 00:47:05.207 not ok  16 [ 53/  39] <  42> '2 check_changelog_op /d/backends/patchy0/.glusterfs/changelogs RENAME' -> 'Got "0" instead of "2"'
>>
>>
>> Y.
>>
>> [1] https://build.gluster.org/job/centos7-regression/6318/console
>>
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Fwd: Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-06 Thread Amar Tumballi Suryanarayan
I got time to test subdir-mount.t failing in the brick-mux scenario.

I noticed some issues where I need further help from the glusterd team.

subdir-mount.t expects the 'hook' script to run after add-brick to make sure
the required subdirectories are healed and present on the new bricks. This
is important, as a subdir mount expects the subdirs to exist for a
successful mount.

But in the case of a brick-mux setup, I see that in some cases (6/10), the
hook script (add-brick/post-hook/S13-create-subdir-mount.sh) starts getting
executed only 20 seconds after the add-brick command finishes. Due to this,
the mount which we execute after add-brick fails.

My question is: what is making the post hook script run so late?

I can recreate the issue locally on my laptop too.


On Sat, Jun 1, 2019 at 4:55 PM Atin Mukherjee  wrote:

> subdir-mount.t has started failing in brick mux regression nightly. This
> needs to be fixed.
>
> Raghavendra - did we manage to get any further clue on uss.t failure?
>
> -- Forwarded message -
> From: 
> Date: Fri, 31 May 2019 at 23:34
> Subject: [Gluster-Maintainers] Build failed in Jenkins:
> regression-test-with-multiplex #1359
> To: , , ,
> , 
>
>
> See <
> https://build.gluster.org/job/regression-test-with-multiplex/1359/display/redirect?page=changes
> >
>
> Changes:
>
> [atin] glusterd: add an op-version check
>
> [atin] glusterd/svc: glusterd_svcs_stop should call individual wrapper
> function
>
> [atin] glusterd/svc: Stop stale process using the glusterd_proc_stop
>
> [Amar Tumballi] lcov: more coverage to shard, old-protocol, sdfs
>
> [Kotresh H R] tests/geo-rep: Add EC volume test case
>
> [Amar Tumballi] glusterfsd/cleanup: Protect graph object under a lock
>
> [Mohammed Rafi KC] glusterd/shd: Optimize the glustershd manager to send
> reconfigure
>
> [Kotresh H R] tests/geo-rep: Add tests to cover glusterd geo-rep
>
> [atin] glusterd: Optimize code to copy dictionary in handshake code path
>
> --
> [...truncated 3.18 MB...]
> ./tests/basic/afr/stale-file-lookup.t  -  9 second
> ./tests/basic/afr/granular-esh/replace-brick.t  -  9 second
> ./tests/basic/afr/granular-esh/add-brick.t  -  9 second
> ./tests/basic/afr/gfid-mismatch.t  -  9 second
> ./tests/performance/open-behind.t  -  8 second
> ./tests/features/ssl-authz.t  -  8 second
> ./tests/features/readdir-ahead.t  -  8 second
> ./tests/bugs/upcall/bug-1458127.t  -  8 second
> ./tests/bugs/transport/bug-873367.t  -  8 second
> ./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  8 second
> ./tests/bugs/replicate/bug-1132102.t  -  8 second
> ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
> -  8 second
> ./tests/bugs/quota/bug-1104692.t  -  8 second
> ./tests/bugs/posix/bug-1360679.t  -  8 second
> ./tests/bugs/posix/bug-1122028.t  -  8 second
> ./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
> ./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
> ./tests/bugs/glusterd/sync-post-glusterd-restart.t  -  8 second
> ./tests/bugs/glusterd/bug-1696046.t  -  8 second
> ./tests/bugs/fuse/bug-983477.t  -  8 second
> ./tests/bugs/ec/bug-1227869.t  -  8 second
> ./tests/bugs/distribute/bug-1088231.t  -  8 second
> ./tests/bugs/distribute/bug-1086228.t  -  8 second
> ./tests/bugs/cli/bug-1087487.t  -  8 second
> ./tests/bugs/cli/bug-1022905.t  -  8 second
> ./tests/bugs/bug-1258069.t  -  8 second
> ./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t
> -  8 second
> ./tests/basic/xlator-pass-through-sanity.t  -  8 second
> ./tests/basic/quota-nfs.t  -  8 second
> ./tests/basic/glusterd/arbiter-volume.t  -  8 second
> ./tests/basic/ctime/ctime-noatime.t  -  8 second
> ./tests/line-coverage/cli-peer-and-volume-operations.t  -  7 second
> ./tests/gfid2path/get-gfid-to-path.t  -  7 second
> ./tests/bugs/upcall/bug-1369430.t  -  7 second
> ./tests/bugs/snapshot/bug-1260848.t  -  7 second
> ./tests/bugs/shard/shard-inode-refcount-test.t  -  7 second
> ./tests/bugs/shard/bug-1258334.t  -  7 second
> ./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
> ./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
> ./tests/bugs/posix/bug-1175711.t  -  7 second
> ./tests/bugs/nfs/bug-915280.t  -  7 second
> ./tests/bugs/md-cache/setxattr-prepoststat.t  -  7 second
> ./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
> ./tests/bugs/glusterfs/bug-848251.t  -  7 second
> ./tests/bugs/distribute/bug-1122443.t  -  7 second
> ./tests/bugs/changelog/bug-1208470.t  -  7 second
> ./tests/bugs/bug-1702299.t  -  7 second
> ./tests/bugs/bug-1371806_2.t  -  7 second
> ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7
> second
> ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t  -  7 second
> ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -
> 7 second
> ./tests/bitrot/br-stub.t  -  7 second
> 

[Gluster-devel] Update: GlusterFS code coverage

2019-06-05 Thread Amar Tumballi Suryanarayan
All,

I just wanted to update everyone about one of the initiatives we have
undertaken, i.e., increasing the overall code coverage of GlusterFS above 70%.
You can have a look at the current code coverage here:
https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/
(this link always shows the latest report)

The daily job, and its details are captured @
https://build.gluster.org/job/line-coverage/

When we started focusing on code coverage 3 months back, our code coverage
was around 60% overall. We set the ambitious goal of increasing the code
coverage by 10% before the glusterfs-7.0 release, and I am happy to announce
that we met this goal before branching.

Before talking about the next goals, I want to thank and call out a few
developers who made this happen.

* Xavier Hernandez - Made EC cross 90% from < 70%.
* Glusterd Team (Sanju, Rishub, Mohit, Atin) - Increased CLI/glusterd
coverage
* Geo-Rep Team (Kotresh, Sunny, Shwetha, Aravinda).
* Sheetal (helped increase glfs-api test cases, which indirectly helped
cover more code across components).

Also note that some components, like AFR/replicate, were already at 80%+
before we started this effort.

Now, our next goal is to make sure we have above 80% function coverage in
all of the top-level components shown. Once that is done, we will focus on
75% code coverage across all components (i.e., no 'Red' on the top-level page).

While it was possible to meet our goal of increasing the overall code
coverage from 60% to 70%, increasing it above 70% is not going to be easy,
mainly because it involves adding more tests for negative cases, and adding
tests with different options (currently >300 of them across the code base).
We also need to look at the details from the code coverage reports, and
reverse engineer how to write a test that hits a particular line in the code.

I personally invite everyone who is interested in contributing to the
Gluster project to get involved in this effort. Help us write test cases and
suggest how to improve them. Help by assigning interns to write them for us
(if your team has some). This is a good way to understand the glusterfs code
too. We are happy to organize sessions on how to walk through the code, etc.,
if required.

Happy to hear feedback and see more contribution in this area.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] tests are timing out in master branch

2019-05-21 Thread Amar Tumballi Suryanarayan
Looks like after reverting a patch on the RPC layer reconnection logic
(https://review.gluster.org/22750), things are back to normal.

For those who submitted a patch in the last week, please resubmit (which
should take care of rebasing on top of this patch).

This event proves that there are very delicate races in our RPC layer,
which can trigger random failures. While this was discussed briefly earlier,
we need to debug it further and come up with possible next actions.
Volunteers welcome.

I recommend using https://github.com/gluster/glusterfs/issues/391 to
capture our observations, and continuing on GitHub from here.

-Amar


On Wed, May 15, 2019 at 11:46 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Wed, May 15, 2019 at 11:24 AM Atin Mukherjee 
> wrote:
> >
> > There're random tests which are timing out after 200 secs. My belief is
> this is a major regression introduced by some commit recently or the
> builders have become extremely slow which I highly doubt. I'd request that
> we first figure out the cause, get master back to it's proper health and
> then get back to the review/merge queue.
> >
>
> For such dire situations, we also need to consider a proposal to back
> out patches in order to keep the master healthy. The outcome we seek
> is a healthy master - the isolation of the cause allows us to not
> repeat the same offense.
>
> > Sanju has already started looking into
> /tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t to understand
> what test is specifically hanging and consuming more time.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Release 6.2: Expected tagging on May 15th

2019-05-17 Thread Amar Tumballi Suryanarayan
Which are the patches? I can merge them for now.

-Amar

On Fri, May 17, 2019 at 1:10 PM Hari Gowtham  wrote:

> Thanks Sunny.
> Have CCed Shyam.
>
> On Fri, May 17, 2019 at 1:06 PM Sunny Kumar  wrote:
> >
> > Hi Hari,
> >
> > For this to pass regression other 3 patches needs to merge first, I
> > tried to merge but do not have sufficient permissions to merge on 6.2
> > branch.
> > I know bug is already in place to grant additional permission for
> > us(Me, you and Rinku) so until then waiting on Shyam to merge it.
> >
> > -Sunny
> >
> > On Fri, May 17, 2019 at 12:54 PM Hari Gowtham 
> wrote:
> > >
> > > Hi Kotresh ans Sunny,
> > > The patch has been failing regression a few times.
> > > We need to look into why this is happening and take a decision
> > > as to take it in release 6.2 or drop it.
> > >
> > > On Wed, May 15, 2019 at 4:27 PM Hari Gowtham 
> wrote:
> > > >
> > > > Hi,
> > > >
> > > > The following patch is waiting for centos regression.
> > > > https://review.gluster.org/#/c/glusterfs/+/22725/
> > > >
> > > > Sunny or Kotresh, please do take a look so that we can go ahead with
> > > > the tagging.
> > > >
> > > > On Thu, May 9, 2019 at 4:45 PM Hari Gowtham 
> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > Expected tagging date for release-6.2 is on May, 15th, 2019.
> > > > >
> > > > > Please ensure required patches are backported and also are passing
> > > > > regressions and are appropriately reviewed for easy merging and
> tagging
> > > > > on the date.
> > > > >
> > > > > --
> > > > > Regards,
> > > > > Hari Gowtham.
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hari Gowtham.
> > >
> > >
> > >
> > > --
> > > Regards,
> > > Hari Gowtham.
>
>
>
> --
> Regards,
> Hari Gowtham.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] nightly builds are available again, with slightly different versioning

2019-05-15 Thread Amar Tumballi Suryanarayan
Thanks for noticing and correcting the issue Niels. Very helpful.


On Wed, May 15, 2019 at 12:48 PM Niels de Vos  wrote:

> This is sort of an RCA and notification to anyone interested in using
> nightly builds of GlusterFS. If you have any (automated) tests that
> consume the nightly builds for non-master branches, you did not run
> tests with updated packages since 2 May 2019. The nightly builds failed
> to run, but nobody was notified or reported this.
>
> Around two weeks ago the nightly builds for glusterfs of the non-master
> branches were broken due to a change in the CI script. This has been
> corrected now and a manual run of the job shows green balls again:
>   https://ci.centos.org/view/Gluster/job/gluster_build-rpms/
>
> The initial breakage was introduced by an optimization to not download
> the whole glusterfs git repository, but only the current HEAD. This did
> not take into account that 'git checkout' would not be able to switch to
> a branch that was not downloaded. With a few iterations of fixes, it
> became obvious that also tags were not fetched (duh), and 'git describe'
> would not work. Without tags it is not possible to mark builds with the
> most recent minor release that was made of a branch. Currently the date
> of the build + git-hash is part of the package version. That means that
> there is a new version of each branch every day, instead of only after
> commits have been merged. This might be changed in the future...
>
> As a reminder, the YUM .repo files for the nightly builds can be found
> at http://artifacts.ci.centos.org/gluster/nightly/
>
> Cheers,
> Niels
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-06 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:

> Hello Gluster folks,
>
> Gluster-block team is happy to announce the v0.4 release [1].
>
> This is the new stable version of gluster-block, lots of new and
> exciting features and interesting bug fixes are made available as part
> of this release.
> Please find the big list of release highlights and notable fixes at [2].
>
>
Good work, team (Prasanna and Xiubo Li, to be precise)!!

This was a much-needed release for the gluster-block project, mainly because
of the number of improvements made since the last release. Also, gluster-block
release 0.3 was not compatible with the glusterfs-6.x series.

All, feel free to use it if your deployment has any use case for block
storage, and give us feedback. We are happy to make sure gluster-block is
stable for you.

Regards,
Amar


> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and setup guide at [4].
> If you are a new user, checkout the demo video attached in the README
> doc [5], which will be a good source of intro to the project.
> There are good examples about how to use gluster-block both in the man
> pages [6] and test file [7] (also in the README).
>
> gluster-block is part of fedora package collection, an updated package
> with release version v0.4 will be soon made available. And the
> community provided packages will be soon made available at [8].
>
> Please spend a minute to report any kind of issue that comes to your
> notice with this handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
>
> We would like to thank all our users, contributors for bug filing and
> fixes, also the whole team who involved in the huge effort with
> pre-release testing.
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
>
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Amar Tumballi Suryanarayan
On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee  wrote:

>
>
> On Fri, 3 May 2019 at 14:59, Xavi Hernandez  wrote:
>
>> Hi Atin,
>>
>> On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee 
>> wrote:
>>
>>> I'm bit puzzled on the way coverity is reporting the open defects on GD1
>>> component. As you can see from [1], technically we have 6 open defects and
>>> all of the rest are being marked as dismissed. We tried to put some
>>> additional annotations in the code through [2] to see if coverity starts
>>> feeling happy but the result doesn't change. I still see in the report it
>>> complaints about open defect of GD1 as 25 (7 as High, 18 as medium and 1 as
>>> Low). More interestingly yesterday's report claimed we fixed 8 defects,
>>> introduced 1, but the overall count remained as 102. I'm not able to
>>> connect the dots of this puzzle, can anyone?
>>>
>>
>> Maybe we need to modify all dismissed CID's so that Coverity considers
>> them again and, hopefully, mark them as solved with the newer updates. They
>> have been manually marked to be ignored, so they are still there...
>>
>
> After yesterday’s run I set the severity for all of them to see if
> modifications to these CIDs make any difference or not. So fingers crossed
> till the next report comes :-) .
>

If you look at the previous day's report, it was 101 'Open defects' and 65
'Dismissed' (which means they are not 'fixed in code', but dismissed as false
positives or marked ignore in the CID dashboard).

Now it is 57 'Dismissed', which means your patch has actually fixed 8 of
those defects.


>
>
>> Just a thought, I'm not sure how this really works.
>>
>
> Same here, I don’t understand the exact workflow and hence seeking
> additional ideas.
>
>
Looks like we should consider overall open defects as Open + Dismissed.
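
For reference, the code annotations mentioned in [2] are special comments
placed on the line just before the flagged statement, so the scanner treats
that finding as intentional. This is only a rough illustration of the shape;
the function, the variable, and the 'leaked_storage' event tag below are made
up for the example (the real tag has to match the event name shown in the
scan report):

```
#include <stdlib.h>

char *global_scratch;   /* lives for the whole process lifetime */

void
init_scratch(void)
{
    /* coverity[leaked_storage] */
    global_scratch = malloc(4096);  /* never freed on purpose, so tell the
                                       scanner to stop reporting it */
}
```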


>
>> Xavi
>>
>>
>>>
>>> [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
>>> [2] https://review.gluster.org/#/c/22619/
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> --
> - Atin (atinm)
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-01 Thread Amar Tumballi Suryanarayan
Hi Cynthia Zhou,

Can you post the patch which fixes the missing free? We will continue to
investigate the leak further, but we would really appreciate getting the
patch you have already worked on landed into upstream master.
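
For anyone following along, the kind of change being discussed is roughly the
following. This is only a minimal sketch: the struct, field, and function
names (conn_priv_t, conn_ssl_teardown) are invented for illustration, and it
assumes an explicit teardown hook in the transport; it is not the actual
rpc-transport/socket.c code, nor the patch itself:

```
#include <openssl/ssl.h>

typedef struct {
    int      sock;   /* underlying fd, closed separately by the transport */
    SSL     *ssl;    /* per-connection SSL object                         */
    SSL_CTX *ctx;    /* shared context, freed elsewhere                   */
} conn_priv_t;

/* Called from both the "reset" and the "finish" paths of the transport. */
void
conn_ssl_teardown(conn_priv_t *priv)
{
    if (priv->ssl) {
        SSL_shutdown(priv->ssl); /* best-effort close_notify               */
        SSL_free(priv->ssl);     /* frees the SSL object and its BIOs; if  */
        priv->ssl = NULL;        /* this is skipped, every reconnect leaks */
    }
    /* Because the BIO was created with BIO_NOCLOSE, SSL_free() does not
     * close the fd for us; the transport still has to close priv->sock. */
}
```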

-Amar

On Mon, Apr 22, 2019 at 1:38 PM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:

> Ok, I am clear now.
>
> I’ve added ssl_free in socket reset and socket finish function, though
> glusterfsd memory leak is not that much, still it is leaking, from source
> code I can not find anything else,
>
> Could you help to check if this issue exists in your env? If not I may
> have a try to merge your patch .
>
> Step
>
> 1>   while true;do gluster v heal  info,
>
> 2>   check the vol-name glusterfsd memory usage, it is obviously
> increasing.
>
> cynthia
>
>
>
> *From:* Milind Changire 
> *Sent:* Monday, April 22, 2019 2:36 PM
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou) 
> *Cc:* Atin Mukherjee ; gluster-devel@gluster.org
> *Subject:* Re: [Gluster-devel] glusterfsd memory leak issue found after
> enable ssl
>
>
>
> According to BIO_new_socket() man page ...
>
>
>
> *If the close flag is set then the socket is shut down and closed when the
> BIO is freed.*
>
>
>
> For Gluster to have more control over the socket shutdown, the BIO_NOCLOSE
> flag is set. Otherwise, SSL takes control of socket shutdown whenever BIO
> is freed.
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Query regarding dictionary logic

2019-04-30 Thread Amar Tumballi Suryanarayan
Shreyas/Kevin tried to address it some time back using
https://bugzilla.redhat.com/show_bug.cgi?id=1428049 (
https://review.gluster.org/16830)

I vaguely remember that the decision to keep the hash size at 1 was made back
when the dictionary itself was sent as the on-wire protocol, and in most other
places the number of entries in a dictionary was around 3 on average. So we
felt that saving a bit of memory was the better optimization at that time.
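
To illustrate the point for anyone new to this code, here is a small
self-contained example. The hash function, key names, and sizes below are
made up for the illustration; it is a toy, not the actual dict.c
implementation, but it shows why hash_size == 1 turns every lookup into a
full list walk:

```
#include <stdio.h>

#define KEYS 6

static unsigned int
bucket_of(const char *key, unsigned int hash_size)
{
    unsigned int h = 5381;                /* djb2-style toy hash */
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % hash_size;                 /* hash_size == 1 always yields 0 */
}

int
main(void)
{
    const char *keys[KEYS] = { "gfid", "link-count", "glusterfs.posix.acl",
                               "trusted.ec.size", "user.foo", "mode" };
    unsigned int sizes[2] = { 1, 16 };

    for (int s = 0; s < 2; s++) {
        printf("hash_size = %u:\n", sizes[s]);
        for (int i = 0; i < KEYS; i++)
            printf("  %-22s -> bucket %u\n", keys[i],
                   bucket_of(keys[i], sizes[s]));
    }
    /* With hash_size == 1 every key lands in bucket 0, so each lookup walks
     * the whole chain: O(n).  A larger hash_size spreads keys across buckets
     * and lookups become roughly O(1) on average. */
    return 0;
}
```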

-Amar

On Tue, Apr 30, 2019 at 12:02 PM Mohit Agrawal  wrote:

> sure Vijay, I will try and update.
>
> Regards,
> Mohit Agrawal
>
> On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur  wrote:
>
>> Hi Mohit,
>>
>> On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal 
>> wrote:
>>
>>> Hi All,
>>>
>>>   I was just looking at the code of dict, I have one query current
>>> dictionary logic.
>>>   I am not able to understand why we use hash_size is 1 for a
>>> dictionary.IMO with the
>>>   hash_size of 1 dictionary always work like a list, not a hash, for
>>> every lookup
>>>   in dictionary complexity is O(n).
>>>
>>>   Before optimizing the code I just want to know what was the exact
>>> reason to define
>>>   hash_size is 1?
>>>
>>
>> This is a good question. I looked up the source in gluster's historic
>> repo [1] and hash_size is 1 even there. So, this could have been the case
>> since the first version of the dictionary code.
>>
>> Would you be able to run some tests with a larger hash_size and share
>> your observations?
>>
>> Thanks,
>> Vijay
>>
>> [1]
>> https://github.com/gluster/historic/blob/master/libglusterfs/src/dict.c
>>
>>
>>
>>>
>>>   Please share your view on the same.
>>>
>>> Thanks,
>>> Mohit Agrawal
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
On Fri, Apr 26, 2019 at 6:27 PM Kaleb Keithley  wrote:

>
>
> On Fri, Apr 26, 2019 at 8:21 AM Harold Miller  wrote:
>
>> Has Red Hat security cleared the Slack systems for confidential /
>> customer information?
>>
>> If not, it will make it difficult for support to collect/answer questions.
>>
>
> I'm pretty sure Amar meant as a replacement for the freenode #gluster and
> #gluster-dev channels, given that he sent this to the public gluster
> mailing lists @gluster.org. Nobody should have even been posting
> confidential and/or customer information to any of those lists or channels.
> And AFAIK nobody ever has.
>
>
Yep, I am only talking about IRC (on freenode: #gluster, #gluster-dev, etc.).
Also, I am not saying we are 'replacing IRC'. Gluster as a project started in
the pre-Slack era, and we have many users who prefer to stay on IRC. So, for
now, there is no pressure to make a statement calling the Slack channel a
'replacement' for IRC.


> Amar, would you like to clarify which IRC channels you meant?
>
>

Thanks, Kaleb. I was a bit confused about why this concern came up in this
group.



>
>> On Fri, Apr 26, 2019 at 6:00 AM Scott Worthington <
>> scott.c.worthing...@gmail.com> wrote:
>>
>>> Hello, are you not _BOTH_ Red Hat FTEs or contractors?
>>>
>>>
Yes! But we come from very different internal teams.

Michael supports the Gluster project team's infrastructure needs, and has
valid concerns from his perspective :-) I, on the other hand, care more about
code, users, and how to make sure we are up to date with other technologies
and communities, from the engineering viewpoint.


> On Fri, Apr 26, 2019, 3:16 AM Michael Scherer  wrote:
>>>
>>>> Le vendredi 26 avril 2019 à 13:24 +0530, Amar Tumballi Suryanarayan a
>>>> écrit :
>>>> > Hi All,
>>>> >
>>>> > We wanted to move to Slack from IRC for our official communication
>>>> > channel
>>>> > from sometime, but couldn't as we didn't had a proper URL for us to
>>>> > register. 'gluster' was taken and we didn't knew who had it
>>>> > registered.
>>>> > Thanks to constant ask from Satish, Slack team has now agreed to let
>>>> > us use
>>>> > https://gluster.slack.com and I am happy to invite you all there.
>>>> > (Use this
>>>> > link
>>>> > <
>>>> >
>>>> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
>>>> > >
>>>> > to
>>>> > join)
>>>> >
>>>> > Please note that, it won't be a replacement for mailing list. But can
>>>> > be
>>>> > used by all developers and users for quick communication. Also note
>>>> > that,
>>>> > no information there would be 'stored' beyond 10k lines as we are
>>>> > using the
>>>> > free version of Slack.
>>>>
>>>> Aren't we concerned about the ToS of slack ? Last time I did read them,
>>>> they were quite scary (like, if you use your corporate email, you
>>>> engage your employer, and that wasn't the worst part).
>>>>
>>>> Also, to anticipate the question, my employer Legal department told me
>>>> to not setup a bridge between IRC and slack, due to the said ToS.
>>>>
>>>>
Again, reiterating here: we are not planning to use any bridges from IRC to
Slack. I re-read the Slack API terms and conditions, and they make sense.
They surely don't want us to build another Slack, or abuse Slack with too
many API requests made for collecting logs.

Currently, to start with, we are not adding any bots (other than the GitHub
bot). Hopefully that will keep us within proper usage guidelines.

-Amar


> --
>>>> Michael Scherer
>>>> Sysadmin, Community Infrastructure
>>>>
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> gluster-us...@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>>
>> HAROLD MILLER
>>
>> ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT
>>
>> Red Hat
>>
>> <https://www.redhat.com/>
>>
>> har...@redhat.comT: (650)-254-4346
>> <https://red.ht/sig>
>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
Hi All,

We wanted to move from IRC to Slack as our official communication channel for
some time, but couldn't, as we didn't have a proper URL to register: 'gluster'
was taken and we didn't know who had it registered. Thanks to a constant ask
from Satish, the Slack team has now agreed to let us use
https://gluster.slack.com and I am happy to invite you all there. (Use this
link to join.)

Please note that it won't be a replacement for the mailing list, but it can be
used by all developers and users for quick communication. Also note that no
information there will be 'stored' beyond 10k lines, as we are using the free
version of Slack.

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-11 Thread Amar Tumballi Suryanarayan
Hi All,

Below are the final details of our community meeting, and I will be sending
invites to the mailing list following this email. You can add the Gluster
Community Calendar so you can get notifications for the meetings.

We are starting the meetings next week. For the first meeting, we need one
volunteer from the user community to discuss their use case (what went well,
what went badly, etc.), preferably from the APAC region; the NA/EMEA region
follows the week after.

Draft Content: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Gluster Community Meeting
Previous Meeting minutes:

   - http://github.com/gluster/community

Date/Time: Check the community calendar
<https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ>
Bridge

   - APAC friendly hours
      - Bridge: https://bluejeans.com/836554017
   - NA/EMEA
      - Bridge: https://bluejeans.com/486278655

--
Attendance

   - Name, Company

Host

   - Who will host next meeting?
      - Host will need to send out the agenda 24hr - 12hrs in advance to the
        mailing list, and also make sure to send the meeting minutes.
      - Host will need to reach out to at least one user who can talk about
        their usecase, their experience, and their needs.
      - Host needs to send meeting minutes as a PR to
        http://github.com/gluster/community

User stories

   - Discuss 1 usecase from a user.
      - How was the architecture derived, what volume type was used, options,
        etc.?
      - What were the major issues faced? How to improve them?
      - What worked well?
      - How can we all collaborate well, so it is a win-win for the community
        and the user?

Community

   - Any release updates?
   - Blocker issues across the project?
   - Metrics
      - Number of new bugs since previous meeting. How many are not triaged?
      - Number of emails, anything unanswered?

Conferences / Meetups

   - Any conference in next 1 month where gluster-developers are going?
   gluster-users are going? So we can meet and discuss.

Developer focus

   - Any design specs to discuss?
   - Metrics of the week?
      - Coverity
      - Clang-Scan
      - Number of patches from new developers.
      - Did we increase test coverage?
      - [Atin] Also talk about most frequent test failures in the CI and
        carve out an AI to get them fixed.

RoundTable

   - 

----

Regards,
Amar

On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Thanks for the feedback Darrell,
>
> The new proposal is to have one in North America 'morning' time. (10AM
> PST), And another in ASIA day time, which is evening 7pm/6pm in Australia,
> 9pm Newzealand, 5pm Tokyo, 4pm Beijing.
>
> For example, if we choose Every other Tuesday for meeting, and 1st of the
> month is Tuesday, we would have North America time for 1st, and on 15th it
> would be ASIA/Pacific time.
>
> Hopefully, this way, we can cover all the timezones, and meeting minutes
> would be committed to github repo, so that way, it will be easier for
> everyone to be aware of what is happening.
>
> Regards,
> Amar
>
> On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
> wrote:
>
>> As a user, I’d like to visit more of these, but the time slot is my 3AM.
>> Any possibility for a rolling schedule (move meeting +6 hours each week
>> with rolling attendance from maintainers?) or an occasional regional
>> meeting 12 hours opposed to the one you’re proposing?
>>
>>   -Darrell
>>
>> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>> All,
>>
>> We currently have 3 meetings which are public:
>>
>> 1. Maintainer's Meeting
>>
>> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
>> on an avg, and not much is discussed.
>> - Without majority attendance, we can't take any decisions too.
>>
>> 2. Community meeting
>>
>> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
>> meeting which is for 'Community/Users'. Others are for developers as of
>> now.
>

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-10 Thread Amar Tumballi Suryanarayan
Thanks for the summary Atin.

On Wed, Apr 10, 2019 at 7:30 PM Atin Mukherjee  wrote:

> And now for last 15 days:
>
> https://fstat.gluster.org/summary?start_date=2019-03-25_date=2019-04-10
>
> ./tests/bitrot/bug-1373520.t 18  ==> Fixed through
> https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this
> failing in brick mux post 5th April
> ./tests/bugs/ec/bug-1236065.t 17  ==> happens only in brick mux, needs
> analysis.
> ./tests/basic/uss.t 15  ==> happens in both brick mux and non
> brick mux runs, test just simply times out. Needs urgent analysis.
> ./tests/basic/ec/ec-fix-openfd.t 13  ==> Fixed through
> https://review.gluster.org/#/c/22508/ , patch merged today.
> ./tests/basic/volfile-sanity.t  8  ==> Some race, though this succeeds
> in second attempt every time.
>
>
Can volfile-sanity.t be failing because of the 'hang' in uss.t? It is
possible, as volfile-sanity.t runs after uss.t in regressions. I checked
volfile-sanity.t and it has 'cleanup' at the beginning, but I am not sure
whether there are any lingering effects which caused these failures.


> There're plenty more with 5 instances of failure from many tests. We need
> all maintainers/owners to look through these failures and fix them, we
> certainly don't want to get into a stage where master is unstable and we
> have to lock down the merges till all these failures are resolved. So
> please help.
>
> (Please note fstat stats show up the retries as failures too which in a
> way is right)
>
>
> On Tue, Feb 26, 2019 at 5:27 PM Atin Mukherjee 
> wrote:
>
>> [1] captures the test failures report since last 30 days and we'd need
>> volunteers/component owners to see why the number of failures are so high
>> against few tests.
>>
>> [1]
>> https://fstat.gluster.org/summary?start_date=2019-01-26_date=2019-02-25=all
>>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
Thanks for the feedback Darrell,

The new proposal is to have one meeting in North America 'morning' time (10AM
PST), and another in Asia daytime, which is evening: 7pm/6pm in Australia, 9pm
in New Zealand, 5pm in Tokyo, 4pm in Beijing.

For example, if we choose every other Tuesday for the meeting, and the 1st of
the month is a Tuesday, we would have the North America slot on the 1st, and
on the 15th it would be the Asia/Pacific slot.

Hopefully, this way, we can cover all the timezones, and the meeting minutes
will be committed to the github repo, so it will be easier for everyone to be
aware of what is happening.

Regards,
Amar

On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
wrote:

> As a user, I’d like to visit more of these, but the time slot is my 3AM.
> Any possibility for a rolling schedule (move meeting +6 hours each week
> with rolling attendance from maintainers?) or an occasional regional
> meeting 12 hours opposed to the one you’re proposing?
>
>   -Darrell
>
> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
> All,
>
> We currently have 3 meetings which are public:
>
> 1. Maintainer's Meeting
>
> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
> on an avg, and not much is discussed.
> - Without majority attendance, we can't take any decisions too.
>
> 2. Community meeting
>
> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
> meeting which is for 'Community/Users'. Others are for developers as of
> now.
> Sadly attendance is getting closer to 0 in recent times.
>
> 3. GCS meeting
>
> - We started it as an effort inside Red Hat gluster team, and opened it up
> for community from Jan 2019, but the attendance was always from RHT
> members, and haven't seen any traction from wider group.
>
> So, I have a proposal to call out for cancelling all these meeting, and
> keeping just 1 weekly 'Community' meeting, where even topics related to
> maintainers and GCS and other projects can be discussed.
>
> I have a template of a draft template @
> https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
>
> Please feel free to suggest improvements, both in agenda and in timings.
> So, we can have more participation from members of community, which allows
> more user - developer interactions, and hence quality of project.
>
> Waiting for feedbacks,
>
> Regards,
> Amar
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
All,

We currently have 3 meetings which are public:

1. Maintainer's Meeting

- Runs once every 2 weeks (on Mondays); current attendance is around 3-5 on
average, and not much is discussed.
- Without majority attendance, we can't take any decisions either.

2. Community meeting

- Supposed to happen on #gluster-meeting every 2 weeks, and is the only
meeting which is for 'Community/Users'; the others are for developers as of
now. Sadly, attendance has been getting close to 0 in recent times.

3. GCS meeting

- We started it as an effort inside the Red Hat gluster team, and opened it up
to the community from Jan 2019, but the attendance was always from RHT
members, and we haven't seen any traction from the wider group.

So, I have a proposal to cancel all these meetings, and keep just one weekly
'Community' meeting, where even topics related to maintainers, GCS, and other
projects can be discussed.

I have a draft template @
https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Please feel free to suggest improvements, both in the agenda and in the
timings, so we can have more participation from members of the community,
which allows more user-developer interaction, and hence improves the quality
of the project.

Waiting for feedbacks,

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS v7.0 (and v8.0) roadmap discussion

2019-03-25 Thread Amar Tumballi Suryanarayan
Hello Gluster Members,

We are now done with the glusterfs-6.0 release, and next up is glusterfs-7.0.
But considering that for many 'initiatives' 3-4 months is not enough time to
complete the tasks, we would like to call for a roadmap discussion meeting for
calendar year 2019 (covering both glusterfs-7.0 and 8.0).
8.0).

It would be good to use the community meeting slot for this. While talking to
the team locally, I compiled a presentation here: <
https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing>,
please go through and let me know what more can be added, or what can be
dropped?

We can start having discussions in https://hackmd.io/jlnWqzwCRvC9uoEU2h01Zw

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim,

On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:

>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If system is going to use hard linked ID, need a
> mapping of id to file to fix things. That option is now on for all exports.
> It should be the default If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for ids across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories each with several thousand files.
>
>
Are you talking about issues in the geo-replication module, or in some other
application using the native mount? Happy to take the discussion forward on
these issues.

Are there any bugs open on this?

Thanks,
Amar


>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>>
>> Hi,
>>
>> Looking into something else I fell over this proposal. Being a shop that
>> are going into "Leaving GlusterFS" mode, I thought I would give my two
>> cents.
>>
>> While being partially an HPC shop with a few Lustre filesystems,  we
>> chose GlusterFS for an archiving solution (2-3 PB), because we could find
>> files in the underlying ZFS filesystems if GlusterFS went sour.
>>
>> We have used the access to the underlying files plenty, because of the
>> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
>> effortless to run and mainly for that reason we are planning to move away
>> from GlusterFS.
>>
>> Reading this proposal kind of underlined that "Leaving GluserFS" is the
>> right thing to do. While I never understood why GlusterFS has been in
>> feature crazy mode instead of stabilizing mode, taking away crucial
>> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
>> very useful, even though the current implementation are not perfect.
>> Tiering also makes so much sense, but, for large files, not on a per-file
>> level.
>>
>> To be honest we only use quotas. We got scared of trying out new
>> performance features that potentially would open up a new back of issues.
>>
>> Sorry for being such a buzzkill. I really wanted it to be different.
>>
>> Cheers,
>> Hans Henrik
>> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>>
>> * Hi all, Over last 12 years of Gluster, we have developed many features,
>> and continue to support most of it till now. But along the way, we have
>> figured out better methods of doing things. Also we are not actively
>> maintaining some of these features. We are now thinking of cleaning up some
>> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
>> totally taken out of codebase in following releases) in next upcoming
>> release, v5.0. The release notes will provide options for smoothly
>> migrating to the supported configurations. If you are using any of these
>> features, do let us know, so that we can help you with ‘migration’.. Also,
>> we are happy to guide new developers to work on those components which are
>> not actively being maintained by current set of developers. List of
>> features hitting sunset: ‘cluster/stripe’ translator: This translator was
>> developed very early in the evolution of GlusterFS, and addressed one of
>> the very common question of Distributed FS, which is “What happens if one
>> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
>> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
>> was very hard to handle failure scenarios, and give a real good experience
>> to our users with this feature. Over the time, Gluster solved the problem
>> with it’s ‘Shard’ feature, which solves the problem in much better way, and
>> provides much better solution with existing well supported stack. Hence the
>> proposal for Deprecation. If you are using this feature, then do write to
>> us, as it needs a proper migration from existing volume to a new full
>> supported volume type before you upgrade. ‘storage/bd’ translator: This
>> feature got into the code base 5 years back with this patch
>> [1]. Plan was to use a block device
>> directly as a brick, which would help to handle disk-image storage much
>> easily in glusterfs. As the feature is not getting more contribution, and
>> we are not seeing any user traction on this, would like to propose for
>> Deprecation. If you are using the feature, plan to move to a supported
>> gluster volume configuration, and have your setup ‘supported’ before
>> upgrading to your new gluster version. ‘RDMA’ transport support: Gluster
>> started supporting RDMA while ib-verbs was still new, and very high-end
>> infra around that time were using Infiniband. Engineers did work with
>> Mellanox, and got the technology into GlusterFS for 

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:

> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> are going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
> effortless to run and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GluserFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation are not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
>
It is a valid concern to raise, and removing existing features is not a good
thing most of the time. But one thing we have noticed over the years is that
the features which we develop and do not take to completion cause the most
heartburn. People think the feature is present, and it has already been a few
years since it was introduced, but if the developers are not working on it,
users will feel that the product doesn't work, because that one feature didn't
work.

Other than Quota, for all the other features in the proposal email, even
though we have *some* users, we are inclined towards deprecating them,
considering the project's overall goal of stability in the longer run.


> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new back of issues.
>
About Quota, we heard enough voices, so we will make sure we keep it. The
original email was a 'Proposal', and hence these opinions matter for the
decision.

> Sorry for being such a buzzkill. I really wanted it to be different.
>
We hear you. Please let us know one thing: which versions did you try?

We hope that in the coming months our recent focus on stability and
technical-debt reduction will encourage you to take another look at Gluster.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> * Hi all, Over last 12 years of Gluster, we have developed many features,
> and continue to support most of it till now. But along the way, we have
> figured out better methods of doing things. Also we are not actively
> maintaining some of these features. We are now thinking of cleaning up some
> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
> totally taken out of codebase in following releases) in next upcoming
> release, v5.0. The release notes will provide options for smoothly
> migrating to the supported configurations. If you are using any of these
> features, do let us know, so that we can help you with ‘migration’.. Also,
> we are happy to guide new developers to work on those components which are
> not actively being maintained by current set of developers. List of
> features hitting sunset: ‘cluster/stripe’ translator: This translator was
> developed very early in the evolution of GlusterFS, and addressed one of
> the very common question of Distributed FS, which is “What happens if one
> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
> was very hard to handle failure scenarios, and give a real good experience
> to our users with this feature. Over the time, Gluster solved the problem
> with it’s ‘Shard’ feature, which solves the problem in much better way, and
> provides much better solution with existing well supported stack. Hence the
> proposal for Deprecation. If you are using this feature, then do write to
> us, as it needs a proper migration from existing volume to a new full
> supported volume type before you upgrade. ‘storage/bd’ translator: This
> feature got into the code base 5 years back with this patch
> [1]. Plan was to use a block device
> directly as a brick, which would help to handle disk-image storage much
> easily in glusterfs. As the feature is not getting more contribution, and
> we are not seeing any user traction on this, would like to propose for
> Deprecation. If you are using the feature, plan to move to a supported
> gluster volume configuration, and have your setup ‘supported’ before
> upgrading to your new gluster version. ‘RDMA’ transport support: Gluster
> started supporting RDMA while ib-verbs was still new, and very high-end
> infra around that time were using Infiniband. Engineers did work with
> Mellanox, and got the 

Re: [Gluster-devel] Github#268 Compatibility with Alpine Linux

2019-03-13 Thread Amar Tumballi Suryanarayan
I tried this recently; the rpcgen issue is real and, in my view, not
straightforward to solve. I would like to pick this up after the
glusterfs-6.0 release.

-Amar

On Tue, Mar 12, 2019 at 8:17 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> Saw some recent activity on
>  - is there a plan to
> address this or, should the interested users be informed about other
> plans?
>
> /s
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-13 Thread Amar Tumballi Suryanarayan
We recommend using 'tirpc' in the later releases: pass '--with-tirpc' while
running ./configure.

On Wed, Mar 13, 2019 at 10:55 AM ABHISHEK PALIWAL 
wrote:

> Hi Amar,
>
> this problem seems to be configuration issue due to librpc.
>
> Could you please let me know what should be configuration I need to use?
>
> Regards,
> Abhishek
>
> On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL 
> wrote:
>
>> logs for libgfrpc.so
>>
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
>> not a dynamic executable
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
>> not a dynamic executable
>>
>>
>> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Here are the logs:
>>>
>>>
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>>> not a dynamic executable
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>>> not a dynamic executable
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>>> not a dynamic executable
>>>
>>>
>>> For backtraces I have attached the core_logs.txt file.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>>> atumb...@redhat.com> wrote:
>>>
>>>> Hi Abhishek,
>>>>
>>>> Few more questions,
>>>>
>>>>
>>>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> Hi Amar,
>>>>>>
>>>>>> Below are the requested logs
>>>>>>
>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>>>>>> not a dynamic executable
>>>>>>
>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>>>>>> not a dynamic executable
>>>>>>
>>>>>>
>>>> Can you please add a * at the end, so it gets the linked library list
>>>> from the actual files (ideally this is a symlink, but I expected it to
>>>> resolve like in Fedora).
>>>>
>>>>
>>>>
>>>>> root@128:/# gdb /usr/sbin/glusterd core.1099
>>>>>> GNU gdb (GDB) 7.10.1
>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>> License GPLv3+: GNU GPL version 3 or later <
>>>>>> http://gnu.org/licenses/gpl.html>
>>>>>> This is free software: you are free to change and redistribute it.
>>>>>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>>>>>> copying"
>>>>>> and "show warranty" for details.
>>>>>> This GDB was configured as "powerpc64-wrs-linux".
>>>>>> Type "show configuration" for configuration details.
>>>>>> For bug reporting instructions, please see:
>>>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>>>> Find the GDB manual and other documentation resources online at:
>>>>>> <http://www.gnu.org/software/gdb/documentation/>.
>>>>>> For help, type "help".
>>>>>> Type "apropos word" to search for commands related to "word"...
>>>>>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>>>>>> found)...done.
>>>>>> [New LWP 1109]
>>>>>> [New LWP 1101]
>>>>>> [New LWP 1105]
>>>>>> [New LWP 1110]
>>>>>> [New LWP 1099]
>>>>>> [New LWP 1107]
>>>>>> [New LWP 1119]
>>>>>> [New LWP 1103]
>>>>>> [New LWP 1112]
>>>>>> [New LWP 1116]
>>>>>> [New LWP 1104]
>>>>>> [New LWP 1239]
>>>>>> [New LWP 1106]
>>>>>> [New LWP ]
>>>>>> [New LWP 1108]
>>>>>> [New LWP 1117]
>>>>>> [New LWP 1102]
>>>>>> [New LWP 1118]
>>>>>> [New LWP 1100]
>>>>>> [New LWP 1114]
>>>>>> [New 

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
bgfxdr.so.0
>> No symbol table info available.
>> #8  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109870, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109920 "\232\373\377\315\352\325\005\271"
>> stat = 
>> #9  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109870, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #10 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #11 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81096f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81097a0 "\241X\372!\216\256=\342"
>> stat = 
>> ---Type  to continue, or q  to quit---
>> #12 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81096f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #13 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #14 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109570, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109620 "\265\205\003Vu'\002L"
>> stat = 
>> #15 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109570, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #16 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #17 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81093f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81094a0 "\200L\027F'\177\366D"
>> stat = 
>> #18 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81093f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #19 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #20 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109270, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109320 "\217{dK(\001E\220"
>> stat = 
>> #21 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109270, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #22 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #23 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81090f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81091a0 "\217\275\067\336\232\300(\005"
>> stat = 
>> #24 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81090f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #25 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #26 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8108f70, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109020 "\260.\025\b\244\352IT"
>> stat = 
>> #27 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8108f70, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #28 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #29 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8108df0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8108ea0 "\212GS\203l\035\n\\"
>> ---Type  to continue, or q  to quit---
>>
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Mar 11, 2019 at 7:10 PM Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-11 Thread Amar Tumballi Suryanarayan
Hi Abhishek,

Can you check and get back to us?

```
bash# ldd /usr/lib64/libglusterfs.so
bash# ldd /usr/lib64/libgfrpc.so

```

Also, considering you have the core, can you run `(gdb) thr apply all bt
full` and pass on the output?

Thanks & Regards,
Amar

On Mon, Mar 11, 2019 at 3:41 PM ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> COuld you please provide some pointer to debug it further.
>
> Regards,
> Abhishek
>
> On Fri, Mar 8, 2019 at 2:19 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Team,
>>
>> I am using Glusterfs 5.4, where after setting the gluster mount point
>> when trying to access it, glusterfsd is getting crashed and mount point
>> through the "Transport endpoint is not connected error.
>>
>> Here I are the gdb log for the core file
>>
>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>> Do you need "set solib-search-path" or "set sysroot"?
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
>> bytes=bytes@entry=36) at malloc.c:3327
>> 3327 {
>> [Current thread is 1 (Thread 0x3fff90394160 (LWP 811))]
>> (gdb)
>> (gdb)
>> (gdb) bt
>> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
>> bytes=bytes@entry=36) at malloc.c:3327
>> #1  0x3fff95ab43dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
>> #2  0x3fff95b6ffd0 in x_inline (xdrs=0x3fff90391d20, len=> out>) at xdr_sizeof.c:89
>> #3  0x3fff95c4d488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
>> #4  0x3fff95c4de84 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #5  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c132020, size=, proc=) at
>> xdr_ref.c:84
>> #6  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c132020, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #7  0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #8  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131ea0, size=, proc=) at
>> xdr_ref.c:84
>> #9  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131ea0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #10 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #11 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131d20, size=, proc=) at
>> xdr_ref.c:84
>> #12 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131d20, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #13 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #14 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131ba0, size=, proc=) at
>> xdr_ref.c:84
>> #15 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131ba0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #16 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #17 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131a20, size=, proc=) at
>> xdr_ref.c:84
>> #18 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131a20, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #19 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #20 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c1318a0, size=, proc=) at
>> xdr_ref.c:84
>> #21 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c1318a0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #22 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #23 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131720, size=, proc=) at
>> xdr_ref.c:84
>> #24 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131720, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #25 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #26 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c1315a0, size=, proc=) at
>> xdr_ref.c:84
>> #27 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c1315a0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #28 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> 

Re: [Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks to those who participated.

Update at present:

We found 3 blocker bugs in upgrade scenarios, and hence have marked the
release as pending on them. We will keep these lists updated on progress.

-Amar

On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi all,
>
> We are calling out our users, and developers to contribute in validating
> ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> upgrade, stability, and performance.
>
> Some of the key highlights of the release are listed in release-notes
> draft
> <https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
> Please note that there are some of the features which are being dropped out
> of this release, and hence making sure your setup is not going to have an
> issue is critical. Also the default lru-limit option in fuse mount for
> Inodes should help to control the memory usage of client processes. All the
> good reason to give it a shot in your test setup.
>
> If you are developer using gfapi interface to integrate with other
> projects, you also have some signature changes, so please make sure your
> project would work with latest release. Or even if you are using a project
> which depends on gfapi, report the error with new RPMs (if any). We will
> help fix it.
>
> As part of test days, we want to focus on testing the latest upcoming
> release i.e. GlusterFS-6, and one or the other gluster volunteers would be
> there on #gluster channel on freenode to assist the people. Some of the key
> things we are looking as bug reports are:
>
>-
>
>See if upgrade from your current version to 6.0rc is smooth, and works
>as documented.
>- Report bugs in process, or in documentation if you find mismatch.
>-
>
>Functionality is all as expected for your usecase.
>- No issues with actual application you would run on production etc.
>-
>
>Performance has not degraded in your usecase.
>- While we have added some performance options to the code, not all of
>   them are turned on, as they have to be done based on usecases.
>   - Make sure the default setup is at least same as your current
>   version
>   - Try out few options mentioned in release notes (especially,
>   --auto-invalidation=no) and see if it helps performance.
>-
>
>While doing all the above, check below:
>- see if the log files are making sense, and not flooding with some
>   “for developer only” type of messages.
>   - get ‘profile info’ output from old and now, and see if there is
>   anything which is out of normal expectation. Check with us on the 
> numbers.
>   - get a ‘statedump’ when there are some issues. Try to make sense
>   of it, and raise a bug if you don’t understand it completely.
>
>
> <https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days>Process
> expected on test days.
>
>-
>
>We have a tracker bug
><https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]
>- We will attach all the ‘blocker’ bugs to this bug.
>-
>
>Use this link to report bugs, so that we have more metadata around
>given bugzilla.
>- Click Here
>   
> <https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6>
>   [1]
>-
>
>The test cases which are to be tested are listed here in this sheet
>
> <https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2],
>please add, update, and keep it up-to-date to reduce duplicate efforts.
>
> Lets together make this release a success.
>
> Also check if we covered some of the open issues from Weekly untriaged
> bugs
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
> [3]
>
> For details on build and RPMs check this email
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
> [4]
>
> Finally, the dates :-)
>
>- Wednesday - Feb 27th, and
>- Thursday - Feb 28th
>
> Note that our goal is to identify as many issues as possible in upgrade
> and stability scenarios, and if any blockers are found, want to make sure
> we release with the fix for same. So each of you, Gluster users, feel
> comfortable to upgrade to 6.0 version.
>
> Regards,
> Gluster Ants.
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Version uplift query

2019-02-27 Thread Amar Tumballi Suryanarayan
GlusterD2 is not yet intended for standalone deployments.

You can happily update to glusterfs-5.x (I recommend you wait for
glusterfs-5.4, which is already tagged and waiting for packages to be
built).

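For reference, a rough sketch of the usual upgrade flow on an RPM-based node
(the package manager commands and the example op-version value are assumptions
here - verify against the upgrade guide for your distribution):

# On each server node, one at a time:
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd            # stop remaining brick/client processes
yum update glusterfs-server                  # or your distro's equivalent
systemctl start glusterd

# Only after *all* nodes run the new version, bump the cluster op-version:
gluster volume get all cluster.max-op-version    # prints the highest supported value
gluster volume set all cluster.op-version 50400  # use the value printed above (50400 is illustrative)
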
Regards,
Amar

On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
wrote:

> Hi,
>
> Could  you please update on this and also let us know what is GlusterD2
> (as it is under development in 5.0 release), so it is ok to uplift to 5.0?
>
> Regards,
> Abhishek
>
> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi,
>>
>> Currently we are using Glusterfs 3.7.6 and thinking to switch on
>> Glusterfs 4.1 or 5.0, when I see there are too much code changes between
>> these version, could you please let us know, is there any compatibility
>> issue when we uplift any of the new mentioned version?
>>
>> Regards
>> Abhishek
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-02-25 Thread Amar Tumballi Suryanarayan
Hi all,

We are calling on our users and developers to contribute to validating the
‘glusterfs-6.0rc’ build in their use case, especially for the cases of
upgrade, stability, and performance.

Some of the key highlights of the release are listed in the release-notes draft
<https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
Please note that some features are being dropped in this release, so making
sure your setup is not going to be affected is critical. Also, the default
lru-limit option for inodes on the fuse mount should help control the memory
usage of client processes. All good reasons to give it a shot in your test
setup.
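
As an illustration only (the server and volume names below are placeholders,
and the exact default value may differ per build), the option can be tried on
a fuse mount like this:

# Hypothetical mount with an explicit inode LRU limit on the client:
mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/testvol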

If you are a developer using the gfapi interface to integrate with other
projects, note that there are some signature changes, so please make sure your
project works with the latest release. Even if you are only using a project
which depends on gfapi, report any errors seen with the new RPMs. We will
help fix them.

As part of the test days, we want to focus on testing the latest upcoming
release, i.e. GlusterFS-6, and one or another Gluster volunteer will be on
the #gluster channel on freenode to assist people. Some of the key
things we are looking for as bug reports are:

   -

   See if the upgrade from your current version to 6.0rc is smooth, and works
   as documented.
   - Report bugs in the process, or in the documentation, if you find a mismatch.
   -

   Functionality is all as expected for your use case.
   - No issues with the actual application you would run in production, etc.
   -

   Performance has not degraded in your use case.
   - While we have added some performance options to the code, not all of
  them are turned on, as they have to be enabled based on use cases.
  - Make sure the default setup is at least the same as your current version.
  - Try out a few options mentioned in the release notes (especially,
  --auto-invalidation=no) and see if they help performance.
   -

   While doing all the above, check below:
   - see if the log files are making sense, and not flooding with some “for
  developer only” type of messages.
  - get ‘profile info’ output from the old version and the new one, and see if
  there is anything out of normal expectation. Check with us on the numbers.
  - get a ‘statedump’ when there are issues. Try to make sense of it, and
  raise a bug if you don’t understand it completely. (Example commands for
  both are sketched right after this list.)

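For reference, a minimal sketch of the commands involved (the volume name
‘testvol’ is a placeholder, and the statedump path assumes the default
location):

gluster volume profile testvol start
# ... run your workload ...
gluster volume profile testvol info     # capture this on both the old and the new version
gluster volume statedump testvol        # dumps usually land under /var/run/gluster/
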
Process expected on test days.

   -

   We have a tracker bug
   [0]
   - We will attach all the ‘blocker’ bugs to this bug.
   -

   Use this link to report bugs, so that we have more metadata around given
   bugzilla.
   - Click Here
  

  [1]
   -

   The test cases which are to be tested are listed here in this sheet
   
[2],
   please add, update, and keep it up-to-date to reduce duplicate efforts.

Let’s make this release a success together.

Also check if we covered some of the open issues from Weekly untriaged bugs

[3]

For details on build and RPMs check this email

[4]

Finally, the dates :-)

   - Wednesday - Feb 27th, and
   - Thursday - Feb 28th

Note that our goal is to identify as many issues as possible in upgrade and
stability scenarios, and if any blockers are found, we want to make sure we
release with the fixes for them, so that each of you, Gluster users, can feel
comfortable upgrading to the 6.0 version.

Regards,
Gluster Ants.

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterFs v4.1.5: Need help on bitrot detection

2019-02-20 Thread Amar Tumballi Suryanarayan
Hi Chandranana,

We are trying to find a big-endian platform to test this out at the moment;
we will get back to you on this.

Meanwhile, did you run the entire regression suite? Is it the only test
failing? To run the entire regression suite, please run `run-tests.sh -c`
from the glusterfs source repo.

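As a side note, a quick way to iterate on just this test and inspect the
xattr it checks for could be the following (paths taken from the failure
output above; adjust them to your backend layout):

prove -vf ./tests/bitrot/bug-1207627-bitrot-scrub-status.t
getfattr -m . -d -e hex /d/backends/patchy1/FILE1   # look for trusted.bit-rot.bad-file
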
-Amar

On Tue, Feb 19, 2019 at 1:31 AM Chandranana Naik 
wrote:

> Hi Team,
>
> We are working with Glusterfs v4.1.5 on big endian platform(Ubuntu 16.04)
> and encountered that the subtest 20 of test
> ./tests/bitrot/bug-1207627-bitrot-scrub-status.t is failing.
>
> Subtest 20 is failing as below:
> *trusted.bit-rot.bad-file check_for_xattr trusted.bit-rot.bad-file
> //d/backends/patchy1/FILE1*
> *not ok 20 Got "" instead of "trusted.bit-rot.bad-file", LINENUM:50*
> *FAILED COMMAND: trusted.bit-rot.bad-file check_for_xattr
> trusted.bit-rot.bad-file //d/backends/patchy1/FILE1*
>
> The test is failing with error "*remote operation failed [Cannot allocate
> memory]"* logged in /var/log/glusterfs/scrub.log.
> Could you please let us know if anything is missing in making this test
> pass, PFA the logs for the test case
>
> *(See attached file: bug-1207627-bitrot-scrub-status.7z)*
>
> Note: *Enough memory is available on the system*.
>
> Regards,
> Chandranana Naik
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] md-cache: May bug found in md-cache.c

2019-02-20 Thread Amar Tumballi Suryanarayan
Hi David,

https://docs.gluster.org/en/latest/Developer-guide/Backport-Guidelines/
gives more details about it.

But the easiest is to go to your patch (https://review.gluster.org/22234), and
then click on the 'Cherry Pick' button. In the pop-up's 'branch:' field, give
'release-6' and Submit. If you want it in the release-5 branch too, repeat the
same with the branch being 'release-5'. Similarly, we need a 'clone-of' bug for
both branches (the original bug used in the patch is for the master branch).

That should be it. Rest, we can take care.

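If you prefer the command line over the UI, a roughly equivalent flow would
look like the sketch below (the commit hash is a placeholder, and it assumes
your Gerrit remote is named 'origin'):

git fetch origin
git checkout -b backport-to-release-6 origin/release-6
git cherry-pick -x <commit-sha-from-master>   # keeps the original Change-Id in the message
# point the commit message at the clone-of bug, then submit for review:
git push origin HEAD:refs/for/release-6       # or use ./rfc.sh from the source tree
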
Thanks a lot!

Regards,
Amar

On Wed, Feb 20, 2019 at 6:58 PM David Spisla 
wrote:

> Hello Amar,
>
>
>
> no problem. How can I do that? Can you please tell me the procedure?
>
>
>
> Regards
>
> David
>
>
>
> *Von:* Amar Tumballi Suryanarayan 
> *Gesendet:* Mittwoch, 20. Februar 2019 14:18
> *An:* David Spisla 
> *Cc:* Gluster Devel 
> *Betreff:* Re: [Gluster-devel] md-cache: May bug found in md-cache.c
>
>
>
> Hi David,
>
>
>
> Thanks for the patch, it got merged in master now. Can you please post it
> into release branches, so we can take them in release-6, release-5 branch,
> so next releases can have them.
>
>
>
> Regards,
>
> Amar
>
>
>
> On Tue, Feb 19, 2019 at 8:49 PM David Spisla  wrote:
>
> Hello,
>
>
>
> I already open a bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1678726
>
>
>
> There is also a link to a bug fix patch
>
>
>
> Regards
>
> David Spisla
>
>
>
> Am Di., 19. Feb. 2019 um 13:07 Uhr schrieb David Spisla <
> spisl...@gmail.com>:
>
> Hi folks,
>
>
>
> The 'struct md_cache' in md-cache.c uses int data types which are not in
> common with the data types used in the 'struct iatt' in iatt.h . If one
> take a closer look to the implementations one can see that the struct in
> md-cache.c uses still the int data types like in the struct 'old_iatt' .
> This can lead to unexpected side effects and some values of iatt maybe will
> not mapped correctly. I would suggest to open a bug report. What do you
> think?
>
> Additional info:
>
> struct md_cache {
> ia_prot_t md_prot;
> uint32_t md_nlink;
> uint32_t md_uid;
> uint32_t md_gid;
> uint32_t md_atime;
> uint32_t md_atime_nsec;
> uint32_t md_mtime;
> uint32_t md_mtime_nsec;
> uint32_t md_ctime;
> uint32_t md_ctime_nsec;
> uint64_t md_rdev;
> uint64_t md_size;
> uint64_t md_blocks;
> uint64_t invalidation_time;
> uint64_t generation;
> dict_t *xattr;
> char *linkname;
> time_t ia_time;
> time_t xa_time;
> gf_boolean_t need_lookup;
> gf_boolean_t valid;
> gf_boolean_t gen_rollover;
> gf_boolean_t invalidation_rollover;
> gf_lock_t lock;
> };
>
> struct iatt {
> uint64_t ia_flags;
> uint64_t ia_ino; /* inode number */
> uint64_t ia_dev; /* backing device ID */
> uint64_t ia_rdev;/* device ID (if special file) */
> uint64_t ia_size;/* file size in bytes */
> uint32_t ia_nlink;   /* Link count */
> uint32_t ia_uid; /* user ID of owner */
> uint32_t ia_gid; /* group ID of owner */
> uint32_t ia_blksize; /* blocksize for filesystem I/O */
> uint64_t ia_blocks;  /* number of 512B blocks allocated */
> int64_t ia_atime;/* last access time */
> int64_t ia_mtime;/* last modification time */
> int64_t ia_ctime;/* last status change time */
> int64_t ia_btime;/* creation time. Fill using statx */
> uint32_t ia_atime_nsec;
> uint32_t ia_mtime_nsec;
> uint32_t ia_ctime_nsec;
> uint32_t ia_btime_nsec;
> uint64_t ia_attributes;  /* chattr related:compressed, immutable,
>   * append only, encrypted etc.*/
> uint64_t ia_attributes_mask; /* Mask for the attributes */
>
> uuid_t ia_gfid;
> ia_type_t ia_type; /* type of file */
> ia_prot_t ia_prot; /* protection */
> };
>
> struct old_iatt {
> uint64_t ia_ino; /* inode number */
> uuid_t ia_gfid;
> uint64_t ia_dev; /* backing device ID */
> ia_type_t ia_type;   /* type of file */
> ia_prot_t ia_prot;   /* protection */
> uint32_t ia_nlink;   /* Link count */
> uint32_t ia_uid; /* user ID of owner */
> uint32_t ia_gid; /* group ID of owner */
> uint64_t ia_rdev;/* device ID (if special file) */
> uint64_t ia_size;/* file size in bytes */
> uint32_t ia_blksize; /* blocksize for filesystem I/O */
> uint64_t ia_blocks;  /* number of 512B blocks allocated */
> uint32_t ia_atime;   /* last access time 

Re: [Gluster-devel] md-cache: May bug found in md-cache.c

2019-02-20 Thread Amar Tumballi Suryanarayan
Hi David,

Thanks for the patch; it got merged into master now. Can you please post it
to the release branches, so we can take it into the release-6 and release-5
branches, and the next releases can have it?

Regards,
Amar

On Tue, Feb 19, 2019 at 8:49 PM David Spisla  wrote:

> Hello,
>
> I already open a bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1678726
>
> There is also a link to a bug fix patch
>
> Regards
> David Spisla
>
> Am Di., 19. Feb. 2019 um 13:07 Uhr schrieb David Spisla <
> spisl...@gmail.com>:
>
>> Hi folks,
>>
>> The 'struct md_cache' in md-cache.c uses int data types which are not in
>> common with the data types used in the 'struct iatt' in iatt.h . If one
>> take a closer look to the implementations one can see that the struct in
>> md-cache.c uses still the int data types like in the struct 'old_iatt' .
>> This can lead to unexpected side effects and some values of iatt maybe will
>> not mapped correctly. I would suggest to open a bug report. What do you
>> think?
>>
>> Additional info:
>>
>> struct md_cache {
>> ia_prot_t md_prot;
>> uint32_t md_nlink;
>> uint32_t md_uid;
>> uint32_t md_gid;
>> uint32_t md_atime;
>> uint32_t md_atime_nsec;
>> uint32_t md_mtime;
>> uint32_t md_mtime_nsec;
>> uint32_t md_ctime;
>> uint32_t md_ctime_nsec;
>> uint64_t md_rdev;
>> uint64_t md_size;
>> uint64_t md_blocks;
>> uint64_t invalidation_time;
>> uint64_t generation;
>> dict_t *xattr;
>> char *linkname;
>> time_t ia_time;
>> time_t xa_time;
>> gf_boolean_t need_lookup;
>> gf_boolean_t valid;
>> gf_boolean_t gen_rollover;
>> gf_boolean_t invalidation_rollover;
>> gf_lock_t lock;
>> };
>>
>> struct iatt {
>> uint64_t ia_flags;
>> uint64_t ia_ino; /* inode number */
>> uint64_t ia_dev; /* backing device ID */
>> uint64_t ia_rdev;/* device ID (if special file) */
>> uint64_t ia_size;/* file size in bytes */
>> uint32_t ia_nlink;   /* Link count */
>> uint32_t ia_uid; /* user ID of owner */
>> uint32_t ia_gid; /* group ID of owner */
>> uint32_t ia_blksize; /* blocksize for filesystem I/O */
>> uint64_t ia_blocks;  /* number of 512B blocks allocated */
>> int64_t ia_atime;/* last access time */
>> int64_t ia_mtime;/* last modification time */
>> int64_t ia_ctime;/* last status change time */
>> int64_t ia_btime;/* creation time. Fill using statx */
>> uint32_t ia_atime_nsec;
>> uint32_t ia_mtime_nsec;
>> uint32_t ia_ctime_nsec;
>> uint32_t ia_btime_nsec;
>> uint64_t ia_attributes;  /* chattr related:compressed, immutable,
>>   * append only, encrypted etc.*/
>> uint64_t ia_attributes_mask; /* Mask for the attributes */
>>
>> uuid_t ia_gfid;
>> ia_type_t ia_type; /* type of file */
>> ia_prot_t ia_prot; /* protection */
>> };
>>
>> struct old_iatt {
>> uint64_t ia_ino; /* inode number */
>> uuid_t ia_gfid;
>> uint64_t ia_dev; /* backing device ID */
>> ia_type_t ia_type;   /* type of file */
>> ia_prot_t ia_prot;   /* protection */
>> uint32_t ia_nlink;   /* Link count */
>> uint32_t ia_uid; /* user ID of owner */
>> uint32_t ia_gid; /* group ID of owner */
>> uint64_t ia_rdev;/* device ID (if special file) */
>> uint64_t ia_size;/* file size in bytes */
>> uint32_t ia_blksize; /* blocksize for filesystem I/O */
>> uint64_t ia_blocks;  /* number of 512B blocks allocated */
>> uint32_t ia_atime;   /* last access time */
>> uint32_t ia_atime_nsec;
>> uint32_t ia_mtime; /* last modification time */
>> uint32_t ia_mtime_nsec;
>> uint32_t ia_ctime; /* last status change time */
>> uint32_t ia_ctime_nsec;
>> };
>>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 6: Branched and next steps

2019-02-20 Thread Amar Tumballi Suryanarayan
On Tue, Feb 19, 2019 at 1:37 AM Shyam Ranganathan 
wrote:

> In preparation for RC0 I have put up an intial patch for the release
> notes [1]. Request the following actions on the same (either a followup
> patchset, or a dependent one),
>
> - Please review!
> - Required GD2 section updated to latest GD2 status
>

I am inclined to drop the GD2 section for 'standalone' users, as the team
worked with the goal of making GD2 invisible behind containers (GCS).
So, should we call out any features of GD2 at all?

Anyway, as per my previous email on GCS release updates, we are planning
to have a container available with GD2 and glusterfs, which can be used by
people who are trying out options with GD2.


> - Require notes on "Reduce the number or threads used in the brick
> process" and the actual status of the same in the notes
>
>
This work is still in progress, and we are treating it as a bug fix for the
'brick-multiplex' use case, which is mainly required for the scaled-volume-count
use case in the container world. My guess is we won't have much content to add
for glusterfs-6.0 at the moment.


> RC0 build target would be tomorrow or by Wednesday.
>
>
Thanks. I was testing a few upgrade scenarios and mixed-version cluster
support. With 4.1.6 and the latest release-6.0 branch, things work fine. I
haven't done much load testing yet.

Requesting people to help with upgrade testing, across different volume
options and different use-case scenarios.

Regards,
Amar



> Thanks,
> Shyam
>
> [1] Release notes patch: https://review.gluster.org/c/glusterfs/+/6
>
> On 2/5/19 8:25 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Release 6 is branched, and tracker bug for 6.0 is created [1].
> >
> > Do mark blockers for the release against [1].
> >
> > As of now we are only tracking [2] "core: implement a global thread pool
> > " for a backport as a feature into the release.
> >
> > We expect to create RC0 tag and builds for upgrade and other testing
> > close to mid-week next week (around 13th Feb), and the release is slated
> > for the first week of March for GA.
> >
> > I will post updates to this thread around release notes and other
> > related activity.
> >
> > Thanks,
> > Shyam
> >
> > [1] Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0
> >
> > [2] Patches tracked for a backport:
> >   - https://review.gluster.org/c/glusterfs/+/20636
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Container Storage: Release Update

2019-02-13 Thread Amar Tumballi Suryanarayan
Hello everyone,

We are announcing v1.0RC release of GlusterCS this week!**

The version 1.0 is due along with *glusterfs-6.0* next month. Below are the
Goals for v1.0:

   - RWX PVs - Scale and Performance
   - RWO PVs - Simple, leaner stack with Gluster’s Virtual Block.
   - Thin Arbiter (2 DataCenter Replicate) Support for RWX volume.
  - The RWO hosting volume using the Thin Arbiter volume type would still
  be in Alpha.
   - Integrated monitoring.
   - Simple Install / Overall user-experience.

Along with the above, we are in an Alpha state for supporting GCS on the ARM
architecture. We are also trying to get the website done for GCS @
https://gluster.github.io/gcs

We are looking for some validation of the GCS containers, and the overall
gluster stack, in your k8s setup.

While we are focusing more on stability and a better user experience, we are
also trying to ship a few tech-preview items for early preview. The main item
here is loopback-based bricks (
https://github.com/gluster/glusterd2/pull/1473), which allow us to bring
more data services on top of Gluster, with more options in the container
world, especially for backup and recovery.

The above feature also makes better snapshot/clone story for gluster in
containers with reflink support on XFS. *(NOTE: this will be a future
improvement)*

This email is a request for help with regard to testing and feedback on
this new stack, in its alpha release tag. Do let us know if there are any
concerns. We are ready to take anything from ‘This is BS!!’ to ‘Wow! this
looks really simple, works without hassle’

Btw, if you are interested to try / help, few things to note:

   - GCS uses CSI spec v1.0, which is only available from k8s 1.13+
   - We do have weekly meetings on GCS as announced in
   https://lists.gluster.org/pipermail/gluster-devel/2019-January/055774.html -
   Feel free to jump in if interested.
  - i.e., every Thursday, 15:00 UTC.
   - GCS doesn’t have any operator support yet, but for simplicity, you can
   also try using https://github.com/aravindavk/kubectl-gluster
  - Planned to be integrated in later versions.
   - We are not great at creating cool websites; help in making the GCS
   homepage would be great too :-)

Interested? Feel free to jump into the architecture call today.

Regards,
Gluster Container Storage Team

PS: The meeting minutes, where the release pointers were discussed is @
https://hackmd.io/sj9ik9SCTYm81YcQDOOrtw?both

** - subject to resolving some blockers @
https://waffle.io/gluster/gcs?label=GCS%2F1.0

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Amar Tumballi Suryanarayan
On Wed, Feb 13, 2019 at 9:51 AM Nithya Balachandran 
wrote:

> I'll take a look at this today. The logs indicate the test completed in
> under 3 minutes but something seems to be holding up the cleanup.
>
>
Just a look at some successful runs shows output like the below:

--

17:44:49 ok 57, LINENUM:155
17:44:49 umount: /d/backends/patchy1: target is busy.
17:44:49     (In some cases useful info about processes that use
17:44:49      the device is found by lsof(8) or fuser(1))
17:44:49 umount: /d/backends/patchy2: target is busy.
17:44:49     (In some cases useful info about processes that use
17:44:49      the device is found by lsof(8) or fuser(1))
17:44:49 umount: /d/backends/patchy3: target is busy.
17:44:49     (In some cases useful info about processes that use
17:44:49      the device is found by lsof(8) or fuser(1))
17:44:49 N
17:44:49 ok

--

This is just before the finish, so the cleanup is definitely being held up.

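One possible way to see what is holding those mounts busy on the build node
(commands as suggested by the umount output itself; paths taken from the run
above):

fuser -vm /d/backends/patchy1      # list processes with files open on the mount
lsof /d/backends/patchy1           # alternative view of the same information
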
Regards,
Amar

On Tue, 12 Feb 2019 at 19:30, Raghavendra Gowdappa 
> wrote:
>
>>
>>
>> On Tue, Feb 12, 2019 at 7:16 PM Mohit Agrawal 
>> wrote:
>>
>>> Hi,
>>>
>>> I have observed the test case ./tests/bugs/distribute/bug-1161311.t is
>>> getting timed
>>>
>>
>> I've seen failure of this too in some of my patches.
>>
>> out on build server at the time of running centos regression on one of my
>>> patch https://review.gluster.org/22166
>>>
>>> I have executed test case for i in {1..30}; do time prove -vf
>>> ./tests/bugs/distribute/bug-1161311.t; done 30 times on softserv vm that is
>>> similar to build infra, the test case is not taking time more than 3
>>> minutes but on build server test case is getting timed out.
>>>
>>> Kindly share your input if you are facing the same.
>>>
>>> Thanks,
>>> Mohit Agrawal
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Meeting minutes: Feb 04th, 2019

2019-02-05 Thread Amar Tumballi Suryanarayan
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/37SS6/

Attendance

   - Nigel Babu
   - Sunil Heggodu
   - Amar Tumballi
   - Aravinda VK
   - Atin Mukherjee

Agenda

   -

   Gluster Performance Runs (on Master):
   - Some regression since 3.12 compared to current master.
  - A few operations had major regressions.
  - The entry serialization (SDFS) feature caused a regression. We have
  disabled it by default, and plan to ask users to turn it on for edge cases.
  - Some patches are currently being reviewed for perf improvements
  which are not enabled by default.
  - See Xavi’s email for perf improvements
  

in
  self-heal. This can cause some regression on sequential IO.
  - [Nigel] Can we publish posts on 3.12 perf and our machine specs?
  Then we can do a follow-up post after the 6 release.
  - Yes. This is a release highlight that we want to talk about.
   -

   GlusterFS 6.0 branching:
   - upgrade tests, specially with some removed volume types and options.
  - [Atin] I’ve started testing some of the upgrade tests
  (glusterfs-5 to latest master). I have some observations around some of
  the tiering-related options which are leading to a peer-rejection issue
  post upgrade; we need changes to avoid the peer-rejection failures. The
  GlusterD team will focus on this testing in the coming days.
  - performance patches - Discussed earlier
  - shd-mux
  - [Atin] Shyam highlighted a concern about accepting this big change
  so late and near the branching timeline, so it is most likely not going
  to make it into 6.0.
  - A risk because of the timeline. We will currently keep testing it on
  master, and once it is stable we could make an exception to merge it to
  release-6.
 - The changes are glusterd heavy, so we want to make sure it’s
 thoroughly tested so we don’t cause regressions.
  -

   GCS - v1.0
   - Can we announce it, yet?
  - [Atin] Hit a blocker issue in GD2,
  https://github.com/gluster/gcs/issues/129 ; root-causing is in
  progress. Testing of https://github.com/gluster/gcs/pull/130 is
  blocked because of this. We are still positive about nailing this
  down and calling out GCS 1.0 by tomorrow.
   - GCS has a website now - https://gluster.github.io/gcs/ - contribute by
   sending patches to the gh-pages branch of the github.com/gluster/gcs repo.
  - What does it take to run the containers from Gluster (CSI/GD2 etc)
  on ARM architecture host machines?
 - It should theoretically work given Gluster has been known to
 work on ARM. And we know that k8s on ARM is something that people do.
 - Might be useful to kick it off on a Raspberry pi and see what
 breaks.
  -

   We need more content on the website, and in general on the internet. How do
   we motivate developers to write blogs?
   - New theme is proposed for upstream documentation via the pull request
  https://github.com/gluster/glusterdocs/pull/454
  - Test website: https://my-doc-sunil.readthedocs.io/en/latest/
   -

   Round Table:
   - Nigel: AWS migration will happen this week and regressions will be a
  little flakey. Please bear with us.


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Maintainer's meeting: Jan 21st, 2019

2019-01-22 Thread Amar Tumballi Suryanarayan
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/PAnE5

Attendance

   - Nigel Babu, Amar, Nithya, Shyam, Sunny, Milind (joined late).

Agenda

   -

   GlusterFS - v6.0 - Are we ready for branching?
   - Can we consider getting https://review.gluster.org/20636 (lock free
  thread pool) as an option in the code, so we can have it?
  - Let’s try to keep it as an option, and backport it if it is not ready
  by the end of this week.
   - Fencing? - Most probably going to make it.
  - python3 support for glusterfind -
  https://review.gluster.org/#/c/glusterfs/+/21845/
  - Self-heal daemon multiplexing?
  - Reflink?
  - Any other performance enhancements?
   -

   Infra Updates
   - Moving to new cloud vendor this week. Expect some flakiness. This is
  on a timeline we do not control and already quite significantly delayed.
  - Going to delete old master builds from
  http://artifacts.ci.centos.org/gluster/nightly/
  - Not deleting the release branch artifacts.
   -

   Performance regression test bed
   - Have machines, can we get started with bare minimum tests
  - All we need is the result to be out in public
  - Basic tests are present. Some more test failures, so resolving that
  should be good enough.
  - Will be picked up after above changes.
   -

   Round Table
   - Have a look at website and suggest what more is required.


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] FUSE directory filehandle

2019-01-11 Thread Amar Tumballi Suryanarayan
On Thu, Jan 10, 2019 at 8:17 AM Emmanuel Dreyfus  wrote:

> Hello
>
> This is not strictly a GlusterFS question since I came to it porting
> LTFS to NetBSD, however I would like to make sure I will not break
> GlusterFS by fixing NetBSD FUSE implementation for LTFS.
>
> Current NetBSD FUSE implementation sends the filehandle in any FUSE
> requests for an open node, regardless of its type (directory or file).
>
> I discovered that libfuse low level code manages filehandle differently
> for opendir/readdir/syncdir/releasedir than for other operations. As a
> result, when a getattr is done on a directory, setting the filehandle
> obtained from opendir can cause a crash in libfuse.
>
> The fix for NetBSD FUSE implementation is to avoid setting the
> filehandle for the following FUSE operations on directories: getattr,
> setattr, poll, getlk, setlk, setlkw, read, write (only the first two
> ones are likely to be actually used, though)
>
> Does anyone forsee a possible problem for GlusterFS with such a
> behavior? In other words, will it be fine to always have a
> FUSE_UNKNOWN_FH (aka null) filehandle for getattr/setattr on
> directories?
>
>
Below is the code snippet from fuse_getattr().

#if FUSE_KERNEL_MINOR_VERSION >= 9
priv = this->private;
if (priv->proto_minor >= 9 && fgi->getattr_flags & FUSE_GETATTR_FH)
state->fd = fd_ref((fd_t *)(uintptr_t)fgi->fh);
#endif

Which means it may crash if we get the fd as NULL while FUSE_GETATTR_FH is set.


>
> --
> Emmanuel Dreyfus
> http://hcpnet.free.fr/pubz
> m...@netbsd.org
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-10 Thread Amar Tumballi Suryanarayan
That is a good point, Mohit, but do we know how many of these tests failed
because of a 'timeout'? If most of these are due to timeouts, then yes, it
may be a valid point.

-Amar

On Thu, Jan 10, 2019 at 4:51 PM Mohit Agrawal  wrote:

> I think we should consider regression-builds after merged the patch (
> https://review.gluster.org/#/c/glusterfs/+/21990/)
> as we know this patch introduced some delay.
>
> Thanks,
> Mohit Agrawal
>
> On Thu, Jan 10, 2019 at 3:55 PM Atin Mukherjee 
> wrote:
>
>> Mohit, Sanju - request you to investigate the failures related to
>> glusterd and brick-mux and report back to the list.
>>
>> On Thu, Jan 10, 2019 at 12:25 AM Shyam Ranganathan 
>> wrote:
>>
>>> Hi,
>>>
>>> As part of branching preparation next week for release-6, please find
>>> test failures and respective test links here [1].
>>>
>>> The top tests that are failing/dumping-core are as below and need
>>> attention,
>>> - ec/bug-1236065.t
>>> - glusterd/add-brick-and-validate-replicated-volume-options.t
>>> - readdir-ahead/bug-1390050.t
>>> - glusterd/brick-mux-validation.t
>>> - bug-1432542-mpx-restart-crash.t
>>>
>>> Others of interest,
>>> - replicate/bug-1341650.t
>>>
>>> Please file a bug if needed against the test case and report the same
>>> here, in case a problem is already addressed, then do send back the
>>> patch details that addresses this issue as a response to this mail.
>>>
>>> Thanks,
>>> Shyam
>>>
>>> [1] Regression failures: https://hackmd.io/wsPgKjfJRWCP8ixHnYGqcA?view
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Maintainer's meeting: 7th Jan, 2019 - Meeting minutes

2019-01-08 Thread Amar Tumballi Suryanarayan
Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDT
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/sGFpa

Attendance
Agenda

   -

   Welcome 2019: New goals / Discuss:
   - https://hackmd.io/OiQId65pStuBa_BPPazcmA
  - Give it a week and take it to mailing list, discuss and agree upon
  - [Nigel] Some of the above points are threads of their own. May need
  separate threads.
   -

   Progress with GCS
   -

  Email about GCS in community.
  -

  RWX:
  - Scale testing showing GD2 can scale to 1000s of PVs (each is a
 gluster volume)
 - Bricks with LVM
  - Some delete issues seen, especially with LV command scaling. A patch
  has been sent.
 - Create rate: 500 PVs / 12mins
 - More details by end of the week, including delete numbers.
  -

  RWO:
  - new CSI for gluster-block showing good scale numbers, which is
 reaching higher than current 1k RWO PV per cluster, but need
to iron out
 few things. (https://github.com/gluster/gluster-csi-driver/pull/105
 )
 - 280 pods in 3 hosts, 1-1 Pod->PV ratio: leaner graph.
 - 1080 PVs with 1-12 ratio on 3 machines
 - Working on 3000+ PVC on just 3 hosts, will update by another 2
 days.
 - Poornima is coming up with steps and details about the
 PR/version used etc.
  -

   Static Analyzers:
   - glusterfs:
 - Coverity - 63 open
 - https://scan.coverity.com/projects/gluster-glusterfs?tab=overview
 - clang-scan - 32 open
 -
 
https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/
  - gluster-block:
 -
 https://scan.coverity.com/projects/gluster-gluster-block?tab=overview
 - coverity: 1 open (66 last week)
  -

   GlusterFS-6:
   - Any priority review needed?
 - Fencing patches
 - Reducing threads (GH Issue: 475)
 - glfs-api statx patches [merged]
   - What are the critical areas that need focus?
  - ASan build? Currently not green.
  - Some Java errors, machine offline. Need to look into this.
   - How do we make the Glusto automated tests a blocker for the release?
  - Upgrade tests, need to start early.
  - Schedule as called out in the mail
  
<https://lists.gluster.org/pipermail/gluster-devel/2018-December/055721.html>
  NOTE: Working backwards on the schedule, here’s what we have:
 - Announcement: Week of Mar 4th, 2019
 - GA tagging: Mar-01-2019
 - RC1: On demand before GA
 - RC0: Feb-04-2019
  - Late features cut-off: Week of Jan-21st, 2019
  - Branching (feature cutoff date): Jan-14-2019 (~45 days prior to
  GA)
 - Feature/scope proposal for the release (end date): Dec-12-2018
  -

   Round Table?
   - [Sunny] Meetup in BLR this weekend. Please do come (at least those who
  are in BLR)
   - [Susant] Softserve has a 4-hour timeout, which is not enough for a full
   regression cycle. Can we get at least 2 more hours added, so a full
   regression run can complete?


---

On Mon, Jan 7, 2019 at 9:04 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

>
> Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDTBJ Link
>
>- Bridge: https://bluejeans.com/217609845
>
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>Attendance
> <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>Agenda
>
>-
>
>Welcome 2019: Discuss about goals :
>- https://hackmd.io/OiQId65pStuBa_BPPazcmA
>-
>
>Progress with GCS
>- Scale testing showing GD2 can scale to 1000s of PVs (each is a
>   gluster volume, in RWX mode)
>   - new CSI for gluster-block showing good scale numbers, which is
>   reaching higher than current 1k RWO PV per cluster, but need to iron out
>   few things. (https://github.com/gluster/gluster-csi-driver/pull/105)
>-
>
>Performance focus:
>- Any update? What are the patch in progress?
>   - How to measure the perf of a patch, is there any hardware?
>-
>
>Static Analyzers:
>- glusterfs:
>  - coverity - 63 open
>  - clang-scan - 32 open (with many false-positives).
>   - gluster-block:
>  - coverity: 1 open (66 last week)
>   -
>
>GlusterFS-6:
>- Any priority review needed?
>   - What are the critical areas need focus?
>   - How to make glusto automated tests become blocker for the release?
>   - Upgrade tests, need to start early.
>   - Schedule as called out in the mail
>   
> <https://

Re: [Gluster-devel] https://review.gluster.org/#/c/glusterfs/+/19778/

2019-01-08 Thread Amar Tumballi Suryanarayan
On Tue, Jan 8, 2019 at 8:04 PM Shyam Ranganathan 
wrote:

> On 1/8/19 8:33 AM, Nithya Balachandran wrote:
> > Shyam, what is your take on this?
> > An upstream user has tried it out and reported that it seems to fix the
> > issue , however cpu utilization doubles.
>
> We usually do not backport big fixes unless they are critical. My first
> answer would be, can't this wait for rel-6 which is up next?
>
Considering it may take some more time for release-6 to get adoption, doing a
backport would surely benefit users, IMO.


> The change has gone through a good review overall, so from a review
> thoroughness perspective it looks good.
>
> The change has a test case to ensure that the limits are honored, so
> again a plus.
>
> Also, it is a switch, so in the worst case moving back to unlimited
> should be possible with little adverse effects in case the fix has issues.
>
> It hence, comes down to how confident are we that the change is not
> disruptive to an existing branch? If we can answer this with resonable
> confidence we can backport it and release it with the next 5.x update
> release.
>
>
Considering that the code which the patch changes has changed very little over
the last few years, I feel it is totally safe to do the backport. I don't see
any possible surprises. I will send a patch today on the release-5 branch.

-Amar



> >
> > Regards,
> > Nithya
> >
> > On Fri, 28 Dec 2018 at 09:17, Amar Tumballi  > > wrote:
> >
> > I feel its good to backport considering glusterfs-6.0 is another 2
> > months away.
> >
> > On Fri, Dec 28, 2018 at 8:19 AM Nithya Balachandran
> > mailto:nbala...@redhat.com>> wrote:
> >
> > Hi,
> >
> > Can we backport this to release-5 ? We have several reports of
> > high memory usage in fuse clients from users and this is likely
> > to help.
> >
> > Regards,
> > Nithya
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org 
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> >
> >
> > --
> > Amar Tumballi (amarts)
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
>


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Maintainer's meeting: 7th Jan, 2019 - Agenda

2019-01-06 Thread Amar Tumballi Suryanarayan
Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EDT

BJ Link

   - Bridge: https://bluejeans.com/217609845

Attendance
Agenda

   -

   Welcome 2019: Discuss about goals :
   - https://hackmd.io/OiQId65pStuBa_BPPazcmA
   -

   Progress with GCS
   - Scale testing showing GD2 can scale to 1000s of PVs (each is a gluster
  volume, in RWX mode)
  - new CSI for gluster-block showing good scale numbers, which is
  reaching higher than current 1k RWO PV per cluster, but need to iron out
  few things. (https://github.com/gluster/gluster-csi-driver/pull/105)
   -

   Performance focus:
   - Any update? What are the patches in progress?
   - How do we measure the perf of a patch? Is there any hardware?
   -

   Static Analyzers:
   - glusterfs:
 - coverity - 63 open
 - clang-scan - 32 open (with many false-positives).
  - gluster-block:
 - coverity: 1 open (66 last week)
  -

   GlusterFS-6:
   - Any priority review needed?
   - What are the critical areas that need focus?
   - How do we make the Glusto automated tests a blocker for the release?
  - Upgrade tests, need to start early.
  - Schedule as called out in the mail
  

  NOTE: Working backwards on the schedule, here’s what we have:
 - Announcement: Week of Mar 4th, 2019
 - GA tagging: Mar-01-2019
 - RC1: On demand before GA
 - RC0: Feb-04-2019
  - Late features cut-off: Week of Jan-21st, 2019
  - Branching (feature cutoff date): Jan-14-2019 (~45 days prior to
  GA)
 - Feature/scope proposal for the release (end date): Dec-12-2018
  -

   Round Table?

=

Feel free to add your topics at:
https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?edit


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel