Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Atin Mukherjee
From the glusterd perspective, there are a couple of enhancements I'd propose:
(a) capture the get-state dump and make it part of sosreport. Of late, the
get-state dump has been very helpful in debugging a few cases, apart from its
original purpose of providing cluster/volume information for tendrl; and
(b) capture the glusterd statedump.

Sanju - if you want to volunteer to send this enhancement, please grab it :-)
Note that both of these dumps are generated in /var/run/gluster, so please
check whether we capture the /var/run/gluster content in sosreport (which I
doubt); if not, you can always have the dumps captured to a specific file, as
you are already aware.
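
For whoever picks this up, here is a minimal sketch of what the enhancement
could look like in the gluster sos plugin. It assumes the sos 3.x plugin API
(add_cmd_output / add_copy_spec do exist there); the class and attribute names
follow the usual sos plugin style but are not copied from the current upstream
file, so treat this as illustrative only:

# Sketch only: extend the gluster sos plugin to capture the get-state dump
# and the contents of /var/run/gluster (where both dumps are written).
from sos.plugins import Plugin, RedHatPlugin


class Gluster(Plugin, RedHatPlugin):
    """GlusterFS storage"""

    plugin_name = "gluster"
    profiles = ("storage",)

    def setup(self):
        # (a) trigger a fresh get-state dump; glusterd writes it under
        # /var/run/gluster/ by default, and the command output is recorded too
        self.add_cmd_output("gluster get-state")

        # (b) copy /var/run/gluster so the get-state dump and any existing
        # glusterd/brick statedumps land in the sosreport tarball
        self.add_copy_spec("/var/run/gluster")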

On Wed, Mar 20, 2019 at 7:25 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Tue, Mar 19, 2019 at 8:30 PM Soumya Koduri  wrote:
> > On 3/19/19 9:49 AM, Sankarshan Mukhopadhyay wrote:
> > >  is (as might just be widely known)
> > > an extensible, portable, support data collection tool primarily aimed
> > > at Linux distributions and other UNIX-like operating systems.
> > >
> > > At present there are 2 plugins
> > > 
> > > and
> > > <https://github.com/sosreport/sos/blob/master/sos/plugins/gluster_block.py>
> > > I'd like to request that the maintainers do a quick review to check that
> > > these sufficiently cover the topics needed to help diagnose issues.
> >
> > There is one plugin available for nfs-ganesha as well -
> > https://github.com/sosreport/sos/blob/master/sos/plugins/nfsganesha.py
> >
> > It needs a minor update. Sent a pull request for the same -
> > https://github.com/sosreport/sos/pull/1593
> >
>
> Thanks Soumya!
>
> Other Gluster maintainers - review and respond please.
>
> > Kindly review.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Vijay Bellur
I tried this configuration on my local setup and the test passed fine.

Adding the fuse and write-behind maintainers in Gluster to check if they
are aware of any oddities with using mmap & fuse.

Thanks,
Vijay
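
For reference, the reproducer quoted below is Python 2 syntax. A rough
Python 3 equivalent (only a sketch, not verified against the failing setup;
run it from inside the glusterfs FUSE-mounted directory) would be:

import mmap

# write a simple example file (bytes, since the file is opened in binary mode)
with open("hello.txt", "wb") as f:
    f.write(b"Hello Python!\n")

with open("hello.txt", "r+b") as f:
    # memory-map the file; size 0 means "map the whole file"
    mm = mmap.mmap(f.fileno(), 0)
    print(mm.readline())  # b'Hello Python!\n'
    print(mm[:5])         # b'Hello'
    # replacement content must be the same size as the slice it replaces
    mm[6:] = b" world!\n"
    mm.seek(0)
    print(mm.readline())  # b'Hello  world!\n'
    mm.close()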

On Tue, Mar 19, 2019 at 2:21 PM Jim Kinney  wrote:

> Volume Name: home
> Type: Replicate
> Volume ID: 5367adb1-99fc-44c3-98c4-71f7a41e628a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: bmidata1:/data/glusterfs/home/brick/brick
> Brick2: bmidata2:/data/glusterfs/home/brick/brick
> Options Reconfigured:
> performance.client-io-threads: off
> storage.build-pgfid: on
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: off
> nfs.disable: off
>
>
> There are 11 other volumes and all are similar.
>
>
> On Tue, 2019-03-19 at 13:59 -0700, Vijay Bellur wrote:
>
> Thank you for the reproducer! Can you please let us know the output of
> `gluster volume info`?
>
> Regards,
> Vijay
>
> On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney  wrote:
>
> This Python script will fail when writing to a file in a glusterfs
> FUSE-mounted directory.
>
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write("Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     # read content via standard file methods
>     print mm.readline()  # prints "Hello Python!"
>     # read content via slice notation
>     print mm[:5]  # prints "Hello"
>     # update content using slice notation;
>     # note that new content must have same size
>     mm[6:] = " world!\n"
>     # ... and read again using standard file methods
>     mm.seek(0)
>     print mm.readline()  # prints "Hello  world!"
>     # close the map
>     mm.close()
>
>
>
>
>
>
>
> On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
>
> Native mount issue with multiple clients (centos7 glusterfs 3.12).
>
> It seems to hit Python 2.7 and 3+. A user tries to open file(s) for write
> in a long-running process, and the system eventually times out.
>
> Switching to NFS stops the error.
>
> No bug notice yet. Too many pans on the fire :-(
>
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
>
> Hi Jim,
>
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:
>
>
> Issues with glusterfs FUSE mounts cause issues with Python file open for
> write. We have to use NFS to avoid this.
>
> We really want to see better back-end tools to facilitate cleaning up
> after glusterfs failures. If the system is going to use hard-linked IDs,
> we need a mapping of ID to file to fix things. That option is now on for
> all exports; it should be the default. If a host is down and users delete
> files by the thousands, gluster _never_ catches up. Finding path names for
> IDs across even a 40TB mount, much less the 200+TB one, is a slow process.
> A network outage of 2 minutes and one system didn't get the call to
> recursively delete several dozen directories, each with several thousand
> files.
>
>
>
> Are you talking about some issues in geo-replication module or some other
> application using native mount? Happy to take the discussion forward about
> these issues.
>
> Are there any bugs open on this?
>
> Thanks,
> Amar
>
>
>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>
> Hi,
>
> Looking into something else, I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems, we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features is something I don't get. With RoCE, RDMA is getting mainstream.
> Quotas are very useful, even though the current implementation is not
> perfect. Tiering also makes so much sense, but, for large files, not on a
> per-file level.
>
> To be honest, we only use quotas. We got scared of trying out new
> performance features that could potentially open up a new bag of issues.
>
> Sorry for being such a buzzkill. I really wanted it to be different.
>
> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and we
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features. We are now thinking of cleaning 

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Vijay Bellur
Thank you for the reproducer! Can you please let us know the output of
`gluster volume info`?

Regards,
Vijay

On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney  wrote:

> This Python script will fail when writing to a file in a glusterfs
> FUSE-mounted directory.
>
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write("Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     # read content via standard file methods
>     print mm.readline()  # prints "Hello Python!"
>     # read content via slice notation
>     print mm[:5]  # prints "Hello"
>     # update content using slice notation;
>     # note that new content must have same size
>     mm[6:] = " world!\n"
>     # ... and read again using standard file methods
>     mm.seek(0)
>     print mm.readline()  # prints "Hello  world!"
>     # close the map
>     mm.close()
>
>
>
>
>
>
>
> On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
>
> Native mount issue with multiple clients (centos7 glusterfs 3.12).
>
> It seems to hit Python 2.7 and 3+. A user tries to open file(s) for write
> in a long-running process, and the system eventually times out.
>
> Switching to NFS stops the error.
>
> No bug notice yet. Too many pans on the fire :-(
>
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
>
> Hi Jim,
>
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:
>
>
> Issues with glusterfs FUSE mounts cause issues with Python file open for
> write. We have to use NFS to avoid this.
>
> We really want to see better back-end tools to facilitate cleaning up
> after glusterfs failures. If the system is going to use hard-linked IDs,
> we need a mapping of ID to file to fix things. That option is now on for
> all exports; it should be the default. If a host is down and users delete
> files by the thousands, gluster _never_ catches up. Finding path names for
> IDs across even a 40TB mount, much less the 200+TB one, is a slow process.
> A network outage of 2 minutes and one system didn't get the call to
> recursively delete several dozen directories, each with several thousand
> files.
>
>
>
> Are you talking about some issues in geo-replication module or some other
> application using native mount? Happy to take the discussion forward about
> these issues.
>
> Are there any bugs open on this?
>
> Thanks,
> Amar
>
>
>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>
> Hi,
>
> Looking into something else, I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems, we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features is something I don't get. With RoCE, RDMA is getting mainstream.
> Quotas are very useful, even though the current implementation is not
> perfect. Tiering also makes so much sense, but, for large files, not on a
> per-file level.
>
> To be honest, we only use quotas. We got scared of trying out new
> performance features that could potentially open up a new bag of issues.
>
> Sorry for being such a buzzkill. I really wanted it to be different.
>
> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and we
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features. We are now thinking of cleaning up
> some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e.,
> they would be totally taken out of the codebase in following releases) in
> the next upcoming release, v5.0. The release notes will provide options
> for smoothly migrating to the supported configurations. If you are using
> any of these features, do let us know, so that we can help you with the
> ‘migration’. Also, we are happy to guide new developers to work on those
> components which are not actively being maintained by the current set of
> developers.
>
> List of features hitting sunset:
>
> ‘cluster/stripe’ translator: This translator was developed very early in
> the evolution of GlusterFS, and addressed one of the very common questions
> about distributed filesystems: “What happens if one of my files is bigger
> than the available brick? Say, I have a 2 TB hard drive exported in
> glusterfs, and my file is 

[Gluster-devel] Release 6: Tagged and ready for packaging

2019-03-19 Thread Shyam Ranganathan
Hi,

RC1 testing is complete and blockers have been addressed. The release is
now tagged for a final round of packaging and package testing before
release.

Thanks for testing out the RC builds and reporting issues that needed to
be addressed.

As packaging and final package testing are finishing up, we will also be
writing the upgrade guide for the release, before announcing the release
for general consumption.

Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Soumya Koduri




On 3/19/19 9:49 AM, Sankarshan Mukhopadhyay wrote:

 is (as might just be widely known)
an extensible, portable, support data collection tool primarily aimed
at Linux distributions and other UNIX-like operating systems.

At present there are 2 plugins

and 
I'd like to request that the maintainers do a quick review to check that
these sufficiently cover the topics needed to help diagnose issues.


There is one plugin available for nfs-ganesha as well - 
https://github.com/sosreport/sos/blob/master/sos/plugins/nfsganesha.py


It needs a minor update. Sent a pull request for the same - 
https://github.com/sosreport/sos/pull/1593


Kindly review.

Thanks,
Soumya


This is a lead-up to requesting more usage of the sos tool to diagnose
issues we see reported.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim,

On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:

>
> Issues with glusterfs FUSE mounts cause issues with Python file open for
> write. We have to use NFS to avoid this.
>
> We really want to see better back-end tools to facilitate cleaning up
> after glusterfs failures. If the system is going to use hard-linked IDs,
> we need a mapping of ID to file to fix things. That option is now on for
> all exports; it should be the default. If a host is down and users delete
> files by the thousands, gluster _never_ catches up. Finding path names for
> IDs across even a 40TB mount, much less the 200+TB one, is a slow process.
> A network outage of 2 minutes and one system didn't get the call to
> recursively delete several dozen directories, each with several thousand
> files.
>
>
Are you talking about some issues in geo-replication module or some other
application using native mount? Happy to take the discussion forward about
these issues.

Are there any bugs open on this?

Thanks,
Amar
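
As an aside on the "mapping of ID to file" Jim asks about: on a brick, a
regular file's GFID is kept as a hard link under .glusterfs/<aa>/<bb>/<gfid>,
so the real path can be recovered by matching inode numbers. A minimal sketch
follows (the brick path and GFID in the usage comment are made-up examples);
the full-brick walk it does is exactly the slow scan being complained about,
which options such as storage.build-pgfid, seen enabled in the volume info
earlier in the thread, are meant to help avoid:

import os
import sys


def gfid_to_paths(brick_root, gfid):
    """Return the path(s) on this brick that share the GFID's inode."""
    # for regular files, <brick>/.glusterfs/aa/bb/<gfid> is a hard link
    link = os.path.join(brick_root, ".glusterfs", gfid[0:2], gfid[2:4], gfid)
    target = os.stat(link)  # raises OSError if the GFID is unknown here
    matches = []
    for dirpath, dirnames, filenames in os.walk(brick_root):
        # skip gluster's internal .glusterfs tree itself
        dirnames[:] = [d for d in dirnames if d != ".glusterfs"]
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_ino == target.st_ino and st.st_dev == target.st_dev:
                matches.append(path)
    return matches


if __name__ == "__main__":
    # hypothetical usage:
    #   gfid2path.py /data/glusterfs/home/brick/brick \
    #       0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0
    print("\n".join(gfid_to_paths(sys.argv[1], sys.argv[2])))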


>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>>
>> Hi,
>>
>> Looking into something else, I fell over this proposal. Being a shop that
>> is going into "Leaving GlusterFS" mode, I thought I would give my two
>> cents.
>>
>> While being partially an HPC shop with a few Lustre filesystems, we chose
>> GlusterFS for an archiving solution (2-3 PB), because we could find files
>> in the underlying ZFS filesystems if GlusterFS went sour.
>>
>> We have used the access to the underlying files plenty, because of the
>> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
>> effortless to run, and mainly for that reason we are planning to move
>> away from GlusterFS.
>>
>> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
>> right thing to do. While I never understood why GlusterFS has been in
>> feature-crazy mode instead of stabilizing mode, taking away crucial
>> features is something I don't get. With RoCE, RDMA is getting mainstream.
>> Quotas are very useful, even though the current implementation is not
>> perfect. Tiering also makes so much sense, but, for large files, not on a
>> per-file level.
>>
>> To be honest, we only use quotas. We got scared of trying out new
>> performance features that could potentially open up a new bag of issues.
>>
>> Sorry for being such a buzzkill. I really wanted it to be different.
>>
>> Cheers,
>> Hans Henrik
>> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>>
>> Hi all,
>>
>> Over the last 12 years of Gluster, we have developed many features, and
>> we continue to support most of them till now. But along the way, we have
>> figured out better methods of doing things. Also, we are not actively
>> maintaining some of these features. We are now thinking of cleaning up
>> some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e.,
>> they would be totally taken out of the codebase in following releases) in
>> the next upcoming release, v5.0. The release notes will provide options
>> for smoothly migrating to the supported configurations. If you are using
>> any of these features, do let us know, so that we can help you with the
>> ‘migration’. Also, we are happy to guide new developers to work on those
>> components which are not actively being maintained by the current set of
>> developers.
>>
>> List of features hitting sunset:
>>
>> ‘cluster/stripe’ translator: This translator was developed very early in
>> the evolution of GlusterFS, and addressed one of the very common
>> questions about distributed filesystems: “What happens if one of my files
>> is bigger than the available brick? Say, I have a 2 TB hard drive
>> exported in glusterfs, and my file is 3 TB.” While it served the purpose,
>> it was very hard to handle failure scenarios and give a really good
>> experience to our users with this feature. Over time, Gluster solved the
>> problem with its ‘Shard’ feature, which solves the problem in a much
>> better way and provides a much better solution with the existing
>> well-supported stack. Hence the proposal for Deprecation. If you are
>> using this feature, then do write to us, as it needs a proper migration
>> from the existing volume to a new fully supported volume type before you
>> upgrade.
>>
>> ‘storage/bd’ translator: This feature got into the code base 5 years back
>> with this patch [1]. The plan was to use a block device directly as a
>> brick, which would help handle disk-image storage much more easily in
>> glusterfs. As the feature is not getting more contributions, and we are
>> not seeing any user traction on it, we would like to propose it for
>> Deprecation. If you are using the feature, plan to move to a supported
>> gluster volume configuration, and have your setup ‘supported’ before
>> upgrading to your new gluster version.
>>
>> ‘RDMA’ transport support: Gluster started supporting RDMA while ib-verbs
>> was still new, and very high-end infra around that time was using
>> InfiniBand. Engineers did work with Mellanox, and got the technology into
>> GlusterFS for 

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:

> Hi,
>
> Looking into something else, I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems, we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features is something I don't get. With RoCE, RDMA is getting mainstream.
> Quotas are very useful, even though the current implementation is not
> perfect. Tiering also makes so much sense, but, for large files, not on a
> per-file level.
>
>
It is a valid concern to raise, and removing existing features is not a
good thing most of the time. But one thing we have noticed over the years
is that the features which we develop but do not take to completion cause
the major heartburn. People think a feature is present, and it has already
been a few years since it was introduced, but if the developers are not
working on it, users will always feel that the product doesn't work,
because that one feature didn't work.

Other than Quota in the proposal email, for all the other features, even
though we have *some* users, we are inclined towards deprecating them,
considering the project's overall goal of stability in the longer run.


> To be honest, we only use quotas. We got scared of trying out new
> performance features that could potentially open up a new bag of issues.
>

About Quota, we heard enough voices, so we will make sure we keep it. The
original email was a 'Proposal', and hence these opinions matter for the
decision.

> Sorry for being such a buzzkill. I really wanted it to be different.
>
We hear you. Please let us know one thing: which versions did you try?

We hope that, in the coming months, our recent focus on stability and
technical-debt reduction will help you take another look at Gluster.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and we
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features. We are now thinking of cleaning up
> some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e.,
> they would be totally taken out of the codebase in following releases) in
> the next upcoming release, v5.0. The release notes will provide options
> for smoothly migrating to the supported configurations. If you are using
> any of these features, do let us know, so that we can help you with the
> ‘migration’. Also, we are happy to guide new developers to work on those
> components which are not actively being maintained by the current set of
> developers.
>
> List of features hitting sunset:
>
> ‘cluster/stripe’ translator: This translator was developed very early in
> the evolution of GlusterFS, and addressed one of the very common questions
> about distributed filesystems: “What happens if one of my files is bigger
> than the available brick? Say, I have a 2 TB hard drive exported in
> glusterfs, and my file is 3 TB.” While it served the purpose, it was very
> hard to handle failure scenarios and give a really good experience to our
> users with this feature. Over time, Gluster solved the problem with its
> ‘Shard’ feature, which solves the problem in a much better way and
> provides a much better solution with the existing well-supported stack.
> Hence the proposal for Deprecation. If you are using this feature, then do
> write to us, as it needs a proper migration from the existing volume to a
> new fully supported volume type before you upgrade.
>
> ‘storage/bd’ translator: This feature got into the code base 5 years back
> with this patch [1]. The plan was to use a block device directly as a
> brick, which would help handle disk-image storage much more easily in
> glusterfs. As the feature is not getting more contributions, and we are
> not seeing any user traction on it, we would like to propose it for
> Deprecation. If you are using the feature, plan to move to a supported
> gluster volume configuration, and have your setup ‘supported’ before
> upgrading to your new gluster version.
>
> ‘RDMA’ transport support: Gluster started supporting RDMA while ib-verbs
> was still new, and very high-end infra around that time was using
> InfiniBand. Engineers did work with Mellanox, and got the 

[Gluster-devel] Coverity scan is back

2019-03-19 Thread Yaniv Kaul
After a long shutdown, the upstream Coverity scan for Gluster is back [1].
When it was last measured (in January) we were at 61 issues; we have since
gone up to 93. Overall, we are still at a very good defect density of 0.16,
but we've regressed a bit.

Y.
[1] https://scan.coverity.com/projects/gluster-glusterfs?tab=overview
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel