Re: [Gluster-devel] Report ESTALE as ENOENT

2018-02-21 Thread Raghavendra G
On Wed, Oct 11, 2017 at 7:32 PM, J. Bruce Fields wrote: > On Wed, Oct 11, 2017 at 04:11:51PM +0530, Raghavendra G wrote: > > On Thu, Mar 31, 2016 at 1:22 AM, J. Bruce Fields > > wrote: > > > > > On Mon, Mar 28, 2016 at 04:21:00PM -0400, Vijay Bellur
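The subject of this thread, reporting ESTALE as ENOENT, amounts to a small errno translation at the boundary where errors are handed back to applications. A minimal sketch of the idea follows; the function name is hypothetical and this is not actual GlusterFS code.

```c
#include <errno.h>

/* Hypothetical helper, not actual GlusterFS code: remap ESTALE
 * (stale file handle, NFS-flavoured) to ENOENT before returning
 * the error to applications, which typically only check for
 * ENOENT when a file has disappeared underneath them. */
int
map_estale_to_enoent(int op_errno)
{
    return (op_errno == ESTALE) ? ENOENT : op_errno;
}
```

Any other errno passes through unchanged, so only the NFS-specific error is hidden from callers.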

[Gluster-devel] 4.0 documentation

2018-02-21 Thread Nithya Balachandran
Hi, We need to start writing up topics for 4.0. If you have worked on any features for 4.0, let's get started on writing those up. Please get in touch with me so I can track them. Regards, Nithya ___ Gluster-devel mailing list

Re: [Gluster-devel] 2 way with Arbiter degraded behavior

2018-02-21 Thread Jeff Applewhite
Yes, thanks. That is exactly the scenario I was referring to. Writes will never be optimizable in this way, but in random workloads the majority of I/Os will usually be reads. On Wed, Feb 21, 2018 at 11:39 AM, Manoj Pillai wrote: > > > On Wed, Feb 21, 2018 at 9:13 PM, Jeff
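The read-side optimization discussed in this thread, steering reads away from a replica whose backing RAID set is rebuilding, can be sketched roughly as below. The function name and the degraded[] health array are illustrative assumptions, not actual AFR code.

```c
/* Hypothetical sketch, not actual AFR code: choose a replica
 * (subvolume) to serve a read from, skipping any replica whose
 * backing RAID set is marked degraded (e.g. rebuild in progress). */
int
pick_read_subvol(const int degraded[], int n_subvols)
{
    for (int i = 0; i < n_subvols; i++) {
        if (!degraded[i])
            return i;   /* first healthy replica wins */
    }
    return 0;           /* all replicas degraded: fall back to the first */
}
```

Writes cannot be optimized this way because they must land on all replicas, which is why the thread focuses on reads.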

Re: [Gluster-devel] 2 way with Arbiter degraded behavior

2018-02-21 Thread Manoj Pillai
On Wed, Feb 21, 2018 at 9:13 PM, Jeff Applewhite wrote: > Hi All > > When you have a setup with 2 way replication + Arbiter backed by two > large RAID 6 volumes what happens when there is a disk failure and > rebuild in progress in one of those RAID sets from a client >

[Gluster-devel] 2 way with Arbiter degraded behavior

2018-02-21 Thread Jeff Applewhite
Hi All, When you have a setup with 2-way replication + Arbiter backed by two large RAID 6 volumes, what happens, from a client perspective, when there is a disk failure and a rebuild in progress in one of those RAID sets? Does the FUSE client know how to prioritize the quicker disk (the RAID set that

[Gluster-devel] Heketi v6.0.0 available for download

2018-02-21 Thread Michael Adam
Hi all, Heketi v6.0.0 is now available [1]. This is the new stable version of Heketi; older versions are discontinued. The main additions in this release are the block-volume API, a great deal of stabilization work to prevent inconsistent-database and out-of-sync situations, and tooling to do

Re: [Gluster-devel] gNFS service management from glusterd

2018-02-21 Thread Atin Mukherjee
On Wed, Feb 21, 2018 at 4:24 PM, Xavi Hernandez wrote: > Hi all, > > currently glusterd sends a SIGKILL to stop gNFS, while all other services > are stopped with a SIGTERM signal first (this can be seen in > glusterd_svc_stop() function of mgmt/glusterd xlator). > > The

[Gluster-devel] Coverity covscan for 2018-02-21-49e57efa (master branch)

2018-02-21 Thread staticanalysis
GlusterFS Coverity covscan results are available from http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-02-21-49e57efa

Re: [Gluster-devel] Inputs for 4.0 Release notes on Performance

2018-02-21 Thread Shyam Ranganathan
On 02/19/2018 09:29 PM, Raghavendra Gowdappa wrote: > I am trying to come up with content for release notes for 4.0 > summarizing performance impact. Can you point me to > patches/documentation/issues/bugs that could impact performance in 4.0? > Better still, if you can give me a summary of

[Gluster-devel] gNFS service management from glusterd

2018-02-21 Thread Xavi Hernandez
Hi all, Currently glusterd sends a SIGKILL to stop gNFS, while all other services are stopped with a SIGTERM signal first (this can be seen in the glusterd_svc_stop() function of the mgmt/glusterd xlator). The question is why gNFS cannot be stopped with SIGTERM like all the other services. Using SIGKILL blindly
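The stop sequence being argued for here, SIGTERM first with escalation to SIGKILL only if the service does not exit within a grace period, can be sketched as follows. stop_service() is a hypothetical helper, not the actual glusterd_svc_stop() implementation, and it assumes the target is a child process of the caller so that waitpid() can observe its exit.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper, not the real glusterd_svc_stop(): ask a
 * child process to exit with SIGTERM, poll for up to grace_seconds,
 * and escalate to SIGKILL only if it is still running.
 * Returns 0 if SIGTERM sufficed, 1 if SIGKILL was needed. */
int
stop_service(pid_t pid, int grace_seconds)
{
    kill(pid, SIGTERM);
    for (int i = 0; i < grace_seconds * 10; i++) {
        if (waitpid(pid, NULL, WNOHANG) == pid)
            return 0;               /* exited cleanly on SIGTERM */
        usleep(100000);             /* 100 ms between checks */
    }
    kill(pid, SIGKILL);             /* grace period over: force it */
    waitpid(pid, NULL, 0);          /* reap the killed child */
    return 1;
}
```

The benefit of the SIGTERM-first path is that the service gets a chance to run its cleanup handlers (flush state, release locks, unregister from portmap in the gNFS case) before being torn down.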