[Gluster-users] Gluster's proposal to adopt GPL cure enforcement

2018-04-18 Thread Amar Tumballi
Hi all, Following Red Hat's announcement of its commitment to a fairer open-source enforcement model, along with many other companies (led by Facebook/Google/IBM) [1], the Gluster project is also proposing to adopt the COMMITMENT statement in its project. While we discuss the same here, an RFC

[Gluster-users] Meeting minutes : April 18th Maintainers meeting.

2018-04-18 Thread Amar Tumballi
Meeting date: 04/18/2018 (April 18th, 2018), 19:30 IST, 14:00 UTC, 10:00 EDT BJ Link - Bridge: https://bluejeans.com/205933580 - Download: https://bluejeans.com/s/qjebZ

Re: [Gluster-users] Announcing Glusterfs release 3.12.8 (Long Term Maintenance)

2018-04-18 Thread Sankarshan Mukhopadhyay
On Thu, Apr 19, 2018 at 9:25 AM, Jiffin Tony Thottan wrote: > The Gluster community is pleased to announce the release of Gluster 3.12.8 > (packages available at [1,2,3]). > > Release notes for the release can be found at [4]. > The link to the release notes has a banner

[Gluster-users] Announcing Glusterfs release 3.12.8 (Long Term Maintenance)

2018-04-18 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 3.12.8 (packages available at [1,2,3]). Release notes for the release can be found at [4]. Thanks, Gluster community [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.8/ [2]

Re: [Gluster-users] Replicated volume read request are served by remote brick

2018-04-18 Thread Vlad Kopylov
I was trying to use http://lists.gluster.org/pipermail/gluster-users/2015-June/022322.html as an example and it never worked. Neither did gluster volume set cluster.nufa enable on, with cluster.choose-local: on and cluster.nufa: on. It still reads data from network bricks. Was thinking to block

[Gluster-users] Replicated volume read request are served by remote brick

2018-04-18 Thread Frederik Banke
I have created a 2 brick replicated volume. gluster> volume status Status of volume: storage Gluster process TCP Port RDMA Port Online Pid -- Brick

Re: [Gluster-users] Bitrot strange behavior

2018-04-18 Thread FNU Raghavendra Manjunath
Hi Cedric, The 120 seconds are given to allow a window for things to settle, i.e. imagine the following situation: 1) open the file (fd1 as file descriptor), 2) modify the file via fd1, 3) close the file descriptor (fd1), 4) open the file again (fd2), 5) modify. In the above set of operations, by the time
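The window described above behaves like a debounce timer: signing is deferred until the file has stayed closed for the whole interval, so each reopen-and-modify cycle pushes signing out instead of producing a signature over half-settled data. A minimal Python sketch of that idea, under the assumption that this is the intent (class and method names are my own illustration, not Gluster's bitrot code; Gluster's default window is 120 seconds):

```python
import time

class SigningScheduler:
    """Defer signing until a file has stayed closed for a full settle window.

    Illustrative sketch of the behavior described in the thread, not the
    actual Gluster implementation.
    """
    def __init__(self, settle_window=120.0):
        self.settle_window = settle_window
        self.last_close = {}  # path -> monotonic timestamp of latest close

    def on_close(self, path):
        # Every close restarts the clock, so the fd1-then-fd2 sequence in
        # the example above delays signing rather than signing stale data.
        self.last_close[path] = time.monotonic()

    def due_for_signing(self, path):
        ts = self.last_close.get(path)
        return ts is not None and time.monotonic() - ts >= self.settle_window
```

With a shortened window, closing a file makes it signable only after the window elapses, and any new close resets the wait.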

Re: [Gluster-users] Bitrot - Restoring bad file

2018-04-18 Thread FNU Raghavendra Manjunath
Hi, Patch [1] has been sent for review. The patch also prints the brick to which the corrupted object (file) belongs. With the patch, the output of the scrub status command looks like this. " # gluster volume bitrot repl scrub status Volume name : repl State of scrub: Active (Idle) Scrub
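The scrub status output quoted above is a series of "Key : value" lines, which makes it straightforward to consume from a script. A sketch of such a parser, based only on the sample in this thread (real output may have per-node sections or other lines this does not handle):

```python
def parse_scrub_status(text):
    """Parse 'Key : value' lines from `gluster volume bitrot <vol> scrub status`
    output into a dict. Splits on the first ':' per line, so values may
    themselves contain colons only after that point.
    """
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

# Sample taken from the output quoted in this message.
sample = """Volume name : repl
State of scrub: Active (Idle)"""
```

For instance, parse_scrub_status(sample)["State of scrub"] yields "Active (Idle)".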

Re: [Gluster-users] Bitrot strange behavior

2018-04-18 Thread Cedric Lemarchand
Hi Sweta, Thanks, this raises some more questions for me: 1. What is the reason for delaying signature creation? 2. As the same file (replicated or dispersed) having different signatures across bricks is by definition an error, it would be good to trigger it during a scrub, or with a different

Re: [Gluster-users] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first

2018-04-18 Thread Nithya Balachandran
Hi Artem, I have not looked into this but what Amar said seems likely. Including Poornima to this thread. Regards, Nithya On 18 April 2018 at 10:01, Artem Russakovskii wrote: > Nithya, Amar, > > Any movement here? There could be a significant performance gain here that >

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Artem Russakovskii
OK, thank you. I'll try that. The reason I was confused about its status is these things in the doc: How To Test > TBD. > Documentation > TBD > Status > Design complete. Implementation done. The only thing pending is the > compounding of two fops in shd code. Sincerely, Artem -- Founder,

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Ravishankar N
On 04/18/2018 11:59 AM, Artem Russakovskii wrote: Btw, I've now noticed at least 5 variations in toggling binary option values. Are they all interchangeable, or will using the wrong value not work in some cases? yes/no true/false True/False on/off enable/disable It's quite a

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Artem Russakovskii
Btw, I've now noticed at least 5 variations in toggling binary option values. Are they all interchangeable, or will using the wrong value not work in some cases? yes/no true/false True/False on/off enable/disable It's quite a confusing/inconsistent practice, especially given that many options
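The five spellings listed above can all be folded onto one boolean, which is presumably what the CLI does internally. A sketch of such a normalizer (my own helper for illustration, not a Gluster API; the accepted set is taken from the variants listed in this message):

```python
_TRUTHY = {"yes", "true", "on", "enable", "1"}
_FALSY = {"no", "false", "off", "disable", "0"}

def parse_bool_option(value):
    """Map the boolean spellings mentioned in the thread (yes/no, true/false,
    True/False, on/off, enable/disable) onto a Python bool, case-insensitively.
    Raises ValueError for anything unrecognized.
    """
    v = value.strip().lower()
    if v in _TRUTHY:
        return True
    if v in _FALSY:
        return False
    raise ValueError(f"not a recognized boolean option value: {value!r}")
```

If the variants really are interchangeable, a single canonical form like this is why using "True" versus "on" would not matter; whether every Gluster option accepts every spelling is exactly the open question in the thread.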

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Artem Russakovskii
Thanks for the link. Looking at the status of that doc, it isn't quite ready yet, and there's no mention of the option. Does it mean that whatever is ready now in 4.0.1 is incomplete but can be enabled via granular-entry-heal=on, and when it is complete, it'll become the default and the flag will

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-18 Thread Ravishankar N
On 04/18/2018 10:35 AM, Artem Russakovskii wrote: Hi Ravi, Could you please expand on how these would help? By forcing full here, we move the logic from the CPU to network, thus decreasing CPU utilization, is that right? Yes, 'diff' employs the rchecksum FOP which does a sha256 checksum
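The CPU-versus-network trade-off discussed above can be made concrete: 'diff' checksums each block on both sides and transfers only mismatching blocks (more CPU, less network), while 'full' streams the whole file (no checksums, more network). A simplified Python sketch of the two strategies, assuming files of equal length and a tiny block size for illustration (not Gluster's self-heal code):

```python
import hashlib

def diff_heal(source, sink, block_size=4):
    """Heal `sink` from `source`, transferring only blocks whose sha256
    differs. Returns (healed_bytes, bytes_transferred). Sketch of the idea
    behind cluster.data-self-heal-algorithm=diff.
    """
    assert len(source) == len(sink)  # sketch: sizes already reconciled
    healed = bytearray(sink)
    transferred = 0
    for off in range(0, len(source), block_size):
        src_blk = source[off:off + block_size]
        snk_blk = sink[off:off + block_size]
        # 'diff' pays CPU for a per-block checksum (the rchecksum FOP)...
        if hashlib.sha256(src_blk).digest() != hashlib.sha256(snk_blk).digest():
            # ...so that only mismatching blocks cross the network.
            healed[off:off + block_size] = src_blk
            transferred += len(src_blk)
    return bytes(healed), transferred

def full_heal(source, sink):
    """'full' skips checksumming entirely and copies the whole file."""
    return bytes(source), len(source)
```

On a 12-byte file differing in one 4-byte block, diff_heal transfers 4 bytes where full_heal transfers 12, at the cost of checksumming every block on both sides.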