Re: [Gluster-users] "gluster volume set all cluster.enable-shared-storage enable"

2017-05-02 Thread Jiffin Tony Thottan
On 02/05/17 15:27, hvjunk wrote: Good day, I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs running Debian 8. The GlusterFS volume is to be "replica 3 arbiter 1". In the NFS-ganesha information I’ve gleaned thus far, it mentions the "gluster volume set all
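
A minimal sketch of the command under discussion, assuming Gluster 3.7 or later (the resulting volume name and mount point are the documented defaults):

    # Enables the built-in shared storage volume that NFS-Ganesha HA uses.
    # Gluster creates a replicated volume named gluster_shared_storage and
    # mounts it on the cluster nodes, typically at /var/run/gluster/shared_storage:
    gluster volume set all cluster.enable-shared-storage enable

    # Verify the volume was created and mounted:
    gluster volume info gluster_shared_storage
    mount | grep shared_storage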

Re: [Gluster-users] glustershd: unable to get index-dir on myvolume-client-0

2017-05-02 Thread Ravishankar N
On 05/02/2017 11:48 PM, mabi wrote: Hi Ravi, Thanks for the pointer, you are totally right: the "dirty" directory is missing on my node1. Here is the output of "ls -la" on both nodes: node1: drw--- 2 root root 2 Apr 28 22:15 entry-changes drw--- 2 root root 2 Mar 6 2016
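
As an illustration of the check being discussed (the brick path is hypothetical, and the restart as a way to recreate the missing index directory is an assumption, not a fix confirmed in this thread):

    # Inspect the self-heal index directories on each brick; a healthy
    # 3.8 brick has dirty, entry-changes and xattrop here:
    ls -la /data/brick/.glusterfs/indices/

    # If "dirty" is missing, restarting the brick processes is one way
    # to let the brick daemon recreate it (assumption; verify first):
    gluster volume start myvolume force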

Re: [Gluster-users] How to fix heal-failed

2017-05-02 Thread David Squire
Thank you, Lindsay! Gluster volume info returns: Volume Name: hosting Type: Distributed-Replicate Volume ID: f5d06f57-d81c-4a76-a71b-6eb31b1ce0b0 Status: Started Number of Bricks: 8 x 3 = 24 Transport-type: tcp Bricks: Brick1: ftp1:/data/brick1/hosting Brick2:

Re: [Gluster-users] How to fix heal-failed

2017-05-02 Thread Lindsay Mathieson
On 3/05/2017 5:17 AM, David Squire wrote: I am very new to Gluster; thank you very much for helping! My Gluster (version 3.5.9) volume is Distributed Replicated. When I run “gluster volume heal my-volume info heal-failed” I get a very, very long list of items. Not an expert, but :)

[Gluster-users] How to fix heal-failed

2017-05-02 Thread David Squire
I am very new to Gluster; thank you very much for helping! My Gluster (version 3.5.9) volume is Distributed Replicated. When I run "gluster volume heal my-volume info heal-failed" I get a very, very long list of items. How can I fix them? There are some files and directories on the
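
For readers with the same question, these are the relevant commands on a 3.5.x replicated volume (the full heal is the usual next step, though it will not resolve genuine split-brain entries):

    # List entries the self-heal daemon failed to heal:
    gluster volume heal my-volume info heal-failed

    # Trigger a full self-heal crawl over the volume:
    gluster volume heal my-volume full

    # Entries in split-brain need manual resolution:
    gluster volume heal my-volume info split-brain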

Re: [Gluster-users] glustershd: unable to get index-dir on myvolume-client-0

2017-05-02 Thread mabi
Hi Ravi, Thanks for the pointer, you are totally right: the "dirty" directory is missing on my node1. Here is the output of "ls -la" on both nodes: node1: drw--- 2 root root 2 Apr 28 22:15 entry-changes drw--- 2 root root 2 Mar 6 2016 xattrop node2: drw--- 2 root root 3 May 2

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Nithya Balachandran
On 2 May 2017 at 16:59, Shyam wrote: > Talur, > > Please wait for this fix before releasing 3.10.2. > > We will take in the change to either prevent add-brick in > sharded+distributed volumes, or throw a warning and force the use of --force > to execute this. > > IIUC, the

[Gluster-users] Meeting minutes of todays Bug Triage

2017-05-02 Thread Niels de Vos
Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2017-05-02/weely_gluster_bug_triage.2017-05-02-12.12.html Minutes (text): https://meetbot.fedoraproject.org/gluster-meeting/2017-05-02/weely_gluster_bug_triage.2017-05-02-12.12.txt Log:

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Shyam
Talur, Please wait for this fix before releasing 3.10.2. We will take in the change to either prevent add-brick in sharded+distributed volumes, or throw a warning and force the use of --force to execute this. Let's get a bug going, and not wait for someone to report it in bugzilla, and also
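
For context, the operation the proposed guard would intercept looks like this (host and volume names are hypothetical):

    # Expand a distributed-replicated volume by one full replica set:
    gluster volume add-brick myvol srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1

    # The subsequent rebalance is the step reported to corrupt data on
    # sharded volumes at the time of this thread:
    gluster volume rebalance myvol start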

Re: [Gluster-users] Add single server

2017-05-02 Thread lemonnierk
> Don't bother with another bug. We have raised > https://github.com/gluster/glusterfs/issues/169 for the issue in the mail > thread. If I'm not mistaken, that's about the possibility of adding bricks without adding a full replica set at once; that's a different subject. We were talking about adding
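
To make the distinction concrete (hypothetical names): bricks currently have to be added in multiples of the replica count, whereas the linked issue asks for growing a volume by fewer bricks than that:

    # Supported today on a replica-3 volume: add a complete set of three
    # bricks, which becomes a new distribute subvolume:
    gluster volume add-brick myvol srv4:/b srv5:/b srv6:/b

    # Adding a single brick (what issue 169 requests) is rejected,
    # since the brick count must be a multiple of the replica count:
    # gluster volume add-brick myvol srv4:/b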

Re: [Gluster-users] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-02 Thread Pranith Kumar Karampuri
On Sun, Apr 30, 2017 at 9:01 PM, Shyam wrote: > Hi, > > Release 3.11 for gluster has been branched [1] and tagged [2]. > > We have ~4 weeks to the release of 3.11, and a week to backport features that > slipped the branching date (May-5th). > > A tracker BZ [3] has been opened

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Pranith Kumar Karampuri
On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri wrote: > Yeah, it is a good idea. I asked him to raise a bug and we can move forward > with it. > +Raghavendra/Nithya, who can help with the fix. > > On Mon, May 1, 2017 at 9:07 PM, Joe Julian

[Gluster-users] local mounts failing using systemd during bootstrapping gluster

2017-05-02 Thread hvjunk
Good day, The specific systemd problem reference is https://github.com/systemd/systemd/issues/4468#issuecomment-255711912 This problem with GlusterFS arises specifically during bootstrapping of the cluster, i.e. configure
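
One commonly suggested mitigation, sketched here as an assumption rather than a verified fix for the linked issue (note that x-systemd.requires needs systemd >= 220, newer than stock Debian 8):

    # /etc/fstab: mark the mount as network-dependent, tie it to glusterd,
    # and use an automount so a slow glusterd does not fail the boot:
    localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service,x-systemd.automount  0 0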

[Gluster-users] "gluster volume set all cluster.enable-shared-storage enable"

2017-05-02 Thread hvjunk
Good day, I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs running Debian 8. The GlusterFS volume is to be "replica 3 arbiter 1". In the NFS-ganesha information I’ve gleaned thus far, it mentions the "gluster volume set all cluster.enable-shared-storage enable". My first question

[Gluster-users] Replica 2 cluster not replicating

2017-05-02 Thread Marcus
Hi all! I have set up a replicated gluster cluster on two identical machines with replica 2. I run CentOS 7 and gluster version 3.8.11. I started out by creating a distributed single-node gluster brick. When I created the brick there was already about 11TB of data in the directory before I created
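
A note on the likely sticking point (an assumption about the cause, not a confirmed diagnosis): files that were already on a brick before the volume was created are not copied to the new replica automatically; they have to be healed or touched through a client mount:

    # Ask the self-heal daemon to crawl and replicate everything:
    gluster volume heal <volname> full

    # Or walk the files via a FUSE mount so AFR notices and heals them:
    find /mnt/glustervol -exec stat {} \; > /dev/null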

Re: [Gluster-users] glustershd: unable to get index-dir on myvolume-client-0

2017-05-02 Thread Ravishankar N
On 05/02/2017 01:08 AM, mabi wrote: Hi, I have a two-node GlusterFS 3.8.11 replicated volume and just noticed today in the glustershd.log log file a lot of the following warning messages: [2017-05-01 18:42:18.004747] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep]

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-02 Thread Soumya Koduri
Hi, On 05/02/2017 01:34 AM, Rudolf wrote: Hi Gluster users, First, I'd like to thank you all for this amazing open-source project! Thank you! I'm working on a home project – three servers with Gluster and NFS-Ganesha. My goal is to create an HA NFS share with three copies of each file on each server. My
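
For reference, HA configuration in the Gluster 3.8 / NFS-Ganesha era was driven by /etc/ganesha/ganesha-ha.conf; a minimal sketch with placeholder hostnames and addresses:

    # /etc/ganesha/ganesha-ha.conf
    HA_NAME="ganesha-ha-cluster"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1,server2,server3"
    VIP_server1="192.168.1.101"
    VIP_server2="192.168.1.102"
    VIP_server3="192.168.1.103"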