Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-21 Thread Saravanakumar Arumugam
Replies inline. On 09/19/2015 03:37 PM, ML mail wrote: So yes, indeed I am using ZFS on Linux v0.6.5 as the filesystem behind Gluster. As operating system I use Debian 8.2 GNU/Linux. I also followed the documentation you mentioned in order to enable the POSIX acltype, for example, on my ZFS volume. I
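For reference, a minimal sketch of the ZFS dataset properties usually recommended for Gluster bricks (the pool/dataset name tank/gluster is hypothetical; the thread itself does not show the exact commands):
  zfs set acltype=posixacl tank/gluster
  zfs set xattr=sa tank/gluster
  zfs get acltype,xattr tank/gluster   # verify the settings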

Re: [Gluster-users] gluster processes won't start when a single node is booted

2015-09-21 Thread Mauro M.
Atin, I can assure you that with 3.5, too, I had the second server turned off most of the time, and when I rebooted the primary server without the secondary turned on, gluster always started and I was able to mount the filesystem automatically. If what you describe is the behaviour by design
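As a hedged aside (not from this thread), a workaround often mentioned for bringing a volume up on a lone node is to force-start it and then mount; the volume name below is hypothetical:
  gluster volume start myvol force
  mount -t glusterfs localhost:/myvol /mnt/myvol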

[Gluster-users] Launching heal operation has been unsuccessful

2015-09-21 Thread Mark Ruys
Hi, I'm having trouble with two bricks in replica (on two servers, file1 and file2). They fail to perform a heal:
file1:~$ sudo gluster volume heal GLUSTER-SHARE
Launching Heal operation on volume GLUSTER-SHARE has been unsuccessful
file2:~$ sudo gluster volume heal GLUSTER-SHARE
Commit failed on
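For context, a few diagnostic commands commonly run before retrying a heal (a sketch only; the actual troubleshooting for this thread continues in the replies):
  sudo gluster volume status GLUSTER-SHARE
  sudo gluster volume heal GLUSTER-SHARE info
  sudo gluster peer status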

Re: [Gluster-users] multiple volumes on same brick?

2015-09-21 Thread Atin Mukherjee
On 09/22/2015 02:07 AM, Gluster Admin wrote: > Gluster users, > > We have a multi-node setup where each server has a single XFS brick > (the underlying storage is hardware battery-backed RAID6). Are there any > issues creating multiple gluster volumes using the same underlying > bricks from a

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Davy Croonen
> On 21 Sep 2015, at 12:44, Ravishankar N wrote: > > On 09/21/2015 03:48 PM, Davy Croonen wrote: >> Hmmm, strange, I went through all my bricks with the same result every time: >> >> -bash: cd: >>

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
On 09/21/2015 05:21 PM, Ravishankar N wrote: I ran the stat command against every file on the volume, but after that nothing changed when running gluster volume heal public info. I should have asked you earlier, but can you share the /var/log/glusterfs/glfsheal-.log on the node on

[Gluster-users] "file changed as we read it" in gluster 3.7.4

2015-09-21 Thread hmlth
Hello, I'm evaluating gluster on Debian. I installed version 3.7.4 and I see this kind of error message when I run tar:
# tar c linux-3.16.7-ckt11/ > /dev/null
tar: linux-3.16.7-ckt11/sound/soc: file changed as we read it
tar: linux-3.16.7-ckt11/net: file changed as we read it
tar:

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
On 09/21/2015 03:48 PM, Davy Croonen wrote: Hmmm, strange, I went through all my bricks with the same result every time: -bash: cd: /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4: No such file or directory The directory /mnt/public/brick1/.glusterfs/31/38 does

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Davy Croonen
Ravi, for the moment it's not possible to stop the volume. We've planned a maintenance cycle within 6 weeks. I will then try to start/stop the volume and let you know the outcome. Thanks for your support. Davy > On 21 Sep 2015, at 15:22, Ravishankar N wrote: > >

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Davy Croonen
Hmmm, strange, I went through all my bricks with the same result every time: -bash: cd: /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4: No such file or directory The directory /mnt/public/brick1/.glusterfs/31/38 does exist, and indeed there's a symlink in there, but

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
Thanks Davy, I don't see any errors in the logs, especially in gfs01b where the command was run today :( If these are indeed stale entries, heal info should not display them (and would in fact delete them from the .glusterfs/indices/xattrop of the bricks) when the command is run. Is it possible

Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4

2015-09-21 Thread Krutika Dhananjay
- Original Message - > From: hm...@t-hamel.fr > To: "Krutika Dhananjay" > Cc: gluster-users@gluster.org > Sent: Monday, September 21, 2015 8:56:32 PM > Subject: Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4 > Thank you, this solved the issue

Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4

2015-09-21 Thread hmlth
Thank you, this solved the issue (after a umount/mount). The question now is: what's the catch? Why is this not the default? https://partner-bugzilla.redhat.com/show_bug.cgi?id=1203122 The above link makes me think that there is a problem with "readdirp" performance, but I'm not sure if the

[Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-21 Thread Tiemen Ruiten
Whoops, replied off-list. Additionally I noticed that the generated corosync config is not valid, as there is no interface section: /etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: rd-ganesha-ha
    transport: udpu
}
nodelist {
    node {
        ring0_addr: cobalt
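For comparison, a hedged sketch of the interface block a complete totem section would normally carry (the network address and port below are hypothetical); with transport udpu and an explicit nodelist, corosync can often work without it:
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastport: 5405
    }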

[Gluster-users] GlusterFS across two datacenters

2015-09-21 Thread Amardeep Singh
Hi There, We are planning to implement GlusterFS on CentOS-6 across two datacenters. The architecture we are planning is: *SAN Storage 1 - SiteA* > presented via iSCSI > *GlusterFS Server 1 - SiteA* *SAN Storage 2 - SiteB* > presented via iSCSI > *GlusterFS Server 2 - SiteB* Once glusterfs
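A minimal sketch of how such a two-site replica 2 volume could be created once each iSCSI LUN is formatted and mounted as a brick (hostnames and paths are hypothetical, and synchronous replication over a WAN link is very latency-sensitive):
  gluster peer probe gluster2.siteb.example.com
  gluster volume create sitevol replica 2 gluster1.sitea.example.com:/bricks/brick1/data gluster2.siteb.example.com:/bricks/brick1/data
  gluster volume start sitevol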

[Gluster-users] Gluster 3.7 and nfs ganesha HA howto

2015-09-21 Thread Gluster Admin
Hi, Can someone point me to the howto/docs on setting up nfs-ganesha HA for a distributed-replicated volume across 4 nodes with replica 2? Thanks.
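A rough sketch of the 3.7-era procedure, assuming the documented /etc/ganesha/ganesha-ha.conf layout (hostnames and VIPs below are hypothetical):
  # /etc/ganesha/ganesha-ha.conf on all nodes
  HA_NAME="ganesha-ha-demo"
  HA_VOL_SERVER="node1"
  HA_CLUSTER_NODES="node1,node2,node3,node4"
  VIP_node1="10.0.0.101"
  VIP_node2="10.0.0.102"
  VIP_node3="10.0.0.103"
  VIP_node4="10.0.0.104"
  # then, on one node, enable the HA cluster
  gluster nfs-ganesha enable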

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
On 09/21/2015 05:08 PM, Davy Croonen wrote: On 21 Sep 2015, at 12:44, Ravishankar N wrote: On 09/21/2015 03:48 PM, Davy Croonen wrote: Hmmm, strange, I went through all my bricks with the same result every time: -bash: cd:

Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4

2015-09-21 Thread Krutika Dhananjay
Could you set 'cluster.consistent-metadata' to 'on' and try the test again? # gluster volume set <VOLNAME> cluster.consistent-metadata on -Krutika - Original Message - > From: hm...@t-hamel.fr > To: gluster-users@gluster.org > Sent: Monday, September 21, 2015 7:10:59 PM > Subject:
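Spelled out with a hypothetical volume name, and including the client remount that a later reply says was needed, the sequence would look roughly like:
  gluster volume set myvol cluster.consistent-metadata on
  gluster volume info myvol | grep consistent-metadata   # confirm the option is set
  umount /mnt/myvol && mount -t glusterfs server1:/myvol /mnt/myvol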

[Gluster-users] Monthly GlusterFS 3.5 release, fixing two bugs

2015-09-21 Thread Niels de Vos
[original post at http://blog.nixpanic.net/2015/09/monthly-glusterfs-35-release-fixing-two.html] Hi all, last week another release of the stable glusterfs-3.5 branch was made. Packages for most distributions are ready now, enjoy! Note that the 3.5 version will become End-Of-Life when

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Davy Croonen
Ravi, thanks for your quick reply. I didn't solve the split-brain on the file because I don't know which directory this gfid refers to (due to the implementation of our application we have multiple directories containing the same files). Running the command “getfattr -m . -d -e hex ”
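For what it's worth, a hedged sketch of mapping a gfid back to a brick path (reusing the brick path and gfid quoted elsewhere in this thread; in this particular case the .glusterfs entry was reported missing, so these would fail here):
  # a directory gfid is stored as a symlink pointing at the real directory
  readlink /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4
  # a regular-file gfid is a hard link; find its sibling link on the brick
  find /mnt/public/brick1 -samefile /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4 -not -path '*/.glusterfs/*'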

[Gluster-users] gdeploy-1.0 released

2015-09-21 Thread Sachidananda URS
Hi All, I'm pleased to announce the initial release of gdeploy[1]. RPMs can be downloaded from: http://download.gluster.org/pub/gluster/gdeploy/1.0/ gdeploy is a deployment tool that helps in: * Setting up backends for GlusterFS. * Creating a volume * Adding a brick to the volume. * Removing a

Re: [Gluster-users] nfs-ganesha HA with arbiter volume

2015-09-21 Thread Jiffin Tony Thottan
On 21/09/15 13:56, Tiemen Ruiten wrote: Hello Soumya, Kaleb, list, This Friday I created the gluster_shared_storage volume manually; I just tried it with the command you supplied, but both have the same result: from etc-glusterfs-glusterd.vol.log on the node where I issued the command:

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
On 09/21/2015 03:09 PM, Davy Croonen wrote: Ravi, thanks for your quick reply. I didn't solve the split-brain on the file because I don't know which directory this gfid refers to (due to the implementation of our application we have multiple directories containing the same files).

[Gluster-users] multiple volumes on same brick?

2015-09-21 Thread Gluster Admin
Gluster users, We have a multi-node setup where each server has a single XFS brick (the underlying storage is hardware battery-backed RAID6). Are there any issues creating multiple gluster volumes using the same underlying bricks, from a performance or management standpoint? Or would it be better
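For illustration, a hedged sketch of the usual pattern: give each volume its own subdirectory on the shared filesystem instead of pointing several volumes at the very same brick directory (hostnames and paths below are hypothetical):
  gluster volume create vol1 replica 2 server1:/data/brick1/vol1 server2:/data/brick1/vol1
  gluster volume create vol2 replica 2 server1:/data/brick1/vol2 server2:/data/brick1/vol2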

Re: [Gluster-users] nfs-ganesha HA with arbiter volume

2015-09-21 Thread Tiemen Ruiten
Hello Soumya, Kaleb, list, This Friday I created the gluster_shared_storage volume manually; I just tried it with the command you supplied, but both have the same result: from etc-glusterfs-glusterd.vol.log on the node where I issued the command: [2015-09-21 07:59:47.756845] I [MSGID: 106474]
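For reference, the 3.7 option that lets glusterd create and mount the shared storage volume by itself (a sketch; not necessarily the command referred to above) is:
  gluster volume set all cluster.enable-shared-storage enable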

[Gluster-users] How to clear split brain info

2015-09-21 Thread Davy Croonen
Hi all For a reason unknown at the moment, the command "gluster volume heal public info" shows a lot of the following entries: /Doc1_LOUJA.htm - Is in split-brain. The part after the / differs, but the gfid is always the same; I suppose this gfid is referring to a directory. Now considering

Re: [Gluster-users] Launching heal operation has been unsuccessful

2015-09-21 Thread Ravishankar N
On 09/21/2015 01:18 PM, Mark Ruys wrote: Hi, I'm having trouble with two bricks in replica (on two servers, file1 and file2). They fail to perform a heal: file1:~$ sudo gluster volume heal GLUSTER-SHARE Launching Heal operation on volume GLUSTER-SHARE has been unsuccessful file2:~$ sudo gluster

Re: [Gluster-users] How to clear split brain info

2015-09-21 Thread Ravishankar N
On 09/21/2015 02:32 PM, Davy Croonen wrote: Hi all For a reason unknown at the moment, the command "gluster volume heal public info" shows a lot of the following entries: /Doc1_LOUJA.htm - Is in split-brain. The part after the / differs, but the gfid is always the same; I suppose this