Replies inline.
On 09/19/2015 03:37 PM, ML mail wrote:
So yes indeed I am using ZFS on Linux v.0.6.5 as filesystem behind Gluster. As
operating system I use Debian 8.2 GNU/Linux.
I also followed the documentation you mention in order to enable, for
example, the POSIX acltype on my ZFS volume.
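For reference, the ZFS-on-Linux settings usually recommended for a Gluster
brick look roughly like this (the dataset name tank/gluster is a
placeholder):

# enable POSIX ACL support on the brick dataset
zfs set acltype=posixacl tank/gluster
# store extended attributes in the inode; commonly recommended for Gluster
zfs set xattr=sa tank/gluster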
I
Atin,
I can assure you that with 3.5, too, I had the second server turned off most
of the time, and when I rebooted the primary server without the secondary
turned on, gluster always started and I was able to mount the filesystem
automatically.
If what you describe is the behaviour by design
Hi,
I'm having trouble with two bricks in replica (on two servers, file1 and
file2). They fail to perform a heal:
file1:~$ sudo gluster volume heal GLUSTER-SHARE
Launching Heal operation on volume GLUSTER-SHARE has been unsuccessful
file2:~$ sudo gluster volume heal GLUSTER-SHARE
Commit failed on
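When the heal launch fails like this, a general first check (a sketch, not
specific to this setup) is whether the peers and the self-heal daemon are
all up:

file1:~$ sudo gluster peer status            # peers should be in state "Connected"
file1:~$ sudo gluster volume status GLUSTER-SHARE   # Self-heal Daemon should show online on both nodes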
On 09/22/2015 02:07 AM, Gluster Admin wrote:
> Gluster users,
>
> We have a multiple node setup where each server has a single XFS brick
> (underlying storage is hardware battery backed raid6). Are there any
> issues creating multiple gluster volumes using the same underlying
> bricks from a
> On 21 Sep 2015, at 12:44, Ravishankar N wrote:
>
>
> On 09/21/2015 03:48 PM, Davy Croonen wrote:
>> Hmmm, strange, I went through all my bricks, each time with the same result:
>>
>> -bash: cd:
>>
On 09/21/2015 05:21 PM, Ravishankar N wrote:
I ran the stat command against every file on the volume, but after
that nothing changed in the output of gluster volume heal public info.
I should have asked you earlier, but can you share the
/var/log/glusterfs/glfsheal-.log on the node on
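For context, a stat sweep like the one described is typically driven from a
client mount point, roughly like this (the mount path /mnt/glusterfs is
hypothetical):

# a lookup on each file from the client side can trigger self-heal
find /mnt/glusterfs -exec stat {} + > /dev/null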
Hello,
I'm evaluating gluster on Debian. I installed version 3.7.4 and I
see this kind of error message when I run tar:
# tar c linux-3.16.7-ckt11/ > /dev/null
tar: linux-3.16.7-ckt11/sound/soc: file changed as we read it
tar: linux-3.16.7-ckt11/net: file changed as we read it
tar:
On 09/21/2015 03:48 PM, Davy Croonen wrote:
Hmmm, strange, I went through all my bricks, each time with the same result:
-bash: cd:
/mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4: No
such file or directory
The directory /mnt/public/brick1/.glusterfs/31/38 does
Ravi
For the moment it’s not possible to stop the volume. We’ve planned a
maintenance cycle within 6 weeks. I will then try to start/stop the volume and
let you know the outcome.
Thanks for your support.
Davy
> On 21 Sep 2015, at 15:22, Ravishankar N wrote:
>
>
Hmmm, strange, I went through all my bricks, each time with the same result:
-bash: cd:
/mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4: No
such file or directory
The directory /mnt/public/brick1/.glusterfs/31/38 does exist, and indeed
there’s a symlink in there but
Thanks Davy, I don't see any errors in the logs, especially in gfs01b
where the command was run today :(
If these are indeed stale entries, heal info should not display them
(and would in fact delete them from the .glusterfs/indices/xattrop
directory of the bricks) when the command is run.
Is it possible
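The index entries in question can be inspected directly on a brick; any
leftover gfids show up as file names there:

ls /mnt/public/brick1/.glusterfs/indices/xattrop/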
- Original Message -
> From: hm...@t-hamel.fr
> To: "Krutika Dhananjay"
> Cc: gluster-users@gluster.org
> Sent: Monday, September 21, 2015 8:56:32 PM
> Subject: Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4
> Thank you, this solved the issue
Thank you, this solved the issue (after a umount/mount). The question
now is: what's the catch? Why is this not the default?
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1203122
The above link makes me think that there is a problem with "readdirp"
performance, but I'm not sure if the
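As a quick experiment, readdirp can also be disabled on the client side at
mount time to compare behaviour (sketch only; server and volume names are
placeholders):

mount -t glusterfs -o use-readdirp=no server:/volname /mnt/test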
Whoops, replied off-list.
Additionally I noticed that the generated corosync config is not valid, as
there is no interface section:
/etc/corosync/corosync.conf
totem {
version: 2
secauth: off
cluster_name: rd-ganesha-ha
transport: udpu
}
nodelist {
node {
ring0_addr: cobalt
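For comparison, a totem section that includes an interface block would look
something like this (the bind address below is a placeholder for the
cluster subnet):

totem {
version: 2
secauth: off
cluster_name: rd-ganesha-ha
transport: udpu
interface {
ringnumber: 0
bindnetaddr: 192.168.1.0
mcastport: 5405
}
}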
Hi There,
We are planning to implement GlusterFS on CentOS-6 across two
datacenters. The architecture we are planning is:
*SAN Storage 1 - SiteA* > exported via iSCSI > *GlusterFS Server 1 - SiteA*
*SAN Storage 2 - SiteB* > exported via iSCSI > *GlusterFS Server 2 - SiteB*
Once glusterfs
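Assuming each GlusterFS server formats its iSCSI LUN and exposes it as one
brick, the cross-site volume would be created roughly like this (hostnames
and brick paths are hypothetical):

gluster peer probe gfs2-siteb
gluster volume create sitevol replica 2 gfs1-sitea:/bricks/brick1 gfs2-siteb:/bricks/brick1
gluster volume start sitevol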
Hi,
Can someone point me to the howto/docs on setting up nfs-ganesha HA for a
distributed-replicated volume across 4 nodes with replica 2?
thanks
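In the meantime, a rough sketch of what the 3.7-era HA setup expects in
/etc/ganesha/ganesha-ha.conf (all node names and VIPs below are
placeholders; check the official docs for the exact key names):

HA_NAME="ganesha-ha-cluster"
HA_VOL_SERVER="node1"
HA_CLUSTER_NODES="node1,node2,node3,node4"
VIP_node1="10.0.0.11"
VIP_node2="10.0.0.12"
VIP_node3="10.0.0.13"
VIP_node4="10.0.0.14"

followed by enabling it with "gluster nfs-ganesha enable".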
On 09/21/2015 05:08 PM, Davy Croonen wrote:
On 21 Sep 2015, at 12:44, Ravishankar N wrote:
On 09/21/2015 03:48 PM, Davy Croonen wrote:
Hmmm, strange, I went through all my bricks, each time with the same result:
-bash: cd:
Could you set 'cluster.consistent-metadata' to 'on' and try the test again?
# gluster volume set <volname> cluster.consistent-metadata on
-Krutika
- Original Message -
> From: hm...@t-hamel.fr
> To: gluster-users@gluster.org
> Sent: Monday, September 21, 2015 7:10:59 PM
> Subject:
[original post at
http://blog.nixpanic.net/2015/09/monthly-glusterfs-35-release-fixing-two.html]
Hi all,
last week another release of the stable glusterfs-3.5 branch was made.
Packages for most distributions are ready now, enjoy!
Note that the 3.5 version will become End-Of-Life when
Ravi
Thanks for your quick reply.
I didn't solve the split-brain on the file because I don't know which
directory this gfid refers to (due to the implementation of our
application we have multiple directories containing the same files).
Running the command “getfattr -m . -d -e hex ”
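For what it's worth, a gfid can usually be resolved back to a path directly
on a brick (the brick path below is taken from earlier in this thread):

# a directory gfid: the .glusterfs entry is a symlink pointing at <parent-gfid>/<name>
readlink /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4
# a regular file gfid: the entry is a hardlink, so -samefile finds the real path(s)
find /mnt/public/brick1 -samefile /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4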
Hi All,
I'm pleased to announce the initial release of gdeploy[1]. RPMs can be
downloaded from:
http://download.gluster.org/pub/gluster/gdeploy/1.0/
gdeploy is a deployment tool that helps in:
* Setting up backends for GlusterFS.
* Creating a volume.
* Adding a brick to the volume.
* Removing a
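If I read the announcement right, gdeploy is driven by a single
configuration file; usage is along these lines (the section and key names
below are only a sketch, so treat them as assumptions and consult the
gdeploy docs for the exact format):

[hosts]
10.0.0.1
10.0.0.2

[volume]
action=create
volname=glustervol
replica=yes
replica_count=2

gdeploy -c gluster.conf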
On 21/09/15 13:56, Tiemen Ruiten wrote:
Hello Soumya, Kaleb, list,
This Friday I created the gluster_shared_storage volume manually. I
just tried it with the command you supplied, but both have the same
result:
from etc-glusterfs-glusterd.vol.log on the node where I issued the
command:
On 09/21/2015 03:09 PM, Davy Croonen wrote:
Ravi
Thanks for your quick reply.
I didn't solve the split-brain on the file because I don't know which
directory this gfid refers to (due to the implementation of our
application we have multiple directories containing the same files).
Gluster users,
We have a multiple node setup where each server has a single XFS brick
(underlying storage is hardware battery backed raid6). Are there any
issues creating multiple gluster volumes using the same underlying bricks
from a performance or management standpoint?
or would it be better
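One common layout, sketched below, gives each volume its own directory on
the shared XFS filesystem so the volumes stay independent at the brick
level (hostnames and paths are hypothetical):

mkdir -p /data/glusterfs/vol1/brick /data/glusterfs/vol2/brick
gluster volume create vol1 replica 2 server1:/data/glusterfs/vol1/brick server2:/data/glusterfs/vol1/brick
gluster volume create vol2 replica 2 server1:/data/glusterfs/vol2/brick server2:/data/glusterfs/vol2/brick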
Hello Soumya, Kaleb, list,
This Friday I created the gluster_shared_storage volume manually. I just
tried it with the command you supplied, but both have the same result:
from etc-glusterfs-glusterd.vol.log on the node where I issued the command:
[2015-09-21 07:59:47.756845] I [MSGID: 106474]
Hi all
For an as-yet unknown reason, the command "gluster volume heal public
info" shows a lot of the following entries:
/Doc1_LOUJA.htm - Is in split-brain.
The part after the / differs but the gfid is always the same, so I suppose
this gfid refers to a directory. Now considering
On 09/21/2015 01:18 PM, Mark Ruys wrote:
Hi,
I'm having trouble with two bricks in replica (on two servers, file1 and
file2). They fail to perform a heal:
file1:~$ sudo gluster volume heal GLUSTER-SHARE
Launching Heal operation on volume GLUSTER-SHARE has been unsuccessful
file2:~$ sudo gluster
On 09/21/2015 02:32 PM, Davy Croonen wrote:
Hi all
For an as-yet unknown reason, the command "gluster volume heal public
info" shows a lot of the following entries:
/Doc1_LOUJA.htm - Is in split-brain.
The part after the / differs but the gfid is always the same, so I suppose this