Ideally the same node shouldn't have multiple UUIDs, but that is unfortunately
the case in your setup. I do not see any straightforward scenario that could
lead to this issue. What Gluster version are you using?
-Atin
Sent from one plus one
On Sep 19, 2015 4:40 AM, "John Casu" wrote:
> Hi,
>
> we're seeing m
Hi,
The underlying filesystem you are using appears to be ZFS.
I don't have much experience with ZFS. You may want to check this link:
http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
As far as the error is concerned, Gluster is trying to use the stat call to
get inode details.
(stat c
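The stat call in question can be reproduced by hand. A minimal sketch, using a scratch file rather than a real brick path (the file here is purely illustrative):

```shell
# Create a scratch file and read the inode details the brick process
# would obtain via stat(2); this works on any local POSIX filesystem.
f=$(mktemp)
stat --format='inode=%i type=%F links=%h' "$f"
rm -f "$f"
```

If stat fails with an I/O error on a brick path, that usually points at the underlying filesystem rather than at Gluster itself.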
On 09/19/2015 01:35 AM, Bidwell, Christopher wrote:
I can do a generalized setup of gluster, but I'm not sure how to read
this error and how to fix it. Can anyone provide some assistance?
[2015-09-18 19:56:27.169245] E [MSGID: 108008]
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_
Thanks for your response.
I tried your suggestion.
root@node1:~/gluster_ec# mount
...
10gnode1:disp on /mnt/gluster type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)
root@node1:~/gluster_ec# !dd
dd if=/dev/zero of=/mnt/gluster/test.io.1 bs=1024k count=1024 oflag=direct
dd
also, gluster volume status takes *forever* to run, & returns no useful info
On 9/18/15 4:10 PM, John Casu wrote:
Hi,
we're seeing multiple entries for nodes in pool list & peer status, each
associated with a unique UUID.
Filesystem seems to be working, but we need to clean this up in anticip
Hi,
we're seeing multiple entries for nodes in pool list & peer status, each
associated with a unique UUID.
Filesystem seems to be working, but we need to clean this up in anticipation of
any future issues.
The nodes with multiple UUIDs were previously disconnected, but they're good
now.
Doe
I am new to gluster and I am evaluating using it for a project. One of the
things I am looking at is various failure scenarios and how gluster deals
with them.
As far as I can tell, gluster is supposed to recover from a hung filesystem
brick. I see that storage.health-check-interval is set to 30
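For reference, that option can be inspected and tuned per volume. A sketch, assuming a volume named `testvol` (the volume name is hypothetical):

```shell
# Show the current health-check interval (seconds); 30 is the default.
gluster volume get testvol storage.health-check-interval

# Raise it to 60 seconds, or set it to 0 to disable the checker entirely.
gluster volume set testvol storage.health-check-interval 60
```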
Hello,
I am trying, so far in vain, to set up geo-replication on version 3.7.4 of
GlusterFS, but it still does not seem to work. I have at least managed to run
georepsetup successfully using the following command:
georepsetup reptest gfsgeo@gfs1geo reptest
But as soon as I run:
gluster volume g
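For context, after a successful setup the session is normally started and checked with commands along these lines (master/slave names taken from the georepsetup invocation above; the exact form is an assumption based on the 3.7 CLI):

```shell
# Start the geo-replication session and check its state.
gluster volume geo-replication reptest gfsgeo@gfs1geo::reptest start
gluster volume geo-replication reptest gfsgeo@gfs1geo::reptest status
```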
Hello,
I know KVM runs fine from it, but Windows integration is non-existent. AFAIK
the best you could do is export the volume via Samba and try it that way.
You'll probably need "strict allocate = yes" in the samba conf.
Personally, seeing how you're already lost to Microsoft, you should try t
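A sketch of the relevant smb.conf share section (the share name and path are hypothetical):

```
[gluster-vm]
    path = /mnt/glustervol
    read only = no
    # Windows clients expect writes to allocate real blocks; strict
    # allocate forces full allocation instead of sparse files, which
    # avoids surprises with free-space reporting on the Gluster mount.
    strict allocate = yes
```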
I can do a generalized setup of gluster, but I'm not sure how to read this
error and how to fix it. Can anyone provide some assistance?
[2015-09-18 19:56:27.169245] E [MSGID: 108008]
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
0-MAGWEB-replicate-0: Gfid mismatch detecte
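When AFR reports a GFID mismatch, a common first step is to compare the trusted.gfid xattr of the offending entry on each brick. A sketch; the brick paths and file name below are placeholders:

```shell
# Run on each brick server; paths are hypothetical examples.
getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
getfattr -n trusted.gfid -e hex /bricks/brick2/path/to/file
# If the hex values differ, that entry is the one in conflict.
```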
This was discussed several years ago on the list. The described approach worked
for me when I needed to replace a brick.
http://www.gluster.org/pipermail/gluster-users/2012-October/011502.html
-Grant
On Tue, Sep 15, 2015 at 4:01 AM, Александр wrote:
> Hello!
>
>
> We have glusterfs (3.6.5) di
Hi Tiemen,
One of the prerequisites before setting up nfs-ganesha HA is to create
and mount the shared_storage volume. Use the CLI below for that:
"gluster volume set all cluster.enable-shared-storage enable"
It will create the volume and mount it on all the nodes (including the
arbiter node). Note th
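Once enabled, the mount can be verified on each node; a small sketch (the mount point matches the path mentioned elsewhere in this thread):

```shell
# The shared volume is mounted at a fixed path by the hook script.
mount | grep gluster_shared_storage
df -h /var/run/gluster/shared_storage
```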
Hello Kaleb,
I don't:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="rd-ganesha-ha"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="iron"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select on
On 09/18/2015 09:46 AM, Tiemen Ruiten wrote:
> Hello,
>
> I have a Gluster cluster with a single replica 3, arbiter 1 volume (so
> two nodes with actual data, one arbiter node). I would like to setup
> NFS-Ganesha HA for this volume but I'm having some difficulties.
>
> - I needed to create a dir
Hello,
I have a Gluster cluster with a single replica 3, arbiter 1 volume (so two
nodes with actual data, one arbiter node). I would like to setup
NFS-Ganesha HA for this volume but I'm having some difficulties.
- I needed to create a directory /var/run/gluster/shared_storage manually
on all node
Dear Paul,
I appreciate your reply.
I am worried about ENOSPC on XFS and metadata corruption in dm-thin.
I will test it. ( https://lwn.net/Articles/592645/ )
Thanks a lot!
- hgichon
2015-09-18 11:39 GMT+09:00 Paul Cuzner :
> Hi
>
> dm-thinp is fully supported in RHEL6 and RHEL7 (and presumab
Hello!
We have a glusterfs (3.6.5) distributed volume and want to move one or
several bricks from one server to another (and migrate the data). What is
the proper way to do this, since the "replace-brick start" command is
deprecated now? Is there a way to do this without increasing the replica count for
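Since "replace-brick start" is deprecated for data migration, the commonly recommended alternative is add-brick followed by remove-brick start, which drains data off the old brick. A sketch; the volume name, servers, and brick paths are placeholders:

```shell
# Add the new brick, then drain and remove the old one.
gluster volume add-brick myvol newserver:/bricks/b1
gluster volume remove-brick myvol oldserver:/bricks/b1 start

# Watch progress, then commit once status shows "completed".
gluster volume remove-brick myvol oldserver:/bricks/b1 status
gluster volume remove-brick myvol oldserver:/bricks/b1 commit
```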
Hi all,
a little later than usual, but here are the meeting minutes of last
Wednesday's meeting.
The agenda for next week has been prepared, and anyone is free to add
more topics to the Open Floor section around line 60 of the etherpad:
https://public.pad.fsfe.org/p/gluster-community-meetings